# Cross-Lingual Named Entity Recognition Using Parallel Corpus: A New Approach
Using XLM-RoBERTa Alignment
Bing Li, Yujie He, Wenjin Xu
Microsoft
###### Abstract
We propose a novel approach to zero-shot cross-lingual Named Entity Recognition (NER) transfer using parallel corpora. We build an entity alignment model on top of XLM-RoBERTa to project the _entities_ detected in the English half of the parallel data onto the target-language sentences; its accuracy surpasses that of all previous unsupervised models. With the alignment model we obtain a pseudo-labeled NER data set in the target language to train a task-specific model. Unlike translation-based methods, this approach benefits from the natural fluency and nuances of original target-language text. We also propose a modified loss function that is similar to focal loss but assigns weights in the opposite direction, to further improve training on the noisy pseudo-labeled data. We evaluated the proposed approach on 4 target languages using benchmark data sets and obtained competitive F1 scores compared with the most recent SOTA models. We also discuss the impact of parallel corpus size and domain on the final transfer performance.
## 1 Introduction
Named entity recognition (NER) is a fundamental task in natural language processing, which seeks to classify words in a sentence into predefined semantic types. Because the ground-truth labels exist at the word level, supervised training of NER models often requires a large amount of human annotation effort. In real-world use cases where one needs to build multilingual models, the required human labor scales at least linearly with the number of languages, or even worse for low-resource languages. Cross-lingual transfer for Natural Language Processing (NLP) tasks has been widely studied in recent years Conneau et al. (2018); Kim et al. (2017); Ni et al. (2017); Xie et al. (2018); Ni and Florian (2019); Wu and Dredze (2019); Bari et al. (2019); Jain et al. (2019), in particular zero-shot transfer, which leverages advances in a high-resource language such as English to benefit other low-resource languages. In this paper, we focus on cross-lingual transfer of the NER task, and more specifically on using parallel corpora and pretrained multilingual language models such as mBERT Devlin (2018) and XLM-RoBERTa (XLM-R) Lample and Conneau (2019); Conneau et al. (2020).
Our motivations are threefold. (1) Parallel corpora are a great resource for transfer learning and are abundant for many language pairs. Some recent research focuses on completely unsupervised machine translation (e.g. word alignment Conneau et al. (2017)) for cross-lingual NER; however, inaccurate translations can harm transfer performance. For example, in the word-to-word translation approach, word ordering may not be preserved, and such gaps in translation quality can hurt model performance on downstream tasks. (2) A method can still add business value even if it only works for major languages with sufficient parallel corpora, as long as its performance is satisfactory. It is a common situation in industry practice that a heavily customized task needs to be extended to major markets without annotating large amounts of data in other languages. (3) Previous attempts using parallel corpora are mostly based on heuristics and statistical models Jain et al. (2019); Xie et al. (2018); Ni et al. (2017); recent breakthroughs in multilingual language models have not yet been applied to this scenario. Our work bridges this gap and revisits the topic with new technology.
We propose a novel semi-supervised method for cross-lingual NER transfer, bridged by a parallel corpus. First, we train an NER model on a source-language data set, in this case English, assuming that we have labeled task-specific data. Second, we label the English half of the parallel corpus with this model. Then, we project the recognized entities onto the target language, i.e., label the span of the same entity in the target-language half of the parallel corpus. In this step we leverage the recent XLM-R model Lample and Conneau (2019); Conneau et al. (2020), which is a major distinction between our work and previous attempts. Lastly, we use this pseudo-labeled data to train the task-specific model in the target language directly. For the last step we explored continuing training from a multilingual model fine-tuned on English NER data, to maximize the benefits of model transfer. We also tried a series of methods to mitigate the noisy-label issue inherent in this semi-supervised approach.
The main contributions of this paper are as follows:
* •
We leverage the powerful multilingual model XLM-R for entity alignment. It is trained in a supervised manner with easy-to-collect data, in sharp contrast to previous attempts that mainly rely on unsupervised methods and human-engineered features.
* •
Pseudo-labeled data sets typically contain a lot of noise, so we propose a novel loss function inspired by focal loss Lin et al. (2017). Instead of using the original focal loss, we go in the opposite direction and weight hard examples less, as those are more likely to be noise.
* •
By leveraging existing natural parallel corpora we obtained competitive F1 scores for NER transfer in multiple languages. We also showed that the domain of the parallel corpus is critical for effective transfer.
## 2 Related Work
There are different ways to conduct zero-shot multilingual transfer, falling into two broad categories: model-based transfer and data-based transfer. Model-based transfer often uses the source language to train an NER model with language-independent features, then directly applies the model to the target language for inference Wu and Dredze (2019); Wu et al. (2020). Data-based transfer focuses on combining a source-language task-specific model, translations, and entity projection to create weakly supervised training data in the target language. Previous attempts include annotation projection on aligned parallel corpora or on translations between a source and a target language Ni et al. (2017); Ehrmann et al. (2011), or utilizing Wikipedia hyperlink structure to obtain anchor text and context as weak labels Al-Rfou et al. (2015); Tsai et al. (2016). Different variants of annotation projection exist; e.g., Ni et al. (2017) used a maximum entropy alignment model and data selection to project English annotated labels onto parallel target-language sentences. Other work used bilingual mapping combined with lexical heuristics, or used embedding approaches to perform word-level translation, from which annotation projection naturally follows Mayhew et al. (2017); Xie et al. (2018); Jain et al. (2019). This translation + projection approach is used not only in NER but in other NLP tasks as well, such as relation extraction Kumar (2015). The translation + projection approach has obvious limitations: word- or phrase-based translation makes annotation projection easier, but sacrifices native fluency and language nuances. In addition, orthographic and phonetic features for entity matching may only be applicable to similar languages and require extensive human-engineered features. To address these limitations, we propose a novel approach that combines machine translation training data with a pretrained multilingual language model for entity alignment and projection.
## 3 Model Design
We describe the entity alignment model component and the full training pipeline of our work in the following sections.
### 3.1 Entity Alignment Model
Translation from a source language to a target language may break word ordering; an alignment model is therefore needed to project entities from the source-language sentence onto the target language, so that the source-language labels can be zero-shot transferred. In this work, we use the XLM-R series of models Lample and Conneau (2019); Conneau et al. (2020), which introduced the translation language model (TLM) pretraining task. TLM trains the model to predict a masked word using information from both the context and the parallel sentence in another language, giving the model strong cross-lingual and potentially alignment capabilities.
Our alignment model's input is constructed by concatenating the English name of the entity and the target-language sentence as segments A and B of the input sequence, respectively. For each token of segment B, the model predicts 1 if the token is inside the translated entity and 0 otherwise. This formulation transforms entity alignment into a token classification task. An implicit assumption is that the entity name remains a consecutive phrase after translation. The model structure is illustrated in Fig 1.
Figure 1: Entity alignment model. The query entity on the left is 'Cologne', and the German sentence on the right is 'Köln liegt in Deutschland' ('Cologne is located in Germany' in English); 'Köln' is the German translation of 'Cologne'. The model predicts the word span aligned with the query entity.
### 3.2 Cross-Lingual Transfer Pipeline
Fig 2 shows the whole training/evaluation pipeline, which includes 5 stages: (1) Fine-tune a pretrained language model on CoNLL2003 Sang and Meulder (2003) to obtain an English NER model; (2) Infer labels for the English sentences in the parallel corpus; (3) Run the entity alignment model from the previous subsection to find the detected English entities in the target language; examples that fail are filtered out during the alignment process; (4) Fine-tune the multilingual model with the data generated in step (3); (5) Evaluate the new model on the target-language test sets.
Figure 2: Training pipeline diagram. Yellow pages represent English documents and light blue pages represent German documents. Steps 1 and 5 use the original CoNLL data for training and testing respectively; steps 2, 3, and 4 use machine translation data from the OPUS website. The pretrained model is either mBERT or XLM-R. The final model is first fine-tuned on the English NER data set and then on the target-language pseudo-labeled NER data set.
## 4 Experiments And Discussions
### 4.1 Parallel Corpus
In our method, we leverage the availability of large-scale parallel corpora to transfer the NER knowledge obtained in English to other languages. Existing parallel corpora are easier to obtain than annotated NER data. We used parallel corpora crawled from the OPUS website Tiedemann (2012). In our experiments, we used the following data sets:
* •
Ted2013: consists of volunteer transcriptions and translations from the TED
web site and was created as training data resource for the International
Workshop on Spoken Language Translation 2013.
* •
OpenSubtitles: a new collection of translated movie subtitles that contains 62
languages. Lison and Tiedemann (2016)
* •
WikiMatrix: Mined parallel sentences from Wikipedia in different languages.
Only pairs with scores above 1.05 are used. Schwenk et al. (2019)
* •
UNPC: Manually translated United Nations documents from 1994 to 2014. Ziemski
et al. (2016)
* •
Europarl: A parallel corpus extracted from the European Parliament web site.
Koehn (2005)
* •
WMT-News: A parallel corpus of News Test Sets provided by WMT for training SMT that contains 18 languages (http://www.statmt.org/wmt19/translation-task.html).
* •
NewsCommentary: A parallel corpus of News Commentaries provided by WMT for training SMT, which contains 12 languages (http://opus.nlpl.eu/News-Commentary.php).
* •
JW300: Mined parallel sentences from the magazines Awake! and Watchtower. Agić and Vulić (2019)
In this work, we focus on 4 languages: German, Spanish, Dutch and Chinese. We randomly selected data points from all the data sets above with equal weights. There may be slight differences in data distribution between languages due to data availability and relevance.
### 4.2 Alignment Model Training
The objective of the alignment model is to find the entity in a foreign sentence given its English name. We feed the English name and the sentence as segments A and B into the XLM-R model Lample and Conneau (2019); Conneau et al. (2020). Unlike the NER task, the alignment task has no requirement for label completeness, since only one entity needs to be labeled in each training example. The training data set can be created from Wikipedia documents, where anchor text in hyperlinks naturally indicates the location of entities, and the English entity name can be obtained by linking through the Wikipedia entity Id. An alternative way to get the English name for mentions in another language is through a state-of-the-art translation system. We took the latter approach for simplicity and used Microsoft's Azure Cognitive Services for the translation.
During training, we also added negative examples with fake English entities that do not appear in the other language's sentence. The intuition is to force the model to focus on the English entity (segment A) and its translation, instead of performing pure NER and picking out any entity in the other language's sentence (segment B). We also added examples of noun phrases and nominal entities to make the model more robust.
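A minimal sketch of this negative-example augmentation, with illustrative names; the sampling scheme is an assumption, and a fuller version would also verify that the fake entity's translation does not occur in the sentence.

```python
import random

def add_negatives(examples, entity_pool, neg_ratio=0.25, seed=0):
    """Augment alignment training data with negative pairs.

    Each example is (english_entity, target_tokens, token_labels). For a
    sampled subset, the true entity is replaced by a different one from
    entity_pool and the labels become all-zero, teaching the model to
    predict "no span" when segment A has no translation in segment B.
    """
    rng = random.Random(seed)
    n_neg = int(len(examples) * neg_ratio)
    negatives = []
    for entity, tokens, _labels in rng.sample(examples, n_neg):
        # illustrative: does not re-check that `fake` is absent from `tokens`
        fake = rng.choice([e for e in entity_pool if e != entity])
        negatives.append((fake, tokens, [0] * len(tokens)))
    return examples + negatives
```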
We generated a training set of 30K samples with 25% negatives for each language and trained an XLM-R-large model Conneau et al. (2020) for 3 epochs with batch size 64. The initial learning rate was 5e-5, and the other hyperparameters were the defaults from the HuggingFace Transformers library for the token classification task. Precision/recall/F1 on the held-out test set reached 98%. Model training was done on 2 Tesla V100 GPUs and took about 20 minutes.
### 4.3 Cross-Lingual Transfer
| Model | DE P | DE R | DE F1 | ES P | ES R | ES F1 | NL P | NL R | NL F1 | ZH P | ZH R | ZH F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Bari et al. (2019) | - | - | 65.24 | - | - | 75.9 | - | - | 74.6 | - | - | - |
| Wu and Dredze (2019) | - | - | 71.1 | - | - | 74.5 | - | - | 79.5 | - | - | - |
| Moon et al. (2019) | - | - | 71.42 | - | - | 75.67 | - | - | 80.38 | - | - | - |
| Wu et al. (2020) | - | - | 73.16 | - | - | 76.75 | - | - | 80.44 | - | - | - |
| Wu et al. (2020) | - | - | 73.22 | - | - | 76.94 | - | - | 80.89 | - | - | - |
| Wu et al. (2020) | - | - | 74.82 | - | - | 79.31 | - | - | 82.90 | - | - | - |
| mBERT zero-transfer | 67.6 | 77.4 | 72.1 | 72.4 | 78.2 | 75.2 | 77.8 | 79.3 | 78.6 | 64.1 | 65.0 | 64.6 |
| mBERT fine-tune | 73.1 | 76.2 | 74.6 (+2.5) | 77.7 | 77.6 | 77.6 (+2.4) | 80.5 | 76.7 | 78.6 (+0.0) | 80.8 | 63.3 | 71.0 (+6.4) |
| XLM-R zero-transfer | 67.9 | 79.8 | 73.4 | 79.8 | 81.9 | 80.8 | 82.2 | 80.3 | 81.2 | 68.7 | 65.5 | 67.1 |
| XLM-R fine-tune | 75.5 | 78.4 | 76.9 (+3.5) | 76.9 | 81.0 | 78.9 (-1.9) | 81.2 | 78.4 | 79.7 (-1.5) | 77.8 | 65.9 | 71.3 (+4.2) |
Table 1: Cross-lingual transfer results on German, Spanish, Dutch and Chinese. Experiments were done with both the mBERT and XLM-RoBERTa models. For each, we compare the zero-transfer result (trained on CoNLL2003 English only) and the fine-tuned result using zero-transfer pseudo-labeled target-language NER data. The test sets for German, Spanish and Dutch are from CoNLL2003 and CoNLL2002; the People's Daily data set is used for Chinese.
We used the CoNLL2003 Sang and Meulder (2003) and CoNLL2002 (http://lcg-www.uia.ac.be/conll2002/ner/) data sets to test our cross-lingual transfer method for German, Spanish and Dutch. We ignored the training sets in those languages and only evaluated our model on the test sets. For Chinese, we used People's Daily (https://github.com/zjy-ucas/ChineseNER) as the major evaluation set, and we also report numbers on the MSRA Levow (2020) and Weibo Peng and Dredze (2015) data sets in the next section. One notable difference of the People's Daily data set is that it only covers three entity types, LOC, ORG and PER, so we suppressed the MISC type from English during the transfer by training the English NER model with 'MISC' marked as 'O'.
To enable cross-lingual transfer, we first trained an English teacher model on the CoNLL2003 EN training set with XLM-R-large as the base model, training with focal loss Lin et al. (2017) for 5 epochs. We then ran inference with this model on the English part of the parallel data. Finally, with the alignment model, we projected the entity labels onto the other languages. To ensure the quality of the target-language training data, we discarded examples in which any English entity failed to map to tokens in the target language. We also discarded examples with overlapping target entities, because these cause conflicting token labels. Furthermore, when one entity maps to multiple spans, we keep the example only if all the target mention phrases are identical; this accommodates the situation where the same entity is mentioned more than once in a sentence.
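The three filtering rules can be sketched as a predicate over one projected example; the data layout (a mapping from each detected English entity to its candidate token spans) is an assumption for illustration.

```python
def keep_example(projections, tokens):
    """Decide whether one projected example survives the three filters.

    projections maps each detected English entity to a list of
    (start, end) token spans in the target sentence; an empty list means
    the alignment model failed to find the entity.
    """
    spans = []
    for _entity, entity_spans in projections.items():
        if not entity_spans:
            return False  # filter 1: an entity failed to project
        mentions = {" ".join(tokens[s:e]) for s, e in entity_spans}
        if len(mentions) > 1:
            return False  # filter 3: multiple, non-identical mentions
        spans.extend(entity_spans)
    spans.sort()
    for (_s1, e1), (s2, _e2) in zip(spans, spans[1:]):
        if s2 < e1:
            return False  # filter 2: overlapping target entities
    return True
```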
As the last step, we fine-tuned the multilingual model pretrained on the English data set, with the lower n (0, 3, 6, etc.) layers frozen, on the target-language pseudo-labeled data. We used both mBERT Devlin et al. (2019); Devlin (2018) and XLM-R Conneau et al. (2020) with about 40K training samples for 1 epoch. The results are shown in Table 1. All inference, entity projection and model training experiments were done on 2 Tesla V100 32G GPUs, and the whole pipeline takes about 1-2 hours. All numbers are reported as the average of 5 random runs with the same settings.
For the loss function we used a variant of focal loss Lin et al. (2017) with the opposite weight assignment. Focal loss was designed to weight hard examples more, an intuition that holds only when the training data is clean. In scenarios such as this cross-lingual transfer task, the pseudo-labeled training data contains a lot of noise propagated from earlier stages of the pipeline, so 'hard' examples are more likely to be errors or outliers and can hurt training. We therefore go in the opposite direction and lower their weights instead, so that the model can focus on the less noisy labels. More specifically, we weight the regular cross-entropy loss by $(1+p_{t})^{\gamma}$ instead of $(1-p_{t})^{\gamma}$; for the hyper-parameter $\gamma$ we experimented with values from 1 to 5 and found that 4 works best.
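The resulting per-token loss can be written down directly; a minimal scalar sketch, where p_t is the model's probability for the true class:

```python
import math

def reweighted_loss(p_t, gamma=4.0):
    """Cross entropy weighted by (1 + p_t)^gamma.

    Focal loss uses (1 - p_t)^gamma, which up-weights hard examples;
    flipping the sign inside the weight instead gives confident (easy)
    examples relatively more weight, so likely-noisy hard examples
    contribute less to the gradient.
    """
    return -((1.0 + p_t) ** gamma) * math.log(p_t)
```

With gamma = 0 this reduces to the regular cross-entropy loss.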
From Table 1, we see that for the mBERT model, fine-tuning with pseudo-labeled data has significant effects on all languages except NL. The largest improvement is in Chinese, a 6.4% increase in F1 over the zero-transfer result; the increase is 2.5% for German and 2.4% for Spanish. The same experiment with the XLM-R model shows a different pattern: F1 increased by 3.5% for German but dropped slightly for Spanish and Dutch after fine-tuning. For Chinese, we see an improvement of 4.2%, comparable to that of mBERT. The negative result for Spanish and Dutch is probably because XLM-R already has very good pretraining and language alignment for these European languages, as the high zero-transfer numbers suggest, so a relatively noisy data set did not bring much gain. In contrast, Chinese is linguistically distant, so the added value of task-specific fine-tuning with natural data is larger.
Another pattern we observed in Table 1 is that across all languages, the fine-tuning step with pseudo-labeled data benefits precision more than recall: we saw a consistent improvement in precision but a small drop in recall in most cases.
## 5 Discussions
### 5.1 Impact of the amount of parallel data
One advantage of the parallel corpus method is its high data availability compared to the supervised approach. A natural question is therefore whether more data benefits the cross-lingual transfer task. To answer this, we ran a series of experiments varying the number of training examples from 5K to 200K: the model's F1 score increases with the amount of data at first and plateaus around 40K. All numbers in Table 1 are reported for training on a generated data set of about 40K sentences. One possible explanation for the plateau is the propagated error in the pseudo-labeled data set. Domain mismatch may also limit the effectiveness of transfer between languages; we discuss this further in the next section.
### 5.2 Impact of the domain of parallel data
Figure 3: F1 scores evaluated on three data sets using parallel data from different domains. The blue column on the left is the result of zero-shot model transfer. To its right are the F1 scores for 3 different domains and for all domains combined.
Findings from the machine translation community show that the quality of neural machine translation models usually depends strongly on the domain they were trained on, and a model's performance can drop significantly when evaluated on a different domain Koehn and Knowles (2017). A similar observation explains the challenges of cross-lingual NER transfer. In NER transfer, the first domain mismatch comes from the natural gap in entity distributions between corpora of different languages: many entities only live inside the ecosystem of a specific group of languages and may not translate naturally to others. The second domain mismatch is between the parallel corpora and the NER data set: the English model might not have good domain adaptation ability and could perform well on the CoNLL2003 data set but poorly on the parallel data.
| Domain | PER | ORG | LOC | All |
|---|---|---|---|---|
| OpenSubtitles | 24,036 | 3,809 | 5,196 | 33,041 |
| UN | 1,094 | 25,875 | 12,718 | 39,687 |
| News | 10,977 | 9,568 | 28,168 | 48,713 |
| All Domains Combined | 10,454 | 14,269 | 17,412 | 42,135 |
Table 2: Entity count by type in the pseudo-labeled Chinese NER training data set. We list multiple domains extracted from different parallel data sources; All Domains Combined is a combination of all resources.
To study the impact of the parallel data domain on transfer performance, we experimented on Chinese with parallel data from different domains. We picked three representative data sets from OPUS Tiedemann (2012), OpenSubtitles, UN (containing UN and UNPC) and News (containing WMT and News-Commentary), plus one with all three combined. OpenSubtitles comes from movie subtitles, and its language style is informal and conversational. UN consists of United Nations reports, whose style is more formal and political. News data comes from newspapers, and its content is more diverse and closer to the CoNLL data sets. We evaluated F1 on three Chinese test sets, Weibo, MSRA and People's Daily, where Weibo contains social media messages, MSRA is more formal and political, and People's Daily consists of newspaper articles. From Fig 3 we see that OpenSubtitles performs best on Weibo but poorly on the other two; conversely, UN performs worst on Weibo but better on the other two. The News domain performs best on People's Daily, which is consistent with intuition since both are newspaper articles. The all-domains-combined approach performs decently on all three test sets.
Different data domains differ considerably in entity density and distribution; for example, OpenSubtitles contains more sentences without any entity. In the experiments above we filtered the data to keep the same ratio of 'empty' sentences across all domains. We also examined differences in the entity type distributions. In Table 2, we report entity counts by type and domain: OpenSubtitles has very few ORG and LOC entities, whereas UN has very few PER entities; News and the all-domain data are more balanced.
Figure 4: F1 score by type evaluated on the People's Daily data set. As in Fig 3, we compare results using parallel data from different domains for the NER transfer.
In Fig 4, we show the per-type evaluation on People's Daily, to understand how the parallel data domain impacts transfer performance for different entity types. News data performs best on all types, and Subtitles performs very poorly on ORG. These observations are consistent with the type distributions in Table 2.
## 6 Ablation Study
To better understand the contribution of each stage of the training process, we conducted an ablation study on the German data set with the mBERT model. We compared 6 different settings: (1) the approach proposed in this work, i.e. fine-tuning the English model on pseudo-labeled data with the new loss, denoted re-weighted (RW) loss; (2) zero-transfer, i.e. direct model transfer trained only on English NER data; (3) fine-tuning the English model on pseudo-labeled data with the regular cross-entropy (CE) loss; (4) skipping the English fine-tuning and directly fine-tuning mBERT on pseudo-labeled data; (5)(6) fine-tuning the model directly on mixed English and pseudo-labeled data simultaneously, with the RW and CE losses respectively.
| Setting | Test P | Test R | F1 |
|---|---|---|---|
| Sequential fine-tune with RW | 73.1 | 76.2 | 74.6 |
| Zero-transfer | 67.6 | 77.4 | 72.1 |
| Sequential fine-tune with CE | 71.2 | 74.3 | 72.7 |
| Skip fine-tune on English | 71.0 | 73.6 | 72.3 |
| Fine-tune on En/De mixed (CE) | 73.1 | 75.9 | 74.5 |
| Fine-tune on En/De mixed (RW) | 65.5 | 42.6 | 51.6 |
Table 3: Ablation study results evaluated on the CoNLL2003 German NER data. All experiments used the mBERT-base model. RW denotes the re-weighted loss proposed in this paper; CE denotes the regular cross-entropy loss.
From Table 3, we see that both pretraining on English and fine-tuning with pseudo-labeled German data are essential for the best score. The RW loss performed better in sequential fine-tuning than in simultaneous training on mixed English and German data. This is probably because the noise fraction in the English training set is much smaller than in the German pseudo-labeled training set, so using the RW loss on the English data fails to exploit the fine-grained information in hard examples and yields an insufficiently optimized model. Another observation is that training mBERT on a combination of English and German with the cross-entropy loss achieves almost the same score as our best model, which is trained in two stages.
## 7 Conclusion
In this paper, we proposed a new method for NER cross-lingual transfer with parallel corpora. By leveraging the XLM-R model for entity projection, we make the whole pipeline automatic and free of human-engineered features or data, so that it can be applied at no extra cost to any other language with rich translation resources. The method also has the potential to be extended to other NLP tasks, such as question answering. We thoroughly tested the new method on four languages and found it most effective for Chinese. We also discussed the impact of the parallel data domain on NER transfer performance and found that a combination of parallel corpora from different domains yielded the best average results. We further verified the contribution of the pseudo-labeled parallel data through an ablation study. In the future we will improve the alignment model's precision and explore alternative transfer methods such as self-teaching instead of direct fine-tuning. We are also interested in how the proposed approach generalizes to cross-lingual transfer for other NLP tasks.
## References
* Ni et al. (2017) Jian Ni, Georgiana Dinu, and Radu Florian. 2017. Weakly supervised cross-lingual named entity recognition via effective annotation and representation projection. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1470–1480. Association for Computational Linguistics.
* Kim et al. (2017) Joo-Kyung Kim, Young-Bum Kim, Ruhi Sarikaya, and Eric Fosler-Lussier. 2017. Cross-lingual transfer learning for POS tagging without cross-lingual resources. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_ , pages 2832–2838, Copenhagen, Denmark. Association for Computational Linguistics.
* Conneau et al. (2018) Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 2475–2485, Brussels, Belgium. Association for Computational Linguistics.
* Xie et al. (2018) Jiateng Xie, Zhilin Yang, Graham Neubig, Noah A. Smith, and Jaime Carbonell. 2018\. Neural cross-lingual named entity recognition with minimal resources. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 369–379, Brussels, Belgium. Association for Computational Linguistics.
* Ni and Florian (2019) Jian Ni and Radu Florian. 2019. Neural cross-lingual relation extraction based on bilingual word embedding mapping. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 399–409, Hong Kong, China. Association for Computational Linguistics.
* Lin et al. (2017) Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. 2017. Focal loss for dense object detection. _arXiv preprint arXiv:1708.02002_.
* Wu and Dredze (2019) Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 833–844, Hong Kong, China. Association for Computational Linguistics.
* Jain et al. (2019) Alankar Jain, Bhargavi Paranjape, and Zachary C. Lipton. Entity projection via machine translation for cross-lingual NER. In EMNLP, pages 1083–1092, 2019.
* Bari et al. (2019) M Saiful Bari, Shafiq Joty, and Prathyusha Jwalapuram. Zero-resource cross-lingual named entity recognition. arXiv preprint arXiv:1911.09812, 2019.
* Sang and Meulder (2003) Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In _Proceedings of the Seventh Conference on Natural Language Learning, CoNLL 2003, Held in cooperation with HLT-NAACL 2003, Edmonton, Canada, May 31 - June 1, 2003_ , pages 142–147.
* Devlin (2018) Jacob Devlin. 2018. Multilingual bert readme document.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* Conneau et al. (2017) Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2017. Word translation without parallel data. _arXiv preprint arXiv:1710.04087_.
* Lample and Conneau (2019) Guillaume Lample and Alexis Conneau. 2019. Cross-lingual language model pretraining. _arXiv preprint arXiv:1901.07291_.
* Conneau et al. (2020) Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 8440–8451, Online. Association for Computational Linguistics.
* Tiedemann (2012) Jörg Tiedemann. 2012. Parallel Data, Tools and Interfaces in OPUS. In _Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC’12)_ , Istanbul, Turkey. European Language Resources Association (ELRA).
* Lison and Tiedemann (2016) Pierre Lison and Jörg Tiedemann. 2016. OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In _Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)_ , Paris, France. European Language Resources Association (ELRA).
* Schwenk et al. (2019) Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzmán. 2019. WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia. _arXiv preprint arXiv:1907.05791_.
* Ziemski et al. (2016) Michał Ziemski, Marcin Junczys-Dowmunt, and Bruno Pouliquen. 2016. The United Nations Parallel Corpus v1.0. In _Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)_ , Paris, France. European Language Resources Association (ELRA).
* Koehn (2005) Philipp Koehn. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. In _Conference Proceedings: the tenth Machine Translation Summit_ , pages 79–86, Phuket, Thailand. AAMT, AAMT.
* Agić and Vulić (2019) Željko Agić and Ivan Vulić. 2019. JW300: A Wide-Coverage Parallel Corpus for Low-Resource Languages. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 3204–3210, Florence, Italy. Association for Computational Linguistics.
* Ehrmann et al. (2011) Maud Ehrmann, Marco Turchi, and Ralf Steinberger. 2011. Building a multilingual named entity-annotated corpus using annotation projection. In Proceedings of Recent Advances in Natural Language Processing. Association for Computational Linguistics, pages 118–124. http://aclweb.org/anthology/R11-1017.
* Al-Rfou et al. (2015) Rami Al-Rfou, Vivek Kulkarni, Bryan Perozzi, and Steven Skiena. 2015. Polyglot-ner: Massive multilingual named entity recognition. In Proceedings of the 2015 SIAM International Conference on Data Mining. SIAM, Vancouver, British Columbia, Canada. https://doi.org/10.1137/1.9781611974010.66.
* Tsai et al. (2016) Chen-Tse Tsai, Stephen Mayhew, and Dan Roth. 2016. Cross-lingual named entity recognition via wikification. In _CoNLL_, pages 219–228.
* Mayhew et al. (2017) Stephen Mayhew, Chen-Tse Tsai, and Dan Roth. 2017. Cheap translation for cross-lingual named entity recognition. In _EMNLP_, pages 2526–2535.
* Faruqui and Kumar (2015) Manaal Faruqui and Shankar Kumar. 2015. Multilingual open relation extraction using cross-lingual projection. In _Proceedings of NAACL-HLT_, pages 1351–1356.
* Wu et al. (2020) Qianhui Wu, Zijia Lin, Guoxin Wang, Hui Chen, Börje F. Karlsson, Biqing Huang, and Chin-Yew Lin. 2020. Enhanced meta-learning for cross-lingual named entity recognition with minimal resources. In _AAAI_.
* Moon et al. (2019) Taesun Moon, Parul Awasthy, Jian Ni, and Radu Florian. 2019. Towards lingua franca named entity recognition with BERT. _arXiv preprint arXiv:1912.01389_.
* Wu et al. (2020) Qianhui Wu, Zijia Lin, Börje F. Karlsson, Jian-Guang Lou, and Biqing Huang. 2020. Single-/Multi-Source Cross-Lingual NER via Teacher-Student Learning on Unlabeled Data in Target Language. In _Association for Computational Linguistics_.
* Wu et al. (2020) Qianhui Wu, Zijia Lin, Börje F. Karlsson, Biqing Huang, and Jian-Guang Lou. 2020. UniTrans: Unifying Model Transfer and Data Transfer for Cross-Lingual Named Entity Recognition with Unlabeled Data. In _IJCAI 2020_.
* Levow (2006) Gina-Anne Levow. 2006. The Third International Chinese Language Processing Bakeoff: Word Segmentation and Named Entity Recognition. In _Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing_.
* Peng and Dredze (2015) Nanyun Peng and Mark Dredze. 2015. Named Entity Recognition for Chinese Social Media with Jointly Trained Embeddings. In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_.
* Koehn and Knowles (2017) Philipp Koehn and Rebecca Knowles. 2017. Six Challenges for Neural Machine Translation. In _Proceedings of the First Workshop on Neural Machine Translation_, pages 28–39.
Proc R Soc A
Subject areas: mathematical modeling, artificial intelligence, category theory
John D. Foley
# Operads for complex system design specification, analysis and synthesis
John D. Foley1 Spencer Breiner2 Eswaran Subrahmanian2,3 and John M. Dusel1
1Metron, Inc., 1818 Library St., Reston, VA, USA
2US National Institute of Standards and Technology, Gaithersburg, MD, USA
3Carnegie Mellon University, Pittsburgh, PA, USA. <EMAIL_ADDRESS>
###### Abstract
As the complexity and heterogeneity of a system grows, the challenge of
specifying, documenting and synthesizing correct, machine-readable designs
increases dramatically. Separation of the system into manageable parts is
needed to support analysis at various levels of granularity so that the system
is maintainable and adaptable over its life cycle. In this paper, we argue
that operads provide an effective knowledge representation to address these
challenges. Formal documentation of a syntactically correct design is built up
during design synthesis, guided by semantic reasoning about design
effectiveness. Throughout, the ability to decompose the system into parts and
reconstitute the whole is maintained. We describe recent progress in effective
modeling under this paradigm and directions for future work to systematically
address scalability challenges for complex system design.
###### keywords:
complex systems, system design, automated reasoning, compositionality, applied
category theory, operads
## 1 Introduction
We solve complex problems by separating them into manageable parts [2, 86].
Human designers do this intuitively, but details can quickly overwhelm
intuition. Multiple aspects of a problem may lead to distinct decompositions
and complementary models of a system–e.g. competing considerations for
cyberphysical systems [63, 87]–or simulation of behavior at many levels of
fidelity–e.g. in modeling and simulation [88]–leading to a spectrum of models
which are challenging to align. We argue that operads, formal tools developed
to compose geometric and algebraic objects, are uniquely suited to separate
complex systems into manageable parts and maintain alignment across
complementary models.
Figure 1: Separating concerns with operads: (1) composition separates subsystem designs (green boundaries); (2) functorial semantics separate abstract system designs, and the composing of designs, from computational model instances and the composing of models (red boundaries); (3) natural transformations separate and align complementary models ($\square$, $\diamond$) (blue boundary).
Operads provide three ways to separate concerns for complex systems: (1)
designs for subsystems are separated into composable modules; (2) syntactic
designs to compose systems are separated from the semantic data that model
them; and (3) separate semantic models can be aligned to evaluate systems in
different ways. The three relationships are illustrated in Figure 1.
Hierarchical decomposition (Fig. 1, boundary 1) is nothing new. Both products and
processes are broken down to solve problems from design to daily maintenance.
Operads provide a precise language to manage complex modeling details that the
intuitive–and highly beneficial–practice of decomposition uncovers, e.g.,
managing multiple, complementary decompositions and models.
Operads separate the syntax to compose subsystems from the semantic data
modeling them (Fig. 1, boundary 2). Syntax consists of abstract “operations” to design
the parts and architecture of a system. Semantics define how to interpret and
evaluate these abstract blueprints. Operad syntax is typically lightweight and
declarative. Operations can often be represented both graphically and
algebraically (Fig. 4), formalizing intuitive design diagrams. Operad
semantics model specific aspects of a system and can range from fast to
computationally expensive.
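The syntax/semantics split can be sketched in a few lines of Python (an illustrative toy, not from the paper; the operation names, cost figures and the `evaluate` helper are all invented for this example): abstract operations carry only a name and a typed interface, while a semantic model assigns each operation a concrete function, so composing designs corresponds to composing their interpretations.

```python
# Syntax: abstract operations, identified only by name and typed interface.
SYNTAX = {
    "combine": (["Sensor", "Radio"], "Platform"),  # input types -> output type
    "deploy":  (["Platform"], "System"),
}

# Semantics: one possible model assigns each operation a concrete function,
# here a crude cost estimate for the composed system.
COST_MODEL = {
    "combine": lambda sensor, radio: sensor + radio + 5,  # +5 integration cost
    "deploy":  lambda platform: platform * 2,             # deployment overhead
}

def evaluate(op_name, *args, model=COST_MODEL):
    """Interpret a syntactic operation in a chosen semantic model."""
    inputs, _output = SYNTAX[op_name]
    assert len(args) == len(inputs), "arity must match the typed interface"
    return model[op_name](*args)

# Composing designs (deploy after combine) composes their interpretations:
total = evaluate("deploy", evaluate("combine", 10, 3))  # 36
```

Swapping `COST_MODEL` for a different dictionary of functions reinterprets the same syntactic design under another semantics, which is separation (2) in Figure 1.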
The most powerful way operads separate is by aligning complementary models
while maintaining compatibility with system decompositions (Fig. 1, boundary 3).
Reconciling complementary models is a persistent and pervasive issue across
domains [84, 26, 29, 66, 63, 27, 79, 69]. Historically, Eilenberg & Mac Lane
[34] invented _natural transformations_ to align computational models of
topological spaces. Operads use natural transformations to align hierarchical
decompositions, which is particularly well-suited to system design.
This paper articulates a uniform and systematic foundation for system design
and analysis. In essence, the syntax of an operad defines _what can be_ put
together, which is a prerequisite to decide _what should be_ put together.
Interfaces define which designs are _syntactically_ feasible, but key
_semantic_ information must be expressed to evaluate candidate designs.
Formulating system models within operad theory enforces the intellectual
hygiene required to make sure that different concerns stay separated while
working together to solve complex design problems.
We note five strengths of this foundation that result from the three ways
operads separate a complex problem and key sections of the paper that provide
illustrations.
Expressive, unifying meta-language. A meta- or multi-modeling [18] language is
needed to express and relate multiple representations. The key feature of
operad-based meta-modeling is its focus on coherent mappings between models
(Fig. 1, –, –), as opposed to a universal modeling context, like UML, OWL,
etc., which is inevitably under or over expressive for particular use cases.
Unification allows complementary models to work in concert, as we see in Sec.
5 for function and control. Network operads—originally developed to design
systems—were applied to task behavior. This power to unify and express becomes
especially important when reasoning across domains with largely independent
modeling histories; compare, e.g., [87]. (2.2, 2.4, 3, 4.1, 5)
Minimal data needed for specification. Data needed to set up each
representation of a problem is minimal in two ways: (1) any framework must
provide similar, generative data; and (2) each level only needs to specify
data relevant to that level. Each representation is self-sufficient and can be
crafted to efficiently address a limited portion of the full problem. The
modeler can pick and choose relevant representations and extend the meta-model
as needed. (4, 6.2)
Efficient exploration of formal correct designs. An operad precisely defines
how to iteratively construct designs or adapt designs by substituting
different subsystems. Constructing syntactically invalid designs is not
possible, restricting the relevant design space, and correctness is preserved
when moving across models. Semantic reasoning guides synthesis–potentially at
several levels of detail. This facilitates lazy evaluation: first syntactic
correctness is guaranteed, then multitudes of coarse models are explored
before committing to later, more expensive evaluations. The basic moves of
iteration, substitution, and moving across levels constitute a rich framework
for exploration. We obtain not only an effective design but also formal
documentation of the models which justify this choice. (2.2–2.3, 6, 7.5)
Separates representation from exploitation. Operads and algebras provide
structure and representation for a problem. Exploitation of this structure and
representation is a separate concern. As Herbert Simon noted during his Nobel
Prize speech [85]: “…decision makers can satisfice either by finding optimum
solutions for a simplified world, or by finding satisfactory solutions for a
more realistic world.” This is an either-or proposition for a simple
representation. By laying the problem across multiple semantic models, useful
data structures for each model–e.g. logical, evolutionary or planning
frameworks–can be exploited by algorithms that draw on operad-based iteration
and substitution. (6, 7.5)
Hierarchical analysis and synthesis. Operads naturally capture options for the
hierarchical decomposition of a system, either within a semantic model to
break up large scale problems or across models to gradually increase modeling
fidelity. (2.1, 5, 6.3, 7.1)
### 1.1 Contribution to design literature
There are well-known examples of the successful exploitation of separation.
For instance, electronic design automation (EDA) has had decades of success
leveraging hierarchical separation of systems and components to achieve very
large scale integration (VLSI) of complex electronic systems [16, 36, 82]. We
do not argue that operads are needed for extremely modular domains. Instead,
operads may help broaden the base of domains that benefit from separation and
provide a means to integrate and unify treatment across domains. On the other
hand, for highly integral domains the ability to separate in practically
useful ways may be limited [89, 102]. The recent applications we present help
illustrate where operads may prove useful in the near and longer term; see
7.3 for further discussion.
Compared to related literature, this article is application driven and outward
focused. Interest in applying operads and category theory to systems
engineering has surged [21, 38, 53, 67, 71, 94] as part of a broader wave
applying category theory to design databases, software, proteins, etc. [22,
31, 40, 46, 91, 92, 93]. While much of loc. cit. matches applications to
existing theoretical tools, the present article describes recent _application
driven_ advancements and overviews _specific methods_ developed to address
challenges presented by domain problems. We introduce operads for design to a
general scientific audience by explaining what the operads do relative to
broadly applied techniques and how specific domain problems are modeled.
Research directions are presented with an eye towards opening up
interdisciplinary partnerships and continuing application driven
investigations to build on recent insights.
### 1.2 Organization of the paper
The present article captures an intermediate stage of technical maturity:
operad-based design has shown its practicality by lowering barriers of entry
for applied practitioners and demonstrating applied examples across many
domains. However, it has not realized its full potential as an applied meta-
language. Much of this recent progress is not focused solely on the analytic
power of operads to separate concerns. Significant progress on explicit
specification of domain models and techniques to automatically synthesize
designs from basic building blocks has been made. Illustrative use cases and
successful applications for design specification, analysis and synthesis
organize the exposition.
Section 2 introduces operads for design by analogy to other modeling
approaches. Our main examples are introduced in Section 3. Section 4 describes
how concrete domains can be specified with minimal combinatorial data,
lowering barriers to apply operads. Section 5 concerns analysis of a system
with operads. Automated synthesis is discussed in Section 6. Future research
directions are outlined in Section 7, which includes a list of open problems.
Figure 2: Organization of the paper around applied examples introduced in Sec. 3: Ex. 3.1 drives specification (Section 4), Ex. 3.2 analysis (Section 5), and Ex. 3.3 synthesis (Section 6), tied together by generators, compositionality and scalability.
Notations. Throughout, we maintain the following notational conventions for:

* syntax operads (Fig. 1, left), capitalized calligraphy: $\mathcal{O}$;
* types (Fig. 1, edges on left), capitalized teletype: $\mathtt{X},\mathtt{Y},\mathtt{Z},\ldots$;
* operations (Fig. 1, nodes on left), uncapitalized teletype: $\mathtt{f},\mathtt{g},\mathtt{h},\ldots$;
* semantic contexts (Fig. 1, right), capitalized bold: $\mathbf{Sem},\mathbf{Set},\mathbf{Rel},\dots$;
* functors from syntax to semantics (Fig. 1, arrows across red), capitalized sans serif: $\mathsf{Model}\colon{\mathcal{O}}\to\mathbf{Sem}$;
* alignment of semantic models via natural transformations (Fig. 1, double arrow across blue), uncapitalized sans serif: $\mathsf{align}\colon\mathsf{Model_{1}}\Rightarrow\mathsf{Model_{2}}$.
## 2 Applying operads to design
We introduce operads by an analogy, explaining what an operad is and
motivating its usefulness for systems modeling and analysis. The theory [64,
70, 103] pulls together many different intuitions. Here we highlight four
analogies or ‘views’ of an operad: hierarchical representations (tree view),
strongly-typed programming languages (API view, where API stands for Application Programming Interface), algebraic equations (equational view) and system cartography (map-maker’s view). Each view motivates operad concepts; see Table 1.
The paradigm of this paper is based on a typed operad, also known as a
‘colored operad’ [103] or ‘symmetric multicategory’ [64, 2.2.21]. A typed
operad ${\mathcal{O}}$ has:
* A set $T$ of types.
* Sets of operations ${\mathcal{O}}(\mathtt{X_{1}},\ldots,\mathtt{X_{n}};\mathtt{Y})$, where $\mathtt{X_{i}},\mathtt{Y}\in T$; we write $\mathtt{f}\colon\langle\mathtt{X_{i}}\rangle\to\mathtt{Y}$ to indicate that $\mathtt{f}\in{\mathcal{O}}(\mathtt{X_{1}},\ldots,\mathtt{X_{n}};\mathtt{Y})$.
* A specific way to compose any operation $\mathtt{f}\colon\langle\mathtt{Y_{i}}\rangle\to\mathtt{Z}$ with operations $\mathtt{g_{i}}\colon\langle\mathtt{X_{ij}}\rangle\to\mathtt{Y_{i}}$ whose output types match the inputs of $\mathtt{f}$, obtaining a composite $\mathtt{f}\circ(\mathtt{g_{1}},\dots,\mathtt{g_{n}})=\mathtt{h}\colon\langle\mathtt{X_{ij}}\rangle\to\mathtt{Z}$.
These data are subject to rules [103, 11.2] governing permutation of arguments
and assuring that iterative composition is coherent, analogous to
associativity for ordinary categories [68, I].
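A minimal executable sketch of this data can make the typing discipline concrete (illustrative only; the `Op` class and `compose` helper below are our own constructions, not from the paper): an operation carries a name, a tuple of input types and an output type, and composition $\mathtt{f}\circ(\mathtt{g_{1}},\dots,\mathtt{g_{n}})$ is defined exactly when each $\mathtt{g_{i}}$'s output type matches the corresponding input type of $\mathtt{f}$.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Op:
    """A typed operation f : <X1, ..., Xn> -> Y."""
    name: str
    inputs: tuple   # input types X1, ..., Xn
    output: str     # output type Y

def compose(f, gs):
    """Form f ∘ (g1, ..., gn); defined only when each gi's output
    type matches the corresponding input type of f."""
    if tuple(g.output for g in gs) != f.inputs:
        raise TypeError("output types of the g_i must match the inputs of f")
    # The composite consumes the concatenated inputs of the g_i.
    flat_inputs = tuple(x for g in gs for x in g.inputs)
    name = f"{f.name}∘({', '.join(g.name for g in gs)})"
    return Op(name, flat_inputs, f.output)

op  = Op("op",  ("Sub1", "Sub2"), "System")
op1 = Op("op1", ("Sub11", "Sub12"), "Sub1")
op2 = Op("op2", ("Sub21", "Sub22", "Sub23"), "Sub2")

# h : <Sub11, Sub12, Sub21, Sub22, Sub23> -> System
h = compose(op, [op1, op2])
```

Attempting `compose(op, [op2, op1])` raises a `TypeError`: syntactically invalid designs simply cannot be constructed, which is the guarantee exploited in Sec. 6 for design synthesis.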
Table 1: The theory of operads draws on many familiar ideas, establishing a dictionary between contexts.

Operads | Tree | API | Equational | Systems
---|---|---|---|---
Types | Edges | Data types | Variables | Boundaries
Operations | Nodes | Methods | Operators | Architectures
Composites | Trees | Scripts | Evaluation | Nesting
Algebras | Labels | Implementations | Values | Models
### 2.1 The tree view
Hierarchies are everywhere, from scientific and engineered systems to
government, business and everyday life; they help to decompose complex
problems into more manageable pieces. The fundamental constituent of an
operad, called an _operation_ , represents a single step in a hierarchical
decomposition. We can think of this as a single branching in a labeled tree,
e.g.:
(a labeled tree with node $\mathtt{op}$, root edge $\mathtt{System}$ and leaf edges $\mathtt{Sub_{1}}$, $\mathtt{Sub_{2}}$)
Formally, this represents an element
$\mathtt{op}\in{\mathcal{O}}(\mathtt{Sub_{1}},\mathtt{Sub_{2}};\mathtt{System})$.
More generally, we can form new operations—trees—by _composition_. Given
further refinements for the two subsystems $\mathtt{Sub_{1}}$ and
$\mathtt{Sub_{2}}$, by $\mathtt{op_{1}}$ and $\mathtt{op_{2}}$, respectively,
we have three composites:
(three labeled trees: composing $\mathtt{op}$ with $\mathtt{op_{1}}$ alone yields a tree with leaf edges $\mathtt{Sub_{11}},\mathtt{Sub_{12}},\mathtt{Sub_{2}}$; composing with $\mathtt{op_{2}}$ alone yields leaf edges $\mathtt{Sub_{1}},\mathtt{Sub_{21}},\mathtt{Sub_{22}},\mathtt{Sub_{23}}$; composing with both yields leaf edges $\mathtt{Sub_{11}},\mathtt{Sub_{12}},\mathtt{Sub_{21}},\mathtt{Sub_{22}},\mathtt{Sub_{23}}$; each has root edge $\mathtt{System}$)
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-68.03888pt}{-59.47551pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$\mathtt{Sub_{11}}$}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-32.47293pt}{-59.47551pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$\mathtt{Sub_{12}}$}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}{ {}{}{}} {{{{{}}{
{}{}}{}{}{{}{}}}}}{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}\pgfsys@moveto{0.0pt}{10.95293pt}\pgfsys@lineto{0.0pt}{20.53088pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}
{{{{{}}{}{}{}{}{{}}}}}{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}\pgfsys@moveto{-8.5528pt}{-6.84222pt}\pgfsys@lineto{-27.00462pt}{-21.6037pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{ {}{}{}} {{{{{}}{
{}{}}{}{}{{}{}}}}}{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}\pgfsys@moveto{-42.14449pt}{-37.22353pt}\pgfsys@lineto{-50.97578pt}{-48.99808pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{ {}{}{}} {{{{{}}{
{}{}}{}{}{{}{}}}}}{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}\pgfsys@moveto{-30.66254pt}{-38.2591pt}\pgfsys@lineto{-25.29268pt}{-48.99808pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{{}\pgfsys@moveto{46.3298pt}{-28.45276pt}\pgfsys@curveto{46.3298pt}{-22.50798pt}{41.51073pt}{-17.6889pt}{35.56595pt}{-17.6889pt}\pgfsys@curveto{29.62117pt}{-17.6889pt}{24.8021pt}{-22.50798pt}{24.8021pt}{-28.45276pt}\pgfsys@curveto{24.8021pt}{-34.39754pt}{29.62117pt}{-39.21661pt}{35.56595pt}{-39.21661pt}\pgfsys@curveto{41.51073pt}{-39.21661pt}{46.3298pt}{-34.39754pt}{46.3298pt}{-28.45276pt}\pgfsys@closepath\pgfsys@moveto{35.56595pt}{-28.45276pt}\pgfsys@stroke\pgfsys@invoke{
} }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{28.88815pt}{-29.63332pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$\mathtt{op_{2}}$}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-4.02017pt}{-59.47551pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$\mathtt{Sub_{21}}$}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{24.43259pt}{-59.47551pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$\mathtt{Sub_{22}}$}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{52.88535pt}{-59.47551pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$\mathtt{Sub_{23}}$}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}{ {}{}{}} {{{{{}}{
{}{}}{}{}{{}{}}}}}{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}\pgfsys@moveto{0.0pt}{10.95293pt}\pgfsys@lineto{0.0pt}{20.53088pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}
{{{{{}}{}{}{}{}{{}}}}}{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}\pgfsys@moveto{8.5528pt}{-6.84222pt}\pgfsys@lineto{27.00462pt}{-21.6037pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{ {}{}{}} {{{{{}}{
{}{}}{}{}{{}{}}}}}{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}\pgfsys@moveto{27.81317pt}{-36.2052pt}\pgfsys@lineto{15.01941pt}{-48.99808pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{ {}{}{}} {{{{{}}{
{}{}}{}{}{{}{}}}}}{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}\pgfsys@moveto{35.56595pt}{-39.41661pt}\pgfsys@lineto{35.56595pt}{-48.99808pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{ {}{}{}} {{{{{}}{
{}{}}{}{}{{}{}}}}}{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}\pgfsys@moveto{43.31873pt}{-36.2052pt}\pgfsys@lineto{56.11249pt}{-48.99808pt}\pgfsys@stroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope{}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope\hss}}\lxSVG@closescope\endpgfpicture}}}\end{array}$ (1)
Together with the original operation, these represent four views of the same
system at different levels of granularity; compare, e.g., [65, Fig. 2]. This
reveals an important point: an operad provides a collection of interrelated
models that fit together to represent a complex system.
The relationship between models is constrained by the _principle of
compositionality_: the whole is determined by its parts _and_ their
organization. Here, the whole is the root, the parts are the leaves, and each
tree is an organizational structure. Formally, _associativity axioms_, which
generalize those of ordinary categories, enforce compositionality. For
example, composing the left-hand tree above with $\mathtt{op_{2}}$ must give
the same result as composing the center tree with $\mathtt{op_{1}}$. Both give
the tree on the right, since they are built up from the same operations. In
day-to-day modeling these axioms are mostly invisible, ensuring that
everything “just works”, but the formal definitions [103, 11.2] provide
explicit requirements and desiderata for modeling languages “under the hood”.
Operads encourage principled approaches to emergence by emphasizing the
organization of a system. Colloquially speaking, an emergent system is “more
than the sum of its parts”; operations provide a means to describe these
nonlinearities. This does not by itself explain emergent phenomena, which
requires detailed semantic modeling, but it begins to break up the problem
with separate (but related) representations of components and their
interactions. The interplay between these elements can be complex and
unexpected, even when the individual elements are quite simple. For example,
diffusion rates (components) and activation/inhibition (interactions)
generate a zebra’s stripes in Turing’s model of morphogenesis [99].
Compositional models may develop and exhibit emergence as interactions
between components are organized, in much the same way as the systems they
represent.
### 2.2 The API view
For most applications, trees bear labels: fault trees, decision trees, syntax
trees, dependency trees and file directories, to name a few. A tree’s labels
indicate its semantics either explicitly with numbers and symbols or
implicitly through naming and intention.
In an operad, nodes identify operations while edges, called _types_, restrict
the space of valid compositions. This is analogous to type checking in
strongly-typed programming languages, where we can only compose operations
when types match. In the API view, the operations are abstract method
declarations:
> def op(x1 : Sub1, x2 : Sub2) : System,
> def op1(y1 : Sub11, y2 : Sub12) : Sub1,
> def op2(z1 : Sub21, z2 : Sub22, z3 : Sub23) : Sub2.
Composites are essentially scripted methods defined in the API. For example,
> def treeLeft(y1 : Sub11, y2 : Sub12, x2 : Sub2) : System
> = op(op1(y1, y2), x2),
is a script for the left-most tree above. However, the compiler will reject
with a type error any script where the types don’t match, say
> def badTree(y1 : Sub11, y2 : Sub12, x2 : Sub2) : System
> = op(x2, op1(y1, y2)).
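To make the type discipline concrete, here is a minimal runnable sketch in Python; the class and function names mirror the abstract declarations above and are purely illustrative.

```python
# Operad types become Python classes; operations become functions
# whose annotations mirror the abstract method declarations.
class Sub11: pass
class Sub12: pass
class Sub1: pass
class Sub2: pass
class System: pass

def op(x1: Sub1, x2: Sub2) -> System:
    return System()

def op1(y1: Sub11, y2: Sub12) -> Sub1:
    return Sub1()

# A composite is a scripted method; it type-checks because op1's
# output type (Sub1) matches op's first input type.
def treeLeft(y1: Sub11, y2: Sub12, x2: Sub2) -> System:
    return op(op1(y1, y2), x2)

# op(x2, op1(y1, y2)) would be flagged by a static checker such as
# mypy, since Sub2 does not match the expected Sub1.
assert isinstance(treeLeft(Sub11(), Sub12(), Sub2()), System)
```

Since Python does not enforce annotations at runtime, a static checker such as mypy plays the compiler’s role here.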
If an operad is an API—a collection of abstract types and methods—then an
_operad algebra_ $\mathsf{A}$ is a concrete implementation. An algebra
declares: 1) a set of instances for each type; 2) a function for each
operation, taking instances as arguments and returning a single instance for
the composite system. That is, $\mathsf{A}\colon{\mathcal{O}}\to\mathbf{Set}$
has:
* •
for each type $\mathtt{X}\in T$, a set $\mathsf{A}(\mathtt{X})$ of instances
of type $\mathtt{X}$, and
* •
for each operation
$\mathtt{f}\colon\langle\mathtt{X_{i}}\rangle\to\mathtt{Y}$, the function
$\mathsf{A}(\mathtt{f})$ acts on input elements
$a_{i}\in\mathsf{A}(\mathtt{X_{i}})$ to obtain a single output element
$\mathsf{A}(\mathtt{f})(a_{1},\dots,a_{n})\in\mathsf{A}(\mathtt{Y}).$
Required coherence rules [103, 13.2] are analogous to the definition of a
functor into $\mathbf{Set}$ [68, I.3]. For example, we might declare a state
space for each subsystem, and a function to calculate the overall system state
given subsystem states. Alternatively, we might assign key performance
indicators (KPIs) for each level in a system and explicit formulae to
aggregate them. The main thing to remember is: just as an abstract method has
many implementations, an operad has many algebras. Just like an API, the
operad provides a common syntax for a range of specific models, suited for
specific purposes.
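For instance, a state-space algebra of the kind just described can be sketched as follows; the particular state sets and the rule inside `A_op` are invented for illustration, not taken from the text.

```python
# An operad algebra A assigns a set of instances to each type ...
A_Sub1 = {"ok", "degraded"}
A_Sub2 = {"ok", "failed"}
A_System = {"up", "down"}

# ... and a function to each operation: here, the overall system
# state computed from the two subsystem states.
def A_op(s1, s2):
    return "up" if s1 == "ok" and s2 == "ok" else "down"

# The implementation respects the declared instance sets: every
# combination of inputs lands in A(System).
for s1 in A_Sub1:
    for s2 in A_Sub2:
        assert A_op(s1, s2) in A_System
```

A different algebra for the same operad would keep the signature of `A_op` but swap in other instance sets and another aggregation rule, just as an abstract method admits many implementations.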
Unlike a traditional API, an operad provides an explicit framework to express
and reason about semantic relationships between _different_ implementations.
These different implementations are linked by type-indexed mappings between
instances called _algebra homomorphisms_. For example, we might like to
extract KPIs from system state. The principle of compositionality places
strong conditions on this extraction: the KPIs extracted from the overall
system state must agree with the KPIs obtained by aggregating subsystem KPIs.
That is, in terms of trees and in pseudocode:
$\mathsf{KPI}(\mathtt{op})\bigl(\mathsf{extr}(\mathtt{Sub_{1}})(x_{1}),\mathsf{extr}(\mathtt{Sub_{2}})(x_{2})\bigr)\;=\;\mathsf{extr}(\mathtt{System})\bigl(\mathsf{State}(\mathtt{op})(x_{1},x_{2})\bigr),$
> KPI(op)(extr(x1), extr(x2)) == extr(State(op)(x1, x2)).
For any state instances for $\mathtt{Sub_{1}}$ and $\mathtt{Sub_{2}}$ at the
base of the tree, the two computations must produce the same KPIs for the
overall system at the top of the tree. Here $\mathsf{KPI}(\mathtt{op})$ and
$\mathsf{State}(\mathtt{op})$ implement $\mathtt{op}$ in the two algebras,
while $\mathsf{extr}(-)$ are _components_ of the algebra homomorphism to
extract KPIs. Similar to associativity, these compositionality conditions
guarantee that extracting KPIs “just works” when decomposing a system
hierarchically.
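The commuting condition can be checked mechanically. In this sketch, states are tuples of numbers, KPIs are their sums, and all names (`state_op`, `kpi_op`, `extr_sub`, `extr_system`) are hypothetical stand-ins for the two algebras and the homomorphism components.

```python
# State algebra: the composite state is the pair of subsystem states.
def state_op(x1, x2):
    return (x1, x2)

# KPI algebra: the composite KPI aggregates subsystem KPIs.
def kpi_op(k1, k2):
    return k1 + k2

# Homomorphism components: extract a KPI from a state.
def extr_sub(x):          # component at Sub1 and at Sub2
    return sum(x)

def extr_system(s):       # component at System
    x1, x2 = s
    return sum(x1) + sum(x2)

# Compositionality: extracting then aggregating must equal
# composing then extracting, for any subsystem states.
x1, x2 = (1, 2), (3, 4, 5)
lhs = kpi_op(extr_sub(x1), extr_sub(x2))
rhs = extr_system(state_op(x1, x2))
assert lhs == rhs == 15
```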
### 2.3 The equational view
We have just seen an equation between trees that represent implementations.
Because an operad can be studied without reference to an implementation, we
can also define equations between abstract trees. This observation leads to
another view of an operad: as a system of equations.
The first thing to note is that equations occur within the sets of operations
${\mathcal{O}}(\mathtt{X_{1}},\ldots,\mathtt{X_{n}};\mathtt{Y})$; an equation
between two operations only makes sense if the input and output types match.
Second, if one side of an equation $\mathtt{f}=\mathtt{f}^{\prime}$ occurs as
a subtree in a larger operation $\mathtt{g}$, substitution generates a new
equation $\mathtt{g}=\mathtt{g^{\prime}}$. Two trees are equal if and only if
they are connected by a chain of such substitutions (and associativity
equations). In general, deciding whether two trees are equal (the word
problem) may be intractable. Third, we can often interpret composition of
operations as a normal-form computation:
$\mathtt{op}\circ(\mathtt{op_{1}},\mathtt{op_{2}})\;\longmapsto\;\mathtt{op}(\mathtt{op_{1}},\mathtt{op_{2}})\colon\langle\mathtt{Sub_{11}},\mathtt{Sub_{12}},\mathtt{Sub_{21}},\mathtt{Sub_{22}},\mathtt{Sub_{23}}\rangle\to\mathtt{System}.$
We then compare composed operations directly to decide equality. For example,
there is an operad whose operations are matrices. Composition computes a
normal form for a composite operation by block diagonals and matrix
multiplication,
$\begin{array}{ccc}\begin{array}{l}op\colon n\times(m_{1}+m_{2})\\ op_{1}\colon m_{1}\times(k_{11}+k_{12})\\ op_{2}\colon m_{2}\times(k_{21}+k_{22}+k_{23})\end{array}&\longmapsto&\Big(\ op\ \Big)\cdot\begin{pmatrix}op_{1}&0\\ 0&op_{2}\end{pmatrix}.\end{array}$
Operad axioms constrain composition. For example, the associativity axiom
mentioned in Sec. 2.1 corresponds to:
$\Big(\ op\ \Big)\cdot\begin{pmatrix}op_{1}&0\\ 0&I_{m_{2}}\end{pmatrix}\cdot\begin{pmatrix}I_{k_{11}}&0&0\\ 0&I_{k_{12}}&0\\ 0&0&op_{2}\end{pmatrix}=\Big(\ op\ \Big)\cdot\begin{pmatrix}I_{m_{1}}&0\\ 0&op_{2}\end{pmatrix}\cdot\begin{pmatrix}op_{1}&0&0&0\\ 0&I_{k_{21}}&0&0\\ 0&0&I_{k_{22}}&0\\ 0&0&0&I_{k_{23}}\end{pmatrix}.$
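This identity can be verified numerically. The following sketch uses plain nested lists (no external libraries) and small example matrices chosen arbitrarily; the dimensions follow the signatures in the text with $n=m_{1}=1$, $m_{2}=2$, and all $k_{ij}=1$.

```python
# Checking the matrix-operad associativity equation numerically,
# with plain-list matrices and hand-rolled helpers.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def block_diag(*blocks):
    rows = sum(len(B) for B in blocks)
    cols = sum(len(B[0]) for B in blocks)
    out = [[0] * cols for _ in range(rows)]
    r = c = 0
    for B in blocks:
        for i, row in enumerate(B):
            for j, v in enumerate(row):
                out[r + i][c + j] = v
        r += len(B)
        c += len(B[0])
    return out

def eye(k):
    return [[int(i == j) for j in range(k)] for i in range(k)]

op  = [[1, 2, 3]]                     # n=1, m1+m2 = 1+2
op1 = [[4, 5]]                        # m1=1, k11+k12 = 1+1
op2 = [[6, 7, 8], [9, 10, 11]]        # m2=2, k21+k22+k23 = 1+1+1

# Compose op1 first, then op2 ...
lhs = matmul(matmul(op, block_diag(op1, eye(2))),
             block_diag(eye(1), eye(1), op2))
# ... or op2 first, then op1: associativity demands the same result.
rhs = matmul(matmul(op, block_diag(eye(1), op2)),
             block_diag(op1, eye(1), eye(1), eye(1)))
assert lhs == rhs == [[4, 5, 39, 44, 49]]
```

Both orders collapse to the single product $op\cdot\operatorname{blockdiag}(op_{1},op_{2})$, which is exactly the normal form computed above.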
The key point is that any algebra that implements the operad must satisfy
_all_ of the equations that it specifies. Type discipline controls which
operations can compose; equations between operations control the resulting
composites. Declaring equations between operations provides additional
contracts for the API. For instance, any unary operation
$\mathtt{f}\colon\mathtt{X}\to\mathtt{X}$ (a loop) generates an infinite
sequence of composites
$\mathsf{id}_{\mathtt{X}},\mathtt{f},\mathtt{f}^{2},\mathtt{f}^{3},\ldots$.
Sometimes this is a feature of the problem at hand, but in other cases we can
short-circuit the infinite regress with assumptions like idempotence
($\mathtt{f}^{2}=\mathtt{f}$) or cyclicity
($\mathtt{f}^{n}=\mathsf{id}_{\mathtt{X}}$) and ensure that algebras contain
no infinite loops.
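A toy reduction makes this concrete. Representing the composite $\mathtt{f}^{n}$ by its exponent, the assumed equations collapse the infinite sequence to a finite set of normal forms (the function and its keyword names are invented for illustration):

```python
# Normal forms for composites of a unary loop f : X -> X,
# where a composite f^n is represented by its exponent n.

def normalize(n, idempotent=False, cycle=None):
    """Return the canonical exponent of f^n under optional equations."""
    if idempotent:            # f^2 = f collapses every n >= 1 to 1
        return min(n, 1)
    if cycle is not None:     # f^cycle = id reduces exponents mod cycle
        return n % cycle
    return n                  # free case: infinitely many composites

assert normalize(7, idempotent=True) == 1
assert normalize(7, cycle=3) == 1     # f^7 = f^(7 mod 3)
assert normalize(7) == 7
```

An algebra satisfying one of these equations only ever has to implement the finitely many normal forms.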
### 2.4 The systems view
When we apply operads to study systems, we often think of an operation
$\mathtt{f}\colon\langle\mathtt{X_{i}}\rangle\to\mathtt{Y}$ as a system architecture.
Intuitively, $\mathtt{Y}$ is the system and
$\mathtt{X_{1}},\ldots,\mathtt{X_{n}}$ are the components, but this is a bit
misleading. It is better to think of types as boundaries or interfaces rather
than systems: $\mathtt{f}$ is the system architecture, with component
interfaces $\mathtt{X_{i}}$ and environmental interface $\mathtt{Y}$.
Composition formalizes the familiar idea [65, Fig. 2] that one engineer’s
system is the next’s component; it acts by nesting subsystem architectures
within higher-level architectures.
Once we establish a system architecture, we would like to use this structure
to organize our data and analyses of the system. Moreover, according to the
principle of compositionality, we should be able to construct a system-level
analysis from an operation by analyzing the component-level inputs and
synthesizing these descriptions according to the given operations.
The process of extracting computations from operations is called _functorial
semantics_, in which a model is represented as a mapping
$\mathsf{M}\colon\mathbf{Syntax}\longrightarrow\mathbf{Semantics}$. The syntax
defines a system-specific architectural design. Semantics are universal and
provide a computational context to interpret specific models. Matrices,
probabilities, proofs, and dynamical equations all have their own rules for
composition, corresponding to different semantic operads.
The mapping $\mathsf{M}$ encodes, for each operation, the data, assumptions
and design artifacts (e.g., geometric models) needed to construct the relevant
computational representations for the architecture, its components and the
environment. From this, the system model as a whole is determined by
composition in the semantic context. The algebras
($\mathsf{State}$, $\mathsf{KPI}$) described in 2.2 are typical examples, with
syntax ${\mathcal{O}}$ and semantic values in sets and functions. The
mappings themselves, called _functors_ , map types and operations
($\mathtt{System}$, $\mathtt{op}$) to their semantic values, while preserving
how composition builds up complex operations.
The functorial perspective allows complementary models–e.g. system state vs.
KPIs–to be attached to the same design. This includes varying the semantic
context as well as modeling details; see Sec. 5 for examples of non-
deterministic semantics. Though functorial models may be radically different,
they describe the _same system_ , as reflected by the overlapping syntax.
In many cases, relevant models are _not_ independent, like system state and
KPIs. Natural transformations, like the extraction homomorphism in 22.2,
provide a means to align ostensibly independent representations. Since models
are mappings, we often visualize natural transformations as two-dimensional
cells:
$\mathsf{State}\Rightarrow\mathsf{KPI}\colon{\mathcal{O}}\longrightarrow\mathbf{Set}.$ (2)
Formal conditions guarantee that when moving from syntax to semantics [103,
13.2] or between representations [64, 2.3.5], reasoning about how systems
decompose hierarchically “just works.”
Since functors and higher cells assure coherence with hierarchical
decomposition, we can use them to build up a desired model in stages, working
backwards from simpler models:
$\mathcal{O}_{0}\xrightarrow{\ \mathsf{Extension}_{1}\ }\mathcal{O}_{1}\xrightarrow{\ \mathsf{Extension}_{2}\ }\mathcal{O}_{2},\qquad\mathbf{Sem}_{2}\xrightarrow{\ \mathsf{Reduction}_{2}\ }\mathbf{Sem}_{1}\xrightarrow{\ \mathsf{Reduction}_{1}\ }\mathbf{Sem}_{0},$
where $\mathsf{Model}_{0}\colon\mathcal{O}_{0}\to\mathbf{Sem}_{0}$ is given and
each richer $\mathsf{Model}_{i}\colon\mathcal{O}_{i}\to\mathbf{Sem}_{i}$ is
defined to cohere with $\mathsf{Model}_{i-1}$ along $\mathsf{Extension}_{i}$
and $\mathsf{Reduction}_{i}$.
This is a powerful technique for at least two reasons. First, complexity can
be built up in stages by layering on details. Second, complex models built at
later stages are partially validated through their coherence with simpler
ones. The latter point is the foundation for lazy evaluation: many coarse
models can be explored before ever constructing expensive models.
Separating out the different roles within a model encourages efficiency and
reuse. An architecture (operation) developed for one analysis can be
repurposed with strong coherence between models (algebra instances) indexed by
the same conceptual types. The syntax/semantics distinction also helps address
some thornier meta-modeling issues. For example, syntactic types can
distinguish conceptually distinct entities while still mapping to the same
semantic entities. We obtain the flexibility of structural or duck typing in
the semantics without sacrificing the type safety provided by the syntax.
## 3 Main examples
Though operads are general tools [64, 70, 103], we focus on two classes of
operads applied to system design: wiring diagram operads and network operads.
These are complementary. Wiring diagrams provide a top-down view of the
system, whereas network operads are bottom-up. This section introduces three
examples that help ground the exposition, as in Fig. 2.
### 3.1 Specification
Network operads describe atomic types of systems and ways to link them
together with operations. These features enable: (1) specification of atomic
building blocks for a domain problem; and (2) bottom up synthesis of designs
from atomic systems and links. A general theory of network operads [6, 7, 73,
74] was recently developed under the Defense Advanced Research Projects Agency
(DARPA) Complex Adaptive System Composition and Design Environment (CASCADE)
program. Minimal data can be used to specify a functor–called a network model
[6, 4.2]–which constructs a network operad [6, 7.2] customized to a domain
problem.
| $\mathtt{Boat}$ | $\mathtt{Helo}$ | $\mathtt{UAV}$ | $\mathtt{QD}$
---|---|---|---|---
$\mathtt{Cut}$ | 1 | 1 | 1 | 1
$\mathtt{Boat}$ | | | 1 | 1
$\mathtt{FW}$ | | | | 1
$\mathtt{FSAR}$ | | | | 1
$\mathtt{Helo}$ | | | | 1
(a) Examples of carrying relationships in ${\mathcal{O}}_{Sail}$
(b) Operation $\mathtt{f}\in{{\mathcal{O}}}_{Sail}$ to specify carrying
Figure 3: Which types are allowed to carry other types–indicated with
$1$–specifies an operad ${\mathcal{O}}_{Sail}$; $\mathtt{f}$ specifies that a
$\mathtt{Helo}$ ($\bullet$) and a $\mathtt{QD}$ ($\bullet$) are carried by a
$\mathtt{Cut}$ ($\bullet$) and another $\mathtt{QD}$ ($\bullet$) is carried on
the $\mathtt{Helo}$ ($\bullet$).
The first example illustrates designs of search and rescue (SAR)
architectures. The domain problem was inspired by the 1979 Fastnet Race and
the 1998 Sydney to Hobart Yacht Race and we refer to it as the sailboat
problem. It illustrates how network operads facilitate the specification of a
model with combinatorial data called a network template. For example, Fig. 3
shows the carrying relationships between different system types to model
(e.g., a $\mathtt{Boat}$ can carry a $\mathtt{UAV}$ (Unmanned Aerial Vehicle)
but a $\mathtt{Helo}$ cannot). This data specifies a network operad
${\mathcal{O}}_{Sail}$ whose: (1) objects are lists of atomic system types;
(2) operations describe systems carrying other systems; and (3) composition
combines carrying instructions. We discuss this example in greater detail in
Sec. 4.
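To make the combinatorial specification concrete, here is a minimal sketch (the dict-of-sets encoding and helper names are ours, not the CASCADE tooling) of the carrying relation from Fig. 3, used to check that a proposed set of carrying links is syntactically valid:

```python
# Carrier -> types it may carry (the 1-entries of Fig. 3; rows are carriers).
CARRIES = {
    "Cut":  {"Boat", "Helo", "UAV", "QD"},
    "Boat": {"UAV", "QD"},
    "FW":   {"QD"},
    "FSAR": {"QD"},
    "Helo": {"QD"},
}

def valid_design(links):
    """Check that every (carrier, carried) link is allowed by the template.

    A design like the operation f of Fig. 3 is a list of links; composition
    of operations concatenates links, so validity is checked link-wise."""
    return all(carried in CARRIES.get(carrier, set())
               for carrier, carried in links)

# The operation f of Fig. 3: a Cut carries a Helo and a QD, the Helo a QD.
f = [("Cut", "Helo"), ("Cut", "QD"), ("Helo", "QD")]
assert valid_design(f)
assert not valid_design([("Helo", "Boat")])  # a Helo cannot carry a Boat
```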
Functional Decomposition | Control Decomposition
---|---
$\mathtt{f}(\mathtt{l},\mathtt{t})$: the $\mathtt{LSI}$ split into $\mathtt{LengthSys}$ ($\mathtt{Chassis}$, $\mathtt{Optics}$, $\mathtt{Intfr}$) and $\mathtt{TempSys}$ ($\mathtt{Lab}$, $\mathtt{Box}$, $\mathtt{Bath}$) | $\mathtt{g}(\mathtt{s},\mathtt{a})$: the same components split into $\mathtt{Sensors}$ ($\mathtt{Lab}$, $\mathtt{Box}$, $\mathtt{Optics}$, $\mathtt{Intfr}$) and $\mathtt{Actuators}$ ($\mathtt{Chassis}$, $\mathtt{Bath}$)
Operad Equation:
$\mathtt{f}(\mathtt{l},\mathtt{t})=\mathtt{g}(\mathtt{s},\mathtt{a})$
(Composite wiring diagram of the $\mathtt{LSI}$: wires $\mathtt{laser}$,
$\mathtt{intensity}$, $\mathtt{focus}$, $\mathtt{drive}$, $\mathtt{fringe}$,
$\mathtt{heat}_{1}$, $\mathtt{heat}_{2}$, $\mathtt{setPt}$,
$\mathtt{H_{2}O}$ and $\mathtt{temp}$ connect the components.)
Figure 4: An equation between wiring diagram operad operations expresses a
common refinement of hierarchies.
### 3.2 Analysis
A wiring diagram operad describes the interface each system exposes, making it
clear what can be put together [90, 94, 104]. The designer has to specify
precisely how information and physical quantities are shared among components,
while respecting their interfaces. The operad facilitates top-down analysis of
a design by capturing different ways to decompose a composite system.
The second example analyzes a precision-measurement system called the Length
Scale Interferometer (LSI) with wiring diagrams. It helps illustrate the
qualitative features of operads over and above other modeling approaches and
the potential to exploit their analytic power to separate concerns. Figure 4
illustrates joint analysis of the LSI to address different aspects of the
design problem: functional roles of subsystems and control of the composite
system. This analysis example supports these illustrations in Sec. 5.
(Petri net over agent types $\mathtt{UH60}$ and $\mathtt{HC130}$ at locations
$a$, $b$, $c$, $d$: transitions $\tau_{1},\ldots,\tau_{4}$ are primitive
tasks, including a $\mathtt{UH60}+\mathtt{HC130}$ refueling and a
$2\mathtt{UH60}$ joint maneuver.)
(a) Specification of primitive tasks $:=$ transitions
(The same net annotated with space-time points: tasks $\tau_{1}$, $\tau_{2}$,
$\tau_{4}$ composed at $(a,0)$, $(b,1)$, $(c,2)$, $(c,2)$ and
$((d,d),(4,4))$.)
(b) Coordinate tasks to compose
Figure 5: Primitive operations are composed for two UH60s ($\bullet$) to
rendezvous at $c$ and maneuver together to $d$. Each primitive operation is
indexed by a transition; types and space-time points must match to compose.
### 3.3 Synthesis
The third example describes the automated design of mission task plans for SAR
using network operads. The SAR tasking example illustrates the expressive
power of applying existing operads and their potential to streamline and
automate design synthesis. Fig. 5(a) is analogous to Fig. 3, but whereas a
sparse matrix specify an architecture problem, here a Petri net is used to
model coordinated groups of agents.
For the SAR tasking problem, much of the complexity results from agents’ need
to coordinate in space and time–e.g. when a helicopter is refueled in the air,
as in $\tau_{3}$ of Fig. 5(a). To facilitate coordination, the types of the
network operad are systematically extended via a network model whose target
categories add space and time dimensions; compare, e.g., [7]. In this way,
task plans are constrained at the level of syntax to enforce these key
coordination constraints; see, e.g., Fig. 5(b) where two UH60s at the same
space-time point ($\bullet$ $(c,2)$) maneuver together to $d$. We describe
automated synthesis for this example in Sec. 6.
## 4 Cookbook modeling of domain problems
In this section we describe some techniques for constructing operads and their
algebras, using an example-driven, cookbook-style approach. We emphasize
recent developments for network operads and dive deeper into the SAR
architecture problem.
### 4.1 Network models
The theory of network models provides a general method to construct an operad
${\mathcal{O}}$ by mixing combinatorial and compositional structures. Note
that this lives one level of abstraction _above_ operads; we are interested in
_constructing_ a language to model systems–e.g. for a specific domain. This
provides a powerful alternative to coming up with operads one-by-one. A
general construction allows the applied practitioner to cook-up a domain-
specific syntax to compose systems by specifying some combinatorial
ingredients.
The first step is to specify what the networks to be composed by
${\mathcal{O}}$ look like. Often this is some sort of graph, but what kind?
Are nodes typed (e.g., colored)? Are edges symmetric or directed? Are loops or
parallel edges allowed? What about $n$-way relationships for $n>2$ (hyper-
edges)? We can mix, match and combine such combinatorial data to define
different _network models_ , which specify the system types and kinds of
relationships between them relevant to some domain problem. The network model
describes the operations we need to compose the networks specific to the
problem at hand.
Three compositional structures describe the algebra of operations. The
_disjoint_ or _parallel_ structure combines two operations for networks with
$m$ and $n$ nodes, respectively, into a single operation for networks with
$m+n$ nodes. More restrictively, the _overlay_ or _in series_ structure
superimposes two operations to design networks on $n$ nodes. The former
structure combines separate operations to support modular development of
designs; the latter supports an incremental design process, either on top of
existing designs or from scratch. The last ingredient permutes nodes in a
network, which assures coherence between different orderings of the nodes. This
last structure is often straightforward to specify. If it is not, one should
consider if symmetry is being respected in a natural way.
We can distill the main idea behind overlay by asking, what happens when we
add an edge to a network? It depends on the kind of network being composed by
${\mathcal{O}}$:
In a simple graph | but in a labeled graph | and in a multigraph
---|---|---
$x\,\text{--}\,y$ | $x\,\overset{1}{\text{--}}\,y$ | $x\,\text{=}\,y$ (two parallel edges)
$+\quad x\,\text{--}\,y$ | $+\quad x\,\overset{1}{\text{--}}\,y$ | $+\quad x\,\text{--}\,y$
$x\,\text{--}\,y$, | $x\,\overset{2}{\text{--}}\,y$, | $x\,\text{=}\,y$.
These differences are controlled by a _monoid_ (a set with a binary operation,
usually written $\cdot$ unless the operation is commutative ($m+n=n+m$); a
monoid is always associative, $\ell\cdot(m\cdot n)=(\ell\cdot m)\cdot n$, and
has a unit $e$ satisfying $e\cdot m=m=m\cdot e$, e.g. multiplication of
$n\times n$ matrices), which provides each $+$ shown. Above,
the monoids are bitwise OR, addition, and maximum, respectively. As a further
example, if edge addition is controlled by $\mathbb{Z}/2\mathbb{Z}$ then $+$
will have a toggling effect.
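The role of the monoid can be made concrete. In this sketch (our own illustration), each choice of monoid gives a different `+` for repeated edge addition: bitwise OR (simple graphs), addition ($\mathbb{N}$-labeled graphs), maximum (the multigraph overlay), and $\mathbb{Z}/2\mathbb{Z}$, whose toggling effect cancels a doubled edge:

```python
# Each network kind is a monoid (unit, operation) on edge labels.
MONOIDS = {
    "simple":   (0, lambda a, b: a | b),        # bitwise OR on {0, 1}
    "labeled":  (0, lambda a, b: a + b),        # addition on N
    "multi":    (0, lambda a, b: max(a, b)),    # maximum on N
    "toggling": (0, lambda a, b: (a + b) % 2),  # Z/2Z: pairs of edges cancel
}

def add_edge(graph, edge, label, kind):
    """Overlay one labeled edge onto a graph (a dict edge -> label)."""
    unit, op = MONOIDS[kind]
    g = dict(graph)
    g[edge] = op(g.get(edge, unit), label)
    return g

e = ("x", "y")
# Simple: adding x--y twice is the same as adding it once (OR is idempotent).
assert add_edge(add_edge({}, e, 1, "simple"), e, 1, "simple")[e] == 1
# Labeled: weights add, 1 + 1 = 2.
assert add_edge(add_edge({}, e, 1, "labeled"), e, 1, "labeled")[e] == 2
# Multi: overlaying one edge onto a double edge keeps the double (max).
assert add_edge({e: 2}, e, 1, "multi")[e] == 2
# Toggling (Z/2Z): adding an edge twice cancels it.
assert add_edge(add_edge({}, e, 1, "toggling"), e, 1, "toggling")[e] == 0
```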
Consider simple graphs. Given a set of nodes $\mathtt{n}$, write
$U_{\mathtt{n}}$ for the set of all undirected pairs $i\neq j$ (a.k.a. simple
edges $i\text{--}j$), so that
$|U_{\mathtt{n}}|={\binom{|\mathtt{n}|}{2}}$. Then we can represent a simple
graph over $\mathtt{n}$ as a $U_{\mathtt{n}}$-indexed vector of bits $\langle
b_{i\text{--}j}\rangle$ describing which edges to
‘turn on’ for a design. Each bit defines whether or not to add an
$i\text{--}j$ edge to the network and the overlay
compositional structure is given by the monoid
$\mathsf{SG}(\mathtt{n}):=\mathbf{Bit}^{U_{\mathtt{n}}}$, whose $+$ is bitwise
OR for the product over simple edges–i.e. adding $i\text{--}j$ then adding
$i\text{--}j$ is the same
as adding $i\text{--}j$ a single time. The
disjoint structure
$\sqcup:\mathsf{SG}(\mathtt{m})\times\mathsf{SG}(\mathtt{n})\longrightarrow\mathsf{SG}(\mathtt{m+n})$
forms the disjoint sum of the graphs $g$ and $h$. Finally, permutations act by
permuting the nodes of a simple graph. Together, these compositional
structures define a network model
$\mathsf{SG}\colon\mathcal{S}\to\mathbf{Mon}$ which determines how operations
are composed in the constructed network operad; see, Fig. 6 or [6, 3.2, 7.2]
for complete technical details.
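As a sketch (our own encoding, not the formal construction of [6]), $\mathsf{SG}(\mathtt{n})$ and its compositional structures can be modeled with sets of two-element frozensets: overlay is union of edge sets (bitwise OR edge-by-edge), the disjoint structure $\sqcup$ shifts the second graph's node indices, and permutations relabel nodes:

```python
from itertools import combinations

def overlay(g, h):
    """In-series (+): superimpose two simple graphs on the same nodes.
    Union of edge sets = bitwise OR over the U_n-indexed bit vector."""
    return g | h

def disjoint(g, m, h):
    """Parallel: SG(m) x SG(n) -> SG(m+n).
    Shift h's node indices past the m nodes of g."""
    return g | {frozenset(i + m for i in e) for e in h}

def permute(g, sigma):
    """Permutation action: relabel nodes by the dict sigma."""
    return {frozenset(sigma[i] for i in e) for e in g}

edge = lambda i, j: frozenset((i, j))

# Overlay is idempotent, as bitwise OR demands: adding i--j twice = once.
g = {edge(0, 1)}
assert overlay(g, g) == g

# Disjoint sum of two 2-node designs is a 4-node design.
assert disjoint({edge(0, 1)}, 2, {edge(0, 1)}) == {edge(0, 1), edge(2, 3)}

# Relabeling nodes moves edges accordingly; edges stay undirected.
assert permute({edge(0, 1)}, {0: 2, 1: 0, 2: 1}) == {edge(0, 2)}

# |U_n| = n choose 2 possible simple edges on n nodes.
assert len(list(combinations(range(4), 2))) == 6
```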
(Diagram: two simple-graph operations combined in parallel, $\sqcup$, into a
single larger design, which is then overlaid in series, $+$, with further
edges.)
Figure 6: Parallel ($\sqcup$) and in series ($+$) compositional structures
define how to combine operations.
This definition has an analogue for $\mathbb{N}$-weighted graphs,
$\mathsf{LG}(\mathtt{n}):=(\mathbb{N},+)^{U_{\mathtt{n}}}$, with overlay given
by sum of edge weights and another for multi-graphs,
$\mathsf{MG}(\mathtt{n}):=(\mathbb{N},\max)^{U_{\mathtt{n}}}$, with overlay
equivalent to union of multisets; see [6, 3.3, 3.4] for details. More
generally, we can label edges with the elements of _any_ monoid. Many of these
examples are strange—binary addition makes edges cancel when they add—but
their formal construction is straightforward; see [6, Thm. 3.1].
Equivalently, we can view the undirected edges in $U_{\mathtt{n}}$ as
generators, subject to certain idempotence and commutativity relations:
$\mathsf{SG}(\mathtt{n}):=\langle e\in U_{\mathtt{n}}|e\cdot e=e,e\cdot
e^{\prime}=e^{\prime}\cdot e\rangle.$ Here the idempotence relations come from
$\mathbf{Bit}$ while the commutativity relations promote the single copies of
$\mathbf{Bit}$ for each $i\text{--}j$ to a well-defined network model. Similar
tricks work for lots of other network
templates; we just change the set of generators to allow for new
relationships. For example, to allow self-loops, we add loop edge generators
$L_{\mathtt{n}}=\mathtt{n}+U_{\mathtt{n}}$ to express relationships from a
node $i$ to itself. Likewise, network operads for directed graphs can be
constructed by using generators $D_{\mathtt{n}}=\mathtt{n}\times\mathtt{n}$,
and one can also introduce higher-arity relationships.
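The sizes of these generator sets follow directly from the definitions; as a quick sanity check (our own arithmetic, with `generators` a hypothetical helper):

```python
from math import comb

def generators(n):
    """Counts of edge generators on n nodes for different templates."""
    U = comb(n, 2)   # undirected simple edges i--j with i != j
    L = n + U        # add loop generators: one per node
    D = n * n        # directed edges: all ordered pairs (i, j)
    return U, L, D

assert generators(4) == (6, 10, 16)
assert generators(1) == (0, 1, 1)  # a single node: only its loop / self-arrow
```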
In all cases, the formal definition of a network model assures that all the
combinatorial and compositional ingredients work well together; one precise
statement of “working well together” is given in [6, 2.3]. Once a _network
template_ —which expresses minimal data to declare the ingredients for a
network model—is codified in a theorem as in [6, 3.1], it can be reused in a
wide variety of domains to set up the specifics of composition.
### 4.2 Cooking with operads
The prototype for network operads is a simple network operad, which models
only one kind of thing, such as aircraft. The types of a simple network operad
are natural numbers, which serve to indicate how many aircraft are in a
design. Operations of the simple network operad are simple graphs on some
number of vertices. For example, Fig. 6 above shows a simple network operad to
describe a design for point-to-point communication between aircraft.
Structural network operads extend this prototype in two directions: (1) a
greater diversity of things-to-be-modeled is supported by an expanded
collection of types; and (2) more sorts of links or relationships between
things are expressed via operations. To illustrate the impact of network
templates, suppose we are modeling heterogeneous system types with multiple
kinds of interactions. For simplicity we consider simple interactions, which can
be undirected or directed.
A _network template_ need only declare the _primitive_ ways system types can
interact to define a network model–e.g. a list of tuples $\textrm{(directed :
carrying, }\mathtt{Helo},\textrm{ }\mathtt{Cut}\textrm{)}$. This data is
minimal in two ways: (1) _any_ framework must provide data to specify
potentially valid interactions; and (2) this approach allows _only_ those
interactions that make sense upon looking at the types of the systems
involved. Thus, interactions must be syntactically correct when constructing
system designs.
We now consider an example from the DARPA CASCADE program: the
sailboat problem introduced in 3.1. This SAR application problem was inspired
by the 1979 Fastnet Race and the 1998 Sydney to Hobart Yacht Race, in which
severe weather conditions resulted in many damaged vessels distributed over a
large area. Both events were tragic, with 19 and 6 deaths, respectively, and
remain beyond the scale of current search and rescue planning. Various larger
assets—e.g. ships, airplanes, helicopters—could be based at ports and ferry
smaller search and rescue units—e.g. small boats, quadcopters—to the search
area.
Specifically, there were 8 atomic types to model:
$P=\\{\mathtt{Port},\mathtt{Cut},\mathtt{Boat},\mathtt{FW},\mathtt{FSAR},\mathtt{Helo},\mathtt{UAV},\mathtt{QD}\\}.$
The primary relationship needed to specify a structural design is assets
carrying other types, so only one kind of interaction is needed: carrying.
This relationship is directed; e.g., a cutter ($\mathtt{Cut}$) can carry a
helicopter ($\mathtt{Helo}$) but not the other way around. Specifying allowed
relationships amounts to specifying pairs of type $(p,p^{\prime})\in P\times
P$ such that type $p^{\prime}$ can carry type $p$; see Fig. 3 for examples.
The data of Fig. 3 is extended to: (1) specify that $\mathtt{Port}$ can carry
all types other than $\mathtt{Port}$, $\mathtt{UAV}$ and $\mathtt{QD}$; (2)
conform to an input file format to declare simple directed or undirected
interactions, e.g., the JSON format in Fig. 7.
{"colors" : ["port", "cut", …, "qd"],
 "directed" : {
  "carrying": {
   "cut": ["port"],
   "boat": ["port", "cut"],
   …,
   "qd": ["cut", …, "helo"] } } }
(a) Network template data to specify the operad ${{\mathcal{O}}}_{Sail}$
(b) Example operation $\mathtt{f}\in{{\mathcal{O}}}_{Sail}$
Figure 7: After specifying ${{\mathcal{O}}}_{Sail}$, $\mathtt{f}$ places a
$\mathtt{QD}$ ($\bullet$) on a $\mathtt{Cut}$ ($\bullet$) and another
$\mathtt{QD}$ ($\bullet$) on a $\mathtt{Helo}$ ($\bullet$).
If another type of system or kind of interaction is needed, then the file is
appropriately extended. For example, we can include buoys by appending Buoy to
the array of colors and augmenting the relationships in the carrying node. Or,
we can model the undirected (symmetric) relationship of communication by
including an entry such as ‘undirected’: {‘communication’: {‘port‘: [‘cut’,
...], ...}}. Moreover, modifications to network templates–such as ignoring
(undirected : communication) or combining $\mathtt{QD}$ and $\mathtt{UAV}$
into a single type–naturally induce mappings between the associated operads
[6, 5.8].
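A template file like Fig. 7(a) is straightforward to load and check designs against. The sketch below uses an illustrative subset of the template (the lists here are ours, chosen to be consistent with Fig. 3; real CASCADE tooling may differ). Keys of the carrying node are carried types and values are their allowed carriers:

```python
import json

TEMPLATE = json.loads("""
{"colors": ["port", "cut", "boat", "helo", "qd"],
 "directed": {"carrying": {
     "cut":  ["port"],
     "boat": ["port", "cut"],
     "helo": ["port", "cut"],
     "qd":   ["cut", "boat", "helo"]}}}
""")

def may_carry(template, carrier, carried):
    """True iff the template allows `carrier` to carry `carried`."""
    rel = template["directed"]["carrying"]
    return carrier in rel.get(carried, [])

assert may_carry(TEMPLATE, "cut", "helo")      # a cutter can carry a helo
assert not may_carry(TEMPLATE, "helo", "cut")  # carrying is directed
```

Extending the model to buoys or to an undirected communication relationship, as described above, amounts to adding entries to this file rather than changing code.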
### 4.3 Cooking with algebras
Because all designs are generated from primitive operations to add edges, it
is sufficient to define how primitive operations act in order to define an
algebra. For the sailboat problem, semantics are oriented to enable the
delivery of a high capacity for search—known in the literature as search
effort [97, 3.1]—in a timely manner. Given key parameters for each asset–e.g.
speed, endurance, search efficiency across kinds of target and conditions,
parent platform, initial locations–and descriptions of the search
environment–e.g. expected search distribution, its approximate evolution over
time–the expected number of surviving crew members found by the system can be
estimated [97, Ch. 3].
Among these data, the parent platform and initial locations vary within a
scenario and the rest describe the semantics of a given scenario. In fact, we
assume all platforms must trace their geographical location to one of a small
number of base locations, so that the system responds from bases, but is
organized to support rapid search. Once bases are selected, the decision
problem is a choice of operation: what to bring (type of the composite system)
and how to organize it (operation to carry atomic systems). Data for the
operational context specifies a particular algebra; see, e.g., Table 2. Just
as for the operad, this data is lightweight and configurable.
Table 2: Example properties captured in the algebra for the sailboat problem, including time on station (ToS), speed for search (S) and max speed (R), and sweep widths measuring search efficiency for target types person in water (PIW), crew in raft (CIR) and demasted sailboat (DS) adrift.
Type | Cost ($) | ToS (hr) | Speed (kn): S R | Sweep Width (nmi): PIW CIR DS
---|---|---|---|---
$\mathtt{Cut}$ | 200M | $\infty$ | 11 28 | 0.5 4.7 8.5
$\mathtt{Boat}$ | 500K | 6 | 22 35 | 0.4 4.2 7.5
$\mathtt{FW}$ | 60M | 9 | 180 220 | 0.1 2.2 7.6
$\mathtt{FSAR}$ | 72M | 10 | 180 235 | 0.5 12.1 16.6
$\mathtt{Helo}$ | 9M | 4 | 90 180 | 0.5 1.5 4.8
$\mathtt{UAV}$ | 250K | 3 | 30 45 | 0.5 1.8 4.5
$\mathtt{QD}$ | 15K | 4 | 35 52 | 0.5 1.5 4.8
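To illustrate how such algebra data feeds an estimate, a rough area-coverage proxy (our simplification; the full treatment is in [97, Ch. 3]) multiplies search speed, time on station, and sweep width:

```python
# (search speed S in kn, time on station in hr, PIW sweep width in nmi),
# taken from Table 2 for a few asset types.
ASSETS = {
    "Boat": (22, 6, 0.4),
    "Helo": (90, 4, 0.5),
    "UAV":  (30, 3, 0.5),
    "QD":   (35, 4, 0.5),
}

def coverage(asset):
    """Classical search-theory proxy: area swept = S * ToS * sweep width."""
    s, tos, w = ASSETS[asset]
    return s * tos * w

assert coverage("Helo") == 180      # 90 kn * 4 hr * 0.5 nmi = 180 nmi^2
assert coverage("Boat") > coverage("UAV")   # 52.8 vs 45 nmi^2
```

The decision problem is then to choose an operation whose composite maximizes such a measure subject to carrying constraints and base locations.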
Related cookbook approaches. Though we emphasized network operads, the
generators approach is often studied and lends itself to encoding such
combinatorial data with a “template,” in a cookbook fashion. The generators
approach to “wiring” has been developed into a theory of hypergraph categories
[38, 41], which induce wiring diagram operads. Explicit presentations for
various wiring diagram operads are given in [104]. Augmenting monoidal
categories with combinatorially specified data has also been investigated,
e.g. in [42].
## 5 Functorial Systems Analysis
In this section we demonstrate the use of functorial semantics in systems
analysis. As in 2.4, a functor establishes a relationship between a syntactic
or combinatorial model of a system (components, architecture) and some
computational refinement of that description. This provides a means to
consider a given system from different perspectives, and also to relate those
viewpoints to one another. To drive the discussion, we will focus on the
Length Scale Interferometer (LSI) and its wiring diagram model introduced in
3.2.
### 5.1 Wiring diagrams
Operads can be applied to organize both qualitative and quantitative
descriptions of hierarchical systems. Because operations can be built up
iteratively from simpler ones to specify a complete design, different ways to
build up a given design provide distinct avenues for analysis.
Figure 4 shows a wiring diagram representation of a precision measurement
instrument called the Length Scale Interferometer (LSI) designed and operated
by the US National Institute of Standards and Technology (NIST). Object types
are system or component boundaries; Fig. 4 has: 6 components, the exterior,
and 4 interior boundaries. Each boundary has an interface specifying its
possible interactions, which are implicit in Fig. 4, but define explicit types
in the operad.
An operation in this context represents one step in a hierarchical
decomposition, as in 2.1. For example, the blue boxes in Fig. 4 represent a
functional decomposition of the LSI into length-measurement and temperature-
regulation subsystems:
$\mathtt{f}\colon\mathtt{LengthSys},\mathtt{TempSys}\to\mathtt{LSI}$. These
are coupled via (the index of refraction of) a $\mathtt{laser}$ interaction
and linked to interactions at the system boundary. The operation $\mathtt{f}$
specifies the connections between blue and black boundaries.
Composition in a wiring diagram operad is defined by nesting. For this
functional decomposition, two further decompositions $\mathtt{l}$ and
$\mathtt{t}$ describe the components and interactions within
$\mathtt{LengthSys}$ and $\mathtt{TempSys}$, respectively. The wiring diagram
in Fig. 4 is the composite $\mathtt{f}(\mathtt{l},\mathtt{t})$.
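The nesting can be mimicked with a toy term representation (purely illustrative; the tuple encoding is ours): an operation is a parent boundary with an ordered list of child boundaries, and composition substitutes decompositions for the matching children:

```python
def op(parent, children):
    """An operation: parent boundary with its child boundaries.
    A leaf child is just its name (a string)."""
    return (parent, list(children))

def compose(f, args):
    """Nest a decomposition into each child of f, leaving other leaves alone.
    args maps a child boundary name to its own decomposition."""
    parent, children = f
    return (parent, [args.get(c, c) for c in children])

# The functional decomposition of the LSI from Fig. 4 and Table 3.
f = op("LSI", ["LengthSys", "TempSys"])
l = op("LengthSys", ["Chassis", "Optics", "Intfr"])
t = op("TempSys", ["Lab", "Box", "Bath"])

flt = compose(f, {"LengthSys": l, "TempSys": t})
assert flt == ("LSI", [("LengthSys", ["Chassis", "Optics", "Intfr"]),
                       ("TempSys", ["Lab", "Box", "Bath"])])
```

Real wiring diagram operads also track the wires between boundaries, which this term sketch omits.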
This approach cleanly handles multiple decompositions. Here the red boxes
define a second, control-theoretic decomposition
$\mathtt{g}:\mathtt{Sensors},\mathtt{Actuators}\to\mathtt{LSI}$.
Unsurprisingly, the system is tightly coupled from this viewpoint, with heat
flow to maintain the desired temperature, mechanical action to modify the path
of the laser, and a feedback loop to maintain the position of the optical
focus based on measured intensity. The fact that these two viewpoints specify
the _same_ system design is expressed by the equation:
$\mathtt{f}(\mathtt{l},\mathtt{t})=\mathtt{g}(\mathtt{s},\mathtt{a})$; see
2.3 for related discussion.
### 5.2 A probabilistic functor
Wiring diagrams can be applied to document, organize and validate a wide
variety of system-specific analytic models. Each model is codified as an
algebra, a functor from syntax to semantics (2.4). For the example of this
section, all models have the same source (syntax), indicating that we are
considering the same system, but the target semantics vary by application. We
have already seen some functorial models: the algebras in 4.3. These can be
interpreted as functors from the carrying operad ${\mathcal{O}}_{Sail}$ to the
operad of sets and functions $\mathbf{Set}$. Though $\mathbf{Set}$ is the
“default” target for operad algebras, there are many alternative semantic
contexts tailored to different types of analysis. Here we target an operad of
probabilities $\mathbf{Prob}$, providing a simple model of nondeterministic
component failure.
The data for the functor is shown in Table 3. Model data is indexed by
operations (types and operations, more generally, but the types carry no data
in this simple example) in the domain, an operad $\mathcal{W}$ extracted from
the wiring diagram in Fig. 4. The functor assigns each operation to a
probability distribution that specifies the chance of a failure in each
subsystem, assuming some error within the super-system. For example, the
length measurement and temperature regulation subsystems are responsible for
40% and 60% of errors in the LSI, respectively. This defines a Bernoulli
distribution $P_{\mathtt{f}}$. Similarly, the decomposition $\mathtt{t}$ of
the temperature system defines a categorical distribution with 3 outcomes:
$\mathtt{Box}$, $\mathtt{Bath}$ and $\mathtt{Lab}$.
Relative probabilities compose by multiplication. This allows us to compute
more complex distributions for nested diagrams. For the operation shown in
Fig. 4, this indicates that the bath leads to nearly half of all errors
($60\%\times 80\%=48\%$) in the system.
Operad equations must be preserved in the semantics. Since
$\mathtt{f}(\mathtt{l},\mathtt{t})=\mathtt{g}(\mathtt{s},\mathtt{a})$, failure
probabilities of source components don’t depend on whether we think of them in
terms of functionality or control. For the bath, this relative failure
probability is
$\overbrace{60\%}^{P_{\mathtt{f}}}\times\overbrace{80\%}^{P_{\mathtt{t}}}=48\%=\overbrace{72\%}^{P_{\mathtt{g}}}\times\overbrace{66.7\%}^{P_{\mathtt{a}}},$
and five analogous equations hold for the other source components.
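As a sanity check, the composition rule and these coherence equations can be verified directly on the Table 3 data. The following is a sketch, not code from the paper; the tolerance absorbs the table's rounding, and the two halves of the table label the Box source differently (`bx` vs. `bt`).

```python
# Checking functoriality of the Table 3 failure probabilities: relative
# probabilities compose by multiplication, and both decompositions
# f(l, t) = g(s, a) must assign the same probability to each source component.
P_f = {"ls": 0.40, "ts": 0.60}            # LSI = length system + temp system
P_l = {"in": 0.10, "op": 0.30, "ch": 0.60}
P_t = {"ba": 0.80, "bx": 0.10, "lb": 0.10}

P_g = {"sn": 0.28, "ac": 0.72}            # LSI = sensing + actuation
P_s = {"lb": 0.214, "bt": 0.214, "op": 0.429, "in": 0.143}
P_a = {"ch": 1 / 3, "ba": 2 / 3}

def compose(top, bottoms):
    """Multiply a top-level distribution with per-subsystem distributions."""
    out = {}
    for sub, p in top.items():
        for src, q in bottoms[sub].items():
            out[src] = out.get(src, 0.0) + p * q
    return out

via_f = compose(P_f, {"ls": P_l, "ts": P_t})
via_g = compose(P_g, {"sn": P_s, "ac": P_a})
via_g["bx"] = via_g.pop("bt")   # align the two tables' labels for the Box source

# The six coherence equations hold up to the table's rounding.
for src in via_f:
    assert abs(via_f[src] - via_g[src]) < 0.005, src
```

For the bath, `via_f["ba"]` is $0.6\times 0.8=0.48$, matching $0.72\times\frac{2}{3}$ on the other side.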
Functorial semantics separates concerns: different operad algebras answer
different questions. Here we considered _if_ a component will fail. The LSI
example is developed further in [20, 4] by a second algebra describing _how_ a
component might fail, with Boolean causal models to propagate failures. The
two perspectives are complementary, and loc. cit. explores integrating them
with algebra homomorphisms (§2.4).
Table 3: Failure probabilities form an operad algebra for LSI component failure.

$P_{\mathtt{f}}$: $ls\mapsto 40\%$, $ts\mapsto 60\%$
$P_{\mathtt{g}}$: $sn\mapsto 28\%$, $ac\mapsto 72\%$
$P_{\mathtt{l}}$: $in\mapsto 10\%$, $\mathtt{op}\mapsto 30\%$, $ch\mapsto 60\%$
$P_{\mathtt{s}}$: $lb\mapsto 21.4\%$, $bt\mapsto 21.4\%$, $\mathtt{op}\mapsto 42.9\%$, $in\mapsto 14.3\%$
$P_{\mathtt{t}}$: $ba\mapsto 80\%$, $bx\mapsto 10\%$, $lb\mapsto 10\%$
$P_{\mathtt{a}}$: $ch\mapsto 33.3\%$, $ba\mapsto 66.7\%$
### 5.3 Interacting semantics
Its toy-example simplicity aside, the formulation of a failure model
$\mathcal{W}\to\mathbf{Prob}$ as in Table 3 is limited in at least two
respects. First, it tells us _which_ components fail, but not _how_ or _why_.
Second, the model is static, but system diagnosis is nearly always a dynamic
process. We give a high-level sketch of an extended analysis to illustrate the
integration of overlapping functorial models.
The first step is to characterize some additional information about the types
in $\mathcal{W}$ (i.e., system boundaries). We start with the dual notions of
_requirements_ and _failure modes_. For example, in the temperature regulation
subsystem of the LSI we have
$\begin{array}[]{ccc}T_{\mathtt{laser}}\leq 20.02^{\circ}\textrm{C}&\leftrightarrow&T_{\mathtt{laser}}\textsf{ too high}\\ 19.98^{\circ}\textrm{C}\leq T_{\mathtt{laser}}&\leftrightarrow&T_{\mathtt{laser}}\textsf{ too low}\\ \vdots&&\vdots\end{array}$
Requirements at different levels of decomposition are linked by traceability
relations. These subsystem requirements trace up to the measurement
uncertainty for the LSI as a whole. Dually, an out-of-band temperature at the
subsystem level can be traced back to a bad measurement in the $\mathtt{Box}$
enclosure, a short in the $\mathtt{Bath}$ heater or fluctuations in the
$\mathtt{Lab}$ environment.
Traceability is compositional: requirements decompose and failures bubble up.
This defines an operad algebra $\mathsf{Req}:\mathcal{W}\to\mathbf{Rel}^{+}$.
(Many operads are defined from ordinary categories using a symmetric monoidal
product [56, 7]. If a category carries more than one product, we use a
superscript to indicate which is in use. The disjoint union $+$ corresponds to
the disjunctive composition "a failure in one component _or_ another"; soon we
will use the Cartesian product $\times$ to consider the conjunctive
relationship "the state of one component _and_ the other".) Functoriality
expresses the composition of traceability requirements across levels. See
[20, 5] for discussion of how to link these relations with the Table 3 data.
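A minimal sketch of this disjunctive composition, modeling traceability relations as sets of pairs; the failure-mode and requirement names below are invented stand-ins for the LSI's traceability data.

```python
# Traceability as relations: a relation links component failure modes to the
# subsystem requirement they violate; relational composition expresses that
# failures "bubble up" across levels of decomposition.

# Failure modes traced to a subsystem requirement (illustrative names only).
trace_temp = {("Box.bad_measurement", "T_laser out of band"),
              ("Bath.heater_short", "T_laser out of band"),
              ("Lab.fluctuation", "T_laser out of band")}

# Subsystem requirements traced to the top-level requirement.
trace_lsi = {("T_laser out of band", "measurement uncertainty exceeded")}

def rel_compose(r, s):
    """Ordinary relational composition: (a, c) iff (a, b) in r and (b, c) in s."""
    return {(a, c) for a, b in r for b2, c in s if b == b2}

# Functoriality: composing levels of traceability gives end-to-end traceability.
end_to_end = rel_compose(trace_temp, trace_lsi)
```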
For dynamics, we need _state_. We start with a state space for each
interaction among components. For example, consider the $\mathtt{laser}$
interaction coupling $\mathtt{Chassis}$, $\mathtt{Intfr}$, and $\mathtt{Box}$.
The most relevant features of the laser are its vacuum wavelength
$\lambda_{0}$ and the ambient temperature, pressure and humidity (needed to
correct for refraction). This corresponds to a four-dimensional state-space
(or a subset thereof)
$\mathsf{State}(\mathtt{laser})\cong\overbrace{[-273.15,\infty)}^{T_{\mathtt{laser}}}\times\overbrace{[0,\infty)}^{P_{\mathtt{laser}}}\times\overbrace{[0,1]}^{RH_{\mathtt{laser}}}\times\overbrace{[0,\infty)}^{\lambda_{0}}\subseteq\mathbb{R}^{4}.$
A larger product defines an _external state space_ at each system boundary
$\begin{array}[]{rl}\mathsf{State}(\mathtt{TempSys})=&\mathsf{State}(\mathtt{laser})\times\mathsf{State}(\mathtt{temp})^{2}\times\mathsf{State}(\mathtt{setPt})\times\mathsf{State}(\mathtt{H_{2}O})\\ \mathsf{State}(\mathtt{Box})=&\mathsf{State}(\mathtt{laser})\times\mathsf{State}(\mathtt{temp})\times\mathsf{State}(\mathtt{heat})^{2}\\ \vdots\end{array}$
Similarly, we can define an _internal state space_ for each operation by
taking the product over all the interactions that appear in that diagram. We
can decompose the internal state space in terms of either the system boundary
or the components (coupled variables are formalized through a partial product
called the pullback, a common generalization of the Cartesian product, subset
intersection and inverse image constructions):
$\begin{array}[]{rcl}\mathsf{State}(\mathtt{f})&\cong&\mathsf{State}(\mathtt{LSI})\times\overbrace{\mathsf{State}(\mathtt{laser})}^{\textrm{hidden variable}}\\ &\cong&\mathsf{State}(\mathtt{LengthSys})\times_{\underbrace{\mathsf{State}(\mathtt{laser})}_{\textrm{coupled variable}}}\mathsf{State}(\mathtt{TempSys})\end{array}$
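A toy illustration of the pullback decomposition, using small invented discretizations of the state variables (every name and value here is hypothetical): the internal state space of $\mathtt{f}$ consists of pairs of subsystem states that agree on the shared laser coordinates.

```python
# Pullback of state spaces over the shared laser variables (toy discretization).
from itertools import product

laser_states = ["nominal", "drift"]
length_only = ["locked", "unlocked"]   # hypothetical extra LengthSys variables
temp_only = ["20.00C", "20.05C"]       # hypothetical extra TempSys variables

length_sys = list(product(length_only, laser_states))   # (own state, shared laser)
temp_sys = list(product(laser_states, temp_only))       # (shared laser, own state)

# Pullback over State(laser): keep only pairs whose laser coordinates match.
state_f = [(ls, ts) for ls in length_sys for ts in temp_sys if ls[1] == ts[0]]
```

Note that the pullback has 8 states, half the size of the full Cartesian product of the two subsystem spaces, because the shared laser coordinate is counted once.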
The projections from these (partial) products form a relation, and these
compose to define a functor $\mathcal{W}\to\mathbf{Rel}^{\times}$:
[Diagram omitted: the operation $\mathtt{f}\in\mathcal{W}$, from $\mathtt{LengthSys},\mathtt{TempSys}$ to $\mathtt{LSI}$, is sent to the span $\mathsf{State}(\mathtt{LSI})\xleftarrow{p_{0}}\mathsf{State}(\mathtt{f})\xrightarrow{\langle p_{1},p_{2}\rangle}\mathsf{State}(\mathtt{LengthSys})\times\mathsf{State}(\mathtt{TempSys})$ in $\mathbf{Rel}^{\times}$. A joint state $s\in\mathsf{State}(\mathtt{f})$ lists values for every interaction ($\mathtt{fringe}$, $\mathtt{laser}$, $\mathtt{H_{2}O}$, …); $p_{0}$ keeps the boundary variables, while $p_{1}$ and $p_{2}$ copy the shared $\mathtt{laser}$ coordinates into both subsystem states.]
Each requirement $R\in\mathsf{Req}(\mathtt{X})$ defines a subset
$|R|\subseteq\mathsf{State}(\mathtt{X})$, and a state is _valid_ if it
satisfies all the requirements: $\mathsf{Val}(\mathtt{X})=\bigcap_{R}|R|$.
Using pullbacks (inverse image) we can translate validity to internal state
spaces in two different ways. External validity (left square) checks that a
system satisfies its contracts; joint validity (right square) couples
component requirements to define the allowed joint states.
[Diagram omitted: two pullback squares. On the left, $\mathsf{XVal}(\mathtt{f})$ is the inverse image of $\mathsf{Val}(\mathtt{LSI})$ along $p_{0}\colon\mathsf{State}(\mathtt{f})\to\mathsf{State}(\mathtt{LSI})$; on the right, $\mathsf{JVal}(\mathtt{f})$ is the inverse image of $\mathsf{Val}(\mathtt{LengthSys})\times\mathsf{Val}(\mathtt{TempSys})$ along $\langle p_{1},p_{2}\rangle$, with a dashed arrow $\mathsf{JVal}(\mathtt{f})\to\mathsf{XVal}(\mathtt{f})$ expressing soundness.]
A requirement model is _sound_ if joint validity entails external validity,
corresponding to the dashed arrow above. With some work, one can show that
these diagrams form the operations in an operad of entailments $\mathbf{Ent}$;
see [21, 6] for a similar construction. The intuition is quite clear:
$\begin{array}[]{crcl}&\textrm{component reqs.}&\Rightarrow&\textrm{subsystem
reqs.}\\\ +&\textrm{subsystem reqs.}&\Rightarrow&\textrm{system reqs.}\\\
\hline\cr&\textrm{component reqs.}&\Rightarrow&\textrm{system reqs.}\\\
\end{array}$
There is a functor $\mathsf{Context}:\mathbf{Ent}\to\mathbf{Rel}^{\times}$,
which extracts the relation across the bottom row of each entailment. Noting
that the $\mathsf{State}$ relations occur in the validity entailment, we can
reformulate requirement specification as a _lifting problem_ (Fig. 8(a)):
given functors $\mathsf{State}$ and $\mathsf{Context}$, find a factorization
$\mathsf{Val}$ making the triangle commute. The second and third diagrams
(Fig. 8(b)–8(c)) show how to extend the lifting problem with prior knowledge,
in this case a top-level requirement and a known (e.g., off the shelf)
component capability.
[Figure 8 diagrams omitted: each is a triangle in which $\mathsf{Val}\colon\mathcal{W}\to\mathbf{Ent}$ factors $\mathsf{State}\colon\mathcal{W}\to\mathbf{Rel}^{\times}$ through $\mathsf{Context}\colon\mathbf{Ent}\to\mathbf{Rel}^{\times}$. Diagram (b) additionally fixes the top-level requirement $\lambda_{0}\cdot\mathtt{fringe}=L_{\mathtt{drive}}$ on $\mathtt{LSI}$; diagram (c) fixes a known component capability $\varepsilon_{\mathtt{fringe}}\leq\frac{\lambda_{0}}{8}$ on $\mathtt{Intfr}$.]
(a) Free Specification
(b) Top-down requirement
(c) Bottom-up requirement
Figure 8: Requirement specification expressed as lifting problems.
Finally we are ready to admit dynamics, but it turns out that we have already
done most of the work. All that is needed is to modify the spaces attached to
our interactions. In particular, we can distinguish between static and dynamic
state variables; for the $\mathtt{laser}$, $T$, $P$ and $RH$ are dynamic while
$\lambda_{0}$ is static. Now we replace the static values
$T,P,RH\in\mathbb{R}$ by functions $T(t),P(t),RH(t)\in\mathbb{R}^{\tau}$, thought
of as _trajectories_ through the state space over a timeline $t\in\tau$. For
example, we have
$\mathsf{Traj}(\mathtt{laser})\subseteq\overbrace{(\mathbb{R}^{\tau})^{3}}^{T,P,RH}\times\overbrace{\mathbb{R}}^{\lambda_{0}}.$
From this, we construct $\mathsf{Traj}:\mathcal{W}\to\mathbf{Rel}^{\times}$
using exactly the same recipe as above. Trajectories and states are related by
a pair of algebra homomorphisms
$\mathsf{ev}$ and $\mathsf{const}$. The first picks out an instantaneous state
for each point in time, while the second identifies constant functions, which
describe fixed points of the dynamics. Globally, these relate the functors
$\mathsf{Traj}$ and $\mathsf{State}$; locally, they are maps
$\mathsf{ev}_{\mathtt{LSI}}\colon\mathsf{Traj}(\mathtt{LSI})\times\tau\to\mathsf{State}(\mathtt{LSI})$
and
$\mathsf{const}_{\mathtt{LSI}}\colon\mathsf{State}(\mathtt{LSI})\to\mathsf{Traj}(\mathtt{LSI})$.
The problem is that the state space explodes; function spaces are very large.
Nonetheless, all of the system integration logic is identical, and using the
entailment operad $\mathbf{Ent}$ we can build in additional restrictions to
limit the search space. In particular, we can restrict attention to the subset
of functions that satisfies a particular differential equation or
state-transition relationship. This drastically limits the set of valid
trajectories, though the resulting set may be difficult to characterize and
the methods for exploring it will vary by context.
Related analytic applications. Wiring diagrams have an established applied
literature for system design problems; see, e.g., [CyberWire, Seven, OpWire,
CatSci, SpivakTan, OpenDyn]. More broadly, the analytic strength of category
theory to express compositionality and functorial semantics is explored in
numerous recent applied works, e.g. engineering diagrams [PaLin, Props,
OpenPetriNets, OpenCM, CoyaThesis, Seven, DigitalCircuits], Markov processes
[CompMark, BioMark], database integration [BSW, Seven, CompPow, CatSci,
SpivakKent, SpivakWis, MultiManu], behavioral logic [Seven, ToposSem,
TempType, SheafEvent], natural language processing [FoundNLP, SentNPL,
QuantNL], machine learning [Len, BackProp], cybersecurity [YonedaHack,
SemanticsCyber, CyberWire], quantum computation [OptQuant, ReduceQuant,
Quantomatic] and open games [GameGraph, GameMixed, CompGame].
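The trajectory/state homomorphisms described above are easy to prototype. This is a minimal sketch; the names `ev` and `const` follow the discussion, and the trajectories are illustrative.

```python
# ev samples a trajectory at an instant; const embeds a state as a constant
# trajectory, i.e. a fixed point of the dynamics.
import math

def ev(traj, t):
    """Instantaneous state of a trajectory at time t."""
    return traj(t)

def const(state):
    """A state viewed as a constant trajectory."""
    return lambda t: state

# A dynamic laser-temperature trajectory vs. a constant one.
T_dynamic = lambda t: 20.0 + 0.01 * math.sin(t)
T_fixed = const(20.0)

# ev after const recovers the state: constant trajectories are fixed points.
assert ev(const(19.99), 3.7) == 19.99
```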
## 6 Automated synthesis with network operads
An operad acting on an algebra provides a starting point to automatically
generate and evaluate candidate designs. Formally correct designs (operations
in some operad) combine basic systems (elements of some algebra of that
operad) into a composite system.
### 6.1 Sailboat example
Consider the sailboat problem introduced in §3.1 and revisited in §§4.2–4.3.
Network operads describe assets and ports carrying each other while algebra-
based semantics guided the search for effective designs by capturing the
impact of available search effort. To apply this model to automate design
synthesis, algorithms explored designs within budget constraints based on
costs in Table 2. Exploration iteratively composed designs up to budget
constraints and operational limits on carrying. (Though not used for this
application, it turns out that degree limits, e.g. how many quadcopters a
helicopter can carry, can be encoded directly into operad operations; the
relevant mathematics was worked out in [73].) With these analytic models,
greater sophistication
was not needed; other combinatorial search algorithms–e.g. simulated
annealing–are readily applied to large search spaces. The most effective
designs could ferry a large number of low cost search and rescue units–e.g.
quadcopters ($\mathtt{QD}$)–quickly to the scene–e.g. via helicopters
($\mathtt{Helo}$).
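A minimal sketch of this exploration loop. The costs, effectiveness scores, and carrying capacities below are invented placeholders, not the Table 2 data; only the shape of the search (enumerate within budget and operational limits, rank by effectiveness) mirrors the text.

```python
# Brute-force design exploration over multisets of assets within a budget.
from itertools import product

cost = {"Helo": 50, "QD": 5, "Cut": 30}    # assumed unit costs
value = {"Helo": 8, "QD": 3, "Cut": 6}     # assumed search effectiveness
carry = {"Helo": 4, "QD": 0, "Cut": 2}     # quadcopters each asset can ferry
BUDGET = 100

best = None
for counts in product(range(3), range(12), range(3)):   # (Helo, QD, Cut)
    design = dict(zip(cost, counts))
    c = sum(cost[a] * n for a, n in design.items())
    # Operational limit: every quadcopter must be carried to the scene.
    if c > BUDGET or design["QD"] > sum(carry[a] * design[a] for a in ("Helo", "Cut")):
        continue
    v = sum(value[a] * n for a, n in design.items())
    if best is None or v > best[0]:
        best = (v, design)
```

Under these invented numbers the search returns one helicopter ferrying four quadcopters plus a cutter, echoing the qualitative conclusion above.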
### 6.2 Tasking example
Surprisingly, network operads—originally developed to design systems—can also
be applied to “task” them: in other words, to declare their behavior. An
elegant example of this approach is given in [7], where “catalyst” agents
enable behavioral options for a system.
The SAR tasking problem. The sailboat problem is limited by search: once
sailboat crew members are found, their recovery is relatively straightforward.
In hostile environments, recovery of isolated personnel (IPs) can become very
complex. The challenge is balancing the time criticality of recovery with the
risk to the rescuers by judiciously orchestrating recovery teams. (The
recovery of downed airman Gene Hambleton, call sign Bat 21 Bravo, during the
Vietnam War is a historical example of ill-fated SAR risk management.
Hambleton’s eventual recovery cost five additional aircraft shot down and 11
deaths; for comparison, a total of 71 rescuers and 45 aircraft were lost to
save 3,883 lives during Vietnam War SAR [23].) Consider the potential
challenges of a large scale earthquake during severe drought conditions which
precipitates multiple wildfires over a large area. The 2020 Creek Fire near
Fresno, CA required multiple mass rescue operations (MROs) to rescue over 100
people in each case by pulling in National Guard, Navy and Marine assets to
serve as search and rescue units (SRUs) [52, 62]. Though MRO scenarios are
actively considered by U.S. SAR organizations, the additional challenge of
concurrent MROs distributed over a large area is not typically studied.
In this SAR tasking example, multiple, geographically distributed IP groups
compete for limited SRUs. The potential of coordinating multiple agent
types—e.g., fire fighting airplanes together with helicopters—to jointly
overcome environment risks is considered as well as aerial refueling options
for SRUs to extend their range. Depending on available assets, recovery
demands and risks, a mission plan may need to work around some key agent
types–e.g. refueling assets–and maximize the impact of others–e.g. moving
protective assets between recovery teams.
Under CASCADE, tasking operations were built up from primitive tasks that
coordinate multiple agent types to form a composite task plan. Novel concepts
to coordinate teams of SRUs are readily modeled with full representation of
the diversity of potential mission plan solutions.
Network models for tasking. A network model for tasking defines atomic agent
types $C$ and possible task plans for each list of agent types. Whereas a
network model to design structure
$\mathsf{\Gamma}\colon\mathcal{S}(C)\to\mathbf{Mon}$ has values that are
possible graphical designs, a network model to task behavior
$\mathsf{\Lambda}\colon\mathcal{S}(C)\to\mathbf{Cat}$ has values that are
categories whose morphisms index possible task plans for the assembled types;
compare, e.g., [7, Thm. 9]. Each morphism declares a sequence of tasks for
each agent–many of which will be coordinated with other agents.
If the system consists of only a single UH-60 helicopter, its possible
tasks are captured in $\mathsf{\Lambda}(\mathtt{UH60})$. In this application,
these tasks are paths in a graph describing ‘safe maneuvers.’ For unsafe
maneuvers, UH-60s travel in pairs–or perhaps with escorts such as a HC-130 or
CH-47 equipped with a Modular Airborne Fire Fighting System (MAFFS). Anything
one UH-60 can do, so can two, but not vice versa. Thus there is a proper
inclusion
$\mathsf{\Lambda}(\mathtt{UH60})\times\mathsf{\Lambda}(\mathtt{UH60})\subsetneq\mathsf{\Lambda}(\mathtt{UH60}\otimes\mathtt{UH60})$.
Similarly,
$\mathsf{\Lambda}(\mathtt{UH60})\times\mathsf{\Lambda}(\mathtt{HC130})\subsetneq\mathsf{\Lambda}(\mathtt{UH60}\otimes\mathtt{HC130})$
since once both a UH-60 and HC-130 are present, a joint behavior of midair
refueling of the UH-60 by the HC-130 becomes possible. Formally, these
inclusions are lax structure maps–e.g.
$\Phi_{(\mathtt{UH60},\mathtt{UH60})}\colon\mathsf{\Lambda}(\mathtt{UH60})\times\mathsf{\Lambda}(\mathtt{UH60})\to\mathsf{\Lambda}(\mathtt{UH60}\otimes\mathtt{UH60})$,
specifies: given tasks for a single UH-60 (left coordinate) and tasks for
another UH-60 (right coordinate), define the corresponding joint tasking of
the pair. Here the joint tasking is: each UH-60 operates independently within
the safe graph. On the other hand, tasks in
$\mathsf{\Lambda}(\mathtt{UH60}\otimes\mathtt{UH60})$ to maneuver in unsafe
regions cannot be constructed from independent taskings of each UH-60. Such
tasks must be set for some pair or other allowed team, e.g. a CH-47 teamed
with a UH-60.
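A toy model of the lax structure map: independent solo taskings embed into the joint tasking category, which strictly contains additional coordinated tasks. All task names below are invented.

```python
# Tasks a single UH-60 can perform (safe-graph maneuvers only).
solo = {"a->c (safe)", "b->c (safe)"}

def phi(t1, t2):
    """Lax structure map Phi: two independent solo tasks give one joint task."""
    return ("independent", t1, t2)

# Image of Lambda(UH60) x Lambda(UH60) under Phi.
pairwise = {phi(t1, t2) for t1 in solo for t2 in solo}

# Joint tasks additionally include coordinated unsafe-graph maneuvers,
# which are not in the image of Phi.
joint_pair = pairwise | {("coordinated", "unsafe traverse")}

assert pairwise < joint_pair   # proper inclusion
```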
[Figure 9 graphics omitted: a colored Petri net over places $a,b,c,d$ with transitions $\tau_{1},\ldots,\tau_{4}$ and token colors $\mathtt{UH60}$ and $\mathtt{HC130}$, together with the network-model constraint matrices, e.g.
$M(\mathtt{UH60})=\begin{bmatrix}-1&0&1&0\\ 0&-1&1&0\end{bmatrix},\qquad M^{s}(\mathtt{UH60})=\begin{bmatrix}1&0&0&0\\ 0&1&0&0\end{bmatrix}.$]
(a) Four primitive tasks specified in a Petri net; arcs indicate types involved in each task.
(b) More primitive tasks become possible as available agent types increase. Type update matrices $M(-)$ and target-to-source constraint matrices $M^{s}(-)$ translate type changing and matching, respectively.
Figure 9: Specified primitive tasks determine an operad ${\mathcal{O}}_{SAR}$ and a constraint program to explore operations.
Applying the cookbook: operads. While the above discussion sketches how to
specify a network model for tasking, which constructs a network operad [6],
the precise details [37] need not concern the applied practitioner. (That is,
a Petri net specifies the network model
$\mathsf{\Lambda}\colon\mathcal{S}(C)\to\mathbf{Cat}$ to task behavior. The
construction of $\mathsf{\Lambda}$ [37] is similar to the construction
described in [7, Thm. 9], but adapted to colored Petri nets whose transitions
preserve the number of tokens of each color; see, e.g., Fig. 9(a). Compared to
[7, Thm. 9], $C$ corresponds to token colors, rather than catalysts [7, Def.
6], and species index discrete coordination locations. Target categories
encode allowed paths for each atomic agent type; e.g., for Fig. 9(a),
$\mathsf{\Lambda}(\mathtt{UH60})$ is (freely) generated by objects
$\{a,b,c,d\}$ and morphisms $\tau_{1}\colon a\to c$ and
$\tau_{2}\colon a\to c$, whereas $\mathsf{\Lambda}(\mathtt{HC130})$ is
generated by just $\{a,b,c,d\}$ since no transition involves a single
$\mathtt{HC130}$. By describing each target category as an appropriate
subcategory of a product of path categories, the symmetric group action is
given by permuting coordinates, which allows the role of each atomic agent in
a task to be specified.) It is sufficient to provide a Petri net as a
template, from which a network operad is constructed. Whereas a template to
design structures defines the basic ways system types can interact, a
template to task behavior defines the primitive tasks for agent types $C$,
which are token colors in the Petri net.
No specification of ‘staying put’ tasks is needed; these are implicit. All
other primitive tasks are (sparsely) declared. For example, each edge of the
‘safe graph’ for a solo UH-60 declares: (1) a single agent of type
$\mathtt{UH60}$ participates in this ‘traverse edge’ task; and (2)
participation is possible if a $\mathtt{UH60}$ is available at the source of
the edge. Likewise, each edge of the ‘unsafe graph’ for pairs of UH-60s should
declare similar information for pairs, but what about operations to refuel a
UH-60 with an HC-130? It turns out that transitions in a Petri net carry
sufficient data [7, 37] and have a successful history of specifying generators
for a monoidal category [9, 72, 8]. The Petri net Fig. 9(a) shows examples
where, for simplicity, tasks to traverse edges are only shown in the left to
right direction. This sparse declaration is readily extended–e.g. to add
recovery-focused CH-47s, which tested their operational limits to rescue as
many as 46 people during the 2020 Creek Fire–$C$ and the set of transitions
are augmented to encode the new options for primitive tasks.
This specification of syntax is almost sufficient for the SAR tasking problem
and would be for situations where only the sequence of tasks for each agent
needs to be planned. When tasking SAR agents, _when_ tasks are performed is
semantically important because where and how long air-based agents ‘stay put’
impacts success: (1) fuel burned varies dramatically for ground vs. air
locations; (2) risk incurred varies dramatically for safe vs. unsafe
locations. For comparison, in a ground-based domain without environmental
costs, these considerations might be approximately invariant relative to the
time tasks occur, and therefore, can be omitted from tasking syntax.
Timing information creates little added burden for building a
template–transitions declaring primitive tasks need only be given durations
derivable from scenario data–and it is technically straightforward to add a
time dimension to the network model.
Constraints from syntax. A direct translation of primitive tasks to decision
variables for a constraint program is possible. For syntax, the idea is very
simple: enforce type matching constraints on composing operad morphisms. Here
we will briefly indicate the original mixed integer linear program developed
for SAR tasking; later this formulation was reworked to leverage the
scheduling toolkit of the CPLEX optimization software package.
To illustrate the concept, let us first consider the constraint program for an
operad to plan tasks without time and then add the time dimension.¹⁰ Operad
types are translated to boolean vectors $m_{j}$–whose entries capture
individual agents at discrete coordination locations. Parallel composition of
primitive operations is expressed with boolean vectors $\Sigma_{j}$ indexed
over primitive tasks for specific agents. Type vectors $m_{j}$ indicate the
coordination location of each agent with value one; operation vectors
$\Sigma_{j}$ indicate which tasks are planned in parallel.
¹⁰ Simply increasing dimensionality is not computationally wise–which was the
point of exploring the CPLEX scheduling toolkit to address the time
dimension–but this model still serves as a conceptual reference point.
Assuming an operation with task vector $\Sigma_{j}$ and source vector $m_{j}$,
the target is $m_{j+1}=m_{j}+M\Sigma_{j}$ where $M$ describes the relationship
between source and target for primitive tasks. Rows of $M$ correspond to
primitive tasks while columns correspond to individual agents. The target to
source constraint for a single step of in-series composition is $m_{j+1}\geq
M^{s}\Sigma_{j+1}$, where $M^{s}$ has rows that give the requirements for each
primitive task. Here the LHS describes the target and the RHS describes the
source. The inequality allows for implicit identities for agents without
tasking–e.g. if $\Sigma_{j}$ is a zero vector, then $m_{j+1}=m_{j}$.
This constraint prevents an individual agent from being assigned conflicting
tasks or ‘teleporting’ to begin a task.
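These type-matching constraints can be made concrete. Below is a minimal
Python/NumPy sketch using the simplified single-UH-60 tasks of Fig. 9(a); the
matrix index conventions are adapted so the shapes line up, and the data are
illustrative rather than taken from the paper:

```python
import numpy as np

# Locations a, b, c, d for a single UH60; the boolean state vector m marks
# where the agent is. Data follow the simplified Fig. 9(a).
LOC = {"a": 0, "b": 1, "c": 2, "d": 3}

# Columns of M index primitive tasks, rows index agent-location pairs, so that
# m_{j+1} = m_j + M @ Sigma_j. Both tau1 and tau2 move the agent a -> c.
M = np.zeros((4, 2), dtype=int)
M[LOC["a"], 0], M[LOC["c"], 0] = -1, +1   # tau1: leave a, arrive at c
M[LOC["a"], 1], M[LOC["c"], 1] = -1, +1   # tau2: an alternative a -> c task

# Column t of Ms gives the source requirement to begin task t.
Ms = np.zeros((4, 2), dtype=int)
Ms[LOC["a"], 0] = Ms[LOC["a"], 1] = 1     # both tasks require the agent at a

def step(m, sigma):
    """Apply one layer of parallel tasks; return None if infeasible."""
    if not np.all(m >= Ms @ sigma):       # target-to-source constraint
        return None                       # agent double-booked or 'teleporting'
    return m + M @ sigma                  # composed target type

m0 = np.array([1, 0, 0, 0])               # UH60 available at a
m1 = step(m0, np.array([1, 0]))           # plan tau1
assert m1 is not None and m1[LOC["c"]] == 1
assert step(m0, np.array([1, 1])) is None  # conflicting tasks are rejected
```

Note that if $\Sigma_{j}$ is the zero vector, `step` returns `m` unchanged,
matching the implicit identities for untasked agents.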
As seen in Fig. 9(b), additional agents: (1) enable more primitive tasks,
indexed by Petri net transitions (top two rows); and (2) expand the type
vector/matrix column dimension to account for new agent-location pairs and
increase the matrix row dimension to account for new tasks (bottom two rows).
For example, the first 4 rows of $M(\mathtt{UH60}\otimes\mathtt{UH60})$
correspond to the image of
$\mathsf{\Lambda}(\mathtt{UH60})\times\mathsf{\Lambda}(\mathtt{UH60})$ in
$\mathsf{\Lambda}(\mathtt{UH60}\otimes\mathtt{UH60})$. The last row
corresponds to a new task, $\tau_{4}$, for the available pair of UH-60s.
During implementation, the constraints can be declared task-by-task/row-by-row
to sparsely couple the involved agents. Once a limit on the number of steps of
in-series composition is set–i.e. a bound for the index $j$ is given–a finite
constraint program is determined.
Time is readily modeled discretely with tasks given integer durations. This
corresponds to a more detailed network model, $\mathsf{\Lambda}_{t}$, whose
types include a discrete time index; see Fig. 5(b) for example operations.
Under these assumptions, one simply replaces the abstract steps of in-series
composition with a time index and decomposes $M$ and $\Sigma_{j}$ by the
duration $d$ of primitive tasks:
$m_{t}+\sum_{d=1}^{d_{\max}}M_{d}\Sigma_{t-d,d}=m_{t+1};~{}~{}~{}~{}m_{t+1}\geq\sum_{d=1}^{d_{\max}}M^{s}_{d}\Sigma_{t+1,d}$
so that $\Sigma_{t,d}$ describes tasks beginning at time $t$; the inequality
allows for ‘waiting’ operations. One can also model tasks more coarsely–with
$\mathsf{\Lambda}_{\bullet}\colon\mathbb{N}(C)\to\mathbf{Cat}$–to construct an
operad to task counts of agents without individual identity. Then, the type
vectors $m_{j}$ (resp., operation vectors $\Sigma_{j}$) have integer entries
to express agent counts (resp., counts of planned tasks) with corresponding
reductions in dimensionality. These three levels of network models
$\mathsf{\Lambda}_{t}\colon\mathcal{S}(C)\to\mathbf{Cat};\qquad\mathsf{\Lambda}\colon\mathcal{S}(C)\to\mathbf{Cat};\qquad\mathsf{\Lambda}_{\bullet}\colon\mathbb{N}(C)\to\mathbf{Cat}$
naturally induce morphisms of network operads¹¹ [6, 6.18] and encode mappings
of syntactic variables that preserve feasibility. In particular, the top two
levels describe a precise mapping from task scheduling (highest) to task
planning (middle). The lowest level $\mathsf{\Lambda}_{\bullet}$ forgets the
individual identity of agents, providing a coarser level for planning.
¹¹ Strictly speaking, the coarsest (lowest) level is not a network model; its
domain is a free commutative monoidal category. Nevertheless, a completely
analogous construction produces a typed operad fitting into this diagram.
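The timed constraints can be sketched in the same style. The toy below (two
locations and one traverse task of integer duration 2; all data illustrative)
rolls the state forward by $m_{t+1}=m_{t}+\sum_{d}M_{d}\Sigma_{t-d,d}$ and
checks the source inequality, which permits waiting:

```python
import numpy as np

# Toy timed model: two locations (a, c) and one traverse task a -> c of
# duration 2. M_d / Ms_d decompose effects and source requirements by
# duration d; sigma[(t, d)] flags duration-d tasks that begin at time t.
D_MAX = 2
ZERO = np.zeros(2)
M = {2: np.array([[-1.0], [+1.0]])}      # completion effect of the duration-2 task
Ms = {2: np.array([[1.0], [0.0]])}       # to begin, the agent must be at a

def roll_forward(m0, sigma, horizon):
    """Advance m_t; return None if a task begins without its source available."""
    m = m0.astype(float)
    for t in range(horizon):
        need = sum((Ms[d] @ sigma[(t, d)] for d in Ms if (t, d) in sigma), ZERO)
        if not np.all(m >= need):         # source inequality (allows waiting)
            return None
        # effects of tasks completing now: m_{t+1} = m_t + sum_d M_d Sigma_{t-d,d}
        m = m + sum((M[d] @ sigma[(t - d, d)] for d in M if (t - d, d) in sigma), ZERO)
    return m

m0 = np.array([1.0, 0.0])                 # agent at a
plan = {(0, 2): np.array([1.0])}          # begin the traverse at t = 0
m_end = roll_forward(m0, plan, horizon=3)
assert m_end is not None and list(m_end) == [0.0, 1.0]
```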
This very simple idea of enforcing type matching constraints is inherently
natural.¹² However, further research is needed to determine if this natural
hierarchical structure can be exploited by algorithms–e.g. by branching over
pre-images of solutions to coarser levels–perhaps for domains where
operational constraints coming from algebras are merely a nuisance, as opposed
to being a central challenge, as for SAR planning. For instance, a precise
meta-model for planning and scheduling provides a common jumping-off point to
apply algorithms from those two disciplines.
¹² I.e., operad morphisms push forward feasible assignments of variables in
the domain to feasible assignments in the codomain.
Applying the cookbook: algebras. Because the operad template defines
generating operations, specifying algebras involves: (1) capturing the salient
features of each agent type as its internal state; and (2) specifying how
these states update under generating morphisms–including, for operads with
time, the implicit ‘waiting’ operations. For the SAR tasking problem, the
salient features are fuel level and cumulative probability of survival
throughout the mission. Typical primitive operations will not increase these
values; fuel is expended or some risk is incurred. The notable exception is
refueling operations which return the fuel level of the receiver to maximum.
By specifying the non-increasing rate for each agent–location pair, the action
of ‘waiting’ operations are specified. In practice, these data are derivable
from environmental data for a scenario so that end users can manipulate them
indirectly.
Operational constraints from algebras. Salient features of each agent type are
captured as auxiliary variables determined by syntactic decision variables.
The values of algebra variables are constrained by update equations–e.g. to
update fuel levels for agents with $\min(f_{j}+F\Sigma_{j},f_{\max})=f_{j+1}$,
where $f_{\max}$ specifies max fuel capacities. Having expressed the semantics
for generating operations, one can enforce additional operational
constraints–e.g. safe fuel levels: $f_{j+1}\geq f_{\min}.$
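As a concrete sketch of the algebra-side update (the fuel figures, task
effects, and thresholds below are made up for illustration, not scenario
data):

```python
import numpy as np

# Hypothetical per-task fuel effects for one agent: F[t] is the fuel delta of
# task t. Two traverse tasks burn fuel; the third is a refuel task.
F = np.array([-30.0, -30.0, 1000.0])
F_MAX, F_MIN = 100.0, 10.0             # capacity and safe-operating floor

def update_fuel(f, sigma):
    """Algebra-side update paired with a syntactic step: cap fuel at capacity,
    then enforce the operational constraint f >= F_MIN."""
    f_next = min(f + float(F @ sigma), F_MAX)  # refueling saturates at capacity
    if f_next < F_MIN:
        raise ValueError("operational constraint violated: unsafe fuel level")
    return f_next

assert update_fuel(80.0, np.array([1.0, 0.0, 0.0])) == 50.0   # burn 30
assert update_fuel(50.0, np.array([0.0, 0.0, 1.0])) == F_MAX  # refuel saturates
```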
Extending the domain of application. As noted above, this sparse declaration
of a tasking domain is readily extended–e.g. to add a new atomic type or new
ways for agents to coordinate. Syntactically, this amounts to new elements
of $C$ or transitions to define primitive tasks. Semantics must capture the
impact of primitive operations on state, which can be roughly estimated
initially and later refined. This flexibility is especially useful for rapid
prototyping of ‘what if’ options for asset types and behaviors, as the
wildfire SAR tasking problem illustrates.
Suppose, for example, that we wanted to model a joint SAR and fire fighting
problem. Both domains are naturally expressed with network operads to task
behavior. Even if the specification formats were independently developed: (1)
each format must encode the essential combinatorial data for each domain; and
(2) category theory provides a method to integrate domain data: construct a
pushout. Analogous to taking the union of two sets along a common
intersection, one identifies the part of the problem common to both
domains–e.g. MAFFS-equipped HC-130s and their associated tasks appearing in
both domains–to construct a cross-domain model
$\mathtt{Spec_{\cap}}\to\mathtt{Spec_{SAR}},\qquad\mathtt{Spec_{\cap}}\to\mathtt{Spec_{FF}},\qquad\mathtt{Spec_{SAR}}\to\mathtt{Spec_{\cup}}\leftarrow\mathtt{Spec_{FF}}.$
The arrows in this diagram account for translating the file format for the
overlap into each domain-specific format and choosing a specific output format
for cross-domain data.
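A minimal sketch of this pushout-style integration: glue two domain specs
along their shared overlap so common data is identified rather than
duplicated. The spec format, agent names, and tasks below are hypothetical,
loosely echoing the JSON templates of Fig. 10:

```python
# Overlap: the HC130 type and its refuel task appear in both domains.
spec_overlap = {"colors": {"HC130"}, "tasks": {"refuel"}}
spec_sar = {"colors": {"UH60", "HC130"}, "tasks": {"traverse", "refuel"}}
spec_ff = {"colors": {"CH47", "HC130"}, "tasks": {"drop_retardant", "refuel"}}

def pushout(overlap, left, right):
    """Union the two specs; the overlap must embed in both and is identified."""
    assert all(overlap[k] <= left[k] and overlap[k] <= right[k] for k in overlap)
    return {k: left[k] | right[k] for k in left}

spec_union = pushout(spec_overlap, spec_sar, spec_ff)
assert spec_union["colors"] == {"UH60", "HC130", "CH47"}
assert {"traverse", "drop_retardant", "refuel"} <= spec_union["tasks"]
```

This is the set-level analogue of the diagram above: the arrows into the
cross-domain spec are the two inclusions, agreeing on the overlap.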
On the other hand, suppose that the machine readable representation of each
domain was tightly coupled to algorithms–e.g. mathematical programming for SAR
and a planning framework for fire fighting. There is no artifact suitable for
integrating these domains since their expression was prematurely optimized. We
describe a general workflow to separate specification from its representation
via exploitable data structures and algorithms in Sec. 7.5.
### 6.3 Other examples of automated synthesis
Though network templates facilitate exploration from atoms, how to explore
valid designs is a largely distinct concern from defining the space of
designs, as discussed in Sec. 1.
Novel search strategies via substitution. For example, in the DARPA
Fundamentals of Design (FUN Design) program, composition of designs employed a
genetic algorithm (GA). FUN Design focused on generating novel conceptual
designs for mechanical systems–e.g. catapults to launch a projectile.
Formulating this problem with network operads followed the cookbook approach:
there were atomic types of mechanical components and basic operations to link
them.
The operad-based representation provided guarantees of design feasibility and
informed how to generalize the GA implementation details. Specifically,
composition for atomic algebra elements defined genetic data; crossover
produced child data to compose from atoms; and mutation modified parameters of
input algebra elements. Crafting a crossover step is typically handled case-
by-case while this strategy generalizes to other problems that mix
combinatorial and continuously varying data, provided this data is packaged as
an operad acting on an algebra. Guarantees of feasibility dramatically reduced
the number unfit offspring evaluated by simulation against multiple fitness
metrics. Moreover, computational gains from feasibility guarantees increase as
the design population becomes more combinatorially complex.
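A sketch of this generalized GA step (component names and parameters are
illustrative, not the actual FUN Design data): the genetic data of a design is
its list of atomic algebra elements, crossover recombines the lists, and
mutation perturbs a continuous parameter while leaving the combinatorial
structure intact:

```python
import random

def crossover(parent_a, parent_b):
    """Single-point crossover on atom lists; operad composition of the child
    would then re-check feasibility before any simulation is run."""
    cut = random.randint(1, min(len(parent_a), len(parent_b)) - 1)
    return parent_a[:cut] + parent_b[cut:]

def mutate(design, scale=0.1):
    """Jitter the continuous parameter of one atom; structure is unchanged."""
    i = random.randrange(len(design))
    name, param = design[i]
    jittered = param * (1 + random.uniform(-scale, scale))
    return design[:i] + [(name, jittered)] + design[i + 1:]

# Each atom is (type, continuous parameter) -- hypothetical catapult parts.
arm = [("beam", 1.5), ("spring", 200.0)]
base = [("frame", 3.0), ("counterweight", 40.0)]
child = mutate(crossover(arm, base))
assert len(child) == 2 and all(isinstance(p, float) for _, p in child)
```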
Integrated structure and behavior. Large classes of engineering problems
compose components to form an ‘optimized’ network–e.g. in chemical process
synthesis, supply chains, and water purification networks [61, 75, 80, 105].
Given a set of inputs, outputs and available operations (process equipment
with input and output specification), the goal is to identify the optimal
State Equipment Networks (SEN) for behavioral flows of materials and energy. A
given production target for outputs is evaluated in terms of multiple
objectives such as environmental impact and cost. For example, the chemical
industry considers the supply chain, production and distribution network
problem [105] systematically as three superstructure optimization problems
that can be composed to optimize enterprise level, multi-subsystem structures.
Each sub-network structure is further optimized for low cost and other metrics
including waste, environmental impact and energy costs. The operadic paradigm
would provide a lens to generalize and refine existing techniques to jointly
explore structure and behavior.
CASCADE prototyped integrated composition of structure and behavior for
distributed logistics applications. Here an explicit resupply plan to task
agents was desired. Structural composition was needed to account for the
resupply capacity for heterogeneous delivery vehicles and the positioning of
distributed resupply depots. Probabilistic models estimated steady state
resupply capacities of delivery fleet mixes to serve estimates of demand.
First, positioning resupply locations applied hill climbing to minimize the
expected disruption of delivery routes when returning to and departing from
resupply locations. Second, this disruption estimate was used to adjust the
resupply capacity estimate of each delivery asset type. Third, promising
designs were evaluated using a heuristic task planning algorithm. At each
stage, algorithms focused on finding satisficing solutions which allowed broad
and rapid explorations of the design and tasking search space.
Synthesis with applied operads and categories. Research activity to apply
operads and monoidal categories to automated design synthesis is increasing.
Wiring diagrams have been applied to automate protein design [46, 92] and
collaborative design [40, Ch. 4] of physical systems employing practical
semantic models and algorithms [24, 25, 106, 107]. Software tools are
increasingly focused on scaling up computation, e.g. [30, 55, 58], as opposed
to software to augment human calculation, as in [14, 60, 81], and on managing
complex domains with commercial-grade tools [22, 95, 76, 101]. Recent work to
optimize quantum circuits [35, 59] leverages such developments. The use of
wiring diagrams to improve computational efficiency via normal forms is
explored in [77].
In the next section, we discuss research directions to develop the meta-
modeling potential of applied operads to: (1) decompose a problem within a
semantic model to divide and conquer; and (2) move between models to fill in
details from coarse descriptions. We also discuss how the flow of
representations used for SAR–network template, operad model of composition,
exploitation data structures and algorithms–could be systematized into a
reusable software framework.
## 7 Toward practical automated analysis and synthesis
In this section, we describe lessons learned from practical experiences with
applying operads to automated synthesis. We frame separation of concerns in
the language of operads to describe strategies to work around issues raised by
this experience. This gives not only a clean formulation of separation but
also a principled means to integrate and exploit concerns.
### 7.1 Lessons from automated synthesis
The direct, network template approach facilitates correct and transparent
modeling for complex tasking problems. However, computational tractability is
limited to small problems–relative to the demands of applications. More
research is needed to develop efficient algorithms that break up the search
into manageable parts, leveraging the power of operads to separate concerns.
Under CASCADE, we experimented with the CPLEX scheduling toolkit to informally
model across levels of abstraction and exploit domain specific information. In
particular, generating options to plan, but not schedule, key maneuvers with
traditional routing algorithms helped factor the problem effectively. These
applied experiments were not systematized into a formal meta-modeling
approach, although our prototype results were promising. Specification of
these levels–as in Sec. 4–and controlling the navigation of levels using
domain-specifics would be ideal.
The FUN Design genetic algorithm approach illustrates the potential operads
have to: (1) generalize case-by-case methods¹³; (2) separate concerns, in this
case by leveraging the operad syntax for combinatorial crossover and algebra
parameters for continuous mutation; and (3) guarantee correctness as
complexity grows. Distributed logistics applications in CASCADE show the
flexibility afforded by multiple-stage exploration for more efficient search.
¹³ In fact, applying genetic algorithms to explore network structures was
inspired by the success of NeuroEvolution of Augmenting Topologies (NEAT) [96]
to generate novel neural network architectures.
### 7.2 Formal separation of concerns
We begin by distinguishing focus from filter, which are two ways operads
separate concerns. Focus selects _what_ we look at, while filter captures
_how_ we look at it. These are questions of syntax and semantics,
respectively. To be useful, the _what_ of our focus must align with the _how_
of the filter.
Separation of focus occurs within the syntax operad of system maps. In Sec.
2.1, four trees correspond to different views on the same system. We can zoom
into
one part of the system while leaving other portions black-boxed at a high
level. Varying the target type of an operation changes the scope for system
composition, such as restricting attention to a subsystem.
Filtering, on the other hand, is semantic; we choose which salient features to
model and which to suppress, controlled by the semantic context used to
‘implement’ the operations. As described in Sec. 5.3, the default semantic context
is $\mathbf{Set}$ where: (1) each type in the operad is mapped to a set of
possible instances for that type; and (2) each operation is mapped to a
function to compose instances. Instances or algebra elements for the sailboat
problem (Sec. 4) describe the key features of structural system designs. For
SAR tasking (Sec. 6), mission plan instances track the key internal states of
agents–notably fuel and risk–throughout its execution. Section 5 illustrates
alternative semantic contexts such as probability $\mathbf{Prob}$ or relations
between sets $\mathbf{Rel}$.
Focus and filter come together to solve particular problems. The analysis of
the LSI system in Sec. 5 tightly focuses the syntax operad $\mathcal{W}$ to
include only the types and operations from Fig. 4. Formally, this is
accomplished by considering the image of the generating types and operations
in the operad of port-graphs [20, 3]. This tight focus means semantics need
only be defined for LSI components. In each SAR tasking problem of Sec. 6, an
initial, source configuration of agent types is given, narrowing the focus of
each problem. The SAR focus is much broader because an operation to define the
mission plan must be constructed. Even so, semantics filter down to just the
key features of the problem and how to update them when generating operations
act.
Functorial semantics, as realized by an operad algebra
$\mathsf{A}\colon{\mathcal{O}}\to\mathbf{Sem}$, helps factor the overall
problem model to facilitate its construction and exploitation. For example, we
can construct the probabilistic failure model in Table 3 by normalizing
historical failures. First we limit focus from all port-graphs $\mathcal{P}$
to $\mathcal{W}$ then semantics for counts in $\mathbb{N}^{+}$, an operad of
counts and sums, are normalized to obtain probabilities in $\mathbf{Prob}$:
$\mathcal{W}\hookrightarrow\mathcal{P},\qquad\mathcal{W}\xrightarrow{\;\mathsf{A}\;}\mathbb{N}^{+}\longrightarrow\mathbf{Prob}.$
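The normalization step can be made concrete; in the sketch below the component
names and failure counts are illustrative, not the actual Table 3 data:

```python
from collections import Counter

# Historical failure counts per component (semantics in N+, counts and sums).
failures = Counter({"laser": 3, "interferometer": 1, "mount": 6})

def normalize(counts):
    """Map count-valued semantics to probability-valued semantics."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

p = normalize(failures)
assert abs(sum(p.values()) - 1.0) < 1e-12   # a valid probability table
assert p["mount"] == 0.6
```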
The power to focus and filter is amplified because we are not limited by a
single choice of how to filter. In addition to limiting focus with the source
of an operad algebra, we can simplify filters. Such natural transformations
between functors are ‘filters of filters’ that align different compositional
models precisely–e.g. requirements over state (Sec. 5.3) or timed scheduling
over two levels of planning (Sec. 6.2). In this first case the syntax operad
$\mathcal{W}$ stays the same and semantics are linked by an algebra
homomorphism (Sec. 2.4). In the second case, both the operad and algebra must
change to determine simpler semantics–e.g. to neglect the impact of waiting
operations, which bound performance. Such precision supports automation to
explore design space across semantic models and aligns the ability to focus
within each model. By working backward relative to the construction process,
we can lift partial solutions to gradually increase model fidelity–e.g.
exploring schedules over effective plans. This gives a foundation for lazy
evaluation during deep exploration of design space, which we revisit in Sec. 7.5.
For a simple but rich example of these concepts working together, consider the
functional decomposition $\mathtt{f}(\mathtt{l},\mathtt{t})$ in Fig. 4. We
could model the length system $\mathtt{l}$ using rigid-body dynamics, the
temperature system $\mathtt{t}$ as a lumped-element model, and super-system
$\mathtt{f}$ as a computation (Edlén equation) that corrects the observed
$\mathtt{fringe}$ count based on the measured temperature:
$\{\mathtt{l}\}\longrightarrow\mathcal{N}_{\mathrm{mech}}\xrightarrow{\;\mathsf{Rigid}\;}\mathsf{Dyn}\xrightarrow{\;\mathsf{Impl}\;}\mathcal{W}$
$\qquad\Uparrow\qquad\mathcal{N}_{\mathrm{comp}}\xrightarrow{\;\text{Edlén}\;}\mathsf{Type}\longrightarrow\mathcal{W}\qquad\Downarrow$
$\{\mathtt{t}\}\longrightarrow\mathcal{N}_{\mathrm{therm}}\xrightarrow{\;\mathsf{Lump}\;}\mathsf{Dyn}\xrightarrow{\;\mathsf{Impl}\;}\mathcal{W}$
(3)
The upper and lower paths construct implementations of dynamical models based
on the aforementioned formalisms. The center path implements a correction on
the data stream coming from the interferometer, based on a stream of
temperature data. The two natural transformations indicate extraction of one
representation, a stream of state values, from the implementation of the
dynamical models. Composition in $\mathcal{W}$ then constructs the two data
streams and applies the correction.
A key strength of the operadic paradigm is its genericity: the same principles
of model construction, integration and exploitation developed for measurement
and SAR apply to all kinds of systems. In principle, we could use the same
tools and methodology to assemble transistors into circuits, unit processes
into chemical factories and genes into genomes. The syntax and semantics
change with the application, but the functorial perspective remains the same.
For the rest of this section, we describe research directions to realize such
a general purpose vision to decompose a complex design problem into
subproblems and support rapid, broad exploration of design space.
### 7.3 Recent advancements, future prospects and limits
Progress driven by applications. Section 4 describes how cookbook-style
approaches enable practitioners to put operads to work. Generative data define
a domain and compositionality combines them into operads and algebras to
separate concerns. Network operads [6, 7, 73] were developed in response to
the demands of applications to construct operads from generative data. Section
5 describes rich design analysis by leveraging multiple decompositions of
complex systems and working across levels of abstraction. Focusing on a
specific applied problem–the LSI at NIST–provided further opportunities for
analysis since model semantics need only be defined for the problem at hand;
see also Eq. 3. Progress in streamlining automated synthesis from building
blocks is recounted in Sec. 6 where the domain drives coordination
requirements to task behavior.
Prospects. If interactions between systems are well-understood (specification)
and can be usefully modeled by compositional semantics (analysis), then
automated design synthesis leveraging separation for scalability becomes
possible. For instance, most references from the end of Sec. 5 correspond to
domains that are studied with diagrams that indicate interactions and have
associated compositional models. This allows intricate interactions to be
modeled–compare, e.g. classical [82] vs. quantum [1, 35, 59] computing–while
unlocking separation of concerns. Cookbook and focused approaches guide
practitioners to seek out the minimal data needed for a domain problem–as in
the examples presented–but operads for design require compositional models.
Limitations. We note three issues limiting when operads apply: (1) key
interactions among systems and components are inputs; (2) not all design
problems become tractable via decomposition and hierarchy; and (3) there is no
guarantee of compositional semantics to exploit. For instance, though the
interactions for the $n$-body problem are understood (1), this does not lend
itself to decomposition (2) or exploitable compositional semantics (3).
Whitney [102] notes that integral mechanical system design must address safety
issues at high power levels due to challenging, long-range interactions. Some
aspects of mechanical system design may yield to operad analysis–e.g., bond
graphs [17] or other sufficiently “diagrammatic” models–but others may not.
Both examples illustrate how overly numerous or long-range interactions can lead
to (2). Operads can work at the system rather than component level if system
properties can be extracted into compositional models. However, operads do not
provide a means to extract such properties or understand problems that are
truly inseparable theoretically or practically.
### 7.4 Research directions for applied operads
We now briefly overview research directions toward automated analysis and
synthesis.
Operad-based decomposition and adaptation. Decomposition, the ways a complex
operation can be broken down into simpler operations, is a concept dual to the
composition of operations. Any subsystem designed by a simpler operation can
be adapted: precisely which operations can be substituted is known, providing
a general perspective to craft algorithms. To be practical, the analytic
questions of _how_ to decompose and _when_ to adapt subsystems must be
answered.
One research direction applies the lens of operad composition to abstract and
generalize existing algorithms that exploit decomposition–e.g. to: (1)
generalize superstructure optimization techniques discussed in Sec. 6.3; (2)
extend the crossover and mutation steps of the FUN Design work (6.1), which
are global in the sense that they manipulate full designs, to local steps
which adapt parts of a design, perhaps driven by analysis to re-work specific
subsystems; and (3) explore routing as a proxy for task planning, analyzing
foundational algorithms like Ford-Fulkerson [44] and decomposition techniques
such as contraction hierarchies [45]. An intriguing, but speculative, avenue
is to attempt to learn how to decompose a system or select subsystems to adapt
in a data-driven way, so that the operad syntax constrains otherwise lightly
supervised learning. A theoretical direction is to seriously consider the
dual role of decomposition, analogous to Hopf and Frobenius algebras [33], and
attempt to gain a deeper understanding of the interplay of composition and
decomposition, eventually distilling any results into algorithms.¹⁴
¹⁴ For example, Bellman’s principle of optimality is _decompositional_–i.e.
parts of an optimal solution are optimal.
Multiple levels of modeling. The LSI example shows how a system model can be
analysed to address different considerations. This sets the stage to adapt a
design–e.g. bolster functional risk points and improve control in a
back-and-forth fashion–until both considerations are acceptable. Applied demonstrations
for SAR tasking suggest a multi-level framework: (1) encoding operational
concepts; (2) planning options for key maneuvers; and (3) multistage planning
and scheduling to support these maneuvers.
[Figure 10, left panel: example JSON network templates]
{ "colors" : ["port", "cut", …, "qd"],
  "undirected" : {
    "comms" : {
      "port" : ["cut", …, "uav"],
      "cut" : ["boat", …, "helo"], … } } }
{ "colors" : ["port", "cut", …, "u"],
  "directed" : {
    "carrying" : {
      "cut" : ["port"],
      …,
      "u" : ["cut", …, "helo"] } } }
[Center panel: core meta-model ${\mathcal{O}}(\mathsf{\Gamma}_{1})\xrightarrow{\mathsf{A_{1}}}{\mathbf{Sem}}\xleftarrow{\mathsf{A_{2}}}{\mathcal{O}}(\mathsf{\Gamma}_{2})$; right panel: exploitation libraries (gradient ascent, evolutionary algorithms, planning, …).]
Figure 10: A software framework to leverage a meta-model: templates define each level and
how to move between them, libraries exploit each level, and the core meta-model
facilitates control across levels.
Unifying top-down and bottom-up points of view. We have laid out the
analytic–exemplified by wiring diagrams–and synthetic–exemplified by network
operads–points of view for complex systems. Even if the goal is practical
automated synthesis, scalability issues promote analytic decomposition and
abstraction to efficiently reason toward satisficing solutions. Two approaches
to unification include: (1) create a combined syntax for analysis and
synthesis, a ‘super operad’ combining both features; (2) act by an analytic
operad on the synthetic syntax, extending composition of operations. While the
former approach is arguably more unified, the latter more clearly separates
analysis and synthesis and may provide a constructive approach to the former.
### 7.5 Functorial programming with operads
At this point, experience implementing operads for design suggests a software
framework. While conceptually simple, this sketch helps clarify the practical
role of a precise meta-model.
Rather than working directly with operads to form a core meta-modeling
language, cf. [18], a workflow akin to popular frameworks for JavaScript
development would put developers in the driver's seat: adopters focus on
controlling the flow of data and contribute to an ecosystem of libraries for
lower-level data processing. Achieving this requires work before and after the
meta-model. First, transferable methods get an applied problem into operads
(Fig. 10, left). As in Section 4, this data constructs operads and algebras to
form the core meta-model. Core data feeds explicitly exploitable data
structures and algorithms to analyze (Sec. 5) and automatically construct
(Sec. 6) complex systems (Fig. 10, right). On the far left, end-user tools
convert intent to domain inputs. Rightmost, libraries access exploitation data
structures and algorithms, including those exploiting the syntax and semantics
separation or substitution and adaptation. At the center, the core meta-model
guarantees that the scruffier ends of the framework exposed to end users and
developers are correctly aligned and coherently navigated.
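As a minimal illustration of the "templates in, validated structures out" flow on the left of this framework, the following hypothetical sketch checks a candidate colored network against a template of the kind shown in Fig. 10; the dictionary format and the `conforms` helper are assumptions for illustration, not part of any existing package:

```python
def conforms(template, nodes, edges):
    """Check a colored network against a network template.

    template: {"colors": [...], "undirected": {relation: {color: [allowed colors]}}}
    nodes:    {node_name: color}
    edges:    [(relation, node_u, node_v), ...]  (undirected)
    """
    colors = set(template["colors"])
    if not set(nodes.values()) <= colors:
        return False  # a node uses a color the template does not declare
    for relation, u, v in edges:
        allowed = template["undirected"].get(relation, {})
        cu, cv = nodes[u], nodes[v]
        # An undirected edge is legal if either direction is permitted.
        if cv not in allowed.get(cu, []) and cu not in allowed.get(cv, []):
            return False
    return True

# Toy template in the spirit of Fig. 10: ports may talk to cutters,
# cutters to helicopters, but ports may not talk to helicopters directly.
template = {"colors": ["port", "cut", "helo"],
            "undirected": {"comms": {"port": ["cut"], "cut": ["helo"]}}}
nodes = {"p1": "port", "c1": "cut", "h1": "helo"}
assert conforms(template, nodes, [("comms", "p1", "c1"), ("comms", "c1", "h1")])
assert not conforms(template, nodes, [("comms", "p1", "h1")])
```

In the framework sketched above, such a checker would sit between the specification package and the core meta-model: downstream algorithms are handed only networks the template declares legal.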
This framework provides significant opportunities to separate concerns
compared to other approaches. Foremost, the core model separates syntax from
semantics. As noted in Sec. 1, applied methods tend to conflate syntax and
semantics. For instance, aggregate programming [15] provides: (1) semantics for
networked components with spatial and temporal extent; and (2) proximity-based
interactions. The former feature is powerful but limiting: by choosing
a single kind of semantics, modeling is wedded to the scales it abstracts
well. The individual component scale is not modeled, even syntactically, which
would complicate any attempt to align with other models. The latter precludes
syntactic declaration of interactions–e.g. to construct architectures not
purely based on proximity–and the absolute clarity about what can be put
together provided by the operad syntax. Relative to computational efforts to
apply operads or monoidal categories, e.g. [30, 55], this sketch places
greater emphasis on specification and exploitation: specification of a domain
is possible without exposing the meta-model, and algorithms searching within each
model are treated as black boxes that produce valid designs. Separate
specification greatly facilitates set up by experts in the domain, but not the
meta-model. Separate exploitation encourages importing existing data
structures and algorithms to exploit each model.
### 7.6 Open problems
The software framework just sketched separates out the issues of practical
specification, meta-modeling and fast data structures and algorithms. We
organize our discussion of open problems around concrete steps to advance
these issues. In our problem statements, “multiple” means at least three to
assure demonstration of the genericity of the operadic paradigm.
Practical specification. The overarching question is whether the minimal
combinatorial data which can specify operads, their algebras and algebra
homomorphisms in theory can be practically implemented in software. We propose
the following problems to advance the state-of-the-art for network template
specification of operads described in Sec. 4:
1. 1.
Demonstrate a specification software package for operad algebras for multiple
domains.
2. 2.
Develop specification software for algebra homomorphisms to demonstrate
correctly aligned navigation between multiple models for a single domain.
3. 3.
Develop and implement composition of specifications to combine multiple parts
of a domain problem or integrate multiple domains.
This last point is in line with the discussion of extending a domain in Sec. 6.2
and motivates a need to reconcile independently developed specification
formats.
1. 4.
Demonstrate automatic translation across specification formats.
Core meta-model. As a practical matter, state-of-the-art examples exercise
general principles of the paradigm but do not leverage general purpose
software to encode the meta-model.
1. 5.
Develop and demonstrate reusable middleware to explicitly encode multiple
semantic models and maps between them which (a) takes inputs from
specification packages; and (b) serves as a platform to navigate models.
We have seen rich examples of focused analysis with wiring diagrams in Sec. 5
and automated composition from building blocks in Sec. 6. Theoretically, there
is the question of integrating the top-down and bottom-up perspectives:
1. 6.
Develop unified foundations to integrate: (a) analytic and synthetic styles of
operads; and (b) composition with decomposition.
Potential starting points for these theoretical advancements are described in
Sec. 7.4. Developing an understanding of the limitations overviewed in Sec. 7.3
requires engagement with a range of applications:
1. 7.
Investigate limits of operads for design to: (a) identify domains or specific
aspects of domains lacking minimal data; (b) demonstrate the failure of
compositionality for potentially useful semantics; and (c) characterize
complexity barriers due to integrality.
Navigation of effective data structures and algorithms. Lastly, there is the
question of whether coherent navigation of models can be made practical. This
requires explicit control of data across models and fast data structures and
algorithms within specific models. The general-purpose evolutionary algorithms
discussed in Sec. 6.3 motivate:
1. 8.
Develop reusable libraries that exploit (a) substitution of operations and
instances to adapt designs and (b) separation of semantics from syntax.
SAR tasking experience and prototype explorations for distributed logistics
illustrate the need to exploit moving _across_ models:
1. 9.
Develop and demonstrate general purpose strategies to exploit separation
across models via hierarchical representation of model fidelity–e.g.:
(a) structure over behavior; and (b) planning over scheduling.
2. 10.
Quantify the impact of separation of concerns on: (a) computational
complexity; and (b) practical computation time.
For this last point, _isolating_ the impact of each way to separate concerns
is of particular interest to lay groundwork to systematically analyze complex
domain problems. Finally, there is the question of demonstrating an end-to-end
system to exploit the operadic, meta-modeling paradigm.
1. 12.
Demonstrate systematic, high-level control of iteration, substitution and
moving across multiple models to solve a complex domain problem.
2. 13.
Develop a high-level control framework–similar to JavaScript frameworks for
UI–or a programming language–similar to probabilistic programming–to
systematically control iteration, substitution and movement across multiple
models.
## 8 Conclusion
Operads provide a powerful meta-language to unite complementary system models
within a single framework. They express multiple options for decomposition and
hierarchy for complex designs, both within and across models. Diverse concerns
needed to solve the full design problem are coherently separated by functorial
semantics, maintaining compositionality of subsystems. Each semantic model can
trade off precision and accuracy to achieve an elegant abstraction, while
algorithms exploit the specifics of each model to analyze and synthesize
designs.
The basic moves of iteration, substitution and moving across multiple models
form a rich framework to explore design space. The trade-off is that the
technical infrastructure needed to fully exploit this paradigm is daunting.
Recent progress has lowered barriers to specify domain models and streamline
tool chains to automatically synthesize designs from basic building blocks.
Key parts of relevant theory and its implementation in software have been
prototyped for example applications. Further research is needed to integrate
advancements in automatic specification and synthesis with the analytic power
of operads to separate concerns. To help focus efforts, we described research
directions and proposed some concrete open problems.
This article does not present research with ethical considerations.
This article has no additional data.
All authors contributed to the development of the review, its text and
provided feedback and detailed edits on the work as a whole. JF coordinated
the effort and led the writing of Sec. 6. SB led Sec. 5 and co-led Sec. 2, 3
and 4 with JF. ES focused on connections to applications while JD focused on
assuring accessible mathematics discussion.
We declare that we have no competing interests.
JF and JD were supported by the DARPA Complex Adaptive System Composition and
Design Environment (CASCADE) project under Contract No. N66001-16-C-4048.
The authors thank John Baez, Tony Falcone, Ben Long, Tom Mifflin, John
Paschkewitz, Ram Sriram and Blake Pollard for helpful discussions and two
anonymous referees for comments that significantly improved the presentation.
Official contribution of the National Institute of Standards and Technology;
not subject to copyright in the United States. Any commercial equipment,
instruments, or materials identified in this paper are used to specify the
experimental procedure adequately. Such identification is not intended to
imply recommendation or endorsement by the National Institute of Standards and
Technology, nor is it intended to imply that the materials or equipment
identified are necessarily the best available for the purpose.
## References
* [1] S. Abramsky and B. Coecke, Categorical quantum mechanics, Handbook of quantum logic and quantum structures, 2 (2009), 261–325.
* [2] C. Alexander, Notes on the Synthesis of Form, Harvard University Press, (1964)
* [3] J. C. Baez, B. Coya and F. Rebro, Props in network theory, Theor. Appl. Categ. 33 25 (2018), 727–-783.
* [4] J. C. Baez and B. Fong, A compositional framework for passive linear networks, Theor. Appl. Categ. 33 38 (2018), 1158–-1222.
* [5] J. C. Baez, B. Fong, and B. S. Pollard, A compositional framework for Markov processes, J. Math. Phys. 57 3 (2016), 033301.
* [6] J. C. Baez, J. Foley, J. Moeller and B. Pollard, Network models, Theor. Appl. Categ. 35 20 (2020), 700–744.
* [7] J. C. Baez, J. Foley and J. Moeller, Network models from Petri nets with catalysts, Compositionality 1 4 (2019).
* [8] J. C. Baez, F. Genovese, J. Master, and M. Shulman, Categories of Nets, Preprint (2021). Available as arXiv:2101.04238.
* [9] J. C. Baez and J. Master, Open Petri nets, Math. Struct. Comp. Sci. 30 3 (2020), 314–341.
* [10] J. C. Baez, D. Weisbart and A. Yassine, Open systems in classical mechanics, Preprint (2021). Available as arXiv:1710.11392.
* [11] G. Bakirtzis, F. Genovese, and C. H. Fleming, Yoneda Hacking: The Algebra of Attacker Actions, Preprint, (2021), available as arXiv:2103.00044.
* [12] G. Bakirtzis, C. H. Fleming, and C. Vasilakopoulou, Categorical Semantics of Cyber-Physical Systems Theory, ACM Transactions on Cyber-Physical Systems (2021, in press), available as arXiv:2010.08003.
* [13] G. Bakirtzis, C. Vasilakopoulou, and C. H. Fleming, Compositional cyber-physical systems modeling, Proceedings 3rd Annual International Applied Category Theory Conference 2020 (ACT 2020)
* [14] K. Bar, A. Kissinger and J. Vicary, Globular: an online proof assistant for higher-dimensional rewriting, Log. Methods Comput. Sci. 14 1 (2018), 1–16.
* [15] J. Beal, D. Pianini and M. Viroli, Aggregate programming for the internet of things, Computer 48 9 (2015), 22–30.
* [16] R. K. Brayton, G. D. Hachtel, C. McMullen and A. Sangiovanni-Vincentelli, Logic minimization algorithms for VLSI synthesis, Vol. 2. Springer Science & Business Media, 1984.
* [17] B. Coya, Circuits, Bond Graphs, and Signal-Flow Diagrams: A Categorical Perspective, PhD thesis, University of California–Riverside, 2018.
* [18] A. Boronat, A. Knapps, J. Meseguer and M. Wirsing, What is a multi-modeling language? International Workshop on Algebraic Development Techniques, Springer, Berlin, Heidelberg (2008), 71–87.
* [19] S. Breiner, B. Pollard and E. Subrahmanian, Workshop on Applied Category Theory: Bridging Theory and Practice. Special Publication (NIST SP) 1249 (2020).
* [20] S. Breiner, B. Pollard, E. Subrahmanian and O. Marie-Rose, Modeling Hierarchical System with Operads, Proceedings of the 2019 Applied Category Theory Conference, (2020) 72–83.
* [21] S. Breiner, R. D. Sriram and E. Subrahmanian, Compositional models for complex systems, Artificial Intelligence for the Internet of Everything, eds. Academic Press, Cambridge Massachusetts (2019) 241–270.
* [22] K. S. Brown, D. I. Spivak and R. Wisnesky, Categorical data integration for computational science, Comput. Mater. Sci. 164 (2019), 127–132.
* [23] S. Busboom, Bat 21: A Case Study, Carlisle Barracks, PA: U.S. Army War College, (1990).
* [24] A. Censi, A mathematical theory of co-design, Preprint (2015). Available as arXiv:1512.08055.
* [25] A. Censi, Uncertainty in Monotone Codesign Problems, IEEE Robotics and Automation Letters 2 3 (2017), 1556–1563.
* [26] M. Chechik, S. Nejati and M. Sabetzadeh, A relationship-based approach to model integration, Innovations in Systems and Software Engineering 8 1 (2012), 3–18.
* [27] N. Chungoora and R. I. Young, Semantic reconciliation across design and manufacturing knowledge models: A logic-based approach, Applied Ontology 6 4 (2011), 295–295.
* [28] B. Coecke, M. Sadrzadeh and S. Clark, Mathematical foundations for a compositional distributional model of meaning, Linguistic Analysis 36 (2010), 345–-384.
* [29] H. Choi, C. Crump, C. Duriez, A. Elmquist, G. Hager, D. Han, F. Hearl, J. Hodgins, A. Jain, F. Leve, and C. Li, On the use of simulation in robotics: Opportunities, challenges, and suggestions for moving forward, Proc. Natl. Acad. Sci. U.S.A. 118 1 (2021)
* [30] G. de Felice, A. Toumi and B. Coecke, DisCoPy: Monoidal Categories in Python, Proceedings 3rd Annual International Applied Category Theory Conference 2020 (ACT 2020).
* [31] Z. Diskin and T. Maibaum, Category theory and model-driven engineering: From formal semantics to design patterns and beyond. Model-Driven Engineering of Information Systems: Principles, Techniques, and Practice 173 (2014).
* [32] E. Di Lavore, J. Hedges, and P. Sobociński, Compositional Modelling of Network Games, Computer Science Logic 2021, Leibniz International Proceedings in Informatics 183 (2021).
* [33] D. Pavlovic, Monoidal computer I: Basic computability by string diagrams, Info. Comp. 226 (2013), 94–116.
* [34] S. Eilenberg and S. Mac Lane, General Theory of Natural Equivalences, Trans. AMS 58 (1945), 231–294.
* [35] A. Fagan and R. Duncan, Optimising Clifford Circuits with Quantomatic, Proceedings 15th International Conference on Quantum Physics and Logic (QPL 2018), (2019) 85–-105.
* [36] A. Ferrari and A. Sangiovanni-Vincentelli, System design: Traditional concepts and new paradigms, Proceedings 1999 IEEE International Conference on Computer Design: VLSI in Computers and Processors, (1999) 2–-12.
* [37] J. D. Foley, An example of exploring coordinated SoS behavior with an operad and algebra integrated with a constraint program, CASCADE tech report, 2018.
* [38] B. Fong, The algebra of open and interconnected systems, DPhil thesis, University of Oxford, 2016.
* [39] B. Fong and M. Johnson, Lenses and learners, Proceedings of the Eighth International Workshop on Bidirectional Transformations (Bx 2019), (2019).
* [40] B. Fong and D. I. Spivak, An invitation to applied category theory: seven sketches in compositionality, Cambridge University Press, 2019.
* [41] B. Fong and D. I. Spivak, Hypergraph categories, J. Pure Appl. Algebra 223 11 (2019), 4746–4777.
* [42] B. Fong and D. I. Spivak, Supplying bells and whistles in symmetric monoidal categories, Preprint (2020). Available as arXiv:1908.02633.
* [43] B. Fong, D. I. Spivak and R. Tuyéras, Backprop as functor: A compositional perspective on supervised learning, 2019 34th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), 1–13.
* [44] L. R. Ford and D. R. Fulkerson, Maximal flow through a network, Can. J. of Math. 8 (1956), 399–404.
* [45] R. Geisberger, P. Sanders, D. Schultes and D. Delling, Contraction hierarchies: Faster and simpler hierarchical routing in road networks, International Workshop on Experimental and Efficient Algorithms, Springer, Berlin, Heidelberg (2008), 319–333.
* [46] T. Giesa, R. Jagadeesan, D. I. Spivak and M. J. Buehler, Matriarch: a python library for materials architecture, ACS biomaterials science & engineering, 1 10 (2015), 1009–1015.
* [47] D. R. Ghica, A. Jung and A. Lopez, Diagrammatic Semantics for Digital Circuits, 26th EACSL Annual Conference on Computer Science Logic (CSL 2017), Leibniz International Proceedings in Informatics, (2017) 82 24:1–24:16.
* [48] N. Ghani, C. Kupke, A. Lambert and F. N. Forsberg, Compositional Game Theory with Mixed Strategies: Probabilistic Open Games Using a Distributive Law, Proceedings of the 2019 Applied Category Theory Conference, (2020) 95–-105.
* [49] N. Ghani, J. Winschel and P. Zahn, Compositional game theory, Proceedings of the 33rd Annual ACM/IEEE Symposium on Logic in Computer Science, (2018) 472–481.
* [50] J. Girard, Linear logic, Theor. Comput. Sci. 50 1 (1987), 1–101.
* [51] E. Grefenstette, M. Sadrzadeh, S. Clark, B. Coecke, and S. Pulman, Concrete sentence spaces for compositional distributional models of meaning, Computing meaning, Springer, Dordrecht (2018) 71-86.
* [52] J. Guy, 142 Evacuated From Creek Fire by Military Helicopters; Body of Deceased Man Flown to Fresno, Fresno Bee, Sept. 8, 2020.
* [53] K. Gürlebeck, D. Hofmann and D. Legatiuk, Categorical approach to modelling and to coupling of models, Math. Meth. Appl. Sci. 40 3 (2017), 523–534.
* [54] J. Hedges and M. Sadrzadeh, A generalised quantifier theory of natural language in categorical compositional distributional semantics with bialgebras, Math. Struct. Comp. Sci. 29 6 (2019), 783–809.
* [55] M. Halter, E. Patterson, A. Baas and J. Fairbanks, Compositional Scientific Computing with Catlab and SemanticModels, Preprint (2020). Available as arXiv:2005.04831.
* [56] C. Hermida. Representable Multicategories, Advances in Mathematics 151 2 (2000), 164-225.
* [57] P. Johnson-Freyd, J. Aytac, and G. Hulett , Topos Semantics for a Higher-Order Temporal Logic of Actions, Proceedings of the 2019 Applied Category Theory Conference, (2020) 161–-171.
* [58] A. Kissinger and J. van de Wetering, PyZX: Large Scale Automated Diagrammatic Reasoning, Proceedings 16th International Conference on Quantum Physics and Logic (QPL 2019), (2020) 229–241.
* [59] A. Kissinger and J. van de Wetering, Reducing the number of non-Clifford gates in quantum circuits, Physical Review A, 102 2 (2020) 022406.
* [60] A. Kissinger and V. Zamdzhiev, Quantomatic: A proof assistant for diagrammatic reasoning, International Conference on Automated Deduction, Springer, Cham (2015) 326–336.
* [61] C. S. Khor, B. Chachuat and N. Shah, A superstructure optimization approach for water network synthesis with membrane separation-based regenerators, Computers & chemical engineering 42 (2012), 48–63.
* [62] R. Kuwada, J. Guy and D. Cooper, Creek Fire roars toward mountain resort towns, after airlift rescues hundreds trapped by flames, Fresno Bee, Sept. 6, 2020.
* [63] E. A. Lee, The past, present and future of cyber-physical systems: A focus on models. Sensors, 15 3 (2015), 4837–4869.
* [64] T. Leinster, Higher operads, higher categories, London Math. Soc. Lec. Note Series, 298 (2003)
* [65] N. Leveson, The Drawbacks in Using The Term ‘System of Systems’, Biomedical Instrumentation & Technology, (2013), 115–118.
* [66] C. Lisciandra, and J. Korbmacher, Multiple models, one explanation, J. Econ. Methodol. , (2021), 1–21.
* [67] M. A. Mabrok and M. J. Ryan, Category theory as a formal mathematical foundation for model-based systems engineering, Appl. Math. Inf. Sci. 11 (2017), 43–51.
* [68] S. Mac Lane. Categories for the Working Mathematician, 2nd edition, Vol. 5. Springer, (1998)
* [69] E. Marder and A. L. Taylor, Multiple models to capture the variability in biological neurons and networks, Nat. Neurosci. 14 2 (2011), 133–138.
* [70] M. Markl, S. Shnider and J. D. Stasheff, Operads in Algebra, Topology and Physics, AMS, 2002
* [71] J. Master, E. Patterson, and A. Canedo, String Diagrams for Assembly Planning, DIAGRAMS 2020 11th International Conference on the Theory and Application of Diagrams (2020)
* [72] J. Meseguer and U. Montanari, Petri nets are monoids, Inf. Comput. 88 (1990), 105–155.
* [73] J. Moeller, Noncommutative network models, Math. Struct. Comp. Sci. 30 1 (2020), 14–32.
* [74] J. Moeller and C. Vasilakopoulou, Monoidal Grothendieck Construction, Theor. Appl. Categ. 35 31 (2020), 1159–1207.
* [75] S. M. Neiro and J. M. Pinto, Supply chain optimization of petroleum refinery complexes, Proceedings of the 4th International Conference on Foundations of Computer-Aided Process Operations, (2003), 59–72.
* [76] J. S. Nolan, B. S. Pollard, S. Breiner, D. Anand, and E. Subrahmanian, Compositional models for power systems, Proceedings of the 2019 Applied Category Theory Conference, (2020) 72–83.
* [77] S. M. Patterson, D. I. Spivak and D. Vagner, Wiring diagrams as normal forms for computing in symmetric monoidal categories, Proceedings of the 2020 Applied Category Theory Conference (ACT 2020)
* [78] B. S. Pollard, Open Markov processes: A compositional perspective on non-equilibrium steady states in biology, Entropy 18 (2016), 140.
* [79] A. Quarteroni, Mathematical models in science and engineering, Not. Am. Math. Soc. 56 1 (2009), 10–19.
* [80] R. Raman and I. E. Grossmann, Integration of logic and heuristic knowledge in MINLP optimization for process synthesis, Computers & chemical engineering 16 3 (1992), 155–171.
* [81] D. Reutter and J. Vicary, High-level methods for homotopy construction in associative n-categories, 2019 34th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), IEEE (2019), 1–13.
* [82] A. Sangiovanni-Vincentelli, The tides of EDA, IEEE Design & Test of Computers 20 6 (2003), 59–75.
* [83] P. Schultz and D. I. Spivak, Temporal Type Theory, Springer International Publishing, 2019.
* [84] G. Simon, T. Levendovszky, S. Neema, E. Jackson, T. Bapty, J. Porter and J. Sztipanovits, Foundation for model integration: Semantic backplane, International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Vol. 45011, (2012), 1077–1086. American Society of Mechanical Engineers.
* [85] H. A. Simon, Rational decision making in business organizations, Am. Econ. Rev. 69 4 (1979), 493–513.
* [86] H. A. Simon, The architecture of complexity, Facets of systems science, Springer, (1991), 457–476.
* [87] M. Sirjani, E. A. Lee, and E. Khamespanah, Verification of Cyberphysical Systems, Mathematics, 8 7, (2020), 1068.
* [88] J. A. Sokolowski , and C. M. Banks, Modeling and simulation fundamentals: theoretical underpinnings and practical domains, John Wiley & Sons, (2010)
* [89] M. E. Sosa, S. D. Eppinger, and C. M. Rowles, Identifying modular and integrative systems and their impact on design team interactions, J. Mech. Des. 125 2 (2003) 240–252.
* [90] D. I. Spivak, The operad of wiring diagrams: formalizing a graphical language for databases, recursion, and plug-and-play circuits, Preprint (2013). Available as arXiv:1305.0297.
* [91] D. I. Spivak, Category Theory for the Sciences, MIT Press, Cambridge Massachusetts, 2014.
* [92] D. I. Spivak, T. Giesa, E. Wood and M. J. Buehler, Category theoretic analysis of hierarchical protein materials and social networks, PloS one 6 9 (2011).
* [93] D. I. Spivak and R. E. Kent, Ologs: a categorical framework for knowledge representation, PloS one 7 1 (2012).
* [94] D. I. Spivak and J. Tan, Nesting of dynamical systems and mode dependent networks, J. Complex Networks 5 3 (2017), 389–408.
* [95] D. I. Spivak and R. Wisnesky, Fast Left-Kan Extensions Using The Chase, Preprint (2020) Available at www.categoricaldata.net.
* [96] K. O. Stanley and R. Miikkulainen, Evolving neural networks through augmenting topologies, Evolutionary Comp. 10 2 (2002), 99–127.
* [97] L. D. Stone, J. O. Royset, and A. L. Wasburn. Optimal Search for Moving Targets, Vol. 237. Springer, 2016.
* [98] M. E. Szabo, Algebra of proofs, Studies in logic and the foundations of mathematics, Vol. 88. North-Holland Publishing Company, 1978.
* [99] A. M. Turing, The Chemical Basis of Morphogenesis, Philos. Trans. R. Soc. Lond., B, Biol. Sci. 237 641 (1952), 37–72.
* [100] D. Vagner, D. I. Spivak, and E. Lerman, Algebras of open dynamical systems on the operad of wiring diagrams, Theor. Appl. Categ. 30 55 (2015), 1793–1822.
* [101] R. Wisnesky, S. Breiner, A. Jones, D. I. Spivak and E. Subrahmanian, Using category theory to facilitate multiple manufacturing service database integration, Journal of Computing and Information Science in Engineering 17 2 (2017).
* [102] D. E. Whitney, Physical limits to modularity, (2002)
* [103] D. Yau, Colored Operads, American Mathematical Society, Providence, Rhode Island, 2016.
* [104] D. Yau, Operads of Wiring Diagrams, Vol. 2192. Springer, 2018.
* [105] H. Yeomans and I. E. Grossmann, A systematic modeling framework of superstructure optimization in process synthesis, Computers & Chemical Engineering 23 6 (1999), 709–731.
* [106] G. Zardini, N. Lanzetti, M. Salazar, A. Censi, E. Frazzoli, and M. Pavone, Towards a co-design framework for future mobility systems, Annual Meeting of the Transportation Research Board (2020).
* [107] G. Zardini, D. Milojevic, A. Censi, and E. Frazzoli, Co-Design of Embodied Intelligence: A Structured Approach, Preprint, (2020), available as arXiv:2011.10756.
* [108] G. Zardini, D. I. Spivak, A. Censi, and E. Frazzoli, A Compositional Sheaf-Theoretic Framework for Event-Based Systems (Extended Version), Preprint, (2020), available as arXiv:2005.04715.
# Exact and Approximate Heterogeneous Bayesian Decentralized Data Fusion
Ofer Dagan, Nisar R. Ahmed
Manuscript received Month day, 2021; revised Month day, 2021. The authors are
with the Smead Aerospace Engineering Sciences Department, University of
Colorado Boulder, Boulder, CO 80309 USA (e-mail:
<EMAIL_ADDRESS>Nisar.Ahmed@colorado.edu).
###### Abstract
In Bayesian peer-to-peer decentralized data fusion, the underlying
distributions held locally by autonomous agents are frequently assumed to be
over the same set of variables (homogeneous). This requires each agent to
process and communicate the full global joint distribution, and thus leads to
high computation and communication costs irrespective of relevancy to specific
local objectives. This work studies a family of heterogeneous decentralized
fusion problems, in which either the communicated or the processed
distributions describe different, but overlapping, states of interest that are
subsets of a larger full global joint state.
We exploit the conditional independence structure of such problems and provide
a rigorous derivation for a family of exact and approximate heterogeneous
conditionally factorized channel filter methods. We further extend existing
methods for approximate conservative filtering and decentralized fusion in
heterogeneous dynamic problems. Numerical examples show more than 99.5%
potential communication reduction for heterogeneous channel filter fusion, and
a multi-target tracking simulation shows that these methods provide consistent
estimates.
###### Index Terms:
Bayesian decentralized data fusion (DDF), distributed robot systems, multi-
robot systems, sensor fusion.
## I Introduction
Bayesian decentralized data fusion (DDF) has a wide range of applications,
such as cooperative localization [1], multi-target tracking [2], multi-robot
localization and mapping [3], and more. Decentralized data fusion, while
generally less accurate compared to centralized data fusion, offers advantages
in terms of scalability, flexibility and robustness. One of the challenges of
decentralized data fusion stems from the difficulty of accounting for common
data and dependencies between communicating agents and avoiding ‘rumor
propagation’, where dependencies between data sources are incorrectly ignored.
In decentralized multi-agent systems aiming at some joint mission, such as
autonomous cooperative robot localization [1],[4], [5], or tracking [2], the
requirement for each agent to recursively update and communicate a global full
joint posterior probability distribution function (pdf) over an identical
(homogeneous) set of states leads to large overhead in local processing and
communication bandwidth. It is therefore desirable to enable processing,
communication and fusion of a posterior pdf over a subset of different but
overlapping states; we name such a process _heterogeneous fusion_. Consider
for example the 30 robot cooperative localization scenario given in [1]. If
each agent has 4 unknown position states, the full joint distribution has 120
variables, and requires processing a $120\times 120$ covariance matrix
(assuming Gaussian distribution). This includes states of agents ‘far away’
from each other in the network, which has negligible effect on the local
position estimate and might be considered ‘irrelevant’. But, if each agent
includes in its estimate only immediate ‘relevant’ neighbors’ states which
form a subset of the global joint distribution, then the local heterogeneous
joint distribution shrinks, e.g. to 16 states for a 3-neighbor topology. This
has a clear computation and communication gain over the homogeneous
alternative. However, it might lead to indirect correlations between variables
not mutually monitored by both agents, resulting in an overconfident estimate.
To the best of our knowledge, existing methods have yet to solve this problem
and accurately account for the indirect correlations.
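The savings quoted above follow from simple dimension counting; the snippet below (an illustrative helper, not from the paper) reproduces the arithmetic for the 30-robot example:

```python
def cov_entries(n_states: int) -> int:
    """Number of entries in an n x n covariance matrix."""
    return n_states * n_states

n_agents, states_per_agent, n_neighbors = 30, 4, 3

# Homogeneous: every agent tracks the full 120-variable joint state.
full_dim = n_agents * states_per_agent            # 120
# Heterogeneous: own states plus the 3 neighbors' states only.
local_dim = (1 + n_neighbors) * states_per_agent  # 16

print(cov_entries(full_dim), cov_entries(local_dim))  # 14400 256
```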
DDF algorithms can be considered exact or approximate, depending on how they
account for dependencies in the data shared between agents, to guarantee that
every piece of data is counted no more than once. In exact methods, these
dependencies are explicitly accounted for, either by pedigree-tracking, which
can be cumbersome and impractical in large ad hoc networks [6], or by adding a
_channel filter_ (CF), which requires enforcing a tree-structured
communication topology [7, 8]. Approximate methods assume different levels of
correlation between the communicating agents and fuse them in such a way that
the common data is guaranteed not to be double counted, thus ensuring
conservativeness of the fused posterior. The most commonly used approximate
method is covariance intersection (CI) [9], when agents share only the first
two moments (mean and covariance) of their distributions, or the geometric
mean density (GMD) for fusion of general pdfs [10]. In CI, the fused result is a
weighted average of the information vector and matrix describing the first and
second moments of the underlying distributions, respectively, where the weight
is optimized based on a predetermined cost function, e.g., determinant of the
posterior fused covariance matrix. Both exact and approximate fusion methods
usually assume that the distributions to be fused as well as the posterior
distributions are homogeneous, i.e. describe the same full joint state random
vector. Hence, these methods cannot be directly applied to heterogeneous
fusion, where pdfs over overlapping parts of the full joint pdf are fused.
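As a concrete illustration of CI for Gaussian estimates, the following minimal sketch forms the weighted average of the information vectors and matrices, choosing the weight by a grid search that minimizes the determinant of the fused covariance (the cost function mentioned above). The function and its parameters are our own illustrative choices, not the paper's code:

```python
import numpy as np

def covariance_intersection(mu_i, P_i, mu_j, P_j, n_grid=101):
    """Fuse two Gaussian estimates with unknown cross-correlation via CI.

    The fused information matrix/vector is a weighted average of the
    inputs; omega is found by grid search maximizing det(Y_f), which is
    equivalent to minimizing the determinant of the fused covariance.
    """
    Yi, Yj = np.linalg.inv(P_i), np.linalg.inv(P_j)
    yi, yj = Yi @ mu_i, Yj @ mu_j
    dets = [(np.linalg.det(w * Yi + (1 - w) * Yj), w)
            for w in np.linspace(0.0, 1.0, n_grid)]
    _, w = max(dets)
    Yf = w * Yi + (1 - w) * Yj
    yf = w * yi + (1 - w) * yj
    Pf = np.linalg.inv(Yf)
    return Pf @ yf, Pf

# Two estimates that are each confident in a different direction.
mu_f, P_f = covariance_intersection(
    np.array([0.0, 0.0]), np.diag([1.0, 4.0]),
    np.array([1.0, 1.0]), np.diag([4.0, 1.0]))
```

Because CI never subtracts information, the result is conservative regardless of the (unknown) correlation between the two inputs.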
This paper builds upon the work presented in [11] and further develops a
rigorous Bayesian probabilistic approach for fusion of heterogeneous pdfs in
tree-structured networks. The key idea is to utilize conditional independence
properties between subsets of variables (states) in order to lower the
dimension of the local joint distributions to be fused. In [11] the
applicability of the conditionally factorized channel filter (CF2) algorithms
for dynamic systems is limited since the full time history has to be tracked
for conditional independence. This paper improves and extends the theory and
applicability of the work presented in [11] by making the following
contributions:
1. The applicability of the CF2 heterogeneous fusion algorithms is extended to
dynamic problems by (i) enabling conservative filtering and (ii) developing the
information augmented state (iAS) version of the augmented state algorithm
[12].
2. The definition and desiderata for a heterogeneous fusion rule are stated
more formally and clearly, along with practical measures for evaluation.
3. The theory developed in this paper is brought to practice with a detailed
explanation of two of the CF2 algorithms, and pseudo-code for the case of
Gaussian distributions is presented.
4. The scalability of the heterogeneous fusion methods is demonstrated by an
extended example, calculating the communication and computation savings and
covering different problem sizes.
5. New simulations of dynamic decentralized multi-target tracking scenarios are
provided, to demonstrate conservative filtering and compare it to the full
time history smoothing approach.
The rest of this paper consists of three main parts. The first (Sec. II-V)
discusses heterogeneous fusion in terms of general probability distributions
(pdfs); Sec. II defines the heterogeneous decentralized fusion problem and
reviews the state of the art; Sec. III derives a family of exact and
approximate conditionally factorized CF methods; Conditional independence in
dynamic systems, including augmented state (AS) density and conservative
filtering for general pdfs, is discussed in Sec. IV, and Sec. V details the
proposed fusion algorithm. In the second part (Sec. VI) Gaussian distributions
are treated as a special case to develop: a closed-form CF2 fusion rule; the
information augmented state (iAS) method; and a conservative Kalman filtering
method. In the third part (Secs. VI-F and VII), numerical examples demonstrate the
potential communication and computation gains of the described methods (Sec.
VI-F) via static and dynamic multi-agent multi-target tracking simulations
(Sec. VII). Sec. VIII then draws conclusions and suggests avenues for
additional work.
## II Problem Statement and Related Work
To motivate the problem, consider a simple static target tracking problem with
one target and two tracking agents as a running example (Fig. 1a). Both agents
$i$ and $j$ track the position of the target, described by the random state
vector $x$, and are assumed to have perfect knowledge of their own position,
but unknown constant biases in the agent-target relative measurement vectors
$y_{i,k}$ and $y_{j,k}$, described by the random state vectors $s_{i}$ and
$s_{j}$, respectively, where $k$ indicates the time step. The agents can also
take relative observations $m_{i,k}$ and $m_{j,k}$ to different landmarks to
locally collect data on their measurement biases.
As shown in Fig. 1a, the agents’ local biases $s_{i}$ and $s_{j}$ become
coupled due to measurements $y_{i,k}$ and $y_{j,k}$ of the common target $x$.
In the case of homogeneous information fusion, the two agents perform
inference over and communicate the full joint pdfs describing all the random
variables, including each other’s local biases. But in the heterogeneous
fusion case, agents might hold a pdf over only a subset of the random
variables, for example over the target ($x$) and the agent’s local bias (e.g.,
$s_{i}$), making the dependencies between the local biases hidden. Therefore
an agent might not be aware of the existence of another local bias random
state vector (e.g., $s_{j}$) held by another agent. These dependencies are key
to the problem, and the main challenge in heterogeneous fusion compared to
homogeneous fusion is to account for them during fusion, where they are not
explicitly represented in the local posterior pdfs.
Figure 1: (a) Static and (b) Partially dynamic Bayesian networks for two local
random vectors $s_{i},s_{j}$ (local measurement biases) and one common random
vector $x$ (target state). In (a), $s_{i}$ and $s_{j}$ are conditionally
independent given the static state $x$; in (b) the full time history $x_{1:k}$
is required for conditional independence.
### II-A Problem Statement
Assume a network of $n_{a}$ autonomous agents, performing recursive
decentralized Bayesian updates to their prior pdf, with the goal of monitoring
some global random state vector $\chi_{k}$ at time $k$. Each agent
$i\in\{1,\dots,n_{a}\}$ is tasked with a local subset of states
$\chi_{i,k}\subseteq\chi_{k}$, an $n_{i}$-dimensional vector of random
variables at time $k$. An agent can update its local prior pdf for
$\chi_{i,k}$, by (i) using Bayes’ rule to recursively update a posterior pdf
for $\chi_{i,k}$ with independent sensor data $Y_{i,k}$ described by the
conditional likelihood $p(Y_{i,k}|\chi_{i,k})$, and (ii) performing peer-to-
peer fusion of external data relevant to $\chi_{i,k}$ from any neighboring
agent $j\in N_{a}^{i}$, where $N_{a}^{i}$ is the set of agents communicating
with $i$.
The heterogeneous fusion question is now: what is a peer-to-peer fusion rule
$\mathbb{F}$ which given the local prior distribution
$p_{i}(\chi_{i,k}|Z^{-}_{i,k})$ and a distribution over ‘relevant’ random
states from a neighboring agent $j$, $p_{j}(\chi_{j,k}^{r}|Z^{-}_{j,k})$,
returns a local fused posterior combining the data from both agents,
$p_{i}(\chi_{i,k}|Z_{i,k}^{+})$,
$\begin{split}p_{i}(\chi_{i,k}|Z_{i,k}^{+})=\mathbb{F}(p_{i}(\chi_{i,k}|Z^{-}_{i,k}),p_{j}(\chi_{j,k}^{r}|Z^{-}_{j,k})).\end{split}$
(1)
where $\chi_{j,k}^{r}$ is the subset of random states at agent $j$ that are
relevant to agent $i$ and assumed to be a non-empty set. $Z_{i,k}^{-}$
($Z_{j,k}^{-}$) is the local data agent $i$ ($j$) has prior to fusion with
agent $j$ ($i$), and $Z_{i,k}^{+}\equiv Z_{i,k}^{-}\cup Z_{j,k}^{-}$ is the
combined data after fusion. Notice that while the motivation is heterogeneous
fusion, the above statement is general and can be used for homogeneous and
heterogeneous fusion. For instance, in the target tracking example, if
$\chi_{i,k}=\chi_{j,k}=\chi_{j,k}^{r}=[x^{T},s_{i}^{T},s_{j}^{T}]^{T}$ then
(1) simplifies to a homogeneous fusion problem, which can be solved using
different exact or approximate fusion rules, as discussed in the introduction.
However, if $\chi_{i,k}=[x^{T},s_{i}^{T}]^{T}$ and
$\chi_{j,k}=[x^{T},s_{j}^{T}]^{T}$, then the ‘relevant’ random states in $j$
are $\chi_{j,k}^{r}=x$, and (1) describes heterogeneous fusion. In this case,
note that $i$’s knowledge about $s_{i}$, described by the marginal pdf
$p(s_{i}|Z_{i,k}^{+})$, should still be updated following fusion with respect
to $x$ (likewise for $j$’s marginal pdf over $s_{j}$). This key distinction
separates conventional homogeneous fusion from heterogeneous fusion: in the
latter, agents seek to update their posterior pdf over their entire local
random state vector, using data gained only from fusion over relevant subsets
of local random state vectors. Heterogeneous fusion thus encompasses the set
of problems where the set of relevant random states $\chi_{j,k}^{r}$ is a
subset of either agent $i$’s random states $\chi_{j,k}^{r}\subset\chi_{i,k}$,
or agent $j$’s random states $\chi_{j,k}^{r}\subset\chi_{j,k}$, or both.
The goal of this paper is to develop the theory and understanding of the
heterogeneous fusion problem defined above. To be able to track the
information flow in the system, the following assumptions are made:
1. The $n_{a}$ agents all communicate in a bilateral tree-structured network
topology. This guarantees that information can only flow along one path between
any two agents, thus avoiding loops.
2. Full rate communication and sequential processing of incoming messages from
other agents (an agent sends a message at time step $k$ before processing
messages received at the same time step $k$).
3. The random state vector definitions between neighboring agents in the network
communication graph have at least one overlapping random state in their local
random state vectors. Further, if agent $l$ is more than one hop away from
agent $i$ and has an overlapping random state with $i$, then all agents on the
tree path from $i$ to $l$ also share that common random state.
4. At time step $k=0$, if there is a common prior distribution over the common
states of interest it is known to both agents, i.e.,
$p_{i}(\chi_{i,0}\cap\chi_{j,0})=p_{j}(\chi_{i,0}\cap\chi_{j,0})=p_{c}(\chi_{i,0}\cap\chi_{j,0})$,
where $p_{c}$ is the common prior pdf. If there is no common prior, then
$p_{c}(\chi_{i,0}\cap\chi_{j,0})=1$.
5. The global problem structure (representing the full joint distribution over
$\chi_{k}$) for state estimation at the agents is such that it allows
conditional independence between random states that are local to each agent
given (conditioned on) the common random states between them.
An example for these assumptions is given in Fig. 2. Shown is a 5-agent,
6-target tracking problem, where the local measurements to targets
$x_{t}$ $(t=1,\dots,6)$ and landmarks to estimate local biases $s_{i}$ $(i=1,\dots,5)$ are
denoted by the full black arrows. The tree structured communication topology
is indicated by the dashed red arrows between the agents. It can be seen that
due to the bilateral communication between any two agents there are no loops
in the network and information can flow along only one path. Assumption 3 is also
demonstrated: for example, if agents 3 and 5 both estimate target 5’s random
states, agent 4 must estimate it as well. Finally, the conditional
independence holds, as for example agent 1’s local random state vectors
$x_{1}$ and $s_{1}$ are conditionally independent from agent 2’s local random
state vectors $x_{3}$ and $s_{2}$ given the common target random state vector
$x_{2}$ ($x_{1},s_{1}\perp x_{3},s_{2}|x_{2}$).
Figure 2: Target tracking example. Full black arrows denote local measurements
to targets $x_{t}$ and landmarks to estimate local biases $s_{i}$, red dashed
arrows indicate bi-directional communication channel between agents. The tree
structure topology can be seen, as there are no loops.
While assumptions 1-4 can be easily enforced, assuming conditional independence
(assumption 5) is not trivial in dynamic problems. In [11] the full time
history is augmented to maintain conditional independence, but the problem of
regaining conditional independence, while keeping the local pdf consistent and
conservative when filtering (marginalizing out past random states) remains
unsolved. Thus, in addition to a fusion rule $\mathbb{F}$, a complementary
conservative filtering operation is needed to regain conditional independence.
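Assumption 5 can be checked mechanically on the Bayesian network using the standard moralized-ancestral-graph test for d-separation. The generic helper below (our own illustrative code, not part of the paper) verifies the Fig. 1(a) claim that $s_{i}\perp s_{j}$ given the static state $x$, and shows that the independence fails if one conditions on the measurements instead:

```python
from itertools import combinations

def d_separated(parents, xs, ys, zs):
    """Test whether xs and ys are d-separated given zs in a DAG.

    The DAG is given as {node: [parent, ...]}. Uses the moralized
    ancestral graph criterion: keep only ancestors of xs | ys | zs,
    connect ('marry') co-parents, drop edge directions, delete zs,
    then check whether xs and ys are disconnected.
    """
    # 1. ancestral subgraph of xs | ys | zs
    anc, stack = set(), list(xs | ys | zs)
    while stack:
        n = stack.pop()
        if n not in anc:
            anc.add(n)
            stack.extend(parents.get(n, []))
    # 2. moral (undirected) graph on the ancestral set
    adj = {n: set() for n in anc}
    for n in anc:
        ps = [p for p in parents.get(n, []) if p in anc]
        for p in ps:
            adj[n].add(p)
            adj[p].add(n)
        for a, b in combinations(ps, 2):  # marry co-parents
            adj[a].add(b)
            adj[b].add(a)
    # 3. delete zs and test reachability from xs to ys
    stack, seen = list(xs), set(xs)
    while stack:
        for m in adj[stack.pop()] - zs - seen:
            if m in ys:
                return False
            seen.add(m)
            stack.append(m)
    return True

# Fig. 1(a), one time step: biases and target feed the relative measurements.
fig1a = {'yi': ['si', 'x'], 'yj': ['sj', 'x'],
         'mi': ['si'], 'mj': ['sj']}
print(d_separated(fig1a, {'si'}, {'sj'}, {'x'}))         # True
print(d_separated(fig1a, {'si'}, {'sj'}, {'yi', 'yj'}))  # False
```

The second query illustrates why dynamics are delicate: conditioning on the shared measurements alone opens a path between the local biases through the common target state.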
### II-B What is a Good Fusion Rule?
The formulation above leaves open what constitutes a good fusion rule
$\mathbb{F}$ in (1) and how to evaluate it. Recently, Lubold and Taylor
[13] claimed that a fusion rule should provide a posterior pdf which is
conservative, i.e., overestimates the uncertainty relative to the true pdf.
They suggest new definitions for conservativeness, but to the best of our
knowledge these are not widely used. In the context of homogeneous fusion, common
definitions use the terms consistent and conservative interchangeably [9],
[14] and assume that the point estimate uncertainty can be described by mean
and covariance. In the following, the intuition regarding conservativeness
from [13] is combined with the common definitions of consistency from
homogeneous fusion to define a ‘good’ heterogeneous fusion rule $\mathbb{F}$,
firstly in terms of pdfs and then in the case of Bayesian point estimation.
From the standpoint of pdf fusion, a good heterogeneous fusion rule
$\mathbb{F}$ results in an updated local posterior pdf that: (i) overestimates
the uncertainty relative to the true pdf, and (ii) is conservative over the
agent’s random states of interest $\chi_{i,k}\subseteq\chi_{k}$ relative to
the marginal pdf over $\chi_{i,k}$ of a consistent centralized pdf.
Consistency here means that the fused result does not overestimate or
underestimate the uncertainty. The centralized pdf refers to the posterior pdf
over the full random state vector $\chi_{k}$ conditioned on all the available
data from all the agents up and including time step $k$,
$p(\chi_{k}|\bigcup_{i\in N_{a}}Z^{-}_{i,k})$.
Since consistency and conservativeness are often defined by the first two
moments of the pdf, i.e., their mean and covariance, the above definition can
be further narrowed in the context of Bayesian point estimation. A good
heterogeneous fusion rule $\mathbb{F}$ in this case is then one that when
forming a point estimate from its resulting local posterior
$p_{i}(\chi_{i,k}|Z_{i,k}^{+})$, for example by finding the minimum mean
squared error (MMSE) estimate, the estimate: (i) overestimates the uncertainty
relative to the true state error statistics, and (ii) is conservative relative
to the marginal error estimate of a consistent centralized point estimator.
For example, assume the means of the Gaussian random state vectors $\chi_{i}$
and $\chi$ are $\mu_{\chi_{i}}$ and $\mu_{\chi}$, and the covariances,
describing the mean squared error, are
$\Sigma_{\chi_{i}}=E[(\chi_{i}-\mu_{\chi_{i}})(\chi_{i}-\mu_{\chi_{i}})^{T}]$
and $\Sigma_{\chi}=E[(\chi-\mu_{\chi})(\chi-\mu_{\chi})^{T}]$, respectively,
where $E[\cdot]$ is the expectation operator. The actual values are unknown,
and the approximate estimate of them is given by $\bar{\mu}_{\chi_{i}}$,
$\bar{\mu}_{\chi}$, and $\bar{\Sigma}_{\chi_{i}}$, $\bar{\Sigma}_{\chi}$.
The definitions above then translate to the following:
(i) Overestimation of the uncertainty relative to the true error statistics
implies that $\bar{\Sigma}_{\chi_{i}}-\Sigma_{\chi_{i}}\succeq 0$, i.e., the
resulting matrix difference is positive semi-definite (PSD).
(ii) Conservativeness relative to the marginal estimate of the centralized
estimator implies that
$\bar{\Sigma}_{\chi_{i}}-\bar{\Sigma}_{\chi_{i}}^{cent}\succeq 0$, where
$\bar{\Sigma}_{\chi_{i}}^{cent}$ is the marginal covariance over $\chi_{i}$,
taken from the joint centralized covariance over $\chi$.
The centralized estimate in this case can be considered consistent if, for
example, it passes the NEES hypothesis test [15]. Note that since a consistent
centralized estimate neither overestimates nor underestimates the uncertainty,
a conservative (higher uncertainty) local estimate is expected not to
underestimate the uncertainty. Thus, requiring the local estimate to be
conservative relative to a consistent centralized estimate implicitly requires
it to not underestimate the uncertainty of the true error statistics.
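Conditions (i) and (ii) reduce to eigenvalue tests on covariance differences; a minimal numerical check (an illustrative helper with names of our own choosing) could look like:

```python
import numpy as np

def is_conservative(Sigma_local, Sigma_ref, tol=1e-9):
    """Check Sigma_local - Sigma_ref >= 0 (PSD) via its eigenvalues.

    Sigma_ref plays the role of either the true error covariance
    (definition (i)) or the centralized marginal (definition (ii)).
    """
    diff = Sigma_local - Sigma_ref
    eigvals = np.linalg.eigvalsh(0.5 * (diff + diff.T))  # symmetrize
    return bool(eigvals.min() >= -tol)

# An inflated covariance is conservative w.r.t. the original...
P = np.array([[2.0, 0.5], [0.5, 1.0]])
print(is_conservative(1.5 * P, P))  # True
# ...but a deflated one is not.
print(is_conservative(0.5 * P, P))  # False
```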
### II-C Related Work
The idea of splitting a global state vector into $N$ subsets of $N_{a}$
vectors ($N_{a}\ll N$) has been posed to solve two different problems. The
first problem, as presented for example in [16, 17, 18, 19], tries to
reconstruct an MMSE _point estimate_ of the global state vector, by fusing
$N_{a}$ local heterogeneous _point estimates_. The second problem, which is
the focus of this paper, tries to find $N_{a}$ different local _posterior
pdfs_ , given all the information locally available up to the current time.
The problem of locally fusing data from non-equal state vectors dates back
to [20]. While this work offers a simple way for distributing into
heterogeneous local state vectors, it assumes that the local pdfs are
decoupled. In [21], Khan _et al._ relax that assumption, but ignore
dependencies between states in the fusion step and restrict the state
distribution to only states that are directly sensed by local sensors, where
in practice, an agent might be ‘interested’ in a larger set of local states
that are not all directly sensed. Similarly in cooperative localization [4],
[5] it is often assumed that agents are neighbors only if they take relative
measurements of each other. Then, the local state is augmented with the other
agent’s position states to process the measurement, often by assuming or
approximating marginal independence. It can be shown that this approach
represents a subset of the heterogeneous fusion problems considered in the
paper.
In [22], Chong and Mori use conditional independence in Bayesian networks to
reduce the subset of states to be communicated. However, they assume
hierarchical fusion and for dynamic systems only consider the case of
deterministic state processes, omitting the important case of stochastic
processes which is considered in this paper. The work by Whitacre _et al._[2],
does not formally discuss conditional independence but introduces the idea of
marginalizing over common states to fuse Gaussian distributions, when cross
correlations of common states are known. However, they implicitly assume cross
correlations among the conditioned, or unique, states are small. While this
assumption might hold for their application, it does not offer a robust
solution. Reference [23] uses CI with Gaussian distributions to relax the
assumption of known correlations of the fused state. Reference [24] suggests a
similar solution to [23] but restricts it to systems where all agents have the
same set of common states. Although scalable and simple, such approaches do
not account for dependencies between locally exclusive states.
This work aims at gaining insight and understanding on how such issues should
be addressed by exploiting the structure of the underlying dynamical system in
such problems.
## III Conditionally Factorized Channel Filters - CF2
The approach presented here assumes a tree-structured communication network
and starts with a general probabilistic channel filter (CF) formulation. Then,
as seen in Fig. 3, different fusion problems of interest are discussed.
Starting from the original homogeneous CF, where all agents keep and share
posterior pdf of the full global joint random state vector, cases are then
considered where an agent is only interested in and/or observes a subset of
the global joint distribution. It is shown that by leveraging conditional
independence, agents can communicate only marginal pdfs, leading to the
_Factorized CF (F-CF)_ and the _Bi-directional factorized CF (BDF-CF)_ ,
depending on whether a marginal pdf is sent in one or two directions,
respectively. These extensions to the CF enable communication reduction by
sending only new and relevant data while maintaining the accuracy of the
original CF.
Additionally, a branch of approximate CF methods is introduced which includes
the _approximate BDF-CF_ and the _heterogeneous state CF (HS-CF)_. In these
methods, agents communicate only marginal pdf over common subsets, leaving the
conditional pdfs over the unique subset of variables local at each agent. The
main difference between these two methods is the size of the local joint
distributions, which influences the local processing requirements. In
approximate BDF-CF, each agent processes its unobserved variables in order to
get a rough estimate over less relevant random states. However, in the HS-CF,
the state space for the local joint distribution is minimized to hold only
locally relevant variables, which significantly reduces computation as shown
in Sec. VI-F.
Figure 3: Different CFs derived in this paper, where the first three blocks
describe exact fusion and the dashed blocks approximate, with local relevant
states $s=[s_{i}^{T},s_{j}^{T}]^{T}$ and common states of interest $x$ for
fusion shown.
### III-A Decentralized Fusion and The Homogeneous CF
Let the full random state vector $\chi$ be defined as
$\chi=\begin{bmatrix}X\\ S\end{bmatrix}\in\mathbb{R}^{(n_{X}+n_{S})\times 1},$ (2)
where $X$ and $S$ are vectors with $n_{X}$ and $n_{S}$ elements, respectively.
In the target tracking example (Fig. 2),
$X=[x_{1}^{T},x_{2}^{T},\dots,x_{n_{t}}^{T}]^{T}$ represents the random state
vectors of the $n_{t}=6$ targets and $S=[s_{1}^{T},s_{2}^{T},\dots,s_{n_{a}}^{T}]^{T}$
represents the unknown random bias state vectors of the $n_{a}=5$ tracking
agents. For now it is assumed that the random state vector $\chi$ is static
and the time index notation is dropped (dynamic states will be revisited and
considered later). The state vector $\chi$ is a multivariate random variable
and can be described by the joint pdf, $p(\chi)=p(X,S)$.
The goal is to find the fused estimated underlying distribution of the state
$\chi$ given the joint data $Z^{+}_{k}=Z^{-}_{i,k}\cup Z^{-}_{j,k}$ at agents
$i$ and $j$. Using a distributed variant of Bayes’ rule, [8] shows that the
exact posterior pdf conditioned on joint data of $i$ and $j$ is given by:
$p_{f}(\chi|Z_{k}^{+})=\frac{1}{C}\cdot\frac{p_{i}(\chi|Z_{i,k}^{-})p_{j}(\chi|Z_{j,k}^{-})}{p_{c}(\chi|Z_{i,k}^{-}\cap
Z_{j,k}^{-})},$ (3)
where $C$ is a normalizing constant and $p_{c}$ is the posterior pdf
conditioned on the common data shared by agents $i$ and $j$ prior to the
current fusion.
In [7], Grime and Durrant-Whyte suggest that each agent add a filter on the
communication channel (hence channel filter) with a neighboring agent in a
tree network. The CF explicitly tracks $p_{c}$ between the two agents, thus
allowing exact removal of the common data to avoid double counting. While this
operation leads to exact fusion, i.e., equal to a centralized fusion center
(assuming full rate communication), the original CF formulation assumes that
both agents hold distributions over the full random state vector $\chi$. Thus
communication and local processing (inference) is with respect to (w.r.t.) the
full joint pdf $p(\chi)$. This approach is not scalable for large state-
spaces, which motivates the extension of the original CF into a family of
heterogeneous fusion methods that ‘break’ the joint pdf into smaller parts.
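For Gaussian pdfs the quotient in (3) turns into a subtraction in information (canonical) form, $Y_{f}=Y_{i}+Y_{j}-Y_{c}$. A minimal sketch of one such fusion step is given below (illustrative only; the paper's own Gaussian pseudo-code is developed in Sec. VI):

```python
import numpy as np

def channel_filter_fuse(mu_i, P_i, mu_j, P_j, mu_c, P_c):
    """One homogeneous CF fusion step for Gaussian pdfs.

    In information form (Y = P^-1, y = Y @ mu) the product/quotient in
    (3) becomes Y_f = Y_i + Y_j - Y_c and y_f = y_i + y_j - y_c, where
    the '_c' terms are the channel filter's record of the common data.
    """
    Yi, Yj, Yc = (np.linalg.inv(P) for P in (P_i, P_j, P_c))
    yi, yj, yc = Yi @ mu_i, Yj @ mu_j, Yc @ mu_c
    Yf = Yi + Yj - Yc
    yf = yi + yj - yc
    Pf = np.linalg.inv(Yf)
    return Pf @ yf, Pf

# Scalar example: common prior N(0, 4); each agent refined it locally.
mu_f, P_f = channel_filter_fuse(np.array([1.0]), np.array([[2.0]]),
                                np.array([-1.0]), np.array([[2.0]]),
                                np.array([0.0]), np.array([[4.0]]))
```

Subtracting the common information is what prevents the shared prior from being double counted when the two posteriors are multiplied.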
Intuitively, if agent $i$ only ‘cares’ about a subset $(x,s_{i})$ and $j$ only
‘cares’ about $(x,s_{j})$, we would like to enable communication of data only
regarding the common target $x$ as in the case of the heterogeneous state CF
(Fig. 3, right hand block). In the following, the structure of the underlying
estimation problem is exploited to conditionally factorize into relevant
subsets of the global random state vector, thereby enabling reduced
communication costs and allowing each agent to locally hold a smaller pdf,
i.e., reducing the computational cost of inference.
### III-B Exact Factorized CF
From the law of total probability, the joint pdf over $\chi$ can be
conditionally factorized as
$p(\chi)=p(X)\cdot p(S|X)=p(S)\cdot p(X|S).$ (4)
Using this factorization and taking $X=x$, (3) can be expressed as
$\begin{split}&p_{f}(\chi|Z_{k}^{+})=\\ &\frac{1}{C}\cdot\frac{p_{i}(x|Z_{i,k}^{-})p_{j}(x|Z_{j,k}^{-})}{p_{c}(x|Z_{i,k}^{-}\cap Z_{j,k}^{-})}\cdot\frac{p_{i}(S|x,Z_{i,k}^{-})p_{j}(S|x,Z_{j,k}^{-})}{p_{c}(S|x,Z_{i,k}^{-}\cap Z_{j,k}^{-})}.\end{split}$ (5)
In the following sections, this factorization of the fusion equation and
insights regarding conditional independence are used to derive new CF
variants. These differ in their assumptions and result in different
communication and local computation benefits.
#### III-B1 Factorized Channel Filter (F-CF)
First, a set of problems is considered where both agents take measurements to
a common random state vector $x$, but only one of them, for example agent $i$,
takes measurements to collect data on a local random state vector $S=s_{i}$.
It is further assumed that agent $j$ does not gain any data regarding $s_{i}$
from any other communicating agent. Thus $s_{i}$ and the data in $j$ that is
not available to agent $i$, denoted by $Z_{j\setminus i,k}^{-}$, are
conditionally independent given $x$ ($s_{i}\perp Z_{j\setminus i,k}^{-}|x$).
This leads to the following important conclusion:
$p_{j}(s_{i}|x,Z_{j,k}^{-})=p_{c}(s_{i}|x,Z_{i,k}^{-}\cap
Z_{j,k}^{-})=p_{f}(s_{i}|x,Z_{k-1}^{+}),$ (6)
which means that the data agent $j$ has regarding $s_{i}$ is equal to the
common data with $i$, and is exactly the data it had available after the
previous fusion step ($k-1$). Eq. (5) can therefore be written as,
$\begin{split}p_{f}(\chi|Z_{k}^{+})=\frac{1}{C}\cdot\frac{p_{i}(x|Z_{i,k}^{-})p_{j}(x|Z_{j,k}^{-})}{p_{c}(x|Z_{i,k}^{-}\cap
Z_{j,k}^{-})}\cdot p_{i}(s_{i}|x,Z_{i,k}^{-}).\end{split}$ (7)
The fusion equation above has the following intuitive interpretation: if agent
$j$ does not gain any new local data regarding state $s_{i}$ (conditioned on
$x$), it should only communicate the marginal pdf $p_{j}(x|Z_{j,k}^{-})$,
describing the common random state vector $x$, to agent $i$.
This variant of the CF is dubbed _Factorized CF (F-CF)_ , since the
distribution is factorized into two contributions: 1) marginal pdf regarding
$x$, based on data gathered and shared by the two agents, and 2) conditional
pdf regarding the local random state vector $s_{i}$ based only on data from
agent $i$. The transition from the original CF to the factorized CF is shown
in Fig. 3, and is enabled by using the conditional independence of the problem
to separate common and local data. This fusion rule provides a possible
$\mathbb{F}$ sought in (1): here, the communicated distributions are not over
the same set of random states, as $i$ sends the joint distribution
$p_{i}(\chi|Z^{-}_{i,k})$ and $j$ sends only the marginal distribution
$p_{j}(x|Z^{-}_{j,k})$. Note that while the posterior fused pdf is the same as
can be achieved by the homogeneous fusion rule (3), the math behind this
heterogeneous fusion rule is fundamentally different. In the latter, the
conditional pdf $p_{i}(s_{i}|x,Z_{i,k}^{-})$, which is kept local at agent $i$
or is simply replaced in agent $j$, treats the common random state vector $x$
as a function parameter. Then, the fused marginal is recombined with the local
(or replaced) state conditional pdf in the joint distribution via the law of
total probability (4) at each agent. This weights the conditional pdf
differently as a function of $x$ and changes the overall joint pdf, thus
implicitly/indirectly updating $s_{i}$.
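For Gaussians, this recombination step, fusing the marginal over $x$ and reattaching the unchanged conditional $p_{i}(s_{i}|x,Z_{i,k}^{-})$ via (4), can be sketched as follows (an illustrative implementation under our own naming, assuming the common states come first in the joint vector):

```python
import numpy as np

def fcf_recombine(mu, P, nx, mu_x_new, Pxx_new):
    """Replace the marginal over the first nx states of a Gaussian joint
    with a fused marginal, keeping the conditional p(s|x) unchanged, as
    in the F-CF recombination via the law of total probability (4).
    Illustrative sketch, not the paper's code.
    """
    mu_x, mu_s = mu[:nx], mu[nx:]
    Pxx, Pxs, Pss = P[:nx, :nx], P[:nx, nx:], P[nx:, nx:]
    K = Pxs.T @ np.linalg.inv(Pxx)  # gain mapping x-updates into s
    mu_s_new = mu_s + K @ (mu_x_new - mu_x)
    Pss_new = Pss - K @ Pxx @ K.T + K @ Pxx_new @ K.T
    Pxs_new = Pxx_new @ K.T
    mu_f = np.concatenate([mu_x_new, mu_s_new])
    P_f = np.block([[Pxx_new, Pxs_new], [Pxs_new.T, Pss_new]])
    return mu_f, P_f

# Joint over (x, s_i) with correlation; fuse a sharper marginal over x.
mu_f, P_f = fcf_recombine(np.array([0.0, 0.0]),
                          np.array([[2.0, 1.0], [1.0, 2.0]]),
                          1, np.array([1.0]), np.array([[1.0]]))
```

The local marginal over $s_{i}$ shifts and tightens even though no pdf over $s_{i}$ was communicated, which is exactly the implicit/indirect update described above.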
#### III-B2 Bi-directional Channel Filter (BDF-CF)
Consider now the case where both agents have different local random state
vectors $s_{i}$ and $s_{j}$ ($S=[s_{i}^{T},s_{j}^{T}]^{T}$). The global random
state vector is then $\chi=[x^{T},s_{i}^{T},s_{j}^{T}]^{T}$, and $\chi\sim
p(\chi)=p(x,s_{i},s_{j})$. If the local random state vectors are independent
given the common random state vector ($s_{i}\perp s_{j}|x$), then the
following holds:
$p(\chi)=p(x)\cdot p(s_{i}|x)\cdot p(s_{j}|x).$ (8)
Using the above factorization (5) can be split further,
$\begin{split}&p_{f}(\chi|Z^{+})=\frac{1}{C}\cdot\frac{p_{i}(x|Z_{i}^{-})p_{j}(x|Z_{j}^{-})}{p_{c}(x|Z_{i}^{-}\cap Z_{j}^{-})}\cdot\\ &\frac{p_{i}(s_{i}|x,Z_{i}^{-})p_{j}(s_{i}|x,Z_{j}^{-})}{p_{c}(s_{i}|x,Z_{i}^{-}\cap Z_{j}^{-})}\cdot\frac{p_{i}(s_{j}|x,Z_{i}^{-})p_{j}(s_{j}|x,Z_{j}^{-})}{p_{c}(s_{j}|x,Z_{i}^{-}\cap Z_{j}^{-})}.\end{split}$ (9)
As in the F-CF, assume that $s_{i}\perp Z_{j\setminus i,k}^{-}|x$ and
$s_{j}\perp Z_{i\setminus j,k}^{-}|x$, so (9) can be simplified:
$\begin{split}&p_{f}(\chi|Z^{+})=\\ &\frac{1}{C}\cdot\frac{p_{i}(x|Z_{i}^{-})p_{j}(x|Z_{j}^{-})}{p_{c}(x|Z_{i}^{-}\cap Z_{j}^{-})}\cdot p_{i}(s_{i}|x,Z_{i}^{-})\cdot p_{j}(s_{j}|x,Z_{j}^{-}).\end{split}$ (10)
This CF variant is dubbed the _Bi-directional factorized CF (BDF-CF)_. The
symmetry between the two agents can be seen: the agents share marginal pdfs
regarding the random state vector $x$, and then each agent shares its unique
conditional pdf regarding the local states $s_{i}$ or $s_{j}$. As in the F-CF,
this gives another fusion rule $\mathbb{F}$, only now $i$ sends its marginal
pdf $p_{i}(x,s_{i}|Z_{i}^{-})=p_{i}(x|Z_{i}^{-})\cdot
p_{i}(s_{i}|x,Z_{i}^{-})$ and $j$ sends its marginal pdf
$p_{j}(x,s_{j}|Z_{j}^{-})$. The posterior fused pdfs are equal,
$p_{i,f}(x,s_{i},s_{j}|Z^{+})=p_{j,f}(x,s_{i},s_{j}|Z^{+})$.
Both the F-CF and the BDF-CF are mathematically equivalent versions of the
original CF (for static systems). Their advantage though, is the considerable
reduction in communication requirements achieved by sending only new and
relevant data (given that the above assumptions are met). While the BDF-CF
(and F-CF) has the potential of saving communication costs, as shown later in
section VI-F, it still requires that every agent holds a local pdf over the
full global random state vector and communicates the conditional pdfs
regarding its local random states. By sacrificing part of the ‘exactness’ of
the CF over less relevant random states, significant reductions in both
computation and communication requirements can be gained. This leads to a new
family of approximate CF methods.
### III-C Approximate Factorized CF
As noted above, mathematical equivalence to the original CF is achieved under
the assumption that agents do not locally gain data regarding each other’s
local states. This might still require considerable communication volume, for
example, when $s_{i}$ is agent $i$’s local state vector for a navigation
filter, which typically has 16 or more states [25]. In the following
approximate heterogeneous fusion algorithms, communication is further reduced
by sending only the marginal pdfs regarding common random states.
#### III-C1 Approximate BDF-CF
Consider the case where agents prioritize their random states of interest
(e.g., $[x^{T},s_{i}^{T}]$ for agent $i$) over random states local to other
agents (e.g. $s_{j}$), i.e., it is more important that their local pdf is
accurate in portions of the states of interest than in portions relevant to
other agents. The approximate fusion rule at $i$ can then be written as
$\begin{split}p_{i,f}&(\chi|Z_{i}^{+})=\\\
&\frac{1}{C}\cdot\frac{p_{i}(x|Z_{i}^{-})p_{j}(x|Z_{j}^{-})}{p_{c}(x|Z_{i}^{-}\cap
Z_{j}^{-})}\cdot p_{i}(s_{i}|x,Z_{i}^{-})\cdot p_{i}(s_{j}|x).\end{split}$
(11)
A similar expression can be written for agent $j$ by switching $i$ with $j$
for the conditional pdfs over $s_{i}$ and $s_{j}$. The terms
$p_{i}(s_{i}|x,Z_{i}^{-})$ and $p_{i}(s_{j}|x)$ represent the local
conditional distributions agent $i$ holds regarding the states not mutually
monitored by $i$ and $j$. Notice that conditioned on $x$, the data local to
agent $i$ is independent from $s_{j}$, i.e., agent $i$ doesn’t gain any data
directly influencing agent $j$’s local random state vector $s_{j}$. This
version of the heterogeneous CF is named the _Approximate BDF-CF_, as it
approximates the posterior fused pdf that would be obtained by receiving the
conditional pdf $p_{j}(s_{j}|x,Z_{j}^{-})$ from agent $j$, as done in the BDF-
CF (10). The functional formulation for $\mathbb{F}$ still holds, even though
this fusion rule leads to different fused posterior results at agents $i$ and $j$.
Here the fundamental mathematical difference from homogeneous fusion is
highlighted again. In this heterogeneous fusion rule, fusion is over the
common random state vector alone; thus the local pdf is directly updated only
through the marginal pdf, and this fused marginal pdf is the same at both
agents. However, when agent $i$ ($j$) merges the updated marginal pdf back
into its local joint pdf via the law of total probability, the conditional
probabilities over $s_{i}$ and $s_{j}$, which depend on local and non-equal
sets of data ($Z_{i}^{-}\neq Z_{j}^{-}$), will scale each agent’s joint pdf
differently, leading to non-equal joint pdfs at agents $i$ and $j$.
#### III-C2 Heterogeneous State Channel Filter (HS-CF)
So far it has been assumed that all agents across the network hold a local pdf
over the same global random state vector $\chi$, which includes all locally
relevant random states. Thus, as the number of tasks in the network increases
so does the local computation load. However, if each agent were to hold a pdf
only over its locally relevant subset of states, the local computation would
scale with the agent’s tasks rather than with the global network tasks (or the
number of agents).
This motivates the last CF variant, the _Heterogeneous state CF (HS-CF)_.
Here, each agent holds its own pdf over heterogeneous subsets of random
states, e.g., $\chi_{i}=\begin{bmatrix}x^{T},s_{i}^{T}\end{bmatrix}^{T}$ and
$\chi_{j}=\begin{bmatrix}x^{T},s_{j}^{T}\end{bmatrix}^{T}$ for agents $i$ and
$j$, respectively. The fusion rule for each agent, over their locally relevant
random states, can be written as
$\begin{split}&p_{i,f}(\chi_{i}|Z_{i}^{+})=\frac{1}{C_{i}}\cdot\frac{p_{i}(x|Z_{i}^{-})p_{j}(x|Z_{j}^{-})}{p_{c}(x|Z_{i}^{-}\cap
Z_{j}^{-})}\cdot p_{i}(s_{i}|x,Z_{i}^{-})\\\
&p_{j,f}(\chi_{j}|Z_{j}^{+})=\frac{1}{C_{j}}\cdot\frac{p_{i}(x|Z_{i}^{-})p_{j}(x|Z_{j}^{-})}{p_{c}(x|Z_{i}^{-}\cap
Z_{j}^{-})}\cdot p_{j}(s_{j}|x,Z_{j}^{-}).\end{split}$ (12)
HS-CF fusion gives another fusion rule $\mathbb{F}$ for the problem statement
in (1), where agents fuse marginal pdfs regarding the common random state
vector $x$, and then update the joint pdf by merging the fused marginal with
the locally relevant conditional pdf, as in the Approximate BDF-CF. Notice
that here, as opposed to all previous heterogeneous fusion rules developed
above, the two pdfs are over different sets of random states; however, the
marginal pdfs over the common random state vector $x$ held by the two agents
will still be equal, i.e., $p_{i,f}(x)=p_{j,f}(x)$.
### III-D Log-Likelihood Representation
In multi-agent data fusion problems it is common to use the log of the pdf
instead of the pdf itself. In addition to the connection that these log-
likelihoods have with formal definitions of quantities of information (e.g.,
Shannon information, Fisher information), their advantages for data fusion are
twofold: first, fusion is done by summation and subtraction instead of
multiplication and division; second, when possible, it exposes the sufficient
statistics describing the pdf, e.g., the mean and covariance for Gaussian
distributions, as detailed in Sec. VI.
In the following, the fusion equations for the BDF-CF and HS-CF are expressed
in the log-likelihood representation, where the rest of the fusion rules can
be similarly translated and are omitted for brevity.
Taking the natural logarithm of equation (10) and combining local log-likelihoods
via the law of total probability, the BDF-CF can be written as
$\begin{split}\log[p_{f}(\chi|Z^{+})]&=\log[p_{i}(x,s_{i}|Z^{-}_{i})]+\log[p_{j}(x,s_{j}|Z^{-}_{j})]-\\\
&\log[p_{c}(x|Z^{-}_{i}\cap Z_{j}^{-})]+\tilde{C}.\end{split}$ (13)
This representation demonstrates the intuitive interpretation of the BDF-CF
fusion rule: the fused posterior, given the data $Z^{+}$, is built from the
sum of the contributions of the local data available at each agent, as it
affects the relevant subset of states, minus the contribution of the data they
have in common, where the latter only directly contributes to the marginal of
the common target $x$.
Similarly, the log-likelihood representation of the HS-CF fusion rule (12) is
given by
$\begin{split}\log[p_{i,f}&(\chi_{i}|Z^{+}_{i})]=\log[p_{i}(x,s_{i}|Z^{-}_{i})]+\\\
&\log[p_{j}(x|Z^{-}_{j})]-\log[p_{c}(x|Z^{-}_{i}\cap
Z_{j}^{-})]+\tilde{C}_{i}\\\
\log[p_{j,f}&(\chi_{j}|Z^{+}_{j})]=\log[p_{j}(x,s_{j}|Z^{-}_{j})]+\\\
&\log[p_{i}(x|Z^{-}_{i})]-\log[p_{c}(x|Z^{-}_{i}\cap
Z_{j}^{-})]+\tilde{C}_{j}.\\\ \end{split}$ (14)
In the rest of this paper, this representation is used to discuss the fusion
algorithm for general pdfs (Sec. V) and to derive a closed form fusion rule,
based on the sufficient statistics, for the special case where the underlying
distributions are Gaussian (Sec. VI).
## IV Conditional Independence in Dynamic Systems
For a dynamic or partially-dynamic Bayesian network, as in Fig. 1(b), it is
generally not possible to claim conditional independence of local states (or
local data) based on the filtered dynamic state, $s_{i}\not\perp s_{j}|x_{k}$;
i.e., $s_{i}$ and $s_{j}$ are not conditionally independent given $x_{k}$ when
past states are successively marginalized out over time. Since this
conditional independence is required to allow heterogeneous fusion, there is a
need to regain it in dynamic stochastic systems. There are two
approaches to solve this problem: (i) by keeping a distribution over the full
time history $p(x_{k:0},s_{i},s_{j}|Z_{k})$, where $x_{k:0}$ denotes all
common dynamic states from $k=0$ until current time step $k$; (ii) by
enforcing conditional independence after marginalization by disconnecting the
dependencies between the relevant random states. In the following, these
solutions for the case of general pdfs are discussed. Then, Sec. VI derives
specific closed-form representations for Gaussian distributions.
### IV-A Augmented State
The distribution over the full augmented state
$\chi_{k:0}=[x_{k:0}^{T},s_{i}^{T},s_{j}^{T}]^{T}$, given the data $Z_{k}^{-}$
can be recursively updated using the following formula [26]:
$p(\chi_{k:0}|Z_{k}^{-})=\frac{1}{C}\cdot
p(\chi_{k-1:0}|Z_{k-1}^{+})p(x_{k}|x_{k-1})p(Y_{k}|\chi_{k}),$ (15)
where $Z_{k-1}^{+}$ denotes an agent’s data after the previous fusion step
$k-1$, $Y_{k}$ the local sensor data gained at the current time step $k$, and
$Z_{k}^{-}$ the data at time step $k$ prior to fusion.
The augmented state approach leads to an increase in communication and
computation requirements as the size of the state vector $\chi_{k:0}$ grows.
However, agents only need to send messages over a time window the size of the
network (as information propagates through the tree), which bounds the
communication requirement. Furthermore, due to the Markovian property of the
dynamic system, efficient inference algorithms that factorize the distribution
by exploiting its structure can be designed. For example, for Gaussian
distributions, the information matrix structure is close to block-diagonal
(see Sec. VI-C), i.e., elimination can be done efficiently.
### IV-B Conservative Filtering
Full knowledge over past system states has been assumed thus far, which
enables conditional independence between the two agents’ local states and the
derivation of a family of heterogeneous fusion algorithms named CF2. However,
in many distributed fusion applications it is desirable to maintain only a
limited time window of recent state history. Thus, marginalizing out past
states into a smaller sliding window of recent time steps might be favored, as
maintaining the full accumulated state densities results in rapid state
dimension growth and yields a computation and communication burden. While
marginalization is rather trivial for homogeneous fusion problems, in
heterogeneous fusion extra care must be taken to maintain conditional
independence. Without loss of generality, for the rest of the paper, a sliding
window of only the current time step (as in the Kalman filter (KF), for
example) will be used.
The main assumption is that the local bias states $s_{i},s_{j}$ are
conditionally independent given the target states $x_{k:1}$. Since recursive
marginalization of the past target state $x_{k-1}$ results in a coupling
between $s_{i}$ and $s_{j}$ (Fig. 1(b)), it is desirable to enforce
conditional independence after marginalization. A similar principle is also
known in the graphSLAM literature as conservative sparsification [27, 28],
where Gaussian distributions are discussed.
Here, the problem for general distributions is first described in a more
formal way, with consideration of an open research question regarding the term
‘conservative’ for general pdfs. Then, Sec. VI-D focuses on Gaussian
distributions and details the solution for conservative sparse marginalization
to enable conservative filtering.
Given a joint distribution $p(x_{1},x_{2},s_{i},s_{j})$, described by the PGM
of Fig. 1(b), marginalizing out $x_{1}$, as done in filtering, results in
coupling of all the variables in its Markov blanket, $x_{2},s_{i}$ and
$s_{j}$. Since conditional independence between $s_{i}$ and $s_{j}$ is a
fundamental assumption in the basis of the proposed methods, it is necessary
to retain it after marginalization. Thus, the goal is to approximate the dense
distribution $p(x_{2},s_{i},s_{j})$ by a sparse distribution such that
$\tilde{p}(x_{2},s_{i},s_{j})=\frac{1}{C}\cdot
p(x_{2})p(s_{i}|x_{2})p(s_{j}|x_{2}).$ (16)
For the pdf to be consistent, the approximation $\tilde{p}(x_{2},s_{i},s_{j})$
has to be conservative w.r.t. $p(x_{2},s_{i},s_{j})$, which, loosely speaking,
means the approximate distribution $\tilde{p}$ overestimates the uncertainty
of the true distribution $p$. See Sec. II-B for a more detailed discussion and
definitions of consistency and conservativeness as treated in this paper.
## V Fusion Algorithm
Decentralized fusion algorithms, in general, are built out of two main steps:
sending out messages and fusing incoming messages. In the original
(homogeneous) channel filter (CF) algorithm [7], a message contains the
sufficient statistics of the Gaussian distribution, and fusion is done simply
by adding the received information and subtracting the common information
(here ‘information’ means the information vector and information matrix),
where both actions are over the same full state vector $\chi$. In the
heterogeneous CF2, on the other hand, either the communicated or the local
distributions are over different random state vectors. Thus there is a need to
clarify how to locally construct messages for, and fuse messages from,
neighboring agents.
First, different sets of random states are defined, allowing the separation of
the full global random state vector $\chi$ into smaller subsets as can be seen
in Fig. 4. Recall that $\chi_{i}$ ($\chi_{j}$) was previously defined as the
subset of locally relevant random states at agent _i_ (_j_), and let
$\chi_{c}^{ij}=\chi_{c}^{ji}$ be the set of common random states between
agents _i_ and _j_. Now, in some cases, as in the BDF-CF, agents might pass
through pdfs regarding random states in their random state vector, where the
data corresponding to those random states was not gained by local observation,
but through communication with a different neighboring agent. For example, in
the target tracking example of Fig. 2, agent 2 passes a pdf over target 1,
$p(x_{1}|Z_{1})$, based on data collected by agent 1 and communicated to agent
2 in a previous fusion step, rather than obtained by a direct local sensor
measurement of the target. These sets of states are dubbed ‘passed through’
states and defined as $\chi_{\neg i}^{ij}$ for states that are passed from _i_
to _j_ but are not local to _i_, and similarly $\chi_{\neg j}^{ji}$. With
these definitions the following holds,
$\chi=\chi_{i}+\chi_{j}-\chi_{c}^{ij}+\chi_{\neg i}^{ij}+\chi_{\neg j}^{ji}.$
It is now possible to discuss the content of messages and the expression for
local fusion for the BDF-CF and the HS-CF.
Figure 4: Diagram presenting the division of the full random state vector
$\chi$ into smaller subsets.
In the BDF-CF, an agent _i_ holds a posterior distribution over the full
global random state vector $\chi$. Assuming a tree communication topology for
the network of agents, agent $i$ needs to communicate to its neighboring agent
_j_ a distribution over the set of local states $\chi_{i}$ and the ‘passed
through’ set $\chi_{\neg i}^{ij}$. In the HS-CF, on the other hand, agent
_i_ ’s local distribution is only over the set of locally relevant states
$\chi_{i}$. Agent _i_ thus sends agent _j_ the following distributions,
$\begin{split}&\text{BDF-CF:}\ \ p_{i}^{ij}(\chi_{i}\cup\chi_{\neg i}^{ij})\ \
\ \forall j\in N^{i}_{a}\\\ &\text{HS-CF:}\ \ p_{i}^{ij}(\chi_{c}^{ij})\ \ \ \
\ \ \ \ \ \ \ \forall j\in N^{i}_{a},\end{split}$ (17)
where $N^{i}_{a}$ is the set of all agents in _i_ ’s neighborhood.
The local fusion equations require summing (and subtracting) log-likelihoods
over different sets of random states, depending on the sending agent, as each
pair of agents, in general, has a different set of random states in common, as
can be seen below:
$\begin{split}\text{BDF-CF:}\ \ \ \\\
\log[p_{i,f}(\chi)]&=\log[p_{i}(\chi_{i})]+\\\ &\sum_{j\in
N^{i}_{a}}\log[p_{j}^{ji}(\chi_{j}\cup\chi_{\neg
j}^{ji})]-\log[p_{c}^{ji}(\chi_{c}^{ji})]\\\ \text{HS-CF:}\ \ \ \ \\\
\log[p_{i,f}(\chi_{i}&)]=\log[p_{i}(\chi_{i})]+\\\ &\sum_{j\in
N^{i}_{a}}\log[p_{j}^{ji}(\chi_{c}^{ji})]-\log[p_{c}^{ji}(\chi_{c}^{ji})].\end{split}$
(18)
Notice that for a two-agent, one-common-target problem, with the definitions:
$\begin{split}&\chi=[x^{T},s_{i}^{T},s_{j}^{T}]^{T},\ \ \
\chi_{i}=[x^{T},s_{i}^{T}]^{T},\\\ &\chi_{j}=[x^{T},s_{j}^{T}]^{T},\ \ \ \ \ \
\chi_{c}^{ji}=x,\ \ \ \chi_{\neg j}^{ji}=\varnothing\end{split}$
using equations (17) and (18) results in the BDF-CF and HS-CF equations
given in (13) and (14), respectively. An example of the algorithm for a linear
Gaussian system, where agents communicate and fuse sufficient statistics
(information vector and information matrix), is given in the next section.
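The state-set bookkeeping above can be illustrated with plain Python sets; the state labels below are hypothetical stand-ins for the two-agent, one-common-target case, where set union naturally implements the add-and-remove-duplicates arithmetic of the decomposition:

```python
# Hypothetical state labels for a two-agent, one-common-target problem:
# common target states x, and local states s_i, s_j.
x, s_i, s_j = {"x"}, {"s_i"}, {"s_j"}

chi_i = x | s_i             # agent i's locally relevant states
chi_j = x | s_j             # agent j's locally relevant states
chi_c_ij = chi_i & chi_j    # common states between i and j
chi_pass_ij = set()         # 'passed through' states (empty in this example)
chi_pass_ji = set()

# Full global state vector: chi = chi_i + chi_j - chi_c + passed-through sets;
# with sets, the union removes the double-counted common states automatically.
chi = chi_i | chi_j | chi_pass_ij | chi_pass_ji
assert chi_c_ij == x
assert chi == {"x", "s_i", "s_j"}

# Message contents per eq. (17):
msg_bdfcf = chi_i | chi_pass_ij   # BDF-CF: local plus passed-through states
msg_hscf = chi_c_ij               # HS-CF: common states only
```

In a chain of three agents where agent 2 relays target 1's pdf to agent 3, the passed-through set $\chi_{\neg 2}^{23}$ would be nonempty and the BDF-CF message would grow accordingly, while the HS-CF message on each link stays limited to that link's common states.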
## VI Gaussians - A Closed Form Fusion Rule
The goal here is to derive a closed-form fusion rule for the special case of
linear-Gaussian distributions to demonstrate how it works in practice and get
insight into the structure of the heterogeneous fusion problem. The
information (canonical) form of the Gaussian distribution will be used to this
end, as it is particularly convenient for deriving and describing key steps in
data fusion processing, for example in the information filter, the original CF
[7], the CI algorithm [9], and more. The use of the Gaussian information form
has two main advantages: (i) multiplication and division over pdfs become
summation and subtraction of the sufficient statistics (information vector and
information matrix) obtained from the log pdfs; and (ii) it gives insight into
the conditional independence structure of the problem, since zero off-diagonal
terms in the resulting information matrices indicate conditional independence
between corresponding random states, as discussed further and utilized for
conservative filtering in Sec. VI-D.
### VI-A Preliminaries
Assume the full joint distribution over the random state vector $\chi$,
defined in (2), is a multivariate Gaussian with mean $\mu$ and covariance
matrix $\Sigma$,
$\mu=\begin{pmatrix}\mu_{X}\\\ \mu_{S}\\\ \end{pmatrix},\ \ \ \ \ \ \
\Sigma=\begin{pmatrix}\Sigma_{XX}&\Sigma_{XS}\\\
\Sigma_{SX}&\Sigma_{SS}\end{pmatrix}\\\ $ (19)
where $X$ and $S$ denote two correlated subsets of the joint random state
$\chi$. The information form of $\chi$ is given by
$\zeta=\Sigma^{-1}\mu=\begin{pmatrix}\zeta_{X}\\\ \zeta_{S}\\\ \end{pmatrix},\
\ \ \ \ \ \ \Lambda=\Sigma^{-1}=\begin{pmatrix}\Lambda_{XX}&\Lambda_{XS}\\\
\Lambda_{SX}&\Lambda_{SS}\end{pmatrix}\\\ $ (20)
The pdf in information form for the normally distributed state $\chi$, with
information vector $\zeta$ and information matrix $\Lambda$ is [29]:
$p(\chi;\zeta,\Lambda)=\frac{\exp(-\frac{1}{2}\zeta^{T}\Lambda^{-1}\zeta)}{\det(2\pi\Lambda^{-1})^{\frac{1}{2}}}\exp\big{(}{-\frac{1}{2}\chi^{T}\Lambda\chi+\zeta^{T}\chi\big{)}}.$
(21)
This pdf can also be expressed using factorization (4), where the marginal and
conditional distributions of a Gaussian are also Gaussian,
$\begin{split}p(X)&=\mathcal{N}^{-1}(X;\bar{\zeta}_{X},\bar{\Lambda}_{XX})\\\
p(S|X)&=\mathcal{N}^{-1}(S;\zeta_{S|X},\Lambda_{S|X})\end{split}$ (22)
where $\mathcal{N}^{-1}$ represents the information form of the Gaussian
distribution $\mathcal{N}$, and $(\bar{\zeta}_{X},\bar{\Lambda}_{XX})$ and
$(\zeta_{S|X},\Lambda_{S|X})$ are the sufficient statistics for the marginal
and conditional pdfs in information form, respectively, defined as [30]:
$\begin{split}\bar{\zeta}_{X}=\zeta_{X}-\Lambda_{XS}\Lambda^{-1}_{SS}\zeta_{S}&,\
\ \ \
\bar{\Lambda}_{XX}=\Lambda_{XX}-\Lambda_{XS}\Lambda^{-1}_{SS}\Lambda_{SX}\\\
\zeta_{S|X}=\zeta_{S}-\Lambda_{SX}X&,\ \ \ \
\Lambda_{S|X}=\Lambda_{SS}\end{split}$ (23)
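As a concrete check of (23), the short NumPy sketch below (the dimensions and random test matrix are arbitrary choices, not from the paper) computes the marginal information-form statistics via the Schur complement and verifies them against the moment form, where marginalization reduces to taking sub-blocks of $\mu$ and $\Sigma$:

```python
import numpy as np

# Joint Gaussian over chi = [X; S] in information form; X and S are 2-dim here.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Lam = A @ A.T + 4 * np.eye(4)   # information matrix Lambda (symmetric PD)
zeta = rng.standard_normal(4)   # information vector zeta
nx = 2
Lxx, Lxs = Lam[:nx, :nx], Lam[:nx, nx:]
Lsx, Lss = Lam[nx:, :nx], Lam[nx:, nx:]

# Marginal p(X) in information form, eq. (23): Schur complement over S.
zbar_x = zeta[:nx] - Lxs @ np.linalg.solve(Lss, zeta[nx:])
Lbar_xx = Lxx - Lxs @ np.linalg.solve(Lss, Lsx)

# Cross-check against the moment form (19): the marginal covariance and mean
# are simply the X sub-blocks of Sigma and mu.
Sigma = np.linalg.inv(Lam)
mu = Sigma @ zeta
assert np.allclose(np.linalg.inv(Lbar_xx), Sigma[:nx, :nx])
assert np.allclose(np.linalg.solve(Lbar_xx, zbar_x), mu[:nx])
```

Note the asymmetry the sketch makes visible: marginalization is cheap in moment form (sub-blocks) but requires a Schur complement in information form, and vice versa for conditioning.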
### VI-B Fusion
Starting with the original homogeneous fusion rule (3), substituting linear-
Gaussian distributions, taking logs, and differentiating once for the fused
information vector ($\zeta^{f}=(\Sigma^{f})^{-1}\mu^{f}$) and twice for the
fused information matrix ($\Lambda^{f}=(\Sigma^{f})^{-1}$) [8] yields the
fusion equations,
$\begin{split}\zeta^{f}=\zeta^{i}+\zeta^{j}-\zeta^{c}\ ,\ \ \ \ \
\Lambda^{f}=\Lambda^{i}+\Lambda^{j}-\Lambda^{c}.\end{split}$ (24)
These equations are the basis of the original linear-Gaussian CF [7], which
explicitly tracks the ‘common information’ vector and matrix
$(\zeta_{c},\Lambda_{c})$, describing the pdf over $\chi$ conditioned on the
common data between communicating pairs of agents $i$ and $j$ ($Z^{-}_{i}\cap
Z^{-}_{j}$) in tree-structured communication networks.
Define $\bar{\zeta}^{f}_{X}$ and $\bar{\Lambda}^{f}_{XX}$ to be the _fused_
marginal information vector and matrix, respectively, over the common target
random state $X$ corresponding to $p_{f}(X)$, represented in information form.
Without loss of generality, the fused marginal information vector and matrix
can be obtained using different fusion methods, exact or approximate (CI
[9], for example). This paper restricts attention to the CF for exact fusion.
Then, using (24), the fused marginal information vector and matrix for
Gaussian distributions are given by
$\begin{split}\bar{\zeta}^{f}_{X}=\bar{\zeta}^{i}_{X}+\bar{\zeta}^{j}_{X}-\bar{\zeta}^{c}_{X},\
\ \
\bar{\Lambda}^{f}_{XX}=\bar{\Lambda}^{i}_{XX}+\bar{\Lambda}^{j}_{XX}-\bar{\Lambda}^{c}_{XX}\end{split}$
(25)
The HS-CF fusion rule, eq. (14), for Gaussian pdfs can be represented by the
simple closed form expression,
$\begin{split}\zeta^{i,f}=&\left(\begin{array}[]{c}\bar{\zeta}^{f}_{X}\\\
\hdashline[2pt/2pt]0\end{array}\right)+\left(\begin{array}[]{c}\Lambda^{i}_{xs_{i}}(\Lambda^{i}_{s_{i}s_{i}})^{-1}\zeta^{i}_{s_{i}}\\\
\hdashline[2pt/2pt]\zeta^{i}_{s_{i}}\end{array}\right)\\\
\Lambda^{i,f}=&\left(\begin{array}[]{c;{2pt/2pt}c}\bar{\Lambda}^{f}_{xx}&0\\\
\hdashline[2pt/2pt]0&0\end{array}\right)+\left(\begin{array}[]{c;{2pt/2pt}c}\Lambda^{i}_{xs_{i}}(\Lambda^{i}_{s_{i}s_{i}})^{-1}\Lambda^{i}_{s_{i}x}&\Lambda^i_{xs_i}\\\
\hdashline[2pt/2pt]\Lambda^{i}_{s_{i}x}&\Lambda^i_{s_is_i}\end{array}\right),\end{split}$
(26)
where $\bar{\zeta}^{f}_{x}$ and $\bar{\Lambda}^{f}_{xx}$ are given in (25) and
it is assumed $X=x$ and $S=[s_{i}^{T},s_{j}^{T}]^{T}$. An equivalent
expression for the fused information vector and matrix at agent $j$ is
achieved by switching $i$ with $j$.
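A minimal NumPy sketch of the HS-CF update (25)-(26) at agent $i$ follows; the function names are illustrative, and the common-information terms $(\zeta^{c},\Lambda^{c})$ are assumed to be tracked by the channel filter as described above:

```python
import numpy as np

def info_marginal(zeta, Lam, nx):
    """Marginal over the first nx states in information form (Schur complement)."""
    Lxx, Lxs = Lam[:nx, :nx], Lam[:nx, nx:]
    Lsx, Lss = Lam[nx:, :nx], Lam[nx:, nx:]
    zbar = zeta[:nx] - Lxs @ np.linalg.solve(Lss, zeta[nx:])
    return zbar, Lxx - Lxs @ np.linalg.solve(Lss, Lsx)

def hscf_fuse(zeta_i, Lam_i, zeta_j, Lam_j, zeta_c, Lam_c, nx):
    """HS-CF update at agent i, eqs. (25)-(26): fuse the marginals over the
    common states x, then merge back with i's local conditional p(s_i | x)."""
    zbar_i, Lbar_i = info_marginal(zeta_i, Lam_i, nx)
    zbar_j, Lbar_j = info_marginal(zeta_j, Lam_j, nx)
    zbar_f = zbar_i + zbar_j - zeta_c              # eq. (25), vector part
    Lbar_f = Lbar_i + Lbar_j - Lam_c               # eq. (25), matrix part
    Lxs, Lss = Lam_i[:nx, nx:], Lam_i[nx:, nx:]
    zeta_f = np.concatenate([zbar_f + Lxs @ np.linalg.solve(Lss, zeta_i[nx:]),
                             zeta_i[nx:]])         # eq. (26), vector part
    Lam_f = np.zeros_like(Lam_i)                   # eq. (26), matrix part
    Lam_f[:nx, :nx] = Lbar_f + Lxs @ np.linalg.solve(Lss, Lxs.T)
    Lam_f[:nx, nx:] = Lxs
    Lam_f[nx:, :nx] = Lxs.T
    Lam_f[nx:, nx:] = Lss
    return zeta_f, Lam_f
```

Calling `hscf_fuse` at both agents (with the roles of $i$ and $j$ swapped) and marginalizing the two fused joints over $x$ reproduces the property $p_{i,f}(x)=p_{j,f}(x)$, while the local conditional parts remain different.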
It is important to note, as seen from (25), that the fused marginal pdf
$(\bar{\zeta}^{f}_{x},\bar{\Lambda}^{f}_{xx})$ is the same for agents $i$ and
$j$. However, the conditional part is kept local (the right-hand part of
(26)), which means that after fusion the _local joint_ distributions at $i$
(w.r.t. $x$ and $s_{i}$) and $j$ (w.r.t. $x$ and $s_{j}$) are different.
Although agents directly update only the information vector and matrix of the
marginal pdf over the common random state $x$, the local joint distribution in
moment representation (e.g., $\mu^{i,f},\Sigma^{i,f}$) will be updated, thus
also updating the estimates of the local states $s_{i}$ ($s_{j}$).
### VI-C The Information Augmented State
For Gaussian distributions there are two similar but not equivalent filters:
the _augmented_ state (AS) [12] and the _accumulated_ state density (ASD)
[26], which mostly differ in their retrodiction formulation. Retrodiction is
more involved in the ASD and not imperative to our solution, so the attention
here is limited to the augmented state implementation [12]. Here the emphasis
is on Gaussian state distributions; a solution of the ASD for general
distributions is given in [26].
The augmented state for a sliding time window from time step $n$ to time step
$k$ (denoted by the subscript $k:n$) given in [12] uses a covariance
formulation for the prediction step and an information representation for the
update step. However, since the algorithms developed in this paper work in
Gaussian information space, it is advantageous to use a full information
filter formulation. The derivation of the information AS (_iAS_) is given in
Appendix A, with the main results provided here, namely the prediction and
update steps in information form.
Consider a dynamic system described by the discrete-time equations
$\begin{split}&x_{k}=F_{k}x_{k-1}+Gu_{k}+\omega_{k},\ \ \ \
\omega_{k}\sim\mathcal{N}(0,Q_{k})\\\ &y_{k}=H_{k}x_{k}+v_{k},\ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ v_{k}\sim\mathcal{N}(0,R_{k}),\end{split}$ (27)
where $F_{k}$ is the state transition matrix, $G$ is the control effect
matrix, and $H_{k}$ is the sensing matrix; $\omega_{k}$ and $v_{k}$ are the
zero-mean white Gaussian process and measurement noise, respectively.
The predicted information vector and matrix for the time window $k:n$, given
all information up to and including time step $k-1$, are given by
$\begin{split}&\zeta_{k:n|k-1}=\begin{pmatrix}Q_{k}^{-1}Gu_{k}\\\
\zeta_{k-1:n|k-1}-\mathbf{F}^{T}Q_{k}^{-1}Gu_{k}\end{pmatrix}\\\
&\Lambda_{k:n|k-1}=\begin{pmatrix}Q_{k}^{-1}&-Q_{k}^{-1}\mathbf{F}\\\
-\mathbf{F}^{T}Q_{k}^{-1}&\Lambda_{k-1:n|k-1}+\mathbf{F}^{T}Q_{k}^{-1}\mathbf{F}\end{pmatrix},\\\
\end{split}$ (28)
where $\mathbf{F}=\big{[}F_{k-1}\ \ 0_{m\times m(k-n-2)}\big{]}$ and $m$ is
the size of the (not augmented) state vector. Notice the simplicity of the
above expression and its interesting interpretation: the predicted conditional
information matrix at the current time step, given previous time steps (upper
left block), depends inversely on process noise alone and is not affected by
the state dynamics.
For completeness, the measurement update in Gaussian information space is [12]
$\begin{split}&\zeta_{k:n|k}=\zeta_{k:n|k-1}+J_{k}i_{k}\\\
&\Lambda_{k:n|k}=\Lambda_{k:n|k-1}+J_{k}I_{k}J_{k}^{T},\end{split}$ (29)
where $J_{k}=\big{[}I_{m}\ \ 0_{m\times m(k-n-1)}\big{]}^{T}$,
$i_{k}=H_{k}^{T}R_{k}^{-1}y_{k}$ and $I_{k}=H_{k}^{T}R_{k}^{-1}H_{k}$.
For linear-Gaussian problems, the above _iAS_ can be used locally at each
agent, enabling conditional independence such that for $n=1$,
$\begin{split}p(x_{k:1},s_{i},s_{j})&=\\\ p(&x_{k:1}|Z_{k})\cdot
p(s_{i}|x_{k:1},Z_{k})\cdot p(s_{j}|x_{k:1},Z_{k}).\end{split}$ (30)
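The iAS recursion (28)-(29) can be sketched in NumPy as below, assuming the window vector is ordered newest-state-first (so $\mathbf{F}$ occupies the leading block); the function names are illustrative:

```python
import numpy as np

def ias_predict(zeta, Lam, F, G, u, Q):
    """iAS prediction, eq. (28): grow the window k-1:n by the new state x_k.
    zeta/Lam describe the current window with the newest state first."""
    m, N = F.shape[0], Lam.shape[0]
    Qi = np.linalg.inv(Q)
    Fbar = np.zeros((m, N))
    Fbar[:, :m] = F                              # F_bar = [F  0 ... 0]
    zeta_p = np.concatenate([Qi @ G @ u, zeta - Fbar.T @ Qi @ G @ u])
    Lam_p = np.zeros((N + m, N + m))
    Lam_p[:m, :m] = Qi                           # depends on process noise alone
    Lam_p[:m, m:] = -Qi @ Fbar
    Lam_p[m:, :m] = -Fbar.T @ Qi
    Lam_p[m:, m:] = Lam + Fbar.T @ Qi @ Fbar
    return zeta_p, Lam_p

def ias_update(zeta, Lam, H, R, y):
    """iAS measurement update, eq. (29): add measurement information
    at the newest state in the window."""
    m, N = H.shape[1], Lam.shape[0]
    J = np.zeros((N, m))
    J[:m, :m] = np.eye(m)                        # J_k = [I_m  0]^T
    i_k = H.T @ np.linalg.solve(R, y)
    I_k = H.T @ np.linalg.solve(R, H)
    return zeta + J @ i_k, Lam + J @ I_k @ J.T
```

As a sanity check, for a one-step scalar example the marginal of the predicted window over the newest state recovers the standard KF prediction, $P_{1}=FP_{0}F^{T}+Q$ and $\mu_{1}=F\mu_{0}+Gu$.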
It can be seen that the full information matrix and vector given in (28) grow
rapidly with the time step $k$, inducing high processing and communication
costs. The block tri-diagonal structure of the information matrix
$\Lambda_{k:1|k-1}$, resulting from the Markov property of the system
dynamics, can be utilized to reduce the computation burden. But this does not
resolve the communication load, which scales with the size of the tree
network, since messages must be able to propagate from one end of the tree to
the other. Instead, a filtering approach is taken that marginalizes out past
states, processing only a sliding window $k:n$ ($n>1$) while maintaining
conditional independence. This requires conservative filtering, discussed in
the next section.
### VI-D Conservative Filtering
The approach of Vial _et al._ [27] is adopted for the special case of Gaussian
distributions. The sparse structure of the marginalized approximate
information matrix is enforced by removing the links between $s_{i}$ and
$s_{j}$. In other words, given a true dense Gaussian distribution
$\mathcal{N}^{-1}(\zeta_{tr},\Lambda_{tr})$, a sparse approximate distribution
$\mathcal{N}^{-1}(\zeta_{sp},\Lambda_{sp})$ is sought such that the mean is
unchanged and the approximation is conservative in the positive-semidefinite
(PSD) sense,
$\begin{split}\Lambda_{tr}^{-1}\zeta_{tr}=\Lambda_{sp}^{-1}\zeta_{sp},\ \ \ \
\ \Lambda_{tr}-\Lambda_{sp}\succeq 0,\end{split}$ (31)
where again the information form of the Gaussian distribution is used.
Reference [27] minimizes the Kullback-Leibler Divergence (KLD) to find a lower
bound on the dense true information matrix $\Lambda_{tr}$. Along similar
lines, [31] suggests a method named ‘Uniform Pre-Transmission Eigenvalue-Based
Scaling’ to conservatively approximate a covariance matrix $\Sigma$ by
inflating a diagonal matrix $D$ built out of the diagonal entries of the full
matrix $\Sigma$. To achieve a conservative approximation $D_{c}$, $D$ is
inflated by multiplying by the largest eigenvalue of $Q=D^{-\frac{1}{2}}\Sigma
D^{-\frac{1}{2}}$. This results in $D_{c}=\lambda_{max}D$ such that
$D_{c}-\Sigma\succeq 0$.
This method is generalized here to find a lower bound sparse information
matrix $\Lambda_{sp}$ and regain conditional independence between $s_{i}$ and
$s_{j}$. This new generalized method differs from the one suggested in [31] in
two ways. Firstly, the approximation, $\Lambda_{sp}$, is allowed to be any
information matrix achieved by setting any off-diagonal elements of the true
dense information matrix $\Lambda_{tr}$ to zero, i.e., the resulting matrix is
in general not diagonal or even block-diagonal. Note that since the
information matrix (i.e. not the covariance) is changed, setting off-diagonal
elements to zero directly controls the conditional independence structure of
the underlying distribution. Specifically for the purpose of this paper, terms
relating local random states (e.g., $s_{i}$ and $s_{j}$) in $\Lambda_{tr}$ are
set to zero to regain conditional independence given common target states
(e.g., $x_{k:n}$).
The second change from the original method is that the information matrix is
approximated, as opposed to the covariance matrix. This means that a _lower_
bound is sought and not an _upper_ bound, i.e. ‘information’ must be deflated,
instead of uncertainty being inflated. This is achieved by choosing the
_minimal_ eigenvalue of
$\tilde{Q}=\Lambda_{sp}^{-\frac{1}{2}}\Lambda_{tr}\Lambda_{sp}^{-\frac{1}{2}}$,
resulting in
$\Lambda_{tr}-\lambda_{min}\Lambda_{sp}\succeq 0,$ (32)
where $\lambda_{min}\Lambda_{sp}$ is the sought conservative sparse marginal
approximation of the dense information matrix $\Lambda_{tr}$. The new
information vector is computed such that (31) holds:
$\zeta_{sp}=(\lambda_{min}\Lambda_{sp})\Lambda_{tr}^{-1}\zeta_{tr}.$ (33)
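The deflation of (32)-(33) can be sketched in NumPy as follows; the boolean-mask interface and the assumption that the sparsified matrix remains positive definite are illustrative choices:

```python
import numpy as np

def conservative_sparsify(zeta_tr, Lam_tr, mask):
    """Conservative sparse approximation, eqs. (32)-(33): zero the information-
    matrix entries where mask is False, deflate by the minimal eigenvalue of
    Lam_sp^{-1/2} Lam_tr Lam_sp^{-1/2}, and rescale the information vector so
    the mean is preserved. Assumes the sparsified matrix stays positive definite."""
    Lam_sp = np.where(mask, Lam_tr, 0.0)
    w, V = np.linalg.eigh(Lam_sp)                 # eigendecomposition gives the
    inv_sqrt = V @ np.diag(w ** -0.5) @ V.T       # inverse matrix square root
    lam_min = np.linalg.eigvalsh(inv_sqrt @ Lam_tr @ inv_sqrt).min()
    Lam_c = lam_min * Lam_sp                      # eq. (32): Lam_tr - Lam_c >= 0
    zeta_c = Lam_c @ np.linalg.solve(Lam_tr, zeta_tr)   # eq. (33): mean unchanged
    return zeta_c, Lam_c
```

For the filtering use case above, with states ordered $[x^{T},s_{i}^{T},s_{j}^{T}]^{T}$, the mask zeros only the $s_{i}$-$s_{j}$ cross entries, restoring conditional independence of the local states given $x$.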
### VI-E Closed-Form Algorithm
This subsection provides a summary of the different steps each agent $i$ takes
to locally process, communicate and fuse data with its neighbor $j$. While the
steps described in the pseudocode of Algorithm 1 are general, in the sense
that they can be applied with any pdf, the algorithm as presented assumes
linear-Gaussian distributions and uses the theory developed above to provide
equation references (green arrows) to the closed-form expressions for the
different operations. Following Sec. V, the BDF-CF and HS-CF algorithms are
used as an example to detail the algorithm from the perspective of one agent
$i$ communicating with a neighbor $j$.
Algorithm 1 BDF-CF / HS-CF algorithm
1:Define: $\chi_{i}$, $\chi_{j}$, $\chi_{c}^{ij}$ $\chi_{\neg i}^{ij}$,
$\chi_{\neg j}^{ji}$, Priors $\triangleright$ Fig. 4
2:for All time steps do
3: Propagate local states $\triangleright$ Eq. 28
4: Propagate common states in the CF
5: if BDF-CF then
6: Conservative filtering $\triangleright$ Sec. VI-D
7: else if HS-CF then
8: Marginalize out past state
9: end if
10: Measurement update $\triangleright$ Eq. 29
11: Send message $\triangleright$ Eq. 17
12: Fuse received message $\triangleright$ Eq. 18
13: Update common information
14:end for
15:return
### VI-F Calculation of Communication and Computation Savings
To highlight the potential gain of the proposed CF2 methods with respect to
communication and computation complexity and how they change with scale, three
numerical examples (small, medium and large) of a multi-agent multi-target
tracking problem are presented. Consider the problem introduced earlier of
tracking $n_{t}$ ground targets by $n_{a}$ agents (trackers), where each agent
computes a local KF estimate, i.e., the system dynamics are assumed to be
linear with additive Gaussian white noise (27). Each agent $i$ has 6 unknown
local position states described by the random vector $s_{i}$ and takes
measurements of $n_{t}^{i}$ targets, each having 4 position/velocity states
described by the random vector $x_{t}$ (e.g., east and north coordinates and
velocities). The full state vector then has $6n_{a}+4n_{t}$ random states.
Assume a tree topology in ascending order, where each agent tracks 1/2/3
targets, corresponding to the small/medium/large examples, respectively, but
has only one target in common with its neighbor. Now, using the same logic as
before, assume that each agent is only concerned with the targets it takes
measurements of and its own position states. Each agent then has only 10/14/18
locally ‘relevant’ random states when tracking 1/2/3 targets, respectively.
Table I presents a comparison between the different channel filters for the
three different scale problems. The baseline for comparison is the original
(homogeneous) CF, with each agent estimating the full state vector. For the
communication data requirement, double precision (8 bytes per element) is
assumed. Since the matrices are symmetric covariances, agents only needs to
communicate $n(n+1)/2$ upper diagonal elements, where $n$ is the number of
random states. Each agent’s computation complexity is determined by the cost
of inverting an $n\times n$ matrix. It can be seen from the table that even
for the small scale problem, the communication data reduction is significant;
the BDF-CF requires about 42.7% of the original CF, while the approximate BDF-CF and the HS-CF require only 9.2%, as agents communicate only the information vectors and matrices of common target states. These gains increase with scale: for the medium and large problems the BDF-CF communication is about 33% of the original CF, and for the approximate methods it is less than 1%.
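As a sanity check, the CF row of Table I can be reproduced with a short calculation. The sketch below encodes our reading of the setup (6 agent states, 4 target states per target, $n_{a}-1$ tree links, two-way traffic, 8 bytes per element); the link count and two-way assumption are inferred from context, not stated explicitly.

```python
# Back-of-the-envelope reproduction of the CF data requirements in Table I.
# Assumptions (ours): each agent holds 6 states, each target 4 states, a
# tree of n_a agents has (n_a - 1) two-way links, and each message carries
# an information vector (n elements) plus the upper triangle of a symmetric
# information matrix (n(n+1)/2 elements) at 8 bytes per element.
def cf_data_req_kb(n_a, n_t):
    n = 6 * n_a + 4 * n_t                 # full state dimension
    elems = n + n * (n + 1) // 2          # vector + upper-triangular matrix
    links = n_a - 1                       # tree topology
    return 2 * links * elems * 8 / 1000   # two-way traffic, in KB

for n_a, n_t in [(2, 1), (10, 11), (25, 51)]:
    print(n_a, n_t, round(cf_data_req_kb(n_a, n_t), 1))
# -> about 2.4, 801.2 and 24264.6 KB; Table I lists 2.4, 801 and 24300 (rounded)
```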
Another important aspect is computation complexity. As seen from the table,
while the BDF-CF methods require each agent to process the full random state
vector, the HS-CF offers significant computational reduction. Since each agent
only processes the locally relevant random states, the HS-CF scales with
subset of states and not with number of agents and targets. In the medium and
large scale problems, as the size of the full system random vector states
increase, the HS-CF computation complexity is less than 1% of the other
methods, which can be critical in terms of computing power for resource-
constrained tracking platforms.
TABLE I: Data communication requirements and computational complexity for different fusion methods, for different problem scales.
Method | Quantity | Small | Medium | Large
---|---|---|---|---
| ($n_{a}$, $n_{t}$) | (2, 1) | (10, 11) | (25, 51)
| $n_{t}^{i}$ | 1 | 2 | 3
CF | Data req. [KB] | 2.4 | 801 | 24300
| Complexity | $O(16^{3})$ | $O(104^{3})$ | $O(354^{3})$
BDF-CF | Data req. [%CF] | 42.7 | 33.7 | 33.2
| Complexity | $O(16^{3})$ | $O(104^{3})$ | $O(354^{3})$
Approx. BDF-CF | Data req. [%CF] | 9.2 | 0.25 | 0.02
| Complexity | $O(16^{3})$ | $O(104^{3})$ | $O(354^{3})$
HS-CF | Data req. [%CF] | 9.2 | 0.25 | 0.02
| Complexity | $O(10^{3})$ | $O(14^{3})$ | $O(18^{3})$
## VII Simulation Studies
Multi-agent multi-target tracking simulation scenarios were performed to
compare and validate the proposed algorithms, where the focus is on the BDF-CF
and HS-CF heterogeneous fusion algorithms. Since the dynamics and measurement
models are assumed to be linear with Gaussian noise, Algorithm 1 is used
together with the _iAS_ as the inference engine, i.e., agents estimate the
sufficient statistics (information vector and matrix) of the random state
vector. First, the algorithms are tested on a static target case, where
conditional independence of the local states can be easily guaranteed. This is
followed by a dynamic target test case with only two agents and one target, to
validate and compare the augmented state (smoothing) and the conservative
filtering approaches. Lastly, the conservative filtering approach is used for
a more interesting 4-agents 5-target scenario. Results for all the different
scenarios are based on Monte Carlo simulations and compare the new algorithms
to an optimal centralized estimator.
### VII-A Example 1 - Static Case
A chain network, consisting of five agents connected bilaterally in ascending
order $(1\leftrightarrow 2\leftrightarrow 3\leftrightarrow 4\leftrightarrow
5)$, as depicted in Fig. 2, attempts to estimate the position of six
stationary targets in a $2D$ space. Assume each tracking agent $i\ =1,...,5$
has perfect self-position knowledge, but a constant agent-target relative position measurement bias vector in the east and north directions, $s_{i}=[b_{e,i},b_{n,i}]^{T}$. In every time step $k$, each agent takes two
kinds of measurements: one for the target and one to collect data on the local
sensor bias random vector, which can be transformed into the linear pseudo-
measurements,
$\displaystyle\begin{split}y^{t}_{i,k}&=x^{t}+s_{i}+v^{i,1}_{k},\ \
v^{i,1}_{k}\sim\mathcal{N}(0,R^{1}_{i}),\\\ m_{i,k}&=s_{i}+v^{i,2}_{k},\ \
v^{i,2}_{k}\sim\mathcal{N}(0,R^{2}_{i}),\end{split}$ (34)
where $y^{t}_{i,k}$ is agent $i$ relative measurement to target $t$ at time
step $k$ and $m_{i,k}$ is a measurement to a known landmark at time step $k$
for bias estimation. $x^{t}=[e^{t},n^{t}]^{T}$ is the east and north position
of target $t\ =1,...,6$. The tracking assignments for each agent, along with the measurement noise covariances for the relative target ($R^{1}_{i}$) and landmark ($R^{2}_{i}$) measurements, are given in Table II and illustrated
in Fig. 2. The relative target measurement noise characteristics for different
targets measured by the same agent are taken to be equal. For example, agent
$1$ takes noisy measurements to targets $1$ and $2$ with $1\ m^{2}$ and $10\
m^{2}$ variances in the east and north directions, respectively, and $3\
m^{2}$ in both directions for the landmark.
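To make the measurement model concrete, the following sketch generates one pair of pseudo-measurements for agent 1 per (34), using the Table II noise values; the true target position and bias are made-up numbers, not simulation parameters from the paper.

```python
import numpy as np

# Illustrative generation of the pseudo-measurements in (34) for agent 1,
# with Table II noise values R1 = diag([1, 10]) and R2 = diag([3, 3]).
rng = np.random.default_rng(0)
x_t = np.array([100.0, 50.0])   # true east/north target position (made up)
s_1 = np.array([2.0, -1.0])     # true measurement bias of agent 1 (made up)
R1 = np.diag([1.0, 10.0])       # relative target measurement noise covariance
R2 = np.diag([3.0, 3.0])        # landmark (bias) measurement noise covariance

y = x_t + s_1 + rng.multivariate_normal(np.zeros(2), R1)  # target measurement
m = s_1 + rng.multivariate_normal(np.zeros(2), R2)        # bias measurement
```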
Following the definitions from Sec. III, the full state vector includes 22
random states
$\chi=[{x^{1}}^{T},...,{x^{6}}^{T},s_{1}^{T},...,s_{5}^{T}]^{T},$ (35)
where for the HS-CF fusion, define the local random state vector at agent $i$
$\chi_{i}=[{X^{\mathcal{T}_{i}}}^{T},s_{i}^{T}]^{T}.$ (36)
Here $\mathcal{T}_{i}$ is the set of targets observed by agent $i$, and $X^{\mathcal{T}_{i}}$ includes all target random state vectors $x^{t}$ such that $t\in\mathcal{T}_{i}$. In other words, the local random state vector at each
agent includes only locally relevant targets and the local biases. In the HS-
CF, where two agents $i$ and $j$ only share the marginal statistics regarding
common states, messages should consist only of data regarding targets
$t\in\mathcal{T}_{i}\cap\mathcal{T}_{j}$. For example, according to Table II
and the network tree topology, for agents $1$ and $2$:
$\mathcal{T}_{1}\cap\mathcal{T}_{2}=T_{2}$.
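The local-state bookkeeping of (36) and the shared-target computation can be sketched with plain Python sets. In this static example each target carries 2 position states (east/north) and each agent a 2-state bias.

```python
# Bookkeeping for the HS-CF local state vectors (36), using the Table II
# tracking assignments.
tracked = {1: {1, 2}, 2: {2, 3}, 3: {3, 4, 5}, 4: {4, 5}, 5: {5, 6}}

def shared_targets(i, j):
    """Targets whose marginal statistics agents i and j exchange."""
    return tracked[i] & tracked[j]

def local_dim(i, target_dim=2, bias_dim=2):
    """Dimension of chi_i: locally tracked targets plus the local bias."""
    return target_dim * len(tracked[i]) + bias_dim

# Agents 1 and 2 share only target 2; agent 1's chi_1 spans 6 of the 22 states.
```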
The data communication requirements for this relatively small system were
calculated; similar to the results from Sec. VI-F, the BDF-CF and the HS-CF
require about 38% and 2.6% of the original CF communication data
requirements, respectively.
TABLE II: Local platform target assignments and sensor measurement error covariances.
Agent | Tracked Targets | $R_{i}^{1}[m^{2}]$ | $R_{i}^{2}[m^{2}]$
---|---|---|---
1 | $T_{1},T_{2}$ | diag([1,10]) | diag([3,3])
2 | $T_{2},T_{3}$ | diag([3,3]) | diag([3,3])
3 | $T_{3},T_{4},T_{5}$ | diag([4,4]) | diag([2,2])
4 | $T_{4},T_{5}$ | diag([10,1]) | diag([4,4])
5 | $T_{5},T_{6}$ | diag([2,2]) | diag([5,5])
The BDF-CF and the HS-CF performance was tested with 500 Monte Carlo
simulations and compared to a centralized estimator. As mentioned before, in
the BDF-CF each platform processes the full random state vector (35), while in
the HS-CF each platform processes only the locally relevant random states
(36). In the simulations fusion occurs in every time step.
Figure 5: Example 1 (static) - NEES hypothesis test based on 500 Monte Carlo
simulations, where the dashed lines show bounds for 95$\%$ confidence level.
Shown are test results of agents 1 and 4 for different fusion methods; the results indicate that all methods produce consistent estimates.
Fig. 5 shows NEES consistency test [15] results for agents 1 and 4 and a
centralized estimator. Results are based on 500 Monte Carlo simulations with
95% confidence level. It can be seen that the MMSE estimates of both agents,
with the BDF-CF and the HS-CF are consistent. Note that since the HS-CF only
estimates a subset of 6 states out of the full 22 state vector, the
consistency bounds are different for these methods. The second test to
determine the performance of a fusion method is whether it is conservative
relative to a centralized estimator (see Sec. II-B). To verify, the local
covariance matrix must be checked to see whether it is pessimistic relative to
the centralized covariance. One simple test is by computing the eigenvalues of
$\bar{\Sigma}_{\chi_{i}}-\bar{\Sigma}_{\chi_{i}}^{cent}$, where
$\bar{\Sigma}_{\chi_{i}}$ is the agent’s covariance and
$\bar{\Sigma}_{\chi_{i}}^{cent}$ is the centralized marginal covariance over
$\chi_{i}$. If the minimal eigenvalue is greater than or equal to zero, then all eigenvalues are nonnegative and the MMSE estimate is conservative in the PSD sense. In the above simulations the minimal eigenvalues across all agents and all runs were 0, for both the BDF-CF and the HS-CF, thus
they are conservative in the PSD sense.
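The PSD conservativeness test described above is a one-liner with an eigenvalue routine; the covariances below are illustrative, not simulation output.

```python
import numpy as np

# Conservativeness check in the PSD sense: the local covariance minus the
# centralized marginal covariance must have no negative eigenvalues.
def is_conservative(Sigma_local, Sigma_cent, tol=1e-9):
    eigvals = np.linalg.eigvalsh(Sigma_local - Sigma_cent)
    return eigvals.min() >= -tol

# Illustrative example: an inflated local covariance is conservative,
# a deflated one is overconfident.
Sigma_cent = np.array([[2.0, 0.5], [0.5, 1.0]])
conservative = is_conservative(1.5 * Sigma_cent, Sigma_cent)       # True
overconfident = not is_conservative(0.5 * Sigma_cent, Sigma_cent)  # True
```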
### VII-B Example 2 - Dynamic Case
In dynamic systems, as discussed in Sec. IV, there is a challenge in
maintaining conditional independence. Two solutions are suggested: the first uses the _iAS_, keeping a distribution over the full time history of the target random states, which is costly in both communication and computation requirements. The second, more efficient, solution is to perform
conservative filtering by enforcing conditional independence in the
marginalization step and deflating the information matrix (Algorithm 1). Since
this process loses information due to deflation, the BDF-CF becomes an
approximate solution and is expected to be less accurate than the _iAS_
implementation. This is shown using a two agent, one (dynamic) target tracking
simulation. Here the target follows a linear dynamics model with time-varying
acceleration control,
$x_{k+1}=Fx_{k}+Gu_{k}+\omega_{k},\ \ \omega_{k}\sim\mathcal{N}(0,0.08\cdot
I_{n_{x}\times n_{x}}),$ (37)
where
$\begin{split}F=\begin{bmatrix}1&\Delta t&0&0\\\ 0&1&0&0\\\ 0&0&1&\Delta t\\\
0&0&0&1\end{bmatrix},\quad G=\begin{bmatrix}\frac{1}{2}\Delta t^{2}&0\\\
\Delta t&0\\\ 0&\frac{1}{2}\Delta t^{2}\\\ 0&\Delta
t\end{bmatrix}.\end{split}$ (38)
The acceleration input in the east and north directions is given by
$u_{k}=\begin{bmatrix}a_{e}\cdot\cos(d_{e}\cdot k\Delta t)\ \
a_{n}\cdot\sin(d_{n}\cdot k\Delta t)\end{bmatrix}^{T}$, where $a_{e}/a_{n}$
and $d_{e}/d_{n}$ define the east and north amplitude and frequency,
respectively. The measurement model is as in the static example, given in (34)
with noise parameters defined for agents 1 and 2 in Table II.
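The dynamics (37)-(38) and the sinusoidal acceleration input can be written down directly; the amplitude and frequency values below are placeholders, since the paper does not list the ones actually used.

```python
import numpy as np

# Dynamics matrices of (38) for time step dt, and the sinusoidal input.
def dynamics(dt):
    F = np.array([[1, dt, 0, 0],
                  [0, 1,  0, 0],
                  [0, 0,  1, dt],
                  [0, 0,  0, 1]], dtype=float)
    G = np.array([[0.5 * dt**2, 0],
                  [dt,          0],
                  [0, 0.5 * dt**2],
                  [0,          dt]], dtype=float)
    return F, G

def accel_input(k, dt, a_e=1.0, a_n=0.5, d_e=0.1, d_n=0.2):
    """u_k with made-up amplitudes (a_e, a_n) and frequencies (d_e, d_n)."""
    return np.array([a_e * np.cos(d_e * k * dt), a_n * np.sin(d_n * k * dt)])

F, G = dynamics(0.1)
x_next_mean = F @ np.zeros(4) + G @ accel_input(0, 0.1)
```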
Results in Fig. 6 show a comparison between the _iAS_ with the full time
window ($k:1$) and filtering approaches (sliding window of size 1), using the
BDF-CF and the HS-CF for fusion. The plots in figure (a) show the NEES
consistency tests (75 simulations, 95% confidence level) for agent 1, with the centralized (filtering results presented) and the BDF-CF in the upper plot and the HS-CF with its different bounds in the lower. The results are consistent,
with pessimistic behaviour of the BDF-CF due to the conservative filtering
approach. Panel (b) shows the root mean squared error (RMSE) results of the same
simulation, with agent 1 in the upper plot and agent 2 in the lower. The _iAS_
has better performance, with smaller RMSE and $2\sigma$ bounds, which is to be
expected due to its smoothing operation, but at the expense of much higher
computation and communication load.
The conservativeness of the fused estimate was checked again by computing the
minimal eigenvalues across 75 simulations and the two agents. For the _iAS_
approach, the BDF-CF and the HS-CF had small negative minimal eigenvalues of
$-0.002$ and $-0.0015$ respectively, thus slightly overconfident relative to
the centralized. For the conservative filtering approach, the BDF-CF was
conservative with minimal eigenvalue of $0.0008$ and the HS-CF was
overconfident with minimal eigenvalue of $-0.26$.
Figure 6: Results from a 75 Monte Carlo simulation of a 2 agent, 1 target
dynamic target tracking scenario. Shown is a comparison between _iAS_ and
filtering. (a) NEES test with the upper figure showing the BDF-CF compared to
the centralized (with filtering) and the lower showing the HS-CF. Here the
dashed black lines show bounds for 95$\%$ confidence level. (b) RMSE
comparisons between the BDF-CF and the HS-CF for _iAS_ and filtering
approaches for agent 1 (upper) and agent 2 (lower).
Conservative filtering makes it possible to test the algorithms on a more interesting
dynamic simulation. As a test case, a simulation of a cooperative target
tracking task with 4 agents and 5 dynamic targets (full random state vector of
28 states) was performed. The dynamic model details are the same as in the 2
agent, 1 target dynamic simulation above, with measurement parameters defined
by the first 4 agents in Table II. The advantages of the BDF-CF and HS-CF
regarding communication and computation costs are highlighted again, as the
BDF-CF saves 58% in communication costs relative to the original CF, and the
HS-CF saves 94.5% in communication and 87.5% in computation complexity.
Results of 500 Monte Carlo simulations with filtering for agents 1 and 4 are
presented in Fig. 7. The plots in (a) show the NEES consistency test with 95%
confidence bound marked with black dashed lines. The upper plot shows the
centralized (black circles) and the BDF-CF for agents 1 (blue squares) and 4
(red x) NEES for the 28-state vector. The lower plot shows the HS-CF results,
which has a smaller 10-state random vector. (b) shows the corresponding RMSE
for agent 1 (upper plot) and 4 (lower plot). Note that the RMSE results for
the centralized estimate and BDF-CF, which hold distributions over the full 28-state vector, are marginalized and computed only over the 10 locally relevant random states for this comparison.
It is seen from the NEES plots that, as expected, the centralized estimator
produces a consistent MMSE estimate, and the BDF-CF overestimates the
uncertainty due to the information matrix deflation (covariance inflation) in
the conservative filtering step. The BDF-CF also produces a conservative MMSE
estimate relative to the centralized in the PSD sense for all agents, since
the minimum eigenvalue across the agents is positive ($3\times 10^{-4}$). The HS-CF is
slightly overconfident for both the consistency test and the PSD test, with
negative minimal eigenvalue of $-0.26$. However, the degree of non-conservativeness in the HS-CF will in general be highly problem- and topology-dependent. Hence, the choice of whether to task agents with the full random
state vector, with either homogeneous DDF methods (e.g., classical CF and
conventional CI) or heterogeneous fusion with the BDF-CF, or to task them with
only a subset of relevant random states using the HS-CF, will hinge on the
desired trade-off in communication/computation complexity vs. resulting
overconfidence in state MMSE estimates, provided that the HS-CF allows for
stable convergence.
The HS-CF overconfidence is attributed to inaccurate removal of implicit and
hidden correlations due to marginalization in the filtering step (line 8 in
Algorithm 1). Correctly accounting for these dependencies is not in the scope
of this paper, but is the focus of ongoing work.
Figure 7: Results from a 500 Monte Carlo simulation of a cooperative target
tracking task, with filtering, consisting of 4 tracking agents and 5 dynamic
targets. Presented are results for agents 1 and 4. (a) NEES consistency test
results for the centralized and BDF-CF estimates of 28 random states (upper)
and the HS-CF estimates over 10 random states (lower). (b) Solid lines show
the RMSE over target and agent local states relevant to that agent, dashed
line shows the $2\sigma$ confidence bounds on logarithmic scale, for agent 1
(upper) and 4 (lower).
## VIII Conclusions
Heterogeneous fusion defines a key family of problems for Bayesian DDF, as it
enables flexibility for large scale autonomous sensing networks. As shown in
this work, separating the global joint distribution into smaller subsets of
local distributions significantly reduces local agent computation and
communication requirements. The analysis and derivations presented in this paper, while assuming tree structured networks for the purposes of exact fusion via channel filtering, offer a basis for developing and analyzing
similar methods for more general heterogeneous problems involving exact or
approximate fusion in more complex networked fusion settings.
Probabilistic graphical models (PGMs) were used here to develop Bayesian DDF
algorithms. PGMs provided insight into the origin of the coupling between
random states not mutually tracked by two agents and enabled exploitation of
the conditional independence structure embedded in these graphs. This led to a
family of conditionally factorized channel filter (CF2) approaches for general
probabilistic and Gaussian pdfs that were demonstrated on static and dynamic
target tracking problems. The latter motivated the development and use of the
information augmented state (_iAS_) filter to regain conditional independence,
at the expense of increased computation and communication costs. To overcome
this problem a conservative filtering approach was demonstrated to maintain
conditional independence over a small time window, without the need of the
full time history.
The DDF framework naturally enables sparse distributed estimation for high-
dimensional state estimation and the conditionally factorized-CF approach
represents a practical and theoretical shift in the state of the art, subject
to usual provisos and limitations of DDF. From a practical standpoint, the CF
strategies developed in this paper can already be used to improve scalability
in a variety of decentralized Bayesian estimation problems, such as
cooperative localization and navigation [25, 1], multi-agent SLAM [3] and
terrain height mapping [32], where a height distribution is estimated on a
grid map. In this case, for example, the BDF-CF can be used to reduce
communication in the network by dividing the map into several overlapping
regions of interest, allowing agents to communicate only regarding those cells
in which they have new data to contribute. This scales the communication with
the number of locally observed grid cells instead of the entire map.
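A rough sketch of this scaling argument, with made-up cell counts: since a Gaussian message carries an information vector plus the upper triangle of an information matrix, restricting messages to locally observed cells cuts the per-message cost roughly quadratically in the cell ratio.

```python
# Toy comparison of per-message communication for the terrain-mapping
# example: full-map CF vs. communicating only locally observed cells.
# Cell counts are illustrative.
def msg_elements(n):
    """Information vector plus upper-triangular information matrix."""
    return n + n * (n + 1) // 2

full_map_cells = 10_000
local_cells = 200
ratio = msg_elements(local_cells) / msg_elements(full_map_cells)
# Because the matrix term dominates, the cost drops roughly as the square
# of the cell ratio (here (200/10000)^2 = 4e-4).
```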
Indeed, some works are already leveraging heterogeneous DDF ideas for
robotics [1, 2], despite the gap in theoretical guarantees and understanding of the full nature of the problem and its limitations. This paper makes
progress by building theoretical foundations for future research and surfacing
a discussion on the assumptions and definitions of homogeneous DDF, as they
appear to be inadequate for real world robotics problems of heterogeneous
fusion. Heterogeneous fusion, as defined in this paper, requires a careful
revisit of the idea of ‘ideal’ centralized/decentralized Bayesian estimation
as well as the definitions of consistency and conservativeness for general
(non-Gaussian) pdfs and more specifically in the case of heterogeneous pdfs in
dynamic systems.
## Appendix A Derivation of the iAS
The augmented state for a sliding time window from time step $n$ to time step
$k$ (denoted by subscript $k:n$) as shown in [12] is given by the following
equations:
_Prediction Step_
$X_{k:n|k-1}=\begin{pmatrix}F_{k-1}\chi_{k-1|k-1}+Gu_{k}\\\
X_{k-1:n|k-1}\end{pmatrix}$ (39)
$P_{k:n|k-1}=\begin{pmatrix}P_{k|k-1}&\mathbf{F}P_{k-1:n|k-1}\\\
P_{k-1:n|k-1}\mathbf{F}^{T}&P_{k-1:n|k-1}\end{pmatrix},$ (40)
where $\mathbf{F}=\big{[}F_{k-1}\ \ 0_{m\times m(k-n-2)}\big{]}$ and $m$ is
the size of the (not augmented) state vector.
_Update Step_
The measurement update in information space is as follows:
$P_{k:n|k}^{-1}=P_{k:n|k-1}^{-1}+J_{k}I_{k}J_{k}^{T}$ (41)
$P_{k:n|k}^{-1}X_{k:n|k}=P_{k:n|k-1}^{-1}X_{k:n|k-1}+J_{k}i_{k},$ (42)
where $J_{k}=\big{[}I_{m}\ \ 0_{m\times m(k-n-1)}\big{]}^{T}$,
$i_{k}=H_{k}^{T}R_{k}^{-1}z_{k}$ and $I_{k}=H_{k}^{T}R_{k}^{-1}H_{k}$.
Since the algorithms developed in this paper work in log space, it is
advantageous to work with an information filter, which is based on the log-
likelihood of the Gaussian distribution. Thus, a transformation of the
prediction step given in (39)-(40) from state space to the Gaussian
information space is needed.
First, define $P_{k:n|k-1}^{-1}$ to be the augmented predicted information
matrix:
$\begin{split}P_{k:n|k-1}^{-1}&=\begin{pmatrix}V_{11}&V_{12}\\\
V_{21}&V_{22}\end{pmatrix},\end{split}$ (43)
where from the matrix inversion lemma:
$\begin{split}V_{11}&=(P_{k|k-1}-\mathbf{F}P_{k-1:n|k-1}P_{k-1:n|k-1}^{-1}P_{k-1:n|k-1}\mathbf{F}^{T})^{-1}\\\
&=(P_{k|k-1}-\mathbf{F}P_{k-1:n|k-1}\mathbf{F}^{T})^{-1}.\end{split}$ (44)
The expression $\mathbf{F}P_{k-1:n|k-1}\mathbf{F}^{T}$ has the dimension
$m\times m$. From the definition of $\mathbf{F}$ above (44) can be simplified
by noticing that
$\mathbf{F}P_{k-1:n|k-1}\mathbf{F}^{T}=F_{k-1}P_{k-1|k-1}F_{k-1}^{T}$, i.e. it
depends only on the previous time step and not the full time history. Eq. (44)
is thus:
$V_{11}=(P_{k|k-1}-F_{k-1}P_{k-1|k-1}F_{k-1}^{T})^{-1}.$ (45)
Here $P_{k|k-1}^{-1}$ is the predicted information matrix at time step $k$,
given in the literature by:
$P_{k|k-1}^{-1}=(F_{k-1}P_{k-1|k-1}F_{k-1}^{T}+Q)^{-1}$, where $Q$ is the
process noise covariance. Taking the inverse and plugging in $P_{k|k-1}$, (45)
can be simplified to:
$V_{11}=Q^{-1}.$ (46)
Applying the matrix inversion lemma again the expressions for other terms are:
$V_{12}=V_{21}^{T}=-V_{11}\mathbf{F}P_{k-1:n|k-1}P_{k-1:n|k-1}^{-1}=-V_{11}\mathbf{F},$
(47)
$\begin{split}V_{22}=P_{k-1:n|k-1}^{-1}+\mathbf{F}^{T}V_{11}\mathbf{F}.\end{split}$
(48)
The predicted information matrix is then given by:
$\begin{split}P_{k:n|k-1}^{-1}=\begin{pmatrix}Q^{-1}&-Q^{-1}\Gamma^{T}\mathbf{F}\\\
-\mathbf{F}^{T}\Gamma Q^{-1}&P_{k-1:n|k-1}^{-1}+\mathbf{F}^{T}\Gamma
Q^{-1}\Gamma^{T}\mathbf{F}\end{pmatrix}\\\ \end{split}$ (49)
and the predicted information vector can now be derived:
$\begin{split}P_{k:n|k-1}^{-1}&X_{k:n|k-1}=\\\ &\begin{pmatrix}Q^{-1}Gu_{k}\\\
P_{k-1:n|k-1}^{-1}X_{k-1:n|k-1}-\mathbf{F}^{T}\Gamma
Q^{-1}Gu_{k}\end{pmatrix}.\end{split}$ (50)
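The key simplification (46) is easy to check numerically: for the augmented prediction covariance (40) with a one-step history window, the top-left block of the inverse equals $Q^{-1}$. A random SPD example:

```python
import numpy as np

# Numerical sanity check of (46): invert the augmented prediction
# covariance (40) and compare its top-left block to inv(Q).
rng = np.random.default_rng(1)
m = 3
F = rng.standard_normal((m, m))
A = rng.standard_normal((m, m)); P_hist = A @ A.T + m * np.eye(m)  # SPD
B = rng.standard_normal((m, m)); Q = B @ B.T + m * np.eye(m)       # SPD

P_pred = F @ P_hist @ F.T + Q                 # predicted covariance
P_aug = np.block([[P_pred,       F @ P_hist],  # augmented covariance (40)
                  [P_hist @ F.T, P_hist]])
V11 = np.linalg.inv(P_aug)[:m, :m]            # top-left block of the inverse
# V11 matches inv(Q), confirming (46) for this one-step window.
```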
## References
* [1] I. Loefgren, N. Ahmed, E. Frew, C. Heckman, and S. Humbert, “Scalable event-triggered data fusion for autonomous cooperative swarm localization,” in _2019 22th International Conference on Information Fusion (FUSION)_ , Jul. 2019, pp. 1–8.
* [2] W. W. Whitacre and M. E. Campbell, “Decentralized geolocation and bias estimation for uninhabited aerial vehicles with articulating cameras,” _Journal of Guidance, Control, and Dynamics (JGCD)_ , vol. 34, no. 2, pp. 564–573, Mar. 2011. [Online]. Available: http://arc.aiaa.org/doi/10.2514/1.49059
* [3] A. Cunningham, V. Indelman, and F. Dellaert, “DDF-SAM 2.0: consistent distributed smoothing and mapping,” in _2013 IEEE International Conference on Robotics and Automation (ICRA)_ , May 2013, pp. 5220–5227, ISSN: 1050-4729.
* [4] H. Li and F. Nashashibi, “Cooperative multi-vehicle localization using split covariance intersection filter,” in _2012 IEEE Intelligent Vehicles Symposium_ , Jun. 2012, pp. 211–216, ISSN: 1931-0587.
* [5] J. Zhu and S. S. Kia, “Cooperative localization under limited connectivity,” _IEEE Transactions on Robotics_ , vol. 35, no. 6, pp. 1523–1530, Dec. 2019.
* [6] T. W. Martin and K. C. Chang, “A distributed data fusion approach for mobile ad hoc networks,” in _2005 7th International Conference on Information Fusion (FUSION)_ , vol. 2, Jul. 2005, pp. 1062–1069.
* [7] S. Grime and H. Durrant-Whyte, “Data fusion in decentralized sensor networks,” _Control Engineering Practice_ , vol. 2, no. 5, pp. 849–863, Oct. 1994.
* [8] C. Y. Chong, E. Tse, and S. Mori, “Distributed estimation in networks,” in _1983 American Control Conference (ACC)_ , Jun. 1983, pp. 294–300.
* [9] S. J. Julier and J. K. Uhlmann, “A non-divergent estimation algorithm in the presence of unknown correlations,” in _Proceedings of the 1997 American Control Conference (ACC)_ , vol. 4, Jun. 1997, pp. 2369–2373 vol.4.
* [10] T. Bailey, S. Julier, and G. Agamennoni, “On conservative fusion of information with unknown non-Gaussian dependence,” in _2012 15th International Conference on Information Fusion (FUSION)_ , Jul. 2012, pp. 1876–1883.
* [11] O. Dagan and N. R. Ahmed, “Heterogeneous decentralized fusion using conditionally factorized channel filters,” in _2020 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)_ , Sep. 2020, pp. 46–53.
* [12] C.-Y. Chong, S. Mori, F. Govaers, and W. Koch, “Comparison of tracklet fusion and distributed Kalman filter for track fusion,” in _17th International Conference on Information Fusion (FUSION)_ , Jul. 2014, pp. 1–8.
* [13] S. Lubold and C. N. Taylor, “Formal definitions of conservative PDFs,” _arXiv:1912.06780v2 [ math.ST]_ , May 2021. [Online]. Available: http://arxiv.org/abs/1912.06780v2
* [14] J. K. Uhlmann, “Covariance consistency methods for fault-tolerant distributed data fusion,” _Information Fusion_ , vol. 4, no. 3, pp. 201–215, Sep. 2003. [Online]. Available: https://linkinghub.elsevier.com/retrieve/pii/S1566253503000368
* [15] Y. Bar-Shalom, X. R. Li, and T. Kirubarajan, “Linear estimation in static systems,” in _Estimation with Applications to Tracking and Navigation_. John Wiley & Sons, Ltd, 2001, pp. 121–177.
* [16] L. Chen, P. Arambel, and R. Mehra, “Fusion under unknown correlation - covariance intersection as a special case,” in _Proceedings of the Fifth International Conference on Information Fusion. (FUSION)_ , vol. 2, Jul. 2002, pp. 905–912 vol.2.
* [17] B. Noack, J. Sijs, and U. D. Hanebeck, “Fusion strategies for unequal state vectors in distributed Kalman filtering,” _IFAC Proceedings Volumes_ , vol. 47, no. 3, pp. 3262–3267, Jan. 2014. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1474667016421105
* [18] J. R. A. Klemets and M. Hovd, “Hierarchical decentralized state estimation with unknown correlation for multiple and partially overlapping state vectors,” in _2018 IEEE Conference on Control Technology and Applications (CCTA)_ , Aug. 2018, pp. 508–514.
* [19] S. Radtke, B. Noack, and U. D. Hanebeck, “Distributed estimation with partially overlapping states based on deterministic sample-based fusion,” in _2019 18th European Control Conference (ECC)_ , Naples, Italy, Jun. 2019, pp. 1822–1829. [Online]. Available: https://ieeexplore.ieee.org/document/8795853/
* [20] T. M. Berg and H. F. Durrant-Whyte, “Model distribution in decentralized multi-sensor data fusion,” in _1991 American Control Conference (ACC)_ , Jun. 1991, pp. 2292–2293.
* [21] U. A. Khan and J. M. F. Moura, “Distributed Kalman filters in sensor networks: bipartite fusion graphs,” in _2007 IEEE/SP 14th Workshop on Statistical Signal Processing_ , Aug. 2007, pp. 700–704.
* [22] C.-Y. Chong and S. Mori, “Graphical models for nonlinear distributed estimation,” in _Proceedings of the 7th International Conference on Information Fusion (FUSION)_ , Stockholm, Sweden, 2004, pp. 614–621.
* [23] N. R. Ahmed, W. W. Whitacre, S. Moon, and E. W. Frew, “Factorized covariance intersection for scalable partial state decentralized data fusion,” in _2016 19th International Conference on Information Fusion (FUSION)_ , Jul. 2016, pp. 1049–1056.
* [24] V. Saini, A. A. Paranjape, and A. Maity, “Decentralized information filter with noncommon states,” _Journal of Guidance, Control, and Dynamics (JGCD)_ , vol. 42, no. 9, pp. 2042–2054, 2019. [Online]. Available: https://doi.org/10.2514/1.G003862
* [25] S. J. Dourmashkin, N. R. Ahmed, D. M. Akos, and W. W. Whitacre, “GPS-limited cooperative localization using scalable approximate decentralized data fusion,” in _2018 IEEE/ION Position, Location and Navigation Symposium (PLANS)_ , Apr. 2018, pp. 1473–1484.
* [26] W. Koch and F. Govaers, “On accumulated state densities with applications to out-of-sequence measurement processing,” _IEEE Transactions on Aerospace and Electronic Systems_ , vol. 47, no. 4, pp. 2766–2778, Oct. 2011.
* [27] J. Vial, H. Durrant-Whyte, and T. Bailey, “Conservative sparsification for efficient and consistent approximate estimation,” in _2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , Sep. 2011, pp. 886–893, ISSN: 2153-0866.
* [28] N. Carlevaris-Bianco and R. M. Eustice, “Conservative edge sparsification for graph SLAM node removal,” in _2014 IEEE International Conference on Robotics and Automation (ICRA)_ , May 2014, pp. 854–860, ISSN: 1050-4729.
* [29] T. B. Schon and F. Lindsten, “Manipulating the multivariate Gaussian density,” Linkoeping University, Tech. Rep., Jan. 2011.
* [30] S. Thrun, W. Burgard, and D. Fox, “The GraphSLAM algorithm,” in _Probabilistic Robotics_. MIT Press, Aug. 2005, pp. 337–383.
* [31] R. Forsling, Z. Sjanic, F. Gustafsson, and G. Hendeby, “Consistent distributed track fusion under communication constraints,” in _2019 22th International Conference on Information Fusion (FUSION)_ , Jul. 2019, pp. 1–8.
* [32] J. R. Schoenberg and M. Campbell, “Distributed terrain estimation using a mixture-model based algorithm,” in _2009 12th International Conference on Information Fusion (FUSION)_ , Jul. 2009, pp. 960–967.
Ofer Dagan received the B.S. degree in aerospace engineering, in 2010, and
the M.S. degree in mechanical engineering, in 2015, from the Technion - Israel
Institute of Technology, Haifa, Israel. He is currently working toward the
Ph.D. degree in aerospace engineering with the Ann and H.J. Smead Aerospace
Engineering Sciences Department, University of Colorado Boulder, Boulder, CO,
USA. From 2010 to 2018 he was a research engineer in the aerospace industry.
His research interests include theory and algorithms for decentralized
Bayesian reasoning in heterogeneous autonomous systems.
Nisar R. Ahmed received the B.S. degree in engineering from Cooper Union,
New York City, NY,USA, in 2006 and the Ph.D. degree in mechanical engineering
from Cornell University, Ithaca, NY, USA, in 2012. He is an Associate
Professor of Autonomous Systems and H. Joseph Smead Faculty Fellow with Ann
and H.J. Smead Aerospace Engineering Sciences Department, University of
Colorado Boulder, Boulder, CO, USA. He was also a Postdoctoral Research
Associate with Cornell University until 2014. His research interests include
the development of probabilistic models and algorithms for cooperative
intelligence in mixed human–machine teams.
# Named Entity Recognition in the Style of Object Detection
Bing Li
Microsoft
<EMAIL_ADDRESS>
###### Abstract
In this work, we propose a two-stage method for named entity recognition
(NER), especially for nested NER. We borrowed the idea from the two-stage
Object Detection in computer vision and the way how they construct the loss
function. First, a region proposal network generates region candidates and
then a second-stage model discriminates and classifies the entity and makes
the final prediction. We also designed a special loss function for the second-
stage training that predicts the entityness and entity type at the same time.
The model is built on top of pretrained BERT encoders, and we tried both BERT
base and BERT large models. For experiments, we first applied it to flat NER
tasks such as CoNLL2003 and OntoNotes 5.0 and got comparable results with
traditional NER models using sequence labeling methodology. We then tested the
model on the nested named entity recognition tasks ACE2005 and Genia, and got F1 scores of 85.6$\%$ and 76.8$\%$, respectively. In terms of the second-stage
training, we found that adding extra randomly selected regions plays an
important role in improving the precision. We also did error profiling to
better evaluate the performance of the model in different circumstances for
potential improvements in the future.
## 1 Introduction
Named entity recognition (NER) is commonly treated as a sequence labeling task and has been one of the most successfully tackled NLP problems. Inspired by
the similarity between NER and object detection in computer vision and the
success of two-stage object detection methods, it is natural to ask whether we can borrow ideas from that field and replicate its success.
This work aims at deepening our understanding of NER by applying a two-stage object-detection-like approach and evaluating its effects on precision, recall, and different types of errors in various circumstances.
As the prototype for many subsequent object detection models, Faster R-CNN
Ren et al. (2015) is a good example to illustrate how the two stages work. In
the first stage, a region proposal network is responsible for generating
candidate regions of interest from the input image and narrowing down the
search space of locations for a more complex and comprehensive investigation
process. The second stage work is then done by the regression and
classification models, which look at finer feature maps, further adjust the
size and position of the bounding box and tell whether there is an object and
what type it belongs to. Admittedly, a two-stage pipeline is often criticized
for error propagation; in the case of object detection, however, its benefits
clearly outweigh this drawback.
We can easily see the analogy here in NER. To do NER in a similar way, first
we find candidate ranges of tokens that are more likely to be entities.
Second, we scrutinize each of them, judge whether it is a true entity, and
then perform entity classification (and regression, if necessary). Even though
the search space of the 1-D NER problem is significantly
smaller than in a 2-D image, the benefits of a two-stage approach are still
obvious. Firstly, with the better division of labor, the two components can
focus on their specialized tasks which are quite different. More specifically,
the region proposal part takes care of the context information and entity
signals regardless of entity type, while the second part covers more on entity
integrity and type characteristics. Secondly, since the region prediction has
been given by the first stage, the second part of the model can take the
global information of all the tokens in the region, instead of looking
separately as in sequence labeling. Although a single token vector can also
encode context information as in BERT Devlin et al. (2018) and other similar
LMs Peters et al. (2018a); Yang et al. (2019); Liu et al. (2019b); Vaswani et
al. (2017); Radford et al. (2019a), having a global view of all relevant
tokens is definitely an advantage and may provide more hints for the final
decision. Thirdly, the model gains interpretability: because it separates the
concepts of entityness (how likely a mention is to be a qualified named
entity) and entity classification, it yields an interpretable probability for
each entity prediction. Finally, each entity prediction is independent, which
makes the method applicable to nested entity recognition, a problem that
cannot be easily handled by the sequence labeling approach, which can only
predict one label per token.
This paper is structured as follows. In Section 2 we explain the model
architecture, including two stages, region proposal and entity classification.
Section 3 reviews the past related works in NER, especially region based NER
that are most similar to our work. Section 4 explains the training process and
evaluation results in detail on flat NER tasks. Section 5 shows the training
and evaluation on nested NER tasks. In Section 6 we evaluate the importance of
different parts of the model by ablation. Finally, in Section 7 we present an
error analysis of the new method and compare it with the traditional sequence
labeling methodology.
## 2 Model Design
We propose a two-stage method for NER. In the first stage, an entity region
proposal network predicts highly suspected entity regions. In the second
stage, another model takes the output candidates, extracts more features and
then makes a prediction whether this candidate is a qualified entity or not
and what type should be attached to this entity. Precision and recall would be
evaluated end-to-end, in the same manner as traditional NER models.
Figure 1: Region proposal network is made by adding a linear layer on top of
the BERT output. The start prediction is independent from the end prediction,
both of which use cross entropy loss over two classes. The end position is
assigned to the first token of the word immediately after the entity. If the
tokenizer outputs WordPieces or other subword tokens, we only predict for the
first token of every word and mask all other tokens (the mask is omitted from
the diagram).
### 2.1 Entity Region Proposal
For the first stage, we used a simple model similar to sequence labeling ones.
But instead of predicting IOB-format entity labels Ramshaw and Marcus (1995),
we predict $\langle$Start$\rangle$ and $\langle$End$\rangle$ labels. We
appended a linear layer on top of the hidden state outputs from the BERT model
Devlin et al. (2018) to predict the probability of a token being a
$\langle$Start$\rangle$ or an $\langle$End$\rangle$; we only consider the
starting token in each word and mask out all trailing tokens within that word.
We used cross entropy over two classes for the prediction. The model
structure is demonstrated in Figure 1. The goal of the first-stage model is
high recall and acceptable precision, so we could tilt the weights a little
bit towards the positive labels to favor better recall numbers. After
extracting the start and end tokens, we then select pairs to form a complete
region, with the simple rule that the length of the entity cannot exceed some
limit. An apparent disadvantage is that we discard longer candidates outright.
In practice, however, such long entities make up only a small portion of the
data and are usually very difficult to get right anyway. We used 6 and 12 as
the length limit for different datasets, which easily covers 98$\%$-99$\%$ of
ground-truth entities.
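As a concrete illustration, the pairing rule described above can be sketched in a few lines (a minimal sketch with hypothetical names, not the paper's actual implementation):

```python
# Sketch of the start/end pairing rule: every predicted <Start> index is
# paired with every predicted <End> index after it, subject to a maximum
# region length. Following the paper's convention, an end index points at
# the first token *after* the entity, so a region covers [s, e).
def propose_regions(start_indices, end_indices, max_len=6):
    regions = []
    for s in start_indices:
        for e in end_indices:
            if s < e and e - s <= max_len:
                regions.append((s, e))
    return regions

# Starts predicted at tokens 1 and 4, ends at 3 and 6:
# propose_regions([1, 4], [3, 6]) -> [(1, 3), (1, 6), (4, 6)]
```

Any candidate longer than `max_len` is never formed, which is exactly the trade-off discussed above: a small fraction of long gold entities becomes unreachable.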
### 2.2 Entity Discrimination and Classification
Figure 2: The second-stage model is responsible for re-examining the entity
region proposals generated by the first stage. The model is trained on 4 tasks
simultaneously. The entityness and type classification parts take a global
view of all tokens within the range, while the two boundary losses zoom in
only on the two tokens across each boundary to double-check the exact range.
The second stage also uses BERT Devlin et al. (2018) as the encoding model. It
has two main tasks, discrimination and classification, which define the two
major components of our loss function: the entityness loss and the type
classification loss. Contrary to a typical object detection model, we do not
do regression for the bounding box; the model only tells whether a range is
correct and, if not, discards it directly. One reason is that we want to keep
the model as simple as possible; another is that the problem is much easier
than 2-D object detection, so our proposals are usually accurate enough.
Compared to the sequence labeling method, our model
focuses more on aggregate features across all tokens spanning the entity,
which can be seen from the max pooling layer in both the entityness and
classification components. To make the model more sensitive to boundary
errors, especially for long entities, we added two more losses, the start loss
and the end loss, which predict whether the boundaries are correct, so that
the prediction is not dominated by the bulk of the entity but also pays
attention to its boundaries. These start/end logits are also concatenated with
the entityness feature to predict the entityness score. The total loss
function can be written as below:
$\displaystyle\textbf{L}=\alpha\left(\textbf{L}_{\text{start}}+\textbf{L}_{\text{end}}\right)+\beta\,\textbf{L}_{\text{entityness}}+\textbf{L}_{\text{type}}$ (1)

$\displaystyle\textbf{L}_{\text{s}}=-\frac{1}{N}\sum_{i=1}^{N}\left[{\mathbbm{1}}_{i}^{\text{s}}\log(p_{i}^{\text{s}})+(1-{\mathbbm{1}}_{i}^{\text{s}})\log(1-p_{i}^{\text{s}})\right],\quad\text{s}=\text{start, end, entityness}$ (2)

$\displaystyle\textbf{L}_{\text{type}}=-\frac{1}{N}\sum_{i=1}^{N}{\mathbbm{1}}_{i}^{\text{entity}}\left[\sum_{c\in\textrm{classes}}{\mathbbm{1}}_{i}^{\text{c}}\log(p_{i}^{\text{c}})\right]$ (3)
In Equation 1, $\alpha$ and $\beta$ are hyperparameters that control the
weights of the boundary loss and the total entityness loss, and we used 0.5
and 1.0 as default values. In Equations 2-3, $i$ iterates over samples 1 to
$N$, where each sample can be seen as a tuple
$(sentence,index_{start},index_{end})$. ${\mathbbm{1}}_{i}^{\text{entity}}$
indicates that sample $i$ is an entity, ${\mathbbm{1}}_{i}^{\text{c}}$
indicates that the entity belongs to type $c$, and
${\mathbbm{1}}_{i}^{\text{s}}$ is defined analogously. The start, end and
entityness losses are all cross entropy losses over two classes. The type
classification loss is only applied when the region proposal matches a true
entity, and is ignored otherwise.
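To make the masking in Equation 3 concrete, here is a plain-Python sketch of the total loss over a batch of second-stage samples (our own illustrative encoding; the field names and probability values are hypothetical):

```python
import math

# Numerical sketch of Equations 1-3 for a batch of samples, using plain
# floats. p_* are model probabilities, y_* are 0/1 labels; alpha=0.5 and
# beta=1.0 as in the paper. The type loss is only counted for samples that
# correspond to a true entity (y_ent == 1), mirroring the indicator in Eq. 3.
def bce(p, y):
    # Binary cross entropy for one sample.
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def total_loss(samples, alpha=0.5, beta=1.0):
    n = len(samples)
    l_start = sum(bce(s["p_start"], s["y_start"]) for s in samples) / n
    l_end = sum(bce(s["p_end"], s["y_end"]) for s in samples) / n
    l_ent = sum(bce(s["p_ent"], s["y_ent"]) for s in samples) / n
    # Type loss: cross entropy over classes, masked by the entity indicator.
    l_type = sum(
        -math.log(s["p_type"][s["y_type"]]) if s["y_ent"] == 1 else 0.0
        for s in samples
    ) / n
    return alpha * (l_start + l_end) + beta * l_ent + l_type
```

Note that for negative samples the type probabilities have no effect on the loss, which is exactly the masking behavior described above.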
The model structure is illustrated in Figure 2. For the start and end losses,
we use a module that calculates multi-head dot products between the two tokens
across each boundary; the intuition is that it needs to capture the relational
signal between the two sides. After the dot products, an extra fully connected
layer transforms the feature vector into the logits of a two-class
classification. All heads have independent weights.
For the entityness and type classification losses, we use the same
architecture with separate weights: a fully connected layer followed by a max
pooling layer, and then another fully connected layer to produce the final
logits, with a ReLU activation after each linear layer. As in the region
proposal stage, we only consider the starting token of each word and mask out
all trailing tokens within that word.
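The aggregation step (per-token linear layer with ReLU, element-wise max pooling over the region, then a final linear layer) can be sketched without any deep learning framework; the weights and sizes below are toy values, not the paper's (which uses 64-dimensional features):

```python
# Minimal sketch of the entityness/classification head over a candidate
# region. Each token vector is transformed by a linear layer with ReLU,
# the results are max-pooled feature-wise across the region, and a final
# linear layer produces the logits. Toy weights, not trained parameters.
def linear(x, W, b):
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def relu(x):
    return [max(0.0, v) for v in x]

def head_logits(token_vecs, W1, b1, W2, b2):
    # Transform each token in the region independently...
    feats = [relu(linear(t, W1, b1)) for t in token_vecs]
    # ...then take the element-wise max across all tokens (the "global
    # view" of the region), and map the pooled vector to logits.
    pooled = [max(col) for col in zip(*feats)]
    return linear(pooled, W2, b2)
```

The max pooling is what gives the second stage a view of all tokens spanning the candidate at once, in contrast to the per-token decisions of sequence labeling.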
During inference, we look at the entityness probability first. Only if it’s
above a specified threshold will we look at the classification results and
output the most likely type.
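The decoding rule amounts to a threshold on the entityness probability followed by an argmax over types; a minimal sketch (the threshold value is illustrative, since the paper does not report the exact number used):

```python
# Decode second-stage outputs: keep a candidate only if its entityness
# probability clears the threshold, then emit the most likely type.
# Each candidate is (span, entityness_probability, {type: probability}).
def decode(candidates, threshold=0.5):
    predictions = []
    for span, p_ent, type_probs in candidates:
        if p_ent >= threshold:
            best_type = max(type_probs, key=type_probs.get)
            predictions.append((span, best_type))
    return predictions
```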
## 3 Related Work
Named entity recognition (NER) is a classic problem in NLP. The goal of NER is
to extract named entities from free text and these entities can be classified
into several categories, for example person (PER), location (LOC) and geo-
political entity (GPE). Traditionally NER is tackled by sequence labeling
method. Many different models are developed along this direction, such as CRFs
in Lafferty et al. (2001); Sutton et al. (2007), LSTM in Hammerton (2003),
LSTM-CRF in Lample et al. (2016), etc. More recently, researchers have turned
to large-scale pretrained language models such as BERT Devlin et al. (2018)
and ELMo Peters et al. (2018a).
Nested named entity recognition takes the overlapping between entities into
consideration Kim et al. (2003). This cannot be easily done by traditional
sequence labeling in that one can only assign one tag for each token. There
have been many different approaches proposed for this problem. One important
branch is the region-based method and our work can also be classified into
this category. Finkel and Manning (2009) leveraged parsing trees to extract
subsequences as candidates of entities. Xu et al. (2017); Sohrab and Miwa
(2018) consider all subsequences of a sentence as candidates. A more recent
paper by Lin et al. (2019b) developed a model that locates an anchor word
first and then searches for the boundaries of the entity, under the assumption
that nested entities have different anchor words. Another work very close to
ours is by Zheng et al. (2019), who also proposed a two-stage method. Our work
differs from theirs in several respects: we have an entityness prediction in
the second stage, like the objectness in object detection, so we do not depend
entirely on the first stage to determine the region; and our model is built on
the BERT language model and finetunes all lower layers, while theirs uses an
LSTM with pretrained word embeddings. Another branch of research tries to
design more expressive tagging schemas; representative works include Lu and
Roth (2015); Katiyar and Cardie (2018); Wang and Lu (2018). The current state
of the art is
Li et al. (2019a), where they viewed the NER problem as a question answering
problem and naturally solved the nested issue. Their model showed impressive
power in both flat and nested NER tests. A major difference between our model
and theirs is that they predict a pairing score for every start and end index
pair, which makes the feature matrix and the computational complexity a major
issue. We instead predict entityness only for a few candidates, and we do not
need to duplicate training examples for multiple queries, so our training
process takes much less time.
The main contribution of our work is that we bring up the idea to use a high-
recall and relatively low-precision first stage model to select regions and
use a more complicated model to predict the entityness and classification at
the same time, with a global view of all the tokens spanning the candidate
entity. Besides that, our model is much simpler and more lightweight than
other similar models designed for nested NER tasks, and both training and
inference run as fast as the plain BERT sequence labeling model. Our core
model architecture is nothing but a linear layer plus a max pooling layer, but
gives pretty good performance, especially on the ACE2005 dataset.
## 4 Flat NER Experiments
For the flat NER experiment, we used the CoNLL2003 Sang and Meulder (2003) and
OntoNotes 5.0 Pradhan et al. (2013). CoNLL2003 is an English dataset with four
types of named entities: Location, Organization, Person and Miscellaneous.
OntoNotes 5.0 is an English dataset containing text from many sources and
including 18 types of named entities.
| Dev Precision | Dev Recall | Test Precision | Test Recall
---|---|---|---|---
CoNLL2003 Region Proposal | | | |
BERT base | 71.3 | 98.0 | 70.7 | 96.2
BERT large | 71.4 | 98.0 | 70.6 | 96.5
OntoNotes 5.0 Region Proposal | | | |
BERT base (weight 0.5:0.5) | 69.3 | 90.8 | 69.3 | 89.6
BERT base (weight 0.3:0.7) | 67.8 | 92.7 | 67.4 | 92.2
BERT base (weight 0.2:0.8) | 66.3 | 93.9 | 65.8 | 93.7
BERT base (weight 0.1:0.9) | 63.8 | 95.1 | 63.1 | 95.5
Table 1: Region Proposal Model Results. Precision and recall numbers are region metrics regardless of entity type. A prediction is correct as long as the region predicted matches the start and end of the ground-truth entity. Region precision and recall are reported for both dev and test sets.

| Dev Precision | Dev Recall | Dev F1 | Test Precision | Test Recall | Test F1
---|---|---|---|---|---|---
CoNLL2003 | | | | | |
BERT base | 95.1 | 95.1 | 95.1 | 91.9 | 91.7 | 91.8
BERT large | 96.1 | 95.4 | 95.8 | 92.0 | 91.6 | 91.8
OntoNotes 5.0 | | | | | |
BERT base | 87.7 | 87.3 | 87.5 | 86.9 | 86.5 | 86.7
BERT large | 88.0 | 88.3 | 88.2 | 86.8 | 87.3 | 87.0
Table 2: Flat NER Results. Standard NER precision and recall are reported here
for both Dev and Test sets. We only showed the best model for each combination
of dataset and the size of BERT encoder.
### 4.1 First-Stage Training
For region prediction, we used the default training parameters provided by
HuggingFace Transformers Wolf et al. (2019) for token classification task,
i.e. AdamW optimizer with learning rate=$5\times 10^{-5}$, $\beta_{1}$=$0.9$
and $\beta_{2}$=$0.999$, hidden state dropout probability 0.1 etc. We
finetuned both BERT-base-cased and BERT-large-cased models for 3 epochs with
batch size of 64. The regions with length equal or less than 6 were selected
as candidates for the second stage. We chose the threshold 6 because most
named entites are shorter than 6 (99.9$\%$ in CoNLL2003 and 99.2$\%$ in
OntoNotes 5.0). Since we ignore entity type in the first stage model, the
precision and recall are based only on region proposals regardless of type. We
keep the model to be as simple as possible because the only goal is to get
high recall in the first stage. For the OntoNotes 5.0 model, a default
training gave a model with pretty low recall, only 89.6$\%$, so we changed the
weights in the cross entropy loss to raise the recall with a little trade-off
of precision. Therefore precision and recall were reported at different
weights for OntoNotes. The training results can be found in Table 1.
From the result, we can see that for CoNLL2003 Sang and Meulder (2003), base
and large models gave pretty close p/r numbers, so we used BERT base in the
following experiments. For the OntoNotes dataset, we tried several different
weights; it turned out that weights of 0.4:0.6 and 0.3:0.7 both gave pretty
good end-to-end results.
| | Genia | | | ACE2005 |
---|---|---|---|---|---|---
Model | Precision($\%$) | Recall($\%$) | F1($\%$) | Precision($\%$) | Recall($\%$) | F1($\%$)
ARN Lin et al. (2019b) | 75.8 | 73.9 | 74.8 | 76.2 | 73.6 | 74.9
Boundary-aware Neural Zheng et al. (2019) | 75.8 | 73.6 | 74.7 | - | - | -
Merge-BERT Fisher and Vlachos (2019) | - | - | - | 82.7 | 82.1 | 82.4
Seq2seq-BERT Straková et al. (2019) | 80.1 | 76.6 | 78.3 | 83.5 | 85.2 | 84.3
Path-BERT Shibuya and Hovy (2019) | 77.81 | 76.94 | 77.36 | 83.83 | 84.87 | 84.34
BERT-MRC Li et al. (2019a) | 85.18 | 81.12 | 83.75 | 87.16 | 86.59 | 86.88
Our Model | | | | | |
BERT Base Model | 76.6 | 75.1 | 75.9 | 82.8 | 84.9 | 83.8
BERT Large Model | 77.4 | 76.3 | 76.8 | 85.2 | 85.9 | 85.6
Table 3: Nested NER Results on Genia and ACE2005.
### 4.2 Second-Stage Training
There are quite a few differences between our second-stage training and the
traditional sequence labeling NER training, in the sense that our training is
on region proposal level while sequence labeling is on sentence level. Each
training example is now a combination of a sentence, a proposal start index
and a proposal end index, and one sentence could emit multiple training
examples. The labels are no longer token labels, but labels designed for our
specific loss, specifying whether the start index is correct, whether the end
index is correct, whether the region corresponds to a true entity, and finally
what the entity type is if all the previous answers are positive. The training
samples are those region proposals output from the first model and we added
more randomly selected negative regions to make it more robust, which turned
out to be very important and will be explained with more details in the
following paragraphs. Finally we evaluated the model performance in an end-to-
end manner. Precision, recall and F1 score on CoNLL2003 and OntoNotes 5.0 can
be found in Table 2.
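The labeling of a single region proposal described above could be reconstructed as follows (the field names are ours, not the paper's; `entities` maps gold spans to their types):

```python
# Build second-stage labels for one region proposal. A proposal is a
# (start, end) span; `entities` is a dict mapping gold (start, end) spans
# to entity types. The type label is only set when the span exactly
# matches a gold entity, mirroring the masked type loss.
def label_proposal(proposal, entities):
    s, e = proposal
    match = entities.get((s, e))
    return {
        "start_correct": int(any(es == s for es, _ in entities)),
        "end_correct": int(any(ee == e for _, ee in entities)),
        "is_entity": int(match is not None),
        "type": match,  # None unless the region exactly matches an entity
    }
```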
For the CoNLL2003 dataset Sang and Meulder (2003), the best F1 score we got
with base and large models are both 91.8, which is comparable but a little
lower than the reported sequence labeling BERT results (BERT-Tagger). We did
not spend much time finetuning hyperparameters, so there may still be room for
improvement. Another possible reason is that our model predicts one entity at
a time and does not consider the interaction between multiple entities in one
sentence. On the OntoNotes 5.0 dataset we got an F1 score of 87.0.
We tried a few tricks to improve the p/r number. The most effective one is to
add more randomly selected regions as negative samples during the second
stage. For each sentence, we generate one random region that has length in
range [1, 6] and that doesn’t fall into the existing candidates or true
entities. We then labeled them as wrong samples and fed into the training
process together with other candidates. With more random negative samples, the
model is more robust when there is a big gap between the train and test set
distributions, especially when we have a very strong stage-one model with high
precision, which could have a strong bias on the distribution of negative
samples. By adding more negatives, we have almost 0.7$\%$ gain in the F1
score, which will be shown with more details in the following ablation study
section 6.
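One way to implement the random-negative trick is rejection sampling; the retry cap and other details below are our choices, not taken from the paper:

```python
import random

# Sample one random negative region of length 1..max_len that does not
# coincide with any existing candidate or gold entity (`taken` is a set of
# (start, end) spans). Rejection sampling with a retry cap keeps the sketch
# simple; a saturated sentence yields None.
def sample_negative(sent_len, taken, max_len=6, rng=random, tries=100):
    for _ in range(tries):
        length = rng.randint(1, max_len)
        if length > sent_len:
            continue
        s = rng.randint(0, sent_len - length)
        region = (s, s + length)
        if region not in taken:
            return region
    return None  # could not find a free region
```

In training, one such region per sentence is labeled as a negative sample and mixed into the second-stage batch alongside the first-stage candidates.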
We also tried adding an extra loss to account for the classification error of
regions that were not predicted completely correctly but have a large overlap
with one of the true entities. Since they do not exactly match the
ground-truth region, we assigned them a weight less than 1 (0.2 in our
experiment). Intuitively, this is equivalent to adding more classification
training data; however, we did not see significant improvement from this
change. We also explored changing hyperparameters, for example the
numbers of channels in the second-stage model. In the default settings we set
the number of heads in dot product to be 32, and the dimension of feature
vector in both entityness and classification to be 64. We tried half or double
these numbers but the model performance degraded in both cases. More details
can be found in the next ablation study subsection 6.
## 5 Nested NER Experiments
For nested NER experiments, we chose two mainstream datasets, the ACE2005
dataset Christopher Walker and Maeda (2006) and Genia dataset Kim et al.
(2003). The ACE2005 dataset contains 7 entity categories. For comparison, we
follow the data preprocessing procedure in Lu and Roth (2015); Wang and Lu
(2018); Katiyar and Cardie (2018) by keeping files from bw, bn, nw and wl, and
splitting these files randomly into train, dev and test sets by 8:1:1,
respectively. For the Genia dataset, again we use the same preprocessing as
Finkel and Manning (2009); Lu and Roth (2015) and we also consider 5 entity
types - DNA, RNA, protein, cell line and cell type.
Our training process for nested NER is basically the same as in the previous
section, since our model does not differentiate between flat and nested
entities and simply treats all entity regions in the same way even if they
overlap. Considering the different distribution of entity lengths, we
increased the length limit for region candidates from 6 words to 12 for
ACE2005 and 8 for Genia. For the region proposal model, we again trained on
top of BERT-base-cased for 3 epochs, and in the cross entropy loss we used
weights 0.1:0.9. For the second stage, we trained both BERT-base and
BERT-large for 10 epochs; the results are reported in Table 3. With the BERT
large model we got an average F1 score of 85.6 on ACE2005 and 76.8 on Genia.
This is not as good as the current state-of-the-art result, but it is quite
competitive; on the ACE2005 dataset in particular, it is better than all other
models except BERT-MRC.
Entity Type | Test Precision | Test Recall
---|---|---
DNA | 74.8 | 72.6
RNA | 90.6 | 82.1
cell line | 78.5 | 67.1
cell type | 73.6 | 72.2
protein | 78.5 | 79.7
Overall | 77.4 | 76.3
Table 4: Precision and Recall by Category (Genia).

Entity Type | Test Precision | Test Recall
---|---|---
PER | 88.6 | 89.9
LOC | 77.0 | 76.2
ORG | 74.8 | 77.8
GPE | 87.7 | 86.7
FAC | 82.0 | 77.8
VEH | 72.3 | 71.4
WEA | 75.0 | 73.8
Overall | 85.2 | 85.9
Table 5: Precision and Recall by Category (ACE2005).
A detailed P/R analysis by category for the Genia and ACE2005 datasets is
given in Table 4 and Table 5.
## 6 Ablation Study
In this part, we did an ablation study to assess the contribution from each
component of the new model. In the first experiment, we removed the start/end
logits from the concatenated vector for entityness prediction and also removed
the start/end loss from the total loss, to see if they are helpful to resolve
boundary errors. Then, we evaluated the effects of the max pooling layer over
the whole entity and used only the first token’s feature vector instead. In
the following experiments we also tried removing the random negative samples
and reducing the model size by using fewer channels or dimensions for the
start/end unit, entityness prediction and entity classification. The
evaluation results on the CoNLL2003 test set are shown in Table 6.
From the results, we can see that the start/end prediction has only a limited
effect on the final F1, but on closer inspection of the errors we found that
with the start/end prediction, the boundary errors dropped from 149 (out of
5711 test samples) to 127, while the type classification errors changed from
222 to 230. The start/end loss did change the pattern of errors and corrected
some boundary mistakes. Removing the max pooling layer brought an F1 drop
greater than 1.0$\%$, and removing random negative samples brought a 0.7$\%$
drop. After reducing the number of channels to half of the default, we saw a
drop of 0.5$\%$, and when cutting further to only 25$\%$ of the channels, the
model performance degraded rapidly to 65$\%$, indicating severe underfitting.
## 7 Error Analysis
To gain a deeper understanding of what types of errors our model makes, we did
further error profiling and compared the results with those of a standard
sequence labeling model based on BERT. Inspired by the methodology of object
detection research, we divided the NER errors into the following types: (1)
the region is correct but the type classification is wrong; (2) one of the
left/right boundaries of the region is wrong, but classification is correct;
(3) one of the left/right boundaries of the region is wrong, and the
classification is also wrong; (4) over-triggering: the predicted entity has no
overlap with any ground-truth entity; (5) under-triggering: for a true entity,
either no overlapping region is proposed or the entityness model predicts no
entity; (6) another type of error is that both boundaries are wrong but the
region has overlap with at least one of true entities, in the evaluation we
found that this type of error occurs only once, so we ignored it in the pie
plot. We displayed the error composition for the new two-stage model as well
as a traditional sequence labeling BERT model side by side, as shown in Figure
3.
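The first four prediction-side categories can be encoded as a small classifier over (start, end, type) triples (our encoding of the rules above; under-triggering concerns gold entities with no matching prediction and so is not covered by this per-prediction function):

```python
# Classify one predicted entity against the gold entities, following the
# error taxonomy described in the text. `pred` and each gold entity are
# (start, end, type) triples with spans covering [start, end).
def classify_error(pred, golds):
    ps, pe, pt = pred
    # Exact span match: either correct or a pure type error (case 1).
    for gs, ge, gt in golds:
        if (ps, pe) == (gs, ge):
            return "correct" if pt == gt else "type_error"
    # Exactly one boundary matches a gold entity (cases 2 and 3).
    for gs, ge, gt in golds:
        if (ps == gs) != (pe == ge):
            return "boundary_error" if pt == gt else "boundary_and_type_error"
    # Both boundaries wrong but still overlapping a gold entity (case 6).
    if any(ps < ge and gs < pe for gs, ge, _ in golds):
        return "both_boundaries_wrong"
    # No overlap with any gold entity: over-triggering (case 4).
    return "over_trigger"
```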
| Test P | Test R | F1
---|---|---|---
Original setting | 91.9 | 91.7 | 91.8
Remove start/end prediction | 90.4 | 91.9 | 91.7
Remove max pooling | 90.0 | 90.7 | 90.4
Remove random negatives | 90.6 | 91.6 | 91.1
Remove 50$\%$ channels | 91.2 | 91.3 | 91.3
Remove 75$\%$ channels | 65.0 | 65.4 | 65.2
Table 6: Ablation Study using CoNLL2003 Dataset. All experiments are using the
BERT base model and training with the same epochs. P, R and F1 are reported on
test set. Figure 3: Error Analysis. We have divided the entity recognition
error to 5 classes, more details can be found in the text. We profiled the
errors for traditional sequence labeling BERT model (the left panel) and also
the two-stage model proposed in this paper (the right panel), to provide a
deeper insight of the composition of model errors.
We ran the analysis on two models, both trained from BERT-base and with
similar precision and recall. As we can see, for both models the dominant part
is the type error (region correct). We could consider using larger and more
complicated type prediction models, since the plot suggests a lot of room in
that direction. The new model made significantly fewer over-triggering errors,
which means the precision of the entityness prediction is good. The two models
make similar numbers of single-boundary errors and under-triggering errors.
## 8 Conclusion
In this paper, we proposed a new two-stage model for named entity recognition,
with many of the ideas inspired by object detection in computer vision. More
specifically, a coarse first-stage model provides region proposals, and a
second-stage model predicts entityness and entity types at the same time.
Through extensive experiments on both flat and nested NER tasks, we found that
it performs better on nested NER: we got F1 85.6 on the ACE2005 dataset and F1
76.8 on the Genia dataset, better than many more complicated models. On flat
NER tasks, it is still a few points behind the current SOTA results. In the
future we plan to improve the model further to see where the real limit of
two-stage, region-based named entity recognition lies.
## References
* Ren et al. (2015) S. Ren, K. He, R. Girshick, and J. Sun. 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. _arXiv preprint arXiv:1506.01497_.
* Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_.
* Yang et al. (2019) Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In _NIPS_ , pages 5754–5764.
* Liu et al. (2019b) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized BERT pretraining approach. _CoRR_ , abs/1907.11692.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In _NIPS_ , pages 5998–6008.
* Radford et al. (2019a) Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019a. Language models are unsupervised multitask learners.
* Ramshaw and Marcus (1995) Lance Ramshaw and Mitch Marcus. 1995. Text chunking using transformation-based learning. In _Proceedings of Third Workshop on Very Large Corpora_ , pages 82–94.
* Lafferty et al. (2001) John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data.
* Sutton et al. (2007) Charles Sutton, Andrew McCallum, and Khashayar Rohanimanesh. 2007. Dynamic conditional random fields: Factorized probabilistic models for labeling and segmenting sequence data. _Journal of Machine Learning Research_ , 8(Mar):693–723.
* Hammerton (2003) James Hammerton. 2003. Named entity recognition with long short-term memory. In _Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003-Volume 4_ , pages 172–175. Association for Computational Linguistics.
# A Coding Theory Perspective on Multiplexed Molecular Profiling of Biological
Tissues
Luca D’Alessio1, Broad Institute, Cambridge, MA <EMAIL_ADDRESS>
Litian Liu1, MIT, Cambridge, MA <EMAIL_ADDRESS>
Ken Duffy, Maynooth University, Ireland <EMAIL_ADDRESS>
Yonina C. Eldar, Weizmann Institute of Science, Israel <EMAIL_ADDRESS>
Muriel Médard, MIT, Cambridge, MA <EMAIL_ADDRESS>
Mehrtash Babadi, Broad Institute, Cambridge, MA <EMAIL_ADDRESS>
Luca D’Alessio and Mehrtash Babadi acknowledge funding and support from the Data Sciences Platform (DSP), Broad Institute. Litian Liu acknowledges financial support from the Klarman Family Foundation. Yonina Eldar acknowledges funding from NIMH grant 1RF1MH121289-0. The authors thank Samouil L. Farhi for beneficial discussions, and Aviv Regev for supporting this project.
###### Abstract
High-throughput and quantitative experimental technologies are experiencing
rapid advances in the biological sciences. One important recent technique is
multiplexed fluorescence in situ hybridization (mFISH), which enables the
identification and localization of large numbers of individual strands of RNA
within single cells. Core to that technology is a coding problem: with each
RNA sequence of interest being a codeword, how to design a codebook of probes,
and how to decode the resulting noisy measurements? Published work has relied
on assumptions of uniformly distributed codewords and binary symmetric
channels for decoding and to a lesser degree for code construction. Here we
establish that both of these assumptions are inappropriate in the context of
mFISH experiments and substantial decoding performance gains can be obtained
by using more appropriate, less classical, assumptions. We propose a more
appropriate asymmetric channel model that can be readily parameterized from
data and use it to develop a maximum a posteriori (MAP) decoder. We show that
the false discovery rate for rare RNAs, which is the key experimental metric, is
vastly improved with MAP decoders even when employed with the existing sub-
optimal codebook. Using an evolutionary optimization methodology, we further
show that by permuting the codebook to better align with the prior, which is
an experimentally straightforward procedure, significant further improvements
are possible.
1 These two authors contributed equally to this work.
## I Introduction
In recent years, the field of single-cell biology has witnessed transformative
advances in experimental and computational methods. Of particular interest is
the recent advent of multiplexed fluorescence in situ (in-place) hybridization
(mFISH) microscopy techniques that allow molecular profiling of hundreds of
thousands of cells without disturbing their complex arrangement in space. This
highly-informative data modality paves the way to transformative progress in
many areas of biology, including understanding morphogenesis, tissue
regeneration, and disease at molecular resolution.
One of the major challenges in designing such experiments is the vastness of
functional bio-molecules. For example, the human genome encodes nearly 30k non-redundant types of RNA molecules, many of which translate to proteins with
specific functions. Modern data-driven biology heavily relies on our ability
to measure as many different types of functional molecules as possible.
Clearly, a sequential imaging approach is impractical. Fortunately, a typical
cell produces a rather sparse set of all molecules, and some of the most
promising mFISH techniques exploit molecular sparsity in space together with
coding ideas in order to multiplex the measurements into fewer imaging rounds
[1, 2, 3, 4, 5].
In brief, the mFISH technique involves assigning binary codes to RNA molecules
of interest, chemically synthesizing and “hybridizing” these codes to the
molecules, and measuring them in space one bit at a time via sequential
fluorescence microscopy. A more detailed account of one such pioneering
technique known as MERFISH (“multiplexed error-robust fluorescence in situ
hybridization”)[1] is given in Sec. I-A (also, cf. Fig. 1). An important part
of the MERFISH protocol is the utilization of sparse codes with large minimum
distance to allow error correction. Referred to as MHD4 codes [1, 2, 3], these
16-bit codes have minimum Hamming distance 4 and contain 4 ones and 12 zeros
each. The bit imbalance is motivated by the empirically observed $\sim
2\times$ higher signal fallout rate compared to false alarm. There are only
140 such codes and therefore one is limited to measuring at most 140 distinct
molecules. These codes are randomly assigned to the RNA molecules of interest.
The decoding method in current use relies on quantization, Hamming error
correction, and rejection of ambiguous sequences.
We point out that the assumptions motivating the codebook construction and
decoding, tacitly yet heavily, rely on source uniformity and to a certain
extent on the binary symmetric channel paradigm, both of which are violated in
the context of molecular profiling. For channel coding in communication, the source
can be readily assumed to be uniformly distributed thanks to compression in
source coding and the separation theorem [6]. In molecular profiling, however,
source compression is not applicable and the distribution of RNA molecules is
extremely non-uniform. Moreover, fluorescence microscopy is established to be
highly asymmetric in terms of fallout and false alarm. These violated
assumptions become a source of potential problems when directly applying
communication encoding and decoding paradigms. For example, the false
discovery rate of rare molecules is found to be unacceptably high in replicate
experiments [1, 2], which we later show to be a consequence of the assumed
source uniformity. Accurate quantification of rare RNA molecules (e.g.
transcription factors) is particularly important for data-driven biological
discovery since rare molecules often signal rare events, transient cell
states, etc. This motivates our primary goal in this paper: to incorporate the
prior non-uniformity in the decoding process in a principled way in order to
control false discovery rate of rare molecules. In practice, either accurate
priors are known, can be estimated from the data, or can be measured cheaply
and effortlessly (e.g. using bulk RNA sequencing [1]).
This paper is organized as follows: we review the MERFISH protocol in Sec. I-A
and propose a generative model for the data in Sec. II-A, along with a model
fitting algorithm and a procedure to derive a more tractable binary asymmetric
channel (BAC) formulation from the fitted model. The BAC framework allows us
to evaluate the performance of different encoding and decoding schemes. We
incorporate the prior non-uniformity into the decoding algorithm by developing
a maximum a posteriori (MAP) decoder with a tunable rejection threshold in
Sec. II-D. We show that the false discovery rate of rare RNAs, which is the
key experimental metric, is vastly improved compared to the presently used
MLE-based decoding method [1, 2, 3], even when employed with the existing sub-
optimal MHD4 codebook. Finally, we take a first step in data-driven code
construction in Sec. III. Using an evolutionary optimization methodology, we
show that by permuting the codebook to better align with the prior, which is
an experimentally straightforward procedure, significant further improvements
are possible. We conclude the paper in Sec. IV with a discussion of follow-up
research directions.
### I-A A brief overview of the MERFISH protocol
Figure 1: A schematic overview of a typical mFISH experiment. (a) codebook and
probe design; (b) sequential imaging; (c) image processing and decoding.
In this section, we briefly review the MERFISH protocol [1], recount different
sources of noise and nuisance, and motivate a generative process for MERFISH
data. Fig. 1 shows a schematic overview of the MERFISH technique. This
protocol consists of four main steps: Step 1. A unique binary codeword of
length $L=16$ is assigned to the RNA molecules of interest; Step 2. The
specimen is stained with carefully designed short RNA sequences called
encoding probes. The middle part of the encoding probes bind with high
specificity to a single RNA type while their flanking heads and tails contain
a subset of $L$ artificial sequences, $\\{R_{1},\ldots,R_{L}\\}$, called
readout sequences. The choice of readout sequences reflects the intended
binary codeword. For instance, if the code for a certain RNA type contains “1”
at positions 1, 3, 5, and 15, the encoding probes are designed to have
$R_{1},R_{3},R_{5}$, and $R_{15}$ flanking sequences (see Fig. 1a); Step 3.
The prepared tissue undergoes $L$ rounds of imaging. Imaging round $l$ begins
with attaching fluorescent readout probes for round $l$ to the prepared tissue.
These probes bind to the flanking readout sequences and contain a fluorescent
dye that emits light upon excitation. The round ends with bleaching the dye.
In effect, imaging round $l$ reveals the position of all RNA molecules having
“1” in their binary codeword at position $l$. Step 4. Finally, the positions of
RNA molecules, which appear as bright spots, are identified using conventional
image processing operations (see Fig. 2). The data is summarized as an
$N\times L$ intensity matrix ($N$ being the number of identified spots) and is
ultimately decoded according to the codebook. MERFISH measurements are
affected by several independent sources of noise. These include factors that
are intrinsic to individual molecules, such as (1) stochasticity in the
hybridization of encoding and readout probes, (2) random walk of molecules
between imaging rounds, and (3) CCD camera shot noise. These factors modulate
the intensity measurements independently in each round and are largely
uncorrelated across rounds. Extreme multiplexing (e.g. as in the seqFISH+
protocol [5]) further leads to interference noise due to signal mixing between
nearby molecules. This nuisance, however, is rather negligible in the MERFISH
protocol.
## II Methodology and Results
### II-A A generative model for mFISH data
Figure 2: Extraction of isolated spots from MERFISH images (data from [2]).
(a) local peak finding; (b) identification of isolated spots; (c) intensity
series from 10 random spots (rows); the leftmost 16 columns show the intensity
measurements; the last two column show the summed intensity and nearest-
neighbor cross-correlations and are used for filtering of poorly localized
spots. Figure 3: Modeling spot intensities as two-component Gaussian mixture
for each data dimension (i.e. readout round and color channel). (a) model
fitting (black and red lines) and empirical histograms (gray); the green lines
indicate the quantization thresholds for the ensuing BAC approximation; (b)
QQ-plots for each data dimension; the labels shown in the sub-panels indicate
hybridization rounds $\\{1,\ldots,8\\}$ and color channels $\\{1,2\\}$.
In this section, we present a simple generative model for MERFISH spot
intensity data, fit the model to real data, and evaluate the goodness of fit.
This model will serve as a foundation for developing a MAP decoder. Fig. 2
shows a typical example of MERFISH data from [2]. We formalize the data
generating process as follows: let $\mathsf{C}\subset\\{0,1\\}^{L}$ be a set
of codewords with cardinality $|\mathsf{C}|=K$ which are assigned to $G\leq K$
molecules, let $a:\tilde{\mathsf{C}}\rightarrow\\{1,\ldots,G\\}$ be the
bijective code assignment map where
$\tilde{\mathsf{C}}\subset\mathsf{C},|\tilde{\mathsf{C}}|=G$ is the set of
used codes, and let $\boldsymbol{\pi}_{1:G}$ be the prior distribution of
molecules. Setting aside interference effects, we model the fluorescence
intensity series $\mathbf{I}_{1:L}\in[0,\infty)^{L}$ measured for an arbitrary
molecule as follows:
$\begin{split}g&\sim\mathrm{Categorical}(\boldsymbol{\pi}),\\\
\mathbf{c}&=a^{-1}(g),\\\ \log
I_{l}\,|\,c_{l}&\sim\mathcal{N}(\mu_{l}[c_{l}],\sigma^{2}_{l}[c_{l}]).\end{split}$
(1)
As discussed earlier, the intrinsic spot intensity noise (1) is
multiplicative, (2) results from a multitude of independent sources, and (3)
is uncorrelated across imaging rounds, motivating factorizing
$\mathbf{I}_{1:L}\,|\,\mathbf{c}_{1:L}$ in $l$ and modeling each conditional
as a Gaussian in the logarithmic space. The well-known heteroscedasticity of
fluorescence noise is reflected in having two different $\sigma^{2}[c]$ for
$c\in\\{0,1\\}$ for the two binary states.
### II-B Image processing and model fitting
The most straightforward way to fit the generative model to empirical data is
by observing that marginalizing the (discrete) molecule identity variable $g$
yields a two-component Gaussian mixture model (GMM) for $\log I_{l}$, with
weights determined by the prior $\boldsymbol{\pi}$, codebook $\mathsf{C}$, and
the assignment $a$. The model parameters
$\\{\mu_{1:L}[0],\sigma^{2}_{1:L}[0],\mu_{1:L}[1],\sigma^{2}_{1:L}[1]\\}$ can
be readily estimated by ML GMM fitting to each column of the spot intensity
table (cf. Fig. 1c), which can be performed efficiently using the conventional
EM algorithm. In order to decouple the intrinsic and extrinsic spot noise in
the raw data, we censor the dataset to only spatially isolated molecules. In
brief, we process the images as described in [1], subtract the background,
identify the position of molecules by local peak-finding, censor dense regions
(e.g. cell nuclei), and retain local peaks that are separated from one another
at least by $\sim 5~{}\mathrm{px}$, which is a few multiples of the
diffraction limit. We perform additional filtering based on the spot intensity
pattern and nearest-neighbor Pearson correlation (cf. Fig. 2) and only retain
peaks with a symmetric appearance. This procedure yields $\sim$ 250k spots in
the dataset published in [2]. The obtained fits are shown in Fig. 3 along with
QQ-plots that confirm a remarkably good fit to the empirical marginal
histograms.
### II-C Quantization, channel model and estimation
The generative model specified by Eq. (1) readily yields the posterior
distribution $\mathrm{Pr}(g\,|\,\mathbf{I};\boldsymbol{\pi},\mathsf{C},a)$ and
can form the basis of an intensity-based MAP decoder. To make the formulation
more amenable for computational and theoretical investigation, as well as
making a connection to the currently used decoding method, we derive an
approximate binary asymmetric channel (BAC) model from Eq. (1) through
quantization. The optimal quantization thresholds $\boldsymbol{\theta}_{1:L}$
are determined for each $l$ to be the point of equal responsibility between
the two Gaussian components, i.e.
$\sum_{g=1}^{G}\pi_{g}\,a^{-1}(g)[l]\,\mathcal{N}(\theta_{l}\,|\,\mu_{l}[1],\sigma_{l}^{2}[1])=\sum_{g=1}^{G}\pi_{g}\,[1-a^{-1}(g)[l]]\,\mathcal{N}(\theta_{l}\,|\,\mu_{l}[0],\sigma_{l}^{2}[0])$,
which admits a closed-form solution. Here, $a$ and $\boldsymbol{\pi}$
correspond to the known code assignment and prior distribution of the data
used for fitting. The fallout $p^{1\rightarrow 0}$ and false alarm
$p^{0\rightarrow 1}$ rates are given by the integrated probability weights of
the two Gaussian components below and above the threshold (cf. Fig. 3a), i.e.
$p^{0\rightarrow 1}_{l}=\Phi[(\mu_{l}[0]-\theta_{l})/\sigma_{l}[0]]$ and
$p^{1\rightarrow 0}_{l}=\Phi[(\theta_{l}-\mu_{l}[1])/\sigma_{l}[1]]$, where
$\Phi(\cdot)$ is the CDF of the standard normal distribution. We find
$p_{l}^{0\rightarrow 1}$ and $p_{l}^{1\rightarrow 0}$ to be $0.046$ and
$0.102$ (mean in $l$), respectively, for the data given in Ref. [2], which is
in agreement with the estimates reported therein. We, however, observed
significant round-to-round variation in the channel parameters and as such,
refrained from further simplifying the channel model to a single BAC for all
imaging rounds $l$. We refer to the bundle of estimated BAC parameters as
$\boldsymbol{\theta}_{\mathrm{BAC}}$.
### II-D Decoding: MAP and MLE decoders
A gratifying property of the BAC approximation of Eq. (1) is that it allows us to
evaluate the performance of various decoding strategies without resorting to
time-consuming simulations or further analytical approximations. In the BAC
model, the likelihood of a binary sequence $\mathbf{x}_{1:L}\in\\{0,1\\}^{L}$
conditioned on the codeword $\mathbf{c}\in\mathsf{C}$ is given as:
$\log\mathrm{Pr}(\mathbf{x}\,|\,\mathbf{c},\boldsymbol{\theta}_{\mathrm{BAC}})=\sum_{l=1}^{L}\sum_{i,j\in\\{0,1\\}}\,\delta_{c_{l},i}\,\delta_{x_{l},j}\,\log\,p_{l}^{i\rightarrow
j},$ (2)
where $\delta_{\cdot,\cdot}$ is the Kronecker’s delta function. We define the
posterior Voronoi set for each codeword $\mathbf{c}\in\mathsf{C}$ as:
$\mathsf{V}(\mathbf{c}\,|\,a,\boldsymbol{\omega},\mathsf{C},\boldsymbol{\theta}_{\mathrm{BAC}})=\big{\\{}\mathbf{x}\in\\{0,1\\}^{L}\,|\,\forall\mathbf{c}^{\prime}\in\mathsf{C},\mathbf{c}\neq\mathbf{c}^{\prime}:\\\
\omega_{a(\mathbf{c})}\,\mathrm{Pr}(\mathbf{x}\,|\,\mathbf{c},\boldsymbol{\theta}_{\mathrm{BAC}})>\omega_{a(\mathbf{c}^{\prime})}\,\mathrm{Pr}(\mathbf{x}\,|\,\mathbf{c}^{\prime},\boldsymbol{\theta}_{\mathrm{BAC}})\big{\\}},$
(3)
where $\boldsymbol{\omega}_{1:G}$ is the prior distribution assumed by the
decoder. The Voronoi sets are mutually exclusive by construction, can be
obtained quickly for short codes by exhaustive enumeration, and determine the
optimal codeword for an observed binary sequence. The MLE decoder corresponds
to using a uniform prior, i.e. $\boldsymbol{\omega}\leftarrow\mathbf{1}/G$
whereas the MAP decoder corresponds to using the actual (non-uniform) prior
governing the data $\boldsymbol{\omega}\leftarrow\boldsymbol{\pi}$. We
additionally introduce a MAPq decoder, which is a MAP decoder obtained from
depleting the Voronoi sets from binary sequences for which the posterior
probability of the best candidate code is below a set threshold $q$.
Intuitively, the MAPq decoder is a Bayesian decoder with reject option that
trades precision gain for sensitivity loss by filtering dubious sequences from
the Voronoi sets. The decoding algorithm introduced by Refs. [1, 2, 3] can be
thought of as the MLE decoder with a rejection subspace given by
$S_{\mathrm{rej}}=\\{\mathbf{x}\,|\,\exists\,\mathbf{c},\mathbf{c^{\prime}},\mathbf{c}\neq\mathbf{c}^{\prime}\in\mathsf{C}:d_{\mathrm{H}}(\mathbf{c},\mathbf{x})=d_{\mathrm{H}}(\mathbf{c}^{\prime},\mathbf{x})=d^{*}(\mathbf{x},\mathsf{C})\\}$
where $d_{\mathrm{H}}(\cdot,\cdot)$ is the Hamming distance and
$d^{*}(\mathbf{x},\mathsf{C})=\inf_{\mathbf{c}\in\mathsf{C}}d_{\mathrm{H}}(\mathbf{c},\mathbf{x})$.
We refer to this decoder as Moffitt (2016). We remark that the acceptance
criterion of Moffitt (2016) is extremely stringent: for MHD4 codes,
$|S_{\mathrm{acc}}|=9100$, which is only $\sim 13\%$ of all possible sequences
(here, $S_{\mathrm{acc}}$ is the complement of $S_{\mathrm{rej}}$). In all
cases, the confusion matrix $\mathcal{T}(\mathbf{c}\,|\,\mathbf{c}^{\prime})$,
i.e. the probability that a molecule coded with $\mathbf{c}^{\prime}$ is
decoded to $\mathbf{c}$, can be immediately calculated:
$\mathcal{T}(\mathbf{c}\,|\,\mathbf{c}^{\prime};\boldsymbol{\pi},\boldsymbol{\omega},\boldsymbol{\theta}_{\mathrm{BAC}})=\sum_{\mathbf{x}\in\mathsf{V}(\mathbf{c}|\boldsymbol{\omega},\ldots)}\,\pi_{a(\mathbf{c}^{\prime})}\,\mathrm{Pr}(\mathbf{x}\,|\,\mathbf{c}^{\prime},\boldsymbol{\theta}_{\mathrm{BAC}})$
(4)
from which the marginal true positive rates $\mathrm{TPR}_{1:G}$ and false
discovery rates $\mathrm{FDR}_{1:G}$ can be readily calculated.
### II-E Comparing the performance of different decoders
Figure 4: Comparing the performance of different decoding schemes for randomly
assigned MHD4 codes. (a) and (b) correspond to prior distribution for RNA
molecules selected in [2] and [3], respectively. The top panels show the rank-
ordered prior distribution and the estimated Dirichlet concentration parameter
$\alpha$; the middle and bottom panels show the marginal TPR and FDR for each
molecule type conditioned on $S_{\mathrm{acc}}$ and $S_{\mathrm{rej}}$
subspaces (cf. Sec. II-D); markers are color-coded according to prior rank of
their corresponding molecules. Shaded regions indicate the 5-95 percentile range
over random code assignments; (c) the effect of prior non-uniformity
on the performance of MLE and MAP decoders for randomly assigned MHD4 codes.
The top panels show the uniform mismatch rate. The bottom panels show the
histogram of marginal FDRs vs. Dirichlet prior concentration $\alpha$ in
grayscale. The orange lines and regions indicate the median and 5-95
percentile ranges; (d) performance of MAP decoders with reject at different
acceptance thresholds compared to method in [2].
Developments in previous sections allow us to compare the performance of MLE,
MAP, MAPq, and Moffitt (2016). We use the BAC parameters obtained from the
data in [2], 16-bit MHD4 codes with random assignment, and two different
previously estimated and published source priors with different degree of non-
uniformity. As a first step, we compare the performance of our proposed MAP
and MLE decoders separately inside $S_{\mathrm{acc}}$ and $S_{\mathrm{rej}}$,
the acceptance and rejection subspaces of Moffitt (2016), in Fig. 4a, b
(middle, bottom). The priors are shown on the top, including the estimated
Dirichlet concentration $\alpha$. Marginal performance metrics for different
molecules are color-coded according to their prior rank from red (most
abundant) to blue (least abundant). The MLE decoder inside $S_{\mathrm{acc}}$
is equivalent to Moffitt (2016). Both decoders perform well in this subspace.
While the MLE decoder is performing poorly inside $S_{\mathrm{rej}}$,
providing a sound basis for rejection as in Moffitt (2016), the MAP decoder
yields acceptable FDR, hinting that the $S_{\mathrm{acc}}$ is too stringent
for the MAP decoder and better performance can be expected from MAPq. It is
also noticed that the MAP decoder controls FDR much better than MLE inside
$S_{\mathrm{acc}}$ for the more non-uniform prior. We explore this observation
more systematically in panel c. We sample $\boldsymbol{\pi}$ from a symmetric
Dirichlet distribution with concentration $\alpha$ and calculate the
distribution of the marginal FDRs (bottom) as well as the uniform mismatch
rate (top). We notice that as the prior gets more concentrated
$\log\alpha\rightarrow-\infty$, the MAP decoder behaves progressively better
whereas the MLE decoder degrades and exhibits a bi-modal behavior: extremely
low (high) FDR on abundant (rare) codes. As the prior gets more uniform
$\log\alpha\rightarrow+\infty$, MLE and MAP become indistinguishable. The
green and red symbols show the biological priors used in panels a and b,
respectively, together with their estimated $\alpha$, in agreement with the
trend of the Dirichlet prior model. Finally, panel d compares the performance
of the MAPq decoder at different rejection thresholds $q$ with Moffitt (2016).
The prior used here is the same as in panel b. It is noticed that the MAPq
decoder is remarkably effective at controlling FDR for all codes whereas
Moffitt (2016) degrades in FDR for rare codes, as expected from the source
uniformity assumption. This finding explains the reportedly lower correlation
between rare molecules in replicate experiments [1, 2]. The smaller panels at
the top of panel c show mean TPR, FDR, and rejection rate across all
molecules. The MAP0.5 decoder has similar sensitivity to Moffitt (2016) while
yielding $\sim 20\%$ lower FDR on average and remarkably $\sim 60\%$ lower
5-95 FDR percentile range, implying significant improvement in reducing the
mean and variance of false positives for both abundant and rare molecules.
## III Data-driven code construction
The results presented so far were obtained by randomly assigning a fixed set of
MHD4 codes. Constructing codes to better reflect channel asymmetry and prior
non-uniformity is another attractive opportunity for improving the performance
of mFISH protocols. Constructing application-specific codes for mFISH is
outside the scope of the present paper and is a topic for future research.
Here, we continue to tread on the theme of utilizing prior non-uniformity and
show that optimizing the assignment of even sub-optimal codes to molecules
with respect to prior abundance can significantly reduce FDR. This is to be
expected given the rather wide performance outcomes shown in Fig. 4 that
result from random code assignment. Explicitly, we seek to optimize the
scalarized metric
$\overline{\mathrm{FDR}}(a,\boldsymbol{\pi})=G^{-1}\sum_{g=1}^{G}\mathrm{FDR}_{g}(a,\boldsymbol{\pi})$
over the assignment operator $a$ for a given prior $\boldsymbol{\pi}$ through
an evolutionary optimization process. We start with a population of $N=5000$
random code assignments, mutate the population via pairwise permutations with
a small probability of $0.05$ per molecule per assignment, and select the
fittest $N$ offsprings using $\overline{\mathrm{FDR}}$ as the measure of
fitness. We do not use a crossover operation here. We hypothesize that a
relevant surrogate for the optimality of $\overline{\mathrm{FDR}}$ is the
concordance between the Hamming distance $d_{H}$ and the prior distance
$d_{\boldsymbol{\pi}}(\mathbf{c},\mathbf{c}^{\prime})\equiv|\pi_{a(\mathbf{c})}-\pi_{a(\mathbf{c}^{\prime})}|$.
We investigate the emergence of this order by monitoring the following order
parameter during the evolution:
$\chi(a,\boldsymbol{\pi})\equiv\frac{1}{G}\sum_{g=1}^{G}\rho^{\mathrm{s}}\Big{[}d_{\mathrm{H}}\big{(}a^{-1}(g),\mathbf{C}_{a}\big{)},d_{\boldsymbol{\pi}}\big{(}a^{-1}(g),\mathbf{C}_{a}\big{)}\Big{]},$
(5)
where $\rho^{s}[\cdot,\cdot]$ denotes the Spearman correlation and
$\mathbf{C}_{a}$ is the ordered list of all codes used by $a$ over which the
correlation is calculated. We refer to the population average of
$\chi(a,\boldsymbol{\pi})$ as $\overline{\chi}$. We implement the evolutionary
algorithm using the PyMOO package [7] and vectorize the calculation of Voronoi
sets with GPU acceleration. Fig. 5 shows the results obtained by running the
evolutionary optimization for three days (NVIDIA Tesla P100 GPU, MHD4 codes,
prior from [3]). Panel a shows the monotonic decline of
$\overline{\mathrm{FDR}}$ to $\sim 75\%$ of its initial value (random
assignment). This trend proceeds concurrently with a monotonic upturn in
$\overline{\chi}$, providing evidence for the hypothesized matching order
between $d_{\mathrm{H}}$ and $d_{\boldsymbol{\pi}}$. Panel b compares the
performance metrics of the MAP decoder between the first and last population
of code assignments. It is noticed that the optimized code assignment
predominantly reduces $\overline{\mathrm{FDR}}$ of rare molecules, the mean
FDR of which reduces to $\sim 50\%$ of that of randomly assigned codes. The possibility
to reduce the FDR of rare molecules is a particularly favorable outcome in
practice.
Figure 5: Evolutionary optimization of code assignment for MHD4 codes (for
channel model described in Fig. 3 and prior distribution from [3]). (a)
bottom: mean FDR vs. generation; top: $d_{\mathrm{H}}-d_{\boldsymbol{\pi}}$
matching order parameter vs. generation (see Eq. 5); (b) the performance of
MAP decoder for randomly assigned codes (squares) vs. optimized assignment
(circles).
## IV Conclusion and Outlook
In this paper, we reviewed multiplexed molecular profiling experiments from
the perspective of coding theory, proposed a motivated generative model for
the data, based on which we derived an approximate parallel BAC model for the
system. We showed that the exact MAP decoder of the BAC model vastly outperforms
the decoding algorithm in current use in terms of controlling FDR of rare
molecules, the key experimental metric. This is achieved by taking into
account the non-uniformity of source prior, a “non-classical” aspect of
multiplexed molecular profiling viewed as a noisy channel. We also took the
first step in data-driven code construction and showed that optimizing the
assignment of existing sub-optimal codes is another effective method for
reducing false positives.
Attractive directions for follow up research include constructing application-
specific codes to increase the throughput of the mFISH experiments,
theoretical progress in understanding the optimal assignment of existing codes
(e.g. by investigating the geometry of Voronoi sets), extending the generative
model and the ensuing channel description to $q$-ary codes (e.g. as in seqFISH
and seqFISH+ experimental protocols [4, 5]), and taking into account spatial
interference and color channel cross-talk in the data generating process.
## References
* [1] K. H. Chen, A. N. Boettiger, J. R. Moffitt, S. Wang, and X. Zhuang, “Spatially resolved, highly multiplexed RNA profiling in single cells,” _Science_, vol. 348, no. 6233, p. aaa6090, 2015.
* [2] J. R. Moffitt, J. Hao, G. Wang, K. H. Chen, H. P. Babcock, and X. Zhuang, “High-throughput single-cell gene-expression profiling with multiplexed error-robust fluorescence in situ hybridization,” _Proceedings of the National Academy of Sciences_, vol. 113, no. 39, pp. 11046–11051, 2016.
* [3] J. R. Moffitt, J. Hao, D. Bambah-Mukku, T. Lu, C. Dulac, and X. Zhuang, “High-performance multiplexed fluorescence in situ hybridization in culture and tissue with matrix imprinting and clearing,” _Proceedings of the National Academy of Sciences_, vol. 113, no. 50, pp. 14456–14461, 2016.
* [4] S. Shah, E. Lubeck, W. Zhou, and L. Cai, “seqFISH accurately detects transcripts in single cells and reveals robust spatial organization in the hippocampus,” _Neuron_, vol. 94, no. 4, pp. 752–758, 2017.
* [5] C.-H. L. Eng, M. Lawson, Q. Zhu, R. Dries, N. Koulena, Y. Takei, J. Yun, C. Cronin, C. Karp, G.-C. Yuan _et al._, “Transcriptome-scale super-resolved imaging in tissues by RNA seqFISH+,” _Nature_, vol. 568, no. 7751, pp. 235–239, 2019.
* [6] C. E. Shannon, “A mathematical theory of communication,” _Bell System Technical Journal_, vol. 27, no. 3, pp. 379–423, 1948.
* [7] J. Blank and K. Deb, “pymoo: Multi-objective optimization in Python,” 2020.
# Self-stabilizing Algorithm for Maximal Distance-2 Independent Set
Badreddine Benreguia‡ Hamouma Moumen‡ Soheila Bouam‡ Chafik Arar‡
‡ University of Batna 2, 05000 Batna, Algeria
(Address for correspondence: H. Moumen, Department of Informatics, University
of Batna 2, Fesdis 05078, Batna, Algeria.)
<EMAIL_ADDRESS>
###### Abstract
In graph theory, an independent set is a subset of nodes in which no two nodes
are adjacent. The independent set is maximal if no node outside it can join
it. In network applications, maximal independent sets can be used as cluster
heads in ad hoc and wireless sensor networks. In order to deal with failures
in networks, self-stabilizing algorithms have been proposed in the literature
to compute the maximal independent set under different hypotheses. In this
paper, we propose a self-stabilizing algorithm to compute a maximal
independent set in which the nodes are at distance at least 3 from each other.
We prove the correctness and the convergence of the proposed algorithm.
Simulation tests show that our algorithm finds a small number of nodes in
large-scale networks, which allows stronger control of the network.
Keywords: Self-stabilizing algorithm, distributed system, network, independent
set.
## 1 Introduction
### 1.1 Context of the study and motivation
Self-stabilization is a fault-tolerance approach that allows distributed
systems to reach a globally correct configuration starting from an unknown
initial configuration. Without external intervention, a self-stabilizing
algorithm is able to correct the global configuration of the distributed
system in finite time. Various self-stabilizing distributed algorithms based
on graph theory have been proposed in the literature, for problems such as
leader election, node coloring, domination, identifying an independent set,
and constructing a spanning tree. These algorithms have many benefits in
real-life applications; for example, independent sets have been used as
cluster heads in ad hoc and sensor networks [1, 2, 3, 12].
In graphs, independence is commonly defined as follows: let $G=(V,E)$ be a
graph, where $V$ is the set of nodes and $E$ is the set of edges. An
independent set is a subset of nodes $S\subset V$ such that there are no two
adjacent nodes in $S$; the distance between any two nodes in $S$ is greater
than 1. An independent set $S$ is said to be $maximal$ if there is no superset
of $S$ that is also an independent set. In other words, there is no node
outside the maximal independent set (MIS) that may join the MIS. It is well
known in the graph literature that a MIS is also a dominating set, because
every node outside the MIS has at least one neighbor in the MIS (every node
outside the MIS is dominated by a node of the MIS).
In this paper, we deal with a particular case of independent set. We call $S$
a maximal distance-2 independent set (MD2IS) if the nodes of $S$ are
independent and the distance between any two of them is strictly greater than
2. Figure 1 illustrates the difference between MIS and MD2IS, where green
nodes are independent. Observe that in the MIS (a), a distance of 2 can occur
between independent nodes, whereas the distance between green nodes in the
MD2IS (b) is strictly greater than 2.
Figure 1: (a) Maximal independent set. (b) Maximal distance-2 independent set.
Nodes of a MIS are used as servers (cluster heads) in ad hoc and wireless
sensor networks to provide important services for other nodes. Each cluster
head has to guarantee services for the nodes connected to it, called the
members of the cluster. Cluster members are the nodes outside the MIS. A
cluster head can serve its members by routing information, providing
encryption keys, assigning names to members, and so on.
Figure 1(b) shows that the elements of a MD2IS can be used as cluster heads
whose members may be within distance 2 of the head, whereas with a MIS,
members cannot be located at distance more than 1. Obviously, a MD2IS yields
fewer clusters than a MIS. The choice of cluster heads matters for extending
the lifetime of wireless sensor and ad hoc networks, and using a MD2IS rather
than a MIS as cluster heads is a good alternative in this sense, especially
since lifetime is the major problem of these networks. In addition, to deal
with any possible failure, we use a self-stabilizing algorithm that
reconstructs the cluster heads after a failure occurs, keeping the network
operational.
Finding the maximal independent set (MIS) in graphs using the
self-stabilization paradigm was first studied by Shukla et al. in 1995 [16].
The authors used a straightforward idea based on two rules: (1) a node $v$
joins the set $S$ (which is under construction) if $v$ has no neighbor in $S$,
and (2) a node $v$ leaves the set $S$ if at least one of its neighbors is in
$S$. Other variants of self-stabilizing algorithms constructing independent
sets have been introduced to deal with particular problems, either to
minimize the algorithm complexity [18] or to be suitable for a distributed
daemon (see Section 2.1) [5, 8]. The reader can refer to the survey [6] for
more details on MIS self-stabilizing algorithms. Other self-stabilizing
algorithms have been proposed for independent sets imposing additional
constraints besides the independence condition. For example, [13] presented an
algorithm to discover an independent set where each node $u$ outside $S$ has
at least one neighbor $v$ in $S$ such that $deg(v)>deg(u)$. In [2], the
authors propose a distributed self-stabilizing algorithm to find a MIS using
two-hop (distance-2) information in wireless sensor networks.
### 1.2 Related works
[9, 10] proposed self-stabilizing algorithms to find an independent dominating
set imposing a distance greater than $k$ between any two nodes of the
independent set; work [10] improves the memory management of the first one
[9]. Every node outside the independent set is within distance $k$. [4]
presented a self-stabilizing algorithm to compute a dominating set $S$ (which
is not independent) where every node outside $S$ must be at distance at most
$k$ from $S$. Although the preceding algorithms have bounded round complexity
$O(n+k)$, the authors indicate that they might still never converge under the
distributed daemon, since the daemon could forever ignore an enabled node. It
is known in the literature that a finite round complexity of a
self-stabilizing algorithm does not imply convergence [4]. Therefore, the
computation of the convergence time remains an open question [4, 9, 10] for
independent (or dominating) sets at distance $k\geq 2$.
### 1.3 Contribution
In this paper, we propose a self-stabilizing algorithm, called MD2IS, to find
a maximal distance-2 independent set. We prove the correctness and the
convergence of the presented algorithm. Using a serial central daemon, and
starting from an arbitrary configuration, MD2IS reaches the correct
configuration in a bounded number of moves: at most $2n$ moves in the worst
case. This means that MD2IS converges in $O(n)$ moves using a central daemon
under the expression distance-2 model. In the distance-one model, our
algorithm reaches the correct configuration in $O(nm)$ moves using a
distributed daemon, where $n$ is the number of nodes and $m$ the number of
edges. Proofs and simulation tests confirm the convergence of the proposed
algorithm, which provides smaller independent sets in large-scale graphs.
A smaller independent set is useful in many applications and allows more
control over large-scale networks. For example, the problem of locating an
anonymous source of diffusion [14, 15] requires placing a few nodes as
observers [17]. A small number of nodes occupying important locations in the
graph can be used as observers to detect the source of rumors in social
networks.
### 1.4 Organization of the paper
This paper is made up of five sections. Section 2 presents the model and the
terminology used for self-stabilization. In section 3, we introduce the
proposed self-stabilizing algorithm for finding a maximal independent set at
distance-2. Simulation tests are conducted in section 4. Finally, section 5
concludes the paper.
## 2 Model and terminology
Networks and distributed systems are generally modelled as an undirected graph
$G=(V,E)$, where the processing units are the set of nodes $V$ and the links
are the set of edges $E$. The neighborhood of a node $v\in V$ is defined as
$N(v)=\\{u\in V:vu\in E\\}$. Usually, we say that two nodes $v,u$ are
$adjacent$ if $u\in N(v)$. We define the neighborhood at distance-2 as
$N(v)_{dist2}=N(v)\cup\\{e\in V:(\exists u\in N(v):e\in N(u))\\}$. It is known
in the graph literature that the distance between two adjacent nodes is 1.
Clearly, the neighborhood of $v$ at distance-2 gathers the nodes at distance 1
and 2 from $v$.
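As a concrete illustration, the neighborhood sets defined above can be computed directly from an adjacency map (a Python sketch; the helper names are ours, not from the paper):

```python
def neighbors(adj, v):
    """N(v): the nodes adjacent to v."""
    return set(adj[v])

def neighbors_dist2(adj, v):
    """N(v)_dist2: nodes at distance 1 or 2 from v (v itself excluded)."""
    n1 = neighbors(adj, v)
    n2 = set()
    for u in n1:
        n2 |= neighbors(adj, u)   # neighbors of neighbors
    return (n1 | n2) - {v}

# Path graph 0 - 1 - 2 - 3
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(neighbors_dist2(adj, 1))  # nodes within distance 2 of node 1
```

On the path graph, node 1 sees nodes 0, 2 and 3 within distance 2, while node 0 sees only 1 and 2.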
A set $S\subset V$ is $independent$ if no two nodes of $S$ are adjacent; in
other words, no two nodes of $S$ are at distance 1. Generally, a set $S$ of
nodes is $distance$-$k$ independent if every node in $S$ is at distance at
least $k+1$ from any other node of $S$ [9]. Consequently, a distance-2
independent set is a subset $S$ of $V$ where every two nodes of $S$ are at
distance $>2$. Recall that in the usual case of a MIS, each node in the graph
is either independent or dominated, $i.e.$ every node of the MIS is
independent and every node outside the MIS is dominated [7]. In our case,
every node $u$ outside the MD2IS is dominated by a node $v\in$ MD2IS with
either $dist(u,v)=1$ or $dist(u,v)=2$. Note that a node $u$ outside the MD2IS
can be dominated by several nodes of the MD2IS; for example, it is possible to
find a node $u$ dominated by $v1$ and $v2$ with $dist(u,v1)=1$ and
$dist(u,v2)=2$.
Definition : A $distance$-$2$ $independent$ $set$ of a graph $G(V,E)$ is a
subset $S$ of $V$ such that the distance between any two nodes of $S$ is
strictly greater than $2$. $S$ is $maximal$ if no superset of $S$ is also a
distance-2 independent set.
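This definition translates directly into a checker (a Python sketch under our own naming; distances are computed by breadth-first search):

```python
from collections import deque

def dist(adj, s, t):
    """Shortest-path distance from s to t via BFS; inf if unreachable."""
    seen, q = {s: 0}, deque([s])
    while q:
        v = q.popleft()
        if v == t:
            return seen[v]
        for u in adj[v]:
            if u not in seen:
                seen[u] = seen[v] + 1
                q.append(u)
    return float("inf")

def is_d2is(adj, S):
    """Distance-2 independent: every pair of nodes of S is at distance > 2."""
    S = list(S)
    return all(dist(adj, S[i], S[j]) > 2
               for i in range(len(S)) for j in range(i + 1, len(S)))

def is_md2is(adj, S):
    """Maximal: no outside node can join S without breaking the property."""
    S = set(S)
    return is_d2is(adj, S) and all(not is_d2is(adj, S | {v})
                                   for v in adj if v not in S)

adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}  # path 0-1-2-3
print(is_md2is(adj, {0, 3}))  # True: the endpoints are at distance 3
```

The singleton $\{0\}$ fails the check on this graph: it is distance-2 independent but not maximal, since node 3 could still join.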
An algorithm is self-stabilizing if it reaches a globally correct
configuration, called $legitimate$, in finite time starting from an unknown
configuration. Once a self-stabilizing algorithm reaches the correct
configuration, it must stay there (the $closure$ condition). Hence, to show
that an algorithm is self-stabilizing, it is sufficient to prove its $closure$
for the legitimate configuration and its $convergence$ to the desired
configuration in finite time. In a uniform self-stabilizing system, all nodes
execute the same code, which is a set of rules of the form: if $guard$ then
$statement$ (written as: $guard\longrightarrow statement$). In this case,
nodes use the same local variables describing their $state$. A guard is a
boolean expression (or a collection of them). If a guard evaluates to true,
the corresponding rule is said to be $enabled$ (or $privileged$). We say that
a node is $enabled$ if at least one of its rules is enabled.
Executing the statement of an enabled rule is called a $move$. An enabled node
can make a move only if it is selected by a scheduler called a $daemon$. A
move updates the local state (local variables) in order to make the node's
state consistent with its neighborhood.
### 2.1 Daemon notion
The execution of a self-stabilizing algorithm is managed by a daemon
(scheduler) that selects the enabled nodes to move from one configuration to
another. Two types of daemons are widely used in the literature: central and
distributed. Under a central daemon, only one node is selected to move among
all the enabled nodes; the central daemon, also called serial, lets enabled
nodes execute moves one by one in serial order. Under a distributed daemon, a
subset of the privileged nodes is selected to move simultaneously; the
selected subset of nodes moving simultaneously forms a $round$. The
distributed daemon is said to be $synchronous$ when all the enabled nodes are
selected to move simultaneously.
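The three daemon variants can be contrasted in a few lines (an abstract sketch, independent of any particular algorithm; the function names are ours):

```python
import random

def central_daemon(enabled, rng):
    """Serial central daemon: exactly one enabled node is scheduled."""
    return {rng.choice(sorted(enabled))}

def distributed_daemon(enabled, rng):
    """Distributed daemon: a nonempty subset of enabled nodes moves (a round)."""
    chosen = {v for v in enabled if rng.random() < 0.5}
    return chosen if chosen else {rng.choice(sorted(enabled))}

def synchronous_daemon(enabled, rng):
    """Synchronous daemon: every enabled node moves simultaneously."""
    return set(enabled)

rng = random.Random(0)
enabled = {2, 5, 7}
print(central_daemon(enabled, rng), synchronous_daemon(enabled, rng))
```

Note that the distributed daemon is an adversary in correctness proofs: it may pick any nonempty subset, so the random choice above is only one possible schedule.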
### 2.2 Distance model
Generally, most existing self-stabilizing algorithms use the distance-one
model, wherein each node has a partial view of its neighbors at distance one
(through access to the $state$ variable of the neighbors). However, there are
also distance-2 models (like the expression model) where every node obtains
information about its neighborhood at distance-2. In the expression model,
which is a particular case of the distance-2 model, access to distance-2 is
reached indirectly through access to the $expression$ of the neighbors. In the
distance-2 model, hypotheses should be constructed carefully: for example, to
the best of our knowledge, there is no distance-2 algorithm that can operate
under a distributed daemon; the existing distance-2 algorithms have been
developed only under a central daemon.
### 2.3 Transformers
A common approach, known in the literature [19, 20], converts a
self-stabilizing algorithm $A$ operating under a given hypothesis into a new
algorithm $A^{T}$ that operates under a different hypothesis, while
guaranteeing that the two algorithms converge to the same legitimate
configuration. Different kinds of transformers can be found in the literature,
such as distance transformers and daemon transformers. The transformation
generally adds an overhead to the algorithm complexity, which slows down the
convergence of the transformed algorithm. In this paper, we use the
transformer proposed by [19], which transforms any self-stabilizing algorithm
operating under a serial central daemon and the expression distance-2 model
into a self-stabilizing algorithm that operates under a distributed daemon and
the distance-one model.
### 2.4 Our execution model
In this paper, we develop a uniform self-stabilizing algorithm. In a first
step, we suppose that our algorithm operates under the expression distance-2
model using a central daemon. We then use the transformer proposed by [19] to
convert our algorithm into one that runs under the distance-one model using a
distributed daemon.
## 3 Self-stabilizing algorithm MD2IS
The proposed self-stabilizing algorithm, presented in Algorithm 1, finds the
maximal distance-2 independent set. Each node $v$ maintains a local variable
$state$ and an expression $exp$. The $state$ variable takes one of the values
$In$ or $Out$. Once the system reaches the legitimate configuration, all the
nodes are disabled and the set $S=\\{v\in V:v.state=In\\}$ forms the maximal
distance-2 independent set. In an illegitimate configuration, serial moves of
enabled nodes are executed until the globally correct configuration is
reached. At every move, an enabled node is selected randomly by the central
daemon.
Every node checks its state against its neighborhood using $state$ and $exp$.
The expression $exp$ counts the number of neighbors in $S$. Generally, the
expression model allows discovering the neighborhood at distance-2: by reading
the expressions of its neighbors, a node obtains information about its
neighbors at distance-2.
In our algorithm, R1 states that if a node $v$ outside $S$ reads the $state$
and $exp$ of all its neighbors and finds that every $state=Out$ and every
$exp=0$, then all its neighbors at distance-2 are out, so node $v$ has to join
the set $S$. Conversely, R2 states that if a node $v\in S$ finds at least one
neighbor $u$ such that $u.state=In$ or $u.exp>1$, then $v$ has to leave $S$.
Note that if $v\in S$ and $u.exp>1$, then $u$ is dominated by at least two
nodes: $v$ and another node. Observe also that if $v.state=In$ and $u.exp=1$,
then $u$ is dominated only by $v$; $u.exp=0$ is impossible because $u$ has at
least the neighbor $v\in S$.
It is clear that R1 ensures the independence property, because by executing R1
node $v$ becomes independent at distance-2, leading to
$v.state=In\wedge\forall u\in N(v):(u.state=Out\wedge u.exp=0)$. R2 guarantees
that every node $v$ outside the MD2IS is dominated within distance-2, because
$v.state=Out\wedge\exists u\in N(v):(u.state=In\vee u.exp>1)$.
Figure 2 shows how our algorithm converges from an initial illegitimate
configuration to a final legitimate configuration using a central serial
daemon. Nodes outlined in red are (privileged) candidates to execute a move.
The central daemon selects at every step only one privileged node to execute a
move. Green nodes represent the MD2IS, which is reached after 5 moves by the
sequence of rules: R2, R2, R2, R1, R1. Observe that in some cases a move (for
example the transition from configuration c to configuration d) can enable
other nodes in the next configuration.
Figure 2: Convergence to the final legitimate MD2IS configuration.
(1) $v.exp:=|\\{u\in N(v):u.state=In\\}|$
(2) R1: $v.state=Out\wedge\forall u\in N(v):(u.state=Out\wedge u.exp=0)\longrightarrow v.state=In$
(3) R2: $v.state=In\wedge\exists u\in N(v):(u.state=In\vee u.exp>1)\longrightarrow v.state=Out$
Algorithm 1 Maximal Distance-2 Independent Dominating Set - MD2IS
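A minimal sketch of Algorithm 1 in Python, as we read it from the listing above (this is our own simulation, not the authors' Java implementation; the serial central daemon is modelled by picking one enabled node per step):

```python
import random

def exp_of(adj, state, u):
    # u.exp := |{w in N(u) : w.state = In}|
    return sum(1 for w in adj[u] if state[w] == "In")

def enabled_rule(adj, state, v):
    if state[v] == "Out" and all(state[u] == "Out" and exp_of(adj, state, u) == 0
                                 for u in adj[v]):
        return "R1"  # no In node within distance 2: v may join S
    if state[v] == "In" and any(state[u] == "In" or exp_of(adj, state, u) > 1
                                for u in adj[v]):
        return "R2"  # a conflict within distance 2: v must leave S
    return None

def stabilize(adj, state, rng):
    """Run moves under a serial central daemon until no node is enabled."""
    moves = 0
    while True:
        enabled = [v for v in adj if enabled_rule(adj, state, v)]
        if not enabled:
            return moves  # legitimate configuration reached
        v = rng.choice(enabled)
        state[v] = "In" if enabled_rule(adj, state, v) == "R1" else "Out"
        moves += 1

# Path graph 0-1-2-3-4-5, all nodes initially Out
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
state = {v: "Out" for v in adj}
moves = stabilize(adj, state, random.Random(0))
S = sorted(v for v in adj if state[v] == "In")
print(S, moves)
```

Since a node reads the $exp$ of its neighbors, the sketch evaluates guards globally for simplicity; a faithful distributed implementation would restrict each node to its local view.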
### 3.1 Closure
###### Lemma 1
When all the nodes are disabled, the set $S=\\{v\in V,v.state=In\\}$ is a
maximal distance-2 independent set.
Proof Suppose that the system is in a legitimate configuration. Since $R2$ is
not enabled for nodes $v$ in $S$, the condition $\exists u\in
N(v):(u.state=In\vee u.exp>1)$ is false. Thus, $\forall u\in
N(v):(u.state=Out\wedge u.exp\leq 1)$. Therefore, at distance 1 from $v$, all
the nodes are outside $S$. At distance-2 from $v$ there is no node in $S$,
because $u.exp\leq 1$ is in fact exactly $1$ ($u$ has at least the neighbor
$v$ in $S$, so all the neighbors of $u$ are outside $S$ except $v$); this
implies that all the nodes in the neighborhood of $v$ at distance-2 are
outside $S$.
To show that $S$ is maximal, observe that if we add a node $v$ with
$v.state=Out$, then not all of $v$'s neighbors at distance-2 are outside $S$
($i.e.$ $v$ has at least one neighbor at distance-2 in $S$). Thus, adding this
node would violate distance-2 independence.
$\Box_{Lemma~{}\ref{lem1}}$
### 3.2 Convergence
###### Lemma 2
If a node $v$ executes $R1$, becoming independent, it remains independent, and
every node in its neighborhood at distance-2 stays outside $S$ and cannot
become enabled.
Proof When a node $v$ executes $R1$, all the nodes $u$ at distance-2 from $v$
are outside $S$. Clearly, no such $u$ can enable $R1$, because there is at
least the node $v$ in $S$ in the neighborhood of $u$ at distance-2. Thus, $v$
stays in $S$ and all its neighbors at distance-2 stay outside $S$.
$\Box_{Lemma~{}\ref{lem2}}$
###### Lemma 3
Any node of $V$ is enabled at most twice, by R2 then R1. Thus, MD2IS
terminates in the worst case in $2n$ moves under the expression model using a
central daemon.
Proof Lemma 2 shows that once a node executes R1, it cannot move again. It
follows that any node can be enabled (in the worst case) only by R2 and then
R1. Consequently, $2n$ moves is an upper bound on the stabilization time for
$n$ nodes. $\Box_{Lemma~{}\ref{lem3}}$
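The $2n$ bound of Lemma 3 can also be checked empirically over random graphs and arbitrary initial configurations (a self-contained Python sketch restating the rules of Algorithm 1 in our own notation, with $In=1$, $Out=0$):

```python
import itertools
import random

def exp_of(adj, st, u):
    return sum(st[w] == 1 for w in adj[u])   # number of In neighbors of u

def rule(adj, st, v):
    if st[v] == 0 and all(st[u] == 0 and exp_of(adj, st, u) == 0 for u in adj[v]):
        return "R1"
    if st[v] == 1 and any(st[u] == 1 or exp_of(adj, st, u) > 1 for u in adj[v]):
        return "R2"
    return None

def run(adj, st, rng):
    moves = 0
    while True:
        en = [v for v in adj if rule(adj, st, v)]
        if not en:
            return moves
        v = rng.choice(en)                    # serial central daemon
        st[v] = 1 if rule(adj, st, v) == "R1" else 0
        moves += 1

rng = random.Random(1)
worst = 0.0
for _ in range(200):
    n = rng.randint(2, 12)
    adj = {v: set() for v in range(n)}
    for u, v in itertools.combinations(range(n), 2):
        if rng.random() < 0.3:
            adj[u].add(v)
            adj[v].add(u)
    st = {v: rng.randint(0, 1) for v in range(n)}  # arbitrary initial configuration
    moves = run(adj, st, rng)
    assert moves <= 2 * n                          # never exceeds Lemma 3's bound
    worst = max(worst, moves / n)
print(f"worst observed moves/n over 200 runs: {worst:.2f}")
```

This is a check of the bound, not a proof: the random daemon explores only some of the serial schedules the lemma covers.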
###### Theorem 1
MD2IS is a self-stabilizing algorithm that constructs a Maximal Distance-2
Independent Set in $O(n)$ moves under the expression model using a central
daemon.
Proof The proof follows from Lemma 1 and Lemma 3.
$\Box_{Theorem~{}\ref{th1}}$
The last theorem shows that MD2IS stabilizes under the central daemon and the
expression model. Now, we use the transformer proposed by [19], which gives a
corresponding self-stabilizing algorithm, MD2ISD, operating under a
distributed daemon and the distance-one model.
###### Theorem 2
MD2ISD converges to a legitimate configuration in $O(nm)$ moves in the
distance-one model under a distributed daemon, where $m$ is the number of
edges.
Proof Using Theorem 1, the proof follows from Theorem 18 of [19].
$\Box_{Theorem~{}\ref{th2}}$
## 4 Simulation results
In this section, simulation tests are carried out to evaluate MD2IS. We
compare the cardinality of MD2IS with that of MIS, and then observe how the
cardinality of MD2IS evolves as the graph size grows. Although the comparison
with MIS is unfair, there is no other algorithm against which we could
evaluate the MD2IS cardinality. The algorithms are written in Java, using the
expression model for MD2IS. For MIS [5], we use the implementation of Lukasz
Kuszner [11]. We generated arbitrary graphs of various densities with sizes
from 500 to 20000 nodes. For each graph size, we carried out 5 to 10
executions and took the average value.
Figure 3: Cardinality of MD2IS and MIS according density using graphs of 1000
and 3000 nodes.
Figure 3 shows the cardinality of MD2IS and MIS as a function of graph
density. It is clear that MD2IS gives smaller independent sets than MIS. The
density of the graph has a clear impact on the cardinality of the independent
sets: the higher the density, the smaller the cardinality. The important
observation is that the cardinality of MD2IS approaches 1 when the density
exceeds 0.5. This is a rational result, because for complete graphs
(density=1) the cardinality of MD2IS is 1.
Figure 4: Cardinality of MD2IS and MIS according graphs size (Density=0.01).
Figure 4 illustrates the cardinality of the independent sets as a function of
graph size. Using a constant density of 0.01, the curves show that the MIS
cardinality increases proportionally with the graph size, whereas the
cardinality of MD2IS is inversely proportional to it. For graphs of 10000
nodes, the cardinality of MIS is greater than 100 whereas the cardinality of
MD2IS is less than 10 nodes.
Graph size | MD2IS Cardinality | MIS Cardinality | MD2IS Convergence | MIS Convergence
---|---|---|---|---
1000 | 601.8 (60.18%) | 687.8 (68.78%) | 400.2 | 415.6
1500 | 703.6 (46.91%) | 926.4 (61.76%) | 652.8 | 636.6
2000 | 758.2 (37.91%) | 1099.4 (54.97%) | 910.8 | 845.4
2500 | 794.4 (31.78%) | 1248.2 (49.93%) | 1229.2 | 1078.4
3000 | 786.6 (26.22%) | 1385.0 (46.17%) | 1570.8 | 1327.4
3500 | 776.4 (22.18%) | 1522.4 (43.50%) | 1877.2 | 1624.6
4000 | 762.2 (19.06%) | 1614.4 (40.36%) | 2202.8 | 1887.4
4500 | 748.8 (16.64%) | 1720.4 (38.23%) | 2483.6 | 2206.0
5000 | 724.2 (14.48%) | 1801.0 (36.02%) | 2785.8 | 2454.8
5500 | 711.0 (12.93%) | 1887.0 (34.31%) | 3048.4 | 2749.2
6000 | 692.2 (11.54%) | 1964.4 (32.74%) | 3335.6 | 3061.6
6500 | 662.8 (10.20%) | 2020.8 (31.09%) | 3551.6 | 3324.6
7000 | 642.6 (9.18%) | 2111.8 (30.17%) | 3825.2 | 3691.4
7500 | 633.8 (8.45%) | 2149.8 (28.66%) | 4098.8 | 3927.2
8000 | 604.6 (7.56%) | 2216.6 (27.71%) | 4333.4 | 4229.2
8500 | 601.8 (7.08%) | 2269.2 (26.70%) | 4586.4 | 4542.8
9000 | 577.6 (6.42%) | 2320.4 (25.78%) | 4836.0 | 4830.8
9500 | 552.8 (5.82%) | 2364.6 (24.89%) | 5097.2 | 5152.4
10000 | 548.4 (5.48%) | 2415.2 (24.15%) | 5350.8 | 5379.2
10500 | 527.4 (5.02%) | 2476.2 (23.58%) | 5645.0 | 5728.4
11000 | 501.6 (4.56%) | 2507.4 (22.79%) | 5817.6 | 6063.2
11500 | 498.2 (4.33%) | 2560.4 (22.26%) | 6091.0 | 6303.6
12000 | 481.2 (4.01%) | 2602.2 (21.69%) | 6300.6 | 6622.0
12500 | 477.8 (3.82%) | 2635.2 (21.08%) | 6555.8 | 6923.8
13000 | 454.6 (3.50%) | 2665.6 (20.50%) | 6790.0 | 7195.0
13500 | 448.0 (3.32%) | 2705.0 (20.04%) | 7057.6 | 7496.6
14000 | 435.2 (3.11%) | 2747.8 (19.63%) | 7251.6 | 7758.6
14500 | 427.4 (2.95%) | 2785.0 (19.21%) | 7512.0 | 8045.0
15000 | 416.2 (2.77%) | 2793.0 (18.62%) | 7798.4 | 8234.0
15500 | 408.2 (2.63%) | 2839.4 (18.32%) | 7988.8 | 8579.8
16000 | 398.8 (2.49%) | 2867.4 (17.92%) | 8293.6 | 8857.8
16500 | 392.0 (2.38%) | 2886.6 (17.49%) | 8496.2 | 9177.6
17000 | 383.4 (2.26%) | 2934.2 (17.26%) | 8798.4 | 9433.6
17500 | 380.2 (2.17%) | 2962.6 (16.93%) | 9043.4 | 9704.0
18000 | 367.6 (2.04%) | 2976.6 (16.54%) | 9249.4 | 9972.0
18500 | 361.0 (1.95%) | 3017.8 (16.31%) | 9481.0 | 10284.0
19000 | 355.2 (1.87%) | 3036.6 (15.98%) | 9748.6 | 10535.8
19500 | 353.0 (1.81%) | 3054.8 (15.67%) | 9998.8 | 10785.0
20000 | 340.4 (1.70%) | 3080.2 (15.40%) | 10207.8 | 11032.4
Table 1: Cardinality and time of convergence of MD2IS (on graphs of
density=0.001).
Further simulation results are given in Table 1, where the convergence time is
shown to be finite and to vary proportionally with the graph size. The results
confirm Lemma 3, that algorithm MD2IS converges in at most $2n$ moves,
although the simulations give a smaller number of moves, close to $n/2$. In
these tests, random graphs were generated with a small density of 0.001. We
use this latter value of density to be closer to real networks: for example,
for a graph of 10000 nodes, a density of 0.001 gives an average degree of 10
for each node, which models a user having 10 friends on a social network. The
generated graphs have orders from 1000 to 20000 nodes. We take the average
value after carrying out 5 tests.
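The experimental setup can be reproduced in outline at a small scale (a Python sketch, not the authors' Java harness; the MIS here uses the two classical rules of [16] rather than the exact algorithm of [5], and the graph sizes are reduced to keep the run short):

```python
import itertools
import random

def random_graph(n, density, rng):
    """Erdos-Renyi style: each possible edge is present with prob. `density`."""
    adj = {v: set() for v in range(n)}
    for u, v in itertools.combinations(range(n), 2):
        if rng.random() < density:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def stabilize(adj, join, leave, rng):
    """Serial central daemon over a join rule and a leave rule."""
    st = {v: rng.randint(0, 1) for v in adj}      # arbitrary initial states
    while True:
        en = [v for v in adj if join(adj, st, v) or leave(adj, st, v)]
        if not en:
            return {v for v in adj if st[v] == 1}
        v = rng.choice(en)
        st[v] = 1 if join(adj, st, v) else 0

# Classical MIS rules (Shukla et al. [16]).
mis_join = lambda adj, st, v: st[v] == 0 and all(st[u] == 0 for u in adj[v])
mis_leave = lambda adj, st, v: st[v] == 1 and any(st[u] == 1 for u in adj[v])

# MD2IS rules (Algorithm 1), with exp computed on the fly.
def exp_of(adj, st, u):
    return sum(st[w] == 1 for w in adj[u])
md2_join = lambda adj, st, v: st[v] == 0 and all(
    st[u] == 0 and exp_of(adj, st, u) == 0 for u in adj[v])
md2_leave = lambda adj, st, v: st[v] == 1 and any(
    st[u] == 1 or exp_of(adj, st, u) > 1 for u in adj[v])

rng = random.Random(42)
adj = random_graph(80, 0.1, rng)
mis = stabilize(adj, mis_join, mis_leave, rng)
md2 = stabilize(adj, md2_join, md2_leave, rng)
print(f"|MIS| = {len(mis)}, |MD2IS| = {len(md2)}")
```

On graphs of this density the MD2IS is typically much smaller than the MIS, matching the trend in Table 1.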
## 5 Conclusion
In this paper we proposed a first self-stabilizing MD2IS algorithm that
converges to the correct configuration in $O(n)$ moves using a central daemon
under the expression model. In the distance-one model, MD2ISD terminates in
$O(nm)$ moves using a distributed daemon.
The computation of the convergence time for distance-$k$ independent sets,
which remains an open question, is left for future work. We also plan to
evaluate the use of MD2IS nodes as observers for the problem of locating
sources propagating rumors on real social network graphs.
## References
* [1] O. Arapoglu, V. K. Akram, and O. Dagdeviren. An energy-efficient, self-stabilizing and distributed algorithm for maximal independent set construction in wireless sensor networks. Computer Standards and Interfaces, 62:32-42, 2019.
* [2] O. Arapoglu and O. Dagdeviren. An Asynchronous Self-Stabilizing Maximal Independent Set Algorithm in Wireless Sensor Networks Using Two-Hop Information, International Symposium on Networks, Computers and Communications (ISNCC), Istanbul, Turkey, 2019, pp. 1-5. doi:10.1109/ISNCC.2019.8909189.
* [3] D. Bein, A. K. Datta, C. R. Jagganagari, and V. Villain. A self-stabilizing link-cluster algorithm in mobile ad hoc networks. In 8th International Symposium on Parallel Architectures, Algorithms, and Networks, ISPAN, pages 436-441. IEEE Computer Society, 2005.
* [4] A. K. Datta, S. Devismes, and L. L. Larmore. A silent self-stabilizing algorithm for the generalized minimal k-dominating set problem. Theoretical Computer Science, 753:35-63, 2019.
* [5] W. Goddard, S. T. Hedetniemi, D. P. Jacobs, and P. K. Srimani. Self-stabilizing protocols for maximal matching and maximal independent sets for ad hoc networks. In 5th IPDPS Workshop on Advances in Parallel and Distributed Computational Models, page 162. IEEE Computer Society, 2003. doi:10.1109/IPDPS.2003.1213302.
* [6] N. Guellati and H. Kheddouci. A survey on self-stabilizing algorithms for independence, domination, coloring, and matching in graphs. Journal of Parallel and Distributed Computing, 70(4):406-415, 2010.
* [7] S. Hedetniemi, S. Hedetniemi, D. Jacobs, and P. Srimani. Self-stabilizing algorithms for minimal dominating sets and maximal independent sets. Computer and Mathematics with Applications, 46(56):805-811, 2003.
* [8] M. Ikeda, S. Kamei, and H. Kakugawa. A space-optimal self-stabilizing algorithm for the maximal independent set problem. In 3rd International Conference on Parallel and Distributed Computing, Applications and Technologies, pages 70-74, 2002.
* [9] C. Johnen. Fast, silent self-stabilizing distance-k independent dominating set construction. Information Processing Letters, 114:551-555, 2014.
* [10] C. Johnen. Memory efficient self-stabilizing distance-k independent dominating set construction. In Networked Systems - Third International Conference, pages 354-366. Springer International Publishing, 2015.
* [11] L. Kuszner. Tools to develop and test self-stabilizing algorithms. http://kaims.eti.pg.gda.pl/ kuszner/self-stab/main.html, 2005.
* [12] A. Larsson and P. Tsigas. A self-stabilizing (k,r)-clustering algorithm with multiple paths for wireless ad-hoc networks. In 31st International Conference on Distributed Computing Systems, pages 353-362. IEEE, 2011.
* [13] B. Neggazi, N. Guellati, M. Haddad, and H. Kheddouci. Efficient self-stabilizing algorithm for independent strong dominating sets in arbitrary graphs. International Journal of Foundations of Computer Science, 26(6):751-768, 2015.
* [14] P. C. Pinto, P. Thiran, and M. Vetterli. Locating the source of diffusion in large-scale networks. Physical Review Letters, 109(6):068702, 2012.
* [15] E. Seo, P. Mohapatra, and T. Abdelzaher. Identifying rumors and their sources in social networks. In Proceedings of SPIE - The International Society for Optical Engineering, volume 8389, 2012.
* [16] S. K. Shukla, D. J. Rosenkrantz, and S. S. Ravi. Observations on self-stabilizing graph algorithms for anonymous networks. In 2nd Workshop on Self-Stabilizing Systems, 1995.
* [17] B. Spinelli, L. E. Celis, and P. Thiran. Observer placement for source localization: The effect of budgets and transmission variance. In 54th Annual Allerton Conference on Communication, Control, and Computing , pages 743-751, 2016.
* [18] V. Turau. Linear self-stabilizing algorithms for the independent and dominating set problems using an unfair distributed scheduler. Information Processing Letters, 103(3):88-93, 2007.
* [19] V. Turau. Efficient transformation of distance-2 self-stabilizing algorithms. Journal of Parallel and Distributed Computing, 72(4):603-612, 2012.
* [20] W. Goddard, P. K. Srimani, Daemon conversions in distributed selfstabilizing algorithms, in: Ghosh S.K., Tokuyama T. (eds) WALCOM: Algorithms and Computation. WALCOM 2013. Lecture Notes in Computer Science, Springer, Berlin, Heidelberg, 2013, pp. 146–157.
# Constraints on Planets in Nearby Young Moving Groups Detectable by High-
Contrast Imaging and Gaia Astrometry
A. L. Wallace1 (ORCID: 0000-0002-6591-5290), M. J. Ireland1 (ORCID:
0000-0002-6194-043X), C. Federrath1 (ORCID: 0000-0002-0706-2306)
1Research School of Astronomy & Astrophysics, Australian National University,
Canberra, ACT 2611, Australia
E-mail<EMAIL_ADDRESS><EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
The formation of giant planets can be studied through direct imaging by
observing planets both during and after formation. Giant planets are expected
to form either by core accretion, which is typically associated with low
initial entropy (cold-start models) or by gravitational instability,
associated with high initial entropy of the gas (hot-start models). The initial entropy determines the resultant brightness evolution, so constraining it can provide insight into a planet’s formation process. In this study, we
find that, by observing planets in nearby moving groups of known age both
through direct imaging and astrometry with Gaia, it will be possible to
constrain the initial entropy of giant planets. We simulate a set of planetary
systems around stars in nearby moving groups identified by BANYAN $\Sigma$ and
assume a model for planet distribution consistent with radial velocity
detections. We find that Gaia should be able to detect approximately 25% of
planets more massive than $\sim 0.3\,M_{\text{J}}$ in nearby moving groups. Using
5$\sigma$ contrast limits of current and future instruments, we calculate the
flux uncertainty, and using models for the evolution of the planet brightness,
we convert this to an initial entropy uncertainty. We find that future
instruments such as METIS on E-ELT as well as GRAVITY and VIKiNG with VLTI
should be able to constrain the entropy to within 0.5 $k_{B}$/baryon, which
implies that these instruments should be able to distinguish between hot and
cold-start models.
###### keywords:
gaseous planets – formation – detection
## 1 Introduction
In order to observationally constrain the formation and evolution of planetary
systems, it is necessary to observe planets during or shortly after formation.
While there has been some success with discovering planets in the process of
formation (e.g., the PDS 70 system; Keppler et al. (2018); Benisty et al.
(2021)), recent direct imaging surveys of the nearest star-forming regions,
despite discoveries of new brown dwarf companions, highlighted a low frequency
of wide orbit planetary mass companions (Kraus & Ireland, 2011; Wallace et
al., 2020). The low number of positive detections of planets in wide orbits
combined with sensitivity limits has produced upper limits on the frequency of
giant planets (Bowler & Nielsen, 2018) and constraints on formation models
(Nielsen et al., 2019; Vigan et al., 2020). From predicted occurrence rates,
it has also been determined that current instruments have insufficient
sensitivity at the expected separations to detect planets around solar-type
stars in nearby star-forming regions (Wallace & Ireland, 2019). However, there
has been greater success with wide-separation planets around high-mass stars
in young nearby moving groups such as $\beta$-Pictoris (Lagrange et al., 2009)
and 51-Eridani (Macintosh et al., 2015).
Nearby moving groups have been studied in detail over the years (Torres et
al., 2008; Zuckerman et al., 2011; Rodriguez et al., 2013) and recently,
precise proper motion and parallax measurements of nearby stars have allowed
reasonably accurate determination of membership to these groups (Gagné et al.,
2018; Schneider et al., 2019). There are at least 27 such associations within
150 pc with ages less than $\sim$800 Myr. The young ages and small distances
of these systems make them ideal for young planet surveys (López-Santiago et
al., 2006).
The upcoming Gaia DR3 and subsequent data releases promise high-precision mass
calculations for many giant, long-period exoplanets (Perryman et al., 2014). A
recent study of HR 8799 has already delivered results (Brandt et al., 2021)
and measured the mass of HR 8799 e. However, Gaia’s expected 5–10 year mission
lifetime puts an upper limit on the semi-major axes of Gaia-detectable planets
with non-degenerate solutions, which limits the possibilities for high-contrast imaging studies. In order to conduct high-contrast imaging studies
of Gaia-detectable planets, we must observe young planets in nearby systems.
Planets in the process of formation radiate with a luminosity proportional to
their accretion rate and total mass (Zhu, 2015). The amount of energy radiated
away during formation has an effect on the internal entropy of the planet. If
the accretion shock radiates all accretion energy away, the planet will form
with low entropy (cold-start). If none of the accretion energy is radiated
away, the planet will have high entropy (hot-start) (Berardo et al., 2017).
The values of internal entropy corresponding to hot and cold-starts depend on planetary mass (as shown in the ‘tuning fork’ diagram from Marley et al. (2007)). Hot-start planets are assumed to form quickly whereas cold-start
planets gradually accrete gas through the accretion shock (Mordasini et al.,
2012). Thus, the initial entropy of a planet can indicate its formation
conditions.
After formation, planets cool and fade over time, but the rate of cooling
depends on their internal entropy (Spiegel & Burrows, 2012). A hot-start
planet will be brighter than a cold-start planet shortly after formation. The
brightness of a planet can then be used to determine the initial entropy. As
planets age, the luminosity decreases at a rate dependent on initial entropy.
The luminosity of hot-start planets decreases faster than that of cold-start planets, which means that, as planets age, information about the initial entropy is lost.
However, if planets of known mass are observed at young ages, it should be
possible to infer the initial entropy based on the observed flux (Marleau &
Cumming, 2014).
If the mass and age of a planet are known with reasonable precision, the
greatest uncertainty results from the observed flux measurement. This flux
uncertainty depends on the sensitivity of our instruments. In this study, we
consider current instruments such as NIRC2, NaCo, SPHERE and GRAVITY, and future instruments such as JWST, VIKiNG (an interferometric instrument using the VLTI), and MICADO and METIS on the E-ELT. These instruments have observed and
theoretical detection limits which we convert to a flux uncertainty. Using
models linking flux, mass and age to initial entropy, we convert this to an
entropy uncertainty to determine how well the initial entropy can be
constrained.
The rest of the paper is organised as follows. Section 2 summarises our
stellar and planet sample and models for the evolution of planet luminosity
and its dependence on entropy. Section 3 focuses on detection limits of
astrometry and direct imaging and Section 4 presents the numbers of detectable
planets in our sample by both methods. Section 5 explains how we can constrain
the initial entropy of planets if we know the mass, magnitude and age. Our
conclusions are presented in Section 6.
## 2 Stellar and Planetary Properties
### 2.1 Stellar Sample
Our stellar sample comes from nearby young (<800 Myr) moving groups which are
promising targets for planet surveys. The stars are initially selected from
Gaia’s second data release (DR2) (Brown et al., 2018) and then sorted into
moving groups using the BANYAN $\Sigma$ from Gagné et al. (2018). Our initial
sample from Gaia DR2 included stars across the entire sky within 70 pc,
brighter than a G-magnitude of 13 and temperature greater than 3800 K. Beyond
$\sim$70 pc the majority of giant planets in $\sim$10 year orbits are not
detectable and moving-group membership is less completely known. Stars cooler than 3800 K are not considered solar-type in this paper, as they host a measurably smaller fraction of planets (Johnson et al., 2007). The magnitude cutoff
excludes stars considered too faint for reliable adaptive optics.
The moving group membership of each star is then determined by the BANYAN
$\Sigma$ algorithm. This process uses each star’s celestial coordinates,
distance and proper motion values and associated errors to determine the
probability of membership to a particular moving group from a list of 27
possible groups. A star that does not belong to any group with more than 70%
probability is discarded from the sample. A partial sky map of our stellar
sample, indicating group membership is shown in Figure 1. This map only
includes targets with a membership probability greater than 95%.
Figure 1: Stars in our sample belonging to moving groups determined by BANYAN
$\Sigma$. Map is shown in celestial coordinates with East to the left.
Although the BANYAN $\Sigma$ algorithm can sort stars into 27 moving groups,
there are only 10 groups with members closer than 70 pc as shown in Figure 1.
Our target stars are spread across the sky, but the majority are in the
southern hemisphere. The Argus association has the highest number of targets,
but the existence of this group has been controversial (Bell et al., 2015), as
it was unclear whether it represented a single moving group. However, recent
studies have suggested the association does indeed exist with an age of 40–50
Myr (Zuckerman, 2018) so we include it in this study.
Our resultant sample contains 1760 stars across 10 moving groups. The distributions of distance and mass of the stars in our sample are shown in Figure 2.
(a) Distribution of Stellar Distance
(b) Distribution of Stellar Mass
Figure 2: Distribution of star distance and mass in our sample. Note the
distance cuts off at 70 pc as we excluded stars further away than this.
Most of our targets are at distances of $\sim$40–60 pc and have low masses (<0.6 M⊙).
A colour-magnitude diagram is shown in Figure 3 which plots absolute
G-magnitude against effective temperature.
Figure 3: Colour-magnitude diagram of our targets. The temperature scale cuts off at 3800 K as we do not consider stars cooler than this.
As shown in Figures 2 and 3, the majority of our targets are cool, low-mass stars, which, in principle, should make it easier to detect planets by astrometry.
### 2.2 Planet Distribution
For each star in our sample, we simulate a system of planets with properties
sampled from a broken power law in mass $M$ and period $P$ taken from
Fernandes et al. (2019), which has the functional form
$\frac{d^{2}N}{d\mathrm{ln}M\,d\mathrm{ln}{P}}=CM^{\alpha}P^{\beta},$ (1)
where $N$ is the number of planets per star and $C$ is a normalisation
constant. We assume the mass power law is approximately consistent across all
masses with $\alpha=-0.45$. Based on radial velocity detections, the period
power law changes with distance from the star with $\beta>0$ at short periods
and $\beta<0$ for long periods. This broken power law is also consistent with
theoretical core formation expectations (Mordasini et al., 2009). The study
presented in Fernandes et al. (2019) gives several different values, but in
this study we use the symmetric distribution in which $\beta=0.63$ for periods
less than 860 days and $\beta=-0.63$ for periods greater than 860 days. The
constant $C$ is set such that the total number of planets is consistent with
observations. In the symmetric distribution from Fernandes et al. (2019),
assuming a Sun-like star, $C=0.067$. We also note that this distribution is
consistent out to 4 au with the models based on the California Legacy Survey
(Fulton et al., 2021).
When simulating our planet samples, we apply the distribution from Equation 1
in terms of semi-major axis $a$. This changes the power-law index to 0.945 at small separations and $-0.945$ at wide separations (multiplying the index by a factor of 3/2, since $P\propto a^{3/2}$). The power law changes at a fixed period of 860 days, which corresponds
to a semi-major axis of 1.77 au for a 1 M⊙ star and scales with
$M_{\star}^{1/3}$. This distribution implies the majority of planets are low
mass and at small separations. While there have been high-mass planets
observed at wide separations around HR 8799, $\beta$-Pic and 51 Eri, these are
around high-mass stars where there is known to be an excess of high-mass
planets (Johnson et al., 2010). The total number of planets is assumed to
increase with stellar mass and some studies (e.g. Bowler et al. (2009))
suggest a steep relationship. However, for simplicity, we assume the number of
planets scales linearly with stellar mass, as implied by Mulders (2018), and
the normalisation constant $C$ is simply multiplied by $M_{\star}/M_{\odot}$.
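The quoted break point follows directly from Kepler's third law; a minimal sketch in Python checking the 1.77 au value and the $M_{\star}^{1/3}$ scaling (function name is ours, not from the paper):

```python
def period_to_sma(period_days, m_star_msun):
    """Convert orbital period to semi-major axis via Kepler's third law.

    In units of au, years and solar masses: a^3 / P^2 = M_star.
    """
    period_years = period_days / 365.25
    return (period_years ** 2 * m_star_msun) ** (1.0 / 3.0)

# The break period of 860 days around a 1 Msun star:
a_break = period_to_sma(860.0, 1.0)
print(f"{a_break:.2f} au")  # ~1.77 au, matching the value quoted above
```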
We simulate planet masses over a range of 0.3–13 MJ and semi-major axes over a
range of 1–10 au. Integrating the power law shown in Equation 1 over this
range gives $N\sim 0.07$ planets per star. The symmetric planet distribution
from Fernandes et al. (2019) is shown in Figure 4. Period is converted to
semi-major axis by assuming a 1 M⊙ star.
Figure 4: The differential mass and semi-major axis distribution of our
simulated planets from Fernandes et al. (2019). Integrating this across our
simulation range gives $N=0.08$ planets per star.
### 2.3 Planet Luminosity and Magnitude
As giant planets accrete gas, the accretion energy is emitted as radiation with a luminosity proportional to the accretion rate (Zhu, 2015), and planets are at their brightest during the period of runaway accretion (Lissauer et al., 2009). After a planet has formed, its luminosity declines over time at a rate dependent on the planet’s mass and entropy (Spiegel & Burrows, 2012).
The conditions of planet formation have an effect on the post-formation
entropy and luminosity. If the accretion rate is slow, the planet has more
time to radiate energy away, resulting in a lower internal entropy (cold-
starts). Planets with low internal entropy are typically smaller and cooler
and thus have lower post-formation luminosity than planets formed by hot-
starts. The luminosity as a function of age is shown in Figure 5 for planets
of varying mass and entropy of 9.5 $k_{B}$/baryon (blue curves) and 10.5
$k_{B}$/baryon (red curves) using hot and cold-start models from Spiegel &
Burrows (2012). This is calculated from applying the Stefan-Boltzmann Law to
the radius and temperature plots in Figure 5 of their paper and interpolating
between their initial entropies.
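Explicitly, the conversion applies the Stefan-Boltzmann law to the model radius $R(t)$ and effective temperature $T_{\rm eff}(t)$:

$L(t)=4\pi R(t)^{2}\,\sigma_{\rm SB}\,T_{\rm eff}(t)^{4},$

where $\sigma_{\rm SB}$ is the Stefan-Boltzmann constant.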
Figure 5: Evolution of planet luminosity for different planet masses and
initial entropies. The blue curves represent initial entropy of 9.5
$k_{B}$/baryon and red represents 10.5 $k_{B}$/baryon. This is calculated
using the temperature and radius curves from Figure 5 in Spiegel & Burrows
(2012), applying the Stefan-Boltzmann Law and interpolating between entropies
shown in their paper.
The absolute magnitude evolution in near-infrared bands is also calculated in
order to determine planet detectability. This is also calculated using models
from Spiegel & Burrows (2012), taking the magnitudes from their Figure 7 and
interpolating between initial entropies. Some example magnitude evolution
curves are shown in Figure 6 for the K (2.2$\mu$m) and L’ (3.77$\mu$m) bands
using the same initial entropies as Figure 5.
(a) Absolute Magnitude in K Band
(b) Absolute Magnitude in L’ Band
Figure 6: Evolution of planet magnitude in the K band (panel a) and in the L’
band (panel b) for different planet masses and initial entropies. The blue
curves represent initial entropy of 9.5 $k_{\rm B}$/baryon and red represents
10.5 $k_{\rm B}$/baryon. This is based on the curves in Figure 7 of Spiegel &
Burrows (2012) and, as before, we interpolate between entropies.
The cooling curves in Figures 5 and 6 demonstrate that hot-start planets are
initially brighter, but the luminosity declines faster. This implies that, at
old ages, hot and cold-start planets become less distinguishable as the
luminosities become approximately equal. However, at the young ages ($\sim$30
Myr) we consider in this study, there is a noticeable difference between the
two entropies for high-mass planets, which indicates we should be able to
constrain the formation models of massive ($>2\,\mathrm{M_{J}}$) planets.
## 3 Planet Detectability
### 3.1 Detection by Gaia Astrometry
The upcoming Gaia data releases are expected to have improved astrometry
measurements of stars and the potential to discover exoplanets through this
method. As a planet with $M_{p}\ll M_{\star}$ and its host star of mass
$M_{\star}$ orbit their common centre of mass, the astrometric semi-major
axis, also known as the astrometric signature, is given by:
$\alpha=\left(\frac{M_{p}}{M_{\star}}\right)\left(\frac{a_{p}}{d_{\star}}\right),$
(2)
where $a_{p}$ is the semi-major axis of the planet in au and $d_{\star}$ is
the distance to the star in pc. The planet detection capability depends on the
signal to noise ratio, given by
$S/N=\frac{\alpha}{\sigma_{\rm{fov}}},$ (3)
where $\sigma_{\rm{fov}}$ is the accuracy per field of view crossing. The
study in Perryman et al. (2014) concluded that a detection threshold of
$S/N>2$ provides a reasonable estimate of planet detection numbers. The study
in Ranalli et al. (2018) simulated the performance of Gaia and determined
there is a 50% chance of detecting planets in the 5 and 10 year missions with
$S/N>2.3$ and $1.7$ respectively. For this study, we assume a planet is
detectable if $S/N>2$. The value of $\sigma_{\rm{fov}}$ depends on the star’s
G magnitude, but is approximately constant at 34.2 $\mu$as for stars brighter
than magnitude 12. In this study, we use the values from Table 2 in Perryman
et al. (2014) and assume planets are detectable if
$\alpha>2\sigma_{\rm{fov}}$. Gaia’s nominal mission is for 5 years with a
possible extension to 10 years and this also places a constraint on detection
capabilities. It is assumed that planets with periods greater than $\sim$10
years will be poorly constrained.
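The detection criterion of Equations (2) and (3) can be sketched as follows. The example planet is illustrative; $9.546\times10^{-4}$ is Jupiter's mass in solar masses, and 34.2 $\mu$as is the bright-star accuracy per field-of-view crossing quoted above:

```python
M_JUP_IN_MSUN = 9.546e-4  # Jupiter mass in solar masses

def astrometric_signature_uas(m_p_mjup, m_star_msun, a_au, d_pc):
    """Astrometric signature (Equation 2) in microarcseconds.

    With a in au and d in pc, (M_p/M_star) * (a/d) is in arcseconds.
    """
    alpha_arcsec = (m_p_mjup * M_JUP_IN_MSUN / m_star_msun) * (a_au / d_pc)
    return alpha_arcsec * 1e6

def gaia_detectable(m_p_mjup, m_star_msun, a_au, d_pc, sigma_fov_uas=34.2):
    """Detection test: S/N = alpha / sigma_fov > 2 and period < 10 yr."""
    snr = astrometric_signature_uas(m_p_mjup, m_star_msun, a_au, d_pc) / sigma_fov_uas
    period_yr = (a_au ** 3 / m_star_msun) ** 0.5  # Kepler's third law
    return snr > 2.0 and period_yr < 10.0

# A 1 M_J planet at 3 au around a 1 Msun star at 40 pc:
# alpha ~ 71.6 uas, S/N ~ 2.1, period ~ 5.2 yr -> detectable
print(gaia_detectable(1.0, 1.0, 3.0, 40.0))
```

Moving the same planet to 80 pc halves the signature and drops it below the S/N threshold.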
### 3.2 Detection by Direct Imaging
Direct imaging has had a lower yield for planet detection than other
techniques due to the high contrast ratios between stars and planets. Unlike
transit and radial velocity, this method is more sensitive to planets at wide
separations, which are less abundant. Planets at wide separations are
typically found using a combination of angular differential imaging (ADI) and
reference star differential imaging (RDI) (Marois et al., 2006; Lafreniere et
al., 2007) to remove instrumental artefacts combined with coronagraphy to
block out the light of the central star.
Direct imaging methods are less sensitive at small separations, but this is
gradually improving with new analysis methods such as kernel phase
(Martinache, 2010) and observational methods such as interferometry (Lacour et
al., 2019). The detection capability of an instrument is limited by both the
resolution and the maximum signal to noise ratio. This provides a ‘contrast
limit’, which typically improves with distance from the star.
The experimental and theoretical contrast limits for current and future
instruments are shown in Figure 7, assuming a stellar apparent magnitude of 7
in the K and L’ bands, which is close to the average magnitude of our targets,
and an average distance of 50 pc. These limits were determined by observations
or, in the case of future instruments, simulated performance. The SPHERE limit
is derived from the recent SHINE survey (Langlois et al., 2021) and the MICADO
limits come from Perrot et al. (2018). The GRAVITY limit is based on the lower
limits of the curves in Abuter et al. (2019), assuming a 1 hour integration
time. The NaCo limits come from the study presented in Quanz et al. (2012) and
the METIS limits are based on Carlomagno et al. (2020). The NIRC2 limit comes
from our contrast limit through recent observations of Taurus (Wallace et al.,
2020), which are consistent with vortex coronagraph reference star
differential imaging limits (Xuan et al., 2018). The JWST limit is adapted
from Carter et al. (2021).
The VIKiNG limits for L’ come directly from the model assuming 1 hour
integrations by Martinache & Ireland (2018), assuming that the companions must
be resolved in the kernel null maps, and with a loss in contrast as the
planets approach the edge of the telescope PSF (at separations of
0.5-0.8$\lambda$/D). The assumed contrast of $4\times 10^{-5}$ at 5-$\sigma$
assumes either 80 nm RMS fringe tracking errors and 10% intensity
fluctuations, or 120 nm RMS fringe tracking errors and 2% intensity
fluctuations. Note that the Hi-5 instrument (Defrère et al., 2018) operating
in L’ is compatible with the VIKiNG architecture, but may require longer
integration times for the assumed contrast, depending on the finally adopted
architecture. The VIKiNG limits for K are based on a more optimistic, but
theoretically possible set of assumptions. For observations with the UTs,
fringe tracking up to 0.5" off-axis is assumed with an RMS fringe tracking error of 30 nm. The contrast limit shown is the magnitude difference between
the faintest detectable planet and its star in the K and L’ bands.
(a) K band
(b) L’ band
Figure 7: Assumed limits for current and future instruments as a function of
angular and physical separation (assuming an average distance of 50 pc.) See
text for detailed assumptions. Contrasts of planets of various masses are
shown for comparison, assuming an age of 50 Myr, initial entropy of 10
$k_{B}$/baryon and stellar magnitude of 7.
Figure 7 demonstrates that interferometry with VIKiNG and GRAVITY, as well as instruments on large telescopes such as MICADO and METIS, are the most sensitive at small ($<$10 au) separations. According to the distribution shown
in Figure 4, this is where planets are most likely to occur. The apparent
background-limited magnitude limits of these instruments are shown in Table 1,
which apply mostly to faint targets. We consider a planet to be detectable if it is both brighter than the apparent magnitude limit and above the contrast limit shown in Figure 7.
Table 1: Apparent magnitude limits of instruments considered in this paper. Note that for most targets, the contrast, and not these background limits, most influences detectability.
Instrument | K Mag. Limit | L’ Mag. Limit
---|---|---
SPHERE | 22.5 | -
GRAVITY | 19.5 | -
MICADO | 29.5 | -
VIKiNG (UTs) | 23.2 | 18.5
VIKiNG (ATs) | 19.9 | 15.3
NIRC2 | - | 17.94
NaCo | - | 18.55
METIS | - | 21.8
JWST | - | 23.8
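A planet is counted as detectable only if it passes both the contrast test and the background-magnitude test described above. A minimal sketch of that two-condition check (the toy contrast curve below is a hypothetical stand-in for the instrument curves in Figure 7):

```python
def detectable(planet_app_mag, star_app_mag, sep_arcsec,
               contrast_curve, mag_limit):
    """Two-condition imaging detectability test.

    contrast_curve(sep) returns the faintest detectable magnitude
    difference at a given separation; mag_limit is the instrument's
    background-limited apparent magnitude.
    """
    delta_mag = planet_app_mag - star_app_mag
    bright_enough = planet_app_mag < mag_limit
    above_contrast = delta_mag < contrast_curve(sep_arcsec)
    return bright_enough and above_contrast

# Hypothetical curve: 8 mag of contrast at 0.1", improving to 12 mag by 1".
toy_curve = lambda sep: min(8.0 + 4.0 * (sep - 0.1) / 0.9, 12.0)

# Delta-mag of 9 is inside the ~9.8 mag limit at 0.5", and 16 < 21.8:
print(detectable(16.0, 7.0, 0.5, toy_curve, 21.8))  # True
```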
## 4 Simulated Detectable Planets
For each star in our sample shown in Section 2.1, we simulate a set of planet
systems using the power-law distribution shown in Section 2.2. The normalisation constant $C$ is scaled linearly with stellar mass. This
simulation was run 5000 times per star, but only a small percentage of these
simulations produced planets. This is due to the integration of the planet
distribution shown in Figure 4. For a 1 M⊙ star, only 8% of simulations
produced a planet more massive than 0.3 MJ. For simplicity, we assume circular
orbits.
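The per-star draw can be sketched as follows. This is a simplified version that samples only occurrence and mass (the full simulation also draws semi-major axes from the broken power law); the mean of $\sim$0.07 planets per Sun-like star is the integrated rate from Section 2.2, and the function names are ours:

```python
import math
import random

ALPHA = -0.45             # mass power-law index from Fernandes et al. (2019)
N_PER_SUN = 0.07          # integrated planets per star for 1 Msun (Section 2.2)
M_MIN, M_MAX = 0.3, 13.0  # simulated mass range in Jupiter masses

def sample_mass(rng):
    """Inverse-CDF draw from dN/dlnM proportional to M^ALPHA on [M_MIN, M_MAX]."""
    lo, hi = M_MIN ** ALPHA, M_MAX ** ALPHA
    return (lo + rng.random() * (hi - lo)) ** (1.0 / ALPHA)

def poisson(mean, rng):
    """Small-mean Poisson draw by CDF inversion."""
    n, p = 0, math.exp(-mean)
    c, u = p, rng.random()
    while u > c:
        n += 1
        p *= mean / n
        c += p
    return n

def simulate_star(m_star_msun, rng):
    """Occurrence scales linearly with stellar mass; returns planet masses in MJ."""
    n = poisson(N_PER_SUN * m_star_msun, rng)
    return [sample_mass(rng) for _ in range(n)]

rng = random.Random(0)
n_total = sum(len(simulate_star(1.0, rng)) for _ in range(50000))
print(f"{n_total / 50000:.3f} planets per star")  # close to the expected 0.07
```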
### 4.1 Planets Detectable by Gaia
Using our simulated sample of planets around stars in nearby moving groups, we
calculated how many can be detected by Gaia applying the methods detailed in
Section 3.1. We assumed Gaia can detect planets with periods shorter than 10
years provided the astrometric signature $\alpha>2\sigma_{\rm{fov}}$ (Perryman
et al., 2014; Ranalli et al., 2018). The average number of planets per group
and the average number of planets detectable by Gaia are shown in Table 2, as
a result of running the simulation 100 times.
Table 2: Total number of simulated planets in each group and number detectable by Gaia. There are $\sim$0.06 planets per star and Gaia can detect approximately 30% of these.
Group Name | Age (Myr) | Average Distance (pc) | Number of Stars | Average Number of Planets | Average Number of Detectable Planets
---|---|---|---|---|---
AB Doradus | 149${}^{+51}_{-19}$ | 43.19 | 367 | 19.05 | 6.79
Argus | 45$\pm$5 | 48.33 | 630 | 35.10 | 11.32
$\beta$ Pictoris | 22$\pm$6 | 39.60 | 149 | 7.82 | 2.91
Carina | 45${}^{+11}_{-7}$ | 49.22 | 26 | 1.41 | 0.42
Carina-Near | $\sim$200 | 34.28 | 148 | 7.50 | 3.23
Columba | 42${}^{+6}_{-4}$ | 47.47 | 79 | 4.84 | 1.43
Hyades | 750$\pm$100 | 46.67 | 239 | 14.97 | 4.47
Tucana-Horologium | 45$\pm$4 | 49.34 | 94 | 5.31 | 1.58
TW Hydrae | 10$\pm$3 | 53.68 | 21 | 1.00 | 0.32
Ursa Major | $\sim$414 | 25.30 | 7 | 0.64 | 0.25
We have 1760 stars in our sample across all groups shown in Table 2 and, on
average, we simulate $\sim$ 98 giant planets ($M>0.3$ MJ) across all groups
($\sim$ 0.056 planets per star.) Our estimates suggest that Gaia is able to
detect $\sim$ 33 planets, which is approximately a third of our sample.
### 4.2 Planets detectable by both Gaia and Direct Imaging
Using planetary cooling curves shown in Figure 6 and the limits of current and
future instruments shown in Figure 7, we determine how many of the Gaia-detectable planets are also detectable by direct imaging. This requires an
estimate of the age and initial entropy of each planet. The planet age is
assumed to be equal to the age of the moving group, listed in Table 2, within observational uncertainties, and we assume a range of initial planet entropies from 8.5–11.5 $k_{B}$/baryon. The possible inclination distribution is taken
into account by calculating the averaged projected separation. This is simply
calculated by multiplying the simulated semi-major axis by 0.8 as shown in
Equation 7 of Fischer & Marcy (1992). Figure 8 shows the number of detectable
planets for different instruments in the K and L’ bands as a function of
initial entropy.
(a) K band (b) L’ band
Figure 8: Total number of planets detectable by both Gaia astrometry and high
contrast imaging across all moving groups.
Our results suggest that VIKiNG and METIS should be able to detect more than 4
Gaia-detectable planets regardless of initial entropy, while MICADO should be
able to detect hot-start planets, if a survey of nearby moving groups were
conducted. Note that VIKiNG with the ATs can detect more planets than the UTs
in the L’ band despite being less sensitive. This is simply due to the wider
range of separations detectable by interferometry with the ATs in the L’ band
as shown in Figure 7(b).
## 5 Constraining the Initial Entropy
As explained in Section 2.3, the initial entropy of a planet is related to its
formation conditions and has an effect on the brightness evolution. If the age
and mass of a planet is known to reasonable precision, it should be possible
to constrain the initial entropy of a directly imaged planet.
### 5.1 Dependence of Magnitude on Entropy
As shown in Figure 6, a planet’s magnitude evolution depends on its initial entropy,
but this dependence decreases with age. Figure 9 shows the absolute magnitude
as a function of initial entropy for a variety of planet masses and ages using
models from Spiegel & Burrows (2012).
(a) K band (b) L’ band
Figure 9: Absolute Magnitude as a function of initial entropy. Solid, dashed
and dotted curves are for planet ages of 20 Myr, 40 Myr and 80 Myr
respectively.
The curves shown in Figure 9 flatten as the planets age, but also at higher
entropy, particularly in the L’ band. This implies that, while planets with
higher initial entropy will be brighter and easier to detect, it could be
harder to constrain the initial entropy of these planets. In order to
determine how well different instruments can constrain entropy, we calculate
the entropy uncertainty.
### 5.2 Entropy Uncertainty
The entropy uncertainty was calculated for a set of simulated planets
detectable by Gaia as shown in Table 2. Each planet is assigned a mass and
semi-major axis from our distribution, but the initial entropy is unknown. The
likelihood of a particular entropy given our simulated data, $L(S|D)$, is
derived from Bayes Theorem,
$L(S|D)\propto P(D|S)P(S),$ (4)
where $P(D|S)$ is the probability of the data for a given entropy and $P(S)$
is the prior probability of that entropy. The probability as a function of
entropy is calculated for a range of entropies $S_{i}$ using a Gaussian
distribution in planet flux, given by
$P(D|S_{i})\propto e^{-\frac{(f-f_{i})^{2}}{2\sigma_{f}^{2}}},$ (5)
where $f$ is the planet’s flux given an input entropy $S$ and $f_{i}$ is the
flux for a given entropy $S_{i}$. The flux error, $\sigma_{f}$, is calculated
from the contrast limits of various instruments shown in Figure 7. The 5
$\sigma$ contrast limits are converted to planet flux limits and divided by 5
to obtain the flux error. For simplicity, we assume all values of entropy are
equally possible and use a flat distribution for $P(S)$.
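The likelihood computation of Equations (4)–(5) can be sketched as follows. The flux-entropy relation below is a hypothetical monotonic placeholder standing in for interpolation of the Spiegel & Burrows (2012) model grid, and the 5$\sigma$ flux limit is an illustrative value:

```python
import math

def flux_from_entropy(s):
    """Hypothetical stand-in for the model flux at entropy s (kB/baryon).

    In the real analysis this comes from interpolating the
    Spiegel & Burrows (2012) cooling models at the planet's mass and age.
    """
    return 1.0e-5 * math.exp(0.8 * (s - 10.0))

def entropy_likelihood(s_true, s_grid, flux_5sigma_limit):
    """Equation (5): Gaussian likelihood in flux over a grid of entropies.

    The flux error is the 5-sigma contrast limit converted to flux and
    divided by 5 to give a 1-sigma error, as described in the text.
    """
    sigma_f = flux_5sigma_limit / 5.0
    f_obs = flux_from_entropy(s_true)
    like = [math.exp(-(f_obs - flux_from_entropy(s)) ** 2 / (2 * sigma_f ** 2))
            for s in s_grid]
    total = sum(like)
    return [l / total for l in like]  # flat prior P(S), normalised

s_grid = [8.5 + 0.05 * i for i in range(61)]  # 8.5 to 11.5 kB/baryon
post = entropy_likelihood(10.0, s_grid, flux_5sigma_limit=2.0e-5)
s_peak = s_grid[post.index(max(post))]
print(f"peak at {s_peak:.2f} kB/baryon")
```

The width of the resulting curve is the entropy uncertainty plotted in Figure 11; a smaller flux error narrows the likelihood.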
As an example, we consider a star of apparent magnitude 7 in the K and L’
bands at a distance of 40 pc and age of 40 Myr. Assuming a true entropy of 10
$k_{B}$/baryon, the likelihood as a function of modelled entropy is shown in
Figure 10 using VIKiNG with the UTs in both K and L’ bands.
(a) K band
(b) L’ band
Figure 10: Likelihood of entropy for given data in K and L’ bands, with VIKiNG
using the UTs, assuming an input entropy of 10 $k_{B}$/baryon and a planet 80
mas from a star of apparent magnitude 7. The solid curves are planets at 10 au
and the dashed curves are planets at 20 au.
The likelihood curves in Figure 10 confirm that the initial entropy of a 2 MJ planet cannot be constrained, while the entropy of a 4 MJ planet can be broadly constrained in the L’ band, but not in the K band.
This method was applied to a random sample of Gaia detectable planets. Each
planet was assigned a set of initial entropies from 8.5–11.5 $k_{B}$/baryon
and the likelihood function was calculated for each of these. From this, we
calculated the entropy uncertainty as a function of input entropy. As shown in
Figure 8, only GRAVITY, MICADO, METIS and VIKiNG will be able to detect at
least 1 planet that is also detectable by Gaia. The majority of simulated
planets have entropies that cannot be constrained. As a preliminary result, we
only consider 1 planet that is detectable by Gaia and all of the instruments
mentioned above. The result for a 3.6 MJ planet 2.4 au from a 0.75 M⊙ star in
the $\beta$-Pic moving group is shown in Figure 11.
(a) K band (b) L’ band
Figure 11: Entropy Uncertainty as a function of input entropy. The horizontal
axis is a simulated entropy which translates to a theoretical brightness and
the vertical axis is the width of the entropy likelihood function as shown in
Figure 10. For VIKiNG with the UTs, the entropy uncertainty is below 0.5
$k_{\rm{B}}$/baryon for all reasonable values of initial entropy which is less
than the difference between hot and cold-start models. This implies VIKiNG
should be able to distinguish between the two models.
The curves in Figure 11 indicate that VIKiNG with the UTs will be able to
constrain the initial entropy within 0.5 $k_{\rm{B}}$/baryon for the majority
of input entropies. GRAVITY and METIS can also constrain the entropy within
this value for a ‘warm’-start planet (entropy of 9–10 $k_{B}$/baryon), which
indicates these instruments should be able to distinguish between hot and
cold-start models for this planet. We note that, unlike the other instruments,
VIKiNG is still only at a preliminary design study level at this point.
## 6 Conclusions
In this paper, we have examined the brightness evolution of giant planets and its dependence on initial entropy. High-entropy planets are brighter than low-entropy planets of similar mass, but this difference diminishes at old ages. If we observe planets at young ages, it should therefore be possible to constrain the initial entropy.
Using the expected uncertainty for Gaia astrometry from Perryman et al.
(2014), we determine that Gaia should be able to detect approximately 25% of
giant planets in nearby moving groups and calculate their mass. Combining this
with the estimated ages of these moving groups, we can use the detected flux
to determine the initial entropy. We performed this analysis over a simulated sample of
planets around existing stars in nearby moving groups, assuming a symmetric
planet distribution from Fernandes et al. (2019).
We used the measured and expected 5$\sigma$ contrast limits for current and
future instruments to estimate the expected flux error on the planets in our
simulated sample. We found that future instruments MICADO and METIS have the
best contrast levels at wide angles, while interferometers GRAVITY and VIKiNG
are best at small angles. However, we note that improvements to GRAVITY, known as GRAVITY+, are currently being implemented and, of the VIKiNG concepts, only the L’ concept has significant funding. Given the technological challenge of achieving the required 30 nm fringe tracking uncertainty for a VIKiNG K-band instrument, and that entropy is better constrained by VIKiNG in L’, this paper does not provide a reason to prioritise a high-performance nuller for the VLTI operating in the K filter. Overall, assuming Gaia can detect giant planets in nearby moving
groups, we find that these future instruments should also be able to detect
some planets, if we were to conduct a survey of a relatively small number of
Gaia-detected planets in nearby moving groups.
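To make the connection between a quoted contrast limit and a photometric uncertainty concrete, the sketch below converts a 5$\sigma$ contrast limit into a 1$\sigma$ flux error relative to the host star and into the corresponding magnitude difference. The linear noise scaling and the helper names are assumptions of this sketch, not quantities taken from the paper.

```python
import math

def flux_error_from_contrast(contrast_5sigma, star_flux=1.0):
    # 1-sigma flux uncertainty implied by a 5-sigma contrast limit,
    # assuming the noise scales linearly (a simplification).
    return star_flux * contrast_5sigma / 5.0

def delta_mag(contrast):
    # Magnitude difference corresponding to a flux contrast ratio.
    return -2.5 * math.log10(contrast)
```

For example, a 5$\sigma$ contrast of $10^{-4}$ corresponds to a magnitude difference of 10 mag and a 1$\sigma$ flux error of $2\times10^{-5}$ of the stellar flux.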
Using the instrumental flux error, combined with the estimated ages and masses
from Gaia, we can constrain the formation entropy of directly imaged planets.
We found that GRAVITY, METIS and VIKiNG should all be able to constrain the
formation entropy of a super-Jupiter to within 0.5 $k_{\rm{B}}$/baryon and from
this, distinguish between hot and cold-start formation models.
## Data Availability
The data underlying this article are available from the corresponding author
on reasonable request.
## Acknowledgements
We thank the anonymous referee for their useful comments, which greatly
improved this study and the organization of this paper. We thank Mark Krumholz
for initiating the discussions of planet entropy, which led to this work. This
research was supported by the Australian Government through the Australian
Research Council’s Discovery Projects funding scheme (DP190101477). C. F.
acknowledges funding provided by the Australian Research Council through
Future Fellowship FT180100495, and the Australia-Germany Joint Research
Cooperation Scheme (UA-DAAD).
## References
* Abuter et al. (2019) Abuter R., et al., 2019, Eso Messenger
* Bell et al. (2015) Bell C. P., Mamajek E. E., Naylor T., 2015, Monthly Notices of the Royal Astronomical Society, 454, 593
* Benisty et al. (2021) Benisty M., et al., 2021, The Astrophysical Journal Letters, 916, L2
* Berardo et al. (2017) Berardo D., Cumming A., Marleau G.-D., 2017, The Astrophysical Journal, 834, 149
* Bowler & Nielsen (2018) Bowler B. P., Nielsen E. L., 2018, Handbook of Exoplanets, pp 1–17
* Bowler et al. (2009) Bowler B. P., et al., 2009, The Astrophysical Journal, 709, 396
* Brandt et al. (2021) Brandt G. M., Brandt T. D., Dupuy T. J., Michalik D., Marleau G.-D., 2021, The Astrophysical Journal Letters, 915, L16
* Brown et al. (2018) Brown A., et al., 2018, Astronomy & astrophysics, 616, A1
* Carlomagno et al. (2020) Carlomagno B., et al., 2020, Journal of Astronomical Telescopes, Instruments, and Systems, 6, 035005
* Carter et al. (2021) Carter A. L., et al., 2021, Monthly Notices of the Royal Astronomical Society, 501, 1999
* Defrère et al. (2018) Defrère D., et al., 2018, Experimental Astronomy, 46, 475
* Fernandes et al. (2019) Fernandes R. B., Mulders G. D., Pascucci I., Mordasini C., Emsenhuber A., 2019, The Astrophysical Journal, 874, 81
* Fischer & Marcy (1992) Fischer D. A., Marcy G. W., 1992, The Astrophysical Journal, 396, 178
* Fulton et al. (2021) Fulton B. J., et al., 2021, arXiv preprint arXiv:2105.11584
* Gagné et al. (2018) Gagné J., et al., 2018, The Astrophysical Journal, 856, 23
* Johnson et al. (2007) Johnson J. A., Butler R. P., Marcy G. W., Fischer D. A., Vogt S. S., Wright J. T., Peek K. M., 2007, The Astrophysical Journal, 670, 833
* Johnson et al. (2010) Johnson J. A., Aller K. M., Howard A. W., Crepp J. R., 2010, Publications of the Astronomical Society of the Pacific, 122, 905
* Keppler et al. (2018) Keppler M., et al., 2018, Astronomy & Astrophysics, 617, A44
* Kraus & Ireland (2011) Kraus A. L., Ireland M. J., 2011, The Astrophysical Journal, 745, 5
* Lacour et al. (2019) Lacour S., et al., 2019, Astronomy & Astrophysics, 623, L11
* Lafreniere et al. (2007) Lafreniere D., Marois C., Doyon R., Nadeau D., Artigau É., 2007, The Astrophysical Journal, 660, 770
* Lagrange et al. (2009) Lagrange A.-M., et al., 2009, Astronomy & Astrophysics, 493, L21
* Langlois et al. (2021) Langlois M., et al., 2021, arXiv preprint arXiv:2103.03976
* Lissauer et al. (2009) Lissauer J. J., Hubickyj O., D’Angelo G., Bodenheimer P., 2009, Icarus, 199, 338
* López-Santiago et al. (2006) López-Santiago J., Montes D., Crespo-Chacón I., Fernández-Figueroa M., 2006, The Astrophysical Journal, 643, 1160
* Macintosh et al. (2015) Macintosh B., et al., 2015, Science, 350, 64
* Marleau & Cumming (2014) Marleau G.-D., Cumming A., 2014, Monthly Notices of the Royal Astronomical Society, 437, 1378
* Marley et al. (2007) Marley M. S., Fortney J. J., Hubickyj O., Bodenheimer P., Lissauer J. J., 2007, The Astrophysical Journal, 655, 541
* Marois et al. (2006) Marois C., Lafreniere D., Doyon R., Macintosh B., Nadeau D., 2006, The Astrophysical Journal, 641, 556
* Martinache (2010) Martinache F., 2010, The Astrophysical Journal, 724, 464
* Martinache & Ireland (2018) Martinache F., Ireland M. J., 2018, arXiv preprint arXiv:1802.06252
* Mordasini et al. (2009) Mordasini C., Alibert Y., Benz W., Naef D., 2009, Astronomy & Astrophysics, 501, 1161
* Mordasini et al. (2012) Mordasini C., Alibert Y., Klahr H., Henning T., 2012, A&A, 547, A111
* Mulders (2018) Mulders G. D., 2018, Planet Populations as a Function of Stellar Properties. Springer International Publishing, Cham, pp 1–26, doi:10.1007/978-3-319-30648-3_153-1
* Nielsen et al. (2019) Nielsen E. L., et al., 2019, The Astronomical Journal, 158, 13
* Perrot et al. (2018) Perrot C., Baudoz P., Boccaletti A., Rousset G., Huby E., Clénet Y., Durand S., Davies R., 2018, arXiv preprint arXiv:1804.01371
* Perryman et al. (2014) Perryman M., Hartman J., Bakos G. Á., Lindegren L., 2014, The Astrophysical Journal, 797, 14
* Quanz et al. (2012) Quanz S. P., Crepp J. R., Janson M., Avenhaus H., Meyer M. R., Hillenbrand L. A., 2012, The Astrophysical Journal, 754, 127
* Ranalli et al. (2018) Ranalli P., Hobbs D., Lindegren L., 2018, Astronomy & Astrophysics, 614, A30
* Rodriguez et al. (2013) Rodriguez D. R., Zuckerman B., Kastner J. H., Bessell M., Faherty J. K., Murphy S. J., 2013, The Astrophysical Journal, 774, 101
* Schneider et al. (2019) Schneider A. C., Shkolnik E. L., Allers K. N., Kraus A. L., Liu M. C., Weinberger A. J., Flagg L., 2019, The Astronomical Journal, 157, 234
* Spiegel & Burrows (2012) Spiegel D. S., Burrows A., 2012, The Astrophysical Journal, 745, 174
* Torres et al. (2008) Torres C. A., Quast G. R., Melo C. H., Sterzik M. F., 2008, arXiv preprint arXiv:0808.3362
* Vigan et al. (2020) Vigan A., et al., 2020, arXiv preprint arXiv:2007.06573
* Wallace & Ireland (2019) Wallace A., Ireland M., 2019, Monthly Notices of the Royal Astronomical Society, 490, 502
* Wallace et al. (2020) Wallace A., et al., 2020, Monthly Notices of the Royal Astronomical Society, 498, 1382
* Xuan et al. (2018) Xuan W. J., et al., 2018, AJ, 156, 156
* Zhu (2015) Zhu Z., 2015, The Astrophysical Journal, 799, 16
* Zuckerman (2018) Zuckerman B., 2018, The Astrophysical Journal, 870, 27
* Zuckerman et al. (2011) Zuckerman B., Rhee J. H., Song I., Bessell M., 2011, The Astrophysical Journal, 732, 61
# Boosting segmentation performance across datasets using histogram
specification with application to pelvic bone segmentation
###### Abstract
Accurate segmentation of the pelvic CTs is crucial for the clinical diagnosis
of pelvic bone diseases and for planning patient-specific hip surgeries. With
the emergence and advancements of deep learning for digital healthcare,
several methodologies have been proposed for such segmentation tasks. But in a
low data scenario, the lack of abundant data needed to train a Deep Neural
Network is a significant bottleneck. In this work, we propose a methodology
based on modulation of image tonal distributions and deep learning to boost
the performance of networks trained on limited data. The strategy involves
pre-processing of test data through histogram specification. This simple yet
effective approach can be viewed as a style transfer methodology. The
segmentation task uses a U-Net configuration with an EfficientNet-B0 backbone,
optimized using an augmented BCE-IoU loss function. This configuration is
validated on a total of 284 images taken from two publicly available CT
datasets, TCIA (a cancer imaging archive) and the Visible Human Project. The
average Dice coefficient and Intersection over Union are 95.7% and 91.9%,
respectively, giving strong evidence for the effectiveness of the approach,
which is highly competitive with state-of-the-art methodologies.
Index Terms— Pelvic bone segmentation, data pre-processing, histogram
specification, U-Net, fine-tuning.
## 1 Introduction
In recent years, due to the increase in the incidence of pelvic injuries from
traffic-related accidents [1], pelvic bone diseases within the aging
population, and sufficient access to computed tomography (CT) imaging,
automated pelvic bone segmentation in CT has gained considerable prominence.
The segmentation results assist physicians in the early detection of pelvic
injury and help expedite surgical planning and reduce the complications caused
by pelvic fractures [2]. In CT data, structures like the bone marrow and bone
surface appear as dark and bright regions due to their low and high densities
compared to the surrounding tissues. However, given the variations in image
quality between different CT datasets, distinguishing bone structures from the
image background becomes cumbersome and leads to erroneous segmentation
outputs. These issues indicate the need for a simple yet effective
methodology for the accurate segmentation of pelvic bones from CT data of
varying quality.
Contribution of this paper: The key novelties of this work are as follows:
1. introduction of an encoder-decoder network, trained on limited data, for high accuracy segmentation of pelvic bones
2. boosting model performance on unseen data by employing histogram specification
The exact details of the approach are deferred until Sec. 3.3. Fig. 1
illustrates the results of the proposed method.
[Figure 1 panels: TCIA, (a1); VHBD, (b1)]
Fig. 1: (a1) and (b1) – illustrate the segmentation outputs, for input images
from TCIA [3] and VHBD [4], respectively.
## 2 Prior Art
Fig. 2: Workflow of U-Net architecture with pre-trained backbone, detailing
pelvic bone segmentation.
Recent literature has seen many applications for the segmentation of the
pelvis from CT imaging data. Traditional methods such as thresholding and
region growth [5], deformable surface model [6], and others, have been
commonly used to perform bone segmentation. However, these approaches often
suffer from low accuracy due to varying image properties such as intensity,
contrast, and the inherent variations between the texture of the bone
structures (bone marrow and surface boundary) and the surrounding tissues. To
overcome these challenges, supervised methods such as statistical shape models
(SSM) and atlas-based deep learning (DL) methods have made significant
contributions to segmentation tasks. Wang et al. [7, 8] suggested using a
multi-atlas segmentation with joint label fusion for detecting regions of
interest from CT images. Yokota et al. [9] showcased a combination of
hierarchical and conditional SSMs for the automated segmentation of diseased
hips from CT data. Chu et al. [10] presented a multi-atlas based method for
accurately segmenting femur and pelvis. Zeng et al. [11] proposed a supervised
3D U-Net with multi-level supervision for segmenting femur in 3D MRI. Chen et
al. [12] showcased a 3D feature enhanced network for quickly segmenting femurs
from CT data. Chang et al. [13] proposed patch-based refinement on top of a
conditional random field model for fine segmentation of healthy and diseased
hips. Liu et al. [14] used 3D U-Nets in two stages (trained on approximately
270K images) with a signed distance function for producing bone fragments from
image stacks. In the following section, we discuss a new technique addressing
accurate segmentation of the pelvis from CT images of varying qualities.
## 3 Proposed Methodology
The efficacy of using Encoder-Decoder architectures for designing high
accuracy segmentation models for biomedical applications has been showcased in
recent literature [15, 11, 14]. We employ a similar architecture, with various
encoder modules for feature extraction and a decoder module for semantic
segmentation. The details of the encoder and decoder modules are explained in
the following.
### 3.1 Encoder Module
In simple terms, an encoder takes the input image and generates a high-
dimensional feature vector aggregated over multiple levels. We deploy a choice
of the following well-known architectures as the encoder module:
#### 3.1.1 ResNet
Residual networks (ResNet) introduced residual mappings to solve the vanishing
gradient problem in deep neural networks [16]. ResNets are easy to optimize
and gain accuracy even with deeper models.
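The residual mapping can be summarized as $y = F(x) + x$: the block learns only the residual $F$, while the identity skip path lets gradients flow unimpeded. A minimal sketch of the idea, with illustrative function names that are not from [16]:

```python
def residual_block(x, transform):
    # y = F(x) + x: add the learned transform's output to the identity path.
    fx = transform(x)
    return [xi + fi for xi, fi in zip(x, fx)]

# If the transform collapses to zero, the block reduces to the identity,
# which is what makes very deep stacks comparatively easy to optimize.
```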
#### 3.1.2 Inception V3
Inception Networks are computationally efficient architectures, both in terms
of the model parameters and their memory usage. Adapting the Inception network
for different applications while ensuring that changes do not impede its
computational efficiency is difficult. Inception V3 introduced various
strategies for optimizing network with ease of model adaptation capabilities
[17].
#### 3.1.3 EfficientNet
Conventional methods make use of scaling to increase the accuracy of the
models. The models are scaled by increasing the depth/width of the network or
using higher resolution input images. EfficientNet results from a novel
scaling method that uses a compound coefficient to uniformly scale the network
across all dimensions [18].
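Compound scaling ties depth, width and resolution to a single coefficient $\phi$ via depth $=\alpha^{\phi}$, width $=\beta^{\phi}$, resolution $=\gamma^{\phi}$, with the base coefficients chosen so that $\alpha\cdot\beta^{2}\cdot\gamma^{2}\approx 2$, meaning each increment of $\phi$ roughly doubles the FLOPs. A sketch using the coefficients reported in [18]:

```python
def compound_scale(phi, alpha=1.2, beta=1.1, gamma=1.15):
    # Multipliers for network depth, channel width and input resolution,
    # using the base coefficients found by grid search in the EfficientNet paper.
    return alpha ** phi, beta ** phi, gamma ** phi
```

With $\phi = 0$ the multipliers are all 1 (the EfficientNet-B0 baseline used in this work); larger $\phi$ yields the B1-B7 variants.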
### 3.2 Decoder Module
The decoder module is responsible for generating a semantic segmentation mask
using the aggregated high-dimensional features extracted by the encoder
module. We make use of the popular U-Net model specially designed for medical
imaging as the decoding module [15].
### 3.3 Histogram Specification
Histogram specification, or histogram matching, is a traditional image
processing technique [19] that matches the input image’s histogram to a
reference histogram. Histogram specification involves computing the cumulative
distribution function (CDF) of histograms from both the target and the
reference, following which a transformation function is obtained by mapping
each gray level $[0,255]$ from the target’s CDF (input) to the gray level in
the reference CDF. In this work, we construct the reference histogram by
averaging over histograms from every image in the training set. Using this
technique as a pre-processing step for the test data serves an important
purpose, as the distribution of the test data is converted to a similar form
seen by the network during training.
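The CDF-matching step described above can be sketched in plain Python (the function names are illustrative; a production pipeline would typically use numpy or scikit-image). For each source gray level, we look up the reference level whose CDF value is closest:

```python
def cdf_from_hist(hist):
    # Cumulative distribution function of a histogram, normalized to [0, 1].
    total = sum(hist)
    acc, cdf = 0, []
    for count in hist:
        acc += count
        cdf.append(acc / total)
    return cdf

def match_histogram(pixels, reference_hist, levels=256):
    # Remap each gray level of `pixels` so its CDF matches the reference CDF.
    src_hist = [0] * levels
    for p in pixels:
        src_hist[p] += 1
    src_cdf = cdf_from_hist(src_hist)
    ref_cdf = cdf_from_hist(reference_hist)
    # For each source level, pick the reference level with the closest CDF value.
    lut = [min(range(levels), key=lambda r: abs(ref_cdf[r] - src_cdf[s]))
           for s in range(levels)]
    return [lut[p] for p in pixels]
```

In the paper's setting, `reference_hist` would be the histogram averaged over all training images, and the lookup would be applied to every test image before inference.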
## 4 Experimental Validation
Table 1: An overview of the datasets used in this work.

| Dataset | Resolution | # Images | Train-set (%) | Val-set (%) | Test-set (%) |
|---|---|---|---|---|---|
| TCIA [3] | 512 x 512 | 582 | 407 (70%) | 58 (10%) | 117 (20%) |
| VHBD [4] | 512 x 512 | 167 | – | – | 167 (100%) |
| VHBD-2 [4] | 512 x 512 | 167 | 116 (70%) | 17 (10%) | 34 (20%) |
### 4.1 Datasets
The input data preparation and label annotation were done using tools from the
ImageJ software. A summary of TCIA–cancer imaging archive [3] and
VHBD–Visible human project [4] datasets, image resolution, the number of
images used in this study, and the respective data-splits for training-
validation-testing, are shown in Table 1.
### 4.2 Performance Measures
To quantify the quality of segmentation, we compute standard performance
measures for segmentation tasks commonly used in literature, specifically, the
mean Dice coefficient (mDice) and mean Intersection over Union (mIoU) [20,
21]. For a given segmentation output ($A$) and the ground truth ($B$), the
Dice coefficient is given by
$\text{Dice}=\frac{2\,|A\;\cap\;B|}{|A|\;+\;|B|}$, which can be interpreted as
the harmonic mean of precision and recall, and
$\text{IoU}=\frac{|A\;\cap\;B|}{|A\;\cup\;B|}$ (also known as the Jaccard
index) is commonly used for comparing the similarity between sets $A$ and
$B$, while penalizing their diversity.
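For binary masks, both measures reduce to simple set arithmetic over pixel labels. A minimal sketch, with flat 0/1 lists standing in for the mask arrays:

```python
def dice_iou(a, b):
    # Dice and Jaccard (IoU) overlap for two binary masks given as flat 0/1 lists.
    inter = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    size_a, size_b = sum(a), sum(b)
    dice = 2 * inter / (size_a + size_b)
    iou = inter / (size_a + size_b - inter)
    return dice, iou
```

The two measures are monotonically related via $\text{IoU} = \text{Dice}/(2 - \text{Dice})$, so they rank segmentations identically, although IoU penalizes partial overlaps more heavily.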
### 4.3 Network Training
The implementations used were based on the documentation from [22]. The models
used [16, 17, 18] were pre-trained on the Imagenet [23] dataset to improve the
generalization capability on unseen data and achieve faster convergence. For
the base-model, we use ResNet-34 [16] as the encoder and a U-Net decoder. We
initialize the base-model with random weights (rnwt) and train without any
data-augmentation (noaug) on images from [3], using an Nvidia RTX 2070 GPU,
and an ADAM optimizer with a learning rate of 0.001, momentum of 0.9 and a
weight decay of 0.0001, for 40 epochs. We chose a 70% : 10% : 20% split of the
data (shown in the first row of Table 1), where the 70% was utilized for
training and the 10% of the data was utilized for validation. The remaining
20% for testing was completely unseen during training. In each epoch, about 50
batches of eight randomly sampled training images were used. The model was
then validated on the 10% split to evaluate the
performance based on binary-cross-entropy loss (bce) and record the
corresponding weights. After training, the weights that gave the best
performance on the validation set were selected for the base-model, which was
then evaluated on the unseen test-sets, i.e., 20% of [3] and 100% of [4],
respectively, whose performance is showcased in the first row of Table 2.
Extending beyond the base-model, data augmentation (aug) was performed using
horizontal and vertical flips, affine transforms, image intensity modulation
and blurring, for increasing training data size and to help reduce over-
fitting. In addition, we try to find the best overall segmentation performance
and generalization capability to completely unseen data, through further
extension of the base-model with different configurations, using the
following:
* encoder modules using ResNet-34 [16], Inception V3 [17] and EfficientNet-B0 [18], initialized with Imagenet weights (imwt) for transfer learning
* re-configuration of input data, or not, to the pre-trained model’s format and its pre-processing functions (ppr), for extraction of better features
* loss functions like Dice loss (dice), IoU loss (iou) and combined bce-iou loss, in place of bce loss, for propagating strong gradients for better optimization and learning
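The combined loss can be sketched as the sum of a pixel-wise binary cross entropy term and a soft-IoU term. The equal weighting and the epsilon are assumptions of this sketch; the paper does not specify the exact combination:

```python
import math

def bce_iou_loss(pred, target, eps=1e-7):
    # pred: soft foreground probabilities in (0, 1); target: 0/1 labels.
    n = len(pred)
    bce = -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
               for p, t in zip(pred, target)) / n
    # Soft IoU on probabilities, differentiable w.r.t. pred.
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(p + t - p * t for p, t in zip(pred, target))
    soft_iou = inter / (union + eps)
    return bce + (1.0 - soft_iou)  # equal weighting assumed
```

The BCE term gives dense per-pixel gradients while the IoU term directly targets the overlap metric being reported, which is the usual motivation for combining them.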
[Figure 3 panels: Input, (a), (b)]
Fig. 3: Pelvic bone segmentation on TCIA data using: (a) Base U-Net with
random weight initialization for ResNet-34 encoder, with no data-augmentation,
optimized using BCE loss (least performing); and (b) fine-tuned U-Net with
Imagenet weight initialization for EfficientNet-B0 encoder, with data-
augmentation and input reconfiguration, optimized using combined BCE-IoU loss
(best performing), are overlaid onto the binary ground-truth; yellow - TP;
black - TN; green - FP; red - FN.
[Figure 4 panels: TCIA, (a1), (a2); VHBD (target), (b1), (b2); H-VHBD, (c1), (c2)]
Fig. 4: Performance in segmentation with histogram specification: (a1-c1) show
the respective histograms of the input images; (a2-c2) show the pelvic bone
segmentations overlaid on the ground-truth; and (b2-c2) decisively show the
improvement in segmentation from matching target’s histogram to the reference.
yellow - TP; black - TN; green - FP; red - FN.

Table 2: Performance comparison of different U-Net configurations for pelvic
bone segmentation on unseen data from TCIA, VHBD, and H-VHBD, i.e., VHBD after
histogram specification.

| U-Net Configuration | TCIA mIoU | TCIA mDice | VHBD mIoU | VHBD mDice | H-VHBD mIoU | H-VHBD mDice |
|---|---|---|---|---|---|---|
| Res34-rnwt-noaug-bce | 0.788 $\pm$ 0.033 | 0.867 $\pm$ 0.025 | 0.131 $\pm$ 0.033 | 0.186 $\pm$ 0.037 | 0.774 $\pm$ 0.021 | 0.865 $\pm$ 0.015 |
| Res34-imwt-aug-bce | 0.919 $\pm$ 0.007 | 0.957 $\pm$ 0.004 | 0.746 $\pm$ 0.028 | 0.840 $\pm$ 0.021 | 0.900 $\pm$ 0.004 | 0.947 $\pm$ 0.002 |
| Res34-imwt-aug-dice | 0.925 $\pm$ 0.007 | 0.960 $\pm$ 0.004 | 0.523 $\pm$ 0.039 | 0.645 $\pm$ 0.038 | 0.907 $\pm$ 0.005 | 0.951 $\pm$ 0.002 |
| Res34-imwt-aug-bce-iou | 0.922 $\pm$ 0.007 | 0.959 $\pm$ 0.004 | 0.790 $\pm$ 0.018 | 0.877 $\pm$ 0.012 | 0.906 $\pm$ 0.004 | 0.950 $\pm$ 0.002 |
| IncepV3-imwt-aug-bce | 0.876 $\pm$ 0.012 | 0.932 $\pm$ 0.007 | 0.663 $\pm$ 0.026 | 0.784 $\pm$ 0.019 | 0.906 $\pm$ 0.006 | 0.950 $\pm$ 0.003 |
| IncepV3-ppr-imwt-aug-bce | 0.921 $\pm$ 0.011 | 0.957 $\pm$ 0.007 | 0.808 $\pm$ 0.014 | 0.890 $\pm$ 0.009 | 0.913 $\pm$ 0.005 | 0.954 $\pm$ 0.002 |
| EffiB0-imwt-aug-bce | 0.923 $\pm$ 0.007 | 0.959 $\pm$ 0.004 | 0.835 $\pm$ 0.015 | 0.906 $\pm$ 0.010 | 0.901 $\pm$ 0.006 | 0.947 $\pm$ 0.003 |
| EffiB0-ppr-imwt-aug-bce-iou | 0.924 $\pm$ 0.008 | 0.960 $\pm$ 0.004 | 0.836 $\pm$ 0.011 | 0.909 $\pm$ 0.006 | 0.914 $\pm$ 0.005 | 0.955 $\pm$ 0.002 |
| ABLATION STUDY (♣) | 0.913 $\pm$ 0.007 | 0.954 $\pm$ 0.004 | 0.679 $\pm$ 0.047 | 0.801 $\pm$ 0.032 | 0.873 $\pm$ 0.013 | 0.931 $\pm$ 0.007 |
* Encoder module \- Res34, IncepV3, EffiB0 are ResNet-34, Inception Net-V3, EfficientNet-B0, respectively.
* Encoder Weights \- rnwt and imwt are random weights and Imagenet weights, respectively.
* Augmentation \- aug and noaug mean training with and without data-augmentation, respectively.
* Loss \- bce, dice, iou are the Binary Cross Entropy loss, Dice loss and IoU loss, respectively.
* ppr \- configure input to the pre-trained backbone’s format.
* Grey background \- indicates improvement due to histogram-specification based pre-processing.
### 4.4 Results
The detailed comparisons of the different U-Net configurations’ segmentation
performance on test-sets with 95% confidence intervals are shown in Table 2.
The segmentation outputs from the least-performing (base-model) and best-
performing (fine-tuned U-Net with Imagenet weight initialization for
EfficientNet-B0 encoder [18], with data-augmentation and input re-
configuration, optimized using combined BCE-IoU loss) DL models are showcased
in Fig. 3 (a) &(b). The predicted outputs are overlaid onto the ground-truth
and color-coded (yellow - TP; black - TN; green - FP; red - FN) for
visualizing the quality of segmentation. The results shown in Fig. 4(b2) &(c2)
illustrate the desired effect on segmentation due to histogram specification.
The reduction in the number of pixels labeled as FPs & FNs, and improvement in
number of TPs from the overlays decisively show the significance of pre-
processing test-data, which clearly boosts the model’s segmentation
performance. Furthermore, the comparative results tabulated in the last two
columns of Table 2 give strong evidence for the success of the proposed
methodology on all the specified model configurations.
On analyzing the data shown in Table 3, the proposed methodology’s overall
performance on the test-sets surpassed several state-of-the-art techniques
that were trained on similarly sized datasets, with the exception of Liu et
al. [14], who performed training on approximately 270,000 images. Since data
drives any model, the proposed methodology (trained on only 407 images) leaves
room for further improvement in segmentation given the availability of larger
datasets.
Table 3: Overall performance comparison for pelvic bone segmentation with
state-of-the-art techniques.
| Methodology | Dataset (# Images) | mIoU | mDice |
|---|---|---|---|
| Yokota et al. [9] | Private (100) | — | 0.928 |
| Chu et al. [10] | Private (318) | — | 0.941 |
| Chang et al. [13] | Private ($\sim$3420) | — | 0.949 |
| Liu et al. [14] | DS$\ddagger$ ($\sim$63K) | — | 0.984 |
| Proposed method | TCIA, VHBD (284) | 0.919 | 0.957 |

* (DS$\ddagger$) KITS19, CERVIX, ABDOMEN, MSD T10, COLONOG, CLINIC; Train:Test $\approx$ 270K:63K; K = $10^{3}$
### 4.5 Ablation Study
Images from [3, 4], with the data splits shown in rows 1 and 3 of Table 1, are
used for training. The best-model was trained on the joint data whose test-
data performance is shown in Table 2 (♣). The results showed that training the
model on joint data degrades the performance on both datasets. The data
imbalance and the varying image tonal distributions play a significant role in
influencing the segmentation performance. By using the proposed methodology,
the model overcomes data imbalance and generalizes well to unseen datasets,
which boosts its overall segmentation performance.
## 5 Conclusion
To sum up, in this work, we presented a novel methodology for the automated
segmentation of pelvic bones from axial CT images. We addressed the unmet need
for superior pelvic bone segmentation methodology for images with varying
properties by using histogram specification. This simple yet powerful approach
of pre-processing the test-data improved segmentation performance by a
significant margin, with the quantitative results confirming its validity.
Through our approach, the encoder-decoder configuration overcame a significant
hurdle of varying intensity distributions in CT images, which led to superior
segmentation quality. Moreover, after validating the results on publicly
available TCIA and VHBD datasets, the proposed methodology has been shown to
be highly competitive with respect to existing state-of-the-art techniques.
Through this study, we saw that, although deep learning has pushed the limits
for image processing applications, traditional image processing techniques are
not necessarily obsolete and that combining the two approaches can lead to
superior performance in segmentation.
## References
* [1] Rebecca B Naumann, Ann M Dellinger, Eduard Zaloshnja, Bruce A Lawrence, and Ted R Miller, “Incidence and total lifetime costs of motor vehicle–related fatal and nonfatal injury by road user type, united states, 2005,” Traffic injury prevention, vol. 11, no. 4, pp. 353–360, 2010.
* [2] Hui Yu, Haijun Wang, Yao Shi, Ke Xu, Xuyao Yu, and Yuzhen Cao, “The segmentation of bones in pelvic ct images based on extraction of key frames,” BMC medical imaging, vol. 18, no. 1, pp. 18, 2018.
* [3] Kenneth Clark, Bruce Vendt, Kirk Smith, John Freymann, Justin Kirby, Paul Koppel, Stephen Moore, Stanley Phillips, David Maffitt, Michael Pringle, et al., “The cancer imaging archive (tcia): maintaining and operating a public information repository,” Journal of digital imaging, vol. 26, no. 6, pp. 1045–1057, 2013\.
* [4] M. J. Ackerman, “The visible human project,” Proceedings of the IEEE, vol. 86, no. 3, pp. 504–511, 1998.
* [5] Phan TH Truc, Sungyoung Lee, and Tae-Seong Kim, “A density distance augmented chan-vese active contour for ct bone segmentation,” in 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 2008, pp. 482–485.
* [6] Dagmar Kainmueller, Hans Lamecker, Stefan Zachow, and Hans-Christian Hege, “Coupling deformable models for multi-object segmentation,” in International Symposium on Biomedical Simulation. Springer, 2008, pp. 69–78.
* [7] Hongzhi Wang, Jung W Suh, Sandhitsu R Das, John B Pluta, Caryne Craige, and Paul A Yushkevich, “Multi-atlas segmentation with joint label fusion,” IEEE transactions on pattern analysis and machine intelligence, vol. 35, no. 3, pp. 611–623, 2012.
* [8] Hongzhi Wang, Mehdi Moradi, Yaniv Gur, Prasanth Prasanna, and Tanveer Syeda-Mahmood, “A multi-atlas approach to region of interest detection for medical image classification,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2017, pp. 168–176.
* [9] Futoshi Yokota, Toshiyuki Okada, Masaki Takao, Nobuhiko Sugano, Yukio Tada, Noriyuki Tomiyama, and Yoshinobu Sato, “Automated ct segmentation of diseased hip using hierarchical and conditional statistical shape models,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2013, pp. 190–197.
* [10] Chengwen Chu, Junjie Bai, Xiaodong Wu, and Guoyan Zheng, “Mascg: Multi-atlas segmentation constrained graph method for accurate segmentation of hip ct images,” Medical image analysis, vol. 26, no. 1, pp. 173–184, 2015.
* [11] Guodong Zeng, Xin Yang, Jing Li, Lequan Yu, Pheng-Ann Heng, and Guoyan Zheng, “3d u-net with multi-level deep supervision: fully automatic segmentation of proximal femur in 3d mr images,” in International workshop on machine learning in medical imaging. Springer, 2017, pp. 274–282.
* [12] Fang Chen, Jia Liu, Zhe Zhao, Mingyu Zhu, and Hongen Liao, “Three-dimensional feature-enhanced network for automatic femur segmentation,” IEEE journal of biomedical and health informatics, vol. 23, no. 1, pp. 243–252, 2017.
A. De Santis, T. Giovannelli, S. Lucidi, M. Roma
Dipartimento di Ingegneria Informatica, Automatica e Gestionale “A. Ruberti”, SAPIENZA Università di Roma, via Ariosto, 25 – 00185 Roma, Italy
Email: <EMAIL_ADDRESS>

M. Messedaglia
ACTOR, Start up of SAPIENZA Università di Roma, via Nizza 45, 00198 Roma, Italy
Email: <EMAIL_ADDRESS>
# Determining the optimal piecewise constant approximation for the Nonhomogeneous Poisson Process rate of Emergency Department patient arrivals

Alberto De Santis (https://orcid.org/0000-0001-5175-4951), Tommaso Giovannelli (https://orcid.org/0000-0002-1436-5348), Stefano Lucidi (https://orcid.org/0000-0003-4356-7958), Mauro Messedaglia, Massimo Roma (https://orcid.org/0000-0002-9858-3616)
###### Abstract
Modeling the arrival process to an Emergency Department (ED) is the first step
of all studies dealing with the patient flow within the ED. Many of them focus
on the increasing phenomenon of ED overcrowding, which is afflicting hospitals
all over the world. Since Discrete Event Simulation models are often adopted
with the aim to assess solutions for reducing the impact of this problem,
proper nonstationary processes are taken into account to reproduce time-
dependent arrivals. Accordingly, an accurate estimation of the unknown arrival
rate is required to guarantee reliability of results.
In this work, an integer nonlinear black-box optimization problem is solved to determine the best piecewise constant approximation of the time-varying arrival rate function, by finding the optimal partition of the 24 hours into a suitable number of non equally spaced intervals. The black-box constraints of the optimization problem force the feasible solutions to satisfy proper statistical hypotheses; these ensure the validity of the nonhomogeneous Poisson assumption about the arrival process, commonly adopted in the literature, and prevent mixing overdispersed data during model estimation. The cost function includes a fit error term for the solution accuracy and a penalty term to select an adequate degree of regularity of the optimal solution. To show the effectiveness of this methodology, real data from one of the largest Italian hospital EDs are used.
###### Keywords:
Emergency Department · Arrival process · Nonhomogeneous Poisson Process · Black-Box Optimization
## 1 Introduction
Statistical modelling for describing and predicting patient arrivals to Emergency Departments (EDs) represents a basic tool of every study concerning ED patient load and crowding. Indeed, all the approaches adopted to this aim require an accurate model of the patient arrival process. Of course, such a process plays a key role in tackling the widespread phenomenon of overcrowding which afflicts EDs all over the world Ahalt et al. (2018); Bernstein et al. (2003); Daldoul et al. (2018); Hoot and Aronsky (2008); Hoot et al. (2007); J Reeder et al. (2003); Vanbrabant et al. (2020); Wang et al. (2015); Weiss et al. (2004, 2006). The two factors with the most significant effect on overcrowding are one external and one internal: the first concerns the patient arrival process; the second regards the patient flow within the ED. Therefore, both aspects must be accurately considered for a reliable study of ED operation.
Several modelling approaches for analyzing ED patient flow have been proposed in the literature (see Wiler et al. (2011) for a survey). The main quantitative methods used are based on statistical analysis (time–series, regression) or on general analytic formulas (queuing theory). Simulation modelling (both Discrete Event and Agent Based Simulation) is currently one of the most widely used and flexible tools for studying the patient flow through an ED. In fact, it makes it possible to perform an effective scenario analysis, aiming at determining bottlenecks (if any) and testing different ED settings. We refer to Salmon et al. (2018) for a recent survey on simulation modelling for ED operation. A step forward is the Simulation–Based Optimization methodology, which combines a simulation model with a black-box optimization algorithm, aiming at determining an optimal ED setting, based on a suitable objective function (representing some KPIs) to be maximized or minimized Ahmed and Alkhamis (2009); Guo et al. (2016, 2017).
Modeling methodologies are generally based on assumptions that, in some cases,
may represent serious limitations when applied to complex real–world cases,
such as ED operation. In particular, when dealing with ED patient arrival
stochastic modelling, due to the nonstationarity of the process, a standard
assumption is the use of Nonhomogeneous Poisson Process (NHPP) Ahalt et al.
(2018); Ahmed and Alkhamis (2009); Guo et al. (2017); Kim and Whitt (2014a);
Kuo et al. (2016); Zeinali et al. (2015). We recall that a counting process
$X(t)$ is an NHPP if 1) arrivals occur one at a time (no batches); 2) the process has independent increments; 3) the increments have a Poisson distribution, i.e. for each interval $[t_{1},t_{2}]$,
$P\left(X(t_{2})-X(t_{1})=n\right)=e^{-m(t_{1},t_{2})}\frac{[m(t_{1},t_{2})]^{n}}{n!},$
where $m(t_{1},t_{2})=\int_{t_{1}}^{t_{2}}\lambda(s)\,ds$ and $\lambda(t)$ is the arrival rate. Unlike the Poisson process (where $\lambda(t)=\lambda$), the NHPP has nonstationary increments, and this makes it suitable for modelling the ED arrival process, which is usually strongly time–varying. Of
course, appropriate statistical tests must be applied to available data to
check if NHPP fits. This is usually performed by assuming that NHPP has a rate
which can be considered approximately piecewise constant. Hence,
Kolmogorov–Smirnov (KS) statistical test can be applied in separate and
equally spaced intervals and usually the classical Conditional–Uniform (CU)
property of the Poisson process is exploited Brown et al. (2005); Kim and
Whitt (2014a, b). Unlike standard KS test, in the CU KS test the data are
transformed before applying the test. More precisely, by CU property, the
piecewise constant NHPP is transformed into a sequence of i.i.d. random
variables uniformly distributed on $[0,1]$ so that it can be considered a
(homogeneous) Poisson process in each interval. In this manner, the data from
all the intervals can be merged in a single sequence of i.i.d. random
variables uniformly distributed on $[0,1]$. This procedure, proposed in Brown
et al. (2005), makes it possible to remove nuisance parameters and to obtain independence from the rate of the Poisson process on each interval. Hence data from separate intervals (with different rates on each of them) and also from different days can be combined, avoiding the common drawback due to the large within-day and day-to-day variation of the ED patient arrival rate. Actually,
Brown et al. in Brown et al. (2005) apply CU KS test after performing a
further logarithmic data transformation. In Kim and Whitt (2014b, 2015), this
approach has been extensively tested along with alternative data
transformations proposed in early papers Durbin (1961) and Lewis (1965).
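The NHPP just defined can also be simulated directly, which is useful for checking a fitted piecewise constant rate against data. The sketch below is our own illustration (not from the paper): it generates arrival times by Lewis–Shedler thinning, assuming the rate function is bounded by `rate_max`; the two-level day profile in the example is hypothetical.

```python
import random

def simulate_nhpp(rate, rate_max, t_end, seed=0):
    """Draw NHPP arrival times on [0, t_end] by thinning: generate candidates
    from a homogeneous Poisson process with rate rate_max and keep each
    candidate t with probability rate(t) / rate_max (requires rate <= rate_max)."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate_max)        # candidate interarrival time
        if t > t_end:
            return arrivals
        if rng.random() <= rate(t) / rate_max:
            arrivals.append(t)

# A hypothetical day profile: 2 arrivals/hour at night, 10 in the daytime.
day_rate = lambda t: 2.0 if t < 8 else 10.0
arrivals = simulate_nhpp(day_rate, rate_max=10.0, t_end=24.0, seed=1)
```

With such a simulator, one can generate synthetic arrival streams and verify that the statistical tests described below behave as expected.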
However, Kim and Whitt in Kim and Whitt (2014a) observed that this procedure
applied to ED patient arrival data is fair only if they are “analyzed
carefully”. This is due to the fact that the following three issues must be
seriously considered: 1) data rounding, 2) choice of the intervals, 3)
overdispersion. In fact, the first issue may produce batch arrivals (zero
length interarrival times) that are not included in a NHPP, so that unrounded
data (or an unrounding procedure) must be considered. The second is a major
issue in dealing with ED patient arrivals, since the arrival rate can rapidly change, so that the piecewise constant approximation is reasonable only if the intervals are properly chosen. The third issue regards combining data from multiple days. Indeed, in studying the ED patient arrival process, it is common to combine data from the same time slot of different weekdays, this being imperative when data from a single day are not sufficient for statistical
testing. Data collected from EDs database usually show large variability over
successive weeks mainly due to seasonal phenomena like flu season, holiday
season, etc. However, this overdispersion phenomenon must be checked by using
a dispersion test on the available data (e.g. Kathirgamatamby (1953)).
In this work we propose a new modelling approach for ED patient arrival
process based on a piecewise constant approximation of the arrival rate
accomplished with non equally spaced intervals. This choice is suggested by
the typical situation that occurs in EDs, where the arrival rate is low and varying during the night hours and higher and more stable in the daytime; this is indeed what happens in the chosen case study. Therefore, to obtain an accurate representation of the arrival rate $\lambda(t)$ by a piecewise constant function $\lambda_{D}(t)$, a finer discretization of the time domain is required during the night hours, as opposed to the daytime. For this reason, the proposed method finds the best partition of the 24 hours into intervals not necessarily equally spaced.
As far as the authors are aware, the use of an optimization method for
identifying stochastic processes characterizing the patient flow through an ED
was already proposed in Guo et al. (2016), but that study aimed at determining
the optimal service time distribution parameters (by using a meta–heuristic
approach) and it did not involve the ED arrival process. Therefore our approach represents the first attempt to adopt an optimization method for determining the best stochastic model for the ED arrival process. In the previous work De
Santis et al. (2020) a preliminary study was performed following the same
approach. Here, with respect to De Santis et al. (2020), we propose a
significantly enhanced statistical model which allows us to obtain better
results on the case study we consider.
In constructing a statistical model of the ED patient arrivals, a natural way
to define a selection criterion is to evaluate the fit error between
$\lambda(t)$ and its approximation $\lambda_{D}(t)$. However, the true arrival
rate is unknown. In the approach we propose, unlike Kim and Whitt (2014a), no analytical model is assumed for $\lambda(t)$; it is instead substituted by an “empirical arrival rate model” $\lambda_{F}(t)$ obtained by a sample approximation corresponding to a very fine uniform partition of the $24$ hours into intervals of $15$ minutes. In each of these intervals the average arrival rate value has been estimated from data obtained by collecting samples over the same day of the week, for all the weeks in some months, using experimental data for the ED patient arrival times. Hence, any other $\lambda_{D}(t)$ corresponding to a coarser partition of the day must be
compared to $\lambda_{F}(t)$. In other words, an optimization problem is
solved to select the best day partition in non equally spaced intervals,
determining a piecewise constant approximation of the arrival rate over the 24
hours with the best fit to the empirical model. Therefore, the objective
function (to be minimized) of the optimization problem we formulate, comprises
the fit error, namely the mean squared error. Moreover, an additional penalty term is included, aiming at obtaining overall regularity of the optimal approximation, the latter being measured by means of the sum of the squares of the jumps between the values in adjacent intervals. The rationale behind this term is to avoid optimal solutions with excessively rough behavior, namely a few long intervals with high jumps.
To make the result reliable, a number of constraints must be considered.
First, the length of each interval of the partition cannot be less than a fixed value (half an hour, one hour). Moreover, for each interval,
* •
the CU KS test must be satisfied to support the NHPP hypothesis;
* •
the dispersion test must be satisfied to ensure that data are not
overdispersed, and could be considered as a realization of the same process
(no week seasonal effects).
The resulting problem is a black-box constrained optimization problem and to
solve it we use a method belonging to the class of Derivative-Free
Optimization. In particular we use the new algorithmic framework recently
proposed in Liuzzi et al. (2020) which handles black-box problems with integer
variables.
We performed an extensive experimentation on data collected from the ED of a
big hospital in Rome (Italy), also including some significant sensitivity
analyses. The results obtained show that this approach makes it possible to determine the number of intervals and their lengths such that an accurate approximation of the empirical arrival rate is achieved, ensuring consistency between the NHPP hypothesis and the arrival data. The regularity of the optimal piecewise constant approximation can also be finely tuned by properly weighting the penalty term in the objective function with respect to the fit error term.
It is worth noting that the use of a piecewise constant function for approximating the arrival rate function is usually required by the most common discrete event simulation software packages when implementing the ED patient arrival process as an NHPP.
The paper is organized as follows. In Section 2, we briefly report information
on the hospital ED under study. Section 3 describes the statistical model we
propose. The optimization problem we consider is stated in Section 4 and the
results of an extensive experimentation are reported in Section 5. Finally
Section 6 includes some concluding remarks.
## 2 The case study under consideration
The case study we consider concerns the ED of the Policlinico Umberto I, a
very large hospital in Rome, Italy. It is the biggest ED in the Lazio region
in terms of yearly patient arrivals (about 140,000 on average). Thanks to
the cooperation of the ED staff, we were able to collect data concerning the
patient flow through the ED for the whole year 2018. In particular, for the
purpose of this work, we focus on the patient arrivals data collected in the
first $m$ weeks of the year. Both walk-in patients and patients transported by
emergency medical service vehicles are considered.
In Figure 1, the weekly hourly average arrival rate to the ED is shown for
$m=13$, i.e. for data collected from the 1st of January to the 31st of March.
Figure 1: Plot of the weekly average arrival rate for the first 13 weeks of
the year.
In particular, for each day of the week, the arrival rate is obtained by
averaging the number of arrivals occurring in the same hourly time slot over
the 13 weeks considered. We observe that, in accordance with the literature
(see, e.g., Kim and Whitt (2014a)), the average arrival rates among the days
of the week are significantly different. Therefore, since averaging over these
days would lead to inaccurate results, the different days of the week must be
considered separately.
Figure 2 reports the hourly average arrival rate for each day of the week,
again referring to $m=13$, i.e. to the first 13 weeks of the year.
Figure 2: Plot of the comparison among hourly average arrival rate for each
day of the week for the first 13 weeks of the year.
By observing this figure, we expect that, since the shape of each rate is similar, the approach proposed in this work allows us to obtain similar partitions of the 24 hours on different days of the week. This makes it possible to focus on one arbitrary day of the week. Specifically, Tuesday is the day chosen to apply the methodology under study, since the shape of its arrival rate can be considered representative of the other days.
In Figure 3, the plot of the hourly average arrival rate for the Tuesdays over
the 13 considered weeks is reported, while Figure 4 shows mean and variance of
the interarrival times occurred on the first Tuesday of the year 2018.
Figure 3: Plot of the average hourly arrival rate for the Tuesdays over the 13
considered weeks of the year. Figure 4: Plot of the average (in solid green)
and variance (in dashed red) of the interarrival times for the first Tuesday
of year 2018. On the abscissa axis, 3-hour time slots are considered.
From this latter figure, we observe that these two statistics have similar values within each 3-hour time slot, which is in accordance with the property of the Poisson probability distribution that mean and variance coincide.
## 3 Statistical model
The arrival process at EDs is usually characterized by a strong within-day variation both in the arrival rate and in the interarrival times: experimental data show rapid changes in the number of arrivals during the night hours, as opposed to a smoother profile in the daytime. As we already mentioned in the Introduction, for this reason the ED arrival process is usually modeled as an NHPP.
No analytical model is available for the arrival rate $\lambda(t)$, and
therefore a suitable representation of the unknown function is needed. A
realistic representation can be obtained by averaging the number of arrivals
observed in experimental data on suitable intervals over the 24 hours of the
day, not necessarily equally spaced. Let $\{T_{i}\}$ denote a partition $P$ of the observation period $T=[0,24]$ (hours) into $N$ intervals, and let $\{\lambda_{i}\}$ be the corresponding sample average rates. Then a piecewise constant approximation of $\lambda(t)$ is written as follows
$\lambda_{D}(t)=\sum_{i=1}^{N}\lambda_{i}\,\textbf{1}_{T_{i}}(t),\quad t\in T$
(1)
where $\textbf{1}_{T_{i}}(t)$ is 1 for $t\in T_{i}$ and 0 otherwise (the indicator function of the set $T_{i}$). Any partition $P$ gives rise to a different approximation $\lambda_{D}(t)$, depending on the number of intervals and their lengths. Therefore a criterion is needed to select the best partition $P^{\star}$ with some desirable features.
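As a concrete reading of (1), the following sketch (ours, not part of the paper) estimates the sample average rates $\lambda_{i}$ from pooled arrival times and evaluates $\lambda_{D}(t)$; the partition and arrival data in the example are made up.

```python
from bisect import bisect_right

def interval_rates(boundaries, arrival_times, n_weeks):
    """Sample average rate (arrivals/hour) on each interval [x_i, x_{i+1})
    of the partition, pooling arrivals of the same weekday over n_weeks."""
    counts = [0] * (len(boundaries) - 1)
    for t in arrival_times:                    # arrival times in hours, [0, 24)
        i = bisect_right(boundaries, t) - 1
        if 0 <= i < len(counts):
            counts[i] += 1
    return [counts[i] / (n_weeks * (boundaries[i + 1] - boundaries[i]))
            for i in range(len(counts))]

def lambda_D(boundaries, rates, t):
    """Piecewise constant approximation of eq. (1): lambda_i on [x_i, x_{i+1})."""
    i = bisect_right(boundaries, t) - 1
    return rates[min(max(i, 0), len(rates) - 1)]

# Toy example: partition {[0, 8), [8, 24)}, five arrivals from one week.
rates = interval_rates([0, 8, 24], [1.0, 2.0, 3.0, 9.0, 10.0], n_weeks=1)
```

Here the rates come out as 3 arrivals over 8 hours and 2 arrivals over 16 hours, illustrating how interval length enters the estimate.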
First of all, we need to ensure that there is no overdispersion in the arrival
data. We refer to the commonly used dispersion test proposed in
Kathirgamatamby (1953) and reported in Kim and Whitt (2014a). If it is
satisfied, then it is possible to combine arrivals for the same day of the
week over different weeks. To this aim, for any partition $P$, let
$\\{k_{i}^{r}\\}$ denote the number of arrivals in the $i$-th partition
interval $T_{i}$ in the $r$-th week, $r=1,\ldots,m$. Consider the statistics
$Ds_{i}=\displaystyle\frac{1}{\mu_{i}}\displaystyle\sum_{r=1}^{m}\left(k_{i}^{r}-\mu_{i}\right)^{2},\quad
i=1,\ldots,N,$
where $\mu_{i}=\frac{1}{m}\sum_{r=1}^{m}k_{i}^{r}$ is the average number of
arrivals in the given interval for the same day of the week over the
considered $m$ weeks. Under the null hypothesis that the counts $\{k_{i}^{r}\}$ are a sample of $m$ independent Poisson random variables with the same mean count $\mu_{i}$ (no overdispersion), $Ds_{i}$ is distributed as $\chi^{2}_{m-1}$, the chi-squared distribution with $m-1$ degrees of freedom. Therefore the null hypothesis is accepted at the $1-\alpha$ confidence level if
$Ds_{i}\leq\chi^{2}_{m-1,\alpha},\quad i=1,\ldots,N,$ (2)
where $\chi^{2}_{m-1,\alpha}$ is of course the $\alpha$ level critical value
of the $\chi^{2}_{m-1}$ distribution.
Furthermore, the partition is feasible if data are consistent with NHPP.
Namely, if we denote by $k_{i}$ the number of arrivals in each interval
$T_{i}=[a_{i},b_{i})$ obtained by considering data of the same weekday, in the
same interval, over $m$ weeks, i.e. $k_{i}=\sum_{r=1}^{m}k_{i}^{r}$,
$i=1,\ldots,N$, the partition is feasible if each $k_{i}$ has a Poisson
distribution with rate $\lambda_{i}$ obtained as $\mu_{i}/(b_{i}-a_{i})$. To
check the validity of the Poisson hypothesis, the CU KS test can be performed
(see Brown et al. (2005); Kim and Whitt (2014a)). We prefer the CU KS test over the Lewis KS test since the latter is highly sensitive to rounding of the data and, moreover, the CU KS test has more power against alternative hypotheses
involving exponential interarrival times (see Kim and Whitt (2014b) for a
detailed comparison between the effectiveness of the two tests).
To perform the CU KS test, for any interval $T_{i}=[a_{i},b_{i})$, let $t_{ij}$, $j=1,\ldots,k_{i}$, be the arrival times within the $i$-th interval, obtained as the union over the $m$ weeks of the arrival times in each $T_{i}$. Now consider the rescaled arrival times defined by $\tau_{ij}=\displaystyle\frac{t_{ij}-a_{i}}{b_{i}-a_{i}}$. The rescaled arrival times, conditionally on the value $k_{i}$, are a collection of i.i.d. random variables uniformly distributed over $[0,1]$. Hence, in any interval we
compare the theoretical cumulative distribution function (cdf) $F(t)=t$ with
the empirical cdf
$F_{i}(t)=\frac{1}{k_{i}}\sum_{j=1}^{k_{i}}\textbf{1}_{\{\tau_{ij}\leq t\}},\qquad 0\leq t\leq 1.$
The test statistic is defined as follows
$D_{i}=\sup_{0\leq t\leq 1}(|F_{i}(t)-t|).$ (3)
The critical value for this test is denoted by $T(k_{i},\alpha)$ and its values can be found in the KS test critical values table. Accordingly, the
Poisson hypothesis is accepted if
$D_{i}\leq T(k_{i},\alpha),\quad i=1,\ldots,N.$ (4)
This test has to be satisfied on each interval $T_{i}$ for the partition $P$ given by $\{T_{i}\}$ to qualify as feasible with respect to the CU KS test.
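The computation of (3)–(4) can be sketched as follows. This is our own illustration, and it substitutes the widely used large-sample approximation $T(k,\alpha)\approx 1.36/\sqrt{k}$ (for $\alpha=0.05$) for the exact critical values table.

```python
import math

def cu_ks_statistic(arrival_times, a, b):
    """D_i of eq. (3): rescale arrivals in [a, b) to [0, 1] and take the sup
    distance between the empirical cdf and F(t) = t (attained at the samples)."""
    taus = sorted((t - a) / (b - a) for t in arrival_times)
    k = len(taus)
    return max(max((j + 1) / k - tau, tau - j / k)
               for j, tau in enumerate(taus))

def passes_cu_ks(arrival_times, a, b):
    """Test (4), using the asymptotic alpha = 0.05 critical value 1.36/sqrt(k)."""
    k = len(arrival_times)
    return cu_ks_statistic(arrival_times, a, b) <= 1.36 / math.sqrt(k)
```

The supremum in (3) is attained at the sample points, so scanning the sorted $\tau_{ij}$ suffices.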
A further restriction is imposed on the feasible partitions. Given the experimental data, realistic partitions cannot have too fine a granularity, to avoid that some $k_{i}$, being too small, unduly determines the rejection of the CU KS test. To this aim, the value of 1 hour was chosen as the lower threshold value, taking into account the specific case study considered (see also Figure 3).
Now let us evaluate the feasible partitions also in terms of the characteristics of the function $\lambda_{D}(t)$. It would be desirable to define a fit error with respect to $\lambda(t)$, which unfortunately is unknown. The problem can be circumvented by considering a piecewise constant approximation $\lambda_{F}(t)$ over a very fine partition ${P}_{F}$ of $T$. A set of 96 equally spaced intervals of $15$ minutes was considered and the corresponding average rates $\lambda_{i}^{F}$ were estimated from the data. The plot of $\lambda_{i}^{F}$ is reported in Figure 5.
Figure 5: Plot of the daily average arrival rate $\lambda_{F}(t)$ with
intervals of 15 minutes.
The function $\lambda_{F}(t)$ can be considered as an empirical arrival rate
model. Note that partition ${P}_{F}$ need not be feasible since it only serves
to define the finest piecewise constant approximation of $\lambda(t)$.
Therefore the following fit error can be defined
$E(P)=\sum_{j=1}^{N}\sum_{i_{j}=1}^{N_{j}}(\lambda_{j}-\lambda_{i_{j}}^{F})^{2}$
(5)
where $N_{j}$ is the number of intervals of $15$ minutes contained in $T_{j}$,
and identified by the set of indexes $\\{i_{j}\\}\subset\\{1,\ldots,96\\}$.
Finally, it is also advisable to characterize the “smoothness” of any approximation $\lambda_{D}(t)$, to avoid very coarse partitions with high jumps between adjacent intervals, by means of the sum of the squared jumps
$S(P)=\sum_{j=2}^{N}(\lambda_{j}-\lambda_{j-1})^{2}.$ (6)
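With the day discretized into the 96 15-minute bins of ${P}_{F}$, the fit error (5), the penalty (6) and the objective (7) can be sketched as follows (our illustration; interval boundaries are in hours and assumed to fall on 15-minute marks):

```python
def fit_error(boundaries, rates, fine_rates):
    """E(P), eq. (5): squared deviation of each interval rate lambda_j from the
    empirical 15-minute rates lambda_i^F falling inside T_j."""
    E = 0.0
    for j in range(len(rates)):
        lo, hi = int(boundaries[j] * 4), int(boundaries[j + 1] * 4)  # bin indices
        E += sum((rates[j] - fine_rates[i]) ** 2 for i in range(lo, hi))
    return E

def smoothness(rates):
    """S(P), eq. (6): sum of squared jumps between adjacent intervals."""
    return sum((rates[j] - rates[j - 1]) ** 2 for j in range(1, len(rates)))

def objective(boundaries, rates, fine_rates, w):
    """f(x) = E(x) + w * S(x), eq. (7)."""
    return fit_error(boundaries, rates, fine_rates) + w * smoothness(rates)

# Toy check on a 2-hour horizon: a perfect fit leaves only the penalty term.
fine = [1.0] * 4 + [3.0] * 4            # eight 15-minute bins
f_val = objective([0, 1, 2], [1.0, 3.0], fine, w=0.5)
```

The toy check shows the trade-off: with a perfect fit, $f$ reduces to $w$ times the squared jump between the two intervals.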
In the following Section 4 the model features illustrated above are organized
in a proper optimization procedure that provides the selection of the best
partition according to conflicting goals.
The approach we propose makes it possible to address the two major issues raised in Kim and Whitt (2014a) (and reported in the Introduction) when dealing with modelling ED patient arrivals, namely the choice of the intervals and the
overdispersion. As concerns the third issue, the data rounding, the arrival
times in the data we collected are rounded to seconds (format hh:mm:ss), and
actually occurrences of simultaneous arrivals which would cause zero
interarrival times are not present. Therefore, we do not need any unrounding
procedure. Anyhow, as already pointed out above, the CU KS test we use is not
very sensitive to data rounding.
## 4 Statement of the optimization problem
Any partition $P=\\{T_{i}\\}$ of $T=[0,24]$ is characterized by the boundary
points $\\{x_{i}\\}$ of its intervals and by their number $N$. Let us
introduce a vector of variables $x\in\mathbb{Z}^{25}$ such that
$T_{i}=[x_{i},x_{i+1}),$
$i=1,\ldots,24$, with $x_{1}=0$ and $x_{25}=24$.
Functions in (5) and (6) are indeed functions of $x$, and therefore will be
denoted by $E(x)$ and $S(x)$, respectively. Therefore, the objective function
that constitutes the selection criterion is given by
$f(x)=E(x)+wS(x),$ (7)
where $w>0$ is a parameter that controls the weight of the smoothness penalty
term with respect to the fit error: the larger $w$, the smaller the difference
between average arrival rates in adjacent intervals; this in turn implies that
on a steep section of $\lambda_{F}(t)$ an increased number of shorter
intervals is adopted to fill the gap with relatively small jumps.
The set $\cal P$ of feasible partitions is defined as follows:
${\cal P}=\Bigl\{x\in{\mathbb{Z}}^{25}~|~x_{1}=0,\quad x_{25}=24,\quad x_{i+1}-x_{i}\geq\ell_{i},\quad g_{i}(x)\leq 0,\quad h_{i}(x)\leq 0,\quad i=1,\ldots,N\Bigr\}$ (8)
where
$\ell_{i}=\begin{cases}0&\text{if }x_{i}=x_{i+1},\\ \ell&\text{otherwise},\end{cases}$ (9)
$g_{i}(x)=\begin{cases}0&\text{if }x_{i}=x_{i+1},\\ D_{i}-T(k_{i},\alpha)&\text{otherwise},\end{cases}$ (10)
$h_{i}(x)=\begin{cases}0&\text{if }x_{i}=x_{i+1},\\ Ds_{i}-\chi^{2}_{m-1,\alpha}&\text{otherwise},\end{cases}$ (11)
$i=1,\ldots,N$. The value $\ell$ in (9) denotes the minimum interval length allowed and we assume $\ell\geq 1/4$. Of course, the constraints $g_{i}(x)\leq 0$ represent the satisfaction of the CU KS test in (4), while the constraints $h_{i}(x)\leq 0$ concern the dispersion test in (2). Therefore, the best
piecewise constant approximation $\lambda_{D}^{\star}(t)$ of the time-varying
arrival rate $\lambda(t)$ is obtained by solving the following black-box
optimization problem:
$\begin{split}\min~~&f(x)\\ \mathrm{s.t.}~~&x\in{\cal P}.\end{split}$ (12)
We highlight that the idea to use as constraints of the optimization problem a
test to validate the underlying statistical hypothesis on data along with a
dispersion test is completely novel in the framework of modeling ED patient
arrival process. The only proposal which uses a similar approach is in our
previous paper De Santis et al. (2020).
It is important to note that in (7) the objective function has no analytical structure with respect to the independent variables and can only be computed by a data-driven procedure once the values of the $x_{i}$’s are given. The same is true for the constraints $g_{i}(x)$ and $h_{i}(x)$ in (8). Therefore the problem at hand is an integer nonlinear constrained black-box problem; both the objective function and the constraints are relatively expensive to compute, and this makes it difficult to solve efficiently. In fact, classical optimization methods either cannot be applied (since they are based on analytic knowledge of the functions involved) or they are not efficient, especially when evaluating the functions at a given point is computationally expensive. Therefore, to tackle problem (12) we turned our
attention to the class of Derivative-Free Optimization and black-box methods
(see, e.g., Audet and Hare (2017); Conn et al. (2009); Larson et al. (2019)).
In particular, we adopt the algorithmic framework recently proposed in Liuzzi et al. (2020). It represents a novel strategy for solving black-box problems with integer variables and is based on the use of suitable search directions and a nonmonotone linesearch procedure. Moreover, it can handle generally-constrained problems by using a penalty approach. We refer to Liuzzi et al. (2020) for a detailed description and we only highlight that the results reported therein clearly show that this algorithmic framework is particularly efficient in tackling black-box problems like the one in (12). In particular, the effectiveness of the adopted exploration strategy with respect to state-of-the-art methods for black-box optimization is shown. This is due to the fact that the approach proposed in Liuzzi et al. (2020) combines computational efficiency with a high level of reliability.
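For readers who want to experiment, a toy coordinate search over the integer boundaries conveys the flavor of such a derivative-free method. It is NOT the algorithm of Liuzzi et al. (2020), just a naive stand-in that accepts single-boundary moves which stay feasible and decrease $f$:

```python
def coordinate_search(x0, f, is_feasible, max_evals=5000):
    """Naive integer coordinate search: try moving each interior boundary x_i
    by +/-1 (endpoints x_1 = 0 and x_25 = 24 stay fixed) and accept the move
    when it keeps the boundaries ordered, stays feasible, and decreases f."""
    x, best = list(x0), f(x0)
    evals, improved = 1, True
    while improved and evals < max_evals:
        improved = False
        for i in range(1, len(x) - 1):
            for step in (1, -1):
                y = list(x)
                y[i] += step
                if not (y[i - 1] <= y[i] <= y[i + 1]) or not is_feasible(y):
                    continue
                fy = f(y)
                evals += 1
                if fy < best:
                    x, best, improved = y, fy, True
    return x, best
```

In the real problem, `f` would be the black-box objective (7) evaluated from data and `is_feasible` would run the CU KS and dispersion tests; here any callables can be plugged in.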
## 5 Experimental results
In this section we report the results of an extensive experimentation on data
concerning the case study described in Section 2, namely the ED patient
arrivals collected in the first $m$ weeks of year 2018. Different values of
the number of weeks $m$ have been considered. The standard significance level $\alpha=0.05$ is used for the CU KS and dispersion tests.
As regards the optimization problem at hand, the value of $\ell$ in (9) is set to $1$ hour. Moreover, it is important to note that different values of the
weight $w$ in the objective function (7) lead to various piecewise constant
approximations with a different fitting accuracy and degree of regularity.
Therefore, we performed a careful tuning of this parameter, aiming at
determining a value which represents a good trade-off between a small fit
error and the smoothness of the approximation.
As concerns the parameter values of the optimization algorithm used in our
experimentation, we used the default ones (see Liuzzi et al. (2020)). The
stopping criterion is based on the maximum number of function evaluations set
to 5000. As starting point $x^{0}$ of the optimization algorithm we adopt the
following
$x^{0}_{i}=i-1,\qquad i=1,\ldots,25,$ (13)
which corresponds to the case of 24 intervals of unitary length. This choice
is a commonly used partition in most of the approaches proposed in literature
(see e.g. Ahalt et al. (2018); Kim and Whitt (2014a)). Table
LABEL:tab:CUKSTest-initial in the Appendix reports the results of CU KS and
dispersion tests applied to the partition corresponding to the starting point
$x^{0}$, considering $m=13$ weeks. In particular, in Table LABEL:tab:CUKSTest-
initial for each one-hour slot the sample size $k_{i}$ is reported along with
the $p$-value and the acceptance/rejection of the null hypothesis of the
corresponding test. We observe that the arrivals are not overdispersed in any
interval of the partition corresponding to $x^{0}$, i.e. all the constraints
$h_{i}(x)\leq 0$ are satisfied and this allows us to combine data for the same
day of the week over successive weeks. However, this partition is even
unfeasible, i.e. $g_{i}(x)>0$ for some $i$; this corresponds to reject the
statistical hypothesis on some $T_{i}$. Notwithstanding, even if the starting
point is unfeasible, the optimization algorithm we use is able find a feasible
solution which minimizes the objective function.
As we already mentioned, the choice of a proper value for the weight $w$ in
the objective function (7) is important and not straightforward. On the other
hand, the number $m$ of the considered weeks also affects both the accuracy of
the approximation, through the average rates estimated on each interval, and
the consistency of the results, which is ensured by constraints (10) and (11).
However, while $w$ is related to the statement of the optimization problem
(12) and it can be arbitrarily chosen, the choice of $m$ is strictly connected
to the available data. In (Kim and Whitt, 2014a, Section 4), the authors
assert that, having 10 arrivals in the one-hour slot 9–10 a.m., it is
necessary to combine data over 20 weeks in order to have a sufficient sample
size (200 patient arrivals). However, since their approach is based on equally spaced intervals, one-hour slots are also adopted during off-peak hours, for
instance during the night. This implies that the sample size corresponding to
data combination over 20 weeks for these slots could no longer be sufficient
to guarantee good results. This is clearly pointed out in Table
LABEL:tab:CUKSTest-initial where the sample size $k_{i}$ corresponding to some
of the one-hour night slots is very low considering $m=13$ weeks and it
remains insufficient even if 26 weeks are considered (see subsequent Table
LABEL:tab:Test-initial). The approach we propose overcomes this drawback
since, for each choice of $m$, we determine the length of the intervals as
solution of the optimization problem (12). Of course, there could be values of
$m$ such that problem (12) has no feasible solutions, i.e. a partition such
that the NHPP hypothesis holds and the results are consistent does not exist
for such $m$.
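Problem (12) is an integer nonlinear black-box problem, and its solution is what selects the interval lengths for each $m$. As a toy illustration of this idea (not the dedicated derivative-free solver used in the paper), a random search over integer breakpoints on a one-hour grid, keeping only partitions that pass the statistical constraints, could look as follows; `tests` and `objective` are hypothetical callables standing in for the constraints and objective of (12):

```python
import numpy as np

def is_feasible(x, tests):
    """x: sorted integer breakpoints over [0, 24]; tests(a, b) must
    return True when both statistical constraints hold on [a, b]."""
    return all(tests(a, b) for a, b in zip(x[:-1], x[1:]))

def random_search(objective, tests, n_intervals, n_iter=1000, seed=0):
    """Toy derivative-free search over partitions of the 24 hours
    with one-hour granularity (the choice l = 1 in (9))."""
    rng = np.random.default_rng(seed)
    best_x, best_f = None, np.inf
    for _ in range(n_iter):
        cuts = rng.choice(np.arange(1, 24), size=n_intervals - 1,
                          replace=False)
        x = np.concatenate(([0], np.sort(cuts), [24]))
        if is_feasible(x, tests):
            f = objective(x)
            if f < best_f:
                best_x, best_f = x, f
    return best_x, best_f
```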
In order to examine more deeply how the parameters $w$ and $m$ affect the
optimal partition, we performed a sensitivity analysis, focusing first on the
case with fixed $m$ and varying $w$. In particular, we have chosen to focus on
$m=13$ weeks, which enables the optimization algorithm to reach an optimal
solution without excessive computational burden. In any case, we expect that
no substantial changes in the conclusions would be obtained with different
values of $m$, and this is confirmed by further experimentation whose results
are not reported here for the sake of brevity.
This analysis allows us to obtain several partitions that may be considered
for a proper fine-tuning of $w$. In particular, we consider different values
of $w$ within the set $\\{0,0.1,1,10,10^{3}\\}$. Table
LABEL:tab:CUKSTest-optimal in the Appendix reports the optimal partitions
obtained by solving problem (12) for these values of $w$. In particular, Table
LABEL:tab:CUKSTest-optimal includes the intervals of the partition, the value
of the sample size $k_{i}$ corresponding to each interval over $13$ weeks and
the results of the CU KS and dispersion tests, namely the $p$-value and the
acceptance/rejection of the null hypothesis of the corresponding test.
In Figure 6, for graphical comparison, we report the plots of the empirical
arrival rate model $\lambda_{F}(t)$ and its piecewise constant approximation
$\lambda_{D}(t)$ corresponding to the optimal partitions obtained.
Figure 6: Graphical comparison between the empirical arrival rate model
$\lambda_{F}(t)$ (in green) and the piecewise constant approximation
$\lambda_{D}(t)$ (in red) corresponding to the optimal partition obtained by
solving problem (12) for different values of the parameter $w$. From top to
bottom: $w=0,0.1,1,10,10^{3}$.
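Given the empirical rate $\lambda_{F}(t)$ sampled on a fine grid, the piecewise constant approximation $\lambda_{D}(t)$ shown in the plots can be obtained by averaging over each interval of the partition. A minimal sketch with illustrative names, assuming a uniform time grid over the 24 hours:

```python
import numpy as np

def piecewise_constant_rate(t_grid, lam_F, breakpoints):
    """Average lam_F(t) over each interval of the partition to obtain
    the levels of the piecewise constant approximation lam_D."""
    rates = []
    for a, b in zip(breakpoints[:-1], breakpoints[1:]):
        seg = lam_F[(t_grid >= a) & (t_grid < b)]
        rates.append(seg.mean())
    return np.array(rates)

def lam_D(t, breakpoints, rates):
    """Evaluate the step function at a time t in [0, 24)."""
    i = np.searchsorted(breakpoints, t, side="right") - 1
    return rates[min(i, len(rates) - 1)]
```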
Two effects can be clearly observed as $w$ increases: on the one hand, on
steep sections of $\lambda_{F}(t)$, shorter intervals are adopted to reduce
large gaps between adjacent intervals; on the other hand, when
$\lambda_{F}(t)$ is approximately flat, a lower number of intervals may be
sufficient to guarantee small gaps. This is confirmed by the two top plots in
Figure 6, which correspond to $w=0$ and $w=0.1$. In fact, in the first plot
($w=0$), where only the fit error is included in the objective function, and in
the second one ($w=0.1$), where the fit error is in any case the dominant term
of the objective function, the optimal partition is composed of a relatively
large number of intervals. In particular, in the partition corresponding to
$w=0.1$, fewer intervals are adopted during the daytime. As expected, a
smaller number of intervals is attained when $w=1$, $w=10$ and $w=10^{3}$.
Note that, since on the steep section corresponding to the time slot
7:00–10:00 a.m. the maximum number of allowed intervals (due to the lower
threshold value of one hour given by the choice $\ell=1$ in (9)) is already
used, the only way to decrease the smoothness term of the objective function
is to enlarge the intervals during both the day and the night. It is worth
noting that for $w=10^{3}$, the number of intervals increases if compared with
the case $w=10$. This occurs to offset the increase in the fit error term due
to the use of a smaller number of intervals on the flatter sections. As a
consequence, the partition has an unexpected interval at the end of the day.
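This trade-off can be made concrete with a simplified version of the objective: a fit error given by the integrated squared deviation of $\lambda_{F}$ from its interval averages, plus $w$ times a smoothness term penalizing jumps between adjacent levels. The sketch below is an illustrative form only, not the exact terms of (7):

```python
import numpy as np

def simplified_objective(t_grid, lam_F, breakpoints, w):
    """Fit error + w * smoothness for a given partition; increasing w
    favours fewer, longer intervals with small jumps between levels."""
    dt = t_grid[1] - t_grid[0]
    rates, fit = [], 0.0
    for a, b in zip(breakpoints[:-1], breakpoints[1:]):
        seg = lam_F[(t_grid >= a) & (t_grid < b)]
        r = seg.mean()
        rates.append(r)
        fit += float(np.sum((seg - r) ** 2)) * dt  # integrated squared error
    smooth = float(np.sum(np.diff(rates) ** 2))    # squared jumps
    return fit + w * smooth
```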
We point out that for each value of $w$, the optimization algorithm finds an
optimal partition (of course feasible with respect to all the constraints),
even though some constraints related to the CU KS test are violated in the
initial partition, i.e. the one corresponding to $x^{0}$ in (13), namely the
standard assumption of one-hour slots usually adopted. This means that the
data used are in accordance with the NHPP hypothesis and they are sufficient to
appropriately define the piecewise constant approximation of the ED arrival
rate.
Conversely, when the optimization algorithm does not find a feasible
partition, the CU KS test or the dispersion test related to some $T_{i}$ are
never satisfied. This implies that the process is not conforming to the NHPP
hypothesis or that the data are overdispersed. This is clearly highlighted by
our subsequent experimentation where we set $w=1$, letting $m$ vary within
the set $\\{5,9,17,22,26\\}$.
First, in Table LABEL:tab:Test-initial in the Appendix we report the results
of CU KS and dispersion tests applied to the partition corresponding to the
starting point $x^{0}$ in (13), for these different values of $m$. Once more,
this table evidences that the use of equally-spaced intervals of one-hour
length during the whole day can be inappropriate. As an example, see the
results of the tests on the time slot 02:00–03:00. Moreover, note that, for
all these values of $m$, the initial partition corresponding to the starting
point $x^{0}$ is infeasible, except when $m=5$. Indeed, the constraints
corresponding to CU KS and dispersion tests are violated for some $T_{i}$,
meaning that the validity of the standard assumption of one-hour time slots
strongly depends on the time period considered for using the collected data.
In this regard, a strength of our approach is its ability to assist in the
selection of a reasonable value for $m$. If there is no value of $m$ for which
the optimization algorithm determines an optimal solution (due to
infeasibility), then it may be inappropriate to model the ED arrival
process at hand as an NHPP.
The subsequent Table LABEL:tab:Test-infeasible-final includes the optimal
partitions obtained by solving problem (12) for the considered values of
$m\in\\{5,9,17,22,26\\}$. Like the previous tables, Table LABEL:tab:Test-
infeasible-final includes the intervals of the partition, the value of the
sample size $k_{i}$ corresponding to each interval and the results of CU KS
and dispersion tests. For all the considered values of $m$ the optimization
algorithm determines an optimal solution with the only exception of $m=26$. In
this latter case, the maximum number of function evaluations allowed to the
optimization algorithm is not enough to compute an optimal solution: in fact,
we obtain an infeasible solution since the CU KS test related to the last
interval of the day is not satisfied. This may be somewhat unexpected,
since more accurate results should be obtained when considering a greater
sample size. However, by adding the last four weeks (passing from $m=22$ to
$m=26$) which corresponds to the month of June, the data become affected by a
seasonal trend and the NHPP assumption is no longer valid.
In Figure 7 we report a graphical comparison between the empirical arrival
rate model $\lambda_{F}(t)$ and the piecewise constant approximation
$\lambda_{D}(t)$ corresponding to the optimal partitions obtained for the
considered values of $m$. We observe that the variability of $\lambda_{F}(t)$
reduces as the value of $m$ increases since averaging on more data leads to
flattening the fluctuation. Despite these rapid oscillations and unlike the
other considered values of $m$, for $m=5$ the empirical model $\lambda_{F}(t)$
shows a constant trend during both the night and day hours. This results in a
piecewise constant approximation $\lambda_{D}(t)$ that is flat in all the time
slots of the 24 hours of the day except the ones related to the morning hours,
for which many intervals are used. In fact, to guarantee a good fitting error
between $\lambda_{D}(t)$ and $\lambda_{F}(t)$, it would be necessary to use
shorter intervals, but this is not allowed by the choice $\ell=1$ in the
constraints (9). For the other considered values of $m$, the number of
intervals increases, leading to partitions that improve the fitting error if
compared with the case $m=5$. In particular, we observe that the piecewise
constant approximation $\lambda_{D}(t)$ obtained for $m=22$ benefits from the
lower fluctuations resulting from averaging more data. Therefore, as expected,
using the maximum number of available data leads to the most accurate
piecewise constant approximation. However, when too many weeks of data are
considered, seasonal phenomena could give rise to the rejection of the null
hypothesis of the considered tests, as observed for the case $m=26$. Moreover, as
highlighted at the end of Section 5 in Kim and Whitt (2014a), a tendency to
reject the NHPP hypothesis (i.e. the null hypothesis of the CU KS test) may be
encountered when the sample size is large. In fact, a larger sample size
requires stronger evidence for the null hypothesis in order for the test to
be passed. Nevertheless, our approach is able to overcome these drawbacks,
providing us with an optimal strategy to identify the best way of using the
collected data.
Figure 7: Graphical comparison between the empirical arrival rate model
$\lambda_{F}(t)$ (in green) and the piecewise constant approximation
$\lambda_{D}(t)$ (in red) corresponding to the optimal partition obtained by
solving problem (12) for different values of the parameter $m$. From top to
bottom: $m=5,9,17,22,26$.
## 6 Conclusions
In this work, we examined the arrival process to EDs by providing a novel
methodology that is able to improve the reliability of the modelling
approaches frequently used to deal with this complex system, i.e. Discrete
Event Simulation modelling. In accordance with the literature, we adopted the
standard assumption of representing the ED arrival process as a NHPP, which is
suitable for modelling strongly time-varying processes. In particular, the
final goal of the proposed approach is to accurately estimate the unknown
arrival rate, i.e. the time-dependent parameter of the NHPP, by using a
reasonable piecewise constant approximation. To this aim, an integer nonlinear
black-box optimization problem is solved to determine the optimal partition of
the 24 hours into a suitable number of non-equally-spaced intervals. To
guarantee the reliability of this estimation procedure, two types of
statistical tests are considered as constraints for each interval of any
candidate partition: the CU KS test must be satisfied to ensure the
consistency between the NHPP hypothesis and the ED arrivals; the dispersion
test must be satisfied to avoid the overdispersion of data. To the best of our
knowledge, our methodology represents the first optimization-based approach
adopted for determining the best stochastic model for the ED arrival process.
The extensive experimentation we performed on data collected from the ED of a
large hospital in Italy shows that our approach is able to find a piecewise
constant approximation which represents a good trade-off between a small fit
error with respect to the empirical arrival rate model and the smoothness of
the approximation. This result is accomplished by the optimization algorithm
even though some constraints are violated at the starting point, which
corresponds to the commonly adopted partition composed of one-hour time slots.
Moreover, some significant sensitivity analyses are performed to investigate
the fine-tuning of the two parameters affecting the quality of the piecewise
constant approximation: the weight of the smoothness of the approximation in
the objective function (with respect to the fit error) and the number of weeks
considered from the arrivals data. While the former can be arbitrarily chosen
by a user according to the desired level of smoothness, the latter affects the
accuracy of the arrival rate estimation. In general, the more weeks are
considered, the more accurate is the arrival rate approximation, as long as
the NHPP assumption still holds and the data do not become overdispersed.
As regards future work, in order to analyze the robustness of the
proposed approach more deeply, we could use alternative statistical tests, such as the
Lewis and the Log tests described in Kim and Whitt (2014a), in place of the CU
KS test. Moreover, whenever Discrete Event Simulation modelling is the chosen
methodology to study ED operation, a model calibration approach could also be
used to determine the best value of the weight used in the objective function
to penalize the “smoothness term”. In fact, the optimal value of this
parameter could be obtained by minimizing the deviation between the simulation
outputs and the corresponding key performance indicators computed through the
data. This makes it possible to obtain a representation of the ED arrival
process that leads to an improved simulation model of the system under study.
## Appendix A Appendix
In this Appendix we report the detailed results of the CU KS and dispersion
tests related to the partitions considered throughout the paper.
Table 1: Results of the CU KS and dispersion tests (with a significance level of 0.05) applied to each interval of the partition corresponding to the starting point $x^{0}$. The considered number of weeks is $m=13$. For each interval of each partition, the sample size of the dispersion test is $m$. $H_{0}$ denotes the null hypothesis of the corresponding test.
Interval | $k_{i}$ | CU KS $p$-value | CU KS $H_{0}$ | Dispersion $p$-value | Dispersion $H_{0}$
---|---|---|---|---|---
00:00 – 01:00 | $48$ | $0.836$ | accepted | $0.801$ | accepted
01:00 – 02:00 | $38$ | $0.950$ | accepted | $0.450$ | accepted
02:00 – 03:00 | $22$ | $0.027$ | rejected | $0.521$ | accepted
03:00 – 04:00 | $24$ | $0.752$ | accepted | $0.652$ | accepted
04:00 – 05:00 | $21$ | $0.668$ | accepted | $0.366$ | accepted
05:00 – 06:00 | $32$ | $0.312$ | accepted | $0.524$ | accepted
06:00 – 07:00 | $29$ | $0.634$ | accepted | $0.538$ | accepted
07:00 – 08:00 | $59$ | $0.424$ | accepted | $0.252$ | accepted
08:00 – 09:00 | $86$ | $0.393$ | accepted | $0.734$ | accepted
09:00 – 10:00 | $136$ | $0.635$ | accepted | $0.803$ | accepted
10:00 – 11:00 | $143$ | $0.039$ | rejected | $0.966$ | accepted
11:00 – 12:00 | $154$ | $0.325$ | accepted | $0.999$ | accepted
12:00 – 13:00 | $132$ | $0.858$ | accepted | $0.948$ | accepted
13:00 – 14:00 | $121$ | $0.738$ | accepted | $0.984$ | accepted
14:00 – 15:00 | $125$ | $0.885$ | accepted | $0.500$ | accepted
15:00 – 16:00 | $127$ | $0.928$ | accepted | $0.610$ | accepted
16:00 – 17:00 | $117$ | $0.479$ | accepted | $0.987$ | accepted
17:00 – 18:00 | $111$ | $0.769$ | accepted | $0.516$ | accepted
18:00 – 19:00 | $102$ | $0.458$ | accepted | $0.912$ | accepted
19:00 – 20:00 | $100$ | $0.095$ | accepted | $0.527$ | accepted
20:00 – 21:00 | $101$ | $0.656$ | accepted | $0.586$ | accepted
21:00 – 22:00 | $115$ | $0.763$ | accepted | $0.604$ | accepted
22:00 – 23:00 | $101$ | $0.916$ | accepted | $0.305$ | accepted
23:00 – 24:00 | $70$ | $0.864$ | accepted | $0.104$ | accepted
Table 2: Results of the CU KS and dispersion tests (with a significance level of 0.05) applied to each interval of the optimal partition obtained by solving problem (12) for different values of the parameter $w$, with $m$ fixed to 13 weeks. From top to bottom: $w=0,0.1,1,10,10^{3}$. For each interval of each partition, the sample size of the dispersion test is equal to $m$. $H_{0}$ denotes the null hypothesis of the corresponding test.
$w$ | Interval | $k_{i}$ | CU KS $p$-value | CU KS $H_{0}$ | Dispersion $p$-value | Dispersion $H_{0}$
---|---|---|---|---|---|---
| 00:00 – 01:00 | $48$ | $0.836$ | accepted | $0.801$ | accepted
| 01:00 – 02:00 | $38$ | $0.950$ | accepted | $0.450$ | accepted
| 02:00 – 05:00 | $67$ | $0.504$ | accepted | $0.100$ | accepted
| 05:00 – 06:00 | $32$ | $0.312$ | accepted | $0.524$ | accepted
| 06:00 – 07:00 | $29$ | $0.634$ | accepted | $0.538$ | accepted
| 07:00 – 08:00 | $59$ | $0.424$ | accepted | $0.252$ | accepted
| 08:00 – 09:00 | $86$ | $0.393$ | accepted | $0.734$ | accepted
| 09:00 – 10:00 | $136$ | $0.635$ | accepted | $0.803$ | accepted
0 | 10:00 – 12:00 | $297$ | $0.433$ | accepted | $0.994$ | accepted
| 12:00 – 13:00 | $132$ | $0.858$ | accepted | $0.948$ | accepted
| 13:00 – 16:00 | $373$ | $0.958$ | accepted | $0.502$ | accepted
| 16:00 – 17:00 | $117$ | $0.479$ | accepted | $0.987$ | accepted
| 17:00 – 19:00 | $213$ | $0.999$ | accepted | $0.937$ | accepted
| 19:00 – 20:00 | $100$ | $0.095$ | accepted | $0.527$ | accepted
| 20:00 – 21:00 | $101$ | $0.656$ | accepted | $0.586$ | accepted
| 21:00 – 22:00 | $115$ | $0.763$ | accepted | $0.604$ | accepted
| 22:00 – 23:00 | $101$ | $0.916$ | accepted | $0.305$ | accepted
| 23:00 – 24:00 | $70$ | $0.864$ | accepted | $0.104$ | accepted
| 00:00 – 01:00 | $48$ | $0.836$ | accepted | $0.801$ | accepted
| 01:00 – 02:00 | $38$ | $0.950$ | accepted | $0.450$ | accepted
| 02:00 – 05:00 | $67$ | $0.504$ | accepted | $0.100$ | accepted
| 05:00 – 06:00 | $32$ | $0.312$ | accepted | $0.524$ | accepted
| 06:00 – 07:00 | $29$ | $0.634$ | accepted | $0.538$ | accepted
| 07:00 – 08:00 | $59$ | $0.424$ | accepted | $0.252$ | accepted
| 08:00 – 09:00 | $86$ | $0.393$ | accepted | $0.734$ | accepted
| 09:00 – 10:00 | $136$ | $0.635$ | accepted | $0.803$ | accepted
0.1 | 10:00 – 12:00 | $297$ | $0.433$ | accepted | $0.994$ | accepted
| 12:00 – 13:00 | $132$ | $0.858$ | accepted | $0.948$ | accepted
| 13:00 – 16:00 | $373$ | $0.958$ | accepted | $0.502$ | accepted
| 16:00 – 17:00 | $117$ | $0.479$ | accepted | $0.987$ | accepted
| 17:00 – 18:00 | $111$ | $0.769$ | accepted | $0.516$ | accepted
| 18:00 – 22:00 | $418$ | $0.660$ | accepted | $0.987$ | accepted
| 22:00 – 23:00 | $101$ | $0.916$ | accepted | $0.305$ | accepted
| 23:00 – 24:00 | $70$ | $0.864$ | accepted | $0.104$ | accepted
| 00:00 – 02:00 | $86$ | $0.825$ | accepted | $0.709$ | accepted
| 02:00 – 05:00 | $67$ | $0.504$ | accepted | $0.100$ | accepted
| 05:00 – 07:00 | $61$ | $0.739$ | accepted | $0.313$ | accepted
| 07:00 – 08:00 | $59$ | $0.424$ | accepted | $0.252$ | accepted
1 | 08:00 – 09:00 | $86$ | $0.393$ | accepted | $0.734$ | accepted
| 09:00 – 15:00 | $811$ | $0.073$ | accepted | $0.955$ | accepted
| 15:00 – 16:00 | $127$ | $0.928$ | accepted | $0.610$ | accepted
| 16:00 – 17:00 | $117$ | $0.479$ | accepted | $0.987$ | accepted
| 17:00 – 18:00 | $111$ | $0.769$ | accepted | $0.516$ | accepted
| 18:00 – 24:00 | $589$ | $0.059$ | accepted | $0.922$ | accepted
| 00:00 – 02:00 | $86$ | $0.825$ | accepted | $0.709$ | accepted
| 02:00 – 05:00 | $67$ | $0.504$ | accepted | $0.100$ | accepted
| 05:00 – 07:00 | $61$ | $0.739$ | accepted | $0.313$ | accepted
| 07:00 – 08:00 | $59$ | $0.424$ | accepted | $0.252$ | accepted
10 | 08:00 – 09:00 | $86$ | $0.393$ | accepted | $0.734$ | accepted
| 09:00 – 15:00 | $811$ | $0.073$ | accepted | $0.955$ | accepted
| 15:00 – 16:00 | $127$ | $0.928$ | accepted | $0.610$ | accepted
| 16:00 – 17:00 | $117$ | $0.479$ | accepted | $0.987$ | accepted
| 17:00 – 24:00 | $700$ | $0.063$ | accepted | $0.720$ | accepted
| 00:00 – 02:00 | $86$ | $0.825$ | accepted | $0.709$ | accepted
| 02:00 – 06:00 | $99$ | $0.451$ | accepted | $0.162$ | accepted
| 06:00 – 07:00 | $29$ | $0.634$ | accepted | $0.538$ | accepted
| 07:00 – 08:00 | $59$ | $0.424$ | accepted | $0.252$ | accepted
$10^{3}$ | 08:00 – 09:00 | $86$ | $0.393$ | accepted | $0.734$ | accepted
| 09:00 – 15:00 | $811$ | $0.073$ | accepted | $0.955$ | accepted
| 15:00 – 16:00 | $127$ | $0.928$ | accepted | $0.610$ | accepted
| 16:00 – 17:00 | $117$ | $0.479$ | accepted | $0.987$ | accepted
| 17:00 – 18:00 | $111$ | $0.769$ | accepted | $0.516$ | accepted
| 18:00 – 22:00 | $418$ | $0.660$ | accepted | $0.987$ | accepted
| 22:00 – 24:00 | $171$ | $0.053$ | accepted | $0.681$ | accepted
Table 3: Results of the CU KS and dispersion tests (with a significance level of 0.05) applied to each interval of the partition corresponding to the starting point $x^{0}$. From top to bottom: $m=5,9,17,22,26$. For each interval of each partition, the sample size of the dispersion test is $m$. $H_{0}$ denotes the null hypothesis of the corresponding test.
$m$ | Interval | $k_{i}$ | CU KS $p$-value | CU KS $H_{0}$ | Dispersion $p$-value | Dispersion $H_{0}$
---|---|---|---|---|---|---
| 00:00 – 01:00 | $20$ | $0.167$ | accepted | $0.240$ | accepted
| 01:00 – 02:00 | $11$ | $0.616$ | accepted | $0.151$ | accepted
| 02:00 – 03:00 | $7$ | $0.887$ | accepted | $0.160$ | accepted
| 03:00 – 04:00 | $7$ | $0.892$ | accepted | $0.683$ | accepted
| 04:00 – 05:00 | $12$ | $0.217$ | accepted | $0.856$ | accepted
| 05:00 – 06:00 | $8$ | $0.426$ | accepted | $0.219$ | accepted
| 06:00 – 07:00 | $15$ | $0.884$ | accepted | $0.504$ | accepted
| 07:00 – 08:00 | $27$ | $0.820$ | accepted | $0.164$ | accepted
| 08:00 – 09:00 | $35$ | $0.875$ | accepted | $0.534$ | accepted
| 09:00 – 10:00 | $50$ | $0.378$ | accepted | $0.844$ | accepted
| 10:00 – 11:00 | $48$ | $0.083$ | accepted | $0.884$ | accepted
| 11:00 – 12:00 | $59$ | $0.484$ | accepted | $0.966$ | accepted
5 | 12:00 – 13:00 | $51$ | $0.594$ | accepted | $0.765$ | accepted
| 13:00 – 14:00 | $47$ | $0.651$ | accepted | $0.689$ | accepted
| 14:00 – 15:00 | $44$ | $0.817$ | accepted | $0.412$ | accepted
| 15:00 – 16:00 | $45$ | $0.811$ | accepted | $0.168$ | accepted
| 16:00 – 17:00 | $47$ | $0.679$ | accepted | $0.987$ | accepted
| 17:00 – 18:00 | $49$ | $0.486$ | accepted | $0.534$ | accepted
| 18:00 – 19:00 | $37$ | $0.731$ | accepted | $0.344$ | accepted
| 19:00 – 20:00 | $35$ | $0.436$ | accepted | $0.839$ | accepted
| 20:00 – 21:00 | $44$ | $0.904$ | accepted | $0.794$ | accepted
| 21:00 – 22:00 | $43$ | $0.459$ | accepted | $0.693$ | accepted
| 22:00 – 23:00 | $32$ | $0.967$ | accepted | $0.667$ | accepted
| 23:00 – 24:00 | $31$ | $0.306$ | accepted | $0.552$ | accepted
| 00:00 – 01:00 | $33$ | $0.106$ | accepted | $0.527$ | accepted
| 01:00 – 02:00 | $22$ | $0.658$ | accepted | $0.488$ | accepted
| 02:00 – 03:00 | $13$ | $0.031$ | rejected | $0.390$ | accepted
| 03:00 – 04:00 | $14$ | $0.258$ | accepted | $0.857$ | accepted
| 04:00 – 05:00 | $16$ | $0.441$ | accepted | $0.471$ | accepted
| 05:00 – 06:00 | $22$ | $0.707$ | accepted | $0.335$ | accepted
| 06:00 – 07:00 | $23$ | $0.580$ | accepted | $0.608$ | accepted
| 07:00 – 08:00 | $48$ | $0.500$ | accepted | $0.484$ | accepted
| 08:00 – 09:00 | $54$ | $0.338$ | accepted | $0.573$ | accepted
| 09:00 – 10:00 | $97$ | $0.391$ | accepted | $0.886$ | accepted
| 10:00 – 11:00 | $97$ | $0.149$ | accepted | $0.836$ | accepted
| 11:00 – 12:00 | $108$ | $0.384$ | accepted | $0.999$ | accepted
9 | 12:00 – 13:00 | $95$ | $0.911$ | accepted | $0.821$ | accepted
| 13:00 – 14:00 | $82$ | $0.733$ | accepted | $0.923$ | accepted
| 14:00 – 15:00 | $75$ | $0.979$ | accepted | $0.753$ | accepted
| 15:00 – 16:00 | $89$ | $0.909$ | accepted | $0.456$ | accepted
| 16:00 – 17:00 | $82$ | $0.429$ | accepted | $0.923$ | accepted
| 17:00 – 18:00 | $78$ | $0.804$ | accepted | $0.596$ | accepted
| 18:00 – 19:00 | $69$ | $0.277$ | accepted | $0.734$ | accepted
| 19:00 – 20:00 | $69$ | $0.218$ | accepted | $0.477$ | accepted
| 20:00 – 21:00 | $72$ | $0.731$ | accepted | $0.731$ | accepted
| 21:00 – 22:00 | $75$ | $0.449$ | accepted | $0.541$ | accepted
| 22:00 – 23:00 | $60$ | $0.989$ | accepted | $0.681$ | accepted
| 23:00 – 24:00 | $48$ | $0.521$ | accepted | $0.689$ | accepted
| 00:00 – 01:00 | $73$ | $0.729$ | accepted | $0.472$ | accepted
| 01:00 – 02:00 | $48$ | $0.708$ | accepted | $0.291$ | accepted
| 02:00 – 03:00 | $39$ | $0.009$ | rejected | $0.010$ | rejected
| 03:00 – 04:00 | $32$ | $0.203$ | accepted | $0.622$ | accepted
| 04:00 – 05:00 | $28$ | $0.706$ | accepted | $0.652$ | accepted
| 05:00 – 06:00 | $38$ | $0.125$ | accepted | $0.607$ | accepted
| 06:00 – 07:00 | $35$ | $0.908$ | accepted | $0.327$ | accepted
| 07:00 – 08:00 | $70$ | $0.788$ | accepted | $0.075$ | accepted
| 08:00 – 09:00 | $121$ | $0.786$ | accepted | $0.577$ | accepted
| 09:00 – 10:00 | $174$ | $0.421$ | accepted | $0.729$ | accepted
| 10:00 – 11:00 | $186$ | $0.332$ | accepted | $0.939$ | accepted
| 11:00 – 12:00 | $203$ | $0.474$ | accepted | $0.999$ | accepted
17 | 12:00 – 13:00 | $176$ | $0.698$ | accepted | $0.986$ | accepted
| 13:00 – 14:00 | $164$ | $0.589$ | accepted | $0.992$ | accepted
| 14:00 – 15:00 | $161$ | $0.983$ | accepted | $0.570$ | accepted
| 15:00 – 16:00 | $168$ | $0.506$ | accepted | $0.815$ | accepted
| 16:00 – 17:00 | $153$ | $0.361$ | accepted | $0.996$ | accepted
| 17:00 – 18:00 | $149$ | $0.596$ | accepted | $0.528$ | accepted
| 18:00 – 19:00 | $134$ | $0.761$ | accepted | $0.909$ | accepted
| 19:00 – 20:00 | $140$ | $0.101$ | accepted | $0.637$ | accepted
| 20:00 – 21:00 | $141$ | $0.709$ | accepted | $0.760$ | accepted
| 21:00 – 22:00 | $153$ | $0.938$ | accepted | $0.855$ | accepted
| 22:00 – 23:00 | $129$ | $0.887$ | accepted | $0.393$ | accepted
| 23:00 – 24:00 | $94$ | $0.950$ | accepted | $0.296$ | accepted
| 00:00 – 01:00 | $95$ | $0.509$ | accepted | $0.720$ | accepted
| 01:00 – 02:00 | $70$ | $0.938$ | accepted | $0.529$ | accepted
| 02:00 – 03:00 | $52$ | $0.008$ | rejected | $0.022$ | rejected
| 03:00 – 04:00 | $36$ | $0.094$ | accepted | $0.507$ | accepted
| 04:00 – 05:00 | $34$ | $0.536$ | accepted | $0.420$ | accepted
| 05:00 – 06:00 | $46$ | $0.045$ | rejected | $0.703$ | accepted
| 06:00 – 07:00 | $48$ | $0.833$ | accepted | $0.590$ | accepted
| 07:00 – 08:00 | $83$ | $0.805$ | accepted | $0.062$ | accepted
| 08:00 – 09:00 | $165$ | $0.576$ | accepted | $0.108$ | accepted
| 09:00 – 10:00 | $219$ | $0.105$ | accepted | $0.737$ | accepted
| 10:00 – 11:00 | $235$ | $0.282$ | accepted | $0.960$ | accepted
| 11:00 – 12:00 | $274$ | $0.585$ | accepted | $0.962$ | accepted
22 | 12:00 – 13:00 | $233$ | $0.956$ | accepted | $0.984$ | accepted
| 13:00 – 14:00 | $216$ | $0.515$ | accepted | $0.999$ | accepted
| 14:00 – 15:00 | $207$ | $0.872$ | accepted | $0.789$ | accepted
| 15:00 – 16:00 | $213$ | $0.841$ | accepted | $0.905$ | accepted
| 16:00 – 17:00 | $204$ | $0.491$ | accepted | $0.999$ | accepted
| 17:00 – 18:00 | $192$ | $0.534$ | accepted | $0.683$ | accepted
| 18:00 – 19:00 | $173$ | $0.818$ | accepted | $0.968$ | accepted
| 19:00 – 20:00 | $177$ | $0.072$ | accepted | $0.768$ | accepted
| 20:00 – 21:00 | $181$ | $0.655$ | accepted | $0.681$ | accepted
| 21:00 – 22:00 | $196$ | $0.977$ | accepted | $0.810$ | accepted
| 22:00 – 23:00 | $167$ | $0.688$ | accepted | $0.412$ | accepted
| 23:00 – 24:00 | $118$ | $0.963$ | accepted | $0.209$ | accepted
| 00:00 – 01:00 | $112$ | $0.171$ | accepted | $0.679$ | accepted
| 01:00 – 02:00 | $75$ | $0.933$ | accepted | $0.377$ | accepted
| 02:00 – 03:00 | $67$ | $0.012$ | rejected | $0.053$ | accepted
| 03:00 – 04:00 | $46$ | $0.458$ | accepted | $0.450$ | accepted
| 04:00 – 05:00 | $38$ | $0.987$ | accepted | $0.465$ | accepted
| 05:00 – 06:00 | $57$ | $0.308$ | accepted | $0.535$ | accepted
| 06:00 – 07:00 | $56$ | $0.935$ | accepted | $0.739$ | accepted
| 07:00 – 08:00 | $100$ | $0.882$ | accepted | $0.128$ | accepted
| 08:00 – 09:00 | $198$ | $0.566$ | accepted | $0.142$ | accepted
| 09:00 – 10:00 | $259$ | $0.341$ | accepted | $0.844$ | accepted
| 10:00 – 11:00 | $289$ | $0.091$ | accepted | $0.942$ | accepted
| 11:00 – 12:00 | $320$ | $0.725$ | accepted | $0.984$ | accepted
26 | 12:00 – 13:00 | $274$ | $0.915$ | accepted | $0.996$ | accepted
| 13:00 – 14:00 | $257$ | $0.228$ | accepted | $0.999$ | accepted
| 14:00 – 15:00 | $243$ | $0.872$ | accepted | $0.835$ | accepted
| 15:00 – 16:00 | $242$ | $0.574$ | accepted | $0.892$ | accepted
| 16:00 – 17:00 | $236$ | $0.630$ | accepted | $0.942$ | accepted
| 17:00 – 18:00 | $231$ | $0.808$ | accepted | $0.753$ | accepted
| 18:00 – 19:00 | $204$ | $0.682$ | accepted | $0.980$ | accepted
| 19:00 – 20:00 | $209$ | $0.170$ | accepted | $0.830$ | accepted
| 20:00 – 21:00 | $219$ | $0.610$ | accepted | $0.735$ | accepted
| 21:00 – 22:00 | $237$ | $0.803$ | accepted | $0.905$ | accepted
| 22:00 – 23:00 | $198$ | $0.614$ | accepted | $0.366$ | accepted
| 23:00 – 24:00 | $147$ | $0.972$ | accepted | $0.032$ | accepted
Table 4: Results of the CU KS and dispersion tests (with a significance level of 0.05) applied to each interval of the final (infeasible) partition obtained by solving problem (12) for different values of the parameter $m$, with $w$ fixed to 1. From top to bottom: $m=5,9,17,22,26$. For each interval of each partition, the sample size of the dispersion test is $m$. $H_{0}$ denotes the null hypothesis of the corresponding test.
$m$ | Interval | $k_{i}$ | CU KS $p$-value | CU KS $H_{0}$ | Dispersion $p$-value | Dispersion $H_{0}$
---|---|---|---|---|---|---
| 00:00 – 06:00 | $65$ | $0.068$ | accepted | $0.472$ | accepted
| 06:00 – 07:00 | $15$ | $0.884$ | accepted | $0.504$ | accepted
| 07:00 – 08:00 | $27$ | $0.820$ | accepted | $0.164$ | accepted
| 08:00 – 09:00 | $35$ | $0.875$ | accepted | $0.534$ | accepted
| 09:00 – 10:00 | $50$ | $0.378$ | accepted | $0.844$ | accepted
5 | 10:00 – 12:00 | $107$ | $0.734$ | accepted | $0.938$ | accepted
| 12:00 – 13:00 | $51$ | $0.594$ | accepted | $0.765$ | accepted
| 13:00 – 14:00 | $47$ | $0.651$ | accepted | $0.689$ | accepted
| 14:00 – 15:00 | $44$ | $0.817$ | accepted | $0.412$ | accepted
| 15:00 – 24:00 | $363$ | $0.214$ | accepted | $0.568$ | accepted
| 00:00 – 02:00 | $55$ | $0.249$ | accepted | $0.607$ | accepted
| 02:00 – 04:00 | $27$ | $0.309$ | accepted | $0.501$ | accepted
| 04:00 – 05:00 | $16$ | $0.441$ | accepted | $0.471$ | accepted
| 05:00 – 06:00 | $22$ | $0.707$ | accepted | $0.335$ | accepted
| 06:00 – 07:00 | $23$ | $0.580$ | accepted | $0.608$ | accepted
| 07:00 – 08:00 | $48$ | $0.500$ | accepted | $0.484$ | accepted
9 | 08:00 – 09:00 | $54$ | $0.338$ | accepted | $0.573$ | accepted
| 09:00 – 16:00 | $643$ | $0.060$ | accepted | $0.717$ | accepted
| 16:00 – 17:00 | $82$ | $0.429$ | accepted | $0.923$ | accepted
| 17:00 – 18:00 | $78$ | $0.804$ | accepted | $0.596$ | accepted
| 18:00 – 22:00 | $285$ | $0.919$ | accepted | $0.989$ | accepted
| 22:00 – 23:00 | $60$ | $0.989$ | accepted | $0.681$ | accepted
| 23:00 – 24:00 | $48$ | $0.522$ | accepted | $0.689$ | accepted
| 00:00 – 02:00 | $121$ | $0.094$ | accepted | $0.535$ | accepted
| 02:00 – 05:00 | $99$ | $0.098$ | accepted | $0.067$ | accepted
| 05:00 – 07:00 | $73$ | $0.650$ | accepted | $0.203$ | accepted
| 07:00 – 08:00 | $70$ | $0.788$ | accepted | $0.075$ | accepted
| 08:00 – 09:00 | $121$ | $0.786$ | accepted | $0.577$ | accepted
| 09:00 – 10:00 | $174$ | $0.421$ | accepted | $0.729$ | accepted
17 | 10:00 – 14:00 | $729$ | $0.089$ | accepted | $0.995$ | accepted
| 14:00 – 16:00 | $329$ | $0.982$ | accepted | $0.410$ | accepted
| 16:00 – 17:00 | $153$ | $0.361$ | accepted | $0.996$ | accepted
| 17:00 – 18:00 | $149$ | $0.596$ | accepted | $0.528$ | accepted
| 18:00 – 22:00 | $568$ | $0.586$ | accepted | $0.926$ | accepted
| 22:00 – 24:00 | $223$ | $0.071$ | accepted | $0.793$ | accepted
| 00:00 – 02:00 | $165$ | $0.198$ | accepted | $0.743$ | accepted
| 02:00 – 06:00 | $168$ | $0.117$ | accepted | $0.122$ | accepted
| 06:00 – 07:00 | $48$ | $0.833$ | accepted | $0.590$ | accepted
| 07:00 – 08:00 | $83$ | $0.805$ | accepted | $0.062$ | accepted
| 08:00 – 09:00 | $165$ | $0.576$ | accepted | $0.108$ | accepted
| 09:00 – 10:00 | $219$ | $0.105$ | accepted | $0.737$ | accepted
22 | 10:00 – 14:00 | $958$ | $0.097$ | accepted | $0.994$ | accepted
| 14:00 – 16:00 | $420$ | $0.952$ | accepted | $0.561$ | accepted
| 16:00 – 17:00 | $204$ | $0.491$ | accepted | $0.999$ | accepted
| 17:00 – 18:00 | $192$ | $0.534$ | accepted | $0.683$ | accepted
| 18:00 – 22:00 | $772$ | $0.436$ | accepted | $0.968$ | accepted
| 22:00 – 23:00 | $167$ | $0.688$ | accepted | $0.412$ | accepted
| 23:00 – 24:00 | $118$ | $0.963$ | accepted | $0.209$ | accepted
| 00:00 – 01:00 | $112$ | $0.171$ | accepted | $0.679$ | accepted
| 01:00 – 02:00 | $75$ | $0.933$ | accepted | $0.378$ | accepted
| 02:00 – 06:00 | $208$ | $0.072$ | accepted | $0.080$ | accepted
| 06:00 – 07:00 | $56$ | $0.935$ | accepted | $0.739$ | accepted
| 07:00 – 08:00 | $100$ | $0.882$ | accepted | $0.128$ | accepted
| 08:00 – 09:00 | $198$ | $0.566$ | accepted | $0.142$ | accepted
| 09:00 – 10:00 | $259$ | $0.341$ | accepted | $0.844$ | accepted
| 10:00 – 11:00 | $289$ | $0.091$ | accepted | $0.942$ | accepted
26 | 11:00 – 12:00 | $320$ | $0.725$ | accepted | $0.984$ | accepted
| 12:00 – 13:00 | $274$ | $0.915$ | accepted | $0.996$ | accepted
| 13:00 – 15:00 | $500$ | $0.439$ | accepted | $0.971$ | accepted
| 15:00 – 16:00 | $242$ | $0.574$ | accepted | $0.892$ | accepted
| 16:00 – 18:00 | $467$ | $0.895$ | accepted | $0.939$ | accepted
| 18:00 – 21:00 | $632$ | $0.643$ | accepted | $0.950$ | accepted
| 21:00 – 22:00 | $237$ | $0.803$ | accepted | $0.905$ | accepted
| 22:00 – 24:00 | $345$ | $0.034$ | rejected | $0.440$ | accepted
## Conflict of interest
The authors declare that they have no conflict of interest.
Local saddles of relaxed averaged alternating reflections algorithms on phase retrieval
Pengwen Chen
Applied mathematics, National Chung Hsing University, Taiwan
Phase retrieval can be expressed as a non-convex constrained optimization problem that identifies a phase minimizer on a torus. Many iterative transform techniques have been proposed to identify the minimizer, e.g., relaxed averaged alternating reflections (RAAR) algorithms. In this paper, we present an optimization viewpoint on the RAAR algorithm.
The RAAR algorithm is an alternating direction method of multipliers (ADMM) with a penalty parameter.
Paired with multipliers (dual vectors), phase vectors in the primal space are lifted to
higher-dimensional vectors; the RAAR algorithm is then a continuation algorithm that searches for local saddles in the primal-dual space. The dual iteration approximates a gradient ascent flow, which drives the corresponding local minimizers into a positive-definite Hessian region.
By altering the penalty parameter, RAAR avoids the stagnation of these local minimizers in the primal space and thus
screens out many stationary points that are not local minimizers.
Keywords: Phase retrieval, relaxed averaged alternating reflections, alternating direction method of multipliers, Nash equilibrium,
local saddles
§ INTRODUCTION
Phase retrieval has recently attracted attention in the mathematics community (see the review [1] and references therein). The problem of phase retrieval is motivated by the inability of photo detectors to directly measure the phase of an electromagnetic wave at frequencies of THz (terahertz) and higher.
The problem of phase retrieval aims to reconstruct
an unknown object $x_0\in \IC^n$ from its magnitude measurement data $b=|A^* x_0|$, where
$A\in \IC^{n\times N} $ represents some isometric matrix and $A^*$ represents the Hermitian adjoint of $A$. Introduce the
non-convex $N$-dimensional torus and the associated normalized torus $ \cZ:=\left\{z\in \IC^N: |z|=b\right\},\; \cU:=\left\{u\in \IC^N: |u|=1\right\}. $
The whole problem is equivalent to reconstructing the missing phase information $u$ and the unknown object $x=x_0$ via solving the constrained least squares problem
\[
\min_{x\in \IC^n,\, |u|=1} \left\{ \|b\odot u-A^* x\|^2: u\in \IC^N\right\}=
\min_{z\in \cZ} \|A_\bot z\|^2,
\]
where $A_\bot\in \IC^{(N-n)\times N}$
is an isometric matrix such that $[A^*, A_\bot^*]$ is unitary,
\[
[A^*, A_\bot^*][A^*, A_\bot^*]^*=A_\bot^* A_\bot+A^* A=I.\]
Here, $b\odot u$ represents the component-wise multiplication
of the two vectors $b$ and $u$.
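The equivalence of the two formulations is easy to check numerically: for any fixed phase vector $u$, minimizing over $x$ by least squares leaves exactly the residual $\|A_\bot(b\odot u)\|$. A minimal sketch (an isometric $A^*$ is built via QR; all names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 8, 32

# Build an isometric A^* (N x n, orthonormal columns) via QR, so A^*A = Q Q^H.
Q, _ = np.linalg.qr(rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n)))

x0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = np.abs(Q @ x0)                               # magnitude data b = |A^* x0|

u = np.exp(1j * rng.uniform(0, 2 * np.pi, N))    # an arbitrary phase vector
z = b * u

# Left side: minimize ||b . u - A^* x|| over x by least squares.
x, *_ = np.linalg.lstsq(Q, z, rcond=None)
lhs = np.linalg.norm(z - Q @ x)

# Right side: ||A_bot z||, computed via z^*(I - A^*A) z since A_bot^* A_bot = I - A^*A.
rhs = np.sqrt(np.real(z.conj() @ (z - Q @ (Q.conj().T @ z))))

assert np.isclose(lhs, rhs)
```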
The isometric condition is not very restrictive in applications, since Fourier transforms are commonly applied in phase retrieval. Even for non-Fourier transforms, we can still obtain equivalent problems via a QR-factorization, see [2].
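The QR reduction mentioned above can be sketched as follows: writing a non-isometric $A^*$ as $QR$, the data $|A^* x_0|$ coincide with $|Q\tilde x_0|$ for the isometric factor $Q$ and the transformed unknown $\tilde x_0 = R x_0$ (a hypothetical illustration; names and sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 6, 24

# A non-isometric measurement matrix A^* (columns not orthonormal).
Astar = rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))
x0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Thin QR factorization: A^* = Q R with Q isometric (Q^H Q = I).
Q, R = np.linalg.qr(Astar)

# Same magnitude data, now with an isometric matrix and unknown x~ = R x0.
b_orig = np.abs(Astar @ x0)
b_new = np.abs(Q @ (R @ x0))

assert np.allclose(b_orig, b_new)
assert np.allclose(Q.conj().T @ Q, np.eye(n))
```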
Let $\cU_*$ denote the set in $\cU$ consisting of all the local minimizers of (<ref>).
A vector $z_*\in \cZ$ that minimizes (<ref>)
is called a global solution.
In the noiseless measurement case, $A_\bot z_*=0$ or $z_*=A^* x_*=b\odot u_*$ for some $u_*\in \cU$ and some $x_*\in \IC^n$. Numerically, it is a nontrivial task to obtain a global minimizer on the non-convex torus.
The error reduction method [3] is a traditional method, which can produce a local solution of poor quality for (<ref>) if no proper initialization is taken. During the last decades, researchers have proposed various spectral initialization algorithms to overcome this challenge [4, 5, 6, 7, 2, 8, 9, 10, 11, 12].
On the other hand, phase retrieval can also be tackled by another class of algorithms, including
the well-known hybrid input-output algorithm (HIO) [13, 14], the hybrid projection–reflection method [15], the Fourier Douglas-Rachford algorithm (FDR) [16], alternating direction methods [17] and relaxed averaged alternating reflections (RAAR) algorithms [18]. An important feature of these algorithms is their empirical ability to avoid local minima and converge to a global minimum for noise-free oversampled diffraction patterns. For instance,
the empirical study of FDR indicates the disappearance of the stagnation at poor local solutions
under sufficiently many random masks.
A limit point of FDR is a global solution in (<ref>) and the limit
point with appropriate spectral gap conditions reconstructs the phase retrieval solution [16].
Traditional convergence study on Douglas-Rachford splitting algorithm [19, 20] heavily relies on the convexity assumption.
Noise-free measurement is a strict requirement for HIO and FDR, which motivates the proposal of the
relaxed averaged alternating reflections algorithm [18, 21].
Let $\cA,\cB$ denote the sets $Range(A^*)$ and $\cZ$, respectively. Let $P_\cA$ and $P_\cB$ denote the projector on $\cA$ and $\cB$, respectively. Let $R_\cA, R_\cB$ denote the reflectors corresponding to $\cA,\cB$.
With a parameter $\beta\in (0,1)$ relaxing the original feasibility problem (the intersection of $\cA$ and $\cB$), the $\beta$-RAAR algorithm [18] is defined as the iterations $\left\{S^k(w): k=1,2,\ldots \right\}$ for some initialization $w\in \IC^N$,
\begin{eqnarray}
S(w)&=& \beta \cdot \frac{1}{2} (R_\cA R_\cB+I) w+(1-\beta) P_\cB w\\
&=& \frac{\beta}{2}\{(2A^*A-I) (2b\odot \frac{w}{|w|}-w)+w\}+(1-\beta )b\odot \frac{w}{|w|}
\\
&=&{\beta} w+(1-2{\beta}) b\odot \frac{w}{|w|}+{\beta} A^* A (2b\odot \frac{w}{|w|}-w).\label{RAART}
\end{eqnarray}
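One $\beta$-RAAR step from (<ref>) can be implemented directly. With noiseless data, $w_0=A^*x_0$ satisfies $|w_0|=b$ and lies in the range of $A^*$, so it is a fixed point for every $\beta$; the sketch below checks this (isometric $A^*$ built via QR; all names are illustrative):

```python
import numpy as np

def raar_step(w, b, Q, beta):
    """One RAAR step S(w) = beta*w + (1-2*beta)*b.u + beta*A^*A(2*b.u - w),
    with u = w/|w| and A^*A = Q Q^H."""
    u = w / np.abs(w)
    PB_w = b * u                      # projection of w onto the torus Z
    return beta * w + (1 - 2 * beta) * PB_w \
        + beta * (Q @ (Q.conj().T @ (2 * PB_w - w)))

rng = np.random.default_rng(2)
n, N = 8, 32
Q, _ = np.linalg.qr(rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n)))

x0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
w0 = Q @ x0                  # w0 = A^* x0 lies in Range(A^*)
b = np.abs(w0)               # noiseless magnitude data, so |w0| = b

for beta in (0.5, 0.7, 0.9):
    assert np.allclose(raar_step(w0, b, Q, beta), w0)   # w0 is a fixed point
```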
The Fourier Douglas-Rachford algorithm can be viewed as an extreme case of the
$\beta$-RAAR family with $\beta=1$.
As RAAR converges to a fixed point $w$, we can retrieve the phase information $u=w/|w|$ for (<ref>).
Any $u$ in $\cU_*$ yields a fixed point $w$.
Empirically, RAAR fixed points can produce local solutions of high quality if a large value is properly chosen for $\beta$, as reported in [17, 21].
In this work, we disclose the relation between RAAR and the local minimizers $z$ in (<ref>). As HIO can be reformulated as an alternating direction method of multipliers [17],
we identify RAAR as an ADMM with penalty parameter $1/\beta'=(1-\beta)/\beta$ applied to the constrained optimization problem in (<ref>);
see Theorem <ref>.
This perspective links $\beta$-RAAR with a small parameter $\beta$ to multiplier methods with a
large penalty ${\beta'}^{-1}$.
It is known in optimization that convergence of a multiplier method
relies on
a sufficiently large penalty (e.g., see Prop. 2.7 in [22]). From this perspective, it is not surprising that the convergence of RAAR to its fixed point also requires a large penalty parameter.
A large penalty has been employed to ensure that various ADMM iterations converge to stationary points [23, 24, 25]. For instance, the ADMM in [25] is applied to the minimization of nonconvex nonsmooth functions; global convergence to a stationary point can be established when sufficiently large penalty parameters are used.
Saddle points play a fundamental role in the theory and applications of convex optimization [26], in particular, in the convergence of ADMM, e.g., [27]. For the application to phase retrieval,
Sun et al. [28] conduct saddle analysis on a quadratic objective function of Gaussian measurements.
Their geometric analysis shows that, with high probability, the global solution is the only local minimizer when $N/n$ is sufficiently large;
most critical points are actually saddles.
We believe that saddle analysis is also a key ingredient in explaining the avoidance of undesired critical points for the Lagrangian of RAAR.
Recently, researchers have been cognizant of the importance of saddle structure in non-convex optimization research.
Analysis of critical points in non-concave-convex problems leads to many interesting results in various applications. For instance, Lee et al. used a dynamical system approach to show that
many gradient-descent algorithms almost surely converge to local minimizers with random initialization, even though
they can get stuck at critical points theoretically [29, 30, 31].
The terminology “local saddle" is a crucial concept in understanding the min-max algorithm employed in modern machine learning research, e.g., gradient descent-ascent algorithms in
generative adversarial networks (GANs)[32] and multi-agent reinforcement learning[33]. With proper Hessian adjustment,
[34] and [35] proposed
novel saddle algorithms to escape undesired critical points and to reach local saddles of min-max problems almost surely with random initialization.
Jin et al. [36] proposed a non-symmetric definition of local saddles to address a basic question: “what is a proper definition of local optima for the local saddle?” Later, Dai and Zhang gave a saddle analysis of constrained minimization problems [37].
Our study starts with
a characterization of all the fixed points of RAAR algorithms in Theorem <ref>. These fixed points are critical points of (<ref>). By varying $\beta$, some of the fixed points become “local saddles" of a concave-nonconvex function $F$,
\[
\max_\lambda \min_z \left\{F(z,\lambda; \beta):=\frac{\beta}{2} \|A_\bot(z-\lambda)\|^2-\frac{1}{2} \|\lambda\|^2,\; z\in\cZ,\; \lambda\in \IC^N\right\}.
\]
To characterize RAAR iterates,
we investigate saddles of (<ref>) lying in a high-dimensional primal-dual space. Our study aims to answer a few intuitive questions: whether these lower-dimensional critical points on $\cZ$ in the primal space can be lifted to local saddles of (<ref>) in the primal-dual space under some spectral gap condition, and how the ADMM iterates avoid or converge to these local saddles under a proper penalty parameter.
This line of thought motivates the current study on local saddles of (<ref>).
Unfortunately, the definition of local saddles in [36] cannot be employed to analyze the RAAR convergence, since the objective function in phase retrieval is phase invariant; see Remark <ref>.
The main goal of the present work is to establish
an optimization view that illustrates the convergence of RAAR, and to
show, by analysis and numerics, that
under the framework for phase retrieval with coded diffraction patterns in [38], RAAR has a basin of attraction at a local saddle $(z_*,\lambda_*)$.
For noiseless measurements, $z_*=A^* x_0$ is a strict local minimizer of (<ref>).
In practice, the numerical stagnation of RAAR on noiseless measurements disappears under sufficiently large $\beta$ values.
Specifically, Theorem <ref> shows that
RAAR is actually an ADMM solving the constrained problem in (<ref>). Based on this identification, Theorem <ref> shows that each limit of RAAR iterates can be viewed as a “local saddle" of $\max\min F$ in (<ref>).
The rest of the paper is organized as follows. In section <ref>, we examine the fixed point condition of the RAAR algorithm. By identifying RAAR as an ADMM, we disclose the concave-non-convex function governing the dynamics of RAAR, which provides a continuation viewpoint on the RAAR iteration.
In section <ref>, we present a proper definition of local saddles and show
the existence of local saddles for oversampled coded diffraction patterns. In section <ref>, we show the convergence of RAAR to a local saddle under a sufficiently large parameter. Lastly, we provide experiments to illustrate the behavior of RAAR: (i) comparison experiments between RAAR and the Douglas-Rachford splitting proposed in [39]; (ii) applications of RAAR to coded diffraction patterns.
§ RAAR ALGORITHMS
§.§ Critical points
The following gives the first-order optimality conditions for the problem in (<ref>). This is a special case of Prop. <ref> with $\lambda=0$; we skip
the proof.
Let $z_0=b\odot u_0$ be a local minimizer of the problem in (<ref>).
Let $ K_{z_0}^\bot:=\Re(\diag(\bar u_0)A_\bot^* A_\bot \diag(u_0))$.
Then the first-order optimality condition is
\[
q_0:=z_0^{-1}\odot(A_\bot^* A_\bot z_0)\in \IR^N,
\]
and the second-order necessary condition is that
for all $\xi\in \IR^N$,
\[
\xi^\top\left(K_{z_0}^\bot-\diag(q_0)\right)\xi\ge 0.
\]
Once a local solution $z$ is obtained, the unknown object of phase retrieval in (<ref>) can be estimated by $x=Az$.
On the other hand,
using $I=A^*A+A_\bot^* A_\bot$, we can express the first-order condition as
\[
u^{-1}\odot(A^*A(b\odot u))=b\odot(1-q_0)\in \IR^N.
\]
Using the canonical vectors $\xi=e_i$ of $\IR^N$ in (<ref>), we obtain a componentwise lower bound on $1-q_0$:
$b^{-1}\odot (1-q_0)\ge \|A e_i\|^2\ge 0$. In general, there exist many local minimizers on $\cZ$ satisfying (<ref>) and (<ref>).
§.§ Fixed point conditions of RAAR
We begin with fixed point conditions of $\beta$-RAAR iterations in (<ref>).
For each $\beta\in (0,1)$, introduce an auxiliary parameter $\beta'\in (0,\infty)$ defined by
\[
\beta=\frac{\beta'}{1+\beta'}, \quad \text{i.e., } \beta'=\frac{\beta}{1-\beta}.
\]
We shall show the reduction in the cardinality of fixed points with a small penalty parameter.
Consider the application of the $\beta$-RAAR algorithm on the constrained problem in (<ref>).
Write $w\in \IC^N$ in polar form $w=u\odot |w|$. For each $\beta\in (0,1)$, let ${\beta'}={\beta}/(1-{\beta})$ and
\[
c:= \left(1-\frac{1-\beta}{\beta}\right) b+\frac{1-\beta}{\beta}\, |w|\in \IR^N.
\]
Then $w$ is a fixed point of $\beta$-RAAR, if and only if
$w$ satisfies the phase condition
and the magnitude condition,
\[
|w|=\beta' c+ (1-\beta')b\ge 0, \quad \text{i.e., } c\ge(1-{\beta'}^{-1}) b.
\]
In particular, for
${\beta} \in [1/2, 1)$, we have $c\ge 0$ from (<ref>). Observe that the inequality in (<ref>) ensures that the magnitude vector $|w|$ is well defined. Hence, the fixed points are critical points of (<ref>).
Rearranging (<ref>), we obtain the fixed point condition of RAAR,
\[
\left((1-\beta)|w|-(1-2\beta)b\right)\odot\frac{w}{|w|}=\beta A^*A \left\{ (2b-|w|)\odot\frac{w}{|w|}\right\}.
\]
Equivalently, taking the projections $A^*A$ and $I-A^*A$ on (<ref>) yield
\begin{eqnarray}&& A^* A \left\{ (b-|w|) \odot \frac{w}{|w|}\right\}=0, \label{f1}\\
&& (I-A^*A)\left\{ b\odot \frac{w}{|w|}\right\}= {\beta'}^{-1}\left\{(b-|w |)\odot \frac{w}{|w|}\right\}.\label{f2}
\end{eqnarray}
For the only-if part, let $w$ be a fixed point of RAAR with (<ref>,<ref>).
With the definition of $c$, (<ref>) gives
\[
A^*A ((c-b)\odot u)={\beta'}^{-1}A^*A((|w|-b)\odot u)=0,
\]
and (<ref>) gives
\[
A_\bot^* A_\bot\left(c\odot\frac{w}{|w|}\right)=A_\bot^* A_\bot\left(\left\{ b-{\beta'}^{-1} (b-|w|)\right\}\odot\frac{w}{|w|}\right)=0,
\]
which implies that $ c\odot u$ lies
in the range of $A^*$. Together with (<ref>), we have (<ref>). Also,
(<ref>) is the result of the non-negativeness of $|w|$ in (<ref>).
To verify the if-part, we need to show that $w$, constructed from a phase vector $u\in \IC^N$
satisfying (<ref>)
and a magnitude vector $|w|$ satisfying (<ref>),
meets (<ref>, <ref>).
From (<ref>)
and (<ref>), we have (<ref>), i.e.,
\[
A^*A((b-|w|)\odot u)=A^*A\left\{ \beta'(b-c)\odot u\right\}=\beta'\left\{c\odot u-c\odot u\right\}=0.
\]
With the aid of (<ref>, <ref>), the fixed point condition in (<ref>) is ensured by the computation
\[
(I-A^*A)\left\{(b-{\beta'}^{-1}(b-|w|))\odot u\right\}=(I-A^*A)\left\{c\odot u\right\}=0.
\]
The condition in (<ref>) is identical to the first-order optimality condition in (<ref>). Hence, the fixed points
must be critical points of (<ref>).
Theorem <ref> indicates that each fixed point $w$
can be re-parameterized by $(u,\beta)$ satisfying (<ref>) and (<ref>). The condition in (<ref>) always holds
for ${\beta'}$ sufficiently small.
The first order optimality in (<ref>) yields that the phase condition in (<ref>) is actually the critical point condition of $u\in \cU_*$ in (<ref>). Fix one critical point $u\in \cU_*$ and let $c$ be the corresponding vector given from (<ref>).
From Theorem <ref>,
$w$ given from the polar form $w=u\odot |w|$ with (<ref>) is a fixed point
of $\beta$-RAAR, if $\beta$ satisfies the condition in (<ref>). To further examine (<ref>), we parameterize the fixed point $w$ by $(u,\beta)$. Let
$b^{-1}\odot K^\bot b$ denote the threshold vector, where
\[ K:=\Re(\diag(\bar u)A^*A \diag(u)), \; K^\bot:=I-K.
\]
The fixed point condition in (<ref>) indicates that
$(u,\beta_1)$ gives
a fixed point of $\beta_1$-RAAR for any ${\beta}_1\in (0,{\beta})$. That is,
the corresponding parameter $(\beta')^{-1}$ must exceed the threshold vector $b^{-1}\odot K^\bot b$ componentwise.
Since $\beta'= \beta/(1-\beta)$ can be viewed as a penalty parameter in the associated Lagrangian in (<ref>),
we call (<ref>) the penalty-threshold condition of RAAR fixed points. In general,
the cardinality of RAAR fixed points
decreases under
a large parameter $\beta$. See Fig. <ref>.
For ${\beta}=1$, RAAR reduces to FDR, whose fixed point $w$ satisfies $\|A_\bot (b\odot w/|w|)\|=0$ and thus $\|A(b\odot w/|w|)\|=\|b\|$. When phase retrieval has the uniqueness property, $A(b\odot w/|w|)$ gives the reconstruction. On the other hand, for $\beta=1/2$,
(<ref>) gives
\[
S(w)=A^*A \left(b\odot\frac{w}{|w|}\right)+\frac{1}{2}(I-A^*A) w.
\]
Suppose a RAAR initialization is chosen from the range of $A^* $. Then the second term in (<ref>) always vanishes, and RAAR iterations reduce to the alternating projection (AP) iterations in [2]. From this perspective,
one can regard $\beta$-RAAR as a family of algorithms interpolating between AP and FDR as $\beta$ varies from $1/2$ to $1$.
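The reduction at $\beta=1/2$ can be verified numerically: for $w$ in the range of $A^*$, the term $\frac{1}{2}(I-A^*A)w$ vanishes and one RAAR step equals one alternating-projection step $A^*A(b\odot w/|w|)$. A sketch (isometric $A^*$ built via QR; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, N = 8, 32
Q, _ = np.linalg.qr(rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n)))
P = Q @ Q.conj().T                       # A^* A, the projector onto Range(A^*)

b = np.abs(Q @ (rng.standard_normal(n) + 1j * rng.standard_normal(n)))

def raar_step(w, beta):
    # S(w) = beta*w + (1-2*beta)*b.u + beta*A^*A(2*b.u - w), u = w/|w|
    u = w / np.abs(w)
    return beta * w + (1 - 2 * beta) * b * u + beta * (P @ (2 * b * u - w))

# For any w in Range(A^*), the 0.5-RAAR step equals the AP step.
w = P @ (rng.standard_normal(N) + 1j * rng.standard_normal(N))
ap = P @ (b * w / np.abs(w))             # alternating projection: A^*A(b . w/|w|)
assert np.allclose(raar_step(w, 0.5), ap)
```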
Illustration of the penalty-threshold condition of RAAR fixed points associated with critical points $u_1,u_2, u_3, u_*\in \cU_*$ of (<ref>). The set of fixed points of $\beta$-RAAR is a collection of line segments parameterized by $(u,\beta)\in \cU_*\times (0,1)$. As $\beta$ gets larger, the cardinality of intersections (i.e., the fixed points) decreases. Critical points associated with $\beta=0.5$ are $u_1, u_3$ and $ u_*$. The global minimizer $u_*$ is the only associated critical point, if $\beta\in (0.9,1)$ is used.
§.§ Alternating direction method of multipliers
Next, we present a relation between RAAR and the alternating direction method of multipliers (ADMM).
The alternating direction method of multipliers was originally introduced in the 1970s [40, 41] and can be regarded as an approximation of the augmented Lagrangian method, whose primal update step is replaced by one iteration of the alternating minimization.
Although ADMM is classified as a first-order method, in practice ADMM can produce a solution of modest accuracy within a reasonable amount of time. Due to its simplicity, nowadays this approach is popular in many applications, in particular, applications of nonsmooth optimization. See [27, 42, 43] and the references therein.
Use the standard inner product
\[
\langle x, y\rangle:=\Re(x^* y),\; \forall x,y\in\IC^N.
\]
To solve the problem in (<ref>), introduce an auxiliary variable $y\in \IC^N$ with the constraint $y=z$ and an associated multiplier $\lambda\in \IC^N$,
and form
the Lagrangian function
with some parameter $\beta'>0$ in (<ref>),
\[
\frac{\beta'}{2} \|A_\bot y\|^2+\left< \lambda, y-z\right>+\frac{1}{2}\|y-z\|^2, \quad y\in \IC^N,\; z\in \cZ.
\]
Equivalently, we have the augmented Lagrangian function,
\[
L(y,z, \lambda):= \frac{1}{2} \|A_\bot y\|^2+
{\beta'}^{-1}\left< \lambda, y-z\right>+\frac{1}{2\beta'}\|y-z\|^2,
\]
when we view
$1/\beta'$ as
a penalty parameter.
To solve for $(y,z,\lambda)$, ADMM
starts with some initialization $z_1\in \cZ$ and $ \lambda_1\in \IC^N$ and generates the sequence $\left\{(y_k, z_k, \lambda_k): k=1,2,3,\ldots\right\}$ with stepsize $s>0$, according to the rules
\begin{eqnarray*}
y_{k+1}=arg\min_y L(y,z_k, \lambda_k),\label{y1}\\
z_{k+1}=arg\min_{|z|=b} L(y_{k+1},z, \lambda_k)\label{z1},
\\
\lambda_{k+1}=\lambda_k+s\nabla_\lambda L(y_{k+1}, z_{k+1}, \lambda)\label{eq9}
\end{eqnarray*}
Introduce the projection operator onto $\cZ$, $[w]_{\cZ}:=w/|w|\odot b$ for $w\in \IC^N$. Algebraic computation yields
\begin{eqnarray}
y_{k+1}&=&(I+{\beta'} A_\bot ^*A_\bot )^{-1} (z_k-\lambda_k)=(I-{\beta} A_\bot ^* A_\bot )(z_k-\lambda_k), \label{y}
\\
z_{k+1}&=&[y_{k+1}+\lambda_k]_\cZ,
\\
\lambda_{k+1}&=&\lambda_k+s(y_{k+1}-z_{k+1}).
\end{eqnarray}
From the $y$-update in (<ref>), a reconstruction $x$ of the unknown object in (<ref>) can be computed by
\[
x=Ay=A(I-\beta A_\bot^* A_\bot)(z-\lambda)=A(z-\lambda).
\]
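Both identities above, the closed form $(I+\beta' A_\bot^*A_\bot)^{-1}=I-\beta A_\bot^*A_\bot$ of the $y$-update and the simplification $x=Ay=A(z-\lambda)$, can be checked numerically (isometric $A^*$ built via QR; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n, N = 8, 32
Q, _ = np.linalg.qr(rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n)))
Pperp = np.eye(N) - Q @ Q.conj().T     # A_bot^* A_bot = I - A^* A

beta = 0.8
beta_p = beta / (1 - beta)             # beta' = beta/(1-beta)

z = rng.standard_normal(N) + 1j * rng.standard_normal(N)
lam = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# y-update: (I + beta' A_bot^*A_bot)^{-1}(z - lam) = (I - beta A_bot^*A_bot)(z - lam)
y_inv = np.linalg.solve(np.eye(N) + beta_p * Pperp, z - lam)
y_direct = (np.eye(N) - beta * Pperp) @ (z - lam)
assert np.allclose(y_inv, y_direct)

# Reconstruction: A y = A(z - lam), since A A_bot^* = 0 (here A = Q^H).
assert np.allclose(Q.conj().T @ y_direct, Q.conj().T @ (z - lam))
```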
Theorem <ref> indicates that RAAR is actually an ADMM with proper initialization applied to the problem
in (<ref>). In general, the step size $s$ of the dual vector $\lambda$ should be chosen properly to ensure convergence. The following shows the relation between RAAR and the ADMM with $s=1$; hence, we shall focus on $s=1$ in this paper.
Consider a $\beta$-RAAR sequence $\{{ w_0,w_1,}\ldots \}$ with nonzero initialization ${ w_0}\in \IC^N$.
Let
\[
\lambda_1=A_\bot^* A_\bot w_0, \; z_1=[w_0]_\cZ.
\]
Generate an ADMM sequence $\left\{(y_{k+1}, z_{k+1}, \lambda_{k+1}): k={ 1,2,}\ldots\right\}$ with dual step size $s=1$, according to (<ref>, <ref>, <ref>) with the initialization $
\lambda_1, z_1$.
Construct a sequence $\left\{w'_k: k=1,2,\ldots\right\}$ from $(y_k, z_k, \lambda_k)$,
\[
w'_k:=y_{k+1}+\lambda_k=(I-\beta A_\bot^* A_\bot) z_k+\beta A_\bot^* A_\bot \lambda_k.
\]
Then the sequence $\left\{w'_k: k=1,2,\ldots\right\}$ is exactly the $\beta$-RAAR sequence, i.e., $w'_k=w_k$ for all $k\ge 1$.
Use induction.
For $k=1$, we have
\[
w_1'=(I-\beta A_\bot^* A_\bot) z_1+\beta A_\bot^* A_\bot\lambda_1=
(I-\beta A_\bot^* A_\bot) [w_0]_\cZ+\beta A_\bot^* A_\bot w_0=w_1.
\]
Suppose $w_k'=w_k$ for $k\ge 1$.
From (<ref>) and (<ref>), we have
\[
A_\bot^* A_\bot w'_k=A_\bot^* A_\bot((1-\beta) z_k+\beta\lambda_k),
\]
and $ z_{k+1}=[w_k']_\cZ$.
From (<ref>) and (<ref>),
\[
A_\bot^* A_\bot \lambda_{k+1}=A_\bot^* A_\bot\left\{ \beta\lambda_k +(1-\beta) z_k\right\}
-A_\bot^* A_\bot z_{k+1}.
\]
Together with (<ref>), (<ref>) and (<ref>), we complete the proof by the calculation,
\begin{eqnarray}
(I-\beta A_\bot^* A_\bot) z_{k+1}+\beta A_\bot^* A_\bot\lambda_{k+1}
\\
&=&(I-\beta A_\bot^* A_\bot ) z_{k+1}+
\beta A_\bot^* A_\bot \left\{ \beta \lambda_k +(1-\beta) z_k\right\}
-\beta A_\bot^* A_\bot z_{k+1}\\
&=&(I-2\beta A_\bot^* A_\bot ) [w'_{k}]_\cZ +
\beta
A^*_\bot A_\bot w'_k\\
&=&(I-2\beta A_\bot^* A_\bot ) [w_{k}]_\cZ +
\beta
A^*_\bot A_\bot w_k=w_{k+1}.\label{eq12}
\end{eqnarray}
Theorem <ref> provides a max-min viewpoint for exploring the dynamics of RAAR, which motivates the study in the next section. Indeed,
after eliminating $y$ in $L$ in (<ref>) via (<ref>), we end up with a max-min problem for a concave-non-convex function $F$,
\[
F(z,\lambda; \beta):=\frac{\beta}{2} \|A_\bot(z-\lambda)\|^2-\frac{1}{2} \|\lambda\|^2, \quad z\in\cZ,\; \lambda\in \IC^N.
\]
We can convert the convergence of RAAR to its fixed points into the convergence of a primal-dual algorithm to saddles of the function $F$ in (<ref>).
For notational simplicity, we shall omit $\beta$ in the function $F(z,\lambda; \beta)$, i.e., write $F(z, \lambda)$, if no confusion occurs in the context.
§ DEFINITION OF LOCAL SADDLES FOR RAAR
When the objective function $F$ of a max-min problem is not concave-convex, saddle points do not exist in general.
For a smooth function $F$,
a point $(\lambda,z)$ is said to be
a local max-min point in [44, 35], if $z=z_*$ is a local minimizer and $\lambda=\lambda_*$ is a local maximizer, i.e.,
\[
F(z_*,\lambda)\le F(z_*,\lambda_*)\le F(z,\lambda_*)
\]
for $(z,\lambda)$ near $(z_*,\lambda_*)$.
The standard analysis can give the first-order and second-order characterizations. The existence of a saddle $(z_*, \lambda_*)$ in fact ensures the minimax equality. Indeed, since
$ \min_z F(z,\lambda)\le \min_z\max_\lambda F(z,\lambda)
$, then
$ \max_\lambda \min_z F(z,\lambda)\le \min_z\max_\lambda F(z,\lambda)
$. Together with
\[
\min_z \max_\lambda F(z,\lambda)
\le \max_\lambda F(z_*, \lambda)\le F(z_*,\lambda_*)\le \min_z F(z,\lambda_*)\le \max_\lambda \min_z F(z,\lambda),
\]
we have that $ \min_z \max_\lambda F(z,\lambda)
= \max_\lambda \min_z F(z,\lambda)$ holds for $(z,\lambda)$ near $(z_*,\lambda_*)$.
In this paper, we shall adopt the idea of “local max-min" proposed in [36]
to emphasize the non-symmetric roles of $z$ and $\lambda$: $(\lambda,z)$ is said to be a local max-min point if, for any $(\lambda, z)$ near $(\lambda_*, z_*)$ within a distance $\delta>0$,
\[
\max_{z'} \left\{ F(z',\lambda): \|z'-z_*\|\le h(\delta)\right\} \le F(z_*,\lambda_*)\le F(z,\lambda_*)
\]
for some continuous function $h: \IR\to \IR$ with $h(0)=0$. That is, the minimizer $z$ is driven by the dual vector $\lambda$ maximizing the objective function $F$.
Since $F$ in (<ref>) is strictly concave in $\lambda$ for $\beta\in (0,1)$,
according to Prop. 18, 19 and 20 of [36], the first-order condition is $\nabla_z F(z_*,\lambda_*)=0$ and $\nabla_\lambda F(z_*,\lambda_*)=0$, and
the second-order necessary/sufficient condition can be reduced to $\nabla_{zz} F(z_*,\lambda_*)\succeq 0$ and
$\nabla_{zz} F(z_*,\lambda_*)\succ 0$, respectively. In short, thanks to the $\lambda$-concavity of $F$, we end up with a characterization identical to that of the local max-min points in [44, 35]. For simplicity, we shall also call these local max-min points local saddles.
§.§ Local saddles
From Theorem <ref>, the RAAR convergence of $w_k$ to $w_*$ can be regarded as the convergence of $(\lambda_k, z_k)$ to a local saddle of $F$ in (<ref>). For each $\lambda_k$, $z_k$ is an approximation of the corresponding minimizer of $F(z,\lambda_k)$ in $z$.
From (<ref>), we have the following optimality conditions for $F$ in $\lambda$ and $z$, respectively.
Since $0<\beta<1$, the strict concavity of $F$ in $\lambda$ ensures the uniqueness of the maximizer $\lambda$ for any vector $z\in \cZ$. We omit the proof of Prop. <ref>.
Fix $z\in \cZ$.
The maximizer $\lambda$ of $ F(z,\lambda)$ in (<ref>) satisfies
\[
\lambda=-\beta' A_\bot^* A_\bot z.
\]
Hence, $\lambda_*=-\beta' A_\bot^* A_\bot z_*$ holds for a saddle point $(z_*,\lambda_*)$.
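Prop. <ref> is straightforward to check numerically: since $F(z,\cdot)$ is strictly concave, $\lambda_*=-\beta' A_\bot^*A_\bot z$ is the global maximizer in $\lambda$, so no perturbation increases $F$. A sketch (isometric $A^*$ built via QR; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n, N = 8, 32
Q, _ = np.linalg.qr(rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n)))
Pperp = np.eye(N) - Q @ Q.conj().T     # A_bot^* A_bot

beta = 0.7
beta_p = beta / (1 - beta)             # beta' = beta/(1-beta)

b = np.abs(Q @ (rng.standard_normal(n) + 1j * rng.standard_normal(n)))
z = b * np.exp(1j * rng.uniform(0, 2 * np.pi, N))   # a point on the torus Z

def F(z, lam):
    # F(z, lam; beta) = (beta/2)||A_bot(z - lam)||^2 - (1/2)||lam||^2
    v = z - lam
    return 0.5 * beta * np.real(v.conj() @ (Pperp @ v)) - 0.5 * np.linalg.norm(lam) ** 2

lam_star = -beta_p * (Pperp @ z)       # claimed maximizer in lambda
for _ in range(20):
    d = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    assert F(z, lam_star) >= F(z, lam_star + 0.1 * d) - 1e-10
```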
Let $q(z, \lambda)\in \IC^N$ be a vector-valued function of $z$ and $ \lambda$,
\[
q(z,\lambda):=z^{-1}\odot A_\bot^* A_\bot( z- \lambda).
\]
Fix a $\lambda$ in (<ref>),
and consider the $z$-minimization
\[
\min_{z\in\cZ} \left\{F(z, \lambda):=\frac{\beta}{2}\|A_\bot(z- \lambda)\|^2-\frac{1}{2} \|\lambda\|^2\right\}.
\]
When $z\in\cZ$ is
a local minimizer of $F(z, \lambda)$, then
\[
q(z, \lambda)\in \IR^N, \quad
\xi^\top\left(K_z^\bot-\diag(q)\right)\xi\ge 0, \;\forall \xi\in \IR^N.
\]
Let $u=z/|z|$ be the phase vector of a local minimizer $z$.
Consider the perturbation $z\to z\odot \exp(i\theta)$ in $\cZ$ applied on $F(z, \lambda)$ with $\theta\in \IR^N$, $\|\theta\|$ near $0$.
Variation of $F$ can be expressed as a function of the tangent vector $\xi:=b\odot \theta$,
H(\xi; z)=H(b\odot \theta; z):=\|A_\bot (z \odot \exp(i\theta) - \lambda)\|^2.
With $z=b\odot u$, the Taylor expansion
\begin{eqnarray*}
H(\xi; z )&=&H(0; z)+\{2\left< i (b\odot u)\odot \theta,A_\bot^* A_\bot (b\odot u- \lambda) \right>-\left< (b\odot u) \odot \theta^2,A_\bot^* A_\bot (b\odot u- \lambda) \right> \\
&&+\|A_\bot (b\odot u\odot \theta)\|^2\}+o(\|\theta\|^2)
\\
&=&H(0; z)+2\left< i \xi,q\odot b\right> +\left\{-\left< \xi ^2, q \right>+\xi^\top K^\bot \xi\right\}+o(\|\xi\|^2)
\end{eqnarray*}
implies that
the first-order condition for a local minimizer $z$ is
\[
q\in \IR^N,
\]
and the second-order condition is the positive semi-definite condition
\[
\left< \xi, \left( K^\bot-\diag(q)\right)\xi\right> \ge 0.
\]
Unfortunately, the above optimality conditions for $F$ in (<ref>) yield nonexistence of local saddles under the definition in (<ref>)!
Consider a noisy case, i.e.,
$\|A_\bot z\|>0$ for all $z\in\cZ$. No Nash equilibrium of $F$ in (<ref>) can exist,
since (<ref>) and (<ref>) cannot hold simultaneously at any stationary point of $F$.
Indeed, suppose that $(z_*,\lambda_*)$ is
a saddle point with $\|A_\bot z_*\|>0$. The $\lambda$-optimality in (<ref>)
gives $\left< z_*, \lambda_*\right>=-\beta' \|A_\bot z_*\|^2<0$. On the other hand, as $\xi=b$, (<ref>) gives
\[
\xi^\top\left(K^\bot-\diag(\Re(q))\right)\xi=\Re\left(z_*^* A_\bot^* A_\bot \lambda_*\right)\ge 0.
\]
This inconsistency indicates that
as $\lambda$ tends to $\lambda_*$, the corresponding local minimizer $z$ does not approach $z_*$ continuously.
In the next subsection, we shall give a proper definition of the local Nash equilibrium applied to phase retrieval.
Remark <ref> indicates that this difficulty always exists in nonconvex-concave optimization with phase invariance.
Consider the problem
\[
\min_{ \lambda}\max_{z\in \cZ} \left\{ -F(z, \lambda)\right\},
\]
where $F(z,\lambda)=F(\alpha z,\alpha\lambda)$ holds for any complex unit $\alpha$.
Suppose that $(z, \lambda)$ is a local Nash equilibrium. Then
\[
F(z,\alpha\lambda)\le F(z,\lambda)\le F(\alpha z,\lambda)
\]
for any complex unit $\alpha$ near $1$, since $\lambda$ is a local maximizer and $z$ is a local minimizer.
Then phase-invariance of $F$ implies
\[
F(\alpha z, \lambda)=F( z,\bar\alpha\lambda)\le F(z,\lambda).
\]
A contradiction to (<ref>) always occurs if the above inequality in (<ref>) is strict.
§.§ Cross sections
To alleviate the difficulty in Remark <ref>, we shall restrict the neighbourhood $\cU(z_*)$ by slicing the projected torus $A_\bot \cZ$ into cross sections, such that (<ref>) can hold locally at each critical point.
Fix one $z_0\in \cZ$
and introduce
the set
\[
\cZ'(z_0):=\left\{z_0\odot \exp(i\theta): \left< \theta, b^2 \right>=0,\ \theta\in \IR^N\right\}
\]
and the optimization problem
\[
\min_{z\in\cZ'(z_0)} \left\{ \|A_\bot z\|^2\right\}.
\]
Note that one partition $\cZ=\cup_\alpha \left\{\cZ'(z_0\alpha): |\alpha|=1 \right\}$
indicates that
for each $z\in \cZ$, $z\in \cZ'(\alpha z_0)$ holds for some complex unit $\alpha=\exp(i\rho)$, $\rho\in \IR$.
Let $1_N\in\IR^N$ be the vector whose entries are all $1$.
Since $z, z_0$ both lie in $\cZ$, then
\[
z\odot z_0^{-1}=\exp( i \delta)=\exp( i (\theta+\rho 1_N))
\]
for some $\delta, \theta\in \IR^N$
and $\rho:=\|b\|^{-2}\left< \delta, b^2\right>$ with $0=\left< \theta, b^2\right>$.
To proceed, we need a few notations. At each $z\in \cZ$, introduce a matrix $K^\bot_{z}$ and a tangent plane $\left\{iu\odot \xi: \xi\in \Xi,\; u=z/|z|
\right\}$ with
\[K^\bot_{z}:=\Re\left(\diag\left(\overline{\frac{z}{|z|}}\right) A_\bot^* A_\bot \diag\left(\frac{z}{|z|}\right)\right),
\Xi:=\left\{ \xi: \xi\in \IR^N, \left<\xi, b\right>=0\right\}.\]
We shall drop the dependence on $z$ to simplify the notation, if no confusion occurs.
Since the feasible set in $z$ is different from the setting in Prop. <ref>, we must investigate again
the local $z$-optimality in $F(z,\lambda)$.
Fix one $\lambda\in \IC^N$.
Consider the minimization problem
\[
\min_{z\in\cZ'(z_0)} \left\{F(z, \lambda):=\frac{\beta}{2}\|A_\bot( z- \lambda)\|^2-\frac{1}{2} \|\lambda\|^2 \right\}.
\]
Suppose $z=b\odot u$ is a local minimizer in (<ref>). Then $\Im(q(z, \lambda))=\rho 1_N$, $\rho\in \IR$.
The second-order necessary condition is that for all $\xi\in \Xi$, we have
\[
\left< \xi, \left(K_z^\bot -\diag(q(z, \lambda))\right)\xi\right>\ge 0.
\]
In addition, a second-order sufficient condition is that for all nonzero $\xi\in \Xi$,
\[
\|\xi\|^{-2}\left< \xi, \left(K_z^\bot-\diag(q(z, \lambda))\right)\xi\right>> 0.
\]
A local minimizer $z$ with (<ref>) is called a strictly local minimizer.
Consider a perturbation $z\to z\odot \exp(i\theta)$ with $\theta\in \IR^N$. Use arguments similar to the proof of Prop. <ref>. Since the objective function in (<ref>) is continuously differentiable, with $\xi:=b\odot \theta$,
we have
\[
\Re\left< i u\odot \xi,\ A_\bot^* A_\bot(z- \lambda)\right>=0
\]
for all $\xi$ with
\[
\left< \xi, b\right>=\left< \theta, b^2\right>=0.
\]
Then (<ref>) gives for some multiplier $\rho\in\IR$,
\[
\Im\left(\bar u\odot A_\bot^* A_\bot(z- \lambda)\right)=\rho b, \quad\textrm{i.e.,}\quad \Im(q)=\rho 1_N\in\IR^N.
\]
Note that $\xi\in\Xi$ and thus we have the second-order conditions.
Let $z_0=b\odot u_0$ be a local minimizer of the problem in (<ref>).
Then the first-order condition is
$q_0:=z_0^{-1}\odot(A_\bot^* A_\bot z_0)\in\IR^N$, and the second-order necessary condition is that
for all $\xi\in \Xi:=\{\xi\in \IR^N: \langle b,\xi\rangle=0\}$,
\[
\xi^\top\left(K_{z_0}^\bot-\diag(q_0)\right)\xi\ge 0.
\]
Hence, $z_0$ is a strictly local minimizer on $\cZ$, if
\[
\|\xi\|^{-2}\,\xi^\top\left(K_{z_0}^\bot-\diag(q_0)\right)\xi> 0, \quad \xi\in\Xi,\ \|\xi\|>0,
\]
and (<ref>) hold.
This is the special case of Prop. <ref> with $\lambda=0$. Note that $q_0=q(z_0,0)$, and $q_0\in \IR^N$ follows from (<ref>) together with
\[
\rho\,\|b\|^2=\left\langle b^2, \rho 1_N\right\rangle=\left\langle b^2,\Im( q(z_0,0))\right\rangle=\Im\left(\|A_\bot z_0\|^2\right)=0.
\]
Readers should notice the
different tangent spaces used in Prop. <ref> and Prop. <ref>.
Finally, we show that $(z_*, \lambda_*)$ can be a local saddle for sufficiently small $\beta$, if $z_*$ is a strictly local minimizer. For the application to Fourier phase retrieval, Theorem <ref> shows the existence of a strictly local minimizer of (<ref>) based on the spectral gap condition of coded diffraction patterns.
Let $z_*$ be a strictly local minimizer in (<ref>).
Let $\lambda_*=-\beta' A_\bot^* A_\bot z_*$. Then we can find
$\beta$ satisfying
\[
\|\xi\|^{-2}\left< \xi, \left(K_{z_*}^\bot-\diag\left(b^{-1}\odot K_{z_*}^\bot b\right)\right)\xi\right>\ >\ \beta\,\|\xi\|^{-2}\left<\xi, K_{z_*}^\bot \xi\right>\ \ge 0
\]
for any nonzero $\xi\in\Xi$,
such that
$( z_*,\lambda_*)$ is a local saddle of
\[
\max_\lambda \min_{z\in\cZ'(z_0)} \left\{F(z, \lambda):=\frac{\beta}{2}\|A_\bot( z- \lambda)\|^2-\frac{1}{2} \|\lambda\|^2 \right\}.
\]
Since $z_*$ is a strictly local minimizer,
Cor. <ref> gives $q_0:=z_*^{-1}\odot (A_\bot^* A_\bot z_*)\in \IR^N$ and
$\|\xi\|^{-2}\left< \xi, (K^\bot-\diag(q_0)) \xi\right>>0$.
With $(1+\beta')=(1-\beta)^{-1}$ and
\[ q(z_*, \lambda_*)=z_*^{-1}\odot A_\bot^* A_\bot (z_*- \lambda_*)
=(1+\beta') z_*^{-1}\odot (A_\bot^* A_\bot z_*)=(1+\beta') q_0\in \IR^N,\]
we have the second-order sufficient condition (<ref>) for $\beta$ satisfying (<ref>),
\[
(1-\beta) \left< \xi, \left( K^\bot-\diag(q(z_*, \lambda_*))\right)\xi\right>=\left< \xi, \left( K^\bot-\diag(q_0)\right)\xi\right>-\beta\left<\xi, K^\bot \xi\right>.
\]
Next, we show that with probability one, the global solution $z=A^*x_0$ is a strictly local minimizer of (<ref>) in the following Fourier phase retrieval.
The main tool is
the following spectral gap theorem in [16].
Let $\Phi$ be the oversampled discrete Fourier transform.
Let $x_0$ be a given rank $\ge 2$ object and let at least one of $\mu_j$, $j=1,\ldots, l\ge 2$, be continuously and independently distributed on the unit circle.
Let $A$ in (<ref>) be isometric with a proper choice of $c_0$, and $B:= A\diag(u_0)$, $u_0=|A^* x_0|^{-1}\odot (A^* x_0)$. Then with probability one,
\[
\lambda_2=\max\left\{ \|\Im(B^* v)\| : v\in\IC^n,\ v\perp i x_0,\ \|v\|=1\right\}<1.
\]
Let $\Phi$ be the oversampled discrete Fourier transform.
Let $x_0$ be a given rank $\ge 2$ object and at least one of $\mu_j$, $j=1,\ldots, l\ge 2$, be continuously and independently distributed on the unit circle.
Let $A$ in (<ref>) be isometric with a proper choice of $c_0$. Then with probability one, $z=A^* x_0$ is a strictly local minimizer of (<ref>), and
\[
\min_{\xi} \left\{ \|\xi\|^{-2}\left\langle\xi, \left(K_z^\bot-\diag(\Re(q(z, 0)))\right)\xi\right\rangle: \xi\in\IR^N,\ \left\langle |z|,\xi\right\rangle=0 \right\}\ \ge\ 1-\lambda_2>0,
\]
where $\lambda_2$ is given in (<ref>).
Note that (<ref>) implies that $\Im(B^* v)=\lambda_2\xi$ holds for some unit vector $\xi\in \IR^N$. Then $\xi$ is one left singular vector of
\[
\cB:=[\Re(B^*),\ \Im(B^*)]\in\IR^{N\times 2n}
\]
with singular value $\lambda_2$, and the associated right singular vector is $[\Im(v)^\top , \Re(v)^\top]^\top$.
Let $c=K|z|$ and $z=A^*x_0$. The left and right singular vectors of $\cB$ corresponding to singular value $1$ are
$c\in \IR^N$ and $[\Im(ix_0)^\top, \Re(ix_0)^\top]^\top$, respectively.
Since $\cB^\top \cB=\Re(B^* B)=\Re(\diag(\overline{u} )A^*A\diag(u))=K_{z}$, then $\xi, c$ are eigenvectors of $K_{z}$.
From Theorem <ref>, with probability one we have
\[
1>\lambda_2\ge \max_{\xi} \left\{ \left\langle\xi, K_z\xi\right\rangle: \|\xi\|=1,\ \left\langle|z|,\xi\right\rangle=0\right\}.
\]
By definition in (<ref>), $q(z, 0)=b^{-1}\odot (K_{z}^\bot b)=1-b^{-1}\odot K_z b=0$. Finally, since $K^\bot_{z}=I-K_{z}$, (<ref>) gives (<ref>) and (<ref>).
§ RAAR CONVERGENCE ANALYSIS
§.§ Convergence of RAAR
We shall derive one inequality stated in (<ref>), which ensures the convergence of RAAR iterations $\left\{w_k: k=1,2,\ldots \right\}$ in Prop. <ref>.
In Theorem <ref>, we shall show that the condition in (<ref>) holds near local saddles under a sufficiently large penalty $1/\beta'$.
From the $\lambda$-iterations of ADMM in (<ref>), we have
\[
\lambda_{k+1}=\lambda_k+(y_{k+1}-z_{k+1}),
\]
and
\[
\lambda_k+ y_{k+1}=\lambda_{k+1}+ z_{k+1}.
\]
The $z$ iteration yields
z_{k+1}=[w_{k}]_{\cZ}=[ z_{k+1}+\lambda_{k+1}]_{\cZ}
and $z_{k+1}+\lambda_{k+1}$ shares the same phase with $z_{k+1}$. We have a lower bound,
With $y_*=z_*$, $\lambda_*=-\beta'A_\bot^* A_\bot z_*$, and $y_{k+1}=(I-{\beta}A_\bot^* A_\bot) (z_k-\lambda_k)$, ${\beta'}={\beta}/(1-{\beta})$, introduce
\begin{eqnarray}
T(z_k,\lambda_k)
&:=&{\beta'} \|A_\bot (y_*-y_{k+1})\|^2+\|y_{k+1}-z_k\|^2\nonumber\\
&=&{\beta} (1-{\beta}) \|A_\bot ((z_k-z_*)-(\lambda_k-\lambda_*))\|^2\nonumber\\
&&+\,\|A_\bot (-{\beta} (z_k-z_*)-(1-{\beta})(\lambda_k-\lambda_*))\|^2
+\|A(-\lambda_k)\|^2 \nonumber \\
&=&{\beta} \|A_\bot (z_k-z_*)\|^2+(1-{\beta})\|A_\bot (\lambda_k-\lambda_*)\|^2+\|A\lambda_k\|^2.
\end{eqnarray}
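The equalities defining $T$ are purely algebraic and can be checked numerically. In the sketch below (an assumption: the frame $[A^*, A_\bot^*]$ is taken as a random unitary, matching the isometry used throughout), both expressions for $T(z_k,\lambda_k)$ agree to machine precision:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, beta = 5, 12, 0.7
beta_p = beta / (1 - beta)
Q, _ = np.linalg.qr(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
A, A_perp = Q[:, :n].conj().T, Q[:, n:].conj().T
M = A_perp.conj().T @ A_perp                  # A_perp^* A_perp (orthogonal projection)

z_star = rng.standard_normal(N) + 1j * rng.standard_normal(N)
lam_star = -beta_p * (M @ z_star)             # lambda_* = -beta' A_perp^* A_perp z_*
z_k = rng.standard_normal(N) + 1j * rng.standard_normal(N)
lam_k = rng.standard_normal(N) + 1j * rng.standard_normal(N)

y_star = z_star
y_next = (z_k - lam_k) - beta * (M @ (z_k - lam_k))   # y_{k+1} = (I - beta M)(z_k - lam_k)

T1 = beta_p * np.linalg.norm(A_perp @ (y_star - y_next)) ** 2 \
   + np.linalg.norm(y_next - z_k) ** 2
T2 = beta * np.linalg.norm(A_perp @ (z_k - z_star)) ** 2 \
   + (1 - beta) * np.linalg.norm(A_perp @ (lam_k - lam_star)) ** 2 \
   + np.linalg.norm(A @ lam_k) ** 2
assert np.isclose(T1, T2)
```

The identity holds for arbitrary $z_k,\lambda_k$, which is why random vectors suffice for the check.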
We shall derive a few inequalities from the optimal condition of $y$ and $z$ in (<ref>), respectively.
Let $T$ be defined in (<ref>). Then
\[
\|w_{k-1}-w_*\|^2-\|w_{k}-w_*\|^2\ \ge\ T(z_k,\lambda_k)+2\left<z_k-z_*,\ \lambda_k-\lambda_*\right>.
\]
Let $C$ be the cost function, $C(y)={\beta'} \|A_\bot y\|^2/2$, for $ y\in \IC^N$.
We shall prove
\[
\frac{1}{2} \|z_k-z_*\|^2\ge \left< \lambda_k-\lambda_*,\ y_{k+1}-y_*\right>+\frac{1}{2} \|y_{k+1}-y_*\|^2+\frac{1}{2}T(z_k,\lambda_k).
\]
To this end,
we make two
claims. First,
the optimality of $y_{k+1}$ in (<ref>) indicates that for all $y\in \IC^N$,
\begin{eqnarray}
C(y)-C(y_{k+1})&=&-\frac{1}{2} \|y-z_k\|^2+\left< \lambda_k, (y_{k+1}-y)\right>+\frac{1}{2} \|y-y_{k+1}\|^2\nonumber\\
&&+\left(\frac{1}{2} \|y_{k+1}-z_k\|^2
+\frac{{\beta'}}{2} \|A_\bot (y-y_{k+1})\|^2\right).\label{Q2a}
\end{eqnarray}
Second, the optimality of $y_*$ in $C(y)$ indicates
\[
C(y)-C(y_*)+\left< y-y_*,\ \lambda_*\right>\ge 0.
\]
To verify (<ref>), use the optimality
$y_{k+1}=(I+{\beta'} {A_\bot^*} A_\bot )^{-1} (z_k-\lambda_k)$ in (<ref>), which gives
$ \nabla_y L(y_{k+1}, z_k, \lambda_k)=0$.
The quadratic convexity of $L$ in $y$ gives
(<ref>), i.e.,
\begin{eqnarray*}
&&L(y, z_k, \lambda_k)=C(y)+\left< \lambda_k, (y-z_k)\right>+\frac{1}{2} \|y-z_k\|^2\\
&=&C(y_{k+1})+\left< \lambda_k, (y_{k+1}-z_k)\right>+\frac{1}{2} \|y_{k+1}-z_k\|^2+\frac{{\beta'}}{2} \|A_\bot (y-y_{k+1})\|^2
+\frac{1}{2} \|y-y_{k+1}\|^2.
\end{eqnarray*}
For (<ref>), with $\lambda_*=-{\beta'} A_\bot^* A_\bot z_*=-{\beta'} A_\bot^* A_\bot y_*$,
Taylor's expansion of $C(y)$ at $y_*$ gives
\begin{eqnarray}
&& C( y)-C(y_*)= {\beta'} \left< A_\bot^* A_\bot y_*, y-y_*\right>+
\frac{{\beta'}}{2} \|A_\bot(y_*-y)\|^2
\ge \left< -\lambda_*, y-y_*\right>.
\end{eqnarray}
With $y=y_*=z_*$ in (<ref>) and $y=y_{k+1}$ in (<ref>), (<ref>)+(<ref>) gives (<ref>).
Next, from (<ref>), we have two identities,
\begin{eqnarray}\label{sq1}
z_*-z_k&=&(w_*-w_{k-1})+(\lambda_k-\lambda_*),\nonumber\\
z_*-y_{k+1}&=&(w_*-w_{k})+(\lambda_k-\lambda_*).
\end{eqnarray}
The difference of the squares of (<ref>) and (<ref>) gives
\begin{eqnarray}
&&- \|z_*-z_k\|^2+ \|z_*-y_{k+1}\|^2\nonumber \\
&=&\|w_{k}-w_*\|^2-\|w_{k-1}-w_*\|^2
-2\left<w_{k}-w_{k-1}, \lambda_k-\lambda_* \right>\label{end:2}
\\
&=&\|w_{k}-w_*\|^2-\|w_{k-1}-w_*\|^2
+2\left<z_{k}-z_*, \lambda_k-\lambda_*\right>\nonumber\\
&&-2\left<\lambda_k -\lambda_*, y_{k+1}-z_*\right>,
\label{end:1}
\end{eqnarray}
where the last equality is given by the difference of (<ref>) and (<ref>).
The proof of (<ref>) is completed by (<ref>) and (<ref>).
Note that for each fixed point $w_*:=z_*+\lambda_*$ of RAAR, $\alpha w_*$ is also a fixed point of RAAR with any complex unit $\alpha$.
For $z,\lambda\in \IC^N$, let $\alpha $ be the corresponding global phase factor between $w$ and $w_*$,
\[
\alpha:=\arg\min_{|\alpha|=1}\left\{ \|w-\alpha w_*\|\right\},\quad w=z+\lambda,\ w_*:=z_*+\lambda_*.
\]
Suppose that there exists some constant $c_0>1$ such that the following inequality
\[
2\left< \alpha z_*-z,\ \lambda-\alpha\lambda_*\right>\ \le\ c_0^{-1}\, T(z,\lambda)
\]
holds for $(z,\lambda)=(z_k,\lambda_k)$ for $k\ge k_0$. Then any limit point $(z',\lambda')$ of RAAR satisfies
\[
A_\bot (z'-\alpha z_*)=0,\; \lambda-\alpha \lambda_*=0 \; \textrm{ for some complex unit $\alpha$}.
\]
Recall $w_{k-1}=z_k+\lambda_k$ and $w_*=z_*+\lambda_*$.
Let $\alpha_k$ be a global phase factor attaining
$ \alpha_k:=\arg\min_{|\alpha|=1}\|w_k-\alpha w_*\|.$
Summing (<ref>) over $k=k_0,\ldots, k_1$, for some global phase factors $\alpha_{k_0},\ldots, \alpha_{k_1}$,
\begin{eqnarray}&& \|w_{k_0-1}-\alpha_{k_0-1}w_* \|^2-\|w_{k_1-1} -\alpha_{k_1-1}w_*\|^2 \\
&=&\|z_{k_0}+\lambda_{k_0}-\alpha_{k_0-1}(z_*+\lambda_*) \|^2-\|z_{k_1}+\lambda_{k_1} -\alpha_{k_1-1} (z_*+\lambda_*)\|^2 \\
&\ge & \sum_{k=k_0}^{k_1-1} \left\{ \|z_{k}+\lambda_{k}-\alpha_{k-1}(z_*+\lambda_*) \|^2-\|z_{k+1}+\lambda_{k+1} -\alpha_{k}(z_*+\lambda_*)\|^2 \right\}\\
& \ge& \sum_{k=k_0}^{k_1-1}
\left\{ T(z_k,\lambda_k)-2\left<\alpha_{k-1} z_*-z_{k}, \lambda_{k}-\alpha_{k-1}\lambda_*\right> \right\}\\
&\ge& (1-c_0^{-1}) \sum_{k=k_0}^{k_1-1} T(z_k,\lambda_k).
\end{eqnarray}
Hence,
\[
\left( 1-c_0^{-1}\right) (k_1-k_0) \left\{\min_{k=k_0,\ldots, k_1-1} T(z_k,\lambda_k)\right\}\le \|w_{k_0-1}-\alpha_{k_0-1}w_*\|^2.
\]
Let $k_1\to \infty$. Since the left-hand side is bounded above and $1-c_0^{-1}>0$, then
\[
\liminf_{k\to \infty} T(z_k,\lambda_k)=0.
\]
Let $(z',\lambda')$ be a limiting point and $\alpha$ be the limiting phase factor. For ${\beta}\in (0,1)$,
from (<ref>)
\[
A\lambda'=0,\quad A_\bot\left(-\beta (z'-\alpha z_*)-(1-\beta) (\lambda'-\alpha\lambda_*)\right)=0=A_\bot\left((z'-\alpha z_*)-(\lambda'-\alpha\lambda_*)\right).
\]
The second part of (<ref>) gives $A_\bot \lambda'=\alpha A_\bot \lambda_*$. Thus $\lambda'=\alpha \lambda_*$ and $A_\bot z'=\alpha A_\bot z_*$.
When (<ref>) holds eventually, (<ref>) indicates that
$\min_{k\le k_1} T(z_k,\lambda_k)$ converges to $0$ at the rate $O((k_1-k_0)^{-1})$, i.e., sub-linear convergence of RAAR. This is consistent with the sub-linear convergence in the FDR numerical experiments in [16].
§.§ Justification of (<ref>) from local saddles
Next we shall verify that the convergence condition in (<ref>) holds, i.e.,
\[
\beta \|A_\bot(z-z_*)\|^2+(1-\beta)\|A_\bot(\lambda-\lambda_*)\|^2\ \ge\ 2c_0 \left< z_*-z,\ \lambda-\lambda_*\right>,
\]
provided that the positive definite condition (<ref>) holds at $z_*$ and $(z,\lambda)$ is close to $(z_*, \lambda_*)$.
For the sake of simplicity, we shall omit the global factors $\alpha$ in front of $z$ and $\lambda$, if no confusion occurs.
With ${\beta}\in (0,1)$,
$T(z,\lambda)$ quantifies a distance between $(z,\lambda)$ and $(z_*, \lambda_*)$. That is,
for $\epsilon>0$, from (<ref>), $T(z,\lambda)<\epsilon$ implies
\[
\max\left\{ \beta \|A_\bot(z-z_*)\|^2,\ (1-\beta)\|A_\bot(\lambda-\lambda_*)\|^2,\ \|A\lambda\|^2\right\}\le \epsilon.
\]
Thus, $\|A_\bot (z-z_*)\|^2\le \epsilon/{\beta}$ and $\|\lambda-\lambda_*\|^2\le \epsilon/(1-{\beta})$.
Let $z_*$ be a strictly local minimizer in (<ref>).
Then we can find $\beta\in (0,1)$ satisfying
\[
(1- \beta) \left< \xi, K_{z_*}^\bot\xi\right>\ >\ 2 \left< \xi,\ \diag\left(b^{-1}\odot(K_{z_*}^\bot b)\right)\xi\right>
\]
for any unit vector $\xi\in \Xi$.
Consider $\beta$-RAAR with this ${\beta}\in (0,1)$. Let
$\lambda_*=-{\beta'} A_\bot^* A_\bot z_*$ and
$w_*=z_*+\lambda_*$. Then
there is some constant $\epsilon>0$,
such that
(<ref>) holds for all $(z,\lambda)$ with
$ \|w-w_*\|^2<\epsilon$, where a proper complex unit is applied on $w_*$ according to (<ref>).
First, the existence of $\beta$ for (<ref>) is ensured by Prop. <ref>.
The RAAR iterations satisfying (<ref>,<ref>, <ref>,<ref>) indicate
the decomposition $w=z+\lambda$ with $z\in \cZ$ and $\lambda\odot z^{-1}\in \IR^N$.
Let $u=z/|z|$, $u_*=z_*/|z_*|$ and $q_0=b^{-1}\odot K^\bot b$. Then
$\bar u\odot \lambda\in \IR^N$, $\bar u_*\odot \lambda_*\in \IR^N$, and
\begin{equation}
(z_*)^{-1}\odot \lambda_*=-\beta' b^{-1}\odot K^\bot b=-\beta' q_0.
\end{equation}
Using continuity arguments on (<ref>), we have with $u'=(z+ z_*)/|z+ z_*|$, $\xi=(-i)\bar u'\odot ( z_*-z)\in \IR^N$,
\begin{equation}
\label{eq86}
\beta\left<
\xi,\ \bar u'\odot (A_\bot^* A_\bot (u'\odot \xi))
\right>
\ \ge\ -c_0\left< \xi, \left(\frac{\lambda_*}{z_*}+\frac{\lambda}{z}\right)\odot \xi\right>
\end{equation}
for $(\lambda, z)$ sufficiently close to $( \lambda_*, z_*)$. Observe that as $z\to z_*$, we have
$\left< \xi, b\right>\to 0$.
Note that $T(z,\lambda)$ has the upper bound $(1-c_0^{-1})^{-1}\|w_{k_0-1}-w_*\|^2$ from (<ref>).
According to Remark <ref>, (<ref>) holds, if $\|w_{k_0-1}-w_*\|<\epsilon$ holds for some $\epsilon$ sufficiently small.
With (<ref>), algebraic computation gives (<ref>). Indeed,
\begin{eqnarray*}
&&2c_0 \left< \lambda-\lambda_*, z_*-z\right>\\
&=&2c_0 \left< \lambda\odot z^{-1},\bar z\odot ( z_*-z)\right>-2c_0 \left<\lambda_*\odot z_*^{-1},\bar z_* \odot (z_*-z)\right>\\
&=&- c_0 \left<\lambda\odot z^{-1} + \lambda_*\odot z_*^{-1},\ | z-z_*|^2\right>
\\
&\le &
\beta \left< \xi, \bar u'\odot (A_\bot^* A_\bot (u'\odot \xi))\right>=
\beta\|A_\bot(z-z_*)\|^2\le T(z,\lambda).
\end{eqnarray*}
Thus, $\|w_{k_0}-w_*\|<\epsilon$ gives the closeness condition for the subsequent vectors $(z,\lambda)$.
§ NUMERICAL EXPERIMENTS
§.§ Gaussian-DRS
The $\beta$-RAAR algorithm is not the only algorithm which can screen out undesired local saddles by varying penalty parameters. Recently, [39] proposed
Gaussian Douglas-Rachford Splitting (DRS)
to solve phase retrieval by minimizing the loss function $\||z|-b\|^2$ subject to $z$ in the range of $A^*$. Let $x$ be the unknown object and let $1_\cF(y)$ be the indicator function of the range $\cF$ of $A^*$. Then $A^* x \in \cF$.
Similar to RAAR, the algorithm can be formulated as ADMM with a penalty parameter $\rho>0$ to reach
a local max-min point of the Lagrangian function
\[
\max_\lambda\min_{y,z\in\IC^N} \left\{\frac{1}{2} \||z|-b\|^2+ \left< \lambda, z-y\right>+\frac{\rho}{2} \|z-y\|^2+1_\cF(y)\right\}.
\]
The ADMM scheme consists of repeating the following three updates to reach a fixed point $(y,z,\lambda)$:
* $y\leftarrow A^*A (z+\rho^{-1}\lambda)$;
* $ z\leftarrow (1+\rho)^{-1}(\frac{w}{|w|}\odot b+\rho w)$ where $w=y-\rho^{-1}\lambda$;
* $\lambda \leftarrow \lambda+\rho (z-y)$.
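The three updates above can be sketched compactly (an illustration only: $A$ is modeled here as an isometric slice of a random unitary, and the iteration count and sizes are toy values, not the tuned setup of the experiments):

```python
import numpy as np

rng = np.random.default_rng(2)
n, N, rho = 6, 15, 0.25
Q, _ = np.linalg.qr(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
A = Q[:, :n].conj().T                         # isometric: A A^* = I_n
P = A.conj().T @ A                            # P = A^* A, projection onto range(A^*)

x0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = np.abs(A.conj().T @ x0)                   # noise-free data b = |A^* x0|

z = rng.standard_normal(N) + 1j * rng.standard_normal(N)
lam = np.zeros(N, dtype=complex)
for _ in range(200):
    y = P @ (z + lam / rho)                   # y-update: project onto range(A^*)
    w = y - lam / rho
    z = (b * w / np.abs(w) + rho * w) / (1 + rho)   # z-update
    lam = lam + rho * (z - y)                 # dual update
x_rec = A @ y                                 # object estimate x = A y
```

By construction the $y$-update always lands in $\cF$, so the indicator term never activates; convergence to a global solution from a random initialization is not guaranteed and depends on $\rho$, as the simulations below illustrate.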
Similar to the RAAR reconstruction in (<ref>), once a fixed point of this ADMM is obtained, the object $x$ can be computed by $x=Ay=A(z+\rho^{-1}\lambda)$ from the $y$-update.
Introduce $P=A^*A$ and $P^\bot=I-P$. After eliminating $y$,
the local max-min problem reduces to
\[
\max_\lambda\min_z \left\{\frac{1}{2}\||z|-b\|^2+\frac{\rho}{2} \left\{\|P^\bot(z+\lambda/\rho)\|^2-\|\lambda/\rho\|^2\right\}\right\}.
\]
Algebraic computations on ADMM scheme yield the fixed-point condition of DRS.
Denote $\mu:=\lambda/\rho$.
Let $(z,\mu)$ be a fixed point of $\rho$-DRS. Then
\[
z+\rho\mu=[z-\mu]_\cZ=[z]_\cZ,\quad P\mu=0,\quad P^\bot z=0.
\]
We skip the proof of (<ref>). Note that the condition implies
that the vector $z$ shares the same phase vector $u:=z/|z|$ with $z-\mu$, and
$[z]_\cZ$ has the $(P,P^\bot)$-decomposition $[z]_\cZ=z+\rho \mu$,
where $P\mu=0$ and $P^\bot z=0$.
Hence, $\mu\odot z^{-1}\in \IR^N$ is a real vector with bounds $-\rho^{-1} \le z^{-1}\odot \mu\le 1$.
Next we derive conditions for local saddles of $L$ in (<ref>).
Let $(z,\mu)$ be a local saddle of $L$ in (<ref>). Then
the first-order optimal condition is
\[
z+\rho\mu=[z]_\cZ,\quad P\mu=0,\quad P^\bot z=0.
\]
When $\rho>0$, the concavity of $L$ in $\lambda$ is obvious. Let $K:=\Re(\diag(\bar u) P \diag(u))$ and $ u=z/|z|.$
The second-order necessary condition of $z$ in (<ref>) is
\[
\|c\|^2\ \ge\ \frac{1}{\rho+1} \left< \frac{b}{|z|},\ c^2\right>+\frac{\rho}{\rho+1} \left<c,\ Kc\right>, \quad \forall c\in\IR^N.
\]
The optimality of $\mu$ in (<ref>) is
\[
P^\bot(z+\mu)=\mu,
\]
which implies $P^\bot z=0$ and $P\mu=0$.
From the derivative of $L$ with respect to $z$,
the $z$-optimality is
\[
\frac{z}{|z|}\odot (|z|-b)+\rho P^\bot (z+\mu)=0,
\textrm{ i.e., }
z-[z]_\cZ+\rho P^\bot (z+ \mu)=0.
\]
Together with (<ref>), we have $z+\rho \mu=[z]_\cZ$.
Next, we derive the second-order necessary condition of $z$.
Consider a perturbation $z\to z+\epsilon$ with $\epsilon\in \IC^N$. From
|z+\epsilon|=|z| \left(1+\Re(\frac{\epsilon}{z}) +\frac{1}{2}\Im(\frac{\epsilon}{z})^2\right)+o(\epsilon^2),
\begin{eqnarray}
\||z+\epsilon|-b\|^2&=&\||z|-b\|^2+2\left< z-b\odot \frac{z}{|z|}, \epsilon\right> +\|\epsilon\|^2 -\left< \frac{b}{|z|}, \Im(\epsilon\odot \bar u)^2\right>+o(\|\epsilon\|^2),\label{eq155}
\end{eqnarray}
we have the second-order condition of $L$,
\[
\rho\left< \epsilon, P^\bot\epsilon\right> +\|\epsilon\|^2 -\left< \frac{b}{|z|}, \Im(\epsilon\odot\bar u)^2\right> \ge 0.
\]
Taking $\epsilon=ic\odot u$ for $c\in\IR^N$ yields
\[
\|c\|^2\ge \frac{1}{\rho+1} \left< \frac{b}{|z|},\ c^2\right>+\frac{\rho}{\rho+1} \left<c,\ Kc\right>.
\]
Next, we show that a local saddle $(z,\mu)$ is always
a fixed point of DRS.
If $(\mu, z)$ is a local max-min point in (<ref>), then $z$ is a fixed point of DRS.
At each stationary point $z$ of DRS, we have $|z|=Kb$ and $[z]_\cZ=b\odot u$.
Taking $c=e_i$ in (<ref>) yields
\[ (\rho+1) Kb=(\rho+1)|z|\ge b, \; i.e., \;
\bar u\odot (z-\mu)=Kb-\mu\odot \bar u\ge 0, \]
which implies $[z-\mu]_\cZ=[z]_\cZ$. Together with the $(P, P^\bot)$-decomposition, $[z]_\cZ=z+\rho \mu$, we have the fixed point condition.
From the second-order condition in (<ref>), we expect that DRS with smaller $\rho$ yields a stronger screening-out ability.
The next remark illustrates the screening-out similarity between RAAR and DRS in the case $\rho$ close to $0$.
Roughly, for $\rho$ close to $0$, a local saddle at some phase vector $u$ of $\beta$-RAAR would be a local saddle at the same phase vector $u$ of $\rho$-DRS, if $\rho$ and $\beta$ satisfy $\beta^{-1}=\rho+1$.
Indeed, the second-order necessary condition for the RAAR function in (<ref>) is given by
\[
K^\bot-\diag(q(z_*,\lambda_*))= -\frac{\beta}{1-\beta}I-K+\frac{1}{1-\beta}\diag\left(\frac{Kb}{b}\right)\succeq 0.
\]
That is, for any nonzero $\xi\in\IR^N$,
\[
\left< \frac{Kb}{b},\ \xi^2\right>\ge \beta\|\xi\|^2+(1-\beta)\, \xi^\top K\xi.
\]
On the other hand, for DRS,
replacing $c$ with
$\pm (Kb/b)^{1/2}\odot \xi$ and $|z|=Kb$ in (<ref>) gives
\[
\left< \frac{Kb}{b},\ \xi^2\right>\ge \frac{1}{\rho+1}\|\xi\|^2+\frac{\rho}{\rho+1} \left< \left(\frac{Kb}{b}\right)^{1/2}\odot\xi,\ K \left(\left(\frac{Kb}{b}\right)^{1/2}\odot\xi\right)\right>.
\]
Comparing with (<ref>),
(<ref>) is almost identical to (<ref>) under
$\beta^{-1}=\rho+1$, if
$\rho$ is close to $0$ and we ignore the difference between the second terms on their right-hand sides.
§.§.§ Simulation of Gaussian matrices
We provide one simulation to present the screening-out effect for $\beta$-RAAR and $\rho$-DRS.
Generate Gaussian matrices $A$ with size $n\times N$, $n=100$, $N/n=3, 3.5, 4, 4.5$ and $ 5$, respectively. For simplicity, generate noise-free data $b=|A^* x_0|$ from some $x_0$. Apply $\beta$-RAAR and $\rho$-DRS to reconstruct phase retrieval solutions under a set of parameters $\beta, \rho$, respectively. Here, we test $\rho= 1, 1/2, 1/3, 1/4, \ldots, 1/10$ and $\beta=1/2, 2/3, 3/4, \ldots, 10/11$. Figure <ref> shows the success rate of reaching a global solution for each parameter value (among $40$ trials with random initializations).
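The two parameter grids are matched through the pairing $\beta=\rho^{-1}/(1+\rho^{-1})$ discussed above; a two-line bookkeeping check:

```python
rhos = [1 / k for k in range(1, 11)]         # rho = 1, 1/2, ..., 1/10
betas = [k / (k + 1) for k in range(1, 11)]  # beta = 1/2, 2/3, ..., 10/11
for rho, beta in zip(rhos, betas):
    # pairing beta = rho^{-1}/(1 + rho^{-1}), i.e. beta^{-1} = rho + 1
    assert abs(1 / beta - (rho + 1)) < 1e-12
```

So each tested $\rho$ has a $\beta$ partner, which is how the left and right panels of the figure are aligned.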
For $\rho$ close to $0$ or $\beta$ close to $1$, $\beta$-RAAR with
\[
\beta=\frac{\rho^{-1}}{1+\rho^{-1}}
\]
gives a similar empirical performance as $\rho$-DRS, although RAAR performs slightly better. For instance, as $\beta\ge 0.8$ or $\rho\le 1/4$,
with success rates higher than $70\%$,
$\beta$-RAAR and $\rho$-DRS algorithms both can reconstruct a global solution in the case with $N/n\ge 4$.
These empirical results are consistent with the theoretical analysis in Remark <ref>.
Right and left subfigures show the success rate under $\beta$-RAAR algorithm and $\rho$-DRS algorithm with $ \beta=\rho^{-1}(1+\rho^{-1})^{-1}$, respectively.
§.§ Coded diffraction patterns
The following experiments
present convergence behavior of RAAR on coded diffraction patterns.
We use $1\frac{1}{2}$ coded diffraction patterns with oversampling, i.e., one coded pattern and one uncoded pattern, as in [16]. For the test image $x_0$, we use the Randomly Phased Phantom (RPP)
$x_0=p\odot \mu_0$, where $\mu_0:=e^{i\phi}$ and the entries of $\phi$ are i.i.d. uniform random variables over $[0,2\pi]$. The size is $128\times 128$, including the margins.
We randomize the original phantom $p$ (in the left of Fig. <ref>) in order to make its reconstruction more challenging: a random object such as RPP is more difficult to recover than a deterministic object.
Phantom image $p$ without phase randomization (left); the images represent the null vector initialization $\Re(x_{null}\odot \overline{\mu_0})$ in the noiseless case (middle) and in the noisy case (right).
Theorem <ref> states the existence of a strictly local minimizer, and
Theorem <ref> indicates that the existence of
a local saddle relies on a sufficiently large penalty parameter.
The following experiment validates
RAAR convergence to the local saddle under proper selection on $\beta$.
Empirically, $\beta$-RAAR with large $\beta$ can easily diverge, but $\beta$-RAAR with small $\beta$ can easily get stuck (not necessarily converged) near distinct critical points on $\cZ$.
To demonstrate the effectiveness of RAAR,
we shall make two adjustments in the application of $\beta$-RAAR. First, to alleviate stagnation at far critical solutions, we employ the null vector method [8], a spectral initialization, to generate an initialization of RAAR. See the middle and right subfigures in Fig. <ref> for the initialization. Second,
to reach a local saddle within 600 RAAR iterations, we vary the parameter $\beta$ along a $\beta$-path, starting from some initial value and then decreasing to $0.5$, as shown in
Fig. <ref>. Each path consists of two phases: (i) $\beta$ remains at one constant value selected from $0.95, 0.9, 0.8, 0.7$ and $ 0.6$ within the first $300$ iterations; (ii) $\beta$ decreases to $0.5$ piecewise linearly within the second $300$ iterations.
The corresponding $\beta$ values used within $600$ RAAR iterations of five different $\beta$-paths.
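One simple way to realize such a two-phase path is sketched below (the exact piecewise-linear breakpoints used in the experiments are an assumption here):

```python
def beta_path(beta0, total=600, hold=300, beta_end=0.5):
    # phase (i): beta held constant for the first `hold` iterations;
    # phase (ii): linear decrease from beta0 down to beta_end.
    return [beta0 if k < hold else
            beta0 + (beta_end - beta0) * (k - hold + 1) / (total - hold)
            for k in range(total)]

path = beta_path(0.95)
assert len(path) == 600 and path[0] == 0.95 and abs(path[-1] - 0.5) < 1e-12
```

The same helper generates all five paths by changing `beta0` to $0.95, 0.9, 0.8, 0.7$ or $0.6$.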
Conduct four experiments to examine $\beta$-RAAR along five $\beta$-paths:
(a) Noiseless data, $b=|A^* x_0|$ with $A$ defined in (<ref>).
Use the null vector method $x_{null}$ as one initial vector for $\beta$-RAAR, i.e.,
\[
z_1=[ A^* x_{null} ]_\cZ,\quad \lambda_1=A^* x_{null}-z_1.
\]
(b) Noiseless data. RAAR with random initialization.
(c) Noisy data. RAAR with null vector initialization as in (<ref>).
(d) Noisy data. RAAR with random initialization.
In (c) and (d), the source of noise is the counting statistics [45], i.e., each entry of the squared measurement $b^2$ follows a Poisson distribution,
\[
b^2\sim \mathrm{Poisson}(\kappa|A^* x_0|^2),\quad \kappa>0.
\]
In the RPP experiment, the parameter $\kappa$ is chosen so that the noise level is $\| b-|A^* x_0|\|/\|b\|\approx 0.18$.
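The noisy data can be generated as follows (a sketch: the rescaling of the Poisson draw by $\kappa$, so that the expectation of $b^2$ matches $|A^* x_0|^2$, is our normalization assumption, and the value of $\kappa$ here is illustrative rather than the tuned one):

```python
import numpy as np

rng = np.random.default_rng(3)
n, N = 64, 192
A = (rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))) / np.sqrt(2 * n)
x0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)

intensity = np.abs(A.conj().T @ x0) ** 2      # |A^* x0|^2
kappa = 20.0                                  # illustrative; tuned in the paper for ~18% noise
b = np.sqrt(rng.poisson(kappa * intensity) / kappa)   # b^2 ~ Poisson(kappa |A^* x0|^2)/kappa
noise_level = np.linalg.norm(b - np.sqrt(intensity)) / np.linalg.norm(b)
```

Larger $\kappa$ means more counts and hence a smaller relative noise level, which is how the target $\approx 0.18$ is reached.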
§.§.§ Performance metrics
The performance metrics in the cases (a,b) and (c,d)
are reported in Figure <ref> and Figure <ref>, respectively.
Each row
shows the performance metrics of
$\beta$-RAAR iterations $\{w_k\}$. Here,
$z,\lambda$ are computed from $w$ according to $\{z_{k+1}:=[w_k]_\cZ,\ \lambda_{k+1}:=w_k-z_{k+1}\}$.
From (<ref>), the reconstruction of the object $x$ is estimated by $A(z_{k+1}-\lambda_{k+1})$.
* Residual: $\|A_\bot z\|/\|b\|$.
* Norm of derivative:
The Wirtinger derivative of the objective $F$ in the $\lambda$-direction is
\[
-\left\{\beta A_\bot^*\left(A_\bot(z-\lambda)\right) +\lambda\right\}.
\]
When RAAR converges to one local saddle, the derivative norm would be $0$.
The norm
\[
\ID_\lambda(z,\lambda):=\left (\|A_\bot ((1-\beta) \lambda+\beta z) \|^2+\|A\lambda\|^2\right )^{1/2}
\]
can be employed
to examine the quality of convergence. (Empirically, the derivative in the $z$-direction
$\ID_z$ has a behavior similar to the one of $\ID_\lambda$. For simplicity, we do not report $\ID_z$ here.)
* Inequality ratio $\IT(z_k,\lambda_k)$.
In the noiseless setting,
$\IT(z,\lambda)$ is positive, as $(z,\lambda)$ approaches a local saddle $(z_*,\lambda_*)$ with $\|A_\bot z_*\|=0$. Hence,
a positive ratio
\[
\IT(z,\lambda):=1+(\beta \|A_\bot z\|^2+(1-\beta) \|A_\bot \lambda\|^2+\|A\lambda\|^2)^{-1}(2\langle z, \lambda\rangle)
\]
can be used as one indicator that
RAAR iterates enter the attraction basin of $(z_*,\lambda_*)$.
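The three metrics can be packaged as below (a sketch: the splitting $z=[w]_\cZ$, $\lambda=w-z$ follows the rule above, while the toy frame construction, sizes, and iteration count are assumptions for illustration):

```python
import numpy as np

def raar_metrics(w, b, A, A_perp, beta):
    # split w into z = [w]_Z and lambda = w - z (phase projection)
    z = b * w / np.abs(w)
    lam = w - z
    residual = np.linalg.norm(A_perp @ z) / np.linalg.norm(b)
    # derivative metric D_lambda
    D_lam = np.sqrt(np.linalg.norm(A_perp @ ((1 - beta) * lam + beta * z)) ** 2
                    + np.linalg.norm(A @ lam) ** 2)
    # inequality ratio T
    denom = (beta * np.linalg.norm(A_perp @ z) ** 2
             + (1 - beta) * np.linalg.norm(A_perp @ lam) ** 2
             + np.linalg.norm(A @ lam) ** 2)
    T_ratio = 1 + 2 * np.real(np.vdot(lam, z)) / denom
    return residual, D_lam, T_ratio

# toy run: a few beta-RAAR sweeps, then evaluate the metrics
rng = np.random.default_rng(4)
n, N, beta = 6, 15, 0.9
Q, _ = np.linalg.qr(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
A, A_perp = Q[:, :n].conj().T, Q[:, n:].conj().T
x0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = np.abs(A.conj().T @ x0)

w = rng.standard_normal(N) + 1j * rng.standard_normal(N)
for _ in range(10):
    z = b * w / np.abs(w)                     # z_{k+1} = [w_k]_Z
    lam = w - z                               # lambda_{k+1} = w_k - z_{k+1}
    y = (z - lam) - beta * (A_perp.conj().T @ (A_perp @ (z - lam)))
    w = y + lam                               # next w
res, D_lam, T_ratio = raar_metrics(w, b, A, A_perp, beta)
```

In the reported experiments, `residual` and `D_lam` decaying to $0$ and `T_ratio` staying positive are the signatures of entering the attraction basin of a local saddle.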
§.§.§ Results on noiseless measurement
In Fig. <ref>,
the left column and the right column show the metric performance in the cases (a) and (b).
* The left column shows the result of the 600 RAAR iterations along the five $\beta$-paths with null initialization, i.e., case (a). The null initialization is illustrated in
the middle of Fig. <ref>. Based on the residual and derivative metrics, RAAR converges to the global solutions for
all five $\beta$-paths. The $\IT$ becomes positive after $100$ iterations, which indicates the closeness of the null initialization to the attraction basin. In particular, $\IT$ reaches $1$ in the early iterations of the case $\beta=0.95$.
* The right column shows the case (b). The initialization difference between case (a) and case (b) reflects the influence of undesired local saddles.
We observe two distinct convergence behaviors.
First, in the case of $\beta=0.6, \beta=0.7$ and $\beta=0.8$,
based on the metrics of the derivative norm and the residual, RAAR fails to converge within the first $300$ iterations. As $\beta$ decreases in the second 300 iterations, the iterates tend to different local saddles.
Second, for the $\beta$-paths starting with $\beta=0.9$ or $\beta=0.95$, RAAR successfully converges to global solutions. Their $\IT$-values are negative in the early 50 iterations, but quickly turn positive after 100 iterations. Fig. <ref> demonstrates the reconstruction.
§.§.§ Results on noisy measurements
The left column in Fig. <ref> demonstrates
the metric performance of RAAR in the noisy case (c).
The null initialization
shown in the right of Fig. <ref>, is used to reduce the chance of getting stuck at far local saddles.
The reconstructed objects after the first 300 RAAR iterations are shown in the top row of Fig. <ref>. Even though these reconstructions are very similar to the RPP,
the metric $\ID_\lambda$ indicates that the RAAR runs with $\beta\ge 0.7$ fail to converge within the first 300 iterations. Hence,
we decrease $\beta$ in the second 300 iterations to obtain local saddles. Observe that the derivative norm in all cases decays to $0$. Actually, by examining the correlation of reconstructed objects after 600 RAAR iterations, we verify that these five reconstructed objects are identical up to a phase factor.
The right column in Fig. <ref> shows the metric performance in the noisy case (d). The five $\beta$-RAAR runs tend to different residual values in the first 300 iterations.
For large $\beta$, i.e., $\beta=0.95$, $\beta=0.9$ and $\beta=0.8$,
RAAR produces
rather successful reconstructed objects, shown in the bottom row of Fig. <ref>. These runs do not converge within the first 300 iterations. Hence, we reduce the $\beta$ value during the second $300$ iterations.
By examining the correlation of reconstructed objects, we verify that three final reconstructions are all identical to the final reconstruction in (c).
For small $\beta$, RAAR could get stuck at poor solutions, e.g.,
$\beta=0.7$ and $\beta=0.6$.
Indeed, after the second 300 iterations, these two RAAR converge to non-global local solutions with larger residual values.
The above experimental results suggest that RAAR starting with large $\beta$ typically performs better than RAAR starting with small $\beta$ in the absence of spectral initialization. In the numerical simulations, we demonstrate the effectiveness of RAAR on coded diffraction patterns, where $\beta$ travels from a large value to $0.5$.
Left and right columns show the residual metric and the derivative metric of RAAR in
case (a) and case (b), respectively.
Left and right columns show the residual metric and the derivative metric of RAAR in
case (c) and case (d), respectively.
The images from left to right show the reconstructions of 300 RAAR iterations in the case (b), corresponding to $\beta$-paths with $\beta=0.95$, $0.9$, $0.8$, $0.7$ and $0.6$, respectively.
The columns from left to right show the reconstruction of 300 RAAR iterations in the case (c,d), corresponding to $\beta$-paths with $\beta=0.95$, $0.9$, $0.8$, $0.7$ and $0.6$, respectively. The top row is the case (c) and the bottom row is the case (d).
§.§ Conclusion and outlook
In this paper, we examine the RAAR convergence
from the viewpoint of local saddles of a concave-non-convex max-min problem. We show that the global solution is a strictly local minimizer for oversampled coded diffraction patterns, which ensures
the existence of local saddles. Convergence to a local saddle of the RAAR Lagrangian function requires a sufficiently large penalty parameter, which explains the avoidance of undesired local solutions under RAAR with a moderate penalty parameter.
ADMM is a popular algorithm for handling various constraints. The current paper does not introduce any further assumption on the unknown objects, except for the condition $x\in \IC^n$ in (<ref>). Stable recovery from incomplete measurements is actually possible, provided that additional assumptions on the unknown objects are used. For instance,
when unknown objects can be characterized by piecewise-smooth functions with small total variation seminorm,
recovery can be obtained from incomplete Fourier measurements with the aid of total variation regularization [46]. Another interesting work
[47] establishes the number of measurements ensuring stable recovery of a sparse object under independent measurement vectors. From this perspective, one interesting direction for future work is the
saddle analysis of ADMM associated with these additional object assumptions.
§.§ Acknowledgements
The author would like to thank Albert Fannjiang for helpful discussions.
[1]
Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev.
Phase retrieval with application to optical imaging: a contemporary overview.
Signal Processing Magazine, IEEE, 32(3):87–109, 2015.
[2]
P. Chen, A. Fannjiang, and G Liu.
Phase retrieval with one or two coded diffraction patterns by
alternating projection with the null initialization.
Journal of Fourier Analysis and Applications, pages 1–40.
[3]
R. W. Gerchberg and W. O. Saxton.
A practical algorithm for the determination of phase from image and
diffraction plane pictures.
Optik, 35:237, 1972.
[4]
Praneeth Netrapalli, Prateek Jain, and Sujay Sanghavi.
Phase retrieval using alternating minimization.
arxiv:1306.0160, 2013.
[5]
Yuxin Chen and Emmanuel J. Candes.
Solving random quadratic systems of equations is nearly as easy as
solving linear systems.
arxiv:1505.05114, 2015.
[6]
Emmanuel J. Candes, Xiaodong Li, and Mahdi Soltanolkotabi.
Phase retrieval via wirtinger flow: Theory and algorithms.
IEEE Transactions on Information Theory, 61(4):1985–2007,
apr 2015.
[7]
Huishuai Zhang and Yingbin Liang.
Reshaped wirtinger flow for solving quadratic system of equations.
In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett,
editors, Advances in Neural Information Processing Systems, volume 29,
pages 2622–2630. Curran Associates, Inc., 2016.
[8]
P. Chen, A. Fannjiang, and G Liu.
Phase retrieval by linear algebra.
SIAM J. Matrix Anal. appl., 38(3):864–868, 2017.
[9]
Wangyu Luo, Wael Alghamdi, and Yue M. Lu.
Optimal spectral initialization for signal recovery with applications
to phase retrieval.
arXiv:1811.04420, 2018.
[10]
Yue M. Lu and Gen Li.
Phase transitions of spectral initialization for high-dimensional
nonconvex estimation.
arxiv:1702.06435, 2017.
[11]
Marco Mondelli and Andrea Montanari.
Fundamental limits of weak recovery with applications to phase retrieval.
Foundations of Computational Mathematics, 19(3):703–773, Jun 2019.
[12]
John C Duchi and Feng Ruan.
Solving (most) of a set of quadratic equalities: Composite
optimization for robust phase retrieval.
Information and Inference: A Journal of the IMA, 8(3):471–529, 2019.
[13]
J. R. Fienup.
Phase retrieval algorithms: a comparison.
Applied optics, 21(15):2758–2769, 1982.
[14]
J. R. Fienup.
Phase retrieval algorithms: a personal tour.
Applied Optics, 52(1):45–56, 2013.
[15]
Heinz H. Bauschke, Patrick L. Combettes, and D. Russell Luke.
Hybrid projection–reflection method for phase retrieval.
J. Opt. Soc. Am. A, 20(6):1025–1034, Jun 2003.
[16]
Pengwen Chen and Albert Fannjiang.
Fourier phase retrieval with a single mask by Douglas–Rachford algorithms.
Applied and Computational Harmonic Analysis, 44(3):665–699, 2018.
[17]
Zaiwen Wen, Chao Yang, Xin Liu, and Stefano Marchesini.
Alternating direction methods for classical and ptychographic phase retrieval.
Inverse Problems, 28(11):115010, oct 2012.
[18]
D Russell Luke.
Relaxed averaged alternating reflections for diffraction imaging.
Inverse Problems, 21(1):37–50, nov 2004.
[19]
Jonathan Eckstein and Dimitri P. Bertsekas.
On the Douglas—Rachford splitting method and the proximal point
algorithm for maximal monotone operators.
Mathematical Programming, 55(1):293–318, Apr 1992.
[20]
Bingsheng He and Xiaoming Yuan.
On the convergence rate of Douglas–Rachford operator splitting method.
Mathematical Programming, 153(2):715–722, 2015.
[21]
Ji Li and Tie Zhou.
On relaxed averaged alternating reflections (RAAR) algorithm for
phase retrieval with structured illumination.
Inverse Problems, 33(2):025012, jan 2017.
[22]
Dimitri P. Bertsekas.
Constrained Optimization and Lagrange Multiplier Methods
(Optimization and Neural Computation Series).
Athena Scientific, 1 edition, 1996.
[23]
Mingyi Hong, Zhi-Quan Luo, and Meisam Razaviyayn.
Convergence analysis of alternating direction method of multipliers
for a family of nonconvex problems.
SIAM Journal on Optimization, 26(1):337–364, 2016.
[24]
Guoyin Li and Ting Kei Pong.
Global convergence of splitting methods for nonconvex composite optimization.
SIAM Journal on Optimization, 25(4):2434–2460, 2015.
[25]
Yu Wang, Wotao Yin, and Jinshan Zeng.
Global convergence of ADMM in nonconvex nonsmooth optimization.
Journal of Scientific Computing, 78(1):29–63, 2019.
[26]
D.P. Bertsekas.
Constrained Optimization and Lagrange Multiplier Methods.
Computer science and applied mathematics. Elsevier Science, 2014.
[27]
Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein.
Distributed optimization and statistical learning via the alternating
direction method of multipliers.
Found. Trends Mach. Learn., 3(1):1–122, January 2011.
[28]
J. Sun, Q. Qu, and J. Wright.
A geometric analysis of phase retrieval.
In 2016 IEEE International Symposium on Information Theory
(ISIT), pages 2379–2383, July 2016.
[29]
Jason D. Lee, Max Simchowitz, Michael I. Jordan, and Benjamin Recht.
Gradient descent only converges to minimizers.
In Vitaly Feldman, Alexander Rakhlin, and Ohad Shamir, editors, 29th Annual Conference on Learning Theory, volume 49 of Proceedings of
Machine Learning Research, pages 1246–1257, Columbia University, New York,
New York, USA, 23–26 Jun 2016. PMLR.
[30]
Simon S Du, Chi Jin, Jason D Lee, Michael I Jordan, Aarti Singh, and Barnabas
Poczos.
Gradient descent can take exponential time to escape saddle points.
In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus,
S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information
Processing Systems, volume 30, pages 1067–1077. Curran Associates, Inc., 2017.
[31]
Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M Kakade, and Michael I Jordan.
How to escape saddle points efficiently.
In 34th International Conference on Machine Learning, ICML
2017, pages 2727–2752. International Machine Learning Society (IMLS), 2017.
[32]
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley,
Sherjil Ozair, Aaron Courville, and Yoshua Bengio.
Generative adversarial networks.
Communications of the ACM, 63(11):139–144, 2020.
[33]
Shayegan Omidshafiei, Jason Pazis, Christopher Amato, Jonathan P How, and John
Vian.
Deep decentralized multi-task multi-agent reinforcement learning
under partial observability.
arXiv preprint arXiv:1703.06182, 2017.
[34]
Leonard Adolphs, Hadi Daneshmand, Aurelien Lucchi, and Thomas Hofmann.
Local saddle point optimization: A curvature exploitation approach.
In The 22nd International Conference on Artificial Intelligence
and Statistics, pages 486–495. PMLR, 2019.
[35]
Constantinos Daskalakis and Ioannis Panageas.
The limit points of (optimistic) gradient descent in min-max optimization.
Advances in Neural Information Processing Systems,
31:9236–9246, 2018.
[36]
Chi Jin, Praneeth Netrapalli, and Michael Jordan.
What is local optimality in nonconvex-nonconcave minimax optimization?
In International Conference on Machine Learning, pages
4880–4889. PMLR, 2020.
[37]
Yu-Hong Dai and Liwei Zhang.
Optimality conditions for constrained minimax optimization.
CSIAM Transactions on Applied Mathematics, 1(2):296–315, 2020.
[38]
Albert Fannjiang.
Absolute uniqueness of phase retrieval with random illumination.
Inverse Problems, 28(7):075008, jun 2012.
[39]
Albert Fannjiang and Zheqing Zhang.
Fixed point analysis of douglas–rachford splitting for ptychography
and phase retrieval.
SIAM Journal on Imaging Sciences, 13(2):609–650, 2020.
[40]
R. Glowinski and A. Marroco.
Sur l'approximation, par éléments finis d'ordre un, et la
résolution, par pénalisation-dualité d'une classe de problèmes de
dirichlet non linéaires.
ESAIM: Mathematical Modelling and Numerical Analysis -
Modélisation Mathématique et Analyse Numérique, 9(R2):41–76, 1975.
[41]
Daniel Gabay and Bertrand Mercier.
A dual algorithm for the solution of nonlinear variational problems
via finite element approximation.
Computers & Mathematics with Applications, 2(1):17–40, 1976.
[42]
Junfeng Yang, Yin Zhang, and Wotao Yin.
An efficient tvl1 algorithm for deblurring multichannel images
corrupted by impulsive noise.
SIAM Journal on Scientific Computing, 31(4):2842–2865, 2009.
[43]
Bhabesh Deka and Sumit Datta.
Compressed sensing magnetic resonance image reconstruction algorithms.
Springer Series on Bio-and Neurosystems, 2019.
[44]
A. Cherukuri, B. Gharesifard, and J. Cortés.
Saddle-point dynamics: Conditions for asymptotic stability of saddle points.
SIAM Journal on Control and Optimization, 55(1):486–511, 2017.
[45]
P Thibault and M Guizar-Sicairos.
Maximum-likelihood refinement for coherent diffractive imaging.
New Journal of Physics, 14(6):063004, jun 2012.
[46]
Huibin Chang, Yifei Lou, Michael K. Ng, and Tieyong Zeng.
Phase retrieval from incomplete magnitude information via total
variation regularization.
SIAM J Sci Comput, 38(6):A3672–A3695, 2016.
[47]
Y. C. Eldar and S. Mendelson.
Phase retrieval: Stability and recovery guarantees.
Applied and Computational Harmonic Analysis, 36(3):473–494, 2014.
# Inexact gradient projection method with relative error tolerance
A. A. Aguiar Instituto de Matemática e Estatística, Universidade Federal de
Goiás, CEP 74001-970 - Goiânia, GO, Brazil, E-mails:
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>The authors were
supported in part by CNPq grants 305158/2014-7 and 302473/2017-3,
FAPEG/PRONEM- 201710267000532 and CAPES. O. P. Ferreira 11footnotemark: 1 L.
F. Prudente 11footnotemark: 1
###### Abstract
A gradient projection method with feasible inexact projections is proposed in
the present paper. The inexact projection is performed using a relative error
tolerance. Asymptotic convergence analysis and iteration-complexity bounds of
the method employing constant and Armijo step sizes are presented. Numerical
results are reported illustrating the potential advantages of considering
inexact projections instead of exact ones in some medium-scale instances of a
least squares problem over the spectrahedron.
Keywords: Gradient method, feasible inexact projection, constrained convex
optimization.
AMS subject classification: 49J52, 49M15, 65H10, 90C30.
## 1 Introduction
In this paper, we address general constrained convex optimization problems of
the form
$\min\\{f(x):~{}x\in C\\},$ (1)
where $C$ is a closed and convex subset of $\mathbb{R}^{n}$ and
$f:\mathbb{R}^{n}\to{\mathbb{R}}$ is a continuously differentiable function.
Denote by $f^{*}:=\inf_{x\in C}f(x)$ the optimal value of (1) and by
$\Omega^{*}$ its solution set, which we will assume to be non-empty unless the
contrary is explicitly stated. Problem (1) is a fundamental optimization problem;
it appears in several areas of science and engineering, including machine
learning, control theory and signal processing, see for example [9, 10, 17,
36, 46]. In the present paper, we are interested in gradient-type algorithms
to solve it.
The gradient projection method (GPM) is one of the oldest methods for solving
Problem (1); its convergence properties go back to the works of
Goldstein [22] and Levitin and Polyak [35]. Since these works, many variants
of it have appeared throughout the years, resulting in a wide literature on
the subject, see, for example, [4, 5, 6, 16, 17, 25, 27, 28, 43, 51]. The GPM
has attracted the attention of the scientific community working in
optimization, mainly due to its simplicity and easy implementation. Besides,
since this method uses only first order derivatives, it is often considered as
a scalable solver for large-scale optimization problems, see [31, 38, 39, 41,
46, 48]. At each iteration, the classical GPM moves along the direction of the
negative gradient, and then projects the iterate onto $C$ if it is infeasible.
Although the feasible set of many important problems has an easy-to-handle
structure, in general this set can be so complex that the exact projection
cannot be easily computed. It is well known that most of the computational
burden of each GPM iteration lies in the solution of this subproblem. In
fact, one drawback of methods that use exact projections is the need to solve a
quadratic problem at each iteration, which can lead to a substantial increase in
the cost per iteration if the number of unknowns is large.
the computational effort spent on projections, inexact procedures have been
proposed, resulting in more efficient methods, see for example [8, 16, 43,
51]. Moreover, considering inexact schemes provides theoretical support for
real computational implementations of exact methods. It is worth mentioning
that throughout the years there has been an increase in the popularity of
inexact methods due to the emergence of large-scale problems in compressed
sensing, machine learning applications and data fitting, see for instance [21,
44, 45, 46]. Motivated by practical and theoretical reasons, the purpose of
the present paper is to present a new inexact version of the GPM, which we
call Gradient-InexP method (GInexPM). It consists of using a general inexact
projection instead of the exact one used in the GPM. The inexact projection
concept considered in the present paper is a variation of the one that appeared in
[49, Example 1], which is defined by using an approximate property of the
exact projection. In particular, it accepts the exact projection which can be
adopted when it is easily obtained (for instance, the exact projection onto a
box constraint or Lorentz cone can be easily obtained; see [42, p. 520] and
[20, Proposition 3.3], respectively). It is worth noting that our approach to
computing the inexact projection has not been considered in the study of the
classical gradient method; in particular, it is different from the ones
proposed in [8, 16, 21, 43, 45, 51]. The analysis of GInexPM will be carried out
employing two different step sizes, namely, the constant step size and Armijo’s
step size along the feasible direction. We point out that these step sizes are
discussed extensively in the literature on this subject, which inspired many of
our results, see for example [4, 6, 28, 29, 33, 40].
Contributions: The main novelty in our work is the use of relative error
tolerances in the computation of the inexact projection, to analyze the
convergence properties of GInexPM. Our numerical experiments showed that
GInexPM outperformed the GPM on a set of least squares problems over the
spectrahedron. From a theoretical point of view, under suitable assumptions,
the classic results of GPM were obtained for GInexPM as well. More
specifically, we have shown that all cluster points of the sequence generated
by GInexPM with constant step size or Armijo’s step size are solutions of
problem (1). Furthermore, under convexity of the objective function, this
sequence converges to a solution, whenever one exists. In both cases, the analysis
establishes convergence results without any compactness assumption. We have
also studied iteration-complexity bounds of GInexPM for both constant step
size and Armijo’s step size. The presented analysis establishes that the
complexity bound $\mathcal{O}(1/\sqrt{k})$ holds for finding
$\epsilon$-stationary points for Problem (1), and, under convexity of $f$, the
rate to find an $\epsilon$-optimal functional value is $\mathcal{O}(1/k)$.
Content of the paper: In section 2, some notation and basic results used
throughout the paper are presented. In particular, section 2.1 is devoted to
presenting the concept of relative feasible inexact projection and some
properties of this concept. In section 3, we describe the GInexPM method using
the constant step size. The convergence results for the constant step size, as
well as the iteration-complexity bounds, are presented in sections
3.1 and 3.2, respectively. The results related to Armijo’s step size are
presented in section 4. In section 4.1, we present the asymptotic convergence
analysis of GInexPM using Armijo’s step size, and an iteration-complexity
bound is presented in section 4.2. Numerical experiments are provided in
section 5. Finally, the last section presents some final considerations.
## 2 Preliminaries
In this section, we introduce some notation and results used throughout our
presentation. We denote ${\mathbb{N}}:=\\{0,1,2,\ldots\\}$,
$\langle\cdot,\cdot\rangle$ is the usual inner product in $\mathbb{R}^{n}$ and
$\|\cdot\|$ is the Euclidean norm. Let $f:{\mathbb{R}}^{n}\to{\mathbb{R}}$ be
a differentiable function and $C\subseteq{\mathbb{R}}^{n}$. The gradient
$\nabla f$ of $f$ is said to be Lipschitz continuous on $C$ with constant
$L>0$ if
$\|\nabla f(x)-\nabla f(y)\|\leq L\|x-y\|,\qquad\forall~{}x,y\in C.$ (2)
Combining this definition with the fundamental theorem of calculus, we obtain
the following result, whose proof can be found in [6, Proposition A.24].
###### Lemma 1.
Let $f:{\mathbb{R}}^{n}\to{\mathbb{R}}$ be a differentiable function with
Lipschitz continuous gradient on $C\subseteq{\mathbb{R}}^{n}$ with constant
$L>0$. Then, $f(y)-f(x)-\langle\nabla
f(x),y-x\rangle\leq\frac{L}{2}\|x-y\|^{2}$, for all $x,y\in C$.
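As a concrete check, for the quadratic $f(x)=\frac{1}{2}\|Ax-b\|^{2}$ the gradient $\nabla f(x)=A^{\top}(Ax-b)$ is Lipschitz continuous with constant $L=\lambda_{\max}(A^{\top}A)$, and the inequality of Lemma 1 can be verified numerically on synthetic data (an illustrative sketch, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(30, 10)), rng.normal(size=30)

f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)
L = np.linalg.eigvalsh(A.T @ A).max()   # Lipschitz constant of grad f

# Lemma 1: f(y) - f(x) - <grad f(x), y - x> <= (L/2) ||x - y||^2.
slack = []
for _ in range(1000):
    x, y = rng.normal(size=10), rng.normal(size=10)
    slack.append(L / 2 * np.sum((y - x) ** 2) - (f(y) - f(x) - grad(x) @ (y - x)))
print(min(slack))  # nonnegative up to rounding
```

For this quadratic the left-hand side equals $\frac{1}{2}(y-x)^{\top}A^{\top}A(y-x)$, so the bound is tight when $y-x$ is a leading eigenvector of $A^{\top}A$.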
Let $f:{\mathbb{R}}^{n}\to{\mathbb{R}}$ be a differentiable function and
$C\subseteq{\mathbb{R}}^{n}$ be a convex set. The function $f$ is strongly
convex on $C$, if there exists a constant $\mu\geq 0$ such that
$f(y)-f(x)-\langle\nabla
f(x),y-x\rangle\geq\frac{\mu}{2}\|x-y\|^{2},\qquad\forall~{}x,y\in C.$ (3)
When $\mu=0$, $f$ is said to be convex. If $f(x)\leq f(y)$ implies
$\langle\nabla f(y),x-y\rangle\leq 0$, for any $x,y\in C$, then $f$ is said to
be quasiconvex. Moreover, $f$ is said to be pseudoconvex if $\langle\nabla
f(y),x-y\rangle\geq 0$ implies $f(x)\leq f(y)$, for any $x,y\in C$, for more
details, see [37]. Recall that the convexity of a function guarantees
pseudoconvexity, which in turn guarantees quasiconvexity, see [37].
A point ${\bar{x}}\in C$ is said to be a stationary point for Problem (1) if
$\langle\nabla f({\bar{x}}),x-{\bar{x}}\rangle\geq 0,\qquad\forall~{}x\in C.$
(4)
We end this section with a concept that is useful in the analysis of the sequence
generated by the gradient method; for more details, see [12].
###### Definition 1.
A sequence $(y^{k})_{k\in\mathbb{N}}$ in $\mathbb{R}^{n}$ is quasi-Fejér
convergent to a set $W\subset{\mathbb{R}}^{n}$ if, for every $w\in W$, there
exists a sequence $(\epsilon_{k})_{k\in\mathbb{N}}\subset\mathbb{R}$ such that
$\epsilon_{k}\geq 0$, $\sum_{k\in\mathbb{N}}\epsilon_{k}<+\infty$, and
$\|y^{k+1}-w\|^{2}\leq\|y^{k}-w\|^{2}+\epsilon_{k}$, for all $k\in\mathbb{N}$.
The main property of a quasi-Fejér convergent sequence is stated in the next
result, and its proof can be found in [12].
###### Theorem 2.
Let $(y^{k})_{k\in\mathbb{N}}$ be a sequence in $\mathbb{R}^{n}$. If
$(y^{k})_{k\in\mathbb{N}}$ is quasi-Fejér convergent to a nonempty set
$W\subset{\mathbb{R}}^{n}$, then $(y^{k})_{k\in\mathbb{N}}$ is bounded.
Furthermore, if a cluster point $y$ of $(y^{k})_{k\in\mathbb{N}}$ belongs to
$W$, then $\lim_{k\rightarrow\infty}y^{k}=y$.
### 2.1 Inexact projection
In this section, we present the concept of feasible inexact projection onto a
closed and convex set. This concept has already been used in [1, 13, 14]. We
also present some new properties of the feasible inexact projection used
throughout the paper. The definition of feasible inexact projection is as
follows.
###### Definition 2.
Let $C\subset{\mathbb{R}}^{n}$ be a closed convex set and
$\varphi_{\gamma}:{\mathbb{R}}^{n}\times{\mathbb{R}}^{n}\times{\mathbb{R}}^{n}\to{\mathbb{R}}_{+}$
be a function satisfying the following condition
$\varphi_{\gamma}(u,v,w)\leq\gamma_{1}\|v-u\|^{2}+\gamma_{2}\|w-v\|^{2}+\gamma_{3}\|w-u\|^{2},\qquad\forall~{}u,v,w\in{\mathbb{R}}^{n},$
(5)
where $\gamma=(\gamma_{1},\gamma_{2},\gamma_{3})\in{\mathbb{R}}^{3}_{+}$ is a
given forcing parameter. The feasible inexact projection mapping relative to
$u\in C$ with error tolerance $\varphi_{\gamma}$, denoted by ${\cal
P}_{C}(\varphi_{\gamma},u,\cdot):{\mathbb{R}}^{n}\rightrightarrows C$, is the
set-valued mapping defined as follows
${\cal P}_{C}(\varphi_{\gamma},u,v):=\left\\{w\in
C:~{}\big{\langle}v-w,y-w\big{\rangle}\leq\varphi_{\gamma}(u,v,w),\quad\forall~{}y\in
C\right\\}.$ (6)
Each point $w\in{\cal P}_{C}(\varphi_{\gamma},u,v)$ is called a feasible
inexact projection of $v$ onto $C$ relative to $u$ with error tolerance
$\varphi_{\gamma}$.
The feasible inexact projection generalizes the concept of the usual projection. In
the following, we present some remarks about this concept and some examples of
functions satisfying (5).
###### Remark 1.
Let $\gamma_{1}$, $\gamma_{2}$ and $\gamma_{3}$ be nonnegative forcing
parameters, $C\subset{\mathbb{R}}^{n}$, $u\in C$ and $\varphi_{\gamma}$ be as
in Definition 2. Therefore, for all $v\in{\mathbb{R}}^{n}$, it follows from
(6) that ${\cal P}_{C}(\varphi_{0},u,v)$ is the exact projection of $v$ onto
$C$; see [6, Proposition 2.1.3, p. 201]. Moreover, ${\cal
P}_{C}(\varphi_{0},u,v)\in{\cal P}_{C}(\varphi_{\gamma},u,v)$ which implies
that ${\cal P}_{C}(\varphi_{\gamma},u,v)\neq\varnothing$, for all $u\in C$ and
$v\in{\mathbb{R}}^{n}$. Consequently, the set-valued mapping ${\cal
P}_{C}(\varphi_{\gamma},u,\cdot)$ as stated in (6) is well-defined. Note that
the following functions
$\varphi_{1}(u,v,w)=\gamma_{1}\|v-u\|^{2}+\gamma_{2}\|w-v\|^{2}+\gamma_{3}\|w-u\|^{2}$,
$\varphi_{2}(u,v,w)=\gamma_{1}\|v-u\|^{2}$,
$\varphi_{3}(u,v,w)=\gamma_{2}\|w-v\|^{2}$,
$\varphi_{4}(u,v,w)=\gamma_{3}\|w-u\|^{2}$, and
$\varphi_{5}(u,v,w)=\gamma_{1}\gamma_{2}\gamma_{3}\|v-u\|^{2}\|w-v\|^{2}\|w-u\|^{2}$
satisfy (5).
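To make Definition 2 concrete, consider the box $C=[0,1]^{n}$, for which $\sup_{y\in C}\langle v-w,y-w\rangle$ is available coordinatewise in closed form. The sketch below (with illustrative choices of $u$, $v$, a perturbed candidate $w$, and the tolerance $\varphi_{2}$ of Remark 1 with $\gamma_{1}=1/2$, none of which come from the paper) checks that the exact projection satisfies (6) with zero tolerance, while a feasible perturbation of it still belongs to ${\cal P}_{C}(\varphi_{\gamma},u,v)$:

```python
import numpy as np

n, lo, hi = 5, 0.0, 1.0

def sup_inner(v, w):
    # sup over y in [lo, hi]^n of <v - w, y - w>, computed coordinatewise
    d = v - w
    return np.sum(np.where(d > 0, d * (hi - w), d * (lo - w)))

u = 0.5 * np.ones(n)          # reference point in C
v = 2.0 * np.ones(n)          # point to project (lies outside C)
w_exact = np.clip(v, lo, hi)  # exact projection: satisfies (6) with phi = 0
w_inexact = 0.9 * np.ones(n)  # a perturbed candidate, still feasible

phi2 = 0.5 * np.sum((v - u) ** 2)   # phi_2(u,v,w) = gamma_1 ||v - u||^2, gamma_1 = 1/2

print(sup_inner(v, w_exact))         # <= 0 for the exact projection
print(sup_inner(v, w_inexact), phi2) # membership holds when the first value <= phi2
```

This illustrates the set-valued nature of ${\cal P}_{C}(\varphi_{\gamma},u,\cdot)$: many feasible points besides the exact projection can serve as the next iterate.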
Item (a) of the next lemma is a variation of [14, Lemma 6]. By using item (a),
we will derive an inequality that, together with this item, will play an
important role in the remainder of this paper.
###### Lemma 3.
Let $v\in{\mathbb{R}}^{n}$, $u\in C$,
$\gamma=(\gamma_{1},\gamma_{2},\gamma_{3})\in{\mathbb{R}}^{3}_{+}$ and
$w\in{\cal P}_{C}(\varphi_{\gamma},u,v)$. Then, there hold:
* (a)
$\displaystyle\|w-x\|^{2}\leq\|v-x\|^{2}+\frac{2\gamma_{1}+2\gamma_{3}}{1-2\gamma_{3}}\|v-u\|^{2}-\frac{1-2\gamma_{2}}{1-2\gamma_{3}}\|w-v\|^{2}$,
for all $x\in C$ and $0\leq\gamma_{3}<1/2$;
* (b)
$\displaystyle\big{\langle}v-w,y-w\big{\rangle}\leq\frac{\gamma_{1}+\gamma_{2}}{1-2\gamma_{2}}\|v-u\|^{2}+\frac{\gamma_{3}-\gamma_{2}}{1-2\gamma_{2}}\|w-u\|^{2}$,
for all $y\in C$ and $0\leq\gamma_{2}<1/2$.
###### Proof.
Let $x\in C$ and $0\leq\gamma_{3}<1/2$. First note that
$\|w-x\|^{2}=\|v-x\|^{2}-\|w-v\|^{2}+2\langle v-w,x-w\rangle$. Since
$w\in{\cal P}_{C}(\varphi_{\gamma},u,v)$, combining the last equality with (5)
and (6), we obtain
$\|w-x\|^{2}\leq\|v-x\|^{2}-(1-2\gamma_{2})\|w-v\|^{2}+2\gamma_{1}\|v-u\|^{2}+2\gamma_{3}\|w-u\|^{2}.$
(7)
On the other hand, we have $\|w-u\|^{2}=\|v-u\|^{2}-\|w-v\|^{2}+2\langle
v-w,u-w\rangle$. Thus, due to $w\in{\cal P}_{C}(\varphi_{\gamma},u,v)$ and
$u\in C$, using (5) and (6), and considering that $0\leq\gamma_{3}<1/2$, we
have
$\|w-u\|^{2}\leq\frac{1+2\gamma_{1}}{1-2\gamma_{3}}\|v-u\|^{2}-\frac{1-2\gamma_{2}}{1-2\gamma_{3}}\|w-v\|^{2}.$
Therefore, combining the last inequality with (7), we obtain the inequality of
item $(a)$. For proving item $(b)$, take $y\in C$ and $0\leq\gamma_{2}<1/2$.
Using (5) and (6), we have
$\big{\langle}v-w,y-w\big{\rangle}\leq\gamma_{1}\|v-u\|^{2}+\gamma_{2}\|w-v\|^{2}+\gamma_{3}\|w-u\|^{2}.$
(8)
Applying item $(a)$ with $x=u$, after some algebraic manipulations, we
conclude that
$\|w-v\|^{2}\leq\frac{1+2\gamma_{1}}{1-2\gamma_{2}}\|v-u\|^{2}-\frac{1-2\gamma_{3}}{1-2\gamma_{2}}\|w-u\|^{2}.$
The last inequality together with (8) yields
$\big{\langle}v-w,y-w\big{\rangle}\leq\left(\gamma_{1}+\frac{\gamma_{2}+2\gamma_{1}\gamma_{2}}{1-2\gamma_{2}}\right)\|v-u\|^{2}+\left(\gamma_{3}-\frac{\gamma_{2}-2\gamma_{2}\gamma_{3}}{1-2\gamma_{2}}\right)\|w-u\|^{2},$
which is equivalent to the inequality in $(b)$. ∎
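Since the exact projection belongs to ${\cal P}_{C}(\varphi_{\gamma},u,v)$ for every $\gamma$ (Remark 1), item (a) of Lemma 3 can be sanity-checked numerically by taking $w$ as the exact projection onto a box; the random data below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, worst = 8, -np.inf
for _ in range(500):
    u = rng.uniform(0.0, 1.0, n)            # u in C = [0,1]^n
    x = rng.uniform(0.0, 1.0, n)            # x in C
    v = 2.0 * rng.normal(size=n)            # arbitrary point to project
    w = np.clip(v, 0.0, 1.0)                # exact projection, so w in P_C(phi_gamma, u, v)
    g1, g2, g3 = rng.uniform(0.0, 0.49, 3)  # forcing parameters with gamma_2, gamma_3 < 1/2
    # item (a): ||w-x||^2 <= ||v-x||^2 + (2g1+2g3)/(1-2g3) ||v-u||^2 - (1-2g2)/(1-2g3) ||w-v||^2
    lhs = np.sum((w - x) ** 2)
    rhs = (np.sum((v - x) ** 2)
           + (2 * g1 + 2 * g3) / (1 - 2 * g3) * np.sum((v - u) ** 2)
           - (1 - 2 * g2) / (1 - 2 * g3) * np.sum((w - v) ** 2))
    worst = max(worst, lhs - rhs)
print(worst)  # should not exceed 0 (up to rounding)
```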
## 3 GInexPM employing the constant step size rule
In this section, we describe the GInexPM with a feasible inexact projection
for solving problem (1). The rule for choosing the step size will be the same as the
one used in [3, 6], namely, the constant step size rule. For that, we take an
exogenous sequence of real numbers $(a_{k})_{k\in\mathbb{N}}$ satisfying
$0\leq a_{k}\leq b_{k-1}-b_{k},\qquad k=0,1,\ldots,$ (9)
for some given nonincreasing sequence of nonnegative real numbers
$(b_{k})_{k\in\mathbb{N}}$ converging to zero, with the notation
$b_{-1}\in{\mathbb{R}}_{++}$ such that $b_{-1}>b_{0}$.
###### Remark 2.
Condition (9) implies that $\sum_{k\in\mathbb{N}}a_{k}<b_{-1}$. Examples of
sequences $(a_{k})_{k\in\mathbb{N}}$ and $(b_{k})_{k\in\mathbb{N}}$ satisfying
(9) are obtained by taking $a_{k}:=b_{k-1}-b_{k}$ and, for a given
$\bar{b}>0$: (i) $b_{-1}=3\bar{b}$, $b_{0}=2\bar{b}$, $b_{k}=\bar{b}/k$, for
all $k=1,2,\ldots$; (ii) $b_{-1}=3\bar{b}$, $b_{0}=2\bar{b}$,
$b_{k}=\bar{b}/\ln(k+1)$, for all $k=1,2,\ldots$.
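Example (i) of Remark 2 can be verified directly: with $a_{k}:=b_{k-1}-b_{k}$ the partial sums telescope, so every partial sum of $\sum_{k}a_{k}$ stays below $b_{-1}$ (an illustrative check, with $\bar{b}=1$):

```python
bbar = 1.0
K = 10000
# Example (i) of Remark 2: b_{-1} = 3*bbar, b_0 = 2*bbar, b_k = bbar/k for k >= 1.
b = [3 * bbar, 2 * bbar] + [bbar / k for k in range(1, K + 1)]
# a_k := b_{k-1} - b_k satisfies (9) with equality.
a = [b[i] - b[i + 1] for i in range(len(b) - 1)]

partials = []
s = 0.0
for ak in a:
    s += ak
    partials.append(s)
print(partials[-1])  # approaches b_{-1} = 3*bbar from below as K grows
```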
The conceptual GInexPM is formally stated as follows.
Algorithm 1 GInexPM employing constant step size
Step 0: Take $(a_{k})_{k\in\mathbb{N}}$, $(b_{k})_{k\in\mathbb{N}}$ satisfying
(9) and an error tolerance function $\varphi_{\gamma}$. Let $x^{0}\in C$ and
set $k=0$.
Step 1: If $\nabla f(x^{k})=0$, then stop; otherwise, choose real numbers
$\gamma_{1}^{k},\gamma_{2}^{k}$ and $\gamma_{3}^{k}$ such that
$0\leq\gamma_{1}^{k}+\gamma_{2}^{k}\leq\frac{a_{k}}{\|\nabla
f(x^{k})\|^{2}},\qquad 0\leq\gamma_{2}^{k}<\bar{\gamma_{2}}<\frac{1}{2},\qquad
0\leq\gamma_{3}^{k}<\bar{\gamma}<\frac{1}{2},$ (10) take a fixed step size
$\alpha>0$, and define the next iterate $x^{k+1}$ as any feasible inexact
projection of $z^{k}:=x^{k}-\alpha\nabla f(x^{k})$ onto $C$ relative to
$x^{k}$ with forcing parameters
$\gamma^{k}:=(\gamma^{k}_{1},\gamma^{k}_{2},\gamma^{k}_{3})$, i.e.,
$x^{k+1}\in{\cal P}_{C}\left(\varphi_{\gamma^{k}},x^{k},z^{k}\right).$
Step 2: Set $k\leftarrow k+1$, and go to Step 1.
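A minimal sketch of Algorithm 1 on a hypothetical box-constrained least squares problem, using the exact projection (admissible, since $\gamma^{k}\equiv 0$ yields ${\cal P}_{C}(\varphi_{0},x^{k},z^{k})$, see Remark 1) and a constant step size $\alpha=0.9/L$ satisfying (12) with $\bar{\gamma}=0$; the data and dimensions are illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(30, 20)), rng.normal(size=30)
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)
L = np.linalg.eigvalsh(A.T @ A).max()
alpha = 0.9 / L                        # constant step size, satisfies (12) with gamma_bar = 0

proj = lambda z: np.clip(z, 0.0, 1.0)  # exact projection onto C = [0,1]^20

x = proj(rng.normal(size=20))          # x^0 in C
vals = [f(x)]
for _ in range(10000):
    x = proj(x - alpha * grad(x))      # x^{k+1} = P_C(z^k), z^k = x^k - alpha * grad f(x^k)
    vals.append(f(x))

# gradient-mapping residual: zero exactly at stationary points, cf. (4)
residual = np.linalg.norm(x - proj(x - alpha * grad(x)))
print(vals[-1], residual)
```

Replacing `proj` by an inner procedure that returns any point of ${\cal P}_{C}(\varphi_{\gamma^{k}},x^{k},z^{k})$ gives the general method; section 5 of the paper presents such a procedure.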
Let us describe the main features of the GInexPM. Firstly, we take exogenous
sequences $(a_{k})_{k\in\mathbb{N}}$ and $(b_{k})_{k\in\mathbb{N}}$ satisfying
(9) and an error tolerance function $\varphi_{\gamma}$. Then, we check if at
the current iterate $x^{k}$ we have $\nabla f(x^{k})=0$, otherwise, we choose
nonnegative forcing parameters $\gamma_{1}^{k}$, $\gamma_{2}^{k}$ and
$\gamma_{3}^{k}$ satisfying (10). Set a fixed step size $\alpha>0$. By using
some inner procedure, the next iterate $x^{k+1}$ is computed as any feasible
inexact projection of $z^{k}=x^{k}-\alpha\nabla f(x^{k})$ onto the feasible
set $C$ relative to $x^{k}$, i.e., $x^{k+1}\in{\cal
P}_{C}(\varphi_{\gamma^{k}},x^{k},z^{k})$; an example of such a procedure will
be presented in section 5. Note that, if $\gamma_{1}^{k}\equiv 0$,
$\gamma_{2}^{k}\equiv 0$ and $\gamma_{3}^{k}\equiv 0$, then ${\cal
P}_{C}({\varphi_{0}},x^{k},z^{k})$ is the exact projection, see Remark 1, and
our method corresponds to the usual projected gradient method proposed, for
example, in [3, 6]. It is worth noting that $\gamma_{1}^{k}$ and
$\gamma_{2}^{k}$ in (10) can be chosen as any nonnegative real numbers
satisfying $0\leq(\gamma_{1}^{k}+\gamma_{2}^{k})\|\nabla f(x^{k})\|^{2}\leq
a_{k}$, for prefixed sequences $(a_{k})_{k\in\mathbb{N}}$ and
$(b_{k})_{k\in\mathbb{N}}$ satisfying (9). In this case, we have
$\sum_{k\in\mathbb{N}}\big{[}(\gamma_{1}^{k}+\gamma_{2}^{k})\|\nabla
f(x^{k})\|^{2}\big{]}<+\infty.$ (11)
In the next sections, we will deal with the convergence analysis of the
sequence $(x^{k})_{k\in\mathbb{N}}$ generated by GInexPM.
### 3.1 Asymptotic convergence analysis
The aim of this section is to prove the main convergence results about the
asymptotic behavior of the sequence $(x^{k})_{k\in\mathbb{N}}$ generated by
Algorithm 1. We assume that the gradient of the objective function $f$ is
Lipschitz continuous with constant $L>0$. Moreover, we also assume that
$0<\alpha<\frac{1-2\bar{\gamma}}{L}.$ (12)
For future references, it is convenient to define the following constants:
$\nu:=\frac{1-\bar{\gamma_{2}}-\bar{\gamma}}{\alpha}-\frac{L}{2}>0,\qquad\rho:=\frac{\alpha}{1-2\bar{\gamma_{2}}}>0.$
(13)
In the sequel, we state and prove our first result for the sequence
$(x^{k})_{k\in\mathbb{N}}$. The obtained inequality is the counterpart of the
one obtained, for example, in [3, Lemma 9.11, p. 176].
###### Lemma 4.
The following inequality holds:
$f(x^{k+1})\leq f(x^{k})+\rho(\gamma_{1}^{k}+\gamma_{2}^{k})\|\nabla
f(x^{k})\|^{2}-\nu\|x^{k+1}-x^{k}\|^{2},\qquad\forall~{}{k\in\mathbb{N}}.$
(14)
###### Proof.
Since $\nabla f$ satisfies (2), applying Lemma 1 with $x=x^{k}$ and
$y=x^{k+1}$, we obtain
$f(x^{k+1})\leq f(x^{k})+\big{\langle}\nabla
f(x^{k}),x^{k+1}-x^{k}\big{\rangle}+\frac{L}{2}\|x^{k+1}-x^{k}\|^{2}.$
Thus, after some algebraic manipulations, we have
$f(x^{k+1})\leq f(x^{k})+\frac{1}{\alpha}\big{\langle}[x^{k}-\alpha\nabla
f(x^{k})]-x^{k+1},x^{k}-x^{k+1}\big{\rangle}-\left(\frac{1}{\alpha}-\frac{L}{2}\right)\|x^{k+1}-x^{k}\|^{2}.$
(15)
Since $x^{k+1}\in{\cal P}_{C}(\varphi_{\gamma^{k}},x^{k},z^{k})$ with
$z^{k}=x^{k}-\alpha\nabla f(x^{k})$, applying item $(b)$ of Lemma 3 with
$u=x^{k}$, $y=x^{k}$, $v=z^{k}$, $w=x^{k+1}$, and
$\varphi_{\gamma}=\varphi_{\gamma_{k}}$, we have
$\big{\langle}[x^{k}-\alpha\nabla
f(x^{k})]-x^{k+1},x^{k}-x^{k+1}\big{\rangle}\leq\frac{\gamma_{1}^{k}+\gamma_{2}^{k}}{1-2\gamma_{2}^{k}}\alpha^{2}\|\nabla
f(x^{k})\|^{2}+\frac{\gamma_{3}^{k}-\gamma_{2}^{k}}{1-2\gamma_{2}^{k}}\|x^{k+1}-x^{k}\|^{2}.$
Then, combining (15) with the latter inequality yields
$f(x^{k+1})\leq
f(x^{k})+\frac{\alpha(\gamma_{1}^{k}+\gamma_{2}^{k})}{1-2\gamma_{2}^{k}}\|\nabla
f(x^{k})\|^{2}-\left[\frac{1-\gamma_{2}^{k}-\gamma_{3}^{k}}{\alpha(1-2\gamma_{2}^{k})}-\frac{L}{2}\right]\|x^{k+1}-x^{k}\|^{2}.$
Therefore, taking into account (10) and (13), we have (14), which concludes
the proof. ∎
The next result is an immediate consequence of Lemma 4.
###### Corollary 5.
The sequence $(f(x^{k})+\rho b_{k-1})_{k\in\mathbb{N}}$ is monotone
nonincreasing. In particular, $\inf_{k}(f(x^{k})+\rho b_{k-1})=\inf_{k}f(x^{k})$.
###### Proof.
Combining (10) with (14), we have $f(x^{k+1})\leq f(x^{k})+\rho a_{k}$, for
all ${k\in\mathbb{N}}$. Thus, taking into account that
$(a_{k})_{k\in\mathbb{N}}$ satisfies (9), we obtain
$f(x^{k+1})+\rho{b_{k}}\leq f(x^{k})+\rho{b_{k-1}}$, for all
${k\in\mathbb{N}}$, implying the first statement. Since
$(b_{k})_{k\in\mathbb{N}}$ converges to zero, the second statement holds. ∎
Now, we are ready to state and prove a partial asymptotic convergence result
on $(x^{k})_{k\in\mathbb{N}}$.
###### Theorem 6.
Assume that $-\infty<f^{*}$. If ${\bar{x}}\in C$ is a cluster point of the
sequence $(x^{k})_{k\in\mathbb{N}}$, then ${\bar{x}}$ is a stationary point
for problem (1).
###### Proof.
By (10), we have $(\gamma_{1}^{k}+\gamma_{2}^{k})\|\nabla f(x^{k})\|^{2}\leq
a_{k}$, for all ${k\in\mathbb{N}}$. Thus, Lemma 4 implies that
$\nu\|x^{k+1}-x^{k}\|^{2}\leq[f(x^{k})-f(x^{k+1})]+\rho a_{k}$, for all
${k\in\mathbb{N}}$. Using (9), after some adjustments, we obtain
$\|x^{k+1}-x^{k}\|^{2}\leq\left[f(x^{k})+\rho
b_{k-1}\right]/{\nu}-\left[f(x^{k+1})+\rho b_{k}\right]/{\nu}$, for all
$k\in\mathbb{N}$. Hence, due to $f^{*}\leq\inf_{k}f(x^{k})$, Corollary 5
implies $\sum_{\ell=0}^{k}\|x^{\ell+1}-x^{\ell}\|^{2}\leq[f(x^{0})+\rho
b_{-1}]/{\nu}-f^{*}/{\nu}$. Thus, we conclude that
$\lim_{k\to+\infty}\|x^{k+1}-x^{k}\|=0$. Let ${\bar{x}}$ be a cluster point of
$(x^{k})_{k\in\mathbb{N}}$ and $(x^{k_{j}})_{j\in\mathbb{N}}$ a subsequence of
$(x^{k})_{k\in\mathbb{N}}$ such that $\lim_{j\to+\infty}x^{k_{j}}=~{}\bar{x}$.
Since, $\lim_{j\to+\infty}(x^{k_{j}+1}-x^{k_{j}})=0$, we have
$\lim_{j\to+\infty}x^{k_{j}+1}={\bar{x}}$. On the other hand, due to
$x^{k_{j}+1}\in{\cal P}_{C}(\varphi_{\gamma^{k_{j}}},x^{k_{j}},z^{k_{j}})$,
where $z^{k_{j}}:=x^{k_{j}}-\alpha\nabla f(x^{k_{j}})$, applying item $(b)$ of
Lemma 3 with $v=z^{k_{j}}$, $u=x^{k_{j}}$, $w=x^{k_{j}+1}$, and
$\varphi_{\gamma}=\varphi_{\gamma^{k_{j}}},$ we obtain
$\big{\langle}z^{k_{j}}-x^{k_{j}+1},y-x^{k_{j}+1}\big{\rangle}\leq\frac{\gamma_{1}^{k_{j}}+\gamma_{2}^{k_{j}}}{1-2\gamma_{2}^{k_{j}}}\alpha^{2}\|\nabla
f(x^{k_{j}})\|^{2}+\frac{\gamma_{3}^{k_{j}}-\gamma_{2}^{k_{j}}}{1-2\gamma_{2}^{k_{j}}}\|x^{k_{j}+1}-x^{k_{j}}\|^{2},\qquad\forall~{}y\in
C.$
Thus, taking limits on both sides of the last inequality, we conclude, by
using (11) and continuity of $\nabla f$, that
$\big{\langle}[{\bar{x}}-\alpha\nabla
f({\bar{x}})]-{\bar{x}},y-{\bar{x}}\big{\rangle}\leq 0$, for all $y\in C$.
Therefore, $\big{\langle}\nabla f({\bar{x}}),y-{\bar{x}}\big{\rangle}\geq 0$,
for all $y\in C$, which implies that ${\bar{x}}\in C$ is a stationary point
for problem (1). ∎
In the following lemma, we establish a basic inequality satisfied by
$(x^{k})_{k\in\mathbb{N}}$. In particular, it will be useful to prove the full
asymptotic convergence of $(x^{k})_{k\in\mathbb{N}}$ under quasiconvexity of
$f$.
###### Lemma 7.
For each $x\in C$ and ${k\in\mathbb{N}}$, there holds
$\|x^{k+1}-x\|^{2}\leq\|x^{k}-x\|^{2}+2\alpha\rho(\gamma_{1}^{k}+\gamma_{2}^{k})\|\nabla
f(x^{k})\|^{2}+2\alpha\big{[}f(x^{k})-f(x^{k+1})+\langle\nabla
f(x^{k}),x-x^{k}\rangle\big{]}.$
###### Proof.
Let $x\in C$. By using $z^{k}=x^{k}-\alpha\nabla f(x^{k})$, after some
algebraic manipulations, we have
$\displaystyle\|x^{k+1}-x\|^{2}=\|x^{k}-x\|^{2}-\|x^{k+1}-x^{k}\|^{2}+2\big{\langle}z^{k}-x^{k+1},x-x^{k+1}\big{\rangle}+2\alpha\big{\langle}\nabla
f(x^{k}),x-x^{k+1}\big{\rangle}.$
Since $x^{k+1}\in{\cal P}_{C}(\varphi_{\gamma^{k}},x^{k},z^{k})$, applying
item $(b)$ of Lemma 3 with $y=x$, $v=z^{k}$, $u=x^{k}$, $w=x^{k+1}$, and
$\varphi_{\gamma}=\varphi_{\gamma^{k}},$ we obtain
$\big{\langle}z^{k}-x^{k+1},x-x^{k+1}\big{\rangle}\leq\frac{\gamma_{1}^{k}+\gamma_{2}^{k}}{1-2\gamma_{2}^{k}}\alpha^{2}\|\nabla
f(x^{k})\|^{2}+\frac{\gamma_{3}^{k}-\gamma_{2}^{k}}{1-2\gamma_{2}^{k}}\|x^{k+1}-x^{k}\|^{2}.$
On the other hand, since $\nabla f$ satisfies (2), Lemma 1 with $x=x^{k}$ and
$y=x^{k+1}$ yields
$\displaystyle\big{\langle}\nabla f(x^{k}),x-x^{k+1}\big{\rangle}$
$\displaystyle=\big{\langle}\nabla
f(x^{k}),x^{k}-x^{k+1}\big{\rangle}+\big{\langle}\nabla
f(x^{k}),x-x^{k}\big{\rangle}$ $\displaystyle\leq
f(x^{k})-f(x^{k+1})+\frac{L}{2}\|x^{k+1}-x^{k}\|^{2}+\big{\langle}\nabla
f(x^{k}),x-x^{k}\big{\rangle}.$
Combining the last two inequalities with the above equality, we conclude that
$\|x^{k+1}-x\|^{2}\leq\|x^{k}-x\|^{2}-\left[\frac{1-2\gamma_{3}^{k}}{1-2\gamma_{2}^{k}}-\alpha
L\right]\|x^{k+1}-x^{k}\|^{2}\\\
+2\alpha^{2}\left(\frac{\gamma_{1}^{k}+\gamma_{2}^{k}}{1-2\gamma_{2}^{k}}\right)\|\nabla
f(x^{k})\|^{2}+2\alpha\left[f(x^{k})-f(x^{k+1})+\big{\langle}\nabla
f(x^{k}),x-x^{k}\big{\rangle}\right].$ (16)
Taking into account (10), we have $0\leq 1-2\bar{\gamma_{2}}\leq
1-2\gamma_{2}^{k}\leq 1$ and $1-2\gamma_{3}^{k}\geq 1-2\bar{\gamma}\geq 0$.
Hence, it follows from (12) and (13) that
$\frac{\alpha}{1-2\gamma_{2}^{k}}\leq\frac{\alpha}{1-2\bar{\gamma_{2}}}=\rho,\qquad\qquad\left[\frac{1-2\gamma_{3}^{k}}{1-2\gamma_{2}^{k}}-\alpha
L\right]\geq\left[1-2\gamma_{3}^{k}-\alpha
L\right]\geq\left[1-2\bar{\gamma}-\alpha L\right]\geq 0.$
These inequalities, together with (16), imply the desired inequality, which
concludes the proof. ∎
To proceed with the analysis of $(x^{k})_{k\in\mathbb{N}}$, we also need the
following auxiliary set
$T:=\left\\{x\in C:f(x)\leq\inf_{k\in\mathbb{N}}f(x^{k})\right\\}.$
###### Corollary 8.
Assume that $f$ is a quasiconvex function. If $T\neq\varnothing$, then
$(x^{k})_{k\in\mathbb{N}}$ converges to a stationary point for problem (1).
###### Proof.
Let $x\in T$. Since $f$ is a quasiconvex function and $f(x)\leq f(x^{k})$ for
all $k\in\mathbb{N}$, we have $\big{\langle}\nabla
f(x^{k}),x-x^{k}\big{\rangle}\leq~{}0$, for all $k\in\mathbb{N}$. Thus,
applying Lemma 7, we conclude that
$\|x^{k+1}-x\|^{2}\leq\|x^{k}-x\|^{2}+2\alpha\rho(\gamma_{1}^{k}+\gamma_{2}^{k})\|\nabla
f(x^{k})\|^{2}+\\\
2\alpha\left[f(x^{k})-f(x^{k+1})\right],\qquad\forall~{}{k\in\mathbb{N}}.$
Thus, using the first condition in (10) and considering that
$(a_{k})_{k\in\mathbb{N}}$ and $(b_{k})_{k\in\mathbb{N}}$ satisfy (9), the
latter inequality implies
$\|x^{k+1}-x\|^{2}\leq\|x^{k}-x\|^{2}+2\alpha\left([f(x^{k})+\rho
b_{k-1}]-[f(x^{k+1})+\rho b_{k}]\right),\quad\forall~{}k\in\mathbb{N}.$ (17)
On the other hand, performing a sum of (17) for $k=0,1,\ldots,N$ and using
that $x\in T$, we obtain
$\sum_{k=0}^{N}\left(\big{[}f(x^{k})+\rho
b_{k-1}\big{]}-\big{[}f(x^{k+1})+\rho b_{k}\big{]}\right)\leq
f(x^{0})-f(x)+\rho(b_{-1}-b_{N}),$ (18)
for any $N\in\mathbb{N}$. Therefore, (17) and (18) imply that
$(x^{k})_{k\in\mathbb{N}}$ is quasi-Fejér convergent to $T$. Since by
assumption $T\neq\varnothing$, it follows from Theorem 2 that
$(x^{k})_{k\in\mathbb{N}}$ is bounded. Let $\bar{x}$ be a cluster point of
$(x^{k})_{k\in\mathbb{N}}$ and $(x^{k_{j}})_{j\in\mathbb{N}}$ a subsequence of
$(x^{k})_{k\in\mathbb{N}}$ such that $\lim_{j\to+\infty}x^{k_{j}}=\bar{x}$.
Considering that $f$ is continuous, it follows from Corollary 5 that
$\inf_{k}f(x^{k})=\inf_{k}\left(f(x^{k})+\rho
b_{k-1}\right)=\lim_{j\to+\infty}\left(f(x^{k_{j}})+\rho
b_{k_{j}-1}\right)=\lim_{j\to+\infty}f(x^{k_{j}})=f(\bar{x}).$
Therefore $\bar{x}\in T$. Using again Theorem 2, we have that
$(x^{k})_{k\in\mathbb{N}}$ converges to $\bar{x}$, and the conclusion is
obtained from Theorem 6. ∎
In the following, we present an important result, when
$(x^{k})_{k\in\mathbb{N}}$ has no cluster points. This result has already
appeared in several papers studying the gradient method with exact projection, see
for example [4, 30]; however, since its proof is very simple and concise, we
include it here for the sake of completeness.
###### Lemma 9.
If $f$ is a quasiconvex function and $(x^{k})_{k\in\mathbb{N}}$ has no cluster
points, then $\Omega^{*}=\varnothing$, $\lim_{k\to\infty}\|x^{k}\|=\infty$, and
$\lim_{k\to\infty}f(x^{k})=\inf\\{f(x):x\in C\\}$.
###### Proof.
Since $(x^{k})_{k\in\mathbb{N}}$ has no cluster points, then
$\lim_{k\to\infty}\|x^{k}\|=\infty$. Assume that problem (1) has an optimum,
say $\tilde{x}$, so $f(\tilde{x})\leq f(x^{k})$ for all $k$. Thus,
$\tilde{x}\in T$. Using Corollary 8, we have that $(x^{k})_{k\in\mathbb{N}}$
is convergent, contradicting that $\lim_{k\to\infty}\|x^{k}\|=\infty$.
Therefore, $\Omega^{*}=\varnothing$. Now, we claim that
$\lim_{k\to\infty}f(x^{k})=\inf\\{f(x):x\in C\\}$. If
$\lim_{k\to\infty}f(x^{k})=-\infty$, the claim holds. Let $f^{*}=\inf_{x\in
C}f(x)$. By contradiction, suppose that $\lim_{k\to\infty}f(x^{k})>f^{*}$.
Then, there exists $\tilde{x}\in C$ such that $f(\tilde{x})\leq f(x^{k})$ for
all $k$. Using Corollary 8, we obtain that $(x^{k})_{k\in\mathbb{N}}$ is
convergent, contradicting again $\lim_{k\to\infty}\|x^{k}\|=\infty$, which
concludes the proof. ∎
Finally, we present the main convergence result when $f$ is pseudoconvex,
which is a version of [4, Corollary 3] for our algorithm, see also [29,
Proposition 5].
###### Theorem 10.
Assume that $f$ is a pseudoconvex function. Then, $\Omega^{*}\neq\varnothing$
if and only if $(x^{k})_{k\in\mathbb{N}}$ has at least one cluster point.
Moreover, $(x^{k})_{k\in\mathbb{N}}$ converges to an optimum point if
$\Omega^{*}\neq\varnothing$; otherwise, $\lim_{k\to\infty}\|x^{k}\|=\infty$
and $\lim_{k\to\infty}f(x^{k})=\inf\\{f(x):x\in C\\}$.
###### Proof.
Note that pseudoconvex functions are quasiconvex. First assume that
$\Omega^{*}\neq\varnothing$. In this case, we have also $T\neq\varnothing$.
Thus, using Corollary 8, we conclude that $(x^{k})_{k\in\mathbb{N}}$ converges
to a stationary point of problem (1) and, in particular,
$(x^{k})_{k\in\mathbb{N}}$ has a cluster point. Considering that $f$ is
pseudoconvex, this point is also an optimum point. Reciprocally, let $\bar{x}$
be a cluster point of $(x^{k})_{k\in\mathbb{N}}$ and
$(x^{k_{j}})_{j\in\mathbb{N}}$ a subsequence of $(x^{k})_{k\in\mathbb{N}}$
such that $\lim_{j\to+\infty}x^{k_{j}}=\bar{x}$. Since, by Corollary 5,
$(f(x^{k})+\rho b_{k-1})_{k\in\mathbb{N}}$ is monotone non-increasing, using
the continuity of $f$, we have
$\inf_{k}f(x^{k})=\inf_{k}\left(f(x^{k})+\rho
b_{k-1}\right)=\lim_{j\to+\infty}\left(f(x^{k_{j}})+\rho
b_{k_{j}-1}\right)=\lim_{j\to+\infty}f(x^{k_{j}})=f(\bar{x}).$
Therefore $\bar{x}\in T$. From Corollary 8, we obtain that
$(x^{k})_{k\in\mathbb{N}}$ converges to a stationary point $\tilde{x}$ of
problem (1). Thus, by (4), we have $\langle\nabla
f(\tilde{x}),x-\tilde{x}\rangle\geq 0$ for all $x\in C$, which by the
pseudoconvexity of $f$ implies $f(x)\geq f(\tilde{x})$ for all $x\in C$. Therefore,
$\bar{x}\in\Omega^{*}$ and $\Omega^{*}\neq\varnothing$. The last part of the
theorem follows by combining the first one with Lemma 9. ∎
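As a concrete illustration of Theorem 10 (a sketch of ours, not part of the original analysis), the following Python snippet runs the exact-projection version of the method on a pseudoconvex but nonconvex one-dimensional problem; the objective, the step size, and the feasible set are our own choices.

```python
import numpy as np

# Toy check of Theorem 10: f(x) = x + x^3/3 on C = [-1, 1]. Here
# f'(x) = 1 + x^2 > 0, so f is strictly increasing, hence pseudoconvex,
# yet f''(x) = 2x < 0 for x < 0, so f is not convex on C. The exact
# projection onto [-1, 1] stands in for the inexact projection.

grad = lambda x: 1.0 + x ** 2
alpha = 0.4                      # below 1/L, with L = 2 on [-1, 1]

x = 0.0
for _ in range(100):
    x = float(np.clip(x - alpha * grad(x), -1.0, 1.0))

print(x)   # the iterates settle at the unique minimizer x* = -1
```

Here $\Omega^{*}=\{-1\}$ is nonempty, and the sequence indeed converges to it, as the theorem predicts.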
### 3.2 Iteration-complexity bound
In this section, we establish some iteration-complexity bounds for the
sequence $(x^{k})_{k\in\mathbb{N}}$ generated by Algorithm 1. For that, we
take $x^{*}\in\Omega^{*}\neq\varnothing$, set $f^{*}=f(x^{*})$, and define the
constant
$\eta:=f(x^{0})-f^{*}+\rho b_{-1}.$
In the next result, we make no convexity assumption on the objective
function.
###### Theorem 11.
Let $\nu>0$ be as in (13). Then, for all $N\in\mathbb{N}$, there holds
$\min\left\\{\|x^{k+1}-x^{k}\|:~{}k=0,1,\ldots,N\right\\}\leq\frac{\sqrt{\eta/\nu}}{\sqrt{N+1}}.$
(19)
###### Proof.
It follows from (10) and (14) that $f(x^{k+1})\leq f(x^{k})+\rho
a_{k}-\nu\|x^{k+1}-x^{k}\|^{2}$, for all ${k\in\mathbb{N}}$. Hence using (9),
we have $\|x^{k+1}-x^{k}\|^{2}\leq\frac{1}{\nu}\left[\big{(}f(x^{k})+\rho
b_{k-1}\big{)}-\big{(}f(x^{k+1})+\rho b_{k}\big{)}\right]$, for all
${k\in\mathbb{N}}$. Thus, summing both sides for $k=0,1,\ldots,N$ and using
that $f^{*}\leq f(x^{k})$ for all ${k\in\mathbb{N}}$ , we obtain
$\sum_{k=0}^{N}\|x^{k+1}-x^{k}\|^{2}\leq\frac{1}{\nu}\big{[}f(x^{0})-f^{*}+\rho(b_{-1}-b_{N})\big{]}\leq\frac{1}{\nu}\big{[}f(x^{0})-f^{*}+\rho
b_{-1}\big{]}=\eta/\nu.$
Therefore,
$(N+1)\min\left\\{\|x^{k+1}-x^{k}\|^{2}:~{}k=0,1,\ldots,N\right\\}\leq\eta/\nu$,
which implies (19). ∎
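The bound (19) is easy to check numerically. The sketch below (our own construction) uses the exact-projection case, where $a_{k}=b_{k}=0$, $\eta=f(x^{0})-f^{*}$, and the bracket in (14) gives $\nu=1/\alpha-L/2$; the quadratic objective and the box constraint are assumptions of the example.

```python
import numpy as np

# Numerical illustration of (19) in the exact-projection case
# (gamma^k = 0, so a_k = b_k = 0 and nu = 1/alpha - L/2, cf. (14)).
# Toy problem of ours: f(x) = 0.5*||x - c||^2 (L = 1) over C = [0, 1]^3.

c = np.array([2.0, -1.0, 0.4])
f = lambda x: 0.5 * np.sum((x - c) ** 2)
grad = lambda x: x - c
alpha, L = 0.5, 1.0
nu = 1.0 / alpha - L / 2.0            # = 1.5

x = np.zeros(3)
steps = []
for _ in range(50):
    x_next = np.clip(x - alpha * grad(x), 0.0, 1.0)   # exact projection
    steps.append(np.linalg.norm(x_next - x))
    x = x_next

f_star = f(np.clip(c, 0.0, 1.0))      # minimizer of f over the box
eta = f(np.zeros(3)) - f_star
bound_holds = all(
    min(steps[:N + 1]) <= np.sqrt(eta / nu) / np.sqrt(N + 1) + 1e-12
    for N in range(50))
print(bound_holds)
```

For this separable quadratic the per-step decrease matches $\nu\|x^{k+1}-x^{k}\|^{2}$ almost exactly, so the bound is nearly tight at $N=0$.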
In the following, we present an iteration-complexity bound for the sequence
$(x^{k})_{k\in\mathbb{N}}$, for finding $\epsilon$-stationary points of
function $f$.
###### Theorem 12.
For every $N\in\mathbb{N}$, there holds
$\min_{k=0,1,\ldots,N}\big{\langle}\nabla
f(x^{k}),x^{k}-x\big{\rangle}\leq\left[\frac{1}{2\alpha}\|x^{0}-x\|^{2}+\eta\right]\frac{1}{N+1},\qquad\forall~{}x\in
C.$
As a consequence, given $\epsilon>0$ and $x\in C$, whenever
$N\geq[\frac{1}{2\alpha}\|x^{0}-x\|^{2}+\eta]/\epsilon-1$, there exists
$\ell\in\{0,1,\ldots,N\}$ such that $\big{\langle}\nabla
f(x^{\ell}),x-x^{\ell}\big{\rangle}\geq-\epsilon$.
###### Proof.
Using Lemma 7 and (10), we obtain
$\big{\langle}\nabla
f(x^{k}),x^{k}-x\big{\rangle}\leq\frac{1}{2\alpha}\left[\|x^{k}-x\|^{2}-\|x^{k+1}-x\|^{2}\right]+\left[\rho
a_{k}+f(x^{k})-f(x^{k+1})\right],$
for all ${k\in\mathbb{N}}$. Since $(a_{k})_{k\in\mathbb{N}}$ and
$(b_{k})_{k\in\mathbb{N}}$ satisfy (9), we conclude that
$\big{\langle}\nabla
f(x^{k}),x^{k}-x\big{\rangle}\leq\frac{1}{2\alpha}\left[\|x^{k}-x\|^{2}-\|x^{k+1}-x\|^{2}\right]+\left[\left(f(x^{k})+\rho
b_{k-1}\right)-\left(f(x^{k+1})+\rho b_{k}\right)\right].$
Thus, summing both sides for $k=0,1,\ldots,N$ and using that $f^{*}\leq
f(x^{k})$ for all ${k\in\mathbb{N}}$, we have
$\sum_{k=0}^{N}\big{\langle}\nabla
f(x^{k}),x^{k}-x\big{\rangle}\leq\frac{1}{2\alpha}\|x^{0}-x\|^{2}+\big{[}f(x^{0})-f^{*}+\rho(b_{-1}-b_{N})\big{]}=\frac{1}{2\alpha}\|x^{0}-x\|^{2}+\eta,$
which implies that $(N+1)\min\left\\{\big{\langle}\nabla
f(x^{k}),x^{k}-x\big{\rangle}:~{}k=0,1,\ldots,N\right\\}\leq\frac{1}{2\alpha}\|x^{0}-x\|^{2}+\eta,$
obtaining the first statement of the theorem. The second statement follows
trivially from the first one. ∎
#### 3.2.1 Iteration-complexity bound under convexity
The next result presents an iteration-complexity bound for
$(f(x^{k}))_{k\in\mathbb{N}}$ when $f$ is a convex function. A similar bound
for unconstrained problems can be found in [40, Theorem 2.1.14].
###### Theorem 13.
Assume that $f$ is a convex function. Then, for every $N\in\mathbb{N}$, there
holds
$\min\left\\{f(x^{k})-f^{*}:~{}k=1,\ldots,N\right\\}\leq\frac{\|x^{0}-x^{*}\|^{2}+2\alpha\rho
b_{-1}}{2\alpha N}.$
###### Proof.
Since $f$ is convex, we have $2\alpha\langle\nabla
f(x^{k}),x^{*}-x^{k}\rangle\leq 2\alpha\left[f(x^{*})-f(x^{k})\right]$. Thus,
using (10) and Lemma 7 with $x=x^{*}$, we obtain
$2\alpha[f(x^{k+1})-f^{*}]\leq\|x^{k}-x^{*}\|^{2}-\|x^{k+1}-x^{*}\|^{2}+2\alpha\rho
a_{k}$, for all $k\in\mathbb{N}$. Summing this inequality
for $k=0,1,\ldots,N-1$, we have
$2\alpha\sum_{k=0}^{N-1}\left[f(x^{k+1})-f^{*}\right]\leq\|x^{0}-x^{*}\|^{2}-\|x^{N}-x^{*}\|^{2}+2\alpha\rho\sum_{k=0}^{N-1}a_{k}.$
Since $\sum_{k\in\mathbb{N}}a_{k}<b_{-1}$, we have $2\alpha
N\min\left\\{f(x^{k})-f^{*}:k=1,\ldots,N\right\\}\leq\|x^{0}-x^{*}\|^{2}+2\alpha\rho
b_{-1},$ which implies the desired inequality. ∎
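A numerical sanity check of Theorem 13 (ours, with exact projection so that $b_{-1}=0$) on a convex but not strongly convex quadratic:

```python
import numpy as np

# Toy check of Theorem 13: f(x) = 0.5*(x1 + x2 - 1)^2 over C = [0, 1]^2.
# f is convex but not strongly convex (f* = 0 on the segment x1 + x2 = 1).
# Data and parameters are our own; the projection onto the box is exact.

a = np.array([1.0, 1.0])
f = lambda x: 0.5 * (a @ x - 1.0) ** 2
grad = lambda x: a * (a @ x - 1.0)
alpha = 0.2                       # below 1/L, with L = ||a||^2 = 2

x = np.zeros(2)
values = [f(x)]
for _ in range(30):
    x = np.clip(x - alpha * grad(x), 0.0, 1.0)
    values.append(f(x))

x_star = np.array([0.5, 0.5])     # the limit of this particular run
dist0_sq = np.sum((np.zeros(2) - x_star) ** 2)
# Theorem 13 (exact case): min_{1<=k<=N} f(x^k) - f* <= ||x0-x*||^2/(2*alpha*N)
ok = all(min(values[1:N + 1]) <= dist0_sq / (2.0 * alpha * N) + 1e-12
         for N in range(1, 31))
print(ok)
```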
#### 3.2.2 Iteration-complexity bound under strong convexity
Our next goal is to show an iteration-complexity bound for
$\left(f(x^{k})\right)_{k\in\mathbb{N}}$ when $f$ is strongly convex. For this
purpose, we first present an inequality that is a variation of [11, Lemma
3.6].
###### Lemma 14.
Assume that $f$ is $\mu$-strongly convex. Then, for all $k\in\mathbb{N}$,
there holds
$f(x^{k+1})-f^{*}\leq\frac{1}{\alpha}\big{\langle}x^{k}-x^{k+1},x^{k}-x^{*}\big{\rangle}-\nu\|x^{k+1}-x^{k}\|^{2}+\frac{\gamma_{1}^{k}+\gamma_{2}^{k}}{1-2\gamma_{2}^{k}}\alpha\|\nabla
f(x^{k})\|^{2}-\frac{\mu}{2}\|x^{k}-x^{*}\|^{2}.$
###### Proof.
Applying Lemma 1 with $x=x^{k}$ and $y=x^{k+1}$, and then using (3), we obtain
$\displaystyle f(x^{k+1})-f^{*}$ $\displaystyle\leq\big{\langle}\nabla
f(x^{k}),x^{k+1}-x^{k}\big{\rangle}+\frac{L}{2}\|x^{k}-x^{k+1}\|^{2}+\big{\langle}\nabla
f(x^{k}),x^{k}-x^{*}\big{\rangle}-\frac{\mu}{2}\|x^{k}-x^{*}\|^{2}.$
$\displaystyle=\big{\langle}\nabla
f(x^{k}),x^{k+1}-x^{*}\big{\rangle}+\frac{L}{2}\|x^{k}-x^{k+1}\|^{2}-\frac{\mu}{2}\|x^{k}-x^{*}\|^{2}.$
(20)
On the other hand, due to $z^{k}=x^{k}-\alpha\nabla f(x^{k})$, after some
algebraic manipulations, we have
$\big{\langle}\nabla
f(x^{k}),x^{k+1}-x^{*}\big{\rangle}=\frac{1}{\alpha}\big{\langle}x^{k}-x^{k+1},x^{k+1}-x^{*}\big{\rangle}+\frac{1}{\alpha}\big{\langle}z^{k}-x^{k+1},x^{*}-x^{k+1}\big{\rangle}.$
(21)
Since $x^{k+1}\in{\cal P}_{C}(\varphi_{\gamma^{k}},x^{k},z^{k})$, applying
item $(b)$ of Lemma 3 with $y=x^{*}$, $v=z^{k}$, $u=x^{k}$, $w=x^{k+1}$, and
$\varphi_{\gamma}=\varphi_{\gamma^{k}}$, we obtain
$\big{\langle}z^{k}-x^{k+1},x^{*}-x^{k+1}\big{\rangle}\leq\frac{\gamma_{1}^{k}+\gamma_{2}^{k}}{1-2\gamma_{2}^{k}}\|z^{k}-x^{k}\|^{2}+\frac{\gamma_{3}^{k}-\gamma_{2}^{k}}{1-2\gamma_{2}^{k}}\|x^{k+1}-x^{k}\|^{2}.$
(22)
Taking into account that $\langle x^{k}-x^{k+1},x^{k+1}-x^{*}\rangle=\langle
x^{k}-x^{k+1},x^{k}-x^{*}\rangle-\|x^{k}-x^{k+1}\|^{2}$ and
$z^{k}-x^{k}=-\alpha\nabla f(x^{k})$, the combination of (21) and (22) yields
$\big{\langle}\nabla
f(x^{k}),x^{k+1}-x^{*}\big{\rangle}\leq\frac{1}{\alpha}\big{\langle}x^{k}-x^{k+1},x^{k}-x^{*}\big{\rangle}-\frac{1}{\alpha}\left(\frac{1-\gamma_{3}^{k}-\gamma_{2}^{k}}{1-2\gamma_{2}^{k}}\right)\|x^{k+1}-x^{k}\|^{2}\\\
+\frac{\gamma_{1}^{k}+\gamma_{2}^{k}}{1-2\gamma_{2}^{k}}\alpha\|\nabla
f(x^{k})\|^{2}.$
Therefore, considering that $0\leq\gamma_{2}^{k}\leq\bar{\gamma_{2}}$ and
$0\leq\gamma_{3}^{k}\leq\bar{\gamma}$, the desired inequality follows by using
the first condition in (13) and (20). ∎
To proceed, we assume that the sequence $(x^{k})_{k\in\mathbb{N}}$ converges
to a point $x^{*}\in\Omega^{*}$. Moreover, to establish the iteration-
complexity bound for $(f(x^{k}))_{k\in\mathbb{N}}$, we also take
$\gamma_{1}^{k}=\gamma_{2}^{k}=0,\qquad\forall~{}{k\in\mathbb{N}}.$ (23)
###### Theorem 15.
Assume that $f$ is $\mu$-strongly convex on $\mathbb{R}^{n}$. Then, the
following inequality holds
$\|x^{k+1}-x^{*}\|^{2}\leq\left(1-\alpha\mu\right)\|x^{k}-x^{*}\|^{2}.$ (24)
###### Proof.
Using Lemma 14, considering (23) and $f^{*}\leq f(x^{k})$ for all
${k\in\mathbb{N}}$, we have that
$-2\big{\langle}x^{k}-x^{k+1},x^{k}-x^{*}\big{\rangle}\leq-\alpha\mu\|x^{k}-x^{*}\|^{2}-2\alpha\nu\|x^{k+1}-x^{k}\|^{2}$.
Therefore, since $1-2\alpha\nu<0$ and taking into account that
$\|x^{k+1}-x^{*}\|^{2}=\|x^{k}-x^{*}\|^{2}+\|x^{k+1}-x^{k}\|^{2}-2\big{\langle}x^{k}-x^{k+1},x^{k}-x^{*}\big{\rangle}$,
we obtain (24). ∎
###### Remark 3.
Letting $\alpha=1/L$, (24) yields
$\|x^{k+1}-x^{*}\|^{2}\leq\left(1-\mu/L\right)^{k+1}\|x^{0}-x^{*}\|^{2}$,
which is closely related to [11, Theorem 3.10]. See also [40, Theorem 2.1.15],
for the unconstrained case.
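The contraction (24) with $\alpha=1/L$ can be observed directly. The toy problem below is our own; the exact projection onto a box replaces the inexact one (the exact case $\gamma^{k}=0$, cf. Remark 1, which in particular satisfies (23)).

```python
import numpy as np

# Check of the contraction (24) for a mu-strongly convex quadratic with
# exact projection and alpha = 1/L: f(x) = 0.5*sum(d_i*(x_i - c_i)^2)
# over the box [0, 1]^2, so mu = min(d) = 1 and L = max(d) = 4.

d = np.array([1.0, 4.0])
c = np.array([1.8, -0.6])
grad = lambda x: d * (x - c)
mu, L = 1.0, 4.0
alpha = 1.0 / L

x_star = np.clip(c, 0.0, 1.0)     # minimizer over the box: (1, 0)
x = np.array([0.2, 0.9])
ratios = []
for _ in range(40):
    x_next = np.clip(x - alpha * grad(x), 0.0, 1.0)
    num = np.sum((x_next - x_star) ** 2)
    den = np.sum((x - x_star) ** 2)
    if den < 1e-24:
        break                     # iterate hit x* exactly
    ratios.append(num / den)
    x = x_next

# (24): ||x^{k+1} - x*||^2 <= (1 - alpha*mu) * ||x^k - x*||^2
ok = all(r <= 1.0 - alpha * mu + 1e-12 for r in ratios)
print(ok)
```

With $\alpha=1/L$ the observed per-step ratios are well below the guaranteed factor $1-\mu/L=0.75$, consistent with Remark 3.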
## 4 GInexPM employing Armijo’s step size rule
The aim of this section is to present the GInexPM for solving problem (1)
employing Armijo’s search. Our method is an inexact version of the projected
gradient method proposed in [28], see also [4]. Let us recall the iteration of
the projected gradient method: If the current iterate $x^{k}$ is a non-
stationary point of problem (1), then set $z^{k}=x^{k}-\alpha_{k}\nabla
f(x^{k})$, compute $w^{k}={\cal P}_{C}(z^{k})$ and define the next iterate as
$x^{k+1}=x^{k}+\tau_{k}(w^{k}-x^{k})$, where ${\cal P}_{C}$ is the exact
projection operator on $C$, $\alpha_{k}$ and $\tau_{k}$ are suitable positive
constants. In this scheme, $d^{k}=w^{k}-x^{k}$ is a feasible descent direction
for $f$ at $x^{k}$. Thus, an Armijo’s search is employed to compute $\tau_{k}$
so that it decreases the function $f$ at $x^{k+1}$. In the same way as in
Algorithm 1, we propose to compute a feasible inexact projection instead of
calculating the exact one. Hence, to guarantee that the feasible direction is
also a descent direction, we need to use
$\varphi_{\gamma}:{\mathbb{R}}^{n}\times{\mathbb{R}}^{n}\to{\mathbb{R}}_{+}$
satisfying
$\varphi_{\gamma_{3}}(u,w)\leq\gamma_{3}\|w-u\|^{2},\qquad\forall~{}u,w\in{\mathbb{R}}^{n},$
as the error tolerance function, i.e., we take $\gamma_{1}=\gamma_{2}=0$ in
Definition 2. Hence, the inexact projection ${\cal
P}_{C}(\varphi_{\gamma},u,v)$ onto $C$ of $v\in{\mathbb{R}}^{n}$ relative to
$u\in C$ with error tolerance $\varphi_{\gamma}(u,\cdot)$ becomes
${\cal P}_{C}(\varphi_{\gamma},u,v):=\left\\{w\in
C:~{}\big{\langle}v-w,y-w\big{\rangle}\leq\varphi_{\gamma_{3}}(u,w),\quad\forall~{}y\in
C\right\\}.$ (25)
Also, we assume that the mapping
$(\gamma_{3},u,w)\mapsto\varphi_{\gamma_{3}}(u,w)$ is continuous. In this
case, the gradient algorithm with inexact projection employing Armijo’s step
size rule is formally defined as follows.
Algorithm 2 GInexPM employing Armijo search
Step 0: Choose $\sigma\in(0,1)$, $\tau\in(0,1)$ and
$0<\alpha_{\min}\leq\alpha_{\max}$. Let $x^{0}\in C$ and set $k=0$.
Step 1: Choose an error tolerance function $\varphi_{\gamma}$, real numbers
$\alpha_{k}$ and $\gamma_{3}^{k}$ such that
$\alpha_{\min}\leq\alpha_{k}\leq\alpha_{\max},\qquad\qquad
0\leq\gamma_{3}^{k}\leq\bar{\gamma}<\frac{1}{2},$ (26) and take $w^{k}$ as any
feasible inexact projection of $z^{k}:=x^{k}-\alpha_{k}\nabla f(x^{k})$ onto
$C$ relative to $x^{k}$ with error tolerance
$\varphi_{\gamma_{3}^{k}}(x^{k},w^{k})$, i.e., $w^{k}\in{\cal
P}_{C}\left(\varphi_{\gamma_{3}^{k}},x^{k},z^{k}\right).$ (27) If
$w^{k}=x^{k}$, then stop; otherwise, set $\tau_{k}:=\tau^{j_{k}}$, where
$j_{k}:=\min\left\\{j\in\mathbb{N}:~{}f\big{(}x^{k}+\tau^{j}(w^{k}-x^{k})\big{)}\leq
f(x^{k})+\sigma\tau^{j}\big{\langle}\nabla
f(x^{k}),w^{k}-x^{k}\big{\rangle}\right\\},$ (28) and set the next iterate
$x^{k+1}$ as $x^{k+1}=x^{k}+\tau_{k}(w^{k}-x^{k}).$ (29)
Step 2: Set $k\leftarrow k+1$, and go to Step 1.
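The steps above can be sketched in a few lines of Python. This is a minimal illustration of ours, not the authors' implementation: the separable quadratic, the box constraint, and the parameter values are assumptions, and the exact projection onto the box plays the role of the inexact projection in (27) (the case $\gamma_{3}^{k}=0$, cf. Remark 1).

```python
import numpy as np

# Sketch of Algorithm 2 on a toy problem of ours: minimize the separable
# quadratic f(x) = 0.5*sum(d_i*(x_i - c_i)^2) over the box C = [0, 1]^4.

def project_box(v):
    """Exact projection onto [0, 1]^n (the gamma_3 = 0 case of (27))."""
    return np.clip(v, 0.0, 1.0)

def ginexpm_armijo(f, grad_f, x0, alpha, sigma=1e-4, tau=0.5,
                   max_iter=500, tol=1e-10):
    x = x0.copy()
    for _ in range(max_iter):
        w = project_box(x - alpha * grad_f(x))        # Step 1, (27)
        direction = w - x
        if np.linalg.norm(direction) < tol:           # w^k = x^k: stop
            break
        t, slope = 1.0, grad_f(x) @ direction         # slope < 0, Lemma 16(i)
        while f(x + t * direction) > f(x) + sigma * t * slope:   # (28)
            t *= tau
        x = x + t * direction                         # (29)
    return x

d = np.array([1.0, 2.0, 3.0, 4.0])
c = np.array([1.5, -0.3, 0.5, 2.0])
f = lambda x: 0.5 * np.sum(d * (x - c) ** 2)
grad_f = lambda x: d * (x - c)

x_sol = ginexpm_armijo(f, grad_f, np.full(4, 0.5), alpha=1.0 / d.max())
print(x_sol)   # the minimizer over the box is clip(c) = (1, 0, 0.5, 1)
```

For this separable problem the constrained minimizer is the componentwise clipping of $c$, which makes the result easy to verify.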
Let us describe the main features of Algorithm 2. In Step 1, we check if
$w^{k}=x^{k}$. In this case, as we will show, the current iterate $x^{k}$ is a
stationary point of problem (1); otherwise, we choose $\alpha_{k}$ such that
$\alpha_{\min}\leq\alpha_{k}\leq\alpha_{\max}$. Then, by using some inner
procedure, we compute $w^{k}$ as any feasible inexact projection of
$z^{k}=x^{k}-\alpha_{k}\nabla f(x^{k})$ onto the feasible set $C$ relative to
$x^{k}$, i.e., $w^{k}\in{\cal P}_{C}(\varphi_{\gamma_{3}^{k}},x^{k},z^{k})$.
Recall that, if $\gamma_{3}^{k}=0$, then ${\cal
P}_{C}(\varphi_{0},x^{k},z^{k})$ is the exact projection, see Remark 1.
Therefore, Algorithm 2 can be seen as an inexact version of the algorithm
considered in [4, 28]. In the remainder of this section, we study the
asymptotic properties and iteration-complexity bounds related to Algorithm 2.
We begin by presenting some important properties of the inexact projection (25).
###### Lemma 16.
Let $x\in C$, $\alpha>0$, and $0\leq\gamma_{3}\leq\bar{\gamma}<1/2$. Take
$w(\alpha)$ as any feasible inexact projection of $z(\alpha)=x-\alpha\nabla
f(x)$ onto $C$ relative to $x$ with error tolerance
$\varphi_{\gamma_{3}}(x,w(\alpha))$, i.e., $w(\alpha)\in{\cal
P}_{C}(\varphi_{\gamma_{3}},x,z(\alpha))$. Then, there hold:
* (i)
$\big{\langle}\nabla
f(x),w(\alpha)-x\big{\rangle}\leq\left(\dfrac{\gamma_{3}-1}{\alpha}\right)\|w(\alpha)-x\|^{2}$;
* (ii)
the point $x$ is stationary for problem (1) if and only if $x\in{\cal
P}_{C}(\varphi_{\gamma_{3}},x,z(\alpha))$;
* (iii)
if $x$ is a nonstationary point for problem (1), then $\big{\langle}\nabla
f(x),w(\alpha)-x\big{\rangle}<0$. Equivalently, if there exists
${\bar{\alpha}}>0$ such that $\big{\langle}\nabla
f(x),w({\bar{\alpha}})-x\big{\rangle}\geq 0$, then $x$ is stationary for
problem (1).
###### Proof.
Since $w(\alpha)\in{\cal P}_{C}(\varphi_{\gamma_{3}},x,z(\alpha))$, applying
item $(b)$ of Lemma 3 with $\gamma_{1}=\gamma_{2}=0$, $w=w(\alpha)$,
$v=z(\alpha)$, $y=x$, and $u=x$, we obtain $\big{\langle}x-\alpha\nabla
f(x)-w(\alpha),x-w(\alpha)\big{\rangle}\leq\gamma_{3}\|w(\alpha)-x\|^{2}$
which, after some algebraic manipulations, yields the inequality of item
$(i)$. To prove item $(ii)$, we first assume that $x$ is stationary for
problem (1). In this case, (4) implies that $\big{\langle}\nabla
f(x),w(\alpha)-x\big{\rangle}\geq 0$. Thus, considering that $\alpha>0$ and
$0\leq\gamma_{3}\leq\bar{\gamma}<1/2$, the last inequality together item $(i)$
implies that $w(\alpha)=x$. Therefore, $x\in{\cal
P}_{C}(\varphi_{\gamma_{3}},x,z(\alpha))$. Reciprocally, if $x\in{\cal
P}_{C}(\varphi_{\gamma_{3}},x,z(\alpha))$, then applying item $(b)$ of Lemma
3 with $\gamma_{1}=\gamma_{2}=0$, $w=x$, $v=z(\alpha)$, and $u=x$, we obtain
$\big{\langle}x-\alpha\nabla f(x)-x,y-x\big{\rangle}\leq 0$, for all $y\in C$.
Considering that $\alpha>0$, the last inequality is equivalent to
$\big{\langle}\nabla f(x),y-x\big{\rangle}\geq 0$, for all $y\in C$. Thus,
according to (4), we conclude that $x$ is a stationary point for problem (1).
Finally, to prove item $(iii)$, take $x$ to be a nonstationary point for problem
(1). Thus item $(ii)$ implies that $x\notin{\cal
P}_{C}(\varphi_{\gamma_{3}},x,z(\alpha))$ and taking into account that
$w(\alpha)\in{\cal P}_{C}(\varphi_{\gamma_{3}},x,z(\alpha))$, we conclude that
$x\neq w(\alpha)$. Therefore, due to $\alpha>0$ and
$0\leq\gamma_{3}\leq\bar{\gamma}<1/2$, it follows from item $(i)$ that
$\big{\langle}\nabla f(x),w(\alpha)-x\big{\rangle}<0$, and the first sentence
is proved. Note that the second statement is the contrapositive of
the first sentence. ∎
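Item $(ii)$ admits a simple numerical reading in the exact-projection case: $x$ is stationary precisely when it is a fixed point of $x\mapsto{\cal P}_{C}(x-\alpha\nabla f(x))$. The toy data below are our own.

```python
import numpy as np

# Fixed-point reading of Lemma 16(ii) with exact projection:
# f(x) = 0.5*||x - c||^2 over the box C = [0, 1]^2. The constrained
# minimizer is clip(c); any other feasible point is nonstationary.

c = np.array([2.0, 0.3])
grad = lambda x: x - c
alpha = 0.7
P = lambda v: np.clip(v, 0.0, 1.0)        # exact projection (gamma_3 = 0)

x_stat = np.array([1.0, 0.3])             # the minimizer clip(c)
x_non = np.array([0.5, 0.5])              # a feasible nonstationary point

fixed_gap = lambda x: np.linalg.norm(x - P(x - alpha * grad(x)))
print(fixed_gap(x_stat), fixed_gap(x_non))
```

The gap vanishes at the stationary point and is strictly positive elsewhere, matching the equivalence in item $(ii)$.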
The next result follows from item $(iii)$ of Lemma 16 and its proof will be
omitted.
###### Proposition 17.
Let $\sigma\in(0,1)$, $x\in C$, $\alpha>0$, and
$0\leq\gamma_{3}\leq\bar{\gamma}<1/2$. Take $w(\alpha)$ as any feasible inexact
projection of $z(\alpha)=x-\alpha\nabla f(x)$ onto $C$ relative to $x$ with
error tolerance $\varphi_{\gamma_{3}}(x,w(\alpha))$, i.e., $w(\alpha)\in{\cal
P}_{C}(\varphi_{\gamma_{3}},x,z(\alpha))$. If $x$ is a nonstationary point for
problem (1), then there exists $\delta>0$ such that
$f\big{(}x+\zeta[w(\alpha)-x]\big{)}<f(x)+\sigma\zeta\big{\langle}\nabla
f(x),w(\alpha)-x\big{\rangle}$, for all $\zeta\in(0,\delta)$.
In the following, we establish that Algorithm 2 is well defined.
###### Proposition 18.
The sequence $(x^{k})_{k\in\mathbb{N}}$ generated by Algorithm 2 is well
defined and belongs to $C$.
###### Proof.
Proceeding by induction, let $x^{0}\in C$,
$\alpha_{\min}\leq\alpha_{0}\leq\alpha_{\max}$ and
$0\leq\gamma_{3}^{0}<\bar{\gamma}$. Set $z^{0}=x^{0}-\alpha_{0}\nabla
f(x^{0})$. Since $C$ is closed and convex, it follows from Remark 1 that
${\cal P}_{C}(\varphi_{\gamma_{3}^{0}},x^{0},z^{0})\neq\varnothing$. Thus, we
can take $w^{0}\in{\cal P}_{C}(\varphi_{\gamma_{3}^{0}},x^{0},z^{0})$. If Algorithm
2 does not stop, i.e., $w^{0}\neq x^{0}$, then it follows from item $(i)$ of
Lemma 16 that $\langle\nabla f(x^{0}),w^{0}-x^{0}\rangle<0$. In this case,
Proposition 17 implies that it is possible to compute $\tau_{0}\in(0,1]$
satisfying (28), for $k=0$. Therefore, $x^{1}=x^{0}+\tau_{0}(w^{0}-x^{0})$ in
(29) is well defined and, considering that $x^{0},w^{0}\in C$ and
$\tau_{0}\in(0,1]$, we have $x^{1}\in C$. The induction step is completely
analogous, implying that $(x^{k})_{k\in\mathbb{N}}$ is well defined and
belongs to $C$. ∎
### 4.1 Asymptotic convergence analysis
The aim of this section is to study asymptotic convergence properties related
to Algorithm 2. We begin by presenting a partial convergence result of the
sequence $(x^{k})_{k\in\mathbb{N}}$ generated by Algorithm 2.
###### Proposition 19.
Algorithm 2 finishes in a finite number of iterations at a stationary point of
problem (1), or generates an infinite sequence $(x^{k})_{k\in\mathbb{N}}$ for
which $\left(f(x^{k})\right)_{k\in\mathbb{N}}$ is a decreasing sequence and
every cluster point of $(x^{k})_{k\in\mathbb{N}}$ is stationary for problem
(1).
###### Proof.
First we assume that $(x^{k})_{k\in\mathbb{N}}$ is finite. In this case,
according to Step 1, there exists $k\in\mathbb{N}$ such that
$x^{k}=w^{k}\in{\cal P}_{C}(\varphi_{\gamma_{3}^{k}},x^{k},z^{k})$, where
$z^{k}=x^{k}-\alpha_{k}\nabla f(x^{k})$, $0\leq\gamma_{3}^{k}\leq\bar{\gamma}$
and $\alpha_{k}>0$. Therefore, applying the second statement of item $(ii)$ of
Lemma 16 with $x=x^{k}$, $\alpha=\alpha_{k}$, and $\gamma_{3}=\gamma_{3}^{k}$,
we conclude that $x^{k}$ is stationary for problem (1). Now, we assume that
$(x^{k})_{k\in\mathbb{N}}$ is infinite. Thus, according to Step 1, $x^{k}\neq
w^{k}$ for all $k=0,1,\ldots$. Consequently, applying item $(ii)$ of Lemma 16
with $x=x^{k}$, $\alpha=\alpha_{k}$, and $\gamma_{3}=\gamma_{3}^{k}$, we have
that $x^{k}$ is nonstationary for problem (1). Hence, item $(iii)$ of Lemma 16
implies that $\big{\langle}\nabla f(x^{k}),w^{k}-x^{k}\big{\rangle}<0$, for
all $k=0,1,\ldots$. Therefore, it follows from (28) and (29) that
$0<-\sigma\tau_{k}\big{\langle}\nabla f(x^{k}),w^{k}-x^{k}\big{\rangle}\leq
f(x^{k})-f(x^{k+1}),\qquad\forall~{}k\in\mathbb{N},$ (30)
which implies that $f(x^{k+1})<f(x^{k})$, for all $k=0,1,\ldots$, and then
$\left(f(x^{k})\right)_{k\in\mathbb{N}}$ is a decreasing sequence. Let
${\bar{x}}$ be a cluster point of $(x^{k})_{k\in\mathbb{N}}$ and
$(x^{k_{j}})_{j\in\mathbb{N}}$ a subsequence of $(x^{k})_{k\in\mathbb{N}}$
such that $\lim_{j\to+\infty}x^{k_{j}}=\bar{x}$. Since $C$ is closed, by
Proposition 18, we have $\bar{x}\in C$. Since
$\left(f(x^{k})\right)_{k\in\mathbb{N}}$ is decreasing and
$\lim_{j\to+\infty}f(x^{k_{j}})=f(\bar{x})$, we conclude that
$\lim_{k\to+\infty}f(x^{k})=f(\bar{x})$. On the other hand, using the last
condition in (26), we have $1/(1-2\gamma_{3}^{k})\leq 1/(1-2\bar{\gamma})$, for
all $k=0,1,\ldots$. Since $w^{k}\in{\cal
P}_{C}(\varphi_{\gamma_{3}^{k}},x^{k},z^{k})$, where
$z^{k}=x^{k}-\alpha_{k}\nabla f(x^{k})$, applying item $(a)$ of Lemma 3 with
$x=x^{k}$, $u=x^{k}$, $v=z^{k}$, $w=w^{k}$, $\gamma_{1}=\gamma_{2}=0$, and
$\gamma_{3}=\gamma_{3}^{k}$, we obtain
$\|w^{k_{j}}-x^{k_{j}}\|^{2}\leq\frac{\alpha_{k_{j}}^{2}}{1-2\gamma_{3}^{k_{j}}}\|\nabla
f(x^{k_{j}})\|^{2}\leq\frac{\alpha_{\max}^{2}}{1-2\bar{\gamma}}\|\nabla
f(x^{k_{j}})\|^{2},\qquad\forall~{}j\in\mathbb{N}.$
Considering that $(x^{k_{j}})_{j\in\mathbb{N}}$ converges to ${\bar{x}}$ and
$\nabla f$ is continuous, the last inequality implies that
$(w^{k_{j}})_{j\in\mathbb{N}}\subset C$ is also bounded. Thus, we can assume
without loss of generality that $\lim_{j\to+\infty}w^{k_{j}}=\bar{w}\in C$.
Now, due to $\tau_{k}\in(0,1]$, for all $k=0,1,\ldots$, we can also assume
without loss of generality that
$\lim_{j\to+\infty}\tau_{k_{j}}=\bar{\tau}\in[0,1].$ Therefore, owing to
$\lim_{k\to+\infty}f(x^{k})=f(\bar{x})$, taking limits in (30) along an
appropriate subsequence, we obtain $\bar{\tau}\big{\langle}\nabla
f(\bar{x}),\bar{w}-\bar{x}\big{\rangle}=0.$ We have two possibilities:
$\bar{\tau}>0$ or $\bar{\tau}=0$. If $\bar{\tau}>0$, then $\big{\langle}\nabla
f(\bar{x}),\bar{w}-\bar{x}\big{\rangle}=0.$ Now, we assume that
$\lim_{j\to+\infty}\tau_{k_{j}}=\bar{\tau}=0$. In this case, for any fixed
$q\in\mathbb{N}$, there exists $j$ such that $\tau_{k_{j}}<\tau^{q}$. Hence,
Armijo’s condition (28) does not hold for $\tau^{q}$, i.e.,
$f\big{(}x^{k_{j}}+\tau^{q}(w^{k_{j}}-x^{k_{j}})\big{)}>f(x^{k_{j}})+\sigma\tau^{q}\big{\langle}\nabla
f(x^{k_{j}}),w^{k_{j}}-x^{k_{j}}\big{\rangle}$, for all $j\in\mathbb{N}.$
Thus, taking limits as $j$ goes to $+\infty$, we obtain
$f\big{(}\bar{x}+\tau^{q}(\bar{w}-\bar{x})\big{)}\geq
f(\bar{x})+\sigma\tau^{q}\big{\langle}\nabla
f(\bar{x}),\bar{w}-\bar{x}\big{\rangle},$ which is equivalent to
$\frac{f\big{(}\bar{x}+\tau^{q}(\bar{w}-\bar{x})\big{)}-f(\bar{x})}{\tau^{q}}\geq\sigma\big{\langle}\nabla
f(\bar{x}),\bar{w}-\bar{x}\big{\rangle}.$
Since this inequality holds for all $q\in\mathbb{N}$, taking limits as $q$
goes to $+\infty$, we conclude that $\langle\nabla
f(\bar{x}),\bar{w}-\bar{x}\rangle\geq\sigma\big{\langle}\nabla
f(\bar{x}),\bar{w}-\bar{x}\big{\rangle}$. Hence, due to $\sigma\in(0,1)$, we
obtain $\big{\langle}\nabla f(\bar{x}),\bar{w}-\bar{x}\big{\rangle}\geq 0$. We
recall that $\langle\nabla f(x^{k_{j}}),w^{k_{j}}-x^{k_{j}}\rangle<0$, for all
$j=0,1,\ldots$, which, taking limits as $j$ goes to $+\infty$, yields
$\big{\langle}\nabla f(\bar{x}),\bar{w}-\bar{x}\big{\rangle}\leq 0$. Hence,
$\big{\langle}\nabla f(\bar{x}),\bar{w}-\bar{x}\big{\rangle}=0$. Therefore,
for any of two possibilities, $\bar{\tau}>0$ or $\bar{\tau}=0$, we have
$\langle\nabla f(\bar{x}),\bar{w}-\bar{x}\rangle=0$. On the other hand,
$w^{k_{j}}\in{\cal P}_{C}(\varphi_{\gamma_{3}^{k_{j}}},x^{k_{j}},z^{k_{j}}),$
where $z^{k_{j}}=x^{k_{j}}-\alpha_{k_{j}}\nabla f(x^{k_{j}})$,
$0\leq\gamma_{3}^{k_{j}}\leq\bar{\gamma}$, and $\alpha_{k_{j}}>0$. Thus, it
follows from (25) that
$\big{\langle}z^{k_{j}}-w^{k_{j}},y-w^{k_{j}}\big{\rangle}\leq\varphi_{\gamma_{3}^{k_{j}}}(x^{k_{j}},w^{k_{j}}),\qquad
\forall~{}y\in C,\quad\forall~{}j\in\mathbb{N}.$ (31)
Moreover, since $\alpha_{k}\in[\alpha_{\min},\alpha_{\max}]$, for all
$k=0,1,\ldots$, we also assume without loss of generality that
$\lim_{j\to+\infty}\alpha_{k_{j}}=\bar{\alpha}\in[\alpha_{\min},\alpha_{\max}]$.
Thus, taking limits in (31) and considering that the mapping
$(\gamma_{3},u,w)\mapsto\varphi_{\gamma_{3}}(u,w)$ is continuous,
$\lim_{j\to+\infty}x^{k_{j}}=\bar{x}\in C$,
$\lim_{j\to+\infty}w^{k_{j}}=\bar{w}\in C$, and
$\lim_{j\to+\infty}\alpha_{k_{j}}=\bar{\alpha}$, we conclude that
$\big{\langle}\bar{z}-\bar{w},y-\bar{w}\big{\rangle}\leq\varphi_{\bar{\gamma}}(\bar{x},\bar{w})$,
for all $y\in C$, where $\bar{z}=\bar{x}-{\bar{\alpha}}\nabla f(\bar{x})$.
Hence, it follows from (25) that $\bar{w}\in{\cal
P}_{C}\left(\varphi_{\bar{\gamma}},{\bar{x}},{\bar{z}}\right)$. Therefore, due to
$\big{\langle}\nabla f(\bar{x}),\bar{w}-\bar{x}\big{\rangle}=0$, we can apply
the second sentence in item $(iii)$ of Lemma 16 with $x=\bar{x}$,
$z({\bar{\alpha}})=\bar{z}$, and $w({\bar{\alpha}})=\bar{w}$ to conclude that
$\bar{x}$ is stationary for problem (1). ∎
Due to Proposition 19, from now on we assume that the sequence
$(x^{k})_{k\in\mathbb{N}}$ generated by Algorithm 2 is infinite. The following
result establishes a basic inequality satisfied by the iterates of Algorithm
2, which will be used to study its convergence properties. To simplify the
notation, we define the following constant:
$\xi:=\dfrac{2\alpha_{\max}}{\sigma}>0.$ (32)
###### Lemma 20.
For each $x\in C$, there holds
$\|x^{k+1}-x\|^{2}\leq\|x^{k}-x\|^{2}+2\alpha_{k}\tau_{k}\big{\langle}\nabla
f(x^{k}),x-x^{k}\big{\rangle}+\xi\left[f(x^{k})-f(x^{k+1})\right],\quad\forall~{}k\in\mathbb{N}.$
(33)
###### Proof.
We know that
$\|x^{k+1}-x\|^{2}=\|x^{k}-x\|^{2}+\|x^{k+1}-x^{k}\|^{2}-2\big{\langle}x^{k+1}-x^{k},x-x^{k}\big{\rangle}$,
for all $x\in C$ and $k=0,1,\ldots$. Thus, using (29), we have
$\|x^{k+1}-x\|^{2}=\|x^{k}-x\|^{2}+\tau_{k}^{2}\|w^{k}-x^{k}\|^{2}-2\tau_{k}\big{\langle}w^{k}-x^{k},x-x^{k}\big{\rangle},\qquad\forall~{}k\in\mathbb{N}.$
(34)
On the other hand, by using (27) we have $w^{k}\in{\cal
P}_{C}(\varphi_{\gamma_{3}^{k}},x^{k},z^{k})$ with
$z^{k}=x^{k}-\alpha_{k}\nabla f(x^{k})$. Thus, applying item $(b)$ of Lemma 3
with $y=x$, $u=x^{k}$, $v=z^{k}$, $w=w^{k}$, $\gamma_{1}=\gamma_{2}=0$,
$\gamma_{3}=\gamma_{3}^{k}$, and
$\varphi_{\gamma_{3}}=\varphi_{\gamma_{3}^{k}}$, we obtain $\langle
x^{k}-\alpha_{k}\nabla
f(x^{k})-w^{k},x-w^{k}\rangle\leq\gamma_{3}^{k}\|w^{k}-x^{k}\|^{2}$, for all
$k\in\mathbb{N}$. After some algebraic manipulations in the last inequality,
we have
$\big{\langle}w^{k}-x^{k},x-x^{k}\big{\rangle}\geq\alpha_{k}\big{\langle}\nabla
f(x^{k}),w^{k}-x\big{\rangle}+(1-\gamma_{3}^{k})\|w^{k}-x^{k}\|^{2}.$
Combining the last inequality with (34), we conclude
$\|x^{k+1}-x\|^{2}\leq\|x^{k}-x\|^{2}-\tau_{k}\big{[}2(1-\gamma_{3}^{k})-\tau_{k}\big{]}\|w^{k}-x^{k}\|^{2}+2\tau_{k}\alpha_{k}\big{\langle}\nabla
f(x^{k}),x-w^{k}\big{\rangle}.$ (35)
Since $0\leq\gamma_{3}^{k}<\bar{\gamma}<1/2$ and $\tau_{k}\in(0,1]$, we have
$2(1-\gamma_{3}^{k})-\tau_{k}\geq 1-2\bar{\gamma}>0$. Thus, (35) becomes
$\|x^{k+1}-x\|^{2}\leq\|x^{k}-x\|^{2}+2\tau_{k}\alpha_{k}\big{\langle}\nabla
f(x^{k}),x-w^{k}\big{\rangle},\qquad\forall~{}k\in\mathbb{N}.$
Therefore, considering that $\big{\langle}\nabla
f(x^{k}),x-w^{k}\big{\rangle}=\big{\langle}\nabla
f(x^{k}),x-x^{k}\big{\rangle}+\big{\langle}\nabla
f(x^{k}),x^{k}-w^{k}\big{\rangle}$ and taking into account (28), we conclude
that
$\|x^{k+1}-x\|^{2}\leq\|x^{k}-x\|^{2}+2\tau_{k}\alpha_{k}\big{\langle}\nabla
f(x^{k}),x-x^{k}\big{\rangle}+\frac{2\alpha_{k}}{\sigma}\left[f(x^{k})-f(x^{k+1})\right],\qquad\forall~{}k\in\mathbb{N}.$
(36)
Since $0<\alpha_{k}\leq\alpha_{\max}$ and, by Proposition 19, $f(x^{k})-f(x^{k+1})>0$, we have
$\alpha_{k}\big{[}f(x^{k})-f(x^{k+1})\big{]}\leq\alpha_{\max}\big{[}f(x^{k})-f(x^{k+1})\big{]}$.
Therefore, the desired inequality (33) follows from (36) by using (32). ∎
For the sequence $(x^{k})_{k\in\mathbb{N}}$ generated by Algorithm 2, we
define the following auxiliary set:
$U:=\left\\{x\in C:f(x)\leq\inf_{k\in\mathbb{N}}f(x^{k})\right\\}.$
Next, we analyze the behavior of the sequence $(x^{k})_{k\in\mathbb{N}}$ when
$f$ is a quasiconvex function.
###### Corollary 21.
Assume that $f$ is a quasiconvex function. If $U\neq\varnothing$, then
$(x^{k})_{k\in\mathbb{N}}$ converges to a stationary point of problem (1).
###### Proof.
Let $x\in U$. Thus, $f(x)\leq f(x^{k})$ for all $k\in\mathbb{N}$. Since $f$ is
quasiconvex, we have $\big{\langle}\nabla f(x^{k}),x-x^{k}\big{\rangle}\leq
0$, for all $k\in\mathbb{N}$. Using Lemma 20, we obtain
$\|x^{k+1}-x\|^{2}\leq\|x^{k}-x\|^{2}+\xi\left[f(x^{k})-f(x^{k+1})\right],\quad\forall~{}k\in\mathbb{N}.$
Defining $\epsilon_{k}=\xi\big{[}f(x^{k})-f(x^{k+1})\big{]}$, we have
$\|x^{k+1}-x\|^{2}\leq\|x^{k}-x\|^{2}+\epsilon_{k}$, for all $k\in\mathbb{N}$.
On the other hand, summing $\epsilon_{k}$ with $k=0,1,\ldots,N$, we have
$\sum_{k=0}^{N}\epsilon_{k}\leq\xi\big{[}f(x^{0})-f(x)\big{]}<\infty$. Thus,
it follows from Definition 1 that $(x^{k})_{k\in\mathbb{N}}$ is quasi-Fejér
convergent to $U$. Since $U$ is nonempty, it follows from Theorem 2 that
$(x^{k})_{k\in\mathbb{N}}$ is bounded, and therefore it has a cluster point.
Let $\bar{x}$ be a cluster point of $(x^{k})_{k\in\mathbb{N}}$ and
$(x^{k_{j}})_{j\in\mathbb{N}}$ be a subsequence of $(x^{k})_{k\in\mathbb{N}}$
such that $\lim_{j\to\infty}x^{k_{j}}=\bar{x}$. Considering that $f$ is
continuous, we have $\lim_{j\to\infty}f(x^{k_{j}})=f(\bar{x})$. Hence, since,
by Proposition 19, $\left(f(x^{k})\right)_{k\in\mathbb{N}}$ is decreasing, we
obtain
$\inf\\{f(x^{k}):k=0,1,2,\ldots\\}=\lim_{k\to\infty}f(x^{k})=f(\bar{x}).$
Therefore, $\bar{x}\in U$. It follows from Theorem 2 that
$(x^{k})_{k\in\mathbb{N}}$ converges to $\bar{x}$ and the conclusion is
obtained by using again Proposition 19. ∎
The next two results are similar to Lemma 9 and Theorem 10, respectively. For
completeness, we include their proofs here.
###### Lemma 22.
If $f$ is a quasiconvex function and $(x^{k})_{k\in\mathbb{N}}$ has no cluster
points, then $\Omega^{*}=\varnothing$, $\lim_{k\to\infty}\|x^{k}\|=\infty$,
and $\lim_{k\to\infty}f(x^{k})=\inf\\{f(x):x\in C\\}$.
###### Proof.
Since $(x^{k})_{k\in\mathbb{N}}$ has no cluster points, then
$\lim_{k\to\infty}\|x^{k}\|=\infty$. Assume that problem (1) has an optimum,
say $\tilde{x}$, so $f(\tilde{x})\leq f(x^{k})$ for all $k$. Thus,
$\tilde{x}\in U$. Using Corollary 21, we obtain that
$(x^{k})_{k\in\mathbb{N}}$ is convergent, contradicting that
$\lim_{k\to\infty}\|x^{k}\|=\infty$. Therefore, $\Omega^{*}=\varnothing$. Now,
we claim that $\lim_{k\to\infty}f(x^{k})=\inf\\{f(x):x\in C\\}$. If
$\lim_{k\to\infty}f(x^{k})=-\infty$, the claim holds. Let $f^{*}=\inf_{x\in
C}f(x)$. By contradiction, suppose that $\lim_{k\to\infty}f(x^{k})>f^{*}$.
Then, there exists $\tilde{x}\in C$ such that $f(\tilde{x})\leq f(x^{k})$ for
all $k$. Using Corollary 21, we have that $(x^{k})_{k\in\mathbb{N}}$ is
convergent, contradicting again $\lim_{k\to\infty}\|x^{k}\|=\infty$, which
concludes the proof. ∎
###### Theorem 23.
Assume that $f$ is a pseudoconvex function. Then, $\Omega^{*}\neq\varnothing$
if and only if $(x^{k})_{k\in\mathbb{N}}$ has at least one cluster point.
Moreover, $(x^{k})_{k\in\mathbb{N}}$ converges to an optimum point if
$\Omega^{*}\neq\varnothing$; otherwise, $\lim_{k\to\infty}\|x^{k}\|=\infty$
and $\lim_{k\to\infty}f(x^{k})=\inf\\{f(x):x\in C\\}$.
###### Proof.
Recall that pseudoconvex functions are also quasiconvex. Assume that
$\Omega^{*}\neq\varnothing$. In this case, $U\neq\varnothing$. Using Corollary
21, we conclude that $(x^{k})_{k\in\mathbb{N}}$ converges to a stationary
point of problem (1). Reciprocally, let $\bar{x}$ be a cluster point of
$(x^{k})_{k\in\mathbb{N}}$ and $(x^{k_{j}})_{j\in\mathbb{N}}$ be a subsequence
of $(x^{k})_{k\in\mathbb{N}}$ such that $\lim_{j\to+\infty}x^{k_{j}}=\bar{x}$.
Since, from Proposition 19, $(f(x^{k}))_{k\in\mathbb{N}}$ is monotone non-
increasing, by continuity of $f$, we conclude that
$\inf_{k}f(x^{k})=f(\bar{x})$ and hence $\bar{x}\in U$. Using Corollary 21, we
obtain that $(x^{k})_{k\in\mathbb{N}}$ converges to a stationary point
$\tilde{x}$ of problem (1). Since $f$ is pseudoconvex, this point is also an
optimal solution of problem (1). The last part of the theorem follows by
combining the first one with Lemma 22. ∎
### 4.2 Iteration-complexity bound
This section is devoted to studying iteration-complexity bounds for the sequence
generated by Algorithm 2; similar results in the multiobjective context can be
found in [19]. For that, we assume that $f^{*}>-\infty$ and that the objective
function $f$ has a Lipschitz continuous gradient with constant $L\geq 0$, i.e.,
we assume that $\nabla f$ satisfies (2). Moreover, we also assume that the
sequence $(x^{k})_{k\in\mathbb{N}}$ generated by Algorithm 2 converges to a point
$x^{*}$, i.e., $\lim_{k\to\infty}x^{k}=x^{*}$. To simplify the notation, we
set
$\tau_{\min}:=\min\left\\{\frac{2\tau(1-\sigma)(1-\bar{\gamma})}{\alpha_{\max}L},~{}1\right\\}.$
(37)
The next lemma is a version of [19, Lemma 3.1] for our specific context.
###### Lemma 24.
The step size $\tau_{k}$ in Algorithm 2 satisfies $\tau_{k}\geq\tau_{\min}$.
###### Proof.
If $\tau_{k}=1$, then the result trivially holds. Thus, assume that
$\tau_{k}<1$. It follows from Armijo’s condition in (28) that
$f(x^{k}+\frac{\tau_{k}}{\tau}(w^{k}-x^{k}))>f(x^{k})+\sigma\frac{\tau_{k}}{\tau}\big{\langle}\nabla
f(x^{k}),w^{k}-x^{k}\big{\rangle}$. Now, using Lemma 1, we have
$f\left(x^{k}+\frac{\tau_{k}}{\tau}(w^{k}-x^{k})\right)\leq
f(x^{k})+\frac{\tau_{k}}{\tau}\big{\langle}\nabla
f(x^{k}),w^{k}-x^{k}\big{\rangle}+\frac{1}{\tau^{2}}\frac{L}{2}\tau_{k}^{2}\|w^{k}-x^{k}\|^{2}$.
Hence, combining the two previous inequalities, we obtain
$\tau(1-\sigma)\big{\langle}\nabla
f(x^{k}),w^{k}-x^{k}\big{\rangle}+\frac{L}{2}\tau_{k}\|w^{k}-x^{k}\|^{2}>0.$
(38)
On the other hand, since $w^{k}\in{\cal
P}_{C}(\varphi_{\gamma_{3}^{k}},x^{k},z^{k})$, where
$z^{k}=x^{k}-\alpha_{k}\nabla f(x^{k})$, applying item $(i)$ of Lemma 16 with
$x=x^{k}$, $w(\alpha)=w^{k}$, $z=z^{k}$, $\gamma_{1}=\gamma_{2}=0$,
$\gamma_{3}=\gamma_{3}^{k}$, and $\varphi_{\gamma}=\varphi_{\gamma^{k}}$, we
have
$\big{\langle}\nabla
f(x^{k}),w^{k}-x^{k}\big{\rangle}\leq\left(\frac{\gamma_{3}^{k}-1}{\alpha_{k}}\right)\|w^{k}-x^{k}\|^{2}.$
Combining the last inequality with (38) yields
$\left[{\tau(1-\sigma)(\gamma_{3}^{k}-1)}/{\alpha_{k}}+{L}\tau_{k}/2\right]\|w^{k}-x^{k}\|^{2}>0.$
Hence, using (26), it follows that
$\tau_{k}>\frac{2\tau(1-\sigma)(1-\gamma_{3}^{k})}{\alpha_{k}L}\geq\frac{2\tau(1-\sigma)(1-\bar{\gamma})}{\alpha_{\max}L}.$
Therefore, since $\tau_{k}$ is never larger than one, the result follows and
the proof is concluded. ∎
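For concreteness, the Armijo rule in (28) can be realized by a standard backtracking routine that tests $\tau_{k}\in\{1,\tau,\tau^{2},\ldots\}$. The following is a minimal Python sketch (the function name, default parameters, and iteration cap are ours, not part of Algorithm 2's statement):

```python
import numpy as np

def armijo_tau(f, grad_fx, x, w, sigma=1e-4, tau_factor=0.5, max_iter=50):
    """Backtracking search for tau_k as in Armijo condition (28):
    find the largest tau in {1, tau_factor, tau_factor^2, ...} such that
    f(x + tau*(w - x)) <= f(x) + sigma * tau * <grad f(x), w - x>."""
    d = w - x
    slope = float(np.dot(grad_fx.ravel(), d.ravel()))  # <grad f(x), w - x>, expected < 0
    fx = f(x)
    tau = 1.0
    for _ in range(max_iter):
        if f(x + tau * d) <= fx + sigma * tau * slope:
            return tau
        tau *= tau_factor
    return tau
```

For $f(x)=x^{2}$ the full step $\tau_{k}=1$ is accepted immediately, while steeper objectives such as $x^{4}$ trigger backtracking; this matches the lower bound $\tau_{k}\geq\tau_{\min}$ of Lemma 24 being governed by $L$.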
It follows from item $(ii)$ of Lemma 16 that if $x^{k}\in{\cal
P}_{C}(\varphi_{\gamma_{3}},x^{k},z^{k})$, then the point $x^{k}$ is
stationary for problem (1). Since $w^{k}\in{\cal
P}_{C}(\varphi_{\gamma_{3}},x^{k},z^{k})$, the quantity $\|w^{k}-x^{k}\|$ can
be seen as a measure of stationarity of $x^{k}$. The next theorem presents an
iteration-complexity bound for this quantity; see a similar result in [19,
Theorem 3.1].
###### Theorem 25.
Let $\tau_{\min}$ be defined in (37). Then, for every $N\in\mathbb{N}$, the
following inequality holds
$\min\left\\{\|w^{k}-x^{k}\|:~{}k=0,1,\ldots,N-1\right\\}\leq\sqrt{\frac{\alpha_{\max}\left[f(x^{0})-f^{*}\right]}{\sigma\tau_{\min}{\left(1-\bar{\gamma}\right)}}}\frac{1}{\sqrt{N}}.$
###### Proof.
From the definition of $\tau_{k}$ and condition (28), we have
$f(x^{k+1})-f(x^{k})\leq\sigma\tau_{k}\big{\langle}\nabla
f(x^{k}),w^{k}-x^{k}\big{\rangle}.$ (39)
Since $w^{k}\in{\cal P}_{C}(\varphi_{\gamma_{3}^{k}},x^{k},z^{k})$, where
$z^{k}=x^{k}-\alpha_{k}\nabla f(x^{k})$, applying item $(i)$ of Lemma 16 with
$x=x^{k}$, $w(\alpha)=w^{k}$, $z=z^{k}$, $\gamma_{1}=\gamma_{2}=0$,
$\gamma_{3}=\gamma_{3}^{k}$, and $\varphi_{\gamma}=\varphi_{\gamma^{k}}$, we
obtain
$\big{\langle}\nabla
f(x^{k}),w^{k}-x^{k}\big{\rangle}\leq\left(\frac{\gamma_{3}^{k}-1}{\alpha_{k}}\right)\|w^{k}-x^{k}\|^{2}.$
(40)
By (26), we have
$(1-\gamma_{3}^{k})/\alpha_{k}\geq(1-\bar{\gamma})/\alpha_{\max}$. Thus,
combining (39) with (40) and taking into account Lemma 24, it follows that
$f(x^{k})-f(x^{k+1})\geq\sigma\tau_{k}\left(\frac{1-\bar{\gamma}}{\alpha_{\max}}\right)\|w^{k}-x^{k}\|^{2}\geq\sigma\tau_{\min}\left(\frac{1-\bar{\gamma}}{\alpha_{\max}}\right)\|w^{k}-x^{k}\|^{2}.$
Hence, performing the sum of the above inequality for $k=0,1,\ldots,N-1$, we
have
$\sum_{k=0}^{N-1}\|w^{k}-x^{k}\|^{2}\leq\frac{\alpha_{\max}\left[f(x^{0})-f(x^{N})\right]}{\sigma\tau_{\min}(1-\bar{\gamma})}\leq\frac{\alpha_{\max}\left[f(x^{0})-f^{*}\right]}{\sigma\tau_{\min}(1-\bar{\gamma})},$
which implies the desired inequality. ∎
#### 4.2.1 Iteration-complexity bound under convexity
In this section, we present an iteration-complexity bound for the sequence
$\left(f(x^{k})\right)_{k\in\mathbb{N}}$ when $f$ is convex.
###### Theorem 26.
Let $f$ be a convex function on $C$. Then, for every $N\in\mathbb{N}$, there
holds
$\min\left\\{f(x^{k})-f^{*}:~{}k=0,1,\ldots,N-1\right\\}\leq\frac{\|x^{0}-x^{*}\|^{2}+\xi\left[f(x^{0})-f^{*}\right]}{2\alpha_{\min}\tau_{\min}}\frac{1}{N}.$
###### Proof.
Using the first inequality in (26) and Lemma 24, we have
$2\alpha_{\min}\tau_{\min}\leq 2\alpha_{k}\tau_{k}$, for all
$k\in{\mathbb{N}}$. From the convexity of $f$, we have $\big{\langle}\nabla
f(x^{k}),x^{*}-x^{k}\big{\rangle}\leq f^{*}-f(x^{k})$, for all
$k\in{\mathbb{N}}$. Thus, applying Lemma 20 with $x=x^{*}$, after some
algebraic manipulations, we conclude
$2\alpha_{\min}\tau_{\min}\left[f(x^{k})-f^{*}\right]\leq\|x^{k}-x^{*}\|^{2}-\|x^{k+1}-x^{*}\|^{2}+\xi\left[f(x^{k})-f(x^{k+1})\right]\quad
k=0,1,\ldots.$
Hence, performing the sum of the above inequality for $k=0,1,\ldots,N-1$, we
obtain
$2\alpha_{\min}\tau_{\min}\sum_{k=0}^{N-1}\left[f(x^{k})-f^{*}\right]\leq\|x^{0}-x^{*}\|^{2}-\|x^{N}-x^{*}\|^{2}+\xi\left[f(x^{0})-f(x^{N})\right].$
Therefore,
$2\alpha_{\min}\tau_{\min}N\min\\{f(x^{k})-f^{*}:k=0,1,\ldots,N-1\\}\leq\|x^{0}-x^{*}\|^{2}+\xi\left[f(x^{0})-f(x^{N})\right]$,
which implies the desired inequality. ∎
## 5 Numerical experiments
In this section, we summarize the results of our preliminary numerical
experiments, carried out to verify the practical behavior of the proposed
algorithms. In particular, we illustrate the potential advantages of
considering inexact projections instead of exact ones in a problem of least
squares over the spectrahedron. The codes are written in Matlab and are freely
available at https://orizon.ime.ufg.br. All experiments were run on a macOS
10.15.7 with 3.7GHz Intel Core i5 processor and 8GB of RAM.
Let $\mathbb{S}^{n}$ be the space of $n\times n$ symmetric real matrices and
$\mathbb{S}^{n}_{+}$ be the cone of positive semidefinite matrices in
$\mathbb{S}^{n}$. Given $A$ and $B$ two $n\times m$ matrices, with $m\geq n$,
we consider the following problem:
$\begin{array}[]{cl}\displaystyle\min_{X\in\mathbb{S}^{n}}&f(X):=\displaystyle\frac{1}{2}\|AX-B\|^{2}_{F}\\\
\mbox{s.t.}&\textrm{tr}(X)=1,\\\ &X\in\mathbb{S}^{n}_{+},\\\ \end{array}$ (41)
where $X$ is the $n\times n$ matrix that we seek to find. Here,
$\|\cdot\|_{F}$ denotes the Frobenius matrix norm $\|A\|_{F}=\sqrt{\langle
A,A\rangle}$, where the inner product is given by $\langle
A,B\rangle=\textrm{tr}(A^{T}B)$. Problem (41) and its variants appear in
applications in different areas such as statistics, physics and economics [15,
18, 26, 50], and were considered, for example, in the numerical tests of [8,
24, 31, 32].
In the following, we briefly discuss how to compute projections onto the feasible
region of (41). Define $C=\\{X\in\mathbb{R}^{n\times
n}\mid\textrm{tr}(X)=1,\;X\in\mathbb{S}^{n}_{+}\\}$. Since
$\mathbb{S}^{n}_{+}$ is a convex and closed set, $C$ is convex and compact.
Formally, the problem of projecting a given matrix $V\in\mathbb{R}^{n\times n}$
onto $C$ is stated as
$\begin{array}[]{cl}\displaystyle\min_{W\in\mathbb{S}^{n}}&\displaystyle\frac{1}{2}\|W-V\|_{F}^{2}\\\
\mbox{s.t.}&W\in C.\\\ \end{array}$ (42)
Since $\|W-V\|_{F}^{2}=\|W-V_{S}\|_{F}^{2}+\|V_{A}\|^{2}_{F}$ for any
$W\in\mathbb{S}^{n}$, where $V_{S}$ and $V_{A}$ denote the symmetric and the
antisymmetric part of $V$, respectively, it can be assumed, without loss of
generality, that $V\in\mathbb{S}^{n}$. Let $W^{*}$ be the unique solution of
(42). Given the eigen-decomposition $V=QDQ^{T}$, it is well known that
$W^{*}=QP_{\Delta_{n}}(D)Q^{T}$, where $P_{\Delta_{n}}(D)$ denotes the
diagonal matrix obtained by projecting the diagonal elements of $D$ onto the
$n$-dimensional simplex $\Delta_{n}=\\{x\in\mathbb{R}^{n}\mid
x_{1}+\ldots+x_{n}=1;\;x\geq 0\\}$, see, for example, [26]. This means that
computing $W^{*}$ requires a priori the full eigen-decomposition of $V$, which
can be computationally prohibitive for high-dimensional problems. This
drawback will appear clearly in the results reported in section 5.2. Inexact
projections can be obtained by adding the constraint $\textrm{rank}(W)\leq p$,
for a given $1\leq p<n$, to Problem (42). Denoting by $W_{p}$ the solution of
this latter problem, we have $W_{p}=\sum_{i=1}^{p}\lambda_{i}q_{i}q_{i}^{T}$,
where the scalars $(\lambda_{1},\ldots,\lambda_{p})\in\mathbb{R}^{p}$ are
obtained by projecting the $p$ largest eigenvalues of $V$ onto the
$p$-dimensional simplex $\Delta_{p}$ and $q_{i}\in\mathbb{R}^{n}$ are the
corresponding unit eigenvectors for all $i=1,\ldots,p$, see [2]. Therefore,
inexact projections can be computed by means of an incomplete eigen-
decomposition of $V$, resulting in computational savings. Note that if
$\textrm{rank}(W^{*})\leq p$, then $W_{p}$ coincides with $W^{*}$. As far as
we know, this approach was first proposed in [2] and was also used in [24,
23].
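The exact projection $W^{*}=QP_{\Delta_{n}}(D)Q^{T}$ described above can be sketched in a few lines. The following illustration is in Python with NumPy (the paper's own code is Matlab), using the standard sort-based algorithm for the simplex projection; the function names are ours:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the unit simplex {x >= 0, sum x = 1},
    via the standard O(n log n) sort-based algorithm."""
    u = np.sort(v)[::-1]                       # sort in decreasing order
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def project_spectrahedron(V):
    """Exact projection W* = Q P_Delta(D) Q^T of a symmetric V onto
    C = {X psd : tr(X) = 1}, via a full eigen-decomposition."""
    d, Q = np.linalg.eigh(V)
    return Q @ np.diag(project_simplex(d)) @ Q.T
```

The output is symmetric, positive semidefinite, and has unit trace; note that the full eigen-decomposition is exactly the cost the inexact scheme below tries to avoid.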
We implemented the inexact projection scheme discussed above by choosing
parameter $p$ in an adaptive way and introducing a suitable error criterion.
Let us formally describe the adopted scheme to find an inexact solution
$\tilde{W}$ to Problem (42) relative to $U\in C$ with error tolerance mapping
$\varphi_{\gamma}$ and real numbers $\gamma_{1}$, $\gamma_{2}$ and
$\gamma_{3}$ satisfying the suitable conditions (10) or (26), depending on the
main algorithm.
Algorithm 3 Procedure to compute $\tilde{W}\in{\cal
P}_{C}\left(\varphi_{\gamma},U,V\right)$
Input: Let $1\leq p<n$ be given.
Step 1: Compute $(\lambda_{i}^{V},q_{i})_{i=1}^{p}$ (with $\|q_{i}\|=1$,
$i=1,\ldots,p$) the $p$ largest eigenpairs of $V$, then set
$W_{p}:=\sum_{i=1}^{p}\lambda_{i}q_{i}q_{i}^{T},$ where
$(\lambda_{1},\ldots,\lambda_{p})\in\mathbb{R}^{p}$ are obtained by projecting
$(\lambda_{1}^{V},\ldots,\lambda_{p}^{V})\in\mathbb{R}^{p}$ onto the
$p$-dimensional simplex $\Delta_{p}$.
Step 2: Compute $Y_{p}:=\displaystyle\arg\min_{Y\in C}\,\langle
W_{p}-V,Y-W_{p}\rangle$.
Step 3: If $\langle
W_{p}-V,Y_{p}-W_{p}\rangle\geq-\varphi_{\gamma}(U,V,W_{p})$, then set
$\tilde{W}:=W_{p}$ and return to the main algorithm.
Step 4: Set $p\leftarrow p+1$ and go to Step 1.
Output: $\tilde{W}:=W_{p}$.
Some comments regarding Algorithm 3 are in order. First, in Step 1 we compute
the rank-$p$ projection of $V$ onto $C$. The computational cost of this step
is dominated by the cost of computing the $p$ leading eigenpairs of $V$, since
the projection of $(\lambda_{1}^{V},\ldots,\lambda_{p}^{V})\in\mathbb{R}^{p}$
onto the $p$-dimensional simplex $\Delta_{p}$ can be easily done in $O(p\log
p)$ time, see, for example, [2]. Second, the subproblem in Step 2 is solved by
computing the largest eigenpair of $V-W_{p}$. Indeed, $Y_{p}=qq^{T}$, where
$q\in\mathbb{R}^{n}$ is the unit eigenvector corresponding to the largest
eigenvalue of $V-W_{p}$, see, also, [2]. In our implementations, we used the
Matlab function eigs to compute eigenvalues/eigenvectors [47, 34]. Third, in
Step 3 if the stopping criterion $\langle
W_{p}-V,Y_{p}-W_{p}\rangle\geq-\varphi_{\gamma}(U,V,W_{p})$ is satisfied, then
from Definition 2, we conclude that $\tilde{W}=W_{p}\in{\cal
P}_{C}\left(\varphi_{\gamma},U,V\right)$, i.e., the output is a feasible
inexact projection of $V\in\mathbb{S}^{n}$ relative to $U\in C$. Otherwise, we
increase parameter $p$ and proceed to calculate a more accurate eigen-
decomposition of $V$. Fourth, in the first iteration of the main algorithm, we
set $p=1$ as the input for Algorithm 3. For the subsequent iterations, we
used, in principle, the successful value of $p$ from the previous outer
iteration. Without going into details, seeking computational savings, in some
iterations we consider decreasing the input $p$ with respect to the previous
successful one.
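Algorithm 3 can be sketched compactly as follows. This is an illustrative Python version with hypothetical names; for simplicity it performs one full eigen-decomposition up front, whereas a real implementation would compute only the $p$ leading eigenpairs with an iterative solver (as done with eigs in the paper's Matlab code). The caller supplies `phi`, playing the role of the error tolerance $\varphi_{\gamma}(U,V,W_{p})$:

```python
import numpy as np

def project_simplex(v):
    """Sort-based Euclidean projection onto the unit simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    return np.maximum(v + (1.0 - css[rho]) / (rho + 1), 0.0)

def inexact_projection(V, phi, p=1):
    """Sketch of Algorithm 3: feasible inexact projection of a symmetric V
    onto C = {X psd : tr(X) = 1}.  `phi(Wp)` stands for the error tolerance
    varphi_gamma(U, V, Wp) and is supplied by the caller."""
    n = V.shape[0]
    d, Q = np.linalg.eigh(V)                     # ascending eigenvalues
    while True:
        # Step 1: rank-p projection built from the p leading eigenpairs.
        lam, Qp = project_simplex(d[-p:]), Q[:, -p:]
        Wp = Qp @ np.diag(lam) @ Qp.T
        # Step 2: Y_p = q q^T, with q the leading unit eigenvector of V - Wp.
        q = np.linalg.eigh(V - Wp)[1][:, -1]
        # Step 3: accept Wp once the inexactness test is satisfied.
        if np.tensordot(Wp - V, np.outer(q, q) - Wp) >= -phi(Wp) or p == n:
            return Wp
        p += 1                                   # Step 4: refine.
```

With `phi` identically zero the loop is only stopped by the exactness test (or $p=n$), recovering the exact projection; a generous tolerance accepts a low-rank $W_{p}$ early, which is where the computational savings come from.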
Concerning the stopping criterion of the main algorithms, all runs were
stopped at an iterate $X^{k}$ declaring convergence if
$\frac{\|X^{\ell}-X^{\ell-1}\|_{F}}{\|X^{\ell-1}\|_{F}}\leq 10^{-4},$
for $\ell=k$ and $\ell=k-1$. This means that we stopped the execution of the
main algorithms when the above convergence metric is satisfied for two
consecutive iterations.
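The two-consecutive-iterations stopping rule above amounts to the following check (an illustrative Python helper; the names are ours):

```python
import numpy as np

def small_relative_change(X_curr, X_prev, tol=1e-4):
    """Relative-change convergence metric ||X_l - X_{l-1}|| / ||X_{l-1}|| <= tol
    (Frobenius norm)."""
    return np.linalg.norm(X_curr - X_prev) <= tol * np.linalg.norm(X_prev)

def converged(X_k, X_km1, X_km2, tol=1e-4):
    """Declare convergence at X^k only if the metric holds for two
    consecutive iterations, i.e., for ell = k and ell = k - 1."""
    return (small_relative_change(X_k, X_km1, tol)
            and small_relative_change(X_km1, X_km2, tol))
```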
We used a similar strategy as in [24] to generate the test instances of (41).
Given the dimensions $n$ and $m$, with $m\geq n$, we randomly generate $A$ (a
sparse matrix with density $10^{-4}$) with elements in $(-1,1)$. Also,
for a given parameter $\omega>1$, we define
$\bar{X}:=\sum_{i=1}^{\omega}g_{i}g_{i}^{T}$, where $g_{i}\in\mathbb{R}^{n}$
is a random vector with only two nonzero components with the following
structure
$g_{i}=(\cdots\cos(\theta)\cdots\sin(\theta)\cdots)^{T}\in\mathbb{R}^{n}$, and
then set $B=A\bar{X}$. Since $\bar{X}\notin C$, this procedure generally
results in nonzero-residue problems.
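This instance-generation strategy can be sketched as follows (a dense NumPy illustration instead of the authors' sparse Matlab; we take $A$ of size $m\times n$ so that the product $A\bar{X}$ is well defined, and the function name is ours):

```python
import numpy as np

def make_instance(n, m, omega, density=1e-4, seed=0):
    """Generate a test instance of (41): sparse A with entries in (-1, 1),
    Xbar a sum of omega rank-one terms built from two-nonzero cos/sin
    vectors, and B = A @ Xbar."""
    rng = np.random.default_rng(seed)
    A = np.zeros((m, n))
    mask = rng.random((m, n)) < density          # sparsity pattern
    A[mask] = rng.uniform(-1.0, 1.0, size=mask.sum())
    Xbar = np.zeros((n, n))
    for _ in range(omega):
        i, j = rng.choice(n, size=2, replace=False)
        theta = rng.uniform(0.0, 2.0 * np.pi)
        g = np.zeros(n)
        g[i], g[j] = np.cos(theta), np.sin(theta)
        Xbar += np.outer(g, g)                   # each term has unit trace
    return A, Xbar, A @ Xbar
```

Note that $\mathrm{tr}(\bar{X})=\omega>1$, so indeed $\bar{X}\notin C$, which is why the resulting least-squares problems generally have nonzero residue.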
### 5.1 Influence of the forcing parameter $\gamma$
We start the numerical experiments by checking the influence of the forcing
parameter $\gamma^{k}=(\gamma_{1}^{k},\gamma_{2}^{k},\gamma_{3}^{k})$ in
Algorithm 1. We implemented Algorithm 1 with: $(a_{k})_{k\in\mathbb{N}}$ and
$(b_{k})_{k\in\mathbb{N}}$ given as in Remark 2(ii) with $\bar{b}=100$, and
using
$\varphi_{\gamma^{k}}(U,V,W)=\gamma_{1}^{k}\|V-U\|_{F}^{2}+\gamma_{2}^{k}\|W-V\|_{F}^{2}+\gamma_{3}^{k}\|W-U\|_{F}^{2},\quad\forall
k=1,2,\ldots,$ (43)
as the error tolerance function, see Definition 2. We also set
$\bar{\gamma_{2}}=0.49995$. Concerning parameter $\bar{\gamma}$, we considered
different values for it, as we will explain below. Given a particular
$\bar{\gamma}<1/2$, we set
$\alpha=0.9999\cdot\frac{1-2\bar{\gamma}}{L},$ (44)
where the Lipschitz constant $L$, with respect to problem (41), is given by
$L=\|A^{T}A\|_{F}$. The choice (44) for the fixed step size $\alpha$ trivially
satisfies (12). Since parameter $\gamma_{3}^{k}$ and the step size $\alpha$
are closely related (see (44) and recall that
$0\leq\gamma_{3}^{k}\leq\bar{\gamma}<1/2$), we first investigate the behavior
of Algorithm 1 by varying only the strategy for $\gamma_{3}^{k}$. We set
$\gamma_{2}^{k}=\min\left(\frac{1}{2}\frac{a_{k}}{\|\nabla
f(X^{k})\|_{F}^{2}},\bar{\gamma_{2}}\right),\quad\gamma_{1}^{k}=\frac{a_{k}}{\|\nabla
f(X^{k})\|_{F}^{2}}-\gamma_{2}^{k},\quad\mbox{and}\quad\gamma_{3}^{k}=\bar{\gamma},\quad\forall
k=1,2,\ldots,$ (45)
and considered some different values for $\bar{\gamma}$. Note that the forcing
parameter $\gamma^{k}$ given by (45) satisfies (10). We used an instance of
problem (41) with $n=2000$, $m=4000$, and $\omega=10$. The results for the
starting point $X^{0}=(1/n)I$ and different choices for $\bar{\gamma}$ are in
Table 1. In the table, “$f(X^{*})$” is the function value at the final
iterate, “it” is the number of outer iterations, “time(s)” is the run time in
seconds, and “$\alpha$” is the corresponding fixed step size given by (44).
$\gamma_{3}^{k}=\bar{\gamma}$ | $f(X^{*})$ | it | time(s) | $\alpha$
---|---|---|---|---
$0.0$ | 0.4899 | 107 | 28.9 | 0.0698
$0.1$ | 0.4899 | 129 | 36.6 | 0.0558
$0.2$ | 0.4899 | 162 | 43.5 | 0.0419
$0.3$ | 0.4899 | 223 | 59.6 | 0.0279
$0.4$ | 0.4899 | 375 | 101.0 | 0.0140
Table 1: Influence of parameter $\bar{\gamma}$ on the performance of
Algorithm 1 with the forcing parameter $\gamma^{k}$ given as in (45) for an
instance of problem (41) with $n=2000$, $m=4000$, and $\omega=10$.
As can be seen in Table 1, Algorithm 1 performs better for lower values of
$\bar{\gamma}$. This is undoubtedly due to the fact that the fixed step
size $\alpha$ decreases with $\bar{\gamma}$, see (44) and
the last column of the table. This result suggests that for the gradient
method with constant step size, the best choice is to take $\gamma_{3}^{k}=0$
for all $k$, leaving the inexactness of the projections to be controlled only
by the terms of $\gamma_{1}^{k}$ and $\gamma_{2}^{k}$ in (43). From an
algorithmic point of view, the term corresponding to $\gamma_{3}^{k}$ in (43)
involves the last two generated iterates and is often used in inexactness
measures. Therefore, for projection algorithms that use a constant step size,
at least under restrictions as in (12), the theory developed here presents
practical alternatives for the formulation of such measures.
Taking $\bar{\gamma}=0$, we consider different combinations of
$\gamma_{1}^{k}$ and $\gamma_{2}^{k}$ such that
$\gamma_{1}^{k}+\gamma_{2}^{k}=a_{k}/\|\nabla f(X^{k})\|_{F}^{2}$. Our
experiments showed that Algorithm 1 presented no significant performance
difference with these combinations. Therefore, in the experiments reported in
section 5.2, we set for Algorithm 1 the forcing parameter $\gamma^{k}$ as in
(45) with $\bar{\gamma}=0$.
### 5.2 Comparison with exact projection approaches
In the present section, we compare the performance of Algorithms 1 and 2 with
their exact counterparts. Algorithm 2 was implemented using the error
tolerance function (43) with
$\gamma^{k}=(0,0,0.49995),\quad\forall k=1,2,\ldots,$
and
$\alpha_{k}:=\left\\{\begin{array}[]{ll}\displaystyle\min\left(\alpha_{\max},\max\left(\alpha_{\min},\langle
S^{k},S^{k}\rangle/\langle
S^{k},Y^{k}\rangle\right)\right),&\mbox{if}\;\langle S^{k},Y^{k}\rangle>0\\\
\alpha_{\max},&\mbox{otherwise},\end{array}\right.$ (46)
where $S^{k}:=X^{k}-X^{k-1}$, $Y^{k}:=\nabla f(X^{k})-\nabla f(X^{k-1})$,
$\alpha_{\min}=10^{-10}$, and $\alpha_{\max}=10^{10}$. We observe that (46)
corresponds to the spectral choice for $\alpha_{k}$, see [8, 7]. In the exact
versions, the projections are calculated exactly, that is, involving full
eigen-decompositions.
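The safeguarded spectral choice (46) is straightforward to implement; the following is a minimal Python sketch (function name ours) matching the formula term by term:

```python
import numpy as np

def spectral_step(X, X_prev, G, G_prev, a_min=1e-10, a_max=1e10):
    """Spectral (Barzilai-Borwein) choice (46) for alpha_k:
    alpha = <S,S>/<S,Y> safeguarded in [a_min, a_max] when <S,Y> > 0,
    and alpha = a_max otherwise, where S = X^k - X^{k-1} and
    Y = grad f(X^k) - grad f(X^{k-1})."""
    S = X - X_prev
    Y = G - G_prev
    sy = float(np.tensordot(S, Y))               # Frobenius inner product <S, Y>
    if sy > 0.0:
        return min(a_max, max(a_min, float(np.tensordot(S, S)) / sy))
    return a_max
```

On the quadratic $f(X)=\frac{1}{2}\|X\|_{F}^{2}$, where $\nabla f(X)=X$ gives $Y=S$, this rule returns the ideal step $\alpha_{k}=1$.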
We considered some instances of problem (41) with different parameters $n$,
$m$ and $\omega$ and using three starting points given by
$X^{0}(\beta)=(1-\beta)(1/n)I+\beta e_{1}e_{1}^{T}$, where
$e_{1}\in\mathbb{R}^{n}$ is the first canonical vector and
$\beta\in\\{0.00,0.50,0.99\\}$. The results in Table 2 show that, in terms
of CPU time, the inexact algorithms were notably more efficient (mainly on
the larger instances) than the corresponding exact versions. In general,
moderate values for the rank parameter $p$ in Algorithm 3 (typically, less
than 10) were sufficient to compute the inexact projections, allowing
significant computational savings with respect to the exact approaches.
Finally, we observe that Algorithm 2 was much more efficient than Algorithm 1
on the chosen set of test problems. This was already expected, due to the
simplicity of the objective function of (41). Note that Algorithm 1 does not
require evaluations of the objective function (only gradient evaluations).
Therefore, we expect Algorithm 1 to be competitive on problems where the
objective function is computationally expensive to evaluate.
| Algorithm 1 | Algorithm 2
---|---|---
| Inexact | Exact | Inexact | Exact
$n$ | $m$ | $\omega$ | $\beta$ | $f(X^{*})$ | it | time(s) | $f(X^{*})$ | it | time(s) | $f(X^{*})$ | it | time(s) | $f(X^{*})$ | it | time(s)
| | | 0.00 | 0.4899 | 107 | 30.0 | 0.4899 | 108 | 78.3 | 0.4899 | 8 | 3.6 | 0.4899 | 8 | 6.8
| | 10 | 0.50 | 0.4899 | 108 | 36.7 | 0.4899 | 108 | 78.4 | 0.4899 | 9 | 4.0 | 0.4899 | 9 | 7.7
| | | 0.99 | 0.4899 | 110 | 29.2 | 0.4899 | 110 | 78.8 | 0.4899 | 9 | 4.0 | 0.4899 | 9 | 7.6
| | | 0.00 | 0.7887 | 141 | 47.6 | 0.7887 | 141 | 101.7 | 0.7887 | 9 | 4.2 | 0.7887 | 9 | 7.5
| | 20 | 0.50 | 0.7887 | 139 | 46.8 | 0.7887 | 139 | 100.4 | 0.7887 | 9 | 4.2 | 0.7887 | 9 | 7.6
2000 | 4000 | | 0.99 | 0.7887 | 136 | 36.1 | 0.7887 | 138 | 96.6 | 0.7887 | 10 | 4.8 | 0.7887 | 10 | 8.3
| | | 0.00 | 0.2139 | 189 | 124.3 | 0.2139 | 189 | 543.1 | 0.2139 | 8 | 9.6 | 0.2139 | 8 | 27.0
| | 10 | 0.50 | 0.2139 | 189 | 133.2 | 0.2139 | 189 | 541.5 | 0.2139 | 8 | 9.6 | 0.2139 | 9 | 30.0
| | | 0.99 | 0.2139 | 190 | 121.8 | 0.2139 | 190 | 537.1 | 0.2139 | 10 | 11.5 | 0.2139 | 9 | 30.2
| | | 0.00 | 0.9837 | 173 | 113.1 | 0.9837 | 172 | 491.6 | 0.9837 | 11 | 13.0 | 0.9837 | 12 | 39.4
| | 20 | 0.50 | 0.9837 | 170 | 109.9 | 0.9837 | 170 | 485.2 | 0.9837 | 11 | 12.9 | 0.9837 | 10 | 33.2
3000 | 6000 | | 0.99 | 0.9837 | 161 | 102.8 | 0.9837 | 164 | 466.3 | 0.9837 | 12 | 14.1 | 0.9837 | 10 | 33.3
| | | 0.00 | 1.0046 | 166 | 194.2 | 1.0046 | 165 | 1092.5 | 1.0046 | 10 | 22.8 | 1.0046 | 11 | 83.7
| | 10 | 0.50 | 1.0046 | 165 | 191.6 | 1.0046 | 165 | 1088.8 | 1.0046 | 12 | 26.4 | 1.0046 | 11 | 83.7
| | | 0.99 | 1.0046 | 169 | 193.4 | 1.0046 | 169 | 1113.0 | 1.0046 | 11 | 25.0 | 1.0046 | 12 | 91.2
| | | 0.00 | 3.0753 | 90 | 126.2 | 3.0753 | 89 | 585.9 | 3.0753 | 8 | 16.9 | 3.0753 | 8 | 63.6
| | 20 | 0.50 | 3.0753 | 89 | 122.8 | 3.0753 | 89 | 584.9 | 3.0753 | 9 | 18.5 | 3.0753 | 8 | 62.0
4000 | 8000 | | 0.99 | 3.0753 | 87 | 99.4 | 3.0753 | 86 | 566.0 | 3.0753 | 9 | 18.8 | 3.0753 | 9 | 67.3
| | | 0.00 | 0.7182 | 243 | 599.6 | 0.7182 | 243 | 3212.4 | 0.7181 | 10 | 39.7 | 0.7181 | 10 | 150.8
| | 10 | 0.50 | 0.7182 | 244 | 594.4 | 0.7182 | 244 | 3206.3 | 0.7181 | 9 | 36.3 | 0.7181 | 10 | 151.4
| | | 0.99 | 0.7182 | 246 | 571.2 | 0.7182 | 246 | 3227.1 | 0.7181 | 10 | 38.2 | 0.7181 | 10 | 150.0
| | | 0.00 | 2.7721 | 178 | 436.3 | 2.7721 | 178 | 2339.8 | 2.7721 | 8 | 29.8 | 2.7721 | 8 | 122.8
| | 20 | 0.50 | 2.7721 | 177 | 436.2 | 2.7721 | 177 | 2325.5 | 2.7721 | 8 | 30.5 | 2.7721 | 7 | 108.5
5000 | 10000 | | 0.99 | 2.7721 | 172 | 395.9 | 2.7721 | 172 | 2253.6 | 2.7721 | 8 | 29.6 | 2.7721 | 8 | 122.3
Table 2: Performance of the inexact and exact versions of Algorithms 1 and 2
in some instances of problem (41).
## 6 Conclusions
In this paper, we proposed a new inexact version of the classical gradient
projection method (GPM), denoted the Gradient-InexP method (GInexPM), for
solving constrained convex optimization problems. To compute an inexact
projection, GInexPM uses a relative error tolerance. Two different
strategies for choosing the step size were employed in the analyses of the
method. The convergence analysis was carried out without any compactness
assumption. In addition, we provided some iteration-complexity results related
to GInexPM. Numerical results were reported illustrating potential advantages
of considering inexact projections instead of exact ones. We expect that this
paper will contribute to the development of research in this field of inexact
projections, mainly to solve large-scale problems.
## References
* [1] A. A. Aguiar, O. P. Ferreira, and L. F. Prudente. Subgradient method with feasible inexact projections for constrained convex optimization problems. arXiv preprint arXiv:2006.08770, 2020.
* [2] Z. Allen-Zhu, E. Hazan, W. Hu, and Y. Li. Linear convergence of a Frank-Wolfe type algorithm over trace-norm balls. In Advances in Neural Information Processing Systems, pages 6191–6200, 2017.
* [3] A. Beck. Introduction to nonlinear optimization, volume 19 of MOS-SIAM Series on Optimization. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA; Mathematical Optimization Society, Philadelphia, PA, 2014. Theory, algorithms, and applications with MATLAB.
* [4] J. Y. Bello Cruz and L. R. Lucambio Pérez. Convergence of a projected gradient method variant for quasiconvex objectives. Nonlinear Anal., 73(9):2917–2922, 2010.
* [5] D. P. Bertsekas. On the Goldstein-Levitin-Polyak gradient projection method. IEEE Trans. Automatic Control, AC-21(2):174–184, 1976.
* [6] D. P. Bertsekas. Nonlinear programming. Athena Scientific Optimization and Computation Series. Athena Scientific, Belmont, MA, second edition, 1999.
* [7] E. G. Birgin, J. M. Martínez, and M. Raydan. Nonmonotone spectral projected gradient methods on convex sets. SIAM Journal on Optimization, 10(4):1196–1211, 2000, https://doi.org/10.1137/S1052623497330963.
* [8] E. G. Birgin, J. M. Martínez, and M. Raydan. Inexact spectral projected gradient methods on convex sets. IMA Journal of Numerical Analysis, 23(4):539–559, 2003.
* [9] L. Bottou, F. E. Curtis, and J. Nocedal. Optimization methods for large-scale machine learning. SIAM Rev., 60(2):223–311, 2018.
* [10] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear matrix inequalities in system and control theory, volume 15 of SIAM Studies in Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1994.
* [11] S. Bubeck. Convex optimization: Algorithms and complexity. Foundations and Trends® in Machine Learning, 8(3-4):231–357, 2015.
* [12] R. Burachik, L. M. G. Drummond, A. N. Iusem, and B. F. Svaiter. Full convergence of the steepest descent method with inexact line searches. Optimization, 32(2):137–146, 1995.
* [13] F. R. de Oliveira, O. P. Ferreira, and G. N. Silva. Newton’s method with feasible inexact projections for solving constrained generalized equations. Comput. Optim. Appl., 72(1):159–177, 2019.
* [14] R. Díaz Millán, O. P. Ferreira, and L. F. Prudente. Alternating conditional gradient method for convex feasibility problems. arXiv:1912.04247, Dec 2019.
* [15] R. Escalante and M. Raydan. Dykstra’s algorithm for constrained least-squares rectangular matrix problems. Computers & Mathematics with Applications, 35(6):73–79, 1998.
* [16] J. Fan, L. Wang, and A. Yan. An inexact projected gradient method for sparsity-constrained quadratic measurements regression. Asia-Pac. J. Oper. Res., 36(2):1940008, 21, 2019.
* [17] M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE Journal of Selected Topics in Signal Processing, 1(4):586–597, Dec 2007.
* [18] R. Fletcher. A nonlinear programming problem in statistics (educational testing). SIAM Journal on Scientific and Statistical Computing, 2(3):257–267, 1981, https://doi.org/10.1137/0902021.
* [19] J. Fliege, A. I. F. Vaz, and L. N. Vicente. Complexity of gradient descent for multiobjective optimization. Optim. Methods Softw., 34(5):949–959, 2019.
* [20] M. Fukushima, Z.-Q. Luo, and P. Tseng. Smoothing functions for second-order-cone complementarity problems. SIAM J. Optim., 12(2):436–460, 2001/02.
* [21] M. Golbabaee and M. E. Davies. Inexact gradient projection and fast data driven compressed sensing. IEEE Transactions on Information Theory, 64(10):6707–6721, 2018.
* [22] A. A. Goldstein. Convex programming in Hilbert space. Bull. Amer. Math. Soc., 70:709–710, 1964.
* [23] D. S. Gonçalves, M. L. Gonçalves, and F. R. Oliveira. Levenberg-Marquardt methods with inexact projections for constrained nonlinear systems. arXiv:1908.06118, 2019.
* [24] D. S. Gonçalves, M. L. N. Gonçalves, and T. C. Menezes. Inexact variable metric method for convex-constrained optimization problems. Optimization-Online e-prints, 2020.
* [25] P. Gong, K. Gai, and C. Zhang. Efficient Euclidean projections via piecewise root finding and its application in gradient projection. Neurocomputing, 74(17):2754–2766, 2011.
* [26] D. Gonçalves, M. Gomes-Ruggiero, and C. Lavor. A projected gradient method for optimization over density matrices. Optimization Methods and Software, 31(2):328–341, 2016, https://doi.org/10.1080/10556788.2015.1082105.
* [27] M. L. N. Gonçalves, J. G. Melo, and R. D. C. Monteiro. Projection-free accelerated method for convex optimization. Optimization Methods and Software, 0(0):1–27, 2020.
* [28] A. N. Iusem. On the convergence properties of the projected gradient method for convex optimization. Comput. Appl. Math., 22(1):37–52, 2003.
* [29] A. N. Iusem and B. F. Svaiter. A proximal regularization of the steepest descent method. RAIRO Rech. Opér., 29(2):123–130, 1995.
* [30] K. C. Kiwiel and K. Murty. Convergence of the steepest descent method for minimizing quasiconvex functions. J. Optim. Theory Appl., 89(1):221–226, 1996.
* [31] G. Lan. The Complexity of Large-scale Convex Programming under a Linear Optimization Oracle. arXiv:1309.5550, Sep 2013.
* [32] G. Lan and Y. Zhou. Conditional gradient sliding for convex optimization. SIAM J. Optim., 26(2):1379–1409, 2016.
* [33] C.-P. Lee and S. Wright. First-order algorithms converge faster than $o(1/k)$ on convex problems. In K. Chaudhuri and R. Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 3754–3762, Long Beach, California, USA, 09–15 Jun 2019. PMLR.
* [34] R. B. Lehoucq, D. C. Sorensen, and C. Yang. ARPACK Users’ Guide. Society for Industrial and Applied Mathematics, 1998, https://epubs.siam.org/doi/pdf/10.1137/1.9780898719628.
* [35] E. Levitin and B. Polyak. Constrained minimization methods. USSR Computational Mathematics and Mathematical Physics, 6(5):1 – 50, 1966.
* [36] G. Ma, Y. Hu, and H. Gao. An accelerated momentum based gradient projection method for image deblurring. In 2015 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC), pages 1–4, 2015.
* [37] O. L. Mangasarian. Nonlinear programming, volume 10 of Classics in Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1994. Corrected reprint of the 1969 original.
* [38] J. J. Moré. Gradient projection techniques for large-scale optimization problems. In Proceedings of the 28th IEEE Conference on Decision and Control, Vol. 1–3 (Tampa, FL, 1989), pages 378–381. IEEE, New York, 1989.
* [39] J. J. Moré. On the performance of algorithms for large-scale bound constrained problems. In Large-scale numerical optimization (Ithaca, NY, 1989), pages 32–45. SIAM, Philadelphia, PA, 1990.
* [40] Y. Nesterov. Introductory lectures on convex optimization, volume 87 of Applied Optimization. Kluwer Academic Publishers, Boston, MA, 2004. A basic course.
* [41] Y. Nesterov and A. Nemirovski. On first-order algorithms for $\ell_{1}$/nuclear norm minimization. Acta Numer., 22:509–575, 2013.
* [42] J. Nocedal and S. J. Wright. Numerical optimization. Springer Series in Operations Research and Financial Engineering. Springer, New York, second edition, 2006.
* [43] A. Patrascu and I. Necoara. On the convergence of inexact projection primal first-order methods for convex minimization. IEEE Trans. Automat. Control, 63(10):3317–3329, 2018.
* [44] M. Schmidt, N. L. Roux, and F. Bach. Convergence rates of inexact proximal-gradient methods for convex optimization. In Proceedings of the 24th International Conference on Neural Information Processing Systems, NIPS’11, page 1458–1466, Red Hook, NY, USA, 2011. Curran Associates Inc.
* [45] A. M.-C. So and Z. Zhou. Non-asymptotic convergence analysis of inexact gradient methods for machine learning without strong convexity. Optim. Methods Softw., 32(4):963–992, 2017.
* [46] S. Sra, S. Nowozin, and S. Wright. Optimization for Machine Learning. Neural information processing series. MIT Press, 2012.
* [47] G. W. Stewart. A Krylov–Schur algorithm for large eigenproblems. SIAM Journal on Matrix Analysis and Applications, 23(3):601–614, 2002, https://doi.org/10.1137/S0895479800371529.
* [48] J. Tang, M. Golbabaee, and M. E. Davies. Gradient projection iterative sketch for large-scale constrained least-squares. In D. Precup and Y. W. Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 3377–3386, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR.
* [49] S. Villa, S. Salzo, L. Baldassarre, and A. Verri. Accelerated and inexact forward-backward algorithms. SIAM J. Optim., 23(3):1607–1633, 2013.
* [50] K. G. Woodgate. Least-squares solution of F = PG over positive semidefinite symmetric P. Linear Algebra and its Applications, 245:171–190, 1996.
* [51] F. Zhang, H. Wang, J. Wang, and K. Yang. Inexact primal–dual gradient projection methods for nonlinear optimization on convex set. Optimization, 0(0):1–27, 2019.
# Global rigidity for ultra-differentiable quasiperiodic cocycles and its
spectral applications
Hongyu Cheng, Chern Institute of Mathematics and LPMC, Nankai University, Tianjin 300071, China,<EMAIL_ADDRESS>
Lingrui Ge, Department of Mathematics, University of California Irvine, CA 92697-3875, USA,<EMAIL_ADDRESS>
Jiangong You, Chern Institute of Mathematics and LPMC, Nankai University, Tianjin 300071, China,<EMAIL_ADDRESS>
Qi Zhou, Chern Institute of Mathematics and LPMC, Nankai University, Tianjin 300071, China,<EMAIL_ADDRESS>
###### Abstract.
For quasiperiodic Schrödinger operators with one-frequency analytic potentials, from the dynamical systems side, it has been proved that the corresponding quasiperiodic Schrödinger cocycle is either rotations reducible or has positive Lyapunov exponent for all irrational frequencies and almost every energy [2]. From the spectral theory side, the “Schrödinger conjecture” [2] and “Last’s intersection spectrum conjecture” have been verified [35]. The proofs of the above results crucially depend on the analyticity of the potentials. It is natural to ask whether analyticity is essential for these problems; see the open problems by Fayad-Krikorian [26, 39] and Jitomirskaya-Marx [35, 46]. In this paper, we prove the above mentioned results for ultra-differentiable potentials.
## 1\. Introduction and main results
We consider smooth quasiperiodic $SL(2,{\mathbb{R}})$ cocycles
$(\alpha,A):\mathbb{T}\times\mathbb{R}^{2}\rightarrow\mathbb{T}\times\mathbb{R}^{2},\
\ (\theta,w)\mapsto(\theta+\alpha,A(\theta)w),$
where $\alpha\in{\mathbb{R}}\setminus{\mathbb{Q}}$, $A\in
C^{\infty}({\mathbb{T}},SL(2,{\mathbb{R}}))$ and
$\mathbb{T}:=\mathbb{R}/\mathbb{Z}$. Typical examples are Schrödinger cocycles
where
$\begin{split}A(\theta)=S_{E}^{V}(\theta)=\left(\begin{array}[]{ll}E-V(\theta)&-1\\\
1&0\end{array}\right),\end{split}$
which is equivalent to the eigenvalue equations of the one-dimensional
quasiperiodic Schrödinger operator $H_{V,\alpha,\theta}$ defined by
(1.1)
$\begin{split}(H_{V,\alpha,\theta}u)_{n}=u_{n-1}+u_{n+1}+V(\theta+n\alpha)u_{n}.\end{split}$
Quasiperiodic Schrödinger operators describe the conductivity of electrons in
a two-dimensional crystal layer subject to an external magnetic field of flux
acting perpendicular to the lattice plane. Due to the rich backgrounds in
quantum physics, quasiperiodic Schrödinger operators have been extensively
studied [44].
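As a concrete illustration (ours, not part of the original text), the equivalence between the operator (1.1) and the cocycle matrix $S_{E}^{V}$ can be checked numerically: a solution of $H_{V,\alpha,\theta}u=Eu$ satisfies $(u_{n+1},u_{n})^{T}=S_{E}^{V}(\theta+n\alpha)(u_{n},u_{n-1})^{T}$. A minimal Python sketch (function names are our own):

```python
import numpy as np

def transfer_matrix(E, V, theta):
    """The Schrodinger cocycle matrix S_E^V(theta) from the text."""
    return np.array([[E - V(theta), -1.0],
                     [1.0, 0.0]])

def apply_H(u, V, alpha, theta, n):
    """(H_{V,alpha,theta} u)_n = u_{n-1} + u_{n+1} + V(theta + n*alpha) u_n,
    with u a Python list indexed so that u[n] is the n-th entry."""
    return u[n - 1] + u[n + 1] + V((theta + n * alpha) % 1.0) * u[n]
```

If $u$ solves the eigenvalue equation at energy $E$, the transfer matrix maps $(u_{n},u_{n-1})$ to $(u_{n+1},u_{n})$, which is exactly how the equivalence stated above arises.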
It has been proved that the (almost) reducibility of the above Schrödinger
cocycles is a powerful tool in the study of the spectral theory of
quasiperiodic Schrödinger operators [51]. Recall that $(\alpha,A)$ is
$C^{r}$ ($r$ may be $\infty$ or $\omega$) reducible if there exist $B\in
C^{r}({\mathbb{T}},PSL(2,{\mathbb{R}}))$ and $C\in SL(2,{\mathbb{R}})$ such
that
$B(\cdot+\alpha)A(\cdot)B(\cdot)^{-1}=C.$
We remark that reducibility is too restrictive a notion, since even an
${\mathbb{R}}$-valued cocycle is in general not reducible if the frequency is
very Liouvillean. The appropriate notion is $C^{r}$ rotations reducibility,
which means that there exist $B\in C^{r}({\mathbb{T}},PSL(2,{\mathbb{R}}))$ and
$C\in C^{r}({\mathbb{T}},SO(2,{\mathbb{R}}))$ such that
$B(\cdot+\alpha)A(\cdot)B(\cdot)^{-1}=C(\cdot).$
By Kotani’s theory, for Lebesgue almost every $E\in\mathbb{R}$, the
Schrödinger cocycle $(\alpha,S_{E}^{V})$ is either $L^{2}$ rotations
reducible or has positive Lyapunov exponent (we refer to Section 2.1 for the
definitions and basic results). In many circumstances, especially for
dynamical and spectral applications, what is important is rigidity, i.e.,
whether an $L^{2}$ conjugacy for analytic (resp. smooth) cocycles implies an
analytic (resp. smooth) conjugacy under some additional assumptions. In this
paper, we are interested in global rigidity results for smooth quasiperiodic
Schrödinger cocycles and their spectral applications.
### 1.1. Global rigidity results for smooth quasiperiodic cocycles
Based on the powerful method of renormalization, Avila-Krikorian [5] proved
that if $\alpha$ is recurrent Diophantine, $V\in
C^{\omega}({\mathbb{T}},{\mathbb{R}})$, then for Lebesgue almost every $E,$
the Schrödinger cocycle $(\alpha,S_{E}^{V})$ is either nonuniformly hyperbolic
or $C^{\omega}$ reducible. Later, Fayad-Krikorian [26] proved that for all
Diophantine $\alpha$ (we say $\alpha$ is Diophantine, denoted $\alpha\in{\rm
DC}(v,\tau)$, if there exist $v>0$ and $\tau>1$ such that
$\|n\alpha\|_{{\mathbb{Z}}}:=\inf_{j\in{\mathbb{Z}}}\left|n\alpha-j\right|>\frac{v}{|n|^{\tau}},\quad\forall\
n\in{\mathbb{Z}}\backslash\\{0\\},$ and denote by ${\rm
DC}:=\bigcup_{v>0,\,\tau>1}{\rm DC}(v,\tau)$ the union) and all $V\in
C^{\infty}({\mathbb{T}},{\mathbb{R}})$, for Lebesgue almost every $E,$
the Schrödinger cocycle $(\alpha,S_{E}^{V})$ is either nonuniformly hyperbolic
or $C^{\infty}$-reducible. Indeed, Fayad-Krikorian [26] pointed out that
extending the results of [26] to arbitrary irrational numbers is an
interesting and important problem. The problem was later settled by
Avila-Fayad-Krikorian [2] in the analytic case. More precisely, for all
irrational $\alpha$ and any $V\in C^{\omega}({\mathbb{T}},{\mathbb{R}})$,
Avila-Fayad-Krikorian [2] proved that the Schrödinger cocycle
$(\alpha,S_{E}^{V})$ is either $C^{\omega}$ rotations reducible or has
positive Lyapunov exponent for Lebesgue almost every $E$. Around 2011,
R. Krikorian [39] asked the fourth author whether the global dichotomy
results of [2] hold in the $C^{\infty}$ topology.
In the present paper, we address R. Krikorian’s question for a large class of
$C^{\infty}$ cocycles. We first introduce the definition of $M$-ultra-
differentiable functions. The derivatives of a $C^{\infty}$ function $f$ may
grow arbitrarily fast, and its regularity is characterized by the growth of
$D^{s}f$. For a given sequence of positive real numbers
$M=(M_{s})_{s\in\mathbb{N}},$ we say $f\in
C^{\infty}({\mathbb{T}},{\mathbb{R}})$ is $M$-ultra-differentiable if there
exists $r>0$ such that
$\|D^{s}f\|_{C^{0}}\leq r^{-s}M_{s};$
here $r$ is called the “width”. The real-analytic and $\nu$-Gevrey functions
are two special cases, corresponding to $M_{s}=s!$ and
$M_{s}=(s!)^{\nu^{-1}},0<\nu<1,$ respectively. Our main result is the
following:
###### Theorem 1.1.
Let $\alpha\in{\mathbb{R}}\setminus\mathbb{Q}$,
$V:\mathbb{T}\rightarrow{\mathbb{R}}$ be an M-ultra-differentiable function
with $M=(M_{s})_{s\in\mathbb{N}}$ satisfying
$\mathbf{(H1)}$: Log-convex:
$M_{\ell}^{s-k}<M_{k}^{s-\ell}M_{s}^{\ell-k},\quad s>\ell>k,$
$\mathbf{(H2)}$: Sub-exponential growth:
$\lim_{s\rightarrow\infty}s^{-1}\ln(M_{s+1}/M_{s})=0.$
Then for Lebesgue almost every $E\in\mathbb{R},$ either the Schrödinger
cocycle $(\alpha,S_{E}^{V})$ is $C^{\infty}$ rotations reducible or it has
positive Lyapunov exponent.
###### Remark 1.1.
Theorem 1.1 establishes $C^{\infty}$ rigidity for a large class of $C^{\infty}$
quasiperiodic cocycles. The almost rigidity in the $C^{\infty}$ topology was
already proved by Fayad-Krikorian [26]. More precisely, they proved that the
cocycle either has positive Lyapunov exponent, or is $C^{\infty}$ almost
rotations reducible, meaning that it can be approximated by rotations
reducible cocycles in the $C^{\infty}$ topology.
We remark that the assumptions $\mathbf{(H1)}$ and $\mathbf{(H2)}$ are not
restrictive. Both analytic and Gevrey class functions obviously satisfy
$\mathbf{(H1)}$ and $\mathbf{(H2)}$. Indeed, the log-convexity condition
$\mathbf{(H1)}$ is a very classical assumption in the literature, which
guarantees that the space of M-ultra-differentiable functions forms a Banach
algebra. The sub-exponential condition $\mathbf{(H2)}$ was first introduced by
Bounemoura-Fejoz [13] to guarantee that ultra-differentiable functions admit
an analogue of the Cauchy estimates for analytic functions, which is one of
the main ingredients in KAM theory. We remark that the condition commonly
used in the literature is the moderate growth condition:
$\sup_{s,\ell\in{\mathbb{N}}}\left(\frac{M_{s+\ell}}{M_{s}M_{\ell}}\right)^{\frac{1}{s+\ell}}<\infty,$
which is stronger than $\mathbf{(H2)};$ see [13] for details.
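Both assumptions are easy to test numerically for a given weight sequence. The sketch below (our own illustration, taking $M_{s}=s!$ for the analytic class) checks $\mathbf{(H1)}$ in logarithmic form and evaluates the quantity appearing in $\mathbf{(H2)}$:

```python
import math

def logM_factorial(s):
    """ln(M_s) for the analytic class M_s = s!."""
    return math.lgamma(s + 1)

def h1_holds(logM, s, l, k):
    """Log-convexity (H1): M_l^{s-k} < M_k^{s-l} M_s^{l-k} for s > l > k,
    checked in logarithmic form."""
    assert s > l > k
    return (s - k) * logM(l) < (s - l) * logM(k) + (l - k) * logM(s)

def h2_ratio(logM, s):
    """The quantity s^{-1} ln(M_{s+1}/M_s), which (H2) requires to tend to 0."""
    return (logM(s + 1) - logM(s)) / s
```

For $M_{s}=s!$ the ratio in $\mathbf{(H2)}$ equals $\ln(s+1)/s$, which indeed tends to $0$.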
Attached to the sequence $(M_{s})_{s\in{\mathbb{N}}}$, one can define
$\Lambda:[0,\ \infty)\rightarrow[0,\ \infty)$ by
$\Lambda(y):=\ln\big{(}\sup_{s\in\mathbb{N}}y^{s}M_{s}^{-1}\big{)}=\sup_{s\in\mathbb{N}}(s\ln
y-\ln M_{s}),$
which in fact describes the decay rate of the Fourier coefficients of
periodic functions. For $C^{\infty}$ periodic functions, $\Lambda(y)$ grows
faster than $\ln(y^{s})$ for any $s\in{\mathbb{N}}$ as $y$ goes to infinity;
consequently, $C^{\infty}$ means
$\lim_{y\rightarrow\infty}\Lambda(y)/\ln(y)=\infty.$
On the other hand, one can easily check that
$M_{s}=\exp\\{s^{\delta(\delta-1)^{-1}}\\}$ satisfies $\mathbf{(H1)}$ and
$\mathbf{(H2)}$ if and only if $\delta>2;$ attached to this $M_{s}$,
$\Lambda(y)=(\ln y)^{\delta}.$ Notice that $\Lambda(y)=y^{\nu},0<\nu<1,$ for
Gevrey functions. Thus the space of M-ultra-differentiable functions
satisfying $\mathbf{(H1)}$ and $\mathbf{(H2)}$ is much bigger than the space
of Gevrey functions, and quite close to the whole space of $C^{\infty}$
functions. However, those $C^{\infty}$ functions with $\Lambda(y)\leq(\ln
y)^{2}$ are not included. We do not know whether this restriction is
essential or merely a limitation of our method.
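The function $\Lambda$ can also be evaluated numerically. As a sanity check (our illustration, not from the text): for the analytic class $M_{s}=s!$, Stirling's formula gives $\Lambda(y)=y-\frac{1}{2}\ln(2\pi y)+o(1)\sim y$, consistent with $\Lambda(y)=y^{\nu}$ for Gevrey functions as $\nu\to 1$:

```python
import math

def Lambda(y, logM, smax=500):
    """Lambda(y) = sup_{s in N} (s*ln(y) - ln(M_s)), truncated at s = smax."""
    return max(s * math.log(y) - logM(s) for s in range(smax + 1))

# For M_s = s!, the supremum is attained near s ~ y and Lambda(y) ~ y.
lam = Lambda(100.0, lambda s: math.lgamma(s + 1))
```

The truncation at `smax` is harmless as long as the maximizing $s$ lies below it, which for $M_{s}=s!$ means roughly $y<\mathtt{smax}$.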
The proof of Theorem 1.1 is based on a renormalization technique and a local
KAM result. For M-ultra-differentiable functions, to describe the smallness
of the perturbation, we define the $\|\cdot\|_{M,r}$-norm by
$\displaystyle\|f\|_{M,r}=c\sup_{s\in\mathbb{N}}\big{(}(1+s)^{2}r^{s}\|D_{\theta}^{s}f(\theta)\|_{C^{0}}M_{s}^{-1}\big{)}<\infty,\
c=4\pi^{2}/3,$
and denote by $U^{M}_{r}(\mathbb{T},*)$ the set of all such $*$-valued
functions ($*$ will usually denote ${\mathbb{R}}$, $sl(2,{\mathbb{R}})$ or
$SL(2,{\mathbb{R}})$).
Our precise KAM-type result is then the following:
###### Theorem 1.2.
Let $r>0$, $\alpha\in{\mathbb{R}}\setminus\mathbb{Q}$ and $A\in
U^{M}_{r}(\mathbb{T},SL(2,\mathbb{R}))$ with $M$ satisfying $\mathbf{(H1)}$
and $\mathbf{(H2)}.$ Then for every $\tau>1$ and $\gamma>0,$ there exists
$\varepsilon_{*}=\varepsilon_{*}(\gamma,\tau,r,M)>0,$ such that if
$\|A-R\|_{M,r}\leq\varepsilon_{*}$ for some $R\in SO(2,{\mathbb{R}}),$ and
$\rho(\alpha,A)=:\rho_{f}\in DC_{\alpha}(\gamma,\tau),\ i.e.,$
$\|k\alpha\pm 2\rho_{f}\|_{\mathbb{Z}}\geq\gamma\langle k\rangle^{-\tau},\
\forall k\in\mathbb{Z},\langle k\rangle=\max\\{1,|k|\\},$
then $(\alpha,A)$ is $C^{\infty}$ rotations reducible.
We point out that Theorem 1.2 is a semi-local result in the terminology of
[27], i.e., the smallness threshold $\varepsilon_{*}$ for the perturbation
does not depend on the frequency $\alpha$. One should not expect
$\varepsilon_{*}$ to be independent of $\rho_{f}$ (in terms of
$\gamma,\tau$), as this fails in the $C^{\infty}$ topology (and even in
Gevrey classes) [7]. In this connection, we mention another open problem of
Fayad-Krikorian [27]: is the semi-local version of the almost reducibility
conjecture true for cocycles in quasi-analytic classes? In the analytic
topology, it has been established in [30, 52].
The technical reason for introducing $\mathbf{(H1)}$ and $\mathbf{(H2)}$ is
the following. The proof of Theorem 1.2 is based on a non-standard KAM scheme
developed in [30, 40]. The key step is to prove that the homological equation
(1.2)
$\mathrm{e}^{2\mathrm{i}(2\pi\rho_{f}+\widetilde{g}(\cdot))}f(\cdot+\alpha)-f+h=0$
has a smooth approximate solution; see Section 4.1 for further discussion.
Here $\widetilde{g}(\cdot)$ comes from the perturbation; to ensure that (1.2)
has a smooth approximate solution, we need some control of all derivatives
$\|D^{s}\widetilde{g}(\cdot)\|_{C^{0}},s\in{\mathbb{N}},$ which is guaranteed
by $\mathbf{(H2)}$.
Next we give a short review of local reducibility results. The pioneering
result on local reducibility is due to Dinaburg-Sinai [24], who proved that
if $\alpha\in DC$ and $V$ is analytically small, then $(\alpha,S_{E}^{V})$ is
reducible for a majority of $E$. Eliasson [25] further proved that for
Lebesgue almost every $E$, $(\alpha,S_{E}^{V})$ is reducible. Note that these
two results are perturbative, i.e., the smallness of $V$ depends on the
Diophantine constants of $\alpha$. For reducibility results in other
topologies, one can consult [20, 22, 12] and the references therein.
If $\alpha$ is Liouvillean, based on the “algebraic conjugacy trick”
developed in [26], Avila-Fayad-Krikorian [2] proved that in the local regime
$(\alpha,S_{E}^{V})$ is reducible for a majority of $E$, thus generalizing
Dinaburg-Sinai’s theorem [24] to arbitrary one-frequency cocycles. The result
was also proved for analytic quasiperiodic linear systems by Hou-You in [30].
Later, Zhou-Wang [55] generalized the $SL(2,{\mathbb{R}})$ cocycle result [2]
to $GL(d,{\mathbb{R}})$ cocycles by a different method. Theorem 1.1 and
Theorem 1.2 can be seen as generalizations of [2] from analytic functions to
ultra-differentiable functions.
### 1.2. The spectral applications
We point out that global rigidity results in the analytic topology [5, 2]
have many important applications in the spectral theory of quasiperiodic
Schrödinger operators. To name a few, they were used to verify the
Schrödinger Conjecture [49] in the Liouvillean context [2], and they play an
essential role in solving “Last’s intersection spectrum conjecture” [35] and
the Aubry-André-Jitomirskaya conjecture [11]. With Theorem 1.2, one can prove
that the first two conjectures also hold for quasiperiodic operators with
M-ultra-differentiable potentials satisfying $\mathbf{(H1)}$ and
$\mathbf{(H2)}$.
#### 1.2.1. Schrödinger conjecture
The Schrödinger conjecture [49] says that, for general discrete Schrödinger
operators over uniquely ergodic base dynamics, all eigenfunctions are bounded
for almost every energy in the support of the absolutely continuous part of
the spectral measure. This conjecture has recently been disproved by Avila
[9]. However, it is still interesting to know to what extent the conjecture
is true. For example, the KAM scheme of [2] implies that the Schrödinger
conjecture is true in the quasiperiodic case with analytic potentials, and
this was the first time it was verified in a Liouvillean context. Indeed, as
pointed out by Jitomirskaya and Marx in [46] (page 2363 of [46]), addressing
the Schrödinger conjecture for quasiperiodic operators with potentials of
lower regularity still remains an open problem.
With Theorem 1.1, we can prove the Schrödinger conjecture for quasiperiodic
operators with M-ultra-differentiable potentials.
###### Corollary 1.
Let $\alpha\in{\mathbb{R}}\setminus\mathbb{Q}$ and
$V:\mathbb{T}\rightarrow{\mathbb{R}}$ be an M-ultra-differentiable function
satisfying $\mathbf{(H1)}$ and $\mathbf{(H2)}$. Then the Schrödinger
conjecture holds for $H_{V,\alpha,\theta}$.
#### 1.2.2. Last’s intersection spectrum conjecture
Denote
$\displaystyle
S_{-}(\beta)=\cap_{\theta\in{\mathbb{T}}}\Sigma_{ac}(\beta,\theta),$
where $\Sigma_{ac}(\beta,\theta)$ is the absolutely continuous spectrum of
(quasi)periodic Schrödinger operator $H_{V,\beta,\theta}$ defined by (1.1).
Any $\alpha\in{\mathbb{R}}\setminus{\mathbb{Q}}$ can be approximated by a
sequence of rational numbers $(p_{n}/q_{n}).$ It is well known that rational
frequency approximation is indispensable for numerical analysis, so the
existence of the limit of $S_{-}(p_{n}/q_{n})$ as
$p_{n}/q_{n}\rightarrow\alpha$ is crucial. A conjecture of Y. Last says that,
up to a set of zero Lebesgue measure, the absolutely continuous spectrum can
be obtained asymptotically from $S_{-}(p_{n}/q_{n})$, the spectra of the
periodic operators associated with the continued fraction expansion of
$\alpha.$
Jitomirskaya-Marx [35] settled “Last’s intersection spectrum conjecture” for
analytic quasiperiodic Schrödinger operators. They also pointed out in [35]
that the analyticity of the potential $V$ is essential for the proof of their
result, and that whether the analyticity can be relaxed without reducing the
range of frequencies for which the statement holds is an interesting open
problem (page 5 of [35]). In this work, we give a positive answer to this
problem for $\nu$-Gevrey potentials with $1/2<\nu\leq 1$. In the following,
we say two sets $A\doteq B$ if $\chi_{A}=\chi_{B}\ $
Lebesgue almost everywhere. Moreover, we say
$\lim_{n\rightarrow\infty}B_{n}\doteq B$ if
$\lim_{n\rightarrow\infty}\chi_{B_{n}}=\chi_{B}\ $ Lebesgue almost everywhere.
###### Theorem 1.3.
Let $\alpha\in{\mathbb{R}}\setminus\mathbb{Q}$ and
$V:\mathbb{T}\rightarrow{\mathbb{R}}$ be a $\nu$-Gevrey function with
$1/2<\nu\leq 1$. Then there is a sequence $p_{n}/q_{n}\rightarrow\alpha$ such
that
$\displaystyle\lim_{n\rightarrow\infty}S_{-}(p_{n}/q_{n})\doteq
S_{-}(\alpha)=\Sigma_{ac}(\alpha).$
###### Remark 1.2.
In fact, we will prove
(1.3)
$\Sigma_{ac}(\alpha)\subset\liminf_{n\rightarrow\infty}S_{-}(p_{n}/q_{n})$
for all M-ultra-differentiable potentials satisfying $\mathbf{(H1)}$ and
$\mathbf{(H2)}$ (Theorem 6.1). The Gevrey property only plays a role in
proving
(1.4)
$\limsup_{n\rightarrow\infty}S_{-}(p_{n}/q_{n})\subset\Sigma_{ac}(\alpha)$
for Diophantine frequencies (Theorem 6.2).
We briefly explain why analyticity is crucial for the proof in [35]. On the
one hand, the key to (1.3) is to prove that $E\in\Sigma_{ac}(\alpha)$ implies
exponentially small variation (in $q_{n}$) of the approximating discriminants
(the “generalized Chambers’ formula”). For analytic potentials, Jitomirskaya-
Marx [35] obtained this estimate as a corollary of Avila’s quantization of
acceleration [8], which can be defined only for analytic cocycles. On the
other hand, the proof of (1.4) was first obtained by Shamis [47] as a
corollary of the continuity of the Lyapunov exponent, namely that the
Lyapunov exponent $L(\beta+\cdot,\cdot)$: ${\mathbb{T}}\times
C^{\omega}({\mathbb{T}},SL(2,\mathbb{C}))$ is jointly continuous for any
irrational $\beta$ [16, 32, 36]. However, the Lyapunov exponent
$L(\beta+\cdot,\cdot)$: ${\mathbb{T}}\times
C^{\infty}({\mathbb{T}},SL(2,\mathbb{C}))$ is not continuous [50].
In fact, it was also pointed out by Jitomirskaya and Marx in [35] that
analyticity should not be essential for their results, although new methods
are needed in the non-analytic case. To generalize the result of [35] to
ultra-differentiable potentials, we have to overcome the difficulty caused by
the non-analyticity of the potential. One key issue is to prove the
“generalized Chambers’ formula” in the ultra-differentiable case. Instead of
using Avila’s quantization of acceleration [8], we use a perturbative
argument which avoids analyticity, showing that if the cocycle is smoothly
rotations reducible, then the $q$-step transfer matrices grow
sub-exponentially in $q$ (as pointed out in footnote 5 of [35], this idea was
first suggested by the fourth author after the first preprint of [35]). To do
this, we use inverse renormalization and a quantitative KAM result to show
that if the cocycle is almost reducible in the ultra-differentiable topology,
then we have good control of the growth of the $q$-step transfer matrices. As
for the proof of the second inclusion, the key is to prove that the Lyapunov
exponent is still continuous with respect to rational approximations of the
frequency for $\nu$-Gevrey potentials $V$ with $1/2<\nu<1$ and Diophantine
$\alpha$, which generalizes the results of [16]; see Theorem 6.3 for details.
Recently, Ge-Wang-You-Zhao [45] constructed counter-examples for $\nu$-Gevrey
potentials $V$ with $0<\nu<1/2$, which shows that Theorem 6.3 is optimal.
Finally, we review some related results. For general ergodic discrete
Schrödinger operators, the relation between the absolutely continuous
spectrum and the spectra of certain periodic approximants was studied by Last
in [41, 42]; more precisely, [42] essentially proved that for $V\in
C^{1}({\mathbb{T}})$ and a.e. $\alpha$,
$\limsup_{n\rightarrow\infty}S_{-}(p_{n}/q_{n})\subset\Sigma_{ac}(\alpha)$ up
to sets of zero Lebesgue measure. The conjecture is known for the almost
Mathieu operator, where $V(\theta)=2\lambda\cos\theta$ ([10, 43] for a.e.
$\alpha$, $\lambda$, and [5, 31, 42, 35] extending to all $\alpha$). More
recently, the conjecture was settled for a.e. $\alpha$ and sufficiently
smooth potentials by Zhao [54].
### 1.3. The structure of this paper
The paper is organized as follows. In Section 2 we give some definitions and
preliminaries. Before proving Theorem 1.2, we first derive condition
$\mathbf{(A)}$ on Fourier coefficients from the assumptions $\mathbf{(H1)}$
and $\mathbf{(H2)}$ on Taylor coefficients (Lemma 3.3) in Section 3. We then
prove Theorem 1.2 in Section 4 and Theorem 1.1 in Section 5. The proof of
Theorem 1.3 is given in Section 6, based on Theorem 6.1 and Theorem 6.2. In
Section 7 we prove the Generalized Chambers’ formula (Proposition 3), and in
Section 8 we prove the joint continuity of the Lyapunov exponent (Theorem
6.3); these two results are the basis of the proofs of Theorem 6.1 and
Theorem 6.2, respectively.
## 2\. Definitions and preliminaries
### 2.1. Quasiperiodic cocycles
Given $A\in C^{0}({\mathbb{T}},SL(2,{\mathbb{R}}))$ and
$\alpha\in{\mathbb{R}}\setminus{\mathbb{Q}}$, the iterates of $(\alpha,A)$ are
of the form $(\alpha,A)^{n}=(n\alpha,A_{n})$, where
$A_{n}(\cdot):=\left\\{\begin{array}[]{ll}A(\cdot+(n-1)\alpha)\cdots
A(\cdot+\alpha)A(\cdot),&n\geq 0\\\\
A^{-1}(\cdot+n\alpha)A^{-1}(\cdot+(n+1)\alpha)\cdots
A^{-1}(\cdot-\alpha),&n<0\end{array}\right..$
Define the finite Lyapunov exponent as
$L_{n}(\alpha,A)=\frac{1}{n}\int_{\mathbb{T}}\ln\|A_{n}(\theta)\|d\theta,$
then by Kingman’s subadditive ergodic theorem, the Lyapunov exponent of
$(\alpha,A)$ is defined as
$\begin{split}L(\alpha,A)=\lim_{n\rightarrow\infty}L_{n}(\alpha,A)=\inf_{n>0}L_{n}(\alpha,A)\geq
0.\end{split}$
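These quantities are straightforward to approximate numerically. The sketch below (our illustration; the grid average stands in for the integral over $\mathbb{T}$) computes $L_{n}(\alpha,A)$ by forming the matrix products $A_{n}(\theta)$:

```python
import numpy as np

def finite_lyapunov(alpha, A, n, grid=200):
    """L_n(alpha, A) = (1/n) * int_T ln||A_n(theta)|| dtheta, with the
    integral replaced by an average over a uniform grid on [0, 1)."""
    total = 0.0
    for theta in np.linspace(0.0, 1.0, grid, endpoint=False):
        M = np.eye(2)
        for j in range(n):
            M = A((theta + j * alpha) % 1.0) @ M   # A_n = A(.+(n-1)a)...A(.)
        total += np.log(np.linalg.norm(M, 2))      # spectral norm
    return total / (n * grid)
```

For a constant hyperbolic matrix $\mathrm{diag}(2,1/2)$ this returns $\ln 2$ for every $n$, while for a constant rotation it returns $0$, consistent with $L(\alpha,A)=\inf_{n>0}L_{n}(\alpha,A)\geq 0$.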
The cocycle $(\alpha,A)$ is called uniformly hyperbolic if there exists a
continuous splitting $E_{s}(\theta)\oplus E_{u}(\theta)=\mathbb{R}^{2},$ and
$C>0,0<\lambda<1,$ such that for every $n\geq 1$ we have
$\begin{split}\|A_{n}(\theta)w\|\leq C\lambda^{n}\|w\|,\ \forall w\in
E_{s}(\theta),\\\ \|A_{-n}(\theta)w\|\leq C\lambda^{n}\|w\|,\ \forall w\in
E_{u}(\theta).\end{split}$
Assume now $A\in C^{0}({\mathbb{T}},SL(2,{\mathbb{R}}))$ is homotopic to the
identity, then there exist
$\psi:{\mathbb{T}}\times{\mathbb{T}}\to{\mathbb{R}}$ and
$u:{\mathbb{T}}\times{\mathbb{T}}\to{\mathbb{R}}^{+}$ such that
$\begin{split}A(x)\cdot\left(\begin{matrix}\cos 2\pi y\\\ \sin 2\pi
y\end{matrix}\right)=u(x,y)\left(\begin{matrix}\cos 2\pi(y+\psi(x,y))\\\ \sin
2\pi(y+\psi(x,y))\end{matrix}\right).\end{split}$
The function $\psi$ is called a lift of $A$. Let $\mu$ be any probability
measure on ${\mathbb{T}}\times{\mathbb{T}}$ which is invariant by the
continuous map $T:(x,y)\mapsto(x+\alpha,y+\psi(x,y))$, projecting over
Lebesgue measure on the first coordinate (for instance, take $\mu$ as any
accumulation point of $\frac{1}{n}\sum_{k=0}^{n-1}T_{*}^{k}\nu$ where $\nu$ is
Lebesgue measure on ${\mathbb{T}}\times{\mathbb{T}}$). Then the number
$\begin{split}\rho(\alpha,A)=\int\psi d\mu\mod{\mathbb{Z}}\end{split}$
does not depend on the choices of $\psi$ and $\mu$ and is called the fibered
rotation number of $(\alpha,A)$, see [33] and [29]. It is immediate from the
definition that
(2.1) $|\rho(\alpha,A)-\rho|\leq\|A-R_{\rho}\|_{C^{0}}.$
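For cocycles homotopic to the identity whose projective angle increments stay below $1/2$ in absolute value along the orbit, the fibered rotation number can be estimated by a Birkhoff average of the lift. This is a heuristic numerical sketch of ours, not a construction from the text:

```python
import numpy as np

def rotation_number(alpha, A, n, theta=0.0, phi=0.1):
    """Estimate rho(alpha, A) by averaging the lifted angle increments
    psi(x, y) along one orbit of T: (x, y) -> (x + alpha, y + psi(x, y))."""
    total = 0.0
    for k in range(n):
        w = A((theta + k * alpha) % 1.0) @ np.array(
            [np.cos(2 * np.pi * phi), np.sin(2 * np.pi * phi)])
        new_phi = np.arctan2(w[1], w[0]) / (2 * np.pi)
        inc = new_phi - phi
        inc -= round(inc)      # lift increment, assumed to lie in (-1/2, 1/2)
        total += inc
        phi = new_phi
    return (total / n) % 1.0
```

For the constant rotation $A=R_{\rho}$ every increment equals $\rho$, so the estimate recovers $\rho$, in line with (2.1).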
### 2.2. Continued fraction expansion
Let $\alpha\in(0,1)$ be irrational. Define $a_{0}=0,\alpha_{0}=\alpha,$ and
inductively for $k\geq 1$,
$a_{k}=[\alpha_{k-1}^{-1}],\qquad\alpha_{k}=\alpha_{k-1}^{-1}-a_{k}=G(\alpha_{k-1})=\\{\alpha_{k-1}^{-1}\\},$
where $G(\cdot)$ is the Gauss map. Let $p_{0}=0,p_{1}=1,q_{0}=1,q_{1}=a_{1},$
then we define inductively $p_{k}=a_{k}p_{k-1}+p_{k-2}$,
$q_{k}=a_{k}q_{k-1}+q_{k-2}.$ The sequence $(q_{n})$ is the denominators of
best rational approximations of $\alpha$ since we have
$\begin{split}\|k\alpha\|_{\mathbb{Z}}\geq\|q_{n-1}\alpha\|_{\mathbb{Z}},\quad\forall\,\,1\leq
k<q_{n},\end{split}$
and
$\begin{split}(q_{n}+q_{n+1})^{-1}<\|q_{n}\alpha\|_{\mathbb{Z}}\leq
q_{n+1}^{-1}.\end{split}$
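As a purely illustrative aside (not part of the argument), the recurrences above can be checked numerically. The sketch below iterates the Gauss map to produce the partial quotients $a_{k}$ and the convergents $p_{k}/q_{k}$; the function name is our own. For the golden mean $\alpha=(\sqrt{5}-1)/2$ every $a_{k}=1$, so $(q_{n})$ is the Fibonacci sequence.

```python
import math

def continued_fraction(alpha, n):
    """Partial quotients a_0..a_n of alpha in (0,1), together with the
    convergent numerators p_k and denominators q_k, following
    a_k = [alpha_{k-1}^{-1}], alpha_k = G(alpha_{k-1}) = {alpha_{k-1}^{-1}},
    p_k = a_k p_{k-1} + p_{k-2},  q_k = a_k q_{k-1} + q_{k-2}."""
    a, x = [0], alpha              # a_0 = 0, alpha_0 = alpha
    for _ in range(n):
        ak = math.floor(1.0 / x)   # a_k = integer part of 1/alpha_{k-1}
        x = 1.0 / x - ak           # Gauss map: fractional part of 1/alpha_{k-1}
        a.append(ak)
    p, q = [0, 1], [1, a[1]]       # p_0 = 0, p_1 = 1, q_0 = 1, q_1 = a_1
    for k in range(2, n + 1):
        p.append(a[k] * p[k - 1] + p[k - 2])
        q.append(a[k] * q[k - 1] + q[k - 2])
    return a, p, q
```

One can also observe numerically that $|q_{n}\alpha-p_{n}|$ decreases in $n$, reflecting that the $q_{n}$ are best-approximation denominators.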
For sequence $(q_{n})$, we will fix a particular subsequence $(q_{n_{k}})$ of
the denominators of the best rational approximations for $\alpha,$ which for
simplicity will be denoted by $(Q_{k})$. Denote the sequences $(q_{n_{k}+1})$
and $(p_{n_{k}})$ by $(\overline{Q}_{k})$ and $(P_{k}),$ respectively. Next,
we recall the concept of a CD bridge, introduced in [2].
###### Definition 1 (CD bridge,[2]).
Let $0<\mathbb{A}\leq\mathbb{B}\leq\mathbb{C}$. We say that the pair of
denominators $(q_{m},q_{n})$ forms a $CD(\mathbb{A},\mathbb{B},\mathbb{C})$
bridge if
$\begin{split}&\bullet\,\,q_{i+1}\leq q_{i}^{\mathbb{A}},\quad
i=m,\cdots,n-1,\\\ &\bullet\,\,q_{m}^{\mathbb{C}}\geq q_{n}\geq
q_{m}^{\mathbb{B}}.\end{split}$
###### Lemma 2.1.
_[2]_ For any $\mathbb{A}\geq 1$, there exists a subsequence $(Q_{k})$ of
$(q_{n})$ such that $Q_{0}=1$ and for each $k\geq 0,$
$Q_{k+1}\leq\overline{Q}_{k}^{\mathbb{A}^{4}}$, either $\overline{Q}_{k}\geq
Q_{k}^{\mathbb{A}}$, or the pairs $(\overline{Q}_{k-1},Q_{k})$ and
$(Q_{k},Q_{k+1})$ are both $CD(\mathbb{A},\mathbb{A},\mathbb{A}^{3})$ bridges.
Set $\tau>1$ and $\mathbb{A}>\tau+23>24$; then for $\\{\overline{Q}_{n}\\}_{n\geq 0},$ the subsequence selected in Lemma 2.1, we have the following lemma.
###### Lemma 2.2.
For $\\{\overline{Q}_{n}\\}_{n\geq 0},$ we have
$\begin{split}\overline{Q}_{n+1}\geq\overline{Q}_{n}^{\mathbb{A}},\ \forall
n\geq 0.\end{split}$
###### Proof.
$\mathbf{Case\ one:}$ $\overline{Q}_{n+1}\geq Q_{n+1}^{\mathbb{A}}.$ Obviously
$\overline{Q}_{n+1}\geq
Q_{n+1}^{\mathbb{A}}\geq\overline{Q}_{n}^{\mathbb{A}}.$
$\mathbf{Case\ two:}$ $\overline{Q}_{n+1}<Q_{n+1}^{\mathbb{A}}.$ In this case
we know that $(\overline{Q}_{n},Q_{n+1})$ forms a CD
$(\mathbb{A},\mathbb{A},\mathbb{A}^{3})$ bridge. Thus
$Q_{n+1}\geq\overline{Q}_{n}^{\mathbb{A}},$ which implies
$\overline{Q}_{n+1}\geq\overline{Q}_{n}^{\mathbb{A}}.$
∎
### 2.3. Renormalization
In this subsection we recall the notations and definitions concerning renormalization from [5, 26, 6].
#### 2.3.1. ${\mathbb{Z}}^{2}-$actions
Consider the cocycle $(\alpha,A)\in(0,1)\setminus\mathbb{Q}\times
U_{r}^{M}(\mathbb{T},SL(2,\mathbb{R}))$ and set
$\beta_{n}=\Pi_{l=0}^{n}\alpha_{l}=(-1)^{n}(q_{n}\alpha-
p_{n})=(q_{n+1}+\alpha_{n+1}q_{n})^{-1},$ where $\alpha_{n}=G^{n}(\alpha)$.
Let $\Omega^{r}={\mathbb{R}}\times U_{r}^{M}({\mathbb{R}},SL(2,{\mathbb{R}}))$
be the subgroup of Diff$({\mathbb{R}}\times
U_{r}^{M}({\mathbb{R}},SL(2,{\mathbb{R}})))$ made of skew-product
diffeomorphisms $(\alpha,A)\in{\mathbb{R}}\times
U_{r}^{M}({\mathbb{R}},SL(2,{\mathbb{R}})).$
A $U_{r}^{M}$ fibered ${\mathbb{Z}}^{2}-$action is a homomorphism
$\Phi:{\mathbb{Z}}^{2}\rightarrow\Omega^{r}.$ We denote by $\Lambda^{r}$ the
space of such actions, and denote $\Phi=(\Phi(1,0),\Phi(0,1))$ for short. Let
$\Pi_{1}:{\mathbb{R}}\times
U_{r}^{M}({\mathbb{R}},SL(2,{\mathbb{R}}))\rightarrow{\mathbb{R}},$
$\Pi_{2}:{\mathbb{R}}\times
U_{r}^{M}({\mathbb{R}},SL(2,{\mathbb{R}}))\rightarrow
U_{r}^{M}({\mathbb{R}},SL(2,{\mathbb{R}}))$ be the coordinate projections. Let
also $\gamma_{n,m}^{\Phi}=\Pi_{1}\circ\Phi(n,m)$ and
$A_{n,m}^{\Phi}=\Pi_{2}\circ\Phi(n,m).$
Two fibered ${\mathbb{Z}}^{2}$ actions $\Phi$, $\Phi^{\prime}$ are said to be
conjugate if there exists a smooth map $B:{\mathbb{R}}\rightarrow
SL(2,\mathbb{R})$ such that
$\Phi^{\prime}(n,m)=(0,B)\circ\Phi(n,m)\circ(0,B)^{-1},\qquad\forall(n,m)\in{\mathbb{Z}}^{2}.$
That is
$A_{n,m}^{\Phi^{\prime}}(\cdot)=B(\cdot+\gamma_{n,m}^{\Phi})A_{n,m}^{\Phi}(\cdot)B(\cdot)^{-1},\qquad\gamma_{n,m}^{\Phi^{\prime}}=\gamma_{n,m}^{\Phi}.$
We denote $\Phi^{\prime}=\mathrm{Conj_{B}}(\Phi)$ for short. We say that an
action is normalized if $\Phi(1,0)=(1,Id),$ and in that case, if
$\Phi(0,1)=(\alpha,A),$ the map $A\in
U_{r}^{M}({\mathbb{R}},SL(2,{\mathbb{R}}))$ is clearly
${\mathbb{Z}}-$periodic.
For any $M$-ultra-differentiable function
$f:{\mathbb{R}}\rightarrow{\mathbb{R}}$ (not necessarily periodic), one can also
define
$\displaystyle\|f\|_{r,T}=c\sup_{s\in\mathbb{N}}\big{(}(1+s)^{2}r^{s}\|D_{\theta}^{s}f(\theta)\|_{C^{0}([0,T])}M_{s}^{-1}\big{)}\
,c=4\pi^{2}/3.$
If $f:{\mathbb{T}}\rightarrow{\mathbb{R}}$ is periodic, we also denote
$\|f\|_{r,1}=\|f\|_{M,r}$.
###### Lemma 2.3.
_(Lemma 2 of [26])_ If $\Phi\in\Lambda^{r}$ with $\gamma_{1,0}^{\Phi}=1,$ then
there exists $B\in U_{r}^{M}(\mathbb{R},SL(2,\mathbb{R}))$ and a normalized
action $\widetilde{\Phi}$ such that
$\widetilde{\Phi}=\mathrm{Conj_{B}}(\Phi)$. Moreover, for any
$T\in{\mathbb{R}}^{+}$, we have the estimates
$\displaystyle\|B-Id\|_{rK_{*}^{-1},1}$ $\displaystyle\leq$
$\displaystyle\|\Phi(1,0)-Id\|_{r,1},$ $\displaystyle\|B\|_{r(K_{*}T)^{-1},T}$
$\displaystyle\leq$ $\displaystyle\|\Phi(1,0)\|_{r,T}^{T+1},\quad\forall
T\in{\mathbb{R}}^{+},$
where $K_{*}$ is an absolute constant.
#### 2.3.2. Renormalization of actions
Following [5, 26, 6], we introduce the scheme of renormalization of
${\mathbb{Z}}^{2}$ actions.
Fix $\lambda\neq 0$ and define $M_{\lambda}:\Lambda^{r}\rightarrow\Lambda^{r}$ by
$M_{\lambda}(\Phi)(n,m):=(\lambda^{-1}\gamma_{n,m}^{\Phi},A_{n,m}^{\Phi}(\lambda\cdot)).$
Let $\theta_{*}\in{\mathbb{R}}.$ Define
$T_{\theta_{*}}:\Lambda^{r}\rightarrow\Lambda^{r}$ by
$T_{\theta_{*}}(\Phi)(n,m):=(\gamma_{n,m}^{\Phi},A_{n,m}^{\Phi}(\cdot+\theta_{*})).$
Let $U\in GL(2,{\mathbb{R}}).$ Define
$N_{U}:\Lambda^{r}\rightarrow\Lambda^{r}$ by
$N_{U}(\Phi)(n,m):=\Phi(n^{\prime},m^{\prime}),\ \text{where}\
\Big{(}\begin{matrix}n^{\prime}\\\
m^{\prime}\end{matrix}\Big{)}=U^{-1}\Big{(}\begin{matrix}n\\\
m\end{matrix}\Big{)}.$
Let $\widetilde{Q}_{n}=\Big{(}\begin{matrix}q_{n},&p_{n}\\\
q_{n-1},&p_{n-1}\end{matrix}\Big{)},$ and define for $n\in{\mathbb{N}}$ and
$\theta_{*}\in{\mathbb{R}}$ the renormalized actions
$\mathcal{R}^{n}(\Phi):=M_{\beta_{n-1}}\circ N_{\widetilde{Q}_{n}}(\Phi),\
\mathcal{R}_{\theta_{*}}^{n}(\Phi):=T_{\theta_{*}}^{-1}\big{[}\mathcal{R}^{n}(T_{\theta_{*}}(\Phi))\big{]}.$
For any given cocycle $(\alpha,A)$ with
$\alpha\in\mathbb{R}\setminus\mathbb{Q}$, we set $\Phi=((1,Id),(\alpha,A)).$
Then by the definitions of the operators above, we get
$\begin{split}\mathcal{R}_{\theta_{*}}^{n}(\Phi)=((1,A^{(n,0)}),(\alpha_{n},A^{(n,1)})),\end{split}$
where
$\begin{split}A^{(n,0)}(\theta)&=A_{(-1)^{n-1}q_{n-1}}(\theta_{*}+\beta_{n-1}(\theta-\theta_{*})),\\\
A^{(n,1)}(\theta)&=A_{(-1)^{n}q_{n}}(\theta_{*}+\beta_{n-1}(\theta-\theta_{*})).\end{split}$
Thus $A^{(n,0)}$ and $A^{(n,1)}$ are $\beta_{n-1}^{-1}-$periodic and can be
regarded as cocycles over the dynamics on $\mathbb{R}$ given by
$\theta\mapsto\theta+1$ and $\theta\mapsto\theta+\alpha_{n}$. It is easy to
see that
$A^{(n,1)}(\theta+1)A^{(n,0)}(\theta)=A^{(n,0)}(\theta+\alpha_{n})A^{(n,1)}(\theta),$
which expresses the commutation of the cocycles. Based on this fact, there exists a normalizing map $D_{n}$ such that
$\displaystyle D_{n}(\theta+1)A^{(n,0)}(\theta)D_{n}(\theta)^{-1}$
$\displaystyle=$ $\displaystyle Id,$ $\displaystyle
D_{n}(\theta+\alpha_{n})A^{(n,1)}(\theta)D_{n}(\theta)^{-1}$ $\displaystyle=$
$\displaystyle A^{(n)}(\theta),$
which satisfies $A^{(n)}(\theta+1)=A^{(n)}(\theta).$ Thus $A^{(n)}$ can be
seen as an element of $C^{0}(\mathbb{T},SL(2,\mathbb{R})),$ and
$(\alpha_{n},A^{(n)})$ is called a representative of the $n$-th
renormalization of $(\alpha,A).$
#### 2.3.3. Convergence of the renormalized actions
The following result on the convergence of renormalized actions is essentially contained in [5, 26, 6], which deal with cocycles in the $C^{\ell}$ setting with $\ell\in\mathbb{N}$ and $\ell=\infty,\omega.$ For completeness, we sketch the proof in the ultra-differentiable setting.
###### Proposition 1 ([5, 26, 6]).
Suppose that $(\alpha,A)\in(0,1)\setminus\mathbb{Q}\times
U_{r}(\mathbb{T},SL(2,\mathbb{R}))$. If $(\alpha,A)$ is $L^{2}$-conjugated to
rotations and homotopic to the identity, then for almost every
$\theta_{*}\in\mathbb{R},$ there exists $D_{n}\in
U_{r/K_{*}^{2}}(\mathbb{R},SL(2,\mathbb{R}))$ with
(2.2) $\|D_{n}\|_{r/(K_{*}^{2}T),T}\leq C^{q_{n-1}(T+1)},$
such that
$\displaystyle\mathrm{Conj_{D_{n}}}(\mathcal{R}_{\theta_{*}}^{n}(\Phi))=((1,Id),(\alpha_{n},\
R_{\rho_{n}}\mathrm{e}^{F_{n}})),$
with $\|F_{n}\|_{r/K_{*}^{2},1}\rightarrow 0,$ where $K_{*}$ is an absolute
constant defined in Lemma 2.3.
###### Proof.
We first prove that $\\{A^{(n,i)}(\theta)\\}_{n\geq 0},i=1,2,$ are precompact in
$U_{r/K_{*}}^{M}.$ Indeed, Theorem 5.1 of [5] shows that for any
$(\alpha,A)\in(0,1)\setminus\mathbb{Q}\times
C^{s}(\mathbb{T},SL(2,\mathbb{R}))$, if it is $L^{2}$-conjugated to rotations,
then for almost every $\theta_{*}\in\mathbb{R},$ there exists $K_{*}>0$ such
that for every $d>0$ and for every $n>n_{0}(d),$
$\displaystyle\|\partial^{\ell}A^{(n,i)}(\theta)\|\leq
K_{*}^{\ell+1}\|A(\theta)\|_{C^{s}},\ i=1,2,\ 0\leq\ell\leq s,\
|\theta-\theta_{*}|<d/n,$
which implies that
$\displaystyle\|A^{(n,i)}(\theta)\|_{C^{s}}\leq
2K_{*}^{s+1}\|A(\theta)\|_{C^{s}},\ i=1,2,$
therefore, by the definition of norms of ultra-differentiable functions, we
have
$\displaystyle\|A^{(n,i)}(\theta)\|_{r/K_{*},1}\leq
2K_{*}\|A(\theta)\|_{r,1}<\infty,\ i=1,2.$
That is, the sequences $\\{A^{(n,i)}(\theta)\\}_{n\geq 0},i=1,2,$ are uniformly bounded in $U_{r/K_{*}},$ which implies that $\\{A^{(n,i)}(\theta)\\}_{n\geq 0},i=1,2,$ are precompact in $U_{r/K_{*}}^{M}.$
Assume $B\in L^{2}({\mathbb{T}},SL(2,{\mathbb{R}}))$ is the conjugation such
that
$B(\theta+\alpha)A(\theta)B(\theta)^{-1}\in SO(2,{\mathbb{R}}),$
consequently, by Theorem 4.3 and Theorem 4.4 in [6], we have
$\mathcal{R}_{\theta_{*}}^{n}(\mathrm{Conj_{B(\theta_{*})}}(\Phi))=((1,\widetilde{C}_{n}^{(1)}(\theta)),(\alpha_{n},\widetilde{C}^{(2)}_{n}(\theta)))$
with
$\widetilde{C}_{n}^{(i)}=R_{\rho_{n}}\mathrm{e}^{U_{n}^{(i)}(\theta)},i=1,2,$
and
$\displaystyle\|U_{n}^{(i)}(\theta)\|_{r/K_{*},1}\rightarrow 0,\
i=1,2,\text{if}\ n\rightarrow\infty,|\theta-\theta_{*}|\leq d/n,n\geq
n_{0}(d).$
Using Lemma 2.3, there is a normalizing conjugation $\widetilde{D}_{n}$ which is close to the identity in the $\|\cdot\|_{rK_{*}^{-2},1}$-topology such that
$\widetilde{D}_{n}(\theta+1)\widetilde{C}_{n}^{(1)}(\theta)\widetilde{D}_{n}(\theta)^{-1}=Id.$
Denote $D_{n}=\widetilde{D}_{n}B$; then the action
$\mathrm{Conj_{D_{n}}}(\mathcal{R}_{\theta_{*}}^{n}(\Phi))$ is of form
$((1,Id),(\alpha_{n},\ R_{\rho_{n}}\mathrm{e}^{F_{n}}))$ with
$\|F_{n}\|_{rK_{*}^{-2},1}\rightarrow 0.$ Moreover, for any
$T\in{\mathbb{R}}^{+}$, by Lemma 2.3 we get
$\displaystyle\|\widetilde{D}_{n}\|_{r/(K_{*}^{2}T),T}$
$\displaystyle\leq\|\widetilde{C}_{n}^{(1)}\|_{r/K_{*},T}^{T+1}\leq\|B(\theta_{*})\|^{2(T+1)}\|A^{(n,0)}\|_{r,T}^{T+1}$
$\displaystyle\leq\|B(\theta_{*})\|^{2(T+1)}\|A\|_{r,1}^{q_{n-1}(T+1)}\leq
C^{q_{n-1}(T+1)},$
then (2.2) follows directly. ∎
## 3\. Ultra-differentiable functions
As we introduced, one way to define the modulus of ultra-differentiable
functions is by the growth of $D^{s}f$. For a periodic function $f\in C^{\infty}({\mathbb{T}},{\mathbb{R}}),$ an alternative way is to define the modulus of ultra-differentiability by the decay rate of its Fourier coefficients. Attached to the sequence $(M_{s})_{s\in{\mathbb{N}}}$, we can
define $\Lambda:[0,\ \infty)\rightarrow[0,\ \infty)$ by
(3.1)
$\Lambda(y):=\ln\big{(}\sup_{s\in\mathbb{N}}y^{s}M_{s}^{-1}\big{)}=\sup_{s\in\mathbb{N}}(s\ln
y-\ln M_{s}).$
This defines a function $\Lambda:[0,\infty)\rightarrow[0,\infty),$ which is
continuous, constant equal to zero for $y\leq 1$ and strictly increasing for
$y\geq 1$ (see [13, 19] or [48]).
For any $f\in U_{r}^{M}(\mathbb{T},\mathbb{R}),$ write it as
$f(\theta)=\sum_{k\in\mathbb{Z}}\widehat{f}(k)\mathrm{e}^{2\pi\mathrm{i}k\theta}$,
one easily checks that $\Lambda$ controls the decay of the Fourier
coefficients in the sense that
(3.2) $|\widehat{f}(k)|\leq\|f\|_{M,r}\exp\\{-\Lambda(|2\pi k|r)\\},\ \forall
k\in\mathbb{Z}.$
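For a quick numerical sanity check of (3.1) (purely illustrative; the Gevrey-type weight $M_{s}=(s!)^{2}$ below is our own choice, not one fixed by the paper), one can truncate the supremum:

```python
import math

def Lambda(y, log_M, smax=200):
    """Lambda(y) = sup_s (s*ln y - ln M_s), truncated at s <= smax.
    log_M(s) returns ln(M_s); we assume M_s grows fast enough that the
    supremum is attained for some s <= smax."""
    if y <= 1.0:
        return 0.0  # the s = 0 term dominates and equals 0
    return max(s * math.log(y) - log_M(s) for s in range(smax + 1))

# Illustrative Gevrey-type weight: M_s = (s!)^2, so ln M_s = 2 ln(s!)
gevrey2 = lambda s: 2.0 * math.lgamma(s + 1)
```

One checks numerically that $\Lambda$ vanishes for $y\leq 1$ and grows with $y$, in line with the properties recorded above.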
For periodic functions, using $\Lambda$ is more natural and convenient. We now derive some properties of $\Lambda$ for $f\in U_{r}^{M}(\mathbb{T},\mathbb{R})$ from $\mathbf{(H1)}$ and $\mathbf{(H2)},$ which will form the basis of our whole proof of the KAM scheme.
###### Lemma 3.1 (Proposition 10 of [13]).
Let $f,g\in U_{r}^{M}(\mathbb{T},\mathbb{R})$ with $M$ satisfying
$\mathbf{(H1)}.$ Then $f\cdot g\in U_{r}^{M}(\mathbb{T},\mathbb{R}),$ and we
have
$\displaystyle\|f\cdot g\|_{M,r}\leq\|f\|_{M,r}\|g\|_{M,r}.$
###### Remark 3.1.
As explained in [13], the role of the normalizing constant $c>0$ in the
definition of $\|f\|_{M,r}$ is to ensure that
$U_{r}^{M}(\mathbb{T},\mathbb{R})$ forms a standard Banach algebra with
respect to multiplication.
###### Lemma 3.2 (Proposition 8 of [13]).
Let $f\in U_{r}^{M}(\mathbb{T},\mathbb{R})$ with $M$ satisfying
$\mathbf{(H2)}.$ Then $\partial f\in U_{r/2}^{M}(\mathbb{T},\mathbb{R})$ with
$\displaystyle\|\partial f\|_{r/2}\leq C_{M}r^{-1}\|f\|_{r},$
where
$\displaystyle
C_{M}:=\sup_{s\in\mathbb{N}}\\{2^{-s}M_{s+1}M_{s}^{-1}\\}<\infty.$
###### Lemma 3.3.
$\mathbf{(H1)}$ and $\mathbf{(H2)}$ imply that there exists $\Gamma:[1,\
\infty)\rightarrow\mathbb{R}^{+}$ such that the following hold:
$\displaystyle\mathbf{(A):}\left\\{\begin{array}[]{l}\mathrm{(I)}:\lim_{x\rightarrow\infty}\Gamma(x)=\infty,\\\
\mathrm{(II)}:\Gamma(x)\ln x\ \mathrm{is}\ \mathrm{non-decreasing},\\\
\mathrm{(III)}:\Lambda(y)-\Lambda(x)\geq(\ln y-\ln x)\Gamma(x)\ln x,\ \forall
y>x\geq 1.\end{array}\right.$
###### Proof.
For any $x\geq 1,$ we select $s(x)\in\mathbb{N}$ as the one such that
(3.3)
$\begin{split}\Lambda(x)=\sup_{s\in\mathbb{N}}\ln(x^{s}M_{s}^{-1})=\ln(x^{s(x)}M_{s(x)}^{-1}).\end{split}$
###### Claim 1.
The function $s(x)\in\mathbb{N}$ is well-defined, non-decreasing with
(3.4) $\lim_{x\rightarrow\infty}s(x)(\ln x)^{-1}=\infty.$
###### Proof.
It is quite standard that $s(x)\in\mathbb{N}$ is well-defined; we first prove that $s(x)$ is non-decreasing. By the definition of $s(x)$, we have
$\begin{split}x^{s(x)}M_{s(x)}^{-1}\geq x^{s(x)+1}M_{s(x)+1}^{-1},\
x^{s(x)}M_{s(x)}^{-1}\geq x^{s(x)-1}M_{s(x)-1}^{-1},\end{split}$
which implies
(3.5) $\begin{split}M_{s(x)}/M_{s(x)-1}\leq x\leq
M_{s(x)+1}/M_{s(x)}.\end{split}$
Assume that there exist $y>x\geq 1$ such that $s(y)<s(x).$ The fact that
$s(\cdot)\in{\mathbb{N}}$ implies $s(y)+1\leq s(x).$ First, by (3.5) we get
$\begin{split}y\leq M_{s(y)+1}/M_{s(y)},\quad x\geq
M_{s(x)}/M_{s(x)-1}.\end{split}$
However, by $\mathbf{(H1)}$ we know that
$\\{M_{\ell+1}/M_{\ell}\\}_{\ell\in\mathbb{N}}$ is increasing, which together
with $s(y)+1\leq s(x),$ implies that
$\begin{split}y\leq M_{s(y)+1}/M_{s(y)}\leq M_{s(x)}/M_{s(x)-1}\leq
x,\end{split}$
this contradicts the assumption $y>x.$ Thus $s(x)$ is non-decreasing.
By (3.5), we have
$\begin{split}s^{-1}(x)\ln(M_{s(x)}/M_{s(x)-1})\leq s^{-1}(x)\ln x\leq
s^{-1}(x)\ln(M_{s(x)+1}/M_{s(x)}),\end{split}$
then (3.4) follows from the assumption $\mathbf{(H2)}$. ∎
Let $\Gamma(x)=s(x)(\ln x)^{-1}$. Then (3.4) implies $\mathrm{(I)}.$ Moreover,
note $\Gamma(x)\ln x=s(x),$ which together with the fact $s(x)\in\mathbb{N}$
is non-decreasing, implies $\mathrm{(II)}.$
Now we prove $\mathrm{(III)}$. For any $y>x$, by the fact $s(x)\in\mathbb{N}$
is non-decreasing, we can distinguish the proof into two cases:
$\textbf{Case 1}:s(y)=s(x).$ By the definitions of $\Lambda(x)$ and $s(x)$, we
get
$\begin{split}\Lambda(y)-\Lambda(x)=(\ln y-\ln x)s(x)=(\ln y-\ln
x)\Gamma(x)\ln x.\end{split}$
$\mathbf{Case\ 2}:\ s(y)\geq s(x)+1.$ The inequality on the left-hand side of (3.5) and the fact that $\\{M_{s+1}/M_{s}\\}_{s\in\mathbb{N}}$ is increasing imply
$\begin{split}y\geq M_{s(y)}/M_{s(y)-1}\geq M_{s(x)+1}/M_{s(x)},\end{split}$
that is $\ln y\geq\ln M_{s(x)+1}-\ln M_{s(x)}.$ Together with the definitions
of $\Lambda(x)$ and $s(x)$, it yields
$\begin{split}\Lambda(y)-\Lambda(x)&\geq\ln(y^{s(x)+1}M_{s(x)+1}^{-1})-\ln(x^{s(x)}M_{s(x)}^{-1})\\\
&=(\ln y-\ln x)s(x)+\ln y-(\ln M_{s(x)+1}-\ln M_{s(x)})\\\ &\geq(\ln y-\ln
x)s(x)=(\ln y-\ln x)\Gamma(x)\ln x.\end{split}$
We thus finish the whole proof. ∎
###### Remark 3.2.
To give a heuristic understanding of the function $\Gamma$, assume that $\Lambda$ is differentiable; then by (3.3) we can rewrite it as
$\Gamma(x)=x\Lambda^{\prime}(x)(\ln x)^{-1}.$
Now if we fix $\Lambda(x)=(\ln x)^{\delta},$ then
$\Gamma(x)=x\Lambda^{\prime}(x)(\ln x)^{-1}=\delta(\ln x)^{\delta-2}$ and
$\mathrm{(I)}$ and $\mathrm{(II)}$ are equivalent to
$\delta(\ln x)^{\delta-2}\to+\infty$
and $\delta(\ln x)^{\delta-1}$ is non-decreasing, which means $\delta>2.$
For a function $f\in C^{\infty}(\mathbb{T},\mathbb{R})$ and any $K\geq 1$,
we define the truncation operator $\mathcal{T}_{K}$ and projection operator
$\mathcal{R}_{K}$ as
$\mathcal{T}_{K}f(\theta)=\sum_{k\in\mathbb{Z},|k|<K}\widehat{f}(k)\mathrm{e}^{2\pi\mathrm{i}k\theta},\quad\mathcal{R}_{K}f(\theta)=\sum_{k\in\mathbb{Z},|k|\geq
K}\widehat{f}(k)\mathrm{e}^{2\pi\mathrm{i}k\theta}.$
We denote the average of $f(\theta)$ on $\mathbb{T}$ by
$[f(\theta)]_{\theta}=\int_{\mathbb{T}}f(\theta)\mathrm{d}\theta=\widehat{f}(0).$
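Since $\mathcal{T}_{K}$, $\mathcal{R}_{K}$ and $[\,\cdot\,]_{\theta}$ act diagonally on Fourier modes, they are straightforward to realize on a trigonometric polynomial represented by its coefficients. The following minimal sketch (our own representation: a dictionary $k\mapsto\widehat{f}(k)$) is illustrative only:

```python
import cmath

def truncate(fhat, K):
    """T_K f: keep the Fourier modes with |k| < K."""
    return {k: c for k, c in fhat.items() if abs(k) < K}

def tail(fhat, K):
    """R_K f: keep the Fourier modes with |k| >= K."""
    return {k: c for k, c in fhat.items() if abs(k) >= K}

def evaluate(fhat, theta):
    """f(theta) = sum_k fhat(k) e^{2 pi i k theta}."""
    return sum(c * cmath.exp(2j * cmath.pi * k * theta) for k, c in fhat.items())
```

By construction $f=\mathcal{T}_{K}f+\mathcal{R}_{K}f$, and the average $[f]_{\theta}$ is simply the coefficient $\widehat{f}(0)$.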
For $C^{\infty}$ functions satisfying $\mathbf{(H1)}$ and $\mathbf{(H2)},$ the norm of $\mathcal{R}_{K}f(\theta)$ on a shrunken domain admits the following estimate.
###### Lemma 3.4.
Under the assumptions $\mathbf{(A)},$ there exists $T_{1}=T_{1}(M)$, such that
for any $f\in U_{r}^{M}(\mathbb{T},\mathbb{R})$, if $Kr\geq T_{1}$, then
(3.6) $\displaystyle\|\mathcal{R}_{K}f\|_{M,r/2}\leq
C(Kr^{2})^{-1}\|f\|_{M,r}\exp\\{-9^{-1}\Gamma(4Kr)\ln(4Kr)\\}.$
Particularly,
(3.7)
$\displaystyle\|\mathcal{R}_{K}f\|_{C^{0}}\leq(Kr^{2})^{-1}\|f\|_{M,r}\exp\\{-\Lambda(\pi
Kr)\\}.$
###### Proof.
First by $\mathrm{(II)}$ and $\mathrm{(III)}$ in $\mathbf{(A)},$ for any
$|k|\geq K,$ we have
$\displaystyle\Lambda(|2\pi k|r)-\Lambda(|2\pi k|(7/8)r)$ $\displaystyle\geq$
$\displaystyle\\{\ln(|2\pi k|r)-\ln(|2\pi k|(7/8)r)\\}\Gamma(|2\pi
k|(7/8)r)\ln(|2\pi k|(7/8)r)$ $\displaystyle\geq$
$\displaystyle\ln(8/7)\Gamma(|4k|r)\ln(|4k|r)>9^{-1}\Gamma(4Kr)\ln(4Kr).$
Moreover, by $\mathrm{(I)}$ in $\mathbf{(A)}$, we know there exists
$T_{1}=T_{1}(M)$, such that if $Kr\geq T_{1}$ then
$\Gamma(4|k|r)>18,\forall|k|\geq K,$ which implies
$\displaystyle\Lambda(|2\pi k|(7/8)r)-\Lambda(|2\pi k|(3/4)r)>2\ln(|4k|r).$
Consequently, direct calculations show that
(3.8) $\displaystyle\sum_{|k|\geq K}\exp\\{-\Lambda(|2\pi k|r)\\}|2\pi
k(3r/4)|^{s}M_{s}^{-1}$ $\displaystyle\leq$ $\displaystyle\sum_{|k|\geq
K}\exp\\{-\Lambda(|2\pi k|r)\\}\exp\\{\Lambda(|2\pi k|(3/4)r)\\}$
$\displaystyle\leq$ $\displaystyle\sup_{|k|\geq K}\exp\\{-\Lambda(|2\pi
k|r)+\Lambda(|2\pi k|(7/8)r)\\}$ $\displaystyle\sum_{|k|\geq
K}\exp\\{-\Lambda(|2\pi k|(7/8)r)+\Lambda(|2\pi k|(3/4)r)\\}$
$\displaystyle\leq$
$\displaystyle\exp\\{-9^{-1}\Gamma(4Kr)\ln(4Kr)\\}\sum_{|k|\geq K}|4kr|^{-2}$
$\displaystyle\leq$
$\displaystyle(4Kr^{2})^{-1}\exp\\{-9^{-1}\Gamma(4Kr)\ln(4Kr)\\}.$
Finally, by (3.2), we have
$\displaystyle\|D_{\theta}^{s}\mathcal{R}_{K}f\|_{C^{0}}\leq\|f\|_{M,r}\sum_{|k|\geq
K}\exp\\{-\Lambda(|2\pi k|r)\\}|2\pi k|^{s},$
then
$\displaystyle\|\mathcal{R}_{K}f\|_{M,r/2}$
$\displaystyle=3^{-1}4\pi^{2}\sup_{s\in\mathbb{N}}\big{(}(r/2)^{s}(1+s)^{2}\|D_{\theta}^{s}\mathcal{R}_{K}f\|_{C^{0}}M_{s}^{-1}\big{)}$
$\displaystyle\leq\|f\|_{M,r}\sup_{s\in\mathbb{N}}3^{-1}4\pi^{2}(1+s)^{2}(2/3)^{s}$
$\displaystyle\sum_{|k|\geq K}\exp\\{-\Lambda(|2\pi k|r)\\}\\{|2\pi
k|(3r/4)\\}^{s}M_{s}^{-1}$ $\displaystyle\leq
C(Kr^{2})^{-1}\|f\|_{M,r}\exp\\{-9^{-1}\Gamma(4Kr)\ln(4Kr)\\},$
where the last inequality follows from (3.8).
The conclusion (3.7) follows from similar computations, so we omit the details. ∎
For a given function $f\in U_{r}^{M}(\mathbb{T},\mathbb{R}),$ we define the $\|\cdot\|_{\Lambda,r}$-norm by
(3.9)
$\displaystyle\|f\|_{\Lambda,r}=\sum_{k\in\mathbb{Z}}|\widehat{f}(k)|\mathrm{e}^{\Lambda(|2\pi
k|r)},$
with $\Lambda$ being the one defined by (3.1). With the help of Lemma 3.3, we can discuss the relationship between the norms $\|\cdot\|_{M,r}$ and $\|\cdot\|_{\Lambda,r}.$
###### Lemma 3.5.
Under the assumptions $\mathbf{(H1)}$ and $\mathbf{(H2)}$, we have
(3.10) $\displaystyle\|f\|_{M,r}$ $\displaystyle\leq$ $\displaystyle
C\|f\|_{\Lambda,2r},$ (3.11) $\displaystyle\|f\|_{\Lambda,r/2}$
$\displaystyle\leq$ $\displaystyle(2\pi r)^{-1}(4+c_{M})\|f\|_{M,r},$
where $c_{M}$ is a constant that only depends on the sequence $M$.
###### Proof.
First, (3.1) implies that for any $y>0$, any $s\in{\mathbb{N}}$,
$\exp\\{-\Lambda(y)\\}y^{s}\leq M_{s},$
which yields
$\displaystyle\exp\\{-\Lambda(yr)\\}y^{s}=\exp\\{-\Lambda(yr)\\}(yr)^{s}r^{-s}\leq
M_{s}r^{-s},\forall yr>0.$
Then
$\displaystyle\sup_{k\in\mathbb{Z}}\exp\\{-\Lambda(|2\pi k|2r)\\}|2\pi
k|^{s}\leq M_{s}(2r)^{-s}.$
Thus for any $f\in U_{r}^{M}(\mathbb{T},\mathbb{R})$, for any
$s\in\mathbb{N},$ we get
$\displaystyle\|D_{\theta}^{s}f\|_{C^{0}}$
$\displaystyle\leq\sum_{k\in\mathbb{Z}}|\widehat{f}(k)||2\pi k|^{s}$
$\displaystyle\leq\sup_{k\in\mathbb{Z}}\exp\\{-\Lambda(|2\pi k|2r)\\}|2\pi
k|^{s}\sum_{k\in\mathbb{Z}}|\widehat{f}(k)|\exp\\{\Lambda(|2\pi k|2r)\\}$
$\displaystyle\leq\|f\|_{\Lambda,2r}M_{s}(2r)^{-s},$
which implies that
$\displaystyle\|f\|_{M,r}$
$\displaystyle=3^{-1}4\pi^{2}\sup_{s\in\mathbb{N}}\big{(}r^{s}(1+s)^{2}\|D_{\theta}^{s}f\|_{C^{0}}M_{s}^{-1}\big{)}$
$\displaystyle\leq\|f\|_{\Lambda,2r}\sup_{s\in\mathbb{N}}3^{-1}4\pi^{2}2^{-s}(1+s)^{2}<C\|f\|_{\Lambda,2r}.$
Now we turn to the inequality (3.11). For $y\geq 1,$ we easily have
$\begin{split}\exp\\{-\Lambda(2y)+\Lambda{(y)}\\}&=\inf_{s\in\mathbb{N}}\\{(2y)^{-s}M_{s}\\}y^{s(y)}M_{s(y)}^{-1}\\\
&\leq(2y)^{-(s(y)+2)}M_{s(y)+2}y^{s(y)}M_{s(y)}^{-1}\leq
c_{M}(2y)^{-2},\end{split}$
where (by $\mathbf{(H2)}$)
$\displaystyle
c_{M}:=\sup_{s\in\mathbb{N}}\\{2^{-s}M_{s+2}M_{s}^{-1}\\}<\infty.$
Note $\Lambda(y)=0,$ if $y\leq 1$. Consequently, we have
$\begin{split}\|f\|_{\Lambda,r/2}&=\sum_{k\in{\mathbb{Z}}}|\widehat{f}(k)|\exp\\{\Lambda{(|\pi
k|r)}\\}\\\ &\leq\|f\|_{M,r}\sum_{k\in{\mathbb{Z}}}\exp\\{-\Lambda(|2\pi
k|r)+\Lambda{(|\pi k|r)}\\}\\\ &=\|f\|_{M,r}\\{\sum_{|k|<(\pi
r)^{-1}}+\sum_{|k|\geq(\pi r)^{-1}}\\}\exp\\{-\Lambda(|2\pi k|r)+\Lambda{(|\pi
k|r)}\\}\\\ &\leq 2(\pi r)^{-1}\|f\|_{M,r}+c_{M}\|f\|_{M,r}\sum_{|k|\geq(\pi
r)^{-1}}(|2\pi k|r)^{-2}\\\ &\leq 2(\pi r)^{-1}\|f\|_{M,r}+(2\pi
r)^{-1}c_{M}\|f\|_{M,r}.\end{split}$
∎
We have stated the above lemma only for $\|f\|_{\Lambda,r/2}$ in (3.11), as this is the only case we shall need; but clearly one could obtain an estimate for any $\|\partial^{s}f\|_{\Lambda,r/2},\ s\in{\mathbb{N}},$ by similar arguments.
## 4\. The inductive step
### 4.1. Sketch of the proof
The proof of Theorem 1.2 is based on a non-standard KAM scheme which was first
developed in [40]. Let us briefly introduce the main idea of the proof. We start from the cocycle $(\alpha,R_{\rho_{f}}\mathrm{e}^{F_{n}})$ with $\|F_{n}\|$ of size $\varepsilon_{n}$. To conjugate it into $(\alpha,R_{\rho_{f}}\mathrm{e}^{F_{n+1}})$ with a smaller perturbation, a crucial ingredient is to solve the homological equations
(4.1) $\displaystyle f_{1}(\cdot+\alpha)-f_{1}=-(g_{1}-[g_{1}]_{\theta}),$
$\mathrm{e}^{4\pi\mathrm{i}\rho_{f}}f_{2}(\cdot+\alpha)-f_{2}+g_{2}=0.$
However, if $\alpha$ is Liouvillean, (4.1) cannot be solved at all, even in the analytic category. This is essentially different from the classical KAM
scheme. Therefore, we have to leave $g_{1}(\theta)$ (at least the resonant
terms of $g_{1}(\theta)$) into the normal form. As a result, from the second
step of the iteration we need to consider the modified cocycle
$(\alpha,R_{\rho_{f}+(2\pi)^{-1}g(\theta)}\mathrm{e}^{F_{n}(\theta)})$, thus
the second equation in (4.1) is of the form
$\mathrm{e}^{2\mathrm{i}(2\pi\rho_{f}+g(\theta))}f(\cdot+\alpha)-f+g_{2}=0.$
To obtain the desired result, we divide the argument into three steps. In the first step we eliminate the lower order terms of $g(\theta)\in
U_{r}^{M}({\mathbb{T}},{\mathbb{R}})$ by solving the equation
$v(\theta+\alpha)-v(\theta)=-(\mathcal{T}_{\overline{Q}_{n}}g-[g]_{\theta}).$
Although $\|g(\theta)\|$ is of size $\varepsilon_{0}$, $\|\mathrm{e}^{\mathrm{i}v}\|_{r}$ could be very large in the Liouvillean frequency case. To control $\|\mathrm{e}^{\mathrm{i}v}\|_{r}$, the trick is to control $\|\mathrm{Im}v(\theta)\|$ at the cost of greatly reducing the analytic radius, an idea first developed in the analytic case in [53]. The key point here is that $v(\theta)$ is in fact a trigonometric polynomial, so one can analytically continue $v(\theta)$ to a real analytic function, and the “width” $r$ just plays the role of the analytic radius. Therefore, one can shrink $r$ greatly in order to control $\|\mathrm{e}^{\mathrm{i}v}\|_{r}$ (Lemma 4.1). Consequently, the “width” will go to zero rapidly, and the convergence of the KAM iteration only works in the $C^{\infty}$ category.
The second step is to make the perturbation much smaller by solving the
homological equation
$\mathrm{e}^{2\mathrm{i}(2\pi\rho_{f}+\widetilde{g}(\cdot))}f(\cdot+\alpha)-f+h=0,$
where $\|\widetilde{g}\|=O(\|F_{n}\|).$ By the diagonally dominant method of [40], we can solve its approximate equation and then make the perturbation as small as we desire (Lemma 4.3).
By these two steps, we can already get the $C^{\infty}$ almost reducibility result (Corollary 2). However, to get the $C^{\infty}$ rotations reducibility result, at the end of each KAM step we need to invert the first step, so that the conjugation is close to the identity (Lemma 4.6).
For simplicity, in the following parts we will shorten $U_{r}^{M}({\mathbb{T}},*)$ and $\|\cdot\|_{M,r}$ to $U_{r}({\mathbb{T}},*)$ and $\|\cdot\|_{r}$; also, the letter $C$ denotes suitable (possibly different) large constants that do not depend on the iteration step.
### 4.2. Main iteration lemma
For the functions $\Lambda(x)$ and $\Gamma(x)$ in Lemma 3.3, by $\mathrm{(I)}$
in $\mathbf{(A)}$ we know that there exists $\widetilde{T}\geq T_{1}$, where
$T_{1}$ is defined in Lemma 3.4, such that for any $x\geq\widetilde{T}$
(4.2) $\displaystyle\Gamma(x)$ $\displaystyle\geq$ $\displaystyle
64\mathbb{A}^{8}\tau^{4},$ (4.3) $\displaystyle\Lambda(x)$ $\displaystyle\geq$
$\displaystyle\ln x.$
Denote
(4.4) $\begin{split}T=\max\\{c_{M}^{3},\widetilde{T}^{3},\
(2^{-1}r)^{-12},(4\gamma^{-1})^{2\tau}\\},\end{split}$
where $c_{M}$ is the one in (3.11). Then, for the $T$ defined above, we claim that there exists $n_{0}$ such that $Q_{n_{0}+1}\leq T^{\mathbb{A}^{4}}$ and $\overline{Q}_{n_{0}+1}\geq T.$ Indeed, let $m_{0}$ be such that $Q_{m_{0}}\leq T\leq Q_{m_{0}+1}.$ If $\overline{Q}_{m_{0}}\geq T$, then we set $n_{0}=m_{0}-1.$ Otherwise, if $\overline{Q}_{m_{0}}\leq T,$ then by the definition of $(Q_{k})$ it holds that $Q_{m_{0}+1}\leq T^{\mathbb{A}^{4}}.$ By the selection, $\overline{Q}_{m_{0}+1}\geq T,$ so $n_{0}=m_{0}$ satisfies our needs. In the following we will write $n_{0}$ as $0,$ that is, $\overline{Q}_{n}$ stands for $\overline{Q}_{n+n_{0}}.$
Without loss of generality we assume $0<r_{0}:=2^{-1}r\leq 1$. Set
(4.5)
$\begin{split}\widetilde{\varepsilon}_{0}=0,\qquad\varepsilon_{0}=T^{-8\mathbb{A}^{4}\tau^{2}},\end{split}$
then $\varepsilon_{0}$ depends only on $\gamma,\tau,r,M,$ but not on $\alpha.$ Once we have this, we can define the iterative parameters as follows: for $n\geq 1$,
(4.6) $\begin{split}\overline{r}_{n}=2\overline{Q}_{n}^{-2}r_{0},&\qquad
r_{n}=\overline{Q}_{n-1}^{-2}r_{0}.\\\
\varepsilon_{n}=\varepsilon_{n-1}\overline{Q}_{n}^{-\Gamma^{\frac{1}{2}}(\overline{Q}_{n}^{\frac{1}{3}})},&\qquad\widetilde{\varepsilon}_{n}=C\sum_{l=0}^{n-1}\varepsilon_{l}.\end{split}$
To simplify the notations, for any $g\in C^{0}(\mathbb{T},\mathbb{R}),$ we
denote
$\begin{split}R_{g}:=\left(\begin{matrix}\cos 2\pi g\ \ -\sin 2\pi g\\\ \sin
2\pi g\ \ \cos 2\pi g\end{matrix}\right)=\mathrm{e}^{-2\pi gJ},\
J=\left(\begin{matrix}0\ \ \ \ 1\\\ -1\ \ \ 0\end{matrix}\right),\end{split}$
and set
$\begin{split}\mathcal{F}_{r}(\rho_{f},\eta,\widetilde{\eta}):=\Big{\\{}(\alpha,\
R_{\rho_{f}+(2\pi)^{-1}g(\theta)}\mathrm{e}^{F(\theta)}):&\
\|g\|_{r}\leq\eta,\|F\|_{r}\leq\widetilde{\eta},\\\ &\rho_{f}=\rho(\alpha,\
R_{\rho_{f}+(2\pi)^{-1}g(\theta)}\mathrm{e}^{F(\theta)})\Big{\\}}.\end{split}$
Then the main inductive lemma is the following:
###### Proposition 2.
Assume that $\rho_{f}\in DC_{\alpha}(\gamma,\tau)$, then for $n\geq 1,$ the
cocycle
(4.7) $\begin{split}(\alpha,\
R_{\rho_{f}+(2\pi)^{-1}g_{n}}\mathrm{e}^{F_{n}(\theta)})\in\mathcal{F}_{r_{n}}(\rho_{f},\widetilde{\varepsilon}_{n},\varepsilon_{n}),\end{split}$
with $\mathcal{R}_{\overline{Q}_{n}}g_{n}=0$ can be conjugated to
(4.8) $\begin{split}(\alpha,\
R_{\rho_{f}+(2\pi)^{-1}g_{n+1}}\mathrm{e}^{F_{n+1}(\theta)})\in\mathcal{F}_{r_{n+1}}(\rho_{f},\widetilde{\varepsilon}_{n+1},\varepsilon_{n+1}),\end{split}$
with $\mathcal{R}_{\overline{Q}_{n+1}}g_{n+1}=0$ by the conjugation $\Phi_{n}$
with the estimate
(4.9) $\begin{split}\|\Phi_{n}-I\|_{r_{n+1}}\leq
C\varepsilon_{n}^{\frac{1}{2}}.\end{split}$
The construction of the conjugation in Proposition 2 is divided into three steps, given in Lemma 4.1, Lemma 4.3 and Lemma 4.6 below.
###### Lemma 4.1.
For $n\geq 1,$ the cocycle
(4.10) $\begin{split}(\alpha,\
R_{\rho_{f}+(2\pi)^{-1}g_{n}}\mathrm{e}^{F_{n}(\theta)})\in\mathcal{F}_{r_{n}}(\rho_{f},\widetilde{\varepsilon}_{n},\varepsilon_{n}),\end{split}$
with $\mathcal{R}_{\overline{Q}_{n}}g_{n}=0$ can be conjugated to the cocycle
(4.11) $\begin{split}(\alpha,\
R_{\rho_{f}}\mathrm{e}^{\widetilde{F}_{n}(\theta)})\in\mathcal{F}_{\overline{r}_{n}}(\rho_{f},0,C\varepsilon_{n}),\end{split}$
via the conjugation $(0,\mathrm{e}^{-v_{n}J})$ with
$\|\mathrm{e}^{-v_{n}J}\|_{\overline{r}_{n}}\leq C.$
Before giving the proof of Lemma 4.1 we establish an auxiliary lemma. To this end,
for
$f(\theta)=\sum_{k\in\mathbb{Z}}\widehat{f}(k)\mathrm{e}^{2\pi\mathrm{i}k\theta}\in
U_{r}(\mathbb{T},\mathbb{R})$, we set
$\vartheta=\theta+\mathrm{i}\widetilde{\theta},(\theta\in\mathbb{T},|\widetilde{\theta}|\leq
r)$
$\displaystyle\widetilde{f}(\vartheta)=\sum_{k\in\mathbb{Z}}\widehat{f}(k)\mathrm{e}^{2\pi\mathrm{i}k(\theta+\mathrm{i}\widetilde{\theta})}.$
Then we, formally, define the analytic norm
$\displaystyle\|\widetilde{f}\|_{r}^{*}=\sum_{k\in\mathbb{Z}}|\widehat{f}(k)|\sup_{|\widetilde{\theta}|\leq
r,\theta\in\mathbb{T}}\big{|}\mathrm{e}^{2\pi\mathrm{i}k(\theta+\mathrm{i}\widetilde{\theta})}\big{|}=\sum_{k\in\mathbb{Z}}|\widehat{f}(k)|\mathrm{e}^{|2\pi
k|r}.$
If $\mathrm{Im}\vartheta=\widetilde{\theta}=0,$ then
$\widetilde{f}(\vartheta)=f(\theta),$ and if $0<|\mathrm{Im}\vartheta|\leq r,$
one has
$\displaystyle\|f\|_{\Lambda,r}=\sum_{k\in\mathbb{Z}}|\widehat{f}(k)|\mathrm{e}^{\Lambda(|2\pi
k|r)}\leq\|\widetilde{f}\|_{r}^{*}.$
In general $\|\widetilde{f}\|_{r}^{*}=\infty$; however, if $f$ is a trigonometric polynomial, then $\widetilde{f}$ indeed defines a real analytic function in the strip $|\mathrm{Im}\vartheta|\leq r.$ Motivated by this, we have the following:
###### Lemma 4.2.
Assume that $v$ is the solution of
(4.12)
$v(\theta+\alpha)-v(\theta)=-(\mathcal{T}_{\overline{Q}_{n}}g-[g]_{\theta}),$
where $g\in U_{r_{n}}(\mathbb{T},\mathbb{R})$ with $\|g(\theta)\|_{r_{n}}\leq
C\varepsilon_{0}.$ Then
$\|\mathrm{e}^{\mathrm{i}v(\theta)}\|_{\overline{r}_{n}}\leq C.$
###### Proof.
By comparing the Fourier coefficients of (4.12) we have
$v(\theta)=\sum_{0<|k|<\overline{Q}_{n}}\widehat{v}(k)\mathrm{e}^{2\pi\mathrm{i}k\theta}=-\sum_{0<|k|<\overline{Q}_{n}}\widehat{g}(k)(\mathrm{e}^{2\pi\mathrm{i}k\alpha}-1)^{-1}\mathrm{e}^{2\pi\mathrm{i}k\theta}$
with estimate
(4.13) $|\widehat{v}(k)|\leq\overline{Q}_{n}|\widehat{g}(k)|,\
0<|k|<\overline{Q}_{n}.$
For $\theta\in\mathbb{T},$ by the fact $g(\theta)\in{\mathbb{R}}$, one has
$v(\theta)\in\mathbb{R}.$ Thus for the function
$\begin{split}\widetilde{v}(\vartheta)-v(\theta)=\sum_{0<|k|<\overline{Q}_{n}}\widehat{v}(k)\mathrm{e}^{2\pi\mathrm{i}k\theta}\big{(}\mathrm{e}^{-2\pi
k\widetilde{\theta}}-1\big{)},\end{split}$
we have
$\mathrm{Im}\widetilde{v}(\vartheta)=\mathrm{Im}(\widetilde{v}(\vartheta)-v(\theta)).$
Consequently, by (4.13), we have:
$\begin{split}\|\mathrm{Im}\widetilde{v}(\vartheta)\|_{4\overline{Q}_{n}^{-2}}^{*}&\leq\|\widetilde{v}(\vartheta)-v(\theta)\|_{4\overline{Q}_{n}^{-2}}^{*}\\\
&=\sum_{0<|k|<\overline{Q}_{n}}|\widehat{v}(k)|\sup_{|\widetilde{\theta}|\leq
4\overline{Q}_{n}^{-2},\theta\in\mathbb{T}}\big{|}\mathrm{e}^{2\pi\mathrm{i}k\theta}\big{(}\mathrm{e}^{-2\pi
k\widetilde{\theta}}-1\big{)}\big{|}\\\
&\leq\sum_{0<|k|<\overline{Q}_{n}}\overline{Q}_{n}^{-1}|\widehat{g}(k)|16\pi|k|\\\
&\leq 16\pi\overline{Q}_{n}^{-1}\widetilde{T}(\pi
r_{n})^{-1}\sum_{|k|<\widetilde{T}(\pi r_{n})^{-1}}|\widehat{g}(k)|\\\ &\
+16(\overline{Q}_{n}r_{n})^{-1}\sum_{\widetilde{T}(\pi
r_{n})^{-1}\leq|k|<\overline{Q}_{n}}|\widehat{g}(k)|\exp\\{\Lambda(|\pi
k|r_{n})\\}\\\ &\leq
32\widetilde{T}(\overline{Q}_{n}r_{n})^{-1}\|g\|_{\Lambda,r_{n}/2},\end{split}$
where the third inequality follows by (4.3).
By (3.11) of Lemma 3.5, one can further compute
$\|\mathrm{Im}\widetilde{v}(\vartheta)\|_{4\overline{Q}_{n}^{-2}}^{*}\leq
32\widetilde{T}(2\pi\overline{Q}_{n})^{-1}(4+c_{M})\overline{Q}_{n-1}^{4}r_{0}^{2}\|g\|_{r_{n}}\leq\|g\|_{r_{n}},$
the last inequality follows by
$\overline{Q}_{n}\geq\max\\{T,\overline{Q}_{n-1}^{\mathbb{A}}\\},\ n\geq 1$ (by
Lemma 2.2 and the choice of $\overline{Q}_{n_{0}}$). Therefore, by (3.10) of Lemma
3.5, we have
$\begin{split}\|\mathrm{e}^{\mathrm{i}v(\theta)}\|_{\overline{r}_{n}}&\leq
C\|\mathrm{e}^{\mathrm{i}v(\theta)}\|_{\Lambda,2\overline{r}_{n}}\leq
C\|\mathrm{e}^{\mathrm{i}\widetilde{v}(\vartheta)}\|_{2\overline{r}_{n}}^{*}\leq
C\|\mathrm{e}^{\mathrm{i}\widetilde{v}(\vartheta)}\|_{4\overline{Q}_{n}^{-2}}^{*}\\\
&\leq
C\exp\\{\|\mathrm{Im}\widetilde{v}(\vartheta)\|_{4\overline{Q}_{n}^{-2}}^{*}\\}<C\exp\\{\|g\|_{r_{n}}\\}<C.\end{split}$
∎
Proof of Lemma 4.1: Assume that $v_{n}$ is the solution of
$v_{n}(\theta+\alpha)-v_{n}(\theta)=-(g_{n}(\theta)-\widehat{g}_{n}(0)).$
Note that $\mathcal{R}_{\overline{Q}_{n}}g_{n}=0$; then by Lemma 4.2 we have
$\begin{split}\|\mathrm{e}^{v_{n}J}\|_{\overline{r}_{n}}\leq C.\end{split}$
Direct computation shows that $(0,\mathrm{e}^{-v_{n}J})$ conjugates the
cocycle (4.10) into
$(\alpha,R_{\rho_{f}+(2\pi)^{-1}\widehat{g}(0)}\mathrm{e}^{\overline{F}_{n}}),$
with $\overline{F}_{n}=\mathrm{e}^{-v_{n}J}F_{n}(\theta)\mathrm{e}^{v_{n}J}.$
Thus by Lemma 3.1, we have
(4.14)
$\|\overline{F}_{n}\|_{\overline{r}_{n}}\leq\|\mathrm{e}^{-v_{n}J}\|_{\overline{r}_{n}}\|F_{n}\|_{\overline{r}_{n}}\|\mathrm{e}^{v_{n}J}\|_{\overline{r}_{n}}\leq
C\varepsilon_{n}.$
On the other hand, since $\mathrm{e}^{-v_{n}J}$ is homotopic to the identity,
the fibered rotation number remains unchanged, then by (2.1), we have
$|(2\pi)^{-1}\widehat{g}_{n}(0)|\leq\|\overline{F}_{n}\|_{\overline{r}_{n}}\leq
C\varepsilon_{n},$
which means
(4.15) $|\widehat{g}_{n}(0)|\leq C\varepsilon_{n}.$
Also note that if $B,D$ are small $sl(2,\mathbb{R})$ matrices, then there exists
$E\in sl(2,\mathbb{R})$ such that
$\mathrm{e}^{B}\mathrm{e}^{D}=\mathrm{e}^{B+D+E},$
where $E$ is a sum of terms of order at least $2$ in $B,D.$ Consequently, by
(4.14), (4.15) and Lemma 3.1, there exists $\widetilde{F}_{n}\in
U_{\overline{r}_{n}}(\mathbb{T},sl(2,\mathbb{R}))$ such that
$R_{\rho_{f}+(2\pi)^{-1}\widehat{g}(0)}\mathrm{e}^{\overline{F}_{n}}=R_{\rho_{f}}\mathrm{e}^{\widetilde{F}_{n}}$
with estimate $\|\widetilde{F}_{n}\|_{\overline{r}_{n}}\leq C\varepsilon_{n}.$
∎
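The statement used in the proof above about products of exponentials of small $sl(2,\mathbb{R})$ matrices is an instance of the Baker–Campbell–Hausdorff formula; as a sketch (the expansion below is the standard one and is not taken from the text):

```latex
% For B, D small enough that the series converges:
\[
\mathrm{e}^{B}\mathrm{e}^{D}=\mathrm{e}^{B+D+E},
\qquad
E=\tfrac{1}{2}[B,D]
 +\tfrac{1}{12}\big([B,[B,D]]+[D,[D,B]]\big)+\cdots,
\]
% so E is a convergent sum of iterated commutators, each of order at least
% two in (B,D); in particular \|E\| \leq C(\|B\|+\|D\|)^{2} for small B, D.
```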
Once we obtain (4.11), we will further conjugate it to another cocycle with a
much smaller perturbation. We now give a lemma which applies to more general
cocycles than just (4.11).
###### Lemma 4.3.
Consider the cocycle $(\alpha,\
R_{\rho_{f}}\mathrm{e}^{\widetilde{F}(\theta)})$ with $\rho_{f}=\rho(\alpha,\
R_{\rho_{f}}\mathrm{e}^{\widetilde{F}(\theta)})\in DC_{\alpha}(\gamma,\tau)$
and
(4.16) $\begin{split}\|\widetilde{F}\|_{\overline{r}_{n}}\leq
8^{-2}\gamma^{2}Q_{n+1}^{-2\tau^{2}},\ n\geq 0.\end{split}$
Then there is a conjugation map $\Psi_{n}\in
U_{r_{n+1}}(\mathbb{T},SL(2,\mathbb{R}))$ with
$\begin{split}\|\Psi_{n}-I\|_{r_{n+1}}\leq\|\widetilde{F}\|_{\overline{r}_{n}}^{\frac{1}{2}},\
n\geq 0,\end{split}$
such that $\Psi_{n}$ conjugates the cocycle $(\alpha,\
R_{\rho_{f}}\mathrm{e}^{\widetilde{F}(\theta)})$ into
(4.17) $\begin{split}(\alpha,\
R_{\rho_{f}+(2\pi)^{-1}\widetilde{g}_{n}}\mathrm{e}^{G(\theta)})\in\mathcal{F}_{r_{n+1}}(\rho_{f},2\|\widetilde{F}\|_{\overline{r}_{n}},\epsilon),\end{split}$
with $\mathcal{R}_{\overline{Q}_{n+1}}\widetilde{g}_{n}=0$ and $n\geq 0,$
where
$\epsilon=C^{-2}\|\widetilde{F}\|_{\overline{r}_{n}}\overline{Q}_{n+1}^{-\Gamma^{\frac{1}{2}}(\overline{Q}_{n+1}^{\frac{1}{3}})}.$
Before giving the proof of Lemma 4.3, we give one important lemma concerning
the estimate of small divisors, which serves as the fundamental ingredient of
the proof. Although the proof is quite simple, it is the key observation for
obtaining semi-local results.
###### Lemma 4.4.
For any $0<\gamma<1,\,\tau>1,$ assume that $\overline{Q}_{n+1}\geq T$ and
$\rho\in DC_{\alpha}(\gamma,\tau)$, then for any
$|k|\leq\overline{Q}_{n+1}^{\frac{1}{2}},$ we have
(4.18) $\begin{split}\big{|}\mathrm{e}^{2\pi\mathrm{i}(k\alpha\pm
2\rho)}-1\big{|}\geq\gamma Q_{n+1}^{-\tau^{2}}.\end{split}$
###### Proof.
We just need to estimate
$\big{|}\mathrm{e}^{2\pi\mathrm{i}(k\alpha+2\rho)}-1\big{|}$ since
$\begin{split}\big{|}\mathrm{e}^{2\pi\mathrm{i}(k\alpha-2\rho)}-1\big{|}=\big{|}\mathrm{e}^{2\pi\mathrm{i}(-k\alpha+2\rho)}-1\big{|}.\end{split}$
$\textbf{Case\ 1.}\ \overline{Q}_{n+1}\leq Q_{n+1}^{2\tau}.$ Then our
assumptions imply
$\begin{split}\big{|}\mathrm{e}^{2\pi\mathrm{i}(k\alpha+2\rho)}-1\big{|}&=2|\sin\pi(k\alpha+2\rho)|>\big{\|}k\alpha+2\rho\big{\|}_{\mathbb{Z}}\geq\gamma\overline{Q}_{n+1}^{-2^{-1}\tau}\geq\gamma
Q_{n+1}^{-\tau^{2}}.\end{split}$
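The chain of lower bounds in Case 1 can be unpacked as follows; this is a sketch using only $|k|\leq\overline{Q}_{n+1}^{\frac{1}{2}}$, the definition of $DC_{\alpha}(\gamma,\tau)$, and the case assumption $\overline{Q}_{n+1}\leq Q_{n+1}^{2\tau}$:

```latex
% Diophantine condition plus |k| \le \overline{Q}_{n+1}^{1/2}:
\[
\big\|k\alpha+2\rho\big\|_{\mathbb{Z}}
 \;\geq\; \gamma\langle k\rangle^{-\tau}
 \;\geq\; \gamma\,\overline{Q}_{n+1}^{-\tau/2},
\qquad
% case assumption \overline{Q}_{n+1} \le Q_{n+1}^{2\tau}:
\gamma\,\overline{Q}_{n+1}^{-\tau/2}
 \;\geq\; \gamma\big(Q_{n+1}^{2\tau}\big)^{-\tau/2}
 \;=\; \gamma\,Q_{n+1}^{-\tau^{2}}.
\]
```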
$\textbf{Case\ 2.}\ \overline{Q}_{n+1}>Q_{n+1}^{2\tau}.$ Write $k$ as
$k=\widetilde{k}+mQ_{n+1},\ m\in\mathbb{Z}$ with $|\widetilde{k}|<Q_{n+1}.$
Then we have
$\begin{split}|m|\leq|k|/Q_{n+1}<\overline{Q}_{n+1}^{\frac{1}{2}}/Q_{n+1}.\end{split}$
Consequently, by the assumption that $\rho\in DC_{\alpha}(\gamma,\tau)$, one
has
$\begin{split}\big{|}\mathrm{e}^{2\pi\mathrm{i}(k\alpha+2\rho)}-1\big{|}&>\big{\|}\widetilde{k}\alpha+mQ_{n+1}\alpha+2\rho\big{\|}_{\mathbb{Z}}\\\
&\geq\big{\|}\widetilde{k}\alpha+2\rho\big{\|}_{\mathbb{Z}}-|m|\|Q_{n+1}\alpha\|_{\mathbb{Z}}\\\
&\geq\gamma Q_{n+1}^{-\tau}-|m|/\overline{Q}_{n+1}\geq\gamma
Q_{n+1}^{-\tau}-\overline{Q}_{n+1}^{-\frac{1}{2}}Q_{n+1}^{-1}>2^{-1}\gamma
Q_{n+1}^{-\tau},\end{split}$
where the last inequality is by
$\begin{split}\overline{Q}_{n+1}^{\frac{1}{2}}=\overline{Q}_{n+1}^{\frac{1}{2\tau}}\overline{Q}_{n+1}^{\frac{\tau-1}{2\tau}}\geq
2\gamma^{-1}Q_{n+1}^{\tau-1},\end{split}$
which is guaranteed by $\overline{Q}_{n+1}\geq(2\gamma^{-1})^{2\tau}$ and
$\overline{Q}_{n+1}>Q_{n+1}^{2\tau}.$ ∎
Let $su(1,1)$ be the space consisting of matrices of the form
$\left(\begin{matrix}\mathrm{i}t\ \ \ \ \ v\\\ \overline{v}\ \
-\mathrm{i}t\end{matrix}\right)$ $(t\in\mathbb{R},\ v\in\mathbb{C})$; we
simply denote such a matrix by $\\{t,v\\}.$ Recall that $sl(2,\mathbb{R})$ is
isomorphic to $su(1,1)$ by the rule $A\mapsto MAM^{-1},$ where
$M=\frac{1}{1+\mathrm{i}}\left(\begin{matrix}1\ \ -\mathrm{i}\\\ 1\quad\ \
\mathrm{i}\end{matrix}\right),$ and a simple calculation yields
$\begin{split}M\left(\begin{array}[]{l}x\ \qquad y+z\\\ y-z\quad\
-x\end{array}\right)M^{-1}=\left(\begin{array}[]{l}\mathrm{i}z\ \qquad
x-\mathrm{i}y\\\ x+\mathrm{i}y\quad\
-\mathrm{i}z\end{array}\right),x,y,z\in\mathbb{R}.\end{split}$
Motivated by Lemma 4.4, we can define the following non-resonant and resonant
spaces.
$\begin{split}\mathcal{B}_{r}^{(nre)}&=\Big{\\{}\\{0,\mathcal{T}_{\overline{Q}_{n+1}^{\frac{1}{2}}}g(\theta)\\}:g\in
U_{r}(\mathbb{T},\mathbb{C})\Big{\\}},\\\
\mathcal{B}_{r}^{(re)}&=\Big{\\{}\\{f(\theta),\mathcal{R}_{\overline{Q}_{n+1}^{\frac{1}{2}}}g(\theta)\\}:g\in
U_{r}(\mathbb{T},\mathbb{C}),\ f\in
U_{r}(\mathbb{T},\mathbb{R})\Big{\\}}.\end{split}$
It follows that
$U_{r}(\mathbb{T},su(1,1))=\mathcal{B}_{r}^{(nre)}\oplus\mathcal{B}_{r}^{(re)}.$
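Concretely, this splitting can be written out as follows. This is a sketch; we assume, as in Section 3, that $\mathcal{T}_{N}$ denotes truncation to the Fourier modes $|k|<N$ and $\mathcal{R}_{N}=\mathrm{Id}-\mathcal{T}_{N}$:

```latex
% For F = {f(\theta), g(\theta)} \in U_r(\mathbb{T}, su(1,1)):
\[
\{f,\,g\}
 \;=\; \underbrace{\{0,\ \mathcal{T}_{N}\,g\}}_{\in\,\mathcal{B}_{r}^{(nre)}}
 \;+\; \underbrace{\{f,\ \mathcal{R}_{N}\,g\}}_{\in\,\mathcal{B}_{r}^{(re)}},
\qquad N=\overline{Q}_{n+1}^{\frac{1}{2}}.
\]
% The sum is direct because \mathcal{T}_N g and \mathcal{R}_N g have disjoint
% Fourier supports, so the two spaces intersect only in 0.
```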
In order to prove Lemma 4.3, we will need the following lemma:
###### Lemma 4.5.
Assume that
$A=\mathrm{diag}\\{\mathrm{e}^{-2\pi\mathrm{i}\rho},\mathrm{e}^{2\pi\mathrm{i}\rho}\\}$
and $g\in U_{r}(\mathbb{T},su(1,1))$. If $\rho\in DC_{\alpha}(\gamma,\tau),$
and
$\|g\|_{r}\leq 8^{-2}\gamma^{2}Q_{n+1}^{-2\tau^{2}},\ n\geq 0,$
then there exist $Y\in\mathcal{B}_{r}^{(nre)}$ and
$g^{(re)}\in\mathcal{B}_{r}^{(re)}$ such that
$\begin{split}\mathrm{e}^{Y(\cdot+\alpha)}A\mathrm{e}^{g(\cdot)}\mathrm{e}^{-Y(\cdot)}=A\mathrm{e}^{g^{(re)}(\cdot)}\end{split}$
with $\|Y\|_{r}\leq\|g\|_{r}^{1/2}$ and $\|g^{(re)}\|_{r}\leq 2\|g\|_{r}.$
The proof of this lemma, which involves the homotopy method, is postponed to
the Appendix. Similar proofs appeared in [53, 23].
Proof of Lemma 4.3: Since $SL(2,{\mathbb{R}})$ is isomorphic to $SU(1,1)$,
instead of $(\alpha,\ R_{\rho_{f}}\mathrm{e}^{\widetilde{F}(\theta)})$, we
just consider $(\alpha,A\mathrm{e}^{W(\theta)}),$ where
$A=MR_{\rho_{f}}M^{-1}=\mathrm{diag}\\{\mathrm{e}^{-2\pi\mathrm{i}\rho_{f}},\mathrm{e}^{2\pi\mathrm{i}\rho_{f}}\\}\in
SU(1,1),W=M\widetilde{F}M^{-1}\in su(1,1).$
Since $\rho_{f}\in DC_{\alpha}(\gamma,\tau)$ and
$\|\widetilde{F}\|_{\overline{r}_{n}}\leq
8^{-2}\gamma^{2}Q_{n+1}^{-2\tau^{2}}$, by Lemma 4.5, there exist
$Y\in\mathcal{B}_{\overline{r}_{n}}^{(nre)}$ and
$W^{(re)}\in\mathcal{B}_{\overline{r}_{n}}^{(re)}$ such that $\mathrm{e}^{Y}$
conjugates $(\alpha,\ A\mathrm{e}^{W})$ to $(\alpha,\ A\mathrm{e}^{W^{(re)}})$
with
(4.19)
$\begin{split}\|Y\|_{\overline{r}_{n}}\leq\|\widetilde{F}\|_{\overline{r}_{n}}^{\frac{1}{2}},\
\|W^{(re)}\|_{\overline{r}_{n}}\leq
2\|\widetilde{F}\|_{\overline{r}_{n}}.\end{split}$
Denote
$W^{(re)}(\theta)=\\{\widetilde{f}(\theta),\mathcal{R}_{\overline{Q}_{n+1}^{\frac{1}{2}}}\widetilde{g}(\theta)\\}\in\mathcal{B}_{\overline{r}_{n}}^{(re)}.$
Thus by (4.19)
$\begin{split}\|\widetilde{f}(\theta)\|_{\overline{r}_{n}}\leq\|W^{(re)}(\theta)\|_{\overline{r}_{n}}\leq
2\|\widetilde{F}(\theta)\|_{\overline{r}_{n}}.\end{split}$
Noting that $\overline{Q}_{n+1}\geq\overline{Q}_{n}^{24}$ (by Lemma 2.2) and
$\overline{Q}_{n+1}\geq T\geq r_{0}^{-12}$ (by (4.4)), we get
$\begin{split}\overline{Q}_{n+1}^{\frac{1}{2}}>\overline{Q}_{n+1}^{\frac{1}{3}}>4^{-1}\overline{Q}_{n}^{4}r_{0}^{-2}=\overline{r}_{n}^{-2}\gg
1,\ n\geq 0,\end{split}$
which implies
(4.20)
$\begin{split}\overline{Q}_{n+1}^{\frac{1}{2}}\overline{r}_{n}>\overline{Q}_{n+1}^{\frac{1}{3}}\geq
T^{\frac{1}{3}}\geq\widetilde{T}\geq T_{1}.\end{split}$
Set
$P(\theta)=W^{(re)}(\theta)-\\{\mathcal{T}_{\overline{Q}_{n+1}}\widetilde{f}(\theta),0\\},$
thus $\mathcal{T}_{\overline{Q}_{n+1}^{\frac{1}{2}}}P(\theta)=0,$ then by
(3.6) in Lemma 3.4, we get
$\begin{split}\|P(\theta)\|_{\overline{r}_{n}/2}&\leq
C(\overline{Q}_{n+1}^{\frac{1}{2}}\overline{r}_{n}^{2})^{-1}\|P(\theta)\|_{\overline{r}_{n}}\exp\\{-9^{-1}\Gamma(4\overline{Q}_{n+1}^{\frac{1}{2}}\overline{r}_{n})\ln(4\overline{Q}_{n+1}^{\frac{1}{2}}\overline{r}_{n})\\}\\\
&\leq
12C\|\widetilde{F}(\theta)\|_{\overline{r}_{n}}\exp\\{-9^{-1}\Gamma(\overline{Q}_{n+1}^{\frac{1}{3}})\ln(\overline{Q}_{n+1}^{\frac{1}{3}})\\}\\\
&\leq(2C^{2})^{-1}\|\widetilde{F}(\theta)\|_{\overline{r}_{n}}\overline{Q}_{n+1}^{-\Gamma^{\frac{1}{2}}(\overline{Q}_{n+1}^{\frac{1}{3}})},\end{split}$
where the second inequality is by (4.20) and the fact that $\Gamma(x)\ln x$ is
non-decreasing, i.e., $\mathrm{(II)}$ in $\mathbf{(A)},$ and the last
inequality is by (4.2), that is
$\Gamma(\overline{Q}_{n+1}^{\frac{1}{3}})>64\mathbb{A}^{8}\tau^{4}.$
Note
$\begin{split}A\mathrm{e}^{W^{(re)}}=A\mathrm{e}^{\\{\mathcal{T}_{\overline{Q}_{n+1}}\widetilde{f},0\\}}E,\
E=\mathrm{e}^{-\\{\mathcal{T}_{\overline{Q}_{n+1}}\widetilde{f},0\\}}\mathrm{e}^{W^{(re)}}.\end{split}$
Then by Lemma 3.1 we have
$\begin{split}\|E-I\|_{\overline{r}_{n}/2}&\leq\mathrm{e}^{\|\mathcal{T}_{\overline{Q}_{n+1}}\widetilde{f}\|_{\overline{r}_{n}/2}}\|\mathrm{e}^{W^{(re)}}-\mathrm{e}^{\\{\mathcal{T}_{\overline{Q}_{n+1}}\widetilde{f},0\\}}\|_{\overline{r}_{n}/2}\\\
&=\mathrm{e}^{\|\mathcal{T}_{\overline{Q}_{n+1}}\widetilde{f}\|_{\overline{r}_{n}/2}}\|\mathrm{e}^{\\{\mathcal{T}_{\overline{Q}_{n+1}}\widetilde{f},0\\}+P}-\mathrm{e}^{\\{\mathcal{T}_{\overline{Q}_{n+1}}\widetilde{f},0\\}}\|_{\overline{r}_{n}/2}\\\
&\leq\mathrm{e}^{2\|\mathcal{T}_{\overline{Q}_{n+1}}\widetilde{f}\|_{\overline{r}_{n}/2}}\mathrm{e}^{\|P\|_{\overline{r}_{n}/2}}\|P\|_{\overline{r}_{n}/2}\\\
&\leq 2\|P\|_{\overline{r}_{n}/2}\leq
C^{-2}\|\widetilde{F}\|_{\overline{r}_{n}}\overline{Q}_{n+1}^{-\Gamma^{\frac{1}{2}}(\overline{Q}_{n+1}^{\frac{1}{3}})}.\end{split}$
Thus by implicit function theorem, there exists $\widetilde{G}\in
U_{\overline{r}_{n}/2}(\mathbb{T},su(1,1))$ such that
$E=\mathrm{e}^{\widetilde{G}}$ with
$\begin{split}\|\widetilde{G}\|_{\overline{r}_{n}/2}\leq\|E-I\|_{\overline{r}_{n}/2}<C^{-2}\|\widetilde{F}(\theta)\|_{\overline{r}_{n}}\overline{Q}_{n+1}^{-\Gamma^{\frac{1}{2}}(\overline{Q}_{n+1}^{\frac{1}{3}})}.\end{split}$
Now we go back to $SL(2,\mathbb{R}).$ Let $\Psi_{n}=\mathrm{e}^{M^{-1}YM}.$
Then $\|\Psi_{n}-I\|_{\overline{r}_{n}}\leq\|Y\|_{\overline{r}_{n}}.$
Moreover, $\Psi_{n}$ conjugates the cocycle $(\alpha,\
R_{\rho_{f}}\mathrm{e}^{\widetilde{F}(\theta)})$ to (4.17) with
$G=M^{-1}\widetilde{G}M,\widetilde{g}_{n}(\theta)=-\mathcal{T}_{\overline{Q}_{n+1}}\widetilde{f}(\theta).$
Obviously, $\mathcal{R}_{\overline{Q}_{n+1}}\widetilde{g}_{n}=0.$∎
To ensure that the composition of the conjugations is close to the identity, we
perform one more conjugation, which is the inverse of the transformation in Lemma 4.1:
###### Lemma 4.6.
Assume that $v_{n}$ is the one defined in Lemma 4.1. Then for any $n\geq 1$,
$(0,\mathrm{e}^{v_{n}(\theta)J})$ further conjugates the cocycle
$\begin{split}(\alpha,\
R_{\rho_{f}+(2\pi)^{-1}\widetilde{g}_{n}}\mathrm{e}^{G(\theta)})\in\mathcal{F}_{r_{n+1}}(\rho_{f},C\varepsilon_{n},C^{-1}\varepsilon_{n+1}),\end{split}$
with $\mathcal{R}_{\overline{Q}_{n+1}}\widetilde{g}_{n}=0$, to the cocycle
$\begin{split}(\alpha,R_{\rho_{f}+(2\pi)^{-1}g_{n+1}}\mathrm{e}^{F_{n+1}})\in\mathcal{F}_{r_{n+1}}(\rho_{f},\widetilde{\varepsilon}_{n+1},\
\varepsilon_{n+1})\end{split}$
with $\mathcal{R}_{\overline{Q}_{n+1}}g_{n+1}=0.$
###### Proof.
Since $v_{n}$ is the solution of
$v_{n}(\theta+\alpha)-v_{n}(\theta)=-g_{n}(\theta)+\widehat{g}_{n}(0),$ then
$(0,\mathrm{e}^{v_{n}(\theta)J})$ conjugates the cocycle $(\alpha,\
R_{\rho_{f}+(2\pi)^{-1}\widetilde{g}_{n}}\mathrm{e}^{G(\theta)})$ to
$\begin{split}(\alpha,R_{\rho_{f}+(2\pi)^{-1}(\widetilde{g}_{n}+g_{n}-\widehat{g}_{n}(0))}\mathrm{e}^{F_{n+1}(\theta)}),\end{split}$
where $F_{n+1}=\mathrm{e}^{v_{n}(\theta)J}G\mathrm{e}^{-v_{n}(\theta)J}$. Let
$g_{n+1}=\widetilde{g}_{n}+g_{n}-\widehat{g}_{n}(0),$ then by Lemma 3.1 and
Lemma 4.1, we have estimates
$\begin{split}\|g_{n+1}\|_{r_{n+1}}&\leq\|\widetilde{g}_{n}\|_{r_{n+1}}+\|g_{n}-\widehat{g}_{n}(0)\|_{r_{n}}\leq
C\varepsilon_{n}+\widetilde{\varepsilon}_{n}=\widetilde{\varepsilon}_{n+1},\\\
\|F_{n+1}\|_{r_{n+1}}&\leq\|\mathrm{e}^{v_{n}J}\|_{r_{n+1}}^{2}\|G\|_{r_{n+1}}\leq\varepsilon_{n+1}.\end{split}$
Obviously, $\mathcal{R}_{\overline{Q}_{n+1}}g_{n+1}=0.$ Moreover, the fibered
rotation number does not change since $\mathrm{e}^{v_{n}J}$ is homotopic to
the identity. ∎
Now we are in a position to prove Proposition 2. First, by Lemma 4.1,
$(0,\mathrm{e}^{-v_{n}J})$ conjugates the cocycle (4.7) to
$(\alpha,\
R_{\rho_{f}}\mathrm{e}^{\widetilde{F}_{n}(\theta)})\in\mathcal{F}_{\overline{r}_{n}}(\rho_{f},0,C\varepsilon_{n}).$
Moreover, by our definition of $\varepsilon_{n}$, one can easily check that
$C\varepsilon_{n}=C\varepsilon_{n-1}\overline{Q}_{n}^{-\Gamma^{\frac{1}{2}}(\overline{Q}_{n}^{\frac{1}{3}})}\leq
C\overline{Q}_{n}^{-8\mathbb{A}^{4}\tau^{2}}\leq
8^{-2}\gamma^{2}Q_{n+1}^{-2\tau^{2}},\ n\geq 1,$
the last inequality holds since, by Lemma 2.1,
$\overline{Q}_{n}^{\mathbb{A}^{4}}\geq Q_{n+1}$ and $\overline{Q}_{n}\geq
T\geq(4\gamma^{-1})^{2\tau},\ n\geq 1.$ That is, (4.16) holds with
$C\varepsilon_{n}$ in place of $\|\widetilde{F}_{n}\|_{\overline{r}_{n}}.$
Then by the assumption $\rho_{f}\in DC_{\alpha}(\gamma,\tau)$, one can apply
Lemma 4.3, and there exists $\Psi_{n}\in
U_{r_{n+1}}(\mathbb{T},SL(2,\mathbb{R}))$ with
$\|\Psi_{n}-I\|_{r_{n+1}}\leq C\varepsilon_{n}^{\frac{1}{2}},$
which further conjugates the obtained cocycle into
$\begin{split}(\alpha,\
R_{\rho_{f}+(2\pi)^{-1}\widetilde{g}_{n}}\mathrm{e}^{G(\theta)})&\in\mathcal{F}_{r_{n+1}}(\rho_{f},2C\varepsilon_{n},C^{-2}C\varepsilon_{n}\overline{Q}_{n+1}^{-\Gamma^{\frac{1}{2}}(\overline{Q}_{n+1}^{\frac{1}{3}})})\\\
&=\mathcal{F}_{r_{n+1}}(\rho_{f},C\varepsilon_{n},C^{-1}\varepsilon_{n+1}).\end{split}$
Finally, by Lemma 4.6, $(0,\mathrm{e}^{v_{n}(\theta)J})$ further conjugates
the cocycle above to (4.8) with desired estimates. Let
$\Phi_{n}=\mathrm{e}^{v_{n}(\theta)J}\Psi_{n}\mathrm{e}^{-v_{n}(\theta)J},$
then by Lemma 3.1 and Lemma 4.1, we have
$\begin{split}\|\Phi_{n}-I\|_{r_{n+1}}&=\|\mathrm{e}^{v_{n}(\theta)J}(\Psi_{n}-I)\mathrm{e}^{-v_{n}(\theta)J}\|_{r_{n+1}}\\\
&\leq\|\mathrm{e}^{v_{n}(\theta)J}\|_{\overline{r}_{n}}^{2}\|\Psi_{n}-I\|_{r_{n+1}}<C\varepsilon_{n}^{\frac{1}{2}},\end{split}$
which finishes the whole proof.∎
## 5\. Proof of Theorem 1.1 and Theorem 1.2
### 5.1. Proof of Theorem 1.2
Set $A=R_{\varrho}\mathrm{e}^{F}.$ By the assumption $\rho(\alpha,\
R_{\varrho}\mathrm{e}^{F})=\rho_{f}$ and (2.1) one has $|\rho_{f}-\varrho|\leq
2\|F\|_{C^{0}},$ thus one can rewrite $(\alpha,R_{\varrho}\mathrm{e}^{F})$ as
$(\alpha,R_{\rho_{f}}\mathrm{e}^{\widetilde{F}})$ with
$\|\widetilde{F}\|_{r}\leq C\|F\|_{r}.$ Set
$\varepsilon_{*}:=C^{-1}\varepsilon_{0},$ where $\varepsilon_{0}$ is the one
defined by (4.5).
Set $\overline{r}_{0}=r.$ By the selection of $\varepsilon_{0}$ and $Q_{1}\leq
T^{\mathbb{A}^{4}},\ T\geq(4\gamma^{-1})^{2\tau},$ we get
$\begin{split}\|\widetilde{F}\|_{\overline{r}_{0}}\leq
C\varepsilon_{*}=\varepsilon_{0}=T^{-8\mathbb{A}^{4}\tau^{2}}\leq
8^{-2}\gamma^{2}Q_{1}^{-2\tau^{2}}.\end{split}$
Since we further assume $\rho_{f}\in DC_{\alpha}(\gamma,\tau)$, one can apply
Lemma 4.3, then there exists $\Psi_{0}\in
U_{r_{1}}(\mathbb{T},SL(2,\mathbb{R}))$ with
$\displaystyle\|\Psi_{0}-I\|_{r_{1}}\leq C\varepsilon_{0}^{\frac{1}{2}},$
which conjugates the cocycle $(\alpha,R_{\rho_{f}}\mathrm{e}^{\widetilde{F}})$
into
$\begin{split}(\alpha,\
R_{\rho_{f}+(2\pi)^{-1}\widetilde{g}_{0}}\mathrm{e}^{G_{0}})\in\mathcal{F}_{r_{1}}(\rho_{f},\widetilde{\varepsilon}_{1},\varepsilon_{1}).\end{split}$
We emphasize that in the first iteration step, we only apply Lemma 4.3,
without applying Proposition 2, which is quite different from the remaining steps.
Now we set $\widetilde{g}_{0}=g_{1},\ G_{0}=F_{1},$ and $\Phi_{0}=\Psi_{0}.$
Then one can apply Proposition 2 inductively, and get a sequence of
transformations $\\{\Phi_{n}\\}_{n\geq 0}$ with estimate
$\|\Phi_{n}-I\|_{r_{n+1}}\leq C\varepsilon_{n}^{\frac{1}{2}}.$ Furthermore,
let
$\begin{split}\Phi^{(n)}=\Phi_{n-1}\circ\Phi_{n-2}\circ\cdots\circ\Phi_{0},\
\Phi=\lim_{n\rightarrow\infty}\Phi^{(n)},\end{split}$
then $\Phi^{(n)}$ conjugates the original cocycle $(\alpha,\
R_{\rho_{f}}\mathrm{e}^{\widetilde{F}})$ to $(\alpha,\
R_{\rho_{f}+(2\pi)^{-1}g_{n}}\mathrm{e}^{F_{n}(\theta)})$.
Finally, let us show the convergence of $\Phi^{(n)}$; we will show that its
limit $\Phi$ belongs to
$C^{\infty}({\mathbb{T}},SL(2,{\mathbb{R}}))$. Indeed, by the definition of
$\|\cdot\|_{r}-$norm we have
(5.1) $\begin{split}\|D_{\theta}^{j}f\|_{C^{0}}\leq\|f\|_{r}r^{-j}M_{j},\
\forall f\in U_{r}(\mathbb{T},SL(2,\mathbb{R})),\end{split}$
and by $\mathrm{(I)}$ of $\mathbf{(A)}$, for any $j\in{\mathbb{N}}$, there
exists $n_{j}\in\mathbb{N},$ such that for any $n\geq n_{j}$, we have
$CM_{j}\leq\overline{Q}_{n}^{j},$ and
$\Gamma^{\frac{1}{2}}(\overline{Q}_{n}^{\frac{1}{3}})\geq 24j$. By (4.9) and
standard computation, we get $\|\Phi^{(n+1)}-\Phi^{(n)}\|_{r_{n+1}}\leq
C\varepsilon_{n}^{\frac{1}{2}},$ then by (5.1) we can further compute
$\begin{split}\big{\|}D^{j}(\Phi^{(n+1)}-\Phi^{(n)})\big{\|}_{C^{0}}&\leq\|\Phi^{(n+1)}-\Phi^{(n)}\|_{r_{n+1}}M_{j}r_{n+1}^{-j}\leq
C\varepsilon_{n}^{\frac{1}{2}}M_{j}r_{n+1}^{-j}\\\
&=\overline{Q}_{n}^{-\frac{1}{2}\Gamma^{\frac{1}{2}}(\overline{Q}_{n}^{\frac{1}{3}})}\varepsilon_{n-1}^{\frac{1}{2}}CM_{j}\overline{Q}_{n}^{2j}r_{0}^{-j}\\\
&<\overline{Q}_{n}^{-\frac{1}{2}\Gamma^{\frac{1}{2}}(\overline{Q}_{n}^{\frac{1}{3}})}\varepsilon_{n-1}^{\frac{1}{2}}\overline{Q}_{n}^{4j}<\overline{Q}_{n}^{-\frac{1}{3}\Gamma^{\frac{1}{2}}(\overline{Q}_{n}^{\frac{1}{3}})}\varepsilon_{n-1}^{\frac{1}{3}}=\varepsilon_{n}^{\frac{1}{3}},\end{split}$
which means $\Phi\in C^{\infty}({\mathbb{T}},SL(2,{\mathbb{R}}))$. Let
$g_{\infty}=\lim_{n\rightarrow\infty}g_{n}$, then $\Phi$ conjugates the
cocycle $(\alpha,R_{\rho_{f}}\mathrm{e}^{\widetilde{F}})$ to
$(\alpha,R_{\rho_{f}+(2\pi)^{-1}g_{\infty}})$, where $g_{\infty}\in
C^{\infty}({\mathbb{T}},{\mathbb{R}})$. ∎
Note that the proof of Proposition 2 is separated into three steps; if we
carry out only the first two steps, we obtain local almost reducibility.
###### Corollary 2.
Under the assumptions of Theorem 1.2, there exists a sequence of $B_{\ell}\in
U_{r_{\ell}}({\mathbb{T}},SL(2,{\mathbb{R}}))$ transforming $(\alpha,A)$ into
$(\alpha,R_{\rho_{f}}\mathrm{e}^{F_{\ell}})$ with estimates
(5.2) $\|B_{\ell}\|_{r_{\ell}}\leq C,\
\|F_{\ell}\|_{r_{\ell}}\leq\varepsilon_{\ell}.$
###### Proof.
Set $B_{\ell}=\Psi_{\ell-1}\mathrm{e}^{v_{\ell-1}J}\Phi^{(\ell-1)},$ where
$\Psi_{\ell-1},v_{\ell-1}$ are the ones in Section 4 and $\Phi^{(\ell-1)}$ is
the one defined above with $\ell-1$ in place of $n.$ Obviously, $B_{\ell}$
transforms $(\alpha,A)$ into $(\alpha,R_{\rho_{f}}\mathrm{e}^{F_{\ell}}),$
and the estimates of $B_{\ell}$ and $F_{\ell}$ follow from the estimates of
$\Psi_{\ell-1},v_{\ell-1}$ and $\Phi^{(\ell-1)}.$ ∎
### 5.2. Proof of Theorem 1.1
The proof of Theorem 1.1 relies on the renormalization theory of one-frequency
quasiperiodic $SL(2,{\mathbb{R}})$ cocycles. Recall for any $0<\gamma<1$ and
$\tau>1,$ $DC_{\alpha}(\gamma,\tau)$ denotes the set of all $\rho$ such that
$\displaystyle\|k\alpha\pm 2\rho\|_{\mathbb{Z}}\geq\gamma\langle
k\rangle^{-\tau},\ \langle k\rangle=\max\\{1,|k|\\},\ \forall k\in\mathbb{Z}.$
Let $\mathcal{P}\subset[0,1/2)$ be the set of all $\rho$ such that there exist
$0<\gamma<1$ and $\tau>1$ with $\rho\beta_{n-1}^{-1}\in
DC_{\alpha_{n}}(\gamma,\tau)$ for infinitely many $n.$ By the Borel–Cantelli
lemma, $\mathcal{P}$ has full measure in $[0,1/2)$. We will fix the sequence
$\\{n_{j}\\}_{j\in\mathbb{N}}$ such that $\beta_{n_{j}-1}^{-1}\rho_{f}\in
DC_{\alpha_{n_{j}}}(\gamma,\tau).$
We also recall the following well-known theorem from Kotani theory [38].
###### Theorem 5.1 ([38]).
Let $\widetilde{\mathcal{P}}\subset[0,1/2)$ be any full measure subset. For
every $V\in C^{\infty}({\mathbb{T}},{\mathbb{R}})$, for almost every
$E\in{\mathbb{R}}$, we have
* •
either $(\alpha,S_{E}^{V})$ has a positive Lyapunov exponent, or
* •
$(\alpha,S_{E}^{V})$ is $L^{2}$-conjugated to an
$\mathrm{SO}(2,{\mathbb{R}})$-valued cocycle and the fibered rotation number
of $(\alpha,S_{E}^{V})$ belongs to $\widetilde{\mathcal{P}}.$
We start from $(\alpha,S_{E}^{V})$ which can be $L^{2}$-conjugated to an
$\mathrm{SO}(2,{\mathbb{R}})$-valued cocycle. By definition of $\mathcal{P}$,
if $\rho(\alpha,S_{E}^{V})=\rho_{f}$ belongs to $\mathcal{P}$, we can find
$0<\gamma<1$ and $\tau>1$, and arbitrary large $j>0$, such that
$\rho_{f}\beta_{n_{j}-1}^{-1}\in DC_{\alpha_{n_{j}}}(\gamma,\tau)$. Now
Proposition 1 ensures that $\|F_{n_{j}}\|_{rK_{*}^{-2}}\rightarrow 0$, then we
can further choose $j$ large enough, such that
$\|F_{n_{j}}\|_{rK_{*}^{-2}}\leq\varepsilon_{*}(\gamma,\tau,rK_{*}^{-2},M),$
where $\varepsilon_{*}=\varepsilon_{*}(\gamma,\tau,r,M)>0$ is the one in
Theorem 1.2. Since $(-1)^{n_{j}}\rho_{f}\beta_{n_{j}-1}^{-1}$ is just the
rotation number of $(\alpha_{n_{j}},R_{\rho_{n_{j}}}\mathrm{e}^{F_{n_{j}}})$,
by Theorem 1.2 we know that
$(\alpha_{n_{j}},R_{\rho_{n_{j}}}\mathrm{e}^{F_{n_{j}}})$ is $C^{\infty}$
rotations reducible. Note that if
$(\alpha_{n_{j}},R_{\rho_{n_{j}}}\mathrm{e}^{F_{n_{j}}})$ is rotations
reducible (or reducible), then $(\alpha,S_{E}^{V})$ is rotations reducible
(or reducible) in the same regularity class (consult Proposition 4.2 of [6],
for example); Theorem 1.1 then follows directly. ∎
## 6\. Last’s intersection spectrum conjecture
Consider the Schrödinger operator $H_{V,\beta,\theta}$ defined by (1.1) with
ultra-differentiable potential $V\in U_{r}(\mathbb{T},\mathbb{R}),$ frequency
$\beta\in\mathbb{T}$ and phase $\theta\in\mathbb{T}.$ For fixed $\theta,$
denote by $\sigma(\beta,\theta)$ and $\sigma_{ac}(\beta,\theta)$ the spectrum
of $H_{V,\beta,\theta}$ and its absolutely continuous (ac)-component,
respectively. It is well known that in the case $\beta=p/q,$
$\sigma(p/q,\theta)$ is purely absolutely continuous and consists of $q,$
possibly touching, bands. Moreover, in the case $\beta=\alpha$ is irrational,
the spectrum and ac spectrum do not depend on $\theta:$
$\displaystyle\sigma(\alpha,\theta)=:\Sigma(\alpha),\
\sigma_{ac}(\alpha,\theta)=:\Sigma_{ac}(\alpha),\ \forall\theta\in\mathbb{T}.$
In order to treat rational and irrational frequencies on the same footing,
similar to Avron et al. [10], given $\beta\in\mathbb{T},$ we introduce the
sets
$\displaystyle
S_{+}(\beta):=\bigcup_{\theta\in\mathbb{T}}\sigma(\beta,\theta)=\Sigma(\beta),$
and
$\displaystyle
S_{-}(\beta):=\bigcap_{\theta\in\mathbb{T}}\sigma_{ac}(\beta,\theta)=\Sigma_{ac}(\beta).$
Note that it was proved in [35] that
$S_{+}(\alpha)=\Sigma(\alpha)=\lim\limits_{n\rightarrow\infty}S_{+}(p_{n}/q_{n}).$
Theorem 1.3 follows immediately from the following Theorem 6.1 and Theorem
6.2, while the key arguments are the “generalized Chambers’ formula” (Proposition
3) and the continuity of the Lyapunov exponent (Theorem 6.3).
### 6.1. Generalized Chambers’ formula
###### Theorem 6.1.
Let $\alpha\in{\mathbb{R}}\setminus\mathbb{Q}$,
$V:\mathbb{T}\rightarrow{\mathbb{R}}$ be an M-ultra-differentiable function
satisfying $\mathbf{(H1)}$ and $\mathbf{(H2)}$, then we have
$\displaystyle
S_{-}(\alpha)=\Sigma_{ac}(\alpha)\subset\liminf_{n\rightarrow\infty}S_{-}(p_{n}/q_{n}).$
The proof of Theorem 6.1 depends on the following generalized Chambers’
formula. To state this, recall that for each $\theta\in\mathbb{T},$
$H_{V,p/q,\theta}$ is a periodic operator whose spectrum,
$\sigma(p/q,\theta),$ is given in terms of the discriminant by
$\displaystyle\sigma(p/q,\theta)=t_{p/q}(\cdot,\theta)^{-1}[-2,2],$
where
(6.1) $\displaystyle
t_{p/q}(E,\theta)=\mathrm{tr}\\{\Pi_{s=q-1}^{0}S^{V}_{E}(\theta+sp/q)\\},$
which is called the discriminant of $H_{V,p/q,\theta}$; here
“$\mathrm{tr}$” stands for the trace. In general, this discriminant is a
polynomial of degree $q$ in $E$ and is $q^{-1}$-periodic in $\theta$, whence one
may write
(6.2) $\displaystyle
t_{p/q}(E,\theta)=\sum_{k\in\mathbb{Z}}a_{q,k}(E)\mathrm{e}^{2\pi\mathrm{i}qk\theta}.$
For the almost Mathieu operator, the potential $V=2\lambda\cos 2\pi\theta$ is
in fact a trigonometric polynomial of degree 1. Thus in the formula (6.2) only
the Fourier coefficients with $k=0,\pm 1$ survive, resulting in the celebrated
Chambers’ formula [21, 18, 15]
$\displaystyle t_{p/q}(E,\theta)=a_{q,0}(E)+2\lambda^{q}\cos(2\pi q\theta).$
Note that the classical Chambers’ formula holds for any $\lambda$. In
particular, it shows that the phase variations of the discriminant for the
subcritical almost Mathieu operator (which thus has absolutely continuous
spectrum) are exponentially small in $q$. Now for any $C^{\infty}$ potential
$V:\mathbb{T}\rightarrow{\mathbb{R}}$ satisfying $\mathbf{(H1)}$ and
$\mathbf{(H2)}$, and $E\in\Sigma_{ac}(\alpha)$, we will show that the
difference between the discriminants $t_{p/q}(E,\theta)$ of the rational
approximants of $\alpha$ and their phase-average $a_{q,0}(E)$ is in fact
sub-exponentially small in $q$:
###### Proposition 3.
Let $\alpha\in{\mathbb{R}}\setminus\mathbb{Q}$,
$V:\mathbb{T}\rightarrow{\mathbb{R}}$ be an M-ultra-differentiable function
satisfying $\mathbf{(H1)}$ and $\mathbf{(H2)}$, then for almost every
$E\in\Sigma_{ac}(\alpha)$, there exist $n_{*}=n(V,\alpha,E)\in\mathbb{N}$,
$c=c(E)$ such that
(6.3) $\displaystyle\|t_{p_{n}/q_{n}}(E,\theta)-a_{q_{n},0}(E)\|_{C^{0}}\leq
4\exp\\{-\Lambda(cq_{n})\\}$
whenever $n\geq n_{*}.$
###### Remark 6.1.
Indeed, one can select $c(E)$, such that $cq_{n}>q_{n}^{\frac{3}{4}}.$
If $V$ is analytic, Jitomirskaya–Marx (Proposition 3.1 in [35]) proved that
$\|t_{p_{n}/q_{n}}(E,\theta)-a_{q_{n},0}(E)\|_{C^{0}}$ is exponentially small
in $q$. Their proof depends on Avila’s quantization of acceleration [8], a key
ingredient of his global theory, while our proof is a perturbation argument,
completely different from theirs.
Proof of Theorem 6.1. We will first prove Theorem 6.1 assuming Proposition 3
and postpone the proof of Proposition 3 to Section 7. We point out that the ideas
of the proof were essentially given by Avila and sketched in [35]. We give the
full proof here for completeness. Let $\mathcal{K}\subset[0,1/2)$ be the set
of all $\rho$ such that
$\inf_{p\in{\mathbb{Z}}}|q_{n}\rho-p|\geq n^{-2},\quad\text{eventually.}$
A simple Borel–Cantelli argument shows $|\mathcal{K}|=1/2$. For any
$\beta\in{\mathbb{T}}$, we denote by $N(\beta,E)$ the integrated density of
states (IDS). Note that the set $\mathcal{P}\subset[0,1/2)$ defined in Section
5.2 also has full measure, i.e., $|\mathcal{P}|=1/2$; thus
$\mathcal{P}\doteq\mathcal{K}.$ Moreover, Theorem 1.1 actually implies that
for almost every $E\in\Sigma_{ac}(\alpha)\doteq\mathcal{P}$, the cocycle
$(\alpha,S_{E}^{V})$ is rotations reducible. Thus by Theorem 6.1 in [3], if
$E\in\Sigma_{ac}(\alpha)\doteq\mathcal{P}\doteq\mathcal{K}$, $N(\beta,E)$ is
Lipschitz in $\alpha,$ i.e., there exists some $\Gamma(E)$ such that
(6.4) $|N(\alpha,E)-N(p_{n}/q_{n},E)|<q_{n}^{-2}\Gamma(E).$
Since $N(\alpha,E)\in\mathcal{K}$, then by (6.4), for $n$ sufficiently large,
we have
(6.5) $p-1+\frac{1}{2q_{n}}<q_{n}N(p_{n}/q_{n},E)<p-\frac{1}{2q_{n}},$
for some $1\leq p\leq q_{n}$. On the other hand, it was calculated in [1] that
if $E$ belongs to the $k$-th band of $S_{+}(p_{n}/q_{n})$, we have
(6.6)
$q_{n}N\big{(}p_{n}/q_{n},E\big{)}=k-1+2(-1)^{q_{n}+k-1}\int_{\mathbb{T}}\rho\big{(}p_{n}/q_{n},E,\theta\big{)}d\theta+\frac{1-(-1)^{q_{n}-k+1}}{2},$
where
(6.7) $\rho\Big{(}p_{n}/q_{n},E,\theta\Big{)}=\left\\{\begin{aligned}
&0&t_{p_{n}/q_{n}}(E,\theta)>2,\\\
&(2\pi)^{-1}\arccos(2^{-1}t_{p_{n}/q_{n}}(E,\theta))&|t_{p_{n}/q_{n}}(E,\theta)|\leq
2,\\\ &1/2&t_{p_{n}/q_{n}}(E,\theta)<-2.\end{aligned}\right.$
Then (6.5) and (6.6) imply that
(6.8)
$2\big{|}\cos\big{(}2\pi\int_{\mathbb{T}}\rho\big{(}p_{n}/q_{n},E,\theta\big{)}d\theta\big{)}\big{|}<2-\frac{1}{q_{n}^{2}}.$
Since $\rho\big{(}p_{n}/q_{n},E,\theta\big{)}$ is continuous in $\theta$,
(6.8) and (6.7) imply that there exists $\tilde{\theta}\in{\mathbb{T}}$, such
that
$|t_{p_{n}/q_{n}}(E,\tilde{\theta})|=2\big{|}\cos\big{(}2\pi\rho(p_{n}/q_{n},E,\tilde{\theta})\big{)}\big{|}<2-\frac{1}{q_{n}^{2}}.$
Then by (6.3) in Proposition 3 we have
$\displaystyle\left|a_{q_{n},0}(E)\right|\leq
2-\frac{1}{q_{n}^{2}}+4\exp\\{-\Lambda(cq_{n})\\}\leq 2-\frac{1}{2q_{n}^{2}}.$
By (6.3) again, for any $\theta\in{\mathbb{T}}$, we have
$\left|t_{p_{n}/q_{n}}(E,\theta)\right|\leq 2,$ which means $E\in
S_{-}(p_{n}/q_{n})$.∎
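The final step of the proof above combines (6.3) with the bound on $a_{q_{n},0}(E)$; sketched out, for $n$ large enough that $4\exp\{-\Lambda(cq_{n})\}\leq(2q_{n}^{2})^{-1}$ (as already used to bound $a_{q_{n},0}(E)$):

```latex
% Uniformly in \theta \in \mathbb{T}:
\[
\big|t_{p_{n}/q_{n}}(E,\theta)\big|
 \;\leq\; \big|a_{q_{n},0}(E)\big| + 4\exp\{-\Lambda(cq_{n})\}
 \;\leq\; 2-\frac{1}{2q_{n}^{2}} + \frac{1}{2q_{n}^{2}} \;=\; 2.
\]
```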
### 6.2. Continuity of the Lyapunov exponent
Theorem 6.1 proves
$\Sigma_{ac}(\alpha)\subset\liminf_{n\rightarrow\infty}S_{-}(p_{n}/q_{n})$ for
any M-ultra-differentiable potential satisfying $\mathbf{(H1)}$ and
$\mathbf{(H2)};$ however, when we come to the reverse inclusion, we can only
prove the result for $\nu$-Gevrey potentials with $1/2<\nu<1.$ It would be
interesting to extend the conclusion below to cocycles with ultra-differentiable
potentials, or even with $C^{\infty}$ potentials.
###### Theorem 6.2.
Let $V:\mathbb{T}\rightarrow{\mathbb{R}}$ be a $\nu$-Gevrey function with
$1/2<\nu<1$, and assume that $\alpha\in{\mathbb{R}}\backslash{\mathbb{Q}}$.
Then there is a sequence $p_{n}/q_{n}\rightarrow\alpha$, such that
$\displaystyle\limsup_{n\rightarrow\infty}S_{-}(p_{n}/q_{n})\subset
S_{-}(\alpha)=\Sigma_{ac}(\alpha).$
###### Remark 6.2.
The sequence $p_{n}/q_{n}$ will be the full sequence of continued fraction
approximants in the case that $\alpha$ is Diophantine, and an appropriate
subsequence of it otherwise. For the practical purpose of drawing conclusions
about $S_{-}(\alpha)$ from the information on $S_{-}(p_{n}/q_{n})$,
convergence along a subsequence is sufficient. In the latter case, moreover,
the potential can be any stationary bounded ergodic one.
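As a concrete illustration of the continued fraction approximants $p_{n}/q_{n}$ used throughout this section, the following numerical sketch (not part of the proof; the helper `convergents` and the choice of the golden mean are ours) computes the convergents via the standard recurrences and checks the best-approximation bound $|\alpha-p_{n}/q_{n}|<1/q_{n}^{2}$:

```python
from fractions import Fraction
import math

def convergents(alpha, n_terms=12):
    """Continued-fraction convergents p_n/q_n of alpha, via the usual
    recurrences p_n = a_n p_{n-1} + p_{n-2}, q_n = a_n q_{n-1} + q_{n-2}."""
    a, x = [], alpha
    for _ in range(n_terms):
        an = math.floor(x)
        a.append(an)
        x = 1.0 / (x - an)
    p_prev, p = 1, a[0]
    q_prev, q = 0, 1
    convs = [Fraction(p, q)]
    for an in a[1:]:
        p, p_prev = an * p + p_prev, p
        q, q_prev = an * q + q_prev, q
        convs.append(Fraction(p, q))
    return convs

golden = (1 + 5 ** 0.5) / 2   # all partial quotients equal 1
cs = convergents(golden)
for c in cs:
    # best-approximation property of convergents: |alpha - p_n/q_n| < 1/q_n^2
    assert abs(golden - c.numerator / c.denominator) < 1.0 / c.denominator ** 2
```

For the golden mean the convergents are ratios of consecutive Fibonacci numbers, the prototypical Diophantine case; for a Liouvillean $\alpha$ the denominators $q_{n}$ instead grow super-exponentially along the same recurrences.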
The proof of Theorem 6.2 depends on the continuity of the Lyapunov exponent
for the more general Gevrey cocycles. For a Gevrey (possibly matrix valued)
function $f$, we let
$\displaystyle\|f\|_{\nu,r}=\sum_{k\in\mathbb{Z}}|\widehat{f}(k)|\mathrm{e}^{|2\pi
k|^{\nu}r},\ 0<\nu<1.$
We denote by $G_{r}^{\nu}({\mathbb{T}},*)$ the set of all such $*$-valued
functions (where $*$ will usually denote ${\mathbb{R}}$ or
$SL(2,{\mathbb{R}})$). If we set $r=\widetilde{r}^{\nu},$ we get
$\|f\|_{\nu,r}=\|f\|_{\Lambda_{\nu},\widetilde{r}},$ where the
$\|\cdot\|_{\Lambda_{\nu},r}-$norm is the one defined by (3.9) with
$\Lambda_{\nu}(x)=x^{\nu}.$ To simplify notation, we will work with
$\|\cdot\|_{\nu,r}.$ Note that the function $\Lambda_{\nu}(x)=x^{\nu},0<\nu<1,$
is subadditive, so $G_{r}^{\nu}({\mathbb{T}},*),0<\nu<1,$ is a
Banach algebra.
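The Banach algebra property can be checked numerically on trigonometric polynomials: by subadditivity of $x^{\nu}$, the weight satisfies $\mathrm{e}^{|2\pi(k+j)|^{\nu}r}\leq\mathrm{e}^{|2\pi k|^{\nu}r}\mathrm{e}^{|2\pi j|^{\nu}r}$, so the norm is submultiplicative. In the sketch below (the helper names and the sample coefficients are ours, purely for illustration) a trigonometric polynomial is encoded by its Fourier coefficients:

```python
import math

NU, R = 0.75, 0.5   # illustrative Gevrey exponent nu in (0,1) and radius r

def gevrey_norm(fhat):
    """||f||_{nu,r} = sum_k |f^(k)| exp(|2 pi k|^nu r) for a trig polynomial
    given as a dict k -> Fourier coefficient."""
    return sum(abs(c) * math.exp((2 * math.pi * abs(k)) ** NU * R)
               for k, c in fhat.items())

def product(fhat, ghat):
    """Fourier coefficients of f*g: the convolution of the two sequences."""
    out = {}
    for k, a in fhat.items():
        for j, b in ghat.items():
            out[k + j] = out.get(k + j, 0.0) + a * b
    return out

f = {0: 1.0, 1: 0.3, -1: 0.3, 2: 0.1}
g = {0: 0.5, 3: 0.2, -2: 0.4}
# submultiplicativity ||fg|| <= ||f|| ||g||: the Banach algebra property
assert gevrey_norm(product(f, g)) <= gevrey_norm(f) * gevrey_norm(g) + 1e-12
```

The same computation with a superadditive weight (e.g. $\mathrm{e}^{(2\pi k)^{2}r}$) would fail, which is exactly why the restriction $0<\nu<1$ matters here.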
###### Theorem 6.3.
Let $\rho>0,2^{-1}<\nu<1.$ Consider the cocycle
$(\alpha,A)\in(0,1)\setminus{\mathbb{Q}}\times
G_{\rho}^{\nu}({\mathbb{T}},SL(2,{\mathbb{R}}))$ with $\alpha\in DC.$ Then
$L(\alpha,A)$ is jointly continuous in the sense that
$\lim\limits_{n\rightarrow\infty}L(p_{n}/q_{n},A_{n})=L(\alpha,A),$
where $p_{n}/q_{n}$ is the continued fraction expansion of $\alpha$, and
$A_{n}\in G_{\rho}^{\nu}({\mathbb{T}},SL(2,{\mathbb{R}}))$ with
$A_{n}\rightarrow A$ in the topology induced by the
$\|\cdot\|_{\nu,\rho}-$norm.
The full joint continuity was first proved by Bourgain-Jitomirskaya [16] for
analytic cocycles, and it plays a fundamental role in establishing the
global theory of the Schrödinger operator [8]. However, due to the lack of
analyticity, it is very difficult to generalize the above result to all
irrational $\alpha$. The main reason is that in our large deviation estimates
there is an upper bound on $N$ (see the assumptions in Proposition 5), so we
cannot deal with extremely Liouvillean frequencies. It is an interesting open
question whether one can prove the continuity of the Lyapunov exponent for
Liouvillean frequencies and non-analytic potentials.
Proof of Theorem 6.2. We first prove Theorem 6.2 and leave the proof of
Theorem 6.3 to Section 8. We separate the proof into two cases.
$\mathbf{Case~{}I:}$ $\alpha\in DC(v,10)$. We may assume that $L(\alpha,E)>0$;
then by Theorem 6.3, for any $p_{n}/q_{n}$ sufficiently close to $\alpha$, we
have $L(p_{n}/q_{n},E)>0$. This implies that
$E\notin\sigma(p_{n}/q_{n},\theta)$ for some $\theta\in{\mathbb{T}}$, hence
$E\notin S_{-}(p_{n}/q_{n})$.
$\mathbf{Case~{}II:}$ $\alpha\notin DC(v,10)$. We define a sequence
$\{V^{\theta}_{m}\}_{m=1}^{\infty}$ of periodic potentials by:
$\displaystyle V_{m}^{\theta}(n)=V(\theta+n\alpha),\ \ n=1,2,\cdots,m,$
$\displaystyle V_{m}^{\theta}(n+m)=V_{m}^{\theta}(n),$
so that $V_{m}^{\theta}$ is obtained from $\{V(\theta+n\alpha)\}_{n}$ by
“cutting” a finite piece of length $m$ and then repeating it. We denote by
$\sigma_{m}(\theta)$ the spectrum of the periodic Schrödinger operator
$\displaystyle(Hu)_{n}=u_{n+1}+u_{n-1}+V_{m}^{\theta}(n)u_{n}.$
By Theorem 1 in [42], for a.e. $\theta\in{\mathbb{T}}$,
(6.9)
$\displaystyle\limsup_{m\rightarrow\infty}\sigma_{m}(\theta)\subset\Sigma_{ac}(\alpha).$
We define
$A^{\theta}_{q_{n}}(\xi)=\begin{pmatrix}V_{q_{n}}^{\theta}(1)&1&&&e^{-i\xi m}\\ 1&V_{q_{n}}^{\theta}(2)&1&&\\ &1&\ddots&\ddots&\\ &&\ddots&\ddots&1\\ e^{i\xi m}&&&1&V_{q_{n}}^{\theta}(q_{n})\end{pmatrix},$
$\widetilde{A}^{\theta}_{q_{n}}(\xi)=\begin{pmatrix}V(\theta+p_{n}/q_{n})&1&&&e^{-i\xi m}\\ 1&V(\theta+2p_{n}/q_{n})&1&&\\ &1&\ddots&\ddots&\\ &&\ddots&\ddots&1\\ e^{i\xi m}&&&1&V(\theta+q_{n}p_{n}/q_{n})\end{pmatrix}.$
It is standard that
$\sigma_{q_{n}}(\theta)=\bigcup_{\xi}\text{Spec}(A_{q_{n}}^{\theta}(\xi)),\ \
\sigma(p_{n}/q_{n},\theta)=\bigcup_{\xi}\text{Spec}(\widetilde{A}_{q_{n}}^{\theta}(\xi)),$
where $\text{Spec}(A)$ denotes the set of all eigenvalues of $A$. We need the
following perturbation theory of matrices.
###### Proposition 4 (Corollary 12.2 of [14]).
Let $A$ and $B$ be normal with $\|A-B\|=\varepsilon$. Then within a distance
of $\varepsilon$ of every eigenvalue of $A$ there is at least one eigenvalue
of $B$ and vice versa.
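As a numerical aside (not needed for the proof), the Floquet matrices above and Proposition 4 can both be illustrated with the free potential $V\equiv 0$: the eigenvectors are the waves $\mathrm{e}^{\mathrm{i}\phi n}$ with $q\phi=\xi+2\pi j$, and a diagonal shift of size $|c|$ then moves every eigenvalue by exactly $c$, consistent with the $\varepsilon$-matching of spectra. The helper `apply_floquet` is ours, with the corner phases placed as in the matrices above (reading the exponent $\xi m$ there simply as $\xi$):

```python
import cmath, math

def apply_floquet(v_diag, xi, vec):
    """Apply the q x q matrix with v_diag on the diagonal, 1's on the
    off-diagonals, e^{-i xi} in the top-right corner and e^{i xi} in the
    bottom-left corner, without building the matrix."""
    q = len(vec)
    out = []
    for n in range(q):
        up = vec[n + 1] if n + 1 < q else cmath.exp(1j * xi) * vec[0]
        down = vec[n - 1] if n >= 1 else cmath.exp(-1j * xi) * vec[q - 1]
        out.append(up + down + v_diag[n] * vec[n])
    return out

q, xi = 8, 0.7
for j in range(q):
    # Floquet wave e^{i phi n} with q*phi = xi + 2*pi*j is an eigenvector
    # of the free (V = 0) matrix, with eigenvalue 2*cos(phi)
    phi = (xi + 2 * math.pi * j) / q
    vec = [cmath.exp(1j * phi * n) for n in range(q)]
    out = apply_floquet([0.0] * q, xi, vec)
    lam = 2 * math.cos(phi)
    assert all(abs(out[n] - lam * vec[n]) < 1e-10 for n in range(q))
# Proposition 4 in the simplest case: the diagonal shift V = c gives
# ||A - B|| = |c|, and every eigenvalue moves by exactly c.
```

Taking the union of the eigenvalues $2\cos\phi$ over $\xi\in[0,2\pi)$ recovers the free band $[-2,2]$, matching the description of $\sigma_{q_{n}}(\theta)$ as a union of Floquet spectra.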
Fix $\theta_{0}$ such that (6.9) holds with $\theta_{0}$ in place of $\theta.$
Notice for any $E_{0}\in\sigma_{q_{n}}(\theta_{0})$, there exists $\xi_{n}$
such that $E_{0}\in\text{Spec}(A_{q_{n}}^{\theta_{0}}(\xi_{n}))$. Applying
Proposition 4 to $A_{q_{n}}^{\theta_{0}}(\xi_{n})$ and
$\widetilde{A}_{q_{n}}^{\theta_{0}}(\xi_{n})$, there exists
$E^{\prime}_{0}\in\text{Spec}(\widetilde{A}_{q_{n}}^{\theta_{0}}(\xi_{n}))$
such that
$|E_{0}-E_{0}^{\prime}|\leq\big{\|}A_{q_{n}}^{\theta_{0}}(\xi_{n})-\widetilde{A}_{q_{n}}^{\theta_{0}}(\xi_{n})\big{\|}\leq
C(V)\sum\limits_{j=1}^{q_{n}}\big{|}j(\alpha-p_{n}/q_{n})\big{|}\leq
C(V)q_{n}^{2}\big{|}\alpha-p_{n}/q_{n}\big{|}.$
Since $E_{0}^{\prime}\in\sigma(p_{n}/q_{n},\theta_{0})$, it follows that
(6.10)
$\big{|}\sigma_{q_{n}}(\theta_{0})-\sigma\big{(}p_{n}/q_{n},\theta_{0}\big{)}\big{|}_{H}\leq
C(V)q_{n}^{2}\big{|}\alpha-p_{n}/q_{n}\big{|},$
where $|A-B|_{H}$ denotes the Hausdorff distance between two sets. We write
$\sigma_{q_{n}}(\theta_{0})=\bigcup_{i=1}^{q_{n}^{\prime}}[a_{n,i},b_{n,i}]$,
$q_{n}^{\prime}\leq q_{n}$. Then (6.10) implies that
$\sigma\big{(}p_{n}/q_{n},\theta_{0}\big{)}\subset\bigcup_{i=1}^{q_{n}^{\prime}}\Big{[}a_{n,i}-C(V)q_{n}^{2}\big{|}\alpha-
p_{n}/q_{n}\big{|},b_{n,i}+C(V)q_{n}^{2}\big{|}\alpha-
p_{n}/q_{n}\big{|}\Big{]}.$
It follows that
(6.11)
$\displaystyle\big{|}\sigma(p_{n}/q_{n},\theta_{0})\backslash\sigma_{q_{n}}(\theta_{0})\big{|}\leq
C(V)q_{n}^{3}|\alpha-p_{n}/q_{n}|.$
Since $\alpha\notin DC(v,10)$, by (6.11), there exists a subsequence
$p_{n}/q_{n}$ such that
$\displaystyle\limsup_{n\rightarrow\infty}\sigma(p_{n}/q_{n},\theta_{0})\subset\limsup_{n\rightarrow\infty}\sigma_{q_{n}}(\theta_{0})\subset\Sigma_{ac}(\alpha).$
Moreover, notice that
$\displaystyle\limsup_{n\rightarrow\infty}S_{-}(p_{n}/q_{n})\subset\limsup_{n\rightarrow\infty}\sigma(p_{n}/q_{n},\theta_{0}),$
hence
$\displaystyle\limsup_{n\rightarrow\infty}S_{-}(p_{n}/q_{n})\subset\Sigma_{ac}(\alpha).$∎
## 7\. Proof of Proposition 3
Suppose that $V:\mathbb{T}\rightarrow{\mathbb{R}}$ is an M-ultra-
differentiable function satisfying $\mathbf{(H1)}$ and $\mathbf{(H2)}$, then
for almost every $E\in\Sigma_{ac}(\alpha)$, by Theorem 1.1,
$(\alpha,S_{E}^{V})$ is $C^{\infty}$ rotations reducible. However, this is not
enough for us to conclude
$\displaystyle\|t_{p_{n}/q_{n}}(E,\theta)-a_{q_{n},0}(E)\|_{C^{0}}\leq
4\exp\\{-\Lambda(cq_{n})\\}.$
Indeed, even if we assume $(\alpha,S_{E}^{V})=(\alpha,R_{\psi(\theta)})$ with
$\psi(\theta)\in C^{\infty}(\mathbb{T},\mathbb{R}),$ this only gives
$\displaystyle\|t_{p_{n}/q_{n}}(E,\theta)-a_{q_{n},0}(E)\|_{C^{0}}\leq
cq_{n}^{-\infty}.$
The idea is that our KAM scheme gives not only $C^{\infty}$ rotations
reducibility, but also almost reducibility in the ultra-differentiable
topology (Corollary 2); however, this only works for cocycles which are close
to constant. Coupled with the renormalization argument, we will show that if
the cocycle is $L^{2}$-conjugated to rotations and
$\rho(\alpha,A)\in\mathcal{P},$ then $(\alpha,A)$ is also almost reducible
in the ultra-differentiable topology (Lemma 7.3). Consequently, Proposition 3
follows from perturbation arguments.
### 7.1. Global almost reducibility
To get desired quantitative estimates, the main method is the inverse
renormalization which was first developed in [26]. We introduce the notation
$\Psi:=\mathcal{J}\mathcal{R}_{\theta_{*}}^{n}(\Psi^{\prime})$ if
$\Psi^{\prime}=\mathcal{R}_{\theta_{*}}^{n}(\Psi).$ It is easy to check that
$\mathcal{J}\mathcal{R}_{\theta_{*}}^{n}(\Phi)=T_{\theta_{*}}^{-1}\circ
N_{\widetilde{Q}_{n}^{-1}}\circ M_{\beta_{n-1}^{-1}}\circ
T_{\theta_{*}}(\Phi).$
In our setting, one of the basic estimates is the following:
###### Lemma 7.1.
Let $\Phi_{1}=((1,Id),(\alpha_{n},R_{\rho_{n}}\mathrm{e}^{F(\theta)})),$
$\Phi_{2}=((1,Id),(\alpha_{n},R_{\rho_{n}}))$ where $F\in
U_{r}({\mathbb{T}},sl(2,{\mathbb{R}}))$ with estimate $\|F\|_{r,1}\leq
q_{n-1}^{-2}.$ Then,
(7.1)
$\displaystyle\|\mathcal{J}\mathcal{R}_{\theta_{*}}^{n}(\Phi_{1})-\mathcal{J}\mathcal{R}_{\theta_{*}}^{n}(\Phi_{2})\|_{\beta_{n-1}r,1}<2q_{n-1}\|F(\theta)\|_{r,1}.$
###### Proof.
Direct computation shows that
$\begin{split}M_{\beta_{n-1}^{-1}}(\Phi_{1})&=((\beta_{n-1},Id),(\beta_{n-1}\alpha_{n},R_{\rho_{n}}\mathrm{e}^{F(\beta_{n-1}^{-1}\theta)})),\\\
M_{\beta_{n-1}^{-1}}(\Phi_{2})&=((\beta_{n-1},Id),(\beta_{n-1}\alpha_{n},R_{\rho_{n}})),\end{split}$
and consequently
$\begin{split}N_{\widetilde{Q}_{n}^{-1}}\circ
M_{\beta_{n-1}^{-1}}(\Phi_{1})&=((1,R_{\rho_{n}q_{n-1}}\mathrm{e}^{F_{1}(\theta)}),(\alpha,R_{\rho_{n}p_{n-1}}\mathrm{e}^{F_{2}(\theta)})),\\\
N_{\widetilde{Q}_{n}^{-1}}\circ
M_{\beta_{n-1}^{-1}}(\Phi_{2})&=((1,R_{\rho_{n}q_{n-1}}),(\alpha,R_{\rho_{n}p_{n-1}})).\end{split}$
Note for any $\lambda\neq 0$,
$\|D_{\theta}^{s}M_{\lambda}(\Phi)\|_{C^{0}([0,T])}=\lambda^{s}\|D_{\theta}^{s}\Phi\|_{C^{0}([0,T])}$,
which gives
(7.2) $\|M_{\lambda}(\Phi)\|_{\lambda^{-1}r,T}=\|\Phi\|_{r,\lambda T}.$
Thus the main task is to estimate the norm under iteration of the cocycles.
To do this, we need the following simple observation:
###### Lemma 7.2.
(Lemma 4.3 of [55]) Let $F_{i}\in
U_{r}({\mathbb{R}},sl(2,{\mathbb{R}})),i=1,\cdots,j.$ Then it holds that
$R_{\rho}\mathrm{e}^{F_{j}(\theta)}R_{\rho}\mathrm{e}^{F_{j-1}(\theta)}\cdots
R_{\rho}\mathrm{e}^{F_{1}(\theta)}=R_{j\rho}\mathrm{e}^{\widetilde{F}(\theta)},$
with estimate
$\|\widetilde{F}\|_{r,T}\leq\sum_{i=1}^{j}\|F_{i}\|_{r,T}.$
By (7.2) and Lemma 7.2, we have
$\|F_{1}\|_{\beta_{n-1}r,1}\leq
q_{n-1}\|F\|_{r,1},\qquad\|F_{2}\|_{\beta_{n-1}r,1}\leq p_{n-1}\|F\|_{r,1}.$
Then (7.1) follows directly.
∎
###### Lemma 7.3.
Assume that $(\alpha,A)$ is $L^{2}$-conjugated to rotations and homotopic to
the identity, and that $\rho_{f}=\rho(\alpha,A)\in\mathcal{P}.$ Then there exist
$B_{j,\ell},F_{j,\ell}\in
U_{\widetilde{r}_{\ell}}(\mathbb{T},SL(2,\mathbb{R}))$ such that
(7.3) $\displaystyle
B_{j,\ell}(\theta+\alpha)A(\theta)B_{j,\ell}(\theta)^{-1}=R_{\rho_{f}}\mathrm{e}^{F_{j,\ell}(\theta)},\quad\ell>\widehat{n},$
where
$\widetilde{r}_{\ell}=r\beta_{n_{j}-1}/2K_{*}^{3}\overline{Q}_{\ell-1}^{2},$
and $\widehat{n}\in{\mathbb{N}}$ is the smallest integer such that
$\overline{Q}_{\widehat{n}}\geq C^{q_{n_{j}}^{2}}.$ Moreover, we have the estimates
$\|F_{j,\ell}\|_{\widetilde{r}_{\ell}}\leq\varepsilon_{\ell}^{\frac{1}{2}},\qquad\|B_{j,\ell}\|_{\widetilde{r}_{\ell}}<4C^{3q_{n_{j}-1}q_{n_{j}}}.$
###### Proof.
Since $(\alpha,A)$ is $L^{2}$-conjugated to rotations and homotopic to the
identity, by Proposition 1 there exists $D_{n}\in
U_{rK_{*}^{-2}}(\mathbb{R},SL(2,\mathbb{R}))$ with
(7.4) $\|D_{n}\|_{r(K_{*}^{2}T)^{-1},T}\leq C^{q_{n-1}(T+1)},$
such that
$\displaystyle\mathrm{Conj_{D_{n}}}(\mathcal{R}_{\theta_{*}}^{n}(\Phi))=((1,Id),(\alpha_{n},\
R_{\rho_{n}}\mathrm{e}^{F_{n}})),$
with $\|F_{n}\|_{rK_{*}^{-2},1}\rightarrow 0.$ Since
$\rho_{f}=\rho(\alpha,A)\in\mathcal{P},$ i.e.
$\rho_{f}\beta_{n_{j}-1}^{-1}\in DC_{\alpha_{n_{j}}}(\gamma,\tau)$ for
infinitely many $n_{j}$, we can choose $j$ large enough such
that
$\|F_{n_{j}}\|_{rK_{*}^{-2}}\leq\varepsilon_{*}(\gamma,\tau,rK_{*}^{-2},M),$
where $\varepsilon_{*}=\varepsilon_{*}(\gamma,\tau,r,M)>0$ is the one in
Theorem 1.2. In the following, we will write $D_{j}$ for $D_{n_{j}}$ for
short, and denote $T_{n}=\beta_{n-1}^{-1}=q_{n}+\alpha_{n}q_{n-1}$,
$r_{\ell}=2^{-1}rK_{*}^{-2}\overline{Q}_{\ell-1}^{-2}$, then
$\widetilde{r}_{\ell}=(K_{*}T_{n_{j}})^{-1}r_{\ell}$.
Now we apply Corollary 2 to the action
$\mathrm{Conj_{D_{j}}}(\mathcal{R}_{\theta_{*}}^{n_{j}}(\Phi))$ and denote
$Z_{j,\ell}=B_{\ell}D_{j},$ then
(7.5)
$\displaystyle\mathrm{Conj_{Z_{j,\ell}}}(\mathcal{R}_{\theta_{*}}^{n_{j}}(\Phi))=\widetilde{\Phi}_{1}^{(j,\ell)}:=((1,Id),(\alpha_{n_{j}},\
R_{\rho_{n_{j}}}\mathrm{e}^{F_{j,\ell}})),$
which, together with (5.2) and (7.4) and the fact $r_{\ell}\ll
rK_{*}^{-2}T_{n_{j}}^{-1}$, implies
(7.6) $\displaystyle\|F_{j,\ell}\|_{r_{\ell},1}\leq\varepsilon_{\ell},\
\|Z_{j,\ell}\|_{r_{\ell},T_{n_{j}}}\leq\|D_{j}\|_{r_{\ell},T_{n_{j}}}\|B_{\ell}\|_{r_{\ell},1}\leq
C^{3q_{n_{j}-1}q_{n_{j}}}.$
Once we have these, we can set
$\widetilde{\Phi}_{2}^{(j,\ell)}=((1,Id),(\alpha_{n_{j}},\ R_{\rho_{n_{j}}}))$
and define $\Phi_{2,j,\ell}$ by
(7.7)
$\displaystyle\mathrm{Conj_{Z_{j,\ell}}}(\mathcal{R}_{\theta_{*}}^{n_{j}}(\Phi_{2,j,\ell}))=\widetilde{\Phi}_{2}^{(j,\ell)}.$
For given $G\in SL(2,\mathbb{R}),$ $T_{\theta_{*}}$ and $N_{U}$ commute with
$\mathrm{Conj_{G}}$ while
$M_{\lambda}\circ\mathrm{Conj_{G}}=\mathrm{Conj_{G(\lambda\cdot)}}\circ
M_{\lambda},$ then by (7.5) and (7.7), we get
(7.8) $\displaystyle\left\\{\begin{array}[]{l
l}\Phi=\mathcal{J}\mathcal{R}_{\theta_{*}}^{n_{j}}\big{(}\mathrm{Conj_{Z_{j,\ell}^{-1}}}(\widetilde{\Phi}_{1}^{(j,\ell)})\big{)}=\mathrm{Conj_{Z_{j,\ell}^{-1}(\beta_{n_{j}-1}^{-1}\cdot)}}\big{(}\widetilde{\Phi}_{1,j,\ell}\big{)},\\\
\Phi_{2,j,\ell}=\mathcal{J}\mathcal{R}_{\theta_{*}}^{n_{j}}\big{(}\mathrm{Conj_{Z_{j,\ell}^{-1}}}(\widetilde{\Phi}_{2}^{(j,\ell)})\big{)}=\mathrm{Conj_{Z_{(j,\ell)}^{-1}(\beta_{n_{j}-1}^{-1}\cdot)}}\big{(}\widetilde{\Phi}_{2,j,\ell}\big{)},\end{array}\right.$
where
(7.9) $\displaystyle\left\\{\begin{array}[]{l
l}\widetilde{\Phi}_{1,j,\ell}&=\mathcal{J}\mathcal{R}_{\theta_{*}}^{n_{j}}(\widetilde{\Phi}_{1}^{(j,\ell)}),\\\
\widetilde{\Phi}_{2,j,\ell}&=\mathcal{J}\mathcal{R}_{\theta_{*}}^{n_{j}}(\widetilde{\Phi}_{2}^{(j,\ell)})=((1,R_{\rho_{n_{j}}q_{n_{j}-1}}),(\alpha,R_{\rho_{n_{j}}p_{n_{j}-1}})).\end{array}\right.$
Set $\widetilde{Z}_{j}(\theta)=R_{-\rho_{n_{j}}q_{n_{j}-1}\theta},$ then
$\displaystyle\mathrm{Conj_{\widetilde{Z}_{j}}}\big{(}\widetilde{\Phi}_{2,j,\ell}\big{)}=((1,Id),(\alpha,R_{\rho_{f}})):=\Phi_{**},$
which, together with (7.8) and (7.9), implies
(7.10) $\displaystyle\left\\{\begin{array}[]{l
l}\Phi=\mathrm{Conj_{Z_{j,\ell}^{-1}(\beta_{n_{j}-1}^{-1}\cdot)\widetilde{Z}_{j}^{-1}}}(\Phi_{*}),\\\
\Phi_{2,j,\ell}=\mathrm{Conj_{Z_{j,\ell}^{-1}(\beta_{n_{j}-1}^{-1}\cdot)\widetilde{Z}_{j}^{-1}}}(\Phi_{**}),\end{array}\right.$
where
$\Phi_{*}=\mathrm{Conj_{\widetilde{Z}_{j}}}\big{(}\widetilde{\Phi}_{1,j,\ell}\big{)}.$
Moreover, by our selection $\overline{Q}_{\ell}>\overline{Q}_{\widehat{n}}\geq
C^{q_{n_{j}}^{2}},$ we have
(7.11) $\|\widetilde{Z}_{j}\|_{r_{\ell},1}\leq 2,\qquad
\|F_{j,\ell}\|_{r_{\ell},1}\leq\varepsilon_{\ell}\ll q_{n_{j}}^{-6}.$
In the following, we estimate the distance between $\Phi$ and
$\Phi_{2,j,\ell}.$
First, we apply Lemma 7.1 with $\widetilde{\Phi}_{i}^{(j,\ell)}$ in place of
$\Phi_{i},$ and $n_{j}$ in place of $n$ respectively, then by (7.1)
$\displaystyle\|\widetilde{\Phi}_{1,j,\ell}-\widetilde{\Phi}_{2,j,\ell}\|_{T_{n_{j}}^{-1}r_{\ell},1}<\|F_{j,\ell}\|_{r_{\ell},1}^{\frac{3}{4}}\leq\varepsilon_{\ell}^{\frac{3}{4}}.$
Finally, by (7.6) and (7.8) and the inequality above we get
(7.12) $\displaystyle\|\Phi-\Phi_{2,j,\ell}\|_{T_{n_{j}}^{-1}r_{\ell},1}$
$\displaystyle\leq$
$\displaystyle\|Z_{j,l}\|_{r_{\ell},T_{n_{j}}}^{2}\|\widetilde{\Phi}_{1,j,\ell}-\widetilde{\Phi}_{2,j,\ell}\|_{T_{n_{j}}^{-1}r_{\ell},1}$
$\displaystyle\leq$ $\displaystyle
C^{6q_{n_{j}-1}q_{n_{j}}}\varepsilon_{\ell}^{\frac{3}{4}}.$
Notice that $\Phi_{2,j,\ell}$ may not be normalized; however, by (7.12) we
know that
(7.13) $\displaystyle\|\Phi_{2,j,\ell}(1,0)-Id\|_{T_{n_{j}}^{-1}r_{\ell},1}$
$\displaystyle=$
$\displaystyle\|\Phi_{2,j,\ell}(1,0)-\Phi(1,0)\|_{T_{n_{j}}^{-1}r_{\ell},1}$
$\displaystyle\leq$ $\displaystyle
C^{6q_{n_{j}-1}q_{n_{j}}}\varepsilon_{\ell}^{\frac{3}{4}}.$
Thus by Lemma 2.3, there exists a conjugation $\widetilde{B}_{j,\ell}\in
U_{\widetilde{r}_{\ell}}({\mathbb{R}},SL(2,{\mathbb{R}}))$ such that
$\overline{\Phi}_{j,\ell}=\mathrm{Conj_{\widetilde{B}_{j,\ell}}}(\Phi_{2,j,\ell})$
is a normalized action. Moreover, we have estimate
(7.14)
$\displaystyle\|\widetilde{B}_{j,\ell}-Id\|_{\widetilde{r}_{\ell},1}\leq\|\Phi_{2,j,\ell}(1,0)-Id\|_{T_{n_{j}}^{-1}r_{\ell},1}\leq
C^{6q_{n_{j}}q_{n_{j}-1}}\varepsilon_{\ell}^{\frac{3}{4}}.$
Since
$\overline{\Phi}_{j,\ell}(0,1)=\widetilde{B}_{j,\ell}(\theta+\alpha)\Phi_{2,j,\ell}(0,1)\widetilde{B}_{j,\ell}(\theta)^{-1},$
by (7.13), (7.14) we have
$\displaystyle\|\overline{\Phi}_{j,\ell}-\Phi_{2,j,\ell}\|_{\widetilde{r}_{\ell},1}\leq
2C^{6q_{n_{j}}q_{n_{j}-1}}\varepsilon_{\ell}^{\frac{3}{4}}.$
The inequality above, together with (7.12), yields
(7.15) $\|\Phi-\overline{\Phi}_{j,\ell}\|_{\widetilde{r}_{\ell},1}\leq
3C^{6q_{n_{j}}q_{n_{j}-1}}\varepsilon_{\ell}^{\frac{3}{4}}.$
Set
$B_{j,\ell}(\cdot)=\widetilde{Z}_{j}(\cdot)Z_{j,\ell}(\beta_{n_{j}-1}^{-1}\cdot)\widetilde{B}_{j,\ell}^{-1}(\cdot),$
then by (7.10) we get
(7.16)
$\displaystyle\mathrm{Conj_{B_{j,\ell}}}(\overline{\Phi}_{j,\ell})=\mathrm{Conj_{\widetilde{Z}_{j}Z_{j,\ell}(\beta_{n_{j}-1}^{-1}\cdot)}}(\Phi_{2,j,\ell})=\Phi_{**}.$
Thus $B_{j,\ell}$ is $1-$periodic since both $\overline{\Phi}_{j,\ell}$ and
$\Phi_{**}$ are normalized. Moreover, (7.6), (7.11) and (7.14) imply
(7.17)
$\displaystyle\|B_{j,\ell}\|_{\widetilde{r}_{\ell},1}\leq\|\widetilde{Z}_{j}\|_{\widetilde{r}_{\ell},1}\|Z_{j,\ell}\|_{\widetilde{r}_{\ell},T_{n_{j}}}\|\widetilde{B}_{j,\ell}\|_{\widetilde{r}_{\ell},1}\leq
4C^{3q_{n_{j}-1}q_{n_{j}}}.$
By (7.15)–(7.17) we get
$\displaystyle\|(0,B_{j,\ell}(\cdot+\alpha))\circ(\alpha,A)\circ(0,B_{j,\ell})^{-1}-(\alpha,R_{\rho_{f}})\|_{\widetilde{r}_{\ell},1}$
$\displaystyle\leq\|\mathrm{Conj_{B_{j,\ell}}}(\Phi)-\mathrm{Conj_{B_{j,\ell}}}(\overline{\Phi}_{j,\ell})\|_{\widetilde{r}_{\ell},1}\leq
C^{13q_{n_{j}-1}q_{n_{j}}}\varepsilon_{\ell}^{\frac{3}{4}}\leq\varepsilon_{\ell}^{\frac{1}{2}},$
where the last inequality follows from our selection
$\overline{Q}_{\ell}>\overline{Q}_{\widehat{n}}\geq C^{q_{n_{j}}^{2}}$ and
definition of $\varepsilon_{\ell}$. Thus, by the implicit function theorem,
there exists a unique $F_{j,\ell}\in
U_{\widetilde{r}_{\ell}}({\mathbb{T}},sl(2,{\mathbb{R}})),$ such that
$\displaystyle
B_{j,\ell}(\theta+\alpha)A(\theta)B_{j,\ell}(\theta)^{-1}=R_{\rho_{f}}\mathrm{e}^{F_{j,\ell}(\theta)}$
with
$\|F_{j,\ell}\|_{\widetilde{r}_{\ell}}\leq\varepsilon_{\ell}^{\frac{1}{2}}.$ ∎
### 7.2. Proof of Proposition 3
First we construct the desired sequence. For the sequence
$(q_{\ell})_{\ell\in\mathbb{N}}$ and subsequence
$(Q_{\ell})_{\ell\in\mathbb{N}}$ constructed in Lemma 2.1, first set
$n_{1}\geq n_{0}$ to be the smallest integer such that
(7.18)
$\max\\{16C_{M}C^{6q_{n_{j}}^{2}}(2+2\|V\|_{r}),16r^{-1}q_{n_{j}}K_{*}^{3}\\}\leq\overline{Q}_{n_{1}},$
where $n_{0}$ is the one in Section 4. Then let $n_{*}$ be the smallest
integer such that
(7.19) $\displaystyle
q_{n_{*}}>\overline{Q}_{n_{1}+1}^{2\mathbb{A}^{4}\tau^{2}}.$
Then, for any fixed $n$ with $n\geq n_{*},$ let $\ell\in\mathbb{N}$ be the
smallest integer such that
$\overline{Q}_{\ell}^{2\mathbb{A}^{4}\tau^{2}}\geq q_{n}.$ That is,
(7.20)
$\displaystyle\overline{Q}_{\ell-1}^{2\mathbb{A}^{4}\tau^{2}}<q_{n}\leq\overline{Q}_{\ell}^{2\mathbb{A}^{4}\tau^{2}}.$
Thus by (7.19) and (7.20) we get $\ell-1\geq n_{1}\geq\widehat{n}$, where
$\widehat{n}$ is the one defined in Lemma 7.3.
By our construction, for almost every $E\in\Sigma_{ac}(\alpha)$,
$(\alpha,S_{E}^{V})$ is $L^{2}$-conjugated to rotations, and
$\rho_{f}=\rho(\alpha,S_{E}^{V})\in\mathcal{P}.$ Then by (7.3) in Lemma 7.3,
there exist $B_{j,\ell},F_{j,\ell}\in
U_{\widetilde{r}_{\ell}}(\mathbb{T},SL(2,\mathbb{R}))$ such that
(7.21) $\displaystyle
B_{j,\ell}(\theta+\alpha)S_{E}^{V}(\theta)B_{j,\ell}(\theta)^{-1}=R_{\rho_{f}}\mathrm{e}^{F_{j,\ell}},$
with estimate
$\|F_{j,\ell}\|_{\widetilde{r}_{\ell}}\leq\varepsilon_{\ell}^{\frac{1}{2}},\qquad\|B_{j,\ell}\|_{\widetilde{r}_{\ell}}<4C^{3q_{n_{j}-1}q_{n_{j}}}.$
We write $B$ and $F$ for $B_{j,\ell}$ and $F_{j,\ell},$ respectively. By
(7.21) we get
(7.22) $\displaystyle
B(\theta+p_{n}/q_{n})S_{E}^{V}(\theta)B(\theta)^{-1}=R_{\rho_{f}}+f(\theta),$
where
$\displaystyle f(\theta)$ $\displaystyle=$ $\displaystyle
R_{\rho_{f}}(\mathrm{e}^{F(\theta)}-I)+\\{B(\theta+p_{n}/q_{n})-B(\theta+\alpha)\\}S_{E}^{V}(\theta)B(\theta)^{-1}$
$\displaystyle=$ $\displaystyle(\textrm{I})+(\textrm{II}).$
Note for any $E\in\Sigma_{ac}(\alpha)\subset\Sigma(\alpha)$, we have
$|E|<2+\|V\|_{r}$, thus by Cauchy’s estimate (Lemma 3.2), we get
(7.23) $\displaystyle\|\textrm{II}\|_{\widetilde{r}_{\ell}/2}$
$\displaystyle\leq$ $\displaystyle|\alpha-\frac{p_{n}}{q_{n}}|\|\partial
B\|_{\widetilde{r}_{\ell}/2}\|S_{E}^{V}\|_{\widetilde{r}_{\ell}}\|B^{-1}\|_{\widetilde{r}_{\ell}}$
$\displaystyle\leq$ $\displaystyle
C_{M}\widetilde{r}_{\ell}^{-1}q_{n}^{-2}\|B^{-1}\|_{\widetilde{r}_{\ell}}\|B\|_{\widetilde{r}_{\ell}}(2+2\|V\|_{r})$
$\displaystyle\leq$ $\displaystyle
16C_{M}C^{6q_{n_{j}-1}q_{n_{j}}}(2+2\|V\|_{r})\widetilde{r}_{\ell}^{-1}q_{n}^{-2}.$
Since $B$ is 1-periodic, by (7.22), we have
$\displaystyle B(\theta$
$\displaystyle+q_{n}p_{n}/q_{n})\Pi_{s=q_{n}-1}^{0}S_{E}^{V}(\theta+sp_{n}/q_{n})B(\theta)^{-1}$
$\displaystyle=\Pi_{s=q_{n}-1}^{0}B(\theta+(s+1)p_{n}/q_{n})S_{E}^{V}(\theta+sp_{n}/q_{n})B(\theta+sp_{n}/q_{n})^{-1}$
$\displaystyle=\Pi_{s=q_{n}-1}^{0}\\{R_{\rho_{f}}+f(\theta+sp_{n}/q_{n})\\}.$
As a consequence,
$\displaystyle\mathrm{tr}\Pi_{s=q_{n}-1}^{0}S_{E}^{V}(\theta+sp_{n}/q_{n})=\mathrm{tr}\Pi_{s=q_{n}-1}^{0}\\{R_{\rho_{f}}+f(\theta+sp_{n}/q_{n})\\},$
which, together with (6.1) and (6.2), implies
(7.24) $\displaystyle
t_{p_{n}/q_{n}}(E,\theta)=\mathrm{tr}\Pi_{s=q_{n}-1}^{0}\\{R_{\rho_{f}}+f(\theta+sp_{n}/q_{n})\\}=\sum_{k\in\mathbb{Z}}a_{q_{n},k}(E)\mathrm{e}^{2\pi\mathrm{i}kq_{n}\theta}.$
The first equality in (7.24) implies
(7.25)
$\displaystyle\|t_{p_{n}/q_{n}}(E,\theta)\|_{\widetilde{r}_{\ell}/2}\leq
2\\{1+\|f\|_{\widetilde{r}_{\ell}/2}\\}^{q_{n}}.$
In the following we estimate $\|f\|_{\widetilde{r}_{\ell}/2}.$ Indeed, by
(7.18) and $\ell-1\geq n_{1}$, we
have
(7.26)
$\widetilde{r}_{\ell}^{-1}=2r^{-1}\beta_{n_{j}-1}^{-1}K_{*}^{3}\overline{Q}_{\ell-1}^{2}<4^{-1}\overline{Q}_{\ell-1}^{3}.$
Again, by (4.6), (7.18), (7.20) and (7.23), we have
$\displaystyle\|f\|_{\widetilde{r}_{\ell}/2}\leq\|\textrm{I}\|_{\widetilde{r}_{\ell}/2}+\|\textrm{II}\|_{\widetilde{r}_{\ell}/2}\leq\frac{1}{2}\overline{Q}_{\ell}^{-4\mathbb{A}^{4}\tau^{2}}+\frac{1}{2}\overline{Q}_{\ell-1}^{4}q_{n}^{-2}\leq
q_{n}^{-2(1-\mathbb{A}^{-4}\tau^{-2})}.$
Consequently, by (7.25)
(7.27)
$\displaystyle\|t_{p_{n}/q_{n}}(E,\theta)\|_{\widetilde{r}_{\ell}/2}\leq
2\\{1+q_{n}^{-2(1-\mathbb{A}^{-4}\tau^{-2})}\\}^{q_{n}}<4.$
On the other hand, by (7.26), we have
$\displaystyle
q_{n}2^{-1}\widetilde{r}_{\ell}>q_{n}\overline{Q}_{\ell-1}^{-3}>q_{n}\overline{Q}_{\ell-1}^{-4}>q_{n}^{1-2\mathbb{A}^{-4}\tau^{-2}}>T_{1}.$
Moreover, the second equality in (7.24) implies
$\displaystyle
t_{p_{n}/q_{n}}(E,\theta)-a_{q_{n},0}(E)=\mathcal{R}_{q_{n}}t_{p_{n}/q_{n}}(\theta,E).$
Thus by Lemma 3.4 and (7.27), we have
$\displaystyle\|t_{p_{n}/q_{n}}(E,\theta)-a_{q_{n},0}(E)\|_{C^{0}}$
$\displaystyle\leq\|t_{p_{n}/q_{n}}(E,\theta)\|_{2^{-1}\widetilde{r}_{\ell}}\exp\\{-\Lambda(\pi
q_{n}2^{-1}\widetilde{r}_{\ell})\\}$ $\displaystyle\leq
4\exp\\{-\Lambda(q_{n}^{1-2\mathbb{A}^{-4}\tau^{-2}})\\},$
where the last inequality follows from the fact that $\Lambda(\cdot)$ is
non-decreasing on $\mathbb{R}^{+}.$ ∎
## 8\. Proof of Theorem 6.3
In this section we give the proof of Theorem 6.3, which is based on the large
deviation theorem and the avalanche principle. Since the cocycle in Theorem
6.3 is $\nu$-Gevrey with $1/2<\nu<1$, we follow the method in [37] and
approximate the Gevrey cocycle by a truncated cocycle which is analytic in a
certain strip. For the continuity argument, our scheme is in the spirit of
[16] with some modifications. Compared to the result in [37] with $1/2<\nu<1$,
our large deviation theorem also works for more general cocycles (other than
the Schrödinger cocycle) and for rational frequencies, which is an analogue of
Bourgain-Jitomirskaya [16]. More concretely, if we truncate the cocycle
$A(\alpha,\theta)$ to $\widetilde{A}(\alpha,\theta)$, and denote by
$\widetilde{A}_{N}(\alpha,\theta)$ the transfer matrix, then
$\det\widetilde{A}_{N}(\alpha,\theta)$ depends on $\theta$ and is no longer
constant (in particular not identically 1), so we have to prove that the
subharmonic extension of $N^{-1}\ln\|\widetilde{A}_{N}(\alpha,\theta)\|$ is
bounded. This boundedness enables us to give an enhanced version of the
large deviation bound shown in [16]. For more results and methods for proving
the continuity of Lyapunov exponents, we refer readers to [4, 32, 36, 34].
### 8.1. Large deviation theorem.
In this subsection we give a large deviation theorem for the $\nu$-Gevrey
cocycle with $1/2<\nu<1.$ Let $A_{N}(\alpha,\theta)$ and $L_{N}(\alpha,A)$ be
the associated transfer matrix and finite Lyapunov exponent of the cocycle
$(\alpha,A)$. Then we have the following:
###### Proposition 5.
Let $\rho>0$, $\frac{1}{2}<\nu<1,$ $0<\kappa<1.$ Assume that $A\in
G_{\rho}^{\nu}(\mathbb{T},SL(2,\mathbb{R}))$, and
$\displaystyle\big{|}\alpha-\frac{a}{q}\big{|}<\frac{1}{q^{2}},\ \ (a,q)=1.$
Then there exist $c,C_{i}(\kappa)>0,i=1,2$, $\sigma_{1}>\sigma>1>\gamma>0$ and
$q_{0}(\kappa,\rho,\nu)\in{\mathbb{N}}^{+}$ such that for $q\geq q_{0}$,
$C_{1}(\kappa)q^{\sigma}<N<C_{2}(\kappa)q^{\sigma_{1}}$,
$\displaystyle
mes\Big{\\{}\theta:\big{|}\frac{1}{N}\ln\|A_{N}(\alpha,\theta)\|-L_{N}(\alpha,A)\big{|}>\kappa\Big{\\}}<\mathrm{e}^{-cq^{\gamma}}.$
#### 8.1.1. Averages of shifts of subharmonic functions.
Let $u=u(\theta)$ be a function on ${\mathbb{T}}$ having a subharmonic
extension on the strip $[|\mathrm{Im}\vartheta|\leq\rho]$, and
$\alpha\in{\mathbb{T}}.$ We prove that the mean of $u$ is close to the Fejér
average of $u(\theta)$ for $\theta$ outside a small set (here ‘close’ and
‘small’ are quantified in terms of the number of shifts considered).
Consider the Fejér kernel of order $p$:
(8.1)
$K_{R}^{p}(t)=\big{(}\frac{1}{R}\sum\limits_{j=0}^{R-1}\mathrm{e}^{2\pi\mathrm{i}jt}\big{)}^{p},$
then we have
$\begin{split}\big{|}K_{R}^{p}(t)\big{|}=\frac{1}{R^{p}}\big{|}\frac{1-\mathrm{e}^{2\pi\mathrm{i}Rt}}{1-\mathrm{e}^{2\pi\mathrm{i}t}}\big{|}^{p}\leq\frac{1}{R^{p}\|t\|_{{\mathbb{Z}}}^{p}}.\end{split}$
Notice also $\big{|}K_{R}^{p}(t)\big{|}\leq 1$, we have
(8.2)
$\begin{split}|K_{R}^{p}(t)|\leq\min\big{\\{}1,\frac{1}{R^{p}\|t\|_{\mathbb{Z}}^{p}}\big{\\}}\leq\frac{2}{1+R^{p}\|t\|_{\mathbb{Z}}^{p}}.\end{split}$
We can rewrite (8.1) as
$\begin{split}K_{R}^{p}(t)=\frac{1}{R^{p}}\sum\limits_{j=0}^{p(R-1)}c_{R}^{p}(j)\mathrm{e}^{2\pi\mathrm{i}jt},\end{split}$
where $c^{p}_{R}(j)$ are positive integers so that
$\begin{split}\frac{1}{R^{p}}\sum\limits_{j=0}^{p(R-1)}c^{p}_{R}(j)=1.\end{split}$
Notice that if $p=1$ then
$K_{R}^{1}(t)=\frac{1}{R}\sum\limits_{j=0}^{R-1}\mathrm{e}^{2\pi\mathrm{i}jt}$,
thus $c^{1}_{R}(j)=1$ for all $j.$
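The kernel bound (8.2) is easy to check numerically; the sketch below (the helper names are ours, purely for illustration) evaluates $K_{R}^{p}$ directly from the definition (8.1) and tests both bounds at a few sample points:

```python
import cmath, math

def fejer_power(t, R, p):
    """K_R^p(t) = ((1/R) * sum_{j=0}^{R-1} e^{2 pi i j t})^p, as in (8.1)."""
    s = sum(cmath.exp(2j * math.pi * j * t) for j in range(R)) / R
    return s ** p

def dist_to_Z(t):
    """||t||_Z, the distance from t to the nearest integer."""
    return abs(t - round(t))

R, p = 50, 3
for t in [0.013, 0.2, 0.37, 0.5, 0.81]:
    k = abs(fejer_power(t, R, p))
    # the two bounds in (8.2)
    assert k <= min(1.0, 1.0 / (R * dist_to_Z(t)) ** p) + 1e-12
    assert k <= 2.0 / (1 + (R * dist_to_Z(t)) ** p) + 1e-12
```

Note that $|K_{R}^{p}(t)|\leq 1$ always holds simply because the inner average has modulus at most $1$; the second factor in the minimum only helps once $\|t\|_{\mathbb{Z}}\gtrsim 1/R$.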
###### Proposition 6.
Let $\rho>0.$ Assume that $u:{\mathbb{T}}\rightarrow{\mathbb{R}}$ has a
bounded subharmonic extension to the strip $[|\mathrm{Im}\vartheta|\leq\rho]$
and $\|u\|_{C^{0}}\leq S.$ If
$\displaystyle\big{|}\alpha-\frac{a}{q}\big{|}<\frac{1}{q^{2}},\ \ (a,q)=1,$
and
$\sigma>1,0<\varsigma<\sigma^{-1},\varsigma(1-\sigma^{-1})^{-1}<p<(\sigma-1)^{-1},$
then
$\displaystyle
mes\Big{\\{}\theta:\Big{|}\frac{1}{R^{p}}\sum\limits_{j=0}^{p(R-1)}c_{R}^{p}(j)u(\theta+j\alpha)-[u(\theta)]_{\theta}\Big{|}>\varsigma_{2}R^{-\varsigma_{1}}\Big{\\}}<\frac{R^{2\varsigma_{1}}}{2^{8}\exp\\{R^{\varsigma_{3}}\\}},$
provided $R=q^{\sigma}\geq q(\varsigma,p,\sigma)\
(\varsigma_{1}=p(1-\sigma^{-1}),\varsigma_{2}=2^{p+5}S\rho^{-1},\varsigma_{3}=\frac{1+p}{\sigma}-p).$
###### Proof.
The proof is divided into the following three steps.
1\. Shift of the Fejér average. Since $u$ is subharmonic in the strip
$[|\mathrm{Im}\vartheta|\leq\rho]$, by Corollary 4.7 in [17] we get
(8.3) $\displaystyle|\widehat{u}(k)|\leq\frac{S}{\rho|k|}.$
Consider the Fejér average of $u(\theta)$, and notice that
$\displaystyle u(\theta+j\alpha)=[u(\theta)]_{\theta}+\sum\limits_{k\neq
0}\widehat{u}(k)\mathrm{e}^{2\pi\mathrm{i}k(\theta+j\alpha)},$
thus, by shortening $K_{R}^{p}(\cdot)$ as $K_{R}(\cdot),$ we get
(8.4)
$\displaystyle\frac{1}{R^{p}}\sum\limits_{j=0}^{p(R-1)}c_{R}^{p}(j)u(\theta+j\alpha)-[u(\theta)]_{\theta}$
$\displaystyle=$ $\displaystyle\sum\limits_{k\neq
0}\widehat{u}(k)\Big{(}\frac{1}{R^{p}}\sum\limits_{j=0}^{p(R-1)}c_{R}^{p}(j)\mathrm{e}^{2\pi\mathrm{i}jk\alpha}\Big{)}\mathrm{e}^{2\pi\mathrm{i}k\theta}$
$\displaystyle=$ $\displaystyle\sum\limits_{k\neq 0}\widehat{u}(k)\cdot
K_{R}(k\alpha)\mathrm{e}^{2\pi\mathrm{i}k\theta}:=w(\theta)=\mathcal{T}_{K}w(\theta)+\mathcal{R}_{K}w(\theta),$
where $K>q$ is a large constant that will be determined later.
2\. Estimate of $w(\theta)$. Let
$I_{\ell}=[\frac{q}{4}\ell,\frac{q}{4}(\ell+1)),$ then we
write $\mathcal{T}_{K}w(\theta)$ as
$\begin{split}\mathcal{T}_{K}w(\theta)=\sum\limits_{\ell=0}^{[4Kq^{-1}]+1}\sum\limits_{k\in
I_{\ell}}\widehat{u}(k)\cdot
K_{R}(k\alpha)\mathrm{e}^{2\pi\mathrm{i}k\theta}.\end{split}$
Since $\big{|}\alpha-\frac{a}{q}\big{|}<\frac{1}{q^{2}},$ it follows that for
$|k|\leq\frac{q}{2}$ with $k\neq 0$ we have
$|k\alpha-\frac{ka}{q}|<\frac{1}{2q},$ hence
$\|k\alpha\|_{{\mathbb{Z}}}>\frac{1}{2q}$. Let
$\alpha_{1},\cdots,\alpha_{q/4}$ be the decreasing rearrangement of
$\{\|k\alpha\|_{{\mathbb{Z}}}^{-1}\}_{0<|k|\leq\frac{q}{4}}$. Then we have
$\alpha_{i}\leq\frac{2q}{i}$. Moreover, for any interval $I$ of length $q/4$,
the same is true for $\{\|k\alpha\|_{{\mathbb{Z}}}^{-1}\}_{|k|\in I}$ if we
exclude at most one value of $k$. By (8.2) and (8.3), we have
(8.5)
$\begin{split}\sum\limits_{0<|k|<\frac{q}{4}}\big{|}\widehat{u}(k)K_{R}(k\alpha)\big{|}\leq\sum\limits_{0<|k|<\frac{q}{4}}\frac{S\|k\alpha\|_{{\mathbb{Z}}}^{-p}}{|k|\rho
R^{p}}\leq\sum\limits_{1\leq i<\frac{q}{4}}\frac{2S(2q/i)^{p}}{\rho
R^{p}}\leq\frac{2^{p+3}S}{\rho}\left(\frac{q}{R}\right)^{p},\end{split}$
and for each $\ell\geq 1,$ we have
$\displaystyle\sum\limits_{|k|\in
I_{\ell}}\big{|}\widehat{u}(k)K_{R}(k\alpha)\big{|}\leq\frac{2S}{\frac{q}{4}\rho\ell}\Big{(}1+\sum\limits_{1\leq
i<\frac{q}{4}}\frac{(2q/i)^{p}}{R^{p}}\Big{)}\leq\frac{8S}{\rho
q\ell}\big{(}1+c(q/R)^{p}\big{)}.$
Thus we have
(8.6)
$\displaystyle\Big{|}\sum\limits_{\ell=1}^{[4Kq^{-1}]+1}\sum\limits_{|k|\in
I_{\ell}}\widehat{u}(k)K_{R}(k\alpha)\mathrm{e}^{2\pi\mathrm{i}k\theta}\Big{|}$
$\displaystyle\leq\sum\limits_{\ell=1}^{[4Kq^{-1}]+1}\frac{8S}{\rho
q\ell}\big{(}1+c(q/R)^{p}\big{)}$
$\displaystyle\leq\frac{8S}{\rho
q}\big{(}1+c(q/R)^{p}\big{)}\ln\big{[}4Kq^{-1}+1\big{]}$
$\displaystyle\leq\frac{8S}{\rho q}\big{(}1+c(q/R)^{p}\big{)}\ln K.$
On the other hand, again by (8.3), we have
(8.7)
$\begin{split}\|\mathcal{R}_{K}w(\theta)\|^{2}_{\ell^{2}}\leq\sum\limits_{|k|\geq
K}\frac{S^{2}}{(\rho|k|)^{2}}\leq\frac{S^{2}}{\rho^{2}}K^{-1}.\end{split}$
3\. Choice of appropriate $K$ and $p$.
Now we can finish the proof of Proposition 6. By (8.4)–(8.6), we have
$\displaystyle\big{|}[u(\theta)]_{\theta}$
$\displaystyle-\frac{1}{R^{p}}\sum\limits_{j=0}^{p(R-1)}c_{R}^{p}(j)u(\theta+j\alpha)\big{|}$
$\displaystyle\leq\frac{2^{p+3}S}{\rho}\left(\frac{q}{R}\right)^{p}+\frac{8S}{\rho
q}\Big{(}1+c\left(\frac{q}{R}\right)^{p}\Big{)}\ln
K+|\mathcal{R}_{K}w(\theta)|.$
Take $q=R^{\sigma^{-1}},\sigma>1,$
$K=\exp\\{R^{\sigma^{-1}-p(1-\sigma^{-1})}\\}$ and
$0<\varsigma<\sigma^{-1},\varsigma(1-\sigma^{-1})^{-1}<p<(\sigma-1)^{-1}.$
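As a quick consistency check, the admissible range for $p$ above is nonempty exactly because $\varsigma<\sigma^{-1}$:

```latex
% the lower and upper bounds for p are compatible:
\varsigma(1-\sigma^{-1})^{-1}=\frac{\varsigma\sigma}{\sigma-1}
<\frac{1}{\sigma-1}=(\sigma-1)^{-1}
\iff \varsigma\sigma<1
\iff \varsigma<\sigma^{-1}.
```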
Once we fix the parameters above, we have
$\displaystyle\Big{|}\frac{2^{p+3}S}{\rho}\left(\frac{q}{R}\right)^{p}\Big{|}=2^{p+3}S\rho^{-1}R^{-\varsigma_{1}},$
$\displaystyle\Big{|}\frac{8S}{\rho
q}\Big{(}1+c\Big{(}\frac{q}{R}\Big{)}^{p}\Big{)}\ln K\Big{|}$
$\displaystyle\leq 16\rho^{-1}Sq^{-1}\ln K=16S\rho^{-1}R^{-\varsigma_{1}},$
where $\varsigma_{1}=p(1-\sigma^{-1})$. By Chebyshev’s inequality and (8.7),
one has
$\displaystyle mes\Big{\\{}\theta:|\mathcal{R}_{K}w(\theta)|$
$\displaystyle>2^{4}S\rho^{-1}R^{-\varsigma_{1}}\Big{\\}}\leq(2^{4}S\rho^{-1}R^{-\varsigma_{1}})^{-2}\|\mathcal{R}_{K}w\|^{2}_{\ell^{2}}$
$\displaystyle\leq 2^{-8}R^{2\varsigma_{1}}\exp\\{-R^{\varsigma_{3}}\\},$
where $\varsigma_{3}=(1+p)\sigma^{-1}-p$. By the above argument, the desired
result follows directly. ∎
#### 8.1.2. Trigonometric polynomial approximations.
Since $A\in G_{\rho}^{\nu}(\mathbb{T},SL(2,\mathbb{R}))$, we can write
$A(\theta)=\sum_{k\in\mathbb{Z}}\widehat{A}(k)\mathrm{e}^{2\pi\mathrm{i}k\theta}$
with estimate
(8.8) $\begin{split}|\widehat{A}(k)|\leq\|A\|_{\nu,\rho}\mathrm{e}^{-\rho|2\pi
k|^{\nu}},\ \ \forall k\in{\mathbb{Z}}.\end{split}$
For any $N>0,$ denote $\widetilde{N}=N^{b\nu^{-1}},$ where
$b=\delta(\nu^{-1}-1)^{-1}$ and $\delta\in(0,1)$ will be fixed later. Once we
have this, we can consider the truncated cocycle
$\widetilde{A}(\theta):=\mathcal{T}_{\widetilde{N}}A(\theta),$ and denote by
$\widetilde{A}_{N}(\alpha,\theta)$ and
$\widetilde{L}_{N}(\alpha,\widetilde{A})$ the associated transfer matrix and
finite Lyapunov exponent obtained by substituting $\widetilde{A}(\theta)$ for
$A(\theta)$. Then we have the following lemma.
###### Lemma 8.1.
There exists $N(\rho,\nu,\|A\|_{\nu,\rho})\in{\mathbb{N}}$,
$c=c(\rho,\nu,\|A\|_{\nu,\rho})$ such that if $N\geq
N(\rho,\nu,\|A\|_{\nu,\rho})$, then we have the following estimates:
(8.9)
$\|A(\theta)-\widetilde{A}(\theta)\|\leq\mathrm{e}^{-c\widetilde{N}^{\nu}}=\mathrm{e}^{-cN^{b}},$
$\big{|}N^{-1}\ln\|A_{N}(\alpha,\theta)\|-N^{-1}\ln\|\widetilde{A}_{N}(\alpha,\theta)\|\big{|}\leq\mathrm{e}^{-\frac{c}{2}N^{b}},$
$\big{|}L_{N}(\alpha,A)-\widetilde{L}_{N}(\alpha,\widetilde{A})\big{|}<\mathrm{e}^{-\frac{c}{2}N^{b}}.$
###### Proof.
The estimate (8.9) follows directly from (8.8). Moreover, by a telescoping
argument, for sufficiently large $N\geq N(\rho,\nu,\|A\|_{\nu,\rho})$, we
have
$\begin{split}\big{\|}A_{N}(\alpha,\theta)-\widetilde{A}_{N}(\alpha,\theta)\big{\|}\leq(\|A\|_{\nu,\rho}+1)^{N}\mathrm{e}^{-cN^{b}}\leq\mathrm{e}^{-\frac{c}{2}N^{b}}.\end{split}$
It follows that
$\begin{split}\Big{|}\frac{1}{N}\ln\|A_{N}(\alpha,\theta)\|-\frac{1}{N}\ln\|\widetilde{A}_{N}(\alpha,\theta)\|\Big{|}\leq\frac{1}{N}\big{\|}A_{N}(\theta)-\widetilde{A}_{N}(\theta)\big{\|}<\mathrm{e}^{-\frac{c}{2}N^{b}}.\end{split}$
By averaging, one thus has
$\big{|}L_{N}(\alpha,A)-\widetilde{L}_{N}(\alpha,\widetilde{A})\big{|}<\mathrm{e}^{-\frac{c}{2}N^{b}}.$
∎
Since $\widetilde{A}(\theta)$ is a trigonometric polynomial, one can
analytically continue $\widetilde{A}(\theta)$ to an analytic function.
Indeed, let
$\rho_{N}=\frac{\rho}{4\pi}\widetilde{N}^{\nu-1}=\frac{\rho}{4\pi}N^{-b(\nu^{-1}-1)}:=\frac{\rho}{4\pi}N^{-\delta},$
and set $\vartheta=\theta+\mathrm{i}\widetilde{\theta}$, then
$\widetilde{A}(\vartheta)$ is analytic in the strip
$|\widetilde{\theta}|\leq\rho_{N}$:
(8.10)
$\begin{split}\|\widetilde{A}\|_{\rho_{N}}^{*}=\sum\limits_{|k|<\widetilde{N}}|\widehat{A}(k)|\mathrm{e}^{|2\pi
k\rho_{N}|}\leq\|A\|_{\nu,\rho}\sum\limits_{|k|<\widetilde{N}}\mathrm{e}^{-\rho|2\pi
k|^{\nu}}\mathrm{e}^{|2\pi k|\rho_{N}}:=\mathrm{e}^{C_{1}}<\infty.\end{split}$
For $\vartheta=\theta+\mathrm{i}\widetilde{\theta}$ with
$|\widetilde{\theta}|\leq\rho_{N}$, set
(8.11)
$\begin{split}\tilde{u}_{N}(\vartheta):=\frac{1}{N}\ln\|\widetilde{A}_{N}(\vartheta)\|.\end{split}$
In the following lemma, we will prove that $\tilde{u}_{N}(\vartheta)$ is
indeed a bounded subharmonic function in the strip
$[|\mathrm{Im}\vartheta|<\rho_{N}].$
###### Lemma 8.2.
We have the estimate
(8.12)
$\begin{split}\sup_{\theta\in{\mathbb{T}}}\sup_{|\widetilde{\theta}|\leq\rho_{N}}|\tilde{u}_{N}(\vartheta)|\leq\max\\{\ln
2,C_{1}\\},\end{split}$
where $C_{1}$ is the one in (8.10).
###### Proof.
We will prove that the analytic continuation $\widetilde{A}(\vartheta)$ in the
strip $[|\mathrm{Im}\vartheta|<\rho_{N}]$ is not singular, which implies that
$\tilde{u}_{N}(\vartheta)$ is a bounded subharmonic function.
We first estimate
$\|\widetilde{A}(\vartheta)-\widetilde{A}(\theta)\|$ as follows:
$\displaystyle\|\widetilde{A}(\vartheta)-\widetilde{A}(\theta)\|$
$\displaystyle\leq$
$\displaystyle\sum_{0<|k|<\widetilde{N}}|\widehat{\tilde{A}}(k)|\sup_{\theta\in{\mathbb{T}},|\widetilde{\theta}|\leq\rho_{N}}|\mathrm{e}^{2\pi\mathrm{i}k\theta}(\mathrm{e}^{-2\pi
k\widetilde{\theta}}-1)|$ $\displaystyle=$ $\displaystyle\sum_{0<|k|\leq
N^{\delta/2}}|\widehat{\tilde{A}}(k)|\mathrm{e}^{|2\pi
k|\rho_{N}}(1-\mathrm{e}^{-|2\pi k\rho_{N}|})$ $\displaystyle+$
$\displaystyle\sum_{N^{\delta/2}<|k|<N^{b\nu^{-1}}}|\widehat{A}(k)|\mathrm{e}^{|2\pi
k|^{\nu}\rho}(\mathrm{e}^{|2\pi k|\rho_{N}}-1)\mathrm{e}^{-|2\pi
k|^{\nu}\rho}.$
To estimate the first term, note that if $0<|k|\leq N^{\delta/2}$, then one has
$|2\pi k|\rho_{N}\leq\rho N^{-\delta/2}/2\ll 1,$ which implies
$\begin{split}1-\mathrm{e}^{-|2\pi k\rho_{N}|}\leq 2|2\pi k\rho_{N}|\leq\rho
N^{-\delta/2}.\end{split}$
To estimate the second term, note that for all $k$ with
$N^{\delta/2}<|k|<N^{b\nu^{-1}},$ one has
$\begin{split}|2\pi k|^{\nu}\rho/2-|2\pi k|\rho_{N}=2^{-1}\rho(|2\pi
k|^{\nu}-|k|N^{-\delta})\geq 0,\end{split}$
which implies
$\begin{split}(\mathrm{e}^{|2\pi k|\rho_{N}}-1)\mathrm{e}^{-|2\pi
k|^{\nu}\rho/2}<2,\ \forall N^{\delta/2}<|k|<N^{b\nu^{-1}}.\end{split}$
Consequently, one has
(8.13) $\displaystyle\|\widetilde{A}(\vartheta)-\widetilde{A}(\theta)\|$
$\displaystyle\leq$ $\displaystyle\rho
N^{-\delta/2}\|\tilde{A}\|_{\rho_{N}}^{*}+2\|A\|_{\nu,\rho}\mathrm{e}^{-|2\pi
N^{\delta/2}|^{\nu}\rho/2}$ $\displaystyle<$ $\displaystyle 2\rho
N^{-\delta/2}\mathrm{e}^{C_{1}}.$
To see the second estimate, one only needs to check that
$f(x)=x^{\nu}-(2\pi)^{-1}xN^{-\delta}>0$ on the interval $[2\pi,2\pi
N^{b\nu^{-1}}].$
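Indeed, since $b(\nu^{-1}-1)=\delta$ and $\nu<1$, for $x\in[2\pi,2\pi N^{b\nu^{-1}}]$ one has

```latex
x^{1-\nu}\leq(2\pi)^{1-\nu}N^{b(\nu^{-1}-1)}=(2\pi)^{1-\nu}N^{\delta}<2\pi N^{\delta},
\qquad\text{hence}\qquad
f(x)=x^{\nu}\bigl(1-(2\pi)^{-1}x^{1-\nu}N^{-\delta}\bigr)>0.
```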
Now we estimate $|\det{\widetilde{A}(\vartheta)}|.$ First, the
inequality in (8.13) implies
(8.14)
$\displaystyle|\det{\widetilde{A}(\vartheta)}-\det{\widetilde{A}(\theta)}|\leq
4\|\widetilde{A}(\vartheta)\|\|\widetilde{A}(\vartheta)-\widetilde{A}(\theta)\|\leq
8\rho\mathrm{e}^{2C_{1}}N^{-\delta/2}\ll 1.$
Moreover, since $A\in SL(2,{\mathbb{R}})$, by Lemma 8.1 we have
(8.15) $\begin{split}|1-\det{\widetilde{A}(\theta)}|\leq
8\|A(\theta)\|\|A(\theta)-\tilde{A}(\theta)\|\leq
C\mathrm{e}^{-cN^{b}},\end{split}$
that is, $|\det{\widetilde{A}(\theta)}|\geq 1/2$ for all
$\theta\in{\mathbb{T}},$ which, together with (8.14), yields
(8.16) $\displaystyle|\det{\widetilde{A}(\vartheta)}|\geq
1/4,\forall(\theta,\widetilde{\theta})\in{\mathbb{T}}\times[-\rho_{N},\rho_{N}].$
Once we get the inequality in (8.16), we are ready to estimate
$|\tilde{u}_{N}(\vartheta)|.$ Indeed,
$\displaystyle\|\widetilde{A}_{N}(\vartheta)\|^{2}$
$\displaystyle\geq|\det{\widetilde{A}_{N}(\vartheta)}|=|\Pi_{\ell=0}^{N-1}\det{\widetilde{A}(\vartheta+\ell\alpha)}|$
$\displaystyle=\Pi_{\ell=0}^{N-1}|\det{\widetilde{A}(\vartheta+\ell\alpha)}|\geq
4^{-N},$
which yields
$2^{-N}\leq\|\widetilde{A}_{N}(\vartheta)\|\leq\mathrm{e}^{NC_{1}},$ or
$\displaystyle\sup_{\theta\in{\mathbb{T}}}\sup_{|\widetilde{\theta}|\leq\rho_{N}}|\tilde{u}_{N}(\vartheta)|\leq\max\\{\ln
2,C_{1}\\}.$
∎
#### 8.1.3. Proof of Proposition 5
In this section we will give the proof of Proposition 5 by applying
Proposition 6. First by Lemma 8.1, we have
$\displaystyle|\frac{1}{N}\ln\|A_{N}(\theta)\|-L_{N}(\alpha,A)|$
$\displaystyle\leq|N^{-1}\ln\|A_{N}(\theta)\|-\tilde{u}_{N}(\theta)|$
$\displaystyle+|\tilde{u}_{N}(\theta)-[\tilde{u}_{N}(\theta)]_{\theta}|+|[\tilde{u}_{N}(\theta)]_{\theta}-L_{N}(\alpha,A)|$
(8.17)
$\displaystyle\leq|\tilde{u}_{N}(\theta)-[\tilde{u}_{N}(\theta)]_{\theta}|+2\mathrm{e}^{-\frac{c}{2}N^{b}}.$
Thus we only need to estimate
$|\tilde{u}_{N}(\theta)-[\tilde{u}_{N}(\theta)]_{\theta}|,$ which is
controlled by the sum
(8.18)
$\begin{split}|\tilde{u}_{N}(\theta)-F_{R,p}[\tilde{u}_{N}](\theta)|+|F_{R,p}[\tilde{u}_{N}](\theta)-[\tilde{u}_{N}(\theta)]_{\theta}|,\end{split}$
where
$F_{R,p}[\tilde{u}_{N}](\theta)=\frac{1}{R^{p}}\sum\limits_{j=0}^{p(R-1)}c_{R}^{p}(j)\tilde{u}_{N}(\theta+j\alpha).$
In the following, we will give the estimates of the two terms above. First we
would like to bound $|\tilde{u}_{N}(\theta)-\tilde{u}_{N}(\theta+\alpha)|$.
For the function $\tilde{u}_{N}(\theta)$ defined by (8.11), the inequality in
(8.12) and $C_{1}>\ln 2$ imply $\tilde{u}_{N}(\vartheta)$ is a bounded
subharmonic function on $[|\mathrm{Im}\vartheta|<\rho_{N}].$ It follows that
(8.19) $\begin{split}\|\tilde{u}_{N}(\vartheta)\|\leq C_{1}=2^{-1}C_{2},\
C_{2}=2C_{1}.\end{split}$
That is, the function $\tilde{u}_{N}(\theta)$ satisfies the hypotheses in
Proposition 6. Consequently,
$\displaystyle\Big{|}\tilde{u}_{N}(\theta)-\tilde{u}_{N}(\theta+\alpha)\Big{|}$
$\displaystyle=\frac{1}{N}\Big{|}\ln\|\widetilde{A}_{N}(\theta)\|-\ln\|\widetilde{A}_{N}(\theta+\alpha)\|\Big{|}$
$\displaystyle=\Big{|}\frac{1}{N}\ln\frac{\|\widetilde{A}_{N}(\theta)\|}{\|\widetilde{A}_{N}(\theta+\alpha)\|}\Big{|}$
$\displaystyle=\Big{|}\frac{1}{N}\ln\frac{\|\widetilde{A}(\theta+(N-1)\alpha)\cdots\widetilde{A}(\theta+\alpha)\widetilde{A}(\theta)\|}{\|\widetilde{A}(\theta+N\alpha)\widetilde{A}(\theta+(N-1)\alpha)\cdots\widetilde{A}(\theta+\alpha)\|}\Big{|}$
$\displaystyle\leq\frac{1}{N}\ln\big{(}\|\widetilde{A}(\theta+N\alpha)^{-1}\|\cdot\|\widetilde{A}(\theta)\|\big{)}.$
Thus we only need to estimate $\|\widetilde{A}(\theta+N\alpha)^{-1}\|$.
Indeed, (8.15) implies
$\displaystyle\|\widetilde{A}(\theta)^{-1}\|$
$\displaystyle\leq|\det{\widetilde{A}(\theta)}|^{-1}\|\widetilde{A}(\theta)\|$
$\displaystyle\leq\\{1+2|1-\det{\widetilde{A}(\theta)}|\\}\|\widetilde{A}(\theta)\|\leq
2\mathrm{e}^{C_{1}}\leq\mathrm{e}^{2C_{1}}.$
Once we have this, we get
(8.20)
$\Big{|}\tilde{u}_{N}(\theta)-\tilde{u}_{N}(\theta+\alpha)\Big{|}\leq\frac{3C_{2}}{2N}.$
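Explicitly, (8.20) follows by combining the last two displays with (8.10) and $C_{2}=2C_{1}$:

```latex
\frac{1}{N}\ln\bigl(\|\widetilde{A}(\theta+N\alpha)^{-1}\|\cdot\|\widetilde{A}(\theta)\|\bigr)
\leq\frac{1}{N}\bigl(2C_{1}+C_{1}\bigr)
=\frac{3C_{1}}{N}=\frac{3C_{2}}{2N}.
```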
For the fixed $\nu$ with $1/2<\nu<1,$ there exists $\delta\in(0,1)$ such that
$\nu^{-1}<1+\delta.$ Once we fix $\nu$ and $\delta$ in this way, we set
$b=\delta(\nu^{-1}-1)^{-1}>1.$ Then we choose $\sigma$ and $\varsigma$ in
Proposition 6 as $1<\sigma<\min\\{2,\delta^{-1}\\}$, $\varsigma=\delta.$ That
is, the parameters $\sigma$ and $p>1$ in Proposition 6 satisfy
$\displaystyle\delta=b(1/\nu-1)<\frac{1}{\sigma}<1,\ \
\frac{\delta\sigma}{\sigma-1}<p<\frac{1}{\sigma-1}.$
Take $\gamma=1+p(1-\sigma),\sigma_{1}=\frac{p}{\delta}(\sigma-1)$. It is
obvious that
$\displaystyle 1>\gamma=1+p(1-\sigma)>0,\ \
\sigma-\sigma_{1}=\frac{\delta\sigma-p(\sigma-1)}{\delta}<0.$
Notice that $1<\sigma<\delta^{-1}$ and $\nu^{-1}-1<\delta<1,$ thus
$\delta,\sigma\rightarrow 1$ as $\nu\rightarrow 1/2,$ which implies
$\sigma_{1}\rightarrow\sigma,p\rightarrow\infty$ and $\gamma\rightarrow 0$ as
$\nu\rightarrow 1/2.$
For $q,R$ with $R=q^{\sigma}$ (as Proposition 6), set
$\frac{9pC_{2}}{\kappa}q^{\sigma}<N<\big{(}\frac{\kappa\rho}{2^{p+6}\pi
C_{2}}\big{)}^{\frac{1}{\delta}}q^{\sigma_{1}}$ and $K$ as the one in
Proposition 6. Now we give the estimate of the first term in (8.18). More
concretely,
$\displaystyle\Big{|}F_{R,p}[\tilde{u}_{N}](\theta)-\tilde{u}_{N}(\theta)\Big{|}$
$\displaystyle\leq\frac{1}{R^{p}}\sum\limits_{j=0}^{p(R-1)}|\tilde{u}_{N}(\theta+j\alpha)-\tilde{u}_{N}(\theta)|c_{R}^{p}(j)$
$\displaystyle\leq\frac{1}{R^{p}}\sum\limits_{j=0}^{p(R-1)}c_{R}^{p}(j)\frac{3jC_{2}}{2N}<\frac{3p(R-1)C_{2}}{2N}$
(8.21) $\displaystyle\leq
3p(R-1)C_{2}\frac{\kappa}{18pC_{2}}q^{-\sigma}\leq\frac{\kappa}{6},$
where the second inequality is by (8.20).
We will apply Proposition 6 to get the estimate of the second term in (8.18).
More concretely, let $\varsigma_{i},i=1,2,3$ be the ones in Proposition 6 with
$2^{-1}C_{2}$ and $\rho_{N}$ in place of $S$ and $\rho,$ respectively. Noting
that $N<\big{(}\frac{\kappa\rho}{2^{p+6}\pi
C_{2}}\big{)}^{\frac{1}{\delta}}q^{\sigma_{1}}$, we get
$\displaystyle\varsigma_{2}R^{-\varsigma_{1}}=2^{p+5}2^{-1}C_{2}4\pi\rho^{-1}N^{\delta}q^{-p(\sigma-1)}<\kappa.$
Thus by Proposition 6 we know that there is a set such that for all $\theta$
outside this set we have
(8.22)
$\displaystyle\Big{|}F_{R,p}[\tilde{u}_{N}](\theta)-[\tilde{u}_{N}(\theta)]_{\theta}\Big{|}\leq\varsigma_{2}R^{-\varsigma_{1}}<\kappa.$
Moreover, the measure of this set is less than
(8.23) $\displaystyle
2^{-8}R^{2\varsigma_{1}}\exp\\{-R^{\varsigma_{3}}\\}=2^{-8}q^{2p(\sigma-1)}\exp\\{-q^{\gamma}\\}<\exp\\{-2^{-1}q^{\gamma}\\}.$
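The equality in (8.23) is a direct computation from $R=q^{\sigma}$ and the definitions $\varsigma_{1}=p(1-\sigma^{-1})$, $\varsigma_{3}=(1+p)\sigma^{-1}-p$, $\gamma=1+p(1-\sigma)$:

```latex
R^{2\varsigma_{1}}=q^{2\sigma p(1-\sigma^{-1})}=q^{2p(\sigma-1)},
\qquad
R^{\varsigma_{3}}=q^{\sigma((1+p)\sigma^{-1}-p)}=q^{1+p(1-\sigma)}=q^{\gamma}.
```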
Set
$C_{1}(\kappa)=\frac{9pC_{2}}{\kappa},C_{2}(\kappa)=\big{(}\frac{\kappa\rho}{2^{p+6}\pi
C_{2}}\big{)}^{\frac{1}{\delta}}$, $c=\frac{1}{2}$, and let $q\geq q_{0},$ with
$q_{0}$ depending on $\kappa,\rho,\nu$ (by Lemma 8.1) and sufficiently large;
then by (8.1.3)-(8.23) we finish the proof of Proposition 5.
### 8.2. Application of the avalanche principle.
###### Proposition 7 ([28, 17]).
Let $A_{1}$, $A_{2}$, $\cdots$, $A_{n}$ be a sequence in $SL(2,{\mathbb{R}})$
satisfying the conditions
$\displaystyle\min\limits_{1\leq j\leq n}$ $\displaystyle\|A_{j}\|\geq\mu>n,$
$\displaystyle\max\limits_{1\leq j<n}$
$\displaystyle\left|\ln\|A_{j}\|+\ln\|A_{j+1}\|-\ln\|A_{j+1}A_{j}\|\right|<\frac{1}{2}\ln\mu.$
Then there exists a constant $C_{A}<\infty$ such that
$\displaystyle\Big{|}\ln\|\prod\limits_{j=1}^{n}A_{j}\|+\sum\limits_{j=2}^{n-1}\ln\|A_{j}\|-\sum\limits_{j=1}^{n-1}\ln\|A_{j+1}A_{j}\|\Big{|}<C_{A}\frac{n}{\mu}.$
Following the ideas in [16], in the case of a positive Lyapunov exponent, the
large deviation theorem makes it possible to apply the avalanche principle
(Proposition 7) to $A(\theta+jN\alpha)$ for $\theta$ in a set of large measure,
and therefore to pass to a larger scale.
###### Lemma 8.3.
Assume that $|\alpha-\frac{a}{q}|<\frac{1}{q^{2}}$, $(a,q)=1$. Let
$C_{1}(\kappa)q^{\sigma}<N<C_{2}(\kappa)q^{\sigma_{1}}$ and $q\geq
q_{0}(\kappa)$ be the same as Proposition 5. Assume that
$L_{N}(\alpha,A)>100\kappa>0$ and
$L_{2N}(\alpha,A)>\frac{19}{20}L_{N}(\alpha,A)$. Then for $N^{\prime}$ such
that $m=N^{\prime}N^{-1}$ satisfies
$\mathrm{e}^{\frac{c}{2}q^{\gamma/4}}<m<\mathrm{e}^{\frac{c}{2}q^{\gamma}}$,
we have
$\displaystyle\Big{|}L_{N^{\prime}}(\alpha,A)+L_{N}(\alpha,A)-2L_{2N}(\alpha,A)\Big{|}<C\mathrm{e}^{-\frac{c}{2}q^{\gamma/4}},$
where $c$ is the one from the large deviation bound of Proposition 5.
###### Proof.
We use the avalanche principle (Proposition 7) on
$A_{j}^{N}(\theta)=A_{N}(\theta+jN\alpha)$ with $\theta$ being restricted to
the set $\Omega\subset{\mathbb{T}}$, defined by $2m$ conditions:
$\displaystyle\Big{|}\frac{1}{N}\ln\|A_{j}^{N}(\theta)\|-L_{N}(\alpha,A)\Big{|}\leq\kappa,\
\ 1\leq j\leq m,$
$\displaystyle\Big{|}\frac{1}{2N}\ln\|A_{j}^{2N}(\theta)\|-L_{2N}(\alpha,A)\Big{|}\leq\kappa,\
\ 1\leq j\leq m.$
By Proposition 5, we have for any $j$
$\displaystyle
mes\Big{\\{}\theta:\big{|}\frac{1}{N}\ln\|A_{j}^{N}(\theta)\|-L_{N}(\alpha,A)\big{|}>\kappa\Big{\\}}<\mathrm{e}^{-cq^{\gamma}},$
$\displaystyle
mes\Big{\\{}\theta:\big{|}\frac{1}{2N}\ln\|A_{j}^{2N}(\theta)\|-L_{2N}(\alpha,A)\big{|}>\kappa\Big{\\}}<\mathrm{e}^{-cq^{\gamma}}.$
Thus we have
(8.24) $\displaystyle
mes({\mathbb{T}}\setminus\Omega)<2m\mathrm{e}^{-cq^{\gamma}}.$
For each $A_{j}^{N}(\theta)$ with $\theta\in\Omega$,
$\displaystyle\mathrm{e}^{N(L_{N}(\alpha,A)-\kappa)}<\|A_{j}^{N}(\theta)\|<\mathrm{e}^{N(L_{N}(\alpha,A)+\kappa)}.$
Note that since $L_{N}(\alpha,A)>100\kappa$, then
$\displaystyle\|A_{j}^{N}(\theta)\|>\mathrm{e}^{\frac{99}{100}NL_{N}(\alpha,A)}:=\mu.$
For large enough $q$, and hence $N$ by hypothesis, we have $\mu>2m$ (since
$\sigma>1>\gamma$). Also for $j<m$, by the fact that
$A_{j+1}^{N}(\theta)A_{j}^{N}(\theta)=A_{j}^{2N}(\theta)$, we have
$\displaystyle\big{|}\ln\|A_{j}^{N}(\theta)\|$
$\displaystyle+\ln\|A_{j+1}^{N}(\theta)\|-\ln\|A_{j+1}^{N}(\theta)A_{j}^{N}(\theta)\|\big{|}$
$\displaystyle<4N\kappa+2N|L_{N}(\alpha,A)-L_{2N}(\alpha,A)|$
$\displaystyle<\frac{1}{25}NL_{N}(\alpha,A)+2N(\frac{1}{20}L_{N}(\alpha,A))<\frac{1}{2}\ln\mu,$
where the second inequality follows by
$L_{2N}(\alpha,A)>\frac{19}{20}L_{N}(\alpha,A)$. Thus, we can apply the
avalanche principle (Proposition 7) for $\theta\in\Omega$ to obtain
$\displaystyle\Big{|}\ln\|\prod\limits_{j=1}^{m}A_{j}^{N}(\theta)\|$
$\displaystyle+\sum\limits_{j=2}^{m-1}\ln\|A_{j}^{N}(\theta)\|-\sum\limits_{j=1}^{m-1}\ln\|A_{j+1}^{N}(\theta)A_{j}^{N}(\theta)\|\Big{|}$
$\displaystyle<C_{A}m/\mu<m\mathrm{e}^{-\frac{1}{2}NL_{N}(\alpha,A)}.$
Integrating on $\Omega$, we get
$\displaystyle\Big{|}\int_{\Omega}\ln\|A_{N^{\prime}}(\theta)\|d\theta$
$\displaystyle+\sum\limits_{j=2}^{m-1}\int_{\Omega}\ln\|A_{N}(\theta+jN\alpha)\|d\theta$
$\displaystyle-\sum\limits_{j=1}^{m-1}\int_{\Omega}\ln\|A_{2N}(\theta+jN\alpha)\|d\theta\Big{|}<m\mathrm{e}^{-\frac{1}{2}NL_{N}(\alpha,A)},$
therefore, recalling (8.24) and the assumption $N>C_{1}(\kappa)q^{\sigma}$, we
have
$\displaystyle\Big{|}L_{N^{\prime}}(\alpha,A)$
$\displaystyle+\frac{m-2}{m}L_{N}(\alpha,A)-\frac{2(m-1)}{m}L_{2N}(\alpha,A)\Big{|}$
$\displaystyle<mN^{\prime-1}\mathrm{e}^{-\frac{1}{2}NL_{N}(\alpha,A)}+C\mathrm{e}^{-\frac{c}{2}q^{\gamma}}<C\mathrm{e}^{-\frac{c}{2}q^{\gamma}}.$
It follows that
$\displaystyle|L_{N^{\prime}}(\alpha,A)$
$\displaystyle+L_{N}(\alpha,A)-2L_{2N}(\alpha,A)|$
$\displaystyle<C\mathrm{e}^{-\frac{c}{2}q^{\gamma}}+2m^{-1}|L_{N}(\alpha,A)-L_{2N}(\alpha,A)|$
$\displaystyle<C\mathrm{e}^{-\frac{c}{2}q^{\gamma}}+L_{N}(\alpha,A)(10m)^{-1}<C\mathrm{e}^{-\frac{c}{2}q^{\gamma/4}},$
where the second inequality is by
$L_{2N}(\alpha,A)>\frac{19}{20}L_{N}(\alpha,A)$ and the last inequality is by
$m>\mathrm{e}^{\frac{c}{2}q^{\gamma/4}}$. ∎
Actually, the condition “$L_{2N}(\alpha,A)>\frac{19}{20}L_{N}(\alpha,A)$” is
not necessary if $q$ is sufficiently large and $L(\alpha,A)>0$.
###### Lemma 8.4.
Assume that $L(\alpha,A)>100\kappa>0,$ then there exists
$N_{0}\in{\mathbb{N}}$ with
$C_{1}(\kappa)q_{0}^{\sigma}<N_{0}<C_{2}(\kappa)q_{0}^{\sigma_{1}}$, where
$q_{0}$ is the one defined in Proposition 5, such that
(8.25) $\displaystyle L_{2N_{0}}(\alpha,A)>\frac{99}{100}L_{N_{0}}(\alpha,A).$
###### Proof.
Note that for any $n$, by subadditivity, we have
$\displaystyle 100\kappa<L(\alpha,A)=\inf L_{n}(\alpha,A)\leq
L_{2n}(\alpha,A)\leq L_{n}(\alpha,A)\leq C_{1},$
where $C_{1}$ is the one in (8.10).
Set $j_{0}=\big{[}(\ln(100/99))^{-1}\ln(C_{1}/100\kappa)\big{]},$ that is
(8.26)
$\displaystyle(99/100)^{j_{0}+1}C_{1}<100\kappa\leq(99/100)^{j_{0}}C_{1}.$
Consider the sequence $\\{L_{2^{j}N}(\alpha,A)\\}$ where
$N=[C_{1}(\kappa)q_{0}^{\sigma}]+1,$ $j\in{\mathbb{N}}$. If
$\displaystyle L_{2^{j+1}N}(\alpha,A)\leq(99/100)L_{2^{j}N}(\alpha,A)$
holds for all $0\leq j\leq j_{0},$ then
$\displaystyle 100\kappa<L_{2^{j_{0}+1}N}(\alpha,A)$
$\displaystyle\leq(99/100)^{j_{0}+1}L_{N}(\alpha,A)\leq(99/100)^{j_{0}+1}C_{1}<100\kappa,$
where the last inequality is by the first inequality in (8.26). Thus there
exists $j_{*}$ with $0\leq j_{*}\leq j_{0}$ such that
$\displaystyle L_{2^{j_{*}+1}N}(\alpha,A)>(99/100)L_{2^{j_{*}}N}(\alpha,A).$
Moreover, since $j_{0}$ is fixed, we can set $q_{0}$ large enough such that
$\displaystyle
2^{j_{0}}<2^{-1}C_{1}(\kappa)^{-1}C_{2}(\kappa)q_{0}^{\sigma_{1}-\sigma}.$
Set $N_{0}=2^{j_{*}}N.$ Thus we have the estimates
$\displaystyle C_{1}(\kappa)q_{0}^{\sigma}\leq N\leq N_{0}\leq 2^{j_{0}}N\leq
C_{2}(\kappa)q_{0}^{\sigma_{1}}$
with
$\displaystyle L_{2N_{0}}(\alpha,A)>(99/100)L_{N_{0}}(\alpha,A).$
∎
### 8.3. Inductive argument.
Once one has Lemma 8.3, one can follow the induction arguments developed in
[16]. However, in our case there is an upper bound on $N$ in the large
deviation theorem. Thus we can only deal with Diophantine frequencies and
their continued fraction expansions; moreover, we need to treat the
Diophantine frequencies and their continued fraction expansions separately.
Let $p_{n}/q_{n}$ be the continued fraction expansion of $\alpha$. To apply
Lemma 8.3 inductively, we first fix $\alpha\in DC(v,\tau),$ and inductively
choose the following sequences:
(8.27) $\displaystyle
q_{0}=\tilde{q}_{0}<N_{0}<\tilde{q}_{1}<N_{1}<\cdots<N_{s}<\tilde{q}_{s+1}<N_{s+1}<\cdots,$
where $\tilde{p}_{i}/\tilde{q}_{i}$ is a subsequence of the continued fraction
expansion of $\alpha$ with
(8.28)
$\text{$\tilde{q}_{s+1}$ is the smallest $q_{j}$ such that $\tilde{q}_{s+1}>\mathrm{e}^{\tilde{q}_{s}^{\gamma/2}}$},\ s\geq 0,$
(8.29)
$\displaystyle C_{1}(\kappa)\tilde{q}_{s}^{\sigma}<N_{s}<C_{2}(\kappa)\tilde{q}_{s}^{\sigma_{1}},\ \ \tilde{q}_{s}|N_{s},\ s\geq 0,$
(8.30)
$\displaystyle N_{s+1}=m_{s+1}N_{s},\ \ \mathrm{e}^{\frac{c}{2}\tilde{q}_{s}^{\gamma/4}}<m_{s+1}<2m_{s+1}<\mathrm{e}^{\frac{c}{2}\tilde{q}_{s}^{\gamma}},\ s\geq 0.$
Actually, we can inductively select a sequence $\\{N_{s}\\}$ such that
(8.28)-(8.30) hold; indeed, the starting case $s=0$ follows from Lemma 8.4.
First by the selection of $\tilde{q}_{s}$ and the Diophantine condition of
$\alpha$, one can check that
$\mathrm{e}^{\tilde{q}_{s}^{\gamma/2}}<\tilde{q}_{s+1}<\mathrm{e}^{2\tau\tilde{q}_{s}^{\gamma/2}}$.
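A sketch of this check, assuming the Diophantine condition takes the standard form $\|q\alpha\|_{{\mathbb{Z}}}\geq v|q|^{-\tau}$: since consecutive denominators satisfy $\|q_{j-1}\alpha\|_{{\mathbb{Z}}}<q_{j}^{-1}$, one gets $q_{j}<v^{-1}q_{j-1}^{\tau}$, and the minimality of $\tilde{q}_{s+1}=q_{j}$ forces $q_{j-1}\leq\mathrm{e}^{\tilde{q}_{s}^{\gamma/2}}$, so

```latex
\tilde{q}_{s+1}=q_{j}
<v^{-1}q_{j-1}^{\tau}
\leq v^{-1}\mathrm{e}^{\tau\tilde{q}_{s}^{\gamma/2}}
<\mathrm{e}^{2\tau\tilde{q}_{s}^{\gamma/2}}
\quad\text{for }\tilde{q}_{s}\text{ sufficiently large,}
```

while the lower bound holds by the definition (8.28).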
Take $N_{s+1}=N_{s}m_{s+1}$ with
$m_{s+1}:=\tilde{q}_{s+1}([\tilde{q}_{s+1}^{\sigma-1}]+1).$ It’s easy to check
that
$\displaystyle
C_{1}(\kappa)\tilde{q}_{s+1}^{\sigma}<C_{1}(\kappa)\tilde{q}_{s}^{\sigma}\tilde{q}_{s+1}^{\sigma}<N_{s+1}<2C_{2}(\kappa)\tilde{q}_{s}^{\sigma_{1}}\tilde{q}_{s+1}^{\sigma}<C_{2}(\kappa)\tilde{q}_{s+1}^{\sigma_{1}},$
$\displaystyle\mathrm{e}^{\frac{c}{2}\tilde{q}_{s}^{\gamma/4}}<\mathrm{e}^{\sigma\tilde{q}_{s}^{\gamma/2}}<\tilde{q}_{s+1}^{\sigma}<m_{s+1}<2m_{s+1}\leq
4\tilde{q}_{s+1}^{\sigma}<4\mathrm{e}^{2\sigma\tau\tilde{q}_{s}^{\gamma/2}}<\mathrm{e}^{\frac{c}{2}\tilde{q}_{s}^{\gamma}}.$
Thus such a choice of $N_{s}$ satisfies all estimates in (8.28)-(8.30) if
$\tilde{q}_{0}$ is sufficiently large. With the help of such a sequence, we can
prove the following:
###### Lemma 8.5.
Assume that $\alpha\in DC(v,\tau)$ and $L(\alpha,A)>100\kappa>0$. There exist
$c^{\prime\prime}>0$ and
$C_{1}(\kappa)\tilde{q}_{0}^{\sigma}<N_{0}<C_{2}(\kappa)\tilde{q}_{0}^{\sigma_{1}}$
such that
$\displaystyle|L(\alpha,A)+L_{N_{0}}(\alpha,A)-2L_{2N_{0}}(\alpha,A)|<\mathrm{e}^{-c^{\prime\prime}\tilde{q}_{0}^{\gamma/4}}.$
###### Proof.
Let $c/100<c_{3}<c_{2}<c_{1}<c/2$, $2C_{1}<C<\infty$, and $\tilde{q}_{-1}=0$.
We use induction to show that the sequences $\\{N_{s}\\}$ and
$\\{\tilde{q}_{s}\\},$ defined by (8.27), additionally satisfy, for $s\geq 0$,
(8.31)
$\displaystyle|L_{N_{s+1}}(\alpha,A)+L_{N_{s}}(\alpha,A)-2L_{2N_{s}}(\alpha,A)|<C\mathrm{e}^{-c_{1}\tilde{q}_{s}^{\gamma/4}},$
(8.32)
$\displaystyle|L_{2N_{s+1}}(\alpha,A)-L_{N_{s+1}}(\alpha,A)|<C\mathrm{e}^{-c_{2}\tilde{q}_{s}^{\gamma/4}},$
(8.33)
$\displaystyle|L_{N_{s+1}}(\alpha,A)-L_{N_{s}}(\alpha,A)|<C\mathrm{e}^{-c_{3}\tilde{q}_{s-1}^{\gamma/4}}.$
We first check the case $s=0$. Fix $N_{1}$ satisfying (8.30). We will show
$\displaystyle|L_{N_{1}}(\alpha,A)+L_{N_{0}}(\alpha,A)-2L_{2N_{0}}(\alpha,A)|<C\mathrm{e}^{-c_{1}\tilde{q}_{0}^{\gamma/4}},$
$\displaystyle|L_{2N_{1}}(\alpha,A)-L_{N_{1}}(\alpha,A)|<C\mathrm{e}^{-c_{2}\tilde{q}_{0}^{\gamma/4}},$
$\displaystyle|L_{N_{1}}(\alpha,A)-L_{N_{0}}(\alpha,A)|<C\mathrm{e}^{-c_{3}\tilde{q}_{-1}^{\gamma/4}}=C.$
In this case, the last inequality holds automatically since
$\tilde{q}_{-1}=0$, so one only needs to check the first two inequalities. By
(8.25) and (8.29),(8.30) with $s=0$ we know that the conditions in Lemma 8.3
are all satisfied with $N^{\prime}=N_{1}$, $N=N_{0}$ and $q=\tilde{q}_{0}$.
Therefore, by Lemma 8.3, we have
$\displaystyle|L_{N_{1}}(\alpha,A)+L_{N_{0}}(\alpha,A)-2L_{2N_{0}}(\alpha,A)|<C\mathrm{e}^{-\frac{c}{2}\tilde{q}_{0}^{\gamma/4}}<C\mathrm{e}^{-c_{1}\tilde{q}_{0}^{\gamma/4}}.$
On the other hand, (8.30) ensures one can also apply Lemma 8.3 to
$N^{\prime}=2N_{1}$, thus we have
$\displaystyle|L_{2N_{1}}(\alpha,A)+L_{N_{0}}(\alpha,A)-2L_{2N_{0}}(\alpha,A)|<C\mathrm{e}^{-c_{1}\tilde{q}_{0}^{\gamma/4}}.$
It follows that
$\displaystyle|L_{2N_{1}}(\alpha,A)-L_{N_{1}}(\alpha,A)|<2C\mathrm{e}^{-c_{1}\tilde{q}_{0}^{\gamma/4}}<C\mathrm{e}^{-c_{2}\tilde{q}_{0}^{\gamma/4}},$
and we have completed the initial case $s=0.$
For $j\geq 1,$ assume that (8.31)-(8.33) hold for all $s$ with $s\leq j-1.$
Now we consider the case $s=j.$ Fix $N_{j+1}$ satisfying (8.30). By induction
we have
$\displaystyle|L_{2N_{j}}(\alpha,A)-L_{N_{j}}(\alpha,A)|<C\mathrm{e}^{-c_{2}\tilde{q}_{j-1}^{\gamma/4}}\leq
C\mathrm{e}^{-c_{2}\tilde{q}_{0}^{\gamma/4}}.$
This implies $L_{2N_{j}}(\alpha,A)>(19/20)L_{N_{j}}(\alpha,A),$ which, together
with (8.29), implies that $N_{j}$ satisfies the two conditions on $N$ in Lemma
8.3 with $\tilde{q}_{j}$ in place of $q.$ Moreover, by (8.30),
$m_{j+1}=N_{j+1}N_{j}^{-1}$ satisfies the estimate of $m$ in Lemma 8.3 with
$\tilde{q}_{j}$ in place of $q.$ Thus by Lemma 8.3, with
$N^{\prime}=N_{j+1},N=N_{j}$ and $q=\tilde{q}_{j}$ we get
$\displaystyle|L_{N_{j+1}}(\alpha,A)+L_{N_{j}}(\alpha,A)-2L_{2N_{j}}(\alpha,A)|<C\mathrm{e}^{-\frac{c}{2}\tilde{q}_{j}^{\gamma/4}}<C\mathrm{e}^{-c_{1}\tilde{q}_{j}^{\gamma/4}}.$
Similarly, (8.30) ensures one can also apply Lemma 8.3 to
$N^{\prime}=2N_{j+1}$, and we have
$\displaystyle|L_{2N_{j+1}}(\alpha,A)+L_{N_{j}}(\alpha,A)-2L_{2N_{j}}(\alpha,A)|<C\mathrm{e}^{-c_{1}\tilde{q}_{j}^{\gamma/4}}.$
Thus
$\displaystyle|L_{2N_{j+1}}(\alpha,A)-L_{N_{j+1}}(\alpha,A)|<2C\mathrm{e}^{-c_{1}\tilde{q}_{j}^{\gamma/4}}<C\mathrm{e}^{-c_{2}\tilde{q}_{j}^{\gamma/4}},$
$\displaystyle|L_{N_{j+1}}(\alpha,A)-L_{N_{j}}(\alpha,A)|\leq$
$\displaystyle|L_{N_{j+1}}(\alpha,A)+L_{N_{j}}(\alpha,A)-2L_{2N_{j}}(\alpha,A)|$
$\displaystyle+|2L_{2N_{j}}(\alpha,A)-2L_{N_{j}}(\alpha,A)|$ $\displaystyle<$
$\displaystyle
C\mathrm{e}^{-c_{1}\tilde{q}_{j}^{\gamma/4}}+2C\mathrm{e}^{-c_{2}\tilde{q}_{j-1}^{\gamma/4}}<C\mathrm{e}^{-c_{3}\tilde{q}_{j-1}^{\gamma/4}}.$
That is, the estimates in (8.31)-(8.33) hold for all $s\in{\mathbb{N}}.$ As a
consequence of (8.31) with $s=0$ and (8.33), we have
$\displaystyle|L(\alpha,A)$
$\displaystyle+L_{N_{0}}(\alpha,A)-2L_{2N_{0}}(\alpha,A)|$
$\displaystyle\leq|L_{N_{1}}(\alpha,A)+L_{N_{0}}(\alpha,A)-2L_{2N_{0}}(\alpha,A)|$
$\displaystyle\ +\sum_{s\geq
1}|L_{N_{s+1}}(\alpha,A)-L_{N_{s}}(\alpha,A)|<\mathrm{e}^{-c^{\prime\prime}\tilde{q}_{0}^{\gamma/4}}.$
∎
For the rational frequency case, we will first estimate the difference between
$L(p_{j}/q_{j},A_{j})$ and $L_{n}(p_{j}/q_{j},A_{j})$ with $n$ much larger
than $q_{j}$ (Lemma 8.6), and then use the avalanche principle to estimate
$L_{n}(p_{j}/q_{j},A_{j})$ (Lemma 8.7).
###### Lemma 8.6.
Consider the cocycle $(p/q,A)\in{\mathbb{Q}}\times
C^{0}({\mathbb{T}},SL(2,{\mathbb{R}}))$ with $p,q\in{\mathbb{N}},(p,q)=1,$ and
$\|A\|\leq\mathrm{e}^{C_{1}}.$ Set $n=mq+r,m\in{\mathbb{N}},0\leq r<q,$ then
$\displaystyle L_{n}(p/q,A)\leq L(p/q,A)+2n^{-1}(\ln m+qC_{1}).$
###### Proof.
Set $A_{q}:=A_{q}(\theta)=A(\theta+(q-1)p/q)A(\theta+(q-2)p/q)\cdots
A(\theta).$ For the matrix $A_{q}$ there exists a unitary $U$ such that
$A_{q}=U\begin{pmatrix}\lambda&\psi\\\ 0&\lambda^{-1}\end{pmatrix}U^{-1}.$
Then for $m\in{\mathbb{N}},$ we have
$A_{q}^{m}=U\begin{pmatrix}\lambda^{m}&R_{m}(\lambda,\psi)\\\
0&\lambda^{-m}\end{pmatrix}U^{-1},m\geq 2$ with
$R_{m}(\lambda,\psi)=\left\\{\begin{aligned}
&\sum_{l=1}^{k}\\{\lambda^{2l-1}+\lambda^{-(2l-1)}\\}\psi&m=2k,k\geq 1,\\\
&\psi+\sum_{l=1}^{k}\\{\lambda^{2l}+\lambda^{-2l}\\}\psi&m=2k+1,k\geq
1.\end{aligned}\right.$
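As a sanity check of the formula for $R_{m}$, take $m=2$ (the case $m=2k$ with $k=1$):

```latex
\begin{pmatrix}\lambda&\psi\\ 0&\lambda^{-1}\end{pmatrix}^{2}
=\begin{pmatrix}\lambda^{2}&(\lambda+\lambda^{-1})\psi\\ 0&\lambda^{-2}\end{pmatrix},
\qquad R_{2}(\lambda,\psi)=(\lambda+\lambda^{-1})\psi.
```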
Thus
$\|A_{q}^{m}\|\leq|\lambda|^{m}+|R_{m}(\lambda,\psi)|\leq|\lambda|^{m}(1+m|\psi|)\leq\rho(A_{q})^{m}(1+m\exp\\{qC_{1}\\}),$
where $\rho(A)$ stands for the spectrum radius. Note, for $n=mq+r,$
$A_{n}(\theta)=A_{r}(\theta)A_{q}^{m}(\theta),$ then by the inequality above
we get
$\displaystyle L_{n}(p/q,A)$ $\displaystyle=$
$\displaystyle\frac{1}{n}\int_{{\mathbb{T}}}\ln\|A_{n}(\theta)\|d\theta\leq\frac{1}{n}\int_{{\mathbb{T}}}\ln\|A_{q}^{m}(\theta)\|d\theta+\frac{rC_{1}}{n}$
$\displaystyle\leq$
$\displaystyle\frac{1}{n}\Big{\\{}\int_{{\mathbb{T}}}\ln\rho(A_{q})^{m}d\theta+\int_{{\mathbb{T}}}\ln(1+m\exp\\{qC_{1}\\})d\theta\Big{\\}}+\frac{qC_{1}}{n}$
$\displaystyle\leq$ $\displaystyle L(p/q,A)+2n^{-1}(\ln m+qC_{1}).$
∎
###### Lemma 8.7.
Assume that $\alpha\in DC(v,\tau)$ and $\\{p_{j}/q_{j}\\}$ is the sequence of
continued fraction expansion of $\alpha,$ and $A_{j}\rightarrow A$ in the
topology induced by the $\|\cdot\|_{\nu,\rho}$-norm. Then there exists a $j_{1}$
such that for $j\geq j_{1},$ we have
$\displaystyle|L(p_{j}/q_{j},A_{j})+L_{N_{0}}(p_{j}/q_{j},A_{j})-2L_{2N_{0}}(p_{j}/q_{j},A_{j})|<2\mathrm{e}^{-c^{\prime\prime}\tilde{q}_{0}^{\gamma/4}}.$
###### Remark 8.1.
Here $N_{0}$ and $c^{\prime\prime}$ are the ones in Lemma 8.5.
###### Proof.
For the fixed $N_{0},$ note $L_{2N_{0}}(\cdot_{1},\cdot_{2})$ and
$L_{N_{0}}(\cdot_{1},\cdot_{2})$ are continuous in both variables and
$L_{2N_{0}}(\alpha,A)>(99/100)L_{N_{0}}(\alpha,A)$,
$L_{N_{0}}(\alpha,A)>100\kappa>0$, then there exists
$j_{1}\in{\mathbb{Z}}^{+}$, such that if $j>j_{1}$, we have
(8.34) $\displaystyle L_{N_{0}}(p_{j}/q_{j},A_{j})$ $\displaystyle>$
$\displaystyle 99\kappa$ (8.35) $\displaystyle L_{2N_{0}}(p_{j}/q_{j},A_{j})$
$\displaystyle>$ $\displaystyle(49/50)L_{N_{0}}(p_{j}/q_{j},A_{j}).$
For the fixed $p_{j}/q_{j}$ and the sequence $\\{\tilde{q}_{\ell}\\}$ defined
by (8.27), there exists $s_{j}\in{\mathbb{N}}$ such that
$\tilde{q}_{s_{j}}\leq q_{j}<\tilde{q}_{s_{j}+1}.$ Then we define the same
sequences $\\{\tilde{q}_{\ell}\\}_{\ell=0}^{s_{j}}$ and
$\\{N_{\ell}\\}_{\ell=0}^{s_{j}+1}$ for $p_{j}/q_{j}$ as for $\alpha$ such that
(8.28)-(8.30) hold. Following Lemma 8.5, we will inductively show that
(8.36)
$\displaystyle|L_{N_{\ell+1}}(p_{j}/q_{j},A_{j})+L_{N_{\ell}}(p_{j}/q_{j},A_{j})-2L_{2N_{\ell}}(p_{j}/q_{j},A_{j})|<C\mathrm{e}^{-c_{1}\tilde{q}_{\ell}^{\gamma/4}},$
(8.37)
$\displaystyle|L_{2N_{\ell+1}}(p_{j}/q_{j},A_{j})-L_{N_{\ell+1}}(p_{j}/q_{j},A_{j})|<C\mathrm{e}^{-c_{2}\tilde{q}_{\ell}^{\gamma/4}},$
(8.38)
$\displaystyle|L_{N_{\ell+1}}(p_{j}/q_{j},A_{j})-L_{N_{\ell}}(p_{j}/q_{j},A_{j})|\leq
C\mathrm{e}^{-c_{3}\tilde{q}_{\ell-1}^{\gamma/4}}.$
To carry out the induction, the key is to apply Lemma 8.3 and verify that
(8.39) $\displaystyle|p_{j}/q_{j}-\tilde{p}_{\ell}/\tilde{q}_{\ell}|$
$\displaystyle<$ $\displaystyle\tilde{q}_{\ell}^{-2},$ (8.40) $\displaystyle
L_{2N_{\ell}}(p_{j}/q_{j},A_{j})$ $\displaystyle>$
$\displaystyle(19/20)L_{N_{\ell}}(p_{j}/q_{j},A_{j}),$ (8.41) $\displaystyle
L_{N_{\ell}}(p_{j}/q_{j},A_{j})$ $\displaystyle>$ $\displaystyle 90\kappa.$
Indeed, by the properties of the continued fraction expansion, (8.39) holds for
any $0\leq\ell\leq s_{j}$, and (8.40) follows from (8.35) and (8.37). On the other
hand, if $\ell=0$, by (8.34), (8.35) and (8.36), we have
$\displaystyle|L_{N_{1}}($ $\displaystyle
p_{j}/q_{j},A_{j})-L_{N_{0}}(p_{j}/q_{j},A_{j})|$
$\displaystyle\leq\Big{|}L_{N_{1}}(p_{j}/q_{j},A_{j})+L_{N_{0}}(p_{j}/q_{j},A_{j})-2L_{2N_{0}}(p_{j}/q_{j},A_{j})\Big{|}$
$\displaystyle\
+2|L_{2N_{0}}(p_{j}/q_{j},A_{j})-L_{N_{0}}(p_{j}/q_{j},A_{j})|$
$\displaystyle\leq
C\mathrm{e}^{-c_{1}\tilde{q}_{0}^{\gamma/4}}+2L_{N_{0}}(p_{j}/q_{j},A_{j})/50,$
which implies that
(8.42) $\displaystyle L_{N_{1}}(p_{j}/q_{j},A_{j})\geq
48L_{N_{0}}(p_{j}/q_{j},A_{j})/50-C\mathrm{e}^{-c_{1}\tilde{q}_{0}^{\gamma/4}}>95\kappa.$
As for (8.41), in case $\ell\geq 1$, by (8.42) and (8.38), one has
$\displaystyle L_{N_{\ell+1}}(p_{j}/q_{j},A_{j})$ $\displaystyle\geq$
$\displaystyle
L_{N_{1}}(p_{j}/q_{j},A_{j})-\sum_{k=1}^{\ell}|L_{N_{k+1}}(p_{j}/q_{j},A_{j})-L_{N_{k}}(p_{j}/q_{j},A_{j})|$
$\displaystyle>$ $\displaystyle
95\kappa-\sum_{k=0}^{\ell-1}C\mathrm{e}^{-c_{3}\tilde{q}_{k}^{\gamma/4}}>90\kappa.$
Therefore, the iteration can be conducted $s_{j}$ times, and we obtain
(8.43)
$\displaystyle|L_{N_{s_{j}+1}}(p_{j}/q_{j},A_{j})+L_{N_{0}}(p_{j}/q_{j},A_{j})-2L_{2N_{0}}(p_{j}/q_{j},A_{j})|<\mathrm{e}^{-c^{\prime\prime}\tilde{q}_{0}^{\gamma/4}}.$
Moreover, by Lemma 8.6, we get
$\displaystyle L_{N_{s_{j}+1}}(p_{j}/q_{j},A_{j})\leq
L(p_{j}/q_{j},A_{j})+5C_{1}C_{1}(\kappa)^{-1}\tilde{q}_{s_{j}}^{-(\sigma-1)}.$
The inequality above, together with (8.43) yields
$\displaystyle|L(p_{j}/q_{j},A_{j})+L_{N_{0}}(p_{j}/q_{j},A_{j})-2L_{2N_{0}}(p_{j}/q_{j},A_{j})|<2\mathrm{e}^{-c^{\prime\prime}\tilde{q}_{0}^{\gamma/4}}.$
∎
### 8.4. Proof of Theorem 6.3.
Assume $\alpha\in DC(v,\tau)$ and let $p_{n}/q_{n}$ be the continued fraction
expansion of $\alpha$. Notice that, for each $N$, $L_{N}(\alpha,A)$ is a
continuous function in both variables; thus $L(\alpha,A)=\inf_{N}
L_{N}(\alpha,A)$ is upper semi-continuous. Hence, in the case $L(\alpha,A)=0$ we
get
$\displaystyle
0\leq\liminf_{n\rightarrow\infty}L(p_{n}/q_{n},A_{n})\leq\limsup_{n\rightarrow\infty}L(p_{n}/q_{n},A_{n})\leq
L(\alpha,A)=0,$
that is $\lim_{n\rightarrow\infty}L(p_{n}/q_{n},A_{n})=0.$ Therefore we may
assume $L(\alpha,A)>100\kappa>0$.
Take $j>j_{1}$ and
$C_{1}(\kappa)q_{0}^{\sigma}<N_{0}<C_{2}(\kappa)q_{0}^{\sigma_{1}}$. Then, by
Lemma 8.5 and Lemma 8.7, we have
$\displaystyle|L(\alpha,A)+L_{N_{0}}(\alpha,A)-2L_{2N_{0}}(\alpha,A)|<\mathrm{e}^{-c^{\prime\prime}\tilde{q}_{0}^{\gamma/4}},$
and
$\displaystyle|L(p_{j}/q_{j},A_{j})+L_{N_{0}}(p_{j}/q_{j},A_{j})-2L_{2N_{0}}(p_{j}/q_{j},A_{j})|<2\mathrm{e}^{-c^{\prime\prime}\tilde{q}_{0}^{\gamma/4}}.$
Hence, one can estimate
$\displaystyle|L(\alpha,A)-L(p_{j}/q_{j},A_{j})|\leq|L_{N_{0}}(p_{j}/q_{j},A_{j})-L_{N_{0}}(\alpha,A)|$
$\displaystyle\
+2|L_{2N_{0}}(p_{j}/q_{j},A_{j})-L_{2N_{0}}(\alpha,A)|+3\mathrm{e}^{-c^{\prime\prime}\tilde{q}_{0}^{\gamma/4}}$
$\displaystyle\leq
C(\kappa)^{N_{0}}\\{|p_{j}/q_{j}-\alpha|+\|A-A_{j}\|_{\nu,\rho}\\}+3\mathrm{e}^{-c^{\prime\prime}\tilde{q}_{0}^{\gamma/4}},$
it follows that
$\displaystyle\limsup\limits_{j\rightarrow\infty}|L(\alpha,A)-L(p_{j}/q_{j},A_{j})|\leq
4\mathrm{e}^{-c^{\prime\prime}\tilde{q}_{0}^{\gamma/4}}.$
Letting $\tilde{q}_{0}\rightarrow\infty$, we get the result.
## Appendix: Proof of Lemma 4.5
Define $B_{r}(\delta)=\\{Y\in\mathcal{B}_{r}^{(nre)}:\|Y\|_{r}\leq\delta\\}$
and set $\varepsilon=8^{-2}\gamma^{2}Q_{n+1}^{-2\tau^{2}}.$ Then we define the
nonlinear functional
$\begin{split}\mathcal{F}:B_{r}(\varepsilon^{1/2})\rightarrow\mathcal{B}_{r}^{(nre)}\end{split}$
by
(8.44)
$\begin{split}\mathcal{F}(Y)=\mathbb{P}_{nre}\ln(\mathrm{e}^{A^{-1}Y(\theta+\alpha)A}\mathrm{e}^{g}\mathrm{e}^{-Y}),\end{split}$
where $\mathbb{P}_{nre}$ denotes the standard projections from
$\mathcal{B}_{r}$ to $\mathcal{B}_{r}^{(nre)}.$
We will find a solution of the functional equation
(8.45) $\begin{split}\mathcal{F}(Y_{t})=(1-t)\mathcal{F}(Y_{0}),\
Y_{0}=0.\end{split}$
The derivative of $\mathcal{F}$ at $Y\in B_{r}(\varepsilon^{1/2})$ along
$Y^{\prime}\in\mathcal{B}_{r}^{(nre)}$ is given by
(8.46)
$\begin{split}D\mathcal{F}(Y)Y^{\prime}=\mathbb{P}_{nre}\big{\\{}A^{-1}Y^{\prime}(\theta+\alpha)A-Y^{\prime}+O(\|A\|^{2}g)Y^{\prime}+P[A,Y,Y^{\prime},g](\theta)\big{\\}},\end{split}$
where
$\begin{split}P[A,Y,Y^{\prime},g](\theta)&=O(A^{-1}Y(\theta+\alpha)A)A^{-1}Y^{\prime}(\theta+\alpha)A+2^{-1}[Y^{\prime\prime\prime},F+H]+\cdots\\\
&-O(Y)Y^{\prime}+2^{-1}[F+H^{\prime},-Y^{\prime\prime}]+\cdots-O(\|A\|^{2}g)Y^{\prime},\\\
O(\|A\|^{2}g)Y^{\prime}&=O(g)A^{-1}Y^{\prime}(\theta+\alpha)A+O(g)Y^{\prime},\end{split}$
with $P[A,Y,Y^{\prime},0](\theta)=0,$
$\begin{split}Y^{\prime\prime\prime}(\theta+\alpha)&=A^{-1}Y^{\prime}(\theta+\alpha)A+O(A^{-1}Y(\theta+\alpha)A)A^{-1}Y^{\prime}(\theta+\alpha)A,\\\
Y^{\prime\prime}(\theta)&=Y^{\prime}(\theta)+O(Y(\theta))Y^{\prime}(\theta),\\\
F(\theta)&=A^{-1}Y(\theta+\alpha)A+g(\theta)-Y(\theta),\end{split}$
and $H,H^{\prime}$ being sums of terms at least 2 orders in
$A^{-1}Y(\theta+\alpha)A,g(\theta),-Y(\theta).$ Moreover, the first
$``\cdots"$ denotes the sum of terms which are at least 2 orders in $F+H$ but
only 1 order in $Y^{\prime\prime\prime}.$ The second $``\cdots"$ denotes the
sum of terms which are at least 2 orders in $F+H^{\prime}$ but only 1 order in
$Y^{\prime\prime}.$
We now give an estimate for the operator $D\mathcal{F}(Y)^{-1}$.
###### Proposition 8.
For the fixed $Y\in\mathcal{B}_{r}(\varepsilon^{1/2}),$ $D\mathcal{F}(Y)$
(defined by (8.46)) is a linear map from $\mathcal{B}_{r}^{(nre)}$ to
$\mathcal{B}_{r}^{(nre)}$ with estimate
(8.47) $\begin{split}\|D\mathcal{F}(Y)^{-1}\|\leq
2^{-1}\varepsilon^{-1/2}.\end{split}$
###### Proof.
For fixed $Y\in\mathcal{B}_{r}(\varepsilon^{1/2})$, the operator
$D\mathcal{F}(Y)$ defined by (8.46) is clearly a linear map from
$\mathcal{B}_{r}^{(nre)}$ to $\mathcal{B}_{r}^{(nre)}$. In the following we
prove the estimate in (8.47). To this end, we consider the operator
$D\mathcal{F}(0)$ given by
$\begin{split}D\mathcal{F}(0)Y^{\prime}=A^{-1}Y^{\prime}(\theta+\alpha)A-Y^{\prime}+\mathbb{P}_{nre}O(\|A\|^{2}g)Y^{\prime},Y^{\prime}\in\mathcal{B}_{r}^{(nre)}.\end{split}$
Note the operator $D\mathcal{F}(0)$ is a linear map mapping
$\mathcal{B}_{r}^{(nre)}$ to $\mathcal{B}_{r}^{(nre)}$. Next we give the
estimate about $D\mathcal{F}(0)^{-1}.$
Note that $\overline{Q}_{n+1}\geq T>(2\gamma^{-1})^{2\tau}$ for $n\geq 0$ (see (4.4)); then
by (4.18) in Lemma 4.4 we get
$\begin{split}\|k\alpha\pm 2\rho_{f}\|_{\mathbb{Z}}\geq\gamma
Q_{n+1}^{-\tau^{2}}=8\varepsilon^{\frac{1}{2}},|k|<\overline{Q}_{n+1}^{\frac{1}{2}}.\end{split}$
By the inequality above, one can check, for
$Y^{\prime}\in\mathcal{B}_{r}^{(nre)},$
$\begin{split}A^{-1}&Y^{\prime}(\cdot+\alpha)A-Y^{\prime}\in\mathcal{B}_{r}^{(nre)},\\\
\|A^{-1}&Y^{\prime}(\cdot+\alpha)A-Y^{\prime}\|_{r}\geq
8\|A\|^{2}\varepsilon^{\frac{1}{2}}\|Y^{\prime}\|_{r}.\end{split}$
Moreover, by Lemma 3.1 (Banach algebra property) we get
$\|O(\|A\|^{2}g)\|_{r}\leq 2\|A\|^{2}\varepsilon.$ Note
$(8\varepsilon^{1/2})^{-1}2\varepsilon<1,$ then
(8.48) $\begin{split}\|D\mathcal{F}(0)^{-1}\|_{r}\leq
2(8\varepsilon^{1/2})^{-1}=4^{-1}\varepsilon^{-1/2}.\end{split}$
Having established (8.48), we turn to $\|D\mathcal{F}(Y)^{-1}\|.$ The
calculations below also depend on Lemma 3.1; we omit further references to it.
Note
$\\{D\mathcal{F}(Y)-D\mathcal{F}(0)\\}Y^{\prime}=\mathbb{P}_{nre}\big{\\{}P[A,Y,Y^{\prime},g]-P[A,0,Y^{\prime},g]\big{\\}},$
we get
(8.49)
$\begin{split}\sup_{\|Y\|_{r}\leq\varepsilon^{\frac{1}{2}},\|g\|_{r}\leq\varepsilon}\|D\mathcal{F}(Y)-D\mathcal{F}(0)\|\leq
2\varepsilon^{\frac{1}{2}}.\end{split}$
(8.48) and (8.49) yield
$\begin{split}\|D\mathcal{F}(0)^{-1}(D\mathcal{F}(Y)-D\mathcal{F}(0))\|\leq
4^{-1}\varepsilon^{-1/2}2\varepsilon^{\frac{1}{2}}=2^{-1}.\end{split}$
Finally, note
$\begin{split}D\mathcal{F}(Y)^{-1}=\\{1+D\mathcal{F}(0)^{-1}(D\mathcal{F}(Y)-D\mathcal{F}(0))\\}^{-1}D\mathcal{F}(0)^{-1},\end{split}$
then by the inequality above we know that $D\mathcal{F}(Y)$ is invertible with
$\begin{split}\|D\mathcal{F}(Y)^{-1}\|\leq 2\|D\mathcal{F}(0)^{-1}\|\leq
2^{-1}\varepsilon^{-1/2}.\end{split}$
∎
Now we turn to the functional equation (8.45). Formally, we get
(8.50)
$\begin{split}Y_{t}=-\int_{0}^{t}D\mathcal{F}(Y_{s})^{-1}\mathcal{F}(Y_{0})ds=-\int_{0}^{t}D\mathcal{F}(Y_{s})^{-1}\mathbb{P}_{nre}gds,0\leq
t\leq 1.\end{split}$
Moreover, by (8.47)
$\begin{split}\|Y_{t}\|_{r}\leq\sup_{Y\in\mathcal{B}_{r}(\varepsilon^{1/2}),\|g\|_{r}\leq\varepsilon}\|D\mathcal{F}(Y)^{-1}\|\|g\|_{r}\leq
2^{-1}\varepsilon^{-1/2}\varepsilon<\varepsilon^{1/2},0\leq t\leq
1.\end{split}$
Therefore, the solution of (8.45) exists in
$\mathcal{B}_{r}(\varepsilon^{1/2})$ and is given by (8.50).
For $Y_{t},\ 0\leq t\leq 1,$ given above, we know from (8.45) with $t=1$ that
$\mathcal{F}(Y_{1})=0,$ that is (by (8.44))
$\begin{split}\mathbb{P}_{nre}\ln(\mathrm{e}^{A^{-1}Y_{1}(\theta+\alpha)A}\mathrm{e}^{g}\mathrm{e}^{-Y_{1}})=0,\end{split}$
which implies that there exists a matrix $g^{(re)}\in\mathcal{B}_{r}^{(re)}$
such that
$\begin{split}\ln(\mathrm{e}^{A^{-1}Y_{1}(\theta+\alpha)A}\mathrm{e}^{g}\mathrm{e}^{-Y_{1}})=g^{(re)}.\end{split}$
That is
$\begin{split}\mathrm{e}^{Y_{1}(\theta+\alpha)}A\mathrm{e}^{g}\mathrm{e}^{-Y_{1}}=A\mathrm{e}^{g^{(re)}}.\end{split}$
By standard calculations we get the estimate $\|g^{(re)}\|\leq 2\varepsilon.$
This $Y_{1},$ with the estimate $\|Y_{1}\|_{r}<\varepsilon^{\frac{1}{2}},$ is
the one we want.
## Acknowledgments
J. You and Q. Zhou were supported by National Key R&D Program of China
(2020YFA0713300) and Nankai Zhide Foundation. H. Cheng was supported by NSFC
grant (12001294). She would like to thank Y. Pan for useful discussion. L. Ge
was partially supported by NSF DMS-1901462 and AMS-Simons Travel Grant
2020–2022. J. You was also partially supported by NSFC grant (11871286). Q.
Zhou was also partially supported by NSFC grant (12071232), The Science Fund
for Distinguished Young Scholars of Tianjin (No. 19JCJQJC61300).
## References
* AD [08] A. Avila and D. Damanik. Absolute continuity of the integrated density of states for the almost Mathieu operator with non-critical coupling. Invent. Math., 172(2):439–453, 2008.
* AFK [11] A. Avila, B. Fayad, and R. Krikorian. A KAM scheme for ${\rm SL}(2,\mathbb{R})$ cocycles with Liouvillean frequencies. Geom. Funct. Anal., 21(5):1001–1019, 2011.
* AJ [09] A. Avila and S. Jitomirskaya. The Ten Martini Problem. Ann. of Math. (2), 170(1):303–342, 2009.
* AJS [14] A. Avila, S. Jitomirskaya, and C. Sadel. Complex one-frequency cocycles. J. Eur. Math. Soc. (JEMS), 16(9):1915–1935, 2014.
* AK [06] A. Avila and R. Krikorian. Reducibility or nonuniform hyperbolicity for quasiperiodic Schrödinger cocycles. Ann. of Math. (2), 164(3):911–940, 2006.
* AK [15] A. Avila and R. Krikorian. Monotonic cocycles. Invent. Math., 202(1):271–331, 2015.
* AK [16] A. Avila and R. Krikorian. Some remarks on local and semi-local results for schrödinger cocycles. 2016\.
* [8] A. Avila. Global theory of one-frequency Schrödinger operators. Acta Math., 215(1):1–54, 2015.
* [9] A. Avila. On the Kotani-Last and Schrödinger conjectures. J. Amer. Math. Soc., 28(2):579–616, 2015.
* AvMS [90] J. Avron, P. H. M. van Mouche, and B. Simon. On the measure of the spectrum for the almost Mathieu operator. Comm. Math. Phys., 132(1):103–118, 1990.
* AYZ [17] A. Avila, J. You, and Q. Zhou. Sharp phase transitions for the almost Mathieu operator. Duke Math. J., 166(14):2697–2718, 2017.
* BCL [21] A. Bounemoura, C. Chavaudret, and S. Liang. Reducibility of ultra-differentiable quasi-periodic cocycles under an adapted arithmetic condition. 2021\.
* [13] A. Bounemoura and J. Fejoz. Hamiltonian perturbation theory for ultra-differentiable functions. Mem. Amer. Math. Soc., to appear.
* Bha [87] R. Bhatia. Perturbation bounds for matrix eigenvalues, volume 162 of Pitman Research Notes in Mathematics Series. Longman Scientific & Technical, Harlow; John Wiley & Sons, Inc., New York, 1987.
* BHJ [19] Simon Becker, Rui Han, and Svetlana Jitomirskaya. Cantor spectrum of graphene in magnetic fields. Invent. Math., 218(3):979–1041, 2019.
* BJ [02] J. Bourgain and S. Jitomirskaya. Continuity of the Lyapunov exponent for quasiperiodic operators with analytic potential. volume 108, pages 1203–1218. 2002. Dedicated to David Ruelle and Yasha Sinai on the occasion of their 65th birthdays.
* Bou [05] J. Bourgain. Green’s function estimates for lattice Schrödinger operators and applications, volume 158 of Annals of Mathematics Studies. Princeton University Press, Princeton, NJ, 2005.
* BS [82] J. Béllissard and B. Simon. Cantor spectrum for the almost Mathieu equation. J. Functional Analysis, 48(3):408–419, 1982.
* CC [94] J. Chaumat and A.-M. Chollet. Surjectivité de l’application restriction à un compact dans des classes de fonctions ultradifférentiables. Math. Ann., 298(1):7–40, 1994.
* CCYZ [19] A. Cai, C. Chavaudret, J. You, and Q. Zhou. Sharp Hölder continuity of the Lyapunov exponent of finitely differentiable quasi-periodic cocycles. Math. Z., 291(3-4):931–958, 2019.
* Cha [65] W. G. Chambers. Linear-network model for magnetic breakdown in two dimensions. Phys. Rev. A, 140(1A):135–143, 1965.
* Cha [13] C. Chavaudret. Strong almost reducibility for analytic and Gevrey quasi-periodic cocycles. Bull. Soc. Math. France, 141(1):47–106, 2013.
* Dia [06] J.L. Dias. A normal form theorem for Brjuno skew systems through renormalization. J. Differential Equations, 230(1):1–23, 2006.
* DS [75] E. I. Dinaburg and Ja. G. Sinaĭ. The one-dimensional Schrödinger equation with quasiperiodic potential. Funkcional. Anal. i Priložen., 9(4):8–21, 1975.
* Eli [92] L. H. Eliasson. Floquet solutions for the $1$-dimensional quasi-periodic Schrödinger equation. Comm. Math. Phys., 146(3):447–482, 1992.
* FK [09] B. Fayad and R. Krikorian. Rigidity results for quasiperiodic ${\rm SL}(2,\mathbb{R})$-cocycles. J. Mod. Dyn., 3(4):497–510, 2009.
* FK [18] B. Fayad and R. Krikorian. Some questions around quasi-periodic dynamics. In Proceedings of the International Congress of Mathematicians—Rio de Janeiro 2018. Vol. III., pages 1909–1932. World Sci. Publ., Hackensack, NJ, 2018.
* GS [01] M. Goldstein and W. Schlag. Hölder continuity of the integrated density of states for quasi-periodic Schrödinger equations and averages of shifts of subharmonic functions. Ann. of Math. (2), 154(1):155–203, 2001.
* Her [83] M. R. Herman. Une methode pour minorer les exposants de Lyapounov et quelques exemples montrant le caractère local dun theoreme d${}^{\prime}{A}$rnold et de Moser sur le tore de dimension $2$. Comment. Math. Helv., 58(3):453–502, 1983.
* HY [12] X. Hou and J. You. Almost reducibility and non-perturbative reducibility of quasi-periodic linear systems. Invent. Math., 190(1):209–260, 2012.
* JK [02] S. Jitomirskaya and I. V. Krasovsky. Continuity of the measure of the spectrum for discrete quasiperiodic operators. Math. Res. Lett., 9(4):413–421, 2002.
* JKS [09] S. Jitomirskaya, D. A. Koslover, and M.S. Schulteis. Continuity of the Lyapunov exponent for analytic quasiperiodic cocycles. Ergodic Theory Dynam. Systems, 29(6):1881–1905, 2009.
* JM [82] R. Johnson. and J. Moser. The rotation number for almost periodic potentials. Comm. Math. Phys., 84(3):403–438, 1982.
* JM [11] S. Jitomirskaya and C. A. Marx. Continuity of the Lyapunov exponent for analytic quasi-periodic cocycles with singularities. J. Fixed Point Theory Appl., 10(1):129–146, 2011.
* [35] S. Jitomirskaya and C. A. Marx. Analytic quasi-periodic Schrödinger operators and rational frequency approximants. Geom. Funct. Anal., 22(5):1407–1443, 2012.
* [36] S. Jitomirskaya and C. A. Marx. Analytic quasi-perodic cocycles with singularities and the Lyapunov exponent of extended Harper’s model. Comm. Math. Phys., 316(1):237–267, 2012.
* Kle [05] S. Klein. Anderson localization for the discrete one-dimensional quasi-periodic Schrödinger operator with potential defined by a Gevrey-class function. J. Funct. Anal., 218(2):255–292, 2005.
* Kot [84] S. Kotani. Ljapunov indices determine absolutely continuous spectra of stationary random one-dimensional Schrödinger operators. In Stochastic analysis (Katata/Kyoto, 1982), volume 32 of North-Holland Math. Library, pages 225–247. North-Holland, Amsterdam, 1984\.
* [39] R. Krikorian. Priviate communications.
* KWYZ [18] R. Krikorian, J. Wang, J. You, and Q. Zhou. Linearization of quasiperiodically forced circle flows beyond Brjuno condition. Comm. Math. Phys., 358(1):81–100, 2018.
* Las [92] Y. Last. On the measure of gaps and spectra for discrete $1$D Schrödinger operators. Comm. Math. Phys., 149(2):347–360, 1992.
* Las [93] Y. Last. A relation between a.c. spectrum of ergodic Jacobi matrices and the spectra of periodic approximants. Comm. Math. Phys., 151(1):183–192, 1993.
* Las [94] Y. Last. Zero measure spectrum for the almost Mathieu operator. Comm. Math. Phys., 164(2):421–432, 1994.
* Las [05] Yoram Last. Spectral theory of Sturm-Liouville operators on infinite intervals: a review of recent developments. In Sturm-Liouville theory, pages 99–120. Birkhäuser, Basel, 2005.
* LWJZ [20] L.Ge, Y. Wang, J.You, and X. Zhao. Transition space for the continuity of the lyapunov exponent of quasiperiodic schrödinger cocycles. 2020\.
* MJ [17] C. A. Marx and S. Jitomirskaya. Dynamics and spectral theory of quasi-periodic Schrödinger-type operators. Ergodic Theory Dynam. Systems, 37(8):2353–2393, 2017.
* Sha [11] Mira Shamis. Some connections between almost periodic and periodic discrete Schrödinger operators with analytic potentials. J. Spectr. Theory, 1(3):349–362, 2011.
* Thi [03] V. Thilliez. Division by flat ultradifferentiable functions and sectorial extensions. Results Math., 44(1-2):169–188, 2003.
* VPMG [93] S. A. Molchanov V. P. Maslov and A. Y. Gordon. Behavior of generalized eigenfunctions at infinity and the Schrödinger conjecture. Russian J. Math. Phys., 1(1):71–104, 1993.
* WY [13] Y. Wang and J. You. Examples of discontinuity of Lyapunov exponent in smooth quasiperiodic cocycles. Duke Math. J., 162(13):2363–2412, 2013.
* You [18] J. You. Quantitative almost reducibility and its applications. In Proceedings of the International Congress of Mathematicians—Rio de Janeiro 2018. Vol. III., pages 2113–2135. World Sci. Publ., Hackensack, NJ, 2018.
* YZ [13] J. You and Q. Zhou. Embedding of analytic quasi-periodic cocycles into analytic quasi-periodic linear systems and its applications. Comm. Math. Phys., 323(3):975–1005, 2013.
* YZ [14] J. You and Q. Zhou. Phase transition and semi-global reducibility. Comm. Math. Phys., 330(3):1095–1113, 2014.
* Zha [19] X. Zhao. On Last’s intersection spectrum conjecture. Nonlinearity, 32(7):2352–2364, 2019.
* ZW [12] Q. Zhou and J. Wang. Reducibility results for quasiperiodic cocycles with Liouvillean frequency. J. Dynam. Differential Equations, 24(1):61–83, 2012.
# Optimal Oracles for Point-to-Set Principles
D. M. Stull
Department of Computer Science, Iowa State University
Ames, IA 50011, USA
<EMAIL_ADDRESS>
###### Abstract
The point-to-set principle [14] characterizes the Hausdorff dimension of a
subset $E\subseteq\mathbb{R}^{n}$ by the effective (or algorithmic) dimension
of its individual points. This characterization has been used to prove several
results in classical, i.e., without any computability requirements, analysis.
Recent work has shown that algorithmic techniques can be fruitfully applied to
Marstrand’s projection theorem, a fundamental result in fractal geometry.
In this paper, we introduce an extension of the point-to-set principle: the
notion of optimal oracles for subsets $E\subseteq\mathbb{R}^{n}$. One of the
primary motivations of this definition is that, if $E$ has optimal oracles,
then the conclusion of Marstrand’s projection theorem holds for $E$. We show
that every analytic set has optimal oracles. We also prove that if the
Hausdorff and packing dimensions of $E$ agree, then $E$ has optimal oracles.
Moreover, we show that the existence of sufficiently nice outer measures on
$E$ implies the existence of optimal Hausdorff oracles. In particular, the
existence of exact gauge functions for a set $E$ is sufficient for the
existence of optimal Hausdorff oracles, and is therefore sufficient for
Marstrand’s theorem. Thus, the existence of optimal oracles extends the
currently known sufficient conditions for Marstrand’s theorem to hold.
Under certain assumptions, every set has optimal oracles. However, assuming
the axiom of choice and the continuum hypothesis, we construct sets which do
not have optimal oracles. This construction naturally leads to a
generalization of Davies' theorem on projections.
## 1 Introduction
Effective, i.e., algorithmic, dimensions were introduced [12, 1] to study the
randomness of points in Euclidean space. The effective dimension, $\dim(x)$
and effective strong dimension, $\operatorname{Dim}(x)$, are real values which
measure the asymptotic density of information of an individual point $x$. The
connection between effective dimensions and the classical Hausdorff and
packing dimension is given by the point-to-set principle of J. Lutz and N.
Lutz [14]: For any $E\subseteq\mathbb{R}^{n}$,
$\displaystyle\dim_{H}(E)$
$\displaystyle=\min\limits_{A\subseteq\mathbb{N}}\sup_{x\in
E}\dim^{A}(x),\text{ and}$ (1) $\displaystyle\dim_{P}(E)$
$\displaystyle=\min\limits_{A\subseteq\mathbb{N}}\sup_{x\in
E}\operatorname{Dim}^{A}(x)\,.$ (2)
Call an oracle $A$ satisfying (1) a Hausdorff oracle for $E$. Similarly, we
call an oracle $A$ satisfying (2) a packing oracle for $E$. Thus, the point-
to-set principle shows that the classical notion of Hausdorff or packing
dimension is completely characterized by the effective dimension of its
individual points, relative to a Hausdorff or packing oracle, respectively.
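As a simple illustration of how an oracle can witness the minimum in (1) (our own example, not taken from [14]), consider a singleton set:

```latex
% Example: E = {x}, so dim_H(E) = 0. Let A be an oracle encoding the binary
% expansion of x. Relative to A, a rational 2^{-r}-approximation of x can be
% produced from a description of r alone, so
%     K_r^A(x) \le K(r) + O(1) = O(\log r).
% Hence
%     \dim^A(x) = \liminf_{r \to \infty} K_r^A(x)/r = 0 = \dim_H(E),
% and A is a Hausdorff oracle for E.
```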
Recent work has shown that algorithmic dimensions are not only useful in
effective settings, but, via the point-to-set principle, can be used to solve
problems in geometric measure theory [15, 17, 18, 29]. It is important to note
that the point-to-set principle allows one to use algorithmic techniques to
prove theorems whose statements have seemingly nothing to do with
computability theory. In this paper, we focus on the connection between
algorithmic dimension and Marstrand’s projection theorem.
Marstrand, in his landmark paper [20], was the first to study how the
dimension of a set is changed when projected onto a line. He showed that, for
any analytic set $E\subseteq\mathbb{R}^{2}$, for almost every angle
$\theta\in[0,\pi)$,
$\dim_{H}(p_{\theta}\,E)=\min\\{\dim_{H}(E),1\\},$ (3)
where $p_{\theta}(x,y)=x\cos\theta+y\sin\theta$111This result was later
generalized to $\mathbb{R}^{n}$, for arbitrary $n$, as well as extended to
hyperspaces of dimension $m$, for any $1\leq m\leq n$ (see e.g. [21, 22,
23]).. The study of projections has since become a central theme in fractal
geometry (see [8] or [24] for a more detailed survey of this development).
Marstrand’s theorem raises the question of whether the analytic requirement on
$E$ can be dropped. It is known that, without further conditions, it cannot.
Davies [5] showed that, assuming the axiom of choice and the continuum
hypothesis, there are non-analytic sets for which Marstrand’s conclusion fails.
However, the problem of classifying the sets for which Marstrand’s theorem does
hold is still open. Recently, Lutz and Stull [19] used the point-to-set
principle to prove that the projection theorem holds for sets for which the
Hausdorff and packing dimensions agree222Orponen [28] has recently given
another proof of Lutz and Stull’s result using more classical tools. This
expanded the reach of Marstrand’s theorem, as this assumption is incomparable
with analyticity.
In this paper, we give the broadest known sufficient condition (which makes
essential use of computability theory) for Marstrand’s theorem. In particular,
we introduce the notion of optimal Hausdorff oracles for a set
$E\subseteq\mathbb{R}^{n}$. We prove that Marstrand’s theorem holds for every
set $E$ which has optimal Hausdorff oracles.
An optimal Hausdorff oracle for a set $E$ is a Hausdorff oracle which
minimizes the algorithmic complexity of “most”333By most, we mean a subset of
$E$ of the same Hausdorff dimension as $E$. points in $E$. It is not
immediately clear that every set $E$ has optimal oracles. Nevertheless, we show
that two natural classes of sets $E\subseteq\mathbb{R}^{n}$ do have optimal
oracles.
We show that every analytic, and therefore Borel, set has optimal oracles. We
also prove that every set whose Hausdorff and packing dimensions agree has
optimal Hausdorff oracles. Thus, we show that the existence of optimal oracles
encapsulates the known conditions sufficient for Marstrand’s theorem to hold.
Moreover, we show that the existence of sufficiently nice outer measures on
$E$ implies the existence of optimal Hausdorff oracles. In particular, the
existence of exact gauge functions (Section 2.1) for a set $E$ is sufficient
for the existence of optimal Hausdorff oracles for $E$, and is therefore
sufficient for Marstrand’s theorem. Thus, the existence of optimal Hausdorff
oracles is weaker than the previously known conditions for Marstrand’s theorem
to hold.
We also show that the notion of optimal oracles gives insight to sets for
which Marstrand’s theorem does not hold. Assuming the axiom of choice and the
continuum hypothesis, we construct sets which do not have optimal oracles.
This construction, with minor adjustments, proves a generalization of Davies
theorem proving the existence of sets for which (3) does not hold. In
addition, the inherently algorithmic aspect of the construction might be
useful for proving set-theoretic properties of exceptional sets for
Marstrand’s theorem.
Finally, we define optimal packing oracles for a set. We show that every
analytic set $E$ has optimal packing oracles. We also show that every $E$
whose Hausdorff and packing dimensions agree has optimal packing oracles.
Assuming the axiom of choice and the continuum hypothesis, we show that there
are sets with optimal packing oracles without optimal Hausdorff oracles (and
vice-versa).
The structure of the paper is as follows. In Section 2.1 we review the
concepts of measure theory needed, and the (classical) definition of Hausdorff
dimension. In Section 2.2 we review algorithmic information theory, including
the formal definitions of effective dimensions. We then introduce and study
the notion of optimal oracles in Section 3. In particular, we give a general
condition for the existence of optimal oracles in Section 3.1. We use this
condition to prove that analytic sets have optimal oracles in Section 3.2. We
conclude in Section 3.3 with an example, assuming the axiom of choice and the
continuum hypothesis, of a set without optimal oracles. The connection between
Marstrand’s projection theorem and optimal oracles is explored in Section 4. In
this section, we prove that Marstrand’s theorem holds for every set with
optimal oracles. In Section 4.1, we use the construction of a set without
optimal oracles to give a new, algorithmic, proof of Davies' theorem. Finally,
in Section 5, we define and investigate the notion of optimal packing oracles.
## 2 Preliminaries
### 2.1 Outer Measures and Classical Dimension
A set function $\mu:\mathcal{P}(\mathbb{R}^{n})\to[0,\infty]$ is called an
outer measure on $\mathbb{R}^{n}$ if
1. 1.
$\mu(\emptyset)=0$,
2. 2.
if $A\subseteq B$ then $\mu(A)\leq\mu(B)$, and
3. 3.
for any sequence $A_{1},A_{2},\ldots$ of subsets,
$\mu(\bigcup_{i}A_{i})\leq\sum_{i}\mu(A_{i})$.
If $\mu$ is an outer measure, we say that a subset $A$ is $\mu$-measurable if
$\mu(A\cap B)+\mu(B-A)=\mu(B)$,
for every subset $B\subseteq\mathbb{R}^{n}$.
An outer measure $\mu$ is called a metric outer measure if every Borel subset
is $\mu$-measurable and
$\mu(A\cup B)=\mu(A)+\mu(B)$,
for every pair of subsets $A,B$ which are at positive distance from each
other; that is,
$\inf\\{\|x-y\|\,|\,x\in A,y\in B\\}>0$.
An important example of a metric outer measure is the $s$-dimensional
Hausdorff measure. For every $E\subseteq[0,1)^{n}$, define the $s$-dimensional
Hausdorff content at precision $r$ by
$h^{s}_{r}(E)=\inf\left\\{\sum_{i}d(Q_{i})^{s}\,|\,\bigcup_{i}Q_{i}\text{
covers }E\text{ and }d(Q_{i})\leq 2^{-r}\right\\}$,
where $d(Q)$ is the diameter of ball $Q$. We define the $s$-dimensional
Hausdorff measure of $E$ by
$\mathcal{H}^{s}(E)=\lim\limits_{r\to\infty}h^{s}_{r}(E)$.
###### Remark.
It is well-known that $\mathcal{H}^{s}$ is a metric outer measure for every
$s$.
The Hausdorff dimension of a set $E$ is then defined by
$\dim_{H}(E)=\inf\limits_{s}\\{\mathcal{H}^{s}(E)=0\\}=\sup\limits_{s}\\{\mathcal{H}^{s}(E)=\infty\\}$.
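As a standard worked example of this definition (included for orientation; it is not part of the development below), the middle-thirds Cantor set $C$ can be handled directly from the covers that generate it:

```latex
% At stage k of the construction, C is covered by 2^k intervals of diameter
% 3^{-k}, so whenever 3^{-k} \le 2^{-r},
%     h_r^s(C) \le 2^k \, 3^{-ks} = \mathrm{e}^{k(\log 2 - s \log 3)}.
% If s > \log 2 / \log 3, the right-hand side tends to 0 as k \to \infty, so
% \mathcal{H}^s(C) = 0 and \dim_H(C) \le \log 2 / \log 3. A standard
% mass-distribution argument gives the matching lower bound, hence
%     \dim_H(C) = \log 2 / \log 3.
```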
Another important metric outer measure, which gives rise to the packing
dimension of a set, is the $s$-dimensional packing measure. For every
$E\subseteq[0,1)^{n}$, define the $s$-dimensional packing pre-measure by
$p^{s}(E)=\lim\limits_{\delta\to
0}\sup\left\\{\sum\limits_{i\in\mathbb{N}}d(B_{i})^{s}\,|\,\\{B_{i}\\}\text{ is a
set of disjoint balls and }B_{i}\in C(E,\delta)\right\\}$,
where $C(E,\delta)$ is the set of all closed balls with diameter at most
$\delta$ with centers in $E$. We define the $s$-dimensional packing measure of
$E$ by
$\mathcal{P}^{s}(E)=\inf\left\\{\sum\limits_{j}p^{s}(E_{j})\,|\,E\subseteq\bigcup
E_{j}\right\\}$,
where the infimum is taken over all countable covers of $E$. For every $s$,
the $s$-dimensional packing measure is a metric outer measure.
The packing dimension of a set $E$ is then defined by
$\dim_{P}(E)=\inf\limits_{s}\\{\mathcal{P}^{s}(E)=0\\}=\sup\limits_{s}\\{\mathcal{P}^{s}(E)=\infty\\}$.
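It is worth recalling the standard comparison between the two measures (a well-known fact, recorded here for orientation):

```latex
% For every E \subseteq \mathbb{R}^n and every s,
%     \mathcal{H}^s(E) \le \mathcal{P}^s(E),
% so \mathcal{P}^s(E) = 0 forces \mathcal{H}^s(E) = 0, and therefore
%     \dim_H(E) \le \dim_P(E) \le n.
% The sets with \dim_H(E) = \dim_P(E) are exactly those covered by the
% Lutz-Stull hypothesis discussed in the introduction.
```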
In order to prove that every analytic set has optimal oracles, we will make
use of the following facts of geometric measure theory (see, e.g., [7], [2]).
###### Theorem 1.
The following are true.
1. 1.
Suppose $E\subseteq\mathbb{R}^{n}$ is compact and satisfies
$\mathcal{H}^{s}(E)>0$. Then there is a compact subset $F\subseteq E$ such
that $0<\mathcal{H}^{s}(F)<\infty$.
2. 2.
Every analytic set $E\subseteq\mathbb{R}^{n}$ has a $\Sigma^{0}_{2}$ subset
$F\subseteq E$ such that $\dim_{H}(F)=\dim_{H}(E)$.
3. 3.
Suppose $E\subseteq\mathbb{R}^{n}$ is compact and satisfies
$\mathcal{P}^{s}(E)>0$. Then there is a compact subset $F\subseteq E$ such
that $0<\mathcal{P}^{s}(F)<\infty$.
4. 4.
Every analytic set $E\subseteq\mathbb{R}^{n}$ has a $\Sigma^{0}_{2}$ subset
$F\subseteq E$ such that $\dim_{P}(F)=\dim_{P}(E)$.
It is possible to generalize the definition of Hausdorff measure using gauge
functions. A function $\phi:[0,\infty)\to[0,\infty)$ is a gauge function if
$\phi$ is monotonically increasing, strictly increasing for $t>0$ and
continuous. If $\phi$ is a gauge, define the $\phi$-Hausdorff content at
precision $r$ by
$h^{\phi}_{r}(E)=\inf\left\\{\sum_{i}\phi(d(Q_{i}))\,|\,\bigcup_{i}Q_{i}\text{
covers }E\text{ and }d(Q_{i})\leq 2^{-r}\right\\}$,
where $d(Q)$ is the diameter of ball $Q$. We define the $\phi$-Hausdorff
measure of $E$ by
$\mathcal{H}^{\phi}(E)=\lim\limits_{r\to\infty}h^{\phi}_{r}(E)$.
Thus we recover the $s$-dimensional Hausdorff measure when $\phi(t)=t^{s}$.
Gauged Hausdorff measures give fine-grained information about the size of a
set. There are sets $E$ with Hausdorff dimension $s$ but
$\mathcal{H}^{s}(E)=0$ or $\mathcal{H}^{s}(E)=\infty$. However, it is
sometimes possible to find an appropriate gauge so that
$0<\mathcal{H}^{\phi}(E)<\infty$. When $0<\mathcal{H}^{\phi}(E)<\infty$, we
say that $\phi$ is an exact gauge for $E$.
###### Example.
For almost every Brownian path $X$ in $\mathbb{R}^{2}$,
$\mathcal{H}^{2}(X)=0$, but $0<\mathcal{H}^{\phi}(X)<\infty$, where
$\phi(t)=t^{2}\log\frac{1}{t}\log\log\frac{1}{t}$.
For two outer measures $\mu$ and $\nu$, $\mu$ is said to be absolutely
continuous with respect to $\nu$, denoted $\mu\ll\nu$, if $\mu(A)=0$ for every
set $A$ for which $\nu(A)=0$.
###### Example.
For every $s$, let $\phi_{s}(t)=t^{s}\log\frac{1}{t}$. Then
$\mathcal{H}^{s}\ll\mathcal{H}^{\phi_{s}}$.
###### Example.
For every $s$, let $\phi_{s}(t)=\frac{t^{s}}{\log\frac{1}{t}}$. Then
$\mathcal{H}^{\phi_{s}}\ll\mathcal{H}^{s}$.
### 2.2 Algorithmic Information Theory
The _conditional Kolmogorov complexity_ of a binary string
$\sigma\in\\{0,1\\}^{*}$ given binary string $\tau\in\\{0,1\\}^{*}$ is
$K(\sigma|\tau)=\min_{\pi\in\\{0,1\\}^{*}}\left\\{\ell(\pi):U(\pi,\tau)=\sigma\right\\}\,,$
where $U$ is a fixed universal prefix-free Turing machine and $\ell(\pi)$ is
the length of $\pi$. The _Kolmogorov complexity_ of $\sigma$ is
$K(\sigma)=K(\sigma|\lambda)$, where $\lambda$ is the empty string. An
important fact is that the choice of universal machine affects the Kolmogorov
complexity by at most an additive constant (which, especially for our
purposes, can be safely ignored). See [11, 27, 6] for a more comprehensive
overview of Kolmogorov complexity.
We can naturally extend these definitions to Euclidean spaces by introducing
“precision” parameters [16, 14]. Let $x\in\mathbb{R}^{m}$, and
$r,s\in\mathbb{N}$. The _Kolmogorov complexity of $x$ at precision $r$_ is
$K_{r}(x)=\min\left\\{K(p)\,:\,p\in
B_{2^{-r}}(x)\cap\mathbb{Q}^{m}\right\\}\,.$
The _conditional Kolmogorov complexity of $x$ at precision $r$ given
$q\in\mathbb{Q}^{m}$_ is
$\hat{K}_{r}(x|q)=\min\left\\{K(p\,|\,q)\,:\,p\in
B_{2^{-r}}(x)\cap\mathbb{Q}^{m}\right\\}\,.$
The _conditional Kolmogorov complexity of $x$ at precision $r$ given
$y\in\mathbb{R}^{n}$ at precision $s$_ is
$K_{r,s}(x|y)=\max\big{\\{}\hat{K}_{r}(x|q)\,:\,q\in
B_{2^{-s}}(y)\cap\mathbb{Q}^{n}\big{\\}}\,.$
We typically abbreviate $K_{r,r}(x|y)$ by $K_{r}(x|y)$.
The _effective Hausdorff dimension_ and _effective packing dimension_
(although effective Hausdorff dimension was originally defined by J. Lutz [13]
using martingales, it was later shown by Mayordomo [25] that the definition
used here is equivalent; for more details on the history of connections
between Hausdorff dimension and Kolmogorov complexity, see [6, 26]) of a point
$x\in\mathbb{R}^{n}$ are
$\dim(x)=\liminf_{r\to\infty}\frac{K_{r}(x)}{r}\quad\text{and}\quad\operatorname{Dim}(x)=\limsup_{r\to\infty}\frac{K_{r}(x)}{r}\,.$
By letting the underlying fixed prefix-free Turing machine $U$ be a universal
_oracle_ machine, we may _relativize_ the definition in this section to an
arbitrary oracle set $A\subseteq\mathbb{N}$. The definitions of
$K^{A}_{r}(x)$, $\dim^{A}(x)$, $\operatorname{Dim}^{A}(x)$, etc. are then all
identical to their unrelativized versions, except that $U$ is given oracle
access to $A$. Note that taking oracles as subsets of the naturals is quite
general. We can, and frequently do, encode a point $y$ into an oracle, and
consider the complexity of a point relative to $y$. In these cases, we
typically forgo explicitly referring to this encoding, and write e.g.
$K^{y}_{r}(x)$. We can also join two oracles $A,B\subseteq\mathbb{N}$ using
any computable bijection $f:\mathbb{N}\times\mathbb{N}\to\mathbb{N}$. We
denote the join of $A$ and $B$ by $(A,B)$. We can generalize this procedure to
join any countable sequence of oracles.
As mentioned in the introduction, the connection between effective dimensions
and the classical Hausdorff and packing dimensions is given by the point-to-
set principle introduced by J. Lutz and N. Lutz [14].
###### Theorem 2 (Point-to-set principle).
Let $n\in\mathbb{N}$ and $E\subseteq\mathbb{R}^{n}$. Then
$\displaystyle\dim_{H}(E)$
$\displaystyle=\min\limits_{A\subseteq\mathbb{N}}\sup_{x\in
E}\dim^{A}(x),\text{ and}$ $\displaystyle\dim_{P}(E)$
$\displaystyle=\min\limits_{A\subseteq\mathbb{N}}\sup_{x\in
E}\operatorname{Dim}^{A}(x)\,.$
An oracle testifying to the first equality is called a Hausdorff oracle for
$E$. Similarly, an oracle testifying to the second equality is called a
packing oracle for $E$.
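As a worked instance of Theorem 2 (our own illustration, not from the source), consider the middle-thirds Cantor set $C$, for which even the trivial oracle testifies to the first equality:

```latex
% Worked instance of the point-to-set principle for the middle-thirds
% Cantor set C, with dim_H(C) = log_3 2.  Since C is computable, the
% trivial oracle suffices.
%
% Upper bound: the first k ternary digits of x in C (each in {0,2},
% hence one bit apiece) locate x to within 3^{-k}, i.e. to binary
% precision r = k log_2 3.  Therefore
%     K_r(x) \le \frac{r}{\log_2 3} + o(r)
%     \quad\Longrightarrow\quad \dim(x) \le \log_3 2
% for every x in C.
%
% Lower bound: taking the digit sequence to be Martin-Löf random yields
% a point x^* in C with K_r(x^*) \ge \frac{r}{\log_2 3} - o(r), so
%     \sup_{x \in C} \dim(x) = \log_3 2 = \dim_H(C).
```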
## 3 Optimal Hausdorff Oracles
For any set $E$, there are infinitely many Hausdorff oracles for $E$. A
natural question is whether there is a Hausdorff oracle which minimizes the
complexity of every point in $E$. Unfortunately, it is, in general, not
possible for a single oracle to maximally reduce every point. We introduce the
notion of optimal Hausdorff oracles by weakening the condition to a single
point.
###### Definition 3.
Let $E\subseteq\mathbb{R}^{n}$ and $A\subseteq\mathbb{N}$. We say that $A$ is
Hausdorff optimal for $E$ if the following conditions are satisfied.
1. 1.
$A$ is a Hausdorff oracle for $E$.
2. 2.
For every $B\subseteq\mathbb{N}$ and every $\epsilon>0$ there is a point $x\in
E$ such that $\dim^{A,B}(x)\geq\dim_{H}(E)-\epsilon$ and for almost every
$r\in\mathbb{N}$
$K^{A,B}_{r}(x)\geq K^{A}_{r}(x)-\epsilon r$.
Note that the second condition only guarantees the existence of one point
whose complexity is unaffected by the additional information in $B$. However,
we can show that this implies the seemingly stronger condition that “most”
points are unaffected. For $B\subseteq\mathbb{N}$, $\epsilon>0$ define the set
$N(A,B,\epsilon)=\\{x\in E\,|\,(\forall^{\infty}r)\,K^{A,B}_{r}(x)\geq
K^{A}_{r}(x)-\epsilon r\\}$
###### Proposition 4.
Let $E\subseteq\mathbb{R}^{n}$ be a set such that $\dim_{H}(E)>0$ and let $A$
be an oracle. Then $A$ is a Hausdorff optimal oracle for $E$ if and only if
$A$ is a Hausdorff oracle and $\dim_{H}(N(A,B,\epsilon))=\dim_{H}(E)$ for
every $B\subseteq\mathbb{N}$ and $\epsilon>0$.
###### Proof.
For the forward direction, let $A$ be an optimal Hausdorff oracle for $E$. Then
by the first condition of the definition, $A$ is a Hausdorff oracle. Let
$B\subseteq\mathbb{N}$ and $\epsilon>0$. Let $C$ be a Hausdorff oracle for
$N(A,B,\epsilon)$. For the sake of contradiction, suppose that
$\dim_{H}(N(A,B,\epsilon))<\dim_{H}(E)-\gamma$,
for some $\gamma>0$. We will, without loss of generality, assume that
$\gamma<\epsilon$. Then, by the point-to-set principle, for every $x\in
N(A,B,\epsilon)$,
$\displaystyle\dim^{A,(B,C)}(x)$ $\displaystyle\leq\dim^{C}(x)$
$\displaystyle\leq\dim_{H}(N(A,B,\epsilon))$
$\displaystyle<\dim_{H}(E)-\gamma.$
Since, $A$ is an optimal Hausdorff oracle for $E$, there is a point $x\in E$
such that $\dim^{A,(B,C)}(x)\geq\dim_{H}(E)-\gamma$ and for almost every
$r\in\mathbb{N}$
$K^{A,(B,C)}_{r}(x)\geq K^{A}_{r}(x)-\gamma r$.
By our previous discussion, any such point $x$ cannot be in $N(A,B,\epsilon)$.
However, if $x\notin N(A,B,\epsilon)$, then for infinitely many $r$,
$K^{A,(B,C)}_{r}(x)<K^{A}_{r}(x)-\epsilon r$.
Thus, no such $x$ exists, contradicting the fact that $A$ is Hausdorff
optimal.
For the backward direction, let $A$ be an oracle satisfying the hypothesis.
Then $A$ is a Hausdorff oracle for $E$ and the first condition of optimal
Hausdorff oracles is satisfied. Let $B\subseteq\mathbb{N}$ and $\epsilon>0$.
By our hypothesis and the point-to-set principle,
$\displaystyle\dim_{H}(E)$ $\displaystyle=\dim_{H}(N(A,B,\epsilon))$
$\displaystyle\leq\sup\limits_{x\in N(A,B,\epsilon)}\dim^{A,B}(x).$
Therefore, there is certainly a point $x\in E$ such that
$\dim^{A,B}(x)\geq\dim_{H}(E)-\epsilon$ and
$K^{A,B}_{r}(x)\geq K^{A}_{r}(x)-\epsilon r$,
for almost every $r\in\mathbb{N}$. ∎
A simple, but useful, result is that if $B$ is an oracle obtained by adding
additional information to an optimal Hausdorff oracle, then $B$ is also
optimal.
###### Lemma 5.
Let $E\subseteq\mathbb{R}^{n}$. If $A$ is an optimal Hausdorff oracle for $E$,
then the join $C=(A,B)$ is Hausdorff optimal for $E$ for every oracle $B$.
###### Proof.
Let $A$ be an optimal Hausdorff oracle for $E$. By the point-to-set principle
(Theorem 2),
$\displaystyle\dim_{H}(E)$ $\displaystyle=\sup\limits_{x\in E}\dim^{A}(x)$
$\displaystyle\geq\sup\limits_{x\in E}\dim^{C}(x)$
$\displaystyle\geq\dim_{H}(E).$
Hence, the oracle $C=(A,B)$ is a Hausdorff oracle for $E$.
Let $B^{\prime}\subseteq\mathbb{N}$ be an oracle, and let $\epsilon>0$. Let
$x\in E$ be a point such that
$\dim^{A,(B,B^{\prime})}(x)\geq\dim_{H}(E)-\epsilon/2,$ (4)
and
$K_{r}^{A,(B,B^{\prime})}(x)\geq K^{A}_{r}(x)-\epsilon r/2,$ (5)
for almost every precision $r$. Note that such a point exists since $A$ is
optimal for $E$.
For all sufficiently large $r$,
$\displaystyle K^{(A,B),B^{\prime}}_{r}(x)$
$\displaystyle=K^{A,(B,B^{\prime})}_{r}(x)$ $\displaystyle\geq
K^{A}_{r}(x)-\epsilon r/2$ $\displaystyle\geq K^{A,B}_{r}(x)-\epsilon r/2$
$\displaystyle=K^{C}_{r}(x)-\epsilon r/2.$
Therefore, $C=(A,B)$ is optimal for $E$.
∎
We now give some basic closure properties of the class of sets with optimal
Hausdorff oracles.
###### Observation 6.
Let $F\subseteq E$. If $\dim_{H}(F)=\dim_{H}(E)$ and $F$ has an optimal
Hausdorff oracle, then $E$ has an optimal Hausdorff oracle.
We can also show that having optimal Hausdorff oracles is closed under
countable unions.
###### Proposition 7.
Let $E_{1},E_{2},\ldots$ be a countable sequence of sets and let
$E=\cup_{n}E_{n}$. If every set $E_{n}$ has an optimal Hausdorff oracle, then
$E$ has an optimal Hausdorff oracle.
###### Proof.
We first note that
$\dim_{H}(E)=\sup_{n}\dim_{H}(E_{n})$.
For each $n$, let $A_{n}$ be an optimal Hausdorff oracle for $E_{n}$. Let $A$
be the join of $A_{1},A_{2},\ldots$. Let $B$ be a Hausdorff oracle for $E$.
Note that, by Lemma 5, for every $n$, since $A_{n}$ is an optimal Hausdorff
oracle for $E_{n}$, $(A,B)$ is optimal for $E_{n}$.
We now claim that $(A,B)$ is an optimal Hausdorff oracle for $E$. Theorem 2
shows that item (1) of the definition of optimal Hausdorff oracles is
satisfied. For item (2), let $C\subseteq\mathbb{N}$ be an oracle, and let
$\epsilon>0$. Let $n$ be a number such that
$\dim_{H}(E_{n})>\dim_{H}(E)-\epsilon$. Since $(A,B)$ is Hausdorff optimal for
$E_{n}$, there is a point $x\in E_{n}$ such that
1. (i)
$\dim^{(A,B),C}(x)\geq\dim_{H}(E_{n})-\epsilon\geq\dim_{H}(E)-\epsilon$, and
2. (ii)
for almost every $r$,
$K^{(A,B),C}_{r}(x)\geq K^{(A,B)}_{r}(x)-\epsilon r$.
Therefore, item (2) of the definition of optimal Hausdorff oracles is
satisfied, and so $(A,B)$ is Hausdorff optimal for $E$.
∎
### 3.1 Outer Measures and Optimal Oracles
In this section we give a sufficient condition for a set to have optimal
Hausdorff oracles. Specifically, we prove that if $\dim_{H}(E)=s$, and there
is a metric outer measure, absolutely continuous with respect to
$\mathcal{H}^{s}$, such that $0<\mu(E)<\infty$, then $E$ has optimal Hausdorff
oracles. Although stated in this general form, the main application of this
result (in Section 3.2) is for the case $\mu=\mathcal{H}^{s}$.
For every $r\in\mathbb{N}$, let $\mathcal{Q}^{n}_{r}$ be the set of all dyadic
cubes at precision $r$, i.e., cubes of the form
$Q=[m_{1}2^{-r},(m_{1}+1)2^{-r})\times\ldots\times[m_{n}2^{-r},(m_{n}+1)2^{-r})$,
where $0\leq m_{1},\ldots,m_{n}<2^{r}$. For each $r$, we refer to the
$2^{nr}$ cubes in $\mathcal{Q}_{r}$ as $Q_{r,1},\ldots,Q_{r,2^{nr}}$. We can
identify each dyadic cube $Q_{r,i}$ with the unique dyadic rational $d_{r,i}$
at the center of $Q_{r,i}$.
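The identification of cubes with their centers is easy to make concrete. A minimal sketch (our own; the function name is ours) for points in $[0,1)^{n}$:

```python
def dyadic_cube(x, r):
    """Return the corner indices (m_1, ..., m_n) of the dyadic cube in
    Q_r^n containing x in [0,1)^n, and the dyadic rational at its center."""
    m = tuple(int(xi * 2 ** r) for xi in x)          # m_i = floor(x_i * 2^r)
    center = tuple((mi + 0.5) / 2 ** r for mi in m)  # the identifying rational
    return m, center

x = (0.3, 0.71)
m, d = dyadic_cube(x, r=4)
# x lies in [m_i 2^-4, (m_i + 1) 2^-4), and the center is within 2^-5 of x
assert all(mi <= xi * 16 < mi + 1 for mi, xi in zip(m, x))
assert all(abs(di - xi) <= 2 ** -5 for di, xi in zip(d, x))
```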
We now associate, to each metric outer measure, a discrete semimeasure on the
dyadic rationals $\mathbb{D}$. Recall that a discrete semimeasure on
$\mathbb{D}^{n}$ is a function $p:\mathbb{D}^{n}\to[0,1]$ which satisfies
$\Sigma_{r,i}p(d_{r,i})<\infty$.
Let $E\subseteq\mathbb{R}^{n}$ and $\mu$ be a metric outer measure such that
$0<\mu(E)<\infty$. Define the function
$p_{\mu,E}:\mathbb{D}^{n}\rightarrow[0,1]$ by
$p_{\mu,E}(d_{r,i})=\frac{\mu(E\cap Q_{r,i})}{r^{2}\mu(E)}$.
###### Observation 8.
Let $\mu$ be a metric outer measure and $E\subseteq\mathbb{R}^{n}$ such that
$0<\mu(E)<\infty$. Then for every $r$, every dyadic cube
$Q\in\mathcal{Q}_{r}$, and all $r^{\prime}>r$,
$\mu(E\cap Q)=\sum\limits_{\begin{subarray}{c}Q^{\prime}\subset Q\\\
Q^{\prime}\in\mathcal{Q}_{r^{\prime}}\end{subarray}}\mu(E\cap Q^{\prime})$.
###### Proposition 9.
Let $E\subseteq\mathbb{R}^{n}$ and $\mu$ be a metric outer measure such that
$0<\mu(E)<\infty$. Relative to some oracle $A$, the function $p_{\mu,E}$ is a
lower semi-computable discrete semimeasure.
###### Proof.
We can encode the real numbers $p_{\mu,E}(d)$ into an oracle $A$, relative to
which $p_{\mu,E}$ is clearly computable.
To see that $p_{\mu,E}$ is indeed a discrete semimeasure, by Observation 8,
$\displaystyle\sum\limits_{r,i}p_{\mu,E}(d_{r,i})$
$\displaystyle=\sum\limits_{r}\sum\limits_{i=1}^{2^{nr}}\frac{\mu(E\cap
Q_{r,i})}{r^{2}\mu(E)}$
$\displaystyle=\sum\limits_{r}\frac{1}{r^{2}\mu(E)}\sum\limits_{i=1}^{2^{nr}}\mu(E\cap
Q_{r,i})$ $\displaystyle=\sum\limits_{r}\frac{\mu(E)}{r^{2}\mu(E)}$
$\displaystyle<\infty.$
∎
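For a concrete instance (our own), take $n=1$, $E=[0,1)$, and $\mu$ Lebesgue measure. Then $\mu(E\cap Q_{r,i})=2^{-r}$ for each of the $2^{r}$ cubes at precision $r$, so $p_{\mu,E}(d_{r,i})=2^{-r}/r^{2}$, each precision level contributes total mass $1/r^{2}$, and the total mass converges to $\pi^{2}/6<\infty$:

```python
import math

def semimeasure_mass(max_r):
    """Total mass of p_{mu,E} up to precision max_r, for mu = Lebesgue
    measure on E = [0,1): the 2^r cubes at precision r each carry
    2^-r / r^2, so level r contributes 1/r^2 in total."""
    return sum(1.0 / r ** 2 for r in range(1, max_r + 1))

total = semimeasure_mass(100_000)
assert total < math.pi ** 2 / 6        # partial sums stay below pi^2 / 6
assert math.pi ** 2 / 6 - total < 1e-4  # and converge to it
```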
In order to connect the existence of such an outer measure $\mu$ to the
existence of optimal oracles, we need to relate the semimeasure $p_{\mu,E}$ and
Kolmogorov complexity. We achieve this using a fundamental result in
algorithmic information theory.
Levin’s optimal lower semicomputable subprobability measure, relative to an
oracle $A$, on the dyadic rationals $\mathbb{D}$ is defined by
$\mathbf{m}^{A}(d)=\sum\limits_{\pi\,:\,U^{A}(\pi)=d}2^{-|\pi|}$.
###### Lemma 10.
Let $E\subseteq\mathbb{R}^{n}$ and $\mu$ be a metric outer measure such that
$0<\mu(E)<\infty$. Let $A$ be an oracle relative to which $p_{\mu,E}$ is lower
semi-computable. Then there is a constant $\alpha>0$ such that
$\mathbf{m}^{A}(d)\geq\alpha p_{\mu,E}(d)$, for every $d\in\mathbb{D}^{n}$.
###### Proof.
Case and Lutz [3], generalizing Levin’s coding theorem [9, 10], showed that
there is a constant $c$ such that
$\mathbf{m}^{A}(d_{r,i})\leq 2^{-K^{A}(d_{r,i})+K^{A}(r)+c}$,
for every $r\in\mathbb{N}$ and $d_{r,i}\in\mathbb{D}^{n}$. The optimality of
$\mathbf{m}^{A}$ ensures that, for every lower semicomputable (relative to
$A$) discrete semimeasure $\nu$ on $\mathbb{D}^{n}$, there is a constant
$\alpha>0$ such that
$\mathbf{m}^{A}(d_{r,i})\geq\alpha\nu(d_{r,i})$.
Applying this to $\nu=p_{\mu,E}$, which is lower semicomputable relative to
$A$ by Proposition 9, completes the proof. ∎
The results of this section have dealt with the dyadic rationals. However, we
ultimately deal with the Kolmogorov complexity of Euclidean points. A result
of Case and Lutz [3] relates the Kolmogorov complexity of Euclidean points
with the complexity of dyadic rationals.
###### Lemma 11 ([3]).
Let $x\in[0,1)^{n}$, $A\subseteq\mathbb{N}$, and $r\in\mathbb{N}$. Let
$Q_{r,i}$ be the (unique) dyadic cube at precision $r$ containing $x$. Then
$K^{A}_{r}(x)=K^{A}(d_{r,i})-O(\log r)$.
###### Lemma 12.
Let $E\subseteq\mathbb{R}^{n}$ and $\mu$ be a metric outer measure such that
$0<\mu(E)<\infty$. Let $A$ be an oracle relative to which $p_{\mu,E}$ is lower
semi-computable. Then, for every oracle $B\subseteq\mathbb{N}$ and every
$\epsilon>0$, the set
$N=\\{x\in E\,|\,(\exists^{\infty}r)\;K^{A,B}_{r}(x)<K^{A}_{r}(x)-\epsilon
r\\}$
has $\mu$-measure zero.
###### Proof.
Let $B\subseteq\mathbb{N}$ and $\epsilon>0$. For every $R\in\mathbb{N}$, there
is a set $\mathcal{C}_{R}$ of dyadic cubes satisfying the following.
* •
The cubes in $\mathcal{C}_{R}$ cover $N$.
* •
Every $Q_{r,i}$ in $\mathcal{C}_{R}$ satisfies $r\geq R$.
* •
For every $Q_{r,i}\in\mathcal{C}_{R}$,
$K^{A,B}(d_{r,i})<K^{A}(d_{r,i})-\epsilon r+O(\log r)$.
Note that the last item follows from our definition of $N$ by Lemma 11.
Since the family of cubes in $\mathcal{C}_{R}$ covers $N$, by the subadditive
property of $\mu$,
$\sum\limits_{Q_{r,i}\in\mathcal{C}_{R}}\mu(E\cap Q_{r,i})\geq\mu(N)$.
Thus, for every $R$, by Lemma 10 and Kraft’s inequality,
$\displaystyle 1$
$\displaystyle\geq\sum\limits_{Q_{r,i}\in\mathcal{C}_{R}}2^{-K^{A,B}(d_{r,i})}$
$\displaystyle\geq\sum\limits_{Q_{r,i}\in\mathcal{C}_{R}}2^{\epsilon
r-K^{A}(d_{r,i})}$
$\displaystyle\geq\sum\limits_{Q_{r,i}\in\mathcal{C}_{R}}2^{\epsilon
r-K^{A}(r)-c}\mathbf{m}^{A}(d_{r,i})$
$\displaystyle\geq\sum\limits_{Q_{r,i}\in\mathcal{C}_{R}}2^{\epsilon
r-K^{A}(r)-c}\alpha p_{\mu,E}(d_{r,i})$
$\displaystyle\geq\sum\limits_{Q_{r,i}\in\mathcal{C}_{R}}2^{\epsilon
r/2}p_{\mu,E}(d_{r,i})$
$\displaystyle\geq\sum\limits_{Q_{r,i}\in\mathcal{C}_{R}}2^{\epsilon
r/2}\frac{\mu(E\cap Q_{r,i})}{r^{2}\mu(E)}$
$\displaystyle\geq\sum\limits_{Q_{r,i}\in\mathcal{C}_{R}}2^{\epsilon
r/4}\frac{\mu(E\cap Q_{r,i})}{r^{2}\mu(E)}$ $\displaystyle\geq 2^{\epsilon
R/4}\sum\limits_{Q_{r,i}\in\mathcal{C}_{R}}\frac{\mu(E\cap Q_{r,i})}{\mu(E)}$
$\displaystyle\geq 2^{\epsilon R/4}\frac{\mu(N)}{\mu(E)}.$
Since $R$ can be arbitrarily large, and $0<\mu(E)<\infty$, the conclusion
follows. ∎
We now have the machinery in place to prove the main theorem of this section.
###### Theorem 13.
Let $E\subseteq\mathbb{R}^{n}$ with $\dim_{H}(E)=s$. Suppose there is a metric
outer measure $\mu$ such that
$0<\mu(E)<\infty$,
and either
1. 1.
$\mu\ll\mathcal{H}^{s-\delta}$, for every $\delta>0$, or
2. 2.
$\mathcal{H}^{s}\ll\mu$ and $\mathcal{H}^{s}(E)>0$.
Then $E$ has an optimal Hausdorff oracle $A$.
###### Proof.
Let $A\subseteq\mathbb{N}$ be a Hausdorff oracle for $E$ such that $p_{\mu,E}$
is computable relative to $A$. Note that such an oracle exists by the point-
to-set principle and routine encoding. We will show that $A$ is optimal for
$E$.
For the sake of contradiction, suppose that there is an oracle $B$ and
$\epsilon>0$ such that, for every $x\in E$ either
1. 1.
$\dim^{A,B}(x)<s-\epsilon$, or
2. 2.
there are infinitely many $r$ such that $K^{A,B}_{r}(x)<K^{A}_{r}(x)-\epsilon
r$.
Let $N$ be the set of all $x$ for which the second item holds. By Lemma 12,
$\mu(N)=0$. We also note that, by the point-to-set principle,
$\dim_{H}(E-N)\leq s-\epsilon$,
and so $\mathcal{H}^{s}(E-N)=0$.
To achieve the desired contradiction, we first assume that
$\mu\ll\mathcal{H}^{s-\delta}$, for every $\delta>0$. Since
$\mu\ll\mathcal{H}^{s-\delta}$, and $\dim_{H}(E-N)<s-\epsilon$,
$\mu(E-N)=0$.
Since $\mu$ is a metric outer measure,
$\displaystyle 0$ $\displaystyle<\mu(E)$ $\displaystyle\leq\mu(N)+\mu(E-N)$
$\displaystyle=0,$
a contradiction.
Now suppose that $\mathcal{H}^{s}\ll\mu$ and $\mathcal{H}^{s}(E)>0$. Then,
since $\mathcal{H}^{s}$ is an outer measure, $\mathcal{H}^{s}(E)>0$ and
$\mathcal{H}^{s}(E-N)=0$ we must have $\mathcal{H}^{s}(N)>0$. However this
implies that $\mu(N)>0$, and we again have the desired contradiction. Thus $A$
is an optimal Hausdorff oracle for $E$ and the proof is complete. ∎
Recall that $E\subseteq[0,1)^{n}$ is called an $s$-set if
$0<\mathcal{H}^{s}(E)<\infty$.
Since $\mathcal{H}^{s}$ is a metric outer measure, and trivially absolutely
continuous with respect to itself, we have the following corollary.
###### Corollary 14.
Let $E\subseteq[0,1)^{n}$ be an $s$-set. Then there is an optimal Hausdorff
oracle for $E$.
### 3.2 Sets with optimal Hausdorff oracles
We now show that every analytic set has optimal Hausdorff oracles.
###### Lemma 15.
Every analytic set $E$ has optimal Hausdorff oracles.
###### Proof.
We begin by assuming that $E$ is compact, and let $s=\dim_{H}(E)$. Then for
every $t<s$, $\mathcal{H}^{t}(E)>0$. Thus, by Theorem 1(1), there is a
sequence of compact subsets $F_{1},F_{2},\ldots$ of $E$ such that
$\dim_{H}(\bigcup_{n}F_{n})=\dim_{H}(E)$,
and, for each $n$,
$0<\mathcal{H}^{s_{n}}(F_{n})<\infty$,
where $s_{n}=s-1/n$. Therefore, by Theorem 13, each set $F_{n}$ has optimal
Hausdorff oracles. Hence, by Proposition 7, $E$ has optimal Hausdorff oracles
and the conclusion follows.
We now show that every $\Sigma^{0}_{2}$ set has optimal Hausdorff oracles.
Suppose $E=\cup_{n}F_{n}$ is $\Sigma^{0}_{2}$, where each $F_{n}$ is compact.
As we have just seen, each $F_{n}$ has optimal Hausdorff oracles. Therefore,
by Proposition 7, $E$ has optimal Hausdorff oracles and the conclusion
follows.
Finally, let $E$ be analytic. By Theorem 1(2), there is a $\Sigma^{0}_{2}$
subset $F$ of the same Hausdorff dimension as $E$. We have just seen that $F$
must have an optimal Hausdorff oracle. Since $\dim_{H}(F)=\dim_{H}(E)$, by
Observation 6, $E$ has optimal Hausdorff oracles, and the proof is complete. ∎
Crone, Fishman and Jackson [4] have recently shown that, assuming the Axiom of
Determinacy (AD) (note that AD is inconsistent with the axiom of choice),
every subset $E$ has a Borel subset $F$ such that $\dim_{H}(F)=\dim_{H}(E)$.
This, combined with Lemma 15, yields the following corollary.
###### Corollary 16.
Assuming AD, every set $E\subseteq\mathbb{R}^{n}$ has optimal Hausdorff
oracles.
###### Lemma 17.
Suppose that $E\subseteq\mathbb{R}^{n}$ satisfies $\dim_{H}(E)=\dim_{P}(E)$.
Then $E$ has an optimal Hausdorff oracle. Moreover, the join $(A,B)$ is an
optimal Hausdorff oracle whenever $A$ is a Hausdorff oracle and $B$ is a
packing oracle for $E$.
###### Proof.
Let $A$ be a Hausdorff oracle for $E$ and let $B$ be a packing oracle for $E$.
We claim that the join $(A,B)$ is an optimal Hausdorff oracle for $E$. By
the point-to-set principle, and the fact that extra information cannot
increase effective dimension,
$\displaystyle\dim_{H}(E)$ $\displaystyle=\sup\limits_{x\in E}\dim^{A}(x)$
$\displaystyle\geq\sup\limits_{x\in E}\dim^{A,B}(x)$
$\displaystyle\geq\dim_{H}(E).$
Therefore
$\dim_{H}(E)=\sup\limits_{x\in E}\dim^{A,B}(x)$,
and the first condition of optimal Hausdorff oracles is satisfied.
Let $C\subseteq\mathbb{N}$ be an oracle and $\epsilon>0$. By the point-to-set
principle,
$\dim_{H}(E)\leq\sup\limits_{x\in E}\dim^{A,B,C}(x)$,
so there is an $x\in E$ such that
$\dim_{H}(E)-\epsilon/4<\dim^{A,B,C}(x)$.
Let $r$ be sufficiently large. Then, by our choice of $B$ and the fact that
additional information cannot increase the complexity of a point,
$\displaystyle K^{A,B}_{r}(x)$ $\displaystyle\leq K^{B}_{r}(x)$
$\displaystyle\leq\dim_{P}(E)r+\epsilon r/4$
$\displaystyle=\dim_{H}(E)r+\epsilon r/4$
$\displaystyle<\dim^{A,B,C}(x)r+\epsilon r/2$ $\displaystyle\leq
K_{r}^{A,B,C}(x)+\epsilon r.$
Since the oracle $C$ and $\epsilon$ were arbitrary, the proof is complete. ∎
### 3.3 Sets without optimal Hausdorff oracles
In the previous section, we gave general conditions for a set $E$ to have
optimal Hausdorff oracles. Indeed, we saw that under the axiom of determinacy,
every set has optimal Hausdorff oracles.
However, assuming the axiom of choice (AC) and the continuum hypothesis (CH),
we are able to construct sets without optimal Hausdorff oracles.
###### Lemma 18.
Assume AC and CH. Then, for every $s\in(0,1)$, there is a subset
$E\subseteq\mathbb{R}$ with $\dim_{H}(E)=s$ such that $E$ does not have
optimal Hausdorff oracles.
Let $s\in(0,1)$. We begin by defining two sequences of natural numbers,
$\\{a_{n}\\}$ and $\\{b_{n}\\}$. Let $a_{1}=2$, and $b_{1}=\lfloor
2/s\rfloor$. Inductively define $a_{n+1}=b_{n}^{2}$ and $b_{n+1}=\lfloor
a_{n+1}/s\rfloor$. Note that
$\lim_{n}a_{n}/b_{n}=s$.
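The sequences grow very fast, and the ratio $a_{n}/b_{n}$ approaches $s$ from above because of the floor. A quick numerical check (our own sketch; the helper name is ours):

```python
import math

def dimension_sequences(s, n_terms):
    """a_1 = 2, b_1 = floor(2/s), a_{k+1} = b_k^2, b_{k+1} = floor(a_{k+1}/s),
    as in the construction."""
    a, b = [2], [math.floor(2 / s)]
    for _ in range(n_terms - 1):
        a.append(b[-1] ** 2)
        b.append(math.floor(a[-1] / s))
    return a, b

a, b = dimension_sequences(s=0.7, n_terms=6)
# a_k / b_k -> s from above (the floor makes b_k no larger than a_k / s)
assert all(ak / bk >= 0.7 for ak, bk in zip(a, b))
assert abs(a[-1] / b[-1] - 0.7) < 1e-3
```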
Using AC and CH, we order the subsets of the natural numbers such that every
subset has countably many predecessors. For every countable ordinal $\alpha$,
let $f_{\alpha}:\mathbb{N}\to\\{\beta\,|\,\beta<\alpha\\}$ be a function such
that each ordinal $\beta$ strictly less than $\alpha$ is mapped to by
infinitely many $n$. Note that such a function exists, since the range is
countable assuming CH.
We will define real numbers $x_{\alpha}$, $y_{\alpha}$ via transfinite
induction. Let $x_{1}$ be a real which is random relative to $A_{1}$. Let
$y_{1}$ be the real whose binary expansion is given by
$\displaystyle y_{1}[r]=\begin{cases}0&\text{ if }a_{n}<r\leq b_{n}\text{ for
some }n\in\mathbb{N}\\\ x_{1}[r]&\text{ otherwise}\end{cases}$
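The effect of this dilution can be mimicked numerically (our own sketch): the fraction of the first $b_{n}$ bit positions of $y_{1}$ that still copy $x_{1}$ tends to $s$, mirroring the complexity estimate $K_{b_{n}}(y_{1})\approx a_{n}\approx s\,b_{n}$ used in Lemma 19.

```python
import math

def retained_fraction(s, n_terms):
    """Fraction of the first b_n bit positions of y_1 that copy x_1,
    i.e. positions r lying outside every zeroed block (a_k, b_k]."""
    a, b = 2, math.floor(2 / s)
    zeroed = b - a                  # positions blanked in (a_1, b_1]
    for _ in range(n_terms - 1):
        a = b ** 2
        b = math.floor(a / s)
        zeroed += b - a             # blank the next block (a_k, b_k]
    return (b - zeroed) / b

frac = retained_fraction(s=0.5, n_terms=4)
# the density of retained (information-carrying) bits approaches s
assert abs(frac - 0.5) < 0.01
```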
For the induction step, suppose we have defined our points up to $\alpha$. Let
$x_{\alpha}$ be a real number which is random relative to the join of
$\bigcup_{\beta<\alpha}(A_{\beta},x_{\beta})$ and $A_{\alpha}$. This is
possible, as we are assuming that this union is countable. Let $y_{\alpha}$ be
the point whose binary expansion is given by
$\displaystyle y_{\alpha}[r]=\begin{cases}x_{\beta}[r]&\text{ if }a_{n}<r\leq
b_{n},\text{ where }f_{\alpha}(n)=\beta\\\ x_{\alpha}[r]&\text{
otherwise}\end{cases}$
Finally, we define our set $E=\\{y_{\alpha}\\}$. We now claim that
$\dim_{H}(E)=s$, and that $E$ does not have an optimal Hausdorff oracle.
###### Lemma 19.
The Hausdorff dimension of $E$ is $s$.
###### Proof.
We first upper bound the dimension. Let $A$ be an oracle encoding $x_{1}$.
From our construction, for every element $y\in E$, there are infinitely many
intervals $[a_{n},b_{n}]$ such that $y[a_{n},b_{n}]=x_{1}[a_{n},b_{n}]$.
Hence, for every $y\in E$, there are infinitely many $n$ such that
$\displaystyle K^{A}_{b_{n}}(y)$
$\displaystyle=K^{A}_{a_{n}}(y)+K^{A}_{b_{n},a_{n}}(y)+o(b_{n})$
$\displaystyle\leq K^{A}_{a_{n}}(y)+o(b_{n})$ $\displaystyle\leq
a_{n}+o(b_{n}).$
Therefore, by the point to set principle,
$\displaystyle\dim_{H}(E)$ $\displaystyle\leq\sup_{y}\dim^{A}(y)$
$\displaystyle=\sup_{y}\liminf_{r}\frac{K^{A}_{r}(y)}{r}$
$\displaystyle\leq\sup_{y}\liminf_{n}\frac{K^{A}_{b_{n}}(y)}{b_{n}}$
$\displaystyle\leq\sup_{y}\liminf_{n}\frac{a_{n}+o(b_{n})}{b_{n}}$
$\displaystyle\leq\sup_{y}\liminf_{n}s$ $\displaystyle=s,$
and the proof that $\dim_{H}(E)\leq s$ is complete.
For the lower bound, let $A$ be a Hausdorff oracle for $E$, and let $\alpha$
be the ordinal corresponding to $A$. By our construction of $y_{\alpha}$, for
every $n$,
$\displaystyle K^{A_{\alpha}}_{a_{n}}(y_{\alpha})$ $\displaystyle\geq
K^{A_{\alpha}}_{a_{n}}(x_{\alpha})-b_{n-1}$ $\displaystyle\geq
a_{n}-b_{n-1}-o(a_{n})$ $\displaystyle\geq
a_{n}-a_{n}^{\frac{1}{2}}-o(a_{n}).$
Hence, for every $n$, and every $a_{n}<r\leq b_{n}$,
$\displaystyle K^{A_{\alpha}}_{r}(y_{\alpha})$ $\displaystyle\geq
K^{A_{\alpha}}_{a_{n}}(y_{\alpha})$ $\displaystyle\geq
a_{n}-a_{n}^{\frac{1}{2}}-o(a_{n}).$
This implies that
$\frac{K^{A_{\alpha}}_{r}(y_{\alpha})}{r}\geq s-o(1)$,
for every $n$, and every $a_{n}<r\leq b_{n}$.
We can also conclude that, for every $n$ and every $b_{n}<r\leq a_{n+1}$,
$\displaystyle K^{A_{\alpha}}_{r}(y_{\alpha})$
$\displaystyle=K^{A_{\alpha}}_{b_{n}}(y_{\alpha})+K^{A_{\alpha}}_{r,b_{n}}(y_{\alpha})-o(r)$
$\displaystyle\geq a_{n}-a_{n}^{\frac{1}{2}}+r-b_{n}-o(r).$
This implies that
$\displaystyle\frac{K^{A_{\alpha}}_{r}(y_{\alpha})}{r}$
$\displaystyle=1+\frac{a_{n}}{r}-\frac{b_{n}}{r}-o(1)$
$\displaystyle=1-\frac{a_{n}(1/s-1)}{r}-o(1)$ $\displaystyle\geq
1-s(1/s-1)-o(1)$ $\displaystyle=s-o(1).$
for every $n$, and every $b_{n}<r\leq a_{n+1}$.
Together, these inequalities and the point-to-set principle show that
$\displaystyle\dim_{H}(E)$ $\displaystyle=\sup_{x}\dim^{A}(x)$
$\displaystyle\geq\dim^{A}(y_{\alpha})$
$\displaystyle=\liminf_{r}\frac{K^{A}_{r}(y_{\alpha})}{r}$
$\displaystyle\geq\liminf_{r}s-o(1)$ $\displaystyle=s,$
and the proof is complete. ∎
###### Lemma 20.
$E$ does not have optimal Hausdorff oracles.
###### Proof.
Let $A_{\alpha}\subseteq\mathbb{N}$ be an oracle. It suffices to show that
$A_{\alpha}$ is not optimal. With this goal in mind, let $B$ be an oracle
encoding $x_{\alpha}$ and the set $\\{y_{\beta}\,|\,\beta<\alpha\\}$. Note
that we can encode this information since this set is countable.
Let $y_{\beta}\in E$. First, suppose that $\beta\leq\alpha$. Then by our
choice of $B$, $\dim^{A_{\alpha},B}(y_{\beta})=0$. So then suppose that
$\beta>\alpha$. We first note that, since $x_{\beta}$ is random relative to
$A_{\alpha}$
$\displaystyle K^{A_{\alpha}}_{a_{n}}(y_{\beta})$ $\displaystyle\geq
K^{A_{\alpha}}(y_{\beta}[b_{n-1}\ldots a_{n}])-O(\log a_{n})$
$\displaystyle=K^{A_{\alpha}}(x_{\beta}[b_{n-1}\ldots a_{n}])-O(\log a_{n})$
$\displaystyle\geq a_{n}-b_{n-1}-O(\log a_{n})$ $\displaystyle\geq
a_{n}-o(a_{n}),$
for every $n\in\mathbb{N}$.
By our construction, there are infinitely many $n$ such that
$y_{\beta}[a_{n}\ldots b_{n}]=x_{\alpha}[a_{n}\ldots b_{n}]$ (6)
Since $x_{\alpha}$ is random relative to $A_{\alpha}$, for any $n$ such that
(6) holds,
$\displaystyle K^{A_{\alpha}}_{b_{n}}(y_{\beta})$
$\displaystyle=K^{A_{\alpha}}_{a_{n}}(y_{\beta})+K^{A_{\alpha}}_{b_{n},a_{n}}(y_{\beta})$
$\displaystyle\geq a_{n}-o(a_{n})+K^{A_{\alpha}}(y_{\beta}[a_{n}\ldots
b_{n}])-O(\log b_{n})$
$\displaystyle=a_{n}-o(a_{n})+K^{A_{\alpha}}(x_{\alpha}[a_{n}\ldots b_{n}])$
$\displaystyle\geq a_{n}-o(a_{n})+b_{n}-a_{n}-o(b_{n})$
$\displaystyle=b_{n}-o(b_{n}).$
However, since we can compute $x_{\alpha}$ given $B$,
$\displaystyle K^{A_{\alpha},B}_{b_{n}}(y_{\beta})$
$\displaystyle=K^{A_{\alpha},B}_{a_{n}}(y_{\beta})+K^{A_{\alpha},B}_{b_{n},a_{n}}(y_{\beta})+o(b_{n})$
$\displaystyle=K^{A_{\alpha},B}_{a_{n}}(y_{\beta})+o(b_{n})$ $\displaystyle\leq
a_{n}+o(b_{n})$ $\displaystyle=sb_{n}+o(b_{n})$
$\displaystyle\leq sK^{A_{\alpha}}_{b_{n}}(y_{\beta})+o(b_{n}).$
Therefore $A_{\alpha}$ is not optimal, and the claim follows. ∎
#### 3.3.1 Generalization to $\mathbb{R}^{n}$
In this section, we use Lemma 18 to show that there are sets without optimal
Hausdorff oracles in $\mathbb{R}^{n}$ of every possible dimension. We will
need the following lemma on giving sufficient conditions for a product set to
have optimal Hausdorff oracles. Interestingly, we need the product formula to
hold for arbitrary sets, first proven by Lutz [17]. Under the assumption that
$F$ is regular, the product formula gives
$\dim_{H}(F\times G)=\dim_{H}(F)+\dim_{H}(G)=\dim_{P}(F)+\dim_{H}(G)$,
for every set $G$.
###### Lemma 21.
Let $F\subseteq\mathbb{R}^{n}$ be a set such that $\dim_{H}(F)=\dim_{P}(F)$,
let $G\subseteq\mathbb{R}^{m}$ and let $E=F\times G$. Then $E$ has optimal
Hausdorff oracles if and only if $G$ has optimal Hausdorff oracles.
###### Proof.
Assume that $G$ has an optimal Hausdorff oracle $A_{1}$. Let $A_{2},A_{3}$ be
Hausdorff oracles for $E$ and $F$, respectively. Let $A\subseteq\mathbb{N}$ be
the join of all three oracles. We claim that $A$ is optimal for $E$. Let $B$
be any oracle and let $\epsilon>0$. Since $A$ is optimal for $G$, by Lemma 5,
there is a point $z\in G$ such that $\dim^{A,B}(z)\geq\dim_{H}(G)-\epsilon/2$
and
$K^{A,B}_{r}(z)\geq K^{A}_{r}(z)-\epsilon r/2$,
for almost every $r$. By the point-to-set principle, we may choose a $y\in F$
such that
$\dim^{A,B,z}(y)\geq\dim_{H}(F)-\epsilon/2$.
Let $x=(y,z)\in E$. Then
$\displaystyle K^{A,B}_{r}(x)$ $\displaystyle=K^{A,B}_{r}(y,z)$
$\displaystyle=K^{A,B}_{r}(z)+K^{A,B}_{r}(y\,|\,z)$ $\displaystyle\geq
K^{A}_{r}(z)-\epsilon r/2+K^{A,B,z}_{r}(y)$ $\displaystyle\geq
K^{A}_{r}(z)-\epsilon r/2+(\dim_{H}(F)-\epsilon/2)r$
$\displaystyle=K^{A}_{r}(z)-\epsilon r/2+(\dim_{P}(F)-\epsilon/2)r$
$\displaystyle\geq K^{A}_{r}(z)-\epsilon r/2+K^{A}_{r}(y)-\epsilon r/2$
$\displaystyle\geq K^{A}_{r}(z)-\epsilon r/2+K^{A}_{r}(y\,|\,z)-\epsilon r/2$
$\displaystyle\geq K^{A}_{r}(y,z)-\epsilon r$
$\displaystyle=K^{A}_{r}(x)-\epsilon r.$
Since $B$ and $\epsilon$ were arbitrary, $A$ is optimal for $E$.
Suppose that $G$ does not have optimal Hausdorff oracles. Let $A$ be a
Hausdorff oracle for $E$. It suffices to show that $A$ is not optimal for $E$.
Since Hausdorff oracles are closed under the join operation, we may
assume that $A$ is a Hausdorff oracle for $F$ and $G$ as well. Since $G$ does
not have optimal Hausdorff oracles, there is an oracle $B$ and $\epsilon>0$
such that, for every $z\in G$, either $\dim^{A,B}(z)<\dim_{H}(G)-\epsilon$ or
$K^{A,B}_{r}(z)<K^{A}_{r}(z)-\epsilon r/2$,
for infinitely many $r$. Let $x\in E$ be such that
$\dim^{A,B}(x)\geq\dim_{H}(E)-\epsilon/2$. Let $x=(y,z)$ for some $y\in F$ and
$z\in G$. Then we have
$\displaystyle\dim_{H}(F)+\dim_{H}(G)$ $\displaystyle=\dim_{H}(E)$
$\displaystyle\leq\dim^{A,B}(x)+\epsilon/2$
$\displaystyle=\dim^{A,B}(y)+\dim^{A,B}(z\,|\,y)+\epsilon/2$
$\displaystyle\leq\dim_{H}(F)+\dim^{A,B}(z)+\epsilon/2.$
Hence, $\dim^{A,B}(z)\geq\dim_{H}(G)-\epsilon/2$.
We conclude that there are infinitely many $r$ such that
$\displaystyle K^{A,B}_{r}(x)$
$\displaystyle=K^{A,B}_{r}(z)+K^{A,B}_{r}(y\,|\,z)$
$\displaystyle<K^{A}_{r}(z)-\epsilon r/2+K^{A,B}_{r}(y\,|\,z)$
$\displaystyle\leq K^{A}_{r}(z)-\epsilon r/2+K^{A}_{r}(y\,|\,z)$
$\displaystyle=K^{A}_{r}(x)-\epsilon r/2.$
Thus $E$ does not have optimal Hausdorff oracles. ∎
###### Theorem 22.
Assume AC and CH. Then for every $n\in\mathbb{N}$ and $s\in(0,n)$, there is a
subset $E\subseteq\mathbb{R}^{n}$ with $\dim_{H}(E)=s$ such that $E$ does not
have optimal Hausdorff oracles.
###### Proof.
We will show this via induction on $n$. For $n=1$, the conclusion follows from
Lemma 18.
Suppose the claim holds for all $m<n$. Let $s\in(0,n)$. First assume that
$s<n-1$. Then by our induction hypothesis, there is a set
$G\subseteq\mathbb{R}^{n-1}$ without optimal Hausdorff oracles such that
$\dim_{H}(G)=s$. Let $E=\\{0\\}\times G$. Note that, since $\\{0\\}$ is a
singleton, $\dim_{H}(\\{0\\})=\dim_{P}(\\{0\\})=0$. Therefore, by Lemma 21,
$E$ does not have optimal Hausdorff oracles. By the well-known product formula
for Hausdorff dimension,
$\displaystyle\dim_{H}(G)$ $\displaystyle\leq\dim_{H}(\\{0\\})+\dim_{H}(G)$
$\displaystyle\leq\dim_{H}(E)$
$\displaystyle\leq\dim_{P}(\\{0\\})+\dim_{H}(G)$ $\displaystyle=\dim_{H}(G),$
and the claim follows.
We now assume that $s\geq n-1$. Let $d=s-1$. By our induction hypothesis,
there is a set $G\subseteq\mathbb{R}^{n-1}$ without optimal Hausdorff oracles
such that $\dim_{H}(G)=d$. Let $E=[0,1]\times G$. Note that, since $[0,1]$ has
(Lebesgue) measure one, $\dim_{H}([0,1])=\dim_{P}([0,1])=1$. Thus, by Lemma
21, $E$ is a set without optimal Hausdorff oracles. By the product formula,
$\displaystyle 1+\dim_{H}(G)$ $\displaystyle\leq\dim_{H}([0,1])+\dim_{H}(G)$
$\displaystyle\leq\dim_{H}(E)$ $\displaystyle\leq\dim_{P}([0,1])+\dim_{H}(G)$
$\displaystyle=1+\dim_{H}(G),$
and the claim follows. ∎
## 4 Marstrand’s Projection Theorem
The following theorem, due to Lutz and Stull [19], gives sufficient conditions
for strong lower bounds on the complexity of projected points.
###### Theorem 23.
Let $z\in\mathbb{R}^{2}$, $\theta\in[0,\pi]$, $C\subseteq\mathbb{N}$,
$\eta\in\mathbb{Q}\cap(0,1)\cap(0,\dim(z))$, $\varepsilon>0$, and
$r\in\mathbb{N}$. Assume the following are satisfied.
1. 1.
For every $s\leq r$, $K_{s}(\theta)\geq s-\log(s)$.
2. 2.
$K^{C,\theta}_{r}(z)\geq K_{r}(z)-\varepsilon r$.
Then,
$K^{C,\theta}_{r}(p_{\theta}z)\geq\eta r-\varepsilon
r-\frac{4\varepsilon}{1-\eta}r-O(\log r)\,.$
The second condition of this theorem requires the oracle $(C,\theta)$ to give
essentially no information about $z$. The existence of optimal Hausdorff
oracles gives a sufficient condition for this to be true, for all sufficiently
large precisions. Thus we are able to show that Marstrand's projection theorem
holds for any set with optimal Hausdorff oracles.
###### Theorem 24.
Suppose $E\subseteq\mathbb{R}^{2}$ has an optimal Hausdorff oracle. Then for
almost every $\theta\in[0,\pi]$,
$\dim_{H}(p_{\theta}E)=\min\\{\dim_{H}(E),1\\}$.
###### Proof.
Let $A$ be an optimal Hausdorff oracle for $E$. Let $\theta$ be random
relative to $A$. Let $B$ be oracle testifying to the point-to-set principle
for $p_{\theta}E$. It suffices to show that
$\sup\limits_{z\in E}\dim^{A,B}(p_{\theta}z)=\min\\{1,\dim_{H}(E)\\}$.
Since $E$ has optimal Hausdorff oracles, for each $n\in\mathbb{N}$, we may
choose a point $z_{n}\in E$ such that
* •
$\dim^{A,B,\theta}(z_{n})\geq\dim_{H}(E)-\frac{1}{2n}$, and
* •
$K^{A,B,\theta}_{r}(z_{n})\geq K^{A}_{r}(z_{n})-\frac{r}{2n}$ for almost every
$r$.
Fix a sufficiently large $n$, and let $\varepsilon=1/2n$. Let
$\eta\in\mathbb{Q}$ be a rational such that
$\min\\{1,\dim_{H}(E)\\}-5\varepsilon^{1/2}<\eta<1-4\varepsilon^{1/2}$.
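The window for $\eta$ is chosen precisely so that the error terms of Theorem 23 stay small: $\eta<1-4\varepsilon^{1/2}$ forces $1-\eta>4\varepsilon^{1/2}$, whence $\frac{4\varepsilon}{1-\eta}<\varepsilon^{1/2}$. A small numerical sanity check of this arithmetic (the helper function and sample values below are illustrative, not part of the proof):

```python
import math

def window_arithmetic_ok(dim_E: float, n: int) -> bool:
    """Check the eta-window arithmetic used in the proof of Theorem 24.

    For eps = 1/(2n), any eta with
        min(1, dim_E) - 5*sqrt(eps) < eta < 1 - 4*sqrt(eps)
    satisfies 4*eps/(1 - eta) < sqrt(eps), since 1 - eta > 4*sqrt(eps).
    """
    eps = 1.0 / (2 * n)
    lo = min(1.0, dim_E) - 5 * math.sqrt(eps)
    hi = 1.0 - 4 * math.sqrt(eps)
    assert lo < hi  # the window is nonempty because min(1, dim_E) <= 1
    eta = (lo + hi) / 2  # any point of the window works; take the midpoint
    return 4 * eps / (1 - eta) < math.sqrt(eps)
```

For example, `window_arithmetic_ok(1.5, 50)` and `window_arithmetic_ok(0.8, 200)` both return `True`.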
We now show that the conditions of Theorem 23 are satisfied for
$\eta,\varepsilon$, relative to $A$. By our choice of $\theta$,
$K^{A}_{r}(\theta)\geq r-O(\log r)$,
for every $r\in\mathbb{N}$. By our choice of $z_{n}$ and the Hausdorff
optimality of $A$,
$K^{A,B,\theta}_{r}(z_{n})\geq K_{r}(z_{n})-\varepsilon r$,
for all sufficiently large $r$. We may therefore apply Theorem 23, to see
that, for all sufficiently large $r$,
$K^{A,B,\theta}_{r}(p_{\theta}z_{n})\geq\eta r-\varepsilon
r-\frac{4\varepsilon}{1-\eta}r-O(\log r)\,.$
Thus,
$\displaystyle\dim^{A,B}(p_{\theta}z_{n})$
$\displaystyle\geq\dim^{A,B,\theta}(p_{\theta}z_{n})$
$\displaystyle=\liminf_{r}\frac{K^{A,B,\theta}_{r}(p_{\theta}z_{n})}{r}$
$\displaystyle\geq\liminf_{r}\frac{\eta r-\varepsilon
r-\frac{4\varepsilon}{1-\eta}r-O(\log r)}{r}$
$\displaystyle=\eta-\varepsilon-\frac{4\varepsilon}{1-\eta}$
$\displaystyle>\eta-\varepsilon-\varepsilon^{1/2}$
$\displaystyle>\min\\{1,\dim_{H}(E)\\}-\varepsilon-6\varepsilon^{1/2}.$
Hence,
$\lim_{n}\dim^{A,B}(p_{\theta}z_{n})=\min\\{1,\dim_{H}(E)\\}$,
and the proof is complete. ∎
This shows that Marstrand’s theorem holds for every set $E$ with
$\dim_{H}(E)=s$ satisfying any of the following:
1. 1.
$E$ is analytic.
2. 2.
$\dim_{H}(E)=\dim_{P}(E)$.
3. 3.
$\mu\ll\mathcal{H}^{s-\delta}$ for every $\delta>0$, for some metric outer
measure $\mu$ such that $0<\mu(E)<\infty$.
4. 4.
$\mathcal{H}^{s}\ll\mu$ and $\mathcal{H}^{s}(E)>0$, for some metric outer
measure $\mu$ such that $0<\mu(E)<\infty$.
For example, the existence of exact gauged Hausdorff measures on $E$ guarantees
the existence of optimal Hausdorff oracles.
###### Example.
Let $E$ be a set with $\dim_{H}(E)=s$ and $\mathcal{H}^{s}(E)=0$. Suppose that
$0<\mathcal{H}^{\phi}(E)<\infty$, where
$\phi(t)=\frac{t^{s}}{\log\frac{1}{t}}$. Since
$\mathcal{H}^{\phi}\ll\mathcal{H}^{s-\delta}$ for every $\delta>0$, Theorem 13
implies that $E$ has optimal Hausdorff oracles, and thus Marstrand’s theorem
holds for $E$.
###### Example.
Let $E$ be a set with $\dim_{H}(E)=s$ and $\mathcal{H}^{s}(E)=\infty$. Suppose
that $0<\mathcal{H}^{\phi}(E)<\infty$, where $\phi(t)=t^{s}\log\frac{1}{t}$.
Since $\mathcal{H}^{s}\ll\mathcal{H}^{\phi}$, Theorem 13 implies that $E$ has
optimal Hausdorff oracles, and thus Marstrand’s theorem holds for $E$.
### 4.1 Counterexample to Marstrand’s theorem
In this section we show that there are sets for which Marstrand’s theorem does
not hold. While not explicitly mentioning optimal Hausdorff oracles, the
construction is very similar to the construction in Section 3.3.
###### Theorem 25.
Assuming AC and CH, for every $s\in(0,1)$ there is a set $E$ such that
$\dim_{H}(E)=1+s$ but
$\dim_{H}(p_{\theta}E)=s$
for every $\theta\in(\pi/4,3\pi/4)$.
This is a modest generalization of Davies’ theorem to sets with Hausdorff
dimension strictly greater than one. In the next section we give a new proof
of Davies’ theorem by generalizing this construction to the endpoint $s=0$.
We will need the following simple observation.
###### Observation 26.
Let $r\in\mathbb{N}$, $s\in(0,1)$, and $\theta\in(\pi/4,3\pi/4)$. Then for
every dyadic rectangle
$R=[d_{x}-2^{-r},d_{x}+2^{-r}]\times[d_{y}-2^{-sr},d_{y}+2^{-sr}]$,
there is a point $z\in R$ such that $K^{\theta}_{r}(p_{\theta}z)\leq sr+o(r)$.
###### Proof.
Note that $p_{\theta}$ is a Lipschitz function. Thus, for any rectangle
$R=[d_{x}-2^{-r},d_{x}+2^{-r}]\times[d_{y}-2^{-sr},d_{y}+2^{-sr}]$,
the length of its projection (which is an interval) satisfies
$|p_{\theta}R|\geq c2^{-sr}$
for some constant $c$. It is well known that any interval of length
$2^{-\ell}$ contains a point $x$ such that
$K_{r}(x)\leq\ell+o(r)$ for every $r\geq\ell$; taking $\ell=sr$ yields the
desired point $z$.
∎
For every $r\in\mathbb{N}$, $\theta\in(\pi/4,3\pi/4)$, binary string $x$ of
length $r$ and string $y$ of length $sr$, let $g_{\theta}(x,y)\mapsto z$ be a
function such that
$K^{\theta}_{r}(p_{\theta}\,(x,z))\leq sr+o(r)$.
That is, $g_{\theta}$, given a rectangle
$R=[d_{x}-2^{-r},d_{x}+2^{-r}]\times[d_{y}-2^{-sr},d_{y}+2^{-sr}]$,
outputs a value $z$ such that $K_{r}(p_{\theta}(x,z))$ is small.
Let $s\in(0,1)$. We begin by defining two sequences of natural numbers,
$\\{a_{n}\\}$ and $\\{b_{n}\\}$. Let $a_{1}=2$, and $b_{1}=\lfloor
2/s\rfloor$. Inductively define $a_{n+1}=b_{n}^{2}$ and $b_{n+1}=\lfloor
a_{n+1}/s\rfloor$. We will also need, for every ordinal $\alpha$, a function
$f_{\alpha}:\mathbb{N}\to\\{\beta\,|\,\beta<\alpha\\}$ such that each ordinal
$\beta<\alpha$ is mapped to by infinitely many $n$. Note that such a function
exists, since the range is countable assuming CH.
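The scales $a_{n},b_{n}$ alternate between "coded" intervals $(a_{n},b_{n}]$, on which $z$ copies the bits of $g_{\theta}(x,y)$, and "random" intervals $(b_{n},a_{n+1}]$. A short script (illustrative only; the function name is ours) computes the first terms and confirms the defining recurrences:

```python
import math

def theorem25_scales(s: float, n_terms: int):
    """First terms of the sequences from the Theorem 25 construction:
    a_1 = 2, b_1 = floor(2/s), a_{n+1} = b_n**2, b_{n+1} = floor(a_{n+1}/s).
    """
    a, b = 2, math.floor(2 / s)
    terms = [(a, b)]
    for _ in range(n_terms - 1):
        a = b * b               # a_{n+1} = b_n^2
        b = math.floor(a / s)   # b_{n+1} = floor(a_{n+1} / s)
        terms.append((a, b))
    return terms

# For s = 1/2 the first scales are (2, 4), (16, 32), (1024, 2048).
# Since a_{n+1} = b_n^2, we always have b_{n-1} = a_n^{1/2}, the identity
# used later in the complexity estimates.
```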
Using AC and CH, we first order the subsets of the natural numbers and we
order the angles $\theta\in(\pi/4,3\pi/4)$ so that each has at most countably
many predecessors.
We will define real numbers $x_{\alpha}$, $y_{\alpha}$ and $z_{\alpha}$
inductively. Let $x_{1}$ be a real which is random relative to $A_{1}$. Let
$y_{1}$ be a real which is random relative to $(A_{1},x_{1})$. Define $z_{1}$
to be the real whose binary expansion is given by
$\displaystyle z_{1}[r]=\begin{cases}g_{\theta_{1}}(x_{1},y_{1})[r]&\text{ if
}a_{n}<r\leq b_{n}\text{ for some }n\in\mathbb{N}\\\ y_{1}[r]&\text{
otherwise}\end{cases}$
For the induction step, suppose we have defined our points up to ordinal
$\alpha$. Let $x_{\alpha}$ be a real number which is random relative to the
join of $\bigcup_{\beta<\alpha}(A_{\beta},x_{\beta})$ and $A_{\alpha}$. Let
$y_{\alpha}$ be random relative to the join of
$\bigcup_{\beta<\alpha}(A_{\beta},x_{\beta})$, $A_{\alpha}$ and $x_{\alpha}$.
This is possible, as we are assuming CH, and so this union is countable. Let
$z_{\alpha}$ be the point whose binary expansion is given by
$\displaystyle
z_{\alpha}[r]=\begin{cases}g_{\theta_{\beta}}(x_{\alpha},y_{\alpha})[r]&\text{
if }a_{n}<r\leq b_{n},\text{ for }f_{\alpha}(n)=\beta\\\ y_{\alpha}[r]&\text{
otherwise}\end{cases}$
Finally, we define our set $E=\\{(x_{\alpha},z_{\alpha})\\}$.
###### Lemma 27.
For every $\theta\in(\pi/4,3\pi/4)$,
$\dim_{H}(p_{\theta}E)\leq s$
###### Proof.
Let $\theta\in(\pi/4,3\pi/4)$ and $\alpha$ be its corresponding ordinal. Let
$A$ be an oracle encoding $\theta$ and
$\bigcup\limits_{\beta\leq\alpha}(x_{\beta},y_{\beta},z_{\beta})$.
Note that, since we assumed CH, this is a countable union, and so the oracle
is well defined.
Let $z=(x_{\beta},z_{\beta})\in E$. First assume that $\beta\leq\alpha$. Then,
by our construction of $A$, all the information of $p_{\theta}z$ is already
encoded in our oracle, and so
$K^{A}_{r}(p_{\theta}z)=o(r)$.
Now assume that $\beta>\alpha$. Then by our construction of $E$, there are
infinitely many $n$ such that $f_{\beta}(n)=\alpha$. Therefore there are
infinitely many $n$ such that
$z_{\beta}[r]=g_{\theta_{\alpha}}(x_{\beta},y_{\beta})[r]$,
for $a_{n}<r\leq b_{n}$. Recalling the definition of $g_{\theta_{\alpha}}$,
this means that, for each such $n$,
$K^{\theta}_{b_{n}}(p_{\theta}z)\leq sb_{n}+o(b_{n})$.
Therefore, by the point-to-set principle,
$\displaystyle\dim_{H}(p_{\theta}E)$ $\displaystyle\leq\sup_{z\in
E}\dim^{A}(p_{\theta}z)$
$\displaystyle\leq\sup_{\beta>\alpha}\liminf_{n}\frac{K^{A}_{b_{n}}(p_{\theta}z)}{b_{n}}$
$\displaystyle\leq\sup_{\beta>\alpha}\liminf_{n}\frac{sb_{n}}{b_{n}}$
$\displaystyle=s,$
and the proof is complete. ∎
###### Lemma 28.
The Hausdorff dimension of $E$ is $1+s$.
###### Proof.
We first give an upper bound on the dimension. Let $A$ be an oracle encoding
$\theta_{1}$. Let $z=(x_{\alpha},z_{\alpha})$. By our construction of $E$,
there are infinitely many $n$ such that $f_{\alpha}(n)=1$. Therefore there are
infinitely many $n$ such that
$z_{\alpha}[r]=g_{\theta_{1}}(x_{\alpha},y_{\alpha})[r]$,
for $a_{n}<r\leq b_{n}$. Recalling the definition of $g_{\theta_{1}}$, this
means that, for each such $n$,
$K^{\theta_{1}}_{b_{n}}(p_{\theta_{1}}z)\leq sb_{n}+o(b_{n})$.
Moreover,
$\displaystyle K^{\theta_{1}}_{b_{n}}(x_{\alpha},z_{\alpha})$
$\displaystyle\leq
K^{\theta_{1}}_{b_{n}}(x_{\alpha})+K^{\theta_{1}}_{b_{n}}(z_{\alpha}\mid
x_{\alpha})+o(b_{n})$ $\displaystyle\leq
b_{n}+K^{\theta_{1}}_{b_{n}}(p_{\theta_{1}}z)+o(b_{n})$ $\displaystyle\leq
b_{n}+sb_{n}+o(b_{n}).$
Therefore, by the point-to-set principle,
$\displaystyle\dim_{H}(E)$ $\displaystyle\leq\sup_{z\in E}\dim^{A}(z)$
$\displaystyle\leq\sup_{z\in E}\liminf_{n}\frac{K^{A}_{b_{n}}(z)}{b_{n}}$
$\displaystyle\leq\sup_{z\in E}\liminf_{n}\frac{(1+s)b_{n}+o(b_{n})}{b_{n}}$
$\displaystyle=1+s.$
For the lower bound, let $A$ be a Hausdorff oracle for $E$, and let $\alpha$
be the ordinal corresponding to $A$. By construction of
$z=(x_{\alpha},z_{\alpha})$,
$K^{A}_{r}(x_{\alpha})\geq r-o(r)$,
for all $r\in\mathbb{N}$. We also have, for every $n$,
$\displaystyle K^{A}_{a_{n}}(z_{\alpha}\,|\,x_{\alpha})$ $\displaystyle\geq
K^{A}_{a_{n}}(y_{\alpha}\,|\,x_{\alpha})-b_{n-1}-o(a_{n})$ $\displaystyle\geq
a_{n}-b_{n-1}-o(a_{n})$ $\displaystyle=a_{n}-a^{\frac{1}{2}}_{n}-o(a_{n}).$
Hence, for every $n$ and every $a_{n}<r\leq b_{n}$,
$\displaystyle K^{A}_{r}(z_{\alpha}\,|\,x_{\alpha})$ $\displaystyle\geq
K^{A}_{a_{n}}(z_{\alpha}\,|\,x_{\alpha})$ $\displaystyle\geq
a_{n}-a^{\frac{1}{2}}_{n}-o(a_{n}).$
This implies that
$\displaystyle\frac{K^{A}_{r}(x_{\alpha},z_{\alpha})}{r}$
$\displaystyle=\frac{K^{A}_{r}(x_{\alpha})+K^{A}_{r}(z_{\alpha}\,|\,x_{\alpha})}{r}$
$\displaystyle\geq\frac{r+a_{n}-a^{\frac{1}{2}}_{n}-o(a_{n})}{r}$
$\displaystyle\geq 1+s-o(1).$
We can also conclude that, for every $n$ and every $b_{n}<r\leq a_{n+1}$,
$\displaystyle K^{A}_{r}(z_{\alpha}\,|\,x_{\alpha})$ $\displaystyle\geq
K^{A}_{b_{n}}(z_{\alpha}\,|\,x_{\alpha})+K^{A}_{r,b_{n}}(z_{\alpha}\,|\,x_{\alpha})-o(r)$
$\displaystyle\geq a_{n}-a^{\frac{1}{2}}_{n}+r-b_{n}-o(r).$
This implies that
$\displaystyle\frac{K^{A}_{r}(x_{\alpha},z_{\alpha})}{r}$
$\displaystyle=\frac{K^{A}_{r}(x_{\alpha})+K^{A}_{r}(z_{\alpha}\,|\,x_{\alpha})}{r}$
$\displaystyle\geq\frac{r+a_{n}-a^{\frac{1}{2}}_{n}+r-b_{n}-o(r)}{r}$
$\displaystyle\geq 1+s-o(1).$
These inequalities, combined with the point-to-set principle show that
$\displaystyle\dim_{H}(E)$ $\displaystyle=\sup_{z\in E}\dim^{A}(z)$
$\displaystyle\geq\sup_{z\in E}\liminf_{r}\frac{K^{A}_{r}(z)}{r}$
$\displaystyle\geq\liminf_{r}\,(1+s-o(1))$ $\displaystyle=1+s,$
and the proof is complete. ∎
### 4.2 Generalization to the endpoint $s=0$
###### Theorem 29.
Assuming AC and CH, there is a set $E$ such that $\dim_{H}(E)=1$ but
$\dim_{H}(p_{\theta}E)=0$
for every $\theta\in(\pi/4,3\pi/4)$.
For every $r\in\mathbb{N}$, $\theta\in(\pi/4,3\pi/4)$, binary string $x$ of
length $r$ and string $y$ of length $sr$, let $g^{s}_{\theta}(x,y)\mapsto z$
be a function such that
$K^{\theta}_{r}(p_{\theta}\,(x,z))\leq sr+o(r)$.
That is, $g^{s}_{\theta}$, given a rectangle
$R=[d_{x}-2^{-r},d_{x}+2^{-r}]\times[d_{y}-2^{-sr},d_{y}+2^{-sr}]$,
outputs a value $z$ such that $K_{r}(p_{\theta}(x,z))$ is small.
We begin by defining two sequences of natural numbers, $\\{a_{n}\\}$ and
$\\{b_{n}\\}$. Let $a_{1}=2$, and $b_{1}=4$. Inductively define
$a_{n+1}=b_{n}^{2}$ and $b_{n+1}=(n+1)a_{n+1}$. We will also
need, for every ordinal $\alpha$, a function
$f_{\alpha}:\mathbb{N}\to\\{\beta\,|\,\beta<\alpha\\}$ such that each ordinal
$\beta<\alpha$ is mapped to by infinitely many $n$. Note that such a function
exists, since the range is countable assuming CH.
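Here the coded fraction of each scale shrinks: on $(a_{n},b_{n}]$ the point copies $g^{1/n}_{\theta}$, so the complexity contributed at precision $b_{n}$ is only about $b_{n}/n$. A quick check of the recurrences (an illustrative helper; note that $\lfloor a_{n+1}\rfloor=a_{n+1}$ since the terms are integers):

```python
def theorem29_scales(n_terms: int):
    """First terms of the sequences from the Theorem 29 construction:
    a_1 = 2, b_1 = 4, a_{n+1} = b_n**2, b_{n+1} = (n+1) * a_{n+1}.
    """
    a, b = 2, 4
    terms = [(a, b)]
    for n in range(1, n_terms):
        a = b * b          # a_{n+1} = b_n^2
        b = (n + 1) * a    # b_{n+1} = (n+1) * a_{n+1}
        terms.append((a, b))
    return terms

# First terms: (2, 4), (16, 32), (1024, 3072).  The coded rate at scale
# b_n is (b_n / n) / b_n = 1/n, which tends to 0; this is what drives
# dim_H(p_theta E) = 0 in Lemma 30.
```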
Using AC and CH, we first order the subsets of the natural numbers and we
order the angles $\theta\in(\pi/4,3\pi/4)$ so that each has at most countably
many predecessors.
We will define real numbers $x_{\alpha}$, $y_{\alpha}$ and $z_{\alpha}$
inductively. Let $x_{1}$ be a real which is random relative to $A_{1}$. Let
$y_{1}$ be a real which is random relative to $(A_{1},x_{1})$. Define $z_{1}$
to be the real whose binary expansion is given by
$\displaystyle z_{1}[r]=\begin{cases}g^{1/n}_{\theta_{1}}(x_{1},y_{1})[r]&\text{
if }a_{n}<r\leq b_{n}\text{ for some }n\in\mathbb{N}\\\ y_{1}[r]&\text{
otherwise}\end{cases}$
For the induction step, suppose we have defined our points up to ordinal
$\alpha$. Let $x_{\alpha}$ be a real number which is random relative to the
join of $\bigcup_{\beta<\alpha}(A_{\beta},x_{\beta})$ and $A_{\alpha}$. Let
$y_{\alpha}$ be random relative to the join of
$\bigcup_{\beta<\alpha}(A_{\beta},x_{\beta})$, $A_{\alpha}$ and $x_{\alpha}$.
This is possible, as we are assuming CH, and so this union is countable. Let
$z_{\alpha}$ be the point whose binary expansion is given by
$\displaystyle
z_{\alpha}[r]=\begin{cases}g^{1/n}_{\theta_{\beta}}(x_{\alpha},y_{\alpha})[r]&\text{
if }a_{n}<r\leq b_{n},\text{ for }f_{\alpha}(n)=\beta\\\ y_{\alpha}[r]&\text{
otherwise}\end{cases}$
Finally, we define our set $E=\\{(x_{\alpha},z_{\alpha})\\}$.
###### Lemma 30.
For every $\theta\in(\pi/4,3\pi/4)$,
$\dim_{H}(p_{\theta}E)=0$.
###### Proof.
Let $\theta\in(\pi/4,3\pi/4)$ and $\alpha$ be its corresponding ordinal. Let
$A$ be an oracle encoding $\theta$ and
$\bigcup\limits_{\beta\leq\alpha}(x_{\beta},y_{\beta},z_{\beta})$.
Note that, since we assumed CH, this is a countable union, and so the oracle
is well defined.
Let $z=(x_{\beta},z_{\beta})\in E$. First assume that $\beta\leq\alpha$. Then,
by our construction of $A$, all the information of $p_{\theta}z$ is already
encoded in our oracle, and so
$K^{A}_{r}(p_{\theta}z)=o(r)$.
Now assume that $\beta>\alpha$. Then by our construction of $E$, there are
infinitely many $n$ such that $f_{\beta}(n)=\alpha$. Therefore there are
infinitely many $n$ such that
$z_{\beta}[r]=g^{1/n}_{\theta_{\alpha}}(x_{\beta},y_{\beta})[r]$,
for $a_{n}<r\leq b_{n}$. Recalling the definition of
$g^{1/n}_{\theta_{\alpha}}$, this means that, for each such $n$,
$K^{\theta}_{b_{n}}(p_{\theta}z)\leq\frac{b_{n}}{n}+o(b_{n})$.
Therefore, by the point-to-set principle,
$\displaystyle\dim_{H}(p_{\theta}E)$ $\displaystyle\leq\sup_{z\in
E}\dim^{A}(p_{\theta}z)$
$\displaystyle\leq\sup_{\beta>\alpha}\liminf_{n}\frac{K^{A}_{b_{n}}(p_{\theta}z)}{b_{n}}$
$\displaystyle\leq\sup_{\beta>\alpha}\liminf_{n}\frac{b_{n}/n+o(b_{n})}{b_{n}}$
$\displaystyle=0,$
and the proof is complete. ∎
###### Lemma 31.
The Hausdorff dimension of $E$ is $1$.
###### Proof.
We first give an upper bound on the dimension. Let $A$ be an oracle encoding
$\theta_{1}$. Let $z=(x_{\alpha},z_{\alpha})$. By our construction of $E$,
there are infinitely many $n$ such that $f_{\alpha}(n)=1$. Therefore there are
infinitely many $n$ such that
$z_{\alpha}[r]=g^{1/n}_{\theta_{1}}(x_{\alpha},y_{\alpha})[r]$,
for $a_{n}<r\leq b_{n}$. Recalling the definition of $g^{1/n}_{\theta_{1}}$,
this means that, for each such $n$,
$K^{\theta_{1}}_{b_{n}}(p_{\theta_{1}}z)\leq\frac{b_{n}}{n}+o(b_{n})$.
Moreover,
$\displaystyle K^{\theta_{1}}_{b_{n}}(x_{\alpha},z_{\alpha})$
$\displaystyle\leq
K^{\theta_{1}}_{b_{n}}(x_{\alpha})+K^{\theta_{1}}_{b_{n}}(z_{\alpha}\mid
x_{\alpha})+o(b_{n})$ $\displaystyle\leq
b_{n}+K^{\theta_{1}}_{b_{n}}(p_{\theta_{1}}z)+o(b_{n})$ $\displaystyle\leq
b_{n}+\frac{b_{n}}{n}+o(b_{n}).$
Therefore, by the point-to-set principle,
$\displaystyle\dim_{H}(E)$ $\displaystyle\leq\sup_{z\in E}\dim^{A}(z)$
$\displaystyle\leq\sup_{z\in E}\liminf_{n}\frac{K^{A}_{b_{n}}(z)}{b_{n}}$
$\displaystyle\leq\sup_{z\in
E}\liminf_{n}\frac{b_{n}+b_{n}/n+o(b_{n})}{b_{n}}$ $\displaystyle=1.$
For the lower bound, let $A$ be a Hausdorff oracle for $E$, and let $\alpha$
be the ordinal corresponding to $A$. By construction of
$z=(x_{\alpha},z_{\alpha})$,
$K^{A}_{r}(x_{\alpha})\geq r-o(r)$,
for all $r\in\mathbb{N}$. We also have, for every $n$,
$\displaystyle K^{A}_{a_{n}}(z_{\alpha}\,|\,x_{\alpha})$ $\displaystyle\geq
K^{A}_{a_{n}}(y_{\alpha}\,|\,x_{\alpha})-b_{n-1}-o(a_{n})$ $\displaystyle\geq
a_{n}-b_{n-1}-o(a_{n})$ $\displaystyle=a_{n}-a^{\frac{1}{2}}_{n}-o(a_{n}).$
Hence, for every $n$ and every $a_{n}<r\leq b_{n}$,
$\displaystyle K^{A}_{r}(z_{\alpha}\,|\,x_{\alpha})$ $\displaystyle\geq
K^{A}_{a_{n}}(z_{\alpha}\,|\,x_{\alpha})$ $\displaystyle\geq
a_{n}-a^{\frac{1}{2}}_{n}-o(a_{n}).$
This implies that
$\displaystyle\frac{K^{A}_{r}(x_{\alpha},z_{\alpha})}{r}$
$\displaystyle=\frac{K^{A}_{r}(x_{\alpha})+K^{A}_{r}(z_{\alpha}\,|\,x_{\alpha})}{r}$
$\displaystyle\geq\frac{r+a_{n}-a^{\frac{1}{2}}_{n}-o(a_{n})}{r}$
$\displaystyle=1-o(1).$
We can also conclude that, for every $n$ and every $b_{n}<r\leq a_{n+1}$,
$\displaystyle K^{A}_{r}(z_{\alpha}\,|\,x_{\alpha})$ $\displaystyle\geq
K^{A}_{b_{n}}(z_{\alpha}\,|\,x_{\alpha})+K^{A}_{r,b_{n}}(z_{\alpha}\,|\,x_{\alpha})-o(r)$
$\displaystyle\geq a_{n}-a^{\frac{1}{2}}_{n}+r-b_{n}-o(r).$
This implies that
$\displaystyle\frac{K^{A}_{r}(x_{\alpha},z_{\alpha})}{r}$
$\displaystyle=\frac{K^{A}_{r}(x_{\alpha})+K^{A}_{r}(z_{\alpha}\,|\,x_{\alpha})}{r}$
$\displaystyle\geq\frac{r+a_{n}-a^{\frac{1}{2}}_{n}+r-b_{n}-o(r)}{r}$
$\displaystyle\geq 1-o(1).$
These inequalities, combined with the point-to-set principle show that
$\displaystyle\dim_{H}(E)$ $\displaystyle=\sup_{z\in E}\dim^{A}(z)$
$\displaystyle\geq\sup_{z\in E}\liminf_{r}\frac{K^{A}_{r}(z)}{r}$
$\displaystyle\geq\liminf_{r}\,(1-o(1))$ $\displaystyle=1,$
and the proof is complete. ∎
## 5 Optimal Packing Oracles
Similarly, we can define optimal packing oracles for a set.
###### Definition 32.
Let $E\subseteq\mathbb{R}^{n}$ and $A\subseteq\mathbb{N}$. We say that $A$ is
an optimal packing oracle (or packing optimal) for $E$ if the following
conditions are satisfied.
1. 1.
$A$ is a packing oracle for $E$.
2. 2.
For every $B\subseteq\mathbb{N}$ and every $\epsilon>0$ there is a point $x\in
E$ such that $\operatorname{Dim}^{A,B}(x)\geq\dim_{P}(E)-\epsilon$ and for
almost every $r\in\mathbb{N}$
$K^{A,B}_{r}(x)\geq K^{A}_{r}(x)-\epsilon r$.
Let $E\subseteq\mathbb{R}^{n}$ and $A\subseteq\mathbb{N}$. For
$B\subseteq\mathbb{N}$, $\epsilon>0$ define the set
$N(A,B,\epsilon)=\\{x\in E\,|\,(\forall^{\infty}r)\,K^{A,B}_{r}(x)\geq
K^{A}_{r}(x)-\epsilon r\\}$
###### Proposition 33.
Let $E\subseteq\mathbb{R}^{n}$ be a set such that $\dim_{P}(E)>0$ and let $A$
be an oracle. Then $A$ is packing optimal for $E$ if and only if $A$ is a
packing oracle and for every $B\subseteq\mathbb{N}$ and $\epsilon>0$,
$\dim_{P}(N(A,B,\epsilon))=\dim_{P}(E)$.
###### Proof.
For the forward direction, let $A$ be an optimal packing oracle for $E$. Then
by the first condition of the definition, $A$ is a packing oracle. Let
$B\subseteq\mathbb{N}$ and $\epsilon>0$. Let $C$ be a packing oracle for
$N(A,B,\epsilon)$. For the sake of contradiction, suppose that
$\dim_{P}(N(A,B,\epsilon))<\dim_{P}(E)-\gamma$,
for some $\gamma>0$. We will, without loss of generality, assume that
$\gamma<\epsilon$. Then, by the point-to-set principle, for every $x\in
N(A,B,\epsilon)$,
$\displaystyle\operatorname{Dim}^{A,(B,C)}(x)$
$\displaystyle\leq\operatorname{Dim}^{C}(x)$
$\displaystyle\leq\dim_{P}(N(A,B,\epsilon))$
$\displaystyle<\dim_{P}(E)-\gamma.$
Since $A$ is an optimal packing oracle for $E$, there is a point $x\in E$
such that $\operatorname{Dim}^{A,(B,C)}(x)\geq\dim_{P}(E)-\gamma$ and for
almost every $r\in\mathbb{N}$
$K^{A,(B,C)}_{r}(x)\geq K^{A}_{r}(x)-\gamma r$.
By our previous discussion, any such point $x$ cannot be in $N(A,B,\epsilon)$.
However, if $x\notin N(A,B,\epsilon)$, then for infinitely many $r$,
$K^{A,(B,C)}_{r}(x)<K^{A}_{r}(x)-\epsilon r$.
Thus, no such $x$ exists, contradicting the fact that $A$ is packing optimal.
For the backward direction, let $A$ be an oracle satisfying the hypothesis.
Then $A$ is a packing oracle for $E$ and the first condition of optimal
packing oracles is satisfied. Let $B\subseteq\mathbb{N}$ and $\epsilon>0$.
By our hypothesis and the point-to-set principle,
$\displaystyle\dim_{P}(E)$ $\displaystyle=\dim_{P}(N(A,B,\epsilon))$
$\displaystyle\leq\sup\limits_{x\in N(A,B,\epsilon)}\operatorname{Dim}^{A,B}(x).$
Therefore, there is certainly a point $x\in E$ such that
$\operatorname{Dim}^{A,B}(x)\geq\dim_{P}(E)-\epsilon$ and
$K^{A,B}_{r}(x)\geq K^{A}_{r}(x)-\epsilon r$,
for almost every $r\in\mathbb{N}$. ∎
###### Lemma 34.
Let $E\subseteq\mathbb{R}^{n}$. If $A$ is packing optimal for $E$, then the
join $C=(A,B)$ is packing optimal for $E$ for every oracle $B$.
###### Proof.
Let $A$ be an optimal packing oracle for $E$, let $B$ be an oracle and let
$C=(A,B)$. By the point-to-set principle (Theorem 2),
$\displaystyle\dim_{P}(E)$ $\displaystyle=\sup\limits_{x\in
E}\operatorname{Dim}^{A}(x)$ $\displaystyle\geq\sup\limits_{x\in
E}\operatorname{Dim}^{C}(x)$ $\displaystyle\geq\dim_{P}(E).$
Hence, the oracle $C=(A,B)$ is a packing oracle for $E$.
Let $B^{\prime}\subseteq\mathbb{N}$ be an oracle, and let $\epsilon>0$. Let
$x\in E$ be a point such that
$\operatorname{Dim}^{A,(B,B^{\prime})}(x)\geq\dim_{P}(E)-\epsilon/2,$ (7)
and
$K_{r}^{A,(B,B^{\prime})}(x)\geq K^{A}_{r}(x)-\epsilon r/2,$ (8)
for almost every precision $r$. Note that such a point exists since $A$ is
packing optimal for $E$.
For all sufficiently large $r$,
$\displaystyle K^{(A,B),B^{\prime}}_{r}(x)$
$\displaystyle=K^{A,(B,B^{\prime})}_{r}(x)$ $\displaystyle\geq
K^{A}_{r}(x)-\epsilon r/2$ $\displaystyle\geq K^{A,B}_{r}(x)-\epsilon r/2$
$\displaystyle=K^{C}_{r}(x)-\epsilon r/2.$
Moreover, $\operatorname{Dim}^{C,B^{\prime}}(x)=\operatorname{Dim}^{A,(B,B^{\prime})}(x)\geq\dim_{P}(E)-\epsilon/2$
by (7). Therefore, $C=(A,B)$ is packing optimal for $E$. ∎
We now give some basic closure properties of the class of sets with optimal
packing oracles.
###### Observation 35.
Let $F\subseteq E$. If $\dim_{P}(F)=\dim_{P}(E)$ and $F$ has an optimal
packing oracle, then $E$ has an optimal packing oracle.
We can also show that having optimal packing oracles is closed under countable
unions.
###### Lemma 36.
Let $E_{1},E_{2},\ldots$ be a countable sequence of sets and let
$E=\cup_{n}E_{n}$. If every set $E_{n}$ has an optimal packing oracle, then
$E$ has an optimal packing oracle.
###### Proof.
We first note that
$\dim_{P}(E)=\sup_{n}\dim_{P}(E_{n})$.
For each $n$, let $A_{n}$ be an optimal packing oracle for $E_{n}$. Let $A$ be
the join of $A_{1},A_{2},\ldots$. Let $B$ be an oracle guaranteed by Theorem 2
such that
$\sup_{x\in E}\operatorname{Dim}^{B}(x)=\sup_{n}\dim_{P}(E_{n})$.
Note that, by Lemma 34, for every $n$, $(A,B)$ is packing optimal for $E_{n}$.
We now claim that $(A,B)$ is an optimal packing oracle for $E$. Theorem 2
shows that item (1) of the definition of optimal packing oracles is satisfied.
For item (2), let $C\subseteq\mathbb{N}$ be an oracle, and let $\epsilon>0$.
Let $n$ be a number such that $\dim_{P}(E_{n})>\dim_{P}(E)-\epsilon$. Since
$(A,B)$ is packing optimal for $E_{n}$, there is a point $x\in E_{n}$ such
that
1. (i)
$\operatorname{Dim}^{(A,B),C}(x)\geq\dim_{P}(E_{n})-\epsilon\geq\dim_{P}(E)-2\epsilon$, and
2. (ii)
for almost every $r$,
$K^{(A,B),C}_{r}(x)\geq K^{(A,B)}_{r}(x)-\epsilon r$.
Therefore, item (2) of the definition of optimal packing oracles is satisfied,
and so $(A,B)$ is packing optimal for $E$. ∎
For every $0\leq\alpha<\beta\leq 1$ define the set
$D_{\alpha,\beta}=\\{x\in(0,1)\,|\,\dim(x)=\alpha\text{ and
}\operatorname{Dim}(x)=\beta\\}$.
###### Lemma 37.
For every $0\leq\alpha<\beta\leq 1$, $D_{\alpha,\beta}$ has optimal Hausdorff
and optimal packing oracles and
$\displaystyle\dim_{H}(D_{\alpha,\beta})=\alpha$
$\displaystyle\dim_{P}(D_{\alpha,\beta})=\beta.$
###### Proof.
We begin by noting that $D_{\alpha,\beta}$ is Borel. Therefore, by Theorems 13
and 39, $D_{\alpha,\beta}$ has optimal Hausdorff and optimal packing oracles.
Thus, it suffices to prove the dimension equalities.
Define the increasing sequence of natural numbers $\\{h_{j}\\}$ inductively as
follows. Let $h_{1}=2$, and let $h_{j+1}=2^{h_{j}}$. For every oracle $A$ let
$z_{A}$ be a point such that, for every $\delta>0$ and all sufficiently large
$r$,
$K^{A}_{(1+\delta)r,r}(z_{A}\,|\,z_{A})=\alpha\delta
r=K_{(1+\delta)r,r}(z_{A}\,|\,z_{A})$.
Let $y_{A}$ be random relative to $A$ and $z_{A}$.
Let $x_{A}$ be the point whose binary expansion is given by
$\displaystyle x_{A}[r]=\begin{cases}z_{A}[r]&\text{ if
}h_{j}<r\leq\frac{1-\beta}{1-\alpha}h_{j+1}\text{ for some }j\in\mathbb{N}\\\
y_{A}[r]&\text{ otherwise}\end{cases}$
Let $A$ be an oracle, and consider the point $x_{A}$. Let $r\in\mathbb{N}$ be
sufficiently large and let $j\in\mathbb{N}$ such that $h_{j}<r\leq h_{j+1}$.
We first suppose that $r\leq\frac{1-\beta}{1-\alpha}h_{j+1}$. Then
$\displaystyle K_{r}(x_{A})$ $\displaystyle\geq K^{A}_{r}(x_{A})$
$\displaystyle=K^{A}_{h_{j}}(x_{A})+K^{A}_{r,h_{j}}(x_{A}\,|\,x_{A})$
$\displaystyle=O(\log r)+K^{A}_{r,h_{j-1}}(z_{A}\,|\,z_{A})$
$\displaystyle=\alpha r+O(\log r)$ $\displaystyle\geq K_{r}(x_{A}).$
Now suppose that $r>\frac{1-\beta}{1-\alpha}h_{j+1}$. Let
$t=\frac{1-\beta}{1-\alpha}h_{j+1}$. Then
$\displaystyle K_{r}(x_{A})$ $\displaystyle\geq K^{A}_{r}(x_{A})$
$\displaystyle=K^{A}_{t}(x_{A})+K^{A}_{r,t}(x_{A}\,|\,x_{A})+O(\log r)$
$\displaystyle=\alpha t+K^{A}_{r,t}(x_{A}\,|\,x_{A})+O(\log r)$
$\displaystyle=\alpha t+r-t+O(\log r)$ $\displaystyle=r-t(1-\alpha)+O(\log r)$
$\displaystyle=r-(1-\beta)h_{j+1}+O(\log r)$ $\displaystyle\geq K_{r}(x_{A}).$
In particular, $K^{A}_{r}(x_{A})\geq\alpha r$ for every $h_{j}<r\leq h_{j+1}$.
Hence for every oracle $A$,
$\dim^{A}(x_{A})=\alpha=\dim(x_{A})$.
For all sufficiently large $j$,
$\displaystyle K_{h_{j}}(x_{A})$ $\displaystyle=K^{A}_{h_{j}}(x_{A})$
$\displaystyle=h_{j}-(1-\beta)h_{j}+O(\log h_{j})$ $\displaystyle=\beta
h_{j}+O(\log h_{j}),$
and so
$\operatorname{Dim}^{A}(x_{A})=\beta=\operatorname{Dim}(x_{A})$.
Therefore, for every $A$, $x_{A}\in D_{\alpha,\beta}$.
Finally, by the above bounds,
$\displaystyle\dim_{H}(D_{\alpha,\beta})=\alpha$
$\displaystyle\dim_{P}(D_{\alpha,\beta})=\beta.$
∎
### 5.1 Sufficient conditions for optimal packing oracles
###### Lemma 38.
Let $E\subseteq\mathbb{R}^{n}$ be a set such that $\dim_{H}(E)=\dim_{P}(E)=s$.
Then $E$ has optimal Hausdorff and optimal packing oracles.
###### Proof.
Lemma 17 shows that $E$ has optimal Hausdorff oracles. Let $A_{1}$ be an
optimal Hausdorff oracle for $E$. Let $A_{2}$ be a packing oracle for $E$. Let
$A=(A_{1},A_{2})$. By Lemma 5, $A$ is an optimal Hausdorff oracle for $E$. We
now show that $A$ is an optimal packing oracle for $E$.
It is clear that $A$ is a packing oracle for $E$. Let $B\subseteq\mathbb{N}$
and $\epsilon>0$. Since $A$ is Hausdorff optimal for $E$, there is a point
$x\in E$ such that $\dim^{A,B}(x)\geq s-\epsilon$ and
$K^{A,B}_{r}(x)\geq K^{A}_{r}(x)-\epsilon r$
for almost every $r$. Therefore
$\displaystyle\operatorname{Dim}^{A,B}(x)$ $\displaystyle\geq\dim^{A,B}(x)$
$\displaystyle\geq s-\epsilon$ $\displaystyle=\dim_{P}(E)-\epsilon.$
Therefore $x$ satisfies the second condition of optimal packing oracles, and
the conclusion follows. ∎
###### Theorem 39.
Let $E\subseteq\mathbb{R}^{n}$ with $\dim_{P}(E)=s$. Suppose there is a metric
outer measure $\mu$ such that
$0<\mu(E)<\infty$,
and either
1. 1.
$\mu\ll\mathcal{P}^{s}$, or
2. 2.
$\mathcal{P}^{s}\ll\mu$ and $\mathcal{P}^{s}(E)>0$.
Then $E$ has an optimal packing oracle $A$.
###### Proof.
Let $A\subseteq\mathbb{N}$ be a packing oracle for $E$ such that $p_{\mu,E}$
is computable relative to $A$. Note that such an oracle exists by the point-
to-set principle and routine encoding. We will show that $A$ is packing
optimal for $E$.
For the sake of contradiction, suppose that there is an oracle $B$ and
$\epsilon>0$ such that, for every $x\in E$ either
1. 1.
$\operatorname{Dim}^{A,B}(x)<s-\epsilon$, or
2. 2.
there are infinitely many $r$ such that $K^{A,B}_{r}(x)<K^{A}_{r}(x)-\epsilon
r$.
Let $N$ be the set of all $x$ for which the second item holds. By Lemma 12,
$\mu(N)=0$. We also note that, by the point-to-set principle,
$\dim_{P}(E-N)\leq s-\epsilon$,
and so $\mathcal{P}^{s}(E-N)=0$.
To achieve the desired contradiction, we first assume that
$\mu\ll\mathcal{P}^{s}$. Since $\mathcal{P}^{s}(E-N)=0$ and
$\mu\ll\mathcal{P}^{s}$, it follows that
$\mu(E-N)=0$.
Since $\mu$ is a metric outer measure,
$\displaystyle 0$ $\displaystyle<\mu(E)$ $\displaystyle\leq\mu(N)+\mu(E-N)$
$\displaystyle=0,$
a contradiction.
Now suppose that $\mathcal{P}^{s}\ll\mu$ and $\mathcal{P}^{s}(E)>0$. Then,
since $\mathcal{P}^{s}$ is an outer measure, $\mathcal{P}^{s}(E)>0$ and
$\mathcal{P}^{s}(E-N)=0$ we must have $\mathcal{P}^{s}(N)>0$. However this
implies that $\mu(N)>0$, and we again have the desired contradiction. Thus $A$
is an optimal packing oracle for $E$ and the proof is complete. ∎
We now show that every analytic set has optimal packing oracles.
###### Lemma 40.
Every analytic set $E$ has optimal packing oracles.
###### Proof.
A set $E\subseteq\mathbb{R}^{n}$ is called a packing $s$-set if
$0<\mathcal{P}^{s}(E)<\infty$.
Since $\mathcal{P}^{s}$ is a metric outer measure, and trivially absolutely
continuous with respect to itself, Theorem 39 shows that if $E$ is a packing
$s$-set then there is an optimal packing oracle for $E$.
Now assume that $E$ is compact, and let $s=\dim_{P}(E)$. Then for every $t<s$,
$\mathcal{P}^{t}(E)>0$. Thus, by Theorem 1, there is a sequence of compact
subsets $F_{1},F_{2},\ldots$ of $E$ such that
$\dim_{P}(\bigcup_{n}F_{n})=\dim_{P}(E)$,
and, for each $n$,
$0<\mathcal{P}^{s_{n}}(F_{n})<\infty$,
where $s_{n}=s-1/n$. Therefore, each set $F_{n}$ has optimal packing oracles.
Hence, by Lemma 36, $E$ has optimal packing oracles and the conclusion
follows.
We now show that every $\Sigma^{0}_{2}$ set has optimal packing oracles.
Suppose $E=\cup_{n}F_{n}$ is $\Sigma^{0}_{2}$, where each $F_{n}$ is compact.
As we have just seen, each $F_{n}$ has optimal packing oracles. Therefore, by
Lemma 36, $E$ has optimal packing oracles and the conclusion follows.
Finally, let $E$ be analytic. By Theorem 1, there is a $\Sigma^{0}_{2}$ subset
$F$ of the same packing dimension as $E$. We have just seen that $F$ must have
an optimal packing oracle. Since $\dim_{P}(F)=\dim_{P}(E)$, by Observation 35
$E$ has optimal packing oracles, and the proof is complete. ∎
### 5.2 Sets without optimal oracles
###### Theorem 41.
Assuming CH and AC, for every $0<s_{1}<s_{2}\leq 1$ there is a set
$E\subseteq\mathbb{R}$ which has neither Hausdorff optimal nor packing
optimal oracles such that
$\dim_{H}(E)=s_{1}$ and $\dim_{P}(E)=s_{2}$.
###### Proof.
Let $\delta=s_{2}-s_{1}$. We begin by defining two sequences of natural
numbers, $\\{a_{n}\\}$ and $\\{b_{n}\\}$. Let $a_{1}=2$, and $b_{1}=4$.
Inductively define $a_{n+1}=2^{b_{n}}$ and $b_{n+1}=2^{a_{n+1}}$.
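These scales grow as an exponential tower, so each precision regime dwarfs everything that came before it. A toy computation of the first terms (an illustrative helper, not part of the construction):

```python
def theorem41_scales(n_terms: int):
    """First terms of the sequences from the Theorem 41 construction:
    a_1 = 2, b_1 = 4, a_{n+1} = 2**b_n, b_{n+1} = 2**a_{n+1}.

    The terms form an exponential tower: already b_3 = 2**(2**65536) is
    far beyond any explicit representation, so keep n_terms small.
    """
    a, b = 2, 4
    terms = [(a, b)]
    for _ in range(n_terms - 1):
        a = 2 ** b   # a_{n+1} = 2^{b_n}
        b = 2 ** a   # b_{n+1} = 2^{a_{n+1}}
        terms.append((a, b))
    return terms
```

For instance, the second pair of scales is already $(a_{2},b_{2})=(16,65536)$.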
Using AC and CH, we order the subsets of the natural numbers such that every
subset has countably many predecessors. For every countable ordinal $\alpha$,
let $f_{\alpha}:\mathbb{N}\to\\{\beta\,|\,\beta<\alpha\\}$ be a function such
that each ordinal $\beta$ strictly less than $\alpha$ is mapped to by
infinitely many $n$. Note that such a function exists, since the range is
countable assuming CH.
We will define real numbers $w_{\alpha}$, $x_{\alpha}$, $y_{\alpha}$ and
$z_{\alpha}$ via transfinite induction. Let $w_{1}$ be a real such that, for
every $\gamma>0$ and all sufficiently large $r$,
$K^{A_{1}}_{(1+\gamma)r,r}(w_{1}\,|\,w_{1})=s_{1}\gamma
r=K_{(1+\gamma)r,r}(w_{1}\,|\,w_{1})$.
Let $x_{1}$ be random relative to $A_{1}$ and $w_{1}$. Let $y_{1}$ be a real
such that, for every $\gamma>0$ and all sufficiently large $r$,
$K^{A_{1}}_{(1+\gamma)r,r}(y_{1}\,|\,y_{1})=(s_{1}+\frac{\delta}{2})\gamma
r=K_{(1+\gamma)r,r}(y_{1}\,|\,y_{1})$.
Let $z_{1}$ be the real whose binary expansion is given by
$\displaystyle z_{1}[r]=\begin{cases}w_{1}[r]&\text{ if }a_{n}<r\leq\frac{1-s_{2}}{1-s_{1}}b_{n}\text{ for some }n\in\mathbb{N}\\ x_{1}[r]&\text{ if }\frac{1-s_{2}}{1-s_{1}}b_{n}<r\leq b_{n}\text{ for some }n\in\mathbb{N}\\ y_{1}[r]&\text{ if }b_{n}<r\leq(1-s_{1}\delta/2)a_{n+1}\text{ for some }n\in\mathbb{N}\\ 0&\text{ if }(1-s_{1}\delta/2)a_{n+1}<r\leq a_{n+1}\text{ for some }n\in\mathbb{N}\end{cases}$
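The interval structure of this interleaving can be sanity-checked computationally. The sketch below is our own illustration, not part of the proof: it classifies which source supplies bit $r$ of $z_{1}$, taking toy values $s_{1}=1/2$, $s_{2}=3/4$ and the cutoff $(1-s_{1}\delta/2)a_{n+1}$ that the complexity estimates later in the proof use.

```python
from fractions import Fraction

# Toy parameters; the actual reals w_1, x_1, y_1 are not computed here.
s1, s2 = Fraction(1, 2), Fraction(3, 4)
delta = s2 - s1                 # = 1/4
rho = (1 - s2) / (1 - s1)       # w/x cutoff ratio, here 1/2
cut = 1 - s1 * delta / 2        # y/padding cutoff ratio, here 15/16

a = [2, 16]      # a_1, a_2 from a_{n+1} = 2^{b_n}
b = [4, 65536]   # b_1, b_2 from b_{n+1} = 2^{a_{n+1}}

def region(r):
    """Return which source supplies bit r of z_1 in the case analysis."""
    for n in range(len(b)):
        a_n, b_n, a_next = a[n], b[n], 2 ** b[n]
        if a_n < r <= rho * b_n:
            return "w"
        if rho * b_n < r <= b_n:
            return "x"
        if b_n < r <= cut * a_next:
            return "y"
        if cut * a_next < r <= a_next:
            return "pad"
    raise ValueError("r beyond the tabulated scales")
```

With these toy values, for instance, bits $r\in(4,15]$ come from $y_{1}$ and bits $r\in(15,16]$ are padding.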
For the induction step, suppose we have defined our points up to $\alpha$. Let
$A$ be the join of
$\bigcup_{\beta<\alpha}(A_{\beta},w_{\beta},x_{\beta},y_{\beta},z_{\beta})$
and $A_{\alpha}$. Let $w_{\alpha}$ be a real such that, for every $\gamma>0$
and all sufficiently large $r$,
$K^{A}_{(1+\gamma)r,r}(w_{\alpha}\,|\,w_{\alpha})=s_{1}\gamma
r=K_{(1+\gamma)r,r}(w_{\alpha}\,|\,w_{\alpha})$.
Let $x_{\alpha}$ be random relative to $A$ and $w_{\alpha}$. Let $y_{\alpha}$
be a real such that, for every $\gamma>0$ and all sufficiently large $r$,
$K^{A,w_{\alpha},x_{\alpha}}_{(1+\gamma)r,r}(y_{\alpha}\,|\,y_{\alpha})=(s_{1}+\frac{\delta}{2})\gamma
r=K_{(1+\gamma)r,r}(y_{\alpha}\,|\,y_{\alpha})$.
Let $z_{\alpha}$ be the real whose binary expansion is given by
$\displaystyle z_{\alpha}[r]=\begin{cases}w_{\alpha}[r]&\text{ if }a_{n}<r\leq\frac{1-s_{2}}{1-s_{1}}b_{n}\text{ for some }n\in\mathbb{N}\\ x_{\alpha}[r]&\text{ if }\frac{1-s_{2}}{1-s_{1}}b_{n}<r\leq b_{n}\text{ for some }n\in\mathbb{N}\\ y_{\alpha}[r]&\text{ if }b_{n}<r\leq(1-s_{1}\delta/2)a_{n+1}\text{ for some }n\in\mathbb{N}\\ x_{\beta}[r]&\text{ if }(1-s_{1}\delta/2)a_{n+1}<r\leq a_{n+1}\text{ where }f_{\alpha}(\beta)=n\end{cases}$
Finally, we define our set $E=\{z_{\alpha}\,|\,\alpha<\omega_{1}\}$.
We begin by collecting relevant aspects of our construction. Let $\alpha$ be
an ordinal, let $A=A_{\alpha}$ be the corresponding oracle in the order, and
let $z=z_{\alpha}$ be the point constructed at ordinal $\alpha$.
Let $n$ be sufficiently large. First, suppose $a_{n}<r\leq\frac{1-s_{2}}{1-s_{1}}b_{n}$. Then,
$\displaystyle K^{A}_{r}(z)$
$\displaystyle=K^{A}_{a_{n}}(z)+K^{A}_{r,a_{n}}(w_{\alpha}\,|\,z)$
$\displaystyle=K^{A}_{a_{n}}(z)+(r-a_{n})s_{1}+O(\log r).$ (9)
Let $t=\frac{1-s_{2}}{1-s_{1}}b_{n}$ and suppose $t<r\leq b_{n}$. Then,
$\displaystyle K^{A}_{r}(z)$
$\displaystyle=K^{A}_{t}(z)+K^{A}_{r,t}(x_{\alpha}\,|\,z)$
$\displaystyle=K^{A}_{t}(z)+(r-t)+O(\log r)$
$\displaystyle=(t-a_{n})s_{1}+(r-t)+O(\log r)$
$\displaystyle=ts_{1}+r-t+O(\log r)$ $\displaystyle=r-(1-s_{1})t+O(\log r)$
$\displaystyle=r-(1-s_{2})b_{n}+O(\log r).$ (10)
Let $b_{n}<r\leq(1-s_{1}\delta/2)a_{n+1}$. Then,
$\displaystyle K^{A}_{r}(z)$
$\displaystyle=K^{A}_{b_{n}}(z)+K^{A}_{r,b_{n}}(y_{\alpha}\,|\,z)$
$\displaystyle=b_{n}-(1-s_{2})b_{n}+K^{A}_{r,b_{n}}(y_{\alpha}\,|\,z)$
$\displaystyle=s_{2}b_{n}+K^{A}_{r,b_{n}}(y_{\alpha}\,|\,z)$
$\displaystyle=s_{2}b_{n}+(s_{1}+\frac{\delta}{2})(r-b_{n}).$ (11)
Finally, let $t=(1-s_{1}\delta/2)a_{n+1}$, suppose $t<r\leq a_{n+1}$, and let
$\beta<\alpha$ be the ordinal such that $f_{\alpha}(\beta)=n$. Then,
$\displaystyle K^{A}_{r}(z)$
$\displaystyle=K^{A}_{t}(z)+K^{A}_{r,t}(x_{\beta}\,|\,z)$
$\displaystyle=s_{2}b_{n}+(s_{1}+\frac{\delta}{2})(t-b_{n})+K^{A}_{r,t}(x_{\beta}\,|\,z)$
$\displaystyle=(s_{1}+\frac{\delta}{2})t+K^{A}_{r,t}(x_{\beta}\,|\,z)$ (12)
In particular,
$\displaystyle s_{1}r$ $\displaystyle\leq K^{A}_{r}(z)$
$\displaystyle=(s_{1}+\frac{\delta}{2})t+K^{A}_{r,t}(x_{\beta}\,|\,z)$
$\displaystyle\leq(s_{1}+\frac{\delta}{2})r+r-t$
$\displaystyle\leq(s_{1}+\frac{\delta}{2})r+\frac{\delta r}{2}$
$\displaystyle=s_{2}r$ (13)
Let $a_{n}<r\leq a_{n+1}$. The above inequalities show that, if
$r>\frac{1-s_{2}}{1-s_{1}}b_{n}$, then
$K^{A}_{r}(z)\geq s_{1}r$.
When $r\leq\frac{1-s_{2}}{1-s_{1}}b_{n}$, by combining equality (9) and
inequality (13),
$\displaystyle K^{A}_{r}(z)$
$\displaystyle=K^{A}_{a_{n}}(z)+(r-a_{n})s_{1}+O(\log r)$ $\displaystyle\geq
s_{1}a_{n}+(r-a_{n})s_{1}+O(\log r)$ $\displaystyle=s_{1}r+O(\log r).$
Therefore, $K^{A}_{r}(z)\geq s_{1}r+O(\log r)$ for all $r$. For the upper
bound on $\dim^{A}(z)$, let $r=\frac{1-s_{2}}{1-s_{1}}b_{n}$. Then,
$\displaystyle K^{A}_{r}(z)$
$\displaystyle=K^{A}_{a_{n}}(z)+(r-a_{n})s_{1}+O(\log r)$ $\displaystyle\leq
a_{n}+(r-a_{n})s_{1}+O(\log r)$ $\displaystyle\leq s_{1}r+O(\log r),$
and so $\dim^{A}(z)=s_{1}$.
Similarly, the above inequalities show that $K^{A}_{r}(z)\leq s_{2}r$. To
prove the lower bound, let $r=b_{n}$. Then
$\displaystyle K^{A}_{r}(z)$ $\displaystyle=r-(1-s_{2})b_{n}+O(\log r)$
$\displaystyle=s_{2}r+O(\log r),$
and so $\operatorname{Dim}^{A}(z)=s_{2}$.
To complete the proof, we must show that $E$ does not have an optimal
Hausdorff oracle, nor an optimal packing oracle. Let $A=A_{\alpha}$ be any
Hausdorff oracle for $E$. Let $B$ be an oracle encoding the set
$\{w_{\beta},x_{\beta},y_{\beta}\,|\,\beta\leq\alpha\}$. Note that we can
encode this information since this set is countable.
Let $z_{\beta}\in E$. First, suppose that $\beta\leq\alpha$. Then, by our
choice of $B$, $\dim^{A_{\alpha},B}(z_{\beta})=0$. Now suppose that
$\beta>\alpha$. Let $n$ be a sufficiently large natural number such that
$f_{\beta}(\alpha)=n$. Then, since $x_{\alpha}$ is random relative to $A_{\alpha}$,
$\displaystyle K^{A_{\alpha}}_{a_{n+1}}(z_{\beta})$
$\displaystyle=(s_{1}+\frac{\delta}{2})t+K^{A_{\alpha}}_{a_{n+1},t}(x_{\alpha}\,|\,z_{\beta})$
$\displaystyle\geq(s_{1}+\frac{\delta}{2})t+a_{n+1}-t,$
where $t=(1-s_{1}\delta/2)a_{n+1}$. However, by our construction of $B$,
$\displaystyle K^{A_{\alpha},B}_{a_{n+1}}(z_{\beta})$
$\displaystyle=(s_{1}+\frac{\delta}{2})t+K^{A_{\alpha},B}_{a_{n+1},t}(x_{\alpha}\,|\,z_{\beta})$
$\displaystyle\leq(s_{1}+\frac{\delta}{2})t+O(1).$
Therefore,
$\displaystyle
K^{A_{\alpha}}_{a_{n+1}}(z_{\beta})-K^{A_{\alpha},B}_{a_{n+1}}(z_{\beta})$
$\displaystyle\geq a_{n+1}-t-O(1)$ $\displaystyle=\frac{s_{1}\delta a_{n+1}}{2}-O(1).$
Since $z_{\beta}$ was arbitrary, it follows that $B$ reduces the complexity of
every point $z\in E$ infinitely often. Since $A_{\alpha}$ was arbitrary, we
conclude that $E$ has neither optimal Hausdorff nor optimal packing oracles.
∎
###### Corollary 42.
Assuming CH and AC, for every $0<s_{1}<s_{2}\leq 1$ there is a set
$E\subseteq\mathbb{R}$ which has optimal Hausdorff oracles but does not have
optimal packing oracles such that
$\dim_{H}(E)=s_{1}$ and $\dim_{P}(E)=s_{2}$.
###### Proof.
Let $F$ be a set such that $\dim_{H}(F)=\dim_{P}(F)=s_{1}$. Then, by Lemma 17,
$F$ has optimal Hausdorff oracles. Let $G$ be a set, guaranteed by Theorem 41,
with $\dim_{H}(G)<s_{1}$, $\dim_{P}(G)=s_{2}$ such that $G$ does not have
optimal Hausdorff nor optimal packing oracles.
Let $E=F\cup G$. Then $\dim_{H}(E)=s_{1}$ and $\dim_{P}(E)=s_{2}$ by the union
formula for Hausdorff and packing dimension. By Observation 6, $E$ has optimal
Hausdorff oracles.
We now prove that $E$ does not have optimal packing oracles. Let $A$ be a
packing oracle for $E$. By possibly joining $A$ with a packing oracle for $G$,
we may assume that $A$ is a packing oracle for $G$ as well. Since $G$ does not
have optimal packing oracles, there is an oracle $B\subseteq\mathbb{N}$ and
$0<\epsilon<s_{2}-s_{1}$ such that, for every $x\in G$ where
$\operatorname{Dim}^{A,B}(x)\geq s_{2}-\epsilon$,
$K^{A,B}_{r}(x)<K^{A}_{r}(x)-\epsilon r$
for infinitely many $r$. Let $x\in E$ such that
$\operatorname{Dim}^{A,B}(x)\geq s_{2}-\epsilon$. Then, by our choice of $F$,
$x$ must be in $G$. Therefore
$K^{A,B}_{r}(x)<K^{A}_{r}(x)-\epsilon r$
for infinitely many $r$, and so $A$ is not an optimal packing oracle for $E$.
Since $A$ was arbitrary, the conclusion follows.
∎
###### Theorem 43.
Assuming CH and AC, for every $0<s_{1}<s_{2}\leq 1$ there is a set
$E\subseteq\mathbb{R}$ which has optimal packing oracles but does not have
optimal Hausdorff oracles such that
$\dim_{H}(E)=s_{1}$ and $\dim_{P}(E)=s_{2}$.
###### Proof.
Let
$F=\{x\in(0,1)\,|\,\dim(x)=0\text{ and }\operatorname{Dim}(x)=s_{2}\}$.
By Lemma 37, $\dim_{H}(F)=0$, $\dim_{P}(F)=s_{2}$ and $F$ has optimal packing
oracles. Let $G$ be a set, guaranteed by Theorem 41, with $\dim_{H}(G)=s_{1}$,
$\dim_{P}(G)=s_{2}$ such that $G$ does not have optimal Hausdorff nor optimal
packing oracles. Let $E=F\cup G$. Then $\dim_{H}(E)=s_{1}$ and
$\dim_{P}(E)=s_{2}$ by the union formula for Hausdorff and packing dimension.
By Observation 35, $E$ has optimal packing oracles.
We now prove that $E$ does not have optimal Hausdorff oracles. Let $A$ be a
Hausdorff oracle for $E$. By possibly joining $A$ with a Hausdorff oracle for
$G$, we may assume that $A$ is a Hausdorff oracle for $G$ as well. Since $G$
does not have optimal Hausdorff oracles, there is an oracle
$B\subseteq\mathbb{N}$ and $0<\epsilon<s_{1}$ such that, for every $x\in G$
where $\dim^{A,B}(x)\geq s_{1}-\epsilon$,
$K^{A,B}_{r}(x)<K^{A}_{r}(x)-\epsilon r$
for infinitely many $r$. Let $x\in E$ such that $\dim^{A,B}(x)\geq
s_{1}-\epsilon$. Then, since $\dim_{H}(F)=0$, $x$ must be in $G$. Therefore
$K^{A,B}_{r}(x)<K^{A}_{r}(x)-\epsilon r$
for infinitely many $r$, and so $A$ is not an optimal Hausdorff oracle for
$E$. Since $A$ was arbitrary, the conclusion follows. ∎
## 6 Acknowledgments
I would like to thank Denis Hirschfeldt, Jack Lutz and Chris Porter for very
valuable discussions and suggestions. I would also like to thank the
participants of the recent AIM workshop on Algorithmic Randomness.
## References
* [1] Krishna B. Athreya, John M. Hitchcock, Jack H. Lutz, and Elvira Mayordomo. Effective strong dimension in algorithmic information and computational complexity. SIAM J. Comput., 37(3):671–705, 2007.
* [2] Christopher J. Bishop and Yuval Peres. Fractals in probability and analysis, volume 162 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2017.
* [3] Adam Case and Jack H. Lutz. Mutual dimension. ACM Transactions on Computation Theory, 7(3):12, 2015.
* [4] Logan Crone, Lior Fishman, and Stephen Jackson. Hausdorff dimension regularity properties and games. arXiv preprint arXiv:2003.11578, 2020.
* [5] Roy O. Davies. Two counterexamples concerning Hausdorff dimensions of projections. Colloq. Math., 42:53–58, 1979.
* [6] Rod Downey and Denis Hirschfeldt. Algorithmic Randomness and Complexity. Springer-Verlag, 2010.
* [7] Kenneth Falconer. Fractal Geometry: Mathematical Foundations and Applications. Wiley, third edition, 2014.
* [8] Kenneth Falconer, Jonathan Fraser, and Xiong Jin. Sixty years of fractal projections. In Fractal geometry and stochastics V, pages 3–25. Springer, 2015\.
* [9] Leonid A. Levin. On the notion of a random sequence. Soviet Math Dokl., 14(5):1413–1416, 1973.
* [10] Leonid Anatolevich Levin. Laws of information conservation (nongrowth) and aspects of the foundation of probability theory. Problemy Peredachi Informatsii, 10(3):30–35, 1974.
* [11] Ming Li and Paul M.B. Vitányi. An Introduction to Kolmogorov Complexity and Its Applications. Springer, third edition, 2008.
* [12] Jack H. Lutz. Dimension in complexity classes. SIAM J. Comput., 32(5):1236–1259, 2003.
* [13] Jack H. Lutz. The dimensions of individual strings and sequences. Inf. Comput., 187(1):49–79, 2003.
* [14] Jack H. Lutz and Neil Lutz. Algorithmic information, plane Kakeya sets, and conditional dimension. ACM Trans. Comput. Theory, 10(2):Art. 7, 22, 2018.
* [15] Jack H Lutz and Neil Lutz. Who asked us? how the theory of computing answers questions about analysis. In Complexity and Approximation, pages 48–56. Springer, 2020.
* [16] Jack H. Lutz and Elvira Mayordomo. Dimensions of points in self-similar fractals. SIAM J. Comput., 38(3):1080–1112, 2008.
* [17] Neil Lutz. Fractal intersections and products via algorithmic dimension. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017), 2017.
* [18] Neil Lutz and D. M. Stull. Bounding the dimension of points on a line. In Theory and applications of models of computation, volume 10185 of Lecture Notes in Comput. Sci., pages 425–439. Springer, Cham, 2017\.
* [19] Neil Lutz and D. M. Stull. Projection theorems using effective dimension. In 43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018), 2018.
* [20] J. M. Marstrand. Some fundamental geometrical properties of plane sets of fractional dimensions. Proc. London Math. Soc. (3), 4:257–302, 1954.
* [21] Pertti Mattila. Hausdorff dimension, orthogonal projections and intersections with planes. Ann. Acad. Sci. Fenn. Ser. AI Math, 1(2):227–244, 1975.
* [22] Pertti Mattila. Geometry of sets and measures in Euclidean spaces: fractals and rectifiability. Cambridge University Press, 1999.
* [23] Pertti Mattila. Hausdorff dimension, projections, and the fourier transform. Publicacions matematiques, pages 3–48, 2004.
* [24] Pertti Mattila. Hausdorff dimension, projections, intersections, and besicovitch sets. In New Trends in Applied Harmonic Analysis, Volume 2, pages 129–157. Springer, 2019.
* [25] Elvira Mayordomo. A Kolmogorov complexity characterization of constructive Hausdorff dimension. Inf. Process. Lett., 84(1):1–3, 2002.
* [26] Elvira Mayordomo. Effective fractal dimension in algorithmic information theory. In S. Barry Cooper, Benedikt Löwe, and Andrea Sorbi, editors, New Computational Paradigms: Changing Conceptions of What is Computable, pages 259–285. Springer New York, 2008.
* [27] Andre Nies. Computability and Randomness. Oxford University Press, Inc., New York, NY, USA, 2009.
* [28] Tuomas Orponen. Combinatorial proofs of two theorems of Lutz and Stull. arXiv preprint arXiv:2002.01743, 2020.
* [29] D. M. Stull. Results on the dimension spectra of planar lines. In 43rd International Symposium on Mathematical Foundations of Computer Science, volume 117 of LIPIcs. Leibniz Int. Proc. Inform., pages Art. No. 79, 15. Schloss Dagstuhl. Leibniz-Zent. Inform., Wadern, 2018.
††thanks: NHFP Einstein fellow
# Constraining gravitational wave amplitude birefringence and Chern-Simons
gravity with GWTC-2
Maria Okounkova (ORCID 0000-0001-7869-5496),<EMAIL_ADDRESS>; Center for
Computational Astrophysics, Flatiron Institute, 162 5th Ave, New York, NY
10010, United States
Will M. Farr (ORCID 0000-0003-1540-8562),<EMAIL_ADDRESS>; Department of
Physics and Astronomy, Stony Brook University, Stony Brook NY 11794, United
States; Center for Computational Astrophysics, Flatiron Institute, 162 5th
Ave, New York, NY 10010, United States
Maximiliano Isi (ORCID 0000-0001-8830-8672),<EMAIL_ADDRESS>; Center for
Computational Astrophysics, Flatiron Institute, 162 5th Ave, New York, NY
10010, United States; LIGO Laboratory and Kavli Institute for Astrophysics
and Space Research, Massachusetts Institute of Technology, Cambridge,
Massachusetts 02139, USA
Leo C. Stein (ORCID 0000-0001-7559-9597),<EMAIL_ADDRESS>; Department of
Physics and Astronomy, The University of Mississippi, University, MS 38677,
United States
###### Abstract
We perform a new test of general relativity (GR) with signals from GWTC-2, the
LIGO and Virgo catalog of gravitational wave detections. We search for the
presence of amplitude birefringence, in which left versus right circularly
polarized modes of gravitational waves are exponentially enhanced and
suppressed during propagation. Such an effect is present in various beyond-GR
theories but is absent in GR. We constrain the amount of amplitude
birefringence consistent with the data through an opacity parameter $\kappa$,
which we bound to be $\kappa\lesssim 0.74\textrm{ Gpc}^{-1}$. We then use
these theory-agnostic results to constrain Chern-Simons gravity, a beyond-GR
theory with motivations in quantum gravity. We bound the canonical Chern-
Simons lengthscale to be $\ell_{0}\lesssim 1.0\times 10^{3}$ km, improving on
previous long-distance measurement results by a factor of two.
## I Introduction
At some length scale, Einstein’s theory of general relativity (GR) must break
down and be reconciled with quantum mechanics in a beyond-GR theory of
gravity. Gravitational waves (GWs) from binary black hole (BBH) mergers, such
as those recently detected by LIGO Aasi _et al._ (2015) and Virgo Acernese
_et al._ (2015), could contain signatures of beyond-GR effects, which has
motivated significant efforts to test GR with LIGO and Virgo detections Abbott
_et al._ (2019a, b); Isi _et al._ (2019a); Nair _et al._ (2019); Isi _et
al._ (2019b); Abbott _et al._ (2020a, b).
In this study, we perform a new test of GR with the second LIGO-Virgo catalog,
GWTC-2 Abbott _et al._ (2020a, 2019c); LIGO Scientific Collaboration and
Virgo Collaboration (2019).111Note that GWTC-2 contains GWTC-1 Abbott _et
al._ (2019a), the first LIGO and Virgo catalog, as a subset. In several
beyond-GR theories, GWs exhibit amplitude birefringence: when propagating
from the source to the detector, the amplitudes of left versus right polarized
modes are exponentially enhanced or suppressed. We use the confident BBH
detections in GWTC-2 to constrain this effect, which is absent in GR.
We characterize the strength of the amplitude birefringence in terms of an
opacity parameter, which can then be mapped onto various beyond-GR theories.
A thorough review of
theories that exhibit amplitude birefringence is provided in Zhao _et al._
(2020a), including Chern-Simons gravity Alexander and Yunes (2009), ghost-free
scalar-tensor theories Crisostomi _et al._ (2018), symmetric teleparallel
equivalents of GR Conroy and Koivisto (2019), and Hořava-Lifshitz gravity
Horava (2009).
As a specific application, we use our limit on amplitude birefringence to
constrain non-dynamical Chern-Simons gravity, a parity-violating beyond-GR
effective field theory with origins in string theory, loop quantum gravity,
and inflation Alexander and Yunes (2009); Green and Schwarz (1984); Taveras
and Yunes (2008); Mercuri and Taveras (2009); Weinberg (2008). Previous works
have addressed the possibility of detecting Chern-Simons amplitude
birefringence with GW detectors Nojiri _et al._ (2019); Zhao _et al._
(2020b); Alexander _et al._ (2008); Yunes _et al._ (2010); Yunes and Finn
(2009); Yagi and Yang (2018), and in this study we perform such a measurement
on real GW data.
In Sec. II, we give an overview of the observational effects of amplitude
birefringence on GW detections. We then use GWTC-2 to bound the amount of
amplitude birefrigence in BBH signals. In Sec. III, we consider these results
in the context of Chern-Simons gravity, and bound the canonical Chern-Simons
lengthscale. We conclude in Sec. IV. We set $G=c=1$ throughout. $H_{0}$ refers
to the present day value of the Hubble parameter, with dimensions of
$[H_{0}]=L^{-1}$, and $z$ refers to cosmological redshift.
## II Constraints on amplitude birefringence
### II.1 Observational effects
In GR, for the dominant $(2,\pm 2)$ angular mode of non-precessing compact
binary inspirals, the ratio of the gravitational wave strain $h$, in right
$h_{\mathrm{R}}$, versus left $h_{\mathrm{L}}$, circularly polarized modes is
purely a function of the inclination angle of the binary, of the form
$\displaystyle\left(\frac{h_{\mathrm{R}}}{h_{\mathrm{L}}}\right)_{\mathrm{GR}}=\left(\frac{1+\cos\iota}{1-\cos\iota}\right)^{2}\,.$
(1)
Here, the inclination $\iota$ is the angle from the total angular momentum of
the binary to the line of sight of the observer. In terms of the plus,
$h_{+}$, and cross, $h_{\times}$, polarizations, the circular polarizations
are given by $h_{\mathrm{R},\mathrm{L}}=h_{+}\pm ih_{\times}$. A system with
$\cos\iota=1$ has power purely in $h_{\mathrm{R}}$, and is face-on, while one
with $\cos\iota=-1$ has power purely in $h_{\mathrm{L}}$ and is face-off. Thus
$\displaystyle\textrm{pure }h_{\mathrm{R}}$
$\displaystyle\Longleftrightarrow\cos\iota=+1\Longleftrightarrow\textrm{face-
on}\,,$ (2) $\displaystyle\textrm{pure }h_{\mathrm{L}}$
$\displaystyle\Longleftrightarrow\cos\iota=-1\Longleftrightarrow\textrm{face-
off}\,.$ (3)
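As a quick numerical cross-check of Eq. (1) and this correspondence, the sketch below (our own illustration; the function name and the standard dominant-mode shapes $h_{+}\propto\frac{1}{2}(1+\cos^{2}\iota)\cos\phi$, $h_{\times}\propto\cos\iota\,\sin\phi$ are assumptions) extracts the circular content of the waveform by Fourier projection:

```python
import cmath
import math

def circular_ratio(cos_iota, n=4096):
    """Extract |h_R| / |h_L| for the dominant mode by Fourier projection.

    h_R = h_+ + i h_x decomposes into e^{+i phi} (right-circular) and
    e^{-i phi} (left-circular) parts; their amplitude ratio should equal
    the ((1 + cos i)/(1 - cos i))^2 of Eq. (1).
    """
    c = cos_iota
    a_r = 0j  # coefficient of e^{+i phi}
    a_l = 0j  # coefficient of e^{-i phi}
    for j in range(n):
        phi = 2 * math.pi * j / n
        h = (1 + c * c) / 2 * math.cos(phi) + 1j * c * math.sin(phi)
        a_r += h * cmath.exp(-1j * phi) / n
        a_l += h * cmath.exp(+1j * phi) / n
    return abs(a_r) / abs(a_l)
```

For $\cos\iota=1/2$ this returns $\left((3/2)/(1/2)\right)^{2}=9$ to machine precision, and an edge-on system ($\cos\iota=0$) returns $1$.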
We assume that the universe is homogeneous and isotropic at cosmological
scales, and that gravitational physics does not have any preferred direction.
This implies that the underlying distribution for $\cos\iota$ is flat, meaning
no preference for face-on versus face-off events.
The picture in Eq. (1) changes in beyond-GR theories that exhibit amplitude
birefringence. In this case, the amplitudes of left- versus right-polarized
modes are exponentially enhanced and suppressed during propagation, leading to
an expression of the form
$\displaystyle\left(\frac{h_{\mathrm{R_{obs}}}}{h_{\mathrm{L_{obs}}}}\right)_{\mathrm{Biref}}=\frac{e^{-d_{C}\kappa}(1+\cos\iota)^{2}}{e^{d_{C}\kappa}(1-\cos\iota)^{2}}\,.$
(4)
Here, $d_{C}$ is the comoving distance to the source (with units of length
$L^{1}$), and $\kappa$ is an opacity parameter with units of $L^{-1}$ that
governs the strength of the birefringence.222Note that Eq. (4) is correct for
every theory only at linear order in $\kappa d_{C}$; it is exactly correct at
all orders for some theories and field profiles. We thus write $\kappa$ as a
function of $d_{C}$, so
$\kappa\left(d_{C}\right)=\kappa_{0}+\mathcal{O}\left(d_{C}\right)$, and
specialize to constant $\kappa$ for the remainder of the paper. Note that
$\kappa=0$ is consistent with GR. Throughout this study, we will assume that
$\kappa d_{C}\ll 1$, that is, beyond-GR effects are small enough that the
effective field theory is valid.
In traditional GW parameter estimation, however, we do not have access to the
true value, $\cos\iota$, of the inclination angle, but rather observe some
effective value, $\cos\iota_{\mathrm{obs}}$. Thus, from Eq. (4), in the
presence of amplitude birefringence, we would measure a ratio
$\displaystyle\frac{1+\cos\iota_{\mathrm{obs}}}{1-\cos\iota_{\mathrm{obs}}}=\frac{e^{-d_{C}\kappa/2}(1+\cos\iota)}{e^{d_{C}\kappa/2}(1-\cos\iota)}\,.$
(5)
Let us think about how amplitude birefringence would affect the values
$\cos\iota_{\mathrm{obs}}$ for multiple events. Statistical isotropy of BBH
orientation requires that $p(\cos\iota)$, the distribution on the true
inclination angle over the population of BBH mergers, be flat. The _observed_
distribution of inclinations is influenced by selection effects, but to a very
good approximation these are independent of the _sign_ of $\cos\iota$ (Abbott
_et al._ , 2020a, c). Thus, if there are no beyond-GR effects and $\kappa=0$,
we expect to see an equal number of face-on and face-off events. Meanwhile, if
$\kappa>0$, we will preferentially measure $\cos\iota_{\mathrm{obs}}\sim-1$
for isotropically distributed events. In other words, we will preferentially
see more face-off, rather than face-on mergers. Similarly, if $\kappa<0$, we
will preferentially see more face-on mergers. Thus, we expect
$p(\cos\iota_{\mathrm{obs}})$ to not be symmetric about zero.
We can quantify the number of observed face-on versus face-off events by
defining the on/off (or right/left) asymmetry statistic $\Delta$, in the range
$-1\leq\Delta\leq+1$, computed as
$\displaystyle\Delta\equiv\frac{N(\cos\iota_{\mathrm{obs}}>0)-N(\cos\iota_{\mathrm{obs}}<0)}{N}\,,$
(6)
where $N(\cos\iota_{\mathrm{obs}}>0)$ is the number of face-on observations,
etc. From an underlying distribution on $\cos\iota$ and the birefringence
result in Eq. (5), we induce a distribution on $\cos\iota_{\mathrm{obs}}$ and
thus $\Delta$ (birefringent theories that lead to a different relationship
between $\iota$ and $\iota_{\mathrm{obs}}$ will give a different induced
distribution). Solving for $\cos\iota_{\mathrm{obs}}$ we get
$\displaystyle\cos\iota_{\mathrm{obs}}=\frac{-(1-\cos\iota)e^{\kappa
d_{C}/2}+(1+\cos\iota)e^{-\kappa d_{C}/2}}{(1-\cos\iota)e^{\kappa
d_{C}/2}+(1+\cos\iota)e^{-\kappa d_{C}/2}}\,.$ (7)
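Equation (7) is easy to probe numerically; a minimal sketch (function and variable names are our own, with $\kappa d_{C}$ passed as a single dimensionless product):

```python
import math

def cos_iota_obs(cos_iota, kappa_d):
    """Map the true cos(iota) to the observed one, per Eq. (7).

    kappa_d is the dimensionless product kappa * d_C.
    """
    e_plus = math.exp(kappa_d / 2)
    e_minus = math.exp(-kappa_d / 2)
    num = -(1 - cos_iota) * e_plus + (1 + cos_iota) * e_minus
    den = (1 - cos_iota) * e_plus + (1 + cos_iota) * e_minus
    return num / den
```

Spot checks: $\kappa=0$ returns the identity, a purely face-on source ($\cos\iota=1$) stays face-on, and an edge-on source ($\cos\iota=0$) is observed at $\cos\iota_{\mathrm{obs}}=-\tanh(\kappa d_{C}/2)$, shifted toward face-off for $\kappa>0$ as described above.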
Working solely with a quantity such as $\Delta$ provides a robust framework
for many beyond-GR theories, and does not require making assumptions about the
underlying theory, as is done when producing template waveforms. Note that we
do not consider beyond-GR modifications to the gravitational waveform itself
as generated from the source, assuming such modifications to be small since
they are not amplified with distance (unlike birefringence, which is a
propagation effect).
The effect of birefringence on the observed inclination depends on the product
of the opacity parameter and the comoving distance to each event; a full
analysis would take account of the varying distances to the events in GWTC-2,
which we consider in Appendix A. To obtain an approximate constraint averaging
over BBH detections, however, it is sufficient to approximate a common
comoving distance, $d_{C}$, for all events.
If we take the flat distribution $\cos\iota\sim\mathcal{U}(-1,1)$,333It is
appropriate to match the value of $\Delta$ inferred from the data to the
effect of $\kappa$ on the astrophysical population rather than the _selected_
population (events that pass some detection threshold) for the following
reason. Selection effects are, to a very good approximation, independent of
the _sign_ of $\cos\iota_{\mathrm{obs}}$ (Abbott _et al._ , 2019a, 2020a);
due to this symmetry, the same fraction of the population of mergers will be
detectable for _any_ value of $\Delta$ in our simplified model where the
distribution of $\cos\iota_{\mathrm{obs}}$ is piecewise-flat. The usual factor
correcting for selection effects, conventionally written
$\alpha\left(\Delta\right)$ (Mandel _et al._ , 2019), appearing in the
denominator of the likelihood is therefore constant. Our analysis, ignoring
the constant $\alpha$ factor, infers the true _population_ value of $\Delta$;
and it is therefore appropriate to match inferred $\Delta$ values to the
actual effect on the population from $\kappa$ rather than the _selected_
population. and assuming that $\kappa d_{C}$ is the same for all observations,
we get the simple expected value
$\displaystyle\hat{\Delta}\equiv\langle\Delta\rangle=\tanh\frac{\kappa
d_{C}}{2}\,,$ (8)
which can be inverted to estimate $\kappa$ from $\hat{\Delta}$,
$\displaystyle\hat{\kappa}=\frac{1}{d_{C}}\log\left[\frac{1+\hat{\Delta}}{1-\hat{\Delta}}\right]\,.$
(9)
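These two relations can be checked end to end: Eq. (9) exactly inverts Eq. (8), and a Monte Carlo draw of isotropic inclinations pushed through Eq. (7) and tallied via Eq. (6) reproduces the $\tanh$ dependence. The sketch below uses our own variable names; note that with the mapping of Eq. (7), a positive $\kappa$ drives $\Delta$ negative, so the comparison to Eq. (8) is at the level of magnitudes, the overall sign depending on the adopted orientation convention.

```python
import math
import random

def delta_hat(kappa, d_c):
    return math.tanh(kappa * d_c / 2)                      # Eq. (8)

def kappa_hat(delta, d_c):
    return math.log((1 + delta) / (1 - delta)) / d_c       # Eq. (9)

# Round trip: Eq. (9) is the exact inverse of Eq. (8).
assert abs(kappa_hat(delta_hat(0.6, 1.23), 1.23) - 0.6) < 1e-12

# Monte Carlo: uniform cos(iota), mapped through Eq. (7), tallied via Eq. (6).
random.seed(0)
kappa_d = 0.5
n = 200_000
n_pos = 0
for _ in range(n):
    c = random.uniform(-1.0, 1.0)
    num = -(1 - c) * math.exp(kappa_d / 2) + (1 + c) * math.exp(-kappa_d / 2)
    if num > 0:  # the denominator of Eq. (7) is always positive
        n_pos += 1
delta_mc = (2 * n_pos - n) / n
print(delta_mc, math.tanh(kappa_d / 2))
```

The draw gives $|\Delta|\approx\tanh(0.25)$, with $\Delta<0$ for $\kappa>0$ (more face-off observations), as anticipated above.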
For our constraints on $\kappa$ and our projections, we use a common comoving
distance to our BBH mergers of $d_{C}=d_{C}\left(z=0.3\right)\simeq
1.23\,\mathrm{Gpc}$, corresponding to the median detected redshift in GWTC-2.
We will additionally consider an analysis with $d_{C}=d_{C}(z=0.3\pm 0.1)$ in
order to provide some error region for our results.
Birefringence also changes the signal _amplitude_ measured at the detector,
and therefore the inferred luminosity distance to the source, via
$\frac{d_{L,\mathrm{obs}}}{d_{L}}=\\\
\frac{\sqrt{1+\cos^{2}\iota}}{\sqrt{\left(1+\cos^{2}\iota_{\mathrm{obs}}\right)\cosh
2\kappa d_{C}+2\cos\iota_{\mathrm{obs}}\sinh 2\kappa d_{C}}}\\\
=1+\frac{\cos\iota_{\mathrm{obs}}\left(\cos^{2}\iota_{\mathrm{obs}}-5\right)}{2\left(1+\cos^{2}\iota_{\mathrm{obs}}\right)}\kappa
d_{C}+\mathcal{O}\left(\kappa d_{C}\right)^{2}\,,$ (10)
where we have used
$d_{L}^{-1}\propto\sqrt{h_{+}^{2}+h_{\times}^{2}}\sim\sqrt{(1+\cos^{2}\iota)^{2}-4\cos^{2}\iota}$.
The effect here is to modify the observed distance or redshift distribution of
sources from the true distribution. Since the effect enters at linear order in
$\kappa d_{C}$, it is degenerate with a variation in the BBH merger rate with
redshift; this is in contrast to effects which modify the leading-order
relation between the merger rate and distance or redshift, such as extra
spacetime dimensions (Fishbach _et al._ , 2018; Pardo _et al._ , 2018). The
latter are, in principle, observable even in a nearby sample of BBH mergers,
with $z\to 0$. In this study, we use the values for $d_{C}$ reported in
GWTC-2, without considering these higher-order corrections.
Nevertheless, a full analysis could fit an evolving merger rate and
birefringence effects on inclination and amplitude, incorporating selection
effects. Given the existing uncertainty about the evolution of the merger rate
with redshift (Fishbach _et al._ , 2018; Abbott _et al._ , 2019d) and the
difficulty in measuring $\cos\iota_{\mathrm{obs}}$ with existing data (typical
uncertanties are $\sim 0.3$ (Abbott _et al._ , 2019a)), our approximate
analysis captures the majority of the information about birefringence in the
data at this time.
Note that in this study we assume that amplitude birefringence is the only
phenomenon that modifies the observed inclination angle from its true value.
In particular, we expect strong gravitational lensing to affect fewer than a
fraction $10^{-3}$ of the detected events Dai _et al._ (2020); Smith _et al._ (2018),
and hence do not consider strong lensing effects in this study.
### II.2 GWTC-2 constraint
Figure 1: Likelihood distributions on $\cos\iota_{\mathrm{obs}}$, the
observed inclination angle from GWTC-2 Abbott _et al._ (2019a); LIGO
Scientific Collaboration and Virgo Collaboration (2019); Abbott _et al._
(2020a). Each solid curve (including the gray curves) corresponds to a BBH
detection, and the dashed black curve corresponds to the mean of
$\cos\iota_{\mathrm{obs}}$ across these events. While most events do not
provide a confident measurement of $\cos\iota_{\mathrm{obs}}$, we have
highlighted (in thick, colored lines) the events that do show a strong
preference for being face-off or face-on. Note that a population consistent
with GR will have a mean distribution for $\cos\iota$ symmetric about zero.
In Fig. 1, we show the posterior distributions on the observed inclination
angle, $\cos\iota_{\mathrm{obs}}$, from GWTC-2 Abbott _et al._ (2019a); LIGO
Scientific Collaboration and Virgo Collaboration (2019); Abbott _et al._
(2020a).444When available, we use the NRSur7dq4 parameter estimation results.
Otherwise, if available, we use the SEOBNRv4PHM results, and finally we
otherwise use the SEOBNRv4P results. We estimate that any systematic
difference due to the choice of waveform model is well below the uncertainty
in $\cos\iota$.
The first two Advanced LIGO and Virgo observing runs, O1 and O2, contain 10
significant BBH detections, three of which have an inclination constraint
sufficient to confidently identify the handedness of the wave, with each
preferring a left-handed polarization (i.e., coming from a binary orbiting in a
left-handed sense with respect to the line-of-sight). The O3a observing run,
meanwhile, contains approximately 37 candidate BBH detections, four of which
provide a sufficient inclination constraint, with one left-handed polarization
event, and three right-handed polarization events. While this results in a
total of seven _confident_ inclination angle measurements, we consider all of
the $\cos\iota_{\mathrm{obs}}$ distributions in our analysis, incorporating
even weak preferences for left- or right-handed orbits from each event.
Note that in the presence of strong amplitude birefringence, we would expect
to observe such events with only one inclination angle preference. Thus,
GWTC-2 rules out the possibility of purely right or left-handed gravitational
events. Due to their relative proximities and the correspondingly weak
expected opacity constraints, we simplify our analysis by excluding the binary
neutron star events. Thus, we exclude GW170817 and GW190425, as well as the
neutron star - black hole candidate GW190426_152155. Note that we do include
GW190814, which provides a strong inclination constraint, but does have an
(uncategorized) component mass of $2.59M_{\odot}$ Abbott _et al._ (2020a).
Using these measures of $\cos\iota_{\mathrm{obs}}$, we then compute a
distribution on $\Delta$ from these observations using Eq. (6) and applying a
flat prior on $-1<\Delta<1$, which we show in Fig. 2. We see that the
distribution on $\Delta$ from the O1-O2 observing runs disfavors face-on
events, while preferring face-off events, and that the distribution on
$\Delta$ from O3a disfavors face-off events, while preferring face-on events.
Together, all of the detections yield $\Delta=0\pm 0.4$,
consistent with no amplitude birefringence.
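Eq. (6) itself is not reproduced in this excerpt; as an illustration of how per-event $\cos\iota_{\mathrm{obs}}$ samples can be combined into a posterior on $\Delta$, the sketch below assumes a hypothetical two-sided population model, $p(\cos\iota_{\mathrm{obs}}\mid\Delta)=(1+\Delta)/2$ for face-on orbits ($\cos\iota_{\mathrm{obs}}>0$) and $(1-\Delta)/2$ for face-off, with synthetic samples standing in for the GWTC-2 posteriors.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for per-event posteriors on cos(iota_obs): three
# confident face-off events plus seven weakly informative ones.
events = [rng.uniform(-1.0, -0.6, 2000) for _ in range(3)]
events += [rng.uniform(-1.0, 1.0, 2000) for _ in range(7)]

delta = np.linspace(-0.99, 0.99, 199)      # flat prior on -1 < Delta < 1
log_post = np.zeros_like(delta)
for samples in events:
    # Hypothetical population model: weight (1 + Delta)/2 on face-on
    # orbits (cos iota_obs > 0) and (1 - Delta)/2 on face-off orbits.
    frac_on = np.mean(samples > 0)
    like = frac_on * (1 + delta) / 2 + (1 - frac_on) * (1 - delta) / 2
    log_post += np.log(like)

post = np.exp(log_post - log_post.max())
post /= post.sum() * (delta[1] - delta[0])  # normalized posterior density

# Confident face-off events pull the posterior toward Delta < 0.
print(delta[np.argmax(post)])
```

The weakly informative events contribute almost nothing here, which mirrors how the analysis can still absorb weak handedness preferences from every event.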
Figure 2: Posterior distribution for $\Delta$, which measures preference for
face-on versus face-off observed events, as defined in Eq. (6). Without
amplitude birefringence, the distribution should be symmetric around
$\Delta=0$. We show the distribution for $\Delta$ from O1-O2 events (light
blue curve), and for O3a events (pink curve). We see that O1-O2 have a
preference for face-off events, while O3a has a preference for face-on events.
The resulting distribution is consistent with $\Delta=0$, with a standard
deviation of $0.4$, supporting no amplitude birefringence. The red dashed
line, meanwhile, corresponds to the values of $\Delta$ obtained by drawing from
a distribution uniform in $\cos\iota_{\mathrm{obs}}$ (thus corresponding to no
information).
Given $\Delta$, we can now use Eq. (9) to obtain a distribution on the
absolute values of the opacity parameter $\kappa$, defined in Eq. (4). This
will provide a physical measure of the amount of amplitude birefringence, the
magnitude of which can then be used to constrain various beyond-GR theories.
We show the resulting distribution on $\kappa$ in Fig. 3. We observe that for
a common comoving distance of $d_{C}=d_{C}(z=0.3)$ (median detected redshift
in GWTC-2), we can bound, at $1\sigma$:
O1-O2: $\kappa\lesssim 2.0\textrm{ Gpc}^{-1}\,,$
O3a: $\kappa\lesssim 1.3\textrm{ Gpc}^{-1}\,,$
All: $\kappa\lesssim 0.74\textrm{ Gpc}^{-1}\,.$
In Fig. 3, we also show results for $\kappa$ for common comoving distances of
$d_{C}=d_{C}(z=0.3\pm 0.1)$ for all of the detections, in order to
qualitatively show the effect of a spread in the distance measurements on the
inferred value of $\kappa$. These differences of $z\pm 0.1$ shift the inferred
value for all of the detections by $\pm 0.25\textrm{ Gpc}^{-1}$.
Recall that for the effective field theory to be valid, we require that
$\kappa d_{C}\ll 1$. The analysis presented in this paper in terms of the
observed inclination angle works for any value of $\kappa$, but we must be
careful about the distances $d_{C}$. Thus, in Fig. 3 we shade the region for
which $\kappa d_{C}>1$, where this condition is violated given our choice of
$d_{C}=d_{C}(z=0.3)$.
In order to see how much information we have gained from these detections, let
us consider a distribution flat in $\cos\iota_{\mathrm{obs}}$ (meaning that
all measured inclination angles are equally likely and
$\cos\iota_{\mathrm{obs}}$ carries no information about the system). The
posterior on $\Delta$ for 47 events from this distribution using Eq. (6)
should be uniform on $\Delta$ (the events carry no information about which
handedness is preferred). For such uninformative measurements, if we wish to
recover the correct flat distribution for $\Delta$ from our computations, we
must satisfy the criterion that the number of samples used for each event is
much larger than the number of detections as detailed in Appendix B.
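This sample-count criterion can be illustrated numerically. With a hypothetical two-sided model $p(\cos\iota_{\mathrm{obs}}\mid\Delta)=(1\pm\Delta)/2$ for face-on/face-off orbits (Eq. (6) is not reproduced in this excerpt), Monte Carlo noise in each per-event likelihood mimics spurious handedness information unless the number of samples per event greatly exceeds the number of events.

```python
import numpy as np

rng = np.random.default_rng(0)
delta = np.linspace(-0.9, 0.9, 181)

def flat_delta_posterior(n_events, n_samples):
    """Posterior on Delta from events whose cos(iota_obs) samples are
    uniform on [-1, 1] and hence carry no handedness information, under
    the hypothetical two-sided model described above."""
    log_post = np.zeros_like(delta)
    for _ in range(n_events):
        frac_on = np.mean(rng.uniform(-1, 1, n_samples) > 0)
        like = frac_on * (1 + delta) / 2 + (1 - frac_on) * (1 - delta) / 2
        log_post += np.log(like)
    post = np.exp(log_post - log_post.max())
    return post / (post.sum() * (delta[1] - delta[0]))

few = flat_delta_posterior(47, 50)       # samples per event ~ number of events
many = flat_delta_posterior(47, 50000)   # samples per event >> number of events

# With n_samples >> n_events the recovered posterior is close to flat;
# with too few samples per event, spurious structure appears.
print(np.ptp(many), np.ptp(few))
```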
If we then compute $\kappa$ from these values of $\Delta$ in Fig. 3, we obtain
a distribution that looks like that of O1-O2. We can thus conclude that the
measurements of $\cos\iota_{\mathrm{obs}}$ in O1-O2 are not sufficient to
provide an informative constraint on $\kappa$; almost all of our constraint on
$\kappa$ comes from the assumed prior on $\Delta$ transformed through Eq. (9)
into a prior on $\kappa$. However, adding in O3a does make the result deviate
from the prior, thus showing that we can indeed constrain the level of
amplitude birefringence with all of the BBH detections. In order to quantify
this information, we can compute the Kullback–Leibler (KL) divergence
$D_{\mathrm{KL}}(p(\lambda)\|q(\lambda))$ of a distribution $p$ with respect
to $q$,
$\displaystyle D_{\mathrm{KL}}(p(\lambda)\,\|\,q(\lambda))=\int
p(\lambda)\log_{2}\left[\frac{p(\lambda)}{q(\lambda)}\right]d\lambda\,.$ (11)
The KL divergence Kullback and Leibler (1951) measures how much a probability
distribution differs from a reference probability distribution, allowing us to
compare the curves in Fig. 3. Using the flat distribution as our reference
distribution, we find
$D_{\mathrm{KL}}(P_{\textrm{O1-O2}}(\kappa)\,\|\,P_{\textrm{Flat}}(\kappa))=2.7\times 10^{-3}\,,$
$D_{\mathrm{KL}}(P_{\textrm{O3a}}(\kappa)\,\|\,P_{\textrm{Flat}}(\kappa))=6.5\times 10^{-2}\,,$
$D_{\mathrm{KL}}(P_{\textrm{All}}(\kappa)\,\|\,P_{\textrm{Flat}}(\kappa))=3.8\times 10^{-1}\,,$
in units of bits.
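Eq. (11) can be evaluated directly on a grid. The sketch below uses toy distributions, not the actual $\kappa$ posteriors, purely to illustrate the computation in bits.

```python
import numpy as np

def kl_divergence(p, q, x):
    """D_KL(p || q) in bits, Eq. (11), for densities p and q sampled on
    an evenly spaced grid x (simple Riemann-sum integration)."""
    dx = x[1] - x[0]
    p = p / (p.sum() * dx)                   # normalize both densities
    q = q / (q.sum() * dx)
    integrand = np.where(p > 0, p * np.log2(p / q), 0.0)
    return integrand.sum() * dx

# Toy example: an informative (Gaussian) posterior vs. a flat reference.
x = np.linspace(-3, 3, 6001)
flat = np.full_like(x, 1 / 6)                        # uniform on [-3, 3]
peaked = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)    # standard normal

print(kl_divergence(flat, flat, x))    # identical distributions: 0 bits
print(kl_divergence(peaked, flat, x))  # informative vs. flat: ~0.5 bits
```

An uninformative result recovers the reference distribution and gives a divergence near zero, which is the sense in which the O1-O2 value above signals little information gain.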
Figure 3: Distribution for the norm of the opacity parameter $\kappa$, as
given in Eq. (4), which measures the strength of the amplitude birefringence
effect. In the absence of amplitude birefringence, we expect $\kappa=0$. Here,
we compute the posterior on $|\kappa|$ with O1-O2 (light blue curve), and O3a
(pink curve), combining all of the detections in the black curve. In order to
show the effect of our assumption of a common comoving distance of
$d_{C}(z=0.3)$ for all events, we also plot lines (light gray), for
$d_{C}(z=0.3\pm 0.1)$. The shaded region corresponds to
$\kappa>1/d_{C}(z=0.3)$, in which the effective field theory assumption that
$\kappa d_{C}\ll 1$ does not hold. The O1-O2 result on its own is
uninformative, as it qualitatively agrees with a constraint generated from a
flat, uninformative distribution in $\cos\iota_{\mathrm{obs}}$ (dashed thick
line). Adding in the O3a results, however, does result in an informative
constraint.
## III Constraints on Chern-Simons gravity
We now use the inferred opacity $\kappa$ from Sec. II to place constraints on
Chern-Simons gravity (CS). CS modifies the Einstein-Hilbert action of GR
through the inclusion of a scalar field coupled to a term quadratic in
spacetime curvature. In CS, amplitudes of left versus right circularly-
polarized modes are exponentially enhanced and suppressed during propagation,
with the strength of this amplitude birefringence being governed by properties
of the CS scalar field Alexander and Yunes (2009). Thus, by placing
constraints on the opacity parameter with GWTC-2, we can place observational
constraints on CS.
Following the conventions of Alexander and Yunes (2009), the action of Chern-
Simons gravity takes the form
$\displaystyle S=\int d^{4}x\sqrt{-g}\Big{(}\frac{R}{16\pi
G}+\frac{1}{4}\alpha\vartheta{}^{*}\\!RR-\beta\frac{1}{2}\nabla_{a}\vartheta\nabla^{a}\vartheta\Big{)}\,,$
(12)
where $g_{ab}$ is the spacetime metric with covariant derivative $\nabla_{a}$.
The first term corresponds to the Einstein-Hilbert action of GR, where $R$ is
the spacetime Ricci scalar. The second term couples the CS scalar field
$\vartheta$ to spacetime curvature via the Pontryagin density
${}^{*}\\!RR\equiv\,{}^{*}\\!R_{abcd}R^{abcd}$, which is the spacetime Riemann
tensor contracted with its dual Alexander and Yunes (2009). The last term is a
kinetic term for the scalar field, with constant $\beta$. We follow the choice
of Jackiw and Pi (2003); Alexander _et al._ (2008), and set
$\alpha=\kappa_{\mathrm{E}}$, which gives $\vartheta$ units of length squared,
$[\vartheta]=L^{2}$.
In non-dynamical CS gravity, we set $\beta=0$, and $\vartheta$ is ‘frozen-in’
with some pre-defined profile Jackiw and Pi (2003), which we will leave
unspecified for now. Note that $\vartheta$ cannot be constant, otherwise the
${}^{*}\\!RR$ term, a topological invariant, would integrate out of the action
in Eq. (12).
As calculated by Alexander et al. Alexander _et al._ (2008), in CS, GWs
propagating through a Friedmann-Robertson-Walker universe are exponentially
suppressed and enhanced depending on helicity. For compact-binary sources,
this birefringence effect manifests in a change in the observed inclination of
the binary, $\cos\iota_{\mathrm{obs}}$, from the true inclination angle of the
source, $\cos\iota$, as
$\displaystyle\left(\frac{h_{\mathrm{R_{obs}}}}{h_{\mathrm{L_{obs}}}}\right)_{\mathrm{CS}}$
$\displaystyle=\left(\frac{1+\cos\iota}{1-\cos\iota}\right)^{2}\exp\left[\frac{2k(t)}{H_{0}}\zeta(\vartheta)\right]$
$\displaystyle=\left(\frac{1+\cos\iota_{\mathrm{obs}}}{1-\cos\iota_{\mathrm{obs}}}\right)^{2}\,.$
(13)
Here, $k(t)$ is the wavenumber for the given Fourier propagating mode, with
units of $L^{-1}$, and $\zeta(\vartheta)$ is a dimensionless function of the
integrated history of the CS scalar field. While Eq. (13) is a function of
the wavenumber, we will estimate that $k(t)$ covers a narrow frequency range,
and thus write $k(t)\sim k$, where $k$ is a typical value in this range,
without treating each mode separately.
### III.1 General constraint
Comparing Eq. (13) with Eq. (4), we can directly relate $\zeta(\vartheta)$,
which captures all of the dependence on the CS field, to the measured value of
$\kappa$ as
$\displaystyle\zeta(\vartheta)=\frac{\kappa d_{C}H_{0}}{k}\,.$ (14)
Thus, setting $d_{C}\left(z=0.3\right)\simeq 1.23$ Gpc for a typical
Advanced LIGO BBH source distance (corresponding to the median detected
redshift in GWTC-2) Abbott _et al._ (2019a), and setting $k\sim 2\pi\times
100\,\mathrm{Hz}/c\sim 2\times 10^{-6}\,\mathrm{m}^{-1}$ for the approximate
value of the region of greatest sensitivity of LIGO (cf. Abbott _et al._ (2016,
2019a)), we obtain the dimensionless result
$\displaystyle\zeta(\vartheta)=\left(\frac{\kappa}{1\textrm{
Gpc}^{-1}}\right)\times 6.6\times 10^{-21}\,.$ (15)
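As a sanity check of the conversion in Eqs. (14)-(15), the sketch below evaluates $\zeta=\kappa d_{C}H_{0}/(kc)$, with the factor of $c$ restored to make $\zeta$ dimensionless (cf. the remark after Eq. (19)). The assumed $H_{0}=67.7$ km/s/Mpc is our choice, not specified in the text, so the sketch recovers the order of magnitude of Eq. (15) rather than its exact prefactor.

```python
import math

GPC_M = 3.0857e25                 # meters per gigaparsec
C = 2.998e8                       # speed of light, m/s

H0 = 67.7e3 / (GPC_M / 1e3)       # 67.7 km/s/Mpc in 1/s (assumed value)
k = 2 * math.pi * 100 / C         # wavenumber of a ~100 Hz GW, 1/m
kappa = 1.0 / GPC_M               # kappa = 1 Gpc^-1, in 1/m
d_C = 1.23 * GPC_M                # comoving distance at z ~ 0.3, in m

# zeta = kappa * d_C * H0 / (k c); kappa * d_C is dimensionless, and
# H0 / (k c) supplies the remaining dimensionless ratio.
zeta = kappa * d_C * H0 / (k * C)
print(zeta)                       # a few times 1e-21, the order of Eq. (15)
```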
From the results for GWTC-2 in Sec. II, we compute
O1-O2: $\zeta(\vartheta)\lesssim 1.3\times 10^{-20}\,,$
O3a: $\zeta(\vartheta)\lesssim 8.6\times 10^{-21}\,,$
All: $\zeta(\vartheta)\lesssim 4.9\times 10^{-21}\,.$
As stated before, $\zeta(\vartheta)$ is dependent on the integrated history of
the CS scalar field. In Alexander _et al._ (2008), the authors calculate
$\zeta(\vartheta)$ for a matter-dominated universe (with scale factor
$a(\eta)=a_{0}\eta^{2}$, where $a_{0}$ is the present-day value and $\eta$ is
conformal time). Since the LIGO sources are found at redshifts $z<1$ (300–3000
Mpc) Abbott _et al._ (2019a), we focus on a dark-energy dominated universe,
with $a(t)=a_{0}e^{H_{0}t}$. We compute the corresponding $\zeta$, in terms of
dimensionless conformal time $\eta$, to be
$\displaystyle\zeta(\vartheta)=\frac{H_{0}^{2}}{2}\int_{\eta}^{1}\left(\eta^{2}\vartheta^{\prime\prime}(\eta)-2\eta\vartheta^{\prime}(\eta)\right)d\eta\,.$
(16)
We give the full calculation in Appendix C.
In the above expressions, we have left the ‘frozen-in’ profile of $\vartheta$
unspecified. Let us suppose that $\vartheta$ is dependent on some CS parameter
$P$. For some specified profile $\vartheta[P]$, the reader can thus use Eqs.
(15) and (16) to compute a value of $P$ given a value of $\kappa$.
### III.2 Constraint on canonical $\vartheta$ profile
Let us now consider the ‘canonical’ profile for $\vartheta$ given in Jackiw
and Pi (2003); Alexander and Yunes (2009); Yunes and Spergel (2009), where
$\vartheta$ has an isotropic, time-dependent profile of the form
$\displaystyle\vartheta=\frac{t}{\mu}\,,$ (17)
where $\mu$ is a mass scale with units $[\mu]=L^{-1}$. Note that when $\mu$ is
large, we recover GR.
Let us define
$\displaystyle\ell_{0}\equiv\frac{1}{\mu}$ (18)
to be the CS lengthscale for this field profile. With this profile,
$\zeta(\vartheta)$ in Eq. (16) becomes
$\displaystyle\zeta(\vartheta)=\frac{3H_{0}\ell_{0}}{2c}(1-\eta)=\frac{3H_{0}\ell_{0}d_{C}}{2d_{H}}\,,$
(19)
where we have re-introduced a factor of $c$ and have set $(1-\eta)\sim
d_{C}/d_{H}$, where $d_{H}\equiv c/H_{0}$ is the Hubble distance. Combining
Eqs. (14) and (19), we obtain
$\displaystyle\ell_{0}=\frac{2cd_{H}\kappa}{3k}\,,$ (20)
which becomes
$\displaystyle\ell_{0}=\left(\frac{\kappa}{1\textrm{ Gpc}^{-1}}\right)\times
1400\textrm{ km}\,.$ (21)
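The prefactor in Eq. (21) can be checked numerically from Eq. (20) with $d_{H}=c/H_{0}$; the value of $H_{0}$ below is an assumed choice (67.7 km/s/Mpc), since the text does not specify one.

```python
import math

GPC_M = 3.0857e25                 # meters per gigaparsec
C = 2.998e8                       # speed of light, m/s

H0 = 67.7e3 / (GPC_M / 1e3)       # 67.7 km/s/Mpc in 1/s (assumed value)
d_H = C / H0                      # Hubble distance, m
k = 2 * math.pi * 100 / C         # wavenumber of a ~100 Hz GW, 1/m
kappa = 1.0 / GPC_M               # kappa = 1 Gpc^-1, in 1/m

# Eq. (20): l0 = 2 c d_H kappa / (3 k); the factor of c is absorbed here
# into d_H = c / H0, leaving l0 = 2 d_H kappa / (3 k).
l0_km = 2 * d_H * kappa / (3 * k) / 1e3
print(l0_km)                      # ~1400 km, matching Eq. (21)
```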
Given the posterior on $\kappa$ computed in Sec. II, we show the posterior on
$\ell_{0}$, computed using Eq. (21) in Fig. 4. We can thus bound
O1-O2: $\ell_{0}\lesssim 2.8\times 10^{3}\textrm{ km}\,,$
O3a: $\ell_{0}\lesssim 1.8\times 10^{3}\textrm{ km}\,,$
All: $\ell_{0}\lesssim 1.0\times 10^{3}\textrm{ km}\,.$
Figure 4: Posterior on $\ell_{0}$, the CS field length scale for the canonical
CS field profile given in Eqs. (17) and (18). We compute the likelihood from
the observations in O1-O2 (light blue curve), O3a (pink curve), and both
catalogs (black curve). Each vertical line corresponds to the $1\sigma$ bound.
### III.3 Projected value of $\ell_{0}$ with more detections
We can project future constraints on $\ell_{0}$ using the constraints we have
obtained with GWTC-2 to see how this bound would improve with future
detections. At fixed detector sensitivity, we expect
that the constraint will go as $\ell_{0}(N)\sim 1/\sqrt{N}$, where $N$ is the
number of detections. But as the detector sensitivity changes so does the
typical distance to a detected merger. Since advanced LIGO at design
sensitivity is expected to have a larger reach in distance Abbott _et al._
(2018), we set the typical value of the redshift to $z=0.75$. Repeating the
previous analysis with the O1-O2 detections and the O3a detections, and with
$z=0.75$ instead of $z=0.3$, we find that
$\displaystyle\ell_{0}(N=10,z=0.75)$ $\displaystyle=1300\textrm{ km}\,,$ (22)
$\displaystyle\ell_{0}(N=47,z=0.75)$ $\displaystyle=490\textrm{ km}\,.$ (23)
With 1000 BBH detections at design sensitivity, for example, we would expect
to bound $\ell_{0}\lesssim 100\,\mathrm{km}$. This projection is the result of
two anticipated improvements—first in the greater reach in redshift of LIGO at
design sensitivity, and second in the number of detections.
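The $1/\sqrt{N}$ extrapolation above can be written as a one-line projection, anchored on the $N=47$, $z=0.75$ value of 490 km from Eq. (23):

```python
import math

def projected_l0_km(n_detections, anchor_n=47, anchor_l0_km=490.0):
    """1/sqrt(N) extrapolation of the CS lengthscale bound at fixed
    detector sensitivity, anchored on Eq. (23)."""
    return anchor_l0_km * math.sqrt(anchor_n / n_detections)

print(projected_l0_km(1000))   # ~100 km, as quoted in the text
```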
### III.4 Implications of Chern-Simons constraint
Let us compare the physical constraint on the canonical Chern-Simons
lengthscale from Sec. III.2 to additional observed bounds on the non-dynamical
theory, using the $1000$ km bound we obtain from GWTC-2. Smith et al. Smith
_et al._ (2008) used Solar-System measurements of frame-dragging from LAGEOS
and Gravity Probe B to bound $|\dot{\vartheta}|\leq
3000(\kappa_{\mathrm{E}}/\alpha)$ km. We have chosen
$\alpha=\kappa_{\mathrm{E}}$ in this study, and for the canonical profile,
we have $\dot{\vartheta}=\ell_{0}$. Hence, the Smith et al. constraint becomes
$\ell_{0}\leq 3000$ km. The bound from GWTC-2 is smaller than this number,
indicating that LIGO events can constrain the non-dynamical theory more
tightly than this Solar-System test.
Alexander et al. proposed an amplitude birefringence analysis with LISA
Alexander _et al._ (2008), estimating that for a $10^{6}\,M_{\odot}$ BBH at
redshift $z\sim 15$, one could bound $\ell_{0}\leq 10^{-2}$ km Alexander and
Yunes (2009).5 Note that the analysis in this paper was performed for a dark-
energy dominated universe, which is applicable to LIGO sources with $z\sim 1$,
while the LISA analysis required a matter-dominated universe. This is a
stronger bound than the one obtained in this paper, and attempting to achieve
such a bound with LIGO-Virgo events would require $N\sim 10^{11}$ detections
(cf. Sec. III.3). The authors of Alexander _et al._ (2008) perform a Fisher-
matrix analysis for a source sweeping through $10^{-4}-10^{-2}$ Hz, keeping
track of the frequency dependence in $k(t)$ and hence
$\iota_{\mathrm{obs}}(t)$. In this study, we have approximated $k(t)$ as a
constant $2\pi\times 100$ Hz, which in turn corresponds to setting
$\iota_{\mathrm{obs}}(t)$ to a constant, rather than the time-varying apparent
inclination angle described in Alexander _et al._ (2008). While LISA is
sensitive to this effect due to probing long BBH inspirals, LIGO is not
sensitive to this effect, as there are not enough cycles in the LIGO band to
probe precession for most events Abbott _et al._ (2019a).
Additionally, Hu et al. Hu _et al._ (2020) performed a study analyzing the
capability of a network of future space-based detectors (LISA, Taiji, and
TianQin) to constrain parity violations in gravitational wave propagation,
finding that for a $10^{6}\,M_{\odot}$ event at 20 Gpc, the parity violating
scale from amplitude birefringence could be bounded to
$M_{\mathrm{PV}}>\mathcal{O}(10^{-15})$ eV, corresponding to $2\times 10^{5}$
km. This, as the authors note, is a weaker bound than the constraint from
ground-based detectors.
Yunes and Spergel Yunes and Spergel (2009) performed a binary pulsar test with
PSR J0737–3039, finding $\ell_{0}\lesssim 6\times 10^{-9}$ km, a bound much
stronger than the one reported in this paper. The periastron precession of a
system is corrected in CS, with the gradient of $\vartheta$ selecting a
preferred direction in spacetime for the correction. The strength of this
correction relative to GR is governed by $a^{2}/R^{2}$, where $a$ is the
semimajor axis of the system, and $R$ is the radius of the object. With a
large separation ($\sim 10^{6}$ stellar radii in this case), and small radii,
a binary pulsar system produces a very strong constraint. However, as shown in
Ali-Haimoud (2011), this analysis failed to account for several effects that
lead to a suppression of the rate of periastron precession. In particular,
Yunes and Spergel (2009) modeled PSR J0737–3039B as a point particle, rather
than an extended body with radius $R_{B}$. If $R_{B}$ is larger than
$2\pi\ell_{0}$ (the CS wavelength), the average force per unit mass is
suppressed by a factor of $\sim 15(\ell_{0}/R_{B})^{3}$. Thus, in order to
match the observed constraint on periastron precession, $\ell_{0}$ must be
$\gtrsim R_{B}$. Indeed, Ali-Haimoud (2011) computed a corrected constraint of
$\ell_{0}\lesssim 0.4$ km.
In addition, Yunes and Spergel (2009) probes a different physical regime than
we probe in this paper. Yunes and Spergel assume the canonical, global
$\vartheta=\ell_{0}t$ profile, but use a local measurement to probe
$\ell_{0}$. This involves assuming that the canonical profile, which has no
spatial dependence, truly holds within our galaxy, and that there are no
spatial density variations in the field near PSR J0737-3039. In this paper,
however, we use an integrated history of $\vartheta$, sampling its temporal
evolution, all the way from redshift $z\sim 1$ to present day. Over such
cosmological distances, choosing the smooth, isotropic profile
$\vartheta=\ell_{0}t$ may be justified, as any spatial effects can be presumed
to integrate out. Thus, our analysis differs from binary pulsar tests in that
we have used a global measurement to constrain a global quantity, without
making any local assumptions.
Recently, Wang et al. Wang _et al._ (2020) analyzed the presence of amplitude
and velocity birefringence in GWTC-1, the first catalog of LIGO and Virgo
detections, finding no evidence of parity violation. Their methods are
different from the ones presented in this paper, as they match GWTC-1 data
against GW templates that include birefringence effects, rather than looking
at an ensemble of inclination angles. The constraint on the parity violating
energy scale found in Wang _et al._ (2020) is $M_{\mathrm{PV}}>0.07$ GeV,
which corresponds to a lengthscale of $\hbar c/M_{\mathrm{PV}}\sim 10^{-18}$
km. However, this comes from velocity birefringence effects, as LIGO is more
sensitive to phase, rather than amplitude, modifications. Indeed, the
constraint from amplitude birefringence effects only is
$M_{\mathrm{PV}}>10^{-22}$ GeV, which corresponds to $\sim 2000$ km. Similarly,
Yamada et al. Yamada and Tanaka (2020) performed a parametrized test of
parity violation in gravitational wave propagation for GWTC-1, finding a
minimum bound of $\ell_{0}\leq 1422$ km for GW151226 for CS gravity. Our
GWTC-2 result of $\ell_{0}\leq 1000$ km improves on both of these results.
## IV Conclusion
In this study, we have used GWTC-2 Abbott _et al._ (2020a); LIGO Scientific
Collaboration and Virgo Collaboration (2019), including events from the first
three observation runs, to perform a new test of general relativity (GR). We
have placed an observational bound on gravitational wave amplitude
birefringence, which is absent in GR, but present in various beyond-GR
theories. Namely, we have bounded the opacity parameter governing the strength
of the amplitude birefringence to $\kappa\lesssim 0.74\textrm{ Gpc}^{-1}$
(Sec. II.2).
This general opacity constraint can then be mapped onto any beyond-GR theory
exhibiting amplitude birefringence (see Zhao _et al._ (2020a) for a review).
We have focused on (non-dynamical) Chern-Simons gravity, a beyond-GR theory
with motivations in string theory and loop quantum gravity (Sec. III). We have
used our results for $\kappa$ to bound $\zeta(\vartheta)$, a general CS parameter
governing the CS scalar field, to $\zeta(\vartheta)\lesssim 4.9\times
10^{-21}$. We then computed the constraint on the CS lengthscale of the
canonical scalar field profile, to give $\ell_{0}\lesssim 1.0\times 10^{3}$ km
(Sec. III.2).
One of the main benefits of our analysis is that it is simple and fast (of
order minutes), and only requires looking at inclination angle posterior
distributions for gravitational wave events, which are readily available from
LIGO and Virgo catalogs, without performing an independent parameter
estimation analysis. We plan to repeat this analysis with future LIGO and
Virgo observations, obtaining an even tighter bound on this beyond-GR effect.
## Acknowledgements
MO and WF are funded by the Center for Computational Astrophysics at the
Flatiron Institute, which is supported by the Simons Foundation. MI is
supported by NASA through the NASA Hubble Fellowship grant
#HST–HF2–51410.001–A awarded by the Space Telescope Science Institute, which
is operated by the Association of Universities for Research in Astronomy,
Inc., for NASA, under contract NAS5–26555.
## Appendix A Joint distance-$\kappa$ analysis
Let us now consider dropping the assumption used in the main analysis of Sec.
II that all of the observed GW events are at the same distance. This requires
performing a joint analysis for $d_{L}$ and $\kappa$.
For ease of notation, let us write
$c\equiv\cos\iota\,,\quad c_{o}\equiv\cos\iota_{\mathrm{obs}}\,,\quad d_{Lo}\equiv d_{L,\mathrm{obs}}\,.$ (24)
We can model this entire system as a probabilistic graphical model (PGM), as
illustrated in Fig. 5. The directions in the PGM denote the influences between
various variables. In our case, $\kappa$, $d_{L}$, and $c$, which are true
astrophysical parameters, influence the observed variables $d_{Lo}$ and
$c_{o}$. In turn, $d_{Lo}$ and $c_{o}$ influence the observed gravitational
wave data $D_{\mathrm{GW}}$. In this model, $\kappa$ plays a special role,
because it is shared by the entire population.
Figure 5: Probabilistic graphical model illustrating the relationship between
the variables (see Eq. (24) for abbreviations). The gray region represents
data that applies to each gravitational wave event, while $\kappa$ is a
universal constant, independent of each event. The true luminosity distance
$d_{L}$, the true inclination angle $c$, and $\kappa$ affect the observed
luminosity distance $d_{Lo}$ and the observed inclination angle $c_{o}$. These
in turn affect the observed gravitational wave data $D_{\mathrm{GW}}$.
For each event, we can marginalize over the distributions for the observed
variables, $\\{d_{Lo},c_{o}\\}$ to obtain $p(D_{\mathrm{GW}}|\kappa)$ as
$\displaystyle
p(D_{\mathrm{GW}}|\kappa)=\int\mathrm{d}c_{o}\,\mathrm{d}d_{Lo}\,p(D_{\mathrm{GW}}|c_{o},d_{Lo})p(c_{o},d_{Lo}|\kappa)\,.$
(25)
The above expression is a standard marginalization using the PGM, without
making any astrophysical arguments.
We can compute the likelihood for $M_{\mathrm{obs}}$ gravitational wave
observations as the product
$\displaystyle p\left(\left\\{D_{\mathrm{GW},j}\mid
j=1,\ldots,M_{\mathrm{obs}}\right\\}\mid\kappa\right)=\prod_{j=1}^{M_{\mathrm{obs}}}p\left(D_{\mathrm{GW},j}\mid\kappa\right)\,,$
(26)
where we use Eq. (25) to compute each of the individual likelihoods in the
product.
Let us now work with Eq. (25), further marginalizing over $d_{Lo}$ as
$\displaystyle p(D_{\mathrm{GW}}|\kappa)$
$\displaystyle=\int\mathrm{d}c_{o}\,\mathrm{d}d_{Lo}\,p(D_{\mathrm{GW}}|c_{o},d_{Lo})$
(27) $\displaystyle\quad\times p(c_{o}|\kappa,d_{Lo})p(d_{Lo})\,.$
In order to compute $p(d_{Lo})$, we assert that the distribution of observed
luminosity distances tracks the star formation rate, with
$\displaystyle
p\left(d_{Lo}\right)\propto\frac{\left(1+z\right)^{\alpha}}{1+\left(\frac{1+z}{1+z_{p}}\right)^{\beta}}\frac{\mathrm{d}V}{\mathrm{d}z}\frac{\mathrm{d}z}{\mathrm{d}d_{L}}\frac{1}{1+z}$
(28)
where $\alpha=2.7$, $z_{p}=1.9$, and $\beta=5.6$ from Madau and Dickinson
(2014). Effectively, we adjust the true merger rate evolution with redshift to
match the observed distribution to the star formation rate (this is consistent
with the population analysis in Abbott _et al._ (2019d)). We do this to avoid
learning anything about $\kappa$ from any imposed prior on the _true_ merger
rate evolution, since we are a priori very uncertain about it.
Now, we need to compute $p(c_{o}|\kappa,d_{Lo})$. We assume from isotropy that
the true inclination angle at the source, $c$, is independent of $d_{L}$ and
$\kappa$, giving
$\displaystyle p(c|d_{L},\kappa)=\frac{1}{2}\,.$ (29)
Then, we can compute $p(c_{o}|\kappa,d_{Lo})$ through a substitution of
variables as
$\displaystyle p(c_{o}|\kappa,d_{Lo})=p(c|d_{L},\kappa)\left|\frac{\partial
c}{\partial c_{o}}\right|\,.$ (30)
From Eq. (7), we can compute
$\displaystyle\frac{\partial c}{\partial c_{o}}=1+c_{o}\kappa
d_{C}+\mathcal{O}(\kappa d_{C})^{2}\,.$ (31)
Thus, we obtain
$\displaystyle p(c_{o}|\kappa,d_{Lo})=\frac{1}{2}\left(1+c_{o}\kappa
d_{C}+\mathcal{O}(\kappa d_{C})^{2}\right)\,.$ (32)
Now we have all of the pieces of Eq. (27). To make the above expressions
valid, we impose that $\kappa d_{C}\ll 1$. We enforce this by choosing a flat
prior on $\kappa$ symmetric about zero, with support up to maximum allowed
value $\kappa_{\mathrm{max}}$ determined by the largest value of the distance,
$d_{C\mathrm{,max}}$. We choose the 99th percentile value of $d_{C}$ in each
dataset to give $d_{C\mathrm{,max}}$ (cf. Fig. 6 for an illustration).
In practice, we have access not to continuous probability distributions, but
rather to $N$ samples from each gravitational wave event. Thus, we express
the integral in Eq. (27) as a sum over $N$ samples, giving
$\displaystyle p\left(D_{\mathrm{GW}}\mid\kappa\right)$
$\displaystyle\simeq\frac{1}{N}\times$ (33)
$\displaystyle\sum_{n=1}^{N}\frac{p\left(c_{o,n}\mid
d_{Lo,n},\kappa\right)p\left(d_{Lo,n}\right)p\left(\vec{\theta}_{n,\mathrm{other}}\right)}{p\left(\vec{\theta}_{n}\right)}\,.$
The quantity $\vec{\theta}_{n}$ refers to all of the parameters of
the model. The quantity $\vec{\theta}_{n,\mathrm{other}}$, meanwhile, refers
to all of the parameters besides the distances, inclination angles, and
$\kappa$ in the model, such as the masses and spins of the black holes. We can
use priors on $p(\vec{\theta}_{\mathrm{n,other}})$ to re-sample the
distributions on parameters given in GWTC-2, with weights
$\displaystyle
w_{n}=\frac{p\left(d_{Lo,n}\right)p\left(\vec{\theta}_{\mathrm{other},n}\right)}{p\left(\vec{\theta}_{n}\right)}\,,$
(34)
to give
$\displaystyle
p\left(D_{\mathrm{GW}}\mid\kappa\right)\simeq\frac{1}{N^{\prime}}\sum_{n=1}^{N^{\prime}}w_{n}p\left(c_{o,n}\mid
d_{Lo,n},\kappa\right)\,,$ (35)
for the sum in Eq. (33).
In particular, in keeping with Eq. (28), we want to choose the prior on masses
and distances to track the star formation rate. The prior
$p(m_{1},m_{2},d_{Lo})$ used in GWTC-2 is flat in detector frame masses and
flat in $c_{o}$, of the form
$\displaystyle p\left(m_{1},m_{2},d_{Lo}\right)\propto\frac{\partial
m_{1}^{\mathrm{det}}}{\partial m_{1}}\frac{\partial
m_{2}^{\mathrm{det}}}{\partial
m_{2}}d_{L}^{2}=\left(1+z\right)^{2}d_{L}^{2}\,.$ (36)
We will re-weight using a prior on the masses that is proportional to
$m_{1}^{-1.6}$ and flat in mass ratio, $q$, the approximate best-fit
distribution from Abbott _et al._ (2019d), of the form
$\displaystyle p\left(m_{1},m_{2},d_{Lo}\right)\propto
m_{1}^{-1.6}\frac{\partial q}{\partial
m_{2}}p\left(d_{Lo}\right)=m_{1}^{-2.6}p\left(d_{Lo}\right),$ (37)
where $p\left(d_{Lo}\right)$ tracks the star formation rate as given in Eq.
(28). Note that we do not consider the parameter space of other physical
binary black hole populations in this study, in part because population models
are not presently well-constrained with GWTC-2 Abbott _et al._ (2020c).
We then combine all of the events using Eq. (26) to give the likelihood across
all events. From this likelihood, we can then compute the posterior
$p\left(\kappa\mid\left\\{D_{\mathrm{GW},j}\mid
j=1,\ldots,M_{\mathrm{obs}}\right\\}\right)$ using a flat prior on $\kappa$,
normalizing to integrate to 1.
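A minimal sketch of this pipeline is given below, combining the linearized density of Eq. (32), the weighted sample sum of Eq. (35), and the product over events of Eq. (26). The samples and weights here are synthetic stand-ins for the GWTC-2 posterior samples and the Eq. (34) weights; with equal numbers of face-on and face-off synthetic events, the combined posterior should peak near $\kappa=0$.

```python
import numpy as np

rng = np.random.default_rng(1)
kappa = np.linspace(-0.5, 0.5, 101)        # flat prior grid on kappa, Gpc^-1

def event_likelihood(c_obs, d_C, weights):
    """p(D_GW | kappa) via Eq. (35): a weighted average over posterior
    samples of p(c_obs | kappa, d_C) = (1/2)(1 + c_obs * kappa * d_C),
    the linearized density of Eq. (32). Distances d_C are in Gpc."""
    w = weights / weights.sum()
    dens = 0.5 * (1.0 + np.outer(kappa, c_obs * d_C))
    return dens @ w                        # shape (n_kappa,)

# Synthetic stand-ins: ten face-on and ten face-off events, with
# kappa * d_C kept well below 1 so the linearization is valid.
log_like = np.zeros_like(kappa)
for i in range(20):
    sign = 1 if i % 2 == 0 else -1
    c_obs = sign * rng.uniform(0.6, 1.0, 500)
    d_C = rng.uniform(0.5, 1.0, 500)       # comoving distances, Gpc
    weights = rng.uniform(0.5, 1.5, 500)   # stand-in for Eq. (34) weights
    log_like += np.log(event_likelihood(c_obs, d_C, weights))

post = np.exp(log_like - log_like.max())
post /= post.sum() * (kappa[1] - kappa[0])  # normalize under the flat prior
print(kappa[np.argmax(post)])               # near 0: consistent with GR
```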
We show the resulting posterior on $\kappa$ for GWTC-2 in Fig. 6. We see that
we do not get an informative constraint on $\kappa$ from O1-O2, as in the
analysis presented in Sec. II. However, adding in O3a, we can get a constraint
consistent with $\kappa=0$.
Since $\kappa=0$ corresponds to GR, by comparing the value of the posterior to
the prior at $\kappa=0$, we can obtain evidence for GR. For O1-O2, we
effectively recover the prior value at $\kappa=0$, giving us no information.
For O3a and all of the detections, however, the result does give an
informative constraint around $\kappa=0$.
Figure 6: Posterior distribution on $\kappa$ using a joint distance-$\kappa$
analysis. We show the posteriors for the O1-O2 detections (light blue curve),
O3a detections (pink curve), and all detections (purple curve). The combined
result prefers $\kappa=0$, thus showing consistency with GR. Compare to
Fig. 3, which assumes a fixed distance for all events. For each dataset, we show
the corresponding prior on $\kappa$ with a dot-dashed line, given through the
condition that $\kappa d_{C}\ll 1$. The priors are different for the two
datasets as they have different maximum values of $d_{C}$. We also fit a
Gaussian to all of the detections to estimate a variance for the distribution
(black dashed curve, visually overlapping with the data).
Note that this analysis requires that $\kappa d_{C}\ll 1$, and thus we must
limit the values of $\kappa$ considered for events with large comoving
distances in order for our analysis to be valid; going beyond linear order in
the above relations is possible, but the solutions for
$c_{o}\left(c,d_{L,o}\right)$ become multi-valued, significantly complicating
the analysis. The events with confident constraints on inclination angle shown
in Fig. 1 are at redshifts of $0.05\lesssim z\lesssim 0.38$. GWTC-2 does
contain events at redshifts up to $z=1$ Abbott _et al._ (2020c), but the
inclination measurements from these events are uninformative. For future
observations, however, we have to be cautious of the $\kappa d_{C}\ll 1$
requirement when bounding $\kappa$ with events at large redshifts in order for
the linear analysis to remain valid.
We fit a Gaussian to the computed distribution on $\kappa$ (for all of the
gravitational wave events) in Fig. 6, finding a mean of $-0.035\textrm{
Gpc}^{-1}$, and a standard deviation of $\sigma=0.4\textrm{ Gpc}^{-1}$. This
value of $\sigma$ is larger than the width of the prior support we impose to
satisfy the $\kappa d_{C}\ll 1$ constraint. We can estimate, however, how many
future detections it will take for $\sigma$ to lie inside of the prior. For
the same distance distribution of observed sources, $\sigma$ will decrease by
a factor of $\sqrt{N}$ for $N$ more detections. For $\sigma$ to decrease by a
factor of two from $0.4\textrm{ Gpc}^{-1}$ to $0.2\textrm{ Gpc}^{-1}$, we thus
require $N\sim 30$ more informative events.
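To make the counting explicit: since $\sigma\propto 1/\sqrt{N_{\mathrm{tot}}}$ for a total of $N_{\mathrm{tot}}$ informative events, halving $\sigma$ requires quadrupling $N_{\mathrm{tot}}$. Writing $N_{\mathrm{tot}}=N_{0}+N$, this gives $N=3N_{0}$, so the quoted $N\sim 30$ corresponds to roughly $N_{0}\sim 10$ informative events in the current catalog.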
However, for future gravitational wave detections, we know that we will be
able to observe further distances, which will affect the number of detections
and hence the behavior of $\sigma$. Specifically, the rate at which we observe
new events increases with distance $d_{C}$ as $d_{C}^{3}$ (since the overall
observable volume increases). Thus, $\sigma$ will decrease with distance as
$d_{C}^{-3/2}$.666Here we make the assumption that $\sigma$ is otherwise
independent of distance, conservatively ignoring the fact that events that are
further can give larger constraints on amplitude birefringence, and assuming
that the inclination angle can be measured with similar accuracy at various
distances. This increased distance, however, will decrease the allowed value
of $\kappa$ (from the constraint $\kappa d_{C}\ll 1$) by a factor of
$d_{C}^{-1}$. Thus, as the observable distance increases, $\sigma$, the
variance on the measured $\kappa$, will decrease faster than the prior on the
allowed values of $\kappa$. Hence, in time, we will be able to make a more
precise and valid measurement of $\kappa$.
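Explicitly, writing $\kappa_{\mathrm{max}}$ for the prior bound implied by $\kappa d_{C}\ll 1$, the ratio of the posterior width to the prior width scales as $\sigma/\kappa_{\mathrm{max}}\propto d_{C}^{-3/2}/d_{C}^{-1}=d_{C}^{-1/2}\to 0$ as the observable distance grows.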
## Appendix B Uninformative inclination distributions
In order to quantify the amount of information about $\kappa$ contained in
the GWTC-2 detections, we must compare the results (whether qualitatively or
quantitatively through a Kullback-Leibler divergence) to the distribution on
$\kappa$ that we would get from detections that are completely uninformative
about $\cos\iota$. Of course, such uninformative measurements must generate a
posterior for $\kappa$ that is equal to the prior (that is, they must generate
a flat likelihood function). Verifying that a practical inference method
actually satisfies this condition is nonetheless an interesting test.
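For concreteness, such a comparison via the Kullback-Leibler divergence can be computed on binned samples as sketched below (an illustrative example only; the bin probabilities are made up, and `kl_divergence` is a hypothetical helper, not code from this work):

```python
import math

def kl_divergence(p, q):
    """Discrete KL divergence D(p || q), in nats, between two normalized
    histograms (e.g., binned posterior samples on kappa vs. the prior)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

prior = [0.25, 0.25, 0.25, 0.25]        # flat prior over four kappa bins
posterior = [0.1, 0.4, 0.4, 0.1]        # an informative (peaked) posterior

gain = kl_divergence(posterior, prior)  # positive: the data carried information
flat = kl_divergence(prior, prior)      # zero: the uninformative limit
```

A posterior identical to the prior gives zero divergence, which is precisely the uninformative limit tested in this appendix.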
To generate such a test for our methods here, we produce an uninformative
distribution on $\cos\iota$ for all detections. We generate
$N_{\mathrm{samp}}$ mock samples from a distribution that is
$\mathcal{U}[-1,1]$. We can then take the ensemble of $N_{\mathrm{det}}$ such
detections and compute a likelihood distribution on $\Delta$ using the
procedure in Sec. II.1, following with a computation of $\kappa$.
However, when generating these samples, we must be careful about the fact that
we are considering an uninformative distribution. For each detection, we
obtain a certain amount of Poisson noise given that we only have
$N_{\mathrm{samp}}$ discrete samples. Naively, one would expect these Poisson
fluctuations to cancel one another out as we accumulate more detections,
converging to some ‘true value’. However, because each successive
uninformative ‘measurement’ of $\cos\iota\in\mathcal{U}[-1,1]$ offers no new
information, there is no such sense of convergence. Instead, the detections
essentially result in a random walk in the slope of the likelihood with the
$\Delta$ parameter. We compute a log-likelihood distribution on $\Delta$ over
all of the detections using
$\displaystyle\log\mathcal{L}(\Delta)=\sum_{\mathrm{Detections}}\log\left[N_{-}\frac{(1-\Delta)}{2N_{\mathrm{samp}}}+N_{+}\frac{(1+\Delta)}{2N_{\mathrm{samp}}}\right]$
(38)
where for each detection, $N_{-}$ is the number of samples with $\cos\iota<0$
and $N_{+}=N_{\mathrm{samp}}-N_{-}$ is the number of samples with
$\cos\iota>0$.
For a uniform distribution, we would expect to have $N_{-}=N_{+}=N_{\mathrm{samp}}/2$,
so let us write, to linear order,
$N_{-}/N_{\mathrm{samp}}=\frac{1}{2}+\epsilon$ and
$N_{+}/N_{\mathrm{samp}}=\frac{1}{2}-\epsilon$. For any particular detection,
assuming $N_{\mathrm{samp}}\gg 1$, $\epsilon$ is approximately normally
distributed with mean zero and standard deviation
$1/\sqrt{N_{\mathrm{samp}}}$. Eq. (38) then results in
$\displaystyle\log\mathcal{L}(\Delta)=\sum_{\mathrm{Detections}}\log\left[\frac{1}{2}-\Delta\epsilon\right]\,,$
(39)
which for each detection results in a line with slope linearly dependent on
$\epsilon$. Summing the independent, normally-distributed random variables
$\epsilon$ gives
$\log\mathcal{L}(\Delta)=\mathrm{const}-\Delta\sum_{\mathrm{Detections}}\epsilon.$
(40)
The sum of normally-distributed $\epsilon$ results in a random-walk for the
slope of the likelihood with $\Delta$; the sum is, itself, normally-
distributed with mean zero and standard deviation
$\sqrt{N_{\mathrm{det}}/N_{\mathrm{samp}}}$. In order to ensure that
uninformative detections do not accumulate a significant slope in
$\mathcal{L}(\Delta)$, we must ensure that
$\displaystyle N_{\mathrm{samp}}\gg N_{\mathrm{det}}$ (41)
and thus have a number of samples that is dependent on the number of
detections in the uninformative case. Note that this is different from what we
do in practice, where we assume that the gravitational wave events are
informative about $\cos\iota$ and hence $\Delta$, and we use a fixed number of
samples (1024 in this study) from each posterior distribution in our
calculations.
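This procedure can be sketched as a small simulation (illustrative only; the grid, seed, and sample counts are arbitrary choices, not those used in the paper):

```python
import math
import random

random.seed(0)

def combined_log_likelihood(deltas, n_det, n_samp):
    """Combined log-likelihood on Delta (Eq. 38) from uninformative mock
    posteriors, drawing cos(iota) ~ U[-1, 1] for each detection."""
    logL = [0.0] * len(deltas)
    for _ in range(n_det):
        n_minus = sum(1 for _ in range(n_samp) if random.uniform(-1, 1) < 0)
        n_plus = n_samp - n_minus
        for i, d in enumerate(deltas):
            logL[i] += math.log(
                (n_minus * (1 - d) + n_plus * (1 + d)) / (2 * n_samp))
    return logL

deltas = [i / 50 - 0.98 for i in range(99)]   # grid on (-1, 1)

# Criterion Eq. (41): n_samp >> n_det keeps the accumulated random-walk
# slope small, so the combined likelihood stays nearly flat.
logL = combined_log_likelihood(deltas, n_det=20, n_samp=10_000)
spread = max(logL) - min(logL)                # small when Eq. (41) holds
```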
We can see the outcome of this in Fig. 7, where we plot the resulting
distribution on $\Delta$ from uninformative samples with and without imposing
the criterion in Eq. (41), where we obtain convergence to a flat distribution
when we satisfy the criterion.
Figure 7: Posterior distribution on $\Delta$ computed from uninformative
distributions of $\cos\iota\in\mathcal{U}[-1,1]$ for each detection. Each
solid curve corresponds to the combined posterior distribution on $\Delta$ for
the given number of detections, and the number of samples for each detection.
The top panel corresponds to using a constant number of samples for each
$N_{\mathrm{det}}$, which does not converge to the expected flat distribution
on $\Delta$ (dashed grey line) with increasing detections. The bottom panel,
however, shows the case where $N_{\mathrm{samp}}$ changes with
$N_{\mathrm{det}}$ to satisfy the criterion in Eq. (41), indeed showing
convergence to the expected flat distribution. The slope of the posterior is,
in each case, comparable to $\sqrt{N_{\mathrm{det}}/N_{\mathrm{samp}}}$.
## Appendix C Derivation of $\zeta(\vartheta)$ for a dark-energy dominated
universe
We now work through the derivation of $\zeta(\vartheta)$ (cf. Eq. (III)) for a
dark-energy dominated universe. We follow the steps of Alexander _et al._
(2008), which computed $\zeta(\vartheta)$ for a matter-dominated universe.
We work in units of conformal time $\eta$, with $[\eta]=L^{0}$, and where
$\eta=1$ corresponds to present-day. The scale factor $a$ has units of
$[a]=L$. The conformal time and proper time $t$ are related as $dt=ad\eta$. We
use notation for derivatives $\dot{f}=\partial_{t}f$ and
$f^{\prime}=\partial_{\eta}f$. $H\equiv\dot{a}/a$ is the Hubble parameter,
with $[H]=L^{-1}$, and $\mathcal{H}\equiv a^{\prime}/a$ is the conformal
Hubble parameter with dimensions $[\mathcal{H}]=L^{0}$. Quantities with
subscript $0$, such as $\\{a_{0},H_{0},\mathcal{H}_{0}\\}$, refer to present-
day values of the parameters. As stated before, the CS scalar field
$\vartheta$ has dimensions of $[\vartheta]=L^{2}$, for the choice of
$\alpha=\kappa$ for the CS coupling constant (cf. Eq. (12)). We set $G=c=1$
for this calculation.
Let us assume that right and left polarized gravitational waves have the
following profile (cf. Eq. 189 in Alexander and Yunes (2009)),
$\displaystyle
h_{\mathrm{R},\mathrm{L}}=A(1+\lambda_{\mathrm{R},\mathrm{L}}\cos\iota)^{2}\exp[-i(\phi_{0}+\Delta\phi_{\mathrm{R},\mathrm{L}})]\,,$
(42)
where $\iota$ is the inclination angle between the angular momentum of the
source and the observer’s line of sight, and $A$ is an amplitude dependent on
parameters of the source that is the same for both polarizations. The quantity
$\lambda_{\mathrm{R}}=+1$ for right-handed polarizations, and
$\lambda_{\mathrm{L}}=-1$ for left-handed polarizations. The quantity
$\phi_{0}$ is the gravitational wave phase as given by GR, and
$\Delta\phi_{\mathrm{R},\mathrm{L}}$ is the CS modification to the
gravitational wave phase. Let us write the total phase as
$\displaystyle\phi_{\mathrm{R},\mathrm{L}}(\eta)=\phi_{0}(\eta)+\Delta\phi_{\mathrm{R},\mathrm{L}}(\eta)\,.$
(43)
With the profile in Eq. (42), the ratio between the right and left polarized
strain becomes
$\displaystyle\frac{h_{\mathrm{R}}}{h_{\mathrm{L}}}=\frac{(1+\cos\iota)^{2}}{(1-\cos\iota)^{2}}\exp[-i(\Delta\phi_{\mathrm{R}}-\Delta\phi_{\mathrm{L}})]\,.$
(44)
It is the quantity
$\displaystyle\Delta\phi_{\mathrm{R}}-\Delta\phi_{\mathrm{L}}$ (45)
that we are thus interested in computing, and which is related to $\zeta$ (cf.
Eq. (III)) as
$\displaystyle\frac{2k}{H_{0}}\zeta=-i(\Delta\phi_{\mathrm{R}}-\Delta\phi_{\mathrm{L}})\,.$
(46)
The standard linearized Einstein equations for metric perturbations in a
Friedmann-Robertson-Walker (FRW) universe are modified through the inclusion
of CS coupling to a scalar field. The equation for the phase of circularly
polarized modes thus takes the form (cf. Sec. 2.B in Alexander and Yunes
(2009) for a full derivation)
$\displaystyle\left[i\phi_{\mathrm{R},\mathrm{L}}^{\prime\prime}+(\phi_{\mathrm{R},\mathrm{L}}^{\prime})^{2}+\mathcal{H}^{\prime}+\mathcal{H}^{2}-\kappa^{2}\right]\left(1-\frac{\lambda_{\mathrm{R},\mathrm{L}}\kappa\vartheta^{\prime}}{a^{2}}\right)$
(47)
$\displaystyle\quad=\frac{i\lambda_{\mathrm{R},\mathrm{L}}\kappa}{a^{2}}(\vartheta^{\prime\prime}-2\mathcal{H}\vartheta^{\prime})(\phi_{\mathrm{R},\mathrm{L}}^{\prime}-i\mathcal{H})\,,$
where $\kappa$ is the co-moving wave-number with units $[\kappa]=L^{0}$. For
ease of notation, let us drop the ${\mathrm{R},\mathrm{L}}$ subscript and
focus on a polarization with a generic $\lambda\in\\{-1,1\\}$.
Following Alexander _et al._ (2008), we put Eq. (47) in terms of a host of
other variables, namely
$\displaystyle\;\;y\equiv\frac{\phi^{\prime}}{\kappa}\;\;\;\;\gamma\equiv\frac{\mathcal{H}_{0}}{\kappa}\;\;\;\;\Gamma\equiv\frac{\mathcal{H}}{\mathcal{H}_{0}}$
(48)
$\displaystyle\;\;\delta\equiv\frac{\mathcal{H}_{0}^{\prime}}{\kappa^{2}}\;\;\;\;\Delta\equiv\frac{\mathcal{H}^{\prime}}{\mathcal{H}_{0}^{\prime}}\;\;\;\;\epsilon=\frac{\vartheta_{0}^{\prime\prime}}{a_{0}^{2}}$
$\displaystyle\;\;\zeta\equiv\frac{\kappa\vartheta_{0}^{\prime}}{a_{0}^{2}}\;\;\;\;E\equiv\frac{\vartheta^{\prime\prime}}{a^{2}\epsilon}\;\;\;\;Z\equiv\frac{\kappa\vartheta^{\prime}}{a^{2}\zeta}\,.$
Eq. (47) thus becomes
$\displaystyle\frac{y^{\prime}}{\kappa}+i(1-\gamma^{2}\Gamma^{2}-\delta\Delta-y^{2})=\frac{\lambda(\epsilon
E-2\gamma\zeta\Gamma Z)}{1-\lambda\zeta Z}(y-i\gamma\Gamma)\,.$ (49)
Thus far, nothing has been assumed about the scale factor or matter-energy
content of the FRW universe. Let us assume, however, following Alexander _et
al._ (2008) that $\vartheta$ and $\mathcal{H}$ evolve on cosmological
timescales (with $f^{\prime}\sim\mathcal{H}f$), and so
$\displaystyle\epsilon^{2}\sim(\gamma\zeta)^{2}\ll\gamma^{2}\sim\delta\,.$
(50)
Then, we can say that all of the terms with factors of $\epsilon$ and
$\gamma\zeta$ are perturbations, and hence we can write the solution to Eq.
(49) as
$\displaystyle y=y_{0}+\epsilon y_{0,1}+\gamma\zeta y_{1,0}+\ldots\,,$ (51)
where $y_{0}$ is the value of $y$ obtained from pure GR (setting $\vartheta=0$
in Eq. (49)), and $\\{\epsilon,\gamma,\zeta\\}$ are given in Eq. (48).
Next, requiring that the perturbations vanish at some initial conformal time
$\eta_{i}$, we obtain (cf. Eq. 2.23 in Alexander and Yunes (2009))
$\displaystyle y_{0,1}(\eta)$ $\displaystyle=\lambda\mathcal{Y}[E](\eta)\,,$
(52) $\displaystyle y_{1,0}(\eta)$ $\displaystyle=-2\lambda\mathcal{Y}[\Gamma
Z](\eta)\,,$ (53)
where $\\{E,\Gamma,Z\\}$ are functions of $\vartheta$ given in Eq. (48), and
$\displaystyle\mathcal{Y}[g](\eta)\equiv\kappa
e^{2i\phi_{0}(\eta)}\int_{\eta_{i}}^{\eta}dxe^{-2i\phi_{0}(x)}y_{0}(x)g(x)\,,$
(54)
for some function $g(\eta)$, where $\phi_{0}(\eta)$ is the gravitational wave
phase from pure GR (obtained from solving Eq. (47) with $\vartheta=0$).
The CS correction to the accumulated phase as the wave propagates from
$\eta_{i}$ to $\eta$ (cf. Eq. 2.24 in Alexander and Yunes (2009)) is thus
$\displaystyle\Delta\phi(\eta_{i},\eta)=\kappa\lambda\int_{\eta_{i}}^{\eta}d\eta\\{\epsilon\mathcal{Y}[E](\eta)-2\gamma\zeta\mathcal{Y}[\Gamma
Z](\eta)\\}\,.$ (55)
To summarize, our goal is to integrate Eq. (55) to obtain the CS modification
to the gravitational wave phase, which will allow us to compute the ratio
between right and left polarized strain modes for a given $\vartheta$, as
expressed in Eq. (44).
If we assume that $\gamma\ll 1$ (which is justified for the LIGO frequency
range) then the function $\mathcal{Y}[g]$ (for some function $g[\eta]$) has
the asymptotic expansion (cf. Eq. 2.25 in Alexander _et al._ (2008))
$\displaystyle\mathcal{Y}[g](\eta)\sim\frac{ie^{2i\phi_{0}(\eta)}}{2}\left[e^{-2i\phi_{0}(\eta)}\sum_{\ell=0}^{n}\left(\frac{1}{2ik}\right)^{\ell}\left(\frac{1}{y_{0}}\frac{d}{d\eta}\right)^{\ell}g\right]^{\eta}_{\eta_{i}}\,.$
(56)
We will follow Alexander _et al._ (2008) in going to order $\ell=0$ in this
calculation, giving
$\displaystyle\mathcal{Y}[g](\eta)$
$\displaystyle\sim\frac{ie^{2i\phi_{0}(\eta)}}{2}\left(e^{-2i\phi_{0}(\eta)}g(\eta)-e^{-2i\phi_{0}(\eta_{i})}g(\eta_{i})\right)$
$\displaystyle=\frac{i}{2}g(\eta)-\frac{i}{2}e^{2i(\phi_{0}(\eta)-\phi_{0}(\eta_{i}))}g(\eta_{i})\,.$
(57)
Now our calculation diverges from that in Alexander and Yunes (2009), as we
work in a dark-energy dominated (rather than matter-dominated) universe, with
scale factor
$\displaystyle a(t)=a_{0}e^{H_{0}t}\,.$ (58)
Working in units of conformal time, we obtain
$\displaystyle\eta(t)$ $\displaystyle=-\frac{1}{a_{0}H_{0}}e^{-H_{0}t}\,,$
(59) $\displaystyle t(\eta)$
$\displaystyle=\frac{\log\left(-\frac{1}{a_{0}H_{0}\eta}\right)}{H_{0}}\,,$
(60)
which gives
$\displaystyle a(\eta)$ $\displaystyle=-\frac{1}{H_{0}\eta}\,.$ (61)
With the convention of $\eta=1$ corresponding to present day, we obtain
$\displaystyle a_{0}$
$\displaystyle=-\frac{1}{H_{0}}\,,\;\;\;\;a^{\prime}(\eta)=\frac{1}{H_{0}\eta^{2}}\,,$
(62) $\displaystyle\mathcal{H}$
$\displaystyle\equiv\frac{a^{\prime}(\eta)}{a(\eta)}=\frac{1}{\eta}\,,\;\;\;\;\mathcal{H}_{0}=1\,,$
(63) $\displaystyle\mathcal{H}^{\prime}$
$\displaystyle=-\frac{1}{\eta^{2}}\,,\;\;\;\;\mathcal{H}^{\prime}_{0}=-1\,.$
(64)
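As a consistency check, the proper-time Hubble parameter obtained from this scale factor is indeed constant: using $\dot{a}=a^{\prime}/a$, we have $H=\dot{a}/a=a^{\prime}/a^{2}=\left[1/(H_{0}\eta^{2})\right]\big/\left[1/(H_{0}^{2}\eta^{2})\right]=H_{0}$, as required for a dark-energy dominated universe.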
Now, we can compute all of the quantities in Eq. (48) for a dark-energy
dominated universe as
$\displaystyle\;\;y\equiv\frac{\phi^{\prime}}{\kappa}\;\;\;\;\gamma\equiv\frac{1}{\kappa}\;\;\;\;\Gamma\equiv\frac{1}{\eta}$
(65)
$\displaystyle\;\;\delta\equiv\frac{-1}{\kappa^{2}}\;\;\;\;\Delta\equiv\frac{1}{\eta^{2}}\;\;\;\;\epsilon=H_{0}^{2}\vartheta_{0}^{\prime\prime}$
$\displaystyle\;\;\zeta\equiv\kappa\vartheta_{0}^{\prime}H_{0}^{2}\;\;\;\;E\equiv\eta^{2}\frac{\vartheta^{\prime\prime}}{\vartheta_{0}^{\prime\prime}}\;\;\;\;Z\equiv\eta^{2}\frac{\vartheta^{\prime}}{\vartheta_{0}^{\prime}}\,.$
Our aim is thus to evaluate Eq. (55) to obtain the CS correction to the phase,
$\Delta\phi$. Now, we must first obtain $y_{0}$, the value of
$\phi_{0}^{\prime}/\kappa$ without a perturbation. Thus, we solve Eq. (49) with
zero RHS to give
$\displaystyle\frac{y_{0}^{\prime}}{\kappa}+i(1-\gamma^{2}\Gamma^{2}-\delta\Delta-y_{0}^{2})=0$
$\displaystyle\frac{y_{0}^{\prime}}{\kappa}+i(1-\frac{1}{\kappa^{2}\eta^{2}}-\frac{-1}{\kappa^{2}\eta^{2}}-y_{0}^{2})=0$
$\displaystyle\frac{y_{0}^{\prime}}{\kappa}+i(1-y_{0}^{2})=0\,,$ (66)
which gives solutions of the form
$\displaystyle y_{0}=-i\tan(\kappa\eta-iC_{0})\,,$ (67)
where $C_{0}$ is a constant of integration that we will leave unspecified for
now.
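As a quick numerical sanity check (a sketch with arbitrarily chosen values of $\kappa$ and $C_{0}$, not part of the analysis), one can verify that this solution satisfies Eq. (66):

```python
import cmath

kappa, C0 = 0.7, 0.3   # arbitrary illustrative values

def y0(eta):
    # Solution of Eq. (66): y0 = -i tan(kappa*eta - i*C0), cf. Eq. (67)
    return -1j * cmath.tan(kappa * eta - 1j * C0)

def residual(eta):
    # Left-hand side of Eq. (66), using the analytic derivative
    # d/d(eta)[-i tan(u)] = -i*kappa/cos(u)^2 with u = kappa*eta - i*C0.
    u = kappa * eta - 1j * C0
    dy0 = -1j * kappa / cmath.cos(u) ** 2
    return dy0 / kappa + 1j * (1 - y0(eta) ** 2)

# The residual vanishes (to floating-point precision) for any eta.
vals = [abs(residual(eta)) for eta in (-2.0, -0.5, 0.1, 1.0)]
```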
Integrating
$\displaystyle\phi_{0}^{\prime}=\kappa y_{0}\,,$ (68)
we obtain
$\displaystyle\phi_{0}(\eta)=C_{1}+i\log(\cosh(C_{0}-i\kappa\eta))\,,$ (69)
where we can freely set $C_{1}=0$, since we are interested in the difference
between two values of $\phi_{0}$.
Now, let us find $\Delta\phi$, the CS phase accumulated by the perturbations
using Eq. (55). Using the form of $\mathcal{Y}[g]$ from Eq. (57), and the
solution in Eq. (69), we compute
$\displaystyle\phi_{0}(\eta)-\phi_{0}(\eta_{i})=$ (70)
$\displaystyle\quad-i\log(\cosh(C_{0}-i\kappa\eta))+i\log(\cosh(C_{0}-i\kappa\eta_{i}))$
which gives
$\displaystyle
e^{2i(\phi_{0}(\eta)-\phi_{0}(\eta_{i}))}=\frac{\cosh(C_{0}-i\kappa\eta)^{2}}{\cosh(C_{0}-i\kappa\eta_{i})^{2}}$
(71)
Thus, we have
$\displaystyle\epsilon\mathcal{Y}[E](\eta)$
$\displaystyle=\frac{i}{2}H_{0}^{2}\vartheta_{0}^{\prime\prime}\left(\eta^{2}\frac{\vartheta^{\prime\prime}}{\vartheta_{0}^{\prime\prime}}-\eta_{i}^{2}\frac{\vartheta_{i}^{\prime\prime}}{\vartheta_{0}^{\prime\prime}}\frac{\cosh(C_{0}-i\kappa\eta)^{2}}{\cosh(C_{0}-i\kappa\eta_{i})^{2}}\right)$
$\displaystyle=\frac{i}{2}H_{0}^{2}\left(\eta^{2}\vartheta^{\prime\prime}-\eta_{i}^{2}\vartheta_{i}^{\prime\prime}\frac{\cosh(C_{0}-i\kappa\eta)^{2}}{\cosh(C_{0}-i\kappa\eta_{i})^{2}}\right)$
(72)
and similarly
$\displaystyle\gamma\zeta\mathcal{Y}[\Gamma Z](\eta)$
$\displaystyle=\frac{i}{2}\vartheta_{0}^{\prime}H_{0}^{2}\left(\eta\frac{\vartheta^{\prime}}{\vartheta_{0}^{\prime}}-\eta_{i}\frac{\vartheta_{i}^{\prime}}{\vartheta_{0}^{\prime}}\frac{\cosh(C_{0}-i\kappa\eta)^{2}}{\cosh(C_{0}-i\kappa\eta_{i})^{2}}\right)$
$\displaystyle=\frac{i}{2}H_{0}^{2}\left(\eta\vartheta^{\prime}-\eta_{i}\vartheta_{i}^{\prime}\frac{\cosh(C_{0}-i\kappa\eta)^{2}}{\cosh(C_{0}-i\kappa\eta_{i})^{2}}\right)\,.$
(73)
Let us follow the logic below Eq. 3.4 of Alexander and Yunes (2009) to drop
the oscillatory pieces, thus obtaining the overall integral from Eq. (55) of
$\displaystyle\Delta\phi_{\mathrm{R},\mathrm{L}}\sim
i\frac{\kappa}{2}\lambda_{\mathrm{R},\mathrm{L}}H_{0}^{2}\int_{\eta}^{1}\left(\eta^{2}\vartheta^{\prime\prime}(\eta)-2\eta\vartheta^{\prime}(\eta)\right)d\eta\,,$
(74)
where we have reintroduced the ${\mathrm{R},\mathrm{L}}$ notation.
Following Eq. (46), we have
$\displaystyle\frac{2k}{H_{0}}\zeta=-i(\Delta\phi_{\mathrm{R}}-\Delta\phi_{\mathrm{L}})$
(75)
and thus, using $\lambda_{\mathrm{R}}-\lambda_{\mathrm{L}}=2$, we obtain
$\displaystyle\frac{2k}{H_{0}}\zeta=\kappa
H_{0}^{2}\int_{\eta}^{1}\left(\eta^{2}\vartheta^{\prime\prime}(\eta)-2\eta\vartheta^{\prime}(\eta)\right)d\eta\,.$
(76)
Writing $\kappa=k_{0}/H_{0}$ (cf. Eq. 3.8 in Alexander _et al._ (2008)), we
obtain,
$\displaystyle\zeta=\frac{H_{0}^{2}}{2}\int_{\eta}^{1}\left(\eta^{2}\vartheta^{\prime\prime}(\eta)-2\eta\vartheta^{\prime}(\eta)\right)d\eta\,.$
(77)
Eq. (77) precisely gives us $\zeta$ for a dark-energy dominated universe. Let
us double-check the units. In Eq. (46), $\zeta$ must be dimensionless. In this
study, $[\vartheta]=L^{2}$ and $[H_{0}]=L^{-1}$, so indeed $[\zeta]=L^{0}$.
## References
* Aasi _et al._ (2015) J. Aasi _et al._ (LIGO Scientific), Class. Quant. Grav. 32, 074001 (2015), arXiv:1411.4547 [gr-qc] .
* Acernese _et al._ (2015) F. Acernese _et al._ (VIRGO), Class. Quant. Grav. 32, 024001 (2015), arXiv:1408.3978 [gr-qc] .
* Abbott _et al._ (2019a) B. P. Abbott _et al._ (LIGO Scientific, Virgo), Phys. Rev. X9, 031040 (2019a), arXiv:1811.12907 [astro-ph.HE] .
* Abbott _et al._ (2019b) B. P. Abbott _et al._ (LIGO Scientific, Virgo), (2019b), arXiv:1903.04467 [gr-qc] .
* Isi _et al._ (2019a) M. Isi, M. Giesler, W. M. Farr, M. A. Scheel, and S. A. Teukolsky, Phys. Rev. Lett. 123, 111102 (2019a), arXiv:1905.00869 [gr-qc] .
* Nair _et al._ (2019) R. Nair, S. Perkins, H. O. Silva, and N. Yunes, Phys. Rev. Lett. 123, 191101 (2019), arXiv:1905.00870 [gr-qc] .
* Isi _et al._ (2019b) M. Isi, K. Chatziioannou, and W. M. Farr, Phys. Rev. Lett. 123, 121101 (2019b), arXiv:1904.08011 [gr-qc] .
* Abbott _et al._ (2020a) R. Abbott _et al._ (LIGO Scientific, Virgo), (2020a), arXiv:2010.14527 [gr-qc] .
* Abbott _et al._ (2020b) R. Abbott _et al._ (LIGO Scientific, Virgo), (2020b), arXiv:2010.14529 [gr-qc] .
* Abbott _et al._ (2019c) R. Abbott _et al._ (LIGO Scientific, Virgo), (2019c), arXiv:1912.11716 [gr-qc] .
* LIGO Scientific Collaboration and Virgo Collaboration (2019) LIGO Scientific Collaboration and Virgo Collaboration, “Parameter estimation sample release for GWTC-1,” https://dcc.ligo.org/LIGO-P1800370/public (2019).
* Zhao _et al._ (2020a) W. Zhao, T. Zhu, J. Qiao, and A. Wang, Phys. Rev. D 101, 024002 (2020a), arXiv:1909.10887 [gr-qc] .
* Alexander and Yunes (2009) S. Alexander and N. Yunes, Phys. Rept. 480, 1 (2009), arXiv:0907.2562 [hep-th] .
* Crisostomi _et al._ (2018) M. Crisostomi, K. Noui, C. Charmousis, and D. Langlois, Phys. Rev. D97, 044034 (2018), arXiv:1710.04531 [hep-th] .
* Conroy and Koivisto (2019) A. Conroy and T. Koivisto, JCAP 12, 016 (2019), arXiv:1908.04313 [gr-qc] .
* Horava (2009) P. Horava, Phys. Rev. D79, 084008 (2009), arXiv:0901.3775 [hep-th] .
* Green and Schwarz (1984) M. B. Green and J. H. Schwarz, Phys. Lett. 149B, 117 (1984).
* Taveras and Yunes (2008) V. Taveras and N. Yunes, Phys. Rev. D78, 064070 (2008), arXiv:0807.2652 [gr-qc] .
* Mercuri and Taveras (2009) S. Mercuri and V. Taveras, Phys. Rev. D80, 104007 (2009), arXiv:0903.4407 [gr-qc] .
* Weinberg (2008) S. Weinberg, Phys. Rev. D77, 123541 (2008), arXiv:0804.4291 [hep-th] .
* Nojiri _et al._ (2019) S. Nojiri, S. D. Odintsov, V. K. Oikonomou, and A. A. Popov, Phys. Rev. D100, 084009 (2019), arXiv:1909.01324 [gr-qc] .
* Zhao _et al._ (2020b) W. Zhao, T. Liu, L. Wen, T. Zhu, A. Wang, Q. Hu, and C. Zhou, Eur. Phys. J. C 80, 630 (2020b), arXiv:1909.13007 [gr-qc] .
* Alexander _et al._ (2008) S. Alexander, L. S. Finn, and N. Yunes, Phys. Rev. D78, 066005 (2008), arXiv:0712.2542 [gr-qc] .
* Yunes _et al._ (2010) N. Yunes, R. O’Shaughnessy, B. J. Owen, and S. Alexander, Phys. Rev. D82, 064017 (2010), arXiv:1005.3310 [gr-qc] .
* Yunes and Finn (2009) N. Yunes and L. S. Finn, _Laser Interferometer Space Antenna. Proceedings, 7th international LISA Symposium, Barcelona, Spain, June 16-20, 2008_ , J. Phys. Conf. Ser. 154, 012041 (2009), arXiv:0811.0181 [gr-qc] .
* Yagi and Yang (2018) K. Yagi and H. Yang, Phys. Rev. D97, 104018 (2018), arXiv:1712.00682 [gr-qc] .
* Abbott _et al._ (2020c) R. Abbott _et al._ (LIGO Scientific, Virgo), (2020c), arXiv:2010.14533 [astro-ph.HE] .
* Mandel _et al._ (2019) I. Mandel, W. M. Farr, and J. R. Gair, Mon. Not. Roy. Astron. Soc. 486, 1086 (2019), arXiv:1809.02063 [physics.data-an] .
* Fishbach _et al._ (2018) M. Fishbach, D. E. Holz, and W. M. Farr, Astrophys. J. 863, L41 (2018), [Astrophys. J. Lett.863,L41(2018)], arXiv:1805.10270 [astro-ph.HE] .
* Pardo _et al._ (2018) K. Pardo, M. Fishbach, D. E. Holz, and D. N. Spergel, JCAP 1807, 048 (2018), arXiv:1801.08160 [gr-qc] .
* Abbott _et al._ (2019d) B. P. Abbott _et al._ (LIGO Scientific, Virgo), Astrophys. J. 882, L24 (2019d), arXiv:1811.12940 [astro-ph.HE] .
* Dai _et al._ (2020) L. Dai, B. Zackay, T. Venumadhav, J. Roulet, and M. Zaldarriaga, (2020), arXiv:2007.12709 [astro-ph.HE] .
* Smith _et al._ (2018) G. P. Smith, M. Jauzac, J. Veitch, W. M. Farr, R. Massey, and J. Richard, Mon. Not. Roy. Astron. Soc. 475, 3823 (2018), arXiv:1707.03412 [astro-ph.HE] .
* Kullback and Leibler (1951) S. Kullback and R. A. Leibler, Ann. Math. Statist. 22, 79 (1951).
* Jackiw and Pi (2003) R. Jackiw and S. Y. Pi, Phys. Rev. D68, 104012 (2003), arXiv:gr-qc/0308071 [gr-qc] .
* Abbott _et al._ (2016) B. P. Abbott _et al._ (LIGO Scientific, Virgo), Phys. Rev. Lett. 116, 131103 (2016), arXiv:1602.03838 [gr-qc] .
* Yunes and Spergel (2009) N. Yunes and D. N. Spergel, Phys. Rev. D80, 042004 (2009), arXiv:0810.5541 [gr-qc] .
* Abbott _et al._ (2018) B. Abbott _et al._ (KAGRA, LIGO Scientific, VIRGO), Living Rev. Rel. 21, 3 (2018), arXiv:1304.0670 [gr-qc] .
* Smith _et al._ (2008) T. L. Smith, A. L. Erickcek, R. R. Caldwell, and M. Kamionkowski, Phys. Rev. D77, 024015 (2008), arXiv:0708.0001 [astro-ph] .
* Hu _et al._ (2020) Q. Hu, M. Li, R. Niu, and W. Zhao, (2020), arXiv:2006.05670 [gr-qc] .
* Ali-Haimoud (2011) Y. Ali-Haimoud, Phys. Rev. D83, 124050 (2011), arXiv:1105.0009 [astro-ph.HE] .
* Wang _et al._ (2020) Y.-F. Wang, R. Niu, T. Zhu, and W. Zhao, (2020), arXiv:2002.05668 [gr-qc] .
* Yamada and Tanaka (2020) K. Yamada and T. Tanaka, PTEP 2020, 093E01 (2020), arXiv:2006.11086 [gr-qc] .
* Madau and Dickinson (2014) P. Madau and M. Dickinson, Annual Review of Astronomy and Astrophysics 52, 415 (2014), https://doi.org/10.1146/annurev-astro-081811-125615 .
S. Mishra, Indian Institute of Technology Kanpur, India
S. Prasad, University of Illinois at Urbana-Champaign, USA
S. Mishra∗ [Corresponding Author], University of Illinois at Urbana-Champaign, USA
# Exploring multi-task multi-lingual learning of transformer models for hate
speech and offensive speech identification in social media
Sudhanshu Mishra Shivangi Prasad Shubhanshu Mishra∗
(Received: date / Accepted: date)
###### Abstract
Hate Speech has become a major content moderation issue for online social
media platforms. Given the volume and velocity of online content production,
it is impossible to manually moderate hate speech related content on any
platform. In this paper we utilize a multi-task and multi-lingual approach
based on recently proposed Transformer Neural Networks to solve three sub-
tasks for hate speech. These sub-tasks were part of the 2019 shared task on
hate speech and offensive content (HASOC) identification in Indo-European
languages. We expand on our submission to that competition by utilizing multi-
task models which are trained using three approaches, a) multi-task learning
with separate task heads, b) back-translation, and c) multi-lingual training.
Finally, we investigate the performance of various models and identify
instances where the Transformer based models perform differently and better.
We show that it is possible to utilize different combined approaches to
obtain models that can generalize easily on different languages and tasks,
while trading off slight accuracy (in some cases) for a much reduced inference
time compute cost. We open source an updated version of our HASOC 2019 code
with the new improvements at https://github.com/socialmediaie/MTML_HateSpeech.
###### Keywords:
Hate Speech Offensive content Transformer Models BERT Language Models Neural
Networks Multi-lingual Multi-Task Learning Social Media Natural Language
Processing Machine Learning Deep Learning Open Source
## 1 Introduction
With increased access to the internet, the number of people that are connected
through social media is higher than ever (Perrin,, 2015). Thus, social media
platforms are often held responsible for framing the views and opinions of a
large number of people (Duggan et al.,, 2017). However, this freedom to voice
our opinion has been challenged by the increase in the use of hate speech
(Mondal et al.,, 2017). The anonymity of the internet grants people the power
to completely change the context of a discussion and suppress a person’s
personal opinion (Sticca and Perren,, 2013). These hateful posts and comments
not only affect the society at a micro scale but also at a global level by
influencing people’s views regarding important global events like elections,
and protests (Duggan et al.,, 2017). Given the volume of online communication
happening on various social media platforms and the need for more fruitful
communication, there is a growing need to automate the detection of hate
speech. For the scope of this paper we adopt the definition of hate speech and
offensive speech as defined in Mandl et al., (2019) as “insulting,
hurtful, derogatory, or obscene content directed from one person to another
person” (quoted from (Mandl et al.,, 2019)).
In order to automate hate speech detection the Natural Language Processing
(NLP) community has made significant progress, which has been accelerated by
the organization of numerous shared tasks aimed at identifying hate speech
(Mandl et al.,, 2019; Kumar et al.,, 2020, 2018). Furthermore, there has been a
proliferation of new methods for automated hate speech detection in social
media text (Salminen et al.,, 2018; Mishra et al., 2020b, ; Mishra and
Mishra,, 2019; Mishra, 2020a, ; Waseem et al.,, 2017; Struß et al.,, 2019;
Mandl et al.,, 2019; Mondal et al.,, 2017). However, working with social media
text is difficult (Eisenstein,, 2013; Mishra and Diesner,, 2016; Mishra et
al.,, 2014; Mishra and Diesner,, 2019; Mishra,, 2019; Mishra, 2020b, ; Mishra,
2020a, ), as people use combinations of different languages, spellings and
words that one may never find in any dictionary. A common pattern across many
hate speech identification tasks Mandl et al., (2019); Kumar et al., (2020);
Waseem et al., (2017); Zampieri et al., (2019); Basile et al., (2019); Struß
et al., (2019) is the identification of various aspects of hate speech, e.g.,
in HASOC 2019 (Mandl et al.,, 2019), the organizers divided the task into
three sub-tasks, which focused on identifying the presence of hate speech;
classification of hate speech into offensive, profane, and hateful; and
identifying if the hate speech is targeted towards an entity.
Many researchers have tried to address these types of tiered hate speech
classification tasks using separate models, one for each sub-task (see review
of recent shared tasks Zampieri et al., (2019); Kumar et al., (2018, 2020);
Mandl et al., (2019); Struß et al., (2019)). However, we consider this
approach limited for systems which consume large amounts of data and are
computationally constrained in flagging hate speech efficiently, because it
requires running several models, one for each language and sub-task.
In this work, we propose a unified modeling framework which identifies the
relationship between all tasks across multiple languages. Our aim is to be
able to perform as well as, if not better than, the best model for each
task-language combination. Our approach is inspired by the promising results of
multi-task learning on some of our recent works (Mishra,, 2019; Mishra, 2020b,
; Mishra, 2020a, ). Additionally, while building a unified model which can
perform well on all tasks is challenging, an important benefit of these models
is their computational efficiency, achieved by reduced compute and maintenance
costs, which can allow the system to trade-off slight accuracy for efficiency.
In this paper, we propose the development of such a universal modelling
framework, which can leverage recent advancements in machine learning to
achieve competitive, and in a few cases state-of-the-art, performance on a
variety of hate speech identification sub-tasks across multiple languages. Our
framework encompasses a variety of modelling architectures which can either
train on all tasks, all languages, or a combination of both. We extend our
prior work in Mishra and Mishra, (2019); Mishra et al., 2020b ; Mishra, 2020b
; Mishra, 2020a ; Mishra, (2019) to develop efficient models for hate speech
identification and benchmark them against the HASOC 2019 corpus, which
consists of social media posts in three languages, namely, English, Hindi, and
German. We open source our implementation to allow its usage by the wider
research community. Our main contributions are as follows:
1.
Investigate more efficient modeling architectures which use a) multi-task
learning with separate task heads, b) back-translation, and c) multi-lingual
training. These architectures can generalize easily on different languages and
tasks, while trading off slight accuracy (in some cases) for a much reduced
inference time compute cost.
2.
Investigate the performance of various models and identify instances where our
new models differ in their performance.
3. 3.
Open source pre-trained models and model outputs at Mishra et al., 2020a
along with the updated code for using these models at:
https://github.com/socialmediaie/MTML_HateSpeech
## 2 Related Work
Prior work (see Schmidt and Wiegand (2017) for a detailed review of prior
methods) in the area of hate speech identification focuses on different
aspects of the problem: analyzing what constitutes hate speech; the
multi-modality and other issues encountered when dealing with social media
data; and, finally, the model architectures and NLP developments currently
used to identify hate speech. There is also prior literature focusing on the
different aspects of hateful speech and tackling the subjectivity that it
imposes. There are many shared tasks (Mandl et al., 2019; Kumar et al.,
2018, 2020; Struß et al., 2019; Basile et al., 2019) that tackle hate speech
detection by classifying it into different categories. Each shared task
focuses on a different aspect of hate speech.
Waseem et al. (2017) proposed a typology of abusive language, classifying it
into generalized, explicit, and implicit abuse. Basile et al. (2019) focused
on hateful and aggressive posts targeted towards women and immigrants. Mandl
et al. (2019) focused on identifying targeted and un-targeted insults and on
classifying hate speech into hateful, offensive, and profane content. Kumar
et al. (2018, 2020) tackled the identification of aggressive and
misogynistic content in trolling and cyberbullying posts. Vidgen et al.
(2019) identify that most of these shared tasks broadly fall into three
classes: individual-directed abuse, identity-directed abuse, and
concept-directed abuse. They also put into context the various challenges
encountered in abusive content detection.
Unlike other domains of information retrieval, this field lacks large
data-sets. Moreover, the data-sets available are highly skewed and focus on
a particular type of hate speech. For example, Davidson et al. (2017) model
the problem as a generic abusive content identification challenge; however,
the posts mostly relate to racism and sexism. Furthermore, in the real
world, hateful posts do not fall into a single type of hate speech. There is
substantial overlap between different hateful classes, making hate speech
identification a multi-label problem.
A wide variety of system architectures, ranging from classical machine
learning to recent deep learning models, have been tried for various aspects
of hate speech identification. Facebook, YouTube, and Twitter are the major
sources of data for most data-sets. Burnap and Williams (2015) used SVM and
ensemble techniques to identify hate speech in Twitter data. Razavi et al.
(2010) proposed an approach for abuse detection using a dictionary of
insulting and abusive words and phrases. Van Hee et al. (2015) used
bag-of-words n-gram features and trained an SVM model on a cyberbullying
dataset. Salminen et al. (2018) achieved an F1-score of 0.79 on
classification of hateful YouTube and Facebook posts using a linear SVM
model employing TF-IDF weighted n-grams.
Recently, models based on deep learning techniques have also been applied to
the task of hate speech identification. These models often rely on
distributed representations or embeddings, e.g., FastText embeddings (Joulin
et al., 2017) and paragraph2vec distributed representations (Le and Mikolov,
2014). Badjatiya et al. (2017) employed an LSTM architecture to tune GloVe
word embeddings on the DATA-TWITTER-TWH data-set. Risch and Krestel (2018)
used a neural network architecture with a GRU layer and ensemble methods for
the TRAC 2018 (Kumar et al., 2018) shared task on aggression identification.
They also tried back-translation as a data augmentation technique to
increase the data-set size. Wang (2018) illustrated the use of sequentially
combining CNNs with RNNs for abuse detection, showing that this approach was
better than using only the CNN architecture, with a 1% improvement in the
F1-score. One of the most recent developments in NLP is the transformer
architecture introduced by Vaswani et al. (2017). Utilizing the transformer
architecture, Devlin et al. (2019) provide methods to pre-train models for
language understanding (BERT) that have achieved state-of-the-art results on
many NLP tasks and are promising for hate speech detection as well.
BERT-based models achieved competitive performance in the HASOC 2019 shared
tasks. We (Mishra and Mishra, 2019) fine-tuned BERT base models for the
various HASOC shared tasks, producing the top performing model in some of
the sub-tasks. A similar approach was also used by us for the TRAC 2020
shared tasks on aggression identification (Mishra et al., 2020b), achieving
competitive performance with the other models without using ensemble
techniques. An interesting approach was the use of multi-lingual models
jointly trained on different languages, which yields a unified model for
different languages in abusive content detection. An ensemble technique
using BERT models (Risch and Krestel, 2020) was the top performing model in
many of the shared tasks in TRAC 2020.
Recently, multi-task learning has been used to improve performance on NLP
tasks (Liu et al., 2016; Søgaard and Goldberg, 2016), especially social
media information extraction tasks (Mishra, 2019), and simpler variants have
been tried for hate speech identification in our recent works (Mishra and
Mishra, 2019; Mishra et al., 2020b). Florio et al. (2020) investigated the
use of AlBERTo for monitoring hate speech in Italian on Twitter. Their
results show that even though AlBERTo is sensitive to the fine-tuning set,
its performance increases given enough training time. Mozafari et al. (2020)
employ a transfer learning approach using BERT for hate speech detection.
Ranasinghe and Zampieri (2020) use cross-lingual embeddings to identify
offensive content in a multilingual setting. Our multi-lingual approach is
similar in spirit to the method proposed in Plank (2017), which uses the
same model architecture and aligned word embeddings across languages. There
has also been some work on developing solutions for multilingual toxic
comments, which can be related to hate speech (see the Jigsaw multilingual
toxic comment classification challenge:
https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification).
Recently, Mishra (2020c) also used a single model across various tasks which
performed very well on event detection tasks for five Indian languages.
There have been numerous competitions dealing with hate speech evaluation.
OffensEval (Zampieri et al., 2019) is one of the popular shared tasks
dealing with offensive language in social media, featuring three sub-tasks
for discriminating between offensive and non-offensive posts. Another
popular SemEval shared task is HateEval (Basile et al., 2019), on the
detection of hate against women and immigrants. The 2019 version of HateEval
consists of two sub-tasks for the determination of hateful and aggressive
posts. GermEval (Struß et al., 2019) is another shared task quite similar to
HASOC. It focused on the identification of offensive language in German
tweets and features two sub-tasks following a binary and a multi-class
classification of the German tweets.
An important aspect of hate speech is that it is often multi-modal in
nature. A large portion of the hateful content shared on social media is in
the form of memes, which feature multiple modalities such as text, images,
and in some cases audio and video. Yang et al. (2019) present different
fusion approaches to tackle multi-modal information for hate speech
detection. Gomez et al. (2020) explore multi-modal hate speech consisting of
text and image modalities. They propose various multi-modal architectures to
jointly analyze both the textual and visual information. Facebook recently
released the hateful memes data-set for the Hateful Memes challenge (Kiela
et al., 2020) to provide a complex data-set on which it is difficult for
uni-modal models to achieve good performance.
## 3 Methods
For this paper, we extend some of the techniques that we used for TRAC 2020
in Mishra et al. (2020b), as well as in Mishra (2019, 2020a, 2020b), and
apply them to the HASOC data-set (Mandl et al., 2019). Furthermore, we
extend our work in the HASOC 2019 shared task (Mishra and Mishra, 2019) by
experimenting with multi-lingual training, back-translation-based data
augmentation, and multi-task learning to tackle the data sparsity issue of
the HASOC 2019 data-set.
Figure 1: Task Description
### 3.1 Task Definition and Data
All of the experiments reported hereafter were done on the HASOC 2019
data-set (Mandl et al., 2019), consisting of posts in English (EN), Hindi
(HI) and German (DE). The HASOC 2019 shared task had three sub-tasks (A, B,
C) for both English and Hindi and two sub-tasks (A, B) for German. The
description of each sub-task is as follows (see Figure 1 for details):
* Sub-Task A: Posts have to be classified into hate speech (HOF) and
non-offensive content (NOT).
* Sub-Task B: A fine-grained classification of the hateful posts from
sub-task A. Hate speech posts have to be classified by the type of hate they
represent, i.e., containing hate speech content (HATE), containing offensive
content (OFFN), or containing profane words (PRFN).
* Sub-Task C: Another fine-grained classification of the hateful posts from
sub-task A. This sub-task required us to identify whether the hate speech
was targeted towards an individual or group (TIN) or whether it was
un-targeted (UNT).
Table 1: Distribution of number of tweets in different data-sets and splits.

| task | DE train | DE dev | DE test | EN train | EN dev | EN test | HI train | HI dev | HI test |
|---|---|---|---|---|---|---|---|---|---|
| A | 3,819 | 794 | 850 | 5,852 | 505 | 1,153 | 4,665 | 136 | 1,318 |
| B | 407 | 794 | 850 | 2,261 | 302 | 1,153 | 2,469 | 136 | 1,318 |
| C | – | – | – | 2,261 | 299 | 1,153 | 2,469 | 72 | 1,318 |
The HASOC 2019 data-set consists of posts taken from Twitter and Facebook.
The data-set only contains text and labels and does not include any
contextual information or meta-data of the original post, e.g., time
information. The data distribution for each language and sub-task is shown
in Table 1. We can observe that the sample size for each language is on the
order of a few thousand posts, which is an order of magnitude smaller than
other datasets such as OffensEval (13,200 posts), HateEval (19,000 posts),
and the Kaggle Toxic Comments dataset (240,000 posts). This can pose a
challenge for training deep learning models, which often consist of a large
number of parameters, from scratch. Class-wise data distribution for each
language is available in appendix .1, figures 4, 5, and 6. These figures
show that the label distribution is highly skewed for task C; for example,
the label UNT is quite underrepresented. Similarly, for German the task A
data is quite unbalanced. For more details on the dataset, along with
details on its creation and motivation, we refer the reader to Mandl et al.
(2019), who report that the inter-annotator agreement is in the range of 60%
to 70% for English and Hindi, and more than 86% for German.
Figure 2: An overview of various model architectures we used. Shaded task
boxes represent that we first compute a marginal representation of labels only
belonging to that task before computing the loss.
### 3.2 Fine-tuning transformer based models
Transformer-based models, especially BERT (Devlin et al., 2019), have proven
successful at achieving very good results on a range of NLP tasks. Upon its
release, BERT became state of the art for 11 NLP tasks (Devlin et al.,
2019). This motivated us to try BERT for hate speech detection. We had used
multiple variants of the BERT model during the HASOC 2019 shared tasks
(Mishra and Mishra, 2019), and experimented with other transformer models
alongside BERT during TRAC 2020 (Mishra et al., 2020b). However, based on
our experiments, we find the original BERT models to be the best performing
for most tasks; hence, for this paper we only implement our models on those.
For our experiments we use the open source implementation of BERT provided
by Wolf et al. (2019) (https://github.com/huggingface/transformers).
A common practice for using BERT-based models is to fine-tune an existing
pre-trained model on data from a new task. For fine-tuning the pre-trained
BERT models, we used the BERT for Sequence Classification paradigm present
in the HuggingFace library. We fine-tune BERT using various architectures; a
visual description of these architectures is shown in Figure 2. These models
are explained in detail in later sections.
To process the text, we first use a pre-trained BERT tokenizer to convert
the input sentences into tokens. These tokens are then passed to the model,
which generates a BERT-specific embedding for each token. Because BERT is an
encoder-only transformer whose self-attention layers attend over all
positions in both directions, it captures contextual information well even
for longer sequences. Each sequence of tokens is wrapped with a [CLS] token
at the start and a [SEP] token at the end. The pre-trained BERT model
generates an output vector for each of the tokens. For sequence
classification tasks, the vector corresponding to the [CLS] token is used,
as it holds contextual information about the complete sentence. An
additional fine-tuned classification layer on top of this vector generates
the predictions for specific data-sets.
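The pre-processing step above can be sketched as follows. This is an illustrative pure-Python mock, not the actual pipeline: in practice the HuggingFace `BertTokenizer` performs tokenization and id conversion; here we operate on plain string tokens to show how a sequence is wrapped with [CLS]/[SEP] and padded or truncated to the maximum length of 128.

```python
def prepare_input(tokens, max_len=128, pad_token="[PAD]"):
    """Wrap a token sequence with [CLS]/[SEP] and pad/truncate to max_len.

    Illustrative only: mirrors what the HuggingFace tokenizer does during
    encoding, but on plain string tokens instead of vocabulary ids.
    """
    # Reserve two slots for the special tokens.
    tokens = tokens[: max_len - 2]
    seq = ["[CLS]"] + tokens + ["[SEP]"]
    # Attention mask: 1 for real tokens, 0 for padding.
    mask = [1] * len(seq) + [0] * (max_len - len(seq))
    seq = seq + [pad_token] * (max_len - len(seq))
    return seq, mask

seq, mask = prepare_input(["a", "hateful", "post"])
```

The classification head then reads only the output vector at position 0, i.e., the slot occupied by [CLS].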
To keep our experiments consistent, the following hyper-parameters were kept
constant across all experiments. For training our models we used the
standard hyper-parameters mentioned in the HuggingFace transformers
documentation: the Adam optimizer (with $\epsilon=1e-8$) for 5 epochs, with
a training/evaluation batch size of 32. The maximum allowed length for each
sequence was $128$. We use a linearly decreasing learning rate with a
starting value of $5e-5$, a weight decay of $0.0$, and a max gradient norm
of $1.0$. All models were trained on GPU runtimes from Google Colab
(https://colab.research.google.com/). This limited us to a model run-time of
12 hours with a single GPU, which constrained our batch size as well as the
number of training epochs, depending on the GPU allocated by Colab.
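The linearly decreasing learning-rate schedule described above can be sketched as a simple function of the training step. This is a minimal sketch; the actual runs used the HuggingFace scheduler, and the warmup-free linear form below is an assumption.

```python
def linear_decay_lr(step, total_steps, base_lr=5e-5):
    """Learning rate that decays linearly from base_lr to 0 over training."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# With 5 epochs over the ~5,852 English training posts at batch size 32,
# total_steps would be roughly 5 * ceil(5852 / 32) = 915.
```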
We refer to models which fine-tune BERT using the data set from a single
language for a single task as _Single models_, with the indicator _(S)_;
this is depicted in Figure 2 (1st row, left). All other model types, which
we discuss later, are identified by their model types and names in Figure 2.
### 3.3 Training a model for all tasks
One of the techniques that we used in our HASOC 2019 work (Mishra and
Mishra, 2019) was creating an additional sub-task D by combining the labels
of all of the sub-tasks. We refer to models which use this technique as
_Joint task models_, with the indicator _(D)_ (see Figure 2, models marked
with D). This allows us to train a single model for all of the sub-tasks,
and also helps overcome the data sparsity issue for sub-tasks B and C, for
which the number of data points is very small. The same technique was
employed in our submission to the TRAC 2020 aggression and misogyny
identification tasks (Mishra et al., 2020b). Furthermore, when combining
labels, we only consider valid combinations of labels, which reduces the
possible output space. For HASOC, the predicted output labels for the joint
training are as follows: NOT-NONE-NONE, HOF-HATE-TIN, HOF-HATE-UNT,
HOF-OFFN-TIN, HOF-OFFN-UNT, HOF-PRFN-TIN, HOF-PRFN-UNT. The task-specific
labels can easily be extracted from these output labels by post-processing
the predictions.
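The label combination and recovery steps above can be sketched in a few lines; this is illustrative, with the label strings following the seven valid combinations listed in the text.

```python
# The seven valid joint (sub-task D) labels for HASOC.
VALID_JOINT_LABELS = [
    "NOT-NONE-NONE",
    "HOF-HATE-TIN", "HOF-HATE-UNT",
    "HOF-OFFN-TIN", "HOF-OFFN-UNT",
    "HOF-PRFN-TIN", "HOF-PRFN-UNT",
]

def split_joint_label(joint_label):
    """Recover the per-task labels (A, B, C) from a joint sub-task D label."""
    task_a, task_b, task_c = joint_label.split("-")
    return {"A": task_a, "B": task_b, "C": task_c}
```

Because the model only ever predicts one of the seven valid joint labels, the recovered per-task labels are consistent by construction.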
### 3.4 Multi-lingual training
Inspired by the joint training of all tasks described above, we also train a
single model for all languages for a given sub-task. A similar approach was
utilized in our prior submission to TRAC 2020 (Mishra et al., 2020b). We
refer to models which use this technique as _All models_, with the indicator
_(ALL)_ (see Figure 2, models marked with ALL). In this method, we combine
the data-sets from all the languages and train a single multi-lingual model
on this combined data-set. The multi-lingual model is able to learn from
data in multiple languages, thus providing a single unified model for
different languages. A major motivation for this approach is that social
media data often does not belong to one particular language: it is quite
common to find code-mixed posts on Twitter and Facebook, so a multi-lingual
model is a natural choice in this scenario. During our TRAC 2020 work, we
found that this approach works very well and was one of our top models in
almost all of the shared tasks. From a deep learning point of view, this
technique is promising because it also increases the size of the data-set
available for training without adding new data points from other data-sets
or from data augmentation techniques.
As a natural extension of the above two approaches, we combine multi-lingual
training with joint training to train a single model on all tasks for all
languages. We refer to models which use this technique as _All joint task
models_, with the indicator _(ALL) (D)_ (see Figure 2).
### 3.5 Multi-task learning
While the joint task setting could be considered a form of multi-task
learning, it is not multi-task learning in the common sense, hence our
reservation in calling it that. Joint task training is an instance of
multi-class prediction, where the number of classes is based on the
combination of tasks; this approach does not impose any task-specific
structure on the model, nor does it compute and combine task-specific
losses. The core idea of multi-task learning is to use similar tasks as
regularizers for the model. This is done by simply adding the loss functions
specific to each task to the final loss function of the model. The model is
thus forced to optimize for all of the different tasks simultaneously,
producing a model that is able to generalize across multiple tasks on the
data-set. However, this does not always prove beneficial: it has been
reported that when the tasks differ significantly, the model fails to
optimize on any of the tasks, leading to significantly worse performance
than single-task approaches. Sub-tasks in hate speech detection, however,
are often similar or overlapping in nature, so this approach seems promising
for hate speech detection.
Our multi-task setup is inspired by the marginalized inference technique
used in Mishra et al. (2020b). In marginalized inference, we post-process
the probabilities of each label from the joint model and compute the
task-specific label probability by marginalizing over all the other tasks.
This ensures that the probabilities of the labels for each sub-task form a
valid probability distribution and sum to one. For example,
$p(\textbf{HOF-HATE-TIN})>p(\textbf{HOF-PRFN-TIN})$ does not guarantee that
$p(\textbf{HOF-HATE-UNT})>p(\textbf{HOF-PRFN-UNT})$. As described above, we
can calculate the task-specific probabilities by marginalizing the output
probabilities for that task; for example,
$p(\textbf{HATE})=p(\textbf{HOF-HATE-TIN})+p(\textbf{HOF-HATE-UNT})$.
However, using this technique did not lead to a significant improvement in
the predictions and the evaluation performance; in some cases, it was even
lower than the original method. A reason we suspect for this low performance
is that the model was not trained to directly optimize its loss for this
marginal inference. Next, we describe our multi-task setup inspired by this
approach.
For our multi-task experiments, we first use our joint training approach
(sub-task D) to generate the logits for the different class labels. These
logits are then marginalized to generate task-specific logits (marginalizing
logits is simpler than marginalizing the probability of each label, as we do
not need to compute the partition function). For each task, we take a
cross-entropy loss using the new task-specific logits. Finally, we add the
respective losses for each sub-task along with the sub-task D loss; this
summed loss is the final multi-task loss function of our model, which we
then train the model to minimize. In this loss, each sub-task loss acts as a
regularizer for the other task losses. Since we compute the multi-task loss
for each instance, we include a special label _NONE_ for sub-tasks B and C,
for the cases where the label of sub-task A is _NOT_. We refer to models
which use this technique as _Multi-task models_, with the indicator _(MTL)
(D)_ (see Figure 2).
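The marginalization of joint logits into task-specific logits can be sketched as a log-sum-exp over the joint labels that share a given task label. This is a minimal sketch of the idea rather than the training code; the joint label order follows the list in Section 3.3.

```python
import math

JOINT_LABELS = [
    "NOT-NONE-NONE",
    "HOF-HATE-TIN", "HOF-HATE-UNT",
    "HOF-OFFN-TIN", "HOF-OFFN-UNT",
    "HOF-PRFN-TIN", "HOF-PRFN-UNT",
]

def marginal_logit(joint_logits, task_index, task_label):
    """Task-specific logit via log-sum-exp over the matching joint labels.

    Summing exponentiated logits (instead of normalized probabilities)
    avoids computing the softmax partition function, as noted in the text:
    a softmax over the resulting task-specific logits yields exactly the
    marginal probabilities, e.g. p(HATE) = p(HOF-HATE-TIN) + p(HOF-HATE-UNT).
    """
    matching = [
        logit for label, logit in zip(JOINT_LABELS, joint_logits)
        if label.split("-")[task_index] == task_label
    ]
    return math.log(sum(math.exp(x) for x in matching))
```

A cross-entropy loss can then be taken over each task's marginal logits and summed with the sub-task D loss to form the multi-task objective.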
One important point to note is that we restrict the output space of the
multi-task model to the sub-task D labels. This is an essential constraint
we put on the model, and it removes any chance of inconsistency in the
prediction. By inconsistency we mean, for example, a prediction of _NOT_ for
task A together with any label other than _NONE_ for tasks B and C; with
this constraint, such a prediction is impossible for our multi-task model.
If we instead followed the general procedure for training a multi-task
model, we would have $2\times(3+1)\times(2+1)=24$ combinations of outputs
from our model (with the $+1$ for the additional _NONE_ label), which would
allow the inconsistencies mentioned above.
Like the methods mentioned before, we extend multi-task learning to all
languages, which results in _Multi-task all models_, indicated with _(MTL)
(ALL) (D)_.
### 3.6 Training with Back-Translated data
One approach to increasing the size of the training data-set is to generate
new instances from existing instances using data augmentation techniques.
These new instances are assigned the same label as the original instance;
training a model on them assumes that the label remains the same as long as
the augmentation does not change the instance significantly. We utilized a
data augmentation technique specific to NLP called back-translation (Koehn,
2005; Sennrich et al., 2016). Back-translation uses two machine translation
models: one to translate a text from its original language to a target
language, and another to translate the new text in the target language back
to the original language. This technique was successfully used in the
submissions of Risch and Krestel (2018, 2020) to TRAC 2018 and 2020. Data
augmentation via back-translation assumes that current machine translation
systems, when used in a back-translation setting, produce a different text
that expresses a meaning similar to the original. This assumption allows us
to reuse the label of the original text for the back-translated text.
We used the Google Translate API (https://cloud.google.com/translate/docs)
to back-translate all the text in our data-sets. For each language in our
data-set we use the following source $\rightarrow$ target $\rightarrow$
source pairs:
* EN: _English_ $\rightarrow$ _French_ $\rightarrow$ _English_
* HI: _Hindi_ $\rightarrow$ _English_ $\rightarrow$ _Hindi_
* DE: _German_ $\rightarrow$ _English_ $\rightarrow$ _German_
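The augmentation loop can be sketched as follows. Note that `translate` here is a stand-in stub introduced for illustration; the actual run called the Google Translate API, and the pivot-language mapping follows the pairs listed above.

```python
# Pivot language for each source language, as listed above.
PIVOT = {"en": "fr", "hi": "en", "de": "en"}

def translate(text, src, tgt):
    """Stub for a machine translation call (the paper used the Google
    Translate API). Replace with a real MT system in practice."""
    return text  # identity stand-in

def back_translate(text, lang):
    """Round-trip a text through the pivot language for `lang`."""
    pivot = PIVOT[lang]
    return translate(translate(text, lang, pivot), pivot, lang)

def augment(dataset, lang):
    """Double the dataset: each (id, text, label) gains a back-translated
    copy with the same label and a flagged id."""
    augmented = list(dataset)
    for post_id, text, label in dataset:
        augmented.append((post_id + "_bt", back_translate(text, lang), label))
    return augmented
```

The flagged id (`_bt` suffix here) mirrors the bookkeeping described in the next paragraph for keeping track of back-translated texts.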
To keep track of the back-translated texts, we added a flag to the text id.
In many cases, there were minimal changes to the text, and in some cases no
changes at all; however, the number of texts unchanged after
back-translation was very low. For example, among roughly 4,000 instances in
the English training set, around 100 did not change. So when using the
back-translated texts in our experiments, we simply used all of them,
whether they underwent a change or not. The data-set size doubled after
applying the back-translation data augmentation technique. An example of a
back-translated English text is as follows (changed text is emphasized):
1. Original: @politico No. We should remember very clearly that #Individual1 just
admitted to treason . #TrumpIsATraitor #McCainsAHero #JohnMcCainDay
2. Back-translated: @politico No, we must not forget that very clear #Individual1
just admitted to treason. #TrumpIsATraitor #McCainsAHero #JohnMcCainDay
## 4 Results
We present our results for sub-tasks A, B and C in Table 2, 3, and 4
respectively. To keep the table concise we use the following convention.
1. 1.
(ALL): A _bert-base-multi-lingual-uncased_ model was used with multi-lingual
joint training.
2. 2.
(BT): The data-set used for this experiment is augmented using back-
translation.
3. 3.
(D): A joint training approach has been used.
4. 4.
(MTL): The experiment is performed using a multi-task learning approach.
5. 5.
(S): This is the best model which was submitted to HASOC 2019 in Mishra and
Mishra, (2019).
The pre-trained BERT models which were fine-tuned for each language in a
single language setting, are as follows:
1. 1.
EN \- _bert-base-uncased_
2. 2.
HI \- _bert-base-multi-lingual-uncased_
3. 3.
DE \- _bert-base-multi-lingual-uncased_
### 4.1 Model performance
We evaluate our models against each other and also against the top performing
models of HASOC 2019 for each task. We use the same benchmark scores, namely,
weighted F1-score and macro F1-score, as were used in Mandl et al., (2019),
with macro F1-score being the scores which were used for overall ranking in
HASOC 2019.
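For reference, the two scores differ only in how per-class F1 values are combined; below is a minimal sketch (illustrative, not the evaluation script used by the organizers).

```python
from collections import Counter

def f1_scores(y_true, y_pred):
    """Return (macro F1, weighted F1) over the classes present in y_true.

    Macro F1 averages per-class F1 values equally, so rare classes count
    as much as frequent ones; weighted F1 weights each class by its
    support in y_true.
    """
    classes = sorted(set(y_true))
    support = Counter(y_true)
    per_class = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        per_class.append((c, f1))
    macro = sum(f1 for _, f1 in per_class) / len(per_class)
    weighted = sum(f1 * support[c] for c, f1 in per_class) / len(y_true)
    return macro, weighted
```

On the skewed label distributions described in Section 3.1, macro F1 penalizes poor performance on rare labels such as UNT far more heavily than weighted F1 does, which is why it was chosen for the overall ranking.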
#### 4.1.1 Sub-Task A
Table 2: Sub-task A results. Models in HASOC 2019 (Mandl et al., 2019) were ranked based on macro F1.

| lang | model | dev W-F1 | train W-F1 | test W-F1 | dev M-F1 | train M-F1 | test M-F1 |
|---|---|---|---|---|---|---|---|
EN | (ALL) | 0.562 | 0.949 | 0.804 | 0.568 | 0.946 | 0.753
(ALL) (D) | 0.481 | 0.894 | 0.797 | 0.497 | 0.886 | 0.740
(BT) | 0.535 | 0.973 | 0.756 | 0.545 | 0.971 | 0.690
(BT) (ALL) | 0.493 | 0.986 | 0.803 | 0.509 | 0.985 | 0.747
(BT) (ALL) (D) | 0.474 | 0.981 | 0.806 | 0.492 | 0.980 | 0.750
(MTL) (ALL) (D) | 0.552 | 0.823 | 0.801 | 0.559 | 0.812 | 0.755
(MTL) (D) | 0.543 | 0.745 | 0.819 | 0.557 | 0.725 | 0.765
| (S) | 0.606 | 0.966 | 0.790 | 0.610 | 0.964 | 0.740
| (S) (D) | 0.596 | 0.908 | 0.801 | 0.603 | 0.902 | 0.747
| HASOC Best | - | - | 0.840 | - | - | 0.788
HI | (ALL) | 0.786 | 0.976 | 0.793 | 0.785 | 0.976 | 0.793
(ALL) (D) | 0.815 | 0.959 | 0.811 | 0.815 | 0.959 | 0.810
(BT) | 0.654 | 0.967 | 0.746 | 0.654 | 0.967 | 0.744
(BT) (ALL) | 0.815 | 0.982 | 0.795 | 0.814 | 0.982 | 0.795
(BT) (ALL) (D) | 0.772 | 0.975 | 0.793 | 0.772 | 0.975 | 0.792
(MTL) (ALL) (D) | 0.860 | 0.921 | 0.808 | 0.860 | 0.921 | 0.807
(MTL) (D) | 0.748 | 0.893 | 0.814 | 0.749 | 0.893 | 0.814
| (S) | 0.742 | 0.961 | 0.802 | 0.742 | 0.961 | 0.802
| (S) (D) | 0.822 | 0.941 | 0.814 | 0.823 | 0.941 | 0.811
| HASOC Best | - | - | 0.820 | - | - | 0.815
DE | (ALL) | 0.899 | 0.993 | 0.794 | 0.706 | 0.981 | 0.584
(ALL) (D) | 0.906 | 0.988 | 0.779 | 0.730 | 0.968 | 0.566
(BT) | 0.878 | 0.988 | 0.777 | 0.628 | 0.969 | 0.533
(BT) (ALL) | 0.908 | 0.999 | 0.800 | 0.742 | 0.998 | 0.612
(BT) (ALL) (D) | 0.902 | 0.998 | 0.783 | 0.712 | 0.994 | 0.584
(MTL) (ALL) (D) | 0.917 | 0.969 | 0.786 | 0.764 | 0.915 | 0.582
(MTL) (D) | 0.878 | 0.898 | 0.789 | 0.593 | 0.683 | 0.526
| (S) | 0.606 | 0.966 | 0.789 | 0.610 | 0.964 | 0.577
| HASOC Best | - | - | 0.792 | - | - | 0.616
The best scores for sub-task A are given in Table 2. The best scores for
this task belong to Wang et al. (2019), Bashar and Nayak (2020), and Saha et
al. (2019) for English, Hindi, and German respectively. All the models that
we experimented with in sub-task A are very closely separated by macro
F1-score, so all of them give similar performance on this task: the
difference between their macro F1-scores is $<3\%$. For both English and
Hindi, the multi-task learning model performed best, while for German, the
model trained on the back-translated data using the multi-lingual joint
training approach and task D ((BT) (ALL) (D)) worked best. However, it is
interesting to see that the multi-task model gives competitive performance
on all of the languages within the same computation budget. One thing to
notice is that the train macro F1-scores of the multi-task model are much
lower than those of the other models. This suggests that the (MTL) model,
given additional training time, might improve the results even further; we
were unable to provide longer training time due to the limited computational
resources available to us. The (ALL) (MTL) model also gives a similar
performance compared to the (MTL) model, suggesting that the additional
multi-lingual training trades off a slightly lower macro F1-score; however,
the difference between the scores of the two models is $\sim 1\%$. To
address the additional training time the (MTL) models required, we trained
the (ALL) (MTL) model for 15 epochs; however, this training time was too
long, as the models over-fitted the data, which ultimately degraded their
performance. A sweet spot for the training time may exist for the (MTL)
models, which might increase performance whilst avoiding over-fitting. We
were not able to conduct more experiments to find it due to time
constraints.
This may be evaluated in future work on these models. We, however, cannot
compare the German (MTL) models with the (MTL) models of the other
languages, as the German data did not have sub-task C, so the (MTL) approach
for German did not include it. As we will see in the next section, the (MTL)
models performed equally well on sub-task B. This might be because tasks A
and B both involve identifying hate and hence are correlated, which the
(MTL) models can use to their advantage: it has been found in other
multi-task approaches that models learn more effectively when the different
tasks are correlated, whereas their performance can degrade if the tasks are
unrelated. The lower performance on the German data may be due to the
unavailability of sub-task C; however, the results are still competitive
with the other models. For German, the (ALL) (MTL) model performed better
than our submission to HASOC 2019, and the (MTL) model for Hindi was able to
match the best model for this task at HASOC 2019.
The (ALL) and (ALL) (D) training methods show an improvement over the single
models we submitted at HASOC. These models present an interesting option for
abuse detection tasks, as they work on all of the shared tasks at the same
time, leveraging the multi-lingual abilities of the model whilst still
having a computation budget equivalent to that of a single model. The
results show that these models give performance competitive with the single
models. They even outperform the single models in some cases, e.g., the
bert-base-uncased single models used in English sub-task A, which were
specially tuned for English. For German and Hindi, the single models
themselves utilized a bert-base-multi-lingual-uncased model, so they are
better suited for analyzing the improvements from the multi-lingual joint
training approach; on these languages we see that the (ALL) and (ALL) (D)
techniques do improve the macro F1-scores for this task.
The back-translation technique does not improve the models much on its own and had mixed performance. For all languages, back-translation alone does not improve the model; it hints at over-fitting, resulting in a decrease in test results. However, when it is combined with the (ALL) and (D) training methods, we see an increase in performance: these training methods are able to leverage the data augmentation applied in the back-translated data. Back-translation used with (ALL) or (ALL) (D) is better than the single models we submitted at HASOC 2019, and the (BT) (ALL) model comes very close to the best model at HASOC, coming second according to the results in Mandl et al., (2019).
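As a point of reference, back-translation (Sennrich et al., 2016) augments the data by translating each sentence into a pivot language and back, reusing the original label for the paraphrase. The following is a minimal sketch, not our actual pipeline: `translate` is a hypothetical stand-in for any machine-translation backend (here a toy lookup for illustration), and the pivot-language choice is an assumption.

```python
# Sketch of back-translation data augmentation: each sentence is translated
# to a pivot language and back, producing a paraphrase that (roughly)
# doubles the data set. `translate` is a hypothetical MT stand-in.

def translate(text: str, src: str, tgt: str) -> str:
    # Placeholder: a real system would call an MT model or API here.
    lookup = {("en", "de"): {"i hate this": "ich hasse das"},
              ("de", "en"): {"ich hasse das": "i hate it"}}
    return lookup.get((src, tgt), {}).get(text, text)

def back_translate(sentences, labels, src="en", pivot="de"):
    """Return the original data plus back-translated copies (labels reused)."""
    augmented, aug_labels = list(sentences), list(labels)
    for sent, label in zip(sentences, labels):
        pivoted = translate(sent, src, pivot)
        restored = translate(pivoted, pivot, src)
        if restored != sent:  # keep only genuine paraphrases
            augmented.append(restored)
            aug_labels.append(label)
    return augmented, aug_labels
```

Because labels are copied verbatim, any meaning drift introduced by the round trip becomes label noise, which is one plausible explanation for the over-fitting observed when (BT) is used alone.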
#### 4.1.2 Sub-Task B
Table 3: sub-task B results. Models in HASOC 2019 (Mandl et al., 2019) were ranked based on Macro F1.

| lang | model | Weighted F1 (dev) | Weighted F1 (train) | Weighted F1 (test) | Macro F1 (dev) | Macro F1 (train) | Macro F1 (test) |
|---|---|---|---|---|---|---|---|
| EN | (ALL) | 0.361 | 0.826 | 0.501 | 0.290 | 0.805 | 0.467 |
| EN | (ALL) (D) | 0.201 | 0.776 | 0.556 | 0.190 | 0.580 | 0.392 |
| EN | (BT) | 0.422 | 0.965 | 0.532 | 0.352 | 0.960 | 0.510 |
| EN | (BT) (ALL) | 0.396 | 0.962 | 0.626 | 0.375 | 0.957 | 0.591 |
| EN | (BT) (ALL) (D) | 0.201 | 0.950 | 0.576 | 0.153 | 0.708 | 0.408 |
| EN | (MTL) (ALL) (D) | 0.397 | 0.927 | 0.635 | 0.277 | 0.915 | 0.590 |
| EN | (MTL) (D) | 0.344 | 0.899 | 0.638 | 0.341 | 0.881 | 0.600 |
| EN | (S) | 0.349 | 0.867 | 0.728 | 0.314 | 0.846 | 0.545 |
| EN | (S) (D) | 0.401 | 0.875 | 0.698 | 0.332 | 0.839 | 0.537 |
| EN | HASOC Best | - | - | 0.728 | - | - | 0.545 |
| HI | (ALL) | 0.494 | 0.832 | 0.500 | 0.340 | 0.802 | 0.494 |
| HI | (ALL) (D) | 0.678 | 0.792 | 0.564 | 0.293 | 0.566 | 0.415 |
| HI | (BT) | 0.231 | 0.807 | 0.507 | 0.160 | 0.767 | 0.501 |
| HI | (BT) (ALL) | 0.589 | 0.890 | 0.667 | 0.283 | 0.875 | 0.662 |
| HI | (BT) (ALL) (D) | 0.630 | 0.849 | 0.519 | 0.180 | 0.617 | 0.381 |
| HI | (MTL) (ALL) (D) | 0.819 | 0.883 | 0.647 | 0.499 | 0.864 | 0.641 |
| HI | (MTL) (D) | 0.553 | 0.802 | 0.602 | 0.348 | 0.764 | 0.593 |
| HI | (S) | 0.466 | 0.749 | 0.688 | 0.322 | 0.701 | 0.553 |
| HI | (S) (D) | 0.757 | 0.826 | 0.715 | 0.459 | 0.736 | 0.581 |
| HI | HASOC Best | - | - | 0.715 | - | - | 0.581 |
| DE | (ALL) | 0.326 | 0.876 | 0.459 | 0.315 | 0.861 | 0.343 |
| DE | (ALL) (D) | 0.285 | 0.813 | 0.154 | 0.255 | 0.603 | 0.128 |
| DE | (BT) | 0.285 | 0.620 | 0.413 | 0.328 | 0.581 | 0.285 |
| DE | (BT) (ALL) | 0.478 | 0.985 | 0.495 | 0.484 | 0.984 | 0.397 |
| DE | (BT) (ALL) (D) | 0.179 | 0.945 | 0.242 | 0.153 | 0.707 | 0.177 |
| DE | (MTL) (ALL) (D) | 0.463 | 0.946 | 0.527 | 0.346 | 0.707 | 0.344 |
| DE | (MTL) (D) | 0.468 | 0.923 | 0.541 | 0.482 | 0.918 | 0.416 |
| DE | (S) | 0.112 | 0.367 | 0.756 | 0.140 | 0.247 | 0.249 |
| DE | (S) (D) | 0.865 | 0.918 | 0.778 | 0.282 | 0.409 | 0.276 |
| DE | HASOC Best | - | - | 0.775 | - | - | 0.347 |
The best scores for sub-task B are shown in Table 3. The best score for German belongs to Ruiter et al., (2019); for English and Hindi sub-task B, our submissions performed best at HASOC 2019. For sub-task B, many of our models were able to significantly outperform the best HASOC models. For English, the multi-task approach yields a new best macro-F1 score of $0.600$, a $6\%$ increase over the previous best. For Hindi, our (BT) (ALL) model yields a macro-F1 score of $0.662$, which is $8\%$ more than the previous best. For German, our (MTL) model has a test macro-F1 score of $0.416$, almost $7\%$ more than the previous best.
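For clarity on the ranking metric: macro-F1 averages the per-label F1 scores uniformly, while weighted F1 weights each label's F1 by its support. Both can be computed as follows (a minimal reference implementation, not the evaluation script used by HASOC):

```python
from collections import Counter

def per_label_f1(y_true, y_pred, label):
    """F1 score for one label, treating it as the positive class."""
    tp = sum(t == p == label for t, p in zip(y_true, y_pred))
    pred = sum(p == label for p in y_pred)   # predicted positives
    gold = sum(t == label for t in y_true)   # actual positives (support)
    prec = tp / pred if pred else 0.0
    rec = tp / gold if gold else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-label F1 scores."""
    labels = sorted(set(y_true))
    return sum(per_label_f1(y_true, y_pred, l) for l in labels) / len(labels)

def weighted_f1(y_true, y_pred):
    """Mean of per-label F1 scores, weighted by label support."""
    support, n = Counter(y_true), len(y_true)
    return sum(per_label_f1(y_true, y_pred, l) * c / n
               for l, c in support.items())
```

Macro-F1 is the stricter ranking criterion here because the (HATE), (OFFN) and (PRFN) classes are small: a model that ignores a minority label is penalized equally for that label regardless of its size.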
For the English task, even our (MTL) (ALL) and (BT) (ALL) models were able to beat the previous best. However, unlike sub-task A, where our models had similar performances, in sub-task B there is a huge variation in their performance: many outperform the best, but some also show poor results. The (ALL) and (ALL) (D) models perform poorly for all three languages, except (ALL) for German, and show very small macro-F1 scores even on the training set; training these models for longer may therefore change the results. The (MTL) models give competitive performance and are able to outperform the previous best, showing their capability to leverage different correlated tasks and generalize well across all of them.
Here again we see that back-translation alone does not improve the macro-F1 scores. Interestingly, however, the (ALL) and (BT) models, which perform poorly individually, tend to give good results when used together, outperforming the previous best HASOC models in all three languages. This hints that data sparsity alone is not the major issue of this task. It is also evident from the performance of the (MTL) model, which only utilizes the data set of a single language, significantly smaller than both the back-translated data (twice the original data set) and the multi-lingual joint data (the sum of the sizes of the original data sets). The (BT) (ALL) (D) model, however, performed poorly in all three languages; thus the use of sub-task (D) with these models only degrades performance.
The results from this task confirm that the information required to predict sub-task A is important for sub-task B as well. This information is shared better by the modified loss function of the (MTL) models than by the loss function for sub-task (D). The (MTL) models build on the sub-task (D) approach but do not utilize it explicitly. The sub-task (D) approach looks like a multi-task learning method, but it is incomplete and unable to learn from the other tasks, and thus does not offer a large improvement. The (MTL) models do show variation in their performance, but it is consistently on the higher side of the macro-F1 scores.
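The information sharing described above can be illustrated as a joint loss over per-sub-task classification heads on top of a shared encoder, where an example contributes only to the sub-tasks for which it has labels (e.g. German examples carry no sub-task C label). This is a simplified sketch with illustrative task names and weighting, not the exact (MTL) loss used:

```python
import math

def cross_entropy(probs, gold_idx):
    """Negative log-likelihood of the gold class under a head's softmax output."""
    return -math.log(probs[gold_idx])

def multi_task_loss(head_probs, gold_labels, task_weights=None):
    """Joint loss over sub-task heads that share one encoder.

    head_probs:  {task: softmax distribution from that task's head}
    gold_labels: {task: gold class index, or None if the sub-task is
                 unavailable for this example (e.g. sub-task C for German)}
    """
    task_weights = task_weights or {t: 1.0 for t in head_probs}
    total = 0.0
    for task, probs in head_probs.items():
        gold = gold_labels.get(task)
        if gold is None:  # skip tasks without labels for this example
            continue
        total += task_weights[task] * cross_entropy(probs, gold)
    return total
```

Because the gradients from all sub-task heads flow into the same encoder, features useful for sub-task A (HOF vs NOT) are available to the sub-task B head, which is the mechanism the results above point to.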
#### 4.1.3 Sub-Task C
The best scores for sub-task C are shown in Table 4. The best score for Hindi belongs to Mujadia et al., (2019), while our submissions performed best for English. The results for sub-task C also show appreciable variation. Setting aside the (ALL) (D) and (BT) (ALL) (D) models, which also performed poorly in sub-task B, the variation in performance, especially for English, is not as significant as in sub-task B. This may be because the two-way fine-grained classification is a much easier task than the three-way classification in sub-task B. One important point to note is that sub-task C focused on identifying the context of the hate speech, specifically whether it is targeted or untargeted, while sub-tasks A and B both focused on identifying the type of hate speech.
The (MTL) models do not perform as well as they did in the previous two tasks. They were still able to outperform the best models for English but perform poorly for Hindi. Importantly, the train macro-F1 scores for the (MTL) models are significantly low, suggesting that the (MTL) model was not able to learn well even on the training instances of this task. This can be attributed to the point mentioned above: this task is inherently not as correlated with sub-tasks A and B as previously assumed, and the task structure itself is not beneficial for an (MTL) approach. The main reason is that this task focuses on identifying targeted and untargeted hate speech, yet a non-hate-speech text can also be targeted or untargeted. Since the (MTL) model receives texts belonging to both the hate (HOF) and non-hate (NOT) classes, the information contained in the texts belonging to this task is counteracted by the targeted and untargeted texts belonging to the (NOT) class. Thus, a better formulation of this task is not a fine-grained classification of hate-speech text, but one that involves targeted and untargeted labels for both the (HOF) and (NOT) classes. In that setting, we could fully utilize the advantage of the multi-task learning model and expect better performance on this task as well.
The (ALL) and (BT) models performed very well in sub-task C: the (ALL), (BT) and (BT) (ALL) models outperform the previous best for English. Combining these models with (D) still does not improve them, and they continue to give poor performance. This provides more evidence for our earlier inference that sub-task (D) alone does not improve performance.
Table 4: sub-task C results. Models in HASOC 2019 (Mandl et al., 2019) were ranked based on Macro F1.

| lang | model | Weighted F1 (dev) | Weighted F1 (train) | Weighted F1 (test) | Macro F1 (dev) | Macro F1 (train) | Macro F1 (test) |
|---|---|---|---|---|---|---|---|
| EN | (ALL) | 0.842 | 0.986 | 0.737 | 0.524 | 0.958 | 0.547 |
| EN | (ALL) (D) | 0.380 | 0.792 | 0.658 | 0.141 | 0.328 | 0.292 |
| EN | (BT) | 0.836 | 0.991 | 0.771 | 0.465 | 0.974 | 0.543 |
| EN | (BT) (ALL) | 0.839 | 0.987 | 0.718 | 0.534 | 0.962 | 0.514 |
| EN | (BT) (ALL) (D) | 0.374 | 0.967 | 0.600 | 0.173 | 0.618 | 0.297 |
| EN | (MTL) (ALL) (D) | 0.839 | 0.704 | 0.658 | 0.311 | 0.465 | 0.518 |
| EN | (MTL) (D) | 0.844 | 0.747 | 0.692 | 0.470 | 0.506 | 0.538 |
| EN | (S) | 0.880 | 0.980 | 0.756 | 0.627 | 0.942 | 0.511 |
| EN | (S) (D) | 0.548 | 0.874 | 0.764 | 0.393 | 0.651 | 0.476 |
| EN | HASOC Best | - | - | 0.756 | - | - | 0.511 |
| HI | (ALL) | 0.844 | 0.765 | 0.827 | 0.525 | 0.740 | 0.557 |
| HI | (ALL) (D) | 0.594 | 0.666 | 0.740 | 0.216 | 0.417 | 0.336 |
| HI | (BT) | 0.861 | 0.766 | 0.817 | 0.652 | 0.744 | 0.568 |
| HI | (BT) (ALL) | 0.797 | 0.941 | 0.775 | 0.517 | 0.937 | 0.527 |
| HI | (BT) (ALL) (D) | 0.682 | 0.779 | 0.673 | 0.288 | 0.507 | 0.317 |
| HI | (MTL) (ALL) (D) | 0.374 | 0.530 | 0.626 | 0.292 | 0.524 | 0.456 |
| HI | (MTL) (D) | 0.557 | 0.577 | 0.628 | 0.397 | 0.573 | 0.451 |
| HI | (S) | 0.800 | 0.877 | 0.727 | 0.550 | 0.866 | 0.565 |
| HI | (S) (D) | 0.769 | 0.724 | 0.758 | 0.537 | 0.622 | 0.550 |
| HI | HASOC Best | - | - | 0.736 | - | - | 0.575 |
Overall, most of our models show an improvement over the single models submitted at HASOC. The sub-par performance of the back-translated models across the sub-tasks suggests that data sparsity is not the central issue of this challenge; additional methods have to be used to take advantage of the augmented data. Sub-task (D) does not add a significant improvement to the models, and it actually worsens the situation for sub-tasks B and C, which can be attributed to it changing the task to a much harder 7-class classification task. The combined model approaches mentioned above offer a resource-efficient way to detect hate speech. The (ALL), (ALL) (MTL) and (MTL) models are able to generalize well across the different tasks and languages, and present themselves as good candidates for a unified hate-speech detection model.
### 4.2 Error analysis
(a) sub-task A
(b) sub-task B
(c) sub-task C
Figure 3: Variation in label F1-scores for all sub-tasks across all models
After identifying the best model and the variation in evaluation scores for
each model, we investigate the overall performance of these models for each
label belonging to each task. In Figure 3, we can observe how the various
labels have a high variance in their predictive performance.
#### 4.2.1 Sub-Task A
For English, the models show decent variation for both labels on the training set. However, this variation is not as significant on the dev and test sets for the (NOT) label. There is an appreciable variation for the (HOF) label on the dev set, but it does not carry over to the test set. The scores on the train set are very high compared to the dev and test sets. For Hindi, the predictions for both labels show minimal variation in F1-score across the three data sets, with similar scores for each label. For German, the F1-scores for the (NOT) class are quite high compared to those of the (HOF) class; the models have a very low F1-score for the (HOF) label on the test set, with appreciable variation across the different models.
#### 4.2.2 Sub-Task B
For English, the F1-scores for all labels are quite high on the train set, with decent variation for the (OFFN) label. All labels show appreciable variance on the dev and test sets. The (OFFN) label has the lowest F1-score on both the dev and test sets, with the other two labels having similar scores on the test set. For Hindi, the train F1-scores are similar for all labels. The F1-scores are on the lower end for the (HATE) and (OFFN) labels on the dev set, with appreciable variance across the models; this may be because the Hindi dev set contains very few samples from these two labels. For German, the variation among F1-scores is high across all three sets. The (HATE) and (OFFN) labels have a large variation in their F1-scores across the models on the dev and test sets respectively, and the F1-score for the (OFFN) label is much higher than the other labels on the test set.
#### 4.2.3 Sub-Task C
For English, the (UNT) label has exceptionally high variance across the models on the train set, driven by the exceptionally low scores of the (BT) (ALL) (D) model; this label also has an extremely low F1-score on the dev set. Furthermore, there is a large variation in the (TIN) scores across the models on all the sets.
For Hindi, the (TIN) label has similar F1-scores with large variations across the models on all three sets, whereas the (UNT) label has small variance across the models on the dev and test sets.
#### 4.2.4 Back Translation
We also examined issues with the back-translated results. To assess the back-translated data, we looked at the words added to and removed from a sentence after back-translation. Aggregating these words over all sentences, we find that the words most often removed and introduced are stop words, e.g. _the_, _of_, etc. To remove these stop words from our analysis and assess the salient words introduced and removed per label, we drop the overall top 50 words from the introduced and removed lists aggregated over each label. This analysis shows that the words often removed from offensive and hateful labels are indeed offensive words. A detailed list of words for English and German can be found in Appendix .2 (we excluded the results for Hindi because of LaTeX encoding issues).
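The analysis described above (per-sentence token diffs before and after back-translation, aggregated per label, with the globally most frequent changes filtered out as stop words) can be sketched as follows; function names are illustrative, not the actual analysis script:

```python
from collections import Counter

def word_changes(original, back_translated):
    """Sets of tokens introduced and removed by back-translation of one sentence."""
    before = set(original.lower().split())
    after = set(back_translated.lower().split())
    return after - before, before - after

def salient_changes(sentence_pairs, labels, top_global=50, top_k=5):
    """Per-label top-k introduced/removed words, after dropping the
    `top_global` most frequently changed words overall (mostly stop words)."""
    overall = Counter()
    per_label = {}
    for (orig, bt), label in zip(sentence_pairs, labels):
        intro, removed = word_changes(orig, bt)
        intro_counts, removed_counts = per_label.setdefault(
            label, (Counter(), Counter()))
        intro_counts.update(intro)
        removed_counts.update(removed)
        overall.update(intro | removed)
    stop = {w for w, _ in overall.most_common(top_global)}
    return {
        label: (
            [(w, c) for w, c in ic.most_common() if w not in stop][:top_k],
            [(w, c) for w, c in rc.most_common() if w not in stop][:top_k],
        )
        for label, (ic, rc) in per_label.items()
    }
```

Filtering on a global frequency cutoff rather than a fixed stop-word list keeps the analysis language-agnostic, which matters since the same procedure is applied to English, Hindi and German.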
## 5 Discussion
### 5.1 Computational benefits
From the results above, we can conclude that multi-task models provide robust hate-speech detection that generalizes well across different languages and tasks within a given domain. Even the combined models can be deployed easily and give competitive performance on different languages with an efficient computation budget. Many of our models perform better than the best-scoring models of HASOC 2019 while maintaining a low inference cost.
### 5.2 Additional evaluation
Our current evaluation was limited to the HASOC dataset; additional evaluation is needed to assess the out-of-domain and out-of-language capabilities of these techniques. Furthermore, the back-translation approach needs to be assessed further using qualitative analysis of the generated back-translations.
### 5.3 Architectures and Training improvements
There are additional combinations of architectures which we plan to try in future iterations of this work, such as the (BT) (MTL) and (BT) (MTL) (ALL) models. We have seen that the (ALL) and (BT) models work well in unison, and the (MTL) (ALL) models give performance competitive with the (MTL) model; a (BT) (MTL) (ALL) model is therefore expected to bring out the best of both worlds. The (MTL) models we have used can still be tuned further, which may improve their results on the test sets. We trained the (ALL) (MTL) model for 15 epochs instead of the usual 5, but it over-fitted the training set. Further experiments have to be conducted to identify the ideal training time for these models.
### 5.4 Real world usage
Even though our models have performed very well on the HASOC dataset, the results are far from ideal. Given that the HASOC dataset is quite small, our models may not generalize well outside of its domain; however, our focus was on assessing the improvements obtained using our multi-task and multi-lingual techniques on this dataset. We also conducted similar experiments for the TRAC 2020 dataset Mishra et al., 2020b . To make these models more robust for general-purpose hate-speech detection, we need to train them on larger and more diverse datasets. Furthermore, our models need to be further evaluated for demographic bias, as Davidson et al., (2019) found that hate speech and abusive language datasets exhibit racial bias towards African American English usage.
## 6 Conclusion
We conclude this paper by highlighting the promise shown by multi-lingual and multi-task models for hate and abusive speech detection: they are computationally efficient while maintaining accuracy comparable to single-task models. We note that our pre-trained models need further evaluation before large-scale use; however, the architecture and the training framework can easily scale to large datasets without sacrificing performance, as shown in Mishra, 2020b ; Mishra, 2020a ; Mishra, (2019).
## Compliance with Ethical Standards
Conflict of Interest: The authors declare that they have no conflict of
interest.
## References
* Badjatiya et al., (2017) Badjatiya, P., Gupta, S., Gupta, M., and Varma, V. (2017). Deep learning for hate speech detection in tweets. In Proceedings of the 26th International Conference on World Wide Web Companion, WWW ’17 Companion, page 759–760, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.
* Bashar and Nayak, (2020) Bashar, M. A. and Nayak, R. (2020). Qutnocturnal@ hasoc’19: Cnn for hate speech and offensive content identification in hindi language. arXiv preprint arXiv:2008.12448.
* Basile et al., (2019) Basile, V., Bosco, C., Fersini, E., Nozza, D., Patti, V., Rangel Pardo, F. M., Rosso, P., and Sanguinetti, M. (2019). SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in twitter. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 54–63, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
* Burnap and Williams, (2015) Burnap, P. and Williams, M. L. (2015). Cyber hate speech on twitter: An application of machine classification and statistical modeling for policy and decision making. Policy & Internet, 7(2):223–242.
* Davidson et al., (2019) Davidson, T., Bhattacharya, D., and Weber, I. (2019). Racial Bias in Hate Speech and Abusive Language Detection Datasets. In Proceedings of the Third Workshop on Abusive Language Online, pages 25–35, Stroudsburg, PA, USA. Association for Computational Linguistics.
* Davidson et al., (2017) Davidson, T., Warmsley, D., Macy, M. W., and Weber, I. (2017). Automated hate speech detection and the problem of offensive language. In Proceedings of the International AAAI Conference on Web and Social Media 2017.
* Devlin et al., (2019) Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* Duggan et al., (2017) Duggan, M., Smith, A., and Caiazza, T. (2017). Online Harassment 2017. Technical report, Pew Research Center.
* Eisenstein, (2013) Eisenstein, J. (2013). What to do about bad language on the internet. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 359–369, Atlanta, Georgia. Association for Computational Linguistics.
* Florio et al., (2020) Florio, K., Basile, V., Polignano, M., Basile, P., and Patti, V. (2020). Time of your hate: The challenge of time in hate speech detection on social media. Applied Sciences, 10(12):4180.
* Gomez et al., (2020) Gomez, R., Gibert, J., Gomez, L., and Karatzas, D. (2020). Exploring hate speech detection in multimodal publications. In 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1459–1467.
* Joulin et al., (2017) Joulin, A., Grave, E., Bojanowski, P., and Mikolov, T. (2017). Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 427–431, Valencia, Spain. Association for Computational Linguistics.
* Kiela et al., (2020) Kiela, D., Firooz, H., Mohan, A., Goswami, V., Singh, A., Ringshia, P., and Testuggine, D. (2020). The hateful memes challenge: Detecting hate speech in multimodal memes.
* Koehn, (2005) Koehn, P. (2005). Europarl : A Parallel Corpus for Statistical Machine Translation. MT Summit.
* Kumar et al., (2018) Kumar, R., Ojha, A. K., Malmasi, S., and Zampieri, M. (2018). Benchmarking aggression identification in social media. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018), pages 1–11, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
* Kumar et al., (2020) Kumar, R., Ojha, A. K., Malmasi, S., and Zampieri, M. (2020). Evaluating aggression identification in social media. In Kumar, R., Ojha, A. K., Lahiri, B., Zampieri, M., Malmasi, S., Murdock, V., and Kadar, D., editors, Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020), Paris, France. European Language Resources Association (ELRA).
* Le and Mikolov, (2014) Le, Q. V. and Mikolov, T. (2014). Distributed representations of sentences and documents. CoRR, abs/1405.4053.
* Liu et al., (2016) Liu, P., Qiu, X., and Huang, X. (2016). Deep Multi-Task Learning with Shared Memory for Text Classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 118–127, Stroudsburg, PA, USA. Association for Computational Linguistics.
* Mandl et al., (2019) Mandl, T., Modha, S., Majumder, P., Patel, D., Dave, M., Mandlia, C., and Patel, A. (2019). Overview of the hasoc track at fire 2019: Hate speech and offensive content identification in indo-european languages. In Proceedings of the 11th Forum for Information Retrieval Evaluation, FIRE ’19, page 14–17, New York, NY, USA. Association for Computing Machinery.
* Mishra, (2019) Mishra, S. (2019). Multi-dataset-multi-task Neural Sequence Tagging for Information Extraction from Tweets. In Proceedings of the 30th ACM Conference on Hypertext and Social Media - HT ’19, pages 283–284, New York, New York, USA. ACM Press.
* (21) Mishra, S. (2020a). Information Extraction from Digital Social Trace Data with Applications to Social Media and Scholarly Communication Data. ACM SIGIR Forum, 54(1).
* (22) Mishra, S. (2020b). Information Extraction from Digital Social Trace Data with Applications to Social Media and Scholarly Communication Data. PhD thesis, University of Illinois at Urbana-Champaign.
* (23) Mishra, S. (2020c). Non-neural Structured Prediction for Event Detection from News in Indian Languages. In Mehta, P., Mandl, T., Majumder, P., and Mitra, M., editors, Working Notes of FIRE 2020 - Forum for Information Retrieval Evaluation, Hyderabad, India. CEUR Workshop Proceedings, CEUR-WS.org.
* Mishra et al., (2014) Mishra, S., Agarwal, S., Guo, J., Phelps, K., Picco, J., and Diesner, J. (2014). Enthusiasm and support: alternative sentiment classification for social movements on social media. In Proceedings of the 2014 ACM conference on Web science - WebSci ’14, pages 261–262, Bloomington, Indiana, USA. ACM Press.
* Mishra and Diesner, (2016) Mishra, S. and Diesner, J. (2016). Semi-supervised Named Entity Recognition in noisy-text. In Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT), pages 203–212, Osaka, Japan. The COLING 2016 Organizing Committee.
* Mishra and Diesner, (2019) Mishra, S. and Diesner, J. (2019). Capturing Signals of Enthusiasm and Support Towards Social Issues from Twitter. In Proceedings of the 5th International Workshop on Social Media World Sensors - SIdEWayS’19, pages 19–24, New York, New York, USA. ACM Press.
* Mishra and Mishra, (2019) Mishra, S. and Mishra, S. (2019). 3Idiots at HASOC 2019: Fine-tuning Transformer Neural Networks for Hate Speech Identification in Indo-European Languages. In Proceedings of the 11th annual meeting of the Forum for Information Retrieval Evaluation, pages 208–213, Kolkata, India.
* (28) Mishra, S., Prasad, S., and Mishra, S. (2020a). Model and predictions for multi-task multi-lingual learning of transformer models for hate speech and offensive speech identification in social media. Accessible at: https://doi.org/10.13012/B2IDB-3565123_V1.
* (29) Mishra, S., Prasad, S., and Mishra, S. (2020b). Multilingual Joint Fine-tuning of Transformer models for identifying Trolling,Aggression and Cyberbullying at TRAC 2020. In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
* Mondal et al., (2017) Mondal, M., Silva, L. A., and Benevenuto, F. (2017). A Measurement Study of Hate Speech in Social Media. In Proceedings of the 28th ACM Conference on Hypertext and Social Media - HT ’17, pages 85–94, New York, New York, USA. ACM Press.
* Mozafari et al., (2020) Mozafari, M., Farahbakhsh, R., and Crespi, N. (2020). A bert-based transfer learning approach for hate speech detection in online social media. In Cherifi, H., Gaito, S., Mendes, J. F., Moro, E., and Rocha, L. M., editors, Complex Networks and Their Applications VIII, pages 928–940, Cham. Springer International Publishing.
* Mujadia et al., (2019) Mujadia, V., Mishra, P., and Sharma, D. M. (2019). Iiit-hyderabad at hasoc 2019: Hate speech detection.
* Perrin, (2015) Perrin, A. (2015). Social Media Usage:2005-2015. Technical report, Pew Research Center.
* Plank, (2017) Plank, B. (2017). All-in-1 at IJCNLP-2017 task 4: Short text classification with one model for all languages. In Proceedings of the IJCNLP 2017, Shared Tasks, pages 143–148, Taipei, Taiwan. Asian Federation of Natural Language Processing.
* Ranasinghe and Zampieri, (2020) Ranasinghe, T. and Zampieri, M. (2020). Multilingual offensive language identification with cross-lingual embeddings. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5838–5844, Online. Association for Computational Linguistics.
* Razavi et al., (2010) Razavi, A. H., Inkpen, D., Uritsky, S., and Matwin, S. (2010). Offensive language detection using multi-level classification. In Proceedings of the 23rd Canadian Conference on Advances in Artificial Intelligence, AI’10, page 16–27, Berlin, Heidelberg. Springer-Verlag.
* Risch and Krestel, (2018) Risch, J. and Krestel, R. (2018). Aggression identification using deep learning and data augmentation. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (co-located with COLING), pages 150–158.
* Risch and Krestel, (2020) Risch, J. and Krestel, R. (2020). Bagging bert models for robust aggression identification. In Proceedings of the Workshop on Trolling, Aggression and Cyberbullying (TRAC@LREC).
* Ruiter et al., (2019) Ruiter, D., Rahman, M. A., and Klakow, D. (2019). Lsv-uds at HASOC 2019: The problem of defining hate. In Mehta, P., Rosso, P., Majumder, P., and Mitra, M., editors, Working Notes of FIRE 2019 - Forum for Information Retrieval Evaluation, Kolkata, India, December 12-15, 2019, volume 2517 of CEUR Workshop Proceedings, pages 263–270. CEUR-WS.org.
* Saha et al., (2019) Saha, P., Mathew, B., Goyal, P., and Mukherjee, A. (2019). Hatemonitors: Language agnostic abuse detection in social media.
* Salminen et al., (2018) Salminen, J., Almerekhi, H., Milenković, M., gyo Jung, S., An, J., Kwak, H., and Jansen, B. (2018). Anatomy of online hate: Developing a taxonomy and machine learning models for identifying and classifying hate in online news media.
* Schmidt and Wiegand, (2017) Schmidt, A. and Wiegand, M. (2017). A Survey on Hate Speech Detection using Natural Language Processing. In Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media, pages 1–10, Stroudsburg, PA, USA. Association for Computational Linguistics.
* Sennrich et al., (2016) Sennrich, R., Haddow, B., and Birch, A. (2016). Improving Neural Machine Translation Models with Monolingual Data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96, Stroudsburg, PA, USA. Association for Computational Linguistics.
* Søgaard and Goldberg, (2016) Søgaard, A. and Goldberg, Y. (2016). Deep multi-task learning with low level tasks supervised at lower layers. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 231–235. Association for Computational Linguistics.
* Sticca and Perren, (2013) Sticca, F. and Perren, S. (2013). Is Cyberbullying Worse than Traditional Bullying? Examining the Differential Roles of Medium, Publicity, and Anonymity for the Perceived Severity of Bullying. Journal of Youth and Adolescence, 42(5):739–750.
* Struß et al., (2019) Struß, J., Siegel, M., Ruppenhofer, J., Wiegand, M., and Klenner, M. (2019). Overview of germeval task 2, 2019 shared task on the identification of offensive language. In KONVENS.
* Van Hee et al., (2015) Van Hee, C., Lefever, E., Verhoeven, B., Mennes, J., Desmet, B., De Pauw, G., Daelemans, W., and Hoste, V. (2015). Detection and fine-grained classification of cyberbullying events. In Angelova, G., Bontcheva, K., and Mitkov, R., editors, Proceedings of Recent Advances in Natural Language Processing, Proceedings, pages 672–680.
* Vaswani et al., (2017) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. (2017). Attention is all you need. In Advances in neural information processing systems, pages 5998–6008.
* Vidgen et al., (2019) Vidgen, B., Harris, A., Nguyen, D., Tromble, R., Hale, S., and Margetts, H. (2019). Challenges and frontiers in abusive content detection. In Proceedings of the Third Workshop on Abusive Language Online, pages 80–93, Florence, Italy. Association for Computational Linguistics.
* Wang et al., (2019) Wang, B., Ding, Y., Liu, S., and Zhou, X. (2019). Ynu_wb at hasoc 2019: Ordered neurons lstm with attention for identifying hate speech and offensive language.
* Wang, (2018) Wang, C. (2018). Interpreting neural network hate speech classifiers. In Proceedings of the 2nd Workshop on Abusive Language Online (ALW2), pages 86–92, Brussels, Belgium. Association for Computational Linguistics.
* Waseem et al., (2017) Waseem, Z., Davidson, T., Warmsley, D., and Weber, I. (2017). Understanding abuse: A typology of abusive language detection subtasks. In Proceedings of the First Workshop on Abusive Language Online, pages 78–84, Vancouver, BC, Canada. Association for Computational Linguistics.
* Wolf et al., (2019) Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., and Brew, J. (2019). Huggingface’s transformers: State-of-the-art natural language processing.
* Yang et al., (2019) Yang, F., Peng, X., Ghosh, G., Shilon, R., Ma, H., Moore, E., and Predovic, G. (2019). Exploring deep multimodal fusion of text and photo for hate speech classification. In Proceedings of the Third Workshop on Abusive Language Online, pages 11–18, Florence, Italy. Association for Computational Linguistics.
* Zampieri et al., (2019) Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N., and Kumar, R. (2019). SemEval-2019 task 6: Identifying and categorizing offensive language in social media (OffensEval). In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 75–86, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
## Appendix
### .1 Label Distribution
Figure 4: English Data class wise distribution
Figure 5: German Data class wise distribution
Figure 6: Hindi Data class wise distribution
### .2 Back translation top changed words
Here we list the top 5 words per label for each task, obtained after removing the top 50 words that were either introduced or removed via back-translation. We do not list the top words for Hindi because of encoding issues in LaTeX.
Listing 1: Changed words in English Training Data
task_1 introduced_words
HOF [('asset', 30), ("you're", 29), ('so', 28), ("it's", 26), ('there', 25)]
NOT [('worldcup2019', 47), ('at', 41), ("i'm", 40), ("it's", 38), ('us', 38)]
task_1 removed_words
HOF [('fuck', 52), ("he's", 43), ('what', 38), ("don't", 37), ('them', 36)]
NOT [('happy', 49), ('than', 47), ('being', 45), ('every', 45), ('been', 43)]
task_2 introduced_words
HATE [('there', 17), ('worldcup2019', 16), ('so', 14), ('because', 14), ('dhoni', 13)]
NONE [('worldcup2019', 47), ('at', 41), ("i'm", 40), ("it's", 38), ('us', 38)]
OFFN [('impeach45', 11), ('asset', 11), ('now', 8), ('lie', 6), ('trump2020', 6)]
PRFN [("it's", 16), ('fucking', 16), ('damn', 14), ('f', 14), ("you're", 12)]
task_2 removed_words
HATE [('which', 20), ('ground', 20), ('such', 18), ('its', 18), ("doesn't", 18)]
NONE [('happy', 49), ('than', 47), ('being', 45), ('every', 45), ('been', 43)]
OFFN [("he's", 12), ('them', 11), ("he's", 9), ('what', 9), ('been', 9)]
PRFN [('fuck', 47), ('fucking', 21), ("he's", 16), ('off', 16), ("don't", 15)]
task_3 introduced_words
NONE [('worldcup2019', 47), ('at', 41), ("i'm", 40), ("it's", 38), ('us', 38)]
TIN [('asset', 30), ("you're", 27), ('so', 25), ('which', 25), ('because', 25)]
UNT [('f', 8), ('nmy', 5), ('***', 5), ('there', 4), ('these', 4)]
task_3 removed_words
NONE [('happy', 49), ('than', 47), ('being', 45), ('every', 45), ('been', 43)]
TIN [('fuck', 48), ("he's", 39), ('what', 35), ("don't", 34), ("he's", 34)]
UNT [('them', 7), ('f***', 6), ('being', 5), ('such', 5), ('does', 5)]
Listing 2: Changed words in German Training Data
task_1 introduced_words
HOF [('!!', 15), ('etwas', 15), ('diese', 12), ('sein', 11), ('werden', 11)]
NOT [('einen', 59), ('jetzt', 58), ('war', 57), ('menschen', 57), ('was', 56)]
task_1 removed_words
HOF [('du', 18), ('wohl', 14), ('haben', 12), ('eure', 11), ('mir', 11)]
NOT [('wieder', 56), ('uber', 55), ('vom', 52), ('haben', 51), ('einem', 49)]
task_2 introduced_words
HATE [('diese', 6), ('werden', 5), ('grun', 4), ('sollte', 4), ('konnen', 3)]
NONE [('einen', 59), ('jetzt', 58), ('war', 57), ('menschen', 57), ('was', 56)]
OFFN [('!!', 11), ('sein', 8), ('dumm', 8), ('etwas', 8), ('sein,', 7)]
PRFN [('ich', 6), ('scheibe', 5), ('etwas', 5), ('keine', 5), ('alle', 4)]
task_2 removed_words
HATE [('diesen', 5), ('dass', 5), ('kann', 4), ('wohl', 4), ('also', 4)]
NONE [('wieder', 56), ('uber', 55), ('vom', 52), ('haben', 51), ('einem', 49)]
OFFN [('du', 12), ('nur', 8), ('muss', 8), ('eure', 8), ('haben', 7)]
PRFN [('bin', 5), ('was', 5), ('wohl', 4), ('keine', 4), ('fressen', 4)]
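The introduced/removed counts in the listings above can be reproduced with a short script. This is a sketch under an assumed naive whitespace tokenization; the helper name `changed_words` is hypothetical:

```python
from collections import Counter

def changed_words(original_texts, backtranslated_texts, top_k=5, skip_common=50):
    """Count words introduced/removed by back translation, drop the
    `skip_common` most frequently changed words overall, and report the
    remaining `top_k` per direction. Tokenization is a naive whitespace
    split (an assumption; the exact tokenizer is not specified)."""
    introduced, removed = Counter(), Counter()
    for orig, back in zip(original_texts, backtranslated_texts):
        o = Counter(orig.lower().split())
        b = Counter(back.lower().split())
        introduced.update(b - o)  # words present only after back translation
        removed.update(o - b)     # words lost during back translation
    common = {w for w, _ in (introduced + removed).most_common(skip_common)}
    def top(counts):
        return [(w, n) for w, n in counts.most_common() if w not in common][:top_k]
    return top(introduced), top(removed)
```

Running this per label (HOF, NOT, etc.) over the corresponding subsets of the training data would yield lists in the format shown above.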
# QCD effects in lepton angular distributions of Drell-Yan/$Z$ production and jet discrimination
Wen-Chen Chang, Randall Evan McClellan, Jen-Chieh Peng, Oleg Teryaev
###### Abstract
We present a comparison of data on lepton angular distributions of Drell-Yan/$Z$ production with fixed-order pQCD calculations, which illustrate the baseline of pQCD effects. For $Z$ production, we predict that $A_{0}$ and $A_{2}$ for $Z$ plus single gluon-jet events are very different from those for $Z$ plus single quark-jet events, providing a new experimental tool for checking various algorithms that attempt to discriminate quark jets from gluon jets. Using an intuitive geometric approach, we show that the violation of the Lam-Tung relation, appearing in the large transverse-momentum region, is attributed to the presence of a non-coplanarity effect. This interpretation is consistent with the appearance of the violation beyond the LO QCD effect.
## 1 Introduction
Measuring lepton angular distributions of Drell-Yan (D-Y) process [1] provides
a powerful tool to explore the reaction mechanisms and the parton
distributions of colliding hadrons. For example, the Lam-Tung (L-T) relation
[2] has been proposed as a benchmark of the perturbative QCD (pQCD) effect in
D-Y process. Violations of L-T relation were observed in the measurements of
D-Y production by the fixed-target experiments as well as $\gamma^{*}$/$Z$
production by the CMS [3] and ATLAS [4] experiments at LHC. It is important to
understand the origin of these violations.
It is found that the violation of the L-T relation seen in the CMS and ATLAS data at transverse momentum ($q_{T}$) greater than 5 GeV could be well described by taking into account the NNLO pQCD effect [5]. On the other hand, the agreement is not as good in a similar comparison [6, 7] for the fixed-target data of NA10 [8], E615 [9] and E866 [10] at $q_{T}<3$ GeV. Transverse-momentum-dependent Boer-Mulders functions [11], correlating the transverse spin of quarks with their intrinsic transverse momentum, have been suggested to account for the violation of the L-T relation observed at small $q_{T}$ in the fixed-target experiments.
In these proceedings, we show that the $q_{T}$ dependence of the angular distribution coefficients, as well as the violation of the Lam-Tung relation, could be obtained if the angular distribution coefficients were analyzed as a function of the number of accompanying jets in $Z$-boson production measured by the CMS and ATLAS Collaborations [12]. Furthermore, we compare the data on the dilepton angular parameters $\lambda$, $\mu$, $\nu$ and the L-T violation quantity $1-\lambda-2\nu$ measured by E615 [9] with fixed-order pQCD calculations. Finally, we interpret some notable features of the pQCD results using the geometric model [13, 14, 15]. More results and greater details can be found in Refs. [7, 12].
## 2 Lepton angular distributions of $Z$ production and jet discrimination
The lepton angular distribution in the $Z$ rest frame can be expressed as [3, 4]
$\frac{d\sigma}{d\Omega}\propto(1+\cos^{2}\theta)+\frac{A_{0}}{2}(1-3\cos^{2}\theta)+A_{1}\sin 2\theta\cos\phi+\frac{A_{2}}{2}\sin^{2}\theta\cos 2\phi+A_{3}\sin\theta\cos\phi+A_{4}\cos\theta+A_{5}\sin^{2}\theta\sin 2\phi+A_{6}\sin 2\theta\sin\phi+A_{7}\sin\theta\sin\phi,$ (1)
where $\theta$ and $\phi$ are the polar and azimuthal angles of the leptons in the rest frame of the $Z$. Since the intrinsic transverse momenta of the annihilating quark and antiquark are neglected in the original Drell-Yan model, the angular distribution is simply $1+\cos^{2}\theta$ and all angular distribution coefficients, $A_{i}$, vanish. For a non-zero dilepton transverse momentum, $q_{T}$, these coefficients can deviate from zero due to QCD effects. However, it was predicted that the coefficients $A_{0}$ and $A_{2}$ should remain identical, $A_{0}=A_{2}$, i.e., the Lam-Tung relation [2].
Figure 1: Comparison between the CMS data [3] of $A_{0}$ and $A_{0}-A_{2}$ for
$Z$ production from $p-p$ collisions with fixed-order pQCD calculations.
Curves correspond to calculations described in the text.
Figure 1 shows the CMS data for $A_{0}$ and $A_{0}-A_{2}$. A pronounced $q_{T}$ dependence of $A_{0}$ is observed, and the Lam-Tung relation, $A_{0}-A_{2}=0$, is clearly violated. There are two NLO QCD subprocesses for $Z$ production: the
$q\bar{q}\rightarrow Zg$ annihilation process, and $qg\rightarrow Zq$ quark
Compton scattering process. In the Collins-Soper frame [16], the NLO pQCD
predictions of $A_{0}$ and $A_{2}$ as a function of $q_{T}$ of $Z$ for these
two processes are $A_{0}=A_{2}=q^{2}_{T}/(M_{Z}^{2}+q^{2}_{T})$ [17] and
$A_{0}=A_{2}=5q^{2}_{T}/(M_{Z}^{2}+5q^{2}_{T})$ [18, 19], respectively. The
dotted and dashed curves in Fig. 1(a) correspond to these NLO expressions.
As the $q\bar{q}$ and $qg$ processes contribute to the $pp\to ZX$ reaction
incoherently, the observed $q_{T}$ dependence of $A_{0}$ reflects the combined
effect of these two contributions. A best-fit to the CMS $A_{0}$ data, shown
as the solid line in Fig. 1(a), gives a mixture of 58.5$\pm$1.6% $qg$ and
41.5$\pm$1.6% $q\bar{q}$ processes. For $pp$ collisions at the LHC, the $qg$
process is expected to be more important than the $q\bar{q}$ process, in
agreement with the best-fit result. For $Z$ plus single-jet events, Fig. 1(a) shows that there is a remarkable difference in the $q_{T}$ dependence of $A_{0}$ between the $q\bar{q}$ annihilation process and the $qg$ Compton process. Since a high-$p_{T}$ gluon (quark) jet is associated with the $q\bar{q}$ ($qg$) process at order $\alpha_{s}$, one could first utilize existing algorithms for quark (gluon) jet identification to separate the $q\bar{q}$ annihilation events from the $qg$ Compton events and investigate the angular distributions of the individual event samples. The angular distribution coefficients for $Z$ plus single-jet data would also provide a powerful tool for testing various algorithms designed to distinguish quark jets from gluon jets.
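The NLO expressions and the incoherent two-process mixture described above can be sketched numerically. The function names, the assumed $Z$ mass value, and the use of the quoted best-fit $qg$ fraction of 58.5% are illustrative:

```python
M_Z = 91.19  # Z boson mass in GeV (assumed value)

def A0_qqbar(qT):
    """NLO prediction for the q qbar -> Z g annihilation process."""
    return qT**2 / (M_Z**2 + qT**2)

def A0_qg(qT):
    """NLO prediction for the q g -> Z q Compton process."""
    return 5 * qT**2 / (M_Z**2 + 5 * qT**2)

def A0_mixture(qT, f_qg=0.585):
    """Incoherent mixture of the two subprocesses; f_qg = 0.585 is the
    best-fit qg fraction quoted in the text."""
    return f_qg * A0_qg(qT) + (1.0 - f_qg) * A0_qqbar(qT)
```

At any $q_{T}>0$ the Compton curve lies above the annihilation curve, which is why the two event classes are distinguishable through $A_{0}$.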
For the $Z$ plus multi-jet data, the L-T relation is expected to be violated at a higher level than for the inclusive production data. Excluding the $Z$ plus single-jet events, which satisfy the L-T relation, would enhance the violation of the L-T relation. We have carried out pQCD calculations using
DYNNLO [20, 21] to demonstrate this effect. Figure 1(b) compares the DYNNLO
calculations with the CMS $A_{0}-A_{2}$ data. The black band corresponds to
the NNLO calculation including contributions from the events of $Z$ with
single jet and two jets. The blue band singles out the contributions to
$A_{0}-A_{2}$ from $Z$ plus two jets only, showing that the violation of the
Lam-Tung relation is indeed amplified for the multi-jet events. This can be
readily tested with the data collected at the LHC.
## 3 Lepton angular distributions of Drell-Yan production in fixed-target
experiments
In the rest frame of the virtual photon in the D-Y process, another expression
for the lepton angular distributions commonly used by the fixed-target
experiments is given as [2]
$\frac{d\sigma}{d\Omega}\propto 1+\lambda\cos^{2}\theta+\mu\sin 2\theta\cos\phi+\frac{\nu}{2}\sin^{2}\theta\cos 2\phi,$ (2)
where $\theta$ and $\phi$ refer to the polar and azimuthal angles of the leptons. The $\lambda,\mu,\nu$ are related to $A_{0},A_{1},A_{2}$ in Eq. (1) via
$\lambda=\frac{2-3A_{0}}{2+A_{0}},\qquad\mu=\frac{2A_{1}}{2+A_{0}},\qquad\nu=\frac{2A_{2}}{2+A_{0}}.$ (3)
Equation (3) shows that the L-T relation, $1-\lambda-2\nu=0$, is equivalent to $A_{0}=A_{2}$.
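A quick algebraic consequence of Eq. (3) is that $1-\lambda-2\nu=4(A_{0}-A_{2})/(2+A_{0})$, which makes the equivalence with $A_{0}=A_{2}$ explicit. The small sketch below (hypothetical helper names) verifies the conversion numerically:

```python
def lam_mu_nu(A0, A1, A2):
    """Convert the coefficients A0, A1, A2 of Eq. (1) to the parameters
    (lambda, mu, nu) of Eq. (2), using Eq. (3)."""
    return (2 - 3*A0) / (2 + A0), 2*A1 / (2 + A0), 2*A2 / (2 + A0)

def lt_violation(A0, A1, A2):
    """The L-T violation 1 - lambda - 2*nu; algebraically this equals
    4*(A0 - A2)/(2 + A0), so it vanishes exactly when A0 = A2."""
    lam, _, nu = lam_mu_nu(A0, A1, A2)
    return 1 - lam - 2*nu
```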
Figure 2: Comparison of NLO (red points) and NNLO (blue points) fixed-order
pQCD calculations with the E615 $\pi^{-}+W$ D-Y data at 252 GeV [9] (black
points) for $\lambda$, $\mu$, $\nu$ and $1-\lambda-2\nu$.
In Fig. 2, we compare the results of $\lambda$, $\mu$, $\nu$, and the L-T
violation, $1-\lambda-2\nu$, from the fixed-order pQCD calculations with
252-GeV $\pi^{-}+W$ data from E615 experiment [9]. The angular parameters are
evaluated as a function of $q_{T}$ in the Collins-Soper frame. Overall, the
calculated $\lambda$, $\mu$ and $\nu$ exhibit distinct $q_{T}$ dependencies.
At $q_{T}\rightarrow 0$, $\lambda$, $\mu$ and $\nu$ approach the values
predicted by the collinear parton model: $\lambda=1$ and $\mu=\nu=0$. As
$q_{T}$ increases, Fig. 2 shows that $\lambda$ decreases toward its
large-$q_{T}$ limit of $-1/3$ while $\nu$ increases toward $2/3$, for both
the $q\bar{q}$ and $qg$ processes [18, 19]. The $q_{T}$ dependence of $\mu$ is
relatively mild compared to $\lambda$ and $\nu$. This is understood as a
result of some cancellation effect, to be discussed in Sec. 4.
Comparing the results of the NLO and NNLO calculations, $\lambda{\rm(NNLO)}$ is smaller than $\lambda{\rm(NLO)}$, while $\mu$ and $\nu$ are very similar at NLO and NNLO. The amount of L-T violation, $1-\lambda-2\nu$, is zero in the NLO calculation, and nonzero and positive in the NNLO calculation. As seen in Fig. 2, pQCD predicts a sizable magnitude for $\nu$, comparable to the data. Such a pQCD effect should be included in the determination of the nonperturbative Boer-Mulders effect from the data on $\nu$.
## 4 Geometric model
As introduced above, both CMS and E615 data of lepton angular distributions
for $Z$ and D-Y production can be reasonably well described by the NLO and
NNLO pQCD calculations. It is interesting to see that various salient features
of pQCD calculations could be understood using a geometric approach developed
in Refs. [13, 14].
In the Collins-Soper $\gamma^{*}/Z$ rest frame, the hadron plane, the quark
plane, and the lepton plane of collision geometry are defined [13, 14]. A pair
of collinear $q$ and $\bar{q}$ with equal momenta annihilate into a
$\gamma^{*}/Z$. The momentum unit vector of $q$ is defined as
$\hat{z}^{\prime}$, and the quark plane is formed by the $\hat{z}^{\prime}$
and the $\hat{z}$ axes of Collins-Soper frame. The angular coefficients
$A_{i}$ in Eq. (3) can be expressed in term of $\theta_{1}$ and $\phi_{1}$ as
follows:
$\displaystyle
A_{0}=\langle\sin^{2}\theta_{1}\rangle,~{}~{}~{}A_{1}=\frac{1}{2}\langle\sin
2\theta_{1}\cos\phi_{1}\rangle,~{}~{}~{}A_{2}=\langle\sin^{2}\theta_{1}\cos
2\phi_{1}\rangle,$ (4)
where the $\theta_{1}$ and $\phi_{1}$ are the polar and azimuthal angles of
the natural quark axis $\hat{z}^{\prime}$ of the quark plane in the Collins-
Soper frame.
Up to NLO ($\mathcal{O}(\alpha_{S})$) in pQCD, the quark plane coincides with
the hadron plane and $\phi_{1}=0$. Therefore $A_{0}=A_{2}$ or
$1-\lambda-2\nu=0$, i.e., the L-T relation is satisfied. Higher order pQCD
processes allow the quark plane to deviate from the hadron plane, i.e.,
$\phi_{1}\neq 0$. This acoplanarity effect leads to a violation of the L-T
relation. For a nonzero $\phi_{1}$, Eq. (4) shows that $A_{2}<A_{0}$.
Therefore, when the L-T relation is violated, $A_{0}$ must be greater than
$A_{2}$ or, equivalently, $1-\lambda-2\nu>0$. This expectation of
$1-\lambda-2\nu>0$ in this geometric approach agrees with the results of NNLO
pQCD calculations shown in Fig. 2. The geometric approach offers a simple and
intuitive interpretation for this result.
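The statement that acoplanarity forces $A_{2}<A_{0}$ in Eq. (4) can be illustrated with a simple Monte Carlo average over the quark-axis angles. The sampling distributions below are arbitrary choices for illustration, not physical predictions:

```python
import math, random

def A0_A2(samples):
    """Monte Carlo estimates of A0 = <sin^2 theta1> and
    A2 = <sin^2 theta1 cos 2 phi1> from (theta1, phi1) samples, per Eq. (4)."""
    n = len(samples)
    A0 = sum(math.sin(t)**2 for t, _ in samples) / n
    A2 = sum(math.sin(t)**2 * math.cos(2*p) for t, p in samples) / n
    return A0, A2

random.seed(0)
thetas = [random.uniform(0.0, math.pi/4) for _ in range(10000)]

# Coplanar quark plane (phi1 = 0): cos 2*phi1 = 1, so A0 = A2 (Lam-Tung holds).
coplanar = [(t, 0.0) for t in thetas]
# Acoplanar quark plane (phi1 spread about 0): cos 2*phi1 < 1, so A2 < A0.
acoplanar = [(t, random.gauss(0.0, 0.5)) for t in thetas]
```

Since $\cos 2\phi_{1}\leq 1$ with strict inequality for $\phi_{1}\neq 0$, any spread in $\phi_{1}$ reduces $A_{2}$ relative to $A_{0}$, reproducing $1-\lambda-2\nu>0$.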
Furthermore, the sign of $\mu$ can be either positive or negative, depending on which parton and hadron the gluon is emitted from [14, 7]. Hence, one
expects some cancellation effects for $\mu$ among contributions from various
processes. Each process is weighted by the corresponding parton density
distributions. At mid-rapidity, the momentum fraction carried by the beam
parton is comparable to that of the target parton. Therefore, the weighting
factors for various processes are of similar magnitude and the cancellation
effect could be very significant, resulting in a small value of $\mu$.
## 5 Summary
We have presented a comparison of the measurements of the angular parameters
$A_{0}$ and $A_{0}-A_{2}$ of the $Z$ production from the CMS experiment as
well as $\lambda$, $\mu$, $\nu$ and $1-\lambda-2\nu$ of the D-Y process from
the fixed-target E615 experiment with the corresponding results from the NLO
and NNLO pQCD calculations. Qualitatively the transverse momentum dependence
of measured angular parameters could be described by pQCD. The L-T violation
part $A_{0}-A_{2}$ or $1-\lambda-2\nu$ remains zero in the NLO pQCD
calculation and turns positive in NNLO pQCD. The measurement of $A_{0}$ and
$A_{2}$ coefficients in $Z$ plus single-jet or multi-jet events would provide valuable insights into the origin of the violation of the L-T relation and could be used as an index for discriminating the intrinsic properties of high-$q_{T}$ jets.
Within the geometric picture, the occurrence of acoplanarity between the quark plane and the hadron plane ($\phi_{1}\neq 0$) for pQCD processes beyond NLO offers an interpretation of the violation of the L-T relation. The predicted positive value of $1-\lambda-2\nu$, or $A_{0}>A_{2}$ when $\phi_{1}$ is nonzero, is consistent with the NNLO pQCD results.
## References
* [1] J. C. Peng and J. W. Qiu, Prog. Part. Nucl. Phys. 76, 43 (2014).
* [2] C. S. Lam and W. K. Tung, Phys. Rev. D 21, 2712 (1980).
* [3] CMS Collaboration, V. Khachatryan et al., Phys. Lett. B 750, 154 (2015).
* [4] ATLAS Collaboration, G. Aad et al., JHEP 08, 159 (2016).
* [5] R. Gauld, A. Gehrmann-De Ridder, T. Gehrmann, E. W. N. Glover and A. Huss, JHEP 1711, 003 (2017).
* [6] M. Lambertsen and W. Vogelsang, Phys. Rev. D 93, 114013 (2016).
* [7] W. C. Chang, R. E. McClellan, J. C. Peng and O. Teryaev, Phys. Rev. D 99, 014032 (2019).
* [8] NA10 Collaboration, S. Falciano et al., Z. Phys. C 31, 513 (1986); M. Guanziroli et al., Z. Phys. C 37, 545 (1988).
* [9] E615 Collaboration, J. S. Conway et al., Phys. Rev. D 39, 92 (1989); J. G. Heinrich et al., Phys. Rev. D 44, 1909 (1991).
* [10] E866/NuSea Collaboration, R. S. Towell et al., Phys. Rev. D 64, 052002 (2001).
* [11] D. Boer, Phys. Rev. D 60, 014012 (1999).
* [12] J. C. Peng, W. C. Chang, R. E. McClellan and O. Teryaev, Phys. Lett. B 797, 134895 (2019).
* [13] J. C. Peng, W. C. Chang, R. E. McClellan, and O. Teryaev, Phys. Lett. B 758, 384 (2016).
* [14] W. C. Chang, R. E. McClellan, J. C. Peng and O. Teryaev, Phys. Rev. D 96, 054020 (2017).
* [15] J. C. Peng, D. Boer, W. C. Chang, R. E. McClellan and O. Teryaev, Phys. Lett. B 789, 356 (2019).
* [16] J. C. Collins and D. E. Soper, Phys. Rev. D16, 2219 (1977).
* [17] J. C. Collins, Phys. Rev. Lett. 42, 291 (1979).
* [18] R. L. Thews, Phys. Rev. Lett. 43, 987 (1979).
* [19] J. Lindfors, Phys. Scr. 20, 19 (1979).
* [20] S. Catani and M. Grazzini, Phys. Rev. Lett. 98, 222002 (2007).
* [21] S. Catani et al., Phys. Rev. Lett. 103, 082001 (2009).
# Inadequacy of Linear Methods for Minimal Sensor Placement and Feature Selection in Nonlinear Systems: A New Approach Using Secants
Samuel E. Otto (Department of Mechanical and Aerospace Engineering, Princeton University, Princeton, NJ 08544,<EMAIL_ADDRESS>and Clarence W. Rowley
###### Abstract.
Sensor placement and feature selection are critical steps in engineering,
modeling, and data science that share a common mathematical theme: the
selected measurements should enable solution of an inverse problem. Most real-
world systems of interest are nonlinear, yet the majority of available
techniques for feature selection and sensor placement rely on assumptions of
linearity or simple statistical models. We show that when these assumptions
are violated, standard techniques can lead to costly over-sensing without
guaranteeing that the desired information can be recovered from the
measurements. In order to remedy these problems, we introduce a novel data-
driven approach for sensor placement and feature selection for a general type
of nonlinear inverse problem based on the information contained in secant
vectors between data points. Using the secant-based approach, we develop three
efficient greedy algorithms that each provide different types of robust, near-
minimal reconstruction guarantees. We demonstrate them on two problems where
linear techniques consistently fail: sensor placement to reconstruct a fluid
flow formed by a complicated shock-mixing layer interaction and selecting
fundamental manifold learning coordinates on a torus.
###### Key words and phrases:
nonlinear inverse problems, state estimation, feature selection, manifold
learning, greedy algorithms, submodular optimization, shock-turbulence
interaction, reduced-order modeling
This research was supported by the Army Research Office under grant number
W911NF-17-1-0512. S.E.O. was supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-2039656. Any
opinions, findings, and conclusions or recommendations expressed in this
material are those of the authors and do not necessarily reflect the views of
the National Science Foundation.
## 1\. Introduction
Reconstructing the state of complex systems like fluid flows, chemical
processes, and biological networks from measurements taken by a few carefully
chosen sensors is a crucial task for controlling, forecasting, and building
simplified models of these systems. In this setting it is important to be able to reconstruct the relevant information about the system using the smallest total number of measurements: this includes minimizing the number of sensors to reduce cost and using the shortest possible measurement histories to shorten response time. Feature selection in statistics and machine learning is
a related task where one tries to find a small subset of measured variables
(features) in the available data that allow one to reliably predict a quantity
of interest.
Nonlinear reconstruction can yield large improvements over linear
reconstruction when the sensors or features are carefully selected [20].
Successful nonlinear reconstruction techniques include neural networks
[37],[38], deep nonlinear state estimators [23], [29], and convex $\ell^{1}$
minimization to reveal sparse coefficients in learned libraries [64], [9]. The
need for nonlinear representation and reconstruction is also recognized in the
reduced-order-modeling community where it is called “nonlinear Galerkin”
approximation [31], [47], [34]. These methods are necessary because in many
systems of interest, the state is found to lie near a low-dimensional
underlying manifold that is curved in such a way that it is not contained in
any low-dimensional subspace [40]. We will show that the best possible linear
reconstruction accuracy is fundamentally limited by the number of measurements
(features) and the fraction of the variance that is captured in the principal
subspace [24] of that dimension. In essence, any linear representation in a
subspace is “too loose” and demands an excessive number of measurements to
even have a hope of accurately reconstructing the state using linear
functions. Nonlinear reconstruction is much more powerful, as Whitney’s
celebrated embedding theorem (Theorem 5, [62]) shows that states on any
$r$-dimensional smooth manifold can be reconstructed using $2r$ carefully
chosen measurements. If the measurements must be linear functions of the state
on a compact sub-manifold of $\mathbb{R}^{n}$ then $2r+1$ can be found [61].
With many measurements available from our sensors (though not necessarily ones
that achieve Whitney’s results), the problem that remains is to properly
choose them so that nonlinear reconstruction is possible and robust to noise.
While nonlinear reconstruction has proved to be extremely advantageous, the
overwhelming majority of sensor placement and feature selection methods rely
on measures of linear or Gaussian reconstruction accuracy as an optimization
criterion. Such methods include techniques based on sampling modal bases [66],
[33], [14], [18], [8], linear dynamical system models [36], [17], [53], [54],
[59], Bayesian and maximum likelihood optimality in linear inverse problems
[13], [26], [51], information-theoretic criteria under Gaussian or other
simple statistical models [28], [12], [11], [52], [50], and sparse linear
approximation in dictionaries using LASSO [57], [67] or orthogonal matching
pursuit [41], [58]. We provide an overview of a representative collection of
these methods that we shall use as a basis for comparison in Section 2.
We show that relying on these linear, Gaussian techniques to identify sensors
that will be used for nonlinear reconstruction can lead to costly over-sensing
when the underlying manifold is low-dimensional, but the data do not lie in an
equally low-dimensional subspace. This effect is most pronounced when the most
energetic (highest variance) components of the data are actually functions of
less-energetic components, but not vice versa. In such cases, the linear
techniques are consistently “tricked” into sensing the most energetic
components while failing to capture the important less energetic ones that can
actually be used for minimal reconstruction. These situations are not merely
academic, and they actually abound in physics and in data science. As we shall
discuss in Section 3, the problem appears in mixing layer fluid flows and in
the presence of shock waves, which are both ubiquitous in aerodynamics. The
presence of important low-energy sub-harmonic frequencies is also generic
behavior after a period-doubling bifurcation, which is a common route to
chaos, for instance in ecosystem collapse [60] and cardiac arrhythmia [45]. In
data science, the problem is most pronounced when we try to select fundamental
nonlinear embedding coordinates for a data set using manifold learning
techniques like kernel PCA [49], Laplacian eigenmaps [1], diffusion maps [16],
and Isomap [56] as we shall discuss in Section 3.3.
In order to address the limitations of linear, Gaussian methods for sensor
placement and feature selection demonstrated in the first half of the paper,
we develop a novel data-driven approach based on consideration of secant
vectors between states in Section 4. Related secant-based approaches have been
pioneered by [5], [25], [21], [55] for the purpose of finding optimal
embedding subspaces. While their considerations of secants lead to continuous
optimization problems over subspaces, our considerations of secants lead to
combinatorial optimization problems over sets of sensors. We develop three
different secant-based objectives together with greedy algorithms that each
provide different types of robust, near-minimal reconstruction guarantees for
very general types of nonlinear inverse problems. The guarantees stem from the
underlying geometric information that is captured by secants and encoded in
our optimization objectives. Moreover, the objectives we consider each have
the celebrated diminishing returns property called _submodularity_ , allowing
us to leverage the classical results of G. L. Nemhauser and L. A. Wolsey et
al. [39], [63] to guarantee the performance of efficient greedy algorithms for
sensor placement. We also leverage concentration of measure results in order
to prove performance guarantees when the secants are randomly down-sampled,
enabling computational scalability to very large data sets. Each of these
techniques demonstrates greatly improved performance compared to a large
collection of linear techniques on a canonical shock-mixing layer flow problem
[65] as well as for selecting fundamental manifold learning coordinates.
## 2\. Background on Linear, Gaussian Techniques
The predominant sensor placement, feature selection, and experimental design
techniques available today rely on linear and/or Gaussian assumptions about
the underlying data: that is, that the data live in a low-dimensional subspace
and/or have a Gaussian distribution. Under these assumptions, it becomes easy
to quantify the performance of sensors, features, or experiments, using a
variety of information theoretic, Bayesian, maximum likelihood, or other
optimization criteria. A comprehensive review is beyond the scope of this
paper, and of course we do not claim that linear methods always fail. Rather,
we argue that because the underlying linear, Gaussian assumptions are violated
in many real-world problems, we cannot expect them to find small collections
of sensors that enable nonlinear reconstruction of the desired quantities. We
shall briefly review the collection of linear techniques that we shall compare
to throughout this work and that we think are representative of the current
literature.
### 2.1. (Group) LASSO
The Least Absolute Shrinkage and Selection Operator (LASSO) method introduced
by R. Tibshirani [57] is a highly successful technique for feature selection
in machine learning that has found additional applications in compressive
sampling recovery [10] and system identification [7]. A generalization by M.
Yuan and Y. Lin [67] called group LASSO is especially relevant for sensor
placement since it allows measurements to be selected in groups that might
come from the same sensor at different instants of time. Suppose we are given
a collection of data consisting of available measurements
${\mathbf{m}}_{j}({\mathbf{x}}_{i})$, $j=1,\ldots,M$ along with relevant
quantities ${\mathbf{g}}({\mathbf{x}}_{i})$ that we wish to reconstruct over a
collection of states ${\mathbf{x}}_{i}$, $i=1,\ldots,N$. The group LASSO
convex optimization problem takes the form
(1)
$\operatorname*{\min\\!imize\enskip}_{{\mathbf{A}}_{1},\ldots,{\mathbf{A}}_{M}}\sum_{i=1}^{N}\Big{\|}{\mathbf{g}}({\mathbf{x}}_{i})-\sum_{j=1}^{M}{\mathbf{A}}_{j}{\mathbf{m}}_{j}({\mathbf{x}}_{i})\Big{\|}_{2}^{2}+\gamma\sum_{j=1}^{M}\left\|{\mathbf{A}}_{j}\right\|_{F}$
and tries to reconstruct the targets as accurately as possible using a linear
combination of the measurements subject to a sparsity-promoting penalty. The
strength of the penalty depends on the user-specified parameter $\gamma\geq 0$
and forces the coefficient matrices ${\mathbf{A}}_{j}$ on many of the
measurement groups to be identically zero. Those coefficient matrices with
nonzero entries indicate the sensors that should be used to _linearly_
reconstruct the target variables with high accuracy.
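For concreteness, the group LASSO objective in Eq. (1) can be evaluated as follows. This is a pure-Python sketch for small problems (a real implementation would use an optimized solver), and the function names are our own:

```python
import math

def frobenius(A):
    """Frobenius norm of a matrix given as a list of rows."""
    return math.sqrt(sum(v*v for row in A for v in row))

def matvec(A, x):
    """Matrix-vector product for list-of-rows matrices."""
    return [sum(a*xi for a, xi in zip(row, x)) for row in A]

def group_lasso_objective(A_groups, measurements, targets, gamma):
    """Eq. (1): squared reconstruction error plus the group-sparsity penalty.
    A_groups[j] is the coefficient matrix for sensor group j,
    measurements[i][j] is m_j(x_i), and targets[i] is g(x_i)."""
    loss = 0.0
    for m_i, g_i in zip(measurements, targets):
        pred = [0.0] * len(g_i)
        for A_j, m_ij in zip(A_groups, m_i):
            for k, v in enumerate(matvec(A_j, m_ij)):
                pred[k] += v
        loss += sum((g - p)**2 for g, p in zip(g_i, pred))
    return loss + gamma * sum(frobenius(A_j) for A_j in A_groups)
```

Minimizing this objective drives many of the $\mathbf{A}_{j}$ to exactly zero; the groups with nonzero coefficient matrices are the selected sensors.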
### 2.2. Determinantal “D”-Optimal Selection
Suppose the state ${\mathbf{x}}$ has a prior probability distribution with
covariance ${\mathbf{C}}_{{\mathbf{x}}}$ and the target variables
${\mathbf{g}}({\mathbf{x}})$ and measurements
${\mathbf{m}}_{j}({\mathbf{x}})$, $j=1,\ldots,M$ are linear functions of the
state
(2)
${\mathbf{g}}({\mathbf{x}})={\mathbf{T}}{\mathbf{x}},\qquad{\mathbf{m}}_{j}({\mathbf{x}})={\mathbf{M}}_{j}{\mathbf{x}}+{\mathbf{n}}_{j}$
where ${\mathbf{n}}_{j}$ is the mean-zero, state independent, noise from the
$j$th sensor with covariance ${\mathbf{C}}_{{\mathbf{n}}_{j}}$. Then, if
${\mathbf{M}}_{\mathscr{S}}$ is a matrix with rows given by ${\mathbf{M}}_{j}$
and ${\mathbf{C}}_{{\mathbf{n}}_{\mathscr{S}}}$ is a block diagonal matrix
formed from ${\mathbf{C}}_{{\mathbf{n}}_{j}}$, for $j$ in a given set of
sensors $\mathscr{S}$, then the optimum (least-squares) linear estimate of
${\mathbf{g}}({\mathbf{x}})$ given ${\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})$
has error covariance
(3)
${\mathbf{C}}_{{\mathbf{e}}}(\mathscr{S})={\mathbf{T}}\left({\mathbf{C}}_{{\mathbf{x}}}^{-1}+{\mathbf{M}}_{\mathscr{S}}^{T}{\mathbf{C}}_{{\mathbf{n}}_{\mathscr{S}}}^{-1}{\mathbf{M}}_{\mathscr{S}}\right)^{-1}{\mathbf{T}}^{T}.$
If ${\mathbf{x}}$ and the noise are independent Gaussian random variables then
Eq. 3 is the covariance of the posterior distribution for
${\mathbf{g}}({\mathbf{x}})$ given ${\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})$.
A low-dimensional representation of the state and its covariance are usually
found from data via principal component analysis (PCA) [24] or proper
orthogonal decomposition (POD) [4] when an analytical model is not available.
A common technique, referred to as the Bayesian approach in the optimal design
of experiments [44] is to quantify performance using functions of
${\mathbf{C}}_{{\mathbf{e}}}(\mathscr{S})$ [13]. In particular, Bayesian
determinantal or “D”-optimality entails minimizing
$\log{\det{{\mathbf{C}}_{{\mathbf{e}}}(\mathscr{S})}}$, which, under Gaussian
assumptions, is equivalent to minimizing the conditional entropy [52], [50] or
the volumes of confidence ellipsoids about the maximum a posteriori (MAP)
estimate of ${\mathbf{g}}({\mathbf{x}})$ given
${\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})$ [26]. This approach is widely used
for sensor placement since it readily admits efficient approximations based on
convex relaxation [26] and greedy algorithms [51], [59] with guaranteed
performance. Similar objectives have been used to quantify observability and
controllability for sensor and actuator placement in linear dynamical systems
[53], [54].
When there is no prior probability distribution for ${\mathbf{x}}$ and we want
to estimate the full state ${\mathbf{g}}({\mathbf{x}})={\mathbf{x}}$ from
measurements corrupted by Gaussian noise, we can construct the maximum
likelihood estimate whose error covariance is
(4)
${\mathbf{C}}_{{\mathbf{e}}}(\mathscr{S})=\left({\mathbf{M}}_{\mathscr{S}}^{T}{\mathbf{C}}_{{\mathbf{n}}_{\mathscr{S}}}^{-1}{\mathbf{M}}_{\mathscr{S}}\right)^{-1}.$
Minimizing the volumes of confidence ellipsoids in this setting as is done in
[26] is referred to as maximum likelihood “D”-optimality since it entails
maximizing
$\log\det{\left({\mathbf{M}}_{\mathscr{S}}^{T}{\mathbf{C}}_{{\mathbf{n}}_{\mathscr{S}}}^{-1}{\mathbf{M}}_{\mathscr{S}}\right)}$.
In the absence of the regularizing effect the prior distribution has on the
estimate, we must have at least as many sensor measurements as state variables
in the maximum likelihood setting.
### 2.3. Pivoted QR Factorization
Pivoted matrix factorization techniques, and QR pivoting in particular, have
become a popular choice for sensor placement [33], [6] and feature selection
in reduced-order modeling [14], [18], where the method is often referred to as
the Discrete Empirical Interpolation Method (DEIM). This approach dates back
to P. Businger and G. H. Golub’s seminal work [8], which introduced
Householder-pivoted QR factorization for the purpose of feature selection in
least squares fitting problems. The approach is also closely related to
orthogonal matching pursuit [41] and simultaneous orthogonal matching pursuit
[58], which are widely used sparse approximation algorithms.
In its simplest form, one supposes that the underlying state to be estimated
${\mathbf{g}}({\mathbf{x}})={\mathbf{x}}$ is low dimensional (e.g., using its
PCA or POD coordinate representation) and selects the linear measurements from
among the rows of a matrix ${\mathbf{M}}$ by forming a pivoted QR
decomposition of the form
(5)
${\mathbf{M}}^{T}\begin{bmatrix}[c|c]{\mathbf{P}}_{1}&{\mathbf{P}}_{2}\end{bmatrix}={\mathbf{Q}}\begin{bmatrix}[c|c]{\mathbf{R}}_{1}&{\mathbf{R}}_{2}\end{bmatrix},$
where $\begin{bmatrix}[c|c]{\mathbf{P}}_{1}&{\mathbf{P}}_{2}\end{bmatrix}$ is
a permutation matrix. The first $K=\dim{\mathbf{x}}$ pivot columns forming
${\mathbf{P}}_{1}$ determine a set of sensor measurements
${\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})={\mathbf{M}}_{\mathscr{S}}{\mathbf{x}}={\mathbf{P}}_{1}^{T}{\mathbf{M}}{\mathbf{x}}$
from which ${\mathbf{x}}$ can be robustly recovered as
(6)
${\mathbf{x}}=\left({\mathbf{P}}_{1}^{T}{\mathbf{M}}\right)^{-1}{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})={\mathbf{Q}}\left({\mathbf{R}}_{1}^{T}\right)^{-1}{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}).$
This approach is successful because at each step of the QR pivoting process,
the measurement that maximizes the corresponding diagonal entry of the upper
triangular matrix ${\mathbf{R}}_{1}$ is selected. The resulting large diagonal
entries of ${\mathbf{R}}_{1}$ mean that measurement errors are not strongly
amplified by the linear reconstruction map
${\mathbf{Q}}\left({\mathbf{R}}_{1}^{T}\right)^{-1}$.
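As a minimal sketch of this selection, column-pivoted QR applied to ${\mathbf{M}}^{T}$ (available in SciPy) yields the pivot order of Eq. 5, and with $K=\dim{\mathbf{x}}$ sensors the state is recovered by solving the square system of Eq. 6. The random candidate matrix here is an illustrative stand-in, not data from the paper.

```python
import numpy as np
from scipy.linalg import qr

def qr_sensor_rows(M, k):
    """Select k rows of M as sensors via column-pivoted QR of M^T (Eq. 5)."""
    _, _, piv = qr(M.T, pivoting=True, mode='economic')
    return piv[:k]

# reconstruction as in Eq. 6: with K = dim(x) sensors,
# x is recovered by solving the square system M_S x = m_S
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 4))           # 50 candidate linear measurements
sensors = qr_sensor_rows(M, 4)
x_true = rng.standard_normal(4)
m_S = M[sensors] @ x_true                  # noiseless measurements
x_rec = np.linalg.solve(M[sensors], m_S)
print(np.allclose(x_rec, x_true))          # -> True
```

The pivoting keeps the diagonal entries of ${\mathbf{R}}_{1}$ large, so the selected square system is well conditioned and the solve does not amplify measurement noise strongly.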
## 3\. Problems with Linear Techniques
In this section, we illustrate the problems with employing linear state
reconstruction and sensor placement techniques for nonlinear systems and data
sets by means of an example. We consider the shock-mixing layer interaction
proposed by Yee et al. [65], which has become a canonical problem for studying
jet noise production as well as high-order numerical methods. This problem
captures many key elements of shock wave-turbulent boundary layer interactions
that, according to S. Priebe and M. P. Martín [42] “occur in many external and
internal compressible flow applications such as transonic aerofoils, high-
speed engine inlets, internal flowpaths of scramjets, over-expanded rocket
engine nozzles and deflected control surfaces or any other discontinuities in
the surface geometry of high-speed vehicles.” The resulting pressure and heat
transfer fluctuations can be large, so it is important to monitor the state of
these flows to ensure the safety of a vehicle.
Our goal will be to choose a small number of sensor locations in this flow at
which to measure either the horizontal, $u$, or vertical, $v$, velocity
component in order to reconstruct the entire velocity field. A snapshot of
these velocity fields from the fully-developed flow computed using the high-
fidelity local WENO-type characteristic filtering method of S.-C. Lo et al.
[32] is shown in Fig. 1. While the flow is very nearly periodic, and hence
lives near a one-dimensional loop in state space, the complicated physics
arising from the interaction of the oblique shock with vortices in the
spatially-evolving mixing layer results in data that do not lie near any low-
dimensional subspace. In addition to being high dimensional, this flow
exhibits the low-frequency unsteadiness characteristic of shock wave–turbulent
boundary layer interactions [42], [15], [43] and of spatial mixing layer flows
in general [22].
(a) stream-wise $u$ velocity component
(b) transverse $v$ velocity component
(c) available sensor locations
Figure 1. A snapshot of the $u$ and $v$ velocity components in the shock
mixing-layer flow is shown in (a) and (b) along with the sensors selected
using various methods from among the two components at $1105$ available
locations shown in (c). These methods include LASSO with PCA (black o), LASSO
with Isomap (red x), greedy Bayes D-optimality (magenta x), convex Bayes
D-optimality (black $>$), convex D-optimality for modes $3$ and $4$ (black v),
QR pivoting (green +), and secant-based techniques using detectable
differences ($\\#1,\\#2$: green star, $\\#3$: black star) and the
amplification threshold method (black square).
### 3.1. The Need for Nonlinear Reconstruction
Linear reconstruction is fundamentally confined to a subspace whose dimension
is at most equal to the total number of sensor measurements. Hence the
fraction of the variance that linear reconstruction can capture using $d$
measurements is at most the fraction of the variance along the first $d$
principal components: in particular, the coefficient of determination is
bounded by
(7)
$R^{2}\leq\frac{\sigma_{1}^{2}+\cdots+\sigma_{d}^{2}}{\sigma_{1}^{2}+\cdots+\sigma_{n}^{2}}.$
Examining the fraction of the variance captured by the leading principal
subspaces in Figure 2a leads us to the rather disappointing conclusion that in
order to capture $90\%$ of the variance in the shock-mixing layer flow via
linear reconstruction, we need at least $11$ independent measurements, and to
capture $98\%$ we need at least $33$.
(a) variance orthogonal to principal subspaces
(b) Isomap coordinates
(c) PCA coefficients
Figure 2. The linear and nonlinear dimension reduction techniques PCA (a.k.a.
POD) and Isomap are applied to the shock-mixing layer data. (a) shows the
remaining fraction of the total variance orthogonal to each leading principal
subspace. (b) plots the data in the leading two Isomap embedding coordinates,
revealing that it lies very near a loop in state space. (c) shows how the
leading principal components (modal coefficients) vary with the phase angle
around the loop. The black vertical lines reveal distinct points where the
leading three principal components are identical.
The best possible linear reconstruction performance can be arbitrarily poor
even though the underlying manifold is low-dimensional. We illustrate this
fact with the following toy model that resembles the phase dependence of
principal components in the shock-mixing layer problem shown in Figures 2b and
2c. Let $\theta$ be uniformly distributed over the interval $[0,2\pi]$ and let
the components of the state vector have sinusoidal dependence on the phase
given by
(8) $x_{2k-1}=\sqrt{2}\cos(k\theta),\ \ x_{2k}=\sqrt{2}\sin(k\theta),\ \
k=1,\ldots,n/2.$
Since these components are orthonormal functions of $\theta$ with respect to
the uniform probability measure on $[0,2\pi]$, the state vector has isotropic
covariance $\mathbb{E}{\mathbf{x}}{\mathbf{x}}^{T}={\mathbf{I}}_{n}$ and the
fraction of the variance captured by the leading $d$ principal components is
$d/n$. As the dimension increases, the highest possible coefficient of
determination for linear reconstruction approaches zero since $R^{2}\leq
d/n\to 0$ as $n\to\infty$. Meanwhile, it is clear that the state vector can
be perfectly reconstructed as a nonlinear function of $x_{1}$ and $x_{2}$
alone.
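A quick numerical check of the toy model in Eq. 8 (a sketch, not from the paper): the empirical covariance is approximately isotropic, so linear reconstruction has no preferred subspace, while the higher harmonics are exact polynomial functions of $x_{1}$ and $x_{2}$ via the double-angle identities.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=100_000)
n = 6  # three harmonic pairs from Eq. 8
X = np.column_stack([np.sqrt(2) * f(k * theta)
                     for k in range(1, n // 2 + 1)
                     for f in (np.cos, np.sin)])

# isotropic covariance: every direction carries equal variance,
# so PCA has no preferred subspace and R^2 <= d/n for linear reconstruction
cov = (X.T @ X) / len(theta)
print(np.allclose(cov, np.eye(n), atol=0.05))  # -> True (Monte Carlo)

# yet the higher harmonics are exact nonlinear functions of x1, x2:
x1, x2, x3, x4 = X[:, 0], X[:, 1], X[:, 2], X[:, 3]
print(np.allclose(x3, (x1**2 - x2**2) / np.sqrt(2)))  # cos(2θ) identity
print(np.allclose(x4, np.sqrt(2) * x1 * x2))          # sin(2θ) identity
```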
Indeed, it is possible to reconstruct the entire shock-mixing layer flow-field
as a nonlinear function of the velocity measurements at two carefully chosen
locations. In particular, the measurements made at the locations marked by the
two green stars in Figure 1 are one-to-one with the phase and hence the state
of the flow. This is seen in Figure 3g, where the phase angle (color) — hence
the full state — can be determined uniquely from the values of the
measurements. Meanwhile, the best possible linear reconstruction performance
using two measurements is $R^{2}<0.5$.
In practice, many nonlinear reconstruction techniques are available including
neural networks [37], Gaussian process regression [46], and recurrent neural
networks for time-delayed measurements [29]. Using Gaussian process regression
and the two sensor locations marked by green stars in Figure 1, we obtain
near-perfect, robust reconstruction of the leading $100$ principal components.
The resulting reconstruction accuracy for the flow-fields on a held-out set of
$250$ snapshots is $R^{2}=0.986$.
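Nonlinear reconstruction via Gaussian process regression can be sketched on synthetic loop data (not the flow data): the targets are higher harmonics of a phase angle, recovered from two measurements that are one-to-one with the phase. Everything below, including the data and kernel choice, is an illustrative assumption; only the regression step mirrors the procedure described above.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi, 400)
# two "sensor" measurements that are one-to-one with the phase
m = np.column_stack([np.cos(theta), np.sin(theta)])
# target variables: higher-harmonic "modal coefficients" to reconstruct
g = np.column_stack([np.cos(2 * theta), np.sin(2 * theta)])

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6)
gpr.fit(m[:300], g[:300])
r2 = gpr.score(m[300:], g[300:])   # held-out coefficient of determination
```

On the actual flow data, the targets would instead be the leading principal components and the inputs the two selected velocity measurements.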
### 3.2. The Need for Nonlinear Sensor Placement
With such poor reconstruction afforded by linear techniques, we cannot expect
sensor placement methods based on them to perform any better. This is not to
say that a practitioner will never find lucky sensor locations for nonlinear
reconstruction by employing a sensor placement technique that maximizes linear
reconstruction accuracy. However, this kind of luck is not guaranteed, as
illustrated when we apply state-of-the-art linear sensor placement techniques
to the shock mixing-layer problem. Indeed, Figures 3a, 3b, 3c, 3d, and 3e
provide visual proof that three sensors chosen using LASSO to reconstruct the
leading $100$ principal components, LASSO to reconstruct the leading two
Isomap coordinates, the greedy Bayes D-optimality approach, the convex Bayes
D-optimality approach, and pivoted QR factorization do not produce
measurements that are one-to-one with the state. Implementation details can be
found in Appendix A. In each case, there are at least two distinct states with
different phases on the orbit (color) for which the sensors measure the same
values and hence cannot be used to tell them apart.
Even measuring the leading three principal components directly, which are
optimal for linear reconstruction, cannot always reveal the state of the
shock-mixing layer flow. The black vertical lines in Figure 2c indicate the
phases of two distinct states for which the leading three principal components
agree, yet the fourth differs. One may wonder whether the fact that the third
and fourth principal components are one-to-one with the state can be leveraged
for sensor placement. Even our attempt to place three maximum likelihood
D-optimal sensors using the convex optimization approach of [26] to
reconstruct the third and fourth principal components fails to produce
measurements that can recover the phase of the flow, as seen in Figure 3f.
On the other hand, it is possible to find two sensor locations whose
measurements are one-to-one with the state as shown in Figure 3g. The
resulting curve near which the data lie has a kink in the lower-right region
indicating that while the measurements are one-to-one, the time-derivative of
the state cannot be determined at this point. Capturing time-derivatives is
necessary for building reduced-order models, and this can be accomplished
using the three sensors marked by black squares in Figure 1 and whose
measurements are plotted in Figure 3i. We note, however, that these locations
are far apart in space, and so will be more sensitive to perturbations of the
shear-layer thickness which affects the horizontal spacing of vortices.
(a) LASSO+PCA
(b) LASSO+Isomap
(c) greedy Bayes D-opt.
(d) convex Bayes D-opt.
(e) pivoted QR
(f) convex M.L. D-opt., modes $3,4$
(g) secant detect. diffs., $\\#(\mathscr{S})=2$
(h) secant detect. diffs., $\\#(\mathscr{S})=3$
(i) secant amplification tol.
Figure 3. These plots show the measurements made by sensors selected using
various methods on the shock-mixing layer flow problem. Each dot indicates the
values measured by the sensors and its color indicates the phase of the
corresponding flowfield. The sensors selected using linear methods shown in
(a)-(f) each make identical or nearly identical measurements on distinct
flowfields, indicated by overlapping points with different colors. These
sensors cannot tell those flowfields apart since the measurements are the
same. The sensors selected using secant-based methods shown in (g)-(i) make
distinct measurements for distinct states and have no such overlaps.
The linear techniques, LASSO, greedy and convex Bayesian D-optimal selection,
pivoted QR, and even direct measurement of principal components fail to reveal
the minimum number of sensors needed to reconstruct the state because there is
important information about the flow contained in less-energetic principal
components. In particular, Figure 2c shows that the most energetic two
principal components oscillate with twice the frequency of the third and
fourth most energetic components as one moves around the orbit. In trying to
maximize the variance captured by a linear estimator, the linear sensor
placement techniques are doomed to choose sensors whose measurements return to
the same values twice in one period as in Figures 3a, 3c, and 3e. In addition,
the convex Bayesian D-optimal approach finds sensors that achieve a superior
value of the objective $\log\det{{\mathbf{C}}_{{\mathbf{e}}}(\mathscr{S})}$
than the greedy Bayesian D-optimal approach, yet the resulting measurements in
Figure 3d have many more self-intersections than the greedy method in Figure
3c.
We are forced to conclude that sensor placement based on linear reconstruction
is totally unconnected with nonlinear reconstructability when the underlying
manifold and principal dimensions do not agree. This can be seen most clearly
from the fact that by simply re-scaling each coordinate in the toy model Eq. 8
by positive constants $\alpha_{1},\ldots,\alpha_{n}$, we can trick these
techniques into selecting any given collection of coordinates. Under this
scaling, the covariance matrix becomes
$\text{diag}(\alpha_{1}^{2},\ldots,\alpha_{n}^{2})$ and if we sort the
constants in decreasing order $\alpha_{k_{1}}\geq\alpha_{k_{2}}\geq\cdots$
then the variance captured by a linear reconstruction from $d$ measurements
cannot exceed
(9)
$R^{2}\leq\frac{\alpha_{k_{1}}^{2}+\cdots+\alpha_{k_{d}}^{2}}{\alpha_{1}^{2}+\cdots+\alpha_{n}^{2}},$
according to the bound in Eq. 7. Equality is achieved by the optimal linear
estimator based on measured coordinates $x_{k_{1}},\ldots,x_{k_{d}}$.
Meanwhile, the only pair of coordinates needed for nonlinear reconstruction
are $x_{1}$ and $x_{2}$.
The key point is that sensor placement approaches based on linear
reconstruction tend to pick sensor locations that have high variance over
other choices that can be more informative. The linear approach works well
when a small number of principal components contain essentially all of the
variance or when all higher modal components are very nearly determined by the
lower ones. But as we have shown, linear approaches to sensor placement can
fail catastrophically when genuinely informative fluctuations, e.g. sub-
harmonics, produce significant variance orthogonal to the leading principal
subspace. In order to reveal minimal sensor locations that can be used for
nonlinear reconstruction in such situations, we cannot rely on linear
reconstruction performance as an optimization criterion, and an entirely new
approach is needed. In Section 4 we discuss an approach that can recover the
correct coordinates from which all others can be nonlinearly reconstructed.
### 3.3. Selecting Manifold Learning Coordinates
The examples presented in the previous Section 3.2 involved data lying near a
one-dimensional underlying manifold. Essentially the same problems can occur
for data lying near higher-dimensional manifolds, and an especially
illustrative and practically useful application where this situation is
routinely encountered is manifold learning. In general, manifold learning
seeks to find a small collection of nonlinear coordinates that fully describe
the structure of a dataset, i.e., that embed it in a lower-dimensional space.
Many techniques including kernel PCA [49], Laplacian eigenmaps [1], diffusion
maps [16], and Isomap [56] accomplish this via eigen-decomposition of various
symmetric matrices
(10)
$\mathbf{G}=\boldsymbol{\Phi}\boldsymbol{\Lambda}^{2}\boldsymbol{\Phi}^{T},\qquad\boldsymbol{\Phi}=\begin{bmatrix}\boldsymbol{\phi}_{1}&\cdots&\boldsymbol{\phi}_{r}\end{bmatrix}$
derived from pair-wise similarity among data points. The $k$th eigen-
coordinate of each point in the data set is given by the elements of
$\boldsymbol{\phi}_{k}$, which can be viewed as a discrete approximation of an
eigenfunction of some kernel integral operator on the underlying manifold.
These methods suffer from a well-known issue when the dataset has multiple
length scales: namely, there may be several redundant harmonically related
eigen-coordinates with higher salience (determined by the eigenvalues) before
one encounters a new fundamental eigen-coordinate describing a new set of
features. This makes the search for a fundamental set of eigen-coordinates
that embed the underlying manifold a potentially large combinatorial search
problem.
As a concrete example, consider the Isomap eigen-coordinates shown in Figure 4
computed from 2000 points lying on the torus in $\mathbb{R}^{3}$,
(11) $\mathbf{x}=\left(\left(5+\cos{\theta_{2}}\right)\cos{\theta_{1}},\
\left(5+\cos{\theta_{2}}\right)\sin{\theta_{1}},\ \sin{\theta_{2}}\right),$
with $(\theta_{1},\theta_{2})$ drawn uniformly at random from the square
$[0,2\pi]\times[0,2\pi]$. Toroidal dynamics are known to occur in combustion
instabilities where multiple incommensurate frequencies are observed [19],
[30], producing data that winds around a torus in high-dimensional state
space. One may want to build simplified reduced-order models of these dynamics
by finding a small set of nonlinear coordinates that describe the state on
the torus using manifold learning.
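Sampling the torus of Eq. 11 is straightforward; as a sanity check, every sampled point satisfies the implicit torus equation $(\sqrt{x^{2}+y^{2}}-5)^{2}+z^{2}=1$. The sketch below only generates the data; feeding it to an off-the-shelf Isomap implementation reproduces the eigen-coordinate computation.

```python
import numpy as np

def torus_points(n_pts, seed=0):
    """Sample Eq. 11 with (theta_1, theta_2) uniform on [0, 2*pi]^2."""
    rng = np.random.default_rng(seed)
    t1, t2 = rng.uniform(0.0, 2.0 * np.pi, size=(2, n_pts))
    return np.column_stack([(5.0 + np.cos(t2)) * np.cos(t1),
                            (5.0 + np.cos(t2)) * np.sin(t1),
                            np.sin(t2)])

X = torus_points(2000)
residual = (np.hypot(X[:, 0], X[:, 1]) - 5.0) ** 2 + X[:, 2] ** 2
print(np.allclose(residual, 1.0))  # -> True: all points lie on the torus
```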
Considering the torus in Eq. 11, the underlying kernel integral operators
associated with each manifold learning technique mentioned above are
equivariant with respect to rotations about $\theta_{1}$, meaning that among
their eigenfunctions are always those of the symmetry’s generator, namely
$\phi_{k}({\mathbf{x}})=e^{ik\theta_{1}({\mathbf{x}})}$. Unsurprisingly, the
leading six Isomap eigen-coordinates, ranked by their associated eigenvalues,
are all harmonically related modes resembling the real and imaginary parts of
$e^{ik\theta_{1}}$, which provide redundant information about $\theta_{1}$ and
no information about $\theta_{2}$. The coordinate $\theta_{1}$ corresponds to
larger spatial variations among points and it is not until we encounter the
seventh eigen-coordinate that we learn about the smaller variations associated
with $\theta_{2}$. A naïve user of Isomap might plot the data in the leading
three coordinates and falsely conclude that the data lies on a two-dimensional
gasket. We would like to provide an efficient method for selecting the fundamental
eigen-coordinates $\phi_{1}$, $\phi_{2}$ and $\phi_{7}$, from which all others
can be (nonlinearly) reconstructed; yet again, linear methods fundamentally
cannot be used to select them.
(a) Isomap $\phi_{1}$
(b) Isomap $\phi_{2}$
(c) Isomap $\phi_{3}$
(d) Isomap $\phi_{7}$
Figure 4. Isomap coordinates computed from $2000$ randomly sampled points on
the torus defined by Eq. 11. The leading six coordinates resemble the real and
imaginary components of $e^{ik\theta_{1}}$, $k=1,2,3$, due to the rotational
symmetry, providing redundant information about $\theta_{1}$ and no
information about $\theta_{2}$. The fundamental coordinates $\phi_{1}$,
$\phi_{2}$, and $\phi_{7}$ provide an embedding of the data that captures its
toroidal structure.
Linear methods cannot be used to select manifold learning eigen-coordinates
for essentially the same reason why they failed on the toy models in Section
3.2: the coordinates are all mutually orthogonal as functions supported on the
data! In particular, the covariance among the eigen-coordinates over the data
is isotropic
$\mathbb{E}[\phi_{i}(\mathbf{x})\phi_{j}(\mathbf{x})]=\frac{1}{m}\boldsymbol{\phi}_{i}^{T}\boldsymbol{\phi}_{j}=\frac{1}{m}\delta_{i,j}$
and so all sub-collections of a given size capture the same fraction of the
total eigen-coordinate variance. The methods presented in the following
Section 4 remedy this issue and are capable of selecting the correct set of
fundamental eigen-coordinates on the torus example in Eq. 11.
## 4\. Greedy Algorithms using Secants
With the failure of techniques based on linear reconstruction to select
minimal collections of sensors for nonlinear reconstruction, we propose an
alternative approach that relies on a collection of “secant” vectors between
distinct data points. In this section, we develop this approach, yielding
three related greedy selection techniques with classical theoretical
guarantees on their performance. We also discuss some theoretical results that
provide deterministic performance guarantees for the sensors selected by our
algorithms on unseen data drawn from an underlying set.
We consider a very general type of sensor placement problem that can be stated
as follows. Let the set $\mathcal{X}\subset\mathbb{R}^{n}$ represent the
possible states of the system and suppose that we are interested in some
relevant information about the state described by a function
${\mathbf{g}}:\mathcal{X}\to\mathbb{R}^{q}$. The sensors are also described as
functions of the state ${\mathbf{m}}_{j}:\mathcal{X}\to\mathbb{R}^{d_{j}}$,
$j=1,\ldots,M$ where, with a slight abuse of notation, we will denote the set
of all sensors and the set of all sensor indices by $\mathscr{M}$
interchangeably. Our goal is to choose a small subset of sensors
$\mathscr{S}=\\{j_{1},\ldots,j_{K}\\}\subseteq\mathscr{M}$ so that the
relevant information ${\mathbf{g}}({\mathbf{x}})$ about any state
${\mathbf{x}}\in\mathcal{X}$ can be recovered from the combined measurements
we have selected
(12)
${\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})=\left({\mathbf{m}}_{j_{1}}({\mathbf{x}}),\ldots,{\mathbf{m}}_{j_{K}}({\mathbf{x}})\right)\in\mathbb{R}^{d_{\mathscr{S}}},$
where the measurement dimension is
$d_{\mathscr{S}}=\sum_{j\in\mathscr{S}}d_{j}$. That is, we want to choose
$\mathscr{S}$ in such a way that there exists a reconstruction function
$\boldsymbol{\Phi}_{\mathscr{S}}:\mathbb{R}^{d_{\mathscr{S}}}\to\mathbb{R}^{q}$
so that
(13)
${\mathbf{g}}({\mathbf{x}})=\boldsymbol{\Phi}_{\mathscr{S}}\left({\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})\right)$
for every ${\mathbf{x}}\in\mathcal{X}$.
For such a reconstruction function $\boldsymbol{\Phi}_{\mathscr{S}}$ to exist,
we must meet the modest condition that any two states
${\mathbf{x}},{\mathbf{x}}^{\prime}\in\mathcal{X}$ with different target
values ${\mathbf{g}}({\mathbf{x}})\neq{\mathbf{g}}({\mathbf{x}}^{\prime})$
produce different measured values
${\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})\neq{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})$.
This is nothing but the vertical line test for
$\boldsymbol{\Phi}_{\mathscr{S}}$, ensuring that it is a true function that
does not take multiple values. However, this condition may be met for a
variety of different choices of measurements $\mathscr{S}$ and we shall
introduce three different ways to quantify their performance and choose among
them. In these methods, the notion that $\boldsymbol{\Phi}_{\mathscr{S}}$
should not be sensitive to perturbations of the measurements is key in
quantifying the performance of the sensors. The techniques we propose each
rely on secants, defined below, to measure the sensitivity of
$\boldsymbol{\Phi}_{\mathscr{S}}$.
###### Definition 4.1 (Secant).
A _secant_ is a pair of states $({\mathbf{x}},{\mathbf{x}}^{\prime})$, where
${\mathbf{x}},{\mathbf{x}}^{\prime}\in\mathcal{X}$ and
${\mathbf{x}}\neq{\mathbf{x}}^{\prime}$.
By carefully choosing the objective functions
$f:2^{\mathscr{M}}\to\mathbb{R}$, we can rely on classical results by G. L.
Nemhauser and L. A. Wolsey et al. [39], [63] to prove that greedy algorithms
can be used to place the sensors with near-optimal performance. In particular,
each objective that we propose is normalized so that $f(\emptyset)=0$,
monotone non-decreasing so that $\mathscr{S}\subseteq\mathscr{S}^{\prime}$
implies $f(\mathscr{S})\leq f(\mathscr{S}^{\prime})$, and has a diminishing
returns property called _submodularity_.
###### Definition 4.2 (Submodular Function).
Let $\mathscr{M}$ be a finite set and denote the set of all subsets of
$\mathscr{M}$ by $2^{\mathscr{M}}$. A real-valued function of the subsets
$f:2^{\mathscr{M}}\to\mathbb{R}$ is called “submodular” when it has the
following diminishing returns property: for any element $j\in\mathscr{M}$ and
subsets $\mathscr{S},\mathscr{S}^{\prime}\subseteq\mathscr{M}$,
(14)
$\mathscr{S}\subseteq\mathscr{S}^{\prime}\subseteq{\mathscr{M}}\setminus\\{j\\}\quad\Rightarrow\quad
f(\mathscr{S}\cup\\{j\\})-f(\mathscr{S})\geq
f(\mathscr{S}^{\prime}\cup\\{j\\})-f(\mathscr{S}^{\prime}).$
That is, adding any new element $j$ to the smaller set $\mathscr{S}$ increases
$f$ at least as much as adding the same element to the larger set
$\mathscr{S}^{\prime}\supseteq\mathscr{S}$.
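Definition 4.2 can be verified by brute force on small ground sets; the sketch below checks Eq. 14 for every admissible triple $(\mathscr{S},\mathscr{S}^{\prime},j)$. The example functions in the usage note are standard illustrations (square-root cardinality is submodular; squared cardinality is not), not objectives from this paper.

```python
import itertools

def is_submodular(f, ground, tol=1e-12):
    """Brute-force check of the diminishing-returns property (Eq. 14);
    feasible only for small ground sets."""
    ground = list(ground)
    subsets = [frozenset(c) for r in range(len(ground) + 1)
               for c in itertools.combinations(ground, r)]
    for S in subsets:
        for Sp in subsets:
            if not S <= Sp:
                continue
            for j in ground:
                if j in Sp:
                    continue
                if f(S | {j}) - f(S) < f(Sp | {j}) - f(Sp) - tol:
                    return False
    return True
```

For instance, `is_submodular(lambda S: len(S) ** 0.5, range(4))` holds because the marginal gain of a concave function of cardinality shrinks as the set grows, while `lambda S: len(S) ** 2` fails the check.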
Note that in applications we often do not have direct access to the full set
$\mathcal{X}$, which may be continuous. Rather, we have a discrete collection
of data
$\mathcal{X}_{N}=\\{{\mathbf{x}}_{1},\ldots,{\mathbf{x}}_{N}\\}\subset\mathcal{X}$,
which we assume is large enough to achieve suitable approximations of the
underlying set.
### 4.1. Maximizing Detectable Differences
As we have seen in the first half of this paper, a set of sensors can be
considered good if nearby measurements come only from states whose target
variables are also close together. Otherwise a small perturbation to the
measurements results in a large change in the quantities of interest. One way
to quantify this intuition is to select measurements that minimize the sum of
squared differences in the target variables associated with states whose
measurements are closer together than a fixed detection threshold $\gamma>0$,
i.e.,
(15)
$F_{\gamma}(\mathscr{S}):=\sum_{\begin{subarray}{c}{\mathbf{x}},{\mathbf{x}}^{\prime}\in\mathcal{X}_{N}\
:\\\
\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}<\gamma\end{subarray}}\left\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\right\|_{2}^{2}.$
The length scale $\gamma$ determines how large of a difference between
measurements the user deems to be significant enough to tell the two states
${\mathbf{x}}$ and ${\mathbf{x}}^{\prime}$ apart. For instance, $\gamma^{2}$
might be selected to be proportional to the noise variance when using a desired
number $\\#(\mathscr{S})=K$ of sensors. Let the sum of squared differences in
the target variables along each secant be denoted by
(16)
$F_{\infty}:=\sum_{{\mathbf{x}},{\mathbf{x}}^{\prime}\in\mathcal{X}_{N}}\left\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\right\|_{2}^{2}.$
Then it is clear that minimizing the sum of squared “undetectable” differences
given by Eq. 15 is equivalent to maximizing an objective function
(17)
$\tilde{f}_{\gamma}(\mathscr{S})=F_{\infty}-F_{\gamma}(\mathscr{S})=\sum_{{\mathbf{x}},{\mathbf{x}}^{\prime}\in\mathcal{X}_{N}}\tilde{w}_{\gamma,{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}^{2},$
where $\tilde{w}_{\gamma,{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})$ is
one if
$\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}\geq\gamma$
and is zero otherwise. This weight function indicates whether our measurements
${\mathbf{m}}_{\mathscr{S}}$ can distinguish the states ${\mathbf{x}}$ and
${\mathbf{x}}^{\prime}$ using the detection threshold $\gamma$, and may be
written
(18)
$\tilde{w}_{\gamma,{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})=\mathbbm{1}\left\\{\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}\geq\gamma\right\\},$
where $\mathbbm{1}\\{A\\}=1$ if $A$ is true and $0$ if $A$ is false.
Therefore, we can view the objective in Eq. 17 as the sum of squared
differences that are “detectable.”
Maximizing the objective in Eq. 17 over a fixed number of sensors
$\\#(\mathscr{S})\leq K$ is a combinatorial optimization problem and to our
knowledge does not admit an efficient direct approximation algorithm. However,
if we reformulate the objective using a relaxed weight function
(19)
$w_{\gamma,{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})=\min\left\\{\frac{1}{\gamma^{2}}\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}^{2},\
1\right\\},$
then
(20)
$\boxed{f_{\gamma}(\mathscr{S})=\sum_{{\mathbf{x}},{\mathbf{x}}^{\prime}\in\mathcal{X}_{N}}w_{\gamma,{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})\left\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\right\|_{2}^{2},}$
obtained by replacing $\tilde{w}$ with $w$ in Eq. 17, becomes a normalized,
monotone, submodular function on subsets $\mathscr{S}\subseteq\mathscr{M}$
(Lemma B.3 in the Appendix) and a simple greedy approximation algorithm
guarantees near-optimal performance on this problem! The greedy algorithm
produces a sequence of sets $\mathscr{S}_{1},\mathscr{S}_{2},\ldots$, by
starting with $\mathscr{S}_{0}=\emptyset$ and adding the sensor $j_{k}$ to
$\mathscr{S}_{k-1}$ that maximizes the objective
$f_{\gamma}(\mathscr{S}_{k-1}\cup\\{j\\})$ over all
$j\in\mathscr{M}\setminus\mathscr{S}_{k-1}$. If $\mathscr{S}^{*}_{K}$
maximizes $f_{\gamma}(\mathscr{S})$ over all subsets of size
$\\#(\mathscr{S})=K$ then the classical result of G. L. Nemhauser et al. [39]
states that the objective values attained by the greedily chosen sets satisfy
(21)
$f_{\gamma}(\mathscr{S}_{k})\geq\left(1-e^{-k/K}\right)f_{\gamma}(\mathscr{S}^{*}_{K}),\qquad
k=1,\ldots,\\#(\mathscr{M}).$
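A minimal sketch of this greedy procedure, assuming each candidate sensor reads out a single state coordinate and the data set $\mathcal{X}_{N}$ is small enough to enumerate all secants directly:

```python
import numpy as np

def relaxed_objective(X, g, sensors, gamma):
    """f_gamma of Eq. 20: detectable squared target differences with the
    relaxed weights w = min(||m_S(x) - m_S(x')||^2 / gamma^2, 1)."""
    m = X[:, sensors]  # each candidate sensor measures one state coordinate
    dm2 = np.sum((m[:, None, :] - m[None, :, :]) ** 2, axis=-1)
    dg2 = np.sum((g[:, None, :] - g[None, :, :]) ** 2, axis=-1)
    w = np.minimum(dm2 / gamma**2, 1.0)
    return np.sum(w * dg2)

def greedy_secant_sensors(X, g, k, gamma):
    """Greedy maximization of the submodular relaxation f_gamma."""
    selected = []
    for _ in range(k):
        candidates = [j for j in range(X.shape[1]) if j not in selected]
        scores = [relaxed_objective(X, g, selected + [j], gamma)
                  for j in candidates]
        selected.append(candidates[int(np.argmax(scores))])
    return selected
```

For large $N$, the $O(N^{2})$ secant set is typically subsampled; the guarantee of Eq. 21 applies to whichever secant set defines the objective. On harmonic toy data echoing Eq. 8, with targets $(\cos\theta,\sin\theta)$, the greedy step prefers the fundamental coordinates over the two-to-one harmonics.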
The objective function $f_{\gamma}$ given by Eq. 20 can be viewed as a
“submodular relaxation” of the original sum of squared differences
$\tilde{f}_{\gamma}$ given by Eq. 17. While
$f_{\gamma}(\mathscr{S})\geq\tilde{f}_{\gamma}(\mathscr{S})$ for every
$\mathscr{S}\subseteq\mathscr{M}$, Theorem 4.3, below, shows that $f_{\gamma}$
also provides a lower bound on $\tilde{f}_{\gamma^{\prime}}$ at reduced values
of the detection threshold $\gamma^{\prime}<\gamma$. Hence, maximization of
$f_{\gamma}$ is justified as a proxy for maximizing
$\tilde{f}_{\gamma^{\prime}}$. Moreover, the relaxed objective bounds the
total square differences among target variables that are _not detectable_ due
to corresponding measurement differences smaller than the reduced threshold via
Eq. 23 of Theorem 4.3.
###### Theorem 4.3 (Relaxation Bound on Undetectable Differences).
Consider the rigid and relaxed objectives given by Eq. 17 and Eq. 20. Then for
every $\mathscr{S}\subseteq\mathscr{M}$ and constant $0<\alpha<1$, we have
(22)
$\tilde{f}_{\alpha\gamma}(\mathscr{S})\geq\frac{1}{1-\alpha^{2}}\left[f_{\gamma}(\mathscr{S})-\alpha^{2}F_{\infty}\right].$
Furthermore, the total fluctuation between target variables associated with
states whose measurements are closer together than the reduced detection
threshold $\alpha\gamma$, given by Eq. 15, is bounded above by
(23)
$F_{\alpha\gamma}(\mathscr{S})\leq\frac{1}{1-\alpha^{2}}\left[F_{\infty}-f_{\gamma}(\mathscr{S})\right].$
###### Proof.
We observe that
(24)
$\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}\geq\alpha\gamma\quad\Leftrightarrow\quad
w_{\gamma,{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})\geq\alpha^{2}$
and so we have
(25)
$\displaystyle\tilde{w}_{\alpha\gamma,{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})$
$\displaystyle=\mathbbm{1}\left\\{\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}\geq\alpha\gamma\right\\}$
(26)
$\displaystyle=\mathbbm{1}\left\\{w_{\gamma,{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})\geq\alpha^{2}\right\\}.$
Since $0\leq w_{\gamma,{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})\leq
1$, we obtain the following linear lower bound
(27)
$\tilde{w}_{\alpha\gamma,{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})\geq\frac{1}{1-\alpha^{2}}\left[w_{\gamma,{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})-\alpha^{2}\right].$
Summing this lower bound over all secants gives
(28)
$\tilde{f}_{\alpha\gamma}(\mathscr{S})\geq\frac{1}{1-\alpha^{2}}\left[f_{\gamma}(\mathscr{S})-\alpha^{2}F_{\infty}\right]$
and subtracting each side from $F_{\infty}$ yields the final result. ∎
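The pointwise bound in Eq. 27, on which the proof rests, is easy to sanity-check numerically. The following sketch (an illustration, not part of the paper) verifies that the indicator dominates the linear lower bound for all $w\in[0,1]$:

```python
import numpy as np

alpha = 0.6  # any 0 < alpha < 1
w = np.linspace(0.0, 1.0, 1001)            # relaxed weights lie in [0, 1]
indicator = (w >= alpha**2).astype(float)  # left-hand side of Eq. 27
linear = (w - alpha**2) / (1 - alpha**2)   # right-hand side of Eq. 27
assert np.all(indicator >= linear - 1e-12)
```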
When applied to the shock-mixing layer problem with the leading Isomap
coordinates taken as the target variables
${\mathbf{g}}({\mathbf{x}})=(\phi_{1}({\mathbf{x}}),\phi_{2}({\mathbf{x}}))$,
the greedy algorithm maximizing $f_{\gamma}$ first reveals the two sensor
locations marked by green stars and then the black star in Figure 1 over the
range of $0.02\leq\gamma\leq 0.06$. These choices produce the measurements
shown in Figs. 3g and 3h, which can be used to reveal the exact phase of the
system. Choosing smaller values of $\gamma$ yields different sensors that can
also be used to reveal the phase, but with reduced robustness to measurement
perturbations. This method of maximizing detectable differences also reveals
the correct $K=3$ fundamental Isomap eigen-coordinates from among the leading
$100$ on the torus example in Eq. 11 over a wide range $0.05\leq\gamma\leq
3.0$. For implementation details, see the Appendix.
### 4.2. Minimal Sensing to Meet an Error Tolerance
The approach presented above relies on an average and so does not guarantee
that the target value ${\mathbf{g}}({\mathbf{x}})$ can be recovered from the
selected measurements ${\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})$ for every
${\mathbf{x}}\in\mathcal{X}$. In this section, we modify the technique
developed above in order to provide such a guarantee by trying to find the
minimum number of sensors so that every pair of states in the sampled set
$\mathcal{X}_{N}$ with target values separated by at least $\varepsilon$
correspond to measurements separated by at least $\gamma$. If our sampled
points $\mathcal{X}_{N}$ come sufficiently close to every point of
$\mathcal{X}$ in the sense of Definition 4.4, then Proposition 4.5, given
below, allows us to draw a similar conclusion about the measurements from all
points in the underlying set $\mathcal{X}$.
###### Definition 4.4 ($\varepsilon_{0}$-net).
An $\varepsilon_{0}$-net of $\mathcal{X}$ is a finite subset
$\mathcal{X}_{N}\subset\mathcal{X}$ satisfying
(29)
$\forall{\mathbf{x}}\in\mathcal{X},\quad\exists{\mathbf{x}}_{i}\in\mathcal{X}_{N}\quad\mbox{such
that}\quad\|{\mathbf{x}}-{\mathbf{x}}_{i}\|_{2}<\varepsilon_{0}.$
We use the subscript $N$ to denote the number of points in $\mathcal{X}_{N}$.
In particular, if $\mathcal{X}_{N}$ forms a fine enough $\varepsilon_{0}$-net
of $\mathcal{X}$, then Proposition 4.5 guarantees that small measurement
differences never correspond to large target value differences.
###### Proposition 4.5 (Separation Guarantee on Underlying Set).
Let $\mathcal{X}_{N}$ be an $\varepsilon_{0}$-net of $\mathcal{X}$ (see
Definition 4.4) and let $\mathscr{S}$ be a subset of $\mathscr{M}$ satisfying
(30)
$\forall{\mathbf{x}}_{i},{\mathbf{x}}_{j}\in\mathcal{X}_{N}\qquad\|{\mathbf{g}}({\mathbf{x}}_{i})-{\mathbf{g}}({\mathbf{x}}_{j})\|_{2}\geq\varepsilon\quad\Rightarrow\quad\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{i})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{j})\|_{2}\geq\gamma.$
If ${\mathbf{m}}_{\mathscr{S}}$ and ${\mathbf{g}}$ are Lipschitz functions
with Lipschitz constants $\|{\mathbf{m}}_{\mathscr{S}}\|_{\text{lip}}$ and
$\|{\mathbf{g}}\|_{\text{lip}}$ respectively, then
(31)
$\forall{\mathbf{x}},{\mathbf{x}}^{\prime}\in\mathcal{X}\qquad\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}\geq\varepsilon+2\varepsilon_{0}\|{\mathbf{g}}\|_{\text{lip}}\\\
\quad\Rightarrow\quad\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}>\gamma-2\varepsilon_{0}\|{\mathbf{m}}_{\mathscr{S}}\|_{\text{lip}}.$
###### Proof.
The proof follows immediately from successive applications of the triangle
inequality, so we relegate it to Appendix C. ∎
Consequently, the approach described in this section allows one to reconstruct
${\mathbf{g}}({\mathbf{x}})$ from a perturbed measurement
${\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})+{\mathbf{n}}$ by taking the value
${\mathbf{g}}({\mathbf{x}}^{\prime})$ from its nearest neighbor
${\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})$ with
${\mathbf{x}}^{\prime}\in\mathcal{X}$ and achieve small error
$\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}$ as
long as the perturbation $\|{\mathbf{n}}\|_{2}$ is below a threshold.
Supposing that the desired separation can be obtained using all of the
sensors, i.e., $\mathscr{S}=\mathscr{M}$, then we can take the sum in the
objective $f_{\gamma}$ given by Eq. 20 only over those pairs
${\mathbf{x}},{\mathbf{x}}^{\prime}\in\mathcal{X}_{N}$ with targets separated
by at least
$\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}\geq\varepsilon$,
i.e.,
(32)
$\boxed{f_{\gamma,\varepsilon}(\mathscr{S})=\sum_{\begin{subarray}{c}{\mathbf{x}},{\mathbf{x}}^{\prime}\in\mathcal{X}_{N}\
:\\\
\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}\geq\varepsilon\end{subarray}}w_{\gamma,{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}^{2},}$
and state the problem formally as
(33)
$\operatorname*{\min\\!imize\enskip}_{\mathscr{S}\subseteq\mathscr{M}}\\#(\mathscr{S})\quad\mbox{subject
to}\quad
f_{\gamma,\varepsilon}(\mathscr{S})=f_{\gamma,\varepsilon}(\mathscr{M}).$
We observe that if all points
${\mathbf{x}},{\mathbf{x}}^{\prime}\in\mathcal{X}_{N}$ with
$\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}\geq\varepsilon$
can be separated by at least $\gamma$ using $\mathscr{S}=\mathscr{M}$ then
$w_{\gamma,{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{M})=1$ for each term
in Eq. 32. On the other hand if there is such a pair
${\mathbf{x}},{\mathbf{x}}^{\prime}$ with
$\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}<\gamma$
then that term has
$w_{\gamma,{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})<1$ and
$f_{\gamma,\varepsilon}(\mathscr{S})<f_{\gamma,\varepsilon}(\mathscr{M})$ as a
consequence.
One can show, by using the same argument as in Lemma B.3 of the Appendix, that
the objective Eq. 32 is submodular in addition to being normalized and
monotone non-decreasing. It follows that Eq. 33 is a classical submodular set
cover problem for which a greedy algorithm maximizing $f_{\gamma,\varepsilon}$
and stopping when
$f_{\gamma,\varepsilon}(\mathscr{S}_{K})=f_{\gamma,\varepsilon}(\mathscr{M})$
will always find, up to a logarithmic factor, the minimum possible number of
sensors [63]. In particular, suppose that $\mathscr{S}^{*}$ is a subset of
minimum size with $f_{\gamma,\varepsilon}(\mathscr{S}^{*})=f_{\gamma,\varepsilon}(\mathscr{M})$ and
that the greedy algorithm chooses a sequence of subsets
$\mathscr{S}_{1},\ldots,\mathscr{S}_{K}$ with
$f_{\gamma,\varepsilon}(\mathscr{S}_{K})=f_{\gamma,\varepsilon}(\mathscr{M})$.
If we define the “increment condition number” to be the ratio of the largest
and smallest increments in the objective during greedy optimization
(34)
$\kappa=\frac{f_{\gamma,\varepsilon}(\mathscr{S}_{1})}{f_{\gamma,\varepsilon}(\mathscr{S}_{K})-f_{\gamma,\varepsilon}(\mathscr{S}_{K-1})},$
then the classical result of L. A. Wolsey [63] proves that the greedily chosen
set is no larger than
(35) $\\#(\mathscr{S}_{K})\leq(1+\ln{\kappa})\\#(\mathscr{S}^{*}).$
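The greedy set-cover procedure and the increment condition number $\kappa$ of Eq. 34 can be sketched as follows; the coverage objective is again a hypothetical stand-in for $f_{\gamma,\varepsilon}$, used only for illustration.

```python
def greedy_set_cover(f, M, tol=1e-12):
    """Greedy algorithm for the submodular set-cover problem of Eq. 33:
    add the element of largest marginal gain until f(S) reaches f(M).
    Also returns the increment condition number kappa of Eq. 34."""
    target = f(set(M))
    S, increments = set(), []
    while f(S) < target - tol:
        j = max((e for e in M if e not in S),
                key=lambda e: f(S | {e}) - f(S))
        increments.append(f(S | {j}) - f(S))
        S.add(j)
    kappa = increments[0] / increments[-1]  # first over last greedy gain
    return S, kappa

# Toy coverage objective standing in for f_{gamma, epsilon}.
sets = {0: {1, 2}, 1: {2, 3}, 2: {4}}
def f(S):
    covered = set()
    for j in S:
        covered |= sets[j]
    return len(covered)

S, kappa = greedy_set_cover(f, sets)  # S = {0, 1, 2}, kappa = 2.0
```

Here Eq. 35 guarantees $\#(\mathscr{S}_{K})\leq(1+\ln 2)\#(\mathscr{S}^{*})$, which the example satisfies since the minimum cover also has three elements.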
### 4.3. Minimal Sensing to Meet an Amplification Tolerance
The approaches discussed above are capable of choosing measurements that
separate states with distant target values by at least a fixed distance
$\gamma$. However, the nearby measurements separated by less than $\gamma$ may
not adequately capture the local behavior of the target variables as
illustrated by the kink in the measurements made by these sensors in the
shock-mixing layer flow shown in Figure 3g. This means that while the state
can be reconstructed from the measurements, its time derivative cannot. This
would be a major problem if we wish to build a reduced-order model of this
system based only on the fluid velocities measured at the chosen points. In
addition, we may want the separation between the measurements to grow with the
corresponding separation in target values, rather than potentially saturating
at the $\gamma$ threshold.
Attempting to select sensors $\mathscr{S}$ whose measurements capture both the
local and global structure of the target variables leads us to consider
disturbance amplification as a performance metric. In this section, we try to
find the minimum number of sensors so that the Lipschitz constant of the
reconstruction function does not exceed a user-specified threshold $L$. In
practice, we do not have access to the true Lipschitz constant, so instead we
bound a proxy defined below:
(36)
$\|\boldsymbol{\Phi}_{\mathscr{S}}\|_{\text{lip}}\approx\|\boldsymbol{\Phi}_{\mathscr{S}}\|_{\mathcal{X}_{N},\text{lip}}=\max_{{\mathbf{x}},{\mathbf{x}}^{\prime}\in\mathcal{X}_{N}}\frac{\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}}{\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}}\leq
L.$
Proposition 4.6, below, shows that it suffices to enforce this condition over
an $\varepsilon_{0}$-net, $\mathcal{X}_{N}$, of $\mathcal{X}$ (see Definition
4.4) in order to bound the amplification over all of $\mathcal{X}$ up to a
slight relaxation for measurement differences on the same scale
$\varepsilon_{0}$ as the sampling.
###### Proposition 4.6 (Amplification Guarantee on Underlying Set).
Let $\mathcal{X}_{N}$ be an $\varepsilon_{0}$-net of $\mathcal{X}$ and let
$\mathscr{S}$ be a subset of $\mathscr{M}$ satisfying
(37)
$\forall{\mathbf{x}}_{i},{\mathbf{x}}_{j}\in\mathcal{X}_{N}\qquad\|{\mathbf{g}}({\mathbf{x}}_{i})-{\mathbf{g}}({\mathbf{x}}_{j})\|_{2}\leq
L\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{i})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{j})\|_{2}.$
If ${\mathbf{m}}_{\mathscr{S}}$ and ${\mathbf{g}}$ are Lipschitz functions,
with Lipschitz constants $\|{\mathbf{m}}_{\mathscr{S}}\|_{\text{lip}}$ and
$\|{\mathbf{g}}\|_{\text{lip}}$ respectively, then
(38)
$\forall{\mathbf{x}},{\mathbf{x}}^{\prime}\in\mathcal{X}\qquad\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}<L\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}\\\
+2\left(\|{\mathbf{g}}\|_{\text{lip}}+L\|{\mathbf{m}}_{\mathscr{S}}\|_{\text{lip}}\right)\varepsilon_{0}.$
###### Proof.
The proof is a direct application of the triangle inequality, so it is
relegated to Appendix C. ∎
If the Lipschitz condition in Eq. 36 over $\mathcal{X}_{N}$ can be met using
all of the sensors $\mathscr{S}=\mathscr{M}$ then the problem we hope to solve
can be stated formally as in Eq. 33, where the condition Eq. 36 is imposed
using a different normalized, monotone, submodular function
(39)
$\boxed{f_{L}(\mathscr{S})=\sum_{\begin{subarray}{c}{\mathbf{x}},{\mathbf{x}}^{\prime}\in\mathcal{X}_{N}\\\
{\mathbf{g}}({\mathbf{x}})\neq{\mathbf{g}}({\mathbf{x}}^{\prime})\end{subarray}}\min\left\\{\frac{\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}^{2}}{\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}^{2}},\
\frac{1}{L^{2}}\right\\}.}$
See Lemma B.4 in the Appendix for proof of these properties.
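A direct (unoptimized) evaluation of the objective in Eq. 39 might look as follows; this is an illustrative sketch assuming states are supplied as arrays of measurements and target values.

```python
import numpy as np

def f_L(m_S, g, L):
    """Evaluate the Lipschitz objective of Eq. 39 over sampled states.
    m_S: (N, s) selected measurements; g: (N, d) target values."""
    total = 0.0
    N = len(g)
    for i in range(N):
        for j in range(N):
            dg2 = np.sum((g[i] - g[j])**2)
            if dg2 == 0.0:
                continue  # the sum excludes pairs with equal targets
            dm2 = np.sum((m_S[i] - m_S[j])**2)
            total += min(dm2 / dg2, 1.0 / L**2)
    return total

# Perfect measurements (m_S = g): every term saturates at 1/L^2 for L >= 1.
g = np.array([[0.0], [1.0], [2.0]])
val = f_L(g, g, L=2)  # 6 ordered pairs, each contributing 1/4 -> 1.5
```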
We observe that if there is any secant
$({\mathbf{x}},{\mathbf{x}}^{\prime})\in\mathcal{X}_{N}\times\mathcal{X}_{N}$
for which Eq. 36 is not satisfied for a given $\mathscr{S}\subset\mathscr{M}$,
then the corresponding term of Eq. 39 is less than $1/L^{2}$ and
$f_{L}(\mathscr{S})<f_{L}(\mathscr{M})$. Otherwise, each term of Eq. 39 is
$1/L^{2}$ and we have $f_{L}(\mathscr{S})=f_{L}(\mathscr{M})$. Again, the
classical result in [63] shows that a greedy approximation algorithm
maximizing Eq. 39 and stopping when
$f_{L}(\mathscr{S}_{K})=f_{L}(\mathscr{M})$ finds the minimum possible number
of sensors up to a logarithmic factor so that the Lipschitz condition Eq. 36
is satisfied. In particular, the same guarantee stated in Eq. 35 holds for the
Lipschitz objective too.
In some applications, we may instead want to find the measurements that
minimize the reconstruction Lipschitz constant
$\|\boldsymbol{\Phi}_{\mathscr{S}}\|_{\mathcal{X}_{N},\text{lip}}$ using a
fixed sensor budget $\\#(\mathscr{S})\leq C$. By running the greedy algorithm
repeatedly using different thresholds $L$ it is possible to obtain upper and
sometimes lower bounds on this budget-constrained minimum Lipschitz constant
$L^{*}$. This idea is closely related to the approach of [27]. If the greedy
algorithm using Lipschitz constant $L$ chooses sensors $\mathscr{S}$ that meet
the budget $\\#(\mathscr{S})\leq C$ then $L$ is obviously an upper bound on
$L^{*}$. In practice, we can use a bisection search over $L$ to find nearly
the smallest $L$ to any given tolerance for which $\\#(\mathscr{S})\leq C$. To
get the lower bound, the greedy algorithm is run with a small enough $L$ so
that the bound on the minimum possible cost from Eq. 35 exceeds the budget
(40) $C<\\#(\mathscr{S})/(1+\ln\kappa).$
If this is the case, there is no collection of measurements with amplification
at most $L$ that meets the cost constraint. Thus, such an $L$ is a lower bound
on the minimum possible amplification using measurement budget $C$. Again,
bisection search can be used to find nearly the largest $L$ so that
$C<\\#(\mathscr{S})/(1+\ln\kappa)$.
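The bisection itself is standard. A minimal sketch is given below, with a mocked sensor-count function standing in for a full greedy run at each threshold $L$:

```python
def smallest_feasible_L(cost, C, lo, hi, tol=1e-3):
    """Bisection search for (approximately) the smallest threshold L in
    [lo, hi] with cost(L) <= C, assuming cost is non-increasing in L.
    In practice cost(L) is the size of the greedy sensor set for L."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cost(mid) <= C:
            hi = mid  # feasible: search smaller thresholds
        else:
            lo = mid  # infeasible: search larger thresholds
    return hi

# Mock: three sensors are needed below L = 10, two at or above it.
cost = lambda L: 2 if L >= 10 else 3
L_star = smallest_feasible_L(cost, C=2, lo=1.0, hi=100.0)  # ~10
```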
With the leading Isomap coordinates taken as the target variables
${\mathbf{g}}({\mathbf{x}})=(\phi_{1}({\mathbf{x}}),\phi_{2}({\mathbf{x}}))$,
a bisection search over $L$ identifies the three sensor locations marked by
black squares in Figure 1 on the shock-mixing layer problem and the correct
fundamental Isomap eigenfunctions $\phi_{1},\phi_{2},\phi_{7}$ on the torus
example in Eq. 11. The measurements made by these sensors on the shock-mixing
layer problem are shown in Figure 3i and indicate, by the lack of self-
intersections, that they can be used to recover the phase.
The minimum number of sensors selected by the greedy algorithm that allow one
to reconstruct both the relevant information ${\mathbf{g}}({\mathbf{x}})$ and
its time derivative is usually persistent over a wide range of Lipschitz
constants with fewer sensors not being chosen until $L$ is made extremely
large. In the shock-mixing layer problem, three sensors that successfully
reveal the underlying phase are found for values of $L$ ranging from $1868$ to
$47624$, above which only two sensors that cannot reveal the underlying phase
are selected. The fact that a smaller set of inadequate sensors are selected
for extremely large $L$ reflects our use of a discrete approximation
$\mathcal{X}_{N}$ of the continuous set $\mathcal{X}$. Measurements from
$\mathcal{X}_{N}$ will almost never truly overlap to give
$\|\boldsymbol{\Phi}_{\mathscr{S}}\|_{\text{lip}}=\infty$ as they would for
measurements from $\mathcal{X}$.
We also find that with $L=129$, the minimum possible number of sensors exceeds
$\\#(\mathscr{S}_{K})/(1+\ln{\kappa})=3.18>3$ on the shock-mixing layer
problem. Therefore, the minimum possible reconstruction Lipschitz constant
using three sensors that one might find by an exhaustive search over the
$\binom{2210}{3}\approx 1.8\times 10^{9}$ possible combinations must be
greater than $129$. For implementation details, see the Appendix.
## 5\. Computational Considerations and Down-Sampling
So far, the three secant-based methods we presented involve objectives that
sum over $\mathcal{O}(N^{2})$ pairs of points from the sampled set
$\mathcal{X}_{N}$. In this section, we discuss how this large collection of
secants can be sub-sampled to produce high-probability performance guarantees
using a number of secants that scales more favorably with the size of the data
set. By sub-sampling we do pay a price in the sense that some “bad” secants
may escape our sampling scheme and so we cannot draw the same conclusions
about every point in the underlying set as we did in Propositions 4.5 and 4.6
for the sensors chosen using the methods in Sections 4.2 and 4.3. Instead, we
can bound the size of the set of these “bad” secants with high probability by
using a sampled collection of secants that scales linearly with $N$. In the
case of the total detectable difference-based objective discussed in Section
4.1, we can prove high-probability bounds for the sum of squared undetectable
differences in the target variables using a constant number of secants that
doesn’t depend on $N$ at all.
Before getting started with our discussion of down-sampling, let us first
mention that the calculation of each of the objectives formulated in Section 4
is easily parallelizable, whether or not they are down-sampled. Even though
the computation of each objective function given by Eq. 20, 32, or 39 requires
$\mathcal{O}(N^{2})$ operations, the terms being summed can be distributed
among many processors without the need for any communication except at the end
when each processor reports the sum over the secants allocated to it.
Furthermore, because each secant-based objective we consider in this paper is
submodular, it is not actually necessary to evaluate the objectives over all
of the remaining sensors during each step of the greedy algorithm. By
employing the “accelerated greedy” algorithm of M. Minoux [35], the same set
of sensors can be found using a minimal number of evaluations of the
objective. We provide a summary of the accelerated greedy algorithm in Section
D of the Appendix.
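A minimal sketch of the accelerated (lazy) greedy idea follows, with a toy coverage objective in place of the secant objectives. The key point is that cached marginal gains are valid upper bounds on the true gains by submodularity, so a stale entry that still tops the heap after re-evaluation can be selected immediately.

```python
import heapq

def lazy_greedy(f, M, K):
    """Accelerated greedy of Minoux: stale cached gains upper-bound the
    true marginal gains by submodularity, so an element whose re-evaluated
    gain still tops the heap can be selected without checking the rest."""
    S, fS = set(), f(set())
    heap = [(-(f({j}) - fS), j) for j in M]
    heapq.heapify(heap)
    for _ in range(K):
        while True:
            stale, j = heapq.heappop(heap)
            gain = f(S | {j}) - fS  # re-evaluate against the current set
            if not heap or gain >= -heap[0][0]:
                S.add(j)
                fS += gain
                break
            heapq.heappush(heap, (-gain, j))  # reinsert with fresh gain
    return S

sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {5}, 3: {1, 5}}
def f(S):
    covered = set()
    for j in S:
        covered |= sets[j]
    return len(covered)

S = lazy_greedy(f, sets, K=2)  # an optimal pair covering 4 points
```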
The computational cost of evaluating the objectives in Sections 4.2 and 4.3
during each step of the greedy algorithm may also be reduced by exploiting the
fact that each term in the sum is truncated once the measurements achieve a
certain level of separation. This means that only the nearest neighbors within
a known distance of each ${\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})$,
${\mathbf{x}}\in\mathcal{X}_{N}$, need to be computed; the remaining terms
all saturate at the threshold and need not be evaluated explicitly. To compute the sum
efficiently, fixed-radius near neighbors algorithms [2], [3] could be
employed.
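For instance, assuming the relaxed weight of Eq. 19 has the saturating form $\min\{\|\Delta{\mathbf{m}}\|_{2}^{2}/\gamma^{2},1\}$ (consistent with Eq. 24), the objective of Eq. 20 over unordered pairs can be evaluated by correcting only the near pairs returned by a fixed-radius query; the sketch below is illustrative, using SciPy's k-d tree.

```python
import numpy as np
from scipy.spatial import cKDTree

def f_gamma_near(m_S, g2, gamma):
    """Evaluate the relaxed objective over unordered pairs. Pairs with
    measurement separation >= gamma have saturated weight 1, so only the
    near pairs found by a fixed-radius query need correcting.
    m_S: (N, s) selected measurements; g2[i, j] = ||g(x_i) - g(x_j)||^2."""
    iu = np.triu_indices(len(m_S), k=1)
    total = g2[iu].sum()  # start from weight 1 on every pair
    near = cKDTree(m_S).query_pairs(gamma, output_type='ndarray')
    for i, j in near:
        dm2 = np.sum((m_S[i] - m_S[j])**2)
        # replace the saturated weight by min(dm^2 / gamma^2, 1)
        total -= (1.0 - dm2 / gamma**2) * g2[i, j]
    return total

m_S = np.array([[0.0], [0.05], [10.0]])
g = np.array([0.0, 1.0, 2.0])
g2 = (g[:, None] - g[None, :])**2
val = f_gamma_near(m_S, g2, gamma=0.1)  # 0.25*1 + 1*4 + 1*1 = 5.25
```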
### 5.1. Maximizing Detectable Differences
The main results of this section are Theorems 5.2 and 5.3, which show that
with high probability we can obtain guaranteed performance in terms of mean
undetectable differences by sampling a constant number of secants (i.e.,
independent of $N$) selected at random. In particular, Theorem 5.2 bounds the
worst-case performance of the greedy algorithm with high probability using the
sampled objective. Theorem 5.3, on the other hand, shows that if one only
considers randomly sampled secants with target variables separated by at least
$\varepsilon$ (see Section 4.2), then the mean square undetectable difference
between target values is less than $2\varepsilon^{2}$ with high probability.
While the original mean square fluctuation objective in Eq. 20 was formulated
over the discrete set $\mathcal{X}_{N}$, we can actually prove more versatile
approximation results about an objective defined as an average over the
entire, possibly continuous, set $\mathcal{X}$ with respect to a probability
measure $\mu$. In particular, we assume the target variables ${\mathbf{g}}$
and measurements ${\mathbf{m}}_{j}$, $j\in\mathscr{M}$ are measurable
functions on $\mathcal{X}$ and consider an average detectable difference
objective
(41)
$f_{\gamma}(\mathscr{S})=\int_{\mathcal{X}\times\mathcal{X}}w_{\gamma,{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})\left\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\right\|_{2}^{2}\
d\mu({\mathbf{x}})d\mu({\mathbf{x}}^{\prime})$
with $w_{\gamma,{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})$ defined by
Eq. 19. We also denote the average fluctuations between target variables
associated with states whose measurements are closer together than the
detection threshold $\gamma$ by
(42)
$F_{\gamma}(\mathscr{S}):=\int_{\begin{subarray}{c}({\mathbf{x}},{\mathbf{x}}^{\prime})\in\mathcal{X}\times\mathcal{X}\
:\\\
\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}<\gamma\end{subarray}}\left\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\right\|_{2}^{2}\
d\mu({\mathbf{x}})d\mu({\mathbf{x}}^{\prime})$
and the total fluctuation among target variables by
(43)
$F_{\infty}:=\int_{\mathcal{X}\times\mathcal{X}}\left\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\right\|_{2}^{2}\
d\mu({\mathbf{x}})d\mu({\mathbf{x}}^{\prime}).$
Note that the original objective formulated in Section 4.1 as well as Eq. 15
are special cases of Eq. 41 and Eq. 42, up to an irrelevant constant factor,
when
$\mu=\frac{1}{N}\sum_{{\mathbf{x}}\in\mathcal{X}_{N}}\delta_{{\mathbf{x}}}$
and $\delta_{{\mathbf{x}}}(A)=\mathbbm{1}\\{{\mathbf{x}}\in A\\}$ is the Dirac
measure on Borel sets $A\subseteq\mathcal{X}$. By Lemma B.3, Eq. 41 is
submodular in addition to being normalized and monotone non-decreasing.
Furthermore, by an identical argument to Theorem 4.3, we know that the mean
square fluctuation between target variables associated with states whose
measurements are closer together than a reduced detection threshold
$\alpha\gamma$ with $0<\alpha<1$ is bounded above by
(44)
$F_{\alpha\gamma}(\mathscr{S})\leq\frac{1}{1-\alpha^{2}}\left[F_{\infty}-f_{\gamma}(\mathscr{S})\right].$
We begin with Lemma 5.1, which shows that by sampling a large enough
collection of points
${\mathbf{x}}_{1},{\mathbf{x}}^{\prime}_{1},\ldots,{\mathbf{x}}_{m},{\mathbf{x}}^{\prime}_{m}\in\mathcal{X}$
independently according to $\mu$, the objective $f_{\gamma}$ can be uniformly
approximated by a sample-based average
(45)
$f_{\gamma,m}(\mathscr{S})=\frac{1}{m}\sum_{i=1}^{m}w_{\gamma,{\mathbf{x}}_{i},{\mathbf{x}}_{i}^{\prime}}(\mathscr{S})\left\|{\mathbf{g}}({\mathbf{x}}_{i})-{\mathbf{g}}({\mathbf{x}}_{i}^{\prime})\right\|_{2}^{2}$
over all $\mathscr{S}\subseteq\mathscr{M}$ of size $\\#(\mathscr{S})\leq L$
with high probability over the sample points. Most importantly, the number of
sample points needed for this approximation guarantee is independent of the
distribution $\mu$. Consequently if we have access to $N$ points making up
$\mathcal{X}_{N}$ that have been sampled independently according to $\mu$, we
need only keep the first $2m$ of them to accurately approximate the objective.
The number $m$ of such sub-sampled points depends only on the quality of the
probabilistic guarantee and not on the size of the data set $N$.
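A sketch of the sub-sampled objective follows, assuming (consistently with Eq. 24) that the relaxed weight of Eq. 19 takes the form $\min\{\|\Delta{\mathbf{m}}\|_{2}^{2}/\gamma^{2},1\}$; all array shapes and names are illustrative.

```python
import numpy as np

def f_gamma_m(S_mask, m_all, g, pairs, gamma):
    """Sub-sampled objective of Eq. 45: average the weighted squared
    target differences over the m sampled secant pairs only.
    m_all: (N, M) candidate measurements; S_mask: boolean column selector;
    pairs: (m, 2) indices of independently sampled states."""
    i, j = pairs[:, 0], pairs[:, 1]
    dm2 = np.sum((m_all[i][:, S_mask] - m_all[j][:, S_mask])**2, axis=1)
    w = np.minimum(dm2 / gamma**2, 1.0)  # relaxed weight of Eq. 19
    dg2 = np.sum((g[i] - g[j])**2, axis=1)
    return np.mean(w * dg2)

rng = np.random.default_rng(0)
N, M, m = 500, 8, 200  # m is chosen independently of N (Lemma 5.1)
m_all = rng.normal(size=(N, M))
g = rng.normal(size=(N, 2))
pairs = rng.integers(0, N, size=(m, 2))
S = np.zeros(M, dtype=bool)
S[[1, 4]] = True
val = f_gamma_m(S, m_all, g, pairs, gamma=0.5)
```

Because adding sensors can only increase each measurement separation, the sampled objective inherits monotonicity: the full set $\mathscr{M}$ scores at least as high as any subset.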
###### Lemma 5.1 (Accuracy of the Down-Sampled Objective).
Consider the objectives $f_{\gamma}$ and $f_{\gamma,m}$ defined according to
Eq. 41 and Eq. 45. Assume that the target function is bounded over
$\mathcal{X}$ so that
(46)
$D=\operatorname{diam}{\mathbf{g}}(\mathcal{X})=\sup_{{\mathbf{x}},{\mathbf{x}}^{\prime}\in\mathcal{X}}\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}<\infty$
and that
${\mathbf{x}}_{1},{\mathbf{x}}^{\prime}_{1},\ldots,{\mathbf{x}}_{m},{\mathbf{x}}^{\prime}_{m}\in\mathcal{X}$
are sampled independently according to a probability measure $\mu$ on
$\mathcal{X}$. If the number of sampled pairs is at least
(47)
$m\geq\frac{D^{4}}{2\varepsilon^{2}}\left[L\ln{\\#(\mathscr{M})}-\ln{\left((L-1)!\right)}-\ln{\left(\frac{p}{2}\right)}\right],$
then $|f_{\gamma,m}(\mathscr{S})-f_{\gamma}(\mathscr{S})|<\varepsilon$ for
every $\mathscr{S}\subseteq\mathscr{M}$ of size $\\#(\mathscr{S})\leq L$ with
probability at least $1-p$.
###### Proof.
For simplicity, we will drop $\gamma$ from the subscripts on our objectives
since $\gamma$ remains fixed throughout the proof. Let us begin by fixing a
set $\mathscr{S}\subseteq\mathscr{M}$ of size $\\#(\mathscr{S})\leq L$ and
denoting $M=\\#(\mathscr{M})$ for short. Under the assumption that the points
${\mathbf{x}}_{i},{\mathbf{x}}^{\prime}_{i}$ are sampled independently and
identically under $\mu$, the random variables
(48)
$Z_{i}(\mathscr{S})=w_{{\mathbf{x}}_{i},{\mathbf{x}}_{i}^{\prime}}(\mathscr{S})\|{\mathbf{g}}({\mathbf{x}}_{i})-{\mathbf{g}}({\mathbf{x}}_{i}^{\prime})\|_{2}^{2},\quad
i=1,\ldots,m,$
are independent and bounded by $0\leq Z_{i}(\mathscr{S})\leq D^{2}$. The value
of the optimization objective is the expectation
$f(\mathscr{S})=\mathbb{E}[Z_{i}(\mathscr{S})]$ and the value of our sub-
sampled objective is the empirical average
(49) $f_{m}(\mathscr{S})=\frac{1}{m}\sum_{i=1}^{m}Z_{i}(\mathscr{S}).$
Hoeffding’s inequality allows us to bound the probability that
$f_{m}(\mathscr{S})$ differs from $f(\mathscr{S})$ by more than $\varepsilon$
according to
(50)
$\mathbb{P}\left\\{\left|f_{m}(\mathscr{S})-f(\mathscr{S})\right|\geq\varepsilon\right\\}\leq
2\exp\left(-\frac{2m\varepsilon^{2}}{D^{4}}\right).$
We want the objective to be accurately approximated with tolerance
$\varepsilon$ uniformly over all collections of sensors of size
$\\#(\mathscr{S})\leq L$. We unfix $\mathscr{S}$ by taking the union bound
(51)
$\mathbb{P}\bigcup_{\begin{subarray}{c}\mathscr{S}\subseteq\mathscr{M}:\\\
\\#(\mathscr{S})\leq
L\end{subarray}}\left\\{\left|f_{m}(\mathscr{S})-f(\mathscr{S})\right|\geq\varepsilon\right\\}\leq\sum_{\begin{subarray}{c}\mathscr{S}\subseteq\mathscr{M}:\\\
\\#(\mathscr{S})\leq
L\end{subarray}}2\exp\left(-\frac{2m\varepsilon^{2}}{D^{4}}\right).$
The combinatorial inequality
(52) $\\#\left(\\{\mathscr{S}\subseteq\mathscr{M}\ :\ \\#(\mathscr{S})\leq
L\\}\right)=\sum_{k=1}^{L}\binom{M}{k}\leq\sum_{k=1}^{L}\frac{M^{k}}{k!}\leq
L\frac{M^{L}}{L!}=\frac{M^{L}}{(L-1)!}$
yields the bound
(53)
$\mathbb{P}\bigcup_{\begin{subarray}{c}\mathscr{S}\subseteq\mathscr{M}:\\\
\\#(\mathscr{S})\leq
L\end{subarray}}\left\\{\left|f_{m}(\mathscr{S})-f(\mathscr{S})\right|\geq\varepsilon\right\\}\leq
2\exp\left(L\ln{M}-\ln{\left((L-1)!\right)}-\frac{2m\varepsilon^{2}}{D^{4}}\right)\leq
p$
when the number of sampled pairs ${\mathbf{x}}_{i},{\mathbf{x}}_{i}^{\prime}$
satisfies Eq. 47. ∎
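The bound of Eq. 47 is easy to evaluate directly; the parameter values below are arbitrary and chosen only for illustration.

```python
import math

def required_pairs(D, eps, M, L, p):
    """Sufficient number of sampled secant pairs from Eq. 47 so that
    |f_{gamma,m} - f_gamma| < eps uniformly over all subsets of size
    at most L of an M-sensor ground set, with probability >= 1 - p."""
    return math.ceil(D**4 / (2 * eps**2)
                     * (L * math.log(M)
                        - math.log(math.factorial(L - 1))
                        - math.log(p / 2)))

# D = 1, tolerance 0.05, M = 1000 candidate sensors, subsets up to L = 10:
m = required_pairs(D=1.0, eps=0.05, M=1000, L=10, p=0.01)
# m is on the order of 10^4 here, and independent of the data set size N
```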
The uniform accuracy of the sampled objective $f_{\gamma,m}$ over the feasible
subsets $\mathscr{S}$ in our optimization problem
(54)
$\operatorname*{\max\\!imize\enskip}_{\begin{subarray}{c}\mathscr{S}\subseteq\mathscr{M}\
:\ \\#(\mathscr{S})\leq K\end{subarray}}f_{\gamma}(\mathscr{S})$
established in Lemma 5.1 leads to performance guarantees for the greedy
approximation algorithm when the sampled objective $f_{\gamma,m}$ is used in
place of $f_{\gamma}$. In particular, Theorem 5.2 shows that the greedy
algorithm can be applied to the sampled objective Eq. 45 and still achieve
near-optimal performance with respect to the original objective Eq. 41 on the
underlying set $\mathcal{X}$ with high probability. This sampling-based
approach therefore completely eliminates the $\mathcal{O}(N^{2})$ dependence
of the computational complexity involved in evaluating the objective at a
penalty on the worst case performance that can be made arbitrarily small by
sampling more points.
###### Theorem 5.2 (Greedy Performance using Sampled Objective).
Assume the same hypotheses as Lemma 5.1 and let $\mathscr{S}^{*}$ denote an
optimal solution of
(55)
$\operatorname*{\max\\!imize\enskip}_{\begin{subarray}{c}\mathscr{S}\subseteq\mathscr{M}\
:\ \\#(\mathscr{S})\leq K\end{subarray}}f_{\gamma}(\mathscr{S}),$
with $f_{\gamma}$ given by Eq. 41 and $K\leq L$. If
$\mathscr{S}_{1},\ldots,\mathscr{S}_{L}$ are the sequence of subsets selected
by the greedy algorithm using the sampled objective $f_{\gamma,m}$ given by
Eq. 45, then
(56)
$f_{\gamma}(\mathscr{S}_{k})\geq\left(1-e^{-k/K}\right)f_{\gamma}(\mathscr{S}^{*})-\left(2-e^{-k/K}\right)\varepsilon,\qquad
k=1,\ldots,L,$
with probability at least $1-p$ over the sample points.
###### Proof.
For simplicity, we will drop $\gamma$ from the subscripts on our objectives
since $\gamma$ remains fixed throughout the proof. Let $\mathscr{S}_{m}^{*}$
denote the optimal solution of
(57)
$\operatorname*{\max\\!imize\enskip}_{\begin{subarray}{c}\mathscr{S}\subseteq\mathscr{M}\
:\ \\#(\mathscr{S})\leq K\end{subarray}}f_{m}(\mathscr{S}),$
using the sampled objective and assume that
$|f(\mathscr{S})-f_{m}(\mathscr{S})|<\varepsilon$ for every subset
$\mathscr{S}$ of $\mathscr{M}$ with $\\#(\mathscr{S})\leq L$. According to
Lemma 5.1, this happens with probability at least $1-p$ over the sample
points. Using this uniform approximation and the guarantee on the performance
of the greedy algorithm for $f_{m}$, we have
(58) $f(\mathscr{S}_{k})\geq
f_{m}(\mathscr{S}_{k})-\varepsilon\geq\left(1-e^{-k/K}\right)f_{m}(\mathscr{S}_{m}^{*})-\varepsilon.$
Since $\mathscr{S}_{m}^{*}$ is the optimal solution using the sampled
objective, we must have $f_{m}(\mathscr{S}_{m}^{*})\geq
f_{m}(\mathscr{S}^{*})$. Using this fact and the uniform approximation gives
(59) $\displaystyle f(\mathscr{S}_{k})$
$\displaystyle\geq\left(1-e^{-k/K}\right)f_{m}(\mathscr{S}^{*})-\varepsilon$
(60)
$\displaystyle\geq\left(1-e^{-k/K}\right)\left(f(\mathscr{S}^{*})-\varepsilon\right)-\varepsilon.$
Combining the terms on $\varepsilon$ completes the proof. ∎
###### Remark 5.1.
While Theorem 5.2 tells us that down-sampling has a small effect on the worst-
case performance of the greedy algorithm, unfortunately, we cannot say much
beyond that. It may be the case that the greedy solution using the sampled
objective $f_{\gamma,m}$ produces a very different value of $f_{\gamma}$ than
the greedy solution using $f_{\gamma}$ directly, even though these functions
are both submodular and differ by no more than an arbitrarily small
$\varepsilon>0$. Consider the following example in Table 1 where we have two
submodular objectives, $f$ and $\tilde{f}$, that differ by no more than
$\varepsilon\ll 1$, yet the greedy algorithm applied to $f$ and $\tilde{f}$
yield results that differ by $\mathcal{O}(1)$.
$\mathscr{S}$ | $f(\mathscr{S})$ | $\tilde{f}(\mathscr{S})$
---|---|---
$\emptyset$ | $0$ | $0$
$\\{a\\}$ | $2+\varepsilon$ | $2$
$\\{b\\}$ | $2$ | $2+\varepsilon$
$\\{c\\}$ | $1$ | $1$
$\\{a,b\\}$ | $2+\varepsilon$ | $2+2\varepsilon$
$\\{a,c\\}$ | $3+\varepsilon$ | $3$
$\\{b,c\\}$ | $2$ | $2+\varepsilon$
$\\{a,b,c\\}$ | $3$ | $3$
Table 1. Two submodular functions are given that differ by no more than
$\varepsilon\ll 1$, yet produce very different greedy solutions and objective
values.
One can easily verify that both functions in Table 1 are normalized, monotone,
and submodular. When selecting subsets of size $2$, the greedy algorithm for
$f$ picks $\emptyset\to\\{a\\}\to\\{a,c\\}$ and the greedy algorithm for
$\tilde{f}$ picks $\emptyset\to\\{b\\}\to\\{a,b\\}$. The values of $f$ on the
chosen sets, $f(\\{a,c\\})=3+\varepsilon$ and $f(\\{a,b\\})=2+\varepsilon$,
differ by $1\gg\varepsilon$, and similarly for
$\tilde{f}(\\{a,c\\})=3$ and $\tilde{f}(\\{a,b\\})=2+2\varepsilon$, which
differ by $1-2\varepsilon\gg\varepsilon$. Thus the performance of the greedy
algorithm can be sensitive to small perturbations of the objective even though
the lower bound on performance is not sensitive.
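This sensitivity is easy to reproduce numerically; the following check of Table 1 (an illustration, with a concrete value for $\varepsilon$) tabulates both functions and runs two greedy steps on each.

```python
EPS = 1e-3  # any 0 < EPS << 1
f = {frozenset(): 0, frozenset('a'): 2 + EPS, frozenset('b'): 2,
     frozenset('c'): 1, frozenset('ab'): 2 + EPS, frozenset('ac'): 3 + EPS,
     frozenset('bc'): 2, frozenset('abc'): 3}
ft = {frozenset(): 0, frozenset('a'): 2, frozenset('b'): 2 + EPS,
      frozenset('c'): 1, frozenset('ab'): 2 + 2 * EPS, frozenset('ac'): 3,
      frozenset('bc'): 2 + EPS, frozenset('abc'): 3}

def greedy2(val):
    """Two steps of the greedy algorithm on a tabulated set function."""
    S = frozenset()
    for _ in range(2):
        S = max((S | {e} for e in 'abc' if e not in S), key=val.__getitem__)
    return S

Sf, Sft = greedy2(f), greedy2(ft)  # {'a','c'} and {'a','b'}, respectively
gap = f[Sf] - f[Sft]               # an O(1) difference in objective value
```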
It turns out that by solving the error tolerance problem in Section 4.2
greedily using a down-sampled objective, we can provide high probability
bounds directly on the mean square undetectable differences in Eq. 42. We will
use the down-sampled objective
(61)
$f_{\gamma,\varepsilon,m}(\mathscr{S})=\frac{1}{m}\sum_{\begin{subarray}{c}i\in\\{1,\ldots,m\\}:\\\
\|{\mathbf{g}}({\mathbf{x}}_{i})-{\mathbf{g}}({\mathbf{x}}^{\prime}_{i})\|_{2}\geq\varepsilon\end{subarray}}w_{\gamma,{\mathbf{x}}_{i},{\mathbf{x}}^{\prime}_{i}}(\mathscr{S})\|{\mathbf{g}}({\mathbf{x}}_{i})-{\mathbf{g}}({\mathbf{x}}^{\prime}_{i})\|_{2}^{2},$
with the relaxed weight function in Eq. 19 in a greedy approximation algorithm
for the submodular set-cover problem
(62)
$\operatorname*{minimize}_{\mathscr{S}\subseteq\mathscr{M}}\\#(\mathscr{S})\quad\mbox{subject to}\quad f_{\gamma,\varepsilon,m}(\mathscr{S})=f_{\gamma,\varepsilon,m}(\mathscr{M}).$
Using the resulting greedy solution $\mathscr{S}_{K}$ that satisfies
$f_{\gamma,\varepsilon,m}(\mathscr{S}_{K})=f_{\gamma,\varepsilon,m}(\mathscr{M})=\tilde{f}_{\gamma,\varepsilon,m}(\mathscr{M})$,
Theorem 5.3 provides a high-probability bound on the mean square undetectable
difference in the target variables, Eq. 42, over the entire set
$\mathcal{X}\times\mathcal{X}$ rather than merely
$\mathcal{X}_{N}\times\mathcal{X}_{N}$.
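Concretely, the greedy set-cover loop of Eq. 62 only needs a value oracle for the sampled objective. The sketch below is illustrative: it substitutes a saturating weight min{||m_S(x) - m_S(x')||^2 / gamma^2, 1} as a plausible stand-in for the relaxed weight of Eq. 19 (which lies outside this excerpt), and adds sensors until the sampled objective saturates at its value on the full sensor set:

```python
import numpy as np

def sampled_objective(S, M_meas, G, gamma, eps):
    """Down-sampled objective of Eq. 61 under the assumed saturating weight.
    M_meas: (m_pairs, 2, n_sensors) measurements at the sampled pairs (x_i, x_i').
    G:      (m_pairs, 2, d_g)       target values at the same pairs."""
    idx = list(S)
    dm2 = (np.sum((M_meas[:, 0, idx] - M_meas[:, 1, idx]) ** 2, axis=1)
           if idx else np.zeros(len(G)))
    dg = np.linalg.norm(G[:, 0] - G[:, 1], axis=1)
    active = dg >= eps                       # pairs with a detectable target difference
    w = np.minimum(dm2 / gamma ** 2, 1.0)    # saturating (assumed) weight
    return np.mean(np.where(active, w * dg ** 2, 0.0))

def greedy_cover(M_meas, G, gamma, eps):
    """Greedy submodular set cover (Eq. 62): add the sensor with the largest
    gain until the sampled objective matches its value on the full set."""
    n_sensors = M_meas.shape[2]
    full = sampled_objective(range(n_sensors), M_meas, G, gamma, eps)
    S = set()
    while sampled_objective(S, M_meas, G, gamma, eps) < full - 1e-12:
        gains = {k: sampled_objective(S | {k}, M_meas, G, gamma, eps)
                 for k in range(n_sensors) if k not in S}
        S.add(max(gains, key=gains.get))
    return S
```

The loop terminates because the objective is monotone in $\mathscr{S}$ and attains its maximum at the full sensor set.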
###### Theorem 5.3 (Sample Separation Bound on Undetectable Differences).
Consider the functions $f_{\gamma,\varepsilon,m}$ and $F_{\gamma}$ defined by
Eqs. 61 and 42 and assume that the condition
$\|{\mathbf{m}}_{\mathscr{M}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{M}}({\mathbf{x}}^{\prime})\|_{2}\geq\gamma$
holds for $\mu$-almost every
${\mathbf{x}},{\mathbf{x}}^{\prime}\in\mathcal{X}$ such that
$\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}\geq\varepsilon$.
Suppose that the target function is bounded over $\mathcal{X}$ so that
(63)
$D=\operatorname{diam}{\mathbf{g}}(\mathcal{X})=\sup_{{\mathbf{x}},{\mathbf{x}}^{\prime}\in\mathcal{X}}\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}<\infty,$
and that
${\mathbf{x}}_{1},{\mathbf{x}}^{\prime}_{1},\ldots,{\mathbf{x}}_{m},{\mathbf{x}}^{\prime}_{m}\in\mathcal{X}$
are sampled independently according to the probability measure $\mu$ on
$\mathcal{X}$. If the number of sampled pairs is at least
(64) $m\geq\frac{D^{4}}{2\varepsilon^{4}}\left(\\#(\mathscr{M})\ln
2-\ln{p}\right),$
and the greedy approximation of Eq. 62 produces a set $\mathscr{S}_{K}$, then
(65) $F_{\gamma}(\mathscr{S}_{K})<2\varepsilon^{2}$
with probability at least $1-p$.
###### Proof.
For simplicity, we will drop $\gamma,\varepsilon$ from the subscripts on our
objectives since $\gamma$ and $\varepsilon$ remain fixed throughout the proof.
Let
(66)
$\mathcal{D}=\left\\{({\mathbf{x}},{\mathbf{x}}^{\prime})\in\mathcal{X}\times\mathcal{X}\
:\
\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}\geq\varepsilon\right\\}$
and
(67)
$\tilde{f}(\mathscr{S})=\mathbb{E}\left[\tilde{f}_{m}(\mathscr{S})\right]\\\
=\int_{\mathcal{X}\times\mathcal{X}}\chi_{\mathcal{D}}({\mathbf{x}},{\mathbf{x}}^{\prime})\tilde{w}_{\gamma,{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}^{2}\
d\mu({\mathbf{x}})d\mu({\mathbf{x}}^{\prime}),$
where $\chi_{\mathcal{D}}$ is the characteristic function of the set
$\mathcal{D}$. From our assumption that
$\|{\mathbf{m}}_{\mathscr{M}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{M}}({\mathbf{x}}^{\prime})\|_{2}\geq\gamma$
for $\mu$-almost every ${\mathbf{x}},{\mathbf{x}}^{\prime}\in\mathcal{X}$ with
$\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}\geq\varepsilon$,
it follows
(68)
$\tilde{f}(\mathscr{M})=\int_{\mathcal{X}\times\mathcal{X}}\chi_{\mathcal{D}}({\mathbf{x}},{\mathbf{x}}^{\prime})\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}^{2}\
d\mu({\mathbf{x}})d\mu({\mathbf{x}}^{\prime}).$
Expanding our definition of $F_{\gamma}$ in Eq. 42, we find
(69) $F_{\gamma}(\mathscr{S})=\tilde{f}(\mathscr{M})-\tilde{f}(\mathscr{S})\\\
+\int_{\mathcal{X}\times\mathcal{X}}\chi_{\mathcal{D}^{c}}({\mathbf{x}},{\mathbf{x}}^{\prime})\left[1-\tilde{w}_{\gamma,{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})\right]\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}^{2}\
d\mu({\mathbf{x}})d\mu({\mathbf{x}}^{\prime})$
and therefore
(70)
$F_{\gamma}(\mathscr{S})\leq\tilde{f}(\mathscr{M})-\tilde{f}(\mathscr{S})+\varepsilon^{2}.$
We shall now use a similar Hoeffding and union bound argument as in Thm. 5.1
to relate $\tilde{f}(\mathscr{M})-\tilde{f}(\mathscr{S})$ to
$\tilde{f}_{m}(\mathscr{M})-\tilde{f}_{m}(\mathscr{S})$ uniformly over every
subset $\mathscr{S}\subseteq\mathscr{M}$. Fixing such
$\mathscr{S}\subset\mathscr{M}$, the one-sided Hoeffding inequality tells us
that
(71)
$\mathbb{P}\left\\{\left[\tilde{f}(\mathscr{M})-\tilde{f}(\mathscr{S})\right]-\left[\tilde{f}_{m}(\mathscr{M})-\tilde{f}_{m}(\mathscr{S})\right]\geq\varepsilon^{2}\right\\}\leq\exp{\left(-\frac{2m\varepsilon^{4}}{D^{4}}\right)}.$
Unfixing $\mathscr{S}$ via the union bound over all $2^{\\#(\mathscr{M})}$ subsets, together with the assumed lower bound on $m$ in Eq. 64, tells us that
(72)
$\tilde{f}(\mathscr{M})-\tilde{f}(\mathscr{S})<\tilde{f}_{m}(\mathscr{M})-\tilde{f}_{m}(\mathscr{S})+\varepsilon^{2}$
uniformly over all $\mathscr{S}\subset\mathscr{M}$ with probability at least
$1-p$. Since the greedy algorithm terminates when
$\tilde{f}_{m}(\mathscr{S}_{K})=\tilde{f}_{m}(\mathscr{M})$, it follows by
substitution into Eq. 70 that
(73) $F_{\gamma}(\mathscr{S}_{K})<2\varepsilon^{2}$
with probability at least $1-p$ over the sample points. ∎
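The sample-size requirement of Eq. 64 is straightforward to evaluate. The numbers below are purely illustrative (not taken from any experiment in the paper):

```python
import math

def required_pairs(D, eps, num_sensors, p):
    """Minimal number of sampled pairs from Eq. 64:
    m >= D^4 / (2 eps^4) * (#(M) ln 2 - ln p)."""
    return math.ceil(D ** 4 / (2 * eps ** 4)
                     * (num_sensors * math.log(2) - math.log(p)))

# Hypothetical setting: target diameter D = 2, tolerance eps = 0.5,
# a library of 20 candidate sensors, failure probability p = 1%.
print(required_pairs(D=2.0, eps=0.5, num_sensors=20, p=0.01))  # 2364
```

Note the $\varepsilon^{-4}$ dependence: halving the tolerance multiplies the required number of sampled pairs by sixteen.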
### 5.2. Minimal Sensing to Meet Separation or Amplification Tolerances
If we want to draw stronger conclusions about the underlying set $\mathcal{X}$
than are captured by the mean square (un)detectable differences, then we must
increase the number of sample points. The following Theorems 5.4 and 5.5 show
that similar conclusions about the separation of points as in Propositions 4.5
and 4.6 can be achieved over large subsets of $\mathcal{X}$ with high
probability by considering secants between a randomly chosen set of “base
points” and the full data set. More precisely, we will consider secants
between an $\varepsilon_{0}$-net $\mathcal{X}_{N}$ of $\mathcal{X}$ and a
collection of base points $\mathcal{B}_{m}\subset\mathcal{X}$ of size $m$
independent of $N$. This leads to linear $\mathcal{O}(N)$ scaling of the cost
to evaluate the down-sampled versions of the objectives given by Eqs. 32 and
39 in Sections 4.2 and 4.3 to achieve these relaxed guarantees.
The strong guarantee of Proposition 4.5 requires that we use an objective like
Eq. 32 in the submodular set-cover problem Eq. 33 where the sum in Eq. 32 is
taken over $\mathcal{X}_{N}\times\mathcal{X}_{N}$ and $\mathcal{X}_{N}$ is an
$\varepsilon_{0}$-net of the underlying set $\mathcal{X}$. The problem is that
the $\varepsilon_{0}$-net $\mathcal{X}_{N}$ may be quite large and the number
of operations needed to evaluate the sum in the objective scales with the
square of the size of $\mathcal{X}_{N}$. Here we will prove that a similar
guarantee as in Proposition 4.5 holds with high probability over a large
subset of $\mathcal{X}$ when the sum in Eq. 32 is taken over secants between a
randomly chosen collection of base points
$\mathcal{B}_{m}=\\{{\mathbf{b}}_{1},\ldots,{\mathbf{b}}_{m}\\}\subseteq\mathcal{X}$
and the $\varepsilon_{0}$-net $\mathcal{X}_{N}$. Most importantly, the number
of base points depends on the quality of the guarantee and not on the size of the
$\varepsilon_{0}$-net, so that the computational cost can be reduced to linear
dependence on the size of $\mathcal{X}_{N}$.
Specifically, in place of Eq. 32, we can consider the sampled objective
(74)
$f_{\gamma,\varepsilon,m}(\mathscr{S})=\frac{1}{mN}\sum_{\begin{subarray}{c}1\leq
i\leq m,\ 1\leq j\leq N:\\\
\|{\mathbf{g}}({\mathbf{b}}_{i})-{\mathbf{g}}({\mathbf{x}}_{j})\|_{2}\geq\varepsilon\end{subarray}}w_{\gamma,{\mathbf{b}}_{i},{\mathbf{x}}_{j}}(\mathscr{S})\|{\mathbf{g}}({\mathbf{b}}_{i})-{\mathbf{g}}({\mathbf{x}}_{j})\|_{2}^{2}$
with $w_{\gamma,{\mathbf{b}}_{i},{\mathbf{x}}_{j}}(\mathscr{S})$ defined by
Eq. 19 in the optimization problem Eq. 33. The greedy approximation algorithm
produces a set of sensors $\mathscr{S}_{K}$ such that
(75)
$\|{\mathbf{g}}({\mathbf{b}}_{i})-{\mathbf{g}}({\mathbf{x}}_{j})\|_{2}\geq\varepsilon\quad\Rightarrow\quad\|{\mathbf{m}}_{\mathscr{S}_{K}}({\mathbf{b}}_{i})-{\mathbf{m}}_{\mathscr{S}_{K}}({\mathbf{x}}_{j})\|_{2}\geq\gamma$
for every ${\mathbf{b}}_{i}\in\mathcal{B}_{m}$ and
${\mathbf{x}}_{j}\in\mathcal{X}_{N}$. Theorem 5.4 guarantees that with high
probability, only a small subset of points in $\mathcal{X}$ have target values
that cannot be distinguished from the rest by measurements separated by a
relaxed detection threshold. The size of this “bad set” is determined by its
$\mu$-measure, which can be made arbitrarily small with high probability by
taking more sample base points $m$.
###### Theorem 5.4 (Sampled Separation Guarantee).
Let $\mathcal{X}_{N}$ be an $\varepsilon_{0}$-net of $\mathcal{X}$ and let the
base points $\mathcal{B}_{m}$ be sampled independently according to a
probability measure $\mu$ on $\mathcal{X}$ with
(76) $m\geq\frac{1}{2\delta^{2}}\left(\\#(\mathscr{M})\ln{2}-\ln{p}\right),$
where $p,\delta\in(0,1)$. Consider the objective $f_{\gamma,\varepsilon,m}$
given by Eq. 74 for a certain choice of $\gamma>0$ and $\varepsilon>0$ for
which every ${\mathbf{b}}_{i}\in\mathcal{B}_{m}$ and
${\mathbf{x}}_{j}\in\mathcal{X}_{N}$ satisfies
(77)
$\|{\mathbf{g}}({\mathbf{b}}_{i})-{\mathbf{g}}({\mathbf{x}}_{j})\|_{2}\geq\varepsilon\quad\Rightarrow\quad\|{\mathbf{m}}_{\mathscr{M}}({\mathbf{b}}_{i})-{\mathbf{m}}_{\mathscr{M}}({\mathbf{x}}_{j})\|_{2}\geq\gamma.$
Suppose also that ${\mathbf{g}}$ and the measurement functions
${\mathbf{m}}_{k}$, $k\in\mathscr{M}$ are all Lipschitz over $\mathcal{X}$. If
$f_{\gamma,\varepsilon,m}(\mathscr{S})=f_{\gamma,\varepsilon,m}(\mathscr{M})$,
then the $\mu$-measure of points ${\mathbf{x}}\in\mathcal{X}$ such that
(78)
$\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}\geq\varepsilon+\varepsilon_{0}\|{\mathbf{g}}\|_{\text{lip}}\\\
\Rightarrow\quad\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}>\gamma-\varepsilon_{0}\|{\mathbf{m}}_{\mathscr{S}}\|_{\text{lip}}$
for every ${\mathbf{x}}^{\prime}\in\mathcal{X}$ is at least $1-\delta$ with
probability at least $1-p$.
###### Proof.
For simplicity, we will drop $\gamma,\varepsilon$ from the subscript on our
objective since $\gamma$ and $\varepsilon$ remain fixed throughout the proof.
Let us begin by fixing a set $\mathscr{S}\subseteq\mathscr{M}$ and define the
random variables
(79)
$Z_{\mathscr{S}}({\mathbf{b}}_{i})=\max_{{\mathbf{x}}\in\mathcal{X}_{N}}\mathbbm{1}\big{\\{}\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{b}}_{i})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})\|_{2}<\gamma\quad\mbox{and}\quad\|{\mathbf{g}}({\mathbf{b}}_{i})-{\mathbf{g}}({\mathbf{x}})\|_{2}\geq\varepsilon\big{\\}}.$
If $Z_{\mathscr{S}}({\mathbf{b}}_{i})=0$ then every
${\mathbf{x}}\in\mathcal{X}_{N}$ with
$\|{\mathbf{g}}({\mathbf{b}}_{i})-{\mathbf{g}}({\mathbf{x}})\|_{2}\geq\varepsilon$
also satisfies
$\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{b}}_{i})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})\|_{2}\geq\gamma$,
otherwise $Z_{\mathscr{S}}({\mathbf{b}}_{i})=1$. We observe that
$Z_{\mathscr{S}}({\mathbf{b}}_{i})$, $i=1,\ldots,m$ are independent,
identically distributed Bernoulli random variables whose expectation
(80)
$\mathbb{E}\left[Z_{\mathscr{S}}({\mathbf{b}}_{i})\right]=\mu\big{(}\big{\\{}{\mathbf{x}}\in\mathcal{X}\
:\ \exists{\mathbf{x}}^{\prime}\in\mathcal{X}_{N}\quad\mbox{s.t.}\\\
\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}<\gamma,\quad\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}\geq\varepsilon\big{\\}}\big{)}$
is the $\mu$-measure of points in $\mathcal{X}$ for which target values
differing by at least $\varepsilon$ with points of $\mathcal{X}_{N}$ are
separated by measurements differing by less than $\gamma$. Suppose that for a
fixed ${\mathbf{x}}\in\mathcal{X}$ we have
(81)
$\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}_{j})\|_{2}\geq\varepsilon\quad\Rightarrow\quad\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{j})\|_{2}\geq\gamma$
for every ${\mathbf{x}}_{j}\in\mathcal{X}_{N}$. For any
${\mathbf{x}}^{\prime}\in\mathcal{X}$, there is an
${\mathbf{x}}_{j}\in\mathcal{X}_{N}$ with
$\|{\mathbf{x}}^{\prime}-{\mathbf{x}}_{j}\|_{2}<\varepsilon_{0}$ and so we
have
(82)
$\begin{split}\varepsilon+\varepsilon_{0}\|{\mathbf{g}}\|_{\text{lip}}&\leq\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}\\\
&\leq\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}_{j})\|_{2}+\|{\mathbf{g}}({\mathbf{x}}_{j})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}\\\
&<\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}_{j})\|_{2}+\varepsilon_{0}\|{\mathbf{g}}\|_{\text{lip}}.\end{split}$
Hence,
$\varepsilon\leq\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}_{j})\|_{2}$,
which implies that
$\gamma\leq\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{j})\|_{2}$
by assumption. From this we obtain
(83)
$\begin{split}\gamma&\leq\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}+\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{j})\|_{2}\\\
&<\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}+\varepsilon_{0}\|{\mathbf{m}}_{\mathscr{S}}\|_{\text{lip}}.\end{split}$
Therefore, for such an ${\mathbf{x}}\in\mathcal{X}$ we have
(84)
$\forall{\mathbf{x}}^{\prime}\in\mathcal{X}\qquad\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}\geq\varepsilon+\varepsilon_{0}\|{\mathbf{g}}\|_{\text{lip}}\\\
\Rightarrow\quad\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}>\gamma-\varepsilon_{0}\|{\mathbf{m}}_{\mathscr{S}}\|_{\text{lip}}.$
It follows that $\mathbb{E}\left[Z_{\mathscr{S}}({\mathbf{b}}_{i})\right]$ is
an upper bound on the $\mu$-measure of points in $\mathcal{X}$ for which there
is another point in $\mathcal{X}$ with a close measurement and distant target
value, that is
(85)
$\mathbb{E}\left[Z_{\mathscr{S}}({\mathbf{b}}_{i})\right]\geq\mu\big{(}\big{\\{}{\mathbf{x}}\in\mathcal{X}\
:\ \exists{\mathbf{x}}^{\prime}\in\mathcal{X}\quad\mbox{s.t.}\\\
\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}\leq\gamma-\varepsilon_{0}\|{\mathbf{m}}_{\mathscr{S}}\|_{\text{lip}},\\\
\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}\geq\varepsilon+\varepsilon_{0}\|{\mathbf{g}}\|_{\text{lip}}\big{\\}}\big{)}.$
By assumption, we have a set $\mathscr{S}\subset\mathscr{M}$ so that
$Z_{\mathscr{S}}({\mathbf{b}}_{i})=0$ for each $i=1,\ldots,m$. It remains to
bound the difference between the empirical and true expectation of
$Z_{\mathscr{S}}({\mathbf{b}}_{i})$ uniformly over every subset
$\mathscr{S}\subset\mathscr{M}$. For fixed $\mathscr{S}$, the one-sided
Hoeffding inequality gives
(86)
$\mathbb{P}\Big{\\{}\frac{1}{m}\sum_{i=1}^{m}\left(\mathbb{E}[Z_{\mathscr{S}}({\mathbf{b}}_{i})]-Z_{\mathscr{S}}({\mathbf{b}}_{i})\right)\geq\delta\Big{\\}}\leq
e^{-2m\delta^{2}}.$
Unfixing $\mathscr{S}$ via the union bound over all
$\mathscr{S}\subset\mathscr{M}$ and applying our assumption about the number
of base points $m$ yields
(87)
$\mathbb{P}\bigcup_{\mathscr{S}\subseteq\mathscr{M}}\Big{\\{}\frac{1}{m}\sum_{i=1}^{m}\left(\mathbb{E}[Z_{\mathscr{S}}({\mathbf{b}}_{i})]-Z_{\mathscr{S}}({\mathbf{b}}_{i})\right)\geq\delta\Big{\\}}\leq\exp{\left[\\#(\mathscr{M})\ln{2}-2m\delta^{2}\right]}\\\
\leq p.$
Since our assumed choice of $\mathscr{S}$ has
$f_{m}(\mathscr{S})=f_{m}(\mathscr{M})$ it follows that all
$Z_{\mathscr{S}}({\mathbf{b}}_{i})=0$, $i=1,\ldots,m$, hence we have
(88) $\mathbb{E}[Z_{\mathscr{S}}({\mathbf{b}}_{i})]<\delta$
with probability at least $1-p$. Combining this with Eq. 85 completes the
proof. ∎
It is also possible to use a down-sampled objective to greedily choose sensors
that satisfy a similarly relaxed version of the amplification guarantee given
by Proposition 4.6 with high probability over a large subset of $\mathcal{X}$.
In order to do this, we take the sum in Eq. 39 over secants between a randomly
chosen collection of base points
$\mathcal{B}_{m}=\\{{\mathbf{b}}_{1},\ldots,{\mathbf{b}}_{m}\\}\subseteq\mathcal{X}$
and the $\varepsilon_{0}$-net $\mathcal{X}_{N}$. Again, the number of base
points depends on the quality of the guarantee and not on the size of the
$\varepsilon_{0}$-net, so that the computational cost can be reduced to linear
dependence on the size of $\mathcal{X}_{N}$.
Specifically, in place of Eq. 39, we consider
(89) $f_{L,m}(\mathscr{S})=\sum_{\begin{subarray}{c}1\leq i\leq m,\ 1\leq
j\leq N,\\\
{\mathbf{g}}({\mathbf{b}}_{i})\neq{\mathbf{g}}({\mathbf{x}}_{j})\end{subarray}}\min\left\\{\frac{\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{b}}_{i})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{j})\|_{2}^{2}}{\|{\mathbf{g}}({\mathbf{b}}_{i})-{\mathbf{g}}({\mathbf{x}}_{j})\|_{2}^{2}},\
\frac{1}{L^{2}}\right\\}.$
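Evaluating Eq. 89 is a direct computation over the $m\times N$ secants. A minimal NumPy sketch follows (the array shapes are assumptions made for illustration):

```python
import numpy as np

def f_Lm(S, M_base, M_net, G_base, G_net, L):
    """Sampled amplification objective f_{L,m} of Eq. 89, summed over secants
    between base points b_i and net points x_j with g(b_i) != g(x_j).
    M_base: (m, n_sensors), M_net: (N, n_sensors) measurements;
    G_base: (m, d_g),       G_net: (N, d_g)       target values."""
    idx = list(S)
    # Squared measurement and target differences over all m x N secants.
    dm2 = np.sum((M_base[:, None, idx] - M_net[None, :, idx]) ** 2, axis=2)
    dg2 = np.sum((G_base[:, None, :] - G_net[None, :, :]) ** 2, axis=2)
    mask = dg2 > 0                      # only secants with distinct target values
    ratio = np.where(mask, dm2 / np.where(mask, dg2, 1.0), 0.0)
    return np.sum(np.minimum(ratio, 1.0 / L ** 2)[mask])
```

In a greedy loop, sensors are added until $f_{L,m}(\mathscr{S})=f_{L,m}(\mathscr{M})$, at which point every sampled secant ratio meets the capped value $1/L^{2}$ and the amplification tolerance holds on $\mathcal{B}_{m}\times\mathcal{X}_{N}$.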
In Theorem 5.5 we show that when a sufficiently small set of sensors
$\mathscr{S}$ is found, e.g., using the greedy algorithm with the sampled
objective $f_{L,m}$, that satisfies the amplification tolerance over
$\mathcal{B}_{m}\times\mathcal{X}_{N}$, we can conclude that a slightly
relaxed amplification bound holds with high probability over a large subset of
$\mathcal{X}$. In particular, the subset of “bad points” in
${\mathbf{x}}\in\mathcal{X}$ for which there is another point
${\mathbf{x}}^{\prime}\in\mathcal{X}$ with a different target value, but not a
sufficiently different measured value, has small $\mu$-measure with high
probability.
###### Theorem 5.5 (Sampled Amplification Guarantee).
Let $\mathcal{X}_{N}$ be an $\varepsilon_{0}$-net of $\mathcal{X}$ and let the
base points $\mathcal{B}_{m}$ be sampled independently according to a
probability measure $\mu$ on $\mathcal{X}$ with
(90) $m\geq\frac{1}{2\delta^{2}}\left(\\#(\mathscr{M})\ln{2}-\ln{p}\right).$
Consider the objective $f_{L,m}$ given by Eq. 89 for a certain choice of $L>0$
for which
(91)
$\|{\mathbf{g}}({\mathbf{b}}_{i})-{\mathbf{g}}({\mathbf{x}}_{j})\|_{2}\leq
L\|{\mathbf{m}}_{\mathscr{M}}({\mathbf{b}}_{i})-{\mathbf{m}}_{\mathscr{M}}({\mathbf{x}}_{j})\|_{2}$
is achieved for all ${\mathbf{b}}_{i}\in\mathcal{B}_{m}$,
${\mathbf{x}}_{j}\in\mathcal{X}_{N}$. Suppose also that ${\mathbf{g}}$ and the
measurement functions ${\mathbf{m}}_{k}$, $k\in\mathscr{M}$ are all Lipschitz
functions over $\mathcal{X}$. If $f_{L,m}(\mathscr{S})=f_{L,m}(\mathscr{M})$,
then the $\mu$-measure of points ${\mathbf{x}}\in\mathcal{X}$ such that
(92)
$\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}<L\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}+\left(\|{\mathbf{g}}\|_{\text{lip}}+L\|{\mathbf{m}}_{\mathscr{S}}\|_{\text{lip}}\right)\varepsilon_{0}$
for every ${\mathbf{x}}^{\prime}\in\mathcal{X}$ is at least $1-\delta$ with
probability at least $1-p$.
###### Proof.
The proof is analogous to Theorem 5.4 and so we relegate it to Appendix C. ∎
## 6\. Working with Noisy Data
So far, we have considered maximizing different measures of robust
reconstructability given a collection of noiseless data. That is, the
resulting sensors are selected in order to be noise robust, but we have
assumed that the measurements ${\mathbf{m}}_{j}({\mathbf{x}}_{i})$,
$j\in\mathscr{M}$ and target variables ${\mathbf{g}}({\mathbf{x}}_{i})$ used
during the sensor selection process are noiseless over the sampled states
${\mathbf{x}}_{i}\in\mathcal{X}_{N}$. In many applications, however, our data
may contain noisy measurements, target variables, or both. In this section, we
study the effect of noisy data on the performance of our proposed secant-based
greedy algorithms. By “noise” we mean specifically that we are given a
collection of available measurements
$\left\\{{\mathbf{\tilde{m}}}_{i,\mathscr{M}}={\mathbf{m}}_{\mathscr{M}}({\mathbf{x}}_{i})+{\mathbf{u}}_{i,\mathscr{M}}\right\\}_{i=1}^{N}$
that are corrupted by unknown noise ${\mathbf{u}}_{i,\mathscr{M}}$ together
with the corresponding target values
$\left\\{{\mathbf{\tilde{g}}}_{i}={\mathbf{g}}({\mathbf{x}}_{i})+{\mathbf{v}}_{i}\right\\}_{i=1}^{N}$
that are also corrupted by unknown noise ${\mathbf{v}}_{i}$. That is, we do
not have access to the measurement functions ${\mathbf{m}}_{\mathscr{M}}$ or
the target function ${\mathbf{g}}$ and must rely solely on noisy data
generated by them.
First, we mention that the minimal sensing method to meet an error tolerance
discussed in Section 4.2 is robust to bounded noise in the measurements and
target variables. In particular, since the selected sensors $\mathscr{S}$
using the approach described in Section 4.2 automatically satisfy Eq. 94,
Proposition 6.1, below, shows that the true measurements coming from states
with sufficiently distant true target values must also be separated by the
measurements.
###### Proposition 6.1 (Noisy Separation Guarantee).
Let $\mathcal{X}_{N}$ be an $\varepsilon_{0}$-net of $\mathcal{X}$ (see
Definition 4.4) and let ${\mathbf{v}}_{i}\in\mathbb{R}^{\dim{\mathbf{g}}}$,
${\mathbf{u}}_{i,\mathscr{S}}\in\mathbb{R}^{d_{\mathscr{S}}}$, $i=1,\ldots,N$
be bounded vectors with
(93) $\forall
i=1,\ldots,N\qquad\left\|{\mathbf{u}}_{i,\mathscr{S}}\right\|_{2}\leq\delta_{u},\qquad\left\|{\mathbf{v}}_{i}\right\|_{2}\leq\delta_{v}.$
Suppose that there exist $\varepsilon>0$ and $\gamma>0$ such that
(94)
$\forall{\mathbf{x}}_{i},{\mathbf{x}}_{j}\in\mathcal{X}_{N}\qquad\|\left({\mathbf{g}}({\mathbf{x}}_{i})+{\mathbf{v}}_{i}\right)-\left({\mathbf{g}}({\mathbf{x}}_{j})+{\mathbf{v}}_{j}\right)\|_{2}\geq\varepsilon\\\
\Rightarrow\quad\|\left({\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{i})+{\mathbf{u}}_{i,\mathscr{S}}\right)-\left({\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{j})+{\mathbf{u}}_{j,\mathscr{S}}\right)\|_{2}\geq\gamma.$
If ${\mathbf{m}}_{\mathscr{S}}$ and ${\mathbf{g}}$ are Lipschitz functions
with Lipschitz constants $\|{\mathbf{m}}_{\mathscr{S}}\|_{\text{lip}}$ and
$\|{\mathbf{g}}\|_{\text{lip}}$ respectively, then
(95)
$\forall{\mathbf{x}},{\mathbf{x}}^{\prime}\in\mathcal{X}\qquad\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}\geq\varepsilon+2\delta_{v}+2\varepsilon_{0}\|{\mathbf{g}}\|_{\text{lip}}\\\
\quad\Rightarrow\quad\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}>\gamma-2\delta_{u}-2\varepsilon_{0}\|{\mathbf{m}}_{\mathscr{S}}\|_{\text{lip}}.$
###### Proof.
The proof is analogous to Proposition 4.5 and has been relegated to Appendix
C. ∎
As a consequence of Proposition 6.1, the reconstruction error for the desired
quantities using these sensors can still be bounded if the thresholds
$\varepsilon$ and $\gamma$ exceed twice the noise level of the target variable
and measurements respectively (with a little extra padding based on the
sampling fineness).
On the other hand, the minimal sensing method to meet an amplification
tolerance discussed in Section 4.3 is very sensitive to noisy data. This is
because measurement noise can bring two nearby measurements
${\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})$ and
${\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})$ arbitrarily close together
while the corresponding target variables ${\mathbf{g}}({\mathbf{x}})$ and
${\mathbf{g}}({\mathbf{x}}^{\prime})$ remain separated. Such terms can result
in arbitrarily large data-driven estimates of the reconstruction Lipschitz
constant. Consequently it may not be possible to find a small set of sensors
$\mathscr{S}$ such that
(96) $\max_{1\leq i<j\leq
N}\frac{\left\|{\mathbf{\tilde{g}}}_{i}-{\mathbf{\tilde{g}}}_{j}\right\|_{2}}{\left\|{\mathbf{\tilde{m}}}_{i,\mathscr{S}}-{\mathbf{\tilde{m}}}_{j,\mathscr{S}}\right\|_{2}}\leq
L$
for acceptable values of $L$.
One way to deal with this problem is to smooth out the target variables. For
instance, given the available noisy measurement and target pairs
$\left\\{\left({\mathbf{\tilde{m}}}_{i,\mathscr{M}},\
{\mathbf{\tilde{g}}}_{i}\right)\right\\}_{i=1}^{N}$, one can find an
approximation of the reconstruction function $\boldsymbol{\Phi}_{\mathscr{M}}$
via regression. Using the predicted target variables
(97)
${\mathbf{\hat{g}}}_{i}:=\boldsymbol{\Phi}_{\mathscr{M}}\left({\mathbf{\tilde{m}}}_{i,\mathscr{M}}\right)$
in place of the noisy data ${\mathbf{\tilde{g}}}_{i}$ fixes the problem of
infinite Lipschitz constants. This is because the amplification-based approach
using these data seeks to find the minimal set of sensors $\mathscr{S}$ such
that
(98) $\max_{1\leq i<j\leq
N}\frac{\left\|{\mathbf{\hat{g}}}_{i}-{\mathbf{\hat{g}}}_{j}\right\|_{2}}{\left\|{\mathbf{\tilde{m}}}_{i,\mathscr{S}}-{\mathbf{\tilde{m}}}_{j,\mathscr{S}}\right\|_{2}}\leq
L$
rather than satisfying Eq. 96.
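A minimal sketch of this smoothing step follows, using a simple k-nearest-neighbor regressor in measurement space as a hypothetical stand-in for the fitted reconstruction map $\boldsymbol{\Phi}_{\mathscr{M}}$ of Eq. 97:

```python
import numpy as np

def smooth_targets(M_noisy, G_noisy, k=5):
    """Denoise target values by regressing them on the full noisy measurements
    (k-nearest-neighbor averaging as a stand-in for the regression in Eq. 97).
    Returns predicted targets g_hat to use in place of the noisy g_tilde."""
    d2 = np.sum((M_noisy[:, None, :] - M_noisy[None, :, :]) ** 2, axis=2)
    nbrs = np.argsort(d2, axis=1)[:, :k]   # each point's k nearest (including itself)
    return G_noisy[nbrs].mean(axis=1)

def empirical_lipschitz(M, G):
    """Worst-case secant ratio ||g_i - g_j|| / ||m_i - m_j|| as in Eqs. 96 and 98."""
    best = 0.0
    for i in range(len(M)):
        for j in range(i + 1, len(M)):
            dm = np.linalg.norm(M[i] - M[j])
            dg = np.linalg.norm(G[i] - G[j])
            if dm > 0:
                best = max(best, dg / dm)
    return best
```

On data where two states have nearly identical noisy measurements but noisy targets that differ, the raw secant ratio of Eq. 96 blows up, while the smoothed targets keep the data-driven Lipschitz estimate bounded.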
We use a similar type of smoothing approach for the shock-mixing layer problem
by choosing the leading two Isomap coordinates
${\mathbf{g}}({\mathbf{x}})=\left(\phi_{1}({\mathbf{x}}),\
\phi_{2}({\mathbf{x}})\right)$ rather than simply taking
${\mathbf{g}}({\mathbf{x}})={\mathbf{x}}$. This is because the full state
${\mathbf{x}}$ contains some small noise, meaning that it does not lie exactly
on the one-dimensional loop in state space, but rather on a very thin manifold
with full dimensionality. If we were to use the Lipschitz-based approach to
reconstruct ${\mathbf{x}}$ directly, we would need enough sensors to
reconstruct this noise. By seeking to reconstruct the leading Isomap
coordinates instead, we have regularized our selection algorithm to choose
only those sensors that are needed to reconstruct the dominant periodic
behavior.
Reconstructing smoothed target variables turns out to be a robust method for
sensor placement, as we show by introducing increasing levels of noise in the
shock-mixing layer problem. We added independent Gaussian noise with standard
deviations $\sigma_{\text{noise}}=0.01$, $0.02$, $0.03$, $0.04$, and $0.05$ to
each velocity component at every location on the computational grid, yielding
noisy snapshots like the one shown in Figure 5. This reflects the typical
situation when the underlying data given to us are noisy. At each noise level
we selected three sensors using the detectable difference-based method of
Section 4.1 as well as the amplification tolerance-based method of Section
4.3, with a bisection search over the threshold Lipschitz constant $L$, to
reconstruct the leading two Isomap coordinates of the noisy data. Despite the
noise, the leading two Isomap coordinates continued to accurately capture the
dominant periodic behavior of the underlying system, making them good
reconstruction target variables. The thresholds for the detectable difference
method were fixed at $\gamma=0.04$ except in the $\sigma_{\text{noise}}=0.02$
case, where better performance was achieved using $\gamma=0.02$.
Figure 5. We show the stream-wise (first column) and transverse (second
column) components of velocity for a single snapshot of the shock-mixing layer
flow with increasing levels of noise added in each successive row. Independent
Gaussian noise with standard deviations $\sigma_{\text{noise}}=0.01$, $0.02$,
$0.03$, $0.04$, and $0.05$ are added to each velocity component at each
location on the computational grid. The first two sensors chosen by detectable
difference method of Section 4.1 are indicated by green stars and the third is
indicated by a black star. The three sensors selected using the amplification
tolerance method of Section 4.3 with bisection search over $L$ are indicated
by black squares.
We found that the amplification tolerance-based method identified the same
sensors across each of the first four noise levels
$\sigma_{\text{noise}}=0.01$, $0.02$, $0.03$, and $0.04$. While these sensor
locations differed slightly from the ones selected without noise (shown in
Figure 1), they too were capable of robustly recovering the underlying phase
of the system as illustrated by their corresponding measurements in the third
column of Figure 6. At the largest noise level $\sigma_{\text{noise}}=0.05$,
the sensors selected using this method changed, but were still capable of
revealing the phase as shown in the bottom right plot of Figure 6. The
detectable difference-based method selected the same three sensors as in the
zero noise case when $\sigma_{\text{noise}}=0.01$ with the first two remaining
the same up to $\sigma_{\text{noise}}=0.02$. At these noise levels the first
two sensors are sufficient to reveal the underlying phase of the system as
shown in the first two plots in the first column of Figure 6. Beyond this
level of noise, the first two sensors were no longer able to reveal the phase
as illustrated by the self-intersections in the last three plots in the first
column of Figure 6. While it is admittedly difficult to see from the last
three plots in the middle column of Figure 6, the third sensor eliminated
these self-intersections by raising one of the two intersecting branches and
allowing the phase to be determined.
Figure 6. These plots show the measurements made by sensors selected using the
detectable difference method of Section 4.1 with two (first column) and three
(second column) sensors along with the amplification tolerance method of
Section 4.3 with three sensors (third column) on the shock-mixing layer flow
problem with various levels of added noise. Each row shows the result of
adding independent Gaussian noise with standard deviations
$\sigma_{\text{noise}}=0.01$, $0.02$, $0.03$, $0.04$, and $0.05$ to each
velocity component at each location on the computational grid.
## 7\. Conclusion
In this paper we have identified a common type of nonlinear structure that
causes techniques for sensor placement relying on linear reconstruction
accuracy as an optimization criterion to consistently fail to identify minimal
sets of sensors. Specifically, these techniques break down and lead to costly
over-sensing when the data is intrinsically low dimensional but curved in
such a way that energetic components are functions of less energetic ones, and
not vice versa. This problem occurs commonly in fluid flows, period-doubling
bifurcations in ecology and cardiology, as well as in spectral methods for
manifold learning. We demonstrated that a representative collection of linear
techniques fail to identify sensors from which the state of a shock-mixing
layer flow can be reconstructed, and we provided a simple example
illustrating that the performance of the linear techniques can be arbitrarily
bad. In addition, we demonstrated that it is impossible to use linear feature
selection methods to choose fundamental nonlinear eigen-coordinates in
manifold learning problems.
To remedy these issues, we proposed a new approach for sensor placement that
relies on the information contained in secant vectors between data points to
quantify nonlinear reconstructability of desired quantities from measurements.
The resulting secant-based optimization problems turn out to have useful
diminishing returns properties that enable efficient greedy approximation
algorithms to achieve guaranteed high levels of performance. We also described
how down-sampling can be used to improve the computational scaling of these
algorithms while still providing guarantees regarding the reconstructability
of states in the underlying set from which the available data is sampled.
Finally, these methods prove to be capable of selecting minimal collections of
sensors in the shock-mixing layer problem as well as selecting the minimal set
of fundamental manifold learning coordinates on a torus — both of which are
problems where the linear techniques fail.
### Acknowledgements
The authors would like to thank Gregory Blaisdell, Shih-Chieh Lo, Tasos
Lyrintzis, and Kurt Aikens for providing the code used to compute the shock-
mixing layer interaction. We also want to thank Alberto Padovan and Anastasia
Bizyaeva for providing key references that motivated our main example, drew
connections with period doubling, and revealed how linear methods can fail to
find adequate sensor and actuator locations in real-world problems.
## References
* [1] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural computation, 15(6):1373–1396, 2003.
* [2] J. L. Bentley. A survey of techniques for fixed radius near neighbor searching. Technical report, Stanford University, Stanford, CA, USA, 1975.
* [3] J. L. Bentley, D. F. Stanat, and E. H. Williams Jr. The complexity of finding fixed-radius near neighbors. Information Processing Letters, 6(6):209–212, 1977.
* [4] G. Berkooz, P. Holmes, and J. L. Lumley. The proper orthogonal decomposition in the analysis of turbulent flows. Annual Review of Fluid Mechanics, 25(1):539–575, 1993.
* [5] D. Broomhead and M. Kirby. Dimensionality reduction using secant-based projection methods: The induced dynamics in projected systems. Nonlinear Dynamics, 41(1-3):47–67, 2005.
* [6] S. L. Brunton and J. N. Kutz. Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control. Cambridge University Press, 2019.
* [7] S. L. Brunton, J. L. Proctor, and J. N. Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 113(15):3932–3937, 2016.
* [8] P. Businger and G. H. Golub. Linear least squares solutions by householder transformations. Numerische Mathematik, 7(3):269–276, 1965.
* [9] J. L. Callaham, K. Maeda, and S. L. Brunton. Robust flow reconstruction from limited measurements via sparse representation. Physical Review Fluids, 4(10):103907, 2019.
* [10] E. J. Candès, Y. Plan, et al. Near-ideal model selection by $\ell_{1}$ minimization. The Annals of Statistics, 37(5A):2145–2177, 2009.
* [11] W. F. Caselton, L. Kan, and J. V. Zidek. Quality data networks that minimize entropy. In Statistics in the Environmental and Earth Sciences, pages 10–38. Halsted Press, 1992.
* [12] W. F. Caselton and J. V. Zidek. Optimal monitoring network designs. Statistics & Probability Letters, 2(4):223–227, 1984.
* [13] K. Chaloner and I. Verdinelli. Bayesian experimental design: A review. Statistical Science, pages 273–304, 1995.
* [14] S. Chaturantabut and D. C. Sorensen. Nonlinear model reduction via discrete empirical interpolation. SIAM Journal on Scientific Computing, 32(5):2737–2764, 2010.
* [15] N. T. Clemens and V. Narayanaswamy. Low-frequency unsteadiness of shock wave/turbulent boundary layer interactions. Annual Review of Fluid Mechanics, 46:469–492, 2014.
* [16] R. R. Coifman and S. Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21(1):5–30, 2006.
* [17] N. K. Dhingra, M. R. Jovanović, and Z.-Q. Luo. An ADMM algorithm for optimal sensor and actuator selection. In 53rd IEEE Conference on Decision and Control, pages 4039–4044. IEEE, 2014.
* [18] Z. Drmac and S. Gugercin. A new selection operator for the discrete empirical interpolation method—improved a priori error bound and extensions. SIAM Journal on Scientific Computing, 38(2):A631–A648, 2016.
* [19] W. J. Dunstan, R. R. Bitmead, and S. M. Savaresi. Fitting nonlinear low-order models for combustion instability control. Control Engineering Practice, 9(12):1301–1317, 2001.
* [20] I. Guyon and A. Elisseeff. An introduction to variable and feature selection. Journal of Machine Learning Research, 3(Mar):1157–1182, 2003.
* [21] C. Hegde, A. C. Sankaranarayanan, W. Yin, and R. G. Baraniuk. Numax: A convex approach for learning near-isometric linear embeddings. IEEE Transactions on Signal Processing, 63(22):6109–6121, 2015.
* [22] C.-M. Ho and L.-S. Huang. Subharmonics and vortex merging in mixing layers. Journal of Fluid Mechanics, 119:443–473, 1982.
* [23] S. Hosseinyalamdary. Deep Kalman filter: Simultaneous multi-sensor integration and modelling; a GNSS/IMU case study. Sensors, 18(5):1316, 2018.
* [24] H. Hotelling. Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology, 24(6):417, 1933.
* [25] A. A. Jamshidi and M. J. Kirby. Towards a black box algorithm for nonlinear function approximation over high-dimensional domains. SIAM Journal on Scientific Computing, 29(3):941–963, 2007.
* [26] S. Joshi and S. Boyd. Sensor selection via convex optimization. IEEE Transactions on Signal Processing, 57(2):451–462, 2008.
* [27] A. Krause, H. B. McMahan, C. Guestrin, and A. Gupta. Robust submodular observation selection. Journal of Machine Learning Research, 9(Dec):2761–2801, 2008.
* [28] A. Krause, A. Singh, and C. Guestrin. Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies. Journal of Machine Learning Research, 9(Feb):235–284, 2008.
* [29] R. G. Krishnan, U. Shalit, and D. Sontag. Structured inference networks for nonlinear state space models. In Thirty-First AAAI Conference on Artificial Intelligence, 2017.
* [30] A. Lamraoui, F. Richecoeur, S. Ducruix, and T. Schuller. Experimental analysis of simultaneous non-harmonically related unstable modes in a swirled combustor. In Proceedings of the ASME 2011 Turbo Expo: Turbine Technical Conference and Exposition, volume 2, pages 1289–1299, 2011.
* [31] K. Lee and K. T. Carlberg. Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics, 404:108973, 2020.
* [32] S.-C. Lo, G. A. Blaisdell, and A. S. Lyrintzis. High-order shock capturing schemes for turbulence calculations. International Journal for Numerical Methods in Fluids, 62(5):473–498, 2010.
* [33] K. Manohar, B. W. Brunton, J. N. Kutz, and S. L. Brunton. Data-driven sparse sensor placement for reconstruction: Demonstrating the benefits of exploiting known patterns. IEEE Control Systems Magazine, 38(3):63–86, 2018.
* [34] M. Marion and R. Temam. Nonlinear Galerkin methods. SIAM Journal on Numerical Analysis, 26(5):1139–1157, 1989.
* [35] M. Minoux. Accelerated greedy algorithms for maximizing submodular set functions. In Optimization Techniques, pages 234–243. Springer, 1978.
* [36] V. Mons, J.-C. Chassaing, and P. Sagaut. Optimal sensor placement for variational data assimilation of unsteady flows past a rotationally oscillating cylinder. Journal of Fluid Mechanics, 823:230–277, 2017.
* [37] N. J. Nair and A. Goza. Integrating sensor data into reduced-order models with deep learning. Bulletin of the American Physical Society, 64, 2019.
* [38] N. J. Nair and A. Goza. Leveraging reduced-order models for state estimation using deep learning. arXiv preprint arXiv:1912.10553, 2019.
* [39] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions—I. Mathematical Programming, 14(1):265–294, 1978.
* [40] M. Ohlberger and S. Rave. Reduced basis methods: Success, limitations and future challenges. In Proceedings of ALGORITMY, pages 1–12, 2016.
* [41] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad. Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. In Proceedings of 27th Asilomar conference on signals, systems and computers, pages 40–44. IEEE, 1993.
* [42] S. Priebe and M. P. Martín. Low-frequency unsteadiness in shock wave–turbulent boundary layer interaction. Journal of Fluid Mechanics, 699:1–49, 2012.
* [43] S. Priebe, J. H. Tu, C. W. Rowley, and M. P. Martín. Low-frequency dynamics in a shock-induced separated flow. Journal of Fluid Mechanics, 807:441–477, 2016.
* [44] F. Pukelsheim. Optimal Design of Experiments. SIAM, 2006.
* [45] T. Quail, A. Shrier, and L. Glass. Predicting the onset of period-doubling bifurcations in noisy cardiac systems. Proceedings of the National Academy of Sciences, 112(30):9358–9363, 2015.
* [46] C. E. Rasmussen. Gaussian processes in machine learning. In Summer School on Machine Learning, pages 63–71. Springer, 2003\.
* [47] G. Rega and H. Troger. Dimension reduction of dynamical systems: methods, models, applications. Nonlinear Dynamics, 41(1-3):1–15, 2005.
* [48] C. W. Rowley, T. Colonius, and R. M. Murray. Model reduction for compressible flows using POD and Galerkin projection. Physica D: Nonlinear Phenomena, 189(1-2):115–129, 2004.
* [49] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural computation, 10(5):1299–1319, 1998.
* [50] P. Sebastiani and H. P. Wynn. Maximum entropy sampling and optimal Bayesian experimental design. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 62(1):145–157, 2000.
* [51] M. Shamaiah, S. Banerjee, and H. Vikalo. Greedy sensor selection: Leveraging submodularity. In 49th IEEE Conference on Decision and Control, pages 2572–2577. IEEE, 2010.
* [52] M. C. Shewry and H. P. Wynn. Maximum entropy sampling. Journal of applied statistics, 14(2):165–170, 1987.
* [53] T. H. Summers, F. L. Cortesi, and J. Lygeros. On submodularity and controllability in complex dynamical networks. IEEE Transactions on Control of Network Systems, 3(1):91–101, 2015.
* [54] T. H. Summers, F. L. Cortesi, and J. Lygeros. On submodularity and controllability in complex dynamical networks. IEEE Transactions on Control of Network Systems, 3(1):91–101, March 2016.
* [55] W. Sun, G. Yang, B. Du, L. Zhang, and L. Zhang. A sparse and low-rank near-isometric linear embedding method for feature extraction in hyperspectral imagery classification. IEEE Transactions on Geoscience and Remote Sensing, 55(7):4032–4046, 2017.
* [56] J. B. Tenenbaum, V. De Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
* [57] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1):267–288, 1996.
* [58] J. A. Tropp, A. C. Gilbert, and M. J. Strauss. Simultaneous sparse approximation via greedy pursuit. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP ’05), volume 5, pages v–721. IEEE, 2005.
* [59] V. Tzoumas, A. Jadbabaie, and G. J. Pappas. Sensor placement for optimal Kalman filtering: Fundamental limits, submodularity, and algorithms. In 2016 American Control Conference, pages 191–196. IEEE, 2016.
* [60] O. Tzuk, S. R. Ujjwal, C. Fernandez-Oto, M. Seifan, and E. Meron. Period doubling as an indicator for ecosystem sensitivity to climate extremes. Scientific reports, 9(1):1–10, 2019.
* [61] H. Whitney. Differentiable manifolds. Annals of Mathematics, pages 645–680, 1936.
* [62] H. Whitney. The self-intersections of a smooth $n$-manifold in $2n$-space. Annals of Mathematics, pages 220–246, 1944.
* [63] L. A. Wolsey. An analysis of the greedy algorithm for the submodular set covering problem. Combinatorica, 2(4):385–393, 1982.
* [64] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma. Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2):210–227, 2008.
* [65] H. C. Yee, N. D. Sandham, and M. J. Djomehri. Low-dissipative high-order shock-capturing methods using characteristic-based filters. Journal of Computational Physics, 150(1):199–238, 1999.
* [66] B. Yildirim, C. Chryssostomidis, and G. Karniadakis. Efficient sensor placement for ocean measurements using low-dimensional concepts. Ocean Modelling, 27(3-4):160–173, 2009.
* [67] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49–67, 2006.
## Appendix A Implementation Details
### A.1. Principal Component Analysis (PCA) and Isomap
In this paper, we used principal component analysis (PCA) [24] in order to
find a modal basis for pivoted QR factorization and to identify a low-
dimensional representation of the state and its covariance for determinantal
D-optimal selection techniques on the shock-mixing layer flow. In order to
perform PCA, one needs an appropriate inner product on the space in which the
data lives. In the case of the shock-mixing layer problem, we use the energy-
based inner product for compressible flows developed in [48] together with
trapezoidal quadrature weights to approximate the integrals of the spatial
fields over a stretched computational grid. In this problem, the data consists
of vectors ${\mathbf{z}}$ whose elements are the streamwise velocity $u$,
transverse velocity $v$, and the local speed of sound $a$ over a $321\times
81$ computational grid. The inner product between two snapshots ${\mathbf{z}}$
and ${\mathbf{z}}^{\prime}$ is defined by
(99) $\left\langle{\mathbf{z}},\
{\mathbf{z}}^{\prime}\right\rangle={\mathbf{z}}^{T}{\mathbf{W}}{\mathbf{z}}^{\prime}=\sum_{i=1}^{321}\sum_{j=1}^{81}w_{i,j}\left(u_{i,j}u_{i,j}^{\prime}+v_{i,j}v_{i,j}^{\prime}+a_{i,j}a_{i,j}^{\prime}\right)\approx\int_{\Omega}\left[u(\xi_{1},\xi_{2})u^{\prime}(\xi_{1},\xi_{2})+v(\xi_{1},\xi_{2})v^{\prime}(\xi_{1},\xi_{2})+a(\xi_{1},\xi_{2})a^{\prime}(\xi_{1},\xi_{2})\right]d\xi_{1}d\xi_{2},$
where the weights $\\{w_{i,j}\\}$ are selected to perform trapezoidal
quadrature. Principal component analysis is performed by computing an economy-
sized singular value decomposition of the mean-subtracted data matrix
(100)
$\tilde{{\mathbf{U}}}\boldsymbol{\Sigma}{\mathbf{V}}^{T}={\mathbf{W}}^{1/2}\begin{bmatrix}({\mathbf{z}}_{1}-\bar{{\mathbf{z}}})&\cdots&({\mathbf{z}}_{N}-\bar{{\mathbf{z}}})\end{bmatrix},\quad\bar{{\mathbf{z}}}=\frac{1}{N}\sum_{i=1}^{N}{\mathbf{z}}_{i}$
and forming the matrix of principal vectors
${\mathbf{U}}={\mathbf{W}}^{-1/2}\tilde{{\mathbf{U}}}$. These vectors, making
up the columns of ${\mathbf{U}}$, are orthonormal with respect to the
${\mathbf{W}}$-weighted inner product. If we represent the states in this
basis so that
${\mathbf{z}}_{i}=\bar{{\mathbf{z}}}+{\mathbf{U}}{\mathbf{x}}_{i}$ then
${\mathbf{x}}$ has empirical covariance
${\mathbf{C}}_{{\mathbf{x}}}=\frac{1}{N}\boldsymbol{\Sigma}^{2}$.
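The weighted PCA above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's code: the sizes, weights, and snapshot data below are random stand-ins for the $321\times 81$ shock-mixing layer fields with trapezoidal quadrature weights.

```python
import numpy as np

# Sketch of PCA in a W-weighted inner product (hypothetical sizes; the
# paper uses u, v, a fields on a 321x81 grid with trapezoidal weights).
rng = np.random.default_rng(0)
d, N = 50, 20                       # state dimension, number of snapshots
Z = rng.standard_normal((d, N))     # snapshot matrix, columns are states z_i
w = rng.uniform(0.5, 1.5, size=d)   # positive quadrature weights, W = diag(w)

z_bar = Z.mean(axis=1, keepdims=True)
# Economy SVD of the W^{1/2}-weighted, mean-subtracted data matrix (Eq. 100).
U_tilde, s, Vt = np.linalg.svd(np.sqrt(w)[:, None] * (Z - z_bar),
                               full_matrices=False)
U = U_tilde / np.sqrt(w)[:, None]   # principal vectors U = W^{-1/2} U_tilde

# Columns of U are orthonormal in the W-weighted inner product: U^T W U = I.
assert np.allclose(U.T @ (w[:, None] * U), np.eye(N), atol=1e-10)

# Coordinates x_i = U^T W (z_i - z_bar) have empirical covariance Sigma^2 / N.
X = U.T @ (w[:, None] * (Z - z_bar))
assert np.allclose((X @ X.T) / N, np.diag(s**2) / N, atol=1e-8)
```

Weighting by $W^{1/2}$ before the SVD is what lets a standard SVD routine produce modes orthonormal in the physical (energy-based) inner product rather than the plain Euclidean one.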
The same weighted inner product was used to compute the distances between each
data point ${\mathbf{z}}_{i}$ and its $10$ nearest neighbors in order to
compute the leading $50$ Isomap coordinates using scikit-learn’s
implementation found at
https://scikit-learn.org/stable/modules/generated/sklearn.manifold.Isomap.html.
### A.2. (Group) LASSO
We use the Python implementation of group LASSO [67] by Yngve Mardal Moe at
the University of Oslo that can be found at
https://group-LASSO.readthedocs.io/en/latest/index.html. We select among $2210$ sensor
measurements of $u$ and $v$ velocity components over a grid of $1105$ spatial
locations taken directly from the shock-mixing layer snapshot data. We tried
two different kinds of target variables to be reconstructed via group LASSO.
For the method we call “LASSO+PCA”, the target variables were the data’s
leading $100$ principal components which capture over $99\%$ of the data’s
variance. For the method we call “LASSO+Isomap”, the target variables were the
leading two Isomap coordinates
${\mathbf{g}}({\mathbf{x}})=\left(\phi_{1}({\mathbf{x}}),\phi_{2}({\mathbf{x}})\right)$,
which reveal the phase angle $\theta$. The sparsity-promoting regularization
parameter was found using a bisection search in each case and was the smallest
value, to within a tolerance of $10^{-5}$, for which group LASSO selected $3$
sensors.
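As a hedged illustration of this selection mechanism (the experiments above use Moe's group-lasso package, not the code below), scikit-learn's `MultiTaskLasso` applies the same row-wise $\ell_2/\ell_1$ penalty across multiple targets, so a surviving row of `coef_` marks a selected sensor. The toy data and regularization strength here are hypothetical.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

# Toy stand-in for group-LASSO sensor selection: the targets depend only
# on sensors 3 and 11, and the joint row penalty should zero out the rest.
rng = np.random.default_rng(1)
n_snapshots, n_sensors = 200, 30
M = rng.standard_normal((n_snapshots, n_sensors))  # candidate measurements
mix = np.array([[2.0, -1.0], [1.0, 2.0]])          # true sensor-to-target map
G = M[:, [3, 11]] @ mix                            # two target variables

model = MultiTaskLasso(alpha=0.1, max_iter=10000).fit(M, G)
row_norms = np.linalg.norm(model.coef_, axis=0)    # coef_ is (targets, sensors)
selected = set(np.where(row_norms > 1e-3)[0].tolist())
```

The bisection over the regularization parameter described above corresponds to tuning `alpha` until exactly the desired number of rows survives.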
### A.3. Bayesian D-Optimal Selection
We use two different approaches for Bayesian D-optimal sensor placement: the
greedy technique of [51] and the convex relaxation approach of [26]. In the
greedy approach, we leverage the submodularity of the objective in the case
when ${\mathbf{T}}={\mathbf{I}}$ in order to use the accelerated greedy
algorithm of M. Minoux [35]. For the convex approach, we wrote a direct Python
translation of a MATLAB code written by S. Joshi and S. Boyd that implements a
Newton method with line search, and may be found at
https://web.stanford.edu/~boyd/papers/matlab/sensor_selection/. We use the
gradient and Hessian matrices for the Bayesian D-optimal objective from their
paper [26].
In both the greedy and convex approach for the shock-mixing layer problem, we
take the state to be its representation using $100$ principal components with
covariance given by
${\mathbf{C}}_{{\mathbf{x}}}=\frac{1}{N}\boldsymbol{\Sigma}^{2}$ as computed
by PCA. These principal components were also used as the relevant information
to be reconstructed, i.e., ${\mathbf{T}}={\mathbf{I}}$. The sensor noise was
assumed to be isotropic with covariance
${\mathbf{C}}_{{\mathbf{n}}_{\mathscr{S}}}=\sigma^{2}{\mathbf{I}}_{d_{\mathscr{S}}}$
with $\sigma=0.02$. We tried many other values of $\sigma$, yielding different
sensor locations, none of which could be used for nonlinear reconstruction.
The ones we show at $\sigma=0.02$ are representative.
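The greedy variant can be sketched as follows. This is plain greedy rather than the Minoux-accelerated algorithm used in the experiments, and the modal covariance and candidate measurement rows are illustrative random stand-ins.

```python
import numpy as np

# Sketch of greedy Bayesian D-optimal selection for a linear-Gaussian
# model: x ~ N(0, C_x) in modal coordinates, y_j = m_j^T x + noise with
# variance sigma^2. Greedily add the sensor that most increases the
# log-determinant of the posterior information matrix.
rng = np.random.default_rng(2)
n_modes, n_candidates, k, sigma = 10, 100, 5, 0.02
C_x_inv = np.diag(np.arange(1.0, n_modes + 1))    # prior information matrix
M = rng.standard_normal((n_candidates, n_modes))  # candidate rows m_j^T

def logdet_info(rows):
    # Log-determinant of the posterior information after measuring `rows`.
    info = C_x_inv + (M[rows].T @ M[rows]) / sigma**2
    return np.linalg.slogdet(info)[1]

selected = []
for _ in range(k):
    best = max((j for j in range(n_candidates) if j not in selected),
               key=lambda j: logdet_info(selected + [j]))
    selected.append(best)

gains = [logdet_info(selected[:i + 1]) - logdet_info(selected[:i])
         for i in range(k)]
assert all(g >= -1e-9 for g in gains)  # monotone: each sensor adds information
```

Submodularity of this objective (when ${\mathbf{T}}={\mathbf{I}}$) is what justifies the lazy-evaluation speedup of [35]: marginal gains can only shrink as the selected set grows.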
### A.4. Maximum Likelihood D-Optimal Selection
We used the maximum likelihood D-optimal selection technique based on convex
relaxation found in [26] in order to choose sensors to try to reconstruct only
the $3$rd and $4$th principal components of the shock-mixing layer snapshots.
That is, if
${\mathbf{U}}=\begin{bmatrix}{\mathbf{u}}_{1}&{\mathbf{u}}_{2}&\cdots\end{bmatrix}$
is the matrix of principal components, we model the state as a linear
combination of ${\mathbf{u}}_{3}$ and ${\mathbf{u}}_{4}$ together with
isotropic Gaussian noise. We try to find the sensors so that the correct
coefficients on ${\mathbf{u}}_{3}$ and ${\mathbf{u}}_{4}$ can be recovered
with high confidence from the measurements. The rationale for doing so is the
fact that these two components are sufficient to nonlinearly reconstruct the
state of the system if they can be measured. As in Section A.3 above, we use a
direct Python translation of a MATLAB code written by S. Joshi and S. Boyd,
which may be found at
https://web.stanford.edu/~boyd/papers/matlab/sensor_selection/.
### A.5. Pivoted QR Factorization
For the pivoted QR factorization method [18, 8] applied to the shock-mixing
layer flow, we represent the state approximately as a linear combination of
the leading three principal components. Scipy’s implementation of pivoted QR
factorization found at
https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.qr.html was
used to select among the $2210$ allowable sensors those that allow robust
reconstruction of these first three principal components. We also tried
representing the state using more principal components and taking the first
three sensor locations chosen via pivoted QR factorization. As with the case
when only three principal components are used, these sensors do not enable
nonlinear reconstruction of the state.
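The selection step itself reduces to one library call. In this minimal sketch the modal basis is a random stand-in for the leading three principal components of the flow.

```python
import numpy as np
from scipy.linalg import qr

# Sketch of sensor selection via pivoted QR [18, 8]: the column pivots of
# U^T rank spatial locations so that the selected rows of the modal basis
# U form a well-conditioned square system for the modal coefficients.
rng = np.random.default_rng(3)
n_locations, r = 200, 3
U = np.linalg.qr(rng.standard_normal((n_locations, r)))[0]  # stand-in modes

_, _, piv = qr(U.T, pivoting=True)  # pivots of U^T = ranking of locations
sensors = piv[:r]                   # first r pivots are the sensor locations

# Coefficients c solve U[sensors, :] @ c = y for point measurements y.
assert np.linalg.cond(U[sensors, :]) < 100
```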
### A.6. Secant-Based Detectable Differences
The secant-based detectable difference method was implemented using the
accelerated greedy algorithm of M. Minoux [35] to optimize the objective
computed over all secants between points in the training data set consisting
of $N=750$ snapshots of the shock-mixing layer velocity field. We select among
the $2210$ sensor measurements of $u$ and $v$ velocity components on a grid of
$1105$ spatial locations taken directly from the shock-mixing layer snapshot
data. The target variables were chosen to be the leading two Isomap
coordinates
${\mathbf{g}}({\mathbf{x}})=\left(\phi_{1}({\mathbf{x}}),\phi_{2}({\mathbf{x}})\right)$,
which reveal the phase angle $\theta$. The greedy algorithm first reveals the
two sensor locations marked by green stars and then the black star in Figure 1
over the range of $0.02\leq\gamma\leq 0.06$, which can be used to reveal the
exact phase of the system. Choosing smaller values of $\gamma$ produces
different sensors that can also be used to reveal the phase, but with reduced
robustness to measurement perturbations. Gaussian process regression [46] was
used to reconstruct the leading $100$ principal components of the flowfields
from the sensor measurements. We used scikit-learn’s implementation, which can
be found at
https://scikit-learn.org/stable/modules/generated/sklearn.gaussian_process.GaussianProcessRegressor.html
together with a Matérn and white noise kernel whose parameters were optimized
during the fit.
For the torus example, the relevant information we wish to reconstruct are the
leading $100$ Isomap eigen-coordinates
${\mathbf{g}}({\mathbf{x}})=\left(\phi_{1}({\mathbf{x}}),\ldots,\phi_{100}({\mathbf{x}})\right)$
computed from $2000$ points sampled from the torus according to Eq. 11. The
objective function was evaluated using secants between $\\#(\mathcal{B})=100$
randomly sampled base points and the original set of $N=2000$ points. The
correct three coordinates $\phi_{1},\phi_{2},\phi_{7}$ are selected from among
the first $100$ consistently across a wide range of measurement separation
values $0.05\leq\gamma\leq 3.0$. We note that these values vary slightly with
the selected base points and these particular values hold only for one
instance.
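The objective and the (plain, un-accelerated) greedy loop can be sketched as follows; the secants here come from synthetic data rather than the shock-mixing layer snapshots.

```python
import numpy as np

# Synthetic stand-in for the secant-based detectable-difference method:
# each secant pair contributes w = min(||m_S(x) - m_S(x')||^2 / gamma^2, 1)
# times the squared separation of the targets g.
rng = np.random.default_rng(4)
N, n_sensors, gamma, k = 60, 25, 0.5, 3
X = rng.standard_normal((N, n_sensors))   # candidate measurements m_j(x_i)
G = rng.standard_normal((N, 2))           # target variables g(x_i)

i, j = np.triu_indices(N, k=1)            # all secant pairs
dm2 = (X[i] - X[j]) ** 2                  # per-sensor squared differences
dg2 = np.sum((G[i] - G[j]) ** 2, axis=1)  # squared target separations

def objective(S):
    w = np.minimum(dm2[:, S].sum(axis=1) / gamma**2, 1.0)
    return float(np.sum(w * dg2))

S = []
for _ in range(k):
    best = max((s for s in range(n_sensors) if s not in S),
               key=lambda s: objective(S + [s]))
    S.append(best)
assert objective(S) >= objective(S[:1]) >= objective([]) == 0.0
```

Precomputing the per-sensor squared secant differences once, as above, makes each marginal-gain evaluation a cheap column sum, which is also what makes the down-sampling over base points described in the text effective.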
### A.7. Secant-Based Amplification Tolerance
Like the secant-based detectable difference method described above, the
secant-based amplification tolerance method was implemented using the same
data, secant vectors, and target variables with the accelerated greedy
algorithm. A bisection search was used to find the smallest Lipschitz constant
$L=1868$ to within a tolerance of $1$ for which the algorithm selects three
sensors on the shock-mixing layer flow. Three (different) sensors that
correctly reveal the state of the flow are selected by this algorithm over a
range $1868\leq L\leq 47624$, above which only two sensors that cannot reveal
the state are selected. We also find that with $L=129$, the minimum possible
number of sensors exceeds $\\#(\mathscr{S}_{K})/(1+\ln{\kappa})=3.18>3$.
Therefore, the minimum possible reconstruction Lipschitz constant using three
sensors that one might find by an exhaustive combinatorial search must be
greater than $129$. We admit that this is likely a rather pessimistic bound,
but we cannot check it as there are $\binom{2210}{3}\approx 1.8\times 10^{9}$
possible choices for three sensors in this problem.
When applied to select from among the leading $100$ Isomap eigen-coordinates
on the torus example with the same setup as the secant-based detectable
differences method, the amplification tolerance method selects the appropriate
collection $\phi_{1},\phi_{2},\phi_{7}$ over the range $7.1\leq L\leq 25$. We
note that these values vary slightly with the selected base points and that these
particular values hold only for one instance.
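The amplification-tolerance weight differs from the detectable-difference weight only in how each secant is capped. A minimal sketch on synthetic data (the value of $L$ below is illustrative, not the $L=1868$ used in the experiments):

```python
import numpy as np

# Sketch of the amplification-tolerance weight: each secant contributes
# min(||m_S(x) - m_S(x')||^2 / ||g(x) - g(x')||^2, 1/L^2), capping the
# credit once reconstruction Lipschitz constant L is achieved.
rng = np.random.default_rng(6)
N, n_sensors, L = 40, 15, 10.0
X = rng.standard_normal((N, n_sensors))   # candidate measurements m_j(x_i)
G = rng.standard_normal((N, 2))           # target variables g(x_i)

i, j = np.triu_indices(N, k=1)
dm2 = (X[i] - X[j]) ** 2
dg2 = np.sum((G[i] - G[j]) ** 2, axis=1)  # nonzero for generic data

def objective(S):
    ratio = dm2[:, S].sum(axis=1) / dg2
    return float(np.sum(np.minimum(ratio, 1.0 / L**2)))

full = objective(list(range(n_sensors)))
assert 0.0 == objective([]) <= objective([0]) <= full
assert full <= len(dg2) / L**2 + 1e-12    # each secant is capped at 1/L^2
```

The bisection over $L$ described above corresponds to shrinking `1.0 / L**2` until the greedy algorithm needs exactly the desired number of sensors to saturate every secant.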
## Appendix B Submodularity of Objectives
We will need the definition of a modular function given below.
###### Definition B.1 (Modular Function).
Denote the set of all subsets of $\mathscr{M}$ by $2^{\mathscr{M}}$. A real-
valued function of the subsets $f:2^{\mathscr{M}}\to\mathbb{R}$ is called
“modular” when it can be written as a sum
(101) $f(\mathscr{S})=\sum_{j\in\mathscr{S}}a_{j}$
of constants $a_{j}$, $j\in\mathscr{M}$.
The key ingredient needed to prove submodularity for the objectives described
in Section 4 is the following lemma.
###### Lemma B.2 (Concave Composed with Modular is Submodular).
Let $h:\mathbb{R}\to\mathbb{R}$ be a concave function and let
$a:2^{\mathscr{M}}\to\mathbb{R}$ defined by
(102) $a(\mathscr{S})=\sum_{j\in\mathscr{S}}a_{j}$
be a modular function (Def. B.1) of subsets $\mathscr{S}\subseteq\mathscr{M}$
with $a_{j}\geq 0$ for all $j\in\mathscr{M}$. Then the function
$f:2^{\mathscr{M}}\to\mathbb{R}$ defined by
(103) $f(\mathscr{S})=h(a(\mathscr{S}))$
is submodular.
###### Proof.
Suppose that
$\mathscr{S}\subseteq\mathscr{S}^{\prime}\subseteq{\mathscr{M}}\setminus\\{j\\}$.
By concavity of $h$ we have
(104)
$h_{\alpha}=h((1-\alpha)a(\mathscr{S})+\alpha(a(\mathscr{S}^{\prime})+a_{j}))\geq(1-\alpha)h_{0}+\alpha
h_{1}$
for every $\alpha\in[0,1]$, where we note that $h_{0}=f(\mathscr{S})$ and
$h_{1}=f(\mathscr{S}^{\prime}\cup\\{j\\})$.
Since $\\{a_{l}\\}$ are non-negative we have $a(\mathscr{S})\leq
a(\mathscr{S})+a_{j}\leq a(\mathscr{S}^{\prime})+a_{j}$ and
$a(\mathscr{S})\leq a(\mathscr{S}^{\prime})\leq
a(\mathscr{S}^{\prime})+a_{j}$. We can therefore find
(105)
$\alpha_{1}=\frac{a_{j}}{a(\mathscr{S}^{\prime})+a_{j}-a(\mathscr{S})},\quad\alpha_{2}=\frac{a(\mathscr{S}^{\prime})-a(\mathscr{S})}{a(\mathscr{S}^{\prime})+a_{j}-a(\mathscr{S})}$
so that $h_{\alpha_{1}}=f(\mathscr{S}\cup\\{j\\})$ and
$h_{\alpha_{2}}=f(\mathscr{S}^{\prime})$. Note that $\alpha_{1}+\alpha_{2}=1$.
We now use Eq. 104 at $\alpha_{1}$ and $\alpha_{2}$ to bound the increments of
$f$:
(106)
$f(\mathscr{S}\cup\\{j\\})-f(\mathscr{S})=h_{\alpha_{1}}-h_{0}\geq\alpha_{1}(h_{1}-h_{0}),$
(107)
$f(\mathscr{S}^{\prime}\cup\\{j\\})-f(\mathscr{S}^{\prime})=h_{1}-h_{\alpha_{2}}\leq(1-\alpha_{2})(h_{1}-h_{0}).$
Combining the bounds Eq. 106 and Eq. 107 on the increments using
$1-\alpha_{2}=\alpha_{1}$ we conclude that $f$ is submodular
(108) $f(\mathscr{S}\cup\\{j\\})-f(\mathscr{S})\geq
f(\mathscr{S}^{\prime}\cup\\{j\\})-f(\mathscr{S}^{\prime}).$
∎
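Lemma B.2 is easy to spot-check numerically; the weights and concave function below are arbitrary illustrative choices.

```python
import numpy as np
from itertools import combinations

# Numerical spot-check of Lemma B.2: f(S) = h(a(S)) with h concave and
# a modular with non-negative weights has diminishing returns.
rng = np.random.default_rng(5)
a = rng.uniform(0.0, 2.0, size=6)          # modular weights a_j >= 0
h = np.sqrt                                # a concave function on [0, inf)
f = lambda S: float(h(sum(a[j] for j in S)))

ground = set(range(6))
for S in map(set, combinations(ground, 2)):
    for Sp in map(set, combinations(ground, 4)):
        if not S <= Sp:
            continue
        for j in ground - Sp:
            # f(S + j) - f(S) >= f(S' + j) - f(S') whenever S is inside S'.
            assert f(S | {j}) - f(S) >= f(Sp | {j}) - f(Sp) - 1e-12
```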
Using Lemma B.2 it suffices to observe that each of the objectives described
in Section 4 can be written as the composition of a concave function and a
modular function. We carry this out below in addition to proving normalization
and monotonicity for these objectives.
###### Lemma B.3 (Detectable Difference Objective is Submodular).
Suppose that the target variables ${\mathbf{g}}$ and measurements
${\mathbf{m}}_{j}$, $j\in\mathscr{M}$ are measurable functions. If $\mu$ and
$\nu$ are measures on $\mathcal{X}$, then the function defined by
(109)
$f(\mathscr{S})=\int_{\begin{subarray}{c}({\mathbf{x}},{\mathbf{x}}^{\prime})\in\mathcal{X}\times\mathcal{X}:\\\
\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}\geq\varepsilon\end{subarray}}w_{\gamma,{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}^{2}\
d\mu({\mathbf{x}})\,d\nu({\mathbf{x}}^{\prime}),$
for any $\varepsilon\geq 0$ with
(110)
$w_{\gamma,{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})=\min\left\\{\frac{1}{\gamma^{2}}\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}^{2},\
1\right\\},$
is normalized so that $f(\emptyset)=0$, monotone non-decreasing so that
$\mathscr{S}\subseteq\mathscr{S}^{\prime}\ \Rightarrow\ f(\mathscr{S})\leq
f(\mathscr{S}^{\prime})$, and submodular (Def. 4.2).
###### Proof.
Normalization is obvious. It suffices to prove that the function
$w_{{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})$ is monotone and
submodular for any fixed ${\mathbf{x}},{\mathbf{x}}^{\prime}\in\mathcal{X}$.
For if we suppose that
(111)
$\mathscr{S}\subseteq\mathscr{S}^{\prime}\subseteq{\mathscr{M}}\setminus\\{j\\}\quad\Rightarrow\quad
w_{\gamma,{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S}\cup\\{j\\})-w_{\gamma,{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})\\\
\geq
w_{\gamma,{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S}^{\prime}\cup\\{j\\})-w_{\gamma,{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S}^{\prime}),$
then multiplying both sides of the inequality by
$\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}^{2}$
and integrating proves that $f$ is submodular. The same argument also proves
monotonicity.
Let ${\mathbf{x}},{\mathbf{x}}^{\prime}\in\mathcal{X}$ be fixed. The squared
separation between the measurements is given by a modular (Def. B.1) sum
(112) $\mathscr{S}\ \mapsto\
\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}^{2}=\sum_{j\in\mathscr{S}}\|{\mathbf{m}}_{j}({\mathbf{x}})-{\mathbf{m}}_{j}({\mathbf{x}}^{\prime})\|_{2}^{2}$
of non-negative constants
$\|{\mathbf{m}}_{j}({\mathbf{x}})-{\mathbf{m}}_{j}({\mathbf{x}}^{\prime})\|_{2}^{2}$
over each $j\in\mathscr{S}$. Since $x\mapsto\min\\{x/\gamma^{2},\ 1\\}$ is a
non-decreasing function, it follows that
$\mathscr{S}\subseteq\mathscr{S}^{\prime}\ \Rightarrow\
w_{{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})\leq
w_{{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S}^{\prime})$, proving
monotonicity.
Submodularity of $w_{{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})$ follows
from Lemma B.2 since $w_{{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})$ is
the composition of a concave function $x\mapsto\min\\{x/\gamma^{2},\ 1\\}$
with the modular function in Eq. 112. ∎
###### Lemma B.4 (Lipschitz Objective is Submodular).
Suppose that the target variables ${\mathbf{g}}$ and measurements
${\mathbf{m}}_{j}$, $j\in\mathscr{M}$ are measurable functions. If $\mu$ and
$\nu$ are measures on $\mathcal{X}$, then the function defined by
(113)
$f(\mathscr{S})=\int_{\begin{subarray}{c}({\mathbf{x}},{\mathbf{x}}^{\prime})\in\mathcal{X}\times\mathcal{X}:\\\
{\mathbf{g}}({\mathbf{x}})\neq{\mathbf{g}}({\mathbf{x}}^{\prime})\end{subarray}}g_{{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})\
d\mu({\mathbf{x}})\,d\nu({\mathbf{x}}^{\prime}),$
with
(114)
$g_{{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})=\min\left\\{\frac{\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}^{2}}{\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}^{2}},\
\frac{1}{L^{2}}\right\\},$
is normalized so that $f(\emptyset)=0$, monotone non-decreasing so that
$\mathscr{S}\subseteq\mathscr{S}^{\prime}\ \Rightarrow\ f(\mathscr{S})\leq
f(\mathscr{S}^{\prime})$, and submodular (Def. 4.2).
###### Proof.
Normalization is obvious. It suffices to prove that the function
$g_{{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})$ is monotone and
submodular for any fixed ${\mathbf{x}},{\mathbf{x}}^{\prime}\in\mathcal{X}$.
For if we suppose that
(115)
$\mathscr{S}\subseteq\mathscr{S}^{\prime}\subseteq{\mathscr{M}}\setminus\\{j\\}\quad\Rightarrow\quad
g_{{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S}\cup\\{j\\})-g_{{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})\geq
g_{{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S}^{\prime}\cup\\{j\\})-g_{{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S}^{\prime}),$
then integrating both sides of the inequality proves that $f$ is submodular.
The same argument also proves monotonicity.
Let ${\mathbf{x}},{\mathbf{x}}^{\prime}\in\mathcal{X}$ be fixed. The squared
separation between the measurements is given by a modular (Def. B.1) sum
(116) $\mathscr{S}\ \mapsto\
\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}^{2}=\sum_{j\in\mathscr{S}}\|{\mathbf{m}}_{j}({\mathbf{x}})-{\mathbf{m}}_{j}({\mathbf{x}}^{\prime})\|_{2}^{2}$
of non-negative constants
$\|{\mathbf{m}}_{j}({\mathbf{x}})-{\mathbf{m}}_{j}({\mathbf{x}}^{\prime})\|_{2}^{2}$
over each $j\in\mathscr{S}$. Since
(117) $x\ \mapsto\
\min\left\\{\frac{x}{\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}^{2}},\
\frac{1}{L^{2}}\right\\}$
is a non-decreasing function, it follows that
$\mathscr{S}\subseteq\mathscr{S}^{\prime}\ \Rightarrow\
g_{{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})\leq
g_{{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S}^{\prime})$, proving
monotonicity.
Submodularity of $g_{{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})$ follows
from Lemma B.2 since $g_{{\mathbf{x}},{\mathbf{x}}^{\prime}}(\mathscr{S})$ is
the composition of the concave function in Eq. 117 with the modular function
in Eq. 116. ∎
## Appendix C Proofs
###### Proof of Proposition 4.5 (Separation Guarantee on Underlying Set).
The result follows immediately from the triangle inequality. Let
${\mathbf{x}},{\mathbf{x}}^{\prime}\in\mathcal{X}$ and
${\mathbf{x}}_{i},{\mathbf{x}}_{j}\in\mathcal{X}_{N}$ so that
$\|{\mathbf{x}}-{\mathbf{x}}_{i}\|_{2}<\varepsilon_{0}$ and
$\|{\mathbf{x}}^{\prime}-{\mathbf{x}}_{j}\|_{2}<\varepsilon_{0}$. Then
$\varepsilon+2\varepsilon_{0}\|{\mathbf{g}}\|_{\text{lip}}\leq\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}$
implies that
(118)
$\begin{split}\varepsilon+2\varepsilon_{0}\|{\mathbf{g}}\|_{\text{lip}}&\leq\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}\\\
&\leq\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}_{i})\|_{2}+\|{\mathbf{g}}({\mathbf{x}}^{\prime})-{\mathbf{g}}({\mathbf{x}}_{j})\|_{2}+\|{\mathbf{g}}({\mathbf{x}}_{i})-{\mathbf{g}}({\mathbf{x}}_{j})\|_{2}\\\
&<\|{\mathbf{g}}({\mathbf{x}}_{i})-{\mathbf{g}}({\mathbf{x}}_{j})\|_{2}+2\varepsilon_{0}\|{\mathbf{g}}\|_{\text{lip}},\end{split}$
hence,
$\|{\mathbf{g}}({\mathbf{x}}_{i})-{\mathbf{g}}({\mathbf{x}}_{j})\|_{2}\geq\varepsilon$.
By assumption, this implies that
$\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{i})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{j})\|_{2}\geq\gamma$
and
(119)
$\begin{split}\gamma&\leq\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{i})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{j})\|_{2}\\\
&\leq\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{i})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})\|_{2}+\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{j})\|_{2}+\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}\\\
&<2\varepsilon_{0}\|{\mathbf{m}}_{\mathscr{S}}\|_{\text{lip}}+\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2},\end{split}$
hence,
$\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}>\gamma-2\varepsilon_{0}\|{\mathbf{m}}_{\mathscr{S}}\|_{\text{lip}}$
as claimed. ∎
###### Proposition 4.6: Amplification Guarantee on Underlying Set.
The result follows immediately from the triangle inequality. Let
${\mathbf{x}},{\mathbf{x}}^{\prime}\in\mathcal{X}$ and
${\mathbf{x}}_{i},{\mathbf{x}}_{j}\in\mathcal{X}_{N}$ so that
$\|{\mathbf{x}}-{\mathbf{x}}_{i}\|_{2}<\varepsilon_{0}$ and
$\|{\mathbf{x}}^{\prime}-{\mathbf{x}}_{j}\|_{2}<\varepsilon_{0}$, then
(120)
$\begin{split}\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}&\leq\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}_{i})\|_{2}+\|{\mathbf{g}}({\mathbf{x}}^{\prime})-{\mathbf{g}}({\mathbf{x}}_{j})\|_{2}+\|{\mathbf{g}}({\mathbf{x}}_{i})-{\mathbf{g}}({\mathbf{x}}_{j})\|_{2}\\\
&<2\varepsilon_{0}\|{\mathbf{g}}\|_{\text{lip}}+L\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{i})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{j})\|_{2}\\\
&\leq
2\varepsilon_{0}\|{\mathbf{g}}\|_{\text{lip}}+L\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{i})\|_{2}+L\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{j})\|_{2}\\\
&\hskip
56.9055pt+L\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}\\\
&<2\varepsilon_{0}\|{\mathbf{g}}\|_{\text{lip}}+2L\varepsilon_{0}\|{\mathbf{m}}_{\mathscr{S}}\|_{\text{lip}}+L\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}.\end{split}$
Gathering terms on $\varepsilon_{0}$ completes the proof. ∎
###### Proposition 6.1: Noisy Separation Guarantee.
Choose ${\mathbf{x}}_{i},{\mathbf{x}}_{j}\in\mathcal{X}_{N}$ and suppose that
(121)
$\|{\mathbf{g}}({\mathbf{x}}_{i})-{\mathbf{g}}({\mathbf{x}}_{j})\|_{2}\geq\varepsilon+2\delta_{v}.$
Then we have
(122)
$\begin{split}\|\left({\mathbf{g}}({\mathbf{x}}_{i})+{\mathbf{v}}_{i}\right)-\left({\mathbf{g}}({\mathbf{x}}_{j})+{\mathbf{v}}_{j}\right)\|_{2}&\geq\|{\mathbf{g}}({\mathbf{x}}_{i})-{\mathbf{g}}({\mathbf{x}}_{j})\|_{2}-\|{\mathbf{v}}_{i}\|-\|{\mathbf{v}}_{j}\|\\\
&\geq\|{\mathbf{g}}({\mathbf{x}}_{i})-{\mathbf{g}}({\mathbf{x}}_{j})\|_{2}-2\delta_{v}\\\
&\geq\varepsilon\end{split}$
By our assumption, this implies
(123)
$\|\left({\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{i})+{\mathbf{u}}_{i,\mathscr{S}}\right)-\left({\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{j})+{\mathbf{u}}_{j,\mathscr{S}}\right)\|_{2}\geq\gamma,$
and so we have
(124)
$\begin{split}\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{i})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{j})\|_{2}&\geq\|\left({\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{i})+{\mathbf{u}}_{i,\mathscr{S}}\right)-\left({\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{j})+{\mathbf{u}}_{j,\mathscr{S}}\right)\|_{2}-\|{\mathbf{u}}_{i,\mathscr{S}}\|-\|{\mathbf{u}}_{j,\mathscr{S}}\|\\\
&\geq\gamma-2\delta_{u}.\end{split}$
Therefore, we have established that
(125)
$\forall{\mathbf{x}}_{i},{\mathbf{x}}_{j}\in\mathcal{X}_{N}\qquad\|{\mathbf{g}}({\mathbf{x}}_{i})-{\mathbf{g}}({\mathbf{x}}_{j})\|_{2}\geq\varepsilon+2\delta_{v}\\\
\Rightarrow\quad\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{i})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{j})\|_{2}\geq\gamma-2\delta_{u}.$
The conclusion follows immediately by Proposition 4.5. ∎
###### Theorem 5.5: Down-Sampled Amplification Guarantee.
For simplicity, we will drop $L$ from the subscript on our objective since the
threshold $L$ for the Lipschitz constant remains fixed throughout the proof.
Let us begin by fixing a set $\mathscr{S}\subseteq\mathscr{M}$ and define the
random variables
(126)
$Z_{\mathscr{S}}({\mathbf{b}}_{i})=\max_{{\mathbf{x}}\in\mathcal{X}_{N}}\mathbbm{1}\big{\\{}\|{\mathbf{g}}({\mathbf{b}}_{i})-{\mathbf{g}}({\mathbf{x}})\|_{2}>L\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{b}}_{i})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})\|_{2}\big{\\}},$
for $i=1,\ldots,m$. If $Z_{\mathscr{S}}({\mathbf{b}}_{i})=0$ then every secant
between ${\mathbf{b}}_{i}$ and points of $\mathcal{X}_{N}$ satisfies the
desired bound on the amplification. Otherwise, there is some point
${\mathbf{x}}\in\mathcal{X}_{N}$ for which
(127)
$\|{\mathbf{g}}({\mathbf{b}}_{i})-{\mathbf{g}}({\mathbf{x}})\|_{2}>L\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{b}}_{i})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})\|_{2}$
and so $Z_{\mathscr{S}}({\mathbf{b}}_{i})=1$. We observe that
$Z_{\mathscr{S}}({\mathbf{b}}_{1}),\ldots,Z_{\mathscr{S}}({\mathbf{b}}_{m})$
are independent, identically distributed Bernoulli random variables whose
expectation
(128)
$\mathbb{E}[Z_{\mathscr{S}}({\mathbf{b}}_{i})]=\mu\big{(}\big{\\{}{\mathbf{x}}\in\mathcal{X}\
:\ \exists{\mathbf{x}}_{j}\in\mathcal{X}_{N}\quad\mbox{s.t.}\\\
\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}_{j})\|_{2}>L\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{j})\|_{2}\big{\\}}\big{)}$
is the $\mu$-measure of points in $\mathcal{X}$ that are not adequately
separated from points in the $\varepsilon_{0}$-net $\mathcal{X}_{N}$ by the
measurements ${\mathbf{m}}_{\mathscr{S}}$. Suppose that for a fixed
${\mathbf{x}}\in\mathcal{X}$ we have
(129) $\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}_{j})\|_{2}\leq
L\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{j})\|_{2}$
for every ${\mathbf{x}}_{j}\in\mathcal{X}_{N}$. By definition of
$\mathcal{X}_{N}$, for any ${\mathbf{x}}^{\prime}\in\mathcal{X}$, there is an
${\mathbf{x}}_{j}\in\mathcal{X}_{N}$ with
$\|{\mathbf{x}}^{\prime}-{\mathbf{x}}_{j}\|_{2}<\varepsilon_{0}$ and so we
have
(130)
$\begin{split}\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}&\leq\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}_{j})\|_{2}+\|{\mathbf{g}}({\mathbf{x}}_{j})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}\\\
&<L\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{j})\|_{2}+\varepsilon_{0}\|{\mathbf{g}}\|_{\text{lip}}\\\
&\leq
L\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}+L\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}_{j})\|_{2}+\varepsilon_{0}\|{\mathbf{g}}\|_{\text{lip}}\\\
&<L\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}+\left(\|{\mathbf{g}}\|_{\text{lip}}+L\|{\mathbf{m}}_{\mathscr{S}}\|_{\text{lip}}\right)\varepsilon_{0}.\end{split}$
It follows that $\mathbb{E}[Z_{\mathscr{S}}({\mathbf{b}}_{i})]$ is an upper
bound on the $\mu$-measure of points in $\mathcal{X}$ for which the relaxed
amplification threshold is exceeded, that is,
(131)
$\mathbb{E}[Z_{\mathscr{S}}({\mathbf{b}}_{i})]\geq\mu\big{(}\big{\\{}{\mathbf{x}}\in\mathcal{X}\
:\
\exists{\mathbf{x}}^{\prime}\in\mathcal{X}\quad\mbox{s.t.}\quad\|{\mathbf{g}}({\mathbf{x}})-{\mathbf{g}}({\mathbf{x}}^{\prime})\|_{2}\\\
\geq
L\|{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}})-{\mathbf{m}}_{\mathscr{S}}({\mathbf{x}}^{\prime})\|_{2}+\left(\|{\mathbf{g}}\|_{\text{lip}}+L\|{\mathbf{m}}_{\mathscr{S}}\|_{\text{lip}}\right)\varepsilon_{0}\big{\\}}\big{)}.$
By assumption, we have a set $\mathscr{S}\subseteq\mathscr{M}$ so that
$Z_{\mathscr{S}}({\mathbf{b}}_{i})=0$ for each $i=1,\ldots,m$. And so it
remains to bound the difference between the empirical and true expectation of
$Z_{\mathscr{S}}({\mathbf{b}}_{i})$ uniformly over every subset
$\mathscr{S}\subseteq\mathscr{M}$. For fixed $\mathscr{S}$, the one-sided
Hoeffding inequality gives
(132)
$\mathbb{P}\Big{\\{}\frac{1}{m}\sum_{i=1}^{m}\left(\mathbb{E}[Z_{\mathscr{S}}({\mathbf{b}}_{i})]-Z_{\mathscr{S}}({\mathbf{b}}_{i})\right)\geq\delta\Big{\\}}\leq
e^{-2m\delta^{2}}.$
Unfixing $\mathscr{S}$ via the union bound over all
$\mathscr{S}\subseteq\mathscr{M}$ and applying our assumption about the number
of base points $m$ yields
(133)
$\mathbb{P}\bigcup_{\mathscr{S}\subseteq\mathscr{M}}\Big{\\{}\frac{1}{m}\sum_{i=1}^{m}\left(\mathbb{E}[Z_{\mathscr{S}}({\mathbf{b}}_{i})]-Z_{\mathscr{S}}({\mathbf{b}}_{i})\right)\geq\delta\Big{\\}}\leq
e^{\\#(\mathscr{M})\ln{2}-2m\delta^{2}}\leq p.$
Since our assumed choice of $\mathscr{S}$ has
$f_{m}(\mathscr{S})=f_{m}(\mathscr{M})$ it follows that all
$Z_{\mathscr{S}}({\mathbf{b}}_{i})=0$, $i=1,\ldots,m$, hence we have
(134) $\mathbb{E}[Z_{\mathscr{S}}({\mathbf{b}}_{i})]<\delta$
with probability at least $1-p$. Combining this with Eq. 131 completes the
proof. ∎
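As an aside (not part of the original proof), the sample-size requirement implicit in the union bound above can be made concrete: $e^{\#(\mathscr{M})\ln 2-2m\delta^{2}}\leq p$ as soon as $m\geq\big(\#(\mathscr{M})\ln 2+\ln(1/p)\big)/(2\delta^{2})$. A minimal Python sketch, with arbitrary illustrative parameter values:

```python
import math

def required_base_points(num_measurements: int, delta: float, p: float) -> int:
    # Smallest m with exp(#(M) * ln 2 - 2 * m * delta^2) <= p, i.e.
    # m >= (#(M) * ln 2 + ln(1/p)) / (2 * delta^2).
    return math.ceil((num_measurements * math.log(2) + math.log(1 / p))
                     / (2 * delta**2))

m = required_base_points(num_measurements=50, delta=0.05, p=1e-3)
# The union bound indeed holds for this m.
assert math.exp(50 * math.log(2) - 2 * m * 0.05**2) <= 1e-3
```

Note the exponential dependence on $\#(\mathscr{M})$ enters only logarithmically in $m$, which is what makes the uniform bound over all subsets $\mathscr{S}\subseteq\mathscr{M}$ affordable.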
## Appendix D Description of the Accelerated Greedy Algorithm
Since each objective function $f$ presented in Section 4 is submodular, it is
possible to use an “accelerated greedy” (AG) algorithm to obtain the same
solution as the naive greedy algorithm with a provably minimal number of
objective function evaluations compared to a broad class of algorithms [35].
Let the increase in the objective function obtained by adding the sensor $j$
to the set $\mathscr{S}$ be called
$\Delta_{j}(\mathscr{S})=f(\mathscr{S}\cup\\{j\\})-f(\mathscr{S})$. Instead of
evaluating $\Delta_{j}(\mathscr{S}_{k-1})$ for every measurement in
$\mathscr{M}\setminus\mathscr{S}_{k-1}$, AG keeps track of an upper bound
$\hat{\Delta}_{j}\geq\Delta_{j}(\mathscr{S}_{k-1})$ on the increments for each
sensor. Since submodularity of $f$ means that the increments
$\Delta_{j}(\mathscr{S})$ can only decrease as the size of $\mathscr{S}$
increases, it is sufficient to have the maximum upper bound
$\hat{\Delta}_{j^{*}}\geq\hat{\Delta}_{j}$, $\forall
j\in\mathscr{M}\setminus\mathscr{S}_{k-1}$ be tight
$\hat{\Delta}_{j^{*}}=\Delta_{j^{*}}(\mathscr{S}_{k-1})$ in order to conclude
that $\Delta_{j^{*}}(\mathscr{S}_{k-1})$ is the largest increment. The rest of
the upper bounds on the increments can remain loose since they are smaller
than the tight maximum upper bound. The AG algorithm finds the largest upper
bound $\hat{\Delta}_{j^{*}}$ and updates it so that it is tight. If
$\hat{\Delta}_{j^{*}}$ is still the greatest upper bound, then $j^{*}=j_{k}$
achieves the largest increment and is added to $\mathscr{S}_{k-1}$. Otherwise,
if $\hat{\Delta}_{j^{*}}$ is no longer the largest upper bound, the new
largest upper bound is selected and the process is repeated until a tight
maximum upper bound is obtained.
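The bookkeeping described above is the classic lazy-greedy pattern. The following Python sketch is an illustration in that spirit (the coverage objective and all names are ours, not from the paper): it keeps negated upper bounds on the increments $\Delta_{j}$ in a heap and tightens only the top candidate's bound:

```python
import heapq

def accelerated_greedy(ground_set, f, k):
    # Lazy ("accelerated") greedy maximization of a monotone submodular f.
    # The heap holds (negated) upper bounds on marginal gains; only the top
    # candidate's bound is re-evaluated, i.e. tightened.
    selected = []
    base = f(selected)
    heap = [(-(f([j]) - base), j) for j in ground_set]
    heapq.heapify(heap)
    while heap and len(selected) < k:
        _, j = heapq.heappop(heap)
        gain = f(selected + [j]) - f(selected)   # tighten the bound for j
        if not heap or gain >= -heap[0][0]:
            selected.append(j)                   # tight bound is still maximal
        else:
            heapq.heappush(heap, (-gain, j))     # re-insert with updated bound
    return selected

# Toy objective: set coverage, which is monotone submodular.
sets = {0: {1, 2}, 1: {2, 3, 4}, 2: {4, 5}, 3: {1}}
cover = lambda S: len(set().union(*(sets[j] for j in S))) if S else 0
chosen = accelerated_greedy(list(sets), cover, k=2)
```

By submodularity, a stale bound can only overestimate the true increment, so re-evaluating just the heap maximum suffices to certify the greedy choice.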
# $\ell$-covering $k$-hypergraphs are quasi-eulerian
Mateja Šajna (corresponding author; email: <EMAIL_ADDRESS>; mailing
address: Department of Mathematics and Statistics, University of Ottawa, 150
Louis-Pasteur Private, Ottawa, ON, K1N 6N5, Canada) and Andrew Wagner
University of Ottawa
###### Abstract
An Euler tour in a hypergraph $H$ is a closed walk that traverses each edge of
$H$ exactly once, and an Euler family is a family of closed walks that jointly
traverse each edge of $H$ exactly once. An $\ell$-covering $k$-hypergraph, for
$2\leq\ell<k$, is a $k$-uniform hypergraph in which every $\ell$-subset of
vertices lies together in at least one edge.
In this paper we prove that every $\ell$-covering $k$-hypergraph, for $k\geq
3$, admits an Euler family.
Keywords: $\ell$-covering hypergraph; Euler family; Euler tour; Lovász’s
$(g,f)$-factor theorem.
## 1 Introduction
The complete characterization of graphs that admit an Euler tour is a classic
result covered by any introductory graph theory course. The concept naturally
extends to hypergraphs; that is, an Euler tour of a hypergraph is a closed
walk that traverses every edge exactly once. However, the study of eulerian
hypergraphs is a much newer and largely unexplored territory.
The first results on Euler tours in hypergraphs were obtained by Lonc and
Naroski [4]. Most notably, they showed that the problem of existence of an
Euler tour is NP-complete on the set of $k$-uniform hypergraphs, for any
$k\geq 3$, as well as when restricted to a particular subclass of 3-uniform
hypergraphs.
Bahmanian and Šajna [2] attempted a systematic study of eulerian properties of
general hypergraphs; some of their techniques and results will be used in this
paper. In particular, they introduced the notion of an Euler family — a
collection of closed walks that jointly traverse each edge exactly once — and
showed that the problem of existence of an Euler family is polynomial on the
class of all hypergraphs.
In this paper, we define an $\ell$-covering $k$-hypergraph, for $2\leq\ell<k$,
to be a non-empty $k$-uniform hypergraph in which every $\ell$-subset of
vertices appears together in at least one edge.
In [2], the authors proved that every 2-covering 3-hypergraph with at least
two edges admits an Euler family, and the present authors gave a short proof
[6] to show that every triple system — that is, a 3-uniform hypergraph in
which every pair of vertices lies together in the same number of edges — admits
an Euler tour as long as it has at least two edges. Most recently, the present
authors proved the following result.
###### Theorem 1.1.
[7] Let $k\geq 3$, and let $H$ be a $(k-1)$-covering $k$-hypergraph. Then $H$
admits an Euler tour if and only if it has at least two edges.
In this paper, we aim to extend Theorem 1.1 to all $\ell$-covering
$k$-hypergraphs. Our main result is as follows.
###### Theorem 1.2.
Let $\ell$ and $k$ be integers, $2\leq\ell<k$, and let $H$ be an
$\ell$-covering $k$-hypergraph. Then $H$ admits an Euler family if and only if
it has at least two edges.
As the concept of an Euler family is a relaxation of the concept of an Euler
tour, the conclusion of Theorem 1.2 is weaker than that of Theorem 1.1;
however, it holds for a much larger class of hypergraphs.
We prove Theorem 1.2 by induction on $\ell$. The base case $\ell=2$ is stated
as Theorem 5.1; its proof is essentially a counting argument and requires most
of the work. The main part of the proof is presented in Section 5, while some
special cases and technical details are handled in Sections 3 and 4. In
particular, in Section 4, using the Lovász $(g,f)$-factor theorem, we develop
a sufficient condition for a $k$-uniform hypergraph without cut edges to admit
an Euler family.
## 2 Preliminaries
We use hypergraph terminology established in [1, 2], which applies to loopless
graphs as well. Any graph theory terms not explained here can be found in [3].
A hypergraph $H$ is a pair $(V,E)$, where $V$ is a non-empty set, and $E$ is a
multiset of elements from $2^{V}$. The elements of $V=V(H)$ and $E=E(H)$ are
called the vertices and edges of $H$, respectively. The order of $H$ is $|V|$,
and the size is $|E|$. A hypergraph of order 1 is called trivial, and a
hypergraph with no edges is called empty.
Distinct vertices $u$ and $v$ in a hypergraph $H=(V,E)$ are called adjacent
(or neighbours) if they lie in the same edge, while a vertex $v$ and an edge
$e$ are said to be incident if $v\in e$. The degree of $v$ in $H$, denoted
$\deg_{H}(v)$, is the number of edges of $H$ incident with $v$. An edge $e$ is
said to cover the vertex pair $\\{u,v\\}$ if $\\{u,v\\}\subseteq e$. A
hypergraph $H$ is called $k$-uniform if every edge of $H$ has cardinality $k$.
###### Definition 2.1.
Let $\ell$ and $k$ be integers, $2\leq\ell<k$. An $\ell$-covering
$k$-hypergraph is a $k$-uniform hypergraph in which every $\ell$-subset of
vertices lies together in at least one edge.
The incidence graph of a hypergraph $H=(V,E)$ is a bipartite simple graph $G$
with vertex set $V\cup E$ and bipartition $\\{V,E\\}$ such that vertices $v\in
V$ and $e\in E$ of $G$ are adjacent if and only if $v$ is incident with $e$ in
$H$. The elements of $V$ and $E$ are called v-vertices and e-vertices of $G$,
respectively.
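As an aside, the incidence graph is simple to build programmatically. In the sketch below (the representation is ours, not from the paper), e-vertices are keyed by index rather than by content, since $E$ is a multiset:

```python
def incidence_graph(vertices, edges):
    # Bipartite incidence graph G of H = (V, E) as an adjacency map:
    # v-vertices are keyed ('v', v), e-vertices ('e', i) by index,
    # since E may contain repeated edges.
    adj = {('v', v): set() for v in vertices}
    for i, e in enumerate(edges):
        adj[('e', i)] = {('v', v) for v in e}
        for v in e:
            adj[('v', v)].add(('e', i))
    return adj

G = incidence_graph({1, 2, 3}, [{1, 2}, {1, 2, 3}])
assert G[('v', 1)] == {('e', 0), ('e', 1)}   # v-vertex 1 has degree 2
assert G[('e', 0)] == {('v', 1), ('v', 2)}   # each e-vertex has degree |e|
```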
A hypergraph $H^{\prime}=(V^{\prime},E^{\prime})$ is called a subhypergraph of
the hypergraph $H=(V,E)$ if $V^{\prime}\subseteq V$ and $E^{\prime}=\\{e\cap
V^{\prime}:e\in E^{\prime\prime}\\}$ for some submultiset $E^{\prime\prime}$
of $E$. For $e\in E$, the symbol $H{\backslash}e$ denotes the subhypergraph
$(V,E-\\{e\\})$ of $H$, and for $v\in V$, the symbol $H-v$ denotes the
subhypergraph $(V-\\{v\\},E^{\prime})$ where $E^{\prime}=\\{e-\\{v\\}:e\in
E,e-\\{v\\}\neq\emptyset\\}$.
A $(v_{0},v_{k})$-walk in $H$ is a sequence $W=v_{0}e_{1}v_{1}e_{2}\ldots
e_{k}v_{k}$ such that $v_{0},\ldots,v_{k}\in V$; $e_{1},\ldots,e_{k}\in E$;
and $v_{i-1},v_{i}\in e_{i}$ with $v_{i-1}\neq v_{i}$ for all $i=1,\ldots,k$.
A walk is said to traverse each of the vertices and edges in the sequence. The
vertices $v_{0},v_{1},\ldots,v_{k}$ are called the anchors of $W$. If
$e_{1},e_{2},\ldots,e_{k}$ are pairwise distinct, then $W$ is called a trail
(strict trail in [1, 2]); if $v_{0}=v_{k}$ and $k\geq 2$, then $W$ is closed.
A hypergraph $H$ is connected if every pair of vertices are connected in $H$;
that is, if for any pair $u,v\in V(H)$, there exists a $(u,v)$-walk in $H$. A
connected component of $H$ is a maximal connected subhypergraph of $H$ without
empty edges. The number of connected components of $H$ is denoted by ${\rm
c}(H)$. We call $v\in V(H)$ a cut vertex of $H$, and $e\in E(H)$ a cut edge of
$H$, if ${\rm c}(H-v)>{\rm c}(H)$ and ${\rm c}(H{\backslash}e)>{\rm c}(H)$,
respectively.
An Euler family of a hypergraph $H$ is a collection of pairwise anchor-
disjoint and edge-disjoint closed trails that jointly traverse every edge of
$H$, and an Euler tour is a closed trail that traverses every edge of $H$. A
hypergraph that is either empty or admits an Euler tour (family) is called
eulerian (quasi-eulerian). Note that an Euler tour corresponds to an Euler
family of cardinality 1, so every eulerian hypergraph is also quasi-eulerian.
The following theorem allows us to determine whether a hypergraph is eulerian
or quasi-eulerian from its incidence graph.
###### Theorem 2.2.
[2, Theorem 2.18] Let $H$ be a hypergraph and $G$ its incidence graph. Then
the following hold.
(1)
$H$ is quasi-eulerian if and only if $G$ has a spanning subgraph $G^{\prime}$
such that $\deg_{G^{\prime}}(e)=2$ for all $e\in E(H)$, and
$\deg_{G^{\prime}}(v)$ is even for all $v\in V(H)$.
(2)
$H$ is eulerian if and only if $G$ has a spanning subgraph $G^{\prime}$ with
at most one non-trivial connected component such that $\deg_{G^{\prime}}(e)=2$
for all $e\in E(H)$, and $\deg_{G^{\prime}}(v)$ is even for all $v\in V(H)$.
## 3 Technical Lemmas
In this section, we take care of some special cases and prove some technical
results that will aid in the proof of our base case, Theorem 5.1.
###### Lemma 3.1.
Let $k\geq 4$, and let $H$ be a 2-covering $k$-hypergraph with at least 2
edges. Then $H$ has no cut edges.
###### Proof.
Suppose $e$ is a cut edge of $H$. Then there exist vertices $u,v\in e$ that
are disconnected in $H{\backslash}e$. Since $H$ has at least 2 edges, it must
be that $k\neq|V(H)|$ and $e\neq V(H)$. Hence there exists $w\in V(H)-e$. Let
$e_{1},e_{2}$ be edges of $H$ containing $u$ and $w$, and $v$ and $w$,
respectively. As $e\not\in\\{e_{1},e_{2}\\}$, we can see that $ue_{1}we_{2}v$
is a $(u,v)$-walk in $H{\backslash}e$, a contradiction. ∎
###### Lemma 3.2.
Let $k\geq 4$, and let $H$ be a 2-covering $k$-hypergraph of order
$n>\frac{3k}{2}$ and size $m\geq 2$. Then $m\geq
2\lfloor\frac{n+3}{k}\rfloor$.
###### Proof.
If $n\leq 2k-4$, then $2\lfloor\frac{n+3}{k}\rfloor\leq 2\leq m$. Hence assume
$n\geq 2k-3$.
Suppose first that $n\geq 3k-3$. Since there are $\binom{n}{2}$ pairs of
vertices to cover, and each edge covers $\binom{k}{2}$ pairs, we know that
$m\geq\frac{n(n-1)}{k(k-1)}$. As $k\geq 4$, we have
$\displaystyle m\geq\frac{n(n-1)}{k(k-1)}$
$\displaystyle\geq\frac{(3k-3)(n-1)}{k(k-1)}=\frac{3(n-1)}{k}=\frac{2n+n-3}{k}$
$\displaystyle\geq\frac{2n+3k-6}{k}\geq\frac{2n+6}{k}\geq
2\Big{\lfloor}\frac{n+3}{k}\Big{\rfloor}.$
Finally, assume $2k-3\leq n\leq 3k-4$. As $2\lfloor\frac{n+3}{k}\rfloor\leq
4$, it suffices to show that $m\geq 4$. Suppose $m\leq 3$. Since $H$ is a
2-covering $k$-hypergraph with $n>k$ and $m\geq 2$, every vertex has degree at
least 2. Thus
$2n\leq\sum_{v\in V(H)}\deg(v)=km\leq 3k$
and $n\leq\frac{3k}{2}$, contradicting the assumption that $n>\frac{3k}{2}$.
Therefore, in all cases we have $m\geq 2\lfloor\frac{n+3}{k}\rfloor.$ ∎
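As a numerical sanity check (ours, not the authors'), the pair-counting bound at the heart of this proof can be tested directly: in the regime $n\geq 3k-3$, the bound $m\geq n(n-1)/(k(k-1))$ alone already implies the lemma's conclusion $m\geq 2\lfloor\frac{n+3}{k}\rfloor$ (the parameter ranges below are arbitrary):

```python
from math import ceil, floor

def pair_counting_bound(n, k):
    # Each edge covers C(k,2) of the C(n,2) vertex pairs, so any 2-covering
    # k-hypergraph of order n has size m >= n(n-1) / (k(k-1)).
    return ceil(n * (n - 1) / (k * (k - 1)))

# In the regime n >= 3k - 3 handled in the proof, the counting bound alone
# implies the lemma's conclusion.
for k in range(4, 10):
    for n in range(3 * k - 3, 6 * k):
        assert pair_counting_bound(n, k) >= 2 * floor((n + 3) / k)
```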
###### Lemma 3.3.
Let $H$ be a hypergraph with $|E(H)|\geq 2$ satisfying the following.
* •
For all $e,f\in E(H)$, we have $|e\cap f|\geq 2$; and
* •
there exist distinct $e,f\in E(H)$ such that $|e\cap f|\geq 3$.
Then $H$ is eulerian.
###### Proof.
Let $E(H)=\\{e_{1},\ldots,e_{m}\\}$ and assume $e_{1}$ and $e_{m}$ are
distinct edges such that $|e_{1}\cap e_{m}|\geq 3$. Take any $v_{1}\in
e_{1}\cap e_{2}$. For $i=2,\ldots,m-1$, let $v_{i}$ be a vertex in $(e_{i}\cap
e_{i+1})-\\{v_{i-1}\\}$, and let $v_{0}\in(e_{1}\cap
e_{m})-\\{v_{1},v_{m-1}\\}$. It is easy to verify that $v_{0}e_{1}v_{1}\ldots
v_{m-1}e_{m}v_{0}$ is an Euler tour of $H$. ∎
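The constructive argument in this proof translates directly into code. The sketch below (function name ours) assumes, as in the lemma, that every two edges intersect in at least 2 vertices and that the first and last edges intersect in at least 3; it returns the anchor sequence $v_{0},v_{1},\ldots,v_{m-1}$ of the tour $v_{0}e_{1}v_{1}\ldots v_{m-1}e_{m}v_{0}$:

```python
def euler_tour_anchors(edges):
    # Anchors of the Euler tour from the proof of Lemma 3.3, with edges
    # given as a list of sets; requires |e_i ∩ e_j| >= 2 for all i != j
    # and |edges[0] ∩ edges[-1]| >= 3.
    m = len(edges)
    vs = [next(iter(edges[0] & edges[1]))]              # v_1 in e_1 ∩ e_2
    for i in range(1, m - 1):
        # v_i in (e_i ∩ e_{i+1}) \ {v_{i-1}}  (1-indexed as in the proof)
        vs.append(next(iter((edges[i] & edges[i + 1]) - {vs[-1]})))
    v0 = next(iter((edges[0] & edges[-1]) - {vs[0], vs[-1]}))
    return [v0] + vs

E = [{1, 2, 3, 4}, {1, 2, 5, 6}, {1, 3, 4, 5}]          # |e_1 ∩ e_3| = 3
anchors = euler_tour_anchors(E)
# Verify: edge E[i] is traversed between consecutive anchors, which differ.
closed = anchors + [anchors[0]]
for i, e in enumerate(E):
    assert closed[i] in e and closed[i + 1] in e and closed[i] != closed[i + 1]
```

The intersection hypotheses guarantee that each `next(iter(...))` draws from a non-empty set, exactly as in the choices of $v_{i}$ and $v_{0}$ above.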
###### Corollary 3.4.
Let $H$ be a 2-covering $k$-hypergraph of order $n$. If $n\leq 2k-3$ or
$(k,n)=(4,6)$, then $H$ is eulerian.
###### Proof.
If $n\leq 2k-3$, then every pair of edges $e,f\in E(H)$ satisfies $|e\cap
f|\geq 3$, so $H$ is eulerian by Lemma 3.3.
Assume now that $(k,n)=(4,6)$. For all $e,f\in E(H)$, we have $|e\cap f|\geq
2$. If there exist distinct edges $e,f\in E(H)$ such that $|e\cap f|\geq 3$,
then $H$ is eulerian by Lemma 3.3. Hence assume $|e\cap f|=2$ for all $e,f\in
E(H)$, and let $V(H)=\\{v_{1},\ldots,v_{6}\\}$. It is not difficult to see
that we must have $E(H)=\\{e_{1},e_{2},e_{3}\\}$ where, without loss of
generality, the edges are $e_{1}=v_{1}v_{2}v_{3}v_{4}$,
$e_{2}=v_{1}v_{2}v_{5}v_{6}$, and $e_{3}=v_{3}v_{4}v_{5}v_{6}$. It follows
that $W=v_{3}e_{1}v_{2}e_{2}v_{5}e_{3}v_{3}$ is an Euler tour of $H$. ∎
###### Lemma 3.5.
Let $n,k,q\in\mathbb{Z}^{+}$ be such that $n\geq qk$. Let
$S=\big{\\{}(x_{1},\ldots,x_{q})\in(\mathbb{Z}^{+})^{q}:x_{1}+\cdots+x_{q}=n,x_{i}\geq
k\mbox{ for all }i\big{\\}},$
and define $f:S\to\mathbb{Z}^{+}$ by
$f(x_{1},\ldots,x_{q})=\binom{x_{1}}{2}+\cdots+\binom{x_{q}}{2}$. Then $f$
attains its maximum on $S$ at the point $\big{(}k,\ldots,k,n-k(q-1)\big{)}$.
###### Proof.
Since the domain $S$ is finite, the function $f$ indeed attains a maximum on $S$.
Let ${\bf x}=(x_{1},\ldots,x_{q})\in S$ be such that $f({\bf x})$ is maximum.
By symmetry of $f$, we may assume that $x_{1}\leq x_{2}\leq\ldots\leq x_{q}$.
As $x_{1}\geq k$ and $x_{q}=n-(x_{1}+\ldots+x_{q-1})$, we observe that
$x_{q}\leq n-k(q-1)$.
Suppose that $x_{q}<n-k(q-1)$. Then there exists $i\in\\{1,\ldots,q-1\\}$ such
that $x_{i}>k$. Let $i$ be the smallest index with this property, and let
${\bf y}=(x_{1},\ldots,x_{i-1},x_{i}-1,x_{i+1},\ldots,x_{q-1},x_{q}+1).$
Then ${\bf y}\in S$ and
$\displaystyle f({\bf y})=$ $\displaystyle\sum_{\begin{subarray}{c}j=1\\\
j\neq
i\end{subarray}}^{q-1}\binom{x_{j}}{2}+\binom{x_{i}-1}{2}+\binom{x_{q}+1}{2}$
$\displaystyle=$ $\displaystyle\sum_{\begin{subarray}{c}j=1\\\ j\neq
i\end{subarray}}^{q-1}\binom{x_{j}}{2}+\frac{x_{i}(x_{i}-1)}{2}-\frac{2(x_{i}-1)}{2}+\frac{x_{q}(x_{q}-1)}{2}+\frac{2x_{q}}{2}$
$\displaystyle=$
$\displaystyle\sum_{j=1}^{q}\binom{x_{j}}{2}+(x_{q}-x_{i}+1)>f({\bf x}),$
contradicting the choice of ${\bf x}$.
Hence $x_{q}=n-k(q-1)$, and consequently $x_{1}=\ldots=x_{q-1}=k$. Thus $f$
attains its maximum on $S$ at the point ${\bf
x}=\big{(}k,\ldots,k,n-k(q-1)\big{)}$ as claimed. ∎
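Lemma 3.5 can be confirmed by exhaustive search for small parameters (the triples below are arbitrary illustrations, not cases from the paper):

```python
from itertools import product
from math import comb

def f(xs):
    # The objective of Lemma 3.5: sum of C(x_i, 2).
    return sum(comb(x, 2) for x in xs)

def brute_max(n, k, q):
    # Maximum of f over S = { x in (Z+)^q : x_1 + ... + x_q = n, x_i >= k }.
    return max(f(xs) for xs in product(range(k, n + 1), repeat=q)
               if sum(xs) == n)

# The claimed maximizer is (k, ..., k, n - k(q-1)).
for n, k, q in [(13, 3, 3), (17, 4, 3), (12, 2, 4)]:
    assert brute_max(n, k, q) == (q - 1) * comb(k, 2) + comb(n - k * (q - 1), 2)
```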
## 4 A sufficient condition
In this section, we state and prove Proposition 4.2, which gives a sufficient
condition for a $k$-uniform hypergraph to admit an Euler family. This
sufficient condition will be our main tool in the proof of Theorem 5.1. It is
based on the $(g,f)$-factor theorem by Lovász [5], stated below as Theorem
4.1.
For a graph $G$ and functions $f,g:V(G)\to\mathbb{N}$, a $(g,f)$-factor of $G$
is a spanning subgraph $F$ of $G$ such that $g(x)\leq\deg_{F}(x)\leq f(x)$ for
all $x\in V(G)$. An $f$-factor is simply an $(f,f)$-factor. For any sets
$U,W\subseteq V(G)$, let $\varepsilon_{G}(U,W)$ denote the number of edges of
$G$ with one endpoint in $U$ and the other in $W$.
###### Theorem 4.1.
[5] Let $G=(V,E)$ be a graph and $f,g:V\rightarrow\mathbb{N}$ be functions
such that $g(x)\leq f(x)$ and $g(x)\equiv f(x)$ (mod 2) for all $x\in V$. Then
$G$ has a $(g,f)$-factor $F$ such that $\deg_{F}(x)\equiv f(x)$ (mod 2) for
all $x\in V$ if and only if, for all disjoint $S,T\subseteq V$, we have
$\sum_{x\in S}f(x)+\sum_{x\in
T}(\deg_{G}(x)-g(x))-\varepsilon_{G}(S,T)-q(S,T)\geq 0,$ (1)
where $q(S,T)$ is the number of connected components $C$ of $G-(S\cup T)$ such
that
$\sum_{x\in V(C)}f(x)+\varepsilon_{G}(V(C),T)\equiv 1\text{ (mod 2).}$
###### Proposition 4.2.
Let $k\geq 3$, and let $H=(V,E)$ be a $k$-uniform hypergraph of order $n$ and
size $m$. Let $G$ be the incidence graph of $H$, and $G^{*}$ the graph
obtained from $G$ by appending $2(m+n)^{2}$ loops to every v-vertex.
Assume that $H$ has no cut edges and that for all $X\subseteq E$ with $|X|\geq
2$, we have that $|X|\geq 2\lfloor\frac{{\rm c}(G^{*}-X)+3}{k}\rfloor$. Then
$H$ is quasi-eulerian.
###### Proof.
Let $r=2(m+n)^{2}$, and define $f:V(G^{*})\rightarrow\mathbb{Z}$ by
$f(x)=\left\\{\begin{array}[]{l l}r&\mbox{ if }x\in V,\\\ 2&\mbox{ if }x\in
E.\end{array}\right.$
We shall use Theorem 4.1 to show that $G^{*}$ has an $(f,f)$-factor, so let
$S,T\subseteq V(G^{*})$ be disjoint sets, and denote
$\gamma(S,T)=\sum_{x\in S}f(x)+\sum_{x\in
T}(\deg_{G^{*}}(x)-f(x))-\varepsilon_{G^{*}}(S,T)-q(S,T),$
where $q(S,T)$ is the number of connected components $C$ of $G^{*}-(S\cup T)$
such that $\varepsilon_{G^{*}}(V(C),T)$ is odd. Observe that Condition (1) for
$G^{*}$ with $g=f$ is equivalent to $\gamma(S,T)\geq 0$.
Since $G$ is a subgraph of $K_{n,m}$, we have $\varepsilon_{G^{*}}(S,T)\leq
mn$ and $q(S,T)\leq m+n$, and therefore
$\varepsilon_{G^{*}}(S,T)+q(S,T)\leq(m+n)^{2}=\frac{r}{2}$. In addition, we
have $\deg_{G^{*}}(x)-f(x)\geq r$ for all $x\in V$, and
$\deg_{G^{*}}(x)-f(x)\geq k-2$ for all $x\in E$.
Case 1: $(S\cup T)\cap V\neq\emptyset$. If $S\cap V\neq\emptyset$, then
$\displaystyle\sum_{x\in S}f(x)\geq r$, and if $T\cap V\neq\emptyset$, then
$\displaystyle\sum_{x\in T}(\deg_{G^{*}}(x)-f(x))\geq r$. Thus, in both cases
$\displaystyle\gamma(S,T)$ $\displaystyle=\Big{(}\sum_{x\in S}f(x)+\sum_{x\in
T}(\deg_{G^{*}}(x)-f(x))\Big{)}-\Big{(}\varepsilon_{G^{*}}(S,T)+q(S,T)\Big{)}$
$\displaystyle\geq r-\frac{r}{2}\geq 0.$
Case 2: $(S\cup T)\cap V=\emptyset$. Then $\varepsilon_{G^{*}}(S,T)=0$ since
$S\cup T\subseteq E$.
First, suppose $T=\emptyset$. Then $\varepsilon_{G^{*}}(V(C),T)=0$ for all
connected components $C$ of $G^{*}-(S\cup T)$, so $q(S,T)=0$. Hence
$\displaystyle\gamma(S,T)=\sum_{x\in S}f(x)\geq 0$.
Next, suppose $S=\emptyset$ and $|T|=1$. Then $S\cup T=\\{e\\}$ for some $e\in
E$. By assumption, edge $e$ is not a cut edge of $H$ and hence by [1, Theorem
3.23], e-vertex $e$ is not a cut vertex of $G^{*}$, and $G^{*}-(S\cup T)$ is
connected. It follows that $q(S,T)\leq 1$ and
$\gamma(S,T)=(\deg_{G^{*}}(e)-f(e))-q(S,T)\geq(k-2)-1\geq 0.$
We may now assume that $T\neq\emptyset$ and $|S\cup T|\geq 2$. Since each
connected component $C$ of $G^{*}-(S\cup T)$ with
$\varepsilon_{G^{*}}(V(C),T)$ odd corresponds to at least one edge incident
with a vertex in $T$, the number of such components is at most $k|T|$. Hence
$q(S,T)\leq\min\\{{\rm c}(G^{*}-(S\cup T)),k|T|\\}$, and
$\displaystyle\gamma(S,T)$ $\displaystyle=2|S|+(k-2)|T|-q(S,T)$
$\displaystyle\geq 2|S\cup T|+(k-4)|T|-\min\\{{\rm c}(G^{*}-(S\cup
T)),k|T|\\}.$ (2)
Define $t=\lfloor\frac{{\rm c}(G^{*}-(S\cup T))+3}{k}\rfloor$, so that
$kt-3\leq{\rm c}(G^{*}-(S\cup T))\leq kt+k-4.$
If $|T|\geq t+1$, then $\min\\{{\rm c}(G^{*}-(S\cup T)),k|T|\\}={\rm
c}(G^{*}-(S\cup T))\leq kt+k-4$, so Inequality (2) yields
$\gamma(S,T)\geq 2|S\cup T|+(k-4)(t+1)-(kt+k-4)=2|S\cup T|-4t.$
The same bound is obtained if $|T|\leq t$: in this case, we have $\min\\{{\rm
c}(G^{*}-(S\cup T)),k|T|\\}\leq k|T|$, so that (2) yields
$\gamma(S,T)\geq 2|S\cup T|+(k-4)|T|-k|T|=2|S\cup T|-4|T|\geq 2|S\cup T|-4t.$
In both cases, as $S\cup T\subseteq E$ and $|S\cup T|\geq 2$, the assumption
of the proposition implies $|S\cup T|\geq 2\lfloor\frac{{\rm c}(G^{*}-(S\cup
T))+3}{k}\rfloor=2t$, so that $\gamma(S,T)\geq 0$.
Therefore, $\gamma(S,T)\geq 0$ for all disjoint $S,T\subseteq V(G^{*})$, and
by Theorem 4.1, we conclude that $G^{*}$ has an $(f,f)$-factor $F$. Deleting
the loops of $F$, we obtain a spanning subgraph $F^{\prime}$ of $G$ in which
all v-vertices have even degree and all e-vertices have degree 2. Thus $H$
admits an Euler family by Theorem 2.2. ∎
## 5 Proof of the main result
We shall now prove our main result, Theorem 1.2. We use induction on $\ell$,
and most of the work is required to prove the basis of induction, which we
state below as Theorem 5.1.
###### Theorem 5.1.
Let $k\geq 4$, and let $H$ be a 2-covering $k$-hypergraph with at least two
edges. Then $H$ is quasi-eulerian.
###### Proof.
Let $H=(V,E)$ with $n=|V|$ and $m=|E|$. If $n\leq 2k-3$, then $H$ is eulerian
by Corollary 3.4, so we may assume that $n\geq 2k-2.$
If $n\leq\frac{3k}{2}$, it then follows that $(k,n)=(4,6)$. Again, $H$ is
eulerian by Corollary 3.4. Hence $n>\frac{3k}{2}$, and Lemma 3.2 implies that
$m\geq 2\big{\lfloor}\frac{n+3}{k}\big{\rfloor}$.
In the rest of the proof we show that $H$ satisfies the conditions of
Proposition 4.2.
Let $G^{*}$ be the graph obtained from the incidence graph of $H$ by adjoining
$2(m+n)^{2}$ loops to every v-vertex.
Fix any $X\subseteq E$ with $|X|\geq 2$, and denote $q={\rm c}(G^{*}-X)$.
Suppose that $|X|<2\Big{\lfloor}\frac{q+3}{k}\Big{\rfloor}$. If $q\leq 2k-4$,
then this supposition implies that $|X|<2$, a contradiction. Hence we may
assume that $q\geq 2k-3$, and hence $q\geq 5$. Moreover, our supposition
implies
$|X|\leq 2\frac{q+3}{k}-1.$ (3)
Let $\ell$ denote the number of v-vertices that are isolated in $G^{*}-X$.
Case 1: $\ell\geq 1$. If $\ell=n$, then $X=E$, $q=n$, and $|X|=|E|\geq
2\lfloor\frac{n+3}{k}\rfloor=2\lfloor\frac{q+3}{k}\rfloor$, contradicting our
assumption on $X$. Thus we may assume $\ell<n$, and hence $\ell<q$.
Since $G^{*}-X$ has $q-\ell$ non-trivial connected components, each with at
least $k$ v-vertices, we have
$n\geq\ell+k(q-\ell).$ (4)
Since $q>\ell$, this inequality also implies
$n\geq\ell+k.$ (5)
Let $S$ be the set of pairs $\\{u,v\\}$ of v-vertices such that $u$ is
isolated in $G^{*}-X$, and $v$ is not. Then $|S|=\ell(n-\ell)$. Observe that
every edge of $H$ covers at most $\frac{k^{2}}{4}$ pairs from $S$, which
implies that $|X|\geq\frac{\ell(n-\ell)}{\frac{k^{2}}{4}}$. Combining this
inequality with (3), we obtain
$\frac{4\ell(n-\ell)}{k^{2}}\leq\frac{2q+6-k}{k}.$ (6)
Substituting $q\leq\ell+\frac{n-\ell}{k}$ from Inequality (4) and rearranging
yields
$n(4\ell-2)\leq 4\ell^{2}-k^{2}+2\ell k-2\ell+6k.$
Further substituting $n\geq\ell+k$ from (5) and isolating $\ell$, we obtain
$\ell\leq 4-\frac{k}{2}$, which implies $\ell\in\\{1,2\\}$ as $k\geq 4$.
However, if on the left-hand side of Inequality (6) we apply
$\frac{n-\ell}{k}\geq q-\ell$ from (4) and simplify, then we obtain
$(4\ell-2)q-4\ell^{2}\leq 6-k\leq 2.$
Now substituting either $\ell=1$ or $\ell=2$ yields $q\leq 3$, a
contradiction.
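The closing arithmetic of Case 1 is easy to confirm mechanically. A quick sanity check in Python (illustrative only, not part of the proof):

```python
from fractions import Fraction

# Case 1 ends with (4l - 2)q - 4l^2 <= 6 - k <= 2 for k >= 4.
# Solving for q at l = 1 and l = 2 must give q <= 3, contradicting q >= 5.
for l in (1, 2):
    q_bound = Fraction(2 + 4 * l * l, 4 * l - 2)  # q <= (2 + 4l^2)/(4l - 2)
    assert q_bound == 3, (l, q_bound)
print("l = 1, 2 both force q <= 3")
```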
Case 2: $\ell=0$. Let $C_{1},C_{2},\dotso,C_{q}$ be the connected components
of $G^{*}-X$, and let $n_{i}$ denote the number of v-vertices of $C_{i}$. Note
that $n_{i}\geq k$ for all $i$.
The number of pairs of v-vertices that lie in distinct connected components of
$G^{*}-X$ is $\binom{n}{2}-\sum_{i=1}^{q}\binom{n_{i}}{2}$, and these pairs
must all be covered by the edges of $X$. As $n\geq qk$,
$n_{1}+\ldots+n_{q}=n$, and $n_{i}\geq k$ for all $i$, we know that
$\sum_{i=1}^{q}\binom{n_{i}}{2}\leq(q-1)\binom{k}{2}+\binom{n-k(q-1)}{2}$ by
Lemma 3.5. Therefore,
$\binom{n}{2}-\sum_{i=1}^{q}\binom{n_{i}}{2}\geq\binom{n}{2}-(q-1)\binom{k}{2}-\binom{n-k(q-1)}{2}.$
Since each edge of $X$ covers up to $\binom{k}{2}$ pairs of v-vertices in
distinct connected components, we deduce that
$|X|\geq\frac{\binom{n}{2}-(q-1)\binom{k}{2}-\binom{n-k(q-1)}{2}}{\binom{k}{2}}.$
On the other hand, by (3), we have $|X|\leq\frac{2q+6-k}{k}$, so
$\frac{\binom{n}{2}-(q-1)\binom{k}{2}-\binom{n-k(q-1)}{2}}{\binom{k}{2}}\leq\frac{2q+6-k}{k}.$
(7)
We now substitute $x=q-1$, noting that $x\geq 4$ as $q\geq 5$. Rearranging
Inequality (7), we then obtain
$2kxn\leq k^{2}x^{2}+(k^{2}+2k-2)x-(k-8)(k-1).$
Applying $n\geq qk=(x+1)k$ further yields
$k^{2}x^{2}+(k^{2}-2k+2)x+(k-8)(k-1)\leq 0.$
Denote the left-hand side by $f(x)=ax^{2}+bx+c$, where $a=k^{2}$,
$b=k^{2}-2k+2$, and $c=(k-8)(k-1)$, and observe that $a,b>0$ as $k\geq 4$. If
$b^{2}-4ac<0$, then $f(x)>0$ for all $x$, a contradiction. Hence assume
$b^{2}-4ac\geq 0$. Let $x_{2}$ be the larger of the two roots of $f(x)=0$. If
$x_{2}<4$, then $f(x)>0$ for all $x\geq 4$, a contradiction. Hence we must
have
$4\leq\frac{-b+\sqrt{b^{2}-4ac}}{2a}.$
Since $a,b>0$, it is straightforward to show that $16a+4b+c\leq 0$ follows.
However,
$16a+4b+c=k(21k-17)+16>0,$
a contradiction.
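The algebraic manipulations in Case 2 can likewise be checked mechanically. The following Python sketch (a sanity check in the paper's notation, not part of the proof) verifies the rearrangement of Inequality (7) on a grid of integer values, together with the identity $16a+4b+c=k(21k-17)+16$:

```python
from fractions import Fraction

def c2(t):  # binomial(t, 2) as an exact rational, valid for any integer t
    return Fraction(t * (t - 1), 2)

for k in range(4, 10):
    for x in range(4, 10):          # x = q - 1 >= 4
        q = x + 1
        for n in range(0, 50):
            # Inequality (7), multiplied through by binom(k, 2):
            L = c2(n) - (q - 1) * c2(k) - c2(n - k * (q - 1))
            R = Fraction((2 * q + 6 - k) * (k - 1), 2)
            # Claimed equivalent form of (7):
            #   2kxn <= k^2 x^2 + (k^2 + 2k - 2)x - (k - 8)(k - 1)
            gap = (k**2 * x**2 + (k**2 + 2 * k - 2) * x
                   - (k - 8) * (k - 1) - 2 * k * x * n)
            assert (R - L) * 2 == gap   # so (7) holds iff gap >= 0
    # 16a + 4b + c with a = k^2, b = k^2 - 2k + 2, c = (k - 8)(k - 1):
    a, b, c = k**2, k**2 - 2 * k + 2, (k - 8) * (k - 1)
    assert 16 * a + 4 * b + c == k * (21 * k - 17) + 16 > 0
print("Case 2 identities verified")
```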
Since each case leads to a contradiction, we conclude that
$|X|\geq 2\lfloor\frac{{\rm c}(G^{*}-X)+3}{k}\rfloor$. By Lemma 3.1, hypergraph
$H$ has no cut edges, so we may apply Proposition 4.2 to conclude that $H$ is
quasi-eulerian. ∎
We are now ready to prove our main result, restated below.
###### Theorem 1.2.
Let $\ell$ and $k$ be integers, $2\leq\ell<k$, and let $H$ be an
$\ell$-covering $k$-hypergraph. Then $H$ is quasi-eulerian if and only if it
has at least two edges.
###### Proof.
Since $H$ is non-empty, and since a hypergraph with a single edge does not
admit a closed trail, necessity is easy to see.
To prove sufficiency, for $s\geq 1$ and $\ell\geq 2$, define the proposition
$P_{s}(\ell):\mbox{ ``Every }\ell\mbox{-covering }(\ell+s)\mbox{-hypergraph
with at least two edges is quasi-eulerian.''}$
Theorem 1.1 implies that $P_{1}(\ell)$ holds for all $\ell\geq 2$. Hence fix
any $s\geq 2$.
We prove $P_{s}(\ell)$ by induction on $\ell$. As $\ell+s\geq 4$, the basis of
induction, $P_{s}(2)$, follows from Theorem 5.1. Suppose that, for some
$\ell\geq 2$, the proposition $P_{s}(\ell)$ holds; that is, every
$\ell$-covering $(\ell+s)$-hypergraph with at least two edges is quasi-
eulerian.
Let $H=(V,E)$ be an $(\ell+1)$-covering $\big((\ell+1)+s\big)$-hypergraph
with $|E|\geq 2.$ Fix any $v\in V$ and let $V^{*}=V-\\{v\\}$. Define a mapping
$\varphi:E\to 2^{V^{*}}$ by
$\varphi(e)=e-\\{v\\}\quad\mbox{ if }v\in e,$
and otherwise,
$\varphi(e)=e-\\{u\\}\quad\mbox{ for any }u\in e.$
Then let $E^{*}=\\{\varphi(e):e\in E\\}$ and $H^{*}=(V^{*},E^{*})$, so that
$\varphi$ is a bijection from $E$ to $E^{*}$. It is straightforward to verify
that $H^{*}$ is an $\ell$-covering $(\ell+s)$-hypergraph. As $|E^{*}|=|E|\geq
2$, by induction hypothesis, hypergraph $H^{*}$ admits an Euler family
$\mathcal{F}^{*}$. In each closed trail in $\mathcal{F}^{*}$, replace each
$e\in E^{*}$ with $\varphi^{-1}(e)$ to obtain a set $\mathcal{F}$ of closed
trails of $H$. It is not difficult to verify that $\mathcal{F}$ is an Euler
family of $H$, so $P_{s}(\ell+1)$ follows.
By induction, we conclude that $P_{s}(\ell)$ holds for all $\ell\geq 2$, and
any $s\geq 1$. Therefore, every $\ell$-covering $k$-hypergraph with at least
two edges is quasi-eulerian. ∎
## Acknowledgements
The first author gratefully acknowledges support by the Natural Sciences and
Engineering Research Council of Canada (NSERC), Discovery Grant
RGPIN-2016-04798.
## References
* [1] M. A. Bahmanian and M. Šajna, Connection and separation in hypergraphs, Theory Appl. Graphs 2 (2015), Art. 5, 24 pp.
* [2] M. A. Bahmanian and M. Šajna, Quasi-Eulerian hypergraphs, Elec. J. Combin. 24 (2017), #P3.30, 12 pp.
* [3] J. A. Bondy and U. S. R. Murty, Graph Theory, Springer, 2008.
* [4] Z. Lonc and P. Naroski, On tours that contain all edges of a hypergraph, Elec. J. Combin. 17 (2010), #R144, 31 pp.
* [5] L. Lovász, The factorization of graphs II, Acta Math. Acad. Sci. Hungar. 23 (1972), 223–246.
* [6] M. Šajna and A. Wagner, Triple systems are eulerian, J. Combin. Des. 25 (2017), 185–191.
* [7] M. Šajna and A. Wagner, Covering hypergraphs are eulerian, submitted, 24 pp. arXiv:2101.04561.
# Using edge cuts
to find Euler tours and Euler families in hypergraphs
Mateja Šajna (corresponding author; email: <EMAIL_ADDRESS>; mailing
address: Department of Mathematics and Statistics, University of Ottawa, 150
Louis-Pasteur Private, Ottawa, ON, K1N 6N5, Canada) and Andrew Wagner
University of Ottawa
###### Abstract
An Euler tour in a hypergraph is a closed walk that traverses each edge of the
hypergraph exactly once, while an Euler family is a family of closed walks
that jointly traverse each edge exactly once and cannot be concatenated. In
this paper, we show how the problem of existence of an Euler tour (family) in
a hypergraph $H$ can be reduced to the analogous problem in some smaller
hypergraphs that are derived from $H$ using an edge cut of $H$. In the
process, new techniques of edge cut assignments and collapsed hypergraphs are
introduced. Moreover, we describe algorithms based on these characterizations
that determine whether or not a hypergraph admits an Euler tour (family), and
can also construct an Euler tour (family) if it exists.
Keywords: Hypergraph; Euler tour; Euler family; edge cut; edge cut assignment;
collapsed hypergraph; algorithm.
## 1 Introduction
It is common knowledge that a connected graph admits an Euler tour — that is,
a closed walk traversing each edge of the graph exactly once — if and only if
the graph has no vertices of odd degree. The notion of an Euler tour can be
generalized to hypergraphs in the obvious way, and has been studied as such in
[5, 1]. In addition, Bahmanian and Šajna [1] also introduced the notion of an
Euler family, which is a family of closed walks that jointly traverse each
edge of the hypergraph exactly once and cannot be concatenated. For a
connected graph, an Euler family precisely corresponds to an Euler tour; for
general connected hypergraphs, however, the two notions give rise to two
rather distinct problems: Euler family, which is of polynomial complexity [1],
and Euler tour, which is NP-complete [5]. The question of how to reduce the
search for an Euler family or tour in a hypergraph to smaller hypergraphs is
thus particularly pertinent in the case of Euler tours.
An Euler tour (family) is called spanning if it traverses every vertex of the
hypergraph. In [6], Steimle and Šajna showed how to use certain vertex cuts to
reduce the problem of finding a spanning Euler tour (family) in a hypergraph
$H$ to some smaller hypergraphs derived from subhypergraphs of $H$. In the
present paper, with the same goal in mind, we shall use edge cuts instead.
This new approach represents a major improvement over the results from [6].
First, it can be used for general Euler tours and families, not just for
spanning Euler tours and families. Second, while the main results from [6]
apply only to hypergraphs with vertex cuts of cardinality at most two, the
present approach applies to edge cuts of any size, and hence to all
hypergraphs, as every non-trivial hypergraph has an edge cut. In addition, we
introduce new elegant techniques of edge cut assignments and collapsed
hypergraphs, which will be useful in problems beyond the scope of this paper.
After reviewing the terminology and providing some basic tools in Section 2,
we shall introduce edge cuts in Section 3, and the first main tool — edge cut
assignments — in Section 4. The key result of this section is Theorem 4.4,
where we show that a hypergraph admits an Euler family (tour) if and only if
there exists an edge cut assignment such that the associated hypergraph admits
an Euler family (tour). This theorem forms the basis of all reduction results
and algorithms.
In Section 5, we present our first main reduction result. Namely, in Theorems
5.2 and 5.3 we show that a hypergraph $H$ with a minimal edge cut $F$ admits
an Euler family (tour) if and only if certain subhypergraphs of H (obtained
from the connected components of $H{\backslash}F$) admit an Euler family
(tour).
We begin Section 6 by introducing a collapsed hypergraph of a hypergraph $H$;
that is, a hypergraph obtained from $H$ by “collapsing” (identifying) the
vertices in a given subset of the vertex set of $H$. We then present our
second main reduction result. In Theorem 6.2, we show that a hypergraph $H$
admits an Euler family if and only if certain collapsed hypergraphs obtained
via an edge cut assignment admit Euler families. The analogous result for
Euler tours is presented in Corollaries 6.3 and 6.4; the former contains the
proof of necessity, while the latter contains the proof of sufficiency, but
requires an additional assumption.
Each of Sections 5 and 6 concludes with a description of pertinent algorithms
that determine whether or not a hypergraph admits an Euler family (tour).
Since all of our proofs are constructive, the algorithms can easily be
modified to construct an Euler family (tour) if one exists.
## 2 Preliminaries
We begin with some basic concepts related to hypergraphs, which will be used
in later discussions. For any graph- and hypergraph-theoretic terms not
defined here, we refer the reader to [3] and [2], respectively.
A hypergraph $H$ is an ordered pair $(V,E)$, where $V$ is a non-empty finite
set and $E$ is a finite multiset of $2^{V}$. To denote multisets, we shall use
double braces, $\left\\{\\!\\!\left\\{.\right\\}\\!\\!\right\\}$. The elements
of $V=V(H)$ and $E=E(H)$ are called vertices and edges, respectively. A
hypergraph is said to be trivial if it has only one vertex, and empty if it
has no edges.
Let $H=(V,E)$ be a hypergraph, and $u,v\in V$. If $u\neq v$ and there exists
an edge $e\in E$ such that $u,v\in e$, then we say that $u$ and $v$ are
adjacent (via the edge $e$); this is denoted $u\sim_{H}v$. If $v\in V$ and
$e\in E$ are such that $v\in e$, then $v$ is said to be incident with $e$, and
the ordered pair $(v,e)$ is called a flag of $H$. The set of flags of $H$ is
denoted by $F(H)$. The degree of a vertex $v\in V$ is the number of edges in
$E$ incident with $v$, and is denoted by $\deg_{H}(v)$, or simply $\deg(v)$
when there is no ambiguity.
A hypergraph $H^{\prime}=(V^{\prime},E^{\prime})$ is called a subhypergraph of
the hypergraph $H=(V,E)$ if $V^{\prime}\subseteq V$ and
$E^{\prime}=\left\\{\\!\\!\left\\{e\cap V^{\prime}:e\in
E^{\prime\prime}\right\\}\\!\\!\right\\}$ for some submultiset
$E^{\prime\prime}$ of $E$. For any subset $V^{\prime}\subseteq V$, we define
the subhypergraph of $H$ induced by $V^{\prime}$ to be the hypergraph
$(V^{\prime},E^{\prime})$ with $E^{\prime}=\left\\{\\!\\!\left\\{e\cap
V^{\prime}:e\in E,e\cap V^{\prime}\neq\emptyset\right\\}\\!\\!\right\\}$.
Thus, we obtain the subhypergraph induced by $V^{\prime}$ by deleting all
vertices in $V-V^{\prime}$ from $V$ and from each edge of $H$, and
subsequently deleting all empty edges. For any subset $E^{\prime}\subseteq E$,
we denote the subhypergraph $(V,E-E^{\prime})$ of $H$ by
$H{\backslash}E^{\prime}$, and for $e\in E$, we write $H{\backslash}e$ instead
of $H{\backslash}\\{e\\}$. For any multiset $E^{\prime}$ of $2^{V}$, the
symbol $H+E^{\prime}$ will denote the hypergraph obtained from $H$ by
adjoining all edges in $E^{\prime}$.
A $(v_{0},v_{k})$-walk of length $k$ in a hypergraph $H$ is an alternating
sequence $W=v_{0}e_{1}v_{1}\ldots$ $v_{k-1}e_{k}v_{k}$ of (possibly repeated)
vertices and edges such that $v_{0},\ldots,v_{k}\in V$, $e_{1},\ldots,e_{k}\in
E$, and for each $i\in\\{1,\ldots,k\\}$, the vertices $v_{i-1}$ and $v_{i}$
are adjacent in $H$ via the edge $e_{i}$. Note that since adjacent vertices
are by definition distinct, no two consecutive vertices in a walk can be the
same. It follows that no walk in a hypergraph contains an edge of cardinality
less than 2. The vertices in $V_{a}(W)=\\{v_{0},\ldots,v_{k}\\}$ are called
the anchors of $W$, vertices $v_{0}$ and $v_{k}$ are the endpoints of $W$, and
$v_{1},\ldots,v_{k-1}$ are the internal vertices of $W$. We also define the
edge set of $W$ as $E(W)=\\{e_{1},\ldots,e_{k}\\}$, and the set of anchor
flags of $W$ as
$F(W)=\\{(v_{0},e_{1}),(v_{1},e_{1}),(v_{2},e_{2}),\ldots,(v_{k-1},e_{k}),(v_{k},e_{k})\\}$.
Walks $W$ and $W^{\prime}$ in a hypergraph $H$ are said to be edge-disjoint if
$E(W)\cap E(W^{\prime})=\emptyset$, and anchor-disjoint if $V_{a}(W)\cap
V_{a}(W^{\prime})=\emptyset$.
A walk $W=v_{0}e_{1}v_{1}\ldots v_{k-1}e_{k}v_{k}$ is called closed if
$v_{0}=v_{k}$ and $k\geq 2$; a trail if the edges $e_{1},\ldots,e_{k}$ are
pairwise distinct; a path if it is a trail and the vertices
$v_{0},\ldots,v_{k}$ are pairwise distinct; and a cycle if it is a closed
trail and the vertices $v_{0},\ldots,v_{k-1}$ are pairwise distinct. (Note
that in [2], a “trail” was defined as a walk with no repeated anchor flags,
and a walk with no repeated edges was called a “strict trail”. In this paper,
we shall consider only strict trails, and hence use the shorter term “trail”
to mean a “strict trail”.)
A walk $W=v_{0}e_{1}v_{1}\ldots v_{k-1}e_{k}v_{k}$ is said to traverse a
vertex $v$ and edge $e$ if $v\in V_{a}(W)$ and $e\in E(W)$, respectively. More
specifically, the walk $W$ is said to traverse the edge $e$ via vertex $v$, as
well as traverse vertex $v$ via edge $e$, if either $ev$ or $ve$ is a
subsequence of $W$.
Vertices $u$ and $v$ are connected in a hypergraph $H$ if there exists a
$(u,v)$-walk — or equivalently, a $(u,v)$-path [2, Lemma 3.9] — in $H$, and
$H$ itself is connected if every pair of vertices in $V$ are connected in $H$.
The connected components of $H$ are the maximal connected subhypergraphs of
$H$ without empty edges. The number of connected components of $H$ is denoted
by $c(H)$.
An Euler tour of a hypergraph $H$ is a closed trail of $H$ traversing every
edge of $H$, and a hypergraph is said to be eulerian if it either admits an
Euler tour or is trivial and empty. An Euler family of $H$ is a set of
pairwise edge-disjoint and anchor-disjoint closed trails of $H$ jointly
traversing every edge of $H$. Clearly, an Euler family of cardinality 1
corresponds to an Euler tour, and vice-versa. An Euler family $\cal F$ of $H$
is said to be spanning if every vertex of $H$ is an anchor of a closed trail
in $\cal F$, and an Euler tour of $H$ is spanning if it traverses every vertex
of $H$.
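For very small hypergraphs, these definitions can be tested exhaustively. The following Python sketch (an illustrative brute force over all edge orderings and anchor choices; exponential, so suitable for tiny examples only) decides whether a hypergraph admits an Euler tour:

```python
from itertools import permutations, product

def has_euler_tour(vertices, edges):
    """Brute-force test: does the hypergraph (vertices, edges) admit a
    closed trail v0 e1 v1 ... e_m v_m = v0 traversing every edge exactly
    once?  Edges are given as sets; anchors must be distinct consecutively."""
    m = len(edges)
    if m < 2:                                   # a closed walk has length >= 2
        return False
    for order in permutations(edges):           # edge sequence e1, ..., e_m
        for vs in product(vertices, repeat=m):  # anchors v0, ..., v_{m-1}
            seq = list(vs) + [vs[0]]            # close the walk: v_m = v_0
            if all(seq[i] != seq[i + 1] and
                   seq[i] in order[i] and seq[i + 1] in order[i]
                   for i in range(m)):
                return True
    return False

# Three triples pairwise sharing a vertex: tour 1 e1 3 e2 5 e3 1 exists.
print(has_euler_tour([1, 2, 3, 4, 5, 6],
                     [{1, 2, 3}, {3, 4, 5}, {5, 6, 1}]))   # True
# Two edges meeting in a single vertex admit no closed trail.
print(has_euler_tour([1, 2, 3], [{1, 2}, {2, 3}]))         # False
```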
The following two observations will be frequently used tools in the rest of
the paper. Their easy proofs are omitted.
###### Lemma 2.1
Let $H$ be a hypergraph with connected components $H_{i},$ for $i\in I$. Then
the following hold.
1. (i)
If, for each $i\in I$, we have that $H_{i}$ has an Euler family
${\mathcal{F}}_{i}$, then ${\mathcal{F}}=\bigcup_{i\in I}{\mathcal{F}}_{i}$ is
an Euler family of $H$. If each ${\mathcal{F}}_{i}$ is spanning in $H_{i}$,
then ${\mathcal{F}}$ is spanning in $H$.
2. (ii)
If $H$ has an Euler family ${\mathcal{F}}$, then ${\mathcal{F}}$ has a
partition $\\{{\mathcal{F}}_{i}:i\in I\\}$ such that, for each $i\in I$, we
have that ${\mathcal{F}}_{i}$ is an Euler family of $H_{i}$. If
${\mathcal{F}}$ is spanning in $H$, then ${\mathcal{F}}_{i}$ is spanning in
$H_{i}$, for each $i\in I$.
###### Lemma 2.2
Let $H_{1}$ and $H_{2}$ be hypergraphs such that $V(H_{1})\subseteq V(H_{2})$
and there exists a bijection $\varphi:E(H_{1})\to E(H_{2})$ satisfying
$e\subseteq\varphi(e)$ for all $e\in E(H_{1})$. Then the following hold.
1. (i)
If $H_{1}$ has an Euler family ${\mathcal{F}}_{1}$, then $H_{2}$ has an Euler
family ${\mathcal{F}}_{2}$ obtained from ${\mathcal{F}}_{1}$ by replacing each
edge $e$ with $\varphi(e)$.
2. (ii)
If $H_{2}$ has an Euler family ${\mathcal{F}}_{2}$ such that for all $e\in
E(H_{2})$ we have that $e$ is traversed in ${\mathcal{F}}_{2}$ via vertices in
$\varphi^{-1}(e)$, then $H_{1}$ has an Euler family ${\mathcal{F}}_{1}$
obtained from ${\mathcal{F}}_{2}$ by replacing each edge $e$ with
$\varphi^{-1}(e)$.
## 3 Edge cuts in hypergraphs
As in graphs [3], an edge cut in a hypergraph $H=(V,E)$ is a set of edges of
the form $[S,V-S]_{H}$ for some non-empty proper subset $S$ of $V$. Here, for
any $S,T\subseteq V$, we denote
$[S,T]_{H}=\\{e\in E:e\cap S\neq\emptyset\mbox{ and }e\cap T\neq\emptyset\\}.$
An edge $e$ in a hypergraph $H$ is said to be a cut edge of $H$ if
$c(H{\backslash}e)>c(H)$. Thus, an edge $e$ is a cut edge if and only if
$\\{e\\}$ is an edge cut.
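Both $c(H)$ and the cut-edge condition are straightforward to compute directly from the definitions. A small Python sketch (the helper names and the toy hypergraph are our own, for illustration):

```python
def components(vertices, edges):
    """Connected components of a hypergraph, as a partition of the
    vertex set (simple union-find over each edge)."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    for e in edges:
        e = list(e)
        for u in e[1:]:
            parent[find(u)] = find(e[0])    # merge all vertices of the edge
    groups = {}
    for v in vertices:
        groups.setdefault(find(v), set()).add(v)
    return list(groups.values())

def cut_edges(vertices, edges):
    """Edges e with c(H \\ e) > c(H)."""
    c = len(components(vertices, edges))
    return [e for i, e in enumerate(edges)
            if len(components(vertices, edges[:i] + edges[i + 1:])) > c]

H_V = [1, 2, 3, 4, 5]
H_E = [{1, 2, 3}, {3, 4}, {4, 5}, {5, 3}]   # a triple joined to a triangle
print(len(components(H_V, H_E)))   # 1: H is connected
print(cut_edges(H_V, H_E))         # [{1, 2, 3}]: only the triple disconnects H
```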
###### Lemma 3.1
Let $H=(V,E)$ be a hypergraph and $F\subseteq E.$ Then $H{\backslash}F$ is
disconnected if and only if $H$ has an edge cut $F^{\prime}\subseteq F$.
Proof. Assume $H{\backslash}F$ is disconnected, let $H_{1}$ be one connected
component of $H{\backslash}F$, and $S=V(H_{1})$. Then $F^{\prime}=[S,V-S]_{H}$
is an edge cut of $H$ contained in $F$.
Conversely, assume $H$ has an edge cut $F^{\prime}\subseteq F$, where
$F^{\prime}=[S,V-S]_{H}$ for $\emptyset\subsetneq S\subsetneq V$. Then
$H{\backslash}F$ has no edge intersecting both $S$ and $V-S$, so it is
disconnected. ∎
As we shall see in the next lemma, minimal edge cuts — that is, edge cuts that
are not properly contained in other edge cuts — have a very nice property that
will be much exploited in subsequent results. Note that, for a hypergraph
$H=(V,E)$ and its subhypergraph $H^{\prime}$, we say that an edge $e\in E$
intersects $H^{\prime}$ if $e\cap V(H^{\prime})\neq\emptyset$.
###### Lemma 3.2
Let $H=(V,E)$ be a hypergraph. An edge cut $F$ of $H$ is minimal if and only
if every edge of $F$ intersects each connected component of $H{\backslash}F$.
Proof. Assume $F$ is a minimal edge cut of $H$. Suppose $H_{1}$ is a connected
component of $H{\backslash}F$, and $f\in F$ is such that $f$ does not
intersect $H_{1}$. Let $S=V(H_{1})$ and $F^{\prime}=F-\\{f\\}$. Then
$[S,V-S]_{H}\subseteq F^{\prime}\subsetneq F$, contradicting the minimality of
$F$. Thus every edge of $F$ intersects each connected component of
$H{\backslash}F$.
Conversely, assume each edge of $F$ intersects every connected component of
$H{\backslash}F$. Suppose $F^{\prime}$ is an edge cut of $H$ and
$F^{\prime}\subsetneq F$. Take any $f\in F-F^{\prime}$. Since $f$ intersects
every connected component of $H{\backslash}F$, the hypergraph
$(H{\backslash}F)+f$ is connected, and hence so is $H{\backslash}F^{\prime}$,
which contains $(H{\backslash}F)+f$ as a spanning subhypergraph. Hence by
Lemma 3.1, the set $F^{\prime}$ contains no edge cut of $H$, a contradiction.
Hence the edge cut $F$ is minimal. ∎
## 4 Edge Cut Assignments
We are now ready to present the first of the two main tools that we shall use
to study eulerian properties of hypergraphs via their edge cuts; namely, an
edge cut assignment $\alpha$ associated with an edge cut $F$ of a hypergraph
$H$, which maps each edge of $F$ to an unordered pair of connected components
(or, more generally, unions of connected components) of $H{\backslash}F$.
Note that, for a finite set $I$, we denote the set of all unordered pairs of
elements in $I$, with repetitions in a pair permitted, by $I^{[2]}$; that is,
$I^{[2]}=\\{ij:i,j\in I\\}$. Thus, if $|I|=n$, then $|I^{[2]}|=\frac{1}{2}n(n+1)$.
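The count of unordered pairs with repetition is just the number of size-2 multisets from $I$; a quick computational check (Python, standard library only):

```python
from itertools import combinations_with_replacement

for n in range(1, 9):
    # I^[2] with I = {0, ..., n-1}: unordered pairs ij, i = j allowed
    pairs = list(combinations_with_replacement(range(n), 2))
    assert len(pairs) == n * (n + 1) // 2
print("|I^[2]| = n(n+1)/2 confirmed for n = 1..8")
```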
###### Definition 4.1
Let $H$ be a connected hypergraph with a minimal edge cut $F$, and let
$\\{V_{i}:i\in I\\}$, be a partition of $V(H)$ such that each $V_{i}$ is a
union of the vertex sets of the connected components of $H{\backslash}F$.
* •
A mapping $\alpha:F\to I^{[2]}$ is called an edge cut assignment (associated
with $F$ and $H$) if, for all $f\in F$, we have that $\alpha(f)=ij$ implies
that $f\cap V_{i}\neq\emptyset$ and $f\cap V_{j}\neq\emptyset$, and
$\alpha(f)=ii$ implies that $|f\cap V_{i}|\geq 2$.
* •
An edge cut assignment $\alpha:F\to I^{[2]}$ is called standard if $V_{i}$,
for each $i\in I$, is the vertex set of a connected component of
$H{\backslash}F$.
* •
Given an edge cut assignment $\alpha$, we define, for each $e\in E(H)$,
$e^{\alpha}=\left\\{\begin{array}[]{ll}e\cap\left(V_{i}\cup
V_{j}\right)&\mbox{ if }e\in F\mbox{ and }\alpha(e)=ij\\\ e&\mbox{ if
}e\not\in F\end{array}\right..$
* •
A hypergraph $H^{\alpha}$, defined by
$V(H^{\alpha})=V(H)\quad\mbox{and}\quad
E(H^{\alpha})=\left\\{\\!\\!\left\\{e^{\alpha}:e\in
E(H)\right\\}\\!\\!\right\\},$
is called the hypergraph associated with $H$ and the edge cut assignment
$\alpha$.
* •
A multigraph $G^{\alpha}$ with vertex set $V(G^{\alpha})=I$ and edge multiset
$E(G^{\alpha})=\left\\{\\!\\!\left\\{\alpha(f):f\in F\right\\}\\!\\!\right\\}$
is called the multigraph associated with the edge cut assignment $\alpha$.
Thus $\alpha$ can be viewed as a bijection $F\to E(G^{\alpha})$.
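Definition 4.1 can be instantiated concretely. In the sketch below (a toy example of our own; the dictionary and list names are not from the paper), $H{\backslash}F$ has three connected components taken as the parts $V_{0},V_{1},V_{2}$, and an edge cut assignment $\alpha$ yields the edges $e^{\alpha}$ of $H^{\alpha}$ and the edge multiset of $G^{\alpha}$:

```python
# Parts V_i of H \ F (here each part is a single connected component).
parts = {0: {1, 2, 3}, 1: {4, 5}, 2: {6}}
non_cut = [frozenset({1, 2, 3}), frozenset({4, 5})]   # edges of H \ F
# F is minimal: each of its edges intersects every component of H \ F.
F = [frozenset({3, 4, 6}), frozenset({1, 5, 6})]
alpha = {F[0]: (0, 1), F[1]: (0, 2)}                  # an edge cut assignment

def e_alpha(e):
    """e^alpha as in Definition 4.1: a cut edge is restricted to V_i u V_j."""
    if e in alpha:
        i, j = alpha[e]
        return frozenset(e & (parts[i] | parts[j]))
    return e                                          # non-cut edges unchanged

H_alpha = [e_alpha(e) for e in non_cut + F]           # edge multiset of H^alpha
G_alpha = sorted(alpha.values())                      # edge multiset of G^alpha

print(sorted(sorted(e) for e in H_alpha))  # [[1, 2, 3], [1, 6], [3, 4], [4, 5]]
print(G_alpha)                             # [(0, 1), (0, 2)]
```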
The usefulness of these concepts will be conveyed in the following three
observations, the last of which yields necessary and sufficient conditions for
a hypergraph to admit an Euler family (tour) via an edge cut assignment
$\alpha$ and the associated hypergraph $H^{\alpha}$.
###### Lemma 4.2
Let $H$ be a connected hypergraph with a minimal edge cut $F$, and let
$\\{V_{i}:i\in I\\}$ be a partition of $V(H)$ into unions of the vertex sets
of the connected components of $H{\backslash}F$. Furthermore, let $\alpha:F\to
I^{[2]}$ be an edge cut assignment. If $H^{\alpha}$ has an Euler family
(tour), then so does $G^{\alpha}$.
Proof. Let ${\mathcal{F}}$ be an Euler family of $H^{\alpha}$, and
${\mathcal{F}}^{\prime}$ the subset of ${\mathcal{F}}$ consisting of all
closed trails that traverse at least one edge of $F$. Take any closed trail
$T\in{\mathcal{F}}^{\prime}$. Then, without loss of generality, $T$ is of the
form
$T=v_{0}T_{0}v_{0}^{\prime}f_{0}^{\alpha}v_{1}T_{1}v_{1}^{\prime}f_{1}^{\alpha}v_{2}\ldots
v_{k-1}T_{k-1}v_{k-1}^{\prime}f_{k-1}^{\alpha}v_{0}$ for some
$f_{0},\ldots,f_{k-1}\in F$ and $(v_{t},v_{t}^{\prime})$-trails $T_{t}$, each
lying within $H^{\alpha}[V_{i_{t}}]$ for some $i_{t}\in I$, for
$t\in\\{0,\ldots,k-1\\}$. Obtain the sequence $T^{\alpha}$ from
$T$ by replacing each subsequence $v_{t}T_{t}v_{t}^{\prime}$ with $i_{t}$, and
each edge $f_{t}^{\alpha}$ with $\alpha(f_{t})$. Then $T^{\alpha}$ is a closed
trail in $G^{\alpha}$, and $\\{T^{\alpha}:T\in{\mathcal{F}}^{\prime}\\}$ is an
Euler family of $G^{\alpha}$.
Moreover, if $T$ is an Euler tour of $H^{\alpha}$, then $T^{\alpha}$ is an
Euler tour of $G^{\alpha}$. ∎
###### Lemma 4.3
Let $H$ be a connected hypergraph with a minimal edge cut $F$, and let
$\\{V_{i}:i\in I\\}$ be a partition of $V(H)$ into unions of the vertex sets
of the connected components of $H{\backslash}F$.
1. (i)
Suppose $H$ has an Euler family ${\mathcal{F}}$. Let $\alpha:F\to I^{[2]}$ be
an edge cut assignment defined by $\alpha(f)=ij$ if the edge $f\in F$ is
traversed by a trail in ${\mathcal{F}}$ via a vertex in $V_{i}$ and a vertex
in $V_{j}$ (where $i=j$ is possible). Then the hypergraph $H^{\alpha}$ has an
Euler family obtained from ${\mathcal{F}}$ by replacing each $f\in F$ with
$f^{\alpha}$.
2. (ii)
Conversely, if for some edge cut assignment $\alpha:F\to I^{[2]}$, the
associated hypergraph $H^{\alpha}$ has an Euler family
${\mathcal{F}}^{\alpha}$, then $H$ has an Euler family obtained from
${\mathcal{F}}^{\alpha}$ by replacing each $f^{\alpha}$, for $f\in F$, with
$f$.
Proof.
1. (i)
Define a bijection $\varphi:E(H^{\alpha})\to E(H)$ by $\varphi(e^{\alpha})=e$,
for all $e\in E(H)$. Then $e^{\alpha}\subseteq\varphi(e^{\alpha})$ for all
$e^{\alpha}\in E(H^{\alpha})$. By Lemma 2.2(ii), since each edge $e$ of $H$ is
traversed in ${\mathcal{F}}$ via vertices in $\varphi^{-1}(e)$, an Euler
family of $H^{\alpha}$ is obtained from ${\mathcal{F}}$ by replacing each edge
$e$ with $\varphi^{-1}(e)$, which effectively means replacing each $f\in F$
with $f^{\alpha}$.
2. (ii)
Define $\varphi$ as in (i) and use Lemma 2.2(i).
∎
###### Theorem 4.4
Let $H$ be a connected hypergraph with a minimal edge cut $F$, and let
$\\{V_{i}:i\in I\\}$ be a partition of $V(H)$ into unions of the vertex sets
of the connected components of $H{\backslash}F$. Then
1. (i)
$H$ has an Euler family if and only if for some edge cut assignment
$\alpha:F\to I^{[2]}$, each connected component of $H^{\alpha}$ has an Euler
family; and
2. (ii)
$H$ has an Euler tour if and only if for some edge cut assignment $\alpha:F\to
I^{[2]}$, the hypergraph $H^{\alpha}$ has a unique non-empty connected
component, which has an Euler tour.
Proof. Observe that, by Lemma 4.3, a hypergraph $H$ has an Euler family of
cardinality $k$ if and only if for some edge cut assignment $\alpha$, the
associated hypergraph $H^{\alpha}$ has an Euler family of cardinality $k$.
1. (i)
Since by Lemma 2.1, $H^{\alpha}$ has an Euler family if and only if each of
its connected components has an Euler family, the statement follows.
2. (ii)
From the above observation, $H$ has an Euler tour if and only if $H^{\alpha}$,
for some $\alpha$, has an Euler tour. Since it is clear that $H^{\alpha}$ has
an Euler tour if and only if it has a unique non-empty connected component,
which itself has an Euler tour, the statement follows.
∎
We point out that Theorem 4.4 forms the basis for all other, more specific
reductions, as well as for the algorithms presented in the rest of this paper.
## 5 Reduction using standard edge cut assignments
In this section, we shall be using only standard edge cut assignments; that
is, edge cut assignments arising from partitions of the form $\\{V(H_{i}):i\in
I\\}$, where the $H_{i}$, for $i\in I$, are the connected components of
$H{\backslash}F$, and $F$ is a minimal edge cut in a hypergraph $H$.
###### Lemma 5.1
Let $H$ be a connected hypergraph with a minimal edge cut $F$, and let
$H_{i}$, for $i\in I$, be the connected components of $H{\backslash}F$.
Furthermore, let $\alpha:F\to I^{[2]}$ be a standard edge cut assignment.
1. (i)
If $G^{\prime}$ is a connected component of $G^{\alpha}$, then
$H^{\alpha}[\bigcup_{i\in V(G^{\prime})}V(H_{i})]$ is a connected component of
$H^{\alpha}$.
2. (ii)
If $H^{\prime}$ is a connected component of $H^{\alpha}$, then
$V(H^{\prime})=\bigcup_{i\in J}V(H_{i})$ for some $J\subseteq I$, and
$G^{\alpha}[J]$ is a connected component of $G^{\alpha}$.
Proof.
1. (i)
Let $G^{\prime}$ be a connected component of $G^{\alpha}$, let
$J=V(G^{\prime})$ and $H^{\prime}=H^{\alpha}[\bigcup_{i\in J}V(H_{i})]$.
Take any $i,j\in J$ such that $i\sim_{G^{\alpha}}j$. Then there exists $f\in
F$ such that $\alpha(f)=ij$ and $f^{\alpha}=f\cap(V(H_{i})\cup V(H_{j}))$.
Since $f$ intersects both $H_{i}$ and $H_{j}$, it follows that
$H^{\alpha}[V(H_{i})\cup V(H_{j})]$ is connected. Therefore $H^{\prime}$ is
connected.
Moreover, if there exist $i\in J$, $j\in I-J$, $u\in V(H_{i})$, and $v\in
V(H_{j})$ such that $u\sim_{H^{\alpha}}v$, then for some $f\in F$ we have that
$\alpha(f)=ij$, and hence $i\sim_{G^{\alpha}}j$, a contradiction.
We conclude that $H^{\prime}$ is a connected component of $H^{\alpha}$.
2. (ii)
Let $H^{\prime}$ be a connected component of $H^{\alpha}$. Since
$H^{\alpha}=\left(\bigcup_{i\in I}H_{i}\right)+\\{f^{\alpha}:f\in F\\}$, there
exists $J\subseteq I$ such that $V(H^{\prime})=\bigcup_{i\in J}V(H_{i})$.
Take any $i,j\in J$. Then for all $u\in V(H_{i})$ and $v\in V(H_{j})$ we have
that $u$ and $v$ are connected in $H^{\alpha}$. It is then easy to see that
$i$ and $j$ are connected in $G^{\alpha}$. Hence $G^{\alpha}[J]$ is connected.
It then follows from (i) that $G^{\alpha}[J]$ is a connected component of
$G^{\alpha}$.
∎
We are now ready to prove our first reduction theorem, Theorem 5.2, which
states that a hypergraph $H$ with a minimal edge cut $F$ admits an Euler
family if and only if certain subhypergraphs of $H$ obtained from the
connected components of $H{\backslash}F$ admit Euler families. The analogous
result for Euler tours follows in Theorem 5.3.
###### Theorem 5.2
Let $H$ be a connected hypergraph with a minimal edge cut $F$, and let
$H_{i}$, for $i\in I$, be the connected components of $H{\backslash}F$.
Then $H$ has a (spanning) Euler family if and only if there exists $J\subseteq
I$ with $1\leq|J|\leq|F|$ such that
1. (i)
$H\left[\bigcup_{j\in J}V(H_{j})\right]$ has a non-empty (spanning) Euler
family, and
2. (ii)
$H_{i}$ has a (spanning) Euler family for all $i\in I-J$.
Proof. Assume $H$ has an Euler family ${\mathcal{F}}$, and let $\alpha:F\to
I^{[2]}$ be the edge cut assignment defined by $\alpha(f)=ij$ if the edge
$f\in F$ is traversed by ${\mathcal{F}}$ via a vertex in $H_{i}$ and a vertex
in $H_{j}$. Then by Lemma 4.3(i), the associated hypergraph $H^{\alpha}$ has
an Euler family as well. Let $G^{\alpha}$ be the associated multigraph, and
$J$ the union of the vertex sets of all non-empty connected components of
$G^{\alpha}$. Since $H$ is connected, we have $|F|\geq 1$, and hence $|J|\geq
1$. Since $H^{\alpha}$ has an Euler family, by Lemma 4.2, so does
$G^{\alpha}$. Hence $G^{\alpha}[J]$ is an even graph with $|J|$ vertices,
$|F|$ edges, and minimum degree at least 2; it follows that $|J|\leq|F|$.
To show (i) and (ii), let $V^{\prime}=\bigcup_{j\in J}V(H_{j})$. By Lemma
5.1(i), we have that $H^{\alpha}[V^{\prime}]$ is a union of connected
components of $H^{\alpha}$, and $H_{i}$, for each $i\in I-J$, is a connected
component of $H^{\alpha}$. Since $H^{\alpha}$ has an Euler family, it follows
from Lemma 2.1 that $H^{\alpha}[V^{\prime}]$ has an Euler family, as does
$H_{i}$, for each $i\in I-J$.
It remains to show that $H[V^{\prime}]$ has an Euler family. Define a bijection
$\varphi:E\left(H^{\alpha}[V^{\prime}]\right)\to E\left(H[V^{\prime}]\right)$
by $\varphi(e^{\alpha})=e\cap V^{\prime}$. Then
$e^{\alpha}\subseteq\varphi(e^{\alpha})$ for all $e^{\alpha}\in
E\left(H^{\alpha}[V^{\prime}]\right)$, so by Lemma 2.2(i), since
$H^{\alpha}[V^{\prime}]$ has an Euler family, so does $H[V^{\prime}]$.
Moreover, since $H[V^{\prime}]$ is non-empty, this Euler family is non-empty.
Conversely, assume that there exists $J\subseteq I$ with $1\leq|J|\leq|F|$
such that (i) and (ii) hold. Let $V^{\prime}=\bigcup_{j\in J}V(H_{j})$ and
$H^{\prime}=H[V^{\prime}]\cup\bigcup_{i\in I-J}H_{i}$. Then by Lemma 2.1(i),
the hypergraph $H^{\prime}$ admits an Euler family.
Observe that $E(H^{\prime})=\bigcup_{i\in I}E(H_{i})\cup\\{f\cap
V^{\prime}:f\in F\\}$. Define a bijection $\varphi:E(H^{\prime})\to E(H)$ by
$\varphi(f\cap V^{\prime})=f$ for all $f\in F$, and $\varphi(e)=e$ for all
$e\in\bigcup_{i\in I}E(H_{i})$. Hence $e\subseteq\varphi(e)$ for all $e\in
E(H^{\prime})$, and since $H^{\prime}$ has an Euler family, by Lemma 2.2(i),
the hypergraph $H$ has an Euler family as well.
It is easy to see that if the Euler family ${\mathcal{F}}$ of $H$ is spanning,
then so are the resulting Euler families of $H[V^{\prime}]$ and $H_{i}$, for
all $i\in I-J$, and vice-versa. ∎
###### Theorem 5.3
Let $H$ be a connected hypergraph with a minimal edge cut $F$, and let
$H_{i}$, for $i\in I$, be the connected components of $H{\backslash}F$.
Then $H$ has an Euler tour if and only if there exists $J\subseteq I$ with
$1\leq|J|\leq|F|$ such that
1. (i)
$H\left[\bigcup_{j\in J}V(H_{j})\right]$ has an Euler tour, and
2. (ii)
$H_{i}$ is empty for all $i\not\in J$.
Proof. Assume $H$ has an Euler tour $T$, and let $\alpha:F\to I^{[2]}$ be the
edge cut assignment defined by $\alpha(f)=ij$ if the edge $f\in F$ is
traversed by $T$ via a vertex in $H_{i}$ and a vertex in $H_{j}$. Let
$G^{\alpha}$ be the associated multigraph. By Lemmas 4.3(i) and 4.2,
respectively, the hypergraph $H^{\alpha}$ and multigraph $G^{\alpha}$ both
have Euler tours. Hence $G^{\alpha}$ has a unique non-empty connected
component; let $J$ be its vertex set. As in the proof of Theorem 5.2, it can
be shown that $1\leq|J|\leq|F|$.
Let $V^{\prime}=\bigcup_{j\in J}V(H_{j})$. By Lemma 5.1(i), we have that
$H^{\alpha}[V^{\prime}]$ is a connected component of $H^{\alpha}$, as is
$H_{i}$, for each $i\in I-J$. Since $H^{\alpha}$ has an Euler tour and
$H^{\alpha}[V^{\prime}]$ is not empty, it follows that
$H^{\alpha}[V^{\prime}]$ has an Euler tour, and $H_{i}$, for each $i\in I-J$,
is empty.
Conversely, assume that there exists $J\subseteq I$ with $1\leq|J|\leq|F|$
such that (i) and (ii) hold. Let $V^{\prime}=\bigcup_{j\in J}V(H_{j})$, and
$H^{\prime}=H[V^{\prime}]\cup\bigcup_{i\in I-J}H_{i}$. Similarly to the proof
of Theorem 5.2, we observe that since $H[V^{\prime}]$ has an Euler tour and
$H_{i}$, for each $i\in I-J$, is empty, Lemma 2.1(i) shows that $H^{\prime}$
has an Euler tour as well. By Lemma 2.2(i), it follows that $H$ has an Euler
tour. ∎
We observe that Theorem 5.3 cannot be extended to spanning Euler tours in any
meaningful way. The following immediate consequence of Theorems 5.2 and 5.3
gives more transparent necessary and sufficient conditions for the existence
of an Euler family and Euler tour for a hypergraph with a cut edge.
###### Corollary 5.4
Let $H$ be a connected hypergraph with a cut edge $f$. Let $H_{i}$, for $i\in
I$, be the connected components of $H{\backslash}f$. Then the following hold.
1. (i)
$H$ has a (spanning) Euler family if and only if there exists $j\in I$ such
that
* •
$H[V(H_{j})]$ has a non-empty (spanning) Euler family, and
* •
$H_{i}$ has a (spanning) Euler family for all $i\in I-\\{j\\}$.
2. (ii)
$H$ has an Euler tour if and only if there exists $j\in I$ such that
* •
$H[V(H_{j})]$ has an Euler tour, and
* •
$H_{i}$ is empty for all $i\in I-\\{j\\}$.
3. (iii)
$H$ has no spanning Euler tour.
4. (iv)
If $H$ has no vertex of degree 1, then $H$ has no Euler tour.
In Algorithm 5.5 below, we shall now put to use the results of Lemma 4.2 and
Theorem 4.4 (applied to standard edge cut assignments) to determine whether a
given hypergraph has an Euler family.
###### Algorithm 5.5
Does a connected hypergraph $H$ admit an Euler family?
1. (1)
Sequentially delete vertices of degree at most 1 from $H$ until no such
vertices remain or $H$ is trivial. If at any step $H$ has an edge of
cardinality less than 2, then $H$ has no Euler family — exit.
2. (2)
If $H$ is trivial (and empty), then it has an Euler family — exit.
3. (3)
Find a minimal edge cut $F$.
4. (4)
Let $H_{i}$, for $i\in I$, be the connected components of $H{\backslash}F$.
5. (5)
For all edge cut assignments $\alpha:F\to I^{[2]}$:
1. (a)
If $G^{\alpha}$ has no Euler family, then $H^{\alpha}$ has no Euler family —
discard this $\alpha$.
2. (b)
For each connected component $H^{\prime}$ of $H^{\alpha}$:
if $H^{\prime}$ has no Euler family (recursive call), discard this $\alpha$.
3. (c)
If every connected component of $H^{\alpha}$ has an Euler family, then $H$ has
an Euler family — exit.
6. (6)
$H$ has no Euler family — exit.
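The degree-reduction preprocessing in step (1) can be sketched as follows. This is our illustration, not part of the paper: the hypergraph representation (a vertex set plus a list of vertex sets) and the handling of the trivial case are our own assumptions.

```python
def reduce_degree_one(vertices, edges):
    """Repeatedly delete vertices of degree at most 1 (step (1) of Algorithm 5.5).

    Returns the reduced hypergraph (V, E), or None if an edge of cardinality
    less than 2 appears, in which case H has no Euler family.
    """
    V = set(vertices)
    E = [set(e) for e in edges]
    while True:
        if any(len(e) < 2 for e in E):
            return None                         # edge of cardinality < 2: exit
        deg = {v: sum(v in e for e in E) for v in V}
        low = {v for v in V if deg[v] <= 1}
        if not low or len(V) <= 1:              # no low-degree vertices, or H trivial
            return V, E
        V -= low                                # delete the low-degree vertices
        E = [e - low for e in E]                # and remove them from every edge
```

For example, a 4-cycle (viewed as a hypergraph) survives the reduction unchanged, while attaching a pendant edge produces an edge of cardinality 1 and the routine correctly reports that no Euler family exists.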
Similarly, in Algorithm 5.6 below, we use Theorem 5.3 and Corollary 5.4, in
addition to Lemma 4.2 and Theorem 4.4, to determine whether a given hypergraph
has an Euler tour.
###### Algorithm 5.6
Does a connected hypergraph $H$ admit an Euler tour?
1. (1)
Sequentially delete vertices of degree at most 1 from $H$ until no such
vertices remain or $H$ is trivial. If at any step $H$ has an edge of
cardinality less than 2, then $H$ is not eulerian — exit.
2. (2)
If $H$ is trivial (and empty), then it is eulerian — exit.
3. (3)
Find a minimal edge cut $F$. If $|F|=1$, then $H$ is not eulerian — exit.
4. (4)
Let $H_{i}$, for $i\in I$, be the connected components of $H{\backslash}F$.
5. (5)
Let $J=\\{i\in I:H_{i}\mbox{ is non-empty}\\}$. If $|F|<|J|$, then $H$ is not
eulerian — exit.
6. (6)
For all edge cut assignments $\alpha:F\to I^{[2]}$:
1. (a)
If $G^{\alpha}$ is not eulerian, then $H^{\alpha}$ is not eulerian — discard
this $\alpha$.
2. (b)
If $H^{\alpha}$ has at least 2 non-empty connected components, then
$H^{\alpha}$ is not eulerian — discard this $\alpha$.
3. (c)
Let $H^{\prime}$ be the unique non-empty connected component of $H^{\alpha}$.
If $H^{\prime}$ is eulerian (recursive call), then $H$ is eulerian — exit.
Otherwise, discard this $\alpha$.
7. (7)
$H$ is not eulerian — exit.
Figure 1: Isomorphism classes of even graphs of small size without isolated
vertices.
###### Remarks 5.7
1. (i)
In both algorithms, the input hypergraph is being modified during the
execution of the algorithm. However, at any step, the current hypergraph $H$
admits an Euler family (tour) if and only if the input hypergraph does.
2. (ii)
As each non-empty proper subset $S$ of $V(H)$ corresponds to an edge cut,
minimal edge cuts are easy to construct. As we comment below, it is likely
that using a minimum edge cut would be more efficient. An ${\cal
O}(p+n^{2}\lambda)$ algorithm for finding a minimum edge cut in a hypergraph
$H$ with size $p=\sum_{e\in E(H)}|e|$, order $n=|V(H)|$, and cardinality of a
minimum edge cut $\lambda$ was described in [4].
3. (iii)
Recall that a non-empty graph has an Euler family if and only if it is even,
and is eulerian if and only if it is even and has at most one non-empty
connected component. Hence (5a) in Algorithm 5.5 and (6a) in Algorithm 5.6 are
easy to verify.
4. (iv)
The most time-consuming part of both algorithms is likely going to be the
sequence of recursive calls to determine whether hypergraphs $H^{\prime}$ have
an Euler family or tour. The smaller the size of the largest of the
hypergraphs $H^{\prime}$, the better; however, this size is impossible to
predict. Instead, we suggest minimizing the number of possible edge cut
assignments $\alpha$ to be verified. The number of all possible mappings $F\to
I^{[2]}$ is ${|I|+1\choose 2}^{|F|}$, which indeed suggests choosing $F$ to be
a minimum edge cut. Among minimum edge cuts $F$, should we seek one that also
minimizes $|I|$? We do not have a clear answer: on one hand, this approach
would minimize the number of mappings $\alpha$ to verify; on the other hand,
the expected size of the hypergraphs $H^{\prime}$ for our recursive calls is
inversely proportional to $|I|$. We shall comment more on the complexity of
Algorithms 5.5 and 5.6 in Remark 6.8(iii).
5. (v)
The number of edge cut assignments $\alpha:F\to I^{[2]}$ resulting in an even
graph $G^{\alpha}$ is likely much smaller than ${|I|+1\choose 2}^{|F|}$. To
give some idea of this number, we show in Figure 1, for $|F|\leq 4$, the
isomorphism classes of the suitable graphs $G^{\alpha}$ with the isolated
vertices removed.
6. (vi)
Observe that Algorithms 5.5 and 5.6 are formulated to solve decision problems
— does $H$ contain an Euler family (tour)? — however, since they are based on
results that use constructive proofs, they can easily be adapted to construct
an Euler family (tour), if it exists.
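The counts in Remarks 5.7(iv) and (v) can be checked by brute force. The following sketch (our illustration; $I$ is passed as a list of component labels, and $F$ only by its cardinality) enumerates all mappings $\alpha:F\to I^{[2]}$ and counts those whose multigraph $G^{\alpha}$ is even, i.e., those that survive step (5a) of Algorithm 5.5.

```python
from itertools import combinations_with_replacement, product
from collections import Counter
from math import comb

def even_assignment_count(I, F_size):
    """Count edge cut assignments alpha: F -> I^[2] whose multigraph G^alpha is even."""
    pairs = list(combinations_with_replacement(I, 2))   # I^[2]: unordered pairs, i = j allowed
    assert len(pairs) == comb(len(I) + 1, 2)            # |I^[2]| = C(|I|+1, 2)
    even = 0
    for assignment in product(pairs, repeat=F_size):    # one pair per edge of F
        deg = Counter()
        for i, j in assignment:
            deg[i] += 1                                 # a loop (i == j) contributes 2
            deg[j] += 1
        if all(d % 2 == 0 for d in deg.values()):       # G^alpha even: all degrees even
            even += 1
    return even, len(pairs) ** F_size                   # (even count, total count)
```

For $|I|=2$ and $|F|=2$ this gives 5 even assignments out of the $\binom{3}{2}^{2}=9$ total, consistent with the remark that the even assignments form a small fraction of all mappings.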
## 6 Reduction Using Collapsed Hypergraphs
We shall now introduce our second main tool — the collapsed hypergraph — which
allows for a binary-type of a reduction.
###### Definition 6.1
Let $H$ be a hypergraph, and $S$ a subset of its vertex set with
$\emptyset\subsetneq S\subsetneq V(H)$. The collapsed hypergraph of $H$ with
respect to $S$ is the hypergraph $H\circ S=(V^{\circ},E^{\circ})$ defined by
$V^{\circ}=(V-S)\cup\\{u^{\circ}\\}\quad\mbox{ and }\quad
E^{\circ}=\left\\{\\!\\!\left\\{e^{\circ}:e\in E(H),|e\cap(V-S)|\geq
1\right\\}\\!\\!\right\\},$
where
$e^{\circ}=\left\\{\begin{array}[]{ll}e&\mbox{ if }e\cap S=\emptyset\\\
(e-S)\cup\\{u^{\circ}\\}&\mbox{otherwise}\end{array}\right..$
Here, $u^{\circ}$ is a new vertex, which we call the collapsed vertex of
$H\circ S$.
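Definition 6.1 is mechanical enough to sketch in code. In this illustration (our assumptions, not the paper's notation) edges are frozensets kept in a list, since $E^{\circ}$ is a multiset, and the collapsed vertex $u^{\circ}$ is represented by the hypothetical sentinel `"u*"`, assumed not to clash with existing vertex names.

```python
def collapse(vertices, edges, S):
    """Return the collapsed hypergraph H∘S = (V°, E°) with collapsed vertex u*."""
    u = "u*"                       # the new collapsed vertex u° (assumed fresh)
    V = (vertices - S) | {u}       # V° = (V - S) ∪ {u°}
    E = []
    for e in edges:
        rest = e - S
        if not rest:               # |e ∩ (V - S)| = 0: edge entirely inside S, dropped
            continue
        if e & S:                  # edge meets S: redirect it through u°
            E.append(frozenset(rest | {u}))
        else:                      # edge disjoint from S: kept unchanged
            E.append(frozenset(e))
    return V, E
```

For instance, collapsing $S=\{3,4\}$ in the path with edges $\{1,2\},\{2,3\},\{3,4\}$ drops the edge inside $S$ and redirects $\{2,3\}$ through the collapsed vertex.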
We are now ready for our second main reduction theorem, Theorem 6.2, which
states that a hypergraph $H$ admits an Euler family if and only if some
collapsed hypergraphs of $H$ (obtained using an edge cut assignment) admit
Euler families.
###### Theorem 6.2
Let $H$ be a connected hypergraph with a minimal edge cut $F$, and
$\\{V_{0},V_{1}\\}$ a partition of $V(H)$ into unions of the vertex sets of
the connected components of $H{\backslash}F$.
Then $H$ admits an Euler family if and only if there exists an edge cut
assignment $\alpha:F\to\mathbb{Z}_{2}^{[2]}$ such that $|\alpha^{-1}(01)|$ is
even, and either
1. (i)
$\alpha^{-1}(01)=\emptyset$ and $H^{\alpha}[V_{i}]$ has an Euler family for
each $i\in\mathbb{Z}_{2}$, or
2. (ii)
$\alpha^{-1}(01)\neq\emptyset$, and for each $i\in\mathbb{Z}_{2}$, the
collapsed hypergraph $H^{\alpha}\circ V_{i}$ has an Euler family that
traverses the collapsed vertex $u_{i}^{\circ}$ via each of the edges in
$\\{f^{\circ}:f\in\alpha^{-1}(01)\\}$.
Proof. Let ${\mathcal{F}}$ be an Euler family of $H$, and
$\alpha:F\to\mathbb{Z}_{2}^{[2]}$ the edge cut assignment with $\alpha(f)=ij$
if and only if the edge $f$ is traversed by ${\mathcal{F}}$ via a vertex in
$V_{i}$ and a vertex in $V_{j}$ (where $i=j$ is possible). By Lemma 4.3(i),
the hypergraph $H^{\alpha}$ has an Euler family as well, say
${\mathcal{F}}^{\alpha}$. Since the only edges of $H^{\alpha}$ that intersect
both $V_{0}$ and $V_{1}$ are edges of the form $f^{\alpha}$ with $f\in F$ and
$\alpha(f)=01$, it is easy to see that each closed trail in
${\mathcal{F}}^{\alpha}$ traverses an even number of such edges. Hence
$|\alpha^{-1}(01)|$ is indeed even.
Suppose first that $\alpha^{-1}(01)=\emptyset$. Then $H^{\alpha}[V_{i}]$, for
each $i\in\mathbb{Z}_{2}$, is a union of connected components of $H^{\alpha}$,
and hence has an Euler family by Lemma 2.1(ii).
Suppose now that $\alpha^{-1}(01)\neq\emptyset$. By symmetry, it suffices to
show that $H^{\alpha}\circ V_{1}$ has an Euler family that traverses the
collapsed vertex $u_{1}^{\circ}$ via each of the edges in
$\\{f^{\circ}:f\in\alpha^{-1}(01)\\}$. Let $T$ be a closed trail in the Euler
family ${\mathcal{F}}^{\alpha}$ of $H^{\alpha}$ that traverses an edge
$f^{\alpha}$, for some $f\in\alpha^{-1}(01)$. Then $T$ has the form
$T=v_{0}^{(0)}f_{0}^{\alpha}u_{0}^{(1)}T_{0}^{(1)}v_{0}^{(1)}f_{0}^{\prime\alpha}u_{0}^{(0)}T_{0}^{(0)}v_{1}^{(0)}\ldots
u_{k-1}^{(0)}T_{k-1}^{(0)}v_{0}^{(0)}$
for some vertices
$v_{0}^{(0)},u_{0}^{(0)},v_{1}^{(0)},u_{1}^{(0)},\ldots,u_{k-1}^{(0)}\in
V_{0}$ and
$u_{0}^{(1)},v_{0}^{(1)},u_{1}^{(1)},v_{1}^{(1)},\ldots,v_{k-1}^{(1)}\in
V_{1}$; for some edges
$f_{0},f_{0}^{\prime},f_{1},f_{1}^{\prime},\ldots,f_{k-1}^{\prime}\in\alpha^{-1}(01)$;
and for $i\in\mathbb{Z}_{k}$, for some $(u_{i}^{(0)},v_{i+1}^{(0)})$-trails
$T_{i}^{(0)}$ in $H^{\alpha}[V_{0}]$ and $(u_{i}^{(1)},v_{i}^{(1)})$-trails
$T_{i}^{(1)}$ in $H^{\alpha}[V_{1}]$. Let $T^{\circ}$ be the sequence obtained
from $T$ by replacing each subsequence of the form
$v_{i}^{(0)}f_{i}^{\alpha}u_{i}^{(1)}T_{i}^{(1)}v_{i}^{(1)}f_{i}^{\prime\alpha}u_{i}^{(0)}$
with $v_{i}^{(0)}f_{i}^{\circ}u_{1}^{\circ}f_{i}^{\prime\circ}u_{i}^{(0)}$.
Then $T^{\circ}$ is a closed trail in $H^{\alpha}\circ V_{1}$, and it
traverses the collapsed vertex $u_{1}^{\circ}$ via each of the edges of the
form $f^{\circ}$ for $f\in\alpha^{-1}(01)$ such that $T$ traverses
$f^{\alpha}$. If we additionally define $T^{\circ}=T$ for all closed trails
$T\in{\mathcal{F}}^{\alpha}$ that do not traverse any edges of the form
$f^{\alpha}$, for some $f\in\alpha^{-1}(01)$, then it is clear that
$\\{T^{\circ}:T\in{\mathcal{F}}^{\alpha}\\}$ is a family of closed trails in
$H^{\alpha}\circ V_{1}$ that jointly traverse each edge exactly once, and also
traverse the collapsed vertex $u_{1}^{\circ}$ via each of the edges in
$\\{f^{\circ}:f\in\alpha^{-1}(01)\\}$. To obtain an Euler family, we just need
to concatenate all those closed trails in this family that traverse the
collapsed vertex $u_{1}^{\circ}$.
To prove the converse, let $\alpha:F\to\mathbb{Z}_{2}^{[2]}$ be an edge cut
assignment such that $|\alpha^{-1}(01)|$ is even, and either (i) or (ii)
holds.
Suppose first that (i) holds. Since $\alpha^{-1}(01)=\emptyset$, for each
$i\in\mathbb{Z}_{2}$, the hypergraph $H^{\alpha}[V_{i}]$ is a union of
connected components of $H^{\alpha}$, and by Lemma 2.1, since each of
$H^{\alpha}[V_{0}]$ and $H^{\alpha}[V_{1}]$ admits an Euler family, so does
$H^{\alpha}$. By Lemma 4.3(ii), it follows that $H$ admits an Euler family.
Suppose now that (ii) holds and $|\alpha^{-1}(01)|=2k$. For each
$i\in\mathbb{Z}_{2}$, let
$\alpha^{-1}(01)=\\{f_{0}^{(i)},f_{1}^{(i)},\ldots,f_{2k-1}^{(i)}\\}$. By
assumption, the collapsed hypergraph $H^{\alpha}\circ V_{1-i}$ admits an Euler
family ${\mathcal{F}}^{(i)}$ that traverses the collapsed vertex
$u_{1-i}^{\circ}$ via each of the edges $f^{\circ}$, for
$f\in\alpha^{-1}(01)$. Let $T^{(i)}$ be the unique closed trail in
${\mathcal{F}}^{(i)}$ that traverses $u_{1-i}^{\circ}$. Then, without loss of
generality, $T^{(i)}$ must be of the form
$T^{(i)}=v_{0}^{(i)}\,\,(f_{0}^{(i)})^{\circ}\,\,u_{1-i}^{\circ}\,\,(f_{1}^{(i)})^{\circ}\,\,v_{1}^{(i)}\,\,T_{1}^{(i)}\,\,v_{2}^{(i)}\,\,(f_{2}^{(i)})^{\circ}\ldots\,\,v_{2k-1}^{(i)}\,\,T_{k}^{(i)}\,\,v_{0}^{(i)}$
for some vertices $v_{0}^{(i)},v_{1}^{(i)},\ldots,v_{2k-1}^{(i)}\in V_{i}$
and, for $j=1,2,\ldots,k$, some $(v_{2j-1}^{(i)},v_{2j}^{(i)})$-trails
$T_{j}^{(i)}$ in $H^{\alpha}[V_{i}]$. (Here, subscripts are evaluated modulo
$2k$.)
Let $\pi:\mathbb{Z}_{2k}\to\mathbb{Z}_{2k}$ be a bijection such that
$f_{\pi(j)}^{(1)}=f_{j}^{(0)}$, for all $j\in\mathbb{Z}_{2k}$. We thus have
that $v_{\pi(j)}^{(1)}\in f_{j}^{(0)}$, for all $j\in\mathbb{Z}_{2k}$.
We now link the $(v_{2j-1}^{(0)},v_{2j}^{(0)})$-trails $T_{j}^{(0)}$ and
$(v_{2j-1}^{(1)},v_{2j}^{(1)})$-trails $T_{j}^{(1)}$, for $j=1,2,\ldots,k$,
into a family ${\cal T}$ of closed trails in $H^{\alpha}$ using the edges
$(f_{j}^{(0)})^{\alpha}$, for $j\in\mathbb{Z}_{2k}$. In particular, the new
closed trails will traverse each edge $(f_{j}^{(0)})^{\alpha}$ via anchors
$v_{j}^{(0)}$ and $v_{\pi(j)}^{(1)}$.
Finally, let
${\mathcal{F}}=\left({\mathcal{F}}^{(0)}-\\{T^{(0)}\\}\right)\cup\left({\mathcal{F}}^{(1)}-\\{T^{(1)}\\}\right)\cup{\cal
T}.$
Since for each $i\in\mathbb{Z}_{2}$, the closed trails in
${\mathcal{F}}^{(i)}-\\{T^{(i)}\\}$ traverse all edges of $H^{\alpha}[V_{i}]$
that are not traversed by $T^{(i)}$, and the closed trails in ${\cal T}$
traverse all edges of $H^{\alpha}[V_{0}]\cup H^{\alpha}[V_{1}]$ that are
traversed by $T^{(0)}$ or $T^{(1)}$, as well as all edges in
$\\{f^{\alpha}:f\in\alpha^{-1}(01)\\}$, we conclude that ${\mathcal{F}}$ is an
Euler family of $H^{\alpha}$.
It then follows by Lemma 4.3(ii) that $H$ admits an Euler family. ∎
Observe that in the proof of sufficiency in Theorem 6.2 we have no control
over the number of closed trails in the family ${\cal T}$; hence the analogous
result for Euler tours does not hold in general. A weaker result is proved
below: necessity is guaranteed by Corollary 6.3, while sufficiency is proved
in Corollary 6.4 with an additional assumption on the edge cut assignment.
This additional assumption, however, always holds for edge cuts of cardinality
at most 3.
###### Corollary 6.3
Let $H$ be a connected hypergraph with a minimal edge cut $F$, and
$\\{V_{0},V_{1}\\}$ a partition of $V(H)$ into unions of the vertex sets of
the connected components of $H{\backslash}F$.
If $H$ admits an Euler tour, then there exists an edge cut assignment
$\alpha:F\to\mathbb{Z}_{2}^{[2]}$ such that $|\alpha^{-1}(01)|$ is even, and
either
1. (i)
$\alpha^{-1}(01)=\emptyset$ and, without loss of generality,
$H^{\alpha}[V_{0}]$ has an Euler tour and $H^{\alpha}[V_{1}]$ is empty, or
2. (ii)
$\alpha^{-1}(01)\neq\emptyset$, and for each $i\in\mathbb{Z}_{2}$, the
collapsed hypergraph $H^{\alpha}\circ V_{i}$ has an Euler tour that traverses
the collapsed vertex $u_{i}^{\circ}$ via each of the edges in
$\\{f^{\circ}:f\in\alpha^{-1}(01)\\}$.
Proof. Let $T$ be an Euler tour of $H$, and $\alpha:F\to\mathbb{Z}_{2}^{[2]}$
the edge cut assignment with $\alpha(f)=ij$ if and only if the edge $f$ is
traversed by $T$ via a vertex in $V_{i}$ and a vertex in $V_{j}$ (where $i=j$
is possible). We establish that $|\alpha^{-1}(01)|$ is even just as in the
proof of necessity in Theorem 6.2.
By Lemma 4.3(i), the hypergraph $H^{\alpha}$ has an Euler tour as well. If
$\alpha^{-1}(01)=\emptyset$, then $H^{\alpha}$ is disconnected. Hence without
loss of generality, $H^{\alpha}[V_{0}]$ has an Euler tour and
$H^{\alpha}[V_{1}]$ is empty.
Suppose now that $\alpha^{-1}(01)\neq\emptyset$. Following the proof of
necessity in Theorem 6.2, we can show that for each $i\in\mathbb{Z}_{2}$, the
Euler tour $T$ of $H$ gives rise to an Euler tour $T_{i}^{\circ}$ of
$H^{\alpha}\circ V_{i}$ that traverses the collapsed vertex $u_{i}^{\circ}$
via each of the edges in $\\{f^{\circ}:f\in\alpha^{-1}(01)\\}$. ∎
###### Corollary 6.4
Let $H$ be a connected hypergraph with a minimal edge cut $F$, and
$\\{V_{0},V_{1}\\}$ a partition of $V(H)$ into unions of the vertex sets of
the connected components of $H{\backslash}F$. Assume that there exists an edge
cut assignment $\alpha:F\to\mathbb{Z}_{2}^{[2]}$ such that
$|\alpha^{-1}(01)|\in\\{0,2\\}$, and either
1. (i)
$\alpha^{-1}(01)=\emptyset$ and, without loss of generality,
$H^{\alpha}[V_{0}]$ has an Euler tour and $H^{\alpha}[V_{1}]$ is empty, or
2. (ii)
$\alpha^{-1}(01)\neq\emptyset$, and for each $i\in\mathbb{Z}_{2}$, the
collapsed hypergraph $H^{\alpha}\circ V_{i}$ has an Euler tour that traverses
the collapsed vertex $u_{i}^{\circ}$.
Then $H$ admits an Euler tour.
Proof. If $\alpha^{-1}(01)=\emptyset$, then $H^{\alpha}=H^{\alpha}[V_{0}]\cup
H^{\alpha}[V_{1}]$, and since $H^{\alpha}[V_{1}]$ is empty and
$H^{\alpha}[V_{0}]$ admits an Euler tour, so does $H^{\alpha}$. By Lemma
2.2(ii), it follows that $H$ admits an Euler tour.
The case $\alpha^{-1}(01)\neq\emptyset$ is similar to the proof of sufficiency
in Theorem 6.2, assuming Condition (ii). We have
$\alpha^{-1}(01)=\\{f_{0}^{(0)},f_{1}^{(0)}\\}=\\{f_{0}^{(1)},f_{1}^{(1)}\\}$.
By assumption, for each $i\in\mathbb{Z}_{2}$, the collapsed hypergraph
$H^{\alpha}\circ V_{1-i}$ admits an Euler tour $T^{(i)}$ that traverses the
collapsed vertex $u_{1-i}^{\circ}$, necessarily via the edges $f^{\circ}$, for
$f\in\alpha^{-1}(01)$. Then $T^{(i)}$ must be of the form
$T^{(i)}=v_{0}^{(i)}\,\,(f_{0}^{(i)})^{\circ}\,\,u_{1-i}^{\circ}\,\,(f_{1}^{(i)})^{\circ}\,\,v_{1}^{(i)}\,\,T_{1}^{(i)}\,\,v_{0}^{(i)}$
for some vertices $v_{0}^{(i)},v_{1}^{(i)}\in V_{i}$ and a
$(v_{1}^{(i)},v_{0}^{(i)})$-trail $T_{1}^{(i)}$ in $H^{\alpha}[V_{i}]$.
Since either $f_{j}^{(0)}=f_{j}^{(1)}$ or $f_{j}^{(0)}=f_{1-j}^{(1)}$ for
$j\in\mathbb{Z}_{2}$, linking the $(v_{1}^{(0)},v_{0}^{(0)})$-trail
$T_{1}^{(0)}$ and $(v_{1}^{(1)},v_{0}^{(1)})$-trail $T_{1}^{(1)}$ using the
edges $(f_{0}^{(0)})^{\alpha}$ and $(f_{1}^{(0)})^{\alpha}$ clearly results in
a closed trail $T$ in $H^{\alpha}$ that traverses all edges of $H^{\alpha}$
traversed by $T^{(0)}$ and $T^{(1)}$, as well as the edges
$(f_{0}^{(0)})^{\alpha}$ and $(f_{1}^{(0)})^{\alpha}$. We conclude that $T$ is
an Euler tour of $H^{\alpha}$.
By Lemma 4.3(ii), we have that $H$ admits an Euler tour. ∎
Observe that in the case $|F|\leq 3$, Corollary 6.4 gives a full converse to
Corollary 6.3.
In Theorem 6.2 and Corollaries 6.3 and 6.4, we seem to be translating the
problem of finding an Euler family (tour) to a different problem, namely, of
finding an Euler family (tour) that traverses a specified vertex via each of
the specified edges. In the next lemma, we show that an algorithm to find the
former can be used to find the latter as well.
###### Lemma 6.5
Let $H$ be a hypergraph, $u\in V(H)$, and $F\subseteq E(H)$ such that each
edge in $F$ is incident with the vertex $u$. Construct a hypergraph
$H^{\prime}$ from $H$ as follows:
* •
for each edge $f\in F$, adjoin a new vertex $u_{f}$, and
* •
replace each edge $f\in F$ with edges $f^{\prime}=(f-\\{u\\})\cup\\{u_{f}\\}$
and $e_{f}=\\{u_{f},u\\}$.
Then $H$ has an Euler family (tour) traversing vertex $u$ via each edge in $F$
if and only if $H^{\prime}$ has an Euler family (tour).
Proof. Assume ${\mathcal{F}}$ is an Euler family of $H$ traversing vertex $u$
via each edge in $F$. For each closed trail $T$ in ${\mathcal{F}}$, obtain a
sequence $T^{\prime}$ by replacing each subsequence of the form $fu$ in $T$
with the subsequence $f^{\prime}u_{f}e_{f}u$, and each subsequence of the form
$uf$ with the subsequence $ue_{f}u_{f}f^{\prime}$. It is clear that
${\mathcal{F}}^{\prime}=\\{T^{\prime}:T\in{\mathcal{F}}\\}$ is an Euler family
of $H^{\prime}$.
Conversely, if ${\mathcal{F}}^{\prime}$ is an Euler family of $H^{\prime}$,
then it must traverse each edge of the form $e_{f}$, for some $f\in F$, via
either the subtrail $f^{\prime}u_{f}e_{f}u$ or the subtrail
$ue_{f}u_{f}f^{\prime}$. For each closed trail
$T^{\prime}\in{\mathcal{F}}^{\prime}$, obtain a sequence $T$ by replacing each
subsequence of the form $f^{\prime}u_{f}e_{f}u$ with $fu$, and each
subsequence of the form $ue_{f}u_{f}f^{\prime}$ with $uf$. It is then easy to
see that ${\mathcal{F}}=\\{T:T^{\prime}\in{\mathcal{F}}^{\prime}\\}$ is an
Euler family of $H$ that traverses vertex $u$ via each edge in $F$.
Since in both parts of the proof we have
$|{\mathcal{F}}|=|{\mathcal{F}}^{\prime}|$, the statement for Euler tours
holds as well. ∎
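The construction of $H^{\prime}$ in Lemma 6.5 can likewise be sketched directly. In this illustration (our assumptions): edges are frozensets, the edges in $F$ are distinct, and each new vertex $u_{f}$ is represented by a hypothetical tuple `("aux", ...)` assumed not to clash with existing vertices.

```python
def split_at_vertex(vertices, edges, u, F):
    """Build H' of Lemma 6.5: split each edge f in F at the vertex u."""
    V, E = set(vertices), []
    for e in edges:
        if e in F:                                  # each f in F is incident with u
            uf = ("aux", tuple(sorted(e)))          # new vertex u_f for this edge
            V.add(uf)
            E.append(frozenset((e - {u}) | {uf}))   # f' = (f - {u}) ∪ {u_f}
            E.append(frozenset({uf, u}))            # e_f = {u_f, u}
        else:
            E.append(e)                             # edges outside F are unchanged
    return V, E
```

By the lemma, an Euler family (tour) of the output hypergraph corresponds to an Euler family (tour) of the input that traverses $u$ via each edge of $F$, so a plain Euler-family algorithm applied to $H^{\prime}$ answers the constrained question for $H$.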
We shall now describe an algorithm based on Theorem 6.2 and Lemma 6.5 that
determines whether a hypergraph admits an Euler family.
###### Algorithm 6.6
Does a connected hypergraph $H$ admit an Euler family?
1. (1)
Sequentially delete vertices of degree at most 1 from $H$ until no such
vertices remain or $H$ is trivial. If at any step $H$ has an edge of
cardinality less than 2, then $H$ has no Euler family — exit.
2. (2)
If $H$ is trivial (and empty), then it has an Euler family — exit.
3. (3)
Find a minimal edge cut $F$.
4. (4)
Find a bipartition $\\{V_{0},V_{1}\\}$ of $V(H)$ into unions of the vertex
sets of the connected components of $H{\backslash}F$.
5. (5)
For all edge cut assignments $\alpha:F\to\mathbb{Z}_{2}^{[2]}$:
1. (a)
If $|\alpha^{-1}(01)|$ is odd — discard this $\alpha$.
2. (b)
If $\alpha^{-1}(01)=\emptyset$: if $H^{\alpha}[V_{0}]$ and $H^{\alpha}[V_{1}]$
both have Euler families (recursive call), then $H$ has an Euler family —
exit.
3. (c)
If $\alpha^{-1}(01)\neq\emptyset$: if $H^{\alpha}\circ V_{1}$ and
$H^{\alpha}\circ V_{0}$ both have Euler families (recursive call) that
traverse the collapsed vertex via each of the edges of the form $f^{\circ}$,
for $f\in\alpha^{-1}(01)$, then $H$ has an Euler family — exit.
6. (6)
$H$ has no Euler family — exit.
The analogous algorithm for Euler tours, Algorithm 6.7 below, relies on
Corollary 6.4 and Lemma 6.5. If the hypergraph has no edge cut satisfying the
assumptions of Corollary 6.4, then Algorithm 5.6 is invoked.
###### Algorithm 6.7
Does a connected hypergraph $H$ admit an Euler tour?
1. (1)
Sequentially delete vertices of degree at most 1 from $H$ until no such
vertices remain or $H$ is trivial. If at any step $H$ has an edge of
cardinality less than 2, then $H$ is not eulerian — exit.
2. (2)
If $H$ is trivial (and empty), then it is eulerian — exit.
3. (3)
Find a minimum edge cut $F$. If $|F|=1$, then $H$ is not eulerian — exit.
4. (4)
For all bipartitions $\\{V_{0},V_{1}\\}$ of $V(H)$ into unions of the vertex
sets of the connected components of $H{\backslash}F$
and for all edge cut assignments $\alpha:F\to\mathbb{Z}_{2}^{[2]}$:
1. (a)
If $|\alpha^{-1}(01)|$ is odd — discard this $\alpha$.
2. (b)
If $|\alpha^{-1}(01)|=0$: if $H^{\alpha}[V_{0}]$ is empty and
$H^{\alpha}[V_{1}]$ has an Euler tour, or vice-versa (recursive call), then
$H$ has an Euler tour — exit.
3. (c)
If $|\alpha^{-1}(01)|=2$: if $H^{\alpha}\circ V_{1}$ and $H^{\alpha}\circ
V_{0}$ both have an Euler tour that traverses the collapsed vertex (recursive
call), then $H$ has an Euler tour — exit.
5. (5)
Use Algorithm 5.6.
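The bipartitions required in step (4) can be enumerated directly. The sketch below (our illustration; components are given as vertex sets) fixes the first component inside $V_{0}$ so that each unordered bipartition $\\{V_{0},V_{1}\\}$ is produced exactly once, which yields the $2^{|I|-1}-1$ count stated in Remark 6.8(iii) below.

```python
from itertools import combinations

def bipartitions(components):
    """All bipartitions {V0, V1} of V(H) into unions of component vertex sets."""
    comps = [frozenset(c) for c in components]
    out = []
    # comps[0] always stays in V0, so {V0, V1} and {V1, V0} are not both produced
    for r in range(1, len(comps)):
        for chosen in combinations(range(1, len(comps)), r):
            V1 = frozenset().union(*(comps[i] for i in chosen))
            V0 = frozenset().union(*(c for i, c in enumerate(comps) if i not in chosen))
            out.append((V0, V1))
    return out
```

With three components the routine produces $2^{2}-1=3$ bipartitions, matching the count used in the complexity estimate for the loop at step (4).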
###### Remarks 6.8
1. (i)
Remarks 5.7(i) and 5.7(vi) apply to Algorithms 6.6 and 6.7 as well.
2. (ii)
In Step (3) of Algorithm 6.7, we chose to find a minimum edge cut — see Remark
5.7(ii) — in the hope of finding an edge cut of cardinality at most 3; in this
case, a reduction using Corollary 6.4 is guaranteed, and Step (5) is not
reached in this call.
3. (iii)
Observe that for a given edge cut $F$ such that $H{\backslash}F$ has exactly
$|I|$ connected components, the number of bipartitions $\\{V_{0},V_{1}\\}$ is
$2^{|I|-1}-1$, and the number of edge cut assignments
$\alpha:F\to\mathbb{Z}_{2}^{[2]}$ is at most $3^{|F|}$. Hence the loop at Step
(4) of Algorithm 6.7 will be executed at most $(2^{|I|-1}-1)\cdot 3^{|F|}$
times.
4. (iv)
For suitable bipartitions $\\{V_{0},V_{1}\\}$, we expect that the reduction at
Step (4) in both algorithms is the most efficient (that is, the number of
recursive calls is minimized) if $|V_{0}|$ and $|V_{1}|$ are as equal as
possible.
5. (v)
We shall now compare the time complexity functions $\tau(p)$ and $\sigma(p)$
of Algorithms 5.6 and 6.7, respectively, where $p=\sum_{e\in E(H)}|e|$ is the
size of the input hypergraph $H$. To facilitate the comparison, we shall
assume that at any step of the algorithm (including recursive calls) we have
$1\leq|F|\leq k$ and $2\leq|I|\leq m$ for some constants $k$ and $m$, and that
at any recursive call, the size of the hypergraph $H^{\prime}$ is at most
$\frac{p}{c}$ for some constant $c>1$. In addition, we assume that Step (5) of
Algorithm 6.7 is never reached. With these assumptions, we have
$\tau(p)\approx{m+1\choose 2}^{k}\cdot\tau\left(\frac{p}{c}\right)\approx
m^{2k}\cdot\tau\left(\frac{p}{c}\right)\approx
m^{2k\log_{c}p}=p^{\log_{c}(m^{2k})}$
and
$\sigma(p)\approx
2^{m}3^{k}\cdot\sigma\left(\frac{p}{c}\right)\approx(2^{m}3^{k})^{\log_{c}p}=p^{\log_{c}(2^{m}3^{k})}.$
Thus, $\tau(p)$ and $\sigma(p)$ are both polynomial in $p$; however, which of
the polynomial orders is smaller depends on the relationship between $k$ and
$m$. Roughly speaking, $\log_{c}(2^{m}3^{k})=m\log_{c}2+k\log_{c}3$ is smaller
when $k$ is larger than $m$, while $\log_{c}(m^{2k})=2k\log_{c}m$ is smaller
when $m$ is much larger than $k$.
6. (vi)
The time complexities of Algorithms 5.5 and 6.6 can be compared in a similar
way, with very similar results.
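The two polynomial orders compared in part (v) can also be checked numerically. In this sketch (our illustration; $k$, $m$ and $c$ are the hypothetical constants of the remark):

```python
from math import log

def poly_orders(k, m, c=2.0):
    """Polynomial orders of tau(p) and sigma(p) from Remark 6.8(v)."""
    tau = 2 * k * log(m, c)                 # log_c(m^{2k}) = 2k·log_c(m)
    sigma = m * log(2, c) + k * log(3, c)   # log_c(2^m·3^k) = m·log_c(2) + k·log_c(3)
    return tau, sigma
```

For example, with $k=1$ and $m=8$ (so $m$ much larger than $k$) the order of $\tau$ is smaller, while with $k=4$ and $m=3$ the order of $\sigma$ is smaller, consistent with the rough comparison above.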
Acknowledgement
The first author gratefully acknowledges support by the Natural Sciences and
Engineering Research Council of Canada (NSERC), Discovery Grant
RGPIN-2016-04798.
## References
* [1] M. A. Bahmanian and M. Šajna, Quasi-eulerian hypergraphs, Electron. J. Combin. 24 (2017), #P3.30, 12 pp.
* [2] M. A. Bahmanian and M. Šajna, Connection and separation in hypergraphs, Theory Appl. Graphs 2 (2015), no. 2, Art. 5, 24 pp.
* [3] J. A. Bondy, U. S. R. Murty, Graph theory. Graduate Texts in Mathematics 244, Springer, New York, 2008.
* [4] C. Chekuri, C. Xu, Computing minimum cuts in hypergraphs, Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (2017), 1085–1100.
* [5] Z. Lonc, P. Naroski, On tours that contain all edges of a hypergraph, Electron. J. Combin. 17 (2010), # R144, 31 pp.
* [6] Y. D. Steimle, M. Šajna, Spanning Euler tours and spanning Euler families in hypergraphs with particular vertex cuts, Discrete Math. 341 (2018), 2808–2819.
# A proposal to improve Ni-based superconductors
Zi-Jian Lang (郎子健), Tsung-Dao Lee Institute & School of Physics and
Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China
Ruoshi Jiang (姜若诗), Tsung-Dao Lee Institute & School of Physics and
Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China
Wei Ku (顧威), corresponding author, email: <EMAIL_ADDRESS>; Tsung-Dao Lee
Institute & School of Physics and Astronomy, Shanghai Jiao Tong University,
Shanghai 200240, China; Key Laboratory of Artificial Structures and Quantum
Control (Ministry of Education), Shanghai 200240, China
###### Abstract
Recently discovered superconductivity in the hole-doped nickelate Nd0.8Sr0.2NiO2
has attracted intense attention in the field. An immediate question is how to
improve its superconducting properties. Guided by the key characteristics of
electronic structures of the cuprates and the nickelates, we propose that
nickel chalcogenides with a similar lattice structure should be a promising
family of materials. Using NdNiS2 as an example, we find this particular
crystal structure a stable one, through first-principles structural
optimization and phonon calculation. We justify our proposal by comparing with
CaCuO2 and NdNiO2 the strength of the charge-transfer characteristics and the
trend in their low-energy many-body effective Hamiltonians of doped hole
carriers. These analyses indicate that nickel chalcogenides host low-energy
physics closer to that of the cuprates, with stronger magnetic interaction
than the nickelates, and thus deserve further experimental exploration. Our
proposal also opens up the possibility of a wide range of parameter tuning
through ligand substitution among chalcogenides, to further improve
superconducting properties.
The hole-doped nickelate Nd1-xSrxNiO2 Li _et al._ (2019, 2020a), as the first
Ni-based high-temperature superconductor, has recently attracted great
attention in condensed matter physics. It displays type-II superconductivity,
dome-shaped superconducting phase Li _et al._ (2020a) and strange metal
(linear resistivity) behavior in its normal state Li _et al._ (2019). All of
these characteristics suggest that this material represents a new family of
unconventional superconductors. Meanwhile, their strong temperature and doping
dependent Hall coefficient Li _et al._ (2020a), negative magneto-resistance
Li _et al._ (2020b), absence of long-range magnetic order Hayward and
Rosseinsky (2003); Sawatzky (2019) in the parent compound, and increasing
normal-state resistivity in the overdoped regime Li _et al._ (2020a); Osada
_et al._ (2020) also indicate rich underlying physics in this new
superconductor that might be absent in the cuprates Ando _et al._ (2004a, b);
Bozovic _et al._ (2016); Dagotto (1994). Obviously, these nickelate
superconductors are a promising family to help unravel the long-standing
puzzles of high-temperature superconductivity and even to find higher
transition temperature $T_{c}$ beyond the cuprates.
So far within limited attempts, the highest $T_{c}$ of nickelates is only
about 12 K Li _et al._ (2020a); Osada _et al._ (2020), one order of magnitude
lower than the best cuprates Schilling _et al._ (1993); Dagotto (1994).
Furthermore, at present, superconductivity is only found in thin films but not
in bulk samples Li _et al._ (2020b), for reasons yet to be understood.
Significant experimental progress is thus expected upon improvement of sample
quality. On the other hand, it is of equal importance to seek other approaches
to improve the superconducting properties besides the sample quality.
Here we address this timely issue by first comparing the high-energy
electronic structure of the cuprates and nickelates to identify their key
distinguishing characteristic: the strength of charge transfer. Based on this, we
propose a new family of materials, nickel chalcogenides, as a promising
candidate to improve the superconducting properties. Taking NdNiS2 as an
example, through density functional structure optimization and phonon
calculation, we first demonstrate that this compound is stable under the same
crystal structure as NdNiO2. The corresponding high-energy electronic
structure confirms our expectation of an enhanced charge-transfer
characteristic. Furthermore, our local many-body diagonalization gives a
ground state similar to those of the cuprates and nickelates, namely a spin-
singlet state with doped holes mostly residing in ligand-$p$-orbitals Lang
_et al._ (2020). As anticipated, the corresponding effective eV-scale
Hamiltonian of hole carriers contains stronger spin interactions than NdNiO2,
suggesting a higher temperature scale in the phase diagrams. Our study
indicates that nickel chalcogenides are promising candidates for improved
superconducting properties, and ligand substitution, e.g. NdNiS2-xOx and
NdNiS2-xSex, would introduce a great deal of tunability for future
experimental exploration.
Figure 1: Comparison of LDA+$U$ band structures of CaCuO2, NdNiS2 and NdNiO2
under AFM order, unfolded in the one-Cu/Ni Brillouin zone. The red, blue and
green colors represent the weights of Cu/Ni, Ca/Nd and O/S orbitals, and Nd
$f$-orbitals are set transparent. The lower panel shows the magnified band
structure of the purple dashed boxes in the upper panel. Notice the trend in
the relative energies of O/S and Cu/Ni orbitals.
To identify the key difference between the cuprates and nickelates, we compare
their high-energy electronic structure using density functional theory (DFT).
Since both the cuprates and nickelates host strong antiferromagnetic (AFM)
correlations Lee _et al._ (2006); Cui _et al._ (2020) inherited from the
unfrustrated square lattice of spin-1/2 local moments Anderson (1950), we
calculate the band structures under AFM order within the LDA+$U$ approximation
Anisimov _et al._ (1993); Liechtenstein _et al._ (1995); sup and unfold
them to the one-Ni unit cell for a simpler visualization Ku _et al._ (2010).
Figs. 1(a) and 1(c) show that compared with NdNiO2, the main difference of CaCuO2 at
large energy scales is the much lower energy of its $d$-orbitals (in red)
relative to the O $p$-orbitals (in green), reflecting a much stronger charge-
transfer nature well known to the community Zaanen _et al._ (1985). Given
that both families are doped spin-1/2 systems, it is reasonable to expect that
promoting such a charge-transfer characteristic should significantly improve
the superconducting properties, due to various considerations of low-energy
physics such as enhanced super-exchange interaction Anderson (1950) and
renormalized kinetic energy. Since there is no chemical way to further lower
the orbital energy of Ni (other than replacing it by Cu), we are left with no
choice but to raise the energy of the ligand $p$-orbitals, for example by
substituting O with S or Se.
Taking NdNiS2 as an example, we first examine the stability of this compound
under the same crystal structure [c.f. Fig. 2(a)] as the nickelates. Our
structure optimization calculation sup gives lattice constants $a=b=4.505$Å
and $c=3.703$Å. With these structural parameters, further phonon calculation
finds that phonon frequencies are all positive, as shown in Fig. 2(b). This
confirms a stable structure realizable in the lab.
Figure 2: (a) Crystal structure of NdNiS2, where grey, orange and yellow balls
represent the Ni, Nd and S atoms respectively. (b) Phonon dispersion and
corresponding density of states of NdNiS2. The positivity of all phonon
frequencies confirms the stability of the crystal structure.
Next, we verify the enhanced charge-transfer characteristic of this material.
Fig. 1(b) shows the corresponding unfolded band structure of AFM NdNiS2. As expected
from the above chemical intuition, substituting O by S raises the energy of the
$p$-orbitals (in green) quite significantly, thereby enhancing the charge-
transfer nature. The density of states (DOS) plots in Fig. 3 illustrate a
similar trend. Right below the Fermi energy, the relative weight of the most
relevant ligand $p_{\mathbin{\\!/\mkern-5.0mu/\\!}}$-orbitals (in green) to
the $d_{x^{2}-y^{2}}$-orbital (in red) grows systematically from NdNiO2 to
NdNiS2 and CaCuO2. (Here, $p_{\mathbin{\\!/\mkern-5.0mu/\\!}}$ refers to O/S
$p$-orbitals pointing toward nearest Cu/Ni atoms.) Indeed, substituting O by S
enhances the charge-transfer nature and brings nickel chalcogenides closer to
the cuprates.
To reveal the physical benefits of a stronger charge-transfer characteristic,
we proceed to investigate the low-energy effective Hamiltonian using well-
established approaches for the cuprates sup ; Ghijsen _et al._ (1988); Zhang
and Rice (1988); Lang _et al._ (2020). Using DFT-parameterized high-energy
many-body Hamiltonian, we calculate the local many-body ground state via exact
diagonalization. The ground state with a doped hole is a spin-singlet state
similar to the well-known Zhang-Rice singlet Zhang and Rice (1988) with
(self-)doped hole mostly residing in the ligand $p$-orbitals.
Note that such a strong singlet formation introduces an important correction
to Figs. 1 and 3: it pulls the energy of the $x^{2}-y^{2}$ orbital closest to
the chemical potential, even beyond the $3z^{2}-r^{2}$ orbital. This effect,
however, will still respect the above mentioned trend concerning the relative
energies of O/S orbitals and Cu/Ni orbitals.
Figure 3: Comparison of the orbital-resolved density of states in CaCuO2, NdNiS2
and NdNiO2. Notice the gradual reduction of the relative weights of O/S
(green) $p_{\mathbin{\\!/\mkern-5.0mu/\\!}}$-orbitals against (red) Cu/Ni
$d_{x^{2}-y^{2}}$-orbitals right below the Fermi energy from CaCuO2 to NdNiO2.
Using this singlet state as the basis, the low-energy Hamiltonian of hole
carriers resembles the well-known $t$-$J$ model (the subspace spanned by this
singlet state forms the basis for the low-energy effective Hamiltonian,
obtained by perturbatively integrating out the rest of the Hilbert space):
$\begin{split}H=\sum_{ii^{\prime}\nu}t_{ii^{\prime}}\tilde{c}_{i\nu}^{\dagger}\tilde{c}_{i^{\prime}\nu}+\sum_{<i,j>}J\mathbf{S}_{i}\cdot\mathbf{S}_{j},\end{split}$
(1)
where $\tilde{c}_{i\nu}^{\dagger}$ creates a dressed hole at site $i$ with spin
$\nu$.
$\mathbf{S}_{i}=\sum_{\nu,\nu^{\prime}}\tilde{c}^{\dagger}_{i\nu}\bm{\sigma}_{\nu,\nu^{\prime}}\tilde{c}_{i\nu^{\prime}}$
denotes the spin operator and $\bm{\sigma}_{\nu,\nu^{\prime}}$ is the vector
of Pauli matrices.
Table 1 shows our resulting nearest neighbor hopping parameters
$t_{ii^{\prime}}$ and super-exchange parameters $J$ for the three materials.
Despite the larger lattice constant in NdNiS2, $t_{ii^{\prime}}$ turns out to
be similar in all three materials owing to the larger radius of S
$p$-orbitals. In contrast, $J$ is systematically enhanced from NdNiO2, to
NdNiS2 and CaCuO2. This is because a stronger charge-transfer nature (higher
$p$-orbital energy) gives a reduced charge-transfer gap $\Delta_{CT}$
(approximate energy to return an electron from the
$p_{\mathbin{\\!/\mkern-5.0mu/\\!}}$-orbital back to Cu/Ni
$d_{x^{2}-y^{2}}$-orbital.) With the intra-atomic repulsion roughly the same
in Cu and Ni, this in turn enhances the super-exchange processes (
$\propto\Delta_{CT}^{-1}$) Ogata and Fukuyama (2008); Lang _et al._ (2020).
We stress that despite the simplicity of such an estimation, the qualitative
trend among these materials is robust.
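This trend can be illustrated with a crude back-of-the-envelope check. As an assumption made purely for illustration (not the many-body derivation of the text), take the simplest fourth-order perturbative proxy $J\sim t_{pd}^{4}/\Delta_{CT}^{3}$ and insert the parameters of Table 1:

```python
# Crude superexchange proxy: J ~ t_pd^4 / Delta_CT^3 (illustrative assumption,
# not the many-body calculation of the text). Inputs taken from Table 1.
materials = {
    "CaCuO2": {"t_pd": 1.3, "Delta_CT": 3.5},  # J (Table 1) ~ 0.3
    "NdNiS2": {"t_pd": 1.2, "Delta_CT": 4.0},  # J (Table 1) ~ 0.13
    "NdNiO2": {"t_pd": 1.3, "Delta_CT": 6.0},  # J (Table 1) ~ 0.07
}

def j_proxy(t_pd, delta_ct):
    """Fourth-order perturbative estimate of the superexchange scale."""
    return t_pd**4 / delta_ct**3

for name, p in materials.items():
    print(name, round(j_proxy(p["t_pd"], p["Delta_CT"]), 3))
```

The proxy reproduces the qualitative ordering $J(\mathrm{CaCuO2})>J(\mathrm{NdNiS2})>J(\mathrm{NdNiO2})$, consistent with Table 1, while the absolute values of course require the full calculation.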
The enhanced $J$ is likely very important for the superconducting properties.
It would not only lead to a stronger magnetic correlation that dominates the
low-energy physical Hilbert space, but also give rise to a larger renormalized
kinetic energy Yin and Ku (2009); Dagotto (1994). In other words, a larger $J$
can stretch the energy scale of all the low-energy physics, effectively
producing a larger temperature scale in the phase diagram. One can therefore
expect that NdNiS2 should have better superconducting properties than the
nickelates.
An interesting feature of NdNiS2 is that the possible electron-carrier density
in the parent compound will increase as a result of higher-energy $p$-orbitals
(c.f. Fig. 1 and 3). On the one hand, since the electron carriers are shown to
be nearly decoupled from the hole carriers Lang _et al._ (2020) in the
nickelates (and the same is found in NdNiS2 sup ), their existence should not
interfere much with the hole superconductivity. On the other hand, these
weakly correlated electron carriers might introduce additional physical
effects _absent_ in the cuprates (for example, strengthening the essential
superconducting phase stiffness). Further experimental investigation of the
nickel chalcogenides will prove highly illuminating.
Finally, we note that S is not the only ligand with a favorable $p$-orbital
energy; Se, having a similar orbital energy, should also be suitable by our
considerations. This opens up a wide range of tunability in material design,
for example NdNiS2-xOx or NdNiS2-xSex, to optimize superconducting properties,
or to explore systematic trends for better physical understanding.
Table 1: Comparison of the energy difference between the $d_{x^{2}-y^{2}}$ and $p_{\mathbin{\\!/\mkern-5.0mu/\\!}}$ orbitals, $\Delta_{pd}=\epsilon_{p_{\mathbin{\\!/\mkern-5.0mu/\\!}}}-\epsilon_{d_{x^{2}-y^{2}}}$; the estimated charge-transfer gap, $\Delta_{CT}$; the hybridization between the $d_{x^{2}-y^{2}}$ and $p_{\mathbin{\\!/\mkern-5.0mu/\\!}}$ orbitals, $t_{pd}$; the nearest-neighbor hopping $t$ and exchange parameter $J$ in the one-band $t$-$J$ model; and $T_{c}$ Li _et al._ (2020a); Balestrino _et al._ (2001), for the three materials CaCuO2, NdNiS2 and NdNiO2.
 | $\Delta_{pd}$ | $\Delta_{CT}$ | $t_{pd}$ | $t$ | $J$ | $T_{c}$
---|---|---|---|---|---|---
CaCuO2 | 3.7 | $\sim$ 3.5 | 1.3 | 0.3 | $\sim$ 0.3 | $>$50K
NdNiS2 | 5.7 | $\sim$ 4.0 | 1.2 | 0.3 | $\sim$ 0.13 | ?
NdNiO2 | 8.9 | $\sim$ 6.0 | 1.3 | 0.3 | $\sim$ 0.07 | $\sim 12$K
In conclusion, aiming to improve the superconducting properties of the newly
discovered unconventional nickelate superconductors, we identify the degree of
charge transfer as the key difference with the cuprates in their high-energy
electronic structure. Guided by this, we propose a new family of materials,
nickel chalcogenides, as promising candidates for improved superconducting
properties. Taking NdNiS2 as an example, we find this compound stable under
the desired crystal structure and thus realizable in laboratory. The resulting
high-energy electronic structure displays the anticipated enhancement of the
charge-transfer nature. We then reveal the physical benefits of a stronger
charge-transfer characteristic via derivation of low-energy effective
Hamiltonian. The resulting Hamiltonian encapsulates a stronger super-exchange
spin-interaction, implying a higher temperature scale for all low-energy
physics, including superconductivity. Our study paves the way to discover more
nickel-based superconductors in nickel chalcogenides with improved
superconducting properties, for example NdNiS2-xOx and NdNiS2-xSex. Further
experimental exploration of the wide range of tunability through ligand
substitution would likely make a significant contribution to resolving the
long-standing puzzles of high-temperature superconductivity.
###### Acknowledgements.
This work is supported by National Natural Science Foundation of China (NSFC)
#11674220 and 11745006 and Ministry of Science and Technology #2016YFA0300500
and 2016YFA0300501.
## References
* Li _et al._ (2019) D. Li, K. Lee, B. Y. Wang, M. Osada, S. Crossley, H. R. Lee, Y. Cui, Y. Hikita, and H. Y. Hwang, Nature 572, 624 (2019).
* Li _et al._ (2020a) D. Li, B. Y. Wang, K. Lee, S. P. Harvey, M. Osada, B. H. Goodge, L. F. Kourkoutis, and H. Y. Hwang, Phys. Rev. Lett. 125, 027001 (2020a).
* Li _et al._ (2020b) Q. Li, C. He, J. Si, X. Zhu, Y. Zhang, and H.-H. Wen, Communications Materials 1, 16 (2020b).
* Hayward and Rosseinsky (2003) M. Hayward and M. Rosseinsky, Solid State Sciences 5, 839 (2003), international Conference on Inorganic Materials 2002.
* Sawatzky (2019) G. A. Sawatzky, Nature 572, 592 (2019).
* Osada _et al._ (2020) M. Osada, B. Y. Wang, K. Lee, D. Li, and H. Y. Hwang, arXiv e-prints , arXiv:2010.16101 (2020), arXiv:2010.16101 [cond-mat.supr-con] .
* Ando _et al._ (2004a) Y. Ando, Y. Kurita, S. Komiya, S. Ono, and K. Segawa, Phys. Rev. Lett. 92, 197001 (2004a).
* Ando _et al._ (2004b) Y. Ando, S. Komiya, K. Segawa, S. Ono, and Y. Kurita, Phys. Rev. Lett. 93, 267001 (2004b).
* Bozovic _et al._ (2016) I. Bozovic, X. He, J. Wu, and A. T. Bollinger, Nature 536, 309 EP (2016).
* Dagotto (1994) E. Dagotto, Rev. Mod. Phys. 66, 763 (1994).
* Schilling _et al._ (1993) A. Schilling, M. Cantoni, J. D. Guo, and H. R. Ott, Nature 363, 56 (1993).
* Lang _et al._ (2020) Z.-J. Lang, R. Jiang, and W. Ku, arXiv e-prints , arXiv:2005.00022 (2020), arXiv:2005.00022 [cond-mat.supr-con] .
* Lee _et al._ (2006) P. A. Lee, N. Nagaosa, and X.-G. Wen, Rev. Mod. Phys. 78, 17 (2006).
* Cui _et al._ (2020) Y. Cui, C. Li, Q. Li, X. Zhu, Z. Hu, Y.-f. Yang, J. S. Zhang, R. Yu, H.-H. Wen, and W. Yu, arXiv e-prints , arXiv:2011.09610 (2020), arXiv:2011.09610 [cond-mat.supr-con] .
* Anderson (1950) P. W. Anderson, Phys. Rev. 79, 350 (1950).
* Anisimov _et al._ (1993) V. I. Anisimov, I. V. Solovyev, M. A. Korotin, M. T. Czyżyk, and G. A. Sawatzky, Phys. Rev. B 48, 16929 (1993).
* Liechtenstein _et al._ (1995) A. I. Liechtenstein, V. I. Anisimov, and J. Zaanen, Phys. Rev. B 52, R5467 (1995).
* (18) See Supplemental Material for details of phonon and band structure calculation.
* Ku _et al._ (2010) W. Ku, T. Berlijn, and C.-C. Lee, Phys. Rev. Lett. 104, 216401 (2010).
* Zaanen _et al._ (1985) J. Zaanen, G. A. Sawatzky, and J. W. Allen, Phys. Rev. Lett. 55, 418 (1985).
* Ghijsen _et al._ (1988) J. Ghijsen, L. H. Tjeng, J. van Elp, H. Eskes, J. Westerink, G. A. Sawatzky, and M. T. Czyzyk, Phys. Rev. B 38, 11322 (1988).
* Zhang and Rice (1988) F. C. Zhang and T. M. Rice, Phys. Rev. B 37, 3759 (1988).
* Ogata and Fukuyama (2008) M. Ogata and H. Fukuyama, Reports on Progress in Physics 71, 036501 (2008).
* Yin and Ku (2009) W.-G. Yin and W. Ku, Phys. Rev. B 79, 214512 (2009).
* Balestrino _et al._ (2001) G. Balestrino, S. Lavanga, P. G. Medaglia, P. Orgiani, A. Paoletti, G. Pasquini, A. Tebano, and A. Tucciarone, Applied Physics Letters 79, 99 (2001).
APCTP-Pre2021-001
PNUTP-21-A11
Axion-driven hybrid inflation over a barrier
Jinn-Ouk Gong$^{a,b}$ and Kwang Sik Jeong$^{c}$
$^{a}$Department of Science Education, Ewha Womans University, Seoul 03760, Korea
$^{b}$Asia Pacific Center for Theoretical Physics, Pohang 37673, Korea
$^{c}$Department of Physics, Pusan National University, Busan 46241, Korea
We present a novel cosmological scenario that describes both inflation and
dark matter. A concrete realization of our scenario is given based on a well-
established particle physics model, where an axionlike field drives inflation
until a potential barrier, which keeps a waterfall field at the origin,
disappears to trigger a waterfall transition. Such a barrier makes the
inflaton potential much flatter, significantly improving the naturalness and
viability of the otherwise problematic setup adopted previously. The observed
spectrum of the cosmic microwave background indicates that the inflationary
Hubble scale, which is allowed to span a wide range, uniquely fixes the
inflaton mass and decay constant. This raises an intriguing possibility of
probing inflation via experimental searches for axionlike particles. Further,
our model involves dark matter candidates including the inflaton itself. Also,
for a complex waterfall field, we can determine cosmologically the Peccei-
Quinn scale associated with the strong CP problem.
## 1 INTRODUCTION
Cosmic inflation [1, 2, 3] has become an essential part of the standard
cosmological model. Before the onset of the hot big bang evolution, it
provides the necessary initial conditions, which would otherwise be extremely
finely tuned, as confirmed by observations of the cosmic microwave background (CMB) [4].
Furthermore, it explains the origin of temperature fluctuations of the CMB and
the inhomogeneous distribution of galaxies on large scales due to quantum
fluctuations during inflation [5]. The properties of these primordial
perturbations have been constrained by decades of observations, and are
consistent with the predictions of inflation [6].
To implement inflation, we typically need an inflaton field with a
sufficiently flat potential. The inflaton drives an inflationary epoch until
the slow-roll conditions no longer hold [7, 8]. It is, however, a
formidable task to maintain an unusually flat potential against various
corrections [9]. A powerful way of protecting the flatness is to impose
certain symmetries. An axionlike field is an appealing candidate for an
inflaton because its mass requires breaking of the associated shift symmetry,
naturally making it very light. However, the predictions of the minimal axion-
driven inflation – natural inflation [10, 11] – are only marginally consistent
with the most recent Planck observations. Additionally, for successful
inflation, the axion should have a trans-Planckian decay constant $f\gtrsim
5m_{\rm Pl}$ with $m_{\rm Pl}\equiv(8\pi G)^{-1/2}$ [6], which may be outside
the range of validity of an effective field theory description.
In this article, we present a novel cosmological scenario in which an
axionlike field can drive inflation successfully and at the same time
contribute to dark matter. The end of the inflationary phase is triggered by a
waterfall transition like hybrid inflation [12]. The distinctive features of
our model are twofold. First, the waterfall field $\chi$ is trapped at the
origin during inflation by a potential barrier. This implies that, differently
from the previous hybrid inflation models, the scale of inflation is not tied
to that of the waterfall phase transition. As a result, unlike the original
natural inflation, the inflaton is allowed to have a decay constant well below
$m_{\rm Pl}$ so that the effective field theory is trustable, yet maintains a
flat potential. Thus the naturalness and viability of the models, which were
problematic either from the effective-theory viewpoint or because of unnaturally
fine-tuned initial conditions, are significantly improved within our scenario. The
Planck results are accommodated in a broad range of the inflaton mass and
decay constant, but with a certain relationship between them. This opens an
interesting possibility to probe inflation via experimental searches for
axionlike particles.
Another merit of our scenario is that the inflaton itself can constitute dark
matter, which is generally difficult in other inflation models. Further,
because the inflationary Hubble scale $H_{\rm inf}$ is allowed to span a very
wide range, $\chi$ has a potential to resolve other puzzles of the Standard
Model (SM). If complex, we may identify U$(1)_{\chi}$ with the Peccei-Quinn
(PQ) symmetry solving the strong CP problem [13]. Remarkably, the PQ scale is
then determined cosmologically, and the contribution of the QCD axion to dark
matter constrains $H_{\rm inf}$ to be below about $10^{4}$ GeV.
## 2 MODEL
Our model consists of two scalar fields, the inflaton $\phi$ and the waterfall
field $\chi$. During inflation, $\phi$ rolls down the potential slowly while
$\chi$ is trapped at the origin by a barrier. There, the effective mass
squared of $\chi$, $\mu_{\rm eff}^{2}$, is thus positive. As $\phi$ evolves,
$\mu_{\rm eff}^{2}$ decreases monotonically and vanishes at a critical value
$\phi_{c}$, removing the barrier. Then inflation ends almost instantaneously.
Here, it is the barrier that separates the scale of inflation from that of the
waterfall phase transition.
Our scenario is successfully realized if $\phi$ is an axionlike field with
decay constant $f$. This is because its interactions are well controlled by
shift symmetry, $\phi\to\phi+{\rm constant}$, presumably broken only by
nonperturbative effects, and the size of its potential terms is finite and
insensitive to $f$. A dangerous waterfall tadpole can be avoided by imposing a
symmetry, for instance, U$(1)_{\chi}$ if $\chi$ is a complex scalar.
Explicitly, we consider the potential
$V(\phi,\chi)=V_{0}+\mu^{2}_{\rm
eff}(\phi)|\chi|^{2}-\lambda|\chi|^{4}+\frac{1}{\Lambda^{2}}|\chi|^{6}+U(\phi)\,,$
(1)
with $\lambda>0$, and the $\phi$-dependent terms given respectively by
$\displaystyle\mu^{2}_{\rm eff}(\phi)$
$\displaystyle=m^{2}-\mu^{2}\cos\left(\frac{\phi}{f}+\alpha\right)\,,$ (2)
$\displaystyle U(\phi)$
$\displaystyle=M^{4}\cos\left(\frac{\phi}{f}\right)\,,$ (3)
where $\alpha$ is constant. The positive constant $V_{0}$ is fixed by
demanding $V=0$ at the true vacuum, and $\Lambda$ is the cutoff scale of the
theory. The parameter space of our interest is
$m^{4}<\mu^{4}\ll\lambda V_{0}\quad\text{and}\quad M^{4}\ll V_{0}\,.$ (4)
Then the true vacuum appears at $\chi_{0}\sim\sqrt{\lambda}\Lambda$, well
below the cutoff scale $\Lambda$ as long as $\lambda\ll 1$, and $V_{0}$ reads
$V_{0}\sim\lambda^{3}\Lambda^{4}\,.$ (5)
Note that there are two minima along the $\chi$-direction for $\mu^{2}_{\rm
eff}(\phi)>0$, and a barrier separates them. The position and height of the
waterfall barrier are determined by $\mu$, whereas the value of $V_{0}$ is
insensitive to it.
Before going further, let us discuss the case $\lambda<0$ so that there is no
barrier, similar to hybrid natural inflation [14, 15, 16]. In such a case,
$V_{0}$ is fixed by $\mu$ roughly to be $\mu^{4}/|\lambda|$, and the possible
range of $M^{4}/V_{0}$ is severely constrained because a closed loop of $\chi$
generally makes $|\lambda|\gtrsim 1/16\pi^{2}$ and
$M^{4}\gtrsim\mu^{2}\Lambda^{2}_{\ast}/16\pi^{2}$ with $\mu<\Lambda_{\ast}$.
Here $\Lambda_{\ast}$ is the cutoff scale of the $\phi$-dependent waterfall
mass operator. In our scenario, $M^{4}/V_{0}$ can be arbitrarily small, making
the inflaton potential much flatter than the case without a barrier.
The rate of tunneling over a barrier is proportional to $\exp(-S_{E})$, where
$S_{E}$ is the Euclidean action of $\chi$ evaluated on a bounce solution.
Tunneling proceeds dominantly via the Coleman-De Luccia bounce [17] with
$S_{E}>S_{0}\equiv 8\pi^{2}/(3\lambda)$ in the region with $\mu^{2}_{\rm
eff}>2H^{2}_{\rm inf}$, while through the Hawking-Moss instantons [18] with
$S_{E}=\mu^{4}_{\rm eff}/H^{4}_{\rm inf}\times S_{0}$ in the opposite region
[19]. Here we have used that the bounce is insensitive to the $|\chi|^{6}$
term for $\mu^{4}\ll\lambda V_{0}$. For viable inflation, we thus impose the
condition
$\mu^{2}\gg H_{\rm inf}^{2}\,.$ (6)
Then $\chi$ is heavy enough to be initially fixed at the origin. In addition,
the tunneling rate is exponentially suppressed so that $\chi$ stays at the
origin until the barrier disappears at $\phi_{c}$. Bubbles of true vacuum can
be nucleated around the end of inflation, but the U$(1)_{\chi}$ phase
transition occurs rather smoothly because the barrier soon disappears.
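To see how strong this suppression typically is, one can evaluate the Coleman-De Luccia lower bound $S_{0}=8\pi^{2}/(3\lambda)$ for an illustrative coupling; $\lambda=0.01$ below is an assumed value, not one fixed by the model:

```python
import math

lam = 0.01  # illustrative value only; the model does not fix lambda here
S0 = 8 * math.pi**2 / (3 * lam)  # Coleman-De Luccia bound: S_E > S0
print(f"S0 = {S0:.0f}")  # ~ 2.6e3

# The tunneling rate ~ exp(-S_E) is then far below double precision:
print(math.exp(-S0))  # underflows to 0.0
```

Even for a not particularly small quartic coupling, the exponential suppression is astronomical, so $\chi$ indeed remains trapped until the barrier disappears.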
As a simple ultraviolet completion of the inflaton potential, we consider a
hidden QCD with U$(1)_{\chi}$-charged quarks. The potentials (2) and (3) are then
generated in a controllable way while naturally satisfying the hierarchies (4).
Vectorlike quarks $u+u^{c}$ and $d+d^{c}$ couple to $\chi$ through the
U$(1)_{\chi}$ and gauge invariant interactions
$m_{u}uu^{c}+y\chi
u^{c}d+y^{\prime}\chi^{*}ud^{c}+m_{d}dd^{c}+\frac{1}{16\pi^{2}}\frac{\phi}{f}\,G_{\mu\nu}\widetilde{G}^{\mu\nu}\,,$
(7)
where the hidden confining scale lies in the range $m_{d}\ll\Lambda_{h}\ll
m_{u}$. Here we have taken the field basis where the quark mass parameters are
real. Note that the last term above is an anomalous inflaton coupling to
hidden gluons, which is the only source of shift symmetry breaking. At energy
scales below $m_{u}$, $u+u^{c}$ are integrated out to give a $\chi$-dependent
effective quark mass
$\left(\frac{yy^{\prime}}{m_{u}}|\chi|^{2}+m_{d}+\delta m_{d}\right)dd^{c}\,.$
(8)
Here we have included the radiative contribution from a closed loop of $\chi$
$\delta
m_{d}=\frac{yy^{\prime}}{16\pi^{2}}m_{u}\log\left(\frac{\Lambda^{2}}{m^{2}_{\chi}}\right)\,,$
(9)
with $m_{\chi}$ being the mass of the radial component of $\chi$. For small
values of $\chi$, $d+d^{c}$ are lighter than $\Lambda_{h}$ and condense to
form a meson with mass and decay constant around $\Lambda_{h}$. The inflaton
mixes with the meson in the presence of anomalous coupling to hidden gluons,
and finally the effective potential at scales below $\Lambda_{h}$ is obtained
by integrating out the heavy meson
$\Delta V_{\rm
eff}=-\left|\frac{yy^{\prime}}{m_{u}}\right|\Lambda^{3}_{h}\cos\left(\frac{\phi}{f}+\beta_{1}\right)|\chi|^{2}+\left|m_{d}+\delta
m_{d}\right|\Lambda^{3}_{h}\cos\left(\frac{\phi}{f}+\beta_{2}\right)\,,$ (10)
where the constant phases are given by $\beta_{1}=\arg(yy^{\prime}/m_{u})$ and
$\beta_{2}=\arg(m_{d}+\delta m_{d})$. It is clear that the above reduces to
(2) and (3) with $\alpha=\beta_{1}-\beta_{2}$. Also, the hierarchies
$\mu^{4}\ll\lambda V_{0}$ and $M^{4}\ll V_{0}$ are satisfied naturally if
$H_{\rm inf}\lesssim\Lambda_{h}\ll\Lambda\,,$ (11)
where we have used that $H_{\rm inf}$ should be lower than $\Lambda_{h}$ since
otherwise instanton effects become very weak. On the other hand, $m$ should be
smaller than $\mu$ because inflation ends when the barrier disappears. The
smallness of $m$ may be accommodated by more speculative ideas like
supersymmetry or anthropic selection.
## 3 COSMOLOGICAL DYNAMICS
### 3.1 Inflation
The Universe undergoes an inflationary phase while $\chi$ is trapped at the
origin. During this stage, $\phi$ evolves down the potential
$V=V_{0}+U(\phi)=V_{0}+M^{4}\cos\left(\frac{\phi}{f}\right)\,.$ (12)
Thus, the evolution of $\phi$ during inflation is essentially identical to that
in hybrid natural inflation. $\mu^{2}_{\rm eff}$ crosses zero when $\phi$ reaches
the critical value
$\frac{\phi_{c}}{f}=\cos^{-1}\left(\frac{m^{2}}{\mu^{2}}\right)-\alpha\,.$
(13)
The sign flip triggers the waterfall phase transition, because there is no
potential barrier along the $\chi$-direction, and inflation ends almost
instantaneously. Among the model parameters, $m$, $\mu$, and $\alpha$ affect
inflation only through the above combination. Figure 1 shows schematically the
inflationary and waterfall phases.
Figure 1: Schematic display of inflationary and waterfall phases. The
evolution of $\phi$ changes the waterfall potential dramatically near the
origin, as illustrated in the right panel, but hardly at all around the true
vacuum at $\chi=\chi_{0}$ if (4) holds.
At this point, it is very important to note that two crucial ingredients are
required to make our scenario distinctive. One is the shift symmetry of
$\phi$, which naturally allows the hierarchies (4). The other is a
$\phi$-dependent barrier between two extrema at and off the origin in the
waterfall potential. The barrier makes the inflaton potential flatter, and
consequently the value of $f$ required for viable inflation can naturally be
much lower than $m_{\rm Pl}$.
For $M^{4}\ll V_{0}$, inflation is driven by $V_{0}$ so that
$H_{\rm inf}^{2}\approx\frac{V_{0}}{3m_{\rm Pl}^{2}}\,.$ (14)
From (12), the slow-roll parameters are given by, with $\theta\equiv\phi/f$,
$\displaystyle\epsilon$ $\displaystyle\equiv\frac{m_{\rm
Pl}^{2}}{2}\left(\frac{V^{\prime}}{V}\right)^{2}\approx\frac{1}{2}\left(\frac{m_{\rm
Pl}}{f}\right)^{2}\left(\frac{M^{4}}{V_{0}}\right)^{2}\sin^{2}\theta\,,$ (15)
$\displaystyle\eta$ $\displaystyle\equiv m_{\rm
Pl}^{2}\frac{V^{\prime\prime}}{V}\approx-\left(\frac{m_{\rm
Pl}}{f}\right)^{2}\left(\frac{M^{4}}{V_{0}}\right)\cos\theta\,.$ (16)
Thus, $|\eta|$ is parametrically much bigger than $\epsilon$. The slow-roll
conditions, $\epsilon\ll 1$ and $|\eta|\ll 1$, are satisfied if the following
condition holds
$f\gtrsim\left(\frac{M^{4}}{V_{0}}\right)^{1/2}m_{\rm Pl}\,,$ (17)
but it need not be above $m_{\rm Pl}$.
The amplitude of the power spectrum of the curvature perturbation and its
spectral index, and the tensor-to-scalar ratio in terms of the slow-roll
parameters are constrained as [6]
$\displaystyle A_{\cal R}$ $\displaystyle=\frac{V_{0}}{24\pi^{2}m_{\rm
Pl}^{4}\epsilon_{*}}\approx 2.0989^{+0.0296}_{-0.0292}\times 10^{-9}\,,$ (18)
$\displaystyle n_{\cal R}$ $\displaystyle=1-6\epsilon_{*}+2\eta_{*}\approx
0.9656\pm 0.0042\,,$ (19) $\displaystyle r$
$\displaystyle=16\epsilon_{*}<0.056\,,$ (20)
where the subscript $*$ denotes the evaluation at the horizon exit. Since
$\epsilon\ll|\eta|$, $n_{\cal R}$ is determined entirely by $\eta$. Hence,
from (16) and (19), $f$ is written as
$f=\sqrt{\frac{2}{1-n_{\cal
R}}\cos\theta_{*}}\bigg{(}\frac{M^{4}}{V_{0}}\bigg{)}^{1/2}m_{\rm Pl}\approx
7.625\sqrt{\cos\theta_{*}}\bigg{(}\frac{M^{4}}{V_{0}}\bigg{)}^{1/2}m_{\rm
Pl}\,,$ (21)
while (20) is translated to the following mild constraint
$\frac{M^{4}}{V_{0}}<\frac{0.056}{8}\frac{2}{1-n_{\cal
R}}\cos\theta_{*}\approx 0.4070\cos\theta_{*}\,.$ (22)
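As a quick numerical cross-check (ours, not part of the paper's derivation), the coefficients 7.625 in (21) and 0.4070 in (22) follow directly from the central value $n_{\cal R}=0.9656$:

```python
import math

n_R = 0.9656  # central Planck value quoted in (19)

coef_f = math.sqrt(2.0 / (1.0 - n_R))       # prefactor of (21)
coef_r = (0.056 / 8.0) * 2.0 / (1.0 - n_R)  # bound of (22), from r < 0.056
print(round(coef_f, 3), round(coef_r, 4))   # 7.625 0.407
```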
The number of $e$-folds $N$, before the onset of the waterfall phase
transition, is estimated by
$N=\frac{1}{m_{\rm
Pl}}\int_{\phi_{c}}^{\phi}\frac{d\phi^{\prime}}{\sqrt{2\epsilon}}\approx\frac{V_{0}}{M^{4}}\bigg{(}\frac{f}{m_{\rm
Pl}}\bigg{)}^{2}\log\bigg{[}\frac{\tan(\theta_{c}/2)}{\tan(\theta/2)}\bigg{]}\approx
58.14\cos\theta_{*}\log\bigg{[}\frac{\tan(\theta_{c}/2)}{\tan(\theta/2)}\bigg{]}\,,$
(23)
thus we can use $\theta$ and $N$ interchangeably. The required number of
$e$-folds is around 60, which fixes $\theta_{\ast}$ roughly as
$\theta_{\ast}\approx
0.7126\tan\bigg{(}\frac{\theta_{c}}{2}\bigg{)}-0.1566\tan^{3}\bigg{(}\frac{\theta_{c}}{2}\bigg{)}\,,$
(24)
neglecting terms of higher order in $\tan(\theta_{c}/2)$. Thus,
$\theta_{\ast}$ does not need to be very close to the hilltop of the
potential.
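The quoted prefactor 58.14 in (23) equals $2/(1-n_{\cal R})$, and the cubic fit (24) can be checked against a direct numerical solution of (23) with $N=60$. The sketch below is our own consistency check; $\theta_{c}=0.5$ is an assumed illustrative value:

```python
import math

# Cross-check of (23)-(24): prefactor 58.14 = 2/(1 - n_R), and the cubic
# fit (24) approximates the N = 60 solution of (23).
n_R = 0.9656
prefactor = 2.0 / (1.0 - n_R)  # ~ 58.14

def efolds(theta, theta_c):
    """Right-hand side of (23) with the quoted prefactor."""
    return prefactor * math.cos(theta) * math.log(
        math.tan(theta_c / 2) / math.tan(theta / 2))

def theta_star_numeric(theta_c, N=60.0):
    """Bisect efolds(theta) = N on (0, theta_c); efolds decreases with theta."""
    lo, hi = 1e-6, theta_c - 1e-6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if efolds(mid, theta_c) > N:
            lo = mid  # too many e-folds: theta_* lies at larger theta
        else:
            hi = mid
    return 0.5 * (lo + hi)

theta_c = 0.5  # assumed illustrative value
t = math.tan(theta_c / 2)
approx = 0.7126 * t - 0.1566 * t**3  # Eq. (24)
exact = theta_star_numeric(theta_c)
print(round(approx, 4), round(exact, 4))  # agree to within ~1%
```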
The inflaton mass during inflation sets the lower bound of the inflaton mass
at the true vacuum, $m_{\phi}|_{\rm min}=M^{2}/f$. It is interesting to note
that both $f$ and $m_{\phi}|_{\rm min}$ are proportional to $H_{\text{inf}}$;
combining (18) with (14) and (21), they are written respectively as
$\displaystyle f$ $\displaystyle=\frac{H_{\text{inf}}}{\pi(1-n_{\cal
R})\sqrt{A_{\cal R}}\tan\theta_{*}}\approx\frac{2.020\times
10^{5}}{\tan\theta_{*}}H_{\text{inf}}\,,$ (25) $\displaystyle m_{\phi}|_{\rm
min}$ $\displaystyle=\sqrt{\frac{3(1-n_{\cal
R})}{2\cos\theta_{*}}}H_{\text{inf}}\approx\frac{0.2272}{\sqrt{\cos\theta_{*}}}H_{\text{inf}}\,.$
(26)
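The numerical coefficients in (25) and (26) can likewise be reproduced from the central Planck values quoted in (18) and (19); the following is a minimal sketch of that arithmetic:

```python
import math

# Cross-check of the coefficients in (25) and (26), using the central
# Planck values A_R = 2.0989e-9 and n_R = 0.9656 from (18) and (19).
A_R = 2.0989e-9
n_R = 0.9656

coef_f = 1.0 / (math.pi * (1.0 - n_R) * math.sqrt(A_R))  # (25): f = coef_f * H_inf / tan(theta_*)
coef_m = math.sqrt(3.0 * (1.0 - n_R) / 2.0)              # (26): m_phi|min = coef_m * H_inf / sqrt(cos(theta_*))
print(f"{coef_f:.4g} {coef_m:.4f}")  # ~ 2.020e+05 and 0.2272
```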
Figure 2 shows the relationship between $H_{\rm inf}$, $m_{\phi}|_{\rm min}$
and $f$. $\phi$ can couple to the SM sector, for instance, to gauge bosons
through anomalous couplings as naturally expected from its axionic nature. Our
scenario thus provides theoretical support for experimental searches for
axionlike particles in a wide mass range. The rough relation, $f\sim
10^{6}\times m_{\phi}|_{\rm min}$, indicates that $\phi$ should be heavier
than about $0.1$ GeV to avoid too rapid cooling of stars [20], if coupled to
photons. Another plausible possibility is that it instead couples to the Higgs
sector as in the cosmological relaxation model of the weak scale [21]. Then,
it may be detectable at collider and beam-dump experiments. For instance, the
mass range below a few GeV can be probed by experiments at SHiP [22] and NA62
[23].
Figure 2: Decay constant and lower bound on mass of the inflaton compatible
with the Planck results on $n_{\cal R}$ and $A_{\cal R}$. We have taken
$\theta_{\ast}$ lying between 0.01 (solid lines) and 1.5 (dotted lines).
The postinflationary evolution leads to very rich phenomenology. The
quantitative predictions depend very much on the details of the model, so here
we content ourselves with a brief description of the subsequent evolution. After the
barrier disappears, $\chi$ soon acquires a tachyonic mass much larger than
$H_{\text{inf}}$ in magnitude for $\mu^{2}\gg H^{2}_{\text{inf}}$. This
happens within an $e$-fold after $\phi=\phi_{c}$, so $\chi$ rolls fast down to
the true vacuum. Unlike usual axion-driven inflation where the Universe is
reheated via an anomalous coupling to photons, the reheating process depends
greatly on the details of the model. Generally speaking, however, depending on
the couplings, tachyonic preheating [24] is extremely effective so that the
vacuum energy is rapidly transferred to the energy of inhomogeneous
oscillations of $\phi$ and/or $\chi$ [25, 26], subsequently heating up the
Universe to a radiation-dominated regime.
After inflation, spontaneous U$(1)_{\chi}$ breaking occurs and leads to the
formation of cosmic strings [27], which can survive in the late Universe and
contribute to the CMB temperature anisotropies, depending on how the
associated Nambu-Goldstone boson becomes massive. For instance, for global
U$(1)_{\chi}$, it can obtain a mass nonperturbatively from some confining
gauge sector. Then, topological defects are unstable and collapse by the wall
tension if the domain-wall number is equal to unity, or if a small explicit
symmetry-breaking term is added to lift the vacuum degeneracy [28, 29].
Further, cosmic string loops and large time-dependent inhomogeneities
generated during tachyonic preheating can act as a source of gravitational
waves (GWs). The corresponding GW spectrum can span a huge range of
frequencies; from ${\cal O}(10^{-12})$ Hz to ${\cal O}(1)$ Hz for stable and
metastable cosmic strings [30], within the reach of pulsar-timing arrays,
LIGO, and LISA, and from ${\cal O}(1)$ Hz to ${\cal O}(10^{10})$ Hz for
inhomogeneities from tachyonic preheating [31, 32, 33]. Such high-frequency
GWs are unfortunately beyond the sensitivity of interferometric experiments
due to the shot-noise fluctuations of photons. GWs in relatively low-frequency
regimes may well be within the reach of future detectors like advanced LIGO,
the Einstein Telescope, and the Big Bang Observer, which however is possible
only for extremely small values of couplings.
### 3.2 Dark matter
Another distinctive feature of our scenario is the possibility that the
inflaton can contribute to dark matter if its potential arises from hidden QCD
as in (7). Having Yukawa interactions with $\chi$, the hidden quarks have
masses increasing with the waterfall field value. This implies that
$\Lambda_{h}$ also increases, and thus the hidden QCD gets stronger after
inflation. In the region of large waterfall field values where all the hidden
quarks are heavier than $\Lambda_{h}$, we have
$\mu^{2}=0\quad\text{and}\quad M^{4}=\Lambda^{4}_{h}\,,$ (27)
in (2) and (3), because there are no mesons formed.
Let us consider the case that $\chi_{0}$ is sufficiently large so that (27)
holds in the present Universe. $\phi$ is then stabilized at a CP-conserving
minimum, and consequently accidental $Z_{2}$ symmetry arises: $\phi\to-\phi$.
The $Z_{2}$ forbids $\phi$ from mixing with $\chi$, making it stable if it
does not couple to the SM. $\phi$ starts coherent oscillations around the minimum when $H$
becomes comparable to its mass, i.e. at the temperature fixed by
$m_{\phi}(T_{1})=3H(T_{1})\,.$ (28)
If $T$ is below $\Lambda_{h}$ during reheating, oscillations start before
reheating ends. Then, the inflaton relic density from oscillation is roughly
estimated by
$\Omega_{\phi}h^{2}\sim
0.24\,\theta^{2}_{c}\left(\frac{T_{1}}{\Lambda_{h}}\right)^{n}\left(\frac{f}{10^{11}{\rm
GeV}}\right)^{2}\left(\frac{T_{\rm reh}}{10^{5}{\rm GeV}}\right)\,,$ (29)
with $T_{1}<\Lambda_{h}$, and $n=11N/6-2$ for confining SU$(N)$. Here $T_{\rm
reh}$ is the reheating temperature at which the Universe becomes completely
radiation-dominated, and we have used that the scale factor scales as
$H^{-2/3}$ during a matter-dominated era. If oscillations start after
reheating, the relic density can be read off from (29) by replacing $T_{\rm
reh}$ with $T_{1}$. $\phi$ can thus account for the observed dark matter in a
wide range of $f$ depending on $T_{\rm reh}$. It is worth noting that
$\alpha=0$ leads to accidental $Z_{2}$ even when (27) is not the case [34].
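As a rough numerical illustration of the estimate (29), the sketch below evaluates the inflaton relic abundance for sample parameter values. All inputs ($\theta_c$, $T_1/\Lambda_h$, $f$, $T_{\rm reh}$, $N$) are hypothetical choices for illustration only, not predictions of the scenario.

```python
def omega_phi_h2(theta_c, T1, Lambda_h, f, T_reh, N=3):
    """Rough inflaton relic density of Eq. (29), valid for T1 < Lambda_h,
    with n = 11*N/6 - 2 for a confining SU(N) hidden sector.
    f and T_reh are in GeV; T1 and Lambda_h need only share units."""
    n = 11 * N / 6 - 2
    return 0.24 * theta_c**2 * (T1 / Lambda_h)**n * (f / 1e11)**2 * (T_reh / 1e5)

# Hypothetical sample point: theta_c = 1, T1 = 0.5 * Lambda_h,
# f = 1e11 GeV, T_reh = 1e5 GeV, hidden SU(3).
print(omega_phi_h2(theta_c=1.0, T1=0.5, Lambda_h=1.0, f=1e11, T_reh=1e5, N=3))
```

Scanning such sample points over $f$ and $T_{\rm reh}$ reproduces the qualitative statement above that $\phi$ can account for the observed dark matter in a wide range of $f$.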
The inflation sector includes another candidate for dark matter associated
with spontaneously broken U$(1)_{\chi}$. An interesting possibility arising
due to a wide allowed range of $H_{\rm inf}$ is to identify U$(1)_{\chi}$ with
the PQ symmetry so that $\arg{\chi}$ becomes the QCD axion explaining the
absence of CP violation in QCD. This implies that the waterfall and PQ phase
transitions are identical. Then, corresponding to $\chi_{0}$, the axion decay
constant is cosmologically determined by
$f_{a}\approx\frac{3.8\times 10^{11}\,{\rm
GeV}}{\lambda^{1/4}}\left(\frac{H_{\rm inf}}{10^{4}{\rm GeV}}\right)^{1/2}\,,$
(30)
which should be above about $10^{9}$ GeV to avoid astrophysical bounds. The
axion anomalous coupling to gluons, which is required to solve the strong CP
problem, is generated by adding U$(1)_{\chi}$-charged heavy quarks or extra
Higgs doublets [35, 36]. We also note that the domain-wall number should be
equal to one since otherwise domain walls formed during the QCD phase
transition overclose the Universe. In such a case, axions are produced from
coherent oscillations and, more efficiently, from unstable domain-walls
bounded by an axion string. The relic density is estimated by [37]
$\Omega_{a}h^{2}\approx 0.54\times\left(\frac{\Lambda_{\rm QCD}}{400{\rm
MeV}}\right)\left(\frac{f_{a}}{10^{11}{\rm GeV}}\right)^{1.19}\,.$ (31)
Therefore, the observed dark-matter density indicates
$H_{\rm inf}\lesssim\sqrt{\lambda}\times 10^{4}\,{\rm GeV}\,.$ (32)
It would also be interesting to consider other cases where U$(1)_{\chi}$ is
identified, for instance, with U$(1)_{L}$ associated with the seesaw mechanism
or local U$(1)_{B-L}$ to extend SM.
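The chain from Eqs. (30) and (31) can be evaluated numerically, as in the sketch below. The sample values of $H_{\rm inf}$ and $\lambda$ are hypothetical inputs for illustration; comparing the resulting $\Omega_a h^2$ with the observed dark-matter density is what underlies the order-of-magnitude bound (32).

```python
def f_a(H_inf, lam):
    """Cosmologically determined axion decay constant of Eq. (30), in GeV,
    for inflationary Hubble rate H_inf (GeV) and quartic coupling lam."""
    return 3.8e11 / lam**0.25 * (H_inf / 1e4)**0.5

def omega_a_h2(fa, Lambda_QCD=400.0):
    """Axion relic density of Eq. (31) from coherent oscillations and
    collapsing domain walls; fa in GeV, Lambda_QCD in MeV."""
    return 0.54 * (Lambda_QCD / 400.0) * (fa / 1e11)**1.19

# Hypothetical sample point: lambda = 1, H_inf = 1e4 GeV.
fa = f_a(H_inf=1e4, lam=1.0)
print(fa, omega_a_h2(fa))
```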
## 4 CONCLUSIONS
We have proposed a cosmological scenario that significantly improves the
naturalness and viability of axion-driven inflation. During inflation, the
waterfall field is held at the origin by a potential barrier, which disappears
when the inflaton reaches a critical point, whereupon inflation ends almost
instantaneously. The inflaton interaction responsible for such a barrier can
naturally arise if the shift symmetry is broken nonperturbatively by hidden
QCD with quarks coupled to the waterfall field. Interestingly, the Planck
results indicate the possibility of probing our scenario by experimental
searches for axionlike particles. It is also remarkable that the inflaton can
be stable enough to constitute dark matter if all the hidden quarks get
heavier than the confining scale at the true vacuum. Further, for the case of
a complex waterfall field, its phase component can play the role of the QCD
axion, contributing to dark matter for $H_{\rm inf}$ below about $10^{4}$ GeV.
### Acknowledgements
This work is supported in part by the National Research Foundation of Korea
Grants No. 2018R1C1B6006061, No. 2021R1A4A5031460 (K.S.J.) and No.
2019R1A2C2085023 (J.G.). We also acknowledge the Korea-Japan Basic Scientific
Cooperation Program supported by the National Research Foundation of Korea and
the Japan Society for the Promotion of Science (2020K2A9A2A08000097). J.G. is
further supported in part by the Ewha Womans University Research Grant of 2020
(1-2020-1630-001-1). J.G. is grateful to the Asia Pacific Center for
Theoretical Physics for hospitality while this work was under progress.
## References
* [1] A. H. Guth, Phys. Rev. D 23, 347-356 (1981)
* [2] A. D. Linde, Phys. Lett. B 108, 389-393 (1982)
* [3] A. Albrecht and P. J. Steinhardt, Phys. Rev. Lett. 48, 1220-1223 (1982)
* [4] N. Aghanim et al. [Planck], Astron. Astrophys. 641, A6 (2020) [arXiv:1807.06209 [astro-ph.CO]].
* [5] V. F. Mukhanov and G. V. Chibisov, JETP Lett. 33, 532-535 (1981)
* [6] Y. Akrami et al. [Planck], Astron. Astrophys. 641, A10 (2020) [arXiv:1807.06211 [astro-ph.CO]].
* [7] V. Mukhanov, “Physical Foundations of Cosmology,” Cambridge, UK: Univ. Pr. (2005) 421 p.
* [8] S. Weinberg, “Cosmology,” Oxford, UK: Oxford Univ. Pr. (2008) 593 p.
* [9] D. H. Lyth and A. Riotto, Phys. Rept. 314, 1-146 (1999) [arXiv:hep-ph/9807278 [hep-ph]].
* [10] K. Freese, J. A. Frieman and A. V. Olinto, Phys. Rev. Lett. 65, 3233-3236 (1990)
* [11] F. C. Adams, J. R. Bond, K. Freese, J. A. Frieman and A. V. Olinto, Phys. Rev. D 47, 426-455 (1993) [arXiv:hep-ph/9207245 [hep-ph]].
* [12] A. D. Linde, Phys. Rev. D 49, 748-754 (1994) [arXiv:astro-ph/9307002 [astro-ph]].
* [13] R. D. Peccei and H. R. Quinn, Phys. Rev. Lett. 38, 1440-1443 (1977)
* [14] G. G. Ross and G. German, Phys. Lett. B 684, 199-204 (2010) [arXiv:0902.4676 [hep-ph]].
* [15] G. G. Ross and G. German, Phys. Lett. B 691, 117-120 (2010) [arXiv:1002.0029 [hep-ph]].
* [16] G. G. Ross, G. German and J. A. Vazquez, JHEP 05, 010 (2016) [arXiv:1601.03221 [astro-ph.CO]].
* [17] S. R. Coleman and F. De Luccia, Phys. Rev. D 21, 3305 (1980)
* [18] S. W. Hawking and I. G. Moss, Phys. Lett. B 110 (1982), 35-38.
* [19] A. Shkerin and S. Sibiryakov, Phys. Lett. B 746, 257-260 (2015) [arXiv:1503.02586 [hep-ph]].
* [20] G. G. Raffelt, Ann. Rev. Nucl. Part. Sci. 49, 163 (1999) [hep-ph/9903472].
* [21] P. W. Graham, D. E. Kaplan and S. Rajendran, Phys. Rev. Lett. 115, no.22, 221801 (2015) [arXiv:1504.07551 [hep-ph]].
* [22] S. Alekhin et al., Rept. Prog. Phys. 79, no. 12, 124201 (2016) [arXiv:1504.04855 [hep-ph]].
* [23] S. Martellotti, arXiv:1510.00172 [physics.ins-det].
* [24] G. N. Felder, J. Garcia-Bellido, P. B. Greene, L. Kofman, A. D. Linde and I. Tkachev, Phys. Rev. Lett. 87, 011601 (2001) [arXiv:hep-ph/0012142 [hep-ph]].
* [25] J. Garcia-Bellido and A. D. Linde, Phys. Rev. D 57, 6075-6088 (1998) [arXiv:hep-ph/9711360 [hep-ph]].
* [26] E. J. Copeland, S. Pascoli and A. Rajantie, Phys. Rev. D 65, 103517 (2002) [arXiv:hep-ph/0202031 [hep-ph]].
* [27] R. Jeannerot, J. Rocher and M. Sakellariadou, Phys. Rev. D 68, 103514 (2003) [arXiv:hep-ph/0308134 [hep-ph]].
* [28] G. B. Gelmini, M. Gleiser and E. W. Kolb, Phys. Rev. D 39, 1558 (1989).
* [29] S. E. Larsson, S. Sarkar and P. L. White, Phys. Rev. D 55, 5129 (1997) [hep-ph/9608319].
* [30] P. Auclair, J. J. Blanco-Pillado, D. G. Figueroa, A. C. Jenkins, M. Lewicki, M. Sakellariadou, S. Sanidas, L. Sousa, D. A. Steer and J. M. Wachter, et al. JCAP 04, 034 (2020) [arXiv:1909.00819 [astro-ph.CO]].
* [31] J. Garcia-Bellido and D. G. Figueroa, Phys. Rev. Lett. 98, 061302 (2007) [arXiv:astro-ph/0701014 [astro-ph]].
* [32] J. Garcia-Bellido, D. G. Figueroa and A. Sastre, Phys. Rev. D 77, 043517 (2008) [arXiv:0707.0839 [hep-ph]].
* [33] J. F. Dufaux, G. Felder, L. Kofman and O. Navros, JCAP 03, 001 (2009) [arXiv:0812.2917 [astro-ph]].
* [34] S. H. Im and K. S. Jeong, Phys. Lett. B 799, 135044 (2019) [arXiv:1907.07383 [hep-ph]].
* [35] A. Ringwald, Phys. Dark Univ. 1, 116 (2012) [arXiv:1210.5081 [hep-ph]].
* [36] M. Kawasaki and K. Nakayama, Ann. Rev. Nucl. Part. Sci. 63, 69 (2013) [arXiv:1301.1123 [hep-ph]].
* [37] T. Hiramatsu, M. Kawasaki, K. Saikawa and T. Sekiguchi, JCAP 01, 001 (2013) [arXiv:1207.3166 [hep-ph]].
# Graph Neural Network for Traffic Forecasting: A Survey
Weiwei Jiang Department of Electronic Engineering, Tsinghua University,
Beijing, 100084, China Jiayun Luo School of Computer Science and
Engineering, Nanyang Technological University, 639798, Singapore
###### Abstract
Traffic forecasting is important for the success of intelligent transportation
systems. Deep learning models, including convolution neural networks and
recurrent neural networks, have been extensively applied in traffic
forecasting problems to model spatial and temporal dependencies. In recent
years, to model the graph structures in transportation systems as well as
contextual information, graph neural networks have been introduced and have
achieved state-of-the-art performance in a series of traffic forecasting
problems. In this survey, we review the rapidly growing body of research using
different graph neural networks, e.g. graph convolutional and graph attention
networks, in various traffic forecasting problems, e.g. road traffic flow and
speed forecasting, passenger flow forecasting in urban rail transit systems,
and demand forecasting in ride-hailing platforms. We also present a
comprehensive list of open data and source resources for each problem and
identify future research directions. To the best of our knowledge, this paper
is the first comprehensive survey that explores the application of graph
neural networks for traffic forecasting problems. We have also created a
public GitHub repository where the latest papers, open data, and source
resources will be updated.
###### keywords:
Traffic Forecasting , Graph Neural Networks , Graph Convolution Network ,
Graph Attention Network , Deep Learning
††journal: Journal of LaTeX Templates
## 1 Introduction
Transportation systems are among the most important infrastructure in modern
cities, supporting the daily commuting and traveling of millions of people.
With rapid urbanization and population growth, transportation systems have
become more complex. Modern transportation systems encompass road vehicles,
rail transport, and various shared travel modes that have emerged in recent
years, including online ride-hailing, bike-sharing, and e-scooter sharing.
Expanding cities face many transportation-related problems, including air
pollution and traffic congestion. Early intervention based on traffic
forecasting is seen as the key to improving the efficiency of a transportation
system and to alleviate transportation-related problems. In the development
and operation of smart cities and intelligent transportation systems (ITSs),
traffic states are detected by sensors (e.g. loop detectors) installed on
roads, subway and bus system transaction records, traffic surveillance videos,
and even smartphone GPS data collected in a crowd-sourced fashion. Traffic
forecasting is typically based on consideration of historical traffic state
data, together with the external factors which affect traffic states, e.g.
weather and holidays.
Both short-term and long-term traffic forecasting problems for various
transport modes are considered in the literature. This survey focuses on the
data-driven approach, which involves forecasting based on historical data. The
traffic forecasting problem is more challenging than other time series
forecasting problems because it involves large data volumes with high
dimensionality, as well as multiple dynamics including emergency situations,
e.g. traffic accidents. The traffic state in a specific location has both
spatial dependency, which may not be affected only by nearby areas, and
temporal dependency, which may be seasonal. Traditional linear time series
models, e.g. auto-regressive and integrated moving average (ARIMA) models,
cannot handle such spatiotemporal forecasting problems. Machine learning (ML)
and deep learning techniques have been introduced in this area to improve
forecasting accuracy, for example, by modeling the whole city as a grid and
applying a convolutional neural network (CNN) as demonstrated by Jiang & Zhang
[2018]. However, the CNN-based approach is not optimal for traffic forecasting
problems that have a graph-based form, e.g. road networks.
In recent years, graph neural networks (GNNs) have become the frontier of deep
learning research, showing state-of-the-art performance in various
applications [Wu et al., 2020b]. GNNs are ideally suited to traffic
forecasting problems because of their ability to capture spatial dependency,
which is represented using non-Euclidean graph structures. For example, a road
network is naturally a graph, with road intersections as the nodes and road
connections as the edges. With graphs as the input, several GNN-based models
have demonstrated superior performance to previous approaches on tasks
including road traffic flow and speed forecasting problems. These include, for
example, the diffusion convolutional recurrent neural network (DCRNN) [Li et
al., 2018b] and Graph WaveNet [Wu et al., 2019] models. The GNN-based approach
has also been extended to other transportation modes, utilizing various graph
formulations and models.
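To make the graph formulation concrete, the sketch below encodes a toy road network (intersections as nodes, road links as undirected edges; the layout is hypothetical) into the adjacency matrix that a GNN-based model would consume alongside node features.

```python
from collections import defaultdict

# Hypothetical road links between intersections A..D.
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("B", "D")]

# Adjacency list for the undirected road graph.
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

nodes = sorted(adj)                        # fixed node ordering
index = {n: i for i, n in enumerate(nodes)}

# Dense adjacency matrix A, the typical GNN input format.
A = [[0] * len(nodes) for _ in nodes]
for u, v in edges:
    A[index[u]][index[v]] = A[index[v]][index[u]] = 1

print(nodes)  # row/column ordering shared with the node-feature matrix
print(A)
```

In practice the same ordering is used for the node-feature matrix (e.g., one row of sensor readings per intersection), so that graph structure and traffic states stay aligned.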
To the best of the authors’ knowledge, this paper presents the first
comprehensive literature survey of GNN-related approaches to traffic
forecasting problems. While several relevant traffic forecasting surveys exist
[Shi & Yeung, 2018, Pavlyuk, 2019, Yin et al., 2021, Luca et al., 2020, Fan et
al., 2020, Boukerche & Wang, 2020a, Manibardo et al., 2020, Ye et al., 2020a,
Lee et al., 2021, Xie et al., 2020a, George & Santra, 2020, Haghighat et al.,
2020, Boukerche et al., 2020, Tedjopurnomo et al., 2020, Varghese et al.,
2020], most of them are not GNN-focused with only one exception [Ye et al.,
2020a]. For this survey, we reviewed 212 papers published in the years 2018 to
2020. Additionally, because this is a very rapidly developing research field,
we also included preprints that have not yet gone through the traditional peer
review process (e.g., arXiv papers) to present the latest progress. Based on
these studies, we identify the most frequently considered problems, graph
formulations, and models. We also investigate and summarize publicly available
useful resources, including datasets, software, and open-sourced code, for
GNN-based traffic forecasting research and application. Lastly, we identify
the challenges and future directions of applying GNNs to the traffic
forecasting problem.
Instead of giving a whole picture of traffic forecasting, our aim is to
provide a comprehensive summary of GNN-based solutions. This paper is useful
for both the new researchers in this field who want to catch up with the
progress of applying GNNs and the experienced researchers who are not familiar
with these latest graph-based solutions. In addition to this paper, we have
created an open GitHub repository on this topic
(https://github.com/jwwthu/GNN4Traffic), where relevant content will be
updated continuously.
Our contributions are summarized as follows:
1) Comprehensive Review: We present the most comprehensive review of graph-
based solutions for traffic forecasting problems in the past three years
(2018-2020).
2) Resource Collection: We provide the latest comprehensive list of open
datasets and code resources for replication and comparison of GNNs in future
work.
3) Future Directions: We discuss several challenges and potential future
directions for researchers in this field, when using GNNs for traffic
forecasting problems.
The remainder of this paper is organized as follows. In Section 2, we compare
our work with other relevant research surveys. In Section 3, we categorize the
traffic forecasting problems that are involved with GNN-based models. In
Section 4, we summarize the graphs and GNNs used in the reviewed studies. In
Section 5, we outline the open resources. Finally, in Section 6, we point out
challenges and future directions.
## 2 Related Research Surveys
In this section, we introduce the most recent relevant research surveys (most
of which were published in 2020). The differences between our study and these
existing surveys are pointed out when appropriate. We start with the surveys
addressing wider ITS topics, followed by those focusing on traffic prediction
problems and GNN application in particular.
Besides traffic forecasting, machine learning and deep learning methods have
been widely used in ITSs as discussed in Haghighat et al. [2020], Fan et al.
[2020], Luca et al. [2020]. In Haghighat et al. [2020], GNNs are only
mentioned in the task of traffic characteristics prediction. Among the major
milestones of deep-learning driven traffic prediction (summarized in Figure 2
of Fan et al. [2020]), the state-of-the-art models after 2019 are all based on
GNNs, indicating that GNNs are indeed the frontier of deep learning-based
traffic prediction research.
Roughly speaking, five different types of traffic prediction methods are
identified and categorized in previous surveys [Xie et al., 2020a, George &
Santra, 2020], namely, statistics-based methods, traditional machine learning
methods, deep learning-based methods, reinforcement learning-based methods,
and transfer learning-based methods. Some comparisons between different
categories have been considered, e.g., statistics-based models have better
model interpretability, whereas ML-based models are more flexible as discussed
in Boukerche et al. [2020]. Machine learning models for traffic prediction are
further categorized in Boukerche & Wang [2020a], which include the regression
model, example-based models (e.g., k-nearest neighbors), kernel-based models
(e.g. support vector machine and radial basis function), neural network
models, and hybrid models. Deep learning models are further categorized into
five different generations in Lee et al. [2021], in which GCNs are classified
as the fourth generation and other advanced techniques that have been
considered but are not yet widely applied are merged into the fifth
generation. These include transfer learning, meta learning, reinforcement
learning, and the attention mechanism. Before these advanced techniques become
mature in traffic prediction tasks, GNNs remain the state-of-the-art
technique.
Some of the relevant surveys only focus on the progress of deep learning-based
methods [Tedjopurnomo et al., 2020], while the others prefer to compare them
with the statistics-based and machine learning methods [Yin et al., 2021,
Manibardo et al., 2020]. In Tedjopurnomo et al. [2020], 37 deep neural
networks for traffic prediction are reviewed, categorized, and discussed. The
authors conclude that encoder-decoder long short-term memory (LSTM) combined
with graph-based methods is the state-of-the-art prediction technique. A
detailed explanation of various data types and popular deep neural network
architectures is also provided, along with challenges and future directions
for traffic prediction. Conversely, it is found that deep learning is not
always the best modeling technique in practical applications, where linear
models and machine learning techniques with less computational complexity can
sometimes be preferable [Manibardo et al., 2020].
Additional research surveys consider aspects other than model selection. In
Pavlyuk [2019], spatiotemporal feature selection and extraction pre-processing
methods, which may also be embedded as internal model processes, are reviewed.
A meta-analysis of prediction accuracy when applying deep learning methods to
transport studies is given in Varghese et al. [2020]. In this study, apart
from the models themselves, additional factors including sample size and
prediction time horizon are shown to have a significant influence on
prediction accuracy.
To the authors’ best knowledge, there are no existing surveys focusing on the
application of GNNs for traffic forecasting. Graph-based deep learning
architectures are reviewed in Ye et al. [2020a], for a series of traffic
applications, namely, traffic congestion, travel demand, transportation
safety, traffic surveillance, and autonomous driving. Specific and practical
guidance for constructing graphs in these applications is provided. The
advantages and disadvantages of both GNNs and other deep learning models (e.g.
RNN, TCN, Seq2Seq, and GAN) are examined. While the focus is not limited to
traffic prediction problems, the graph construction process is universal in
the traffic domain when GNNs are involved.
## 3 Problems
In this section, we discuss and categorize the different types of traffic
forecasting problems considered in the literature. Problems are first
categorized by the traffic state to be predicted. Traffic flow, speed, and
demand problems are considered separately while the remaining types are
grouped together under “other problems”. Then, the problem-types are further
broken down into levels according to where the traffic states are defined.
These include road-level, region-level, and station-level categories.
Different problem types have different modelling requirements for representing
spatial dependency. For the road-level problems, the traffic data are usually
collected from sensors, which are associated with specific road segments, or
GPS trajectory data, which are also mapped into the road network with map
matching techniques. In this case, the road network topology can be taken as
the graph, which may contain hundreds or thousands of road segments. The
spatial dependency may be described by the road network connectivity or by
spatial proximity. For station-level problems, the metro or bus station
topology can be taken as the graph, which may contain tens or hundreds of
stations. The spatial dependency may be described by the metro lines or bus
routes. For region-level problems, regular or irregular regions are used as
the nodes of a graph. The spatial dependency between different regions can be
extracted from land-use purposes, e.g., from points-of-interest data.
A full list of the traffic forecasting problems considered in the surveyed
studies is shown in Table LABEL:tab:problems. Instead of giving the whole
picture of traffic forecasting research, only those problems with GNN-based
solutions in the literature are listed in Table I.
Table 1: Traffic forecasting problems in the surveyed studies.
Problem | Relevant Studies
---|---
Road Traffic Flow | Zhang et al. [2018b], Wei et al. [2019], Xu et al. [2020a], Guo et al. [2020a], Zheng et al. [2020b], Pan et al. [2020, 2019], Lu et al. [2019a], Mallick et al. [2020], Zhang et al. [2020j, l], Bai et al. [2020], Fang et al. [2019], Huang et al. [2020a], Wang et al. [2018b], Zhang et al. [2020e], Song et al. [2020a], Xu et al. [2020b], Wang et al. [2020g], Chen et al. [2020e], Lv et al. [2020], Kong et al. [2020], Fukuda et al. [2020], Zhang & Guo [2020], Boukerche & Wang [2020b], Tang et al. [2020b], Kang et al. [2019], Guo et al. [2019c], Li et al. [2019], Xu et al. [2019], Zhang et al. [2019d], Wu et al. [2018a], Sun et al. [2020], Wei & Sheng [2020], Li et al. [2020f], Cao et al. [2020], Yu et al. [2018, 2019b], Li et al. [2020b], Yin et al. [2020], Chen et al. [2020g], Zhang et al. [2020a], Wang et al. [2020a], Xin et al. [2020], Qu et al. [2020], Wang et al. [2020b], Xie et al. [2020d], Huang et al. [2020b], Guo et al. [2020b], Zhang et al. [2020h], Fang et al. [2020a], Li & Zhu [2021], Tian et al. [2020], Xu et al. [2020c], Chen et al. [2020c]
Road OD Flow | Xiong et al. [2020], Ramadan et al. [2020]
Intersection Traffic Throughput | Sánchez et al. [2020]
Regional Taxi Flow | Zhou et al. [2020d], Sun et al. [2020], Chen et al. [2020d], Wang et al. [2018a], Peng et al. [2020], Zhou et al. [2019], Wang et al. [2020e], Qiu et al. [2020]
Regional Bike Flow | Zhou et al. [2020d], Sun et al. [2020], Wang et al. [2018a, 2020e]
Regional Ride-hailing Flow | Zhou et al. [2019]
Regional Dockless E-Scooter Flow | He & Shin [2020a]
Regional OD Taxi Flow | Wang et al. [2020e], Yeghikyan et al. [2020]
Regional OD Bike Flow | Wang et al. [2020e]
Regional OD Ride-hailing Flow | Shi et al. [2020], Wang et al. [2020h, 2019]
Station-level Subway Passenger Flow | Fang et al. [2019, 2020a], Peng et al. [2020], Ren & Xie [2019], Li et al. [2018a], Zhao et al. [2020a], Han et al. [2019], Zhang et al. [2020b, c], Li et al. [2020e], Liu et al. [2020b], Ye et al. [2020b], Ou et al. [2020]
Station-level Bus Passenger Flow | Fang et al. [2019, 2020a], Peng et al. [2020]
Station-level Shared Vehicle Flow | Zhu et al. [2019]
Station-level Bike Flow | He & Shin [2020b], Chai et al. [2018]
Station-level Railway Passenger Flow | He et al. [2020]
Road Traffic Speed | Li et al. [2018b], Wu et al. [2019], Zhang et al. [2018b], Wei et al. [2019], Xu et al. [2020a], Guo et al. [2020a], Zheng et al. [2020b], Pan et al. [2020, 2019], Lu et al. [2019a], Mallick et al. [2020], Zhang et al. [2020j], Lv et al. [2020], Li et al. [2020f], Yin et al. [2020], Guo et al. [2020b], Li & Zhu [2021], Chen et al. [2020d], Zhao et al. [2020a], Bai et al. [2021], Tang et al. [2020a], James [2020], Shin & Yoon [2020], Liu et al. [2020a], Zhang et al. [2018a, 2019f], Yu & Gu [2019], Xie et al. [2019], Zhang et al. [2019a], Guo et al. [2019a], Diao et al. [2019], Cirstea et al. [2019], Lu et al. [2019b], Zhang et al. [2019c], James [2019], Ge et al. [2019a, b], Zhang et al. [2019b], Lee & Rhee [2019a], Shleifer et al. [2019], Yu et al. [2020a], Ge et al. [2020], Lu et al. [2020b], Yang et al. [2020], Zhao et al. [2019], Cui et al. [2019], Chen et al. [2019], Zhang et al. [2019e], Yu et al. [2019a], Lee & Rhee [2019b], Bogaerts et al. [2020], Wang et al. [2020f], Cui et al. [2020b, a], Guo et al. [2020c], Zhou et al. [2020a], Cai et al. [2020], Zhou et al. [2020b], Wu et al. [2020c], Chen et al. [2020f], Opolka et al. [2019], Mallick et al. [2021], Oreshkin et al. [2021], Jia et al. [2020], Sun et al. [2021], Guo & Yuan [2020], Xie et al. [2020b], Zhang et al. [2020i], Zhu et al. [2020c], Feng et al. [2020], Zhu et al. [2020a], Fu et al. [2020], Zhang et al. [2020d], Xie et al. [2020c], Park et al. [2020], Agafonov [2020], Chen et al. [2020a], Lu et al. [2020a], Jepsen et al. [2019, 2020], Bing et al. [2020], Lewenfus et al. [2020], Zhu et al. [2020b], Liao et al. [2018], Maas & Bloem [2020], Li et al. [2020d], Song et al. [2020b], Zhao et al. [2020b], Guopeng et al. [2020], Kim et al. [2020]
Road Travel Time | Guo et al. [2020a], Hasanzadeh et al. [2019], Fang et al. [2020b], Shao et al. [2020], Shen et al. [2020]
Traffic Congestion | Dai et al. [2020], Mohanty & Pozdnukhov [2018], Mohanty et al. [2020], Qin et al. [2020a], Han et al. [2020]
Time of Arrival | Hong et al. [2020]
Regional OD Taxi Speed | Hu et al. [2018]
Ride-hailing Demand | Pian & Wu [2020], Jin et al. [2020b], Li & Axhausen [2020], Jin et al. [2020a], Geng et al. [2019b], Lee et al. [2019], Bai et al. [2019b], Geng et al. [2019a], Bai et al. [2019a], Ke et al. [2021], Li et al. [2020c]
Taxi Demand | Lee et al. [2019], Bai et al. [2019b, a], Ke et al. [2019], Hu et al. [2020], Zheng et al. [2020a], Xu & Li [2019], Davis et al. [2020], Chen et al. [2020h], Du et al. [2020], Li & Moura [2020], Wu et al. [2020a], Ye et al. [2021]
Shared Vehicle Demand | Luo et al. [2020]
Bike Demand | Lee et al. [2019], Bai et al. [2019b, a], Du et al. [2020], Ye et al. [2021], Chen et al. [2020b], Wang et al. [2020d], Qin et al. [2020b], Xiao et al. [2020], Yoshida et al. [2019], Guo et al. [2019b], Kim et al. [2019], Lin et al. [2018]
Traffic Accident | Zhou et al. [2020e], Yu et al. [2020b], Zhang et al. [2020k], Zhou et al. [2020f]
Traffic Anomaly | Liu et al. [2020c]
Parking Availability | Zhang et al. [2020g], Yang et al. [2019], Zhang et al. [2020f]
Transportation Resilience | Wang et al. [2020c]
Urban Vehicle Emission | Xu et al. [2020d]
Railway Delay | Heglund et al. [2020]
Lane Occupancy | Wright et al. [2019]
Generally speaking, traffic forecasting problems are challenging not only
because of the complex temporal dependency, but also because of the complex
spatial dependency. While many solutions have been proposed for modeling the
temporal dependency, e.g., recurrent neural networks and temporal
convolutional networks, the problem of capturing and modeling the spatial
dependency has not been fully solved. Spatial dependency refers to the complex
and nonlinear relationship between the traffic state at one particular
location and the states at other locations; the location could be a road
intersection, a subway station, or a city region. The spatial dependency is
not necessarily local: the traffic state may be affected not only by nearby
areas but also by areas that are far away spatially yet connected by a fast
transportation link. Graphs are necessary to capture this kind of spatial
information, as we discuss in the next section.
Before the use of graph theory and GNNs, spatial information was usually
extracted by multivariate time series models or CNNs. In a multivariate time
series model, e.g., vector autoregression, the traffic states collected at
different locations or regions are combined into a multivariate time series.
However, such models can only capture linear relationships among the states,
which is not enough to model the complex and nonlinear spatial dependency.
CNNs take a step further by modeling local spatial information: the whole
spatial range is divided into regular grids, as in a two-dimensional image,
and the convolution operation is performed over neighboring grids. However,
the CNN-based approach is limited to Euclidean-structured data and cannot
model the topological structure of a subway network or a road network.
Graph neural networks bring new opportunities for solving traffic forecasting
problems, because of their strong learning ability to capture the spatial
information hidden in the non-Euclidean structure data, which are frequently
seen in the traffic domain. Based on graph theories, both nodes and edges have
their own attributes, which can be used further in the convolution or
aggregation operations. These attributes describe different traffic states,
e.g., volume, speed, lane numbers, road level, etc. For the dynamic spatial
dependency, dynamic graphs can be learned from the data automatically. For the
case of hierarchical traffic problems, the concepts of super-graphs and sub-
graphs can be defined and further used.
### 3.1 Traffic Flow
We consider three levels of traffic flow problems in this survey, namely,
road-level flow, region-level flow, and station-level flow.
Road-level flow problems are concerned with traffic volumes on a road and
include road traffic flow, road origin-destination (OD) Flow, and intersection
traffic throughput. In road traffic flow problems, the prediction target is
the traffic volume that passes a road sensor or a specific location along the
road within a certain time period (e.g. five minutes). In the road OD flow
problem, the target is the volume between one location (the origin) and
another (the destination) at a single point in time. The intersection traffic
throughput problem considers the volume of traffic moving through an
intersection.
Region-level flow problems consider traffic volume in a region. A city may be
divided into regular regions (where the partitioning is grid-based) or
irregular regions (e.g. road-based or zip-code-based partitions). These
problems are classified by transport mode into regional taxi flow, regional
bike flow, regional ride-hailing flow, regional dockless e-scooter flow,
regional OD taxi flow, regional OD bike flow, and regional OD ride-hailing
flow problems.
Station-level flow problems relate to the traffic volume measured at a
physical station, for example, a subway or bus station. These problems are
divided by station type into station-level subway passenger flow, station-
level bus passenger flow, station-level shared vehicle flow, station-level
bike flow, and station-level railway passenger flow problems.
Road-level traffic flow problems are further divided into cases of
unidirectional and bidirectional traffic flow, whereas region-level and
station-level traffic flow problems are further divided into the cases of
inflow and outflow, based on different problem formulations.
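As a concrete illustration of the inflow/outflow formulation, the counts can be derived from raw trip records. The sketch below is a minimal example; the `(origin, destination, step)` record format and the function name are assumptions for illustration, not a convention fixed by the surveyed datasets:

```python
def region_in_out_flow(trips, num_regions, num_steps):
    """Count region-level inflow and outflow per time step.

    trips: iterable of (origin, destination, step) tuples, where origin and
    destination are region indices and step is a discrete time-step index.
    Returns two num_steps x num_regions count matrices.
    """
    inflow = [[0] * num_regions for _ in range(num_steps)]
    outflow = [[0] * num_regions for _ in range(num_steps)]
    for o, d, t in trips:
        outflow[t][o] += 1  # the trip leaves its origin region
        inflow[t][d] += 1   # the trip arrives at its destination region
    return inflow, outflow
```

The same counting scheme applies to taxi, bike, or ride-hailing records; only the trip source changes.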
### 3.2 Traffic Speed
We consider two levels of traffic speed problems in this survey, namely, road-
level and region-level problems. We also include travel time and congestion
predictions in this category because they are closely correlated to traffic
speed. For example, in several studies, traffic congestion is judged by a
threshold-based speed inference. The specific road-level speed problem
categories considered are road traffic speed, road travel time, traffic
congestion, and time of arrival problems; while the region-level speed problem
considered is regional OD taxi speed.
### 3.3 Traffic Demand
Traffic demand refers to the potential demand for travel, which may or may not
be fulfilled completely. For example, on an online ride-hailing platform, the
ride requests sent by passengers represent the demand, whereas only a subset
of these requests may be served depending on the supply of drivers and
vehicles, especially during rush hours. Accurate prediction of travel demand
is a key element of vehicle scheduling systems (e.g. online ride-hailing or
taxi dispatch platforms). However, in some cases, it is difficult to collect
the potential travel demand from passengers and a compromise method using
transaction records as an indication of the traffic demand is used. In such
cases the real demand may be underestimated. Based on transport mode, the
traffic demand problems considered include ride-hailing demand, taxi demand,
shared vehicle demand, and bike demand.
### 3.4 Other Problems
In addition to the above three categories of traffic forecasting problems,
GNNs are also being applied to the following problems.
Traffic accident and traffic anomaly: the target is to predict the number of
traffic accidents reported to the police system. A traffic accident is usually
a collision in road traffic involving one or more vehicles, which may cause
significant loss of life and property. A traffic anomaly has a broader
definition, covering any deviation from the normal traffic state, e.g., a
traffic jam caused by an accident or a public procession.
Parking availability: the target is to predict the availability of vacant
parking space for cars in the streets or in a car parking lot.
Urban vehicle emission: while not directly related to traffic states, the
prediction of urban vehicle emission is considered in Xu et al. [2020d]. Urban
vehicle emission refers to the emissions produced by motor vehicles, e.g.,
those using internal combustion engines. It is a major source of air
pollutants, and its amount is affected by traffic states, e.g., excess
emissions are produced under traffic congestion.
Railway delay: the delay time of specific routes in the railway system is
considered in Heglund et al. [2020].
Lane occupancy: with simulated traffic data, lane occupancy has been measured
and predicted [Wright et al., 2019].
## 4 Graphs and Graph Neural Networks
In this section, we summarize the types of graphs and GNNs used in the
surveyed studies, focusing on GNNs that are frequently used for traffic
forecasting problems. The contributions of this section include an organized
classification of the different traffic graphs based on domain knowledge, and
a summary of common ways of constructing adjacency matrices, a modeling step
not encountered in other neural networks that is essential for anyone who
would like to use graph neural networks.
The different GNN structures already used for traffic forecasting problems are
briefly introduced in this section too. For a wider and deeper discussion of
GNNs, refer to Wu et al. [2020b], Zhou et al. [2020c], Zhang et al. [2020m].
### 4.1 Traffic Graphs
#### 4.1.1 Graph Construction
A graph is the basic structure used in GNNs. It is defined as $G=(V,E,A)$,
where $V$ is the set of vertices or nodes, $E$ is the set of edges between the
nodes, and $A$ is the adjacency matrix. Element $a_{ij}$ of $A$ represents the
“edge weight” between nodes $i$ and $j$. Both the nodes and the edges may
represent different attributes in different GNN problems. For traffic
forecasting, the traffic state prediction target is usually one of the node
features. We divide the time axis into discrete time steps, e.g. five minutes
or one hour, depending on the specific scenario. In single step forecasting,
the traffic state in the next time step is predicted, whereas in multiple step
forecasting the traffic state several time steps later is the prediction
target. The traffic state at time step $i$ is denoted by $\chi_{i}$, and the
forecasting problem is formulated as: find the function $f$ which generates
$y=f(\mathbf{\chi};G)$, where $y$ is the traffic state to be predicted,
$\mathbf{\chi}=\{\chi_{1},\chi_{2},...,\chi_{N}\}$ is the historical traffic
state defined on graph $G$, and $N$ is the number of time steps in the
historical window size. As mentioned in Section 1, traffic states can be
highly affected by external factors, e.g. weather and holidays. The
forecasting problem formulation, extended to incorporate these external
factors, takes the form $y=f(\mathbf{\chi},\varepsilon;G)$, where
$\varepsilon$ represents the external factors.
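The windowing of the historical traffic state $\mathbf{\chi}$ described above can be sketched as follows; `make_windows` is a hypothetical helper, and the `(T, num_nodes)` array layout is an assumption for illustration:

```python
import numpy as np

def make_windows(states, n_hist, n_pred=1):
    """Slice a (T, num_nodes) traffic-state series into training pairs.

    Each sample pairs a history window chi = states[t : t+n_hist] with the
    target y = states[t+n_hist : t+n_hist+n_pred], matching y = f(chi; G).
    """
    X, Y = [], []
    for t in range(len(states) - n_hist - n_pred + 1):
        X.append(states[t : t + n_hist])
        Y.append(states[t + n_hist : t + n_hist + n_pred])
    return np.stack(X), np.stack(Y)
```

Setting `n_pred=1` corresponds to single step forecasting, and `n_pred>1` to multiple step forecasting; external factors $\varepsilon$ would be concatenated as extra inputs.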
Various graph structures are used to model traffic forecasting problems
depending on both the forecasting problem-type and the traffic datasets
available. These graphs can be pre-defined static graphs, or dynamic graphs
continuously learned from the data. The static graphs can be divided into two
types, namely, natural graphs and similarity graphs. Natural graphs are based
on a real-world transportation system, e.g. the road network or subway system;
whereas similarity graphs are based solely on the similarity between different
node attributes where nodes may be virtual stations or regions.
We categorize the existing traffic graphs into the same three levels used in
Section 3, namely, road-level, region-level and station-level graphs, as shown
in the examples in Figures 1(a), 1(b), and 1(c), respectively.
Figure 1: Examples of the different levels of graphs. (a) A road-level graph:
the road network in the Performance Measurement System (PeMS), where each
sensor is a node; source: http://pems.dot.ca.gov/. (b) A region-level graph:
the zip codes of Manhattan, where each zip code zone is a node; source:
https://maps-manhattan.com/manhattan-zip-code-map. (c) A station-level graph:
the Beijing subway system, where each subway station is a node; source:
https://www.travelchinaguide.com/cityguides/beijing/transportation/subway.htm.
Road-level graphs. These include sensor graphs, road segment graphs, road
intersection graphs, and road lane graphs. Sensor graphs are based on traffic
sensor data (e.g. the PeMS dataset) where each sensor is a node, and the edges
are road connections. The other three graphs are based on road networks with
the nodes formed by road segments, road intersections, and road lanes,
respectively.
Region-level graphs. These include irregular region graphs, regular region
graphs, and OD graphs. In both irregular and regular region graphs the nodes
are regions of the city. Regular region graphs, which have grid-based
partitioning, are listed separately because of their natural connection to
previous widely used grid-based forecasting using CNNs, in which the grids may
be seen as image pixels. Irregular region graphs include all other
partitioning approaches, e.g. road based, or zip code based Ke et al. [2019].
In the OD graph, the nodes are origin region - destination region pairs. In
these graphs, the edges are usually defined with a spatial neighborhood or
other similarities.
Station-level graphs. These include subway station graphs, bus station graphs,
bike station graphs, railway station graphs, car-sharing station graphs,
parking lot graphs, and parking block graphs. Usually, there are natural links
between stations that are used to define the edges, e.g. subway or railway
lines, or the road network.
A full list of the traffic graphs used in the surveyed studies is shown in
Table 2. Sensor graphs and road segment graphs are most frequently used
because they are compatible with the available public datasets as discussed in
Section 5. It is noted that in some studies multiple graphs are used as
simultaneous inputs and then fused to improve the forecasting performance [Lv
et al., 2020, Zhu et al., 2019].
Table 2: Traffic graphs in the surveyed studies. Graph | Relevant Studies
---|---
Sensor Graph | Li et al. [2018b], Wu et al. [2019], Xu et al. [2020a], Zheng et al. [2020b], Pan et al. [2020, 2019], Lu et al. [2019a], Mallick et al. [2020], Zhang et al. [2020j], Bai et al. [2020], Huang et al. [2020a], Zhang et al. [2020e], Song et al. [2020a], Xu et al. [2020b], Wang et al. [2020g], Chen et al. [2020e], Lv et al. [2020], Kong et al. [2020], Fukuda et al. [2020], Zhang & Guo [2020], Boukerche & Wang [2020b], Tang et al. [2020b], Kang et al. [2019], Guo et al. [2019c], Li et al. [2019], Sun et al. [2020], Wei & Sheng [2020], Li et al. [2020f], Cao et al. [2020], Yu et al. [2018, 2019b], Li et al. [2020b], Yin et al. [2020], Chen et al. [2020g], Zhang et al. [2020a], Wang et al. [2020a], Xin et al. [2020], Xie et al. [2020d], Huang et al. [2020b], Li & Zhu [2021], Tian et al. [2020], Xu et al. [2020c], Chen et al. [2020c], Xiong et al. [2020], Chen et al. [2020d], Tang et al. [2020a], Zhang et al. [2018a, 2019a], Cirstea et al. [2019], Ge et al. [2019a, b], Shleifer et al. [2019], Ge et al. [2020], Yang et al. [2020], Zhao et al. [2019], Cui et al. [2019], Chen et al. [2019], Yu et al. [2019a], Wang et al. [2020f], Cui et al. [2020b, a], Zhou et al. [2020a], Cai et al. [2020], Zhou et al. [2020b], Wu et al. [2020c], Chen et al. [2020f], Opolka et al. [2019], Mallick et al. [2021], Oreshkin et al. [2021], Jia et al. [2020], Sun et al. [2021], Guo & Yuan [2020], Zhang et al. [2020i], Feng et al. [2020], Xie et al. [2020c], Park et al. [2020], Chen et al. [2020a], Lewenfus et al. [2020], Maas & Bloem [2020], Li et al. [2020d], Song et al. [2020b], Zhao et al. [2020b], Wang et al. [2020c]
Road Segment Graph | Zhang et al. [2018b], Guo et al. [2020a], Pan et al. [2019], Zhang et al. [2020j, l], Wang et al. [2018b], Zhang et al. [2020e], Lv et al. [2020], Zhang et al. [2019d, 2020a], Qu et al. [2020], Guo et al. [2020b], Ramadan et al. [2020], Zhao et al. [2020a], Bai et al. [2021], Shin & Yoon [2020], Liu et al. [2020a], Yu & Gu [2019], Xie et al. [2019], Guo et al. [2019a], Diao et al. [2019], Lu et al. [2019b], Zhang et al. [2019c], James [2019], Zhang et al. [2019b], Lee & Rhee [2019a], Yu et al. [2020a], Lu et al. [2020b], Zhao et al. [2019], Cui et al. [2019], Zhang et al. [2019e], Lee & Rhee [2019b], Cui et al. [2020b, a], Guo et al. [2020c], Xie et al. [2020b], Zhu et al. [2020c, a], Fu et al. [2020], Zhang et al. [2020d], Agafonov [2020], Lu et al. [2020a], Jepsen et al. [2019, 2020], Zhu et al. [2020b], Liao et al. [2018], Guopeng et al. [2020], Kim et al. [2020], Hasanzadeh et al. [2019], Fang et al. [2020b], Dai et al. [2020], Han et al. [2020], Hong et al. [2020], Chen et al. [2020h], Yu et al. [2020b]
Road Intersection Graph | Zhang et al. [2018b], Wei et al. [2019], Fang et al. [2019], Zhang et al. [2020e], Xu et al. [2019], Wu et al. [2018a], Sánchez et al. [2020], James [2020], Zhang et al. [2019f], Lu et al. [2019b], Zhang et al. [2019c], Bogaerts et al. [2020], Shao et al. [2020], Qin et al. [2020a]
Road Lane Graph | Wright et al. [2019]
Irregular Region Graph | Zhou et al. [2020d], Sun et al. [2020], Chen et al. [2020d], Bing et al. [2020], Mohanty & Pozdnukhov [2018], Mohanty et al. [2020], Hu et al. [2018], Li & Axhausen [2020], Bai et al. [2019b, a], Ke et al. [2021], Hu et al. [2020], Zheng et al. [2020a], Davis et al. [2020], Du et al. [2020], Li & Moura [2020], Ye et al. [2021], Zhang et al. [2020k], Liu et al. [2020c]
Regular Region Graph | Pan et al. [2020], Wang et al. [2020b], Zhang et al. [2020h], Wang et al. [2018a], Zhou et al. [2019], Wang et al. [2020e], Qiu et al. [2020], He & Shin [2020a], Yeghikyan et al. [2020], Shi et al. [2020], Wang et al. [2019], Shen et al. [2020], Pian & Wu [2020], Jin et al. [2020b, a], Geng et al. [2019b], Lee et al. [2019], Geng et al. [2019a], Li et al. [2020c], Xu & Li [2019], Davis et al. [2020], Wu et al. [2020a], Zhou et al. [2020e, f], Xu et al. [2020d]
OD Graph | Wang et al. [2020h], Ke et al. [2019]
Subway Station Graph | Fang et al. [2019, 2020a], Ren & Xie [2019], Li et al. [2018a], Zhao et al. [2020a], Han et al. [2019], Zhang et al. [2020b, c], Li et al. [2020e], Liu et al. [2020b], Ye et al. [2020b], Ou et al. [2020]
Bus Station Graph | Fang et al. [2019, 2020a]
Bike Station Graph | He & Shin [2020b], Chai et al. [2018], Du et al. [2020], Chen et al. [2020b], Wang et al. [2020d], Qin et al. [2020b], Xiao et al. [2020], Yoshida et al. [2019], Guo et al. [2019b], Kim et al. [2019], Lin et al. [2018]
Railway Station Graph | He et al. [2020], Heglund et al. [2020]
Car-sharing Station Graph | Zhu et al. [2019], Luo et al. [2020]
Parking Lot Graph | Zhang et al. [2020g, f]
Parking Block Graph | Yang et al. [2019]
#### 4.1.2 Adjacency Matrix Construction
Adjacency matrices are seen as the key to capturing spatial dependency in
traffic forecasting [Ye et al., 2020a]. While nodes may be fixed by physical
constraints, the user typically has control over the design of the adjacency
matrix, which can even be dynamically trained from continuously evolving data.
We extend the categories of adjacency matrices used in previous studies [Ye et
al., 2020a] and divide them into four types, namely, road-based, distance-
based, similarity-based, and dynamic matrices.
Road-based Matrix. This type of adjacency matrix relates to the road network
and includes connection matrices, transportation connectivity matrices, and
direction matrices. A connection matrix is a common way of representing the
connectivity between nodes. It has a binary format, with an element value of 1
if connected and 0 otherwise. The transportation connectivity matrix is used
where two regions are geographically distant but conveniently reachable by
motorway, highway, or subway [Ye et al., 2020a]. It also includes cases where
the connection is measured by travel time between different nodes, e.g. if a
vehicle can travel between two intersections in less than 5 minutes then there
is an edge between the two intersections [Wu et al., 2018a]. The less commonly
used direction matrix takes the angle between road links into consideration.
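A minimal sketch of building such a binary connection matrix from an edge list (the function name and arguments are illustrative):

```python
import numpy as np

def connection_matrix(edges, num_nodes, directed=False):
    """Binary adjacency matrix: a_ij = 1 if an edge connects nodes i and j,
    0 otherwise, following the connection-matrix convention above."""
    A = np.zeros((num_nodes, num_nodes))
    for i, j in edges:
        A[i, j] = 1.0
        if not directed:
            A[j, i] = 1.0  # road connectivity is often treated as symmetric
    return A
```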
Distance-based Matrix. This widely used matrix-type represents the spatial
closeness between nodes. It contains two sub-types, namely, neighbor and
distance matrices. In neighbor matrices, the element values are determined by
whether the two regions share a common boundary (if connected the value is set
to 1, generally, or 1/4 for grids, and 0 otherwise). In distance matrices, the
element values are a function of geometrical distance between nodes. This
distance may be calculated in various ways, e.g. the driving distance between
two sensors, the shortest path length along the road [Kang et al., 2019, Lee &
Rhee, 2019a], or the proximity between locations calculated by the random walk
with restart (RWR) algorithm [Zhang et al., 2019e].
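A common recipe for the distance matrix, widely used with sensor graphs, applies a thresholded Gaussian kernel to the pairwise distances so that weights decay smoothly and distant pairs stay unconnected. A minimal sketch (the bandwidth `sigma` and threshold `eps` are dataset-dependent choices):

```python
import numpy as np

def gaussian_adjacency(dist, sigma, eps=0.1):
    """Thresholded Gaussian kernel adjacency: w_ij = exp(-d_ij^2 / sigma^2),
    set to zero whenever the weight falls below eps."""
    W = np.exp(-np.square(dist) / sigma ** 2)
    W[W < eps] = 0.0  # sparsify: drop weak long-distance links
    return W
```

In practice `dist` may hold driving distances or shortest path lengths along the road, as noted above.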
Similarity-based Matrix. This type of matrix is divided into two sub-types,
namely, traffic pattern and functional similarity matrices. Traffic pattern
similarity matrices represent the correlations between traffic states, e.g.
similarities of flow patterns, mutual dependencies between different
locations, and traffic demand correlation in different regions. Functional
similarity matrices represent, for example, the distribution of different
types of point-of-interests in different regions.
Dynamic Matrix. This type of matrix is learned from the data rather than
pre-defined. Many studies have demonstrated the advantages of using dynamic
matrices, instead of a pre-defined static adjacency matrix, for various
traffic forecasting problems.
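One popular way to obtain such a learned adjacency matrix is to derive it from trainable node embeddings. The forward-pass sketch below (the embedding-product-plus-softmax form is one choice among several in the literature; during training the embedding tables would be updated by backpropagation):

```python
import numpy as np

def adaptive_adjacency(E1, E2):
    """Dynamic adjacency from two (num_nodes, d) node-embedding tables:
    A = row_softmax(relu(E1 @ E2.T)), so each row is a learned, normalized
    neighborhood distribution rather than a pre-defined one."""
    logits = np.maximum(E1 @ E2.T, 0.0)          # ReLU keeps logits nonnegative
    logits -= logits.max(axis=1, keepdims=True)  # numerically stable softmax
    expl = np.exp(logits)
    return expl / expl.sum(axis=1, keepdims=True)
```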
A full list of the adjacency matrices applied in the surveyed studies is shown
in Table 3. Dynamic matrices are listed at the bottom of the table, with no
further subdivisions. The connection and distance matrices are the most
frequently used types, because of their simple definition and representation
of spatial dependency.
Table 3: Adjacency matrices in the surveyed studies. Adjacency Matrix | Relevant Studies
---|---
Connection Matrix | Zhang et al. [2018b], Wei et al. [2019], Xu et al. [2020a], Guo et al. [2020a], Zhang et al. [2020l], Wang et al. [2018b], Song et al. [2020a], Zhang & Guo [2020], Xu et al. [2019], Cao et al. [2020], Yu et al. [2019b], Chen et al. [2020g], Zhang et al. [2020a], Qu et al. [2020], Wang et al. [2020b], Huang et al. [2020b], Xiong et al. [2020], Sánchez et al. [2020], Wang et al. [2020h], Zhang et al. [2020c], Li et al. [2020e], Liu et al. [2020b], Ou et al. [2020], He et al. [2020], Bai et al. [2021], Liu et al. [2020a], Zhang et al. [2019f], Yu & Gu [2019], Xie et al. [2019], Guo et al. [2019a], Lu et al. [2019b], Zhang et al. [2019c], James [2019], Zhang et al. [2019b], Zhao et al. [2019], Cui et al. [2019, 2020b, 2020a], Wu et al. [2020c], Opolka et al. [2019], Sun et al. [2021], Guo & Yuan [2020], Xie et al. [2020b], Zhu et al. [2020c, a], Zhang et al. [2020d], Agafonov [2020], Chen et al. [2020a], Lu et al. [2020a], Bing et al. [2020], Zhu et al. [2020b], Fang et al. [2020b], Shao et al. [2020], Shen et al. [2020], Qin et al. [2020a], Hong et al. [2020], Xu & Li [2019], Davis et al. [2020], Chen et al. [2020h], Wang et al. [2020d], Zhou et al. [2020e], Yu et al. [2020b], Liu et al. [2020c], Zhang et al. [2020g, f], Heglund et al. [2020], Yin et al. [2020], Zhang et al. [2020b]
Transportation Connectivity Matrix | Pan et al. [2020, 2019], Lv et al. [2020], Wu et al. [2018a], Ye et al. [2020b], Geng et al. [2019b, a], Luo et al. [2020], Wright et al. [2019]
Direction Matrix | Shin & Yoon [2020], Lee & Rhee [2019a, b]
Neighbor Matrix | Wang et al. [2018a], Yeghikyan et al. [2020], Shi et al. [2020], Wang et al. [2019], Hu et al. [2018], Geng et al. [2019b], Lee et al. [2019], Ke et al. [2021, 2019], Hu et al. [2020], Zheng et al. [2020a], Yoshida et al. [2019]
Distance Matrix | Li et al. [2018b], Zheng et al. [2020b], Pan et al. [2020, 2019], Lu et al. [2019a], Mallick et al. [2020], Huang et al. [2020a], Xu et al. [2020b], Wang et al. [2020g], Boukerche & Wang [2020b], Kang et al. [2019], Sun et al. [2020], Wei & Sheng [2020], Yu et al. [2018], Li et al. [2020b], Chen et al. [2020g], Wang et al. [2020a], Xin et al. [2020], Xie et al. [2020d], Li & Zhu [2021], Tian et al. [2020], Xu et al. [2020c], Chen et al. [2020c], Zhou et al. [2020d], Chen et al. [2020d], He & Shin [2020a], Ren & Xie [2019], Zhu et al. [2019], He & Shin [2020b], Chai et al. [2018], Shin & Yoon [2020], Zhang et al. [2018a], Ge et al. [2019a, b], Lee & Rhee [2019a], Shleifer et al. [2019], Ge et al. [2020], Yang et al. [2020], Chen et al. [2019], Zhang et al. [2019e], Lee & Rhee [2019b], Bogaerts et al. [2020], Wang et al. [2020f], Guo et al. [2020c], Zhou et al. [2020a], Cai et al. [2020], Zhou et al. [2020b], Chen et al. [2020f], Mallick et al. [2021], Jia et al. [2020], Zhang et al. [2020i], Feng et al. [2020], Xie et al. [2020c], Li et al. [2020d], Song et al. [2020b], Zhao et al. [2020b], Kim et al. [2020], Mohanty & Pozdnukhov [2018], Mohanty et al. [2020], Jin et al. [2020b], Li & Axhausen [2020], Jin et al. [2020a], Geng et al. [2019a], Ke et al. [2021], Li et al. [2020c], Ke et al. [2019], Luo et al. [2020], Chen et al. [2020b], Xiao et al. [2020], Guo et al. [2019b], Kim et al. [2019], Lin et al. [2018], Yang et al. [2019], Wang et al. [2020c], Xu et al. [2020d]
Traffic Pattern Similarity Matrix | Lv et al. [2020], Li & Zhu [2021], Xu et al. [2020c], Zhou et al. [2020d], Sun et al. [2020], Wang et al. [2020e], He & Shin [2020a], Ren & Xie [2019], Han et al. [2019], Liu et al. [2020b], He & Shin [2020b], Chai et al. [2018], Lu et al. [2020a], Lewenfus et al. [2020], Dai et al. [2020], Han et al. [2020], Jin et al. [2020b], Li & Axhausen [2020], Jin et al. [2020a], Bai et al. [2019b, a], Li et al. [2020c], Ke et al. [2019], Chen et al. [2020b], Wang et al. [2020d], Yoshida et al. [2019], Kim et al. [2019], Lin et al. [2018], Zhou et al. [2020f]
Functional Similarity Matrix | Lv et al. [2020], He & Shin [2020a], Shi et al. [2020], Zhu et al. [2019], Ge et al. [2019a, b, 2020], Jin et al. [2020b], Geng et al. [2019b, a], Ke et al. [2019], Luo et al. [2020], Zhang et al. [2020k]
Dynamic Matrix | Wu et al. [2019], Bai et al. [2020], Fang et al. [2019], Zhang et al. [2020e], Chen et al. [2020e], Kong et al. [2020], Tang et al. [2020b], Guo et al. [2019c], Li et al. [2019], Zhang et al. [2019d], Li et al. [2020f], Guo et al. [2020b], Zhang et al. [2020h], Peng et al. [2020], Zhou et al. [2019], Shi et al. [2020], Li et al. [2018a], Tang et al. [2020a], Zhang et al. [2019a], Diao et al. [2019], Yu et al. [2020a], Fu et al. [2020], Maas & Bloem [2020], Li & Axhausen [2020], Du et al. [2020], Li & Moura [2020], Wu et al. [2020a], Ye et al. [2021]
### 4.2 Graph Neural Networks
Previous neural networks, e.g. fully-connected neural networks (FNNs), CNNs,
and RNNs, could only be applied to Euclidean data (e.g. images, text, and
videos). As a type of neural network that operates directly on a graph
structure, GNNs have the ability to capture complex relationships between
objects and make inferences based on data described by graphs. GNNs have been
proven effective in various node-level, edge-level, and graph-level prediction
tasks. As mentioned in Section 2, GNNs are currently considered the state-of-
the-art techniques for traffic forecasting problems.
GNNs can be divided into four types, namely, recurrent GNNs, convolutional
GNNs, graph autoencoders, and spatiotemporal GNNs [Wu et al., 2020b]. Because
traffic forecasting is a spatiotemporal problem, the GNNs used in this field
all fall into the last category. However, components of the other types of
GNNs have also been applied in the surveyed traffic forecasting studies.
Spatiotemporal GNNs can be further categorized based on the approach used to
capture the temporal dependency in particular. Most of the relevant studies in
the literature can be split into two types, namely, RNN-based and CNN-based
spatiotemporal GNNs [Wu et al., 2020b]. The RNN-based approach is used in Li
et al. [2018b], Guo et al. [2020a], Pan et al. [2020, 2019], Lu et al.
[2019a], Mallick et al. [2020], Zhang et al. [2020j, l], Bai et al. [2020],
Huang et al. [2020a], Wang et al. [2018b, 2020g], Lv et al. [2020], Fukuda et
al. [2020], Zhang & Guo [2020], Boukerche & Wang [2020b], Kang et al. [2019],
Li et al. [2019], Xu et al. [2019], Wu et al. [2018a], Wei & Sheng [2020], Li
et al. [2020f], Yu et al. [2019b], Yin et al. [2020], Xin et al. [2020], Qu et
al. [2020], Huang et al. [2020b], Guo et al. [2020b], Fang et al. [2020a], Li
& Zhu [2021], Chen et al. [2020c], Ramadan et al. [2020], Zhou et al. [2020d],
Wang et al. [2018a], Peng et al. [2020], Zhou et al. [2019], Wang et al.
[2020e], Qiu et al. [2020], Shi et al. [2020], Wang et al. [2020h, 2019],
Zhang et al. [2020b], Liu et al. [2020b], Ye et al. [2020b], Zhu et al.
[2019], Chai et al. [2018], He et al. [2020], Bai et al. [2021], Zhang et al.
[2018a, 2019f], Xie et al. [2019], Zhang et al. [2019a], Guo et al. [2019a],
Cirstea et al. [2019], Lu et al. [2019b], Zhang et al. [2019b], Lu et al.
[2020b], Zhao et al. [2019], Cui et al. [2019], Chen et al. [2019], Zhang et
al. [2019e], Bogaerts et al. [2020], Cui et al. [2020a], Zhou et al. [2020a],
Mallick et al. [2021], Sun et al. [2021], Xie et al. [2020b], Zhu et al.
[2020c, a], Fu et al. [2020], Chen et al. [2020a], Lewenfus et al. [2020], Zhu
et al. [2020b], Liao et al. [2018], Zhao et al. [2020b], Guopeng et al.
[2020], Shao et al. [2020], Shen et al. [2020], Mohanty & Pozdnukhov [2018],
Mohanty et al. [2020], Hu et al. [2018], Pian & Wu [2020], Jin et al. [2020a],
Geng et al. [2019a], Bai et al. [2019a], Li et al. [2020c], Ke et al. [2019],
Hu et al. [2020], Xu & Li [2019], Davis et al. [2020], Chen et al. [2020h], Du
et al. [2020], Wu et al. [2020a], Ye et al. [2021], Luo et al. [2020], Chen et
al. [2020b], Wang et al. [2020d], Xiao et al. [2020], Guo et al. [2019b], Lin
et al. [2018], Zhou et al. [2020f], Liu et al. [2020c], Zhang et al. [2020g],
Yang et al. [2019], Zhang et al. [2020f], Wang et al. [2020c], Wright et al.
[2019]; while the CNN-based approach is used in Wu et al. [2019], Fang et al.
[2019], Zhang et al. [2020e], Xu et al. [2020b], Chen et al. [2020e], Kong et
al. [2020], Tang et al. [2020b], Guo et al. [2019c], Sun et al. [2020], Yu et
al. [2018], Li et al. [2020b], Wang et al. [2020a], Tian et al. [2020], Chen
et al. [2020d], Zhao et al. [2020a], Zhang et al. [2020c], Ou et al. [2020],
Tang et al. [2020a], Diao et al. [2019], Lee & Rhee [2019a, b], Wang et al.
[2020f], Wu et al. [2020c], Guo & Yuan [2020], Zhang et al. [2020i], Feng et
al. [2020], Zhang et al. [2020d], Xie et al. [2020c], Lu et al. [2020a], Maas
& Bloem [2020], Li et al. [2020d], Song et al. [2020b], Dai et al. [2020],
Hong et al. [2020], Zheng et al. [2020a], Zhou et al. [2020e], Yu et al.
[2020b], Xu et al. [2020d], Heglund et al. [2020].
With the recent expansion of relevant studies, we add two sub-types of
spatiotemporal GNNs in this survey, namely, attention-based and FNN-based.
The attention mechanism was first proposed to memorize long source sentences
in neural machine translation, and has since been applied to temporal
forecasting problems. As a special case, the Transformer [Vaswani et al.,
2017] is built entirely upon attention mechanisms, which makes it possible to
access any part of a sequence regardless of its distance to the target [Xie et
al., 2020d, Cai et al., 2020, Jin et al., 2020b, Li & Moura, 2020]. The attention-based
approaches are used in Zheng et al. [2020b], Zhang et al. [2020a], Wang et al.
[2020b], Xie et al. [2020d], Cai et al. [2020], Zhou et al. [2020b], Chen et
al. [2020f], Park et al. [2020], Fang et al. [2020b], Jin et al. [2020b], Bai
et al. [2019b], Li & Moura [2020], Zhang et al. [2020k], while the simpler
FNN-based approach is used in Zhang et al. [2018b], Wei et al. [2019], Song et
al. [2020a], Cao et al. [2020], Chen et al. [2020g], Zhang et al. [2020h], Sun
et al. [2020], He & Shin [2020a], Yeghikyan et al. [2020], Ren & Xie [2019],
Li et al. [2018a], Han et al. [2019], He & Shin [2020b], Zhang et al. [2019c],
Ge et al. [2019a, b], Yu et al. [2020a], Ge et al. [2020], Yu et al. [2019a],
Guo et al. [2020c], Agafonov [2020], Geng et al. [2019b], Qin et al. [2020b],
Kim et al. [2019]. Apart from using neural networks to capture temporal
dependency, other techniques that have also been combined with GNNs include
autoregression [Lee et al., 2019], Markov processes [Cui et al., 2020b], and
Kalman filters [Xiong et al., 2020].
Of the additional GNN components adopted in the surveyed studies,
convolutional GNNs are the most popular, while recurrent GNN [Scarselli et
al., 2008] and Graph Auto-Encoder (GAE) [Kipf & Welling, 2016] are used less
frequently. We further categorize convolutional GNNs into the following five
types: (1) Graph Convolutional Network (GCN) [Kipf & Welling, 2017], (2)
Diffusion Graph Convolution (DGC) [Atwood & Towsley, 2016], (3) Message
Passing Neural Network (MPNN) [Gilmer et al., 2017], (4) GraphSAGE [Hamilton
et al., 2017], and (5) Graph Attention Network (GAT) [Veličković et al.,
2018]. These relevant graph neural networks are listed chronologically in
Figure 2. While different GNNs can be used for traffic forecasting, a general
design pipeline is proposed in [Zhou et al., 2020c] and suggested for future
studies as follows:
1. Find graph structure. As discussed in Section 4.1, different traffic graphs
are available.
2. Specify graph type and scale. The graphs can be further classified into
different types if needed, e.g., directed/undirected graphs,
homogeneous/heterogeneous graphs, static/dynamic graphs. In most traffic
forecasting cases, graphs of a single type are used within one study. As for
the graph scale, graphs in the traffic domain are not as large as those of
social or academic networks with millions of nodes and edges.
3. Design loss function. Training usually follows the supervised approach:
the GNN-based models are first trained on a labeled training set and then
evaluated on a test set. The forecasting task is usually formulated as a
node-level regression problem. Based on these considerations, a proper loss
function and evaluation metrics can be chosen, e.g., Root Mean Square Error
(RMSE) and Mean Absolute Error (MAE).
4. Build model using computational modules. The GNNs discussed in this
section are exactly those which have already been used as computational
modules to build forecasting models in the surveyed studies.
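The RMSE and MAE metrics mentioned in step 3 can be written directly as:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Square Error over all nodes and time steps."""
    return float(np.sqrt(np.mean(np.square(y_true - y_pred))))

def mae(y_true, y_pred):
    """Mean Absolute Error over all nodes and time steps."""
    return float(np.mean(np.abs(y_true - y_pred)))
```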
Figure 2: The relevant graph neural networks in this survey.
GCNs are spectral-based convolutional GNNs, in which the graph convolutions are defined by introducing filters from graph signal processing. The spectral convolutional neural network [Bruna et al., 2014] assumes that the filter is a set of learnable parameters and considers graph signals with multiple channels. The GCN is a first-order approximation of the Chebyshev spectral CNN (ChebNet) [Defferrard et al., 2016], which approximates the filter using Chebyshev polynomials of the diagonal matrix of eigenvalues.
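In this first-order form, the linear part of one GCN layer reduces to the well-known propagation rule $H' = \tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2} H W$ with $\tilde{A} = A + I$. A minimal numpy sketch (the toy graph and identity weights are illustrative):

```python
import numpy as np

def gcn_layer(A, H, W):
    """Linear part of one GCN layer: D~^-1/2 (A + I) D~^-1/2 H W."""
    A_tilde = A + np.eye(A.shape[0])           # add self-loops
    d = A_tilde.sum(axis=1)                    # degrees of A~
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # D~^-1/2
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # symmetric normalization
    return A_hat @ H @ W

A = np.array([[0., 1.], [1., 0.]])  # two connected nodes
H = np.array([[1., 0.], [0., 1.]])  # one-hot node features
W = np.eye(2)                       # identity weights for inspection
print(gcn_layer(A, H, W))           # each node averages itself and its neighbor
```

In practice a nonlinearity is applied to the output and `W` is learned by gradient descent.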
The alternative approach is spatial-based convolutional GNNs, in which the
graph convolutions are defined by information propagation. DGC, MPNN,
GraphSAGE, and GAT all follow this approach. The graph convolution is modeled
as a diffusion process with a transition probability from one node to a
neighboring node in DGC. An equilibrium is expected to be reached after several rounds of information transition. MPNN provides a general framework that models the graph convolution as an information-passing process between directly connected nodes.
To alleviate the computation problems caused by a large number of neighbors,
sampling is used to obtain a fixed number of neighbors in GraphSAGE. Lastly,
without using a predetermined adjacency matrix, the attention mechanism is
used to learn the relative weights between two connected nodes in GAT.
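The diffusion view taken by DGC can be sketched in a few lines: with transition matrix $P = D^{-1}A$, a $K$-step diffusion convolution is a weighted sum of powers of $P$ applied to the node signal. The graph and the weights `theta` below are illustrative (in a real model the weights are learned):

```python
import numpy as np

def diffusion_conv(A, x, theta):
    """K-step diffusion: sum_k theta[k] * P^k x, with P = D^-1 A."""
    P = A / np.maximum(A.sum(axis=1, keepdims=True), 1.0)  # transition probs
    out = np.zeros_like(x)
    Pk_x = x.copy()              # P^0 x
    for t in theta:
        out += t * Pk_x
        Pk_x = P @ Pk_x          # advance one diffusion step
    return out

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])     # a 3-node path graph
x = np.array([1.0, 0.0, 0.0])    # unit signal on node 0
print(diffusion_conv(A, x, theta=[0.5, 0.5]))
```

GAT replaces the fixed transition probabilities with attention weights computed from the node features themselves.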
A full list of the GNN components used in the surveyed studies is shown in
Table 4. Currently, the most widely used GNN is the GCN. However, we also
notice a growing trend in the use of GAT in traffic forecasting.
Table 4: GNNs in the surveyed studies.
GNN | Relevant Studies
---|---
Recurrent GNN | Wang et al. [2018b, a], Lu et al. [2019b, 2020b]
GAE | Xu et al. [2020a, 2019], Opolka et al. [2019], Shen et al. [2020]
GCN | Wu et al. [2019], Zhang et al. [2018b], Guo et al. [2020a], Lu et al. [2019a], Zhang et al. [2020j, l], Bai et al. [2020], Fang et al. [2019], Zhang et al. [2020e], Song et al. [2020a], Xu et al. [2020b], Wang et al. [2020g], Lv et al. [2020], Boukerche & Wang [2020b], Tang et al. [2020b], Guo et al. [2019c], Li et al. [2019], Zhang et al. [2019d], Sun et al. [2020], Li et al. [2020f], Cao et al. [2020], Yu et al. [2018, 2019b], Li et al. [2020b], Chen et al. [2020g], Zhang et al. [2020a], Wang et al. [2020a], Xin et al. [2020], Qu et al. [2020], Wang et al. [2020b], Huang et al. [2020b], Guo et al. [2020b], Fang et al. [2020a], Li & Zhu [2021], Xu et al. [2020c], Chen et al. [2020c], Xiong et al. [2020], Ramadan et al. [2020], Zhou et al. [2020d], Sun et al. [2020], Peng et al. [2020], Zhou et al. [2019], Wang et al. [2020e], Qiu et al. [2020], He & Shin [2020a], Yeghikyan et al. [2020], Shi et al. [2020], Wang et al. [2020h], Ren & Xie [2019], Li et al. [2018a], Zhao et al. [2020a], Han et al. [2019], Zhang et al. [2020b, c], Liu et al. [2020b], Ye et al. [2020b], Zhu et al. [2019], Chai et al. [2018], He et al. [2020], Bai et al. [2021], Tang et al. [2020a], James [2020], Zhang et al. [2018a, 2019f], Yu & Gu [2019], Guo et al. [2019a], Diao et al. [2019], Zhang et al. [2019c], James [2019], Ge et al. [2019a, b], Zhang et al. [2019b], Lee & Rhee [2019a], Yu et al. [2020a], Ge et al. [2020], Zhao et al. [2019], Cui et al. [2019], Zhang et al. [2019e], Yu et al. [2019a], Lee & Rhee [2019b], Bogaerts et al. [2020], Cui et al. [2020b, a], Guo et al. [2020c], Cai et al. [2020], Wu et al. [2020c], Chen et al. [2020f], Jia et al. [2020], Sun et al. [2021], Xie et al. [2020b], Zhu et al. [2020c], Feng et al. [2020], Zhu et al. [2020a], Fu et al. [2020], Agafonov [2020], Chen et al. [2020a], Lu et al. [2020a], Jepsen et al. [2019, 2020], Bing et al. [2020], Lewenfus et al. [2020], Zhu et al. [2020b], Liao et al. [2018], Maas & Bloem [2020], Li et al. 
[2020d], Song et al. [2020b], Zhao et al. [2020b], Guopeng et al. [2020], Shao et al. [2020], Dai et al. [2020], Mohanty & Pozdnukhov [2018], Mohanty et al. [2020], Qin et al. [2020a], Han et al. [2020], Hong et al. [2020], Hu et al. [2018], Li & Axhausen [2020], Jin et al. [2020a], Geng et al. [2019b], Bai et al. [2019b], Geng et al. [2019a], Bai et al. [2019a], Ke et al. [2021], Li et al. [2020c], Ke et al. [2019], Hu et al. [2020], Zheng et al. [2020a], Davis et al. [2020], Chen et al. [2020h], Du et al. [2020], Li & Moura [2020], Ye et al. [2021], Luo et al. [2020], Chen et al. [2020b], Wang et al. [2020d], Qin et al. [2020b], Xiao et al. [2020], Yoshida et al. [2019], Guo et al. [2019b], Kim et al. [2019], Lin et al. [2018], Zhou et al. [2020e], Yu et al. [2020b], Zhang et al. [2020k], Zhou et al. [2020f], Liu et al. [2020c], Zhang et al. [2020g], Yang et al. [2019], Zhang et al. [2020f], Xu et al. [2020d], Heglund et al. [2020]
DGC | Li et al. [2018b], Mallick et al. [2020], Chen et al. [2020e], Fukuda et al. [2020], Ou et al. [2020], Chen et al. [2019], Wang et al. [2020f], Zhou et al. [2020a, b], Mallick et al. [2021], Xie et al. [2020c], Kim et al. [2020], Wang et al. [2020c]
MPNN | Wei et al. [2019], Xu et al. [2020b], Wang et al. [2019]
GraphSAGE | Liu et al. [2020a]
GAT | Zheng et al. [2020b], Pan et al. [2020, 2019], Huang et al. [2020a], Kong et al. [2020], Zhang & Guo [2020], Tang et al. [2020b], Kang et al. [2019], Wu et al. [2018a], Wei & Sheng [2020], Yin et al. [2020], Xie et al. [2020d], Zhang et al. [2020h], Tian et al. [2020], He & Shin [2020b], Tang et al. [2020a], Zhang et al. [2019a], Cirstea et al. [2019], Yang et al. [2020], Guo & Yuan [2020], Zhang et al. [2020i, d], Park et al. [2020], Song et al. [2020b], Fang et al. [2020b], Pian & Wu [2020], Jin et al. [2020b], Xu & Li [2019], Wu et al. [2020a], Wright et al. [2019]
## 5 Open Data and Source Resources
In this section, we summarize the open data and source code used in the
surveyed papers. These open data are suitable for GNN-related studies with
graph structures discussed in Section IV, which can be used to formulate
different forecasting problems in Section III. We also list the GNN-related
code resources for those who want to replicate the previous GNN-based
solutions as baselines in the follow-up studies.
### 5.1 Open Data
We categorize the data used in the surveyed studies into three major types,
namely, graph-related data, historical traffic data, and external data. Graph-
related data refer to those data which exhibit a graph structure in the
traffic domain, i.e., transportation network data. Historical traffic data
refer to those data which record the historical traffic states, usually in
different locations and time points. We further categorize the historical
traffic data into sub-types as follows. External data refer to factors that affect the traffic states, e.g., weather data and calendar data. Some of these data can be used in graph-based modeling directly, while others may require pre-processing before being incorporated into GNN-based models.
Transportation Network Data. These data represent the underlying
transportation infrastructure, e.g., road, subway, and bus networks. They can
be obtained from government transportation departments or extracted from
online map services, e.g., OpenStreetMap. Based on their topology structure,
these data can be used to build the graphs directly, e.g., the road segments
or the stations are nodes and the road intersections or subway links are the
edges. While this modeling approach is straightforward, the disadvantage is
that only static graphs can be built from transportation network data.
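As a concrete sketch, a static graph can be built directly from a station list and a link list; the toy station names and links below are illustrative:

```python
# Build a static adjacency matrix from network topology (toy data).
stations = ["A", "B", "C", "D"]
links = [("A", "B"), ("B", "C"), ("C", "D")]  # e.g., subway links

index = {s: i for i, s in enumerate(stations)}
n = len(stations)
adj = [[0] * n for _ in range(n)]
for u, v in links:
    adj[index[u]][index[v]] = 1
    adj[index[v]][index[u]] = 1   # undirected network

print(adj)
```

The resulting matrix is fixed for the whole study, which is exactly the limitation noted above: topology changes (e.g., new stations) require rebuilding the graph.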
Traffic Sensor Data. Traffic sensors, e.g. loop detectors, are installed on
roads to collect traffic information, e.g., traffic volume or speed. This type
of data is widely used for traffic prediction, especially road traffic flow
and speed prediction problems. For graph-based modeling, each sensor can be
used as a node, with road connections as the edges. One advantage of using
traffic sensor data for graph-based modeling is that the captured traffic
information can be used directly as the node attributes, with little pre-processing overhead. One caveat is that the sensors are prone to hardware faults, which cause missing-data and noise problems and require corresponding pre-processing techniques, e.g., data imputation and denoising methods. Another disadvantage of using traffic sensor data for graph-based
modeling is that the traffic sensors can only be installed in a limited number
of locations for a series of reasons, e.g., installation cost. With this
constraint, only the part of the road networks with traffic sensors can be
incorporated into a graph, while the uncovered areas are neglected.
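A common way to turn sensor locations into weighted edges is a thresholded Gaussian kernel over pairwise road-network distances (a practice used with several sensor-graph datasets); the distance values below are made up:

```python
import numpy as np

def gaussian_kernel_adjacency(dist, sigma, eps):
    """W_ij = exp(-dist_ij^2 / sigma^2), zeroed below threshold eps."""
    W = np.exp(-(dist ** 2) / (sigma ** 2))
    W[W < eps] = 0.0          # sparsify: drop weak long-range connections
    np.fill_diagonal(W, 0.0)  # no self-loops here
    return W

# Pairwise road-network distances between three sensors (illustrative, km).
dist = np.array([[0.0, 1.0, 5.0],
                 [1.0, 0.0, 1.0],
                 [5.0, 1.0, 0.0]])
W = gaussian_kernel_adjacency(dist, sigma=2.0, eps=0.1)
print(W)
```

The threshold `eps` controls graph sparsity: distant sensor pairs are disconnected entirely rather than given tiny weights.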
GPS Trajectory Data. Different types of vehicles (e.g. taxis, buses, online
ride-hailing vehicles, and shared bikes) can be equipped with GPS receivers,
which record GPS coordinates in 2-60 second intervals. The trajectory data
calculated from these GPS coordinate samples can be matched to road networks
and further used to derive traffic flow or speed. The advantage of using GPS
trajectory data for graph-based modeling is both the low expense to collect
GPS data with smartphones and the wider coverage with the massive number of
vehicles, compared with traffic sensor data. However, GPS trajectory data contain no direct traffic information, although traffic states such as flow or speed can be derived from them with appropriate definitions. Data quality problems also remain with GPS trajectory data, and more pre-processing steps are required, e.g., map matching.
Location-based Service Data. GPS function is also embedded in smartphones,
which can be used to collect various types of location-related data, e.g.,
check-in data, point-of-interest data, and route navigation application data.
The pros and cons of using location-based service data are similar to those of GPS trajectory data. The difference is that location-based service data are often collected in a crowd-sourced manner, with more data providers but potentially lower data quality.
Trip Record Data. These include departure and arrival dates/times, departure
and arrival locations, and other trip information. Traffic speed and demand can be derived from trip record data from various sources, e.g., taxis, ride-hailing services, buses, bikes, or even dock-less e-scooters used in He & Shin
[2020a]. These data can be collected in public transportation systems with
mature methods, for example, by AFC (Automatic Fare Collection) in the subway
and bus systems. Trip record data have the advantage of supporting multiple graph-based problem formulations, e.g., station-level traffic flow and demand problems. They are also easier to collect in existing public transportation systems.
Traffic Report Data. This type of data is often used for abnormal cases, e.g.,
anomaly report data used in Liu et al. [2020c] and traffic accident report
data used in Zhou et al. [2020e], Zhang et al. [2020k], Zhou et al. [2020f].
Traffic report data are less used in graph-based modeling because of their
sparsity in both spatial and temporal dimensions, compared with trip record
data.
Multimedia Data. This type of data can be used as an additional input to deep
learning models or for verifying the traffic status indicated by other data
sources. Multimedia data used in the surveyed studies include the Baidu
street-view images used in Qin et al. [2020a] for traffic congestion, as well
as satellite imagery data [Zhang et al., 2020k], and video surveillance data
[Shao et al., 2020]. Multimedia data are also rarely seen in graph-based modeling because of their higher requirements for data collection, transmission, and storage, compared with traffic sensor data of similar functionality.
It is also more difficult to extract precise traffic information, e.g.,
vehicle counts, from images or videos through image processing and object
detection techniques.
Simulated Traffic Data. In addition to observed real-world datasets,
microscopic traffic simulators are also used to build virtual training and
testing datasets for deep learning models. Examples in the surveyed studies
include the MATES Simulator used in Fukuda et al. [2020] and INTEGRATION
software used in Ramadan et al. [2020]. With many real-world datasets
available, simulated traffic data are rarely used in GNN-based and broader ML-based traffic forecasting studies. Traffic simulations nevertheless have the potential to model unseen graphs, e.g., for evaluating a planned road topology.
Weather Data. Traffic states are highly affected by meteorological factors, including temperature, humidity, precipitation, barometric pressure, and wind strength.
Calendar Data. This includes the information on weekends and holidays. Because
traffic patterns vary significantly between weekdays and weekends/holidays,
some studies consider these two cases separately. Both weather and calendar
data have been proven useful for traffic forecasting in the literature and
should not be neglected in graph-based modeling as external factors.
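A minimal encoding of calendar information as external features might look as follows; the holiday list here is an illustrative placeholder, not an actual calendar used in any surveyed study:

```python
from datetime import date

HOLIDAYS = {date(2016, 1, 1)}  # illustrative holiday calendar

def calendar_features(d):
    """Day-of-week one-hot plus weekend and holiday indicator flags."""
    dow = [0] * 7
    dow[d.weekday()] = 1                # Monday=0 ... Sunday=6
    is_weekend = int(d.weekday() >= 5)
    is_holiday = int(d in HOLIDAYS)
    return dow + [is_weekend, is_holiday]

print(calendar_features(date(2016, 1, 1)))  # a Friday and a holiday
```

Such a vector is typically concatenated with the node-level traffic features at each time step, letting the model condition on weekday/weekend and holiday regimes.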
While present-day road network and weather data can be easily found on the
Internet, it is much more difficult to source historical traffic data, both
due to data privacy concerns and the transmission and storage requirements of
large data volumes. In Table 5 we present a list of the open data resources
used in the surveyed studies. Most of these open data are already cleaned or
preprocessed and can be readily used for benchmarking and comparing the
performance of different models in future work.
Table 5: Open data for traffic prediction problems.
Dataset Name | Relevant Studies
---|---
METR-LA | Li et al. [2018b], Wu et al. [2019], Xu et al. [2020a], Pan et al. [2020, 2019], Lu et al. [2019a], Zhang et al. [2020e], Wang et al. [2020g], Zhang & Guo [2020], Boukerche & Wang [2020b], Cao et al. [2020], Yu et al. [2019b], Li & Zhu [2021], Tian et al. [2020], Chen et al. [2020d], Bai et al. [2021], Zhang et al. [2018a], Cirstea et al. [2019], Shleifer et al. [2019], Yang et al. [2020], Chen et al. [2019], Wang et al. [2020f], Cui et al. [2020b], Zhou et al. [2020a], Cai et al. [2020], Zhou et al. [2020b], Wu et al. [2020c], Chen et al. [2020f], Opolka et al. [2019], Oreshkin et al. [2021], Jia et al. [2020], Zhang et al. [2020i], Feng et al. [2020], Xie et al. [2020c], Park et al. [2020], Song et al. [2020b]
PeMS all | Mallick et al. [2020, 2021]
PeMS-BAY | Li et al. [2018b], Wu et al. [2019], Zheng et al. [2020b], Pan et al. [2020, 2019], Xu et al. [2020b], Wang et al. [2020g], Zhang & Guo [2020], Boukerche & Wang [2020b], Li et al. [2020f], Cao et al. [2020], Xie et al. [2020d], Li & Zhu [2021], Tian et al. [2020], Shleifer et al. [2019], Chen et al. [2019], Yu et al. [2019a], Wang et al. [2020f], Cui et al. [2020b], Zhou et al. [2020a], Cai et al. [2020], Zhou et al. [2020b], Wu et al. [2020c], Chen et al. [2020f], Oreshkin et al. [2021], Guo & Yuan [2020], Zhang et al. [2020i], Feng et al. [2020], Xie et al. [2020c], Park et al. [2020], Song et al. [2020b]
PeMSD3 | Song et al. [2020a], Cao et al. [2020], Chen et al. [2020g], Wang et al. [2020a], Li & Zhu [2021]
PeMSD4 | Bai et al. [2020], Huang et al. [2020a], Zhang et al. [2020e], Song et al. [2020a], Chen et al. [2020e], Tang et al. [2020b], Guo et al. [2019c], Li et al. [2019], Wei & Sheng [2020], Cao et al. [2020], Li et al. [2020b], Yin et al. [2020], Zhang et al. [2020a], Wang et al. [2020a], Xin et al. [2020], Huang et al. [2020b], Guo et al. [2020b], Li & Zhu [2021], Xu et al. [2020c], Chen et al. [2020c], Ge et al. [2019a, b, 2020], Zhao et al. [2020b]
PeMSD7 | Zhang et al. [2020j], Huang et al. [2020a], Song et al. [2020a], Xu et al. [2020b], Tang et al. [2020b], Sun et al. [2020], Cao et al. [2020], Yu et al. [2018, 2019b], Chen et al. [2020g], Wang et al. [2020a], Xin et al. [2020], Xie et al. [2020d], Li & Zhu [2021], Zhang et al. [2019a], Ge et al. [2019a, b, 2020], Yu et al. [2019a], Zhao et al. [2020b]
PeMSD8 | Bai et al. [2020], Huang et al. [2020a], Song et al. [2020a], Chen et al. [2020e], Guo et al. [2019c], Wei & Sheng [2020], Cao et al. [2020], Li et al. [2020b], Yin et al. [2020], Zhang et al. [2020a], Wang et al. [2020a], Guo et al. [2020b], Li & Zhu [2021]
Seattle Loop | Cui et al. [2019, 2020a], Sun et al. [2021], Lewenfus et al. [2020]
T-Drive | Pan et al. [2020, 2019]
SHSpeed | Zhang et al. [2020j], Wang et al. [2018b], Guo et al. [2019a]
TaxiBJ | Zhang et al. [2020h], Wang et al. [2018a], Bai et al. [2019b]
TaxiSZ | Bai et al. [2021], Zhao et al. [2019]
TaxiCD | Hu et al. [2018, 2020]
TaxiNYC | Zhang et al. [2020h], Sun et al. [2020], Zhou et al. [2019], Hu et al. [2018], Jin et al. [2020b], Li & Axhausen [2020], Zheng et al. [2020a], Xu & Li [2019], Davis et al. [2020], Du et al. [2020], Li & Moura [2020], Ye et al. [2021], Zhou et al. [2020f]
UberNYC | Jin et al. [2020b], Ke et al. [2021]
DiDiChengdu | Zhang et al. [2019d], Qu et al. [2020], Wang et al. [2020b], Zhou et al. [2019], Wang et al. [2020h], Bogaerts et al. [2020], Li et al. [2020c]
DiDiTTIChengdu | Lu et al. [2020a]
DiDiXi’an | Qu et al. [2020], Bogaerts et al. [2020]
DiDiHaikou | Pian & Wu [2020], Jin et al. [2020a]
BikeDC | Sun et al. [2020], Wang et al. [2020d]
BikeNYC | Zhang et al. [2020h], Sun et al. [2020], Wang et al. [2018a], He & Shin [2020b], Chai et al. [2018], Lee et al. [2019], Bai et al. [2019b], Du et al. [2020], Ye et al. [2021], Wang et al. [2020d], Guo et al. [2019b], Lin et al. [2018]
BikeChicago | Chai et al. [2018]
SHMetro | Liu et al. [2020b]
HZMetro | Liu et al. [2020b]
#### 5.1.1 Traffic Sensor Data
The relevant open traffic sensor data are listed as follows.
METR-LA (download link: https://github.com/liyaguang/DCRNN): This dataset
contains traffic speed and volume collected from the highway of the Los
Angeles County road network, with 207 loop detectors. The samples are
aggregated in 5-minute intervals. The most frequently referenced time period
for this dataset is from March 1st to June 30th, 2012.
Performance Measurement System (PeMS) Data (http://pems.dot.ca.gov/): This
dataset contains raw detector data from over 18,000 vehicle detector stations
on the freeway system spanning all major metropolitan areas of California from
2001 to 2019, collected with various sensors including inductive loops, side-
fire radar, and magnetometers. The samples are captured every 30 seconds and
aggregated in 5-minute intervals. Each data sample contains a timestamp,
station ID, district, freeway ID, direction of travel, total flow, and average
speed. Different subsets of PeMS data have been used in previous studies, for
example:
1. PeMS-BAY (download link: https://github.com/liyaguang/DCRNN): This subset
contains data from 325 sensors in the Bay Area from January 1st to June 30th,
2017.
2. PeMSD3: This subset uses 358 sensors in the North Central Area. The frequently
referenced time period for this dataset is September 1st to November 30th,
2018.
3. PeMSD4: This subset uses 307 sensors in the San Francisco Bay Area. The
frequently referenced time period for this dataset is January 1st to February
28th, 2018.
4. PeMSD7: This subset uses 883 sensors in the Los Angeles Area. The frequently
referenced time period for this dataset is May to June, 2012.
5. PeMSD8: This subset uses 170 sensors in the San Bernardino Area. The frequently
frequently referenced time period for this dataset is July to August, 2016.
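The 30-second-to-5-minute aggregation mentioned above amounts to bucketing raw samples into 300-second windows and averaging. A stdlib sketch with made-up detector readings:

```python
from collections import defaultdict

def aggregate_5min(samples):
    """Average (timestamp_sec, speed) samples into 5-minute windows."""
    buckets = defaultdict(list)
    for ts, speed in samples:
        buckets[(ts // 300) * 300].append(speed)  # key by window start time
    return {start: sum(v) / len(v) for start, v in sorted(buckets.items())}

# 30-second raw detector samples (illustrative speeds, mph).
samples = [(0, 60.0), (30, 62.0), (60, 64.0), (300, 50.0), (330, 54.0)]
print(aggregate_5min(samples))
```

Published benchmark subsets ship already aggregated, but the same windowing applies when building a dataset from raw detector feeds.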
Seattle Loop (download link: https://github.com/zhiyongc/Seattle-Loop-Data):
This dataset was collected by inductive loop detectors deployed on four
connected freeways (I-5, I-405, I-90, and SR-520) in the Seattle area, from
January 1st to 31st, 2015. It contains the traffic speed data from 323
detectors. The samples are aggregated in 5-minute intervals.
#### 5.1.2 Taxi Data
The open taxi datasets used in the surveyed studies are listed as follows.
T-drive [Yuan et al., 2010]: This dataset contains a large number of taxicab
trajectories collected by 30,000 taxis in Beijing from February 1st to June
2nd, 2015.
SHSpeed (Shanghai Traffic Speed) [Wang et al., 2018b] (download link:
https://github.com/xxArbiter/grnn): This dataset contains 10-minute traffic
speed data, derived from raw taxi trajectory data, collected from 1 to 30
April 2015, for 156 urban road segments in the central area of Shanghai,
China.
TaxiBJ [Zhang et al., 2017]: This dataset contains inflow and outflow data
derived from GPS data in more than 34,000 taxicabs in Beijing from four time
intervals: (1) July 1st to October 30th, 2013; (2) March 1st to June 30th,
2014; (3) March 1st to June 30th, 2015; and (4) November 1st, 2015 to April
10th, 2016. The Beijing city map is divided into $32\times 32$ grids and the
time interval of the flow data is 30 minutes.
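The grid construction behind TaxiBJ-style flow data can be sketched as mapping coordinates into an $n\times n$ grid and counting events per cell. The bounding box and coordinates below are illustrative, not Beijing's actual extent:

```python
def to_cell(lat, lon, bbox, n=32):
    """Map a coordinate into a (row, col) cell of an n-by-n grid."""
    lat_min, lat_max, lon_min, lon_max = bbox
    row = min(int((lat - lat_min) / (lat_max - lat_min) * n), n - 1)
    col = min(int((lon - lon_min) / (lon_max - lon_min) * n), n - 1)
    return row, col

bbox = (39.0, 41.0, 115.0, 117.0)  # illustrative bounding box
inflow = {}
for lat, lon in [(39.9, 116.4), (39.95, 116.45), (40.5, 115.5)]:
    cell = to_cell(lat, lon, bbox)
    inflow[cell] = inflow.get(cell, 0) + 1
print(inflow)
```

Counting trip-start and trip-end points separately over each time interval yields the inflow and outflow channels described above.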
TaxiSZ [Zhao et al., 2019] (download link:
https://github.com/lehaifeng/T-GCN): This dataset is derived from taxi
trajectories in Shenzhen from January 1st to 31st, 2015. It contains the
traffic speed on 156 major roads of the Luohu District every 15 minutes.
TaxiCD (https://js.dclab.run/v2/cmptDetail.html?id=175): This dataset
contains 1.4 billion GPS records from 14,864 taxis collected from August 3rd
to 30th, 2014 in Chengdu, China. Each GPS record consists of a taxi ID,
latitude, longitude, an indicator of whether the taxi is occupied, and a
timestamp.
TaxiNYC (http://www.nyc.gov/html/tlc/html/about/trip_record_data.shtml): The
taxi trip records in New York starting from 2009, in both yellow and green
taxis. Each trip record contains pick-up and drop-off dates/times, pick-up and
drop-off locations, trip distances, itemized fares, rate types, payment types,
and driver-reported passenger counts.
#### 5.1.3 Ride-hailing Data
The open ride-hailing data used in the surveyed studies are listed as follows.
UberNYC (https://github.com/fivethirtyeight/uber-tlc-foil-response): This
dataset comes from Uber, which is one of the largest online ride-hailing
companies in the USA, and is provided by the NYC Taxi & Limousine Commission
(TLC). It contains data from over 4.5 million Uber pickups in New York City
from April to September 2014, and 14.3 million more Uber pickups from January
to June 2015.
Didi GAIA Open Data (https://outreach.didichuxing.com/research/opendata/):
This open data plan is supported by Didi Chuxing, which is one of the largest
online ride-hailing companies in China.
1. DiDiChengdu: This dataset contains the trajectories of DiDi Express and DiDi
Premier drivers within Chengdu, China. The data contains trips from October to
November 2016.
2. DiDiTTIChengdu: This dataset represents the DiDi Travel Time Index Data in
Chengdu, China in the year of 2018, which contains the average speed of major
roads every 10 minutes.
3. DiDiXi’an: This dataset contains the trajectories of DiDi Express and DiDi
Premier drivers within Xi’an, China. The data contains trips from October to
November 2016.
4. DiDiHaikou: The dataset contains DiDi Express and DiDi Premier orders from May
1st to October 31st, 2017 in the city of Haikou, China, including the
coordinates of origins and destinations, pickup and drop-off timestamps, as
well as other information.
#### 5.1.4 Bike Data
The open bike data used in the surveyed studies are listed as follows.
BikeNYC (https://www.citibikenyc.com/system-data): This dataset is from the NYC Bike System, which contains 416 stations. The frequently referenced time period for this dataset is from July 1st, 2013 to December 31st, 2016.
BikeDC (https://www.capitalbikeshare.com/system-data): This dataset is
from the Washington D.C. Bike System, which contains 472 stations. Each record
contains trip duration, start and end station IDs, and start and end times.
BikeChicago (https://www.divvybikes.com/system-data): This dataset is from
the Divvy System Data in Chicago, from 2015 to 2020.
#### 5.1.5 Subway Data
The subway data referenced in the surveyed studies are listed as follows.
SHMetro [Liu et al., 2020b] (download link:
https://github.com/ivechan/PVCGN): This dataset is derived from 811.8 million
transaction records of the Shanghai metro system collected from July 1st to
September 30th, 2016. It contains 288 metro stations and 958 physical edges.
The inflow and outflow of each station are provided in 15 minute intervals.
HZMetro [Liu et al., 2020b] (download link:
https://github.com/ivechan/PVCGN): This dataset is similar to SHMetro, from the
metro system in Hangzhou, China, in January 2019. It contains 80 metro
stations and 248 physical edges, and the aggregation time length is also 15
minutes.
### 5.2 Open Source Codes
Several open source frameworks for implementing general deep learning models,
most of which are built with the Python programming language, can be accessed
online, e.g., TensorFlow (https://www.tensorflow.org/), Keras (https://keras.io/), PyTorch (https://pytorch.org/), and MXNet (https://mxnet.apache.org/). Additional Python libraries designed for implementing GNNs are available, including DGL (https://www.dgl.ai/), pytorch_geometric (https://pytorch-geometric.readthedocs.io/), and Graph Nets (https://github.com/deepmind/graph_nets).
Many authors have also released open-source implementations of their proposed
models. The open source projects for traffic flow, traffic speed, traffic
demand, and other problems are summarized in Tables 6, 7, 8, and 9,
respectively. In these open source projects, TensorFlow and PyTorch are the
two frameworks that are used most frequently.
Table 6: Open source projects for traffic flow related problems.
Article | Year | Framework | Problem | Link
---|---|---|---|---
Zheng et al. [2020b] | 2020 | TensorFlow | Road Traffic Flow, Road Traffic Speed | https://github.com/zhengchuanpan/GMAN
Bai et al. [2020] | 2020 | PyTorch | Road Traffic Flow | https://github.com/LeiBAI/AGCRN
Song et al. [2020a] | 2020 | MXNet | Road Traffic Flow | https://github.com/wanhuaiyu/STSGCN
Tang et al. [2020b] | 2020 | TensorFlow | Road Traffic Flow | https://github.com/sam101340/GAGCN-BC-20200720
Wang et al. [2020a] | 2020 | MXNet, PyTorch | Road Traffic Flow | https://github.com/zkx741481546/Auto-STGCN
Guo et al. [2020b] | 2020 | PyTorch | Road Traffic Flow, Road Traffic Speed | https://github.com/guokan987/DGCN
Li & Zhu [2021] | 2020 | MXNet | Road Traffic Flow, Road Traffic Speed | https://github.com/MengzhangLI/STFGNN
Tian et al. [2020] | 2020 | PyTorch, DGL | Road Traffic Flow | https://github.com/Kelang-Tian/ST-MGAT
Xiong et al. [2020] | 2020 | TensorFlow | Road OD Flow | https://github.com/alzmxx/OD_Prediction
Peng et al. [2020] | 2020 | Keras | Road Station-level Subway Passenger Flow, Station-level Bus Passenger Flow, Regional Taxi Flow | https://github.com/RingBDStack/GCNN-In-Traffic
Qiu et al. [2020] | 2020 | PyTorch | Regional Taxi Flow | https://github.com/Stanislas0/ToGCN-V2X
Yeghikyan et al. [2020] | 2020 | PyTorch | Regional OD Taxi Flow | https://github.com/FelixOpolka/Mobility-Flows-Neural-Networks
Zhang et al. [2020b] | 2020 | Keras | Station-level Subway Passenger Flow | https://github.com/JinleiZhangBJTU/ResNet-LSTM-GCN
Zhang et al. [2020c] | 2020 | Keras | Station-level Subway Passenger Flow | https://github.com/JinleiZhangBJTU/Conv-GCN
Liu et al. [2020b] | 2020 | PyTorch | Station-level Subway Passenger Flow | https://github.com/ivechan/PVCGN
Ye et al. [2020b] | 2020 | Keras | Station-level Subway Passenger Flow | https://github.com/start2020/Multi-STGCnet
Pan et al. [2019] | 2019 | MXNet, DGL | Road Traffic Flow, Road Traffic Speed | https://github.com/panzheyi/ST-MetaNet
Guo et al. [2019c] | 2019 | MXNet | Road Traffic Flow | https://github.com/wanhuaiyu/ASTGCN
Guo et al. [2019c] | 2019 | PyTorch | Road Traffic Flow | https://github.com/wanhuaiyu/ASTGCN-r-pytorch
Wang et al. [2018b] | 2018 | PyTorch | Road Traffic Flow | https://github.com/xxArbiter/grnn
Yu et al. [2018] | 2018 | TensorFlow | Road Traffic Flow | https://github.com/VeritasYin/STGCN_IJCAI-18
Li et al. [2018a] | 2018 | Keras | Station-level Subway Passenger Flow | https://github.com/RingBDStack/GCNN-In-Traffic
Chai et al. [2018] | 2018 | TensorFlow | Bike Flow | https://github.com/Di-Chai/GraphCNN-Bike
Table 7: Open source projects for traffic speed related problems.
Article | Year | Framework | Problem | Link
---|---|---|---|---
Zhang et al. [2020h] | 2020 | Keras | Road Traffic Speed | https://github.com/jillbetty001/ST-CGA
Bai et al. [2021] | 2020 | TensorFlow | Road Traffic Speed | https://github.com/lehaifeng/T-GCN/tree/master/A3T
Yang et al. [2020] | 2020 | TensorFlow | Road Traffic Speed | https://github.com/fanyang01/relational-ssm
Wu et al. [2020c] | 2020 | PyTorch | Road Traffic Speed | https://github.com/nnzhan/MTGNN
Mallick et al. [2021] | 2020 | TensorFlow | Road Traffic Speed | https://github.com/tanwimallick/TL-DCRNN
Chen et al. [2020a] | 2020 | PyTorch | Road Traffic Speed | https://github.com/Fanglanc/DKFN
Lu et al. [2020a] | 2020 | PyTorch | Road Traffic Speed | https://github.com/RobinLu1209/STAG-GCN
Guopeng et al. [2020] | 2020 | TensorFlow, Keras | Road Traffic Speed | https://github.com/RomainLITUD/DGCN_traffic_forecasting
Shen et al. [2020] | 2020 | PyTorch | Road Travel Time | https://github.com/YibinShen/TTPNet
Hong et al. [2020] | 2020 | TensorFlow | Time of Arrival | https://github.com/didi/heteta
Wu et al. [2019] | 2019 | PyTorch | Road Traffic Speed | https://github.com/nnzhan/Graph-WaveNet
Shleifer et al. [2019] | 2019 | PyTorch | Road Traffic Speed | https://github.com/sshleifer/Graph-WaveNet
Zhao et al. [2019] | 2019 | TensorFlow | Road Traffic Speed | https://github.com/lehaifeng/T-GCN
Cui et al. [2019] | 2019 | TensorFlow | Road Traffic Speed | https://github.com/zhiyongc/Graph_Convolutional_LSTM
Jepsen et al. [2019, 2020] | 2019 | MXNet | Road Traffic Speed | https://github.com/TobiasSkovgaardJepsen/relational-fusion-networks
Li et al. [2018b] | 2018 | TensorFlow | Road Traffic Speed | https://github.com/liyaguang/DCRNN
Li et al. [2018b] | 2018 | PyTorch | Road Traffic Speed | https://github.com/chnsh/DCRNN_PyTorch
Zhang et al. [2018a] | 2018 | MXNet | Road Traffic Speed | https://github.com/jennyzhang0215/GaAN
Liao et al. [2018] | 2018 | TensorFlow | Road Traffic Speed | https://github.com/JingqingZ/BaiduTraffic
Mohanty & Pozdnukhov [2018], Mohanty et al. [2020] | 2018 | TensorFlow | Traffic Congestion | https://github.com/sudatta0993/Dynamic-Congestion-Prediction
Table 8: Open source projects for traffic demand related problems.
Article | Year | Framework | Problem | Link
---|---|---|---|---
Hu et al. [2020] | 2020 | TensorFlow | Taxi Demand | https://github.com/hujilin1229/od-pred
Davis et al. [2020] | 2020 | TensorFlow, PyTorch | Taxi Demand | https://github.com/NDavisK/Grids-versus-Graphs
Ye et al. [2021] | 2020 | PyTorch | Taxi Demand, Bike Demand | https://github.com/Essaim/CGCDemandPrediction
Lee et al. [2019] | 2019 | TensorFlow, Keras | Ride-hailing Demand, Bike Demand, Taxi Demand | https://github.com/LeeDoYup/TGGNet-keras
Ke et al. [2019] | 2019 | Keras | Taxi Demand | https://github.com/kejintao/ST-ED-RMGC
Table 9: Open source projects for other problems.
Article | Year | Framework | Problem | Link
---|---|---|---|---
Zhou et al. [2020e] | 2020 | TensorFlow | Traffic Accident | https://github.com/zzyy0929/AAAI2020-RiskOracle/
Yu et al. [2020b] | 2020 | PyTorch, DGL | Traffic Accident | https://github.com/yule-BUAA/DSTGCN
Zhang et al. [2020f] | 2020 | PyTorch, DGL | Parking Availability | https://github.com/Vvrep/SHARE-parking_availability_prediction-Pytorch
Wang et al. [2020c] | 2020 | TensorFlow | Transportation Resilience | https://github.com/Charles117/resilience_shenzhen
Wright et al. [2019] | 2019 | TensorFlow, Keras | Lane Occupancy | https://github.com/mawright/trafficgraphnn
## 6 Challenges and Future Directions
In this section, we discuss general challenges for traffic prediction problems as well as new challenges specific to GNNs. While GNNs achieve better forecasting performance, they are not a panacea. Some existing challenges from the broader topic of traffic forecasting remain unsolved in current graph-based studies. Based on these challenges, we discuss possible future directions as well as early attempts in these directions. Some of these future directions are inspired by the broader traffic forecasting research and remain insightful for the graph-based modeling approach. We also highlight the special opportunities offered by GNNs.
### 6.1 Challenges
#### 6.1.1 Heterogeneous Data
Traffic prediction problems involve both spatiotemporal data and external
factors, e.g., weather and calendar information. Heterogeneous data fusion is
a challenge that is not limited to the traffic domain. GNNs have enabled
significant progress by taking the underlying graph structures into
consideration. However, some challenges remain; for example, geographically
close nodes may not be the most influential, both for CNN-based and GNN-based
approaches. Another special challenge for GNNs is that the underlying graph
information may not be correct or up to date. For example, the road topology
data of OpenStreetMap, an online map service, are collected in a crowd-sourced
fashion and may be inaccurate or lag behind the real road network. The spatial
dependencies extracted by GNNs from such inaccurate data may decrease
forecasting accuracy.
Data quality concerns present an additional challenge, with problems such as
missing data, sparse data, and noise potentially compromising forecasting
results. Most of the surveyed models are only evaluated on processed high-
quality datasets. A few studies do, however, take data quality related
problems into consideration, e.g., using the Kalman filter to deal with sensor
data bias and noise [Chen et al., 2020a], or infilling missing data with
moving average filters [Hasanzadeh et al., 2019] or linear interpolation
[Agafonov, 2020, Chen et al., 2020a]. The missing data problem can be more
pronounced for GNNs, since data may be missing from either the historical
traffic records or the underlying graph information; e.g., GCNs have been
proposed to fill data gaps in missing OD flow problems [Yao et al., 2020].
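As a minimal illustration of the interpolation-based infilling mentioned above (a generic sketch, not the exact preprocessing pipeline of the cited studies), the following routine fills missing readings by linear interpolation between the nearest observed values, falling back to the nearest observation at the sequence edges:

```python
def interpolate_missing(series):
    """Fill None gaps by linear interpolation between the nearest
    observed values; edge gaps copy the nearest observation."""
    filled = list(series)
    observed = [i for i, v in enumerate(filled) if v is not None]
    if not observed:
        return filled
    for i, v in enumerate(filled):
        if v is not None:
            continue
        left = max((j for j in observed if j < i), default=None)
        right = min((j for j in observed if j > i), default=None)
        if left is None:
            filled[i] = filled[right]
        elif right is None:
            filled[i] = filled[left]
        else:
            w = (i - left) / (right - left)
            filled[i] = filled[left] * (1 - w) + filled[right] * w
    return filled

# Hypothetical sensor speeds (km/h) with two gaps:
print(interpolate_missing([60.0, None, 50.0, None]))  # [60.0, 55.0, 50.0, 50.0]
```

More elaborate infilling (e.g., the graph Markov model of Cui et al. [2020b]) exploits neighboring sensors as well, but the per-sensor temporal version above is the common baseline.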
Traffic anomalies (e.g., congestion) are an important external factor that may
affect prediction accuracy and it has been proven that under congested traffic
conditions a deep neural network may not perform as well as under normal
traffic conditions [Mena-Oreja & Gozalvez, 2020]. However, it remains a
challenge to collect enough anomaly data to train deep learning models
(including GNNs) on both normal and anomalous situations. The same concern
applies to social events, public holidays, etc.
#### 6.1.2 Multi-task Performance
For the public service operation of ITSs, a multi-task framework is necessary
to incorporate all the traffic information and predict the demand of multiple
transportation modes simultaneously. For example, knowledge adaptation is
proposed to adapt the relevant knowledge from an information-intensive source
to information-sparse sources for demand prediction [Li et al., 2020a].
Related challenges lie in data format incompatibilities as well as the
inherent differences in spatial or temporal patterns. While some of the
surveyed models can be used for multiple tasks, e.g., traffic flow and traffic
speed prediction on the same road segment, most can only be trained for a
single task at a time.
Multi-task forecasting is a bigger challenge in graph-based modeling because
different tasks may use different graph structures, e.g., road-level and
station-level problems use different graphs and are thus difficult to solve
with a single GNN model. Some efforts that have been made in GNN-based
models for multi-task prediction include taxi departure flow and arrival flow
[Chen et al., 2020h], region-flow and transition-flow [Wang et al., 2020b],
and crowd flows together with their OD [Wang et al., 2020e]. However, most of
the existing attempts are based on the same graph, with multiple outputs
generated by feed-forward layers. GNN-based multi-task prediction for
different types of traffic forecasting problems, especially those requiring
multiple graph structures, is a research direction requiring significant
further development.
#### 6.1.3 Practical Implementation
A number of challenges prevent the practical implementation of the models
developed in the surveyed studies in city-scale ITSs.
First, there is significant bias introduced by the small amount of data
considered in the existing GNN-based studies which, in most cases, spans less
than one year. The proposed solutions are therefore not necessarily applicable
to different time periods or places. If longer spans of traffic data are to be
used in GNNs, the corresponding changes in the underlying traffic
infrastructure should be recorded and updated, which increases both the
expense and difficulty of the associated data collection process in practice.
A second challenge is the computation scalability of GNNs. To avoid the huge
computation requirements of the large-scale real-world traffic network graphs,
only a subset of the nodes and edges are typically considered. For example,
most studies only use a subset of the PeMS dataset when considering the road
traffic flow or speed problems. Their results can therefore only be applied to
the selected subsets. Graph partitioning and parallel computing
infrastructures have been proposed for solving this problem. The traffic speed
and flow of the entire PeMS dataset with 11,160 traffic sensor locations are
predicted simultaneously in Mallick et al. [2020], using a graph-partitioning
method that decomposes a large highway network into smaller networks and
trains a single DCRNN model on a cluster with graphics processing units
(GPUs). However, the increased modeling power improves on state-of-the-art
results only by narrow performance margins, compared to statistical and
machine learning models with less complex structures and lower computational
requirements.
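The partitioning step can be illustrated with a toy sketch. This is a hedged simplification (a greedy BFS grower standing in for the METIS-style partitioner used with DCRNN in Mallick et al. [2020], not their actual code): each connected chunk of at most `max_size` nodes can then be trained in parallel.

```python
from collections import deque

def bfs_partition(adj, max_size):
    """Greedily grow connected node chunks of at most max_size nodes
    from a dict-of-lists adjacency; each chunk can then be assigned
    to its own model replica / GPU."""
    unassigned = set(adj)
    parts = []
    while unassigned:
        part, queue = set(), deque([min(unassigned)])  # deterministic seed
        while queue and len(part) < max_size:
            node = queue.popleft()
            if node not in unassigned:
                continue
            unassigned.discard(node)
            part.add(node)
            queue.extend(n for n in adj[node] if n in unassigned)
        parts.append(sorted(part))
    return parts

# A 6-node ring road network, split into chunks of at most 3 nodes:
ring = {0: [1, 5], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 0]}
print(bfs_partition(ring, 3))  # [[0, 1, 5], [2, 3, 4]]
```

Real partitioners additionally minimize the number of edges cut between chunks, since cross-boundary dependencies are what the per-partition models lose.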
A third challenge is presented by changes in the transportation networks and
infrastructure, which are essential to build the graphs in GNNs. The real-
world network graphs change when road segments or bus lines are added or
removed. Points-of-interest in a city also change when new facilities are
built. Static graph formulations are not enough for handling these situations.
Some efforts have been made to solve this problem with promising results. For
example, a dynamic Laplacian matrix estimator is proposed to find the change
of Laplacian matrix, according to changes in spatial dependencies hidden in
the traffic data [Diao et al., 2019], and a Data Adaptive Graph Generation
(DAGG) module is proposed to infer the inter-dependencies between different
traffic series automatically, without using pre-defined graphs based on
spatial connections [Bai et al., 2020].
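The idea of inferring the graph from data rather than pre-defining it can be written compactly. In the sketch below (toy fixed embeddings; in DAGG-style models the embeddings are learned end-to-end with the forecasting loss), an adjacency matrix is computed as a row-wise softmax over ReLU(E·Eᵀ):

```python
import math

def adaptive_adjacency(emb):
    """Infer a dense, row-stochastic adjacency from node embeddings:
    A = softmax(ReLU(E @ E.T)), applied row by row."""
    n = len(emb)
    scores = [[max(0.0, sum(a * b for a, b in zip(emb[i], emb[j])))
               for j in range(n)] for i in range(n)]
    adj = []
    for row in scores:
        exps = [math.exp(s) for s in row]
        total = sum(exps)
        adj.append([e / total for e in exps])
    return adj

# Three nodes with 2-d embeddings; every row of A sums to 1:
A = adaptive_adjacency([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print([round(sum(row), 6) for row in A])  # [1.0, 1.0, 1.0]
```

Because the adjacency is a function of the embeddings, it updates automatically as the network retrains, which is exactly what makes such formulations attractive when the physical infrastructure changes.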
#### 6.1.4 Model Interpretation
The challenge of model interpretation is a point of criticism for all “black-
box” machine learning or deep learning models, and traffic forecasting tasks
are no exception [Wu et al., 2018b, Barredo-Arrieta et al., 2019]. While there
has been remarkable progress in visualizing and explaining other deep neural
network structures, e.g., CNNs, the development of post-processing techniques
to explain the predictions made by GNNs is still at an early stage
[Baldassarre & Azizpour, 2019, Pope et al., 2019, Ying et al., 2019], and the
application of these techniques to the traffic forecasting domain has not yet
been addressed.
### 6.2 Future Directions
#### 6.2.1 Centralized Data Repository
A centralized data repository for GNN-based traffic forecasting resources
would facilitate objective comparison of the performance of different models
and be an invaluable contribution to the field. This future direction is
proposed for the challenge of heterogeneous data as well as the data quality
problem. Another unique feature of this repository could be the inclusion of
graph-related data, which have not been provided directly in previous traffic
forecasting studies.
Some criteria for building such data repositories, e.g. a unified data format,
tracking of dataset versions, public code and ranked results, and sufficient
record lengths (longer than a year ideally), have been discussed in previous
surveys [Manibardo et al., 2020]. Compiling a centralized and standardized
data repository is particularly challenging for GNN-based models, where
natural graphs are collected and stored in a variety of data formats (e.g.,
the Esri Shapefile and the OSM XML format of OpenStreetMap are both used for
digital maps in the GIS community) and various similarity graphs can be
constructed from the same traffic data by different models.
Some previous attempts in this direction have been made in the machine
learning community, e.g., setting benchmarks for several traffic prediction
tasks in Papers With Code (https://paperswithcode.com/task/traffic-prediction),
and in data science competitions, e.g., the Traffic4cast competition series
(https://www.iarai.ac.at/traffic4cast/). However, the realization of a
centralized data repository remains an open challenge.
#### 6.2.2 Combination with Other Techniques
GNNs may be combined with other advanced techniques to overcome some of their
inherent challenges and achieve better performance.
Data Augmentation. Data augmentation has been proven effective for boosting
the performance of deep learning models, e.g. in image classification tasks
and time series prediction tasks. Data augmentation is proposed for the
challenge of the possible forecasting bias introduced by the small amount of
available data. However, due to the complex structure of graphs, it is more
challenging to apply data augmentation techniques to GNNs. Recently, data
augmentation for GNNs has proven helpful in semi-supervised node
classification tasks [Zhao et al., 2021]. However, it remains an open question
whether data augmentation is effective in GNN-based traffic forecasting
applications.
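One widely used graph augmentation, random edge dropping, is straightforward to sketch (a generic illustration rather than the specific augmentations of Zhao et al. [2021]); each epoch the GNN then sees a slightly different subgraph, which acts as a structural regularizer:

```python
import random

def drop_edges(edges, drop_prob, seed=0):
    """Keep each edge independently with probability 1 - drop_prob;
    resampling with a different seed per epoch yields augmented
    subgraph views of the same traffic network."""
    rng = random.Random(seed)
    return [e for e in edges if rng.random() >= drop_prob]

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(len(drop_edges(edges, 0.0)))  # 4: nothing dropped
print(len(drop_edges(edges, 1.0)))  # 0: everything dropped
```

Whether such structural perturbations help or hurt in traffic settings, where edges encode physical road connectivity, is precisely the open question raised above.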
Transfer Learning. Transfer learning utilizes knowledge or models trained for
one task to solve related tasks, especially those with limited data. In the
image classification field, pre-trained deep learning models from the ImageNet
or MS COCO datasets are widely used in other problems. In traffic prediction
problems, where a lack of historical data is a frequent problem, transfer
learning is a possible solution. For GNNs, knowledge can be transferred from a
graph with abundant historical traffic data to another graph with less
available data. Transfer learning can also be used for
the challenge caused by the changes in the transportation networks and
infrastructure, when new stations or regions have not accumulated enough
historical traffic data to train a GNN model. A novel transfer learning
approach for DCRNN is proposed in Mallick et al. [2021], so that a model
trained on data-rich regions of highway network can be used to predict traffic
on unseen regions of the highway network. The authors demonstrated the
efficacy of model transferability between the San Francisco and Los Angeles
regions using different parts of the California road network from the PeMS.
Meta-learning. Meta-learning, or learning how to learn, has recently become a
potential learning paradigm that can absorb information from a task and
effectively generalize it to an unseen task. Meta-learning is proposed for the
challenge of GNN-based multi-task prediction, especially tasks involving
multiple graphs. There are different types of meta-learning methods, and some of
them are combined with graph structures for describing relationships between
tasks or data samples [Garcia & Bruna, 2017, Liu et al., 2019]. Based on a
deep meta learning method called network weight generation, ST-MetaNet+ is
proposed in Pan et al. [2020], which leverages the meta knowledge extracted
from geo-graph attributes and dynamic traffic context learned from traffic
states to generate the parameter weights in graph attention networks and RNNs,
so that the inherent relationships between diverse types of spatiotemporal
correlations and geo-graph attributes can be captured.
Generative Adversarial Network (GAN) [Goodfellow et al., 2014]. GAN is a
machine learning framework that has two components, namely, a generator, which
learns to generate plausible data, and a discriminator, which learns to
distinguish the generator’s fake data from real data. After training to a
state of Nash equilibrium, the generator can produce data indistinguishable
from real samples, which helps to expand the training data size for many
problems, including GNN-based traffic forecasting. GAN is proposed for the
challenges caused by the
small data amount used in previous studies or the changes in the
transportation networks and infrastructure when not enough historical traffic
data are available. In Xu et al. [2020a], the road network is used directly as
the graph, in which the nodes are road state detectors and the edges are built
based on their adjacent links. DeepWalk is used to embed the graph and the
road traffic state sensor information is transferred into a low-dimensional
space. Then, the Wasserstein GAN (WGAN) [Arjovsky et al., 2017] is used to
train the traffic state data distribution and generate predicted results. Both
public traffic flow (i.e., Caltrans PeMSD7) and traffic speed (i.e., METR-LA)
datasets are used for evaluation, and the results demonstrate the
effectiveness of the GAN-based solution when used in graph-based modeling.
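The GAN objective underlying such approaches reduces to two simple losses over critic scores. The sketch below (hypothetical score batches; real WGAN training additionally needs a weight-clipping or gradient-penalty step) computes the Wasserstein critic and generator losses:

```python
def wgan_losses(d_real, d_fake):
    """WGAN losses from batches of critic scores: the critic minimizes
    mean(D(fake)) - mean(D(real)), i.e., maximizes the score gap, while
    the generator minimizes -mean(D(fake))."""
    mean_fake = sum(d_fake) / len(d_fake)
    mean_real = sum(d_real) / len(d_real)
    return mean_fake - mean_real, -mean_fake

# A critic that already scores real traffic states higher than fakes:
critic_loss, gen_loss = wgan_losses(d_real=[1.0, 1.0], d_fake=[0.0, 0.0])
print(critic_loss, gen_loss)  # -1.0 -0.0
```

In the traffic setting, `d_real` and `d_fake` would be critic scores on observed and generated traffic-state embeddings (e.g., the DeepWalk embeddings used in Xu et al. [2020a]).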
Automated Machine Learning (AutoML). The application of machine learning
requires considerable manual intervention in various aspects of the process,
including feature extraction, model selection, and parameter adjustment.
AutoML automatically learns the important steps related to features, models,
optimization, and evaluation, so that machine learning models can be applied
without manual intervention. AutoML would help to improve the implementation
of machine learning models, including GNNs. AutoML is proposed for the
challenge of computational requirements in graph-based modeling, in which case
the hyperparameter tuning for GNNs can be made more efficient with state-of-
the-art AutoML techniques. An early attempt to combine AutoML with GNNs for
traffic prediction problems is an Auto-STGCN algorithm, proposed in Wang et
al. [2020a]. This algorithm searches the parameter space for STGCN models
quickly based on reinforcement learning and generates optimal models
automatically for specific scenarios.
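At its core, AutoML-style model selection is a search loop over a configuration space. The sketch below is a generic random search (Auto-STGCN itself uses reinforcement learning over the STGCN design space; all names here are illustrative):

```python
import random

def random_search(evaluate, space, n_trials, seed=0):
    """Sample configurations from `space` (a dict mapping hyperparameter
    names to candidate lists), score each with `evaluate` (lower is
    better), and return the best (score, config) pair found."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        cfg = {k: rng.choice(opts) for k, opts in space.items()}
        score = evaluate(cfg)
        if best is None or score < best[0]:
            best = (score, cfg)
    return best

# A toy objective standing in for validation loss of a trained GNN:
space = {"lr": [0.1, 0.01, 0.001], "num_layers": [1, 2, 3]}
best = random_search(lambda cfg: abs(cfg["lr"] - 0.01), space, n_trials=20)
print(best)
```

Replacing `evaluate` with actual model training is what makes the search expensive, and is why more sample-efficient strategies such as reinforcement learning or Bayesian optimization are preferred for GNNs.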
Bayesian Network. Most of the existing studies aim for deterministic models
that make mean predictions. However, some traffic applications rely on
uncertainty estimates of future situations. To tackle this gap, the
Bayesian network, which is a type of probabilistic graphical model using
Bayesian inference for probability computations, is a promising solution. The
combination of GNNs with Bayesian networks is proposed for the challenge of
GNN model interpretation. Probabilistic predictions provide uncertainty
estimates for future situations, in particular the chance of extreme traffic
states. A similar alternative is Quantile Regression, which
estimates the quantile function of a distribution at chosen points, combined
with Graph WaveNet for uncertainty estimates [Maas & Bloem, 2020].
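Quantile regression replaces the usual squared error with the pinball loss. The sketch below is a generic formulation (Maas & Bloem [2020] apply it on top of Graph WaveNet); it penalizes under- and over-prediction asymmetrically at quantile q:

```python
def pinball_loss(y_true, y_pred, q):
    """Mean pinball loss at quantile q in (0, 1): under-prediction is
    weighted by q and over-prediction by 1 - q, so minimizing it makes
    y_pred estimate the q-th conditional quantile."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        diff = t - p
        total += max(q * diff, (q - 1.0) * diff)
    return total / len(y_true)

# Under-predicting by 2 units hurts much more at the 0.9 quantile:
print(pinball_loss([10.0], [8.0], 0.5))  # 1.0
print(pinball_loss([10.0], [8.0], 0.9))  # 1.8
```

Training the same network at several values of q yields the prediction intervals needed for risk-aware traffic applications.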
#### 6.2.3 Applications in Real-World ITS Systems
Last but not least, most of the surveyed GNN-based studies are based only on
simulations with historical traffic data, without being validated or deployed
in real-world ITS systems. There are, however, a number of potential
applications, especially for GNN-based models given their better forecasting
performance. To name a few potential cases, a GNN-based forecasting model
can be used for traffic light control in signalized intersections, when each
intersection is modeled as a node in the graph and the corresponding traffic
flow forecasting result can be used to design the traffic light control
strategy. Another example is the application in map service and navigation
applications, in which each road segment is modeled as a node in the graph and
the corresponding traffic speed and travel time forecasting result can be used
to calculate the estimated time of arrival. A third example is the application
in online ride-hailing service providers, e.g., Uber and Lyft, in which each
region is modeled as a node and the corresponding ride-hailing demand
forecasting can be used to design a more profitable vehicle dispatching and
scheduling system. Inspired by these potential application scenarios, there
are many research opportunities for researchers from both academia and
industry.
## 7 Conclusion
In this paper, a comprehensive review of the application of GNNs for traffic
forecasting is presented. Three levels of traffic problems and graphs are
summarized, namely, road-level, region-level, and station-level. The usage of
recurrent GNNs, convolutional GNNs, and graph autoencoders is discussed. We
also provide the latest collection of open datasets and code resources for
this topic. Challenges and future directions are further pointed out for
follow-up research.
## References
* Agafonov [2020] Agafonov, A. (2020). Traffic flow prediction using graph convolution neural networks. In 2020 10th International Conference on Information Science and Technology (ICIST) (pp. 91–95). IEEE.
* Arjovsky et al. [2017] Arjovsky, M., Chintala, S., & Bottou, L. (2017). Wasserstein gan. arXiv preprint arXiv:1701.07875.
* Atwood & Towsley [2016] Atwood, J., & Towsley, D. (2016). Diffusion-convolutional neural networks. In NIPS.
* Bai et al. [2021] Bai, J., Zhu, J., Song, Y., Zhao, L., Hou, Z., Du, R., & Li, H. (2021). A3t-gcn: attention temporal graph convolutional network for traffic forecasting. ISPRS International Journal of Geo-Information, 10, 485.
* Bai et al. [2019a] Bai, L., Yao, L., Kanhere, S. S., Wang, X., Liu, W., & Yang, Z. (2019a). Spatio-temporal graph convolutional and recurrent networks for citywide passenger demand prediction. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management (pp. 2293–2296).
* Bai et al. [2019b] Bai, L., Yao, L., Kanhere, S. S., Wang, X., & Sheng, Q. Z. (2019b). Stg2seq: spatial-temporal graph to sequence model for multi-step passenger demand forecasting. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (pp. 1981–1987). AAAI Press.
* Bai et al. [2020] Bai, L., Yao, L., Li, C., Wang, X., & Wang, C. (2020). Adaptive graph convolutional recurrent network for traffic forecasting. In Advances in Neural Information Processing Systems.
* Baldassarre & Azizpour [2019] Baldassarre, F., & Azizpour, H. (2019). Explainability techniques for graph convolutional networks. In International Conference on Machine Learning (ICML) Workshops, 2019 Workshop on Learning and Reasoning with Graph-Structured Representations.
* Barredo-Arrieta et al. [2019] Barredo-Arrieta, A., Laña, I., & Del Ser, J. (2019). What lies beneath: A note on the explainability of black-box machine learning models for road traffic forecasting. In 2019 IEEE Intelligent Transportation Systems Conference (ITSC) (pp. 2232–2237). IEEE.
* Bing et al. [2020] Bing, H., Zhifeng, X., Yangjie, X., Jinxing, H., & Zhanwu, M. (2020). Integrating semantic zoning information with the prediction of road link speed based on taxi gps data. Complexity, 2020.
* Bogaerts et al. [2020] Bogaerts, T., Masegosa, A. D., Angarita-Zapata, J. S., Onieva, E., & Hellinckx, P. (2020). A graph cnn-lstm neural network for short and long-term traffic forecasting based on trajectory data. Transportation Research Part C: Emerging Technologies, 112, 62–77.
* Boukerche et al. [2020] Boukerche, A., Tao, Y., & Sun, P. (2020). Artificial intelligence-based vehicular traffic flow prediction methods for supporting intelligent transportation systems. Computer Networks, 182, 107484.
* Boukerche & Wang [2020a] Boukerche, A., & Wang, J. (2020a). Machine learning-based traffic prediction models for intelligent transportation systems. Computer Networks, 181, 107530.
* Boukerche & Wang [2020b] Boukerche, A., & Wang, J. (2020b). A performance modeling and analysis of a novel vehicular traffic flow prediction system using a hybrid machine learning-based model. Ad Hoc Networks.
* Bruna et al. [2014] Bruna, J., Zaremba, W., Szlam, A., & LeCun, Y. (2014). Spectral networks and deep locally connected networks on graphs. In 2nd International Conference on Learning Representations, ICLR 2014.
* Cai et al. [2020] Cai, L., Janowicz, K., Mai, G., Yan, B., & Zhu, R. (2020). Traffic transformer: Capturing the continuity and periodicity of time series for traffic forecasting. Transactions in GIS.
* Cao et al. [2020] Cao, D., Wang, Y., Duan, J., Zhang, C., Zhu, X., Huang, C., Tong, Y., Xu, B., Bai, J., Tong, J. et al. (2020). Spectral temporal graph neural network for multivariate time-series forecasting. Advances in Neural Information Processing Systems, 33.
* Chai et al. [2018] Chai, D., Wang, L., & Yang, Q. (2018). Bike flow prediction with multi-graph convolutional networks. In Proceedings of the 26th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (pp. 397–400).
* Chen et al. [2019] Chen, C., Li, K., Teo, S. G., Zou, X., Wang, K., Wang, J., & Zeng, Z. (2019). Gated residual recurrent graph neural networks for traffic prediction. In Proceedings of the AAAI Conference on Artificial Intelligence (pp. 485–492). volume 33.
* Chen et al. [2020a] Chen, F., Chen, Z., Biswas, S., Lei, S., Ramakrishnan, N., & Lu, C.-T. (2020a). Graph convolutional networks with kalman filtering for traffic prediction. In Proceedings of the 28th International Conference on Advances in Geographic Information Systems (pp. 135–138).
* Chen et al. [2020b] Chen, H., Rossi, R. A., Mahadik, K., & Eldardiry, H. (2020b). A context integrated relational spatio-temporal model for demand and supply forecasting. arXiv preprint arXiv:2009.12469.
* Chen et al. [2020c] Chen, J., Liao, S., Hou, J., Wang, K., & Wen, J. (2020c). Gst-gcn: A geographic-semantic-temporal graph convolutional network for context-aware traffic flow prediction on graph sequences. In 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC) (pp. 1604–1609). IEEE.
* Chen et al. [2020d] Chen, K., Chen, F., Lai, B., Jin, Z., Liu, Y., Li, K., Wei, L., Wang, P., Tang, Y., Huang, J. et al. (2020d). Dynamic spatio-temporal graph-based cnns for traffic flow prediction. IEEE Access, 8, 185136–185145.
* Chen et al. [2020e] Chen, L., Han, K., Yin, Q., & Cao, Z. (2020e). Gdcrn: Global diffusion convolutional residual network for traffic flow prediction. In International Conference on Knowledge Science, Engineering and Management (pp. 438–449). Springer.
* Chen et al. [2020f] Chen, W., Chen, L., Xie, Y., Cao, W., Gao, Y., & Feng, X. (2020f). Multi-range attentive bicomponent graph convolutional network for traffic forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence. volume 34.
* Chen et al. [2020g] Chen, X., Zhang, Y., Du, L., Fang, Z., Ren, Y., Bian, K., & Xie, K. (2020g). Tssrgcn: Temporal spectral spatial retrieval graph convolutional network for traffic flow forecasting. In 2020 IEEE International Conference on Data Mining (ICDM). IEEE.
* Chen et al. [2020h] Chen, Z., Zhao, B., Wang, Y., Duan, Z., & Zhao, X. (2020h). Multitask learning and gcn-based taxi demand prediction for a traffic road network. Sensors, 20, 3776.
* Cirstea et al. [2019] Cirstea, R.-G., Guo, C., & Yang, B. (2019). Graph attention recurrent neural networks for correlated time series forecasting. MileTS19@KDD.
* Cui et al. [2019] Cui, Z., Henrickson, K., Ke, R., & Wang, Y. (2019). Traffic graph convolutional recurrent neural network: A deep learning framework for network-scale traffic learning and forecasting. IEEE Transactions on Intelligent Transportation Systems.
* Cui et al. [2020a] Cui, Z., Ke, R., Pu, Z., Ma, X., & Wang, Y. (2020a). Learning traffic as a graph: A gated graph wavelet recurrent neural network for network-scale traffic prediction. Transportation Research Part C: Emerging Technologies, 115, 102620.
* Cui et al. [2020b] Cui, Z., Lin, L., Pu, Z., & Wang, Y. (2020b). Graph markov network for traffic forecasting with missing data. Transportation Research Part C: Emerging Technologies, 117, 102671. URL: http://www.sciencedirect.com/science/article/pii/S0968090X20305866. doi:https://doi.org/10.1016/j.trc.2020.102671.
* Dai et al. [2020] Dai, R., Xu, S., Gu, Q., Ji, C., & Liu, K. (2020). Hybrid spatio-temporal graph convolutional network: Improving traffic prediction with navigation data. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining KDD ’20 (p. 3074–3082). New York, NY, USA: Association for Computing Machinery. URL: https://doi.org/10.1145/3394486.3403358. doi:10.1145/3394486.3403358.
* Davis et al. [2020] Davis, N., Raina, G., & Jagannathan, K. (2020). Grids versus graphs: Partitioning space for improved taxi demand-supply forecasts. IEEE Transactions on Intelligent Transportation Systems.
* Defferrard et al. [2016] Defferrard, M., Bresson, X., & Vandergheynst, P. (2016). Convolutional neural networks on graphs with fast localized spectral filtering. In Proceedings of the 30th International Conference on Neural Information Processing Systems (pp. 3844–3852).
* Diao et al. [2019] Diao, Z., Wang, X., Zhang, D., Liu, Y., Xie, K., & He, S. (2019). Dynamic spatial-temporal graph convolutional neural networks for traffic forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence (pp. 890–897). volume 33.
* Du et al. [2020] Du, B., Hu, X., Sun, L., Liu, J., Qiao, Y., & Lv, W. (2020). Traffic demand prediction based on dynamic transition convolutional neural network. IEEE Transactions on Intelligent Transportation Systems.
* Fan et al. [2020] Fan, X., Xiang, C., Gong, L., He, X., Qu, Y., Amirgholipour, S., Xi, Y., Nanda, P., & He, X. (2020). Deep learning for intelligent traffic sensing and prediction: recent advances and future challenges. CCF Transactions on Pervasive Computing and Interaction, (pp. 1–21).
* Fang et al. [2020a] Fang, S., Pan, X., Xiang, S., & Pan, C. (2020a). Meta-msnet: Meta-learning based multi-source data fusion for traffic flow prediction. IEEE Signal Processing Letters.
* Fang et al. [2019] Fang, S., Zhang, Q., Meng, G., Xiang, S., & Pan, C. (2019). Gstnet: Global spatial-temporal network for traffic flow prediction. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19 (pp. 2286–2293). International Joint Conferences on Artificial Intelligence Organization. URL: https://doi.org/10.24963/ijcai.2019/317. doi:10.24963/ijcai.2019/317.
* Fang et al. [2020b] Fang, X., Huang, J., Wang, F., Zeng, L., Liang, H., & Wang, H. (2020b). Constgat: Contextual spatial-temporal graph attention network for travel time estimation at baidu maps. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining KDD ’20 (p. 2697–2705). New York, NY, USA: Association for Computing Machinery. URL: https://doi.org/10.1145/3394486.3403320. doi:10.1145/3394486.3403320.
* Feng et al. [2020] Feng, D., Wu, Z., Zhang, J., & Wu, Z. (2020). Dynamic global-local spatial-temporal network for traffic speed prediction. IEEE Access, 8, 209296–209307.
* Fu et al. [2020] Fu, J., Zhou, W., & Chen, Z. (2020). Bayesian spatio-temporal graph convolutional network for traffic forecasting. arXiv preprint arXiv:2010.07498.
* Fukuda et al. [2020] Fukuda, S., Uchida, H., Fujii, H., & Yamada, T. (2020). Short-term prediction of traffic flow under incident conditions using graph convolutional recurrent neural network and traffic simulation. IET Intelligent Transport Systems.
* Garcia & Bruna [2017] Garcia, V., & Bruna, J. (2017). Few-shot learning with graph neural networks. arXiv preprint arXiv:1711.04043.
* Ge et al. [2019a] Ge, L., Li, H., Liu, J., & Zhou, A. (2019a). Temporal graph convolutional networks for traffic speed prediction considering external factors. In 2019 20th IEEE International Conference on Mobile Data Management (MDM) (pp. 234–242). IEEE.
* Ge et al. [2019b] Ge, L., Li, H., Liu, J., & Zhou, A. (2019b). Traffic speed prediction with missing data based on tgcn. In 2019 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI) (pp. 522–529). IEEE.
* Ge et al. [2020] Ge, L., Li, S., Wang, Y., Chang, F., & Wu, K. (2020). Global spatial-temporal graph convolutional network for urban traffic speed prediction. Applied Sciences, 10, 1509.
* Geng et al. [2019a] Geng, X., Li, Y., Wang, L., Zhang, L., Yang, Q., Ye, J., & Liu, Y. (2019a). Spatiotemporal multi-graph convolution network for ride-hailing demand forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence (pp. 3656–3663). volume 33.
* Geng et al. [2019b] Geng, X., Wu, X., Zhang, L., Yang, Q., Liu, Y., & Ye, J. (2019b). Multi-modal graph interaction for multi-graph convolution network in urban spatiotemporal forecasting. arXiv preprint arXiv:1905.11395.
* George & Santra [2020] George, S., & Santra, A. K. (2020). Traffic prediction using multifaceted techniques: A survey. Wireless Personal Communications, 115, 1047–1106.
* Gilmer et al. [2017] Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O., & Dahl, G. E. (2017). Neural message passing for quantum chemistry. In International Conference on Machine Learning (pp. 1263–1272). PMLR.
* Goodfellow et al. [2014] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative adversarial nets. Advances in neural information processing systems, 27, 2672–2680.
* Guo & Yuan [2020] Guo, G., & Yuan, W. (2020). Short-term traffic speed forecasting based on graph attention temporal convolutional networks. Neurocomputing.
* Guo et al. [2019a] Guo, J., Song, C., & Wang, H. (2019a). A multi-step traffic speed forecasting model based on graph convolutional lstm. In 2019 Chinese Automation Congress (CAC) (pp. 2466–2471). IEEE.
* Guo et al. [2020a] Guo, K., Hu, Y., Qian, Z., Liu, H., Zhang, K., Sun, Y., Gao, J., & Yin, B. (2020a). Optimized graph convolution recurrent neural network for traffic prediction. IEEE Transactions on Intelligent Transportation Systems.
* Guo et al. [2020b] Guo, K., Hu, Y., Qian, Z., Sun, Y., Gao, J., & Yin, B. (2020b). Dynamic graph convolution network for traffic forecasting based on latent network of laplace matrix estimation. IEEE Transactions on Intelligent Transportation Systems.
* Guo et al. [2020c] Guo, K., Hu, Y., Qian, Z. S., Sun, Y., Gao, J., & Yin, B. (2020c). An optimized temporal-spatial gated graph convolution network for traffic forecasting. IEEE Intelligent Transportation Systems Magazine.
* Guo et al. [2019b] Guo, R., Jiang, Z., Huang, J., Tao, J., Wang, C., Li, J., & Chen, L. (2019b). Bikenet: Accurate bike demand prediction using graph neural networks for station rebalancing. In 2019 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI) (pp. 686–693). IEEE.
* Guo et al. [2019c] Guo, S., Lin, Y., Feng, N., Song, C., & Wan, H. (2019c). Attention based spatial-temporal graph convolutional networks for traffic flow forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence (pp. 922–929). volume 33.
* Guopeng et al. [2020] Guopeng, L., Knoop, V. L., & van Lint, H. (2020). Dynamic graph filters networks: A gray-box model for multistep traffic forecasting. In 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC) (pp. 1–6). IEEE.
* Haghighat et al. [2020] Haghighat, A. K., Ravichandra-Mouli, V., Chakraborty, P., Esfandiari, Y., Arabi, S., & Sharma, A. (2020). Applications of deep learning in intelligent transportation systems. Journal of Big Data Analytics in Transportation, 2, 115–145.
* Hamilton et al. [2017] Hamilton, W., Ying, Z., & Leskovec, J. (2017). Inductive representation learning on large graphs. In Advances in neural information processing systems (pp. 1024–1034).
* Han et al. [2020] Han, X., Shen, G., Yang, X., & Kong, X. (2020). Congestion recognition for hybrid urban road systems via digraph convolutional network. Transportation Research Part C: Emerging Technologies, 121, 102877.
* Han et al. [2019] Han, Y., Wang, S., Ren, Y., Wang, C., Gao, P., & Chen, G. (2019). Predicting station-level short-term passenger flow in a citywide metro network using spatiotemporal graph convolutional neural networks. ISPRS International Journal of Geo-Information, 8, 243.
* Hasanzadeh et al. [2019] Hasanzadeh, A., Liu, X., Duffield, N., & Narayanan, K. R. (2019). Piecewise stationary modeling of random processes over graphs with an application to traffic prediction. In 2019 IEEE International Conference on Big Data (Big Data) (pp. 3779–3788). IEEE.
* He & Shin [2020a] He, S., & Shin, K. G. (2020a). Dynamic flow distribution prediction for urban dockless e-scooter sharing reconfiguration. In Proceedings of The Web Conference 2020 (pp. 133–143).
* He & Shin [2020b] He, S., & Shin, K. G. (2020b). Towards fine-grained flow forecasting: A graph attention approach for bike sharing systems. In Proceedings of The Web Conference 2020 WWW ’20 (p. 88–98). New York, NY, USA: Association for Computing Machinery. URL: https://doi.org/10.1145/3366423.3380097. doi:10.1145/3366423.3380097.
* He et al. [2020] He, Y., Zhao, Y., Wang, H., & Tsui, K. L. (2020). Gc-lstm: A deep spatiotemporal model for passenger flow forecasting of high-speed rail network. In 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC) (pp. 1–6). IEEE.
* Heglund et al. [2020] Heglund, J. S., Taleongpong, P., Hu, S., & Tran, H. T. (2020). Railway delay prediction with spatial-temporal graph convolutional networks. In 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC) (pp. 1–6). IEEE.
* Hong et al. [2020] Hong, H., Lin, Y., Yang, X., Li, Z., Fu, K., Wang, Z., Qie, X., & Ye, J. (2020). Heteta: Heterogeneous information network embedding for estimating time of arrival. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining KDD ’20 (p. 2444–2454). New York, NY, USA: Association for Computing Machinery. URL: https://doi.org/10.1145/3394486.3403294. doi:10.1145/3394486.3403294.
* Hu et al. [2018] Hu, J., Guo, C., Yang, B., Jensen, C. S., & Chen, L. (2018). Recurrent multi-graph neural networks for travel cost prediction. arXiv preprint arXiv:1811.05157, .
* Hu et al. [2020] Hu, J., Yang, B., Guo, C., Jensen, C. S., & Xiong, H. (2020). Stochastic origin-destination matrix forecasting using dual-stage graph convolutional, recurrent neural networks. In 2020 IEEE 36th International Conference on Data Engineering (ICDE) (pp. 1417–1428). IEEE.
* Huang et al. [2020a] Huang, R., Huang, C., Liu, Y., Dai, G., & Kong, W. (2020a). Lsgcn: Long short-term traffic prediction with graph convolutional networks. In C. Bessiere (Ed.), Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20 (pp. 2355–2361). International Joint Conferences on Artificial Intelligence Organization. URL: https://doi.org/10.24963/ijcai.2020/326. doi:10.24963/ijcai.2020/326 main track.
* Huang et al. [2020b] Huang, Y., Zhang, S., Wen, J., & Chen, X. (2020b). Short-term traffic flow prediction based on graph convolutional network embedded lstm. In International Conference on Transportation and Development 2020 (pp. 159–168). American Society of Civil Engineers Reston, VA.
* James [2019] James, J. (2019). Online traffic speed estimation for urban road networks with few data: A transfer learning approach. In 2019 IEEE Intelligent Transportation Systems Conference (ITSC) (pp. 4024–4029). IEEE.
* James [2020] James, J. (2020). Citywide traffic speed prediction: A geometric deep learning approach. Knowledge-Based Systems, (p. 106592).
* Jepsen et al. [2019] Jepsen, T. S., Jensen, C. S., & Nielsen, T. D. (2019). Graph convolutional networks for road networks. In Proceedings of the 27th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (pp. 460–463).
* Jepsen et al. [2020] Jepsen, T. S., Jensen, C. S., & Nielsen, T. D. (2020). Relational fusion networks: Graph convolutional networks for road networks. IEEE Transactions on Intelligent Transportation Systems, .
* Jia et al. [2020] Jia, C., Wu, B., & Zhang, X.-P. (2020). Dynamic spatiotemporal graph neural network with tensor network. arXiv preprint arXiv:2003.08729, .
* Jiang & Zhang [2018] Jiang, W., & Zhang, L. (2018). Geospatial data to images: A deep-learning framework for traffic forecasting. Tsinghua Science and Technology, 24, 52–64.
* Jin et al. [2020a] Jin, G., Cui, Y., Zeng, L., Tang, H., Feng, Y., & Huang, J. (2020a). Urban ride-hailing demand prediction with multiple spatio-temporal information fusion network. Transportation Research Part C: Emerging Technologies, 117, 102665.
* Jin et al. [2020b] Jin, G., Xi, Z., Sha, H., Feng, Y., & Huang, J. (2020b). Deep multi-view spatiotemporal virtual graph neural network for significant citywide ride-hailing demand prediction. arXiv preprint arXiv:2007.15189, .
* Kang et al. [2019] Kang, Z., Xu, H., Hu, J., & Pei, X. (2019). Learning dynamic graph embedding for traffic flow forecasting: A graph self-attentive method. In 2019 IEEE Intelligent Transportation Systems Conference (ITSC) (pp. 2570–2576). IEEE.
* Ke et al. [2021] Ke, J., Feng, S., Zhu, Z., Yang, H., & Ye, J. (2021). Joint predictions of multi-modal ride-hailing demands: A deep multi-task multi-graph learning-based approach. Transportation Research Part C: Emerging Technologies, 127, 103063.
* Ke et al. [2019] Ke, J., Qin, X., Yang, H., Zheng, Z., Zhu, Z., & Ye, J. (2019). Predicting origin-destination ride-sourcing demand with a spatio-temporal encoder-decoder residual multi-graph convolutional network. arXiv preprint arXiv:1910.09103, .
* Kim et al. [2020] Kim, S.-S., Chung, M., & Kim, Y.-K. (2020). Urban traffic prediction using congestion diffusion model. In 2020 IEEE International Conference on Consumer Electronics-Asia (ICCE-Asia) (pp. 1–4). IEEE.
* Kim et al. [2019] Kim, T. S., Lee, W. K., & Sohn, S. Y. (2019). Graph convolutional network approach applied to predict hourly bike-sharing demands considering spatial, temporal, and global effects. PLOS ONE, 14, e0220782.
* Kipf & Welling [2016] Kipf, T. N., & Welling, M. (2016). Variational graph auto-encoders. arXiv preprint arXiv:1611.07308, .
* Kipf & Welling [2017] Kipf, T. N., & Welling, M. (2017). Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR ’17).
* Kong et al. [2020] Kong, X., Xing, W., Wei, X., Bao, P., Zhang, J., & Lu, W. (2020). Stgat: Spatial-temporal graph attention networks for traffic flow forecasting. IEEE Access, .
* Lee et al. [2019] Lee, D., Jung, S., Cheon, Y., Kim, D., & You, S. (2019). Demand forecasting from spatiotemporal data with graph networks and temporal-guided embedding. arXiv preprint arXiv:1905.10709, .
* Lee et al. [2021] Lee, K., Eo, M., Jung, E., Yoon, Y., & Rhee, W. (2021). Short-term traffic prediction with deep neural networks: A survey. IEEE Access, 9, 54739–54756.
* Lee & Rhee [2019a] Lee, K., & Rhee, W. (2019a). Ddp-gcn: Multi-graph convolutional network for spatiotemporal traffic forecasting. arXiv preprint arXiv:1905.12256, .
* Lee & Rhee [2019b] Lee, K., & Rhee, W. (2019b). Graph convolutional modules for traffic forecasting. CoRR, abs/1905.12256. URL: http://arxiv.org/abs/1905.12256. arXiv:1905.12256.
* Lewenfus et al. [2020] Lewenfus, G., Martins, W. A., Chatzinotas, S., & Ottersten, B. (2020). Joint forecasting and interpolation of time-varying graph signals using deep learning. IEEE Transactions on Signal and Information Processing over Networks, .
* Li & Axhausen [2020] Li, A., & Axhausen, K. W. (2020). Short-term traffic demand prediction using graph convolutional neural networks. AGILE: GIScience Series, 1, 1--14.
* Li et al. [2020a] Li, C., Bai, L., Liu, W., Yao, L., & Waller, S. T. (2020a). Knowledge adaption for demand prediction based on multi-task memory neural network. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management (pp. 715--724).
* Li et al. [2018a] Li, J., Peng, H., Liu, L., Xiong, G., Du, B., Ma, H., Wang, L., & Bhuiyan, M. Z. A. (2018a). Graph cnns for urban traffic passenger flows prediction. In 2018 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI) (pp. 29--36). IEEE.
* Li & Zhu [2021] Li, M., & Zhu, Z. (2021). Spatial-temporal fusion graph neural networks for traffic flow forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence (pp. 4189--4196). volume 35.
* Li et al. [2020b] Li, W., Wang, X., Zhang, Y., & Wu, Q. (2020b). Traffic flow prediction over muti-sensor data correlation with graph convolution network. Neurocomputing, .
* Li et al. [2020c] Li, W., Yang, X., Tang, X., & Xia, S. (2020c). Sdcn: Sparsity and diversity driven correlation networks for traffic demand forecasting. In 2020 International Joint Conference on Neural Networks (IJCNN) (pp. 1--8). IEEE.
* Li & Moura [2020] Li, Y., & Moura, J. M. (2020). Forecaster: A graph transformer for forecasting spatial and time-dependent data. In Proceedings of the Twenty-fourth European Conference on Artificial Intelligence.
* Li et al. [2018b] Li, Y., Yu, R., Shahabi, C., & Liu, Y. (2018b). Diffusion convolutional recurrent neural network: Data-driven traffic forecasting. In International Conference on Learning Representations (ICLR ’18).
* Li et al. [2020d] Li, Z., Li, L., Peng, Y., & Tao, X. (2020d). A two-stream graph convolutional neural network for dynamic traffic flow forecasting. In 2020 IEEE 32nd International Conference on Tools with Artificial Intelligence (ICTAI) (pp. 355--362). IEEE.
* Li et al. [2020e] Li, Z., Sergin, N. D., Yan, H., Zhang, C., & Tsung, F. (2020e). Tensor completion for weakly-dependent data on graph for metro passenger flow prediction. In Proceedings of the AAAI Conference on Artificial Intelligence. volume 34.
* Li et al. [2019] Li, Z., Xiong, G., Chen, Y., Lv, Y., Hu, B., Zhu, F., & Wang, F.-Y. (2019). A hybrid deep learning approach with gcn and lstm for traffic flow prediction. In 2019 IEEE Intelligent Transportation Systems Conference (ITSC) (pp. 1929--1933). IEEE.
* Li et al. [2020f] Li, Z., Xiong, G., Tian, Y., Lv, Y., Chen, Y., Hui, P., & Su, X. (2020f). A multi-stream feature fusion approach for traffic prediction. IEEE Transactions on Intelligent Transportation Systems, .
* Liao et al. [2018] Liao, B., Zhang, J., Wu, C., McIlwraith, D., Chen, T., Yang, S., Guo, Y., & Wu, F. (2018). Deep sequence learning with auxiliary information for traffic prediction. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 537--546).
* Lin et al. [2018] Lin, L., He, Z., & Peeta, S. (2018). Predicting station-level hourly demand in a large-scale bike-sharing network: A graph convolutional neural network approach. Transportation Research Part C: Emerging Technologies, 97, 258--276.
* Liu et al. [2020a] Liu, J., Ong, G. P., & Chen, X. (2020a). Graphsage-based traffic speed forecasting for segment network with sparse data. IEEE Transactions on Intelligent Transportation Systems, .
* Liu et al. [2020b] Liu, L., Chen, J., Wu, H., Zhen, J., Li, G., & Lin, L. (2020b). Physical-virtual collaboration modeling for intra-and inter-station metro ridership prediction. IEEE Transactions on Intelligent Transportation Systems, .
* Liu et al. [2019] Liu, L., Zhou, T., Long, G., Jiang, J., & Zhang, C. (2019). Learning to propagate for graph meta-learning. In Advances in Neural Information Processing Systems (pp. 1039--1050).
* Liu et al. [2020c] Liu, R., Zhao, S., Cheng, B., Yang, H., Tang, H., & Yang, F. (2020c). St-mfm: A spatiotemporal multi-modal fusion model for urban anomalies prediction. In Proceedings of the Twenty-fourth European Conference on Artificial Intelligence.
* Lu et al. [2020a] Lu, B., Gan, X., Jin, H., Fu, L., & Zhang, H. (2020a). Spatiotemporal adaptive gated graph convolution network for urban traffic flow forecasting. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management (pp. 1025--1034).
* Lu et al. [2019a] Lu, M., Zhang, K., Liu, H., & Xiong, N. (2019a). Graph hierarchical convolutional recurrent neural network (ghcrnn) for vehicle condition prediction. arXiv preprint arXiv:1903.06261, .
* Lu et al. [2020b] Lu, Z., Lv, W., Cao, Y., Xie, Z., Peng, H., & Du, B. (2020b). Lstm variants meet graph neural networks for road speed prediction. Neurocomputing, .
* Lu et al. [2019b] Lu, Z., Lv, W., Xie, Z., Du, B., & Huang, R. (2019b). Leveraging graph neural network with lstm for traffic speed prediction. In 2019 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI) (pp. 74--81). IEEE.
* Luca et al. [2020] Luca, M., Barlacchi, G., Lepri, B., & Pappalardo, L. (2020). Deep learning for human mobility: a survey on data and models. arXiv preprint arXiv:2012.02825, .
* Luo et al. [2020] Luo, M., Du, B., Klemmer, K., Zhu, H., Ferhatosmanoglu, H., & Wen, H. (2020). D3p: Data-driven demand prediction for fast expanding electric vehicle sharing systems. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 4, 1--21.
* Lv et al. [2020] Lv, M., Hong, Z., Chen, L., Chen, T., Zhu, T., & Ji, S. (2020). Temporal multi-graph convolutional network for traffic flow prediction. IEEE Transactions on Intelligent Transportation Systems, .
* Maas & Bloem [2020] Maas, T., & Bloem, P. (2020). Uncertainty intervals for graph-based spatio-temporal traffic prediction. arXiv preprint arXiv:2012.05207, .
* Mallick et al. [2020] Mallick, T., Balaprakash, P., Rask, E., & Macfarlane, J. (2020). Graph-partitioning-based diffusion convolution recurrent neural network for large-scale traffic forecasting. Transportation Research Record, (p. 0361198120930010).
* Mallick et al. [2021] Mallick, T., Balaprakash, P., Rask, E., & Macfarlane, J. (2021). Transfer learning with graph neural networks for short-term highway traffic forecasting. In 2020 25th International Conference on Pattern Recognition (ICPR) (pp. 10367--10374). IEEE.
* Manibardo et al. [2020] Manibardo, E. L., Laña, I., & Del Ser, J. (2020). Deep learning for road traffic forecasting: Does it make a difference? arXiv preprint arXiv:2012.02260, .
* Mena-Oreja & Gozalvez [2020] Mena-Oreja, J., & Gozalvez, J. (2020). A comprehensive evaluation of deep learning-based techniques for traffic prediction. IEEE Access, 8, 91188--91212.
* Mohanty & Pozdnukhov [2018] Mohanty, S., & Pozdnukhov, A. (2018). Graph cnn+ lstm framework for dynamic macroscopic traffic congestion prediction. In International Workshop on Mining and Learning with Graphs.
* Mohanty et al. [2020] Mohanty, S., Pozdnukhov, A., & Cassidy, M. (2020). Region-wide congestion prediction and control using deep learning. Transportation Research Part C: Emerging Technologies, 116, 102624.
* Opolka et al. [2019] Opolka, F. L., Solomon, A., Cangea, C., Veličković, P., Liò, P., & Hjelm, R. D. (2019). Spatio-temporal deep graph infomax. In Representation Learning on Graphs and Manifolds, ICLR 2019 Workshop.
* Oreshkin et al. [2021] Oreshkin, B. N., Amini, A., Coyle, L., & Coates, M. (2021). Fc-gaga: Fully connected gated graph architecture for spatio-temporal traffic forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence (pp. 9233--9241). volume 35.
* Ou et al. [2020] Ou, J., Sun, J., Zhu, Y., Jin, H., Liu, Y., Zhang, F., Huang, J., & Wang, X. (2020). Stp-trellisnets: Spatial-temporal parallel trellisnets for metro station passenger flow prediction. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management (pp. 1185--1194).
* Pan et al. [2019] Pan, Z., Liang, Y., Wang, W., Yu, Y., Zheng, Y., & Zhang, J. (2019). Urban traffic prediction from spatio-temporal data using deep meta learning. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 1720--1730).
* Pan et al. [2020] Pan, Z., Zhang, W., Liang, Y., Zhang, W., Yu, Y., Zhang, J., & Zheng, Y. (2020). Spatio-temporal meta learning for urban traffic prediction. IEEE Transactions on Knowledge and Data Engineering, .
* Park et al. [2020] Park, C., Lee, C., Bahng, H., Tae, Y., Jin, S., Kim, K., Ko, S., & Choo, J. (2020). St-grat: A novel spatio-temporal graph attention networks for accurately forecasting dynamically changing road speed. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management (pp. 1215--1224).
* Pavlyuk [2019] Pavlyuk, D. (2019). Feature selection and extraction in spatiotemporal traffic forecasting: a systematic literature review. European Transport Research Review, 11, 6.
* Peng et al. [2020] Peng, H., Wang, H., Du, B., Bhuiyan, M. Z. A., Ma, H., Liu, J., Wang, L., Yang, Z., Du, L., Wang, S. et al. (2020). Spatial temporal incidence dynamic graph neural networks for traffic flow forecasting. Information Sciences, 521, 277--290.
* Pian & Wu [2020] Pian, W., & Wu, Y. (2020). Spatial-temporal dynamic graph attention networks for ride-hailing demand prediction. arXiv preprint arXiv:2006.05905, .
* Pope et al. [2019] Pope, P. E., Kolouri, S., Rostami, M., Martin, C. E., & Hoffmann, H. (2019). Explainability methods for graph convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 10772--10781).
* Qin et al. [2020a] Qin, K., Xu, Y., Kang, C., & Kwan, M.-P. (2020a). A graph convolutional network model for evaluating potential congestion spots based on local urban built environments. Transactions in GIS, .
* Qin et al. [2020b] Qin, T., Liu, T., Wu, H., Tong, W., & Zhao, S. (2020b). Resgcn: Residual graph convolutional network based free dock prediction in bike sharing system. In 2020 21st IEEE International Conference on Mobile Data Management (MDM) (pp. 210--217). IEEE.
* Qiu et al. [2020] Qiu, H., Zheng, Q., Msahli, M., Memmi, G., Qiu, M., & Lu, J. (2020). Topological graph convolutional network-based urban traffic flow and density prediction. IEEE Transactions on Intelligent Transportation Systems, .
* Qu et al. [2020] Qu, Y., Zhu, Y., Zang, T., Xu, Y., & Yu, J. (2020). Modeling local and global flow aggregation for traffic flow forecasting. In International Conference on Web Information Systems Engineering (pp. 414--429). Springer.
* Ramadan et al. [2020] Ramadan, A., Elbery, A., Zorba, N., & Hassanein, H. S. (2020). Traffic forecasting using temporal line graph convolutional network: Case study. In ICC 2020-2020 IEEE International Conference on Communications (ICC) (pp. 1--6). IEEE.
* Ren & Xie [2019] Ren, Y., & Xie, K. (2019). Transfer knowledge between sub-regions for traffic prediction using deep learning method. In International Conference on Intelligent Data Engineering and Automated Learning (pp. 208--219). Springer.
* Sánchez et al. [2020] Sánchez, C. S., Wieder, A., Sottovia, P., Bortoli, S., Baumbach, J., & Axenie, C. (2020). Gannster: Graph-augmented neural network spatio-temporal reasoner for traffic forecasting. In International Workshop on Advanced Analysis and Learning on Temporal Data (AALTD). Springer.
* Scarselli et al. [2008] Scarselli, F., Gori, M., Tsoi, A. C., Hagenbuchner, M., & Monfardini, G. (2008). The graph neural network model. IEEE transactions on neural networks, 20, 61--80.
* Shao et al. [2020] Shao, K., Wang, K., Chen, L., & Zhou, Z. (2020). Estimation of urban travel time with sparse traffic surveillance data. In Proceedings of the 2020 4th High Performance Computing and Cluster Technologies Conference & 2020 3rd International Conference on Big Data and Artificial Intelligence (pp. 218--223).
* Shen et al. [2020] Shen, Y., Jin, C., & Hua, J. (2020). Ttpnet: A neural network for travel time prediction based on tensor decomposition and graph embedding. IEEE Transactions on Knowledge and Data Engineering, .
* Shi et al. [2020] Shi, H., Yao, Q., Guo, Q., Li, Y., Zhang, L., Ye, J., Li, Y., & Liu, Y. (2020). Predicting origin-destination flow via multi-perspective graph convolutional network. In 2020 IEEE 36th International Conference on Data Engineering (ICDE) (pp. 1818--1821). IEEE.
* Shi & Yeung [2018] Shi, X., & Yeung, D.-Y. (2018). Machine learning for spatiotemporal sequence forecasting: A survey. arXiv preprint arXiv:1808.06865, .
* Shin & Yoon [2020] Shin, Y., & Yoon, Y. (2020). Incorporating dynamicity of transportation network with multi-weight traffic graph convolutional network for traffic forecasting. IEEE Transactions on Intelligent Transportation Systems, .
* Shleifer et al. [2019] Shleifer, S., McCreery, C., & Chitters, V. (2019). Incrementally improving graph wavenet performance on traffic prediction. arXiv preprint arXiv:1912.07390, .
* Song et al. [2020a] Song, C., Lin, Y., Guo, S., & Wan, H. (2020a). Spatial-temporal synchronous graph convolutional networks: A new framework for spatial-temporal network data forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence. volume 34.
* Song et al. [2020b] Song, Q., Ming, R., Hu, J., Niu, H., & Gao, M. (2020b). Graph attention convolutional network: Spatiotemporal modeling for urban traffic prediction. In 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC) (pp. 1--6). IEEE.
* Sun et al. [2020] Sun, J., Zhang, J., Li, Q., Yi, X., Liang, Y., & Zheng, Y. (2020). Predicting citywide crowd flows in irregular regions using multi-view graph convolutional networks. IEEE Transactions on Knowledge and Data Engineering, (pp. 1--1).
* Sun et al. [2020] Sun, X., Li, J., Lv, Z., & Dong, C. (2020). Traffic flow prediction model based on spatio-temporal dilated graph convolution. KSII Transactions on Internet & Information Systems, 14.
* Sun et al. [2021] Sun, Y., Wang, Y., Fu, K., Wang, Z., Zhang, C., & Ye, J. (2021). Constructing geographic and long-term temporal graph for traffic forecasting. In 2020 25th International Conference on Pattern Recognition (ICPR) (pp. 3483--3490). IEEE.
* Tang et al. [2020a] Tang, C., Sun, J., & Sun, Y. (2020a). Dynamic spatial-temporal graph attention graph convolutional network for short-term traffic flow forecasting. In 2020 IEEE International Symposium on Circuits and Systems (ISCAS) (pp. 1--5). IEEE.
* Tang et al. [2020b] Tang, C., Sun, J., Sun, Y., Peng, M., & Gan, N. (2020b). A general traffic flow prediction approach based on spatial-temporal graph attention. IEEE Access, 8, 153731--153741.
* Tedjopurnomo et al. [2020] Tedjopurnomo, D. A., Bao, Z., Zheng, B., Choudhury, F., & Qin, A. (2020). A survey on modern deep neural network for traffic prediction: Trends, methods and challenges. IEEE Transactions on Knowledge and Data Engineering, .
* Tian et al. [2020] Tian, K., Guo, J., Ye, K., & Xu, C.-Z. (2020). St-mgat: Spatial-temporal multi-head graph attention networks for traffic forecasting. In 2020 IEEE 32nd International Conference on Tools with Artificial Intelligence (ICTAI) (pp. 714--721). IEEE.
* Varghese et al. [2020] Varghese, V., Chikaraishi, M., & Urata, J. (2020). Deep learning in transport studies: A meta-analysis on the prediction accuracy. Journal of Big Data Analytics in Transportation, (pp. 1--22).
* Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems, 30, 5998--6008.
* Veličković et al. [2018] Veličković, P., Cucurull, G., Casanova, A., Romero, A., Liò, P., & Bengio, Y. (2018). Graph attention networks. In International Conference on Learning Representations.
* Wang et al. [2018a] Wang, B., Luo, X., Zhang, F., Yuan, B., Bertozzi, A. L., & Brantingham, P. J. (2018a). Graph-based deep modeling and real time forecasting of sparse spatio-temporal data. arXiv preprint arXiv:1804.00684, .
* Wang et al. [2020a] Wang, C., Zhang, K., Wang, H., & Chen, B. (2020a). Auto-stgcn: Autonomous spatial-temporal graph convolutional network search based on reinforcement learning and existing research results. arXiv preprint arXiv:2010.07474, .
* Wang et al. [2020b] Wang, F., Xu, J., Liu, C., Zhou, R., & Zhao, P. (2020b). Mtgcn: A multitask deep learning model for traffic flow prediction. In International Conference on Database Systems for Advanced Applications (pp. 435--451). Springer.
* Wang et al. [2020c] Wang, H.-W., Peng, Z.-R., Wang, D., Meng, Y., Wu, T., Sun, W., & Lu, Q.-C. (2020c). Evaluation and prediction of transportation resilience under extreme weather events: A diffusion graph convolutional approach. Transportation Research Part C: Emerging Technologies, 115, 102619.
* Wang et al. [2020d] Wang, Q., Guo, B., Ouyang, Y., Shu, K., Yu, Z., & Liu, H. (2020d). Spatial community-informed evolving graphs for demand prediction. In Proceedings of The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2020).
* Wang et al. [2020e] Wang, S., Miao, H., Chen, H., & Huang, Z. (2020e). Multi-task adversarial spatial-temporal networks for crowd flow prediction. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management (pp. 1555--1564).
* Wang et al. [2018b] Wang, X., Chen, C., Min, Y., He, J., Yang, B., & Zhang, Y. (2018b). Efficient metropolitan traffic prediction based on graph recurrent neural network. arXiv preprint arXiv:1811.00740, .
* Wang et al. [2020f] Wang, X., Guan, X., Cao, J., Zhang, N., & Wu, H. (2020f). Forecast network-wide traffic states for multiple steps ahead: A deep learning approach considering dynamic non-local spatial correlation and non-stationary temporal dependency. Transportation Research Part C: Emerging Technologies, 119, 102763. URL: http://www.sciencedirect.com/science/article/pii/S0968090X20306756. doi:https://doi.org/10.1016/j.trc.2020.102763.
* Wang et al. [2020g] Wang, X., Ma, Y., Wang, Y., Jin, W., Wang, X., Tang, J., Jia, C., & Yu, J. (2020g). Traffic flow prediction via spatial temporal graph neural network. In Proceedings of The Web Conference 2020 WWW ’20 (p. 1082–1092). New York, NY, USA: Association for Computing Machinery. URL: https://doi.org/10.1145/3366423.3380186. doi:10.1145/3366423.3380186.
* Wang et al. [2020h] Wang, Y., Xu, D., Peng, P., Xuan, Q., & Zhang, G. (2020h). An urban commuters’ od hybrid prediction method based on big gps data. Chaos: An Interdisciplinary Journal of Nonlinear Science, 30, 093128\.
* Wang et al. [2019] Wang, Y., Yin, H., Chen, H., Wo, T., Xu, J., & Zheng, K. (2019). Origin-destination matrix prediction via graph convolution: a new perspective of passenger demand modeling. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 1227--1235).
* Wei & Sheng [2020] Wei, C., & Sheng, J. (2020). Spatial-temporal graph attention networks for traffic flow forecasting. In IOP Conference Series: Earth and Environmental Science (p. 012065). IOP Publishing volume 587.
* Wei et al. [2019] Wei, L., Yu, Z., Jin, Z., Xie, L., Huang, J., Cai, D., He, X., & Hua, X.-S. (2019). Dual graph for traffic forecasting. IEEE Access, .
* Wright et al. [2019] Wright, M. A., Ehlers, S. F., & Horowitz, R. (2019). Neural-attention-based deep learning architectures for modeling traffic dynamics on lane graphs. In 2019 IEEE Intelligent Transportation Systems Conference (ITSC) (pp. 3898--3905). IEEE.
* Wu et al. [2020a] Wu, M., Zhu, C., & Chen, L. (2020a). Multi-task spatial-temporal graph attention network for taxi demand prediction. In Proceedings of the 2020 5th International Conference on Mathematics and Artificial Intelligence (pp. 224--228).
* Wu et al. [2018a] Wu, T., Chen, F., & Wan, Y. (2018a). Graph attention lstm network: A new model for traffic flow forecasting. In 2018 5th International Conference on Information Science and Control Engineering (ICISCE) (pp. 241--245). IEEE.
* Wu et al. [2018b] Wu, Y., Tan, H., Qin, L., Ran, B., & Jiang, Z. (2018b). A hybrid deep learning based traffic flow prediction method and its understanding. Transportation Research Part C: Emerging Technologies, 90, 166--180.
* Wu et al. [2020b] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., & Philip, S. Y. (2020b). A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, .
* Wu et al. [2020c] Wu, Z., Pan, S., Long, G., Jiang, J., Chang, X., & Zhang, C. (2020c). Connecting the dots: Multivariate time series forecasting with graph neural networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining KDD ’20 (p. 753–763). New York, NY, USA: Association for Computing Machinery. URL: https://doi.org/10.1145/3394486.3403118. doi:10.1145/3394486.3403118.
* Wu et al. [2019] Wu, Z., Pan, S., Long, G., Jiang, J., & Zhang, C. (2019). Graph wavenet for deep spatial-temporal graph modeling. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19 (pp. 1907--1913). International Joint Conferences on Artificial Intelligence Organization. URL: https://doi.org/10.24963/ijcai.2019/264. doi:10.24963/ijcai.2019/264.
# Narrow-line absorption at 689 nm in an ultracold strontium gas
Fachao Hu and Canzhu Tan (these authors contributed equally to this work)
Hefei National Laboratory for Physical Sciences at the Microscale and Shanghai
Branch, University of Science and Technology of China, Shanghai 201315, China
CAS Center for Excellence and Synergetic Innovation Center in Quantum
Information and Quantum Physics, University of Science and Technology of
China, Shanghai 201315, China

Yuhai Jiang<EMAIL_ADDRESS>
Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai
201210, China
CAS Center for Excellence and Synergetic Innovation Center in Quantum
Information and Quantum Physics, University of Science and Technology of
China, Shanghai 201315, China

Matthias Weidemüller<EMAIL_ADDRESS>
Hefei National Laboratory for Physical Sciences at the Microscale and Shanghai
Branch, University of Science and Technology of China, Shanghai 201315, China
CAS Center for Excellence and Synergetic Innovation Center in Quantum
Information and Quantum Physics, University of Science and Technology of
China, Shanghai 201315, China
Physikalisches Institut, Universität Heidelberg, Im Neuenheimer Feld 226,
69120 Heidelberg, Germany

Bing Zhu<EMAIL_ADDRESS>
Physikalisches Institut, Universität Heidelberg, Im Neuenheimer Feld 226,
69120 Heidelberg, Germany
Hefei National Laboratory for Physical Sciences at the Microscale and Shanghai
Branch, University of Science and Technology of China, Shanghai 201315, China
CAS Center for Excellence and Synergetic Innovation Center in Quantum
Information and Quantum Physics, University of Science and Technology of
China, Shanghai 201315, China
###### Abstract
We analyse the spectrum on the narrow-line transition
$5\mathrm{s^{2}}\,^{1}\textrm{S}_{0}-5\mathrm{s}5\mathrm{p}\,^{3}\textrm{P}_{1}$
at 689 nm in an ultracold gas of 88Sr via absorption imaging. In the low
saturation regime, the Doppler effect dominates the observed spectrum,
giving rise to a symmetric Voigt profile. The atomic temperature and atom
number can be accurately deduced from these low-saturation imaging data. At
high saturation, the absorption profile becomes asymmetric due to the photon-
recoil shift, which is of the same order as the natural line width. The line
shape can be described by an extension of the optical Bloch equations
including the photon recoil. A lensing effect of the atomic cloud induced by
the dispersion of the atoms is also observed at higher atomic densities in
both the low and strong saturation regimes.
## I Introduction
The existence of metastable states and narrow-line transitions among the
alkaline-earth and alkaline-earth-like atoms brings new opportunities for
studying cold and ultracold atoms, for example the optical-lattice clocks Jun
Ye (2008); Ludlow _et al._ (2015), the time variation of fundamental
constants Safronova _et al._ (2018a, b); Kennedy _et al._ (2020), atom
interferometers Hu _et al._ (2017, 2019); Rudolph _et al._ (2020), nonlinear
quantum optics Ye _et al._ (1998); Christensen _et al._ (2015); Westergaard
_et al._ (2015), and strongly correlated Rydberg gases Dunning _et al._
(2016); Madjarov _et al._ (2020). While many of these applications rely on
the clock transition ${}^{1}\textrm{S}_{0}-^{3}\textrm{P}_{0}$ with a
linewidth at the mHz level, the narrower transition
${}^{1}\textrm{S}_{0}-{}^{3}\textrm{P}_{1}$ with a $(1\sim 100)$-kHz linewidth
enables cooling of the atoms down to the photon-recoil-limited regime
Curtis _et al._ (2001); Loftus _et al._ (2004); Guttridge _et al._ (2016),
and direct laser cooling to quantum degeneracy has been demonstrated Stellmer
_et al._ (2013). Very recently, these kHz transitions have played an essential
role in the realization of optical tweezer arrays of alkaline-earth Norcia _et
al._ (2018); Cooper _et al._ (2018) and alkaline-earth-like Saskin _et al._ (2019)
atoms.
Benefiting from the narrow line width of the
${}^{1}\textrm{S}_{0}$-${}^{3}\textrm{P}_{1}$ transitions in alkaline-earth or
alkaline-earth-like systems, fluorescence signals from these transitions can be
employed for studying collective atomic scattering and motional effects
Bromley _et al._ (2016), measuring atomic transition properties Ferrari _et
al._ (2003); Ido _et al._ (2005); Schmitt _et al._ (2013), and detecting
single atoms with high fidelities Saskin _et al._ (2019). On the other hand,
absorption imaging using broad dipole-allowed transitions ($\sim$ 10 MHz) may
be by far the most widely-used method in diagnosing ultracold-atom systems,
providing accurate information on the spatial distribution of atoms, the atom
number, and the atomic temperature Ketterle _et al._ (1999); Ketterle and
Zwierlein (2008). However, absorption on narrow-line transitions has rarely
been studied in the ultracold regime, where the photon recoil energy is comparable
to the absorption linewidth including the Doppler effect. Oates _et al._
studied the atomic-recoil-induced asymmetries in a form of saturation
spectroscopy with a Ca optical-clock apparatus Oates _et al._ (2005), and the
photon-recoil effect on the dispersion was observed in a Yb vapor cell in Ref.
Grimm and Mlynek (1988). Stellmer _et al._ have implemented the absorption
imaging on the 7.5-kHz transition at 689 nm to resolve the hyperfine structure
of the fermionic 87Sr at a magnetic field of about 0.5 G Stellmer _et al._
(2011). They observed a Lorentzian lineshape with a full width at half maximum
(FWHM) of about 40 kHz, without discussing further details on the spectrum.
In this work, we study in detail the absorption spectrum on the narrow
transition
$5\mathrm{s^{2}}\,^{1}\textrm{S}_{0}-5\mathrm{s}5\mathrm{p}\,^{3}\textrm{P}_{1}$
at 689 nm with an ultracold 88Sr atomic cloud. We measure the spectrum in both
the weak and strong saturation regimes. At low saturations, the absorption
lineshape is close to a Gaussian, essentially determined by the Doppler
effect in the temperature range studied here. Thus, this regime can be
exploited for thermometry of the atomic sample, which is confirmed by a
comparison to the temperature obtained by the standard time-of-flight (TOF)
method Ketterle _et al._ (1999); Ketterle and Zwierlein (2008) using the
broadband transition
$5\mathrm{s^{2}}\,^{1}\textrm{S}_{0}-5\mathrm{s}5\mathrm{p}\,^{1}\textrm{P}_{1}$
. The narrow-line absorption imaging at low saturation also provides
information on the atom numbers and atomic densities with a comparable
accuracy to detection methods based on the broad (blue) line. In the strong
saturation regime, an asymmetric lineshape is observed. We have performed a
theoretical simulation based on the optical Bloch equations (OBEs) involving
the momentum transfers during the imaging process and confirmed that the
photon recoil has an important influence on the line shape. We also observe a
density-dependent lensing effect in the absorption images at large detunings
of the imaging light.
The article is organized as follows: We show our experimental setup in Sec.
II. The low- and high-saturation absorption spectra are described in Secs.
III.1 and III.2, respectively. The theoretical simulation and comparison to
experiments in the high-saturation regime are discussed in Sec. III.3. The
observation of lensing effect is presented in Sec. IV. Sec. V concludes the
paper.
## II Experimental setup
Figure 1: (a) Schematic top view of the experimental setup. HWP: half wave-
plate; PBS: polarizing beam-splitter. g and B represent the gravity and
magnetic field, respectively. See text for more details. (b) Time sequence for
absorption imaging. See text for explanations of $\tau_{\textrm{TOF}}$ and
$\tau_{\textrm{exp}}$. (c) Absorption spectrum showing all three Zeeman
sublevels of ${}^{3}\textrm{P}_{1}$ state when the imaging light polarization
is tuned to about 45∘ angled to the residual magnetic field. Black points are
the measured peak OD, and the red curve is the fit to a multi-peak Gaussian
function. The obtained Zeeman splitting is 167.7(1.2) kHz, corresponding to a
magnetic field of 79.9(6) mG.
Fig. 1(a) shows the experimental setup. The 88Sr atoms are first loaded into a
two-stage magneto-optical trap (MOT) for the laser cooling and trapping Nosske
_et al._ (2017); Qiao _et al._ (2019), operated on the broad
$5\mathrm{s^{2}}\,^{1}\textrm{S}_{0}-5\mathrm{s}5\mathrm{p}\,^{1}\textrm{P}_{1}$
and narrow
$5\mathrm{s^{2}}\,^{1}\textrm{S}_{0}-5\mathrm{s}5\mathrm{p}\,^{3}\textrm{P}_{1}$
transitions, respectively. We create an atomic cloud of $10^{6}$ atoms
with a density of about $10^{10}$ cm-3 and a temperature around 1 $\mu$K. A
cigar-shaped optical dipole trap (ODT) formed by two horizontally propagating
beams at a wavelength of 532 nm is switched on simultaneously with the
second-stage MOT. The two ODT beams both have a waist of about 60 $\mu$m and
cross at an angle of 18∘. After switching off the MOT and holding the atoms in
the ODT for 200 ms to reach equilibrium, we obtain about $(0.5\cdots 5)\times
10^{5}$ atoms at a temperature of $0.7\cdots 6$ $\mu$K depending on the ODT
power. At a power of 0.6 W for each beam the trap depth of the ODT is about
$6\mu$K and the trap frequencies are $2\pi\times$(217, 34, 217) Hz along the
$x$, $y$, and $z$ directions [see Fig. 1(a)], respectively, resulting in cloud
radii of (27, 69, 27) $\mu$m and a peak density of $7\times 10^{11}$cm-3. The
temperatures along the $y$ and $z$ directions are mapped out by the standard
TOF method. The above-mentioned atom numbers, cloud sizes, and temperatures
are measured using absorption imaging with the broad
$5\mathrm{s^{2}}\,^{1}\textrm{S}_{0}-5\mathrm{s}5\mathrm{p}\,^{1}\textrm{P}_{1}$
transition. The lifetime of the atomic clouds in the ODT is about 2 s, limited
by the collisions with background gas.
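The temperatures quoted above are obtained with the standard TOF method,
which fits the ballistic expansion $\sigma^{2}(t)=\sigma_{0}^{2}+(k_{B}T/m)\,t^{2}$
of the cloud radius. A minimal sketch of such a fit (the helper name and the
synthetic numbers below are illustrative, not taken from the experiment):

```python
import numpy as np

# Physical constants and the 88Sr mass (SI units)
K_B = 1.380649e-23             # Boltzmann constant, J/K
M_SR88 = 88 * 1.66053907e-27   # atomic mass of 88Sr, kg

def tof_temperature(t, sigma, mass=M_SR88):
    """Fit sigma^2(t) = sigma0^2 + (k_B T / m) t^2 and return T in kelvin.

    t     : array of time-of-flight values (s)
    sigma : array of fitted Gaussian cloud radii (m)
    """
    slope, _intercept = np.polyfit(np.asarray(t) ** 2, np.asarray(sigma) ** 2, 1)
    return slope * mass / K_B

# Synthetic check: a 1.3 uK cloud starting from a 27 um radius
T_true, sigma0 = 1.3e-6, 27e-6
t = np.linspace(0.5e-3, 5e-3, 10)
sigma = np.sqrt(sigma0 ** 2 + (K_B * T_true / M_SR88) * t ** 2)
print(tof_temperature(t, sigma))  # recovers ~1.3e-6 K
```

Fitting $\sigma^{2}$ against $t^{2}$ makes the problem linear, so a simple
polynomial fit suffices instead of a nonlinear least-squares routine.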
The imaging light at 689 nm is delivered from a commercial tapered amplifier
seeded by an external-cavity diode laser (Toptica TApro), used also for the
narrow-line MOT cooling, which is frequency-stabilized to a passive ultra-low
expansion cavity with a short-term noise of 1 kHz level and a long-term drift
of 8 kHz/day Qiao _et al._ (2019). As shown in Fig. 1(a), the imaging beam
propagates along the $z$ direction with a tunable linear polarization and has
a $1/e^{2}$ diameter of 4.2 mm. The imaging pulse length and intensity are
controlled by an acousto-optic modulator (not shown in the figure). The
imaging system consists of two achromatic lenses with focal lengths of +200 mm
and +300 mm, and maps the absorption to an EM-CCD camera from Andor with a
magnification factor of 1.5. We have an imaging resolution of about 12 $\mu$m.
The imaging sequence is described in Fig. 1(b). The absorption imaging on the
narrow-line transition is performed after rapidly switching off the ODT to
avoid the differential AC Stark shifts on the energy levels. A quantization
magnetic field along the vertical direction is applied (rise time 2 ms)
before the imaging pulse to split the Zeeman sublevels of
${}^{3}\textrm{P}_{1}$ state, as seen in Fig. 1(c). After a given time-of-
flight (TOF) time $\tau_{\textrm{TOF}}$, the atoms are illuminated by the
imaging light with an exposure time $\tau_{\textrm{exp}}=200$ $\mu$s. By
varying $\tau_{\textrm{TOF}}$ we can adjust the atomic density during the
absorption, which plays an important role in observing the dispersive lensing
effect discussed in Sec. IV. As done in a standard absorption imaging sequence, two
additional images with and without the imaging light are taken after the first
pulse. The three images are then processed (see, e.g., Lewandowski _et al._
(2003)) to obtain the two-dimensional optical density (OD) distribution (see
the insets of Fig. 5).
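The three-frame processing can be sketched as follows. This is the basic
version only; fringe-removal refinements as in Lewandowski _et al._ (2003) and
saturation corrections are omitted, and the function name is illustrative:

```python
import numpy as np

def optical_density(img_atoms, img_light, img_dark, od_max=6.0):
    """Basic three-frame absorption-imaging processing.

    img_atoms : frame with atoms and imaging light
    img_light : reference frame with imaging light only
    img_dark  : background frame with no light
    Returns the 2D map OD = -ln[(I_atoms - I_dark) / (I_light - I_dark)].
    """
    signal = img_atoms.astype(float) - img_dark
    reference = img_light.astype(float) - img_dark
    with np.errstate(divide="ignore", invalid="ignore"):
        od = -np.log(signal / reference)
    # Clip unphysical values arising from shot noise or saturated pixels
    return np.clip(np.nan_to_num(od), 0.0, od_max)
```

With ideal frames the OD map is uniform and equal to the attenuation exponent;
in practice the clipping bound sets the largest OD the camera noise allows one
to resolve.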
By changing the linear imaging polarization angle in the $x-y$ plane, all
three Zeeman sublevels of the ${}^{3}\textrm{P}_{1}$ state are addressable. An
example is shown in Fig. 1(c). The peak OD is measured as a function of the
imaging detuning showing three peaks at a magnetic field of about 80 mG. The
relative line strengths are determined by the polarization and the different
coupling strengths of the three corresponding transitions (see Fig. 1(c)). We
have used this measurement to optimize the compensation of the background
magnetic field to be better than 5 mG in our setup and to calibrate the
quantization fields. For the absorption studies, we apply a field of 4 G to
split the sublevels and the imaging polarization is tuned parallel to the
quantization axis, so that the system is subjected only to the closed $\pi$
transition ($m_{j}=0\rightarrow m_{j^{\prime}}=0$), which can be treated as a
perfect two-level system.
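The conversion between the measured Zeeman splitting and the quoted magnetic
field in Fig. 1(c) can be checked with the Landé factor of the
${}^{3}\textrm{P}_{1}$ state. A sketch, assuming the pure LS-coupling value
$g_{J}=3/2$ (not stated explicitly in the text) and illustrative helper names:

```python
# Magnetic-field calibration from the measured Zeeman splitting between
# adjacent m_J sublevels of the 5s5p 3P1 state.
MU_B_OVER_H = 1.39962449e6  # Bohr magneton / Planck constant, Hz per gauss
G_J = 1.5                   # Lande g-factor of 3P1 in pure LS coupling

def field_from_splitting(delta_nu_hz):
    """Magnetic field (gauss) from the splitting (Hz) between adjacent
    m_J sublevels: delta_nu = g_J * (mu_B / h) * B."""
    return delta_nu_hz / (G_J * MU_B_OVER_H)

print(field_from_splitting(167.7e3) * 1e3)  # ~79.9 mG, as in Fig. 1(c)
```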
## III Measurements and analysis
Thanks to the high sensitivity and large dynamic range of our imaging camera
(Andor iXon 897) at 689 nm, we can study the absorption spectrum on the
narrow-line transition with a saturation parameter $s$ ranging from 0.01 to
more than 100. Meanwhile, the cloud temperature and the atomic density can be
controlled via the ODT depth and the TOF time $\tau_{\textrm{TOF}}$
independently.
Figure 2: (a) Low-saturation absorption spectra at temperatures of 1.3 $\mu$K
(black) and 5.7 $\mu$K (red). The integrated absorption signal over the
atomic cloud region is plotted as a function of the imaging detuning. The
solid curves are fits to the Voigt profile. See text for more details. (b)
Spectroscopic thermometry. The fitted Doppler widths from (a) are used to
estimate the temperatures $T_{\textrm{Fit}}$, which are plotted against the
TOF measurement results $T_{\textrm{TOF}}$ in the lower panel of (b). A linear
fit to the data (black dashed line) gives a slope of 1.05(3). The red dashed
line represents $T_{\textrm{Fit}}=T_{\textrm{TOF}}$. In the upper panel we
also show the ratio (black open circles) between the atom numbers obtained
from the narrow- ($N_{\textrm{red}}$) and broad-linewidth
($N_{\textrm{blue}}$) imaging, which is a constant of 1.06(2) (gray solid
line).
### III.1 Low-saturation absorption
In Fig. 2(a), we show two measured absorption spectra at temperatures of 1.3
$\mu$K (black points) and 5.7 $\mu$K (red points) with a saturation parameter
of $s=0.1$. The TOF time $\tau_{\textrm{TOF}}$ (see Fig. 1(b)) is chosen to be
3.1 ms to minimize the lensing effect (see Sec. IV) as well as to keep large
enough signal-to-noise ratios (SNRs) in the OD images. The plotted signals in
Fig. 1(b) are the OD integrals over the whole atomic cloud region divided by
the peak cross section $\sigma_{0}=3\lambda^{2}/2\pi$, which is the standard
way to calculate the atom number in the absorption imaging (see the following
paragraph for a correction).
Symmetric lineshapes are observed in both cases and the linewidth increases
with the increasing temperature. The spectra fit well to Voigt profiles with a
fixed Lorentzian width of $v_{L}=10.01$ kHz, resulting from the power
broadening $\Gamma\sqrt{1+s}/2\pi$ and the detection bandwidth
$0.9/\tau_{\textrm{exp}}=4.5$ kHz due to the finite length of the square-shape
imaging pulse (see Fig. 1(b)), where $\Gamma/2\pi=7.5$ kHz is the natural
linewidth. The FWHM Gaussian width $v_{G}$ obtained from the Voigt profile
fitting is used to deduce the temperature $T_{\textrm{Fit}}$ along the imaging
propagation direction, from the relation $v_{G}=\frac{2}{\lambda}\sqrt{2\ln
2k_{b}T_{\textrm{Fit}}/m}$. Here $k_{b}$ is the Boltzmann constant, $\lambda$
is the transition wavelength, and $m$ is the atomic mass. $T_{\textrm{Fit}}$
obtained in this way are compared to those measured by the TOF method in the
lower panel of Fig. 2(b). The linear fit between $T_{\textrm{Fit}}$ and
$T_{\textrm{TOF}}$ (black dashed line) results in a slope of 1.05(3), which
is in excellent agreement with the ideal case of $T_{\textrm{Fit}}=T_{\textrm{TOF}}$
(red dashed line). We also notice the empirical density broadening in the
saturation fluorescence spectroscopy reported in Ido _et al._ (2005). The
linear slope is only modified slightly to 1.05(6) even if we take the
empirical density relation following Ref. Ido _et al._ (2005).
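The width-temperature relation used above can be evaluated and inverted
directly. A minimal sketch in SI units (helper names are illustrative):

```python
import numpy as np

K_B = 1.380649e-23             # Boltzmann constant, J/K
M_SR88 = 88 * 1.66053907e-27   # 88Sr mass, kg
LAMBDA = 689e-9                # transition wavelength, m

def doppler_fwhm(T, lam=LAMBDA, mass=M_SR88):
    """Gaussian FWHM v_G (Hz) of the Doppler profile at temperature T (K):
    v_G = (2 / lambda) * sqrt(2 ln2 k_B T / m)."""
    return (2.0 / lam) * np.sqrt(2.0 * np.log(2) * K_B * T / mass)

def temperature_from_fwhm(v_g, lam=LAMBDA, mass=M_SR88):
    """Invert the relation above for the temperature (K)."""
    return mass * (lam * v_g / 2.0) ** 2 / (2.0 * np.log(2) * K_B)

# At 1.3 uK the Doppler FWHM is ~38 kHz, several times the 7.5 kHz
# natural linewidth, so the Gaussian part dominates the Voigt profile.
print(doppler_fwhm(1.3e-6) / 1e3)  # ~37.9 kHz
```

In a full analysis `v_g` would come from a Voigt fit with the Lorentzian width
held fixed at the power-broadened value, as described in the text.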
Figure 3: High-saturation absorption. (a) The measured absorption lineshapes
at low (blue dots) and high (red dots) saturations. The data in the high-
saturation case ($s=35.8$) showing asymmetric profile is fitted to the
numerical solution of Eq. (5) (red curve) and magnified by 6 times to have a
better visualization. As a comparison, the low-saturation ($s=0.09$) data is
symmetric and fits well to the Voigt profile (blue curve). (b) - (d), the
population difference $\Delta\rho(p)$ obtained from the OBE solutions at three
different detunings [$0,\pm 5\Gamma$, as marked by the grey vertical lines in
(a)] after an exposure time of 100 $\mu$s and 200 $\mu$s, respectively. As a
reference, we also show the initial distribution at $t=0$, which is the
Maxwell-Boltzmann one determined by the cloud temperature. The black solid
vertical lines mark the resonant momentum positions, where the probe detuning
is compensated by the Doppler effect. The inset images show measurements of
the two-dimensional OD distributions at low (upper) and high (lower)
saturations for their respective detunings.
In addition to the temperature, the atom number and atomic density can also be
extracted from the narrow-linewidth absorption imaging in the low saturation
regime. The broad (blue) transition typically used in determining the atom
number and atomic density has a natural linewidth on the order of 10 MHz, much
broader than the Doppler width. The absorption cross-section in the broad-
transition imaging can hence be regarded as temperature-independent. However,
for the narrow transition with a natural linewidth smaller than the Doppler
width ($\Gamma/2\pi v_{G}<1$), the Doppler effect has to be considered when
calculating the atom number Foot (2004). This is done by convolving the
velocity-dependent Lorentzian absorption profile with the Maxwell-Boltzmann
velocity distribution in the atomic sample (see the Appendix). The
convolution results in a relationship between the measured OD and the atom
number similar to the broadband absorption imaging case, modified by a
coefficient depending on the ratio between the Doppler-broadened width and the
natural linewidth,
$\displaystyle OD_{0}(x,y)$ $\displaystyle=n(x,y)\sigma_{0}\times
C(\Gamma,v_{G})\,,$ (1)
where $C(\Gamma,v_{G})=\sqrt{\pi}\alpha e^{\alpha^{2}}\textrm{Erfc}(\alpha)$
is the coefficient with $\alpha=\sqrt{\ln 2}\,\Gamma/2\pi v_{G}$, $OD_{0}(x,y)$
is the on-resonance OD spatial distribution, and $n(x,y)$ is the atomic
column density. $\textrm{Erfc}(x)$ is the complementary error function. The derivation of
the coefficient is presented in the Appendix. With the on-resonance
OD and the temperature-dependent $v_{G}$ determined from the spectrum fitting,
the atom number and atomic density can be obtained with Eq. (1). The upper
panel of Fig. 2(b) shows the ratio of the atom number determined by absorption
imaging with the narrow
$5\mathrm{s^{2}}\,^{1}\textrm{S}_{0}-5\mathrm{s}5\mathrm{p}\,^{3}\textrm{P}_{1}$
($N_{\textrm{red}}$) and broad
$5\mathrm{s^{2}}\,^{1}\textrm{S}_{0}-5\mathrm{s}5\mathrm{p}\,^{1}\textrm{P}_{1}$
($N_{\textrm{blue}}$) transitions, which lies close to 1 (gray solid line).
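The correction coefficient of Eq. (1) is straightforward to evaluate numerically (a stdlib-only sketch; the example width of 33 kHz is an assumed value for a cloud around 1 $\mu$K):

```python
# Doppler correction coefficient of Eq. (1),
#   C = sqrt(pi) * alpha * exp(alpha^2) * Erfc(alpha),
# with alpha = sqrt(ln2) * Gamma / (2*pi*v_G). For very broad lines
# (alpha >> 1), scipy.special.erfcx avoids the exp() overflow.
import math

def doppler_coefficient(Gamma, v_G):
    """C(Gamma, v_G) relating on-resonance OD to column density in Eq. (1).

    Gamma is the natural linewidth in rad/s, v_G the Doppler width in Hz."""
    alpha = math.sqrt(math.log(2)) * Gamma / (2.0 * math.pi * v_G)
    return math.sqrt(math.pi) * alpha * math.exp(alpha ** 2) * math.erfc(alpha)

# Narrow 689-nm line (Gamma = 2*pi*7.5 kHz) probing a ~1 uK cloud:
C_narrow = doppler_coefficient(2 * math.pi * 7.5e3, 33e3)  # well below 1
# Eq. (1) then gives the column density as
#   n(x, y) = OD_0(x, y) / (sigma_0 * C_narrow)
```

In the broad-line limit ($\alpha\gg 1$) the coefficient approaches 1, recovering the usual temperature-independent cross section.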
### III.2 Strong-saturation absorption
In this section we study the narrow-line absorption spectrum at strong
saturations with $s\gg 1$. The question arises to what extent photon
recoils impact the absorption profile, as the recoil shift is comparable
to the natural linewidth ($\sim 4.78$ kHz vs. $7.5$ kHz for the narrow
transition studied here). During the absorption process, absorption and
spontaneous emission events change the momentum distribution
and hence affect the subsequent absorption. For narrow-line transitions,
few photon-recoil events are enough to drive the atom out of resonance with
the imaging laser. This is in stark contrast to broadband transitions, since
the natural linewidth is usually much larger than the Doppler shifts induced
by photon recoils in a cold atom sample. Nevertheless, the photon-recoil
effect was already considered when imaging light species like Li using broad
transitions Horikoshi _et al._ (2017), where the recoil-induced detuning and
blurring place strong constraints on the proper imaging conditions.
In Fig. 3(a) we show two absorption spectra at saturation parameters of
$s=0.09$ and $s=35.8$, respectively. We observe a decrease of the integrated
OD signal at all detunings due to the saturation effect (note that the data at
higher saturation is magnified by a factor of 6 for a better view). More
importantly, the lineshape is asymmetric at the high saturation, namely the
integrated OD approaches zero more slowly on the negative-detuning side than
that on the positive one, and the absorption peak is shifted by a few kHz to
the positive detuning. At high saturation, differences can already be seen in
the OD images at the two detuning sides [see lower rows in the insets of Figs.
3(b-d)], namely a wider spatial extension for the positive detuning than that
for the negative one. In this series of experiments, the influence of the
lensing effect on the OD measurement (see Sec. IV) is negligible due to the low
atomic densities involved here.
The observed asymmetry and peak shift can be interpreted qualitatively by
considering the absorption process including the influence of the photon
recoil. The photon recoil associated with each absorption-spontaneous emission
cycle redistributes the momentum of atoms, which depends strongly on the light
detuning Stenholm (1978). Consequently, an asymmetric lineshape and a shift
of the absorption maximum emerge as more and more photons are scattered
due to the momentum redistribution in the atomic cloud. In order to resolve
such effects, the Doppler width has to be comparable to the power-broadened
line width. In the case of the strong saturation in Fig. 3 (a), the power-
broadened Lorentzian width $\Gamma\sqrt{1+s}\sim 45$ kHz is close to the
Doppler one of $\sim 40$ kHz. In the following subsection a quantitative
description is presented incorporating the photon-recoil effect in an OBE
formalism.
### III.3 Spectrum lineshape simulation
Figure 4: Absorption peak position shift. The relative positions of the
absorption peak at different saturation parameters $s$ are compared for
imaging times of 100 $\mu$s (red circles and curve) and 200 $\mu$s (black
circles and curve). The black and red curves are the calculated results
without any free parameters, while the black and red circles are the fitted
results from measurements by using the peak position and height as the two
free fitting parameters. See text for more discussions.
We take an OBE formalism including the method of so-called 'momentum families'
from Ref. Castin _et al._ (1989), originally developed to understand laser
cooling on a narrow-line transition. The model considers a two-level atom
system with an initial Maxwell-Boltzmann thermal distribution, interacting
with a single near-resonant monochromatic homogeneous probe beam. The state of
an atom with momentum $\bm{p}$ is expressed in the form of
$\{\ket{g,\bm{p}},\ket{e,\bm{p}}\}$, where $\ket{g(e)}$ corresponds to the
atomic ground (excited) state. The system Hamiltonian driven under a laser
beam propagating along the $z$ axis is,
$\centering
H_{0}=\frac{\hat{\bm{p}}^{2}}{2m}+\hbar\omega_{0}\ket{e}\bra{e}-\hat{\bm{D}}\cdot\hat{\bm{E}}\@add@centering$
(2)
where $\omega_{0},\hat{\bm{D}},\hat{\bm{E}}$ are the transition frequency,
dipole moment operator, and laser electric field, respectively. In our case,
only the $\pi$-transition branch $m_{j}=0\rightarrow m_{j^{\prime}}=0$ is
considered, and only momentum along the light propagation axis $p=p_{z}$ is
preserved, with the other two components $p_{x},p_{y}$ traced over. The system
Hamiltonian under the rotating-wave approximation becomes,
$\centering
H_{S}=\frac{\hat{p}^{2}}{2m}-\hbar\delta\ket{e}\bra{e}+\frac{\hbar\Omega}{2}(e^{ikz}\ket{e}\bra{g}+\ket{g}\bra{e}e^{-ikz})\@add@centering$
(3)
where $\delta,\Omega$ are the bare detuning and Rabi frequency.
The evolution of the states $\ket{g,p},\ket{e,p+\hbar k}$ with any momentum $p$
remains globally closed under $H_{S}$ when the spontaneous emission is not
considered, for which reason the states $\ket{g,p}$, $\ket{e,p+\hbar k}$ are
grouped as a family $\mathcal{F}(p)$. The system density matrix $\rho$
expanded in this basis is,
$\centering\begin{aligned} \rho_{gg}(p)&=\braket{g,p}{\rho}{g,p}\\
\rho_{ee}(p)&=\braket{e,p+\hbar k}{\rho}{e,p+\hbar k}\\
\rho_{ge}(p)&=\rho_{eg}^{*}(p)=\braket{g,p}{\rho}{e,p+\hbar k}
\end{aligned}\,.\@add@centering$ (4)
The equations of evolution under $H_{S}$ together with the spontaneous
emission processes are,
$\displaystyle\dot{\rho}_{gg}(p)$ $\displaystyle=\Gamma\bar{\pi}_{e}(p-\hbar
k)-\frac{i\Omega}{2}(\rho_{eg}(p)-\rho_{ge}(p))\,,$ (5)
$\displaystyle\dot{\rho}_{ee}(p)$
$\displaystyle=-\Gamma\bar{\pi}_{e}(p)+\frac{i\Omega}{2}(\rho_{eg}(p)-\rho_{ge}(p))\,,$
$\displaystyle\dot{\rho}_{ge}(p)$ $\displaystyle=\dot{\rho}_{eg}^{*}(p)$
$\displaystyle=-(i(\bar{\delta}-\frac{kp}{m})+\frac{\Gamma}{2})\rho_{ge}(p)+\frac{i\Omega}{2}(\rho_{gg}(p)-\rho_{ee}(p))\,,$
where $\bar{\delta}=\delta-\hbar k^{2}/(2m)$ and the term $\bar{\pi}_{e}$
represents the impact of spontaneous decay on the system evolution, defined as
$\displaystyle\bar{\pi}_{e}(p)=$
$\displaystyle\int\limits_{-\infty}^{+\infty}dp_{x}\int\limits_{-\infty}^{+\infty}dp_{y}\int\limits_{-\hbar
k}^{+\hbar k}dp^{\prime}\mathcal{N}(p^{\prime})$ (6)
$\displaystyle\braket{e,p_{x},p_{y},p_{z}=p+p^{\prime}}{\rho}{e,p_{x},p_{y},p_{z}=p+p^{\prime}}\,.$
Here $\mathcal{N}(p^{\prime})=\frac{3}{4\hbar k}(1-p^{\prime
2}/\hbar^{2}k^{2})$ results from the classical dipole radiation pattern Castin
_et al._ (1989) of the $\pi$ transition. With all the atoms initially at the
ground state $\ket{g}$ with a Maxwell-Boltzmann distribution of temperature
$T$, we numerically integrate the equations (5) to get the system evolution.
The solution of the off-diagonal elements $\rho_{eg}(p)$ results in the
susceptibility $\chi(p)\propto n\rho_{eg}(p)$ with the atomic density $n$. The
absorption profile is then calculated by tracing the imaginary part of the
susceptibility over all momenta, i.e. $\sum_{p}\mathrm{Im}\chi(p)$, and then
integrating over the interaction duration.
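The integration described above can be sketched as follows. This is a minimal explicit-Euler implementation of Eq. (5) on a discrete momentum grid; the grid extent, time step, and the conversion $s=2\Omega^{2}/\Gamma^{2}$ from saturation parameter to Rabi frequency are illustrative assumptions, not the paper's actual numerics:

```python
# Sketch: integrate the 'momentum family' OBEs, Eq. (5), on a momentum grid.
import numpy as np

hbar = 1.054571817e-34
kB = 1.380649e-23
m = 88 * 1.66053906660e-27            # 88Sr mass (kg)
k = 2 * np.pi / 689e-9                # wavenumber of the 689-nm line
Gamma = 2 * np.pi * 7.5e3             # natural linewidth (rad/s)

def absorption_signal(delta, s, T=1e-6, t_final=20e-6, dt=1e-8, Np=201):
    """Time-integrated sum_p Im(rho_ge) (arb. units); delta in rad/s."""
    Omega = Gamma * np.sqrt(s / 2.0)  # from s = 2 Omega^2 / Gamma^2 (assumed)
    u = np.sqrt(2 * kB * T / m)       # most probable speed
    p = np.linspace(-6, 6, Np) * m * u
    dp = p[1] - p[0]
    nshift = max(1, int(round(hbar * k / dp)))   # one recoil in grid units

    # dipole radiation kernel N(p') of the pi transition, normalised to 1
    off = np.arange(-nshift, nshift + 1) * dp
    kern = np.clip(1.0 - (off / (hbar * k)) ** 2, 0.0, None)
    kern /= kern.sum()

    rho_gg = np.exp(-(p / (m * u)) ** 2)
    rho_gg /= rho_gg.sum()            # Maxwell-Boltzmann initial state
    rho_ee = np.zeros(Np)
    rho_ge = np.zeros(Np, dtype=complex)
    dbar = delta - hbar * k ** 2 / (2 * m)       # recoil-shifted detuning

    signal = 0.0
    for _ in range(int(t_final / dt)):
        pi_e = np.convolve(rho_ee, kern, mode="same")   # \bar{pi}_e(p)
        pi_fed = np.roll(pi_e, nshift)                  # \bar{pi}_e(p - hbar k)
        pi_fed[:nshift] = 0.0
        drive = Omega * rho_ge.imag   # equals (i Omega/2)(rho_eg - rho_ge)
        d_gg = Gamma * pi_fed - drive
        d_ee = -Gamma * pi_e + drive
        d_ge = (-(1j * (dbar - k * p / m) + Gamma / 2) * rho_ge
                + 0.5j * Omega * (rho_gg - rho_ee))
        rho_gg = rho_gg + d_gg * dt
        rho_ee = rho_ee + d_ee * dt
        rho_ge = rho_ge + d_ge * dt
        signal += rho_ge.imag.sum() * dt   # absorption ~ sum_p Im chi(p)
    return signal
```

A production calculation would use a stiff or exponential integrator and finer grids, but even this sketch reproduces the qualitative behaviour: a large on-resonance signal that collapses once the laser is detuned far beyond the Doppler width.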
For the solid curves in Figs. 3(a), we fit the experimental data to the
calculated profiles with the maximum integrated OD and the peak position as
the only free parameters. Both the lineshape asymmetry and the shift of the
absorption peak at high saturation can be reproduced very well by Eq. (5)
including the momentum transfer due to the photon-scattering events. While the
model predicts a significant shift of the absorption peak, its position is
still used as a free parameter in the fits to account for possible
deviations between the measurements and the calculations, as discussed in more
detail in Fig. 4.
One can gain further insight into the photon-recoil effects by considering the
quasi-steady solution of the off-diagonal element in Eq. (5),
$\mathrm{Im}\,\rho_{eg}(p)\propto\Delta\rho(p)=\rho_{gg}(p)-\rho_{ee}(p)$. We
show from Fig. 3(b) to 3(d) the calculated distribution of the population
difference $\Delta\rho(p)$ at two saturation parameters of $s\approx 0.09$
(blue curves) and $s=35.8$ (red curves) after 200-$\mu$s atom-light
interaction time (about $10/\Gamma$, the imaging pulse length in this
measurement), when the probe laser is detuned by $-5\Gamma,0,+5\Gamma$ from
left to right. At the low saturation ($s\approx 0.09$), $\Delta\rho(p)$ is
only slightly modified compared to the initial Maxwell-Boltzmann distribution
(black dot-dashed lines), remaining almost Gaussian even after a long
interaction time, such that the convolution between the velocity-dependent
Lorentzian profile and the momentum distribution results in a lineshape close
to the Voigt one, as the blue curve in Fig. 3(a) shows. When highly saturated
($s=35.8$), however, the $\Delta\rho(p)$ distribution is strongly modified and
depleted near the resonant momentum (marked by vertical dashed lines) where
the Doppler shift compensates the bare imaging detuning. In Fig. 3(b) with a
detuning of $-5\Gamma$, the distribution maintains a Gaussian shape with the
center shifted by $\sim 1.2\hbar k$ after 200 $\mu$s. At a detuning of
$+5\Gamma$ in Fig. 3(d), by contrast, two peaks appear on opposite sides of the
resonant momentum. Such a strong dependence on the detuning leads to the
observed asymmetric lineshape and the peak shift.
The effects of the photon recoil can also be revealed by studying the time
evolution of the momentum distribution. In Figs. 3(b-d) the $\Delta\rho(p)$ at
$s=35.8$ after 100-$\mu$s interaction (red dash-dot curves) are shown as a
comparison to the 200-$\mu$s case. Small but clear differences of
$\Delta\rho(p)$ are observed for all three detunings, indicating that the
momentum distribution undergoes some time evolution, which may result in a
time-dependent absorption lineshape. This is actually demonstrated in Fig. 4
by comparing the saturation-dependent shift of the absorption peak position
for the 100- and 200-$\mu$s imaging durations. The peak position is shifted
towards the positive detuning when increasing the imaging intensity and such a
shift becomes larger in the case of a longer exposure, i.e. more photons are
scattered. The solid curves represent the calculated results without any free
parameters, while the solid dots are from fits with the peak position and
height as the free fitting parameters [see Fig. 3(a)]. Overall, the fitted
shifts agree well with the calculations without free parameters, while
deviations are seen for some points coming from fluctuations of experimental
conditions like laser power and atom number, as well as the low SNR for large
saturation parameters.
## IV The lensing effect
Figure 5: Absorption spectrum with $s=17$ at two different atomic densities of
$8.9\times 10^{10}$ cm-3 (red circles) and $2.8\times 10^{11}$ cm-3 (blue
diamonds). The red curve is a fit to the numerical solution of Eq. (5). We
obtain negative peak ODs at some large positive detunings. In the right inset,
the lower OD image measured at a large positive detuning with the high atomic
density has a dark hole instead of a bright peak in the cloud center, caused
by the lensing effect. At the large negative detuning, the dark position
appears at the edges of the cloud (left inset). As a comparison, we also show
an example of the OD images for the low-density case with a normal Gaussian
distribution in the upper panel of the inset.
As shown in Fig. 5, we have also experimentally observed another phenomenon in
the absorption spectrum at high atomic densities: the so-called lensing effect,
which is well known in standard absorption imaging. The absorption spectra at
two different atomic densities are compared at a saturation of $s=17$. In the
low-density case ($n\sim 8.9\times 10^{10}$ cm-3, red dots in the figure), we
find a similar asymmetry as that in Fig. 3(a) for the high saturation. With a
3-fold higher density ($n\sim 2.8\times 10^{11}$ cm-3, blue diamonds in the
figure), a negative peak OD is obtained from the two-dimensional Gaussian fit at
some large positive detunings. Checking the OD images there (one example shown
as the right inset in Fig. 5), a dark hole instead of a bright peak is seen at
the central region of the atomic cloud for the large positive detuning, while
at the negative one a dark edge is observed. This phenomenon is related to the
microscopic lensing effect studied in e.g. Refs. Labeyrie _et al._ (2003);
Wang and Saffman (2004); Labeyrie _et al._ (2007); Roof _et al._ (2015); Han
_et al._ (2015); Noaman _et al._ (2018); Gilbert _et al._ (2018), where a
spatial-dependent index of refraction leads to a focusing or defocusing effect
on the imaging beam depending on the detuning.
The observed lensing effect can be understood from the following equation for
describing the phase shift of the imaging field in the transverse plane
propagating through a cloud of two-level atoms Labeyrie _et al._ (2007),
$d\phi(x,y)=-\sigma_{0}n(x,y,z)dz\frac{\delta/\Gamma}{1+4(\delta/\Gamma)^{2}+s(x,y)}\,.$
(7)
Here $dz$ is the thickness of the atomic cloud along the light propagation
direction, $\delta$ is the detuning, and $s(x,y)$ has a spatial dependence due
to the intensity distribution of the Gaussian probe beam. Spatial inhomogeneity
of the index of refraction can be induced by the spatial distribution of the
atomic density, or the probe intensity, or both. For negative (positive)
detuning, Eq. (7) leads to a focusing (defocusing) of the imaging beam.
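A sketch of Eq. (7) for a Gaussian density profile makes the sign convention concrete. The peak density matches the high-density case of Fig. 5; the cross section ($\approx 3\lambda^2/2\pi$ at 689 nm), cloud radius, and thickness are assumed values for illustration only:

```python
# Phase imprint on the probe from Eq. (7) for a Gaussian atomic cloud.
# Negative detuning -> positive phase peaked at the cloud center (focusing);
# positive detuning -> the opposite (defocusing).
import math

sigma0 = 2.3e-13   # on-resonance cross section ~3*lambda^2/(2*pi) at 689 nm (m^2)

def phase_shift(x, delta_over_gamma, s, n0=2.8e17, w=30e-6, dz=30e-6):
    """d(phi) at transverse position x (m); n0 (m^-3), w and dz are assumed."""
    n = n0 * math.exp(-(x / w) ** 2)   # Gaussian density profile
    return (-sigma0 * n * dz * delta_over_gamma
            / (1.0 + 4.0 * delta_over_gamma ** 2 + s))
```

Note that a large saturation parameter suppresses the phase shift, and the shift follows the density profile, consistent with the density dependence of the effect seen in Fig. 5.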
The observed lensing effect here mainly stems from the density inhomogeneity
as indicated by the density-dependence (see Fig. 5) and the fact that the
imaging beam is much larger than the atomic cloud ($\sim 200$ times). The
lensing induced by such density inhomogeneity was observed in both the
weak-saturation Roof _et al._ (2015) and strong-saturation Labeyrie _et al._ (2003); Wang
and Saffman (2004); Labeyrie _et al._ (2007) regimes. The lensing effect
shown in Fig. 5 with strong saturation is also observable in the weak-probe
case in our experiment. However, to quantitatively explain our observation,
detailed calculations on the light propagation are needed like in Refs. Han
_et al._ (2015); Gilbert _et al._ (2018), even including the atom dipolar
interactions or multiple scattering events (e.g. Bromley _et al._ (2016); Zhu
_et al._ (2016); Chabé _et al._ (2014)), which is beyond the scope of this
paper.
## V Conclusion
In conclusion, we have studied both experimentally and theoretically the
absorption spectrum of a narrow-line transition at 689 nm in an ultracold 88Sr
gas. The atomic cloud temperature down to 1 $\mu$K can be inferred from the
measured absorption lineshape at low probe saturations ($s\ll 1$) if the
Doppler width dominates over other line-broadening effects. Information on the
atom number can also be reliably extracted from the low-saturation absorption.
In the strongly saturated regime, we observed the photon-recoil-induced
asymmetry in the absorption spectrum, which can be described by two-level OBEs
involving the photon recoils. We also showed a lensing effect when probing a
high-density sample, which is due to the spatial-dependent dispersive response
of the atomic cloud to the imaging field. It is of strong interest to study
further the weak-probe high-density regime because of the collective and
cooperative effects that are predicted theoretically Bienaimé _et al._
(2013); Zhu _et al._ (2016); Kupriyanov _et al._ (2017); Bettles _et al._
(2020). The narrow-line absorption can also be employed as a sensitive probe for
other cold atom systems with similar narrow-line transitions, like, e.g., Yb.
The good resolution also makes the narrow-line absorption applicable to
detection of interactions in more complicated systems, e.g. the spatial
correlation Günter _et al._ (2012) due to Rydberg blockade.
## Acknowledgements
We acknowledge C. Qiao, L. Couturier, and I. Nosske for their contributions to
setting up the experiment at the early stage of the project. F.H. acknowledges
Yaxiong Liu for helpful discussions on numerical algorithms. M.W.’s research
activities in China are supported by the 1000-Talent-Program. The work was
supported by the National Natural Science Foundation of China (Grant Nos.
11574290 and 11604324) and Shanghai Natural Science Foundation (Grant No.
18ZR1443800). Y.H.J. also acknowledges support under Grant No. 11827806.
## Appendix
The low-saturation ($s\ll 1$) OD spatial distribution is represented as,
$\centering
OD(x,y)=\int_{-\infty}^{+\infty}\sigma_{0}n(x,y)f(v)L(\delta,v,\Gamma)dv\@add@centering$
(8)
where $L(\delta,v,\Gamma)=\frac{\Gamma^{2}/4}{(\delta-kv)^{2}+\Gamma^{2}/4}$
is the Lorentzian profile with $\delta$ the bare laser detuning, $\Gamma$ the
natural linewidth, $k=2\pi/\lambda$ the laser wavenumber, and $v$ the atom
velocity, $f(v)=\frac{1}{u\sqrt{\pi}}e^{-v^{2}/u^{2}}$ is the Gaussian
velocity distribution with $u=\sqrt{2k_{B}T/m}$ the most probable speed. The
Doppler width $v_{G}$ is related to $u$, $v_{G}=ku\sqrt{\ln 2}/\pi$. Then Eq.
(8) reads
$\displaystyle OD(x,y)$
$\displaystyle=\sigma_{0}n(x,y)\int_{-\infty}^{+\infty}\frac{1}{u\sqrt{\pi}}e^{-(v/u)^{2}}\frac{\Gamma^{2}/4}{(\delta-
kv)^{2}+\Gamma^{2}/4}dv$ (9)
$\displaystyle=\sigma_{0}n(x,y)\frac{\alpha^{2}}{\sqrt{\pi}}\int_{-\infty}^{+\infty}\frac{e^{-(x^{\prime}+\delta/ku)^{2}}}{x^{\prime
2}+\alpha^{2}}dx^{\prime}$
The substitution $x^{\prime}=(kv-\delta)/(ku)$ is used in the second step. Here
$\alpha=\frac{\sqrt{\ln 2}\,\Gamma}{2\pi v_{G}}$ represents the ratio between
the natural linewidth and the Doppler width. At the on-resonance condition
($\delta=0$) we obtain Eq. (1). The coefficients
$C(\Gamma,v_{G})=\sqrt{\pi}\alpha e^{\alpha^{2}}\textrm{Erfc}(\alpha)$ for
correcting the on-resonance absorption cross section are plotted in Fig. 6 for
the
$5\mathrm{s^{2}}\,^{1}\textrm{S}_{0}-5\mathrm{s}5\mathrm{p}\,^{1}\textrm{P}_{1}$
(black dashed line),
$5\mathrm{s^{2}}\,^{1}\textrm{S}_{0}-5\mathrm{s}5\mathrm{p}\,^{3}\textrm{P}_{1}$
(blue dashed line) transitions of 88Sr and the D2 transition of 87Rb (red
dotted line) as a comparison.
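The closed form above can be checked against a direct numerical evaluation of the on-resonance integral of Eq. (9). This is a consistency sketch; the trapezoid grid and cutoff are arbitrary numerical choices:

```python
# Verify numerically that (alpha^2/sqrt(pi)) * Int exp(-x^2)/(x^2+alpha^2) dx
# over the real line equals sqrt(pi)*alpha*exp(alpha^2)*Erfc(alpha), i.e. the
# on-resonance limit of Eq. (9) reproduces the coefficient in Eq. (1).
import math

def C_closed(alpha):
    """Closed-form coefficient C of Eq. (1)."""
    return math.sqrt(math.pi) * alpha * math.exp(alpha ** 2) * math.erfc(alpha)

def C_numeric(alpha, xmax=30.0, n=40001):
    """Trapezoid-rule evaluation of the Voigt-type integral in Eq. (9)."""
    h = 2.0 * xmax / (n - 1)
    total = 0.0
    for i in range(n):
        x = -xmax + i * h
        wgt = 0.5 if i in (0, n - 1) else 1.0
        total += wgt * math.exp(-x * x) / (x * x + alpha * alpha)
    return alpha ** 2 / math.sqrt(math.pi) * total * h
```

The two agree over the range of $\alpha$ relevant for Fig. 6, from narrow-line ($\alpha\ll 1$) to broad-line ($\alpha\gg 1$) conditions.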
Figure 6: The coefficient for correcting the on-resonance absorption cross
section due to the Doppler effect. The plotted temperature range is $0.01-10$
$\mu$K. Three atomic transitions are compared: the broad
$5\mathrm{s^{2}}\,^{1}\textrm{S}_{0}-5\mathrm{s}5\mathrm{p}\,^{1}\textrm{P}_{1}$
(black dashed line) and narrow
$5\mathrm{s^{2}}\,^{1}\textrm{S}_{0}-5\mathrm{s}5\mathrm{p}\,^{3}\textrm{P}_{1}$
(blue dashed line) transitions in 88Sr, and the D2 line of 87Rb (red dotted
line). This coefficient is 1 for broad transitions ($\Gamma\gg 2\pi v_{G}$)
and strongly modified for narrow ones ($\Gamma\lesssim 2\pi v_{G}$) in the
ultracold range.
## References
* Jun Ye (2008) J. Ye, H. J. Kimble, and H. Katori, Science (2008), 10.1126/science.1148259.
* Ludlow _et al._ (2015) A. D. Ludlow, M. M. Boyd, J. Ye, E. Peik, and P. Schmidt, Reviews of Modern Physics 87, 637 (2015).
* Safronova _et al._ (2018a) M. Safronova, D. Budker, D. DeMille, D. F. J. Kimball, A. Derevianko, and C. W. Clark, Reviews of Modern Physics 90 (2018a), 10.1103/RevModPhys.90.025008.
* Safronova _et al._ (2018b) M. S. Safronova, S. G. Porsev, C. Sanner, and J. Ye, Physical Review Letters 120 (2018b), 10.1103/physrevlett.120.173001.
* Kennedy _et al._ (2020) C. J. Kennedy, E. Oelker, J. M. Robinson, T. Bothwell, D. Kedar, W. R. Milner, G. E. Marti, A. Derevianko, and J. Ye, Physical Review Letters 125 (2020), 10.1103/physrevlett.125.201302.
* Hu _et al._ (2017) L. Hu, N. Poli, L. Salvi, and G. M. Tino, Physical Review Letters 119 (2017), 10.1103/physrevlett.119.263601.
* Hu _et al._ (2019) L. Hu, E. Wang, L. Salvi, J. N. Tinsley, G. M. Tino, and N. Poli, Classical and Quantum Gravity 37, 014001 (2019).
* Rudolph _et al._ (2020) J. Rudolph, T. Wilkason, M. Nantel, H. Swan, C. M. Holland, Y. Jiang, B. E. Garber, S. P. Carman, and J. M. Hogan, Physical Review Letters 124 (2020), 10.1103/physrevlett.124.083604.
* Ye _et al._ (1998) J. Ye, L.-S. Ma, and J. L. Hall, Journal of the Optical Society of America B 15, 6 (1998).
* Christensen _et al._ (2015) B. T. R. Christensen, M. R. Henriksen, S. A. Schäffer, P. G. Westergaard, D. Tieri, J. Ye, M. J. Holland, and J. W. Thomsen, Physical Review A 92 (2015), 10.1103/physreva.92.053820.
* Westergaard _et al._ (2015) P. G. Westergaard, B. T. Christensen, D. Tieri, R. Matin, J. Cooper, M. Holland, J. Ye, and J. W. Thomsen, Physical Review Letters 114 (2015), 10.1103/physrevlett.114.093002.
* Dunning _et al._ (2016) F. B. Dunning, T. C. Killian, S. Yoshida, and J. Burgdörfer, Journal of Physics B: Atomic, Molecular and Optical Physics 49, 112003 (2016).
* Madjarov _et al._ (2020) I. S. Madjarov, J. P. Covey, A. L. Shaw, J. Choi, A. Kale, A. Cooper, H. Pichler, V. Schkolnik, J. R. Williams, and M. Endres, Nature Physics (2020), 10.1038/s41567-020-0903-z.
* Curtis _et al._ (2001) E. A. Curtis, C. W. Oates, and L. Hollberg, Phys. Rev. A 64, 031403 (2001).
* Loftus _et al._ (2004) T. H. Loftus, T. Ido, A. D. Ludlow, M. M. Boyd, and J. Ye, Physical Review Letters 93 (2004), 10.1103/physrevlett.93.073003.
* Guttridge _et al._ (2016) A. Guttridge, S. A. Hopkins, S. L. Kemp, D. Boddy, R. Freytag, M. P. A. Jones, M. R. Tarbutt, E. A. Hinds, and S. L. Cornish, Journal of Physics B: Atomic, Molecular and Optical Physics 49, 145006 (2016).
* Stellmer _et al._ (2013) S. Stellmer, B. Pasquiou, R. Grimm, and F. Schreck, Physical Review Letters 110 (2013), 10.1103/physrevlett.110.263003.
* Norcia _et al._ (2018) M. A. Norcia, A. W. Young, and A. M. Kaufman, Physical Review X 8, 041054 (2018).
* Cooper _et al._ (2018) A. Cooper, J. P. Covey, I. S. Madjarov, S. G. Porsev, M. S. Safronova, and M. Endres, Physical Review X 8 (2018), 10.1103/physrevx.8.041055.
* Saskin _et al._ (2019) S. Saskin, J. Wilson, B. Grinkemeyer, and J. Thompson, Physical Review Letters 122 (2019), 10.1103/physrevlett.122.143002.
* Bromley _et al._ (2016) S. L. Bromley, B. Zhu, M. Bishof, X. Zhang, T. Bothwell, J. Schachenmayer, T. L. Nicholson, R. Kaiser, S. F. Yelin, M. D. Lukin, A. M. Rey, and J. Ye, Nature Communications 7 (2016), 10.1038/ncomms11039.
* Ferrari _et al._ (2003) G. Ferrari, P. Cancio, R. Drullinger, G. Giusfredi, N. Poli, M. Prevedelli, C. Toninelli, and G. M. Tino, Physical Review Letters 91 (2003), 10.1103/physrevlett.91.243002.
* Ido _et al._ (2005) T. Ido, T. H. Loftus, M. M. Boyd, A. D. Ludlow, K. W. Holman, and J. Ye, Physical Review Letters 94 (2005), 10.1103/physrevlett.94.153001.
* Schmitt _et al._ (2013) M. Schmitt, E. A. L. Henn, J. Billy, H. Kadau, T. Maier, A. Griesmaier, and T. Pfau, Opt. Lett. 38, 637 (2013).
* Ketterle _et al._ (1999) W. Ketterle, D. S. Durfee, and D. Stamper-Kurn, arXiv preprint cond-mat/9904034 (1999).
* Ketterle and Zwierlein (2008) W. Ketterle and M. W. Zwierlein, arXiv preprint arXiv:0801.2500 (2008).
* Oates _et al._ (2005) C. Oates, G. Wilpers, and L. Hollberg, Physical Review A 71 (2005), 10.1103/physreva.71.023404.
* Grimm and Mlynek (1988) R. Grimm and J. Mlynek, Physical Review Letters 61, 2308 (1988).
* Stellmer _et al._ (2011) S. Stellmer, R. Grimm, and F. Schreck, Physical Review A 84 (2011), 10.1103/physreva.84.043611.
* Nosske _et al._ (2017) I. Nosske, L. Couturier, F. Hu, C. Tan, C. Qiao, J. Blume, Y. H. Jiang, P. Chen, and M. Weidemüller, Physical Review A 96 (2017), 10.1103/physreva.96.053415.
* Qiao _et al._ (2019) C. Qiao, C. Z. Tan, F. C. Hu, L. Couturier, I. Nosske, P. Chen, Y. H. Jiang, B. Zhu, and M. Weidemüller, Applied Physics B 125 (2019), 10.1007/s00340-019-7328-3.
* Lewandowski _et al._ (2003) H. J. Lewandowski, D. Harber, D. L. Whitaker, and E. A. Cornell, Journal of low temperature physics 132, 309 (2003).
* Foot (2004) C. Foot, _Atomic Physics_ (Oxford University Press, 2004).
* Horikoshi _et al._ (2017) M. Horikoshi, A. Ito, T. Ikemachi, Y. Aratake, M. Kuwata-Gonokami, and M. Koashi, Journal of the Physical Society of Japan 86, 104301 (2017).
* Stenholm (1978) S. Stenholm, Applied Physics 15, 287 (1978).
* Castin _et al._ (1989) Y. Castin, H. Wallis, and J. Dalibard, Journal of the Optical Society of America B 6, 2046 (1989).
* Labeyrie _et al._ (2003) G. Labeyrie, T. Ackemann, B. Klappauf, M. Pesch, G. Lippi, and R. Kaiser, The European Physical Journal D-Atomic, Molecular, Optical and Plasma Physics 22, 473 (2003).
* Wang and Saffman (2004) Y. Wang and M. Saffman, Phys. Rev. A 70, 013801 (2004).
* Labeyrie _et al._ (2007) G. Labeyrie, G. Gattobigio, T. Chanelière, G. Lippi, T. Ackemann, and R. Kaiser, The European Physical Journal D 41, 337 (2007).
* Roof _et al._ (2015) S. Roof, K. Kemp, M. Havey, I. M. Sokolov, and D. V. Kupriyanov, Opt. Lett. 40, 1137 (2015).
* Han _et al._ (2015) J. Han, T. Vogt, M. Manjappa, R. Guo, M. Kiffner, and W. Li, Physical Review A 92 (2015), 10.1103/physreva.92.063824.
* Noaman _et al._ (2018) M. Noaman, M. Langbecker, and P. Windpassinger, Opt. Lett. 43, 3925 (2018).
* Gilbert _et al._ (2018) J. R. Gilbert, C. P. Roberts, and J. L. Roberts, J. Opt. Soc. Am. B 35, 718 (2018).
* Zhu _et al._ (2016) B. Zhu, J. Cooper, J. Ye, and A. M. Rey, Physical Review A 94 (2016), 10.1103/physreva.94.023612.
* Chabé _et al._ (2014) J. Chabé, M.-T. Rouabah, L. Bellando, T. Bienaimé, N. Piovella, R. Bachelard, and R. Kaiser, Physical Review A 89 (2014), 10.1103/physreva.89.043833.
* Bienaimé _et al._ (2013) T. Bienaimé, R. Bachelard, N. Piovella, and R. Kaiser, Fortschritte der Physik 61, 377 (2013).
* Kupriyanov _et al._ (2017) D. Kupriyanov, I. Sokolov, and M. Havey, Physics Reports 671, 1 (2017).
* Bettles _et al._ (2020) R. J. Bettles, M. D. Lee, S. A. Gardiner, and J. Ruostekoski, Communications Physics 3, 1 (2020).
* Günter _et al._ (2012) G. Günter, M. R. de Saint-Vincent, H. Schempp, C. S. Hofmann, S. Whitlock, and M. Weidemüller, Physical Review Letters 108 (2012), 10.1103/physrevlett.108.013002.
# A Generative Model of Galactic Dust Emission Using Variational Inference
Ben Thorne,1 Lloyd Knox,1 and Karthik Prabhu1
1Department of Physics, University of California, One Shields Avenue, Davis,
CA 95616, USA
E-mail<EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
Emission from the interstellar medium can be a significant contaminant of
measurements of the intensity and polarization of the cosmic microwave
background (CMB). For planning CMB observations, and for optimizing
foreground-cleaning algorithms, a description of the statistical properties of
such emission can be helpful. Here we examine a machine learning approach to
inferring the statistical properties of dust from either observational data or
physics-based simulations. In particular, we apply a type of neural network
called a Variational Auto Encoder (VAE) to maps of the intensity of emission
from interstellar dust as inferred from Planck sky maps and demonstrate its
ability to a) simulate new samples with similar summary statistics as the
training set, b) provide fits to emission maps withheld from the training set,
and c) produce constrained realizations. We find VAEs are easier to train than
another popular architecture: that of Generative Adversarial Networks (GANs),
and are better-suited for use in Bayesian inference.
###### keywords:
cosmology: cosmic microwave background – ISM: general – methods:statistical
pubyear: 2021
## 1 Introduction
Among the many research enterprises stimulated by the detection of large-scale
anisotropies in the cosmic microwave background (CMB) by the COsmic Background
Explorer (COBE) with its Differential Microwave Radiometer (Smoot et al.,
1992), is the hunt for signatures of primordial gravitational waves (PGW). To
date, only upper limits have been set, most commonly expressed as limits on
the ratio of primordial tensor perturbation power to scalar perturbation
power, $r$. Soon after the COBE detection it was realized that reliably
detecting levels below $r\simeq 0.1$ could not be done with temperature
anisotropies alone (Knox & Turner, 1994), and that proceeding further would
require highly sensitive measurements of the polarization of the CMB on
angular scales of about a degree, or larger (Kamionkowski et al., 1997; Seljak
& Zaldarriaga, 1997).
Polarized emission from the interstellar medium of the Milky Way, in the
cleanest parts of the sky at the cleanest observing frequencies, is comparable
to the cosmic microwave background signal generated by PGWs if the PGW signal
is near the current 95% confidence upper limit of $r<0.06$ (BICEP2
Collaboration et al., 2018). So-called Stage III CMB experiments, such as the
Simons Observatory (Ade et al., 2019), and BICEP Array (Hui et al., 2018)
combined with SPT-3G (Benson et al., 2014) are designed to have sufficient
sensitivity and systematic error control to tighten the 95% confidence upper
limits by a factor of about 20. The Stage IV experiments LiteBIRD and CMB-S4
are targeting upper limits factors of 2 and 5 times more stringent still,
respectively. Thus we are rapidly moving into a regime where the foreground
contamination is up to two orders of magnitude larger than the signal of
interest (this holds for fluctuation power; the rms level of contamination in
the map is up to one order of magnitude larger).
The most exciting possibility is that there will be a detection of PGW, as
opposed to improved upper limits. A detection claim would essentially be a
claim that there is power remaining in the map that cannot be explained as a
residual instrumental systematic or residual foreground emission. Detection,
therefore, requires not only foreground cleaning, but the capability to
quantify the probability distribution of residual foreground power. Such
capability is hampered by our lack of prior knowledge of the probability
distribution of the non-Gaussian and non-isotropic galactic foreground
emission.
The state of the art in analysis of such observations either implicitly or
explicitly has the galactic emission, or their residuals, modeled as Gaussian
isotropic fields (Planck Collaboration et al., 2020; Aiola et al., 2020;
BICEP2 Collaboration et al., 2018). They are modeled as such not because they
are, but strictly for convenience.
At the very least, we need sufficient simulations of galactic emission to test
such algorithms for bias. A more ambitious objective is to abandon assumptions
of Gaussianity and isotropy altogether, and perform a complete Bayesian
analysis with incorporation of an appropriate prior for the spatial
distribution of interstellar emission. Groundbreaking progress toward such a
Bayesian analysis has been made recently, with the development of analysis
methodologies by Millea et al. (2020a), and the recent application to real
data (Millea et al., 2020b).
The analysis framework in Millea et al. (2020a) was developed for “de-lensing”
of the CMB; i.e., taking into account the impact of gravitational lensing on
the statistical properties of CMB polarization. Although it has not been
applied to multi-frequency data, or used for foreground cleaning, at a
conceptual level the framework can be straightforwardly extended to analysis
of foreground-contaminated multi-frequency data. Although this extension could
be implemented with isotropic Gaussian priors for foreground emission, it also
presents the opportunity to incorporate more realistic priors – priors that
more accurately reflect what we know about such emission from other data, or
from physics-based simulations.
We are thus interested in both creating simulated maps of galactic emission
with the appropriate statistical properties for testing analysis algorithms to
be used on real data, and also in learning, from other data and perhaps
physical modeling (e.g. MHD simulations of the interstellar medium (Kim et
al., 2019)) the statistical properties of maps of galactic emission for use in
Bayesian inference engines.
Here we report on progress toward accomplishing both of these tasks with the
use of neural networks. Aylor et al. (2019) studied the use of generative
adversarial networks (GANs) for learning how to simulate new emission maps
with statistical properties similar to those of a training set, whilst
Krachmalnicoff & Puglisi (2020) trained a network to simulate non-Gaussian small-scale
polarized dust emission. Here we present a similar study, this time using a
different neural network architecture and training program, that of
variational autoencoders (VAEs).
VAEs and GANs are examples of deep generative models. These models have had
recent success in accurately modeling complicated, high-dimensional datasets,
and generating realistic novel samples (Razavi et al., 2019; van den Oord et
al., 2016b; Brock et al., 2018). Generative models can be divided into two
main categories: likelihood-based models that seek to optimize the log
likelihood of the data, including the VAE (Kingma & Welling, 2013; Jimenez
Rezende et al., 2014), _flow_ based methods (Dinh et al., 2014, 2016; Jimenez
Rezende & Mohamed, 2015; Kingma & Dhariwal, 2018), and _autoregressive_ models
(van den Oord et al., 2016a); and implicit models, such as GANs (Goodfellow et
al., 2014), which train a generator and discriminator in an adversarial game
scenario. There are many trade-offs to consider when selecting a likelihood-
based approach (Kingma & Dhariwal, 2018), but here we choose to explore the
use of VAEs due to their simplicity and computational scalability to higher
resolution datasets.
We find some advantages of VAEs over GANs. The adversarial training process
does not produce an explicit inference model, and it is hard to consistently
compare model performance against some test set. Furthermore, it is also a
common problem that samples from GANs do not represent the full diversity of
the underlying distribution (Grover et al., 2017). In contrast, VAEs optimize
the log likelihood of the data. This means both that it is possible to
directly compare models, and trained models should support the entire dataset,
which is crucial when applying a trained model to real data. VAEs also tend to
be easier to train, in that training success is more robust to variations in
hyperparameters. As a downside, VAEs are well known for loss of resolution. We
see this in our results and discuss adaptations one could make to avoid this
degradation of angular resolution.
Although our work is motivated by the PGW-driven desire to understand the
statistical properties of polarized foreground emission, in this paper, as was
the case in Aylor et al. (2019), we restrict ourselves to intensity.
Observations of polarized dust emission with high signal-to-noise over a large
fraction of sky do not currently exist, which precludes the training of
similar models on real data. However, in ongoing work, we are exploring the
use of magnetohydrodynamical (MHD) (Kim et al., 2019) simulations to train
generative models of polarized emission. In this scenario a trained model
would provide a ‘compression’ of the information available in MHD simulations
into a single statistical model, which could then be used either in inference,
or to augment real low-resolution observations with physically-motivated
small-scale realizations.
The rest of this paper is structured as follows. In Section 2 we introduce
variational autoencoders, and the objective for their optimization. We then
describe the network architecture we used, the training dataset we produced to
train the network, and how hyperparameter values were set. In Section 3 we
present the results of applying the trained VAE to test set images. Finally,
in Section 4 we summarize our findings and discuss areas of current and future
work.
## 2 Variational Autoencoders
In this Section we will introduce the idea of variational autoencoders, the
specific model we implement, and the details of how we train that model.
Our goal here is to take a set of images of thermal emission from interstellar
dust $\mathbf{x}^{(i)}=(x_{1}^{(i)},\dots,x_{N}^{(i)})\in\mathbb{R}^{N}$, and
infer from them an underlying distribution, $p(\mathbf{x})$, from which they could have
been drawn, using the techniques of _generative modeling_. Variational
autoencoders are a type of generative machine learning model, which provide a
framework by which we may infer the parameters of a joint distribution over
our original data, and some _latent variables_ , $\mathbf{z}$, representing
the unobserved part of the model. We can factorize the joint distribution of
the data and latent variables into two terms representing the generative
process of the data, and the latent space, responsible for the variance in the
observed data:
$p(\mathbf{x},\mathbf{z})=\underbrace{p(\mathbf{x}|\mathbf{z})}_{{\rm
Generative}}\underbrace{p(\mathbf{z})}_{{\rm Variance}}.$ (1)
The VAE approach is to model the conditional distribution with an appropriate
family of functions with some unknown weights, $\theta$:
$p_{\theta}(\mathbf{x}|\mathbf{z})\approx p(\mathbf{x}|\mathbf{z})$. This
conditional model encodes the generative process by which $\mathbf{x}$ depends
on the latent set of variables $\mathbf{z}$. The prior $p(\mathbf{z})$ can then
be a simple, perhaps Gaussian, probability distribution,
which encodes the dataset variation in a simple latent space.
This can be seen as a type of regularization by which we separate out
different sources of variation within the dataset, a process that is quite
natural for physical processes, and often makes the resulting model
interpretable.
The goal of training is thus to find a transformation that delivers an
acceptable approximation $p_{\theta}(\mathbf{x})\approx p(\mathbf{x})$, that
is optimal (in some sense), given the training set data. Toward that end we
consider the parametrized joint distribution of $\mathbf{x}$ and $\mathbf{z}$:
$p_{\theta}(\mathbf{x},\mathbf{z})=p_{\theta}(\mathbf{x}|\mathbf{z})p(\mathbf{z}),$
(2)
which leads to our object of interest via marginalization over $z$:
$p_{\theta}(\mathbf{x})=\int d\mathbf{z}~{}p_{\theta}(\mathbf{x},\mathbf{z}).$
(3)
Our tasks are thus to choose a parameterization – this is referred to as a
choice of _architecture_ – and then find a means of optimizing these
parameters $\theta$ with respect to a chosen _objective_ , via a process
referred to as _training_.
### 2.1 Objective
In principle we could determine $\theta$ by maximizing the training set’s
joint likelihood $\prod_{i}p_{\theta}(\mathbf{x}^{(i)})$. In practice, however,
this would involve evaluating the integral in Equation 3 for each datapoint
individually, which is intractable for even moderately high-dimensional latent
spaces. The VAE framework provides an objective function that bounds the
maximum likelihood value, and is computationally tractable.
Let a dataset $\mathcal{D}$ be made up of samples
$\mathbf{x}^{(i)}=(x_{1}^{(i)},\dots,x_{N}^{(i)})\in\mathbb{R}^{N}$, which we
will assume to be independent and identically distributed samples from some
true underlying distribution $p_{\mathcal{D}}(\mathbf{x})$. Absent an
analytical model for $p_{\mathcal{D}}(\mathbf{x})$, we can instead take it to
be a member of an expressive family of functions parametrized by
$\bm{\theta}$: $p_{\mathcal{D}}(\mathbf{x})=p_{\bm{\theta}}(\mathbf{x})$. This
can be done by introducing an unobserved set of latent variables,
$\mathbf{z}=(z_{1},\dots,z_{d})\in\mathbb{R}^{d}$, and considering the joint
distribution $p(\mathbf{x},\mathbf{z})$. This joint distribution is specified
by: the prior over the latent space, $p(\mathbf{z})$, which is assumed to be
some simple distribution (typically Gaussian); and the conditional
distribution $p(\mathbf{x}|\mathbf{z})$, which is intended to represent most
of the complexity in the true underlying distribution
$p_{\mathcal{D}}(\mathbf{x})$. We model this distribution as a neural network
with weights $\theta$: $p_{\theta}(\mathbf{x}|\mathbf{z})$. The marginal
likelihood is then:
$p_{\theta}(\mathbf{x})=\int
d\mathbf{z}~{}p(\mathbf{z})p_{\theta}(\mathbf{x}|\mathbf{z})=\mathbb{E}_{p(\mathbf{z})}\left[p_{\theta}(\mathbf{x}|\mathbf{z})\right],$
(4)
where we have introduced the notation $\mathbb{E}_{Y}[h(y)]$ to indicate the
expectation of the function $h(y)$ with respect to the distribution $y\sim Y$.
In principle, we could determine the conditional model by fixing $\theta$ to a
value that maximizes the marginal likelihood. In practice, however, the
integral in Equation 4 is intractable, due to the dimensionality of the latent
space, and in any case would require a per-datapoint optimization process. As
a result, the posterior
$p_{\theta}(\mathbf{z}|\mathbf{x})=p_{\theta}(\mathbf{z},\mathbf{x})/p_{\theta}(\mathbf{x})$
is also intractable.
We make progress by introducing a second approximation, this time to the
posterior: $q_{\phi}(\mathbf{z}|\mathbf{x})\approx
p_{\theta}(\mathbf{z}|\mathbf{x})$, where $q_{\phi}(\mathbf{z}|\mathbf{x})$ is
often referred to as an _inference_ network. For any choice of
$q_{\phi}(\mathbf{z}|\mathbf{x})$, including any choice of its weights $\phi$,
we can write the log likelihood of the data as:
$\log~{}p_{\theta}(\mathbf{x})=\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\left[\log~{}p_{\theta}(\mathbf{x})\right].$
(5)
Applying the chain rule of probability:
$p_{\theta}(\mathbf{x},\mathbf{z})=p_{\theta}(\mathbf{z})p_{\theta}(\mathbf{x}|\mathbf{z})$,
and inserting an identity, this can be split into two terms:
$\log
p_{\theta}(\mathbf{x})=\mathbb{L}_{\theta,\phi}(\mathbf{x})+\mathbb{D}_{\rm
KL}(q_{\phi}(\mathbf{z}|\mathbf{x})||p_{\theta}(\mathbf{z}|\mathbf{x})),$ (6)
where $\mathbb{L}_{\theta,\phi}$ is referred to as the _evidence lower bound_
(ELBO):
$\mathbb{L}_{\theta,\phi}(\mathbf{x})\equiv\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\left[\log\left[\frac{p_{\theta}(\mathbf{x},\mathbf{z})}{q_{\phi}(\mathbf{z}|\mathbf{x})}\right]\right],$
(7)
and the second term is the Kullback-Leibler (KL) divergence:
$\mathbb{D}_{\rm
KL}(q_{\phi}(\mathbf{z}|\mathbf{x})||p_{\theta}(\mathbf{z}|\mathbf{x}))=\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\left[\log\left[\frac{q_{\phi}(\mathbf{z}|\mathbf{x})}{p_{\theta}(\mathbf{z}|\mathbf{x})}\right]\right],$
(8)
which is a measure of the ‘distance’ between two distributions, and is always
positive.
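When the ELBO is rearranged into a reconstruction term minus the KL divergence between the encoder distribution and the latent prior, the common choice of a diagonal-Gaussian encoder and standard-normal prior makes that KL term available in closed form. The following numpy sketch (an illustration, not code from this work) checks the closed-form expression against a Monte Carlo estimate of $\mathbb{E}_{q}[\log q - \log p]$:

```python
import numpy as np

def kl_diag_gaussian_to_std_normal(mu, sigma):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over dimensions.
    return 0.5 * np.sum(mu**2 + sigma**2 - 2.0 * np.log(sigma) - 1.0)

def kl_monte_carlo(mu, sigma, n_samples=200_000, seed=0):
    # Monte Carlo estimate of E_q[ log q(z|x) - log p(z) ] for the same pair.
    rng = np.random.default_rng(seed)
    z = mu + sigma * rng.standard_normal((n_samples, mu.size))
    log_q = -0.5 * np.sum(((z - mu) / sigma) ** 2 + np.log(2 * np.pi * sigma**2), axis=1)
    log_p = -0.5 * np.sum(z**2 + np.log(2 * np.pi), axis=1)
    return np.mean(log_q - log_p)

mu = np.array([0.5, -1.0, 0.0])
sigma = np.array([0.8, 1.2, 0.5])
exact = kl_diag_gaussian_to_std_normal(mu, sigma)
approx = kl_monte_carlo(mu, sigma)
```

The positivity of the KL divergence noted above is visible here: both estimates agree and are strictly positive whenever $q \neq p$.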
From Equation 6 we see that the bound $\mathbb{L}_{\theta,\phi}(\mathbf{x})$
will become tightest when $\mathbb{D}_{\rm
KL}(q_{\phi}(\mathbf{z}|\mathbf{x})||p_{\theta}(\mathbf{z}|\mathbf{x}))\rightarrow
0$, such that our approximation to the posterior,
$q_{\phi}(\mathbf{z}|\mathbf{x})\approx p_{\theta}(\mathbf{z}|\mathbf{x})$,
becomes exact. However, due to the presence of the
$p_{\theta}(\mathbf{z}|\mathbf{x})$ term, $\mathbb{D}_{\rm
KL}(q_{\phi}(\mathbf{z}|\mathbf{x})||p_{\theta}(\mathbf{z}|\mathbf{x}))$ can
not be evaluated directly, and so we are not able to directly optimize the
likelihood in Equation 6. Instead, we seek to maximize the evidence lower
bound, thereby achieving an ‘optimum’ set of weights $\theta,~{}\phi$.
The evidence lower bound and its gradient with respect to $\theta$ can be
computed straightforwardly. The gradients with respect to $\phi$ appear more
problematic, since the expectation we are calculating is taken over a
distribution parametrized by $\phi$. The typical Monte Carlo estimates of this
expectation, and its derivatives, are unbiased, but tend to have a high
variance, often making the training process unstable. Through a
reparametrization presented in Kingma & Welling (2013), it is possible to
rewrite this expectation such that the source of randomness is not dependent
on $\phi$, and gradients with respect to $\phi$ may be calculated with
standard Monte Carlo techniques. We are therefore able to optimize
$\mathbb{L}_{\theta,\phi}(\mathbf{x})$ by stochastic gradient descent, and
approximately optimize the marginal log likelihood.
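The reparametrization of Kingma & Welling (2013) can be sketched in a few lines of numpy: a sample $\mathbf{z}\sim\mathcal{N}(\bm{\mu},\bm{\sigma}^2)$ is written as a deterministic function of the parameters and parameter-free noise, so pathwise Monte Carlo gradients become available. The toy objective $\mathbb{E}[z^2]$ below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.7, 1.3
eps = rng.standard_normal(500_000)

# Reparametrization: z ~ N(mu, sigma^2) expressed as a deterministic
# function of (mu, sigma) and parameter-free noise eps ~ N(0, 1).
z = mu + sigma * eps

# Pathwise gradient of E[z^2] with respect to mu:
# d/dmu E[(mu + sigma*eps)^2] = E[2*(mu + sigma*eps)], estimated per sample.
grad_mu_est = np.mean(2.0 * z)
grad_mu_exact = 2.0 * mu   # analytic: d/dmu (mu^2 + sigma^2) = 2 mu
```

Because the randomness lives entirely in `eps`, the same Monte Carlo samples give low-variance gradient estimates with respect to `mu` and `sigma`, which is what stabilizes stochastic gradient descent on the ELBO.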
### 2.2 Architecture
In this section we describe the architecture of the networks
$p_{\theta}(\mathbf{x}|\mathbf{z})$ and $q_{\phi}(\mathbf{z}|\mathbf{x})$, and
the latent prior $p(\mathbf{z})$. We adopt a convolutional architecture for
both the encoder and decoder network.
#### 2.2.1 Latent Space
We choose to use a $d$-dimensional latent space, with a multivariate normal
prior, $\mathbf{z}\sim\mathcal{N}(0,\mathbb{1}^{d\times d})$.
#### 2.2.2 Encoder
The encoder maps input images $\mathbf{x}\in\mathbb{R}^{256\times 256}$ to
latent space distribution parameters,
$[\bm{\mu}^{d},\bm{\sigma}^{d}]\in\mathbb{R}^{2d}$. It is worth
emphasizing the point that, since we are modelling the distribution
$p(\mathbf{z}|\mathbf{x})$, the output of the encoder is not a single point in
the latent parameter space, but rather a distribution, parametrized by the
mean and variance $[\bm{\mu}^{d},\bm{\sigma}^{d}]$. The mapping from
image to latent space parameters requires both a dimensionality reduction, and
a reshaping. We achieve these goals by using a _convolutional neural network_.
In the following we will describe the precise network that we implemented,
using the language of neural networks. For details on the motivation for these
choices, and their technical meaning, we refer to introductory texts on
machine learning and convolutional neural networks such as Goodfellow et al.
(2016).
The encoder reduces the dimension of the input image by applying a series of
strided convolutions with a rectified linear unit activation function, and
then flattens the image for input to a final dense layer connected to the
output latent space distribution parameters. Each convolution is characterized
by a kernel shape with a number of pixels, $k_{i}$, where $i$ indicates the
layer, and a stride length, which we set to 2. The values $k_{i}$ are set
during the hyperparameter optimization stage described in Section 2.3.3. We
apply a batch normalization with momentum parameter equal to 0.9 after each
convolution. This regularizes the weights, and leads to more stable training.
A summary of the encoder model is given in Table 1.
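The spatial shapes in Table 1 follow mechanically from the stride-2 convolutions, assuming 'same' padding (an assumption on our part; the table does not state the padding): each layer maps a size $n$ to $\lceil n/2 \rceil$. A minimal check:

```python
import math

def conv_out(size, stride=2):
    # Output size of a 'same'-padded strided convolution: ceil(size / stride).
    return math.ceil(size / stride)

shapes, size = [], 256
for _ in range(3):          # three Conv2D layers with stride 2, as in Table 1
    size = conv_out(size)
    shapes.append(size)
```

This reproduces the 256 → 128 → 64 → 32 progression of Table 1; the transposed convolutions of the decoder (Table 2) invert it, multiplying each spatial size by the stride.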
#### 2.2.3 Decoder
The decoder is essentially the reverse process to the encoder, mapping a
latent vector $\mathbf{z}\in\mathbb{R}^{d}$ to an image
$\mathbf{x}\in\mathbb{R}^{256\times 256}$. We denote the decoder $g$, with
weights $\theta$, as $g_{\theta}:\mathbf{z}\rightarrow\mathbf{x}$. The primary
difference from the structure of the encoder is that we use transposed
convolutions as opposed to convolutions, in order to increase the size of each
dimension. A summary of the decoder model is given in Table 2.
Layer | Layer Output Shape | Hyperparameters
---|---|---
Input | (256, 256, 1) |
Conv2D | (128, 128, 256) | stride=2
ReLu | (128, 128, 256) |
BatchNorm | (128, 128, 256) | momentum=0.9
Conv2D | (64, 64, 128) | stride=2
ReLu | (64, 64, 128) |
BatchNorm | (64, 64, 128) | momentum=0.9
Conv2D | (32, 32, 64) | stride=2
ReLu | (32, 32, 64) |
BatchNorm | (32, 32, 64) | momentum=0.9
Dense | (1024) |
Dense | (512) |
Table 1: This table shows the structure of the encoder network, $q_{\phi}(\mathbf{z}|\mathbf{x})$.
Layer | Layer Output Shape | Hyperparameters
---|---|---
Input | (256, 1) |
Dense | (8192) |
Reshape | (16, 16, 32) |
BatchNorm | (16, 16, 32) | momentum=0.9
TransposeConv2D | (32, 32, 128) | stride=2
ReLu | (32, 32, 128) |
BatchNorm | (32, 32, 128) | momentum=0.9
TransposeConv2D | (64, 64, 64) | stride=2
ReLu | (64, 64, 64) |
BatchNorm | (64, 64, 64) | momentum=0.9
TransposeConv2D | (128, 128, 32) | stride=2
ReLu | (128, 128, 32) |
BatchNorm | (128, 128, 32) | momentum=0.9
TransposeConv2D | (256, 256, 16) | stride=2
ReLu | (256, 256, 16) |
BatchNorm | (256, 256, 16) | momentum=0.9
TransposeConv2D | (256, 256, 1) | stride=1
Table 2: This table shows the structure of the decoder network,
$p_{\theta}(\mathbf{x}|\mathbf{z})$.
### 2.3 Training
In this section we detail the process by which we optimize the weights of the
VAE model described in Section 2.2 with respect to the ELBO objective
introduced in Section 2.1. The training process requires us to specify the
training dataset, $\mathcal{D}$, the training _strategy_ by which we make
updates to the weights $\theta,~{}\phi$, and the process of hyperparameter
optimization by which we make concrete selections of meta parameters of the
model (such as kernel shapes and training parameters).
#### 2.3.1 Data
Machine learning techniques are notoriously data-hungry, and will perform best
for larger datasets. Standard computer vision datasets on which algorithms are
tested (e.g. ImageNet (Russakovsky et al., 2015)) contain tens of thousands,
sometimes millions, of images. However, we have only one sky from which to
obtain observations of Galactic dust. As such, we are forced to partition the
sky into patches, which we treat as separate images in the training process.
In order to obtain thousands of images, the natural linear scale of an
individual patch is $\sim 10^{\circ}$. Such a small patch size has the
advantage that we are then justified in projecting the cutouts onto the flat
sky, and applying standard machine learning techniques to the resulting two-
dimensional images, sidestepping the issue of defining neural networks that
operate on spherical images (for such implementations see Perraudin et al.
(2019); Krachmalnicoff & Tomasi (2019)).
We use the _Planck_ GNILC-separated thermal dust intensity map at 545 GHz
222http://pla.esac.esa.int/pla/aio/product-action?MAP.MAP_ID=COM_CompMap_Dust-
GNILC-F545_2048_R2.00.fits, which we download from the Planck Legacy Archive.
In order to extract cutout images from this map we follow a similar procedure
to Aylor et al. (2019). We mask the Galactic plane by excluding all regions at
absolute Galactic latitudes $|b|<15^{\circ}$. Then we lay down a set of centroids
$(l_{i+1},b_{i+1})=(l_{i}+s/\cos(b_{i}),b_{i}+s)$, where $s$ is a step size
parameter, and $s/\cos(b_{i})$ is the step between longitudes at a given
latitude, which ensures the same angular separation between neighboring
centroids along the longitudinal direction. Each centroid is then rotated to the equator, and an
$8^{\circ}\times 8^{\circ}$ square region around the centroid is projected
onto a Cartesian grid with 256 pixels along each side. For $s=4^{\circ}$, this
results in a dataset, $\mathcal{D}$, of 2254 maps. We then shuffle and split
$\mathcal{D}$ into three groups: a 70% training set, $\mathbf{x}^{\rm train}$,
a 15% validation set, $\mathbf{x}^{\rm val}$, and a 15% test set,
$\mathbf{x}^{\rm test}$.
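The shuffle-and-split step can be sketched with numpy. Taking 70% and 15% by integer truncation and assigning the remainder to the test set (our assumption about how the rounding is handled) gives 1577/338/339 images, consistent with the 339 test-set images used in Section 3:

```python
import numpy as np

rng = np.random.default_rng(42)
n_maps = 2254
indices = rng.permutation(n_maps)   # shuffle the dataset indices

n_train = int(0.70 * n_maps)        # 1577 training maps
n_val = int(0.15 * n_maps)          # 338 validation maps
train_idx = indices[:n_train]
val_idx = indices[n_train:n_train + n_val]
test_idx = indices[n_train + n_val:]  # remainder: 339 test maps
```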
In order to artificially increase the diversity of images in our limited
sample we employ two standard data augmentation techniques. During the data
preprocessing stage of training, we randomly flip each image along the
horizontal and vertical directions, and rotate each image by an integer
multiple of $90^{\circ}$. The convolution operation is not invariant under
these transformations; however, the transformed cutouts still constitute
perfectly realistic foreground images.
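Together, the two flips and the four rotations generate the eight symmetries of the square (the dihedral group $D_4$). A numpy sketch of the random augmentation step:

```python
import numpy as np

def augment(image, rng):
    # Randomly flip along each axis, then rotate by a multiple of 90 degrees.
    if rng.random() < 0.5:
        image = np.flip(image, axis=0)   # vertical flip
    if rng.random() < 0.5:
        image = np.flip(image, axis=1)   # horizontal flip
    k = rng.integers(0, 4)               # 0, 90, 180 or 270 degrees
    return np.rot90(image, k)

rng = np.random.default_rng(0)
x = np.arange(16.0).reshape(4, 4)
y = augment(x, rng)
```

Every output is a rearrangement of the same pixel values, so the pixel histogram of each cutout, and hence the training distribution of intensities, is unchanged by the augmentation.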
#### 2.3.2 Strategy
Here we discuss the training strategy used to learn the weights $\theta,\phi$.
As discussed in Section 2, to train a VAE we maximize the lower bound on the
log likelihood of the data given in Equation 7 with respect to the weights
$\theta,\phi$. In practice, at each step we compute a Monte Carlo estimate of
this quantity:
$\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\left[\log\frac{p_{\theta}(\mathbf{x},\mathbf{z})}{q_{\phi}(\mathbf{z}|\mathbf{x})}\right]\approx\log
p_{\theta}(\mathbf{x}|\mathbf{z})+\log p(\mathbf{z})-\log
q_{\phi}(\mathbf{z}|\mathbf{x})$ (9)
where $\mathbf{x}$ on the RHS is now a minibatch of the data, the size of
which is a hyperparameter of the training process. The analysis we present in
Section 2.3.3 shows that a batch size of 8 is preferred. For each batch we
then calculate the gradients of this quantity with respect to the weights
$\theta,\phi$ and backpropagate the errors through the network, adjusting
$\theta,\phi$ in accordance with the learning schedule. For this schedule we
used the Adam optimizer with hyperparameters determined through the
optimization process described in Section 2.3.3.
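A single-sample Monte Carlo estimate of Equation 9 for one image can be written out explicitly for a diagonal-Gaussian encoder and a unit-variance Gaussian decoder. The linear "decoder" below is a hypothetical stand-in for the convolutional $g_{\theta}$, chosen only so the sketch is self-contained:

```python
import numpy as np

rng = np.random.default_rng(3)
d, n_pix = 4, 16
W = rng.standard_normal((n_pix, d)) * 0.1   # toy linear "decoder" weights

def decoder_mean(z):
    return W @ z                            # stand-in for g_theta(z)

def elbo_estimate(x, mu, sigma):
    # Single-sample Monte Carlo estimate of Equation 9:
    # log p(x|z) + log p(z) - log q(z|x), with z = mu + sigma * eps.
    eps = rng.standard_normal(mu.size)
    z = mu + sigma * eps
    resid = x - decoder_mean(z)
    log_p_x_z = -0.5 * np.sum(resid**2 + np.log(2 * np.pi))
    log_p_z = -0.5 * np.sum(z**2 + np.log(2 * np.pi))
    log_q_z_x = -0.5 * np.sum(eps**2 + np.log(2 * np.pi * sigma**2))
    return log_p_x_z + log_p_z - log_q_z_x

x = rng.standard_normal(n_pix)
mu, sigma = np.zeros(d), np.ones(d)
elbo = np.mean([elbo_estimate(x, mu, sigma) for _ in range(2000)])
```

In training, this per-image estimate is averaged over a minibatch (of size 8 here) and its gradients with respect to the weights are backpropagated.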
The training was performed by passing over the entire dataset 100 times, and
in each pass splitting the data into batches of 8 images. To guard against
overfitting we evaluated $\mathbb{L}_{\theta,\phi}(\mathbf{x}^{\rm train})$
and $\mathbb{L}_{\theta,\phi}(\mathbf{x}^{\rm val})$ every five epochs and
checked for divergence between these quantities at late epochs. If the network
had begun to overfit on the training data, its predictions for the validation
set would deteriorate, which would be reflected in a worsening
$\mathbb{L}_{\theta,\phi}(\mathbf{x}^{\rm val})$. We found that the
$\mathbb{L}_{\theta,\phi}(\mathbf{x}^{\rm train})$ plateaued after 50 epochs,
and saw no divergence between $\mathbb{L}_{\theta,\phi}(\mathbf{x}^{\rm
train})$ and $\mathbb{L}_{\theta,\phi}(\mathbf{x}^{\rm val})$ after training
for an additional 50 epochs.
Models were built using the Tensorflow software package (Abadi et al., 2015),
and trained using a Tesla V100 GPU on the Cori supercomputer at NERSC.
#### 2.3.3 Hyperparameter Optimization
In this section we provide motivation for our selection of the model
hyperparameters. It is not possible to optimize model hyperparameters such as
batch size, or model architecture, using the same stochastic gradient descent
technique that is used to optimize model weights and biases. Instead, a
limited number of hyperparameter combinations can be trained, and the
corresponding model that achieves the best loss after a certain amount of
training time, or certain number of epochs, is used. The space of
hyperparameters is high-dimensional, and so cannot be uniformly densely
sampled due to computational cost. Instead, we employed a Bayesian
optimization approach in which a few random combinations of hyperparameters
are chosen, and trained for 20 epochs each. From this set of hyperparameters,
a Gaussian process (GP) model of the loss as a function of hyperparameters is
built. From this GP model, new trial candidates are selected, and trained,
with the resulting loss then being incorporated back into the GP fit. We
allowed this process to continue for 100 different trials, and used the
hyperparameters that achieved the lowest loss after twenty epochs of training.
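The loop described above can be sketched in numpy for a one-dimensional hyperparameter: fit a GP (RBF kernel) to the (hyperparameter, loss) pairs seen so far, then propose the candidate minimizing a lower-confidence bound of the GP posterior. The quadratic `toy_loss` and all kernel settings are hypothetical, standing in for the real "loss after 20 epochs":

```python
import numpy as np

def rbf_kernel(a, b, length=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)

def gp_posterior(x_train, y_train, x_cand, noise=1e-5):
    # Standard GP regression posterior mean and variance at the candidates.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_cand)
    Kss = rbf_kernel(x_cand, x_cand)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks.T @ alpha
    var = np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))
    return mean, np.maximum(var, 0.0)

def toy_loss(h):                     # hypothetical loss vs. hyperparameter
    return (h - 0.6)**2

rng = np.random.default_rng(7)
x_seen = rng.random(4)               # a few random initial trials
y_seen = toy_loss(x_seen)
for _ in range(10):                  # Bayesian-optimization iterations
    x_cand = np.linspace(0.0, 1.0, 101)
    mean, var = gp_posterior(x_seen, y_seen, x_cand)
    pick = x_cand[np.argmin(mean - np.sqrt(var))]   # lower confidence bound
    x_seen = np.append(x_seen, pick)
    y_seen = np.append(y_seen, toy_loss(pick))
best = x_seen[np.argmin(y_seen)]
```

The lower-confidence-bound rule trades off exploitation (low posterior mean) against exploration (high posterior variance); in the real search the inputs are the multi-dimensional hyperparameters and the objective is the training loss after 20 epochs.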
## 3 Results
### 3.1 Reconstructions
In this section we present reconstructions of test set images, and compare
their pixel value distribution and power spectra.
For a given image, $\mathbf{x}_{\rm test}$, we can sample the approximate posterior as
$\mathbf{z}_{\rm test}^{(i)}\sim q_{\phi}(\mathbf{z}|\mathbf{x})$, and push
these through the decoder to get a reconstructed image $\tilde{\mathbf{x}}^{(i)}_{\rm
test}=\mathbf{g}_{\theta}(\mathbf{z}^{(i)}_{\rm test})$. To summarize the
distribution of reconstructed images, we draw $L$ samples and calculate their
average:
$\tilde{\mathbf{x}}\approx\frac{1}{L}\sum_{l=1}^{L}\mathbf{g}_{\theta}(\mathbf{z}_{\rm
test}^{(l)}).$ (10)
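Equation 10 is a simple posterior-sample average. A numpy sketch, with a hypothetical small decoder standing in for $\mathbf{g}_{\theta}$:

```python
import numpy as np

rng = np.random.default_rng(5)

def decoder(z):
    # Hypothetical stand-in for g_theta: maps a latent vector to an 8x8 "image".
    return np.outer(np.tanh(z[:8]), np.tanh(z[8:16]))

def reconstruct(mu, sigma, n_samples=100):
    # Equation 10: average the decodings of n_samples posterior draws.
    imgs = [decoder(mu + sigma * rng.standard_normal(mu.size))
            for _ in range(n_samples)]
    return np.mean(imgs, axis=0)

mu, sigma = rng.standard_normal(16), 0.1 * np.ones(16)
x_tilde = reconstruct(mu, sigma)
```

This averaging is also the mechanism behind the small-scale smoothing discussed below: features that vary between posterior draws are suppressed in the mean.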
For the remainder of this section, a ‘reconstruction’ refers to the
calculation of Equation 10 with $L=100$. For a given reconstruction, we can
straightforwardly calculate two statistics: i) the histogram of its pixel
values and ii) the power spectrum. We calculate the histogram of pixel values
in 20 bins from -3 to 5, and normalize the count such that the area under the
histogram is equal to unity. To calculate the power spectrum we apply a cosine
apodization with a characteristic scale of one degree to the image, such that
it smoothly tapers to zero at the edge of the map. We then calculate the mode
coupling matrix for this mask, and calculate the uncoupled power spectrum
using the NaMaster code (Alonso et al., 2019). For reasons that will become
clear later we are primarily interested in comparing ranges of multipoles in
the signal-dominated regime, well within the resolution limit of the original
maps, and so we do not make any efforts to noise debias or account for the
beam present in the original maps.
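NaMaster handles the mask-induced mode coupling properly; for orientation, a simplified flat-sky version of the same procedure (cosine taper, 2D FFT, azimuthal binning, no mode-coupling correction) can be sketched as follows. This is an illustration only, not the NaMaster pipeline:

```python
import numpy as np

def cosine_taper(n, width=32):
    # 1D half-cosine taper rolling off to zero over `width` pixels at each edge.
    w = np.ones(n)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(width) / width))
    w[:width], w[-width:] = ramp, ramp[::-1]
    return w

def flat_sky_power_spectrum(image, n_bins=32):
    n = image.shape[0]
    taper = np.outer(cosine_taper(n), cosine_taper(n))
    fft = np.fft.fftshift(np.fft.fft2(image * taper))
    power = np.abs(fft)**2 / n**2
    # Azimuthally average the 2D power in bins of radial wavenumber.
    ky, kx = np.indices(image.shape) - n // 2
    k = np.hypot(kx, ky)
    bins = np.linspace(0, n // 2, n_bins + 1)
    which = np.digitize(k.ravel(), bins)
    return np.array([power.ravel()[which == b].mean()
                     for b in range(1, n_bins + 1)])

rng = np.random.default_rng(2)
cl = flat_sky_power_spectrum(rng.standard_normal((256, 256)))
```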
First, we present the reconstructions of three randomly-selected test set
images, and show the resulting maps, along with the residuals, in Figure 1. We
can see that the network does very well in reconstructing the large-scale
features in these test-set maps, and the visual quality is sufficient to
appear ‘real’, if lower-resolution. Features are well recovered up to
$\sim$degree scales, with features below that scale being smoothed out by the
calculation of the expectation in Equation 10. The residuals shown in the
bottom row of Figure 1 are well behaved and do not show any strong biases
correlated with features in the map.
Figure 1: This figure shows the reconstruction of three randomly-selected
images from the test set, not used during the training or validation of the
network. The top row are the original images, the second row are the
reconstructions, and the third row are the residuals of the reconstructions.
The reconstructions clearly lose small-scale details, but manage to
recover the large scale variations well.
In Figure 2 we take a single randomly-selected test set image, and show its
reconstruction, the pixel value histograms of each image, and their power
spectra. As was the case for the three examples shown in Figure 1, there is
excellent visual agreement between the original image and its reconstruction.
This is reinforced by the excellent agreement between the distribution of pixel
values in the two images, shown in the bottom left panel of Figure 2. The
reconstructed power spectrum in the bottom right panel of Figure 2 also shows
excellent agreement up to $\ell\sim 400$, and suppression of power in the
reconstructed image going to smaller scales, consistent with the visual
blurriness of the reconstructed image.
Figure 2: _Top left_ : a randomly-selected test set image, $\mathbf{x}$. _Top
right_ : the reconstruction of the test set image, $\tilde{\mathbf{x}}$, as
computed using Equation 10. _Bottom left_ : kernel density estimate of the
distribution of pixel values of the original image, and its reconstruction.
_Bottom right_ : the log power spectra of the test set image and its
reconstruction. Note that since the test set images are standardized, these
quantities are unitless.
In order to compare reconstructions for the whole test set, we now calculate
the pixel value distribution and power spectrum for each of the 339 images in
the test set and their reconstructions. In order to represent the distribution
of pixel value histograms across this test set, we calculate the quartiles and
median in each bin, across the test set. In Figure 3 we plot the $25^{\rm th}$
percentile, median, and $75^{\rm th}$ percentile as a function of bin center,
for both the original test set images, and their reconstructions. There is
excellent agreement between the two sets of images, with no evidence of any
aggregate bias in the reconstructions.
Figure 3: In this figure we compare the pixel value distributions of the 339
test set images (black), and their reconstructions (green). We calculate
quantiles across the test set, and plot the $25^{\rm th}$ and $75^{\rm th}$
quartiles (the dashed lines), and the median as functions of pixel value (the
solid lines).
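The per-bin quantiles across the test set reduce to a single `np.percentile` call over the image axis. Here random bin heights stand in for the 339 real histograms:

```python
import numpy as np

rng = np.random.default_rng(9)
n_images, n_bins = 339, 20
# Stand-in for the test-set histograms: one row of 20 bin heights per image.
hists = rng.random((n_images, n_bins))

# Per-bin quartiles and median across the test set (axis 0 runs over images).
q25, med, q75 = np.percentile(hists, [25, 50, 75], axis=0)
```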
In Figure 4 we compare the power spectra of all test set images and their
reconstructions. Figure 4 shows that the same behavior as was seen in Figure 2
is displayed for the entire test set. Spectra are generally well recovered for
$\ell<400$, with power being increasingly suppressed for $\ell>400$, relative
to the real image power spectra.
Figure 4: In this figure we compare the power spectra of the 339 test set
images (black) and their reconstructions (green). Each power spectrum is
plotted as an individual line.
Here, we are encountering a known issue with VAEs: reconstructed images are
often blurry (Kingma & Dhariwal, 2018; Kingma et al., 2016; Kingma & Welling,
2019). The blurriness can be understood by considering the objective function
in Equation 7, and inspecting the term
$\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\left[\log p_{\theta}(\mathbf{x},\mathbf{z})\right]$.
Since this expectation is taken with respect to the distribution
$q_{\phi}(\mathbf{z}|\mathbf{x})$, it will strongly penalize points
$(\mathbf{x},\mathbf{z})$ that are likely under $q_{\phi}$, but unlikely under
$p_{\theta}$. On the other hand, points that are likely under $p_{\theta}$,
but are not present in the empirical data distribution, will suffer a much
smaller penalty. The result is that, if the model is not sufficiently flexible
to fit the data distribution exactly, it will compensate by widening the
support of $p_{\theta}(\mathbf{x},\mathbf{z})$ beyond what is present in the
data distribution, inflating the variance of
$p_{\theta}(\mathbf{x}|\mathbf{z})$. Since we have assumed a Gaussian
distribution for the decoder model that is independent from pixel to pixel,
and given that the signal in the training images is red-tilted (as is the case
for most natural images containing extended recognizable structures), the
increased variance leads to a degradation of small-scale features through the
averaging process of Equation 10 (Zhao et al., 2017). A corollary of the
extended support of $p_{\theta}(\mathbf{x},\mathbf{z})$ is that sampling the
prior in order to generate novel images will not necessarily produce realistic
samples (Kingma & Welling, 2019).
One way in which the flexibility of VAEs may be enhanced is through the use of
_normalizing flows_ (Jimenez Rezende & Mohamed, 2015). As the name suggests,
the idea here is to start with a simple distribution, such as a multivariate
normal, and ‘stack’ layers of invertible transformations, such that the output
may be significantly more complex. There are certain requirements placed on
these transformations such that they remain computationally efficient, for
example they must have tractable Jacobians (Jimenez Rezende & Mohamed, 2015).
Expanding the VAE model presented here by introducing normalizing flows could
be expected to improve both the reconstruction quality, and the quality of
novel samples, and is the subject of current work.
### 3.2 Interpolation in the latent space
As a means of investigating the structure of the encoding that has been
learned, we study the ‘interpolation’ between real images, $\mathbf{x}_{1}$
and $\mathbf{x}_{2}$, by performing the interpolation between their latent
encodings, $\mathbf{z}_{1}$ and $\mathbf{z}_{2}$. From the smooth nature of
the changes in the resulting continuum of maps we will see that smooth
variations in the latent space result in smooth variations in the map space.
This study also demonstrates the ability of the VAE approach to generate novel
foreground images by restricting to a region of the latent space close to the
encodings of real maps, therefore avoiding the spurious regions of
$(\mathbf{x},\mathbf{z})$ that could be obtained by sampling from an ill-
fitted prior, as discussed at the end of Section 3.1.
The probability mass in high-dimensional distributions tends to concentrate in
a shell relatively far from the modal probability density. Therefore,
traversing the latent space in a straight line (in the Euclidean sense), does
not necessarily pass through areas of high probability mass. In order to keep
the interpolated points within areas of high probability mass, we interpolate
from $\mathbf{z}_{1}$ to $\mathbf{z}_{2}$ using spherical trajectories that
traverse great circles in the latent space, as the distance from the origin
smoothly changes from $|\mathbf{z}_{1}|$ to $|\mathbf{z}_{2}|$. Specifically,
we follow this continuous trajectory parametrized by some factor $\lambda$:
$\mathbf{z}_{1,2}(\lambda)=\frac{\sin((1-\lambda)\theta)}{\sin\theta}\mathbf{z}_{1}+\frac{\sin(\lambda\theta)}{\sin\theta}\mathbf{z}_{2},$
(11)
where $\cos(\theta)=\hat{\mathbf{z}}_{1}\cdot\hat{\mathbf{z}}_{2}$. We then
take $N$ points along this line corresponding to
$\lambda=[1/(N+1),2/(N+1),\dots,N/(N+1)]$, and decode to obtain the
corresponding map
$\mathbf{x}_{1,2}(\lambda)=g_{\theta}(\mathbf{z}_{1,2}(\lambda))$.
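The trajectory of Equation 11 can be sketched in a few lines of NumPy (the function names here are ours, for illustration); decoding each point with the trained decoder then yields the interpolated maps:

```python
import numpy as np

def slerp(z1, z2, lam):
    """Spherical interpolation between latent vectors (Equation 11).

    Follows a great-circle trajectory whose distance from the origin moves
    smoothly from |z1| to |z2|, keeping interpolants inside the shell where
    the probability mass of a high-dimensional Gaussian concentrates.
    """
    z1 = np.asarray(z1, dtype=float)
    z2 = np.asarray(z2, dtype=float)
    # cos(theta) = z1-hat . z2-hat, clipped for numerical safety.
    cos_theta = np.dot(z1, z2) / (np.linalg.norm(z1) * np.linalg.norm(z2))
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    return (np.sin((1.0 - lam) * theta) * z1
            + np.sin(lam * theta) * z2) / np.sin(theta)

def interpolation_points(z1, z2, N=10):
    """The N interior points lambda = k/(N+1), k = 1..N, along the path."""
    return [slerp(z1, z2, k / (N + 1)) for k in range(1, N + 1)]
```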
Figure 5: This figure presents synthetic images generated by interpolating
between real images, $\mathbf{x}_{1}$ and $\mathbf{x}_{2}$, shown in the top
left and bottom right panels respectively. The interpolation is carried out in
the latent space using Equation 11, and is parametrized by a continuous
variable $\lambda$. The intermediate panels show the interpolation evaluated
at $N=10$ points along the trajectory.
Figure 5 shows the smooth transition in image space between the two real
images (the top left panel and the bottom right panel) randomly selected from
the test set, calculated using the interpolation described above. Features,
such as the strong filamentary structures in the center of the image,
transition smoothly in and out of the image, demonstrating that small
perturbations in the latent space result in small perturbations in decoded
images.
### 3.3 Data Imputation
In this section we consider a possible application of our trained model to the
reconstruction of corrupted data. During the analysis of CMB data there are
many possible reasons that data may be incomplete, from masking of point
sources, to corruption by uncontrolled systematics. The task of _inpainting_
these regions is simple when the missing emission is well described by
Gaussian statistics, as is the case for the CMB (Bucher & Louis, 2012). The
lack of a similarly simple approach for the non-Gaussian foreground signal
means that previous efforts have relied on empirically-validated, simple,
algorithms, such as diffusive filling (Bucher et al., 2016). Future surveys
will have ever-lower noise floors, and so will be increasingly contaminated by
point-sources, even in polarization. The aggressive masking required in this
regime could lead to the failure of simple foreground inpainting techniques
(Puglisi & Bai, 2020). The statistical foreground model presented here allows
us to take a Bayesian approach to foreground inpainting, in which we may
compute a posterior distribution for the missing data, conditioned on the
observed data (Böhm et al., 2019). This has the advantage of conserving the
foregrounds’ statistical properties, whilst also taking into account all of
the contextual information in the image, unlike methods such as diffusive
inpainting. In the rest of this section we will present a toy model for
corrupted data, and show that we are able to perform inpainting by optimizing
the posterior distribution in the latent space.
Representing the contamination as a linear operator $\mathsf{A}$, we can write
down a model for the observed data $\mathbf{d}$:
$\mathbf{d}=\mathsf{A}\mathbf{x}+\mathbf{n}$, where $\mathbf{n}$ is a possible
noise term. The posterior distribution of $\mathbf{z}$ is given by Bayes’
theorem:
$\log p(\mathbf{z}|\mathbf{d})=\log p(\mathbf{z})+\log
p_{\theta}(\mathbf{d}|\mathbf{z})-\log p(\mathbf{d}).$ (12)
For a given statistical model of the noise, we have a complete description of
the term $\log p(\mathbf{d}|\mathbf{z})$, and we can work with the posterior
distribution in the latent space.
As a concrete example we will consider the case of a binary $N\times N$
masking operator, $\mathsf{A}$, with elements equal to one (zero) where pixels
are (un)observed. To form simulated ‘corrupted’ images, we take random images
from the test dataset, apply $\mathsf{A}$, and add white Gaussian noise
$\mathbf{n}$, characterized by a pixel standard deviation $\sigma$:
$\mathbf{d}_{\rm test}=\mathsf{A}\mathbf{x}_{\rm test}+\mathbf{n}$. The
posterior distribution in the latent space is then:
$-2\log p(\mathbf{z}|\mathbf{d}_{\rm
test})\propto\mathbf{z}^{T}\mathbf{z}+\frac{\bm{\mu}_{\theta}(\mathbf{z})^{T}\bm{\mu}_{\theta}(\mathbf{z})}{\sigma^{2}},$
(13)
where we have written the residual vector as
$\bm{\mu}_{\theta}(\mathbf{z})=\mathsf{A}\mathbf{g}_{\theta}(\mathbf{z})-\mathbf{d}_{\rm
test}$.
Fully sampling Equation 13 can be computationally expensive due to the
dimensionality of $\mathbf{z}$, and is made more challenging by the
possibility of $\log p(\mathbf{z}|\mathbf{d}_{\rm test})$ being multi-modal.
For these reasons, standard Markov chain Monte Carlo techniques can
often fail to fully explore the posterior (Böhm et al., 2019). We therefore
leave a sampling approach for future work, here taking only a single
representative point, the maximum a posteriori estimate $\hat{\mathbf{z}}_{\rm
test}=\operatorname*{argmax}_{\mathbf{z}}\log p(\mathbf{z}|\mathbf{d}_{\rm
test})$.
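A minimal sketch of this MAP optimization, assuming only a generic `decoder` callable as a stand-in for the trained network (the function name and signature are ours), could use SciPy's L-BFGS implementation:

```python
import numpy as np
from scipy.optimize import minimize

def map_inpaint(decoder, d, mask, sigma, z_dim, z0=None):
    """Maximize the latent-space posterior of Equation 13 with L-BFGS.

    decoder: callable mapping a latent vector z to a flattened image g(z).
    d:       observed (masked, noisy) image, flattened.
    mask:    binary array with 1 where pixels are observed (the operator A).
    sigma:   pixel noise standard deviation.
    Returns the MAP latent vector; decoding it gives the inpainted map.
    """
    def neg_log_post(z):
        # mu_theta(z) = A g(z) - d, as in the residual of Equation 13.
        resid = mask * decoder(z) - d
        return z @ z + (resid @ resid) / sigma**2

    if z0 is None:
        z0 = np.zeros(z_dim)
    res = minimize(neg_log_post, z0, method="L-BFGS-B")
    return res.x
```

This finds only a single posterior mode; as noted above, a full sampling treatment would be needed to characterize a multi-modal posterior.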
In the following we will take $\mathsf{A}$ to be a masking operator that
applies a binary mask to a map. However, as long as a forward model for the
corruption operation can be written down (e.g. a Gaussian convolution), the
same technique could be applied. We take three randomly selected test set
images, $\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3}$, and apply three
different binary masks, $\mathsf{A}_{1},\mathsf{A}_{2},\mathsf{A}_{3}$. To
each corrupted image, we add a white noise realization with a pixel standard
deviation of 0.2. For each corrupted, noisy image, we then maximize the
posterior in Equation 13 to find $\mathbf{z}_{i}^{\rm MAP}$ using the LBFGS
algorithm. In Figure 6 we show the randomly selected test set images in the
first row, the corrupted images in the second row, and the reconstructed map
$g_{\theta}(\mathbf{z}_{i}^{\rm MAP})$ in the third row. We also calculate the pixel
value histograms and power spectra of the input and reconstructed maps and
show these in the bottom two rows of Figure 6.
Figure 6: This figure shows three randomly-selected test set images,
$\mathbf{x}_{1,2,3}$ in the top row. As described in Section 3.3, these images
are corrupted with a binary mask $\mathsf{A}_{1,2,3}$ and white noise. The
corrupted images are shown in the second row. The third row shows the
reconstructed images obtained by maximizing the latent space posterior in
Equation 13 for each of the three corrupted images, and decoding the resulting
points in the latent space. The fourth and fifth rows show the pixel value
histograms and power spectra of the original and reconstructed maps.
One can see from Figure 6 that all the images are well reconstructed, and
there is no visible effect of the masking remaining in the reconstructions.
Comparing the regions in the first and third rows corresponding to the masked
areas, we see that the network does not reproduce the exact features in the
masked region, for any of the $\mathbf{x}_{i}$, as expected. However, the
network does reconstruct plausible inpaintings, with the correct statistics,
given the context in the rest of the image. For example, the reconstruction
$g_{\theta}(\mathbf{z}_{2}^{\rm MAP})$ does not replicate the true high-
intensity filamentary structure in the input image, $\mathbf{x}_{2}$, which
would be impossible. However, it does recognize from the context that
intensity is increasing towards the masked area in the bottom left of the
image, and populates that area with high-variance, high-intensity features.
Correspondingly, such high-intensity features are not seen in the
reconstructed regions of $g_{\theta}(\mathbf{z}_{1,3}^{\rm MAP})$, which
correspond to relatively low-emission regions. The pixel value histograms and
power spectra in the last two rows of Figure 6 show similar behavior. We see
good agreement between the original and reconstructed histograms and
power spectra for both the $\mathbf{x}_{1}$ and $\mathbf{x}_{3}$ maps, up to
the suppression at $\ell>400$ common to all reconstructions. On the other
hand, we see a disagreement between the original and reconstructed statistics
of $\mathbf{x}_{2}$, due to the higher variance associated with the filled-in
region.
These results show that the network has learned generalizable information
about foreground behavior, and is able to inpaint novel foreground emission
with correct statistical properties, based on the context of an image. The
forward model used in this inpainting process can be easily extended to maps
with multiple masks and different types of filtering and noise found in real
data.
## 4 Discussion and Conclusions
In this paper we have presented a new application of VAEs to images of
Galactic thermal dust emission. Using a training set extracted from Planck
observations of thermal dust emission, this technique allowed us to learn a
transformation from a space of uncorrelated latent variables with a
multivariate normal prior, to the space of possible dust maps.
The training process was validated by computing and comparing summary
statistics, including the distribution of pixel values, and power spectra of
reconstructed maps, on a test set withheld during the training process. The
applicability of the trained model was also demonstrated by reconstructing
data corrupted by noise and masking. This was the first use of a trained
generative dust model to perform Bayesian inference, and demonstrates the
applicability of this approach in the simulation of foreground images, and the
Bayesian modeling of polarized CMB data.
The usefulness of this model is currently limited by the flexibility of the
posterior, and its ability to fit the true underlying posterior. As was
discussed in Section 3.1, this has two main consequences: i) a naïve sampling
of the prior is not guaranteed to produce realistic samples, ii) reconstructed
images are blurry, limiting accuracy to degree scales. Both of these issues
may be tackled by increasing the expressiveness of the model (Kingma &
Welling, 2019), which we plan to do by introducing a normalizing flow to link
the prior and latent space (Kingma et al., 2016).
As discussed in the Section 1, our main goal is to model polarized dust
emission. We attempted a similar analysis to that presented here by repeating
the training procedure on a network that accepted an additional ‘channel’ as
input, representing a tuple of Stokes $Q$ and $U$ parameters, rather than only
Stokes $I$, and using the Planck 353 GHz polarization observations to form a
training set. We found that the network was not able to learn any meaningful
information from this setup, consistent with what similar analyses have found
(Petroff et al., 2020). In order to extend our analysis to polarization, we
are therefore exploring the use of MHD simulations (Kim et al., 2019) as a
training set. Kim et al. (2019) have demonstrated that simulations of a
multiphase, turbulent, magnetized ISM produce synthetic observations of the
ISM with statistics (such as the ratio of $E$ power to $B$ power, and the tilt
of the $EE$ and $BB$ power spectra) matching those of real skies. Our initial
results have shown that this is a promising alternative to the use of real
data in training generative networks.
## Acknowledgements
We would like to acknowledge useful conversations with Ethan Anderes and Kevin
Aylor in the preparation of this work. This work was supported by an XSEDE
start up allocation, PHY180022. This work was supported in part by the
National Science Foundation via awards OPP-1852617 and AST-1836010. We also
acknowledge the use of the Perlmutter preparedness GPU allocation on the Cori
supercomputer at NERSC.
## Data Availability
The data used in this study is available on the Planck Legacy Archive at the
URL: http://pla.esac.esa.int/pla/aio/product-action?MAP.MAP_ID=COM_CompMap_Dust-GNILC-F545_2048_R2.00.fits
## References
* Abadi et al. (2015) Abadi M., et al., 2015, TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems, https://www.tensorflow.org/
* Ade et al. (2019) Ade P., et al., 2019, J. Cosmology Astropart. Phys., 2019, 056
* Aiola et al. (2020) Aiola S., et al., 2020, arXiv e-prints, p. arXiv:2007.07288
* Alonso et al. (2019) Alonso D., Sanchez J., Slosar A., LSST Dark Energy Science Collaboration 2019, MNRAS, 484, 4127
* Aylor et al. (2019) Aylor K., Haq M., Knox L., Hezaveh Y., Perreault-Levasseur L., 2019, arXiv e-prints, p. arXiv:1909.06467
* BICEP2 Collaboration et al. (2018) BICEP2 Collaboration et al., 2018, Phys. Rev. Lett., 121, 221301
* Benson et al. (2014) Benson B. A., et al., 2014, in Holland W. S., Zmuidzinas J., eds, Vol. 9153, Millimeter, Submillimeter, and Far-Infrared Detectors and Instrumentation for Astronomy VII. SPIE, pp 552 -- 572, doi:10.1117/12.2057305, https://doi.org/10.1117/12.2057305
* Böhm et al. (2019) Böhm V., Lanusse F., Seljak U., 2019, arXiv e-prints, p. arXiv:1910.10046
* Brock et al. (2018) Brock A., Donahue J., Simonyan K., 2018, arXiv e-prints, p. arXiv:1809.11096
* Bucher & Louis (2012) Bucher M., Louis T., 2012, MNRAS, 424, 1694
* Bucher et al. (2016) Bucher M., Racine B., van Tent B., 2016, Journal of Cosmology and Astroparticle Physics, 2016, 055
* Dinh et al. (2014) Dinh L., Krueger D., Bengio Y., 2014, arXiv e-prints, p. arXiv:1410.8516
* Dinh et al. (2016) Dinh L., Sohl-Dickstein J., Bengio S., 2016, arXiv e-prints, p. arXiv:1605.08803
* Goodfellow et al. (2014) Goodfellow I. J., Pouget-Abadie J., Mirza M., Xu B., Warde-Farley D., Ozair S., Courville A., Bengio Y., 2014, arXiv e-prints, p. arXiv:1406.2661
* Goodfellow et al. (2016) Goodfellow I., Bengio Y., Courville A., 2016, Deep Learning. MIT Press
* Grover et al. (2017) Grover A., Dhar M., Ermon S., 2017, arXiv e-prints, p. arXiv:1705.08868
* Hui et al. (2018) Hui H., et al., 2018, in Zmuidzinas J., Gao J.-R., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 10708, Millimeter, Submillimeter, and Far-Infrared Detectors and Instrumentation for Astronomy IX. p. 1070807 (arXiv:1808.00568), doi:10.1117/12.2311725
* Jimenez Rezende & Mohamed (2015) Jimenez Rezende D., Mohamed S., 2015, arXiv e-prints, p. arXiv:1505.05770
* Jimenez Rezende et al. (2014) Jimenez Rezende D., Mohamed S., Wierstra D., 2014, arXiv e-prints, p. arXiv:1401.4082
* Kamionkowski et al. (1997) Kamionkowski M., Kosowsky A., Stebbins A., 1997, Phys. Rev. Lett., 78, 2058
* Kim et al. (2019) Kim C.-G., Choi S. K., Flauger R., 2019, ApJ, 880, 106
* Kingma & Dhariwal (2018) Kingma D. P., Dhariwal P., 2018, arXiv e-prints, p. arXiv:1807.03039
* Kingma & Welling (2013) Kingma D. P., Welling M., 2013, arXiv e-prints, p. arXiv:1312.6114
* Kingma & Welling (2019) Kingma D. P., Welling M., 2019, arXiv e-prints, p. arXiv:1906.02691
* Kingma et al. (2016) Kingma D. P., Salimans T., Jozefowicz R., Chen X., Sutskever I., Welling M., 2016, arXiv e-prints, p. arXiv:1606.04934
* Knox & Turner (1994) Knox L., Turner M. S., 1994, Phys. Rev. Lett., 73, 3347
* Krachmalnicoff & Puglisi (2020) Krachmalnicoff N., Puglisi G., 2020, arXiv e-prints, p. arXiv:2011.02221
* Krachmalnicoff & Tomasi (2019) Krachmalnicoff N., Tomasi M., 2019, A&A, 628, A129
* Millea et al. (2020a) Millea M., Anderes E., Wandelt B. D., 2020a, arXiv e-prints, p. arXiv:2002.00965
* Millea et al. (2020b) Millea M., et al., 2020b, arXiv e-prints, p. arXiv:2012.01709
* Perraudin et al. (2019) Perraudin N., Defferrard M., Kacprzak T., Sgier R., 2019, Astronomy and Computing, 27, 130
* Petroff et al. (2020) Petroff M. A., Addison G. E., Bennett C. L., Weiland J. L., 2020, arXiv e-prints, p. arXiv:2004.11507
* Planck Collaboration et al. (2020) Planck Collaboration et al., 2020, A&A, 641, A6
* Puglisi & Bai (2020) Puglisi G., Bai X., 2020, arXiv e-prints, p. arXiv:2003.13691
* Razavi et al. (2019) Razavi A., van den Oord A., Vinyals O., 2019, arXiv e-prints, p. arXiv:1906.00446
* Russakovsky et al. (2015) Russakovsky O., et al., 2015, International Journal of Computer Vision (IJCV), 115, 211
* Seljak & Zaldarriaga (1997) Seljak U., Zaldarriaga M., 1997, Phys. Rev. Lett., 78, 2054
* Smoot et al. (1992) Smoot G. F., et al., 1992, ApJ, 396, L1
* Zhao et al. (2017) Zhao S., Song J., Ermon S., 2017, arXiv e-prints, p. arXiv:1702.08658
* van den Oord et al. (2016a) van den Oord A., Kalchbrenner N., Kavukcuoglu K., 2016a, arXiv e-prints, p. arXiv:1601.06759
* van den Oord et al. (2016b) van den Oord A., et al., 2016b, arXiv e-prints, p. arXiv:1609.03499
# Transporting a prediction model for use in a new target population
###### Abstract
We consider methods for transporting a prediction model and assessing its
performance for use in a new target population, when outcome and covariate
data for model development are available from a simple random sample from the
source population, but only covariate data are available from a simple random
sample from the target population. We discuss how to tailor the prediction
model for use in the target population, how to assess model performance (e.g.,
by estimating the target population mean squared error), and how to perform
model and tuning parameter selection. We provide identifiability results for
measures of performance in the target population for a potentially
misspecified prediction model under a sampling design where the source and the
target population samples are obtained separately. We also introduce the
concept of prediction error modifiers that can be used to reason about
tailoring measures of model performance to the target population. We
illustrate the methods using simulated data.
Keywords: transportability, generalizability, model performance, prediction
error modifier, covariate-shift, domain adaptation
## Introduction
Users of prediction models typically want to obtain predictions in a specific
target population. For example, a healthcare system may want to deploy a
clinical risk prediction model [1] to identify individuals at high risk for
adverse outcomes among all patients receiving care. Prediction models are
often built using data from source populations represented in prospective
epidemiological cohorts, confirmatory randomized trials [2], or administrative
databases [3]. In most cases, the data from the source population that are
used for developing the prediction model cannot be treated as a random sample
from the target population where the model will be deployed because the two
populations have different data distributions. Consequently, a model developed
using the data from the source population may not be applicable to the target
population and model performance estimated using data from the source
population may not reflect performance in the target population.
Consider a setup where outcome and covariate data are available from a sample
of the source population and only covariate data are available from a sample
of the target population. For example, covariate data from the target
population may be obtained from administrative databases, but outcome data may
be unavailable (e.g., when outcome ascertainment requires specialized
assessments) or insufficient (e.g., when the number of outcome events is small
due to incomplete followup). In this setup, developing and assessing the
performance of a prediction model for the target population is not possible
using standard methods because of the complete lack of outcome data from the
target population; using data from the source population can be an attractive
alternative. Yet, as noted above, directly applying a prediction model
developed in data from the source population to the target population, or
treating model performance measures (e.g., mean squared prediction error)
estimated in the source data as reflective of performance in the target
population may be inappropriate when the two populations have different data
distributions. Thus, investigators are faced with two transportability tasks:
(1) tailoring a prediction model for use in a target population when relying
on outcome data from the source population; and (2) assessing the performance
of the model in that target population.
These two transportability tasks have received attention in the computer
science literature on covariate shift and domain adaptation [4, 5, 6, 7, 8, 9,
10, 11, 12]. In epidemiology, however, the transportability of prediction
models has been treated heuristically and commonly used methods do not have
well-understood statistical behavior. The related problem of transporting
inferences about treatment effects to a target population has received more
attention [13, 14, 15, 16], but there are important differences between
transportability of treatment effects and prediction models in terms of the
parameters being estimated and the methods used for estimation.
Here, we examine the conditions that allow transporting prediction models from
the source population to the target population. We discuss the implications of
these conditions both for tailoring the models for use in the target
population and for assessing model performance in that context. We show that
many popular measures of model performance can be identified and estimated
using covariate and outcome data from the source population and just covariate
data from the target population under both nested and non-nested sampling
designs, without the strong assumption that the prediction model is correctly
specified. We discuss the relevance of our results when using modern model-
building approaches such as cross-validation-based model selection. We
introduce the concept of prediction error modifiers, which is useful for
reasoning about transportability of measures of model performance to the
target population. Last, we illustrate the methods using simulated data.
## Sampling design and identifiability conditions
Let $Y$ be the outcome of interest and $X$ a covariate vector. We assume that
outcome and covariate information is obtained from a simple random sample from
the source population $\\{(X_{i},Y_{i}):i=1,\ldots,n_{\text{\tiny
source}}\\}$. Furthermore, covariate information is obtained from a simple
random sample from the target population, $\\{X_{i}:i=1,\ldots,n_{\text{\tiny
target}}\\}$; no outcome information is available from the target population.
This “non-nested” sampling design [17, 18], where the samples from the target
and source population are obtained separately, is the one most commonly used
in studies examining the performance of a prediction model in a new target
population. For that reason, we will present results for non-nested designs in
some detail, before considering nested designs, where the source population is
a subset of a larger population that represents the target population.
Let $S$ be an indicator for the population from which data are obtained, with
$S=1$ for the source population and $S=0$ for the target population, and
denote $n=n_{\text{\tiny source}}+n_{\text{\tiny target}}$ as the sample size
of the composite dataset consisting of the data from the source and target
population samples. This composite dataset is randomly split into a training
set and a test set. The training set is used to build a prediction model for
the expectation of the outcome conditional on covariates in the source
population, $\operatorname{\textnormal{\mbox{E}}}[Y|X,S=1]$, and then, the
test set is used to evaluate model performance. We use $g_{\beta}(X)$ to
denote the posited parametric model, indexed by the parameter $\beta$, and
$g_{\widehat{\beta}}(X)$ to denote the “fitted” model with estimated parameter
$\widehat{\beta}$. We use $f(\cdot)$ to generically denote densities.
We assume the following identifiability conditions:
1. A1.
Conditional independence of the outcome $Y$ and the data source $S$. For every
$x$ with positive density in the target population, $f(X=x,S=0)>0$,
$f(Y|X=x,S=1)=f(Y|X=x,S=0).$
Informally, this condition means that the relationship between $Y$ and $X$ is
the same in the source population and the target population and it implies
that the conditional expectation of $Y$ given $X$ is the same in the two
populations,
$\operatorname{\textnormal{\mbox{E}}}[Y|X,S=1]=\operatorname{\textnormal{\mbox{E}}}[Y|X,S=0]$.
2. A2.
Positivity. For every $x$ such that $f(X=x,S=0)\neq 0$, $\Pr[S=1|X=x]>0$.
Informally, this condition means that every covariate pattern in the target
population can occur in the source data, as sample size goes to infinity.
Next, we discuss how, under assumptions A1 and A2, the prediction model can be
tailored for use in the target population and how we can assess model
performance in the target population.
## Tailoring the model to the target population
Recall that $g_{\beta}(X)$ is a model for
$\operatorname{\textnormal{\mbox{E}}}[Y|X,S=1]$. Suppose that the parameter
$\beta$ takes values in the space $\mathcal{B}$. We say that the model is
correctly specified if there exists a $\beta_{0}\in\mathcal{B}$ such that
$g_{\beta_{0}}(X)=\operatorname{\textnormal{\mbox{E}}}[Y|X,S=1]$ [19].
Tailoring the fitted model $g_{\widehat{\beta}}(X)$ for use in the target
population depends on whether the posited model $g_{\beta}(X)$ is correctly
specified.
##### Correctly specified model:
Suppose that the model $g_{\beta}(X)$ is correctly specified and thus we can
construct a model-based estimator $g_{\widehat{\beta}}(X)$ that consistently
estimates $\operatorname{\textnormal{\mbox{E}}}[Y|X,S=1]$. Under condition A1,
a consistent estimator for $\operatorname{\textnormal{\mbox{E}}}[Y|X,S=1]$ is
also consistent for $\operatorname{\textnormal{\mbox{E}}}[Y|X,S=0]$ (because
the two expectations are equal when condition A1 holds). Moreover, when the
model for the conditional expectation is parametric (as we have assumed up to
now) and the parameter $\beta$ is estimated using maximum likelihood methods,
then the unweighted maximum likelihood estimator $\widehat{\beta}$ estimated
using only the source data training set is optimal in terms of having the
smallest asymptotic variance [20, 21].
##### Misspecified model:
Now, suppose, as is more likely to be the case, that the model $g_{\beta}(X)$
is misspecified. In that case, theoretical work on the behavior of weighted
maximum likelihood estimators for $\beta$ under covariate shift [21] shows
that the maximum likelihood estimator estimated using only source population
data is no longer optimal, in the sense of minimizing the Kullback-Leibler
divergence between the estimated and true conditional density of the outcome
given covariates. Instead, the Kullback-Leibler divergence is minimized by
using a weighted maximum likelihood estimator with weights set equal to the
ratio of the densities in the target and source populations, that is,
$f(X|S=0)/f(X|S=1)$.
In applied work, the density ratio is typically unknown and needs to be
estimated using the data, but direct estimation of density ratios is
challenging, particularly when $X$ is high-dimensional [22]. Instead, we can
use the fact that the density ratio is, up to a proportionality constant,
equal to the inverse of the odds of being from the source population,
$\dfrac{f(X|S=0)}{f(X|S=1)}\propto\dfrac{\Pr[S=0|X]}{\Pr[S=1|X]},$
to replace density ratio weights with inverse-odds weights and obtain an
optimal estimator of the model, tailored for use in the target population. The
inverse-odds weights can be obtained by estimating the probability of an
observation being from the source population conditional on covariates – a
task for which many practical methods are available for high-dimensional data
[23]. A reasonable approach for tailoring a potentially misspecified
prediction model for use in the target population could proceed in three
steps. First, estimate the probability of being from the source population,
using training data from the source population and target population. Second,
use the estimated probabilities to construct inverse-odds of participation
weights for observations in the training set from the source population.
Third, apply the weights from the second step to estimate the prediction model
using all observations in the training set from the source population.
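The three steps above can be sketched as follows, assuming simple logistic and linear working models for concreteness (the function name and model choices are illustrative, not prescribed by the text):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def fit_tailored_model(X_source, y_source, X_target):
    """Tailor a (possibly misspecified) prediction model to the target.

    Step 1: estimate Pr[S=1 | X] on the pooled training covariates.
    Step 2: form inverse-odds weights Pr[S=0|X] / Pr[S=1|X] for the
            source observations.
    Step 3: fit the outcome model on the source data with those weights.
    """
    # Step 1: source-membership model (S=1 source, S=0 target).
    X_pool = np.vstack([X_source, X_target])
    s = np.concatenate([np.ones(len(X_source)), np.zeros(len(X_target))])
    clf = LogisticRegression().fit(X_pool, s)

    # Step 2: inverse-odds weights for the source observations.
    p_source = clf.predict_proba(X_source)[:, 1]   # Pr[S=1 | X]
    weights = (1.0 - p_source) / p_source          # Pr[S=0|X] / Pr[S=1|X]

    # Step 3: weighted maximum likelihood fit on the source data only.
    model = LinearRegression().fit(X_source, y_source, sample_weight=weights)
    return model, weights
```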
One difficulty with the above procedure is that, in non-nested designs, the
sample from the source population and the sample from the target population
are obtained separately, with sampling fractions from the corresponding
underlying populations that are unknown by the investigators and unlikely to
be equal. When that is the case, the probabilities $\Pr[S=0|X]$ and
$\Pr[S=1|X]$ in the inverse-odds weights are not identifiable from the
observed data [18, 24] (i.e., cannot be estimated using the observed data).
Although the inverse-odds weights are not identifiable, in Appendix A.1 we
show that, up to an unknown proportionality constant, they are equal to the
inverse-odds of participation weights _in the training set_ ,
$\dfrac{\Pr[S=0|X]}{\Pr[S=1|X]}\propto\dfrac{\Pr[S=0|X,D_{\text{\tiny
train}}=1]}{\Pr[S=1|X,D_{\text{\tiny train}}=1]},$ (1)
where $D_{\text{\tiny train}}$ is an indicator of whether an observation is in
the training set used to estimate the inverse-odds weights. It follows
that we can use inverse-odds weights estimated in the training set, when
estimating $\beta$ with the weighted maximum likelihood estimator.
## Assessing model performance in the target population
We now turn our attention to assessing model performance in the target
population. For concreteness, we focus on model assessment using the squared
error loss function and on identifying and estimating its expectation, that
is, the mean squared error (MSE), in the target population. The squared error
loss $(Y-g_{\widehat{\beta}}(X))^{2}$ quantifies the discrepancy between the
(observable) outcome $Y$ and the model-derived prediction
$g_{\widehat{\beta}}(X)$ in terms of the square of their difference. The MSE
in the target population is defined as
$\psi_{\widehat{\beta}}=\operatorname{\textnormal{\mbox{E}}}[(Y-g_{\widehat{\beta}}(X))^{2}|S=0].$
In the main text of this paper, we focus on the MSE because it is a commonly
used measure of model performance. Our results, however, readily extend to
other measures of performance. In Appendix A.1, we provide identifiability
results for general loss function-based measures of model performance.
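Given inverse-odds weights of the kind discussed in the previous section, a natural plug-in estimator of the target-population MSE reweights the squared errors computed on source test data; this is a sketch suggested by the identification results, not the paper's formal estimator, which is developed in its appendix:

```python
import numpy as np

def weighted_target_mse(y_source, preds_source, inv_odds_weights):
    """Inverse-odds-weighted estimate of psi = E[(Y - g(X))^2 | S=0].

    Weights estimate Pr[S=0|X] / Pr[S=1|X] up to a constant, reweighting
    source squared errors toward the target covariate distribution; the
    normalization by the weight sum removes the unknown constant.
    """
    sq_err = (y_source - preds_source) ** 2
    return np.sum(inv_odds_weights * sq_err) / np.sum(inv_odds_weights)
```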
### Prediction error modifiers
To help explain why model performance measures need to be tailored for use in
the target population, we introduce the term “prediction error modifier” to
describe a covariate that, for a given prediction model, is associated with
prediction error as assessed with some specific measure of model performance.
Slightly more formally and using the squared error loss as an example, we say
that the random variable $Z$ is a prediction error modifier, for the model
$g_{\widehat{\beta}}(X)$, with respect to MSE in the source population, if the
conditional expectation
$\operatorname{\textnormal{\mbox{E}}}[(Y-g_{\widehat{\beta}}(X))^{2}|Z=z,S=1]$
varies as a function of $z$. Several parametric or non-parametric methods are
available to examine whether
$\operatorname{\textnormal{\mbox{E}}}[(Y-g_{\widehat{\beta}}(X))^{2}|Z,S=1]$
is a constant [25]. The prediction error modifier $Z$ can contain all the
covariates in $X$ or only a subset of them. When the distribution of
prediction error modifiers differs between the source and target populations,
measures of model performance estimated using data from the source population
are unlikely to be applicable in the target population, in the sense that the
performance of the model in the source data may be very different (either
better or worse) compared to performance of the same model in the target
population. Large differences in performance measures between the source and
target population can occur even if the true outcome model in the two
populations is the same (i.e., even if condition A1 holds) because most common
measures of model performance average (marginalize) prediction errors over the
data distribution of the target population, and the covariate distribution of
the target population can be different from the distribution in the source
population.
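One crude way to probe for prediction error modification, assuming held-out squared errors from the source sample are available, is to compare average squared errors across bins of $Z$; the binning scheme below is purely illustrative, and formal parametric and non-parametric tests are discussed in [25].

```python
def conditional_mse_by_bin(z, sq_err, edges):
    # Average the squared errors within half-open bins [edges[k], edges[k+1])
    # of the candidate modifier Z among source observations. Large differences
    # across bins suggest Z is a prediction error modifier.
    sums = [0.0] * (len(edges) - 1)
    counts = [0] * (len(edges) - 1)
    for zi, ei in zip(z, sq_err):
        for k in range(len(edges) - 1):
            if edges[k] <= zi < edges[k + 1]:
                sums[k] += ei
                counts[k] += 1
                break
    return [s / c if c else float("nan") for s, c in zip(sums, counts)]
```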
Figure 1 shows an example of a prediction error modifier that is differently
distributed between the source and target population resulting in an MSE in
the target population that is higher than the MSE in the source population;
because the covariate vector in the example is one-dimensional, $X$ and $Z$ coincide.
In the middle panel of Figure 1 we plot the inverse-odds weights as a function
of the prediction error modifier $X$; in the bottom panel we plot the
conditional squared errors as a function of $X$. Because both the conditional
squared errors and the inverse-odds weights (and therefore the probability of
being from the target population) increase as $X$ increases, the target
population MSE (which is equal to the expectation of the squared errors) is
larger than the source population MSE. Hence, directly using the source
population MSE in the context of the target population would lead to over-
optimism about model performance.
### Assessing model performance in the target population
In our setup, where outcome information is only available from the sample of
the source population, we need to account for differences in the data
distribution between the source population and the target population to assess
model performance in the target population. Proposition 1 in Appendix A.1
shows that, under the setup described previously and conditions A1 and A2,
$\psi_{\widehat{\beta}}$ is identifiable using source and target population
data through the expression
$\psi_{\widehat{\beta}}=\operatorname{\textnormal{\mbox{E}}}[\operatorname{\textnormal{\mbox{E}}}[(Y-g_{\widehat{\beta}}(X))^{2}|X,S=1,D_{\text{\tiny
test}}=1]|S=0,D_{\text{\tiny test}}=1],$
or equivalently using an inverse-odds weighting expression
$\psi_{\widehat{\beta}}=\frac{1}{\Pr[S=0|D_{\text{\tiny
test}}=1]}\operatorname{\textnormal{\mbox{E}}}\left[\frac{I(S=1)\Pr[S=0|X,D_{\text{\tiny
test}}=1]}{\Pr[S=1|X,D_{\text{\tiny
test}}=1]}(Y-g_{\widehat{\beta}}(X))^{2}\bigg{|}D_{\text{\tiny
test}}=1\right].$ (2)
Here, $D_{\text{\tiny test}}$ is an indicator for whether an observation is in
the source or target test data.
The identifiability result in expression (2) suggests the following inverse-
odds weighting estimator [26, 21] for the target population MSE:
$\widehat{\psi}_{\widehat{\beta}}=\frac{\sum\limits_{i=1}^{n}I(S_{i}=1,D_{\text{\tiny
test},i}=1)\widehat{o}(X_{i})\left(Y_{i}-g_{\widehat{\beta}}(X_{i})\right)^{2}}{\sum\limits_{i=1}^{n}I(S_{i}=0,D_{\text{\tiny
test},i}=1)},$ (3)
where $\widehat{o}(X)$ is an estimator for the inverse-odds weights in the
test set, $\dfrac{\Pr[S=0|X,D_{\text{\tiny test}}=1]}{\Pr[S=1|X,D_{\text{\tiny
test}}=1]}$. To ensure independence between the data used to train the model
and the data used to evaluate the model, we propose to use inverse-odds
weights estimated using the training set for model building and inverse-odds
weights estimated using the test set for estimating model performance.
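A direct sample analogue of the estimator in equation (3) can be sketched as below; the function and argument names are illustrative, and `o_hat` stands for the estimated inverse-odds weights $\widehat{o}(X_i)$ fit on the test set.

```python
def target_mse_estimator(s, d_test, o_hat, y, y_pred):
    # Inverse-odds weighting estimator of the target population MSE, eq. (3):
    # the numerator sums weighted squared errors over source test observations
    # (S=1, D_test=1); the denominator counts target test observations
    # (S=0, D_test=1).
    num = sum(o * (yi - yp) ** 2
              for si, di, o, yi, yp in zip(s, d_test, o_hat, y, y_pred)
              if si == 1 and di == 1)
    den = sum(1 for si, di in zip(s, d_test) if si == 0 and di == 1)
    return num / den
```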
An important feature of our result is that it does not require the prediction
model to be correctly specified, that is, we do not assume that
$g_{\widehat{\beta}}(X)$ converges to the true conditional expectation of the
outcome in the source population,
$\operatorname{\textnormal{\mbox{E}}}[Y|X,S=1]$. This implies that model
performance measures in the target population are identifiable and estimable,
both for misspecified and correctly specified models. Informally, our
identifiability results require the existence of a common underlying model for
the source and target population (condition A1), but they do not require the
(much less plausible) assumption that investigators can correctly specify that
model.
So far we have focused on the scenario where the prediction model is built
using the training data and is evaluated using the test data, and where the
entire composite dataset (formed by appending data from the source and target
population) is split into a test and a training set that are used for model
estimation and assessment. In some cases an established model is available
(e.g., one developed using external data) and the goal of the analysis is
limited to assessing model performance in the target population. In that case,
no data from the source or target population need to be used for model
development and all available data can be used to evaluate model performance
and treated as a part of the “test set”.
We should note here that, provided the prediction model is correctly specified,
exchangeability in mean over $S$, that is
$\operatorname{\textnormal{\mbox{E}}}[Y|X,S=1]=\operatorname{\textnormal{\mbox{E}}}[Y|X,S=0]$,
is sufficient for the parameter $\beta$ to be identifiable using data from the
source population alone. Exchangeability in mean over $S$ is a weaker
condition than condition A1; that is, condition A1 implies exchangeability in
mean, but the converse is not true. Exchangeability in mean, however, is not
sufficient for transporting measures of model performance, such as the MSE. In
Appendix C we give an example of a setting where exchangeability in mean holds
but it is not sufficient to identify the target population MSE.
## Model and tuning parameter selection
Up to now we have proceeded as if the source population data in the training
set are used to estimate parameters of a pre-specified parametric model,
without employing any form of model selection (e.g., variable choice or other
specification search) or tuning parameter selection. Yet, when developing
prediction models, analysts often select between multiple different models and
statistical learning algorithms usually have one or more tuning parameters.
Importantly, data-driven methods for model and tuning parameter selection,
such as cross-validation-based procedures, rely on optimizing some measure of
model performance, such as the MSE.
Consider, for instance, tuning parameter selection using $K$-fold cross-
validation. In such an analysis, we split the data into $K$ mutually exclusive
subsets (“folds”) and for each value of the tuning parameter we build the
model with the selected tuning parameter value on $K-1$ of the folds and
estimate a measure of model performance on the fold that is not used for model
building. This process is repeated so that each of the $K$ folds is left out of
the model building process once, resulting in $K$ estimates of model performance.
The final cross-validated estimator of model performance associated with the
tuning parameter value is the average of the $K$ estimators. The cross-
validated value of the tuning parameter is selected as the value of the tuning
parameter that optimizes the cross-validated estimator of model performance.
Clearly, data-driven model and tuning parameter selection relies on estimating
measures of model performance. Furthermore, tailoring the cross-validated
model for use in the target population and tuning parameter selection to
improve model performance for use in the target population require
incorporating the results from the two preceding sections to account for
differences in the distribution of covariates between the source and target
population. Specifically, when prediction error modifiers have a different
distribution in the source and the target population, cross-validated measures
of model performance calculated using the source population data are biased
estimators of model performance in the target population. Inverse-odds
weighting estimators can adjust for that bias and failing to adjust for this
bias when performing cross-validation is likely to lead to sub-optimal model
or tuning parameter selection in the context of the target population.
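The weighted cross-validation described above can be sketched generically as follows. The callables `fit` and `weighted_mse` are placeholders for, respectively, model building (using training-split inverse-odds weights) and the inverse-odds weighting performance estimator of equation (3) applied to the held-out fold; nothing here is specific to a particular learner.

```python
import random

def weighted_kfold_cv(n, K, fit, weighted_mse, seed=0):
    # Generic K-fold skeleton: for each fold, build the model on the other
    # K-1 folds and evaluate the inverse-odds weighted performance measure
    # on the held-out fold; return the average of the K fold estimates.
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[k::K] for k in range(K)]
    scores = []
    for k in range(K):
        test = folds[k]
        train = [i for j, f in enumerate(folds) if j != k for i in f]
        scores.append(weighted_mse(test, fit(train)))
    return sum(scores) / K
```

Tuning parameter selection then amounts to running this loop for each candidate value and picking the value that optimizes the averaged, weighted estimate.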
## Illustration using simulated data
In this section we use simulated data to illustrate (i) the performance of
correctly and incorrectly specified prediction models when used with or
without inverse-odds of participation weights; (ii) the potential for bias
resulting from the naive (unweighted) MSE estimator that uses only the source
population data to estimate the target population MSE; and (iii) the ability
to adjust for that bias using the inverse-odds weighting estimator.
##### Data generation:
We simulated the outcome using the linear model $Y=1+X+0.5X^{2}+\varepsilon$,
where $\varepsilon\sim\mathcal{N}(0,X)$ and $X\sim Uniform(0,10)$. Under this
model, the errors are heteroscedastic because the error variance directly
depends on the covariate $X$. We simulated participation in the source data
using a logistic regression model
$\ln\left(\frac{\Pr[S=1|X]}{1-\Pr[S=1|X]}\right)=1.5-0.3X$. We set the total
sample size to $1000$ and the source and target population data were randomly
split in a 1:1 ratio into a training and a test set.
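The data generating mechanism can be sketched as below, interpreting $\mathcal{N}(0,X)$ as a normal with variance $X$ (consistent with the heteroscedasticity statement that follows); the function name, seed, and tuple layout are illustrative.

```python
import math
import random

def simulate(n=1000, seed=1):
    # Simulation study data: Y = 1 + X + 0.5 X^2 + eps with
    # eps ~ N(0, variance = X), X ~ Uniform(0, 10), and
    # logit Pr[S=1|X] = 1.5 - 0.3 X; D_train implements the 1:1 split.
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = rng.uniform(0, 10)
        y = 1 + x + 0.5 * x * x + rng.gauss(0, math.sqrt(x))
        p = 1 / (1 + math.exp(-(1.5 - 0.3 * x)))
        s = 1 if rng.random() < p else 0
        d_train = 1 if rng.random() < 0.5 else 0
        data.append((x, y, s, d_train))
    return data
```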
Under this data generating mechanism, the target population MSE is larger than
the source population MSE and both conditions A1 and A2 are satisfied. We
considered two prediction models, a correctly specified linear regression
model that included main effects of $X$ and $X^{2}$ and a misspecified linear
regression model that only included the main effect of $X$. We also considered
two approaches for estimating each posited prediction model: ordinary least
squares regression (unweighted, OLS) and weighted least squares regression
(WLS) where the weights were equal to the inverse of estimated odds of
participation in the source data training set. We estimated the inverse-odds
of participation in the training set, $\Pr[S=0|X,D_{\text{\tiny
train}}=1]/\Pr[S=1|X,D_{\text{\tiny train}}=1]$, using a correctly specified
logistic regression model for $\Pr[S=1|X,D_{\text{\tiny train}}=1]$. Figure 2
highlights the relationship between the correct model and the large-sample
limits of the weighted and unweighted misspecified models. For the inverse-
odds weighting estimator $\widehat{\psi}_{\widehat{\beta}}$, we estimated the
odds weights $\widehat{o}(X)$ in the test set by fitting a correctly specified
logistic regression model for $\Pr[S=1|X,D_{\text{\tiny test}}=1]$ using the
test set data.
##### Simulation results:
The results from $10,000$ runs of the simulation are presented in Table 1. For
both OLS and WLS estimation of the prediction model, the correctly specified
model resulted in smaller average target population and source population MSE
estimates compared with the misspecified model. When comparing the performance
of OLS and WLS estimation of the prediction model in the target population, OLS
performed slightly better than WLS when the model was correctly specified
(average MSE of $45.8$ vs. $46.2$). When the prediction model was incorrectly
specified, OLS performed worse than WLS (average MSE of $66.3$ vs. $58.0$).
The last column of Table 1 shows that the average of the inverse-odds
weighting MSE estimator across the simulations was very close to the true
target population MSE (obtained via numerical methods) for all combinations of
model specifications and use of weights. In all scenarios of this simulation,
the source population MSE estimator was substantially lower than the target
population MSE. Hence, using the estimated source population MSE as an
estimator for the target population MSE would lead to substantial
underestimation of the MSE (i.e., showing model performance to be better than
it is in the context of the target population). In contrast, the inverse-odds
weighting estimator would give an accurate assessment of model performance in
the target population.
## Nested designs
Thus far, we have focused on the non-nested sampling design. Nested sampling
designs are an alternative approach where the source population is a subset of
the target population of interest [16, 18, 27]. Examples of such nested
designs arise when the sample from the source population, from which outcome
information is available, can be embedded within a larger cohort (e.g., via
record linkage techniques) that can be viewed as representing the target
population. Our results can be applied, with minor modifications, to nested
designs. In Appendix B, we prove an identification result for nested designs
and provide an estimator for loss-based measure of target population model
performance.
## Discussion
We considered transporting prediction models to a different population than
was used for original model development, when outcome and covariate data are
available on a simple random sample from the source population and covariate
information is available on a simple random sample from the target population.
We described the adjustments needed when the covariate distribution differs
between the source and target population and provided identification results.
We discussed how to tailor the prediction model to the target population and
how to calculate measures of model performance in the context of the target
population, without requiring the prediction model to be correctly specified.
We also examined tailoring data-driven model and tuning parameter selection to
the target population. The key insight is that most measures of model
performance average over the covariate distribution and, as a result,
estimators of these measures obtained in data from the source population will
typically be biased for the corresponding measures in the target population,
when the covariate distribution differs between the two populations.
To simplify the exposition, throughout this paper we have assumed that the
covariates needed to satisfy the conditional independence condition (A1) are
the same as the covariates used in the prediction model. In practice, the set
of covariates needed to satisfy condition A1 may be much larger than the set
of covariates that are practically useful to include in the prediction model.
The identifiability results in our paper can be easily modified to allow for
the two sets of covariates to be different.
## References
* [1] Ewout W Steyerberg et al. Clinical prediction models. Springer, 2019.
* [2] Romin Pajouheshnia, Rolf HH Groenwold, Linda M Peelen, Johannes B Reitsma, and Karel GM Moons. When and how to use data from randomised trials to develop or validate prognostic models. BMJ, 365, 2019.
* [3] Benjamin A Goldstein, Ann Marie Navar, Michael J Pencina, and John Ioannidis. Opportunities and challenges in developing risk prediction models with electronic health records data: a systematic review. Journal of the American Medical Informatics Association, 24(1):198–208, 2017.
* [4] Steffen Bickel, Michael Brückner, and Tobias Scheffer. Discriminative learning for differing training and test distributions. In Proceedings of the 24th International Conference on Machine Learning, pages 81–88, 2007.
* [5] Masashi Sugiyama, Matthias Krauledat, and Klaus-Robert Müller. Covariate shift adaptation by importance weighted cross validation. Journal of Machine Learning Research, 8(May):985–1005, 2007.
* [6] Sinno Jialin Pan, Ivor W Tsang, James T Kwok, and Qiang Yang. Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks, 22(2):199–210, 2010.
* [7] Bin Cao, Xiaochuan Ni, Jian-Tao Sun, Gang Wang, and Qiang Yang. Distance metric learning under covariate shift. In Twenty-Second International Joint Conference on Artificial Intelligence, 2011.
* [8] Masashi Sugiyama and Motoaki Kawanabe. Machine learning in non-stationary environments: introduction to covariate shift adaptation. MIT press, 2012.
* [9] Wouter M Kouw and Marco Loog. An introduction to domain adaptation and transfer learning. arXiv preprint arXiv:1812.11806, 2018.
* [10] Sentao Chen and Xiaowei Yang. Tailoring density ratio weight for covariate shift adaptation. Neurocomputing, 333:135–144, 2019.
* [11] Masato Ishii, Takashi Takenouchi, and Masashi Sugiyama. Partially zero-shot domain adaptation from incomplete target data with missing classes. In The IEEE Winter Conference on Applications of Computer Vision, pages 3052–3060, 2020.
* [12] Abhirup Datta, Jacob Fiksel, Agbessi Amouzou, and Scott L Zeger. Regularized bayesian transfer learning for population-level etiological distributions. Biostatistics, 2020.
* [13] Stephen R Cole and Elizabeth A Stuart. Generalizing evidence from randomized clinical trials to target populations: the ACTG 320 trial. American Journal of Epidemiology, 172(1):107–115, 2010.
* [14] Kara E Rudolph and Mark J van der Laan. Robust estimation of encouragement-design intervention effects transported across sites. Journal of the Royal Statistical Society. Series B, Statistical Methodology, 79(5):1509, 2017.
* [15] Issa J Dahabreh, Sarah E Robertson, Jon A Steingrimsson, Elizabeth A Stuart, and Miguel A Hernán. Extending inferences from a randomized trial to a new target population. Statistics in Medicine, 39(14):1999–2014, 2020.
* [16] Issa J Dahabreh, Sarah E Robertson, Eric J Tchetgen, Elizabeth A Stuart, and Miguel A Hernán. Generalizing causal inferences from individuals in randomized trials to all trial-eligible individuals. Biometrics, 75(2):685–694, 2019.
* [17] Issa J Dahabreh and Miguel A Hernán. Extending inferences from a randomized trial to a target population. European Journal of Epidemiology, 34(8):719–722, 2019.
* [18] Issa J Dahabreh, Sebastien JP Haneuse, James M Robins, Sarah E Robertson, Ashley L Buchanan, Elizabeth A Stuart, and Miguel A Hernán. Study designs for extending causal inferences from a randomized trial to a target population. arXiv preprint arXiv:1905.07764, 2019.
* [19] Jeffrey M Wooldridge. Econometric analysis of cross section and panel data. MIT press, 2010.
* [20] Guido W Imbens and Tony Lancaster. Efficient estimation and stratified sampling. Journal of Econometrics, 74(2):289–318, 1996.
* [21] Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227–244, 2000.
* [22] Masashi Sugiyama, Taiji Suzuki, and Takafumi Kanamori. Density ratio estimation in machine learning. Cambridge University Press, 2012.
* [23] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The elements of statistical learning: data mining, inference, and prediction. Springer Science & Business Media, 2009.
* [24] Issa J Dahabreh, James M Robins, and Miguel A Hernán. Benchmarking observational methods by comparing randomized trials and their emulations. Epidemiology, 31(5):614–619, 2020.
* [25] Alex Luedtke, Marco Carone, and Mark J van der Laan. An omnibus non-parametric test of equality in distribution for unknown functions. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 81(1):75–99, 2019.
* [26] Bianca Zadrozny. Learning and evaluating classifiers under sample selection bias. In Proceedings of the twenty-first international conference on Machine learning, page 114, 2004.
* [27] Yi Lu, Daniel O Scharfstein, Maria M Brooks, Kevin Quach, and Edward H Kennedy. Causal inference for comprehensive cohort studies. arXiv preprint arXiv:1910.03531, 2019.
## Figures
Figure 1: An example of a prediction error modifier, $X$.
The top panel shows a scatter-plot of the data (including the unobserved
target population outcomes) and the solid black line is the true conditional
expectation function $\operatorname{\textnormal{\mbox{E}}}[Y|X,S=1]$. The
middle panel shows the inverse-odds weights (IOW) as a function of $X$ and the
bottom panel shows the conditional mean squared error (MSE) as a function of
$X$. In these artificial data, larger values of $X$ have higher probability of
being from the target population, $S=0$ (corresponding to lower odds of being
from the source population and higher inverse-odds weights) and higher MSE.
Hence, $X$ is a prediction error modifier that is differentially distributed
between the source and the target population. This leads to the source
population MSE being smaller than the target population MSE (0.47 versus
0.74).

Figure 2: An example of simulated data used to illustrate
transportability of prediction models.
The solid curve is $\operatorname{\textnormal{\mbox{E}}}[Y|X,S=1]$, the dashed
line is the large-sample limit when estimating the misspecified model without
weighting, and the dotted line is the large-sample limit when estimating the
misspecified model using inverse-odds weights. The weighted estimation gives
more influence to observations with higher values of $X$, compared to
unweighted estimation, because higher values of $X$ are associated with higher
odds of a sampled observation being from the target population (i.e., lower
odds of being from the source population, corresponding to higher inverse-odds
weights). This is seen in the figure as for high values of $X$ the weighted
model better approximates $\operatorname{\textnormal{\mbox{E}}}[Y|X,S=1]$
compared to the unweighted model, but the opposite is true for smaller values
of $X$.
## Table
Table 1: Target population mean squared error (MSE), average of the source data MSE estimators, and the estimators for the target population MSE that weight observations by the inverse-odds of being from the source population.

| Model specification, estimation approach | True target population MSE | Average of unweighted MSE estimator | Average of weighted MSE estimator |
|---|---|---|---|
| Correctly specified, OLS | 45.8 | 22.5 | 45.8 |
| Incorrectly specified, OLS | 66.3 | 34.5 | 66.3 |
| Correctly specified, WLS | 46.2 | 22.8 | 46.2 |
| Incorrectly specified, WLS | 58.0 | 43.6 | 57.9 |

Correctly specified and incorrectly specified refers to the specification of
the posited prediction model. OLS = model estimation using ordinary least
squares regression (unweighted); WLS = model estimation using weighted least
squares regression with weights equal to the inverse of the odds of being from
the source population. Weighted MSE estimator results were obtained using the
estimator in equation (3). Results were averaged over $10,000$ simulations.
The true target population MSE was obtained using numerical methods.
## Appendix A Proofs of key results
### A.1 Identifiability for non-nested designs
#### Proof of identifiability of target population MSE
We will provide the identifiability result for a general loss function
$L(Y,g_{\widehat{\beta}}(X))$. Many common performance measures, including the
mean squared error, absolute error, and the Brier score, are special cases of
expected loss functions. We define $D_{\text{\tiny test}}$ as an indicator of
whether an observation is in the source or target test data.
###### Proposition 1.
Under conditions A1 and A2 and when the source and target data are obtained by
separate simple random sampling of the corresponding underlying populations,
with potentially unknown sampling probabilities, then the target population
MSE, $\psi_{\widehat{\beta}}$, is identifiable as
$\psi_{\widehat{\beta}}=\operatorname{\textnormal{\mbox{E}}}[\operatorname{\textnormal{\mbox{E}}}[(Y-g_{\widehat{\beta}}(X))^{2}|X,S=1,D_{\emph{\tiny
test}}=1]|S=0,D_{\emph{\tiny test}}=1];$ (A.1)
or, using an inverse-odds weighting representation,
$\psi_{\widehat{\beta}}=\frac{1}{\Pr[S=0|D_{\emph{\tiny
test}}=1]}\operatorname{\textnormal{\mbox{E}}}\left[\frac{I(S=1)\Pr[S=0|X,D_{\emph{\tiny
test}}=1]}{\Pr[S=1|X,D_{\emph{\tiny
test}}=1]}(Y-g_{\widehat{\beta}}(X))^{2}\bigg{|}D_{\emph{\tiny
test}}=1\right].$ (A.2)
All quantities in expressions (A.1) and (A.2) condition on $D_{\emph{\tiny
test}}=1$ and can therefore be calculated using the available test data.
###### Proof.
For the first representation we have
$\displaystyle\psi_{\widehat{\beta}}$
$\displaystyle=\operatorname{\textnormal{\mbox{E}}}[L(Y,g_{\widehat{\beta}}(X))|S=0]$
$\displaystyle=\operatorname{\textnormal{\mbox{E}}}\left[\operatorname{\textnormal{\mbox{E}}}[L(Y,g_{\widehat{\beta}}(X))|X,S=0]|S=0\right]$
$\displaystyle=\operatorname{\textnormal{\mbox{E}}}\left[\int
L(y,g_{\widehat{\beta}}(X))dF(y|X,S=0)\bigg{|}S=0\right]$
$\displaystyle=\operatorname{\textnormal{\mbox{E}}}\left[\int
L(y,g_{\widehat{\beta}}(X))dF(y|X,S=1)\bigg{|}S=0\right]$
$\displaystyle=\operatorname{\textnormal{\mbox{E}}}\left[\operatorname{\textnormal{\mbox{E}}}[L(Y,g_{\widehat{\beta}}(X))|X,S=1]|S=0\right],$
where the first equality follows from the definition of
$\psi_{\widehat{\beta}}$, the second from the law of iterated expectations,
the third from the definition of conditional expectation, and the fourth from
identifiability condition A1. All expectations conditional on $(X,S=1)$ in the
above formula are well defined by the positivity condition A2. Rewrite
$\displaystyle\psi_{\widehat{\beta}}$
$\displaystyle=\operatorname{\textnormal{\mbox{E}}}[\operatorname{\textnormal{\mbox{E}}}[L(Y,g_{\widehat{\beta}}(X))|X,S=1]|S=0]$
$\displaystyle=\int\operatorname{\textnormal{\mbox{E}}}[L(Y,g_{\widehat{\beta}}(X))|X=x,S=1]dF(x|S=0).$
The conditional expectation
$\operatorname{\textnormal{\mbox{E}}}[L(Y,g_{\widehat{\beta}}(X))|X=x,S=1]$ is
identifiable because, under the non-nested sampling design, data are available
from a random sample of observations from the source population ($S=1$).
Furthermore, the conditional distribution $F(x|S=0)$ is also identifiable
because, under the non-nested sampling design, data are available from a
random sample of observations from the target population ($S=0$). More
formally, the random sampling ensures that
$\psi_{\widehat{\beta}}=\operatorname{\textnormal{\mbox{E}}}[\operatorname{\textnormal{\mbox{E}}}[L(Y,g_{\widehat{\beta}}(X))|X,S=1,D_{\text{\tiny
test}}=1]|S=0,D_{\text{\tiny test}}=1].$
For the inverse-odds weighting representation
$\displaystyle\psi_{\widehat{\beta}}$
$\displaystyle=\operatorname{\textnormal{\mbox{E}}}[\operatorname{\textnormal{\mbox{E}}}[L(Y,g_{\widehat{\beta}}(X))|X,S=1,D_{\text{\tiny
test}}=1]|S=0,D_{\text{\tiny test}}=1]$
$\displaystyle=\operatorname{\textnormal{\mbox{E}}}\left[\operatorname{\textnormal{\mbox{E}}}\left[\frac{I(S=1)}{\Pr[S=1|X,D_{\text{\tiny
test}}=1]}L(Y,g_{\widehat{\beta}}(X))\bigg{|}X,D_{\text{\tiny
test}}=1\right]\bigg{|}S=0,D_{\text{\tiny test}}=1\right]$
$\displaystyle=\frac{1}{\Pr[S=0|D_{\text{\tiny
test}}=1]}\operatorname{\textnormal{\mbox{E}}}\left[I(S=0)\operatorname{\textnormal{\mbox{E}}}\left[\frac{I(S=1)}{\Pr[S=1|X,D_{\text{\tiny
test}}=1]}L(Y,g_{\widehat{\beta}}(X))\bigg{|}X,D_{\text{\tiny
test}}=1\right]\Bigg{|}D_{\text{\tiny test}}=1\right]$
$\displaystyle=\frac{1}{\Pr[S=0|D_{\text{\tiny
test}}=1]}\operatorname{\textnormal{\mbox{E}}}\left[\operatorname{\textnormal{\mbox{E}}}\left[\frac{I(S=1)\Pr[S=0|X,D_{\text{\tiny
test}}=1]}{\Pr[S=1|X,D_{\text{\tiny
test}}=1]}L(Y,g_{\widehat{\beta}}(X))\bigg{|}X,D_{\text{\tiny
test}}=1\right]\bigg{|}D_{\text{\tiny test}}=1\right]$
$\displaystyle=\frac{1}{\Pr[S=0|D_{\text{\tiny
test}}=1]}\operatorname{\textnormal{\mbox{E}}}\left[\frac{I(S=1)\Pr[S=0|X,D_{\text{\tiny
test}}=1]}{\Pr[S=1|X,D_{\text{\tiny
test}}=1]}L(Y,g_{\widehat{\beta}}(X))\bigg{|}D_{\text{\tiny test}}=1\right].$
For the fourth equality we have used that
$\displaystyle\operatorname{\textnormal{\mbox{E}}}\left[I(S=0)\operatorname{\textnormal{\mbox{E}}}\left[\frac{I(S=1)}{\Pr[S=1|X,D_{\text{\tiny
test}}=1]}L(Y,g_{\widehat{\beta}}(X))\Bigg{|}X,D_{\text{\tiny
test}}=1\right]\Bigg{|}D_{\text{\tiny test}}=1\right]$
$\displaystyle=\operatorname{\textnormal{\mbox{E}}}\left[\operatorname{\textnormal{\mbox{E}}}\left[I(S=0)\operatorname{\textnormal{\mbox{E}}}\left[\frac{I(S=1)}{\Pr[S=1|X,D_{\text{\tiny
test}}=1]}L(Y,g_{\widehat{\beta}}(X))\Bigg{|}X,D_{\text{\tiny
test}}=1\right]\Bigg{|}X,D_{\text{\tiny test}}=1\right]\Bigg{|}D_{\text{\tiny
test}}=1\right]$
$\displaystyle=\operatorname{\textnormal{\mbox{E}}}\left[\operatorname{\textnormal{\mbox{E}}}\left[\frac{I(S=1)}{\Pr[S=1|X,D_{\text{\tiny
test}}=1]}L(Y,g_{\widehat{\beta}}(X))\Bigg{|}X,D_{\text{\tiny
test}}=1\right]\operatorname{\textnormal{\mbox{E}}}[I(S=0)|X,D_{\text{\tiny
test}}=1]\Bigg{|}D_{\text{\tiny test}}=1\right]$
$\displaystyle=\operatorname{\textnormal{\mbox{E}}}\left[\operatorname{\textnormal{\mbox{E}}}\left[\frac{I(S=1)\Pr[S=0|X,D_{\text{\tiny
test}}=1]}{\Pr[S=1|X,D_{\text{\tiny
test}}=1]}L(Y,g_{\widehat{\beta}}(X))\Bigg{|}X,D_{\text{\tiny
test}}=1\right]\Bigg{|}D_{\text{\tiny test}}=1\right]$
All of the quantities in
$\frac{1}{\Pr[S=0|D_{\text{\tiny
test}}=1]}\operatorname{\textnormal{\mbox{E}}}\left[\frac{I(S=1)\Pr[S=0|X,D_{\text{\tiny
test}}=1]}{\Pr[S=1|X,D_{\text{\tiny
test}}=1]}L(Y,g_{\widehat{\beta}}(X))\bigg{|}D_{\text{\tiny test}}=1\right]$
condition on $D_{\text{\tiny test}}=1$ and are therefore identifiable using
the observed data. ∎
#### Proof of identifiability of inverse-odds weights
Let $D_{\text{\tiny train}}$ be an indicator of whether an observation is in
the training set and used to estimate the inverse-odds weights. The sampling
design assumes that $\Pr[D_{\text{\tiny train}}=1|X,S=1]=a$ for some
potentially unknown constant $a>0$; and $\Pr[D_{\text{\tiny
train}}=1|X,S=0]=b$ for some potentially unknown constant $b>0$. By the random
formation of the test and the training set, the inverse-odds weights in the
test and the training set are equal. But, to ensure independence between the
data used to train the model and the data used to evaluate the model we
propose to use inverse-odds weights estimated using the training set for model
building and the inverse-odds weights estimated using the test set for
estimating model performance.
#### Proof of expression 1 from the main text
Recall that the sampling design assumes that $\Pr[D_{\text{\tiny
train}}=1|X,S=1]=a$ for some potentially unknown constant $a>0$ and
$\Pr[D_{\text{\tiny train}}=1|X,S=0]=b$ for some potentially unknown constant
$b>0$. Using that, we have
$\displaystyle\frac{\Pr[S=0|X,D_{\text{\tiny
train}}=1]}{\Pr[S=1|X,D_{\text{\tiny train}}=1]}$
$\displaystyle=\frac{\Pr[S=0,D_{\text{\tiny
train}}=1|X]}{\Pr[S=1,D_{\text{\tiny train}}=1|X]}$
$\displaystyle=\frac{\Pr[S=0|X]}{\Pr[S=1|X]}\times\frac{\Pr[D_{\text{\tiny
train}}=1|X,S=0]}{\Pr[D_{\text{\tiny train}}=1|X,S=1]}$
$\displaystyle=\frac{\Pr[S=0|X]}{\Pr[S=1|X]}\times\frac{\Pr[D_{\text{\tiny
train}}=1|S=0]}{\Pr[D_{\text{\tiny train}}=1|S=1]}$
$\displaystyle=\frac{\Pr[S=0|X]}{\Pr[S=1|X]}\times\frac{b}{a}$
$\displaystyle\propto\frac{\Pr[S=0|X]}{\Pr[S=1|X]}.$
∎
## Appendix B Identification and estimation in nested designs
Consider a nested design where the source population is a subset of a larger
target population of interest. We assume that covariate data, $X$, are
available on all target population members, but outcome data, $Y$, are
available only for members of the source population. The data are assumed to be
realizations of
$\{(X_{i},S_{i},S_{i}\times Y_{i}),\;i=1,\ldots,n\},$
where $n$ is the total number of observations (i.e., the total number of
individuals in a cohort representing the target population and in which the
sample from the source population is nested) and $S$ is the indicator of an
observation coming from the source population ($S=1$ for observations in the
source population and $S=0$ for observations not in the source population).
For nested designs the target parameter is defined as
$\phi_{\widehat{\beta}}=\operatorname{\textnormal{\mbox{E}}}\left[L(Y,g_{\widehat{\beta}}(X))\right].$
We introduce the following modified identifiability conditions:
1. B1.
For every $x$ such that $f(X=x)\neq 0$,
$f(Y|X=x,S=1)=f(Y|X=x).$
2. B2.
For every $x$ such that $f(X=x)\neq 0$, $\Pr[S=1|X=x]>0$.
###### Proposition 2.
Under conditions B1 and B2, $\phi_{\widehat{\beta}}$ can be written as the
observed data functional
$\phi_{\widehat{\beta}}=\operatorname{\textnormal{\mbox{E}}}\left[\operatorname{\textnormal{\mbox{E}}}\left[L(Y,g_{\widehat{\beta}}(X))\big{|}X,S=1,D_{\text{\tiny
test}}=1\right]\Big{|}D_{\text{\tiny test}}=1\right].$ (A.3)
Or using the inverse probability weighting representation
$\phi_{\widehat{\beta}}=\operatorname{\textnormal{\mbox{E}}}\left[\frac{I(S=1)}{\Pr[S=1|X,D_{\text{\tiny
test}}=1]}L(Y,g_{\widehat{\beta}}(X))\Bigg{|}D_{\text{\tiny test}}=1\right].$
(A.4)
#### Proof of Proposition 2:
We have
$\displaystyle\phi_{\widehat{\beta}}$
$\displaystyle=\operatorname{\textnormal{\mbox{E}}}\left[L(Y,g_{\widehat{\beta}}(X))\right]$
$\displaystyle=\operatorname{\textnormal{\mbox{E}}}\left[\operatorname{\textnormal{\mbox{E}}}\left[L(Y,g_{\widehat{\beta}}(X))\big{|}X\right]\right]$
$\displaystyle=\operatorname{\textnormal{\mbox{E}}}\left[\operatorname{\textnormal{\mbox{E}}}\left[L(Y,g_{\widehat{\beta}}(X))\big{|}X,S=1\right]\right]$
$\displaystyle=\operatorname{\textnormal{\mbox{E}}}\left[\operatorname{\textnormal{\mbox{E}}}\left[L(Y,g_{\widehat{\beta}}(X))\big{|}X,S=1,D_{\text{\tiny
test}}=1\right]\Big{|}D_{\text{\tiny test}}=1\right].$
For the inverse probability weighting representation
$\displaystyle\phi_{\widehat{\beta}}$
$\displaystyle=\operatorname{\textnormal{\mbox{E}}}\left[\operatorname{\textnormal{\mbox{E}}}\left[L(Y,g_{\widehat{\beta}}(X))\big{|}X,S=1,D_{\text{\tiny
test}}=1\right]\Bigg{|}D_{\text{\tiny test}}=1\right]$
$\displaystyle=\operatorname{\textnormal{\mbox{E}}}\left[\operatorname{\textnormal{\mbox{E}}}\left[\frac{I(S=1)}{\Pr[S=1|X,D_{\text{\tiny
test}}=1]}L(Y,g_{\widehat{\beta}}(X))\bigg{|}X,D_{\text{\tiny
test}}=1\right]\Bigg{|}D_{\text{\tiny test}}=1\right]$
$\displaystyle=\operatorname{\textnormal{\mbox{E}}}\left[\frac{I(S=1)}{\Pr[S=1|X,D_{\text{\tiny
test}}=1]}L(Y,g_{\widehat{\beta}}(X))\Bigg{|}D_{\text{\tiny test}}=1\right]$
$\displaystyle=\frac{1}{\Pr[D_{\text{\tiny
test}}=1]}\operatorname{\textnormal{\mbox{E}}}\left[\frac{I(S=1,D_{\text{\tiny
test}}=1)}{\Pr[S=1|X,D_{\text{\tiny
test}}=1]}L(Y,g_{\widehat{\beta}}(X))\right],$
which establishes the identifiability of $\phi_{\widehat{\beta}}$. ∎
Using plug-in estimators into identifiability expression (A.4) gives the
inverse probability weighting estimator for nested designs. That is,
$\widehat{\phi}_{\widehat{\beta}}=\frac{\sum_{i=1}^{n}\frac{I(S_{i}=1,D_{\text{\tiny
test},i}=1)}{\widehat{p}(X_{i})}L(Y_{i},g_{\widehat{\beta}}(X_{i}))}{\sum_{i=1}^{n}I(D_{\text{\tiny
test},i}=1)},$
where $\widehat{p}(X)$ is an estimator for $\Pr[S=1|X,D_{\text{\tiny
test}}=1]$.
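A minimal sketch of this plug-in estimator in Python, assuming the estimated probabilities $\widehat{p}(X_i)$ have already been computed; the function name and toy values are illustrative.

```python
def ipw_performance_estimate(S, D_test, loss, p_hat):
    """Plug-in IPW estimator of model performance for nested designs.

    S      : 1 if the observation is from the source population, else 0
    D_test : 1 if the observation is in the test set, else 0
    loss   : L(Y_i, g(X_i)); only used where S_i = 1 and D_test,i = 1
    p_hat  : estimated Pr[S = 1 | X_i, D_test = 1]
    """
    num = sum(l / p for s, d, l, p in zip(S, D_test, loss, p_hat)
              if s == 1 and d == 1)
    den = sum(D_test)
    return num / den

# Toy data: four test-set observations, two from the source population.
S, D_test = [1, 1, 0, 0], [1, 1, 1, 1]
loss = [2.0, 4.0, 0.0, 0.0]  # losses are only observable when S = 1
p_hat = [0.5, 0.5, 0.5, 0.5]
print(ipw_performance_estimate(S, D_test, loss, p_hat))  # 3.0
```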
## Appendix C Inverse-odds weighting estimators can be biased under mean
exchangeability
For correctly specified prediction models, exchangeability in mean over $S$,
that is
$\operatorname{\textnormal{\mbox{E}}}[Y|X,S=1]=\operatorname{\textnormal{\mbox{E}}}[Y|X,S=0]$,
is sufficient for the parameter $\beta$ to be identifiable using data from the
source population alone. Exchangeability in mean over $S$ is a weaker
condition than condition A1; that is, condition A1 implies exchangeability in
mean, but the converse is not true. Exchangeability in mean, however, is
insufficient for transportability of the MSE. This can be seen in Figure 3
where
$\operatorname{\textnormal{\mbox{E}}}[Y|X,S=1]=\operatorname{\textnormal{\mbox{E}}}[Y|X,S=0]$
but $\mbox{Var}[Y|X,S=1]\neq\mbox{Var}[Y|X,S=0]$ (and thus assumption A1 does
not hold). As the conditional variance is different between the two
populations, standardizing to the target population covariate distribution is
not sufficient to transport the MSE to the target population.
If the outcome is binary, condition A1 can be written as
$\Pr[Y=1|X=x,S=1]=\Pr[Y=1|X=x,S=0]$, so for binary outcomes distributional
independence over $S$ is equivalent to exchangeability in mean over $S$.
Figure 3: An example of a setting where condition A1 does not hold. Here,
$\operatorname{\textnormal{\mbox{E}}}[Y|X,S=1]=\operatorname{\textnormal{\mbox{E}}}[Y|X,S=0]$
(the black line is the true conditional mean for both populations), but
$\mbox{Var}[Y|X,S=1]<\mbox{Var}[Y|X,S=0]$ for all values of $X$. In this case,
estimators of model performance measures that use weights equal to the
inverse-odds of being from the source population (e.g., the MSE estimator in
the main text of the paper) will be biased.
# GymD2D: A Device-to-Device Underlay Cellular Offload Evaluation Platform
David Cotton (ORCID: 0000-0002-8817-3736)
School of Electrical and Data Engineering
University of Technology Sydney
Sydney, Australia
<EMAIL_ADDRESS>
Zenon Chaczko
School of Electrical and Data Engineering
University of Technology Sydney
Sydney, Australia
<EMAIL_ADDRESS>
###### Abstract
Cellular offloading in device-to-device communication is a challenging
optimisation problem in which the improved allocation of radio resources can
increase spectral efficiency, energy efficiency and throughput, and reduce
latency. The academic community has explored different optimisation methods
on these problems and initial results are encouraging. However, there exists
significant friction in the lack of a simple, configurable, open-source
framework for cellular offload research. Prior research utilises a variety of
network simulators and system models, making it difficult to compare results.
In this paper we present GymD2D, a framework for experimentation with physical
layer resource allocation problems in device-to-device communication. GymD2D
allows users to simulate a variety of cellular offload scenarios and to extend
its behaviour to meet their research needs. GymD2D provides researchers an
evaluation platform to compare, share and build upon previous research. We
evaluated GymD2D with state-of-the-art deep reinforcement learning and
demonstrated that these algorithms provide significant efficiency improvements.
###### Index Terms:
device-to-device (D2D) communication, cellular offload, resource allocation,
radio resource management, network simulator, deep reinforcement learning,
OpenAI Gym
©2021 IEEE. Personal use of this material is permitted. Permission from IEEE
must be obtained for all other uses, in any current or future media, including
reprinting/republishing this material for advertising or promotional purposes,
creating new collective works, for resale or redistribution to servers or
lists, or reuse of any copyrighted component of this work in other works.
## I Introduction
Wireless data use is increasing rapidly, presenting a challenge for cellular
service providers. The explosion of video and smart device traffic has
intensified demand for limited cellular resources [1]. A multi-faceted
approach to developing next generation networks includes: unlocking
millimeter-wave frequencies, increasing cell density, multiple-input
multiple-output, improved information encoding, and the focus of this
research: smarter coordination [2].
Device-to-device (D2D) communication is a broad set of protocols for ad-hoc,
peer-to-peer wireless communication for cellular or Internet connected
wireless devices. In contrast to normal cellular operation, user equipment
(UE) utilising D2D mode communicate directly with one another instead of
through base stations and connected networks [3]. D2D has been proposed for a
variety of use cases such as: public safety communications, communications
relay, localised services, Internet of things and cellular offload [4].
Figure 1: Example of D2D cellular offloading with underlay networks. This
example demonstrates the cellular offload optimisation problem. Depicted is a
BS, CUE and 2 DUE pairs. The BS has 2 RBs available to allocate. The BS has
assigned the CUE RB1 for downlink and has the option of either assigning pair
A RB1 and pair B RB2 or vice versa. If the BS assigns RB1 to pair A it is
likely that due to their proximity to the CUE, significant interference would
occur. As pair B is situated relatively distant to the CUE, if pair B
transmitted with a lower power level, their interference on the CUE would be
negligible. Therefore the optimal solution is to assign RB1 to pair B and RB2
to pair A.
In this research we are interested in cellular offload, a mechanism to divert
cellular traffic through alternative channels to improve network efficiency.
This can be achieved by communicating out of band, such as over WiFi, or
inband via overlay or underlay networking. Underlay cellular offload is a form
of opportunistic spectrum access in which D2D UE (DUE) act as secondary users
sharing radio resources with primary cellular UE (CUE) [5]. In underlay
networking, DUE are responsible for managing their interference with CUE so as
to avoid degrading primary network performance.
Radio resource management (RRM) is the system wide management of radio
resources, such as transmit power and resource blocks (RB), across a wireless
network to manage interference and utilise resources as efficiently as
possible. In cellular systems, which are limited by co-channel interference,
improved resource allocation can increase spectral efficiency, energy
efficiency, throughput and reduce latency. We show a simplified example of the
resource allocation problem in Figure 1.
A challenge facing researchers developing D2D algorithms is the need for
improved tooling, reliable measurements and established benchmarks. An open-
source evaluation platform allows researchers to compare results, share
algorithms and build upon previous research.
In this paper we present GymD2D, a network simulator and evaluation platform
for RRM in D2D underlay cellular offload. GymD2D provides convenient
abstractions to aid researchers in quick prototyping of resource allocation
algorithms. The toolkit allows users to programmatically configure the
environment to simulate a wide variety of scenarios. GymD2D has been designed
with extensibility as a core design principle, allowing users to override and
extend its behaviour to meet their research needs. It has been developed in
the Python programming language to allow users to leverage its extensive
ecosystem of scientific computing packages. The open-source nature of GymD2D
centralises development effort and avoids the redundant work of individual
researchers creating their own simulators. This puts more eyes on bug fixing,
provides a more stable platform and increases confidence in reported empirical
results. GymD2D reduces entry barriers for junior researchers, helps
researchers from other disciplines to cross-pollinate ideas easier and more
generally increases participation. Our software package is provided to the
community under an MIT licence at https://github.com/davidcotton/gym-d2d.
## II Background
### II-A Device-to-device communication
In this section we provide an overview of the D2D RRM literature to situate
the reader as to the requirements of the platform. Firstly, we highlight the
most common optimisation problems. Secondly, we analyse key differences across
simulators, paying special attention to simplifying assumptions frequently
observed. Thirdly, we survey the optimisation algorithms used for resource
allocation. Finally, we outline limitations of existing research, providing
direction for the simulation requirements of future work.
#### II-A1 Optimisation Problems
RRM is an optimisation problem where the objective is to utilise radio
resources as efficiently as possible. D2D RRM has been proposed for improving
spectral efficiency, energy efficiency and quality of service. In these use
cases, the objective can be to optimise data rates, throughput, capacity,
signal to interference noise ratio (SINR), power consumption, energy
efficiency or latency [4, 6]. D2D systems can be centrally managed by the
network operator, manage interference autonomously or use a hybrid control
mode which aims to combine the benefits of both. The choice of control mode
limits the applicability of certain algorithms which may only be feasible in
centrally managed paradigms.
#### II-A2 Simulation models
D2D cellular offload typically investigates networks using orthogonal
frequency division multiple access (OFDMA), communicating on licensed bands
using underlay networking. The most common scenario is a single macro base
station (MBS) surrounded by many randomly positioned CUEs and DUEs. It is
generally assumed that cellular systems are under full load and each RB is
allocated to a CUE. In the literature, simulations vary in scope from 2–30
RBs, 2–30 CUEs and 2–60 DUEs, while MBS operate with a cell radius of 20–500m.
It is frequently assumed that DUE are already paired and operate in a range
between 10–30m apart. Typically, omni-directional antenna and isotropic
propagation are utilised. Path loss is commonly modeled using log-distance
models, with or without shadowing.
#### II-A3 Optimisation Algorithms
A wide variety of optimisation methods have been investigated on a range of
D2D RRM problems. Initially, D2D radio resources were proposed to be managed
using existing cellular uplink power control mechanisms [7]. Consequently, it
was identified that resources could be more efficiently allocated with the use
of mathematical optimisation [8]. However, due to the computational complexity
of these methods and the millisecond timescales involved, it may not be feasible
to solve to optimality. This can be addressed with the use of greedy heuristic
algorithms which reduce computational complexity at the cost of global
optimality [9]. Alternatively, resource allocation can be optimised graph
theoretically [10], game theoretically [11], with evolutionary algorithms [12]
or reinforcement learning (RL) [13]. More recently, deep reinforcement
learning (DRL), a subfield of RL which uses deep neural networks to represent
policy and/or value functions has demonstrated promising results [14, 15]. DRL
is well suited for many D2D RRM problems as neural networks provide rich
approximations, scale well and generalise to unseen data.
#### II-A4 Research limitations
A common limitation observed in D2D cellular offload research is BSs not
enforcing uplink power control for CUE, which then transmit at maximum power,
a very energy inefficient approach. Another research challenge is accounting
for large SINR increases on the primary network, which could significantly
impact primary network throughput or drive up CUE transmit power levels.
Resource allocation algorithms need to demonstrate their effectiveness in
larger search spaces that more closely reflect real world demands. Iterative
learning algorithms need to be capable of generalising to out of training
distribution data and be robust under diverse propagation conditions. Lastly,
in our opinion, one of the greatest limitations of existing research is the
lack of established benchmarks and comparison with other algorithms.
### II-B OpenAI Gym
OpenAI Gym is an open-source software toolkit for reinforcement learning (RL)
[16]. Gym provides an abstraction layer that enables a variety of tasks, known
as environments in RL parlance, to be wrapped to present a consistent
interface. The abstraction provided by Gym allows the easy interchange of
algorithms and environments. This makes it easy to test how a single
algorithm generalises across a diverse set of environments or to benchmark
different algorithms on a given environment. The simplicity and flexibility
Gym offers has proved very popular and has led to it becoming the de facto
environment format in RL. While Gym was designed for RL research, the
application programming interface (API) it provides makes it easy to apply
many other algorithm types.
### II-C Network simulation
One of the most widely used network simulators in education and research is
ns-3. Ns-3 is an open-source, modular, discrete-event simulator for wired and
wireless networks. It provides the full TCP/IP stack and wireless propagation
modelling. Another popular alternative with comparable features is OMNeT++,
while there exists similar commercial tools such as NetSim and MATLAB. Ns-3
has been incorporated into an OpenAI Gym environment under the ns3-gym project
[17].
## III Design Principles
The design of GymD2D has been inspired by the authors' experience developing
and comparing reinforcement learning algorithms. In our experience the
following design principles stimulate experimentation and the sharing of
ideas.
* •
Simple: Easy to get started with, the framework should allow researchers to be
productive quickly.
* •
Configurable: The framework should be easily configured to meet the broad
range of D2D cellular offload use cases. Configurability allows researchers to
programmatically test algorithm generalisation and scalability.
* •
Extensible: The framework should allow users to extend the system’s behaviour
to meet their needs. The nature of research dictates a stream of new ideas we
can’t anticipate but we can provide researchers the flexibility to adapt.
* •
Scalable: The framework should be performant and easily parallelisable.
Developing new algorithms requires significant experimentation and reducing
the time spent waiting for results is important for productivity. Some
algorithms, such as policy gradient DRL, require parallel environments to
function. Real world solutions are often a combination of both algorithmic and
architectural components.
* •
Reproducible: Experiments should be easily repeatable. To build confidence in
our deductions, it is important that we can reperform experiments to ensure
the observed outcomes were not statistical anomalies. Reproducibility allows
researchers to share their contributions with community more easily.
## IV System Design
### IV-A System model
GymD2D is designed to simulate physical layer resource allocation problems in
D2D underlay cellular offload. The framework abstracts away data link and
above layers, D2D session establishment and management concerns. The system
model contains a single MBS $b$, a set of $M$ CUEs
$\mathcal{M}=\\{1,\dots,M\\}$ and a set of $N$ DUE pairs
$\mathcal{N}=\\{1,\dots,N\\}$, that reside within the coverage area of the
cell. We denote the $m^{th}$ CUE $C_{m}$, the $n^{th}$ DUE pair $D_{n}$ and
the transmitter and receiver of pair $D_{n}$ by $D_{n}^{t}$ and $D_{n}^{r}$
respectively.
The system employs OFDMA, with a set of $K$ RBs, $k\in\mathcal{K}$,
available for allocation. An assumption is made that all devices are equipped
with omni-directional antennas and transmit isotropically. Accordingly, the
network resides within a circular cell of radius $R$, with the MBS located in
the centre at position $(0,0)$. The simulation environment contains no
obstructions or outside interference. D2D communicate one-to-one and D2D relay
is not supported.
We denote the effective isotropic radiated power (EIRP) $P$ of BSs, CUEs and
DUEs as $P^{b}$, $P^{c}$, $P^{d}$ respectively. The EIRP of a BS is
calculated,
$P^{b}=P_{tx}-10\log_{10}s+g_{ant}-l_{ix}-l_{cb}+g_{amp}$ (1)
and the EIRP of CUE and DUE,
$P^{c}=P^{d}=P_{tx}-10\log_{10}s+g_{ant}-l_{ix}-l_{bd}$ (2)
where $P_{tx}$ is the transmission power level, $s$ is the number of
subcarriers, $g_{ant}$ is the transmitting antenna gain, $l_{ix}$ is the
interference margin loss to approximate noise from surrounding cells, $l_{bd}$
is body loss to approximate attenuation caused by the user, $l_{cb}$ is cable
loss, and $g_{amp}$ is amplifier gain.
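Equations (1) and (2) translate directly into code. The sketch below assumes all quantities are already expressed in dB or dBm, and the parameter values in the usage line are illustrative rather than taken from GymD2D.

```python
import math

def eirp_bs(p_tx, s, g_ant, l_ix, l_cb, g_amp):
    """EIRP of a base station, Eq. (1); all arguments in dB or dBm."""
    return p_tx - 10 * math.log10(s) + g_ant - l_ix - l_cb + g_amp

def eirp_ue(p_tx, s, g_ant, l_ix, l_bd):
    """EIRP of a CUE or DUE, Eq. (2)."""
    return p_tx - 10 * math.log10(s) + g_ant - l_ix - l_bd

# Illustrative values only: 23 dBm over 12 subcarriers, 0 dBi antenna,
# 3 dB interference margin, 3 dB body loss.
print(eirp_ue(23.0, 12, 0.0, 3.0, 3.0))
```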
We denote the received signal level $R$ from transmitter $i$ at receiver $j$
of BS, CUEs and DUEs as $R^{b}_{i,j}$, $R^{c}_{i,j}$, $R^{d}_{i,j}$. The
received signal level of BS as,
$R^{b}_{i,j}=P_{i}-PL_{i,j}+g_{ant}-l_{cb}+g_{amp}$ (3)
and the received signal level of CUE or DUE,
$R^{c}_{i,j}=R^{d}_{i,j}=P_{i}-PL_{i,j}+g_{ant}-l_{bd}$ (4)
where $P_{i}$ is the EIRP from transmitter $i$ and $PL_{i,j}$ is the path loss
of the chosen path loss model between $i$ and $j$.
We assume D2D transmissions are synchronised to cellular transmissions and
occupy the same $K$ orthogonal resources. During both uplink and downlink, co-
channel interference is calculated for each receiver sharing RB $k$. GymD2D
considers co-channel interference between:
* •
D2D to cellular, interference from secondary DUE on the primary cellular
network,
* •
cellular to D2D, interference from CUE or BS to DUE, and
* •
D2D to D2D, the interference between DUE pairs sharing a RB.
Accordingly, we model the instantaneous SINR $\xi$ of receiver $j$ from
transmitter $i$ on RB $k$,
$\xi_{i,j,k}=\frac{R_{i,j}}{\sum_{n\in\mathcal{T}_{k},n\neq
i}R_{n,j}+\sigma^{2}}$ (5)
where $\mathcal{T}_{k}$ is the set of transmitters allocated to RB $k$ and
$\sigma^{2}$ is additive white Gaussian noise (AWGN).
The capacity of channel $C_{i,j}$ can be calculated using the SINR
$\xi_{i,j}$,
$C_{i,j}[Mbps]=B\log_{2}(1+\xi_{i,j})$ (6)
where $B$ is the channel bandwidth in MHz.
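Equations (5) and (6) can be sketched as below. Since the received levels of (3)–(4) are expressed in dBm, they are converted to linear milliwatts before summing interference and noise; this conversion is our reading of the model, not GymD2D's actual implementation.

```python
import math

def dbm_to_mw(p_dbm):
    """Convert a power level in dBm to linear milliwatts."""
    return 10 ** (p_dbm / 10.0)

def sinr(rx_signal_dbm, rx_interference_dbm, noise_dbm):
    """Instantaneous SINR of Eq. (5). Received levels from Eqs. (3)-(4)
    are in dBm, so they are converted to linear milliwatts before the
    interference and noise terms are summed."""
    interference_mw = sum(dbm_to_mw(p) for p in rx_interference_dbm)
    return dbm_to_mw(rx_signal_dbm) / (interference_mw + dbm_to_mw(noise_dbm))

def capacity_mbps(bandwidth_mhz, sinr_linear):
    """Shannon capacity of Eq. (6), in Mbps for bandwidth in MHz."""
    return bandwidth_mhz * math.log2(1 + sinr_linear)

# One 180 kHz RB: desired signal at -70 dBm, two co-channel interferers.
xi = sinr(-70.0, [-95.0, -100.0], noise_dbm=-120.0)
print(capacity_mbps(0.18, xi))
```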
### IV-B Path loss models
GymD2D contains several of the most common path loss models and makes it easy
for users to implement their own custom models. By default, GymD2D uses the
simplest model, free space path loss (FSPL),
$FSPL(f,d)[dB]=10n\log_{10}\Big{(}\frac{4\pi fd}{c}\Big{)}$ (7)
where $n=2$ is the path loss exponent (PLE) in free space, $f$ is the carrier
frequency in Hz, $d$ is the distance between the transmitter and receiver and
$c$ is the speed of light in m/s.
To simulate obstructed propagation environments it can be useful to model
fading effects as random processes. One such model is the log-distance with
shadowing path loss model, which is included in GymD2D. The log-distance path
loss model extends FSPL to mimic random shadowing effects, such as caused by
buildings, with a log-normal distribution,
$PL^{LD}(f,d)[dB]=FSPL(f,d_{0})+10n\log_{10}\frac{d}{d_{0}}+\chi_{\sigma}$ (8)
where $d_{0}$ is an arbitrary close-in reference distance, typically 1–100m
and $\chi_{\sigma}$ is a zero-mean Gaussian with standard deviation $\sigma$
in dB. Empirical measurements have shown values of $n=2.7\text{ to }3.5$ to be
suitable to model urban environments [18].
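Both models, (7) and (8), are straightforward to implement; a sketch (not GymD2D's actual code) is:

```python
import math
import random

C = 299_792_458.0  # speed of light, m/s

def fspl_db(f_hz, d_m, n=2.0):
    """Free space path loss of Eq. (7); n = 2 in free space."""
    return 10 * n * math.log10(4 * math.pi * f_hz * d_m / C)

def log_distance_shadowing_db(f_hz, d_m, n=3.0, d0_m=1.0, sigma_db=2.7,
                              rng=random):
    """Log-distance path loss with log-normal shadowing, Eq. (8)."""
    chi = rng.gauss(0.0, sigma_db)  # zero-mean Gaussian shadowing, in dB
    return fspl_db(f_hz, d0_m) + 10 * n * math.log10(d_m / d0_m) + chi

print(fspl_db(2.1e9, 100.0))  # 2.1 GHz carrier at 100 m
```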
### IV-C Architecture
GymD2D consists of two main components, a network simulator and a Gym
environment. The network simulator models physical layer cellular networking.
The Gym environment provides an abstraction layer to allow researchers to
experiment with different simulation parameters and algorithms
programmatically. Users supply RRM algorithms to manage the wireless devices
under simulation. GymD2D outputs data on the state of the simulation to the
user, allowing the effectiveness of RRM algorithms to be studied, such as
through visualisation. A high-level overview of the architecture of GymD2D is
depicted in Figure 2.
Figure 2: Proposed GymD2D architecture. GymD2D consists of a network
simulator, wrapped by an OpenAI Gym environment. The user creates their own
RRM algorithms to control wireless devices.
### IV-D Network simulator
The network simulator models a single cellular cell which is populated with a
collection of randomly placed CUEs and DUE pairs. It is a configurable
component which can be customised to emulate a range of cellular offload
scenarios. This includes the number and configuration of BSs, CUEs and DUEs
and environmental parameters such as the available RBs, cell size and path
loss model.
The main components of the network simulator are: a collection of wireless
devices (BSs, CUEs, DUEs), a path loss model and a traffic model as shown in
the class diagram in Figure 3.
Figure 3: Network simulator architecture. The main components of the network
simulator are a collection of CUEs, DUEs and BS, the path loss model and the
traffic model.
In each simulation, the actions of the BS and UEs within the cell can be
generated internally by the traffic model or externally by a user-defined RRM
algorithm. A typical use case would be to use the internal traffic model to
control the BS and CUEs, and the user's RRM algorithm to control the DUEs.
GymD2D uses a discrete-event simulation model. This method is congruent with
the Gym API in which the incoming actions are the events and the Gym step()
method calls equate to the system update intervals and model a single LTE or
NR frame. At each step, each device may transmit, receive, or take no action.
An action is a tuple consisting of a transmitter, receiver, communication mode,
RB and transmission power. The simulator consolidates the actions from both
the traffic model and the RRM algorithm, then calculates the resulting
propagation and interference. After calculating propagation, metrics on the
state of the network, such as SINR and throughput, are output to the Gym
environment.
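The action tuple described above might be modelled as follows. Field names and types here are illustrative; the exact interface is defined in the GymD2D source.

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    CELLULAR = "cellular"
    D2D = "d2d"

@dataclass(frozen=True)
class Action:
    """One device's action for a single step: who transmits to whom,
    in which communication mode, on which RB, at what power."""
    transmitter_id: str
    receiver_id: str
    mode: Mode
    rb: int
    tx_power_dbm: int

a = Action("due:0:tx", "due:0:rx", Mode.D2D, rb=3, tx_power_dbm=10)
```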
### IV-E Gym environment
The Gym environment has been designed to be configuration driven, to
facilitate the programmatic scheduling and reproducibility of experiments.
When instantiating a new Gym environment, configuration can be provided to
specify the BSs, CUEs and DUEs that inhabit the simulation and the
environmental conditions.
The Gym environment provides RRM algorithms with an observation and action
space. These define the expected format of inputs
and outputs. For example, when using a DRL algorithm, this would allow DRL to
configure its neural networks for the shape of incoming observations and
output actions of the correct dimension.
At each step, the Gym environment receives actions from the RRM algorithm and
converts them to a format suitable for the network simulator. Once a
simulation step is complete, the environment uses the state of the simulator
to create the observations and rewards RRM algorithms consume to make their
decisions.
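The resulting control loop follows the standard Gym reset()/step() contract. The stub environment below exists only to make the loop runnable in isolation; in practice the environment is instantiated via gym.make() with the id registered by the GymD2D package, and the observation and action formats follow the configured spaces.

```python
import random

class StubD2DEnv:
    """Minimal stand-in exposing the Gym reset()/step() interface, used
    here only to illustrate the loop; the real environment is created
    through gym.make() with the id registered by the GymD2D package."""
    def reset(self):
        return {"sinr_db": 0.0}  # initial observation

    def step(self, action):
        obs = {"sinr_db": random.uniform(-10.0, 30.0)}
        reward = obs["sinr_db"]  # e.g. a reward shaped from network state
        return obs, reward, False, {}

env = StubD2DEnv()
obs = env.reset()
for frame in range(10):  # one episode = ten LTE/NR frames
    action = {"rb": 0, "tx_power_dbm": 10}  # replace with an RRM policy
    obs, reward, done, info = env.step(action)
    if done:
        break
```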
## V Evaluation
### V-A Methodology
Figure 4: Evaluation results. To evaluate GymD2D we compared the performance
of several state-of-the-art DRL algorithms in their efficiency allocating
radio resources as D2D demand increased. Solid lines indicate the mean
algorithm performance across ten trials with the shaded area the 95%
confidence interval. The dashed red line indicates the baseline total system
capacity without D2D communication. (a) The total system capacity of the DRL
and random agent. (b) The total system capacity of just the DRL agents. (c)
The total DUE capacity of the DRL agents. (d) The mean transmit power of all
agents.
We evaluated GymD2D with several leading DRL algorithms to determine their
efficiency allocating radio resources as D2D demand increased. The objective
was to maximise the total system capacity, that is the sum data rate of all
CUE and DUE, calculated for each transmitter/receiver pair $i,j$ by
$C_{i,j}[Mbps]=\begin{cases}B\log_{2}(1+\xi_{i,j})&\xi_{i,j}\geq\rho_{j}\\ 0&\xi_{i,j}<\rho_{j}\end{cases}$ (9)
where $B=0.18$ is the RB bandwidth in MHz and $\rho_{b}=-123.4$ and
$\rho_{d}=-107.5$ is the receiver sensitivity of a BS and DUE respectively in
dBm. Our evaluation simulated a single cell under full load. The scenario
contained 25 RBs and CUEs, with each CUE allocated an individual RB. We
employed a centrally managed control mode in which DUE communicated in the
uplink frame, with the resource allocation managed by the network operator.
Each RRM algorithm was evaluated with 10, 20, 30, 40 and 50 communicating D2D
pairs. Algorithms were compared by training to convergence, then evaluating
for 100 episodes. For each algorithm–D2D link density comparison, we conducted
ten trials, retraining from scratch and evaluating, to account for variations
in performance. Each episode lasted for ten steps or equivalently ten LTE/NR
frames to simulate short bursts of traffic on a busy network. In each episode
all CUE and DUE remained geographically fixed, but at the end of each episode,
all CUE and DUE were randomly repositioned within the cell to simulate new
devices accessing the network. Wireless propagation was modelled using the
Log-Distance Shadowing model (8) with PLE $n=2.0$ and $\chi_{\sigma}=2.7$. The
simulation parameters are detailed in Table I.
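Equation (9) can be sketched as follows, assuming $\xi_{i,j}$ is expressed in dB so that it is comparable with the dBm sensitivity thresholds; this unit handling is our reading of (9), not GymD2D's actual code.

```python
import math

RHO_BS_DBM, RHO_DUE_DBM = -123.4, -107.5  # receiver sensitivities

def link_capacity_mbps(b_mhz, sinr_db, rho_db):
    """Per-link capacity of Eq. (9): zero when the SINR falls below the
    receiver sensitivity threshold, Shannon capacity otherwise."""
    if sinr_db < rho_db:
        return 0.0
    return b_mhz * math.log2(1 + 10 ** (sinr_db / 10.0))

print(link_capacity_mbps(0.18, 20.0, RHO_DUE_DBM))    # above threshold
print(link_capacity_mbps(0.18, -110.0, RHO_DUE_DBM))  # below threshold -> 0.0
```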
TABLE I: Simulation parameters
Parameter | Value
---|---
Cell radius | $500$ m
Maximum D2D pair distance | $30$ m
Carrier frequency | $2.1$ GHz
RB bandwidth | $180$ kHz
Number of RBs | $25$
Number of CUEs | $25$
Number of DUE pairs | $10,20,30,40,50$
CUE transmit power | $23$ dBm
DUE min, max transmit power | $0,20$ dBm
Path loss model | Log-Distance Shadowing
Path loss exponent | $2.0$
Shadowing SD $\chi_{\sigma}$ | 2.7
We evaluated three DRL algorithms, Rainbow DQN [19], Discrete Soft Actor-
Critic (SAC) [20], and Advantage Actor-Critic (A2C) [21], and a random agent
baseline. All DRL algorithms used a fully connected neural network with two
hidden layers trained using the Adam optimiser. Each hidden layer contained
128 units and used ReLU activation between layers. Policies used a reward
discounting factor of $\gamma=0.9$. Our Rainbow DQN (Table II) implementation
used distributional, dueling, double-Q and noisy networks with a prioritised
replay buffer and single step returns. The discrete action space variant of
the SAC (Table III) was used. A2C (Table IV) is the synchronous version of A3C
and used Generalised Advantage Estimator (GAE) [22] with $\lambda=1.0$.
TABLE II: Rainbow DQN hyperparameters
Parameter | Value
---|---
Discounting factor $\gamma$ | $0.9$
Learning rate $\alpha$ | $5\cdot 10^{-4}$
Batch size | $32$
Online network update period | $4$ steps
Learning start | $1,000$ steps
Target network sync period | $500$ steps
Distributional atoms | 51
Distributional bounds $v_{min}$, $v_{max}$ | [-1,10]
Replay buffer capacity | $50,000$
Replay buffer prioritisation exponent $\omega$ | $0.6$
Replay buffer importance sampling $\beta$ | $0.6\rightarrow 0.4$
Importance sampling annealing | 20,000 steps
TABLE III: SAC hyperparameters
Parameter | Value
---|---
Discounting factor $\gamma$ | $0.9$
Learning rate $\alpha$ | $3\cdot 10^{-4}$
Batch size | $256$
Learning start | $1,500$ steps
Target smoothing coefficient | 0.005
Replay buffer capacity | $50,000$
TABLE IV: A2C hyperparameters
Parameter | Value
---|---
Discounting factor $\gamma$ | $0.9$
Learning rate $\alpha$ | $1\cdot 10^{-4}$
Rollout length | $10$
Entropy coefficient $\beta$ | $0.01$
GAE $\lambda$ | $1.0$
### V-B Results
The results of our evaluation can be seen in Figure 4. Baseline total system
capacity measures the efficiency of the system without D2D communication. For
our scenario this was 94.75 Mbps. We found that all the DRL algorithms
achieved a similar level of performance, increasing system capacity over the
baseline by more than 11%. Our results show that the system capacity continued
to increase sublinearly as the number of active D2D links grew. Conversely,
the performance of the random agent shows that without careful resource
allocation, the system capacity drops sharply.
We found that despite allowing DUE to transmit at up to half the power of CUE
(20 vs. 23 dBm), they typically converged to operating ranges between 7 and
15 dBm. This resulted in a negligible decrease in the total CUE capacity, 1–2
Mbps or $\approx$1.84% below the baseline system capacity. This decrease was
approximately constant across D2D density.
### V-C Discussion
Inspecting the actions the DRL algorithms selected, we found that they
converged to allocating all DUE onto one or two RBs. This is surprising as we
had anticipated the DUE to be evenly distributed amongst all available RBs.
Investigating further, we observed that over the course of a training run, the
DQN converged from an even RB distribution to the focused allocation
strategy. This behaviour developed in the later stages of training and only
contributed modest increases to the system capacity.
As expected, the optimal strategy for resource allocation was to assign DUE to
share RBs with the most geographically distant CUE. When combined with the
focused RB allocation described above, this typically resulted in the RRM
algorithm choosing to allocate DUE to share with the one or two most isolated
CUE.
Despite the random UE positioning, the DRL agents were able to learn policies that generalised much better than we anticipated when using fully connected neural networks. We were also surprised by how quickly agents adapted during an episode, improving their performance over the course of the ten-step episode.
## VI Conclusion
In this research we have presented GymD2D, a network simulator and evaluation
platform for RRM in D2D underlay cellular offload. GymD2D makes it easy for
researchers to build, benchmark and share RRM algorithms and results. Our
toolkit is designed to quickly prototype physical layer resource allocation
algorithms, without the complexity of higher layer protocols. GymD2D is
configurable and extensible, allowing it to be employed to simulate a range of
D2D research needs.
We have evaluated GymD2D with several leading DRL algorithms and demonstrated the performance gains of intelligent RRM, increasing system capacity by more than 11%. There was no clear winner amongst the DRL algorithms, which all performed similarly. The results also demonstrated that D2D cellular offload can significantly minimise its impact on primary networks.
In the future we plan to increase the simulation complexity in GymD2D by adding more realistic modelling. Other interesting research challenges include investigating the impact of CUE power control on cellular offload and supporting D2D relay. We continue to use GymD2D in ongoing research, developing methods for scaling up DRL-based D2D RRM.
# Arbitrary-Oriented Ship Detection through Center-Head Point Extraction
Feng Zhang, Xueying Wang, Shilin Zhou, Yingqian Wang, Yi Hou This work was
partially supported in part by the National Natural Science Foundation of
China (Nos. 61903373, 61401474, 61921001).Feng Zhang, Xueying Wang, Shilin
Zhou, Yingqian Wang, Yi Hou are with the College of Electronic Science and
Technology, National University of Defense Technology (NUDT), P. R. China.
Emails: {zhangfeng01, wangxueying, slzhou, wangyingqian16<EMAIL_ADDRESS>(Corresponding author: Xueying Wang)
###### Abstract
Ship detection in remote sensing images plays a crucial role in various
applications and has drawn increasing attention in recent years. However,
existing arbitrary-oriented ship detection methods are generally developed on
a set of predefined rotated anchor boxes. These predefined boxes not only lead
to inaccurate angle predictions but also introduce extra hyper-parameters and
high computational cost. Moreover, the prior knowledge of ship size has not
been fully exploited by existing methods, which hinders the improvement of
their detection accuracy. Aiming at solving the above issues, in this paper,
we propose a _center-head point extraction based detector_ (named CHPDet) to achieve arbitrary-oriented ship detection in remote sensing images. Our CHPDet formulates arbitrary-oriented ships as rotated boxes with head points that determine their direction, and a rotated Gaussian kernel is used to map the annotations into target heatmaps. Keypoint estimation is performed to find the centers of ships. Then, the size and head point of each ship are regressed. An orientation-invariant model (OIM) is also used to produce orientation-invariant feature maps. Finally, we use the target size as a prior to fine-tune the results. Moreover, we introduce a new dataset, named FGSD2021, for multi-class arbitrary-oriented ship detection in remote sensing images at a fixed ground sample distance (GSD). Experimental results on FGSD2021 and two other widely used datasets, i.e., HRSC2016 and UCAS-AOD, demonstrate that our CHPDet achieves state-of-the-art performance and can well distinguish between bow and stern. Code and the FGSD2021 dataset are available at https://github.com/zf020114/CHPDet.
###### Index Terms:
Arbitrary-oriented ship detection, Remote sensing images, Keypoint estimation,
Deep convolution neural networks
Figure 1: Four different representations of an arbitrary-oriented ship and the disadvantages of the angle regression scheme. (a) Horizontal box parameterized by the 4-tuple $(x_{min},y_{min},x_{max},y_{max})$. (b) Rotated box with angle, parameterized by the 5-tuple $(x_{c},y_{c},w,h,\theta)$. (c) Rotated box with vertices $(a,b,c,d)$, parameterized by the 8-tuple $(x_{a},y_{a},x_{b},y_{b},x_{c},y_{c},x_{d},y_{d})$. (d) Rotated box with head point, parameterized by the 6-tuple $(x_{c},y_{c},w,h,x_{h},y_{h})$. (e) A small angle disturbance causes a large IoU decrease. (f) The angle is discontinuous when it reaches its range boundary.
## I Introduction
Ship detection from high-resolution optical remote sensing images is widely
applied in various tasks such as illegal smuggling, port management, and
target reconnaissance. Recently, ship detection has received increasing
attention and was widely investigated in the past decades [1, 2, 3, 4].
However, ship detection in remote sensing images is a highly challenging task
due to the arbitrary orientations, densely-parking scenarios, and complex
backgrounds [5, 6, 7]. To handle the multi-orientation issue, existing methods
generally use a series of predefined anchors [8], which has the following
shortcomings.
_Inaccurate angle regression._ Fig. 1(a)-(d) illustrate four different
representations of an arbitrary-oriented ship. Since ships in remote sensing
images are generally in strips, the intersection over union (IoU) score is
very sensitive to the angle of bounding boxes. As shown in Fig. 1(e), the
ground truth box is the bounding box of a ship with an aspect ratio of 10:1.
The red rotated box is generated by rotating the ground truth box with a small
angle of $5^{\circ}$. It can be observed that such a small angle variation
reduces the IoU between these two boxes to 0.63. Therefore, anchor-based detectors, which define positive and negative anchors by IoU score, usually suffer from an imbalance issue, resulting in detection performance degeneration [9]. Moreover, the angle of the ship is a periodic function and is discontinuous at the boundary ($0^{\circ}$ or $180^{\circ}$), as shown in Fig. 1(f). This discontinuity also causes performance degeneration.
Figure 2: The overall framework of our arbitrary-oriented ship detection
method. The dotted lines in the graph represent the same position on the
feature maps. Feature maps are first generated by using a fully convolutional
backbone network and orientation-invariant model (OIM). Afterward, the peaks
of the feature map of center points are selected as center points. Then, the
center points offset, object sizes, and head regression locations are
regressed on the corresponding feature maps at the position of each center
point. The potential head points are collected by extracting peaks with confidence scores larger than $0.1$ on the head feature map. The final head location is obtained by assigning each regressed location to its nearest potential head point and then adding the head offset.
_Excessive hyper-parameters and high computational cost._ Existing methods
generally use oriented bounding boxes as anchors to handle rotated objects and
thus introduce excessive hyper-parameters such as box sizes, aspect ratios,
and orientation angles. Note that these hyper-parameters have to be manually tuned for novel scenarios, which limits the generalization capability of these
methods. Predefined anchor-based methods usually require a large number of
anchor boxes. For example, in R2PN [10], 6 different orientations were used in
rotated anchor boxes, and there are a total of 24 anchors at each pixel on its
feature maps. A large number of anchor boxes introduce excessive computational
cost when calculating IoU scores and executing the non-maximum suppression
(NMS) algorithm.
_Under-exploitation of prior information of ships._
Most previous ship detectors adopted the rotation detection algorithms commonly used in remote sensing and scene text detection, while overlooking the unique characteristics of ships in remote sensing images. That is, the position of the bow is relatively obvious, and a certain category of ship in remote sensing images has a relatively fixed size range once the ground sample distance (GSD) of the images is normalized. The size of the ship and the position of the ship's head are important clues for detection. However, this prior information has been under-exploited by previous ship detection algorithms. These methods only model ships as rotated rectangles to regress the parameters and do not use the obvious bow point to determine the direction of the ship. Due to the limited effective receptive field of the network, target classification mainly relies on appearance information near the central point. Size regression and target classification are obtained independently by two parallel branches; therefore, the size of the target cannot effectively assist target classification.
Motivated by the anchor-free detector CenterNet [11] in natural scenes, in this paper, we propose a one-stage, anchor-free and NMS-free method for
arbitrary-oriented ship detection in remote sensing images. We formulate ships
as rotated boxes with a head point representing the direction. Specifically,
orientation-invariant feature maps are first produced by an orientation-
invariant model. Afterward, the peaks of the center feature map are selected
as center points. Then, the offset, object sizes, and head positions are
regressed on the corresponding feature maps at each center point. Finally,
target size is used to adjust the classification score. The architecture of
our CHPDet is shown in Fig. 2.
The major contributions of this paper are summarized as follows.
* •
We develop a one-stage, anchor-free ship detector CHPDet. Specifically, we represent ships using rotated boxes with a head point. This representation addresses the problem of angle periodicity by transforming the angle regression task into a keypoint estimation task. Moreover, our proposed method can expand the scope of the angle to [$0^{\circ}$, $360^{\circ}$) and distinguish between bow and stern.
* •
We design a rotated Gaussian kernel to map the annotations into target heatmaps, which better adapts to the characteristics of rotated targets.
* •
We propose a module to refine the detection results based on prior information. Moreover, we propose a new dataset named FGSD2021 for multi-class arbitrary-oriented ship detection in remote sensing images at a fixed GSD. This dataset facilitates the use of prior knowledge of ship size and promotes practical applications of remote sensing ship detection.
* •
We introduce an orientation-invariant model (OIM) to generate orientation-
invariant feature maps. Extensive experimental results on three datasets show
that our CHPDet achieves state-of-the-art performance in both speed and
accuracy, as shown in Fig. 3.
Figure 3: Speed vs. accuracy on our proposed FGSD2021 dataset.
The rest of this paper is organized as follows. In Section II, we briefly
review the related work. In Section III, we introduce the proposed method in
detail. Experimental results and analyses are presented in Section IV.
Finally, we conclude this paper in Section V.
## II Related Work
In this section, we briefly review the major works in horizontal object
detection, rotated object detection, and remote sensing ship detection.
### II-A Horizontal Object Detection
In recent years, deep convolutional neural networks (DCNN) have been developed
as a powerful tool for feature representation learning [12, 13], and have
achieved significant improvements in horizontal object detection [14].
Existing object detection methods generally represent objects as horizontal
boxes, as shown in Fig. 1(a). According to different detection paradigms, deep
learning-based object detection methods can be roughly divided into two-stage
detectors, single-stage detectors, and multi-stage detectors. Two-stage
detectors (e.g., RCNN [15], Fast-RCNN [16], Faster-RCNN [17], Mask-RCNN [18],
R-FCN [19]) used a pre-processing approach to generate object proposals, and
extract features from the generated proposals to predict the category. In
contrast, one-stage detectors (e.g., YOLO [20, 21], SSD [22], RetinaNet [23])
do not have the pre-processing step and directly performed categorical
prediction on the feature maps. Multi-stage detectors (e.g., Cascade R-CNN [24], HTC [25]) performed multiple classifications and regressions, resulting in
notable accuracy improvements. In summary, two-stage and multi-stage detectors
generally achieve better performance, but one-stage detectors are usually more
time-efficient.
Compared to the above-mentioned anchor-based methods, anchor-free methods [26, 11] can avoid the requirement of anchors and have become a new research focus
in recent years. For example, CornerNet [26] detected objects at each position
of the feature map using the top-left and bottom-right corner points.
CenterNet [11] modeled an object as a center point and performed keypoint
estimation to find center points and regressed the object size. FCOS [27]
predicted four distances, a center score, and a classification score at each
position of the feature map to detect objects. The above-mentioned approaches
achieve significant improvement in general object detection tasks. However,
these detectors can only generate horizontal bounding boxes, which limits
their applicability.
### II-B Arbitrary-oriented object detection
Arbitrary-oriented detectors are widely used in remote sensing and scene text
images. Most of these detectors used rotated bounding boxes or quadrangles to
represent multi-oriented objects, as shown in Fig. 1(b) (c). In RRPN [28],
rotated region proposal network was proposed to improve the quality of the
region proposals. In R2CNN [29], a horizontal region of interest (RoI) was
generated to simultaneously predict the horizontal and rotated boxes. RoI-
Trans [30] transformed a horizontal RoI into a rotated RoI (RRoI). In SCRDet
[31] and RSDet [9], novel losses were employed to address the boundary problem
for oriented bounding boxes. In R3Det [32], a refined single-stage rotated
detector was proposed for the feature misalignment problem. In CSL [33] and
DCL [34], angle regression was converted into a classification task to handle
the boundary problem. In S2A-Net [35], a fully convolutional layer was
proposed to align features to achieve better performance. The aforementioned
methods need a set of anchor boxes for classification and regression. These
anchors introduce excessive hyper-parameters which limit the generalization
capability and introduce an excessive computational cost. At present, several anchor-free arbitrary-oriented detectors (e.g., O2D-Net [36] and X-LineNet [37]) have been proposed to detect oriented objects by predicting a pair of intersecting lines. However, the features used in these methods are not rotation-invariant, and the performance still lags behind that of anchor-based detectors.
### II-C Ship detection in remote sensing images
Different from other objects in remote sensing images, ships are in strips
with a large aspect ratio. Generally, the outline of the ships is an
approximate pentagon with two parallel long sides, and the position of the bow
is relatively obvious. Consequently, a certain category of ship in remote sensing images has a relatively fixed size range once the GSD of the images is normalized.
Traditional ship detectors generally used a coarse-to-fine framework with two
stages including ship candidate generation and false alarm elimination. For
example, Shi et al. [38] first generated ship candidates by considering ships
as anomalies and then discriminated these candidates using the AdaBoost
approach [39]. Yang et al. [40] proposed a saliency-based method to generate
candidate regions, and used a support vector machine (SVM) to further classify
these candidates. Liu et al. [41, 42] introduced an RRoI pooling layer to extract features of rotated regions. In R2PN [10], a rotated region proposal network was proposed to generate arbitrary proposals with ship orientation angle information. The above detectors are also based on a set of anchors and cannot fully exploit the prior information of ships.
## III Proposed Method
In this section, the architecture of CHPDet is introduced in detail. Our
method consists of 5 modules including an arbitrary-oriented ship
representation module, a rotated Gaussian kernel module, a head point
estimation module, an orientation-invariant module and a probability
refinement module. All ships are represented by rotated boxes with a head
point. We first detect centers of ships by extracting the peaks in heatmaps
which are generated by rotated Gaussian kernels. Then, we locate the head
points by two steps (directly regress from image features at the center
location, and estimate head points from head heatmaps). We also extract
orientation-invariant feature maps by the orientation-invariant model (OIM) to
increased consistency between targets and corresponding features. Finally, we
refine the detection results based on the prior information. The overall
framework of CHPDet is shown in Fig. 2.
Figure 4: A schematic diagram of map a rotated bounding box to a rotated
Gaussian distribution. Figure 5: A visualization of (a) center heatmap, (b)
head heatmap. In center and head heatmaps, different colors represent
different categories.
### III-A Arbitrary-oriented ship representation
As shown in Fig. 1, the widely-used horizontal bounding boxes cannot be
directly applied to the arbitrary-oriented ship detection task since excessive
redundant background area is included. Moreover, since the arbitrary-oriented
ships generally have a large aspect ratio and park densely, the NMS algorithm
using a horizontal bounding box tends to produce missing detection. To this
end, many methods represent ships as rotated bounding boxes, and these boxes are parameterized with the 5-tuple $(c_{x},c_{y},w,h,\theta)$, where $(c_{x},c_{y})$ is the coordinate of the center of the rotated bounding box, and $w$ and $h$ are the width and length of the ship, respectively. The angle $\theta\in[0^{\circ},180^{\circ})$ is the orientation of the long side with respect to the y-axis. This representation results in a regression inconsistency issue near the boundary case. Recently, some detectors represent objects by four clockwise vertices, parameterized by the 8-tuple $(x_{a},y_{a},x_{b},y_{b},x_{c},y_{c},x_{d},y_{d})$. This representation can also introduce regression inconsistency due to the ordering of the four corner points.
To avoid the aforementioned inconsistency problem, we represent ships as two
points and their corresponding size, which are parameterized by 6 tuples
$(x_{c},y_{c},w,h,x_{h},y_{h})$. $(x_{c},y_{c})$ is the coordinate of the
center of the rotated bounding box, $w$ and $h$ are the width and length of
the ship, $(x_{h},y_{h})$ is the coordinate of the head point of the ship. The
direction of the ship is determined by connecting the center and the bow. This
representation of ships converts discontinuous angle regression to continuous
keypoint estimation. This representation also extends the range of angle
representation to $[0^{\circ},360^{\circ})$ and enables the network to
distinguish between bow and stern.
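Under this representation, an angle in $[0^{\circ},360^{\circ})$ can be recovered continuously from the center-to-head vector. A minimal sketch (the clockwise-from-bow-up convention and the function name are our illustrative choices; the paper only requires a continuous $[0^{\circ},360^{\circ})$ range):

```python
import math

def head_point_to_angle(xc, yc, xh, yh):
    """Angle of the center-to-head vector in [0, 360) degrees,
    measured clockwise from 'up' in image coordinates (where the
    y-axis grows downward), so 0 deg means the bow points up."""
    dx, dy = xh - xc, yh - yc
    ang = math.degrees(math.atan2(dx, -dy))  # -dy: image y grows downward
    return ang % 360.0

# Bow directly above the center -> 0 degrees.
print(head_point_to_angle(50, 50, 50, 30))  # -> 0.0
# Bow directly below the center -> 180 degrees.
print(head_point_to_angle(50, 50, 50, 70))  # -> 180.0
```

Unlike a $\theta\in[0^{\circ},180^{\circ})$ box angle, a small motion of the head point never causes a jump in this value except across the single wrap-around at $360^{\circ}$, which the keypoint formulation never regresses directly.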
### III-B Rotated Gaussian Kernel
Our detector uses center heatmaps to classify and locate ships simultaneously. To adapt to the characteristics of rotated targets, we use the rotated Gaussian kernel (see Fig. 4) to map the annotations to target heatmaps in the training stage.
Specifically, the $m^{th}$ annotated box $\left(x,y,w,h,\theta\right)$ belonging to the $c_{m}^{th}$ category is first linearly mapped to the feature map scale. Then, a 2D Gaussian distribution $\mathcal{N}(\mathbf{m},\mathbf{\Sigma})$ is adopted to produce the target heatmap $\textbf{C}\in\mathbb{R}^{\frac{W}{s}\times\frac{H}{s}\times C}$. Here, $\mathbf{m}=(x,y)$ is the mean of the rotated Gaussian distribution, and its probability density function can be calculated from the covariance matrix in Eq. 1.
$\displaystyle\Sigma^{1/2}$ $\displaystyle=\mathbf{R}\mathbf{S}\mathbf{R}^{\top}$ (1)
$\displaystyle=\left(\begin{array}[]{cc}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{array}\right)\left(\begin{array}[]{cc}\sigma_{x}&0\\ 0&\sigma_{y}\end{array}\right)\left(\begin{array}[]{cc}\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{array}\right)$
$\displaystyle=\left(\begin{array}[]{cc}\sigma_{x}\cos^{2}\theta+\sigma_{y}\sin^{2}\theta&\left(\sigma_{x}-\sigma_{y}\right)\cos\theta\sin\theta\\ \left(\sigma_{x}-\sigma_{y}\right)\cos\theta\sin\theta&\sigma_{x}\sin^{2}\theta+\sigma_{y}\cos^{2}\theta\end{array}\right),$
where $s$ is the downsampling stride, $\sigma_{x}=\alpha\frac{\sigma_{p}\times w}{\sqrt{w\times h}}$, $\sigma_{y}=\alpha\frac{\sigma_{p}\times h}{\sqrt{w\times h}}$, and $\sigma_{p}$ is a size-adaptive standard deviation [11]. $\alpha$ is set to 1.2 in our implementation and was not carefully tuned. Fig. 4 is a schematic diagram of mapping a rotated bounding box to a rotated Gaussian distribution.
If two Gaussian kernels belong to the same category with an overlap region, we
take the maximum value at each pixel of the feature map.
$\hat{\textbf{C}}\in\mathbb{R}^{\frac{W}{s}\times\frac{H}{s}\times C}$ is a
prediction on feature maps produced by the backbones. Fig. 5(a) shows a
visualization of the center heatmaps.
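The rotated kernel of Eq. (1) and the pixelwise-max merge above can be sketched as follows (an illustrative numpy rendering under our own conventions, not the authors' implementation; `theta` is in radians):

```python
import numpy as np

def rotated_gaussian(shape, center, sigma_x, sigma_y, theta):
    """Render one rotated 2D Gaussian kernel on a heatmap of the
    given (H, W) shape, peaking at 1.0 at `center` = (x, y)."""
    H, W = shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    dx, dy = xs - center[0], ys - center[1]
    # Rotate pixel offsets into the box-aligned frame (R^T d).
    c, s = np.cos(theta), np.sin(theta)
    u = c * dx + s * dy
    v = -s * dx + c * dy
    return np.exp(-(u ** 2 / (2 * sigma_x ** 2) + v ** 2 / (2 * sigma_y ** 2)))

# Overlapping kernels of the same class are merged with a pixelwise max.
heat = np.maximum(
    rotated_gaussian((64, 64), (20, 32), 8.0, 3.0, np.pi / 6),
    rotated_gaussian((64, 64), (40, 32), 8.0, 3.0, -np.pi / 6),
)
```

Each annotated box contributes one such kernel to the heatmap channel of its category; the elongated $\sigma$ along the ship's long side is what distinguishes this from CenterNet's isotropic kernel.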
We extract locations with values larger than or equal to those of their 8-connected neighbors as detected center points. The value of each peak point is used as a confidence measure, and its coordinates in the feature map are used as an index to retrieve other attributes. Therefore, accurately locating the center point on the feature map is the key part of the whole detection.
The peaks of the Gaussian kernels, which are also the centers of the rotated boxes, are treated as positive samples while all other pixels are treated as negative samples, which may cause a huge imbalance between positive and negative samples. To handle the imbalance issue, we use the variant focal loss as in [23, 11]:
$\mathcal{L}_{c}=\frac{-1}{N}\sum_{xyc}\left\{\begin{array}[]{ll}\left(1-\hat{\textbf{C}}_{xyc}\right)^{\gamma}\log\left(\hat{\textbf{C}}_{xyc}\right)&\text{ if }\textbf{C}_{xyc}=1\\ \left(1-\textbf{C}_{xyc}\right)^{\beta}\left(\hat{\textbf{C}}_{xyc}\right)^{\gamma}\log\left(1-\hat{\textbf{C}}_{xyc}\right)&\text{ otherwise }\end{array}\right.$ (2)
where $\gamma$ and $\beta$ are the hyper-parameters of the focal loss, and $N$ is the number of objects in image $I$, which is used to normalize all positive focal loss instances to $1$. We empirically set $\gamma=2$ and $\beta=4$ in our experiments as in [26].
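Eq. (2) can be sketched directly in numpy (an illustrative implementation after the CenterNet/CornerNet formulation, not the authors' code; `eps` is our numerical-stability addition):

```python
import numpy as np

def variant_focal_loss(pred, target, gamma=2.0, beta=4.0, eps=1e-12):
    """Variant focal loss of Eq. (2) over one heatmap.
    pred: predicted scores in (0, 1); target: Gaussian heatmap in
    [0, 1], equal to 1 exactly at annotated peaks."""
    pos = target == 1.0
    num_pos = max(pos.sum(), 1)  # N: number of objects
    pos_loss = ((1 - pred[pos]) ** gamma * np.log(pred[pos] + eps)).sum()
    neg = ~pos
    # Negatives near a peak (target close to 1) are down-weighted by beta.
    neg_loss = ((1 - target[neg]) ** beta * pred[neg] ** gamma
                * np.log(1 - pred[neg] + eps)).sum()
    return -(pos_loss + neg_loss) / num_pos
```

A prediction that is confident at the peak and low elsewhere incurs a near-zero loss, while uniform predictions are penalized heavily.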
To reduce the quantization error caused by the output stride, we produce local
offset feature maps
$\textbf{O}\in\mathbb{R}^{\frac{W}{S}\times\frac{H}{S}\times 2}$. Suppose that ${c}=\left\{\left(\hat{x}_{k},\hat{y}_{k}\right)\right\}_{k=1}^{n}$ is the set of detected center points, where each center point location is given by integer coordinates $c_{k}=(\hat{x}_{k},\hat{y}_{k})$ on feature map C. For each predicted center point $c_{k}$, let the value on the offset feature maps $f_{k}=(\delta\hat{x}_{k},\delta\hat{y}_{k})$ be the offset of center point $c_{k}$. The final center point location of class $c$ is $\hat{center_{c}}=\left\{\left(\hat{x}_{k}+\delta\hat{x}_{k},\hat{y}_{k}+\delta\hat{y}_{k}\right)\right\}_{k=1}^{n}$. Note that all classes share the same offset predictions to reduce the computational complexity. The offset is optimized with an L1 loss, and this supervision is performed on all center points.
$\mathcal{L}_{\text{co}}=\frac{1}{N}\sum_{k=1}^{N}\left|\textbf{O}{{c_{k}}}-\left(\frac{\rm{center}_{k}}{S}-c_{k}\right)\right|.$
(3)
The regression of the size of objects is similar to that of local offset.
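The quantization error and the offset target of Eq. (3) can be made concrete with a small sketch (the function name is ours; the mapping itself is standard stride arithmetic):

```python
def offset_target(center, stride):
    """Split an image-space center into its integer feature-map cell
    and the sub-pixel offset of Eq. (3) that the stride-S downsampling
    would otherwise discard."""
    cx, cy = center
    gx, gy = int(cx // stride), int(cy // stride)
    return (gx, gy), (cx / stride - gx, cy / stride - gy)

# A center at (103, 57) with stride 4 maps to cell (25, 14)
# with residual offset (0.75, 0.25).
cell, off = offset_target((103, 57), 4)
print(cell, off)  # -> (25, 14) (0.75, 0.25)
```

At inference, adding the predicted offset back to the integer peak location recovers the sub-pixel center.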
### III-C Head Point estimation
We perform two steps for better head points estimation.
#### III-C1 Regression-based head point estimation
Let $\mathrm{head}_{k}=(h_{x},h_{y})$ be the $k^{th}$ head point. We directly regress the offsets $(\varDelta\hat{x}_{k},\varDelta\hat{y}_{k})$ on feature map $\textbf{R}\in\mathbb{R}^{\frac{W}{S}\times\frac{H}{S}\times 2}$ at each predicted center point $c_{k}\in\hat{center}$. The regression-based head point is $\left\{\left(\hat{x}_{k}+\varDelta\hat{x}_{k},\hat{y}_{k}+\varDelta\hat{y}_{k}\right)\right\}_{k=1}^{n}$, where $\left(\varDelta\hat{x}_{k},\varDelta\hat{y}_{k}\right)$ is the head point regression, and an L1 loss is used to optimize the head regression feature maps.
$\mathcal{L}_{hr}=\frac{1}{N}\sum_{k=1}^{N}\left|\textbf{R}_{c_{k}}-h_{k}\right|.$
(4)
#### III-C2 Bottom-up head point estimation
We use standard bottom-up multi-person pose estimation [43] to refine the head points. A target map $\textbf{H}\in\mathbb{R}^{\frac{W}{s}\times\frac{H}{s}\times 1}$ is computed as described in Section III-B. The low-resolution head location is $\rm\tilde{head}=\left\lfloor\frac{head}{s}\right\rfloor$. The head point heatmap $\textbf{E}\in\mathbb{R}^{\frac{W}{S}\times\frac{H}{S}\times 1}$ and the local offset heatmap $\textbf{HO}\in\mathbb{R}^{\frac{W}{S}\times\frac{H}{S}\times 2}$ are head maps produced by the backbones. These two head maps are trained with the variant focal loss and an L1 loss, respectively.
$\mathcal{L}_{he}=\frac{-1}{N}\sum_{xy}\left\{\begin{array}[]{ll}\left(1-\textbf{E}_{xy}\right)^{\gamma}\log\left(\textbf{E}_{xy}\right)&\text{ if }\textbf{H}_{xy}=1\\ \left(1-\textbf{H}_{xy}\right)^{\beta}\left(\textbf{E}_{xy}\right)^{\gamma}\log\left(1-\textbf{E}_{xy}\right)&\text{ otherwise }\end{array}\right.$ (5)
$\mathcal{L}_{ho}=\frac{1}{N}\sum_{k=1}^{N}\left|\textbf{HO}_{c_{k}}-\left(\frac{\rm{head_{k}}}{S}-\tilde{head}\right)\right|.$
(6)
The bottom-up head point estimation works in the same way as the center point detection. Note that in center point detection, each category has its own center point heatmap, while in head point estimation, all categories share one head point heatmap. We extract all peak locations $\hat{\rm{head}}=\left\{\tilde{l}_{i}\right\}$ with a confidence $\textbf{E}_{x,y}>0.1$ as the set of potential head points, and refine the potential head point locations by adding the offset ${(\xi_{x},\xi_{y})}$. Fig. 5(b) visualizes the head point heatmap.
We introduce a set of weighting factors to balance the contributions of these parts, and set $\lambda_{o}=1$, $\lambda_{s}=0.1$, $\lambda_{\rm{hr}}=1$, $\lambda_{\rm{he}}=1$, and $\lambda_{\rm{ho}}=1$ in all our experiments. We set $\lambda_{s}=0.1$ since the scale of the size loss ranges from $0$ to the output size $h/S$. The overall training loss is
$\displaystyle\mathcal{L}=$
$\displaystyle\mathcal{L}_{c}+\lambda_{o}\mathcal{L}_{o}+\lambda_{s}\mathcal{L}_{s}+\lambda_{\rm{hr}}\mathcal{L}_{\rm{hr}}+\lambda_{\rm{he}}\mathcal{L}_{\rm{he}}+\lambda_{\rm{ho}}\mathcal{L}_{\rm{ho}}.$
(7)
In the testing phase, we first extract the center points from the output
center heatmaps C for each category. We use a $3\times 3$ max-pooling layer
to find the peak points and select the top 100 peaks as potential center
points. Each center point location is represented by integer coordinates
$\hat{c}=(\hat{x},\hat{y})$. We then read the offsets $(\delta_{x},\delta_{y})$,
size $(w,h)$, and head point regression
$\left(\varDelta_{x},\varDelta_{y}\right)$ from the corresponding feature maps
at the center point locations. We also pick all head peak points
$\hat{\rm{head}}$ on the output head heatmap E whose scores satisfy
$\textbf{E}_{x,y}>0.1$, and then assign each regressed location
${\rm{head}_{r}=\left(\hat{x}+\varDelta_{x},\hat{y}+\varDelta_{y}\right)}$ to
its closest detected head point
$\arg\min_{l\in\hat{\rm{head}}}\left(l-\rm{head}_{r}\right)^{2}$ as the head
point $(\hat{h_{x}},\hat{h_{y}})$; we then add the head point offset
$(\xi_{x},\xi_{y})$ to refine the head point estimate. Finally, we obtain the
rotated boxes
${(\hat{x}+\delta_{x},\hat{y}+\delta_{y},w,h,\hat{h_{x}}+\xi_{x},\hat{h_{y}}+\xi_{y})}$.
The line connecting the center point and the head point determines the
orientation of the target.
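The peak-extraction step above can be sketched as follows (a NumPy stand-in
for the $3\times 3$ max-pooling layer; the function name and threshold
argument are illustrative):

```python
import numpy as np

def extract_peaks(heatmap, top_k=100, thresh=None):
    """Keep only local maxima (points unchanged by 3x3 max pooling),
    optionally filter by a score threshold, and return the top-k."""
    h, w = heatmap.shape
    padded = np.pad(heatmap, 1, constant_values=-np.inf)
    # 3x3 max pooling with stride 1, built from the 9 shifted views.
    pooled = np.max([padded[dy:dy + h, dx:dx + w]
                     for dy in range(3) for dx in range(3)], axis=0)
    ys, xs = np.nonzero(heatmap == pooled)
    scores = heatmap[ys, xs]
    if thresh is not None:                 # e.g. 0.1 for head points
        keep = scores > thresh
        ys, xs, scores = ys[keep], xs[keep], scores[keep]
    order = np.argsort(-scores)[:top_k]
    return xs[order], ys[order], scores[order]
```

The same routine serves both heatmaps: top-100 peaks for the center heatmap,
and a 0.1 score threshold for the head-point heatmap.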
Figure 6: A visualization of ship probability density map. In the ship
probability density map, $l_{a}$ represents the mean length of category $a$,
$l$ represents the length of the detected ship. The red area is the
probability that the target belongs to category $a$.
### III-D Orientation-Invariant Model
Let $\textbf{I}\in\mathbb{R}^{W\times H\times 3}$ be an input image with width
$W$ and height $H$. The feature map generated by the backbone is
$\textbf{F}\in\mathbb{R}^{\frac{W}{S}\times\frac{H}{S}\times K}$, where $S$ is
the output stride and $K$ is the number of output feature channels. In this
paper, we set the default stride to $S=4$ and the feature channels to $K=64$.
The features generated by these backbones are not rotation-invariant [44],
while ships in remote sensing images appear with arbitrary orientations. To
alleviate this inconsistency, we introduce an orientation-invariant model
(OIM) which consists of two modules: active rotating filters (ARF) and
oriented response pooling (ORPooling) [44].
We first use ARF to explicitly encode the orientation information. An ARF is a
$k\times k\times N$ filter that actively rotates $N-1$ times during
convolution to produce a feature map with $N$ orientation channels. For a
feature map M and an ARF $\mathcal{F}$, the $i^{th}$ output channel
$\mathbf{I}^{(i)}$, $i\in[0,N-1]$, is obtained by clockwise rotating
$\mathcal{F}$ by $\theta_{i}=\frac{2\pi i}{N}$ ($N$ is set to 8 by default),
and can be computed as
$\mathbf{I}^{(i)}=\sum_{n=0}^{N-1}\mathcal{F}_{\theta_{i}}^{(n)}\cdot\mathbf{M}^{(n)},\theta_{i}=i\frac{2\pi}{N},i=0,\ldots,N-1$
(8)
where $\mathcal{F}_{\theta_{i}}$ is the clockwise $\theta_{i}$-rotated version
of $\mathcal{F}$, and $\mathcal{F}_{\theta_{i}}^{(n)}$ and $\textbf{M}^{(n)}$
are the $n^{th}$ orientation channels of $\mathcal{F}_{\theta_{i}}$ and M,
respectively. The ARF captures the image response in $N$ directions and
explicitly encodes its location and orientation into a single feature map with
$N$ orientation channels. To reduce computational complexity, we use a
combination of small $3\times 3$ filters and $8$ orientation channels in our
experiments.
Feature maps produced by ARF are not rotation-invariant, as orientation
information is encoded rather than discarded. ORPooling is then used to
extract orientation-invariant features. It is achieved simply by choosing the
orientation channel with the strongest response as the output feature
$\hat{\textbf{I}}\in\mathbb{R}^{\frac{W}{S}\times\frac{H}{S}\times K}$. That is,
$\hat{\mathbf{I}}=\max\left\{\mathbf{I}^{(n)}\right\},0\le n\le N-1.$ (9)
Since ORPooling extracts the maximum response value across all ARF orientation
channels, the features of a target at a given location are identical
regardless of its orientation. From this rotation-invariant feature, six kinds
of feature maps are then obtained by separate convolution layers. Moreover,
OIM introduces only one convolution layer with a small number of parameters,
which has little effect on training and inference speed.
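The invariance argument can be checked numerically: rotating the input by a
multiple of $\frac{2\pi}{N}$ cyclically shifts the orientation channels of an
ARF feature map, and the ORPooling output of Eq. (9) is unchanged under such a
shift. A tiny sketch on a toy feature map:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                  # orientation channels
feat = rng.random((N, 4, 4))           # toy ARF output: (orientations, H, W)

pooled = feat.max(axis=0)              # ORPooling, Eq. (9)

# A rotation of the input by 2*pi*k/N cycles the orientation channels,
# which leaves the channel-wise maximum untouched.
for k in range(N):
    shifted = np.roll(feat, k, axis=0)
    assert np.allclose(shifted.max(axis=0), pooled)
```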
The rotation-invariant feature is very important for detecting arbitrarily
oriented objects, as it enhances the consistency of the features. Our detector
extracts locations with local maxima as detected center points, so at the
object center the rotation-invariant features of arbitrarily oriented objects
are identical, which increases the generalization ability of the network.
Otherwise, more parameters would be needed to encode the orientation
information.
### III-E Refine probability according to size
By normalizing the GSD of remote sensing images, objects of the same size on
the ground have the same size in all images. The size of a target is an
important clue for identifying it, because a given type of target in remote
sensing images usually falls within a relatively fixed size range. We propose
an approach to adjust the confidence score of targets according to prior
knowledge of ship size. As shown in Fig. 5(d), suppose the category of the
detected box is $a$ with original confidence score $s_{a}$, and assume that
the length of the detected ship follows a normal distribution whose mean and
standard deviation for category $a$ are $l_{a}$ and $\delta_{a}$. Then the
probability of the target belonging to $a$ is $p_{a}$, i.e.
$p_{a}=\frac{2}{\delta_{a}\sqrt{2\pi}}\int_{-\infty}^{-|l-l_{a}|}\exp\left(-\frac{x^{2}}{2\delta_{a}^{2}}\right)dx.$
(10)
To reduce the number of hyper-parameters, we assume that the standard
deviation is proportional to the mean, $\delta_{a}=l_{a}\times\lambda$, for
all categories of ships. We multiply the two probabilities to obtain the final
detection confidence, $\hat{p_{a}}=p_{a}\times s_{a}$.
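Under the normality assumption, Eq. (10) is the two-tailed probability of a
deviation at least as large as the observed one, which reduces to the
complementary error function. A minimal sketch (the default coefficient 0.2
follows the best value in Table II; function names are ours):

```python
import math

def size_prior_prob(l, l_a, lam=0.2):
    """Two-tailed probability of Eq. (10): how likely a ship of
    category a (mean length l_a, std lam * l_a) deviates by at
    least |l - l_a| from its mean length."""
    delta = lam * l_a
    return math.erfc(abs(l - l_a) / (delta * math.sqrt(2.0)))

def refine_confidence(score, l, l_a, lam=0.2):
    """Final confidence: p_hat = p_a * s_a."""
    return size_prior_prob(l, l_a, lam) * score
```

For a detection whose length exactly matches the category mean the factor is
1, so the score is unchanged; the factor decays as the measured length departs
from the mean.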
## IV Experiments
We evaluate our method on our FGSD2021 dataset and on the public HRSC2016 [45]
and UCAS-AOD [46] datasets. In this section, we first introduce the datasets
and implementation details, then perform ablation studies and compare our
network with several state-of-the-art methods.
Figure 7: Example images from the proposed FGSD2021 dataset. 20 categories are
chosen and annotated in our dataset, including Aircraft carriers, Wasp-class,
Tarawa-class, Austin-class, Whidbey-island-class, San-Antonio-class, Newport-
class, Ticonderoga-class, Arleigh-Burke-class, Perry-class, Lewis and Clark-
class, Supply-class, Henry J. Kaiser-class, Bob Hope-Class, Mercy-class,
Freedom-class, Independence-class, Avenger-class, submarine, and others.
### IV-A Datasets
#### IV-A1 HRSC2016
The HRSC2016 dataset [45] is a challenging dataset for ship detection in
remote sensing images, collected from six famous harbors on Google Earth. The
training, validation, and test sets include 436 images with 1207 samples, 181
images with 541 samples, and 444 images with 1228 samples, respectively. The
image sizes in this dataset range from $300\times 300$ to $1500\times 900$.
The dataset defines three levels of tasks (i.e., L1, L2, and L3), containing
1 class, 4 classes, and 19 classes, respectively. Besides, the head point of
each ship is given in this dataset. Following [28] [35] [32], we evaluate our
method on task L1. We used the training and validation sets in the training
phase and evaluated detection performance on the test set.
#### IV-A2 FGSD2021
The existing ship dataset HRSC2016 has the following shortcomings. First, the
GSD is unknown, so we cannot relate the size of objects in the image to their
actual size on the ground. Second, the images are very small, which is
inconsistent with practical remote sensing detection tasks. To solve these
problems, we propose a new ship detection dataset, FGSD2021, which has a fixed
GSD.
Our dataset was developed by collecting high-resolution satellite images from
publicly available Google Earth, covering famous ports such as San Diego,
Kitsap-Bremerton, Norfolk, Pearl Harbor, and Yokosuka. We usually obtained
multiple images of the same port on different days, and some images come from
the HRSC2016 dataset. We collected 636 images with a normalized GSD of 1 meter
per pixel. The images in our dataset are very large; usually one image covers
a whole port. The image widths range from 157 to 7789 pixels with an average
of 1202 pixels, and the heights range from 224 to 6506 pixels with an average
of 1205 pixels. Our FGSD2021 dataset is divided into 424 training images and
212 test images. The training set is used in the training phase, and the
detection performance of the proposed method is evaluated on the test set.
FGSD2021 includes 5274 labeled targets across 20 chosen and annotated
categories. We used the labelImg2 tool
(https://github.com/chinakook/labelImg2) to label the ships; the angle range
is $[0^{\circ},360^{\circ})$, and the main direction is the direction of the
bow. Some examples of annotated patches are shown in Fig. 7.
#### IV-A3 UCAS-AOD
The UCAS-AOD dataset [46] contains 1510 aerial images of about $659\times
1280$ pixels with 14596 instances of two categories: plane and car. The angle
range of targets in this dataset is $[0^{\circ},180^{\circ})$, so we manually
annotated the head directions. We randomly sampled 1132 images for training
and 378 images for testing. All images were cropped into patches of size
$672\times 672$.
### IV-B Implementation Details
Our network was implemented in PyTorch on a PC with an Intel Core i7-8700K CPU
and an NVIDIA RTX 2080Ti GPU. We used the Adam method [47] as the optimizer,
with an initial learning rate of $2.5\times 10^{-4}$. We trained our network
for 140 epochs, dropping the learning rate at epoch 90. During the training
phase, we used random rotation, random flipping, and color jittering for data
augmentation. To maintain the GSD of the images, we cropped all images into
$1024\times 1024$ slices with a stride of 820 and resized them to $512\times
512$. We merged the detection results of all slices to restore the detection
results on the original image. Finally, we applied rotated non-maximum
suppression (RNMS) with an IoU threshold of 0.15 to discard repeated
detections. The speed of the proposed network was measured on a single NVIDIA
RTX 2080Ti GPU.
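The tiling scheme above (1024-pixel slices with a stride of 820, the last
slice clamped to the image border) can be sketched as follows (helper names
are ours, not from the paper's code):

```python
def tile_starts(size, tile=1024, stride=820):
    """Top-left coordinates of overlapping tiles along one axis,
    with the final tile shifted back so it ends at the border."""
    if size <= tile:
        return [0]
    starts = list(range(0, size - tile + 1, stride))
    if starts[-1] + tile < size:
        starts.append(size - tile)
    return starts

def tile_boxes(width, height, tile=1024, stride=820):
    """(x, y, x + tile, y + tile) crop boxes covering the whole image."""
    return [(x, y, x + tile, y + tile)
            for y in tile_starts(height, tile, stride)
            for x in tile_starts(width, tile, stride)]
```

Every pixel is covered by at least one tile, and neighboring tiles overlap by
204 pixels, so objects cut by one tile boundary appear whole in the adjacent
tile before RNMS merges the per-tile detections.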
Several different backbones (e.g., deep layer aggregation (DLA) [48] and the
hourglass network (Hourglass) [49]) can be used to extract features from
images. We followed CenterNet [11] in enhancing DLA by replacing ordinary
convolutions with deformable convolutions and adding a 256-channel $3\times 3$
convolutional layer before the output head. The hourglass network consists of
two sequential hourglass modules, each of which includes 5 pairs of down- and
up-convolutional networks with skip connections. This network generally yields
better keypoint estimation performance [26].
### IV-C Evaluation Metrics
The IoU between oriented boxes is used to match detection results. The mean
average precision (mAP) and head direction accuracy are used to evaluate the
performance of arbitrarily oriented detectors.
#### IV-C1 IoU
The IoU is the overlapping area divided by the union area of two boxes. We
adopted the evaluation approach of DOTA [50] to compute the IoU. If the IoU
between a detection box and a ground-truth box is higher than a threshold, the
detection box is marked as a true positive (TP); otherwise it is a false
positive (FP). If a ground-truth box has no matching detection, it is marked
as a false negative (FN).
#### IV-C2 mAP
Precision and recall are calculated as
$\text{precision}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}}$ and
$\text{recall}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}$. We first set a
series of recall thresholds, and for each threshold we take the corresponding
maximum precision. AP is the average of these precisions, and the mean average
precision (mAP) is the mean of the APs over all classes. mAP0.5 through mAP0.8
are computed under IoU thresholds of 0.5 through 0.8, respectively. The PASCAL
VOC2007 metric is used to compute the mAP in all of our experiments.
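The VOC2007 metric interpolates precision at 11 recall levels. A minimal
sketch of the per-class AP computation (assuming detections have already been
matched to TP/FP and sorted by confidence to produce the recall/precision
curve):

```python
def voc07_ap(recalls, precisions):
    """11-point interpolated AP (PASCAL VOC2007): for each recall
    threshold t in {0.0, 0.1, ..., 1.0}, take the maximum precision
    over all operating points with recall >= t, then average."""
    ap = 0.0
    for i in range(11):
        t = i / 10.0
        p = max((p for r, p in zip(recalls, precisions) if r >= t),
                default=0.0)
        ap += p / 11.0
    return ap
```

mAP is then the mean of this quantity over all classes.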
#### IV-C3 Head direction accuracy
The prediction angle range of previous algorithms is $[0^{\circ},180^{\circ})$,
which cannot distinguish between the bow and stern of a ship. The mAP based on
the IoU between two rotated boxes is taken as the only evaluation criterion,
which cannot reflect the accuracy of the bow direction. To address this, we
define bow direction accuracy as an additional evaluation metric: the
proportion of TPs whose angle difference from the ground truth is less than
10 degrees.
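This metric can be computed as below (a sketch; the 10° tolerance follows the
definition above, and angle differences are measured on the circle so that
359° vs. 1° counts as a 2° error):

```python
def bow_direction_accuracy(pred_angles, gt_angles, tol=10.0):
    """Fraction of true positives whose bow-angle error, measured
    on the circle [0, 360), is below tol degrees."""
    def circ_diff(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)
    hits = sum(circ_diff(p, g) < tol
               for p, g in zip(pred_angles, gt_angles))
    return hits / max(len(pred_angles), 1)
```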
### IV-D Ablation Study
In this subsection, we present ablation experiments to investigate our models.
#### IV-D1 CenterNet as baseline
As an anchor-free detector, CenterNet performs keypoint estimation to find the
center point and regresses the object size at each center point position. To
carry out arbitrarily oriented ship detection, we add an extra branch to
predict the angle as a baseline, which we name CenterNet-Rbb. CenterNet-Rbb
uses DLA34 as the backbone, represents ships as rotated boxes with an angle,
and uses the L1 loss to optimize the angle regression feature maps. We set the
weighting factor $\lambda_{angle}=0.1$ to balance the contribution of this
part, since the scale of the loss ranges from $0$ to $180$. As shown in Table
I, CenterNet-Rbb achieves an mAP of 70.52%, which demonstrates that our
baseline achieves competitive performance.
TABLE I: Results achieved on FGSD2021 with different ablation versions. ‘Baseline’ represents adding a branch to predict the angle based on CenterNet. ‘Head Point’ represents replacing the angle prediction branch with the head point estimation module. ‘Rotate kernel’ represents generating the center heatmap with a rotated kernel during training. ‘OIM’ represents adding the orientation-invariant model behind the backbone. ‘Extra convolution’ represents replacing the OIM with two extra convolution layers. ‘Refine probability’ represents using the prior size information to adjust the confidence score of the detected boxes.

Setting | baseline | Different Settings of CHPDet | | | |
---|---|---|---|---|---
Head Point | | $\checkmark$ | $\checkmark$ | $\checkmark$ | $\checkmark$ | $\checkmark$
Rotate kernel | | | $\checkmark$ | $\checkmark$ | $\checkmark$ | $\checkmark$
OIM | | | | $\checkmark$ | | $\checkmark$
Extra convolution | | | | | $\checkmark$ |
Refine Probability | | | | | | $\checkmark$
mAP | 70.52 | 82.96 | 83.56 | 86.61 | 82.66 | 87.91
TABLE II: Performance of CHPDet achieved on FGSD2021 with different variance coefficients $\lambda$. ‘Without refine’ represents using the original confidence without refinement. ‘Ground truth class’ represents using the ground truth class label to eliminate misclassification.

Backbone | Image Size | $\lambda=0.1$ | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | without refine | Ground truth class
---|---|---|---|---|---|---|---|---|---|---|---
DLA34 | $512\times 512$ | 87.40 | 87.91 | 87.39 | 87.45 | 87.17 | 87.20 | 87.15 | 87.10 | 86.61 | 89.33
DLA34 | $1024\times 1024$ | 86.37 | 87.84 | 89.28 | 88.17 | 88.68 | 88.85 | 88.47 | 88.50 | 88.39 | 89.74
TABLE III: Detection accuracy on different types of ships and overall performance compared with state-of-the-art methods on FGSD2021. The short names for categories are defined as (abbreviation - full name): Air - Aircraft carriers, Was - Wasp class, Tar - Tarawa class, Aus - Austin class, Whi - Whidbey Island class, San - San Antonio class, New - Newport class, Tic - Ticonderoga class, Bur - Arleigh Burke class, Per - Perry class, Lew - Lewis and Clark class, Sup - Supply class, Kai - Henry J. Kaiser class, Hop - Bob Hope class, Mer - Mercy class, Fre - Freedom class, Ind - Independence class, Ave - Avenger class, Sub - Submarine, and Oth - Other. CHPDet† means CHPDet trained and tested with $1024\times 1024$ image size.

Method | Air | Was | Tar | Aus | Whi | San | New | Tic | Bur | Per | Lew | Sup | Kai | Hop | Mer | Fre | Ind | Ave | Sub | Oth | mAP(07)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
R2CNN [29] | 89.9 | 80.9 | 80.5 | 79.4 | 87.0 | 87.8 | 44.2 | 89.0 | 89.6 | 79.5 | 80.4 | 47.7 | 81.5 | 87.4 | 100 | 82.4 | 100 | 66.4 | 50.9 | 57.2 | 78.09
Retinanet-Rbb [23] | 89.7 | 89.2 | 78.2 | 87.3 | 77.0 | 86.9 | 62.7 | 81.5 | 83.3 | 70.6 | 46.8 | 69.9 | 80.2 | 83.1 | 100 | 80.6 | 89.7 | 61.5 | 42.5 | 9.1 | 73.49
ROI-Trans[30] | 90.9 | 88.6 | 87.2 | 89.5 | 78.5 | 88.8 | 81.8 | 89.6 | 89.8 | 90.4 | 71.7 | 74.7 | 73.7 | 81.6 | 78.6 | 100 | 75.6 | 78.4 | 68.0 | 66.9 | 83.48
SCRDet [31] | 77.3 | 90.4 | 87.4 | 89.8 | 78.8 | 90.9 | 54.5 | 88.3 | 89.6 | 74.9 | 68.4 | 59.2 | 90.4 | 77.2 | 81.8 | 73.9 | 100 | 43.9 | 43.8 | 57.1 | 75.90
CSL [33] | 89.7 | 81.3 | 77.2 | 80.2 | 71.4 | 77.2 | 52.7 | 87.7 | 87.7 | 74.2 | 57.1 | 97.2 | 77.6 | 80.5 | 100 | 72.7 | 100 | 32.6 | 37.0 | 40.7 | 73.73
DCL [34] | 89.9 | 81.4 | 78.6 | 80.7 | 78.0 | 87.9 | 49.8 | 78.7 | 87.2 | 76.1 | 60.6 | 76.9 | 90.4 | 80.0 | 78.8 | 77.9 | 100 | 37.1 | 31.2 | 45.6 | 73.34
R3Det[32] | 90.9 | 80.9 | 81.5 | 90.1 | 79.3 | 87.5 | 29.5 | 77.4 | 89.4 | 69.7 | 59.9 | 67.3 | 80.7 | 76.8 | 72.7 | 83.3 | 90.9 | 38.4 | 23.1 | 40.0 | 70.47
RSDet[9] | 89.8 | 80.4 | 75.8 | 77.3 | 78.6 | 88.8 | 26.1 | 84.7 | 87.6 | 75.2 | 55.1 | 74.4 | 89.7 | 89.3 | 100 | 86.4 | 100 | 27.6 | 37.6 | 50.6 | 73.74
S2A-Net[35] | 90.9 | 81.4 | 73.3 | 89.1 | 80.9 | 89.9 | 81.2 | 89.2 | 90.7 | 88.9 | 60.5 | 75.9 | 81.6 | 89.2 | 100 | 68.6 | 90.9 | 61.3 | 55.7 | 64.7 | 80.19
ReDet[51] | 90.9 | 90.6 | 80.3 | 81.5 | 89.3 | 88.4 | 81.8 | 88.8 | 90.3 | 90.5 | 78.1 | 76.0 | 90.7 | 87.0 | 98.2 | 84.4 | 90.9 | 74.6 | 85.3 | 71.2 | 85.44
Oriented R-CNN[52] | 90.9 | 89.7 | 81.5 | 81.1 | 79.6 | 88.2 | 98.9 | 89.8 | 90.6 | 87.8 | 60.4 | 73.9 | 81.8 | 86.7 | 100 | 60.0 | 100 | 79.4 | 66.9 | 63.7 | 82.54
BBAVectors[53] | 99.5 | 90.9 | 75.9 | 94.3 | 90.9 | 52.9 | 88.5 | 90.0 | 80.4 | 72.2 | 76.9 | 88.2 | 99.6 | 100 | 94.0 | 100 | 74.5 | 58.9 | 63.1 | 81.1 | 83.59
DARDet[54] | 90.9 | 89.2 | 69.7 | 89.6 | 88.0 | 81.4 | 90.3 | 89.5 | 90.5 | 79.7 | 62.5 | 87.9 | 90.2 | 89.2 | 100 | 68.9 | 81.8 | 66.3 | 44.3 | 56.2 | 80.31
CenterNet-Rbb[11] | 67.2 | 77.9 | 79.2 | 75.5 | 66.8 | 79.8 | 76.8 | 83.1 | 89.0 | 77.7 | 54.5 | 72.6 | 77.4 | 100 | 100 | 60.8 | 74.8 | 46.5 | 44.1 | 6.8 | 70.52
CHPDet-DLA34 | 90.9 | 90.4 | 89.6 | 89.3 | 89.6 | 99.1 | 99.4 | 90.2 | 90.2 | 90.3 | 70.7 | 87.9 | 89.2 | 96.5 | 100 | 85.1 | 100 | 84.4 | 68.5 | 56.9 | 87.91
CHPDet-DLA34† | 90.9 | 90.2 | 90.9 | 90.3 | 89.3 | 89.2 | 98.9 | 90.2 | 90.2 | 90.2 | 72.2 | 96.5 | 90.7 | 95.3 | 100 | 95.2 | 90.9 | 86.4 | 85.9 | 62.4 | 89.29
#### IV-D2 Effectiveness of the head point estimation
When we replace the angle prediction branch with the head point estimation
module, the overall mAP improves from 70.52% to 82.96%. This is a significant
improvement, which fully demonstrates the effectiveness of the head point
estimation approach. The improvement mainly comes from two aspects. First, the
approach makes full use of the prior knowledge of the bow point and improves
the accuracy of angle regression. Second, since multi-task learning is
performed, bow detection adds supervision information and improves the
accuracy of the other tasks.

To further verify the promoting effect of head point estimation on center
point detection and size regression, we set all angles of the ground truth and
the detected boxes to 0∘. Compared with CenterNet-Rbb, the mAP of CHPDet rises
from 84.4% to 88.0%. This shows that head point estimation is equivalent to
multi-task joint training: it gives the network more supervision and improves
its performance. Besides, head point estimation introduces only 3 additional
feature-map channels and 0.7 ms of latency.
#### IV-D3 Effectiveness of the rotated Gaussian kernel
Our detector uses the rotated Gaussian kernel to map annotations to target
heatmaps, achieving an improvement of 0.6% over normal Gaussian kernels. This
implies that the rotated Gaussian kernel is a better representation for OBBs
in aerial images.
The rotated Gaussian kernel can adjust its shape and direction according to
the shape of the target and reduce the influence of localization error on the
detection results. As shown in Fig. 4, the rotated Gaussian kernel tolerates
the largest error along the long axis, so during detection the center point
tends to have a larger error along the long axis. Because a center point error
along the long axis has the least influence on the IoU (and an error along the
short axis the most), the rotated Gaussian kernel reduces the influence of
localization error on the detection results. Note that the rotated Gaussian
kernel does not introduce any additional parameters and does not increase
training or inference time. Consequently, it is a completely cost-free module.
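A rotated Gaussian kernel can be splatted as below (a sketch; the mapping from
box side lengths to standard deviations, here a fixed fraction `alpha`, is our
own assumption rather than the paper's exact radius rule):

```python
import numpy as np

def rotated_gaussian_heatmap(shape, center, w, h, theta, alpha=0.25):
    """2-D Gaussian whose principal axes follow the box angle theta
    and whose spreads scale with the box width and height."""
    H, W = shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    dx, dy = xs - center[0], ys - center[1]
    # Rotate the offsets into the box-aligned frame.
    u = dx * np.cos(theta) + dy * np.sin(theta)
    v = -dx * np.sin(theta) + dy * np.cos(theta)
    su, sv = alpha * w, alpha * h
    return np.exp(-(u ** 2 / (2 * su ** 2) + v ** 2 / (2 * sv ** 2)))
```

Along the long axis the kernel decays slowly, so center-point errors in that
direction are penalized less, matching the argument above.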
#### IV-D4 Effectiveness of the orientation-invariant model
We add the orientation-invariant model (OIM) at the end of the backbone and
keep the other settings unchanged to validate its effectiveness. As shown in
Table I, compared with the standard backbone, the backbone with the
orientation-invariant model improves mAP by about 3 percentage points to
86.61%, while introducing only 2.6 ms of latency.
To further verify the effectiveness of the OIM structure, we replace the OIM
with two convolution layers. Compared with the standard backbone, the backbone
with two extra convolution layers drops the performance to 82.66%. This proves
that the performance improvement does not come from the increase in the number
of parameters.
We argue that the standard backbones are not rotation-invariant and that the
corresponding features are rotation-sensitive. Consequently, OIM increases the
consistency between targets and their corresponding features. It improves not
only the accuracy of angle prediction, but also the accuracy of center point
detection and size regression.
#### IV-D5 Effectiveness of the Refine probability model
In the FGSD2021 dataset, the actual length of each category is known. For
example, the length of the Ticonderoga-class cruiser is 172.8 meters. In our
designed network, this prior knowledge of ship length is used to refine the
confidence that a detected ship belongs to a certain category. Table I shows
the mAP values of different ablation versions on the test set. It can be
observed that the baseline model achieves the lowest mAP. When the prior size
information is incorporated, the performance improves. The accuracy
improvement on low-resolution images is more obvious, e.g., from 86.61% to
87.91%, an increase of 1.3% in mAP. This demonstrates that the prior size
information can improve classification accuracy.
We set a variance coefficient to adjust the influence of size on the
probability: we use the length of each ship type $l_{a}$ multiplied by a
coefficient $\lambda$ as the standard deviation for that type,
$\delta_{a}=l_{a}\times\lambda$. The variance coefficient affects
classification accuracy. When the coefficient is large, the probability
differences between categories become smaller, and size has less influence on
the category confidence, and vice versa. As can be observed in Table II, when
the coefficient is small, it is equivalent to using size as the main
information for classifying objects. Accuracy increases gradually as the
coefficient increases, and once the coefficient is larger than 0.2, it has
little impact on accuracy. When we treat all categories as one category and
remove the category influence on the detection results, the mAP is $89.33$%
and $89.74$%, respectively. At the same time, by incorporating prior
information to adjust the classification confidence, the detection accuracy
over 20 categories with an input image of size $1024\times 1024$ reaches an
mAP of $89.28$%, which shows that after incorporating the prior information,
almost all categories are classified correctly.
TABLE IV: Detection performance on FGSD2021 at different IoU thresholds and the accuracy of the bow direction. BDA denotes bow direction accuracy.

Method | Backbone | Image Size | mAP0.5 | mAP0.6 | mAP0.7 | mAP0.8 | BDA | FPS
---|---|---|---|---|---|---|---|---
R2CNN[29] | Resnet50 | $512\times 512$ | 78.09 | 75.03 | 64.83 | 36.41 | _ | 10.3
Retinanet-Rbb[23] | Resnet50 | $512\times 512$ | 73.49 | 69.17 | 62.82 | 45.00 | _ | 35.6
RoI-Trans[30] | Resnet50 | $512\times 512$ | 83.48 | 82.63 | 80.35 | 65.18 | _ | 19.2
SCRDet[31] | Resnet50 | $512\times 512$ | 75.90 | 70.98 | 61.82 | 35.12 | _ | 9.2
CSL[33] | Resnet50 | $512\times 512$ | 73.73 | 69.71 | 60.25 | 34.93 | _ | 10.4
DCL[34] | Resnet50 | $512\times 512$ | 73.34 | 69.19 | 57.80 | 28.54 | _ | 10.0
R3Det[32] | Resnet50 | $512\times 512$ | 70.47 | 68.32 | 57.17 | 27.44 | _ | 14.0
RSDet[9] | Resnet50 | $512\times 512$ | 73.74 | 69.55 | 61.52 | 35.83 | _ | 15.4
S2A-Net[35] | Resnet50 | $512\times 512$ | 80.19 | 79.58 | 75.65 | 58.82 | _ | 33.1
ReDet[51] | ReResnet50 | $512\times 512$ | 85.44 | 84.65 | 80.24 | 67.94 | _ | 13.8
Oriented R-CNN[52] | Resnet50 | $512\times 512$ | 82.54 | 81.32 | 78.53 | 64.87 | _ | 27.4
BBAVectors[53] | Resnet50 | $512\times 512$ | 83.59 | 82.74 | 78.55 | 62.48 | _ | 18.5
DARDet[54] | Resnet50 | $512\times 512$ | 80.31 | 79.62 | 74.77 | 59.21 | _ | 31.9
CenterNet-Rbb[11] | DLA34 | $512\times 512$ | 70.52 | 69.34 | 65.52 | 45.33 | _ | 48.5
CHPDet(ours) | DLA34 | $512\times 512$ | 87.91 | 87.15 | 83.69 | 71.24 | 97.84 | 41.7
CHPDet(ours) | DLA34 | $1024\times 1024$ | 89.29 | 88.98 | 86.57 | 73.56 | 98.39 | 15.4
Figure 8: Comparison of the detection results on FGSD2021 with different methods. The first column is the ground truth, and the second to last columns are the results of Retinanet-Rbb [23], ROI-Trans [30], SCRDet [31], S2A-Net [35], and CHPDet (ours), respectively. Different colors of the rotated boxes represent different types of ships. The pink point represents the head point.

TABLE V: Detection accuracy on the HRSC2016 dataset; 07 means using the VOC2007 evaluation metric.

Method | Backbone | mAP(07)
---|---|---
R2CNN [29] | Resnet101 | 73.07
RRPN [28] | Resnet101 | 79.08
R2PN[10] | VGG16 | 79.6
ROI-trans[30] | Resnet101 | 86.20
Gliding Vertex[55] | Resnet101 | 88.20
BBAVectors[53] | Resnet101 | 88.6
R3Det [32] | Resnet101 | 89.26
FPN-CSL[33] | Resnet101 | 89.62
R3Det-DCL[34] | Resnet101 | 89.46
DAL[56] | Resnet101 | 89.77
R3Det-GWD [57] | Resnet101 | 89.85
RSDet [9] | ResNet152 | 86.5
FR-Est [58] | Resnet101 | 89.7
S2A-Net [35] | Resnet101 | 90.2
Oriented RepPoints[59] | Resnet50 | 90.38
ReDet[51] | ReResnet50 | 90.46
Oriented R-CNN[52] | Resnet101 | 90.50
DARDet [54] | Resnet50 | 90.37
CHPDet(ours) | DLA34 | 88.81
CHPDet(ours) | Hourglass104 | 90.55
TABLE VI: Detection accuracy on the UCAS-AOD dataset.

Method | Backbone | car | airplane | mAP(07)
---|---|---|---|---
YOLOv3 [60] | Darknet53 | 74.63 | 89.52 | 82.08
RetinaNet [23] | Resnet101 | 84.64 | 90.51 | 87.57
FR-O[50] | Resnet101 | 86.87 | 89.86 | 88.36
ROI-trans[30] | Resnet101 | 87.99 | 89.90 | 88.95
FPN-CSL[33] | Resnet101 | 88.09 | 90.38 | 89.23
R3Det-DCL[34] | Resnet101 | 88.15 | 90.57 | 89.36
DAL[56] | Resnet101 | 89.25 | 90.49 | 89.87
CHPDet(ours) | DLA34 | 88.58 | 90.64 | 89.61
CHPDet(ours) | Hourglass104 | 89.18 | 90.81 | 90.00
#### IV-D6 Bow direction accuracy
It can be seen from Table IV that the bow direction accuracy of our CHPDet
reaches 97.84, 98.14, and 98.39, respectively. This shows that almost all bow
directions are predicted correctly. As shown in Fig. 9, the pink dots
represent correct head points and the green dots represent wrong head points.
Our detector correctly finds the bow direction for all types of ships,
including aircraft carriers and amphibious ships. Only for a small number of
ships or submarines whose bow and stern look similar from a bird's-eye view is
the predicted bow direction reversed.
### IV-E Comparison with other methods
In this section, we compare our method with other representative ship
detectors, including RetinaNet-Rbb [23], ROI-trans [30]
(https://github.com/dingjiansw101/AerialDetection), R2CNN [29], CSL [33],
DCL [34], RSDet [9], SCRDet [31]
(https://github.com/yangxue0827/RotationDetection), and S2A-Net [35]
(https://github.com/csuhan/s2anet), on three benchmark datasets: FGSD2021,
HRSC2016 [45], and UCAS-AOD [46]. For a fair comparison, we used the default
settings of the original code on the DOTA dataset, including the same data
augmentation strategy and number of training epochs.
Figure 9: Some bow direction detection result of CHPDet. The pink dots
represent the correct head point and the green dots represent the wrong head
point.
#### IV-E1 Results on FGSD2021
We evaluate CHPDet on the FGSD2021 dataset and compare it with other rotation
detection methods. It can be seen from Table III that CHPDet achieves
$87.91\%$ mAP at $41.7$ FPS, surpassing all compared methods. Compared with
the general rotation detection methods RoI-Trans [30] and S2A-Net [35], our
proposed method achieves remarkable improvements of 4.5% and 7.7% in mAP and
19.3 and 8.6 in FPS, respectively. When higher-resolution images are used, the
accuracy improves further to $89.29\%$. This confirms that our method holds a
clear advantage in both accuracy and speed. To further verify the quality of
the predictions, we gradually increase the IoU threshold. As can be seen from
Table IV, as the IoU threshold increases, the performance of the other
detectors drops significantly, while the decline of our detector is relatively
small. When the IoU threshold is increased to $0.8$, the mAP of our CHPDet
remains at $71.24$. This shows that our detector produces higher-quality
rotated boxes than the other algorithms.
Fig. 8 shows a visual comparison of the detection results of Retinanet-Rbb
[23], ROI-Trans [30], SCRDet [31], S2A-Net [35], and our method. As shown in
the first row, all the other methods produce misclassifications or false
alarms, and S2A-Net [35] predicts an inaccurate angle, while our method
detects the targets precisely. For the densely packed parking scene in the
second row, all the compared detectors miss at least two submarines, while our
method is unaffected by the dense parking. The last row of Fig. 8 shows a
harbor with a complex background. Note that two ships are not in the water but
in a dry dock. ROI-trans [30] and S2A-Net [35] miss these targets, and SCRDet
[31] produces an inaccurate bounding box. Compared to these methods, ours
better detects ships against complex backgrounds and is more robust in
challenging situations.
This improvement mainly comes from three aspects. First, the algorithm makes
full use of the prior knowledge of the bow point and improves the accuracy of
direction regression. Second, since multi-task learning is performed, bow
detection adds supervision information and improves the accuracy of the other
tasks. Last, the prior knowledge of ship length is used to refine the
confidence that a detected ship belongs to a certain category. The use of this
prior knowledge of ships introduces significant performance improvements.
Figure 10: Sample object detection results of our proposed CHPDet on HRSC2016
dataset. Figure 11: Sample object detection results of our proposed CHPDet on
UCAS-AOD dataset.
#### IV-E2 Results on HRSC2016
The HRSC2016 dataset contains plenty of ships with arbitrary orientations. we
evaluate our method on task L1 which contains 1 class and report the results
with VOC2007 metric. To demonstrate the performance of our detector, we
compare it with other state-of-the-art methods, i.e., ReDet [51], Oriented
R-CNN [52], and Oriented RepPoints [59]. The overall comparison performance is
reported in Table V. Our method achieves the best performance over all the
compared methods, at an accuracy of $90.55\%$. To further show the performance
of CHPDet, the detection results are visualized in Fig. 10. As shown in the
first two columns, the densely parked ships can be detected well. In the last
two columns, there is a lot of background around ships, which is a huge
challenge for detectors. The results indicate that our proposed method can
avoid false alarms in complex background.
#### IV-E3 Results on UCAS-AOD
The UCAS-AOD dataset contains a large number of cars and planes, which are
often overwhelmed by complex backgrounds in aerial images. For a fair
comparison, we only report results under the VOC2007 metric. Table VI compares
our method with recent methods on the UCAS-AOD dataset. It can be seen
that our proposed method achieves the best performance (with an mAP of
90.00%). CHPDet, which uses a larger output resolution (an output stride of 4,
compared with the stride of 8 typical of traditional object detectors) and
represents ships as center and head points, can capture abundant information
about small objects. Fig. 11 gives some example detection results on the UCAS-AOD dataset.
We find that CHPDet performs well in a variety of challenging scenes, which
demonstrates the generalization capability of the detector.
## V Conclusion
Our proposed approach converts discontinuous angle regression to continuous
keypoint estimation by formulating ships as rotated boxes with a head point
representing the direction. This design can incorporate the prior knowledge of
the bow point, which not only improves the detection performance, but also
expands the range of predicted angles to $[0^{\circ}, 360^{\circ})$. Our
method can distinguish between bow and stern. CHPDet has a simple structure:
it assigns only one positive sample per annotation and simply extracts local
peaks from the keypoint heatmap, so Non-Maximum Suppression (NMS) is not
needed. This design ensures high time efficiency. The prior knowledge of ship
length is also incorporated to refine the confidence of the detected ships
belonging to a certain category.
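The NMS-free decoding step and the $[0^{\circ}, 360^{\circ})$ angle recovery can be sketched as follows. This is a minimal CenterNet-style illustration under our own assumptions (window size, score threshold, and coordinate convention are arbitrary), not the exact CHPDet post-processing:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def heatmap_peaks(H, k=3, thresh=0.3):
    """Keep points that are local maxima within a k x k window; no NMS is required."""
    is_peak = (maximum_filter(H, size=k, mode="constant") == H) & (H > thresh)
    ys, xs = np.nonzero(is_peak)
    order = np.argsort(H[ys, xs])[::-1]                      # sort by score, descending
    return [(ys[i], xs[i], H[ys[i], xs[i]]) for i in order]

def heading_angle(center, head):
    """Orientation from the center point to the head (bow) point, in [0, 360) degrees."""
    dy, dx = head[0] - center[0], head[1] - center[1]
    return float(np.degrees(np.arctan2(dy, dx)) % 360.0)
```

Because the bow point fixes which end of the ship is the front, the recovered angle is unambiguous over the full $360^{\circ}$ range, which a plain rotated box cannot provide.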
Although our method achieves encouraging results on ship detection from remote
sensing images, it cannot be directly applied to general oriented object
detection datasets in aerial images such as DOTA [50]. This is because CHPDet
requires more accurate annotations that mark the direction of the target head
within the $360^{\circ}$ range. CHPDet is several times faster than most
detectors at inference, but it suffers from a long training time. For future
work, we will address this issue by encoding more training samples from
annotated boxes.
In this paper, we proposed a one-stage anchor-free detection framework for
detecting arbitrarily oriented ships in remote sensing images by making full
use of ship priors. Our method detects ships by extracting their center and
head points, regresses the size of each ship at its center point with
rotation-invariant features, and refines the detection results based on the
prior information. CHPDet avoids the complex anchor design and computation of
anchor-based methods and can accurately predict angles over the large range
$[0^{\circ}, 360^{\circ})$. Experimental results demonstrate that our method
achieves better accuracy and efficiency compared with other ship detectors.
## References
* [1] S. He, H. Zou, Y. Wang, R. Li, F. Cheng, X. Cao, and M. Li, “Enhancing mid–low-resolution ship detection with high-resolution feature distillation,” _IEEE Geoscience and Remote Sensing Letters_ , 2021.
* [2] B. Li, Y. Guo, J. Yang, L. Wang, Y. Wang, and W. An, “Gated recurrent multiattention network for vhr remote sensing image classification,” _IEEE Transactions on Geoscience and Remote Sensing_ , 2021.
* [3] Z. Deng, H. Sun, S. Zhou, and J. Zhao, “Learning deep ship detector in sar images from scratch,” _IEEE Transactions on Geoscience and Remote Sensing_ , vol. 57, no. 6, pp. 4021–4039, 2019.
* [4] Z. Deng, H. Sun, S. Zhou, J. Zhao, L. Lei, and H. Zou, “Multi-scale object detection in remote sensing imagery with convolutional neural networks,” _ISPRS Journal of Photogrammetry and Remote Sensing_ , vol. 145, pp. 3–22, 2018.
* [5] G. Cheng and J. Han, “A survey on object detection in optical remote sensing images,” _ISPRS Journal of Photogrammetry and Remote Sensing_ , vol. 117, pp. 11–28, 2016.
* [6] X. Sun, P. Wang, C. Wang, Y. Liu, and K. Fu, “Pbnet: Part-based convolutional neural network for complex composite object detection in remote sensing imagery,” _ISPRS Journal of Photogrammetry and Remote Sensing_ , vol. 173, pp. 50–65, 2021.
* [7] Q. He, X. Sun, Z. Yan, and K. Fu, “Dabnet: Deformable contextual and boundary-weighted network for cloud detection in remote sensing images,” _IEEE Transactions on Geoscience and Remote Sensing_ , 2021.
* [8] M. Li, W. Guo, Z. Zhang, W. Yu, and T. Zhang, “Rotated region based fully convolutional network for ship detection,” in _IGARSS 2018-2018 IEEE International Geoscience and Remote Sensing Symposium_. IEEE, 2018, pp. 673–676.
* [9] W. Qian, X. Yang, S. Peng, Y. Guo, and C. Yan, “Learning modulated loss for rotated object detection,” _arXiv preprint arXiv:1911.08299_ , 2019.
* [10] Z. Zhang, W. Guo, S. Zhu, and W. Yu, “Toward arbitrary-oriented ship detection with rotated region proposal and discrimination networks,” _IEEE Geoscience and Remote Sensing Letters_ , vol. 15, no. 11, pp. 1745–1749, 2018.
* [11] X. Zhou, D. Wang, and P. Krähenbühl, “Objects as points,” _arXiv preprint arXiv:1904.07850_ , 2019.
* [12] S. Liu, Q. Du, X. Tong, A. Samat, and L. Bruzzone, “Unsupervised change detection in multispectral remote sensing images via spectral-spatial band expansion,” _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_ , vol. 12, no. 9, pp. 3578–3587, 2019.
* [13] D. S. Maia, M.-T. Pham, E. Aptoula, F. Guiotte, and S. Lefèvre, “Classification of remote sensing data with morphological attributes profiles: a decade of advances,” _IEEE Geoscience and Remote Sensing Magazine_ , 2021.
* [14] L. Liu, W. Ouyang, X. Wang, P. Fieguth, J. Chen, X. Liu, and M. Pietikäinen, “Deep learning for generic object detection: A survey,” _International Journal of Computer Vision_ , vol. 128, no. 2, pp. 261–318, 2020.
* [15] R. B. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in _2014 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014, Columbus, OH, USA, June 23-28, 2014_. IEEE Computer Society, 2014, pp. 580–587.
* [16] R. B. Girshick, “Fast R-CNN,” in _2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015_. IEEE Computer Society, 2015, pp. 1440–1448.
* [17] S. Ren, K. He, R. B. Girshick, and J. Sun, “Faster R-CNN: towards real-time object detection with region proposal networks,” in _Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada_ , C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, Eds., 2015, pp. 91–99.
* [18] K. He, G. Gkioxari, P. Dollár, and R. B. Girshick, “Mask R-CNN,” in _IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017_. IEEE Computer Society, 2017, pp. 2980–2988.
* [19] J. Dai, Y. Li, K. He, and J. Sun, “R-FCN: object detection via region-based fully convolutional networks,” in _Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain_ , D. D. Lee, M. Sugiyama, U. von Luxburg, I. Guyon, and R. Garnett, Eds., 2016, pp. 379–387.
* [20] J. Redmon, S. K. Divvala, R. B. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in _2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016_. IEEE Computer Society, 2016, pp. 779–788.
* [21] J. Redmon and A. Farhadi, “YOLO9000: better, faster, stronger,” in _2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017_. IEEE Computer Society, 2017, pp. 6517–6525.
* [22] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “Ssd: Single shot multibox detector,” in _European Conference on Computer Vision_. Springer, 2016, pp. 21–37.
* [23] T. Lin, P. Goyal, R. B. Girshick, K. He, and P. Dollár, “Focal loss for dense object detection,” in _IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017_. IEEE Computer Society, 2017, pp. 2999–3007.
* [24] Z. Cai and N. Vasconcelos, “Cascade R-CNN: delving into high quality object detection,” in _2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018_. IEEE Computer Society, 2018, pp. 6154–6162.
* [25] K. Chen, J. Pang, J. Wang, Y. Xiong, X. Li, S. Sun, W. Feng, Z. Liu, J. Shi, W. Ouyang, C. C. Loy, and D. Lin, “Hybrid task cascade for instance segmentation,” in _IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019_. Computer Vision Foundation / IEEE, 2019, pp. 4974–4983.
* [26] H. Law and J. Deng, “Cornernet: Detecting objects as paired keypoints,” in _Proceedings of the European Conference on Computer Vision (ECCV)_ , 2018, pp. 734–750.
* [27] Z. Tian, C. Shen, H. Chen, and T. He, “FCOS: fully convolutional one-stage object detection,” in _2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019_. IEEE, 2019, pp. 9626–9635.
* [28] J. Ma, W. Shao, Y. Hao, W. Li, W. Hong, Y. Zheng, and X. Xue, “Arbitrary-oriented scene text detection via rotation proposals,” _IEEE Transactions on Multimedia_ , vol. PP, no. 99, p. 1, 2017.
* [29] Y. Jiang, X. Zhu, X. Wang, S. Yang, W. Li, H. Wang, P. Fu, and Z. Luo, “R2cnn: Rotational region cnn for arbitrarily-oriented scene text detection,” in _2018 24th International Conference on Pattern Recognition (ICPR)_. IEEE, 2018, pp. 3610–3615.
* [30] J. Ding, N. Xue, Y. Long, G. Xia, and Q. Lu, “Learning roi transformer for detecting oriented objects in aerial images,” _arXiv: Computer Vision and Pattern Recognition_ , 2018.
* [31] X. Yang, J. Yang, Y. Zhang, T. Zhang, Z. Guo, X. Sun, and K. Fu, “Scrdet: Towards more robust detection for small, cluttered and rotated objects,” in _2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019_. IEEE, 2019, pp. 8231–8240.
* [32] X. Yang, Q. Liu, J. Yan, A. Li, Z. Zhang, and G. Yu, “R3det: Refined single-stage detector with feature refinement for rotating object,” _arXiv preprint arXiv:1908.05612_ , 2019.
* [33] X. Yang and J. Yan, “Arbitrary-oriented object detection with circular smooth label,” pp. 677–694, 2020.
* [34] X. Yang, L. Hou, Y. Zhou, W. Wang, and J. Yan, “Dense label encoding for boundary discontinuity free rotation detection,” pp. 15 819–15 829, 2021.
* [35] J. Han, J. Ding, J. Li, and G.-S. Xia, “Align deep features for oriented object detection,” _IEEE Transactions on Geoscience and Remote Sensing_ , 2021.
* [36] H. Wei, Y. Zhang, Z. Chang, H. Li, H. Wang, and X. Sun, “Oriented objects as pairs of middle lines,” _ISPRS Journal of Photogrammetry and Remote Sensing_ , vol. 169, pp. 268–279, 2020.
* [37] H. Wei, Y. Zhang, B. Wang, Y. Yang, H. Li, and H. Wang, “X-linenet: Detecting aircraft in remote sensing images by a pair of intersecting line segments,” _IEEE Transactions on Geoscience and Remote Sensing_ , 2020.
* [38] Z. Shi, X. Yu, Z. Jiang, and B. Li, “Ship detection in high-resolution optical imagery based on anomaly detector and local shape feature,” _IEEE Transactions on Geoscience and Remote Sensing_ , vol. 52, no. 8, pp. 4511–4523, 2013.
* [39] Y. Freund and R. E. Schapire, “A decision-theoretic generalization of on-line learning and an application to boosting,” _Journal of Computer and System Sciences_ , vol. 55, no. 1, pp. 119–139, 1997.
* [40] F. Yang, Q. Xu, and B. Li, “Ship detection from optical satellite images based on saliency segmentation and structure-lbp feature,” _IEEE Geoscience and Remote Sensing Letters_ , vol. 14, no. 5, pp. 602–606, 2017.
* [41] Z. Liu, H. Wang, L. Weng, and Y. Yang, “Ship rotated bounding box space for ship extraction from high-resolution optical satellite images with complex backgrounds,” _IEEE Geoscience and Remote Sensing Letters_ , vol. 13, no. 8, pp. 1074–1078, 2017.
* [42] Z. Liu, J. Hu, L. Weng, and Y. Yang, “Rotated region based cnn for ship detection,” in _IEEE International Conference on Image Processing_ , 2018.
* [43] Z. Cao, T. Simon, S. Wei, and Y. Sheikh, “Realtime multi-person 2d pose estimation using part affinity fields,” in _2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017_. IEEE Computer Society, 2017, pp. 1302–1310.
* [44] Y. Zhou, Q. Ye, Q. Qiu, and J. Jiao, “Oriented response networks,” in _2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017_. IEEE Computer Society, 2017, pp. 4961–4970.
* [45] Z. Liu, L. Yuan, L. Weng, and Y. Yang, “A high resolution optical satellite image dataset for ship recognition and some new baselines,” in _International Conference on Pattern Recognition Applications and Methods_ , vol. 2. SCITEPRESS, 2017, pp. 324–331.
* [46] C. Li, C. Xu, Z. Cui, D. Wang, T. Zhang, and J. Yang, “Feature-attentioned object detection in remote sensing imagery,” in _2019 IEEE International Conference on Image Processing (ICIP)_. IEEE, 2019, pp. 3886–3890.
* [47] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in _3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings_ , Y. Bengio and Y. LeCun, Eds., 2015.
* [48] F. Yu, D. Wang, E. Shelhamer, and T. Darrell, “Deep layer aggregation,” in _2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018_. IEEE Computer Society, 2018, pp. 2403–2412.
* [49] A. Newell, K. Yang, and J. Deng, “Stacked hourglass networks for human pose estimation,” pp. 483–499, 2016.
* [50] G. Xia, X. Bai, J. Ding, Z. Zhu, S. J. Belongie, J. Luo, M. Datcu, M. Pelillo, and L. Zhang, “DOTA: A large-scale dataset for object detection in aerial images,” in _2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018_. IEEE Computer Society, 2018, pp. 3974–3983.
* [51] J. Han, J. Ding, N. Xue, and G.-S. Xia, “Redet: A rotation-equivariant detector for aerial object detection,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2021, pp. 2786–2795.
* [52] X. Xie, G. Cheng, J. Wang, X. Yao, and J. Han, “Oriented r-cnn for object detection,” _arXiv preprint arXiv:2108.05699_ , 2021.
* [53] J. Yi, P. Wu, B. Liu, Q. Huang, H. Qu, and D. Metaxas, “Oriented object detection in aerial images with box boundary-aware vectors,” in _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision_ , 2021, pp. 2150–2159.
* [54] F. Zhang, X. Wang, S. Zhou, and Y. Wang, “Dardet: A dense anchor-free rotated object detector in aerial images,” _arXiv preprint arXiv:2110.01025_ , 2021.
* [55] Y. Xu, M. Fu, Q. Wang, Y. Wang, K. Chen, G.-S. Xia, and X. Bai, “Gliding vertex on the horizontal bounding box for multi-oriented object detection,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 43, no. 4, pp. 1452–1459, 2020.
* [56] Q. Ming, Z. Zhou, L. Miao, H. Zhang, and L. Li, “Dynamic anchor learning for arbitrary-oriented object detection,” _arXiv preprint arXiv:2012.04150_ , vol. 1, no. 2, p. 6, 2020.
* [57] X. Yang, J. Yan, Q. Ming, W. Wang, X. Zhang, and Q. Tian, “Rethinking rotated object detection with gaussian wasserstein distance loss,” _arXiv preprint arXiv:2101.11952_ , 2021.
* [58] K. Fu, Z. Chang, Y. Zhang, and X. Sun, “Point-based estimator for arbitrary-oriented object detection in aerial images,” _IEEE Transactions on Geoscience and Remote Sensing_ , 2020.
* [59] W. Li and J. Zhu, “Oriented reppoints for aerial object detection,” _arXiv preprint arXiv:2105.11111_ , 2021.
* [60] J. Redmon and A. Farhadi, “Yolov3: An incremental improvement,” _arXiv preprint arXiv:1804.02767_ , 2018.
Feng Zhang received the B.E. degree in electronic information engineering
from Harbin Institute of Technology (HIT), Harbin, China, in 2009, and the M.E.
degree in information and communication engineering from National University
of Defense Technology (NUDT), Changsha, China, in 2011. He is currently
pursuing a Ph.D. degree at the College of Electronic Science and Technology,
NUDT. His research interests include remote sensing image processing, pattern
recognition, and computer vision.
Xueying Wang received the B.S. degree in electronic information engineering
from Beihang University, Beijing, China, in 2009, and the M.S. and Ph.D.
degrees in electronic science and technology from the National University of
Defense Technology, Changsha, China, in 2011 and 2016, respectively. He is
currently an Assistant Professor with the College of Electrical Science,
National University of Defense Technology. His research interests include
remote sensing image processing and pattern recognition.
Shilin Zhou received the B.S., M.S., and Ph.D. degrees in electrical
engineering from Hunan University, Changsha, China, in 1994, 1996, and 2000,
respectively. He is currently a Full Professor with the College of Electrical
Science, National University of Defense Technology, Changsha. He has authored
or co-authored over 100 refereed papers. His research interests include image
processing and pattern recognition.
Yingqian Wang received the B.E. degree in electrical engineering from
Shandong University (SDU), Jinan, China, in 2016, and the M.E. degree in
information and communication engineering from National University of Defense
Technology (NUDT), Changsha, China, in 2018. He is currently pursuing a Ph.D.
degree at the College of Electronic Science and Technology, NUDT. He has
authored several papers in journals and conferences such as TPAMI, TIP, CVPR,
and ECCV. His research interests focus on low-level vision, particularly
light field imaging and image super-resolution.
Yi Hou received the B.S. degree from Wuhan University, China, and the M.S.
and Ph.D. degrees from the National University of Defense Technology, China.
He held a visiting position with the Department of Computing Science,
University of Alberta, Canada, from 2014 to 2016. His main research interests
include robot visual SLAM, visual place recognition, time series
classification, signal processing, computer vision, deep learning, pattern
recognition, and image processing.
# Boost-S: Gradient Boosted Trees for Spatial Data and Its Application to
FDG-PET Imaging Data
Reza Iranzad (Department of Industrial Engineering, University of Arkansas),
Xiao Liu (Department of Industrial Engineering, University of Arkansas),
W. Art Chaovalitwongse (Department of Industrial Engineering, University of Arkansas),
Daniel S. Hippe (Department of Radiology, University of Washington),
Shouyi Wang (Department of Industrial, Manufacturing & Systems Engineering, University of Texas at Arlington),
Jie Han (Department of Industrial, Manufacturing & Systems Engineering, University of Texas at Arlington),
Phawis Thammasorn (Department of Industrial Engineering, University of Arkansas),
Chunyan Duan (Department of Mechanical Engineering, Tongji University),
Jing Zeng (Department of Radiation Oncology, University of Washington),
Stephen R. Bowen (Departments of Radiology and Radiation Oncology, University of Washington)
###### Abstract
Boosting Trees are one of the most successful statistical learning approaches
that involve sequentially growing an ensemble of simple regression trees
(i.e., “weak learners”). This paper proposes a new gradient Boosted Trees
algorithm for Spatial Data (Boost-S) for spatially correlated data with
covariate information. Boost-S integrates the spatial correlation structure
into the classical framework of gradient boosted trees. Each tree is grown by
solving a regularized optimization problem, where the objective function
involves two penalty terms on tree complexity and takes into account the
underlying spatial correlation. A computationally-efficient algorithm is
proposed to obtain the ensemble trees. The proposed Boost-S is applied to the
spatially-correlated FDG-PET (fluorodeoxyglucose-positron emission tomography)
imaging data collected during cancer chemoradiotherapy. Our numerical
investigations successfully demonstrate the advantages of the proposed Boost-S
over existing approaches for this particular application.
Key words: Gradient Boosted Trees, Spatial Statistics, FDG-PET,
Chemoradiotherapy
## Introduction
Spatial data refer to an important type of data which arise in a spatial area
and are often correlated in space. Capturing such correlation is the
centerpiece of statistical analysis of spatial data (Cressie and Wikle, 2011b;
Schabenberger and Gotway, 2005). Applications of statistical spatial data
modeling can be found in a spectrum of scientific and engineering applications
ranging from energy (Ezzat et al., 2019), reliability (Liu et al., 2018c; Fang
et al., 2019), quality and manufacturing (Zang and Qiu, 2019; Wang et al.,
2016; Yue et al., 2020), environmental and natural process (Guinness and
Stein, 2013; Liu et al., 2018a, 2020), medical informatics (Yao et al., 2017;
Yan et al., 2019), etc.
### The Problem Statement
In this paper, we are concerned with the modeling problem:
$Y(\bm{s})=Z(\bm{x}_{\bm{s}})+\varepsilon(\bm{s}),\quad\quad\bm{s}\in\mathcal{S}$
(1)
where $Y(\bm{s})$ represents the observation at a spatial location
$\bm{s}\in\mathcal{S}$, $\bm{x}_{\bm{s}}$ is a vector that collects the
available features at $\bm{s}$, $Z(\bm{x})\equiv\mathbb{E}(Y;\bm{x})$ is the
mean-value function given $\bm{x}$, and $\varepsilon(\bm{s})$ is an isotropic
and weakly stationary spatial noise process with zero-mean and a covariance
$C(h)$, where $C(h)=\text{cov}(Y(\bm{s}),Y(\bm{s}^{\prime}))$ with $h$ being
some distance measure between $\bm{s}$ and $\bm{s}^{\prime}$; for example, the
Euclidean distance.
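To make the model concrete, the generative process in (1) can be simulated directly: pick locations, a mean function $Z$, and an isotropic covariance $C(h)$, then draw correlated noise. The grid size, mean function, and parameter values below are arbitrary illustrations, not choices made in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Spatial locations: a 10x10 regular grid in [0, 1]^2
g = np.linspace(0.0, 1.0, 10)
S = np.array([(a, b) for a in g for b in g])          # (100, 2)
n = S.shape[0]

# Hypothetical mean function Z(x_s); here the feature vector is the location itself
Z = np.sin(2 * np.pi * S[:, 0]) + S[:, 1] ** 2

# Isotropic exponential covariance C(h) = sigma^2 * exp(-h / rho)
sigma2, rho = 0.5, 0.2
h = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=-1)  # pairwise Euclidean distances
C = sigma2 * np.exp(-h / rho)

# Draw correlated noise eps ~ N(0, C) and form the observations Y(s) = Z(x_s) + eps(s)
eps = rng.multivariate_normal(np.zeros(n), C)
Y = Z + eps

print(Y.shape)  # (100,)
```

Note that $C(0)=\sigma^2$ is the marginal variance and that the correlation decays with distance at rate $\rho$, which is the weak stationarity and isotropy assumed for $\varepsilon(\bm{s})$.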
The goal of this paper is to tackle the modeling problem (1) by devising a new
additive-tree-based method that approximates $Z(\bm{x})$ as follows:
$Z(\bm{x})\approx\phi(\bm{x})=\sum_{k=1}^{K}f_{k}(\bm{x}),\quad\quad f_{k}\in\mathcal{F}$ (2)
where $\\{f_{k}(\bm{x})\\}_{k=1}^{K}$ represents an ensemble of binary
decision trees, and $\mathcal{F}$ represents the tree space.
Clearly, the problem hinges on how the ensemble of trees
$\\{f_{k}(\bm{x})\\}_{k=1}^{K}$ can be grown while accounting for the
spatial correlation of $Y(\bm{s})$. In particular, we exploit the
idea of gradient boosting which involves sequentially growing an ensemble of
simple regression trees. Mathematically, each tree is added to the ensemble by
solving an optimization problem with a carefully chosen objective function.
This goal boils down to three fundamental tasks to be addressed in this paper:
(i) formulate a (regularized) optimization problem that balances the
complexity of individual trees and takes into account the spatial correlation
associated with $Y(\bm{s})$; (ii) devise an algorithm that solves the
regularized optimization problem in a computationally efficient manner (so
that trees can be added sequentially to the ensemble); and (iii) validate the
performance of the proposed method on real datasets.
### A Motivating Application
A motivating application is first presented. Fluorodeoxyglucose-Positron
Emission Tomography (FDG-PET) has been widely used in cancer diagnosis and
treatment to detect metabolically active malignant lesions, and plays a
critical role in quantitatively assessing and monitoring tumor response to
treatment. For illustrative purposes, Figure 1 shows the Standardized Uptake
Values (SUV) obtained from the FDG-PET imaging of a patient. In this figure,
the top row shows the baseline image taken before the radiotherapy (i.e., Pre-
RT), while the bottom row shows the FDG PET/CT imaging 3 weeks after
radiotherapy (i.e., Mid-RT image). In this case, it is possible to observe the
shrinkage of the tumor, indicating the effectiveness of treatment. Hence, the
difference between the Mid-RT and Pre-RT images can be naturally used to
quantify the tumor’s spatial response to treatment. Figure 1 also suggests
that a tumor typically presents spatially-correlated and spatially-
heterogeneous responses. Some areas of a tumor may respond well to treatment
while some areas appear to be less responsive. The importance of capturing
such spatially-varying and spatially-correlated responses has been discussed
in Bowen et al. (2019).
Figure 1: An illustrative example of spatially-correlated Pre-RT and Mid-RT
FDG-PET images
Figure 2: 3D Pre-RT and Mid-RT SUV images for two patients
A tumor is a three-dimensional object. Figure 2 shows the SUV in three-
dimensional spaces for another two patients. The top row shows the Pre-RT
data, while the bottom row shows the Mid-RT data collected 3 weeks after the
radiotherapy. It is seen that,
* •
The overall SUV level decreases 3 weeks after the radiotherapy;
* •
The change in SUV (i.e., tumor’s response to radiotherapy) varies over space.
In particular, the SUV level gradually decreases from the centroid to the
surface of a tumor.
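The second observation suggests a simple geometric covariate for $\bm{x}_{\bm{s}}$: the normalized distance of each voxel from the tumor centroid. A minimal sketch (the function and its normalization are our own illustration, not a feature definition taken from the paper):

```python
import numpy as np

def centroid_distance_feature(mask):
    """Normalized distance of each tumor voxel from the tumor centroid.

    `mask` is a boolean 3D array marking voxels inside the tumor. Returns the
    voxel coordinates and a feature in [0, 1]: 0 at the centroid, 1 at the
    farthest voxel.
    """
    coords = np.argwhere(mask)               # (n_voxels, 3) indices inside the tumor
    centroid = coords.mean(axis=0)
    d = np.linalg.norm(coords - centroid, axis=1)
    return coords, d / max(d.max(), 1e-9)    # guard against a single-voxel tumor
```

Such a covariate lets a tree-based model express the centroid-to-surface gradient in SUV response without assuming any particular functional form.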
Hence, if we let $Y(\bm{s})$ in (1) represent the change in SUV level at
location $\bm{s}$ within the spatial domain $\mathcal{S}$ occupied by the
tumor, then, a statistical spatial model is needed that enables clinicians to
understand how $Y(\bm{s})$ depends on a vector of features (including
geometric features, therapy dosage, and so on) for treatment optimization and
control. However, in practice, the complex relationship between the response
$Y$ and features $\bm{x}$ can hardly be directly specified. The inevitably
nonlinear and interaction effects motivate us to investigate the non-
parametric tree-based methods in this paper. More detailed discussions and
rationales are provided in the next section.
### Literature Review and Contributions
The pioneering work of statistical modeling of spatial data can be found in
Banerjee et al. (2004), Schabenberger and Gotway (2005) and Cressie and Wikle
(2011a). The mainstay approach, also known as the geostatistical paradigm,
models spatial processes by random fields with fully specified covariance
functions. A linear relationship between covariates and process mean is often
assumed, and the model parameters can be obtained through Generalized Least
Squares or Maximum Likelihood Estimation. The covariance structures are
typically derived from moments of multivariate probability distributions and
motivated by considerations of mathematical convenience (e.g., stationary,
isotropic, space-time separable, etc.). Hence, there have been prolonged
research interests to provide flexible and effective ways to construct non-
stationary covariance functions (Cressie and Huang, 1999; Gneiting, 2002;
Fuentes et al., 2005; Gneiting et al., 2006; Ghosh et al., 2010; Reich et al.,
2011; Lenzi et al., 2019). The geostatistical modeling paradigm, which heavily
relies on random fields, becomes less practical for large problems and
approximations are commonly used, such as the Gaussian Markov Random Fields
representation (Lindgren and Rue, 2011), Nearest-Neighbor Gaussian Process
(Datta et al., 2016; Banerjee, 2017), kernel convolution (Higdon, 1998), low-
rank representation (Cressie and Johannesson, 2002; Nychka and Wikle, 2002;
Banerjee et al., 2008), the approximation of likelihood functions (Stein et
al., 2004; Fuentes, 2007; Guinness and Fuentes, 2015), Bayesian inference for
latent Gaussian models based on the integrated nested Laplace approximations
(Rue et al., 2009; R-INLA, 2019), Lagrangian spatio-temporal covariance
function (Gneiting et al., 2006), matrix-free state-space model (Mondal and
Wang, 2019), Vecchia approximations of Gaussian processes (Katzfuss et al.,
2020), as well as the multi-resolution approximation (M-RA) of Gaussian
processes observed at irregular spatial locations (Katzfuss, 2017).
Other spatial models have also been proposed in the literature. Notably, the
Markov Random Fields (MRF) model focuses on the (conditional) distribution of
the quantity at a particular spatial location given the neighboring
observations, such as the auto Poisson model (Besag, 1974), Conditional
Autoregressive Model (Carlin and Banerjee, 2003; Liu et al., 2018b), etc. The
Spatial Moving Average (SMA) approach models a spatial process through a
process convolution with a convolution kernel (Higdon, 1998; Brown et al.,
2000; Liu et al., 2016). There is also a large body of literature focusing on
spatio-temporal data. For example, the Hierarchical Dynamical Spatio-Temporal
Models (DSTM) (Wikle and Cressie, 1999; Berliner, 2003; Cressie and Wikle,
2011a; Stroud et al., 2010; Katzfuss et al., 2020), and the SPDE-based
modeling approach that aims to integrate governing physics into statistical
spatio-temporal models (Brown et al., 2000; Hooten and Wikle, 2008; Stroud et
al., 2010; Sigrist et al., 2015; Liu et al., 2020). A summary of the latest
advances in the spatial modeling with SPDE can be found in Cressie and Wikle
(2011a) and Krainski et al. (2019).
For spatial models in the form of (1), one challenge arising from practice is
to specify the relationship between the covariates $\bm{x}$ and response $Y$,
which can rarely be adequately captured by linear models. For the FDG-PET
imaging data presented in Section 1.2, for example, both non-linear and
interaction effects are expected between tumor’s response and covariates (such
as treatment, geometric features of the tumor, etc.). Hence, non-parametric
approaches, especially the additive-tree-based approaches, provide some major
modeling advantages. Constructing a tree does not require parametric
assumptions on the complex relationship between features and event processes.
An individual tree partitions the feature space, and a predicted value is
assigned to the observations in each resulting region. A sum-of-trees model
consists of multivariate components that
effectively handle the complex interaction effects among features (Chipman et
al., 2010). Feature selection is also possible under the framework of
additive-tree-based models (Hastie et al., 2009; Liu and Pan, 2020).
Among the additive-tree-based methods, gradient boosted trees have become one
of the most successful statistical learning approaches over the last two
decades, generating 17 winning solutions among 29 Kaggle challenges in 2015
(Chen and Guestrin, 2016). Schapire (1999) introduced the first applicable
Boosting method. The main idea of gradient boosting hinges on fitting a
sequence of correlated “weaker learners” (such as simple trees). Each tree
explains only a small amount of variation not captured by previous trees
(Hastie et al., 2009; Chipman et al., 2010). However, many existing boosting
trees, such as XGBoost, do not consider the possible spatial correlation when
they are applied to spatial data. To our best knowledge, Sigrist (2020)
recently proposed the only boosted-trees-based approach for Gaussian Process
and mixed effects model (which captures correlated errors). Such a method
minimizes the negative log-likelihood at each tree node splitting, and is
available in the R package, GPBoost. The package also provides a range of
regularization and tuning parameter options. In our paper, on the other hand,
regularizations have been directly added to the objective function in order to
control the complexity of individual trees (i.e., number of leaves and leaf
weights), leading to a regularized optimization problem at each tree node
splitting. The regularization terms are motivated by the fundamental idea
behind boosting trees which involves a sequence of correlated simple trees.
This idea has been adopted by XGBoost (Chen and Guestrin, 2016), while an
alternative approach adopted by the Bayesian Additive Regression Trees (BART)
involves assigning prior distributions on parameters characterizing the tree
structure (Chipman et al., 2010). Hence, the main contribution of the paper is
to propose a computationally-efficient gradient boosting method for growing
the ensemble trees $\\{f_{k}(\bm{x})\\}_{k=1}^{K}$ in (2) for spatially-
correlated data. Each tree is grown by solving a regularized optimization
problem, where the objective function involves regularizations on tree
complexity and takes into account the underlying spatial correlation. The
proposed algorithm is referred to as Boost-S, which stands for Gradient
Boosted Trees for Spatial Data with covariate information (Boost-S). Boost-S
integrates the spatial correlation structure into the classical framework of
gradient boosted trees. In Statistics, Ordinary Least Squares is extended to
Generalized Least Squares for correlated data. An analogous notion can be
formulated here when extending the classical framework of gradient boosted
trees to spatially-correlated data, giving rise to the proposed Boost-S.
The rest of this paper is structured as follows: Section 2 presents the
technical details of the Boost-S algorithm. The applications and numerical
illustrations of Boost-S are presented in Section 3. Section 4 concludes the
paper.
## Boost-S: Technical Details
This section provides the technical details behind Boost-S. Suppose that data
are collected at $n$ spatial locations,
$\bm{s}_{1},\bm{s}_{2},...,\bm{s}_{n}$. At each location, we observe a
response $y$ and an $m$-dimensional feature vector
$\bm{x}=(x_{1},x_{2},...,x_{m})^{T}$. From (1) and (2), Boost-S aims to
construct an ensemble of binary trees, $\\{f_{k}(\bm{x})\\}_{k=1}^{K}$, using
gradient boosting such that
$Y(\bm{s})=\sum_{k=1}^{K}f_{k}(\bm{x})+\varepsilon(\bm{s}),\quad\quad\bm{s}\in\\{\bm{s}_{1},\bm{s}_{2},...,\bm{s}_{n}\\}.$
(3)
Let $\bm{Y}=(Y(\bm{s}_{1}),Y(\bm{s}_{2}),...,Y(\bm{s}_{n}))^{T}$ be a
multivariate random vector representing the responses from the $n$ spatial
locations, and let $\bm{f}^{(k)}$ be a vector of predicted values at the $n$
spatial locations generated from the $k$th tree ($k\geq 0$), we re-write (3)
as
$\bm{Y}=\sum_{k=0}^{K}\bm{f}^{(k)}+\bm{\varepsilon},\quad\quad\bm{\varepsilon}\sim\mathcal{N}(\bm{0},\bm{\Sigma}_{\bm{\theta}}),$
(4)
where $\bm{f}^{(0)}$ is a vector of zeros corresponding to the initial
condition when no tree has been grown.
### A Regularized Problem
This subsection presents the detailed tree structures and formulates a
regularized optimization problem that leads to a sequence of ensemble trees.
In (3), each tree $f_{k}$ is a Classification and Regression Tree (CART) that
resides in a binary tree space
$\mathcal{F}=\left\\{f(\bm{x})=w_{q(\bm{x})}\right\\}$ (5)
where $q:\mathcal{R}^{m}\rightarrow\\{1,...,T\\}$ and $w\in\mathcal{R}^{T}$. Here, $T$
represents the number of tree leaves (i.e., terminal nodes), $w$ is the value
on a tree leaf (i.e., leaf weight), and $q$ determines the tree structure
(i.e., a mapping that links a feature vector $\bm{x}$ to a tree leaf).
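As a small concrete illustration of the space (5), a two-leaf CART is nothing more than a structure map $q$ together with a weight vector $w$; the sketch below (our own, with made-up threshold and weights) mirrors that definition:

```python
import numpy as np

def make_tree(feature, threshold, w):
    """A two-leaf CART f(x) = w_{q(x)}: q maps a feature vector to a leaf
    index and w holds the leaf weights (all values here are hypothetical)."""
    def q(x):                         # tree structure: which leaf does x land on?
        return 0 if x[feature] <= threshold else 1
    return lambda x: w[q(x)]          # f(x) = w_{q(x)}

f = make_tree(feature=0, threshold=1.5, w=np.array([-0.3, 0.8]))
```

Evaluating `f` on a feature vector simply routes it to a leaf and returns that leaf's weight.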
Suppose that $k-1$ trees have been grown ($k\geq 1$). Then, the
(ensemble) predicted values at the $n$ spatial locations are given by
$\hat{\bm{y}}^{(k-1)}=\sum_{j=0}^{k-1}\bm{f}^{(j)}$ from the $k-1$ trees. An
immediate next step is to construct the $k$th tree and add the new tree to the
ensemble such that
$\hat{\bm{y}}^{(k)}=\hat{\bm{y}}^{(k-1)}+\bm{f}^{(k)}=\sum_{j=0}^{k}\bm{f}^{(j)}$.
For binary trees, this involves finding the optimal split features as well as
the split points for the $k$th tree. This task can be formulated as a
regularized optimization problem:
$\min_{\bm{f}^{(k)}}\left\\{\ell(\hat{\bm{y}}^{(k-1)}+\bm{f}^{(k)})+\Omega(\bm{f}^{(k)})\right\\}$
(6)
where $\ell$ is a loss function that depends on the output of the $k$th tree,
and the regularization $\Omega$ is given by:
$\Omega(\bm{f})=\gamma T+\frac{1}{2}\lambda\|\bm{w}\|^{2}.$ (7)
The first term, $\gamma T$, regularizes the depth of the tree (by penalizing
the total number of leaves), while the second term is used to regularize the
contribution of tree $k$ to the ensemble predictions (by penalizing the
weights on tree leaves). Recall that, the fundamental idea behind boosting
trees is to construct a sequence of correlated “weaker learners” (i.e., simple
trees), where each “weaker learner” is added to explain the unexplained
variation by existing trees in the ensemble (Chipman et al., 2010). Hence, the
regularization (7) effectively controls the complexity of individual trees. In
fact, it is worth noting that the regularization also helps to prevent the
well-known overfitting issue of boosting trees. When a sufficient number of
trees have been included in the ensemble, the penalty of adding one more tree
may dominate the benefit (of adding more trees), which stops the algorithm
from growing more trees.
Because the multivariate response $\bm{Y}$ is spatially correlated with the
covariance matrix $\bm{\Sigma}_{\bm{\theta}}$, a sensible choice for the loss
function $\ell$ is the squared Mahalanobis length of the residual vector:
$\begin{split}\ell(\hat{\bm{y}}^{(k-1)}+\bm{f}^{(k)})&=\ell(\hat{\bm{y}}^{(k)})\\\
&\equiv(\bm{y}-\hat{\bm{y}}^{(k)})^{T}\bm{\Sigma}^{-1}_{\theta}(\bm{y}-\hat{\bm{y}}^{(k)})\end{split}$
(8)
and (6) can thus be written as
$\min_{\bm{f}^{(k)}}\left\\{(\bm{y}-\hat{\bm{y}}^{(k)})^{T}\bm{\Sigma}^{-1}_{\theta}(\bm{y}-\hat{\bm{y}}^{(k)})+\gamma
T+\frac{1}{2}\lambda\|\bm{w}\|^{2}\right\\}.$ (9)
Note that, (9) above extends the classical XGBoost which does not consider the
correlation among the elements of $\bm{Y}$ (Chen and Guestrin, 2016). The
extension made by this paper is in analogy to the extension from Ordinary
Least Squares to Generalized Least Squares. However, such an extension
requires new algorithms for the problem (9) to be efficiently solved.
The regularized optimization above is a formidable combinatorial problem
that can hardly be solved directly. Hence, we approximate the
objective function $\ell(\hat{\bm{y}}^{(k)})+\Omega(\bm{f}^{(k)})$ by a
second-order multivariate Taylor expansion:
$\begin{split}\ell(\hat{\bm{y}}^{(k)})&+\Omega(\bm{f}^{(k)})\\\
&\approx\ell(\hat{\bm{y}}^{(k-1)})+\bm{g}^{T}\bm{f}^{(k)}+\frac{1}{2}(\bm{f}^{(k)})^{T}\bm{H}\bm{f}^{(k)}+\Omega(\bm{f}^{(k)})\end{split}$
(10)
where $\bm{g}$ is the column gradient vector of the loss function with its
$i$th element given by
$g_{i}=\partial\ell(\hat{\bm{y}}^{(k-1)})/\partial\hat{y}_{i}^{(k-1)}$, and
$\bm{H}$ is the Hessian matrix with its $(i,j)$th entry being given by
$h_{i,j}=\partial^{2}\ell(\hat{\bm{y}}^{(k-1)})/\partial\hat{y}_{i}^{(k-1)}\partial\hat{y}_{j}^{(k-1)}$.
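For the Mahalanobis loss (8), these derivatives have closed forms: $\bm{g}=-2\bm{\Sigma}^{-1}_{\bm{\theta}}(\bm{y}-\hat{\bm{y}}^{(k-1)})$ and $\bm{H}=2\bm{\Sigma}^{-1}_{\bm{\theta}}$. A minimal numpy sketch (ours, not the authors' code) computes both:

```python
import numpy as np

def mahalanobis_grad_hess(y, y_hat, Sigma_inv):
    """Gradient and Hessian of l(yhat) = (y - yhat)^T Sigma^{-1} (y - yhat):
    g = -2 Sigma^{-1} (y - yhat),  H = 2 Sigma^{-1} (Sigma symmetric)."""
    r = y - y_hat
    return -2.0 * Sigma_inv @ r, 2.0 * Sigma_inv
```

Unlike the squared-error loss of classical XGBoost, the Hessian here is a dense matrix rather than a diagonal one, which is what couples the leaf weights in (12) below.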
Because the first term on the right-hand-side of (10) is a constant, it is
sufficient to minimize the sum of the remaining three terms:
$L^{(k)}=\bm{g}^{T}\bm{f}^{(k)}+\frac{1}{2}(\bm{f}^{(k)})^{T}\bm{H}\bm{f}^{(k)}+\Omega(\bm{f}^{(k)}).$
(11)
Note that, for any given tree structure, it is possible to define a set
$I_{p}=\\{i\mid q(x_{i})=p\\}$ that consists of all samples that fall into
leaf $p$. Then, we let $\bm{g}_{p}$ be a column vector that only retains the
elements in $\bm{g}$ corresponding to samples in $I_{p}$, and similarly, let
$\bm{H}_{p,q}$ be a matrix by only keeping the rows and columns of $\bm{H}$
corresponding to samples in $I_{p}$ and $I_{q}$, respectively.
Figure 3 provides an illustration of how $\bm{g}_{p}$ and $\bm{H}_{p,q}$ are
constructed. Consider a simple example where $n=7$ (i.e., only 7 samples are
available), then, the dimensions of the gradient vector $\bm{g}$ and the
Hessian matrix $\bm{H}$ are $7\times 1$ and $7\times 7$, respectively. Suppose
that $I_{p}=\\{1,2,6\\}$ and $I_{q}=\\{3,4\\}$. Then, the vector $\bm{g}_{p}$ consists
of the 1st, 2nd and the 6th element in $\bm{g}$, and the matrix $\bm{H}_{p,q}$
is a $3\times 2$ matrix that retains the entries $\bm{H}$ located at the
intersections of rows 1, 2 and 6, and columns 3 and 4.
Figure 3: An illustration of the vector $\bm{g}_{p}$ and the matrix
$\bm{H}_{p,q}$
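In numpy, the extraction illustrated in Figure 3 is plain fancy indexing; the sketch below reproduces the $n=7$ example, shifting indices because the paper counts samples from 1:

```python
import numpy as np

g = np.arange(1.0, 8.0)             # a 7x1 gradient vector: 1,...,7
H = np.arange(49.0).reshape(7, 7)   # a 7x7 Hessian stand-in
I_p = np.array([1, 2, 6]) - 1       # paper's 1-based {1,2,6} -> 0-based
I_q = np.array([3, 4]) - 1          # paper's 1-based {3,4}
g_p = g[I_p]                        # the 1st, 2nd and 6th elements of g
H_pq = H[np.ix_(I_p, I_q)]          # rows {1,2,6} x columns {3,4}: 3x2
```

`np.ix_` builds the open mesh needed to keep exactly the row/column intersections, matching the $3\times 2$ submatrix described above.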
Then, for any given tree structure $q(\bm{x})$, we re-write (11) as follows:
$\displaystyle L^{(k)}=\sum_{p=1}^{T}\bm{g}^{T}_{p}\bm{w}_{p}+\frac{1}{2}\left\\{\sum_{p=1}^{T}\bm{w}^{T}_{p}\bm{H}_{p,p}\bm{w}_{p}\right\\}+\frac{1}{2}\left\\{\sum_{(p,q)\in C_{2}^{T}}\bm{w}^{T}_{p}\bm{H}_{p,q}\bm{w}_{q}\right\\}+\Omega(\bm{f}^{(k)})$ (12)
where $C_{2}^{T}$ denotes the set of 2-combinations of the $T$ leaves,
$\bm{w}_{p}=w_{p}\bm{1}_{|I_{p}|}$ with
$w_{p}$ and $\bm{1}_{|I_{p}|}$ respectively being the weight on tree leaf $p$
and a column vector of ones of dimension $|I_{p}|$, and $|\cdot|$ represents
the cardinality of a set.
Substituting (7) into (12) yields
$\begin{split}L^{(k)}=&\gamma T+\sum_{p=1}^{T}\left\\{w_{p}\sum_{i\in
I_{p}}g_{i}+\frac{1}{2}\left[\lambda+\sum_{(i,j)\in
I_{p}}h_{i,j}\right]w_{p}^{2}\right\\}\\\
&+\frac{1}{2}\sum_{p=1}^{T}\left\\{\sum_{q=1;q\neq
p}^{T}\left[\lambda+\sum_{i\in I_{p};j\in
I_{q}}h_{i,j}\right]w_{p}w_{q}\right\\}.\end{split}$ (13)
Taking the partial derivatives of (13) with respect to $\\{w_{p}\\}_{p=1}^{T}$
leads to a system of equations, which provides the key computational
advantage: given any tree structure, the optimal weights
$\bm{w}=\\{w_{1},w_{2},...,w_{T}\\}$ can be quickly found by solving the
linear system:
$\Xi\bm{w}=-\tilde{\bm{g}}$ (14)
where $\Xi$ is a $T\times T$ matrix with its $p$th row given by
$\frac{1}{2}\left(\lambda+\sum_{i\in I_{p};j\in I_{1}}h_{i,j}\right),\frac{1}{2}\left(\lambda+\sum_{i\in I_{p};j\in I_{2}}h_{i,j}\right),...,\left(\lambda+\sum_{(i,j)\in I_{p}}h_{i,j}\right),...,\frac{1}{2}\left(\lambda+\sum_{i\in I_{p};j\in I_{T}}h_{i,j}\right)$
(15)
and $\tilde{\bm{g}}$ is a $T\times 1$ vector with its $p$th element given by
$\sum_{i\in I_{p}}g_{i}$.
Obtaining the linear system (14) plays an extremely important role in
searching for the optimal tree: given any candidate tree structure, it is
possible to quickly and accurately find the optimal weights $\bm{w}$ on the
leaf nodes by solving (14) using least squares, i.e., no numerical search is
required. Substituting the optimal $\bm{w}$ into (11) immediately yields
the value of the objective function $L^{(k)}$.
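The weight computation can be sketched as follows, assuming $\bm{g}$, $\bm{H}$ and a leaf partition are available; this is a minimal illustration of (14)-(15), not the authors' implementation:

```python
import numpy as np

def leaf_weights(g, H, leaves, lam):
    """Solve the linear system (14) for the optimal leaf weights of a given
    tree structure; `leaves` lists the sample indices I_1,...,I_T per leaf."""
    T = len(leaves)
    g_tilde = np.array([g[I].sum() for I in leaves])   # p-th element of g~
    Xi = np.empty((T, T))
    for p in range(T):
        for q in range(T):
            s = lam + H[np.ix_(leaves[p], leaves[q])].sum()
            Xi[p, q] = s if p == q else 0.5 * s        # rows of Xi per (15)
    return np.linalg.solve(Xi, -g_tilde)               # Xi w = -g~
```

When $\bm{H}$ is diagonal (no spatial correlation) and $\lambda=0$, each weight reduces to the familiar ratio $-\sum_{i\in I_p}g_i/\sum_{i\in I_p}h_{i,i}$ of classical XGBoost.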
### The Sequential Update of the Unknown Covariance Matrix
Constructing the linear system (14) and evaluating the objective function
$L^{(k)}$ require a known covariance matrix of the errors,
$\bm{\Sigma}_{\bm{\theta}}$. However, $\bm{\Sigma}_{\bm{\theta}}$ is not known
and needs to be estimated before the first tree ($k=1$), or any subsequent
tree ($k>1$), can be constructed. In this section, we describe how
$\bm{\Sigma}_{\bm{\theta}}$ can be consistently estimated before the $k$th
tree is constructed, given the outputs from the first $k-1$ trees.
Suppose that $k-1$ trees have been constructed ($k\geq 1$), and let
$\bm{r}^{(k-1)}=\bm{Y}-\sum_{j=0}^{k-1}\bm{f}^{(j)}$ be the residual vector. It follows
from (4) that $\bm{r}^{(k-1)}$ is Gaussian with the covariance
$\bm{\Sigma}_{\bm{\theta}}$. Note that, the mean of $\bm{r}^{(k-1)}$ may not
even be close to zero when $k$ is small, i.e., when there are not sufficient
trees in the ensemble to well capture the mean of $\bm{Y}$. In this case, we
model $\bm{r}^{(k-1)}$ by a Locally Weighted Mixture of Linear Regressions
(LWMLR) (Stroud et al., 2001):
$r^{(k-1)}(\bm{s})=\sum_{j=1}^{J}\pi_{j}(\bm{s})\bm{k}_{j}(\bm{s})\bm{\beta}_{j}+\varepsilon(\bm{s}),\quad\quad
k\geq 1$ (16)
where $\bm{k}_{j}(\bm{s})=\\{k_{j1}(\bm{s}),...,k_{jq}(\bm{s})\\}$ is a set of
spatial basis functions, $\bm{\beta}_{j}=(\beta_{j1},...,\beta_{jq})$ is a
vector of unknown coefficients, $\pi_{j}(\bm{s})$ is a Gaussian kernel given
as follows:
$\pi_{j}(\bm{s})\propto|\bm{V}_{j}|^{-1/2}\exp\left\\{-\frac{1}{2}(\bm{s}-\bm{\mu}_{j})^{T}\bm{V}_{j}^{-1}(\bm{s}-\bm{\mu}_{j})\right\\}$
(17)
Note that, we may re-write (16) as a linear model,
$\bm{r}^{(k-1)}=\bm{X}\bm{B}+\bm{\varepsilon}$, where
$\bm{X}=(\text{diag}(\pi_{1})\bm{X}_{1},...,\text{diag}(\pi_{J})\bm{X}_{J})$,
$\bm{X}_{j}=(\bm{k}_{j}^{T}(\bm{s}_{1}),\bm{k}_{j}^{T}(\bm{s}_{2}),...,\bm{k}_{j}^{T}(\bm{s}_{n}))^{T}$,
and $\bm{B}=(\bm{\beta}_{1},\bm{\beta}_{2},...,\bm{\beta}_{J})^{T}$. Then, it
is possible to obtain a consistent estimate of $\bm{\Sigma}_{\bm{\theta}}$,
$\hat{\bm{\Sigma}}_{\bm{\theta}}^{(k-1)}$, using Feasible Generalized
Least Squares (FGLS), before the $k$th tree is constructed.
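The FGLS step can be sketched generically: fit by OLS, estimate the covariance from the OLS residuals, then refit by GLS. The covariance-fitting routine `cov_from_resid` below is a placeholder assumption standing in for the parametric spatial covariance fit used in the paper:

```python
import numpy as np

def fgls(X, r, cov_from_resid):
    """Feasible GLS sketch: OLS fit -> covariance estimate from the OLS
    residuals -> GLS refit. `cov_from_resid` is a user-supplied routine
    (a placeholder here) fitting a parametric spatial covariance."""
    beta_ols, *_ = np.linalg.lstsq(X, r, rcond=None)
    e = r - X @ beta_ols                       # OLS residuals
    Sigma = cov_from_resid(e)                  # estimated covariance matrix
    Si = np.linalg.inv(Sigma)
    beta_gls = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ r)
    return beta_gls, Sigma
```

With an identity covariance estimate, the GLS refit collapses back to OLS, which gives a quick sanity check.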
### The Boost-S Algorithm
Following the discussions above, Figure 4 provides a high-level illustration
of the flow of the Boost-S algorithm. At the initialization stage, we obtain
the initial estimate of the covariance matrix
$\hat{\bm{\Sigma}}_{\bm{\theta}}^{(0)}$. Then, by solving the regularized
optimization problem (6), a new tree $k$ is constructed and added to the
ensemble. Next, the estimate of the covariance matrix is updated
$\hat{\bm{\Sigma}}_{\bm{\theta}}^{(k)}$ before tree $k+1$ can be constructed.
The steps are repeated until $K$ trees have been grown in the ensemble. The
Boost-S algorithm is formalized by Algorithm 1.
Set the values for $\lambda$, $\gamma$ and $K$
Let k = 0, $\bm{f}^{(0)}=\bm{0}$ and $\bm{r}^{(0)}=\bm{y}$
Obtain the initial estimate, $\hat{\bm{\Sigma}}_{\bm{\theta}}^{(0)}$, from
(16) using FGLS
for _k=1,…,K_ do
Grow tree $k$ by repeating the following steps:
(i) Given the current tree topology, generate a set of all possible new tree
structures by splitting a tree node based on candidate split variables and
candidate split values.
(ii) For each new tree structure, obtain the weights $\bm{w}$ on the leaf
nodes by solving the linear system $\Xi\bm{w}=-\tilde{\bm{g}}$, and evaluate
the objective function $L^{(k)}$ in (11) for the new topology.
(iii) If there exists at least one new tree structure that further reduces the
objective function (over the existing tree structure), retain the new topology
that generates the greatest reduction and go to (i); otherwise, terminate the
tree growing process for tree $k$, and go to the next step.
(iv) Update the estimate $\hat{\bm{\Sigma}}_{\bm{\theta}}^{(k)}$ using FGLS.
end for
Algorithm 1 Boost-S: Gradient Boosted Trees for Spatial Data
Figure 4: An illustration of the flow of the Boost-S algorithm
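To make the flow concrete, the following self-contained sketch implements a stripped-down Boost-S with depth-1 trees (stumps) on a single feature and a fixed, known covariance matrix; the full algorithm grows deeper trees and re-estimates $\bm{\Sigma}_{\bm{\theta}}$ by FGLS at every iteration, so this is only an illustration of the mechanics, not the authors' code:

```python
import numpy as np

def _solve(g, H, leaves, lam, gamma):
    """Optimal leaf weights via (14) and the objective value (11)."""
    T = len(leaves)
    g_tilde = np.array([g[I].sum() for I in leaves])
    Xi = np.empty((T, T))
    for p in range(T):
        for q in range(T):
            s = lam + H[np.ix_(leaves[p], leaves[q])].sum()
            Xi[p, q] = s if p == q else 0.5 * s
    w = np.linalg.solve(Xi, -g_tilde)
    f = np.empty(len(g))
    for p, I in enumerate(leaves):
        f[I] = w[p]                  # broadcast each leaf weight to its samples
    L = g @ f + 0.5 * f @ (H @ f) + gamma * T + 0.5 * lam * (w @ w)
    return f, L

def boost_s_stumps(x, y, Sigma, K=30, lam=0.05, gamma=4.25):
    """Stripped-down Boost-S: depth-1 trees on one feature, fixed Sigma."""
    Si = np.linalg.inv(Sigma)
    y_hat = np.zeros_like(y)
    for _ in range(K):
        g = -2.0 * Si @ (y - y_hat)              # gradient of the loss (8)
        H = 2.0 * Si                             # its Hessian
        _, L_root = _solve(g, H, [np.arange(len(y))], lam, gamma)
        best_f, best_L = None, L_root            # a split must beat the root
        for c in np.unique(x)[:-1]:              # candidate split points
            leaves = [np.where(x <= c)[0], np.where(x > c)[0]]
            f, L = _solve(g, H, leaves, lam, gamma)
            if L < best_L:
                best_f, best_L = f, L
        if best_f is None:                       # splitting no longer pays: stop
            break
        y_hat = y_hat + best_f
    return y_hat
```

The early stop mirrors the behavior reported in Section 3: once the penalty $\gamma T$ outweighs the reduction in the Mahalanobis loss, new trees degenerate and the ensemble stops growing.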
## Illustration of Boost-S on the FDG-PET Imaging Data
This section re-visits the motivating application presented in Section 1.2. We
apply Boost-S to real datasets and compare the performance of Boost-S to that
of existing approaches.
### Data
The data used in this section are obtained from 25 patients diagnosed with
locally advanced and surgically unresectable non-small cell lung cancer (NSCLC)
who enrolled onto the FLARE-RT clinical trial (NCT02773238). For each patient,
this clinical trial data set contains geographic features, dosage, Pre-RT and
Mid-RT SUV levels. As discussed in Section 1.2, the goal is to model the
difference between the Pre-RT SUV (before treatment) and Mid-RT SUV (during
third week of treatment course), which helps clinicians further optimize or
control the treatment plans.
Let $Y(\bm{s})$ represent the ratio between Mid-RT SUV and Pre-RT SUV at
location $\bm{s}$ (a voxel in the image) within the spatial domain
$\mathcal{S}$ occupied by the tumor, i.e.,
$Y(\bm{s})=\frac{\text{Mid-RT SUV}}{\text{Pre-RT SUV}}.$ (18)
If the treatment is effective, the Mid-RT SUV is expected to be lower than the
Pre-RT SUV level. Hence, a lower ratio indicates more effective treatment.
### Application of Boost-S
We first demonstrate the application of Boost-S on the data collected from one
of the 25 patients. The PET scan for this patient has 3110 voxels (i.e., the
number of spatial locations). We randomly split the data into two parts: 15%
for training and 85% for testing. Such a low training-testing ratio is
chosen to demonstrate the out-of-sample prediction capability of Boost-S
constructed from a relatively small training dataset.
To obtain the initial estimate of the covariance matrix
$\hat{\bm{\Sigma}}_{\bm{\theta}}^{(0)}$, the FGLS is used to solve the linear
model (16) in Algorithm 1. Before the FGLS is performed, one needs to first
choose a parametric spatial covariance function $c(\cdot)$ of the process
$\varepsilon$ in (16). Figure 5 shows both the empirical semivariance and the
fitted semivariance using FGLS assuming the Gaussian covariance function. The
sill, range and nugget effect can be clearly seen from Figure 5, and the
Gaussian covariance function appears to be an appropriate choice.
Figure 5: Exploratory analysis on the covariance structure: plot of the
empirical and fitted semivariance using FGLS
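The empirical semivariance in Figure 5 can be computed directly from the residuals and voxel coordinates; a basic numpy sketch follows (the binning choices are ours, not the paper's):

```python
import numpy as np

def empirical_semivariance(s, r, bins):
    """Empirical semivariogram: for each distance bin, the average of
    0.5*(r_i - r_j)^2 over pairs (i, j) whose separation falls in the bin.
    s is an (n x d) array of locations, r a length-n residual vector."""
    dist = np.linalg.norm(s[:, None, :] - s[None, :, :], axis=-1)
    half_sq = 0.5 * (r[:, None] - r[None, :]) ** 2
    iu = np.triu_indices(len(r), k=1)            # count each pair once
    dist, half_sq = dist[iu], half_sq[iu]
    which = np.digitize(dist, bins)
    return np.array([half_sq[which == b].mean() if np.any(which == b) else np.nan
                     for b in range(1, len(bins))])
```

Plotting this curve against the bin midpoints gives the empirical points to which the parametric (here Gaussian) covariance function is fitted.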
Algorithm 1 requires the tuning parameters $\lambda$ and $\gamma$ to be pre-
specified. The choice of these two parameters affects both the depth and
contribution of individual trees to the ensemble prediction, which in turn
influences the total number of trees in the ensemble. A common strategy
suggests that we explore the suitable values for $\lambda$ and $\gamma$ while
leaving $K$ as the primary parameter (Hastie et al., 2009). Although it is
theoretically possible to perform a grid search for the best combinations of
$\lambda$ and $\gamma$ on a two-dimensional space, such an approach may be
neither practical nor necessary when it is computationally intensive to
run Boost-S on big datasets. Hence, we resort to a powerful tool in computer
experiments—the space-filling designs (Joseph, 2016). The idea of space-
filling designs is to have points everywhere in the experimental region with
as few gaps as possible, which serves our purpose very well. Figure 6 shows
the Maximum Projection Latin Hypercube Design (MaxProLHD, Joseph et al.
(2015)) of 16 runs with different combinations of $\lambda$ and $\gamma$,
where the experimental ranges for these two parameters are respectively
$[0,0.1]$ and $[0,10]$. For each design, Figure 7 shows the box plot of the
number of tree leaves per tree in an ensemble for each combination of
$\lambda$ and $\gamma$. Since the key idea behind boosting trees is that each
individual tree needs to be kept simple (Hastie et al., 2009), we identify
that Designs #7, #8 and #9 provide the most suitable combinations of $\lambda$
and $\gamma$. From Figure 6, these three design points are adjacent to each
other, indicating that the appropriate choices for $\lambda$ and $\gamma$ are
approximately within $[0.025,0.075]$ and $[4,5.5]$. A refined search in a much
smaller experimental region yields an appropriate combination of
$\lambda=0.05$ and $\gamma=4.25$ (Design “$*$” in Figure 6). Design “$*$” is
between Designs #7 and #8, and the 25th, 50th and 75th empirical quartiles of
the number of tree leaves are 6, 8 and 10 as shown in Figure 7.
Figure 6: Maximum projection design of 16 runs with different combinations of
$\lambda$ and $\gamma$. Design “$*$” yields an appropriate combination such
that $\lambda=0.05$ and $\gamma=4.25$.
Figure 7: Boxplot of the number of tree leaves per tree in an ensemble for the
16 candidate designs. Design “$*$” indicates the chosen combination such that
$\lambda=0.05$ and $\gamma=4.25$.
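A basic Latin hypercube sampler conveys the idea behind the design in Figure 6; this plain version does not optimize the MaxPro projection criterion of Joseph et al. (2015), so it is only a space-filling sketch:

```python
import numpy as np

def latin_hypercube(n, ranges, seed=0):
    """A plain Latin hypercube design: one stratified draw per dimension,
    with strata shuffled independently across dimensions."""
    rng = np.random.default_rng(seed)
    d = len(ranges)
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T  # n x d
    u = (strata + rng.random((n, d))) / n                           # in [0, 1)
    lo = np.array([a for a, _ in ranges])
    hi = np.array([b for _, b in ranges])
    return lo + u * (hi - lo)

# 16 candidate (lambda, gamma) combinations over [0, 0.1] x [0, 10]
design = latin_hypercube(16, [(0.0, 0.1), (0.0, 10.0)])
```

Each of the 16 strata of each parameter range receives exactly one design point, so the 16 runs cover both axes without gaps.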
After $\lambda$ and $\gamma$ have been appropriately chosen, 49 trees are
constructed to form the ensemble predictor using Algorithm 1. Figure 8 (top
panel) shows that the Mahalanobis distance (i.e., the objective function)
decreases as more trees have been included into the ensemble, indicating that
the algorithm is working as expected. Figure 8 (bottom panel) shows the number
of tree leaves for individual trees in this ensemble. It is interesting to
note that the algorithm no longer splits the (root) tree node after 37 trees
have been grown. In addition, the outputs of these one-node trees are all
zeros, indicating that all trees after tree 37 are completely redundant. This
is precisely due to the regularization $\gamma T$ and
$\frac{1}{2}\lambda\|\bm{w}\|^{2}$ in (6). The gain in $\ell$ is outweighed
by the loss caused by $\Omega$ if one more tree is added.
Figure 8: Top panel: the Mahalanobis distance (i.e., objective function)
decreases as the number of trees grows; Bottom panel: the number of tree
leaves of individual trees.
Applying the constructed ensemble trees to the testing dataset, Figure 9 shows
the out-of-sample Root Mean Squared Error (RMSE) and Mean Gross Error (MGE). We
see that both performance metrics decrease as more trees are included and
stabilize approximately after 30 to 40 trees have been grown. As discussed
above, this is precisely due to the fact that all trees after #37 are
redundant with zero output. Figure 10 shows the (out-of-sample) predictions
against actual observations of the SUV level at different voxels. The figure
shows that the proposed Boost-S accurately predicts the SUV levels given the
covariate information.
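The two metrics reported in Figure 9 can be computed as follows; we take MGE to be the mean absolute error, which is an assumption on our part since the paper does not spell out its formula:

```python
import numpy as np

def rmse(y, y_hat):
    """Root Mean Squared Error."""
    return float(np.sqrt(np.mean((np.asarray(y) - np.asarray(y_hat)) ** 2)))

def mge(y, y_hat):
    """Mean Gross Error, taken here as the mean absolute error (assumed)."""
    return float(np.mean(np.abs(np.asarray(y) - np.asarray(y_hat))))
```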
Figure 9: Out-of-sample model performance in terms of RMSE and MGE
Figure 10: Out-of-sample predictions against actual observations
Figure 11: MidPET images on the $x-y$ plane for different values of $z$. Rows
1 and 3: observed images; Rows 2 and 4: reconstructed images.
Furthermore, we reconstruct the Mid-RT SUV (3 weeks after treatment) by the
proposed Boost-S, and compare the actual and predicted Mid-RT SUV using slice
plots (Figure 11). Because a tumor is a 3D object with three coordinates, $x$,
$y$, and $z$, the MidPET images are shown on the $x-y$ plane for given values
of $z$. We see that the ensemble trees are capable of predicting the Mid-RT
SUV for the entire tumor body.
### Comparison Study
We compare the predictive capability of Boost-S to five other methods using
the data collected from 25 patients. The five other methods included in the
comparison study are: Random Forests (RF) (Breiman, 2001), Extreme Gradient
Boosting Trees (XGBoost) (Chen and Guestrin, 2016), non-parametric cubic
splines, universal kriging with a linear spatial trend, and multiple linear
regression without considering the spatial correlation. For each patient, 15%
of the observations are randomly chosen as the training dataset, and all 6
models are constructed to generate the out-of-sample predictions using the
testing dataset. Such a low training-testing ratio is used to test the
predictive capabilities of these methods.
Tables 1, 2 and 3 respectively show the Mean Gross Error (MGE), Relative Error
(RE) and Root Mean Squared Error (RMSE) of the out-of-sample predictions for
25 patients using 6 different methods. It is interesting to see that
* •
The proposed Boost-S yields the lowest MGE for 19 out of the 25 patients,
while RF performs the best for the remaining 6 patients;
* •
The proposed Boost-S yields the lowest RE for 18 out of the 25 patients, while
RF performs the best for the remaining 7 patients;
* •
The proposed Boost-S yields the lowest RMSE for 17 out of the 25 patients,
while RF performs the best for the remaining 8 patients.
Hence, we conclude that the proposed Boost-S provides the best performance
for the majority of the patients in terms of all three performance measures,
although no method uniformly outperforms the others (which is realistic and
expected given the well-known modeling power of RF and XGBoost). Boost-S
outperforms the other additive-tree-based methods, i.e., RF and XGBoost,
because of its capability of accounting for the spatial correlation among
observations. It outperforms universal kriging and multiple linear regression
due to the advantages of non-parametric tree-based methods in capturing
complex non-linear and interaction effects between features and responses.
This observation demonstrates the
effectiveness of Boost-S as a useful extension to existing ensemble tree-based
methods by accounting for the spatial correlation among observations.
Table 1: Mean Gross Error of the out-of-sample predictions for 25 patients using 6 different methods (note that: rows 1 to 25 respectively show the results corresponding to the 25 patients, while the last row shows the p-value of the one-sided paired Wilcoxon test) Boost-S | RF | XGBoost | Cubic Splines | Universal Kriging | Linear Regression
---|---|---|---|---|---
8.24 | 6.78 | 7.31 | 8.45 | 10.32 | 9.01
10.68 | 9.61 | 10.55 | 11.57 | 13.41 | 13.79
5.84 | 6.22 | 6.32 | 7.14 | 10.37 | 9.08
4.30 | 4.37 | 4.68 | 6.19 | 6.76 | 6.20
3.15 | 4.45 | 3.97 | 4.70 | 10.04 | 8.58
1.95 | 2.40 | 2.51 | 3.07 | 5.42 | 4.25
5.20 | 5.40 | 5.67 | 5.47 | 13.30 | 10.34
5.04 | 5.30 | 5.93 | 9.00 | 10.39 | 9.88
8.59 | 9.28 | 9.50 | 9.22 | 12.55 | 11.79
2.37 | 2.79 | 3.17 | 4.46 | 6.70 | 5.97
9.08 | 8.63 | 8.91 | 11.16 | 10.79 | 11.22
1.44 | 1.74 | 2.08 | 2.64 | 4.00 | 2.99
1.65 | 1.91 | 2.08 | 3.61 | 4.36 | 3.69
0.74 | 1.32 | 1.35 | 1.58 | 2.48 | 1.83
10.19 | 9.77 | 10.59 | 10.98 | 15.03 | 13.71
4.17 | 4.37 | 4.25 | 7.79 | 9.64 | 7.89
2.71 | 3.20 | 3.78 | 4.22 | 7.15 | 6.47
6.53 | 7.06 | 8.27 | 8.41 | 13.43 | 12.18
13.10 | 10.62 | 11.36 | 12.57 | 13.77 | 14.23
3.51 | 3.96 | 4.06 | 4.44 | 9.30 | 6.85
1.22 | 1.24 | 1.57 | 1.58 | 2.55 | 2.03
4.17 | 4.82 | 5.01 | 7.69 | 11.25 | 8.71
5.47 | 6.62 | 6.43 | 10.65 | 18.82 | 12.05
3.87 | 3.63 | 4.13 | 5.08 | 5.52 | 5.47
1.86 | 2.42 | 2.58 | 3.57 | 5.60 | 4.19
N.A. | 0.04 | 0.001 | $<10^{-6}$ | $<10^{-7}$ | $<10^{-7}$
Table 2: Relative Error (in %) of the out-of-sample predictions for 25 patients using 6 different methods (note that: rows 1 to 25 respectively show the results corresponding to the 25 patients, while the last row shows the p-value of the one-sided paired Wilcoxon test) Boost-S | RF | XGBoost | Cubic Splines | Universal Kriging | Linear Regression
---|---|---|---|---|---
9.67 | 8.01 | 8.58 | 9.91 | 12.29 | 10.68
56.62 | 51.01 | 55.37 | 60.03 | 60.47 | 67.27
11.04 | 12.52 | 11.93 | 13.72 | 18.40 | 17.01
5.03 | 5.34 | 5.53 | 7.60 | 8.36 | 7.61
7.30 | 9.41 | 8.26 | 10.77 | 20.25 | 19.07
4.42 | 5.73 | 5.65 | 7.29 | 12.52 | 9.72
8.92 | 9.41 | 9.52 | 9.72 | 20.53 | 18.06
23.25 | 22.32 | 25.52 | 43.34 | 43.51 | 43.78
13.62 | 15.45 | 15.18 | 14.45 | 20.04 | 18.91
2.28 | 2.74 | 3.07 | 4.26 | 6.48 | 5.77
11.44 | 10.78 | 10.94 | 14.42 | 13.41 | 14.39
1.09 | 1.32 | 1.55 | 1.99 | 3.06 | 2.25
3.66 | 4.25 | 4.55 | 8.00 | 9.70 | 8.20
1.49 | 2.75 | 2.72 | 3.17 | 4.70 | 3.59
21.40 | 19.97 | 21.21 | 23.26 | 32.02 | 29.69
8.36 | 8.67 | 8.13 | 15.27 | 20.18 | 15.49
3.90 | 4.68 | 5.34 | 6.05 | 9.93 | 9.51
7.46 | 8.79 | 9.97 | 9.99 | 17.54 | 15.31
21.70 | 17.32 | 18.19 | 20.08 | 21.71 | 23.53
4.02 | 4.58 | 4.60 | 5.05 | 10.15 | 7.95
1.91 | 1.94 | 2.43 | 2.53 | 4.08 | 3.24
6.34 | 7.18 | 7.61 | 11.55 | 17.54 | 13.37
8.10 | 9.55 | 8.73 | 16.69 | 25.45 | 18.96
5.24 | 5.01 | 5.63 | 7.10 | 7.85 | 7.61
5.07 | 6.59 | 6.86 | 9.72 | 14.92 | 11.26
N.A. | 0.09 | 0.004 | $<10^{-6}$ | $<10^{-7}$ | $<10^{-7}$
Table 3: Root Mean Squared Error of the out-of-sample predictions for 25 patients using 6 different methods (note that: rows 1 to 25 respectively show the results corresponding to the 25 patients, while the last row shows the p-value of the one-sided paired Wilcoxon test) Boost-S | RF | XGBoost | Cubic Splines | Universal Kriging | Linear Regression
---|---|---|---|---|---
11.05 | 9.01 | 9.66 | 10.89 | 13.00 | 11.66
16.46 | 14.77 | 16.22 | 16.14 | 19.66 | 19.42
7.73 | 7.83 | 8.10 | 9.17 | 12.89 | 11.61
5.74 | 5.51 | 5.89 | 7.73 | 8.28 | 7.74
4.13 | 5.97 | 5.41 | 6.00 | 12.83 | 10.64
2.68 | 3.11 | 3.24 | 3.89 | 6.68 | 5.36
7.03 | 6.68 | 7.13 | 6.98 | 17.10 | 12.64
7.62 | 7.88 | 8.58 | 11.50 | 14.09 | 13.07
11.15 | 11.69 | 12.02 | 11.80 | 15.57 | 15.04
3.21 | 3.80 | 4.32 | 5.97 | 8.28 | 7.66
12.52 | 12.06 | 12.41 | 14.47 | 14.68 | 14.87
1.89 | 2.20 | 2.64 | 3.64 | 4.75 | 4.02
2.20 | 2.46 | 2.66 | 4.56 | 5.10 | 4.68
1.04 | 1.69 | 1.72 | 1.95 | 3.21 | 2.37
14.17 | 13.19 | 14.17 | 14.29 | 18.66 | 17.32
5.77 | 5.94 | 5.71 | 10.18 | 12.27 | 10.31
3.48 | 4.15 | 4.73 | 5.30 | 8.99 | 8.28
8.72 | 8.98 | 10.50 | 10.83 | 16.69 | 15.26
19.26 | 15.23 | 16.30 | 17.69 | 20.01 | 19.80
5.02 | 5.21 | 5.29 | 5.94 | 11.18 | 8.33
1.72 | 1.82 | 2.22 | 2.09 | 3.29 | 2.72
5.52 | 6.61 | 6.59 | 9.75 | 13.63 | 11.06
7.36 | 8.73 | 8.68 | 13.29 | 23.28 | 15.15
5.10 | 4.78 | 5.41 | 6.63 | 7.07 | 7.07
2.55 | 3.16 | 3.34 | 4.67 | 7.14 | 5.39
N.A. | 0.19 | 0.003 | $<10^{-4}$ | $<10^{-7}$ | $<10^{-7}$
Figures 12, 13 and 14 respectively show the MGE, RE and RMSE of the out-of-
sample predictions for the 25 patients. Such visualizations provide a more
holistic perspective on the performance of the six candidate methods.
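The significance tests reported in the last row of Tables 1-3 can be reproduced with scipy; the sketch below uses the first five Boost-S and Linear Regression entries of Table 1 only, so the p-value here is illustrative rather than one of the tabulated values:

```python
import numpy as np
from scipy.stats import wilcoxon

# One-sided paired Wilcoxon signed-rank test: are Boost-S errors
# systematically lower than a competitor's on the same patients?
boost_s    = np.array([8.24, 10.68, 5.84, 4.30, 3.15])   # Table 1, rows 1-5
linear_reg = np.array([9.01, 13.79, 9.08, 6.20, 8.58])
stat, p = wilcoxon(boost_s, linear_reg, alternative="less")
```

With `alternative="less"`, a small p-value supports the hypothesis that the paired Boost-S errors are stochastically smaller.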
Figure 12: Mean Gross Error (MGE) of the out-of-sample predictions for 25
patients using 6 different methods
Figure 13: Relative Error (RE) of the out-of-sample predictions for 25
patients using 6 different methods
Figure 14: Root Mean Squared Error (RMSE) of the out-of-sample predictions
for 25 patients using 6 different methods
## Conclusion
This paper proposed a new gradient Boosted Trees algorithm for Spatial Data
with covariate information (Boost-S). It has been shown that Boost-S
successfully integrates spatial correlation into the classical framework of
gradient boosted trees. A computationally-efficient algorithm as well as the
technical details have been presented. The Boost-S algorithm grows individual
trees by solving a regularized optimization problem, where the objective
function involves two penalty terms on tree complexity and takes into account
the underlying spatial correlation. The advantages of the proposed Boost-S,
over five other commonly used approaches, have been demonstrated using real
datasets involving the spatially-correlated FDG-PET imaging data collected
during cancer chemoradiotherapy.
## Acknowledgment
This investigation was supported in part by National Institutes of Health
grant R01CA204301.
## References
* Banerjee (2017) Banerjee, S. (2017), “High-Dimensional Bayesian Geostatistics,” Bayesian Analysis, 12, 583–614.
* Banerjee et al. (2004) Banerjee, S., Carlin, B. P., and Gelfand, A. E. (2004), Hierarchical Modeling and Analysis for Spatial Data, 2nd ed., Boca Raton, Florida: CRC Press.
* Banerjee et al. (2008) Banerjee, S., Gelfand, A. E., Finley, A. O., and Sang, H. (2008), “Gaussian Predictive Process Models for Large Spatial Data Sets,” Journal of the Royal Statistical Society: Series B, 70, 825–848.
* Berliner (2003) Berliner, L. M. (2003), “Physical-Statistical Modeling in Geophysics,” Journal of Geophysical Research-Atmospheres, 108, 3–10.
* Besag (1974) Besag, J. E. (1974), “Spatial Interaction and the Statistical Analysis of Lattice Systems,” Journal of the Royal Statistical Society, B, 36, 192–225.
* Bowen et al. (2019) Bowen, S. R., Hippe, D. S., Chaovalitwongse, W. A., Duan, C., Thammasorn, P., Liu, X., Miyaoka, R. S., Vesselle, H. J., Kinahan, P. E., Rengan, R., et al. (2019), “Voxel Forecast for Precision Oncology: predicting spatially variant and multiscale cancer therapy response on longitudinal quantitative molecular imaging,” Clinical Cancer Research, 25, 5027–5037.
* Breiman (2001) Breiman, L. (2001), “Random forests,” Machine learning, 45, 5–32.
* Brown et al. (2000) Brown, P. E., Karesen, K. F., Roberts, G. O., and Tonellato, S. (2000), “Blur-Generated Non-Separable Space-Time Models,” Journal of the Royal Statistical Society: Series B, 62, 847–860.
* Carlin and Banerjee (2003) Carlin, B. P. and Banerjee, S. (2003), Hierarchical Multivariate CAR Models for Spatio-Temporally Correlated Survival Data (with discussion), Oxford: Oxford University Press.
* Chen and Guestrin (2016) Chen, T. and Guestrin, C. (2016), “XGBoost: A Scalable Tree Boosting System,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
* Chipman et al. (2010) Chipman, H. A., George, E. I., and McCulloch, R. E. (2010), “Bart: Bayesian Additive Regression Trees,” The Annals of Applied Statistics, 4, 266–298.
* Cressie and Huang (1999) Cressie, N. and Huang, H. C. (1999), “Classes of Nonseparable, Spatio-Temporal Stationary Covariance Functions,” Journal of the American Statistical Association, 94, 1330–1340.
* Cressie and Johannesson (2002) Cressie, N. and Johannesson, G. (2002), “Fixed Rank Kriging for Very Large Spatial Data Sets,” Journal of the Royal Statistical Society: Series B, 70, 209–226.
* Cressie and Wikle (2011a) Cressie, N. and Wikle, C. (2011a), Statistics for Spatio-Temporal Data, Hoboken, New Jersey: John Wiley & Sons.
* Cressie and Wikle (2011b) Cressie, N. and Wikle, C. K. (2011b), Statistics for spatio-temporal data, John Wiley & Sons.
* Datta et al. (2016) Datta, A., Banerjee, S., Finley, A. O., and Gelfand, A. E. (2016), “Hierarchical Nearest-Neighbor Gaussian Process Models for Large Geostatistical Datasets,” Journal of the American Statistical Association, 111, 800–812.
* Ezzat et al. (2019) Ezzat, A., Jun, M., and Ding, Y. (2019), “Spatio-temporal short-term wind forecast: A calibrated regime-switching method,” The Annals of Applied Statistics, 1484–1510.
* Fang et al. (2019) Fang, X., Paynabar, K., and Gebraeel, N. (2019), “Image-Based Prognostics Using Penalized Tensor Regression,” Technometrics, 61, 369–384.
* Fuentes (2007) Fuentes, M. (2007), “Approximate Likelihood for Large Irregularly Spaced Spatial Data,” Journal of the American Statistical Association, 102, 321–331.
* Fuentes et al. (2005) Fuentes, M., Chen, L., Davis, J. M., and Lackmann, G. M. (2005), “Modeling and Predicting Complex Space-Time Structures and Patterns of Coastal Wind Fields,” Environmetrics, 16, 449–464.
* Ghosh et al. (2010) Ghosh, S. K., Bhave, P. E., Davis, J. M., and Lee, H. (2010), “Spatio-Temporal Analysis of Total Nitrate Concentrations using Dynamic Statistical Models,” Journal of the American Statistical Association, 105, 538–551.
* Gneiting (2002) Gneiting, T. (2002), “Nonseparable, Stationary Covariance Functions for Space-Time Data,” Journal of the American Statistical Association, 97, 590–600.
* Gneiting et al. (2006) Gneiting, T., Genton, M. G., and Guttorp, P. (2006), “Geostatistical space-time models, stationarity, separability, and full symmetry.” in Statistical Methods for Spatio-Temporal Systems, eds. Finkenstadt, B., Held, L., and Isham, V., Boca Raton: Chapman & Hall, pp. 151–175.
* Guinness and Fuentes (2015) Guinness, J. and Fuentes, M. (2015), “Likelihood Approximations for Big Nonstationary Spatial-Temporal Lattice Data,” Statistica Sinica.
* Guinness and Stein (2013) Guinness, J. and Stein, M. (2013), “Interpolation of Nonstationary High Frequency Spatial-Temporal Temperature Data,” Annals of Applied Statistics, 7, 1684––1708.
* Hastie et al. (2009) Hastie, T., Tibshirani, R., and Friedman, J. (2009), The Elements of Statistical Learning, 2nd Edition, New York: Springer.
* Higdon (1998) Higdon, D. (1998), “A Process-Convolution Approach to Modeling Temperatures in the North Atlantic Ocean,” Environmental Ecology Statistics, 5, 173–190.
* Hooten and Wikle (2008) Hooten, M. B. and Wikle, C. K. (2008), “A hierarchical Bayesian non-linear spatio-temporal model for the spread of invasive species with appliation to the Eurasian Collared-Dove,” Environmental and Ecological Statistics, 15, 59–70.
* Joseph (2016) Joseph, V. R. (2016), “Space-filling designs for computer experiments: A review,” Quality Engineering, 28, 28–35.
* Joseph et al. (2015) Joseph, V. R., Gul, E., and Ba, S. (2015), “Maximum Projection Designs for Computer Experiments,” Biometrika, 102, 371–380.
* Katzfuss (2017) Katzfuss, M. (2017), “A multi-resolution approximation for massive spatial datasets,” Journal of the American Statistical Association, 112, 201–214.
* Katzfuss et al. (2020) Katzfuss, M., Stroud, J. R., and Wikle, C. K. (2020), “Ensemble Kalman methods for high-dimensional hierarchical dynamic space-time models,” Journal of the American Statistical Association, 115, 866–885.
* Krainski et al. (2019) Krainski, E. T., Gomez-Rubio, V., Bakka, H., Lenzi, A., Castro-Camilo, D., Simpson, D., Lindgren, F., and Rue, H. (2019), Advanced Spatial Modeling with Stochastic Partial Differential Equations Using R and INLA, Boca Raton: Chapman and Hall/CRC.
* Lenzi et al. (2019) Lenzi, A., Castruccio, S., Rue, H., and Genton, M. G. (2019), “Improving Bayesian Local Spatial Models in Large Data Sets,” arXiv:1907.06932.
* Lindgren and Rue (2011) Lindgren, F. and Rue, H. (2011), “An Explicit Link between Gaussian Fields and Gaussian Markov Random Fields: the Stochastic Partial Differntial Equation Approach,” Journal of the Royal Statistical Society: Series B, 73, 423–498.
* Liu et al. (2018a) Liu, X., Gopal, V., and Kalagnanam, J. (2018a), “A Spatio-Temporal Modeling Framework for Weather Radar Image Data in Tropical Southeast Asia,” The Annals of Applied Statistics, 12, 378–407.
* Liu et al. (2018b) — (2018b), “A Spatio-Temporal Modeling Framework for Weather Radar Image Data in Tropical Southeast Asia,” The Annals of Applied Statistics, 12, 378–407.
* Liu and Pan (2020) Liu, X. and Pan, R. (2020), “Analysis of Large Heterogeneous Repairable System Reliability Data with Static System Attributes and Dynamic Sensor Measurement in Big Data Environment,” Technometrics, 62, 206–222.
* Liu et al. (2016) Liu, X., Yeo, K. M., Hwang, Y. D., Singh, J., and Kalagnanam, J. (2016), “A Statistical Modeling Approach for Air Quality Data Based on Physical Dispersion Processes and Its Application to Ozone Modeling,” The Annals of Applied Statistics, 10, 756–785.
* Liu et al. (2018c) Liu, X., Yeo, K. M., and Kalagnanam, J. (2018c), “A Statistical Modeling Approach for Spatio-Temporal Degradation Data,” Journal of Quality Technology, 50, 166–182.
* Liu et al. (2020) Liu, X., Yeo, K. M., and Lu, S. Y. (2020), “Statistical Modeling for Spatio-Temporal Data from Stochastic Convection-Diffusion Processes,” Journal of the American Statistical Association, to appear, arXiv:1910.10375.
* Mondal and Wang (2019) Mondal, D. and Wang, C. (2019), “A matrix-free method for spatial-temporal Gaussian state-space models,” Statstica Sinica (to appear), 29, 2205–2227.
* Nychka and Wikle (2002) Nychka, D. and Wikle, C. Royle, J. A. (2002), “Multiresolution Models for Nonstationary Spatial Covariance Functions,” Statistical Modeling, 2, 315–331.
* R-INLA (2019) R-INLA (2019), The R-INLA Project, http://www.r-inla.org/.
* Reich et al. (2011) Reich, B. J., Eidsvik, J. Guindani, M., Nail, A. J., and Schmidt, A. M. (2011), “A Class of Covariate-Dependent Spatiotemporal Covariance Functions for the Analysis of Daily Ozone Concentration,” The Annals of Applied Statistics, 5, 2425–2447.
* Rue et al. (2009) Rue, H., Martino, S., and Chopin, N. (2009), “Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations,” Journal of the Royal Statistical Society, B.
* Schabenberger and Gotway (2005) Schabenberger, O. and Gotway, C. A. (2005), Statistical Methods for Spatial Data Analysis, Boca Raton, Florida: Chapman & Hall/CRC.
* Schapire (1999) Schapire, R. E. (1999), “A Brief Introduction to Boosting,” in Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence.
* Sigrist (2020) Sigrist, F. (2020), “Gaussian Process Boosting,” arXiv:2004.02653v2.
* Sigrist et al. (2015) Sigrist, F., Kunsch, H. R., and Stahel, W. A. (2015), “Stochastic Partial Differential Equation based Modelling of Large Space-Time Data Sets,” Journal of the Royal Statistical Society: Series B, 77, 3–33.
* Stein et al. (2004) Stein, M. L., Chi, Z., and Welty, L. J. (2004), “Approximating Likelihoods for Large Spatial Data Sets,” Journal of the Royal Statistical Society: Series B, 66, 275–296.
* Stroud et al. (2001) Stroud, J. R., Muller, P., and Sanso, B. (2001), “Dynamic Models for Spatiotemporal Data,” Journal of the Royal Statistical Society: Series B, 63, 673–689.
* Stroud et al. (2010) Stroud, J. R., Stein, M. L., L. B. M., Schwab, D. J., and Beletsky, D. (2010), “An Ensemble Kalman Filter and Smoother for Satellite Data Assimilation,” Journal of American Statistical Association, 105, 978–990.
* Wang et al. (2016) Wang, K., Jiang, W., and Li, B. (2016), “A Spatial Variable Selection Method for Monitoring Product Surface,” International Journal of Production Research, 54, 4161–4181.
* Wikle and Cressie (1999) Wikle, C. K. and Cressie, N. (1999), “A Dimension-Reduced Approach to Space-Time Kalman Filtering,” Biometrika, 86, 815–829.
* Yan et al. (2019) Yan, H., Zhao, X., Hu, Z., and Du, D. (2019), “Physics-based Deep Spatio-temporal Metamodeling for Cardiac Electrical Conduction Simulation,” .
* Yao et al. (2017) Yao, B., Zhu, R., and Yang, H. (2017), “Characterizing the location and extent of myocardial infarctions with inverse ECG modeling and spatiotemporal regularization,” IEEE Journal of Biomedical and Health Informatics, 22, 1445–1455.
* Yue et al. (2020) Yue, X., Wen, Y., Hunt, J. H., and Shi, J. (2020), “Active Learning for Gaussian Process considering Uncertainties, with an Application to Automatic Shape Control of Composite Fuselage,” arxiv: 2004.10931.
* Zang and Qiu (2019) Zang, Y. and Qiu, P. (2019), “Phase I Monitoring of Spatial Surface Data from 3D Printing,” Technometrics, 60, 169–180.
|
# On Small-World Networks: Survey and Properties Analysis
Alaa Eddin Alchalabi _Graduate School of Electronics and Computer
Engineering_
Istanbul Sehir University, Istanbul, Turkey
<EMAIL_ADDRESS>
###### Abstract
Complex networks have been a hot topic of research over the past several
years, cutting across many disciplines from mathematics and computer science
to the social and biological sciences. Random graphs have been studied to
identify the qualitative features that planetary-scale data sets have in
common, which helps us project the insights proven on them onto real-world
networks.
In this paper, we survey the particular case of small-world phenomena and
decentralized search algorithms. We start by explaining the first empirical
study of the “six degrees of separation” phenomenon in social networks; we
then review some of the probabilistic network models based on this work,
elaborating on how these models tried to explain the phenomenon’s properties;
and lastly, we review a few of the recent empirical studies empowered by these
models. Finally, some future works in this area of research are proposed.
###### Index Terms:
Small-World, Complex Networks, Lattices and Random Graphs, Search Algorithms.
## I Introduction
Recently, the study of complex networks has emerged across a wide range of
disciplines and research areas. The World Wide Web has revolutionized the way
we deal with almost everything in daily life, and computer scientists have
been searching for ways to control the complexity and enormous growth of the
Internet. The scale of social-network data has outgrown what social scientists
can handle. The biological interactions in a cell’s metabolism are expected to
define its pathways and could provide insights to biologists [13]. A new-born
science is urgently needed so that we can manipulate networks before networks
manipulate our needs [8].
The study of complex networks has evolved since the study of randomly
generated graphs by Erdos and Renyi [4], and the appearance of large-scale
network data has unleashed tremendous work in multi-disciplinary areas
covering both the real and the virtual world [13]. Efforts were put into
describing the properties of random graphs in large networks, which raised
more and more technical questions to be answered. To mimic real networks, a
randomly generated stylized network model is adopted so that the resulting
conclusions and properties can be generalized onto real networks. Simple
models fail to capture the complexity of a realistic network’s structure and
features, yet they offer a strong mathematical basis upon which future
investigations can be built.
In the next sections of this paper, we survey the “small-world phenomenon” and
a few related problems. We start with the famous psychologist Stanley
Milgram’s social experiment, which captures the main aspects of the phenomenon
[11], [14]; we review a few of the models based on random graphs that try to
explain the problem [7], [9], [12], [15], [16]; and then we mention recent
work that has applied the traditional insights of these models to large data
sets extracted from popular web applications [2], [10]. Lastly, some suggested
further extensions to small-world networks are discussed, along with future
works and their relevance to this field.
## II Small-World Phenomenon
The small-world phenomenon has recently been a hot topic of both theoretical
and practical research, and it has been given huge attention by most, if not
all, multi-disciplinary researchers. The term “small-world”, linked by all
means to the “short chains of acquaintances” or the “six degrees of
separation” [5][6][16], refers to the graph of the human social network, where
nodes represent people and an edge between two nodes indicates that the two
corresponding persons know each other on a first-name basis [8]. The graph is
described as a “small-world” because any two randomly chosen nodes are
separated by a relatively small number of nodes, generally fewer than six.
Although the first-name-basis rule is a bit naive as an edge definition, the
resulting graph behaves like a real-world network.
Small-world networks are of great importance because they overcome the
limitations of the two extreme network types: random networks and regular
lattices. Small-world networks have proved their ability to serve as
frameworks for the study of interaction networks of complex systems [1].
The key aim of small-world studies is to prove the hypothesis that a
qualitatively shared structure exists among a variety of networks across
different fields. A common property arises in large networks: short paths
exist between most pairs of nodes even though the nodes exhibit a high degree
of clustering. Nodes can also be reached and navigated without a universal
understanding of the whole network. Such properties have contributed to
describing the behavior of large-scale social networks and, additionally, have
given important insights for designing the internals of decentralized
peer-to-peer systems.
### II-A Milgram’s Experiment
Stanley Milgram, the famous social psychologist, ran an experiment in the
1960s to measure people’s connectivity in the USA and to test the small-world
property [11][14]. The experiment asks how probable it is that two arbitrarily
selected individuals from a large population know each other in person. A
target person, a Boston stockbroker, was selected in the state of
Massachusetts, and 296 arbitrarily selected individuals from Nebraska and
Boston were asked, as “starting persons”, to generate acquaintance chains to
the stockbroker. The selected group was given a document describing the study,
including the target’s name. Each sender was asked to choose a recipient who
they thought would help carry the message to the target person along the
shortest possible route. The concept of a “roster” was introduced to prevent
the message from looping back to a previous sender and to track the number of
nodes the message reached.
The results of the experiment were quite astonishing. Sixty-four chains made
their way to the target person. The mean number of intermediaries was 5.2,
with a median of 6. Chains starting in Boston were shorter than those starting
in Nebraska. Additional experiments by Korte and Milgram showed that these
numbers are quite stable [14].
Some comments on Milgram’s experiments point to the inability of this model to
be generalized to larger networks. Varying the information given about the
target person might affect the decisions taken by senders; here, psychological
and sociological factors come into play.
## III Small-World Based Empirical Models
### III-A Watts and Strogatz’s Model
Watts and Strogatz came up with a model that aims to explain the small-world
property. After Bollobas and de la Vega [3] introduced the theorem which
proves the logarithmic property in the path length with respect to the number
of nodes _O(log n)_ in small-world networks, Watts and Strogatz felt that
there was something missing in the theorem. The model proposed considered
small-world networks to be highly-structured with relatively a few number of
random links added within. Long-range connections in this model plays a
crucial rule in creating short paths through all nodes [15].
The model generates a graph by rewiring edges between nodes with a certain
probability. This probability allows interpolation between regular networks
(p=0) and random networks (p=1). The model starts by generating a ring of n
connected nodes (average degree k). Then each edge is rewired with probability
p, with the landing node chosen uniformly at random. The clustering
coefficient _$(C_{p})$_ measures, for each node, the fraction of pairs of its
neighbours that are connected out of all possible pairs, averaged over all
nodes [15].
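The ring-plus-rewiring construction and the clustering measure described above can be sketched as follows (a minimal illustration, not the authors' original code; the function and parameter names are ours):

```python
import random

def watts_strogatz(n, k, p, seed=0):
    """Ring of n nodes, each linked to its k nearest neighbours (k even),
    then each edge is rewired with probability p to a random new target."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for v in range(n):                      # build the regular ring lattice
        for j in range(1, k // 2 + 1):
            w = (v + j) % n
            adj[v].add(w); adj[w].add(v)
    for v in range(n):                      # visit each original edge once
        for j in range(1, k // 2 + 1):
            w = (v + j) % n
            if rng.random() < p and w in adj[v]:
                new = rng.randrange(n)
                while new == v or new in adj[v]:   # avoid loops/duplicates
                    new = rng.randrange(n)
                adj[v].discard(w); adj[w].discard(v)
                adj[v].add(new); adj[new].add(v)
    return adj

def clustering(adj):
    """Fraction of links among each node's neighbours, averaged over nodes."""
    total = 0.0
    for v, nbrs in adj.items():
        d = len(nbrs)
        if d < 2:
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += 2.0 * links / (d * (d - 1))
    return total / len(adj)
```

For p = 0 and k = 6 this reproduces the exact ring-lattice value C = 3(k-2)/(4(k-1)) = 0.6, while fully rewired graphs fall toward C ~ k/n.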
For a regular graph (p = 0) the results show a highly clustered network
(_$C\sim$_ 3/4) with path length _$L\sim n/2k$_, where _$n>k>\ln(n)>1$_ should
be chosen. For random graphs (p = 1) the resulting network is poorly clustered
(_$C\sim k/n$_) with a small path length _$L\sim\ln(n)/\ln(k)$_.
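The regular-graph scaling _$L\sim n/2k$_ can be checked directly: on the ring lattice the shortest-path distance between two nodes has a closed form, so the mean path length is a one-line sum (an illustrative sketch; the function name is ours):

```python
def avg_path_length_ring(n, k):
    """Mean shortest-path length on a ring lattice where each node links to
    its k nearest neighbours (k/2 per side). The hop distance from node 0 to
    node j is ceil(min(j, n - j) / (k/2)); by symmetry one source suffices."""
    half = k // 2
    dists = [-(-min(j, n - j) // half) for j in range(1, n)]  # ceil division
    return sum(dists) / (n - 1)
```

For n = 1000 and k = 10 this gives roughly 50.45, close to the predicted n/2k = 50.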
Their research also included three empirical real-world examples of small-
world models, and their main finding was that the average path length of the
chosen examples was only slightly higher than in the random model, while the
clustering coefficient was clearly much higher than in the random model. Using
these results, they reasoned about how infectious diseases spread rapidly in
small-world societies.
One drawback of Watts and Strogatz’s model is that it cannot be generalized to
all small-world networks. Extended works by other scientists have tried to
fill in the gaps.
### III-B Classes of Small-World Networks
Due to the limited scope of the Watts and Strogatz model, a new explanation
was needed. Looking at the dilemma from another perspective, Amaral et al.
classified small-world networks into three classes, reporting an empirical
study of real-world networks [1]. The study covers mainly the statistical
properties of real-world networks, and it was enough to prove the existence of
three classes of small-world networks: scale-free, broad-scale, and
single-scale [1].
#### III-B1 Scale-free
Networks characterized by a vertex connectivity distribution that decays as a
power law.
#### III-B2 Broad-scale
Networks characterized by a connectivity distribution that has a power-law
region followed by a sharp cutoff.
#### III-B3 Single-scale
Networks characterized by a connectivity distribution with a fast-decaying
tail.
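The three tail behaviours can be illustrated by sampling toy degree distributions via inverse-transform sampling (a hypothetical sketch; the particular distributions and parameters are ours, not from the study):

```python
import math, random

def sample_degrees(kind, n, seed=0):
    """Toy degree samples for the three classes: 'scale-free' has a pure
    power-law tail P(K >= k) ~ k^-2, 'broad-scale' is the same power law
    truncated at a sharp cutoff, 'single-scale' has an exponential tail."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = 1.0 - rng.random()                 # u in (0, 1]
        if kind == "scale-free":
            out.append(u ** -0.5)              # inverse CDF of the power law
        elif kind == "broad-scale":
            out.append(min(u ** -0.5, 50.0))   # sharp cutoff at 50
        else:                                  # "single-scale"
            out.append(-10.0 * math.log(u))    # exponential tail, mean 10
    return out
```

Plotting the complementary cumulative distribution of each sample on log-log axes shows a straight line for the scale-free case, a bent line for the broad-scale case, and a rapidly falling curve for the single-scale case.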
The research also answered why such a taxonomy exists by pointing to two types
of factors. The first is the aging of the vertices: over time, old nodes stop
being as effective in the network; the actors network is an example. The
second is the cost of adding new links to a vertex, which is limited by the
vertex capacity. An example is the airport map, where adding too many links to
a vertex is pricey and impractical.
### III-C Kleinberg’s Algorithmic Perspective
Kleinberg’s way of explaining small-world properties was quite close to Watts
and Strogatz’s, but with slight differences [7]. Kleinberg used an _n x n_
grid of nodes to represent the network, and to add the small-world flavour, a
number of long-range connection edges were added rather than rewired. After
adding the edges, the probability of connecting two random vertices (v, w) is
proportional to _$1/d(v,w)^{q}$_, where q is the clustering exponent [9].
Kleinberg came up with theorems quantifying the delivery time of decentralized
algorithms, generalizing the results of Bollobás and de la Vega [3] on the
logarithmic behavior of short paths in networks. He proved that the time
needed is not always logarithmic but depends on other parameters. A new
parameter ($\alpha\geq 0$) was introduced that controls the long-range
connections. Interestingly, the delivery time varies with $\alpha$ as follows:
#### III-C1 For 0 <$\alpha$ <2
the delivery time of any decentralized algorithm in the grid-based model is
$\Omega(n^{(2-\alpha)/3})$.
#### III-C2 For $\alpha$ = 2
the delivery time of any decentralized algorithm in the grid-based model is
$O(\log^{2}n)$.
#### III-C3 For $\alpha$ >2
the delivery time of any decentralized algorithm in the grid-based model is
$\Omega(n^{(\alpha-2)/(\alpha-1)})$ [7].
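These regimes can be explored empirically with a small simulation of Kleinberg’s greedy rule: at every step the message moves to whichever contact, local grid neighbour or long-range link sampled with probability proportional to $d^{-\alpha}$, is closest to the target. This is an illustrative sketch only; long-range contacts are resampled lazily at each visited node, a simplification of the static model:

```python
import random

def greedy_route_steps(n, alpha, seed=0):
    """Steps taken by greedy routing between two random nodes on an
    n x n grid with one lazily sampled long-range contact per node."""
    rng = random.Random(seed)
    src = (rng.randrange(n), rng.randrange(n))
    dst = (rng.randrange(n), rng.randrange(n))
    dist = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])  # Manhattan
    cur, steps = src, 0
    while cur != dst:
        # local grid neighbours (always make progress of at least 1)
        cands = [(cur[0] + dx, cur[1] + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= cur[0] + dx < n and 0 <= cur[1] + dy < n]
        # one long-range contact, chosen with probability ~ d^{-alpha}
        nodes = [(x, y) for x in range(n) for y in range(n) if (x, y) != cur]
        weights = [dist(cur, w) ** -alpha for w in nodes]
        cands.append(rng.choices(nodes, weights=weights, k=1)[0])
        cur = min(cands, key=lambda w: dist(w, dst))   # greedy choice
        steps += 1
    return steps
```

Averaging `greedy_route_steps` over many seeds and grid sizes reproduces the qualitative picture above: delivery is fastest near $\alpha$ = 2 and degrades polynomially away from it.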
## IV Recent Real-World Empirical Experiments
### IV-A Dodds, Muhammad, and Watts Experiment
Dodds et al. tried to replicate Milgram’s experiment using electronic
messaging systems. Around 60,000 randomly selected email users attempted to
reach 18 target persons in 13 different countries.
The findings were quite unexpected. Successful social chains passed through
ties of intermediate-to-weak strength [12], suggesting that the effect of
highly connected hubs is negligible. The attrition of message chains showed
that messages could reach the target in a median of five to seven steps.
Notably, the attrition rate stayed constant over a certain period of time. The
384 completed chains (out of 24,163) had an average chain length of 4.05. The
authors considered this number misleading, which led them to evaluate the
experiment using new metrics.
The general results showed that the network structure alone is not enough to
interpret the network; the actions and perceptions of individuals are big
contributors.
### IV-B Leskovec et al. on a Large Instant-Messaging Network
Leskovec and Horvitz presented a study in 2008 that captured a month of
communication activity within the Microsoft Messenger instant-messaging system
[10]. The data set contained about 30 billion conversations among 240 million
people, and a graph was constructed containing 180 million nodes and 1.3
billion undirected edges. The network represents accounts that were active
during June 2006.
The resulting average path length among users was 6.6, with a median of 6. The
results showed that users with similar age, language, and location tend to
communicate with each other. Users of different genders tend to communicate
more, and for longer conversations [10]. Conversations tend to decrease as
distance increases; however, communication chains spanning relatively long
distances tend to carry longer conversations [10].
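Exact all-pairs distances are infeasible at this scale (180 million nodes), so averages like the 6.6 figure are typically estimated by running breadth-first search from a sample of source nodes. A minimal sketch of such an estimator (the function name is ours):

```python
import random
from collections import deque

def approx_avg_path_length(adj, n_sources=10, seed=0):
    """Estimate the mean shortest-path length of an undirected graph by
    BFS from a random sample of sources (reachable pairs only)."""
    rng = random.Random(seed)
    nodes = list(adj)
    total, count = 0, 0
    for src in rng.sample(nodes, min(n_sources, len(nodes))):
        dist = {src: 0}
        q = deque([src])
        while q:                        # standard BFS from this source
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total += sum(d for d in dist.values() if d > 0)
        count += len(dist) - 1
    return total / count if count else float("inf")
```

With enough sampled sources the estimate concentrates around the true mean while each BFS touches every edge only once.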
### IV-C Bakhshandeh’s Degrees of Separation in Twitter
Bakhshandeh et al. performed an interesting analysis to identify the degree of
separation between two Twitter users. They used a new search technique that
provides near-optimal solutions more efficiently than greedy approaches [2].
The optimal average separation path length between two random Twitter users
was 3.43, which required 67 requests to Twitter, while the near-optimal figure
was 3.88 using only 13.3 requests on average. Surprisingly, Twitter’s 3.43
degrees of separation is small, and the authors claimed this is indicative of
changing social norms in the modern connected world.
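The authors’ exact search is not reproduced here, but the core idea of touching far fewer nodes than a one-sided search is captured by bidirectional BFS, which expands the two frontiers level by level and meets in the middle (an illustrative sketch):

```python
from collections import deque

def separation(adj, a, b):
    """Degrees of separation between users a and b via bidirectional BFS:
    expand the smaller frontier one full level at a time and return the
    shortest meeting distance."""
    if a == b:
        return 0
    da, db = {a: 0}, {b: 0}
    qa, qb = deque([a]), deque([b])
    while qa and qb:
        # expand the side with the smaller frontier
        d, other, q = (da, db, qa) if len(qa) <= len(qb) else (db, da, qb)
        best = None
        for _ in range(len(q)):            # one complete BFS level
            v = q.popleft()
            for w in adj[v]:
                if w in other:             # frontiers meet
                    cand = d[v] + 1 + other[w]
                    best = cand if best is None else min(best, cand)
                if w not in d:
                    d[w] = d[v] + 1
                    q.append(w)
        if best is not None:
            return best
    return -1                              # no path: disconnected users
```

Because each side only explores to roughly half the separation depth, the number of visited nodes grows like the square root of what a one-sided BFS would touch.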
## V Further Extensions and Future Works
There is no doubt that small-world networks are, and will remain, a hot
research topic due to their nature. In this section, we propose some ideas for
future extensions which might offer solutions to unanswered or vaguely
answered questions.
Introducing machine learning techniques to small-world networks is, in our
opinion, a good idea. Network construction should be smart enough for the
result to be controllable, not only interpretable. Small-world networks could
be built to mimic the brain’s neural map, which might give us more insight
into how the human brain works. ML techniques can also be used to conserve the
“six degrees of separation” rule, or even to break it, depending entirely on
the application.
Introducing local reference nodes into such networks could be another idea
worth implementing. Reference nodes could hold regional knowledge about the
surrounding nodes; they could control the “hubs” and determine how new links
are distributed among reference nodes. Routers are one example. The assumption
that every node is identical is somewhat unrealistic for some applications,
which underscores the need for such a concept.
## VI Conclusion
In this paper, we discussed the famous phenomenon of small-world networks and
its importance in various areas. A few of the small-world-driven models were
surveyed, then recent real-world experiments in the context of complex
networks were mentioned, and finally further extensions and future works were
proposed. In the future, we will try to implement the suggested ideas on a
real data set; taking their pros and cons into account, the ideas will then be
evaluated against other state-of-the-art implementations.
## References
* [1] Amaral, Luís A. Nunes, et al. ”Classes of small-world networks.” Proceedings of the national academy of sciences 97.21 (2000): 11149-11152.
* [2] Bakhshandeh, Reza, et al. ”Degrees of separation in social networks.” Fourth Annual Symposium on Combinatorial Search. 2011.
* [3] Bollobas, B., de la Vega,W. F., The diameter of random regular graphs. (1982), 125–134.
* [4] Erdos, P., and Renyi, A., On the Evolution of Random Graphs. Magyar Tud. Akad. Mat. Kutató Int. Közl. 5 (1960), 17–61.
* [5] Fass, Craig, Mike Ginelli, and Brian Turtle. Six Degrees of Kevin Bacon. Plume Books, 1996.
* [6] Guare, John. Six degrees of separation: A play. Vintage, 1990.
* [7] J. Kleinberg. The small-world phenomenon: An algorithmic perspective. Proc. 32nd ACM Symposium on Theory of Computing, 2000.
* [8] J. Kleinberg. Complex Networks and Decentralized Search Algorithms. Proceedings of the International Congress of Mathematicians (ICM), 2006.
* [9] Kleinberg, J. Navigation in a Small World. Nature 406(2000), 845.
* [10] Leskovec, Jure, and Eric Horvitz. ”Planetary-scale views on a large instant-messaging network.” Proceedings of the 17th international conference on World Wide Web. ACM, 2008.
* [11] Milgram, Stanley. ”The small world problem.” Psychology today 2.1 (1967): 60-67.
* [12] Peter Sheridan Dodds, Roby Muhamad, Duncan J. Watts. An Experimental Study of Search in Global Social Networks. Science 301(2003), 827.
* [13] Strogatz, Steven H. ”Exploring complex networks.” Nature 410.6825 (2001): 268-276.
* [14] Travers, Jeffrey, and Stanley Milgram. ”An experimental study of the small world problem.” Sociometry (1969): 425-443.
* [15] Watts, D. J. and S. H. Strogatz. Collective dynamics of ’small-world’ networks. Nature 393:440-42(1998).
* [16] Watts, Duncan J. Six degrees: The science of a connected age. WW Norton and Company, 2004.
# Hidden-charm pentaquarks with triple strangeness due to the
$\Omega_{c}^{(*)}\bar{D}_{s}^{(*)}$ interactions
Fu-Lai Wang1,2<EMAIL_ADDRESS>Xin-Dian Yang1,2<EMAIL_ADDRESS>Rui
Chen4,5 chen<EMAIL_ADDRESS>Xiang Liu1,2,3111Corresponding author
<EMAIL_ADDRESS>1School of Physical Science and Technology, Lanzhou
University, Lanzhou 730000, China
2Research Center for Hadron and CSR Physics, Lanzhou University and Institute
of Modern Physics of CAS, Lanzhou 730000, China
3Lanzhou Center for Theoretical Physics, Key Laboratory of Theoretical Physics
of Gansu Province, and Frontiers Science Center for Rare Isotopes, Lanzhou
University, Lanzhou 730000, China
4Center of High Energy Physics, Peking University, Beijing 100871, China
5School of Physics and State Key Laboratory of Nuclear Physics and Technology,
Peking University, Beijing 100871, China
###### Abstract
Motivated by the successful interpretation of the observed $P_{c}$ and
$P_{cs}$ states in the meson-baryon molecular picture, we systematically
investigate the possible hidden-charm molecular pentaquark states with triple
strangeness arising from the $\Omega_{c}^{(*)}\bar{D}_{s}^{(*)}$ interactions.
We perform a dynamical calculation of these possible hidden-charm molecular
pentaquarks within the one-boson-exchange model, where the $S$-$D$ wave mixing
effect and the coupled-channel effect are taken into account. Our results
suggest that the $\Omega_{c}\bar{D}_{s}^{*}$ state with $J^{P}={3}/{2}^{-}$
and the $\Omega_{c}^{*}\bar{D}_{s}^{*}$ state with $J^{P}={5}/{2}^{-}$ can be
recommended as candidates for hidden-charm molecular pentaquarks with triple
strangeness. Furthermore, we discuss the two-body hidden-charm strong decay
behaviors of these possible molecular pentaquarks by adopting the
quark-interchange model. These predictions are expected to be tested at LHCb,
a potential research issue as more experimental data are accumulated in the
near future.
## I Introduction
As is well known, the study of the matter spectrum is an important way to
explore matter structures and the interactions involved. In hadron physics,
since the discovery of the $X(3872)$ by the Belle Collaboration Choi:2003ue ,
a series of exotic states has been observed, benefiting from the accumulation
of more and more high-precision experimental data, and these exotic hadrons
have stimulated extensive studies over the past two decades (see the review
articles Chen:2016qju ; Liu:2019zoy ; Olsen:2017bmm ; Guo:2017jvc ;
Liu:2013waa ; Hosaka:2016pey ; Brambilla:2019esw for the relevant progress).
Exploring these exotic hadronic states not only gives new insights for
revealing hadron structures, but also provides useful hints for deepening our
understanding of the nonperturbative behavior of quantum chromodynamics (QCD)
in the low-energy region.
In fact, the investigation of pentaquark states has a long history, which can
be traced back to the birth of the quark model GellMann:1964nj ; Zweig:1981pd
. Among exotic hadronic states, the hidden-charm molecular pentaquarks
attracted much attention as early as 2010 Li:2014gra ; Karliner:2015ina ;
Wu:2010jy ; Wang:2011rga ; Yang:2011wz ; Wu:2012md ; Chen:2015loa and became a
hot topic with the discovery of the $P_{c}(4380)$ and $P_{c}(4450)$ in the
$\Lambda_{b}\to J/\psi pK$ process by the LHCb Collaboration Aaij:2015tga . In
2019, there was new progress with the observation of three narrow structures
[$P_{c}(4312)$, $P_{c}(4440)$, and $P_{c}(4457)$] from revisiting the process
$\Lambda_{b}\to J/\psi pK$ with more collected data Aaij:2019vzc ; they lie
just below the corresponding thresholds of an $S$-wave charmed baryon and an
$S$-wave anticharmed meson. This provides strong evidence to support the
existence of hidden-charm meson-baryon molecular states. More recently, the
LHCb Collaboration reported a possible hidden-charm pentaquark with
strangeness, the $P_{cs}(4459)$ Aaij:2020gdg , and this structure can be
assigned as the $\Xi_{c}\bar{D}^{*}$ molecular state Chen:2016ryt ; Wu:2010vk
; Hofmann:2005sw ; Anisovich:2015zqa ; Wang:2015wsa ; Feijoo:2015kts ;
Lu:2016roh ; Xiao:2019gjd ; Shen:2020gpw ; Chen:2015sxa ; Zhang:2020cdi ;
Wang:2019nvm ; Chen:2020uif ; Peng:2020hql ; 1830432 ; 1830426 ; Liu:2020hcv ;
1839195 .
Facing the present status of exploring the hidden-charm molecular pentaquarks
Chen:2016qju ; Liu:2019zoy ; Olsen:2017bmm ; Guo:2017jvc , we naturally
pose a meaningful question: why are we interested in the hidden-charm
molecular pentaquark states? The hidden-charm pentaquark states are relatively
easy to produce via the bottom baryon weak decays in the experimental
facilities Aaij:2019vzc ; Aaij:2020gdg , and the hidden-charm quantum number
is a crucial condition for the existence of the hadronic molecules Li:2014gra
; Karliner:2015ina . In addition, it is worth indicating that heavy
hadrons are more likely to form bound states due to their relatively
small kinetic terms, and the interactions between the charmed baryon and the
anticharmed meson may be mediated by exchanging a series of allowed light
mesons Chen:2016qju ; Liu:2019zoy . Indeed, these announced hidden-charm
pentaquark states have a ($c\bar{c}$) pair Chen:2016qju ; Liu:2019zoy ;
Olsen:2017bmm ; Guo:2017jvc .
Based on the present research progress on the hidden-charm pentaquarks
Chen:2016qju ; Liu:2019zoy ; Olsen:2017bmm ; Guo:2017jvc , theorists
should pay more attention to making reliable predictions of various types
of hidden-charm molecular pentaquarks and to offering concrete suggestions
for searching for them at forthcoming experiments. Generally speaking, there
are two natural directions for extending the family of the hidden-charm
molecular pentaquark states, which occupies a very special position in hadron
spectroscopy. First, there may exist a series of hidden-charm molecular
pentaquarks with different strangeness. Second, there is good reason to
believe that more hidden-charm molecular pentaquark states with higher masses
exist. In fact,
we already studied the $\Xi_{c}^{(\prime,*)}\bar{D}_{s}^{(*)}$ systems with
double strangeness Wang:2020bjt and the $\mathcal{B}_{c}^{(*)}\bar{T}$
systems with $\mathcal{B}_{c}^{(*)}=\Lambda_{c}/\Sigma_{c}^{(*)}$ and
$\bar{T}=\bar{D}_{1}/\bar{D}_{2}^{*}$ Wang:2019nwt , and predicted a series of
possible candidates of the hidden-charm molecular pentaquarks. In fact, the
triple-strangeness hidden-charm pentaquarks may be regarded as systems that
can be used to reveal the binding mechanism and the importance of the scalar-
meson exchange as they are not expected to exist in the treatment of Ref.
1839195 . Thus, we investigate the possible hidden-charm molecular pentaquarks
with triple strangeness from the $\Omega_{c}^{(*)}\bar{D}_{s}^{(*)}$
interactions, which will be a main task of the present work.
In the present work, we perform a dynamical calculation of the possible
hidden-charm molecular pentaquark states with triple strangeness by adopting
the one-boson-exchange (OBE) model Chen:2016qju ; Liu:2019zoy , which involves
the interactions between an $S$-wave charmed baryon $\Omega_{c}^{(*)}$ and an
$S$-wave anticharmed-strange meson $\bar{D}_{s}^{(*)}$. In the concrete
calculation, the $S$-$D$ wave mixing effect and the coupled channel effect are
taken into account. Furthermore, we study the two-body hidden-charm strong
decay behaviors of these possible hidden-charm molecular pentaquarks. Here, we
adopt the quark-interchange model to estimate the transition amplitudes for
the decay widths Barnes:1991em ; Barnes:1999hs ; Barnes:2000hu , which is
widely used to give the decay information of the exotic hadronic states during
the last few decades Wang:2018pwi ; Wang:2019spc ; Xiao:2019spy ; Wang:2020prk
; Hilbert:2007hc . We hope that the present investigation is a key step
toward completing the family of the hidden-charm molecular pentaquark states
and may provide crucial information for searching for possible hidden-charm
molecular pentaquarks with triple strangeness. With higher-statistics data
accumulated at Run III of the LHC and after the High-Luminosity-LHC upgrade
Bediaga:2018lhg ,
it is highly probable that these possible hidden-charm molecular pentaquarks
with triple strangeness can be detected at the LHCb Collaboration in the near
future, which will be full of opportunities and challenges.
The remainder of this paper is organized as follows. In Sec. II, we introduce
how to deduce the effective potentials and present the bound state properties
of these investigated $\Omega_{c}^{(*)}\bar{D}_{s}^{(*)}$ systems. We present
the quark-interchange model and the two-body strong decay behaviors of these
possible molecular pentaquarks in Sec. III. Finally, a short summary follows
in Sec. IV.
## II The $\Omega_{c}^{(*)}\bar{D}_{s}^{(*)}$ interactions
### II.1 OBE effective potentials
In the present work, we study the interactions between an $S$-wave charmed
baryon $\Omega_{c}^{(*)}$ and an $S$-wave anticharmed-strange meson
$\bar{D}_{s}^{(*)}$. Here, we adopt the OBE model Chen:2016qju ; Liu:2019zoy ,
and consider the effective potentials from the $f_{0}(980)$, $\eta$, and
$\phi$ exchanges. In particular, we need to emphasize that the light scalar
meson $f_{0}(980)$ exchange provides an effective interaction for these
investigated systems, and we do not consider the $\sigma$ and $a_{0}(980)$
exchanges in our calculation, since the $\sigma$ is usually considered as a
meson with only up and down quarks and the $a_{0}(980)$ is the light isovector
scalar meson. In this subsection, we construct the relevant wave functions and
effective Lagrangians, and deduce the OBE effective potentials in the
coordinate space for all of the investigated systems.
Firstly, we introduce the flavor and spin-orbital wave functions involved in
our calculation. For the $\Omega_{c}^{(*)}\bar{D}_{s}^{(*)}$ systems, the
flavor wave function $|I,I_{3}\rangle$ is quite simple and reads as
$|0,0\rangle=|\Omega_{c}^{(*)0}{D}_{s}^{(*)-}\rangle$, where $I$ and $I_{3}$
are the isospin and its third component of the discussed system. In addition,
the spin-orbital wave functions $|{}^{2S+1}L_{J}\rangle$ for these
investigated $\Omega_{c}^{(*)}\bar{D}_{s}^{(*)}$ systems are explicitly
written as
$\displaystyle\left|\Omega_{c}\bar{D}_{s}\left({}^{2S+1}L_{J}\right)\right\rangle=\sum_{m,m_{L}}C^{J,M}_{\frac{1}{2}m,Lm_{L}}\chi_{\frac{1}{2}m}\left|Y_{L,m_{L}}\right\rangle,$
$\displaystyle\left|\Omega_{c}^{*}\bar{D}_{s}\left({}^{2S+1}L_{J}\right)\right\rangle=\sum_{m,m_{L}}C^{J,M}_{\frac{3}{2}m,Lm_{L}}\Phi_{\frac{3}{2}m}\left|Y_{L,m_{L}}\right\rangle,$
$\displaystyle\left|\Omega_{c}\bar{D}_{s}^{*}\left({}^{2S+1}L_{J}\right)\right\rangle=\sum_{m,m^{\prime},m_{S},m_{L}}C^{S,m_{S}}_{\frac{1}{2}m,1m^{\prime}}C^{J,M}_{Sm_{S},Lm_{L}}\chi_{\frac{1}{2}m}\epsilon_{m^{\prime}}^{\mu}\left|Y_{L,m_{L}}\right\rangle,$
$\displaystyle\left|\Omega_{c}^{*}\bar{D}_{s}^{*}\left({}^{2S+1}L_{J}\right)\right\rangle=\sum_{m,m^{\prime},m_{S},m_{L}}C^{S,m_{S}}_{\frac{3}{2}m,1m^{\prime}}C^{J,M}_{Sm_{S},Lm_{L}}\Phi_{\frac{3}{2}m}\epsilon_{m^{\prime}}^{\mu}\left|Y_{L,m_{L}}\right\rangle.$
(2.1)
In the above expressions, $S$, $L$, and $J$ denote the spin, orbit angular
momentum, and total angular momentum for the discussed system, respectively.
The constant $C^{e,f}_{ab,cd}$ is the Clebsch-Gordan coefficient, and
$|Y_{L,m_{L}}\rangle$ is the spherical harmonics function. In the static
limit, the polarization vector $\epsilon_{m}^{\mu}\,(m=0,\,\pm 1)$ of the
spin-1 field can be expressed as $\epsilon_{0}^{\mu}=\left(0,0,0,-1\right)$
and $\epsilon_{\pm}^{\mu}=\left(0,\,\pm 1,\,i,\,0\right)/\sqrt{2}$.
$\chi_{\frac{1}{2}m}$ stands for the spin wave function of the charmed baryon
with spin $S={1}/{2}$, and the polarization tensor $\Phi_{\frac{3}{2}m}$ of
the charmed baryon with spin quantum number $S={3}/{2}$ can be written in a
general form, i.e.,
$\displaystyle\Phi_{\frac{3}{2}m}=\sum_{m_{1},m_{2}}C^{\frac{3}{2},m}_{\frac{1}{2}m_{1},1m_{2}}\chi_{\frac{1}{2}m_{1}}\epsilon_{m_{2}}^{\mu}.$
(2.2)
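The construction in Eq. (2.2) is a standard Clebsch-Gordan coupling of a spin-$1/2$ spinor with a spin-1 polarization vector. As a minimal sketch (function name is ours, not from the paper), the coefficients $C^{3/2,m}_{\frac{1}{2}m_{1},1m_{2}}$ can be tabulated symbolically with sympy and checked for the normalization they must satisfy:

```python
from sympy import S, simplify
from sympy.physics.quantum.cg import CG

def phi32_coeffs(m):
    """Clebsch-Gordan coefficients C^{3/2,m}_{1/2 m1, 1 m2} entering the
    spin-3/2 polarization tensor Phi_{3/2 m} = sum_{m1,m2} C chi_{1/2 m1} eps_{m2}.
    sympy's CG takes arguments in the order (j1, m1, j2, m2, j3, m3)."""
    coeffs = {}
    for m1 in (S(1)/2, -S(1)/2):
        m2 = m - m1  # magnetic quantum numbers must add up
        if m2 in (S(-1), S(0), S(1)):
            c = CG(S(1)/2, m1, S(1), m2, S(3)/2, m).doit()
            if c != 0:
                coeffs[(m1, m2)] = c
    return coeffs

# unitarity of the coupling: the coefficients for each m square-sum to 1
for m in (S(3)/2, S(1)/2, -S(1)/2, -S(3)/2):
    assert simplify(sum(c**2 for c in phi32_coeffs(m).values()) - 1) == 0
```

For instance, $\Phi_{\frac{3}{2}\frac{1}{2}}$ receives $\sqrt{2/3}\,\chi_{\frac{1}{2}\frac{1}{2}}\epsilon_{0}+\sqrt{1/3}\,\chi_{\frac{1}{2},-\frac{1}{2}}\epsilon_{+1}$, which the sketch reproduces.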
In order to write out the relevant scattering amplitudes quantitatively, we
usually adopt the effective Lagrangian approach. To be convenient, we
construct two types of super-fields $\mathcal{S}_{\mu}$ and
$H^{(\overline{Q})}_{a}$ via the heavy quark limit Wise:1992hn . The
superfield $\mathcal{S}_{\mu}$ is expressed as a combination of the charmed
baryons $\mathcal{B}_{6}$ with $J^{P}=1/2^{+}$ and $\mathcal{B}^{*}_{6}$ with
$J^{P}=3/2^{+}$ in the $6_{F}$ flavor representation Chen:2017xat , and the
superfield $H^{(\overline{Q})}_{a}$ includes the anticharmed-strange vector
meson $\bar{D}^{*}_{s}$ with $J^{P}=1^{-}$ and the pseudoscalar meson
$\bar{D}_{s}$ with $J^{P}=0^{-}$ Ding:2008gr . The general expressions of the
super-fields $\mathcal{S}_{\mu}$ and $H^{(\overline{Q})}_{a}$ can be given by
$\displaystyle\mathcal{S}_{\mu}=-\sqrt{\frac{1}{3}}(\gamma_{\mu}+v_{\mu})\gamma^{5}\mathcal{B}_{6}+\mathcal{B}_{6\mu}^{*},$
$\displaystyle H^{(\overline{Q})}_{a}=\left(\bar{D}^{*(\overline{Q})\mu}_{a}\gamma_{\mu}-\bar{D}^{(\overline{Q})}_{a}\gamma_{5}\right)\frac{1-\slashed{v}}{2}.$
(2.3)
Here, $v_{\mu}=(1,\bm{0})$ is the four velocity under the nonrelativistic
approximation.
With the above preparation, we construct the relevant effective Lagrangians to
describe the interactions among the heavy hadrons
$\mathcal{B}_{6}^{(*)}/\bar{D}_{s}^{(*)}$ and the light scalar, pseudoscalar,
or vector mesons as Ding:2008gr ; Chen:2017xat
$\displaystyle\mathcal{L}_{\mathcal{B}^{(*)}_{6}}$ $\displaystyle=$
$\displaystyle
l_{S}\langle\bar{\mathcal{S}}_{\mu}f_{0}\mathcal{S}^{\mu}\rangle-\frac{3}{2}g_{1}\varepsilon^{\mu\nu\lambda\kappa}v_{\kappa}\langle\bar{\mathcal{S}}_{\mu}{\mathcal{A}}_{\nu}\mathcal{S}_{\lambda}\rangle$
$\displaystyle+i\beta_{S}\langle\bar{\mathcal{S}}_{\mu}v_{\alpha}\left(\mathcal{V}^{\alpha}-\rho^{\alpha}\right)\mathcal{S}^{\mu}\rangle+\lambda_{S}\langle\bar{\mathcal{S}}_{\mu}F^{\mu\nu}(\rho)\mathcal{S}_{\nu}\rangle,$
$\displaystyle\mathcal{L}_{H}$ $\displaystyle=$ $\displaystyle
g_{S}\langle\bar{H}^{(\overline{Q})}_{a}f_{0}H^{(\overline{Q})}_{a}\rangle+ig\langle\bar{H}^{(\overline{Q})}_{a}\gamma_{\mu}{\mathcal{A}}_{ab}^{\mu}\gamma_{5}H^{(\overline{Q})}_{b}\rangle$
(2.4)
$\displaystyle-i\beta\langle\bar{H}^{(\overline{Q})}_{a}v_{\mu}\left(\mathcal{V}^{\mu}-\rho^{\mu}\right)_{ab}H^{(\overline{Q})}_{b}\rangle$
$\displaystyle+i\lambda\langle\bar{H}^{(\overline{Q})}_{a}\sigma_{\mu\nu}F^{\mu\nu}(\rho)_{ab}H^{(\overline{Q})}_{b}\rangle,$
which satisfy the requirement of the heavy quark symmetry, the chiral
symmetry, and the hidden local symmetry Casalbuoni:1992gi ; Casalbuoni:1996pg
; Yan:1992gz ; Harada:2003jx ; Bando:1987br . The axial current
$\mathcal{A}_{\mu}$ and the vector current ${\cal V}_{\mu}$ can be defined as
${\mathcal{A}}_{\mu}=\left(\xi^{\dagger}\partial_{\mu}\xi-\xi\partial_{\mu}\xi^{\dagger}\right)/2$
and
${\mathcal{V}}_{\mu}=\left(\xi^{\dagger}\partial_{\mu}\xi+\xi\partial_{\mu}\xi^{\dagger}\right)/2$,
respectively. Here, the pseudo-Goldstone field can be written as
$\xi=\exp(i\mathbb{P}/f_{\pi})$, where $f_{\pi}$ is the pion decay constant.
In the above formulas, the vector meson field $\rho_{\mu}$ and its strength
tensor $F_{\mu\nu}(\rho)$ are $\rho_{\mu}=i{g_{V}}\mathbb{V}_{\mu}/{\sqrt{2}}$
and
$F_{\mu\nu}(\rho)=\partial_{\mu}\rho_{\nu}-\partial_{\nu}\rho_{\mu}+[\rho_{\mu},\rho_{\nu}]$,
respectively. Here, $\mathcal{B}_{6}^{(*)}$, $\mathbb{V}_{\mu}$, and
${\mathbb{P}}$ are the matrices of the charmed baryon in the $6_{F}$ flavor
representation, light vector meson, and light pseudoscalar meson, which can be
written as
$\displaystyle\mathcal{B}_{6}^{(*)}=\left(\begin{array}{ccc}\Sigma_{c}^{(*)++}&\frac{\Sigma_{c}^{(*)+}}{\sqrt{2}}&\frac{\Xi_{c}^{(\prime,*)+}}{\sqrt{2}}\\ \frac{\Sigma_{c}^{(*)+}}{\sqrt{2}}&\Sigma_{c}^{(*)0}&\frac{\Xi_{c}^{(\prime,*)0}}{\sqrt{2}}\\ \frac{\Xi_{c}^{(\prime,*)+}}{\sqrt{2}}&\frac{\Xi_{c}^{(\prime,*)0}}{\sqrt{2}}&\Omega_{c}^{(*)0}\end{array}\right),$
$\displaystyle\mathbb{V}_{\mu}=\left(\begin{array}{ccc}\frac{\rho^{0}}{\sqrt{2}}+\frac{\omega}{\sqrt{2}}&\rho^{+}&K^{*+}\\ \rho^{-}&-\frac{\rho^{0}}{\sqrt{2}}+\frac{\omega}{\sqrt{2}}&K^{*0}\\ K^{*-}&\bar{K}^{*0}&\phi\end{array}\right)_{\mu},$
$\displaystyle\mathbb{P}=\left(\begin{array}{ccc}\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta}{\sqrt{6}}&\pi^{+}&K^{+}\\ \pi^{-}&-\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta}{\sqrt{6}}&K^{0}\\ K^{-}&\bar{K}^{0}&-\sqrt{\frac{2}{3}}\eta\end{array}\right),$
(2.17)
respectively. By expanding the compact effective Lagrangians to the leading
order of the pseudo-Goldstone field $\xi$, we can further obtain the concrete
effective Lagrangians. The effective Lagrangians for $\mathcal{B}_{6}^{(*)}$
and the light mesons are expressed as
$\displaystyle\mathcal{L}_{\mathcal{B}_{6}^{(*)}\mathcal{B}_{6}^{(*)}f_{0}}$
$\displaystyle=$ $\displaystyle-
l_{S}\langle\bar{\mathcal{B}}_{6}f_{0}\mathcal{B}_{6}\rangle+l_{S}\langle\bar{\mathcal{B}}_{6\mu}^{*}f_{0}\mathcal{B}_{6}^{*\mu}\rangle$
(2.18)
$\displaystyle-\frac{l_{S}}{\sqrt{3}}\langle\bar{\mathcal{B}}_{6\mu}^{*}f_{0}\left(\gamma^{\mu}+v^{\mu}\right)\gamma^{5}\mathcal{B}_{6}\rangle+h.c.,$
$\displaystyle\mathcal{L}_{\mathcal{B}_{6}^{(*)}\mathcal{B}_{6}^{(*)}\mathbb{P}}$
$\displaystyle=$ $\displaystyle
i\frac{g_{1}}{2f_{\pi}}\varepsilon^{\mu\nu\lambda\kappa}v_{\kappa}\langle\bar{\mathcal{B}}_{6}\gamma_{\mu}\gamma_{\lambda}\partial_{\nu}\mathbb{P}\mathcal{B}_{6}\rangle$
(2.19)
$\displaystyle-i\frac{3g_{1}}{2f_{\pi}}\varepsilon^{\mu\nu\lambda\kappa}v_{\kappa}\langle\bar{\mathcal{B}}_{6\mu}^{*}\partial_{\nu}\mathbb{P}\mathcal{B}_{6\lambda}^{*}\rangle$
$\displaystyle+i\frac{\sqrt{3}g_{1}}{2f_{\pi}}v_{\kappa}\varepsilon^{\mu\nu\lambda\kappa}\langle\bar{\mathcal{B}}_{6\mu}^{*}\partial_{\nu}\mathbb{P}{\gamma_{\lambda}\gamma^{5}}\mathcal{B}_{6}\rangle+h.c.,$
$\displaystyle\mathcal{L}_{\mathcal{B}_{6}^{(*)}\mathcal{B}_{6}^{(*)}\mathbb{V}}$
$\displaystyle=$
$\displaystyle-\frac{\beta_{S}g_{V}}{\sqrt{2}}\langle\bar{\mathcal{B}}_{6}v\cdot\mathbb{V}\mathcal{B}_{6}\rangle$
(2.20)
$\displaystyle-i\frac{\lambda_{S}g_{V}}{3\sqrt{2}}\langle\bar{\mathcal{B}}_{6}\gamma_{\mu}\gamma_{\nu}\left(\partial^{\mu}\mathbb{V}^{\nu}-\partial^{\nu}\mathbb{V}^{\mu}\right)\mathcal{B}_{6}\rangle$
$\displaystyle-\frac{\beta_{S}g_{V}}{\sqrt{6}}\langle\bar{\mathcal{B}}_{6\mu}^{*}v\cdot\mathbb{V}\left(\gamma^{\mu}+v^{\mu}\right)\gamma^{5}\mathcal{B}_{6}\rangle$
$\displaystyle-i\frac{\lambda_{S}g_{V}}{\sqrt{6}}\langle\bar{\mathcal{B}}_{6\mu}^{*}\left(\partial^{\mu}\mathbb{V}^{\nu}-\partial^{\nu}\mathbb{V}^{\mu}\right)\left(\gamma_{\nu}+v_{\nu}\right)\gamma^{5}\mathcal{B}_{6}\rangle$
$\displaystyle+\frac{\beta_{S}g_{V}}{\sqrt{2}}\langle\bar{\mathcal{B}}_{6\mu}^{*}v\cdot\mathbb{V}\mathcal{B}_{6}^{*\mu}\rangle$
$\displaystyle+i\frac{\lambda_{S}g_{V}}{\sqrt{2}}\langle\bar{\mathcal{B}}_{6\mu}^{*}\left(\partial^{\mu}\mathbb{V}^{\nu}-\partial^{\nu}\mathbb{V}^{\mu}\right)\mathcal{B}_{6\nu}^{*}\rangle+h.c.,$
and the effective Lagrangians to describe the $S$-wave anticharmed-strange
mesons $\bar{D}_{s}^{(*)}$ and the light scalar, pseudoscalar, or vector
mesons are
$\displaystyle\mathcal{L}_{{\bar{D}}^{(*)}{\bar{D}}^{(*)}f_{0}}$
$\displaystyle=$
$\displaystyle-2g_{S}{\bar{D}}_{a}{\bar{D}}_{a}^{{\dagger}}f_{0}+2g_{S}{\bar{D}}_{a\mu}^{*}{\bar{D}}_{a}^{*\mu{\dagger}}f_{0},$
(2.21) $\displaystyle\mathcal{L}_{{\bar{D}}^{(*)}{\bar{D}}^{(*)}\mathbb{P}}$
$\displaystyle=$
$\displaystyle\frac{2ig}{f_{\pi}}v^{\alpha}\varepsilon_{\alpha\mu\nu\lambda}{\bar{D}}_{a}^{*\mu{\dagger}}{\bar{D}}_{b}^{*\lambda}\partial^{\nu}{\mathbb{P}}_{ab}$
(2.22)
$\displaystyle+\frac{2g}{f_{\pi}}\left({\bar{D}}_{a}^{*\mu{\dagger}}{\bar{D}}_{b}+{\bar{D}}_{a}^{{\dagger}}{\bar{D}}_{b}^{*\mu}\right)\partial_{\mu}{\mathbb{P}}_{ab},$
$\displaystyle\mathcal{L}_{{\bar{D}}^{(*)}{\bar{D}}^{(*)}\mathbb{V}}$
$\displaystyle=$ $\displaystyle\sqrt{2}\beta
g_{V}{\bar{D}}_{a}{\bar{D}}_{b}^{{\dagger}}v\cdot\mathbb{V}_{ab}-\sqrt{2}\beta
g_{V}{\bar{D}}_{a\mu}^{*}{\bar{D}}_{b}^{*\mu{\dagger}}v\cdot\mathbb{V}_{ab}$
$\displaystyle-2\sqrt{2}i\lambda
g_{V}{\bar{D}}_{a}^{*\mu{\dagger}}{\bar{D}}_{b}^{*\nu}\left(\partial_{\mu}\mathbb{V}_{\nu}-\partial_{\nu}\mathbb{V}_{\mu}\right)_{ab}$
$\displaystyle-2\sqrt{2}\lambda
g_{V}v^{\lambda}\varepsilon_{\lambda\mu\alpha\beta}\left({\bar{D}}_{a}^{*\mu{\dagger}}{\bar{D}}_{b}+{\bar{D}}_{a}^{{\dagger}}{\bar{D}}_{b}^{*\mu}\right)\partial^{\alpha}\mathbb{V}^{\beta}_{ab}.$
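As a quick consistency check on the flavor structure entering these Lagrangians, the light pseudoscalar matrix $\mathbb{P}$ of Eq. (2.17) must be traceless, since it lives in the octet. A small sympy sketch (the field symbol names are ours):

```python
from sympy import Matrix, Rational, sqrt, symbols, simplify

# placeholder symbols standing in for the meson fields of Eq. (2.17)
pi0, eta, pip, pim, Kp, Km, K0, K0b = symbols(
    'pi0 eta pi_p pi_m K_p K_m K0 K0b')

P = Matrix([
    [pi0/sqrt(2) + eta/sqrt(6), pip,                        Kp],
    [pim,                       -pi0/sqrt(2) + eta/sqrt(6), K0],
    [Km,                        K0b,                        -sqrt(Rational(2, 3))*eta],
])

# octet => traceless: the pi0 terms cancel and the eta coefficients
# 1/sqrt(6) + 1/sqrt(6) - sqrt(2/3) sum to zero
assert simplify(P.trace()) == 0
```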
In the above effective Lagrangians, the coupling constants can be either
extracted from the experimental data or calculated by the theoretical models,
and the signs of these coupling constants are fixed via the quark model
Riska:2000gd . The values of these coupling constants are $l_{S}=6.20$,
$g_{S}=0.76$ (in this work we consider the contribution from the light scalar
meson $f_{0}(980)$ exchange, and the corresponding coupling constants in the
effective Lagrangians [Eqs. (2.18) and (2.21)] are approximately taken to be
the same as those for the light scalar $\sigma$), $g_{1}=0.94$,
$g=0.59$, $f_{\pi}=132~{}\rm{MeV}$, $\beta_{S}g_{V}=10.14$, $\beta
g_{V}=-5.25$, $\lambda_{S}g_{V}=19.2~{}\rm{GeV}^{-1}$, and $\lambda
g_{V}=-3.27~{}\rm{GeV}^{-1}$ Chen:2019asm , which are widely used to discuss
the hadronic molecular states Wang:2020bjt ; Chen:2017xat ; Wang:2019nwt ;
Chen:2019asm ; He:2015cea ; He:2019ify ; Chen:2018pzd . In particular, we need
to emphasize that these input coupling constants can well reproduce the masses
of the $P_{c}(4312)$, $P_{c}(4440)$, and $P_{c}(4457)$ Aaij:2019vzc under the
meson-baryon molecular picture when adopting the OBE model Chen:2019asm ;
He:2019ify .
We follow the standard strategy to deduce the effective potentials in the
coordinate space in Refs. Wang:2020dya ; Wang:2019nwt ; Wang:2019aoc , which
is a lengthy and tedious calculation. In Fig. 1, we present the relevant
Feynman diagram for the
$\Omega_{c}^{(*)}\bar{D}_{s}^{(*)}\to\Omega_{c}^{(*)}\bar{D}_{s}^{(*)}$
scattering processes.
Figure 1: Relevant Feynman diagram for the
$\Omega_{c}^{(*)}\bar{D}_{s}^{(*)}\to\Omega_{c}^{(*)}\bar{D}_{s}^{(*)}$
scattering processes.
At the hadronic level, we firstly write out the scattering amplitude
$\mathcal{M}(h_{1}h_{2}\to h_{3}h_{4})$ of the scattering process
$h_{1}h_{2}\to h_{3}h_{4}$ by considering the effective Lagrangian approach.
And then, the effective potential in momentum space
$\mathcal{V}^{h_{1}h_{2}\to h_{3}h_{4}}(\bm{q})$ can be related to the
scattering amplitude $\mathcal{M}(h_{1}h_{2}\to h_{3}h_{4})$ with the help of
the Breit approximation Breit:1929zz ; Breit:1930zza and the nonrelativistic
normalization, i.e.,
$\displaystyle\mathcal{V}_{E}^{h_{1}h_{2}\to h_{3}h_{4}}(\bm{q})$
$\displaystyle=$ $\displaystyle-\frac{\mathcal{M}(h_{1}h_{2}\to
h_{3}h_{4})}{\sqrt{\prod_{i}2m_{i}\prod_{f}2m_{f}}},$ (2.24)
where $m_{i}$ and $m_{f}$ are the masses of the initial states
$(h_{1},\,h_{2})$ and final states $(h_{3},\,h_{4})$, respectively. By
performing the Fourier transformation, the effective potential in the
coordinate space $\mathcal{V}^{h_{1}h_{2}\to h_{3}h_{4}}(\bm{r})$ can be
deduced as
$\displaystyle\mathcal{V}^{h_{1}h_{2}\to h_{3}h_{4}}_{E}(\bm{r})=\int\frac{d^{3}\bm{q}}{(2\pi)^{3}}e^{i\bm{q}\cdot\bm{r}}\,\mathcal{V}_{E}^{h_{1}h_{2}\to h_{3}h_{4}}(\bm{q})\,\mathcal{F}^{2}(q^{2},m_{E}^{2}).$
(2.25)
In order to reflect the finite size effect of the discussed hadrons and
compensate for the off-shell effect of the exchanged light mesons Wang:2020dya ,
we need to introduce the monopole form factor
$\mathcal{F}(q^{2},m_{E}^{2})=(\Lambda^{2}-m_{E}^{2})/(\Lambda^{2}-q^{2})$ in
the interaction vertex Tornqvist:1993ng ; Tornqvist:1993vu . Here, $\Lambda$,
$m_{E}$, and $q$ are the cutoff parameter, the mass, and the four momentum of
the exchanged light meson, respectively.
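For a single scalar exchange, the Fourier transform above with the monopole form factor squared has a well-known closed form, $Y(\Lambda,m_{E},r)=\frac{1}{4\pi r}\left(e^{-m_{E}r}-e^{-\Lambda r}\right)-\frac{\Lambda^{2}-m_{E}^{2}}{8\pi\Lambda}e^{-\Lambda r}$. A short numerical sketch (ours, not code from the paper) cross-checks the closed form against the direct Fourier integral:

```python
import numpy as np
from scipy.integrate import quad

def Y_closed(Lam, mE, r):
    """Closed form of int d^3q/(2pi)^3 e^{iq.r} F^2(q)/(q^2 + mE^2)
    with the monopole form factor F = (Lam^2 - mE^2)/(Lam^2 + q^2)."""
    return ((np.exp(-mE*r) - np.exp(-Lam*r)) / (4*np.pi*r)
            - (Lam**2 - mE**2) * np.exp(-Lam*r) / (8*np.pi*Lam))

def Y_numeric(Lam, mE, r):
    # angular integration done analytically:
    # Y(r) = 1/(2 pi^2 r) int_0^inf dq q sin(qr) f(q^2)
    f = lambda q: (q * np.sin(q*r) / (q**2 + mE**2)
                   * ((Lam**2 - mE**2) / (Lam**2 + q**2))**2)
    val, _ = quad(f, 0.0, np.inf, limit=400)
    return val / (2*np.pi**2 * r)

# illustrative numbers: mE ~ m_phi ~ 1.02 GeV, Lam = 2 GeV, r in GeV^{-1}
print(abs(Y_closed(2.0, 1.02, 3.0) - Y_numeric(2.0, 1.02, 3.0)))
```

The closed form follows from the partial-fraction decomposition $\frac{(\Lambda^{2}-m_{E}^{2})^{2}}{(\bm{q}^{2}+m_{E}^{2})(\bm{q}^{2}+\Lambda^{2})^{2}}=\frac{1}{\bm{q}^{2}+m_{E}^{2}}-\frac{1}{\bm{q}^{2}+\Lambda^{2}}-\frac{\Lambda^{2}-m_{E}^{2}}{(\bm{q}^{2}+\Lambda^{2})^{2}}$, each piece of which has an elementary transform.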
In addition, we also need a series of normalization relations for the heavy
hadrons $D_{s}$, $D_{s}^{*}$, $\Omega_{c}$, and $\Omega_{c}^{*}$, i.e.,
$\displaystyle\langle 0|D_{s}|c\bar{s}\left(0^{-}\right)\rangle$
$\displaystyle=$ $\displaystyle\sqrt{M_{D_{s}}},$ (2.26) $\displaystyle\langle
0|D_{s}^{*\mu}|c\bar{s}\left(1^{-}\right)\rangle$ $\displaystyle=$
$\displaystyle\sqrt{M_{D_{s}^{*}}}\epsilon^{\mu},$ (2.27)
$\displaystyle\langle 0|\Omega_{c}|css\left({1}/{2}^{+}\right)\rangle$
$\displaystyle=$
$\displaystyle\sqrt{2M_{\Omega_{c}}}{\left(\chi_{\frac{1}{2}m},\frac{\bm{\sigma}\cdot\bm{p}}{2M_{\Omega_{c}}}\chi_{\frac{1}{2}m}\right)^{T}},$
(2.28) $\displaystyle\langle
0|\Omega_{c}^{*\mu}|css\left({3}/{2}^{+}\right)\rangle$ $\displaystyle=$
$\displaystyle\sum_{m_{1},m_{2}}C_{1/2,m_{1};1,m_{2}}^{3/2,m_{1}+m_{2}}\sqrt{2M_{\Omega_{c}^{*}}}$
(2.29)
$\displaystyle\times\left(\chi_{\frac{1}{2}m_{1}},\frac{\bm{\sigma}\cdot\bm{p}}{2M_{\Omega_{c}^{*}}}\chi_{\frac{1}{2}m_{1}}\right)^{T}\epsilon^{\mu}_{m_{2}}.$
With the above preparation, we can deduce the OBE effective potentials in the
coordinate space for all of the investigated processes, which are collected in
Appendix A.
### II.2 Finding bound state solutions for discussed systems
Now, we attempt to find the loosely bound state solutions of these discussed
$\Omega_{c}^{(*)}\bar{D}_{s}^{(*)}$ systems by solving the coupled channel
Schrödinger equation, i.e.,
$\displaystyle-\frac{1}{2\mu}\left(\nabla^{2}-\frac{\ell(\ell+1)}{r^{2}}\right)\psi(r)+V(r)\psi(r)=E\psi(r)$
(2.30)
with $\nabla^{2}=\frac{1}{r^{2}}\frac{\partial}{\partial
r}r^{2}\frac{\partial}{\partial r}$, where
$\mu=\frac{m_{1}m_{2}}{m_{1}+m_{2}}$ is the reduced mass for the discussed
system. The bound state solutions include the binding energy $E$, the root-
mean-square radius $r_{\rm RMS}$, and the probability of the individual
channel $P_{i}$, which provides us with valuable information to analyze
whether the loosely bound state exists. In this work, we are interested in the
$S$-wave $\Omega_{c}^{(*)}\bar{D}_{s}^{(*)}$ systems since there exists the
repulsive centrifugal potential for the higher partial wave states
$\ell\geqslant 1$.
In our calculation, the masses of these involved hadrons are
$m_{f_{0}}=990.00$ MeV, $m_{\eta}=547.85$ MeV, $m_{\phi}=1019.46$ MeV,
$m_{D_{s}}=1968.34$ MeV, $m_{D_{s}^{*}}=2112.20$ MeV, $m_{\Omega_{c}}=2695.20$
MeV, and $m_{\Omega_{c}^{*}}=2765.90$ MeV, which are taken from the Particle
Data Group (PDG) Zyla:2020zbs . As the remaining phenomenological parameter,
we take the cutoff value from 1.00 to 4.00 GeV. Usually, a loosely bound state
with the cutoff parameter close to 1.00 GeV can be suggested as a possible
hadronic molecular candidate according to the experience of the deuteron
Tornqvist:1993ng ; Tornqvist:1993vu ; Wang:2019nwt ; Chen:2017jjn . For an
ideal hadronic molecular candidate, the reasonable binding energy should be at
most tens of MeV, and the typical size should be larger than the size of all
the included component hadrons Chen:2017xat .
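For the single-channel $S$-wave case ($\ell=0$), Eq. (2.30) reduces to a one-dimensional eigenvalue problem for $u(r)=r\psi(r)$, which can be solved on a grid by diagonalizing a finite-difference Hamiltonian. The sketch below is a toy single-channel setup with an illustrative Yukawa-type attraction (its form and strength are ours, not the OBE potential derived in this work), extracting the binding energy $E$ and root-mean-square radius $r_{\rm RMS}$:

```python
import numpy as np

HBARC = 197.327  # MeV fm

def bound_state(V, mu, rmax=20.0, n=1200):
    """Lowest eigenstate of -(hbar^2/2mu) u'' + V(r) u = E u, u(0)=u(rmax)=0.
    V: callable of r in fm returning MeV; mu: reduced mass in MeV.
    Returns (E [MeV], r_RMS [fm]), or (None, None) if no bound state exists."""
    h = rmax / (n + 1)
    r = h * np.arange(1, n + 1)          # interior grid points only
    kin = HBARC**2 / (2.0 * mu)
    # tridiagonal kinetic operator plus diagonal potential
    H = (np.diag(2*kin/h**2 + V(r))
         + np.diag(-kin/h**2 * np.ones(n - 1), 1)
         + np.diag(-kin/h**2 * np.ones(n - 1), -1))
    E, U = np.linalg.eigh(H)
    if E[0] >= 0.0:
        return None, None
    u2 = U[:, 0]**2
    return E[0], float(np.sqrt(np.sum(r**2 * u2) / np.sum(u2)))

# toy example: Omega_c Dbar_s* reduced mass with an illustrative attraction
mu = 2695.20 * 2112.20 / (2695.20 + 2112.20)   # MeV
E, rms = bound_state(lambda r: -60.0 * np.exp(-r) / r, mu)
```

The returned $E$ and $r_{\rm RMS}$ play exactly the role of the quantities quoted in the tables below; the coupled-channel case replaces the scalar $V(r)$ with a potential matrix and $u(r)$ with a channel vector.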
In addition, the $S$-$D$ wave mixing effect is considered in this work, which
plays an important role in modifying the bound state properties of the deuteron
Wang:2019nwt . The relevant channels $|{}^{2S+1}L_{J}\rangle$ are summarized
in Table 1.
Table 1: The relevant channels $|{}^{2S+1}L_{J}\rangle$ involved in our calculation. Here, “$...$” means that the $S$-wave component for the corresponding channel does not exist. $J^{P}$ | $\Omega_{c}\bar{D}_{s}$ | $\Omega_{c}^{*}\bar{D}_{s}$ | $\Omega_{c}\bar{D}_{s}^{*}$ | $\Omega_{c}^{*}\bar{D}^{*}_{s}$
---|---|---|---|---
${1}/{2}^{-}$ | $|{}^{2}\mathbb{S}_{1/2}\rangle$ | $...$ | $|{}^{2}\mathbb{S}_{1/2}\rangle/|{}^{4}\mathbb{D}_{1/2}\rangle$ | $|{}^{2}\mathbb{S}_{1/2}\rangle/|{}^{4,6}\mathbb{D}_{1/2}\rangle$
${3}/{2}^{-}$ | $...$ | $|{}^{4}\mathbb{S}_{3/2}\rangle/|{}^{4}\mathbb{D}_{3/2}\rangle$ | $|{}^{4}\mathbb{S}_{3/2}\rangle/|{}^{2,4}\mathbb{D}_{3/2}\rangle$ | $|{}^{4}\mathbb{S}_{3/2}\rangle/|{}^{2,4,6}\mathbb{D}_{3/2}\rangle$
${5}/{2}^{-}$ | $...$ | $...$ | $...$ | $|{}^{6}\mathbb{S}_{5/2}\rangle/|{}^{2,4,6}\mathbb{D}_{5/2}\rangle$
Before performing numerical calculation, we analyze the OBE effective
potentials for these discussed $\Omega_{c}^{(*)}\bar{D}_{s}^{(*)}$ systems as
below:
* •
For the $\Omega_{c}\bar{D}_{s}$ and $\Omega_{c}^{*}\bar{D}_{s}$ systems, only
the $f_{0}$ and $\phi$ exchange interactions are allowed. Meanwhile, the
tensor force from the $S$-$D$ wave mixing effect disappears in the effective
potentials, and thus the contribution of the $S$-$D$ wave mixing effect does
not affect the bound state properties of the $\Omega_{c}\bar{D}_{s}$ and
$\Omega_{c}^{*}\bar{D}_{s}$ systems.
* •
For the $\Omega_{c}\bar{D}_{s}^{*}$ and $\Omega_{c}^{*}\bar{D}_{s}^{*}$
systems, in addition to the $f_{0}$ and $\phi$ exchange interactions, the
$\eta$ exchange interaction and the $S$-$D$ wave mixing effect need to be
taken into account.
#### II.2.1 The $\Omega_{c}\bar{D}_{s}$ and $\Omega_{c}^{*}\bar{D}_{s}$
systems
For the $S$-wave $\Omega_{c}\bar{D}_{s}$ state with $J^{P}={1}/{2}^{-}$ and
the $S$-wave $\Omega_{c}^{*}\bar{D}_{s}$ state with $J^{P}={3}/{2}^{-}$, we
fail to find their bound state solutions by varying the cutoff parameter in
the range of $1.00$-$4.00~{}{\rm GeV}$ with the single channel analysis.
Nevertheless, we can further take into account the coupled channel effect. In
the coupled channel analysis, the binding energy of the bound state is
defined relative to the lowest mass threshold among the investigated channels
Chen:2017xat .
For the $S$-wave $\Omega_{c}\bar{D}_{s}$ state with $J^{P}={1}/{2}^{-}$, we
consider the coupled channel effect from the $\Omega_{c}\bar{D}_{s}$,
$\Omega_{c}\bar{D}_{s}^{*}$, and $\Omega_{c}^{*}\bar{D}_{s}^{*}$ channels. In
Table 2, we present the obtained bound state solutions by performing the
coupled channel analysis. When we set the cutoff parameter $\Lambda$ around
2.92 GeV, loosely bound state solutions can be obtained, and the
$\Omega_{c}\bar{D}_{s}$ channel is dominant with a probability of about 90%.
Since the required cutoff parameter $\Lambda$ deviates obviously from 1.00 GeV
Tornqvist:1993ng ; Tornqvist:1993vu , the $S$-wave $\Omega_{c}\bar{D}_{s}$
state with $J^{P}={1}/{2}^{-}$ is not a priority candidate for a hadronic
molecule.
Table 2: Bound state solutions of the $S$-wave $\Omega_{c}\bar{D}_{s}$ state with $J^{P}={1}/{2}^{-}$ by performing coupled channel analysis. Here, the cutoff parameter $\Lambda$, binding energy $E$, and root-mean-square radius $r_{RMS}$ are in units of $\rm{GeV}$, $\rm{MeV}$, and $\rm{fm}$, respectively. $\Lambda$ | $E$ | $r_{\rm RMS}$ | P($\Omega_{c}\bar{D}_{s}/\Omega_{c}\bar{D}_{s}^{*}/\Omega_{c}^{*}\bar{D}_{s}^{*}$)
---|---|---|---
2.92 | $-3.71$ | 1.26 | 92.92/4.77/2.31
2.93 | $-12.65$ | 0.64 | 90.11/6.66/3.23
In Table 3, we list the bound state solutions of the $S$-wave
$\Omega_{c}^{*}\bar{D}_{s}$ state with $J^{P}={3}/{2}^{-}$ with the coupled
channel analysis. Our numerical results show that the bound state solutions
can be obtained by choosing the cutoff parameter $\Lambda$ around 1.78 GeV or
even larger, and the $\Omega_{c}\bar{D}_{s}^{*}$ system is the dominant
channel with probabilities over 80%. However, we find that the size ($r_{\rm
RMS}\sim 0.33~{\rm{fm}}$) of this bound state is too small (this is due to
the fact that the system is dominated by the $\Omega_{c}\bar{D}_{s}^{*}$
channel, as shown in the last column of Table 3), which is not consistent with a loosely
molecular state picture Chen:2017xat . Thus, we tentatively exclude the
possibility of the existence of the $S$-wave $\Omega_{c}^{*}\bar{D}_{s}$
molecular state with $J^{P}={3}/{2}^{-}$.
Table 3: Bound state solutions of the $S$-wave $\Omega_{c}^{*}\bar{D}_{s}$ state with $J^{P}={3}/{2}^{-}$ when the coupled channel effect is introduced. The units are the same as Table 2. $\Lambda$ | $E$ | $r_{\rm RMS}$ | P($\Omega_{c}^{*}\bar{D}_{s}/\Omega_{c}\bar{D}_{s}^{*}/\Omega_{c}^{*}\bar{D}_{s}^{*}$)
---|---|---|---
1.78 | $-6.15$ | 0.33 | 0.01/86.64/13.36
1.79 | $-17.41$ | 0.32 | 0.01/86.37/13.63
#### II.2.2 The $\Omega_{c}\bar{D}_{s}^{*}$ and
$\Omega_{c}^{*}\bar{D}_{s}^{*}$ systems
For the $S$-wave $\Omega_{c}\bar{D}_{s}^{*}$ system, the relevant numerical
results are collected in Table 4. For $J^{P}={1}/{2}^{-}$, bound states do
not exist until we increase the cutoff parameter to around 4.00 GeV,
even if we consider the coupled channel effect. Thus, we conclude that our
quantitative analysis does not support the existence of the $S$-wave
$\Omega_{c}\bar{D}_{s}^{*}$ molecular state with $J^{P}={1}/{2}^{-}$.
Table 4: Bound state solutions of the $S$-wave $\Omega_{c}\bar{D}_{s}^{*}$ system. The units are the same as Table 2. Effect | Single channel | $S$-$D$ wave mixing effect | Coupled channel
---|---|---|---
$J^{P}$ | $\Lambda$ | $E$ | $r_{\rm RMS}$ | $\Lambda$ | $E$ | $r_{\rm RMS}$ | P(${}^{4}\mathbb{S}_{\frac{3}{2}}/{}^{2}\mathbb{D}_{\frac{3}{2}}/{}^{4}\mathbb{D}_{\frac{3}{2}})$ | $\Lambda$ | $E$ | $r_{\rm RMS}$ | P($\Omega_{c}\bar{D}_{s}^{*}/\Omega_{c}^{*}\bar{D}_{s}^{*}$)
${3}/{2}^{-}$ | 1.96 | $-0.19$ | 4.76 | 1.96 | $-0.33$ | 4.14 | 99.94/0.01/0.05 | 1.67 | $-1.36$ | 2.27 | 95.84/4.16
1.98 | $-5.36$ | 1.09 | 1.98 | $-5.71$ | 1.06 | 99.92/0.02/0.06 | 1.69 | $-8.38$ | 0.90 | 92.17/7.83
1.99 | $-9.44$ | 0.82 | 1.99 | $-9.84$ | 0.81 | 99.93/0.02/0.05 | 1.70 | $-13.35$ | 0.72 | 91.00/9.00
For the $S$-wave $\Omega_{c}\bar{D}_{s}^{*}$ state with $J^{P}={3}/{2}^{-}$,
we notice that the effective potentials from the $f_{0}$, $\eta$, and $\phi$
exchanges provide the attractive forces, and there exist the bound state
solutions with the cutoff parameter around 1.96 GeV by performing the single
channel analysis. More interestingly, the bound state properties will change
accordingly after including the coupled channels $\Omega_{c}\bar{D}_{s}^{*}$
and $\Omega_{c}^{*}\bar{D}_{s}^{*}$, where we can obtain the loosely bound
state solutions when the cutoff parameter $\Lambda$ is around 1.67 GeV. Moreover,
this bound state is mainly composed of the $\Omega_{c}\bar{D}_{s}^{*}$ channel
with a probability over 90%. Based on our numerical results, the $S$-wave
$\Omega_{c}\bar{D}_{s}^{*}$ state with $J^{P}={3}/{2}^{-}$ can be recommended
as a good candidate of the hidden-charm molecular pentaquark with triple
strangeness.
Comparing the numerical results, it is obvious that the $D$-wave
probabilities are less than 1% and the $S$-$D$ wave mixing effect can be
ignored in forming the $S$-wave $\Omega_{c}\bar{D}_{s}^{*}$ bound states,
while the coupled channel effect plays a significant role in generating them,
especially for the $S$-wave $\Omega_{c}\bar{D}_{s}^{*}$ molecular candidate
with $J^{P}={3}/{2}^{-}$.
For the $S$-wave $\Omega_{c}^{*}\bar{D}_{s}^{*}$ system, the bound state
properties are collected in Table 5. Here, we still scan the $\Lambda$
parameter range from 1.00 GeV to 4.00 GeV. For $J^{P}=1/2^{-}$, the binding
energy is a few MeV and the root-mean-square radii are around 1.00 fm with the
cutoff parameter $\Lambda$ larger than 3.59 GeV when only considering the
$S$-wave channel, and we can also obtain the bound state solutions when the
cutoff value $\Lambda$ is lowered to around 3.51 GeV after adding the contribution
of the $D$-wave channels. Because the obtained cutoff parameter $\Lambda$ is
far away from 1.00 GeV Tornqvist:1993ng ; Tornqvist:1993vu , our numerical
results disfavor the existence of the molecular candidate for the $S$-wave
$\Omega_{c}^{*}\bar{D}_{s}^{*}$ state with $J^{P}=1/2^{-}$. For
$J^{P}=3/2^{-}$, there are no bound state solutions when the cutoff
parameter varies from 1.00 GeV to 4.00 GeV. This situation does not change
when the $S$-$D$ wave mixing effect is considered. Thus, we can exclude the
$S$-wave $\Omega_{c}^{*}\bar{D}_{s}^{*}$ state with $J^{P}=3/2^{-}$ as the
hadronic molecular candidate. For $J^{P}=5/2^{-}$, we notice that the total
effective potentials due to the $f_{0}$, $\eta$, and $\phi$ exchanges are
attractive. We can obtain the loosely bound state solutions by taking the
cutoff value around 1.64 GeV when only considering the contribution of the
$S$-wave channel, and the bound state solutions can also be found with the
cutoff parameter around 1.64 GeV after considering the $S$-$D$ wave mixing
effect. As a result, the $S$-wave $\Omega_{c}^{*}\bar{D}_{s}^{*}$ state with
$J^{P}=5/2^{-}$ can be regarded as the hidden-charm molecular pentaquark
candidate with triple strangeness.
Table 5: Bound state solutions of the $S$-wave $\Omega_{c}^{*}\bar{D}_{s}^{*}$ system. The units are the same as Table 2.

Effect | Single channel | $S$-$D$ wave mixing effect
---|---|---
$J^{P}$ | $\Lambda$ | $E$ | $r_{\rm RMS}$ | $\Lambda$ | $E$ | $r_{\rm RMS}$ | P(${}^{2}\mathbb{S}_{\frac{1}{2}}/{}^{4}\mathbb{D}_{\frac{1}{2}}/{}^{6}\mathbb{D}_{\frac{1}{2}})$
${1}/{2}^{-}$ | $3.59$ | $-0.27$ | $4.96$ | 3.51 | $-0.29$ | 4.89 | 99.97/0.02/0.01
$3.80$ | $-1.18$ | $2.96$ | 3.76 | $-1.74$ | 2.52 | 99.92/0.05/0.03
$4.00$ | $-2.63$ | $2.11$ | 4.00 | $-4.35$ | 1.72 | 99.87/0.08/0.05
$J^{P}$ | $\Lambda$ | $E$ | $r_{\rm RMS}$ | $\Lambda$ | $E$ | $r_{\rm RMS}$ | P(${}^{6}\mathbb{S}_{\frac{5}{2}}/{}^{2}\mathbb{D}_{\frac{5}{2}}/{}^{4}\mathbb{D}_{\frac{5}{2}}/{}^{6}\mathbb{D}_{\frac{5}{2}})$
${5}/{2}^{-}$ | 1.64 | $-0.31$ | 4.27 | 1.64 | $-0.80$ | 2.97 | 99.81/0.02/0.01/0.15
1.66 | $-4.93$ | 1.19 | 1.66 | $-5.81$ | 1.11 | 99.76/0.03/0.01/0.20
1.68 | $-13.13$ | 0.74 | 1.67 | $-9.55$ | 0.87 | 99.77/0.03/0.01/0.19
To summarize, we predict two types of hidden-charm molecular pentaquark states
with triple strangeness, i.e., the $S$-wave $\Omega_{c}\bar{D}_{s}^{*}$
molecular state with $J^{P}={3}/{2}^{-}$ and the $S$-wave
$\Omega_{c}^{*}\bar{D}_{s}^{*}$ molecular state with $J^{P}=5/2^{-}$. Here, we
want to indicate that the effective potentials from the $\phi$ and $\eta$
exchanges are attractive for the $\Omega_{c}\bar{D}_{s}^{*}$ system with
$J^{P}={3}/{2}^{-}$ and the $\Omega_{c}^{*}\bar{D}_{s}^{*}$ system with
$J^{P}=5/2^{-}$, which is due to the contributions from the ${\bf q}^{2}$
terms in the deduced effective potentials. In fact, this issue has been
discussed in Ref. 1839195 .
## III Decay behaviors of these possible $\Omega_{c}^{(*)}\bar{D}_{s}^{*}$
molecular states
In order to further reveal the inner structures and properties of the possible
hidden-charm molecular pentaquarks with triple strangeness, we calculate the
strong decay behaviors of these possible molecular candidates. In this work,
we discuss the hidden-charm decay mode, with the corresponding final states
including $\eta_{c}(1S)\Omega$ and $J/\psi\Omega$. Unlike the binding of the
possible hidden-charm molecular pentaquarks with triple strangeness, the
hidden-charm decay processes are governed by the interactions at very short
range. Thus, the quark-interchange model Barnes:1991em ; Barnes:1999hs can
serve as a reasonable theoretical framework.
### III.1 The quark-interchange model
When using the quark-interchange model to estimate the transition amplitudes
in calculating the decay widths, we usually adopt the nonrelativistic quark
model to describe the quark-quark interaction Wang:2019spc ; Xiao:2019spy ,
which is expressed as Wong:2001td
$V_{ij}(q^{2})=\frac{\lambda_{i}}{2}\cdot\frac{\lambda_{j}}{2}\left(\frac{4\pi\alpha_{s}}{q^{2}}+\frac{6\pi
b}{q^{4}}-\frac{8\pi\alpha_{s}}{3m_{i}m_{j}}e^{-{\frac{q^{2}}{4\sigma^{2}}}}{\bf{s}}_{i}\cdot{\bf{s}}_{j}\right),$
(3.1)
where $\lambda_{i}(\lambda_{j})$, $m_{i}(m_{j})$, and
${\bf{s}}_{i}({\bf{s}}_{j})$ represent the color factor, the mass, and the
spin operator of the interacting quarks, respectively. $\alpha_{s}$ denotes
the running coupling constant, which reads as Wong:2001td
$\alpha_{s}(Q^{2})=\frac{12\pi}{\left(32-2n_{f}\right){\rm
ln}\left(A+\frac{Q^{2}}{B^{2}}\right)},$ (3.2)
where $Q^{2}$ is the square of the invariant mass of the interacting quarks,
and the relevant parameters Wong:2001td in Eqs. (3.1) and (3.2) are collected
in Table 6.
Table 6: The parameters of the nonrelativistic quark model Wong:2001td and the oscillating parameters of the Gaussian function Wang:2019spc .

Quark model | $b~{}(\rm{GeV}^{2})$ | $\sigma~{}(\rm{GeV})$ | $A$
---|---|---|---
0.180 | 0.897 | 10
$B$ (GeV) | $m_{s}~{}(\rm{GeV})$ | $m_{c}~{}(\rm{GeV})$
0.310 | 0.575 | 1.776
Oscillating parameters | $\beta_{D^{\ast}_{s}}~{}(\rm{GeV})$ | $\beta_{\eta_{c}}~{}(\rm{GeV})$ | $\beta_{J/\psi}~{}(\rm{GeV})$
0.562 | 0.838 | 0.729
$\alpha_{\lambda\Omega}~{}(\rm{GeV})$ | $\alpha_{\rho\Omega}~{}(\rm{GeV})$ | $\alpha_{\lambda\Omega_{c}}~{}(\rm{GeV})$
0.466 | 0.407 | 0.583
$\alpha_{\rho\Omega_{c}}~{}(\rm{GeV})$ | $\alpha_{\lambda\Omega^{\ast}_{c}}~{}(\rm{GeV})$ | $\alpha_{\rho\Omega^{\ast}_{c}}~{}(\rm{GeV})$
0.444 | 0.537 | 0.423
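The running coupling of Eq. (3.2) can be evaluated directly from the Table 6 parameters. A minimal numerical sketch is given below; note that the number of active flavors $n_{f}$ is not listed in the table, so $n_{f}=4$ is an assumption made here purely for illustration.

```python
import math

# Running coupling alpha_s(Q^2) from Eq. (3.2), using A and B from Table 6.
A = 10.0   # dimensionless parameter of Eq. (3.2)
B = 0.31   # GeV
N_F = 4    # number of active quark flavors (assumed here, not from Table 6)

def alpha_s(q2: float) -> float:
    """Running coupling constant for Q^2 given in GeV^2."""
    return 12.0 * math.pi / ((32.0 - 2.0 * N_F) * math.log(A + q2 / B**2))

# The coupling decreases logarithmically with Q^2, as expected:
print(alpha_s(1.0))   # ~0.52
print(alpha_s(4.0))   # ~0.40
```

The constant $A$ inside the logarithm regularizes the coupling at small $Q^{2}$, so $\alpha_{s}$ stays finite at the hadronic scales relevant for the quark-interchange calculation.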
To get the transition amplitudes within the quark-interchange model, we take
the same convention as the previous works Wang:2019spc ; Xiao:2019spy ;
Wang:2020prk . The transition amplitude for the process $A(css)+B(s\bar{c})\to
C(sss)+D(c\bar{c})$ can be decomposed as four processes in the hadronic
molecular picture, which are illustrated in Fig. 2.
Figure 2: Quark-interchange diagrams for the process $A(css)+B(s\bar{c})\to
C(sss)+D(c\bar{c})$ in the hadronic molecular picture.
The Hamiltonian of the initial hidden-charm molecular pentaquark state can be
written as Wang:2019spc
$H_{\rm{Initial}}=H^{0}_{A}+H^{0}_{B}+V_{AB},$ (3.3)
where $H^{0}_{A}$ and $H^{0}_{B}$ are the Hamiltonian of the free baryon A and
meson B, and $V_{AB}$ denotes the interaction between the baryon A and the
meson B.
Furthermore, we define the color wave function $\omega_{\rm{color}}$, the
flavor wave function $\chi_{\rm{flavor}}$, the spin wave function
$\chi_{\rm{spin}}$, and the momentum space wave function $\phi(\bf{p})$,
respectively. Thus, the total wave function can be expressed as
$\psi_{\rm{total}}=\omega_{\rm{color}}\chi_{\rm{flavor}}\chi_{\rm{spin}}\phi(\bf{p}).$
(3.4)
In this work, we take the Gaussian functions to approximate the momentum space
wave functions for the baryon, meson, and molecule. More explicit forms of
the relevant Gaussian functions can be found in Ref. Wang:2019spc , and the
oscillating parameters of the meson and baryon are estimated by fitting their
mass spectrum in the Godfrey-Isgur model Godfrey:1985xj , which are listed in
Table 6. For an $S$-wave loosely bound state composed of two hadrons A and B,
the oscillating parameter $\beta$ can be related to the mass of the molecular
state $m$, i.e., $\beta=\sqrt{3\mu(m_{A}+m_{B}-m)}$ with
$\mu=\frac{m_{A}m_{B}}{m_{A}+m_{B}}$ Weinberg:1962hj ; Weinberg:1963zza ;
Guo:2017jvc . The $T$-matrix $T_{fi}$ represents the relevant
effective potential in the quark-interchange diagrams, which can be factorized
as
$T_{fi}=I_{\rm{color}}I_{\rm{flavor}}I_{\rm{spin}}I_{\rm{space}},$ (3.5)
where $I_{i}$ with the subscripts color, flavor, spin, and space stand for the
corresponding factors, and the calculation details of these factors
$I_{i}\,(i=\rm{color},\,\rm{flavor},\,\rm{spin},\,\rm{space})$ are referred to
in Ref. Wang:2019spc .
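The relation $\beta=\sqrt{3\mu(m_{A}+m_{B}-m)}$ quoted above fixes the molecular oscillating parameter from the binding energy alone, since $m_{A}+m_{B}-m=-E$ for a bound state. A minimal sketch, with approximate $\Omega_{c}$ and $\bar{D}_{s}^{*}$ masses assumed here for illustration (they are not quoted in this section):

```python
import math

def beta_molecule(m_A: float, m_B: float, binding_E: float) -> float:
    """Oscillating parameter (GeV) of an S-wave hadronic molecule.

    beta = sqrt(3*mu*(m_A + m_B - m)) with m = m_A + m_B + binding_E,
    so m_A + m_B - m = -binding_E (binding_E < 0 for a bound state),
    and reduced mass mu = m_A*m_B/(m_A + m_B).
    """
    mu = m_A * m_B / (m_A + m_B)
    return math.sqrt(3.0 * mu * (-binding_E))

# Illustrative masses in GeV (assumed, not taken from the text):
M_OMEGA_C = 2.695   # Omega_c baryon
M_DS_STAR = 2.112   # anti-D_s* meson

# A binding energy of -10 MeV gives beta of order 0.2 GeV:
print(beta_molecule(M_OMEGA_C, M_DS_STAR, -0.010))
```

A shallower bound state thus corresponds to a smaller $\beta$, i.e., a more extended molecular wave function, consistent with the loosely bound picture used throughout this work.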
The two-body strong decay widths of these discussed molecular candidates can
be explicitly expressed as
$\displaystyle\Gamma=\frac{|{\bf{P}}_{C}|}{32\pi^{2}m^{2}(2J+1)}\int
d\Omega|\mathcal{M}|^{2}.$ (3.6)
In the above expression, ${\bf{P}}_{C}$, $m$, and $\mathcal{M}$ stand for the
momentum of the final state, the mass of the molecular state, and the
transition amplitude of the discussed process, respectively. Here, we want to
emphasize that there exists a relation between the transition amplitude
$\mathcal{M}$ and the $T$-matrix $T_{fi}$, i.e.,
$\displaystyle\mathcal{M}=-(2\pi)^{\frac{3}{2}}\sqrt{2m2E_{C}2E_{D}}T_{fi},$
(3.7)
where $E_{C}$ and $E_{D}$ are the energies of the final states C and D,
respectively. Through the above preparation, we can calculate the two-body
hidden-charm strong decay widths of these proposed
$\Omega_{c}^{(*)}\bar{D}_{s}^{*}$ molecular states.
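If the squared amplitude $|\mathcal{M}|^{2}$ is angle independent, the solid-angle integral in Eq. (3.6) contributes a factor $4\pi$, so $\Gamma=|{\bf P}_{C}||\mathcal{M}|^{2}/[8\pi m^{2}(2J+1)]$. The sketch below encodes this reduced form together with standard two-body kinematics for $|{\bf P}_{C}|$; the masses and the value of $|\mathcal{M}|^{2}$ used are illustrative placeholders, not results of the paper.

```python
import math

def two_body_momentum(m: float, m_C: float, m_D: float) -> float:
    """Final-state momentum |P_C| (GeV) for the decay m -> m_C + m_D,
    from the Kallen triangle function."""
    lam = (m**2 - (m_C + m_D)**2) * (m**2 - (m_C - m_D)**2)
    return math.sqrt(lam) / (2.0 * m)

def width(m: float, m_C: float, m_D: float, J: float, M2: float) -> float:
    """Two-body width from Eq. (3.6), assuming an isotropic |M|^2 so that
    the solid-angle integral gives 4*pi."""
    p_C = two_body_momentum(m, m_C, m_D)
    return p_C * M2 / (8.0 * math.pi * m**2 * (2.0 * J + 1.0))

# Illustrative numbers in GeV (assumed for demonstration only): a molecule
# 10 MeV below an Omega_c D_s*-like threshold decaying to J/psi Omega.
m_mol = 4.797
print(two_body_momentum(m_mol, 3.097, 1.672))  # ~0.25 GeV
```

The width scales linearly with the final-state momentum, which is one reason the decay widths grow as the binding energy deepens (Sec. III.2).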
### III.2 Two-body hidden-charm strong decay widths of these proposed
$\Omega_{c}^{(*)}\bar{D}_{s}^{*}$ molecular states
In the above section, our results suggest that the $S$-wave
$\Omega_{c}\bar{D}_{s}^{*}$ state with $J^{P}={3}/{2}^{-}$ and the $S$-wave
$\Omega_{c}^{*}\bar{D}_{s}^{*}$ state with $J^{P}=5/2^{-}$ can be regarded as
the hidden-charm molecular pentaquark candidates with triple strangeness.
Thus, we will study the two-body strong decay property of these possible
hidden-charm molecular pentaquarks with triple strangeness, which provides
valuable information to search for these proposed molecular candidates in
experiment.
In this work, we focus on the two-body hidden-charm strong decay channels for
these predicted hidden-charm molecular pentaquarks with triple strangeness.
For the $S$-wave $\Omega_{c}\bar{D}_{s}^{*}$ molecular state with
$J^{P}={3}/{2}^{-}$, it can decay into the $J/\psi\,\Omega$ and
$\eta_{c}\,\Omega$ channels through the $S$-wave interaction. For the $S$-wave
$\Omega_{c}^{*}\bar{D}_{s}^{*}$ molecular state with $J^{P}={5}/{2}^{-}$, we
only take into account the $J/\psi\,\Omega$ decay channel via the $S$-wave
coupling, while the $\eta_{c}\,\Omega$ channel is suppressed since it is a
$D$-wave decay Wang:2019spc .
In order to intuitively illustrate the uncertainty from the binding energies,
we present the binding-energy dependence of the decay widths for the $S$-wave
$\Omega_{c}\bar{D}_{s}^{*}$ molecular state with $J^{P}={3}/{2}^{-}$ and the
$S$-wave $\Omega_{c}^{*}\bar{D}_{s}^{*}$ molecular state with
$J^{P}={5}/{2}^{-}$ in Fig. 3. As stressed in Sec. II, the hadronic molecule
is a loosely bound state Chen:2017xat , so the binding energies of these
hidden-charm molecular pentaquarks with triple strangeness are varied from
$-20$ to $-1$ MeV in calculating the decay widths. As the absolute value of
the binding energy increases, the decay widths become larger, which is
consistent with other theoretical calculations Chen:2017xat ; Lin:2017mtz ;
Lin:2018kcc ; Lin:2018nqd ; Shen:2019evi ; Lin:2019qiv ; Lin:2019tex ;
Dong:2019ofp ; Dong:2020rgs ; Xiao:2019mvs ; Wu:2018xaa ; Chen:2017abq ;
Xiao:2016mho .
Figure 3: The binding-energy dependence of the decay widths for the $S$-wave
$\Omega_{c}\bar{D}_{s}^{*}$ molecular state with $J^{P}={3}/{2}^{-}$ and the
$S$-wave $\Omega_{c}^{*}\bar{D}_{s}^{*}$ molecular state with
$J^{P}={5}/{2}^{-}$.
As illustrated in Fig. 3, when the binding energies are taken at a typical
value of $-15$ MeV, the dominant decay channel is $J/\psi\,\Omega$, with a
width around one MeV, for the $S$-wave $\Omega_{c}\bar{D}_{s}^{*}$ molecular
state with $J^{P}={3}/{2}^{-}$, and the decay width of the $J/\psi\,\Omega$
channel is predicted to be around several MeV for the $S$-wave
$\Omega_{c}^{*}\bar{D}_{s}^{*}$ molecule with $J^{P}={5}/{2}^{-}$. Thus, the
$J/\psi\,\Omega$ should be the promising channel to observe the $S$-wave
$\Omega_{c}\bar{D}_{s}^{*}$ molecular state with $J^{P}={3}/{2}^{-}$ and the
$S$-wave $\Omega_{c}^{*}\bar{D}_{s}^{*}$ molecular state with
$J^{P}={5}/{2}^{-}$. Meanwhile, it is interesting to note that the $S$-wave
$\Omega_{c}\bar{D}_{s}^{*}$ molecular state with $J^{P}={3}/{2}^{-}$ prefers
to decay into the $J/\psi\,\Omega$ channel, while the decay width of the
$\eta_{c}\,\Omega$ channel is comparable to that of the $J/\psi\,\Omega$
channel, which indicates that the $S$-wave $\Omega_{c}\bar{D}_{s}^{*}$
molecule with $J^{P}={3}/{2}^{-}$ can also be detected in the
$\eta_{c}\,\Omega$ channel in future experiments.
Within heavy quark symmetry, the relative partial decay branching ratio
between the $\eta_{c}(1S)\Omega$ and $J/\psi\Omega$ channels for the
$\Omega_{c}\bar{D}_{s}^{*}$ state with $J^{P}=3/2^{-}$ can be estimated as
$\displaystyle\mathcal{R}_{\text{HQS}}=\frac{\Gamma(\Omega_{c}\bar{D}_{s}^{*}\to\eta_{c}(1S)\Omega)}{\Gamma(\Omega_{c}\bar{D}_{s}^{*}\to
J/\psi\Omega)}=0.6,$ (3.8)
Since the relative momentum in the $\eta_{c}(1S)\Omega$ channel is larger than
that in the $J/\psi\Omega$ channel, $\mathcal{R}(E)$ should be a little larger
than $\mathcal{R}_{\text{HQS}}=0.6$, where $E$ is the binding energy. In our
calculation, we obtain
$\displaystyle\mathcal{R}(-5~{}\text{MeV})=\mathcal{R}(-10~{}\text{MeV})=0.62,\quad\mathcal{R}(-15~{}\text{MeV})=0.67.$
Obviously, our results are consistent with the estimation in the heavy quark
limit.
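The kinematic argument above, that the $\eta_{c}(1S)\Omega$ channel carries the larger relative momentum, can be checked directly with two-body phase-space kinematics. The sketch below confirms only the momentum ordering the argument relies on; the hadron masses are approximate values assumed here for illustration, and the ratio of momenta is of course not itself equal to $\mathcal{R}(E)$.

```python
import math

def p_cm(m: float, m1: float, m2: float) -> float:
    """Center-of-mass momentum (GeV) of the two-body final state m -> m1 + m2."""
    lam = (m**2 - (m1 + m2)**2) * (m**2 - (m1 - m2)**2)
    return math.sqrt(lam) / (2.0 * m)

# Approximate masses in GeV (assumed for illustration):
M_ETA_C, M_JPSI, M_OMEGA = 2.984, 3.097, 1.672
THRESHOLD = 2.695 + 2.112   # assumed Omega_c + D_s* threshold

# For every binding energy in the considered range, the eta_c(1S) Omega
# momentum exceeds the J/psi Omega momentum:
for E in (-0.005, -0.010, -0.015):      # binding energies in GeV
    m = THRESHOLD + E                   # molecule mass
    ratio = p_cm(m, M_ETA_C, M_OMEGA) / p_cm(m, M_JPSI, M_OMEGA)
    print(f"E = {1e3 * E:+.0f} MeV: p(eta_c Omega) / p(J/psi Omega) = {ratio:.2f}")
```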
## IV Summary
Searching for exotic hadronic states is an interesting and important research
topic in hadron physics. With the accumulation of experimental data, the LHCb
Collaboration observed the three narrow states $P_{c}(4312)$, $P_{c}(4440)$,
and $P_{c}(4457)$ in 2019 Aaij:2019vzc , and found evidence for the
$P_{cs}(4459)$ as a hidden-charm pentaquark with strangeness Aaij:2020gdg .
This progress gives us reason to believe that there should exist a zoo of
hidden-charm molecular pentaquarks. At present, the hidden-charm molecular
pentaquark with triple strangeness is still missing, which inspires our
interest in exploring how to find these intriguing hidden-charm molecular
pentaquark states with triple strangeness.
Mass spectrum information is crucial to searching for them. In this work, we
perform the dynamical calculation of the possible hidden-charm molecular
pentaquark states with triple strangeness from the
$\Omega_{c}^{(*)}\bar{D}_{s}^{(*)}$ interactions, where the effective
potentials can be obtained with the OBE model. By searching for bound state
solutions of these discussed systems, we find that the most promising hidden-charm
molecular pentaquarks with triple strangeness are the $S$-wave
$\Omega_{c}\bar{D}_{s}^{*}$ state with $J^{P}={3}/{2}^{-}$ and the $S$-wave
$\Omega_{c}^{*}\bar{D}_{s}^{*}$ state with $J^{P}=5/2^{-}$. Besides mass
spectrum study, we also discuss their two-body hidden-charm strong decay
behaviors within the quark-interchange model. In concrete calculation, we
mainly focus on the $J/\psi\,\Omega$ and $\eta_{c}\,\Omega$ decay modes for
the predicted $S$-wave $\Omega_{c}\bar{D}_{s}^{*}$ molecule with
$J^{P}={3}/{2}^{-}$ and the $J/\psi\,\Omega$ decay channel for the predicted
$S$-wave $\Omega_{c}^{*}\bar{D}_{s}^{*}$ molecule with $J^{P}={5}/{2}^{-}$.
In the following years, the LHCb Collaboration will collect more experimental
data in Run III and with the High-Luminosity LHC upgrade Bediaga:2018lhg .
Experimental searches for these predicted hidden-charm molecular pentaquarks
with triple strangeness are thus an area full of opportunities and challenges.
## ACKNOWLEDGMENTS
We would like to thank Z. W. Liu, G. J. Wang, L. Y. Xiao, and S. Q. Luo for
very helpful discussions. This work is supported by the China National Funds
for Distinguished Young Scientists under Grant No. 11825503, National Key
Research and Development Program of China under Contract No. 2020YFA0406400,
the 111 Project under Grant No. B20063, and the National Natural Science
Foundation of China under Grant No. 12047501. R. C. is supported by the
National Postdoctoral Program for Innovative Talent.
## Appendix A Relevant subpotentials
Through the standard strategy Wang:2020dya ; Wang:2019nwt ; Wang:2019aoc , we
can derive the effective potentials in the coordinate space for these
investigated $\Omega_{c}^{(*)}\bar{D}_{s}^{(*)}$ systems, i.e.,
$\displaystyle\mathcal{V}^{\Omega_{c}\bar{D}_{s}\rightarrow\Omega_{c}\bar{D}_{s}}$
$\displaystyle=$ $\displaystyle-AY_{f_{0}}-\frac{C}{2}Y_{\phi},$ (1.1)
$\displaystyle\mathcal{V}^{\Omega_{c}^{*}\bar{D}_{s}\rightarrow\Omega_{c}^{*}\bar{D}_{s}}$
$\displaystyle=$
$\displaystyle-A\mathcal{A}_{1}Y_{f_{0}}-\frac{C}{2}\mathcal{A}_{1}Y_{\phi},$
(1.2)
$\displaystyle\mathcal{V}^{\Omega_{c}\bar{D}_{s}^{*}\rightarrow\Omega_{c}\bar{D}_{s}^{*}}$
$\displaystyle=$
$\displaystyle-A\mathcal{A}_{2}Y_{f_{0}}+\frac{2B}{9}\left[\mathcal{A}_{3}\mathcal{O}_{r}+\mathcal{A}_{4}\mathcal{P}_{r}\right]Y_{\eta}$
(1.3)
$\displaystyle-\frac{C}{2}\mathcal{A}_{2}Y_{\phi}-\frac{2D}{9}\left[2\mathcal{A}_{3}\mathcal{O}_{r}-\mathcal{A}_{4}\mathcal{P}_{r}\right]Y_{\phi},$
$\displaystyle\mathcal{V}^{\Omega_{c}^{*}\bar{D}_{s}^{*}\rightarrow\Omega_{c}^{*}\bar{D}_{s}^{*}}$
$\displaystyle=$
$\displaystyle-A\mathcal{A}_{5}Y_{f_{0}}-\frac{B}{3}\left[\mathcal{A}_{6}\mathcal{O}_{r}+\mathcal{A}_{7}\mathcal{P}_{r}\right]Y_{\eta}$
(1.4)
$\displaystyle-\frac{C}{2}\mathcal{A}_{5}Y_{\phi}+\frac{D}{3}\left[2\mathcal{A}_{6}\mathcal{O}_{r}-\mathcal{A}_{7}\mathcal{P}_{r}\right]Y_{\phi},$
$\displaystyle\mathcal{V}^{\Omega_{c}\bar{D}_{s}\rightarrow\Omega_{c}^{*}\bar{D}_{s}}$
$\displaystyle=$
$\displaystyle\frac{A}{\sqrt{3}}\mathcal{A}_{8}Y_{f_{0}1}+\frac{C}{2\sqrt{3}}\mathcal{A}_{8}Y_{\phi
1},$ (1.5)
$\displaystyle\mathcal{V}^{\Omega_{c}\bar{D}_{s}\rightarrow\Omega_{c}\bar{D}_{s}^{*}}$
$\displaystyle=$
$\displaystyle\frac{2B}{9}\left[\mathcal{A}_{9}\mathcal{O}_{r}+\mathcal{A}_{10}\mathcal{P}_{r}\right]Y_{\eta
2}$ (1.6)
$\displaystyle+\frac{2D}{9}\left[2\mathcal{A}_{9}\mathcal{O}_{r}-\mathcal{A}_{10}\mathcal{P}_{r}\right]Y_{\phi
2},$
$\displaystyle\mathcal{V}^{\Omega_{c}\bar{D}_{s}\rightarrow\Omega_{c}^{*}\bar{D}_{s}^{*}}$
$\displaystyle=$
$\displaystyle-\frac{B}{3\sqrt{3}}\left[\mathcal{A}_{11}\mathcal{O}_{r}+\mathcal{A}_{12}\mathcal{P}_{r}\right]Y_{\eta
3}$ (1.7)
$\displaystyle-\frac{D}{3\sqrt{3}}\left[2\mathcal{A}_{11}\mathcal{O}_{r}-\mathcal{A}_{12}\mathcal{P}_{r}\right]Y_{\phi
3},$
$\displaystyle\mathcal{V}^{\Omega_{c}^{*}\bar{D}_{s}\rightarrow\Omega_{c}\bar{D}_{s}^{*}}$
$\displaystyle=$
$\displaystyle\frac{B}{3\sqrt{3}}\left[\mathcal{A}_{13}\mathcal{O}_{r}+\mathcal{A}_{14}\mathcal{P}_{r}\right]Y_{\eta
4}$ (1.8)
$\displaystyle+\frac{D}{3\sqrt{3}}\left[2\mathcal{A}_{13}\mathcal{O}_{r}-\mathcal{A}_{14}\mathcal{P}_{r}\right]Y_{\phi
4},$
$\displaystyle\mathcal{V}^{\Omega_{c}^{*}\bar{D}_{s}\rightarrow\Omega_{c}^{*}\bar{D}_{s}^{*}}$
$\displaystyle=$
$\displaystyle\frac{B}{3}\left[\mathcal{A}_{15}\mathcal{O}_{r}+\mathcal{A}_{16}\mathcal{P}_{r}\right]Y_{\eta
5}$ (1.9)
$\displaystyle+\frac{D}{3}\left[2\mathcal{A}_{15}\mathcal{O}_{r}-\mathcal{A}_{16}\mathcal{P}_{r}\right]Y_{\phi
5},$
$\displaystyle\mathcal{V}^{\Omega_{c}\bar{D}_{s}^{*}\rightarrow\Omega_{c}^{*}\bar{D}_{s}^{*}}$
$\displaystyle=$
$\displaystyle\frac{A}{\sqrt{3}}\mathcal{A}_{17}Y_{f_{0}6}+\frac{B}{3\sqrt{3}}\left[\mathcal{A}_{18}\mathcal{O}_{r}+\mathcal{A}_{19}\mathcal{P}_{r}\right]Y_{\eta
6}$ $\displaystyle+\frac{C\mathcal{A}_{17}}{2\sqrt{3}}Y_{\phi
6}-\frac{D}{3\sqrt{3}}\left[2\mathcal{A}_{18}\mathcal{O}_{r}-\mathcal{A}_{19}\mathcal{P}_{r}\right]Y_{\phi
6}.$ (1.10)
Here, $\mathcal{O}_{r}=\frac{1}{r^{2}}\frac{\partial}{\partial
r}r^{2}\frac{\partial}{\partial r}$ and
$\mathcal{P}_{r}=r\frac{\partial}{\partial
r}\frac{1}{r}\frac{\partial}{\partial r}$. Additionally, we also define
several variables, which include $A=l_{S}g_{S}$, $B=g_{1}g/f_{\pi}^{2}$,
$C=\beta_{S}\beta g_{V}^{2}$, and $D=\lambda_{S}\lambda g_{V}^{2}$. The
function $Y_{i}$ can be defined as
$\displaystyle Y_{i}=\dfrac{e^{-m_{i}r}-e^{-\Lambda_{i}r}}{4\pi
r}-\dfrac{\Lambda_{i}^{2}-m_{i}^{2}}{8\pi\Lambda_{i}}e^{-\Lambda_{i}r}.$
(1.11)
Here, $m_{i}=\sqrt{m^{2}-q_{i}^{2}}$ and
$\Lambda_{i}=\sqrt{\Lambda^{2}-q_{i}^{2}}$. Variables $q_{i}\,(i=1\,,...,\,6)$
are defined as $q_{1}=0.04$ GeV, $q_{2}=0.06$ GeV, $q_{3}=0.02$ GeV,
$q_{4}=0.10$ GeV, $q_{5}=0.06$ GeV, and $q_{6}=0.04$ GeV.
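The regularized Yukawa function of Eq. (1.11), together with the channel-dependent effective masses $m_{i}=\sqrt{m^{2}-q_{i}^{2}}$ and cutoffs $\Lambda_{i}=\sqrt{\Lambda^{2}-q_{i}^{2}}$, is straightforward to evaluate numerically. A minimal sketch follows; the sample values of $m$, $\Lambda$, and $q$ are illustrative, not fitted parameters from the text.

```python
import math

def Y(r: float, m: float, Lam: float) -> float:
    """Regularized Yukawa function of Eq. (1.11); r in GeV^-1, m and Lam in GeV."""
    return ((math.exp(-m * r) - math.exp(-Lam * r)) / (4.0 * math.pi * r)
            - (Lam**2 - m**2) / (8.0 * math.pi * Lam) * math.exp(-Lam * r))

def Y_channel(r: float, m: float, Lam: float, q: float) -> float:
    """Y_i with the effective mass m_i = sqrt(m^2 - q_i^2) and the effective
    cutoff Lam_i = sqrt(Lam^2 - q_i^2) for a channel with recoil momentum q_i."""
    return Y(r, math.sqrt(m**2 - q**2), math.sqrt(Lam**2 - q**2))

# With illustrative values m = 0.548 GeV, Lambda = 1.5 GeV, q = 0.06 GeV,
# the function is positive and falls off monotonically at these distances:
for r in (1.0, 2.0, 4.0):
    print(r, Y_channel(r, 0.548, 1.5, 0.06))
```

Unlike the bare Yukawa form $e^{-mr}/(4\pi r)$, this regularized version stays finite as $r\to 0$, which is what allows the Schrodinger equation to be solved with these potentials.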
In the above effective potentials, we also introduce several operators, i.e.,
$\displaystyle\mathcal{A}_{1}$ $\displaystyle=$
$\displaystyle\sum_{a,b,m,n}C^{\frac{3}{2},a+b}_{\frac{1}{2}a,1b}C^{\frac{3}{2},m+n}_{\frac{1}{2}m,1n}\chi^{\dagger
a}_{3}\left({\bm{\epsilon}^{\dagger
b}_{3}}\cdot{\bm{\epsilon}^{n}_{1}}\right)\chi^{m}_{1},$
$\displaystyle\mathcal{A}_{2}$ $\displaystyle=$
$\displaystyle\chi^{\dagger}_{3}\left({\bm{\epsilon}^{\dagger}_{4}}\cdot{\bm{\epsilon}_{2}}\right)\chi_{1},$
$\displaystyle\mathcal{A}_{3}$ $\displaystyle=$
$\displaystyle\chi^{\dagger}_{3}\left[{\bm{\sigma}}\cdot\left(i{\bm{\epsilon}_{2}}\times{\bm{\epsilon}^{\dagger}_{4}}\right)\right]\chi_{1},$
$\displaystyle\mathcal{A}_{4}$ $\displaystyle=$
$\displaystyle\chi^{\dagger}_{3}T({\bm{\sigma}},i{\bm{\epsilon}_{2}}\times{\bm{\epsilon}^{\dagger}_{4}})\chi_{1},$
$\displaystyle\mathcal{A}_{5}$ $\displaystyle=$
$\displaystyle\sum_{a,b,m,n}C^{\frac{3}{2},a+b}_{\frac{1}{2}a,1b}C^{\frac{3}{2},m+n}_{\frac{1}{2}m,1n}\chi^{\dagger
a}_{3}\left({\bm{\epsilon}^{n}_{1}}\cdot{\bm{\epsilon}^{\dagger
b}_{3}}\right)\left({\bm{\epsilon}_{2}}\cdot{\bm{\epsilon}^{\dagger}_{4}}\right)\chi^{m}_{1},$
$\displaystyle\mathcal{A}_{6}$ $\displaystyle=$
$\displaystyle\sum_{a,b,m,n}C^{\frac{3}{2},a+b}_{\frac{1}{2}a,1b}C^{\frac{3}{2},m+n}_{\frac{1}{2}m,1n}\chi^{\dagger
a}_{3}\left({\bm{\epsilon}^{n}_{1}}\times{\bm{\epsilon}^{\dagger
b}_{3}}\right)\cdot\left({\bm{\epsilon}_{2}}\times{\bm{\epsilon}^{\dagger}_{4}}\right)\chi^{m}_{1},$
$\displaystyle\mathcal{A}_{7}$ $\displaystyle=$
$\displaystyle\sum_{a,b,m,n}C^{\frac{3}{2},a+b}_{\frac{1}{2}a,1b}C^{\frac{3}{2},m+n}_{\frac{1}{2}m,1n}\chi^{\dagger
a}_{3}T({\bm{\epsilon}^{n}_{1}}\times{\bm{\epsilon}^{\dagger
b}_{3}},{\bm{\epsilon}_{2}}\times{\bm{\epsilon}^{\dagger}_{4}})\chi^{m}_{1},$
$\displaystyle\mathcal{A}_{8}$ $\displaystyle=$
$\displaystyle\sum_{a,b}C^{\frac{3}{2},a+b}_{\frac{1}{2}a,1b}\chi^{\dagger
a}_{3}\left({\bm{\epsilon}^{\dagger b}_{3}}\cdot{\bm{\sigma}}\right)\chi_{1},$
$\displaystyle\mathcal{A}_{9}$ $\displaystyle=$
$\displaystyle\chi^{\dagger}_{3}\left({\bm{\sigma}}\cdot{\bm{\epsilon}^{\dagger}_{4}}\right)\chi_{1},$
$\displaystyle\mathcal{A}_{10}$ $\displaystyle=$
$\displaystyle\chi^{\dagger}_{3}T({\bm{\sigma}},{\bm{\epsilon}^{\dagger}_{4}})\chi_{1},$
$\displaystyle\mathcal{A}_{11}$ $\displaystyle=$
$\displaystyle\sum_{a,b}C^{\frac{3}{2},a+b}_{\frac{1}{2}a,1b}\chi^{\dagger
a}_{3}\left[{\bm{\epsilon}^{\dagger}_{4}}\cdot\left(i{\bm{\sigma}}\times{\bm{\epsilon}^{\dagger
b}_{3}}\right)\right]\chi_{1},$ $\displaystyle\mathcal{A}_{12}$
$\displaystyle=$
$\displaystyle\sum_{a,b}C^{\frac{3}{2},a+b}_{\frac{1}{2}a,1b}\chi^{\dagger
a}_{3}T({\bm{\epsilon}^{\dagger}_{4}},i{\bm{\sigma}}\times{\bm{\epsilon}^{\dagger
b}_{3}})\chi_{1},$ $\displaystyle\mathcal{A}_{13}$ $\displaystyle=$
$\displaystyle\sum_{a,b}C^{\frac{3}{2},a+b}_{\frac{1}{2}a,1b}\chi^{\dagger}_{3}\left[{\bm{\epsilon}^{\dagger}_{4}}\cdot\left(i{\bm{\sigma}}\times{\bm{\epsilon}^{b}_{1}}\right)\right]\chi^{a}_{1},$
$\displaystyle\mathcal{A}_{14}$ $\displaystyle=$
$\displaystyle\sum_{a,b}C^{\frac{3}{2},a+b}_{\frac{1}{2}a,1b}\chi^{\dagger}_{3}T({\bm{\epsilon}^{\dagger}_{4}},i{\bm{\sigma}}\times{\bm{\epsilon}^{b}_{1}})\chi^{a}_{1},$
$\displaystyle\mathcal{A}_{15}$ $\displaystyle=$
$\displaystyle\sum_{a,b,m,n}C^{\frac{3}{2},a+b}_{\frac{1}{2}a,1b}C^{\frac{3}{2},m+n}_{\frac{1}{2}m,1n}\chi^{\dagger
a}_{3}\left[{\bm{\epsilon}^{\dagger}_{4}}\cdot\left(i{\bm{\epsilon}^{n}_{1}}\times{\bm{\epsilon}^{\dagger
b}_{3}}\right)\right]\chi^{m}_{1},$ $\displaystyle\mathcal{A}_{16}$
$\displaystyle=$
$\displaystyle\sum_{a,b,m,n}C^{\frac{3}{2},a+b}_{\frac{1}{2}a,1b}C^{\frac{3}{2},m+n}_{\frac{1}{2}m,1n}\chi^{\dagger
a}_{3}T({\bm{\epsilon}^{\dagger}_{4}},i{\bm{\epsilon}^{n}_{1}}\times{\bm{\epsilon}^{\dagger
b}_{3}})\chi^{m}_{1},$ $\displaystyle\mathcal{A}_{17}$ $\displaystyle=$
$\displaystyle\sum_{a,b}C^{\frac{3}{2},a+b}_{\frac{1}{2}a,1b}\chi^{\dagger
a}_{3}\left({\bm{\sigma}}\cdot{\bm{\epsilon}^{\dagger
b}_{3}}\right)\left({\bm{\epsilon}_{2}}\cdot{\bm{\epsilon}^{\dagger}_{4}}\right)\chi_{1},$
$\displaystyle\mathcal{A}_{18}$ $\displaystyle=$
$\displaystyle\sum_{a,b}C^{\frac{3}{2},a+b}_{\frac{1}{2}a,1b}\chi^{\dagger
a}_{3}\left({\bm{\sigma}}\times{\bm{\epsilon}^{\dagger
b}_{3}}\right)\cdot\left({\bm{\epsilon}_{2}}\times{\bm{\epsilon}^{\dagger}_{4}}\right)\chi_{1},$
$\displaystyle\mathcal{A}_{19}$ $\displaystyle=$
$\displaystyle\sum_{a,b}C^{\frac{3}{2},a+b}_{\frac{1}{2}a,1b}\chi^{\dagger
a}_{3}T({\bm{\sigma}}\times{\bm{\epsilon}^{\dagger
b}_{3}},{\bm{\epsilon}_{2}}\times{\bm{\epsilon}^{\dagger}_{4}})\chi_{1}.$
(1.12)
Here,
$T({\bm{x}},{\bm{y}})=3\left(\hat{\bm{r}}\cdot{\bm{x}}\right)\left(\hat{\bm{r}}\cdot{\bm{y}}\right)-{\bm{x}}\cdot{\bm{y}}$
is the tensor force operator. In Table 7, we collect the numerical matrix
elements $\langle f|\mathcal{A}_{k}|i\rangle\,(k=1\,,...,\,7)$ with the
$S$-$D$ wave mixing effect analysis. Of course, the relevant numerical matrix
elements $\langle f|\mathcal{A}_{k}|i\rangle\,(k=8\,,...,\,19)$ will be
involved in the coupled channel analysis. For the coupled channel analysis
with $J=1/2$, we have $\mathcal{A}_{9}=\sqrt{3}$, $\mathcal{A}_{11}=\sqrt{2}$,
$\mathcal{A}_{18}=-\sqrt{{2}/{3}}$, and
$\mathcal{A}_{k}=0\,(k=10,\,12,\,17,\,19)$. In addition, we have
$\mathcal{A}_{13}=1$, $\mathcal{A}_{15}=\sqrt{{5}/{3}}$,
$\mathcal{A}_{18}=-\sqrt{{5}/{3}}$, and
$\mathcal{A}_{k}=0\,(k=14,\,16,\,17,\,19)$ for the coupled channel analysis
with $J=3/2$.
Table 7: The numerical matrix elements $\langle f|\mathcal{A}_{k}|i\rangle\,(k=1\,,...,\,7)$ with the $S$-$D$ wave mixing effect analysis.

Matrix elements | $J=1/2$ | $J=3/2$ | $J=5/2$
---|---|---|---
$\langle\Omega_{c}^{*}\bar{D}_{s}|\mathcal{A}_{1}|\Omega_{c}^{*}\bar{D}_{s}\rangle$ | $/$ | diag(1,1) | $/$
$\langle\Omega_{c}\bar{D}_{s}^{*}|\mathcal{A}_{2}|\Omega_{c}\bar{D}_{s}^{*}\rangle$ | diag(1,1) | diag(1,1,1) | $/$
$\langle\Omega_{c}\bar{D}_{s}^{*}|\mathcal{A}_{3}|\Omega_{c}\bar{D}_{s}^{*}\rangle$ | diag($-2$,$1$) | diag($1$,$-2$,$1$) | $/$
$\langle\Omega_{c}\bar{D}_{s}^{*}|\mathcal{A}_{4}|\Omega_{c}\bar{D}_{s}^{*}\rangle$ | $\left(\begin{array}[]{cc}0&-\sqrt{2}\\\ -\sqrt{2}&-2\end{array}\right)$ | $\left(\begin{array}[]{ccc}0&1&2\\\ 1&0&-1\\\ 2&-1&0\end{array}\right)$ | $/$
$\langle\Omega_{c}^{*}\bar{D}_{s}^{*}|\mathcal{A}_{5}|\Omega_{c}^{*}\bar{D}_{s}^{*}\rangle$ | diag(1,1,1) | diag(1,1,1,1) | diag(1,1,1,1)
$\langle\Omega_{c}^{*}\bar{D}_{s}^{*}|\mathcal{A}_{6}|\Omega_{c}^{*}\bar{D}_{s}^{*}\rangle$ | diag($\frac{5}{3}$,$\frac{2}{3}$,$-1$) | diag($\frac{2}{3}$,$\frac{5}{3}$,$\frac{2}{3}$,$-1$) | diag($-1$,$\frac{5}{3}$,$\frac{2}{3}$,$-1$)
$\langle\Omega_{c}^{*}\bar{D}_{s}^{*}|\mathcal{A}_{7}|\Omega_{c}^{*}\bar{D}_{s}^{*}\rangle$ | $\left(\begin{array}[]{ccc}0&-\frac{7}{3\sqrt{5}}&\frac{2}{\sqrt{5}}\\\ -\frac{7}{3\sqrt{5}}&\frac{16}{15}&-\frac{1}{5}\\\ \frac{2}{\sqrt{5}}&-\frac{1}{5}&\frac{8}{5}\end{array}\right)$ | $\left(\begin{array}[]{cccc}0&\frac{7}{3\sqrt{10}}&-\frac{16}{15}&-\frac{\sqrt{7}}{5\sqrt{2}}\\\ \frac{7}{3\sqrt{10}}&0&-\frac{7}{3\sqrt{10}}&-\frac{2}{\sqrt{35}}\\\ -\frac{16}{15}&-\frac{7}{3\sqrt{10}}&0&-\frac{1}{\sqrt{14}}\\\ -\frac{\sqrt{7}}{5\sqrt{2}}&-\frac{2}{\sqrt{35}}&-\frac{1}{\sqrt{14}}&\frac{4}{7}\end{array}\right)$ | $\left(\begin{array}[]{cccc}0&\frac{2}{\sqrt{15}}&\frac{\sqrt{7}}{5\sqrt{3}}&-\frac{2\sqrt{14}}{5}\\\ \frac{2}{\sqrt{15}}&0&\frac{\sqrt{7}}{3\sqrt{5}}&-\frac{4\sqrt{2}}{\sqrt{105}}\\\ \frac{\sqrt{7}}{5\sqrt{3}}&\frac{\sqrt{7}}{3\sqrt{5}}&-\frac{16}{21}&-\frac{\sqrt{2}}{7\sqrt{3}}\\\ -\frac{2\sqrt{14}}{5}&-\frac{4\sqrt{2}}{\sqrt{105}}&-\frac{\sqrt{2}}{7\sqrt{3}}&-\frac{4}{7}\end{array}\right)$
## References
* (1) S. K. Choi et al. (Belle Collaboration), Observation of a Narrow Charmonium-Like State in Exclusive $B^{\pm}\to K^{\pm}\pi^{+}\pi^{-}J/\psi$ Decays, Phys. Rev. Lett. 91, 262001 (2003).
* (2) H. X. Chen, W. Chen, X. Liu, and S. L. Zhu, The hidden-charm pentaquark and tetraquark states, Phys. Rep. 639, 1 (2016).
* (3) Y. R. Liu, H. X. Chen, W. Chen, X. Liu, and S. L. Zhu, Pentaquark and tetraquark states, Prog. Part. Nucl. Phys. 107, 237 (2019).
* (4) S. L. Olsen, T. Skwarnicki, and D. Zieminska, Nonstandard heavy mesons and baryons: Experimental evidence, Rev. Mod. Phys. 90, 015003 (2018).
* (5) F. K. Guo, C. Hanhart, U. G. Meißner, Q. Wang, Q. Zhao, and B. S. Zou, Hadronic molecules, Rev. Mod. Phys. 90, 015004 (2018).
* (6) X. Liu, An overview of $XYZ$ new particles, Chin. Sci. Bull. 59, 3815 (2014).
* (7) A. Hosaka, T. Iijima, K. Miyabayashi, Y. Sakai, and S. Yasui, Exotic hadrons with heavy flavors: $X$, $Y$, $Z$, and related states, Prog. Theor. Exp. Phys. 2016, 062C01 (2016).
* (8) N. Brambilla, S. Eidelman, C. Hanhart, A. Nefediev, C. P. Shen, C. E. Thomas, A. Vairo, and C. Z. Yuan, The $XYZ$ states: Experimental and theoretical status and perspectives, Phys. Rep. 873, 1 (2020).
* (9) M. Gell-Mann, A schematic model of baryons and mesons, Phys. Lett. 8, 214 (1964).
* (10) G. Zweig, An SU(3) model for strong interaction symmetry and its breaking. Version 1, CERN, Report No. CERN-TH-401, 1964.
* (11) J. J. Wu, R. Molina, E. Oset, and B. S. Zou, Prediction of Narrow $N^{*}$ and $\Lambda^{*}$ Resonances with Hidden Charm Above 4 GeV, Phys. Rev. Lett. 105, 232001 (2010).
* (12) W. L. Wang, F. Huang, Z. Y. Zhang, and B. S. Zou, $\Sigma_{c}\bar{D}$ and $\Lambda_{c}\bar{D}$ states in a chiral quark model, Phys. Rev. C 84, 015203 (2011).
* (13) Z. C. Yang, Z. F. Sun, J. He, X. Liu, and S. L. Zhu, The possible hidden-charm molecular baryons composed of anticharmed meson and charmed baryon, Chin. Phys. C 36, 6 (2012).
* (14) J. J. Wu, T.-S. H. Lee, and B. S. Zou, Nucleon resonances with hidden charm in coupled-channel Models, Phys. Rev. C 85, 044002 (2012).
* (15) X. Q. Li and X. Liu, A possible global group structure for exotic states, Eur. Phys. J. C 74, 3198 (2014).
* (16) R. Chen, X. Liu, X. Q. Li, and S. L. Zhu, Identifying Exotic Hidden-Charm Pentaquarks, Phys. Rev. Lett. 115, 132002 (2015).
* (17) M. Karliner and J. L. Rosner, New Exotic Meson and Baryon Resonances from Doubly-Heavy Hadronic Molecules, Phys. Rev. Lett. 115, 122001 (2015).
* (18) R. Aaij et al. (LHCb Collaboration), Observation of $J/\psi$ Resonances Consistent with Pentaquark States in $\Lambda_{b}^{0}\rightarrow J/\psi K^{-}p$ Decays, Phys. Rev. Lett. 115, 072001 (2015).
* (19) R. Aaij et al. (LHCb Collaboration), Observation of a Narrow Pentaquark State, $P_{c}(4312)^{+}$, and of Two-Peak Structure of the $P_{c}(4450)^{+}$, Phys. Rev. Lett. 122, 222001 (2019).
Recent advances in meta-learning have led to remarkable performance on several few-shot learning benchmarks. However, such success often ignores the similarity between training and testing tasks, resulting in a potentially biased evaluation. We therefore propose a generative approach based on a variant of Latent Dirichlet Allocation to analyse task similarity, in order to optimise and better understand the performance of meta-learning. We demonstrate that the proposed method provides an insightful evaluation of meta-learning algorithms on two few-shot classification benchmarks that matches common intuition: the more similar the training and testing tasks, the higher the performance. Based on this similarity measure, we propose a task-selection strategy for meta-learning and show that it produces more accurate classification results than methods that randomly select training tasks.
§ INTRODUCTION
The vast development of machine learning has made it possible to solve increasingly complex applications. Such complexity requires high-capacity models, which in turn need a massive amount of annotated data for training, making the annotation process arduous, costly, or even infeasible. This has motivated research into novel learning approaches, generally known as transfer learning, that exploit past experience (in the form of models learned from other training tasks) to quickly learn a new task using relatively small training sets.
Transfer learning, and in particular meta-learning, has recently achieved state-of-the-art results on several few-shot learning benchmarks <cit.>. Such success depends not only on the effectiveness of the transfer-learning algorithm, but also on the similarity between training and testing tasks <cit.>. More specifically, the larger the subset of training tasks that are similar to the testing tasks, the higher the classification accuracy on those testing tasks. However, meta-learning methods are assessed without taking this observation into account, which can bias the reported classification results depending on the policy for selecting training and testing tasks.
In this paper, we propose a generative approach based on a continuous version of Latent Dirichlet Co-Clustering to model classification tasks. The resulting model represents tasks in a latent task-theme simplex, and hence allows us to quantitatively measure their similarity. The proposed similarity measure makes it possible to select the most related tasks from the training set for the meta-learning of a novel testing task. We empirically demonstrate that the proposed task-selection strategy outperforms one that randomly selects training tasks, across several meta-learning methods.
§ RELATED WORK
With this paper, we target an improved understanding of meta-learning algorithms <cit.>, which can allow us to improve their current performance.
Although meta-learning has progressed steadily with many remarkable achievements, it has been reported that there is a large variance of performance among testing tasks <cit.>. This observation suggests that not all testing tasks are equally related to training tasks.
Task hardness, based on the cosine similarity between the embeddings of labelled and unlabelled data, has therefore been proposed to better explain the performance of meta-learning methods <cit.>. This, however, quantifies only the similarity between samples within a task, without investigating the similarity between tasks.
Task similarity has been intensively studied in the field of multi-task learning. Some remarkable works include task clustering using k-nearest neighbours <cit.>, modelling a common prior between tasks as a mixture of distributions <cit.> with an extension using the Dirichlet Process <cit.>, and applying a convex formulation to either cluster tasks <cit.> or learn task relationships through task covariance matrices <cit.>. Other approaches aim to provide theoretical guarantees when learning the similarity or relationship between tasks <cit.>. Following a similar approach, an extensive experiment was carried out on 26 computer-vision tasks to determine the correlation between those tasks, also known as taskonomy <cit.>. Some recent works <cit.> take a slightly different approach by investigating the correlation of the label distributions between the tasks of interest. One commonality of those studies is their reliance on a discriminative approach, where the similarity of task-specific classifiers is used to quantify task relatedness. In addition, most of those works focus on the conventional machine learning setting, which requires a sufficient amount of labelled data on the novel tasks to perform transfer learning. In contrast, our proposal follows a generative approach which does not depend on any task-specific classifier. Our approach can also work in the few-shot setting, where only a few labelled data points from the targeted tasks are available. Another work slightly related to task similarity is Task2Vec <cit.>, which employs the Fisher information matrix of an external network, known as a probe network, to model visual tasks as fixed vectors in an embedding space, allowing task similarity to be analysed and calculated. However, its application is still limited due to the need for an external network pre-trained to perform specific tasks on some standard visual data sets.
Our work is also related to finite mixture models <cit.>, such as Latent Dirichlet Allocation (LDA) <cit.>, used in topic modelling to analyse and summarise text data, or in computer vision <cit.>. LDA assumes that each document within a given corpus can be represented as a finite mixture model whose components are the latent topics shared across all documents. Training an LDA model or its variants on a large text corpus is challenging, so several approximate inference techniques have been proposed, ranging from mean-field variational inference (VI) <cit.> and collapsed Gibbs sampling <cit.> to collapsed VI <cit.>. Furthermore, several online inference methods have been developed to increase training efficiency on large corpora <cit.>. Our work differs slightly from inference for conventional LDA models: we perform online learning for Latent Dirichlet Co-clustering <cit.> – a variant of LDA – that incorporates paragraph information into the model. In addition, our approach treats words as continuous data, instead of the discrete data represented by the bag-of-words vectors generally used in topic modelling.
§ METHOD
Directed acyclic graph representing the continuous LDCC that models classification tasks as a finite mixture of Gaussian distributions.
To relate image classification to topic modelling, we consider a task as a document, a class as a paragraph, and an image as a word. Given these analogies, we employ the Latent Dirichlet Co-clustering (LDCC) <cit.> – a variant of LDA – to model classification tasks. The LDCC extends the conventional LDA to a hierarchical structure by including the information about paragraphs, or in our case, data classes, into the model.
Since the data in classification are assumed to be continuous, the categorical word-topic distribution in the original LDCC model is replaced by a Gaussian image-theme distribution. Each classification task can be modelled as a mixture of \(L\) task-themes (corresponding to document topics in LDCC), where each task-theme is a summary of many finite mixtures of \(K\) image-themes. We can, therefore, utilise this representation, and in particular the task-theme mixture parameter, to quantify the similarity between tasks.
We assume that there are \(M\) classification tasks, where each task consists of \(C\) classes, and each class has \(N\) images (i.e., using meta-learning nomenclature, this represents \(M\) \(C\)-way \(N\)-shot classification tasks). For simplicity, \(C\) and \(N\) are assumed to be fixed across all tasks, but the extension to varying \(C\) and \(N\) is trivial and can be implemented straightforwardly. The process to generate classification tasks from an \(L\)-task-theme, \(K\)-image-theme model, shown in <ref>, is as follows:
* Initialise the means and precision matrices of the \(K\) Gaussian image-themes \(\{\bm{\mu}_{k}, \bm{\Lambda}_{k}\}_{k=1}^{K}\), where \(\bm{\mu}_{k} \in \mathbb{R}^{D}\) and \(\bm{\Lambda}_{k} \in \mathbb{R}^{D \times D}\) is a positive-definite matrix
* For the \(d\)-th task in the collection of \(M\) tasks:
* Choose a task-theme mixture: \(\bm{\phi}_{d}~\sim~\mathrm{Dirichlet}_{L} \left( \bm{\phi}; \bm{\delta}\right) \)
* For the \(c\)-th class in the \(d\)-th task:
* Choose a task-theme assignment: \(\mathbf{y}_{dc}~\sim~\mathrm{Categorical}(\mathbf{y}; \bm{\phi}_{d})\)
* Choose an image-theme mixture: \(\bm{\theta}_{dc} \sim \mathrm{Dirichlet}_{K} \left( \bm{\theta}; \bm{\alpha}_{l} \right) \), where \({y_{dcl} = 1}\)
* For the \(n\)-th image in the \(c\)-th class of the \(d\)-th task:
* Choose an image-theme assignment: \(\mathbf{z}_{dcn}~\sim~\mathrm{Categorical} \left( \mathbf{z}; \bm{\theta}_{dc} \right)\)
* Choose an image: \(\mathbf{x}_{dcn} \sim \mathcal{N}\left(\mathbf{x}; \bm{\mu}_{k}, \bm{\Lambda}_{k}^{-1}\right) \), where: \(z_{dcnk} = 1\).
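The generative process above can be sketched in a few lines of NumPy. The sizes and parameter values below are illustrative stand-ins for the learned quantities, not the actual model parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; the parameters below stand in for the learned ones.
L, K, D = 4, 8, 2          # task-themes, image-themes, feature dimension
C, N = 5, 16               # classes per task, images per class

delta = np.full(L, 0.5)                     # symmetric Dirichlet prior over task-themes
alpha = rng.gamma(2.0, 1.0, size=(L, K))    # per-task-theme Dirichlet concentrations
mu = rng.normal(size=(K, D))                # Gaussian image-theme means
Lam = np.stack([np.eye(D)] * K)             # Gaussian image-theme precision matrices

def generate_task():
    """Sample one C-way N-shot task from the continuous LDCC generative process."""
    phi = rng.dirichlet(delta)                  # task-theme mixture for the task
    images = np.empty((C, N, D))
    for c in range(C):
        y = rng.choice(L, p=phi)                # task-theme assignment for class c
        theta = rng.dirichlet(alpha[y])         # image-theme mixture for class c
        for n in range(N):
            z = rng.choice(K, p=theta)          # image-theme assignment for image n
            # Gaussians are parameterised by precision, so invert for the covariance.
            images[c, n] = rng.multivariate_normal(mu[z], np.linalg.inv(Lam[z]))
    return phi, images

phi, images = generate_task()
print(images.shape)   # (5, 16, 2)
```

One draw produces a complete \(C\)-way \(N\)-shot task together with its latent task-theme mixture \(\bm{\phi}\); inference runs this process in reverse.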
If the \(K\) Gaussian image-themes \( \{ (\bm{\mu}_k, \bm{\Lambda}_k) \} _{k=1}^K\) and the Dirichlet concentrations \(\{ \bm{\alpha}_{l} \}_{l=1}^{L}\) are known, we can infer the mixture parameter \(\bm{\phi}_{d}\) from the observed images \(\mathbf{x}_{d}\) of any arbitrary task \(d\), representing that task in the latent task-theme simplex. This representation enables further analysis, such as measuring distances between tasks. Our objective is therefore to learn these parameters from the \(M\) given classification tasks by maximising the log-likelihood:
\begin{equation}
\max_{\bm{\mu}, \bm{\Sigma}, \bm{\alpha}} \ln p(\mathbf{x} | \bm{\mu}, \bm{\Sigma}, \bm{\alpha}).
\label{eq:mle}
\end{equation}
Due to the complexity of the graphical model with latent variables as shown in <ref>, the inference for the likelihood in (<ref>) is intractable, and therefore, the estimation must rely on approximate inference. Current methods to approximate the posterior of LDA-based models fall into two main categories: sampling <cit.> and optimisation <cit.>. Each approach has strengths and weaknesses, where the choice mostly depends on the application of interest. For the problem of task similarity where \(M\) is very large, the optimisation approach, and in particular, the mean-field VI, is preferable due to its efficiency and scalability to large data sets. In this paper, VI is used to infer the parameters of interest.
The log-likelihood of interest can be lower-bounded using Jensen's inequality. This lower bound is often known as the evidence lower bound (ELBO) and can be expressed as:
\begin{equation}
\begin{aligned}[b]
\mathsf{L} & = \mathbb{E}_{q} \left[ \ln p(\mathbf{x}, \bm{\phi}, \mathbf{y}, \bm{\theta}, \mathbf{z} | \bm{\delta}, \bm{\alpha}, \bm{\mu}, \bm{\Sigma}) \right] - \mathbb{E}_{q} \left[ \ln q(\bm{\phi}, \mathbf{y}, \bm{\theta}, \mathbf{z}) \right].
\end{aligned}
\label{eq:elbo}
\end{equation}
Following the conventional variational inference for LDA <cit.>, we choose a fully factorised variational distribution \(q\) as our variational posterior:
\begin{equation}
\begin{aligned}[b]
q(\bm{\phi}, \mathbf{y}, \bm{\theta}, \mathbf{z}) & = \prod_{d=1}^{M} q(\bm{\phi}_{d}; \bm{\lambda}_{d}) \prod_{c=1}^{C} q(\mathbf{y}_{dc}; \bm{\eta}_{dc}) \, q(\bm{\theta}_{dc}; \bm{\gamma}_{dc}) \prod_{n=1}^{N} q(\mathbf{z}_{dcn}; \mathbf{r}_{dcn}),
\end{aligned}
\label{eq:q}
\end{equation}
\begin{align*}
q(\bm{\phi}_{d}; \bm{\lambda}_{d}) = \mathrm{Dirichlet}_{L} \left(\bm{\phi}_{d}; \bm{\lambda}_{d} \right) & \qquad q(\mathbf{y}_{dc}; \bm{\eta}_{dc}) = \mathrm{Categorical}\left(\mathbf{y}_{dc}; \bm{\eta}_{dc}\right) \\
q(\bm{\theta}_{dc}; \bm{\gamma}_{dc}) = \mathrm{Dirichlet}_{K} \left( \bm{\theta}_{dc}; \bm{\gamma}_{dc} \right) & \qquad q(\mathbf{z}_{dcn}; \mathbf{r}_{dcn}) = \mathrm{Categorical} \left(\mathbf{z}_{dcn}; \mathbf{r}_{dcn} \right).
\end{align*}
Given the variational distribution \(q\) defined in Eq. (<ref>), we can rewrite the ELBO as:
\begin{equation}
\begin{aligned}
\mathsf{L} & = \mathbb{E}_{q} \left[ \ln p(\mathbf{x} | \mathbf{z}, \bm{\mu}, \bm{\Sigma}) + \ln p(\mathbf{z} | \bm{\theta}) + \ln p(\bm{\theta} | \mathbf{y}, \bm{\alpha}) + \textcolor{violet}{\ln p(\mathbf{y} | \bm{\phi})} + \textcolor{violet}{\ln p(\bm{\phi} | \bm{\delta})} \right.\\
& \qquad \left. - \ln q(\mathbf{z}) - \ln q(\bm{\theta}) - \textcolor{violet}{\ln q(\mathbf{y})} - \textcolor{violet}{\ln q(\bm{\phi})} \right].
\end{aligned}
\label{eq:elbo_factorised}
\end{equation}
Compared to the conventional LDA <cit.>, the ELBO in Eq. (<ref>) contains four extra terms, highlighted in violet. The presence of those terms is due to the hierarchical structure of LDCC, which incorporates classes (analogous to paragraphs) into the model.
Instead of maximising the likelihood, we maximise its lower bound, resulting in the alternative objective function:
\begin{equation}
\max_{\bm{\mu}, \bm{\Sigma}, \bm{\alpha}} \, \, \max_{\mathbf{r}, \bm{\gamma}, \bm{\eta}, \bm{\lambda}} \mathsf{L}.
\end{equation}
Given the use of conjugate priors, all of the terms in the ELBO can be evaluated straightforwardly (please refer to <ref>). The optimisation is gradient-based and performed in two steps, resulting in a process analogous to the expectation-maximisation (EM) algorithm. In the E-step, the task-specific variational parameters \(\mathbf{r}, \bm{\gamma}, \bm{\eta}\) and \(\bm{\lambda}\) are iteratively updated while holding the meta-parameters \(\bm{\mu}, \bm{\Sigma}, \bm{\alpha}\) fixed. In the M-step, the meta-parameters are updated using the values of the task-specific variational parameters obtained in the E-step. The inference for the meta image-themes is similar to the estimation of a Gaussian mixture model <cit.>. Please refer to <ref> for more details.
Conventionally, the iterative updates in the E-step and M-step require a full pass through the entire collection of tasks. This is, however, very slow, and even infeasible, since \(M\) is often in the order of millions. We therefore propose an online VI, inspired by the online learning for LDA <cit.>, to infer the image-themes. When the \(d\)-th task is observed, we perform EM to obtain the task-specific image-themes (denoted by a tilde on top of variables) that are locally optimal for that task. The meta image-themes of interest are then updated as a weighted average between their previous values and the task-specific values:
\begin{equation}
\bm{\mu} \gets (1 - \rho_{d}) \bm{\mu} + \rho_{d} \Tilde{\bm{\mu}}, \quad \bm{\Sigma} \gets (1 - \rho_{d}) \bm{\Sigma} + \rho_{d} \Tilde{\bm{\Sigma}}, \quad \bm{\alpha} \gets \bm{\alpha} - \rho_{d} \mathbf{H}^{-1} \mathbf{g},
\label{eq:online_update}
\end{equation}
where \(\rho_{d} = (\tau_{0} + d)^{-\tau_{1}}\) with \(\tau_{0} \ge 0\) and \(\tau_{1} \in (0.5, 1]\) <cit.>, and \(\mathbf{g}\) is the gradient of \(\mathsf{L}\) w.r.t. \(\bm{\alpha}\), and \(\mathbf{H}\) is the Hessian matrix. Please refer to <ref> for the details of the online learning algorithm.
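The online update above can be sketched as follows. The parameter shapes and values are toy choices rather than those of the actual model, and the Newton-style update for \(\bm{\alpha}\) is omitted for brevity:

```python
import numpy as np

def rho(d, tau0=1.0, tau1=0.7):
    """Step size rho_d = (tau0 + d)^(-tau1), with tau0 >= 0 and tau1 in (0.5, 1]."""
    return (tau0 + d) ** (-tau1)

# Hypothetical global image-theme mean and a task-local estimate mu_tilde,
# which in the paper comes from running EM on the d-th task alone.
mu = np.zeros(3)
mu_tilde = np.array([1.0, 2.0, 3.0])

for d in range(1, 6):        # a toy stream of five observed tasks
    mu = (1.0 - rho(d)) * mu + rho(d) * mu_tilde

# mu moves from its initialisation towards the task-local estimates,
# while remaining a convex combination of the values seen so far.
print(np.all(mu > 0) and np.all(mu < mu_tilde))   # True
```

The decaying schedule \(\rho_{d}\) satisfies the usual stochastic-approximation conditions, so early tasks move the meta-parameters substantially while later tasks make smaller corrections.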
Also, instead of updating the image-themes after observing a single task, we use a mini-batch of tasks to reduce noise. The mini-batch version requires a slight modification: we average the task-specific image-themes over all tasks in the mini-batch, and use that average as the task-specific value to update the corresponding meta image-theme.
Given the image-themes \(\{\bm{\mu}_{k}, \bm{\Sigma}_{k} \}_{k=1}^{K} \) and the Dirichlet parameters \(\{ \bm{\alpha}_{l} \}_{l=1}^{L}\), we can represent a task in the latent task-theme simplex by the variational Dirichlet posterior of its task-theme mixing coefficients, \(q(\bm{\phi}_{d}; \bm{\lambda}_{d})\). This new representation of classification tasks has two advantages compared to the recently proposed task representation Task2Vec <cit.>: (i) it does not need any pre-trained network, and (ii) it uses a probability distribution instead of a single-value vector as in Task2Vec, allowing modelling uncertainty to be included when representing tasks. In addition, we can utilise this representation to quantitatively analyse the similarity between two tasks through a divergence between their posteriors \(q(\bm{\phi}_{d}; \bm{\lambda}_{d})\).
Commonly, symmetric distances, such as the Jensen-Shannon divergence, Hellinger distance, or earth mover's distance, are employed to measure the divergence between distributions. However, it has been argued that similarity should be represented by an asymmetric measure <cit.>. This is reasonable in the context of transfer learning, since knowledge gained from learning a difficult task might significantly facilitate the learning of an easy task, while the reverse might not be equally effective. In light of asymmetric distances, we use the Kullback-Leibler (KL) divergence, denoted as \(D_{\mathrm{KL}}[. \Vert .]\). As \(D_{\mathrm{KL}} \left[ P \Vert Q \right]\) measures the information lost when using a code optimised for \(Q\) to encode samples of \(P\), we calculate \(D_{\mathrm{KL}} \left[ q(\bm{\phi}; \bm{\lambda}_{M + 1}) \Vert q(\bm{\phi}; \bm{\lambda}_{d}) \right]\), where \(d \in \{1, \ldots, M\}\), to assess how different the \(d\)-th training task is from the novel \((M + 1)\)-th task.
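The KL divergence between two Dirichlet distributions has a closed form, so the task distance can be computed directly from the variational parameters \(\bm{\lambda}\). A minimal sketch (the concrete parameter values are illustrative):

```python
import numpy as np
from scipy.special import gammaln, digamma

def dirichlet_kl(lam_p, lam_q):
    """Closed-form KL[ Dirichlet(lam_p) || Dirichlet(lam_q) ]."""
    lam_p = np.asarray(lam_p, dtype=float)
    lam_q = np.asarray(lam_q, dtype=float)
    p0, q0 = lam_p.sum(), lam_q.sum()
    return (gammaln(p0) - gammaln(lam_p).sum()
            - gammaln(q0) + gammaln(lam_q).sum()
            + ((lam_p - lam_q) * (digamma(lam_p) - digamma(p0))).sum())

a = np.array([2.0, 1.0, 0.5, 0.5])   # illustrative variational parameters
b = np.array([1.0, 1.0, 1.0, 1.0])

print(abs(dirichlet_kl(a, a)) < 1e-12)            # True: zero for identical posteriors
print(dirichlet_kl(a, b) == dirichlet_kl(b, a))   # False: KL is asymmetric
```

The asymmetry is exactly the property argued for above: the divergence from a training task to a testing task generally differs from the reverse direction.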
Correlation Diagram
We define a correlation diagram as a qualitative measure that visually represents the effectiveness of meta-learning algorithms. The diagram plots the expected classification accuracy as a function of the KL divergence between testing and training tasks. Intuitively, the closer a testing task is to the training tasks, the higher the performance. Hence, the proposed correlation diagram can be used to qualitatively compare different meta-learning methods.
A correlation diagram is constructed by first calculating the average distance from each testing task, denoted with subscript \(M + i\), \(i \in \mathbb{N}\), to all training tasks:
\begin{equation*}
\overline{D}_{M + i} = \frac{1}{M} \sum_{d=1}^{M} D_{\mathrm{KL}} \left[q(\bm{\phi}; \bm{\lambda}_{M + i}) \Vert q(\bm{\phi}; \bm{\lambda}_{d}) \right].
\end{equation*}
The obtained average distances are then grouped into \(J\) interval bins, each of size \(\triangle_{J}~=~\max_{i} \overline{D}_{M + i} /J\). Let \(B_{j}\) with \(j \in \{1, \ldots, J\}\) be the set of testing tasks that have their average KL distances falling within the interval \(I_{j} = \left((j - 1) \triangle_{J}, j \triangle_{J} \right]\). The distance of bin \(B_{j}\) is defined as:
\begin{equation*}
d(B_{j}) = \frac{1}{|B_{j}|} \sum_{i \in B_{j}} \overline{D}_{M + i}.
\end{equation*}
Next, a model trained on the training tasks is employed to evaluate the prediction accuracy \(a^{(v)}_{i}\) on all the testing tasks to obtain the accuracy for bin \( B_{j} \):
\begin{equation*}
a(B_{j}) = \frac{1}{|B_{j}|} \sum_{i \in B_{j}} a^{(v)}_{i}.
\end{equation*}
Finally, plotting \(d(B_{j})\) against \(a(B_{j})\) gives the desired correlation diagram (e.g., <ref>).
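The binning procedure above can be sketched as follows; the inputs are illustrative, and plotting is omitted:

```python
import numpy as np

def correlation_diagram(avg_dists, accuracies, J=10):
    """Group testing tasks into J distance bins and average the
    prediction accuracy within each bin."""
    avg_dists = np.asarray(avg_dists, dtype=float)
    accuracies = np.asarray(accuracies, dtype=float)
    width = avg_dists.max() / J                    # bin size: max distance / J
    # task i falls in bin j when its distance lies in ((j-1)*width, j*width]
    idx = np.clip(np.ceil(avg_dists / width).astype(int), 1, J)
    d_bins, a_bins = [], []
    for j in range(1, J + 1):
        mask = idx == j
        if mask.any():                             # skip empty bins
            d_bins.append(avg_dists[mask].mean())  # bin distance d(B_j)
            a_bins.append(accuracies[mask].mean()) # bin accuracy a(B_j)
    return np.array(d_bins), np.array(a_bins)

d_bins, a_bins = correlation_diagram([0.1, 0.2, 0.9, 1.0],
                                     [0.9, 0.8, 0.5, 0.4], J=2)
print(d_bins, a_bins)
```

Plotting `d_bins` against `a_bins` yields the correlation diagram; on this toy input, the nearer bin has the higher mean accuracy, matching the intuition described above.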
§ EXPERIMENTS
We carry out two experiments – correlation diagram and task selection – to demonstrate the capability of the proposed approach. We evaluate it on \(n\)-way classification tasks formed from two separate data sets: Omniglot <cit.> and mini-ImageNet <cit.>. In this setting, a testing task is represented by \(k\)-shot labelled data without the availability of unlabelled data, following the transductive learning setting <cit.>. We evaluate several meta-learning algorithms, namely MAML <cit.>, Prototypical Networks <cit.>, Amortised Meta-learner (ABML) <cit.>, BMAML <cit.> and VAMPIRE <cit.>, to verify the distance-performance correlation using our proposed method.
For Omniglot, we follow the standard pre-processing steps for few-shot image classification without any data augmentation, and use the standard train-test split from the original paper to prevent information leakage. For mini-ImageNet, we follow the common train-test split with 80 classes for training and 20 classes for testing <cit.>. Since the raw images in mini-ImageNet are high-dimensional, we employ 640-dimensional features extracted from a wide residual network <cit.> to ease the computation.
We follow Algorithm <ref> in <ref> to obtain the posterior of the image-themes using the tasks in the training set. We use \(L = 4\) task-themes and \(K = 8\) image-themes for both data sets. The Dirichlet distribution for the task-theme mixture, \(\mathrm{Dirichlet}_{L}(\bm{\phi}_{d} | \bm{\delta})\), is chosen to be symmetric with \(\delta = 0.5\). The parameter inference, or training, is carried out with 16 images per class while varying the number of classes between 5 and 10 to fit into the memory of an Nvidia 1080 Ti GPU. The inference of the variational parameter \(\bm{\lambda}\) is done on all available labelled data in each class (\(20\) for Omniglot and \(600\) for mini-ImageNet). Note that this is used for the correlation diagram demonstration. For the task selection, this number matches the number of shots in the few-shot learning setting[Implementation can be found at <https://github.com/cnguyen10/similarity_classification_tasks>].
For the evaluation of meta-learning algorithms, we use a similar network with 4 convolutional modules to train on Omniglot <cit.>, while using a fully connected network with one hidden layer of 128 units to train on the extracted features of mini-ImageNet <cit.>.
Note that the number of tasks that can be formed from the two data sets is very large. For Omniglot, approximately \(6.8 \times 10^{12}\) and \(10^{12}\) unique tasks can be generated from the training and testing sets, respectively. For mini-ImageNet, these numbers are slightly more manageable, with about \(2.4 \times 10^{6}\) unique tasks for training and \(15,504\) tasks for testing. To reduce the computation and facilitate the analysis, we randomly select 1 million Omniglot tasks for training and \(20,000\) tasks for testing. For mini-ImageNet, we select 2 million tasks for training and \(15,504\) tasks for testing.
§.§ Correlation Diagram
The correlation diagram plots the average accuracy of the meta-learning algorithms as a function of the average KL divergence from each task in the testing set to all tasks in the training set, in the 5-way 1-shot setting.
To construct the correlation diagram, we train a continuous LDCC on the \(n\)-way 16-shot setting (\(n\) varying from 5 to 10), and then infer the variational parameter \(\bm{\lambda}\) of the task-theme mixture \(\bm{\phi}\). The inferred \(\bm{\lambda}\) is used to calculate the KL divergence between each testing task and all training tasks. Note that the continuous LDCC is trained only on the training tasks. We then separately evaluate the performance of different meta-learning algorithms on the same 5-way 1-shot setting, and plot the correlation diagram in <ref>. The results for performance versus task distance (or similarity) agree well with common intuition: testing tasks closer to the training tasks have higher prediction accuracy. Note that this observation is consistent across several meta-learning methods. It is also interesting that some methods are more robust than others with respect to the dissimilarity between training and testing tasks.
§.§ Task Selection
The prediction accuracy of several meta-learning methods on 5-way 5-shot mini-ImageNet testing tasks when training tasks are pro-actively selected outperforms the un-selective approaches, and is slightly better than Task2Vec. The error bars for the un-selective cases represent 95% confidence intervals calculated over 50 trials of random task selection.
MAML 5-way
ProtoNet 5-way
MAML 10-way
ProtoNet 10-way
The proposed task-selective approach outperforms the randomly chosen training tasks, and shows slightly better results than Task2Vec when varying the number of classes within a classification task as well as the number of training tasks.
We show that when there is a constraint on the number of training tasks, selecting tasks based on the proposed similarity outperforms the un-selective approach that randomly selects training tasks. To demonstrate this, we assume that one can pick a small number of mini-ImageNet tasks from the whole training set to train a meta-learning model, and evaluate on all tasks in the testing set. In the selective case, we use the LDCC model trained on all training tasks to infer the variational mixture parameters \(\bm{\lambda}\) for all training and testing tasks. We then pick the training tasks that are closest to all the testing tasks under the proposed KL divergence, and use them to train a meta-learning model. In the un-selective case, we randomly select the same number of training tasks without measuring any similarity. We also include Task2Vec as a baseline for the selective case to compare with our proposed approach. As the experiment is based on extracted features of mini-ImageNet, it is difficult to adapt common pre-trained networks for use as the probe network in Task2Vec. As a workaround, we use MAML to train a fully-connected network with three hidden layers on the training set under the 5-way 5-shot setting, and use the feature extractor (excluding the last layer) of this network as the probe network for Task2Vec. This results in a 3-D Task2Vec representation of the same dimension as \(\bm{\phi}_{d}\) in the continuous LDCC, and hence the two can be compared fairly. In addition, we directly calculate the diagonal of the Fisher information matrix of the probe network without the approximation proposed in Task2Vec, to reduce the complexity of hyper-parameter tuning.
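The selection step above can be sketched as follows, a simplified illustration with hypothetical helper names (in the actual pipeline the mixtures \(\bm{\lambda}\) come from the trained LDCC model):

```python
import numpy as np

def select_training_tasks(train_mixtures, test_mixtures, budget):
    """Pick the `budget` training tasks whose theme mixtures are closest,
    on average under KL(test || train), to the testing tasks."""
    def kl(p, q, eps=1e-12):
        p = np.asarray(p, dtype=float) + eps
        q = np.asarray(q, dtype=float) + eps
        p, q = p / p.sum(), q / q.sum()
        return np.sum(p * np.log(p / q))
    # score each training task by its mean KL distance from all testing tasks
    scores = [np.mean([kl(te, tr) for te in test_mixtures])
              for tr in train_mixtures]
    return sorted(np.argsort(scores)[:budget].tolist())
```

The un-selective baseline would simply replace the scoring step with a uniform random draw of `budget` training-task indices.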
<ref> shows the accuracy results tested on \(15,504\) mini-ImageNet testing tasks in the 5-way 5-shot setting for models trained on 1,000 training tasks. We also report the 95% confidence interval for the case of random task selection. Statistically, meta-learning methods trained on tasks selected by our proposed solution outperform the un-selective cases, and are slightly better than Task2Vec, especially for probabilistic meta-learning methods such as BMAML, ABML and VAMPIRE.
To study the effects induced by the number of training tasks and the number of ways within each task, we run an extensive experiment with a similar 5-shot setting, but varying the number of training tasks and ways, and plot the results in <ref>. In general, the proposed approach outperforms the un-selective approach, and is slightly better than Task2Vec.
Despite the promising results, our proposed task selection has some limitations. The approach requires a sufficient number of labelled data points in the testing tasks. More specifically, we need 5 labelled images per class so that the trained LDCC model can correctly infer \(\bm{\lambda}\). Further reduction in the number of labelled data in the test set might result in a poor estimate of \(\bm{\lambda}\), hindering the task selection process. This is a well-known issue in LDA and its variations, which do not work well for short texts. Nevertheless, the assumption of a 5-shot setting, which shows promising results for task selection, is still reasonable in many few-shot learning applications.
§ CONCLUSION
We propose a generative approach based on the continuous LDCC adopted from topic modelling to model classification tasks. Under this modelling approach, a classification task can be expressed as a finite mixture of Gaussian distributions whose components are shared across all tasks. This new representation of classification tasks allows one to quantify the similarity between tasks through the asymmetric KL divergence. We also introduce a task selection strategy based on the proposed task similarity, and demonstrate its superiority in meta-learning compared to the conventional approach where training tasks are randomly selected.
§ ACKNOWLEDGEMENT
This work was supported with supercomputing resources provided by the Phoenix HPC service at the University of Adelaide.
§ BROADER IMPACT
The proposed approach is helpful in transfer-learning settings, especially when the amount of training data for the testing task is limited. By representing tasks in the topic space, the proposed approach allows one to assess task similarity and provides insight into when transfer learning will be effective. This has the benefit of saving costs on data collection and annotation for the testing task. However, the trade-off is the computational cost involved in training the LDCC model to compute the task-to-task similarities.
§ REFERENCES
TASK2VEC: Task embedding for meta-learning. IEEE International Conference on Computer Vision.
Task clustering and gating for Bayesian multitask learning. Journal of Machine Learning Research.
Pattern Recognition and Machine Learning.
Latent Dirichlet allocation. Journal of Machine Learning Research.
Online inference of topics with latent Dirichlet allocation. Artificial Intelligence and Statistics.
A closer look at few-shot classification. International Conference on Learning Representations.
A new meta-baseline for few-shot learning. arXiv preprint arXiv:2003.04390.
A baseline for few-shot image classification. International Conference on Learning Representations.
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. International Conference on Machine Learning.
Stochastic collapsed variational Bayesian inference for latent Dirichlet allocation. International Conference on Knowledge Discovery and Data Mining (ACM SIGKDD).
Finding scientific topics. Proceedings of the National Academy of Sciences.
Online learning for latent Dirichlet allocation. Advances in Neural Information Processing Systems.
Clustered multi-task learning: A convex formulation. Advances in Neural Information Processing Systems.
Human-level concept learning through probabilistic program induction. Science, American Association for the Advancement of Science.
A Bayesian hierarchical model for learning natural scene categories. International Conference on Computer Vision and Pattern Recognition.
Estimating a Dirichlet distribution. Technical report, MIT.
Uncertainty in model-agnostic meta-learning using variational inference. IEEE Winter Conference on Applications of Computer Vision.
LEEP: A New Measure to Evaluate Transferability of Learned Representations. International Conference on Machine Learning.
Inference of population structure using multilocus genotype data. Genetics, Genetics Society of America.
Amortized Bayesian meta-learning. International Conference on Learning Representations.
Optimization as a model for few-shot learning. International Conference on Learning Representations.
Meta-learning with latent embedding optimization. International Conference on Learning Representations.
Latent Dirichlet co-clustering. International Conference on Data Mining.
A Principled Approach for Learning Task Similarity in Multitask Learning. International Joint Conference on Artificial Intelligence.
Prototypical networks for few-shot learning. Advances in Neural Information Processing Systems.
A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation. Advances in Neural Information Processing Systems.
Discovering structure in multiple learning tasks: The TC algorithm. International Conference on Machine Learning.
Transferability and hardness of supervised classification tasks. International Conference on Computer Vision.
Features of similarity. Psychological Review, American Psychological Association.
Matching networks for one shot learning. Advances in Neural Information Processing Systems.
Multi-task learning for classification with Dirichlet process priors. Journal of Machine Learning Research.
Bayesian Model-Agnostic Meta-Learning. Advances in Neural Information Processing Systems.
Taskonomy: Disentangling task transfer learning. IEEE Conference on Computer Vision and Pattern Recognition.
A convex formulation for learning task relationships in multi-task learning. Conference on Uncertainty in Artificial Intelligence.
# Joint Coreference Resolution and Character Linking
for Multiparty Conversation
Jiaxin Bai1, Hongming Zhang1, Yangqiu Song1, and Kun Xu2
1CSE, HKUST
2 Tencent AI Lab
{jbai, hzhangal<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Character linking, the task of linking mentioned people in conversations to
the real world, is crucial for understanding the conversations. For the
efficiency of communication, humans often choose to use pronouns (e.g., “she”)
or normal phrases (e.g., “that girl”) rather than named entities (e.g.,
“Rachel”) in spoken language, which makes linking those mentions to real
people a much more challenging task than regular entity linking. To address
this challenge, we propose to incorporate the richer context from the
coreference relations among different mentions to help the linking. On the
other hand, considering that finding coreference clusters itself is not a
trivial task and could benefit from the global character information, we
propose to jointly solve these two tasks. Specifically, we propose C2, the
joint learning model of Coreference resolution and Character linking. The
experimental results demonstrate that C2 can significantly outperform previous
works on both tasks. Further analyses examine the contribution of each
module in the proposed model and the effect of the hyper-parameters.
## 1 Introduction
Understanding conversations has long been one of the ultimate goals of the
natural language processing community, and a critical step towards that is
grounding all mentioned people to the real world. If we can achieve that, we
can leverage our knowledge about these people (e.g., things that happened to
them before) to better understand the conversation. On the other hand, we can
also aggregate the conversation information back to our understanding about
these people, which can be used for understanding future conversations that
involve the same people. To simulate the real conversations and investigate
the possibility for models to ground mentioned people, the character linking
task was proposed Chen and Choi (2016). Specifically, it uses the transcripts
of TV shows (i.e., Friends) as the conversations and asks the models to ground
all person mentions to characters.
Figure 1: The composition of the mentions in conversations for character
grounding. Over 88% of the mentions are not named entities, which brings
exceptional challenges when linking those to character entities.
Even though the character linking task can be viewed as a special case of the
entity linking task, it is more challenging than the ordinary entity linking
task for various reasons. First, the ordinary entity linking task often aims
at linking named entities to external knowledge bases such as Wikipedia, where
rich information (e.g., definitions) is available. However, for the character
linking task, we do not have the support of such a rich knowledge base and all
we have are the names of these characters and simple properties (e.g., gender)
about these characters. Second, the mentions in the ordinary entity linking
are mostly concepts and entities, but not pronouns. However, as shown in
Figure 1, 88% of the character mentions are pronouns (e.g., “he”) or personal
nouns (e.g., “that guy”) while only 12% are named entities.
Figure 2: Coreference clusters can help to connect the whole conversation to
provide a richer context for each mention such that we can better link them to
Paul. Meanwhile, the character Paul can also provide global information to
help resolve the coreference.
Considering that pronouns have relatively weak semantics by themselves, to
effectively ground mentions to the correct characters, we need to fully
utilize the context information of the whole conversation rather than just the
local context they appear in. One potential solution is using the coreference
relations among different mentions as the bridge to connect the richer
context. One example is shown in Figure 2. It is difficult to directly link
the highlighted mentions to the character Paul based on their local context
because the local context of each mention can only provide a single piece of
information about its referent, e.g., “person is a male” or “the person works
with Monica.” Given the coreference cluster, the mentions refer to the same
person, and the pieces of information are put together to jointly determining
the referent. As a result, it is easier for a model to do character linking
with resolved coreference. Similar observations are also made in Chen et al.
(2017).
At the same time, we also noticed that coreference resolution, especially
those involving pronouns, is also not trivial. As shown by the recent
literature on the coreference resolution task Lee et al. (2018); Kantor and
Globerson (2019), the task is still challenging for current models and the key
challenge is how to utilize the global information about entities. And that is
exactly what the character linking model can provide. For example, in Figure
2, it is difficult for a coreference model to correctly resolve the last
mention he in the utterance given by Ross based on its local context, because
another major male character (Joey) joins the conversation, which can distract
and mislead the coreference model. However, if the model knows the mention he
links to the character Paul and Paul works with Monica, it is easier to
resolve he to some guy that Monica works with.
Motivated by these observations, we propose to jointly train the Coreference
resolution and Character linking tasks and name the joint model as C2. C2
adopts a transformer-based text encoder and includes a mention-level self-
attention (MLSA) module that enables the model to do mention-level
contextualization. Meanwhile, a joint loss function is designed and utilized
so that both tasks can be jointly optimized. The experimental results
demonstrate that C2 outperforms all previous work significantly on both tasks.
Specifically, compared with the previous work Zhou and Choi (2018), C2
improves the performance by 15% and 26% on the coreference resolution and
character linking tasks111The performance on the coreference resolution is
evaluated based on the average F1 score of B3, CEAFϕ4, and BLANC. The
performance on the character linking task is evaluated by the average F1 score
of the micro and macro F1. respectively comparing to the previous state-of-
the-art model ACNN Zhou and Choi (2018) . Further hyper-parameter and ablation
studies testify the effectiveness of different components of C2 and the effect
of all hyper-parameters. Our code is available at https://github.com/HKUST-
KnowComp/C2.
## 2 Problem Formulations and Notations
We first introduce the coreference resolution and character linking tasks as
well as used notations. Given a conversation, which contains multiple
utterances and $n$ character mentions $c_{1},c_{2},...,c_{n}$, and a pre-
defined character set $\mathcal{Z}$, which contains $m$ characters
$z_{1},z_{2},...,z_{m}$. The coreference resolution task is grouping all
mentions to clusters such that all mentions in the same cluster refer to the
same character. The character linking task is linking each mention to its
corresponding character.
## 3 Model
Figure 3: The coreference module and the linking module share the same
mention representation $g^{(n)}$ as input. The mention representations
$g^{(i)}$ are iteratively refined through the mention-level self-attention
layers. The initial mention representations $g^{(0)}$ are the sum of text span
representations from a pre-trained text encoder and corresponding speaker
embeddings.
In this section, we introduce the proposed C2 framework, which is illustrated
in Figure 3. With the conversation and all mentions as input, we first encode
them with a shared mention representation encoder module, which includes a
pre-trained transformer text encoder and a mention-level self-attention (MLSA)
module. After that, we make predictions for both tasks via two separate
modules. In the end, a joint loss function is devised so that the model can be
effectively trained on both tasks simultaneously. Details are as follows.
### 3.1 Mention Representation
We use pre-trained language models Devlin et al. (2018); Joshi et al. (2019a)
to obtain the contextualized representations for mentions. As speaker
information is critical for the conversation understanding, we also include
that information by appending speaker embeddings to each mention. As a result,
the initial representation of mention $i$ is:
$g_{i}^{(0)}=t_{start_{i}}+t_{end_{i}}+e_{speaker_{i}},$ (1)
where $t_{start_{i}}$ and $t_{end_{i}}$ are the contextualized representation
of the beginning and the end tokens of mention $i$, and the $e_{speaker_{i}}$
is the speaker embedding for the current speaker. Here, we omit the embeddings
of inner tokens because their semantics has been effectively encoded via the
language model. The speaker embeddings are randomly initialized before
training.
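Eq. (1) can be illustrated with a toy sketch; the random token vectors and speaker embeddings below merely stand in for real encoder outputs, and all names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy hidden size

# stand-ins for contextualized token vectors from a pre-trained encoder
tokens = rng.normal(size=(10, d))
# randomly initialized speaker embeddings, as described in the text
speaker_emb = {"Ross": rng.normal(size=d), "Monica": rng.normal(size=d)}

def initial_mention_rep(start, end, speaker):
    """g_i^(0) = t_start + t_end + e_speaker (Eq. 1);
    inner-token embeddings are omitted."""
    return tokens[start] + tokens[end] + speaker_emb[speaker]

g0 = initial_mention_rep(2, 4, "Ross")
```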
Sometimes the local context of a mention is not enough to make reasonable
predictions, and it is observed that co-occurring mentions can provide
document-level context information. To refine the mention representations
given the presence of other mentions in the document, we introduce the
Mention-Level Self-Attention (MLSA) layer, which has $n$ layers of transformer
encoder structure Vaswani et al. (2017) and is denoted as $T$. Formally, this
iterative mention refinement process can be described by
$\displaystyle
g_{1}^{(i+1)},...,g_{k}^{(i+1)}=T(g_{1}^{(i)},...,g_{k}^{(i)}),$ (2)
where $k$ indicates the number of mentions in a document, and the $g^{(i)}$
means the mention representation from the $i$-th layer of MLSA.
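A stripped-down illustration of the MLSA refinement in Eq. (2), assuming a single attention head with only a residual connection (the actual model uses full transformer encoder layers with feed-forward sub-layers and normalization):

```python
import numpy as np

def self_attention_layer(G, Wq, Wk, Wv):
    """One simplified single-head self-attention layer over the k mention
    representations in G of shape (k, d): each mention attends to every
    other mention in the same document."""
    Q, K, V = G @ Wq, G @ Wk, G @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)   # softmax over mentions
    return G + w @ V                    # residual connection

def mlsa(G, layer_params):
    """Iteratively refine mention representations g^(0) -> g^(n) (Eq. 2)."""
    for Wq, Wk, Wv in layer_params:
        G = self_attention_layer(G, Wq, Wk, Wv)
    return G

rng = np.random.default_rng(0)
d, k, n_layers = 8, 5, 2
params = [tuple(rng.normal(scale=0.1, size=(d, d)) for _ in range(3))
          for _ in range(n_layers)]
G_n = mlsa(rng.normal(size=(k, d)), params)  # refined mention representations
```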
Dataset | Episodes | Scenes | Utterances | Speakers | Mentions | Entities
---|---|---|---|---|---|---
TRN | 76 | 987 | 18,789 | 265 | 36,385 | 628
DEV | 8 | 122 | 2,142 | 48 | 3,932 | 102
TST | 13 | 192 | 3,597 | 91 | 7,050 | 165
Total | 97 | 1,301 | 24,528 | 331 | 47,367 | 781
Table 1: The detailed information about the datasets. For each season, the
episode 1 to 19 are used for training, the episode 20 to 21 for development,
and the remaining for testing.
### 3.2 Coreference Resolution
Following the previous work Joshi et al. (2019a), we model the coreference
resolution task as an antecedent finding problem. For each mention, we aim at
finding one of the previous mentions that refer to the same person. If no such
previous mention exists, it should be linked to the dummy mention
$\varepsilon$. Thus the goal of a coreference model is to learn a
distribution $P(y_{i})$ over the candidate antecedents of each mention $i$:
$\displaystyle
P(y_{i})=\frac{e^{s(i,y_{i})}}{\Sigma_{y^{\prime}\in\mathcal{Y}(i)}e^{s(i,y^{\prime})}},$
(3)
where $s(i,j)$ is the score for the antecedent assignment of mention $i$ to
$j$. The score $s(i,j)$ contains two parts: (1) the plausibility score of the
mentions $s_{a}(i,j)$; (2) the mention score measuring the plausibility of
being a proper mention $s_{m}(i)$. Formally, the $s(i,j)$ can be expressed by
$\displaystyle s(i,j)$ $\displaystyle=s_{m}(i)+s_{m}(j)+s_{a}(i,j),$ (4)
$\displaystyle s_{m}(i)$ $\displaystyle=FFNN_{m}(g_{i}^{(n)}),$ (5)
$\displaystyle s_{a}(i,j)$
$\displaystyle=FFNN_{a}([g_{i}^{(n)},g_{j}^{(n)}]),$ (6)
where $g^{(n)}$ stands for the last layer mention representation resulted from
the MLSA and $FFNN$ indicates the feed-forward neural network.
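Eqs. (3)-(6) can be sketched as follows, with toy callables standing in for $FFNN_{m}$ and $FFNN_{a}$; fixing the dummy antecedent's total score to 0 is a common convention and an assumption on our part:

```python
import numpy as np

def antecedent_distribution(mention_reps, s_m, s_a):
    """P(y_i) over the dummy antecedent (index 0) and all previous mentions
    j < i, per Eqs. (3)-(6)."""
    dists = []
    for i, g_i in enumerate(mention_reps):
        scores = [0.0]                                # dummy antecedent
        for j in range(i):
            g_j = mention_reps[j]
            scores.append(s_m(g_i) + s_m(g_j) + s_a(g_i, g_j))
        e = np.exp(np.array(scores) - max(scores))    # stable softmax
        dists.append(e / e.sum())
    return dists

# toy stand-ins for the feed-forward scorers
rng = np.random.default_rng(0)
reps = rng.normal(size=(4, 6))
P = antecedent_distribution(reps,
                            s_m=lambda g: float(g.sum()),
                            s_a=lambda a, b: float(a @ b))
```

Note that the first mention has no previous mentions, so its distribution places all mass on the dummy antecedent.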
### 3.3 Character Linking
Character linking is formulated as a multi-class classification problem,
following previous work Zhou and Choi (2018). Given the mention
representations $g^{(n)}$, the linking can be done with a simple feed-forward
network, denoted as $FFNN(\cdot)$. Specifically, the probability that character
entity $z_{i}$ is linked to a given mention $i$ can be calculated by:
$\displaystyle Q(z_{i})=Softmax(FFNN_{l}(g_{i}^{(n)}))_{z_{i}},$ (7)
where the notation $(\cdot)_{z}$ represents the $z$-th component of a given
vector.
### 3.4 Joint Learning
To jointly optimize both coreference resolution and entity linking, we design
a joint loss over both tasks. For coreference resolution, given the gold
clusters, we minimize the negative log-likelihood of the probability that each
mention is linked to a gold antecedent. Then the coreference loss $L_{c}$
becomes
$\displaystyle L_{c}=-\sum_{i=1}^{N}\log\sum_{y\in\mathcal{Y}(i)\cap
GOLD(i)}P(y),$ (8)
where the $GOLD(i)$ denotes the gold coreference cluster that mention $i$
belongs to. Similarly, for character linking, we minimize the negative log-
likelihood of the joint probability for each mention being linked to the
correct referent character:
$\displaystyle L_{l}=-\sum_{i=1}^{N}\log Q(z_{i}).$ (9)
Finally, the joint loss can be the arithmetic average of the coreference loss
and linking loss:
$\displaystyle L=\frac{1}{2}(L_{l}+L_{c}).$ (10)
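The joint objective of Eqs. (8)-(10) amounts to the following computation over already-normalized model outputs (a sketch with our own argument names):

```python
import numpy as np

def joint_loss(P_antecedents, gold_antecedents, Q_linking, gold_characters):
    """Joint objective L = (L_l + L_c) / 2 of Eqs. (8)-(10).
    P_antecedents[i]: distribution over antecedents of mention i (index 0 = dummy);
    gold_antecedents[i]: set of gold antecedent indices for mention i;
    Q_linking[i]: distribution over characters for mention i;
    gold_characters[i]: index of the gold character."""
    # Eq. (8): log of the total probability mass on gold antecedents
    L_c = -sum(np.log(sum(P[j] for j in gold))
               for P, gold in zip(P_antecedents, gold_antecedents))
    # Eq. (9): standard cross-entropy for the linking classifier
    L_l = -sum(np.log(Q[z]) for Q, z in zip(Q_linking, gold_characters))
    return 0.5 * (L_l + L_c)  # Eq. (10)
```

When both modules put probability 1 on the gold answers, the loss is exactly zero.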
## 4 Experiments
In this section, we introduce the experimental details to demonstrate the
effectiveness of C2.
Model | B3 | CEAF$\phi 4$ | BLANC | Ave.F1
---|---|---|---|---
Prec. | Rec. | F1 | Prec. | Rec. | F1 | Prec. | Rec. | F1 |
ACNN | 84.30 | 71.90 | 77.60 | 54.50 | 71.80 | 62.00 | 84.30 | 80.40 | 82.10 | 73.96 (0.97)
CorefQA (SpanBERT-Large) | 73.72 | 75.55 | 74.62 | 65.82 | 72.38 | 68.94 | 86.82 | 84.69 | 85.75 | 76.44 (0.20)
C2F (BERT-Base) | 69.62 | 76.11 | 72.72 | 66.44 | 60.92 | 63.56 | 79.38 | 86.05 | 82.38 | 72.88 (0.23)
C2F (BERT-Large) | 71.72 | 80.25 | 75.75 | 69.97 | 62.61 | 66.08 | 81.65 | 88.23 | 84.63 | 75.49 (0.18)
C2F (SpanBERT-Base) | 72.49 | 77.88 | 75.08 | 66.00 | 64.23 | 65.10 | 81.60 | 87.43 | 84.27 | 74.81 (0.19)
C2F (SpanBERT-Large) | 81.93 | 84.38 | 82.57 | 78.04 | 71.99 | 74.89 | 88.15 | 91.09 | 89.56 | 82.34 (0.17)
C2 (BERT-Base) | 78.10 | 81.56 | 79.79 | 72.48 | 69.87 | 71.15 | 86.14 | 89.49 | 87.74 | 80.14 (0.21)
C2 (BERT-Large) | 78.49 | 81.90 | 80.16 | 73.81 | 71.15 | 72.46 | 86.20 | 89.93 | 87.97 | 80.17 (0.23)
C2 (SpanBERT-Base) | 81.18 | 83.59 | 82.36 | 73.64 | 73.09 | 73.36 | 88.06 | 91.04 | 89.49 | 81.74 (0.19)
C2 (SpanBERT-Large) | 85.83 | 85.27 | 85.55 | 77.13 | 77.84 | 77.48 | 92.31 | 92.03 | 92.17 | 85.06 (0.16)
Table 2: Experimental results on the coreference resolution task. The results
are reported to two decimal places, following previous work. Standard
deviations of the average F1 scores are shown in brackets.
### 4.1 Data Description
We use the latest released character identification
V2.0222https://github.com/emorynlp/character-identification as the
experimental dataset, and we follow the standard training, developing, and
testing separation provided by the dataset. In the dataset, all mentions are
annotated with their referent global entities. For example, in Figure 4, the
mention I is assigned to ROSS, and the mentions mom and dad are assigned to
JUDY and JACK respectively in the first utterance given by Ross. The gold
coreference clusters are derived by grouping the mentions assigned to the same
character entity. Statistically, the dataset includes four seasons of the TV
show Friends, which contain 97 episodes, 1,301 scenes, and 24,528 utterances.
In total, there are 47,367 mentions, which are assigned to 781 unique
characters. The detailed statistics are shown in Table 1.
Figure 4: The example annotations for character identification. The arrows in
the figure are pointing from the character mentions to their referent
character entities.
### 4.2 Baseline Methods
The effectiveness of the joint learning model is evaluated on both the
coreference resolution and character linking tasks. To fairly compare with
existing models, only the singular mentions are used following the singular-
only setting (S-only) in the previous work Zhou and Choi (2018).
For the coreference resolution task, we compare with the following methods.
* •
ACNN: A CNN-based model Zhou and Choi (2018) coreference resolution model that
can also produce the mention and mention-cluster embeddings at the same time.
* •
C2F: The end-to-end coarse-to-fine coreference model Joshi et al. (2019b) with
BERT Devlin et al. (2018) or SpanBERT Joshi et al. (2019a) as the encoder.
* •
CorefQA: An approach that reformulates the coreference resolution problem as a
question answering problem Wu et al. (2020), which can benefit from text
encoders fine-tuned on question answering data.
For the character linking task, we also include ACNN as a baseline method.
Considering that existing general entity linking models Kolitsas et al. (2018); van
Hulst et al. (2020); Raiman and Raiman (2018); Onando Mulang et al. (2020)
cannot be applied to the character linking problem because they are not
designed to handle pronouns, we propose a text-span classification model
with a transformer encoder as another strong baseline for the character linking
task.
* •
ACNN: A model that uses the mention and mention-cluster embeddings as input to
do character linking Zhou and Choi (2018).
* •
BERT/SpanBERT: A text-span classification model consisting of a transformer text
encoder followed by a feed-forward network.
### 4.3 Evaluation Metrics
We follow the previous work Zhou and Choi (2018) for the evaluation metrics.
Specifically, for coreference resolution, three evaluation metrics, B3,
CEAFϕ4, and BLANC, are used. These metrics were used in the CoNLL-2012
shared task Pradhan et al. (2012) to evaluate the output coreference clusters
against the gold clusters. We follow Zhou and Choi (2018) to use BLANC
Recasens and Hovy (2011) to replace MUC Vilain et al. (1995) because BLANC
takes singletons into consideration but MUC does not. As for the character
linking task, we use the Micro and Macro F1 scores to evaluate the multi-class
classification performance.
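As an illustration of one of these metrics, here is a minimal B3 sketch of our own: B3 scores each mention by the overlap of its predicted and gold clusters, and, unlike MUC, gives singletons credit.

```python
from collections import defaultdict

def b_cubed(predicted, gold):
    """B3 precision/recall/F1 over cluster assignments given as
    mention -> cluster-id dicts (both must cover the same mentions)."""
    def clusters(assign):
        c = defaultdict(set)
        for m, cid in assign.items():
            c[cid].add(m)
        return {m: c[cid] for m, cid in assign.items()}
    pc, gc = clusters(predicted), clusters(gold)
    n = len(predicted)
    # per-mention overlap between its predicted cluster and its gold cluster
    prec = sum(len(pc[m] & gc[m]) / len(pc[m]) for m in predicted) / n
    rec = sum(len(pc[m] & gc[m]) / len(gc[m]) for m in predicted) / n
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```

For example, predicting all singletons against a gold cluster of two mentions yields perfect B3 precision but only 0.5 recall.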
### 4.4 Implementation Details
In our experiments, we consider four different pre-trained language encoders:
BERT-Base, BERT-Large, SpanBERT-Base, and SpanBERT-Large, and we use $n=2$
layers of the mention-level self-attention (MLSA). The feed-forward networks
are implemented by two fully connected layers with ReLU activations. Following
the previous work Zhou and Choi (2018), the scene-level setting is used,
where each scene is regarded as a document for coreference resolution and
linking. During training, each mini-batch consists of segments obtained
from a single document. The joint learning model is optimized with the Adam
optimizer Kingma and Ba (2015) with an initial learning rate of 3e-5 and a
warm-up ratio of 10%. The model is trained for up to 100 epochs with early
stopping. All experiments are repeated three times, and the average
results are reported.
Model | Ro | Ra | Ch | Mo | Jo | Ph | Em | Ri | Micro | Macro
---|---|---|---|---|---|---|---|---|---|---
ACNN | 78.3 | 86.5 | 78.8 | 81.7 | 78.3 | 88.8 | 69.2 | 83.9 | 73.7 (0.6) | 59.6 (2.3)
BERT-Base | 87.4 | 89.9 | 86.6 | 88.2 | 87.1 | 91.1 | 94.3 | 62.4 | 84.0 (0.1) | 77.3 (0.2)
BERT-Large | 88.2 | 89.9 | 87.9 | 88.8 | 87.7 | 93.1 | 93.5 | 68.0 | 84.8 (0.2) | 79.1 (0.2)
SpanBERT-Base | 87.6 | 91.8 | 86.7 | 88.2 | 86.8 | 92.6 | 94.6 | 73.3 | 84.2 (0.1) | 77.3 (0.2)
SpanBERT-Large | 90.9 | 92.8 | 88.3 | 90.3 | 90.2 | 94.3 | 94.6 | 71.7 | 85.5 (0.1) | 79.8 (0.2)
C2 (BERT-Base) | 86.5 | 87.8 | 85.6 | 86.8 | 88.1 | 92.4 | 93.0 | 66.0 | 84.0 (0.1) | 78.6 (0.2)
C2 (BERT-Large) | 85.9 | 90.0 | 87.3 | 86.9 | 87.2 | 93.0 | 96.1 | 66.0 | 84.9 (0.1) | 79.5 (0.2)
C2(SpanBERT-Base) | 89.8 | 91.3 | 90.5 | 90.9 | 87.8 | 93.2 | 93.4 | 71.3 | 85.7 (0.1) | 81.0 (0.1)
C2 (SpanBERT-Large) | 91.2 | 94.1 | 91.1 | 92.5 | 90.4 | 94.4 | 89.2 | 77.1 | 87.0 (0.1) | 81.1 (0.1)
Table 3: Experimental results per character on character linking. The
results are presented with one decimal digit, following previous work. Standard
deviations of the Micro and Macro F1 scores are shown in brackets. The names
in the table are written in two-letter acronyms. Ro: Ross, Ra: Rachel, Ch:
Chandler, Mo: Monica, Jo: Joey, Ph: Phoebe, Em: Emily, Ri: Richard
## 5 Results and Analysis
In this section, we discuss the experimental results and present a detailed
analysis.
### 5.1 Coreference Resolution Results
The performances of coreference resolution models are shown in Table 2. C2
with SpanBERT-large achieves the best performance on all evaluation metrics.
Compared to the baseline ACNN model, which uses hand-crafted features, C2
uses a transformer to better encode the contextual information. Moreover,
even though ACNN formulates coreference resolution and character linking
as a pipeline and uses the coreference resolution result to help
character linking, the character linking result cannot in turn be used to help
resolve coreference clusters. In contrast, we treat both tasks jointly
so that they can help each other.
Currently, CorefQA is the best-performing general coreference resolution model
on the OntoNotes dataset Pradhan et al. (2012). However, its performance is
limited on the conversation dataset due to two reasons. First, different from
the experimental setting of OntoNotes, the mentions in our experiment setting
are gold mentions. Consequently, the flexible span predicting strategy of
CorefQA loses its advantages because of the absence of the mention proposal
stage. Second, CorefQA leverages fine-tuning on other question
answering (QA) datasets, and it is possible that the QA dataset used (i.e.,
SQuAD-2.0 Rajpurkar et al. (2018)) is more similar to OntoNotes than to
the multiparty conversation dataset, which is typically much more
informal. As a result, such a fine-tuning process only benefits
OntoNotes.
The coarse-to-fine (C2F) model Joshi et al. (2019b) with a transformer encoder
was the previous state-of-the-art model on OntoNotes. Referring to Table 2,
given the same text encoder, the proposed C2 model consistently outperforms
the C2F model. These results further demonstrate that with the help of the
proposed joint learning framework, the out-of-context character information
can help achieve better mention representations so that the coreference models
can resolve them more easily.
### 5.2 Character Linking Results
As shown in Table 3, the proposed joint learning model also achieves the best
performance on the character linking task and there are mainly two reasons for
that. First, the contextualized mention representations obtained from pre-
trained language encoders can better encode the context information than those
representations used in ACNN. Second, with the help of coreference clusters,
richer context about the whole conversation is encoded for each mention. For
example, when using the same pre-trained language model as the encoder, C2 can
always outperform the baseline classification model. These empirical results
confirm that, though BERT and SpanBERT can produce very good vector
representations for the mentions based on the local context, the coreference
clusters can still provide useful document-level contextual information for
linking them to a global character entity.
Figure 5: The x-axis is the number of MLSA layers used in the C2. The y-axes
are the F1 scores on each metric for their corresponding tasks. The curves
have general trends of going up, which indicates that the model performs
better when there are more layers.
### 5.3 The Number of MLSA Layers
Figure 6: Case study. All mentions that are linked to the same character and
in the same coreference cluster are highlighted with the same color. The
misclassified mention is marked with the red cross.
Another contribution of the proposed C2 model is the proposed mention-level
self-attention (MLSA) module, which helps iteratively refine the mention
representations according to the other mentions co-occurring within the same
document. In this section, to show its effect and the influence of iteration
layers, we tried different layers and show their performances on the test set
in Figure 5. We conducted the experiments with the SpanBERT-Base encoder and
all other hyper-parameters are the same. The x-axis is the number of layers,
and the y-axes are F1 scores of B3, CEAF, and BLANC for coreference
resolution, and the Macro and Micro F1 scores for character linking. From the
results, we can see that with the increase of layer number from zero to five,
the F1 scores on both tasks gradually increase. This trend demonstrates that
the model can perform better on both tasks when there are more layers.
Meanwhile, the marginal performance improvement of the MLSA layer is
decreasing. This indicates that adding too many layers of MLSA may not further
help improve the performance because enough context has been included.
Considering the balance between performance and computational efficiency, we
chose the iteration layers to be two in our current model based on similar
observations made on the development set.
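The iterative refinement that MLSA performs can be sketched as follows. This is a hypothetical simplification for illustration: single-head scaled dot-product attention with a residual connection over the mention vectors, stacked for $n$ layers; the actual C2 module may differ in details such as multi-head attention and learned projections.

```python
import math

def mlsa_layer(mentions):
    """One mention-level self-attention step: each mention vector is
    refined by attending over all mentions in the same document."""
    dim = len(mentions[0])
    scale = math.sqrt(dim)
    refined = []
    for q in mentions:
        # scaled dot-product attention scores against every mention
        scores = [sum(a * b for a, b in zip(q, m)) / scale for m in mentions]
        mx = max(scores)  # subtract max for numerically stable softmax
        weights = [math.exp(s - mx) for s in scores]
        z = sum(weights)
        ctx = [sum(w * m[i] for w, m in zip(weights, mentions)) / z
               for i in range(dim)]
        # residual connection keeps the original mention information
        refined.append([a + b for a, b in zip(q, ctx)])
    return refined

def mlsa(mentions, n_layers=2):
    for _ in range(n_layers):
        mentions = mlsa_layer(mentions)
    return mentions
```

With $n=0$ layers each mention sees only its local context; each added layer mixes in more document-level information from co-occurring mentions, matching the trend in Figure 5.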
| Coreference F1 | Linking F1
---|---|---
Model | B3 | CEAF$_{\phi_{4}}$ | BLANC | Micro | Macro
C2 | 85.54 | 77.48 | 92.17 | 87.05 | 81.09
\- MLSA | 83.57 | 75.32 | 90.51 | 86.26 | 80.32
\- Linking | 83.50 | 76.10 | 90.08 | - | -
\- Coref. | - | - | - | 86.94 | 79.58
Table 4: Three ablation studies are conducted concerning the MLSA layers, the
coreference resolution module, and the character linking module.
### 5.4 Ablation Study
In this section, we present the ablation study to clearly show the effect of
different modules in the proposed framework C2 in Table 4. First, we try to
remove the mention-level self-attention (MLSA) from our joint learning model
and a clear performance drop is observed on both tasks. Specifically, the
performance on coreference resolution is reduced by 1.21 on the average F1,
while the macro-F1 and micro-F1 scores on character linking decrease
by 0.77 and 0.79, respectively. The reduction reveals that MLSA indeed
helps achieve better mention representations with the help of co-occurring
mentions. Second, we try to remove the coreference resolution and character
linking modules. When the character linking module is removed, it is observed
that the performance on coreference resolution decreased by 1.94 on the
averaged F1 score. When the coreference module is removed, the performance of
C2 on character linking dropped by 0.83 on the average of Micro and Macro F1
scores. These results prove that the modeling of coreference resolution and
character linking can indeed help each other and improve the performance
significantly, and the proposed joint learning framework can help to achieve
that goal.
### 5.5 Case Study
Besides the quantitative evaluation, in this section, we present the case
study to qualitatively evaluate the strengths and weaknesses of the proposed
C2 model. As shown in Figure 6, we randomly select an example from the
development set to show the prediction results of the proposed model on both
tasks. To illustrate the coreference resolution and character linking results
from the C2 model, the mentions from the same coreference cluster are
highlighted with the same color. Also, we use the same color to indicate to
which character the mentions are referring. Meanwhile, the falsely predicted
result is marked with a red cross.
#### 5.5.1 Strengths
For this example, the results on both tasks are consistent. The mentions that
are linked to the same character entity are in the same coreference group and
vice versa. Based on this observation and previous experimental results, it is
more convincing that the proposed model can effectively solve the two problems
at the same time. Besides that, we also notice that the model does not overfit
to the popular characters. It correctly resolves the mentions referring
not only to the main characters but also to characters that appear only a few
times, such as MAN 1. Last but not least, the proposed model can correctly
resolve the mention to the correct antecedent even though there is a long
distance between them in the conversation. For example, the mention me in
utterance 14 can be correctly assigned to the mention you in utterance 2,
though there are 11 utterances in between. It shows that by putting two tasks
together, the proposed model can better utilize the whole conversation
context. The only error made by the model is incorrectly classifying a mention
and at the same time putting it into a wrong coreference cluster.
#### 5.5.2 Weaknesses
By analyzing the error case, it is noticed that the model may have trouble in
handling the mentions that require common sense knowledge. Humans can
successfully resolve the mention her to Danielle because they know Danielle is
on the other side of the telephone, but Monica is in the house. As a result,
Chandler can only deceive Danielle but not Monica. But the current model,
which only relies on the context, cannot tell the difference.
### 5.6 Error Analysis
We use the example in Figure 6 to emphasize the error analysis that compares
the performance of our model and the baseline models. The details are as
follows. In this example, the only mistake made by our model is related to
common-sense knowledge, and the baseline models are also not able to make a
correct prediction.
For coreference resolution, 3 out of 25 mentions are put into a wrong cluster
by the C2F baseline model. The baseline model fails to make long-distance
antecedent assignments (e.g., the “me” in utterance 14). Meanwhile, our model
is better in this case because it successfully predicts the antecedent of the
mention “me”, even though its corresponding antecedent is far away in
utterance 2. This example demonstrates the advantage that our joint model can
use global information obtained from character linking to better resolve the
co-referents that are far away from each other.
For character linking, 2 out of 25 mentions are linked to the wrong characters
by the baseline model. It is observed that the baseline model cannot
consistently make correct linking predictions for characters that appear less often, for
example, the “He” in utterance 6. In this case, our model performs better
mainly because it can use the information gathered from the nearby co-
referents to adjust its linking prediction, as its nearby co-referents are
correctly linked to corresponding entities.
## 6 Related Works
Coreference resolution is the task of grouping mentions to clusters such that
all the mentions in the same cluster refer to the same real-world entity
Pradhan et al. (2012); Zhang et al. (2019a, b); Yu et al. (2019). With the
help of higher-order coreference resolution mechanism Lee et al. (2018) and
strong pre-trained language models (e.g., SpanBERT Joshi et al. (2019b)), the
end-to-end based coreference resolution systems have been achieving impressive
performance on the standard evaluation dataset Pradhan et al. (2012).
Recently, motivated by the success of the transfer learning, Wu et al. (2020)
propose to model the coreference resolution task as a question answering
problem. Through the careful fine-tuning on a high-quality QA dataset (i.e.,
SQUAD-2.0 Rajpurkar et al. (2018)), it achieves the state-of-the-art
performance on the standard evaluation benchmark. However, as disclosed by
Zhang et al. (2020), current systems are still not perfect. For example, they
still cannot effectively handle pronouns, especially those in informal
language usage scenarios like conversations. In this paper, we propose to
leverage the out-of-context character information to help resolve the
coreference relations with a joint learning model, which has been proven
effective in the experiments.
As a traditional NLP task, entity linking Mihalcea and Csomai (2007); Ji et
al. (2015); Kolitsas et al. (2018); Raiman and Raiman (2018); Onando Mulang et
al. (2020); van Hulst et al. (2020) aims at linking mentions in context to
entities in the real world (typically in the format of knowledge graph).
Typically, the mentions are named entities and the main challenge is the
disambiguation. However, as a special case of the entity linking, the
character linking task has its challenge that the majority of the mentions are
pronouns. In the experiments, we have demonstrated that when the local context
is not enough, the richer context information provided by the coreference
clusters could be very helpful for linking mentions to the correct characters.
In the NLP community, people have long been thinking that the coreference
resolution task and entity linking should be able to help each other. For
example, Ratinov and Roth (2012) show how to use knowledge from named-entity
linking to improve the coreference resolution, but do not consider doing it in
a joint learning approach. After that, Hajishirzi et al. (2013) demonstrate
that the coreference resolution and entity linking are complementary in terms
of reducing the errors in both tasks. Motivated by these observations, a joint
model for coreference, typing, and linking is proposed Durrett and Klein
(2014) to improve the performance on three tasks at the same time. Compared
with previous works, the main contributions of this paper are two-fold: (1) we
tackle the challenging character linking problem; (2) we design a novel
mention representation encoding method, which has been shown effective on both
the coreference resolution and character linking tasks.
## 7 Conclusion
In this paper, we propose to solve the coreference resolution and character
linking tasks jointly. The experimental results show that the proposed model
C2 performs better than all previous models on both tasks. Detailed analysis
is also conducted to show the contribution of different modules and the effect
of the hyper-parameter.
## 8 Acknowledgements
This paper was supported by the NSFC Grant U20B2053 from China, the Early
Career Scheme (ECS, No. 26206717), the General Research Fund (GRF, No.
16211520), and the Research Impact Fund (RIF, No. R6020-19) from the Research
Grants Council (RGC) of Hong Kong, with special thanks to the Tencent AI Lab
Rhino-Bird Focused Research Program.
## References
* Chen et al. (2017) Henry Y. Chen, Ethan Zhou, and Jinho D. Choi. 2017. Robust coreference resolution and entity linking on dialogues: Character identification on TV show transcripts. In _Proceedings of CoNLL_ , pages 216–225.
* Chen and Choi (2016) Yu-Hsin Chen and Jinho D. Choi. 2016. Character identification on multiparty conversation: Identifying mentions of characters in TV shows. In _Proceedings of SIGDIAL_ , pages 90–100.
* Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_.
* Durrett and Klein (2014) Greg Durrett and Dan Klein. 2014. A joint model for entity analysis: Coreference, typing, and linking. _Transactions of the association for computational linguistics_ , 2:477–490.
* Hajishirzi et al. (2013) Hannaneh Hajishirzi, Leila Zilles, Daniel S. Weld, and Luke Zettlemoyer. 2013. Joint coreference resolution and named-entity linking with multi-pass sieves. In _Proceedings of the EMNLP 2013_ , pages 289–299.
* van Hulst et al. (2020) Johannes M. van Hulst, Faegheh Hasibi, Koen Dercksen, Krisztian Balog, and Arjen P. de Vries. 2020. Rel: An entity linker standing on the shoulders of giants. _Proceedings of SIGIR 2020_.
* Ji et al. (2015) Heng Ji, Joel Nothman, Ben Hachey, and Radu Florian. 2015. Overview of tac-kbp2015 tri-lingual entity discovery and linking. _Theory and Applications of Categories_.
* Joshi et al. (2019a) Mandar Joshi, Danqi Chen, Y. Liu, Daniel S. Weld, L. Zettlemoyer, and Omer Levy. 2019a. Spanbert: Improving pre-training by representing and predicting spans. _Transactions of the Association for Computational Linguistics_ , 8:64–77.
* Joshi et al. (2019b) Mandar Joshi, Omer Levy, Daniel S. Weld, and Luke Zettlemoyer. 2019b. BERT for coreference resolution: Baselines and analysis. In _Proceedings of EMNLP_.
* Kantor and Globerson (2019) Ben Kantor and Amir Globerson. 2019. Coreference resolution with entity equalization. In _Proceedings of ACL 2019_ , pages 673–677.
* Kingma and Ba (2015) Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In _Proceedings of ICLR 2015_.
* Kolitsas et al. (2018) Nikolaos Kolitsas, Octavian-Eugen Ganea, and Thomas Hofmann. 2018. End-to-end neural entity linking. In _Proceedings of CoNLL 2018_ , pages 519–529.
* Lee et al. (2018) Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-to-fine inference. In _Proceedings of NAACL (Short Papers)_ , pages 687–692.
* Mihalcea and Csomai (2007) Rada Mihalcea and Andras Csomai. 2007. Wikify! linking documents to encyclopedic knowledge. CIKM ’07, page 233–242.
* Onando Mulang et al. (2020) Isaiah Onando Mulang, Kuldeep Singh, Chaitali Prabhu, Abhishek Nadgeri, Johannes Hoffart, and Jens Lehmann. 2020. Evaluating the impact of knowledge graph context on entity disambiguation models. _arXiv e-prints_ , pages arXiv–2008.
* Pradhan et al. (2012) Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. Conll-2012 shared task: Modeling multilingual unrestricted coreference in ontonotes. pages 1–40.
* Raiman and Raiman (2018) Jonathan Raiman and Olivier Raiman. 2018. Deeptype: multilingual entity linking by neural type system evolution. _arXiv preprint arXiv:1802.01021_.
* Rajpurkar et al. (2018) Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for squad. In _Proceedings of ACL 2018_ , pages 784–789.
* Ratinov and Roth (2012) Lev Ratinov and Dan Roth. 2012. Learning-based multi-sieve co-reference resolution with knowledge. In _Proceedings of EMNLP 2012_ , pages 1234–1244.
* Recasens and Hovy (2011) Marta Recasens and Eduard Hovy. 2011. Blanc: Implementing the rand index for coreference evaluation. _Natural Language Engineering_ , 17:485 – 510.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, L. Kaiser, and Illia Polosukhin. 2017. Attention is all you need. _ArXiv_ , abs/1706.03762.
* Vilain et al. (1995) Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A model-theoretic coreference scoring scheme. _Proceedings of the 6th conference on message understanding_ , pages 45–52.
* Wu et al. (2020) Wei Wu, Fei Wang, Arianna Yuan, Fei Wu, and Jiwei Li. 2020. CorefQA: Coreference resolution as query-based span prediction. In _Proceedings of ACL 2020_ , pages 6953–6963.
* Yu et al. (2019) Xintong Yu, Hongming Zhang, Yangqiu Song, Yan Song, and Changshui Zhang. 2019. What you see is what you get: Visual pronoun coreference resolution in dialogues. In _Proceedings of EMNLP-IJCNLP 2019_ , pages 5122–5131.
* Zhang et al. (2019a) Hongming Zhang, Yan Song, and Yangqiu Song. 2019a. Incorporating context and external knowledge for pronoun coreference resolution. In _Proceedings of NAACL-HLT 2019_ , pages 872–881.
* Zhang et al. (2019b) Hongming Zhang, Yan Song, Yangqiu Song, and Dong Yu. 2019b. Knowledge-aware pronoun coreference resolution. In _Proceedings of ACL 2019_ , pages 867–876.
* Zhang et al. (2020) Hongming Zhang, Xinran Zhao, and Yangqiu Song. 2020. A brief survey and comparative study of recent development of pronoun coreference resolution. _CoRR_ , abs/2009.12721.
* Zhou and Choi (2018) Ethan Zhou and Jinho D. Choi. 2018. They exist! introducing plural mentions to coreference resolution and entity linking. In _Proceedings of ICCL_ , pages 24–34.
# The cylindrical width of transitive sets
Ashwin Sah , Mehtaab Sawhney and Yufei Zhao Department of Mathematics,
Massachusetts Institute of Technology, Cambridge, MA 02139, USA
<EMAIL_ADDRESS>
###### Abstract.
We show that for every $1\leq k\leq d/(\log d)^{C}$, every finite transitive
set of unit vectors in $\mathbb{R}^{d}$ lies within distance
$O(1/\sqrt{\log(d/k)})$ of some codimension $k$ subspace, and this distance
bound is best possible. This extends a result of Ben Green, who proved it for
$k=1$.
Sah and Sawhney were supported by NSF Graduate Research Fellowship Program
DGE-1745302. Zhao was supported by NSF Award DMS-1764176, a Sloan Research
Fellowship, and the MIT Solomon Buchsbaum Fund.
## 1\. Introduction
The following counterintuitive fact was conjectured by the third author and
proved by Green [4]. It says that every finite transitive subset of a high
dimensional sphere is close to some hyperplane. Here a subset $X$ of a sphere
in $\mathbb{R}^{d}$ is _transitive_ if for every $x,x^{\prime}\in X$, there is
some $g\in\mathsf{O}(\mathbb{R}^{d})$ so that $gX=X$ and $gx=x^{\prime}$. We
say that $X$ has _width_ at most $2r$ if it lies within distance $r$ of some
hyperplane. The finiteness assumption is important since otherwise the whole
sphere is a counterexample.
###### Theorem 1.1 (Green [4]).
Let $X$ be a finite transitive subset of the unit sphere in $\mathbb{R}^{d}$.
Then the width of $X$ is at most $O(1/\sqrt{\log d})$. Furthermore, this upper
bound is best possible up to a constant factor.
The bound in the theorem is tight since the set $X$ obtained by taking all
permutations and coordinate-wise $\pm$ signings of the unit vector
$(1,1/\sqrt{2},\ldots,1/\sqrt{d})/\sqrt{H_{d}}$, where
$H_{d}=1+1/2+\cdots+1/d\sim\log d$, has width on the order of $1/\sqrt{\log
d}$.
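As a quick numerical sanity check, assuming nothing beyond the definition above, the vector is indeed a unit vector, and its largest coordinate equals $1/\sqrt{H_d}$, which decays like $1/\sqrt{\log d}$ (this checks only the normalization, not the width bound itself):

```python
import math

def tight_example(d):
    """The unit vector (1, 1/sqrt(2), ..., 1/sqrt(d)) / sqrt(H_d)."""
    H_d = sum(1.0 / i for i in range(1, d + 1))  # harmonic number ~ log d
    return [1.0 / math.sqrt(i * H_d) for i in range(1, d + 1)]

d = 10_000
v = tight_example(d)
norm_sq = sum(x * x for x in v)  # sum_i 1/(i * H_d) = 1
largest = v[0]                   # equals 1 / sqrt(H_d) ~ 1 / sqrt(log d)
```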
Green’s proof uses a clever induction scheme along with sophisticated group
theoretic arguments, including an application of the classification of finite
simple groups.
We generalize Green’s result by showing that a finite transitive set lies not
only near some hyperplane, but in fact it lies near a subspace of codimension
$k$, as long as $k$ is not too large.
We say that $X\subset\mathbb{R}^{d}$ has _$k$ -cylindrical width_ at most $2r$
if $X$ lies within distance $r$ of some affine codimension $k$ subspace. The
case $k=1$ corresponds to the usual notion of width. Our main result below
implies that every finite transitive subset of the unit sphere in
$\mathbb{R}^{d}$ has $k$-cylindrical width $O(1/\sqrt{\log(d/k)})$ as long as
$k$ is not too large.
###### Theorem 1.2.
There is an absolute constant $C>0$ so that the following holds. Let $1\leq
k\leq d/(\log(3d))^{C}$. Let $X$ be a finite transitive subset of the unit
sphere in $\mathbb{R}^{d}$. Then there is a real $k$-dimensional subspace $W$
such that
$\sup_{\mathbf{x}\in
X}\lVert\operatorname{proj}_{W}\mathbf{x}\rVert_{2}\lesssim\frac{1}{\sqrt{\log(d/k)}}.$
Here and throughout $a\lesssim b$ means that $a\leq C^{\prime}b$ for some
absolute constant $C^{\prime}$. We write $\lVert\mathbf{x}\rVert_{2}$ for the
usual Euclidean norm of a vector $\mathbf{x}$, and $\operatorname{proj}_{W}$
is the orthogonal projection onto $W$. Note that the distance from a point
$\mathbf{x}$ to the codimension $k$ subspace $W^{\perp}$ is exactly
$\lVert\operatorname{proj}_{W}\mathbf{x}\rVert_{2}$, so Theorem 1.2 indeed
bounds the $k$-cylindrical width of $X$.
We deduce the above theorem from a complex version using a theorem on
restricted invertibility (see Section 6). A transitive subset of the complex
unit sphere is defined to be the orbit of a point under the action of some
subgroup of the unitary group.
###### Theorem 1.3.
There is an absolute constant $C>0$ so that the following holds. Let $1\leq
k\leq d/(\log(3d))^{C}$. Let $X$ be a finite transitive subset of the unit
sphere in $\mathbb{C}^{d}$. Then there is a complex $k$-dimensional subspace
$W$ such that
$\sup_{\mathbf{x}\in
X}\lVert\operatorname{proj}_{W}\mathbf{x}\rVert_{2}\lesssim\frac{1}{\sqrt{\log(d/k)}}.$
We suspect that the $1\leq k\leq d/(\log(3d))^{C}$ hypothesis is unnecessary
in both Theorems 1.2 and 1.3.
###### Conjecture 1.4.
Let $1\leq k\leq d$. Let $X$ be a finite transitive subset of the unit sphere
in $\mathbb{C}^{d}$. Then there is a complex $k$-dimensional subspace $W$ such
that
$\sup_{\mathbf{x}\in
X}\lVert\operatorname{proj}_{W}\mathbf{x}\rVert_{2}\lesssim\frac{1}{\sqrt{\log(2d/k)}}.$
One particularly intriguing special case of Conjecture 1.4 is that every finite
transitive set of unit vectors in $\mathbb{R}^{d}$ has $k$-cylindrical width
$o(1)$ for all $k=o(d)$.
We prove a matching lower bound on the cylindrical width (see Section 7 for the
proof).
###### Theorem 1.5.
Let $1\leq k\leq d$. There exists a transitive set $X$ in $\mathbb{R}^{d}$
such that for any (real or complex) $k$-dimensional subspace $W$ we have
$\sup_{\mathbf{x}\in
X}\lVert\operatorname{proj}_{W}\mathbf{x}\rVert_{2}\gtrsim\frac{1}{\sqrt{\log(2d/k)}}.$
We propose another closely related conjecture: every finite transitive set in
$\mathbb{R}^{d}$ lies inside a small cube.
###### Conjecture 1.6.
Let $X$ be a finite transitive subset of the unit sphere in $\mathbb{R}^{d}$
(or $\mathbb{C}^{d}$). Then there is a unitary basis $L$ such that
$\sup_{\mathbf{x}\in X,\mathbf{v}\in
L}|\langle\mathbf{v},\mathbf{x}\rangle|\lesssim\frac{1}{\sqrt{\log d}}.$ (1.1)
Establishing an upper bound that decays to zero as $d\to\infty$ would already
be interesting. Note that Theorem 1.3 implies the existence of a set $L$ of
orthonormal vectors with $\lvert L\rvert\geq d^{0.99}$ so that (1.1) holds (and
this is likely extendable to $\lvert L\rvert\geq d/(\log d)^{C}$ via our techniques).
Proving either conjecture in full appears to require additional ideas.
###### Remark.
Green’s proof [4] of Theorems 1.2 and 1.3 in the case $k=1$ contains two
errors. The first error is due to a missing supremum inside the integral in
the first and second lines of the last display equation in the proof of
Proposition 2.1 on page 560. The second error occurs at the final equality
step of the top display equation on page 569, right after (4.4); here an
orthogonality relation was incorrectly applied as it requires an unjustified
exchange of the integral and supremum. Our proof here corrects these errors.
Green has also updated the arXiv version of his paper [4] incorporating these
corrections.
## 2\. Proof strategy
The subspace $W$ in Theorem 1.3 must vary according to the transitive set $X$.
On the other hand, the strategy is to construct a single probability distribution
$\mu$ (depending only on the symmetry group
$G\leqslant\mathsf{U}(\mathbb{C}^{d})$ but not on $X$) on the set
$\operatorname{Gr}_{\mathbb{C}}(k,d)$ of $k$-dimensional subspaces of
$\mathbb{C}^{d}$. This is an important idea introduced by Green (for $k=1$).
###### Definition 2.1.
Let $1\leq k\leq d$. Let $f_{k}(d)$ be the smallest value so that for every
finite $G\leqslant\mathsf{U}(\mathbb{C}^{d})$, there is a probability measure
$\mu$ on $\operatorname{Gr}_{\mathbb{C}}(k,d)$ such that for all
$\mathbf{v}\in\mathbb{S}(\mathbb{C}^{d})$,
$\int\sup_{g\in
G}\lVert\operatorname{proj}_{W}(g\mathbf{v})\rVert_{2}^{2}d\mu(W)\leq
f_{k}(d)^{2}.$
The values $f_{k}(d)$ are well defined since the space of probability measures
$\mu$ in question is closed under weak limits.
Our main result about $f_{k}(d)$ is stated below.
###### Theorem 2.2.
If $k\leq d/(\log d)^{20}$, then
$f_{k}(d)\lesssim\frac{1}{\sqrt{\log(d/k)}}.$
###### Proof of Theorem 1.3 given Theorem 2.2.
Let our transitive set $X$ be the orbit of
$\mathbf{v}\in\mathbb{S}(\mathbb{C}^{d})$ under the action of the finite
subgroup $G\leqslant\mathsf{U}(\mathbb{C}^{d})$. By Theorem 2.2 and Definition
2.1, there is a measure $\mu$ on $\operatorname{Gr}_{\mathbb{C}}(k,d)$ such
that
$\int\sup_{g\in
G}\lVert\operatorname{proj}_{W}(g\mathbf{v})\rVert_{2}^{2}d\mu(W)\leq
f_{k}(d)^{2}.$
Therefore there is some $k$-dimensional subspace $W$ with
$\sup_{g\in G}\lVert\operatorname{proj}_{W}(g\mathbf{v})\rVert_{2}\leq
f_{k}(d)\lesssim\frac{1}{\sqrt{\log(d/k)}}.\qed$
To prove Theorem 2.2, we will decompose $G$ into “smaller”, more restricted
cases, namely irreducible and primitive representations. We will also need to
consider permutation groups (for both the reduction step as well as the
primitive case).
### 2.1. Preliminaries
###### Definition 2.3.
We say that $G\leqslant\mathsf{U}(\mathbb{C}^{d})$ is _imprimitive_ if there
is a _system of imprimitivity_ : a decomposition
$\mathbb{C}^{d}=\bigoplus_{i=1}^{\ell}V_{i}$
with $0<\dim V_{i}<d$ such that for every $g\in G$ and $i\in[\ell]$ one has
$gV_{i}=V_{j}$ for some $j\in[\ell]$. (The subspaces $V_{i}$ need not be
orthogonal.) Otherwise we say that $G$ is _primitive_.
###### Remark.
Both primitivity and irreducibility are properties of a representation, rather
than intrinsic to a group. We identify $G\leqslant\mathsf{U}(\mathbb{C}^{d})$
with its natural representation on $\mathbb{C}^{d}$.
It follows from Maschke’s theorem that primitive group representations are
irreducible.
###### Definition 2.4.
Given $\mathbf{v}=(v_{1},\ldots,v_{d})\in\mathbb{C}^{d}$, let
$\mathbf{v}^{\succ}=(\lvert v_{\sigma(1)}\rvert,\ldots,\lvert
v_{\sigma(d)}\rvert)\in\mathbb{R}^{d}$
where $\sigma$ is a permutation of $[d]$ so that
$\lvert v_{\sigma(1)}\rvert\geq\cdots\geq\lvert v_{\sigma(d)}\rvert.$
We write $v_{i}^{\succ}$ for the $i$-th coordinate of $\mathbf{v}^{\succ}$.
Let
$\operatorname{Dom}(\mathbf{v})=\\{\mathbf{w}\in\mathbb{C}^{d}:w_{i}^{\succ}\leq
v_{i}^{\succ}\text{ for all }i\in[d]\\}.$
Let (here $\mathfrak{S}_{d}$ denotes the symmetric group)
$\Gamma_{d}:=\mathfrak{S}_{d}\ltimes(\mathbb{S}^{1})^{d}\leq\mathsf{U}(\mathbb{C}^{d})$
be the group that acts on $\mathbb{C}^{d}$ by permuting its coordinates and
multiplying individual coordinates by unit complex numbers. Then
$\operatorname{Dom}(\mathbf{v})$ is the convex hull of the $\Gamma_{d}$-orbit
of $\mathbf{v}$.
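A small illustration of $\mathbf{v}^{\succ}$ and the domination test defining $\operatorname{Dom}(\mathbf{v})$, with coordinates as Python complex numbers (this merely restates the definition, and is not part of any proof):

```python
def dec_rearrangement(v):
    """v^> : the absolute values of the coordinates, sorted decreasingly."""
    return sorted((abs(x) for x in v), reverse=True)

def dominated(w, v):
    """True iff w is in Dom(v), i.e. w_i^> <= v_i^> for all i."""
    return all(a <= b + 1e-12 for a, b in zip(dec_rearrangement(w),
                                              dec_rearrangement(v)))

v = [3, 4j, 0]   # v^> = [4, 3, 0]
```

Since $\operatorname{Dom}(\mathbf{v})$ is the convex hull of the $\Gamma_d$-orbit of $\mathbf{v}$, the test is invariant under permuting coordinates and multiplying them by unit complex numbers.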
We define some variants of $f_{k}(d)$ when the group $G$ is restricted to
special types.
###### Definition 2.5.
Given $k\in[d]$, let $f^{\operatorname{irred}}_{k}(d)$ (resp.
$f^{\operatorname{prim}}_{k}(d)$) be the smallest value so that for every
finite $G\leqslant\mathsf{U}(\mathbb{C}^{d})$ which is irreducible (resp.
primitive), there is a probability measure $\mu$ on
$\operatorname{Gr}_{\mathbb{C}}(k,d)$ such that for every
$\mathbf{v}\in\mathbb{S}(\mathbb{C}^{d})$,
$\int\sup_{g\in
G}\lVert\operatorname{proj}_{W}(g\mathbf{v})\rVert_{2}^{2}d\mu(W)\leq
f^{\operatorname{irred}}_{k}(d)^{2}\qquad\text{(resp.
$f^{\operatorname{prim}}_{k}(d)^{2}$)}.$
The permutation action on $\mathbb{C}^{d}$ deserves special attention.
###### Definition 2.6.
Let $f^{\operatorname{sym}}_{k}(d)$ be the smallest value so that there is a
probability measure $\mu$ on $\operatorname{Gr}_{\mathbb{C}}(k,d)$ such that
for every $\mathbf{v}\in\mathbb{S}(\mathbb{C}^{d})$,
$\int\sup_{\mathbf{u}\in\operatorname{Dom}(\mathbf{v})}\lVert\operatorname{proj}_{W}(\mathbf{u})\rVert_{2}^{2}d\mu(W)\leq
f^{\operatorname{sym}}_{k}(d)^{2}.$
Define $f^{\operatorname{alt}}_{k}(d)$ to be the same with the additional
constraint that $\mu$ is supported on the set of $k$-dimensional subspaces of
the hyperplane $x_{1}+\cdots+x_{d}=0$.
We will often equivalently consider, instead of $\mu$ on
$\operatorname{Gr}_{\mathbb{C}}(k,d)$, the corresponding measure $\mu^{\ast}$
on the complex Stiefel manifold $V_{k}(\mathbb{C}^{d})$, that is, $\mu^{\ast}$
is derived from $\mu$ by first sampling a $\mu$-random $k$-dimensional
subspace $W$ of $\mathbb{C}^{d}$, and then outputting a uniformly sampled
unitary basis $(\mathbf{w}_{1},\ldots,\mathbf{w}_{k})$ of $W$. We have
$\lVert\operatorname{proj}_{W}(\mathbf{u})\rVert_{2}^{2}=\sum_{\ell=1}^{k}\lvert\langle\mathbf{u},\mathbf{w}_{\ell}\rangle\rvert^{2}$.
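The identity $\lVert\operatorname{proj}_{W}(\mathbf{u})\rVert_{2}^{2}=\sum_{\ell}\lvert\langle\mathbf{u},\mathbf{w}_{\ell}\rangle\rvert^{2}$ can be checked numerically: sample an orthonormal $k$-frame via the QR factorization of a complex Gaussian matrix (one standard way to obtain a uniformly distributed frame) and compare the two sides. This sketch assumes NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 3

# Complex Gaussian matrix; the Q factor of its QR decomposition has k
# orthonormal columns (w_1, ..., w_k) spanning a random subspace W.
G = rng.standard_normal((d, k)) + 1j * rng.standard_normal((d, k))
Q, _ = np.linalg.qr(G)

# a random unit vector u in C^d
u = rng.standard_normal(d) + 1j * rng.standard_normal(d)
u /= np.linalg.norm(u)

# ||proj_W u||^2 computed two ways
proj_norm_sq = np.linalg.norm(Q @ (Q.conj().T @ u)) ** 2
sum_inner_sq = sum(abs(np.vdot(Q[:, l], u)) ** 2 for l in range(k))
```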
### 2.2. Reductions
We first reduce the general problem to the irreducible case.
###### Proposition 2.7.
If $1\leq k<\ell\leq d$ then
$f_{k}(d)\leq\max\bigl{\\{}\sqrt{k/\ell},\sup_{d^{\prime}\geq
d/(2\ell)}f^{\operatorname{irred}}_{\lceil
2kd^{\prime}/d\rceil}(d^{\prime})\bigr{\\}}.$
We then reduce the irreducible case to the primitive case and the alternating
case.
###### Proposition 2.8.
If $k\leq d/2$, then
$f^{\operatorname{irred}}_{k}(d)\leq\max_{d_{1}d_{2}=d}\bigl{(}\min\bigl{\\{}f^{\operatorname{prim}}_{\lceil
k/d_{1}\rceil}(d_{2}),f^{\operatorname{alt}}_{k}(d_{1})+\mathbbm{1}_{k\geq
d_{1}}\bigr{\\}}\bigr{)}.$
The symmetric and alternating cases can be handled explicitly, yielding the
following.
###### Proposition 2.9.
If $k\leq d/(\log d)^{5}$, then
$f_{k}^{\operatorname{sym}}(d)\leq f_{k}^{\operatorname{alt}}(d)\lesssim
1/\sqrt{\log(d/k)}.$
This leaves the primitive case, which we prove by invoking a group-theoretic
result of Green [4, Proposition 4.2] that allows us to once again reduce to
the alternating case.
###### Proposition 2.10.
There is an absolute constant $c>0$ such that for $k\leq cd/(\log d)^{4}$ we
have
$f_{k}^{\operatorname{prim}}(d)\lesssim\sup_{d^{\prime}\geq cd/(\log
d)^{4}}f^{\operatorname{alt}}_{k}(d^{\prime}).$
### 2.3. Putting everything together
We are now in position to derive Theorem 2.2 using the preceding statements.
###### Proposition 2.11.
If $k\leq 2d/(\log d)^{10}$ then $f^{\operatorname{prim}}_{k}(d)\lesssim
1/\sqrt{\log(d/k)}$.
###### Proof.
Combine Propositions 2.9 and 2.10. ∎
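One way to spell out the combination: for $k\leq 2d/(\log d)^{10}$ and $d$ large, the hypothesis $k\leq cd/(\log d)^{4}$ of Proposition 2.10 holds, and for every $d^{\prime}\geq cd/(\log d)^{4}$ we have $k\leq d^{\prime}/(\log d^{\prime})^{5}$, so Proposition 2.9 applies to each such $d^{\prime}$. Hence
$f^{\operatorname{prim}}_{k}(d)\lesssim\sup_{d^{\prime}\geq cd/(\log d)^{4}}f^{\operatorname{alt}}_{k}(d^{\prime})\lesssim\sup_{d^{\prime}\geq cd/(\log d)^{4}}\frac{1}{\sqrt{\log(d^{\prime}/k)}}\lesssim\frac{1}{\sqrt{\log(d/k)}},$
where the last step uses $\log(d^{\prime}/k)\geq\log(d/k)-4\log\log d-O(1)\gtrsim\log(d/k)$, valid since $\log(d/k)\geq 10\log\log d-O(1)$.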
###### Proposition 2.12.
If $k\leq d/(\log d)^{10}$ then $f^{\operatorname{irred}}_{k}(d)\lesssim
1/\sqrt{\log(d/k)}$.
###### Proof.
By Proposition 2.8, we have
$f^{\operatorname{irred}}_{k}(d)\leq\max_{d_{1}d_{2}=d}(\min(f^{\operatorname{prim}}_{\lceil
k/d_{1}\rceil}(d_{2}),f^{\operatorname{alt}}_{k}(d_{1})+\mathbbm{1}_{k\geq
d_{1}})).$
First consider the case $d_{1}\leq k$. We have
$\lceil k/d_{1}\rceil\leq\frac{2d}{d_{1}(\log d)^{10}}\leq\frac{2d_{2}}{(\log
d_{2})^{10}}.$
By Proposition 2.11, we have
$f^{\operatorname{prim}}_{\lceil
k/d_{1}\rceil}(d_{2})\lesssim\frac{1}{\sqrt{\log(d_{2}/\lceil
k/d_{1}\rceil)}}\leq\frac{1}{\sqrt{\log(d/(2k))}}.$
Now consider the case $d_{1}>k$. Since $d_{2}\cdot(d_{1}/k)=d/k$, we have
$\max\\{d_{2},d_{1}/k\\}\geq\sqrt{d/k}$. If $d_{2}\geq\sqrt{d/k}$, then
$f^{\operatorname{prim}}_{\lceil
k/d_{1}\rceil}(d_{2})=f^{\operatorname{prim}}_{1}(d_{2})\lesssim\frac{1}{\sqrt{\log
d_{2}}}\lesssim\frac{1}{\sqrt{\log(d/k)}}.$
On the other hand, if $d_{1}/k\geq\sqrt{d/k}$, then $d_{1}/k\geq(\log d)^{5}$
so
$k\leq\frac{d_{1}}{(\log d)^{5}}\leq\frac{d_{1}}{(\log d_{1})^{5}}.$
Hence Proposition 2.9 yields
$f^{\operatorname{alt}}_{k}(d_{1})\lesssim\frac{1}{\sqrt{\log(d_{1}/k)}}\lesssim\frac{1}{\sqrt{\log(d/k)}}.$
Thus it follows that, for all $d_{1}d_{2}=d$,
$\min(f^{\operatorname{prim}}_{\lceil
k/d_{1}\rceil}(d_{2}),f^{\operatorname{alt}}_{k}(d_{1})+\mathbbm{1}_{k\geq
d_{1}})\lesssim\frac{1}{\sqrt{\log(d/k)}},$
and the result follows. ∎
Now we show the main result assuming the above statements.
###### Proof of Theorem 2.2.
Let $\ell=\lceil\sqrt{dk}\rceil\geq 2k$. We have
$k/\ell\lesssim\sqrt{k/d}\lesssim\frac{1}{\sqrt{\log(d/k)}}.$
Also, if $d^{\prime}\geq d/(2\ell)$ then
$\bigg{\lceil}\frac{2kd^{\prime}}{d}\bigg{\rceil}\leq\frac{d^{\prime}}{d/(2\ell)}\leq\frac{d^{\prime}}{(\log
d)^{10}}\leq\frac{d^{\prime}}{(\log d^{\prime})^{10}}.$
By Proposition 2.12, we have
$f^{\operatorname{irred}}_{\lceil
2kd^{\prime}/d\rceil}(d^{\prime})\lesssim\frac{1}{\sqrt{\log(d^{\prime}/\lceil
2kd^{\prime}/d\rceil)}}\lesssim\frac{1}{\sqrt{\log(d/(2\ell))}}\lesssim\frac{1}{\sqrt{\log(d/k)}}.$
Applying Proposition 2.7 to $k$ and $\ell=\lceil\sqrt{dk}\rceil$, we find
$f_{k}(d)\leq\max(\sqrt{k/\ell},\sup_{d^{\prime}\geq
d/(2\ell)}f^{\operatorname{irred}}_{\lceil
2kd^{\prime}/d\rceil}(d^{\prime}))\lesssim\frac{1}{\sqrt{\log(d/k)}}.\qed$
### 2.4. Paper outline
In Section 3, we prove the two key reductions, Propositions 2.7 and 2.8. In
Section 4, we prove the key estimate for the symmetric and alternating cases,
Proposition 2.9. In Section 5, we prove the primitive case, Proposition 2.10.
Finally, in Section 6 we deduce a real version from the complex version,
proving Theorem 1.2. In Section 7 we demonstrate optimality of our results by
exhibiting the matching lower bound Theorem 1.5.
## 3\. Reduction to primitive representations
We first reduce the general case to the irreducible case.
###### Proof of Proposition 2.7.
Consider $G\leqslant\mathsf{U}(\mathbb{C}^{d})$. By Maschke’s theorem, we can
decompose $\mathbb{C}^{d}$ into irreducible representations of $G$:
$\mathbb{C}^{d}=\bigoplus_{j=1}^{m}V_{j}.$
Let $d_{j}=\dim V_{j}$. Let
$J=\\{j\in[m]:d_{j}\geq d/(2\ell)\\}.$
First suppose $\sum_{j\in J}d_{j}\geq d/2$. Then in each such $V_{j}$, we
consider the probability measure $\mu_{j}$ that witnesses
$f^{\operatorname{irred}}_{\lceil 2kd_{j}/d\rceil}(d_{j})$ for the irreducible
representation of $G$ on $V_{j}$. That is, $\mu_{j}$ samples a $\lceil
2kd_{j}/d\rceil$-dimensional subspace of $V_{j}$ and satisfies
$\int\sup_{g\in
G}\lVert\operatorname{proj}_{W}(g\mathbf{v})\rVert_{2}^{2}d\mu_{j}(W)\leq
f^{\operatorname{irred}}_{\lceil
2kd_{j}/d\rceil}(d_{j})^{2}\lVert\mathbf{v}\rVert_{2}^{2}$
for each $\mathbf{v}\in V_{j}$. We define $\mu$ to be a uniformly random
$k$-dimensional subspace of $\bigoplus_{j\in J}W_{j}$, where each $W_{j}$ is
an independent $\mu_{j}$-random $\lceil 2kd_{j}/d\rceil$-dimensional subspace
of $V_{j}$. (Note the $W_{j}$’s are orthogonal as the $V_{j}$’s are.) The
total dimension of this direct sum is $\sum_{j\in J}\lceil
2kd_{j}/d\rceil\geq\frac{2k}{d}\sum_{j\in J}d_{j}\geq k$, so $\mu$ is
well-defined.
Given $\mathbf{v}\in\mathbb{C}^{d}$, write
$\mathbf{v}=\sum_{j=1}^{m}\mathbf{v}_{j}$ with $\mathbf{v}_{j}\in V_{j}$. We
have
$\displaystyle\int\sup_{g\in
G}\lVert\operatorname{proj}_{W}(g\mathbf{v})\rVert_{2}^{2}d\mu(W)$
$\displaystyle\leq\int\sup_{g\in G}\lVert\operatorname{proj}_{\bigoplus_{j\in
J}W_{j}}(g\mathbf{v})\rVert_{2}^{2}\prod_{j\in J}d\mu_{j}(W_{j})$
$\displaystyle\leq\sum_{j\in J}\int\sup_{g\in
G}\lVert\operatorname{proj}_{W_{j}}(g\mathbf{v})\rVert_{2}^{2}d\mu_{j}(W_{j})$
$\displaystyle\leq\sum_{j\in J}f^{\operatorname{irred}}_{\lceil
2kd_{j}/d\rceil}(d_{j})^{2}\lVert\mathbf{v}_{j}\rVert_{2}^{2}$
$\displaystyle\leq\sup_{d^{\prime}\geq
d/(2\ell)}f^{\operatorname{irred}}_{\lceil
2kd^{\prime}/d\rceil}(d^{\prime})^{2}\lVert\mathbf{v}\rVert_{2}^{2}$
by orthogonality of the $V_{j}$.
Next suppose $\sum_{j\in J}d_{j}<d/2$. Then $|[m]\setminus J|\geq\ell$ (since
$d_{j}<d/(2\ell)$ for each $j\notin J$). Let $I$ be an $\ell$-element subset
of $[m]\setminus J$; relabeling, we may assume $I=[\ell]$. Choose arbitrary
$\mathbf{w}_{j}\in\mathbb{S}(V_{j})\subseteq\mathbb{C}^{d}$ for $j\in[\ell]$,
which are clearly orthogonal. Let $\mu$ be the probability measure on
$k$-dimensional subspaces of $\mathbb{C}^{d}$ obtained by taking the span of
a uniformly random $k$-element subset of
$\\{\mathbf{w}_{1},\ldots,\mathbf{w}_{\ell}\\}$.
For each $g\in G$, write
$\mathbf{u}_{g}=(\langle g\mathbf{v},\mathbf{w}_{1}\rangle,\ldots,\langle
g\mathbf{v},\mathbf{w}_{\ell}\rangle)$
and
$\mathbf{v}^{\prime}=(\lVert\operatorname{proj}_{V_{1}}\mathbf{v}\rVert_{2},\ldots,\lVert\operatorname{proj}_{V_{\ell}}\mathbf{v}\rVert_{2}).$
Given $S\subseteq[\ell]$, let $\operatorname{proj}_{S}$ take the projection of
an $\ell$-dimensional vector down to that subset of coordinates. We have
$\displaystyle\int\sup_{g\in
G}\lVert\operatorname{proj}_{W}(g\mathbf{v})\rVert_{2}^{2}d\mu(W)$
$\displaystyle=\frac{1}{\binom{\ell}{k}}\sum_{S\in\binom{[\ell]}{k}}\sup_{g\in
G}\lVert\operatorname{proj}_{S}(\mathbf{u}_{g})\rVert_{2}^{2}$
$\displaystyle\leq\frac{1}{\binom{\ell}{k}}\sum_{S\in\binom{[\ell]}{k}}\sum_{j\in
S}(v_{j}^{\prime})^{2}$
$\displaystyle=\frac{k}{\ell}\sum_{j=1}^{\ell}(v_{j}^{\prime})^{2}\leq\frac{k}{\ell}\lVert\mathbf{v}\rVert_{2}^{2}.$
The first equality follows by the definition of $\mu$, the subsequent
inequality follows by $|\langle g\mathbf{v},\mathbf{w}_{j}\rangle|\leq
v_{j}^{\prime}$, and the last line is by direct computation and orthogonality
of the $V_{j}$. ∎
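The counting identity in the final display, that averaging $\sum_{j\in S}(v_{j}^{\prime})^{2}$ over all $k$-element subsets $S$ of $[\ell]$ gives $\frac{k}{\ell}\sum_{j}(v_{j}^{\prime})^{2}$, can be checked by brute force; a stdlib Python sketch with illustrative sizes:

```python
import itertools
import math
import random

random.seed(1)
ell, k = 6, 3
x = [random.random() for _ in range(ell)]  # stand-ins for the values (v_j')^2

subsets = list(itertools.combinations(range(ell), k))
avg = sum(sum(x[j] for j in S) for S in subsets) / math.comb(ell, k)

# Each index j lies in C(ell-1, k-1) of the C(ell, k) subsets,
# so the average equals (k/ell) * sum_j x_j.
assert abs(avg - (k / ell) * sum(x)) < 1e-12
```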
We next reduce the irreducible case to the primitive case. We first collect a
few facts proved in [4] regarding systems of imprimitivity.
###### Lemma 3.1 ([4, Section 2]).
Let $G\leqslant\mathsf{U}(\mathbb{C}^{d})$ be irreducible but imprimitive.
Consider a system of imprimitivity
$\mathbb{C}^{d}=\bigoplus_{j=1}^{d_{1}}V_{j}$
with $d_{1}$ maximal over all such systems of imprimitivity. Let $H=\\{g\in
G:gV_{1}=V_{1}\\}$ and choose $\gamma_{1},\ldots,\gamma_{d_{1}}$ such that
$\gamma_{j}V_{1}=V_{j}$. Then the following hold:
1. 1.
The $V_{j}$ are orthogonal and have the same dimension, and $G$ acts
transitively on them.
2. 2.
$H$ has primitive action on $V_{1}$ (i.e. the representation of $H$ on $V_{1}$
is primitive).
3. 3.
$\gamma_{1},\ldots,\gamma_{d_{1}}$ form a complete set of left coset
representatives for $H$ in $G$.
4. 4.
For each $g\in G$ there is $\sigma_{g}\in\mathfrak{S}_{d_{1}}$ so that
$\gamma_{\sigma_{g}(j)}^{-1}g\gamma_{j}\in H$ for all $j\in[d_{1}]$ (i.e.,
$\sigma_{g}$ records how $g$ permutes $\\{V_{1},\ldots,V_{d_{1}}\\}$).
Now we are ready to prove Proposition 2.8, which recall says that for all
$k\leq d/2$,
$f^{\operatorname{irred}}_{k}(d)\leq\max_{d_{1}d_{2}=d}\bigl{(}\min\bigl{\\{}f^{\operatorname{prim}}_{\lceil
k/d_{1}\rceil}(d_{2}),f^{\operatorname{alt}}_{k}(d_{1})+\mathbbm{1}_{k\geq
d_{1}}\bigr{\\}}\bigr{)}.$
###### Proof of Proposition 2.8.
Let $G\leqslant\mathsf{U}(\mathbb{C}^{d})$ be irreducible. If $G$ is
primitive, the term $d_{1}=1$, $d_{2}=d$ already gives the claimed bound, so
we may assume $G$ is imprimitive.
Consider a system of imprimitivity
$\mathbb{C}^{d}=\bigoplus_{j=1}^{d_{1}}V_{j}$
with $d_{1}$ maximal among all systems of imprimitivity. By Lemma 3.1, the
spaces $V_{j}$ are orthogonal and all the $\dim V_{j}$ are equal. Let
$d_{2}=\dim V_{1}$, so that $d_{1}d_{2}=d$. Furthermore, $H=\\{g\in
G:gV_{1}=V_{1}\\}$ acts primitively on $V_{1}$, $G$ acts transitively on
the $V_{j}$, and there are $\gamma_{1},\ldots,\gamma_{d_{1}}$ so that
$\gamma_{j}V_{1}=V_{j}$ which form a complete set of left coset
representatives for $H$ in $G$. For each $g\in G$ we have some
$\sigma_{g}\in\mathfrak{S}_{d_{1}}$ so that
$\gamma_{\sigma_{g}(j)}^{-1}g\gamma_{j}\in H$ for all $j\in[d_{1}]$. Define
$h(g,j)=\gamma_{\sigma_{g}(j)}^{-1}g\gamma_{j}$.
Let $\mathbf{v}\in\mathbb{C}^{d}$. There is a unique orthogonal decomposition
$\mathbf{v}=\sum_{j=1}^{d_{1}}\gamma_{j}\mathbf{v}_{j}$
where $\mathbf{v}_{j}\in V_{1}$ for all $j\in[d_{1}]$. We have
$g\mathbf{v}=\sum_{j=1}^{d_{1}}g\gamma_{j}\mathbf{v}_{j}=\sum_{j=1}^{d_{1}}\gamma_{j}h(g,\sigma_{g}^{-1}(j))\mathbf{v}_{\sigma_{g}^{-1}(j)}.$
Finally, if
$\mathbf{w}=\sum_{j=1}^{d_{1}}\lambda_{j}\gamma_{j}\mathbf{x}$
for some
$\bm{\lambda}=(\lambda_{1},\ldots,\lambda_{d_{1}})\in\mathbb{C}^{d_{1}}$ and
$\mathbf{x}\in V_{1}$ then we see from the above and orthogonality that
$\langle g\mathbf{v},\mathbf{w}\rangle=\sum_{j=1}^{d_{1}}\lambda_{j}\langle
h(g,\sigma_{g}^{-1}(j))\mathbf{v}_{\sigma_{g}^{-1}(j)},\mathbf{x}\rangle.$
Now we return to the situation at hand: we need to choose a random
$k$-dimensional subspace onto which every vector in the orbit $G\mathbf{v}$
has small projection. Consider the map
$\psi:V_{1}\times\mathbb{C}^{d_{1}}\to\mathbb{C}^{d}$ given by
$\psi(\mathbf{x},\bm{\lambda})=\sum_{j=1}^{d_{1}}\lambda_{j}\gamma_{j}\mathbf{x}.$
It clearly maps the pair of unit spheres into the unit sphere. Given
probability measures $\mu_{1}$ on
$\operatorname{Gr}_{\mathbb{C}}(k_{1},V_{1})$ and $\mu_{2}$ on
$\operatorname{Gr}_{\mathbb{C}}(k_{2},\mathbb{C}^{d_{1}})$, we define the
pushforward measure $\mu$ on $\operatorname{Gr}_{\mathbb{C}}(k_{1}k_{2},d)$ by
taking the image of these two subspaces under $\psi$. Equivalently, suppose
$\mu_{1}^{\ast}$ samples a unitary basis
$\mathbf{x}_{1},\ldots,\mathbf{x}_{k_{1}}$ of a subspace of $V_{1}$ and
$\mu_{2}^{\ast}$ samples a unitary basis
$\bm{\lambda}_{1},\ldots,\bm{\lambda}_{k_{2}}$ of a subspace of
$\mathbb{C}^{d_{1}}$, then $\mu$ samples the subspace of $\mathbb{C}^{d}$ with
basis $\\{\psi(\mathbf{x}_{i},\bm{\lambda}_{j}):i\in[k_{1}],j\in[k_{2}]\\}$.
It is easy to check this basis is in fact unitary.
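Indeed, using orthogonality of the spaces $V_{m}=\gamma_{m}V_{1}$ and unitarity of each $\gamma_{m}$, one computes
$\langle\psi(\mathbf{x}_{i},\bm{\lambda}_{j}),\psi(\mathbf{x}_{i^{\prime}},\bm{\lambda}_{j^{\prime}})\rangle=\sum_{m=1}^{d_{1}}\lambda_{j,m}\overline{\lambda_{j^{\prime},m}}\langle\mathbf{x}_{i},\mathbf{x}_{i^{\prime}}\rangle=\langle\bm{\lambda}_{j},\bm{\lambda}_{j^{\prime}}\rangle\langle\mathbf{x}_{i},\mathbf{x}_{i^{\prime}}\rangle=\delta_{jj^{\prime}}\delta_{ii^{\prime}},$
since the cross terms $\langle\gamma_{m}\mathbf{x}_{i},\gamma_{m^{\prime}}\mathbf{x}_{i^{\prime}}\rangle$ vanish for $m\neq m^{\prime}$.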
Next, we choose $\mu_{1}$ and $\mu_{2}$ based on the sizes of $d_{1}$ and
$d_{2}$.
First let $k_{1}=\lceil k/d_{1}\rceil\leq d_{2}$ (as $k\leq d/2$) and
$k_{2}=d_{1}$. We let $\mu_{1}$ be the measure guaranteed by Definition 2.1 so
that
$\int\sup_{h\in
H}\lVert\operatorname{proj}_{W}(h\mathbf{u})\rVert_{2}^{2}d\mu_{1}(W)\leq
f^{\operatorname{prim}}_{k_{1}}(d_{2})^{2}\lVert\mathbf{u}\rVert_{2}^{2}$
for all $\mathbf{u}\in V_{1}$ and let $\mu_{2}$ be the atom on the space
$\mathbb{C}^{d_{1}}$ in $\operatorname{Gr}_{\mathbb{C}}(d_{1},d_{1})$. Let
$\mu$ be the $\psi$-pushforward of $(\mu_{1},\mu_{2})$ as described earlier.
We find
$\displaystyle\int\sup_{g\in
G}\lVert\operatorname{proj}_{W}(g\mathbf{v})\rVert_{2}^{2}d\mu(W)$
$\displaystyle=\int\sup_{g\in
G}\sum_{\ell=1}^{k_{1}}\sum_{j=1}^{d_{1}}|\langle
g\mathbf{v},\psi(\mathbf{x}_{\ell},\mathbf{e}_{j})\rangle|^{2}d\mu_{1}^{\ast}(\mathbf{x}_{1},\ldots,\mathbf{x}_{k_{1}})$
$\displaystyle=\int\sup_{g\in
G}\sum_{\ell=1}^{k_{1}}\sum_{j=1}^{d_{1}}|\langle
h(g,\sigma_{g}^{-1}(j))\mathbf{v}_{\sigma_{g}^{-1}(j)},\mathbf{x}_{\ell}\rangle|^{2}d\mu_{1}^{\ast}(\mathbf{x}_{1},\ldots,\mathbf{x}_{k_{1}})$
$\displaystyle=\int\sup_{g\in
G}\sum_{\ell=1}^{k_{1}}\sum_{j=1}^{d_{1}}|\langle
h(g,j)\mathbf{v}_{j},\mathbf{x}_{\ell}\rangle|^{2}d\mu_{1}^{\ast}(\mathbf{x}_{1},\ldots,\mathbf{x}_{k_{1}})$
$\displaystyle\leq\sum_{j=1}^{d_{1}}\int\sup_{g\in
G}\sum_{\ell=1}^{k_{1}}|\langle
h(g,j)\mathbf{v}_{j},\mathbf{x}_{\ell}\rangle|^{2}d\mu_{1}^{\ast}(\mathbf{x}_{1},\ldots,\mathbf{x}_{k_{1}})$
$\displaystyle\leq\sum_{j=1}^{d_{1}}\int\sup_{h\in
H}\sum_{\ell=1}^{k_{1}}|\langle
h\mathbf{v}_{j},\mathbf{x}_{\ell}\rangle|^{2}d\mu_{1}^{\ast}(\mathbf{x}_{1},\ldots,\mathbf{x}_{k_{1}})$
$\displaystyle\leq\sum_{j=1}^{d_{1}}f^{\operatorname{prim}}_{k_{1}}(d_{2})^{2}\lVert\mathbf{v}_{j}\rVert_{2}^{2}=f^{\operatorname{prim}}_{k_{1}}(d_{2})^{2}\lVert\mathbf{v}\rVert_{2}^{2}.$
The last equality is by orthogonality of $V_{1},\ldots,V_{d_{1}}$ and
unitarity of $\gamma_{j}$ for $j\in[d_{1}]$.
Now suppose that $k<d_{1}$. Let $k_{1}=1$ and $k_{2}=k$. Choose an arbitrary
unit vector $\mathbf{x}\in V_{1}$ and $\mu_{1}$ be an atom on
$\operatorname{Gr}_{\mathbb{C}}(1,V_{1})$ supported on the line
$\mathbb{C}\mathbf{x}$. Let $\mu_{2}$ be guaranteed by Definition 2.6 so that
$\int\sup_{\mathbf{u}\in\operatorname{Dom}(\mathbf{w})}\sum_{\ell=1}^{k}|\langle\mathbf{u},\bm{\lambda}_{\ell}\rangle|^{2}d\mu_{2}^{*}(\bm{\lambda}_{1},\ldots,\bm{\lambda}_{k})\leq
f_{k}^{\operatorname{alt}}(d_{1})^{2}\lVert\mathbf{w}\rVert_{2}^{2}$
for all $\mathbf{w}\in\mathbb{C}^{d_{1}}$. Let $\mu$ be the $\psi$-pushforward of
$(\mu_{1},\mu_{2})$ as described earlier. We find
$\displaystyle\int\sup_{g\in
G}\lVert\operatorname{proj}_{W}(g\mathbf{v})\rVert_{2}^{2}d\mu(W)$
$\displaystyle=\int\sup_{g\in G}\sum_{\ell=1}^{k}|\langle
g\mathbf{v},\mathbf{w}_{\ell}\rangle|^{2}d\mu^{\ast}(\mathbf{w}_{1},\ldots,\mathbf{w}_{k})$
$\displaystyle=\int\sup_{g\in G}\sum_{\ell=1}^{k}|\langle
g\mathbf{v},\psi(\mathbf{x},\bm{\lambda}_{\ell})\rangle|^{2}d\mu_{2}^{\ast}(\bm{\lambda}_{1},\ldots,\bm{\lambda}_{k})$
$\displaystyle=\int\sup_{g\in
G}\sum_{\ell=1}^{k}\bigg{|}\sum_{j=1}^{d_{1}}\lambda_{\ell,j}\langle
h(g,\sigma_{g}^{-1}(j))\mathbf{v}_{\sigma_{g}^{-1}(j)},\mathbf{x}\rangle\bigg{|}^{2}d\mu_{2}^{\ast}(\bm{\lambda}_{1},\ldots,\bm{\lambda}_{k})$
$\displaystyle\leq\int\sup_{\mathbf{u}\in\operatorname{Dom}(\mathbf{y})}\sum_{\ell=1}^{k}|\langle\mathbf{u},\bm{\lambda}_{\ell}\rangle|^{2}d\mu_{2}^{\ast}(\bm{\lambda}_{1},\ldots,\bm{\lambda}_{k}),$
where $\mathbf{y}$ has coordinates $y_{j}=\sup_{h\in H}|\langle
h\mathbf{v}_{j},\mathbf{x}\rangle|$ for $j\in[d_{1}]$. We immediately deduce
$\displaystyle\int\sup_{g\in
G}\lVert\operatorname{proj}_{W}(g\mathbf{v})\rVert_{2}^{2}d\mu(W)$
$\displaystyle\leq\int\sup_{\mathbf{u}\in\operatorname{Dom}(\mathbf{y})}\sum_{\ell=1}^{k}|\langle\mathbf{u},\bm{\lambda}_{\ell}\rangle|^{2}d\mu_{2}^{\ast}(\bm{\lambda}_{1},\ldots,\bm{\lambda}_{k})$
$\displaystyle\leq
f_{k}^{\operatorname{alt}}(d_{1})^{2}\lVert\mathbf{y}\rVert_{2}^{2}\leq
f_{k}^{\operatorname{alt}}(d_{1})^{2}\sum_{j=1}^{d_{1}}\lVert\mathbf{v}_{j}\rVert_{2}^{2}=f_{k}^{\operatorname{alt}}(d_{1})^{2}\lVert\mathbf{v}\rVert_{2}^{2}.$
Note that the measures constructed in both cases are independent of
$\mathbf{v}$, and the second construction is only valid when $k<d_{1}$.
Therefore, since the $f$ values are clearly bounded by $1$, we have the upper
bound
$f^{\operatorname{irred}}_{k}(d)\leq\max_{d_{1}d_{2}=d}(\min(f^{\operatorname{prim}}_{\lceil
k/d_{1}\rceil}(d_{2}),f^{\operatorname{alt}}_{k}(d_{1})+\mathbbm{1}_{k\geq
d_{1}})),$
as claimed. ∎
## 4\. Permutation groups
In this section, we establish upper bounds for $f^{\operatorname{sym}}_{k}(d)$
and $f^{\operatorname{alt}}_{k}(d)$, extending the previous construction [4,
Section 3] for $k=1$.
A useful high-dimensional intuition is that, for small $k$, a random
$k$-dimensional subspace of $\mathbb{R}^{d}$ has the property that _all_ of
its unit vectors have a distribution of coordinate magnitudes similar to that
of a random Gaussian vector.
We first need the existence of a large-dimensional subspace of
$\mathbb{R}^{d}$ with certain delocalization properties. We encode this
through the following norm.
###### Definition 4.1.
Given $\mathbf{v}\in\mathbb{R}^{d}$, let
$\lVert\mathbf{v}\rVert_{T}^{2}=\sup_{\emptyset\subsetneq
S\subseteq[d]}\log^{4}(2d/|S|)\sum_{j\in S}v_{j}^{2}$
and let
$T^{\ast}=\\{\mathbf{t}\in\mathbb{R}^{d}:|\langle\mathbf{t},\mathbf{w}\rangle|\leq
1\text{ whenever }\lVert\mathbf{w}\rVert_{T}\leq 1\\}.$
###### Remark.
Note that $\lVert\cdot\rVert_{T}$ is a norm as it can be represented as a
supremum of seminorms. Hence
$\lVert\mathbf{w}\rVert_{T}=\sup_{t\in
T^{\ast}}|\langle\mathbf{t},\mathbf{w}\rangle|.$
We next recall a classical lemma regarding the concentration of norms on
Gaussian space (see e.g. [5]); we provide a short proof for convenience.
###### Lemma 4.2.
There is an absolute constant $C>0$ so that for all $p\geq 1$, a Gaussian
random vector $\mathbf{w}\sim\mathcal{N}(0,I_{d})$ satisfies
$(\mathbb{E}_{w_{1}+\cdots+w_{d}=0}\lVert\mathbf{w}\rVert_{T}^{p})^{1/p}\leq(\mathbb{E}\lVert\mathbf{w}\rVert_{T}^{p})^{1/p}\leq\mathbb{E}\lVert\mathbf{w}\rVert_{T}+C\sqrt{p}\sup_{\mathbf{t}\in
T^{\ast}}\lVert\mathbf{t}\rVert_{2}.$
###### Proof.
For the first inequality, note that $\mathbf{w}\sim\mathcal{N}(0,I_{d})$ can be
written as $\mathbf{w^{\prime}}+G\mathbf{1}$ where $\mathbf{w^{\prime}}$ is
drawn from $\mathcal{N}(0,I_{d})$ conditioned on having coordinate sum zero
and $G\in\mathcal{N}(0,1)$ is independent of $\mathbf{w^{\prime}}$. Then by
convexity note that
$(\mathbb{E}\lVert\mathbf{w}\rVert_{T}^{p})^{1/p}=(\mathbb{E}\lVert\mathbf{w^{\prime}}+G\mathbf{1}\rVert_{T}^{p})^{1/p}\geq(\mathbb{E}_{\mathbf{w^{\prime}}}\lVert\mathbb{E}[\mathbf{w^{\prime}}+G\mathbf{1}|\mathbf{w^{\prime}}]\rVert_{T}^{p})^{1/p}=(\mathbb{E}_{w_{1}+\cdots+w_{d}=0}\lVert\mathbf{w}\rVert_{T}^{p})^{1/p}.$
To prove the second inequality, first note that
$\lVert\mathbf{w}\rVert_{T}-\lVert\mathbf{v}\rVert_{T}\leq\lVert\mathbf{w}-\mathbf{v}\rVert_{T}=\sup_{\mathbf{t}\in
T^{\ast}}|\langle\mathbf{t},\mathbf{w}-\mathbf{v}\rangle|\leq\lVert\mathbf{w}-\mathbf{v}\rVert_{2}\sup_{\mathbf{t}\in
T^{\ast}}\lVert\mathbf{t}\rVert_{2}.$
Therefore if $L=\sup_{\mathbf{t}\in T^{\ast}}\lVert\mathbf{t}\rVert_{2}$ then
$\mathbf{w}\mapsto\lVert\mathbf{w}\rVert_{T}$ is an $L$-Lipschitz function
with respect to Euclidean distance. Therefore by Gaussian concentration for
Lipschitz functions (see e.g. [1, p. 125]) we have that
$\mathbb{P}[|\lVert\mathbf{w}\rVert_{T}-\mathbb{E}[\lVert\mathbf{w}\rVert_{T}]|\geq
t]\leq 2\exp(-ct^{2}/L^{2})$
where $c$ is an absolute constant. Using standard moment bounds for sub-
Gaussian random variables (see e.g. [8, Proposition 2.5.2]), we find that
$(\mathbb{E}|\lVert\mathbf{w}\rVert_{T}-\mathbb{E}\lVert\mathbf{w}\rVert_{T}|^{p})^{1/p}\leq
C\sqrt{p}\sup_{\mathbf{t}\in T^{\ast}}\lVert\mathbf{t}\rVert_{2}$
for an absolute constant $C>0$. Finally, Minkowski’s inequality implies that
$(\mathbb{E}\lVert\mathbf{w}\rVert_{T}^{p})^{1/p}\leq\mathbb{E}\lVert\mathbf{w}\rVert_{T}+(\mathbb{E}|\lVert\mathbf{w}\rVert_{T}-\mathbb{E}\lVert\mathbf{w}\rVert_{T}|^{p})^{1/p}$
and therefore the result follows. ∎
We now prove an upper bound for $\mathbb{E}[\lVert\mathbf{w}\rVert_{T}]$.
###### Lemma 4.3.
A Gaussian random vector $\mathbf{w}\sim\mathcal{N}(0,I_{d})$ satisfies
$\mathbb{E}\lVert\mathbf{w}\rVert_{T}\lesssim\sqrt{d}$.
###### Proof.
Recall $\mathbf{w}_{i}^{\succ}$ from Definition 2.4. We have
$\displaystyle\mathbb{E}(w_{i}^{\succ})^{2}$
$\displaystyle=\int_{0}^{\infty}\mathbb{P}[w_{i}^{\succ}\geq\sqrt{t}]dt\leq\int_{0}^{\infty}\min\bigg{(}1,\binom{d}{i}(2e^{-t/2})^{i}\bigg{)}dt$
$\displaystyle\leq\int_{0}^{\infty}\min(1,(2de^{1-t/2}/i)^{i})dt\lesssim\log(2d/i).$
Therefore
$\displaystyle(\mathbb{E}\lVert\mathbf{w}\rVert_{T})^{2}$
$\displaystyle\leq\mathbb{E}\lVert\mathbf{w}\rVert_{T}^{2}\leq\sum_{i=1}^{d}\log^{4}(2d/i)\,\mathbb{E}(w_{i}^{\succ})^{2}\lesssim\sum_{i=1}^{d}\log^{5}(2d/i)$
$\displaystyle\leq d\int_{0}^{1}\log(2/x)^{5}~{}dx=d\int_{0}^{\infty}(y+\log
2)^{5}e^{-y}~{}dy\lesssim d.\qed$
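The final integral evaluates in closed form, since $\int_{0}^{\infty}y^{m}e^{-y}\,dy=m!$ gives $\int_{0}^{\infty}(y+\log 2)^{5}e^{-y}\,dy=\sum_{m=0}^{5}\binom{5}{m}(\log 2)^{5-m}\,m!$. A quick stdlib Python cross-check against numerical quadrature (Simpson's rule; the truncation point and step count are arbitrary choices):

```python
import math

a = math.log(2)

# Closed form: expand (y + a)^5 and use the moment identity for e^{-y}.
closed = sum(math.comb(5, m) * a ** (5 - m) * math.factorial(m) for m in range(6))

def simpson(f, lo, hi, n=60000):
    # Composite Simpson's rule; n must be even.
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

# The tail beyond y = 60 is negligible for this integrand.
numeric = simpson(lambda y: (y + a) ** 5 * math.exp(-y), 0.0, 60.0)
assert abs(closed - numeric) < 1e-6
```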
We are in position to derive a high-probability version.
###### Lemma 4.4.
With probability at least $1-\exp(-2d/(\log d)^{4})$, a standard Gaussian
vector $\mathbf{w}\sim\mathcal{N}(0,I_{d})$ satisfies
$\lVert\mathbf{w}\rVert_{T}\lesssim\sqrt{d}$. In fact, the same is true after
conditioning $\mathbf{w}$ to have coordinate sum $0$.
###### Proof.
Note that if $\mathbf{t}\in T^{\ast}$, then
$\lVert\mathbf{t}\rVert_{2}^{2}=\lVert\mathbf{t}\rVert_{T}\bigg{|}\bigg{\langle}\mathbf{t},\frac{\mathbf{t}}{\lVert\mathbf{t}\rVert_{T}}\bigg{\rangle}\bigg{|}\leq\lVert\mathbf{t}\rVert_{T}\leq\log^{2}(2d)\lVert\mathbf{t}\rVert_{2}.$
Hence
$\sup_{\mathbf{t}\in T^{\ast}}\lVert\mathbf{t}\rVert_{2}\leq\log^{2}(2d).$
To deduce the claimed bound, note that
$\displaystyle\mathbb{P}[\lVert\mathbf{w}\rVert_{T}\geq K\sqrt{d}]$
$\displaystyle\leq(K\sqrt{d})^{-p}\mathbb{E}[\lVert\mathbf{w}\rVert_{T}^{p}]$
$\displaystyle\leq(K\sqrt{d})^{-p}(\mathbb{E}\lVert\mathbf{w}\rVert_{T}+C\sqrt{p}\sup_{\mathbf{t}\in
T^{\ast}}\lVert\mathbf{t}\rVert_{2})^{p}$
$\displaystyle\leq(K\sqrt{d})^{-p}(C^{\prime}\sqrt{d}+C^{\prime}\sqrt{p}\log^{2}(2d))^{p}$
for appropriate absolute constants $C,C^{\prime}>0$, using Lemmas 4.2 and 4.3
and the above inequality. Letting $p=d/(\log d)^{4}$ and $K>0$ be a
sufficiently large absolute constant yields
$\mathbb{P}[\lVert\mathbf{w}\rVert_{T}\geq K\sqrt{d}]\leq\exp(-2p),$
as desired. The same holds if we condition on coordinate sum $0$, using the
moment bound for the conditional variable derived in Lemma 4.2 instead. ∎
###### Lemma 4.5.
There is a $\lceil d/(\log d)^{4}\rceil$-dimensional subspace of the
hyperplane $\mathbf{1}^{\perp}$ in $\mathbb{R}^{d}$ such that each of its unit
vectors $\mathbf{v}$ satisfies
$\lVert\mathbf{v}\rVert_{T}\lesssim 1.$
###### Proof.
We can assume $d$ is sufficiently large. Let $k=\lceil d/(\log d)^{4}\rceil$,
and consider a uniformly random $k$-dimensional subspace $W$ of
$\mathbf{1}^{\perp}$. Let $U$ be a $d\times k$ matrix whose columns form an
orthonormal basis of $W$, chosen uniformly at random.
By a standard volume packing argument (e.g., see [7, Lemma 4.3]), there exists
$\mathcal{N}\subset\mathbb{S}(\mathbb{R}^{k})$ with
$\lvert\mathcal{N}\rvert\leq 6^{k}$ such that for every
$\mathbf{v}\in\mathbb{S}(\mathbb{R}^{k})$ there is
$\mathbf{v}^{\prime}\in\mathcal{N}$ so that
$\lVert\mathbf{v}-\mathbf{v}^{\prime}\rVert_{2}\leq 1/2$. Thus if $\mathbf{u}$
is a unit vector in the direction of $\mathbf{v}-\mathbf{v}^{\prime}$, we have
$\lVert U\mathbf{v}\rVert_{T}\leq\lVert U\mathbf{v}^{\prime}\rVert_{T}+\lVert
U(\mathbf{v}-\mathbf{v}^{\prime})\rVert_{T}\leq\lVert
U\mathbf{v}^{\prime}\rVert_{T}+\frac{1}{2}\lVert U\mathbf{u}\rVert_{T}.$
We deduce
$\sup_{\mathbf{v}\in\mathbb{S}(\mathbb{R}^{k})}\lVert
U\mathbf{v}\rVert_{T}\leq\sup_{\mathbf{v}^{\prime}\in\mathcal{N}}\lVert
U\mathbf{v}^{\prime}\rVert_{T}+\frac{1}{2}\sup_{\mathbf{u}\in\mathbb{S}(\mathbb{R}^{k})}\lVert
U\mathbf{u}\rVert_{T}$
and thus
$\sup_{\mathbf{v}\in\mathbb{S}(\mathbb{R}^{k})}\lVert
U\mathbf{v}\rVert_{T}\leq 2\sup_{\mathbf{v}^{\prime}\in\mathcal{N}}\lVert
U\mathbf{v}^{\prime}\rVert_{T}.$
Now fix some $\mathbf{v}\in\mathcal{N}$. Note the distribution of
$U\mathbf{v}$ is uniform among unit vectors in $\mathbf{1}^{\perp}$ since $W$
was chosen uniformly. Now note that for any constant $C$ we have that
$\mathbb{P}[\lVert U\mathbf{v}\rVert_{T}\geq
C]=\mathbb{P}[\lVert\mathbf{G}/\lVert\mathbf{G}\rVert_{2}\rVert_{T}\geq C]$
where $\mathbf{G}\sim N(0,I_{d}-\mathbf{1}\mathbf{1}^{\mathsf{T}}/d)$. Now
since $\mathbf{G}/\lVert\mathbf{G}\rVert_{2}$ and $\lVert\mathbf{G}\rVert_{2}$
are independent, and
$\mathbb{P}[\lVert\mathbf{G}\rVert_{2}\geq\sqrt{d}/2]\geq 1/2$ for $d$
sufficiently large, we have that
$\displaystyle\mathbb{P}[\lVert\mathbf{G}/\lVert\mathbf{G}\rVert_{2}\rVert_{T}\geq
C]$ $\displaystyle=\mathbb{P}[\lVert\mathbf{G}\rVert_{2}\geq\sqrt{d}/2]^{-1}\mathbb{P}[\lVert\mathbf{G}/\lVert\mathbf{G}\rVert_{2}\rVert_{T}\geq
C\text{ and }\lVert\mathbf{G}\rVert_{2}\geq\sqrt{d}/2]$ $\displaystyle\leq
2\mathbb{P}[\lVert\mathbf{G}/\lVert\mathbf{G}\rVert_{2}\rVert_{T}\geq C\text{
and }\lVert\mathbf{G}\rVert_{2}\geq\sqrt{d}/2]$ $\displaystyle\leq
2\mathbb{P}[\lVert\mathbf{G}\rVert_{T}\geq C\sqrt{d}/2].$
By Lemma 4.4 (in its sum-zero conditioned form), the last expression is at
most $2\exp(-2d/(\log d)^{4})$ once $C$ is a sufficiently large absolute
constant. The result follows upon taking the union bound over at most $6^{k}$
vectors in $\mathcal{N}$, since $6<e^{2}$. ∎
Finally, we will need a form of Selberg’s inequality (see [3, Chapter 27,
Theorem 1]).
###### Lemma 4.6.
For $\mathbf{v}_{1},\ldots,\mathbf{v}_{m}\in\mathbb{C}^{d}$ we have that
$\sup_{\mathbf{w}\in\mathbb{S}(\mathbb{C}^{d})}\sum_{i=1}^{m}|\langle\mathbf{w},\mathbf{v}_{i}\rangle|^{2}\leq\sup_{i\in[m]}\sum_{j=1}^{m}|\langle\mathbf{v}_{i},\mathbf{v}_{j}\rangle|.$
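Lemma 4.6 is a Schur-test bound: the left side equals the top eigenvalue of the Hermitian Gram matrix $G_{ij}=\langle\mathbf{v}_{i},\mathbf{v}_{j}\rangle$ (which has the same nonzero spectrum as $\sum_{i}\mathbf{v}_{i}\mathbf{v}_{i}^{\ast}$), and that eigenvalue is at most the maximal absolute row sum. A stdlib-only Python check on random complex vectors (sizes and seed are illustrative; power iteration stands in for an eigensolver):

```python
import math
import random

random.seed(2)
m, d = 5, 8
vs = [[complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(d)]
      for _ in range(m)]

def inner(u, v):
    return sum(a * b.conjugate() for a, b in zip(u, v))

# Hermitian Gram matrix G_{ij} = <v_i, v_j>.
G = [[inner(vs[i], vs[j]) for j in range(m)] for i in range(m)]

def rayleigh_estimate(A, iters=2000):
    # Power iteration on a Hermitian PSD matrix; returns a Rayleigh
    # quotient, which is always <= the top eigenvalue.
    x = [complex(1, 0)] * len(A)
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(len(A))) for i in range(len(A))]
        norm = math.sqrt(sum(abs(c) ** 2 for c in y))
        x = [c / norm for c in y]
        lam = sum((x[i].conjugate()
                   * sum(A[i][j] * x[j] for j in range(len(A)))).real
                  for i in range(len(A)))
    return lam

lhs = rayleigh_estimate(G)                                  # approximates the sup
rhs = max(sum(abs(G[i][j]) for j in range(m)) for i in range(m))
assert lhs <= rhs + 1e-9  # the Schur-test inequality of the lemma
```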
Now we prove Proposition 2.9, which recall says that for $k\leq d/(\log
d)^{5}$, one has
$f_{k}^{\operatorname{sym}}(d)\leq f_{k}^{\operatorname{alt}}(d)\lesssim
1/\sqrt{\log(d/k)}.$
The first inequality is immediate as the set of allowable $\mu$’s in the
definition of $f_{k}^{\operatorname{alt}}$ is a subset of those of
$f_{k}^{\operatorname{sym}}$. So we just need to prove the second inequality.
###### Proof of Proposition 2.9.
Let $\mathbf{e}_{i}$ be the $i$-th coordinate vector. For each $j$ with $k\leq
2^{j}/(\log 2^{j})^{4}\leq d$, we apply Lemma 4.5 to the space
$V_{j}=\operatorname{span}_{\mathbb{R}}\\{\mathbf{e}_{1},\ldots,\mathbf{e}_{2^{j}}\\}$.
Here the $T$-norm is defined with respect to this $2^{j}$-dimensional space.
In particular, there exists a $k$-dimensional (real) subspace of the
orthogonal complement of $\mathbf{e}_{1}+\cdots+\mathbf{e}_{2^{j}}$ within
$V_{j}$, call it $W_{j}$, so that every unit vector $\mathbf{u}\in W_{j}$
satisfies
$\sum_{i\in S}u_{i}^{2}\lesssim\frac{1}{\log^{4}(2^{j+1}/|S|)}$
for every nonempty $S\subseteq[2^{j}]$. Let
$V_{j}^{\prime}=\operatorname{span}_{\mathbb{C}}V_{j}$ and
$W_{j}^{\prime}=\operatorname{span}_{\mathbb{C}}W_{j}$. We immediately deduce
that every unit vector $\mathbf{u}\in W_{j}^{\prime}$ satisfies
$\sum_{i\in S}|u_{i}|^{2}\lesssim\frac{1}{\log^{4}(2^{j+1}/|S|)}$ (4.1)
since we can write it as
$\mathbf{u}=\alpha\mathbf{u}_{r}+\beta\sqrt{-1}\mathbf{u}_{c}$ where
$\mathbf{u}_{r},\mathbf{u}_{c}\in W_{j}$ are real unit vectors and
$\alpha,\beta\in\mathbb{R}$ satisfy $\alpha^{2}+\beta^{2}=1$.
Now we construct our random subspace as follows: let $W=W_{j}^{\prime}$ where
$j$ is a random integer uniformly chosen from
$J=\\{\lceil\log_{2}(2k\log^{4}d)\rceil,\ldots,\lfloor\log_{2}d\rfloor\\}.$
Let $\mu$ be the probability measure on $\operatorname{Gr}_{\mathbb{C}}(k,d)$
that gives $W$.
For every $\mathbf{v}\in\mathbb{S}(\mathbb{C}^{d})$, we have
$\sup_{\gamma\in\Gamma_{d}}\lVert\operatorname{proj}_{W}(\gamma\mathbf{v})\rVert_{2}=\sup_{\gamma\in\Gamma_{d}}\sup_{\mathbf{w}\in\mathbb{S}(W)}|\langle\gamma\mathbf{v},\mathbf{w}\rangle|=\sup_{\mathbf{w}\in\mathbb{S}(W)}\langle\mathbf{v}^{\succ},\mathbf{w}^{\succ}\rangle.$
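The last equality rests on the rearrangement inequality: over all permutations, the inner product is maximized when the two coordinate orderings match. A brute-force stdlib check of that underlying fact for small illustrative dimensions (real nonnegative entries, standing in for the coordinate magnitudes):

```python
import itertools
import random

random.seed(3)
d = 6
v = [random.random() for _ in range(d)]
w = [random.random() for _ in range(d)]

# Maximize sum_i v_{s(i)} * w_i over all permutations s by brute force.
best = max(sum(v[s[i]] * w[i] for i in range(d))
           for s in itertools.permutations(range(d)))

# Rearrangement inequality: the maximum pairs the sorted sequences.
v_sorted = sorted(v, reverse=True)
w_sorted = sorted(w, reverse=True)
rearranged = sum(a * b for a, b in zip(v_sorted, w_sorted))

assert abs(best - rearranged) < 1e-12
```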
Therefore
$\int\sup_{\gamma\in\Gamma_{d}}\lVert\operatorname{proj}_{W}(\gamma\mathbf{v})\rVert_{2}^{2}d\mu(W)=\frac{1}{|J|}\sum_{j\in
J}\sup_{\gamma\in\Gamma_{d}}\lVert\operatorname{proj}_{W_{j}^{\prime}}(\gamma\mathbf{v})\rVert_{2}^{2}=\frac{1}{|J|}\sum_{j\in
J}\sup_{\mathbf{w}\in\mathbb{S}(W_{j}^{\prime})}\langle\mathbf{v}^{\succ},\mathbf{w}^{\succ}\rangle^{2}.$
Let $\mathbf{w}_{j}^{\prime}\in\mathbb{S}(W_{j}^{\prime})$ be such that
$\sup_{\mathbf{w}\in\mathbb{S}(W_{j}^{\prime})}\langle\mathbf{v}^{\succ},\mathbf{w}^{\succ}\rangle^{2}=\langle\mathbf{v}^{\succ},(\mathbf{w}_{j}^{\prime})^{\succ}\rangle^{2},$
which exists by compactness. For $i,j\in J$ with $i\geq j$, we have
$|\langle(\mathbf{w}_{i}^{\prime})^{\succ},(\mathbf{w}_{j}^{\prime})^{\succ}\rangle|\leq\lVert\operatorname{proj}_{V_{j}}((\mathbf{w}_{i}^{\prime})^{\succ})\rVert_{2}\lesssim\frac{1}{\log^{2}(2^{i+1}/2^{j})}.$
The first inequality follows since $\mathbf{w}_{j}^{\prime}\in V_{j}^{\prime}$,
which implies $(\mathbf{w}_{j}^{\prime})^{\succ}\in V_{j}$. The second follows
from (4.1) applied to $\mathbf{w}_{i}^{\prime}$ and $S$ the subset of
$[2^{i}]$ composed of the $2^{j}$ largest magnitude coordinates of
$\mathbf{w}_{i}^{\prime}$.
Applying Lemma 4.6, we deduce
$\displaystyle\int\sup_{\gamma\in\Gamma_{d}}\lVert\operatorname{proj}_{W}(\gamma\mathbf{v})\rVert_{2}^{2}d\mu(W)$
$\displaystyle=\frac{1}{|J|}\sum_{j\in
J}\langle\mathbf{v}^{\succ},(\mathbf{w}_{j}^{\prime})^{\succ}\rangle^{2}\leq\sup_{i\in
J}\frac{1}{|J|}\sum_{j\in
J}|\langle(\mathbf{w}_{i}^{\prime})^{\succ},(\mathbf{w}_{j}^{\prime})^{\succ}\rangle|$
$\displaystyle\lesssim\frac{1}{|J|}\bigg{(}\sum_{j\in J,j\geq
i}\frac{1}{\log^{2}(2^{j+1}/2^{i})}+\sum_{j\in
J,j<i}\frac{1}{\log^{2}(2^{i+1}/2^{j})}\bigg{)}\lesssim\frac{1}{|J|}.$
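To make the final estimate explicit: writing $m=|i-j|$, each summand equals $((m+1)\log 2)^{-2}$, so
$\sum_{j\in J,\,j\geq i}\frac{1}{\log^{2}(2^{j+1}/2^{i})}+\sum_{j\in J,\,j<i}\frac{1}{\log^{2}(2^{i+1}/2^{j})}\leq\frac{2}{\log^{2}2}\sum_{m\geq 0}\frac{1}{(m+1)^{2}}\lesssim 1,$
while $|J|\geq\log_{2}(d/k)-\log_{2}(2\log^{4}d)-2\gtrsim\log(d/k)$ in the range $k\leq d/(\log d)^{5}$.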
This $\mu$ thus shows that $f_{k}^{\operatorname{alt}}(d)\lesssim
1/\sqrt{\log(d/k)}$. ∎
## 5\. Primitive representations
We now turn to the case of bounding $f_{k}^{\operatorname{prim}}(d)$. First,
we show that if the group $G\leqslant\mathsf{U}(\mathbb{C}^{d})$ is
sufficiently small, then a random basis achieves the necessary bound for
$f_{k}^{\operatorname{prim}}(d)$. This is a minor modification of [4,
Proposition 4.1].
###### Proposition 5.1.
Let $G\leqslant\mathsf{U}(\mathbb{C}^{d})$. Suppose that $[G:Z_{d}\cap G]\leq
e^{d/\log d}$, where $Z_{d}:=\\{\lambda I_{d}:|\lambda|=1\\}$. Then for
$k\in[d]$ there exists a probability measure $\mu$ on
$\operatorname{Gr}_{\mathbb{C}}(k,d)$ such that
$\int\sup_{g\in
G}\lVert\operatorname{proj}_{W}(g\mathbf{v})\rVert_{2}^{2}d\mu(W)\lesssim\frac{1}{\log(2d/k)}\lVert\mathbf{v}\rVert_{2}^{2}$
for all $\mathbf{v}\in\mathbb{C}^{d}$.
###### Proof.
We let $\mu$ be the uniform measure on $\operatorname{Gr}_{\mathbb{C}}(k,d)$.
By scaling, we may assume that $\mathbf{v}$ is a unit vector. Furthermore let
$W^{\prime}$ be the subspace generated by the first $k$ coordinate vectors
$\mathbf{e}_{1},\ldots,\mathbf{e}_{k}$. Note that
$\displaystyle\mathbb{P}_{W}\bigg{[}\sup_{g\in
G}\lVert\operatorname{proj}_{W}(g\mathbf{v})\rVert_{2}\geq t\bigg{]}$
$\displaystyle\leq e^{d/\log
d}\mathbb{P}_{W}[\lVert\operatorname{proj}_{W}(\mathbf{v})\rVert_{2}\geq t]$
$\displaystyle\leq e^{d/\log
d}\mathbb{P}_{\mathbf{v}^{\prime}\in\mathbb{S}(\mathbb{C}^{d})}[\lVert\operatorname{proj}_{W^{\prime}}(\mathbf{v}^{\prime})\rVert_{2}\geq
t]$
using a union bound and then unitary invariance. Now note that
$\mathbb{E}[\lVert\operatorname{proj}_{W^{\prime}}(\mathbf{v}^{\prime})\rVert_{2}]^{2}\leq\mathbb{E}[\lVert\operatorname{proj}_{W^{\prime}}(\mathbf{v}^{\prime})\rVert_{2}^{2}]=k/d$
and that $\lVert\operatorname{proj}_{W^{\prime}}(\mathbf{v^{\prime}})\rVert$
is a $1$-Lipschitz function of $\mathbf{v}^{\prime}$. Therefore by Lévy
concentration on the sphere we have that
$\mathbb{P}_{\mathbf{v}^{\prime}\in\mathbb{S}(\mathbb{C}^{d})}[\lVert\operatorname{proj}_{W^{\prime}}(\mathbf{v}^{\prime})\rVert_{2}\geq\sqrt{k/d}+C/\sqrt{\log
d}]\leq e^{-2d/\log d}$
for a suitably large absolute constant $C$. Finally, using $\sqrt{k/d}\lesssim
1/\sqrt{\log(2d/k)}$ and using the bound
$\lVert\operatorname{proj}_{W^{\prime}}(\mathbf{v^{\prime}})\rVert_{2}\leq 1$,
the desired result follows immediately. ∎
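The two probabilistic facts driving this proof, namely that $\mathbb{E}[\lVert\operatorname{proj}_{W^{\prime}}(\mathbf{v}^{\prime})\rVert_{2}^{2}]=k/d$ for a uniform unit vector $\mathbf{v}^{\prime}$ and the concentration of the norm around $\sqrt{k/d}$, are easy to check numerically. The following Monte Carlo sketch is purely illustrative and not part of the proof; the dimensions are chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, trials = 400, 20, 2000

# Uniform unit vectors on the complex sphere S(C^d): normalize
# standard complex Gaussians.
z = rng.standard_normal((trials, d)) + 1j * rng.standard_normal((trials, d))
z /= np.linalg.norm(z, axis=1, keepdims=True)

# proj_{W'} keeps the first k coordinates.
norms = np.linalg.norm(z[:, :k], axis=1)

mean_sq = np.mean(norms**2)
print(mean_sq, k / d)   # the two values should nearly coincide
print(norms.std())      # small: concentration around sqrt(k/d)
```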
We need the following key group theoretic result from Green [4], which in turn
builds on ideas from Collins’ work on optimal bounds for Jordan’s theorem [2].
Roughly, it says that if $[G:Z_{d}\cap G]$ is large then $G$ has a large
normal alternating subgroup. The first part of the following theorem is [4,
Proposition 4.2], while the rest is implicit in the proof of [4, Proposition
1.11].
###### Theorem 5.2 ([4, Section 4]).
Let $G\leqslant\mathsf{U}(\mathbb{C}^{d})$ be primitive and suppose that
$[G:Z_{d}\cap G]\geq e^{d/\log d}$. If $d$ is sufficiently large then all of
the following hold.
1. (1)
$G$ has a normal subgroup isomorphic to the alternating group $A_{n}$ for some
$n\gtrsim d/(\log d)^{4}$.
2. (2)
$G$ has a subgroup of index at most $2$ of the form $A_{n}\times H$, with the
same $n$.
3. (3)
The resulting representation $\rho:A_{n}\times H\hookrightarrow
G\hookrightarrow\mathsf{U}(\mathbb{C}^{d})$ decomposes into irreducible
representations, at least one of which (call it $\rho_{1}$) is of the form
$\rho_{1}\simeq\psi\otimes\psi^{\prime}$, where $\psi^{\prime}$ is an
irreducible representation of $H$ and $\psi$ is the representation of $A_{n}$
acting via permutation of coordinates on
$\{\mathbf{z}\in\mathbb{C}^{n}:z_{1}+\cdots+z_{n}=0\}$.
We are now in position to prove Proposition 2.10, which recall says that there
is an absolute constant $c>0$ such that for every $k\leq cd/(\log d)^{4}$ we
have
$f_{k}^{\operatorname{prim}}(d)\lesssim\sup_{d^{\prime}\geq cd/(\log
d)^{4}}f^{\operatorname{alt}}_{k}(d^{\prime}).$
The proof mirrors that of [4, Proposition 1.11], but we correct an error of
Green ([4, p. 20]) involving an incorrect orthogonality identity. This
erroneous deduction is replaced by an argument which still allows one to
reduce the primitive case to the alternating case.
###### Proof of Proposition 2.10.
We may assume $d$ is sufficiently large. If $[G:Z_{d}\cap G]\leq e^{d/\log
d}$, then the result follows by Proposition 5.1. So we can assume
$[G:Z_{d}\cap G]\geq e^{d/\log d}$, and thus by Theorem 5.2, $G$ has a normal
subgroup isomorphic to $A_{n}$ for some $n\gtrsim d/(\log d)^{4}$ and that $G$
has a subgroup of index at most $2$ which is of the form $A_{n}\times H$. If
the index is $2$, let $\tau$ be the nontrivial right coset representative of
$A_{n}\times H$ in $G$ (otherwise just let $\tau$ be the identity). Note that
$\displaystyle\sup_{g\in
G}\lVert\operatorname{proj}_{W}(g\mathbf{v})\rVert_{2}^{2}$
$\displaystyle\leq\sup_{g\in A_{n}\times
H}\lVert\operatorname{proj}_{W}(g\mathbf{v})\rVert_{2}^{2}+\sup_{g\in
A_{n}\times H}\lVert\operatorname{proj}_{W}(g\tau\mathbf{v})\rVert_{2}^{2},$
so it is easy to see that, up to losing a constant factor, we may reduce to
studying groups of the form $G=A_{n}\times H$ where $n\gtrsim d/(\log d)^{4}$
(but note that the representation may no longer be primitive, or even
irreducible).
Now Theorem 5.2 shows that the representation $\rho:A_{n}\times
H\to\mathsf{U}(\mathbb{C}^{d})$ coming from this setup has an irreducible
component of the form $\rho_{1}\simeq\psi\otimes\psi^{\prime}$, where
$\psi^{\prime}$ is an irreducible representation of $H$ and $\psi$ is the
representation of $A_{n}$ acting via permutation of coordinates on
$\{\mathbf{z}\in\mathbb{C}^{n}:z_{1}+\cdots+z_{n}=0\}$.
Note that $\dim\rho_{1}\geq\dim\psi=n-1\gtrsim d/(\log d)^{4}$, so
$\dim\rho_{1}\geq k$ provided that $c>0$ is sufficiently small. We will choose
a $k$-dimensional subspace of the irreducible component $\rho_{1}$.
We explicitly present this situation as follows. Let $V^{\prime}$ be the space
acted on by $\psi^{\prime}$ (unitarily). Consider
$V=\mathbf{1}^{\perp}\subseteq\mathbb{C}^{n}$, and consider the spaces
$V\otimes V^{\prime}\subseteq\mathbb{C}^{n}\otimes V^{\prime}$, which has a
natural unitary structure given by the tensor product. Note $\psi$ acts on $V$
by permutation of coordinates when represented in $\mathbb{C}^{n}$. The space
$V\otimes V^{\prime}$ is spanned by pure tensors
$\mathbf{v}\otimes\mathbf{v}^{\prime}$ where $\mathbf{v}$ has zero coordinate
sum, and $\rho_{1}((a,h))$ acts by $\psi(a)\otimes\psi^{\prime}(h)$ on pure
tensors. In fact, we can extend this action to all of $\mathbb{C}^{n}\otimes
V^{\prime}$ in the natural way (and the resulting representation is isomorphic
to a direct sum of $\rho_{1}$ and
$\operatorname{triv}_{A_{n}}\otimes\psi^{\prime}$). At this point, the
analysis will be similar to that in the proof of Proposition 2.8.
Let $\nu$ be the measure on $\operatorname{Gr}_{\mathbb{C}}(k,n)$ which is
guaranteed by Definition 2.6 (so is supported on subspaces of
$V\subseteq\mathbb{C}^{n}$) and consider the measure which is supported on a
single atom in $\operatorname{Gr}_{\mathbb{C}}(1,V^{\prime})$ in the direction
of a fixed unit vector $\mathbf{x}$. Let $\mu$ be the tensor of these two
measures, i.e., if $\nu^{\ast}$ samples $k$ orthonormal (sum zero) vectors
$\mathbf{u}_{1},\ldots,\mathbf{u}_{k}$ then we choose the subspace with basis
$\mathbf{u}_{1}\otimes\mathbf{x},\ldots,\mathbf{u}_{k}\otimes\mathbf{x}$.
Now consider some $\mathbf{v}$ in the space $V\otimes
V^{\prime}\subseteq\mathbb{C}^{n}\otimes V^{\prime}$, and write it as
$\mathbf{v}=\sum_{j=1}^{n}\mathbf{e}_{j}\otimes\mathbf{v}_{j}^{\prime}$
where $\mathbf{e}_{j}$ is the $j$-th coordinate vector of
$\mathbb{C}^{n}$. In fact, the $\mathbf{v}_{j}^{\prime}$ must add up to
$\mathbf{0}\in V^{\prime}$. We see that
$\lVert\mathbf{v}\rVert_{2}^{2}=\sum_{j=1}^{n}\lVert\mathbf{v}_{j}^{\prime}\rVert_{2}^{2}.$
We have
$\displaystyle\int\sup_{g\in A_{n}\times H}$
$\displaystyle\lVert\operatorname{proj}_{W}(g\mathbf{v})\rVert_{2}^{2}d\mu(W)$
$\displaystyle=\int\sup_{a\in A_{n},h\in
H}\sum_{\ell=1}^{k}\bigg{|}\bigg{\langle}\sum_{j=1}^{n}\psi(a)\mathbf{e}_{j}\otimes\psi^{\prime}(h)\mathbf{v}_{j}^{\prime},\mathbf{w}_{\ell}\bigg{\rangle}\bigg{|}^{2}d\mu^{\ast}(\mathbf{w}_{1},\ldots,\mathbf{w}_{k})$
$\displaystyle=\int\sup_{a\in A_{n},h\in
H}\sum_{\ell=1}^{k}\bigg{|}\sum_{j=1}^{n}\langle\psi(a)\mathbf{e}_{j},\mathbf{u}_{\ell}\rangle\langle\psi^{\prime}(h)\mathbf{v}_{j}^{\prime},\mathbf{x}\rangle\bigg{|}^{2}d\nu^{\ast}(\mathbf{u}_{1},\ldots,\mathbf{u}_{k})$
$\displaystyle\leq\int\sup_{\mathbf{w}\in\operatorname{Dom}(\mathbf{y})}\sum_{\ell=1}^{k}|\langle\mathbf{w},\mathbf{u}_{\ell}\rangle|^{2}d\nu^{\ast}(\mathbf{u}_{1},\ldots,\mathbf{u}_{k})$
$\displaystyle\leq
f^{\operatorname{alt}}_{k}(n)^{2}\lVert\mathbf{y}\rVert_{2}^{2},$
where $\mathbf{y}\in\mathbb{C}^{n}$ satisfies $y_{j}=\sup_{h\in
H}|\langle\psi^{\prime}(h)\mathbf{v}_{j}^{\prime},\mathbf{x}\rangle|$. The
first inequality follows by noting that
$\langle\psi(a)\mathbf{e}_{j},\mathbf{u}_{\ell}\rangle$ as $j$ varies simply
records the coordinates of $\mathbf{u}_{\ell}$ in some permutation, and by
considering $\mathbf{w}=(w_{1},\ldots,w_{n})$ defined via
$w_{j}=\langle\psi^{\prime}(h)\mathbf{v}_{j}^{\prime},\mathbf{x}\rangle$,
which clearly lies in $\operatorname{Dom}(\mathbf{y})$. Now we see
$\int\sup_{g\in A_{n}\times
H}\lVert\operatorname{proj}_{W}(g\mathbf{v})\rVert_{2}^{2}d\mu(W)\leq
f^{\operatorname{alt}}_{k}(n)^{2}\lVert\mathbf{y}\rVert_{2}^{2}\leq
f^{\operatorname{alt}}_{k}(n)^{2}\sum_{j=1}^{n}\lVert\mathbf{v}_{j}^{\prime}\rVert_{2}^{2}=f^{\operatorname{alt}}_{k}(n)^{2}\lVert\mathbf{v}\rVert_{2}^{2}.\qed$
This completes all the components of the proof of Theorem 1.3.
## 6\. Real subspaces
We already proved Theorem 1.3, which finds a complex subspace. Now we use it
to deduce Theorem 1.2, which gives a real subspace. We will apply the
following version of the restricted invertibility theorem, which is a special
case of [6, Theorem 6]. We write $s_{1}(M)\geq s_{2}(M)\geq\cdots$ for the
singular values of a matrix $M$.
###### Theorem 6.1 ([6, Theorem 6]).
Let $M$ be a real $2k\times 4k$ matrix of rank $2k$. There exists
$S\subseteq[4k]$ with $|S|=k$ such that $M_{S}$, the restriction of $M$ to the
columns $S$, satisfies
$s_{k}(M_{S})\gtrsim\sqrt{\frac{\sum_{j=3k/2}^{4k}s_{j}(M)^{2}}{k}}.$
###### Proof of Theorem 1.2.
Let $2k\leq d/(\log d)^{C}$, where $C$ is as in Theorem 1.3. By embedding $X$
in $\mathbb{S}(\mathbb{C}^{d})$ and using Theorem 1.3 we can find a
$2k$-dimensional complex subspace $W$ of $\mathbb{C}^{d}$ such that
$\sup_{\mathbf{x}\in
X}\lVert\operatorname{proj}_{W}\mathbf{x}\rVert_{2}\lesssim
1/\sqrt{\log(d/k)}.$
Let $\mathbf{v}_{1},\ldots,\mathbf{v}_{2k}$ be a unitary basis for the
subspace $W$ and let the matrix with these columns be denoted by $B$. Now
consider the matrix $M$ which has $4k$ columns which are
$\operatorname{Re}\mathbf{v}_{1},\ldots,\operatorname{Re}\mathbf{v}_{2k}$ and
$\operatorname{Im}\mathbf{v}_{1},\ldots,\operatorname{Im}\mathbf{v}_{2k}$.
Note that $M$ satisfies $s_{2k}(M)\geq 1/\sqrt{2}$, since the vectors
$\mathbf{v}\in\mathbb{C}^{4k}$ satisfying $iv_{j}=v_{j+2k}$ form a
$2k$-dimensional subspace on which $\lVert
M\mathbf{v}\rVert=\lVert\mathbf{v}\rVert/\sqrt{2}$. Therefore by Theorem 6.1
one can select $k$ columns such that the matrix $N$ with those $k$ columns
satisfies
$s_{k}(N)\gtrsim 1.$
Now consider any unit vector $\mathbf{v}$ in the image of $N$. Such a vector
can be represented as $\mathbf{v}=N\mathbf{w}$ where
$\lVert\mathbf{w}\rVert\lesssim 1$. It therefore suffices to prove that
$\sup_{\mathbf{x}\in X,\mathbf{w}\in\mathbb{S}(\mathbb{R}^{k})}|\langle
N\mathbf{w},\mathbf{x}\rangle|\lesssim 1/\sqrt{\log(d/k)}.$
To see this, separate $N$ into $N_{1}$ and $N_{2}$, where $N_{1}$ consists of
the columns chosen from the real parts of the vectors $\mathbf{v}_{i}$ and
$N_{2}$ of those chosen from the imaginary parts. Let these have $\ell$
and $k-\ell$ columns respectively. Then
$\displaystyle\sup_{\mathbf{x}\in
X,\mathbf{w}\in\mathbb{S}(\mathbb{R}^{k})}|\langle
N\mathbf{w},\mathbf{x}\rangle|$ $\displaystyle\leq\sup_{\mathbf{x}\in
X,\mathbf{w}\in\mathbb{S}(\mathbb{R}^{\ell})}|\langle
N_{1}\mathbf{w},\mathbf{x}\rangle|+\sup_{\mathbf{x}\in
X,\mathbf{w}\in\mathbb{S}(\mathbb{R}^{k-\ell})}|\langle
N_{2}\mathbf{w},\mathbf{x}\rangle|$ $\displaystyle\leq 2\sup_{\mathbf{x}\in
X,\mathbf{w}\in\mathbb{S}(\mathbb{C}^{k})}|\langle
B\mathbf{w},\mathbf{x}\rangle|$ $\displaystyle\lesssim
1/\sqrt{\log(d/k)}.\qed$
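The intermediate claim $s_{2k}(M)\geq 1/\sqrt{2}$ admits a quick numerical sanity check. One way to see the claim: identifying $\mathbf{w}\in\mathbb{R}^{4k}$ with $z=\mathbf{w}_{1:2k}-i\mathbf{w}_{2k+1:4k}\in\mathbb{C}^{2k}$ gives $M\mathbf{w}=\operatorname{Re}(Bz)$, and the map $z\mapsto iz$ induces an isometry $J$ with $\lVert M\mathbf{w}\rVert^{2}+\lVert MJ\mathbf{w}\rVert^{2}=\lVert\mathbf{w}\rVert^{2}$, so the eigenvalues of $M^{\mathsf{T}}M$ pair up as $(\lambda,1-\lambda)$ and at least $2k$ of them are at least $1/2$. A purely illustrative sketch with arbitrary small dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 30, 4  # illustrative dimensions; any d >= 4k works here

# B: d x 2k complex matrix with orthonormal columns (a unitary basis of W).
G = rng.standard_normal((d, 2 * k)) + 1j * rng.standard_normal((d, 2 * k))
B, _ = np.linalg.qr(G)

# M: real d x 4k matrix whose columns are Re(v_j) and Im(v_j).
M = np.hstack([B.real, B.imag])

s = np.linalg.svd(M, compute_uv=False)   # singular values, descending
print(s[2 * k - 1])                       # the 2k-th value; >= 1/sqrt(2)

# Eigenvalues of M^T M pair up as (lam, 1 - lam): sorted ascending,
# lam[i] + lam[-1 - i] = 1 for every i.
lam = np.sort(np.linalg.eigvalsh(M.T @ M))
print(np.allclose(lam + lam[::-1], 1.0))  # True
```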
## 7\. Lower Bound
Finally, we show a lower bound of $\Omega(1/\sqrt{\log(2d/k)})$, which
demonstrates optimality of our results.
###### Proof of Theorem 1.5.
We prove the real case; an analogous proof works over $\mathbb{C}$ by
considering a suitably fine discretization of $\Gamma_{d}$, or we can repeat
the proof in Section 6 to transfer a lower bound from real to complex.
The claim for $k=1$ was already proved in [4, _Sharpness_ after Theorem 1.3]
(see the construction at the beginning of this article right after Theorem
1.1). The case $k=1$ implies the result also for $k\leq d^{1-c}$ for any
constant $c$, since we can project from $W$ onto an arbitrary $1$-dimensional
subspace of $W$.
So from now on assume $k\geq d^{1/2}$. Consider the action of
$G=\mathfrak{S}_{d}\ltimes(\mathbb{Z}/2\mathbb{Z})^{d}$ on $\mathbb{R}^{d}$ by
permutation and signing. Let
$\mathbf{a}=\left(\frac{1}{\sqrt{\lfloor
k/2\rfloor+1}},\ldots,\frac{1}{\sqrt{d}},0,\ldots,0\right).$
Let $X$ be the $G$-orbit of $\mathbf{a}/\lVert\mathbf{a}\rVert_{2}$.
Let $W$ be a $k$-dimensional subspace of $\mathbb{R}^{d}$. We wish to show
$\sup_{\mathbf{x}\in
X}\lVert\operatorname{proj}_{W}\mathbf{x}\rVert_{2}\gtrsim
1/\sqrt{\log(2d/k)}$.
Let $\mathbf{y}=(y_{1},\ldots,y_{d})$ be a uniform random vector in
$\mathbb{S}(W)$. Let $\sigma_{i}=(\mathbb{E}y_{i}^{2})^{1/2}$. We have
$\sigma_{1}^{2}+\cdots+\sigma_{d}^{2}=\mathbb{E}[y_{1}^{2}+\cdots+y_{d}^{2}]=1$
(7.1)
and
$\sigma_{i}^{2}=\frac{1}{k}\lVert\operatorname{proj}_{W}(\mathbf{e}_{i})\rVert^{2}\leq\frac{1}{k}.$
(7.2)
Without loss of generality, assume that
$1/\sqrt{k}\geq\sigma_{1}\geq\cdots\geq\sigma_{d}\geq 0$, so that
$\sigma_{i}\leq 1/\sqrt{i}$ for each $i$. We claim that
$a_{i}\geq\sqrt{\frac{2}{3}}\sigma_{i}\qquad\text{ for all }1\leq i\leq
d-k/2.$
Indeed, for $i\leq k$, we have $a_{i}\geq
1/\sqrt{3k/2}\geq\sqrt{2/3}\sigma_{i}$. For $k<i\leq d-\lfloor k/2\rfloor$, we
have $a_{i}=1/\sqrt{\lfloor k/2\rfloor+i}\geq\sigma_{i}\sqrt{i/(\lfloor
k/2\rfloor+i)}\geq\sqrt{2/3}\sigma_{i}$.
We have $\mathbb{E}|y_{i}|\gtrsim(\mathbb{E}y_{i}^{2})^{1/2}=\sigma_{i}$ since
$y_{i}$ is distributed as the first coordinate of a random point on
$\sigma_{i}\sqrt{k}\cdot\mathbb{S}(\mathbb{R}^{k})$.
Putting everything together, we have
$\displaystyle\lVert\mathbf{a}\rVert_{2}\,\sup_{\mathbf{x}\in
X}\lVert\operatorname{proj}_{W}\mathbf{x}\rVert_{2}$
$\displaystyle\geq\sup_{g\in
G}\lVert\operatorname{proj}_{W}g\mathbf{a}\rVert\geq\mathbb{E}\sup_{g\in
G}\langle\mathbf{a},g\mathbf{y}\rangle$
$\displaystyle\geq\mathbb{E}\sum_{1\leq i\leq
d}a_{i}|y_{i}|\gtrsim\sum_{i=1}^{d}a_{i}\sigma_{i}\gtrsim\sum_{i=1}^{d-k/2}\sigma_{i}^{2}\geq\frac{1}{2},$
where the final step uses (7.1) and (7.2). Thus
$\sup_{\mathbf{x}\in
X}\lVert\operatorname{proj}_{W}\mathbf{x}\rVert_{2}\gtrsim\frac{1}{\lVert\mathbf{a}\rVert_{2}}\gtrsim
1/\sqrt{\log(2d/k)}.\qed$
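The identities (7.1) and (7.2) used above, i.e. $\sum_i\sigma_i^2=1$ and $\sigma_i^2=\frac{1}{k}\lVert\operatorname{proj}_W(\mathbf{e}_i)\rVert^2\leq\frac{1}{k}$ for $\mathbf{y}$ uniform on $\mathbb{S}(W)$, are straightforward to verify numerically. The following Monte Carlo sketch is purely illustrative, with small arbitrary dimensions:

```python
import numpy as np

rng = np.random.default_rng(2)
d, k, trials = 12, 5, 200_000

# Random k-dimensional subspace W of R^d: orthonormal basis Q (d x k).
Q, _ = np.linalg.qr(rng.standard_normal((d, k)))

# y uniform on S(W): y = Q u with u uniform on the unit sphere of R^k.
u = rng.standard_normal((trials, k))
u /= np.linalg.norm(u, axis=1, keepdims=True)
y = u @ Q.T                                    # trials x d

sigma_sq_mc = np.mean(y**2, axis=0)            # Monte Carlo E[y_i^2]
sigma_sq = np.sum(Q**2, axis=1) / k            # ||proj_W e_i||^2 / k

print(np.allclose(sigma_sq_mc, sigma_sq, atol=5e-3))  # True
print(sigma_sq.sum())       # ~ 1.0, matching (7.1)
print(sigma_sq.max() <= 1 / k + 1e-12)                # True, matching (7.2)
```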
## References
* [1] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart, _Concentration inequalities_ , Oxford University Press, Oxford, 2013, A nonasymptotic theory of independence, With a foreword by Michel Ledoux.
* [2] Michael J. Collins, _On Jordan’s theorem for complex linear groups_ , J. Group Theory 10 (2007), 411–423.
* [3] Harold Davenport, _Multiplicative number theory_ , third ed., Graduate Texts in Mathematics, vol. 74, Springer-Verlag, New York, 2000, Revised and with a preface by Hugh L. Montgomery.
* [4] Ben Green, _On the width of transitive sets: Bounds on matrix coefficients of finite groups_ , Duke Math. J. 169 (2020), 551–578.
* [5] Michel Ledoux and Michel Talagrand, _Probability in Banach spaces_ , Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)], vol. 23, Springer-Verlag, Berlin, 1991, Isoperimetry and processes.
* [6] Assaf Naor and Pierre Youssef, _Restricted invertibility revisited_ , A journey through discrete mathematics, Springer, Cham, 2017, pp. 657–691.
* [7] Mark Rudelson, _Recent developments in non-asymptotic theory of random matrices_ , Modern aspects of random matrix theory, Proc. Sympos. Appl. Math., vol. 72, Amer. Math. Soc., Providence, RI, 2014, pp. 83–120.
* [8] Roman Vershynin, _High-dimensional probability_ , Cambridge Series in Statistical and Probabilistic Mathematics, vol. 47, Cambridge University Press, Cambridge, 2018, An introduction with applications in data science, With a foreword by Sara van de Geer.
# Statistical guided-waves-based SHM via stochastic non-parametric time series
models
Ahmad Amer
Intelligent Structural Systems Laboratory (ISSL)
Department of Mechanical, Aerospace and Nuclear Engineering
Rensselaer Polytechnic Institute, Troy, NY, USA
Email: <EMAIL_ADDRESS>
Fotis Kopsaftopoulos (corresponding author)
Intelligent Structural Systems Laboratory (ISSL)
Department of Mechanical, Aerospace and Nuclear Engineering
Rensselaer Polytechnic Institute, Troy, NY, USA
Email: <EMAIL_ADDRESS>
###### Abstract
Damage detection in active-sensing, guided-waves-based Structural Health
Monitoring (SHM) has evolved through multiple eras of development during the
past decades. Nevertheless, there still exist a number of challenges facing
the current state-of-the-art approaches, both in the industry as well as in
research and development, including low damage sensitivity, lack of robustness
to uncertainties, need for user-defined thresholds, and non-uniform response
across a sensor network. In this work, a novel statistical framework is
proposed for active-sensing SHM based on the use of ultrasonic guided waves.
This framework is based on stochastic non-parametric time series models and
their corresponding statistical properties in order to readily provide healthy
confidence bounds and enable accurate and robust damage detection via the use
of appropriate statistical decision making tests. Three such methods and
corresponding statistical quantities (test statistics) along with decision
making schemes are formulated and experimentally assessed via the use of three
coupons with different levels of complexity: an Al plate with a growing notch,
a carbon fiber-reinforced plastic (CFRP) plate with added weights to simulate
local damages, and the CFRP panel used in the Open Guided Waves project [1],
all fitted with piezoelectric transducers in a pitch-catch configuration. The
performance of the proposed methods is compared to that of state-of-the-art
time-domain damage indices (DIs). The results demonstrate the increased
sensitivity and robustness of the proposed methods, with better tracking
capability of damage evolution compared to conventional approaches, even for
damage-non-intersecting actuator-sensor paths. In particular, the $Z$
statistic emerges as the best damage detection metric compared to conventional
DIs, as well as the other proposed statistics. This is attributed to the
incorporation of experimental uncertainty in defining the $Z$ statistic, which
results in an approach for damage detection that is both sensitive and robust. Overall,
the proposed statistical methods exhibit greater damage sensitivity across
different components, with enhanced robustness to uncertainty, as well as
user-friendly application.
###### Contents
1. 1 Introduction
2. 2 The Statistical Damage Detection Framework
1. 2.1 The General Framework
2. 2.2 Overview of Non-parametric Time Series Representations
3. 2.3 The Single-set $F$ Statistic Method
4. 2.4 The Multiple-set Modified $F_{m}$ Statistic Method
5. 2.5 The Multiple-set $Z$ Statistic Method
6. 2.6 Reference State-of-the-Art Damage Indices
3. 3 Results and Discussion
1. 3.1 Test Case I: Damage Detection in an Aluminum Plate
1. 3.1.1 Test Setup, Damage Types and Data Acquisition
2. 3.1.2 Damage Detection Results
2. 3.2 Test Case II: Damage Detection in CFRP Plate
1. 3.2.1 Test Setup, Damage Types and Data Acquisition
2. 3.2.2 Damage Detection Results
3. 3.3 Test Case III: Damage Detection in the Open Guided-Waves CFRP Panel
1. 3.3.1 Test Setup, Damage Types and Data Acquisition
2. 3.3.2 Damage Detection Results
4. 4 Conclusions
5. A Damage Detection Summary Results: Notched Al Coupon
6. B Damage Detection Summary Results: CFRP Coupon
7. C Damage Detection Summary Results: OGW CFRP Panel
## 1 Introduction
In the near future, Structural Health Monitoring (SHM) systems will be capable
of implementing all four levels of SHM, namely: damage detection,
localization, quantification and remaining useful life estimation (prognosis)
[2, 3, 4, 5], with sustainable levels of performance in complex components
under varying operational and environmental conditions. In order to reach this
milestone, a number of challenges facing current SHM techniques needs to be
addressed. These challenges originate from the deterministic nature of the
majority of the currently-employed approaches, i.e. they do not allow for the
extraction of appropriate confidence intervals for damage detection,
localization and quantification [6, 7, 8]. This leads to the characterization
of those techniques as inefficient in the face of uncertainty, stochastic
time-variant and non-linear structural responses [9, 10, 11], as well as
incipient damage types and complex failure modes that can be easily masked by
the effects of varying states [12, 13]. Thus, there lies a need for the
development of SHM frameworks, where proper understanding, modeling, and
analysis of stochastic structural responses under varying states and damage
characteristics is achieved for clearing the road towards achieving the
aforementioned ultimate goal of SHM systems. Towards this end, many
researchers have proposed the use of statistical distributions of damage-
related features in devising SHM metrics and corresponding probabilities (also
known as statistical and/or probabilistic SHM); for instance, see [14, 15, 16,
17, 18, 19, 7] for probabilistic damage detection and localization, and [14,
20, 21, 22, 8] for probabilistic damage quantification. These approaches
promise wider applicability due to the direct extraction of confidence bounds
for detection, localization, and quantification, as well as present an
alternative approach for SHM reliability quantification without the need for
further non-destructive testing, such as that required to obtain the
Probability of Detection (POD) [23, 24, 4, 25, 26].
In a more specific context, as the most fundamental level of SHM [6, 14, 27],
damage detection has received significant attention throughout the last two
decades. Within the framework of active-sensing guided-waves-based SHM, one of
the most widely-used techniques for damage detection (and quantification) is
the concept of the Damage/Health Index/Indicator (D/HI) [28, 29], where some
features of the signal from an unknown structural state are compared to those
from the healthy structure [30, 31]. To this end, the most widely used
DI-based approaches are based on the time delay of specific mode wave packets
in the acousto-ultrasound signal, the amplitude/magnitude of the signal, and
the energy content of the signals, all used as the features to differentiate
between a healthy and a damaged structure [32, 33, 30, 4, 31, 34]. These
approaches, hereafter denoted as conventional DI-based methods, have been used
extensively in the literature due to their simplicity (no experience needed in
the interpretation of results) and because they allow a damage/no-damage
paradigm, which may facilitate the decision-making stage [32]. However, there exists a
number of challenges facing these types of methods when it comes to damage
detection. Namely, their deterministic nature and the time-varying and non-
linear structural responses within a structure can limit the applicability of
such methods [6, 9, 10, 11]. In addition, the effect of complex damage types
and their stochastic evolution can be masked under varying operational and
environmental states, further inhibiting the effectiveness of such DIs in
damage detection [6, 12, 13]. Other issues related to the conventional DI-
based approach include the need for user-defined damage thresholds for damage
detection [35, 36] and the phenomenon of saturation [37].
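As a concrete illustration of such a conventional DI, the following sketch computes a simple energy-based index between a baseline and a current signal. This is one hypothetical, representative normalization only; published DIs (including those of the cited works) differ in detail.

```python
import numpy as np

def energy_di(baseline: np.ndarray, current: np.ndarray) -> float:
    """Toy energy-based damage index: relative change in signal energy
    with respect to the healthy baseline. One representative choice of
    normalization, not the specific DI of any cited study."""
    e_base = np.sum(baseline**2)
    e_curr = np.sum(current**2)
    return float(abs(e_curr - e_base) / e_base)

# Synthetic example: a windowed tone burst whose amplitude drops 20%
# "after damage" (all parameters are illustrative).
t = np.linspace(0.0, 1e-4, 1000)
healthy = np.sin(2 * np.pi * 250e3 * t) * np.hanning(t.size)
damaged = 0.8 * healthy

print(energy_di(healthy, healthy))            # 0.0
print(round(energy_di(healthy, damaged), 2))  # 0.36, i.e. 1 - 0.8**2
```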
As such, the challenges facing the aforementioned methods have been tackled
throughout the literature using different approaches, with most of them based
on advanced variants of conventional DIs. Although these endeavors span many
strategies, the most common approaches either enhance current time-domain DIs
(see for instance [4, 38]), use frequency-domain or mixed-domain DIs (see for
instance [32, 39, 35]), or use advanced signal processing/modelling as a
preliminary step before calculating DIs (see [40, 41, 42]). Another family of
techniques is based on baseline-free approaches (see [36, 43, 44]), which
themselves can be further categorized into a number of approaches as will be
discussed shortly. Although the proposed enhancements exhibit better damage
detection performance compared to conventional DIs, no single technique seems
to collectively address the current drawbacks of conventional DIs. The
following discussion briefly outlines selected studies from each family of
approaches, highlighting the advantages and drawbacks of each family of
techniques.
In the context of enhanced time-domain approaches, Janapati et al. [4]
proposed a DI that depends solely on guided-wave propagation signal
normalization and applied it to many identical coupons in order to pinpoint
the source of variability between seemingly-identical SHM systems, and thus
better understand the effect of uncertainties on time-domain DIs. Su and
coworkers [39] compared three different DIs for fatigue damage
characterization within an active-sensing acousto-ultrasonic SHM framework:
the traditional time-of-flight delay and energy dissipation indices, and a
novel index utilizing non-linear features of guided-wave signals, such as the
second harmonic generation. They observed higher sensitivities, as well as
better damage evolution-tracking using nonlinear features compared to the
traditional approaches relying on linear ones. In addition, they concluded
that analyzing the time-frequency domain instead of the time domain alone
enhances damage detection ability, especially for early-stage cracks. Building
upon that mixed-domain approach, Jin et al. [32] used DIs in both the time and
the frequency domains. In addition, in order to address the uncertainties in
each individual path DI due to noise and varying conditions, they proposed an
arithmetic fusion algorithm, where DIs based on amplitude and energy, both
generated in the time and the frequency domains (a total of four DIs), are
each summed over all the actuator-sensor path signals coming from a steel
plate in order to “visualize” fatigue crack growth. Although these endeavors
were capable of addressing certain challenges facing DIs, they are still not
probabilistic in nature, and are thus still prone to error due to
uncertainties.
In order to address this, and building upon the fact that the frequency domain
may offer a different representation of the signal dynamics, many researchers
studied the effect of damage on the energy of wavelets (coming from some type
of wavelet transformation of the signals) by using the concept of entropy
[45]. Basically, the Shannon entropy [46] is calculated for windows/parts of
the signal and the observed changes are related to damage, as appropriate. As
this method is essentially based on the probability distribution of the energy
of each wavelet, it is more effective in capturing uncertainties compared to
conventional approaches. However, defining damage/healthy thresholds, as well
as the analysis of changes in entropy, are not straight forward [47], and user
expertise is required in order to properly detect damage. Another approach was
adopted by Qiu and coworkers [35], where time-domain (as-received signal
amplitude) and frequency-domain (frequency-response function) variants of the
DI were used as features to develop a Gaussian mixture model for guided-wave
signals in the healthy case. Then, upon the acquisition of new feature values,
the model is migrated and a statistical technique is used to measure the
differences between the baseline and the migrated models. Applying this
approach to a real-life fuselage component with a developing fatigue crack,
they observed enhanced sensitivity and better damage evolution tracking.
Most importantly, they also concluded that this technique is superior because
it does not require user experience in defining detection thresholds. One
drawback to this approach, however, is the complexity in defining the original
Gaussian mixture model, which requires many data sets, and multiple steps,
including k-means clustering and expectation maximization algorithms.
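The entropy-based feature described above can be sketched as follows: the signal energy is partitioned into a probability distribution and its Shannon entropy is computed. This is a hedged, minimal sketch; real implementations typically use wavelet-packet energies rather than the plain time windows assumed here.

```python
import numpy as np

def energy_entropy(signal: np.ndarray, n_windows: int = 16) -> float:
    """Shannon entropy (bits) of the relative energy distribution over
    equal windows of a signal. Illustrative stand-in for the
    wavelet-energy entropy feature discussed in the text."""
    windows = np.array_split(signal, n_windows)
    e = np.array([np.sum(w**2) for w in windows])
    p = e / e.sum()                 # relative energy distribution
    p = p[p > 0]                    # convention: 0 * log 0 = 0
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
burst = np.zeros(1024)
burst[100:200] = rng.standard_normal(100)   # localized wave packet
noise = rng.standard_normal(1024)           # energy spread everywhere

# Localized energy gives a lower (more peaked) entropy than spread energy.
print(energy_entropy(burst) < energy_entropy(noise))  # True
```

Damage-induced scattering redistributes energy among windows, so changes in this entropy relative to the healthy case are what such methods monitor.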
In another attempt to enhance the detection capability of DIs under
uncertainty, several researchers proposed baseline-free techniques, which do
not require the presence of pre-sampled reference/healthy signals to compare
to. In many of these techniques, either an instantaneous baseline is acquired
from an identical path to the one being investigated, or from the reciprocal
of that path, or the signal is reversed and analyzed for mode conversions and
deviations from the original one, amongst other methods [43]. Although these
approaches appear to be more robust to varying conditions owing to the lack of
pre-sampled baseline signals, most of them require knowledge of dispersion
curves for the components being monitored, and dictate sophisticated actuation
strategies. In addition, it has been shown that, depending on the actuator-
sensor path from which the signal is coming, the evolution of the DIs can
proceed in a manner uncorrelated with damage evolution [8, 32], which clearly
limits the applicability of many of these techniques to specific sensor
network designs and simple boundary conditions. Although there are recent
studies on averting this latter drawback [48, 44], these approaches still
require experience in defining detection thresholds.
Although the highlighted approaches show promise for enhancing the damage
detection capability of DIs, from the discussion above it can be concluded
that there exist no approaches capable of overcoming the aforementioned
challenges facing DIs with a user-friendly way that can be widely applicable
to different sensor networks and component designs. Namely, there still lies
the need to develop damage detection techniques that overcome the following
challenges, which are faced by the current DI-based approaches:
* •
Complexity in defining/calculating the damage detection metric.
* •
Lack of a straightforward approach in defining appropriate statistical damage
thresholds, i.e. the need for historical data and user experience with
specific damage cases.
* •
Poor performance in complex damage cases and non-linear structural responses
(e.g. complex composite parts).
* •
Lack of robustness towards uncertainties originating from operational or
environmental sources.
* •
Failure to follow damage evolution in some cases, either entirely (e.g. damage
non-intersecting signals [8]), or starting from a certain damage size (e.g.
saturation phenomenon [37]).
In order to overcome these challenges, the use of non-parametric time series
(NP-TS) models within a statistical framework is proposed herein in order to
tackle damage detection under uncertainty. NP-TS models have been widely used
in damage detection via vibration-based SHM [49, 50, 51] due to their
stochastic nature, which inherently accounts for uncertainty and allows for
the extraction of theoretical and experimental confidence intervals via
statistical decision making schemes, avoiding the need for user-defined
thresholds. Also, NP-TS frequency-domain representations of system dynamics can
exhibit increased sensitivity to damage and entertain simplicity of
application requiring little-to-no user experience [49, p. 212]. Finally, as
will be shown herein, NP-TS models can prove superior to conventional DIs in
following damage evolution. Thus, the use of stochastic NP-TS representations
for damage detection has the potential to enhance the detection performance
with straight-forward applicability due to the ability of extracting “inherent
thresholds” from the SHM metrics themselves, as well as simplicity of
application.
In a previous preliminary study [7], the authors applied NP-TS representations
and statistical hypothesis testing (SHT) to an Al coupon and a stiffened panel
showing the extraction of estimation confidence intervals from the metrics
being used, as well as the enhanced detection capability with these stochastic
models compared to DI-based approaches. In the current study, work on NP-TS
models for damage detection in active-sensing acousto-ultrasound SHM is
significantly expanded, and their performance in detecting damage over three
different structural coupons is compared and experimentally assessed with that
of two state-of-the-art DIs from the literature [4, 35]. To the best of the
authors’ knowledge, the NP-TS-based damage detection metrics used herein have not
been proposed previously within the framework of active-sensing guided-waves-
based SHM. The main novel aspects of this study include:
* (a)
The introduction of a novel data-based statistical damage detection framework
based on NP-TS models in active-sensing guided-waves-based SHM, as well as the
expansion of two previously proposed methods by the authors and co-workers
[52, 49].
* (b)
The application of the proposed methods in two composite panels with different
types of simulated damage, as well as in a notched Al coupon.
* (c)
The extraction of statistical confidence intervals and the detection of damage
via appropriate statistical hypothesis testing schemes, negating the
requirement of user-defined thresholds.
* (d)
The proposal of a straightforward damage detection method in active-sensing
guided-waves-based SHM, with the advantage of enhanced detection capability
over conventional DI-based approaches, without sacrificing simplicity.
The remainder of this paper is organized as follows: Section 2 introduces the
development of the statistical framework (Subsection 2.1), the theory of the
utilized stochastic NP-TS representation (Subsection 2.2), the statistics used
in this study for damage detection (Subsections 2.3, 2.4 and 2.5), as well as
briefly presents the literature-based DIs used for comparison in this study
(Subsection 2.6). Then, the experimental setup, the results and the
discussions are presented for every coupon consecutively in Section 3.
Finally, Section 4 concludes this study and proposes extra steps for
enhancement of damage detection within active-sensing guided-wave SHM systems.
## 2 The Statistical Damage Detection Framework
### 2.1 The General Framework
The use of statistical methods for damage detection and identification has
been previously reported for vibration-based SHM [53, 54, 49], and only
recently for active-sensing guided-wave SHM [7]. A typical statistical
framework for damage detection, localization and quantification is shown in
Figure 1 [7, 52]. In this framework, $x[t]$ and $y[t]$ are the individual
actuation and response signals for every structural case, respectively,
indexed with discrete time $t$ ($t=1,2,\ldots,N$), which can be
converted to continuous time through the transformation $(t-1)T_{s}$, where
$T_{s}$ is the sampling time for the recorded signals. The subscripts
($o,A,B,\ldots,$ and $u$) indicate the healthy, damage $A,B,\ldots,$ and
unknown cases, respectively. In this context, the damage cases labeled as
($A,B,\ldots$) can resemble different types, sizes or locations of damage. For
each structural case, all actuation ($X$) and response ($Y$) signals can be
presented as $Z=(X,Y)$, with $Z_{o},Z_{A},Z_{B},...,$ and $Z_{u}$ indicating
the different cases as before.
Figure 1: Framework for statistical time series methods for structural health
monitoring [7, 52, 49].
As shown in Figure 1, the statistical time series framework consists of two
phases, namely the baseline and inspection phase. In the baseline phase, NP-TS
models, each producing a characteristic quantity $\widehat{Q}$, are identified
and properly validated for the healthy ($\widehat{Q}_{0}$) time series signal,
as well as, if available, different predefined damage cases
($\widehat{Q}_{A}$, $\widehat{Q}_{B},\ldots$). Then, during the inspection
phase, the same NP-TS models are identified for the unknown
($\widehat{Q}_{u}$) state of the system. Next, damage detection is achieved
by applying appropriate binary statistical hypothesis tests to assess the
statistical deviation of the unknown quantity $\widehat{Q}_{u}$ from the
healthy quantity $\widehat{Q}_{0}$. If data for predefined damage cases are
available, assessing the statistical similarity of $\widehat{Q}_{u}$ to one of
the damage characteristic quantities $\widehat{Q}_{A}$,
$\widehat{Q}_{B},\ldots$ can additionally enable statistical damage
identification/classification. In the present study, this framework is only
used for damage detection.
### 2.2 Overview of Non-parametric Time Series Representations
Stochastic NP-TS representations utilize time-domain Auto-/Cross-Covariance
Functions (A/CCF) and/or frequency-domain Power-/Cross-Spectral Densities
(P/CSD) in order to model a dynamic stationary signal [55, Chapter 2, pp. 39].
As discussed above, frequency-domain models are used in this study. In this
context, several estimators have been developed for the PSD (also referred to
as Auto Spectral Density) of a sensor excitation and/or response signal,
including the periodogram, the Thompson, the Blackman-Tukey, and the Bartlett-
Welch (or simply Welch) estimators [56, Chapter 5, pp. 235]. As such
estimators are random variables that represent the true PSD of a system, their
corresponding statistical properties, such as the mean and variance, allow for
the extraction of estimation confidence intervals that can be subsequently
used to represent statistical damage thresholds. In this study, the Welch PSD
estimate, which is a modified periodogram estimator using a series of
overlapping windows [57, Chapter 4, pp. 76], is used for damage detection. For
a time series signal $x[t]$, the frequency-domain ($\omega$) Welch PSD
($\widehat{S}_{xx}(\omega)$) is based on the averaging of multiple-windowed
periodograms using properly-selected sample windows $w[t]$ with 50% overlap,
and is calculated as follows [58, Chapter 8, pp. 418] (the hat indicates an
estimated variable):
$\widehat{S}_{xx}(\omega)=\frac{1}{KLUT}\sum_{i=0}^{K-1}\Bigl{|}T\sum_{t=0}^{L-1}w[t]\cdot\widehat{x}[t+iD]\,e^{-j2\pi\omega
tT}\Bigr{|}^{2}$ (1)
with
$U=\frac{1}{L}\sum_{t=0}^{L-1}w^{2}[t],\quad\widehat{x}[t]=x[t]-\widehat{\mu}_{x},\quad
N=L+D(K-1)$ (2)
and $N$, $L$, $K$, $D$, and $T$ being the total number of signal samples, the
size of each window, the number of utilized windows, the shift in samples
between successive windows (for the 50% overlap used here, $D=L/2$), and the
time period of the signal, respectively. $\widehat{\mu}_{x}$ represents the
mean of the time series. The estimation statistics, that is the mean and
variance, of the Welch PSD can be described
as follows in case the Bartlett window is used [58, Chapter 8, pp. 419]:
$E\\{\widehat{S}_{xx}(\omega)\\}=\frac{1}{2\pi
LU}{S}_{xx}(\omega)|W(\omega)|^{2}$ (3)
$Var\\{\widehat{S}_{xx}(\omega)\\}\approx\frac{9}{16}\frac{L}{N}{S}^{2}_{xx}(\omega)$
(4)
where $W(\omega)$ designates the Fourier transform of the window function. One
of the main reasons behind the wide use of the Welch PSD estimator is that it
is asymptotically unbiased and consistent [56]. In this study, the Welch PSD
estimate is used in developing appropriate statistical quantities, also
referred to as test statistics, and corresponding statistical hypothesis tests
for damage detection, as described in the following section.
### 2.3 The Single-set $F$ Statistic Method
Based on the PSD-based NP-TS method and the corresponding statistical
hypothesis testing setup presented in [49, 52], damage can be detected by
assessing changes in the Welch PSD of properly-determined wave packets/modes
from an acousto-ultrasound time series signal. Thus, the characteristic
quantity in this study is $Q={S}_{xx}(\omega)={S}(\omega)$. The main idea is
based on the comparison of the Welch PSD of the response of the structure in
an unknown state, $S_{u}(\omega)$, to that of the structure in its healthy
state, $S_{o}(\omega)$. Damage detection can thus be tackled using the
following SHT problem [7, 49]:
$\begin{array}[]{llll}H_{o}&:&S_{u}(\omega)=S_{o}(\omega)&\text{(null
hypothesis -- healthy structure )}\\\ H_{1}&:&S_{u}(\omega)\neq
S_{o}(\omega)&\text{(alternative hypothesis -- damaged structure)}\end{array}$
(5)
Again, due to the finite nature of the experimental time series, the true PSD
values are unknown, and thus corresponding estimated quantities are utilized
instead ($\widehat{S}$). It can be shown that the Welch PSD estimate will have
the following property [57, Chapter 3, pp. 46]:
$2K\widehat{S}(\omega)/{S}(\omega)\sim\chi^{2}(2K)$ (6)
In the above expression, the factor of 2 arises because every periodogram
averaged in the Welch PSD has a real and an imaginary
component. Consequently, a damage detection statistic following the
$\mathcal{F}$ distribution with $(2K,2K)$ degrees of freedom can be developed
as follows:
$F=\frac{\widehat{S}_{o}(\omega)/S_{o}(\omega)}{\widehat{S}_{u}(\omega)/S_{u}(\omega)}\,\sim\,\mathcal{F}(2K,2K)$
(7)
In the case of a healthy structure (null hypothesis), $S_{u}(\omega)$ and
$S_{o}(\omega)$ coincide, thus:
$\text{Under}\;H_{o}:\quad
F=\frac{\widehat{S}_{o}(\omega)}{\widehat{S}_{u}(\omega)}\,\sim\,\mathcal{F}(2K,2K)$
(8)
Thus, the above SHT decision-making process can be modified as follows:
$\begin{array}[]{ccl}f_{\frac{\alpha}{2}}(2K,2K)\leq
F=\frac{\widehat{S}_{o}(\omega)}{\widehat{S}_{u}(\omega)}\leq
f_{1-\frac{\alpha}{2}}(2K,2K)\quad(\forall\;\omega)&\Longrightarrow&H_{o}\;\text{is
accepted (healthy structure)}\\\ \text{Else}&\Longrightarrow&H_{1}\;\text{is
accepted (damaged structure)}\\\ \end{array}$ (9)
where $\alpha$ is the Type I error (false alarm) probability,
$f_{\frac{\alpha}{2}}$, $f_{1-\frac{\alpha}{2}}$ designate the $\mathcal{F}$
distribution’s $\frac{\alpha}{2}$ and $1-\frac{\alpha}{2}$ critical points,
respectively ($f_{\alpha}$ is defined such that Prob$(F\leq
f_{\alpha})=\alpha$).
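The decision rule of equation (9) can be sketched as a short function. This is an illustrative sketch under the same assumptions as before (SciPy's `welch` in place of `pwelch.m`, Hamming window, 50% overlap); the helper name `f_statistic_test` and its defaults are ours.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import f as f_dist

def f_statistic_test(y_healthy, y_unknown, fs, nperseg=100, alpha=0.05):
    """Single-set F statistic SHT of equation (9): the structure is flagged
    as damaged if S_o_hat / S_u_hat leaves the F(2K, 2K) acceptance band
    at any frequency."""
    def psd(y):
        y = np.asarray(y, float) - np.mean(y)
        return welch(y, fs=fs, window="hamming",
                     nperseg=nperseg, noverlap=nperseg // 2)[1]
    S_o, S_u = psd(y_healthy), psd(y_unknown)
    K = (len(y_healthy) - nperseg) // (nperseg // 2) + 1
    F = S_o / S_u
    lo = f_dist.ppf(alpha / 2, 2 * K, 2 * K)
    hi = f_dist.ppf(1 - alpha / 2, 2 * K, 2 * K)
    damaged = bool(np.any((F < lo) | (F > hi)))
    return F, (lo, hi), damaged
```

Because the test is applied at every frequency bin, a single out-of-band bin rejects the null hypothesis, which is the conservative reading of the "$\forall\;\omega$" condition in equation (9).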
### 2.4 The Multiple-set Modified $F_{m}$ Statistic Method
In many realistic cases, a single baseline signal may not be representative of
the healthy structure, and the average of many signal realizations may be
more meaningful. In that case, multiple closely-spaced (time-wise) response
signal realizations, available for each state of the monitored component
under nominally-constant environmental/operational conditions, can be used to
incorporate experimental statistics into the estimation of the SHM metric
being used. Towards this end, the sample expectation, that is,
$E\\{\widehat{S}_{o}(\omega)\\}=\frac{1}{M}\sum_{h=1}^{M}\widehat{S}_{o}(\omega)\;$
(10)
can be used in order to “expand” the baseline/healthy estimates for the
structure being monitored. In the above expression, $M$ is the number of
healthy data sets used in the estimation of the metric being used. Then,
following the aforementioned property of PSD estimates in equation (6), the
following expression can be developed:
$2KME\\{\widehat{S}_{o}(\omega)\\}/{S}_{o}(\omega)\sim\chi^{2}(2KM)$ (11)
As such, a modified $F$ statistic can be developed by replacing the Welch PSD
estimate with the mean of all PSD estimates of $M$ number of time-series
signals taken for the system under the baseline/healthy state:
$F_{m}=\frac{E\\{\widehat{S}_{o}(\omega)\\}/S_{o}(\omega)}{\widehat{S}_{u}(\omega)/S_{u}(\omega)}\;\sim\;\mathcal{F}(2KM,2K)$
(12)
Under the null hypothesis in equation (5), the $S_{o}(\omega)$ and
$S_{u}(\omega)$ coincide:
$\text{Under}\;H_{o}:\quad
F_{m}=\frac{E\\{\widehat{S}_{o}(\omega)\\}}{\widehat{S}_{u}(\omega)}\;\sim\;\mathcal{F}(2KM,2K)$
(13)
thus, the modified decision making scheme with the appropriate confidence
levels, can be expressed as follows:
$\begin{array}[]{ccl}f_{\frac{\alpha}{2}}(2KM,2K)\leq
F_{m}=\frac{E\\{\widehat{S}_{o}(\omega)\\}}{\widehat{S}_{u}(\omega)}\leq
f_{1-\frac{\alpha}{2}}(2KM,2K)\quad(\forall\;\omega)&\Longrightarrow&H_{o}\;\text{is
accepted (healthy)}\\\ \text{Else}&\Longrightarrow&H_{1}\;\text{is accepted
(damaged)}\\\ \end{array}$ (14)
with $f_{\frac{\alpha}{2}}$, $f_{1-\frac{\alpha}{2}}$ designating the
$\mathcal{F}$ distribution’s $\frac{\alpha}{2}$ and $1-\frac{\alpha}{2}$
critical points, respectively ($f_{\alpha}$ is defined such that
Prob$(F_{m}\leq f_{\alpha})=\alpha$).
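A sketch of the multiple-set test of equation (14) follows, again with SciPy's `welch` standing in for the study's `pwelch.m`; the helper name `fm_statistic_test` and its defaults are ours.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import f as f_dist

def fm_statistic_test(Y_healthy, y_unknown, fs, nperseg=100, alpha=0.05):
    """Multiple-set F_m statistic of equation (14): the sample mean of M
    baseline Welch PSDs (equation (10)) replaces the single healthy
    estimate, giving an F(2KM, 2K) acceptance band."""
    def psd(y):
        y = np.asarray(y, float) - np.mean(y)
        return welch(y, fs=fs, window="hamming",
                     nperseg=nperseg, noverlap=nperseg // 2)[1]
    M = len(Y_healthy)
    E_S_o = np.mean([psd(y) for y in Y_healthy], axis=0)  # equation (10)
    S_u = psd(y_unknown)
    K = (len(y_unknown) - nperseg) // (nperseg // 2) + 1
    Fm = E_S_o / S_u
    lo = f_dist.ppf(alpha / 2, 2 * K * M, 2 * K)
    hi = f_dist.ppf(1 - alpha / 2, 2 * K * M, 2 * K)
    return Fm, (lo, hi), bool(np.any((Fm < lo) | (Fm > hi)))
```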
### 2.5 The Multiple-set $Z$ Statistic Method
With the availability of a sufficiently large number of data sets, that is, a
large $M$ in equation (10), $E\\{\widehat{S}_{o}(\omega)\\}$ approaches the
true PSD and, according to the Central Limit Theorem (CLT) [59, Chapter 3,
pp. 62], follows a normal distribution with the true PSD as its mean and
$\sigma_{o}^{2}$ as its variance.
Utilizing these statistical phenomena, and based on the $Z$ statistic
developed by Fassois and coworkers [49, 52] for the Frequency Response
Function (FRF) of vibration-based SHM signals, a novel $Z$ statistic is
proposed herein utilizing the Welch PSD estimate for many baseline active-
sensing acousto-ultrasound SHM signals. The following SHT problem is posed for
damage detection in this case:
$\begin{array}[]{llll}H_{o}&:&S_{o}(\omega)-S_{u}(\omega)=0&\text{(null
hypothesis -- healthy structure)}\\\ H_{1}&:&S_{o}(\omega)-S_{u}(\omega)\neq
0&\text{(alternative hypothesis -- damaged structure)}\end{array}$ (15)
where both terms in the hypothesis above are the true values of the respective
PSDs. As mentioned above, under the assumption of a large $M$ in equation (10)
(many baseline signals used for expectation estimation), the first term in the
hypothesis test above ($S_{o}(\omega)$) can be replaced by the expectation,
which would be normally distributed as aforementioned. Additionally, assuming
the availability of many data points (that is, a large $N$) used for PSD
estimation, the second term in the formulation of the hypothesis test
($S_{u}(\omega)$) can be replaced by an estimate [57, Chapter 3, pp. 45],
which will also follow a normal distribution due to the asymptotic properties
of $\chi^{2}$-distributed estimates, as also dictated by the central limit
theorem (CLT) [59, Chapter 3, pp. 62]. Under the null hypothesis, both of
these terms would follow the same distribution since both would be coming from
a healthy structural case. Thus, under the null hypothesis:
$\begin{array}[]{llll}\text{Under}\;H_{o}&:&E\\{\widehat{S}_{o}(\omega)\\}-\widehat{S}_{u}(\omega)\;\sim\;\mathcal{N}(0,2\sigma_{o}^{2}(\omega))&\mbox{(null
hypothesis -- healthy structure)}\\\ \end{array}$ (16)
where $\sigma_{o}^{2}(\omega)$ can be estimated from the baseline phase, and
can be assumed to have negligible variability if a large number of signals is
used in estimating the value of the PSDs [49, 52]. Thus, by defining an
appropriate type I error, or false alarm, probability ($\alpha$), the Welch
PSD-based $Z$ statistic can be expressed as follows:
$\begin{array}[]{ccl}Z=\frac{\mid
E\\{\widehat{S}_{o}(\omega)\\}-\widehat{S}_{u}(\omega)\mid}{\sqrt{2\widehat{\sigma}_{o}^{2}(\omega)}}\leq
Z_{1-\frac{\alpha}{2}}\quad(\forall\;\omega)&\Longrightarrow&H_{o}\;\text{is
accepted (healthy structure)}\\\ \text{Else}&\Longrightarrow&H_{1}\;\text{is
accepted (damaged structure)}\\\ \end{array}$ (17)
with $Z_{1-\frac{\alpha}{2}}$ designating the standard Normal distribution’s
$1-\frac{\alpha}{2}$ critical point. Table 1 summarizes all three statistics
used in this study for damage detection.
Table 1: Summary of the different damage detection statistics utilized in this study.
Quantity | $F$ Statistic | $F_{m}$ Statistic | $Z$ Statistic
---|---|---|---
Property | $2K\widehat{S}(\omega)/{S}(\omega)\sim\chi^{2}(2K)$ | $2KME\\{\widehat{S}(\omega)\\}/{S}_{o}(\omega)\sim\chi^{2}(2KM)$ | $E\\{\widehat{S}(\omega)\\}-S(\omega)\;\sim\;\mathcal{N}(0,2\sigma_{o}^{2}(\omega))$
Test Statistic | $F=\frac{\widehat{S}_{o}(\omega)}{\widehat{S}_{u}(\omega)}$ | $F_{m}=\frac{E\\{\widehat{S}_{o}(\omega)\\}}{\widehat{S}_{u}(\omega)}$ | $Z=\frac{\mid E\\{\widehat{S}_{o}(\omega)\\}-\widehat{S}_{u}(\omega)\mid}{\sqrt{2\sigma_{o}^{2}(\omega)}}$
Comment | $K$: number of non-overlapping segments; $M$: number of available baseline data sets; $\widehat{S}(\omega)$: Welch PSD estimate; $\omega\in[0,2\pi/T_{s}]$: frequency in radians per second ($T_{s}$ is the sampling time).
It is worth noting here that, although the statistical assumptions behind the
formulations of the $F_{m}$ and $Z$ statistics differ, both are applied to the
same data sets in this study, and their results are compared with respect to
which statistic achieves the better detection capability.
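The $Z$ test of equation (17) can be sketched as below. As before, this is an illustrative sketch rather than the study's MATLAB implementation; the baseline variance $\sigma_{o}^{2}(\omega)$ is taken as the per-frequency sample variance of the $M$ baseline PSDs, and the name `z_statistic_test` is ours.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import norm

def z_statistic_test(Y_healthy, y_unknown, fs, nperseg=100, alpha=0.05):
    """Multiple-set Z statistic of equation (17): compares the unknown
    Welch PSD to the mean of M baseline PSDs, scaled by the experimental
    standard deviation of the baseline PSDs at each frequency."""
    def psd(y):
        y = np.asarray(y, float) - np.mean(y)
        return welch(y, fs=fs, window="hamming",
                     nperseg=nperseg, noverlap=nperseg // 2)[1]
    S_base = np.array([psd(y) for y in Y_healthy])  # shape: (M, n_freq)
    E_S_o = S_base.mean(axis=0)                     # equation (10)
    var_o = S_base.var(axis=0, ddof=1)              # sigma_o^2(omega)
    S_u = psd(y_unknown)
    Z = np.abs(E_S_o - S_u) / np.sqrt(2.0 * var_o)
    z_crit = norm.ppf(1 - alpha / 2)                # ~1.96 for alpha=0.05
    return Z, z_crit, bool(np.any(Z > z_crit))
```

Since the threshold $Z_{1-\alpha/2}$ comes directly from the standard normal distribution, no user-defined damage threshold is needed, which is the practical advantage argued for in the text.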
### 2.6 Reference State-of-the-Art Damage Indices
In this work, two time-domain damage indices are utilized as references in
order to compare the performance of DIs with that of the NP-TS models
proposed herein. The first DI was adopted from the work of Janapati
et al. [4], which is characterized by high sensitivity to damage size and
orientation, and low sensitivity to other variations such as adhesive
thickness and the material properties of the structure, sensors, and adhesive.
Given a baseline $y_{0}[t]$ and an unknown $y_{u}[t]$ signal indexed with
normalized discrete time $t$ ($t=1,2,3,\ldots,N$ where $N$ is the number of
data samples considered in the calculation of the DIs, which depends on the
studied coupon as will be shown in Section 3), the formulation of that DI is
as follows:
$Y_{u}^{n}[t]=\frac{y_{u}[t]}{\sqrt{\sum_{\tau=1}^{N}{y^{2}_{u}[\tau]}}},\quad
Y_{0}^{n}[t]=\frac{y_{0}[t]\cdot\sum_{\tau=1}^{N}{(y_{0}[\tau]\cdot
Y_{u}^{n}[\tau])}}{\sum_{\tau=1}^{N}{y_{0}^{2}[\tau]}},\quad
DI=\sum_{t=1}^{N}{(Y^{n}_{u}[t]-Y^{n}_{0}[t])}$ (18)
In this notation, $Y^{n}_{u}[t]$ and $Y^{n}_{0}[t]$ are normalized unknown
(inspection) and baseline signals, respectively. The second DI used in this
study is the time-domain DI presented by Qiu et al. [35] and used in training
their Gaussian mixture models due to its sensitivity to changes in wave form
and time of flight. The formulation of that DI is as follows:
$DI=1-\sqrt{\frac{(\sum_{t=1}^{N}{y_{0}[t]\cdot
y_{u}[t])^{2}}}{\sum_{t=1}^{N}{y_{0}^{2}[t]}\cdot\sum_{t=1}^{N}{y^{2}_{u}[t]}}}$
(19)
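Equation (19) reduces to one minus the absolute value of the normalized cross-correlation between the baseline and inspection signals, which a minimal NumPy sketch makes explicit (the function name `qiu_di` is ours):

```python
import numpy as np

def qiu_di(y0, yu):
    """Time-domain DI of equation (19): one minus the absolute normalized
    cross-correlation of the baseline (y0) and unknown (yu) signals.
    Returns 0 for identical (or merely rescaled) signals and approaches 1
    as the two signals decorrelate."""
    y0 = np.asarray(y0, dtype=float)
    yu = np.asarray(yu, dtype=float)
    num = np.sum(y0 * yu) ** 2
    den = np.sum(y0 ** 2) * np.sum(yu ** 2)
    return 1.0 - np.sqrt(num / den)
```

Note that this DI is invariant to a uniform amplitude scaling of either signal, so it responds to changes in wave form and time of flight rather than raw amplitude, consistent with the sensitivity attributed to it above.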
## 3 Results and Discussion
In this work, the comparison between state-of-the-art DIs and the proposed NP-
TS approaches in damage detection was carried out over three components with
different damage cases: a notched Al plate, a Carbon Fiber-Reinforced Plastic
(CFRP) coupon with weights taped on the surface to simulate a crack, and the
open-source data sets available on the Open Guided-Waves project’s website
[1].
### 3.1 Test Case I: Damage Detection in an Aluminum Plate
#### 3.1.1 Test Setup, Damage Types and Data Acquisition
This first coupon was a 6061 Aluminum $152.4\times 254$ mm ($6\times 10$ in)
coupon ($2.36$ mm/$0.093$ in thick) (McMaster Carr) with a 12-mm (0.5-in)
diameter hole in the middle, as shown in Figure 2. Using Hysol EA 9394
adhesive, the coupon was fitted with six single-PZT (Lead Zirconate Titanate)
SMART Layers type PZT-5A (Acellent Technologies, Inc) as shown in Figure 2.
The PZT sensors are $0.2$ mm ($0.00787$ in) in thickness and $3.175$ mm ($1/8$
in) in diameter. To simulate damage, using an end-mill and a $0.8128$-mm
($0.032$-in) hand saw, a notch was generated extending from the middle hole of
the coupon with length varying between $2$ and $20$ mm, in $2$-mm increments.
Actuation signals in the form of 5-peak tone bursts (5-cycle Hamming-filtered
sine wave) having an amplitude of 90 V peak-to-peak and various center
frequencies were generated in a pitch-catch configuration over each sensor
consecutively. With a sampling rate of 24 MHz, data was collected using a
ScanGenie III data acquisition system (Acellent Technologies, Inc).
Preliminary analysis was conducted, and a center frequency of 250 kHz was
chosen for the complete analysis presented in this study based upon the best
separation between the first two wave packets in various signal paths. All
data sets were exported to MATLAB for analysis (MATLAB version R2018a;
function pwelch.m; window size: 100 for single-wave-packet analyses and 500
for the full-signal/two-wave-packet analyses; NFFT: 2000; overlap: 50%).
Table 2 summarizes the relevant experimental details for this coupon.
Figure 2: The Al coupon used in this study shown here with a 20-mm notch (the largest damage size of this test case). The arrows indicate the paths used in the analysis presented herein.
Table 2: Summary of experimental details for the Al coupon.
Structural State | Number of Data Sets
---|---
Healthy | 20†
2-mm notch | 20
4-mm notch | 20
6-mm notch | 20
8-mm notch | 20
10-mm notch | 20
12-mm notch | 20
14-mm notch | 20
16-mm notch | 20
18-mm notch | 20
20-mm notch | 20
Sampling Frequency: $f_{s}=24$ MHz. Center frequency range: [$50:50:750$] kHz
Number of samples per data set $N=8000$.
†M=20 in equation (10).
#### 3.1.2 Damage Detection Results
In order to assess the performance of the proposed approach, a simple
isotropic Al coupon was initially used. Figure 3 panels a and b, respectively,
show one indicative full response signal and its corresponding first-arrival
wave packet off of sensor 6 when sensor 2 was actuated (refer to Figure 2 for
sensor numbering) under different notch sizes. Because this is a damage-
intersecting path, a gradual decrease in signal amplitude, with a slight
delay, can be observed with increasing notch size. This is expected since the
notch scatters the wave, decreasing the amount of energy going through to
sensor 6 as scattering increases [8]. Figure 3c shows the evolution of the two
chosen state-of-the-art DIs for the first-arrival wave packet. As shown,
although the DIs closely follow damage for notch sizes more than 8 mm, it
might be difficult to detect damages up to 8 mm in size, given the proximity
of the DI values for the healthy case and the damaged cases. Without prior
experience with these types of materials/components, assigning a threshold
between a healthy component and a damaged one might be challenging in that
range of damages. As the length of the analyzed signal increases, the DIs
become more sensitive to small damages as shown in Figure 3d. However, it can
be observed that the DIs do not follow the increase in notch size uniformly
even for a damage-intersecting path like path 2-6. Exploring a damage-non-
intersecting path (Figure 4), the DIs fail to follow damage evolution to a
greater extent, with fluctuations being observed as notch size increases. Such
fluctuation in the DIs can be mistaken for a change in conditions surrounding
the component, which would make the task of damage detection and threshold
identification even more challenging.
Figure 3: Indicative signal from the Al coupon for signal path 2-6
(damage-intersecting) under different notch sizes: (a) full signal; (b)
first-arrival wave packet; (c) single-wave-packet DIs – the dashed lines
designate the upper and lower $95\%$ confidence bounds for the Janapati et
al. (blue) and Qiu et al. (red) DIs; (d) two-wave-packet DIs.
Figure 4: Indicative signal from the Al coupon for signal path 6-3
(damage-non-intersecting) under different notch sizes: (a) full signal; (b) a
single wave packet; (c) single-wave-packet DIs – the dashed lines designate
the upper and lower $95\%$ confidence bounds for the Janapati et al. (blue)
and Qiu et al. (red) DIs; (d) two-wave-packet DIs.
Figure 5 presents indicative results of applying the proposed framework to the
response signal from path 2-6 in the Al coupon (see Table A.1 in the Appendix
for summary results). Figure 5a shows the evolution of the Welch PSD of the
signals as notch size increases, with the red and the black dashed lines
indicating the theoretical (estimation uncertainty) and the experimental 95%
confidence intervals of the healthy PSD, respectively. The first thing to be
observed in this figure is that, using the theoretical estimation confidence
intervals, notch sizes more than 2 mm can be detected with 95% confidence, and
all damage sizes are detected when the experimental 95% confidence levels are
considered. Although the latter result is expected due to the nominally-
controlled lab environment significantly inhibiting change in the Welch PSD
over multiple healthy signals, the former observation shows the enhanced
detection capability of the frequency-domain PSD compared to time-domain DIs
for damages of this type in Al. Furthermore, in contrast to the DIs in Figure
3, the PSDs evolve uniformly with damage, which hints at the enhanced damage
quantification capability of these techniques. Thus, the Welch PSD emerges as
a better metric when it comes to damage detection and quantification for the
case at hand. Applying the SHT frameworks developed in Section 2.1 (Figure 5
panels b-d), one can assess the difference between both approaches in a
statistical way. As shown in Figure 5b, the $F$ statistic is capable of only
detecting the last three damage cases (10-18 mm) with 95% confidence. Although
this performance is somewhat similar to that of the DIs, an advantage in the
proposed approach is the extraction of confidence intervals directly from the
SHM metric being used, without the need for user experience for defining
damage thresholds. The $F_{m}$ statistic (Figure 5c) does a slightly better
job by detecting the 8-mm damage with 95% confidence, which is attributed to
the inclusion of some experimental statistics into the definition of this
metric. Examining the $Z$ statistic (Figure 5d), one can observe that all
damage cases are detected with $95\%$ confidence. Furthermore, the effect of
damage on the $Z$ statistic is again uniform, indicating the superior
performance of this statistic in damage quantification compared to the
conventional time-domain DI approach. Figure 6a shows indicative Welch PSD
estimates for the first two wave packets (using a window size equal to the
width of a single wave packet). As shown, although the peak amplitude of the
PSD at the actuation frequency decreases, the detection performance remains
the same as for a single wave packet. Exploring the three statistics proposed
in this study (Figure 6 panels b-d), it can also be observed that the
detection performance stays the same, with all damages being detected by the
$Z$ statistic with 95% confidence.
Table 3: The different parameters used in estimating the Welch PSD for the Al coupon data sets.
Segment Length | $100$
---|---
Window Type | Hamming
Frequency Resolution | $\Delta f=12$ kHz
Sampling Frequency | $24$ MHz
Single Wave Packet
Data Length | $N=500$ samples ($\sim 20$ $\mu s$)
No of non-overlapping segments | $9$
Full Signal Length
Data Length | $N=8000$ samples ($\sim 330$ $\mu s$)
No of non-overlapping segments | $159$
Figure 5: Indicative results from applying the proposed NP-TS approach to
the first-arrival wave packet from path 2-6 in the Al coupon under different
damage sizes: (a) Welch PSD – the red and the black dashed lines indicate the
theoretical (estimation uncertainty) and the experimental 95% confidence
bounds of the healthy PSD, respectively; (b) $F$ statistic; (c) $F_{m}$
statistic; (d) $Z$ statistic.
Figure 6: Indicative results from applying the proposed NP-TS approach to the
full signal from path 2-6 in the Al coupon under different damage sizes: (a)
Welch PSD – the red and the black dashed lines indicate the theoretical
(estimation uncertainty) and the experimental 95% confidence bounds of the
healthy PSD, respectively; (b) $F$ statistic; (c) $F_{m}$ statistic; (d) $Z$
statistic.
Moving on to the damage-non-intersecting path (path 6-3), Figure 7 shows
indicative results for a single wave packet (see Table A.2 in the Appendix for
summary results). As shown in Figure 7a, using the PSD’s theoretical
estimation confidence intervals (red dashed lines), all damages are deemed
healthy with 95% confidence. This is attributed to the wide nature of the
estimation uncertainty when the PSD of a deterministic signal is being
estimated, as is the case in this study. However, just like the DIs, all
damages are detected with 95% confidence when the experimental uncertainty is
being considered. Being based on the theoretical confidence intervals, both
the $F$ and $F_{m}$ statistics also show all damages as healthy with 95%
confidence (with the exception of the 14 mm case for the $F_{m}$ statistic),
as shown in Figure 7 panels b and c. On the other hand, the $Z$ statistic
(Figure 7d) detects all damage cases with 95% confidence because its
formulation is based on the experimental uncertainty. The same trend can be
observed when the first two wave packets are considered, as shown in Figure 8.
A number of conclusions can be drawn from these observations when it comes to
comparing the proposed statistics to the DIs. Firstly, for a notched Al
coupon, the Welch PSD, the $F$ and $F_{m}$ statistics can be used as a
preliminary step in differentiating between damage-intersecting and non-
intersecting paths, in contrast to the DIs, which do not show a clear
distinction. Secondly, although the sensitivity of the $Z$ statistic appears
to be the same as that of the DIs, because both are based on experimental
confidence intervals, the $Z$ statistic holds an advantage over the DIs in
the extraction of confidence bounds based on the assumption of a normal
distribution of the expectation of the signals' PSDs. Thus, the extracted
damage detection thresholds emerge from the formulation of the SHM metric
itself, and do not require prior experience with such materials and damages,
or physics-based modelling. In contrast, the DIs require complex approaches in
distribution on the signals, from which thresholds can emerge naturally.
Figure 7: Indicative results from applying the proposed NP-TS approach to
the first-arrival wave packet from path 6-3 in the Al coupon under different
damage sizes: (a) Welch PSD – the red and the black dashed lines indicate the
theoretical (estimation uncertainty) and the experimental 95% confidence
bounds of the healthy PSD, respectively; (b) $F$ statistic; (c) $F_{m}$
statistic; (d) $Z$ statistic.
Figure 8: Indicative results from applying the proposed NP-TS approach to the
full signal from path 6-3 in the Al coupon under different damage sizes: (a)
Welch PSD – the red and the black dashed lines indicate the theoretical
(estimation uncertainty) and the experimental 95% confidence bounds of the
healthy PSD, respectively; (b) $F$ statistic; (c) $F_{m}$ statistic; (d) $Z$
statistic.
### 3.2 Test Case II: Damage Detection in CFRP Plate
#### 3.2.1 Test Setup, Damage Types and Data Acquisition
The second coupon used in this study was a CFRP coupon (ACP Composites)
having the same dimensions as the Al coupon, with multiple $0/90$
unidirectional CF plies. This coupon was also fitted with 6 single-PZT SMART
Layers type PZT-5A (Acellent Technologies, Inc) as shown in Figure 9. Damage
was simulated by attaching one to six 3-g steel weights to the surface of the
coupon, next to each other, using tacky tape. The same actuation and data
acquisition
properties were used for this coupon as with the Al one. Also, similar to the
case of the Al coupon, the actuation center frequency of 250 kHz was chosen
for the analysis presented herein. Tables 4 and 5 summarize the experimental
details for this coupon.
Figure 9: The CFRP coupon used in this study shown here with 6 weights as simulated damage (the largest damage size of this test case). The arrows indicate the paths used in the analysis presented herein.
Table 4: Details of the first experimental data set for the CFRP coupon.
Structural State | Number of Data Sets | Total Added Weight† (g)
---|---|---
Healthy | 20†† | 0
1 Steel weight | 1 | $3$
2 Steel weights | 1 | $6$
3 Steel weights | 1 | $9$
4 Steel weights | 1 | $12$
5 Steel weights | 1 | $15$
6 Steel weights | 1 | $18$
Sampling Frequency: $f_{s}=24$ MHz. Center frequency range: [$50:50:750$] kHz.
Number of samples per data set $N=8000$.
†Weight of tacky tape not considered here.
††M=20 in equation (10).
Table 5: Details of the second experimental data set for the CFRP coupon. Structural State | Number of Data Sets | Total Added Weight† (g)
---|---|---
Healthy | 20†† | 0
1 Steel weight | 20 | $3$
2 Steel weights | 20 | $6$
3 Steel weights | 20 | $9$
4 Steel weights | 20 | $12$
5 Steel weights | 20 | $15$
6 Steel weights | 20 | $18$
Sampling Frequency: $f_{s}=24$ MHz. Center frequency range: [$50:50:750$] kHz.
Number of samples per data set $N=8000$.
†Weight of tacky tape not considered here.
††M=20 in equation (10).
#### 3.2.2 Damage Detection Results
Figure 10 panels a and b present, respectively, the signals and the first
discernible wave packet, obtained at sensor 4 when sensor 3 was actuated under
different damage sizes, where damage size here indicates the number of taped
weights. Figure 10 panels c and d show the DIs for a single, and double wave
packet lengths, respectively. As shown, because a CFRP coupon exhibits more
non-linearity compared to an Al coupon, the single-wave-packet DIs completely
fail to follow damage evolution and, for both DI formulations, only the last
damage case (6 weights) falls outside the 95% experimental healthy confidence
intervals, as shown in Figure 10c. This performance is slightly
enhanced when analyzing two wave packet lengths (Figure 10d). Figure 11 shows
the same 4 plots for a damage-non-intersecting path (path 1-4). As shown in panel
c, the performance of the DIs substantially deteriorates, with a decrease in
the value of both DIs with increasing simulated damage size up to 4 weights. A
similar trend is observed in Figure 11d when considering two wave packet
lengths for the analysis. Furthermore, although some damage cases fall outside
the 95% confidence bounds (4 weights for single-wave packet DIs, and 5 and 6
weights for two-wave packet DIs), the general trend is a reduction in the
values of the DIs, which can again be easily mistaken with changing
environmental or operational conditions over an otherwise healthy component.
Thus, in terms of damage detection, the DIs offer poor performance for the
CFRP coupon with the simulated damage used in this study.
Figure 10: Indicative signal from path 3-4 in the CFRP coupon under
different simulated damage sizes (number of attached weights): (a) full
signal; (b) single wave packet; (c) single-wave packet DIs – the dashed lines
designate the upper and lower $95\%$ confidence bounds for the Janapati et al.
(blue) and Qiu et al. (red) DIs; (d) DIs for double the wave packet length.
Figure 11: Indicative signal from path 1-4 in the CFRP coupon under
different simulated damage sizes (number of attached weights): (a) full
signal; (b) single wave packet; (c) single wave packet DIs – the dashed lines
designate the healthy upper and lower $95\%$ confidence bounds for the
Janapati et al. (blue) and Qiu et al. (red) DIs; (d) DIs for double the wave
packet length.
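The healthy confidence bounds used throughout these comparisons come from the M = 20 baseline DI realizations. As a rough illustration only (the paper's exact bound construction follows its equation (10), which is not reproduced in this section), a two-sided Gaussian interval over the baseline DI values can be sketched as:

```python
import numpy as np
from scipy.stats import norm

def healthy_bounds(baseline_di, alpha=0.05):
    """Two-sided (1 - alpha) confidence bounds on the healthy DI values,
    assuming approximately Gaussian scatter (an illustrative assumption)."""
    mu = baseline_di.mean()
    sigma = baseline_di.std(ddof=1)
    z = norm.ppf(1.0 - alpha / 2.0)  # ~1.96 for alpha = 0.05
    return mu - z * sigma, mu + z * sigma

# Example: M = 20 healthy DI realizations (synthetic placeholder values)
rng = np.random.default_rng(1)
di_healthy = 0.1 + 0.02 * rng.standard_normal(20)
lo, hi = healthy_bounds(di_healthy)
```

A DI value falling outside `[lo, hi]` would then be flagged as a potential damage indication at the chosen alpha level.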
Figure 12a shows the estimated Welch PSD for a single wave packet from path
3-4 under different damage cases. Note that, as mentioned, due to the nature
of the actuation signal, the theoretical confidence intervals are too wide to
detect any of the simulated damages, and thus they are not shown here. Figure
12b shows the $Z$ statistic for that path, from which it can be concluded that
the cases of 3-6 weights are all damage cases with 95% confidence. Thus, the
$Z$ statistic surpasses the DIs in detection performance for this damage-
intersecting path. Examining a longer signal length for the analysis, it can
be seen that both the Welch PSD estimate and the $Z$ statistic (Figure 12
panels c and d, respectively) show more sensitivity to damage, with the former
detecting damage sizes as small as 3 weights, and the latter detecting ones as
small as 2 weights. Moving on to the damage-non-intersecting path (1-4), it can
be seen that the Welch PSD estimates (Figure 13 panels a and c) fail to detect
almost any of the damages with 95% confidence levels. The same can be said for
the $Z$ statistics, as shown in Figure 13 panels b and d, with the exception
of detecting the 2- and 4-weight cases for the single-wave packet $Z$
statistics. This reduction in sensitivity for damage-non-intersecting paths
can be attributed to the effect of damage on the signal, as well as the
relatively wide variability in the baseline signal amplitude, which in turn
leads to widening the 95% healthy confidence bounds.
Figure 12: Indicative results from applying the proposed NP-TS approach
to the signals from path 3-4 in the CFRP coupon under different simulated
damage sizes (number of attached weights): (a) Welch PSD for single wave
packet – the black dashed lines indicate the experimental 95% confidence
bounds of the healthy PSD; (b) $Z$ statistic for single wave packet; (c) Welch
PSD for double the wave packet lengths; (d) $Z$ statistic for double the wave
packet lengths.

Figure 13: Indicative results from applying the proposed
NP-TS approach to the signals from path 1-4 in the CFRP coupon under different
simulated damage sizes (number of attached weights): (a) Welch PSD for single
wave packet – the black dashed lines indicate the experimental 95% confidence
bounds of the healthy PSD; (b) $Z$ statistic for single wave packet; (c) Welch
PSD for full signal; (d) $Z$ statistic for full signal.
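For illustration, a per-frequency-bin standardization of the current Welch PSD against the healthy ensemble can be sketched as below. This is an assumed common form, and the paper's actual $Z$ statistic (defined earlier in the text) may differ:

```python
import numpy as np
from scipy.signal import welch

def z_statistic(current, baselines, fs=24e6, nperseg=100):
    """Standardize the current Welch PSD against the healthy ensemble,
    bin by bin. Illustrative form only -- not necessarily the paper's
    exact Z statistic."""
    psd_h = np.array([welch(b, fs=fs, window="hamming",
                            nperseg=nperseg, noverlap=0)[1] for b in baselines])
    _, psd_c = welch(current, fs=fs, window="hamming",
                     nperseg=nperseg, noverlap=0)
    mu = psd_h.mean(axis=0)              # healthy mean PSD per bin
    sigma = psd_h.std(axis=0, ddof=1)    # healthy PSD spread per bin
    return (psd_c - mu) / sigma

# Synthetic placeholders: M = 20 healthy records and one current record
rng = np.random.default_rng(2)
baselines = [rng.standard_normal(500) for _ in range(20)]
z = z_statistic(rng.standard_normal(500), baselines)
```

Large-magnitude values of `z` in any bin would then indicate a statistically unusual PSD relative to the healthy baseline set.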
For this reason, another experiment was carried out in which the baseline data
acquisition process was more restrictive (the lab was empty). Table 5 presents the
details of this second experiment. Figure 14 shows the DI plots for the
damage-intersecting (panels a and b) and the damage-non-intersecting (panels c
and d) paths of this new data set. For reference, these plots respectively
correspond to the ones in Figure 10 panels c and d, and Figure 11 panels c and
d. As shown, although both this and the original data sets were acquired
off of the same coupon with the same temperature setting, even in a lab
environment, baseline variability between different data sets can be
significant. As shown in all panels of Figure 14, the spread in the values of
the healthy DIs is smaller in this new data set, which allows for good
detection performance for the DIs. Exploring the $Z$ statistics (Figure 15),
one can observe the enhanced detection performance here too, given the
narrower experimental confidence bounds in the new data set. Although both the
DIs and the $Z$ statistics almost consistently follow damage size evolution
for the damage-intersecting path (path 3-4), the $Z$ statistic shows better
detection capability for the damage-non-intersecting path, detecting all
damages with 95% confidence.
Figure 14: Damage Index results for the second acquired CFRP coupon data set shown in Table 5: (a) single-wave packet DIs for path 3-4; (b) double-wave packet DIs for path 3-4; (c) single-wave packet DIs for path 1-4; (d) double-wave packet DIs for path 1-4. In all plots, the dashed lines are the healthy $95\%$ confidence bounds for the Janapati et al. (blue) and Qiu et al. (red) DIs.

Figure 15: Indicative $Z$ statistic results for the second acquired CFRP coupon data set shown in Table 5: (a) single-wave packet $Z$ statistics for path 3-4; (b) double-wave packet $Z$ statistics for path 3-4; (c) single-wave packet $Z$ statistics for path 1-4; (d) double-wave packet $Z$ statistics for path 1-4. In all plots, the dashed red lines are the healthy $95\%$ confidence bounds.

Table 6: Parameters used in estimating the Welch PSD for the CFRP coupon data sets.

Segment Length | $100$
---|---
Window Type | Hamming
Frequency Resolution | $\Delta f=12$ kHz
Sampling Frequency | $24$ MHz
Single Wave Packet
Data Length | $N=500$ samples ($\sim 20$ $\mu s$)
No of non-overlapping segments | $9$
Full Signal Length
Data Length | $N=8000$ samples ($\sim 330$ $\mu s$)
No of non-overlapping segments | $159$
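With the single-wave-packet parameters of Table 6, a Welch PSD estimate can be sketched as follows; the white-noise signal here is only a placeholder for a recorded wave packet:

```python
import numpy as np
from scipy.signal import welch

fs = 24e6        # sampling frequency (Table 6)
nperseg = 100    # segment length (Table 6), Hamming window
n = 500          # single-wave-packet length (~20 us)

rng = np.random.default_rng(0)
x = rng.standard_normal(n)   # placeholder for a recorded wave packet

# Non-overlapping Hamming-windowed segments, one-sided PSD estimate
f, pxx = welch(x, fs=fs, window="hamming", nperseg=nperseg, noverlap=0)
```

The resulting frequency axis has `nperseg // 2 + 1` bins spaced `fs / nperseg` apart, which is what sets the frequency resolution of the estimate.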
In order to assess the performance of all 4 metrics (the DI, $F$, $F_{m}$, and
$Z$ statistics) at different alpha (false alarm) levels, i.e., at different
confidence intervals, the corresponding Receiver Operating Characteristics
(ROC) curves were explored for different signal lengths (also, see Tables B.4
and B.3 in the Appendix for summary results). Figure 16a shows the ROC for the
4 metrics at alpha levels ranging from 1E-6 to 1, as applied to a single wave
packet off of the damage-intersecting path 3-4. As shown, because this is a
damage-intersecting path, 3 out of the 4 metrics exhibit perfect detection
performance with an area under the ROC curve equal to 1. Also, although the
performance of the $F_{m}$ statistic doesn’t seem to be better than the worst
statistical estimator (the dashed line), the $F$ statistic shows optimal
performance as the alpha levels change, in contrast to its weak detection
performance at an alpha level of 0.05, as mentioned in the discussion of
Figure 12. Moving on to two wave packets of the same path (Figure 16b), one can
observe that the $Z$ statistic outperforms all other metrics in damage
detection. In addition, the $F$ statistic outperforms the DI metric, which
hints at the advantages of using frequency-domain approaches and statistical
hypothesis tests instead of time-domain approaches. For the damage-non-
intersecting path 1-4, it can be observed that the $Z$ statistic outperforms
the DI for both single- (Figure 16c) and two- (Figure 16d) wave packet
lengths. Thus, it can be concluded that the $Z$ statistic emerges as the best
damage detection statistic in this study for the CFRP coupon investigated
herein.
Figure 16: Receiver Operating Characteristics (ROC) plots comparing the
different damage detection methods for the new data set of the CFRP coupon
under the effect of the first simulated damage (1 weight): (a) path 3-4 wave
packet; (b) path 3-4 full signal; (c) path 1-4 wave packet; (d) path 1-4 full
signal. In all subplots, 15 out of 20 healthy signals were used for
calculating the mean in estimating the $F_{m}$ and the $Z$ statistics.
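The ROC curves above are obtained by sweeping the detection threshold (equivalently, the alpha level) and recording false-alarm and detection rates. A minimal empirical sketch, assuming higher statistic values indicate damage:

```python
import numpy as np

def roc_curve(healthy, damaged):
    """Empirical ROC: sweep the threshold over all observed statistic values,
    record (false-alarm rate, detection rate), and integrate the area under
    the curve with the trapezoid rule."""
    thr = np.concatenate([np.unique(np.concatenate([healthy, damaged])),
                          [np.inf]])
    fpr = np.array([(healthy >= t).mean() for t in thr])
    tpr = np.array([(damaged >= t).mean() for t in thr])
    order = np.lexsort((tpr, fpr))        # sort by fpr, break ties by tpr
    fpr, tpr = fpr[order], tpr[order]
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0)
    return fpr, tpr, auc

# Perfectly separated statistics (synthetic example) give an AUC of 1
healthy = np.array([0.10, 0.12, 0.11, 0.09])
damaged = np.array([0.30, 0.35, 0.40])
fpr, tpr, auc = roc_curve(healthy, damaged)
```

An AUC of 1 corresponds to the "perfect detection performance" described for the damage-intersecting path, while 0.5 corresponds to the chance-level dashed line.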
### 3.3 Test Case III: Damage Detection in the Open Guided-Waves CFRP Panel
#### 3.3.1 Test Setup, Damage Types and Data Acquisition
The third test case used in this study was the CFRP panel utilized in the Open
Guided-Waves project [1], which had a quasi-isotropic construction with layup
$[45/0/-45/90/-45/0/-45/90]_{S}$. The panel had the dimensions of $500\times
500$ mm ($19.69\times 19.69$ in), and a thickness of 2 mm ($0.079$ in). During
the fabrication process of the panel, 12 PZT sensors, 5 mm ($0.2$ in) in
diameter and $0.2$ mm ($0.0079$ in) in thickness, were co-bonded on the panel.
To simulate damage, a 10-mm diameter, $2.35$-mm-thick ($0.0925$ in) Al disk
($0.5$ g) was consecutively attached using tacky tape on 28 different
locations on the panel grouped into seven groups. Figure 17a shows a schematic
of the CFRP panel, where the sensor and damage locations are shown. Also, the
inset in Figure 17a shows the simulated damage on one of the locations.
Each sensor was consecutively actuated using a 5-peak tone burst signal
(5-cycle Hanning-filtered sine wave) having an amplitude of $\pm 100$ V.
Response signals were sampled over the remaining sensors at a sampling rate of
$10$ MHz. Three sets of 20 baseline (healthy) signal realizations per sensor
were recorded. After acquiring the first baseline set (the first 20 healthy
signals), a single signal per sensor was recorded for each damage location,
with the weight on locations D$1-11$. After that, the second baseline set was
acquired (healthy signals $21-40$), followed by recording a single damage
signal per sensor per weight location for locations D$12-20$. Finally, signals
were recorded with the weight at locations D$21-28$ after the third baseline
set was acquired (healthy signals $41-60$). This resulted in $60$ baseline
realizations and $28$ damage signals per sensor (one signal per damage
location per sensor). Other data sets were also recorded that are not
used in this study. Table 7 summarizes the experimental details of this panel,
and the readers are directed to the original study [1] for more information on
test setup. For ease of comparison of the proposed damage detection methods,
only the response signals for the actuation with $260$ kHz center frequency
were chosen for analysis in this study.
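The actuation described above (a 5-cycle Hanning-filtered sine burst, $\pm 100$ V amplitude, with the 260 kHz center frequency chosen for analysis) can be generated, for illustration, as:

```python
import numpy as np

def tone_burst(fc=260e3, cycles=5, fs=10e6, amplitude=100.0):
    """5-cycle Hann-windowed sine burst matching the actuation described
    above (center frequency, sampling rate, and amplitude from the text)."""
    n = int(round(cycles * fs / fc))   # samples spanning the 5 cycles
    t = np.arange(n) / fs
    return amplitude * np.sin(2 * np.pi * fc * t) * np.hanning(n)

s = tone_burst()
```

The Hann window tapers the burst to zero at both ends, which concentrates the excitation energy around the chosen center frequency.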
In the present study, signals from simulated damages in the same damage group
(see Figure 17a and Table 7) were treated as different realizations of single
damage in the vicinity of that group on the CFRP panel. In addition, for all
the detection metrics in this study, each damage group was analyzed against
its corresponding baseline data set only (the healthy data set immediately
preceding that damage group in the data acquisition process) for accuracy.
Actuator-sensor path 3-12 was used to demonstrate the performance of the
different detection techniques proposed herein. As such, damage groups $2$ and
$3$ (see Table 7) were considered as different realizations of signals for two
damages intersected by the path (with the first baseline set used for
comparison). On the other hand, damage groups $7$ and $8$ were treated as
different realizations of signals for two damages not intersected by the
signal path (with the third baseline set used for comparison).
Table 7: Summary of experimental details for the CFRP panel [1].

Structural State† | Number of Data Sets | Data Set Label
---|---|---
Healthy (weight unattached) | 20†† | First Baseline Set
Weight on D1-4 | 4 | Damage Group 1
Weight on D5-8 | 4 | Damage Group 2
Weight on D9-11 | 3 | Damage Group 3
Healthy (weight unattached) | 20†† | Second Baseline Set
Weight on D12 | 1 | Damage Group 4
Weight on D13-16 | 4 | Damage Group 5
Weight on D17-20 | 4 | Damage Group 6
Healthy (weight unattached) | 20†† | Third Baseline Set
Weight on D21-24 | 4 | Damage Group 7
Weight on D25-28 | 4 | Damage Group 8
Sampling Frequency: $f_{s}=10$ MHz. Center frequency range: [$40:20:260$] kHz.
Number of samples per data set $N=13106$.
†Weight was attached to one location (e.g. D4) at a time within each damage
group.
††M=20 in equation (10).
#### 3.3.2 Damage Detection Results
Figure 17 panels b and c show the complete signal and the first-arrival wave
packet, respectively, for signal path 3-12 on the CFRP panel when the Al
weight was on locations 5-11 (damage-intersecting case). It is worth noting
that, examining other paths (not shown here), the packet shown in panel c is
actually a combination of two wave packets merged together, as can also be
observed from the number of cycles in the shown packet. Even though this
limits the analysis to only this single wave structure, this path was still
chosen because it directly intersects (or does not intersect) almost complete
damage location groups, which allows for the analysis of detection performance
for both types of paths.
Figure 17d shows the values of the DI formulated by Janapati et al. [4] for
the first 20 baseline signals and the signals corresponding to damage
locations 5-11. As shown, within the $95$% healthy confidence bounds (dashed
blue lines), only damages at locations 5-7 are detected, while the rest of the
damage cases are considered healthy with $95$% confidence. Noting that the
healthy signals in each baseline set were taken under controlled temperatures,
it can be again concluded that the DI lacks robustness to uncertainties even
in controlled environments, where the values of the DI still fluctuate even
for the healthy case, producing wide healthy bounds that affect detection
performance. Figure 18 shows the same set of figures for the third baseline
set (healthy signals 41-60), and the signals corresponding to damage locations
21-28 (damage-non-intersecting case). As shown, the DI fails to detect any of
the damage cases with $95$% confidence.
Figure 17: (a) A schematic of the CFRP panel used in the Open Guided
Waves open source data project [1] with all the simulated damage locations.
The inset shows a snapshot of part of the actual panel with damage location
markings and the Al weight used to simulate damage on one of the locations.
The arrow indicates the path used in the analysis presented herein; (b)
indicative signals from path 3-12 for the healthy case, as well as when the Al
weight (simulated damage) was on locations D5-10 (damage-intersecting case);
(c) the first arrival wave packets; (d) the Janapati et al. DI for the first
arrival wave packets – the dashed blue lines are the upper and lower $95\%$
confidence bounds for the Janapati et al. DI as applied to the DI values of
corresponding 20-baseline signal data set.

Figure 18: (a) Indicative
signals from path 3-12 for the healthy case, as well as when the Al weight
(simulated damage) was on locations D21-28 (damage-non-intersecting case); (b)
the first arrival wave packets; (c) the Janapati et al. DI for the first
arrival wave packets - the dashed blue lines are the upper and lower $95\%$
confidence bounds for the Janapati et al. DI as applied to the DI values of
corresponding 20-baseline signal data set.
Examining the Welch PSD estimates for the damage-intersecting case, Figure 19a
shows that all damage cases are detected with $95$% confidence. This
immediately shows the advantage of this frequency-domain metric when it comes
to damage detection. Figure 19b shows the $Z$ statistic for that case, where
again all damage cases are detected with $95$% confidence. Also, it can be
observed that there are no false alarms in this case, whereas there was at
least one false alarm event with the DI (see Figure 17d). Figure 19c presents
the Welch PSD estimate for the damage-non-intersecting set of signals. As
shown, at least 5 out of the 8 damage cases were detected with $95$% confidence.
Examining the $Z$ statistic, it can be observed that only one damage case is
detected with $95$% confidence, while the rest are deemed healthy. It is worth
noting that neither of the other two statistics (the $F$ and $F_{m}$
statistics) detected any of the damage cases with the set confidence bounds
for the damage-intersecting and non-intersecting cases. Again, this motivated
the exploration of the effect of different confidence intervals
(manifested in different alpha false alarm levels in the statistical
hypothesis tests) in order to conclusively assess the performance of the
different detectors proposed herein.
Table 8: The Welch PSD estimation parameters for the Open Guided-Waves CFRP panel data sets.

Segment Length | $40$
---|---
Window Type | Hamming
Frequency Resolution | $\Delta f=5$ kHz
Sampling Frequency | $10$ MHz
Single Wave Packet
Data Length | $N=360$ samples ($\sim 40$ $\mu s$)
No of non-overlapping segments | $9$
Figure 19: The results of applying the proposed NP-TS approach to the
signals from path 3-12 in the Open Guided-Waves CFRP panel under different
simulated damage locations: (a) Welch PSD for D5-10 (damage-intersecting case)
- the black dashed lines indicate the theoretical (estimation uncertainty) and
the experimental 95% confidence bounds of the healthy PSD, respectively; (b)
$Z$ Statistic for D5-10 (damage-intersecting case); (c) Welch PSD for D21-28
(damage-non-intersecting case); (d) $Z$ Statistic for D21-28 (damage-non-
intersecting case)
Figure 20 panels a and b show the ROC plots for the damage-intersecting case
and the damage-non-intersecting case, respectively. In constructing each plot,
detection statistics from all corresponding damage locations were used, and
only corresponding baseline groups were considered in each case. As shown in
the damage-intersecting case (panel a), the $Z$, $F$, and $F_{m}$ statistics
all outperform the DI in overall detection performance, with larger areas
under the curves. For the damage-non-intersecting case (panel b), although the
$F$, $F_{m}$ statistics and the DI show similar performance, the $Z$ statistic
surpasses all of them, even for low alpha levels (wider confidence bounds).
Tables C.5 and C.6 in the Appendix also present summary detection results. All
of these results show the superiority of the $Z$ statistic when it comes to
damage detection.
Figure 20: ROC plots comparing the different damage detection methods for
the Open Guided Waves project’s CFRP panel (path 3-12): (a) the Al weight
(simulated damage) on D5-11 (damage-intersecting case); (b) the Al weight
(simulated damage) on D21-28 (damage-non-intersecting case). In both subplots,
15 out of 20 healthy signals were used for calculating the mean in estimating
the $F_{m}$ and the $Z$ statistics.
## 4 Conclusions
In this study, three frequency-domain damage detection metrics based on
stochastic non-parametric time series representations were developed and
compared with state-of-the-art damage indices as applied to three different
test cases: a notched Al plate, a CFRP coupon with stacked weights, and the
CFRP panel with different weight locations used in the Open Guided-Waves
project [1]. It was shown that, although the DIs can accurately detect damage
and follow damage evolution in the isotropic Al coupon case, they fail to do
either in the CFRP coupon for $95$% healthy confidence bounds. In addition, they
also show poor detection performance for the different damage cases of the
CFRP panel at the same confidence levels. Examining the $F$ and $F_{m}$
statistics, because their detection thresholds are either solely dependent on
the theoretical estimation confidence bounds of the Welch PSD estimator ($F$
statistic), or dependent on the theoretical estimation intervals with the
incorporation of some experimental statistics ($F_{m}$ statistics), their
detection performance at $95$% confidence can, in some cases, be even worse
than the DIs. However, for different confidence levels, both, especially the
$F$ statistic, exhibit a detection performance more or less similar to that of
the DIs, as shown in the different Receiver Operating Characteristics plots in
this study. On the other hand, the $Z$ statistic outperforms all other
detectors used in this study for all three test cases, for both damage-intersecting and non-intersecting paths. In addition, it also better follows
the evolution of damage for the Al and CFRP coupons in the damage-intersecting
case compared to the DIs, which hints at its damage quantification
capabilities.
Overall, it can be concluded from this study that, for the three test cases
studied herein, methods based on frequency-domain non-parametric statistical
time series models show greater sensitivity to damage, even when used to
analyze damage-non-intersecting signals, compared to time-domain DI-based
approaches, especially in materials exhibiting non-linearities and anisotropic
behaviour such as composites. This was clearly demonstrated when constructing
$95$% healthy confidence bounds accounting for the same experimental
uncertainties in both approaches. In addition, the proposed approaches show
increased robustness to uncertainty with less fluctuation in the values of the
metrics for the healthy test cases compared to the time-domain-based DIs.
Thus, non-parametric time series representations emerge as sources of
constructing accurate and robust metrics that promise enhancement in damage
detection performance for SHM systems.
## Acknowledgment
This work is carried out at the Rensselaer Polytechnic Institute under the
Vertical Lift Research Center of Excellence (VLRCOE) Program, grant number
W911W61120012, with Dr. Mahendra Bhagwat and Dr. William Lewis as Technical
Monitors.
## References
* [1] Moll, J., Kathol, J., Fritzen, C.-P., Moix-Bonet, M., Rennoch, M., Koerdt, M., Herrmann, A. S., Sause, M. G., and Bach, M., “Open Guided Waves: online platform for ultrasonic guided wave measurements,” Structural Health Monitoring, Vol. 18, 2019, pp. 1–12.
* [2] Qiu, L., Liu, M., Qing, X., and Yuan, S., “A quantitative multidamage monitoring method for large-scale complex composite,” Structural Health Monitoring, Vol. 12, No. 3, 2013, pp. 183–196.
* [3] Romano, F., Ciminello, M., Sorrentino, A., and Mercurio, U., “Application of structural health monitoring techniques to composite wing panels,” Journal of Composite Materials, Vol. 53, No. 25, 2019, pp. 3515–3533.
* [4] Janapati, V., Kopsaftopoulos, F., Li, F., Lee, S., and Chang, F.-K., “Damage detection sensitivity characterization of acousto-ultrasound-based structural health monitoring techniques,” Structural Health Monitoring, Vol. 15, No. 2, 2016, pp. 143–161.
* [5] Das, S. and Saha, P., “A review of some advanced sensors used for health diagnosis of civil engineering structures,” Measurement, Vol. 129, 2018, pp. 68–90.
* [6] Farrar, C. R. and Worden, K., “An introduction to Structural Health Monitoring,” The Royal Society – Philosophical Transactions: Mathematical, Physical and Engineering Sciences, Vol. 365, 2007, pp. 303–315.
* [7] Amer, A. and Kopsaftopoulos, F. P., “Probabilistic active sensing acousto-ultrasound SHM based on non-parametric stochastic representations,” Proceedings of the Vertical Flight Society 75th Annual Forum & Technology Display, Philadelphia, PA, USA, May 2019.
* [8] Amer, A. and Kopsaftopoulos, F. P., “Probabilistic Damage Quantification via the Integration of Non-parametric Time-series and Gaussian Process Regression Models,” Proceedings of the 12th International Workshop on Structural Health Monitoring (IWSHM 2019), Palo Alto, CA, USA, September 2019.
* [9] Kopsaftopoulos, F., Nardari, R., Li, Y.-H., and Chang, F.-K., “A stochastic global identification framework for aerospace structures operating under varying flight states,” Mechanical Systems and Signal Processing, Vol. 98, 2018, pp. 425–447.
* [10] Spiridonakos, M. and Fassois, S., “Non-stationary random vibration modelling and analysis via functional series time-dependent ARMA (FS-TARMA) models–A critical survey,” Mechanical Systems and Signal Processing, Vol. 47, No. 1-2, 2014, pp. 175–224.
* [11] Zhang, Q. W., “Statistical damage identification for bridges using ambient vibration data,” Computers and Structures, Vol. 85, 2007, pp. 476–485.
* [12] Ahmed, S. and Kopsaftopoulos, F. P., “Uncertainty quantification of guided waves propagation for active sensing structural health monitoring,” Proceedings of the Vertical Flight Society 75th Annual Forum & Technology Display, Philadelphia, PA, USA, May 2019.
* [13] Ahmed, S. and Kopsaftopoulos, F. P., “Investigation of broadband high-frequency stochastic actuation for active-sensing SHM under varying temperature,” Proceedings of the 12th International Workshop on Structural Health Monitoring (IWSHM 2019), Palo Alto, CA, USA, September 2019.
* [14] Zhao, J., Gao, H. D., Chang, G. F., Ayhan, B., Yan, F., Kwan, C., and Rose, J. L., “Active health monitoring of an aircraft wing with embedded piezoelectric sensor/actuator network: I. Defect detection, localization and growth monitoring,” Smart Materials and Structures, Vol. 16, No. 4, 2007, pp. 1208–1217.
* [15] Flynn, E. B., Todd, M. D., Wilcox, P. D., Drinkwater, B. W., Croxford, A. J., and Kessler, S., “Maximum-likelihood estimation of damage location in guided-wave structural health monitoring,” Proceedings of The Royal Society A, Burlington, VT, Vol. 467, No. 2133, 2011, pp. 2575–2596.
* [16] Todd, M. D., Flynn, E. B., Wilcox, P. D., Drinkwater, B. W., Croxford, A. J., and Kessler, S., “Ultrasonic wave-based defect localization using probabilistic modeling,” American Institute of Physics Conference Proceedings, Burlington, VT, May 2012.
* [17] Haynes, C., Todd, M., Flynn, E., and Croxford, A., “Statistically-based damage detection in geometrically-complex structures using ultrasonic interrogation,” Structural Health Monitoring, Vol. 12, No. 2, 2012, pp. 141–152.
* [18] Ng, C.-T., “On the selection of advanced signal processing techniques for guided wave damage identification using a statistical approach,” Engineering Structures, Vol. 67, 2014, pp. 50–60.
* [19] Mujica, L. E., Ruiz, M., Pozo, F., Rodellar, J., and Güemes, A., “A structural damage detection indicator based on principal component analysis and statistical hypothesis testing,” Smart Materials and Structures, Vol. 23, No. 2, dec 2013, pp. 025014.
* [20] Peng, T., Saxena, A., Goebel, K., Xiang, Y., Sankarararman, S., and Liu, Y., “A novel Bayesian imaging method for probabilistic delamination detection of composite materials,” Smart Materials and Structures, Vol. 22, 2013, pp. 125019–125028.
* [21] Yang, J., He, J., Guan, X., Wang, D., Chen, H., Zhang, W., and Liu, Y., “A probabilistic crack size quantification method using in-situ Lamb wave test and Bayesian updating,” Mechanical Systems and Signal Processing, Vol. 78, 2016, pp. 118–133.
* [22] He, J., Ran, Y., Liu, B., Yang, J., and Guan, X., “A Lamb wave based fatigue crack length estimation method using finite element simulations,” The 9th International Symposium on NDT in Aerospace, Xiamen, China, November 2017.
* [23] MIL-HDBK-1823A, “MIL-HDBK-1823A,” Nondestructive Evaluation System Reliability Assessment, Department of Defense, April 2009.
* [24] Gallina, A., Packo, P., and Ambrozinski, L., “Model Assisted Probability of Detection in Structural Health Monitoring,” Advanced Structural Damage Detection: From Theory to Engineering Applications, edited by T. Stepinski, T. Uhl, and W. Staszewski, John Wiley and Sons, Ltd., 2013, pp. 382–407.
* [25] Jarmer, G. and Kessler, S. S., “Application of Model Assisted Probability of Detection (MAPOD) to a Guided Wave SHM System,” Structural Health Monitoring 2017: Real-Time Material State Awareness and Data-Driven Safety Assurance– Proceedings of the 12th International Workshop on Structural Health Monitoring (IWSHM 2017), edited by F.-K. Chang and F. Kopsaftopoulos, Stanford University, USA, 2017.
* [26] Moriot, J., Quagebeur, N., Duff, A. L., and Masson, P., “A model-based approach for statistical assessment of detection and localization performance of guided wave–based imaging techniques,” Structural Health Monitoring, Vol. 17, No. 6, 2017, pp. 1460–1472.
* [27] Giurgiutiu, V., “Flutter prediction for flight/wind-tunnel flutter test under atmospheric turbulence excitation,” Journal of Intelligent Materials Systems and Structures, Vol. 16, No. 4, 2005, pp. 291–305.
* [28] Ihn, J. and Chang, F.-K., “Detection and monitoring of hidden fatigue crack growth using a built-in piezoelectric sensor/actuator network, Part I: Diagnostics,” Smart Materials and Structures, Vol. 13, 2004, pp. 609–620.
* [29] Ihn, J. and Chang, F.-K., “Detection and monitoring of hidden fatigue crack growth using a built-in piezoelectric sensor/actuator network, Part II: Validation through riveted joints and repair patches,” Smart Materials and Structures, Vol. 13, 2004, pp. 621–630.
* [30] Ihn, J. and Chang, F.-K., “Pitch-catch active sensing methods in structural health monitoring for aircraft structures,” Structural Health Monitoring, Vol. 7, No. 1, 2008, pp. 5–19.
* [31] Giurgiutiu, V., “Piezoelectric Wafer Active Sensors for Structural Health Monitoring of Composite Structures Using Tuned Guided Waves,” Journal of Engineering Materials and Technology, Vol. 133, No. 4, 2011, pp. 041012.
* [32] Jin, H., Yan, J., Li, W., and Qing, X., “Monitoring of fatigue crack propagation by damage index of ultrasonic guided waves calculated by various acoustic features,” Applied Sciences, Vol. 9, 2019, pp. 4254.
* [33] Xu, B., Zhang, T., Song, G., and Gu, H., “Active interface debonding detection of a concrete-filled steel tube with piezoelectric technologies using wavelet packet analysis,” Mechanical Systems and Signal Processing, Vol. 36, 2013, pp. 7–17.
* [34] Nasrollahi, A., Deng, W., Ma, Z., and Rizzo, P., “Multimodal structural health monitoring based on active and passive sensing,” Structural Health Monitoring, Vol. 17, No. 2, 2018, pp. 395–409.
* [35] Qiu, L., Yuan, S., Bao, Q., Mei, H., and Ren, Y., “Crack propagation monitoring in a full-scale aircraft fatigue test based on guided wave Gaussian mixture model,” Smart Materials and Structures, Vol. 25, 2016, pp. 055048.
* [36] Wang, F., Huo, L., and Song, G., “A piezoelectric active sensing method for quantitative monitoring of bolt loosening using energy dissipation caused by tangential damping based on the fractal contact theory,” Smart Materials and Structures, Vol. 27, 2018, pp. 015023.
* [37] Castro, E., Moreno-Garcia, P., and Gallego, A., “Damage Detection in CFRP Plates Using Spectral Entropy,” Shock and Vibration, 2014, pp. 1–8.
* [38] An, Y.-K., Giurgiutiu, V., and Sohn, H., “Integrated impedance and guided-wave-based damage detection,” Mechanical Systems and Signal Processing, Vol. 28, 2012, pp. 50–62.
* [39] Su, Z., Zhou, C., Hong, M., Cheng, L., Wang, Q., and Qing, X., “Acousto-ultrasonics-based fatigue damage characterization: linear versus nonlinear signal features,” Mechanical Systems and Signal Processing, Vol. 45, 2014, pp. 225–239.
* [40] Su, Z. and Ye, L., “Lamb wave-based quantitative identification of delamination in CF/EP composite structures using artificial neural algorithm,” Composite Structures, Vol. 66, 2004, pp. 627–637.
* [41] Song, G., Gu, H., and Mo, Y.-L., “Smart aggregates: multi-functional sensors for concrete structure —a tutorial and a review,” Smart Materials and Structures, Vol. 17, 2008, pp. 033001.
* [42] Tibaduiza, D. A., Mujica, L. E., Rodellar, J., and Güemes, A., “Structural damage detection using principal component analysis and damage indices,” Journal of Intelligent Material Systems and Structures, Vol. 27, No. 2, 2016, pp. 233–248.
* [43] Lize, E., Rebillat, M., Mechbal, N., and Bolzmacher, C., “Optimal dual-PZT and network design for baseline-free SHM of complex anisotropic composite structures,” Smart Materials and Structures, Vol. 27, 2018, pp. 115018.
* [44] Hua, J., Cao, X., Yi, Y., and Lin, J., “Time-frequency damage index of broadband lamb wave for corrosion inspection,” Journal of Sound and Vibration, Vol. 464, 2020, pp. 114985.
* [45] Ibanez, F., Baltazar, A., and Mijarez, R., “Detection of damage in multiwire cables based on wavelet entropy evolution,” Smart Materials and Structures, Vol. 24, 2015, pp. 085036.
* [46] Shannon, C., “A mathematical theory of communication,” Bell System Technology Journal, Vol. 27, 1948, pp. 379–423.
* [47] Rojas, E., Baltazar, A., and Loh, K. J., “Damage detection using the signal entropy of an ultrasonic sensor network,” Smart Materials and Structures, Vol. 24, 2015, pp. 075008.
* [48] Qiu, J., Li, F., Abbas, S., and Zhu, Y., “A baseline-free damage detection approach based on distance compensation of guided waves,” Journal of Low Frequency, Vibration and Active Control, Vol. 38, 2019, pp. 1132–1148.
* [49] Kopsaftopoulos, F. P. and Fassois, S. D., “Vibration based health monitoring for a lightweight truss structure: experimental assessment of several statistical time series methods,” Mechanical Systems and Signal Processing, Vol. 24, 2010, pp. 1977–1997.
* [50] Kopsaftopoulos, F. P. and Fassois, S. D., “A Functional Model Based Statistical Time Series Method for Vibration Based Damage Detection, Localization, and Magnitude Estimation,” Mechanical Systems and Signal Processing, Vol. 39, 2013, pp. 143–161.
* [51] Kopsaftopoulos, F. P. and Fassois, S. D., “Identification of Stochastic Systems Under Multiple Operating Conditions: The Vector-dependent Functionally Pooled (VFP) Parametrization,” under preparation for publication, 2016.
* [52] Fassois, S. D. and Kopsaftopoulos, F. P., “Statistical Time Series Methods for Vibration Based Structural Health Monitoring,” New Trends in Structural Health Monitoring, edited by W. Ostachowicz and A. Guemes, chap. 4, Springer, 2013, pp. 209–264.
* [53] Kopsaftopoulos, F. P. and Fassois, S. D., “Experimental assessment of time series methods for structural health monitoring (SHM),” Proceedings of the 4th European Workshop on Structural Health Monitoring (EWSHM), Cracow, Poland, 2008.
* [54] Kopsaftopoulos, F. P. and Fassois, S. D., “Vibration Based Health Monitoring for a Thin Aluminum Plate – A Comparative Assessment of Statistical Time Series Methods,” Proceedings of the 5th European Workshop on Structural Health Monitoring (EWSHM), Sorrento, Italy, 2010.
* [55] Box, G. E. P., Jenkins, G. M., and Reinsel, G. C., Time Series Analysis: Forecasting & Control, Prentice Hall: Englewood Cliffs, NJ, 3rd ed., 1994.
* [56] Manolakis, D., Ingle, V. K., and Kogon, S. M., Statistical and Adaptive Signal Processing: Spectral Estimation, Signal Modeling, Adaptive Filtering and Array Processing, Artech House, New York, 2005.
* [57] Kay, S. M., Modern Spectral Estimation: Theory and Application, Prentice Hall: New Jersey, 1988.
* [58] Hayes, M. H., Statistical Digital Signal Processing and Modelling, John Wiley and Sons, New York, 1996.
* [59] Bendat, J. S. and Piersol, A. G., Random Data: Analysis and Measurement Procedures, Wiley-Interscience: New York, 3rd ed., 2000.
## Appendix A Damage Detection Summary Results: Notched Al Coupon
Table A.1: Damage detection summary results at an $\alpha$ value of $95\%$ for path 2-6 (single wave packet) in the Al plate (damage presented in units of mm).

Method | False Alarms ($\%$) | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | 18 | 20
---|---|---|---|---|---|---|---|---|---|---|---
DI† [4] | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
$F$ Statistic† | 0 | 100 | 100 | 100 | 100 | 0 | 0 | 0 | 0 | 0 | 0
$F_{m}$ Statistic†† | 0 | 100 | 100 | 100 | 0 | 0 | 0 | 0 | 0 | 0 | 0
$Z$ Statistic†† | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0

The numeric columns (2 to 20) list the missed damage ($\%$) for each notch size (mm). False alarms are presented as a percentage of 400 test cases for the DI and $F$ statistic, and as a percentage of 20 data sets for the $F_{m}$ and $Z$ statistics. Missed damage is presented as a percentage of 20 test cases.
† All 20 baseline data sets were used as reference signals consecutively.
†† 15 out of 20 baseline data sets were used to calculate the baseline mean.
Table A.2: Damage detection summary results at an $\alpha$ value of $95\%$ for path 6-3 (single wave packet) in the Al plate (damage presented in units of mm).

Method | False Alarms ($\%$) | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | 18 | 20
---|---|---|---|---|---|---|---|---|---|---|---
DI† [4] | 5.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
$F$ Statistic† | 0 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
$F_{m}$ Statistic†† | 0 | 100 | 100 | 100 | 100 | 100 | 100 | 0 | 0 | 100 | 100
$Z$ Statistic†† | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0

The numeric columns (2 to 20) list the missed damage ($\%$) for each notch size (mm). False alarms are presented as a percentage of 400 test cases for the DI and $F$ statistic, and as a percentage of 20 data sets for the $F_{m}$ and $Z$ statistics. Missed damage is presented as a percentage of 20 test cases.
† All 20 baseline data sets were used as reference signals consecutively.
†† 15 out of 20 baseline data sets were used to calculate the baseline mean.
Figure A.1: Receiver Operating Characteristics (ROC) plots comparing the different damage detection methods for the notched Al coupon with a notch size of 2 mm: (a) path 2-6 wave packet; (b) path 2-6 full signal; (c) path 6-3 wave packet; (d) path 6-3 full signal.
## Appendix B Damage Detection Summary Results: CFRP Coupon
Table B.3: Damage detection summary results at multiple $\alpha$ values for path 1-4 (single wave packet) in the CFRP plate.

Method | False Alarms ($\%$) | 1 Weight | 2 Weights | 3 Weights | 4 Weights | 5 Weights | 6 Weights
---|---|---|---|---|---|---|---
DI$^{a}$† [4] | 5.25 | 99.75 | 84.5 | 94.5 | 99.25 | 93.75 | 53.5
$F$ Statistic$^{b}$† | 25 | 65 | 5 | 5 | 0 | 0 | 0
$F_{m}$ Statistic$^{c}$†† | 25 | 40 | 10 | 60 | 0 | 0 | 0
$Z$ Statistic$^{a}$†† | 0 | 25 | 10 | 30 | 10 | 0 | 0

The "Weights" columns list the missed damage ($\%$) for each damage level. False alarms and missed damage are presented as percentages of 20 test cases.
$^{a}$ $\alpha=95\%$; $^{b}$ $\alpha=1\%$; $^{c}$ $\alpha=10\%$.
† All 20 baseline data sets were used as reference signals consecutively.
†† 15 out of 20 baseline data sets were used to calculate the baseline mean.
Table B.4: Damage detection summary results at multiple $\alpha$ values for path 3-4 (single wave packet) in the CFRP plate.

Method | False Alarms ($\%$) | 1 Weight | 2 Weights | 3 Weights | 4 Weights | 5 Weights | 6 Weights
---|---|---|---|---|---|---|---
DI$^{a}$† [4] | 5.25 | 13.25 | 0 | 0 | 0 | 0 | 0
$F$ Statistic$^{b}$† | 95 | 0 | 5 | 0 | 0 | 0 | 0
$F_{m}$ Statistic$^{c}$†† | 30 | 60 | 0 | 0 | 0 | 0 | 0
$Z$ Statistic$^{a}$†† | 0 | 0 | 0 | 0 | 0 | 0 | 0

The "Weights" columns list the missed damage ($\%$) for each damage level. False alarms and missed damage are presented as percentages of 20 test cases.
$^{a}$ $\alpha=95\%$; $^{b}$ $\alpha=1\%$; $^{c}$ $\alpha=10\%$.
† All 20 baseline data sets were used as reference signals consecutively.
†† 15 out of 20 baseline data sets were used to calculate the baseline mean.
## Appendix C Damage Detection Summary Results: OGW CFRP Panel
Table C.5: Damage detection summary results at multiple $\alpha$ values for path 3-12 (damage-intersecting case) in the CFRP panel [1].

Method | False Alarms ($\%$) | D5/6/7/8 | D9/10/11
---|---|---|---
DI$^{a}$† [4] | 7.5 | 51.25 | 100
$F$ Statistic$^{b}$† | 0 | 75 | 100
$F_{m}$ Statistic$^{b}$†† | 0 | 75 | 33
$Z$ Statistic$^{a}$†† | 0 | 0 | 0

The "D" columns list the missed damage ($\%$) per damage group. False alarms are presented as a percentage of 20 test cases; missed damage as a percentage of all test cases per damage group.
$^{a}$ $\alpha=95\%$; $^{b}$ $\alpha=80\%$.
† All 20 baseline data sets were used as reference signals consecutively.
†† 15 out of 20 baseline data sets were used to calculate the baseline mean.
Table C.6: Damage detection summary results at an $\alpha$ value of $95\%$ for path 3-12 (damage-non-intersecting case) in the CFRP panel [1].

Method | False Alarms ($\%$) | D21/22/23/24 | D25/26/27/28
---|---|---|---
DI$^{a}$† [4] | 7.5 | 100 | 93.75
$F$ Statistic$^{b}$† | 0 | 100 | 75
$F_{m}$ Statistic$^{b}$†† | 0 | 100 | 75
$Z$ Statistic$^{a}$†† | 0 | 50 | 75

The "D" columns list the missed damage ($\%$) per damage group. False alarms are presented as a percentage of 20 test cases; missed damage as a percentage of all test cases per damage group.
$^{a}$ $\alpha=95\%$; $^{b}$ $\alpha=70\%$.
† All 20 baseline data sets were used as reference signals consecutively.
†† 15 out of 20 baseline data sets were used to calculate the baseline mean.
# Impurity induced quantum chaos for an ultracold bosonic ensemble in a
double-well
Jie Chen<EMAIL_ADDRESS>Zentrum für Optische
Quantentechnologien, Fachbereich Physik, Universität Hamburg, Luruper Chaussee
149, 22761 Hamburg, Germany Kevin Keiler Zentrum für Optische
Quantentechnologien, Fachbereich Physik, Universität Hamburg, Luruper Chaussee
149, 22761 Hamburg, Germany Gao Xianlong Department of Physics, Zhejiang
Normal University, Jinhua 321004, China Peter Schmelcher Zentrum für
Optische Quantentechnologien, Fachbereich Physik, Universität Hamburg, Luruper
Chaussee 149, 22761 Hamburg, Germany The Hamburg Centre for Ultrafast
Imaging, Universität Hamburg, Luruper Chaussee 149, 22761 Hamburg, Germany
###### Abstract
We demonstrate that an ultracold many-body bosonic ensemble confined in a one-
dimensional (1D) double-well (DW) potential can exhibit chaotic dynamics due
to the presence of a single impurity. The non-equilibrium dynamics is
triggered by a quench of the impurity-Bose interaction and is illustrated via
the evolution of the population imbalance for the bosons between the two
wells. While increasing the post-quench interaction strength always facilitates irregular motion of the bosonic population imbalance, the dynamics become regular again when the impurity is initially prepared in highly excited states. Such an integrability to chaos (ITC) transition is fully
captured by the transient dynamics of the corresponding linear entanglement
entropy, whose infinite-time averaged value additionally characterizes the
edge of the chaos and implies the existence of an effective Bose-Bose
attraction induced by the impurity. In order to elucidate the physical origin
for the observed ITC transition, we perform a detailed spectral analysis for
the mixture with respect to both the energy spectrum as well as the
eigenstates. Specifically, two distinct spectral behaviors upon a
variation of the interspecies interaction strength are observed. While the
avoided level-crossings take place in the low-energy spectrum, the energy
levels in the high-energy spectrum possess a band-like structure and are
equidistant within each band. This leads to a significant delocalization of
the low-lying eigenvectors which, in turn, accounts for the chaotic nature of
the bosonic dynamics. By contrast, those highly excited states bear a high
resemblance to the non-interacting integrable basis, which explains the
recovery of the integrability for the bosonic species. Finally, we discuss the
induced Bose-Bose attraction as well as its impact on the bosonic dynamics.
## I Introduction
Trapping of an ultracold many-body bosonic ensemble in a one-dimensional (1D)
double-well (DW) potential constitutes a prototype system for investigations of correlated quantum dynamics DW_exp_1 ; DW_exp_2 ;
DW_exp_3 . Such a system represents a bosonic Josephson junction (BJJ), an
atomic analogue of the Josephson effect initially predicted for Cooper-pair
tunneling through two weakly linked superconductors BJJ_1 ; BJJ_2 . Owing to
the unprecedented controllability of the trapping geometries as well as the
atomic interaction strengths cold_atom_rev , studies of the BJJ unveil various
intriguing phenomena which are not accessible for conventional superconducting
systems BJJ_Rabi_1 ; BJJ_Rabi_2 ; BJJ_Rabi_3 ; BJJ_Frag_1 ; BJJ_Frag_2 ;
BJJ_Squeeze_1 ; BJJ_Squeeze_2 . Examples are the Josephson oscillations
BJJ_Rabi_1 ; BJJ_Rabi_2 ; BJJ_Rabi_3 , fragmentations BJJ_Frag_1 ; BJJ_Frag_2
, macroscopic quantum self trapping DW_exp_3 ; BJJ_Rabi_1 ; BJJ_Rabi_2 ,
collapse and revival sequences BJJ_Rabi_3 as well as the atomic squeezing
state BJJ_Squeeze_1 ; BJJ_Squeeze_2 .
Under explicit time-dependent driving forces, the BJJ can alternatively
turn into the quantum kicked top (QKT), a famous platform for the
investigations of quantum chaos as well as the classical-quantum
correspondence QKT_1 ; QKT_2 ; QKT_3 ; QKT_4 ; QKT_5 ; QKT_6 ; QKT_7 ; QKT_8 ;
QKT_9 ; QKT_10 ; QKT_11 ; QKT_12 ; QKT_13 . To date, related studies include
the spectral statistics QKT_2 , the entanglement entropy production QKT_3 ;
QKT_4 ; QKT_5 ; QKT_6 ; QKT_7 ; QKT_8 ; QKT_9 ; QKT_10 , the quantum
decoherence and quantum correlations QKT_11 ; QKT_12 as well as the border
between regular and chaotic dynamics QKT_13 . Moreover, by viewing the QKT as
a collective $N$-qubit system, the effects of quantum chaos on digital quantum simulations have also been discussed in detail recently QKT_14 ; QKT_15 .
On the other hand, stimulated by the experimental progress on few-body
ensembles few_exp1 ; few_exp2 ; few_exp3 ; few_exp4 ; few_exp5 ; few_exp6 ,
significant theoretical effort also focuses on the 1D few-body atomic systems
few_gs_1 ; few_gs_2 ; few_gs_3 ; few_gs_4 ; few_gs_5 ; few_gs_6 ; few_gs_7 ;
few_quench_1 ; few_quench_2 ; few_quench_3 ; few_bf_SC1 ; few_bf_SC2 ,
revealing for example the ground state few_gs_1 ; few_gs_2 ; few_gs_3 ;
few_gs_4 ; few_gs_5 ; few_gs_6 ; few_gs_7 ; few_bf_SC1 ; few_bf_SC2 as well
as the dynamical properties few_quench_1 ; few_quench_2 ; few_quench_3 , which
pave the way for the studies of the binary mixtures with large particle number
imbalance. Such hybridized systems are deeply related to the polaron physics
polaron_1 ; polaron_2 ; polaron_3 as well as the open quantum systems OQS_1
and are particularly interesting owing to the fact that one subsystem is in
the deep quantum regime while the other one can more or less be described by
the semi-classical physics. Note, however, that while most discussions focus on the impact of the majority bath on the minority species, studies that instead explore the feedback on the majority species due to the presence of the minority one are still rare.
In the present paper, we investigate a binary ultracold atomic mixture made of
a single impurity and a non-interacting many-body bosonic ensemble that are
confined within a 1D DW potential. Unlike most previous studies, which focus on the weakly interacting regime where the impurity is restricted to the lowest two modes of the DW potential Impurity_BH_1 ; Impurity_BH_2 ; Impurity_BH_3 ; Impurity_BH_4 ; Impurity_BH_5 ; Impurity_BH_6 , our discussion is not restricted to such a scenario. Specifically, we study the onset of chaos for the majority bosonic species due to the presence of the impurity and put particular emphasis on its dynamical
response upon a sudden quench of the impurity-Bose interaction strength. As an
exemplary observable, we monitor the quantum evolution of the population
imbalance for the bosons between the two wells starting from a balanced
particle population. While increasing the post-quench interaction strength always facilitates chaotic motion of the bosonic population imbalance, the motion becomes regular again when the impurity is initially prepared in
the highly excited states. In order to characterize such an integrability to
chaos (ITC) transition, we employ the linear entanglement entropy as a
signature of quantum chaos, which alternatively measures the decoherence for
the bosonic species. Depending on the degree of chaos, the transient dynamics
of the corresponding linear entanglement entropy exhibits either a rapid growth or a slow variation with increasing time, whereas its infinite-time averaged value additionally captures the edge of quantum chaos, i.e., the
border between the integrable and the chaotic regions in the corresponding
classical phase space. Furthermore, by computing the infinite-time averaged
values of the linear entanglement entropy for various initial conditions, we
find a striking resemblance between its profile and a classical phase space
with attractive Bose-Bose interaction, which implies the existence of an
attractive interaction among the bosons induced by the impurity.
In order to elucidate the physical origin for the above observed ITC
transition, we perform a detailed spectral analysis with respect to both the
energy spectrum as well as the eigenstates of the mixture. Two distinct
spectral behaviors upon a variation of the interspecies interaction strength
are observed. While the avoided level-crossings take place in the low-energy
spectrum, the energy levels in the high-energy spectrum possess a band-like
structure and are equidistant within each band. Consequently, this results in
a significant delocalization for those low-lying eigenstates which, in turn,
accounts for the chaotic nature of the bosonic non-equilibrium dynamics.
Remarkably, those highly excited states bear a striking resemblance to the
non-interacting integrable basis, which explains the recovery of the
integrability for the bosonic species. Finally, we also discuss the induced
Bose-Bose attraction and its impact on the bosonic dynamics.
This paper is organized as follows. In Sec. II, we introduce our setup
including the Hamiltonian, the initial conditions as well as the quantities of
interest. In Sec. III, we present our main observation: the ITC transition
for the bosonic species. In Sec. IV, we perform a detailed spectral analysis
for the mixture with respect to both the energy spectrum as well as the
eigenstates, so as to elucidate the physical origin for the above observed ITC
transition. Finally, our conclusions and outlook are provided in Sec. V.
## II Setup
### II.1 Hamiltonian and angular-momentum representation
The Hamiltonian of our 1D ultracold impurity-Bose mixture is given by
$\hat{H}=\hat{H}_{I}+\hat{H}_{B}+\hat{H}_{IB}$, where
$\displaystyle\hat{H}_{\sigma}$ $\displaystyle=\int
dx_{\sigma}~{}\hat{\psi}^{\dagger}_{\sigma}(x_{\sigma})\textit{h}_{\sigma}(x_{\sigma})\hat{\psi}_{\sigma}(x_{\sigma}),$
$\displaystyle\hat{H}_{IB}$ $\displaystyle={g_{IB}}\int
dx~{}\hat{\psi}^{\dagger}_{I}(x)\hat{\psi}^{\dagger}_{B}(x)\hat{\psi}_{B}(x)\hat{\psi}_{I}(x),$
(1)
and
$\textit{h}_{\sigma}(x_{\sigma})=-\frac{\hbar^{2}}{2m_{\sigma}}\frac{\partial^{2}}{\partial
x_{\sigma}^{2}}+V_{DW}(x_{\sigma})$ is the single-particle Hamiltonian for the
$\sigma=I(B)$ species being confined within a 1D symmetric DW potential
$V_{DW}(x_{\sigma})=a_{\sigma}(x_{\sigma}^{2}-b_{\sigma}^{2})^{2}$. For
simplicity, we assume that the atoms of both species have the same mass
($m_{I}=m_{B}=m$) and are trapped by the same potential geometry, i.e.,
$a_{I}=a_{B}=a_{DW}$ and $b_{I}=b_{B}=b_{DW}$.
$\hat{\psi}_{\sigma}^{\dagger}(x_{\sigma})$
[$\hat{\psi}_{\sigma}(x_{\sigma})$] is the field operator that creates
(annihilates) a $\sigma$-species particle at position $x_{\sigma}$. Moreover,
we neglect the interactions among the bosons and assume the impurity-Bose
interaction is of zero range and can be modeled by a contact potential of
strength Feshbach_1 ; Feshbach_2 ; Feshbach_3 ; few_quench_3
$g_{IB}=\frac{2\hbar^{2}a_{3D}}{\mu
a_{\bot}^{2}}[1-C\frac{a_{3D}}{a_{\bot}}]^{-1}.$ (2)
Here $a_{3D}$ is the 3D impurity-Bose $s$-wave scattering length and $C\approx
1.4603$ is a constant. The parameter $a_{\bot}=\sqrt{\hbar/\mu\omega_{\bot}}$
describes the transverse confinement with $\mu=m/2$ being the reduced mass and
we assume the transverse trapping frequency $\omega_{\bot}$ to be equal for
both species. In the following discussions, we rescale the Hamiltonian of the mixture $\hat{H}$ in units of energy, length and time given by
$\eta=\hbar\omega_{\bot}$, $\xi=\sqrt{\hbar/m\omega_{\bot}}$ and
$\tau=1/\omega_{\bot}$, respectively. We focus on the repulsive interaction
regime, i.e., $g_{IB}\geqslant 0$ and set $a_{DW}=0.5$, $b_{DW}=1.5$, such
that the lowest two single-particle energy levels are well separated from the
others [see Fig. 1 (a), the spatial geometry of $V_{DW}(x)$ (black dashed
line) as well as the lowest six single-particle energy levels (grey solid
lines)]. Throughout this work, we explore a binary mixture made of a single
impurity and 100 bosons ($N_{I}=1$, $N_{B}=100$), and focus on the dynamical
response for the majority bosonic species upon a sudden quench of the
impurity-Bose interaction strength (see below). Let us note that such a 1D
mixture is experimentally accessible by imposing strong transverse and weak
longitudinal confinement on a binary mixture, e.g., a Bose-Fermi mixture of two different kinds of atoms mixture_exp_bf_1 ; mixture_exp_bf_2 or a Bose-Bose
mixture made of the same atoms with two different hyperfine states
mixture_exp_bb_1 ; mixture_exp_bb_2 . The DW potential can also be readily
constructed by imposing a 1D optical lattice on top of a harmonic trap
DW_exp_3 ; BJJ_2 . Moreover, the contact interaction strength $g_{IB}$ can be
controlled experimentally by tuning the $s$-wave scattering lengths via
Feshbach or confinement-induced resonances Feshbach_1 ; Feshbach_2 ;
Feshbach_3 .
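As a quick numerical illustration of Eq. (2), the following sketch (an illustration, not part of the paper; it assumes the rescaled units $\hbar=m=\omega_{\bot}=1$, so that $a_{\bot}=\sqrt{2}\,\xi$) evaluates $g_{IB}$ as a function of $a_{3D}$ and exhibits the confinement-induced resonance at $a_{3D}=a_{\bot}/C$:

```python
import numpy as np

def g_ib(a_3d, hbar=1.0, m=1.0, omega_perp=1.0):
    """Effective 1D impurity-Bose coupling of Eq. (2).

    a_3d is the 3D s-wave scattering length; the transverse length is
    a_perp = sqrt(hbar / (mu * omega_perp)) with reduced mass mu = m / 2.
    """
    C = 1.4603                       # constant appearing in Eq. (2)
    mu = m / 2.0                     # reduced mass for equal masses
    a_perp = np.sqrt(hbar / (mu * omega_perp))
    return (2.0 * hbar**2 * a_3d / (mu * a_perp**2)
            / (1.0 - C * a_3d / a_perp))

# g_IB grows monotonically with a_3d and diverges as a_3d -> a_perp / C
# (the confinement-induced resonance).
for a in (0.05, 0.2, 0.5, 0.9):
    print(f"a_3d = {a:4.2f}  ->  g_IB = {g_ib(a):8.3f}")
```

For small $a_{3D}$ the coupling reduces to the bare value $2\hbar^{2}a_{3D}/(\mu a_{\bot}^{2})$, while near the resonance the bracket in Eq. (2) dominates.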
Noting further that the bosonic species is confined within a tight DW potential with $\delta_{1}\gg\delta_{0}$ [cf. Fig. 1 (a)], where $\delta_{i}$ denotes the energy difference between the $i$-th and the $(i+1)$-th single-particle eigenstates, we adopt the two-mode approximation
$\hat{\psi}_{B}(x)=u_{L}(x)\hat{b}_{L}+u_{R}(x)\hat{b}_{R},$ (3)
with $u_{L,R}(x)$ being the Wannier-like states localized in the left and
right well, respectively. This leads to the low-energy effective Hamiltonian
for the bosonic species
$\hat{H}_{B}=-J_{0}(\hat{b}^{\dagger}_{L}\hat{b}_{R}+\hat{b}^{\dagger}_{R}\hat{b}_{L}),$
(4)
corresponding to the two-site Bose-Hubbard (BH) model with $J_{0}=0.071$ being
the hopping amplitude.
Before proceeding, it is instructive to express the above BH Hamiltonian in
the angular-momentum representation. To see this, we introduce three angular-
momentum operators BJJ_Rabi_3 ; BJJ_4
$\displaystyle\hat{J}_{x}$
$\displaystyle=\frac{1}{2}(\hat{b}_{L}^{\dagger}\hat{b}_{R}+\hat{b}_{R}^{\dagger}\hat{b}_{L}),~{}~{}~{}\hat{J}_{y}=-\frac{i}{2}(\hat{b}_{L}^{\dagger}\hat{b}_{R}-\hat{b}_{R}^{\dagger}\hat{b}_{L}),$
$\displaystyle\hat{J}_{z}$
$\displaystyle=\frac{1}{2}(\hat{b}_{L}^{\dagger}\hat{b}_{L}-\hat{b}_{R}^{\dagger}\hat{b}_{R}),$
(5)
obeying the SU(2) commutation relation
$[\hat{J}_{\alpha},\hat{J}_{\beta}]=i\epsilon_{\alpha\beta\gamma}\hat{J}_{\gamma}$.
The BH Hamiltonian in Eq. (4) thus can be rewritten as
$\hat{H}_{B}=-2J_{0}\hat{J}_{x},$ (6)
which describes the angular momentum precession of a single particle whose
spatial degrees of freedom (DOFs) are frozen. According to the definitions of
$\hat{J}_{x}$ and $\hat{J}_{z}$ in Eq. (5), we note that the kinetic energy in
the BH model as well as the population imbalance for the bosons between the
two wells are in analogy to the magnetizations of this single particle along
the $x$ and the $z$ axes. Moreover, the particle number conservation in the
Hamiltonian (4) corresponds to the angular momentum conservation
$\hat{J}^{2}=\hat{J}_{x}^{2}+\hat{J}_{y}^{2}+\hat{J}_{z}^{2}=\frac{N_{B}}{2}(\frac{N_{B}}{2}+1)$
(7)
for the Hamiltonian (6).
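These operator identities are straightforward to verify numerically. The following sketch (a minimal construction, not from the paper) builds $\hat{J}_{x}$, $\hat{J}_{y}$ and $\hat{J}_{z}$ of Eq. (5) in the two-mode Fock basis and checks the SU(2) commutation relation together with the Casimir identity of Eq. (7):

```python
import numpy as np

def angular_momentum_ops(n_b):
    """J_x, J_y, J_z of Eq. (5) in the Fock basis |n_L, n_B - n_L>."""
    dim = n_b + 1
    jz = np.diag([(n_l - (n_b - n_l)) / 2.0 for n_l in range(dim)])
    # b_L^dagger b_R |n_L, n_R> = sqrt((n_L + 1) n_R) |n_L + 1, n_R - 1>
    lr = np.zeros((dim, dim))
    for n_l in range(n_b):
        lr[n_l + 1, n_l] = np.sqrt((n_l + 1) * (n_b - n_l))
    jx = 0.5 * (lr + lr.T)
    jy = -0.5j * (lr - lr.T)
    return jx, jy, jz

n_b = 10                                      # small particle number suffices
jx, jy, jz = angular_momentum_ops(n_b)
comm = jx @ jy - jy @ jx
casimir = jx @ jx + jy @ jy + jz @ jz
j = n_b / 2.0
print(np.allclose(comm, 1j * jz))                           # SU(2) algebra
print(np.allclose(casimir, j * (j + 1) * np.eye(n_b + 1)))  # Eq. (7)
```

Both checks hold for any fixed particle number, reflecting that the two-mode Fock space carries a single spin-$N_{B}/2$ representation.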
For the case $g_{IB}=0$, the angular momentum dynamics can be obtained directly from the corresponding Heisenberg equations of motion, yielding
$\displaystyle\hat{J}_{y}(t)$
$\displaystyle=\hat{J}_{y}(0)\text{cos}(2J_{0}t)+\hat{J}_{z}(0)\text{sin}(2J_{0}t),$
$\displaystyle\hat{J}_{z}(t)$
$\displaystyle=\hat{J}_{z}(0)\text{cos}(2J_{0}t)-\hat{J}_{y}(0)\text{sin}(2J_{0}t),$
(8)
i.e., harmonic oscillations with the frequency $\omega_{0}=2J_{0}$, while $\hat{J}_{x}(t)=\hat{J}_{x}(0)$ is time-independent since $[\hat{J}_{x},\hat{H}_{B}]=0$. Further introducing the normalized vector
$\hat{\vec{S}}(t)=\hat{S}_{x}(t)\vec{i}+\hat{S}_{y}(t)\vec{j}+\hat{S}_{z}(t)\vec{k}$
with $\hat{S}_{\gamma}(t)=\hat{J}_{\gamma}(t)/J$ for $\gamma=x,y,z$ and
$J=N_{B}/2$, together with the fact that
$\sum_{\gamma=x,y,z}\langle\hat{S}_{\gamma}\rangle^{2}(t)=\sum_{\gamma=x,y,z}\langle\hat{S}_{\gamma}\rangle^{2}(0),$
(9)
one can readily show that the motion of the vector $\hat{\vec{S}}(t)$ always
lies on the Bloch sphere with unit radius if, in addition, we choose the
initial state as the atomic coherent state (ACS) (see below).
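This picture can be confirmed by a short exact-diagonalization sketch (an illustration assuming $N_{B}=10$ for tractability, with the ACS expansion of Sec. II.4): under $\hat{H}_{B}=-2J_{0}\hat{J}_{x}$, the expectation $\langle\hat{J}_{x}\rangle$ is conserved and $\langle\hat{J}_{z}\rangle$ returns to its initial value after one period $T=2\pi/\omega_{0}=\pi/J_{0}$.

```python
import numpy as np
from math import comb

J0, n_b = 0.071, 10
dim = n_b + 1

# Angular-momentum operators of Eq. (5) in the Fock basis |n_L, n_B - n_L>.
jz = np.diag([n_l - n_b / 2.0 for n_l in range(dim)])
lr = np.zeros((dim, dim))              # matrix of b_L^dagger b_R
for n_l in range(n_b):
    lr[n_l + 1, n_l] = np.sqrt((n_l + 1) * (n_b - n_l))
jx = 0.5 * (lr + lr.T)

h_b = -2.0 * J0 * jx                   # the BH Hamiltonian of Eq. (6)

# Atomic coherent state |theta = pi/2, phi = pi/4> expanded in the Fock basis.
theta, phi = np.pi / 2, np.pi / 4
psi0 = np.array([np.sqrt(comb(n_b, n_l))
                 * np.cos(theta / 2) ** n_l
                 * (np.sin(theta / 2) * np.exp(1j * phi)) ** (n_b - n_l)
                 for n_l in range(dim)])
psi0 /= np.linalg.norm(psi0)

evals, evecs = np.linalg.eigh(h_b)

def evolve(t):
    """|psi(t)> = exp(-i h_b t) |psi(0)> via the eigenbasis of h_b."""
    return evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi0))

def expect(op, psi):
    return float(np.real(psi.conj() @ op @ psi))

T = np.pi / J0                         # period of Eq. (8), omega_0 = 2 J_0
print(expect(jx, evolve(0.7 * T)) - expect(jx, psi0))   # ~0: conserved
print(expect(jz, evolve(T)) - expect(jz, psi0))         # ~0: periodic
```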
### II.2 Classical dynamics
The above angular momentum dynamics can alternatively be understood in a
classical manner. As we will show below, the periodic motions for
$\hat{J}_{y}(t)$ and $\hat{J}_{z}(t)$ [equivalently $\hat{S}_{y}(t)$ and
$\hat{S}_{z}(t)$] correspond to the periodic oscillation of a classical non-
rigid pendulum around its equilibrium position, while the conservation of
$\hat{J}_{x}(t)$ [$\hat{S}_{x}(t)$] relates to the energy conservation of this
pendulum. To this end, we first adopt the mean-field approximation as
$\hat{b}_{\beta}=b_{\beta}$ ($\beta=L,R$) with $b_{\beta}$ being a $c$-number
GPE_1 . The quantum operators $\hat{S}_{x}$, $\hat{S}_{y}$ and $\hat{S}_{z}$
should then be rewritten as
$\displaystyle S_{x}$
$\displaystyle=\frac{1}{2J}(b_{L}^{\ast}b_{R}+b_{R}^{\ast}b_{L}),~{}~{}~{}S_{y}=-\frac{i}{2J}(b_{L}^{\ast}b_{R}-b_{R}^{\ast}b_{L}),$
$\displaystyle S_{z}$
$\displaystyle=\frac{1}{2J}(b_{L}^{\ast}b_{L}-b_{R}^{\ast}b_{R}).$ (10)
Employing the phase-density representation for $b_{\beta}$ as
$b_{\beta}=\sqrt{N_{\beta}^{B}}e^{i\theta_{\beta}}$ and further introducing
the two conjugate variables
$Z=(N_{L}^{B}-N_{R}^{B})/N_{B},~{}~{}~{}~{}~{}~{}\varphi=\theta_{R}-\theta_{L},$
(11)
representing the relative population imbalance between the two wells and the
relative phase difference, respectively, we arrive at
$S_{x}=\sqrt{1-Z^{2}}\text{cos}\varphi,~{}~{}~{}S_{y}=\sqrt{1-Z^{2}}\text{sin}\varphi,~{}~{}~{}S_{z}=Z,$
(12)
whose dynamics are governed by the Hamiltonian
$H_{cl}=-J_{0}\sqrt{1-Z^{2}}\text{cos}\varphi,$ (13)
which, as aforementioned, describes a non-rigid pendulum with angular momentum
$Z$ whose length is proportional to $\sqrt{1-Z^{2}}$ BJJ_Rabi_1 ; BJJ_Rabi_2 ;
BJJ_Rabi_3 ; BJJ_Driven_1 . Comparing Eq. (12) to Eq. (13), we note that $S_{y}$ and $S_{z}$, being the classical counterparts of the quantum operators $\hat{S}_{y}$ and $\hat{S}_{z}$, now represent the horizontal displacement and the angular momentum of this classical pendulum, while $S_{x}$ ($\hat{S}_{x}$) is proportional to its total energy, which is conserved during the dynamics.
In this way, a one-to-one correspondence between the quantum and classical dynamics is established, in which the periodic motions of $\hat{S}_{y}(t)$ and $\hat{S}_{z}(t)$ are mapped to the periodic oscillations of this classical pendulum around its equilibrium position. Since our focus is on the dynamics of the population imbalance of the bosons, we compare the quantum evolution $\hat{S}_{z}(t)$ for the case $g_{IB}=0$ to the classical dynamics $Z(t)$ in Fig. 1 (b) and observe no discrepancies between them. Hence, for the case $g_{IB}=0$, we will always refer to the classical $Z(t)$ dynamics as the quantum $S_{z}(t)$ evolution. However, it should also be emphasized that the
agreement between $\hat{S}_{z}(t)$ and $Z(t)$ holds only for this non-interacting case. For $g_{IB}>0$, on the one hand, the mixture has no classical mapping; on the other hand, the quantum correlations among the bosons come into play and, as a result, one can witness a completely different quantum dynamics compared to the classical one, despite the fact that the bare Bose-Bose interaction always vanishes (see below).
The above classical interpretation provides us not only with a vivid picture
for visualizing the quantum dynamics in a classical manner, but also with profound physical insight into its overall dynamical properties.
In particular, the periodic motions for $\hat{S}_{y}(t)$ and $\hat{S}_{z}(t)$
obtained from the quantum simulations are a direct consequence of the
integrability of the classical Hamiltonian $H_{cl}$. Owing to the energy
conservation for the case $g_{IB}=0$, $H_{cl}$ is completely integrable with
all the corresponding classical trajectories, characterized by
$[Z(t),\varphi(t)]$, being periodic in time PS . Such an integrability is also
transparently shown in the classical phase space [see Fig. 1 (c)]. Depending
on the initial condition, two distinct types of motion are clearly observed: a periodic trajectory orbiting around the fixed point located either at $(Z=0,\varphi=0)$ or at $(Z=0,\varphi=\pi)$, referred to as the zero- and the $\pi$-phase mode of a 1D BJJ BJJ_2 .
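Such trajectories can be reproduced by integrating Hamilton's equations for Eq. (13) with a standard fourth-order Runge-Kutta step. The sketch below is an illustration, not from the paper; the factor of two in the assumed equations of motion reflects the canonical scaling that reproduces the small-oscillation frequency $\omega_{0}=2J_{0}$ of Eq. (8). It propagates the phase-space point $(Z=0,\varphi=\pi/4)$ and checks that $H_{cl}$ stays conserved along the periodic orbit:

```python
import numpy as np

J0 = 0.071

def h_cl(z, phi):
    """Classical non-rigid-pendulum Hamiltonian of Eq. (13)."""
    return -J0 * np.sqrt(1.0 - z**2) * np.cos(phi)

def rhs(state):
    """Assumed canonical equations: dZ/dt, dphi/dt (scaled so omega_0 = 2 J0)."""
    z, phi = state
    root = np.sqrt(1.0 - z**2)
    return np.array([-2.0 * J0 * root * np.sin(phi),
                     2.0 * J0 * z * np.cos(phi) / root])

def rk4_step(state, dt):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Zero-phase-mode trajectory starting from the red dot of Fig. 1 (c).
state = np.array([0.0, np.pi / 4])
e0 = h_cl(*state)
dt, n_steps = 0.05, 4000               # roughly four to five periods, T ~ pi / J0
traj = np.empty(n_steps)
for i in range(n_steps):
    state = rk4_step(state, dt)
    traj[i] = state[0]

print(abs(h_cl(*state) - e0))          # ~0: energy is conserved
print(np.max(np.abs(traj)) <= 1.0)     # Z stays on the Bloch sphere
```

For this initial condition the imbalance oscillates between $\pm 1/\sqrt{2}$, consistent with energy conservation at $E=-J_{0}/\sqrt{2}$.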
Figure 1: (Color online) (a) Single-particle spectrum for the double-well
potential, in which the gray horizontal lines denote the lowest six energy
levels and the blue (red) arrows represent possible transitions that reverse
(preserve) the spatial parity of the impurity. (b) Real-time dynamics for the
bosonic population imbalance $S_{z}(t)$ for the initial state
$|\Psi(0)\rangle=|\phi_{0}\rangle\otimes|\pi/2,\pi/4\rangle$ and for the case
$g_{IB}=0$ (red solid line), together with the classical $Z(t)$ dynamics
starting from the phase point $(Z=0,\varphi=\pi/4)$ (blue dashed line). (c)
Classical phase space for $J_{0}=0.071$. The red dot denotes the phase space
point ($Z=0,\varphi=\pi/4$) corresponding to the ACS
$|\theta,\varphi\rangle=|\pi/2,\pi/4\rangle$.
### II.3 Breaking of the integrability
In contrast to the above integrable limit, the presence of the impurity-Bose
interaction leads to the energy transport between the two species and, hence,
breaks the integrability for the bosonic species. In order to elaborate on
this process in more detail, we decompose the interspecies interaction into
various impurity-boson pair excitations
$\hat{H}_{IB}=\sum_{i,j=0}^{\infty}\sum_{\alpha,\beta=L,R}U_{ij\alpha\beta}\hat{a}_{i}^{\dagger}\hat{a}_{j}\hat{b}_{\alpha}^{\dagger}\hat{b}_{\beta},$
(14)
with $U_{ij\alpha\beta}=g_{IB}\int
dx~{}\phi_{i}(x)\phi_{j}(x)u_{\alpha}(x)u_{\beta}(x)$ and $\{\phi_{i}(x)\}$
being the single-particle basis for the DW potential. Moreover, $u_{L/R}(x)$,
being the above mentioned localized Wannier-like states, are constructed via a
linear superposition of the lowest two eigenstates $\phi_{0}(x)$ and
$\phi_{1}(x)$. Note that Eq. (14) is obtained by means of an expansion of the
field operator for the impurity
$\hat{\psi}_{I}(x)=\sum_{i=0}^{\infty}\phi_{i}(x)\hat{a}_{i}$, meanwhile, by
employing the two-mode approximation in Eq. (3) for the bosonic species.
Besides, all the eigenstate wavefunctions $\{\phi_{i}(x)\}$ are chosen to be
real due to the preserved time-reversal symmetry in the single-particle
Hamiltonian $\textit{h}_{\sigma}$.
Next, we group different pair excitations with respect to their bosonic
indices as
$\displaystyle\hat{H}_{IB}$
$\displaystyle=\left[\sum_{i,j=0}^{\infty}U_{ijLR}\hat{a}_{i}^{\dagger}\hat{a}_{j}\hat{b}_{L}^{\dagger}\hat{b}_{R}+\sum_{i,j=0}^{\infty}U_{ijRL}\hat{a}_{i}^{\dagger}\hat{a}_{j}\hat{b}_{R}^{\dagger}\hat{b}_{L}\right]$
$\displaystyle+\left[\sum_{i,j=0}^{\infty}U_{ijLL}\hat{a}_{i}^{\dagger}\hat{a}_{j}\hat{b}_{L}^{\dagger}\hat{b}_{L}+\sum_{i,j=0}^{\infty}U_{ijRR}\hat{a}_{i}^{\dagger}\hat{a}_{j}\hat{b}_{R}^{\dagger}\hat{b}_{R}\right].$
(15)
By noticing the fact that
$U_{ijLR}=U_{ijRL},~{}~{}U_{ijLL}=\eta U_{ijRR}$ (16)
with $\eta=1$ ($\eta=-1$) for $n_{e,o}=|i-j|$ being an even (odd) number,
together with the definitions given in Eq. (5), we finally arrive at
$\displaystyle\hat{H}_{IB}$
$\displaystyle=\left(\sum_{i,j=0}^{\infty}U_{ij}^{(1)}\hat{a}_{i}^{\dagger}\hat{a}_{j}\right)2\hat{J}_{x}+\left(\sum_{|i-j|=n_{e}}^{\infty}U_{ij}^{(2)}\hat{a}_{i}^{\dagger}\hat{a}_{j}\right)\hat{N}_{B}$
$\displaystyle+\left(\sum_{|i-j|=n_{o}}^{\infty}U_{ij}^{(3)}\hat{a}_{i}^{\dagger}\hat{a}_{j}\right)2\hat{J}_{z}$
$\displaystyle=\hat{H}_{IB}^{(1)}+\hat{H}_{IB}^{(2)}+\hat{H}_{IB}^{(3)}.$ (17)
Here $U_{ij}^{(1)}=U_{ijLR}=U_{ijRL}$, $U_{ij}^{(2)}=U_{ijLL}=U_{ijRR}$ and
$U_{ij}^{(3)}=U_{ijLL}=-U_{ijRR}$. Let us emphasize that Eq. (16) relies
on the DW potential being spatially symmetric, such that all its
single-particle eigenstates $\{\phi_{i}\}$ possess definite spatial parity.
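The parity relation in Eq. (16) holds for the eigenstates of any spatially symmetric single-particle potential, which carry definite parity $(-1)^{i}$. As a toy numerical check (a sketch using harmonic-oscillator eigenfunctions as stand-ins for the DW states $\phi_{i}(x)$, not the actual DW potential of this work), one can build $u_{L/R}=(\phi_{0}\pm\phi_{1})/\sqrt{2}$ and verify $U_{ijLL}=\eta\,U_{ijRR}$ with $\eta=(-1)^{|i-j|}$:

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi

x = np.linspace(-10.0, 10.0, 4001)   # grid symmetric about x = 0
dx = x[1] - x[0]

def ho_eigenstate(n):
    # Harmonic-oscillator eigenfunction with parity (-1)^n; stand-in for phi_n(x)
    c = np.zeros(n + 1)
    c[n] = 1.0
    norm = 1.0 / np.sqrt(2.0**n * factorial(n) * np.sqrt(pi))
    return norm * np.exp(-x**2 / 2) * hermval(x, c)

phi = [ho_eigenstate(n) for n in range(6)]

# Wannier-like left/right states built from the lowest doublet
uL = (phi[0] + phi[1]) / np.sqrt(2)
uR = (phi[0] - phi[1]) / np.sqrt(2)

def U(i, j, ua, ub, gIB=1.0):
    # Overlap integral U_{ij,ab} = g_IB * int dx phi_i(x) phi_j(x) u_a(x) u_b(x)
    return gIB * np.sum(phi[i] * phi[j] * ua * ub) * dx

# Eq. (16): U_{ijLL} = eta * U_{ijRR}, eta = +1 (-1) for |i-j| even (odd),
# and U_{ijLR} = U_{ijRL}
for i in range(6):
    for j in range(6):
        eta = (-1) ** abs(i - j)
        assert np.isclose(U(i, j, uL, uL), eta * U(i, j, uR, uR), atol=1e-10)
        assert np.isclose(U(i, j, uL, uR), U(i, j, uR, uL), atol=1e-10)
```

The check works because $u_{L}(-x)=u_{R}(x)$, so reflecting the integrand maps $U_{ijLL}$ into $(-1)^{i+j}U_{ijRR}$.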
Equation (17) makes transparent how the interspecies interaction
$\hat{H}_{IB}$ breaks the integrability of the Hamiltonian $\hat{H}_{B}$.
Since both $\hat{H}_{IB}^{(1)}$ and $\hat{H}_{IB}^{(2)}$ commute with
$\hat{H}_{B}$ [c.f. Eq. (6)], it is the non-commutativity between
$\hat{H}_{IB}^{(3)}$ and $\hat{H}_{B}$ that results in the energy
non-conservation for the bosonic species and breaks the integrability it
possesses at $g_{IB}=0$. Inspecting the $\hat{H}_{IB}^{(3)}$ term in more detail,
we notice that it collects all the single-particle excitations
that reverse the impurity’s spatial parity [see Fig. 1 (a) for a schematic
illustration]. We thus conclude that these parity-changing
transitions of the impurity lead to the integrability breaking for the
majority bosonic species.
### II.4 Initial condition
We initially prepare our impurity-Bose mixture in the product state
$|\Psi(0)\rangle=|\phi_{n}\rangle\otimes|\theta,\varphi\rangle$ of the two
species. Here $|\phi_{n}\rangle$ is the $n$-th
single-particle eigenstate of the impurity and $|\theta,\varphi\rangle$
denotes the ACS given by ACS_1 ; ACS_2
$\displaystyle|\theta,\varphi\rangle$
$\displaystyle=\frac{1}{\sqrt{N_{B}!}}\left[\cos(\tfrac{\theta}{2})\hat{b}^{\dagger}_{L}+\sin(\tfrac{\theta}{2})e^{i\varphi}\hat{b}^{\dagger}_{R}\right]^{N_{B}}|\mathrm{vac}\rangle$
$\displaystyle=\sum_{N^{B}_{L}=0}^{N_{B}}\binom{N_{B}}{N^{B}_{L}}^{1/2}\cos^{N^{B}_{L}}(\theta/2)\,\sin^{N^{B}_{R}}(\theta/2)\,e^{iN^{B}_{R}\varphi}\,|N^{B}_{L},N^{B}_{R}\rangle,$
(20)
which is a linear superposition of all the number states
$\{|N^{B}_{L},N^{B}_{R}\rangle\}$ and fulfills the completeness relation
$(N_{B}+1)\int\frac{d\Omega}{4\pi}|\theta,\varphi\rangle\langle\theta,\varphi|=1$
(21)
with $d\Omega=\sin\theta\,d\theta\,d\varphi$ being the volume element.
Physically, the ACS $|\theta,\varphi\rangle$ corresponds to the classical
state $(Z,\varphi)$ in such a way that
$\cos\theta=(N^{B}_{L}-N^{B}_{R})/N_{B}=Z$ controls the initial
population difference of the bosons, while $\varphi$, carrying the same
meaning as its classical counterpart, determines the phase difference
between the two wells BJJ_4 . For a given ACS $|\theta,\varphi\rangle$, the
mean values of the angular-momentum operators introduced in Eq. (5) read
BJJ_Rabi_3
$\langle\hat{S}_{x}\rangle=\sin\theta\cos\varphi,\quad\langle\hat{S}_{y}\rangle=\sin\theta\sin\varphi,\quad\langle\hat{S}_{z}\rangle=\cos\theta,$
(22)
which satisfy the normalization condition
$\langle\hat{S}_{x}\rangle^{2}+\langle\hat{S}_{y}\rangle^{2}+\langle\hat{S}_{z}\rangle^{2}=1$.
Together with Eqs. (8) and (9), we conclude that, for $g_{IB}=0$,
the motion of the $\hat{\vec{S}}(t)$ vector starting from an arbitrary ACS
always lies on a Bloch sphere of unit radius. Even for $g_{IB}>0$,
where the vector $\hat{\vec{S}}(t)$ can leave the Bloch sphere
significantly, the use of the ACS still allows us to visualize the quantum
trajectory in a classical manner (see below), which greatly simplifies the
analysis of the complex quantum dynamics and provides insights into the
classical-quantum correspondence. Finally, let us note that the ACS has been
realized in recent ultracold experiments in a controllable manner:
tuning a two-photon transition between two hyperfine states of
${}^{87}\textrm{Rb}$ atoms allows one to prepare an ACS with arbitrary
$|\theta,\varphi\rangle$ ACS_3 ; ACS_4 .
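The relations in Eq. (22) are straightforward to verify numerically. The following minimal sketch builds the ACS amplitudes of Eq. (20) in the number basis and evaluates $\langle\hat{S}_{x,y,z}\rangle$ with $\hat{\vec{S}}=\hat{\vec{J}}/(N_{B}/2)$, assuming the standard two-mode convention $\hat{J}_{z}=(\hat{n}_{L}-\hat{n}_{R})/2$, $\hat{J}_{+}=\hat{b}_{L}^{\dagger}\hat{b}_{R}$:

```python
import numpy as np
from math import comb

def acs(theta, phi, NB):
    # ACS amplitudes in the number basis {|N_L, N_R = NB - N_L>}, Eq. (20)
    NL = np.arange(NB + 1)
    NR = NB - NL
    binom = np.array([comb(NB, int(n)) for n in NL], dtype=float)
    return (np.sqrt(binom) * np.cos(theta / 2) ** NL
            * np.sin(theta / 2) ** NR * np.exp(1j * NR * phi))

def spin_expectations(c, NB):
    # <S_gamma> = <J_gamma>/(NB/2), with J_z = (n_L - n_R)/2, J_+ = b_L^dag b_R
    NL = np.arange(NB + 1)
    NR = NB - NL
    Sz = np.sum(np.abs(c) ** 2 * (NL - NR) / 2) / (NB / 2)
    # b_L^dag b_R maps |N_L, N_R> -> sqrt((N_L+1) N_R) |N_L+1, N_R-1>
    Jplus = np.sum(np.conj(c[1:]) * c[:-1] * np.sqrt((NL[:-1] + 1) * NR[:-1]))
    return Jplus.real / (NB / 2), Jplus.imag / (NB / 2), Sz

theta, phi, NB = np.pi / 3, np.pi / 4, 50
c = acs(theta, phi, NB)
Sx, Sy, Sz = spin_expectations(c, NB)
assert np.isclose(np.sum(np.abs(c) ** 2), 1.0)   # state is normalized
assert np.isclose(Sx, np.sin(theta) * np.cos(phi))   # Eq. (22)
assert np.isclose(Sy, np.sin(theta) * np.sin(phi))
assert np.isclose(Sz, np.cos(theta))
assert np.isclose(Sx**2 + Sy**2 + Sz**2, 1.0)    # unit Bloch vector
```

This also confirms that an ACS always lies on the unit Bloch sphere, as stated below Eq. (22).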
In this paper, we aim at exploring the dynamical response of the majority
bosonic species to the presence of the impurity. To this end, we quench at
$t=0$ the impurity-Bose interaction strength from initial $g_{IB}=0$ to some
finite value $g_{IB}>0$, and monitor the quantum evolution of the bosonic
population imbalance starting from a balanced population. The initial
state of the mixture is
$|\Psi(0)\rangle=|\phi_{n}\rangle\otimes|\theta,\varphi\rangle$; unless
otherwise specified, we always choose the bosonic part to be
$|\theta,\varphi\rangle=|\pi/2,\pi/4\rangle$. The corresponding $S_{z}(t)$
dynamics for this initial ACS and for $g_{IB}=0$ has been discussed in
detail above and is presented in Fig. 1 (b) (red solid line). Furthermore,
we also consider various initial impurity states
$|\phi_{n}\rangle$, so as to explore their impact on the bosonic dynamics.
## III Bosonic ITC transition
### III.1 Onset of quantum chaos
Let us first focus on the case where the impurity is initially prepared in its
ground state. The many-body initial state for the mixture is then given by
$|\Psi(0)\rangle=|\phi_{0}\rangle\otimes|\pi/2,\pi/4\rangle$. Figure 2 depicts
the real-time population imbalance of the bosonic species, $S_{z}(t)$, for
various fixed postquench impurity-Bose interaction strengths, $g_{IB}=0.01$
[Fig. 2 (a)], $g_{IB}=0.1$ [Fig. 2 (b)] and $g_{IB}=1.0$ [Fig. 2 (c)],
together with the classical $Z(t)$ dynamics (all blue dashed lines) which, as
aforementioned, is equivalent to $S_{z}(t)$ for $g_{IB}=0$. For a weak
impurity-Bose interaction ($g_{IB}=0.01$), the $S_{z}(t)$ dynamics is only
slightly perturbed by the presence of the impurity, leading to
small deviations of the population imbalance between the quantum and the
classical simulations [c.f. Fig. 2 (a), red solid line and blue dashed line].
On a larger time scale ($t>5000$), a “collapse-and-revival” behavior of
$S_{z}(t)$ is observed (not shown), manifesting the near-integrability
of this weakly interacting regime BJJ_Rabi_3 ; BJJ_Driven_1 .
Upon further increasing the interaction strength, the quantum $S_{z}(t)$ evolution
becomes much more complicated and large discrepancies between $S_{z}(t)$ and
${Z}(t)$ are observed in both the oscillation amplitude and the
frequencies. For $g_{IB}=1.0$, the quantum $S_{z}(t)$ dynamics
finally becomes completely irregular [c.f. Fig. 2 (c), red solid line],
signifying the onset of quantum chaos for the bosonic species.
In order to diagnose this ITC transition and to quantify the degree
of the observed quantum chaos, we employ the linear entanglement entropy
(EE)
$S_{L}=1-\text{tr}\,\hat{\rho}_{1B}^{2}$ (23)
for the bosonic species, which represents the bipartite entropy between a
single boson and the remaining $N_{B}-1$ bosons after tracing out the impurity QKT_7 ;
QKT_8 . Here $\hat{\rho}_{1B}$ stands for the reduced one-body density matrix
for the bosonic species dma1_1 ; dma1_2 ; BJJ_4 . Before proceeding, let us
point out why we do not use spectral statistics as an indicator of
quantum chaos. Similar to the situation of a single particle in a 1D
harmonic trap, the Hamiltonian $\hat{H}_{B}$ at fixed particle number
possesses only a single DOF and therefore violates the assumptions of the
Berry-Tabor conjecture, which states that the
energy level spacing distribution of an integrable system follows the
universal Poisson form level_1 ; level_2 ; level_3 . As a result, the variation of
the level distribution of our mixture upon increasing $g_{IB}$ can
behave rather differently compared to other systems level_3 , and hence
is insufficient to capture the quantum chaos. Upon a spectral decomposition of
the reduced density matrix $\hat{\rho}_{1B}$, $S_{L}$ in Eq. (23) can be
expressed in terms of the natural populations $\{n_{1},n_{2}\}$ as
$S_{L}=1-\sum_{i=1}^{2}n^{2}_{i}$. In this way, the linear EE equivalently
measures the degree of decoherence of the bosonic species. Note that the
two-mode expansion employed in Eq. (3) restricts the single-particle
Hamiltonian $\textit{h}(x)$ to a two-dimensional Hilbert
space and thus gives rise to only two natural populations in the
spectral decomposition BJJ_4 . For the case where all the bosons reside in the
same single-particle state, the bosonic species is fully coherent and
$S_{L}=0$. By contrast, maximal decoherence corresponds to
$n_{1}=n_{2}=1/2$, which gives rise to the upper bound of the linear EE,
$S_{L}=1/2$.
The linear EE has been extensively used in QKT systems as a signature of
quantum chaos QKT_7 ; QKT_8 . Depending on whether the corresponding
classical trajectory is regular or chaotic, the linear EE either varies
slowly or grows rapidly at short times (up to the so-called
Ehrenfest time). On the other hand, the infinite-time averaged values of the
linear EE for various initial ACSs additionally characterize the edge of
quantum chaos, defined as the border between the integrable and the chaotic
region in the corresponding classical phase space QKT_7 ; QKT_8 . Fig. 3 (a)
reports the transient dynamics of the linear EE for the cases examined in Fig.
2 (a-c). At short times ($t<200$), the $S_{L}(t)$ evolution for a stronger
interaction exhibits a more rapid growth compared to the cases with a
smaller $g_{IB}$. This is particularly obvious for $g_{IB}=1.0$,
where the linear EE surges to $S_{L}=0.38$ at $t=10$,
while it reaches only $S_{L}=0.02$ ($S_{L}=0.0007$) for
$g_{IB}=0.1$ ($g_{IB}=0.01$). We thus conclude that the
different transient dynamical behaviors of the linear EE fully capture the ITC
transition observed in the dynamics of the bosonic population
imbalance. Besides, we note that the linear EE at $t=0$ trivially
vanishes since all the bosons are initially condensed in the same single-
particle state [c.f. Eq. (20)].
Having investigated the transient dynamics of the linear EE for a specific
ACS, let us now explore its asymptotic behaviors with respect to different
ACSs, which, as aforementioned, characterize the edge of the quantum chaos. To
this end, we compute the infinite-time averaged value of the linear EE (ITEE)
for the initial state
$|\Psi(0)\rangle=|\phi_{0}\rangle\otimes|\theta,\varphi\rangle$,
$\overline{S}_{L}(\theta,\varphi)=\lim_{T\rightarrow\infty}\frac{1}{T}\int_{0}^{T}dt\,S_{L}(t).$
(24)
Note that the impurity always initially occupies the ground state
$|\phi_{0}\rangle$, and in our practical numerical simulations the time average
is performed up to $t=10^{4}$, which is much larger than any other time scale
involved in the dynamics. Before proceeding, let us point out the geometrical
interpretation of the ITEE value. To show it, we first rewrite the
linear EE in Eq. (23) at time $t$ as QKT_7 ; QKT_8
$S_{L}(t)=\frac{1}{2}\left[1-\sum_{\gamma=x,y,z}\langle\hat{S}_{\gamma}\rangle^{2}(t)\right],$
(25)
where we have used the relation
$\hat{\rho}_{1B}=\frac{1}{2}\left[1+\sum_{\gamma=x,y,z}\langle\hat{S}_{\gamma}\rangle\hat{\sigma}_{\gamma}\right],$
(26)
with $\{\hat{\sigma}_{\gamma}\}$ being the Pauli matrices. Since $S_{L}(t)$
is proportional to the instantaneous distance of the vector $\hat{\vec{S}}(t)$
from the Bloch sphere, $\overline{S}_{L}$ thus measures its averaged distance
over the entire dynamics. From the results for the QKT systems QKT_7 ; QKT_8 , we
note that there exists a clear correspondence between the ITEE values and the
classical phase space structure: regions of low ITEE correspond to regular
trajectories, while regions of high ITEE correspond to chaotic trajectories.
Moreover, a sudden change of the ITEE value takes place as one crosses the
border between the integrable and the chaotic region, which, as
aforementioned, characterizes the edge of the quantum chaos. Fig. 3 (b)
depicts the computed ITEE values for various ACSs for the case $g_{IB}=1.0$.
Note that we have rescaled the $\theta$ axis to $\cos\theta$ since
$\cos\theta=Z$ [see discussions in Sec. II.4]. Varying the initial ACS,
the ITEE value varies accordingly. In particular, regions close to
($\cos\theta=0,\varphi=\pi$) and ($\cos\theta=\pm
0.8,\varphi=0,2\pi$) possess significantly lower ITEE values than the
other regions. Such a $\overline{S}_{L}(\theta,\varphi)$ profile significantly
deviates from the structure of the non-interacting classical phase space.
Instead, it bears a striking resemblance to the phase space of a BJJ with an attractive
Bose-Bose interaction, with the positions of the fixed points precisely
matching the low-ITEE regions [c.f. Fig. 3 (c), red stars]. This indicates
that an effective Bose-Bose attraction exists among the bosons.
In Sec. IV.3, we will discuss this induced interaction in detail as well as
its impact on the bosonic dynamics.
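Equations (25) and (26) can be checked directly: given a Bloch vector $\langle\hat{\vec{S}}\rangle$, the one-body density matrix follows from Eq. (26), and its purity reproduces Eq. (25). A minimal sketch:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rho_1B(S):
    # Eq. (26): rho_1B = (1 + S . sigma)/2 for a Bloch vector S = (Sx, Sy, Sz)
    return 0.5 * (np.eye(2) + S[0] * sx + S[1] * sy + S[2] * sz)

def linear_EE(S):
    # Eq. (23): S_L = 1 - tr(rho_1B^2)
    r = rho_1B(S)
    return 1.0 - np.trace(r @ r).real

# Eq. (25): S_L = (1 - |S|^2)/2; check on a few Bloch vectors
for S in [(1, 0, 0), (0, 0, 0), (0.3, -0.2, 0.5)]:
    assert np.isclose(linear_EE(S), 0.5 * (1 - np.dot(S, S)))

# Bounds: S_L = 0 on the Bloch sphere (full coherence), 1/2 at its center
assert np.isclose(linear_EE((0, 0, 1)), 0.0)
assert np.isclose(linear_EE((0, 0, 0)), 0.5)
```

The identity follows from $\text{tr}[(S\cdot\sigma)^{2}]=2|S|^{2}$, which also makes explicit that $\overline{S}_{L}$ measures the averaged distance of $\hat{\vec{S}}(t)$ from the unit Bloch sphere.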
Figure 2: (Color online) Time evolution of the bosonic population imbalance
$S_{z}(t)$ (red solid lines) for the initial state
$|\Psi(0)\rangle=|\phi_{0}\rangle\otimes|\pi/2,\pi/4\rangle$ and for various
fixed impurity-Bose interaction strengths, in which (a) $g_{IB}=0.01$, (b)
$g_{IB}=0.1$ and (c) $g_{IB}=1.0$. For comparisons, the classical $Z(t)$
dynamics is depicted as well (all blue dashed lines). Figure 3: (Color online)
(a) The linear EE evolutions for the initial state
$|\Psi(0)\rangle=|\phi_{0}\rangle\otimes|\pi/2,\pi/4\rangle$ and for the post-
quench interaction strengths $g_{IB}=0.01$ (red solid line), $g_{IB}=0.1$
(green solid line) and $g_{IB}=1.0$ (blue solid line). (b) Infinite-time
averaged values for the linear EE for $g_{IB}=1.0$ and for various ACSs. (c) A
typical classical phase space for the BJJ with an attractive on-site
interaction, where the red stars denote the corresponding classical fixed
points.
### III.2 Recovery of the integrability
In this section, we investigate the scenario where the impurity is initially
pumped into a highly excited state. The out-of-equilibrium dynamics is again
triggered by a sudden quench of the impurity-Bose interaction strength. Our
main aim here is to show that the integrability of the bosonic species is
recovered by preparing the impurity in a highly excited state. The
initial condition of the impurity therefore provides an additional DOF for
controlling the ITC transition of the majority bosonic species. We note
that the employed notion of “integrability” specifically refers to how close
the bosonic dynamics in the interacting cases ($g_{IB}>0$) is to that of the
non-interacting integrable case ($g_{IB}=0$), which differs from the
commonly used context in which integrability is uniquely associated with the
system’s Hamiltonian.
For illustrative purposes, we consider an impurity initially in
$|\phi_{150}\rangle$, the 150-th excited state, and focus on
the post-quench interaction strength $g_{IB}=1.0$. The many-body state at
$t=0$ is again given by
$|\Psi(0)\rangle=|\phi_{150}\rangle\otimes|\pi/2,\pi/4\rangle$. The
corresponding quantum evolution of the bosonic population imbalance $S_{z}(t)$
is depicted in Fig. 4 (a) (red solid line). Compared to the classical
$Z(t)$ dynamics [Fig. 4 (a), blue dashed line], we find a good agreement
with negligible discrepancies. Interestingly, these discrepancies
are even much smaller than those between $S_{z}(t)$ and $Z(t)$ for the case
$g_{IB}=0.01$ [c.f. Fig. 2 (a)]. Besides, the negligible
increment of the corresponding linear EE in the course of the dynamics
likewise signals the recovery of the integrability for the bosonic
species [c.f. Fig. 4 (b)].
Figure 4: (Color online) Time evolution of the bosonic population imbalance
$S_{z}(t)$ for $g_{IB}=1.0$ and for the initial state
$|\Psi(0)\rangle=|\phi_{150}\rangle\otimes|\pi/2,\pi/4\rangle$ (red solid
line), together with the classical $Z(t)$ dynamics (blue dashed line) which
corresponds to the $S_{z}(t)$ dynamics for $g_{IB}=0$. (b) The evolution of
the linear EE for the corresponding case.
## IV Spectral analysis and induced interaction
In order to shed light on the physics underlying the bosonic dynamics
analyzed above, we hereafter perform a detailed spectral analysis of the mixture with
respect to both the energy spectrum and the eigenstates via numerically
exact diagonalization (ED). In particular, we would like to unveil the
physical origin of the observed ITC transition of the bosonic species,
manifested in the corresponding dynamics of the population imbalance.
Moreover, we will discuss the impurity-induced Bose-Bose attraction
identified above as well as its impact on the bosonic dynamics.
### IV.1 Spectral structure
Let us begin with the case for $g_{IB}=0$. In the absence of the interspecies
interaction, the two species are completely decoupled. As a result, the
eigenenergy of the mixture is trivially given by
$E=\epsilon_{k}+\epsilon^{B}_{l}$ with $\epsilon_{k}$ and $\epsilon^{B}_{l}$
being the $k$-th and $l$-th eigenvalue for the subsystem Hamiltonians
$\hat{H}_{I}$ and $\hat{H}_{B}$, respectively. In the absence of a direct Bose-
Bose interaction, the many-body spectrum of $\hat{H}_{B}$ is always
equidistant with energy difference $2J_{0}$ between two successive
levels, which accounts for the harmonic oscillation of the $S_{z}(t)$ dynamics
for $g_{IB}=0$ [c.f. Fig. 1 (b)]. As for the impurity, owing to the
rapid growth of the energy difference between two successive eigenstates, the
single-particle spectrum is inhomogeneous: the high-energy part is
much sparser than the low-energy one [c.f. Fig. 1 (a)]. An
important consequence of this spectral structure for the mixture’s many-body
spectrum is the following. For $\delta_{i}>\Delta_{B}$, with
$\delta_{i}=\epsilon_{i+1}-\epsilon_{i}$ being the energy difference between
the $i$-th and the $(i+1)$-th single-particle eigenstates of the DW potential
(see also the discussion in Sec. II.1) and $\Delta_{B}$ representing the
width of the spectrum of the Hamiltonian $\hat{H}_{B}$, a band-like structure
is naturally formed in the high-energy part of the many-body spectrum with
band gap $\delta_{i}-\Delta_{B}$, while the energy levels within
each band remain equidistant.
This simple picture, however, ceases to be valid upon the variation of the
impurity-Bose interaction. Indeed, the inclusion of the interspecies
interaction introduces additional coupling between the two subsystems and, as
a result, our spectral analysis needs to be performed with respect to the
complete mixture. Figure 5 showcases the many-body spectrum as a function of
the interspecies interaction strength $g_{IB}$. Owing to the preserved spatial
parity symmetry in the Hamiltonian $\hat{H}$, we present here only half of the
spectrum, corresponding to the even-parity eigenstates. With increasing
$g_{IB}$, the low-energy spectrum shows many avoided crossings among the
energy levels, in sharp contrast to the high-energy spectrum, where
only a linear growth of the level energies is observed [c.f. Figs. 5 (a) and (c)].
Moreover, for the high-energy spectrum, features such as the band-like structure
and the equidistant energy levels within each band, present in
the non-interacting limit, are retained in the interacting cases as well.
These two distinct spectral behaviors can roughly be understood via
the structure of the impurity’s single-particle spectrum [c.f. Fig. 1 (a)].
Owing to the large energy separations among the highly excited states,
transitions of the impurity among those states are strongly suppressed.
From a many-body perspective, the resulting high-energy effective Hamiltonian
of the mixture reads
$\hat{H}^{\prime}=\hat{H}_{I}+\hat{H}_{B}+\hat{H}_{IB}^{\prime}$, with
$\displaystyle\hat{H}_{I}$ $\displaystyle=\sum_{i\gg
1}\epsilon_{i}\hat{a}_{i}^{\dagger}\hat{a}_{i},\qquad\hat{H}_{B}=-2J_{0}\hat{J}_{x},$
$\displaystyle\hat{H}_{IB}^{\prime}$ $\displaystyle\approx\sum_{i\gg
1}2U_{i}^{(1)}\hat{J}_{x}+U_{i}^{(2)}\hat{N}_{B}\approx\sum_{i\gg
1}U_{i}^{(2)}\hat{N}_{B}.$ (27)
Here $U_{i}^{(1)}=U_{iiLR}=U_{iiRL}$, $U_{i}^{(2)}=U_{iiLL}=U_{iiRR}$, and we
notice that $U_{i}^{(1)}=g_{IB}\int
dx\,\phi_{i}(x)\phi_{i}(x)u_{L}(x)u_{R}(x)\approx 0$, due to the negligible
spatial overlap between the two localized states $u_{L}(x)$ and $u_{R}(x)$.
Before proceeding, we note the validity conditions for the above high-energy
effective Hamiltonian: $\delta_{i}\gg\epsilon_{IB}$ and
$\delta_{i}\gg\Delta_{B}$, with $\epsilon_{IB}$ being the interspecies
interaction energy per particle. Eq. (27) explains the observed high-energy
spectral behaviors as follows: since the interspecies interaction
$\hat{H}_{IB}^{\prime}$ now acts merely as a “zero-point” energy of the mixture,
increasing $g_{IB}$ only shifts the energy levels of the
highly excited states. As a result, the band-like structure as well as
the equidistant nature formed in the non-interacting case are naturally
preserved.
In contrast, the densely distributed low-energy (single-particle) spectrum of
the impurity facilitates transitions among different (low-lying) many-body
eigenstates caused by the interspecies interaction $\hat{H}_{IB}$ [c.f. Eq.
(17)]. With increasing $g_{IB}$, this results in the avoided
level-crossings observed in the low-energy many-body spectrum QKT_1 .
### IV.2 Eigenstate delocalization
The avoided level-crossings in the low-energy spectrum impact the
characteristics of the corresponding eigenstates as well. Specifically, they
result in a significant delocalization of the low-lying eigenvectors with
respect to an integrable basis (see below), which, in turn, accounts for the
chaotic nature of the bosonic non-equilibrium dynamics. To demonstrate this,
we introduce the Shannon entropy
$S^{S}_{j}=-\sum_{k}c_{j}^{k}\ln{c_{j}^{k}}$ (28)
of a many-body eigenstate $|\Phi_{j}\rangle$ of the mixture as a measure of
delocalization IPR_1 ; IPR_2 . Here
$c_{j}^{k}=|\langle\psi_{k}|\Phi_{j}\rangle|^{2}$, with
$\{|\psi_{k}\rangle\}$ being the eigenbasis of the Hamiltonian
$\hat{H}_{B}$, used as the “integrable basis”. The Shannon entropy
thereby measures the number of integrable basis vectors that contribute
to each eigenstate: the lower the Shannon entropy, the
closer the eigenstate $|\Phi_{j}\rangle$ is to a non-interacting eigenvector.
From random matrix theory (RMT), for a chaotic system described by the
Gaussian orthogonal ensemble (GOE), the amplitudes $c_{j}^{k}$ are independent
random variables and all eigenstates are completely delocalized QKT_1 .
However, due to spectral fluctuations the weights $\{c_{j}^{k}\}$
fluctuate around $1/D$, yielding the averaged value
$S_{\text{GOE}}=\ln{(0.48D)}$ IPR_1 ; IPR_2 . Here $D=N_{B}+1$ is
the Hilbert-space dimension of the bosonic species, which differs from
the single-species cases IPR_1 ; IPR_2 .
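The GOE reference value $S_{\text{GOE}}=\ln(0.48D)$ can be reproduced by averaging the Shannon entropy of random real unit vectors, whose squared components follow the Porter-Thomas statistics of GOE eigenvectors. A quick Monte-Carlo sketch, taking $D=101$ (a value consistent with the quoted $S_{\text{GOE}}=3.8812=\ln(0.48\times 101)$, i.e. $N_{B}=100$; this identification is our inference, not stated in the text above):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 101  # Hilbert-space dimension; consistent with ln(0.48*101) = 3.8812

def shannon_entropy(v):
    # S^S = -sum_k c_k ln(c_k) with c_k = |<psi_k|Phi>|^2, Eq. (28)
    c = v ** 2 / np.sum(v ** 2)
    c = c[c > 0]
    return -np.sum(c * np.log(c))

# GOE eigenvector components are (to a good approximation) i.i.d. Gaussians
samples = [shannon_entropy(rng.standard_normal(D)) for _ in range(4000)]
S_mean = np.mean(samples)
assert abs(S_mean - np.log(0.48 * D)) < 0.05
```

The small residual deviation reflects that $\ln(0.48D)$ is the large-$D$ asymptotic form of the exact GOE average.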
Figure 6 (a) presents the Shannon entropy of the many-body eigenstates as a
function of their quantum number $j$ (sorted in ascending order with
respect to the energy) for the case $g_{IB}=1.0$. The distinct
localization properties of the low-lying and the highly excited eigenvectors
are clearly exhibited. While the low-energy eigenvectors are delocalized,
with Shannon entropy values close to the GOE result
$S_{\text{GOE}}=3.8812$, a clear decrease of $S^{S}_{j}$ is observed with
increasing $j$, indicating that the high-energy eigenvectors are
significantly localized. We may thus further conjecture that
$S^{S}_{j}\rightarrow 0$ for $j\rightarrow\infty$. Physically, the avoided
level-crossings in the low-energy spectrum result in a strong mixing of
different eigenstates with respect to their physical properties. In this way,
an eigenstate of the non-interacting basis can be largely delocalized after
experiencing a series of avoided level-crossings QKT_1 . On the other hand,
the localized nature of the high-lying excited states can also be readily
seen from the effective Hamiltonian in Eq. (27). Since
$\hat{H}_{IB}^{\prime}$ there corresponds merely to a “zero-point” energy of the
mixture, it is not surprising that the interacting basis (eigenstates of the
mixture for $g_{IB}>0$) is similar to the non-interacting integrable basis.
Before proceeding, let us highlight that the degree of localization of an
eigenstate $|\Phi_{j}\rangle$ also reflects the degree of entanglement
encoded between the impurity and the majority bosons. To see this, we
employ the von Neumann entropy of an eigenstate $|\Phi_{j}\rangle$ few_gs_7 ;
Schmidt ,
$S^{V}_{j}=-\text{tr}(\hat{\rho}_{j}\ln{\hat{\rho}_{j}})$ (29)
with $\hat{\rho}_{j}$ being the reduced density matrix of one species,
obtained by tracing the other species out of $|\Phi_{j}\rangle\langle\Phi_{j}|$.
For the case where the two species are non-entangled, the
eigenstate $|\Phi_{j}\rangle$ is simply a product of the
wavefunctions of the two species, and correspondingly the von
Neumann entropy vanishes, $S^{V}_{j}=0$. By contrast, any entanglement between
the two species leads to an increase of the von Neumann entropy;
one therefore anticipates large $S^{V}_{j}$ values for highly
entangled eigenstates. The von Neumann entropies of the
eigenstates for the case $g_{IB}=1.0$ are shown in Fig. 6 (b). Compared to
Fig. 6 (a), a striking resemblance between the $S^{V}_{j}$ and $S^{S}_{j}$
distributions is transparently observed, manifesting the correspondence
between a delocalized (localized) eigenstate and a large (small)
von Neumann entropy value. Based on this, we refer to the above
eigenstate delocalization as entanglement-induced delocalization.
Finally, let us discuss the impact of the eigenstate delocalization on the
bosonic non-equilibrium dynamics. For the case
$|\Psi(0)\rangle=|\phi_{0}\rangle\otimes|\pi/2,\pi/4\rangle$, the initial
state is mainly a linear superposition of low-lying eigenvectors for
both $g_{IB}=0$ and $g_{IB}=1.0$ [c.f. Fig. 6 (c), left part]. Owing to
the delocalized nature of the eigenstates of the mixture for large
interspecies interactions, the expansion coefficients $\{A_{j}^{1}\}$ for
$g_{IB}=1.0$ are broadly distributed compared to those
($\{A_{j}^{0}\}$) for $g_{IB}=0.0$, reflecting the fact that many more
eigenstates are involved in the bosonic dynamics. Since the energy levels in
the interacting case are no longer equidistant, this gives rise to the
completely irregular behavior of the above $S_{z}(t)$ dynamics [c.f. Fig. 2
(c)]. In contrast, the highly excited states of the interacting basis
preserve the main features of the non-interacting basis, leaving a similar
distribution of the corresponding expansion coefficients [c.f. Fig. 6 (c),
right part]. Together with the equidistant nature of the high-lying energy
levels, this accounts for the integrable $S_{z}(t)$ motion for the
initial state $|\Psi(0)\rangle=|\phi_{150}\rangle\otimes|\pi/2,\pi/4\rangle$
and $g_{IB}=1.0$.
Figure 5: (Color online) Energy spectrum of the mixture as a function of
impurity-Bose interaction strength $g_{IB}$. (a) High-energy part of the
spectrum, (b) A zoom-in view of the high-energy spectrum, (c) Low-energy part
of the spectrum, (d) A zoom-in view of the low-energy spectrum. Figure 6:
(Color online) (a) Shannon entropy $S^{S}_{j}$ for the many-body eigenstates
as a function of quantum number $j$ for the case $g_{IB}=1.0$. The red dashed
line denotes the Shannon entropy from the GOE $S_{\text{GOE}}=3.8812$. (b) Von
Neumann entropy $S^{V}_{j}$ for the eigenstates for the case $g_{IB}=1.0$. (c)
Expansion coefficients $A_{j}=|\langle\Psi(0)|\Phi_{j}\rangle|^{2}$ with
respect to eigenstates for initial states
$|\Psi(0)\rangle=|\phi_{0}\rangle\otimes|\pi/2,\pi/4\rangle$ (left part) and
$|\Psi(0)\rangle=|\phi_{150}\rangle\otimes|\pi/2,\pi/4\rangle$ (right part)
and for the cases $g_{IB}=0.0$ (red solid line, denoted $A_{j}^{0}$)
and $g_{IB}=1.0$ (blue dashed line, denoted $A_{j}^{1}$).
### IV.3 Induced Bose-Bose attraction
The presence of the impurity not only brings the bosonic species into the
chaotic regime, yielding an irregular behavior of the corresponding
$S_{z}(t)$ motion, but also fundamentally changes its dynamical properties. As
we will show below, the impurity effectively induces an attractive Bose-Bose
interaction, which, in turn, leads to a completely different quantum
trajectory compared to the integrable case. To show this, we employ the time-
averaged Husimi distribution (TAHD) TAHD_1 ; BJJ_4 ; QKT_7
$\overline{Q}_{H}(\theta,\varphi)=\lim_{T\rightarrow\infty}\frac{1}{T}\int_{0}^{T}Q_{H}(\theta,\varphi,t)dt,$
(30)
with
$Q_{H}(\theta,\varphi,t)=\frac{N_{B}+1}{4\pi}\langle\theta,\varphi|\hat{\rho}_{B}(t)|\theta,\varphi\rangle,$
(31)
and $\hat{\rho}_{B}(t)$ being the reduced density matrix of the bosonic
species after tracing out the impurity. According to Eq. (21),
$Q_{H}(\theta,\varphi,t)$ satisfies the normalization condition $\int
Q_{H}(\theta,\varphi,t)d\Omega=1$. Physically, the TAHD represents the
probability for the bosons to be found at a specific ACS
$|\theta,\varphi\rangle$, averaged over the entire dynamics; with
respect to its physical meaning, it resembles the probability density function
(PDF) of a classical trajectory. In this sense, the TAHD
represents a quantum trajectory in an averaged manner.
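As a consistency check of Eqs. (21) and (31), the Husimi distribution of any normalized two-mode state integrates to one over the sphere. The sketch below (a minimal illustration) evaluates $Q_{H}$ for a number state $|N^{B}_{L},N^{B}_{R}\rangle$, whose overlap with the ACS follows directly from Eq. (20), and verifies $\int Q_{H}\,d\Omega=1$ numerically:

```python
import numpy as np
from math import comb

NB, NL0 = 20, 12  # number state |N_L = 12, N_R = 8>

def husimi_number_state(theta, NB, NL0):
    # Q_H = (NB+1)/(4 pi) |<theta,phi|N_L0, N_R0>|^2, Eq. (31); the overlap
    # from Eq. (20) is phi-independent for a number state
    overlap2 = (comb(NB, NL0) * np.cos(theta / 2) ** (2 * NL0)
                * np.sin(theta / 2) ** (2 * (NB - NL0)))
    return (NB + 1) / (4 * np.pi) * overlap2

theta = np.linspace(0.0, np.pi, 2001)
dtheta = theta[1] - theta[0]
# phi integral contributes 2 pi; d(Omega) = sin(theta) dtheta dphi, Eq. (21)
norm = 2 * np.pi * np.sum(husimi_number_state(theta, NB, NL0)
                          * np.sin(theta)) * dtheta
assert abs(norm - 1.0) < 1e-4
```

Analytically the integral reduces to a Beta function, $2\binom{N_B}{N^B_L}B(N^B_R+1,N^B_L+1)=2/(N_B+1)$, which cancels the prefactor exactly.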
The computed TAHD for the initial state
$|\Psi(0)\rangle=|\phi_{0}\rangle\otimes|\pi/2,\pi/4\rangle$ and for the case
$g_{IB}=0$ is depicted in Fig. 7 (a), together with the classical trajectory
governed by the Hamiltonian $H_{cl}$ and starting from the phase point
$(Z=0,\varphi=\pi/4)$ (black solid line). Compared to the classical
trajectory, the TAHD profile fully captures its main
characteristics, with the high-$\overline{Q}_{H}(\theta,\varphi)$ regions
precisely matching the positions of the classical trajectory, which
additionally manifests the agreement between the quantum $S_{z}(t)$ and
classical $Z(t)$ dynamics for $g_{IB}=0$ [c.f. Fig. 1 (b)]. The TAHD
for $g_{IB}=1.0$, however, deviates significantly from the non-interacting case
and bears a striking resemblance to the classical trajectory
corresponding to the BH Hamiltonian of Eq. (4) with an on-site attraction
[c.f. Figs. 7 (b) and 3 (c)]. In this sense, we conjecture that an effective Bose-
Bose attraction is induced by the impurity in the dynamics, which, in turn,
alters the corresponding quantum trajectory.
This expectation is indeed confirmed by analyzing the pair-correlation
function few_gs_6 ; few_gs_7 ; GPE_1
$g_{2}(\alpha,\beta)=\frac{\rho_{2}^{B}(\alpha,\beta)}{\rho_{1}^{B}(\alpha)\rho_{1}^{B}(\beta)},$
(32)
for the bosons, with $\rho_{2}^{B}(\alpha,\beta)$ and $\rho_{1}^{B}(\alpha)$
being the reduced two- and one-body density for the bosonic species and
$\alpha,\beta=L,R$. Physically, $\rho_{2}^{B}(L,R)$ denotes a measure for the
joint probability of finding one boson at the left well while the second is at
the right well. Through the division by the one-body densities, the $g_{2}$
function excludes the impact of the inhomogeneous density distribution and
thereby directly reveals the spatial two-particle correlations induced by the
interaction few_gs_6 ; few_gs_7 . Based on this knowledge, let us first
elaborate the $g_{2}$ function for the non-interacting case, which corresponds
to the TAHD depicted in Fig. 7 (a). Since there is no interaction among the
particles, all the bosons thus can independently hop between the two wells,
hence, it always results in $g_{2}^{o}=g_{2}^{d}=1$, with
$g_{2}^{o}=g_{2}(\alpha,\alpha)$ [$g_{2}^{d}=g_{2}(\alpha,\beta\neq\alpha)$]
being the two-particle correlations within the same well (between the two
wells). By contrast, the presence of the impurity-Bose interaction largely
changes the above $g_{2}$ profile. As shown in Fig. 7 (c), the $g_{2}$ function
quickly deviates from the initial values $g_{2}^{o}=g_{2}^{d}=1$ to
$g_{2}^{o}>1$ and $g_{2}^{d}<1$ for $t<5$ and persistently oscillates around
the asymptotic values $g_{2}^{o}=1.28$ and $g_{2}^{d}=0.72$, respectively.
Physically, such an evolution of the $g_{2}$ function indicates that the
bosons favor bunching together and tunneling collectively between the wells in
the dynamics, which evidently manifests the existence of the Bose-Bose
attraction induced by the impurity-Bose repulsion.
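As a numerical sanity check of Eq. (32), the following Python sketch (our illustration; the density values are made up, not simulation output) evaluates $g_{2}$ for a factorized two-body density, which yields $g_{2}^{o}=g_{2}^{d}=1$, and for a bunched density that reproduces the asymptotic values quoted above:

```python
# Numerical sketch of the pair-correlation function of Eq. (32):
# g2(a, b) = rho2(a, b) / (rho1(a) * rho1(b)), with a, b in {L, R}.
# The density values below are illustrative, not taken from the simulation.

def g2(rho2, rho1, a, b):
    """Pair correlation for wells a, b given reduced densities."""
    return rho2[(a, b)] / (rho1[a] * rho1[b])

# Uncorrelated case: rho2 factorizes into a product of one-body densities,
# so g2 = 1 within and between the wells (the g_IB = 0 situation).
rho1 = {"L": 0.5, "R": 0.5}
rho2_uncorr = {(a, b): rho1[a] * rho1[b] for a in "LR" for b in "LR"}
g2_o = g2(rho2_uncorr, rho1, "L", "L")   # same-well correlation -> 1
g2_d = g2(rho2_uncorr, rho1, "L", "R")   # cross-well correlation -> 1

# Bunching case (illustrative numbers): same-well pairs enhanced, cross-well
# pairs suppressed, mimicking the asymptotic values g2_o = 1.28, g2_d = 0.72.
rho2_bunch = {("L", "L"): 0.32, ("R", "R"): 0.32,
              ("L", "R"): 0.18, ("R", "L"): 0.18}
g2_o_bunch = g2(rho2_bunch, rho1, "L", "L")  # 0.32 / 0.25 = 1.28
g2_d_bunch = g2(rho2_bunch, rho1, "L", "R")  # 0.18 / 0.25 = 0.72
```

Any same-well enhancement of $\rho_{2}^{B}$ relative to the product of one-body densities shows up directly as $g_{2}^{o}>1$, mirroring the induced attraction.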
Figure 7: (Color online) Time-averaged Husimi distribution for the initial
state $|\Psi(0)\rangle=|\phi_{0}\rangle\otimes|\pi/2,\pi/4\rangle$ and for (a)
$g_{IB}=0.0$ and (b) $g_{IB}=1.0$. Moreover, the black solid line in (a)
denotes the classical trajectory starting from the phase point
$(Z=0,\varphi=\pi/4)$. (c) The evolution of the pair-correlation function
$g_{2}^{o}(t)$ (blue solid line) and $g_{2}^{d}(t)$ (red solid line) for the
case examined in (b).
## V Conclusions and Outlook
We have demonstrated that a non-interacting ultracold many-body bosonic
ensemble confined in a 1D DW potential can exhibit a chaotic nature due to the
presence of a single impurity. We trigger the non-equilibrium dynamics by
means of a quench of the impurity-Bose interaction and monitor the evolution
of the population imbalance for the bosons between the two wells. While the
increase of the post-quench interaction strength always facilitates the
chaotic motion for the bosonic population imbalance, it becomes regular again
for the cases where the impurity is initially prepared in a highly excited
state. The linear entanglement entropy not only enables us to characterize
such an ITC transition but also implies the existence of an effective
Bose-Bose attraction induced by the impurity in the dynamics. In
order to elucidate the physical origin for the above observed ITC transition,
we perform a detailed spectral analysis for the mixture with respect to both
the energy spectrum as well as the eigenstates. In particular, two
distinguished spectral behaviors upon a variation of the interspecies
interaction strength are observed: while the avoided level-crossings take
place in the low-energy spectrum, the energy levels in the high-energy
spectrum possess the main features of the integrable limit. Consequently, the
low-lying eigenvectors become significantly delocalized, which, in turn,
accounts for the chaotic nature of the bosonic dynamics. In contrast,
those highly excited states bear a high resemblance to the non-interacting
integrable basis, rendering the recovery of the integrability for the bosonic
species. Finally, we discuss the induced Bose-Bose attraction as well as its
impact on the bosonic dynamics.
Possible future investigations include the impact on the bosonic dynamics with
the inclusion of several additional impurities and/or the bare Bose-Bose
repulsion. Since for the latter there exists a competition between the bare
Bose-Bose repulsion and the induced attractive interaction, this may
significantly affect the bosonic ITC transition. Another interesting
perspective is the study of the chaotic dynamics for an atomic mixture
consisting of atomic species with different masses. The impact of the higher
bands of the DW potential, beyond the two-site BH description for the bosonic
species, also merits investigation.
###### Acknowledgements.
The authors acknowledge fruitful discussions with A. Mukhopadhyay and X.-B.
Wei. This work has been funded by the Deutsche Forschungsgemeinschaft (DFG,
German Research Foundation) - SFB 925 - project 170620586. K. K. gratefully
acknowledges a scholarship of the Studienstiftung des deutschen Volkes. G. X.
acknowledges support from the NSFC under Grants No. 11835011 and No. 11774316.
## References
* (1) M. R. Andrews, C. G. Townsend, H.-J. Miesner, D. S. Durfee, D. M. Kurn, and W. Ketterle, Science 275, 637 (1997).
* (2) A. Rohrl, M. Naraschewski, A. Schenzle, and H. Wallis, Phys. Rev. Lett. 78, 4143 (1997).
* (3) M. Albiez, R. Gati, J. Fölling, S. Hunsmann, M. Cristiani, and M. K. Oberthaler, Phys. Rev. Lett. 95, 010402 (2005).
* (4) B. D. Josephson, Phys. Lett. 1, 251 (1962).
* (5) R. Gati and M. K. Oberthaler, J. Phys. B: At. Mol. Opt. Phys. 40 R61 (2007).
* (6) W. D. Phillips, Rev. Mod. Phys. 70, 721 (1998); I. Bloch, J. Dalibard, and W. Zwerger, Rev. Mod. Phys. 80, 885 (2008).
* (7) A. Smerzi, S. Fantoni, S. Giovanazzi, and S. R. Shenoy, Phys. Rev. Lett. 79, 4950 (1997).
* (8) S. Raghavan, A. Smerzi, S. Fantoni, and S. R. Shenoy, Phys. Rev. A 59, 620 (1999).
* (9) G. J. Milburn, J. Corney, E. M. Wright, and D. F. Walls, Phys. Rev. A 55, 4318 (1997).
* (10) K. Sakmann, A. I. Streltsov, O. E. Alon, and L. S. Cederbaum, Phys. Rev. A 89, 023602 (2014).
* (11) K. Sakmann, A. I. Streltsov, O. E. Alon, and L. S. Cederbaum, Phys. Rev. A 82, 013620 (2010).
* (12) J. Estève, C. Gross, A. Weller, S. Giovanazzi, and M. K. Oberthaler, Nature (London) 455, 1216 (2008).
* (13) B. Juliá-Díaz, T. Zibold, M. K. Oberthaler, M. Melé-Messeguer, J. Martorell, and A. Polls, Phys. Rev. A 86, 023615 (2012).
* (14) F. Haake, Quantum Signatures of Chaos, (Springer, Berlin, Heidelberg, 2010).
* (15) F. Haake, M. Kus and R. Scharf, Z. Phys. B 65, 381 (1987).
* (16) M. Lombardi and A. Matzkin, Phys. Rev. E 83, 016207 (2011).
* (17) R. Alicki, D. Makowiec, and W. Miklaszewski, Phys. Rev. Lett. 77, 838 (1996).
* (18) Jayendra N. Bandyopadhyay and A. Lakshminarayan, Phys. Rev. E 69, 016201(2004).
* (19) S. Chaudhury, A. Smith, B. E. Anderson, S. Ghose and P. S. Jessen, Nature 461, 768 (2009).
* (20) A. Piga, M. Lewenstein, and J. Q. Quach, Phys. Rev. E 99, 032213 (2019).
* (21) X. G. Wang, S. Ghose, B. C. Sanders, and B. B. Hu, Phys. Rev. E 70, 016217 (2004).
* (22) C. Neill, P. Roushan, M. Fang, Y. Chen, M. Kolodrubetz, Z. Chen, A. Megrant, R. Barends, B. Campbell, B. Chiaro, A. Dunsworth, E. Jeffrey, J. Kelly, J. Mutus, P. J. J. O’Malley, C. Quintana, D. Sank, A. Vainsencher, J. Wenner, T. C. White, A. Polkovnikov, and J. M. Martinis, Nat. Phys. 12, 1037 (2016).
* (23) J. B. Ruebeck, J. Lin, and A. K. Pattanayak, Phys. Rev. E 95, 062222 (2017).
* (24) U. T. Bhosale and M. S. Santhanam, Phys. Rev. E 95, 012216 (2017).
* (25) S. Ghose, R. Stock, P. Jessen, R. Lal, and A. Silberfarb, Phys. Rev. A 78, 042318 (2008).
* (26) Y. S. Weinstein, S. Lloyd, and C. Tsallis, Phys. Rev. Lett. 89, 214101 (2002).
* (27) M. Heyl, P. Hauke, and P. Zoller, Sci. Adv. 5, eaau8342 (2019).
* (28) L. M. Sieberer, T. Olsacher, A. Elben, M. Heyl, P. Hauke, F. Haake, and P. Zoller, npj Quantum Information 5, 78 (2019).
* (29) A. N. Wenz, G. Zürn, S. Murmann, I. Brouzos, T. Lompe, S. Jochim, Science 342, 457 (2013).
* (30) G. Zürn, A. N. Wenz, S. Murmann, A. Bergschneider, T. Lompe, and S. Jochim, Phys. Rev. Lett. 111, 175302 (2013).
* (31) S. Murmann, A. Bergschneider, V. M. Klinkhamer, G. Zürn, T. Lompe, S. Jochim, Phys. Rev. Lett. 114, 080402 (2015).
* (32) F. Serwane, G. Zürn, T. Lompe, T. B. Ottenstein, A. N. Wenz, and S. Jochim, Science 332, 336 (2011).
* (33) G. Zürn, F. Serwane, T. Lompe, A. N. Wenz, M. G. Ries, J. E. Bohn, and S. Jochim, Phys. Rev. Lett. 108, 075303 (2012).
* (34) S. Murmann, F. Deuretzbacher, G. Zürn, J. Bjerlin, S. M. Reimann, L. Santos, T. Lompe, and S. Jochim, Phys. Rev. Lett. 115, 215301 (2015).
* (35) A. S. Dehkharghani, F. F. Bellotti, and N. T. Zinner, J. Phys. B: At., Mol., Opt. Phys. 50, 144002 (2017).
* (36) H. P. Hu, L. Pan, and S. Chen, Phys. Rev. A 93, 033636 (2016).
* (37) A. S. Dehkharghani, A. G. Volosniev, and N. T. Zinner, J. Phys. B: At., Mol., Opt. Phys. 49, 085301 (2016).
* (38) D. Pȩcak, A. S. Dehkharghani, N. T. Zinner, and T. Sowiński, Phys. Rev. A 95, 053632 (2017).
* (39) K. Keiler, S Krönke and P. Schmelcher, New J. Phys. 20, 033030 (2018).
* (40) J. Chen, J. M. Schurer, and P. Schmelcher, Phys. Rev. A 98, 023602 (2018).
* (41) J. Chen, J. M. Schurer, and P. Schmelcher, Phys. Rev. Lett. 121, 043401 (2018).
* (42) H. P. Hu, L. M. Guan, and S. Chen, New J. Phys. 18, 025009 (2016).
* (43) B. Fang, P. Vignolo, M. Gattobigio, C. Miniatura, and A. Minguzzi, Phys. Rev. A 84, 023626 (2011).
* (44) M. Pyzh, S Krönke, C. Weitenberg and P. Schmelcher, New J. Phys. 20, 015006 (2018).
* (45) A. C. Pflanzer, S. Zöllner, and P. Schmelcher, Phys. Rev. A 81, 023612 (2010).
* (46) A. C. Pflanzer, S. Zöllner, and P. Schmelcher, J. Phys. B: At., Mol., Opt. Phys. 42, 231002 (2009).
* (47) L. D. Landau and S. I. Pekar, Zh. Eksp. Teor. Fiz. 18, 419 (1948).
* (48) R. P. Feynman, Phys. Rev. 97, 660 (1955).
* (49) A. S. Alexandrov and J. T. Devreese, Advances in Polaron Physics, (Springer-Verlag, Berlin, 2010).
* (50) H. -P. Breuer and F. Petruccione, The Theory of Open Quantum Systems, (Oxford University Press, USA, 2007).
* (51) M. Rinck and C. Bruder, Phys. Rev. A 83, 023608 (2011).
* (52) F. Mulansky, J. Mumford, and D. H. J. O’Dell, Phys. Rev. A 84, 063602 (2011).
* (53) J. Mumford and D. H. J. O’Dell, Phys. Rev. A 90, 063617 (2014).
* (54) J. Mumford, J. Larson, and D. H. J. O’Dell, Phys. Rev. A 89, 023620 (2014).
* (55) J. Mumford, W. Kirkby, and D. H. J. O’Dell, J. Phys. B: At. Mol. Opt. Phys. 53, 145301 (2020).
* (56) J Mumford et al., J. Phys. B: At. Mol. Opt. Phys. 53, 145301 (2020).
* (57) M. Olshanii, Phys. Rev. Lett. 81, 938 (1998).
* (58) C. Chin, R. Grimm, P. Julienne, and E. Tiesinga, Rev. Mod. Phys. 82, 1225 (2010).
* (59) T. Köhler, K. Góral, and P. S. Julienne, Rev. Mod. Phys. 78, 1311 (2006).
* (60) A. G. Truscott, K. E. Strecker, W. I. McAlexander, G. B. Partridge, and R. G. Hulet, Science 291, 2570 (2001).
* (61) F. Schreck, L. Khaykovich, K. L. Corwin, G. Ferrari, T. Bourdel, J. Cubizolles, and C. Salomon, Phys. Rev. Lett. 87, 080403 (2001).
* (62) D. S. Hall, M. R. Matthews, J. R. Ensher, C. E. Wieman, and E. A. Cornell, Phys. Rev. Lett. 81, 1539 (1998).
* (63) S. Tojo, Y. Taguchi, Y. Masuyama, T. Hayashi, H. Saito, and T. Hirano, Phys. Rev. A 82, 033609 (2010).
* (64) J. Chen, A. K. Mukhopadhyay, and P. Schmelcher, Phys. Rev. A 102, 033302 (2020).
* (65) C. J. Pethick and H. Smith, Bose-Einstein Condensation in Dilute Gases, (Cambridge University Press, New York, 2008).
* (66) M. Holthaus and S. Stenholm, Eur. Phys. J. B 20, 451 (2001).
* (67) M. Tabor, Chaos and Integrability in nonlinear Dynamics: An Introduction, (Wiley-Interscience, USA, 1989).
* (68) F. T. Arecchi, Eric Courtens, R. Gilmore, and H. Thomas, Phys. Rev. A 6, 2211 (1972).
* (69) J. M. Radcliffe, J. Phys. A: Gen. Phys. 4, 313 (1971).
* (70) T. Zibold, E. Nicklas, C. Gross, and M. K. Oberthaler, Phys. Rev. Lett. 105, 204101 (2010).
* (71) J. Tomkovic, W. Muessel, H. Strobel, S. Löck, P. Schlagheck, R. Ketzmerick, and M. K. Oberthaler, Phys. Rev. A 95, 011602(R) (2017).
* (72) O. Penrose and L. Onsager, Phys. Rev. 104, 576 (1956).
* (73) K. Sakmann, A. I. Streltsov, O. E. Alon, and L. S. Cederbaum, Phys. Rev. A 78, 023615 (2008).
* (74) M. V. Berry and M. Tabor, Proc. R. Soc. Lond. A 356, 375 (1977).
* (75) S. R. Dahmen et al., J. Stat. Mech., P10019 (2004).
* (76) L. D’Alessio, Y. Kafri, A. Polkovnikov, and M. Rigol, Adv. Phys. 65, 239 (2016).
* (77) L. F. Santos and M. Rigol, Phys. Rev. E 81, 036206 (2010).
* (78) L. F. Santos and M. Rigol, Phys. Rev. E 82, 031130 (2010).
* (79) A. Pathak, Elements of Quantum Computation and Quantum Communication, (Taylor & Francis, 2013).
* (80) K. Husimi, Proc. Phys. Math. Soc. Jpn. 22, 264 (1940).
# Harvest: A Reliable and Energy Efficient Bulk Data Collection Service for
Large Scale Wireless Sensor Networks
Vinayak Naik
BITS Pilani, Goa
<EMAIL_ADDRESS>
Anish Arora
The Ohio State University
<EMAIL_ADDRESS>
###### Abstract
_We present a bulk data collection service, Harvest, for energy constrained
wireless sensor nodes. To increase spatial reuse and thereby decrease latency,
Harvest performs concurrent, pipelined exfiltration from multiple nodes to a
base station. To this end, it uses a distance-k coloring of the nodes, notably
with a constant number of colors, which yields a TDMA schedule whereby nodes
can communicate concurrently with low packet losses due to collision. This
coloring is based on a randomized CSMA approach which does not exploit
location knowledge. Given a bounded degree of the network, each node waits
only O$(1)$ time to obtain a unique color among its distance-k neighbors, in
contrast to the traditional deterministic distributed distance-k vertex
coloring wherein each node waits O$(\Delta^{2})$ time to obtain a color._
_Harvest offers the option of limiting memory use to only a small constant
number of bytes or of improving latency with increased memory use; it can be
used with or without additional mechanisms for reliability of message
forwarding._
_We experimentally evaluate the performance of Harvest using 51 motes in the
Kansei testbed. We also provide theoretical as well as TOSSIM-based comparison
of Harvest with Straw, an extant data collection service implemented for
TinyOS platforms, which uses one-node-at-a-time exfiltration. For networks with
more than 3-hops, Harvest reduces the latency by at least 33% as compared to
that of Straw._
## 1 Introduction
Wireless sensor nodes often maintain logs of network, environment, middleware,
and application behavior. Examples of logged information include link
qualities, network routes, sensory data, mobility traces, exception reports,
application statistics, etc. The collection of bulk data from a number of
wireless sensor nodes is thus a frequent requirement for testers, operators,
managers, modelers, and users. In this paper, we focus on the convergecast of
the (potentially different) bulk data logged at a number of nodes to one “base
station” node. We consider an “off-line” setting where no other data traffic
is present on the network; this case arises when the bulk data collection can
materially interfere with ongoing application traffic or when the
size/generation-rate of the bulk data exceeds the effective communication
capacity of the source nodes with respect to the base station.
As networks scale to larger numbers of nodes and communication hops and as the
bulk datum sizes grow, the reliability, energy-efficiency, and latency of the
collection operation become key issues. While these issues have been well
studied in the context of bulk data dissemination, they have received far less
attention in the case of bulk data convergecast. Also, since network debugging
and management are primary motivations for the collection service, it is
desirable that this service have a small footprint in terms of instruction
memory, data memory, message overhead, and wireless traffic, and to minimize
its dependence on other network services, including localization and time
synchronization. In this paper, we present and evaluate a protocol that meets
these requirements; we call this protocol Harvest.
Harvest achieves its reliability with two measures: First, it schedules the
transmissions (of messages containing bulk datum pieces) so that message
losses due to collision are reduced. We reduce the problem of computing a TDMA
schedule that avoids hidden terminals to computing a distance-2 (henceforth
D-2) vertex coloring; the color of a node decides the slot in which it can
transmit. In the unit disk graph model, these two problems are equivalent [6]:
this intuitively follows from the observation that if two non-neighboring nodes $u$
and $v$ interfere with each other, then there exists a node $w$ such that there
are edges $(u,w)$ and $(w,v)$ in the unit disk graph. Second, Harvest uses
acknowledgments and retransmissions at the MAC layer to recover from losses.
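The reduction from collision-free scheduling to D-2 coloring can be checked mechanically on a toy topology. The Python sketch below (ours, not part of Harvest) verifies that a 3-color D-2 coloring of a 6-node path assigns distinct slots to every pair of nodes within two hops, so no hidden-terminal collision can occur at a common receiver:

```python
# Sketch: a distance-2 (D-2) vertex coloring yields a collision-free TDMA
# schedule in the unit-disk model, since any two nodes that can collide at a
# common receiver are within two hops of each other. The 6-node path and its
# 3-color schedule below are illustrative only.

def within_two_hops(adj, u, v):
    """True if v is a 1-hop or 2-hop neighbor of u."""
    return v in adj[u] or any(v in adj[w] for w in adj[u])

# Path graph 0-1-2-3-4-5 with the base station at node 0.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
color = {0: 0, 1: 1, 2: 2, 3: 0, 4: 1, 5: 2}  # slot index = color

# Valid D-2 coloring: no two distinct nodes within 2 hops share a color.
is_d2 = all(color[u] != color[v]
            for u in adj for v in adj
            if u != v and within_two_hops(adj, u, v))
```

Nodes 0 and 3 reuse color 0 safely because they are three hops apart; this is exactly the spatial reuse Harvest exploits.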
Harvest achieves its energy efficiency by avoiding energy-intensive flash
operations: it performs at most one flash read and no flash write for any bulk
datum piece, and stores the piece at nodes en route to the base station only
in their RAM. Of course, avoiding message collision also yields energy
efficiency. Harvest keeps the message overhead low (below 9 bytes per bulk
data packet). To keep the number of control message transmissions low, we
present a distributed algorithm for its TDMA schedule computation that
generates O(1) control message per node involved, which is significantly
better than the traditional, distributed alternative, which incurs
O$(\Delta^{2})$ per node involved, where $\Delta$ is the node degree. As we
explain shortly, this improvement is enabled by computing vertex coloring with
a constant number of colors that may be smaller than $\Delta$. As a result,
not every node in the network is colored and so the TDMA schedule has to be
computed in an ongoing manner, which in turn implies the control message
savings is an ongoing one. Moreover, since an idle radio also consumes
significant power (of the same order as that during message reception),
Harvest schedules the switching off of the radio to save energy:
asymptotically, a node keeps its radio on only for the time it is scheduled to
send data on behalf of itself or some other node.
Finally, Harvest achieves low latency in two ways. For one, it exploits
spatial reuse. Instead of collecting data from only one node at a time,
Harvest allows data collection from some constant number of nodes
concurrently. For ease of exposition only, we let the concurrency constant be
2 henceforth (the protocol assumes that the user will specify this concurrency
constant as a parameter; in fact, larger constants for dense networks yield
lower latency). For nodes (including the base station) to concurrently receive
data from 2 nodes, 4 colors are needed in the vertex coloring (one for the
node, two for its allowed children, and one for its parent node). In other
words, regardless of the density of the network, Harvest colors at most 4
nodes in the interference region of any node at any time. Once a colored node
has completed its transmissions, Harvest lets an uncolored node in its
interference region assume that color; this is the on-going aspect of the
TDMA schedule computation. We validate that a concurrency constant of 2 yields
33% less latency for large networks —specifically networks larger than 3 hops—
than the approach that collects data from one node at a time; the latter
approach is adopted by the Straw protocol. This latency improvement is
obtained even if we restrict Harvest to allow concurrent reception at only the
base station and reception from at most one child node at every other colored
node. And two, we ensure that the control algorithms used by Harvest have
constant time complexities. In particular, our TDMA slot synchronization
algorithm (which obviates the need for a time synchronization service) has a
local convergence time of O$(1)$ as opposed to O$(\Delta^{2})+$O$(D)$ in
traditional algorithms, given the bound on the node density.
Contributions of the paper.
1. 1.
We present a randomized distributed algorithm that assigns a constant number
of colors in the D-2 region of every node (if such a coloring is possible) so
that each node that gets a color waits O$(1)$ time until it gets a unique
color; this contrasts with the O$(\Delta^{2})$ wait time of traditional
deterministic D-2 coloring algorithms. This is achieved by executing our TDMA
scheduling algorithm on top of a CSMA/CA-based MAC; thus, in the event that
two nodes within reliable range of each other will start to contend on a color
simulataneously (i.e. in the same slot), then due to CSMA/CA, one of the two
nodes will back off and receive a packet from the other contending node,
therby yielding that color. We note that the algorithm would work given (local
or nonlocal) ways of calculating the interference regions of nodes other than
our method for local computation of the D-2 neighborhood set.
2. 2.
We present an algorithm by which a node synchronizes its TDMA slots with those
of other colored nodes within O$(1)$ time of its being colored. Traditional
deterministic TDMA slot synchronization algorithms have O$(Dia)$ convergence
time, where $Dia$ is the diameter of the network. Intuitively, the reason for
the O$(Dia)$ convergence time is that the maximum number of colors used by the
nodes in any interference region needs to be propagated to all the nodes after
the D-2 coloring so that they can agree on the transmission period. Harvest
achieves constant time convergence because of its use of a pre-specified
constant number of colors.
3. 3.
We present a data collection algorithm that can use a small constant number
(even 1) of packet-sized buffers, irrespective of the number of nodes and the
bulk datum sizes. In effect, each node in Harvest can have at the most two
children node that send data to it. Using 1 buffer let’s the node forward on
behalf of only one child; using two let’s the node forward on behalf of both;
and using more buffers helps in further reducing the latency.
4. 4.
We evaluate the performance benefit of the spatial reuse and TDMA scheduling
in Harvest, by providing a comparative performance with the Straw protocol, in
terms of latency and energy improvement achieved via the former and the
relative message overheads (Straw uses 7 bytes per packet versus Harvest’s 9).
Organization of the paper. In Section 2, we present the system model and
problem statement in more detail. We describe the Harvest protocol and its
TinyOS implementation in Section 3. We analyse the performance of the
randomized TDMA scheduling algorithm and the data collection components of
Harvest in Section 4. In Section 5, we overview the Straw protocol and compare
its performance with that of Harvest, analytically and via TOSSIM simulations.
We describe a number of extensions of Harvest and discuss its relevance for
collecting on-line streaming data in Section 6. We discuss related work in
Section 7 and make concluding remarks in Section 8.
## 2 The Bulk Data Convergecast Problem
The system consists of $n,n>0,$ wireless sensor nodes, called motes, one of
which is distinguished as a base station. We do not make assumptions of how
the mote locations are spatially distributed, nor do we assume the
availability of a location service. We do assume that each mote can
communicate with the base station over zero or more communication hops and the
degree $\Delta$ in the network is bounded.
Some of the motes initially each have a bulk datum in their data store, whose
size may vary from mote to mote and may exceed that of the mote’s RAM. For
simplicity, we assume that each bulk datum resides in the nonvolatile store of
its mote. Desired is a middleware service that upon initiation from the base
station collects all motes’ bulk data at the base station. In decreasing
order of importance, the performance metrics of the service are: first, energy
efficiency and reliability, and, second, latency, by which we mean the time
taken at the base station from the initiation to the completion of the bulk
data collection operation.
With respect to energy, Table 1 illustrates the energy cost in terms of
current draw for common operations for the case of motes in the Mica-2/XSM
family.
Operation | Current Draw
---|---
CPU and Idle Radio | 8 mA
Packet Reception | 7.03 mA
Packet Transmission | 10.4 mA
EEPROM Read | 6.2 mA
EEPROM Write | 18.4 mA
Table 1: Energy required by common operations
The table shows that the aggregate current draw for an EEPROM read and write
(24.6 mA) is significant: every flash operation added to the bulk convergecast
forwarding process would almost double the minimum aggregate current draw of
radio reception, CPU, and radio transmission (25.43 mA). It follows that
minimizing EEPROM operations
is desirable for the energy metric. The table also identifies the energy
overhead associated with an idle radio. One implication is that a mote should
sleep as soon as it has no data to send of its own or on behalf of other
motes.
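The arithmetic behind these claims follows directly from Table 1; the short sketch below just tallies the two aggregate current draws compared in the text:

```python
# Tally of the per-operation current draws (mA) from Table 1, showing why an
# EEPROM read + write is of the same order as the minimum radio/CPU cost of
# forwarding a packet, so extra flash operations nearly double per-hop cost.

current = {
    "cpu_idle_radio": 8.0,
    "rx": 7.03,
    "tx": 10.4,
    "eeprom_read": 6.2,
    "eeprom_write": 18.4,
}

flash_cost = current["eeprom_read"] + current["eeprom_write"]             # 24.6 mA
forward_cost = current["rx"] + current["cpu_idle_radio"] + current["tx"]  # 25.43 mA
```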
With respect to reliability, we focus attention on obtaining high but not
necessarily 100% reliable data collection. Unlike the dual problem of bulk
data dissemination, where objectives such as mote reprogramming demand all-or-
nothing delivery of bulk data, the use cases of bulk data convergecast can
often tolerate low levels of unreliability. In designing our solution, we do
not emphasize a particular selection of a link estimation technique or
retransmission mechanism. (Specifically, our experimental evaluation of our
solution uses the WMEWMA link estimation approach of Woo and Culler [16] and
0-retransmissions, but these choices are not of central importance.)
With respect to latency, we note that the problem statement does not emphasize
the latency of collection from the perspective of individual motes. Had we
considered the version of the problem where motes continually generate
and stream data to the base station, low jitter and comparable latency across
the motes would have been desirable. We therefore regard these latter
requirements as being optional, but not first order, for solving the problem.
Finally, in designing our solution, we do not assume the availability of a
time synchronization service.
## 3 The Harvest Protocol
### 3.1 The Components of Harvest
In this section, we describe the three components of Harvest, viz.
interference neighborhood discovery, randomized slot assignment and
synchronization, and data collection.
Neighborhood Discovery. Each node performs online link estimation to find out
its 1-hop neighborhood set. For ease of exposition, we first assume that all
the links are symmetric, i.e., the link quality between two nodes is the same
in both directions. Therefore, it is sufficient to do link estimation in any
one direction. However, links in a sensor network may not always be symmetric,
so we will extend the link estimation to both directions to deal with
asymmetric links.
A number of metrics can be used for this link estimation; for instance we may
use the window mean with exponentially weighted moving average (WMEWMA)
metric, which has been used by the MintRoute protocol [16] of Woo and Culler.
There are two tuning parameters for WMEWMA-based link estimation, viz.
$\alpha$ and $t$: $\alpha$ determines the size of the history used in link
estimation and $t$ determines the rate at which the link estimate is updated.
(Experimental results in the literature show that the values 0.6 and 30 for
$\alpha$ and $t$, respectively, provide stable and agile link estimation for
Chipcon’s CC1000 radio; in particular, this concerns the settling time, i.e.,
the length of time for the estimator to converge within $\pm 10\%$ of the
actual value and remain within the error bound.)
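As a rough illustration of such an estimator (our sketch, not the MintRoute code; the 0-100 scaling and the exact update form are assumptions), the two stages can be written as:

```python
# Sketch of a WMEWMA link estimator: the reception rate is first averaged over
# a window of t expected packets (window mean), and the window means are then
# smoothed with an exponentially weighted moving average. The 0-100 scaling
# matches the parent-selection threshold of 75 used later in the text.

def window_mean(received_seqnos, first_seqno, t):
    """Fraction of the t expected packets (by sequence number) actually heard."""
    expected = set(range(first_seqno, first_seqno + t))
    return len(expected & set(received_seqnos)) / t

def wmewma(window_means, alpha=0.6):
    """Smooth per-window reception rates (in [0, 1]) with an EWMA."""
    est = window_means[0]
    for m in window_means[1:]:
        est = alpha * est + (1.0 - alpha) * m  # alpha weights the history
    return 100.0 * est  # scale to the 0-100 range used for the threshold
```

With $\alpha = 0.6$, a sudden drop in a single window moves the estimate by only 40% of the drop, which is the stability/agility trade-off the parameter controls.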
Based on link estimation, path selection to the base station can be based
again on a number of metrics studied in the literature, e.g., end-to-end path
reliability, hop distance, end-to-end MAC latency, etc.; for instance, we may
use a combination of link quality and hop distance. In particular, we define
the 1-hop neighborhood of a node $A$ to be the set of nodes that have WMEWMA
value greater than or equal to 75 (which roughly implies a stable packet loss
rate less than 10%). Among the 1-hop neighbors, node $A$ selects a node with
the least hop distance to the base station as its parent. Using TOSSIM
simulations, we find that the minimum WMEWMA link quality between two nodes at
2-hops from each other is 30 (which roughly implies the nodes can reliably
sense each other’s carrier).
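The parent-selection rule just described can be sketched as follows; the tie-break by node ID is our assumption, since the text does not specify one:

```python
# Sketch of Harvest's parent selection: restrict to 1-hop neighbors
# (WMEWMA >= 75) and pick the one with the smallest hop distance to the base
# station. Ties broken by node ID here (our assumption).

def select_parent(neighbors):
    """neighbors: list of (node_id, wmewma, hops_to_base) tuples."""
    candidates = [n for n in neighbors if n[1] >= 75]
    if not candidates:
        return None  # no sufficiently reliable 1-hop neighbor
    return min(candidates, key=lambda n: (n[2], n[0]))[0]

# Node 9 is excluded by the quality threshold; node 3 wins on hop distance.
parent = select_parent([(7, 90, 2), (3, 80, 1), (9, 60, 1)])
```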
Randomized Slot Assignment and Synchronization. As explained in Section 1,
Harvest uses 4 colors in the entire network (i.e., two more colors than our de
facto concurrency constant of 2). The TDMA scheduling divides time into
intervals of length $T=4*t_{S}$, where $t_{S}$ is the duration of a timeslot.
Note that the color assignment should be such that two nodes can use the same
color only if they are not within 2 hops of each other. Further, every node
can have at most two children. In each time period $T$, a node can forward only one
packet (this could be its own packets or on behalf of one of its two
children). The parent can signal which child should send a packet next by
ordering the child IDs in the Harvest message, as we shall explain later.
To begin the TDMA scheduling, the base station selects a color for itself and
starts sending beacon messages in its timeslot. The 1-hop neighbors of base
station randomly select an available color and start sending their payload.
Every node’s packet contains the IDs of its 1-hop neighborhood transmitters.
If node A hears its 1-hop neighbor transmitting in the same time-slot, then one
of the two nodes gives up its color. The priority among the contending nodes is
decided by considering which node was the first to select the color and then
by the IDs of the nodes. Thus, priority is locally computed by looking at the
sequence number of the messages and the unique IDs of the nodes.
The underlying MAC layer in Harvest is CSMA/CA based. As a result, even if two
nodes try to transmit in the same time slot, only one of them can succeed. We
claim that this phenomenon applies to both scenarios, viz. when the two
contending nodes are 1-hop neighbors of each other or 2-hops neighbors of each
other. The nodes in the 1-hop neighborhood of the base station select a color
and the wave propagates. After a node selects a color for itself or finds that
there are no available colors in its D-2 neighborhood, it turns off the
backoffs in the underlying CSMA/CA. Every node maintains a list of node IDs,
which are using the 4 colors in its D-2 neighborhood, as a soft state.
Whenever a node finishes its data transmission, it stops transmitting and its
color is available for reuse. All the nodes in the D-2 neighborhood learn this
information by the virtue of the soft state and enable backoff in the
underlying CSMA/CA. And the process of randomized color selection repeats.
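A minimal sketch of the soft-state color table and randomized claiming described above (names and the timeout mechanism's details are ours, not from the Harvest implementation):

```python
# Soft-state color table sketch: each of the 4 colors maps to
# (owner_id, last_refresh_time). A color not refreshed within `timeout` is
# treated as free and may be claimed at random by an uncolored node.
import random

NUM_COLORS = 4

def free_colors(table, now, timeout):
    """Colors whose owner entry was never set or has expired."""
    return [c for c in range(NUM_COLORS)
            if table.get(c) is None or now - table[c][1] > timeout]

def claim_color(table, node_id, now, timeout, rng=random):
    """Randomly claim a free color; return it, or None if all 4 are in use."""
    free = free_colors(table, now, timeout)
    if not free:
        return None
    c = rng.choice(free)
    table[c] = (node_id, now)  # refreshed whenever the owner is heard again
    return c

# Colors 0 and 1 have expired, color 2 was never claimed, color 3 is fresh.
table = {0: (10, 0.0), 1: (11, 0.0), 2: None, 3: (12, 5.0)}
got = claim_color(table, node_id=20, now=6.0, timeout=3.0)
```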
The selection of 2 senders and the color assignment is a distributed
operation. The operation is initiated from a single base station as opposed to
the nodes in the network, since uniquely selecting nodes in a distributed
manner would incur additional message overhead for coordination. The approach
outperforms a centralized solution because in the latter a single node (such
as the base station) would need to collect the entire topology information to
compute disjoint paths between two nodes. Further, the operation would have to
be repeated whenever a new sender is selected.
Data Forwarding Protocol. As soon as a node has selected its parent and
uniquely selected a color in its D-2 neighborhood, it starts sending data
packets to its parent in the corresponding timeslot. A parent can choose to
receive packets from either of its children. If a parent node has 1 buffer
space, then it can receive only 1 packet in the entire time period $T=4*t_{S}$
($t_{S}$ is long enough to transmit a Harvest message with CSMA/CA backoffs
disabled). In this csae, a parent node receives packets from a child in every
alternate time period. The process of alternation ensures that the colors
assigned to its children are not unassigned. This is a minor variation in the
Harvest protocol as described above. Instead of one packet buffer at each
node, more than one packet buffers can be allocated at each node. This will
expedite the data collection.
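The 1-buffer alternation rule amounts to a round-robin over the (at most two) children; a minimal sketch, with names of our choosing:

```python
# Round-robin child scheduling for a 1-buffer parent: one packet is accepted
# per period T, alternating between the (up to two) children so that neither
# child's color goes stale.

def next_child(children, period_index):
    """Which child may send in the given period (round-robin over children)."""
    if not children:
        return None
    return children[period_index % len(children)]

schedule = [next_child([4, 8], k) for k in range(4)]  # alternates 4, 8, 4, 8
```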
### 3.2 Implementation Description
In this section, we describe the implementation of Harvest in NesC under
TinyOS 1.x release. Harvest has a single message structure; Figure 1
illustrates this structure, using numbers that denote field sizes in terms of
bits.
Figure 1: Harvest Message Structure
The payload in each Harvest packet is 20 bytes (this is in contrast to Straw’s
payload of 22 bytes). The color ID identifies one of the 4 colors used by the
node. The # hops denotes the distance of the sender from the base station.
This information is used by a node to select its parent, which has the minimum
distance to the base station among the set of nodes within its 1-hop
neighborhood.
The child IDs identify the sender’s children. A non-null
value declares that the sender is available for forwarding. The sender
can also use this field to declare its decision about the selected children in case
multiple nodes are in contention for the selection. In particular, the node whose
ID appears in the first of the two fields should send a packet in the next time period.
The array of 4 node IDs denotes the IDs of the 4 nodes, in the sender’s D-2
neighborhood, which are currently using the 4 colors. The array is ordered in
the increasing order of the color IDs. Every node copies the array received
from its 1-hop neighborhood and maintains it as a soft state. If a color is
not refreshed for a certain time, then the node assumes that the color is free
and sends this information as part of its messages. The sequence number is a
monotonically increasing counter that identifies the packet.
It is used in the calculation of WMEWMA link estimate and to select a unique
node in the case of 2 nodes contending for the same color or same parent. The
range of sequence numbers can be chosen depending upon the number of packets
to be transmitted and also the number can be recycled to save space.
There are no explicit sender ID and destination ID fields. The sender ID
can be retrieved from the message by reading the node ID at the location of
sender’s color ID. The destination ID field is used from the TOS header in the
TinyOS packet. Harvest uses a promiscuous mode of transmission so that
neighboring nodes can learn about the color allocation, but only the node
identified as the destination forwards the packet to the base station.
The base station’s ID is 0. The receiver can identify whether the message is
from the base station or not by looking at the sender ID.
Harvest does not need an explicit time synchronization service for its TDMA to
function. Every packet contains the D-2 color of the sender. We use the
synchronous reception property of the wireless medium to achieve time
synchronization among the nodes [2]. In particular, when a node hears a packet
from its parent, it synchronizes its time with that of its parent. Since the base
station is the root of the tree, the time at all nodes is synchronized
to the single clock of the base station by induction. This
synchronization scheme is also used in Sprinkler [8], which uses TDMA.
## 4 Performance Evaluation
In this section, we evaluate the latency and number of packet transmissions
for Harvest’s randomized slot assignment algorithm and data collection
protocol.
### 4.1 Randomized Slot Assignment Performance
As described earlier, Harvest uses an underlying CSMA/CA protocol for color
selection. In particular, when a node receives a message, it finds out the
available colors in its D-2 neighborhood from the received message. If one or
more colors are available, the node randomly selects an available color and
starts transmitting from the corresponding TDMA slot in the next time interval. It
is possible that two or more nodes can simultaneously select the same color
and therefore their transmitted packets can collide with each other. We show
here that in O$(1)$ time, a unique node will select a unique color in the
node’s D-2 neighborhood.
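The random selection step can be illustrated with a short sketch (ours, not the paper's NesC code): a node collects the colors already taken in its D-2 neighborhood from received Harvest messages and picks a free one uniformly at random.

```python
import random

NUM_COLORS = 4

def pick_color(d2_assignments: dict, rng=random):
    """Return a random free color given {node_id: color} for the D-2
    neighborhood, or None if all colors are taken."""
    taken = set(d2_assignments.values())
    free = [c for c in range(NUM_COLORS) if c not in taken]
    return rng.choice(free) if free else None

print(pick_color({7: 0, 9: 2}))              # 1 or 3
print(pick_color({1: 0, 2: 1, 3: 2, 4: 3}))  # None
```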
TinyOS uses a variant of the non-persistent CSMA/CA protocol [15]. We briefly
recall the definition of non-persistent CSMA/CA protocol [1]:
1. A node senses the channel before transmission.
2. If the channel is free, it immediately transmits a frame; otherwise it waits for a random amount of time.
3. After waiting, it repeats step 1.
In the case of TinyOS, a node waits for a random amount of time before it
senses the channel. This ensures that the transmissions are not synchronized.
Because of the initial random wait, the throughput of the CSMA/CA in TinyOS is
better than that of the classical non-persistent CSMA/CA.
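A toy model of this contention resolution (our simplification, not the TinyOS MAC) makes the O$(1)$ behavior concrete: each contender draws a random initial backoff, the uniquely smallest backoff wins the round, and a tie forces another round. With a moderate backoff range, the expected number of rounds stays near 1 regardless of how many rounds are simulated.

```python
import random

def rounds_to_resolve(n_contenders: int, backoff_slots: int = 16,
                      rng=random) -> int:
    """Rounds until exactly one contender has the smallest backoff."""
    rounds = 0
    while True:
        rounds += 1
        waits = [rng.randrange(backoff_slots) for _ in range(n_contenders)]
        if waits.count(min(waits)) == 1:  # unique winner transmits cleanly
            return rounds

random.seed(0)
trials = [rounds_to_resolve(2) for _ in range(1000)]
print(sum(trials) / len(trials))  # close to 1, i.e., O(1) expected rounds
```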
###### Theorem 1
Given that the degree ($\Delta$) of network is bounded, Harvest takes O$(1)$
time for assigning a unique color to a node in its D-2 neighborhood.
###### Proof 4.2.
Given that $\Delta$ is bounded, for non-persistent CSMA/CA, there exists a
constant $\tau>0$ such that the probability of a frame transmission without
collision is at least $\tau$ [1]. The same holds for the CSMA/CA in TinyOS
which is a variant of non-persistent CSMA/CA.
Therefore, the expected time for a frame transmission without failure is also
O$(1)$. In the event that two or more nodes select the same color and transmit
in the same timeslot, in O$(1)$ time, a unique node will succeed in a
transmission without failure. After one transmission without collision, all
the nodes in the D-1 neighborhood will learn that the color is not available.
Similarly, for the D-2 neighborhood, a unique color is selected in O$(1)$ time
since the packet delivery rate between D-2 neighbors is non-zero. After one
transmission without collision by the successful node, or via a neighbor of
the successful node, the color assignment of the successful node gets conveyed
to its 2-hop neighborhood.
The value of $\tau$ depends upon the range of values for random wait and
$\Delta$. Instead of finding the value of $\tau$, we perform experiments to
measure the convergence time of Harvest’s slot assignment for different values
of $\Delta$. We use 51 XSM motes in an indoor testbed, Kansei. An XSM mote
uses Chipcon’s CC1000 radio and is for the purposes of this experiment similar
to a Mica-2 mote. We use the TinyOS 1.x release and the standard MAC that
comes with the 1.x release. The topology of the network is as shown in the
Figure 2. The motes are placed in a grid with 3 ft unit separation on the X and Y
axes. We use the default power level and default frequency for transmission. The
mote at location (0,0) is selected as the base station, as shown in Figure 2.
Each slot has a duration of 31 msec, which is the minimum possible given that
the radio transmission takes at least 23 msec and the UART transmission takes at
least 8 msec in TinyOS 1.x over XSM. Each node has a payload of 100 packets to
be sent to the base station.
Figure 2: Testbed Topology
We measure the time required to collect all the data packets from the first
mote after the start of the experiment as a function of the number of nodes.
The time is sampled at a granularity of 30 times the time for a transmission
period. The number of sampling periods required for the first node to complete
data exfiltration denotes the convergence time of the color selection
algorithm. As shown in Table 2, the convergence time has a variance of 1
sampling period, which is negligible. Hence, for the non-persistent CSMA/CA
implementation in TinyOS 1.x release, the convergence time of Harvest’s
randomized slot assignment is negligible for a $\Delta$ up to 51.
# nodes | Convergence time
---|---
6 | 8
12 | 8
18 | 8
22 | 8
31 | 7
42 | 8
51 | 9
Table 2: Scalability of color selection
### 4.2 Data Collection Protocol
Latency. We define the total latency of Harvest data collection to be the
duration between the moment that the base station receives the first data
packet and the moment it receives the last data packet. Since the base station
has 2 children and there are 4 timeslots per time period $T$ ($T=4*t_{s}$), it
receives 2 packets per time $T$. Therefore, for $n$ nodes and $M$ number of
packets from each node, the time required to receive $n*M$ packets is
$n*M*2*t_{s}$, which is O$(n*M)$. The time required to build the tree rooted
at the base station is in the worst case O$(n)$. Note that the tree building
is happening in parallel to the data collection. But for the worst case
analysis, we can assume that the two processes happen sequentially. In that
case, the total latency of Harvest is O$(n)$ + O$(n*M)$ = O$(n*M)$.
Number of Transmissions. Let $h$ be the average height of a node in the
routing tree. Therefore given $n*M$ packets, the total number of transmissions
is O$((n*M*h)/2)$.
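A worked instance of the two bounds above (the numbers are illustrative, not measured): the latency is $n*M*2*t_{s}$ because the base station absorbs 2 packets per $T=4*t_{s}$, and the transmission count is roughly $(n*M*h)/2$ for average tree height $h$.

```python
def harvest_latency(n: int, M: int, t_s: float) -> float:
    """Total collection latency: n*M packets at 2 packets per 4*t_s."""
    return n * M * 2 * t_s

def harvest_transmissions(n: int, M: int, h: float) -> float:
    """Total transmissions per the O((n*M*h)/2) bound above."""
    return n * M * h / 2

# 51 nodes, 100 packets each, 31 ms slots, assumed average height 4:
print(harvest_latency(51, 100, 0.031))    # ~316.2 seconds
print(harvest_transmissions(51, 100, 4))  # 10200.0
```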
## 5 Performance Comparison
### 5.1 The Straw Protocol
In this section, we compare the performance of Harvest with that of Straw [4].
Similar to Harvest, the objective of Straw protocol is to collect bulk data
from all the nodes at the base station. Unlike Harvest, Straw collects data
from one node at a time. For each node, the data collection is divided into
two phases, viz. broadcast and collection. In case the collection phase loses
packets, the two phases are repeated to recover from loss. (The broadcast
command in the recovery phase contains the sequence numbers of the lost
packets.) The overall goal of the protocol is to minimize latency and number
of packet transmissions. The broadcast phase disseminates the ID of a selected
node, from which data is to be collected. Following the broadcast phase, the
selected node periodically sends packets to the base station. The route is
selected using MintRoute protocol.
For all nodes that are at a distance greater than 2-hops from the base
station, the transmission period in Straw is $3*t_{h}$, where $t_{h}$ is the
time required to traverse a single hop. The number 3 is chosen to reduce the
interference with data forwarding at an upstream node. If we color the nodes
that transmit at the same time, then the coloring of transmitting nodes
effectively yields a D-2 coloring. Note that the transmitting nodes induce a
one dimensional graph, in other words, a single line (and hence the name
“straw”). Due to the fact that each node sends packets at the period of
$3*t_{h}$, the base station receives a packet after every $2*t_{h}$ time. For
nodes at 1-hop and 2-hop distances from the base station, the transmission
period is $t_{h}$ and $2*t_{h}$ respectively.
The initial broadcast command sets up the colors on the linear path from a
node to the base station. This corresponds to a deterministic slot assignment,
as compared to the randomized slot assignment of Harvest. Further, a node from
which data is collected, is selected by the base station as opposed to the
local, distributed selection in Harvest.
### 5.2 Latency Comparison
#### 5.2.1 Theoretical Comparison
Straw uses a broadcast for slot assignment. In each broadcast phase, a node
forwards the broadcast command once. For collecting data from $n$ nodes, Straw
will therefore employ $n$ broadcast sessions, on average lasting for at least
O($h$) time. Therefore, the total latency for assigning slots is O$(n*h)$ as
compared to O$(n)$ for Harvest.
In Straw’s data collection protocol, only the nodes on the path from the
current sender to the base station are transmitting. The rest of the network
is idle, in other words, spatial reuse is limited. If the rest of the network
lies outside interference distance from the transmitters, then an idle
node from the rest of the network can send its data towards the base station.
However, finding a node outside interference distance from the current
transmitters could be impossible, especially near the base station. A solution
is to increase the number of D-2 colors beyond 3.
Instead of a linear structure, Harvest utilizes a tree structure to collect
data packets. Given the concurrency constant of 2, Harvest uses a binary tree.
Harvest uses 4 colors in order to ensure that the binary tree can be D-2
colored. In that case, the base station receives 2 packets every $4*t_{s}$,
where $t_{s}$ is the duration of a timeslot. Therefore the rate of data
collection at the base station is equal to a packet after every $t_{s}$ time.
Note that we can utilize any m-ary tree and $C$ colors, and the resulting rate
of data collection at the base station would be $m/C$.
In Straw, the rate of data collection from the nodes at more than 2-hops from
the base station is 1 packet per $3*t_{s}$. If we assume that the number of
nodes at 1-hop and 2-hop distance from the base station is far less than
the total number of nodes $n$, the latency of data collection for Straw is
$n*M*3*t_{s}$. Therefore, data collection of Harvest has 33.33% lower latency
than that of Straw.
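The 33.33% figure follows directly from the two rates, which a short sanity check makes explicit (illustrative values only): Harvest delivers 2 packets per $4*t_{s}$ at the base station, Straw 1 packet per $3*t_{s}$ for nodes beyond 2 hops, so the total latencies scale as $n*M*2*t_{s}$ versus $n*M*3*t_{s}$.

```python
def collection_latency(n: int, M: int, t_s: float,
                       packets_per_period: int,
                       slots_per_period: int) -> float:
    """Total latency: n*M packets at the given base-station intake rate."""
    return n * M * (slots_per_period / packets_per_period) * t_s

n, M, t_s = 100, 100, 0.031
harvest = collection_latency(n, M, t_s, packets_per_period=2, slots_per_period=4)
straw = collection_latency(n, M, t_s, packets_per_period=1, slots_per_period=3)
print(1 - harvest / straw)  # ~0.3333, i.e., one third lower latency
```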
Further, the overall order complexity of the latency of Straw is O$(n*h)$ +
O$(n*M)$, which exceeds the O$(n*M)$ of Harvest if O$(h)$ is greater than
O$(M)$.
#### 5.2.2 Simulation-based Comparison
Figure 3: Network topology for simulation
To validate the claimed improvement in latency, we perform simulations in
TOSSIM. We set up a network of 20 non-base station motes and 1 base station
node. As shown in Figure 3, we ensure that there are nodes at more than 2-hop
distance from the base station. Also, the base station has more than 1 node at
1-hop distance.
The QueuedSend buffer module at the TinyOS’s MAC layer uses explicit
acknowledgment. In the case of unsuccessful transmission, a retransmission is
attempted. However, the retransmission could happen in an incorrect timeslot,
resulting in a collision. Therefore, we have disabled the MAC layer ACK in
this simulation. However, we can still utilize the MAC-level ACK by channeling
the ACK information to the Straw and Harvest protocol layer. Use of ACK will
increase the reliability of data collection.
All of the simulations are done under TOSSIM. This NesC-code simulator has an
option to instantiate a link quality set given the node placement. The link
qualities are based on some empirical measurements carried out for MICA-2
motes. The link qualities vary in the spatial and temporal dimensions. Since the
current implementation of Harvest assumes symmetric links and bases its parent
selection criteria on measuring link quality in one direction, we have
pre-processed the link quality set so that all the links are symmetric. In our
future work, we will refine the implementation to deal with asymmetric links
by computing link quality in both directions. In particular, link quality from
the child to parent will be computed by counting the number of successful and
failed ACKs. We use our NesC implementation of Harvest and the Straw code that
is available in the contribution folder of a Golden Gate Bridge health
monitoring project under the TinyOS 1.x release.
We measure the rate at which data is collected at the base station. We observe
that the rate of data collection is $1.67$ packets per $4*t_{s}$ for Harvest
and $0.8$ packets per $3*t_{s}$ for Straw, as shown in Table 3. The rate of
data collection is lower than the respective theoretical values due to the
fact that ACKs are disabled. The observed latency gain under simulation is
$36\%$, which is close to the theoretical value of $33.33\%$.
Service | Theoretical | Simulation
---|---|---
Straw | 1 packet/$(3*t_{s})$ | 0.8 packets/$(3*t_{s})$
Harvest | 2 packets/$(4*t_{s})$ | 1.67 packets/$(4*t_{s})$
Table 3: Latency Comparison
### 5.3 Energy Comparison
#### 5.3.1 Theoretical Comparison
Straw uses a broadcast to disseminate the command to send the ID of a selected
node. This is equivalent to the slot assignment in Harvest. In a broadcast
phase, each node in the network forwards a newly heard packet exactly once.
Therefore, the total number of transmissions in a broadcast phase is $n$,
where $n$ is the total number of nodes in the network. Hence, to collect
data from $n$ nodes, the total number of packet transmissions is $n^{2}$.
Also, each broadcast phase is followed by a reply from a selected node to the
base station. The total number of transmissions, for each reply, is a function
of the number of hops from the selected node to the base station. In the worst
case, the average path length in the network could be $n/2$, so the replies
account for $n*(n/2)=n^{2}/2$ transmissions. Together with the broadcasts, the
total is $n^{2}+n^{2}/2$, which is O$(n^{2})$.
In Harvest, the control information pertaining to slot assignment is
piggybacked on the data messages. Therefore, Harvest does not have packet
transmissions for slot assignment. Hence, it saves O$(n^{2})$ number of packet
transmissions as compared to Straw.
We assume that Straw and Harvest both use the shortest path routes to transmit
data packets to the base station. In that case, the total number of data
packet transmissions for data collection purposes is the same for Straw and
Harvest. In particular, this number is O$((n*M*h)/2)$.
The total number of messages for Straw is O$(n^{2})$ + O$((n*M*h)/2)$.
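The contrast in control overhead can be tallied with a small sketch (ours, using the counts derived above): Straw pays $n$ broadcasts of $n$ transmissions each plus the reply paths, while Harvest piggybacks slot assignment on data packets and pays nothing extra.

```python
def straw_control_msgs(n: int, avg_path_len: float) -> float:
    """Broadcast phases (n each, n of them) plus one reply path per node."""
    return n * n + n * avg_path_len

def harvest_control_msgs(n: int) -> int:
    """Slot-assignment information rides on the data messages."""
    return 0

# 20 nodes, average reply path of 10 hops (illustrative):
print(straw_control_msgs(20, avg_path_len=10))  # 600
print(harvest_control_msgs(20))                 # 0
```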
#### 5.3.2 Simulation-based Comparison
In reality, the radio behavior is more complex than that represented by the
simplistic unit-disk radio model. Not only is packet delivery rate less than
100% but it also varies in space and time. Therefore, we conduct simulations
over a multi-hop network to compare the number of packet transmissions for
Straw and Harvest. We conduct simulations in the same network topology as used
in Section 5.2.2. In future, we plan to compare results in a real sensor
network.
The number on top of each node, in Figure 4, illustrates the number of
broadcast sessions required to reliably convey the command to each of the 20
nodes. The total number of broadcast sessions is 46. Given 20 nodes, Straw
consumes 20 × 46 = 920 additional packet transmissions.
Figure 4: Number of broadcast sessions for 21 nodes in Straw
## 6 Harvest Extensions and Discussion
Duty Cycling of Radios. As we discussed in Section 2, an idle radio draws a
significant amount of current and so energy efficiency is gained by letting
idle nodes sleep. In Harvest, we achieve this as follows. When a node sees
that no colors are available for itself in its interference region (i.e., its
2-hop neighborhood), it can switch off its radio until a color is expected to
be available again. Given some knowledge of the number of packets to be
transmitted for that color and by observing the sequence number of the packet
currently being transmitted for that color, a sleeping duration can be readily
calculated.
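One possible form of that calculation (our sketch; the text above only states that the duration can be readily computed) assumes the color holder sends one packet per period $T=4*t_{s}$ and `total_packets` in all, so the observed sequence number bounds how long the color stays busy.

```python
def sleep_duration(total_packets: int, observed_seq: int, t_s: float,
                   slots_per_period: int = 4) -> float:
    """Time until the observed color is expected to be free again."""
    remaining = max(total_packets - observed_seq, 0)
    return remaining * slots_per_period * t_s

# 100-packet payload, packet #40 observed, 31 ms slots:
print(sleep_duration(100, 40, 0.031))  # 60 periods remaining -> ~7.44 s
```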
Furthermore, once a node is done with its role in the convergecast, it can
switch off its radio permanently. A node is defined to be done with its
transmissions after it has sent all of its packets and the packets of its
children.
Reliability when all Children Transmit Concurrently. As described in Section
3, the data collection protocol allows non-base station nodes to forward data
from one or more of their children. (If more than one child can transmit, then
the protocol maintains at least one buffer per child.) When only one child is
allowed to transmit, the implicit acknowledgement scheme suffices for nodes to
discover whether or not their transmissions were successfully received.
When more than one child is allowed to transmit, using the implicit
acknowledgement scheme implies either a delay in loss detection or a
modification of the protocol to expose more node buffer information. One
alternative in this case would be to use explicit acknowledgements. If we
assume that explicit acknowledgements can be sent immediately (or within some
constant delay after message reception), then the transmission slots can be
extended to subsume both the transmission time and the acknowledgement time.
Continuous Streaming of Data to the Base Station. Harvest collects data
simultaneously from multiple nodes, as opposed to receiving data from only one
node at a time. In this sense, the data received at the base station resembles
a continuous stream of data from the network ordered in time. It is therefore
conceivable to use Harvest as the basis for collecting continuous data streams
from the network in an on-line fashion.
Note that the description in Section 3 for the case where all nodes forward
data from multiple children allows the possibility that the data from each
child is forwarded in a round robin. More sophisticated rules for fair
scheduling that consider the distance of the node from the base station can be
defined to achieve global fairness; otherwise, nodes near the base station will
contribute more packets than the ones farther from the base station.
One extension of Harvest that we are presently studying is in the context of
real-time wireless sensor network applications, such as visualizing link
quality of the network in real time or viewing consistent global snapshots of
the wireless sensor network.
## 7 Related Work
TDMA and CSMA. Herman et al. [3] have proposed a randomized TDMA algorithm that
first forms clusters, each with a unique cluster-head. Each cluster-head then
allocates colors to its children. Cluster-heads are ordered in a monotonically
increasing order, so the color assignment occurs sequentially per that order.
A similarity between this work and Harvest’s TDMA scheduling is the use of an
underlying CSMA/CA MAC layer to communicate control information
pertaining to node coloring and TDMA scheduling. Z-MAC [9] uses both TDMA and
CSMA/CA features in a manner different from Harvest. Z-MAC is a hybrid MAC that
uses TDMA under high contention and CSMA under low contention, whereas data
transmission in Harvest is always in TDMA mode. Kulkarni and Arumugam [7]
describe TDMA-based protocols that are optimized for convergecast; that work,
however, assumes grid localization.
RID [18] is a radio interference detection service that detects interference
relations between nodes at run-time, and provides higher fidelity for
collision avoidance when using TDMA. The RID approach would be a suitable
candidate for enabling Harvest’s distributed coloring protocol.
Convergecast routing. There is a rich body of work on convergecast routing for
wireless sensor systems. Several protocols assume location information. Most
of the others such as MintRoute [16], RMST [11], PSFQ [13], Drain [12] are,
unlike Straw and Harvest, not optimized for the energy and latency
requirements associated with the collection of payloads that can well be in
the thousands of packets per node. For instance, MintRoute does not pipeline
transmissions, which would yield higher latency for bulk data transport, and
Drain is optimized for the case of a single packet payload per source mote.
Reliability. The study of reliability in convergecast has often arisen in the
context of concurrent event detections, which tend to occur in a bursty manner
or with multiple sources continuously/periodically generating packets
(with low duty cycle). RBC [17] focuses on the former whereas the traffic
models considered in CODA [14] and ESRT [10] focus on the latter. RBC deals
with bursts by maintaining information about queue conditions of the
neighboring node as well as number of times enqueued packets were
retransmitted, which results in sizeable RAM usage. Also, the queue condition
has to be transmitted in RBC, which results in sizeable communication
overhead. The alternative approaches of packet retransmissions, of
acknowledgements, of hop-by-hop recovery, as well as selecting alternative
routes upon link failure are also relevant approaches for improving the
reliability of Harvest in particular application contexts. The use of TDMA and
receiver-driven flow control mitigates congestion concerns.
Coding of bulk data is a relevant approach for tolerating packet loss in bulk
convergecast. Kim et al. [5] have considered the use of erasure codes. We have
regarded this relevant consideration as being orthogonal to the pipelining and
spatial reuse considerations of Harvest.
## 8 Conclusion
We have presented a bulk data collection service, Harvest, for energy
constrained wireless sensor nodes. Harvest assumes a bounded node density,
i.e., degree $\Delta$. This assumption enables us to assign distance-k (our
exposition has used k=2) colors to nodes in O$(1)$ time by utilizing an
underlying CSMA/CA MAC layer. We use a constant number of colors in the entire
network, which enables the per node computation of its TDMA schedule to occur
in O$(1)$ time. Harvest exploits the spatial parallelism in collecting data,
thereby achieving a latency gain of at least 33% in large networks (i.e.,
networks with more than three hops) as compared to that of Straw. Harvest also
avoids the O$(n^{2})$ number of broadcasted control transmissions used in
Straw. Further, Harvest requires only O$(1)$ number of buffers at each node.
Therefore, Harvest is suitable for large-scale wireless sensor
networks. We provide theoretical bounds on the performance of Harvest and
perform simulations to validate these bounds. We find that the
spatial parallelism not only reduces latency, but also creates an opportunity
to collect global data in a fair real-time manner. Our present work is
studying extensions of Harvest for the case of on-line continuous data
streaming from the network to the base station.
## References
* [1] D. Bertsekas and R. Gallager. Data Networks. Prentice Hall, Englewood Cliffs, NJ, 1987.
* [2] J. Elson. Time Synchronization in Wireless Sensor Network. PhD thesis, UCLA, 2003.
* [3] T. Herman and S. Tixeuil. A distributed tdma slot assignment algorithm for wireless sensor networks. In Algorithmic Aspects of Wireless Sensor Networks, pages 45–58, 2004.
* [4] S. Kim. Wireless sensor networks for structural health monitoring. Master’s thesis, University of California at Berkeley, USA, 2005.
* [5] S. Kim, R. Fonseca, and D. Culler. Reliable transfer on wireless sensor networks. In Annual IEEE Communications Society Conference on Sensor and Ad Hoc Communications and Networks, 2004.
* [6] S. Krumke, M. Marathe, and S. Ravi. Models and approximation algorithms for channel assignment in radio networks. Wireless Networks, 7(6):575–584, 2001.
* [7] S. Kulkarni and M. Arumugam. SS-TDMA: A Self-Stabilizing MAC for Sensor Networks, chapter In Sensor Network Operations. IEEE Press, 2005.
* [8] V. Naik, A. Arora, P. Sinha, and H. Zhang. Sprinkler: A reliable and energy efficient data dissemination service for wireless embedded devices. In The 26th IEEE Real-Time Systems Symposium, December 2005.
* [9] I. Rhee, A. Warrier, M. Aia, and J. Min. Z-mac: A hybrid mac for wireless sensor networks. In SenSys, pages 90–101, 2005.
* [10] Y. Sankarasubramaniam, O. Akan, and I. Akyildiz. Esrt: Event-to-sink reliable transport in wireless sensor networks. In The ACM Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc), 2003.
* [11] F. Stann and J. Heidemann. Rmst: Reliable data transport in sensor networks. In The 1st IEEE Intl. Workshop on Sensor Network Protocols and Applications (SNPA), pages 102–112, 2003.
* [12] G. Tolle and D. Culler. Design of an application-cooperative management system for wireless sensor networks. In Second European Workshop on Wireless Sensor Networks, 2005.
* [13] C. Wan, A. Campbell, and L. Krishnamurthy. Psfq: A reliable transport protocol for wireless sensor networks. In WSNA ’02: Proceedings of the 1st ACM International Workshop on Wireless Sensor Networks and Applications, pages 1–11, New York, NY, USA, 2002. ACM Press.
* [14] C. Wan, S. Eisenman, and A. Campbell. Coda: congestion detection and avoidance in sensor networks. In SenSys, pages 266–279, 2003.
* [15] A. Woo and D. Culler. A transmission control scheme for media access in sensor networks. In ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom), pages 221–235, 2001.
* [16] A. Woo, T. Tong, and D. Culler. Taming the underlying challenges of reliable multihop routing in sensor networks. In SenSys ’03: Proceedings of the 1st international conference on Embedded networked sensor systems, pages 14–27, 2003.
* [17] H. Zhang, A. Arora, Y. Choi, and M. Gouda. Reliable bursty convergecast in wireless sensor networks. In 6th ACM International Symposium on Mobile Ad Hoc Networking and Computing, 2005.
* [18] G. Zhou, T. He, J. Stankovic, and T. Abdelzaher. Rid: radio interference detection in wireless sensor networks. In The 24th Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM), pages 891–901, 2005.
|
# Fine-Grained Named Entity Typing over Distantly Supervised Data via
Refinement in Hyperbolic Space
Muhammad Asif Ali,1 Yifang Sun,1 Bing Li,1 Wei Wang,1,2
1School of Computer Science and Engineering, UNSW, Australia
2College of Computer Science and Technology, DGUT, China
<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Fine-Grained Named Entity Typing (FG-NET) aims at classifying the entity
mentions into a wide range of entity types (usually hundreds) depending upon
the context. While distant supervision is the most common way to acquire
supervised training data, it brings in label noise, as it assigns type labels
to the entity mentions irrespective of mentions’ context. In attempts to deal
with the label noise, leading research on FG-NET assumes that the fine-grained
entity typing data possesses a euclidean nature, which restrains the
ability of the existing models in combating the label noise. Given the fact
that the fine-grained type hierarchy exhibits a hierarchical structure, it makes
hyperbolic space a natural choice to model the FG-NET data. In this research,
we propose FGNET-RH, a novel framework that benefits from the hyperbolic
geometry in combination with the graph structures to perform entity typing in
a performance-enhanced fashion. FGNET-RH initially uses LSTM networks to
encode the mention in relation to its context; later, it forms a graph to
distill/refine the mention’s encodings in the hyperbolic space. Finally, the
refined mention encoding is used for entity typing. Experimentation using
different benchmark datasets shows that FGNET-RH improves the performance on
FG-NET by up to 3.5% in terms of strict accuracy.
Keywords — FG-NET, Hyperbolic Geometry, Distant Supervision, Graph Convolution
## 1 Introduction
Named Entity Typing (NET) is a fundamental operation in natural language
processing; it aims at assigning discrete type labels to the entity mentions
in the text. It has immense applications, including: knowledge base
construction [7]; information retrieval [12]; question answering [18];
relation extraction [27] etc. Traditional NET systems work with only a coarse
set of type labels, e.g., organization, person, location, etc., which severely
limits their potential in downstream tasks. In the recent past, the idea of
NET is extended to Fine-Grained Named Entity Typing (FG-NET) that assigns a
wide range of correlated entity types to the entity mentions [13]. Compared to
NET, FG-NET has shown a remarkable improvement in subsequent
applications. For example, Ling and Weld [13] showed that FG-NET can boost
the performance of relation extraction by 93%.
FG-NET encompasses hundreds of correlated entity types with little contextual
differences, which makes it labour-intensive and error-prone to acquire
manually labeled training data. Therefore, distant supervision is widely used
to acquire training data for this task. Distant supervision relies on: (i)
automated routines to detect the entity mention, and (ii) using type-hierarchy
from existing knowledge-bases, e.g., Probase [24], to assign type labels to
the entity mention. However, it assigns type-labels to the entity mention
irrespective of the mention’s context, which results in label noise [20].
Examples in this regard are shown in Figure 1, where the distant supervision
assigns labels: {person, author, president, actor, politician} to the entity
mention: _“Donald Trump”_ , whereas, from contextual perspective, it should be
labeled as: {person, president, politician} in S1, and {person, actor} in S2.
Likewise, the entity mention: _“Vladimir Putin”_ should be labeled as:
{person, author} and {person, athlete} in S3 and S4 respectively. This label
noise in turn propagates into the model learning and severely affects/limits the
end-performance of the FG-NET systems.
Figure 1: FG-NET training data acquired by distant supervision
Earlier research on FG-NET either ignored the label noise [13], or applied
some heuristics to prune the noisy labels [8]. Ren et al. [19] bifurcated the
training data into clean and noisy data samples, and used different set of
loss functions to model them. However, the modeling heuristics proposed by
these models are not able to cope with the label noise, which limits the end-
performance of the FG-NET systems relying on distant supervision. We,
moreover, observe that these models are designed assuming a euclidean nature
of the problem, which is inappropriate for FG-NET, as the fine-grained type
hierarchy exhibit a hierarchical structure. Given that it is not possible to
embed hierarchies in euclidean space [15], this assumption, in turn limits the
ability of the existing models to: (i) effectively represent FG-NET data, (ii)
cater label noise, and (iii) perform FG-NET classification task in a robust
way.
The inherent advantage of hyperbolic geometry for embedding hierarchies is
well-established in the literature. It places items at the top of the hierarchy
close to the origin and items deep in the hierarchy near infinity. The
embedding norm thus encodes the depth in the hierarchy, while the distance
between embeddings represents the similarity between items; consequently,
items sharing a parent node lie close to each other in the embedding space.
This makes the hyperbolic space a perfect paradigm for embedding the distantly
supervised FG-NET data, as it explicitly allows label smoothing by sharing the
contextual information across noisy entity mentions corresponding to the same
type hierarchy, as shown in Figure 2 (b) for a 2D Poincaré ball. For example,
given the type hierarchy: _“Person”_ $\leftarrow$ _“Leader”_ $\leftarrow$
_“Politician”_ $\leftarrow$ _“President”_, the hyperbolic embeddings, contrary
to the Euclidean embeddings, offer a perfect geometry for the entity type
_“President”_ to share and augment the context of _“Politician”_, which in
turn adds to the context of _“Leader”_ and _“Person”_, as shown in Figure 2
(a). We hypothesize that such hierarchically organized, contextually similar
neighbors provide a robust platform for the end task, i.e., FG-NET over
distantly supervised data, as discussed in detail in Section 4.5.1.
Figure 2: (a) An illustration of how the entity type “President” shares the
context of the entity type “Politician” which in turn shares the context of
the entity-type “Leader” and so on; (b) Embedding FG-NET data in 2-D Poincare
Ball, where each disjoint type may be embedded along a different direction
To address these limitations, we propose Fine-Grained Entity Typing with Refinement in
Hyperbolic space (FGNET-RH), shown in Figure 3. FGNET-RH follows a two-stage
process. Stage-I encodes the mention along with its context using multiple
LSTM networks; stage-II forms a graph to refine the mention’s encoding from
stage-I by sharing contextual information in the hyperbolic space. In order to
maximize the benefits of using hyperbolic geometry in combination with the
graph structure, FGNET-RH maps the mention encodings (from stage-I) to the
hyperbolic space and performs all the operations (linear transformation,
type-specific contextual aggregation, etc.) in the hyperbolic space, as
required for appropriate additive context-sharing along the type hierarchy to
smooth the noisy type labels prior to entity typing. The major contributions
of FGNET-RH are as follows:
1. 1.
FGNET-RH combines the benefits of graph structures and hyperbolic
geometry to perform fine-grained entity typing over distantly supervised noisy
data in a robust fashion.
2. 2.
FGNET-RH explicitly allows label-smoothing over the noisy training data by
using graphs to combine the type-specific contextual information along the
type-hierarchy in the hyperbolic space.
3. 3.
Experimentation using two models of the hyperbolic space, i.e., the
Hyperboloid and the Poincaré-Ball, shows that FGNET-RH outperforms the
existing research by up to 3.5% in terms of strict accuracy.
## 2 Related Work
Existing research on FG-NET can be bifurcated into two major categories: (i)
traditional feature-based systems, and (ii) embedding models.
Traditional feature-based systems rely on feature extraction, later using
these features to train machine learning models for classification. Amongst
them, Ling and Weld [13] developed FIGER, which uses hand-crafted features to
build a multi-label, multi-class perceptron classifier. Yosef et al., [29]
developed HYENA, a hierarchical type classification model that uses hand-crafted
features in combination with an SVM classifier. Gillick et al., [8]
proposed context-dependent fine-grained typing using hand-crafted features
along with a logistic regression classifier. Shimaoka et al., [21] developed a
neural architecture for fine-grained entity typing using a combination of
automated and hand-crafted features.
Embedding models use widely available embedding resources with customized loss
functions to build classification models. Yogatama et al., [28] used embeddings
along with the Weighted Approximate Rank Pairwise (WARP) loss. Ren et al., [19]
proposed AFET, which uses different sets of loss functions to model the clean and
the noisy entity mentions. Abhishek et al., [1] proposed an end-to-end
architecture to jointly learn the mention and the label embeddings. Xin et
al., [25] used language models to compute the compatibility between the
context and the entity type prior to entity typing. Choi et al., [4] proposed
ultra-fine entity typing encompassing more than 10,000 entity types, using
crowd-sourced data along with the distantly supervised data for model
training.
Graph convolutional networks were introduced in the recent past to extend
the concept of convolutions from regular-structured grids to graphs [11]. Ali
et al., [2] proposed an attentive convolutional network for fine-grained entity
typing. Nickel et al., [15] illustrated the benefits of hyperbolic geometry
for embedding graph-structured data. Chami et al., [3] combined graph
convolutions with hyperbolic geometry. López et al., [14] used hyperbolic
geometry for ultra-fine entity typing. To the best of our knowledge, we are
the first to explore the combined benefits of graph convolutional networks
and hyperbolic geometry for FG-NET over distantly supervised noisy data.
## 3 Proposed Approach
### 3.1 Problem Definition
In this paper, we build a multi-class, multi-label entity typing system using
distantly supervised data to classify an entity mention into a set of fine-
grained entity types. Specifically, we propose attentive type-specific
contextual aggregation in the hyperbolic space to fine-tune the mention’s
encodings learnt over noisy data prior to entity typing. We assume the
availability of training corpus $C_{train}$ acquired via distant supervision,
and manually labeled test corpus $C_{test}$. Each corpus $C$ (train/test)
encompasses a set of sentences. For each sentence, the contextual tokens
$\\{c_{i}\\}_{i=1}^{N}$, the mention spans $\\{m_{i}\\}_{i=1}^{N}$
(corresponding to the entity mentions), and the candidate type labels
$\\{t_{i}\\}_{i=1}^{N}\in\\{0,1\\}^{T}$ ($T$-dimensional vectors with
$t_{i,x}=1$ if the $x^{th}$ type is a true label and zero otherwise)
are identified in advance. The type labels are inferred from the type hierarchy
in the knowledge base $\psi$ with schema $T_{\psi}$. Similar to Ren et
al., [19], we bifurcate the training data $D_{tr}$ into clean
$D_{tr\text{-}clean}$ and noisy $D_{tr\text{-}noisy}$ subsets, depending on
whether the corresponding mention’s type labels follow a single path in the
type-hierarchy $T_{\psi}$ or not. Following the type-path in Figure 1 (ii), a
mention with labels _{ person, author}_ is considered clean, whereas a mention
with labels _{ person, president, author}_ is considered noisy.
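The clean/noisy bifurcation above can be sketched in a few lines; the toy hierarchy, the `PARENT` map, and the helper names below are illustrative assumptions, not the actual BBN/OntoNotes schema:

```python
# Hedged sketch: a distantly supervised mention is "clean" iff its type
# labels lie on a single root-to-leaf path of the hierarchy (Sec. 3.1).
# The toy child -> parent hierarchy below is illustrative only.
PARENT = {
    "author": "person",
    "president": "politician",
    "politician": "leader",
    "leader": "person",
}

def path_to_root(label):
    """Ancestors of `label` in the type hierarchy, including itself."""
    path = {label}
    while label in PARENT:
        label = PARENT[label]
        path.add(label)
    return path

def is_clean(labels):
    """Clean iff every label is an ancestor of the deepest label."""
    deepest = max(labels, key=lambda l: len(path_to_root(l)))
    return set(labels) <= path_to_root(deepest)

print(is_clean({"person", "author"}))              # single path -> True
print(is_clean({"person", "president", "author"})) # two branches -> False
```

The second mention mixes the _author_ and _president_ branches, so it lands in the noisy subset.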
### 3.2 Overview
Our proposed model, FGNET-RH, follows a two-stage approach, labeled stage-I
and stage-II in Figure 3. Stage-I follows a text-encoding pipeline to
generate the mention’s encoding in relation to its context. Stage-II focuses
on label-noise reduction: we map the mention’s encoding (from
stage-I) into the hyperbolic space and use a graph to share aggregated type-
specific contextual information along the type-hierarchy in order to refine
the mention encoding. Finally, the refined mention encoding is embedded along
with the label encodings in the hyperbolic space for entity typing. Details of
each stage are given in the following sub-sections.
Figure 3: Proposed model, i.e., FGNET-RH, stage-I learns mention’s encodings
based on local sentence-specific context, stage-II refines the encodings
learnt in stage-I in the hyperbolic space.
### 3.3 Stage-I (Noisy Mention Encoding)
Stage-I follows a standard text-processing pipeline using multiple LSTM
networks [9] to encode the entity mention in relation to its context. The
individual components of stage-I are explained as follows:
##### Mention Encoding:
We use an LSTM network to encode the character sequence corresponding to the
mention tokens. We use $\phi_{e}=[\overrightarrow{men}]\in\mathbf{R}^{e}$ to
represent the encoded mention tokens.
##### Context Encoding:
For context encoding, we use multiple Bi-LSTM networks to encode the tokens
corresponding to the left and the right context of the entity mention. We use
$\phi_{c_{l}}$ =
$[\overleftarrow{c_{l}};\overrightarrow{c_{l}}]\in\mathbf{R}^{c}$ and
$\phi_{c_{r}}$ =
$[\overleftarrow{c_{r}};\overrightarrow{c_{r}}]\in\mathbf{R}^{c}$ to represent
the encoded left and the right context respectively.
##### Position Encoding:
For position encoding, we use LSTM networks to encode the positions of the left
and the right contextual tokens. We use $\phi_{p_{l}}$ =
$[\overleftarrow{l_{p}}]\in\mathbf{R}^{p}$ and
$\phi_{p_{r}}=[\overrightarrow{r_{p}}]\in\mathbf{R}^{p}$ to represent the
encoded positions corresponding to the mention’s left and right context.
##### Combined Encoding:
Finally, we concatenate all the mention-specific encodings to get the
L-dimensional noisy encoding $x_{m}\in\mathbf{R}^{L}$, where $L=e+2c+2p$:
(3.1) $x_{m}=[\phi_{{p_{l}}};\phi_{c_{l}};\phi_{e};\phi_{c_{r}};\phi_{p_{r}}]$
### 3.4 Stage-II (Fine-tuning the Mention Encodings)
Stage-II focuses on alleviating the label noise. The underlying assumption in
combating the label noise is that contextually similar mentions should receive
similar type labels. For this, we form a graph to cluster contextually similar
mentions and employ hyperbolic geometry to share the contextual information
along the type-hierarchy. As shown in Figure 3, stage-II follows the pipeline
below:
1. 1.
Construct a graph such that contextually and semantically similar mentions
end-up being the neighbors in the graph.
2. 2.
Use exponential map to project the noisy mention encodings from stage-I to the
hyperbolic space.
3. 3.
In the hyperbolic space, use the corresponding exponential and logarithmic
transformations to perform the core operations, i.e., (i) linear
transformation, and (ii) contextual aggregation, required to fine-tune the
encodings learnt in stage-I prior to entity typing.
We work with two models in the hyperbolic space, i.e., the Hyperboloid
$(\mathbb{H}^{d})$ and the Poincaré-Ball $(\mathbb{D}^{d})$. In the following
sub-sections, we provide the mathematical formulation for the Hyperboloid
model of the hyperbolic space. Similar formulation can be designed for the
Poincaré-Ball model.
#### 3.4.1 Hyperboloid Model
The $d$-dimensional hyperboloid model of the hyperbolic space (denoted by
$\mathbb{H}^{d,K}$) is a space of constant negative curvature ${-1}/{K}$, with
$\mathcal{T}_{\textbf{p}}\mathbb{H}^{d,K}$ as the Euclidean tangent space at
point p, such that:
$\displaystyle\mathbb{H}^{d,K}$
$\displaystyle=\\{\textbf{p}\in\mathbb{R}^{d+1}:\langle\textbf{p},\textbf{p}\rangle_{\mathcal{L}}=-K,p_{0}>0\\}$
(3.2) $\displaystyle\mathcal{T}_{\textbf{p}}\mathbb{H}^{d,K}$
$\displaystyle=\\{\textbf{r}\in\mathbb{R}^{d+1}:\langle\textbf{r},\textbf{p}\rangle_{\mathcal{L}}=0\\}$
where
$\langle\cdot,\cdot\rangle_{\mathcal{L}}:\mathbb{R}^{d+1}\times\mathbb{R}^{d+1}\rightarrow\mathbb{R}$
denotes the Minkowski inner product, with
$\langle\textbf{p},\textbf{q}\rangle_{\mathcal{L}}=-p_{0}q_{0}+p_{1}q_{1}+...+p_{d}q_{d}$.
##### Geodesics and Distances:
For two points p, $\textbf{q}\in\mathbb{H}^{d,K}$, the distance function
between them is given by:
(3.3) $\displaystyle
d_{\mathcal{L}}^{K}(\textbf{p},\textbf{q})=\sqrt{K}\text{arccosh}(-\langle\textbf{p},\textbf{q}\rangle_{\mathcal{L}}/K)$
##### Exponential and Logarithmic maps:
We use the exponential and logarithmic maps to move between the tangent space
and the hyperbolic space. Formally, given a point
$\textbf{p}\in\mathbb{H}^{d,K}$ and a tangent vector
$\textbf{v}\in\mathcal{T}_{\textbf{p}}\mathbb{H}^{d,K}$, the exponential map
$\exp_{\textbf{p}}^{K}:\mathcal{T}_{\textbf{p}}\mathbb{H}^{d,K}\rightarrow\mathbb{H}^{d,K}$
assigns to v the point $\exp_{\textbf{p}}^{K}(\textbf{v})=\gamma(1)$,
where $\gamma$ is the geodesic curve satisfying $\gamma(0)=\textbf{p}$ and
$\dot{\gamma}(0)=\textbf{v}$.
The logarithmic map $\log^{K}_{\textbf{p}}$, being the bijective inverse, maps
a point in the hyperbolic space to the tangent space at p. We use the following
equations for the exponential and the logarithmic maps:
(3.4)
$\exp_{\textbf{p}}^{K}(\textbf{v})=\cosh(\frac{||\textbf{v}||_{\mathcal{L}}}{\sqrt{K}})\textbf{p}+\sqrt{K}\sinh(\frac{||\textbf{v}||_{\mathcal{L}}}{\sqrt{K}})\frac{\textbf{v}}{||\textbf{v}||_{\mathcal{L}}}$
(3.5)
$\log^{K}_{\textbf{p}}(\textbf{q})=d_{\mathcal{L}}^{K}(\textbf{p},\textbf{q})\frac{\textbf{q}+\frac{1}{K}\langle\textbf{p},\textbf{q}\rangle_{\mathcal{L}}\textbf{p}}{||\textbf{q}+\frac{1}{K}\langle\textbf{p},\textbf{q}\rangle_{\mathcal{L}}\textbf{p}||_{\mathcal{L}}}$
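For concreteness, Eqs. (3.3)–(3.5) can be implemented in a few lines; the following is a NumPy sketch with curvature $K=1$, and the small numerical floor on Minkowski norms is an added stability assumption, not part of the formulas above:

```python
import numpy as np

K = 1.0  # curvature -1/K; the value is an illustrative choice

def minkowski(p, q):
    """Minkowski inner product <p, q>_L = -p0*q0 + p1*q1 + ... + pd*qd."""
    return -p[0] * q[0] + np.dot(p[1:], q[1:])

def dist(p, q):
    """Eq. (3.3): geodesic distance on the hyperboloid."""
    return np.sqrt(K) * np.arccosh(-minkowski(p, q) / K)

def exp_map(p, v):
    """Eq. (3.4): map tangent vector v at p onto the hyperboloid."""
    n = np.sqrt(max(minkowski(v, v), 1e-15))  # Minkowski norm, floored
    return np.cosh(n / np.sqrt(K)) * p + np.sqrt(K) * np.sinh(n / np.sqrt(K)) * v / n

def log_map(p, q):
    """Eq. (3.5): inverse of exp_map, back to the tangent space at p."""
    u = q + (1.0 / K) * minkowski(p, q) * p
    n = np.sqrt(max(minkowski(u, u), 1e-15))
    return dist(p, q) * u / n

o = np.array([np.sqrt(K), 0.0, 0.0])   # origin of the hyperboloid
v = np.array([0.0, 0.3, 0.4])          # tangent vector at o, Minkowski norm 0.5
q = exp_map(o, v)
assert abs(minkowski(q, q) + K) < 1e-9   # q satisfies <q, q>_L = -K
assert np.allclose(log_map(o, q), v)     # log is the inverse of exp
```

The round trip `log_map(o, exp_map(o, v)) == v` is the key invariant the rest of the pipeline relies on.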
#### 3.4.2 Graph Construction
The end-goal of graph construction is to group the entity mentions such that
contextually similar mentions are clustered together by forming edges in the
graph. Given that Euclidean embeddings are better at capturing the semantic
aspects of text data [6], we use deep contextualized embeddings in the
Euclidean domain [17] for the graph construction. For each entity type, we
average the corresponding $1024d$ embeddings of all its mentions in the
training corpus $C_{train}$ to learn a prototype vector per entity type,
i.e., $\\{prototype_{t}\\}_{t=1}^{T}$. Then, for each entity type $t$, we
collect the type-specific confident mention candidates $cand_{t}$ using the
criterion: $cand_{t}=cand_{t}\cup men\text{ if
}(cos(men,prototype_{t})\geq\delta)$ $\forall men\in C;\forall{}t\in T$, where
$\delta$ is a threshold. Finally, we form pairwise edges among the mention
candidates of each entity type, i.e., $\\{cand_{t}\\}_{t=1}^{T}$, to construct
the graph $G$ with adjacency matrix $A$.
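The graph construction might look like the following sketch; the embeddings, type assignments, and threshold $\delta$ are toy values (real prototypes average 1024-d contextualized embeddings over the training corpus):

```python
import numpy as np
from itertools import combinations

# Hedged sketch of Sec. 3.4.2: toy mention embeddings and type assignments.
rng = np.random.default_rng(1)
n_mentions, dim, delta = 6, 8, 0.5
emb = rng.standard_normal((n_mentions, dim))
type_of = {"person": [0, 1, 2], "org": [3, 4, 5]}   # distant type labels

# prototype vector per type = mean of its mentions' embeddings
prototypes = {t: emb[idx].mean(axis=0) for t, idx in type_of.items()}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# confident candidates per type, then pairwise edges -> adjacency matrix A
A = np.zeros((n_mentions, n_mentions))
for t, proto in prototypes.items():
    cand = [m for m in range(n_mentions) if cos(emb[m], proto) >= delta]
    for i, j in combinations(cand, 2):
        A[i, j] = A[j, i] = 1.0

assert np.allclose(A, A.T)   # undirected graph, no self-loops
```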
#### 3.4.3 Mapping Noisy Mention Encodings to the Hyperbolic space
The mention encodings learnt in stage-I are noisy, as they are learnt over
distantly supervised data. These encodings lie in the Euclidean space; in
order to refine them, we first map them to the hyperbolic space, where we can
best exploit the fine-grained type hierarchy in relation to the type-specific
context to fine-tune these encodings as aggregates of contextually similar
neighbors.
Formally, let $\mathbf{p}^{E}=X_{m}\in\mathbf{R}^{N\times L}$ be the matrix
corresponding to the noisy mentions’ encodings in the euclidean domain. We
consider $o=\\{\sqrt{K},0,...,0\\}$ as a reference point (origin) in a
d-dimensional Hyperboloid with curvature $-1/K\,(\mathbb{H}^{d,K})$;
$(0,\textbf{p}^{E})$ as a point in the tangent space
$(\mathcal{T}\mathbb{H}^{d,K})$, and map it to
$\textbf{p}^{H}\in\mathbb{H}^{d,K}$ using the exponential map given in
Equation (3.4), as follows:
$\displaystyle\textbf{p}^{H}$ $\displaystyle=\exp^{K}_{\textbf{o}}((0,\textbf{p}^{E}))$
$\displaystyle\exp^{K}_{\textbf{o}}((0,\textbf{p}^{E}))$
$\displaystyle=\Big{(}\sqrt{K}\cosh\Big{(}\frac{||\textbf{p}^{E}||_{2}}{\sqrt{K}}\Big{)},$
(3.6)
$\displaystyle\sqrt{K}\sinh\Big{(}\frac{||\textbf{p}^{E}||_{2}}{\sqrt{K}}\Big{)}\frac{\textbf{p}^{E}}{||\textbf{p}^{E}||_{2}}\Big{)}$
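A quick numerical check of the lift in Eq. (3.6): mapping a toy Euclidean encoding onto the hyperboloid and verifying the manifold constraint $\langle\textbf{p},\textbf{p}\rangle_{\mathcal{L}}=-K$ (the vector and $K$ are illustrative values):

```python
import numpy as np

K = 1.0
p_E = np.array([0.3, 0.4])        # toy Euclidean mention encoding
n = np.linalg.norm(p_E)           # ||p^E||_2

# Eq. (3.6): lift (0, p^E) from the tangent space at the origin onto H^{d,K}
p_H = np.concatenate((
    [np.sqrt(K) * np.cosh(n / np.sqrt(K))],
    np.sqrt(K) * np.sinh(n / np.sqrt(K)) * p_E / n,
))

# the lifted point satisfies the hyperboloid constraint <p, p>_L = -K
mink = -p_H[0] ** 2 + np.dot(p_H[1:], p_H[1:])
assert abs(mink + K) < 1e-9
```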
#### 3.4.4 Linear Transformation
In order to perform the linear transformation on the noisy mention
encodings, i.e., (i) multiplication by a weight matrix W, and (ii) addition of
a bias vector b, we rely on the exponential and the logarithmic maps. For
multiplication with the weight matrix, we first apply the logarithmic map to
the encodings in the hyperbolic space, i.e.,
$\textbf{p}^{H}\in\mathbb{H}^{d,K}$, in order to project them to
$\mathcal{T}\mathbb{H}^{d,K}$. This projection is then multiplied by the
weight matrix $W$, and the resultant vectors are projected back to the
manifold using the exponential map. For a manifold with curvature constant
$K$, these operations can be summarized in the equation given below:
(3.7) $W\otimes\textbf{p}^{H}=\exp^{K}(W\log^{K}(\textbf{p}^{H}))$
For bias addition, we rely on parallel transport. Let b be the bias vector in
$\mathcal{T}_{\textbf{o}}\mathbb{H}^{d,K}$; we parallel transport b along the
tangent space and finally map it to the manifold. Formally, let
$\textbf{T}^{K}_{\textbf{o}\rightarrow\textbf{p}^{H}}$ represent the parallel
transport of a vector from $\mathcal{T}_{\textbf{o}}\mathbb{H}^{d,K}$ to
$\mathcal{T}_{\textbf{p}^{H}}\mathbb{H}^{d,K}$; we use the following equation
for the bias addition:
(3.8)
$\textbf{p}^{H}\oplus\textbf{b}=\exp^{K}_{\textbf{p}^{H}}(\textbf{T}^{K}_{\textbf{o}\rightarrow\textbf{p}^{H}}(\textbf{b}))$
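The weight multiplication of Eq. (3.7) can be sketched with both maps taken at the origin; the bias step via parallel transport (Eq. (3.8)) is omitted since its closed form is not spelled out above, and the small floor on Minkowski norms is an added stability assumption:

```python
import numpy as np

K = 1.0

def mink(p, q):
    return -p[0] * q[0] + np.dot(p[1:], q[1:])

def exp_o(v):
    """Exponential map at the origin o = (sqrt(K), 0, ..., 0); v[0] = 0."""
    o = np.zeros_like(v); o[0] = np.sqrt(K)
    n = np.sqrt(max(mink(v, v), 1e-15))
    return np.cosh(n / np.sqrt(K)) * o + np.sqrt(K) * np.sinh(n / np.sqrt(K)) * v / n

def log_o(q):
    """Logarithmic map at the origin (inverse of exp_o)."""
    o = np.zeros_like(q); o[0] = np.sqrt(K)
    u = q + (1.0 / K) * mink(o, q) * o
    n = np.sqrt(max(mink(u, u), 1e-15))
    d = np.sqrt(K) * np.arccosh(-mink(o, q) / K)
    return d * u / n

def matvec_hyp(W, p_H):
    """Eq. (3.7): W (x) p^H = exp(W log(p^H)); W acts on spatial coords."""
    v = log_o(p_H)
    return exp_o(np.concatenate(([0.0], W @ v[1:])))

q = exp_o(np.array([0.0, 0.3, 0.4]))
assert np.allclose(matvec_hyp(np.eye(2), q), q)  # identity weight is a no-op
```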
#### 3.4.5 Type-Specific Contextual Aggregation
Aggregation is a crucial step for noise reduction in FG-NET: it helps smooth
the type labels by refining the noisy mention encodings with information
accumulated from contextually similar neighbors lying at multiple
hops. Given the graph $G$, with nodes $(V)$ being the entity mentions, we use
the pairwise embedding vectors along the edges of the graph to compute the
attention weights $\eta_{ij}=cos(men^{i},men^{j})\,\forall(i,j)\in V$. In order
to perform the aggregation operation, we first use the logarithmic map to
project the results of the linear transformation from hyperbolic space to the
tangent space. Later, we use the neighboring information contained in $G$ to
compute the refined mention encoding as attentive aggregate of the neighboring
mentions. Finally, we map these results back to the manifold using the
exponential map $\exp^{K}$. Our methodology for contextual aggregation is
summarized in the following equation:
(3.9)
$AGG_{cxtx}(\textbf{p}^{H})_{i}=\exp^{K}_{\textbf{p}^{H}_{i}}\Big{(}\sum_{j\in\textit{N}(i)}(\widetilde{\eta_{ij}\odot
A})\log^{K}(\textbf{p}^{H}_{j})\Big{)}$
where $\widetilde{\eta_{ij}\odot A}$ is the Hadamard product of the attention
weights and the adjacency matrix $A$. It accommodates the degree of contextual
similarity among the mention pairs in $G$.
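Eq. (3.9) can be sketched as below, with the simplifying assumption that both maps are taken at the origin (the paper takes the exponential map at $\textbf{p}^{H}_{i}$); the adjacency and attention weights are toy values:

```python
import numpy as np

K = 1.0

def mink(p, q):
    return -p[0] * q[0] + np.dot(p[1:], q[1:])

def exp_o(v):
    o = np.zeros_like(v); o[0] = np.sqrt(K)
    n = np.sqrt(max(mink(v, v), 1e-15))
    return np.cosh(n / np.sqrt(K)) * o + np.sqrt(K) * np.sinh(n / np.sqrt(K)) * v / n

def log_o(q):
    o = np.zeros_like(q); o[0] = np.sqrt(K)
    u = q + (1.0 / K) * mink(o, q) * o
    n = np.sqrt(max(mink(u, u), 1e-15))
    return np.sqrt(K) * np.arccosh(-mink(o, q) / K) * u / n

# three toy mention encodings on H^2
pts = [exp_o(np.array([0.0, a, b])) for a, b in [(0.3, 0.4), (0.1, 0.2), (0.5, 0.1)]]
A   = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)   # adjacency
eta = np.array([[0, 0.7, 0.3], [0.7, 0, 0], [0.3, 0, 0]])        # cos-sim weights

# aggregate node 0 from its neighbors: exp( sum_j (eta ⊙ A)_0j * log(p_j) )
w = eta[0] * A[0]
agg = exp_o(sum(w[j] * log_o(pts[j]) for j in range(3)))
assert abs(mink(agg, agg) + K) < 1e-9   # the aggregate stays on the hyperboloid
```

Aggregating in the tangent space and mapping back guarantees the refined encoding remains a valid point on the manifold.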
#### 3.4.6 Non-Linear Activation
The contextually aggregated mention encoding is finally passed through a non-
linear activation function $\sigma$ ($\mathsf{ReLU}$ in our case). For this,
we follow similar steps: (i) map the encodings to the tangent space,
(ii) apply the activation function in the tangent space, and (iii) map the
results back to the hyperbolic space using the exponential map. These steps
are summarized in the following equation:
(3.10) $\sigma(\textbf{p}^{H})=\exp^{K}(\sigma(\log^{K}(\textbf{p}^{H})))$
### 3.5 Complete Model
We combine the above-mentioned steps to get the refined mention encodings at
the $l$-th layer, $\textbf{z}_{out}^{l,H}$, as follows:
$\displaystyle\textbf{p}^{l,H}$
$\displaystyle=W^{l}\otimes\textbf{p}^{l-1,H}\oplus\textbf{b}^{l}\text{;}$
(3.11) $\displaystyle\textbf{y}^{l,H}$
$\displaystyle=AGG_{cxtx}(\textbf{p}^{l,H})\text{;}\,\,\,\textbf{z}^{l,H}_{out}=\sigma(\textbf{y}^{l,H})$
Let $\textbf{z}_{out}^{l,H}\in\mathbb{H}^{d,K}$ denote the refined
mentions’ encodings hierarchically organized in the hyperbolic space. We embed
them along with the fine-grained type label encodings
$\\{\phi_{t}\\}_{t=1}^{T}\in\mathbb{H}^{d}$. For that, we learn a scoring
function
$f(\textbf{z}^{l,H}_{out},\phi_{t})=\phi_{t}^{T}\times\textbf{z}^{l,H}_{out}+bias_{t}$,
and separately learn the loss functions for the clean and the noisy mentions.
##### Loss for clean mentions:
In order to model the clean entity mentions $D_{tr\text{-}clean}$, we use a
margin-based loss to embed the refined mention encodings close to the true
type labels ($T_{y}$) and push them away from the false type labels
($T_{y^{{}^{\prime}}}$). The loss function is summarized as follows:
$\displaystyle L_{clean}$ $\displaystyle=\sum_{t\in
T_{y}}\mathsf{ReLU}(1-f(\textbf{z}^{l,H}_{out},\phi_{t}))+$ (3.12)
$\displaystyle\sum_{t^{{}^{\prime}}\in
T_{y^{{}^{\prime}}}}\mathsf{ReLU}(1+f(\textbf{z}^{l,H}_{out},\phi_{t^{{}^{\prime}}}))$
##### Loss for noisy mentions:
In order to model the noisy entity mentions $D_{tr\text{-}noisy}$, we use a
variant of the above loss function to embed the mention close to its most
relevant type label $t^{*}$, where $t^{*}=\operatorname*{argmax}_{t\in
T_{y}}f(\textbf{z}^{l,H}_{out},\phi_{t})$, among the set of noisy type labels
$(T_{y})$, and push it away from the irrelevant type labels
($T_{y^{{}^{\prime}}}$). The loss function is given as follows:
$\displaystyle L_{noisy}$
$\displaystyle=\mathsf{ReLU}(1-f(\textbf{z}^{l,H}_{out},\phi_{t^{*}}))+$
(3.13) $\displaystyle\sum_{t^{{}^{\prime}}\in
T_{y^{{}^{\prime}}}}\mathsf{ReLU}(1+f(\textbf{z}^{l,H}_{out},\phi_{t^{{}^{\prime}}}))$
Finally, we minimize $L_{clean}+L_{noisy}$ as the final loss function of the
FGNET-RH.
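The two losses can be illustrated with toy per-type scores $f(\textbf{z},\phi_{t})$; the margin of 1 and the structure follow Eqs. (3.12)–(3.13), while the score values and type indices are illustrative assumptions:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

scores = np.array([2.1, 0.4, -1.3])   # toy f(z, phi_t) for types t = 0, 1, 2
true_types, false_types = [0, 1], [2]

# Eq. (3.12), clean mentions: every true label should score above the margin
L_clean = sum(relu(1 - scores[t]) for t in true_types) + \
          sum(relu(1 + scores[t]) for t in false_types)

# Eq. (3.13), noisy mentions: only the best candidate t* is pulled close
t_star = max(true_types, key=lambda t: scores[t])
L_noisy = relu(1 - scores[t_star]) + sum(relu(1 + scores[t]) for t in false_types)

assert L_noisy <= L_clean   # the noisy loss ignores weaker candidate labels
```

Because the noisy loss only pulls on $t^{*}$, spurious distant-supervision labels do not drag the mention toward conflicting type vectors.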
## 4 Experimentation
### 4.1 Dataset
We evaluate our model using a set of publicly available datasets for FG-NET.
We chose these datasets because they contain a fairly large proportion of test
instances, which makes the evaluation more reliable. Statistics of these
datasets are shown in Table 1. These datasets are described as follows:
##### BBN:
Its training corpus is acquired from the Wall Street Journal annotated by [22]
using DBpedia Spotlight.
##### OntoNotes:
It is acquired from newswire documents contained in the OntoNotes corpus [23].
The training data is mapped to Freebase types via DBpedia Spotlight [5]. The
testing data is manually annotated by Gillick et al., [8].
Dataset | BBN | OntoNotes
---|---|---
Training Mentions | 86078 | 220398
Testing Mentions | 13187 | 9603
% clean mentions (training) | 75.92 | 72.61
% clean mentions (testing) | 100 | 94.0
Entity Types | 47 | 89
Table 1: Fine-Grained Named Entity Typing data sets
### 4.2 Experimental Settings
In order to set up a fair platform for comparative evaluation, we use the same
data settings (training, dev, and test splits) as all the models
considered as baselines in Table 2. All experiments are performed using an
Intel Gold 6240 CPU with 256 GB main memory.
##### Model Parameters:
For stage-I, the hidden layer size of the context and the position encoders is
set to 100d. The hidden layer size of the mention character encoder is 200d.
Character, position and label embeddings are randomly initialized. We report
the model performance using 300d Glove [16] and 1024d deep contextualized
embeddings [17].
For stage-II, we construct graphs with 5.4M and 0.6M edges for BBN and
OntoNotes respectively. Curvature constant of the hyperbolic space is set to
$K=1$. All the models are trained using Adam optimizer [10] with learning rate
= 0.001.
### 4.3 Model Comparison
We evaluate FGNET-RH against the following baseline models: (i) Figer [13];
(ii) Hyena [29]; (iii) AFET, AFET-NoCo and AFET-NoPa [19]; (iv) Attentive
[21]; (v) FNET [1]; (vi) NFGEC + LME [25]; and (vii) FGET-RR [2]. For
performance comparison, we use the scores reported in the original papers, as
they are computed using the same data settings as ours.
Note that we do not compare our model against [4, 14] because these models use
crowd-sourced data in addition to the distantly supervised data for model
training. Likewise, we exclude [26] from the evaluation because Xu and Barbosa
changed the fine-grained problem definition from multi-label to single-label
classification. This makes their problem setting different from ours, and the
end results are no longer comparable.
### 4.4 Main Results
The results of the proposed model are shown in Table 2. For each dataset, we
boldface the best scores, with the existing state-of-the-art underlined. These
results show that FGNET-RH outperforms the existing state-of-the-art models by
a significant margin. For the BBN data, FGNET-RH achieves 3.5%, 1.2%, and 1.5%
improvements in strict accuracy, mac-F1, and mic-F1, respectively, compared to
FGET-RR. For OntoNotes, FGNET-RH improves the mac-F1 and mic-F1 scores by
1.2% and 1.6%.
These results show that FGNET-RH offers multi-faceted benefits: it uses the
hyperbolic space in combination with graphs to encode the hierarchy while at
the same time catering to the label noise. In particular, the augmented
context sharing along the hierarchy leads to a considerable improvement in
performance compared to the baseline models.
| OntoNotes | BBN
---|---|---
| strict | mac-F1 | mic-F1 | strict | mac-F1 | mic-F1
FIGER [13] | 0.369 | 0.578 | 0.516 | 0.467 | 0.672 | 0.612
HYENA [29] | 0.249 | 0.497 | 0.446 | 0.523 | 0.576 | 0.587
AFET-NoCo [19] | 0.486 | 0.652 | 0.594 | 0.655 | 0.711 | 0.716
AFET-NoPa [19] | 0.463 | 0.637 | 0.591 | 0.669 | 0.715 | 0.724
AFET-CoH [19] | 0.521 | 0.680 | 0.609 | 0.657 | 0.703 | 0.712
AFET [19] | 0.551 | 0.711 | 0.647 | 0.670 | 0.727 | 0.735
Attentive [21] | 0.473 | 0.655 | 0.586 | 0.484 | 0.732 | 0.724
FNET-AllC [1] | 0.514 | 0.672 | 0.626 | 0.655 | 0.736 | 0.752
FNET-NoM [1] | 0.521 | 0.683 | 0.626 | 0.615 | 0.742 | 0.755
FNET [1] | 0.522 | 0.685 | 0.633 | 0.604 | 0.741 | 0.757
NFGEC+LME [25] | 0.529 | 0.724 | 0.652 | 0.607 | 0.743 | 0.760
FGET-RR[2] (Glove) | 0.567 | 0.737 | 0.680 | 0.740 | 0.811 | 0.817
FGET-RR[2] (ELMO) | 0.577 | 0.743 | 0.685 | 0.703 | 0.819 | 0.823
FGNET-RH (Hyperboloid + Glove) | 0.580 | 0.738 | 0.685 | 0.766 | 0.828 | 0.835
FGNET-RH (Hyperboloid + ELMO) | 0.575 | 0.752 | 0.696 | 0.712 | 0.824 | 0.823
FGNET-RH (Poincaré-Ball + Glove) | 0.579 | 0.741 | 0.684 | 0.760 | 0.829 | 0.833
FGNET-RH (Poincaré-Ball + ELMO) | 0.573 | 0.740 | 0.685 | 0.698 | 0.828 | 0.830
Table 2: FG-NET performance comparison against baseline models
### 4.5 Ablation Study
In this section, we evaluate the impact of different model components on label
de-noising. Specifically, we analyze the performance of FGNET-RH using
variants of the adjacency graph, including: (i) a randomly generated adjacency
graph of approximately the same size as $G$: $\text{FGNET-RH}{}\,(R)$, (ii) an
unweighted adjacency graph: $\text{FGNET-RH}{}\,(A)$, and (iii) pairwise
contextual similarity as the attention weights: $\text{FGNET-
RH}{}\,(\widetilde{\eta\odot A})$. The results in Table 3 show that, for the
given model architecture, the performance improvement (and correspondingly the
noise reduction) can be attributed to using the appropriate adjacency graph. The
drastic reduction in model performance for $\text{FGNET-RH}{}\,(R)$ shows
that once the contextual-similarity structure of the graph is lost, the label
smoothing is no longer effective. Likewise, the improvement in performance for
the models $\text{FGNET-RH}{}\,(A)$ and $\text{FGNET-RH}{}\,(\widetilde{\eta\odot
A})$ implies that the adjacency graphs $(A)$ and especially
$(\widetilde{\eta\odot A})$ indeed incorporate the required type-specific
contextual clusters at the needed level of granularity to effectively smooth
the noisy labels prior to entity typing.
Model | OntoNotes | BBN
---|---|---
 | strict | mac-F1 | mic-F1 | strict | mac-F1 | mic-F1
 | Hyperboloid ($\mathbb{H}^{d}$)
$\text{FGNET-RH}{}\,(R)$ | 0.484 | 0.643 | 0.597 | 0.486 | 0.647 | 0.653
$\text{FGNET-RH}{}\,(A)$ | 0.531 | 0.699 | 0.632 | 0.735 | 0.808 | 0.815
$\text{FGNET-RH}{}\,(\widetilde{\eta\odot A})$ | 0.580 | 0.738 | 0.685 | 0.766 | 0.828 | 0.835
 | Poincaré-Ball ($\mathbb{D}^{d}$)
$\text{FGNET-RH}{}\,(R)$ | 0.490 | 0.665 | 0.608 | 0.633 | 0.704 | 0.724
$\text{FGNET-RH}{}\,(A)$ | 0.571 | 0.737 | 0.679 | 0.746 | 0.814 | 0.822
$\text{FGNET-RH}{}\,(\widetilde{\eta\odot A})$ | 0.579 | 0.741 | 0.684 | 0.760 | 0.829 | 0.833
and Glove Embeddings
#### 4.5.1 Effectiveness of Hyperbolic Geometry
In order to verify the effectiveness of refining the mention encodings in the
hyperbolic space (stage-II), we perform a label-wise performance analysis for
the dominant labels in the BBN dataset. The corresponding results for the
Hyperboloid and the Poincaré-Ball models (Table 4) show that FGNET-RH
outperforms the existing state-of-the-art, i.e., FGET-RR by Ali et al., [2],
achieving higher F1-scores across all the labels. Note that FGNET-RH achieves
higher performance for the base type labels (e.g., _“/Person”,
“/Organization”, “/GPE”_), as well as for type labels deeper in the
hierarchy (e.g., _“/Organization/Corporation”, “/GPE/City”_). For
_“Organization”_ and _“Corporation”_, FGNET-RH achieves higher F1-scores of
0.896 and 0.855 respectively, compared to 0.881 and 0.844 for FGET-RR. This
is made possible because embedding in the hyperbolic space enables type-
specific context sharing at each level of the type hierarchy by appropriately
adjusting the norm of the label vector.
To further strengthen our claims regarding the effectiveness of using the
hyperbolic space for FG-NET, we analyzed the context of the entity types along
the type-hierarchy. We observed that, for the fine-grained type labels, the
context is additive and may be arranged in a hierarchical structure with the
generic terms lying at the root and the specific terms lying along the
children nodes.
For example, _“Government Organization”_ being a subtype of _“Organization”_
adds tokens similar to {bill, treasury, deficit, fiscal, senate etc., } to the
context of _“Organization”_. Likewise, _“Hospital”_ adds tokens similar to
{family, patient, kidney, stone, infection etc., } to the context of
_“Organization”_.
Labels | Support | FGET-RR [2] | FGNET-RH (Poincaré-Ball) | FGNET-RH (Hyperboloid)
---|---|---|---|---
Prec | Rec | F1 | Prec | Rec | F1 | Prec | Rec | F1
/Organization | 45.30% | 0.924 | 0.842 | 0.881 | 0.916 | 0.876 | 0.896 | 0.926 | 0.860 | 0.891
/Organization/Corporation | 35.70% | 0.921 | 0.779 | 0.844 | 0.903 | 0.812 | 0.855 | 0.908 | 0.801 | 0.851
/Person | 22.00% | 0.86 | 0.886 | 0.872 | 0.876 | 0.902 | 0.889 | 0.843 | 0.911 | 0.876
/GPE | 21.30% | 0.924 | 0.845 | 0.883 | 0.92 | 0.868 | 0.893 | 0.924 | 0.885 | 0.904
/GPE/City | 9.17% | 0.802 | 0.767 | 0.784 | 0.806 | 0.750 | 0.777 | 0.804 | 0.795 | 0.799
Table 4: Label-wise Precision, Recall and F1 scores for the BBN data compared
with FGET-RR [2]
This finding correlates with the norms of the label vectors, shown in Table 5
for the Poincaré-Ball model. The vector norms of the entity types deep in the
hierarchy (e.g., _“/Facility/Building”, “/Facility/Bridge”,
“/Facility/Highway”_) are greater than that of the base entity type
(_“/Facility”_). A similar trend is observed for the fine-grained types
_“/Organization/Government”, “/Organization/Political”_, etc., compared to
the base type _“/Organization”_. This confirms that FGNET-RH indeed adjusts
the norm of the label vector according to the depth of the type label in the
label hierarchy, which allows the model to cluster the type-specific context
along the hierarchy in an augmented fashion.
In addition, we also analyzed the entity mentions corrected specifically by the
label-smoothing process, i.e., stage-II of FGNET-RH. For this, we examined
the model performance with and without the label smoothing, i.e., we
separately built a classification model using the output of stage-I. For
the BBN data, stage-II is able to correct about 18% of the misclassifications
made by stage-I. For example, in the sentence: _“CNW Corp. said
the final step in the acquisition of the company has been completed with the
merger of CNW with a subsidiary of Chicago &.”_, the bold-faced entity
mention CNW is labeled {_“/GPE”_} by stage-I. However, after the label smoothing
in stage-II, the label predicted by FGNET-RH is
{_“/Organization/Corporation”_}, which indeed is the correct label. A similar
trend was observed for the OntoNotes dataset.
This analysis shows that FGNET-RH, using a blend of contextual graphs and
hyperbolic space, incorporates the right geometry to embed the noisy FG-NET
data with low distortion. Compared to Euclidean space, hyperbolic space allows
the graph volume (the number of nodes within a fixed radius) to grow
exponentially along the hierarchy. This enables FGNET-RH to perform
label-smoothing by forming type-specific contextual clusters across noisy
mentions along the type hierarchy.
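To make the geometric argument concrete, the Poincaré-ball metric stretches distances near the boundary of the ball: two points with the same Euclidean separation are hyperbolically much farther apart near the boundary than near the origin, which is what lets deeper (larger-norm) type-labels spread out along the hierarchy. A minimal illustration in plain Python (not code from the paper):

```python
import math

def poincare_distance(u, v):
    """Geodesic distance in the Poincare ball:
    d(u, v) = arcosh(1 + 2*||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))."""
    sq_diff = sum((a - b) ** 2 for a, b in zip(u, v))
    sq_u = sum(a * a for a in u)
    sq_v = sum(b * b for b in v)
    return math.acosh(1 + 2 * sq_diff / ((1 - sq_u) * (1 - sq_v)))

# Two pairs with identical Euclidean separation (0.1): the pair closer to the
# boundary of the ball is hyperbolically much farther apart, reflecting the
# exponential volume growth along the hierarchy.
d_origin = poincare_distance((0.0, 0.0), (0.1, 0.0))   # ~0.20
d_border = poincare_distance((0.8, 0.0), (0.9, 0.0))   # ~0.75
```

This is why increasing label norms with depth (Table 5) gives deeper, more numerous fine-grained types progressively more room to form separate clusters.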
#### 4.5.2 Error Cases
We analyzed the prediction errors of FGNET-RH and attributed them to the
following factors:
##### Inadequate Context:
For these cases, type-labels are dictated entirely by the mention tokens, with
very little information contained in the context. For example, in the
sentence: _“The IRS recently won part of its long-running battle against
John.”_, the entity mention _“ IRS”_ is labeled as
{_“/Organization/Corporation”_} irrespective of any information contained in
the mention’s context. This limited contextual information in turn restricts
the ability of FGNET-RH to predict all possible fine-grained labels, thus
affecting recall. For the BBN data set, more than 30% of the errors may be
attributed to an inadequate mention context.
##### Correlated Context:
FG-NET type hierarchy encompasses semantically correlated entity types, e.g.,
{ _“Organization”_ vs _“Corporation”_}; {_“Actor”_ vs _“Artist”_}; {_“Actor”_
vs _“Director”_}; {_“Ship”_ vs _“Spacecraft”_}; {_“Coach”_ vs _“Athlete”_}
etc., with highly convoluted contexts. For example, the contexts of the entity
types {_“actor”_} and {_“artist”_} overlap heavily: both contain semantically
related tokens such as {direct, dialogue, dance, acting}. This high contextual
overlap makes it hard for FGNET-RH to delineate the decision boundary across
these correlated entity types, which leads to false predictions and thus
affects precision. For the BBN data set, more than 35% of the errors may be
attributed to correlated context.
##### Label Bias:
Label bias originating from the distant supervision may render the
label-smoothing ineffective. This occurs specifically when all the labels
originating from the distant supervision are incorrect. For the BBN data,
approximately 5% of the errors may be attributed to label bias.
The remaining errors may be attributed to the inability of FGNET-RH to
explicitly deal with different word senses, the lack of in-depth syntactic
analysis, the inadequacy of the underlying embedding models to handle
semantics, etc. We plan to accommodate these aspects in future work.
Label | Norm | Label | Norm
---|---|---|---
/Organization | 0.855 | /Facility | 0.643
/Organization/Religious | 0.860 | /Facility/Building | 0.725
/Organization/Government | 0.870 | /Facility/Bridge | 0.745
/Organization/Political | 0.875 | /Facility/Highway | 0.815
Table 5: FGNET-RH label norms for the Poincaré-Ball model; the norm of a base
type-label is lower than that of type-labels deeper in the hierarchy
## 5 Conclusions
In this paper, we introduced FGNET-RH, a novel approach that combines the
benefits of graph structures and hyperbolic geometry to perform entity typing
in a robust fashion. FGNET-RH initially learns noisy mention encodings using
LSTM networks and constructs a graph to cluster contextually similar mentions
using embeddings in the Euclidean domain; it then performs label-smoothing in
the hyperbolic domain to refine the noisy encodings prior to entity typing.
Performance evaluation on the benchmark datasets shows that FGNET-RH offers a
well-suited geometry for context sharing across distantly supervised data,
and in turn outperforms the existing research on FG-NET by a significant
margin.
## References
* [1] Abhishek, Ashish Anand, and Amit Awekar. Fine-grained entity type classification by jointly learning representations and label embeddings. In EACL (1), pages 797–807. Association for Computational Linguistics, 2017.
* [2] Muhammad Asif Ali, Yifang Sun, Bing Li, and Wei Wang. Fine-grained named entity typing over distantly supervised data based on refined representations. In AAAI, pages 7391–7398. AAAI Press, 2020.
* [3] Ines Chami, Zhitao Ying, Christopher Ré, and Jure Leskovec. Hyperbolic graph convolutional neural networks. In NeurIPS, pages 4869–4880, 2019.
* [4] Eunsol Choi, Omer Levy, Yejin Choi, and Luke Zettlemoyer. Ultra-fine entity typing. In ACL (1), pages 87–96. Association for Computational Linguistics, 2018.
* [5] Joachim Daiber, Max Jakob, Chris Hokamp, and Pablo N. Mendes. Improving efficiency and accuracy in multilingual entity extraction. In I-SEMANTICS, pages 121–124. ACM, 2013.
* [6] Bhuwan Dhingra, Christopher J. Shallue, Mohammad Norouzi, Andrew M. Dai, and George E. Dahl. Embedding text in hyperbolic spaces. In TextGraphs@NAACL-HLT, pages 59–69. Association for Computational Linguistics, 2018.
* [7] Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. Knowledge vault: a web-scale approach to probabilistic knowledge fusion. In KDD, pages 601–610. ACM, 2014.
* [8] Dan Gillick, Nevena Lazic, Kuzman Ganchev, Jesse Kirchner, and David Huynh. Context-dependent fine-grained entity type tagging. CoRR, abs/1412.1820, 2014.
* [9] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
* [10] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR (Poster), 2015.
* [11] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR (Poster). OpenReview.net, 2017.
* [12] Ni Lao and William W Cohen. Relational retrieval using a combination of path-constrained random walks. Machine learning, 81(1):53–67, 2010.
* [13] Xiao Ling and Daniel S. Weld. Fine-grained entity recognition. In AAAI. AAAI Press, 2012.
* [14] Federico López, Benjamin Heinzerling, and Michael Strube. Fine-grained entity typing in hyperbolic space. In RepL4NLP@ACL, pages 169–180. Association for Computational Linguistics, 2019.
* [15] Maximilian Nickel and Douwe Kiela. Poincaré embeddings for learning hierarchical representations. In NIPS, pages 6338–6347, 2017.
* [16] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. Glove: Global vectors for word representation. In EMNLP, pages 1532–1543. ACL, 2014.
* [17] Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. In NAACL-HLT, pages 2227–2237. Association for Computational Linguistics, 2018.
* [18] Deepak Ravichandran and Eduard Hovy. Learning surface text patterns for a question answering system. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 41–47. Association for Computational Linguistics, 2002.
* [19] Xiang Ren, Wenqi He, Meng Qu, Lifu Huang, Heng Ji, and Jiawei Han. AFET: automatic fine-grained entity typing by hierarchical partial-label embedding. In EMNLP, pages 1369–1378. The Association for Computational Linguistics, 2016.
* [20] Xiang Ren, Wenqi He, Meng Qu, Clare R. Voss, Heng Ji, and Jiawei Han. Label noise reduction in entity typing by heterogeneous partial-label embedding. In KDD, pages 1825–1834. ACM, 2016.
* [21] Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, and Sebastian Riedel. An attentive neural architecture for fine-grained entity type classification. In AKBC@NAACL-HLT, pages 69–74. The Association for Computer Linguistics, 2016.
* [22] Ralph Weischedel and Ada Brunstein. Bbn pronoun coreference and entity type corpus. Linguistic Data Consortium, Philadelphia, 112, 2005.
* [23] Ralph Weischedel, Sameer Pradhan, Lance Ramshaw, Martha Palmer, Nianwen Xue, Mitchell Marcus, Ann Taylor, Craig Greenberg, Eduard Hovy, Robert Belvin, et al. Ontonotes release 4.0. LDC2011T03, Philadelphia, Penn.: Linguistic Data Consortium, 2011.
* [24] Wentao Wu, Hongsong Li, Haixun Wang, and Kenny Qili Zhu. Probase: a probabilistic taxonomy for text understanding. In SIGMOD Conference, pages 481–492. ACM, 2012.
* [25] Ji Xin, Hao Zhu, Xu Han, Zhiyuan Liu, and Maosong Sun. Put it back: Entity typing with language model enhancement. In EMNLP, pages 993–998. Association for Computational Linguistics, 2018.
* [26] Peng Xu and Denilson Barbosa. Neural fine-grained entity type classification with hierarchy-aware loss. In NAACL-HLT, pages 16–25. Association for Computational Linguistics, 2018.
* [27] Yadollah Yaghoobzadeh, Heike Adel, and Hinrich Schütze. Noise mitigation for neural entity typing and relation extraction. arXiv preprint arXiv:1612.07495, 2016.
* [28] Dani Yogatama, Daniel Gillick, and Nevena Lazic. Embedding methods for fine grained entity type classification. In ACL (2), pages 291–296. The Association for Computer Linguistics, 2015.
* [29] Mohamed Amir Yosef, Sandro Bauer, Johannes Hoffart, Marc Spaniol, and Gerhard Weikum. Hyena-live: Fine-grained online entity type classification from natural-language text. In ACL (Conference System Demonstrations), pages 133–138. The Association for Computer Linguistics, 2013.
# Towards Robustness to Label Noise in Text Classification via Noise Modeling
Siddhant Garg (Amazon Alexa AI)<EMAIL_ADDRESS>, Goutham Ramakrishnan
(Health at Scale Corporation)<EMAIL_ADDRESS>, and Varun Thumbe (KLA
Corporation)<EMAIL_ADDRESS>
(2021)
###### Abstract.
Large datasets in NLP tend to suffer from noisy labels due to erroneous
automatic and human annotation procedures. We study the problem of text
classification with label noise, and aim to capture this noise through an
auxiliary noise model over the classifier. We first assign a probability score
to each training sample of having a clean or noisy label, using a two-
component beta mixture model fitted on the training losses at an early epoch.
Using this, we jointly train the classifier and the noise model through a
novel de-noising loss having two components: (i) cross-entropy of the noise
model prediction with the input label, and (ii) cross-entropy of the
classifier prediction with the input label, weighted by the probability of the
sample having a clean label. Our empirical evaluation on two text
classification tasks and two types of label noise: random and input-
conditional, shows that our approach can improve classification accuracy, and
prevent over-fitting to the noise.
Label Noise; Noise Model; Robustness; Text Classification; NLP
CIKM ’21: Proceedings of the 30th ACM International Conference on Information
and Knowledge Management, November 1–5, 2021, Virtual Event, QLD, Australia.
Copyright ACM 2021. DOI: 10.1145/3459637.3482204. ISBN: 978-1-4503-8446-9/21/11.
CCS: Computing methodologies — Natural language processing.
## 1. Introduction
Training modern ML models requires access to large accurately labeled
datasets, which are difficult to obtain due to errors in automatic or human
annotation techniques (Wang et al., 2018; Zlateski et al., 2018). Recent
studies (Zhang et al., 2016) have shown that neural models can over-fit on
noisy labels and thereby not generalize well. Human annotations for language
tasks have been popularly obtained from platforms like Amazon Mechanical Turk
(Ipeirotis et al., 2010), resulting in noisy labels due to ambiguity of the
correct label (Zhan et al., 2019), annotation speed, human error, inexperience
of annotator, etc. While learning with noisy labels has been extensively
studied in computer vision (Reed et al., 2015; Zhang et al., 2018;
Thulasidasan et al., 2019), the corresponding progress in NLP has been
limited. With the increasing size of NLP datasets, noisy labels are likely to
affect several practical applications (Agarwal et al., 2007).
Figure 1. Illustration of our approach, with an auxiliary noise model $N_{M}$
on top of the classifier $M$. We jointly train the models using a de-noising
loss $\mathcal{L}_{DN}$, and use the clean label prediction $\hat{y}^{(c)}$
during inference.
In this paper, we consider the problem of text classification, and capture the
label noise through an auxiliary noise model (See Fig. 1). We leverage the
finding of learning on clean labels being easier than on noisy labels (Arazo
et al., 2019), and first fit a two-component beta-mixture model (BMM) on the
training losses from the classifier at an early epoch. Using this, we assign a
probability score to every training sample of having a clean or noisy label.
We then jointly train the classifier and the noise model by selectively
guiding the former’s prediction for samples with high probability scores of
having clean labels. More specifically, we propose a novel de-noising loss
having two components: (i) cross-entropy of the noise model prediction with
the input label and (ii) cross-entropy of the classifier prediction with the
input label, weighted by the probability of the sample having a clean label.
Our formulation constrains the noise model to learn the label noise, and the
classifier to learn a good representation for the prediction task from the
clean samples. At inference time, we remove the noise model and use the
predictions from the classifier.
Panels: (a) Epoch 1; (b) Epoch 9; (c) Epoch 30; (d) Fitting a BMM at Epoch 9.
Figure 2. (a), (b) and (c) show the histogram of the training loss from the
classifier for the train split (with Clean/ Noisy label) at different epochs
(word-LSTM on TREC with 40% random noise). Initially (see (a)), the losses are
high for all data points (both clean and noisy labels). A fully trained model
achieves low losses on both clean and noisy data points, indicating over-
fitting to the noise, as seen in (c). However, at an early epoch of training,
we observe that samples with clean labels have lower losses while those with
noisy labels have high losses, leading to the formation of two clusters as
seen in (b). We fit a beta-mixture model at this intermediate epoch to
estimate the probability of a sample having a clean or noisy label, as shown
in (d).
Most existing works on learning with noisy labels assume that the label noise
is independent of the input and only conditional on the true label. Text
annotation complexity has been shown to depend on the lexical, syntactic and
semantic input features (Joshi et al., 2014) and not solely on the true label.
The noise model in our formulation can capture an arbitrary noise function,
which may depend on both the input and the original label, taking as input a
contextualized input representation from the classifier. While de-noising the
classifier for sophisticated noise functions is a challenging problem, we take
the first step towards capturing a real world setting.
We evaluate our approach on two popular datasets, for two different types of
label noise: random and input-conditional; at different noise levels. Across
two model architectures, our approach results in improved model accuracies
over the baseline, while preventing over-fitting to the label noise.
## 2. Related Work
There have been several research works that have studied the problem of
combating label noise in computer vision (Frénay and Verleysen, 2014; Jiang et
al., 2018, 2019) through techniques like bootstrapping (Reed et al., 2015),
mixup (Zhang et al., 2018), etc. Applying techniques like mixup (convex
combinations of pairs of samples) for textual inputs is challenging due to the
discrete nature of the input space and retaining overall semantics. In natural
language processing, Agarwal et al. (2007) study the effect of different kinds
of noise on text classification, Ardehaly and Culotta (2018) study social
media text classification using label proportion (LLP) models, and Malik and
Bhardwaj (2011) automatically validate noisy labels using high-quality class
labels. Jindal et al. (2019) capture random label noise via a
$\ell_{2}$-regularized matrix learned on the classifier logits. Our work
differs from this as we i) use a neural network noise model over
contextualized embeddings from the classifier, with (ii) a new de-noising loss
to explicitly guide learning. It is difficult to draw a distinction between
noisy labels, and outliers which are hard to learn from. While several works
perform outlier detection (Goodman et al., 2016; Larson et al., 2019) to
discard these samples while learning the classifier, we utilise the noisy data
in addition to the clean data for improving performance.
## 3. Methodology
Problem Setting Let
$(X,Y^{(c)}){=}\\{(x_{1},y_{1}^{(c)}),\dots,(x_{N},y_{N}^{(c)})\\}$ denote
clean training samples from a distribution
$\mathcal{D}{=}{\mathcal{X}}{\times}{\mathcal{Y}}$. We assume a function
$\mathcal{F}:{\mathcal{X}}{\times}{\mathcal{Y}\rightarrow\mathcal{Y}}$ that
introduces noise in labels $Y^{(c)}$. We apply $\mathcal{F}$ on $(X,Y^{(c)})$
to obtain the noisy training data
$(X,Y^{(n)})=\\{(x_{1},y_{1}^{(n)}),\dots,(x_{N},y_{N}^{(n)})\\}$.
$(X,Y^{(n)})$ contains a combination of clean samples (whose original label is
retained $y^{(n)}{=}y^{(c)}$) and noisy samples (whose original label is
corrupted $y^{(n)}{\neq}y^{(c)}$). Let $(X_{T},Y_{T})$ be a test set sampled
from the clean distribution $\mathcal{D}$. Our goal is to learn a classifier
model $\mathcal{M}:\mathcal{X}{\rightarrow}\mathcal{Y}$ trained on the noisy
data $(X,Y^{(n)})$, which generalizes well on $(X_{T},Y_{T})$. Note that we do
not have access to the clean labels $Y^{(c)}$ at any point during training.
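As a concrete illustration of one such noise function $\mathcal{F}$, the sketch below injects uniform random label noise (the setting studied in Section 4.1). The helper name and the convention of always flipping a corrupted label to a *different* class are our illustrative assumptions, not code from the paper:

```python
import random

def inject_random_noise(labels, noise_frac, num_classes, seed=0):
    """Corrupt a noise_frac fraction of labels, chosen uniformly at random.
    Each corrupted label is flipped to a uniformly chosen *different* class,
    so y^(n) != y^(c) exactly on the corrupted indices."""
    rng = random.Random(seed)
    n = len(labels)
    corrupted = set(rng.sample(range(n), int(noise_frac * n)))
    noisy = list(labels)
    for i in corrupted:
        noisy[i] = rng.choice([c for c in range(num_classes) if c != labels[i]])
    return noisy, corrupted
```

In the paper's setup the corrupted indices are of course unknown to the learner; they are returned here only so an experiment can measure over-fitting to the noise.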
Modeling Noise Function $\mathcal{F}$ We propose to capture $\mathcal{F}$
using an auxiliary noise model $N_{M}$ on top of the classifier model $M$, as
shown in Fig. 1. For an input $x$, a representation $R_{M}(x)$, derived from
$M$, is fed to $N_{M}$. $R_{M}(x)$ can typically be the contextualized input
embedding from the penultimate layer of $M$. We denote the predictions from
$M$ and $N_{M}$ to be $\hat{y}^{(c)}$(clean prediction) and
$\hat{y}^{(n)}$(noisy prediction) respectively. The clean prediction
$\hat{y}^{(c)}$ is used for inference.
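The cascade can be sketched as follows: $M$ produces the clean logits and a representation $R_{M}(x)$, which a small feedforward noise head $N_{M}$ maps to the noisy logits. This is an illustrative plain-Python sketch; the layer shapes, activation, and names are our assumptions (the paper's $N_{M}$ is a 2-layer feedforward network, see Section 4):

```python
import math

def linear(x, w, b):
    # y = W x + b, with W given as a list of rows
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(w, b)]

def noise_head(r, w1, b1, w2, b2):
    """2-layer feedforward noise model N_M: maps the classifier
    representation R_M(x) to logits for the *noisy* label y^(n)."""
    h = [math.tanh(v) for v in linear(r, w1, b1)]
    return linear(h, w2, b2)
```

At inference time this head is simply discarded and the clean logits from $M$ are used directly.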
Model | Method | TREC 10% | TREC 20% | TREC 30% | TREC 40% | TREC 50% | AG-News 10% | AG-News 20% | AG-News 30% | AG-News 40% | AG-News 50%
---|---|---|---|---|---|---|---|---|---|---|---
word-LSTM | Baseline | 88.0 (-0.6) | 89.4 (-9.6) | 83.4 (-19.0) | 79.6 (-24.8) | 77.6 (-27.2) | 91.9 (-1.7) | 91.3 (-1.5) | 90.5 (-2.5) | 89.3 (-3.7) | 88.6 (-10.5)
word-LSTM | $\mathcal{L}_{DN{-}H}$ | 92.2 (-0.6) | 90.2 (-0.2) | 88.8 (-0.4) | 83.0 (-3.6) | 82.4 (0.0) | 91.5 (-0.1) | 90.6 (-0.1) | 90.8 (-0.1) | 90.3 (0.0) | 89.0 (-0.1)
word-LSTM | $\mathcal{L}_{DN{-}S}$ | 92.4 (-1.0) | 90.0 (-0.2) | 87.4 (-2.0) | 83.4 (-1.0) | 82.6 (-8.4) | 91.8 (-0.3) | 90.8 (-0.2) | 91.0 (-0.1) | 90.3 (-0.1) | 88.6 (-0.1)
word-CNN | Baseline | 88.8 (-1.4) | 89.2 (-1.8) | 84.8 (-8.0) | 82.2 (-15.0) | 77.6 (-16.0) | 90.9 (-2.7) | 90.6 (-6.2) | 89.3 (-10.2) | 89.2 (-17.9) | 87.4 (-25.2)
word-CNN | $\mathcal{L}_{DN{-}H}$ | 91.0 (-0.2) | 90.8 (-0.2) | 89.4 (-1.0) | 81.4 (0.0) | 81.4 (-4.8) | 91.3 (-0.2) | 91.0 (-0.4) | 90.3 (-0.3) | 88.3 (-3.2) | 86.6 (-3.5)
word-CNN | $\mathcal{L}_{DN{-}S}$ | 92.2 (-1.4) | 91.8 (-2.0) | 88.8 (-2.8) | 77.0 (-2.4) | 77.2 (-7.0) | 90.9 (0.0) | 90.4 (-0.1) | 88.7 (-1.1) | 86.6 (-3.5) | 84.5 (-10.2)
Table 1. Results with random noise. Each cell A (B) reports the Best test
accuracy A and the (Last - Best) accuracy gap B. Clean-data (0% noise)
accuracies: TREC word-LSTM 93.8, word-CNN 92.6; AG-News word-LSTM 92.5,
word-CNN 91.5.
### 3.1. Estimating clean/noisy label using BMM
It has been empirically observed that classifiers that capture input semantics
do not fit the noise before significantly learning from the clean samples
(Arazo et al., 2019). For a classifier trained using a cross-entropy loss
($\mathcal{L}_{CE}$) on the noisy dataset, this can be exploited to cluster
the input samples as clean/noisy in an unsupervised manner.
Initially the training loss on both clean and noisy samples is large, and
after a few training epochs, the loss of majority of the clean samples
reduces. Since the loss of the noisy samples is still large, this segregates
the samples into two clusters with different loss values. On further training,
the model over-fits on the noisy samples and the training loss on both samples
reduces. We illustrate this in Fig. 2(a)$-$(c). We fit a 2-component Beta
mixture model (BMM) over the normalized training losses
($\mathcal{L}_{CE}(\hat{y}^{(c)},\cdot)\in[0,1]$) obtained after training the
model for some warmup epochs $T_{0}$. Using a Beta mixture model works better
than using a Gaussian mixture model as it allows for asymmetric distributions
and can capture the short left-tails of the clean sample losses. For a sample
$(x,y)$ with normalized loss $\mathcal{L}_{CE}(\hat{y}^{(c)},y)=\ell$, the BMM
is given by:
$\displaystyle p(\ell)=\lambda_{c}\cdot p(\ell|\text{clean})+\lambda_{n}\cdot
p(\ell|\text{noisy})$ $\displaystyle
p(\ell|\text{clean})=\frac{\Gamma(\alpha_{c}+\beta_{c})}{\Gamma(\alpha_{c})\Gamma(\beta_{c})}\ell^{\alpha_{c}-1}{(1-\ell)}^{\beta_{c}-1}$
$\displaystyle
p(\ell|\text{noisy})=\frac{\Gamma(\alpha_{n}+\beta_{n})}{\Gamma(\alpha_{n})\Gamma(\beta_{n})}\ell^{\alpha_{n}-1}{(1-\ell)}^{\beta_{n}-1}$
where $\Gamma$ denotes the gamma function and $\alpha_{c/n},\beta_{c/n}$ are
the parameters of the individual clean/noisy Beta distributions. The mixture
coefficients $\lambda_{c}$ and $\lambda_{n}$ and the parameters
$(\alpha_{c/n},\beta_{c/n})$ are learned using the EM algorithm. After
fitting the BMM $\mathcal{B}$, for a given input $x$ with a normalized loss
$\mathcal{L}_{CE}(\hat{y}^{(c)},y)=\ell$, we denote the posterior probability
of $x$ having a clean label by:
$\mathcal{B}(x)=\frac{\lambda_{c}\cdot p(\ell|\text{clean})}{\lambda_{c}\cdot
p(\ell|\text{clean})+\lambda_{n}\cdot p(\ell|\text{noisy})}$
The BMM $\mathcal{B}$ learnt from Fig. 2b is shown in Fig. 2d.
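The EM fit described above can be sketched in plain Python. The paper does not specify its EM implementation, so the method-of-moments M-step and the hard median-split initialization below are illustrative assumptions:

```python
import math

def beta_pdf(x, a, b):
    # Beta(a, b) density, computed via log-gamma for numerical stability
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x))

def fit_bmm(losses, n_iter=30, eps=1e-4):
    """Fit a 2-component Beta mixture on normalized losses in (0, 1) with EM,
    returning the posterior B(x) that a loss belongs to the *clean* component.
    Component 0 is initialized on the lower half of the sorted losses."""
    x = [min(max(l, eps), 1 - eps) for l in losses]
    xs = sorted(x)
    halves = [xs[: len(xs) // 2], xs[len(xs) // 2:]]

    def moment_match(vals, weights):
        # weighted method-of-moments estimate of (alpha, beta)
        sw = sum(weights)
        m = sum(w * v for w, v in zip(weights, vals)) / sw
        var = sum(w * (v - m) ** 2 for w, v in zip(weights, vals)) / sw + 1e-6
        c = m * (1 - m) / var - 1
        return max(m * c, eps), max((1 - m) * c, eps)

    params = [moment_match(h, [1.0] * len(h)) for h in halves]
    lam = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibility of the clean (low-loss) component
        r = [lam[0] * beta_pdf(t, *params[0])
             / (lam[0] * beta_pdf(t, *params[0]) + lam[1] * beta_pdf(t, *params[1]))
             for t in x]
        # M-step: update mixture weights and Beta parameters
        lam = [sum(r) / len(x), 1 - sum(r) / len(x)]
        params = [moment_match(x, r), moment_match(x, [1 - t for t in r])]

    def posterior_clean(loss):
        t = min(max(loss, eps), 1 - eps)
        p0 = lam[0] * beta_pdf(t, *params[0])
        p1 = lam[1] * beta_pdf(t, *params[1])
        return p0 / (p0 + p1)
    return posterior_clean
```

On well-separated losses, as in Fig. 2(b), small losses get a posterior near 1 (clean) and large losses near 0 (noisy), exactly the weighting used in the de-noising loss of Section 3.2.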
Algorithm 1 Training using $\mathcal{L}_{DN-H}$
Input: Train data $(x_{i},y_{i}^{(n)})_{i=1}^{N}$, warmup epochs $T_{0}$, total epochs $T$, parameter $\beta$, classifier $M$, noise model $N_{M}$
for epoch in $\\{1,\dots,T_{0}\\}$ do
    $\hat{y_{i}}^{(c)}\leftarrow M(x_{i})\ \forall\ i\in[N]$
    Train $M$ with $\sum_{i}\mathcal{L}_{CE}(\hat{y_{i}}^{(c)},y_{i}^{(n)})$
end for
Fit a 2-component BMM $\mathcal{B}$ on $\\{\mathcal{L}_{CE}(\hat{y_{i}}^{(c)},y_{i}^{(n)})\\}_{i=1}^{N}$
for epoch in $\\{T_{0}+1,\dots,T\\}$ do
    $\hat{y_{i}}^{(c)}\leftarrow M(x_{i})$, $\hat{y_{i}}^{(n)}\leftarrow N_{M}(R_{M}(x_{i}))\ \forall\ i\in[N]$
    Train $M,N_{M}$ with $\mathcal{L}_{DN{-}H}=\sum_{i}\big{(}\mathcal{L}_{CE}(\hat{y_{i}}^{(n)},y_{i}^{(n)}){+}\beta\cdot\mathbbm{1}[\mathcal{B}(x_{i}){>}0.5]\cdot\mathcal{L}_{CE}(\hat{y_{i}}^{(c)},y_{i}^{(n)})\big{)}$
end for
Return: Trained classifier model $M$
### 3.2. Learning $M$ and $N_{M}$
We aim to train $M,N_{M}$ such that when given an input, $M$ predicts the
clean label and $N_{M}$ predicts the noisy label for this input (if
$\mathcal{F}$ retains the original clean label for this input, then both
$M,N_{M}$ predict the clean label). Thus for an input $(x,y)$ having a clean
label, we want $\hat{y}^{(c)}{=}\hat{y}^{(n)}{=}y$; and for an input $(x,y)$
having a noisy label, we want $\hat{y}^{(n)}{=}y$ and $\hat{y}^{(c)}$ to be
the clean label for $x$. We jointly train $M,N_{M}$ using the de-noising loss
proposed below:
(1) $\displaystyle\mathcal{L}_{DN}{=}\ \mathcal{L}_{CE}(\hat{y}^{(n)},y)\
{+}\beta{\cdot}\mathcal{B}(x){\cdot}\mathcal{L}_{CE}({\hat{y}^{(c)}},y)$
The first term trains the $M{-}N_{M}$ cascade jointly using cross entropy
between $\hat{y}^{(n)}$ and $y$. The second term trains $M$ to predict
$\hat{y}^{(c)}$ correctly for samples believed to be clean, weighted by
$\mathcal{B}(x)$. Here $\beta$ is a parameter that controls the trade-off
between the two terms.
By jointly training $M,N_{M}$ with $\mathcal{L}_{DN}$, we implicitly constrain
$N_{M}$ to capture the label noise. We also use an alternative loss
formulation that replaces the probability $\mathcal{B}(x)$ with the indicator
$\mathbbm{1}[\mathcal{B}(x){>}0.5]$. For ease of notation, we refer to the
former (using $\mathcal{B}(x)$) as the soft de-noising loss
$\mathcal{L}_{DN{-}S}$ and to the latter as the hard de-noising loss
$\mathcal{L}_{DN{-}H}$.
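On a single example, the soft and hard variants of Eq. (1) can be sketched as follows (illustrative plain-Python version; the function names are ours, not from the paper):

```python
import math

def softmax_cross_entropy(logits, label):
    # numerically stable cross-entropy of a softmax over raw logits
    # with an integer class label
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[label]

def denoising_loss(clean_logits, noisy_logits, y, p_clean, beta=4.0, hard=False):
    """L_DN = CE(y_hat_n, y) + beta * w * CE(y_hat_c, y),
    where w = B(x) for the soft loss L_DN-S and
    w = 1[B(x) > 0.5] for the hard loss L_DN-H."""
    w = float(p_clean > 0.5) if hard else p_clean
    return (softmax_cross_entropy(noisy_logits, y)
            + beta * w * softmax_cross_entropy(clean_logits, y))
```

For a sample the BMM deems noisy ($\mathcal{B}(x)\le 0.5$), the hard loss falls back to supervising only the $M{-}N_{M}$ cascade, leaving the clean prediction unconstrained by the (likely wrong) label.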
Thus we use the following 3-step approach to learn $M$ and $N_{M}$:
1. (1)
Warmup: Train $M$ using $\mathcal{L}_{CE}(\hat{y}^{(c)},y)$.
2. (2)
Fitting BMM: Fit a 2-component BMM $\mathcal{B}$ on the
$\mathcal{L}_{CE}(\hat{y}^{(c)},y)$ for all $(x,y)\in(X,Y^{(n)})$.
3. (3)
Training with $\mathcal{L}_{DN}$: Jointly train $M$ and $N_{M}$ end-to-end
using $\mathcal{L}_{DN{-}S/H}$.
We summarize our methodology in Algorithm 1, when using $\mathcal{L}_{DN-H}$.
Dataset | Num. Classes | Train | Validation | Test
---|---|---|---|---
TREC (Li and Roth (2002)) | 6 | 4949 | 503 | 500
AG-News (Gulli (2005)) | 4 | 112000 | 8000 | 7600
Table 2. Summary statistics of the datasets
Model | Method | Token (How/What) 10% | 20% | 30% | 40% | 50% | Length 10% | 20% | 30% | 40% | 50%
---|---|---|---|---|---|---|---|---|---|---|---
word-LSTM | Baseline | 89.2 (-0.4) | 84.4 (-8.2) | 77.8 (-10.6) | 76.0 (-17.0) | 71.8 (-15.8) | 91.4 (-1.0) | 87.0 (0.6) | 82.2 (1.8) | 82.4 (-2.6) | 74.2 (-3.0)
word-LSTM | $\mathcal{L}_{DN{-}H}$ | 91.8 (0.0) | 87.4 (-2.2) | 84.2 (0.4) | 79.0 (1.0) | 67.8 (1.4) | 91.6 (-0.6) | 90.2 (-0.8) | 87.4 (-0.2) | 87.4 (-0.8) | 79.0 (0.0)
word-LSTM | $\mathcal{L}_{DN{-}S}$ | 91.8 (0.2) | 90.6 (-1.2) | 83.8 (-6.8) | 79.2 (-19.2) | 75.6 (-15.8) | 92.0 (0.4) | 90.6 (1.0) | 85.4 (0.2) | 84.0 (-2.8) | 75.0 (-3.0)
word-CNN | Baseline | 90.4 (-3.6) | 83.8 (-1.8) | 82.4 (-7.4) | 78.8 (-17.2) | 52.0 (1.4) | 91.0 (0.0) | 88.0 (1.2) | 85.2 (-1.2) | 82.0 (-2.6) | 73.6 (-1.4)
word-CNN | $\mathcal{L}_{DN{-}H}$ | 90.0 (0.8) | 86.6 (-3.0) | 84.4 (-0.6) | 80.6 (-4.2) | 74.0 (-7.6) | 90.6 (-0.8) | 89.6 (-1.0) | 87.2 (-0.2) | 82.6 (-0.4) | 77.0 (-6.2)
word-CNN | $\mathcal{L}_{DN{-}S}$ | 91.2 (-0.4) | 86.8 (-1.8) | 84.2 (-4.2) | 81.8 (-12.0) | 65.2 (-12.4) | 92.8 (-3.6) | 91.0 (-2.2) | 86.8 (-1.4) | 86.0 (-4.0) | 75.4 (1.8)
Table 3. Results with input-conditional noise on the TREC dataset; cells
follow the A (B) convention of Table 1.
## 4. Evaluation
Datasets We experiment with two popular text classification datasets: (i) TREC
question-type dataset (Li and Roth, 2002), and (ii) AG-News dataset (Gulli,
2005) (Table 2). We inject noise in the training and validation sets, while
retaining the original clean test set for evaluation. Note that collecting
real datasets with known patterns of label noise is a challenging task, and
out of the scope of this work. We artificially inject noise in clean datasets,
which enables easy and extensive experimentation.
Models We conduct experiments on two popular model architectures: word-LSTM
(Hochreiter and Schmidhuber, 1997) and word-CNN (Kim, 2014). For word-LSTM, we
use a 2-layer BiLSTM with hidden dimension of 150. For word-CNN, we use 300
kernel filters each of size 3, 4 and 5. We use the pre-trained GloVe
embeddings (Pennington et al., 2014) for initializing the word embeddings for
both models. We train models on TREC and AG-News for 100 and 30 epochs
respectively. We use an Adam optimizer with a learning rate of $10^{-5}$ and a
dropout of $0.3$ during training. For the noise model $N_{M}$, we use a simple
2-layer feedforward neural network, with the number of hidden units
$n_{hidden}=4{\cdot}n_{input}$. We choose the inputs to the noise model
$R_{M}(x)$ as per the class of label noise, which we describe in Section 4.1
and 4.2. We conduct hyper-parameter tuning for the number of warmup epochs
$T_{0}$ and $\beta$ using grid search over the ranges of {6,10,20} and
{2,4,6,8,10} respectively.
Metrics and Baseline We evaluate the robustness of the model to label noise on
two fronts: (i) how well it performs on clean data, and (ii) how much it
over-fits the noisy data. For the former, we report the test-set accuracy
(denoted Best) of the model with the best validation accuracy. For the
latter, we examine the gap in test accuracy between the Best model and the
Last model (after the final training epoch). We evaluate our approach against
training only $M$ (the baseline), for two types of noise, random and
input-conditional, at different noise levels.
### 4.1. Results: Random Noise
For a specific Noise %, we randomly change the original labels of this
percentage of samples. Since the noise function is independent of the input,
we use logits from $M$ as the input $R(x)$ to $N_{M}$. We report the Best and
(Last \- Best) test accuracies in Table 1. From the experiments, we observe
that:
(i) $L_{DN-S}$ and $L_{DN-H}$ almost always outperform the baseline across
different noise levels, and their performance is similar. We observe that
training with $L_{DN{-}S}$ tends to be better at low noise %, whereas
$L_{DN{-}H}$ tends to be better at higher noise %. Our method is more
effective for TREC than for AG-News, since even the baseline can learn
robustly on AG-News.
(ii) Our approach using $L_{DN{-}S}$ and $L_{DN{-}H}$ drastically reduces
over-fitting on noisy samples, visible from the small gaps between the Best
and Last accuracies. For the baseline, this gap is significantly larger,
especially at high noise levels, indicating over-fitting to the label noise.
For example, consider word-LSTM on TREC at 40% noise: while the baseline
suffers a sharp drop of 24.8 points from 79.6%, the accuracy of the
$L_{DN{-}S}$ model drops just 1.0 point from 83.4%.
Figure 3. Test accuracy across training epochs of word-LSTM model on the TREC
dataset with two levels of random noise: $30\%$ and $40\%$. The baseline
heavily over-fits on the noise degrading performance, while $L_{DN{-}S}$ and
$L_{DN{-}H}$ avoid this.
We further demonstrate that our approach avoids over-fitting, thereby
stabilizing the model training by plotting the test accuracies across training
epochs in Fig. 3. We observe that the baseline model over-fits the label noise
with more training epochs, thereby degrading test accuracy. The degree of
over-fitting is greater at higher levels of noise (Fig. 3(b) vs Fig. 3(a)). In
comparison, our de-noising approach using both $L_{DN-S}$ and $L_{DN-H}$ does
not over-fit on the noisy labels as demonstrated by stable test accuracies
across epochs. This is particularly beneficial when operating in a few-shot
setting where one does not have access to a validation split that is
representative of the test split for early stopping.
Model | Method | AP (7.8%) | Reuters (10.8%) | Either (18.6%)
---|---|---|---|---
word-LSTM | Baseline | 82.8 (-0.5) | 85.6 (-0.8) | 75.7 (-0.4)
word-LSTM | $\mathcal{L}_{DN{-}H}$ | 82.7 (0.0) | 85.7 (-0.1) | 76.6 (-0.4)
word-LSTM | $\mathcal{L}_{DN{-}S}$ | 82.8 (0.3) | 85.5 (0.1) | 76.0 (-0.1)
word-CNN | Baseline | 83.1 (-0.2) | 85.7 (0.0) | 76.6 (-0.9)
word-CNN | $\mathcal{L}_{DN{-}H}$ | 82.4 (0.8) | 86.2 (0.0) | 76.1 (0.1)
word-CNN | $\mathcal{L}_{DN{-}S}$ | 82.5 (0.5) | 86.1 (0.1) | 76.4 (0.0)
Table 4. Results with input-conditional noise on AG-News; column headers give
the trigger token and the resulting fraction of corrupted labels.
### 4.2. Results: Input-Conditional Noise
We heuristically condition the noise function $\mathcal{F}$ on lexical and
syntactic input features. We are the first to study such label noise for text
inputs, to our knowledge. For both the TREC and AG-News, we condition
$\mathcal{F}$ on syntactic features of the input: (i) The TREC dataset
contains different types of questions. We selectively corrupt the labels of
inputs that contain the question words ‘How’ or ‘What’ (chosen based on
occurrence frequency). For texts starting with ‘How’ or ‘What’, we insert
random label noise (at different levels). We also consider $\mathcal{F}$
conditional on the text length (a lexical feature). More specifically, we
inject random label noise for the longest x% inputs in the dataset. (ii) The
AG-News dataset contains news articles from different news agency sources. We
insert random label noise for inputs containing the token ‘AP’, ‘Reuters’ or
either one of them. We concatenate the contextualised input embedding from the
penultimate layer of $M$ and the logits corresponding to $\hat{y}^{(c)}$ as
the input $R_{M}(x)$ to $N_{M}$. We present the results in Tables 3 and 4.
On TREC, our method outperforms the baseline for both the noise patterns we
consider. For the question-length based noise, we observe the same trend of
$L_{DN{-}H}$ outperforming $L_{DN{-}S}$ at high noise levels, and vice-versa.
On AG-News, the noise percentages for inputs containing the tokens ‘AP’ and
‘Reuters’ are relatively low, and our method performs on par with or marginally
improves over the baseline. Interestingly, the input-conditional
noise we consider makes effective learning very challenging, as demonstrated
by significantly lower _Best_ accuracies for the baseline model than for
random noise. As the classifier appears to overfit to the noise very early
during training, we observe relatively smaller gaps between Best and Last
accuracies. Compared to random noise, our approach is less efficient at
alleviating the _(Best-Last)_ accuracy gap for input-conditional noise. These
experiments however reveal promising preliminary results on learning with
input-conditional noise.
## 5\. Conclusion
We have presented an approach to improve text classification when learning
from noisy labels by jointly training a classifier and a noise model using a
de-noising loss. We have evaluated our approach on two text classification
tasks and demonstrated its effectiveness through an extensive evaluation. Future
work includes studying more complex $\mathcal{F}$ for other NLP tasks like
language inference and QA.
## References
* Agarwal et al. (2007) Sumeet Agarwal, Shantanu Godbole, Shourya Roy, and Diwakar Punjani. 2007. How much noise is too much: A study in automatic text classification. In _Proc. of ICDM_.
* Arazo et al. (2019) Eric Arazo, Diego Ortego, Paul Albert, Noel O’Connor, and Kevin Mcguinness. 2019. Unsupervised Label Noise Modeling and Loss Correction. In _Proceedings of the 36th International Conference on Machine Learning_ _(Proceedings of Machine Learning Research, Vol. 97)_ , Kamalika Chaudhuri and Ruslan Salakhutdinov (Eds.). PMLR, Long Beach, California, USA, 312–321. http://proceedings.mlr.press/v97/arazo19a.html
* Ardehaly and Culotta (2018) Ehsan Ardehaly and Aron Culotta. 2018. Learning from noisy label proportions for classifying online social data. _Social Network Analysis and Mining_ 8 (12 2018). https://doi.org/10.1007/s13278-017-0478-6
* Frénay and Verleysen (2014) Benoît Frénay and Michel Verleysen. 2014. Classification in the Presence of Label Noise: A Survey. _IEEE Transactions on Neural Networks and Learning Systems_ 25 (2014), 845–869.
* Goodman et al. (2016) James Goodman, Andreas Vlachos, and Jason Naradowsky. 2016\. Noise reduction and targeted exploration in imitation learning for Abstract Meaning Representation parsing. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_. Association for Computational Linguistics, Berlin, Germany, 1–11. https://doi.org/10.18653/v1/P16-1001
* Gulli (2005) A. Gulli. 2005. The Anatomy of a News Search Engine. In _Special Interest Tracks and Posters of the 14th International Conference on World Wide Web_ (Chiba, Japan) _(WWW ’05)_. Association for Computing Machinery, New York, NY, USA, 880–881.
* Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long Short-Term Memory. _Neural Comput._ 9, 8 (Nov. 1997), 1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735
* Ipeirotis et al. (2010) Panagiotis G. Ipeirotis, Foster Provost, and Jing Wang. 2010\. Quality Management on Amazon Mechanical Turk. In _Proceedings of the ACM SIGKDD Workshop on Human Computation_ (Washington DC) _(HCOMP ’10)_. Association for Computing Machinery, New York, NY, USA, 64–67. https://doi.org/10.1145/1837885.1837906
* Jiang et al. (2019) Junjun Jiang, Jiayi Ma, Zheng Wang, Chen Chen, and Xianming Liu. 2019. Hyperspectral Image Classification in the Presence of Noisy Labels. _IEEE Transactions on Geoscience and Remote Sensing_ 57 (2019), 851–865.
* Jiang et al. (2018) Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. 2018. MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels. In _Proceedings of the 35th International Conference on Machine Learning_ , Jennifer Dy and Andreas Krause (Eds.). PMLR. http://proceedings.mlr.press/v80/jiang18c.html
* Jindal et al. (2019) Ishan Jindal, Daniel Pressel, Brian Lester, and Matthew Nokleby. 2019. An Effective Label Noise Model for DNN Text Classification. In _Proceedings of North American Chapter of the Association of Computational Linguistics 2019_.
* Joshi et al. (2014) Aditya Joshi, Abhijit Mishra, Nivvedan Senthamilselvan, and Pushpak Bhattacharyya. 2014. Measuring Sentiment Annotation Complexity of Text. In _Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_. Association for Computational Linguistics, Baltimore, Maryland.
* Kim (2014) Yoon Kim. 2014. Convolutional Neural Networks for Sentence Classification. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014_. 1746–1751.
* Larson et al. (2019) Stefan Larson, Anish Mahendran, Andrew Lee, Jonathan K. Kummerfeld, Parker Hill, Michael A. Laurenzano, Johann Hauswald, Lingjia Tang, and Jason Mars. 2019. Outlier Detection for Improved Data Quality and Diversity in Dialog Systems. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_. Association for Computational Linguistics, Minneapolis, Minnesota, 517–527. https://doi.org/10.18653/v1/N19-1051
* Li and Roth (2002) Xin Li and Dan Roth. 2002\. Learning Question Classifiers. In _Proceedings of the 19th International Conference on Computational Linguistics - Volume 1_ (Taipei, Taiwan) _(COLING ’02)_. Association for Computational Linguistics, Stroudsburg, PA, USA, 1–7.
* Malik and Bhardwaj (2011) H. H. Malik and V. S. Bhardwaj. 2011. Automatic Training Data Cleaning for Text Classification. In _2011 IEEE 11th International Conference on Data Mining Workshops_. 442–449.
* Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In _Empirical Methods in Natural Language Processing (EMNLP)_. 1532–1543. http://www.aclweb.org/anthology/D14-1162
* Reed et al. (2015) Scott E. Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, and Andrew Rabinovich. 2015. Training Deep Neural Networks on Noisy Labels with Bootstrapping.. In _ICLR (Workshop)_ , Yoshua Bengio and Yann LeCun (Eds.). http://dblp.uni-trier.de/db/conf/iclr/iclr2015w.html#ReedLASER14
* Thulasidasan et al. (2019) Sunil Thulasidasan, Tanmoy Bhattacharya, Jeff Bilmes, Gopinath Chennupati, and Jamal Mohd-Yusof. 2019\. Combating Label Noise in Deep Learning using Abstention. In _Proceedings of the 36th International Conference on Machine Learning_ , Kamalika Chaudhuri and Ruslan Salakhutdinov (Eds.). PMLR. http://proceedings.mlr.press/v97/thulasidasan19a.html
* Wang et al. (2018) Fei Wang, Liren Chen, Cheng Li, Shiyao Huang, Yanjie Chen, Chen Qian, and Chen Change Loy. 2018. The Devil of Face Recognition is in the Noise. _arXiv preprint arXiv:1807.11649_ (2018).
* Zhan et al. (2019) Xueying Zhan, Yaowei Wang, Yanghui Rao, and Qing Li. 2019\. Learning from Multi-Annotator Data: A Noise-Aware Classification Framework. _ACM Trans. Inf. Syst._ 37, 2, Article 26 (Feb. 2019), 28 pages. https://doi.org/10.1145/3309543
* Zhang et al. (2016) Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. 2016. Understanding deep learning requires rethinking generalization. _ICLR_ (2017). http://arxiv.org/abs/1611.03530
* Zhang et al. (2018) Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. 2018. mixup: Beyond Empirical Risk Minimization. In _International Conference on Learning Representations_. https://openreview.net/forum?id=r1Ddp1-Rb
* Zlateski et al. (2018) Aleksandar Zlateski, Ronnachai Jaroensri, Prafull Sharma, and Fredo Durand. 2018. On the Importance of Label Quality for Semantic Segmentation. In _CVPR_. 1479–1487. https://doi.org/10.1109/CVPR.2018.00160
# How does an external electric field trigger the Cassie-Baxter-Wenzel wetting
transition on a textured surface?
Ke Xiao†, Xi Chen†, and Chen-Xu Wu
<EMAIL_ADDRESS>
Fujian Provincial Key Laboratory for Soft Functional Materials Research, Department of Physics, College of Physical Science and Technology, Xiamen University, Xiamen 361005, People’s Republic of China
(August 27, 2024)
###### Abstract
Understanding the critical condition and mechanism of the droplet wetting
transition between Cassie-Baxter state and Wenzel state triggered by an
external electric field is of considerable importance because of its numerous
applications in industry and engineering. However, such a wetting transition
on a patterned surface is still not fully understood, e.g., the effects of
electro-wetting number, geometry of the patterned surfaces, and droplet volume
on the transition have not been systematically investigated. In this paper, we
propose a theoretical model for the Cassie-Baxter-Wenzel wetting transition
triggered by applying an external voltage to a droplet placed on a
micro-pillared surface or a porous substrate. It is found that the transition is
realized by lowering the energy barrier created by the intermediate composite
state considerably, which enables the droplet to cross the energy barrier and
complete the transition process. Our calculations also indicate that for fixed
droplet volume, the critical electrowetting number (voltage) will increase
(decrease) along with the surface roughness for a micro-pillar patterned
(porous) surface, and if the surface roughness is fixed, a small droplet tends
to ease the critical electrowetting condition for the transition. Besides,
three dimensional phase diagrams in terms of electrowetting number, surface
roughness, and droplet volume are constructed to illustrate the Cassie-Baxter-
Wenzel wetting transition. Our theoretical model can be used to explain the
previous experimental results about the Cassie-Baxter-Wenzel wetting
transition reported in the literature.
## I INTRODUCTION
Wettability has been widely studied in order to obtain highly water-repellent
substrates, referred to as superhydrophobic surfaces, with an ultrahigh
apparent contact angle, a much smaller contact angle hysteresis
D.Quere2005 ; D.Quere2008 , and hydrodynamic slip C.Choi2006 ; P.Joseph2006 ;
A.Steinberger2007 . These properties rely on the nano- or the microscale
topological structures of the surfaces which exhibit a broad range of
applications in engineering such as self-cleaning R.Blossey2003 ;
K.M.Wisdom2013 , water proofing A.Lafuma2003 ; M.Nosonovsky2007 , drag
reduction R.Truesdell2006 , anti-dew/reflection J.B.Boreyko2009 ;
R.H.Siddique2015 , bactericidal activity E.P.Ivanova2012 ; E.P.Ivanova2013 ;
X.L.Li2016 ; KeXiao2020 , and so on. Typically, when a liquid droplet rests on
such a roughened surface, a classical description of the droplet is
characterized by Wenzel (W) R.N.Wenzel1936 and Cassie-Baxter (CB)
A.B.D.Cassie1944 wetting states. The former corresponds to a homogeneous
wetting state in which liquid penetrates into the texture, i.e., a fully wetted
state that loses superhydrophobicity, whereas a droplet in the Cassie-Baxter state
merely suspends on the tips of the rough surface, featured with air being
trapped in the cavities of the rough surface topography. Generally, the
transition between these two states can be triggered via various approaches by
tuning external control parameters, such as passive strategies that rely, for
example, on the utilization of gravity force Z.Yoshimitsu2002 ; B.Majhy2020 ,
evaporation P.Tsai2010 ; X.M.Chen2012 ; H.C.M.Fernandes2015 , and Laplace
pressure A.Giacomello2012 ; P.Lv2014 , and active strategies that employ
surface acoustic wave A.Sudeepthi2020 , vibration E.Bormashenko2007 ;
J.B.Boreyko2009October , and even electric field G.Manukyan2011 ; J.M.Oh2011 ;
R.Roy2018 ; B.X.Zhang2019 , a technique called electrowetting (EW). The
advantages of adaptability to various geometries, low power consumption, and
fast and precise fine-tuning of the wetting state have made EW a prevalent
technique that has received significant research interest over the past decade
F.Mugele2005 ; W.C.Nelson2012 . It is well known that once an electric voltage
is applied between a substrate and a droplet settling on it, the initial
equilibrium contact angle of the droplet will be reduced to a new smaller
value, leading to an alteration of its apparent wettability referred to as
electrowetting-on-dielectric F.Mugele2005 ; L.Q.Chen2014 . Such an EW
phenomenon has attracted significant attention due to its extensive
applications in lab-on-chip systems F.Mugele2005 and microfluidic operations
M.G.Pollack2002 ; V.Bahadur2007 , and to its ability to induce
wetting-state transitions G.Manukyan2011 ; J.M.Oh2011 ; R.Roy2018 ;
B.X.Zhang2019 and droplet detachment A.Cavalli2016 ; Q.Vo2019 ;
Q.G.Wang2020 ; K.Xiao .
Recently, numerous efforts using experiment G.Manukyan2011 ; S.Berry2012 ;
Y.Chen2019 , theoretical modeling V.Bahadur2007 ; R.Roy2018 , and computer
simulation J.M.Oh2011 ; B.X.Zhang2019 ; A.M.Miqdad2016 have been devoted to
getting a better understanding of the wetting transition triggered by electric
field. By combining experiment and numerical simulation, it has been found
that the stability of the CB state under EW is determined by the balance of
the Maxwell stress and the Laplace stress G.Manukyan2011 . Meanwhile, the
wetting transition from CB state to W state is controlled by the energy
barrier stemming from the pinning of the contact lines at the edges of the
hydrophobic pillars G.Manukyan2011 . Based on a surface energy model, Roy et
al. estimated the energy barriers for the EW-induced CB-to-W transition of a
droplet on a mushroom-shaped re-entrant microstructures and an array of
cylindrical microposts. They experimentally demonstrated that the transition
on a mushroom structure is more resilient than that on an array of microposts
R.Roy2018 . Besides, computer simulation also provides a useful complement to
revealing the underlying mechanism of droplet wetting transition on textured
surfaces. By employing molecular dynamics simulations, Zhang et al.
B.X.Zhang2019 studied the mechanism behind the CB-to-W transition of a
nanoscale water droplet resting on a nanogrooved surface under an external
electric field, and found that there exists an energy barrier separating the
CB state and the W state. In addition, they also discussed the dependence of
the energy barrier on the electric field strength, the groove aspect ratio,
and the intrinsic contact angle of the groove.
Despite the fact that the EW-induced transition has been extensively
studied either via experimental or theoretical approaches, a systematic
analytical understanding of the underlying mechanism, in particular, the
dependence of critical electric voltage on surface roughness and droplet
volume has not been explored.
In this paper, we establish a theoretical model to study the EW transition on
micro-patterned surfaces through analyzing the difference of interfacial free
energy between CB state and intermediate composite state. The effects of
surface roughness and droplet volume on the threshold voltage are discussed.
To further explore the interrelation among threshold voltage, surface
roughness and droplet volume, three dimensional (3D) phase diagrams in the
corresponding parameter space are also constructed. We expect that our model
can offer some guidance to the design and fabrication of the patterned
surfaces and allow one to study EW transition on other types of patterned
surfaces.
## II THEORETICAL MODELING
We begin our investigation by considering an EW setup consisting of a
millimeter-sized sessile water droplet deposited on two different types of
superhydrophobic surface decorated with, respectively, a square lattice of
cylindric mircropillars and a regular array of pores, as shown in Fig. 1. The
three dimensional (3D) geometry of the micro-pillar patterned surface in this
paper is schematically shown in Fig. 1(a). The top view and the side view of
the squarely distributed micropore-patterned surface are sketched in Figs.
1(b) and 1(c) respectively. Here, the cylindrical pillars and the pores are
characterized by their radius $R$, height $H$, and gap pitch $P$ (or center-
to-center interspacing $S$) between neighboring pillars or pores,
respectively. Traditionally, nondimensional parameters roughness factor $r$
and solid fraction $\phi_{s}$, which are defined as the ratio of the actual
area of the solid surface to its projection area and the ratio of the contact
solid surface (tip of the pillars or the pores) to the total horizontal
surface respectively, are commonly used to represent the level of roughness
for the textured surfaces. Geometrically, in this paper, the roughness factor
is given by $r=1+2\pi RH/S^{2}$, and the solid fraction $\phi_{s}$ is written
as $\pi R^{2}/S^{2}$ for the pillar-patterned surface and $1-\pi R^{2}/S^{2}$
for the porous surface respectively. The two possible wetting states, i.e., CB
state and W state, are illustrated by Figs. 1(d) and 1(f), between which there
exists an energy barrier, i.e., an intermediate composite state [see Fig.
1(e)], which can be lowered to a level below CB state by applying an external
electric voltage across the droplet.
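The two geometric descriptors above are simple enough to evaluate directly. A minimal sketch, using the expressions $r=1+2\pi RH/S^{2}$ and $\phi_{s}=\pi R^{2}/S^{2}$ (or $1-\pi R^{2}/S^{2}$ for pores) from the text; the function names are ours:

```python
import math

def roughness_factor(R, H, S):
    """Roughness factor r = 1 + 2*pi*R*H / S^2 for a square lattice of
    cylindrical pillars (the same expression applies to the pore geometry,
    with H the pore depth)."""
    return 1.0 + 2.0 * math.pi * R * H / S**2

def solid_fraction(R, S, porous=False):
    """Solid fraction phi_s: pi*R^2/S^2 for pillar tips,
    1 - pi*R^2/S^2 for a porous surface."""
    f = math.pi * R**2 / S**2
    return 1.0 - f if porous else f
```

For the parameters used later in the paper ($R=25\ \mu$m, $H=50\ \mu$m, $S=100\ \mu$m), this gives $r\approx 1.785$ and $\phi_{s}\approx 0.196$ for the pillar case.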
Figure 1: (Color online) (a) Schematic 3D picture of a microstructured surface
with cylindrical pillars of radius $R$, height $H$ and gap pitch $P$ on a
periodic square lattice with pillar-to-pillar spacing $S$. Schematic (b) top
view and (c) side view of a regular array of pores of radius $R$, depth $H$,
gap pitch $P$, and center-to-center pore spacing $S$. Different wetting states
of a droplet on a microtextured surface in the presence of an external
electric voltage: (d) CB state, (e) intermediate composite state, and (f) W
state.
This leads to a CB-W wetting transition. In order to probe the critical
electric voltage that triggers the wetting transition, it is necessary to
calculate the total interfacial free energy for CB state and intermediate
composite state, which, in general, is given by the sum of all the surface
energies between the droplet and the patterned surface, i.e.,
$\displaystyle G=\gamma_{\rm lv}A_{\rm lv}+\gamma_{\rm sv}A_{\rm
sv}+\gamma_{\rm ls}^{\rm eff}A_{\rm ls},$ (1)
where $A_{\rm lv}$, $A_{\rm sv}$, and $A_{\rm ls}$ are the areas of the
liquid-vapor, solid-vapor, and liquid-solid interfaces, and $\gamma_{\rm lv}$,
$\gamma_{\rm sv}$, and $\gamma_{\rm ls}^{\rm eff}$ are the liquid-vapor,
solid-vapor, and effective liquid-solid interfacial energy density,
respectively. Here $\gamma_{\rm ls}^{\rm eff}=\gamma_{\rm ls}-\eta\gamma_{\rm
lv}$ with $\eta=\varepsilon_{0}\varepsilon U^{2}/2d\gamma_{\rm lv}$ represents
the dimensionless electrowetting number, where $\varepsilon_{0}$,
$\varepsilon$, and $d$ are the dielectric permittivity in vacuum, the relative
dielectric constant, and the thickness of the insulating layer, respectively.
Let $A_{\rm t}=A_{\rm sv}+A_{\rm ls}$, the total area of the solid surface
including solid-vapor and liquid-solid interfaces, and with a consideration of
the Young’s equation $\gamma_{\rm sv}-\gamma_{\rm ls}=\gamma_{\rm
lv}\cos{\theta_{\rm Y}}$, Eq. (1) can be converted to
$\displaystyle G=\gamma_{\rm lv}A_{\rm lv}-\gamma_{\rm lv}(\cos{\theta_{\rm
Y}}+\eta)A_{\rm ls}+\gamma_{\rm sv}A_{\rm t},$ (2)
where $\theta_{\rm Y}$ is the apparent contact angle at the equilibrium state.
Here we assume that the volume of the droplet is small, corresponding to a
characteristic size smaller than the capillary length $l_{\rm
c}=\sqrt{\gamma_{\rm lv}/\rho g}\sim 2.7\leavevmode\nobreak\ {\rm mm}$, where
$\rho$ and $g$ are the density of the water and the gravitational acceleration
respectively. In this case the gravitational effect can be neglected, and the
shape of the droplet associated to the transition can be treated as a sphere.
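Both the electrowetting number $\eta=\varepsilon_{0}\varepsilon U^{2}/2d\gamma_{\rm lv}$ and the capillary length $l_{\rm c}$ are one-line evaluations. A sketch with the material parameters quoted later in Sec. III used as assumed defaults (function names are ours):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def ew_number(U, eps_r=3.2, d=3e-6, gamma_lv=72.8e-3):
    """Dimensionless electrowetting number
    eta = eps0 * eps_r * U^2 / (2 * d * gamma_lv),
    with U in volts, d in meters, gamma_lv in N/m."""
    return EPS0 * eps_r * U**2 / (2.0 * d * gamma_lv)

def capillary_length(gamma_lv=72.8e-3, rho=1000.0, g=9.81):
    """l_c = sqrt(gamma_lv / (rho * g)); about 2.7 mm for water."""
    return math.sqrt(gamma_lv / (rho * g))
```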
In particular, the total interfacial free energy of a water droplet in CB
state on a patterned surface, as schematically illustrated by Fig. 1(d), can
be calculated as
$\displaystyle G_{\rm CB}=\gamma_{\rm lv}\pi R_{\rm CB}^{2}\Big[\nu(\theta_{\rm CB})+(1-\phi_{s})-(\cos{\theta_{\rm Y}}+\eta)\phi_{s}\Big]+\gamma_{\rm sv}A_{\rm t},$ (3)
where $\nu(\theta)=2/(1+\cos\theta)$ is a dimensionless function. In addition,
as the evaporation effect of the water droplet is excluded as well, it is
reasonable to assume that the droplet volume $V_{0}$ is conserved. Therefore
the base radius $R_{\rm CB}$ and the contact angle $\theta_{\rm CB}$ of the
droplet can be determined according to a minimization of the global energy
under the constraint of fixed droplet volume. Similarly, the total interfacial
free energy of the intermediate composite state reads
$\displaystyle G_{\rm inter}=\gamma_{\rm lv}\pi R_{\rm inter}^{2}\Bigg\{\nu(\theta_{\rm inter})+(1-\phi_{s})\nu\Big(\theta_{\rm Y}-\frac{\pi}{2}\Big)-(\cos{\theta_{\rm Y}}+\eta)\bigg[\phi_{s}+(r-1)\frac{h}{H}\bigg]\Bigg\}+\gamma_{\rm sv}A_{\rm t},$ (4)
where $R_{\rm inter}$ and $h$ are the base radius of the droplet and the
penetration height from the tip of the pillars or pores to the point at which
the curved interstitial liquid-vapor meniscus touches the edges of the walls
(see Appendix for the calculation of $h$), respectively.
Figure 2: (Color online) Representative total interfacial free energies of the
CB and intermediate states of a droplet on a micropillar-patterned surface:
$G_{\rm CB}$ and $G_{\rm inter}$, respectively, as a function of EW number.
The inset shows their difference, $\Delta G=G_{\rm inter}-G_{\rm CB}$, which
gives the critical EW number (critical external electric voltage) for the
wetting transition when $\Delta G=0$.
## III RESULTS AND DISCUSSION
In our model, we take into account all the interfacial free energies for a CB
state and an intermediate composite state of a three-dimensional droplet when
placed on a micropillar- or a pore-patterned surface. Our calculations in
this paper were carried out by using $\gamma_{\rm lv}=72.8\leavevmode\nobreak\
{\rm mN\cdot m^{-1}}$, $\varepsilon=3.2$, $d=3\leavevmode\nobreak\ \mu{\rm m}$
and $\theta_{\rm Y}=115^{\circ}$, respectively R.Roy2018 . Without an external
field, as indicated by the intercepts in Fig. 2, the energy of the intermediate
composite state is higher than that of the CB state, precluding a CB-W
transition. However, once an external
field is applied and increased, it is found that although the energy of the CB
state decreases, the energy of the intermediate composite state decreases in a
steady and more rapid way, and there exists an intersection point where the
droplet in CB state crosses the energy barrier created by the intermediate
composite state and changes to a Wenzel state, as shown in Fig. 2. Such a
critical condition corresponds to a critical voltage (or a critical EW number
$\eta_{\rm c}$) which can be estimated by equating the energies of these two
states for a given textured surface.
Figure 3: (Color online) The critical EW number $\eta_{c}$ for the CB-W
transition vs (a) aspect ratio $R/H$, (b) surface roughness $r$, (c) solid
fraction $\phi_{s}$, and (d) $P/H$, where the black square dot curve and the
red circle dot curve stand for micropillar- and pore-patterned surfaces,
respectively.
For a fixed droplet volume, the EW number $\eta_{\rm c}$ is found to
remarkably depend on the geometric features of the structured surface (surface
roughness), such as aspect ratio $R/H$, relative pitch $P/H$ (density
$1/S^{2}=1/(P+2R)^{2}$), surface roughness $r$ and solid fraction $\phi_{\rm
s}$, as shown in Fig. 3.
It can be seen that, as shown by the black square dot curve in Fig. 3(a), it
becomes harder (easier) for a CB-W transition on a micropillar-patterned
surface (porous surface) to occur with the increase of its aspect ratio $R/H$.
An alternative illustration of Fig. 3(a) is to replace the dimensionless
aspect ratio $R/H$ by surface roughness $r$, indicating that roughness
suppresses (enhances) EW-induced CB-W transition on a micropillar-patterned
surface (porous surface) (Fig. 3(b)). Besides, as the aspect ratio also
correlates with solid fraction $\phi_{\rm s}$, it is possible to depict the
critical EW number in terms of solid fraction $\phi_{\rm s}$ for both pillar-
and pore-patterned surfaces, as shown in Fig. 3(c). Apart from the aspect
ratio, the distribution density of pillars and pores also plays an important
role in determining the critical EW number. A deeper investigation, shown in
Fig. 3(d), reveals that a reduction of the critical EW number $\eta_{\rm
c}$ can be achieved by increasing (decreasing) the pitch of pillar-patterned
(pore-patterned) surfaces.
Figure 4: (Color online) The critical EW number $\eta_{c}$ for the CB-W
transition vs droplet volume $V_{0}$, where the black square dot curve and the
red circle dot curve represent micropillar- and pore-patterned surfaces,
respectively.
It has been found experimentally that the onset of CB-W wetting transition
occurs at a certain droplet size as the droplet gets smaller during
evaporation process P.Tsai2010 ; X.M.Chen2012 ; H.C.M.Fernandes2015 .
Meanwhile, the thermodynamic favorable wetting state also depends on droplet
volume K.Xiao2017 . Such a phenomenon can be explained by our theoretical
model. Figure 4 exhibits the effect of droplet volume $V_{0}$ on wetting for a
set of fixed surface geometric parameters ($R=25\leavevmode\nobreak\ {\rm\mu
m}$, $H=50\leavevmode\nobreak\ {\rm\mu m}$, and $S=100\leavevmode\nobreak\
{\rm\mu m}$), revealing that the larger the droplet size, the higher the
critical voltage for the wetting transition to occur.
Figure 5: (Color online) 3D phase diagrams in terms of the critical EW number
$\eta_{c}$, surface roughness $r$ and droplet volume $V_{0}$ for (a) a
micropillar-patterned surface and (b) a pore-patterned surface.
Finally, in order to comprehensively understand how surface roughness and
droplet size affect the critical value of $\eta_{c}$ as a whole, 3D phase
diagrams for micropillar- and pore-patterned surfaces are constructed in terms
of EW number, surface roughness, and droplet volume, as demonstrated in Fig. 5.
It is found that all phase diagrams are divided into two regimes, namely CB
state (under the curved surface) and W state (above the curved surface)
separated by a coexisting curved surface representing the critical condition
for the wetting transition. According to the 3D phase diagrams, we can deduce
that a higher (lower) critical EW number $\eta_{c}$ is required to trigger the
CB-W transition for rougher pillar- (pore-) patterned surfaces. For larger
droplets, a higher $\eta_{c}$ is needed for the wetting transition regardless
of the geometric pattern of the surface. Therefore, the EW-induced CB-W
transition can be effectively inhibited by engineering a surface with
hierarchical roughness or by using larger droplets.
To examine the validity of our theoretical model, it is necessary to compare
our theoretical predictions with experimental results. For example, Bahadur et
al. V.Bahadur2008 showed via experiments that the observed transition voltage
is 35 V for microstructured surface with roughness 2.87, solid fraction 0.23,
and pillar height 43.1 ${\rm\mu m}$, while the transition voltage has to be
increased to 58 V to observe the wetting transition for the microstructured
surface with same pillar height but an increased surface roughness 3.71 and
solid fraction 0.55, a result in good agreement with our present conclusions.
In addition, experimental observations by Manukyan et al. G.Manukyan2011 have
found that the critical voltage at which the cavities fill with water and the
filling propagates laterally decreases as the gap widths between the pillars
are widened (equivalent to a reduction of surface roughness and solid
fraction), a conclusion also in accordance with our
results. Moreover, Roy et al. R.Roy2018 also reported that the more sparsely
the cylindrical microposts are distributed, the higher the EW voltage required
to trigger the CB-W transition. These results all support the
theoretical model proposed in this paper.
## IV CONCLUSION
In this paper we developed a model to interpret the CB-W wetting transition
triggered by an external electric field for a three-dimensional water droplet
deposited on a micropillar- or a pore-patterned surface. It is found that the
electric field lowers the energy barrier created by the intermediate composite
state, which allows a droplet initially in the CB state to cross it and complete
the EW-induced CB-W transition. The geometrical parameters of the patterned
surface influence the thermodynamic wetting states, characterized by the base
radius and apparent contact angle of a droplet, as well as the critical voltage
for the CB-W wetting transition.
3D phase diagrams in terms of EW number, surface roughness, and droplet volume
are constructed. It is shown that low (high) roughness, low (high) pitch,
small solid fraction, and small droplet size encourage a CB-W
wetting transition triggered by an external field, a conclusion in good
agreement with previous investigations reported in the literature.
${{\dagger}}$ These authors contributed equally to this work.
###### Acknowledgements.
This work was funded by the National Science Foundation of China under Grant
No. 11974292 and No. 11947401.
## V Appendix: The calculation of the base radius and the penetration height
of a droplet placed on a patterned surface
For all the wetting states in the main text, the base radius of a droplet is
determined under the constraint of fixed volume, which, in CB state, can be
written as
$\displaystyle V_{\rm CB}=\frac{\pi}{3}R_{\rm CB}^{3}\mu(\theta_{\rm
CB})=V_{0},$ (5)
where $\mu(\theta)=(2+\cos\theta)(1-\cos\theta)^{2}/\sin^{3}\theta$ is a
dimensionless function, and $R_{\rm CB}$ and $\theta_{\rm CB}$ are the base
radius and the apparent contact angle of the droplet respectively. Such an
equation can also be rewritten as
$\displaystyle R_{\rm CB}=\bigg{[}\frac{3V_{0}}{\pi\mu(\theta_{\rm
CB})}\bigg{]}^{1/3}.$ (6)
When a droplet on a microstructured surface reaches the intermediate state,
its volume can be divided into two parts, i.e., the one on the top of the
patterned surface and the other penetrating into the interspacing of the
pillars. The volume of the spherical cap above the patterned surface is given
by
$\displaystyle V_{\rm top}^{\rm inter}=\frac{\pi}{3}R_{\rm
inter}^{3}\mu(\theta_{\rm inter}).$ (7)
Strictly speaking, the equilibrium configuration of the curved liquid-vapor
interface in a unit cell between the pillars needs to be determined by the
Young-Laplace equation. However, due to the fact that the droplet volume
filling the space around the pillars is much smaller than that of the
spherical cap above the patterned surface, it is reasonable to treat the shape
of the liquid-vapor interface in a unit cell between the pillars as a
spherical cap with effective capillary radius $R_{\rm cap}^{\rm eff}$ defined
by Roy et al. (2018)
$\displaystyle R_{\rm cap}^{\rm
eff}=S\bigg{(}\frac{1-\phi_{s}}{\pi}\bigg{)}^{1/2}.$ (8)
Thus, the height of the corresponding spherical cap is calculated as
$\displaystyle h_{\rm cap}^{\rm eff}=R_{\rm cap}^{\rm
eff}\frac{1-\cos\big{(}\theta_{\rm
Y}-\frac{\pi}{2}\big{)}}{\sin\big{(}\theta_{\rm Y}-\frac{\pi}{2}\big{)}}.$ (9)
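Eqs. (8)-(9) translate directly into a short sketch (Python; function names are illustrative, with `S` the period of the unit cell and `phi_s` the solid fraction as defined in the main text; the cap-height formula assumes a hydrophobic surface, $\theta_{\rm Y}>\pi/2$):

```python
import math

def capillary_radius(S, phi_s):
    # Eq. (8): effective capillary radius of the meniscus between pillars
    return S * math.sqrt((1 - phi_s) / math.pi)

def cap_height(S, phi_s, theta_y):
    # Eq. (9): height of the spherical cap with Young contact angle theta_Y
    a = theta_y - math.pi / 2
    return capillary_radius(S, phi_s) * (1 - math.cos(a)) / math.sin(a)
```

In the limit $\theta_{\rm Y}\to\pi$ the factor $(1-\cos a)/\sin a\to 1$ and the cap height approaches the capillary radius itself.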
The penetration height $h$ in the main text can be obtained as $h=H-h_{\rm
cap}^{\rm eff}$. Then the droplet volume underneath the top spherical cap
corresponding to the volume penetrated into the interspacing of the pillars is
given by
$\displaystyle V_{\rm bottom}^{\rm inter}=\frac{\pi}{3}R_{\rm
inter}^{2}(1-\phi_{s})\bigg{[}3h+R_{\rm cap}^{\rm eff}\mu\big{(}\theta_{\rm
Y}-\frac{\pi}{2}\big{)}\bigg{]}.$ (10)
Given this, the base radius $R_{\rm inter}$ of a droplet in the intermediate
state can be found by solving the following equation
$\displaystyle V_{0}=\frac{\pi}{3}R_{\rm inter}^{3}\mu(\theta_{\rm
inter})+\frac{\pi}{3}R_{\rm inter}^{2}(1-\phi_{s})\bigg{[}3h+R_{\rm cap}^{\rm
eff}\mu\big{(}\theta_{\rm Y}-\frac{\pi}{2}\big{)}\bigg{]}.$ (11)
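Since the right-hand side of Eq. (11) grows monotonically with $R_{\rm inter}$ (for $\theta_{\rm Y}>\pi/2$), the base radius can be found by simple bisection. A self-contained sketch (Python; illustrative names, reusing $\mu$ from above):

```python
import math

def mu(theta):
    # (2 + cos t)(1 - cos t)^2 / sin^3 t
    return (2 + math.cos(theta)) * (1 - math.cos(theta)) ** 2 / math.sin(theta) ** 3

def base_radius_inter(v0, h, phi_s, r_cap, theta_inter, theta_y,
                      lo=1e-9, hi=100.0, tol=1e-12):
    """Solve Eq. (11) for R_inter by bisection: the top cap (Eq. 7) plus
    the penetrated part (Eq. 10) must add up to the fixed volume V0."""
    def volume(r):
        top = math.pi / 3 * r ** 3 * mu(theta_inter)
        bottom = (math.pi / 3 * r ** 2 * (1 - phi_s)
                  * (3 * h + r_cap * mu(theta_y - math.pi / 2)))
        return top + bottom
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if volume(mid) < v0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Setting $\phi_{s}=1$ and $\theta_{\rm inter}=\pi/2$ removes the penetrated term and recovers the hemisphere check of Eq. (6).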
Similarly, when a droplet on a pore-patterned surface gets into intermediate
state, the calculation of its base radius can be done in the same manner as
above so long as we replace the effective capillary radius of the spherical
cap of the bottom part by the radius of the pore $R$.
## References
* (1) D. Quéré, Non-sticking drops, Rep. Prog. Phys. 68, 2495 (2005).
* (2) D. Quéré, Wetting and Roughness, Annu. Rev. Mater. Res. 38, 71 (2008).
* (3) C. Choi and C. J. Kim, Large Slip of Aqueous Liquid Flow over a Nanoengineered Superhydrophobic Surface, Phys. Rev. Lett. 96, 066001 (2006).
* (4) P. Joseph, C. Cottin-Bizonne, J.-M. Benoît, C. Ybert, C. Journet, P. Tabeling, and L. Bocquet, Slippage of Water Past Superhydrophobic Carbon Nanotube Forests in Microchannels, Phys. Rev. Lett. 97, 156104 (2006).
* (5) A. Steinberger, C. Cottin-Bizonne, P. Kleimann, and E. Charlaix, High friction on a bubble mattress, Nature Mater. 6, 665 (2007).
* (6) R. Blossey, Self-Cleaning Surfaces - Virtual Realities, Nat. Mater. 2, 301-306 (2003).
* (7) K. M. Wisdom, J. A. Watson, X. Qu, F. Liu, G. S. Watson, and C. H. Chen, Selfcleaning of superhydrophobic surfaces by self-propelled jumping condensate, Proc. Natl. Acad. Sci. U. S. A. 20, 7992-7997 (2013).
* (8) A. Lafuma and D. Quéré, Superhydrophobic States, Nat. Mater. 2, 457-460 (2003).
* (9) M. Nosonovsky and B. Bhushan, Biomimetic Superhydrophobic Surfaces: Multiscale Approach, Nano Lett. 7, 2633 (2007).
* (10) R. Truesdell, A. Mammoli, P. Vorobieff, F. van Swol, and C. J. Brinker, Drag reduction on a patterned superhydrophobic surface, Phys. Rev. Lett. 97, 044504 (2006).
* (11) J. B. Boreyko and C. H. Chen, Self-propelled dropwise condensate on superhydrophobic surfaces, Phys. Rev. Lett. 103, 184501 (2009).
* (12) R. H. Siddique, G. Gomard, and H. Holscher, The role of random nanostructures for the omnidirectional anti-reflection properties of the glasswing butterfly, Nat. Commun. 6, 301 (2015).
* (13) E. P. Ivanova, J. Hasan, H. K. Webb, V. K. Truong, G. S. Watson, J. A. Watson, V. A. Baulin, S. Pogodin, J. Y. Wang, M. J. Tobin, C. Lobbe, and R. J. Crawford, Natural Bactericidal Surfaces: Mechanical Rupture of Pseudomonas aeruginosa Cells by Cicada Wings, Small 8, 2489 (2012).
* (14) E. P. Ivanova, J. Hasan, H. K. Webb, G. Gervinskas, S. Juodkazis, V. K. Truong, A. H. F. Wu, R. N. Lamb, V. A. Baulin, G. S. Watson, J. A. Watson, D. E. Mainwaring, and R. J. Crawford, Bactericidal activity of black silicon, Nat. Commun. 4, 359 (2013).
* (15) X. L. Li, Bactericidal mechanism of nanopatterned surfaces, Phys. Chem. Chem. Phys. 18, 1311-1316 (2016).
* (16) K. Xiao, X. Z. Cao, X. Chen, H. Z. Hu, and C. X. Wu, Bactericidal efficacy of nanopatterned surface tuned by topography, J. Appl. Phys. 128, 064701 (2020).
* (17) R. N. Wenzel, Resistance of Solid Surfaces to Wetting by Water, Ind. Eng. Chem. 28, 988-994 (1936).
* (18) A. B. D. Cassie and S. Baxter, Wettability of Porous Surface, Trans. Faraday Soc. 40, 546-551 (1944).
* (19) Z. Yoshimitsu, A. Nakajima, T. Watanabe, and K. Hashimoto, Effects of Surface Structure on the Hydrophobicity and Sliding Behavior of Water Droplets, Langmuir 18, 5818 (2002).
* (20) B. Majhy, V.P. Singh, A. K. Sen, Understanding wetting dynamics and stability of aqueous droplet over superhydrophilic spot surrounded by superhydrophobic surface, J. Colloid Interface Sci. 565, 582-591 (2020).
* (21) P. Tsai, R. G. H. Lammertink, M. Wessling, and D. Lohse, Evaporation-Triggered Wetting Transition for Water Droplets upon Hydrophobic Microstructures, Phys. Rev. Lett. 104, 116102 (2010).
* (22) X. M. Chen, R. Y. Ma, J. T. Li, C. L. Hao, W. Guo, B. L. Luk, S. C. Li, S. H. Yao, and Z. K. Wang, Evaporation of Droplets on Superhydrophobic Surfaces: Surface Roughness and Small Droplet Size Effects, Phys. Rev. Lett. 109, 116101 (2012).
* (23) H. C. M. Fernandes, M. H. Vainstein, and C. Brito, Modeling of Droplet Evaporation on Superhydrophobic Surfaces, Langmuir 31, 7652-7659 (2015).
* (24) A. Giacomello, M. Chinappi, S. Meloni, and C. M. Casciola, Metastable Wetting on Superhydrophobic Surfaces: Continuum and Atomistic Views of the Cassie-Baxter-Wenzel Transition, Phys. Rev. Lett. 109, 226102 (2012).
* (25) P. Lv, Y. Xue, Y. Shi, H. Lin, and H. Duan, Metastable States and Wetting Transition of Submerged Superhydrophobic Structures, Phys. Rev. Lett. 112, 196101 (2014).
* (26) A. Sudeepthi, L. Yeo, and A. K. Sen, Cassie-Wenzel wetting transition on nanostructured superhydrophobic surfaces induced by surface acoustic waves, Appl. Phys. Lett. 116, 093704 (2020).
* (27) E. Bormashenko, R. Pogreb, G. Whyman, Y. Bormashenko, and M. Erlich, Vibration-induced Cassie-Wenzel wetting transition on rough surfaces, Appl. Phys. Lett. 90, 201917 (2007).
* (28) J. B. Boreyko and C. H. Chen, Restoring Superhydrophobicity of Lotus Leaves with Vibration-Induced Dewetting, Phys. Rev. Lett. 103, 174502 (2009).
* (29) G. Manukyan, J. M. Oh, D. van den Ende, R. G. H. Lammertink, and F. Mugele, Electrical Switching of Wetting States on Superhydrophobic Surfaces: A Route Towards Reversible Cassie-to-Wenzel Transitions, Phys. Rev. Lett. 106, 014501 (2011).
* (30) J. M. Oh, G. Manukyan, D. van den Ende, and F. Mugele, Electric-field-driven instabilities on superhydrophobic surfaces, EPL 93, 56001 (2011).
* (31) R. Roy, J. A. Weibel, and S. V. Garimella, Re-entrant Cavities Enhance Resilience to the Cassie-to-Wenzel State Transition on Superhydrophobic Surfaces during Electrowetting, Langmuir 34, 12787-12793 (2018).
* (32) B. X. Zhang, S. L. Wang, and X. D. Wang, Wetting Transition from the Cassie-Baxter State to the Wenzel State on Regularly Nanostructured Surfaces Induced by an Electric Field, Langmuir 35, 662-670 (2019).
* (33) F. Mugele and J. C. Baret, Electrowetting: from basics to applications, J. Phys.: Condens Matter 17, R705-R774 (2005).
* (34) W. C. Nelson and C. J. Kim, Droplet Actuation by Electrowetting-on-Dielectric (EWOD): A Review, J. Adhes. Sci. Technol. 26, 1747-1771 (2012).
* (35) L. Q. Chen and E. Bonaccurso, Electrowetting - From statics to dynamics, Adv. Colloid Interface Sci. 210, 2-12 (2014).
* (36) M. G. Pollack, A. D. Shenderov, and R. B. Fair, Electrowetting-based actuation of droplets for integrated microfluidics, Lab Chip 2, 96-101 (2002).
* (37) V. Bahadur and S. V. Garimella, Electrowetting-Based Control of Static Droplet States on Rough Surfaces, Langmuir 23, 4918-4924 (2007).
* (38) A. Cavalli, D. J. Preston, E. Tio, D. W. Martin, N. Miljkovic, E. N. Wang, F. Blanchette, and J. W. M. Bush, Electrically induced drop detachment and ejection, Phys. Fluids 28, 022101 (2016).
* (39) Q. Vo and T. Tran, Critical Conditions for Jumping Droplets, Phys. Rev. Lett. 123, 024502 (2019).
* (40) Q. G. Wang, M. Xu, C. Wang, J. P. Gu, N. Hu, J. F. Lyu, and W. Yao, Actuation of a Nonconductive Droplet in an Aqueous Fluid by Reversed Electrowetting Effect, Langmuir 36, 8152-8164 (2020).
* (41) K. Xiao and C. X. Wu, Critical condition for electrowetting-induced detachment of a droplet from a curved surface, arXiv:2012.07255 (2020).
* (42) S. Berry, T. Fedynyshyn, L. Parameswaran, and A. Cabral, Switchable electrowetting of droplets on dual-scale structured surfaces, J. Vac. Sci. Technol. B 30, 06F801 (2012).
* (43) Y. Chen, Y. Suzuki, and K. Morimoto, Electrowetting-Dominated Instability of Cassie Droplets on Superlyophobic Pillared Surfaces, Langmuir 35, 2013-2022 (2019).
* (44) A. M. Miqdad, S. Datta, A. K. Das, and P. K. Das, Effect of electrostatic incitation on the wetting mode of a nano-drop over a pillar-arrayed surface, RSC Adv. 6, 110127 (2016).
* (45) K. Xiao, Y. P. Zhao, G. Ouyang, and X. L. Li, An analytical model of nanopatterned superhydrophobic surfaces, J. Coat. Technol. Res. 14, 1297-1306 (2017).
* (46) V. Bahadur and S. V. Garimella, Electrowetting-Based Control of Droplet Transition and Morphology on Artificially Microstructured Surfaces, Langmuir 24, 8338-8345 (2008).
# PPT: Parsimonious Parser Transfer
for Unsupervised Cross-Lingual Adaptation
Kemal Kurniawan1 Lea Frermann1 Philip Schulz2 Trevor Cohn1
1School of Computing and Information Systems, University of Melbourne
2Amazon Research
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
Work done outside Amazon.
###### Abstract
Cross-lingual transfer is a leading technique for parsing low-resource
languages in the absence of explicit supervision. Simple ‘direct transfer’ of
a learned model based on a multilingual input encoding has provided a strong
benchmark. This paper presents a method for unsupervised cross-lingual
transfer that improves over direct transfer systems by using their output as
implicit supervision as part of self-training on unlabelled text in the target
language. The method assumes minimal resources and provides maximal
flexibility by (a) accepting any pre-trained arc-factored dependency parser;
(b) assuming no access to source language data; (c) supporting both projective
and non-projective parsing; and (d) supporting multi-source transfer. With
English as the source language, we show significant improvements over state-
of-the-art transfer models on both distant and nearby languages, despite our
conceptually simpler approach. We provide analyses of the choice of source
languages for multi-source transfer, and the advantage of non-projective
parsing. Our code is available online at https://github.com/kmkurn/ppt-eacl2021.
## 1 Introduction
Figure 1: Illustration of our technique. For a target language sentence
($x_{i}$), a source parser $P_{\theta_{0}}$ predicts a set of candidate arcs
$\tilde{A}(x_{i})$ (subset shown in the figure), and parses
$\tilde{Y}(x_{i})$. The highest scoring parse is shown on the bottom (green),
and the true gold parse (unknown to the parser) on top (red). A target
language parser $P_{\theta}$ is then fine-tuned on a data set of ambiguously
labelled sentences $\\{x_{i},\tilde{Y}(x_{i})\\}$.
Recent progress in natural language processing (NLP) has been largely driven
by increasing amounts and size of labelled datasets. The majority of the
world’s languages, however, are low-resource, with little to no labelled data
available (Joshi et al., 2020). Predicting linguistic labels, such as
syntactic dependencies, underlies many downstream NLP applications, and the
most effective systems rely on labelled data. Their lack hinders the access to
NLP technology in many languages. One solution is cross-lingual model
transfer, which adapts models trained on high-resource languages to low-
resource ones. This paper presents a flexible framework for cross-lingual
transfer of syntactic dependency parsers which can leverage _any_ pre-trained
arc-factored dependency parser, and assumes no access to labelled target
language data.
One straightforward method of cross-lingual parsing is direct transfer. It
works by training a parser on the source language labelled data and
subsequently using it to parse the target language directly. Direct transfer
is attractive as it does not require labelled target language data, rendering
the approach fully unsupervised (direct transfer is also called zero-shot
transfer or model transfer in the literature). Recent work has shown that it is
possible to outperform direct transfer if unlabelled data, either in the
target language or a different auxiliary language, is available (He et al.,
2019; Meng et al., 2019; Ahmad et al., 2019b). Here, we focus on the former
setting and present flexible methods that can adapt a pre-trained parser given
unlabelled target data.
Despite their success in outperforming direct transfer by leveraging
unlabelled data, current approaches have several drawbacks. First, they are
limited to generative and projective parsers. However, discriminative parsers
have proven more effective, and non-projectivity is a prevalent phenomenon
across the world’s languages (de Lhoneux, 2019). Second, prior methods are
restricted to single-source transfer, however, transfer from multiple source
languages has been shown to lead to superior results (McDonald et al., 2011;
Duong et al., 2015a; Rahimi et al., 2019). Third, they assume access to the
source language data, which may not be possible because of privacy or legal
reasons. In such source-free transfer, only a pre-trained source parser may be
provided.
We address the three shortcomings with an alternative method for unsupervised
target language adaptation (Section 2). Our method uses high probability edge
predictions of the source parser as a supervision signal in a self-training
algorithm, thus enabling unsupervised training on the target language data.
The method is feasible for discriminative and non-projective parsing, as well
as multi-source and source-free transfer. Building on a framework introduced
in Täckström et al. (2013), this paper for the first time demonstrates their
effectiveness in the context of state-of-the-art neural dependency parsers,
and their generalizability across parsing frameworks. Using English as the
source language, we evaluate on eight distant and ten nearby languages (He et
al., 2019). The single-source transfer variant (Section 2.1) outperforms
previous methods by up to 11% UAS, averaged
over nearby languages. Extending the approach to multi-source transfer
(Section 2.2) gives further gains of 2% UAS
and closes the performance gap against the state of the art on distant
languages. In short, our contributions are:
1. 1.
A conceptually simple and highly flexible framework for unsupervised target
language adaptation, which supports multi-source and source-free transfer, and
can be employed with any pre-trained state-of-the-art arc-factored parser(s);
2. 2.
Generalisation of the method of Täckström et al. (2013) to state-of-the-art,
non-projective dependency parsing with neural networks;
3. 3.
Up to 13% UAS improvement over state-of-the-art
models, considering nearby languages, and roughly equal performance over
distant languages; and
4. 4.
Analysis of the impact of choice of source languages on multi-source transfer
quality.
## 2 Supervision via Transfer
In our scenario of unsupervised cross-lingual parsing, we assume the
availability of a pre-trained source parser, and unlabelled text in the target
language. Thus, we aim to leverage this data such that our cross-lingual
transfer parsing method outperforms direct transfer. One straightforward
method is self-training where we use the predictions from the source parser as
supervision to train the target parser. This method may yield decent
performance as direct transfer is fairly good to begin with. However, we may
be able to do better if we also consider a set of parse trees that have high
probability under the source parser (cf. Fig. 1 for illustration).
If we assume that the source parser can produce a set of possible trees
instead, then it is natural to use all of these trees as supervision signal
for training. Inspired by Täckström et al. (2013), we formalise the method as
follows. Given an unlabelled dataset $\\{x_{i}\\}_{i=1}^{n}$, the training
loss can be expressed as
$\displaystyle\mathcal{L}(\theta)$
$\displaystyle=-\frac{1}{n}\sum_{i=1}^{n}\log\sum_{y\in\tilde{Y}(x_{i})}P_{\theta}(y|x_{i})$
(1)
where $\theta$ is the target parser parameters and $\tilde{Y}(x_{i})$ is the
set of trees produced by the source parser. Note that $\tilde{Y}(x_{i})$ must
be smaller than the set of all trees spanning $x_{i}$ (denoted as
$\mathcal{Y}(x_{i})$) because $\mathcal{L}(\theta)=0$ otherwise. This
training procedure is a form of self-training, and we expect that the target
parser can learn the correct tree as it is likely to be included in
$\tilde{Y}(x_{i})$. Even if this is not the case, as long as the correct arcs
occur quite frequently in $\tilde{Y}(x_{i})$, we expect the parser to learn a
useful signal.
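Per sentence, Eq. (1) is a negative log-sum-exp over the log-probabilities of the candidate trees. A minimal sketch (pure Python; the function name and list-of-log-probabilities input are illustrative, not from the paper):

```python
import math

def ambiguous_loss(tree_log_probs):
    """Negative log of the total probability mass on the candidate tree
    set: -log sum_{y in Y~(x)} P(y | x), i.e. Eq. (1) for one sentence."""
    # Log-sum-exp with the max subtracted for numerical stability.
    m = max(tree_log_probs)
    return -(m + math.log(sum(math.exp(lp - m) for lp in tree_log_probs)))
```

For instance, two candidate trees each carrying probability 0.25 give a loss of $-\log 0.5 = \log 2$.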
We consider an arc-factored neural dependency parser where the score of a tree
is defined as the sum of the scores of its arcs, and the arc scoring function
is parameterised by a neural network. The probability of a tree is then
proportional to its score. Formally, this formulation can be expressed as
$\displaystyle P_{\theta}(y|x)$ $\displaystyle=\frac{\exp
s_{\theta}(x,y)}{Z(x)}$ (2) $\displaystyle s_{\theta}(x,y)$
$\displaystyle=\sum_{(h,m)\in A(y)}s_{\theta}(x,h,m)$ (3)
where $Z(x)=\sum_{y\in\mathcal{Y}(x)}\exp s_{\theta}(x,y)$ is the partition
function, $A(y)$ is the set of head-modifier arcs in $y$, and
$s_{\theta}(x,y)$ and $s_{\theta}(x,h,m)$ are the tree and arc scoring
function respectively.
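To make Eqs. (2)-(3) concrete, the brute-force sketch below scores a tree as the sum of its arc scores and normalises by enumerating a given tree set (illustrative only and feasible only for toy inputs; a real parser never enumerates $\mathcal{Y}(x)$ but uses the dynamic programs discussed in Section 2.1):

```python
import math

def tree_score(arc_scores, arcs):
    """Arc-factored tree score (Eq. 3): sum of s(x, h, m) over the
    head-modifier arcs (h, m) of the tree."""
    return sum(arc_scores[(h, m)] for (h, m) in arcs)

def tree_log_prob(arc_scores, tree, all_trees):
    """log P(y|x) (Eq. 2), normalising over an explicitly given tree set."""
    scores = [tree_score(arc_scores, y) for y in all_trees]
    m = max(scores)
    log_z = m + math.log(sum(math.exp(s - m) for s in scores))  # log Z(x)
    return tree_score(arc_scores, tree) - log_z
```

With uniform arc scores, every tree in the enumerated set receives equal probability, as expected.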
### 2.1 Single-Source Transfer
Here, we consider the case where a single pre-trained source parser is
provided and describe how the set of trees is constructed. Concretely, for
every sentence $x=w_{1},w_{2},\ldots,w_{t}$ in the target language data, using
the source parser, the set of high probability trees $\tilde{Y}(x)$ is defined
as the set of dependency trees that can be assembled from the high probability
arcs set $\tilde{A}(x)=\bigcup_{m=1}^{t}\tilde{A}(x,m)$, where
$\tilde{A}(x,m)$ is the set of high probability arcs whose dependent is
$w_{m}$. Thus, $\tilde{Y}(x)$ can be expressed formally as
$\displaystyle\tilde{Y}(x)$ $\displaystyle=\\{y|y\in\mathcal{Y}(x)\wedge
A(y)\subseteq\tilde{A}(x)\\}.$ (4)
$\tilde{A}(x,m)$ is constructed by adding arcs $(h,m)$ in order of decreasing
arc marginal probability until their cumulative probability exceeds a
threshold $\sigma$ (Täckström et al., 2013). The predicted tree from the
source parser is also included in $\tilde{Y}(x)$ so the chart is never empty.
This prediction is simply the highest scoring tree. This procedure is
illustrated in Fig. 1.
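The construction of $\tilde{A}(x,m)$ can be sketched as follows (hypothetical function name; `marginals` maps each candidate head $h$ to the marginal probability of arc $(h,m)$):

```python
def high_prob_arcs(marginals, sigma=0.95):
    """For one dependent m, keep heads in decreasing order of arc marginal
    probability until their cumulative probability exceeds sigma
    (the construction of Täckström et al., 2013)."""
    kept, total = [], 0.0
    for h, p in sorted(marginals.items(), key=lambda kv: -kv[1]):
        kept.append(h)
        total += p
        if total > sigma:
            break
    return set(kept)
```

A lower $\sigma$ yields a smaller, more confident arc set; a higher $\sigma$ admits more ambiguity for self-training to resolve.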
Since $\mathcal{Y}(x)$ contains an exponential number of trees, efficient
algorithms are required to compute the partition function $Z(x)$, arc marginal
probabilities, and the highest scoring tree. First, arc marginal probabilities
can be computed efficiently with dynamic programming for projective trees
(Paskin, 2001) and Matrix-Tree Theorem for the non-projective counterpart (Koo
et al., 2007; McDonald and Satta, 2007; Smith and Smith, 2007). The same
algorithms can also be employed to compute $Z(x)$. Next, the highest scoring
tree can be obtained efficiently with Eisner’s algorithm (Eisner, 1996) or the
maximum spanning tree algorithm (McDonald et al., 2005; Chu and Liu, 1965;
Edmonds, 1967) for the projective and non-projective cases, respectively.
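For the non-projective case, the Matrix-Tree Theorem reduces $Z(x)$ to a determinant. The sketch below (pure Python, toy sizes only; it assumes the single-root variant of Koo et al. (2007), with `weights[h][m]` $=\exp s_{\theta}(x,h,m)$ and index 0 reserved for ROOT) illustrates the computation:

```python
def partition_function(weights):
    """Z(x) over single-root non-projective trees via the Matrix-Tree
    Theorem: the determinant of a Laplacian whose first row is replaced
    by the root-arc weights."""
    n = len(weights) - 1  # number of words (index 0 is ROOT)
    L = [[0.0] * n for _ in range(n)]
    for m in range(1, n + 1):
        for h in range(1, n + 1):
            if h != m:
                L[m - 1][m - 1] += weights[h][m]   # diagonal: incoming mass
                L[h - 1][m - 1] = -weights[h][m]   # off-diagonal
    for m in range(1, n + 1):
        L[0][m - 1] = weights[0][m]                # root-row replacement
    return _det(L)

def _det(a):
    """Determinant by Gaussian elimination with partial pivoting."""
    a = [row[:] for row in a]
    n, det = len(a), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        if abs(a[p][i]) < 1e-300:
            return 0.0
        if p != i:
            a[i], a[p] = a[p], a[i]
            det = -det
        det *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return det
```

For a two-word sentence with unit weights there are exactly two single-root spanning trees, and the determinant evaluates to 2.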
The transfer is performed by initialising the target parser with the source
parser’s parameters and then fine-tuning it with the training loss in Eq. 1 on
the target language data. Following previous works (Duong et al., 2015b; He et
al., 2019), we also regularise the parameters towards the initial parameters
to prevent them from deviating too much since the source parser is already
good to begin with. Thus, the final fine-tuning loss becomes
$\displaystyle\mathcal{L}^{\prime}(\theta)$
$\displaystyle=\mathcal{L}(\theta)+\lambda||\theta-\theta_{0}||_{2}^{2}$ (5)
where $\theta_{0}$ is the initial parameters and $\lambda$ is a hyperparameter
regulating the strength of the $L_{2}$ regularisation. This single-source
transfer strategy was introduced as ambiguity-aware self-training by Täckström
et al. (2013). A difference here is that we regularise the target parser’s
parameters against the source parser’s as the initialiser, and apply the
technique to modern lexicalised state-of-the-art parsers. We refer to this
transfer strategy as PPT hereinafter.
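Eq. (5) simply adds a squared $L_{2}$ penalty towards the initial parameters. As a sketch (with parameters flattened to plain lists purely for illustration):

```python
def finetune_loss(data_loss, params, init_params, lam):
    """Eq. (5): L'(theta) = L(theta) + lambda * ||theta - theta_0||_2^2,
    regularising the target parser towards the source initialisation."""
    reg = sum((p - p0) ** 2 for p, p0 in zip(params, init_params))
    return data_loss + lam * reg
```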
Note that the whole procedure of PPT can be performed even when the source
parser is trained with monolingual embeddings. Specifically, given a source
parser trained _only on monolingual embeddings_ , one can align pre-trained
target language word embeddings to the source embedding space using an offline
cross-lingual alignment method (e.g., of Smith et al. (2017)), and use the
aligned target embeddings with the source model to compute $\tilde{Y}(x)$.
Thus, our method can be used with any pre-trained monolingual neural parser.
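As a toy stand-in for the offline alignment step, the 2-D special case of orthogonal Procrustes has a closed form (Smith et al. (2017) solve the high-dimensional version with an SVD). The sketch below is purely illustrative: it finds the rotation mapping paired target vectors onto source vectors in the least-squares sense:

```python
import math

def align_rotation_2d(src, tgt):
    """2-D orthogonal Procrustes (rotation only): find R minimising
    sum_i ||R t_i - s_i||^2 for paired vectors, returning R as a function."""
    s = sum(y * u - x * v for (x, y), (u, v) in zip(src, tgt))
    c = sum(x * u + y * v for (x, y), (u, v) in zip(src, tgt))
    a = math.atan2(s, c)  # optimal rotation angle
    ca, sa = math.cos(a), math.sin(a)
    return lambda p: (ca * p[0] - sa * p[1], sa * p[0] + ca * p[1])
```

Applying the returned rotation to further target-language vectors brings them into the source embedding space, after which the source parser can score them directly.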
### 2.2 Multi-Source Transfer
We now consider the case where multiple pre-trained source parsers are
available. To extend PPT to this multi-source case, we employ the ensemble
training method from Täckström et al. (2013), which we now summarise. We
define $\tilde{A}(x,m)=\bigcup_{k}\tilde{A}_{k}(x,m)$ where
$\tilde{A}_{k}(x,m)$ is the set of high probability arcs obtained with the
$k$-th source parser. The rest of the procedure is exactly the same as PPT.
Note that we need to select one source parser as the main source to initialise
the target parser’s parameters with. Henceforth, we refer to this method as
PPTX.
Multiple source parsers may help transfer because each parser encodes
different syntactic biases from the language it is trained on. Thus, one of
those biases is more likely to match that of the target language than when
using just a single source parser. However, multi-source
transfer may also hurt performance if the languages have very different
syntax, or the source parsers are of poor quality, which can arise from poor
quality cross-lingual word embeddings.
## 3 Experiments
### 3.1 Setup
We run our experiments on Universal Dependency Treebanks v2.2 (Nivre et al.,
2018). We reimplement the self-attention graph-based parser of Ahmad et al.
(2019a) that has been used with success for cross-lingual dependency parsing.
Averaged over 5 runs, our reimplementation achieves
88.8% unlabelled attachment score (UAS) on
English Web Treebank using the same hyperparameters (reported in Table 4),
slightly below their reported 90.3% result (UAS and LAS are reported excluding
punctuation tokens). We select the
run with the highest labelled attachment score (LAS) as the source parser. We
obtain cross-lingual word embeddings with the offline transformation of Smith
et al. (2017) applied to fastText pre-trained word vectors (Bojanowski et al.,
2017). We include the universal POS tags as inputs by concatenating the
embeddings with the word embeddings in the input layer. We acknowledge that
the inclusion of gold POS tags does not reflect a realistic low-resource
setting where gold tags are not available, which we discuss more in Section
3.3. We evaluate on 18 target languages that are divided into two groups,
distant and nearby languages, based on their distance from English as defined
by He et al. (2019). We exclude Japanese and Chinese based on Ahmad et al.
(2019a), who reported atypically low performance on these two languages, which
they attributed to the low quality of their cross-lingual word embeddings. In
subsequent work they excluded these languages (Ahmad et al., 2019b).
During the unsupervised fine-tuning, we compute the training loss over all
trees regardless of projectivity (i.e. we use Matrix-Tree Theorem to compute
Eq. 1) and discard sentences longer than 30 tokens to avoid out-of-memory
error. Following He et al. (2019), we fine-tune on the target language data
for 5 epochs, tune the hyperparameters (learning rate and $\lambda$) on Arabic
and Spanish using LAS, and use these values (reported in Table 5) for the
distant and nearby languages, respectively. We set the threshold $\sigma=0.95$
for both PPT and PPTX following Täckström et al. (2013). We keep the rest of
the hyperparameters (e.g., batch size) equal to those of Ahmad et al. (2019a).
For PPTX, unless otherwise stated, we consider a leave-one-out scenario where
we use all languages except the target as the source language. We use the same
hyperparameters as the English parser to train these non-English source
parsers and set the English parser as the main source.
### 3.2 Comparisons
We compare PPT and PPTX against several recent unsupervised transfer systems.
First, He is a neural lexicalised DMV parser with normalising flow that uses a
language modelling objective when fine-tuning on the unlabelled target
language data (He et al., 2019). Second, Ahmad is an adversarial training
method that attempts to learn language-agnostic representations (Ahmad et al.,
2019b). Lastly, Meng is a constrained inference method that derives
constraints from the target corpus statistics to aid inference (Meng et al.,
2019). We also compare against direct transfer (DT) and self-training (ST) as
our baseline systems (for ST, which requires significantly less memory, we only
discard sentences longer than 60 tokens). Complete hyperparameter values are
shown in Table 5.
### 3.3 Results
Target | UAS (DT) | UAS (ST) | UAS (PPT) | UAS (PPTX) | UAS (He) | UAS (Ahmad) | UAS (Meng) | LAS (DT) | LAS (ST) | LAS (PPT) | LAS (PPTX) | LAS (Ahmad)
---|---|---|---|---|---|---|---|---|---|---|---|---
fa | $37.53$ | $38.0$ | $39.47$ | $53.58$ | $63.20$ | — | — | $29.24$ | $30.5$ | $31.60$ | $44.52$ | —
ar† | $37.60$ | $39.2$ | $39.46$ | $48.29$ | $55.44$ | $38.98$ | $47.3$ | $27.34$ | $30.0$ | $29.94$ | $38.48$ | $27.89$
id | $51.63$ | $49.9$ | $50.28$ | $71.91$ | $64.20$ | $51.57$ | $53.1$ | $45.23$ | $44.4$ | $44.72$ | $59.02$ | $45.31$
ko | $35.07$ | $37.1$ | $37.47$ | $34.59$ | $37.03$ | $34.23$ | $37.1$ | $16.57$ | $18.2$ | $17.99$ | $16.11$ | $16.08$
tr | $36.93$ | $38.1$ | $39.16$ | $38.44$ | $36.05$ | — | $35.2$ | $18.49$ | $19.5$ | $19.04$ | $20.60$ | —
hi | $33.70$ | $34.7$ | $33.96$ | $36.39$ | $33.17$ | $37.37$ | $52.4$ | $25.40$ | $26.6$ | $26.37$ | $28.27$ | $28.01$
hr | $61.98$ | $63.4$ | $63.79$ | $71.90$ | $65.31$ | $63.11$ | $63.7$ | $51.87$ | $54.2$ | $54.18$ | $61.17$ | $53.62$
he | $56.62$ | $59.2$ | $60.49$ | $64.17$ | $64.80$ | $57.15$ | $58.8$ | $47.61$ | $50.5$ | $51.11$ | $53.92$ | $49.36$
average | $43.88$ | $44.95$ | $45.51$ | $52.41$ | $52.40$ | — | — | $32.72$ | $34.24$ | $34.37$ | $40.26$ | —
bg | $77.68$ | $80.0$ | $81.22$ | $81.92$ | $73.57$ | $79.72$ | $79.7$ | $66.22$ | $68.9$ | $70.02$ | $70.24$ | $68.39$
it | $77.89$ | $79.7$ | $81.36$ | $83.65$ | $70.68$ | $80.70$ | $82.0$ | $71.07$ | $74.0$ | $75.53$ | $77.73$ | $75.57$
pt | $74.07$ | $76.3$ | $77.07$ | $80.95$ | $66.61$ | $77.09$ | $77.5$ | $65.05$ | $67.6$ | $68.26$ | $70.64$ | $67.81$
fr | $74.80$ | $77.5$ | $78.64$ | $80.57$ | $67.66$ | $78.31$ | $79.1$ | $68.13$ | $71.7$ | $72.76$ | $74.46$ | $73.29$
es† | $72.45$ | $74.9$ | $75.21$ | $78.25$ | $64.28$ | $74.08$ | $75.8$ | $63.78$ | $66.5$ | $67.01$ | $69.15$ | $65.84$
no | $77.86$ | $80.4$ | $81.21$ | $80.01$ | $65.29$ | $80.98$ | $80.4$ | $69.06$ | $71.9$ | $72.66$ | $71.75$ | $73.10$
da | $75.33$ | $76.0$ | $77.29$ | $76.57$ | $61.08$ | $76.25$ | $76.6$ | $66.25$ | $67.4$ | $68.55$ | $67.86$ | $68.03$
sv | $78.89$ | $80.5$ | $82.09$ | $81.03$ | $64.43$ | $80.43$ | $80.5$ | $71.12$ | $72.7$ | $74.20$ | $72.72$ | $76.68$
nl | $68.00$ | $68.9$ | $69.85$ | $74.39$ | $61.72$ | $69.23$ | $67.6$ | $59.47$ | $60.7$ | $61.53$ | $65.39$ | $60.51$
de | $66.79$ | $69.9$ | $69.52$ | $74.05$ | $69.52$ | $71.05$ | $70.8$ | $56.40$ | $60.0$ | $59.69$ | $63.45$ | $61.84$
average | $74.38$ | $76.41$ | $77.35$ | $79.14$ | $66.48$ | $76.78$ | $77.0$ | $65.66$ | $68.14$ | $69.02$ | $70.34$ | $69.11$
Table 1: Test UAS and LAS (avg. 5 runs) on distant (top) and nearby (bottom)
languages, sorted from most distant (fa) to closest (de) to English. PPTX is
trained in a leave-one-out fashion. The numbers for He, Ahmad, and Meng are
obtained from the corresponding papers, direct transfer (DT) and self-training
(ST) are based on our own implementation. † indicates languages used for
hyper-parameter tuning, and thus have additional supervision through the use
of a labelled development set.
Table 1 shows the main results. We observe that fine-tuning via self-training
already helps DT, and by incorporating multiple high probability trees with
PPT, we can push the performance slightly higher on most languages, especially
the nearby ones. Although not shown in the table, we also find that PPT has up
to 6x lower standard deviation than ST, which makes PPT preferable to ST.
Thus, we exclude ST as a baseline from our subsequent experiments. Our results
seem to agree with those of Täckström et al. (2013) and suggest that PPT can
also be employed for neural parsers. Therefore, it should be considered for
target language adaptation if unlabelled target data is available. Comparing
to He (He et al., 2019), PPT performs worse on distant languages, but better
on nearby languages. This finding means that if the target language has a
closely related high-resource language, it may be better to transfer from that
language as the source and use PPT for adaptation. Against Ahmad (Ahmad et
al., 2019b), PPT performs better on 4 out of 6 distant languages. On nearby
languages, the average UAS of PPT is higher, and the average LAS is on par.
This result shows that leveraging unlabelled data for cross-lingual parsing
without access to the source data is feasible. PPT also performs better than
Meng (Meng et al., 2019) on 4 out of 7 distant languages, and slightly better
on average on nearby languages. This finding shows that PPT is competitive to
their constrained inference method.
Also reported in Table 1 are the ensemble results for PPTX, which are
particularly strong. PPTX outperforms PPT, especially on distant languages
with average absolute UAS and LAS improvements of 7%
and 6%, respectively. This finding
suggests that PPTX is indeed an effective method for multi-source transfer of
neural dependency parsers. It also gives further evidence that multi-source
transfer is better than the single-source counterpart. PPTX also closes the
gap against the state-of-the-art adaptation of He et al. (2019) in terms of
average UAS on distant languages. This result suggests that PPTX can be an
option for languages that do not have a closely related high-resource language
to transfer from.
#### Treebank Leakage
Figure 2: Relationship between treebank leakage and LAS for PPTX. Shaded area
shows the 95% confidence interval. Korean and
Turkish (in red) are excluded when computing the regression line.
The success of our cross-lingual transfer can be attributed in part to
treebank leakage, which measures the fraction of dependency trees in the test
set that are isomorphic to a tree in the training set (with potentially
different words); accordingly these trees are not entirely unseen. Such
leakage has been found to be a particularly strong predictor for parsing
performance in monolingual parsing (Søgaard, 2020). Fig. 2 shows the
relationship between treebank leakage and parsing accuracy, where the leakage
is computed between the English training set as source and the target
language’s test set. Excluding outliers which are Korean and Turkish because
of their low parsing accuracy despite the relatively high leakage, we find
that there is a fairly strong positive correlation ($r=0.57$) between the
amount of leakage and accuracy. The same trend occurs with DT, ST, and PPT.
This finding suggests that cross-lingual parsing is also affected by treebank
leakage just like monolingual parsing is, which may present an opportunity to
find good sources for transfer.
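As a concrete illustration, leakage can be estimated by canonically encoding each tree's unlabelled shape (the classic AHU encoding for rooted trees) and testing membership against the training set. This is only a sketch under our assumptions: the exact isomorphism criterion of Søgaard (2020) may differ (e.g., it may take word order or dependency labels into account), and the head-array input convention and function names are ours.

```python
from collections import defaultdict

def canon(heads):
    """AHU canonical encoding of an unlabelled rooted tree.

    heads[i] is the head of token i+1 (0 denotes the artificial root),
    so two sentences get the same encoding iff their trees are
    isomorphic as rooted trees, ignoring words and labels.
    NOTE: illustrative only; the paper's exact criterion may differ.
    """
    children = defaultdict(list)
    for dep, head in enumerate(heads, start=1):
        children[head].append(dep)

    def enc(node):
        # Sort child encodings so the result is order-independent.
        return "(" + "".join(sorted(enc(c) for c in children[node])) + ")"

    return enc(0)

def leakage(train_trees, test_trees):
    """Fraction of test trees isomorphic to some training tree."""
    seen = {canon(h) for h in train_trees}
    return sum(canon(h) in seen for h in test_trees) / len(test_trees)
```

For example, a three-token chain and a three-token "star" receive different encodings, so a test set containing one of each against a chain-only training set yields a leakage of 0.5.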
#### Use of Gold POS Tags
As we explained in Section 3.1, we restrict our experiments to gold POS tags
for comparison with prior work. However, the use of gold POS tags does not
reflect a realistic low-resource setting where one may have to resort to
automatically predicted POS tags. Tiedemann (2015) has shown that cross-
lingual delexicalised parsing performance degrades when predicted POS tags are
used. The degradation ranges from 2.9 to 8.4 LAS points depending on the
target language. Thus, our reported numbers in Table 1 are likely to decrease
as well if predicted tags are used, although we expect the decline is not as
sharp because our parser is lexicalised.
### 3.4 Parsimonious Selection of Sources for PPTX
Figure 3: Comparison of selection of source languages for PPTX on distant and
nearby languages, sorted from most distant (fa) to closest (de) to English.
PPTX-LOO is trained in a leave-one-out fashion. PPTX-REPR uses the
representative source language set, while PPTX-PRAG is adapted from five high-
resource languages. A source language is excluded from the source if it is
also the target language.
In our main experiment, we use all available languages as source for PPTX in a
leave-one-out setting. Such a setting may be justified to cover as many
syntactic biases as possible; however, training dozens of parsers may be
impractical. In this experiment, we consider the case where we can train only
a handful of source parsers. We investigate two selections of source
languages: (1) a representative selection (PPTX-REPR) which covers as many
language families as possible and (2) a pragmatic selection (PPTX-PRAG)
containing truly high-resource languages for which quality pre-trained parsers
are likely to exist. We restrict the selections to 5 languages each. For PPTX-
REPR, we use English, Spanish, Arabic, Indonesian, and Korean as source
languages. This selection covers Indo-European (Germanic and Romance), Afro-
Asiatic, Austronesian, and Koreanic language families respectively. We use
English, Spanish, Arabic, French, and German as source languages for PPTX-
PRAG. The five languages are classified as exemplary high-resource languages
by Joshi et al. (2020). We exclude a language from the source if it is also
the target language, in which case there will be only 4 source languages.
Other than that, the setup is the same as that of our main experiment
(hyperparameters are tuned; values are shown in Table 5).
We present the result in Fig. 3 where we also include the results for PPT, and
PPTX with the leave-one-out setting (PPTX-LOO). We report only LAS since UAS
shows a similar trend. We observe that both PPTX-REPR and PPTX-PRAG outperform
PPT overall. Furthermore, on nearby languages except Dutch and German, both
PPTX-REPR and PPTX-PRAG outperform PPTX-LOO, and PPTX-PRAG does best overall.
In contrast, no systematic difference between the three PPTX variants emerges
on distant languages. This finding suggests that instead of training dozens of
source parsers for PPTX, training just a handful of them is sufficient, and a
“pragmatic” selection of a small number of high-resource source languages
seems to be an efficient strategy. Since pre-trained parsers for these
languages are most likely available, it comes with the additional advantage of
alleviating the need to train parsers at all, which makes our method even more
practical.
#### Analysis on Dependency Labels
Figure 4: Comparison of direct transfer (DT), PPT, and PPTX-PRAG on select
dependency labels of Indonesian (top) and German (bottom).
Next, we break down the performance of our methods based on the dependency
labels to study their failure and success patterns. Fig. 4 shows the UAS of
DT, PPT, and PPTX-PRAG on Indonesian and German for select dependency labels.
Looking at Indonesian, PPT is slightly worse than DT in terms of overall
accuracy scores (Table 1), and this is reflected across dependency labels.
However, we see in Fig. 4 that PPT outperforms DT on amod. In Indonesian,
adjectives follow the noun they modify, while in English the opposite is true
in general. Thus, unsupervised target language adaptation seems able to
address these kinds of discrepancy between the source and target language. We
find that PPTX-PRAG outperforms both DT and PPT across dependency labels,
especially on flat and compound labels as shown in Fig. 4. Both labels are
related to multi-word expressions (MWEs), so PPTX appears to improve parsing
MWEs in Indonesian significantly.
For German, we find that both PPT and PPTX-PRAG outperform DT on most
dependency labels, with the most notable gain on nmod. In both languages, nmod
appears in diverse and often non-local relations, many of which do not
translate structurally, so fine-tuning improves performance as expected.
Also, we see PPTX-PRAG significantly underperforms on compound while PPT is
better than DT. German compounds are often merged into a single token, and
self-training appears to alleviate over-prediction of such relations. The
multi-source case may contain too much diffuse signal on compound and thus the
performance is worse than that of DT. We find that PPT and PPTX improve over
DT on mark, likely because markers are often used in places where German
deviates from English by becoming verb-final (e.g., subordinate clauses). Both
PPT and PPTX-PRAG seem able to learn this characteristic as shown by their
performance improvements. This analysis suggests that the benefits of self-
training depend on the syntactic properties of the target language.
### 3.5 Effect of Projectivity
Model | id | hr | fr | nl | AVG
---|---|---|---|---|---
Non-projective
DT | $45.2$ | $51.9$ | $68.1$ | $59.5$ | $56.2$
PPT | $44.7$ | $54.2$ | $72.8$ | $61.5$ | $58.3$
PPTX-PRAG | $57.4$ | $62.2$ | $77.9$ | $66.4$ | $66.0$
Projective
DT | $45.7$ | $52.1$ | $68.4$ | $59.6$ | $56.4$
PPT | $45.0$ | $54.0$ | $72.3$ | $61.7$ | $58.3$
PPTX-PRAG | $57.5$ | $61.1$ | $78.1$ | $67.7$ | $66.1$
Table 2: Comparison of projective and non-projective direct transfer (DT),
PPT, and PPTX-PRAG. Scores are LAS, averaged over 5 runs.
In this experiment, we study the effect of projectivity on the performance of
our methods. We emulate a projective parser by restricting the trees in
$\tilde{Y}(x)$ to be projective. In other words, the sum in Eq. 1 is performed
only over projective trees. At test time, we search for the highest scoring
projective tree. We compare DT, PPT, and PPTX-PRAG, and report LAS on
Indonesian (id) and Croatian (hr) as distant languages, and on French (fr) and
Dutch (nl) as nearby languages. The trend for UAS and on the other languages
is similar. We use the dynamic programming implementation provided by torch-
struct for the projective case (Rush, 2020). We find that it consumes more
memory than our Matrix-Tree Theorem implementation, so we set the length
cutoff to 20 tokens (hyperparameters are tuned; values are shown in Table 5).
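For reference, the projectivity constraint imposed on $\tilde{Y}(x)$ can be checked per tree by testing for crossing arcs: a tree is projective iff no two arcs cross when drawn above the sentence with the artificial root at position 0. The following minimal sketch assumes the usual head-array encoding; the function name and convention are ours, not the authors' code.

```python
def is_projective(heads):
    """Check whether a dependency tree has no crossing arcs.

    heads[i] is the head of token i+1; 0 is the artificial root.
    Two arcs cross iff exactly one endpoint of one arc lies strictly
    between the endpoints of the other.
    """
    arcs = [(min(h, d), max(h, d)) for d, h in enumerate(heads, start=1)]
    for i, (l1, r1) in enumerate(arcs):
        for l2, r2 in arcs[i + 1:]:
            if l1 < l2 < r1 < r2 or l2 < l1 < r2 < r1:
                return False
    return True
```

For instance, the tree with heads `[3, 4, 0, 3]` is non-projective because the arcs spanning (1, 3) and (2, 4) cross, whereas any chain is projective.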
Table 2 shows the results of our experiment, which suggest that there is no
significant performance difference between the projective and non-projective
variant of our methods. This result suggests that our methods generalise well
to both projective and non-projective parsing. That said, we recommend the
non-projective variant as it allows better parsing of languages that are
predominantly non-projective. Also, we find that it runs roughly 2x faster
than the projective variant in practice.
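The Matrix-Tree Theorem computation for the non-projective case can be sketched as follows. This is an illustrative implementation after Koo et al. (2007) and Smith and Smith (2007), not the authors' code: the partition function over all single-root non-projective trees equals the determinant of a modified Laplacian built from exponentiated arc scores.

```python
import numpy as np

def log_partition(arc_scores, root_scores):
    """Log-partition over all single-root non-projective dependency trees,
    via the Matrix-Tree Theorem (Koo et al., 2007; Smith & Smith, 2007).

    arc_scores[h, d]: log-weight of arc h -> d between tokens (diagonal ignored);
    root_scores[d]:   log-weight of attaching token d to the artificial root.
    """
    w = np.exp(arc_scores)
    np.fill_diagonal(w, 0.0)
    # Graph Laplacian: L[d, d] = sum_h w[h, d];  L[h, d] = -w[h, d] for h != d
    L = -w
    np.fill_diagonal(L, w.sum(axis=0))
    # Replace the first row with root-attachment weights (Koo et al., 2007)
    L[0, :] = np.exp(root_scores)
    # slogdet is numerically safer than log(det(...))
    sign, logdet = np.linalg.slogdet(L)
    return float(logdet)
```

As a sanity check, with all scores zero over two tokens there are exactly two single-root trees, and over three tokens there are nine, matching the determinant. Real implementations typically normalise scores before exponentiating for stability; torch-struct offers comparable differentiable routines.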
### 3.6 Disentangling the Effect of Ensembling and Larger Data Size
Model | ar | es
---|---|---
DT | $28.09$ | $64.11$
PPT | $30.84$ | $67.27$
PPTXEN5 | $30.92$ | $66.25$
PPTX-PRAGS | $36.46$ | $70.32$
PPTX-PRAG | $36.45$ | $71.88$
Table 3: Comparison of LAS on Arabic and Spanish on the development set,
averaged over 5 runs. PPTXEN5 is PPTX with 5 English parsers as source, each
trained on 1/5 size of the English corpus. PPTX-PRAGS is PPTX with the
pragmatic selection of source languages (PPTX-PRAG) but each source parser is
trained on the same amount of data as PPTXEN5.
The effectiveness of PPTX can be attributed to at least three factors: (1) the
effect of ensembling source parsers (ensembling), (2) the effect of larger
data size used for training the source parsers (data), and (3) the diversity
of syntactic biases from multiple source languages (multilinguality). In this
experiment, we investigate to what extent each of those factors contributes to
the overall performance. To this end, we design two additional comparisons:
PPTXEN5 and PPTX-PRAGS.
PPTXEN5 is PPTX with only English source parsers, where each parser is trained
on 1/5 of the English training set. That is, we randomly split the English
training set into five equal-sized parts, and train a separate parser on each.
These parsers then serve as the source parsers for PPTXEN5. Thus, PPTXEN5 has
the benefit of ensembling but not data and multilinguality compared with PPT.
PPTX-PRAGS is PPTX whose source language selection is the same as PPTX-PRAG,
but each source parser is trained on roughly the same amount of data as the
PPTXEN5 source parsers. In other
words, the training data size is roughly equal to 1/5 of the English training
set. To obtain this data, we randomly sub-sample the training data of each
source language to the appropriate number of sentences. Therefore, PPTX-PRAGS
has the benefit of ensembling and multilinguality but not data.
Table 3 reports their LAS on the development set of Arabic and Spanish,
averaged over five runs. We also include the results of PPTX-PRAG that enjoys
all three benefits. We observe that PPT and PPTXEN5 perform similarly on
Arabic, and PPTXEN5 has a slightly lower performance on Spanish. This result
suggests a negligible effect of ensembling on performance. On the other hand,
PPTX-PRAGS outperforms PPTXEN5 remarkably, with approximately
$6\%$ and $4\%$
LAS improvement on Arabic and Spanish respectively, showing that
multilinguality has a much larger effect on performance than ensembling.
Lastly, we see that PPTX-PRAG performs similarly to PPTX-PRAGS on Arabic, and
about $1.6\%$ better on Spanish. This result
demonstrates that data size has an effect, albeit a smaller one compared to
multilinguality. To conclude, the effectiveness of PPTX can be attributed to
the diversity contributed through multiple languages, and not to ensembling or
larger source data sets.
## 4 Related Work
Cross-lingual dependency parsing has been extensively studied in NLP. The
approaches can be grouped into two main categories. On the one hand, there are
approaches that operate on the data level. Examples of this category include
annotation projection, which aims to project dependency trees from a source
language to a target language (Hwa et al., 2005; Li et al., 2014; Lacroix et
al., 2016; Zhang et al., 2019); and source treebank reordering, which
manipulates the source language treebank to obtain another treebank whose
statistics approximately match those of the target language (Wang and Eisner,
2018; Rasooli and Collins, 2019). Both methods have no restriction on the type
of parsers as they are only concerned with the data. Transferring from
multiple source languages with annotation projection is also feasible (Agić et
al., 2016).
Despite their effectiveness, these data-level methods may require access to
the source language data, hence are unusable when it is inaccessible due to
privacy or legal reasons. In such source-free transfer, only a model pre-
trained on the source language data is available. By leveraging parallel data,
annotation projection is indeed feasible without access to the source language
data. That said, parallel data is limited for low-resource languages or may
have a poor domain match. Additionally, these methods involve training the
parser from scratch for every new target language, which may be prohibitive.
On the other hand, there are methods that operate on the model level. A
typical approach is direct transfer (aka., zero-shot transfer) which trains a
parser on source language data, and then directly uses it to parse a target
language. This approach is enabled by the shared input representation between
the source and target language such as POS tags (Zeman and Resnik, 2008) or
cross-lingual embeddings (Guo et al., 2015; Ahmad et al., 2019a). Direct
transfer supports source-free transfer and only requires training a parser
once on the source language data. In other words, direct transfer is
unsupervised as far as target language resources are concerned.
Previous work has shown that unsupervised target language adaptation
outperforms direct transfer. Recent work by He et al. (2019) used a neural
lexicalised dependency model with valence (DMV) (Klein and Manning, 2004) as
the source parser and fine-tuned it in an unsupervised manner on the
unlabelled target language data. This adaptation method allows for source-free
transfer and performs especially well on distant target languages. A different
approach is proposed by Meng et al. (2019), who gathered target language
corpus statistics to derive constraints to guide inference using the source
parser. Thus, this technique also allows for source-free transfer. A different
method is proposed by Ahmad et al. (2019b) who explored the use of unlabelled
data from an auxiliary language, which can be different from the target
language. They employed adversarial training to learn language-agnostic
representations. Unlike the others, this method can be extended to support
multi-source transfer. An older method is introduced by Täckström et al.
(2013), who leveraged ambiguity-aware training to achieve unsupervised target
language adaptation. Their method is usable for both source-free and multi-
source transfer. However, to the best of our knowledge, its use for neural
dependency parsing has not been investigated. Our work extends theirs by
employing it for the said purpose.
The methods of both He et al. (2019) and Ahmad et al. (2019b) have several
limitations. The method of He et al. (2019) requires the parser to be
generative and projective. Their generative parser is quite impoverished with
an accuracy that is $21$ points lower than a state-of-the-art discriminative
arc-factored parser on English. Thus, their choice of generative parser may
constrain its potential performance. Furthermore, their method performs
substantially worse than direct transfer on nearby target languages. Because
of the availability of resources such as Universal Dependency Treebanks (Nivre
et al., 2018), it is likely that a target language has a closely related high-
resource language which can serve as the source language. Therefore,
performing well on nearby languages is more desirable pragmatically. On top of
that, it is unclear how to employ this method for multi-source transfer. The
adversarial training method of Ahmad et al. (2019b) does not suffer from the
aforementioned limitations but is unusable for source-free transfer. That is,
it assumes access to the source language data, which may not always be
feasible due to privacy or legal reasons.
## 5 Conclusions
This paper presents a set of effective, flexible, and conceptually simple
methods for unsupervised cross-lingual dependency parsing, which can leverage
the power of state-of-the-art pre-trained neural network parsers. Our methods
improve over direct transfer and strong recent unsupervised transfer models,
by using source parser uncertainty for implicit supervision, leveraging only
unlabelled data in the target language. Our experiments show that the methods
are effective for both single-source and multi-source transfer, free from the
limitations of recent transfer models, and perform well for non-projective
parsing. Our analysis shows that the effectiveness of the multi-source
transfer method is attributable to its ability to leverage diverse syntactic
signals from source parsers from different languages. Our findings motivate
future research into advanced methods for generating informative sets of
candidate trees given one or more source parsers.
## Acknowledgments
We thank the anonymous reviewers for the useful feedback. A graduate research
scholarship is provided by Melbourne School of Engineering to Kemal Kurniawan.
## References
* Agić et al. (2016) Željko Agić, Anders Johannsen, Barbara Plank, Héctor Martínez Alonso, Natalie Schluter, and Anders Søgaard. 2016. Multilingual projection for parsing truly low-resource languages. _Transactions of the Association for Computational Linguistics_ , 4:301–312.
* Ahmad et al. (2019a) Wasi Ahmad, Zhisong Zhang, Xuezhe Ma, Eduard Hovy, Kai-Wei Chang, and Nanyun Peng. 2019a. On difficulties of cross-lingual transfer with order differences: A case study on dependency parsing. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 2440–2452.
* Ahmad et al. (2019b) Wasi Uddin Ahmad, Zhisong Zhang, Xuezhe Ma, Kai-Wei Chang, and Nanyun Peng. 2019b. Cross-lingual dependency parsing with unlabeled auxiliary languages. In _Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)_ , pages 372–382.
* Bojanowski et al. (2017) Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. _Transactions of the Association for Computational Linguistics_ , 5:135–146.
* Chu and Liu (1965) Yoeng-Jin Chu and Tseng-Hong Liu. 1965. On the shortest arborescence of a directed graph. _Scientia Sinica_ , 14:1396–1400.
* de Lhoneux (2019) Miryam de Lhoneux. 2019. _Linguistically Informed Neural Dependency Parsing for Typologically Diverse Languages_. Ph.D. thesis, Uppsala University.
* Duong et al. (2015a) Long Duong, Trevor Cohn, Steven Bird, and Paul Cook. 2015a. Cross-lingual transfer for unsupervised dependency parsing without parallel data. In _Proceedings of the Nineteenth Conference on Computational Natural Language Learning_ , pages 113–122.
* Duong et al. (2015b) Long Duong, Trevor Cohn, Steven Bird, and Paul Cook. 2015b. Low resource dependency parsing: Cross-lingual parameter sharing in a neural network parser. In _Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)_ , pages 845–850.
* Edmonds (1967) Jack Edmonds. 1967. Optimum branchings. _Journal of Research of the national Bureau of Standards B_ , 71(4):233–240.
* Eisner (1996) Jason M. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In _COLING 1996 Volume 1: The 16th International Conference on Computational Linguistics_.
* Guo et al. (2015) Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2015. Cross-lingual dependency parsing based on distributed representations. In _Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 1234–1244.
* He et al. (2019) Junxian He, Zhisong Zhang, Taylor Berg-Kirkpatrick, and Graham Neubig. 2019. Cross-lingual syntactic transfer through unsupervised adaptation of invertible projections. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 3211–3223.
* Hwa et al. (2005) Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara Cabezas, and Okan Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel texts. _Natural Language Engineering_ , 11(3):311–325.
* Joshi et al. (2020) Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_.
* Klein and Manning (2004) Dan Klein and Christopher Manning. 2004. Corpus-based induction of syntactic structure: Models of dependency and constituency. In _Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04)_ , pages 478–485.
* Koo et al. (2007) Terry Koo, Amir Globerson, Xavier Carreras, and Michael Collins. 2007. Structured prediction models via the matrix-tree theorem. In _Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)_ , pages 141–150.
* Lacroix et al. (2016) Ophélie Lacroix, Lauriane Aufrant, Guillaume Wisniewski, and François Yvon. 2016. Frustratingly easy cross-lingual transfer for transition-based dependency parsing. In _Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 1058–1063.
* Li et al. (2014) Zhenghua Li, Min Zhang, and Wenliang Chen. 2014. Soft cross-lingual syntax projection for dependency parsing. In _Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers_ , pages 783–793.
* McDonald et al. (2005) Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajič. 2005. Non-projective dependency parsing using spanning tree algorithms. In _Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing_ , pages 523–530.
* McDonald et al. (2011) Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In _Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing_ , pages 62–72.
* McDonald and Satta (2007) Ryan McDonald and Giorgio Satta. 2007. On the complexity of non-projective data-driven dependency parsing. In _Proceedings of the Tenth International Conference on Parsing Technologies_ , pages 121–132.
* Meng et al. (2019) Tao Meng, Nanyun Peng, and Kai-Wei Chang. 2019. Target language-aware constrained inference for cross-lingual dependency parsing. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 1117–1128.
* Nivre et al. (2018) Joakim Nivre, Mitchell Abrams, Željko Agić, et al. 2018. Universal dependencies 2.2. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University.
* Paskin (2001) Mark A Paskin. 2001. _Cubic-time parsing and learning algorithms for grammatical bigram models_.
* Rahimi et al. (2019) Afshin Rahimi, Yuan Li, and Trevor Cohn. 2019. Massively multilingual transfer for NER. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 151–164.
* Rasooli and Collins (2019) Mohammad Sadegh Rasooli and Michael Collins. 2019. Low-resource syntactic transfer with unsupervised source reordering. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 3845–3856.
* Rush (2020) Alexander Rush. 2020. Torch-struct: Deep structured prediction library. In _Proceedings of the 58th annual meeting of the association for computational linguistics: System demonstrations_ , pages 335–342.
* Smith and Smith (2007) David A. Smith and Noah A. Smith. 2007. Probabilistic models of nonprojective dependency trees. In _Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)_ , pages 132–140.
* Smith et al. (2017) Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In _International Conference on Learning Representations_.
* Søgaard (2020) Anders Søgaard. 2020. Some languages seem easier to parse because their treebanks leak. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 2765–2770.
* Täckström et al. (2013) Oscar Täckström, Ryan McDonald, and Joakim Nivre. 2013. Target language adaptation of discriminative transfer parsers. In _Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 1061–1071.
* Tiedemann (2015) Jörg Tiedemann. 2015. Cross-lingual dependency parsing with universal dependencies and predicted PoS labels. In _Proceedings of the Third International Conference on Dependency Linguistics (Depling 2015)_ , pages 340–349.
* Wang and Eisner (2018) Dingquan Wang and Jason Eisner. 2018. Synthetic data made to order: The case of parsing. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 1325–1337.
* Zeman and Resnik (2008) Daniel Zeman and Philip Resnik. 2008. Cross-language parser adaptation between related languages. In _Proceedings of the IJCNLP-08 Workshop on NLP for Less Privileged Languages_.
* Zhang et al. (2019) Meishan Zhang, Yue Zhang, and Guohong Fu. 2019. Cross-lingual dependency parsing using code-mixed treebank. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 996–1005.
## Appendix A Hyperparameter values
Here we report the hyperparameter values for experiments presented in the
paper. Table 4 shows the hyperparameter values of our English source parser
explained in Section 3.1. Table 5 reports the tuned hyperparameter values for
our experiments shown in Table 1, Fig. 3, and Table 2.
Hyperparameter | Value
---|---
Sentence length cutoff | 100
Word embedding size | 300
POS tag embedding size | 50
Number of attention heads | 10
Number of Transformer layers | 6
Feedforward layer hidden size | 512
Attention key vector size | 64
Attention value vector size | 64
Dropout | 0.2
Dependency arc vector size | 512
Dependency label vector size | 128
Batch size | 80
Learning rate | ${10}^{-4}$
Early stopping patience | 50
Table 4: Hyperparameter values of the source parser.
Hyperparameter | Nearby | Distant
---|---|---
ST
Sentence length cutoff | 60 | 60
Learning rate | $5.6\text{\times}{10}^{-4}$ | $3.7\text{\times}{10}^{-4}$
L2 coefficient ($\lambda$) | $3\text{\times}{10}^{-4}$ | $2.8\text{\times}{10}^{-4}$
PPT
Learning rate | $3.8\text{\times}{10}^{-5}$ | $2\text{\times}{10}^{-5}$
L2 coefficient ($\lambda$) | $0.01$ | $0.39$
PPTX/PPTX-LOO
Learning rate | $2.1\text{\times}{10}^{-5}$ | $5.9\text{\times}{10}^{-5}$
L2 coefficient ($\lambda$) | $0.079$ | $1.2\text{\times}{10}^{-4}$
PPTX-REPR
Learning rate | $1.7\text{\times}{10}^{-5}$ | $9.7\text{\times}{10}^{-5}$
L2 coefficient ($\lambda$) | $4\text{\times}{10}^{-4}$ | $0.084$
PPTX-PRAG
Learning rate | $4.4\text{\times}{10}^{-5}$ | $8.5\text{\times}{10}^{-5}$
L2 coefficient ($\lambda$) | $2.7\text{\times}{10}^{-4}$ | $2.8\text{\times}{10}^{-5}$
Projective PPT
Sentence length cutoff | 20 | 20
Learning rate | ${10}^{-4}$ | ${10}^{-4}$
L2 coefficient ($\lambda$) | $7.9\text{\times}{10}^{-4}$ | $7.9\text{\times}{10}^{-4}$
Projective PPTX-PRAG
Sentence length cutoff | 20 | 20
Learning rate | $9.4\text{\times}{10}^{-5}$ | $9.4\text{\times}{10}^{-5}$
L2 coefficient ($\lambda$) | $2.4\text{\times}{10}^{-4}$ | $2.4\text{\times}{10}^{-4}$
Table 5: Hyperparameter values of ST, PPT, PPTX, PPTX-REPR, PPTX-PRAG,
projective PPT, and projective PPTX-PRAG. Sentence length cutoff for PPT,
PPTX, PPTX-REPR, and PPTX-PRAG is 30, as explained in Section 3.1.
# What We Can Learn From Visual Artists About Software Development
Jingyi Li (Stanford University), Sonia Hashim (University of California, Santa Barbara), and Jennifer Jacobs (University of California, Santa Barbara)
(2021)
###### Abstract.
This paper explores software’s role in visual art production by examining how
artists use and develop software. We conducted interviews with professional
artists who were collaborating with software developers, learning software
development, and building and maintaining software. We found artists were
motivated to learn software development for intellectual growth and access to
technical communities. Artists valued efficient workflows through skilled
manual execution and personal software development, but avoided high-level
forms of software automation. Artists identified conflicts between their
priorities and those of professional developers and computational art
communities, which influenced how they used computational aesthetics in their
work. These findings contribute to efforts in systems engineering research to
integrate end-user programming and creativity support across software and
physical media, suggesting opportunities for artists as collaborators.
Artists’ experiences writing software can guide technical implementations of
domain-specific representations, and their experiences in interdisciplinary
production can aid inclusive community building around computational tools.
visual art, software development, creativity support tools
copyright: iw3c2w3; journal year: 2021
conference: CHI Conference on Human Factors in Computing Systems, May 8–13, 2021, Yokohama, Japan
booktitle: CHI Conference on Human Factors in Computing Systems (CHI ’21), May 8–13, 2021, Yokohama, Japan
doi: 10.1145/3411764.3445682; isbn: 978-1-4503-8096-6/21/05
ccs: Human-centered computing, User interface programming; Applied computing, Media arts
## 1. Introduction
Software has become closely integrated with visual art production.
Illustration, painting, animation, and photography are just a small sample of
once non-digital fields where many practitioners now incorporate software.
Like all creative tools, software offers artists new opportunities while
simultaneously imposing constraints. For example, digital drawing software
allows artists to continually edit their pieces through layers and undo, but
limits them to the functionality provided on a toolbar and can make it
difficult to integrate physical media.
Developing software for visual art brings many specific challenges. Artists
are distinguished from many other software users by their nonconformity and
unconventional approaches to making artifacts (Feist, 1999). Artists also work
extensively with physical media and manual forms of expression—qualities that
are challenging to integrate in digital representations (Schachman, 2012;
Devendorf and Ryokai, 2015). Designing software abstractions and constraints
for different domains of visual art poses additional challenges. The ways
software designers expect artists to interact with the system may be different
from the ways artists are used to engaging with materials to make art in their
domain (Victor, 2011). Furthermore, artists who use domain-specific creative
coding languages (Reas and Fry, 2004; Lieberman, 2014) face additional
challenges in having to understand abstract representations and work in a
highly structured manner (Victor, 2011). These forms of working can be
incompatible with their existing practices of manual manipulation and non-
objective exploration (Berger, 2014; Goodman, 1968).
These challenges, as well as the opportunities they surface, affect a variety of
stakeholders. They include systems engineering researchers who research
creativity support technologies. They also include artists who collaborate
with software developers in their own practice. Finally, they include artists
who are software developers. Such individuals build personal tools, but can
also support communities of creative practitioners by leveraging their domain
knowledge to develop and distribute new, artist-specific platforms (Reas and
Fry, 2004; Levin, 2003). To support multiple stakeholders in software
production for visual art and address these challenges, we sought to develop a
more detailed understanding of the ways visual artists work with software. By
studying how artists develop and use their own forms of software, we
synthesized design implications across the joint interests of artists and
professional software developers.
In this paper, we ask two research questions: (1) What factors lead artists to
embrace or reject software tools created by others? (2) What factors motivate
artists to develop their own software, and how do their approaches diverge or
align with other domains of software development?
Our research questions are motivated by our efforts to develop programming
systems for visual artists over the past seven years and oriented towards
informing future efforts in systems engineering research for creativity
support. Like researchers, artists are motivated, in part, by creating novel
outcomes. We argue that artists’ motivations for building their own software
align well with the kinds of contributions valued by HCI systems engineering
research for visual art production. Artists’ joint concerns around
expressiveness, audience, and commercial viability, as well as their unique
workflows, can inform approaches in end-user software development. But at the
same time, artist communities have different values, norms, and forms of
dissemination than those of professional software developers (Levin, 2015).
Understanding and integrating these values is the first step in informing
research partnerships between these communities.
To investigate these questions and opportunities, we formalized our initial
observations into 13 in-depth interviews conducted with artists across a
spectrum of software development expertise, deliberately selected from a
larger set of conversations over the past four years. Our interviews sought to
understand how artists’ tools and materials (digital, physical, and
programmatic) impacted their processes and outcomes. This paper makes two
contributions. First, through a thematic analysis of our interviews, we
surface themes on how software intersects with visual art practice: how
artists were motivated to build software as a mechanism for intellectual and
community engagement, how existing software representations worked with or
against artists’ complex workflows, how artists valued efficiency but resisted
forms of software automation, how artists used software beyond making art to
organize, motivate, and reflect, and how interaction with technical communities
impacted artists’ aesthetic choices. Second, our findings suggest
opportunities for how efforts in end-user programming can map onto creativity
support tools for visual artists. Current forms of software automation
misalign with how artists work, while forms of higher level abstraction hinder
their ability to manipulate data representations. We argue that artists, who
innovate in adapting and writing software to fit their idiosyncratic working
styles, can guide technical implementations of domain-specific program
representations and that their experiences in interdisciplinary production can
aid inclusive community-building around computational tools.
## 2\. Background
Our research contributes to HCI efforts to inform technical systems through
the study of creative communities. Because we studied how visual artists
develop software, our work builds on and informs end-user programming
research. To guide our analysis, we also examined artist-built programming
tools and communities, and research-driven creativity support tools.
### 2.1. Informing HCI by Studying Creative Practice
Understanding creative practice requires different forms of inquiry beyond
controlled study (Shneiderman, 2007). By investigating art and craft practice,
researchers have demonstrated alternative strategies for established domains
of HCI. This includes studying ceramics to inform interaction design (Bardzell
et al., 2012), furniture production to inform CAD and digital fabrication
research (Cheatle and Jackson, 2015), and house construction to inform design
for living materials (Dew and Rosner, 2018). Researchers have also used
collaborative models working with artists or craftspeople to engage in joint
production (Jacobs and Zoran, 2015) or co-author HCI publications (Devendorf
et al., 2020; Fiebrink and Sonami, [n.d.]). These investigations challenge
notions that artists or craftspeople have limited technical proficiency,
demonstrate the technical sophistication of their work, and describe concrete
ways in which they can inform technical systems. Inspired by prior efforts, we
focus on how software use and development by visual artists can inform end-
user programming and systems engineering research.
Studies of art practices have also examined the role of manual production in
creative workflows. HCI researchers have theorized that ideas emerge from
interacting with materials (Ingold, 2010; Schon, [n.d.]), tangible making
builds mental models (Papert, 1980; Tversky, 2019), and physical manipulation
facilitates concrete cognitive tasks (Gross, 2009; Klemmer et al., 2006).
Manipulating physical media helps artists develop manual skill and knowledge
(Needleman, 1979), react and plan in step (Berger, 2014; Do and Gross, 2007),
and reason about building tools (McCullough, 1996). Manual tools also afford
expressiveness, by preserving gesture (Mumford, 1952), and speed, which
supports open-ended exploration (Berger, 2014). In contrast to prior work on
manual production, we investigate how efficiency and automation in software
align or conflict with artists’ manual processes and aesthetic preferences.
### 2.2. Creative Coding Systems: Programming Tools & Communities for Visual
Art
The expressive power of programming has led visual artists to develop and
disseminate their own programming tools. Lieberman, Watson, and Castro
developed OpenFrameworks, a textual programming toolkit, to create interactive
audiovisual artwork (Lieberman, 2014). The textual languages Processing and
p5.js (Reas and Fry, 2016; McCarthy and Turner, [n.d.]a) emerged from
exploring how to teach programming through the lens of art (Reas and Fry,
2016). Node-based visual programming frameworks like Max and vvvv apply ideas
from signal processing for computational music towards producing computational
artwork (Cycling74, 2017; vvvv group, 2017). Researchers in music technology
and live-coding have developed domain-specific representations and tools
(Eaglestone et al., 2001; Barbosa et al., 2018; Malloch et al., 2019). The
goals of these domains differ from those of visual art; for instance, latency
and scrubbing metaphors are less relevant. Research on achieving high ceilings
in computational music through design metaphors of control intimacy for
musical instruments (Wessel and Wright, 2001) also differs from ours, despite
a shared focus on expert practitioners. Our focus is to study the ways visual
artists build software as a means to inform creativity support systems
research.
Some artists who create software tools also engage in community building. From
the OpenFrameworks design philosophy that prioritizes a “do it with others”
approach to making art (Lieberman, 2014), to the Processing community that
builds and maintains the platform’s many extensions (Reas and Fry, 2016), to
the p5.js community statement that recognizes diverse contributions from new
programmers (McCarthy and Turner, [n.d.]b), all of these frameworks rely on
collaborative development from their communities. The School for Poetic
Computation (SFPC), a school run by artists for artists who often build
computational tools, is another example of an artist-led creative coding
community that extends software use (Jacobs, 2018). Artist-developed software
tools align with art practice and are shaped by community engagement. Our work
provides greater detail on this process by examining how artists move from
creating artifacts to authoring software and how artists’ software use is
shaped by interactions with technical communities.
### 2.3. End-User Programming in Art and Design
Research in end-user programming (EUP) supports non-professional programmers
as they develop software to further their work (Lieberman et al., 2006) and
has focused largely on interaction designers. Research has shown that visual
designers seek programming tools that directly integrate with visual drawing
tools (Myers et al., 2008) and use high-level tools mapped to specific tasks
or glued with general purpose languages rather than learn new programming
frameworks (Brandt et al., 2008). Systems like Juxtapose (Hartmann et al.,
2008) and Interstate (Oney et al., 2014) improve programming for interaction
designers through better version management and visualizations. Re-envisioning
software as information substrates (Beaudouin-Lafon, 2017) that integrate data
and application functionality supports greater software malleability and more
varied forms of collaboration in web (Klokmose et al., 2015) and video editing
(Klokmose et al., 2019).
While there has been extensive EUP research targeting designers, less research
examines EUP for visual art. Researchers have developed a variety of graphic
art tools that enable programming through direct manipulation. Some systems
support two pane interfaces that place visual output side-by-side with code
(Chugh et al., 2016; McDirmid, 2016). Recent work demonstrated that allowing
users to directly manipulate intermediate execution values, in addition to
output, minimized textual editing (Hempel et al., 2019). Other work, like
Dynamic Brushes, aims to support expressiveness through a direct manipulation
environment coupled with a programming language (Jacobs et al., 2018). Results
from a study of debugging tools developed for Dynamic Brushes suggested that
artists inspect program functionality while making artwork (Li et al., 2020).
Our research is aimed at informing future efforts in EUP for visual art by
investigating how artists approach software development and work with software
representations. We provide insights into the ways that visual artists’
objectives differ from other end-user programmers and highlight opportunities
in building domain-specific programming tools for visual art.
### 2.4. Creativity Support Tools for Visual Art
Creativity support tools researchers have extensively studied how software can
support visual art workflows. While Shneiderman outlined several opportunities
for creativity support including exploratory discovery and record keeping
(Shneiderman, 2002), many HCI systems for visual art emphasize producing
artifacts. Systems such as large-scale generative design visualizations
(Matejka et al., 2018; Zaman et al., 2015), cross-modal generative sketching
for 3D design (Kazi et al., 2017), or text-based icon design (Zhao et al.,
2020) aid practitioners in exploring ideas and selecting artifacts.
Researchers have also explored supporting specific affordances in tools such
as speed-aware line smoothing algorithms (Thiel et al., 2011), manipulations
of negative space to edit vector graphics (Bernstein and Li, 2015), or
selective undo in digital painting (Myers et al., 2015). Another category of
systems examine ways to digitally emulate physical forms of production (Barnes
et al., 2008; Kazi et al., 2011; Leung and Lara, 2015).
A large body of creativity support research focuses on broadening
participation. Shneiderman, as well as Silverman and Resnick (Resnick and
Silverman, 2005), advocated for creative systems with “low-floors” that reduce
the barriers to entry and “wide walls” that support a diverse range of
outcomes. Researchers have examined reducing barriers to making computational
art through direct manipulation interfaces for creating procedural constraints
(Jacobs et al., 2017; Kazi et al., 2014). Other systems guide novices in tasks
like photographic alignment (E et al., 2020) and narrative video composition
(Kim et al., 2015). Machine learning (ML) based systems including applications
of neural style transfer (NST) (Gatys et al., 2015; Champandard, 2016; Iizuka
et al., 2016), user-in-the-loop tools (Runway AI, 2020), or support for
specific automated tasks like auto-complete for hand-drawn animation (Xing et
al., 2015), sketch simplification (Simo-Serra et al., 2018), and layout
generation (Batra et al., 2019), are increasingly used to facilitate easy
entry to visual art production through high-level automation. ML-based tools
have raised new questions about the relationships between artists and
software. Semmo et al. challenged the use of NST, arguing that it must support
interactivity and customization for use in creative production (Semmo et al.,
2017). Hertzmann critiqued the notion that ML-based automation creates AI-
generated art, arguing that computational systems are not social agents and
therefore cannot make art (Hertzmann, 2018). We seek to understand and
critique high-level forms of software automation for visual art by examining
the ways artists use or reject these systems in practice.
## 3\. Methods
This work is structured around a thematic analysis of interviews with
professional visual artists. These interviews were motivated and informed by
the authors’ personal experiences working between systems engineering and fine
art.
### 3.1. Author Backgrounds
To understand the perspectives that shaped our work, we provide background on
the research team’s expertise and focus. The authors represent a spectrum of
art experience: Jingyi maintains a hobbyist painting and illustration
practice, Sonia studied art history, and Jennifer has formal art training and
worked as a professional artist. Presently, Jingyi and Sonia are graduate
students in computer science and Jennifer is an HCI professor in an
interdisciplinary art and engineering department. As we transitioned from
creating artwork to researching and building software tools, we had the
opportunity to test our tools with practicing artists. Our discussions about
software use, programming, and creative production with artists indicated
differences in how practicing artists and professional software developers
viewed the opportunities of software.
Name | Years Active | Description | Role
---|---|---|---
Molly Morin | 10+ | Digital fabrication artist | Studio artist, fine arts professor
Lynne Yun | 15+ | Letterform designer | Designer at type firm
Eran Hilleli | 10+ | Animator | Animator, software developer
Miwa Matreyek | 10+ | Animator & dancer | Independent artist
Emily Gobeille | 15+ | Interaction designer & illustrator | Studio founder
Shantell Martin | 15+ | Large format & visual artist | Independent artist
Michael Salter | 25+ | Studio artist | Studio artist, fine arts professor
Chris Coleman | 15+ | Emergent media practitioner | Studio artist, fine arts professor
Fish McGill | 8 | Pen & ink illustrator | Studio artist, fine arts professor
Ben Tritt | 20+ | Painter | Studio founder
Nina Wishnok | 10+ | Printmaker | Studio founder
Kim Smith | 10+ | Painter & illustrator | Studio artist, art education company founder
Mackenzie Schubert | 10+ | Illustrator | Studio artist, art technology company founder
Table 1. Artist demographics.
### 3.2. Interview Methodology and Participants
Building from this preliminary observation, we conducted a formal set of semi-
structured interviews with 20 professional visual artists over a period of
four years. We recruited subjects through our established networks drawn from
the artist residencies, exhibitions, and educational networks that Jennifer
participated in. We aimed to interview artists who worked across a diverse set
of materials (e.g., software, paints, code), processes (e.g., coding, manual
drawing, performance), and domains (e.g., illustration, animation, interactive
art). Figure 1 shows sample artworks from each artist and Table 1 shows basic
demographic information, with full artist names released with consent. While
these interviews were interspersed with and helped guide research in building
software tools, in this work, we foreground the insights from these
interviews, rather than presenting a limited subset to motivate a specific
system.
For this paper, we included data from 13 out of the 20 interviews. We omitted
interviews from artists who did not work with software or visual art as well
as those in which the artist focused on their conceptual stances over their
process. Our interviews were primarily in person, with five conducted through
video conferencing software, and on average lasted 1.5 hours. Our interview
objectives were to understand what artists perceive to be the primary
opportunities and limitations of digital, manual, and physical media; how
different media shaped artists’ learning, process, and outcomes; and what
factors encouraged or prevented artists from engaging in software development
in their work. Prior to each interview, we reviewed each artist’s work to
direct process-based questions towards specific pieces.
### 3.3. Data Collection and Analysis
We audio recorded and transcribed each interview. For analysis, we conducted a
reflexive thematic analysis (Braun and Clarke, 2006, 2019), focusing on an
inductive approach open to latent themes. Each author reviewed each
transcript, and, following a discussion of initial patterns, each author coded
a subset of transcripts to initially identify as many interesting data
extracts as possible. The research team refined the codes and conceptualized
them into preliminary themes through weekly meetings and discussions. After
all authors collaboratively drafted a written description of each theme, Sonia
and Jennifer reviewed them with respect to the coded transcripts to confirm
they were representative of the original codes.
### 3.4. Limitations
Our work focuses on a deep qualitative examination of 13 artists. This
approach was necessary to gain insight into the specific factors that shaped
the workflows of our interviewees. Artists represented here had a range of
experiences with software; future research engaging exclusively with artists
who build software will likely uncover additional details such as a
quantitative breakdown of time spent across different software development
tasks. We chose our methodology based on our personal experiences
transitioning between art and software development and research, which we
disclose in our author background statement to contextualize our analysis and
discussion. Any interview risks social desirability bias. Because we had
personal connections with our interviewees, trust contributed to their comfort
in giving honest answers, increasing the reliability of their responses
(Fujii, 2017).
## 4\. Intersections of Software and Visual Art
We conceptualized themes across six dimensions presented below. Revisiting our
research questions, we describe how software constraints and representations,
use cases beyond producing art, and cultural associations of computational
aesthetics impacted how artists used software tools made by others. Artists
were motivated to develop their own software to create new functionality in
their works, grow intellectually, and gain technical legitimacy in their
communities. Furthermore, their idiosyncratic workflows spanning digital and
physical mediums were often misaligned with both the constraints and forms of
automation provided by existing software.
### 4.1. Artists’ Motivations for Software Development
Figure 1. Artwork created by interviewed artists: A) Interactive installation
by Emily Gobeille. B) Oil painting by Ben Tritt. C) New media sculpture by
Chris Coleman. D) Pencil illustration by Nina Wishnok. E) Line drawing mural
by Shantell Martin. F) Typeface by Lynne Yun. G) Poster by Fish McGill. H)
Comic page by Mackenzie Schubert. I) Digital illustration by Michael Salter in
collaboration with Chris Coleman. J) Vinyl cut sculptures by Molly Morin. K)
Synthesizer controlled animated character by Eran Hilleli. L) Painting by Kim
Smith. M) 3D printed vector graphic by Michael Salter. N) Shadow installation
by Miwa Matreyek.
14 pieces of artwork arranged in a grid. A: People interacting with a wall
with a projected green background and plant-like structures. B: A textured
monochromatic brown oil painting of a human figure. C: A digital sculpture of
a person wearing a hat with web-like textures. D: An abstract pencil line
drawing on light paper, with many geometric forms and an abstract house. E: An
artist standing on a lift drawing a large line drawing mural on a wall. The
mural has a white background and is of people’s faces. F: A poster
demonstrating a font called “Trade Gothic Display Family” in all uppercase,
sans-serif, bold lettering. G: A poster advertising Boston National Portfolio
Day, using primarily purple, red, and blue, with many stylized people walking
into a building. H: A three panel digital comic book page done entirely in
shades of blue on a yellow background. A man is outside of a cluttered house
pulling up something from a cable. I: A digital piece that morphs hand drawn
illustrations with procedurally generated forms. The top half of the piece is
various yellow lines with an indistinguishable vector graphic and the bottom
half is white with some flowers. J: Vinyl cut sculptures, thin white floral
shapes, hanging on the wall with shadows accentuating their form. K: On the
top, a 3D modeled man with a beard, blue skin, and a yellow nose. On the
bottom, the artist interacting with a synthesizer to control the man. L: An
abstract painting, mainly greys, blues, and oranges, on a large canvas that
has been deliberately crumpled and reshaped. M: A black 3D printed gun, except
the barrel of the gun loops in a circle to the mouth of the gun. N: A wall
showing a projection and the shadow of a human. The background is teal and
there is a black and white anemone form obscuring part of the human shadow.
Out of the 13 artists from our interviews, eight either were software
developers or were in the process of learning software development. We
distinguish between software development and programming in that programming
involves writing code, while software development also encompasses testing,
maintaining, and sharing software (Ko et al., 2011). Our software developers
described a variety of motivating factors. Initially, artists developed
software to add interactivity to their artwork. For example, Mackenzie learned
Unity development to facilitate dynamic 3D transitions between panels in an
interactive digital comic, and Emily learned C++ and Macromedia Director’s
Lingo programming language because it enabled her to create interactive
animations. When adding software as a component of their output, artists
valued robustness. Artists also developed software to automate tasks. While
using software as a tool for art-making, artists placed a higher emphasis on
reusability. Molly learned to procedurally generate vector graphics to reduce
manual labor when repeating forms in Adobe Illustrator, and Lynne learned
Python to reduce the effort required to test different combinations of type
when creating new fonts. These objectives align with the established framing
of end-user programmers as individuals who write code in service of another
practice.
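The task-specific automation Lynne described can be illustrated with a short Python sketch that enumerates glyph pairs for a spacing or kerning proof. The glyph sets, function names, and layout here are hypothetical illustrations of this kind of workflow, not details of her actual scripts.

```python
from itertools import combinations


def proof_pairs(glyphs):
    """Enumerate every unordered pair of glyphs for a spacing/kerning proof."""
    return ["".join(pair) for pair in combinations(glyphs, 2)]


def proof_sheet(glyphs, per_line=6):
    """Lay the pairs out as lines of text that could be pasted into a type proof."""
    pairs = proof_pairs(glyphs)
    return [" ".join(pairs[i:i + per_line]) for i in range(0, len(pairs), per_line)]
```

For example, `proof_pairs("AVT")` yields `["AV", "AT", "VT"]`; testing every pairing by hand, as Lynne did before learning Python, grows quadratically with the number of glyphs.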
In addition to functional applications, artists developed software because of
the opportunities for intellectual and creative growth. For instance, Ben
described an interest in diving into programming in order to “look under the
hood” of software. Redesigning or building new software systems enabled
artists to identify constraints in their tools and to imagine alternatives.
Lynne’s initial experience with Python acted as the catalyst for her to enroll
in a creative programming course. She described how learning to use a C++
based framework to make her own graphical user interfaces (GUIs) enabled her
to recognize third party software constraints and envision other options:
> If I could make my own GUI for things, maybe I could be using things in a
> different manner. This is a recent thought…I don’t think I ever realized how
> much it was impacting my work.
The notion that authoring software could expand one’s awareness of creative
possibilities was remarkably consistent among the artists we spoke with,
though artists varied how they used this idea in practice. Emily, Chris, and
Eran all developed their own software interfaces that exposed specific
parameters to explore and fine-tune visual properties of their work. Molly
described the intellectual satisfaction she derived from translating her
manual process of generating vector geometry for fabrication into an algorithmic
description, enjoying solving complex geometric problems while creating a
reusable tool that reflected her manual practice. Similarly, Eran created
experimental software tools primarily to investigate new concepts, and he was
less concerned if the resulting tool would lead to a finished piece. Artists
instead valued speed in designing software sketches to quickly test ideas.
Finally, four artists described being motivated to develop software to
influence future forms of software design, different from what they currently
observed. Eran and Chris supported students and newcomers to animation and
embedded programming, respectively, by designing tools that addressed
obstacles they experienced in their own work. Lynne described how developing
software would allow her to “have a seat at the table” around media software
production. Likewise, Chris recognized that, as an independent artist and
professor, he couldn’t compete with the speed and resources of professional
software companies, but he could release different kinds of tools that
influenced the direction those companies might take, saying, “[a]ll I can do
is shape the conversation for the professional tools that get made
afterwards.” Kim went a step further, describing her desire to improve her
software development skills as a means to bypass negative experiences and
communication breakdowns she had experienced when working directly with
professional developers:
> It’s not always clear to the developers that I’ve worked with why things are
> important from a designer’s point of view …I think if I could really get
> that skill down and design as well …the end results would be better.
These experiences demonstrate how learning and participating in software
development enabled multiple forms of power in visual art production. First,
artists developed software to create new functionality in artworks. Second,
artists developed software to grow intellectually and they built their own
interfaces to explore or refine work. Finally, by demonstrating knowledge in
software development, artists gained technical legitimacy and could engage in
dialogue with professional software developers or circumvent them altogether.
### 4.2. Selecting Software Constraints and Representations
Our interviews revealed two ways software made by others impacted artists’
processes and outputs: they negotiated different forms of software
constraints, and they carefully selected specific graphic representations.
Artists viewed software constraints as constructive when they could define the
constraints’ parameters. Sometimes, this involved choosing to not use a tool’s
features. Shantell, Eran, Fish, Ben, and Mackenzie all described points when
they constrained themselves from using software-based undo. This constraint
replicated the quality of physical ink, forcing them to work with their
mistakes, or prevented editing from breaking their flow of drawing. Artists also
imposed constraints on their practices through their formal knowledge of
design and composition. Michael described how, in Illustrator, he manually
laid out his compositions to follow grid structures but broke those structures
at arbitrary points—a process that would have been more difficult if
Illustrator enforced the grid constraint. In other instances, artists
developed software to enact constraints. Emily, Eran, and Fish authored
software tools that restricted a user’s ability to erase, define geometry, or
select colors. Fish described programming a drawing tool that automatically
faded past strokes over time:
> I worked in Processing on creating drawing tools that would fade over time…I
> was storing screenshots of every stroke, so then I could watch how someone’s
> drawing came together on a loop…and that came out of experiences, just
> looking for ways to get other people that are afraid of drawing, just to
> jump in and try something. So, it was a combination of creating the software
> that would capture each line and mark and letting it fade, so they can get a
> sense of depth as they are working.
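The mechanism Fish described, storing each stroke and letting it fade, can be sketched as follows. The linear fade curve and the ten-second duration are our assumptions for illustration; his Processing tool's actual timing and rendering are not specified in the interview.

```python
import time


class FadingCanvas:
    """Stores each stroke with its creation time; opacity decays with age."""

    def __init__(self, fade_seconds=10.0):
        self.fade_seconds = fade_seconds
        self.strokes = []  # list of (timestamp, points)

    def add_stroke(self, points, now=None):
        """Record a stroke (a list of (x, y) points) at the current time."""
        self.strokes.append((now if now is not None else time.time(), points))

    def opacity(self, timestamp, now=None):
        """Linear fade from 1.0 down to 0.0 over fade_seconds."""
        now = now if now is not None else time.time()
        return max(0.0, 1.0 - (now - timestamp) / self.fade_seconds)

    def visible_strokes(self, now=None):
        """Return (points, opacity) for every stroke that has not fully faded."""
        now = now if now is not None else time.time()
        return [(pts, self.opacity(ts, now)) for ts, pts in self.strokes
                if self.opacity(ts, now) > 0.0]
```

A render loop would redraw `visible_strokes()` each frame, so past marks gradually recede while a screenshot log (as Fish described) preserves the full drawing history.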
While artists had positive experiences imposing their own constraints, they
struggled with the stylistic constraints imposed by feature-rich commercial
software tools. Despite their respect for and reliance on commercial tools
like Adobe After Effects and Unity, Miwa, Molly, Nina, Eran, and Chris
struggled with feeling like stylistic aspects of their work were, in Molly’s
words, “predetermined by the program.” In part, this reaction was tied to the
expectation for fine artists to create novel imagery. Miwa and Chris both
described their need to obscure the means of development when using After
Effects. The constraints of high-level software tools were also at odds with
artists’ desires to enact custom workflows. Artists avoided “defaults” and
“presets” for this reason. For example, Nina avoided Photoshop filters because
they were “straight out of the box” and incompatible with her personal
workflow, and Fish described feeling “stifled” by the aesthetic constraints of
Illustrator defaults until he learned how to author custom brushes.
Similar to constraints, artists embraced or rejected lower-level graphic
representations—e.g., 3D meshes, Bezier curves, bitmaps—based on the extent
to which a representation supported their workflow. Both Michael and Molly worked
primarily with vector graphics, despite being adept in other representations,
because vectors were best suited to the curvilinear geometry and “clean”
aesthetics of their work. Similarly, other artists deliberately rejected some
digital graphic representations or were frustrated with the inconsistencies
that emerged when they tried to blend two different representations. Emily
described an extreme dislike for the “smoothness” of 3D graphics, and
Mackenzie described how integrating work from Photoshop and Illustrator
created a stylistic “gap.” In most cases, these tensions with software
representations were not the result of nostalgia for physical media. Emily
wasn’t interested in recreating traditional techniques in a digital format,
stating her goal was to “push both traditional artistic practice and software
tools out of their individual comfort zones, to create something that is
unique and blurs the lines between the two.” Alternatively, Shantell, Michael,
and Mackenzie felt digital representations actually had comparable aesthetic
qualities to physical media.
Instead, the degree to which artists were comfortable working with a digital
representation was determined by how it supported their workflows. At a low
level of individual mark making, Mackenzie described how Bezier curves
afforded editing existing work (a process he described as “finicky”) whereas
bitmap brushes pushed him to “[sketch] one thing once and move through it,”
because the bitmap representation didn’t support the same level of editing
after drawing. At a higher level, Eran described how the timeline
representation in animation software required animators to painstakingly edit
individual frames. As an animation instructor, he observed students
transitioning between drawing the animation and adjusting the timeline to the
point of fatigue:
> The task the person has to do is finish his drawing, move the timeline,
> [and] change something. I see students that, every time they do that, their
> mind hurts…It’s a break in their flow.
These observations led Eran to develop animation tools with continuous, rather
than frame-based, timelines. He recognized that these different
representations lead to trade-offs in workflows—a continuous timeline would
afford speed and shift the focus to drawing, but a key-frame timeline enabled
low-level but laborious control.
Artists also considered aesthetics clearly shaped by a specific representation
to be an indicator of novice work. Michael and Molly described how, as
teachers, they worked with students to master software so that their own
drawing style was preserved, rather than obscured by the qualities of vector
graphics. Mackenzie described how representing applications as standalone
packages restricted iteration and workflows across different tools:
> In some ways those programs are really behind kind of walls and are not very
> modular. The program is this just inside of this [indicating to software
> window] and it has many little tools that exist inside of that and some of
> those have like smaller tools that exist inside of that. So you’re kind of
> using like all of these tools in Photoshop and all of these tools in
> Illustrator…but when I think about the way I work…passing things back and
> forth and iterating in different ways and quickly. I feel like you’ve got
> these big walls between programs…I’m interested in…how those things can be
> broken out or built out separately.
Overall, the degree to which artists embraced software constraints and
representations corresponded to the degree that they could be adapted to an
established workflow. When faced with complex tools with powerful black-box
functionality or high-level representations, artists often tried to use these
tools for unintended purposes. When this was not feasible, they opted for
software tools with limited functionality or built their own.
### 4.3. Non-Linear Physical-Digital Workflows
Despite working across a wide range of visual art domains, each artist
described workflows that integrated digital and physical processes, working
non-linearly between digital and physical production using a diverse set of
tools and approaches. Foremost, artists heavily relied on the ability to
provide manual inputs to digital tools. Miwa, Ben, and Nina all mentioned they
liked the “organic” and “warm” quality of hand-drawn art—deviations and
irregularities in their artistic style that were built up over learning to
draw and physically engaging with their body. Artists also improvised physical
materials as digital inputs. Emily photographed and scanned objects to turn
into textures to “incorporate as many non-computer elements into the digital
artwork,” while Nina traced over copies of architectural plans as a starting
point for her prints.
Similarly, artists produced physical outputs with digital fabrication.
Shantell used a CNC mill to fabricate a previously ink-based drawing as
functional printed circuit board traces. Michael described how he would
arbitrarily decide to convert flat vector graphics to 3D physical objects
without advanced planning (Figure 1M):
> I can extrude this. I can laser cut it 15 times and laminate it. …There’s
> something to me, the formal experience of taking something that changes
> dimension, which is exciting. I’m gonna find something that I normally
> couldn’t have…Those opportunities past the computer.
Artists also used physical production as a means to think through problems,
either individually or with collaborators. For example, Emily worked out the
computational rules to define an interactive generative puzzle game while
working on a hand-sketched maze. Likewise, Molly shared how she would “solve
code problems” while felting, constantly “switching back and forth between
doing a little coding and doing a little felting or folding.” Emily and
Michael, two artists who used software tools developed by their collaborators,
shared that they would evaluate the tools by iteratively building physical
artifacts with them. Both sets of collaborators moved back and forth between
digital and physical production to create artwork together. For example, in
his collaboration with Chris, Michael would “manually rip” outputs enabled by
Chris’s tools and “start to draw with” them—he then sent these drawings and
feedback to Chris, who in turn would modify his code.
Moving between digital and physical spaces enabled artists to leverage the
affordances that emerged from using both spaces when producing work, such as
for painters Ben and Kim. Ben described working out ideas by alternating
between digital painting on a tablet, where he could use color pickers to
explore color choices instead of manually mixing paints, and physical
painting, where he could explore material considerations. He said the materiality of these
physical paints added an abstraction to the way a piece communicated an idea
beyond its literal representation, something that wasn’t available in digital
software. Likewise, to inform her physical paintings on canvas (Figure 1L),
Kim relied on “huge [digital] libraries of images and washes” to develop her
ideas. For drawings to “look markedly better and more human,” Fish encouraged
his students to cycle through physical and digital drawing:
> Why don’t you put a piece of tracing paper over the screen right now, and
> just physically draw it? And then let’s look at the drawing. And then let’s
> open up Illustrator, and then create a version of it again.
While appreciative of the benefits software tools provided, artists also
encountered challenges of scale when translating physical artifacts to digital
tools. Emily, who always began her process with physical sketches, described
digitizing, segmenting, and sharing her sketches as unnecessarily laborious
and repetitive. Artists were also unable to preserve the ways they manipulated
physical elements of an artwork to explore scale and composition. Both Miwa
and Emily noted the difficulty of using a screen to design animations that
would be projected at large scale (Figure 1 A & N). Likewise, because she
determined the scale of her paintings relative to her body, Kim was unable to
make the same judgments while working with software interfaces. Nina went
“back and forth” from working in software to printing out and “literally”
putting down work on the floor to look at it when making judgments about
layout and composition. Both Lynne and Nina elaborated how “proportions and
visual relationships” needed to be assessed physically because balance and
weight were perceived differently on a screen.
In summary, artists flexibly and non-linearly moved back and forth between
physical and digital spaces when creating work. Artists relied on physical
manipulation as a means of refining the artwork they were producing and as a form of
reflection or problem solving. Finally, they encountered barriers when they
were unable to use physical manipulation or embodied notions of proportion and
scale in software tools.
### 4.4. Valuing Efficiency and Resisting Software Automation
We observed that artists cared deeply about efficiency in their practice; some
even developed their own forms of automation. While quickly working manually
was important for aesthetic outcomes, existing forms of software-enabled
automation imposed undesirable aesthetics that artists had to go back and
manually refine.
Artists valued speed and efficiency particularly in contexts of idea
exploration, iteration, and turn-around time when working with collaborators.
For example, Emily wrote software to procedurally generate and explore many
different compositions, as this was faster and less effort than manually
creating each one. Eran built tools for new ways of artistic expression with
the goal of “getting to the quickest way [he] could test” them. In his
collaboration with Michael, Chris described the importance of speed:
> I love the fact that I have just enough proficiency with Processing that in
> a day we could produce 20 different interesting iterations and then have a
> longer dialogue about successes and failures and ways to change and ways to
> improve.
In forms of manual art production, working efficiently also resulted in
desired aesthetic outcomes. For example, for Shantell, Mackenzie, and Michael,
speed was synonymous with confidence in drawing and crucial to the aesthetic
development of their line. In contrast, we saw artists reject aspects of
automated efficiency that led to undesirable aesthetics, especially when they
already possessed the manual skills to do something that looked better than
what the software could produce. For example, because she disliked how the default
algorithm on the vinyl cutter produced intersections, Molly wrote her own to
outline Bézier curves with some thickness in order to vinyl cut her line
drawings (Figure 1J). Similarly, Lynne hesitated to ever arrange typography
along a path because she disliked how the automated result looked:
> In Illustrator…if you try to set text on a path in that circle, it looks
> really crappy, so I’ll never do it. But in an analog format, where I can cut
> and paste the letters or draw them to be there, it looks fine…Maybe it’s
> because of the program that it was almost taboo for me to put type on a
> shape, because it looked terrible in the interface.
In fact, artists often chose to create works by hand even when they recognized
code could have achieved a similar aesthetic outcome. For instance, Michael
preferred to execute a painting that had generative art aesthetics by hand
because, to him, drawing was more efficient than the overhead of programming a
similar result.
Many forms of software-enabled automation that artists relied on were
established features, such as undo, redo, layers, saving multiple versions of
a file, and digital editability. Artists described liking these forms of
automation since they remained “in the loop” and still had aesthetic control
over their pieces. Taken together, the experiences of artists using software
for automation were at odds with the notion that automation would remove
tedious manual labor. On the contrary, because artists lacked control over
nuanced outcomes in automated systems, they often spent time fighting to
achieve their desired aesthetics. Manually executing their pieces, on the
other hand, was both expressive and efficient.
### 4.5. Using Software Tools Beyond Producing Artifacts
Software tools served artists in aspects of their practice beyond working on a
visual artifact. In this section, we report on visual artists’ experiences of
using software tools across tasks of documenting, tracking, generating, and
sharing ideas, as well as reflecting on the process of drawing.
Artists described using digital software to collect, organize, and refer to
artifacts, including sketches from their processes, while making artwork. Ben
described storing and relying on digital recipes of how to mix precise colors
of paint. Fish would often have his sketches in an art board to the side while
working on a main piece, saying it was like having a “life raft” to have “some
kind of composition to play off of.” Miwa shared a similar process, using
Evernote to organize and reference inspiration she came across while working
and her own drawn notes. Mackenzie depended on both Evernote and rigorous
commenting in his code as a means to quickly resume personal projects after
working on client work. He described his C# code in Unity as being “half
comments, at least.”
Artists also identified points where their software tools for organization
fell flat. Emily, who frequently blended drawing and note taking, felt that,
compared to a paper notebook, using a computer was “less freeform.” Nina felt
that duplicating her Illustrator artboards so as not to lose old iterations while
managing version history “wasn’t streamlined” and that her ideas were “getting
muddled,” as the cumbersome versioning obscured her creative decision making.
To aid in reflecting on and analyzing their own processes, artists used forms
of digital software, such as video recording. In creating a projection-based
animation piece (Figure 1N), Miwa recorded footage of herself moving around
the space to analyze how her shadow affected the piece and refine its
composition. Likewise, Fish recorded himself drawing with a webcam in order to
review and reflect upon his process “like a sports commentator.” Shantell also
described recording herself, though in her case to simulate the pressure of having
a live audience, which pushed her to draw. Beyond video, Shantell also cared
about analyzing the metrics of her artwork and speculated on using
computational tools to aid in understanding things she could not see:
> I drew [Figure 1E] at an average of five inches per second and I had someone
> work out the combined amount of line—it’s roughly 1668 feet long. Oh, and
> the average coverage is between 10% and 12%. So, now what can I do with that
> information? One thing I’m interested in as an artist is, can I break down
> all the analytics of my drawing? Can I break down my speeds, my angles, my
> distances?
A few artists built their own software tools to aid in reflection. The
Processing extension Fish created to observe how other artists built up their
drawings (as quoted in section 4.2) was originally for his own practice, but
he also found it valuable as a teaching tool for his students. Nina, on the
other hand, was not interested in using tools for analyzing her own workflows
because she felt like her lack of experience with coding prevented her from
conceptualizing how those tools would work and how she would apply them to her
own practice.
In summary, artists engaged with software not only to create visual artifacts
but also to organize and share materials in support of their pieces, as well
as to introspect on their own and others’ artwork. The fact that artists
prioritized using computational tools for reflection, analysis, organization,
and management showed how they leveraged computational affordances to assist
creative labor, as opposed to performing it. In our interviews, artists
discussed at length computational tools that helped them in their process of
making artwork, as opposed to tools that made the artwork itself.
### 4.6. Relationships between Aesthetics and Audiences
As previously discussed, software shaped the visual characteristics of
artists’ work. Artists’ decisions to obscure or embrace computational
aesthetics were impacted by social and cultural perceptions of technology.
Moreover, mismatches between their own values and those of established
technical communities initially led artists to hesitate in identifying as
technical creators.
Many artists felt artwork produced with generative algorithms trended towards
a specific aesthetic, referring to work that was “generative” or
“glitch”-based in style, and work that deliberately suggested a technical
sophistication by emphasizing “shiny,” “sexy,” or “hardcore” elements in its
construction. This idea is consistent with discussions in the computational
art community around a digital or generative art “vernacular” of established
and sometimes cliché aesthetics from a narrow set of algorithms (Watz, 2012).
Chris, Michael, Miwa, Nina and Molly described how their audiences’
expectations surrounding computationally-produced artwork determined the ways
in which they obscured or emphasized these aesthetics. In designing
computationally generated or interactive works, Chris, Nina, and Miwa all
tried to highlight the concepts in their pieces that were not about
technology. When artists chose to incorporate a recently developed
computational technique into their work, the incentive for originality and
novelty created a contest to quickly map out all possible variations or
unconventional applications. Citing the example of Google Deep Dream
(Mordvintsev et al., 2015), Chris described how:
> There’s this weird race to find the new edges of the new box every time an
> update…or a new platform is pushed out because you know all the easy stuff
> is going to be consumed into a more easy popular culture. Constantly finding
> new aesthetics or what are the aesthetics that everybody is sort of working
> in, but that you need to push just beyond.
Michael, Chris’s frequent collaborator, described his appreciation for Chris’s
ability to “finesse” the high-tech components of the work so that they were
“embedded, subdued, and poetic,” recognizing that investing in a high-tech
digital aesthetic required artists to continually adapt to rapidly changing
trends and avoid clichés. He personally chose to avoid this “baggage” in his
own practice.
When artists brought computational work into more traditional art communities,
they had to decide between obscuring the computational qualities of their work
or devoting significant effort to explain and contextualize its technical
properties to their audience. As Molly put it:
> If you’re making work that looks like sculpture, then who’s your audience?
> Is it an audience who understands sculpture, but not what an algorithm is?
> Because I get real tired of explaining what an algorithm is.
Similarly, several artists initially resisted using computation because they
did not share the values of existing technical communities, despite finding
later success. Kim described a “disconnect” between the decision making
processes of designers and developers. When Shantell worked in engineering
communities, she struggled to reconcile her desires for transparent and
aesthetically varied works with engineering norms that emphasized efficiency
through technological opacity and uniformity. Lynne delayed pursuing coding
because she was “burned by the tech culture” of a major Silicon Valley tech
company when she worked there as a designer. Likewise, Molly described how she
initially felt pushed to exhibit mastery in computer programming because of
the “power dynamics” that exist between programming and domains like knitting
or drawing:
> There’s a certain part of the population that’s going to think you’re way
> cooler if you can code that thing than if you can draw it out, which is
> crazy.
It is worthwhile to point out that the artists who experienced these conflicts
were all women. The challenges they experienced are consistent with larger
patterns of marginalization of certain groups—including women—in computer
science and engineering (Margolis and Fisher, 2002). Despite these conflicts,
Kim, Shantell, Lynne and Molly persisted—often flourished—in computational
production because they were able to find computational communities or
engineering collaborators with similar values who prioritized what they had to
say. Molly’s immersion in traditional art communities strengthened her belief
in the importance of manual and craft skills. She reached a point where she no
longer felt like she needed to “prove” she could code well for her “art to
matter.”
## 5\. Discussion
Here we discuss how artists currently engage with software in relationship to
the current state of end-user programming (EUP) and creativity support tool
(CST) research. From our analysis, we present critiques of existing
approaches, design implications in response to these approaches, and
strategies for new research opportunities in four categories. (1) Current
tools that seek to lower the barrier of entry for creating art may rely on
forms of automation and abstraction that hinder how artists traditionally
learn through manual engagement with materials. (2) Artists have complex and
non-linear workflows that span physical and digital media; research can work
towards unifying representations across both physical and digital objects. (3)
Artists are uniquely suited as technical collaborators in defining domain-
specific programming representations and their contributions can expand the
dimensions of what counts as systems research. (4) If systems engineering
researchers wish to engage with artists, they have to consider not only the
tools they build, but also the communities that surround them.
### 5.1. Automation Obscures Processes, Abstraction Obscures Data
Based on our interview findings, we challenge the notion that computational
automation reduces tedious manual labor. In section 4.4, we described how
forms of automation and abstraction that forgo manual control presented
barriers to artists becoming self-reliant and producing aesthetically
sophisticated outcomes. Artists instead relied on skilled manual execution and
custom software to be efficient while preserving manual style. We argue that
CST research can focus on not only making tasks faster or easier to
accomplish, but also helping artists develop self-sufficiency through
preserving manual control.
While simpler controls make tasks more accessible to novices, who are the
second most targeted user group in CST research (Frich et al., 2019), forms of
black-box automation can prevent artists from both using these tools in their
existing workflows and in flexibly extending them across multiple workflows.
For example, Molly had to write her own vinyl cutting algorithm from scratch
because she could not edit the existing one provided with the software. As
these forms of automation do not provide access to transparent algorithms,
artists cannot adapt them in unique ways to develop idiosyncratic approaches
to working. They instead may fall into aesthetics pre-determined by the tools,
as described in section 4.2.
Furthermore, higher-level abstractions sometimes prevented artists from
working at multiple granularities. For instance, Mackenzie deliberately
distinguished between the low-level representations of Bézier curves versus
bitmaps when starting digital work as Bézier curves better afforded editing.
In contrast, a higher-level representation, such as an automated effect or
filter, would restrict this kind of meaningful decision making. By depending
on computational scaffolds that do not allow them to manually manipulate data
representations, artists may not develop the skills to produce sophisticated
artifacts. For instance, tools that use generative adversarial networks (GANs)
such as ArtBreeder (Simon, 2019) or various projects focusing on style
transfer (Semmo et al., 2017) forego manual control as artists can only
specify input images; the engineers of such systems are the ones who determine
the visual aesthetics of the final artifact.
Inspired by the processes artists described in section 4.5, forms of
automation that do not remove manual control over processes can be applied to
areas like exploration (Hartmann et al., 2008), project management, and new
ways of “seeing” and reflecting upon artwork (Fraser et al., 2020). CST
research has also investigated data abstractions artists are familiar with,
such as better ways of selecting layers (Shimizu et al., 2019) and
undoing/redoing actions (Chen et al., 2016; Myers et al., 2015). By allowing
artists to have control over forms of automation and manipulate data
representations, CSTs can focus on not only helping artists accomplish tasks,
but also developing forms of self-reliance for unique outcomes.
### 5.2. Adapting Digital Tools to the Unique Workflows of Artists
In section 4.3, we described how artists have complex and non-linear workflows
that move across a variety of mediums, both physical and digital. We draw from
these workflows to advocate for programming systems that integrate manual
input and physical materials with digital output and computational control. We
also see design opportunities that take into account the aesthetic experience
of using digital tools that capture the user’s tacit knowledge.
In creating work across physical and digital mediums, artists particularly
highlighted the struggles of moving between separate tools and adapting their
pieces to fit the constraints of the medium. In these transitions, artists
experience what Winograd and Flores, in interpreting Heidegger, call
“breakdowns” (Winograd and Flores, 1986)—instances when artists engage with
low-level properties of objects because they fail to accommodate fluid and
invisible interactions. Our interviews showed examples of productive
breakdowns when working with physical materials, such as Michael exploring
many fabrication techniques and materials to transform his vector drawings,
which ultimately inspired reflections to shape the final artwork. However,
artists also experienced frustrating breakdowns that were simply a result of
separate applications and tools not being able to interface with each other,
such as Emily having to laboriously transfer her sketches across different
digital mediums.
We argue that one opportunity space for CST researchers—particularly in end-
user programming—is to design program representations that support non-
linearly moving between work spaces of physical and digital media, such that
the output of any single stage can be the input of any other stage. This is in
contrast to current projects, such as those in digital fabrication (Chen et
al., 2018; Savage et al., 2015; Yamaoka et al., 2019), that assume more linear
workflows: artists might start with a digital design tool, then use an
existing program to compile it into machine code, and then fabricate it. For
example, representations that integrate digital graphics, manual drawing
gesture data, and CNC toolpaths could enable artists to program custom
workflows across physical and digital forms of creation, and software
environments that unify interfaces for authoring graphics, modifying manual
input parameters, and programming CNC machine behavior could support rapid
digital-physical transitions. Projects that use interactive paper substrates
(Garcia et al., 2012; Tsandilas et al., 2015) support domain experts in
flexibly transitioning between the digital and physical. Devendorf and Rosner
state that “hybridity,” a melding of exactly two dichotomous categories,
narrows the scope of what designers work with and privileges some interactions
over others (Devendorf and Rosner, 2017)—we imagine designing data
representations so artists can smoothly transition between all forms of making
during any stage of their process.
Work has already investigated how to share workflows between different
fabrication machines through integrated environments (Peek and Gershenfeld,
2018) or domain-specific languages (Tran O’Leary and Peek, 2019).
Additionally, efforts in end-user programming like Enact (Leiva et al., 2019)
or Webstrates (Klokmose et al., 2015) have made progress on accommodating
workflows that span multiple application programs. Webstrates aims to unify
software representations at the level of the operating system such that
objects can be shared across different applications that traditionally have
their own internal representations. However, many forms of art production rely
on specific affordances of physical materials that cannot be digitally
replicated (Ingold, 2010). In extending this concept to support the work of
artists, we argue the “operating system” now has to include the physical
spaces they also inhabit: for instance, from clay to 3D models to CAD software
to G-Code to a 3D printer. Research efforts like Dynamicland (Victor, 2018)
have investigated these concepts by building a programming representation and
operating system to unify actions in software, and those over space and time
in the physical world.
When working with physical materials, artists also based their tool choices
not only on how they helped them execute visual artifacts but also on how they
integrated into their varied intentions while making art. The same tool could
be used by different artists for sketching, for refinement, or simply because
it brought both tactile and emotional pleasure. The focus artists had on their
feelings and senses when using tools suggests a space for systems researchers
to pay attention to the emotional and aesthetic experiences of their software,
in addition to targeting contributions that make accomplishing tasks faster or
easier.
Finally, how artists interacted with tools was shaped by individual and
embodied forms of tacit knowledge, like knowing how much pressure to apply to a
pen stroke while sketching versus lining. Capturing, formalizing, and
evaluating these kinds of experiences—as well as emotional and aesthetic
ones—is a challenge. Some CST researchers with backgrounds in art practice may
draw from their own experiences (Torres et al., 2019), but when researchers
lack familiarity with the practices they want to support, artists can be
powerful technical collaborators.
### 5.3. Visual Artists as Technical Collaborators
As detailed in section 4.2, artists have a deep domain knowledge of their
medium and appropriately choose to use or not use tools based on how the
constraints and representations of the tool fit into their workflows.
Devendorf et al. encountered similar knowledge in their residency with
weavers, and argue that craftspeople should be considered technical
collaborators with HCI researchers—while craftspeople’s knowledge may be
through a different medium than software engineering, that knowledge is still
compatible with researchers’ practices of design iteration and innovation
(Devendorf et al., 2020). We expand this notion of craftspeople as technical
collaborators to visual artists; specifically, we argue visual artists are
particularly well situated in defining domain-specific programming
representations that integrate manual expression with computational automation
and digital manipulation.
Artists who code understand the constraints of writing software and applying
code towards artifacts they have previously manually created, such as Lynne
writing Python scripts to speed up typographic labor or Chris deriving
insights about which tools to build immediately after he manually finished his
pieces. Because certain representations may be better aligned with painting,
rather than drawing, or felting, or may even misalign with working by hand, we
suggest that the domain expertise of artists, who are well versed in their
craft, can help define such representations. This is in line with ideas from
participatory and co-design (Muller and Kuhn, 1993) methodologies; artists can
make strong contributions in designing tools beyond the initial need-finding
and final evaluation stages. However, beyond participatory design methods
where artists help define high level features, we suggest that they also be
involved in lower-level engineering discussions about data representations and
implementation. Artists are specifically interested in forging new and
different paths because they open up new avenues for artistic exploration,
instead of basing their success on efficiency.
Research collaborations are a two-way street that should benefit all parties
in tangible ways. Before starting collaborations, researchers should consider
how HCI can also bring value to the types of achievements valued in an art
career. For instance, artists and researchers have different incentives in
disseminating tools they make—artists value releasing tools to their
communities, while less than a quarter of CSTs are publicly available (Frich
et al., 2019), potentially due to the high value placed on the academic paper.
We argue that HCI should broaden the scope of what is considered a systems
engineering contribution to include the forms of output and inquiry artists
value—such as making polished artifacts with systems over time, releasing
useful technologies to support creative communities as opposed to constantly
prioritizing novel systems, and investigating the art practices of individuals
versus reporting on generalizable trends among large groups of artists.
### 5.4. Building Tool Communities
In section 4.1, we described how artists were motivated to learn software
development for various forms of power: creating new functionality, achieving
intellectual growth, and gaining technical legitimacy. In section 4.6, we described
the experiences of four artists who overcame cultural barriers, differences in
values, and feelings of exclusion to incorporate computation in their
practices. Throughout this paper, we have argued for the value artists bring
to our systems engineering research community. At the same time, these
findings, as well as past research (Roque, 2016), show that the ways artists
incorporate software and programming in their work is heavily influenced by
the perceptions and values of their communities. If systems engineering
researchers, as another technical community, seek to engage artists more
broadly in developing tools, we also need to consider the communities
surrounding the use of such tools.
For inspiration, we can look to two artist-led organizations that have been
successful in teaching programming to artists: p5.js, a JavaScript port of the
Processing creative coding language, and the School for Poetic Computation
(SFPC), an artist-run school with the motto “more poetry, less demo.” Both
these organizations have been successful in devoting many resources towards
building communities where artists feel a sense of belonging, in addition to
providing the tools to make novel work. However, these groups—as well as the
collaborations we reported on—represent a very narrow space within the larger
software community. To support more artists, we need to find ways to build
other communities elsewhere, as well as broaden our existing ones. The power
of art comes from existing in intimate conversation with other humans, so
leaving out such considerations of communities stifles the impact system
builders can have on supporting artists working with computers.
## 6\. Conclusion
In this paper, we examined the intersections of software and visual art to
inform end-user programming and creativity support tools research. Our
recommendations for software for creative production come from our thematic
analysis of 13 artist interviews. By validating artists as core technical
contributors in systems research and highlighting the need for inclusive
community building around computational tools, we recognize and legitimize the
value of distinct approaches to software use. Because systems that use high-
level automation to make artwork conflict with artists’ desires for fine-
grained manipulations of their artifacts and tools, we suggest using
automation to instead support analysis, organization, and reflection. We argue
that diverse, non-linear physical-digital workflows can inform building
flexible data representations for artwork and digital fabrication. Finally, we
believe artists can contribute to shaping these representations because of
their distinct approaches to how they learn, use, and build software that are
rooted in humanistic inquiry and art practice. Based on these findings, our
vision is that software for artists will be written in close collaboration
with artists. Through these collaborations, we see great potential to extend
software use in creative practice and to grow inclusive communities around
software development.
###### Acknowledgements.
The authors extend a huge gratitude to all the artists—Chris Coleman, Emily
Gobeille, Eran Hilleli, Shantell Martin, Miwa Matreyek, Fish McGill, Molly
Morin, Michael Salter, Mackenzie Schubert, Kim Smith, Ben Tritt, Nina Wishnok,
and Lynne Yun—without whom this work would not be possible. The authors would
also like to thank colleagues Eric Rawn, Will Crichton, Evan Strasnick,
Daniela Rosner, Laura Devendorf, Kristin Dew, Enric Boix-Adserà, and Kai
Thaler for their valuable insights and conversations about this work.
## References
* Barbosa et al. (2018) J. Barbosa, M. M. Wanderley, and S. Huot. 2018\. ZenStates: Easy-to-Understand Yet Expressive Specifications for Creative Interactive Environments. In _2018 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)_. 167–175. https://doi.org/10.1109/VLHCC.2018.8506491
* Bardzell et al. (2012) Shaowen Bardzell, Daniela K. Rosner, and Jeffrey Bardzell. 2012. Crafting quality in design: integrity, creativity, and public sensibility. In _Proceedings of the Designing Interactive Systems Conference on - DIS ’12_. ACM Press, 11. https://doi.org/10.1145/2317956.2317959
* Barnes et al. (2008) Connelly Barnes, David E. Jacobs, Jason Sanders, Dan B Goldman, Szymon Rusinkiewicz, Adam Finkelstein, and Maneesh Agrawala. 2008. Video Puppetry: A Performative Interface for Cutout Animation. _ACM Trans. Graph._ 27, 5, Article 124 (Dec. 2008), 9 pages. https://doi.org/10.1145/1409060.1409077
* Batra et al. (2019) Vineet Batra, Ankit Phogat, and Tarun Beri. 2019\. Massively Parallel Layout Generation in Real Time. In _ACM SIGGRAPH 2019 Posters_ (Los Angeles, California) _(SIGGRAPH ’19)_. Association for Computing Machinery, New York, NY, USA, Article 3, 2 pages. https://doi.org/10.1145/3306214.3338596
* Beaudouin-Lafon (2017) Michel Beaudouin-Lafon. 2017\. Towards Unified Principles of Interaction. In _Proceedings of the 12th Biannual Conference on Italian SIGCHI Chapter_ _(CHItaly ’17)_. Association for Computing Machinery, 1–2. https://doi.org/10.1145/3125571.3125602
* Berger (2014) John Berger. 2014\. _Selected Essays of John Berger_. Bloomsbury Publishing Plc.
* Bernstein and Li (2015) Gilbert Louis Bernstein and Wilmot Li. 2015. Lillicon: Using Transient Widgets to Create Scale Variations of Icons. _ACM Trans. Graph._ 34, 4, Article 144 (July 2015), 11 pages. https://doi.org/10.1145/2766980
* Brandt et al. (2008) Joel Brandt, Philip J. Guo, Joel Lewenstein, and Scott R. Klemmer. 2008. Opportunistic programming: how rapid ideation and prototyping occur in practice. In _Proceedings of the 4th international workshop on End-user software engineering - WEUSE ’08_. ACM Press, 1–5. https://doi.org/10.1145/1370847.1370848
* Braun and Clarke (2006) Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. _Qualitative Research in Psychology_ 3, 2 (2006), 77–101. https://doi.org/10.1191/1478088706qp063oa arXiv:https://www.tandfonline.com/doi/pdf/10.1191/1478088706qp063oa
* Braun and Clarke (2019) Virginia Braun and Victoria Clarke. 2019. Reflecting on reflexive thematic analysis. _Qualitative Research in Sport, Exercise and Health_ 11, 4 (2019), 589–597. https://doi.org/10.1080/2159676X.2019.1628806 arXiv:https://doi.org/10.1080/2159676X.2019.1628806
* Champandard (2016) Alex J. Champandard. 2016\. Semantic Style Transfer and Turning Two-Bit Doodles into Fine Artworks. _CoRR_ abs/1603.01768 (2016). arXiv:1603.01768 http://arxiv.org/abs/1603.01768
* Cheatle and Jackson (2015) Amy Cheatle and Steven J. Jackson. 2015. Digital Entanglements: Craft, Computation and Collaboration in Fine Art Furniture Production. In _Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing_ _(CSCW ’15)_. Association for Computing Machinery, 958–968. https://doi.org/10.1145/2675133.2675291
* Chen et al. (2016) Hsiang-Ting Chen, Li-Yi Wei, Björn Hartmann, and Maneesh Agrawala. 2016. Data-Driven Adaptive History for Image Editing. In _Proceedings of the 20th ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games_ (Redmond, Washington) _(I3D ’16)_. Association for Computing Machinery, New York, NY, USA, 103–111. https://doi.org/10.1145/2856400.2856417
* Chen et al. (2018) Xiang ’Anthony’ Chen, Ye Tao, Guanyun Wang, Runchang Kang, Tovi Grossman, Stelian Coros, and Scott E. Hudson. 2018. Forte: User-Driven Generative Design. In _Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems_ (Montreal QC, Canada) _(CHI ’18)_. Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3173574.3174070
* Chugh et al. (2016) Ravi Chugh, Brian Hempel, Mitchell Spradlin, and Jacob Albers. 2016\. Programmatic and Direct Manipulation, Together at Last. _SIGPLAN Not._ 51, 6 (June 2016), 341–354. https://doi.org/10.1145/2980983.2908103
* Cycling74 (2017) Cycling74. 2017\. _Max_. https://cycling74.com/products/max/.
* Devendorf et al. (2020) Laura Devendorf, Katya Arquilla, Sandra Wirtanen, Allison Anderson, and Steven Frost. 2020\. Craftspeople as Technical Collaborators: Lessons Learned through an Experimental Weaving Residency. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_ _(CHI ’20)_. Association for Computing Machinery, 1–13. https://doi.org/10.1145/3313831.3376820
* Devendorf and Rosner (2017) Laura Devendorf and Daniela K. Rosner. 2017. Beyond Hybrids: Metaphors and Margins in Design. In _Proceedings of the 2017 Conference on Designing Interactive Systems - DIS ’17_. ACM Press, 995–1000. https://doi.org/10.1145/3064663.3064705
* Devendorf and Ryokai (2015) Laura Devendorf and Kimiko Ryokai. 2015. Being the Machine: Reconfiguring Agency and Control in Hybrid Fabrication. In _Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems_ (Seoul, Republic of Korea) _(CHI ’15)_. Association for Computing Machinery, New York, NY, USA, 2477–2486. https://doi.org/10.1145/2702123.2702547
* Dew and Rosner (2018) Kristin N. Dew and Daniela K. Rosner. 2018. Lessons from the Woodshop: Cultivating Design with Living Materials. In _Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ’18_. ACM Press, 1–12. https://doi.org/10.1145/3173574.3174159
* Do and Gross (2007) Ellen Yi-Luen Do and Mark D. Gross. 2007. Environments for Creativity: A Lab for Making Things. In _Proceedings of the 6th ACM SIGCHI Conference on Creativity & Cognition_ (Washington, DC, USA) _(C&C ’07)_. Association for Computing Machinery, New York, NY, USA, 27–36. https://doi.org/10.1145/1254960.1254965
* E et al. (2020) Jane L. E, Ohad Fried, Jingwan Lu, Jianming Zhang, Radomír Mech, Jose Echevarria, Pat Hanrahan, and James A. Landay. 2020\. Adaptive Photographic Composition Guidance. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_ (Honolulu, HI, USA) _(CHI ’20)_. Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3313831.3376635
* Eaglestone et al. (2001) Barry Eaglestone, Nigel Ford, Ralf Nuhn, Adrian Moore, and Guy J Brown. 2001. Composition systems requirements for creativity: what research methodology. In _In Proc. MOSART Workshop_. 7–16.
* Feist (1999) Gregory J Feist. 1999\. The influence of personality on artistic and scientific creativity. _Handbook of creativity_ (1999), 273.
* Fiebrink and Sonami ([n.d.]) Rebecca Fiebrink and Laetitia Sonami. [n.d.]. Reflections on Eight Years of Instrument Creation with Machine Learning. ([n. d.]), 6.
* Fraser et al. (2020) C. Ailie Fraser, Joy O. Kim, Hijung Valentina Shin, Joel Brandt, and Mira Dontcheva. 2020. Temporal Segmentation of Creative Live Streams. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_ (Honolulu, HI, USA) _(CHI ’20)_. Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3313831.3376437
* Frich et al. (2019) Jonas Frich, Lindsay MacDonald Vermeulen, Christian Remy, Michael Mose Biskjaer, and Peter Dalsgaard. 2019. Mapping the Landscape of Creativity Support Tools in HCI. In _Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems_ (Glasgow, Scotland Uk) _(CHI ’19)_. Association for Computing Machinery, New York, NY, USA, 1–18. https://doi.org/10.1145/3290605.3300619
* Fujii (2017) Lee Ann Fujii. 2017\. _Interviewing in social science research: A relational approach_. Routledge.
* Garcia et al. (2012) Jérémie Garcia, Theophanis Tsandilas, Carlos Agon, and Wendy Mackay. 2012. Interactive Paper Substrates to Support Musical Creation. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_ (Austin, Texas, USA) _(CHI ’12)_. Association for Computing Machinery, New York, NY, USA, 1825–1828. https://doi.org/10.1145/2207676.2208316
* Gatys et al. (2015) Leon A. Gatys, Alexander S. Ecker, and M. Bethge. 2015\. A Neural Algorithm of Artistic Style. _ArXiv_ abs/1508.06576 (2015).
* Goodman (1968) Nelson Goodman. 1968\. _Languages of Art: An Approach to a Theory of Symbols_. The Bobbs-Merrill Company, Inc., New York, Indianapolis.
* Gross (2009) M. D. Gross. 2009\. Visual Languages and Visual Thinking: Sketch Based Interaction and Modeling. In _Proceedings of the 6th Eurographics Symposium on Sketch-Based Interfaces and Modeling_ (New Orleans, Louisiana) _(SBIM ’09)_. Association for Computing Machinery, New York, NY, USA, 7–11. https://doi.org/10.1145/1572741.1572743
* Hartmann et al. (2008) Björn Hartmann, Loren Yu, Abel Allison, Yeonsoo Yang, and Scott R. Klemmer. 2008. Design as exploration: creating interface alternatives through parallel authoring and runtime tuning. In _Proceedings of the 21st annual ACM symposium on User interface software and technology - UIST ’08_. ACM Press, 91. https://doi.org/10.1145/1449715.1449732
* Hempel et al. (2019) Brian Hempel, Justin Lubin, and Ravi Chugh. 2019. Sketch-n-Sketch: Output-Directed Programming for SVG. In _Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology_ (New Orleans, LA, USA) _(UIST ’19)_. Association for Computing Machinery, New York, NY, USA, 281–292. https://doi.org/10.1145/3332165.3347925
* Hertzmann (2018) Aaron Hertzmann. 2018\. Can Computers Create Art? _CoRR_ abs/1801.04486 (2018). arXiv:1801.04486 http://arxiv.org/abs/1801.04486
* Iizuka et al. (2016) Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa. 2016\. Let There Be Color! Joint End-to-End Learning of Global and Local Image Priors for Automatic Image Colorization with Simultaneous Classification. _ACM Trans. Graph._ 35, 4, Article 110 (July 2016), 11 pages. https://doi.org/10.1145/2897824.2925974
* Ingold (2010) T. Ingold. 2010\. The textility of making. _Cambridge Journal of Economics_ 34, 1 (Jan 2010), 91–102. https://doi.org/10.1093/cje/bep042
* Jacobs (2018) J. Jacobs. 2018\. _SFPC Residency Reflections_. https://medium.com/sfpc/sfpc-residency-reflections-bf32204c92aa.
* Jacobs et al. (2018) Jennifer Jacobs, Joel Brandt, Radomír Mech, and Mitchel Resnick. 2018. Extending Manual Drawing Practices with Artist-Centric Programming Tools. In _Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems_ _(CHI ’18)_. Association for Computing Machinery, 1–13. https://doi.org/10.1145/3173574.3174164
* Jacobs et al. (2017) Jennifer Jacobs, Sumit Gogia, Radomír Měch, and Joel R. Brandt. 2017. Supporting Expressive Procedural Art Creation through Direct Manipulation. In _Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems_ (Denver, Colorado, USA) _(CHI ’17)_. Association for Computing Machinery, New York, NY, USA, 6330–6341. https://doi.org/10.1145/3025453.3025927
* Jacobs and Zoran (2015) Jennifer Jacobs and Amit Zoran. 2015. Hybrid Practice in the Kalahari: Design Collaboration through Digital Tools and Hunter-Gatherer Craft. In _Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems_ _(CHI ’15)_. Association for Computing Machinery, 619–628. https://doi.org/10.1145/2702123.2702362
* Kazi et al. (2014) Rubaiat Habib Kazi, Fanny Chevalier, Tovi Grossman, and George Fitzmaurice. 2014. Kitty: Sketching Dynamic and Interactive Illustrations. In _Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology_ (Honolulu, Hawaii, USA) _(UIST ’14)_. Association for Computing Machinery, New York, NY, USA, 395–405. https://doi.org/10.1145/2642918.2647375
* Kazi et al. (2011) Rubaiat Habib Kazi, Kien Chuan Chua, Shengdong Zhao, Richard Davis, and Kok-Lim Low. 2011\. SandCanvas: New Possibilities in Sand Animation. In _CHI ’11 Extended Abstracts on Human Factors in Computing Systems_ (Vancouver, BC, Canada) _(CHI EA ’11)_. Association for Computing Machinery, New York, NY, USA, 483. https://doi.org/10.1145/1979742.1979562
* Kazi et al. (2017) Rubaiat Habib Kazi, Tovi Grossman, Hyunmin Cheong, Ali Hashemi, and George Fitzmaurice. 2017. DreamSketch: Early Stage 3D Design Explorations with Sketching and Generative Design. In _Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology_ (Québec City, QC, Canada) _(UIST ’17)_. Association for Computing Machinery, New York, NY, USA, 401–414. https://doi.org/10.1145/3126594.3126662
* Kim et al. (2015) Joy Kim, Mira Dontcheva, Wilmot Li, Michael S. Bernstein, and Daniela Steinsapir. 2015. Motif: Supporting Novice Creativity through Expert Patterns. In _Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems_ (Seoul, Republic of Korea) _(CHI ’15)_. Association for Computing Machinery, New York, NY, USA, 1211–1220. https://doi.org/10.1145/2702123.2702507
* Klemmer et al. (2006) Scott R. Klemmer, Björn Hartmann, and Leila Takayama. 2006\. How bodies matter: five themes for interaction design. In _Proceedings of the 6th conference on Designing Interactive systems_ _(DIS ’06)_. Association for Computing Machinery, 140–149. https://doi.org/10.1145/1142405.1142429
* Klokmose et al. (2015) Clemens N. Klokmose, James R. Eagan, Siemen Baader, Wendy Mackay, and Michel Beaudouin-Lafon. 2015. Webstrates: Shareable Dynamic Media. In _Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology_ _(UIST ’15)_. Association for Computing Machinery, 280–290. https://doi.org/10.1145/2807442.2807446
* Klokmose et al. (2019) Clemens N. Klokmose, Christian Remy, Janus Bager Kristensen, Rolf Bagge, Michel Beaudouin-Lafon, and Wendy Mackay. 2019. Videostrates: Collaborative, Distributed and Programmable Video Manipulation. In _Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology_ _(UIST ’19)_. Association for Computing Machinery, 233–247. https://doi.org/10.1145/3332165.3347912
* Ko et al. (2011) Amy J. Ko, Robin Abraham, Laura Beckwith, Alan Blackwell, Margaret Burnett, Martin Erwig, Chris Scaffidi, Joseph Lawrance, Henry Lieberman, Brad Myers, and et al. 2011. The state of the art in end-user software engineering. _Comput. Surveys_ 43, 3 (Apr 2011), 21:1–21:44. https://doi.org/10.1145/1922649.1922658
* Leiva et al. (2019) Germán Leiva, Nolwenn Maudet, Wendy Mackay, and Michel Beaudouin-Lafon. 2019. Enact: Reducing Designer–Developer Breakdowns When Prototyping Custom Interactions. _ACM Trans. Comput.-Hum. Interact._ 26, 3, Article 19 (May 2019), 48 pages. https://doi.org/10.1145/3310276
* Leung and Lara (2015) Joshua Leung and Daniel M. Lara. 2015. Grease Pencil: Integrating Animated Freehand Drawings into 3D Production Environments. In _SIGGRAPH Asia 2015 Technical Briefs_ (Kobe, Japan) _(SA ’15)_. Association for Computing Machinery, New York, NY, USA, Article 16, 4 pages. https://doi.org/10.1145/2820903.2820924
* Levin (2003) Golan Levin. 2003\. _Essay for creative code_. http://www.flong.com/texts/essays/essay_creative_code
* Levin (2015) Golan Levin. 2015\. _For Us, By Us_. http://www.flong.com/texts/essays/for-us-by-us/
* Li et al. (2020) Jingyi Li, Joel Brandt, Radomír Mech, Maneesh Agrawala, and Jennifer Jacobs. 2020. Supporting Visual Artists in Programming through Direct Inspection and Control of Program Execution. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_ (Honolulu, HI, USA) _(CHI ’20)_. Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3313831.3376765
* Lieberman et al. (2006) Henry Lieberman, Fabio Paternò, Markus Klann, and Volker Wulf. 2006. End-user development: An emerging paradigm. In _End user development_. Springer, 1–8.
* Lieberman (2014) Zach Lieberman. 2014\. _ofBook, a collaboratively written book about openFrameworks_. http://openframeworks.cc/ofBook/chapters/foreword.html.
* Malloch et al. (2019) Joseph Malloch, Jérémie Garcia, Marcelo M. Wanderley, Wendy E. Mackay, Michel Beaudouin-Lafon, and Stéphane Huot. 2019\. _A Design Workbench for Interactive Music Systems_. Springer International Publishing, Cham, 23–40. https://doi.org/10.1007/978-3-319-92069-6_2
* Margolis and Fisher (2002) J. Margolis and A. Fisher. 2002. _Unlocking the Clubhouse: Women in Computing_. MIT Press. https://books.google.com/books?id=StwGQw45YoEC
* Matejka et al. (2018) Justin Matejka, Michael Glueck, Erin Bradner, Ali Hashemi, Tovi Grossman, and George Fitzmaurice. 2018\. Dream Lens: Exploration and Visualization of Large-Scale Generative Design Datasets. In _Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ’18_. ACM Press, 1–12. https://doi.org/10.1145/3173574.3173943
* McCarthy and Turner ([n.d.]a) L. McCarthy and M. Turner. [n.d.]a. _p5.js_. https://p5js.org/.
* McCarthy and Turner ([n.d.]b) L. McCarthy and M. Turner. [n.d.]b. _p5.js Community Statement_. https://p5js.org/community/.
* McCullough (1996) Malcom McCullough. 1996\. _Abstracting craft: the practiced digital hand_. MIT Press.
* McDirmid (2016) Sean McDirmid. 2016\. A Live Programming Experience. https://www.youtube.com/watch?v=bnqkglrSqrg.
* Mordvintsev et al. (2015) Alexander Mordvintsev, Christopher Olah, and Mike Tyka. 2015\. Deepdream-a code example for visualizing neural networks. _Google Research_ 2, 5 (2015).
* Muller and Kuhn (1993) Michael J Muller and Sarah Kuhn. 1993. Participatory design. _Commun. ACM_ 36, 6 (1993), 24–28.
* Mumford (1952) Lewis Mumford. 1952\. _Art and Technics_. Columbia University Press.
* Myers et al. (2008) Brad Myers, Sun Young Park, Yoko Nakano, Greg Mueller, and Andrew Ko. 2008. How designers design and program interactive behaviors. In _2008 IEEE Symposium on Visual Languages and Human-Centric Computing_. IEEE, 177–184.
* Myers et al. (2015) Brad A. Myers, Ashley Lai, Tam Minh Le, YoungSeok Yoon, Andrew Faulring, and Joel Brandt. 2015\. Selective Undo Support for Painting Applications. In _Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems_ (Seoul, Republic of Korea) _(CHI ’15)_. Association for Computing Machinery, New York, NY, USA, 4227–4236. https://doi.org/10.1145/2702123.2702543
* Needleman (1979) Carla Needleman. 1979\. _The work of craft: an inquiry into the nature of crafts and craftsmanship_. Arkana.
* Oney et al. (2014) Stephen Oney, Brad Myers, and Joel Brandt. 2014. InterState: a language and environment for expressing interface behavior. In _Proceedings of the 27th annual ACM symposium on User interface software and technology_ _(UIST ’14)_. Association for Computing Machinery, 263–272. https://doi.org/10.1145/2642918.2647358
* Papert (1980) Seymour Papert. 1980\. _Mindstorms: Children, Computers, and Powerful Ideas_. Basic Books, Inc., USA.
* Peek and Gershenfeld (2018) Nadya Peek and Neil Gershenfeld. 2018. Mods: Browser-based rapid prototyping workflow composition. (2018).
* Reas and Fry (2004) C. Reas and B. Fry. 2004\. _Processing_. http://processing.org.
* Reas and Fry (2016) C. Reas and B. Fry. 2016\. _Processing Overview_. http://processing.org/overview.
* Resnick and Silverman (2005) Mitchel Resnick and Brian Silverman. 2005. Some reflections on designing construction kits for kids. In _Proceedings of the 2005 conference on Interaction design and children_ _(IDC ’05)_. Association for Computing Machinery, 117–122. https://doi.org/10.1145/1109540.1109556
* Roque (2016) Ricarose Vallarta Roque. 2016\. _Family creative learning: designing structures to engage kids and parents as computational creators_. Ph.D. Dissertation. Massachusetts Institute of Technology.
* Runway AI (2020) Inc. Runway AI. 2020\. _RunwayML_. https://runwayml.com/.
* Savage et al. (2015) Valkyrie Savage, Sean Follmer, Jingyi Li, and Björn Hartmann. 2015. Makers’ Marks: Physical Markup for Designing and Fabricating Functional Objects. In _Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology_ (Charlotte, NC, USA) _(UIST ’15)_. Association for Computing Machinery, New York, NY, USA, 103–108. https://doi.org/10.1145/2807442.2807508
* Schachman (2012) Toby Schachman. 2012\. Alternative Programming Interfaces for Alternative Programmers. In _Proceedings of the ACM International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software_ (Tucson, Arizona, USA) _(Onward! 2012)_. Association for Computing Machinery, New York, NY, USA, 1–10. https://doi.org/10.1145/2384592.2384594
* Schon ([n.d.]) Donald A Schon. [n.d.]. Designing as reflective conversation with the materials of a design situation. ([n. d.]), 17.
* Semmo et al. (2017) Amir Semmo, Tobias Isenberg, and Jürgen Döllner. 2017\. Neural Style Transfer: A Paradigm Shift for Image-Based Artistic Rendering?. In _Proceedings of the Symposium on Non-Photorealistic Animation and Rendering_ (Los Angeles, California) _(NPAR ’17)_. Association for Computing Machinery, New York, NY, USA, Article 5, 13 pages. https://doi.org/10.1145/3092919.3092920
* Shimizu et al. (2019) Evan Shimizu, Matt Fisher, Sylvain Paris, and Kayvon Fatahalian. 2019. Finding Layers Using Hover Visualizations. In _Proceedings of the 45th Graphics Interface Conference on Proceedings of Graphics Interface 2019_ (Kingston, Canada) _(GI’19)_. Canadian Human-Computer Communications Society, Waterloo, CAN, Article 16, 9 pages. https://doi.org/10.20380/GI2019.16
* Shneiderman (2002) Ben Shneiderman. 2002\. Creativity Support Tools. _Commun. ACM_ 45, 10 (Oct. 2002), 116–120. https://doi.org/10.1145/570907.570945
* Shneiderman (2007) Ben Shneiderman. 2007\. Creativity support tools: accelerating discovery and innovation. _Commun. ACM_ 50, 12 (Dec 2007), 20–32. https://doi.org/10.1145/1323688.1323689
* Simo-Serra et al. (2018) Edgar Simo-Serra, Satoshi Iizuka, and Hiroshi Ishikawa. 2018\. Mastering Sketching: Adversarial Augmentation for Structured Prediction. _Transactions on Graphics (Presented at SIGGRAPH)_ 37, 1 (2018).
* Simon (2019) Joel Simon. 2019\. _ArtBreeder_. https://www.artbreeder.com/
* Thiel et al. (2011) Yannick Thiel, Karan Singh, and Ravin Balakrishnan. 2011\. Elasticurves: Exploiting Stroke Dynamics and Inertia for the Real-Time Neatening of Sketched 2D Curves. In _Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology_ (Santa Barbara, California, USA) _(UIST ’11)_. Association for Computing Machinery, New York, NY, USA, 383–392. https://doi.org/10.1145/2047196.2047246
* Torres et al. (2019) Cesar Torres, Matthew Jörke, Emily Hill, and Eric Paulos. 2019. Hybrid Microgenetic Analysis: Using Activity Codebooks to Identify and Characterize Creative Process. In _Proceedings of the 2019 on Creativity and Cognition_ (San Diego, CA, USA) _(C&C ’19)_. Association for Computing Machinery, New York, NY, USA, 2–14. https://doi.org/10.1145/3325480.3325498
* Tran O’Leary and Peek (2019) Jasper Tran O’Leary and Nadya Peek. 2019. Machine-o-Matic: A Programming Environment for Prototyping Digital Fabrication Workflows. In _The Adjunct Publication of the 32nd Annual ACM Symposium on User Interface Software and Technology_ (New Orleans, LA, USA) _(UIST ’19)_. Association for Computing Machinery, New York, NY, USA, 134–136. https://doi.org/10.1145/3332167.3356897
* Tsandilas et al. (2015) Theophanis Tsandilas, Magdalini Grammatikou, and Stéphane Huot. 2015. BricoSketch: Mixing Paper and Computer Drawing Tools in Professional Illustration. In _Proceedings of the 2015 International Conference on Interactive Tabletops & Surfaces_ (Madeira, Portugal) _(ITS ’15)_. Association for Computing Machinery, New York, NY, USA, 127–136. https://doi.org/10.1145/2817721.2817729
* Tversky (2019) Barbara Tversky. 2019\. _Mind in motion_. Basic Books.
* Victor (2011) Bret Victor. 2011\. _Dynamic Pictures_. http://worrydream.com/DynamicPicturesMotivation
* Victor (2018) Bret Victor. 2018\. Dynamicland. https://dynamicland.org
* vvvv group (2017) vvvv group. 2017\. _vvvv_. https://vvvv.org/.
* Watz (2012) Marius Watz. 2012\. _Algorithm Critique and Computational Aesthetics_. Vimeo. https://vimeo.com/46903693
* Wessel and Wright (2001) David Wessel and Matthew Wright. 2001. Problems and Prospects for Intimate Musical Control of Computers. In _Proceedings of the 2001 Conference on New Interfaces for Musical Expression_ (Seattle, Washington) _(NIME ’01)_. National University of Singapore, SGP, 1–4.
* Winograd and Flores (1986) Terry Winograd and Fernando Flores. 1986. _Understanding computers and cognition: A new foundation for design_. Intellect Books.
* Xing et al. (2015) Jun Xing, Li-Yi Wei, Takaaki Shiratori, and Koji Yatani. 2015\. Autocomplete Hand-Drawn Animations. _ACM Trans. Graph._ 34, 6, Article 169 (Oct. 2015), 11 pages. https://doi.org/10.1145/2816795.2818079
* Yamaoka et al. (2019) Junichi Yamaoka, Mustafa Doga Dogan, Katarina Bulovic, Kazuya Saito, Yoshihiro Kawahara, Yasuaki Kakehi, and Stefanie Mueller. 2019\. FoldTronics: Creating 3D Objects with Integrated Electronics Using Foldable Honeycomb Structures. In _Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems_ (Glasgow, Scotland Uk) _(CHI ’19)_. Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3290605.3300858
* Zaman et al. (2015) Loutfouz Zaman, Wolfgang Stuerzlinger, Christian Neugebauer, Rob Woodbury, Maher Elkhaldi, Naghmi Shireen, and Michael Terry. 2015\. GEM-NI: A System for Creating and Managing Alternatives In Generative Design. In _Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems - CHI ’15_. ACM Press, 1201–1210. https://doi.org/10.1145/2702123.2702398
* Zhao et al. (2020) Nanxuan Zhao, Nam Wook Kim, Laura Mariah Herman, Hanspeter Pfister, Rynson W.H. Lau, Jose Echevarria, and Zoya Bylinskii. 2020\. ICONATE: Automatic Compound Icon Generation and Ideation. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_ (Honolulu, HI, USA) _(CHI ’20)_. Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3313831.3376618
# Rabi oscillation of V${}_{\text{B}}^{-}$ spin in hexagonal boron nitride
Wei Liu, Zhi-Peng Li, Yuan-Ze Yang, Shang Yu, Yu Meng, Zhao-An Wang, Nai-Jie
Guo, Fei-Fei Yan, Qiang Li, Jun-Feng Wang, Jin-Shi Xu, Yang Dong, Xiang-Dong
Chen, Fang-Wen Sun, Yi-Tao Wang <EMAIL_ADDRESS>, Jian-Shun Tang
<EMAIL_ADDRESS>, Chuan-Feng Li <EMAIL_ADDRESS>, Guang-Can Guo
CAS Key Laboratory of Quantum Information, University of Science and Technology of
China, Hefei, P.R. China
CAS Center For Excellence in Quantum Information and Quantum Physics,
University of Science and Technology of China, Hefei, P.R. China
###### Abstract
Van der Waals (vdW) materials are a family of materials ranging from
semimetals and semiconductors to insulators, and their common characteristic
is a layered structure. This feature makes them widely used in the fabrication
of nano-photonic and electronic devices, particularly vdW heterojunctions.
Hexagonal boron nitride (hBN) is the only layered material demonstrated to
date to contain optically detected electronic spins, which can benefit the
construction of solid-state qubits and quantum sensors, especially when
embedded in nano-layered devices. Realizing this requires Rabi oscillation as
a crucial step. Here, we demonstrate the Rabi oscillation of
V${}_{\text{B}}^{-}$ spins in hBN. Interestingly, we find that the spins
behave completely differently under weak and relatively high magnetic fields:
the former case shows a single wide peak, while the latter shows multiple
narrower peaks (e.g., a clear beat in the Ramsey fringes). We conjecture that
the spin couples strongly to the surrounding nuclear spins and that the
magnetic field can control the nuclear spin bath.
Van der Waals (vdW) materials include a family of materials with various
bandgap, and exhibit diverse electronic properties from semimetal (e.g.,
graphene) to semiconductor (e.g., transition metal dichalcogenides, TMDCs for
short), and to insulator (e.g., hexagonal boron nitride, or hBN) XiaF2014 .
Their common feature is the layered structure, namely, the atoms in the same
layer are combined by the strong chemical bond, and the layers are connected
by the relatively weak vdW force. This feature makes layers from different
vdW materials easy to stack together to form heterojunctions Geim2013 ,
which have the advantage of no lattice mismatch compared to their three-
dimensional counterparts including GaAs, silicon, or diamond, etc. Besides,
vdW materials interact strongly with light owing to the two-dimensional
confinement Trovatello2021 . These characteristics lead to a wide range of
applications of vdW materials, such as photocurrent generation YuWJ2013 ,
light-emitting diode Ross2014 , field effect transistor LiuW2013 , single
photon Srivastava2015 ; HeYM2015 ; Koperski2015 ; Chakraborty2015 ;
Tonndorf2015 ; Palacios2017 ; Branny2017 ; Errando2020 ; TranTT2016n ;
TranTT2016p ; TranTT2016a ; Martinez2016 ; Chejanovsky2016 ; Choi2016 ;
Grosso2017 ; XueY2018 ; Proscia2018 ; LiuW2020 ; Fournier2020 ; Barthelmi2020
, and optical parametric amplification Trovatello2021 . Moreover, the light-
valley interaction in TMDCs leads to the field of valleytronics XuX2014 ;
Manzeli2017 . All these applications will contribute to the design and
construction of photonic and electronic devices at very small scales, benefiting
from the atomic thickness of vdW materials.
Among this family of layered materials, hBN has a large bandgap of $\sim$6 eV,
which enables it to host many kinds of defects, similar to diamond Barry2020 ;
Hanson2008 ; ChenX2015 and silicon carbide WangJF2020 ; YanFF2020 ; LiQ2020 ,
etc. Single-layer hBN was first found to
emit single photons at room temperature in 2016 by Tran _et al._ TranTT2016n ,
and since then, great interest has been stimulated in treating hBN defects as
promising single-photon emitters TranTT2016p ; TranTT2016a ; Martinez2016 ;
Chejanovsky2016 ; Choi2016 ; Grosso2017 ; XueY2018 ; Proscia2018 ; LiuW2020 ;
Fournier2020 , and furthermore, as the potential solid spin qubit Exarhos2019
; Toledo2018 ; Gottscholl2020 ; Chejanovsky2019 ; Mendelson2020 ; Kianinia2020
; GaoX2020 ; LiuW2021 ; Gottscholl2020a . As the single-photon source, hBN
defect (in monolayer, flake or bulk) has the advantages of high brightness
Grosso2017 ; LiuW2020 , broad spectral range TranTT2016a , easy tunability
Grosso2017 ; XueY2018 , and easy fabrication, etc. The fabrication methods are
diverse, including chemical etch Chejanovsky2016 , electron or ion irradiation
Chejanovsky2016 ; Choi2016 ; Kianinia2020 ; Fournier2020 , laser ablation
Choi2016 ; GaoX2020 , strain Proscia2018 , and so on.
In particular, defects in hBN have attracted much attention as good candidates
for solid-state spin qubits (particularly in vdW nano-devices). In fact,
electron paramagnetic resonance (EPR) signals in hBN were found decades ago
Katzir1975 ; Moore1972 ; Fanciulli1992 . These signals have been identified by
recent numerical calculations Sajid2018 ; Abdi2018 , and these theoretical
works also predict many possible defects in hBN that have the
potential to give optically detected magnetic resonance (ODMR) signals. In
experiment, Exarhos _et al._ Exarhos2019 found the magnetic-field-dependent
intensity of fluorescence emitted from an hBN defect in 2019. Later, ODMR
signals were revealed by Gottscholl _et al._ Toledo2018 ; Gottscholl2020 ,
Chejanovsky _et al._ Chejanovsky2019 and Mendelson _et al._ Mendelson2020 ,
respectively. The first kind of defect is assumed to be V${}_{\text{B}}^{-}$,
and the third kind is conjectured to be carbon-related. Based on the
experimental results, several theoretical analyses have been carried out,
especially for V${}_{\text{B}}^{-}$ Ivady2020 ; Sajid2020 , and the
temperature-dependent features of this kind of spin defect have also been
investigated in detail recently in experiment LiuW2021 .
Figure 1: Simplified energy levels and ODMR results. (a) Simplified energy
levels of V${}_{\text{B}}^{-}$ center and the related optical transitions
among ground states (GS), excited states (ES) and metastable states (MS). The
532-nm laser (green) is used for the spin polarization and readout, and the
microwave (pink) is used for coherent control of the spin state. (b) ODMR
spectrum measured at room temperature with no external magnetic field. The
bimodal signal due to the local strain has obvious hyperfine structures, which
indicate the nucleus-electron interaction between V${}_{\text{B}}^{-}$ and the
three neighboring nitrogen nuclei (14N). The experimental data are fitted
using two-Lorentzian function to obtain $\nu_{1}\sim$3.424 GHz and
$\nu_{2}\sim$3.533 GHz. (c) Dependence of the frequency shift of the
$m_{s}=-1\leftrightarrow m_{s}=0$ transition on the magnetic field parallel to
the hexagonal $c$ axis, from which we can fit the $g$ factor of
V${}_{\text{B}}^{-}$ to be $1.992\pm 0.010$. Figure 2: Rabi oscillations. (a)
Pulse sequence of Rabi measurement comprising the first laser pulse for spin
polarization, then the microwave pulse with length $\tau$ for spin
manipulation and the second laser pulse for state readout. (b) Rabi
oscillations on the $m_{s}=-1\leftrightarrow m_{s}=0$ transition observed at
room temperature and different magnetic field. The data are fitted using
$\Sigma_{i=1}^{n}A_{i}\exp(-\tau/T_{i})\cos(2\pi
f_{i}\tau+\phi_{i})+b\exp(-\tau/T_{b})+c$, with $n=1,2$ and $3$ (red lines)
from top to bottom, respectively. $A_{i}$, $T_{i}$, $f_{i}$, $\phi_{i}$, $b$,
$T_{b}$ and $c$ are the amplitude, oscillation decay time, frequency, phase,
decayed background and its decay time, constant background, respectively. (c)
Rabi frequency measured with a linear dependence on the square root of
microwave power $\sqrt{P}$. Figure 3: $T_{1}$ measurement and spin echo
detections. (a) Pulse sequence for characterizing the spin-lattice relaxation
dynamics, including the spin polarization and readout, the microwave
$\pi$-pulse obtained from the Rabi measurement, and a variable free evolution
time $\tau$. (b) $T_{1}$ measurement at 0 mT revealing the spin-lattice
relaxation time of $T_{1}=16.377\pm 0.416$ $\mu$s. (c) $T_{1}$ time versus
magnetic field, suggesting that there is roughly no $T_{1}$ dependence on
magnetic field. (d) Pulse scheme for spin echo measurement with
$\frac{\pi}{2}-\frac{\tau}{2}-\pi-\frac{\tau}{2}-\frac{\pi}{2}$ sequence,
where $\tau$ is the free evolution time. (e) Optically-detected spin-echo
measurement at 0 mT. (f) Spin echo at 36 mT, which cannot be fitted well,
showing complicated oscillations induced by the nuclear spin bath; the red
line is only a guide to the eye. Figure 4: Ramsey interference. (a) Ramsey pulse
sequence with $\frac{\pi}{2}-\tau-\frac{\pi}{2}$. (b) Ramsey result with $B=0$
mT and $f_{\text{MW}}=3428$ MHz. No oscillation is observed but a fast decay
with $T_{2}^{*}=60.198\pm 2.747$ ns. A slow background decay is also observed
as that in Rabi results. (c) Ramsey result at 44 mT and 2200-MHz microwave
frequency. Three frequencies are observed and two of them form a clear beat.
The red line is the fitting. The distances between the adjacent frequencies
are both around $47$ MHz (the hyperfine splitting due to the nucleus-electron
interaction observed and calculated previously). $T_{2}^{*}$ corresponding to
these three frequencies are $0.665\pm 0.108$ $\mu$s, $2.500\pm 2.160$ $\mu$s
and $1.448\pm 0.841$ $\mu$s, respectively.
The energy levels of V${}_{\text{B}}^{-}$ are gradually becoming clear, and a
simplified diagram is sketched in Fig. 1(a). As discussed in Refs.
Gottscholl2020 ; Ivady2020 , this defect has a triplet ground state (GS), which
primarily consists of three energy levels with $m_{s}=0$ and $m_{s}=\pm 1$,
where $D$ is the zero-field splitting (ZFS) between them. ES and MS represent
the excited states and metastable states, respectively. The green arrows stand
for the excitation laser, which pumps the population up to the ES, and the red
arrow represents the fluorescence to be detected. The gray wavy arrows
represent the inter-system crossings (ISC) between the $S=1$ and $S=0$
manifolds. The pink circled arrow is the applied microwave (MW) between
$m_{s}=0$ and $m_{s}=\pm 1$, which changes the populations of these states and
hence the intensity of the fluorescence. By recording the difference between
the fluorescence intensities, we can read out the state of the spin qubit. This
method is called ODMR.
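The contrast computation behind ODMR is simple enough to sketch directly. The
snippet below (with made-up photon counts, purely for illustration) shows the
quantity plotted in Fig. 1(b):

```python
# ODMR contrast from fluorescence recorded with the MW on and off
# (illustrative photon counts, not measured values).
def odmr_contrast(counts_mw_on, counts_mw_off):
    """Relative fluorescence change induced by the microwave."""
    return (counts_mw_on - counts_mw_off) / counts_mw_off

# On resonance, the MW transfers population to the m_s = +/-1 states, which
# fluoresce less, so the contrast is negative (a dip), e.g.:
print(odmr_contrast(9.7e5, 1.0e6))  # -> -0.03, i.e. a 3% dip
```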
A typical ODMR signal of V${}_{\text{B}}^{-}$ at zero magnetic field is shown
in Fig. 1(b). The excitation laser is always on, and the MW works in on/off
mode; the contrast is then calculated from the difference between the
fluorescence with MW on and off. $\nu_{0}=(\nu_{1}+\nu_{2})/2=3.479$ GHz is the
frequency corresponding to the ZFS $D$, and $\nu_{2,1}$ correspond to the
transitions from the $m_{s}=\pm 1$ states to $m_{s}=0$, respectively.
Remarkably, for each transition peak, we can clearly see several hyperfine
splittings, which can be attributed to the nucleus-electron interaction. For
the V${}_{\text{B}}^{-}$ defect, there are three nitrogen nuclei (14N) around,
each with nuclear spin $I=1$. Therefore, in total $2(3I)+1=7$ hyperfine
transitions should be observed, and the interval between them is
detected Gottscholl2020 and calculated Ivady2020 to be approximately $47$
MHz. Fig. 1(c) shows the frequency shift of the $m_{s}=-1\leftrightarrow
m_{s}=0$ transition as the magnetic field varies, from which we can fit the
$g$ factor of this spin to be $1.992\pm 0.010$.
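The hyperfine-line counting and the Zeeman slope used above can be checked with
a few lines of Python (a minimal sketch; the constants are standard CODATA
values, and $g=1.992$ is the fitted value quoted in the text):

```python
# Number of hyperfine transitions for N equivalent nuclei of spin I:
# the total nuclear projection runs from -N*I to +N*I in unit steps,
# giving 2*N*I + 1 lines.
def hyperfine_line_count(n_nuclei, nuclear_spin):
    return int(2 * n_nuclei * nuclear_spin + 1)

# Three neighboring 14N nuclei with I = 1 give 2*(3*1) + 1 = 7 lines.
print(hyperfine_line_count(3, 1))  # -> 7

# Zeeman shift of the m_s = 0 <-> m_s = -1 transition:
# delta_nu = g * mu_B * B / h, so the slope of frequency versus field
# yields the g factor.
MU_B = 9.2740100783e-24  # Bohr magneton, J/T
H = 6.62607015e-34       # Planck constant, J*s

def zeeman_slope_mhz_per_mt(g):
    """Slope of the transition frequency in MHz per mT."""
    return g * MU_B / H * 1e-9  # Hz/T -> MHz/mT

# For the fitted g = 1.992 the slope is about 27.9 MHz/mT.
print(round(zeeman_slope_mhz_per_mt(1.992), 1))  # -> 27.9
```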
The next step is naturally to coherently operate the solid spin, of which the
Rabi oscillation is a key tool. Here we utilize the two-level states
$m_{s}=-1,0$ as the spin qubit to perform the coherent control. Fig. 2(a) shows
the timing diagram of the Rabi measurement. After a long excitation-laser pulse, the
spin is polarized to $m_{s}=0$ state, then a MW pulse with length of $\tau$ is
applied to rotate the spin, followed by a readout pulse. The results are shown
in Fig. 2(b). At a magnetic field of $B=0$ mT, we see a standard decaying Rabi
oscillation, but we also observe a small decay of the background, which may be
caused by the overly high density of V${}_{\text{B}}^{-}$ defects, since our
integrated dose of neutron irradiation is quite large ($\sim 2.0\times
10^{17}$ $n$ cm${}^{-2}$). For non-zero magnetic fields, we observe an
oscillation with multiple Rabi frequencies. We conjecture that the magnetic
field forms a more orderly ensemble of defects, which narrows the hyperfine
peaks so that they can be separated from each other to some extent; other
reasons, such as the influence of the environmental nuclear or electronic
spins Hanson2008 , are also possible. By varying the MW power, we derive a
linear dependence of the fitted Rabi frequency on the square root of the
power, $\sqrt{P}$ (see Fig. 2(c)), which indicates the validity of our Rabi
results.
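The fitting model quoted in the caption of Fig. 2(b) is easy to write down
explicitly. The sketch below (all numerical values are illustrative, not the
paper's fitted parameters) builds a two-frequency trace of this form, as one
would see at non-zero field, and shows that a simple Fourier transform already
separates the two Rabi frequencies:

```python
import numpy as np

def rabi_model(tau, amps, Ts, fs, phis, b, Tb, c):
    """Model from the caption of Fig. 2:
    sum_i A_i*exp(-tau/T_i)*cos(2*pi*f_i*tau + phi_i) + b*exp(-tau/Tb) + c."""
    out = b * np.exp(-tau / Tb) + c
    for A, T, f, phi in zip(amps, Ts, fs, phis):
        out = out + A * np.exp(-tau / T) * np.cos(2 * np.pi * f * tau + phi)
    return out

# Illustrative two-frequency trace (tau in microseconds, f in MHz).
tau = np.linspace(0.0, 2.0, 4000)
signal = rabi_model(tau, amps=(0.02, 0.015), Ts=(0.8, 0.8),
                    fs=(10.0, 12.0), phis=(0.0, 0.0), b=0.01, Tb=3.0, c=1.0)

# A Fourier transform of the mean-subtracted trace separates the two Rabi
# frequencies into distinct spectral peaks.
spec = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(tau.size, d=tau[1] - tau[0])  # MHz
peaks = np.sort(freqs[np.argsort(spec)[-2:]])
print(peaks)  # close to the injected 10 and 12 MHz
```

In practice one would use such peak positions as initial guesses for a
least-squares fit of the full model, as done for the curves in Fig. 2(b).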
With the Rabi frequency, we can define $\frac{\pi}{2}$-pulse and $\pi$-pulse.
Utilizing $\pi$-pulse, we can measure the spin-lattice relaxation time
$T_{1}$. The pulse sequence is shown in Fig. 3(a) and Fig. 3(b) exhibits the
relaxation result at $B=0$ mT. By fitting this result, we derive
$T_{1}=16.377\pm 0.416$ $\mu$s. Then we repeat this sequence for various
magnetic fields and obtain the results in Fig. 3(c). We find $T_{1}$ is
approximately independent of the magnetic field. Next, we perform the sequence
$\frac{\pi}{2}-\frac{\tau}{2}-\pi-\frac{\tau}{2}-\frac{\pi}{2}$ (spin echo,
see Fig. 3(d)) to measure $T_{2}$. At $B=0$ mT, we observe a monotonic decay
of contrast shown in Fig. 3(e), and $T_{2}$ is fitted to be $82.121\pm 2.462$
ns, which is quite short. We conjecture that it may also be caused by the
overly high density of V${}_{\text{B}}^{-}$ defects. At $B=36$ mT, we find the
decaying-contrast curve is modulated in a complicated way (see Fig. 3(f)). We
cannot fit it well, and the red line is only a guide to the eye. We note that,
since $T_{2}$ is quite short, the impact of the durations of the MW pulses,
especially the $\pi$-pulse, cannot be ignored; therefore, the fitted values of
$T_{2}$ will be somewhat inaccurate, although the order of magnitude can be
determined. For $T_{1}$, which is far longer than the MW-pulse durations, this
problem does not arise.
We also perform the Ramsey interference experiment on the V${}_{\text{B}}^{-}$
spins. The pulse sequence is presented in Fig. 4(a), and Fig. 4(b) shows the
result at $B=0$ mT. No oscillations are observed, which may be caused by the
fast decay corresponding to a short $T_{2}^{*}$. On one hand, the nucleus-
electron interaction splits the $m_{s}=-1\leftrightarrow m_{s}=0$ transition
into 7 peaks; on the other hand, every peak is broad at zero magnetic field
due to the inhomogeneous ensemble, and the peaks overlap with each other to
form a single wide peak, thus inducing a short $T_{2}^{*}$. Similar to the
result in Fig. 2(b), we also see a slow decay of the background in this figure
(these data cannot be fitted well using a single-decay curve, but require a
double-decay curve). The reason may also be attributed to the overly high
density of V${}_{\text{B}}^{-}$
defects in the sample. In contrast, when we apply a magnetic field of $B=44$
mT and set the MW frequency at $f_{\text{MW}}=2200$ MHz, we see a multiple-
frequency oscillation, in which a beat is clearly recognized, and it is
superposed on another slow oscillation. These three frequencies are fitted as
$f_{-1}=-44.171\pm 0.039$ MHz, $f_{0}=0.934\pm 0.131$ MHz, $f_{1}=45.872\pm
0.063$ MHz, respectively. We therefore conjecture that the three peaks that
contribute to the Ramsey oscillation are located at
$h(f_{\text{MW}}+(-)f_{i})$ ($i=-1,0,1$, $h$ is Planck constant, and the sign
“-” represents the possibility of reverse direction), and other peaks
contribute to this oscillation little since they are much farther from the MW
frequency. The distance between the adjacent peaks is $f_{0}-f_{-1}=45.105\pm
0.136$ MHz$\approx f_{1}-f_{0}=44.938\pm 0.145$ MHz (the Planck constant is
omitted here), which is coincident with the calculated Ivady2020 and observed
Gottscholl2020 energy separation of the hyperfine splittings induced by
nucleus-electron interaction (approximately $47$ MHz). The fitted $T_{2}^{*}$
of these three peaks are $0.665\pm 0.108$ $\mu$s, $2.500\pm 2.160$ $\mu$s and
$1.448\pm 0.841$ $\mu$s, respectively. Although we have not extracted $T_{2}$
in the nonzero-magnetic-field case due to the quite complex modulations, we
can conjecture from the $T_{2}^{*}$ of the individual peaks that $T_{2}$ will
be on the order of microseconds. It seems the magnetic field helps to elongate
the spin coherence time.
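The spacing arithmetic above is trivial to verify numerically (the frequency
values are the fitted ones quoted in the text):

```python
# Fitted Ramsey frequencies at B = 44 mT, f_MW = 2200 MHz (from the text), in MHz.
f_minus1, f_0, f_plus1 = -44.171, 0.934, 45.872

d_low = f_0 - f_minus1    # spacing of the lower pair of beat frequencies
d_high = f_plus1 - f_0    # spacing of the upper pair
print(round(d_low, 3), round(d_high, 3))  # -> 45.105 44.938

# Both spacings are within a couple of MHz of the ~47 MHz hyperfine splitting
# reported and calculated for the nucleus-electron interaction.
assert abs(d_low - 47.0) < 2.0 and abs(d_high - 47.0) < 2.5
```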
It is interesting to find that in both the Rabi- and Ramsey-oscillation
results, the observed phenomena are very different when the magnetic field is
weak or relatively strong. At zero magnetic field, the defects behave as a
large ensemble with a single broad peak; but when the magnetic field is strong
enough, the defects behave like a more orderly ensemble with multiple narrow
peaks, i.e., each data set at higher magnetic field requires a multiple-
frequency fitting. These phenomena suggest that the V${}_{\text{B}}^{-}$ spin
is highly correlated with the neighboring nuclear spins, which can be utilized
to study the nuclear spins or the correlations between them.
As the only candidate for a spin qubit in vdW materials to date, the coherent
operation of defects in hBN based on Rabi oscillation plays a crucial role,
and provides a powerful tool for the design and construction of spin-based vdW
nano-devices, especially when combined with vdW-heterojunction techniques.
Although $T_{2}$ is still quite short, which may be primarily due to the
overly high density of V${}_{\text{B}}^{-}$ defects in the sample caused by
the high-dose neutron irradiation, there are several possible ways to improve
it. For example, we can decrease the integrated dose of neutron irradiation;
perform a suitable annealing of the sample, since high temperatures can reduce
the number of V${}_{\text{B}}^{-}$ defects; put the sample into a
low-temperature cryostat; or apply a higher magnetic field.
In summary, we have realized the Rabi oscillation of the V${}_{\text{B}}^{-}$
spins in hBN, based on which we also detect $T_{1}$ and perform the spin-echo
and Ramsey-interference experiments. We find $T_{1}$ is almost unaffected by
the magnetic field, remaining roughly around $16.377\pm 0.416$ $\mu$s; however, the
results of Rabi oscillation, spin echo and Ramsey oscillation are very
different under the conditions of weak and relatively strong magnetic field.
At zero magnetic field, the defects behave as a large ensemble with a single
broad peak, i.e., the Rabi result is well fitted by a single-frequency
oscillation, the echo result shows a fast decay of $82.121\pm 2.462$ ns, and
the Ramsey result also decays fast without any oscillations. When the magnetic
field goes higher, the defects behave like a more orderly ensemble with
multiple narrow peaks, and we see multiple-frequency oscillations in both Rabi
and Ramsey results. Especially, we see a clear beat and an additional slow
oscillation in the Ramsey result, and by fitting this result, we find the
distances among these three frequencies ($45.105\pm 0.136$ MHz$\approx
44.938\pm 0.145$ MHz) are roughly equal to the energy separation of the
nucleus-electron-interaction-induced hyperfine splitting (approximately $47$
MHz, $h$ is omitted). The decay times ($T_{2}^{*}$) of these three
oscillations are $0.665\pm 0.108$ $\mu$s, $2.500\pm 2.160$ $\mu$s and
$1.448\pm 0.841$ $\mu$s, respectively, from which we conjecture that $T_{2}$
for each separated peak will be on the order of microseconds. It seems the
magnetic field freezes the environmental spins to some extent and elongates
$T_{2}$. Our results suggest that the V${}_{\text{B}}^{-}$ spin is highly
correlated with the neighboring nuclear spins, which provides a potential tool
to study them.
## Acknowledgments
This work is supported by the National Key Research and Development Program of
China (No. 2017YFA0304100), the National Natural Science Foundation of China
(Grants Nos. 11822408, 11674304, 11774335, 11821404, and 11904356), the Key
Research Program of Frontier Sciences of the Chinese Academy of Sciences
(Grant No. QYZDY-SSW-SLH003), the Fok Ying-Tong Education Foundation (No.
171007), the Youth Innovation Promotion Association of Chinese Academy of
Sciences (Grants No. 2017492), Science Foundation of the CAS (No. ZDRW-
XH-2019-1), Anhui Initiative in Quantum Information Technologies (AHY020100,
AHY060300), the Fundamental Research Funds for the Central Universities (Nos.
WK2470000026, WK2030000008 and WK2470000028).
## References
* (1) Xia, F., Wang, H., Xiao, D., Dubey, M. & Ramasubramaniam, A. Two-dimensional material nanophotonics. _Nature Photon._ 8, 899-907 (2014).
* (2) Geim, A. K. & Grigorieva, I. V. Van der Waals heterostructures. _Nature_ 499, 419-425 (2013).
* (3) Trovatello, C. _et al._ Optical parametric amplification by monolayer transition metal dichalcogenides. _Nature Photon._ 15, 6-10 (2021).
* (4) Yu, W. J. _et al._ Highly efficient gate-tunable photocurrent generation in vertical heterostructures of layered materials. _Nature Nanotechnol._ 8, 952-958 (2013).
* (5) Ross, J. S. _et al._ Electrically tunable excitonic light-emitting diodes based on monolayer WSe2 p-n junctions. _Nature Nanotechnol._ 9, 268-272 (2014).
* (6) Liu, W. _et al._ Role of metal contacts in designing high-performance monolayer n-type WSe2 field effect transistors. _Nano Lett._ 13, 1983-1990 (2013).
* (7) Srivastava, A. _et al._ Optically active quantum dots in monolayer WSe2. _Nature Nanotechnol._ 10, 491-496 (2015).
* (8) He, Y. M. _et al._ Single quantum emitters in monolayer semiconductors. _Nature Nanotechnol._ 10, 497-502 (2015).
* (9) Koperski, M. _et al._ Single photon emitters in exfoliated WSe2 structures. _Nature Nanotechnol._ 10, 503-506 (2015).
* (10) Chakraborty, C., Kinnischtzke, L., Goodfellow, K. M., Beams, R. & Vamivakas, A. N. Voltage-controlled quantum light from an atomically thin semiconductor. _Nature Nanotechnol._ 10, 507-511 (2015).
* (11) Tonndorf, P. _et al._ Single-photon emission from localized excitons in an atomically thin semiconductor. _Optica_ 2, 347-352 (2015).
* (12) Palacios-Berraquero, C. _et al._ Large-scale quantum-emitter arrays in atomically thin semiconductors. _Nature Commun._ 8, 1-6 (2017).
* (13) Branny, A., Kumar, S., Proux, R. & Gerardot, B. D. Deterministic strain-induced arrays of quantum emitters in a two-dimensional semiconductor. _Nature Commun._ 8, 1-7. (2017).
* (14) Errando-Herranz, C. _et al._ On-chip single photon emission from a waveguide-coupled two-dimensional semiconductor. Preprint at https://arxiv.org/abs/2002.07657 (2020).
* (15) Tran, T. T., Bray, K., Ford, M. J., Toth, M. & Aharonovich, I. Quantum emission from hexagonal boron nitride monolayers. _Nature Nanotechnol._ 11, 37-41 (2016).
* (16) Tran, T. T. _et al._ Quantum emission from defects in single-crystalline hexagonal boron nitride. _Phys. Rev. Appl._ 5, 034005 (2016).
* (17) Martínez, L. J. _et al._ Efficient single photon emission from a high-purity hexagonal boron nitride crystal. _Phys. Rev. B_ 94, 121405(R) (2016).
* (18) Chejanovsky, N. _et al._ Structural attributes and photodynamics of visible spectrum quantum emitters in hexagonal boron nitride. _Nano Lett._ 16, 7037-7045 (2016).
* (19) Choi, S. _et al._ Engineering and localization of quantum emitters in large hexagonal boron nitride layers. _ACS Appl. Mater. Interfaces_ 8, 29642-29648 (2016).
* (20) Tran, T. T. _et al._ Robust multicolor single photon emission from point defects in hexagonal boron nitride. _ACS Nano_ 10, 7331-7338 (2016).
* (21) Grosso, G. _et al._ Tunable and high-purity room temperature single-photon emission from atomic defects in hexagonal boron nitride. _Nature Commun._ 8, 1-8 (2017).
* (22) Xue, Y. _et al._ Anomalous pressure characteristics of defects in hexagonal boron nitride flakes. _ACS Nano_ 12, 7127-7133 (2018).
* (23) Proscia, N. V. _et al._ Near-deterministic activation of room-temperature quantum emitters in hexagonal boron nitride. _Optica_ 5, 1128-1134 (2018).
* (24) Liu, W., Wang, Y.-T., Li, Z.-P., Yu, S., Ke, Z.-J., Meng, Y., Tang, J.-S., Li, C.-F. & Guo, G.-C. An ultrastable and robust single-photon emitter in hexagonal boron nitride. _Physica E_ 124, 114251 (2020).
* (25) Fournier, C. _et al._ Position-controlled quantum emitters with reproducible emission wavelength in hexagonal boron nitride. Preprint at https://arxiv.org/abs/2011.12224 (2020).
* (26) Barthelmi, K. _et al._ Atomistic defects as single-photon emitters in atomically thin MoS2. _Appl. Phys. Lett._ 117, 070501 (2020).
* (27) Xu, X., Yao, W., Xiao, D. & Heinz, T. F. Spin and pseudospins in layered transition metal dichalcogenides. _Nature Phys._ 10, 343-350 (2014).
* (28) Manzeli, S., Ovchinnikov, D., Pasquier, D., Yazyev, O. V. & Kis, A. 2D transition metal dichalcogenides. _Nature Rev. Mater._ 2, 17033 (2017).
* (29) Barry, J. F. _et al._ Sensitivity optimization for NV-diamond magnetometry. _Rev. Mod. Phys._ 92, 015004 (2020).
* (30) Hanson, R., Dobrovitski, V. V., Feiguin, A. E., Gywat, O. & Awschalom, D. D. Coherent dynamics of a single spin interacting with an adjustable spin bath. _Science_ 320, 352-355 (2008).
* (31) Chen, X.-D., Zou, C.-L., Gong, Z.-J., Dong, C.-H., Guo, G.-C. & Sun, F.-W. Subdiffraction optical manipulation of the charge state of nitrogen vacancy center in diamond. _Light Sci. Appl._ 4, e230-e230 (2015).
* (32) Wang, J.-F., Yan, F.-F., Li, Q., Liu, Z.-H., Liu, H., Guo, G.-P., Guo, L.-P., Zhou, X., Cui, J.-M., Wang, J., Zhou, Z.-Q., Xu, X.-Y., Xu, J.-S., Li, C.-F. & Guo, G.-C. Coherent control of nitrogen-vacancy center spins in silicon carbide at room temperature. _Phys. Rev. Lett._ 124, 223601 (2020).
* (33) Yan, F.-F., Yi, A.-L., Wang, J.-F., Li, Q., Yu, P., Zhang, J.-X., Gali, A., Wang, Y., Xu, J.-S., Ou, X., Li, C.-F. & Guo, G.-C. Room-temperature coherent control of implanted defect spins in silicon carbide. _npj Quantum Inf._ 6, 1-6 (2020).
* (34) Li, Q., Wang, J.-F., Yan, F.-F., Zhou, J.-Y., Wang, H.-F., Liu, H., Guo, L.-P., Zhou, X., Gali, A., Liu, Z.-H., Wang, Z.-Q., Sun, K., Guo, G.-P., Tang, J.-S., Xu, J.-S., Li, C.-F. & Guo, G.-C. Room temperature coherent manipulation of single-spin qubits in silicon carbide with high readout contrast. Preprint at https://arxiv.org/abs/2005.07876 (2020).
* (35) Exarhos, A .L., Hopper, D. A., Patel, R. N., Doherty, M. W. & Bassett, L. C. Magnetic-field-dependent quantum emission in hexagonal boron nitride at room temperature. _Nature Commun._ 10, 222 (2019).
* (36) Toledo, J. R. _et al._ Electron paramagnetic resonance signature of point defects in neutron-irradiated hexagonal boron nitride. _Phys. Rev. B_ 98, 155203 (2018).
* (37) Gottscholl, A. _et al._ Initialization and read-out of intrinsic spin defects in a van der Waals crystal at room temperature. _Nature Mater._ 19, 540-545 (2020).
* (38) Chejanovsky, N. _et al._ Single spin resonance in a van der Waals embedded paramagnetic defect. Preprint at https://arxiv.org/abs/1906.05903 (2019).
* (39) Mendelson, N. _et al._ Identifying carbon as the source of visible single-photon emission from hexagonal boron nitride. _Nature Mater._ (2020). https://doi.org/10.1038/s41563-020-00850-y.
* (40) Kianinia, M., White, S., Froch, J. E., Bradac, C. & Aharonovich, I. Generation of spin defects in hexagonal boron nitride. _ACS Photon._ 7, 2147-2152 (2020).
* (41) Gao, X. _et al._ Femtosecond Laser Writing of Spin Defects in Hexagonal Boron Nitride. Preprint at https://arxiv.org/abs/2012.03207 (2020).
* (42) Liu, W., Li, Z.-P., Yang, Y.-Z., Yu, S., Meng, Y., Wang, Z.-A., Li, Z.-C., Guo, N.-J., Yan, F.-F, Li, Q., Wang, J.-F., Xu, J.-S., Wang, Y.-T., Tang, J.-S., Li, C.-F. & Guo, G.-C. Temperature-dependent energy-level shifts of Spin Defects in hexagonal Boron Nitride. Preprint at https://arxiv.org/abs/2101.09920 (2021).
* (43) Gottscholl, A. _et al._ Room Temperature Coherent Control of Spin Defects in hexagonal Boron Nitride. Preprint at https://arxiv.org/abs/2010.12513 (2020).
* (44) Moore, A. W. & Singer, L. S. Electron spin resonance in carbon-doped boron nitride. _J. Phys. Chem. Solids_ 33, 343-356 (1972).
* (45) Katzir, A., Suss, J. T., Zunger, A. & Halperin, A. Point defects in hexagonal boron nitride. I. EPR, thermoluminescence, and thermally-stimulated-current measurements. _Phys. Rev. B_ 11, 2370 (1975).
* (46) Fanciulli, M. & Moustakas, T. D. in _Wide Band Gap Semiconductors, Proceedings of the Annual Fall Meeting of the Materials Research Society_ (Materials Research Society, Pittsburgh, PA, 1992).
* (47) Sajid, A., Reimers, J. R. & Ford, M. J. Defect states in hexagonal boron nitride: Assignments of observed properties and prediction of properties relevant to quantum computation. _Phys. Rev. B_ 97, 064101 (2018).
* (48) Abdi, M., Chou, J. P., Gali, A. & Plenio, M. B. Color centers in hexagonal boron nitride monolayers: a group theory and ab initio analysis. _ACS Photon._ 5, 1967-1976 (2018).
* (49) Ivády, V. _et al._ Ab initio theory of the negatively charged boron vacancy qubit in hexagonal boron nitride. _npj Comput. Mater._ 6, 1-6 (2020).
* (50) Sajid, A., Thygesen, K. S., Reimers, J. R. & Ford, M. J. Edge effects on optically detected magnetic resonance of vacancy defects in hexagonal boron nitride. _Commun. Phys._ 3, 1-8 (2020).
# Multi-Instance Pose Networks: Rethinking Top-Down Pose Estimation
## 1 Multi-Instance Modulation Block (MIMB) Code
## 2 Implementation Details
## 3 Diminishing Returns with $\mathbf{N=3,4}$
## 4 Additional Results on COCO, CrowdPose and OCHuman
### 4.1 Additional results on COCO
### 4.2 Additional results on CrowdPose
### 4.3 Additional results on OCHuman
### 4.4 Robustness to Bounding Box Confidence
## 5 Individual Instance Performance
## 6 Ablation: MIMB
## 7 OCPose Dataset
## 8 Qualitative Results
# Reciprocal Landmark Detection and Tracking with Extremely Few Annotations
Jianzhe Lin Student Member, IEEE Ghazal Sahebzamani Student Member, IEEE
Christina Luong Fatemeh Taheri Dezaki Student Member, IEEE Mohammad Jafari
Student Member, IEEE Purang Abolmaesumi Fellow, IEEE and Teresa Tsang
Jianzhe Lin, Ghazal Sahebzamani, Fatemeh Taheri Dezaki, Mohammad Jafari, and
Purang Abolmaesumi are with the Electrical and Computer Engineering
Department, University of British Columbia, Vancouver, BC, V6T 1Z2,
Canada.(e-mail: jianzhelin, ghazal, fatemeht, mohammadj,
purang@ece.ubc.ca). Teresa Tsang is the Associate Head of Research and Co-Acting
Head, Department of Medicine,University of British Columbia. She is Director
of Echo Laboratory at Vancouver General Hospital, Vancouver, BC,
<EMAIL_ADDRESS>Luong is a Clinical Assistant Professor
within the Division of Cardiology at the University of British Columbia. She
is the Head of Stress Echocardiography at Vancouver General Hospital,
Vancouver, BC, Canada. (e-mail: t.tsang@ubc.ca).
###### Abstract
Localization of anatomical landmarks to perform two-dimensional measurements
in echocardiography is part of routine clinical workflow in cardiac disease
diagnosis. Automatic localization of those landmarks is highly desirable to
improve workflow and reduce interobserver variability. Training a machine
learning framework to perform such localization is hindered given the sparse
nature of gold standard labels; only a few percent of cardiac cine series
frames are normally manually labeled for clinical use. In this paper, we propose a
new end-to-end reciprocal detection and tracking model that is specifically
designed to handle the sparse nature of echocardiography labels. The model is
trained using few annotated frames across the entire cardiac cine sequence to
generate consistent detection and tracking of landmarks, and an adversarial
training for the model is proposed to take advantage of these annotated
frames. The superiority of the proposed reciprocal model is demonstrated using
a series of experiments.
localization, gold standard labels, adversarial training, reciprocal model.
## 1 Introduction
Data scarcity and lack of annotation is a general problem for developing
machine learning models in medical imaging. Among various medical imaging
modalities, ultrasound (US) is the most frequently used modality given its
widespread availability, lower cost, and safety since it does not involve
ionizing radiation. Specifically, US imaging, in the form of echocardiography
(echo), is the standard-of-care in cardiac imaging for the detection of heart
disease. Echo examinations are performed across up to 14 standard views from
several acoustic windows on the chest. In this paper, we specifically focus on
the parasternal long axis (PLAX), which is one of the most common views
acquired in point-of-care US for rapid examination of cardiac function
(Fig. 1). Several measurements from PLAX require the localization of
anatomical landmarks across discrete points in the cardiac cycle. Our work
specifically investigates automatic localization of the left ventricle (LV)
internal dimension (LVID), which is routinely used to estimate the ejection
fraction (EF), a strong indicator of cardiac function abnormality. In clinics,
LVID landmarks are determined in two frames of the cardiac cycle, i.e. end-
diastolic and end-systolic. However, such annotation is challenging, especially
for general physicians at the point of care who do not have the experience of
cardiologists. As such, the automation of landmark localization is highly
desirable. However, developing a machine learning model for such automation
has been hindered by the availability of only a sparse set of labeled frames in
cardiac cines. Manually labeling all cardiac frames for a large set of cardiac
cines is virtually impractical, given limited expert time.
Figure 1: Example of PLAX images, as one of the most common standard views
acquired in point-of-care echocardiography. Landmarks identified on the left
ventricle are used to measure the EF, a strong indicator of cardiac disease.
The two landmarks on the inferolateral and anteroseptal walls (IW, AW) are shown
in yellow, while the LVID is the red line; the LVID can be localized from IW and AW.
Instead of manually labeling, we propose a new Reciprocal landmark Detection
and Tracking (RDT) model that enables automation in measurements across the
entire cardiac cycle. The model only uses prior knowledge from sparsely
labeled key frames that are temporally distant in a cardiac cycle. Meanwhile,
we take advantage of temporal coherence of cardiac cine series to impose cycle
consistency in tracking landmarks across unannotated frames that are between
these two annotated frames. To impose consistent detection and tracking of the
landmarks, we propose a reciprocal training as a self-supervision process.
Figure 2: The general flowchart of the proposed detection and tracking model.
Gold standard labels are only available for end-diastolic and end-systolic
frames. The propagation starts from the end-diastolic frame and ends at the
end-systolic frame. The tracking is completed in a cyclic manner. The two
annotated frames serve as weak supervision for the model, while the detection
and tracking results on the unannotated frames reciprocally provide an
additional self-supervision.
In summary, we propose a RDT model, which is weakly supervised by only two
annotated keyframes in an image sequence for model training. For testing, the
model is an end-to-end model that detects the landmark in the first frame,
followed by a tracking process. Our contributions are:
* •
A novel Reciprocal landmark Detection and Tracking (RDT) model is proposed. In
the model, the spatial constraint for detection and temporal coherence for
tracking of cardiac cine series work reciprocally, to generate accurate
localization of landmarks;
* •
The sparse nature of echocardiography labels is handled by the proposed model.
The model is only weakly supervised by two annotated image frames that are
temporally distant from each other. The annotation sparsity is also analyzed
in the experimental part;
* •
A novel adversarial training approach (Ad-T) for optimizing the proposed RDT.
Such training is made possible by introducing four complementary losses, shown
in Fig. 2: the reciprocal loss, motion loss, focal loss, and cycle loss.
Compared with conventional training approaches, Ad-T indirectly achieves
feature augmentation, which is particularly important given the extremely few
annotations. The advantage of Ad-T is highlighted in our ablation study.
## 2 Related Work
As a low-cost, low-risk, and easily accessible modality, cardiac US is widely
used as an assessment tool at the point of care. With the utilization of US
technology in various form factors from cart-based to hand-held devices,
measurement of cardiac structures can be typically conducted by users with
diverse levels of expertise. However, due to US images’ noisy nature, studies
indicate large amounts of inter- and intra-observer variability even among
experts [1]. Such observer variability may easily lead to reporting an
abnormal patient as normal, or vice versa, in borderline cases. This motivates
automated measurement systems, which reduce variability and increase the
reliability of cardiac reports among US operators. Furthermore, automation
saves a considerable amount of
time by improving clinical workflow.
The problem of automated prediction of clinical measurements, such as
segmentation and keypoint localization of anatomical structures, has been
approached from different angles, especially within the deep learning
literature, where leveraging large size training datasets has led to
significant improvements in the accuracy of predicted measurements. Most of
the recent methods have used fully convolutional neural networks (FCN) as
their main building block to predict pixel-level labels [2, 3, 4, 5, 6, 7, 8,
9]. Similar to numerous works in pose detection literature [10, 11], in many
FCN-based methods, the structure localization problem has been approached by
predicting heatmaps corresponding to the regions of interest at some point in
the network [12]. In [12], a convolutional neural network (CNN) architecture
was explored to combine the local appearance of one landmark with the spatial
configuration of other landmarks for multiple landmark localization. However,
these methods are introduced for problems where data consists of individual
frames, rather than temporal sequences. On the contrary, time plays an
important role in the calculation of measurements such as EF in cardiac
cycles. Therefore, the sole use of these methods may not be sufficient for our
problem of interest and other temporally constrained or real-time
applications.
Recent studies have made use of spatio-temporal models to overcome limitations
of previous models in problems dealing with sequential data, and particularly,
echo cine loops [13, 14]. In [15], a center-of-mass layer was introduced on
top of an FCN architecture to regress keypoints directly from the predicted
heatmaps, and a convolutional long short-term memory (CLSTM) network was
utilized to improve temporal consistency. In the
cardiac segmentation domain, many works such as [16] have applied recurrent
neural networks to their pipeline. In [17], multi-scale features are first
extracted with pyramid ConvBlocks, and these features are aggregated using
hierarchical ConvLSTMs. Other types of studies have fed motion information to
their network based on estimating the motion vector between consecutive frames
[18, 19, 20]. Another case of this method is presented by [21], in which
similar to our weakly-supervised problem, motion estimation is obtained from
an optical flow branch to enforce spatio-temporal smoothness over a weakly
supervised segmentation task with sparse labels in the temporal dimension.
However, optical flow estimation might contain drastic errors in consecutive
frames with large variability, especially in US images where the boundaries
are fuzzy, and considerable amounts of noise and artifacts may be present.
Therefore, they may not be suitable for a weakly supervised task where the
labels are distant in the time domain. Moreover, although most of the
mentioned methods take temporal coherence into account, these constraints may
not be directly enforced on the model in a desired way [18, 19, 13, 21, 17,
20, 16]. In order to overcome these shortcomings, [22] proposed a method for
consistent segmentation of echocardiograms in the time dimension, where only
end-diastolic and end-systolic frames have segmentation labels per cycle. This
method consists of two co-learning strategies for segmentation and tracking,
in which the first strategy estimates shape and motion fields in appearance
level, and the second one imposes further temporal consistency in shape level
for the previous segmentation predictions. In our method, however, instead of
a segmentation task, we perform detection and tracking with reciprocal
learning in a landmark detection paradigm in the presence of sparse temporal
labels.
## 3 Approach
Our general RDT framework can be found in Figure 2. The model can be divided
into three parts, the _feature encoder_ (blue color), _detection head_ (orange
color), and _tracking head_ (green color). The feature encoder and detection
head combined can be viewed as a Unet-like model. In the model training phase,
the input of the
RDT model is an echo sequence starting from the end-diastolic frame and ending
at the end-systolic frame. For the detection branch, the input is the whole
frame, while for the tracking branch, the inputs are patches from two
neighboring frames. The output of the network is the predicted pair of
landmark locations defining the LVID.
### 3.1 Problem Formulation
Suppose the frames in the cardiac cine series are represented by
$\\{I_{1},I_{2},I_{3},...,I_{k}\\}$. For model training, we suppose the end-
diastolic frame to be the $1^{st}$ frame, and the end-systolic frame to be the
$k^{th}$ frame. The $1^{st}$ and $k^{th}$ frames are annotated, while
the in-between frames are unannotated. The landmark pairs are represented by
$\\{i_{t},a_{t}\\}$
($i_{t}=\\{x^{i}_{t},y^{i}_{t}\\},a_{t}=\\{x^{a}_{t},y^{a}_{t}\\}$)
corresponding to the landmarks on the inferolateral and anteroseptal walls of
LV in the $t^{th}$ frame, respectively. We use $\phi$ to represent the
_feature encoder_ , and the feature generated for $I_{t}$ is represented by
$\phi_{I_{t}}$. The $\phi_{I_{t}}$ is solely input to the _detection head_ $D$
to get the predicted landmark locations $\\{i_{t}^{D},a_{t}^{D}\\}$. For _tracking
head_ , the input is the cropped features of two consecutive frames. One
serves as the template frame while the other serves as the search frame. For
landmark tracking, the predicted locations start from the $2^{nd}$ frame.
After a cycle forward and backward propagation, the predicted location will
end at the $1^{st}$ frame.
### 3.2 Network Architecture and Losses
#### 3.2.1 Shared Feature Encoder
The feature encoder consists of six 3$\times$3 convolution layers, each
followed by a rectified linear unit (ReLU). The third convolution layer is
with a stride equal to 2. A single feature encoder is sufficient for the
tracking head, so we share this part of the encoder between the tracking and
detection branches. Because the shared encoder is optimized by losses
generated from different heads, the encoded feature is expected to be robust:
its optimization considers both the spatial information exploited by the
detection branch and the temporal information explored by the tracking branch.
#### 3.2.2 Detection Head and Focal Loss
The detection head combined with the feature encoder can be viewed as a
Unet-like structure, which consists of a contracting path and an expansive
path. The contracting path follows the typical architecture of a convolutional
network. The beginning of the detection head is another six layers for feature
generation. There are two similar downsampling steps to the shared feature
encoder. However, we also double the number of feature channels in these two
steps. Every step in the expansive path consists of an upsampling of the
feature map followed by a 2$\times$2 convolution (“up-convolution”). The first
two upsampling layers halve the number of feature channels. We also
concatenate the output of each upsampling layer with a correspondingly cropped
feature map from the contracting path. Each 3$\times$3 convolution is
followed by a ReLU. As padding is applied, there is no cropping in the whole
neural network. For the final two layers used for classification, the first
one is a 3$\times$3 convolution layer, and the second is a 1$\times$1 layer,
which is used to map each 48-component feature vector to the desired number of
landmarks (Here, the number of landmarks is 2). The last layer’s output is a
two-dimension heatmap, and each location of the heatmap represents the
probability of a target landmark.
The focal loss is generated on annotated frames. For each landmark, there is
one ground-truth positive location in its channel of the heatmap (two
landmarks correspond to two channels), and all other locations are negative.
For such ground truth, penalizing the negative locations equally with the
positive ones is not appropriate, so we apply a focal loss. During training, we
reduce the penalty given to negative locations within a radius of the positive
location. We empirically set the radius to be 10 pixels. The amount of penalty
reduction is given by an unnormalized 2D Gaussian
$e^{-(x^{2}+y^{2})/2\sigma^{2}}$, whose center is at the positive location and
whose $\sigma$ is 1/3 of the radius. Let $p_{c_{i,j}}$ be the score at
location (i, j) for landmark c in the predicted heatmap, and let $y_{c_{i,j}}$
be the ground-truth heatmap augmented with the unnormalized Gaussians. We
create a variant of focal loss [23]:
$\mathcal{L}_{\det}=\sum\limits_{c=1}^{2}\sum\limits_{i=1}^{H}\sum\limits_{j=1}^{W}\begin{cases}(1-p_{c_{i,j}})^{\alpha}\log(p_{c_{i,j}})&\text{if }y_{c_{i,j}}=1\\(1-y_{c_{i,j}})^{\beta}(p_{c_{i,j}})^{\alpha}\log(1-p_{c_{i,j}})&\text{otherwise,}\end{cases}$ (1)
where $\alpha$ and $\beta$ are the hyperparameters that control the
contribution of each point (we empirically set $\alpha$ to 2 and $\beta$ to 4
in all experiments). With the Gaussian distribution encoded in the
$y_{c_{i,j}}$, the term $1-{y_{c_{i,j}}}$ is used for reducing the penalty
around the ground truth locations.
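To make the target construction and Eq. 1 concrete, the following is a minimal NumPy sketch for a single landmark channel. The hard radius cutoff, the clipping of predictions, and the negation of the summed log-likelihood (so the quantity can be minimized) are our assumptions rather than details stated in the paper.

```python
import numpy as np

def gaussian_heatmap(h, w, cx, cy, radius=10):
    """Ground-truth heatmap: 1 at the landmark, an unnormalized Gaussian
    penalty reduction (sigma = radius / 3) around it, 0 far away."""
    sigma = radius / 3.0
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    g[(xs - cx) ** 2 + (ys - cy) ** 2 > radius ** 2] = 0.0  # assumed cutoff
    g[cy, cx] = 1.0
    return g

def focal_loss(pred, gt, alpha=2, beta=4):
    """Focal-loss variant of Eq. 1 for one landmark channel."""
    p = np.clip(pred, 1e-6, 1 - 1e-6)      # avoid log(0)
    pos = gt == 1.0
    loss_pos = ((1 - p[pos]) ** alpha) * np.log(p[pos])
    loss_neg = ((1 - gt[~pos]) ** beta) * (p[~pos] ** alpha) * np.log(1 - p[~pos])
    return -(loss_pos.sum() + loss_neg.sum())  # negate to obtain a minimizable loss
```

A prediction concentrated at the true landmark yields a much smaller loss than a diffuse one, while the $(1-y_{c_{i,j}})^{\beta}$ term softens the penalty near the ground truth.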
#### 3.2.3 Tracking Head and Cycle Loss:
For the tracking head, when we get $\phi_{I_{t}}$ and $\phi_{I_{t-1}}$, we
first crop the search patches and the template patches both centering at the
landmark pairs in the two consecutive frames, respectively. The two template
patches for inferolateral/anteroseptal landmarks get concatenated and are
represented by $P_{t-1}$, while the two search patches for
inferolateral/anteroseptal landmarks get concatenated and are represented by
$N_{t}$.
The input for the tracking branch is the template patch $P_{t-1}$ with size
$25\times 25$ and the search patch $N_{t}$ with size $29\times 29$, both
centered at the landmark pair $\\{i_{t-1},a_{t-1}\\}$. The sizes of $P_{t-1}$
and $N_{t}$, labeled in Fig. 2, are set empirically. We formulate the
_tracking head_ $T$ as
$\delta_{i_{t}},\delta_{a_{t}}=T(\phi_{P_{t-1}},\phi_{N_{t}})$.
For the tracking head, we first define a convolutional operation between
$\phi_{P_{t-1}}$ and $\phi_{N_{t}}$ in order to compute the affinity
(similarity) between each sub-patch of $\phi_{N_{t}}$ and $\phi_{P_{t-1}}$. To
be more specific, $\phi_{P_{t-1}}$ and $\phi_{N_{t}}$ are combined by using a
cross-correlation layer
$f(\phi_{N_{t}},\phi_{P_{t-1}})=\phi_{P_{t-1}}*\phi_{N_{t}}.$ (2)
Note that the output of this function is a feature map indicating the
_affinity score_. In practice, one can simply take $\phi_{P_{t-1}}$ as a
kernel matrix and compute a dense convolution on $\phi_{N_{t}}$ within
existing conv-net libraries. The output
feature map is followed by another three fully connected layers (represented
by m in Eq. 3) to predict the landmark motion. Such regression operation is
further formulated as
$T(\phi_{P_{t-1}},\phi_{N_{t}})=\delta_{i_{t}},\delta_{a_{t}}=m(f(\phi_{N_{t}},\phi_{P_{t-1}});\theta_{f}),$
(3)
where $\theta_{f}$ represents the parameters for the fully connected network.
$\delta_{i_{t}}$ and $\delta_{a_{t}}$ are both two-dimensional moves (along
x-axis and y-axis, respectively). The new landmark location is calculated by
adding its previous location to the predicted motion. Such motion prediction
is broadly similar to optical flow, with a new three-layer regression network
incorporated; this regression makes the learning process adaptive.
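The dense cross-correlation of Eq. 2 can be sketched as a sliding inner product. In this toy, the peak of the affinity map stands in for the fully connected regressor $m(\cdot)$ of Eq. 3; this argmax readout is our simplification, not the paper's actual head.

```python
import numpy as np

def affinity_map(feat_search, feat_template):
    """Eq. 2: treat the template feature as a kernel and record its inner
    product with every sub-patch of the search feature."""
    c, hs, ws = feat_search.shape
    _, ht, wt = feat_template.shape
    out = np.empty((hs - ht + 1, ws - wt + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(feat_search[:, i:i + ht, j:j + wt] * feat_template)
    return out

# Toy check: embed the template in an empty search feature at offset (2, 3);
# the affinity peak then recovers that offset.
template = np.full((1, 5, 5), 0.1)
template[0, 2, 2] = 10.0                      # distinctive peak
search = np.zeros((1, 9, 9))
search[:, 2:7, 3:8] = template
aff = affinity_map(search, template)
offset = np.unravel_index(np.argmax(aff), aff.shape)
```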
Figure 3: Optimization of the proposed reciprocal training.
As the tracking process is only supervised by end-diastolic and end-systolic
frames, we introduce the cycle loss and motion loss to supervise the tracking
branch. To model the cycle process, we iteratively apply the tracking head $T$
in a forward manner:
$\begin{array}[]{l}{L_{t}}^{*}=T({\phi_{{P_{t-1}}}},{\phi_{{N_{t}}}})+{L_{t-1}}^{*}\\\
=T({\phi_{{P_{t-1}}}},{\phi_{{N_{t}}}})+T({\phi_{{P_{t-2}}}},{\phi_{{N_{t-1}}}})+{L_{t-2}}^{*}\\\
=T({\phi_{{P_{t-1}}}},{\phi_{{N_{t}}}})+...T({\phi_{{P_{1}}}},{\phi_{{N_{2}}}})+{L_{1}}^{*},\end{array}$
(4)
in which $L_{t}^{*}=\\{i_{t},a_{t}\\}$ represents the predicted location of
landmark pairs in $t^{th}$ frame, while $L_{1}^{*}=\\{i_{1},a_{1}\\}$
represents the ground truth location of landmark pairs in the first annotated
frame. Here "+" represents the element-wise addition between the location of
landmarks in the current frame and the motion calculated in Eq. 3. We use the
same formulation in the backward direction:
$\begin{array}[]{l}{L_{1}}^{*}=T({\phi_{{P_{2}}}},{\phi_{{N_{1}}}})+{L_{2}}^{*}\\\
=T({\phi_{{P_{2}}}},{\phi_{{N_{1}}}})+T({\phi_{{P_{3}}}},{\phi_{{N_{2}}}})+{L_{3}}^{*}\\\
=T({\phi_{{P_{t}}}},{\phi_{{N_{t-1}}}})+...T({\phi_{{P_{2}}}},{\phi_{{N_{1}}}})+{L_{t}}^{*}.\end{array}$
(5)
We use the labeled end-diastolic frame as the beginning frame of the echo cine
series, and the end-systolic frame as the end frame. The motion loss is
defined by the deviation between the predicted landmark pair locations in the
end-systolic frame and their ground truth locations. Suppose the labeled end-
systolic frame is the $k^{th}$ frame; after forward propagation, the motion
loss $\mathcal{L}_{motion}^{k}$ is defined as
$\begin{array}[]{l}\mathcal{L}_{motion}^{k}=\mathcal{L}_{1\rightarrow
k}=\|{L_{k}}-{L_{k}}^{*}\|^{2}\\\
=\|{L_{k}}-(T({\phi_{{P_{k-1}}}},{\phi_{{N_{k}}}})+...T({\phi_{{P_{1}}}},{\phi_{{N_{2}}}})+{L_{1}})\|^{2}.\end{array}$
(6)
The forward propagation is followed by backward propagation that ends at the
end-diastolic frame. By combining Eq. 4 and Eq. 5, the current predicted
landmark pair location in the diastolic frame $L_{1}^{*}$ can actually be
represented by its ground truth location $L_{1}$, and we use the deviation
between these two terms to represent the cycle loss as follow:
$\begin{array}[]{l}\mathcal{L}_{cycle}^{k}=\mathcal{L}_{1\rightarrow
k\rightarrow 1}=\|{L_{1}}-{L_{1}}^{*}\|^{2}\\\
=\|{L_{1}}-{L_{k}}^{*}+{L_{k}}^{*}-{L_{1}}^{*}\|^{2}\\\
=\|(T({\phi_{{P_{k-1}}}},{\phi_{{N_{k}}}})+...T({\phi_{{P_{1}}}},{\phi_{{N_{2}}}}))+\\\
(T({\phi_{{P_{k}}}},{\phi_{{N_{k-1}}}})+...T({\phi_{{P_{2}}}},{\phi_{{N_{1}}}}))\|^{2}.\end{array}$
(7)
Finally, the cycle loss can be simplified as
$\mathcal{L}_{cycle}^{k}=-(\mathcal{L}_{motion}^{k}+\mathcal{L}_{motion}^{1}).$
(8)
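The forward and backward chains of Eqs. 4 and 5 amount to summing predicted per-frame shifts onto a starting landmark pair, from which the motion loss (Eq. 6) and cycle loss (Eq. 7) follow. A minimal NumPy sketch, with entirely made-up motion values for illustration:

```python
import numpy as np

def propagate(start, motions):
    """Accumulate per-frame predicted motions onto a starting landmark pair,
    as in the chained expansions of Eq. 4 (forward) and Eq. 5 (backward)."""
    loc = np.asarray(start, dtype=float)
    for d in motions:
        loc = loc + d          # the element-wise "+" of the paper
    return loc

# Ground-truth landmark pairs (rows: inferolateral, anteroseptal; cols: x, y).
L1 = np.array([[10.0, 20.0], [30.0, 40.0]])   # end-diastolic (frame 1)
Lk = np.array([[12.0, 18.0], [31.0, 43.0]])   # end-systolic (frame k)

# Two predicted per-frame shifts; an ideal backward pass undoes them exactly.
forward = [np.array([[1.0, -1.0], [0.5, 1.5]]),
           np.array([[1.0, -1.0], [0.5, 1.5]])]
backward = [-d for d in reversed(forward)]

Lk_pred = propagate(L1, forward)
motion_loss = np.sum((Lk - Lk_pred) ** 2)     # Eq. 6
L1_pred = propagate(Lk_pred, backward)
cycle_loss = np.sum((L1 - L1_pred) ** 2)      # Eq. 7
```

With these shifts the forward pass lands exactly on $L_k$ and the backward pass returns to $L_1$, so both losses vanish; any tracking drift shows up as a positive loss.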
#### 3.2.4 Reciprocal Loss for Unannotated Frames:
The motion loss, cycle loss, and focal loss above are applied to the
annotated frames, whereas the reciprocal loss is proposed only for the
unannotated frames and can be viewed as self-supervision. In the training
phase, only the end-diastolic and end-systolic frames are annotated while the
in-between frames are unannotated. For these unannotated frames, we can
generate both the $i_{t}^{D},a_{t}^{D}=max(D(\phi_{I_{t}}))$ and the
$i_{t}^{T},a_{t}^{T}=T(\phi_{P_{t-1}},\phi_{N_{t}})+i_{t-1}^{T},a_{t-1}^{T}$.
Although no annotation is assigned, the two predicted landmark pair locations
are assumed to be the same, and the discrepancy between them forms the
reciprocal loss. The sampling rate for the reciprocal loss is set to 3,
meaning the loss is generated every three frames. As $D(\phi_{I_{t}})$ is a
heatmap with each location indicating the probability of target location, we
define the reciprocal loss similarly to the focal loss. We assume
$i_{t}^{T}$ and $a_{t}^{T}$ to be the only positive locations in frame $t$,
each augmented with a 2D Gaussian distribution centered at the positive
location. The heatmap from the detection branch provides the predicted
locations. The resulting reciprocal loss ${\mathcal{L}_{rec}(D,T)}$ is the
same as defined in Eq. 1.
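The self-supervision above can be sketched by treating the tracked location as a pseudo ground truth for the detection heatmap. This is a single-landmark sketch; the clipping constant and the reuse of the Eq. 1 focal form with a Gaussian target are our assumptions.

```python
import numpy as np

def reciprocal_loss(det_heatmap, tracked_xy, sigma=10 / 3, alpha=2, beta=4):
    """Score the detection heatmap against a Gaussian target centered at the
    tracked landmark location, using the same focal form as Eq. 1."""
    h, w = det_heatmap.shape
    cx, cy = tracked_xy
    ys, xs = np.mgrid[0:h, 0:w]
    target = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    pos = (xs == cx) & (ys == cy)           # tracked point = pseudo positive
    p = np.clip(det_heatmap, 1e-6, 1 - 1e-6)
    loss = -np.sum((1 - p[pos]) ** alpha * np.log(p[pos]))
    loss -= np.sum((1 - target[~pos]) ** beta * p[~pos] ** alpha
                   * np.log(1 - p[~pos]))
    return loss
```

A detection heatmap that peaks at the tracked location incurs a near-zero loss; a peak elsewhere is penalized from both sides, which is what drives the two branches to agree.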
## 4 Optimization
The basic idea for the proposed RDT model is to create a reciprocal learning
between the detection task and the tracking task, as the detection task mainly
focuses on the spatial information of a single frame, while the tracking task
considers the temporal correlation between consecutive frames. The detected
landmark pair locations and the tracked landmark pair locations are assumed to
be the same, so we use the discrepancy between the two branches to optimize
both the feature encoder $\phi$ and the detection/tracking heads.
We propose a novel adversarial optimization mechanism. The motivation is
feature augmentation, as the amount of training data is very limited. Trained
on the augmented features, both the detection head D and the tracking head T
in Fig. 3
can be more robust. In Fig. 3, we use blue color to represent the feature
distribution of the target landmark pair, and orange color to represent the
background. In order to generate a more diverse distribution of features from
unannotated frames, we propose to utilize the disagreement between D and T on
predictions for unannotated frames. We assume D and T can predict the
locations in annotated frames correctly. Here, we use a key intuition: the
feature distribution of unannotated data outside the support of the annotated
ones is likely to be predicted differently by D and T. Black lines denote this
region as in Fig. 3 (Discrepancy Region). Therefore, if we can measure the
disagreement between D and T and train $\phi$ to maximize the disagreement,
the encoder will generate more unknown feature distributions outside the
support of the annotated ones. The disagreement here is our formerly
formulated reciprocal loss ${\mathcal{L}_{rec}(D,T)}$. This goal can be
achieved by iterative steps as in Fig. 4. We first update the feature encoder
to maximize the ${\mathcal{L}_{rec}(D,T)}$. Then we freeze this encoder part,
and update D and T to minimize ${\mathcal{L}_{rec}(D,T)}$, in order to obtain
uniform predictions for the newly generated unknown features from the feature
encoder. Detailed optimization steps are described as
follows.
### 4.1 Training Steps:
We need to train D and T, which take inputs from $\phi$. Both D and T must
predict the annotated landmark pair locations correctly. We solve this problem
in three steps, as can be found in Fig. 4.
Step A. First, we train D, T, and $\phi$ to predict the landmark pairs of
annotated frames correctly. We train the networks to minimize three losses
applied to annotated frames. The objective is as follows:
$\mathop{\min}\limits_{\phi,D,T}({\mathcal{L}_{\det}}+\mathcal{L}_{motion}^{k}+\mathcal{L}_{cycle}^{k});$
(9)
Step B. In this step, we train the feature encoder $\phi$ for fixed D and T.
By training the encoder to increase the discrepancy, more unknown feature
distributions different from the annotated data can be generated. Note that
this step only uses the unannotated data. The objective can be formulated as:
$\mathop{\max}\limits_{\phi}({\mathcal{L}_{rec}}({\rm{D}},T));$ (10)
Step C. We train D and T to minimize the discrepancy with a fixed $\phi$. As
this step aims at uniform and correct detection/tracking results, it is
empirically repeated three times for the same mini-batch. This
setting achieves a trade-off between the encoder and the heads (detection,
tracking). This step is applied on both annotated and unannotated frames, to
get the best model weights of detection/tracking heads for all the existing
features. The objective is as follows:
$\mathop{\min}\limits_{D,T}({\mathcal{L}_{\det}}+\mathcal{L}_{motion}^{k}+\mathcal{L}_{cycle}^{k}+{\mathcal{L}_{rec}}({\rm{D}},T)).$
(11)
These three steps are repeated until convergence. The weights for the
different losses are empirically set to 1 in both Step A and Step C. Based on
our experience,
the order of the three steps is not essential. However, our primary concern is
to train D, T, and $\phi$ in an adversarial manner.
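The three-step scheme can be illustrated with scalar stand-ins for $\phi$, D, and T and analytic gradients. The quadratic losses, data, and learning rate here are entirely illustrative toys, not the paper's networks:

```python
# Scalar toy of the alternating optimization: phi, D, T act by multiplication.
x, y = 1.0, 3.0                  # one "annotated" sample with target y
phi, D, T = 1.0, 0.5, 2.0
lr = 0.01

def l_sup(phi, D, T):            # stand-in for the supervised losses of Step A
    return (D * phi * x - y) ** 2 + (T * phi * x - y) ** 2

def l_rec(phi, D, T):            # head disagreement, the "reciprocal loss"
    return (phi * x) ** 2 * (D - T) ** 2

l_sup0, l_rec0 = l_sup(phi, D, T), l_rec(phi, D, T)
for _ in range(300):
    # Step A: minimize the supervised loss over phi, D, T.
    eD = 2 * (D * phi * x - y) * x
    eT = 2 * (T * phi * x - y) * x
    g_phi, g_D, g_T = eD * D + eT * T, eD * phi, eT * phi
    phi, D, T = phi - lr * g_phi, D - lr * g_D, T - lr * g_T
    # Step B: *maximize* the head disagreement over phi (heads frozen).
    phi += lr * 2 * phi * (x * (D - T)) ** 2
    # Step C: minimize the disagreement over D, T (encoder frozen), 3 times.
    for _ in range(3):
        g = 2 * (phi * x) ** 2 * (D - T)
        D, T = D - lr * g, T + lr * g
```

Step B pushes the encoder toward features on which the heads disagree, and Step C pulls the heads back into agreement, mirroring the alternation of Fig. 4.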
Figure 4: Stepwise model training process.
## 5 Experiments
### 5.1 Dataset and Setup
Our echocardiography dataset is collected from our local hospital, following
approvals from the Medical Research Ethics Board in coordination with the
privacy office. Data were randomly split into mutually exclusive training and
testing datasets, where no patient data were shared across the two datasets.
The training dataset includes 995 echo cine series with 1990 annotated frames,
while the testing dataset includes 224 sequences with 448 annotated frames.
Sequence lengths vary from tens to hundreds of frames, and the number of
frames between the end-diastolic and end-systolic phases differs per cine
sample, ranging from 5 to 20 frames.
We run the experiments on an 8x Tesla V100 server; the CPU is an Intel(R)
Xeon(R) E5-2698 v4. All comparison methods are trained until
convergence. For the proposed method trained by a single GPU, the model
converges at 30 epochs, and the running time is 31min/epoch.
### 5.2 Quantitative Results
EF in PLAX view is estimated based on the distance between inferolateral and
anteroseptal landmarks, i.e. LVID. We use the length error (LE) of LVID as
well as the location deviation error (LDE) of inferolateral/anteroseptal
landmarks (abbreviated as IL/AL) as key errors. LDE is also the most widely
used criterion for detection/tracking methods. The comparison is mainly made
among the proposed method, recently proposed frame-by-frame detection-based
methods (Modified U-Net [24], CenterNet [25]), and a regular
detection+tracking method (Unet+C-Ynet [26]). The Unet here has the same
structure as in the proposed method; the Unet and C-Ynet are trained
separately. A general comparison can be found in Table 1.
Table 1: Statistical comparison with the state-of-the-art methods. Errors (cm) for different sequences are sorted in ascending order. Evaluation criteria are the Length Error (LE) and the Location Deviation Error (LDE) of the Inferolateral/Anteroseptal Landmarks (IL/AL).
Method | Frame | Criterion (cm) | Mean$\pm$std | min | $25\%$ | Median | $75\%$ | $90\%$ | max
---|---|---|---|---|---|---|---|---|---
| | LDE of AL | 1.28 $\pm$ 1.43 | 0.01 | 0.51 | 0.96 | 1.60 | 2.26 | 11.71
Proposed RDT | end-diastolic | LDE of IL | 1.16$\pm$1.27 | 0.06 | 0.40 | 0.88 | 1.48 | 2.15 | 10.67
| | LE of LVID | 0.81$\pm$1.04 | 0.00 | 0.25 | 0.51 | 1.00 | 1.63 | 8.07
CenterNet | | LDE of AL | 1.79 $\pm$ 1.96 | 0.07 | 0.72 | 1.21 | 2.04 | 3.81 | 13.62
[25] | end-diastolic | LDE of IL | 1.71$\pm$1.82 | 0.09 | 0.61 | 1.34 | 2.49 | 3.13 | 12.60
| | LE of LVID | 1.22$\pm$1.94 | 0.03 | 0.44 | 1.15 | 1.81 | 2.24 | 10.33
Unet+C-Ynet | | LDE of AL | 2.29$\pm$3.02 | 0.05 | 0.68 | 1.41 | 2.35 | 5.21 | 19.01
[26] | end-diastolic | LDE of IL | 3.72$\pm$4.05 | 0.07 | 0.78 | 1.91 | 5.26 | 10.89 | 18.81
| | LE of LVID | 2.39$\pm$2.61 | 0.00 | 0.64 | 1.38 | 3.28 | 5.77 | 12.16
Modified U-Net | | LDE of AL | 5.15$\pm$4.86 | 0.10 | 1.27 | 2.99 | 8.18 | 12.72 | 19.76
[24] | end-diastolic | LDE of IL | 5.36$\pm$4.74 | 0.03 | 1.01 | 4.13 | 8.86 | 12.31 | 17.22
| | LE of LVID | 3.40 $\pm$ 3.02 | 0.02 | 0.97 | 2.49 | 5.07 | 7.63 | 15.17
| | LDE of AL | 1.44 $\pm$ 1.30 | 0.06 | 0.66 | 1.16 | 1.75 | 2.67 | 10.37
Proposed RDT | end-systolic | LDE of IL | 1.13$\pm$1.22 | 0.06 | 0.51 | 0.90 | 1.25 | 1.89 | 10.10
| | LE of LVID | 1.09$\pm$0.95 | 0.00 | 0.37 | 0.90 | 1.51 | 2.43 | 5.81
CenterNet | | LDE of AL | 1.90 $\pm$ 1.64 | 0.09 | 0.98 | 1.73 | 2.98 | 3.75 | 13.57
[25] | end-systolic | LDE of IL | 2.03$\pm$2.21 | 0.12 | 0.92 | 1.98 | 3.68 | 4.42 | 14.54
| | LE of LVID | 1.83$\pm$1.48 | 0.06 | 0.95 | 1.78 | 2.93 | 4.31 | 9.25
Unet+C-Ynet | | LDE of AL | 2.78$\pm$2.87 | 0.14 | 0.98 | 1.82 | 3.29 | 5.85 | 19.8
[26] | end-systolic | LDE of IL | 3.42$\pm$3.80 | 0.06 | 0.78 | 1.74 | 4.71 | 9.73 | 17.33
| | LE of LVID | 2.45$\pm$2.61 | 0.00 | 0.73 | 1.41 | 3.02 | 5.14 | 11.51
Modified U-Net | | LDE of AL | 5.05$\pm$4.34 | 0.16 | 1.42 | 2.90 | 8.47 | 12.04 | 16.79
[24] | end-systolic | LDE of IL | 5.72$\pm$4.59 | 0.03 | 1.70 | 4.65 | 9.26 | 12.24 | 17.91
| | LE of LVID | 3.87$\pm$3.21 | 0.03 | 1.64 | 3.02 | 5.18 | 8.39 | 19.38
Comparison with state-of-the-art methods is reported in Table 1. The results
verify that our detection on the end-diastolic frame performs best among the
compared methods. The results also show that errors in the end-systolic and
end-diastolic frames are of the same range, suggesting that the tracking error
does not accumulate over the in-between unannotated frames.
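For reference, the two evaluation criteria can be computed from landmark coordinates as follows. We assume LE is the absolute difference between the predicted and ground-truth LVID lengths, which the text does not spell out:

```python
import numpy as np

def lde(pred, gt):
    """Location Deviation Error: Euclidean distance between a predicted
    landmark and its ground truth."""
    return float(np.linalg.norm(np.asarray(pred) - np.asarray(gt)))

def lvid_length_error(pred_il, pred_al, gt_il, gt_al):
    """Length Error of LVID: |predicted IL-AL distance - ground-truth one|."""
    pred_len = np.linalg.norm(np.asarray(pred_il) - np.asarray(pred_al))
    gt_len = np.linalg.norm(np.asarray(gt_il) - np.asarray(gt_al))
    return float(abs(pred_len - gt_len))
```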
Figure 5: Four examples of frames with median LDE. The predicted LVID is the
orange color line with landmarks in yellow color, while the ground truth LVID
is the green color line with landmarks in red color.
### 5.3 Qualitative Results with Visualized Examples
Fig. 5 shows four examples with the location error around the median. Here the
Location Deviation Error (LDE) is the average location error of
Inferolateral/Anteroseptal Landmarks (AL and IL), as there are no cases in our
test data for which the AL and IL are both at the median. For the end-systolic
frame, the average LDE is 0.95$\pm$0.68 cm (mean$\pm$std), with a median of
0.85 cm. For the end-diastolic frame, the average LDE is 0.91$\pm$0.66 cm
(mean$\pm$std), with a median of 0.82 cm.
### 5.4 Ablation Study
In our ablation study, we verify the effectiveness of the adversarial training
(Ad-T), as well as the reciprocal loss (Rec-L). Without the reciprocal loss,
the structural information of in-between unannotated frames is ignored. As
Ad-T is based on Rec-L, without Rec-L the Ad-T cannot be achieved. A detailed
comparison can be found in Table 2.
Table 2: Ablation study for Ad-T and Rec-L.
Frame | Criterion (cm) | Ad-T | Rec-L | mean | median
---|---|---|---|---|---
| LDE-AL | $\color[rgb]{0,0,1}{\times}$ | $\color[rgb]{0,0,1}{\times}$ | 3.22 | 4.50
| LDE-IL | $\color[rgb]{0,0,1}{\times}$ | $\color[rgb]{0,0,1}{\times}$ | 5.02 | 6.74
| LE | $\color[rgb]{0,0,1}{\times}$ | $\color[rgb]{0,0,1}{\times}$ | 2.65 | 3.53
| LDE-AL | $\color[rgb]{0,0,1}{\times}$ | $\color[rgb]{1,0,0}{\checkmark}$ | 1.70 | 1.95
ED | LDE-IL | $\color[rgb]{0,0,1}{\times}$ | $\color[rgb]{1,0,0}{\checkmark}$ | 1.76 | 2.02
| LE | $\color[rgb]{0,0,1}{\times}$ | $\color[rgb]{1,0,0}{\checkmark}$ | 1.00 | 1.04
| LDE-AL | $\color[rgb]{1,0,0}{\checkmark}$ | $\color[rgb]{1,0,0}{\checkmark}$ | 1.28 | 0.96
| LDE-IL | $\color[rgb]{1,0,0}{\checkmark}$ | $\color[rgb]{1,0,0}{\checkmark}$ | 1.16 | 0.88
| LE | $\color[rgb]{1,0,0}{\checkmark}$ | $\color[rgb]{1,0,0}{\checkmark}$ | 0.81 | 0.51
| LDE-AL | $\color[rgb]{0,0,1}{\times}$ | $\color[rgb]{0,0,1}{\times}$ | 3.17 | 3.85
| LDE-IL | $\color[rgb]{0,0,1}{\times}$ | $\color[rgb]{0,0,1}{\times}$ | 4.79 | 6.94
| LE | $\color[rgb]{0,0,1}{\times}$ | $\color[rgb]{0,0,1}{\times}$ | 2.36 | 3.47
| LDE-AL | $\color[rgb]{0,0,1}{\times}$ | $\color[rgb]{1,0,0}{\checkmark}$ | 1.76 | 1.92
ES | LDE-IL | $\color[rgb]{0,0,1}{\times}$ | $\color[rgb]{1,0,0}{\checkmark}$ | 1.88 | 1.94
| LE | $\color[rgb]{0,0,1}{\times}$ | $\color[rgb]{1,0,0}{\checkmark}$ | 1.41 | 1.54
| LDE-AL | $\color[rgb]{1,0,0}{\checkmark}$ | $\color[rgb]{1,0,0}{\checkmark}$ | 1.44 | 1.75
| LDE-IL | $\color[rgb]{1,0,0}{\checkmark}$ | $\color[rgb]{1,0,0}{\checkmark}$ | 1.13 | 1.25
| LE | $\color[rgb]{1,0,0}{\checkmark}$ | $\color[rgb]{1,0,0}{\checkmark}$ | 1.09 | 0.90
Table 2 shows that the reciprocal loss substantially improves the framework:
adding it decreases the errors by around 2 cm across all criteria. The results
improve further when the model is trained with our proposed Ad-T method.
### 5.5 Model Extension
We also test the extension ability of the proposed model, using only one
annotated frame (end-diastolic) per sequence for training. Such training
starts from the annotated frame and then tracks in a cyclic manner. The motion
loss and the focal loss in the last frame are not available in such a model;
it is trained mainly by the reciprocal loss on the unannotated frames,
together with the focal loss and the cycle loss on the annotated frame (i.e.,
the end-diastolic frame). Detailed results are reported in Table 3. We use the
median of the two LDEs (AL and IL) to represent the LDE.
Table 3: Statistical analysis for the model trained with one frame only.
Frame | Criterion (cm) | Mean$\pm$std | min | Median
---|---|---|---|---
ED | LDE | 1.59$\pm$1.85 | 0.04 | 0.95
| LE | 1.04$\pm$1.20 | 0.02 | 0.68
ES | LDE | 1.76$\pm$1.49 | 0.10 | 1.34
| LE | 1.77$\pm$1.39 | 0.01 | 1.49
Even with only one frame annotated, the proposed model achieves satisfactory
results compared with the state-of-the-art. However, results on the
end-systolic frame are much worse than on the end-diastolic frame, indicating
that the second annotated frame strongly affects the tracking branch.
Table 4: Analysis for the annotation sparsity. Annotation rate | 5-8 | 8-12 | 12-16 | 16-20
---|---|---|---|---
Average LDE (cm)/ sequence | 0.23 | 0.26 | 0.24 | 0.27
Average LDE (cm) /frame | 0.031 | 0.025 | 0.021 | 0.018
Table 5: Analysis for the sparsity of reciprocal loss. Loss rate | 2 | 3 | 4 | 5
---|---|---|---|---
Average LDE (cm)/ sequence | 0.46 | 0.25 | 0.29 | 0.38
### 5.6 Annotation Sparsity Analysis
Sparsity of annotation: Since the number of in-between unannotated frames is random, ranging from 5 to 20, it may influence the tracking branch, while the detection branch should be unaffected. Therefore, to analyze the influence of annotation sparsity on tracking, we start from the ground-truth location in the first frame (end-diastolic) and track forward; the model itself is unchanged. We take the predicted location in the second annotated frame (end-systolic) and use the location deviation errors (LE) on this frame, for sequences with different annotation sparsity, for evaluation. Results can be found in Table 4.
We observe that the proposed method is not affected by the annotation sparsity: the average LDEs for different sequences are roughly the same, around 0.25 cm. The average LDE per frame is the average LDE divided by the number of in-between frames. As the reciprocal loss is generated every three frames, whenever a large error arises in the tracking branch, the discrepancy between the detected and tracked locations is also large, which produces a significant reciprocal loss. This loss compensates for large annotation sparsity.
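The sparse application of the reciprocal loss, every third in-between frame, can be sketched in a few lines; the function and variable names below are hypothetical placeholders, not the authors' implementation:

```python
def reciprocal_loss(detected, tracked):
    # squared Euclidean distance between the detected and tracked locations
    return sum((d - t) ** 2 for d, t in zip(detected, tracked))

def sequence_loss(detections, tracks, rate=3):
    # accumulate the penalty only on every `rate`-th frame (frames 0, rate, 2*rate, ...)
    total = 0.0
    for i, (d, t) in enumerate(zip(detections, tracks)):
        if i % rate == 0:
            total += reciprocal_loss(d, t)
    return total

# toy 2D landmark locations for four consecutive frames
dets = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]
trks = [(0.0, 0.0), (1.5, 1.0), (2.0, 2.5), (3.5, 3.0)]
print(sequence_loss(dets, trks, rate=3))  # only frames 0 and 3 contribute -> 0.25
```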
Sparsity of reciprocal loss: The frequency at which the reciprocal loss is applied to the in-between unannotated frames is also important. A comparison can be found in Table 5: the results are best only when the reciprocal loss is applied every three frames, so we empirically set this rate to 3. The reciprocal loss should be applied neither too densely nor too sparsely.
### 5.7 Failed Cases Analysis
There are still a few failure cases in our current results. An example is shown in Fig. 6, the case with the maximum LDE (LDE of AL: 5.02 cm, LDE of IL: 8.07 cm, LD: 0.48 cm). We hypothesize the following reason for this failure: during image acquisition, the operator appears to have zoomed the ultrasound image onto the LV, so no other cardiac chamber is clearly visible and the appearance of the image is substantially different from a typical PLAX image. A much larger training data set would be required to avoid failures in such cases.
If we take a 2 cm error in the average LDE (average of IL and AL) as the critical point for failure, the failure percentage is $6.1\%$ for the end-systolic frame and $3.7\%$ for the end-diastolic frame. These are promising results compared with [24], whose failure rate is $6.7\%$. We note that the model in [24] is trained on densely annotated sequences, rather than the sparsely annotated sequences used in our method.
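The failure-percentage criterion above can be expressed as a short helper; this is a sketch with made-up LDE values, not the study's data:

```python
def failure_rate(avg_ldes_cm, threshold_cm=2.0):
    # percentage of cases whose average LDE exceeds the failure threshold
    fails = sum(1 for e in avg_ldes_cm if e > threshold_cm)
    return 100.0 * fails / len(avg_ldes_cm)

# illustrative average-LDE values (cm) for four test cases
print(failure_rate([0.5, 1.2, 2.5, 0.9]))  # one of four cases fails -> 25.0
```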
Figure 6: An example of a discrepant case. This PLAX view is suboptimal and was imaged from a low imaging window on the chest, resulting in an altered axis of the LV. The ground-truth LVID label (shown in green), used clinically, has been placed in an atypical position based on operator judgment (closer to the apex) to account for the altered geometry. The predicted LVID is the orange line with landmarks in yellow. It should be noted that despite the relatively large LDE error, both measurements are likely clinically acceptable, as the distance between AL and IL, rather than their absolute image coordinates, is the main metric used to measure EF.
### 5.8 Ejection Fraction Error Analysis
We also analyze the proposed method from a medical perspective by calculating the Ejection Fraction error on the testing dataset. Ejection fraction (EF) is a measurement, expressed as a percentage, of how much blood the left ventricle pumps out with each contraction. It is formulated as
$EF=100\times(ED_{vol}-ES_{vol})/ED_{vol},$ (12)
in which $ED_{vol}$ and $ES_{vol}$ are the end-diastolic and end-systolic volumes respectively, which are given by the Teichholz formula as below
$ED_{vol}=7\times EDD^{3}/(2.4+EDD),$ (13) $ES_{vol}=7\times ESD^{3}/(2.4+ESD).$ (14)
Here EDD and ESD denote the length of the LV in the end-diastolic and the end-systolic frame, respectively. The EF error is the difference between the predicted EF and the ground-truth EF. The calculated results can be found in Table 6, and the EF scatter plot is shown in Fig. 7.
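The EF computation can be checked with a few lines of code. Note that the standard Teichholz formula involves the cube of the LV dimension, $V = 7D^{3}/(2.4+D)$; the example dimensions below are illustrative, not from the paper:

```python
def teichholz_volume(d_cm):
    # standard Teichholz formula: V = 7 * D**3 / (2.4 + D), with D in cm, V in mL
    return 7.0 * d_cm ** 3 / (2.4 + d_cm)

def ejection_fraction(edd_cm, esd_cm):
    # Eq. (12): EF = 100 * (EDvol - ESvol) / EDvol
    ed_vol = teichholz_volume(edd_cm)
    es_vol = teichholz_volume(esd_cm)
    return 100.0 * (ed_vol - es_vol) / ed_vol

# illustrative EDD = 5.0 cm, ESD = 3.5 cm
print(round(ejection_fraction(5.0, 3.5), 1))  # -> 57.0
```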
Table 6: Statistical result of EF error for the proposed method. Results | Mean$\pm$ std | min | Median | 90%
---|---|---|---|---
EF prediction | 37.08 $\pm$17.39 | 0.59 | 37.25 | 63.75
EF error | 19.00 $\pm$26.25 | 0.02 | 12.28 | 39.21
Figure 7: The EF scatter plot for the proposed method.
## 6 Conclusion
In this paper, we proposed a novel reciprocal landmark detection and tracking model, designed to tackle the data and annotation scarcity problem in ultrasound sequences. The model achieves reliable landmark detection and tracking with only around 2,000 annotated frames (995 sequences) for training, with only two key frames annotated per sequence. The model is optimized via a novel adversarial training scheme, which better exploits the limited information in the training data. The comparison with the state of the art and the analysis of the results verify the effectiveness of our proposed method.
## References
* [1] A. Thorstensen, H. Dalen, B. H. Amundsen, S. A. Aase, and A. Stoylen, “Reproducibility in echocardiographic assessment of the left ventricular global and regional function, the hunt study,” _European Journal of Echocardiography_ , vol. 11, no. 2, pp. 149–156, 2010.
* [2] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in _International Conference on Medical Image Computing and Computer-Assisted Intervention_. Springer, 2015, pp. 234–241.
* [3] F. Milletari, N. Navab, and S.-A. Ahmadi, “V-net: Fully convolutional neural networks for volumetric medical image segmentation,” in _2016 Fourth International Conference on 3D Vision (3DV)_. IEEE, 2016, pp. 565–571.
* [4] M. Avendi, A. Kheradvar, and H. Jafarkhani, “A combined deep-learning and deformable-model approach to fully automatic segmentation of the left ventricle in cardiac mri,” _Medical Image Analysis_ , vol. 30, pp. 108–119, 2016.
* [5] O. Oktay, E. Ferrante, K. Kamnitsas, M. Heinrich, W. Bai, J. Caballero, S. A. Cook, A. De Marvao, T. Dawes, D. P. O‘Regan _et al._ , “Anatomically constrained neural networks (acnns): application to cardiac image enhancement and segmentation,” _IEEE Transactions on Medical Imaging_ , vol. 37, no. 2, pp. 384–395, 2017.
* [6] T. A. Ngo, Z. Lu, and G. Carneiro, “Combining deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance,” _Medical Image Analysis_ , vol. 35, pp. 159–171, 2017.
* [7] W. Bai, O. Oktay, M. Sinclair, H. Suzuki, M. Rajchl, G. Tarroni, B. Glocker, A. King, P. M. Matthews, and D. Rueckert, “Semi-supervised learning for network-based cardiac mr image segmentation,” in _International Conference on Medical Image Computing and Computer-Assisted Intervention_. Springer, 2017, pp. 253–260.
* [8] W. Bai, M. Sinclair, G. Tarroni, O. Oktay, M. Rajchl, G. Vaillant, A. M. Lee, N. Aung, E. Lukaschuk, M. M. Sanghvi _et al._ , “Automated cardiovascular magnetic resonance image analysis with fully convolutional networks,” _Journal of Cardiovascular Magnetic Resonance_ , vol. 20, no. 1, p. 65, 2018.
* [9] L. Yao, J. Prosky, E. Poblenz, B. Covington, and K. Lyman, “Weakly supervised medical diagnosis and localization from multiple resolutions,” _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2018.
* [10] T. Pfister, J. Charles, and A. Zisserman, “Flowing convnets for human pose estimation in videos,” in _Proceedings of the IEEE International Conference on Computer Vision_ , 2015, pp. 1913–1921.
* [11] J. J. Tompson, A. Jain, Y. LeCun, and C. Bregler, “Joint training of a convolutional network and a graphical model for human pose estimation,” in _Advances in Neural Information Processing Systems_ , 2014, pp. 1799–1807.
* [12] C. Payer, D. Štern, H. Bischof, and M. Urschler, “Regressing heatmaps for multiple landmark localization using cnns,” in _International Conference on Medical Image Computing and Computer-Assisted Intervention_. Springer, 2016, pp. 230–238.
* [13] N. Savioli, M. S. Vieira, P. Lamata, and G. Montana, “Automated segmentation on the entire cardiac cycle using a deep learning work-flow,” in _2018 Fifth International Conference on Social Networks Analysis, Management and Security (SNAMS)_. IEEE, 2018, pp. 153–158.
* [14] F. T. Dezaki, Z. Liao, C. Luong, H. Girgis, N. Dhungel, A. H. Abdi, D. Behnami, K. Gin, R. Rohling, and P. Abolmaesumi, “Cardiac phase detection in echocardiograms with densely gated recurrent neural networks and global extrema loss,” _IEEE Transactions on Medical Imaging_ , vol. 38, no. 8, pp. 1821–1832, 2018.
* [15] M. Sofka, F. Milletari, J. Jia, and A. Rothberg, “Fully convolutional regression network for accurate detection of measurement points,” in _Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support_. Springer, 2017, pp. 258–266.
* [16] X. Du, S. Yin, R. Tang, Y. Zhang, and S. Li, “Cardiac-deepied: Automatic pixel-level deep segmentation for cardiac bi-ventricle using improved end-to-end encoder-decoder network,” _IEEE Journal of Translational Engineering in Health and Medicine_ , vol. 7, pp. 1–10, 2019.
* [17] M. Li, W. Zhang, G. Yang, C. Wang, H. Zhang, H. Liu, W. Zheng, and S. Li, “Recurrent aggregation learning for multi-view echocardiographic sequences segmentation,” in _International Conference on Medical Image Computing and Computer-Assisted Intervention_. Springer, 2019, pp. 678–686.
* [18] W. Yan, Y. Wang, Z. Li, R. J. Van Der Geest, and Q. Tao, “Left ventricle segmentation via optical-flow-net from short-axis cine mri: preserving the temporal coherence of cardiac motion,” in _International Conference on Medical Image Computing and Computer-Assisted Intervention_. Springer, 2018, pp. 613–621.
* [19] M. H. Jafari, H. Girgis, Z. Liao, D. Behnami, A. Abdi, H. Vaseli, C. Luong, R. Rohling, K. Gin, T. Tsang _et al._ , “A unified framework integrating recurrent fully-convolutional networks and optical flow for segmentation of the left ventricle in echocardiography data,” in _Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support_. Springer, 2018, pp. 29–37.
* [20] S. Chen, K. Ma, and Y. Zheng, “Tan: Temporal affine network for real-rime left ventricle anatomical structure analysis based on 2d ultrasound videos,” _arXiv preprint arXiv:1904.00631_ , 2019.
* [21] C. Qin, W. Bai, J. Schlemper, S. E. Petersen, S. K. Piechnik, S. Neubauer, and D. Rueckert, “Joint learning of motion estimation and segmentation for cardiac mr image sequences,” in _International Conference on Medical Image Computing and Computer-Assisted Intervention_. Springer, 2018, pp. 472–480.
* [22] H. Wei, H. Cao, Y. Cao, Y. Zhou, W. Xue, D. Ni, and S. Li, “Temporal-consistent segmentation of echocardiography with co-learning from appearance and shape,” in _International Conference on Medical Image Computing and Computer-Assisted Intervention_. Springer, 2020, pp. 1–8.
* [23] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal loss for dense object detection,” in _Proceedings of the IEEE International Conference on Computer Vision_ , 2017, pp. 2980–2988.
* [24] A. Gilbert, M. Holden, L. Eikvil, S. A. Aase, E. Samset, and K. McLeod, “Automated left ventricle dimension measurement in 2d cardiac ultrasound via an anatomically meaningful cnn approach,” in _Smart Ultrasound Imaging and Perinatal, Preterm and Paediatric Image Analysis_. Springer, 2019, pp. 29–37.
* [25] X. Zhou, D. Wang, and P. Krähenbühl, “Objects as points,” _arXiv preprint arXiv:1904.07850_ , 2019.
* [26] J. Lin, Y. Zhang, A. Amadou, I. Voigt, T. Mansi, and R. Liao, “Cycle ynet: Semi-supervised tracking of 3d anatomical landmarks,” in _International MICCAI Workshop on Machine Learning in Medical Imaging_. Springer, 2020, pp. 1–8.
# Controlling core-hole lifetime through an x-ray planar cavity
Xin-Chao Huang,1 Xiang-Jin Kong,2 Tian-Jun Li,1 Zi-Ru Ma,1 Hong-Chang Wang,3
Gen-Chang Liu,4 Zhan-Shan Wang,4 Wen-Bin Li,4<EMAIL_ADDRESS>Lin-Fan Zhu,1
<EMAIL_ADDRESS>1Hefei National Laboratory for Physical Sciences at
Microscale and Department of Modern Physics, University of Science and
Technology of China, Hefei, Anhui 230026, People’s Republic of China
2Department of Physics, National University of Defense Technology, Changsha,
Hunan 410073, People’s Republic of China
3Diamond Light Source, Harwell Science and Innovation Campus, Didcot,
Oxfordshire, OX11 0DE, UK
4MOE Key Laboratory of Advanced Micro-Structured Materials, Institute of
Precision Optical Engineering (IPOE), School of Physics science and
Engineering, Tongji University, Shanghai 200092, People’s Republic of China
###### Abstract
It has long been believed that the core-hole lifetime (CHL) of an atom is an intrinsic physical property, and controlling it is significant yet very difficult. Here, the CHL of the 2$p$ state of the W atom is manipulated experimentally by adjusting the emission rate of a resonant fluorescence channel with the assistance of an x-ray thin-film planar cavity. The emission rate is accelerated by a factor linearly proportional to the cavity field amplitude, which can be directly controlled by choosing different cavity modes or changing the angular offset in the experiment. This experimental observation is in good agreement with theoretical predictions. We find that the manipulated resonant fluorescence channel can even dominate the CHL. The controllable CHL realized here will facilitate nonlinear investigations and modern x-ray scattering techniques in the hard x-ray region.
PACS: 32.80.-t, 32.80.Qk, 42.50.Ct, 32.30.Rj, 78.70.Ck.
The particularity of an inner-shell excitation or ionization is that it produces a core vacancy, which has a finite lifetime, the so-called core-hole lifetime (CHL), before it decays into lower-lying states. There are two main relaxation pathways, the radiative (fluorescence) and non-radiative (Auger decay or autoionization) channels, and the CHL is determined by the total decay rate of all relaxation channels. Normally, the Auger effect dominates the decay of the K shell for low-Z atoms Auger (1925) and of the L and M shells for higher-Z atoms Krause (1979), so the CHL is sometimes called the Auger lifetime. The CHL has long been considered an intrinsic property, and controlling it is very difficult because the relaxation channels are hard to manipulate with common methods.
Nevertheless, an adjustable CHL is strongly desired, since CHL changes are useful for detecting ultrafast dynamics. With the advent of the x-ray free electron laser (XFEL), an adjustable CHL is needed to give deep insight into nonlinear light-matter interaction, since the ratio of the CHL to the XFEL pulse width matters for multiphoton ionization Fukuzawa _et al._ (2013), two-photon absorption Tamasaku _et al._ (2018), population inversion Yoneda _et al._ (2015) and stimulated emission Wu _et al._ (2016); Chen _et al._ (2018). The CHL is also a key factor in the resonant x-ray scattering (RXS) process Gel’mukhanov and Ågren (1999), where the dynamics of the core-excited state is controlled by the duration time, which is determined by both the energy detuning and the CHL Gel’mukhanov _et al._ (1999). Because of the lack of an efficient method to manipulate the CHL experimentally, controlling schemes for the duration time have so far been based on the energy detuning Skytt _et al._ (1996); Feifel _et al._ (2004); Kimberg _et al._ (2013); Feifel and Piancastelli (2011); Morin and Miron (2012); Miron and Morin (2011). The dynamics of the core-excited state determines the application range of RXS techniques, e.g., resonant inelastic x-ray scattering (RIXS) Ament _et al._ (2011). Since the Coulomb interaction between the core hole and the valence electrons exists only while the core-excited state exists, the relative timescale between the CHL and the elementary excitations governs the effectiveness of indirect RIXS Ament _et al._ (2007, 2009); Dean _et al._ (2012), especially for charge and magnon excitations van den Brink (2007); Ament _et al._ (2009); Haverkort (2010); Jia _et al._ (2016); Tohyama and Tsutsui (2018). In time-resolved RIXS (tr-RIXS), the CHL also needs to be flexibly adjusted to pursue higher time resolution Dean _et al._ (2016); Wang _et al._ (2018); Chen _et al._ (2019); Buzzi _et al._ (2018). Therefore, a controllable CHL would be very useful and is strongly wished for, from both fundamental and application perspectives.
Because the CHL is determined by the total decay rate of all relaxation channels, controlling the CHL means making the decay channels, or at least one of them, manipulable, which is a challenging task. A stimulated emission channel can be opened by intense, short x-ray pulses to accelerate the CHL Wu _et al._ (2016); Chen _et al._ (2018), but such a scheme can only be implemented at an XFEL. The present work proposes another scheme that controls the spontaneous emission channel. R. Feynman once said that the theory behind chemistry is quantum electrodynamics (QED) Richard (1985), indicating that the spontaneous emission rate of an atom depends on its environment (the photonic density of states). A cavity is an outstanding system for robustly structuring the environment and modifying the spontaneous emission rate in the visible wavelength regime Tomaš (1995); Raimond _et al._ (2001), known as cavity QED. With the dramatic progress of new-generation x-ray sources and thin-film technology, cavity-QED effects in the hard x-ray range have been demonstrated in laboratory thin-film planar cavities with nuclear ensembles Röhlsberger _et al._ (2010, 2012); Heeg _et al._ (2013, 2015a, 2015b); Haber _et al._ (2017) or electronic resonances Haber _et al._ (2019), breeding the new field of x-ray quantum optics Adams _et al._ (2013).
In this work, a controllable CHL for the 2$p$ state of the W atom is realized by adjusting the emission rate of a resonant fluorescence channel with the assistance of an x-ray thin-film planar cavity. WSi2 has a remarkable white line around the L${}_{\textrm{III}}$ edge of W, which constitutes a resonant channel and is generally known to be associated with an atomic-like electric-dipole-allowed transition from the inner shell 2$p$ to an unoccupied 5$d$ level Haber _et al._ (2019); Brown _et al._ (1977); Wei and Lytle (1979). Inside the cavity, the emission rate of the resonant channel depends on the photonic density of states at the position of the atom, which can be modified through the cavity field amplitude in the experiment. Because the thin-film planar cavity can only enhance the photonic density of states, not suppress it, only CHL shortening is realized in the present experiment. As long as the cavity effect is strong enough, the total decay rate will change measurably, leading to a controllable CHL.
Fig. 1: The schematic for controlling core-hole lifetime. (a) Cavity sample
and measurement setup. The cavity has a structure of Pt (2.1 nm)/C (18.4
nm)/WSi2 (2.8 nm)/C (18.0 nm)/Pt (16.0 nm)/Si100, and the middle-right inset
shows the energy levels of the L${}_{\textrm{III}}$ edge of the W atom. The sample is
probed by a monochromatic x-ray, and the resonant fluorescence is measured in
the reflection direction by a CCD and the inelastic fluorescence signals are
collected by an energy-resolved fluorescence detector. The distance between
the collimator and the sample surface is 31.0 mm, and the hole diameter and length of the collimator are 2.8 mm and 20.1 mm, respectively. An example of a full-range fluorescence spectrum is shown in the top-left inset, where the grey region corresponds to the fluorescence photon energies of the Lα line. The top-right inset is the reflectivity curve at an incident energy detuning of 30 eV from $E_{0}$; the pink solid bar indicates the critical angle of Pt (0.46 degree). (b) The values of Re($\eta$) and Im($\eta$) as a function of the incident angle, calculated by a transfer matrix formalism. (c) The simplified
energy levels of W. The driving is labeled by the blue arrow, and the cavity
enhanced emission is labeled by the red thick arrow. The inelastic
fluorescence decay is labeled by the red thin arrow.
Fig. 1(a) depicts the cavity structure used in the present work. The thin-film cavity is made of a multilayer of Pt and C. The top and bottom Pt layers, with a high electron density, serve as mirrors, while the middle C layers, with a low electron density, guide the x-ray and form the cavity space. In this design, at certain incident angles $\theta_{\textrm{th}}$ below the critical angle of Pt, the x-ray can resonantly excite specific guided cavity modes, at which dips appear in the reflectivity curve, as shown in the top-right inset of Fig. 1(a). In the present work, the mode angles are $\theta_{\textrm{1st}}$=0.218∘, $\theta_{\textrm{3rd}}$=0.312∘ and $\theta_{\textrm{5th}}$=0.440∘ for the 1${}^{\textrm{st}}$, 3${}^{\textrm{rd}}$ and 5${}^{\textrm{th}}$ odd orders of the cavity mode. The coupling between the cavity and the atoms is then established by embedding a thin layer of WSi2 at the middle of the cavity, where the cavity field amplitudes are strongest. The field distributions of the 1${}^{\textrm{st}}$, 3${}^{\textrm{rd}}$ and 5${}^{\textrm{th}}$ orders of cavity mode are sketched in Fig. 1(a).
As shown in the middle-right inset of Fig. 1(a), the inner-shell energy-level system differs from a simple two-level one: both the resonant channel and incoherent processes, such as the inelastic radiative channels (Auger decay channels are not shown here), can annihilate the core-vacancy state, so the decay width is determined by the total decay rates of all relaxation channels. Excited by the incoming x-ray field, the atomic dipole emits resonant fluorescence through the resonant channel, and the resonant response can be written in the simple form of a Lorentz function,
$f=-f_{0}\frac{i\gamma_{\textrm{re}}/2}{\delta+i(\gamma_{\textrm{re}}/2+\gamma_{\textrm{in}}/2)}$
(1)
The electronic continuum in the higher-energy range is not considered here. $f_{0}$ is a constant, and $\delta$ is the energy detuning between the incident x-ray energy $E$ and the white-line transition energy $E_{0}$. $\gamma_{\textrm{re}}$ is the natural spontaneous emission rate of the resonant channel, while $\gamma_{\textrm{in}}$ is the incoherent decay rate, which sums two branches, the radiative decay rate of the inelastic channels $\gamma_{\textrm{ie}}$ and the non-radiative decay rate of the Auger process $\gamma_{\textrm{A}}$, i.e., $\gamma_{\textrm{in}}=\gamma_{\textrm{ie}}+\gamma_{\textrm{A}}$. The inverse core-hole lifetime is thus given by the natural width $\gamma=\gamma_{\textrm{re}}+\gamma_{\textrm{in}}$.
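Eq. (1) can be evaluated numerically. The width values below are illustrative: the total $\gamma/2 = 3.6$ eV is quoted later in the text, but the split between $\gamma_{\textrm{re}}$ and $\gamma_{\textrm{in}}$ here is a hypothetical choice:

```python
def resonant_response(delta, f0=1.0, gamma_re=0.2, gamma_in=7.0):
    # Eq. (1): Lorentzian response of the resonant channel;
    # delta = E - E0 (eV); gamma_re + gamma_in = 7.2 eV so gamma/2 = 3.6 eV.
    return -f0 * (1j * gamma_re / 2) / (delta + 1j * (gamma_re + gamma_in) / 2)

# on resonance (delta = 0) the modulus is gamma_re / (gamma_re + gamma_in)
print(abs(resonant_response(0.0)))
```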
The cavity strengthens the photonic density of states Tomaš (1995); Röhlsberger _et al._ (2005) at the position of the radiating atom, so the resonant fluorescence is enhanced. Applying the transfer matrix combined with a perturbation expansion method (SM Sec. I), the resonant fluorescence in the reflection direction is solved as,
${{r}_{a}}=-\frac{id{{f}_{0}}\times{{\left|{{a}^{z_{a}}}\right|}^{2}}{{\gamma}_{\text{re}}}/2}{\delta+{{\delta}_{c}}+i\left({{\gamma}_{c}}+\gamma\right)/2}$
(2)
$d$ is the thickness of the atomic layer, and ${{\left|{{a}^{z_{a}}}\right|}^{2}}$ is the field intensity at the position of the atom. Eq. (2) still has a Lorentzian resonant response, but contains additional cavity effects: the cavity-enhanced emission rate $\gamma_{c}$ and the cavity-induced energy shift $\delta_{c}$,
$\begin{array}[]{lll}{{\gamma}_{c}}&=d{{f}_{0}}\gamma_{\textrm{re}}\times\operatorname{Re}\left(\eta\right)\\\
{{\delta}_{c}}&=d{{f}_{0}}\gamma_{\textrm{re}}\times\operatorname{Im}\left(\eta\right)\\\
\eta&=pq\\\ \end{array}$ (3)
Thus the emission rate is enhanced by a factor of Re($\eta$), where $p$ and $q$ are the field amplitudes corresponding to the waves scattered from the upward (downward) direction into both the upward and downward directions at the position of the atomic layer (Sec. I of SM). Note that the photonic density of states is directly related to the cavity field amplitudes Röhlsberger _et al._ (2005, 2012), so Eq. (3) conforms to the typical cavity Purcell effect Raimond _et al._ (2001), which describes the well-known linear relation between lifetime shortening and the strengthening of the photonic density of states. Clearly, the real part of $\eta$ is the essential factor controlling the enhanced emission rate, while the energy shift is governed by the imaginary part of $\eta$. The real and imaginary parts of $\eta$ as a function of the incident angle are depicted in Fig. 1(b); $\gamma_{c}$ and $\delta_{c}$ are simultaneously modified by the incident angle around the mode angles, as observed recently by Haber _et al._ Haber _et al._ (2019). On the other hand, Fig. 1(b) suggests that the strongest enhanced emission rate can be achieved without introducing an additional energy shift by choosing exactly the angles of the odd-order cavity modes, which is more convenient for studying the individual influence of the CHL on core-hole dynamics (SM Sec. IV). The fully controllable resonant channel yields an adjustable total inverse core-hole lifetime,
${{\Gamma}_{n}}=\gamma_{c}+\gamma_{\textrm{re}}+\gamma_{\textrm{ie}}+\gamma_{\textrm{A}}$
(4)
where all four contributions are included; $\gamma_{c}$ is the cavity-enhanced emission rate, and $\gamma=\gamma_{\textrm{re}}+\gamma_{\textrm{ie}}+\gamma_{\textrm{A}}$ is the natural inverse CHL, the sum of three branches: the natural spontaneous emission rate of the resonant fluorescence channel, the radiative decay rate of the inelastic fluorescence channels, and the Auger decay rate. $\gamma$ is a fixed value, which can be obtained from the experimental spectrum at a large incident angle (Fig. S3 of SM), i.e., $\gamma/2$=3.6 eV. As long as $\gamma_{c}$ is large enough, this controllable part will dominate the CHL.
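Eqs. (3)-(4) can be illustrated with a small numerical sketch. Only $\gamma = 2\times 3.6$ eV is taken from the text; the cavity parameters $d$, $f_{0}$, $\gamma_{\textrm{re}}$ and $\eta$ below are hypothetical placeholders chosen so that the product matches the 1st-order result:

```python
def cavity_linewidth(d, f0, gamma_re, eta, gamma_natural):
    gamma_c = d * f0 * gamma_re * eta.real   # Eq. (3): cavity-enhanced rate
    delta_c = d * f0 * gamma_re * eta.imag   # Eq. (3): cavity-induced shift
    return gamma_c + gamma_natural, delta_c  # Eq. (4): Gamma_n, plus the shift

gamma = 2 * 3.6   # natural width (gamma/2 = 3.6 eV in the text)
# hypothetical values giving gamma_c = 8.6 eV; Im(eta) = 0 at a mode angle
Gamma_n, shift = cavity_linewidth(d=1.0, f0=1.0, gamma_re=1.0,
                                  eta=8.6 + 0.0j, gamma_natural=gamma)
print(round(Gamma_n / 2, 1))  # -> 7.9, the 1st-order (gamma_c + gamma)/2 value
```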
As shown in the simplified energy levels in Fig. 1(c), the core-hole lifetime determines the linewidth of the inelastic scattering, i.e., the fluorescence spectrum. We employ the RXS formalism known as the Kramers-Heisenberg equation to characterize the inelastic scattering Gel’mukhanov and Ågren (1999); Ament _et al._ (2011) as,
${{F}_{if}}\left(\overset{\scriptscriptstyle\rightharpoonup}{k},{\overset{\scriptscriptstyle\rightharpoonup}{k}}^{\prime},\omega,{\omega}^{\prime}\right)=\frac{\left\langle
f\right|{\hat{D}}^{\prime}\left|n\right\rangle\left\langle
n\right|\hat{D}\left|i\right\rangle}{\delta+i{{\Gamma}_{n}}/2}$ (5)
Herein the initial state
$\left|i\right\rangle=\left|g,\overset{\scriptscriptstyle\rightharpoonup}{k}\right\rangle$,
the final state
$\left|f\right\rangle=\left|f,{\overset{\scriptscriptstyle\rightharpoonup}{k}}^{\prime}\right\rangle$,
and the intermediate state $\left|n\right\rangle=\left|e,0\right\rangle$.
$\overset{\scriptscriptstyle\rightharpoonup}{k}$ is the wave vector and
$\hat{D}$ is the transition operator. For the present system, one intermediate state and one final state are considered, since we choose the L${}_{\textrm{III}}$ white-line transition and measure Lα with energy $E^{\prime}$. Eq. (5) indicates that CHL changes can be monitored via the inelastic fluorescence spectrum; in the present work the Lα line (Lα1 is much stronger than Lα2) is chosen to obtain the inelastic fluorescence spectra.
The measurement was performed on the B16 Test beamline at Diamond Light Source. Monochromatic x-rays from a double-crystal monochromator were used to scan the incident x-ray energy, and two small slits set apart were used to obtain a well-collimated beam with a small vertical beam size of about 50 $\mu$m. The multilayer was deposited onto a polished silicon (100) wafer using DC magnetron sputtering, a method widely used to fabricate diverse cavity structures Röhlsberger _et al._ (2010, 2012); Heeg _et al._ (2013, 2015a, 2015b); Haber _et al._ (2017, 2019). Before sample fabrication, the deposition rate was calibrated carefully to guarantee the layer thicknesses with an accuracy better than 1 Å. The size of the wafer is 30$\times$30 mm2, larger than the beam footprint, to avoid the beam overrunning the sample length. As shown in the top-right inset of Fig. 1(a), the $\theta-2\theta$ rocking curve at an incident energy detuning of 30 eV from the white-line position was measured first to find the specific incident angles corresponding to the 1${}^{\textrm{st}}$, 3${}^{\textrm{rd}}$ and 5${}^{\textrm{th}}$ orders of the guided modes, i.e., the corresponding reflection dips. For a given incident angle, the incident energy $E$ was scanned from 10161 eV to 10261 eV across the transition energy $E_{0}$=10208 eV. The reflectivity corresponding to the resonant channel was then measured by a CCD detector, and the inelastic fluorescence lines were measured simultaneously, in a perpendicular direction, by a silicon drift detector (_Vortex_) with a resolution of about 180 eV. In front of the fluorescence detector, a collimator guarantees a constant detected area of the sample; since the footprint $bw/\textrm{sin}\theta$ on the sample surface is determined by the beam width $bw$ and the incident angle, the inelastic fluorescence intensities are normalized by a geometry factor Li _et al._ (2012). The strongest Lα fluorescence lines of W (Lα1 at 8398 eV and Lα2 at 8335 eV) are far from the white line (10208 eV) and from the other weak fluorescence lines (Lβ2 at 9951 eV, Ll at 7387 eV and other negligible lines), so Lα can be extracted separately from the energy-resolved fluorescence spectrum.
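The geometric normalization can be illustrated numerically; the 50 $\mu$m beam size and the 1st-order mode angle of 0.218∘ are taken from the text:

```python
import math

def footprint_mm(beam_width_mm, theta_deg):
    # footprint = bw / sin(theta) on the sample surface
    return beam_width_mm / math.sin(math.radians(theta_deg))

# a 50 um (0.05 mm) vertical beam at the 1st-order mode angle of 0.218 degrees
print(round(footprint_mm(0.05, 0.218), 1))  # -> 13.1 (mm)
```

At larger mode angles the footprint shrinks, which is why a geometry factor is needed when comparing fluorescence intensities across angles.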
First, the 1${}^{\textrm{st}}$, 3${}^{\textrm{rd}}$ and 5${}^{\textrm{th}}$ orders are chosen exactly to control the CHL without introducing an additional energy shift, and the results are depicted in Fig. 2. Fig. 2(a) shows the experimental and theoretically fitted reflectivity curves. The present theoretical model for resonant fluorescence does not take into account the influence of the absorption edge, owing to its electronic-continuum nature, and the continuum overlaps with the high-energy side of the white line. The sudden increase of the absorption coefficient changes the refractive index and dramatically alters the cavity properties Haber _et al._ (2019), so only the data below 10220 eV are selected for fitting (labeled by the green region); above 10220 eV, the theoretical results diverge from the experimental data. The reflection coefficient includes contributions from two pathways (SM Sec. II): $r_{0}$, from the multilayer cavity itself, for which the photon does not interact with the resonant atom, and $r_{a}$, from the resonant atom inside the cavity, i.e., the resonant fluorescence. The linewidth of the cavity is much larger than that of the atom, so $r_{0}$ acts like a flat continuum state and $r_{a}$ like a sharp discrete state Liu _et al._ (2003); Heeg _et al._ (2015a). The reflectivity spectrum is therefore a result of Fano interference, and indeed the profiles of the reflectivity spectra in Fig. 2(a) show Fano line shapes. The reflectivity spectra give values of $(\gamma_{c}+\gamma)/2$ of 7.9 eV, 6.9 eV and 5.2 eV for the 1${}^{\textrm{st}}$, 3${}^{\textrm{rd}}$ and 5${}^{\textrm{th}}$ orders of the cavity mode.
Fig. 2: (a) The measured and theoretical reflectivity spectra for the
1${}^{\textrm{st}}$, 3${}^{\textrm{rd}}$ and 5${}^{\textrm{th}}$ orders as a
function of incident photon energy. The red solid line is the theoretically
fitted result. (b) The measured and fitted inelastic fluorescence spectra of
Lα as a function of incident photon energy. The solid lines in pink, red,
green and blue are the fitted result, the Lorentzian resonance line, the
electronic continuum line and the flat background, respectively.
Fig. 2(b) shows the experimental and fitted inelastic fluorescence spectra of
Lα as a function of incident x-ray energy for the 1${}^{\textrm{st}}$,
3${}^{\textrm{rd}}$ and 5${}^{\textrm{th}}$ orders. The inelastic fluorescence
spectrum is fitted by a custom function combining a simple Lorentzian function
$L(E)$ with a Heaviside step function $H(E)$ (SM Sec. III), where $L(E)$ with
a linewidth $\Gamma_{n}/2$ is used to describe Eq. (5) and $H(E)$ is used to
describe the absorption edge. The fitted values of $\Gamma_{n}/2$ are 8.6 eV,
6.2 eV and 5.1 eV which match well with the derived values from the resonant
fluorescence spectra of Fig. 2(a), demonstrating that the shortening of the CHL
indeed comes from the regulation of the resonant fluorescence channel. Moreover,
the value of $\gamma_{c}=\Gamma_{n}-\gamma$ is even larger than $\gamma$ in
the 1${}^{\textrm{st}}$ order, indicating that the adjustable resonant channel
breaks the limitation set by the Auger process and the fixed radiative decay
channels, and dominantly determines the CHL. This linewidth-broadening behavior
is cross-checked with the Ll and Lβ2 lines (SM Fig. S5).
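The fitting procedure described above can be sketched on synthetic data. The function below is our illustration (Lorentzian plus Heaviside step plus flat background, with the edge position held fixed at the known value), not the actual analysis code:

```python
import numpy as np

# Synthetic inelastic fluorescence spectrum: Lorentzian L(E) with
# HWHM = Gamma_n/2, a Heaviside step H(E) at the absorption edge,
# and a flat background. All amplitudes and noise are toy values.
E = np.linspace(10180.0, 10260.0, 400)
E0, Eedge = 10208.0, 10220.0            # white line and edge energies (eV)

def basis(hwhm):
    lorentz = hwhm**2 / ((E - E0)**2 + hwhm**2)
    step = np.heaviside(E - Eedge, 0.5)
    return np.column_stack([lorentz, step, np.ones_like(E)])

rng = np.random.default_rng(0)
true_hwhm = 8.6                          # the 1st-order Gamma_n/2 from the text
data = basis(true_hwhm) @ np.array([1.0, 0.3, 0.05])
data += 0.01 * rng.normal(size=E.size)

# Grid search over the Lorentzian half width; amplitudes by least squares.
widths = np.arange(4.0, 12.0, 0.05)
resid = [np.linalg.lstsq(basis(w), data, rcond=None)[1][0] for w in widths]
best = widths[np.argmin(resid)]
assert abs(best - true_hwhm) < 1.0       # recovered width close to the input
```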
Fig. 3: The inelastic fluorescence spectra of Lα as functions of incident
photon energy and angle offset. Angle offset is the deviation between the
incident angle and the $\theta_{\textrm{1st}}$ ($\theta_{\textrm{3rd}}$,
$\theta_{\textrm{5th}}$).
The measured inelastic fluorescence 2D spectra are shown in Fig. 3 for
selected incident angles around the mode angles of the 1${}^{\textrm{st}}$,
3${}^{\textrm{rd}}$ and 5${}^{\textrm{th}}$ orders
($\theta_{\textrm{1st}}$=0.218∘, $\theta_{\textrm{3rd}}$=0.312∘ and
$\theta_{\textrm{5th}}$=0.440∘ respectively). As discussed in Eq. (3), the CHL and
the cavity-induced energy shift are simultaneously controlled by the incident
angle. When the incident angle scans across the mode angle, the inverse CHL
first increases to a maximum at the mode angle and then decreases, accompanied
by an additional energy shift, as demonstrated in Fig. 3. For the
3${}^{\textrm{rd}}$ order, the maximum linewidth does not seem to occur at zero
angle offset, which may be due to an occasional angle shift caused by
instabilities of the goniometer or sample holder. Note that Fig. 3 suggests a
way to continuously modify the CHL, though it introduces an additional energy
shift.
Fig. 4: (a) The measured and fitted inelastic fluorescence spectra of Lα for
selected angle offsets. (b) Enhanced emission rate $\gamma_{c}$ as a function
of Re$(\eta)$. The values of $\gamma_{c}$ are derived from the fitted
linewidth $\Gamma_{n}$ as $\gamma_{c}=\Gamma_{n}-\gamma$, and the values of
Re$(\eta)$ are obtained by transfer matrix calculation. The dashed blue line
is a linear fit of the experimental points to guide the eye.
As predicted in Eq. (3), the enhanced emission rate $\gamma_{c}$ is linearly
connected with the real part of the cavity field amplitude $\eta$, and this is
essential for discussing the magnitude of the inverse CHL. It should be noted here
that the present method to control CHL is different from the scenario of
stimulated emission Wu _et al._ (2016); Chen _et al._ (2018) where a non-
linear relationship between the stimulated emission rate and the x-ray field
intensity is expected. The present scheme actually employs a cavity to
manipulate the enhanced spontaneous emission whose decay rate is linearly
determined by the photonic density of states. The inelastic fluorescence
spectra in Fig. 3 are fitted to get the values of $\Gamma_{n}$, and some
selected spectra are shown in Fig. 4(a). Then the values of $\gamma_{c}$ are
obtained based on Eq. (4), and the values of Re$(\eta)$ are calculated by the
transfer matrix formalism. A good linear relationship between $\gamma_{c}$ and
Re$(\eta)$ is depicted in Fig. 4(b) which is consistent with the prediction of
Eq. (3).
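The consistency check of Fig. 4(b) amounts to a linear fit of $\gamma_{c}=\Gamma_{n}-\gamma$ against Re$(\eta)$. A sketch with toy numbers standing in for the fitted linewidths and the transfer-matrix values:

```python
import numpy as np

# Toy stand-ins for the measured quantities; the real values come from
# the fitted Gamma_n and the transfer-matrix calculation of Re(eta).
gamma = 4.5                                    # natural decay rate (toy value)
re_eta = np.array([0.2, 0.5, 0.8, 1.1, 1.4])   # cavity field amplitude Re(eta)
noise = np.array([0.1, -0.05, 0.02, -0.08, 0.04])
Gamma_n = gamma + 6.0 * re_eta + noise         # "fitted" total linewidths
gamma_c = Gamma_n - gamma                      # enhanced emission rate, Eq. (4)

slope, intercept = np.polyfit(re_eta, gamma_c, 1)
assert abs(slope - 6.0) < 0.5 and abs(intercept) < 0.5
```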
From the general viewpoint of cavity-QED in optical regime, the inelastic
channel is an incoherent process which accelerates the decoherence, so it is
regarded as a defect of the system Van Loo _et al._ (2013). However, the
inelastic channel is a natural characteristic that widely exists in atomic
inner-shell systems; here we demonstrate that it can be very useful for
monitoring CHL changes, enriching the picture of the cavity effect.
In conclusion, the core-hole lifetime for the 2$p$ state of W is manipulated
experimentally through constructing an x-ray thin-film planar cavity system.
The core-hole lifetime directly depends on the cavity field amplitude at the
position of W atom (SM Sec. I and Huang _et al._ (2020)), which can be
adjusted by choosing the different orders of cavity mode or varying the
incident angle offset. With a high quality cavity sample, the core-hole
lifetime is conveniently manipulated in experiment. Notably, for the case of
the 1${}^{\textrm{st}}$ order, the decay rate of the resonant channel is even
larger than the natural inverse core-hole lifetime, which in common scenarios
is dominated by the Auger process for the L${}_{\textrm{III}}$ shell of W.
Moreover, the inelastic fluorescence spectra are utilized as a good monitor to
reflect the core-hole lifetime changes. The cavity structure is suitable for a
wide range of x-ray energies from a few to tens of keV, so the present scheme
could be extended to many elements that have a resonant fluorescence channel.
Utilizing the present cavity technique, the duration time of the RXS process
can be controlled not only by the energy detuning, but also by the core-hole
lifetime, which will enrich the physical studies of RXS (SM Sec. VI) in the
future. Combining this with the high-resolution $\sim$100 meV analyzer Hill
_et al._ (2007), a cavity-manipulated RXS is expected to be achievable.
This work is supported by the National Natural Science Foundation of China
(Grant No. U1932207) and the National Key Research and Development Program of
China (Grants No. 2017YFA0303500 and 2017YFA0402300). The experiment was
carried out at instrument B16 of Diamond Light Source Ltd (No. MM21446-1),
United Kingdom. The authors thank Xiao-Jing Liu for fruitful discussions.
## References
* Auger (1925) P. Auger, Comptes Rendus 180, 65 (1925).
* Krause (1979) M. O. Krause, J. Phys. Chem. Ref. Data 8, 307 (1979).
* Fukuzawa _et al._ (2013) H. Fukuzawa, S.-K. Son, K. Motomura, _et al._ , Phys. Rev. Lett. 110, 173005 (2013).
* Tamasaku _et al._ (2018) K. Tamasaku, E. Shigemasa, Y. Inubushi, _et al._ , Phys. Rev. Lett. 121, 083901 (2018).
* Yoneda _et al._ (2015) H. Yoneda, Y. Inubushi, K. Nagamine, _et al._ , Nature 524, 446 (2015).
* Wu _et al._ (2016) B. Wu, T. Wang, C. E. Graves, _et al._ , Phys. Rev. Lett. 117, 027401 (2016).
* Chen _et al._ (2018) Z. Chen, D. J. Higley, M. Beye, _et al._ , Phys. Rev. Lett. 121, 137403 (2018).
* Gel’mukhanov and Ågren (1999) F. Gel’mukhanov and H. Ågren, Phys. Rep. 312, 87 (1999).
* Ament _et al._ (2011) L. J. P. Ament, M. van Veenendaal, T. P. Devereaux, J. P. Hill, and J. van den Brink, Rev. Mod. Phys. 83, 705 (2011).
* Gel’mukhanov _et al._ (1999) F. Gel’mukhanov, P. Sałek, T. Privalov, and H. Ågren, Phys. Rev. A 59, 380 (1999).
* Skytt _et al._ (1996) P. Skytt, P. Glans, J.-H. Guo, K. Gunnelin, C. Såthe, J. Nordgren, F. K. Gel’mukhanov, A. Cesar, and H. Ågren, Phys. Rev. Lett. 77, 5035 (1996).
* Feifel _et al._ (2004) R. Feifel, A. Baev, F. Gelmukhanov, _et al._ , Phys. Rev. A 69, 022707 (2004).
* Kimberg _et al._ (2013) V. Kimberg, A. Lindblad, J. Söderström, O. Travnikova, C. Nicolas, Y. P. Sun, F. Gel’mukhanov, N. Kosugi, and C. Miron, Phys. Rev. X 3, 011017 (2013).
* Feifel and Piancastelli (2011) R. Feifel and M. N. Piancastelli, J. Electron Spectrosc. Relat. Phenom. 183, 10 (2011).
* Morin and Miron (2012) P. Morin and C. Miron, J. Electron Spectrosc. Relat. Phenom. 185, 259 (2012).
* Miron and Morin (2011) C. Miron and P. Morin, _Handbook of High-Resolution Spectroscopy_ (Wiley, Chichester, UK, 2011) pp. 1655–1690.
* Ament _et al._ (2007) L. J. P. Ament, F. Forte, and J. van den Brink, Phys. Rev. B 75, 115118 (2007).
* Ament _et al._ (2009) L. J. P. Ament, G. Ghiringhelli, M. M. Sala, L. Braicovich, and J. van den Brink, Phys. Rev. Lett. 103, 117003 (2009).
* Dean _et al._ (2012) M. Dean, R. Springell, C. Monney, _et al._ , Nat. Mater. 11, 850 (2012).
* van den Brink (2007) J. van den Brink, Euro. Phys. Lett. 80, 47003 (2007).
* Haverkort (2010) M. W. Haverkort, Phys. Rev. Lett. 105, 167404 (2010).
* Jia _et al._ (2016) C. Jia, K. Wohlfeld, Y. Wang, B. Moritz, and T. P. Devereaux, Phys. Rev. X 6, 021020 (2016).
* Tohyama and Tsutsui (2018) T. Tohyama and K. Tsutsui, Inter. J. Mod. Phys. B 32, 1840017 (2018).
* Dean _et al._ (2016) M. P. Dean, Y. Cao, X. Liu, _et al._ , Nat. Mat. 15, 601 (2016).
* Wang _et al._ (2018) Y. Wang, M. Claassen, C. D. Pemmaraju, C. Jia, B. Moritz, and T. P. Devereaux, Nat. Rev. Mat. 3, 312 (2018).
* Chen _et al._ (2019) Y. Chen, Y. Wang, C. Jia, B. Moritz, A. M. Shvaika, J. K. Freericks, and T. P. Devereaux, Phys. Rev. B 99, 104306 (2019).
* Buzzi _et al._ (2018) M. Buzzi, M. Först, R. Mankowsky, and A. Cavalleri, Nat. Rev. Mat. 3, 299 (2018).
* Feynman (1985) R. P. Feynman, _QED: The Strange Theory of Light and Matter_ (Princeton University Press, USA, 1985).
* Tomaš (1995) M. S. Tomaš, Phys. Rev. A 51, 2545 (1995).
* Raimond _et al._ (2001) J. M. Raimond, M. Brune, and S. Haroche, Rev. Mod. Phys. 73, 565 (2001).
* Röhlsberger _et al._ (2010) R. Röhlsberger, K. Schlage, B. Sahoo, S. Couet, and R. Rüffer, Science 328, 1248 (2010).
* Röhlsberger _et al._ (2012) R. Röhlsberger, H.-C. Wille, K. Schlage, and B. Sahoo, Nature 482, 199 (2012).
* Heeg _et al._ (2013) K. P. Heeg, H.-C. Wille, K. Schlage, _et al._ , Phys. Rev. Lett. 111, 073601 (2013).
* Heeg _et al._ (2015a) K. P. Heeg, C. Ott, D. Schumacher, H.-C. Wille, R. Röhlsberger, T. Pfeifer, and J. Evers, Phys. Rev. Lett. 114, 207401 (2015a).
* Heeg _et al._ (2015b) K. P. Heeg, J. Haber, D. Schumacher, _et al._ , Phys. Rev. Lett. 114, 203601 (2015b).
* Haber _et al._ (2017) J. Haber, X. Kong, C. Strohm, _et al._ , Nat. Photon. 11, 720 (2017).
* Haber _et al._ (2019) J. Haber, J. Gollwitzer, S. Francoual, M. Tolkiehn, J. Strempfer, and R. Röhlsberger, Phys. Rev. Lett. 122, 123608 (2019).
* Adams _et al._ (2013) B. W. Adams, C. Buth, S. M. Cavaletto, _et al._ , J. Mod. Optic. 60, 2 (2013).
* Brown _et al._ (1977) M. Brown, R. E. Peierls, and E. A. Stern, Phys. Rev. B 15, 738 (1977).
* Wei and Lytle (1979) P. S. P. Wei and F. W. Lytle, Phys. Rev. B 19, 679 (1979).
* Röhlsberger _et al._ (2005) R. Röhlsberger, K. Schlage, T. Klein, and O. Leupold, Phys. Rev. Lett. 95, 097601 (2005).
* Huang _et al._ (2020) X.-C. Huang, Z.-R. Ma, X.-J. Kong, W.-B. Li, and L.-F. Zhu, J. Opt. Soc. Am. B 37, 745 (2020).
* Li _et al._ (2012) W. Li, J. Zhu, X. Ma, H. Li, H. Wang, K. J. Sawhney, and Z. Wang, Rev. Sci. Instrum. 83, 053114 (2012).
* Liu _et al._ (2003) X.-J. Liu, L.-F. Zhu, Z.-S. Yuan, _et al._ , Phys. Rev. Lett. 91, 193203 (2003).
* Van Loo _et al._ (2013) A. F. Van Loo, A. Fedorov, K. Lalumière, B. C. Sanders, A. Blais, and A. Wallraff, Science 342, 1494 (2013).
* Hill _et al._ (2007) J. Hill, D. Coburn, Y.-J. Kim, T. Gog, D. Casa, C. Kodituwakku, and H. Sinn, J. Synchrotron Radiat. 14, 361 (2007).
††thanks: These authors contributed equally to this work.
# Rényi Entropy Dynamics and Lindblad Spectrum for Open Quantum System
Yi-Neng Zhou Institute for Advanced Study, Tsinghua University, Beijing
100084, China Liang Mao Institute for Advanced Study, Tsinghua University,
Beijing 100084, China Department of Physics, Tsinghua University, Beijing
100084, China Hui Zhai<EMAIL_ADDRESS>Institute for Advanced Study,
Tsinghua University, Beijing 100084, China
###### Abstract
In this letter we point out that the Lindblad spectrum of a quantum many-body
system displays a segment structure and exhibits two different energy scales
in the strong dissipation regime. One energy scale determines the separation
between different segments, being proportional to the dissipation strength,
and the other energy scale determines the broadening of each segment, being
inversely proportional to the dissipation strength. Utilizing a relation
between the dynamics of the second Rényi entropy and the Lindblad spectrum, we
show that these two energy scales respectively determine the short- and the
long-time dynamics of the second Rényi entropy starting from a generic initial
state. This gives rise to opposite behaviors, that is, as the dissipation
strength increases, the short-time dynamics becomes faster and the long-time
dynamics becomes slower. We also interpret the quantum Zeno effect as specific
initial states that only occupy the Lindblad spectrum around zero, for which
only the broadening energy scale of the Lindblad spectrum matters and gives
rise to suppressed dynamics with stronger dissipation. We illustrate our
theory with two concrete models that can be experimentally verified.
For a closed quantum system, the energy spectrum of the Hamiltonian fully
determines the time scales of its dynamics. For an open quantum system, when
the environment is treated by the Markovian approximation, the couplings
between system and environment are controlled by a set of dissipation
operators. In this case, the dynamics of the system is governed by the
Lindblad equation which contains the contributions from both the Hamiltonian
and the dissipation operators open . Obviously, the spectrum of the
Hamiltonian alone can no longer determine the time scales of the entire
dynamics, and a natural question is then what energy scales set the time
scales of dynamics of an open quantum system.
There are various directions to approach this issue, and the answer also
relies on what type of dynamics we are concerned with. Here let us focus
on the dissipation-driven dynamics. There are still different physical
intuitions from different perspectives. One intuition comes from perturbation
theory when the dissipation strength is weak compared with the typical
energy scales of the Hamiltonian Pan . In this regime, treating the
dissipation perturbatively leads to a scenario in which the dissipative
dynamics becomes faster as the dissipation strength increases. Another
intuition comes from studies of the quantum Zeno effect Zeno_paradox ;
Zeno1990 ; quantum_Zeno ; Zeno_review , which states that frequent
measurements can slow down the dynamics, provided that the typical time
interval between two successive measurements is shorter than the intrinsic
time scale of the system. Since measurement can also be understood in terms
of dissipation in the Lindblad master equation, this provides another scenario,
in which the dissipative dynamics is suppressed as the dissipation becomes
stronger, in the regime where the dissipation strength is large compared
with the typical energy scales of the Hamiltonian. These two scenarios work
in different parameter regimes, and their conclusions are opposite to each
other. It is therefore interesting to see that there actually exists a
framework that can unify these two scenarios.
When a system is coupled to a Markovian environment, the entropy of the system
will increase in time. The entropy dynamics of an open quantum many-body
system is a subject that has attracted much interest recently entang_review ;
Qi_Cricuit ; Chen ; Kitaev ; YuChen ; Zhai . In this letter, we address the
issue of the typical time scales of the entropy-increasing dynamics of a quantum
many-body system coupled to a Markovian environment. In particular, we shall
focus on the second Rényi entropy, for a reason that will become clear below,
and answer the question of whether the entropy dynamics becomes faster or slower
when the dissipation strength increases.
Figure 1: Schematic of the mapping between the Lindblad equation (left) and
the Schrödinger-like equation in a doubled system (right). Here
$\hat{L}\hat{\rho}$ denotes the r.h.s. of Eq. 1.
Our studies are based on a mapping between the Lindblad master equation and a
non-unitary evolution of wave function in a doubled space, as shown in Fig. 1.
Let us first review this mapping Operator_Schmidt ; Mixed_state . Considering
a density matrix $\hat{\rho}$, and given a set of complete bases
$\\{|n\rangle\\},(n=1,\dots,\mathcal{D}_{\text{H}})$ of the Hilbert space with
dimension $\mathcal{D}_{\text{H}}$ (say, the eigenstates of the Hamiltonian
$\hat{H}$ with eigenenergies $E_{n}$), the density matrix $\hat{\rho}$ can be
expressed as $\hat{\rho}=\sum_{mn}\rho_{mn}|m\rangle\langle n|$. By the
operator-to-state mapping, we can construct a wave function
$\Psi_{\rho}=\sum_{mn}\rho_{mn}|m\rangle\otimes|n\rangle$, which contains
exactly the same amount of information as $\hat{\rho}$. Here $\Psi_{\rho}$ is a
wave function on a system whose size is doubled compared to the original
system, and we will refer to these two copies of the original system as the “left”
(L) and the “right” (R) systems. Under this mapping, for instance, a density
matrix of a pure state $\hat{\rho}=|\psi\rangle\langle\psi|$ is mapped to a
product state $\Psi_{\rho}=|\psi\rangle\otimes|\psi\rangle$ in the double
system, and a thermal density matrix at temperature $T$ as
$\hat{\rho}=\sum_{n}e^{-E_{n}/(k_{\text{b}}T)}|n\rangle\langle n|$ is mapped
to a thermofield double state at temperature $T/2$ as $\Psi_{\rho}=\sum
e^{-E_{n}/(k_{\text{b}}T)}|n\rangle\otimes|n\rangle$ in the double system.
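In matrix terms, this operator-to-state mapping is simply a flattening of the density matrix; a minimal sketch (our illustration):

```python
import numpy as np

# Row-major flattening realizes Psi_rho = sum_{mn} rho_{mn} |m> (x) |n>.
def to_doubled_state(rho):
    return rho.reshape(-1)

# A (real) pure-state density matrix maps to a product state.
psi = np.array([1.0, 2.0, 2.0]) / 3.0
rho_pure = np.outer(psi, psi)
assert np.allclose(to_doubled_state(rho_pure), np.kron(psi, psi))

# The maximally mixed state maps to (1/D) * sum_n |n> (x) |n>.
D = 3
rho_mixed = np.eye(D) / D
assert np.allclose(to_doubled_state(rho_mixed), np.eye(D).reshape(-1) / D)
```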
For an open system coupled to a Markovian environment, the density matrix
obeys the Lindblad master equation given by
$\hbar\frac{d\hat{\rho}}{dt}=-i[\hat{H},\hat{\rho}]+\sum\limits_{\mu}\gamma_{\mu}\left(2\hat{L}_{\mu}\hat{\rho}\hat{L}_{\mu}^{\dagger}-\\{\hat{L}^{\dagger}_{\mu}\hat{L}_{\mu},\hat{\rho}\\}\right),$
(1)
where $\hat{L}_{\mu}$ stand for a set of dissipation operators, and
$\gamma_{\mu}$ are their corresponding dissipation strengths. After the
mapping, the wave function $\Psi_{\rho}$ in the double system satisfies a
Schrödinger-like equation
$i\hbar\frac{d\Psi_{\rho}}{dt}=\left(\hat{H}_{\text{s}}-i\hat{H}_{\text{d}}\right)\Psi_{\rho}.$
(2)
Here $\hat{H}_{\text{s}}$ is the Hermitian part of the Hamiltonian determined
by the system itself, and it is given by
$\hat{H}_{\text{s}}=\hat{H}_{\text{L}}\otimes\hat{I}_{\text{R}}-\hat{I}_{\text{L}}\otimes\hat{H}^{\text{T}}_{\text{R}},$
(3)
where operators with subscript “L” and “R” respectively stand for operators
acting on the left and the right systems, and “T” stands for the transpose,
and $\hat{I}$ represents the identity operator. $-i\hat{H}_{\text{d}}$ is the
non-Hermitian part of the Hamiltonian determined by the dissipation operators,
which is given by
$\displaystyle\hat{H}_{\text{d}}=\sum_{\mu}\gamma_{\mu}$
$\displaystyle\left[-2\hat{L}_{\mu,\text{L}}\otimes\hat{L}^{\text{*}}_{\mu,\text{R}}\right.$
$\displaystyle\left.+(\hat{L}^{\dagger}_{\mu}\hat{L}_{\mu})_{\text{L}}\otimes\hat{I}_{\text{R}}+\hat{I}_{\text{L}}\otimes(\hat{L}^{\dagger}_{\mu}\hat{L}_{\mu})^{\text{*}}_{\text{R}}\right],$
(4)
where the superscript * stands for complex conjugation. We can
diagonalize this non-Hermitian Hamiltonian
$\hat{H}_{\text{s}}-i\hat{H}_{\text{d}}$, which leads to a set of eigenstates
as
$(\hat{H}_{\text{s}}-i\hat{H}_{\text{d}})|\Psi^{l}_{\rho}\rangle=\epsilon_{l}|\Psi^{l}_{\rho}\rangle,$
(5)
where $\epsilon_{l}$ is in general a complex number, and we denote them as
$\epsilon_{l}=\alpha_{l}-i\beta_{l}$. This spectrum, originating from the
Lindblad equation, is referred to as the Lindblad spectrum. The full Lindblad
spectrum has been studied for a number of models before Prosen1 ; Prosen2 ;
Universal_spectra ; tenfold ; local_random_Liouvillians ; Wang . Here we would
like to make several useful comments on the Lindblad spectrum. i) $\alpha_{l}$
and $-\alpha_{l}$ always appear in pairs in the spectrum; ii) $\beta_{l}$ is
always non-negative; iii) if the $\hat{L}_{\mu}$ are all Hermitian, there always
exists a zero-energy eigenstate with $\epsilon_{l}=0$, and this eigenstate is
labelled as $l=0$ and is given by
$|\Psi^{l=0}_{\rho}\rangle=\frac{1}{\sqrt{\mathcal{D}_{\text{H}}}}\sum_{n}|n\rangle\otimes|n\rangle$.
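The doubled-space construction and comments ii) and iii) can be checked numerically on a small toy model (a random Hermitian Hamiltonian with one Hermitian dissipation operator; this is our sketch, not the models studied below):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4
A = rng.normal(size=(D, D))
H = (A + A.T) / 2                        # random Hermitian "Hamiltonian"
L = np.diag(rng.normal(size=D))          # Hermitian dissipation operator
gamma = 2.0
I = np.eye(D)

# H_s and H_d in the doubled space, Eqs. (3) and (4).
Hs = np.kron(H, I) - np.kron(I, H.T)
LdL = L.conj().T @ L
Hd = gamma * (-2 * np.kron(L, L.conj())
              + np.kron(LdL, I) + np.kron(I, LdL.conj()))

# Lindblad spectrum eps_l = alpha_l - i*beta_l, Eq. (5).
eps = np.linalg.eigvals(Hs - 1j * Hd)
beta = -eps.imag
assert beta.min() > -1e-7                # ii) beta_l is non-negative
assert np.min(np.abs(eps)) < 1e-7        # iii) zero mode for Hermitian L
```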
Figure 2: The dynamics of the second Rényi entropy $S^{(2)}$ as a function of
$t\gamma$. $\gamma$ is the dissipation strength. Different curves have
different $\gamma$ in units of $J$. The insets show the long-time behavior of
$S^{(2)}$ as functions of $tJ$ and $tJ^{2}/\gamma$. The dashed line is a
fit of the initial slope based on Eq. 13. (a) is for the Bose-Hubbard model
with $U=J$, the number of sites $L=6$, and the number of bosons $N=3$. (b)
is for the hard-core boson model with $V=J$, $L=8$ and $N=4$. The initial state
is taken as the ground state of $\hat{H}$.
Rényi Entropy and Lindblad Spectrum. Here we bring out a close relation
between the dynamics of the second Rényi entropy and the Lindblad spectrum.
For any density matrix $\hat{\rho}(t)$, the second Rényi entropy $S^{(2)}(t)$
is given by
$e^{-S^{(2)}}=\text{Tr}(\hat{\rho}^{2})=\sum\limits_{mn}\rho_{mn}(t)\rho_{nm}(t).$
(6)
On the other hand, in the double system, the total amplitude of the wave
function is given by
$|\Psi_{\rho}|^{2}=\sum\limits_{mn}\rho_{mn}(t)\rho^{*}_{mn}(t).$ (7)
Since the density matrix is always Hermitian, it gives
$\rho_{nm}(t)=\rho^{*}_{mn}(t)$, and therefore, we have
$e^{-S^{(2)}}=|\Psi_{\rho}|^{2}.$ (8)
An initial state $\Psi_{\rho}(0)$ in the double space can be expanded as
$\Psi_{\rho}(0)=\sum_{l}c_{l}|\Psi^{l}_{\rho}\rangle$, and the subsequent
evolution is given by
$\Psi(t)=e^{-i\hat{H}_{\text{s}}t-\hat{H}_{\text{d}}t}|\Psi_{\rho}(0)\rangle=\sum_{l}c_{l}e^{-i\alpha_{l}t-\beta_{l}t}|\Psi^{l}_{\rho}\rangle$
(9)
and therefore
$e^{-S^{(2)}}=|\Psi_{\rho}|^{2}=\sum\limits_{l}|c_{l}|^{2}e^{-2\beta_{l}t}.$
(10)
Since the evolution in the double system is non-unitary and all $\beta_{l}$ are
non-negative, the total amplitude of the wave function always decays in time.
Hence, by the entropy-amplitude relation Eq. 8, the decay of
$|\Psi_{\rho}|^{2}$ gives rise to the increase of $S^{(2)}$. Note that for
any initial density matrix with unit trace and for Hermitian $\hat{L}_{\mu}$,
$c_{l=0}$ always equals $1/\sqrt{\mathcal{D}_{\text{H}}}$. This mode never
decays in time because $\beta_{l=0}=0$. If there are no other
eigenmodes with $\beta_{l}=0$, the $l=0$ mode is the only mode remaining at
infinitely long times, which gives the maximum second Rényi entropy
$\log\mathcal{D}_{\text{H}}$. Before reaching that limit, the imaginary parts
of the Lindblad spectrum of the occupied states determine the time scales of the
Rényi entropy dynamics. Our discussion below will be based on this connection.
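The entropy-amplitude relation Eq. 8 can be verified directly for an arbitrary density matrix:

```python
import numpy as np

# Random valid density matrix (Hermitian, positive, unit trace).
rng = np.random.default_rng(1)
D = 5
M = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
rho = M @ M.conj().T
rho /= np.trace(rho)

S2 = -np.log(np.trace(rho @ rho).real)       # second Renyi entropy, Eq. (6)
amp2 = np.sum(np.abs(rho.reshape(-1))**2)    # |Psi_rho|^2, Eq. (7)
assert np.isclose(np.exp(-S2), amp2)         # Eq. (8)
```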
Models. Although our discussion below is quite general for quantum many-body
systems, we illustrate the results with two concrete models. The first model
is the Bose-Hubbard model, which reads
$\hat{H}=-J\sum\limits_{\langle
ij\rangle}(\hat{b}^{\dagger}_{i}\hat{b}_{j}+\text{h.c.})+\frac{U}{2}\sum\limits_{i}\hat{n}_{i}(\hat{n}_{i}-1),$
(11)
where $\hat{b}_{i}$ is the boson annihilation operator at site-$i$, and
$\hat{n}_{i}=\hat{b}^{\dagger}_{i}\hat{b}_{i}$ is the boson number operator at
site-$i$. $\langle ij\rangle$ denotes nearest neighbor sites. $J$ and $U$ are
respectively the hopping and the on-site interaction strengths. For the second
model, we consider hard-core bosons, where the hard-core constraint prevents
two bosons from occupying the same site. In addition, we introduce a
nearest-neighbor repulsion, and the model reads
$\hat{H}=-J\sum\limits_{\langle
ij\rangle}(\hat{b}^{\dagger}_{i}\hat{b}_{j}+\text{h.c.})+V\sum\limits_{\langle
ij\rangle}\hat{n}_{i}\hat{n}_{j}.$ (12)
In one-dimension, these two models are quite different, because the second
model can be mapped to a spinless fermion model with nearest neighbor
repulsion, and can also be mapped to a spin model with nearest neighbor
couplings, but the first model cannot. In both cases, we take all the
$\hat{n}_{i}$ as the dissipation operators and we set the dissipation
strengths uniformly to $\gamma$. In the numerical results shown below, we
have chosen $J\sim U$ or $J\sim V$ such that $J$ sets the typical energy scale
of the Hamiltonian part, and therefore strong and weak dissipation
respectively mean $\gamma/J>1$ and $\gamma/J<1$. Below we will show that both
models exhibit similar features, which supports that our results are quite
universal.
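For concreteness, the hard-core boson Hamiltonian Eq. 12 can be built in the fixed-$N$ Fock basis as follows (a toy chain with $L=4$ sites and $N=2$ bosons, our illustration):

```python
import numpy as np
from itertools import combinations

Lsites, N = 4, 2
J, V = 1.0, 1.0
basis = list(combinations(range(Lsites), N))     # tuples of occupied sites
index = {c: i for i, c in enumerate(basis)}

H = np.zeros((len(basis), len(basis)))
for c in basis:
    occ = set(c)
    # nearest-neighbor repulsion V * n_i * n_{i+1}
    H[index[c], index[c]] += V * sum(
        1 for i in range(Lsites - 1) if i in occ and i + 1 in occ)
    # hopping -J (b†_i b_{i+1} + h.c.): move a boson across each bond
    for i in range(Lsites - 1):
        if (i in occ) != (i + 1 in occ):
            new = tuple(sorted(occ ^ {i, i + 1}))
            H[index[c], index[new]] += -J

assert H.shape == (6, 6)          # binomial(4, 2) basis states
assert np.allclose(H, H.T)        # Hermitian
```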
Figure 3: The Lindblad spectrum for strong dissipation case (a1,a2,b1,b2) with
$\gamma=5J$ and for weak dissipation case (c1,c2) with $\gamma=0.2J$. The red
points mark the eigenstates with significant occupation ($|c_{l}|^{2}\geqslant
1/\mathcal{D}_{\text{H}}$) by the initial state. For (a1) and (a2) in the
first row, the initial state is taken as
$\Psi_{\rho}=|\psi_{\text{g}}\rangle\otimes|\psi_{\text{g}}\rangle$, where
$|\psi_{\text{g}}\rangle$ is the ground state of $\hat{H}$. For (b1) and (b2)
in the second row, the initial states are taken as the zero-energy eigenstates
of $\hat{H}_{\text{d}}$, that are $|111000\rangle$ for (b1) and
$|11110000\rangle$ for (b2) in the Fock bases. The left column (a1,b1,c1) is for
the Bose-Hubbard model with $U=J$, the number of sites $L=6$, and the
number of bosons $N=3$. The right column (a2,b2,c2) is for the hard-core boson
model with $V=J$, $L=8$ and $N=4$.
Dynamics of the Rényi Entropy. We first consider the short-time behavior of
the Rényi entropy dynamics. We apply the short-time expansion to Eq. 9 and
utilize the relation Eq. 8, and to the leading order of the entropy change, we
obtain
$\lim\limits_{t\rightarrow
0}\frac{dS^{(2)}}{dt}=2\frac{\langle\Psi_{\rho}(0)|\hat{H}_{\text{d}}|\Psi_{\rho}(0)\rangle}{\langle\Psi_{\rho}(0)|\Psi_{\rho}(0)\rangle}.$
(13)
The physical meaning of the r.h.s. of Eq. 13 in the original system is the
fluctuation of the dissipation operators. For instance, if the initial state
is a pure state and $\hat{\rho}(0)=|\psi(0)\rangle\langle\psi(0)|$, then
$|\Psi_{\rho}(0)\rangle=|\psi(0)\rangle\otimes|\psi(0)\rangle$, and Eq. 13 can
be rewritten as
$\displaystyle\lim\limits_{t\rightarrow 0}\frac{dS^{(2)}}{dt}=$ $\displaystyle
4\sum\limits_{\mu}\gamma_{\mu}\left(\langle\psi(0)|\hat{L}^{\dagger}_{\mu}\hat{L}_{\mu}|\psi(0)\rangle-|\langle\psi(0)|\hat{L}_{\mu}|\psi(0)\rangle|^{2}\right).$
(14)
Suppose all the $\gamma_{\mu}$ are taken to be the same $\gamma$; this result shows
that the time dependence of $S^{(2)}$ is governed by the dimensionless time
$\gamma t$. In other words, the larger $\gamma$ is, the faster the Rényi
entropy increases. This $\gamma t$ scaling is shown in Fig. 2 for two
different models, where one can see that the short-time parts of the $S^{(2)}$
curves with different $\gamma$ collapse onto a single line when plotted in
terms of $\gamma t$. The dashed lines compare the short-time behavior with the
slope given by Eq. 13 and Eq. 14.
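Eq. 14 can be evaluated directly for a toy pure state with $\hat{n}_{i}$-like (Hermitian, diagonal) dissipation operators; each term is a variance, so the initial slope is non-negative:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 4
psi = rng.normal(size=D) + 1j * rng.normal(size=D)
psi /= np.linalg.norm(psi)                               # toy pure state
L_ops = [np.diag(rng.normal(size=D)) for _ in range(D)]  # toy dissipators
gamma = 0.5

# dS2/dt at t = 0, Eq. 14: 4 * sum_mu gamma * ( <L†L> - |<L>|^2 )
slope = 4 * gamma * sum(
    (psi.conj() @ (Lm.conj().T @ Lm @ psi)).real
    - abs(psi.conj() @ (Lm @ psi)) ** 2
    for Lm in L_ops)
assert slope > 0    # a sum of variances of the dissipation operators
```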
In Fig. 2, one also finds that $S^{(2)}$ no longer obeys the $\gamma t$
scaling when $\gamma t>1$. Moreover, in the strong dissipation regime, the
insets plotted in terms of $tJ$ show an opposite trend at long times, that is,
the larger $\gamma$ is, the slower the Rényi entropy increases. In fact, the
long-time behavior of $S^{(2)}$ exhibits a $t/\gamma$ scaling. As shown in the
insets of Fig. 2, when the long-time parts of the $S^{(2)}$ curves with different
$\gamma$ are plotted in terms of $tJ^{2}/\gamma$, they all collapse onto a
single curve.
Lindblad Spectrum with Strong Dissipation. This opposite behavior between
short and long times can be understood very well in terms of the Lindblad
spectrum. As one can see from Fig. 3(a,b), for strong dissipation, the main
feature of the Lindblad spectrum is that it separates into segments along the
imaginary axis of the spectrum, and the separation between segments is
approximately $2\gamma$. For each segment, the width along the imaginary axis
is approximately given by $J^{2}/\gamma$. This feature can be understood by a
perturbative treatment of $\hat{H}_{\text{s}}-i\hat{H}_{\text{d}}$. Since the
dissipation strength is stronger than the typical energy scales of the
Hamiltonian, we can treat $\hat{H}_{\text{s}}$ as a perturbation to
$\hat{H}_{\text{d}}$. To zeroth order, the spectrum is purely imaginary and
different segments are separated by $2\gamma$. More importantly, it is worth
emphasizing that the eigenstates of $\hat{H}_{\text{d}}$ are usually highly
degenerate, for instance, when different $\hat{L}_{\mu}$ commute with each
other and are related by a symmetry, such as the $\hat{L}_{\mu}$ being
$\hat{n}_{i}$ in our examples. Usually, $\hat{H}_{\text{s}}$ and
$\hat{H}_{\text{d}}$ do not commute with each other, and the perturbation in
$\hat{H}_{\text{s}}$ lifts the degeneracy of the imaginary parts and gives
rise to a broadening of the order of $J^{2}/\gamma$, due to the nature of the
second-order perturbation.
We call the eigenstates with imaginary energies of the order of a few times
$\gamma$ “high imaginary energy states”, and the eigenstates with imaginary
energies of the order of a few times $J^{2}/\gamma$ “low-lying imaginary
energy states”. For a generic initial state, both types of eigenstates are
occupied. Quite generally, the occupations of the “high imaginary energy
states” are significant, for instance, when the initial state is taken as an
eigenstate of $\hat{H}_{\text{s}}$. With the relation between the Rényi
entropy dynamics and the Lindblad spectrum discussed above, it is clear that
the short-time dynamics is dominated by these “high imaginary energy states”,
which gives dynamics scaled by $t\gamma$. Nevertheless, when $\gamma t>1$,
the weights on these “high imaginary energy states” have mostly decayed away,
and the long-time dynamics is therefore dominated by the “low-lying imaginary
energy states”, which gives dynamics scaled by $tJ^{2}/\gamma$.
Figure 4: The dynamics of the second Rényi entropy $S^{(2)}$ as a function of
$tJ^{2}/\gamma$ for specific initial states. $\gamma$ is the dissipation
strength. Different curves have different $\gamma$ in units of $J$. The insets
show the short-time behavior of $S^{(2)}$ as functions of $tJ$ and $t\gamma$.
(a) is for the Bose-Hubbard model with $U=J$, the number of sites $L=6$,
and the number of bosons $N=3$. (b) is for the hard-core boson model with $V=J$,
$L=8$ and $N=4$. The initial states are taken as the zero-energy eigenstates of
$\hat{H}_{\text{d}}$, that are $|111000\rangle$ for (a) and $|11110000\rangle$
for (b) in the Fock bases.
Quantum Zeno Effect Revisited. Here we consider a specific initial state that
satisfies $\hat{H}_{\text{d}}|\Psi(0)\rangle=0$. In other word, such initial
states do not exhibit fluctuation of dissipation operators. Thus, according to
Eq. 13 and Eq. 14, the initial slop of $S^{(2)}$ is zero. Moreover, in the
strong dissipation regime, the populations of the “high imaginary energy
states” are strongly suppressed by the “gap” between different segments and
their contribution becomes negligible, and such initial states mainly populate
the “low-lying imaginary energy states”, as we shown in Fig. 3(b). Therefore,
the entire dynamics of the second Rényi entropy is set by the energy scale
$J^{2}/\gamma$ and it obeys the $t/\gamma$ scaling. This is shown in Fig. 4
for two models. To contrast such specific initial states with generic states
discussed above, we plot in the inset of Fig. 4 the short-time behavior of
$S^{(2)}$ as a function of $t\gamma$ and $tJ$. Unlike the results shown in
Fig. 2, the short-time dynamics with $t\gamma<1$ are quite different, because
it does not exhibit linear behavior and different curves do not collapse into
a single line in terms of $t\gamma$.
For these initial states, the fact that the dynamics becomes slower with stronger
dissipation is reminiscent of the quantum Zeno effect. In fact, the quantum
Zeno effect can indeed be understood in this way. Introducing
$\\{|M\rangle\\},(M=1,\dots,\mathcal{D}_{\text{H}})$ as a set of complete and
orthogonal measurement bases, we define the projection operators as
$\hat{P}_{M}=|M\rangle\langle M|$, and the frequent measurement process can
also be described by the Lindblad equation Eq. 1 with dissipation operator
$\hat{L}_{\mu}$ given by all $\hat{P}_{M}$. With such dissipation operators,
the Lindblad spectrum exhibits a set of “low-lying imaginary energy states”
with energy scale given by $J^{2}/\gamma$. It can be shown that, as long as
the initial state density matrix is diagonal in the measurement bases, the
initial states satisfy $\hat{H}_{\text{d}}|\Psi(0)\rangle=0$.
From Strong to Weak Dissipation. Finally we show that when $\gamma$ decreases
and eventually becomes weaker compared with the typical energy scales in the
Hamiltonian, the segment structure in the Lindblad spectrum disappears, as
shown in Fig. 3(c). Thus, the entropy dynamics for generic states no longer
display the feature of two time scales. The quantum Zeno effect also
disappears even for the specific initial states, and this is understandable
because in this regime, the typical time interval between two measurements is
already longer than the intrinsic evolution time of the system.
Summary. In this work, we establish a relation between the Rényi entropy
dynamics and the Lindblad spectrum in double space. In the strong dissipation
regime, the Lindblad spectrum exhibits a segment structure, in which we can
introduce the “high imaginary energy eigenstates” and the “low-lying imaginary
energy eigenstates”. For a generic initial state with significantly occupied
“high imaginary energy eigenstates”, the former dominates the short-time
dynamics and the latter dominates the long-time dynamics, which respectively
give rise to $t\gamma$ scaling and $t/\gamma$ scaling. For a specific initial
state with only “low-lying imaginary energy eigenstates” significantly
occupied, the dynamics is dominated by $t/\gamma$ scaling, and we show the
quantum Zeno effect belongs to this class. We illustrate our results with two
concrete models. The second Rényi entropy can now be measured in ultracold
atomic gases in optical lattices, and in fact, it has been measured in the
Bose-Hubbard model with or without disorder Greiner1 ; Greiner2 ; Greiner3 .
The dissipation operators and their strengths can also now be controlled in
ultracold atomic gases BHMExp , so our predictions can be verified
directly in experimental setups.
Acknowledgment. We thank Lei Pan, Tian-Shu Deng, Tian-Gang Zhou and Pengfei
Zhang for helpful discussions. This work is supported by Beijing Outstanding
Young Scientist Program, NSFC Grant No. 11734010, MOST under Grant No.
2016YFA0301600.
Note Added. While finishing this work, we became aware of a work in which
similar behaviors of the Lindblad spectrum in strong dissipation regime are
also discussed full_spectrum .
## References
* (1) H. P. Breuer and F. Petruccione, The Theory of Open Quantum Systems (Oxford University Press, Oxford, 2007).
* (2) L. Pan, X. Chen, Y. Chen, and H. Zhai, Nat. Phys. 16, 767(2020).
* (3) B. Misra and E. C. G. Sudarshan, Journal of Mathematical Physics 18, 756 (1977).
* (4) W. M. Itano, D. J. Heinzen, J. J. Bollinger, and D. J. Wineland, Phys. Rev. A 41, 2295 (1990).
* (5) A. G. Kofman and G. Kurizki, Nature 405, 546 (2000).
* (6) K. Koshino and A. Shimizu, Phys. Rep. 412, 191 (2005).
* (7) L. Aolita, F. de Melo and L. Davidovich, Rep. Prog. Phys. 78, 042001 (2015)
* (8) P. Lorenzo, S. Christoph, and X.-L., Qi, J. High Energ. Phys. 2020, 63 (2020).
* (9) Y. Chen, X.-L. Qi and P. Zhang, J. High Energ. Phys. 2020, 121 (2020).
* (10) P. Dadras, A. Kitaev, arXiv: 2011.09622
* (11) Y. Chen, arXiv: 2012.00223
* (12) K. Su, P. Zhang and H. Zhai, arXiv: 2101.*****
* (13) J. E. Tyson, J. Phys. A: Math. Gen. 36, 10101 (2003).
* (14) M. Zwolak and G. Vidal, Phys. Rev. Lett. 93, 207205 (2004).
* (15) T. Prosen, Phys. Rev. Lett. 109, 090404 (2012)
* (16) M. V. Medvedyeva, F. H. L. Essler, T. Prosen, Phys. Rev. Lett. 117, 137202 (2016)
* (17) S. Denisov, T. Laptyeva, W. Tarnowski, D. Chruściński, and K. Życzkowski, Phys. Rev. Lett. 123, 140403 (2019)
* (18) S. Lieu, M. McGinley, and N. R. Cooper, Phys. Rev. Lett. 124, 040401 (2020).
* (19) K. Wang, F. Piazza, and D. J. Luitz, Phys. Rev. Lett. 124, 100604 (2020).
* (20) D. Yuan, H. Wang, Z. Wang, D. L. Deng, arXiv: 2009.00019
* (21) R. Islam, R. Ma, P. M. Preiss, M. Eric Tai, A. Lukin, M. Rispoli, M. Greiner, Nature 528, 77 (2015).
* (22) A. M. Kaufman, M. Eric Tai, A. Lukin, M. Rispoli, R. Schittko, P. M. Preiss, M. Greiner, Science 353, 794 (2016)
* (23) A. Lukin, M. Rispoli, R. Schittko, M. Eric Tai, A. M. Kaufman, S. Choi, V. Khemani, J. Léonard, M. Greiner, Science 364, 256 (2019)
* (24) R. Bouganne, M. B. Aguilera, A. Ghermaoui, J. Beugnon, F. Gerbier, Nat. Phys. 16, 2125 (2020).
* (25) V. Popkov and C. Presilla, arXiv:2101.05708
# Page Curve from Non-Markovianity
Kaixiang Su Institute for Advanced Study, Tsinghua University, Beijing
100084, China Department of Physics, University of California Santa Barbara,
Santa Barbara, California, 93106, USA Pengfei Zhang
<EMAIL_ADDRESS>Institute for Quantum Information and Matter,
California Institute of Technology, Pasadena, California 91125, USA Walter
Burke Institute for Theoretical Physics, California Institute of Technology,
Pasadena, California 91125, USA Hui Zhai<EMAIL_ADDRESS>Institute for
Advanced Study, Tsinghua University, Beijing 100084, China
###### Abstract
In this letter, we use the exactly solvable Sachdev-Ye-Kitaev model to address
the issue of entropy dynamics when an interacting quantum system is coupled to
a non-Markovian environment. We find that at the initial stage, the entropy
always increases linearly matching the Markovian result. When the system
thermalizes with the environment at a sufficiently long time, if the
environment temperature is low and the coupling between system and environment
is weak, then the total thermal entropy is low and the entanglement between
system and environment is also weak, which yields a small system entropy in
the long-time steady state. This manifestation of non-Markovian effects of the
environment forces the entropy to decrease in the later stage, which yields
the Page curve for the entropy dynamics. We argue that this physical scenario
revealed by the exact solution of the Sachdev-Ye-Kitaev model is universally
applicable for general chaotic quantum many-body systems and can be verified
experimentally in the near future.
Studying open quantum many-body systems is of fundamental importance for
understanding quantum matter and for future applications of quantum
technology because all systems are inevitably in contact with environments,
and decoherence due to coupling with environments is a major obstacle for
future applications of quantum devices Preskill2018 . So far, most studies of
open quantum systems are limited either to situations in which the systems are
small or weakly correlated quantum many-body systems, or to situations in which the
environment is treated with the Born-Markov approximation scully1999quantum ;
breuer2002theory . Little effort has been devoted to strongly correlated quantum
many-body systems coupled to a non-Markovian environment. This is simply
because both strong correlation and non-Markovianity are difficult to handle
theoretically.
Open systems are also of interest to gravity studies, and the best-known
problem is the black hole information paradox hawking1975 . The central issue
of the black hole information paradox is whether the black hole evaporation
can be considered as undergoing unitary dynamics. If so, the entropy should
first increase and then decrease as the black hole evaporates. Such an entropy
curve as shown in Fig. 1(a) is known as the Page curve page1993 . Here the
entanglement entropy is the entropy of the reduced density matrix of the
radiation part $\mathcal{A}$ after tracing out the remaining black hole part
$\mathcal{B}$ . As shown in Fig. 1(a), this entanglement entropy reaches the
maximum when half of the black hole is evaporated, giving rise to the Page
curve. Reproducing the Page curve from gravity theory is a challenging part of
the black hole information problem, and progress has been made recently
using the semi-classical gravity calculations penington2019entanglement ;
almheiri2019entropy ; almheiri2020page ; almheiri2019replica ;
penington2019replica .
Figure 1: (a): Illustration of the Page curve: the entanglement entropy
between two sub-systems with length $xL$ and $(1-x)L$. The total length is $L$
and $0\leqslant x\leqslant 1$. (b): Illustration of the setup: an SYK4 system
with $N$ Majorana fermions serves as the system and an SYK2 system with $M$
Majorana fermions serves as the environment. Here $M\gg N$.
In this letter, we explore the Sachdev-Ye-Kitaev model maldacena2016remarks ;
Kitaev2018soft with random four-Majorana fermions interactions (SYK4), and
this SYK4 model is coupled to a system with random quadratic Majorana fermions
couplings (SYK2). The SYK2 part contains many more degrees of freedom
than the SYK4 part, so the SYK2 part can be viewed as the
environment. The motivations for considering such a model are twofold.
First, the SYK4 model is exactly solvable in the large-N limit and its
solution gives rise to a strongly correlated non-Fermi liquid state
maldacena2016remarks . Recently, techniques related to the SYK4 model have been
widely used to construct exact solutions to address open issues of strongly
correlated quantum matter gu2017local ; davison2017thermoelectric ;
chen2017competition ; song2017strongly ; zhang2017dispersive ; jian2017model ;
chen2017tunable ; eberlein2017quantum ; zhang2019evaporation ;
almheiri2019universal ; gu2017spread ; chen2020Replica ; zhang2020 ; sk ;
Altman ; sk jian ; zhang2020entanglement ; Liu2020non ; Kitaev . Here the
situation we explored is also exactly solvable and we can use the solution to
understand how a non-Markovian environment affects strongly correlated phases
zhang2019evaporation ; almheiri2019universal ; chen2017tunable . Secondly, the
SYK4 model is holographically dual to the Jackiw-Teitelboim gravity theory in
the AdS2 geometry with a black hole bulk spectrum Polchinski ; bulk Yang ;
bulk2 ; bulk3 ; bulk4 ; bulk5 . Thus, the entropy dynamics of the SYK4 system
coupled to an environment gu2017spread ; chen2020Replica ; Kitaev resembles
the black hole evaporation process penington2019entanglement ;
almheiri2019entropy ; almheiri2020page ; almheiri2019replica ;
penington2019replica and it will be interesting to study when a Page-like
curve can emerge after turning on the coupling between the system and the
environment. Remarkably, the main findings of this work bring these two
aspects together, that is, we show that the Page curve emerges because of the
non-Markovian effect of the environment.
Model. The system under consideration is illustrated in Fig. 1(b) and the
total Hamiltonian is given by
$\displaystyle\hat{H}=\hat{H}_{\text{S}}+\hat{H}_{\text{E}}+\hat{H}_{\text{SE}}$
(1)
$\displaystyle=\sum_{i<j<k<l}J^{S}_{ijkl}\psi_{i}\psi_{j}\psi_{k}\psi_{l}+i\sum_{a<b}J^{\text{E}}_{ab}\chi_{a}\chi_{b}+i\sum_{i,a}V_{ia}\psi_{i}\chi_{a}.$
Here $\psi_{i}$ ($i=1,\dots,N$) denotes $N$ Majorana fermions in the system
and $\chi_{a}$ ($a=1,\dots,M$) denotes $M$ Majorana fermions in the
environment, with $\\{\psi_{i},\psi_{j}\\}=\delta_{ij}$ and
$\\{\chi_{a},\chi_{b}\\}=\delta_{ab}$. Throughout the letter we will use the
subscript “S” and “E” to denote the system part and the environment part
respectively. $\hat{H}_{\text{S}}$ and $\hat{H}_{\text{E}}$ are then SYK4 and
SYK2 Hamiltonians. $J^{S}_{ijkl}$, $J^{\text{E}}_{ab}$ and $V_{ia}$ are
independent random Gaussian variables with variances given by
$\overline{(J^{\text{S}}_{ijkl})^{2}}=\frac{3!J_{S}^{2}}{N^{3}},\qquad\overline{(J^{\text{E}}_{ab})^{2}}=\frac{2!J_{\text{E}}^{2}}{M},\qquad\overline{(V_{ia})^{2}}=\frac{V^{2}}{M}.$
(2)
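The variance conventions above can be checked by drawing the random couplings numerically; the sizes, seed, and coupling values below are illustrative placeholders (the paper works in the limit $M\gg N\gg 1$):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 20, 200          # illustrative sizes; the paper assumes M >> N >> 1
J_S, J_E, V = 1.0, 1.0, 0.5

# Draw each independent coupling from a zero-mean Gaussian with the stated variance.
J4 = rng.normal(0.0, np.sqrt(6 * J_S**2 / N**3), size=(N, N, N, N))   # J^S_{ijkl}, i<j<k<l used
J2 = rng.normal(0.0, np.sqrt(2 * J_E**2 / M), size=(M, M))            # J^E_{ab}, a<b used
Via = rng.normal(0.0, np.sqrt(V**2 / M), size=(N, M))                 # V_{ia}

print(np.var(Via) * M / V**2)  # close to 1: empirical variance matches V^2/M
```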
Throughout the paper, we take $J_{S}=1$ as the energy unit. We focus on the
limit $M\gg N\gg 1$ in which the Schwinger-Dyson equation for $\chi$ contains
no contribution from $\psi$, justifying that the $\chi$ part can be viewed as
the environment. Therefore, the Green’s function
$G_{\chi}(\tau)=\left<T_{\tau}\chi_{i}(\tau)\chi_{i}(0)\right>$ of the
environment takes the standard form of the SYK2 model as maldacena2016remarks
$G_{\chi}(\omega)=-\frac{2}{i\omega+i\text{sgn}(\omega)\sqrt{4J_{\text{E}}^{2}+\omega^{2}}}.$
(3)
The fact that this Green’s function has frequency dependence means that the
environment is treated beyond the Markovian approximation.
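Eq. (3) can be evaluated directly to make this frequency dependence explicit; a minimal numerical sketch, with $J_{\text{E}}=1$ as an illustrative value:

```python
import numpy as np

def G_chi(omega, J_E=1.0):
    """SYK2 environment Green's function of Eq. (3)."""
    return -2.0 / (1j * omega + 1j * np.sign(omega) * np.sqrt(4 * J_E**2 + omega**2))

w = np.array([0.1, 1.0, 10.0])
# |G_chi| decreases with |omega|: a frequency-dependent, hence non-Markovian, kernel.
print(np.abs(G_chi(w)))
```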
We consider the situation in which the system and the environment are initially
decoupled, and both are in thermal equilibrium with inverse temperatures
$\beta_{\text{S}}$ and $\beta_{\text{E}}$ respectively. The initial density
matrix is therefore given by
$\rho(0)=\frac{1}{Z_{\text{S}}Z_{\text{E}}}e^{-\beta_{S}\hat{H}_{\text{S}}}\otimes
e^{-\beta_{\text{E}}\hat{H}_{\text{E}}}$ with $Z_{\text{S}}$ and
$Z_{\text{E}}$ being the corresponding partition functions. Evolving the
system with the Hamiltonian Eq. (1) and tracing out the environment, one
obtains the reduced density matrix of the system $\rho_{\text{S}}(t)$ at time
$t$ as
$\rho_{S}(t)=\text{tr}_{\text{E}}\left[e^{-i\hat{H}t}\rho(0)e^{i\hat{H}t}\right].$
(4)
The corresponding second Rényi entropy $S^{(2)}$ of the system is then given
by
$e^{-S^{(2)}_{\text{S}}(t)}=\text{tr}_{\text{S}}\left[\rho_{S}(t)^{2}\right].$
(5)
Under the disorder replica diagonal assumption, $S^{(2)}_{\text{S}}(t)$ can be
expressed in terms of path-integral over bilocal fields. In the large-$N$
limit, the integral is dominated by the saddle point solution, and the entropy
can be obtained by evaluating the on-shell action.
Figure 2: (a): Entropy curves for different $\kappa$ and $\beta_{\text{E}}$.
Here $\kappa=0.1$ for solid lines and $\kappa=0.2$ for dashed lines. Three
different $\beta_{\text{E}}=(0,3,6)$ are plotted. The red dashed straight line
represents the same initial slope for all curves. Two curves with
$\beta_{\text{E}}=0$ in (a) coincide with the Markovian results. (b): Entropy
curves with different $\kappa$ and a fixed $\beta_{\text{E}}=6$. The curve
with large $\kappa$ in (b) coincides with $\beta_{\text{E}}=0$ curves in (a)
and the Markovian results.
Recovering the Markovian Results. Below we will first discuss situations where
the entropy dynamics of our model can recover the Markovian result. Here, by
the Markovian result we mean dynamics obtained by solving the following
Lindblad master equation scully1999quantum ; breuer2002theory
$\partial_{t}\hat{\rho}=-i[\hat{H}_{\text{S}},\hat{\rho}]+\sum_{i}\kappa_{i}\left(\hat{L}_{i}\rho\hat{L}_{i}^{\dagger}-\frac{1}{2}\\{\hat{L}_{i}^{\dagger}\hat{L}_{i},\hat{\rho}\\}\right).$
(6)
By treating the environment with the Markovian approximation, we only need to
consider the Hamiltonian of the system and the dissipation operators, without
having to explicitly include the environment. To make comparison with our
model, we take $\kappa_{i}={\kappa}$ and $\hat{L}_{i}=\psi_{i}$. Similar to
previous procedures, we consider the initial thermal density matrix
$\rho_{\text{S}}(0)=\frac{1}{Z_{\text{S}}}e^{-\beta_{\text{S}}\hat{H}_{\text{S}}}$,
and we then evolve the system with Eq. (6). The second Rényi entropy can also
be represented as a path-integral where the saddle points approximation is
applicable. In the Markovian case, the Rényi entropy first grows linearly in
time and then saturates to its maximum value $(N/2)\log 2$, forbidding the
possibility of any Page-like behaviors Zhai . For reasons that will become
clear below, we consider the large $J_{\text{E}}$ limit and fix
$V^{2}/J_{\text{E}}$ as $\kappa$ in our model. Our discussions below will then
identify the following conditions as sufficient for our model to recover the
Markovian results.
i) Infinite Environment Temperature. The SYK2 environment becomes a Markovian
one when $\beta_{\text{E}}=0$. This is because only the two-point function
enters the effective action for entropy dynamics, and in the large
$J_{\text{E}}$ limit, the Fourier transformation of the real-time Green’s
function of the environment
$G_{\text{E}}^{>}(t,\beta)=\left<\chi_{i}(t)\chi_{i}(0)\right>_{\beta}/Z_{B}$
gives
$G^{>}_{\text{E}}(\omega,\beta_{\text{E}})=\frac{1}{J_{\text{E}}}\frac{1}{1+e^{-\beta_{E}\omega}}.$
(7)
When $\beta_{\text{E}}=0$, the distribution factor $1/(1+e^{-\beta_{E}\omega})$ reduces
to the constant $1/2$, so the Green’s function of the environment becomes frequency
independent, which is equivalent to the Markovian approximation. Under this
situation, the standard derivation of the
master equation Eq. 6 through a second-order perturbation theory yields the
dissipation strength $\kappa=V^{2}/J_{\text{E}}$. As one can see in Fig. 2
(a), when $\beta_{\text{E}}$ is set to zero, the entropy curve becomes
independent of $\kappa$. This universal curve also coincides with the results
from the Markovian approximation.
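The Markovian limit at $\beta_{\text{E}}=0$ can be verified directly from Eq. (7); a minimal sketch, with $J_{\text{E}}=1$ as an illustrative value:

```python
import numpy as np

def G_E_gtr(omega, beta_E, J_E=1.0):
    """Real-time environment Green's function, Eq. (7), in the large-J_E limit."""
    return (1.0 / J_E) / (1.0 + np.exp(-beta_E * omega))

w = np.linspace(-3, 3, 7)
print(G_E_gtr(w, beta_E=0.0))  # constant 1/(2 J_E): frequency independent, i.e. Markovian
print(G_E_gtr(w, beta_E=6.0))  # strongly frequency dependent: the non-Markovian regime
```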
ii) Short Time. In the Markovian approach, it can be shown by perturbation
that the entropy grows linearly at the initial stage, and when $\kappa t\ll 1$
npZhai ; Zhai , the growth rate can be derived analytically as
$\frac{dS^{(2)}_{\text{S}}(t)}{dt}=\kappa
N(1-2G_{\text{S}}^{W}(0,2\beta_{S})).$ (8)
Here we have defined the Wightman Green’s function for the single SYK4 model
as
$G_{\text{S}}^{W}(t,\beta)=\frac{1}{Z_{\text{S}}}\left<\psi_{i}(t-i\frac{\beta}{2})\psi_{i}(0)\right>_{\beta}.$
(9)
For our model, similar perturbative calculation Kitaev for short-time yields
$\displaystyle\frac{dS_{\text{S}}^{(2)}}{dt}=2V^{2}N\int_{-t}^{t}dt^{\prime}$
$\displaystyle\left[\left(G^{>}_{\text{S}}(t^{\prime},2\beta_{S})-G^{W}_{\text{S}}(t^{\prime},2\beta_{S})\right)\right.$
$\displaystyle\left.\times G^{>}_{\text{E}}(t^{\prime},\beta_{\text{E}})\right].$
(10)
Approximating the integrand by its value at $t^{\prime}=0$, Eq. 10 becomes
$\displaystyle\frac{dS_{\text{S}}^{(2)}}{dt}=\frac{V^{2}}{J_{\text{E}}}N(1-2G_{\text{S}}^{W}(0,2\beta_{S})).$
(11)
By identifying $V^{2}/J_{E}=\kappa$, Eq. 11 is the same as Eq. 8. This can also
be seen in Fig. 2(a): the initial growth is linear in $\kappa t$ and the
slope is constant for varying $\beta_{\text{E}}$ with fixed
$\beta_{\text{S}}$. In other words, although the Green’s function of the
environment Eq. 7 contains the frequency dependent part, it is not important
for initial time and the short-time behavior is always dominated by the
frequency independent part.
iii) Large System-Environment Coupling. The entropy curve of our model also
matches the Markovian result when $\kappa$ is sufficiently large compared with
$J_{\text{S}}$. Since the short time limit is always Markovian as discussed in
ii), here we focus on the long-time limit. Physically, when the coupling
between system and environment is strong enough compared with the internal
energy scales of the system, all Majorana fermions in the system tend to be
maximally entangled with the environment, because the environment contains
more degrees of freedom. Consequently, the entropy is expected to saturate to
the maximum value $(N/2)\log 2$ in the long-time limit, which is the same as
the Markovian case. This can also be shown more rigorously using the path-
integral formalism by relating the Rényi entropy to the inner product of
Kourkoulou-Maldacena pure states zhang2020entanglement ; Liu2020non . In Fig.
2, one can see that the entropy curve gradually approaches the Markovian
result as $\kappa$ increases.
Figure 3: The entropy curve for varying $\beta_{\text{S}}$ with fixed
$\kappa=0.1$ and $\beta_{\text{E}}=3$. Three different
$\beta_{\text{S}}=(2,3,8)$ are plotted. Page-like behaviour is guaranteed
when $\beta_{\text{S}}=2$ because the initial entropy is higher than the
saturated entropy.
The Page Curve. Above we have shown that, under three situations, our model
recovers the Markovian results, and the Markovian results do not display the
Page curve for entropy dynamics. Hence, to reveal effects beyond the Markovian
approximation, the following three conditions should be satisfied
simultaneously, which are: i) the bath temperature should not be too high; ii)
the evolution time should not be too short; iii) the coupling between system
and environment should not be too large. Under these conditions, we find that a
Page curve for the entropy dynamics is often observed, as shown in Fig. 3.
This attributes the emergence of the Page curve to effects beyond the
Markovian approximation. Since we have discussed that the entropy always increases at the
initial time, it is essential to understand the decreasing behavior of the
entropy curve at long time to understand the Page curve. Below we offer two
physical understandings.
The first understanding again relies on perturbation theory. When $\kappa$ is
small, the entropy dynamics can be obtained by doing perturbation in $\kappa$,
which yields the same perturbative results as Eq. 10. Here, since we focus on
the long-time behavior, we can simply replace $t$ by infinity and the range of
integration in Eq. 10 is set to be $(-\infty,\infty)$. By expressing the
Green’s functions $G^{>}$ and $G^{W}$ in terms of the spectral functions
$\rho$ maldacena2016remarks
$\displaystyle G^{>}(\omega,\beta)=\rho(\omega)\frac{1}{1+e^{-\beta\omega}},$
(12) $\displaystyle
G^{W}(\omega,\beta)=\rho(\omega)\frac{1}{2\cosh{(\beta\omega/2)}},$ (13)
and making use of the fact that $\rho(\omega)$ is even in $\omega$, we obtain
the following expression:
$\displaystyle\frac{dS_{\text{S}}^{(2)}}{dt}=2V^{2}N\int_{0}^{\infty}\left[\frac{d\omega}{2\pi}\frac{\rho_{\text{S}}(\omega,2\beta_{S})\rho_{\text{E}}(\omega,\beta_{\text{E}})}{2\cosh\beta_{S}\omega}\right.$
$\displaystyle\left.\times\frac{(e^{\beta_{S}\omega}-1)(1-e^{(\beta_{\text{E}}-\beta_{S})\omega})}{1+e^{\beta_{\text{E}}\omega}}\right].$
(14)
Note that in Eq. 14, all terms are positive definite except for the
$1-e^{(\beta_{\text{E}}-\beta_{S})\omega}$ term. When the temperature of the
environment is lower than the temperature of the system,
$\beta_{\text{E}}>\beta_{\text{S}}$ and
$1-e^{(\beta_{\text{E}}-\beta_{S})\omega}<0$. Therefore,
$dS_{\text{S}}^{(2)}/dt<0$ and the entropy decreases at long time, which
yields the Page curve. This gives a sufficient condition for the emergence of
the Page curve, that is, the small $\kappa$ and the lower environment
temperature, which also agrees with the three aforementioned conditions.
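This sign argument is easy to verify numerically: the factor $(e^{\beta_{S}\omega}-1)(1-e^{(\beta_{\text{E}}-\beta_{S})\omega})$ is the only sign-indefinite piece of the integrand in Eq. 14. A quick check (the temperature values are illustrative):

```python
import numpy as np

def sign_factor(omega, beta_S, beta_E):
    """The only sign-indefinite factor in the integrand of Eq. (14)."""
    return (np.exp(beta_S * omega) - 1.0) * (1.0 - np.exp((beta_E - beta_S) * omega))

w = np.linspace(0.01, 5, 100)  # Eq. (14) integrates over omega > 0
# Colder environment (beta_E > beta_S): the factor is negative, entropy decreases -> Page curve.
print(np.all(sign_factor(w, beta_S=2.0, beta_E=6.0) < 0))  # True
# Hotter environment (beta_E < beta_S): the factor is positive, entropy keeps growing.
print(np.all(sign_factor(w, beta_S=2.0, beta_E=0.5) > 0))  # True
```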
The second understanding relies on inspecting how the system entropy
saturates at sufficiently long times. It is reasonable to assume that the
system eventually reaches thermal equilibrium with the environment, and since
the environment contains many more degrees of freedom, the saturation entropy
is determined by $\kappa$ and $\beta_{\text{E}}$ and is independent of
$\beta_{\text{S}}$. The saturation entropy is smaller when the environment
temperature is lower, which corresponds to a decrease in thermal entropy. When
the coupling $\kappa$ is smaller the saturation entropy is also smaller
because this lowers the entanglement entropy. On the other hand, the initial
entropy of the system is mainly determined by parameters $J_{\text{S}}$ and
$\beta_{\text{S}}$ of the system itself. When this saturation entropy is
smaller than the initial entropy, the entropy has to decrease at a later
stage, which also leads to a sufficient condition for the emergence of the
Page curve.
Summary. In summary, we address the issue of when a Page curve can emerge in
entropy dynamics of a system coupled to the environment. Although we use the SYK
model as an exactly solvable model to study this problem, the lesson we learn
from our model reveals a general physical picture that should be applicable in
generic chaotic quantum many-body systems. This physical picture contains two
ingredients. First, at the initial stage, the entropy dynamics is always
dominated by the Markovian process which leads to a linear increase of entropy
in time. Secondly, a chaotic system thermalizes with the environment in the
long-time limit. After thermalization, a low environment temperature and a
weak system-environment coupling respectively suppress the thermal and the
entanglement contributions to the system entropy, which ensures a lower system
entropy at long time and forces the entropy to decrease at the later stage.
The long-time decreasing behavior is essential for the emergence of the Page
curve. This long-time behavior is distinct from the Markovian case where the
system is often heated to infinite temperature and the long-time steady state
is described by a density matrix given by the identity matrix. Therefore, the
Page curve is a consequence of the non-Markovian environment. Since the
entanglement entropy can now be measured experimentally Greiner1 ; Greiner2 ;
Greiner3 and the coupling to the environment can also be made highly controllable,
for instance, in ultracold atomic systems, the physical picture revealed in
this work can be experimentally verified in the near future.
Acknowledgment. This work is supported by Beijing Outstanding Young Scientist
Program, NSFC Grant No. 11734010, MOST under Grant No. 2016YFA0301600.
Note added. While finishing the manuscript, we became aware of a work by Chen
in which the Rényi entropy dynamics has been studied in a similar model using
perturbation theory Yu .
## References
* (1) J. Preskill, Quantum 2, 79 (2018).
* (2) M. O. Scully, and M. S. Zubairy, Quantum Optics, Cambridge University Press, Cambridge, 1997.
* (3) H. P. Breuer and F. Petruccione, The Theory of Open Quantum Systems, Oxford University Press, Oxford, 2007.
* (4) S. W. Hawking, Commun.Math. Phys. 43 (1975) 199-220.
* (5) D. N. Page, Phys. Rev. Lett. 71 (1993)1291.
* (6) G. Penington, J. High Energ. Phys. 2020, 2 (2020).
* (7) A. Almheiri, N. Engelhardt, D. Marolf, and H. Maxfield, J. High Energ. Phys. 2019 63 (2019).
* (8) A. Almheiri, R. Mahajan, J. Maldacena, and Y. Zhao, J. High Energ. Phys. 2020, 149 (2020).
* (9) A. Almheiri, T. Hartman, J. Maldacena, E. Shaghoulian, and A. Tajdini, J. High Energ. Phys. 2020, 13 (2020).
* (10) G. Penington, S. H. Shenker, D. Stanford, and Z. Yang, arXiv:1911.11977.
* (11) J. Maldacena, and D. Stanford, Physical Review D 94 (2016) 106002.
* (12) A. Kitaev, and S. Josephine Suh, J. High Energ. Phys. 2018, 183 (2018).
* (13) Y. Gu, X.-L. Qi and D. Stanford, J. High Energ. Phys. 2017 125 (2017).
* (14) R. A. Davison, W. Fu, A. Georges, Y. Gu, K. Jensen and S. Sachdev, Phys. Rev. B 95 155131 (2017).
* (15) X. Chen, R. Fan, Y. Chen, H. Zhai and P. Zhang, Phys. Rev. Lett. 119 207603 (2017).
* (16) S. Banerjee and E. Altman, Phys. Rev. B 95, 134302 (2017).
* (17) S.-K. Jian and H. Yao, Phys. Rev. Lett. 119, 206602 (2017).
* (18) X.-Y. Song, C.-M. Jian and L. Balents, Phys. Rev. Lett. 119 216601 (2017).
* (19) P. Zhang, Phys. Rev. B 96 205138 (2017).
* (20) C.-M. Jian, Z. Bi and C. Xu, Phys. Rev. B 96 115122 (2017).
* (21) A. Eberlein, V. Kasper, S. Sachdev and J. Steinberg, Phys. Rev. B 96 205123 (2017).
* (22) Y. Gu, A. Lucas and X.-L. Qi, J. High Energ. Phys. 2017, 120 (2017).
* (23) Y. Chen, H. Zhai and P. Zhang, J. High Energ. Phys. 2017 150 (2017).
* (24) P. Zhang, Phys. Rev. B 100 245104 (2019).
* (25) A. Almheiri, A. Milekhin and B. Swingle, arXiv:1912.04912.
* (26) Y. Chen, X.-L. Qi and P. Zhang, J. High Energ. Phys. 2020, 121 (2020).
* (27) P. Zhang, J. High Energ. Phys. 2020, 2 (2020).
* (28) P. Zhang, C. Liu, and X. Chen, SciPost Phys. 8, 094 (2020).
* (29) C. Liu, P. Zhang, and X. Chen, arXiv:2008.11955.
* (30) S.-K. Jian, and B. Swingle, arXiv:2011.08158.
* (31) P. Dadras, A. Kitaev, arXiv: 2011.09622.
* (32) J. Maldacena, D. Stanford and Z. Yang, Prog Theor Exp Phys 2016 (12): 12C104.
* (33) J. Polchinski and V. Rosenhaus, J. High Energ. Phys. 2016, 1 (2016).
* (34) K. Jensen, Phys. Rev. Lett. 117, 111601 (2016).
* (35) A. Jevicki and K. Suzuki, J. High Energ. Phys. 2016, 46 (2016).
* (36) G. Mandal, P. Nayak, and S. R. Wadia, arXiv:1702.04266.
* (37) D. J. Gross and V. Rosenhaus, J. High Energ. Phys. 2017, 92 (2017).
* (38) Y. N. Zhou, L. Mao and H. Zhai, arXiv: 2101.*****.
* (39) L. Pan, X. Chen, Y. Chen, and H. Zhai, Nat. Phys. 16, 767 (2020).
* (40) R. Islam, R. Ma, P. M. Preiss, M. Eric Tai, A. Lukin, M. Rispoli, M. Greiner, Nature 528, 77 (2015).
* (41) A. M. Kaufman, M. Eric Tai, A. Lukin, M. Rispoli, R. Schittko, P. M. Preiss, M. Greiner, Science 353, 794 (2016).
* (42) A. Lukin, M. Rispoli, R. Schittko, M. Eric Tai, A. M. Kaufman, S. Choi, V. Khemani, J. Léonard, M. Greiner, Science 364, 256 (2019).
* (43) Y. Chen, arXiv: 2012.00223.
# Gravitational waves from type II axion-like curvaton model and its implication
for NANOGrav result
Masahiro Kawasaki(a,b) and Hiromasa Nakatsuka(a)
(a)ICRR, The University of Tokyo, Kashiwa, Chiba 277-8582, Japan
(b)Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583,
Japan
## 1 Introduction
The North American Nanohertz Observatory for Gravitational Waves (NANOGrav) is one
of the pulsar timing array experiments, which has been trying to detect the
gravitational wave (GW) signal through long-term observation of pulsars.
Recently NANOGrav reported a hint of a stochastic GW signal in their
12.5-year observation [1]. This signal can be explained by a stochastic GW
with $\Omega_{\rm GW}\sim 10^{-9}$ at $f\sim 10^{-8}\mathrm{Hz}$, which can be
sourced by cosmic strings [2, 3, 4, 5, 6, 7], phase transitions [8, 9, 10, 11,
12], large density fluctuations in primordial black hole (PBH) formation
models [13, 14, 15, 16, 17, 18, 19, 20, 21, 22] and other origins [23, 24, 25].
The PBH formation scenario is an attractive candidate for the NANOGrav signal
since it can simultaneously explain the observed $30{M_{\odot}}$ black holes
in the binary merger events in LIGO-Virgo collaboration [26, 27, 28, 29]. The
PBH formation requires the large curvature power spectrum
($\mathcal{P}_{\zeta}(k_{\rm pbh})\sim 0.02$) at a wavenumber $k_{\rm pbh}$
related to the mass of the formed PBHs. Since PBHs are formed when high-density
regions enter the horizon and collapse, the PBH mass is roughly given
by the horizon mass at collapse, which is related to the frequency of the
density fluctuation $f$ ($=k_{\text{pbh}}/2\pi$) as [30]
$\displaystyle M(f)$ $\displaystyle\simeq
30{M_{\odot}}\left(\frac{\gamma}{0.2}\right)\left(\frac{\text{g}_{*}}{10.75}\right)^{-1/6}\left(\frac{f}{5.3\times
10^{-10}\,\mathrm{Hz}}\right)^{-2},$ (1)
where $\gamma\sim 0.2$ is the ratio of the PBH mass to the horizon mass [31] and
$\text{g}_{*}$ is the number of relativistic degrees of freedom at the PBH
formation. The same density fluctuations that collapse to form PBHs also
generate a stochastic background of GWs through the nonlinear coupling,
$\Braket{\zeta\zeta h}$ when they enter the horizon [32, 33, 34], with a
spectrum peaking at the frequency $f\sim 10^{-9}\mathrm{Hz}$, very close to
that of the NANOGrav signal.
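Eq. (1) can be encoded as a one-line function; a minimal sketch, with defaults set to the fiducial values quoted above:

```python
def pbh_mass_solar(f_hz, gamma=0.2, g_star=10.75):
    """PBH mass in solar masses as a function of fluctuation frequency, Eq. (1)."""
    return 30.0 * (gamma / 0.2) * (g_star / 10.75) ** (-1 / 6) * (f_hz / 5.3e-10) ** (-2)

print(pbh_mass_solar(5.3e-10))  # 30 M_sun at the reference frequency
print(pbh_mass_solar(1e-9))     # lighter PBHs at higher frequency, M ~ f^(-2)
```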
In this paper, we focus on the axion-like curvaton model of the PBH formation
scenario [35, 36, 37, 15, 38]. There are two types of axion-like curvaton
models: in one type the complex field $\Phi$ (whose phase part is the axion)
rolls down toward the origin during inflation [35, 36, 37, 15], and in the other
type $\Phi$ rolls down from the origin [38]. These two types lead to
different dynamics and different power spectra of induced GWs. The former type
signal [15]. The type II model was first proposed in [38] where the model
parameters were chosen to achieve a narrow density power spectrum of the
density perturbations. However, a broader power spectrum is preferable to
account for the NANOGrav signal. The goal of this paper is to investigate
whether such a broad power spectrum can also be produced in the type II model. It is
found that by choosing appropriate parameters of the potential term in Eq.
(15), we obtain the desired broader spectral shape. Moreover, since our model
produces large positive non-Gaussianity in the density perturbations, we can
achieve the required abundance of PBHs with a smaller amplitude of the density
power spectrum than in the Gaussian case. We improve the treatment of the
non-Gaussianity of this model to accurately estimate the required amplitude of
the density power spectrum, and also the induced GWs.
This paper is organized as follows. We calculate the curvature power spectrum
of the type II axion-like curvaton model in Sec. 2. Next, we calculate the PBH
abundance in Sec. 3 and the induced GWs in Sec. 4, showing that the resulting
broad power spectrum of induced GWs can explain the NANOGrav signal. We
conclude in Sec. 5.
## 2 Type II axion-like curvaton model
We now briefly summarize the dynamics of background and fluctuations in the
type II axion-like curvaton model. (See [38] for the detailed calculation.) In
this model, large fluctuations are produced in the phase direction of a
complex scalar field $\Phi$, which we call “axion-like curvaton”. The
potential of $\Phi$ is given by [38]
$\displaystyle
V_{\Phi}=\frac{\lambda}{4}\left(|\Phi|^{2}-\frac{v^{2}}{2}\right)^{2}+gI^{2}|\Phi|^{2}-v^{3}\epsilon(\Phi+\Phi^{*}),$
(2)
where $I$ is the inflaton field, $\lambda$ and $g$ are coupling constants,
$v/\sqrt{2}$ is the vacuum expectation value after inflation. The bias term,
$v^{3}\epsilon(\Phi+\Phi^{*})$, is introduced to avoid the cosmic string
problem and the stochastic effect on the dynamics of $\Phi$, which would
otherwise require a complicated treatment of fluctuations. Note that $\epsilon$ is naturally small
($\epsilon\ll 1$) in the sense of ’t Hooft’s naturalness [39] since $U(1)$
symmetry is restored for $\epsilon=0$.
The field value of $\Phi$ changes during inflation due to the first and second
terms of Eq. (2). In the early stage of inflation
($I\gtrsim(\lambda/g)^{1/2}v$), the interaction term with the inflaton fixes
$\Phi$ near the origin. In the late stage, the inflaton field value becomes
smaller and $\Phi$ starts to roll down the Higgs-like potential towards
$|\Phi|=v$. In this dynamics the phase direction acquires large fluctuations
$\sim H/|\Phi|$ ($H$: Hubble parameter during inflation).
The PBH formation requires the large fluctuations with wavenumber $k=k_{\rm
pbh}\simeq 10^{5}\,\mathrm{Mpc}^{-1}$, which corresponds to the scale of
$30{M_{\odot}}$ PBHs. Thus, we determine the inflaton coupling $g$ so that
$\Phi$ starts to roll down the potential at $t_{\rm pbh}$ satisfying
$H(t_{\text{pbh}})=k_{\text{pbh}}/a(t_{\text{pbh}})$, which leads to
$\displaystyle g\simeq\frac{\lambda}{4}\left(\frac{v}{I(t_{\rm
pbh})}\right)^{2}.$ (3)
We also assume that the effective mass of $\Phi$ is much larger than the
Hubble parameter ($\lambda v^{2}\gg H^{2}$) until $t_{\rm pbh}$ to suppress
the fluctuation at the larger wavelengths. This condition also ensures
independence on the initial condition since $\Phi$ settles near the origin of
the potential independently of the initial field value of $\Phi$.
Based on the above background field dynamics, we calculate the fluctuation of
$\Phi$, which is decomposed as
$\displaystyle\Phi=\frac{1}{\sqrt{2}}(\varphi_{0}+\varphi)e^{i\theta}$ (4)
with the homogeneous solution $\varphi_{0}$, and the perturbations $\varphi$
and $\theta$. The phase direction $\theta$ works as the curvaton, and the
canonical field of the phase direction is defined as
$\tilde{\sigma}\equiv\varphi_{0}\theta$. Inflation induces quantum
fluctuations with amplitude $H/(2\pi)$ for $\tilde{\sigma}$ at the horizon
crossing. Thus, the power spectrum of the $\theta$ fluctuations is given by
$\displaystyle\mathcal{P}_{\theta}(k,t_{k})=\left(\frac{H(t_{k})}{2\pi\varphi_{0}(t_{k})}\right)^{2},$
(5)
where $t_{k}$ is the time when the fluctuation with $k$ crosses the horizon.
$\mathcal{P}_{\theta}(k,t_{k})$ is suppressed for $k>k_{\text{pbh}}$ since the
$\varphi_{0}(t_{k})$ quickly grows after $t_{\text{pbh}}$. While $\Phi$ is
fixed near the origin before $t_{\text{pbh}}$, $\tilde{\sigma}$ has the large
effective mass given by
$\displaystyle\tilde{m}_{\sigma}^{2}$
$\displaystyle\equiv\frac{\partial^{2}\left(-v^{3}\epsilon(\Phi+\Phi^{*})\right)}{\partial\tilde{\sigma}^{2}}\bigg{|}_{\tilde{\sigma}=0}={\sqrt{2}\epsilon
v^{2}\frac{v}{\varphi_{0}}},$ (6)
where we choose the model parameters so that $\tilde{m}_{\sigma}^{2}>H^{2}$ for
$t<t_{\text{pbh}}$. Such large effective mass damps the fluctuations as
$\tilde{\sigma}\propto a^{-3/2}$, and we define the damping factor of the
phase direction with momentum $k$ as
$\displaystyle R_{k}$
$\displaystyle\equiv\left(\frac{\tilde{\sigma}_{k}(t_{\text{pbh}})}{\tilde{\sigma}_{k}(t_{k})}\right)\sim\left(\frac{k}{k_{\text{pbh}}}\right)^{3/2}\text{
for }k<k_{\text{pbh}}.$ (7)
($R_{k}=1$ for $k>k_{\text{pbh}}$.) Finally, the power spectrum of the
$\theta$ fluctuations at the end of inflation $t_{e}$ is given by
$\mathcal{P}_{\theta}(k,t_{e})=R_{k}^{2}\mathcal{P}_{\theta}(k,t_{k}).$ (8)
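A minimal sketch of the damping of Eq. (7) and the resulting end-of-inflation spectrum of Eq. (8); wavenumbers are expressed in units of $k_{\rm pbh}$ and all names are ours.

```python
def damping_factor(k):
    """R_k of Eq. (7): modes with k < k_pbh (k in units of k_pbh) are
    damped as (k/k_pbh)^(3/2); shorter-wavelength modes are unaffected."""
    return k ** 1.5 if k < 1.0 else 1.0

def p_theta_end(k, p_theta_hc):
    """P_theta at the end of inflation, Eq. (8), given the horizon-crossing
    spectrum p_theta_hc at that k."""
    return damping_factor(k) ** 2 * p_theta_hc
```

The $a^{-3/2}$ damping before $t_{\rm pbh}$ translates into the steep $k^{3}$ suppression of the spectrum on large scales.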
Let us evaluate the density perturbations induced by the fluctuations of the
phase direction. We suppose that the curvaton obtains the axion-like potential
after inflation through some nonperturbative effect. The potential minimum of
the nonperturbative term does not coincide with that of the primordial one
determined by the bias term in our model. Supposing that the nonperturbative
potential takes its minimum at $\theta=\theta_{i}$, it is written as
$\displaystyle
V_{\sigma}=\Lambda^{4}\left[1-\cos\left(\Theta\right)\right]\simeq\frac{1}{2}m_{\sigma}^{2}\sigma^{2},$
(9)
where $\Theta\equiv\theta-\theta_{i}$, the curvaton $\sigma$ is defined as
$\sigma\equiv v\Theta$ and $m_{\sigma}=\Lambda^{2}/v$. We assume that
$m_{\sigma}$ is small enough to neglect the axion-like potential during
inflation, i.e. $H^{2}\gg m_{\sigma}^{2}$. The density fluctuation is given by
${\delta\rho_{\sigma}}/{\rho_{\sigma}}=2\delta\theta/\theta_{i}$.
Neglecting a small contribution from the inflaton ($\mathcal{P}_{\zeta}\sim
10^{-10}$) compared to the curvaton, the power spectrum of curvature
perturbations is given by
$\displaystyle\mathcal{P}_{\zeta}(k)$
$\displaystyle=\left(\frac{r}{4+3r}\right)^{2}\left(\frac{2}{\theta_{i}}\right)^{2}\mathcal{P}_{\theta}(k,t_{k})R_{k}^{2},$
(10)
where $r$ is the ratio of the energy density of the curvaton to that of the
inflaton (or radiation after reheating), $r=\rho_{\sigma}/\rho_{I}$. Assuming
instant reheating at $t_{\rm reheat}$, the ratio is given by $r(t_{\rm
reheat})=(v^{2}\theta_{i}^{2})/(6M_{pl}^{2})$, which is chosen to be small to
ensure that $\Phi$ does not disturb the inflation. $r$ grows during radiation-
dominated era as $r\propto a$ due to the matter-like behavior of the curvaton,
and its growth ends at the curvaton decay, $t_{\rm decay}$. We require that
the curvaton decays into radiation before it overcloses the universe,
$r(t_{\rm decay})\sim 0.5$, at which the temperature of the universe is
$T_{\rm decay}\sim(r(t_{\rm reheat})/r(t_{\rm decay}))T_{\rm reheat}$. The
typical decay rate of $\Phi$ is related to the curvaton mass as
$\Gamma_{\sigma}=\kappa^{2}m_{\sigma}^{3}/(16\pi v^{2})$ where $\kappa$ is a
coupling constant. We confirm that $T_{\rm decay}\sim 10^{3}\mathrm{GeV}$ is
achieved for $m_{\sigma}\sim 10^{8}\mathrm{GeV}$ for our parameters in Eq.
(15). In the following, $r$ refers to the energy ratio after decay, that is
$r\equiv r(t_{\rm decay})$.
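Combining Eqs. (7) and (10), the curvature spectrum can be sketched numerically; the names and the unit convention ($k$ in units of $k_{\rm pbh}$) are ours.

```python
def p_zeta(k, p_theta_hc, r, theta_i):
    """Curvature power spectrum of Eq. (10): curvaton transfer prefactor
    times the damped phase spectrum. k is in units of k_pbh."""
    r_k = min(k ** 1.5, 1.0)          # damping factor R_k of Eq. (7)
    return ((r / (4.0 + 3.0 * r)) ** 2
            * (2.0 / theta_i) ** 2
            * p_theta_hc * r_k ** 2)
```

The $R_{k}^{2}\propto k^{3}$ damping makes the spectrum fall steeply toward large scales, which keeps it consistent with the $\mu$-distortion bounds shown in Fig. 1.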
The PBH abundance highly depends on the eventual non-Gaussian distribution of
$\mathcal{P}_{\zeta}(k)$. It is known that the axion-like curvaton models
produce large non-Gaussianity with local type bispectrum characterized by the
following parameter [37]:
$\displaystyle f_{\rm
NL}=\frac{5}{12}\left(-3+\frac{4}{r}+\frac{8}{4+3r}\right).$ (11)
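Equation (11) is easy to evaluate for the parameter choices of Eq. (15); the sketch below (our naming) gives $f_{\rm NL}\approx 2.7$ for $r=0.5$ and $f_{\rm NL}\approx 0.9$ for $r=1.0$.

```python
def f_nl(r):
    """Local-type non-Gaussianity parameter of Eq. (11) as a function of
    the curvaton-to-total energy ratio r at decay."""
    return (5.0 / 12.0) * (-3.0 + 4.0 / r + 8.0 / (4.0 + 3.0 * r))
```

$f_{\rm NL}$ is large and positive for small $r$, which is what enhances the PBH formation rate discussed in Sec. 3.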
We discuss the enhancement of the PBH abundance by non-Gaussianity in Sec. 3.
Non-Gaussianity also affects the power spectrum of the curvature perturbations
and induced gravitational wave through the higher-order correlations [40, 41],
whose contribution is characterized by $\mathcal{P}_{\zeta}f_{\rm NL}^{2}$. We
take non-Gaussianity into account only approximately since the Gaussian
contribution is dominant for our choice of parameters given by Eq. (15),
$\mathcal{P}_{\zeta}f_{\rm NL}^{2}<0.04$. We estimate non-Gaussian
amplification on the curvature power spectrum as
$\displaystyle
Q^{\rm(NL)}_{\mathcal{P}}\equiv\frac{\mathcal{P}_{\zeta}^{\rm(NL)}(k_{*})}{\mathcal{P}_{\zeta}(k_{*})}$
$\displaystyle=1+\left(\frac{3}{5}f_{\rm
NL}\right)^{2}\frac{k_{*}^{3}}{2\pi\mathcal{P}_{\zeta}({\bm{k}_{*}})}\int\text{d}^{3}q\frac{\mathcal{P}_{\zeta}({\bm{q}})\mathcal{P}_{\zeta}(|{\bm{k}_{*}-\bm{q}}|)}{q^{3}|\bm{k}_{*}-\bm{q}|^{3}},$
(12)
where $k_{*}$ is the wavenumber at the peak of $\mathcal{P}_{\zeta}$.
$Q^{\rm(NL)}_{\mathcal{P}}$ is about $1.07$ for $r=0.5$ and $1.01$ for $r=1.0$
in our parameter set.
Finally, the power spectrum of the density perturbations is given by
$\displaystyle\mathcal{P}_{\delta}(k,t)$
$\displaystyle=\left(\frac{2}{3}\frac{k}{a(t)H(t)}\right)^{4}T(k\eta(t))^{2}\mathcal{P}_{\zeta}(k),$
(13)
where $\eta(t)$ is the conformal time and $T(x)$ is the transfer function
during radiation dominated era, which includes the suppression of the density
perturbation in sub-horizon as
$\displaystyle T(x)\equiv$
$\displaystyle~{}3\frac{\sin(x/\sqrt{3})-(x/\sqrt{3})\cos(x/\sqrt{3})}{(x/\sqrt{3})^{3}}.$
(14)
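The radiation-era transfer function of Eq. (14) can be implemented directly; the small-argument branch below uses the series limit $T(x)\simeq 1-u^{2}/10$ with $u=x/\sqrt{3}$ to avoid numerical cancellation (the implementation details are ours).

```python
import math

def transfer(x):
    """Scalar transfer function T(x) of Eq. (14) during radiation domination."""
    u = x / math.sqrt(3.0)
    if abs(u) < 1e-4:
        return 1.0 - u * u / 10.0   # super-horizon limit: T -> 1
    return 3.0 * (math.sin(u) - u * math.cos(u)) / u ** 3
```

On super-horizon scales ($x\ll 1$) $T\to 1$, while sub-horizon modes oscillate with an envelope decaying as $x^{-2}$.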
In this paper, we use the following set of model parameters:
$\displaystyle\lambda$ $\displaystyle=7.5\times 10^{-6},\quad v=5.77\times
10^{-2}{M_{\text{pl}}},\quad$ $\displaystyle g$ $\displaystyle=9.56\times
10^{-11},\quad\epsilon=2.57\times 10^{-10},$
$\displaystyle\begin{cases}&r=0.5,\quad\theta_{i}=5.3\times 10^{-2}\\\
&r=1.0,\quad\theta_{i}=6.5\times 10^{-2}\end{cases},$ (15)
where we take $r$ and $\theta_{i}$ to achieve the PBH abundance required to
explain the LIGO-Virgo event rate. Here we remark on some relations between the
parameters and the shape of the power spectrum. A larger $r$ or a smaller
$\theta_{i}$ increases the curvaton (ALP) contribution as seen in Eq. (10), and
a smaller $\epsilon$ changes the dynamics of $\varphi_{0}$ at $t\sim t_{\rm pbh}$;
both result in a larger amplitude of the power spectrum. The peak wavenumber
of the power spectrum is determined by the ratio of potential terms,
$g/(\lambda v^{2})$, as discussed in Eq. (3). The width of the spectrum
depends on how slowly $\varphi_{0}$ changes since the fluctuation of $\theta$
with mode $k$ depends on the field value $\varphi_{0}$ at the horizon crossing
as Eq. (5). We flatten the potential by choosing the smaller $v$, $g$ and
$\lambda$ compared to the previous paper [38] to achieve a broad power
spectrum. We numerically evaluate the dynamics of $\Phi$ assuming the chaotic
inflation whose potential is given by $V_{I}=m_{I}^{2}I^{2}/2$ with
$m_{I}\simeq 10^{13}\,\mathrm{GeV}$. Similar dynamics also hold for
other inflation models. We show the curvature power spectrum
$\mathcal{P}_{\zeta}(k)$ [Eq. (10)] in Fig. 1. We also show the constraints on
$\mu$–distortion [42] by COBE/FIRAS [43] and BBN [44], and our model safely
avoids the current constraints.
Figure 1: The curvature power spectrum
$Q^{\rm(NL)}_{\mathcal{P}}\mathcal{P}_{\zeta}(k)$ based on Eqs.(10) and (12).
The non-Gaussian contribution is included by $Q^{\rm(NL)}_{\mathcal{P}}$. The
orange regions are constraints on the curvature power spectrum from
$\mu$–distortion [42] by COBE/FIRAS [43] and BBN [44].
## 3 PBH formation
The PBHs are formed by the collapse of high-density regions when they re-enter
the horizon. The criterion on the PBH formation is estimated by the numerical
simulation [45], in which the threshold value of the averaged density
fluctuation is obtained. Since the threshold value is too large to neglect the
nonlinear contribution, we need to use the effective threshold value including
the nonlinear relation between density and curvature perturbations. The
detailed analysis shows that the effective threshold value is $\delta_{\rm
th(eff)}\simeq\sqrt{2}\times 0.53$ for the averaged linear density
perturbation defined by [46]
$\displaystyle\bar{\delta}_{R}(\bm{x})$
$\displaystyle\equiv\int\text{d}^{3}yW(|\bm{x}-\bm{y}|,R)\delta(\bm{y})$
$\displaystyle=\int\frac{\text{d}^{3}p}{(2\pi)^{3}}\tilde{W}(pR)e^{i\bm{p}\cdot\bm{x}}\delta_{p},$
(16)
where $R=k^{-1}$ is the scale of the horizon corresponding to the PBH mass
[Eq.(1)], $W(x,R)$ and $\tilde{W}(z)$ are the window functions in the real and
Fourier spaces, and the $\delta_{p}$ is the density fluctuation. Although it
is known that the choice of window function causes a large uncertainty [47], a
natural choice, often used in the literature, is the real-space top-hat window
function that was also used in deriving the threshold value in the numerical simulation,
$\displaystyle\tilde{W}(z)$ $\displaystyle=3\frac{\sin(z)-z\cos(z)}{z^{3}}.$
(17)
Thus, a PBH is formed when a region has the averaged density
$\bar{\delta}_{R}$ larger than the threshold value $\delta_{\rm th(eff)}$.
We estimate the PBH formation rate based on the Press–Schechter formalism,
where the PBH formation rate is calculated by the probability distribution of
the averaged density perturbations. In our model, $\bar{\delta}_{R}$ follows
the non-Gaussian distribution due to $f_{\rm NL}$ given by Eq. (11), which
drastically changes the PBH abundance. The probability distribution of
$\bar{\delta}_{R}$ is characterized by the variance and skewness defined by
$\displaystyle\sigma_{R}^{2}$
$\displaystyle=\Braket{\bar{\delta}_{R}^{2}}-\Braket{\bar{\delta}_{R}}^{2},$
(18) $\displaystyle\mu_{R}$
$\displaystyle=\sigma_{R}^{-3}\left(\Braket{\bar{\delta}_{R}^{3}}-3\Braket{\bar{\delta}_{R}^{2}}\Braket{\bar{\delta}_{R}}+2\Braket{\bar{\delta}_{R}}^{3}\right),$
(19)
where $\Braket{...}$ describes the ensemble average of $\delta_{p}$. For
simplicity, we neglect the scale dependence of $\mu_{R}$ and evaluate it at
$R=R_{\rm pbh}=k_{\rm pbh}^{-1}$, which corresponds to $30{M_{\odot}}$ PBHs.
Using the formula in [46], the skewness is numerically given by
$\displaystyle\mu\equiv\mu_{R}|_{R=R_{\rm pbh}}\simeq 3.39f_{\rm
NL}\sigma_{R}|_{R=R_{\rm pbh}}.$ (20)
We construct the statistical variable which reproduces the probability
distribution of $\bar{\delta}_{R}$. Using the Gaussian variable $\chi$
characterized by $\Braket{\chi^{2}}=\sigma_{R}^{2}$, we define the statistical
variable as
$\displaystyle\bar{\delta}[\chi]\equiv\chi+\frac{\mu}{6\sigma_{R}}(\chi^{2}-\sigma_{R}^{2}),$
(21)
which has the same variance and skewness in Eqs. (18) and (19) up to
$\mathcal{O}(f_{\rm NL}\sigma_{R})$. The probability distribution of
$\bar{\delta}[\chi]$ is given by [48, 49, 50]
$\displaystyle P_{R}^{\rm(NG)}(\bar{\delta})$
$\displaystyle=\sum_{i=\pm}\left|\frac{\text{d}\chi_{i}(\bar{\delta})}{\text{d}\bar{\delta}}\right|P_{R}^{\rm(G)}(\chi_{i}(\bar{\delta})),$
(22)
where $P_{R}^{\rm(G)}(\chi)$ is the Gaussian distribution function,
$\displaystyle
P_{R}^{\rm(G)}(\chi)=\frac{1}{\sqrt{2\pi}\sigma_{R}}\exp\left(-\frac{1}{2}\frac{\chi^{2}}{\sigma_{R}^{2}}\right),$
(23)
and $\chi_{\pm}(\bar{\delta})$ are two solutions of
$\bar{\delta}=\bar{\delta}[\chi]$,
$\displaystyle\chi_{\pm}(\bar{\delta})=\frac{3\sigma_{R}}{\mu}\left(-1\pm\sqrt{1+\frac{2\mu}{3}\left(\frac{\mu}{6}+\frac{\bar{\delta}}{\sigma_{R}}\right)}\right).$
(24)
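As a consistency check, the branch $\chi_{+}$ of Eq. (24) should invert the quadratic map of Eq. (21); the sketch below (our naming) verifies this numerically.

```python
import math

def delta_bar(chi, sigma, mu):
    """Non-Gaussian density variable of Eq. (21)."""
    return chi + mu / (6.0 * sigma) * (chi ** 2 - sigma ** 2)

def chi_plus(delta, sigma, mu):
    """'+' branch of Eq. (24), inverting Eq. (21)."""
    disc = 1.0 + (2.0 * mu / 3.0) * (mu / 6.0 + delta / sigma)
    return (3.0 * sigma / mu) * (-1.0 + math.sqrt(disc))
```

Round-tripping a threshold-like value $\bar{\delta}=0.75$ through $\chi_{+}$ and back recovers it to machine precision, confirming the change of variables used in Eq. (22).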
The PBH formation rate is given by the probability of
$\bar{\delta}>\delta_{\rm th(eff)}$, which leads to
$\displaystyle\beta(R)$ $\displaystyle=\int_{\bar{\delta}>\delta_{\rm
th(eff)}}P_{R}^{\rm(NG)}(\bar{\delta})\text{d}\bar{\delta}=\int_{\bar{\delta}[\chi]>\delta_{\rm
th(eff)}}P_{R}^{\rm(G)}(\chi)\text{d}\chi$
$\displaystyle\simeq\frac{\sigma_{R}}{\sqrt{2\pi}\chi_{+}(\delta_{\rm
th(eff)})}\exp\left(-\frac{\chi_{+}(\delta_{\rm
th(eff)})^{2}}{2\sigma_{R}^{2}}\right).$ (25)
Here we have used $\mu>0$ and $\chi_{+}(\delta_{\rm th(eff)})/\sigma_{R}\gg 1$
in the last line. The present PBH abundance is given by
$\displaystyle f(M)\equiv\frac{\text{d}\Omega_{\text{PBH}}}{\text{d}\ln
M}\frac{1}{\Omega_{\text{DM}}}$ (26)
$\displaystyle=\frac{\beta(R(M))}{1.8\times
10^{-8}}\left(\frac{\gamma}{0.2}\right)^{3/2}\left(\frac{10.75}{\text{g}_{*}}\right)^{1/4}\left(\frac{0.12}{\Omega_{\text{DM}}h^{2}}\right)\left(\frac{M}{{M_{\odot}}}\right)^{-1/2},$
(27)
where $\text{g}_{*}$ is the number of relativistic degrees of freedom at
$T\sim 30\mathrm{MeV}$.
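The tail approximation in Eq. (25) and the abundance formula Eq. (27) can be checked numerically; in the sketch below (our naming; the exact Gaussian tail uses the complementary error function) the asymptotic form is accurate at the percent level in the relevant regime $\chi_{+}/\sigma_{R}\gg 1$.

```python
import math

def beta_asym(chi_p, sigma):
    """High-threshold approximation of the formation rate, Eq. (25)."""
    return (sigma / (math.sqrt(2.0 * math.pi) * chi_p)
            * math.exp(-chi_p ** 2 / (2.0 * sigma ** 2)))

def beta_exact(chi_p, sigma):
    """Exact Gaussian tail probability P(chi > chi_p)."""
    return 0.5 * math.erfc(chi_p / (math.sqrt(2.0) * sigma))

def pbh_fraction(beta, m_msun, gamma=0.2, g_star=10.75, odm_h2=0.12):
    """Present PBH fraction of dark matter per log mass, Eq. (27)."""
    return ((beta / 1.8e-8) * (gamma / 0.2) ** 1.5
            * (10.75 / g_star) ** 0.25 * (0.12 / odm_h2)
            * m_msun ** (-0.5))
```

The $M^{-1/2}$ factor in Eq. (27) halves the fraction when the mass is quadrupled at fixed $\beta$.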
We show the calculated mass spectrum of PBH in Fig. 2. We also plot the
relevant constraints by microlensing experiments “MACHO/EROS/OGLE” [51, 52, 53]
and energy injection into CMB through accretion around PBHs [54]. Since our
model predicts the broad mass distribution, the large mass part of the
distribution could conflict with the accretion constraints. However, it is
noted that the accretion constraint has a large uncertainty. In fact, two
different constraints are obtained depending on assumptions as shown in Fig.
2. Our mass distribution is consistent if we adopt the weaker accretion
constraint.
Figure 2: The mass spectrum of PBHs (blue line) and the constraints (orange
regions) by microlensing experiments “MACHO/EROS/OGLE”(solid line) [51, 52,
53], CMB through spherical accretion (dashed line) and disk accretion (dotted
line) [54].
## 4 Induced gravitational waves
The large density fluctuations induce the gravitational waves through the
nonlinear coupling $\Braket{\zeta\zeta h}$ when they re-enter the horizon. The
current energy fraction of GWs is written as
$\displaystyle\Omega_{\rm GW}(t_{0},k)$
$\displaystyle=\left(\frac{a_{c}^{2}H_{c}}{a_{0}^{2}H_{0}}\right)^{2}\Omega_{\rm
GW}(\eta_{c},k)$ $\displaystyle\simeq
0.83\left(\frac{\text{g}_{c}}{10.75}\right)^{-1/3}\Omega_{r,0}\Omega_{\rm
GW}(t_{c},k),$ (28)
where $\Omega_{r,0}$ is the current energy fraction of radiation, the
subscript “c” denotes values when GW production effectively finishes.
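The redshift dilution of Eq. (28) is a simple multiplicative factor; in the sketch below, the default $\Omega_{r,0}\approx 8.5\times 10^{-5}$ is an illustrative assumption of ours, not a value quoted in the paper.

```python
def omega_gw_today(omega_gw_c, g_c=10.75, omega_r0=8.5e-5):
    """Present GW energy fraction from Eq. (28). The default omega_r0
    (current radiation fraction) is an assumed illustrative value."""
    return 0.83 * (g_c / 10.75) ** (-1.0 / 3.0) * omega_r0 * omega_gw_c
```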
The energy density of the induced GWs at $t_{c}$ is calculated by solving the
equation of motion of GWs with the source term of scalar fluctuations, and it
is given by [30]
$\displaystyle\Omega_{\rm
GW}(t_{c},k)=\frac{8}{243}\int^{\infty}_{0}\text{d}y\int^{1+y}_{\left|1-y\right|}\text{d}x\mathcal{P}_{\zeta}(kx)\mathcal{P}_{\zeta}(ky)\frac{y^{2}}{x^{2}}$
$\displaystyle\qquad\times\left(1-\frac{(1+y^{2}-x^{2})^{2}}{4y^{2}}\right)^{2}\overline{\mathcal{I}(x,y,\eta_{c})}^{2},$
(29)
where overline means the time average over $\eta_{c}$.
$\mathcal{I}(x,y,\eta_{c})$ is written as
$\displaystyle\mathcal{I}(x,y,\eta_{c})=\frac{k^{2}}{a(\eta_{c})}\int^{\eta_{c}}\text{d}\bar{\eta}a(\bar{\eta})g_{k}(\eta_{c};\bar{\eta})f(ky,kx,\bar{\eta}).$
(30)
Here $g_{k}$ is the Green function,
$\displaystyle g_{k}(\eta,\bar{\eta})$
$\displaystyle=\frac{\sin(k(\eta-\bar{\eta}))}{k}\theta(\eta-\bar{\eta}),$
(31)
and $f(k_{1},k_{2},\eta)$ is given by
$\displaystyle f(k_{1},k_{2},\eta)$
$\displaystyle=\bigg{[}2T(k_{1},\eta)T(k_{2},\eta)$ $\displaystyle+$
$\displaystyle\left(\frac{\dot{T}(k_{1},\eta)}{H(\eta)}+T(k_{1},\eta)\right)\left(\frac{\dot{T}(k_{2},\eta)}{H(\eta)}+T(k_{2},\eta)\right)\bigg{]},$
(32)
where $T(x)$ is the transfer function of the scalar fluctuations given by
Eq.(14).
We comment on the non-Gaussian contribution on the induced gravitational waves
[55, 40, 41]. It is pointed out in [41] that non-Gaussianity of scalar
fluctuations amplifies the induced gravitational waves when
$\mathcal{P}_{\zeta}f_{\rm NL}^{2}$ is large. Since $\mathcal{P}_{\zeta}f_{\rm
NL}^{2}<0.04$ in our calculation, the effect of non-Gaussianity is expected to
be sub-dominant. Thus, we approximately include the effect of the non-
Gaussianity on $\Omega_{\rm GW}$ by multiplying the factor
$(Q^{\rm(NL)}_{\mathcal{P}})^{2}$ given by Eq. (12). This approximation
includes a part of non-Gaussian contributions, “Hybrid” and “Reducible”-type
terms discussed in [40]. Hybrid type is a product of the Gaussian and non-
Gaussian contribution of curvature perturbation, and Reducible type is that of
non-Gaussian and non-Gaussian contribution. Although there are other types of
sources of GWs, it is known that Hybrid-type is one of the largest
contributions among them. Thus, our calculation can estimate most of the
effects of non-Gaussianity.
The estimated GW spectra for $r=0.5$ (blue line) and $r=1.0$ (orange line) are
shown in Fig. 3. The GW spectrum observed by the NANOGrav experiment is fitted
by the power-law spectrum around $f\sim 10^{-8}\,\mathrm{Hz}$,
$\displaystyle\Omega_{\rm
GW}(f)h^{2}=\frac{2\pi^{2}}{3}\frac{h^{2}f^{2}}{H_{0}^{2}}h_{c}^{2}(f)=A_{\Omega}f^{5-\gamma},$
(33)
where $\gamma$ is the tilt of the spectrum. In Fig. 3, we show the observed
GWs for $\gamma=5$ and 6 with 2-$\sigma$ uncertainty on $A_{\Omega}$. We also
plot current constraints by other PTA experiments, EPTA (solid) [56] and PPTA
(dotted) [57], and future sensitivity by SKA (dashed) [58]. It is found that
the broad power spectrum of GWs in the present model can explain the reported
NANOGrav signal.
Figure 3: The induced GW spectrum and the constraints by PTA experiments. We
include the contribution of non-Gaussianity by using the factor
$Q^{\rm(NL)}_{\mathcal{P}}$. The orange lines are current constraints by EPTA
(solid) [56] and PPTA (dotted) [57], and future sensitivity by SKA (dashed)
[58]. We show the reported NANOGrav signal with $\gamma=5$ (blue region) and
$\gamma=6$ (pink region) with 2-$\sigma$ uncertainty (see Eq. (33)).
## 5 Conclusion
The signal reported by the NANOGrav experiment can be interpreted in terms of
various cosmological phenomena such as cosmic strings, phase transitions, and PBH formation.
The PBH formation scenario is attractive among them since the reported
frequency $f\sim 10^{-8}\mathrm{Hz}$ is close to the scale of the density
fluctuations to produce $30{M_{\odot}}$ PBH, which can explain the binary
black holes observed by LIGO-Virgo collaboration.
Typical PBH formation models predict induced GWs with a smaller frequency and a
larger amplitude than the NANOGrav signal. To avoid this difficulty, one needs
a broad power spectrum and non-Gaussianity of the density fluctuations, which
enhance the PBH formation rate and modify the induced GW spectrum to give a
good fit to the NANOGrav signal. The axion-like curvaton model can achieve the
required features.
There are two types of axion-like curvaton models: in type I the complex
field rolls down toward the origin during inflation [35, 36, 37, 15], and in type
II it rolls down from the origin [38]. In this paper, we focused on the type II
model and chose the appropriate parameters of the potential term, which leads
to a broader power spectrum of the density perturbations than that in [38]. As
a result it was found that the induced GW spectrum can explain the NANOGrav
signal as shown in Fig. 3. Moreover, the broad power spectrum of the density
fluctuation results in the broad mass spectrum of PBHs as shown in Fig. 2,
which can be tested by the accumulation of the binary merger observations. Our
model predicts large local-type non-Gaussianity which can be probed through
observation of GWs. The spectrum of induced gravitational waves is a useful
tool to distinguish our model from others. For example, cosmic string and
type-I axion-like curvaton models generally predict much broader GW spectra
than our model.
## Acknowledgment
This work was supported by JSPS KAKENHI Grant Nos. 17H01131 (M.K.), 17K05434
(M.K.), 20H05851 (M.K.), 21K03567(M.K.), JP19J21974 (H.N.), Advanced Leading
Graduate Course for Photon Science (H.N.), and World Premier International
Research Center Initiative (WPI Initiative), MEXT, Japan (M.K.).
## References
* [1] NANOGrav collaboration, Z. Arzoumanian et al., _The NANOGrav 12.5-year Data Set: Search For An Isotropic Stochastic Gravitational-Wave Background_ , 2009.04496.
* [2] J. Ellis and M. Lewicki, _Cosmic String Interpretation of NANOGrav Pulsar Timing Data_ , 2009.06555.
* [3] S. Blasi, V. Brdar and K. Schmitz, _Has NANOGrav found first evidence for cosmic strings?_ , 2009.06607.
* [4] W. Buchmuller, V. Domcke and K. Schmitz, _From NANOGrav to LIGO with metastable cosmic strings_ , _Phys. Lett. B_ 811 (2020) 135914, [2009.10649].
* [5] R. Samanta and S. Datta, _Gravitational wave complementarity and impact of NANOGrav data on gravitational leptogenesis: cosmic strings_ , 2009.13452.
* [6] S. Datta, A. Ghosal, R. Samanta and R. Sinha, _Baryogenesis from ultralight primordial black holes and strong gravitational waves_ , 2012.14981.
* [7] N. Ramberg and L. Visinelli, _The QCD Axion and Gravitational Waves in light of NANOGrav results_ , 12, 2020, 2012.06882.
* [8] Y. Nakai, M. Suzuki, F. Takahashi and M. Yamada, _Gravitational Waves and Dark Radiation from Dark Phase Transition: Connecting NANOGrav Pulsar Timing Data and Hubble Tension_ , 2009.09754.
* [9] A. Addazi, Y.-F. Cai, Q. Gan, A. Marciano and K. Zeng, _NANOGrav results and Dark First Order Phase Transitions_ , 2009.10327.
* [10] A. Neronov, A. Roper Pol, C. Caprini and D. Semikoz, _NANOGrav signal from MHD turbulence at QCD phase transition in the early universe_ , 2009.14174.
* [11] S.-L. Li, L. Shao, P. Wu and H. Yu, _NANOGrav Signal from First-Order Confinement/Deconfinement Phase Transition in Different QCD Matters_ , 2101.08012.
* [12] B. Barman, A. Dutta Banik and A. Paul, _Implications of NANOGrav results and UV freeze-in in a fast-expanding Universe_ , 2012.11969.
* [13] V. Vaskonen and H. Veermäe, _Did NANOGrav see a signal from primordial black hole formation?_ , 2009.07832.
* [14] V. De Luca, G. Franciolini and A. Riotto, _NANOGrav Hints to Primordial Black Holes as Dark Matter_ , 2009.08268.
* [15] K. Inomata, M. Kawasaki, K. Mukaida and T. T. Yanagida, _NANOGrav results and LIGO-Virgo primordial black holes in axion-like curvaton model_ , 2011.01270.
* [16] K. Kohri and T. Terada, _Solar-Mass Primordial Black Holes Explain NANOGrav Hint of Gravitational Waves_ , _Phys. Lett. B_ 813 (2021) 136040, [2009.11853].
* [17] G. Domènech and S. Pi, _NANOGrav Hints on Planet-Mass Primordial Black Holes_ , 2010.03976.
* [18] S. Sugiyama, V. Takhistov, E. Vitagliano, A. Kusenko, M. Sasaki and M. Takada, _Testing Stochastic Gravitational Wave Signals from Primordial Black Holes with Optical Telescopes_ , 2010.02189.
* [19] M. Braglia, D. K. Hazra, F. Finelli, G. F. Smoot, L. Sriramkumar and A. A. Starobinsky, _Generating PBHs and small-scale GWs in two-field models of inflation_ , _JCAP_ 08 (2020) 001, [2005.02895].
* [20] N. Bhaumik and R. K. Jain, _Stochastic induced gravitational waves and lowest mass limit of primordial black holes with the effects of reheating_ , 2009.10424.
* [21] M. Braglia, X. Chen and D. Kumar Hazra, _Probing Primordial Features with the Stochastic Gravitational Wave Background_ , _JCAP_ 03 (2021) 005, [2012.05821].
* [22] V. Atal, A. Sanglas and N. Triantafyllou, _NANOGrav signal as mergers of Stupendously Large Primordial Black Holes_ , 2012.14721.
* [23] S. Vagnozzi, _Implications of the NANOGrav results for inflation_ , _Mon. Not. Roy. Astron. Soc._ 502 (2021) L11, [2009.13432].
* [24] S. Bhattacharya, S. Mohanty and P. Parashari, _Implications of the NANOGrav result on primordial gravitational waves in nonstandard cosmologies_ , 2010.05071.
* [25] S. Kuroyanagi, T. Takahashi and S. Yokoyama, _Blue-tilted inflationary tensor spectrum and reheating in the light of NANOGrav results_ , _JCAP_ 01 (2021) 071, [2011.03323].
* [26] LIGO Scientific, Virgo collaboration, B. P. Abbott et al., _Binary Black Hole Population Properties Inferred from the First and Second Observing Runs of Advanced LIGO and Advanced Virgo_ , 1811.12940.
* [27] S. Bird, I. Cholis, J. B. Muñoz, Y. Ali-Haïmoud, M. Kamionkowski, E. D. Kovetz et al., _Did LIGO detect dark matter?_ , _Phys. Rev. Lett._ 116 (2016) 201301, [1603.00464].
* [28] S. Clesse and J. García-Bellido, _The clustering of massive Primordial Black Holes as Dark Matter: measuring their mass distribution with Advanced LIGO_ , _Phys. Dark Univ._ 15 (2017) 142–147, [1603.05234].
* [29] M. Sasaki, T. Suyama, T. Tanaka and S. Yokoyama, _Primordial Black Hole Scenario for the Gravitational-Wave Event GW150914_ , _Phys. Rev. Lett._ 117 (2016) 061101, [1603.08338].
* [30] K. Inomata, M. Kawasaki, K. Mukaida and T. T. Yanagida, _Double inflation as a single origin of primordial black holes for all dark matter and LIGO observations_ , _Phys. Rev._ D97 (2018) 043514, [1711.06129].
* [31] B. J. Carr, _The Primordial black hole mass spectrum_ , _Astrophys. J._ 201 (1975) 1–19.
* [32] R. Saito and J. Yokoyama, _Gravitational wave background as a probe of the primordial black hole abundance_ , _Phys. Rev. Lett._ 102 (2009) 161101, [0812.4339].
* [33] R. Saito and J. Yokoyama, _Gravitational-wave constraints on the abundance of primordial black holes_ , _Progress of theoretical physics_ 123 (2010) 867–886.
* [34] C. Unal, E. D. Kovetz and S. P. Patil, _Multi-messenger Probes of Inflationary Fluctuations and Primordial Black Holes_ , 2008.11184.
* [35] M. Kawasaki, N. Kitajima and T. T. Yanagida, _Primordial black hole formation from an axionlike curvaton model_ , _Phys. Rev._ D87 (2013) 063519, [1207.2550].
* [36] M. Kawasaki, N. Kitajima and S. Yokoyama, _Gravitational waves from a curvaton model with blue spectrum_ , _JCAP_ 1308 (2013) 042, [1305.4464].
* [37] K. Ando, K. Inomata, M. Kawasaki, K. Mukaida and T. T. Yanagida, _Primordial Black Holes for the LIGO Events in the Axion-like Curvaton Model_ , 1711.08956.
* [38] K. Ando, M. Kawasaki and H. Nakatsuka, _Formation of primordial black holes in an axionlike curvaton model_ , _Phys. Rev. D_ 98 (2018) 083508, [1805.07757].
* [39] G. ’t Hooft, _Naturalness, chiral symmetry, and spontaneous chiral symmetry breaking_ , _NATO Sci. Ser. B_ 59 (1980) 135–157.
|
# Modeling opinion leader’s role in the diffusion of innovation
_Internship report originally written in June 2018 by intern N. Vodopivec
under the supervision of C. Adam and J.-P. Chanteau_
Nataša Vodopivec
Univ Grenoble-Alpes, Grenoble INP

Carole Adam
Univ Grenoble-Alpes, Grenoble Informatics Lab

Jean-Pierre Chanteau
Univ Grenoble-Alpes, Centre de Recherche en Economie de Grenoble
###### Abstract
The diffusion of innovations is an important topic for consumer markets.
Early research focused on how innovations spread at the level of the whole
society. To get closer to real-world scenarios, agent-based models (ABMs)
started focusing on individual-level agents. In our work we translate an
existing ABM that investigates the role of opinion leaders in the process of
diffusion of innovations to a new, more expressive platform designed for
agent-based modeling. We do so to show that taking advantage of the new
features of the chosen platform should be encouraged when building models in
the social sciences, because it can improve the explanatory power of
simulation results.
## 1 Introduction
Diffusion refers to the process by which an innovation is adopted over time by
members of a social system. An innovation commonly refers to a new technology,
but it can be understood more broadly as the spread of ideas and practices
Kiesling et al. (2012). Whether a certain innovation will diffuse successfully
in society has always been an important question at the market level and has
attracted many researchers since a number of pioneering works appeared in the
1960s.
####
From the marketing perspective, it is of great importance to understand how
information starting from mass media and traveling through word-of-mouth (WoM)
affects the adoption decisions of customers and consequently the diffusion of
a new product van Eck et al. (2011). Mass media plays the role of an external
influence on a society, and WoM that of an internal influence within it.
Traditionally, models were built at the macro level, looking at the society as
a whole.
Most such aggregate models stem from the model introduced by Bass Bass (1969),
which takes the structure of a basic epidemic model: the diffusion of an
innovation is seen as a contagious process driven by external and internal
influences. This model assumes that the market is homogeneous, i.e. that all
customers share the same characteristics. It further assumes that each
consumer is connected with, and can thus influence, all other consumers. From
these two assumptions it follows that the probability of adopting is linearly
related to the number of past adopters. These assumptions are limitations of
aggregate models, as they ignore that real-world consumers are heterogeneous
individuals embedded in complex social structures.
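The aggregate dynamics described above can be sketched in a few lines. This is a minimal discrete-time reading of the Bass model, not taken from the reference study; the parameter values below (external coefficient `p`, internal coefficient `q`) are illustrative choices.

```python
# Minimal discrete-time sketch of the aggregate Bass diffusion model:
# adoption is driven by an external (mass-media) coefficient p and an
# internal (word-of-mouth) coefficient q, so the hazard of adopting grows
# linearly with the number of past adopters.
def bass_diffusion(p, q, market_size, steps):
    """Return the cumulative number of adopters at each step."""
    adopters = 0.0
    history = []
    for _ in range(steps):
        # hazard = external influence + internal influence from past adopters
        hazard = p + q * adopters / market_size
        adopters += hazard * (market_size - adopters)
        history.append(adopters)
    return history

# Illustrative parameter values; the curve is S-shaped and approaches
# (but never exceeds) the market size.
curve = bass_diffusion(p=0.01, q=0.3, market_size=500, steps=25)
```

Because each step adds a positive fraction of the remaining non-adopters, the cumulative curve increases monotonically toward the market size, which is the S-shaped takeoff the aggregate literature describes.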
####
To overcome these limitations, agent-based modeling Macal and North (2005) has
been increasingly adopted in recent years. Agent-based modeling takes a
different approach to the diffusion of innovations because it looks at society
from the micro level. The observed entity is not the society as a whole but an
individual, represented as an agent. Customers' heterogeneity, their social
interactions, and their decision-making process can be modeled explicitly
Kiesling et al. (2012). When simulated, macro-level observations of network
changes emerge from the micro-level interactions between individuals.
Most agent-based models of innovation diffusion have a similar structure and
comprise the following elements Jensen and Chappin (2017):
1. Consumer agents define the individual entities that can adopt an
innovation. These can be individual persons, households, or groups of
households. They are heterogeneous.
2. Social structure is a description of the connections between individual
agents, dividing them into different consumer groups.
3. Decision-making processes are the key actions of consumer agents in any
social model, by which agents decide to adopt or reject the innovation.
4. Social influence between agents often affects decision-making processes and
is commonly modeled as a social network graph. Models vary in the range at
which social influence is exerted: it can come from direct peers, from the
respective social group, or from the entire population of agents. All these
ranges of influence can be modeled as a social network graph.
####
We are interested in modeling and simulating how innovation, both in the sense
of ideas and behaviours and in the sense of products, spreads in a population.
We have chosen to implement the model on the GAMA platform, which is seen as a
current state-of-the-art agent-based modeling language and as an improved
successor of the NetLogo platform. Figure 1 presents a screenshot of the
implemented simulator, showing the social network with opinion leaders (in
pink) and adopters of the innovation (in green).
Figure 1: Screenshot of the GAMA simulator
The main challenge when building social models is the verification of the
final model. Verification can be done either with strong theoretical support
or with empirical data; due to the scope of our project it was difficult to
obtain either. This is why we chose to rewrite and improve an existing NetLogo
model in GAMA: this way we are able to validate our model against it. The
study by van Eck et al. van Eck et al. (2011) (hereafter: the reference study)
was picked because it not only models the diffusion of innovations but
additionally investigates the role of opinion leaders in the process, which is
another interesting phenomenon. Aside from being heterogeneous, agents are
further divided into two groups, namely the influentials or opinion leaders
(OL) and the followers or non-leaders (NL).
####
Goldenberg et al. Goldenberg et al. (2009) characterize influentials by three
factors: connectivity, knowledge, and personality characteristics. Opinion
leaders are a type of influential customer that has all of these
characteristics: central positions in the network (i.e. high connectivity),
market knowledge (not necessarily about a specific product but about markets
in general), and innovative behaviour.
The reference study uses four critical assumptions about opinion leaders,
which it later successfully checks with an empirical study: (1) OL have more
contacts, (2) OL possess different characteristics, (3) OL exert different
types of influence, and (4) OL are among the earlier adopters.
Two important characteristics of opinion leaders are their innovativeness and
their interpersonal influence. Regarding innovativeness, opinion leaders have
more experience and expertise with the product category than other consumers
and have been exposed to more information Lyons and Henderson (2005). Two main
types of interpersonal influence exist:
* Informational influence is the tendency to accept information from others
and believe it. Opinion leaders influence other consumers by giving them
advice about a product.
* Normative influence stems from people's tendency to follow a certain norm,
i.e. to adopt a product in order to be approved by other consumers. Normative
influence can also be referred to as social pressure.
The reference study assumes that opinion leaders play an important role both
in the diffusion of information about products (informational influence) and
in the diffusion of the products themselves (i.e. more product adoptions
result in normative influence) van Eck et al. (2011). Therefore, the influence
of opinion leaders on the speed of diffusion of both information and product,
and on the maximum adoption percentage in the process of diffusion of
innovation, is investigated.
####
The focus of this study is to investigate the speed of information diffusion,
the speed of product diffusion, and the maximum adoption percentage of the
product.
The article is structured as follows: Section 2 describes the hypotheses that
the reference study set up and verified; Section 3 introduces the model with
its agents, parameters, and social network; Section 4 presents the experiment
settings and discusses our simulation results; and Section 5 addresses the
conclusions and suggestions for further work.
## 2 Hypotheses
While investigating the role of opinion leaders in the innovation diffusion
process, the impact of each of its three characteristics (innovative
behaviour, normative influence, market knowledge) is looked at and thus more
hypotheses are set up. We have chosen to validate our model against the
following hypotheses put forward and successfully proven in the reference
study van Eck et al. (2011):
##### H1a:
”The more innovative behaviour of the opinion leader results in a higher
adoption percentage.”
##### H1b:
”If the weight of normative influence becomes more important to followers, the
increase in the adoption percentage caused by the more innovative behavior of
opinion leaders increases.”
##### H2a:
”Opinion leaders are less sensitive to normative influence than are
followers.”
##### H2b:
”If opinion leaders are less sensitive to normative influence, adoption
percentages increase.”
##### H2c:
”The less sensitive opinion leaders are to normative influence, the more the
adoption percentages increase.”
##### H3a:
”Opinion leaders are better at judging product quality, which results in a
higher speed of information diffusion.”
##### H3b:
”Opinion leaders are better at judging product quality, which results in a
higher speed of product diffusion.”
## 3 Model
In this chapter we describe in more detail how our model was built.
### 3.1 Network
Bohlman et al. Bohlman et al. (2010) indicate that in agent-based modeling
specific network topologies strongly influence the process of innovation
diffusion: they affect the likelihood that diffusion spreads and the speed of
adoption. This is because the network topology specifies the location and the
number of links of innovators. We use the scale-free network structure
proposed by Barabasi and Albert (1999), because it stems from many empirical
studies and is confirmed to imitate real-world societies where some agents
serve as hubs, meaning their number of connections greatly exceeds the average
and they hold central positions in the network.
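The preferential-attachment mechanism behind such scale-free networks can be sketched without any graph library. This is our own illustration of the Barabasi-Albert growth rule, not code from the model; the function name and seed are illustrative.

```python
import random

# Minimal preferential-attachment sketch (Barabasi and Albert, 1999): each
# new node attaches to m existing nodes with probability proportional to
# their current degree, producing a few highly connected "hub" agents.
def barabasi_albert(n, m, seed=42):
    rng = random.Random(seed)
    edges = set()
    # Picking uniformly from this list of repeated endpoints is equivalent
    # to picking a node with probability proportional to its degree.
    degree_list = list(range(m))          # start from m seed nodes
    for new in range(m, n):
        targets = set()
        while len(targets) < m:           # m distinct attachment targets
            targets.add(rng.choice(degree_list))
        for t in targets:
            edges.add((t, new))
            degree_list.extend([t, new])  # both endpoints gain degree
    return edges

# A 500-agent network as in the experiments; a handful of hubs emerge whose
# degree greatly exceeds the average, matching the text's description.
net = barabasi_albert(n=500, m=2)
```

Because every edge endpoint is appended back into `degree_list`, well-connected nodes keep getting more likely to be chosen, which is exactly the rich-get-richer dynamic that yields hub agents.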
### 3.2 Agents
Each agent is described by the following attributes:
* Opinion leader tells whether an agent is an opinion leader or a non-leader.
* Quality threshold is assigned to each agent before the beginning of a
simulation; its values are uniformly distributed in the range U(0, 1).
* Known quality describes what the agent currently thinks of the product
quality. This value is set dynamically when the agent becomes aware of the
product or adopts it, as further explained in Section 4.1.
* Utility threshold is assigned to each agent before the beginning of a
simulation. OL and NL draw this value from different ranges: U(0, 1) for NL
and U(0, max) for OL, where the maximum value is defined by a parameter of the
experiment.
* Awareness tells whether an agent is aware of the product or not.
* Adopted tells whether an agent has adopted the product or not.
* Weight of normative influence differs between OL and NL; values are drawn
from a normal distribution whose average and standard deviation are set as
parameters of the simulation.
The uniform distribution of the values of utility thresholds and of quality
thresholds for individual agents makes the population heterogeneous.
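The attribute list above can be summarized as a simple data structure. The field names, default values, and the helper constructor below are our own illustrative choices; only the two threshold ranges and the OL/NL normative-influence parameters follow the text.

```python
import random
from dataclasses import dataclass

# Illustrative sketch of one consumer agent with the attributes listed above.
@dataclass
class Consumer:
    opinion_leader: bool
    quality_threshold: float   # drawn from U(0, 1)
    utility_threshold: float   # U(0, 1) for NL, U(0, max) for OL
    beta: float                # weight of normative influence
    known_quality: float = 0.0 # set once the agent becomes aware
    aware: bool = False
    adopted: bool = False

def make_consumer(is_ol, max_ol_threshold=0.8, rng=random):
    # OL draw their utility threshold from a narrower range, making them
    # more likely to adopt; beta means are the base-model values (Table 2).
    upper = max_ol_threshold if is_ol else 1.0
    beta = rng.gauss(0.51 if is_ol else 0.6, 0.2)
    return Consumer(is_ol, rng.uniform(0, 1), rng.uniform(0, upper), beta)
```

Drawing both thresholds independently per agent is what makes the population heterogeneous, as the paragraph above notes.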
####
An agent’s decision to adopt is based on its utility threshold. The agent’s
utility is calculated at each iteration of the simulation, and once it passes
the agent’s utility threshold the agent adopts the innovation. The utility
function is a weighted sum of the individual preference and the social
influence. The first represents informational influence and describes the
agent’s opinion of the product quality; the second represents normative
influence and takes into account the number of neighbouring agents that have
already adopted the product. When an agent’s weight of social influence is
low, the agent is very individualistic and is consequently hardly influenced
by its neighbours; conversely, a high weight means that the agent is very
socially susceptible van Eck et al. (2011).
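The adoption rule just described can be written out explicitly. This is our minimal reading of the text, not the reference implementation; in particular, measuring social influence as the fraction of adopted neighbours is an assumption on our part.

```python
# Sketch of the utility function described above: a weighted sum of the
# individual preference (informational part, perceived product quality) and
# the social influence (normative part, fraction of adopted neighbours).
def utility(beta, perceived_quality, adopted_neighbours, total_neighbours):
    social = adopted_neighbours / total_neighbours if total_neighbours else 0.0
    return (1 - beta) * perceived_quality + beta * social

def decides_to_adopt(threshold, beta, perceived_quality,
                     adopted_neighbours, total_neighbours):
    # The agent adopts once its utility exceeds its utility threshold.
    u = utility(beta, perceived_quality, adopted_neighbours, total_neighbours)
    return u > threshold
```

With `beta = 0` the agent is fully individualistic and only its quality judgment matters; with `beta = 1` it responds only to social pressure, matching the two extremes described in the paragraph.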
### 3.3 Parameters
The model contains several parameters, which describe the influence of opinion
leaders in various market settings van Eck et al. (2011). Some parameters are
fixed across all experiments and others are varied experimentally. The group
of fixed parameters and their values, derived from the model by Delre et al.
Delre et al. (2007), are presented in Table 1. The product quality is set to
0.5, meaning that if agents base their decision to adopt purely on their
individual preferences, approximately 50% will never adopt. The mass media
coefficient, taken from prior studies, represents strong mass media support,
because many of the agents in the empirical study were reached by mass media
(i.e. one percent of the population is reached in each step).
| Variable | Parameter | Value |
|---|---|---|
| Product quality | q | 0.5 |
| Mass media coefficient | m_m | 0.01 |
| Number of agents | nb_agents | 500 |

Table 1: Settings for global parameters that were fixed in the current experiments

| Variable | Parameter | Value |
|---|---|---|
| Max utility threshold of OL | max | 0.8 |
| Average normative influence, OL | avg_ni_ol | 0.51 |
| Standard deviation for normative influence, OL | dev_ni_ol | 0.2 |
| Average normative influence, NL | avg_ni_nl | 0.6 |
| Standard deviation for normative influence, NL | dev_ni_nl | 0.2 |
| OL judges product better | NA | Yes |

Table 2: Settings for base model parameters
The varied parameters are changed one at a time per experiment to test the
separate hypotheses. These parameters and their values, derived from the
empirical study conducted by the reference study, are presented in Table 2.
First, a base experiment with these values was run so that the later
hypotheses could be tested realistically. The innovativeness of opinion
leaders is implemented as a smaller possible range for their utility threshold
compared with that of the followers (the utility threshold of the followers is
uniformly distributed in the range U(0, 1.0), for OL it is in the range U(0,
0.8)), which makes them approximately 20% more likely to adopt the product.
The difference is not large, as OL try to avoid being too innovative: if they
adopted a product that turned out to be unsuccessful, they would lose
followers. As observed in the empirical study, the weight of normative
influence of opinion leaders is lower ($\beta_{OL}=$ 0.51) than that of the
followers ($\beta_{NL}=$ 0.6), as they care less about social pressure. The
weights of normative and informational influence sum to 1, so the weight of
informational influence is 1 - $\beta$. The model can be run either with
opinion leaders in the network or without them. This was important to be able
to see whether the diffusion of the innovation indeed spreads faster in
networks where innovative opinion leaders are present.
| Model (hypothesis tested with model) | Innovativeness of OL $U_{i,min}$ | Weight of normative influence OL $\beta_{i,OL}$ | Weight of normative influence NL $\beta_{i,NL}$ | Quality of the product judgment (OL) |
|---|---|---|---|---|
| Base Model 1 | U(0, 0.8) | N(0.51, 0.2) | N(0.6, 0.2) | Yes |
| Model 2 (H1a) | U(0, 1) | N(0.51, 0.2) | N(0.6, 0.2) | Yes |
| Model 3 (H1b) | U(0, 0.8) | N(0.51, 0.2) | N(0.8, 0.2) | Yes |
| NA (H2a) | NA | NA | NA | NA |
| Model 4 (H2b) | U(0, 0.8) | N(0.57, 0.2) | N(0.57, 0.2) | Yes |
| Model 5 (H2c) | U(0, 0.8) | N(0.2, 0.2) | N(0.6, 0.2) | Yes |
| Model 6 (H3a, H3b) | U(0, 0.8) | N(0.51, 0.2) | N(0.6, 0.2) | No |

Table 3: Parameter settings for hypotheses (adapted from van Eck et al. (2011))
## 4 Experiments and results
In this chapter we first present the experiment settings, then discuss the
results, and finally compare the NetLogo and GAMA platforms.
### 4.1 Experiment settings
A model was created for each separate hypothesis. The values of the varied
parameters used for each model are shown in Table 3. Each model was run in a
separate experiment that consisted of 25 time steps, which was enough for the
maximum adoption percentage to be reached. To collect statistics, each
experiment was run 60 times with the same settings. We realize that 60 is a
low number of repetitions for fully adequate statistics, but we faced a
problem of the GAMA platform freezing due to excessive memory consumption when
running in batch mode, where several experiments are run one after another
automatically. We did not anticipate this, as the calculations were very fast
when running 500 consecutive experiments in NetLogo, and GAMA is seen as its
improved successor. We were thus reduced to running each experiment manually,
which proved quite time consuming, so we limited the number of runs to 60; we
may run more tests to calibrate the results if needed in the future.
Each time step consists of three phases: mass media, WoM, and adoption. At the
beginning of an experiment no agents are aware of the product or have adopted
it. Then mass media informs a predefined percentage (in our case 1%) of the
population about it. In this step, the better market knowledge of the opinion
leaders is implemented as follows: because opinion leaders are able to make a
good product judgment, they learn the real product quality from mass media and
their quality judgment becomes equal to it (q = 0.5, Table 1). On the
contrary, followers are not able to make this judgment, so they become aware
of the product but their perceived product quality gets a random value.
Followers can learn the real product quality only by WoM from trusted sources,
that is, from opinion leaders and from agents who have already adopted the
product. In the word-of-mouth stage, agents talk with their neighbours and may
learn the real product quality if their neighbours are certain about it. In
the adoption stage, agents decide to adopt the product if they are aware of it
and if the current value of their utility function exceeds their utility
threshold.
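The three phases of a time step can be sketched in plain Python. This is our own minimal reading of the description above, not the GAMA implementation: the dictionary keys (`ol`, `aware`, `known_quality`, `beta`, `threshold`) and the assumption that an agent's `id` equals its list index are illustrative choices.

```python
import random

TRUE_QUALITY = 0.5  # q in Table 1

def step(agents, neighbours, mass_media=0.01, rng=random):
    # Phase 1, mass media: inform ~1% of the population. OL learn the true
    # quality; followers only get a random perceived quality.
    for a in rng.sample(agents, max(1, int(mass_media * len(agents)))):
        a["aware"] = True
        a["known_quality"] = TRUE_QUALITY if a["ol"] else rng.uniform(0, 1)
    # Phase 2, word of mouth: learn the quality judgment of trusted aware
    # contacts (opinion leaders and adopters).
    for a in agents:
        for n in neighbours[a["id"]]:
            other = agents[n]
            if other["aware"] and (other["ol"] or other["adopted"]):
                a["aware"] = True
                a["known_quality"] = other["known_quality"]
    # Phase 3, adoption: adopt once utility exceeds the agent's threshold.
    for a in agents:
        if a["aware"] and not a["adopted"]:
            ns = neighbours[a["id"]]
            social = sum(agents[n]["adopted"] for n in ns) / len(ns) if ns else 0.0
            u = (1 - a["beta"]) * a["known_quality"] + a["beta"] * social
            if u > a["threshold"]:
                a["adopted"] = True
```

Running `step` 25 times over an agent list and a neighbour map reproduces the overall flow of one experiment, though the real model runs on the GAMA scheduler rather than a Python loop.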
### 4.2 Results and Validation
| | Adoption percentage (std. dev.) | | Speed of information diffusion: avg. steps (std. dev.) | | Speed of product diffusion: avg. steps (std. dev.) | |
|---|---|---|---|---|---|---|
| Model | Reference study | Our study | Reference study | Our study | Reference study | Our study |
| Base Model 1 - no OL | 0.398 (0.05) | 0.401 (0.05) | 3.64 (1.5) | 4.43 (1.47) | 6.27 (2.0) | 5.78 (1.51) |
| Base Model 1 | 0.491 (0.05) | 0.454 (0.05) | 1.75 (1.2) | 1.96 (0.43) | 4.94 (1.2) | 2.80 (0.78) |
| Model 2 (H1a) | 0.405 (0.04) | 0.398 (0.05) | | | | |
| Model 3 (H1b) | 0.458 (0.06) | 0.395 (0.06) | | | | |
| NA (H2a) | | | | | | |
| Model 4 (H2b) | 0.480 (0.05) | 0.455 (0.06) | | | | |
| Model 5 (H2c) | 0.515 (0.04) | 0.488 (0.05) | | | | |
| Model 6 (H3a, H3b) | | | 4.73 (2.22) | 4.11 (1.70) | 7.76 (2.21) | 5.64 (1.65) |

Table 4: Results of tests of each hypothesis
Before looking at the models testing the hypotheses, we had to make sure that
our model confirms the base assumption: that networks including opinion
leaders achieve a higher speed of both information and product diffusion, as
well as a greater adoption percentage, than networks without them. Thus, we
ran the base model in two experiments, once with opinion leaders and once
without them. The weight of normative influence ($\beta_{i}$) in the
comparison model with no opinion leaders is 0.75 (obtained from the reference
study). The average values of the results and their standard deviations for
these two tests, as well as for the remaining tests, can be found in Table 4.
We can see that information diffuses faster in the model with opinion leaders
than in the model without them: 1.96 steps compared with 4.43. The same holds
for the speed of product diffusion: 2.80 steps in the model with OL compared
to 5.78 steps without, which means that the product diffuses faster in the
model with OL. Thirdly, the average adoption percentage is higher in the model
with OL (0.45 versus 0.40), which also confirms our assumptions. Therefore,
the base model successfully shows that opinion leaders produce higher speeds
of information and product diffusion and a higher adoption percentage.
Table 4 shows the averaged results obtained from the reference study and from
our model for each of the hypothesis models. Each hypothesis was run in a
separate experiment on its own model, for which the values are presented in
Table 3, except for hypothesis H2a, which the reference study validated
through its empirical study. When comparing the results we can see that even
though the values differ slightly, their proportions stay the same, meaning
that our model was successfully validated against the reference NetLogo model:
as in the NetLogo model, hypotheses H1a, H2a (empirical study), H2b, H2c, H3a
and H3b were supported, while hypothesis H1b was not.
### 4.3 Comparing the platforms
The differences might be partially attributed to the smaller sample sizes over
which we average the results, but we think they are mostly due to the
different execution flow of the GAMA platform. It is here that GAMA introduces
a difference that we find important when building social models. The execution
flow of NetLogo for the innovation-diffusion model is sequential and
iterative. For each of the 25 steps of the simulation, the three stages (mass
media, word of mouth, adoption) are executed one after another, where the
first has to complete for all agents before the next stage can commence.
Inside each stage, the agents execute the associated actions iteratively in a
loop: agent 1 acts first and agent 500 last. The agents inherently act as
small blocks of non-connected code, and the order of their execution can never
differ, since it is determined by the loop over the agents that calls each of
them.
On the GAMA platform, on the other hand, each agent acts as its own entity
with its own behaviours. During the 25-step simulation, the only role of the
world agent that stands above all other agents (on the GAMA platform the world
agent acts similarly to a main function in many programming languages) is to
schedule them, i.e. give them an opportunity to act. While the agents still do
not all run in parallel, they are not coupled to the actions of the other
agents. When each agent gets its turn, it runs its behaviours, which are in
turn mass media, WoM, and adoption. The prime difference is thus that on the
GAMA platform the main program only calls the agents; after that it has no
control over how they execute, and they act as individual entities.
However, in our model the world agent still mostly calls the consumer agents
iteratively, starting with agent 1 and finishing with agent 500, which is
still not very representative of the real world because the order of the
agents is the same at each simulation step. A solution to this problem is
discussed in Section 5.
## 5 Conclusions and Further research
We have successfully established and validated a model of the diffusion of
innovation on the state-of-the-art GAMA platform, which is designed for
agent-based modeling. We think this is an important step towards more
realistic modeling of social interactions. However, as mentioned in Section 4,
the agents still get executed in the same order in each step of the
simulation, which could lead to unrealistic simulations. We would like to
highlight one example of this problem, namely the execution of the
word-of-mouth stage. In this stage, an agent talks to all of its neighbours,
and if any of them knows the true product quality, the agent becomes aware of
it by WoM. Now imagine the first simulation step, when after the mass-media
stage at most 1% of the population has become aware of the product. During the
WoM stage, agents are called upon iteratively, and each of them has a larger
probability that some of its previously non-aware neighbours have by then
become aware and can thus share their knowledge about the product.
Consequently, in each of the 25 steps of the simulation, agent number 1 will
always have a lower probability of becoming aware by WoM than agent number
500.
To solve this issue, the GAMA platform provides an option for calling the
agents in a random (shuffled) order. In NetLogo such an option could be
implemented manually, but would be hard to achieve. As future work, we think
that adding this property and observing the obtained results would be a good
idea. The results might stay the same, but the micro structure of the model
would become closer to real-world social models.
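The scheduling fix discussed above amounts to one line of extra work per step, sketched here in plain Python rather than GAML; the function name and callback signature are our own illustrative choices.

```python
import random

# Shuffling the agent list each step removes the systematic advantage that
# late-scheduled agents have in the word-of-mouth stage: no agent's position
# in the activation order is fixed across steps.
def run_step(agents, act, rng=random):
    order = list(agents)     # copy so the caller's list is untouched
    rng.shuffle(order)       # random activation order, as GAMA allows
    for agent in order:
        act(agent)
```

Every agent still acts exactly once per step; only the order varies, so aggregate quantities such as the adoption percentage remain well defined while the per-agent bias disappears.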
Another promising avenue of further research is the extension of the current
agents to BDI agents. Agent-based modeling is already a step forward from the
old aggregate models where humans were modeled as equal, homogeneous entities.
However, in ABMs in the field of social sciences, human agents can be made
more human-like by giving them personality traits. Such agents are called
belief, desire and intention (BDI) agents. The BDI model allows more complex
and descriptive agent models to represent humans. It attempts to capture the
common understanding of how humans reason through: beliefs, which represent
the individual's knowledge about the environment and about their own internal
state; desires, or more specifically goals (non-conflicting desires which the
individual has decided they want to achieve); and intentions, which are the
set of plans or sequences of actions the individual intends to follow in order
to achieve their goals Adam and Gaudou (2016). Two other important
functionalities a BDI system must have are a rational process by which an
agent decides which intentions to follow depending on the current
circumstances, and a level of commitment to the set of intentions needed to
achieve a long-term goal.
We think that BDI agents are important to give higher descriptive value to the
results of social studies, which the diffusion of innovation certainly is.
They give more information on how agents behave and a deeper insight into how
innovation diffuses in the population. As future work, now that the model has
been translated to GAMA, which supports the BDI architecture, we will upgrade
it by extending its agents to BDI agents. We will then add different human
factors to these agents and observe how they affect the spread and speed of
the diffusion of an innovation, and whether the results stay in line with the
original model.
## References
* Adam and Gaudou [2016] Carole Adam and Benoit Gaudou. BDI agents in social simulations: a survey. The Knowledge Engineering Review, 31(3):207–238, Cambridge University Press, 2016.
* Bass [1969] F. M. Bass. A new product growth for model consumer durables. Management Science, 15(5):215–227, 1969.
* Bohlman et al. [2010] J. Bohlman, R. Calantone and M. Zhao. The effects of market network heterogeneity on innovation diffusion: An agent-based modeling approach. Journal of Product Innovation Management, 27(5):741–760, 2010.
* Delre et al. [2010] Sebastiano A. Delre. Will it spread or not? The effects of social influences and network topology on innovation diffusion. Journal of Product Innovation Management, 27(2), 2010.
* Delre et al. [2007] Sebastiano A. Delre, W. Jager, T. H. A. Bijmolt and M. A. Janssen. Targeting and timing promotional activities: An agent-based model for the takeoff of new products. Journal of Business Research, 60(8):826–835, 2007.
* van Eck et al. [2011] Peter S. van Eck, Wander Jager and Peter S. H. Leeflang. Opinion leaders' role in innovation diffusion: a simulation study. Journal of Product Innovation Management, 28(2):187–203, Oxford, Blackwell Publishing, 2011.
* Goldenberg et al. [2009] J. Goldenberg, S. Han, D. R. Lehmann and J. W. Hong. The role of hubs in the adoption process. Journal of Marketing, 73(2):1–13, 2009.
* Jensen and Chappin [2017] Thorben Jensen and Emile J. L. Chappin. Automating agent-based modeling: Data-driven generation and application of innovation diffusion models. Environmental Modelling & Software, 92:261–268, 2017.
* Kiesling et al. [2012] Elmar Kiesling, Markus Günther, Christian Stummer and Lea M. Wakolbinger. Agent-based simulation of innovation diffusion: A review. Central European Journal of Operations Research, 20:183–230, 2012.
* Laciana et al. [2017] Carlos E. Laciana, Gustavo Preyra and Santiago L. Rovere. Size invariance sector for an agent-based innovation diffusion model. arXiv:1706.03859, 2017.
* Lyons and Henderson [2005] B. Lyons and K. Henderson. Opinion leadership in a computer-mediated environment. Journal of Consumer Behavior, 4(5):319–329, 2005.
* Macal and North [2005] C. M. Macal and M. J. North. Tutorial on agent-based modeling and simulation. In 37th Winter Simulation Conference, Introductory Tutorials: Agent-Based Modeling, 2–15, 2005.
* Mills and Schleich [2012] Bradford Mills and Joachim Schleich. Residential energy-efficient technology adoption, energy conservation, knowledge, and attitudes: An analysis of European countries. Energy Policy, 49, 2012.
* Zhang and Vorobeychik [2016] Haifeng Zhang and Yevgeniy Vorobeychik. Empirically grounded agent-based models of innovation diffusion: A critical review. arXiv:1608.08517, 2016.
|
On event-by-event pseudorapidity fluctuations in relativistic nuclear
interactions
M. Mohisin Khan1∗, Danish F. Meer1, Tahir Hussain2, N. Ahmad3
1. Department of Applied Physics, ZHCET, Aligarh Muslim University, Aligarh, India
2. Applied Sciences and Humanities Section, University Polytechnic, Aligarh Muslim University, Aligarh, India
3. Department of Physics, Aligarh Muslim University, Aligarh, India
<EMAIL_ADDRESS>
Abstract
The present study is an attempt to take a detailed look at the event-by-event
(e-by-e) pseudorapidity fluctuations of the relativistic charged particles
produced in 28Si-nucleus interactions at incident momenta of 4.5A and 14.5A
GeV/c. The method used makes use of a kinematic variable derived in terms of
the average pseudorapidity and the total number of particles produced in a
single event. The multiplicity and pseudorapidity dependence of these
fluctuations have also been studied. The results obtained for the experimental
data are compared with HIJING simulations.
Keywords: event-by-event pseudo-rapidity fluctuations, correlation,
relativistic nuclear collisions.
Introduction
Relativistic nuclear collisions are the most fascinating and important tools
to produce matter under extreme conditions of temperature and density. The key
to studying and understanding the behaviour of this produced matter is the
copious production of secondary particles in these collisions. Global
observables such as the multiplicity and pseudorapidity of the produced
particles play an important role in understanding the particle production
process in relativistic hadron-hadron, hadron-nucleus and nucleus-nucleus
collisions. The current interest in such studies is mainly to understand the
characteristics of the quark-gluon plasma (QGP) and the scenario of the phase
transition from the QGP to the normal hadronic phase. Fluctuations in the
values of global observables have always been considered one of the possible
signatures of QGP formation1. Various nearly conclusive studies2-5 regarding
QGP formation and its signatures have been made using data from three
experimental energy regimes: the SPS, RHIC and LHC.
As the matter produced in high energy heavy-ion collisions is a short-lived
state and the hadronization after the collisions is very fast, one has to rely
on observations of the characteristics of the produced particles. The study of
the particles coming out of the interaction region may provide important
information about the underlying dynamics of the collision process and of
multi-particle production. Fluctuations in general, and event-by-event
fluctuations of observables in particular, are envisaged to give vital
information about the phase transition5-9. The process of thermalization,
along with the statistical behaviour of the produced particles, can be
understood by studying the fluctuations in particle multiplicity and momentum
distributions10-15. Reference 6 stressed that charge fluctuations may be
evidence of QGP formation. Many such studies of fluctuations have been carried
out. The critical point of the QGP phase transition, in particular, can be
studied well using e-by-e fluctuations, because near the critical point the
fluctuations are predicted to be large14-17. A study of each event produced in
relativistic nuclear collisions may reveal new physical phenomena occurring in
some rare events for which the required conditions might have been created.
Nuclear collisions at high energies produce a large number of particles, and
the analysis of a single event with large multiplicity can shed light on
physics different from that accessible through averages over a large number of
events. Predictions have been made about the occurrence of critical density
fluctuations in the vicinity of the phase transition and their manifestation
as e-by-e fluctuations of different physical observables18. The e-by-e
analysis may offer the possibility of observing a phase transition directly if
the particle-emitting source is in hydrodynamical and chemical equilibrium.
The NA49 collaboration10 observed fluctuations of the transverse momentum and
of the kaon-to-pion ratio in central Pb-Pb collisions at 158A GeV. A. Bialas
and V. Koch9 and Belkacem et al.19 reported that the moments of e-by-e
fluctuations are closely related to the correlation function. The ALICE
collaboration20 has measured the e-by-e fluctuation of the mean transverse
momentum in p-p and Pb-Pb collisions at the LHC. A number of papers are
available in the literature on e-by-e fluctuation analyses of different
observables, but very few address e-by-e pseudo-rapidity fluctuations. The
first such study was carried out by the KLM collaboration for 158A GeV Pb-AgBr
interactions21. Recently, S. Bhattacharya et al.18 and G. Bhoumik et al.22
have carried out e-by-e pseudo-rapidity fluctuation analyses on various
emulsion data with different projectiles and targets at 4.1A, 4.5A, 60A and
200A GeV. In the present study we carry out an e-by-e fluctuation analysis of
28Si-AgBr interactions at 4.5A and 14.5A GeV/c, for both experimental and
HIJING-simulated data.
The following sections of this paper are devoted to the details of the data,
the analysis method, the results and discussion, and the observations made on
the basis of the obtained results.
Experimental details of the data
The present analysis has been carried out on both experimental and simulated
data. For the experimental data, two random samples consisting of 555 events
of 14.5A GeV/c 28Si-nucleus interactions and 530 events of 4.5A GeV/c
28Si-nucleus interactions with $N_{s}\geq 10$ have been used, where $N_{s}$
represents the number of charged particles produced in an event with relative
velocity $\beta=v/c>0.7$. The emission angles of the relativistic charged
particles were measured and their pseudorapidities
($\eta=-\ln(\tan(\theta/2))$) were determined. All other details about the
data may be found elsewhere23,24. Furthermore, for comparing the experimental
results with the corresponding values obtained for events generated by the
Monte Carlo code HIJING-1.3325 event generator, a similar sample of events was
simulated.
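The pseudorapidity used throughout the analysis follows directly from the
measured emission angle. A minimal sketch (Python; the example angles are
purely illustrative):

```python
import math

def pseudorapidity(theta_rad):
    """eta = -ln(tan(theta/2)) for a measured emission angle theta (radians)."""
    return -math.log(math.tan(theta_rad / 2.0))

# A particle emitted at 90 degrees has eta ~ 0; smaller angles give larger eta.
print(pseudorapidity(math.pi / 2))       # ~0
print(pseudorapidity(math.radians(10)))  # ~2.44
```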
Method of analysis
M. Gazdzicki and S. Mrowczynski7 proposed an excellent method to measure the
fluctuations of any global kinematic variable. It is worth mentioning that the
second moments of the distributions of global kinematic variables
(multiplicity, rapidity, transverse momentum, etc.), taken for individual
events or for all events together, may shed light on the extent of
thermalization and randomization in high energy nuclear collisions. The basic
idea of the method is that the correlated production of particles in each
elementary interaction leads to large e-by-e fluctuations; in high energy
nuclear collisions such fluctuations are believed to originate partly from the
trivial variation of the impact parameter of the interaction. They may also
arise for statistical reasons or from dynamical features of the underlying
processes prevailing at the instant of the collision. The method7
automatically filters out the trivial contributions and provides a way to
determine the remaining part contributing to the fluctuations. For this, a
variable $\Phi$, believed to be a measure of fluctuations, is defined: its
nonzero values point towards correlations and fluctuations, while a vanishing
value of $\Phi$ points towards independent (random) particle emission from a
single source. The detailed procedure for calculating this variable is
described below.
As the global kinematic variable used in the present analysis to study the
e-by-e fluctuations is the pseudo-rapidity, $\eta$, of the emitted particles,
we first define a single-particle variable z in terms of $\eta$ as:
$z=\eta-\bar{\eta},$ (1)
where $\bar{\eta}$ represents the mean value of the single-particle inclusive
pseudo-rapidity distribution, which can be expressed as
$\bar{\eta}=\frac{1}{N_{total}}\sum_{m=1}^{N_{evt}}\sum_{i=1}^{N_{m}}\eta_{i},$
(2)
where $N_{m}$ is the multiplicity of the mth event. The second summation, over
i, runs over all the particles produced in the mth event, while the first
summation runs over all the $N_{evt}$ events in the sample. $N_{total}$ in the
denominator is the total number of particles produced in all the events.
Further, a multi-particle analogue of z, computed for the kth event of
multiplicity $N_{k}$, is defined as
$Z_{k}=\sum_{i=1}^{N_{k}}(\eta_{i}-\bar{\eta}).$ (3)
Finally, the measure of the fluctuations, the parameter $\Phi$, is defined as
$\Phi=\sqrt{\frac{\langle Z^{2}\rangle}{\langle N\rangle}}-\sqrt{\bar{z^{2}}},$ (4)
where $\langle Z^{2}\rangle$ and $\langle N\rangle$ represent event averages
of the quantities therein, and $\sqrt{\bar{z^{2}}}$ is the square root of the
second moment of the inclusive z distribution. As stated in Ref. 7, $\Phi$
vanishes when there is no correlation among the produced particles, and its
non-vanishing values are a measure of the correlations and fluctuations
present in the system. This method has been used extensively and successfully
to analyze many experimental data sets18,21 and to verify various theoretical
aspects26,27. In the present analysis we study the e-by-e $\eta$ fluctuations
in 4.5A and 14.5A GeV/c 28Si-nucleus interactions, with the variable z defined
in terms of $\eta$ as in Eq. (1).
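The procedure of Eqs. (1)-(4) can be sketched numerically. In the toy check
below (all names and numbers are illustrative, not taken from the experimental
data), particles are emitted independently from a single Gaussian source, so
$\Phi$ should come out close to zero:

```python
import numpy as np

def phi_measure(events):
    """Gazdzicki-Mrowczynski fluctuation measure, Eqs. (1)-(4):
    Phi = sqrt(<Z^2>/<N>) - sqrt(mean(z^2))."""
    all_eta = np.concatenate(events)
    eta_bar = all_eta.mean()                               # Eq. (2)
    z2_bar = np.mean((all_eta - eta_bar) ** 2)             # inclusive z moment
    Z = np.array([np.sum(ev - eta_bar) for ev in events])  # Eq. (3), per event
    n_bar = np.mean([len(ev) for ev in events])            # mean multiplicity
    return np.sqrt(np.mean(Z ** 2) / n_bar) - np.sqrt(z2_bar)  # Eq. (4)

# Toy sample: independent emission from one source, so Phi should be ~0.
rng = np.random.default_rng(0)
events = [rng.normal(2.0, 1.0, size=rng.integers(10, 60)) for _ in range(5000)]
print(phi_measure(events))
```

Correlated emission (for example, particles emitted in clusters) would drive
the first term above the second and yield a positive $\Phi$.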
Results and discussion
First, the values of $\Phi$ are calculated for different groups of events
selected on the basis of the average multiplicity of the relativistic charged
particles, $<N_{s}>$. This calculation has been made for both the experimental
and the simulated data. The resulting values, along with their statistical
errors, are tabulated in Table 1. The different groups are selected in such a
way as to ensure that the average multiplicity of each group is greater than
the average multiplicity of the full data sample.
Table 1: Calculated values of $\Phi$ for different multiplicity classes for the experimental and HIJING simulated data.

Interactions | Multiplicity selection | Experimental $<N_{s}>$ | Experimental $\Phi$ | HIJING $<N_{s}>$ | HIJING $\Phi$
---|---|---|---|---|---
4.5A GeV/c 28Si-nucleus | Ns $\geq$ 10 | 15.33 | 5.402 $\pm$ 0.073 | 17.85 | 4.491 $\pm$ 0.088
 | Ns $\geq$ 20 | 31.54 | 5.010 $\pm$ 0.071 | 33.55 | 3.923 $\pm$ 0.080
 | Ns $\geq$ 30 | 42.14 | 4.410 $\pm$ 0.082 | 39.74 | 3.108 $\pm$ 0.082
 | Ns $\geq$ 40 | 52.76 | 4.106 $\pm$ 0.088 | 51.23 | 2.213 $\pm$ 0.089
 | Ns $\geq$ 50 | 64.75 | 3.710 $\pm$ 0.069 | 63.25 | 1.984 $\pm$ 0.094
14.5A GeV/c 28Si-nucleus | Ns $\geq$ 10 | 24.98 | 5.281 $\pm$ 0.074 | 19.71 | 3.874 $\pm$ 0.089
 | Ns $\geq$ 20 | 33.99 | 5.010 $\pm$ 0.077 | 36.25 | 3.093 $\pm$ 0.101
 | Ns $\geq$ 30 | 47.99 | 4.740 $\pm$ 0.088 | 43.55 | 2.823 $\pm$ 0.113
 | Ns $\geq$ 40 | 51.86 | 3.901 $\pm$ 0.097 | 50.55 | 2.123 $\pm$ 0.134
 | Ns $\geq$ 50 | 64.66 | 2.558 $\pm$ 1.066 | 61.58 | 1.674 $\pm$ 0.149
Tabulated above are the values of $\Phi$ and the average multiplicity of
relativistic charged particles, $<N_{s}>$, for various multiplicity classes
for the experimental and simulated events. It is observed from the table that
the values of $\Phi$ are non-zero for all the multiplicity classes considered
in the present study, and this observation holds for both the experimental and
simulated data at the two incident energies for 28Si-nucleus interactions.
These non-zero values of $\Phi$ support the occurrence of dynamical
fluctuations in the $\eta$-variable and the presence of correlations during
the particle production process in high energy nucleus-nucleus collisions. It
is also observed from the table that $\Phi$, which is considered a measure of
the strength of correlations and fluctuations, shows a decreasing trend with
increasing $<N_{s}>$. This dependence of $\Phi$ on $<N_{s}>$ is shown in
Fig. 1; the errors shown in the figure are statistical. It is clear from
Fig. 1 that the e-by-e $\eta$ fluctuations tend to decrease with increasing
mean multiplicity of the produced relativistic charged particles. This may be
due to the fact that particle production takes place via several independent
sources, whose contributions may mask the correlated production. One can also
argue that identical sources producing low-multiplicity events result in low
$\Phi$ values; that is, as the source fluctuations tend to vanish, the
pseudo-rapidity fluctuations increase. This observed trend of the variation of
$\Phi$ with the average multiplicity is in agreement with observations made by
other studies in the high energy regime18,20,21.
To compare the experimental results with the HIJING simulation, samples of
events were generated with approximately 10 times the experimental statistics.
It is clear from Fig. 1 that the $\Phi$ values obtained for the HIJING data
are lower than those for the experimental data at both energies, but the trend
of variation of $\Phi$ with $<N_{s}>$ is almost the same for the experimental
and simulated data.
Another interesting aspect of the event-by-event pseudo-rapidity fluctuations,
or of the fluctuations of any global observable describing high energy nuclear
collision data, is their dependence on the phase space region, which in this
study is the pseudo-rapidity space itself. For this, the values of $\Phi$ are
determined for various pseudo-rapidity intervals
$\Delta\eta=\eta_{2}-\eta_{1}$, where $\eta_{1}$ and $\eta_{2}$ are the lower
and upper limits of a chosen $\eta$-window. In the present study the chosen
$\Delta\eta$ values are 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0 and 6.0 for
both data sets. The calculated values of $\Phi$ corresponding to these
pseudo-rapidity regions are listed in Table 2.
Table 2: Calculated values of $\Phi$ in different pseudo-rapidity windows for the experimental and HIJING simulated data.

Interactions | $\Delta\eta$ | Experimental $\Phi$ | HIJING $\Phi$
---|---|---|---
4.5A GeV/c 28Si-nucleus | 0.5 | 0.199 $\pm$ 0.009 | 0.116 $\pm$ 0.008
 | 1.0 | 0.499 $\pm$ 0.018 | 0.362 $\pm$ 0.023
 | 1.5 | 1.159 $\pm$ 0.028 | 0.905 $\pm$ 0.082
 | 2.0 | 2.179 $\pm$ 0.041 | 1.937 $\pm$ 0.088
 | 2.5 | 2.890 $\pm$ 0.098 | 2.987 $\pm$ 0.092
 | 3.0 | 3.972 $\pm$ 0.101 | 3.257 $\pm$ 0.098
 | 4.0 | 4.452 $\pm$ 0.161 | 4.096 $\pm$ 0.117
 | 5.0 | 4.622 $\pm$ 0.201 | 4.362 $\pm$ 0.188
 | 6.0 | 5.027 $\pm$ 0.211 | 4.674 $\pm$ 0.198
14.5A GeV/c 28Si-nucleus | 0.5 | 0.194 $\pm$ 0.006 | 0.102 $\pm$ 0.021
 | 1.0 | 0.387 $\pm$ 0.009 | 0.341 $\pm$ 0.029
 | 1.5 | 1.097 $\pm$ 0.021 | 0.891 $\pm$ 0.038
 | 2.0 | 2.063 $\pm$ 0.082 | 1.912 $\pm$ 0.043
 | 2.5 | 2.732 $\pm$ 0.111 | 2.889 $\pm$ 0.055
 | 3.0 | 3.817 $\pm$ 0.128 | 3.172 $\pm$ 0.076
 | 4.0 | 4.158 $\pm$ 0.141 | 3.995 $\pm$ 0.102
 | 5.0 | 4.489 $\pm$ 0.188 | 4.355 $\pm$ 0.111
 | 6.0 | 4.811 $\pm$ 0.214 | 4.788 $\pm$ 0.175
It is observed from Table 2 that widening the pseudo-rapidity window leads to
larger e-by-e fluctuations. The values of $\Phi$ first increase with
increasing $\Delta\eta$ and then tend to saturate at much larger $\Delta\eta$.
This behaviour is observed at both energies considered in this analysis. It
might be due to long-range correlations dominating over short-range
correlations as a larger rapidity space is explored. Based on phenomenological
evidence, it has been argued that particle production in high energy
hadron-hadron and nucleus-nucleus collisions carries signals of both short-
and long-range correlations. The average number of produced particles depends
essentially on the size of the initiating cluster; this gives rise to
long-range correlations, meaning that particles separated by relatively large
$\eta$ show some correlation. The values of $\Phi$ for the HIJING simulated
data are smaller than those for the experimental data, but the HIJING data
show a similar dependence of $\Phi$ on $\Delta\eta$. These observations are
depicted more clearly in Fig. 2, where $\Phi$ is plotted against $\Delta\eta$
along with the statistical errors.
Conclusions
Event-by-event fluctuations of the pseudo-rapidity of the relativistic charged
particles produced in 28Si-nucleus interactions at 4.5A and 14.5A GeV/c have
been studied in terms of the fluctuation- and correlation-quantifying
parameter $\Phi$. The analysis reveals the presence of e-by-e
$\eta$-fluctuations and correlations amongst the produced particles in
pseudo-rapidity space at both incident momenta, as non-vanishing values of
$\Phi$ are obtained. It is observed that these fluctuations decrease with
increasing mean multiplicity of the produced particles. This might be due to
the smearing out of the existing correlations as more and more independent
particle-emitting sources are added. The e-by-e fluctuations are also found to
depend on the pseudo-rapidity window, showing an increasing behaviour with
increasing $\Delta\eta$. The results obtained for the HIJING data exhibit a
trend similar to that of the experimental data at both energies. Correlation
and fluctuation studies remain excellent tools to explore the behaviour of the
system produced in heavy-ion collisions at relativistic and ultra-relativistic
energies.
Acknowledgment: Financial support from DST, Govt. of India is acknowledged
with thanks.
References
1. M. A. Stephanov, K. Rajagopal, E. V. Shuryak, Phys. Rev. Lett. 81, 4816 (1998)
2. M. Luzum et al., J. Phys. G 41, 063102 (2004)
3. Y. Aoki et al., Nature 443, 675 (2006)
4. S. Jeon et al., Phys. Rev. C 73, 014905 (2006)
5. L. F. Babichev, A. N. Khmialeuski, Proceedings of the 15th Int. Conf.-School, September 20-23, 2010
6. M. Weber for the ALICE collaboration, J. Phys.: Conf. Series 389, 012036 (2012)
7. M. Gazdzicki, S. Mrowczynski, Z. Phys. C 54, 127 (1992)
8. E. V. Shuryak, Phys. Lett. B 423, 9 (1998)
9. A. Bialas, V. Koch, Phys. Lett. B 456, 1 (1999)
10. NA49 Collaboration (H. Appelshauser et al.), Phys. Lett. B 459, 679 (1999)
11. G. Baym, H. Heiselberg, Phys. Lett. B 469, 7 (1999)
12. G. Danilov, E. Shuryak, nucl-th/9908027
13. T. Anticic et al., Phys. Rev. C 70, 034902 (2004)
14. T. K. Nayak, J. Phys. G 32, S187 (2006), arXiv:nucl-ex/060802
15. H. Heiselberg, Phys. Rep. 351, 161 (2001)
16. M. Stephanov et al., Phys. Rev. Lett. 81, 4816 (1998)
17. M. Stephanov et al., Phys. Rev. D 61, 114028 (1999)
18. S. Bhattacharya et al., Phys. Lett. B 726, 194 (2013)
19. M. Belkacem et al., arXiv:nucl-th/9903017v2, 22 April 1999
20. B. Abelev et al., Eur. Phys. J. C 74, 3077 (2014)
21. KLM Collaboration (M. L. Cherry et al.), Acta Phys. Pol. B 29, 2129 (1998)
22. G. Bhoumik, S. Bhattacharya et al., Eur. Phys. J. A 52, 196 (2016)
23. Shafiq Ahmad et al., J. Phys. Soc. Jpn. 75, 064604 (2006)
24. Shakeel Ahmad et al., Acta Phys. Pol. B 35, 809 (2004)
25. M. Gyulassy and X. N. Wang, Comput. Phys. Commun. G 25, 1895 (1999)
26. M. Gazdzicki et al., Eur. Phys. J. C 6, 365 (1999)
27. S. Mrowczynski, Phys. Lett. B 439, 6 (1998)
YHEP-COS-21-01
# Cosmic-Neutrino-Boosted Dark Matter ($\nu$BDM)
Yongsoo Jho<EMAIL_ADDRESS>Department of Physics and IPAP, Yonsei
University, Seoul 03722, Republic of Korea Jong-Chul Park<EMAIL_ADDRESS>Department of Physics and Institute of Quantum Systems (IQS), Chungnam
National University, Daejeon 34134, Republic of Korea Seong Chan Park
<EMAIL_ADDRESS>Department of Physics and IPAP, Yonsei University, Seoul
03722, Republic of Korea Po-Yan Tseng<EMAIL_ADDRESS>Department of
Physics and IPAP, Yonsei University, Seoul 03722, Republic of Korea
###### Abstract
A novel mechanism of boosting dark matter by cosmic neutrinos is proposed. The
new mechanism is so significant that the arriving flux of dark matter in the
mass window $1~{}{\rm keV}\lesssim m_{\rm DM}\lesssim 1~{}{\rm MeV}$ on Earth
can be enhanced by two to four orders of magnitude compared to that from
cosmic electrons alone. We thereby derive, for the first time, conservative
but still stringent bounds and future sensitivity limits for such
cosmic-neutrino-boosted dark matter ($\nu$BDM) from advanced underground
experiments such as Borexino, PandaX, XENON1T, and JUNO.
## I Introduction
Revealing the properties of dark matter (DM) is definitely one of the most
pressing issues in particle physics, astrophysics, and cosmology. Direct
detection experiments of DM are of particular importance as they aim to probe
the interaction of DM with standard model (SM) particles [1]. However, there
exists a fundamental limitation in detecting sub-MeV dark matter, set by the
maximum kinetic energy of the DM particle in the halo:
$K_{\rm DM}^{\rm max}\lesssim 10^{-6}\,m_{\rm DM}\lesssim 1~{}{\rm eV}$ (1)
with the velocity $v\sim 10^{-3}$. This low kinetic energy causes a
significant problem in detecting light dark matter, since the recoil energy of
the scattered SM particle is also limited by this kinetic energy (several
ideas have been suggested to detect signals with low recoil energies by
lowering the threshold energies at detectors; see [2, 3] and references
therein). On the other hand, there still exists a chance to detect a
subcomponent of DM, dubbed ‘boosted dark matter’ (BDM), which may carry much
larger energy beyond threshold due to various mechanisms [4, 5, 6, 7, 8, 9,
10], including scattering by energetic cosmic-ray particles [11, 12, 13, 14,
15, 16, 17]. We note that the focus has so far been on cosmic-ray electrons
and protons, even though the opportunity is not exclusive to charged
particles.
In this letter, we focus on a novel class of cosmic-neutrino-boosted dark
matter ($\nu$BDM), extending previous studies: there exists a huge number of
cosmic-ray neutrinos arriving at the solar system from various origins [18].
Our Sun is also generating a large number of neutrinos [19, 20, 21], which may
boost DM within the solar system. We find that $\nu$BDM can be a dominant part
of the whole BDM flux when the DM-neutrino interaction is as strong as the
DM-electron interaction, which is indeed the case, for instance, for a gauged
lepton number as mediator [22, 23]. The existing conclusions regarding
cosmic-electron-induced BDM should therefore be re-examined.
## II Boost mechanism by cosmic neutrino
Cosmic neutrino inputs. Near Earth, our Sun provides the dominating neutrino
flux $d\Phi^{\rm Sun}_{\nu}/dK_{\nu}$ in the neutrino energy $K_{\nu}\lesssim
10$ MeV reaching the maximum $\simeq\mathcal{O}(10^{8})~{}[{\rm
cm^{-2}\,s^{-1}\,keV^{-1}}]$ around $K_{\nu}\simeq 0.3$ MeV [19, 20, 21],
which gives the total number of neutrino emission rate per unit energy
$\displaystyle\frac{d\dot{N}^{\rm Sun}_{\nu}}{dK_{\nu}}\equiv\frac{d\Phi^{\rm
Sun}_{\nu}}{dK_{\nu}}\,(4\pi D^{2}_{\odot})\,,$ (2)
where $D_{\odot}=1$ AU is the distance between Sun and Earth. The neutrinos
can boost non-relativistic light DM, leaving distinctive signals at
terrestrial experiments, e.g. XENON1T [24, 25]. The total contributions from
all stars for $\nu$BDM could be significant compared to the BDM flux by the
solar neutrinos. The overall neutrino flux from all stars in the Milky Way
(say, cosmic-neutrino flux) has not been measured by astrophysical
observations, and could be highly anisotropic, which is different from the
isotropic diffused cosmic electrons. In general, DM particles can be boosted
by the neutrino flux from the nearest star, instead of diffused neutrinos.
Keep this philosophy in mind, we will compute the $\nu$BDM flux by starting
with single star contribution in the following section, then integrate the
entire star distribution in the Milky Way.
Cosmic neutrino and DM scattering. The halo DM is boosted by neutrinos through
the process $\nu+\chi\to\nu+\chi$, which may originate from the exchange of
the $U(1)_{L_{e}-L_{i}}$ gauge boson or from dim-6 effective operators such as
$(\bar{\ell}\gamma^{\mu}\ell)(\bar{\chi}\gamma_{\mu}\chi)$ or
$(\bar{\ell}\ell)(\bar{\chi}\chi)$. The resulting BDM kinetic energy $K_{\rm
DM}$ is determined by the kinetic energy of the incoming neutrino $K_{\nu}$.
In the halo DM rest frame, the allowed range of $K_{\rm DM}$ is given by [26]
$\displaystyle 0\leq K_{\rm DM}\leq K^{\rm max}_{\rm DM}\equiv\frac{2m_{\rm
DM}(K^{2}_{\nu}+2m_{\nu}K_{\nu})}{(m_{\rm DM}+m_{\nu})^{2}+2m_{\rm
DM}K_{\nu}}\,.$ (3)
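Equation (3) can be evaluated directly. A short sketch (all quantities in MeV;
the benchmark numbers are illustrative, chosen near the solar-neutrino peak
$K_{\nu}\simeq 0.3$ MeV quoted above):

```python
def k_dm_max(k_nu, m_dm, m_nu=0.0):
    """Maximum DM kinetic energy from elastic nu-DM scattering, Eq. (3)."""
    return 2.0 * m_dm * (k_nu**2 + 2.0 * m_nu * k_nu) / (
        (m_dm + m_nu)**2 + 2.0 * m_dm * k_nu)

# Light DM (m_DM = 5 keV) can take almost the full neutrino energy,
# while heavy DM (m_DM = 1 GeV) receives only a tiny recoil.
print(k_dm_max(0.3, 0.005))   # ~0.2975 MeV
print(k_dm_max(0.3, 1000.0))  # ~1.8e-4 MeV
```

This illustrates why neutrino boosting is most effective in the sub-MeV mass
window discussed in the abstract.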
Figure 1: [Top] Schematic description of BDM production by the neutrino from a
single star. [Bottom] Areal density of unit-normalized distribution of the
$\nu$BDM flux from stars in our galaxy
$\mathcal{P}(\vec{y})\equiv\frac{1}{\Phi_{\rm DM}}\frac{d\Phi_{\rm
DM}}{dA_{y}}$ [kpc-2], for two representative ranges of $K_{\rm DM}$: 10 – 100
keV (left) and 1 – 10 MeV (right). $dA_{y}$ is the areal element of the
Galactic disk, defined by position of star, $\vec{y}$.
The BDM flux by neutrinos from a Sun-like star is
$\displaystyle\frac{d\Phi^{(1)}_{\rm DM}(\overrightarrow{y})}{dK_{\rm DM}}$
$\displaystyle\simeq$
$\displaystyle\frac{1}{8\pi^{2}}\left(\tilde{f}_{1}\frac{d\dot{N}^{\rm
Sun}_{\nu}}{dK_{\nu}}\right)\int d^{3}\overrightarrow{z}\frac{\rho_{\rm
DM}(|\overrightarrow{z}|)}{m_{\rm
DM}}\frac{1}{|\overrightarrow{x}-\overrightarrow{z}|^{2}}$ (4)
$\displaystyle\times\left(\left.\frac{dK_{\nu}}{d\bar{\theta}}\right|_{\bar{\theta}=\bar{\theta}_{0}}\right)\left(\left.\frac{d\sigma_{\nu{\rm
DM}}}{dK_{\rm DM}}\right|_{\bar{\theta}=\bar{\theta}_{0}}\right)$
$\displaystyle\times\frac{1}{\sin\bar{\theta}_{0}}\frac{1}{|\overrightarrow{z}-\overrightarrow{y}|^{2}}\times\exp{\left(-\frac{|\overrightarrow{z}-\overrightarrow{y}|}{d_{\nu}}\right)}\,,$
where the schematic diagram of the coordinate system is shown in the top panel
of Fig. 1, and $\overrightarrow{x},\overrightarrow{y}$, and
$\overrightarrow{z}$ represent the positions of Earth, Star, and halo DM,
respectively. The correction factor $\tilde{f}_{1}$ takes into account the
variances of stellar properties from Sun [27] and $\rho_{\rm DM}$ is the DM
halo density profile. The differential $\nu$-DM cross section depends on
scattering angle $\bar{\theta}$, and $\bar{\theta}_{0}$ can be determined by
$K_{\nu}$ and $K_{\rm DM}$ via kinematic relations:
$\displaystyle K_{\nu}(K_{\rm DM},\bar{\theta})$ $\displaystyle=$
$\displaystyle\frac{p^{\prime 2}-K^{2}_{\rm
DM}}{2\left(p^{\prime}\cos\bar{\theta}-K_{\rm DM}\right)}\,,$ (5)
$\displaystyle\left.\frac{dK_{\nu}}{d\bar{\theta}}\right|_{\bar{\theta}=\bar{\theta}_{0}}$
$\displaystyle=$ $\displaystyle\frac{(p^{\prime 2}-K^{2}_{\rm
DM})p^{\prime}}{2\left(p^{\prime}\cos\bar{\theta}_{0}-K_{\rm
DM}\right)^{2}}\sin\bar{\theta}_{0}\,,$ (6)
where $p^{\prime}\equiv\sqrt{2m_{\rm DM}K_{\rm DM}+K^{2}_{\rm DM}}$ is the
3-momentum of the BDM in the halo DM frame. Since
$dK_{\nu}/d\bar{\theta}\propto 1/\cos^{2}\bar{\theta}$, a large scattering
angle $\bar{\theta}\simeq\pi/2$ is favoured for $m_{\rm DM}\gg K_{\rm DM}$,
whereas $dK_{\nu}/d\bar{\theta}\propto 1/(\cos\bar{\theta}-1)^{2}$ makes
forward scattering $\bar{\theta}\simeq 0$ dominate for $m_{\rm DM}\ll K_{\rm
DM}$.
The neutrino flux attenuation due to propagation is determined by the
exponential function in Eq. (4), and the mean free path is obtained as
$d_{\nu}\equiv 1/[(\rho_{\rm DM}/m_{\rm DM})\cdot\sigma_{\nu{\rm DM}}]$ where
the total $\nu$-DM cross section is
$\displaystyle\sigma_{\nu{\rm DM}}(K_{\nu})\equiv\int^{K^{\rm max}_{\rm
DM}}_{0}dK_{\rm DM}\frac{d\sigma_{\nu{\rm DM}}}{dK_{\rm DM}}\,.$ (7)
In the derivation of Eq. (4), we use the point-like star approximation,
starting with a finite star radius $R_{\rm star}$ and then taking $R_{\rm
star}\to 0$. The final result for $d\Phi^{(1)}_{\rm
DM}(\overrightarrow{y})/dK_{\rm DM}$ is finite. Due to the distance-squared
suppression, the dominant $\nu$BDM fluxes originate either from halo DM in the
vicinity of Earth or from the galactic center (GC).
Figure 2: The unit-normalized arrival direction $\theta$ distributions of the
$\nu$BDM spectral flux $\varphi_{\rm BDM}\equiv d\Phi_{\rm DM}/dK_{\rm DM}$
for two benchmark values of $K_{\rm DM}$: 10 keV (left) and 1 MeV (right)
varying $m_{\rm DM}=$ 5 keV – 5 MeV with a fixed mediator mass, $m_{X}=700$
keV.
From Eq. (4), we can calculate the BDM flux induced by neutrinos from the Sun
by taking $|\overrightarrow{x}-\overrightarrow{y}|=D_{\odot}$. Even though the
Sun provides the largest neutrino flux at Earth, only a small volume of the
nearby DM halo contributes to this BDM flux. Therefore, we need to consider
the entire stellar contribution in the Milky Way by convolving Eq. (4) with
the stellar distribution $n_{\rm star}(\overrightarrow{y})$:
$\displaystyle\frac{d\Phi_{\rm DM}}{dK_{\rm DM}}=\int
d^{3}\overrightarrow{y}n_{\rm star}(\overrightarrow{y})\frac{d\Phi^{(1)}_{\rm
DM}(\overrightarrow{y})}{dK_{\rm DM}}\,.$ (8)
Here we assume that the stars are distributed within the galactic disk, shown
in the top panel of Fig. 1, with radius $R\leq 20$ kpc and thickness $|h|\leq
1$ kpc. Using the observations of Ref. [28] and integrating over $h$, the
stellar distribution on the 2-dimensional galactic disk is given by
$\displaystyle n_{\rm star}(R)\simeq\tilde{f}_{2}\times 1.2\times
10^{11}/(R/{\rm kpc})^{3}~{}[{\rm kpc^{-2}}]\,,$ (9)
where the factor $\tilde{f}_{2}$ accounts for the uncertainties from detailed
structures of the Milky Way, e.g. spiral arms and density fluctuations.
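Since the $1/R^{3}$ profile of Eq. (9) diverges toward the Galactic centre,
any integral over the disk is sensitive to the inner cutoff. The sketch below
(with $\tilde{f}_{2}=1$ and hypothetical cutoff radii, not values from the
paper) makes this sensitivity explicit:

```python
import math

def n_star(R):
    """Stellar surface density on the Galactic disk, Eq. (9), in kpc^-2
    (uncertainty factor f2 set to 1)."""
    return 1.2e11 / R**3

def star_count(r_min, r_max=20.0, steps=100000):
    """Integrate 2*pi*R*n_star(R) dR over the disk by the midpoint rule."""
    dr = (r_max - r_min) / steps
    return sum(2.0 * math.pi * r * n_star(r) * dr
               for r in (r_min + (i + 0.5) * dr for i in range(steps)))

# The total star count depends strongly on where the profile is cut off:
for r_min in (1.0, 3.0, 8.0):
    print(f"R_min = {r_min} kpc -> {star_count(r_min):.2e} stars")
```

Analytically the integral is $2\pi\times 1.2\times 10^{11}(1/R_{\rm min} -
1/R_{\rm max})$, which the quadrature reproduces.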
The $\nu$BDM fluxes in two $K_{\rm DM}$ regimes are shown in the bottom panels
of Fig. 1. Due to the high stellar and DM number densities around the GC, the
BDM flux contribution from the GC region exceeds that from the vicinity of
Earth. Fig. 2 shows the $\theta$ dependence of the $\nu$BDM flux,
$\frac{1}{\varphi_{\rm BDM}}\frac{d\varphi_{\rm BDM}}{d\theta}$, at Earth,
where $\theta$ is the angle between the $\nu$BDM arrival direction and the GC.
For $K_{\rm DM}\gg m_{\rm DM}$ (right panel), forward scattering
($\bar{\theta}_{0}\simeq 0$) is preferred, so $\nu$BDM from the GC dominates
and thus $\theta\simeq 0$. In the left panel, $K_{\rm DM}\ll m_{\rm DM}$
prefers large-angle scattering ($\bar{\theta}_{0}\simeq 90^{\circ}$), which
enhances the flux for $\theta\gtrsim 40^{\circ}$, originating relatively far
from the GC. The $\theta$ dependence of the $\nu$BDM flux can be used to
determine $m_{\rm DM}$ in the future.
Figure 3: [Top] BDM fluxes by solar neutrinos, cosmic neutrinos, and cosmic
electrons. We assume $\sigma_{\nu{\rm DM}}$ comes from a vector boson $X$
coupling to both DM and leptons ($g_{X}=g_{e}=g_{\nu}$) with $(m_{\rm
DM},m_{X},g_{X}g_{\rm DM})=(5{\rm MeV},700{\rm keV},10^{-6})$. The uncertainty
band for $\nu$BDM corresponds to $0.1\leq\tilde{f}\leq 10$. [Bottom] BDM
fluxes for different $m_{X}$ and $m_{\rm DM}$ with $\tilde{f}=1$. Solid and
dotted lines are $\nu$BDM and cosmic electron BDM fluxes, respectively.
In the top panel of Fig. 3, we compare the BDM fluxes induced by solar
neutrinos, cosmic neutrinos, and cosmic-ray electrons, fixing
$\tilde{f}\equiv\tilde{f}_{1}\cdot\tilde{f}_{2}=1$. The $\nu$BDM flux is three
orders of magnitude larger than that from solar neutrinos, because the latter
is relevant only for DM within a few AU of Earth. The three bumps of the
$\nu$BDM flux correspond to the $pp$, 13N+15O, and 8B production processes of
solar neutrinos [19]. Assuming $g_{e}=g_{\nu}$, the $\nu$BDM flux can be two
to four orders of magnitude larger than that induced by cosmic electrons for
$K_{\rm DM}\lesssim 50~{}{\rm keV}$. This feature is quite robust for other DM
and mediator masses, as shown in the bottom panel.
Several factors can alter our estimates: i) the DM halo profile, especially
around the GC (we take the NFW profile); ii) the variation of the $\nu$ flux
with the type and age of the stars [27]; iii) the star distribution in the
Milky Way. All of these uncertainties are difficult to include in the
calculation. To show the robustness of the results, we conservatively vary
$0.1\lesssim\tilde{f}\lesssim 10$ in Eqs. (4) and (8), depicted as a blue band
in the top panel of Fig. 3.
Attenuation of the BDM flux. The attenuation effect of the cosmic-neutrino
flux scattered by halo DM is taken into account by the exponential factor in
Eq. 4. We estimate the mean free path of cosmic neutrino $d_{\nu}$ by taking
$\sigma_{\nu{\rm DM}}\simeq 10^{-28}-10^{-34}~{}{\rm cm^{2}}$. For the $n_{\rm
DM}\sim({\rm keV}/m_{\rm DM})\times 10^{6}~{}{\rm cm^{-3}}$,
$d_{\nu}\simeq(m_{\rm DM}/{\rm keV})\times(10^{22}-10^{28})~{}{\rm cm}$, which
is larger than the size of the Milky Way and results in negligible effect.
Next, we estimate the mean free path of BDM inside Earth by assuming
$\sigma_{e{\rm DM}}=\sigma_{\nu{\rm DM}}$. For $\sigma_{e{\rm
DM}}=10^{-33}~{}{\rm cm^{2}}$ with electron number density of Earth
$n_{e}\simeq 10^{24}~{}{\rm cm^{-3}}$, the mean free path
$1/(n_{e}\cdot\sigma_{e{\rm DM}})\simeq 10^{4}~{}{\rm km}$ is comparable to
the size of Earth. For $\sigma_{e{\rm DM}}=10^{-29}~{}{\rm cm^{2}}$, the BDM
mean free path reduces to $\mathcal{O}({\rm km})$. Most DM direct-detection
detectors are located a few kilometers underground, so the $\nu$BDM signal
is substantially suppressed for $\sigma_{e{\rm DM}}\gtrsim 10^{-29}~{}{\rm
cm^{2}}$. Thus, the attenuation of BDM inside Earth provides upper limits
on experimental sensitivities as shown in Fig. 4.
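The mean-free-path estimates above follow directly from $d=1/(n\sigma)$. A minimal back-of-the-envelope sketch (our own illustration, not the authors' code) reproducing the quoted numbers:

```python
def mean_free_path_cm(n_per_cm3, sigma_cm2):
    """Mean free path d = 1 / (n * sigma), in cm."""
    return 1.0 / (n_per_cm3 * sigma_cm2)

# Cosmic neutrinos on halo DM: n_DM ~ (keV / m_DM) * 1e6 cm^-3.
m_dm_kev = 1.0                     # assume m_DM = 1 keV
n_dm = (1.0 / m_dm_kev) * 1e6      # cm^-3
d_nu_min = mean_free_path_cm(n_dm, 1e-28)  # strongest coupling: ~1e22 cm
d_nu_max = mean_free_path_cm(n_dm, 1e-34)  # weakest coupling:   ~1e28 cm
print(f"d_nu range: {d_nu_min:.0e} - {d_nu_max:.0e} cm")

# BDM inside Earth: electron number density n_e ~ 1e24 cm^-3.
n_e = 1e24
d_bdm_km = mean_free_path_cm(n_e, 1e-33) / 1e5  # cm -> km, ~1e4 km
print(f"BDM mean free path for sigma = 1e-33 cm^2: {d_bdm_km:.0f} km")
```

Both results match the text: $d_{\nu}$ far exceeds the size of the Milky Way, while the in-Earth BDM mean free path becomes comparable to the Earth's size for $\sigma_{e{\rm DM}}\simeq 10^{-33}~{\rm cm^{2}}$.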
## III Experimental sensitivities
To estimate experimental sensitivities, we use two approaches to DM models:
i) heavy mediator, where the interactions can be described by effective cross
sections $\sigma^{\rm eff}_{\nu{\rm DM}}$ and $\sigma^{\rm eff}_{e{\rm DM}}$;
ii) light mediator, where the $X$ boson from the gauged $U(1)_{L_{e}-L_{i}}$
couples to DM and leptons.
For approach i), the differential cross section is defined as
$\displaystyle\frac{d\sigma^{\rm eff}_{\nu{\rm DM},e{\rm DM}}}{dK_{\rm
DM}}\equiv\frac{\sigma^{\rm eff}_{\nu{\rm DM},e{\rm DM}}}{K^{\rm max}_{\rm
DM}-K^{\rm min}_{\rm DM}}\,.$ (10)
On the other hand, for the $U(1)_{L_{e}-L_{i}}$ model, the neutrino-DM
scattering cross section is given by [29]
$\displaystyle\frac{d\sigma_{\nu{\rm DM}}}{dK_{\rm DM}}=\frac{(g_{X}g_{\rm
DM})^{2}}{4\pi}\frac{2m_{\rm DM}(m_{\nu}+K_{\nu})^{2}-K_{\rm
DM}\left[(m_{\nu}+m_{\rm DM})^{2}+2m_{\rm DM}K_{\nu}\right]+m_{\rm
DM}K^{2}_{\rm DM}}{(2m_{\nu}K_{\nu}+K^{2}_{\nu})(2m_{\rm DM}K_{\rm
DM}+m^{2}_{X})^{2}}\,.$ (11)
For $K_{\rm DM}\simeq\mathcal{O}({\rm keV})$ and $m_{\rm DM}\simeq
m_{X}\simeq\mathcal{O}({\rm MeV})$, it makes $d\sigma_{\nu{\rm DM}}/dK_{\rm
DM}$ almost independent of $K_{\rm DM}$.
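This near-flatness can be verified numerically. The sketch below (our own illustration; couplings set to one, $m_{\nu}\simeq 0$, overall units dropped) evaluates the shape of Eq. (11) at $K_{\rm DM}=1$ and $10$ keV:

```python
import math

def dsigma_dK(K_dm, K_nu, m_dm, m_x, m_nu=0.0, g=1.0):
    """Shape of Eq. (11); masses and energies in MeV, overall units dropped."""
    num = (2.0 * m_dm * (m_nu + K_nu) ** 2
           - K_dm * ((m_nu + m_dm) ** 2 + 2.0 * m_dm * K_nu)
           + m_dm * K_dm ** 2)
    den = (2.0 * m_nu * K_nu + K_nu ** 2) * (2.0 * m_dm * K_dm + m_x ** 2) ** 2
    return g ** 2 / (4.0 * math.pi) * num / den

# K_DM = 1 keV vs 10 keV, with K_nu = 1 MeV and m_DM = m_X = 1 MeV:
lo = dsigma_dK(K_dm=1e-3, K_nu=1.0, m_dm=1.0, m_x=1.0)
hi = dsigma_dK(K_dm=1e-2, K_nu=1.0, m_dm=1.0, m_x=1.0)
print(hi / lo)  # within a few percent of 1: nearly flat in K_DM
```

The $K_{\rm DM}$ dependence enters only through terms suppressed by $K_{\rm DM}/m_{X}^{2}$ and $K_{\rm DM}/K_{\nu}$, which is why the ratio stays close to one over this range.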
Figure 4: $\nu$BDM contributions to the XENON1T electron recoil, assuming
$\sigma^{\rm eff}_{\nu{\rm DM}}=\sigma^{\rm eff}_{e{\rm DM}}$. The
$1\sigma$ (green) and $2\sigma$ (white) regions are obtained from the $\chi^{2}$
analysis, and the gray-shaded region is excluded at more than $2\sigma$. The
expected sensitivities of other underground detectors are depicted: Borexino [30],
PandaX [31], XENONnT [32], and JUNO [33]. For comparison, existing limits are
shown together: CDMS HVeV [34], DAMIC [35], EDELWEISS [36], and SENSEI [37].
The cosmic-electron-BDM constraints from Super-K and Hyper-K [14] are also shown.
We perform the model-independent $\chi^{2}$ analysis for the effective cross
section $(m_{\rm DM},\sigma^{\rm eff}_{\nu{\rm DM}}=\sigma^{\rm eff}_{e{\rm
DM}})$ in Fig. 4. There are five disconnected $1\sigma$ regions for the
XENON1T excess [25], which originate from the three bumps of the $\nu$BDM flux
spectrum in Fig. 3. The $2\sigma$ exclusion region is gray-shaded. The
$\nu$BDM provides a stringent constraint on $\sigma^{\rm eff}_{\nu{\rm
DM}}=\sigma^{\rm eff}_{e{\rm DM}}$ in the unexplored small-mass region $m_{\rm
DM}\lesssim{\rm MeV}$, compared with the current limits from DM direct
detection experiments including CDMS HVeV [34], DAMIC [35], EDELWEISS [36],
and SENSEI [37].
We evaluate the sensitivities of $\nu$BDM with other current (Borexino [30],
PandaX [31]) and future experiments (XENONnT [32], JUNO [33]). To estimate the
sensitivities, we take the four ton-year exposure for XENONnT and 20 kton-year
exposure for JUNO assuming no excess above the expected background and
dominance of statistical uncertainty. Borexino and JUNO have higher energy
thresholds, above 100 keV, but huge statistics. JUNO has the best sensitivity for
$m_{\rm DM}\lesssim 0.5~{}{\rm MeV}$, while XENON1T/nT are better than JUNO
for $m_{\rm DM}\gtrsim 0.5~{}{\rm MeV}$. PandaX has a slightly weaker limit
due to its smaller exposure of 0.276 ton-year, compared to XENON1T's 0.65 ton-year.
For $\sigma_{e{\rm DM}}\gtrsim 10^{-29}~{}{\rm cm^{2}}$, Earth's crust
attenuates the BDM flux; specifically, XENON1T and XENONnT [24] are located
underground at a depth of 3600 meter water equivalent (m.w.e.), Borexino
[38] is at 3800 m.w.e., and PandaX [39] is shielded by a 2400 m marble
overburden ($\sim 6800$ m.w.e.). The shallowest detector, JUNO [33], located
700 m underground ($\sim 2000$ m.w.e.), retains sensitivity up to the largest
cross sections, $\sigma_{e{\rm DM}}\simeq 10^{-28}~{}{\rm cm^{2}}$.
## IV Discussions
The flux of cosmic-neutrino-boosted DM ($\nu$BDM) is substantially larger
than that of cosmic-electron-boosted DM, so it contributes
dominantly in direct-detection experiments on Earth. Due to the distributions
of the neutrino sources in the Milky Way and of the dark matter in the halo, the
angular distribution of the $\nu$BDM is kinematically correlated with the DM
mass. Therefore, a precise measurement of directional information helps to
determine the DM mass.
The existing underground detectors probe the parameter region of neutrino-DM
interaction and electron-DM interaction in $10^{-34}~{}{\rm
cm^{2}}\lesssim\sigma_{\nu{\rm DM}}=\sigma_{e{\rm DM}}\lesssim 10^{-28}~{}{\rm
cm^{2}}$ with $1~{}{\rm keV}\lesssim m_{\rm DM}\lesssim 100~{}{\rm MeV}$ based
on the effective cross section approach. Since the DM flux is enhanced by
the neutrino boost, we find parameter regions that can explain the recent
XENON1T anomaly (see Fig. 4). However, they remain in tension with other DM searches.
Finally, we discuss various directions for refining the current study.
Here we assumed that the nuclear activity inside each star is on average
the same as in our Sun, so that the neutrino fluxes from all stars are
similar. Obviously, this is a crude estimate, and the actual neutrino fluxes
differ from star to star. Also, the GC region has the largest population of
main-sequence stars and red giants [40, 41], which enhances the
$\tilde{f}_{1}\cdot\tilde{f}_{2}$ factor above unity [27]. Last but not least,
we point out potential modifications due to extragalactic neutrinos.
Even though the extragalactic contribution to the neutrino flux is subdominant in
the energy range relevant for $\nu$BDM [18], it can modify, e.g., the spatial
and kinetic distributions of $\nu$BDM. All these improvements are
reserved for future work.
## Acknowledgments
The work is supported in part by Basic Science Research Program through the
National Research Foundation of Korea (NRF) funded by the Ministry of
Education, Science and Technology [NRF-2018R1A4A1025334, NRF-2019R1A2C1089334
(SCP), NRF-2019R1C1C1005073 (JCP) and NRF-2020R1I1A1A01066413 (PYT)].
## References
* Goodman and Witten [1985] M. W. Goodman and E. Witten, Phys. Rev. D31, 3059 (1985).
* Battaglieri _et al._ [2017] M. Battaglieri _et al._ , (2017), arXiv:1707.04591 [hep-ph] .
* Kim _et al._ [2020] D. Kim, J.-C. Park, K. C. Fong, and G.-H. Lee, (2020), arXiv:2002.07821 [hep-ph] .
* Belanger and Park [2012] G. Belanger and J.-C. Park, JCAP 1203, 038, arXiv:1112.4491 [hep-ph] .
* Agashe _et al._ [2014] K. Agashe, Y. Cui, L. Necib, and J. Thaler, JCAP 1410 (10), 062, arXiv:1405.7370 [hep-ph] .
* Berger _et al._ [2015] J. Berger, Y. Cui, and Y. Zhao, JCAP 1502 (02), 005, arXiv:1410.2246 [hep-ph] .
* Kong _et al._ [2015] K. Kong, G. Mohlabeng, and J.-C. Park, Phys. Lett. B743, 256 (2015), arXiv:1411.6632 [hep-ph] .
* Kim _et al._ [2017] D. Kim, J.-C. Park, and S. Shin, Phys. Rev. Lett. 119, 161801 (2017), arXiv:1612.06867 [hep-ph] .
* Giudice _et al._ [2018] G. F. Giudice, D. Kim, J.-C. Park, and S. Shin, Phys. Lett. B780, 543 (2018), arXiv:1712.07126 [hep-ph] .
* D’Eramo and Thaler [2010] F. D’Eramo and J. Thaler, JHEP 06, 109, arXiv:1003.5912 [hep-ph] .
* Bringmann and Pospelov [2019] T. Bringmann and M. Pospelov, Phys. Rev. Lett. 122, 171801 (2019), arXiv:1810.10543 [hep-ph] .
* Ema _et al._ [2019] Y. Ema, F. Sala, and R. Sato, Phys. Rev. Lett. 122, 181802 (2019), arXiv:1811.00520 [hep-ph] .
* Cappiello _et al._ [2019] C. V. Cappiello, K. C. Y. Ng, and J. F. Beacom, Phys. Rev. D99, 063004 (2019), arXiv:1810.07705 [hep-ph] .
* Cappiello and Beacom [2019] C. Cappiello and J. F. Beacom, Phys. Rev. D 100, 103011 (2019), arXiv:1906.11283 [hep-ph] .
* Dent _et al._ [2020] J. B. Dent, B. Dutta, J. L. Newstead, and I. M. Shoemaker, Phys. Rev. D 101, 116007 (2020), arXiv:1907.03782 [hep-ph] .
* Cho _et al._ [2020] W. Cho, K.-Y. Choi, and S. M. Yoo, (2020), arXiv:2007.04555 [hep-ph] .
* Jho _et al._ [2020] Y. Jho, J.-C. Park, S. C. Park, and P.-Y. Tseng, Phys. Lett. B811, 135863 (2020), arXiv:2006.13910 [hep-ph] .
* Vitagliano _et al._ [2020] E. Vitagliano, I. Tamborra, and G. Raffelt, Rev. Mod. Phys. 92, 45006 (2020), arXiv:1910.11878 [astro-ph.HE] .
* Bahcall _et al._ [2005] J. N. Bahcall, A. M. Serenelli, and S. Basu, Astrophys. J. Lett. 621, L85 (2005), arXiv:astro-ph/0412440 [astro-ph] .
* Billard _et al._ [2014] J. Billard, L. Strigari, and E. Figueroa-Feliciano, Phys. Rev. D 89, 023524 (2014), arXiv:1307.5458 [hep-ph] .
* Vitagliano _et al._ [2017] E. Vitagliano, J. Redondo, and G. Raffelt, JCAP 12, 010, arXiv:1708.02248 [hep-ph] .
* Rajpoot [1989] S. Rajpoot, Phys. Rev. D 40, 2421 (1989).
* He _et al._ [1991] X. He, G. C. Joshi, H. Lew, and R. Volkas, Phys. Rev. D 43, 22 (1991).
* Aprile _et al._ [2017] E. Aprile _et al._ (XENON), Eur. Phys. J. C77, 881 (2017), arXiv:1708.07051 [astro-ph.IM] .
* Aprile _et al._ [2020a] E. Aprile _et al._ (XENON), (2020a), arXiv:2006.09721 [hep-ex] .
* Boschini _et al._ [2018] M. J. Boschini _et al._ , Astrophys. J. 854, 94 (2018), arXiv:1801.04059 [astro-ph.HE] .
* Farag _et al._ [2020] E. Farag, F. X. Timmes, M. Taylor, K. M. Patton, and R. Farmer 10.3847/1538-4357/ab7f2c (2020), arXiv:2003.05844 [astro-ph.SR] .
* de Jong _et al._ [2010] J. T. A. de Jong, B. Yanny, H.-W. Rix, A. E. Dolphin, N. F. Martin, and T. C. Beers (SDSS), Astrophys. J. 714, 663 (2010), arXiv:0911.3900 [astro-ph.GA] .
* Cao _et al._ [2020] Q.-H. Cao, R. Ding, and Q.-F. Xiang, (2020), arXiv:2006.12767 [hep-ph] .
* Bellini _et al._ [2011] G. Bellini _et al._ , Phys. Rev. Lett. 107, 141302 (2011), arXiv:1104.1816 [hep-ex] .
* Zhou _et al._ [2020] X. Zhou _et al._ (PandaX-II), (2020), arXiv:2008.06485 [hep-ex] .
* Aprile _et al._ [2020b] E. Aprile _et al._ (XENON), (2020b), arXiv:2007.08796 [physics.ins-det] .
* An _et al._ [2016] F. An _et al._ (JUNO), J. Phys. G43, 030401 (2016), arXiv:1507.05613 [physics.ins-det] .
* Agnese _et al._ [2018] R. Agnese _et al._ (SuperCDMS), Phys. Rev. Lett. 121, 051301 (2018), [erratum: Phys. Rev. Lett.122,no.6,069901(2019)], arXiv:1804.10697 [hep-ex] .
* Aguilar-Arevalo _et al._ [2019] A. Aguilar-Arevalo _et al._ (DAMIC), Phys. Rev. Lett. 123, 181802 (2019), arXiv:1907.12628 [astro-ph.CO] .
* Arnaud _et al._ [2020] Q. Arnaud _et al._ (EDELWEISS), (2020), arXiv:2003.01046 [astro-ph.GA] .
* Barak _et al._ [2020] L. Barak _et al._ (SENSEI), (2020), arXiv:2004.11378 [astro-ph.CO] .
* Back _et al._ [2012] H. Back _et al._ (Borexino), JINST 7, P10018, arXiv:1207.4816 [physics.ins-det] .
* Cao _et al._ [2014] X. Cao _et al._ (PandaX), Sci. China Phys. Mech. Astron. 57, 1476 (2014), arXiv:1405.2882 [physics.ins-det] .
* Robin _et al._ [2012] A. C. Robin, D. J. Marshall, M. Schultheis, and C. Reyle, Astron. Astrophys. 538, A106 (2012), arXiv:1111.5744 [astro-ph.GA] .
* Valenti, E. _et al._ [2016] Valenti, E., Zoccali, M., Gonzalez, O. A., Minniti, D., Alonso-García, J., Marchetti, E., Hempel, M., Renzini, A., and Rejkuba, M., A&A 587, L6 (2016).
# Enquire One’s Parent and Child Before Decision: Fully Exploit Hierarchical
Structure for Self-Supervised Taxonomy Expansion
Suyuchen Wang (Mila & DIRO, Université de Montréal, Montréal, Québec, Canada),
Ruihui Zhao (Tencent Jarvis Lab, Shenzhen, Guangdong, China), Xi Chen (Tencent
Jarvis Lab, Shenzhen, Guangdong, China), Yefeng Zheng (Tencent Jarvis Lab,
Shenzhen, Guangdong, China), and Bang Liu (Mila & DIRO, Université de Montréal,
Montréal, Québec, Canada)
(2021)
###### Abstract.
Taxonomy is a hierarchically structured knowledge graph that plays a crucial
role in machine intelligence. The taxonomy expansion task aims to find a
position for a new term in an existing taxonomy to capture the emerging
knowledge in the world and keep the taxonomy dynamically updated. Previous
taxonomy expansion solutions neglect valuable information brought by the
hierarchical structure and evaluate the correctness of merely an added edge,
which downgrade the problem to node-pair scoring or mini-path classification.
In this paper, we propose the Hierarchy Expansion Framework (HEF), which fully
exploits the hierarchical structure’s properties to maximize the coherence of
expanded taxonomy. HEF makes use of taxonomy’s hierarchical structure in
multiple aspects: i) HEF utilizes subtrees containing most relevant nodes as
self-supervision data for a complete comparison of parental and sibling
relations; ii) HEF adopts a coherence modeling module to evaluate the
coherence of a taxonomy’s subtree by integrating hypernymy relation detection
and several tree-exclusive features; iii) HEF introduces the Fitting Score for
position selection, which explicitly evaluates both path and level selections
and takes full advantage of parental relations to interchange information for
disambiguation and self-correction. Extensive experiments show that by better
exploiting the hierarchical structure and optimizing taxonomy’s coherence, HEF
vastly surpasses the prior state-of-the-art on three benchmark datasets by an
average improvement of 46.7% in accuracy and 32.3% in mean reciprocal rank.
taxonomy expansion, self-supervised learning, hierarchical structure
WWW ’21: The Web Conference; April 19–23, 2021; Ljubljana, Slovenia
## 1\. Introduction
Figure 1. An illustration of the taxonomy expansion task and the contributions
of the proposed HEF model.
Taxonomy is a particular type of hierarchical knowledge graph that portrays
the hypernym-hyponym relations or “is-A” relations of various concepts and
entities. They have been adopted as the underlying infrastructure of a wide
range of online services in various domains, such as product catalogs for
e-commerce (Karamanolakis et al., 2020; Luo et al., 2020), scientific indices
like MeSH (Lipscomb, 2000), and lexical databases like WordNet (Miller, 1995).
A well-constructed taxonomy can assist various downstream tasks, including web
content tagging (Liu et al., 2019; Peng et al., 2019), web searching (Yin and
Shah, 2010), personalized recommendation (Huang et al., 2019) and helping
users achieve quick navigation on web applications (Hua et al., 2017).
Manually constructing and maintaining a taxonomy is laborious, expensive and
time-consuming. It is also highly inefficient, and detrimental to downstream
tasks, to reconstruct a taxonomy from scratch (Velardi et al., 2013; Gupta et
al., 2017) whenever new terms need to be added. A more realistic
strategy is to insert new terms (“query”) into an existing taxonomy, i.e., the
seed taxonomy, as a child of an existing node in the taxonomy (“anchor”)
without modifying its original structure to best preserve its design. This
problem is called taxonomy expansion (Jurgens and Pilehvar, 2016).
Early taxonomy expansion approaches use terms that do not exist in the seed
taxonomy, paired with their best-suited positions in the seed taxonomy, as
training data (Jurgens and Pilehvar, 2015). However, this suffers from
insufficient training data and a shortage of taxonomy-structure supervision. More recent
solutions adopt self-supervision and try to exploit the information of nodes
in the seed taxonomy (seed nodes) to perform node pair matching (Shen et al.,
2020) or classification along mini paths in the taxonomy (Yu et al., 2020).
However, these approaches do not fully utilize the characteristics of the
taxonomy’s hierarchical structure, and they neglect the coherence of the extended
taxonomy, which ought to be the core of the taxonomy expansion task. More
specifically, existing approaches do not model a hierarchical structure
identical to the taxonomy. Instead, they use ego-nets (Shen et al., 2020) or
mini-paths (Yu et al., 2020) and feature few or no tree-exclusive information,
making them unable to extract or learn the complete hierarchical design of a
taxonomy. Besides, they do not consider the coherence of a taxonomy. They
attempt to find the most suitable node in a limited subgraph and only evaluate
the correctness of a single edge instead of the expanded taxonomy, which
downgrades the taxonomy expansion task to a hypernymy detection task. Lastly,
their scoring approach regards the anchor node as an individual node without
considering the hierarchical context information. However, the hierarchical
structure provides multi-aspect criteria to evaluate a node, such as its path
or level correctness. The structure also marks the nodes that are most likely
to be wrongly chosen to be a parent in a specific parental relation.
To solve all the stated flaws in previous works, we propose the Hierarchy
Expansion Framework (HEF), which aims to maximize the coherence of the
expanded taxonomy instead of the fitness of a single edge by fully exploiting
the hierarchical structure of a taxonomy for self-supervised training, as well
as modeling and evaluating the structure of taxonomy. HEF’s designs and goals
are illustrated in Fig. 1. Specifically, we make the following contributions.
Firstly, we design an innovative hierarchical data structure for self-
supervision to mimic how humans construct a taxonomy. Relations in a taxonomy
include hypernymy relations along a root-path and similarity among siblings.
To find the most suitable parent node for the query term, human experts need
to compare an anchor node with all its ancestors to distinguish the most
appropriate one and compare the query with its potential siblings to testify
their similarity. For example, to choose the parent for query “black tea” in
the food taxonomy, the most appropriate anchor “tea” can only be selected by
distinguishing it from its ancestors “beverage” and “food”, which are all “black
tea”’s hypernyms, as well as by comparing the query “black tea” with “tea”’s
children, such as “iced tea” and “oolong”, to guarantee similarity among siblings.
Thus, we design a new structure called “ego-tree” for self-supervision, which
contains all ancestors and a sample of children of a node for taxonomy-structure
learning. Our ego-tree incorporates richer topological context information for
attaching a query term to a candidate parent with minimal computation cost
compared to previous approaches based on node pair matching or path
information.
Secondly, we design a new modeling strategy to perform explicit ego-tree
coherence modeling apart from the traditional node-pair hypernymy detection.
Instead of merely modeling the correctness of the added edge, we adopt a more
comprehensive approach to detect whether the anchor’s ego-tree after adding
the query maintains the original design of the seed taxonomy. The design of
taxonomy includes natural hypernymy relations, which needs the representation
of node-pair relations and expert-curated level configurations, such as
species must be placed in the eighth level of biological taxonomy, or adding
one more adjective to a term means exactly one level higher in the e-commerce
taxonomy. We adopt a coherence modeling module to detect the two aspects of
coherence: i) For natural hypernymy relations, we adopt a hypernymy detection
module to represent the relation between the query and each node in the
anchor’s ego-tree. ii) For expert-curated designs, we integrate hierarchy-
exclusive features such as embeddings of a node’s absolute level and relative
level to the anchor into the coherence modeling module.
Thirdly, we design a multi-dimensional evaluation to score the coherence of
the expanded taxonomy. The hierarchical structure of a taxonomy allows the model
to evaluate the correctness of path selection and level selection separately.
Moreover, the parental relationships in a hierarchy not only allow the model to
disambiguate the most similar terms but also enable it to self-correct
its level selection by deciding whether the current anchor’s granularity is too high
or too low. We introduce the Fitting Score for the coherence evaluation of the
expanded ego-tree by using a Pathfinder and a Stopper to score path
correctness and level correctness, respectively. The Fitting Score calculation
also disambiguates the most appropriate anchor from its parent and children,
and self-corrects its level selection by bringing the level suggestions from the
anchor’s parent and one of its children into consideration. The Fitting
Score’s optimization adopts a self-supervised multi-task training paradigm for
the Pathfinder and Stopper, which automatically generates training data from
the seed taxonomy to utilize its information fully.
We conduct extensive evaluations based on three benchmark datasets to compare
our method with state-of-the-art baseline approaches. The results suggest that
the proposed HEF model significantly surpasses the previous solutions on all
three datasets by an average improvement of 46.7% in accuracy and 32.3% in
mean reciprocal rank. A series of ablation studies further demonstrate that
HEF can effectively perform the taxonomy expansion task.
## 2\. Related Work
Taxonomy Construction. Taxonomy construction aims to create a tree-structured
taxonomy with a set of terms (such as concepts and entities) from scratch,
integrating hypernymy discovery and tree structure alignment. It can be
further separated into two subdivisions. The first focuses on topic-based
taxonomy, where each node is a cluster of several terms sharing the same topic
(Zhang et al., 2018; Shang et al., 2020b). The other subdivision tackles the
problem of term-based taxonomy construction, in which each node represents the
term itself (Cocos et al., 2018; Shen et al., 2018; Mao et al., 2018). A
typical pipeline for this task is to extract “is-A” relations with a hypernymy
detection model first using either a pattern-based model (Hearst, 1992;
Agichtein and Gravano, 2000; Jiang et al., 2017; Roller et al., 2018) or a
distributional model (Lin, 1998; Yin and Roth, 2018; Wang et al., 2019; Dash
et al., 2020), then integrate and prune the mined hypernym-hyponym pairs into
a single directed acyclic graph (DAG) or tree (Gupta et al., 2017). More
recent solutions utilize hyperbolic embeddings (Le et al., 2019) or transfer
learning (Shang et al., 2020a) to boost performance.
Taxonomy Expansion. In the taxonomy expansion task, an expert-curated seed
taxonomy like MeSH (Lipscomb, 2000) is provided as both the guidance and the
base for adding new terms. The taxonomy expansion task is a ranking task to
maximize a score of a node and its ground-truth parent in the taxonomy. Wang
et al. (Wang et al., 2014) adopted Dirichlet distribution to model the
parental relations. ETF (Vedula et al., 2018) trained a learning-to-rank
framework with handcrafted structural and semantic features. Arborist (Manzoor
et al., 2020) calculated the ranking score in a bi-linear form and adopted
margin ranking loss. TaxoExpan (Shen et al., 2020) modeled the anchor node by
passing messages from its egonet instead of considering a single node, and
scored by feeding a concatenation of egonet representation and query embedding
to a feed-forward layer. STEAM (Yu et al., 2020) transformed the scoring task
into a classification task on mini-paths and performed model ensemble of three
sub-models processing distributional, contextual, and lexical-syntactic
features, respectively. However, existing approaches mostly neglect the
characteristics of taxonomy’s hierarchical structure and only evaluate the
correctness of a single edge from anchor to query. On the contrary, our method
utilizes the features and relations brought by the hierarchical structure and
aims to enhance the expanded taxonomy’s overall coherence.
Modeling of Tree-Structured Data. Taxonomy expansion involves modeling a tree
or graph structure. Plenty of works have been devoted to extending recurrent
models to tree structures, like Tree-LSTM (Tai et al., 2015). For explicit
tree-structure modeling, previous approaches include modeling the likelihood
of a Bayesian network (Fountain and Lapata, 2012; Wang et al., 2014) or using
graph neural net variants (Shen et al., 2020; Yu et al., 2020). Recently,
Transformers (Vaswani et al., 2017) achieved state-of-the-art performance in
the program translation task by designing a novel positional encoding related
to paths in the tree (Shiv and Quirk, 2019) or merely transforming a tree to
sequence by traversing its nodes (Kim et al., 2020). In our work, we model
tree-structure by a Transformer encoder, which, to the best of our knowledge,
is the first to use the Transformer for taxonomy modeling. We adopt a more
natural setting than (Shiv and Quirk, 2019) by using two different embeddings
for a node’s absolute and relative level to denote positions.
Figure 2. Illustration of the HEF model. Each circle denotes a seed node or a
query node. The “Anchor’s child*” in Fitting Score calculation denotes the
anchor’s child with maximum Pathfinder Score $S_{p}$.
## 3\. Problem Definition
In this section, we provide the formal definition of the taxonomy expansion
task and the explanation of key concepts that will occur in the following
sections.
Definition and Concepts about Taxonomy. A taxonomy
$\mathcal{T}=(\mathcal{N},\mathcal{E})$ is an arborescence that presents
hypernymy relations among a set of nodes. Each node $n\in\mathcal{N}$
represents a term, usually a concept mined from a large corpus online or an
artificially extracted phrase. Each edge
$\left<n_{p},n_{c}\right>\in\mathcal{E}$ points to a node from its most exact
hypernym node, where $n_{p}$ is $n_{c}$’s parent node, and $n_{c}$ is
$n_{p}$’s child node. Since hypernymy relation is transitive (Sang, 2007),
such relation exists not only in node pairs connected by a single edge, but
also in node pairs connected by a path in the taxonomy. Thus, for a node $n$
in the taxonomy, its hypernym set and hyponym set consist of its ancestors
$\mathcal{A}_{n}$, and its descendants $\mathcal{D}_{n}$ respectively.
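Concretely, given the taxonomy as a child-to-parent map, the hypernym set $\mathcal{A}_{n}$ and hyponym set $\mathcal{D}_{n}$ of a node follow from the transitivity of hypernymy. A minimal sketch (names and toy data are ours):

```python
def ancestors(node, parent):
    """A_n: all hypernyms of `node`, following parent links up to the root."""
    out = []
    while node in parent:
        node = parent[node]
        out.append(node)
    return set(out)

def descendants(node, parent):
    """D_n: all hyponyms of `node`, i.e. nodes whose root-path passes through it."""
    return {n for n in parent if node in ancestors(n, parent)}

# Toy taxonomy: food -> beverage -> tea -> {black tea, oolong}
parent = {"beverage": "food", "tea": "beverage",
          "black tea": "tea", "oolong": "tea"}
print(ancestors("black tea", parent))   # {'tea', 'beverage', 'food'}
print(descendants("beverage", parent))  # {'tea', 'black tea', 'oolong'}
```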
Definition of the Taxonomy Expansion Task. Given a seed taxonomy
$\mathcal{T}^{0}=(\mathcal{N}^{0},\mathcal{E}^{0})$ and the set of terms
$\mathcal{C}$ to be added to the seed taxonomy, the model outputs the taxonomy
$\mathcal{T}=(\mathcal{N}^{0}\cup\mathcal{C},\mathcal{E}^{0}\cup\mathcal{R})$,
where $\mathcal{R}$ is the newly added relations from seed nodes in
$\mathcal{N}^{0}$ to new terms in $\mathcal{C}$. More specifically, during the
inference phase of a taxonomy expansion model, when given a query node
$q\in\mathcal{C}$, the model finds its best-suited parent node by iterating
each node in the seed taxonomy as an anchor node $a\in\mathcal{N}^{0}$,
calculating a score $f(a,q)$ representing the suitability for adding the edge
$\left<a,q\right>$, and deciding $q$’s parent $p_{q}$ in the taxonomy by
$p_{q}=\mathop{\arg\max}_{a\in\mathcal{N}^{0}}{f(a,q)}$.
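The inference phase above can be sketched in a few lines: for each query, score every seed node as a candidate anchor and attach the query to the argmax. The scorer below is a toy placeholder, not the paper's Fitting Score:

```python
def expand_taxonomy(seed_nodes, queries, f):
    """Return the new edges R: one (parent, query) pair per query term."""
    new_edges = []
    for q in queries:
        p_q = max(seed_nodes, key=lambda a: f(a, q))  # argmax over anchors
        new_edges.append((p_q, q))
    return new_edges

# Toy scorer: prefer the anchor whose surface name is a suffix of the query.
def f(anchor, query):
    return len(anchor) if query.endswith(anchor) else 0

seed = ["food", "beverage", "tea"]
print(expand_taxonomy(seed, ["black tea"], f))  # [('tea', 'black tea')]
```

Note that each query is placed independently against the seed taxonomy, matching the task definition in which the original structure is never modified.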
Accessible External Resources. As a term’s surface name is usually
insufficient to convey the semantic information for hypernymy relationship
detection, previous research usually utilizes term definitions (Jurgens and
Pilehvar, 2016; Shen et al., 2020) or related web pages (Wang et al., 2014;
Kozareva and Hovy, 2010) to learn term representations. Besides, existing
hypernymy detection solutions usually use large external corpora to discover
lexical or syntactic patterns (Shwartz et al., 2016; Yu et al., 2020). As for
the SemEval-2016 Task 13 datasets (Bordea et al., 2016) used for our model’s
evaluation, utilizing the WordNet (Miller, 1995) definitions is allowed by the
original task, which guarantees a fair comparison with previous solutions.
## 4\. The Hierarchy Expansion Framework
In this section, we introduce the design of the Hierarchy Expansion Framework
(HEF). An illustration of HEF is shown in Fig. 2. We first introduce the way
HEF models the coherence of a tree structure, including two components for
node pair hypernymy detection and ego-tree coherence modeling, respectively.
Then, we discuss how HEF further exploits the hierarchical structure for
multi-dimensional evaluation by the modules of Pathfinder and Stopper, and the
self-supervised paradigm to train the model for the Fitting Score calculation.
### 4.1. Node Pair Hypernymy Detection
We first introduce the hypernymy detection module of HEF, which detects the
hypernymy relationships between two terms. Unlike previous approaches that
manually design a set of classical lexical-syntactic features, we accomplish
the task more directly and automatically by expanding the surface names of
terms to their descriptions and utilizing pre-trained language models to
represent the relationship between two terms.
Given a seed term $n\in\mathcal{N}^{0}$ and a query term $q\in\mathcal{N}^{0}$
during training or $q\in\mathcal{C}$ during inference, the hypernymy detection
module outputs a representation $r_{n,q}$ suggesting how well these two terms
form a hypernymy relation. Note that $n$ might not be identical to the anchor
$a$. Since the surface names of terms do not contain sufficient information
for relation detection, we expand the surface names to their descriptions,
enabling the model to better understand the semantics of new terms. We utilize
the WordNet (Miller, 1995) concept definitions for completing this task.
However, WordNet cannot explain all terms in a taxonomy due to its low
coverage. Besides, many terms used in taxonomies are complex phrases like
“adaptation to climate change” or “bacon lettuce tomato sandwich”. Therefore,
we further develop a description generation algorithm descr($\cdot$), which
generates meaningful and domain-related descriptions for a given term based on
WordNet. Specifically, descr($\cdot$) is a dynamic programming algorithm that
tends to integrate tokens into longer and explainable noun phrases. It
describes each noun phrase by the most relative description to the taxonomy’s
root’s surface name for domain relevance. The details are shown in Alg. 2 in
the appendix. The input for hypernymy detection is organized as the input
format of a Transformer:
$D_{n,q}=\left[\mbox{<CLS>}\oplus\texttt{descr(}n\texttt{)}\oplus\mbox{<SEP>}\oplus\texttt{descr(}q\texttt{)}\oplus\mbox{<SEP>}\right],$
where $\oplus$ represents concatenation, and <CLS> and <SEP> are the special tokens
for classification and sentence separation in the Transformer architecture,
respectively.
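Assembling this input is plain string concatenation once the descriptions are available. A sketch (our own illustration; the descriptions here are invented stand-ins for the paper's descr($\cdot$) output):

```python
# Stand-in for the paper's descr(.) algorithm: invented example descriptions.
DESCR = {
    "tea": "a beverage made by steeping tea leaves in hot water",
    "black tea": "tea that is fully oxidized before drying",
}

def build_input(n, q, descr=DESCR.get):
    """Format the node pair <n, q> as '<CLS> descr(n) <SEP> descr(q) <SEP>'."""
    return f"<CLS> {descr(n)} <SEP> {descr(q)} <SEP>"

print(build_input("tea", "black tea"))
```

In practice, the resulting string would be tokenized by the DistilBERT tokenizer, whose own special tokens play the roles of <CLS> and <SEP>.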
As shown in Fig. 2, the hypernymy detection module utilizes a pre-trained
DistilBERT (Sanh et al., 2020), a lightweight variant of BERT (Devlin et al.,
2019), to learn the representations of cross-text relationships. Specifically,
we first encode $D_{n,q}$ by ${\rm DistilBERT}(\cdot)$ with positional
encoding. Then we take the final-layer representation of <CLS> as the
representation of the node pair $\left<n,q\right>$:
$r_{n,q}={\rm DistilBERT}\left(D_{n,q}\right)\left[0\right],$
where index $0$ represents the position of <CLS>’s embedding.
### 4.2. Ego-Tree Coherence Modeling
We further design a coherence modeling module to evaluate the coherence of the
tree structure after attaching the query term $q$ into taxonomy $\mathcal{T}$
as the anchor $a\in\mathcal{N}^{0}$’s child.
There are two different aspects for considering a taxonomy’s coherence: i) the
natural hypernymy relations. Since a node’s ancestors in the taxonomy all hold
hypernymy relations with it, an explicit comparison among a node’s ancestors
is needed to distinguish the most appropriate one; ii) the expert-curated
designs, which act as supplement information for maintaining the overall
structure. Some taxonomies contain latent rules about a node’s absolute or
relative levels in a taxonomy. For example, in the biological taxonomy,
kingdoms and species are all placed in the second and eighth levels,
respectively; in some e-commerce catalog taxonomies, terms that are one level
higher than another term contain exactly one more adjective. Hence, the
coherence modeling module needs to: i) model a subtree with the query as a
node in it, rather than a single node pair, enabling the model to learn the
design of a complete hierarchy; ii) add tree-exclusive features like absolute
level or relative level compared to the anchor to assist learning the expert-
curated designs of the taxonomy.
We design the Ego-tree $\mathcal{H}_{a}$, a novel contextual structure of an
anchor $a$, which consists of all the ancestors and children of $a$ (see Fig.
2). This structure contains all relevant nodes to both anchor and query,
enabling the model to both compare all hypernymy relations along the root path
and detect similarity among query and its potential siblings with minimal
computation cost:
(1) $\mathcal{H}_{a}=\mathcal{A}_{a}\cup\left\\{a\right\\}\cup{\rm
sample\\_child}\left(a\right),$
where $\mathcal{A}_{a}$ is all ancestors of $a$ in the seed taxonomy
$\mathcal{T}^{0}$, and sample_child$(\cdot)$ means sampling at most three
children of the anchor based on surface name similarity. The 3-children
sampling is a trade-off between accuracy and speed, for three potential
siblings are empirically enough for a comprehensive similarity comparison with
the query (especially when these potential siblings are quite different) while
decreasing the computation cost. Since this procedure is to leverage the
similarity brought by a hierarchy’s sibling relations, sampling by surface
name similarity is intuitive and cost-saving given that similar surface names
usually indicate similar terms. The input of the coherence modeling module
includes the anchor’s ego-tree $\mathcal{H}_{a}$ and the query $q$ as the
anchor’s child in $\mathcal{H}_{a}$. For each node $n\in\mathcal{H}_{a}$, we
represent the node pair $\left<n,q\right>$ by the following representations:
* •
Ego-tree representations. The ego-tree representation $r_{n,q}$ is the output
of the hypernymy detection module described in Sec. 4.1. It suggests the node
pair’s relation.
* •
Absolute level embedding. The absolute level embedding
$l_{n,q}=\mbox{AbsLvlEmb}\left(d_{n}\right)$, where $d_{n}$ is the depth of
$n$ in the expanded taxonomy. When $n=q$,
$l_{q,q}=\mbox{AbsLvlEmb}\left(d_{a}+1\right)$. It assists the modeling of the
expert-curated designs about granularity of a certain level.
* •
Relative level embedding. The relative level embedding
$e_{n,q}=\mbox{RelLvlEmb}\left(d_{n}-d_{q}\right)$, where $d_{n}$ is the depth
of $n$ in the expanded taxonomy. It assists the modeling of expert-curated
designs about the cross-level comparison.
* •
Segment embedding. The segment embedding of $\left<n,q\right>$
$g_{n,q}=\mbox{SegEmb}\left(\mbox{segment}\left(n\right)\right)$ distinguishes the
anchor and the query from other nodes in the ego-tree, where:
$\mbox{segment}\left(n\right)=\begin{cases}0,&\mbox{if }n\mbox{ is the
anchor},\\\ 1,&\mbox{if }n\mbox{ is the query},\\\
2,&\mbox{otherwise}.\end{cases}$
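The ego-tree construction of Eq. (1) and the per-node features listed above can be sketched in plain Python. The `difflib`-based surface-name similarity for child sampling is our assumption, since the paper does not specify the measure.

```python
import difflib

def ancestors(parent: dict, a: str) -> list:
    """Walk child->parent pointers from a up to the root."""
    out = []
    while a in parent:
        a = parent[a]
        out.append(a)
    return out

def ego_tree(parent: dict, children: dict, a: str, q: str, k: int = 3) -> list:
    """H_a = ancestors(a) U {a} U up-to-k children most similar to q (Eq. 1)."""
    kids = sorted(children.get(a, []),
                  key=lambda c: difflib.SequenceMatcher(None, c, q).ratio(),
                  reverse=True)[:k]
    return ancestors(parent, a) + [a] + kids

def node_features(n, q, a, depth: dict):
    """(absolute level, relative level, segment id) for node pair <n, q>."""
    d_q = depth[a] + 1                  # query is attached as a's child
    d_n = d_q if n == q else depth[n]
    seg = 0 if n == a else (1 if n == q else 2)
    return d_n, d_n - d_q, seg
```

The integer features returned here are what the absolute-level, relative-level, and segment embedding tables consume.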
The input of the coherence modeling module
$R_{a,q}\in\mathbb{R}^{\left(\mathopen{|}\mathcal{H}_{a}\mathclose{|}+3\right)\times
d}$ is the sum of the above embeddings calculated with the anchor’s ego-tree
and the query, organized as the input of a Transformer:
(2) $R_{a,q}=\left[e_{<CLS>}\oplus
e_{<CLS>}\bigoplus_{n\in\mathcal{H}_{a}\cup{\left\\{q\right\\}}}{\left(r_{n,q}+l_{n,q}+e_{n,q}+g_{n,q}\right)}\right],$
where $d$ is the dimension of embedding, $e_{<CLS>}$ is a randomly initialized
placeholder vector for obtaining the ego-tree’s path and level coherence
representations, and $\oplus$ denotes concatenation.
We implement the coherence modeling module using a Transformer encoder.
Transformers are powerful sequence models, but they lack the positional
information needed to relate nodes in a graph. In a hierarchy like a taxonomy,
however, a node’s level can serve as positional information, which
simultaneously eliminates the positional difference among nodes on the same
level. Transformers can also integrate information from multiple sources by
summing their embeddings, which makes them well suited to modeling tree
structures. In our HEF model, as shown in Fig. 2,
by using two <CLS> tokens in the module’s input, we can obtain two different
representations: $p_{a,q}$ representing the coherence of hypernymy relations
(whether the path is correct), and $d_{a,q}$ representing the coherence of
inter-level granularity (whether the level is correct), evaluating how well
the query fits the current position in the taxonomy in both horizontal and
vertical perspective:
$\displaystyle p_{a,q}$
$\displaystyle=\mbox{TransformerEncoder}\left(R_{a,q}\right)\left[0\right]$
$\displaystyle d_{a,q}$
$\displaystyle=\mbox{TransformerEncoder}\left(R_{a,q}\right)\left[1\right],$
where $0$ and $1$ are the position indexes of the two <CLS> tokens.
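A minimal PyTorch sketch of the coherence modeling module follows. The embedding dimensions mirror Sec. 5.1.3, while the maximum-level cap and the shift applied to relative levels (to keep embedding indices non-negative) are our assumptions.

```python
import torch
import torch.nn as nn

class CoherenceModule(nn.Module):
    """Transformer encoder over [<CLS>, <CLS>, node embeddings] (Eq. 2)."""
    def __init__(self, d_model=768, nhead=6, layers=3, max_level=32):
        super().__init__()
        self.max_level = max_level
        self.cls = nn.Parameter(torch.randn(2, d_model) * 0.02)  # two placeholders
        self.abs_lvl = nn.Embedding(max_level, d_model)          # AbsLvlEmb
        self.rel_lvl = nn.Embedding(2 * max_level, d_model)      # RelLvlEmb, shifted
        self.seg = nn.Embedding(3, d_model)                      # SegEmb
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)

    def forward(self, r, abs_l, rel_l, seg):
        # r: (B, |H_a|+1, d) ego-tree representations r_{n,q} from Sec. 4.1
        x = r + self.abs_lvl(abs_l) + self.rel_lvl(rel_l + self.max_level) + self.seg(seg)
        x = torch.cat([self.cls.unsqueeze(0).expand(x.size(0), -1, -1), x], dim=1)
        h = self.encoder(x)
        return h[:, 0], h[:, 1]  # p_{a,q} (path coherence), d_{a,q} (level coherence)
```

The two returned vectors correspond to the two <CLS> positions used by the Pathfinder and Stopper heads.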
### 4.3. Fitting Score-based Training and Inference
Figure 3. An illustration of the self-supervision data labels for Pathfinder
and Stopper.
The two representations $p_{a,q}$ and $d_{a,q}$ need to be transformed into
scores indicating the fitness of placing the query $q$ on a particular path
and a particular level. Thus, we propose the Pathfinder for path selection and
the Stopper for level selection, as well as a new self-supervised learning
algorithm for training and the Fitting Score calculation for inference.
Pathfinder. The Pathfinder detects whether the query is positioned on the
right path. This module performs a binary classification using $p_{a,q}$; the
ground-truth label of the Pathfinder Score $S_{p}$ is $1$ if and only if $a$
and $q$ are on the same root path:
(3)
$S_{p}\left(a,q\right)=\sigma\left(\mathbf{W}_{p2}\tanh\left(\mathbf{W}_{p1}p_{a,q}+b_{p1}\right)+b_{p2}\right),$
where $\sigma$ is the sigmoid function, and
$\mathbf{W}_{p1},\mathbf{W}_{p2},b_{p1},b_{p2}$ are trainable parameters for
multi-layer perceptrons.
Stopper. The Stopper detects whether the query $q$ is placed on the right
level, i.e., under the most appropriate anchor $a$ on a particular path.
Selecting the right level is not identical to selecting the right path, since
levels are kept in order. The order of nodes on a path enables us to design a
more representative module for further classifying whether the current level
is too high (anchor $a$ is a coarse-grained ancestor of $q$) or too low ($a$
is a descendant of $q$). Thus, the Stopper module uses $d_{a,q}$ to perform a
$3$-class classification: searching for a better anchor node needs to go
Forward, remain Current, or go Backward, in the taxonomy:
$\displaystyle[S_{f}\left(a,q\right),S_{c}\left(a,q\right),S_{b}\left(a,q\right)]=$
(4)
$\displaystyle\mbox{softmax}\left(\mathbf{W}_{s2}\tanh\left(\mathbf{W}_{s1}d_{a,q}+b_{s1}\right)+b_{s2}\right),$
where $\mathbf{W}_{s1},\mathbf{W}_{s2},b_{s1},b_{s2}$ are trainable parameters
for multi-layer perceptrons. Forward Score $S_{f}$, Current Score $S_{c}$ and
Backward Score $S_{b}$ are called Stopper Scores.
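Eqs. (3) and (4) translate into two small heads; the 300-dimensional hidden layers follow Sec. 5.1.3, and the rest is a plain PyTorch sketch.

```python
import torch
import torch.nn as nn

class Pathfinder(nn.Module):
    """Binary path classifier over p_{a,q} (Eq. 3)."""
    def __init__(self, d_model: int = 768, hidden: int = 300):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d_model, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, p):              # -> S_p in (0, 1)
        return self.mlp(p).squeeze(-1)

class Stopper(nn.Module):
    """3-class level classifier over d_{a,q} (Eq. 4)."""
    def __init__(self, d_model: int = 768, hidden: int = 300):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d_model, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 3))

    def forward(self, d):              # -> [S_f, S_c, S_b], summing to 1
        return torch.softmax(self.mlp(d), dim=-1)
```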
Self-Supervised Training. Training the HEF model needs data labels for both
the Pathfinder and the Stopper. The tagging scheme is illustrated in Fig. 3.
There are totally four kinds of Pathfinder-Stopper label combinations since
Pathfinder Score is always $1$ when Stopper Tag is Forward or Current. The
training process of HEF is shown in Alg. 1. Specifically, we sample the ego-
tree of all four types of nodes for a query: $q$’s parent $a$, $a$’s
ancestors, $a$’s descendants and other nodes, as a mini-batch for training the
Pathfinder and Stopper simultaneously.
The optimization of Pathfinder and Stopper can be regarded as a multi-task
learning process. The loss $\mathcal{L}_{q}$ in Alg. 1 is a linear combination
of the loss from Pathfinder and Stopper:
$\displaystyle\mathcal{L}_{q}=$
$\displaystyle\eta\frac{1}{\mathopen{|}\mathcal{X}_{q}\mathclose{|}}\sum_{a\in\mathcal{X}_{q}}{{\rm
BCELoss}\left(\hat{S_{p}}\left(a,q\right),S_{p}\left(a,q\right)\right)}$ (5)
$\displaystyle-\left(1-\eta\right)\frac{1}{\mathopen{|}\mathcal{X}_{q}\mathclose{|}}\sum_{a\in\mathcal{X}_{q}}{\sum_{k\in\left\\{f,c,b\right\\}}{\hat{s_{k}}\left(a,q\right)\log
s_{k}\left(a,q\right)}},$
where ${\rm BCELoss}\left(\cdot\right)$ denotes the binary cross entropy, and
$\eta$ is the weight of multi-task learning.
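A minimal sketch of this multi-task loss, with the Pathfinder term as a binary cross entropy and the Stopper term as a cross entropy; we take the hatted quantities to be ground-truth targets (one-hot for the Stopper), which is our reading of the notation.

```python
import torch
import torch.nn.functional as F

def hef_loss(sp_pred, sp_true, stopper_pred, stopper_true, eta: float = 0.9):
    """Linear combination of Pathfinder BCE and Stopper cross entropy (Eq. 5).

    sp_pred: (|X_q|,) predicted Pathfinder Scores; sp_true: {0,1} targets.
    stopper_pred: (|X_q|, 3) softmax outputs; stopper_true: one-hot tags.
    """
    path = F.binary_cross_entropy(sp_pred, sp_true)
    level = -(stopper_true * torch.log(stopper_pred.clamp_min(1e-8))).sum(-1).mean()
    return eta * path + (1 - eta) * level
```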
Algorithm 1 Self-Supervised Training Process of HEF.
1:procedure TrainEpoch($\mathcal{T}^{0},{\Theta^{0}}$)
2: $\Theta\leftarrow\Theta^{0}$
3: for $q\leftarrow\mathcal{N}^{0}-{\rm root}\left(\mathcal{T}^{0}\right)$ do
$\triangleright$ Root is not used as query
4: $\mathcal{X}_{q}=\\{\\}$ $\triangleright$ Initialize anchor set
5: $p\leftarrow{\rm parent}\left(q\right)$ $\triangleright$ Reference node of
labeling
6: $\mathcal{X}_{q}\leftarrow\mathcal{X}_{q}\cup\left\\{p\right\\}$
$\triangleright$ Ground Truth Parent: $S_{p}=1,S_{c}=1$
7: $\mathcal{X}_{q}\leftarrow\mathcal{X}_{q}\cup{\rm
sample}\left(\mathcal{A}_{p}\right)$ $\triangleright$ Ancestors:
$S_{p}=1,S_{f}=1$
8: $\mathcal{X}_{q}\leftarrow\mathcal{X}_{q}\cup{\rm
sample}\left(\mathcal{D}_{p}\right)$ $\triangleright$ Descendants:
$S_{p}=1,S_{b}=1$
9: $\mathcal{X}_{q}\leftarrow\mathcal{X}_{q}\cup{\rm
sample}\left(\mathcal{N}^{0}-\left\\{p\right\\}-\mathcal{A}_{p}-\mathcal{D}_{p}\right)$
10:$\triangleright$ Other nodes: $S_{p}=0,S_{b}=1$
11: for $a\leftarrow\mathcal{X}_{q}$ do
12: Compute $S_{p}\left(a,q\right)$ using Eqn. 3
13: Compute
$S_{f}\left(a,q\right),S_{c}\left(a,q\right),S_{b}\left(a,q\right)$ using Eqn.
4.3
14: end for
15: Compute $\mathcal{L}_{q}$ with $S_{p},S_{f},S_{c},S_{b}$ using Eqn. 5
16: $\Theta\leftarrow{\rm optimize}\left(\Theta,\mathcal{L}_{q}\right)$
17: end for
18: return $\Theta$
19:end procedure
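The anchor-set sampling of Alg. 1 can be sketched as follows; excluding the query itself (and its subtree) from the pools and the exact traversal order are our assumptions.

```python
import random

def sample_anchor_set(parent, children, nodes, q, n_anc=6, n_desc=8, n_neg=16):
    """Build X_q with self-supervision labels per Alg. 1 and Fig. 3.

    Returns (anchor, S_p label, stopper tag) triples.
    """
    p = parent[q]
    anc, a = [], p
    while a in parent:                        # A_p: ancestors of the parent
        a = parent[a]
        anc.append(a)
    desc, stack = [], list(children.get(p, []))
    while stack:                              # D_p: descendants, skipping q's subtree
        n = stack.pop()
        if n != q:
            desc.append(n)
            stack.extend(children.get(n, []))
    others = [n for n in nodes
              if n != q and n != p and n not in anc and n not in desc]
    batch = [(p, 1, "current")]               # ground-truth parent
    batch += [(a, 1, "forward") for a in random.sample(anc, min(n_anc, len(anc)))]
    batch += [(d, 1, "backward") for d in random.sample(desc, min(n_desc, len(desc)))]
    batch += [(o, 0, "backward") for o in random.sample(others, min(n_neg, len(others)))]
    return batch
```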
Fitting Score-based Inference. During inference, evaluation of an anchor-query
pair $\left<a,q\right>$ should consider both Pathfinder’s path evaluation and
Stopper’s level evaluation. However, instead of merely using $S_{p}$ and
$S_{c}$, the multi-classifying Stopper also enables the HEF model to
disambiguate the most suited anchor from its neighbors (its direct parent and
children) and self-correct its level prediction by exchanging scores with its
neighbors to find the best position for maintaining the taxonomy’s coherence.
Thus, we introduce the Fitting Score function for inference. For a new
query term $q\in\mathcal{C}$, we first obtain the Pathfinder Scores and
Stopper Scores of all node pairs $\left<a,q\right>,a\in\mathcal{N}^{0}$. For
each anchor node $a$, we assign its Fitting Score by multiplying the following
four items:
* •
$a$’s Pathfinder Score: $S_{p}\left(a,q\right)$, which suggests whether $a$ is
on the right path.
* •
$a$’s parent’s Forward Score: $S_{f}\left({\rm
parent}\left(a\right),q\right)$, which distinguishes $a$ and $a$’s parent, and
rectifies $a$’s Current Score. When $a$ is the root node, we assign this item
a small value such as $10^{-4}$, since the first level of the taxonomy is
likely to remain unchanged.
* •
$a$’s Current Score: $S_{c}\left(a,q\right)$, which suggests whether $a$ is on
the right level.
* •
$a$’s child with maximum Pathfinder Score’s Backward Score:
$S_{b}\left(c_{a}^{*},q\right),c_{a}^{*}=\mathop{\arg\max}_{c_{a}\in{\rm
child}\left(a\right)}{S_{p}\left(c_{a},q\right)}$, which distinguishes $a$ and
$a$’s children, and rectifies $a$’s Current Score. Since $a$ might have
multiple children, we pick the child with max Pathfinder Score, for larger
$S_{p}$ indicates a better hypernymy relation to $q$. When $a$ is a leaf node,
we assign this item as the proportion of leaf nodes in the seed taxonomy to
keep its overall design.
The Fitting Score of $\left<a,q\right>$ is given by:
(6) $F\left(a,q\right)=S_{p}\left(a,q\right)\cdot S_{f}\left({\rm
parent}\left(a\right),q\right)\cdot S_{c}\left(a,q\right)\cdot
S_{b}\left(c_{a}^{*},q\right)$ $c_{a}^{*}=\mathop{\arg\max}_{c_{a}\in{\rm
child}\left(a\right)}{S_{p}\left(c_{a},q\right)}.$
The Fitting Score can be computed using ordered $S_{p},S_{f},S_{c},S_{b}$
arrays and the seed taxonomy’s adjacency matrix. Since a tree’s adjacency
matrix is sparse, the time complexity of Fitting Score computation is low.
After calculating the Fitting Scores between all seed nodes and the query, we
select the seed node with the highest Fitting Score as the query’s parent in
the expanded taxonomy:
(7) ${\rm
parent}\left(q\right)\coloneqq\mathop{\arg\max}_{a\in\mathcal{N}^{0}}F\left(a,q\right).$
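Putting Eqs. (6) and (7) together, inference over precomputed scores can be sketched as follows; the score-dictionary layout is our own choice for illustration.

```python
def fitting_score(a, q_scores, parent, children, root, leaf_ratio):
    """Compute F(a, q) from precomputed per-anchor scores (Eq. 6).

    q_scores maps each node n to a dict with keys "p", "f", "c", "b"
    (Pathfinder, Forward, Current, Backward scores for <n, q>).
    """
    s = q_scores[a]
    # Parent's Forward Score; a small constant at the root (Sec. 4.3).
    fwd = 1e-4 if a == root else q_scores[parent[a]]["f"]
    kids = children.get(a, [])
    if kids:
        # Child with the maximum Pathfinder Score contributes its Backward Score.
        best = max(kids, key=lambda c: q_scores[c]["p"])
        back = q_scores[best]["b"]
    else:
        back = leaf_ratio        # proportion of leaf nodes in the seed taxonomy
    return s["p"] * fwd * s["c"] * back

def predict_parent(nodes, q_scores, parent, children, root, leaf_ratio):
    """Eq. (7): pick the anchor with the highest Fitting Score."""
    return max(nodes, key=lambda a: fitting_score(a, q_scores, parent,
                                                  children, root, leaf_ratio))
```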
## 5\. Experiments
In this section, we first introduce our experimental setups, including
datasets, our implementation details, evaluation criteria, and a brief
description of the compared baseline methods. Then, we provide extensive
evaluation results for overall model performance, performance contribution
brought by each design, and sensitivity analysis of the multi-task learning
weight $\eta$ in Equation 5. In-depth visualizations of hypernymy detection
and coherence modeling modules are provided to analyze the model’s inner
behavior. We also provide a case study in the appendix.111The code will be
available at https://github.com/sheryc/HEF.
### 5.1. Experimental Setup
#### 5.1.1. Datasets
We evaluate HEF on three public benchmark datasets retrieved from SemEval-2016
Task 13 (Bordea et al., 2016). This task contains three taxonomies in the
domain of Environment (SemEval16-Env), Science (SemEval16-Sci), and Food
(SemEval16-Food), respectively. The statistics of the benchmark datasets are
provided in Table 1. Note that the original dataset may not form a tree. In
this case, we use a spanning tree of the taxonomy instead of the original
graph to match the problem definition. The pruning process only removes less
than 6% of the total edges, keeping the taxonomy’s information and avoiding
multiple ground truth parents for a single node.
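The paper does not detail how the spanning tree is extracted; one straightforward sketch is a BFS that keeps each node’s first-discovered parent and drops the remaining edges.

```python
from collections import deque

def spanning_tree(edges, root):
    """BFS spanning tree: keep one parent per node, pruning extra edges.

    edges: iterable of (parent, child) hypernymy pairs (may form a DAG).
    Returns a child -> parent dict covering every node reachable from root.
    """
    children = {}
    for p, c in edges:
        children.setdefault(p, []).append(c)
    parent, queue, seen = {}, deque([root]), {root}
    while queue:
        n = queue.popleft()
        for c in children.get(n, []):
            if c not in seen:          # first visit wins; later edges pruned
                seen.add(c)
                parent[c] = n
                queue.append(c)
    return parent
```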
Since HEF and the compared baselines (Shen et al., 2020; Yu et al., 2020) are
all limited to adding new terms without modifying the seed taxonomy, nodes in
the test and validation set can only sample from leaf nodes to guarantee that
the parents of test or validation nodes exist in the seed taxonomy. This is
also the sampling strategy of TaxoExpan (Shen et al., 2020). Following the
previous state-of-the-art model STEAM (Yu et al., 2020), we exclude 20% of the
nodes in each dataset, of which ten nodes of each dataset are separated as the
validation set for early stopping, and the rest as the test set. The nodes not
included in the validation set and test set are seed nodes for self-
supervision in the training phase and potential anchor nodes in the inference
phase. Note that pruning the dataset does not affect the node count, thus the
scale of the dataset remains identical to our baselines’ settings.
Table 1. Statistics of datasets. $\left|N\right|$ and $\left|E_{O}\right|$ are the numbers of nodes and edges in the original datasets, respectively. $D$ is the depth of the taxonomy. We adopt the spanning tree of each dataset, and $\left|E\right|$ is the number of remaining edges.

Dataset | $\left|N\right|$ | $\left|E_{O}\right|$ | $\left|E\right|$ | $D$
---|---|---|---|---
SemEval16-Env | 261 | 261 | 260 | 6
SemEval16-Sci | 429 | 452 | 428 | 8
SemEval16-Food | 1486 | 1576 | 1485 | 8
Table 2. Comparison of the proposed method against state-of-the-art methods. All metrics are presented in percentages (%). Best results for each metric of each dataset are marked in bold. Reported performance is the average of three runs using different random seeds. The MRR of TAXI (Panchenko et al., 2016) is inaccessible since it outputs the whole taxonomy instead of node rankings. The performance of baseline methods is retrieved from (Yu et al., 2020).

Dataset | SemEval16-Env | | | SemEval16-Sci | | | SemEval16-Food | |
Metric | Acc | MRR | Wu&P | Acc | MRR | Wu&P | Acc | MRR | Wu&P
---|---|---|---|---|---|---|---|---|---
BERT+MLP | 11.1 | 21.5 | 47.9 | 11.5 | 15.7 | 43.6 | 10.5 | 14.9 | 47.0
TAXI (Panchenko et al., 2016) | 16.7 | - | 44.7 | 13.0 | - | 32.9 | 18.2 | - | 39.2
HypeNet (Shwartz et al., 2016) | 16.7 | 23.7 | 55.8 | 15.4 | 22.6 | 50.7 | 20.5 | 27.3 | 63.2
TaxoExpan (Shen et al., 2020) | 11.1 | 32.3 | 54.8 | 27.8 | 44.8 | 57.6 | 27.6 | 40.5 | 54.2
STEAM (Yu et al., 2020) | 36.1 | 46.9 | 69.6 | 36.5 | 48.3 | 68.2 | 34.2 | 43.4 | 67.0
HEF | 55.3 | 65.3 | 71.4 | 53.6 | 62.7 | 75.6 | 47.9 | 55.5 | 73.5
#### 5.1.2. Baselines for Comparison
We compare our proposed HEF model with the following baseline approaches:
* •
BERT+MLP: This method utilizes BERT (Devlin et al., 2019) to perform hypernym
detection. This model’s input is the term’s surface name, and the
representation of BERT’s classification token $\langle$CLS$\rangle$ is fed
into a feed-forward layer to score whether the first sequence is the ground-
truth parent.
* •
HypeNet (Shwartz et al., 2016): HypeNet is an LSTM-based hypernym extraction
model that scores a term pair by representing node paths in the dependency
tree.
* •
TAXI (Panchenko et al., 2016): TAXI was the top solution of SemEval-2016 Task
13. It explicitly splits the task into a pipeline of hypernym detection using
substring matching and pattern extraction, and hypernym pruning to avoid
multiple parents.
* •
TaxoExpan (Shen et al., 2020): TaxoExpan is a self-supervised taxonomy
expansion model. The anchor’s representation is modeled by a graph network of
its Egonet with consideration of relative levels, and the parental
relationship is scored by a feed-forward layer. BERT embedding is used as its
input instead of the model’s original configuration.
* •
STEAM (Yu et al., 2020): STEAM is the state-of-the-art self-supervised
taxonomy expansion model, which scores parental relations by ensembling three
classifiers considering graph, contextual, and hand-crafted lexical-syntactic
features, respectively.
#### 5.1.3. Implementation Details
In our setting, the coherence modeling module is a 3-layer, 6-head,
768-dimensional Transformer encoder initialized from Gaussian distribution
$\mathcal{N}\left(0,0.02\right)$. The first hidden layers of Pathfinder and
Stopper are both 300-dimensional. The input to the hypernymy detection module
is either truncated or padded to a length of 64. Each training step processes
the query-ego-tree pair sets of 32 query nodes using gradient accumulation,
with each set containing one ground-truth parent ($S_{p}=1$, $S_{c}=1$), at
most 6 of the ground-truth parent’s ancestors ($S_{p}=1$, $S_{f}=1$), at most
8 of the ground-truth parent’s descendants ($S_{p}=1$, $S_{b}=1$), and at
least 16 other nodes ($S_{p}=0$, $S_{b}=1$). The
hyperparameters above are empirically set, since our algorithm is not
sensitive to the setting of splits. Each dataset is trained for 150 epochs. In
a single epoch, each seed node is trained as the query exactly once. AdamW
(Loshchilov and Hutter, 2019) is used for optimization with $\epsilon$ set to
$1\times 10^{-6}$. A linear warm-up is adopted, with the learning rate rising
linearly from 0 to $5\times 10^{-5}$ over the first 10% of total training
steps and then dropping linearly to 0 at the end of 150 epochs. The
multi-task learning weight $\eta$ is set to
0.9. After each epoch, we validate the model and save the model with the best
performance on the validation set. These hyperparameters are tuned on
SemEval16-Env’s validation set, and are used across all datasets and
experiments unless specified in the ablation studies or sensitivity analysis.
#### 5.1.4. Evaluation Metrics
Assume $k\coloneqq\mathopen{|}\mathcal{C}\mathclose{|}$ to be the term count
of the test set, $\left\\{p_{1},p_{2},\cdots,p_{k}\right\\}$ to be the
predicted parents for test set queries, and
$\left\\{\hat{p_{1}},\hat{p_{2}},\cdots,\hat{p_{k}}\right\\}$ to be the ground
truth parents accordingly. Following the previous solutions (Manzoor et al.,
2020; Vedula et al., 2018; Yu et al., 2020), we adopt the following three
metrics as evaluation criteria.
* •
Accuracy (Acc): It measures the proportion that the predicted parent for each
node in the test set exactly matches the ground truth parent:
$\mbox{Acc}=\mbox{Hit@1}=\frac{1}{k}\sum_{i=1}^{k}{\mathbb{I}\left(p_{i}=\hat{p_{i}}\right)},$
where $\mathbb{I}(\cdot)$ denotes the indicator function,
* •
Mean Reciprocal Rank (MRR): It calculates the average reciprocal rank of each
test node’s ground truth parent:
$\mbox{MRR}=\frac{1}{k}\sum_{i=1}^{k}{\frac{1}{\mbox{rank}\left(\hat{p_{i}}\right)}},$
* •
Wu & Palmer Similarity (Wu&P) (Wu and Palmer, 1994): It is a tree-based
measurement that judges how close the predicted and ground truth parents are
in the seed taxonomy:
$\mbox{Wu\&P}=\frac{1}{k}\sum_{i=1}^{k}{\frac{2\times\mbox{depth}\left(\mbox{LCA}\left(p_{i},\hat{p_{i}}\right)\right)}{\mbox{depth}\left(p_{i}\right)+\mbox{depth}\left(\hat{p_{i}}\right)}},$
where $\mbox{depth}(\cdot)$ is the node’s depth in the seed taxonomy and
$\mbox{LCA}(\cdot,\cdot)$ is the least common ancestor of two nodes.
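The three metrics can be sketched directly from their definitions; the ranking and parent-pointer data layouts are our assumptions.

```python
def accuracy(pred, gold):
    """Hit@1: exact match of predicted and ground-truth parents."""
    return sum(p == g for p, g in zip(pred, gold)) / len(gold)

def mrr(ranked, gold):
    """Mean reciprocal rank; ranked[i] is the anchor ranking for query i."""
    return sum(1.0 / (r.index(g) + 1) for r, g in zip(ranked, gold)) / len(gold)

def wu_p(pred, gold, parent):
    """Wu & Palmer similarity over the seed taxonomy (child -> parent dict)."""
    def path(n):                       # node -> [n, ..., root]
        out = [n]
        while n in parent:
            n = parent[n]
            out.append(n)
        return out
    def depth(n):
        return len(path(n)) - 1
    total = 0.0
    for p, g in zip(pred, gold):
        pp, gp = path(p), path(g)
        lca = next(n for n in pp if n in gp)   # least common ancestor
        total += 2 * depth(lca) / (depth(p) + depth(g))
    return total / len(gold)
```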
### 5.2. Main Results
The performance of HEF is shown in Table 2. HEF achieves the best performance
on all datasets and surpasses previous state-of-the-art models with a
significant improvement on all metrics.
From the table, we get an overview of how taxonomy expansion models evolve
chronologically. The solution of BERT+MLP does not utilize any structural and
lexical-syntactic features of terms, and the insufficiency of information
attributes to its poor results. Models of the first generation like TAXI and
HypeNet utilize lexical, syntactic, or contextual information to achieve
better results, mainly for the task’s hypernymy detection part. However, these
two models do not utilize any of the structural information of taxonomy; hence
they are unable to maintain the taxonomy’s structural design. Models of the
second generation, like TaxoExpan and STEAM, inherit handcrafted lexical-
syntactic features for detecting hypernymy relations. They also utilize the
structural information by self-supervision from seed taxonomy and graph neural
networks on small subgraphs of the taxonomy. However, they neglect the
hierarchical structure of taxonomies, and they do not consider the coherence
of the whole expanded taxonomy. Thus, their usage of structural information is
only an improvement for performing hypernymy detection rather than taxonomy
expansion.
HEF further improves both the previous two generations’ strength by proposing
a new approach that better fits the taxonomy expansion task. Moreover, it
introduces a new goal for the task: to best preserve the taxonomy’s coherence
after expansion. We propose the description generation algorithm to generate
accurate and domain-specific descriptions for complex and rare terms, to
incorporate lexical-syntactic features for hypernymy detection. Aided by
DistilBERT’s power of sentence-pair representation, HEF can mine hypernymy
features more automatically and accurately. HEF also aims to fully exploit the
information brought by the taxonomy’s hierarchical structure to boost
performance. HEF uses ego-trees to perform thorough comparison along root path
and among siblings, injects tree-exclusive features to assist modeling the
expert-curated taxonomy designs and explicitly evaluates both path and level
for the anchor node as well as its parent and child. Experiment results
suggest that these designs are capable of modeling and maximizing the
coherence of taxonomy in different aspects, which results in a vast
performance increase in the taxonomy expansion task.
Figure 4. Sensitivity analysis of model performance (Accuracy, MRR, and Wu&P
on all 3 datasets) under different multi-task learning weight $\eta$.
### 5.3. Ablation Studies
Table 3. Ablation experiment results on the SemEval16-Env dataset. All metrics are presented in percentages (%). For each ablation experiment setting, only the best result is reported.

Abl. Type | Setting | Acc | MRR | Wu&P
---|---|---|---|---
Original | HEF | 55.3 | 65.3 | 71.4
Dataflows | - WordNet Descriptions | 41.5 | 55.3 | 62.6
Dataflows | - Ego-tree + Egonet | 45.3 | 60.6 | 69.9
Dataflows | - Relative Level Emb. | 49.1 | 59.2 | 60.9
Dataflows | - Absolute Level Emb. | 49.1 | 60.6 | 68.4
Scoring Function | Stopper Only | 52.8 | 62.5 | 68.7
Scoring Function | Pathfinder + Current Only | 50.9 | 62.1 | 66.8
Scoring Function | Current Only | 41.5 | 54.7 | 58.6
We discuss how exploiting different characteristics of taxonomy’s hierarchical
structure brings performance increase by a series of ablation studies. We
substitute some designs of HEF in dataflow and score function to a vanilla
setting and rerun the experiments. The results of the ablation studies are
shown in Table 3.
* •
\- WordNet Descriptions: WordNet descriptions are substituted with the term’s
surface name as the hypernymy detection module’s input.
* •
\- Ego-tree + Egonet: the Egonet from TaxoExpan (Shen et al., 2020) is used
instead of the ego-tree for modeling the tree structure.
* •
\- Relative Level Emb.: The relative level embedding for the coherence
modeling module is removed.
* •
\- Absolute Level Emb.: The absolute level embedding for the coherence
modeling module is removed.
* •
Stopper Only: Only the Stopper Scores are used for Fitting Score calculation.
More specifically, $\eta=0$, and the Fitting Score in Equation 6 becomes:
$F\left(a,q\right)=S_{f}\left({\rm parent}\left(a\right),q\right)\cdot
S_{c}\left(a,q\right)\cdot S_{b}\left(c_{a}^{*},q\right),$
$c_{a}^{*}=\mathop{\arg\max}_{c_{a}\in{\rm
child}\left(a\right)}{S_{p}\left(c_{a},q\right)}.$
* •
Pathfinder + Current Only: Only the Pathfinder Score and the Current Score are
used for Fitting Score calculation. More specifically, the Fitting Score in
Equation 6 and the loss in Equation 5 become:
$F\left(a,q\right)=S_{p}\left(a,q\right)\cdot S_{c}\left(a,q\right),$
$\displaystyle\mathcal{L}_{q}=$
$\displaystyle-\eta\frac{1}{\mathopen{|}\mathcal{X}_{q}\mathclose{|}}\sum_{a\in\mathcal{X}_{q}}{{\rm
BCELoss}\left(\hat{S_{p}}\left(a,q\right),S_{p}\left(a,q\right)\right)}$
$\displaystyle-\left(1-\eta\right)\frac{1}{\mathopen{|}\mathcal{X}_{q}\mathclose{|}}\sum_{a\in\mathcal{X}_{q}}{{\rm
BCELoss}\left(\hat{S_{c}}\left(a,q\right),S_{c}\left(a,q\right)\right)}.$
* •
Current Only: Only the Current Score is used for Fitting Score calculation.
This is the scoring strategy identical to prior arts (Shen et al., 2020; Yu et
al., 2020). More specifically, the Fitting Score in Equation 6 and the loss in
Equation 5 become:
$F\left(a,q\right)=S_{c}\left(a,q\right),$
$\mathcal{L}_{q}=-\frac{1}{\mathopen{|}\mathcal{X}_{q}\mathclose{|}}\sum_{a\in\mathcal{X}_{q}}{{\rm
BCELoss}\left(\hat{S_{c}}\left(a,q\right),S_{c}\left(a,q\right)\right)}.$
We notice that changing the design of dataflows causes various deteriorations
in the HEF model’s performance. Substituting WordNet descriptions with a
term’s surface name surprisingly retains a relatively high performance, which
might be attributed to the representation power of the DistilBERT model.
Using Egonets rather than ego-trees for coherence modeling also affects the
performance. Although egonets can capture the local structure of taxonomy,
ego-trees are more capable of modeling the complete construction of a
hierarchy. For the introduction of level embeddings, the results show that
removing one of the two level embeddings for the coherence modeling module
hurts the learning of taxonomy’s design. This is in accordance with the
previous research about the importance of using the information of both
absolute and relative positions in Transformers (Shaw et al., 2018) and
confirms our assumption that taxonomies have intrinsic designs about both
absolute and relative levels.
Changes to the score function bring a smaller negative impact on the model
compared to the dataflow changes, except for the setting of using merely
Current Score. When using only the Current Score, the model loses the ability
to disambiguate with its neighbors and the capacity of directly choosing the
right path, downgrading the problem to be a series of independent scoring
problems like the previous solutions. Adding Backward Score and Forward Score
into Fitting Score calculation allows the model to distinguish the ground
truth from its neighbors, bringing a boost to accuracy. However, without the
Pathfinder, the “Stopper Only” setting only explicitly focuses on choosing the
right level without considering the path and is inferior to the original HEF
model.
However, we observe that although changing several designs of dataflow or
scoring function deteriorates the performance, our method can still surpass
the previous state-of-the-art in Acc and MRR, suggesting that the HEF model
introduces improvements in multiple aspects, which also testifies that
maximizing the taxonomy’s coherence is a better goal for the taxonomy
expansion task.
Figure 5. Illustration of one self-attention head in the last layer of the
hypernymy detection module, showing how the hypernymy detection module detects
hypernymy relations. In this example, the seed node is “tea”, and the query is
“oolong”.
### 5.4. Impact of Multi-Task Learning Weight $\eta$
In this section, we discuss the impact of $\eta$ in Equation 5 through a
sensitivity analysis. Since $\eta$ controls the proportion of loss from the
path-oriented Pathfinder and the level-oriented Stopper, this hyperparameter
affects HEF’s tendency to prioritize path or level selection. The results on
all three datasets are shown in Fig. 4.
From the result, we notice that $\eta$ cannot be set too low, which means that
explicit path selection contributes a lot to the model’s performance. This is
in accordance with the fact that taxonomies are based on hypernymy relations
and selecting the right path is the essential guidance for anchor selection. A
better setting of $\eta$ is $\left[0.4,0.6\right]$. In this setting, the model
tends to balance path and level selections, which results in better
performance. Surprisingly, setting $\eta$ to a high value like 0.9 also brings
a performance boost, and sometimes even achieves the best result. This
phenomenon consistently exists when changing random seeds. However, setting
$\eta$ to 1 means using merely Pathfinder, which cannot distinguish the ground
truth from other nodes and breaks the model. This discovery further testifies
the importance of explicitly evaluating path selection in the taxonomy
expansion task.
### 5.5. Visualization of Self-Attentions
#### 5.5.1. Node Pair Hypernymy Detection Module
To illustrate how the hypernymy detection module works, we show the weights of
one of the attention heads of the hypernymy detection module’s last
Transformer encoder layer in Fig. 5.
By expanding a term to its description, the model is capable of understanding
the term “oolong” by its description, which cannot be achieved by constructing
rule-based lexical-syntactic features since “oolong” and “tea” have no
similarity in their surface names. Furthermore, by adopting the pretrained
DistilBERT, the hypernymy detection module can also discover more latent
patterns such as the relation between “leaves” and “tree”, allowing the model
to discover more in-depth hypernymy relations.
#### 5.5.2. Ego-Tree Coherence Modeling Module
Figure 6. Illustration of one self-attention head in the first layer of the
coherence modeling module, showing how the coherence modeling module finds the
most fitted node in an ego-tree. In this example, the anchor is “herb”, the
query is “oolong”, and the query’s ground truth parent is “tea”.
To illustrate how the coherence modeling module compares the nodes in the ego-
tree to maintain the taxonomy’s coherence, we present the weights of an
attention head of the module’s first Transformer encoder layer in Fig. 6.
Since the last layer’s attention mostly focuses on the anchor node (“herb”),
the first layer can better illustrate the model’s comparison among ego-tree
nodes.
Based on our observation, the two <CLS> tokens are capable of finding the
best-suited parent node in the ego-tree even if it is not the anchor. Since the
coherence modeling module utilizes ego-trees for hierarchy modeling, the
coherence modeling module can compare a node with all its ancestors and its
children to find the most suited anchor, which makes the model more robust.
Besides, the coherence modeling module is also able to assign a lower
attention weight to the best-suited parent’s parent node when it is on the
right path, suggesting that the coherence modeling module can achieve both
path-wise and level-wise comparison.
## 6\. Conclusion
We proposed HEF, a self-supervised taxonomy expansion model that fully
exploits the hierarchical structure of a taxonomy for better structure
modeling and coherence maintenance. Previous methods evaluate the anchor as
merely a new edge in an ordinary graph, neglecting the tree structure of the
taxonomy. In contrast, we used extensive experiments to show that evaluating a
tree structure for coherence maintenance, and mining multiple tree-exclusive
features in the taxonomy (hypernymy relations from parent-child relations,
term similarity from sibling relations, absolute and relative levels,
path+level based multi-dimensional evaluation, and disambiguation based on
parent-current-child chains), all bring performance gains. This indicates the
importance of using tree-structure information for the taxonomy expansion
task. We also proposed a framework for injecting these features, introduced
our implementation of the framework, and surpassed the previous state of the
art. We believe that these novel designs and their motivations will not only
benefit the taxonomy expansion task, but also be influential for all tasks
involving hierarchical or tree structure modeling and evaluation. Future work
includes modeling and utilizing these or new tree-exclusive features to boost
other taxonomy-related tasks, and improving the implementation of each module
in HEF.
###### Acknowledgements.
Thanks to everyone who helped me with this paper in the Tencent Jarvis Lab, my
family, and my loved one.
## References
* Agichtein and Gravano (2000) Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting Relations From Large Plain-Text Collections. In _Proceedings of JCDL_. 85–94.
* Bordea et al. (2016) Georgeta Bordea, Els Lefever, and Paul Buitelaar. 2016\. SemEval-2016 Task 13: Taxonomy Extraction Evaluation (TExEval-2). In _Proceedings of the SemEval-2016_. 1081–1091.
* Cocos et al. (2018) Anne Cocos, Marianna Apidianaki, and Chris Callison-Burch. 2018\. Comparing Constraints for Taxonomic Organization. In _Proceedings of NAACL_. 323–333.
* Dash et al. (2020) Sarthak Dash, Md Faisal Mahbub Chowdhury, Alfio Gliozzo, Nandana Mihindukulasooriya, and Nicolas Rodolfo Fauceglia. 2020\. Hypernym Detection Using Strict Partial Order Networks. In _Proceedings of AAAI_. 7626–7633.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding. In _Proceedings of NAACL_. 4171–4186.
* Fountain and Lapata (2012) Trevor Fountain and Mirella Lapata. 2012. Taxonomy Induction Using Hierarchical Random Graphs. In _Proceedings of NAACL_. 466–476.
* Gupta et al. (2017) Amit Gupta, Rémi Lebret, Hamza Harkous, and Karl Aberer. 2017\. Taxonomy Induction Using Hypernym Subsequences. In _Proceedings of CIKM_. 1329–1338.
* Hearst (1992) Marti A. Hearst. 1992\. Automatic Acquisition of Hyponyms From Large Text Corpora. In _Proceedings of ACL_. 539–545.
* Hua et al. (2017) Wen Hua, Zhongyuan Wang, Haixun Wang, Kai Zheng, and Xiaofang Zhou. 2017. Understand Short Texts by Harvesting and Analyzing Semantic Knowledge. _IEEE Transactions on Knowledge and Data Engineering_ (2017), 499–512.
* Huang et al. (2019) Jin Huang, Zhaochun Ren, Wayne Xin Zhao, Gaole He, Ji-Rong Wen, and Daxiang Dong. 2019\. Taxonomy-Aware Multi-Hop Reasoning Networks for Sequential Recommendation. In _Proceedings of WSDM_. 573–581.
* Jiang et al. (2017) Meng Jiang, Jingbo Shang, Taylor Cassidy, Xiang Ren, Lance M. Kaplan, Timothy P. Hanratty, and Jiawei Han. 2017. MetaPAD: Meta Pattern Discovery From Massive Text Corpora. In _Proceedings of KDD_. 877–886.
* Jurgens and Pilehvar (2015) David Jurgens and Mohammad Taher Pilehvar. 2015. Reserating the Awesometastic: An Automatic Extension of the WordNet Taxonomy for Novel Terms. In _Proceedings of NAACL_. 1459–1465.
* Jurgens and Pilehvar (2016) David Jurgens and Mohammad Taher Pilehvar. 2016. SemEval-2016 Task 14: Semantic Taxonomy Enrichment. In _Proceedings of the SemEval-2016_. 1092–1102.
* Karamanolakis et al. (2020) Giannis Karamanolakis, Jun Ma, and Xin Luna Dong. 2020. TXtract: Taxonomy-Aware Knowledge Extraction for Thousands of Product Categories. (2020).
* Kim et al. (2020) Seohyun Kim, Jinman Zhao, Yuchi Tian, and Satish Chandra. 2020\. Code Prediction by Feeding Trees to Transformers. (2020).
* Kozareva and Hovy (2010) Zornitsa Kozareva and Eduard Hovy. 2010. A Semi-Supervised Method to Learn and Construct Taxonomies Using the Web. In _Proceedings of EMNLP_. 1110–1118.
* Le et al. (2019) Matthew Le, Stephen Roller, Laetitia Papaxanthos, Douwe Kiela, and Maximilian Nickel. 2019\. Inferring Concept Hierarchies From Text Corpora via Hyperbolic Embeddings. In _Proceedings of ACL_. 3231–3241.
* Lin (1998) Dekang Lin. 1998\. An Information-Theoretic Definition of Similarity. In _Proceedings of ICML_. 296–304.
* Lipscomb (2000) Carolyn E. Lipscomb. 2000\. Medical Subject Headings (MeSH). _Bulletin of the Medical Library Association_ (2000), 265–266.
* Liu et al. (2019) Bang Liu, Weidong Guo, Di Niu, Chaoyue Wang, Shunnan Xu, Jinghong Lin, Kunfeng Lai, and Yu Xu. 2019\. A User-Centered Concept Mining System for Query and Document Understanding at Tencent. In _Proceedings of KDD_. 1831–1841.
* Loshchilov and Hutter (2019) Ilya Loshchilov and Frank Hutter. 2019. Decoupled Weight Decay Regularization. (2019).
* Luo et al. (2020) Xusheng Luo, Luxin Liu, Yonghua Yang, Le Bo, Yuanpeng Cao, Jinghang Wu, Qiang Li, Keping Yang, and Kenny Q. Zhu. 2020. AliCoCo: Alibaba E-Commerce Cognitive Concept Net. In _Proceedings of SIGMOD_. 313–327.
* Manzoor et al. (2020) Emaad Manzoor, Rui Li, Dhananjay Shrouty, and Jure Leskovec. 2020\. Expanding Taxonomies With Implicit Edge Semantics. In _Proceedings of TheWebConf_. 2044–2054.
* Mao et al. (2018) Yuning Mao, Xiang Ren, Jiaming Shen, Xiaotao Gu, and Jiawei Han. 2018. End-To-End Reinforcement Learning for Automatic Taxonomy Induction. In _Proceedings of ACL_. 2462–2472.
* Miller (1995) George A. Miller. 1995\. WordNet: A Lexical Database for English. _Commun. ACM_ (1995), 39–41.
* Panchenko et al. (2016) Alexander Panchenko, Stefano Faralli, Eugen Ruppert, Steffen Remus, Hubert Naets, Cédrick Fairon, Simone Paolo Ponzetto, and Chris Biemann. 2016. TAXI at SemEval-2016 Task 13: A Taxonomy Induction Method Based on Lexico-Syntactic Patterns, Substrings and Focused Crawling. In _Proceedings of SemEval-2016_. 1320–1327.
* Peng et al. (2019) Hao Peng, Jianxin Li, Senzhang Wang, Lihong Wang, Qiran Gong, Renyu Yang, Bo Li, Philip Yu, and Lifang He. 2019. Hierarchical Taxonomy-Aware and Attentional Graph Capsule RCNNs for Large-Scale Multi-Label Text Classification. _IEEE Transactions on Knowledge and Data Engineering_ (2019).
* Roller et al. (2018) Stephen Roller, Douwe Kiela, and Maximilian Nickel. 2018\. Hearst Patterns Revisited: Automatic Hypernym Detection From Large Text Corpora. In _Proceedings of ACL_. 358–363.
* Sang (2007) Erik Tjong Kim Sang. 2007\. Extracting Hypernym Pairs From the Web. In _Proceedings of ACL_. Association for Computational Linguistics, 165–168.
* Sanh et al. (2020) Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2020. DistilBERT, a Distilled Version of BERT: Smaller, Faster, Cheaper and Lighter. (2020).
* Shang et al. (2020a) Chao Shang, Sarthak Dash, Md. Faisal Mahbub Chowdhury, Nandana Mihindukulasooriya, and Alfio Gliozzo. 2020a. Taxonomy Construction of Unseen Domains via Graph-Based Cross-Domain Knowledge Transfer. In _Proceedings of ACL_. 2198–2208.
* Shang et al. (2020b) Jingbo Shang, Xinyang Zhang, Liyuan Liu, Sha Li, and Jiawei Han. 2020b. NetTaxo: Automated Topic Taxonomy Construction From Text-Rich Network. In _Proceedings of TheWebConf_. 1908–1919.
* Shaw et al. (2018) Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018\. Self-Attention With Relative Position Representations. In _Proceedings of NAACL_. 464–468.
* Shen et al. (2020) Jiaming Shen, Zhihong Shen, Chenyan Xiong, Chi Wang, Kuansan Wang, and Jiawei Han. 2020\. TaxoExpan: Self-Supervised Taxonomy Expansion With Position-Enhanced Graph Neural Network. In _Proceedings of TheWebConf_. 486–497.
* Shen et al. (2018) Jiaming Shen, Zeqiu Wu, Dongming Lei, Chao Zhang, Xiang Ren, Michelle T. Vanni, Brian M. Sadler, and Jiawei Han. 2018\. HiExpan: Task-Guided Taxonomy Construction by Hierarchical Tree Expansion. In _Proceedings of KDD_. 2180–2189.
* Shiv and Quirk (2019) Vighnesh Shiv and Chris Quirk. 2019. Novel Positional Encodings to Enable Tree-Based Transformers. In _Advances in Neural Information Processing Systems 32_. 12081–12091.
* Shwartz et al. (2016) Vered Shwartz, Yoav Goldberg, and Ido Dagan. 2016\. Improving Hypernymy Detection With an Integrated Path-Based and Distributional Method. In _Proceedings of ACL_. Association for Computational Linguistics, 2389–2398.
* Tai et al. (2015) Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015\. Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks. In _Proceedings of ACL/IJCNLP_. 1556–1566.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017\. Attention Is All You Need. In _Advances in Neural Information Processing Systems 30_. 5998–6008.
* Vedula et al. (2018) Nikhita Vedula, Patrick K. Nicholson, Deepak Ajwani, Sourav Dutta, Alessandra Sala, and Srinivasan Parthasarathy. 2018. Enriching Taxonomies With Functional Domain Knowledge. In _Proceedings of SIGIR_. 745–754.
* Velardi et al. (2013) Paola Velardi, Stefano Faralli, and Roberto Navigli. 2013\. OntoLearn Reloaded: A Graph-Based Algorithm for Taxonomy Induction. _Computational Linguistics_ (2013), 665–707.
* Wang et al. (2019) Chengyu Wang, Yan Fan, Xiaofeng He, and Aoying Zhou. 2019\. A Family of Fuzzy Orthogonal Projection Models for Monolingual and Cross-Lingual Hypernymy Prediction. In _Proceedings of WWW_. 1965–1976.
* Wang et al. (2014) Jingjing Wang, Changsung Kang, Yi Chang, and Jiawei Han. 2014\. A Hierarchical Dirichlet Model for Taxonomy Expansion for Search Engines. In _Proceedings of WWW_. 961–970.
* Wu and Palmer (1994) Zhibiao Wu and Martha Palmer. 1994. Verbs Semantics and Lexical Selection. In _Proceedings of ACL_. 133–138.
* Yin and Roth (2018) Wenpeng Yin and Dan Roth. 2018. Term Definitions Help Hypernymy Detection. In _Proceedings of *SEM_. 203–213.
* Yin and Shah (2010) Xiaoxin Yin and Sarthak Shah. 2010. Building Taxonomy of Web Search Intents for Name Entity Queries. In _Proceedings of WWW_. 1001–1010.
* Yu et al. (2020) Yue Yu, Yinghao Li, Jiaming Shen, Hao Feng, Jimeng Sun, and Chao Zhang. 2020\. STEAM: Self-Supervised Taxonomy Expansion With Mini-Paths. In _Proceedings of KDD_. 1026–1035.
* Zhang et al. (2018) Chao Zhang, Fangbo Tao, Xiusi Chen, Jiaming Shen, Meng Jiang, Brian Sadler, Michelle Vanni, and Jiawei Han. 2018\. TaxoGen: Unsupervised Topic Taxonomy Construction by Adaptive Term Embedding and Clustering. In _Proceedings of KDD_. 2701–2709.
## Appendix A Case Study
To understand how different Fitting Score components contribute to HEF’s
performance, we conduct a case study on the SemEval16-Food dataset and show
the detailed results in Table 4.
The first two rows of Table 4 show two cases where HEF successfully predicts
the query’s parent. We can see that the Pathfinder Score and the three Stopper
Scores all contribute to the correct selection, which testifies to the
effectiveness of the Fitting Score design.
The last two rows of Table 4 show cases where HEF fails to select the correct
parent. In the third row, “bourguignon” is described as “reduced red wine”, so
the model attaches it to the node “wine”. However, “bourguignon” is also a
sauce for cooking beef. Such ambiguity distorts the meaning of a term by
assigning it an incorrect description, which hurts the model’s performance. In
the last row, although the description of “hot fudge sauce” contains
“chocolate sauce”, the node “chocolate sauce” still gets a low
Current Score. In HEF, the design of the Stopper Scores enables the model to
self-correct occasionally wrong Current Scores by assigning a larger Forward
Score from a node’s parent and a larger Backward Score from one of the node’s
children. However, since “chocolate sauce” is a leaf node, its child’s
Backward Score is assigned to be the proportion of leaf nodes in the seed
taxonomy, which is 0.07 in the SemEval16-Food dataset. This indicates that
future work includes designing a more reasonable Backward Score function for
leaf nodes to improve the model’s robustness.
## Appendix B Description Generation Algorithm
Algorithm 2 shows the description generation algorithm descr($\cdot$) used in
HEF’s hypernymy detection module. descr($\cdot$) utilizes WordNet descriptions
to generate domain-related term descriptions by dynamic programming. In this
algorithm, WordNetNounDescr($\cdot$) means the set of a concept’s noun
descriptions from WordNet (Miller, 1995), and
CosSimilarity($t,n_{\mbox{root}}$) means calculating the average token cosine
similarity of word vectors between a candidate description $t$ and the surface
name of a taxonomy’s root term $n_{\mbox{root}}$.
Algorithm 2 Description Generation Algorithm for the Hypernymy Detection
Module
1:procedure Descr($n$) $\triangleright$ Input: term $n$
2: $N\leftarrow$split($n$)
3: for $i\leftarrow 0,\cdots,\texttt{length($N$)}$ do
4: $S[i]=0$ $\triangleright$ Initialize score array
5: $C[i]=0$ $\triangleright$ Initialize splitting positions
6: end for
7: for $i\leftarrow 0,\cdots,\texttt{length($N$)$-1$}$ do
8: for $j\leftarrow 0,\cdots,i$ do
9: if len(WordNetNounDescr($N[j:i+1]$))$>0$ then
10: $s_{ij}=\left(i-j+1\right)^{2}+1$ $\triangleright$ Prefer longer concepts
11: else
12: $s_{ij}=1$
13: end if
14: if $S[j]+s_{ij}>S[i+1]$ then
15: $S[i+1]\leftarrow S[j]+s_{ij}$ $\triangleright$ Save max score
16: $C[i+1]=j$ $\triangleright$ Save splitting position
17: end if
18: end for
19: end for
20: $D\leftarrow$“” $\triangleright$ Initialize description
21: $p\leftarrow\texttt{length(}N\texttt{)}$ $\triangleright$ Generate split
pointer
22: while $p\neq-1$ do
23: $D_{WN}=$WordNetNounDescr($N\left[C[p]:p+1\right]$)
24: if len($D_{WN}$)$>0$ then $\triangleright$ Noun or noun phrase
25: $d\leftarrow\mathop{\arg\max}_{t\in
D_{WN}}{{\texttt{CosSimilarity(}}t,n_{\mbox{root}}{\texttt{)}}}$
26: else$\triangleright$ Prep. or adj.
27: $d\leftarrow\texttt{join(}N[C[p]:p+1]\texttt{)}$
28: end if
29: $D\leftarrow d+D$ $\triangleright$ Put new description in the front
30: $p\leftarrow C[p]-1$ $\triangleright$ Go to next split
31: end while
32:end procedure
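The dynamic program above can be sketched in plain Python. The snippet below is a hypothetical illustration, not the authors' code: WordNetNounDescr($\cdot$) is stubbed with a toy dictionary instead of the real WordNet interface, the root-similarity ranking of candidate descriptions is replaced by taking the first candidate, and the index bookkeeping is adjusted slightly so the splits line up.

```python
# Hypothetical sketch of Algorithm 2 (not the authors' code).
DESCR = {  # toy stand-in for WordNetNounDescr(.)
    "oolong": ["tea leaves partially fermented before drying"],
    "tea": ["a beverage made by steeping tea leaves in water"],
    "fudge": ["soft creamy candy"],
    "sauce": ["flavorful relish"],
}

def wordnet_noun_descr(tokens):
    """Stand-in for WordNetNounDescr(.): list of noun descriptions."""
    return DESCR.get(" ".join(tokens), [])

def descr(term):
    """Split `term` into the highest-scoring concept segments (longer
    known concepts preferred quadratically) and join their descriptions."""
    N = term.split()
    n = len(N)
    S = [0] * (n + 1)   # S[i]: best score of the length-i prefix
    C = [0] * (n + 1)   # C[i]: split position achieving S[i]
    for i in range(n):
        for j in range(i + 1):
            # Prefer longer known concepts: score (i - j + 1)^2 + 1
            s_ij = (i - j + 1) ** 2 + 1 if wordnet_noun_descr(N[j:i + 1]) else 1
            if S[j] + s_ij > S[i + 1]:
                S[i + 1] = S[j] + s_ij
                C[i + 1] = j
    # Walk the splits right-to-left and assemble the description
    parts, p = [], n
    while p > 0:
        seg = N[C[p]:p]
        candidates = wordnet_noun_descr(seg)
        # Fall back to the surface form for prepositions/adjectives
        parts.append(candidates[0] if candidates else " ".join(seg))
        p = C[p]
    return " ".join(reversed(parts))

print(descr("hot fudge sauce"))  # hot soft creamy candy flavorful relish
```

A real implementation would replace the stub with WordNet lookups and pick the candidate description whose word vectors are most similar to the taxonomy root, as in Algorithm 2.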
Table 4. Examples of HEF’s prediction, with detailed Fitting Score composition and comparison between the ground truth and the predicted parent. Scores in this table correspond to the node in the same tabular cell with the score.

Query ($q$) | Ground Truth ($\hat{p}$) | Scores | Prediction ($p$) | Scores
---|---|---|---|---
$q$: paddy is rice in the husk either gathered or still in the field | $\hat{p}$: rice is grains used as food either unpolished or more often polished | $S_{p}=0.9997$ | $p$: rice is grains used as food either unpolished or more often polished | $S_{p}=0.9997$
| | $S_{c}=0.4599$ | | $S_{c}=0.4599$
| ${\rm parent}(\hat{p})$: starches is a commercial preparation of starch that is used to stiffen textile fabrics in laundering | $S_{f}=0.9755$ | ${\rm parent}(p)$: starches is a commercial preparation of starch that is used to stiffen textile fabrics in laundering | $S_{f}=0.9755$
$F(\hat{p},q)=0.4483$ | $c_{\hat{p}}^{*}$: white rice is having husk or outer brown layers removed | $S_{b}=0.9995$ | $c_{p}^{*}$: white rice is having husk or outer brown layers removed | $S_{b}=0.9995$
$\hat{p}$’s Ranking: 1 | | | |
$q$: fish meal is ground dried fish used as fertilizer and as feed for domestic livestock | $\hat{p}$: feed is food for domestic livestock | $S_{p}=0.9993$ | $p$: feed is food for domestic livestock | $S_{p}=0.9993$
| | $S_{c}=0.3169$ | | $S_{c}=0.3169$
| ${\rm parent}(\hat{p})$: food is any substance that can be metabolized by an animal to give energy and build tissue | $S_{f}=0.9984$ | ${\rm parent}(p)$: food is any substance that can be metabolized by an animal to give energy and build tissue | $S_{f}=0.9984$
$F(\hat{p},q)=0.3158$ | $c_{\hat{p}}^{*}$: mash is mixture of ground animal feeds | $S_{b}=0.9988$ | $c_{p}^{*}$: mash is mixture of ground animal feeds | $S_{b}=0.9988$
$\hat{p}$’s Ranking: 1 | | | |
$q$: bourguignon is reduced red wine with onions and parsley and thyme and butter | $\hat{p}$: sauce is flavorful relish or dressing or topping served as an accompaniment$\cdots$ | $S_{p}=0.0002$ | $p$: wine is a red as dark as red wine | $S_{p}=0.9997$
| | $S_{c}=0.0001$ | | $S_{c}=0.1399$
| ${\rm parent}(\hat{p})$: condiment is a preparation (a sauce or relish or spice) to enhance flavor or enjoyment | $S_{f}=0.0004$ | ${\rm parent}(p)$: alcohol is any of a series of volatile hydroxyl compounds that are made from hydrocarbons by distillation | $S_{f}=0.9812$
$F(\hat{p},q)=1e-11$ | $c_{\hat{p}}^{*}$: bercy is butter creamed with white wine and shallots and parsley | $S_{b}=0.9997$ | $c_{p}^{*}$: red wine is wine having a red color derived from skins of dark-colored grapes | $S_{b}=0.8784$
$\hat{p}$’s Ranking: 328 | | | |
$q$: hot fudge sauce is hot thick chocolate sauce served hot | $\hat{p}$: chocolate sauce is sauce made with unsweetened chocolate or cocoa$\cdots$ | $S_{p}=0.9471$ | $p$: sauce is flavorful relish or dressing or topping served as an accompaniment$\cdots$ | $S_{p}=0.9995$
| | $S_{c}=9e-5$ | | $S_{c}=0.0172$
| ${\rm parent}(\hat{p})$: sauce is flavorful relish or dressing or topping served as an accompaniment$\cdots$ | $S_{f}=0.9617$ | ${\rm parent}(p)$: condiment is a preparation (a sauce or relish or spice) to enhance flavor or enjoyment | $S_{f}=0.9888$
$F(\hat{p},q)=6e-6$ | $c_{\hat{p}}^{*}$: None | $S_{b}=0.0700$ | $c_{p}^{*}$: lyonnaise sauce is brown sauce with sauteed chopped onions and parsley$\cdots$ | $S_{b}=0.9995$
$\hat{p}$’s Ranking: 20 | | | |
# Xova: Baseline-Dependent Time and Channel Averaging for Radio Interferometry
Marcellin Atemkeng1, Simon Perkins2, Jonathan Kenyon1, Benjamin Hugo2,1, and
Oleg Smirnov1,2
###### Abstract
Xova is a software package that implements baseline-dependent time and channel
averaging on Measurement Set data. The $uv$-samples along a baseline track are
aggregated into a bin until a specified decorrelation tolerance is exceeded.
The degree of decorrelation in the bin correspondingly determines the amount
of channel and timeslot averaging that is suitable for samples in the bin.
This necessarily implies that the number of channels and timeslots varies per
bin and that the output data loses the rectilinear shape of the input data.
1Rhodes University, Makhanda (Grahamstown), Eastern Cape, South Africa
2South African Radio Astronomy Observatory, Cape Town, Western Cape, South
Africa
## 1 Effects of Time and Channel Averaging
Consider $\mathcal{V}_{pq}=\mathcal{V}(\mathbf{u}_{pq}(t,\nu))$ as the
visibility sampled by the baseline $pq$ at time $t$ and frequency $\nu$. An
interferometer is non-ideal in the sense that the measured visibility is the
average of this sampled visibility over the sampling bin, $B_{kr}^{[\Delta
t\Delta\nu]}=[t_{k}-\Delta t/2,t_{k}+\Delta
t/2]\times[\nu_{r}-\Delta\nu/2,\nu_{r}+\Delta\nu/2]$:
$\widetilde{\mathcal{V}}_{pq}=\frac{1}{\Delta
t\Delta\nu}\iint\limits_{B_{kr}^{[\Delta
t\Delta\nu]}}\mathcal{V}(\mathbf{u}_{pq}(t,\nu))\text{d}t\text{d}\nu,$ (1)
where $\Delta t$ and $\Delta\nu$ are the integration intervals. If $\Pi_{pq}$
represents a normalized 2D top-hat window for a baseline $pq$ then Eq. (1) is
equivalent to the infinitesimal integral:
$\widetilde{\mathcal{V}}_{pq}=\iint\limits_{\infty}\Pi_{pq}(t-t_{k},\nu-\nu_{r})\mathcal{V}_{pq}(t,\nu)\text{d}t\text{d}\nu,$
(2)
which is a convolution in the Fourier space, i.e.:
$\displaystyle\widetilde{\mathcal{V}}_{pq}$
$\displaystyle=[\Pi_{pq}\circ\mathcal{V}_{pq}](\mathbf{u}_{pq}(t_{k},\nu_{r}))$
(3) $\displaystyle=\delta_{pqkr}[\Pi_{pq}\circ\mathcal{V}_{pq}],$ (4)
where $\delta_{pqkr}(\mathbf{u})=\delta(\mathbf{u}-\mathbf{u}_{pqkr})$ is the
Delta function shifted to the sampled point $pqkr$. For an observation with
frequency range $F=\Delta\nu\times N_{\nu}$ and total observing period
$T=\Delta t\times N_{t}$, observing for long times and over large frequency
ranges leads to storage issues, as well as computation cost, since
$\\{T,F\\}\propto\\{N_{t},N_{\nu}\\}$ if $\Delta t$ and $\Delta\nu$ must
remain sufficiently small. For aggressive averaging, $\Delta t$ and
$\Delta\nu$ are large, which causes $\Pi_{pq}$ to deviate significantly from
$\delta_{pqkr}$ and therefore causes the visibility to decorrelate:
$\mathcal{V}_{pq}\neq\widetilde{\mathcal{V}}_{pq}$. To derive the effect of
averaging on the image, we can reformulate Eq. 4 as:
$\displaystyle\widetilde{\mathcal{V}}_{pq}$
$\displaystyle=\mathcal{F}\\{\mathcal{P}_{pqkr}\\}\big{(}\Pi_{pq}\circ\mathcal{F}\\{\mathcal{I}\\}\big{)},$
(5)
where the apparent sky $\mathcal{I}$ is the inverse Fourier transform of
$\mathcal{V}_{pq}$ and $\mathcal{P}_{pqkr}$ is the inverse Fourier transform
of $\delta_{pqkr}$. Here $\mathcal{F}$ represents the Fourier transform.
Inverting the sum of Eq. 5 over all the baselines results in an estimate of
the sky image:
$\displaystyle\widetilde{\mathcal{I}}$
$\displaystyle=\sum_{pqkr}W_{pqkr}\mathcal{P}_{pqkr}\circ\big{(}\mathcal{D}_{pqkr}\mathcal{I}\big{)},$
(6)
where $W_{pqkr}$ is the weight at the sampled point $pqkr$. We note that the
apparent sky $\mathcal{I}$ is now tapered by the baseline-dependent distortion
distribution $\mathcal{D}_{pqkr}$, the latter being the inverse Fourier
transform of the baseline-dependent top-hat window:
$\displaystyle\mathcal{D}_{pqkr}$
$\displaystyle=\mathcal{F}^{-1}\\{\Pi_{pq}\\}$
$\displaystyle=\mathrm{sinc}\left(\frac{\Delta\Psi}{2}\right)\mathrm{sinc}\left(\frac{\Delta\Phi}{2}\right).$
(7)
For a source at the sky location $\mathbf{l}$, the $\Delta\Psi$ and
$\Delta\Phi$ are the phase difference in time and frequency, respectively:
$\displaystyle\Delta\Psi$
$\displaystyle=2\pi\Delta\mathbf{u}_{pq}(t,\nu_{r})\mathbf{l};\Delta\Phi$
$\displaystyle=2\pi\Delta\mathbf{u}_{pq}(t_{k},\nu)\mathbf{l}.$ (8)
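Eq. 7 can be evaluated directly. The snippet below is a minimal numerical sketch (illustrative, not part of Xova) of the smearing envelope as a function of the phase differences accumulated over one averaging bin:

```python
import numpy as np

# Note: Eq. 7 uses the unnormalized sinc sin(x)/x, whereas numpy's
# np.sinc is the normalized sin(pi x)/(pi x).
def sinc(x):
    x = np.asarray(x, dtype=float)
    return np.where(x == 0, 1.0, np.sin(x) / np.where(x == 0, 1.0, x))

def decorrelation(delta_psi, delta_phi):
    """D = sinc(dPsi/2) * sinc(dPhi/2): 1 means no loss, smaller means
    more smearing of the apparent source intensity."""
    return sinc(delta_psi / 2) * sinc(delta_phi / 2)

# No phase change over the bin -> no decorrelation
assert np.isclose(decorrelation(0.0, 0.0), 1.0)
# Larger phase change (longer baseline, wider bin, source farther from
# the phase centre) -> stronger decorrelation
assert decorrelation(1.0, 0.5) < decorrelation(0.1, 0.05)
```

This makes the behaviour in Figure 1 explicit: $\Delta\Psi$ and $\Delta\Phi$ grow with baseline length and distance from the phase centre, so the envelope falls fastest on the longest baseline.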
Assuming no other corruption effects apart from decorrelation and assuming
naturally weighting a sky with a single source; with decorrelation in effect
Eq. 6 becomes:
$\displaystyle\widetilde{\mathcal{I}}$
$\displaystyle=\mathcal{P}_{pqkr}\circ\big{(}\mathcal{D}_{pqkr}\mathcal{I}\big{)}$
$\displaystyle=\mathcal{D}_{pqkr}\mathcal{I}.$ (9)
Note that in this formulation, we have assumed that
$\mathcal{P}_{pqkr}=\delta(\mathbf{l})$ is baseline independent as opposed to
Atemkeng et al. (2020). Eq. 9 is simulated in Figure 1 using the MeerKAT
telescope at 1.4 GHz showing the apparent intensity of a 1 Jy source, as seen
by the shortest baseline, medium-length baseline, and longest baseline, as a
function of distance from the phase center. We see that decorrelation is most
severe on the longest baseline, followed by the medium-length baseline, and
that decorrelation is a function of source position in the sky.
Figure 1.: Effect of time averaging: The data is sampled at 1 s and 84 kHz
frequency resolutions then averaged only in time across 15 s.
## 2 Baseline-Dependent Time and Channel Averaging (BDA)
The distortion distribution $\mathcal{D}_{pqkr}$ depends on each baseline and
on its orientation as it rotates in Fourier space, which makes the
decorrelation baseline-dependent. For decorrelation to be baseline-independent, the
rectangular sampling bin $B_{kr}^{[\Delta t\Delta\nu]}$ across which the data
is averaged must be kept baseline-dependent as opposed to the fixed sampling
bin currently employed in radio interferometer correlators:
$\displaystyle
B_{kr}^{[\Delta_{\mathbf{u}_{pq}}t\Delta_{\mathbf{u}_{pq}}\nu]}=[t_{k}-\Delta_{\mathbf{u}_{pq}}t/2,t_{k}+\Delta_{\mathbf{u}_{pq}}t/2]\times[\nu_{r}-\Delta_{\mathbf{u}_{pq}}\nu/2,\nu_{r}+\Delta_{\mathbf{u}_{pq}}\nu/2],$
(10)
where the integration intervals $\Delta_{\mathbf{u}_{pq}}t$ and
$\Delta_{\mathbf{u}_{pq}}\nu$ are now also baseline-dependent. In this case
Eq. 6 becomes:
$\displaystyle\widetilde{\mathcal{I}}$
$\displaystyle=\sum_{pqkr}\mathcal{W}_{pqkr}\mathcal{P}_{pqkr}\circ\big{(}\mathcal{D}\mathcal{I}\big{)},$
(11)
where $\mathcal{D}=\mathcal{D}_{pqkr}=\mathcal{D}_{\alpha\beta kr}$ is the
distortion distribution, which is now equal across all baselines $pq$ and
$\alpha\beta$, no matter their orientation. We provide details on the
implementation of $\mathcal{D}$ in Sections 3 and 4.
## 3 Technologies
The core of Xova’s BDA algorithm is implemented using two recent
parallelisation and acceleration frameworks: (1) Dask (Rocklin 2015) is a
Python parallel computing library that expresses programs as Computational
Graphs whose individual tasks are scheduled on multiple cores or nodes. Dask
collections abstract underlying graphs with familiar Array and Dataframe
interfaces. (2) Numba (Lam et al. 2015) is a JIT compiler that translates the
BDA algorithm, expressed as a subset of Python and NumPy code, into
accelerated machine code. These are used in Xova as follows: dask-ms (Perkins
et al. 2021) exposes Measurement Set columns as dask arrays for ingest by
Xova; Codex Africanus (Perkins et al. 2021), a Radio Astronomy Algorithms
Library, applies the Numba-implemented BDA to the dask arrays, producing
averaged dask arrays; and dask-ms writes the averaged arrays back to a new
Measurement Set.
## 4 Xova
Figure 2.: The parts of the baseline closer to the phase centre are subject
to greater averaging
For each baseline (see Figure 2): Measurement Set timeslots are aggregated
into averaging bins until $\textrm{sinc}\left(\Delta\Psi/2\right)$ falls below
decorrelation tolerance $\mathcal{D}$. The acceptable corresponding change in
frequency
$\Delta\Phi=2\,\textrm{sinc}^{-1}\left(\mathcal{D}/\textrm{sinc}\left(\Delta\Psi/2\right)\right)$
is calculated and channel width $\Delta\nu$ is derived from $\Delta\Phi$ and
used to divide the original band into a new channelisation.
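A minimal sketch of this greedy per-baseline binning loop is shown below. It is a simplified illustration, not Xova's actual implementation: the per-timeslot phase increments `dpsi_per_slot` and the helper names are hypothetical, and the subsequent channelisation step is omitted.

```python
import numpy as np

def sinc(x):
    # Unnormalized sinc sin(x)/x, as in the decorrelation formula
    return 1.0 if x == 0 else float(np.sin(x) / x)

def bin_timeslots(dpsi_per_slot, tolerance):
    """Greedy per-baseline binning sketch (hypothetical helper, not Xova's
    API): keep accumulating timeslots while sinc(accumulated dPsi / 2)
    stays at or above the decorrelation tolerance."""
    bins, current, acc = [], [], 0.0
    for i, dpsi in enumerate(dpsi_per_slot):
        if current and sinc((acc + dpsi) / 2) < tolerance:
            bins.append(current)      # close the bin before it decorrelates
            current, acc = [], 0.0
        current.append(i)
        acc += dpsi
    if current:
        bins.append(current)
    return bins

# A short baseline (small dPsi per slot) gets fewer, longer bins than a
# long baseline (large dPsi per slot) at the same tolerance.
short = bin_timeslots([0.01] * 10, tolerance=0.999)
long_ = bin_timeslots([0.5] * 10, tolerance=0.999)
assert len(short) < len(long_)
```

This is why the output loses its rectilinear shape: each baseline, and each portion of a baseline track, ends up with its own bin sizes.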
## 5 Results
Figure 3 shows the image of a high-resolution data set imaged without BDA
(right panel) and with BDA at a $95\%$ decorrelation tolerance (left panel).
We note that BDA does not distort the image when compared to the image without
BDA.
Figure 3.: BDA (left) vs. no BDA (right).
### Acknowledgments
The research of Oleg Smirnov is supported by the South African Research Chairs
Initiative of the Department of Science and Technology and National Research
Foundation.
## References
* Atemkeng et al. (2020) Atemkeng, M., Smirnov, O., Tasse, C., Foster, G., & Makhathini, S. 2020, Monthly Notices of the Royal Astronomical Society, 499, 292
* Lam et al. (2015) Lam, S. K., Pitrou, A., & Seibert, S. 2015, in Proceedings of the Second Workshop on the LLVM Compiler Infrastructure in HPC (New York, NY, USA: Association for Computing Machinery), LLVM ’15. URL https://doi.org/10.1145/2833157.2833162
* Perkins et al. (2021) Perkins, S. J., et al. 2021, in ADASS XXX, edited by J.-E. Ruiz, & F. Pierfederici (San Francisco: ASP), vol. TBD of ASP Conf. Ser., 999 TBD
* Rocklin (2015) Rocklin, M. 2015, in Proceedings of the 14th Python in Science Conference, edited by K. Huff, & J. Bergstra, 130
# Response to Comment on: Tunneling in DNA with Spin Orbit coupling
Solmar Varela Yachay Tech University, School of Chemical Sciences &
Engineering, 100119-Urcuquí, Ecuador Yachay Tech University, School of
Physical Sciences & Nanotechnology, 100119-Urcuquí, Ecuador Iskra Zambrano
Yachay Tech University, School of Physical Sciences & Nanotechnology,
100119-Urcuquí, Ecuador Bertrand Berche Laboratoire de Physique et Chimie
Théoriques, UMR Université de Lorraine-CNRS 7019 54506 Vandoeuvre les Nancy,
France Vladimiro Mujica School of Molecular Sciences, Arizona State
University, Tempe, Arizona 85287-1604, USA Ernesto Medina Yachay Tech
University, School of Physical Sciences & Nanotechnology, 100119-Urcuquí,
Ecuador Centro de Física, Instituto Venezolano de Investigaciones Cíentificas
(IVIC), Apartado 21827, Caracas 1020 A, Venezuela
###### Abstract
The comment in ref.[WohlmanAharony, ] makes a few points related to the
validity of our model, especially in the light of the interpretation of
Bardarson’s theorem: “in the presence of time reversal symmetry and for half-
integral spin the transmission eigenvalues of the two terminal scattering
matrix come in (Kramers) degenerate pairs”. The authors of
ref.[WohlmanAharony, ] first propose an ansatz for the wave function in the
spin active region and go on to show that the resulting transmission does not
show spin dependence, reasoning that spin dependence would violate Bardarson’s
assertion. Here we clearly show that the ansatz presented assumes spin-
momentum independence from the outset and thus just addresses the spinless
particle problem. We then find the appropriate eigenfunction contemplating
spin-momentum coupling and show that the resulting spectrum obeys Bardarson’s
theorem. Finally we show that the allowed wavevectors are the ones assumed in
the original paper and thus the original conclusions follow. We recognize that
the Hamiltonian in our paper written in local coordinates on a helix was
deceptively simple and offer the expressions of how it should be written to
more overtly convey the physics involved. The relation between spin
polarization and torque becomes clear, as described in reference
ref.[VarelaZambrano, ]. This response is a very important clarification in
relation to the implications of Bardarson’s theorem concerning the possibility
of spin polarization in one dimensional systems in the linear regime.
In ref.[WohlmanAharony, ] Aharony et al. discuss critical points of our model
that provide an opportunity to clarify fine points about notions that have
been raised in the literature regarding the possibility of spin polarization
in one-dimensional systems in the linear regime refs.[Bart, ; BalseiroAharony, ].
Ref.[WohlmanAharony, ] begins by formulating an ansatz for the solution of our Eq.(5)
${\cal H}=\left[\frac{p_{x}^{2}}{2m}+V_{0}\right]{\bf
1}+\alpha\sigma_{y}p_{x},$ (1)
by assuming a product wave function in basis of eigenspinors of $\sigma_{y}$
where $|\Psi_{\mu}(x)=\psi_{u}(x)|\mu\rangle$. Assuming $\psi_{u}(x)\propto
e^{iQ_{\mu}x}$ they arrive at the energy
$E=\frac{\hbar^{2}\left[(Q_{\mu}+k_{so}\mu)^{2}-k_{so}^{2}\right]}{2m}+V_{0},$ (2)
with
$Q_{\mu}^{\pm}=-k_{so}\mu\pm q~{}{\rm
with}~{}q=\sqrt{k^{2}+k_{so}^{2}-q_{0}^{2}},$ (3)
where $q_{0}^{2}=2mV_{0}/\hbar^{2}$, $k_{so}=m\alpha/\hbar$ and $k^{2}=2mE/\hbar^{2}$.
The aforementioned proposal clearly leads to the spinless-particle solution:
substituting the proposed $Q_{\mu}$ into the energy eliminates all dependence
on $\mu$. This point makes their transmission computation redundant; they
obtain the spinless-particle transmission, which is unsurprisingly independent
of spin. Furthermore, the solution apparently satisfies
Bardarson’s theorem since any spin orientation gives the same energy
independently of the direction of propagation of the electron.
The latter observation gives a first hint of what has been omitted from this
solution and what can be drawn from the Hamiltonian above, i.e. that there are
two sets of Kramers pairs with two different energies (which obey Bardarson’s
conclusions). Thus, spin and momentum are coupled and a product solution is
not forthcoming.
The eigenfunction of Eq. 1 is
$\Psi_{s}=\left(\begin{array}[]{c}is\\\ 1\end{array}\right)e^{i\lambda|q|x},$
(4)
where $s=\pm 1$ denotes the two possible spin orientations (in the
$\sigma_{z}$ basis) and $\lambda=\pm 1$ the two momentum orientations (see
ref.[Birkholz, ]). Substitution of this spinor into the eigenvalue equation
yields the energy eigenvalues
$E_{s}^{\lambda}=\frac{\hbar^{2}q^{2}}{2m}-s\lambda\hbar\alpha|q|+V_{0},$ (5)
that now reflect two Kramers pairs $(s,\lambda)=(+,+)~{}{\rm and}~{}(-,-)$
with energy $E_{<}=\hbar^{2}q^{2}/2m-\hbar\alpha|q|+V_{0}$ and
$(s,\lambda)=(+,-)~{}{\rm and}~{}(-,+)$ with energy
$E_{>}=\hbar^{2}q^{2}/2m+\hbar\alpha|q|+V_{0}$. The $q$ vectors associated
with the eigenfunctions of the Hamiltonian are
$q=sk_{so}+\lambda\sqrt{k^{2}+k_{so}^{2}-q_{0}^{2}}.$ (6)
Note that this is the wave vector of the comment, but the form of the
wave function (Eq. 4) involves the quantum number $\lambda$ denoting the
wave-vector direction. This form is in agreement with our work. Another subtle
detail that makes the proposed form of the comment suspect is that the
superposition of waves under the barrier does not correspond to equal energies
unless the momentum-spin relation is taken into account.
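As a quick numerical sanity check of Eq. (5), one can diagonalize the $2\times 2$ plane-wave Hamiltonian of Eq. (1) directly. The sketch below assumes $\hbar=m=1$ and illustrative values of $\alpha$ and $V_{0}$ (not taken from the paper):

```python
import numpy as np

# Units hbar = m = 1; alpha and V0 are illustrative values only.
alpha, V0 = 0.7, 0.3
sigma_y = np.array([[0.0, -1.0j], [1.0j, 0.0]])

def H(p):
    """Plane-wave Hamiltonian of Eq. (1): (p^2/2 + V0) 1 + alpha sigma_y p."""
    return (p**2 / 2.0 + V0) * np.eye(2) + alpha * p * sigma_y

q = 1.3                                    # |q|, an arbitrary positive wavenumber
for lam in (+1, -1):                       # lambda: direction of propagation
    evals = np.sort(np.linalg.eigvalsh(H(lam * q)))
    # Eq. (5): E_s^lambda = q^2/2 - s*lambda*alpha*|q| + V0 with s = +1, -1
    expected = np.sort([q**2 / 2.0 - s * lam * alpha * q + V0 for s in (+1, -1)])
    assert np.allclose(evals, expected)
```

Both propagation directions yield the same pair of energies, the two Kramers branches $E_{<}$ and $E_{>}$, consistent with Bardarson’s theorem for this model.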
This very subtle difference determines whether or not the model shows spin
polarization under tunneling, as demonstrated in the comment. For reference
work on SO-active rings, where these arguments are also applicable,
see refs.[Birkholz, ; Richter, ; Chatelain, ; Bolivar, ].
Spin polarization in one-dimensional systems has been argued to be feasible in
the tunneling regime Bart, in agreement with transport symmetry relations,
where there is an energy dependence. In fact, if the injection energy lies
between $E_{<}$ and $E_{>}$ one expects spin polarization, while if the
injection energy is above $E_{>}$ both spin orientations are filled and no
spin polarization ensues.
The Hamiltonian of Eq. (1) is deceivingly simple because we wrote it in a local
coordinate system that rotates because of the constraints imposed by the helix,
i.e. $k_{z}=k_{y}\tan\eta$ (taking $x\rightarrow z$ as in reference
[Varela2016, ]), where $\eta$ is the chiral angle of the helix. The system is
actually three dimensional, with a Hamiltonian close to full filling of the
form
$\displaystyle H$ $\displaystyle=$ $\displaystyle
tR^{2}\cos^{2}\eta\left(q_{y}+q_{z}\tan\eta\right)^{2}\mathbbm{1}_{s}+$ (7)
$\displaystyle
2R\cos\eta~{}\lambda_{SO}\left(q_{y}+q_{z}\tan\eta\right)~{}s_{y},$
applying the relation $q_{z}=q_{y}\tan\eta$ we obtain the Hamiltonian of
Eq. (5) of the paper,
$\displaystyle H$ $\displaystyle=$ $\displaystyle
tR^{2}\csc^{2}\eta~{}q^{2}_{z}~{}\mathbbm{1}_{s}+2R\csc\eta~{}\lambda_{SO}~{}s_{y}q_{z},$
(8)
where again we exchange $x$ in the paper with $z$ in the current notation for
consistency ($z$ is along the axis of the helix). This is the Hamiltonian
object of the comment of ref.[WohlmanAharony, ], that appears not to involve
an orbital degree of freedom for the electron (no orbital angular momentum).
To better understand the physics we will eliminate $q_{z}$ (the vector along
the axis of the helix) in favor of $q_{y}$ using their relationship to obtain
(consistent with ref. [Varela2016, ])
$\displaystyle
H=tR^{2}\sec^{2}{\eta}~{}q^{2}_{y}~{}\mathbbm{1}_{s}+2R\sec\eta~{}\lambda_{SO}~{}s_{y}q_{y},$
(9)
still in local, rotating, coordinates. Further rewriting the Hamiltonian in
cylindrical coordinates, where we can identify orbital and spin angular
momentum, we get
$H=-\beta~{}\mathbbm{1}_{s}\partial_{\varphi}^{2}-i\alpha_{\eta}s_{\varphi}\partial_{\varphi},$
(10)
where $\varphi$ is the angle around the helix axis, $s_{\varphi}$ is the spin
component along the local azimuthal direction, $\beta=t(R\sec\eta/a)^{2}$, and
$\alpha_{\eta}=2(R/a)\sec\eta~\lambda_{SO}$.
This problem is different from the closed ring since it describes the motion
on a helix and thus, periodic boundary conditions do not apply.
The Hamiltonian in the previous equation is nevertheless non-Hermitian
Morpurgo and can be made Hermitian by symmetrizing the Hamiltonian in Eq. (10).
With the latter procedure we just have to change
$\sigma_{\varphi}\partial_{\varphi}\rightarrow\sigma_{\varphi}\partial_{\varphi}-(1/2)\sigma_{\rho}$
so that the hermitian Hamiltonian is
$H=-\beta~{}\mathbbm{1}_{s}\partial_{\varphi}^{2}-i\alpha_{\eta}(\sigma_{\varphi}\partial_{\varphi}-\frac{1}{2}\sigma_{\rho}).$
(11)
The Hamiltonian in this form has a very revealing interpretation and now makes
an obvious connection to the conclusions of our paper, since the second term
is the kinetic Hamiltonian for a graphene ring,
$\propto{\bm{\sigma}}\cdot{\bf p}$, except that here ${\bm{\sigma}}$
describes real spin, not pseudo-spin. This term will exert a torque on the
ring since momentum and spin cannot be kept at a fixed angle on the ring. The
torque will disappear if the ring rotates with the electron momentum. This
manifests itself as a pseudo-spin angular momentum in graphene Bolivar and it
describes the rotation of the ring.
The same physics applies for the helix but for the real spin, on the helix,
the term tends to align momentum (which now circulates around the helix) and
spin. Since this is not possible without changing angular momentum then there
is a torque on the helix. The latter term is precisely what was computed in
the paper as $(1/i\hbar)[s_{z},H]={\cal T}$ the torque on the molecule.
Addressing the problem in three dimensions brings about new features to the
problem regarding the adiabatic/non-adiabatic following of the SO effective
magnetic field, which does not arise in the simplified Hamiltonian of Eq.(1).
In conclusion, we believe the comment in ref.[WohlmanAharony, ] does not
capture the correct spin-momentum coupling present in the model of
ref.[VarelaZambrano, ], treating only effectively spinless electron tunneling.
We hope our presentation has clarified the issues.
###### Acknowledgements.
We acknowledge fruitful discussions with Alexander Lopez.
## References
* (1) O. Entin-Wohlman, A. Aharony, and Y. Utsumi, Comment (2020).
* (2) S. Varela, I. Zambrano, B. Berche, V. Mujica, and E. Medina, Phys. Rev. B 101, 241410(R) (2020).
* (3) Xu Yang, Caspar H. van der Wal, and Bart J. van Wees, Nano Lett. 8, 6148 (2020).
* (4) S. Matityahu, Y. Utsumi, A. Aharony, O. Entin-Wohlman, and C. A. Balseiro, Phys. Rev. B 93, 075407 (2016).
* (5) J. E. Birkholz and V. Meden, J. Phys.: Condens. Matter 20, 085226 (2008).
* (6) D. Frustaglia, and K. Richter, Phys. Rev. B 69, 235310 (2004).
* (7) B. Berche, C. Chatelain, and E. Medina, Eur. J. Phys. 31, 1267 (2010).
* (8) N. Bolivar, E. Medina, and B. Berche, Phys. Rev. B 89, 125413 (2014).
* (9) S. Varela, V. Mujica, and E. Medina, Phys. Rev. B 93, 155436 (2016).
* (10) F. E. Meijer, A. F. Morpurgo, and T. M. Klapwijk, Phys. Rev. B 66, 033107 (2002).
# Nucleon-pair coupling scheme in Elliott’s SU(3) model
G. J<EMAIL_ADDRESS>
School of Physics Science and Engineering, Tongji University, Shanghai 200092, China
Calvin W.<EMAIL_ADDRESS>
Department of Physics, San Diego State University, 5500 Campanile Drive, San Diego, CA 92182-1233
P. Van<EMAIL_ADDRESS>
Grand Accélérateur National d’Ions Lourds, CEA/DRF-CNRS/IN2P3, Boulevard Henri Becquerel, F-14076 Caen, France
Zhongzhou<EMAIL_ADDRESS>
School of Physics Science and Engineering, Tongji University, Shanghai 200092, China
###### Abstract
Elliott’s SU(3) model is at the basis of the shell-model description of
rotational motion in atomic nuclei. We demonstrate that SU(3) symmetry can be
realized in a truncated shell-model space if constructed in terms of a
sufficient number of collective $S$, $D$, $G$, … pairs (i.e., with angular
momentum zero, two, four, …) and if the structure of the pairs is optimally
determined either by a conjugate-gradient minimization method or from a
Hartree-Fock intrinsic state. We illustrate the procedure for 6 protons and 6
neutrons in the $pf$ ($sdg$) shell and exactly reproduce the level energies
and electric quadrupole properties of the ground-state rotational band with
$SDG$ ($SDGI$) pairs. The $SD$-pair approximation without significant
renormalization, on the other hand, cannot describe the full SU(3)
collectivity. A mapping from Elliott’s fermionic SU(3) model to systems with
$s$, $d$, $g$, … bosons provides insight into the existence of a decoupled
collective subspace in terms of $S$, $D$, $G$, … pairs.
Atomic nuclei exhibit a wide variety of behaviors, ranging from single-
particle motion to superconducting-like pairing to vibrational and rotational
modes. To a large extent the story of nuclear structure is the quest to
encompass the widest range of behaviors within the fewest degrees of freedom.
In the early stage of nuclear physics, the spherical nuclear shell model Mayer
; Jensen stressed the single-particle nature of the nucleons in a nucleus,
while the geometric collective model BM1 ; BM2 and the Nilsson mean-field
model Nilsson pointed the way to describing rotational bands by emphasizing
permanent quadrupole deformations Rainwater in “intrinsic” states. The
reconciliation between these two pictures has been one of the most important
advances in our understanding of the structure of nuclei. It was in large part
due to Elliott who showed, on the basis of an underlying SU(3) symmetry, how
to obtain deformed “intrinsic” states in a finite harmonic-oscillator single-
particle basis occupied by nucleons that interact through a quadrupole-
quadrupole force Elliott58 . This major step forward provided a microscopic
interpretation of rotational motion in the context of the spherical shell
model and, more recently, led to the symmetry-adapted no-core shell model
symmetryadapted .
Although the spherical shell model does provide a general framework to
reproduce rotational bands Caurier05 and shape coexistence Heyde11 in light-
and medium-mass nuclei, it is computationally still extremely challenging to
describe deformation in heavier-mass regions Otsuka19 . Approximations must be
sought. A tremendous simplification of the shell model occurs by considering
only pairs of nucleons with angular momentum 0 and 2, and treating them as
($s$ and $d$) bosons. This approximation, known as the interacting boson model
(IBM) IBM1 ; IBM2 , is particularly attractive because of its symmetry
treatment in terms of a U(6) Lie algebra, which allows a spherical U(5), a
deformed SU(3), and an intermediate SO(6) limit. While the IBM has been
connected to the shell model for spherical nuclei OAI ; GJ95 , such relation
has never been established for deformed nuclei, in which case the IBM has
rather been derived from mean-field models Nomura1 ; Nomura2 .
The nucleon-pair approximation (NPA) NPA1 ; NPA2 is one possible truncation
scheme of the shell-model configuration space. The building blocks of the NPA
are fermion pairs with certain angular momenta. Calculations are carried out
in a fully fermionic framework, albeit in a severely reduced model space
defined by the most important degrees of freedom in terms of pairs. The NPA
therefore can be considered as intermediate between the full-configuration
shell model and models that adopt the same degrees of freedom as the nucleon
pairs but in terms of bosons. While the NPA has been successful for nearly
spherical nuclei NPAr ; gs1 ; gs2 ; gs3 ; bpa1 ; bpa2 ; gs4 ; Lei , previous
studies for well-deformed nuclei are not satisfactory. For example, in the
fermion dynamical symmetry model FDSM1 ; FDSM2 an SU(3) limit with Sp(6)
symmetry can be constructed in terms of $S$ and $D$ pairs but their symmetry-
determined structure is far removed from that of realistic pairs Halse89 .
Also, the binding energy, moment of inertia, and electric quadrupole ($E2$)
transitions calculated in an $SD$-pair approximation are much smaller than
those obtained in Elliott’s SU(3) limit for the $pf$ and $sdg$ shells Zhao2000 .
In this Letter we successfully apply the NPA of the shell model to well-
deformed nuclei. We show that the low-energy excitations of many-nucleon
systems in Elliott’s SU(3) limit can be exactly reproduced with a suitable
choice of pairs in the NPA. We obtain an understanding of this observation
through a mapping to a corresponding boson model.
We consider an example system with even numbers of protons and neutrons in a
degenerate $pf$ or $sdg$ shell, interacting through a quadrupole-quadrupole
force of the form,
$\displaystyle V_{Q}=-(Q_{\pi}+Q_{\nu})\cdot(Q_{\pi}+Q_{\nu}),$ (1)
where $Q_{\pi}$ ($Q_{\nu}$) is the quadrupole operator for protons (neutrons),
$\displaystyle Q=-\displaystyle\sum_{\alpha\beta}\displaystyle\frac{\langle
n_{\alpha}l_{\alpha}j_{\alpha}\|r^{2}Y_{2}\|n_{\beta}l_{\beta}j_{\beta}\rangle}{\sqrt{5}r_{0}^{2}}\left(a_{\alpha}^{\dagger}\times\tilde{a}_{\beta}\right)^{(2)}.$
(2)
Greek letters $\alpha$, $\beta,\ldots$ denote harmonic-oscillator single-
particle orbits labeled by $n$, $l$, $j$, and $j_{z}$; $a_{\alpha}^{\dagger}$
and $\tilde{a}_{\beta}$ are the nucleon creation operator and its time-
reversed form for the annihilation operator, respectively; and $r_{0}$ is the
harmonic-oscillator length. As shown in Ref. Elliott58 , the interaction
$V_{Q}$ is a combination of the Casimir operators of SU(3) and SO(3), and its
eigenstates are therefore classified by (irreducible) representations of these
algebras with eigenenergies given by
$\displaystyle-\frac{5}{2\pi}\left[\frac{1}{2}(\lambda^{2}+\lambda\mu+\mu^{2}+3\lambda+3\mu)-\frac{3}{8}L(L+1)\right],$
(3)
in terms of the SU(3) labels $(\lambda,\mu)$ and the SO(3) label $L$, the
total orbital angular momentum. Several useful SU(3) representations for low-
lying states can be found in Ref. Zhao2000 .
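As a quick check of this expression, one can evaluate Eq. (3) directly for the $(\lambda,\mu)=(24,0)$ ground band of the 6p-6n system in the $pf$ shell, whose binding energy of $810/\pi$ MeV is discussed later in the text (a minimal sketch; energies in the same units as Eq. (3)):

```python
from math import pi, isclose

def elliott_energy(lam, mu, L):
    """Eigenenergy of V_Q, Eq. (3), for SU(3) labels (lam, mu) and angular momentum L."""
    casimir = 0.5 * (lam**2 + lam * mu + mu**2 + 3 * lam + 3 * mu)
    return -(5.0 / (2.0 * pi)) * (casimir - (3.0 / 8.0) * L * (L + 1))

# Ground band of 6 protons and 6 neutrons in the pf shell: (lam, mu) = (24, 0).
assert isclose(elliott_energy(24, 0, 0), -810.0 / pi)

# Within a band the excitation energies follow a rigid-rotor L(L+1) rule.
for L in (2, 4, 6):
    dE = elliott_energy(24, 0, L) - elliott_energy(24, 0, 0)
    assert isclose(dE, (15.0 / (16.0 * pi)) * L * (L + 1))
```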
In the following we discuss in detail the case of 6 protons and 6 neutrons
(6p-6n) in the NPA of the shell model and subsequently generalize to other
numbers of nucleons. A nucleon-pair state of 6 protons is written as
$\displaystyle|\varphi^{(I_{\pi})}\rangle$ $\displaystyle=$
$\displaystyle\left(({{A}^{({J}_{1})}}^{{\dagger}}\times{{A}^{({J}_{2})}}^{{\dagger}})^{(I_{2})}\times{{A}^{({J}_{3})}}^{{\dagger}}\right)^{(I_{\pi})}|0\rangle,$
(4)
where $I_{2}$ is an intermediate angular momentum and
${{A}^{(J)}}^{{\dagger}}$ is the creation operator of a collective pair with
angular momentum $J$:
$\displaystyle{{A}^{(J)}}^{{\dagger}}=\sum_{{\alpha}\leq{\beta}}y_{J}({\alpha}{\beta})\left(a_{{\alpha}}^{\dagger}\times
a_{{\beta}}^{\dagger}\right)^{(J)},$ (5)
where $y_{J}({\alpha}{\beta})$ is the pair-structure coefficient. For systems
with protons and neutrons, we construct the basis by coupling the proton and
neutron pair states to a state with total angular momentum $I$, i.e.,
$|\psi^{(I)}\rangle=\left(|\varphi^{(I_{\pi})}\rangle\times|\varphi^{(I_{\nu})}\rangle\right)^{(I)}$.
Level energies and wave functions are obtained by diagonalization of the
Hamiltonian matrix in the space spanned by
$\left\{|\psi^{(I)}\rangle\right\}$, that is, from a configuration-interaction
calculation. If a sufficient number of pair states are considered
in Eq. (4), the NPA model space can be made exactly equivalent to the full
shell-model space. The interest of the NPA, however, is to restrict to the
relevant pairs and describe low-energy nuclear structure in a truncated shell-
model space.
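Since nucleon-pair basis states are in general non-orthogonal, the diagonalization amounts to a generalized eigenvalue problem $Hc=ENc$ with overlap matrix $N_{ij}=\langle\psi_{i}|\psi_{j}\rangle$. The toy sketch below (random matrices standing in for the actual NPA matrix elements) illustrates the standard Cholesky reduction:

```python
import numpy as np

# Toy stand-ins for the NPA matrices: a symmetric Hamiltonian H and a
# symmetric positive-definite overlap matrix N of the non-orthogonal basis.
rng = np.random.default_rng(0)
dim = 4
A = rng.normal(size=(dim, dim))
N = A @ A.T + dim * np.eye(dim)
B = rng.normal(size=(dim, dim))
H = 0.5 * (B + B.T)

# Reduce H c = E N c to a standard problem via the Cholesky factor N = L L^T.
L = np.linalg.cholesky(N)
Linv = np.linalg.inv(L)
E, Y = np.linalg.eigh(Linv @ H @ Linv.T)
C = Linv.T @ Y                                  # back-transform eigenvectors

assert np.allclose(H @ C, N @ C @ np.diag(E))   # generalized eigenpairs
assert np.allclose(C.T @ N @ C, np.eye(dim))    # N-orthonormal wave functions
```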
The selection of relevant pairs with the correct structure in Eq. (5) has been
a long-standing problem in NPA calculations. Recent applications choose pairs
by the generalized seniority scheme (GS). Specifically, one optimizes the
structure coefficients of the $S$ pair by minimizing the expectation value of
the Hamiltonian in the $S$-pair condensate and one obtains other pairs by
diagonalizing the Hamiltonian matrix in the space spanned by GS-two (i.e.,
one-broken-pair) states gs2 ; Xu2009 . The collective pairs obtained with the
GS approach provide a good description of nearly-spherical nuclei but, as
recognized in Ref. Lei12un and as will also be shown below, they are
inappropriate in deformed nuclei. Instead we use the conjugate gradient (CG)
method CG1 ; CG2 , where the structure coefficients of all pairs considered in
the basis are simultaneously optimized by minimizing the ground-state energy
in a series of iterative NPA calculations for a given Hamiltonian. The initial
pairs in this iterative procedure are SU(3) tensors, obtained by diagonalizing
$V_{Q}$ in a two-particle basis and retaining the lowest-energy pair.
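The variational principle behind the CG optimization can be illustrated with a one-parameter toy model (a sketch only, not the actual CG-NPA code): the structure coefficients parameterize a trial state, and minimizing the energy expectation value drives it toward the exact ground state.

```python
import numpy as np

# Toy "structure coefficient" optimization: a one-parameter trial state
# u(theta) in a fixed 2x2 toy Hamiltonian (values are illustrative).
H = np.array([[1.0, 0.4],
              [0.4, -0.5]])

def energy(theta):
    u = np.array([np.cos(theta), np.sin(theta)])  # normalized trial state
    return u @ H @ u

# Scan the parameter; a real calculation would use conjugate gradients instead.
thetas = np.linspace(0.0, np.pi, 20001)
e_min = min(energy(t) for t in thetas)

# The variational minimum reproduces the lowest eigenvalue of H.
assert np.isclose(e_min, np.linalg.eigvalsh(H).min(), atol=1e-6)
```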
Figure 1: (a) Excitation energy and (b) electric quadrupole reduced
transition probability $B(E2;I\rightarrow I-2)$ for the ground rotational band
of 6 protons and 6 neutrons in the $pf$ shell in Elliott’s SU(3) model. The
subscript “GS” stands for generalized seniority and “CG” for conjugate
gradient (see text).
Figure 2: Same as Fig. 1 for the $sdg$ shell.
Figure 1 shows, for a 6p-6n system in the $pf$ shell, the results of various
NPA calculations concerning excitation energies and $E2$ reduced transition
probabilities (with the standard effective charges $e_{\pi}=1.5$ and
$e_{\nu}=0.5$) for the lowest rotational band. These are compared to the exact
results of Elliott’s model, where the ground band belongs to the SU(3)
representation $(\lambda,\mu)=(24,0)$. Surprisingly, the $SDG$-pair
approximation of the shell model in the CG approach (denoted as $SDG_{\rm
CG}$) reproduces the exact binding energy, $810/\pi$ MeV according to Eq. (3),
to a precision of eight digits, as well as the exact excitation energies for
the entire ground band. One can understand the occurrence of the $(24,0)$
representation from the coupling of $(12,0)$ for the six protons and six
neutrons separately and, in fact, all bands contained in the product
$(12,0)\times(12,0)$, i.e. $(24,0)$, $(22,1)$,…,$(0,12)$, are exactly
reproduced in the $SDG_{\rm CG}$-pair truncated space. We also find that the
results of the $SDG$-pair approximation are close to the exact results if the
pairs are SU(3) tensors. For example, with such pairs the calculation
reproduces 98% of the exact binding energy, 99% of the exact moment of
inertia, and 97% of the exact $B(E2)$ values.
On the other hand, the results of the $SDG$-pair approximation deteriorate if
the pairs are obtained with the GS approach (denoted as $SDG_{\rm GS}$), which
reproduces only 76% of the exact binding energy. Furthermore, $SDG_{\rm GS}$
fails to describe the quadrupole collectivity: The moment of inertia predicted
by $SDG_{\rm GS}$ is only $\sim$43% of the exact one, the predicted $B(E2)$
values are too small, and the yrast states with angular momentum $I\geq 10$ do
not follow the behavior of a quantum rotor. One concludes that the structure
of the collective pairs, as determined by the GS approach, is not suitable for
the description of well-deformed nuclei.
It is also of interest to investigate the standard $SD$-pair approximation of
the shell model and results of the $SD_{\rm GS}$-, $SD_{\rm CG}$-, and
$SDS^{\prime}D^{\prime}_{\rm CG}$-pair approximations are shown in Fig. 1.
Here $S^{\prime}$ and $D^{\prime}$ are collective pairs with angular momentum
0 and 2 but orthogonal to the $S$ and $D$ pairs, respectively. While the CG
approach provides the numerically optimal solution in $SD_{\rm CG}$\- and
$SDS^{\prime}D^{\prime}_{\rm CG}$-pair approximations, the results nonetheless
are underwhelming. In the $SD_{\rm GS}$, $SD_{\rm CG}$, and
$SDS^{\prime}D^{\prime}_{\rm CG}$ spaces only 76%, 83%, and 84% of the exact
binding energy are reproduced, respectively, and the predicted moments of
inertia and $B(E2)$ strengths are evidently smaller than the exact SU(3)
results. We conclude that the collective $SD$ pairs cannot fully explain the
quadrupole collectivity of the SU(3) states. Interestingly, the excitation
energies of the yrast states predicted by the $SD$-pair approximations follow
an $I(I+1)$ rule and the $B(E2)$ strength exhibits a nearly-parabolic shape
[see Fig. 1(b)], two typical features of rotational motion. This raises the
hope that an effective Hamiltonian and effective charges can be derived in the
restricted $SD_{\rm CG}$ space, which takes into account the coupling with the
excluded space. This conclusion is in line with a more phenomenological
approach Nomura2 , in which an $L\cdot L$ term is added to the Hamiltonian,
such that properties of low-lying states of well-deformed nuclei are
reproduced in $sd$-IBM.
Figure 2 shows the corresponding results for the 6p-6n system in the $sdg$
shell. In this case the $SDGI_{\rm CG}$-pair approximation of the shell model
reproduces exactly the SU(3) results and all states belonging to the coupled
representation $(18,0)\times(18,0)$, i.e. $(36,0)$, $(34,1)$,…,$(0,18)$, are
fully contained in the $SDGI_{\rm CG}$-pair truncated space. Again, if the
pairs are SU(3) tensors, the $SDGI$-pair approximation is close to the exact
result and reproduces 99% of the exact binding energy, 97% of the exact moment
of inertia, and 99% of the exact $B(E2)$ values. The $SDG_{\rm CG}$-pair
approximation yields 96% of the binding energy and 57% of the moment of
inertia. The predicted $B(E2)$ strength in the $SDG_{\rm CG}$-pair
approximation is close to the exact result for low angular momenta but
deteriorates as angular momentum $I$ increases. The necessity of
renormalization is even larger in the $SD_{\rm CG}$-pair approximation.
Let us now try to understand the above results. Specifically, why is it that
the SU(3) results in the $pf$ shell are exactly reproduced with $SDG$ but not
with $SD$ pairs? Similarly, why is it that SU(3) in the $sdg$ shell cannot be
represented with $SD$ or $SDG$ but requires $SDGI$ pairs? To explain these
findings, we invoke a mapping to a system with corresponding $s$, $d$, $g$,
and $i$ bosons (denoted as $sd$-, $sdg$-, or $sdgi$-IBM) and the bosonic
realization of SU(3). The mapping is further specified by the fact that the
quadrupole-quadrupole interaction $V_{Q}$ is an SU(4) invariant and,
consequently, one aims to realize the symmetries associated with Wigner’s
supermultiplet model Wigner in terms of bosons. An SU(4)-invariant boson
model, known as IBM-4 Elliott81 , requires to assign to each boson a spin-
isospin of $(s,t)=(0,1)$ or $(1,0)$, giving rise to a spin-isospin algebra
${\rm U}_{st}(6)$.
The SU(3) limit can be realized in terms of bosons by first decoupling the
orbital angular momentum from the spin-isospin of the bosons. For an
$n_{b}$-boson state this leads to the classification
$\displaystyle\begin{array}[]{ccccc}{\rm U}(6\Lambda)&\supset&{\rm U}(\Lambda)&\otimes&{\rm U}_{st}(6)\\ \downarrow&&\downarrow&&\downarrow\\ \left[n_{b}\right]&&\left[\bar{h}\right]\equiv\left[h_{1},...,h_{6}\right]&&\left[\bar{h}\right]\equiv\left[h_{1},...,h_{6}\right]\end{array},$
(9)
with $\Lambda=6$, 15, and 28 for $sd$-, $sdg$-, and $sdgi$-IBM, respectively.
The six labels $[\bar{h}]$ are a partition of $n_{b}$ such that $h_{1}\geq
h_{2}\geq\cdots\geq h_{6}$; they specify the representations of ${\rm
U}(\Lambda)$ and ${\rm U}_{st}(6)$, which by virtue of the overall ${\rm
U}(6\Lambda)$ symmetry of the bosons must be identical. For all above values
of $\Lambda$ (i.e., $\Lambda=6$, 15, and 28), Elliott’s SU(3) appears as a
subalgebra of ${\rm U}(\Lambda)$,
$\displaystyle\begin{array}[]{ccccccc}{\rm U}(\Lambda)&\supset&{\rm U}(3)&\supset&{\rm SU}(3)&\supset&{\rm SO}(3)\\ \downarrow&&\downarrow&&\downarrow&&\downarrow\\ \left[\bar{h}\right]&&\left[h_{1}^{\prime\prime},h_{2}^{\prime\prime},h_{3}^{\prime\prime}\right]&&(\lambda,\mu)&K&L\end{array},$
(13)
while Wigner’s SU(4) occurs as a subalgebra of ${\rm U}_{st}(6)$,
$\displaystyle\begin{array}[]{ccccccc}{\rm U}_{st}(6)&\supset&{\rm SU}_{st}(4)&\supset&{\rm SU}_{s}(2)&\otimes&{\rm SU}_{t}(2)\\ \downarrow&&\downarrow&&\downarrow&&\downarrow\\ \left[\bar{h}\right]&&(\lambda^{\prime},\mu^{\prime},\nu^{\prime})&&S&&T\end{array}.$
(17)
The quantum numbers $(\lambda,\mu)$, $K$, and $L$ in Eq. (13) and
$(\lambda^{\prime},\mu^{\prime},\nu^{\prime})$, $S$, and $T$ in Eq. (17) have
an interpretation identical to that in Elliott’s fermionic SU(3) model
Elliott58 ; Isacker2016 .
The SU(3) labels $(\lambda,\mu)$ in the different versions of the IBM can be
worked out with the following procedure Elliott1999 . For a given number of
bosons $n_{b}$, one enumerates all possible Young diagrams $[\bar{h}]$ of
${\rm U}(\Lambda)$ or ${\rm U}_{st}(6)$. For each $[\bar{h}]$ one obtains the
${\rm SU}_{st}(4)$ labels $(\lambda^{\prime},\mu^{\prime},\nu^{\prime})$ from
the branching rule ${\rm U}(6)\supset{\rm SU}(4)$, and retains only the ones
that contain the favored supermultiplet. Finally, the SU(3) labels
$(\lambda,\mu)$ for the above $[\bar{h}]$ are found from the ${\rm
U}(\Lambda)\supset{\rm SU}(3)$ branching rule.
Let us apply this procedure to the 6p-6n system in the $pf$ shell. The lowest
eigenstates of the quadrupole-quadrupole interaction belong to the favored
SU(4) supermultiplet $(\lambda^{\prime},\mu^{\prime},\nu^{\prime})=(0,0,0)$
and the leading (fermionic) SU(3) representation is $(\lambda,\mu)=(24,0)$.
For $n_{b}=6$ bosons, the ${\rm U}_{st}(6)$ or ${\rm U}(\Lambda)$
representations containing this favored supermultiplet $(0,0,0)$ are
$[\bar{h}]=[6]$, $[4,2]$, $[2^{3}]$, and $[1^{6}]$, which have the SU(3)
labels $(\lambda,\mu)$ as listed in Table 1 for the $sd$-, $sdg$-, and
$sdgi$-IBM. The leading SU(3) representation $(24,0)$ is not contained in
$sd$-IBM but is present in the $[6]$ representation of U(15), and therefore it
is contained in $sdg$-IBM. Similarly, 6p-6n in the $sdg$ shell give rise to
the leading SU(3) representation $(36,0)$, which is not contained in $sd$\-
nor $sdg$-IBM but present in $sdgi$-IBM.
The generalization to the 2p-2n ($n=4$) and 4p-4n ($n=8$) systems in the $pf$
and $sdg$ shells is summarized in Table 2. The second column lists the leading
fermionic SU(3) representations and the third, fourth, and fifth columns
indicate whether this representation is contained in $sd$-, $sdg$-, and
$sdgi$-IBM, respectively. A dash (—) indicates that it is not, in which case
an NPA calculation adopting the corresponding $SD$, $SDG$, or $SDGI$ pairs
does not reproduce the full collectivity of the ground-state band in the
fermionic SU(3) model. For $n=4$ and $n=8$ nucleons in the $sdg$ shell no
exact mapping can be realized to $sdgi$-IBM and bosons with even higher
angular momentum are needed. It should be noted, however, that this generally
occurs for low nucleon number (e.g., for $n=12$ nucleons in the $sdg$ shell
the problem does not occur), for which NPA calculations with high angular
momentum pairs are still feasible.
Table 1: Leading SU(3) representations for 6 bosons in $sd$-, $sdg$-, and $sdgi$-IBM occurring in the ${\rm U}(\Lambda)$ and ${\rm U}_{st}(6)$ representations $[\bar{h}]$ containing the favored supermultiplet $(0,0,0)$.
(bosons)${}^{n_{b}}$ | $[\bar{h}]$ | $(\lambda,\mu)$
---|---|---
$(sd)^{6}$ | $[6]$ | $(12,0),(8,2),(4,4),(6,0),(0,6),\dots$
| $[4,2]$ | $(8,2),(6,3),(7,1),(4,4)^{2},(5,2),\dots$
| $[2^{3}]$ | $(6,0),(0,6),(3,3),(2,2)^{2},(0,0)$
| $[1^{6}]$ | $(0,0)$
$(sdg)^{6}$ | $[6]$ | $(24,0),(20,2),(18,3),(16,4)^{2},(18,0),\dots$
| $[4,2]$ | $(20,2),(18,3),(19,1),(16,4)^{3},(17,2),\dots$
| $[2^{3}]$ | $(18,0),(15,3),(12,6),(13,4),(14,2)^{3},\dots$
| $[1^{6}]$ | $(12,0),(8,5),(9,3),(3,9),(7,4),\dots$
$(sdgi)^{6}$ | $[6]$ | $(36,0),(32,2),(30,3),(28,4)^{2},(30,0),\dots$
| $[4,2]$ | $(32,2),(30,3),(31,1),(28,4)^{3},(29,2)^{2},\dots$
| $[2^{3}]$ | $(30,0),(27,3),(24,6),(25,4),(26,2)^{3},\dots$
| $[1^{6}]$ | $(24,0),(20,5),(21,3),(18,6),(19,4),\dots$
Table 2: Leading fermionic SU(3) representations $(\lambda,\mu)$ for $n$ nucleons in the $pf$ and $sdg$ shells and the U(6), U(15), and U(28) representations of the $n_{\rm b}=n/2$ boson system that contain this $(\lambda,\mu)$ in $sd$-, $sdg$-, and $sdgi$-IBM.
(shell)${}^{n}$ | $(\lambda,\mu)$ | $sd$-IBM | $sdg$-IBM | $sdgi$-IBM
---|---|---|---|---
$(pf)^{4}$ | $(12,0)$ | — | — | $[2]$
$(pf)^{8}$ | $(16,4)$ | — | — | $[4],[2^{2}]$
$(pf)^{12}$ | $(24,0)$ | — | $[6]$ | $[6],[4,2],[2^{3}],[1^{6}]$
$(sdg)^{4}$ | $(16,0)$ | — | — | —
$(sdg)^{8}$ | $(24,4)$ | — | — | —
$(sdg)^{12}$ | $(36,0)$ | — | — | $[6]$
While the best NPA solutions so far have been found by a numerically intensive
optimization, it turns out they can also be obtained from a deformed
“intrinsic” state. Again consider the 6p-6n system in the $pf$ shell. An
unconstrained Hartree-Fock (HF) calculation in this single-particle shell-
model space JohnsonSHERPA with a quadrupole-quadrupole interaction provides
us with a HF state with an axially symmetric quadrupole deformed shape, a
consequence of the spontaneous symmetry breaking Nambu of rotational
symmetry. One can project out a $K=0$ band with good angular momentum from
this HF state JohnsonLAMP , which exactly corresponds to the SU(3)
representation $(24,0)$ Elliott58 . We use $a$ and $\bar{a}$ to denote the HF
single-particle orbit and its time-reversal partner, respectively, and we
write the creation operator of a nucleon as $c_{a}^{\dagger}$. A Slater
determinant for an even number $2N$ of protons or neutrons can be written as a
pair condensate:
$\displaystyle\prod_{a=1}^{N}c_{a}^{\dagger}c_{\bar{a}}^{\dagger}|0\rangle=\mathcal{N}\left(\sum_{a}v_{a}~{}c^{\dagger}_{a}c_{\bar{a}}^{\dagger}\right)^{N}|0\rangle.$
(18)
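Equation (18) can be verified in a tiny Fock space. The sketch below (illustrative amplitudes $v_a$, two doubly degenerate levels, Jordan-Wigner matrices for the fermion operators) checks that the pair condensate is proportional to the Slater determinant:

```python
import numpy as np

def creation_ops(n_modes):
    """Jordan-Wigner fermion creation operators on a 2^n_modes Fock space."""
    I, Z = np.eye(2), np.diag([1.0, -1.0])
    cdag1 = np.array([[0.0, 0.0], [1.0, 0.0]])   # |1><0| on a single mode
    ops = []
    for k in range(n_modes):
        M = np.eye(1)
        for f in [Z] * k + [cdag1] + [I] * (n_modes - k - 1):
            M = np.kron(M, f)
        ops.append(M)
    return ops

n_levels = 2                         # HF orbits a and their time-reversed partners
cdag = creation_ops(2 * n_levels)    # mode order: a1, abar1, a2, abar2
vac = np.zeros(2 ** (2 * n_levels)); vac[0] = 1.0

v = np.array([0.8, -0.3])            # illustrative nonzero amplitudes v_a
P = sum(v[a] * cdag[2 * a] @ cdag[2 * a + 1] for a in range(n_levels))

slater = cdag[0] @ cdag[1] @ cdag[2] @ cdag[3] @ vac   # prod_a c_a^+ c_abar^+ |0>
condensate = np.linalg.matrix_power(P, n_levels) @ vac # (sum_a v_a c_a^+ c_abar^+)^N |0>

# Proportionality of Eq. (18): |<slater|condensate>| = ||slater|| ||condensate||
overlap = slater @ condensate
assert np.isclose(abs(overlap), np.linalg.norm(slater) * np.linalg.norm(condensate))
```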
The pair in the deformed HF state is a superposition of collective pairs of
good angular momentum in the shell model arXiv :
$\displaystyle\sum_{a}v_{a}~{}c^{\dagger}_{a}c_{\bar{a}}^{\dagger}=\sum_{JM}{{A}^{(J)}_{M}}^{\dagger}.$
(19)
For the appropriate $v_{a}$ one obtains $SDG$ pairs, which are the same as the
$SDG$ pairs obtained by the CG-NPA calculations. Similarly, the $SDGI$ pairs
responsible for $(36,0)$ for 6p-6n in the $sdg$ shell can also be projected out
from a deformed HF pair. The CG approach provides numerically optimal
solutions in the NPA but is computationally heavy, requiring hundreds or even
thousands of iterations. The HF approach derives pairs from an unconstrained
HF calculation, and the decomposition of pairs according to Eq. (19) has a very
low computational cost.
Figure 3: The ground rotational band of 52Fe. The experimental energies are
taken from Ref. expt1 and the shell-model results are obtained with the GXPF1
interaction.
$I^{\pi}$ | Expt. | SM | $SDG$
---|---|---|---
$2^{+}$ | 14.2(19) | 19.2 | 17.0
$4^{+}$ | 26(6) | 25.0 | 21.6
$6^{+}$ | 10(3) | 17.4 | 20.0
$8^{+}$ | 9(4) | 11.5 | 15.5
$10^{+}$ | | 12.7 | 10.5
Table 3: $B(E2;I\rightarrow I-2)$ values (in W.u.) for the ground rotational
band of 52Fe. The experimental values are taken from Ref. expt1
and the shell-model results are obtained with the GXPF1 interaction.
Finally, we show that the NPA with CG pairs provides a good description of
low-lying states of rotational nuclei also when a realistic shell-model
interaction is used. We exemplify this with the nucleus 52Fe, considered as a
6p-6n system in the $pf$ shell with the GXPF1 effective interaction gxpf1 .
Figure 3 and Table 3 compare, for the ground rotational band of 52Fe, the
experimental data expt1 , the full configuration shell model (SM), and the
$SDG_{\rm CG}$-pair approximation. Both the level energies and the $B(E2)$
values obtained with $SDG_{\rm CG}$ are in good agreement with the data and
with the shell model.
In summary, we construct in the NPA a collective subspace of the full shell-
model space such that the former exactly reproduces, without any
renormalization, the properties of the low-energy states of the latter. This
construction is valid for an SU(3) quadrupole-quadrupole Hamiltonian and is
achieved by determining the structure of the pairs with the conjugate-gradient
minimization technique or on the basis of a deformed HF calculation. Exact
correspondence is achieved only if a sufficient number of different pairs is
considered. For example, a 6p-6n system in the $pf$ ($sdg$) shell is
reproduced exactly with $SDG$ ($SDGI$) pairs; with just $SD$ pairs, an
important renormalization of all operators is required. We have an analytic
understanding of this result: The collective subspace of the NPA exactly
captures the collectivity of the full space if and only if the mapping to a
model constructed with bosons corresponding to the pairs gives rise to a
leading bosonic SU(3) representation that is also leading in fermionic SU(3).
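The conjugate-gradient minimization invoked above can be illustrated on a generic quadratic objective. The sketch below follows the Hestenes-Stiefel algorithm; the matrix and vector are purely illustrative, and the actual NPA objective (the pair-structure energy functional) is not reproduced here.

```python
import numpy as np

# Generic conjugate-gradient minimisation (Hestenes and Stiefel, 1952),
# the technique used in the text to optimise the pair-structure coefficients.
# This sketch minimises the quadratic 0.5 x^T A x - b^T x, i.e. solves Ax = b;
# the real NPA objective is more involved and only illustrated here.

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=100):
    x = x0.astype(float).copy()
    r = b - A @ x              # residual (negative gradient)
    p = r.copy()               # first search direction
    for _ in range(max_iter):
        rr = r @ r
        if np.sqrt(rr) < tol:  # converged
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)  # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        p = r + ((r @ r) / rr) * p  # next A-conjugate direction
    return x

# Illustrative symmetric positive-definite system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b, np.zeros(2))
print(np.allclose(A @ x, b))  # True
```

For an n-dimensional quadratic objective, the method converges in at most n iterations in exact arithmetic, which is why it is well suited to optimising a moderate number of pair-structure coefficients.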
For many years a central problem in nuclear structure has been the
construction of a collective subspace that decouples from the full shell-model
space. With this work the conditions necessary for this decoupling to be exact
are now understood for an SU(3) Hamiltonian. This understanding will pave the
way for the construction of viable collective subspaces for more realistic
shell-model interactions. It will also clarify the derivation of boson
Hamiltonians appropriate for quadrupole deformed nuclei. Similar techniques
conceivably might be applied elsewhere, such as to octupole-deformed nuclei
with a Sp($\Omega$) or SO($\Omega$) symmetry Isacker2016 .
###### Acknowledgements.
This material is based upon work supported by the National Key R&D Program of
China under Grant No. 2018YFA0404403, the National Natural Science Foundation
of China under Grants No. 12075169, 11605122, and 11535004, the U.S.
Department of Energy, Office of Science, Office of Nuclear Physics, under
Award No. DE-FG02-03ER41272, and the CUSTIPEN (China-U.S. Theory Institute for
Physics with Exotic Nuclei) funded by the U.S. Department of Energy, Office of
Science, under Grant No. DE-SC0009971.
## References
* (1) M. G. Mayer, Phys. Rev. 75, 1969 (1949).
* (2) O. Haxel, J. H. D. Jensen, and H. E. Suess, Phys. Rev. 75, 1766 (1949).
* (3) A. Bohr and B. R. Mottelson, Mat. Fys. Medd. K. Dan. Vidensk. Selsk 27, 16 (1953).
* (4) A. Bohr and B. R. Mottelson, Nuclear Structure (World Scientific,1998).
* (5) S. G. Nilsson, Mat. Fys. Medd. K. Dan. Vidensk. Selsk 29, 16 (1955).
* (6) J. Rainwater, Phys. Rev. 79, 432 (1950).
* (7) J. P. Elliott, Proc. R. Soc. A 245, 128 (1958); 245, 562 (1958).
* (8) T. Dytrych, K. D. Launey, J. P. Draayer, P. Maris, J. P. Vary, E. Saule, U. Catalyurek, M. Sosonkina, D. Langr, and M. A. Caprio, Phys. Rev. Lett. 111, 252501 (2013); T. Dytrych, K. D. Launey, J. P. Draayer, D. J. Rowe, J. L. Wood, G. Rosensteel, C. Bahri, D. Langr, and R. B. Baker, Phys. Rev. Lett. 124, 042501 (2020).
* (9) E. Caurier, G. Martínez-Pinedo, F. Nowacki, A. Poves, and A. P. Zuker, Rev. Mod. Phys. 77, 427 (2005).
* (10) K. Heyde and J. L. Wood, Rev. Mod. Phys. 83, 1467 (2011).
* (11) T. Otsuka, Y. Tsunoda, T. Abe, N. Shimizu, and P. Van Duppen, Phys. Rev. Lett. 123, 222502 (2019).
* (12) A. Arima and F. Iachello, Phys. Rev. Lett. 35, 1069 (1975); Ann. Phys. 111, 201 (1978).
* (13) F. Iachello and A. Arima, The Interacting Boson Model (Cambridge University Press, Cambridge, 1987).
* (14) T. Otsuka, A. Arima, F. Iachello, and I. Talmi, Phys. Lett. B 76, 139 (1978); T. Otsuka, A. Arima, and F. Iachello, Nucl. Phys. A 309, 1 (1978).
* (15) J. N. Ginocchio and C. W. Johnson, Phys. Rev. C 51, 1861 (1995).
* (16) K. Nomura, N. Shimizu, and T. Otsuka, Phys. Rev. Lett. 101, 142501 (2008).
* (17) K. Nomura, T. Otsuka, N. Shimizu, and L. Guo, Phys. Rev. C 83, 041302(R) (2011).
* (18) J. Q. Chen, Nucl. Phys. A 626, 686 (1997).
* (19) Y. M. Zhao, N. Yoshinaga, S. Yamaji, J. Q. Chen, and A. Arima, Phys. Rev. C 62, 014304 (2000).
* (20) Y. M. Zhao and A. Arima, Phys. Rep. 545, 1 (2014).
* (21) I. Talmi, Nucl. Phys. A 172, 1 (1971).
* (22) Y. K. Gambhir, A. Rimini, and T. Weber, Phys. Rev. 188, 1573 (1969); Y. K. Gambhir, S. Haq, and J. K. Suri, Ann. Phys. (N.Y.) 133, 154 (1981).
* (23) K. Allaart, E. Boeker, G. Bonsignori, M. Savoia, and Y. K. Gambhir, Phys. Rep. 169, 209 (1988).
* (24) Y. Lei, Z. Y. Xu, Y. M. Zhao, and A. Arima, Phys. Rev. C 80, 064316 (2009); 82, 034303 (2010).
* (25) M. A. Caprio, F. Q. Luo, K. Cai, V. Hellemans, and C. Constantinou, Phys. Rev. C 85, 034324 (2012).
* (26) Y. Y. Cheng, Y. M. Zhao, and A. Arima, Phys. Rev. C 94, 024307 (2016); Y. Y. Cheng, C. Qi, Y. M. Zhao, and A. Arima, Phys. Rev. C 94, 024321 (2016).
* (27) C. Qi, L. Y. Jia, and G. J. Fu, Phys. Rev. C 94, 014312 (2016).
* (28) J. N. Ginocchio, Phys. Lett. B 79, 173 (1978); 85, 9 (1979); Ann. Phys. 126, 234 (1980).
* (29) C. L. Wu, D. H. Feng, X. G. Chen, J. Q. Chen, and M. W. Guidry, Phys. Rev. C 36, 1157 (1987); C. L. Wu, D. H. Feng, and M. Guidry, Adv. Nucl. Phys. 21, 227 (1994).
* (30) P. Halse, Phys. Rev. C 39, 1104 (1989).
* (31) Y. M. Zhao, N. Yoshinaga, S. Yamaji, and A. Arima, Phys. Rev. C 62, 014316 (2000).
* (32) Z. Y. Xu, Y. Lei, Y. M. Zhao, S. W. Xu, Y. X. Xie, and A. Arima, Phys. Rev. C 79, 054315 (2009).
* (33) Y. Lei, S. Pittel, G. J. Fu, and Y. M. Zhao, arXiv: 1207.2297v1; S. Pittel, Y. Lei, Y.M. Zhao, and G. J. Fu, AIP Conference Proceedings 1488, 300 (2012); S. Pittel, Y. Lei, G. J. Fu, and Y. M. Zhao, Journal of Physics: Conference Series 445, 012031 (2013).
* (34) M. R. Hestenes and E. Stiefel, J. Res. Natl. Bur. Stand. 49, 409 (1952).
* (35) R. Fletcher and C. M. Reeves, Comput. J. 7, 149 (1964).
* (36) E. P. Wigner, Phys. Rev. 51, 106 (1937).
* (37) J. P. Elliott and J. A. Evans, Phys. Lett. B 101, 216 (1981).
* (38) P. Van Isacker and S. Pittel, Phys. Scr. 91, 023009 (2016).
* (39) J. P. Elliott and J. A. Evans, J. Phys. G 25, 2071 (1999).
* (40) I. Stetcu and C. W. Johnson, Phys. Rev. C 66, 034301 (2002).
* (41) Y. Nambu, Phys. Rev. Lett. 4, 380 (1960).
* (42) C. W. Johnson and K. D. O’Mara, Phys. Rev. C 96, 064304 (2017); C. W. Johnson and C. F. Jiao, J. Phys. G 46, 015101 (2019).
* (43) G. J. Fu and C. W. Johnson, Phys. Lett. B 809, 135705 (2020).
* (44) Y. Dong, H. Junde, Nucl. Data Sheets 128, 185 (2015).
* (45) M. Honma, T. Otsuka, B. A. Brown, and T. Mizusaki, Phys. Rev. C 69, 034335 (2004).
# "Can I Touch This?": Survey of Virtual Reality Interactions via Haptic
Solutions
Elodie Bouzbib (ISIR & ISCD, Sorbonne Université, Paris, France), Gilles
Bailly, Sinan Haliyo (ISIR, Sorbonne Université, Paris, France), and Pascal
Frey (ISCD, Sorbonne Université, Paris, France)
(2021)
###### Abstract.
Haptic feedback has become crucial to enhance the user experiences in Virtual
Reality (VR). This justifies the sudden burst of novel haptic solutions
proposed these past years in the HCI community. This article is a survey of
Virtual Reality interactions, relying on haptic devices. We propose two
dimensions to describe and compare the current haptic solutions: their degree
of physicality, as well as their degree of actuation. We depict a compromise
between the user and the designer, highlighting how the range of required or
proposed stimulation in VR is opposed to the haptic interfaces flexibility and
their deployment in real-life use-cases. This paper (1) outlines the variety
of haptic solutions and provides a novel perspective for analysing their
associated interactions, (2) highlights the limits of the current evaluation
criteria regarding these interactions, and finally (3) reflects on the
interaction, operation and conception potentials of "encountered-type
haptic devices".
haptics, Virtual Reality, human factors, haptic devices
Copyright: ACM, 2021. DOI: 10.1145/XXXXXXXXX.XXXXXXXX
Conference: IHM ’20’21, 32e conférence Francophone sur l’Interaction
Homme-Machine, April 13–16, 2021, Metz, France
Price: 15.00. ISBN: XXX-X-XXXX-XXXX-X/XX/XX
CCS Concepts: Human-centered computing: Virtual reality; Haptic devices;
Interaction design theory, concepts and paradigms
## 1\. Introduction
In the last few years, the terms ”Virtual Reality” and ”Haptics” have been
amongst the most quoted keywords in HCI conferences such as ACM CHI or ACM
UIST. Indeed, Head-Mounted Displays (HMDs) are now affordable and provide high
quality visual and audio feedback, but augmenting the experience by enhancing
VR through the sense of touch (haptic feedback) has become a main challenge. A
large variety of haptic solutions has been proposed recently; nonetheless,
they have highly different scopes, due to the wide range of haptic features.
It is hence difficult to compare their similarities and differences and to
gain a clear understanding of the design possibilities.
In this paper, we present a survey of existing haptic interactions in VR. We
use the terms ”haptic interactions” to emphasize the focus on the users’
actions, and to analyse how ”haptic devices” influence their behaviours in
VR.
We provide a synthesis of existing research on haptic interactions in VR and
depict, from the required haptic feature stimulation and the interaction
opportunities, a design space discussing and classifying the associated haptic
solutions according to two dimensions: their degree of physicality, i.e. their
physical consistency and level of resemblance as to replicating an object, and
their degree of actuation, i.e. whether they rely on a motor-based hardware
implementation enabling autonomous displacements of the interface (e.g.
changing its shape or position) (Table 1).
This design space is useful to characterize, classify and compare haptic
interactions and the corresponding haptic solutions. We also propose two
criteria, User experience and Conception costs, highlighting the implicit
trade-offs between the quality of the user experience and the intricacy for
the designer to implement these solutions. Both of the user’s and designer’s
perspectives are hence considered in a novel framework to evaluate haptic
interactions.
Finally, we illustrate the utility of our design space by analyzing and
comparing four haptic solutions. This analysis indicates that (1) the use of
real props in a virtual environment benefits the user experience, but limits
the interactions to the existing props available within the VR arena; (2) the
use of robotised interfaces enables more various interactions; (3) combining
them offers the best user experience/design cost trade-off; (4) current
evaluation methods do not allow a fair representation and comparison of haptic
solutions.
We hence propose guidelines to evaluate haptic interactions from both the user
and designer perspectives. We also outline how intertwining interfaces can
expand haptic opportunities, by conducting a deeper investigation of Robotic
Graphics interfaces (McNeely, 1993). Indeed, in the quest for the Ultimate
Display (Sutherland, 1965), these show (a) the largest variety of
interactions, (b) the most reliable interfaces through their automation, and
(c) the most natural interactions as they encounter the users at their
positions of interest without further notice.
## 2\. Background
Surveys in Virtual Reality consider the technology itself and its limits
(Zhao, 2009; Zhou and Deng, 2009), or more specifically its use-case
scenarios. VR is indeed used in industries (Berg and Vance, 2017; Zimmermann,
2008), healthcare (Moline, 1997), or in gaming. In gaming, the concerns are
mainly regarding the evaluation protocols (Merino et al., 2020), i.e.
presence (Schuemie et al., 2001) and its related questionnaires (Schwind et
al., 2019; Usoh et al., 2000). Surveys, for instance, compare the results
obtained when the questionnaires are administered in VR versus in the real
world (Alexandrovsky et al., 2020; Putze et al., 2020). The user behaviour in
VR is also analysed, through gesture recognition (Sagayam and Hemanth, 2017)
or system control techniques (e.g. menus) (Bowman and Wingrave, 2001).
The research areas in haptics are strikingly similar. Indeed,
surveys analyse haptics themselves (Varalakshmi et al., 2012), haptic devices
(Seifi et al., 2019; Rakkolainen et al., 2020; Talvas et al., 2014; Hayward
and Maclean, 2007) or examine the scenarios which benefit from a stimulation
of the haptic cues. Haptics are used in telemanipulation (Galambos, 2012), for
training in the industry (Xia, 2016; Bloomfield et al., 2003) or for
healthcare purposes (Coles et al., 2011; Rangarajan et al., 2020), or in
gaming (Kim and Schneider, 2020).
Finally, some surveys have been proposed at the intersection of VR and haptics
and focus either on specific methods (pseudo-haptic feedback) (Lécuyer, 2009),
technology according to stimulated haptic features (temperature, shape, skin
stretch, pressure) (Wang et al., 2020d; Dominjon et al., 2007) or the
motivations and applications of each haptic device category (Wang et al.,
2020a). In contrast our survey outlines the variety of haptic interactions and
technologies in VR and provides a framework to analyse them.
## 3\. Scope and Definitions
The scope of this article is to analyse how a single user interacts and is
provided with believable haptic feedback in Virtual Reality (Magnenat-Thalmann
et al., 2005). We thus define the terms ”virtual reality” and ”haptics” and
how they are related.
### 3.1. Virtual Reality
Virtual reality corresponds to a 3D artificial digital environment in which
users are immersed. The environment can be projected onto a large screen,
in a simulation platform for instance, or multiple ones, such as with CAVE
technology (where the image is projected onto at least 3 distinct walls of a
room-scale arena). In this survey, we consider an artificial reality
(Wexelblat, 1993) where users do not perceive their physical vicinity: the
outside world is not noticeable and users are fully immersed through a head-
mounted display (HMD). For instance, augmented reality (AR), where the
physical environment is augmented with virtual artefacts, is out of our scope.
Through a Head Mounted Display (HMD), Virtual reality creates immersive
experiences for the users. These are only limited by the designers’
imagination, and are evaluated through presence. Presence is defined as the
”subjective experience of being in one place, even when one is physically
situated in another” (Witmer and Singer, 1998; Slater, 1999). It quantifies
the users’ involvement and naturalness of interactions through control,
sensory, distraction and realism factors. This heavily relies on the sensory
input and output channels; however, as VR has mainly integrated audio and
visual cues so far, quantifying the haptic contribution to an experience
remains difficult.
### 3.2. Haptics: Tactile vs Kinesthetic Perception
Haptics is the general term for the sense of touch. It combines two types of
cues: tactile and kinesthetic. Tactile cues are sensed through the skin,
while kinesthetic ones come from proprioception, through the muscles and the
tendons.
#### 3.2.1. Tactile cues:
The skin is composed of four types of mechanoreceptors (Lederman and Klatzky,
2009). The first ones, ”Merkel nerve endings”, transmit mechanical pressure,
position and shapes or edges. They are stimulated whilst reading Braille for
instance. The second ones, ”Ruffini corpuscle end-organ”, are sensitive to
skin stretch and provide both pressure and slippage information. The third
ones are the ”Pacinian corpuscles”, which are sensitive to vibration and
pressure. The last ones, ”Meissner’s corpuscles”, are highly sensitive and
provide light touch and vibration information. The skin also contains
thermoreceptors, which transmit information about temperature: the Ruffini
endings respond to warmth, while the Krause ones detect cold. Through tactile
cues, the human can hence feel shapes or edges, pressure, vibrations or
temperature changes.
#### 3.2.2. Kinesthetic cues:
The kinesthetic cues rely on proprioception, i.e. the perception and
awareness of the positions and movements of our own body parts.
Mechanoreceptors within the muscles, the ”spindles”, communicate to the
nervous system information about the forces muscles generate, as well as
their changes in length (Jones, 2000). The primary type of spindle is
sensitive to the velocity and acceleration of a muscle contraction or limb
movement, while the secondary type provides information about static muscle
length or limb positions. Kinesthetic cues hence allow one to feel forces,
as well as to perceive weights or inertia.
### 3.3. VR & Haptics
Whenever we touch or manipulate an object, the combination of these two
previous cues allows us to understand its material, but also its shape and the
constraints it imposes on the user. On the one side, adding physical presence
(Lepecq et al., 2008) through haptic feedback in VR enhances the users’
immersion, even at an emotional and physiological scale: the heart rate of a
user can literally increase with the use of haptics through real objects
(Insko, 2001). Haptics is also required for interacting with the environment:
users need to control the changes in the environment (Held and Durlach,
1992) and to be aware of the modifications they have physically made (e.g.
moving virtual objects, pushing a button). On the other side, haptics can
benefit from VR. For instance, Lécuyer et al. leverage the users’ vision and
analyse how it affects their haptic perception (Lécuyer, 2009). This approach,
”pseudo-haptic feedback”, tricks the users’ perception into feeling virtual
objects’ stiffness, texture or mass. Many more haptic features can be
stimulated, such as temperature, shape, skin stretch or pressure.
## 4\. Analyzing haptic interactions
The main objective of this survey is to provide analytical tools to evaluate
and compare haptic interactions.
### 4.1. Design space
We propose a two-dimension framework to discuss and classify haptic solutions
in VR (see Table 1).
The first dimension is their degree of physicality, i.e. how tangible,
physically consistent and resembling the haptic perception is with respect to
the virtual objects. This dimension is drawn as a continuum, from ”no
physicality” to ”real objects” (see Figure 2). We find that this continuum can
be discretised into two categories: whether the solutions use real objects or
not.
The second, orthogonal dimension is their degree of actuation, i.e. whether
haptic solutions rely on a motor-based hardware implementation enabling
autonomous displacements (e.g. changing their shape, position, etc.).
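For concreteness, the two dimensions can be encoded as a small data structure. This is a hypothetical reading aid, not part of the survey; the example devices and their quadrant placement below are our own illustrative assumptions:

```python
from dataclasses import dataclass

# Encode the survey's two-dimensional design space: each haptic solution is
# placed by its degree of physicality (discretised: real objects or not) and
# its degree of actuation (motor-based hardware or not).

@dataclass(frozen=True)
class HapticSolution:
    name: str
    uses_real_objects: bool  # degree of physicality, discretised
    is_actuated: bool        # degree of actuation

def quadrant(s: HapticSolution) -> str:
    """Return the design-space quadrant a solution falls into."""
    actuation = "robotics" if s.is_actuated else "no robotics"
    physicality = "real objects" if s.uses_real_objects else "no real objects"
    return f"{actuation}, {physicality}"

examples = [
    HapticSolution("passive props", True, False),
    HapticSolution("grounded force-feedback arm", False, True),
    HapticSolution("encountered-type device carrying props", True, True),
    HapticSolution("vibrotactile controller", False, False),
]
for s in examples:
    print(f"{s.name}: {quadrant(s)}")
```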
### 4.2. Analysis criteria
We consider two main criteria to analyse haptic interactions in VR. They cover
both the user and designer perspectives.
The User experience is the first criterion and includes two aspects:
interaction opportunities and visuo-haptic consistency/discrepancy.
Interaction opportunities represent to what extent haptic solutions allow
users to interact and act (e.g. navigate, explore, manipulate) in a VR scene
as they would in the real world. Visuo-haptic consistency/discrepancy refers
to the tactile and kinesthetic perceptual rendering of these interactions.
These two sub-criteria are complementary, focusing on action and perception
respectively.
The second criterion is the conception cost, i.e. the challenges designers
must address when designing haptic interactions. We distinguish
implementation and operation costs. Implementation costs include several
technical aspects related to the acceptability of a haptic solution such as
safety, robustness and ease-of-use (Dominjon et al., 2007). Operation costs
include the financial and human costs required to deploy these technologies.
### 4.3. Application
We rely on this design space and criteria to highlight and understand the
trade-offs between the user’s interactions opportunities in VR, and the
designers’ challenges in conception. This survey offers a novel perspective
for researchers to study haptic interactions in VR. It can be used to compare
and analytically evaluate existing haptic interactions. For a given
application, designers can evaluate the most adapted haptic interaction. For a
given technique, they can evaluate a haptic solution depending on their needs
(tasks, workspace, use-cases etc).
We first discuss haptic interactions from the user perspective (Section 5 \-
Interaction opportunities, Section 6 \- Visuo-Haptic Consistency/Discrepancy).
We then adopt the designer perspective in Section 7. We use our design space
in Sections 6 and 7, which emphasize haptic solutions.
Table 1. We propose two dimensions to classify current technologies: their
degrees of physicality and actuation. Four categories are hence drawn: top
left, no robotics, no real objects; top right, robotics, no real objects;
bottom right, robotics, real objects; bottom left, no robotics, real objects.
Current technologies and interaction techniques are displayed in the category
they belong to.
Figure 1. Tasks in VR: (1) Navigation through Point & Teleport (Funk et al.,
2019); (2) Navigation through a building, using redirection (Cheng, 2019); (3)
Exploration with Bare-Hands: A user finds an invisible haptic code (Bouzbib et
al., 2020); (4) Manipulation: Haptic proxies rearrange themselves to form a
plane the user can manipulate (Zhao et al., 2017); (5) Edition: the user
changes the shape of a haptic 2.5D tabletop (Nakagaki et al., 2016b); (6) The
user is interacted with by a robotic arm to feel emotions (Teyssier et al.,
2020).
## 5\. Interaction Opportunities
In the real world, users move freely without constraints, pick any object in
their environment and interact with it with their bare hands. They can also be
acted upon, by the environment (wind, unexpected obstacles) or by other users,
for instance to catch their attention or to lead them somewhere. A natural
environment also physically constrains users through their entire body.
In this section, we discuss the interaction opportunities in VR and the
methods available to provide them. In particular, we discuss them through four
main tasks: navigation, exploration, manipulation and edition.
### 5.1. Navigation
We qualify a navigation task as the exploration of the environment through
vision and the ability to move through it via the users’ displacements. We
identify three main techniques to navigate in VR. The first two rely on
controllers and push buttons, where the users do not necessarily physically
move. The last one is more natural as it allows the users to walk in the VR
arena.
#### 5.1.1. Panning:
With grounded desktop haptic solutions, such as the Virtuose (Haption, 2019),
users need to push a button to clutch the devices and hence move within the
environment.
#### 5.1.2. Point & Teleport:
With ungrounded solutions, such as controllers, the common technique is
teleportation. Users point their controllers (Baloup et al., 2018) to
predetermined teleportation target areas, and are displaced in position but
also in orientation (Funk et al., 2019) (Figure 1 \- 1).
#### 5.1.3. Real Walking:
Real walking in VR, ”perambulation”, has shown the best immersion and presence
results (Usoh et al., 1999; Steinicke et al., 2013) because it relies on
proprioception and kinesthetic feedback through the legs and gait awareness.
Nonetheless, VR arenas are not infinite and HMDs have a limited tracking
space, hence methods need to be developed for the user to be able to reach any
location of interest. One approach is to mount the previously discussed
grounded desktop haptic solutions on mobile (Satler et al., 2011; Lee et
al., 2007; Formaglio et al., 2005; Lee et al., 2009; Nitzsche et al., 2003;
Pavlik et al., 2013) or wearable (Barnaby and Roudaut, 2019) interfaces. Users
however still have to continuously hold the handle in their palm. Other
interfaces hence allow for free-hand room-scale VR (Bouzbib et al., 2020;
Wang et al., 2020c; Yixian et al., 2020). For the users to perambulate in an
infinite workspace, the virtual environment can also visually be warped for
the users to unconsciously modify their trajectory or avoid obstacles (Cheng,
2019; Razzaque et al., 2001; Yang et al., 2019) (Figure 1 \- 2). This infinite
redirection can also be provided from Electro-Muscle Stimulation (EMS) on the
users’ legs (Auda et al., 2019), with wearable electrodes. The user can also
wear actuated stilts to perceive staircases (Schmidt et al., 2015) or a
vibrating shoe to perceive virtual materials (Strohmeier et al., 2020). To
remain unencumbered from these wearable techniques, the VR arena can also
include robotised techniques: users can for instance walk on treadmills
(Vonach et al., 2017; Frissen et al., 2013), or on movable platforms that
encounter their feet (Iwata, 2005, 2013).
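The visual warping mentioned above can be sketched with a simple rotation gain. This is a minimal illustration in the spirit of redirected walking (Razzaque et al., 2001); the gain value is an assumption for illustration, and real systems keep gains within perceptual detection thresholds:

```python
# Minimal sketch of rotation-gain redirection: the virtual camera yaw is
# scaled relative to the physical head yaw, so users physically turn more
# (or less) than they perceive and can be steered inside a finite arena.
# The gain value below is illustrative, not a validated threshold.

ROTATION_GAIN = 1.2  # virtual yaw / physical yaw

def virtual_yaw(physical_yaw_deg, gain=ROTATION_GAIN):
    """Yaw rendered in the HMD for a given physical head rotation."""
    return physical_yaw_deg * gain

# A 90-degree physical turn is rendered as a 108-degree virtual turn:
# the user under-rotates physically and can be steered away from the walls.
print(round(virtual_yaw(90.0), 6))  # 108.0
```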
### 5.2. Hand Interactions
In the real world, bare-hands interaction is important to execute everyday
tasks (exploration, manipulation, edition). However, in VR, users commonly
have to hold controllers, wearables or handles, which create a discrepancy
between what the users feel and see (Yokokohji et al., 1999). These exploit
the God-object principle (Zilles and Salisbury, 1995), as opposed to bare-
hands Real-touch interactions.
#### 5.2.1. God-Object:
The controller is considered as an extension of the users’ hands, represented
by a proxy that does not undergo physics or rigid collisions, and is attached
to a complementary rigid object with a spring-damper model. The latter hence
moves along with the proxy, but is constrained by the environment. Whenever it
collides with an object of interest, the users perceive the aforementioned
spring-damper stiffness through kinesthetic feedback.
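A minimal one-dimensional sketch of this proxy coupling, assuming a single virtual wall at x = 0; the principle follows Zilles and Salisbury (1995), but the stiffness and damping values are illustrative assumptions, not numbers from the cited work:

```python
# God-object principle in 1D: the proxy tracks the haptic device but is
# constrained to stay outside the virtual wall; the user feels the
# spring-damper force that couples the device back to the proxy.

WALL_X = 0.0       # free space is x >= 0
STIFFNESS = 800.0  # N/m, illustrative coupling stiffness
DAMPING = 2.0      # N.s/m, illustrative coupling damping

def update_proxy(device_x):
    """The proxy follows the device but never penetrates the wall."""
    return max(device_x, WALL_X)

def feedback_force(device_x, device_vx):
    """Spring-damper force rendered on the haptic device handle."""
    proxy_x = update_proxy(device_x)
    return STIFFNESS * (proxy_x - device_x) - DAMPING * device_vx

# Free space: proxy and device coincide, no force is rendered.
print(feedback_force(0.01, 0.0))    # 0.0
# 5 mm of penetration: the proxy stays on the wall and a 4 N restoring
# force (800 N/m * 0.005 m) pushes the device out.
print(feedback_force(-0.005, 0.0))  # 4.0
```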
Users hence interact through a proxy, like a desktop mouse, whose position is
not co-located with the users’ vision. Bare-hands interactions are not
necessarily needed, depending on the use-case. For instance, in healthcare and
surgery training, users are more likely to interact with a tool, such as a
scalpel or a clamp. Continuously holding the god-object is hence not a
constraint; however, the co-location of vision and haptics is recommended
(Ortega and Coquillart, 2005).
#### 5.2.2. Real Touch:
In other scenarios, such as gaming, industry or tool training (Winther et al.,
2020; Strandholt et al., 2020), using the appropriate tools through props and
real objects is more natural. The users however need to be able to reach them
whenever required. Some interfaces (e.g. Robotic Graphics; see Section 7.3)
are hence developed in this regard, to encounter the users whenever they
feel like interacting.
### 5.3. Exploration
As opposed to the previous definition of ”navigation”, based on vision cues,
an ”exploration” task consists in the ability to touch the environment and
understand its constraints. Thoroughly exploring an environment in VR can be
done through different haptic features, and can improve the users’ depth
perception (Makin et al., 2019) or their estimation of distances to an object.
The different methods for exploring the environment are detailed in Section 6.
Whenever users are exploring the environment, shapes or textures are felt
through their body displacements. They need to move for their skin to stretch
(through tactile cues) or their muscles to contract (through kinesthetic
cues).
#### 5.3.1. Through Tactile cues:
Whenever real props or material patches are available, users can naturally
interact with their fingertips to feel different materials (Degraen et al.,
2019; Araujo et al., 2016), textures (Benko et al., 2016; Lo et al., 2018),
temperatures (Ziat et al., 2014) or to feel shapes and patterns through their
bare-hands (Bouzbib et al., 2020; Cheng et al., 2017) (Figure 1 \- 3). When no
physicality is available, a stimulation can still be performed. As seen in
surface haptic displays (Bau et al., 2010), vibrations between 80 and 400 Hz
are felt through the skin, hence users can perceive stickiness, smoothness,
pleasure, vibration or friction, and for instance explore a 3D terrain or
volumetric data (Sinclair et al., 2014). Vibrations can then be combined with
auditory and vision cues to render collisions in VR (Boldt et al., 2018).
#### 5.3.2. Through Kinesthetic cues:
Exploring the environment can also be done through kinesthetic cues: the users
can literally be physically constrained to feel a wall, using electro-muscle
stimulation (EMS) for instance (Lopes et al., 2017). With the god-object
principle, users can also explore the environments’ constraints through force-
feedback. In this configuration, the users’ arms are constrained by haptic
desktop interfaces, providing strong enough forces to simulate a physical
collision and discriminate shapes.
### 5.4. Manipulation
A manipulation task is performed whenever the position or orientation of an
object is modified.
#### 5.4.1. Direct Manipulation:
In VR, we distinguish the direct manipulation (Bryson, 2005), ”the ability for
a user to control objects in a virtual environment in a direct and natural
way, much as objects are manipulated in the real world” from
pointing/selecting an object with controllers. A direct manipulation relies on
the ability to hold an object with kinesthetic feedback, feel its weight
(Lopes et al., 2017; Zenner and Krüger, 2019; Heo et al., 2018; Sagheb et al.,
2019; Zenner and Kruger, 2017; Shigeyama et al., 2019), shape (Follmer et al.,
2013; Sun et al., 2019; Kovacs et al., 2020), and constraints from the virtual
environment, for instance when making objects interact with each other
(Bouzbib et al., 2020). Changing a virtual object’s position or orientation can
be used as an input in the virtual environment: in (Zhao et al., 2017) for
instance, the user modifies a light intensity by moving a handle prop in the
real environment. By transposing (Lopes et al., 2015) in VR, an object could
even communicate its dynamic use to the user.
#### 5.4.2. Pseudo-Haptic Manipulation:
Leveraging vision over haptics makes it possible to move an object with
different friction, weight or force perceptions (Rietzler et al., 2018; Samad
et al., 2019; Pusch and Lécuyer, 2011; Rietzler et al., 2019). For instance,
visually reducing the speed of a virtual prop’s displacement leads the users
to apply more force to move it, modifying their friction/weight perceptions.
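This manipulation can be sketched as a control/display (C/D) ratio applied to the tracked hand motion; the ratio values and the weight interpretation below are illustrative assumptions, not parameters from the cited studies:

```python
# Pseudo-haptic weight via a control/display ratio: rendering the virtual
# object's displacement smaller than the tracked hand displacement makes
# users move more for the same visual result, which they tend to
# interpret as added weight or friction.

def displayed_displacement(hand_displacement_m, cd_ratio):
    """Scale the rendered object motion relative to the tracked hand motion."""
    return hand_displacement_m * cd_ratio

light = displayed_displacement(0.10, 1.0)  # object follows the hand exactly
heavy = displayed_displacement(0.10, 0.6)  # object lags: perceived as heavier
print(round(light, 3), round(heavy, 3))    # 0.1 0.06
```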
### 5.5. Edition
We qualify an Edition task as a modification of an object property, other than
its orientation or position (for example through its scale (Xia et al., 2018)
or shape).
#### 5.5.1. Physical Edition:
Editing an interface in VR requires it to be fully equipped with sensors. With
wearables, for instance, the positions of the hand phalanges are known and can
be tightly linked to an object property (Villa Salazar et al., 2020). Knowing
their own position, modular interfaces can be rearranged to provide stretching
or bending tasks (Feick et al., 2020), or be pushed on with a tool to reduce
in size (Teng et al., 2019).
Shape-changing interfaces have been developed to dynamically modify material
properties (Nakagaki et al., 2016b) (Figure 1 \- 5) or augment the
interactions in Augmented Reality (AR) (Leithinger et al., 2013), however
these techniques only consider HMDs and VR as future work directions.
These interfaces are relevant as 2.5D tabletops are already used in VR.
Physically editing the virtual world through them could be implemented in a
near future, by intertwining these interfaces with 3D modelling techniques (De
Araújo et al., 2013).
#### 5.5.2. Pseudo-Haptic Edition:
The difficulty behind changing a real object property is to track it in real-
time. This is why pseudo-haptic techniques are relevant: they visually change the
object properties such as their shape (Achibet et al., 2017), compliance (Lee
et al., 2019; Sinclair et al., 2019), or their bending curvature (Heo et al.,
2019) without physically editing the object.
### 5.6. Scenario-based Interactions
In the real world, humans are free to interact with any object without further
notice. In this regard, common controllers enable interactions with any object
through pointing, but they display a high visuo-haptic discrepancy. In more
advanced haptically rendered Virtual environments, users are often constrained
to scenario-based interactions: only a few interactable objects are available,
accordingly with the scenario’s progress.
The greater the virtual:physical haptic consistency, the harder it is to
support non-deterministic scenarios, where the user is free to interact with
any object with no regard to the scenario’s progress. High quality haptic
rendering in non-deterministic scenarios can be achieved through three
methods: (a) numerous objects and primitives are available for interactions
(Hettiarachchi and Wigdor, 2016); (b) the users’ intentions are to be
predicted prior to interaction to make it occur (Bouzbib et al., 2020; Cheng
et al., 2017); (c) props modify their own topology to match the users’ expected
haptic rendering (Siu et al., 2018).
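Intention prediction (method b) can be sketched with a simple heuristic: score each candidate object by how well it aligns with the hand's current motion and pick the best one. The function below is an illustrative toy, not the predictor used in any of the cited systems:

```python
import math

def predict_target(hand_pos, hand_vel, objects):
    """Pick the 2D object the hand is most likely heading toward.

    Scores each object by the angle between the hand's velocity vector
    and the direction from the hand to the object; the smallest angle
    wins. Illustrative heuristic only; real systems fuse gaze, speed
    profiles and scene semantics.
    """
    def angle_to(obj):
        dx, dy = obj[0] - hand_pos[0], obj[1] - hand_pos[1]
        dot = dx * hand_vel[0] + dy * hand_vel[1]
        norm = math.hypot(dx, dy) * math.hypot(*hand_vel)
        return math.acos(max(-1.0, min(1.0, dot / norm)))
    return min(objects, key=angle_to)
```

A robot or operator can then pre-position the prop matching the predicted target before the hand arrives.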
### 5.7. Environment-Initiated Interactions
In both real and virtual environments with tangible interfaces, users usually
are the decision makers and get to choose their points of contact during the
next interaction. However, users themselves can be considered as tangible
interfaces: uncontrolled interactions, such as being touched by a colleague,
or feeling a temperature change in the environment (Shaw et al., 2019; Ziat et
al., 2014), are part of everyday interactions that can be transposed in
Virtual Reality. Replicating a social touch interaction in VR for instance
increases presence (Hoppe et al., 2020) or invokes emotions (Teyssier et al.,
2020).
This type of interaction is recurrent in sports simulations, where the user
undergoes forces from their environment and perceives impacts (jumping into
space (Gugenheimer et al., 2016), shooting a soccer ball (Wang et al., 2020b),
goalkeeping in a soccer game (Tsai and Chen, 2019), paragliding (Ye et al.,
2019), intercepting a volleyball (Günther et al., 2020), flying (Cheng et al.,
2014)).
These interactions involve multiple force types (tension, traction, reaction,
resistance, impact) that help enhance the user experience in VR (Wang et al.,
2020c). These can even be strong enough to lead the user through forces
(Bouzbib et al., 2020).
### 5.8. Whole-Body Involvement
All the previous subsections evoke interactions that mainly involve the hands
or the fingers. This paradigm is challenged in (Zielasko and Riecke, 2020):
users should be able to choose their own posture. This is currently only enabled in
room-scale VR applications, where users experience sitting, standing, climbing
or crouching (Teng et al., 2019; Suzuki et al., 2020; Bouzbib et al., 2020;
Danieau et al., 2012) and interact with their whole-body.
Figure 2. Degree of physicality continuum in VR. (1) Haptic desktop devices
enable exploring the environment through a handle (Lee et al., 2009) with the
god-object principle; (2) A controller (Benko et al., 2016) or (3) a wearable
(Fang et al., 2020) simulate objects for exploration tasks; (4) Mid-air
technology (Rakkolainen et al., 2020) create vibrations through the user’s
hand to simulate an object; (5) Passive proxies are oriented for the user to
feel objects’ primitives with their hands (Cheng et al., 2017); (6) Objects
from the environment are assigned to virtual props with the same primitives
(Hettiarachchi and Wigdor, 2016); (7) Real objects or passive props can be
manipulated and interacted with each other (Bouzbib et al., 2020).
Figure 3. Simulating Objects. (1) A controller with an inflatable prop in the
user’s palm simulates holding a bomb (Teng et al., 2018). (2) A pin-based
interface shaped as a ball interacts in the user palm to replicate a hamster
(Yoshida et al., 2020). (3) Different primitives (ball, cube, pyramid) are
displayed on a 2.5D tabletop (Siu et al., 2018).
## 6\. Visuo-Haptic Consistency/Discrepancy
Visuo-Haptic Consistency is the second aspect of the user experience. We
exploit the dimension degree of physicality of our design space (Table 1) to
discuss the different haptic solutions. In particular, we distinguish whether
these solutions use real objects (exploiting real objects) or not (simulating
objects).
### 6.1. Simulating Objects
Object properties that need to be simulated are their shape, texture,
temperature, weight.
#### 6.1.1. No Physicality, (Figure 2 \- 1)
Currently, grounded haptic devices such as the Virtuose (Haption, 2019) or the
PHaNToM (Massie and Salisbury, 1994) simulate objects through their shapes
(Figure 2 \- 1). The rendering is only done through kinesthetic feedback via a
proxy. Conceptually, the ideal link between the users and this proxy is a
massless, infinitely rigid stick, which would be an equivalent to moving the
proxy directly (Hayward and Maclean, 2007; Sato, 2002). These solutions only
provide stimulation at the hand-scale, with no regards to the rest of the
body.
#### 6.1.2. Shape Simulation, (Figure 2 \- 2-3-4)
In the same regard, gloves or controllers provide some physicality (Figure 2
\- 2-3). Gloves or exoskeletons literally constrain the users’ hands for
simulating shapes (Fang et al., 2020; Gu et al., 2016; noa, 2019a; Amirpour et
al., 2019; Choi et al., 2017; Tsai and Rekimoto, 2018; Achibet et al., 2015;
Choi et al., 2016; Nakagaki et al., 2016a; Achibet et al., 2014; Provancher et
al., 2005), or stimulate other haptic features such as stiffness, friction
(Villa Salazar et al., 2020) or slippage (Tsagarakis et al., 2005). These can
be extended to overall body suits for users to feel impacts or even
temperature changes (noa, 2019b; Danieau et al., 2018), or even intertwined
with grounded devices to extend their use-cases (Steed et al., 2020).
Customised controllers are currently designed to be either stimulating the
palm (Sun et al., 2019; Yoshida et al., 2020; de Tinguy et al., 2020) (Figure
3 \- 1, 2), or held in the palm while providing haptic feedback on the
fingertips. For instance, (Whitmire et al., 2018) proposes interchangeable
haptic wheels with different textures or shapes, while (Benko et al., 2016)
enables textures and shapes and (Lee et al., 2019) displays compliance
changes. In these configurations, users hold a single controller; however,
bi-manual interactions can be created by combining two controllers. Their link
transmits kinesthetic feedback, and constrain their respective positions to
each other (Strasnick et al., 2018; Wei et al., 2020).
Contactless technology has also been developed for simulating shapes. While
studies demonstrated that interacting with bare-hands increased the user’s
cognitive load (Galais et al., 2019), combining bare-hands interactions with
haptic feedback actually enhances the users’ involvement. Since haptic feedback
does require contact, ”contactless” technology defines an interaction where
the users are unencumbered, as per Krueger’s postulate (Wexelblat, 1993), and
ultrasounds are sent to their hands, for them to perceive shapes on their
skin, without a physical prop contact (Rakkolainen et al., 2020) (Figure 2 \-
4).
These unencumbered methods are also achieved through shape-changing
interfaces, for instance with balloons arrays (Takizawa et al., 2017) or 2.5D
tabletops (Figure 3 \- 3, Figure 1 \- 5) (Follmer et al., 2013; Siu et al.,
2018; Iwata et al., 2001). The latter are composed of pins that raise
and lower themselves to replicate different shapes. In the same regard, swarm
interfaces rearrange themselves to display different shapes. These have mainly
been developed in the real world (Le Goc et al., 2016; Suzuki et al., 2018;
Kim et al., 2020; Suzuki et al., 2019; Ducatelle et al., 2011; Marquardt et
al., 2009) but slowly take off as VR user interfaces (Zhao et al., 2017)
(Figure 1 \- 4). Indeed, while the latter devices are used as desktop
interfaces, the swarm robot idea has extended to the air, with drones for
instance (Gomes et al., 2016; Rubens et al., 2015; Knierim et al., 2017; Hoppe
et al., 2018; Tsykunov and Tsetserukou, 2019).
All of these previous interfaces embrace the Roboxel principle enunciated in
Robotic Graphics (McNeely, 1993): ”cellular robots that dynamically configure
themselves into the desired shape and size”.
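The pin-array principle behind 2.5D tabletops can be sketched as downsampling a normalized heightmap of the target shape onto a grid of pin heights. This nearest-neighbour version is purely illustrative; real interfaces also handle actuation limits and smoothing:

```python
def pin_heights(heightmap, pins_x, pins_y, max_height):
    """Downsample a normalized heightmap (values in [0, 1]) onto a
    pins_x by pins_y pin array, as a 2.5D tabletop would to replicate
    a shape. Nearest-neighbour sampling; illustrative only.
    """
    rows, cols = len(heightmap), len(heightmap[0])
    return [
        [
            heightmap[i * rows // pins_y][j * cols // pins_x] * max_height
            for j in range(pins_x)
        ]
        for i in range(pins_y)
    ]
```

Each inner value is then sent to the corresponding pin's actuator as a target height.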
#### 6.1.3. Object Primitives, (Figure 2 \- 5)
Finally, a user can interact with object primitives. These represent the
simplest geometries available: circle, cube, pyramid, cylinder, torus. Simply
feeling an orientation through the fingertips provides the required
information to understand an object shape, in an exploration task for
instance. Panels with diverse orientations can hence be displayed for a user
to explore various objects in a virtual environment (Cheng et al., 2017)
(Figure 2 \- 5) or directly encounter the user at their position of interest
(Yokokohji et al., 2005; Yokokohji et al., 2001).
Conversely, a bare-hands manipulation task requires multiple primitives to be
available at the same time within the hand’s vicinity. This is why the
exploitation of real objects is necessary.
### 6.2. Exploiting Real Objects
Passive haptics (Insko, 2001), i.e., the use of passive props, consists in placing
real objects corresponding to their exact virtual match at their virtual
position. Insko demonstrated that passive haptics enhanced the virtual
environment (Insko, 2001). Nonetheless, this does suffer from a main
limitation: substituting the physical environment for a virtual one (Simeone
et al., 2015) requires a thorough mapping of objects shapes, sizes, textures,
and requires numerous props (Pair et al., 2003). This can be done with real
objects in simulation rooms for instance (e.g., plane cockpit, motorcycle), but
cheaper methods need to be implemented to facilitate their use in other
fields.
#### 6.2.1. Object Primitives, (Figure 2 \- 6)
One solution is to extract the primitives of the objects that are already
available in the physical environment, to map virtual objects of the
approximate same primitive over them (Hettiarachchi and Wigdor, 2016) (Figure
2 \- 6).
#### 6.2.2. Visuo-Proprioceptive Illusions & Pseudo Haptics
The number of props within the environment can also be reduced, while letting
the users interact at different positions of the physical world. It is
possible to leverage the vision over haptics and modify the users’
proprioception to redirect their trajectory (Kohli, 2010; Kohli et al., 2012,
2013; Azmandian et al., 2016; Gonzalez and Follmer, 2019; Han et al., 2018). A
user might perceive multiple distinct cubes for instance, while interacting
with a single one. On the same principle, the user hand displacement can be
redirected at an angle, up-/down-scaled (Abtahi and Follmer, 2018; Bergström
et al., 2019), or slowed down for friction or weight perception (Samad et al.,
2019; Praveena et al., 2020). These techniques also allow for the exploration
and manipulation of various shapes: models can for instance be added to enable
complex virtual shapes to be mapped over real physical objects’ boundaries
(Zhao and Follmer, 2018). The user can also be redirected to pinch a multi-
primitive object (cubic, pyramidal and cylindrical) from different locations,
which theoretically widens the variety of available props with a single one
(de Tinguy et al., 2019). On the same principle, pseudo-haptics allow
modifying the users’ shape (Ban et al., 2012a; Ban et al., 2012b) or texture
(Degraen et al., 2019) perceptions when interacting with a physical prop.
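A minimal sketch of such hand redirection, in the spirit of body-warping techniques but not the exact algorithm of any cited paper: the virtual hand is shifted toward the virtual target by a fraction of the physical-to-virtual offset, the fraction growing with the hand's progress toward the physical prop, so real and virtual hands coincide with their respective targets at contact:

```python
import math

def warped_hand_position(real_hand, start, physical_target, virtual_target):
    """Body-warping hand redirection (illustrative 2D sketch).

    The virtual hand is offset by a fraction of the gap between the
    physical prop and its virtual counterpart; the fraction grows with
    the hand's progress from the start point toward the physical
    target, so the full offset is applied exactly at contact.
    """
    total = math.dist(start, physical_target)
    progress = min(1.0, math.dist(start, real_hand) / total)
    offset = [v - p for v, p in zip(virtual_target, physical_target)]
    return [r + progress * o for r, o in zip(real_hand, offset)]
```

Because the offset is introduced gradually, the visual drift stays below the user's detection threshold while a single prop serves several virtual positions.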
#### 6.2.3. Displacing Objects, (Figure 2 \- 7)
Whenever objects are indeed available within the environment, various
directions are available to displace them. This displacement allows for
mapping one physical object over multiple ones, but also to display a
multitude of props. These directions embrace the Robotic Shape Display
principle from Robotic Graphics (McNeely, 1993): ”a robot that can reach any
location of a virtual desktop with an end-effector” and matches the user’s
object of interest.
Their usability has been validated through a Wizard-of-Oz implementation,
where human operators move real objects or even people around a Room-scale VR
arena to encounter the users (Cheng et al., 2015) (Figure 4 \- 2). The users
themselves can also reconfigure and actuate real props (Cheng et al., 2018).
Robotic Shape Displays, RSDs, are also called encountered-type of haptic
devices, as they literally encounter the users at their object of interest to
provide haptic feedback. They can display real pieces of material (Araujo
et al., 2016; Abtahi et al., 2019), physical props to simulate walls (Bouzbib
et al., 2020; Kim et al., 2018; Yamaguchi et al., 2016), or even display
furniture (Suzuki et al., 2020) or untethered objects (He et al., 2017a, b;
Huang et al., 2020; Bouzbib et al., 2020), that can be interacted with each
other.
Figure 4. Degree of Actuation. (1) No actuation is available. The user’s hand
is redirected to touch a passive prop that cannot move (Azmandian et al.,
2016). The implementation of this technique relies exclusively on a software
development leveraging the vision cues; (2) Human actuators are used to
illustrate the Robotic Graphics (McNeely, 1993) principle with a Wizard of Oz
technique (Cheng et al., 2015). They carry props for the user to feel a real
continuous wall; Encountered-type of haptic devices (3-5): (3) A drone
encounters the users’ hand for exploring passive props; (4) A cartesian robot
displaces itself autonomously for users to interact with physical props
(Bouzbib et al., 2020); (5) A robotic arm with multiple degrees of freedom
displaces itself to encounter the users’ hand, and rotates its shape-
approximation device to provide the right material (Araujo et al., 2016).
## 7\. Conception cost
In practice, designers have to trade-off their interaction design space with
implementation and operational costs in the conception phase. Implementation
costs include technical aspects related to the acceptability of a haptic
solution such as safety, robustness and ease-of-use (Dominjon et al., 2007).
For instance, actuated haptic solutions require a special attention regarding
this criterion. Operation costs include the financial and human cost for using
a haptic solution. The financial cost is measured through the cost of the
haptic device and additional elements such as motion capture systems to
precisely track the users’ hand or the prior preparation of required props.
Human cost refers to both labour time and number of human operators required
during the user’s interactions. For instance, actuated haptic solutions
generally do not require human operators (low human cost) but might be
mechanically expensive.
In this section, we use our two-dimension design space (Table 1) to discuss
haptic solutions according to their conception cost. As non-actuated solutions
globally share the same approaches and have a low implementation cost, we
discuss them together in the ”No Robotics” subsection.
### 7.1. No Robotics
Regarding implementation costs, all non-actuated haptic solutions are safe,
robust and easy-to-use. We depict here an important design choice when opting
for these solutions: either the designer relies on graphics solutions,
leveraging vision cues over haptic ones, or needs operators to displace or
change the interactable props (see Table 1).
#### 7.1.1. Passive Props.
Passive props (Insko, 2001) only consist of placing real objects corresponding
to their exact virtual match at their virtual position. They provide a natural
way of interacting through the objects’ natural affordances (Norman, 2013).
They however are limited to the available objects within the scene as they are
not actuated. They can only be used in a scenario-based experience, where the
target is known in advance. The environment hence requires a prop for each
available virtual object.
#### 7.1.2. Shape Simulation, Pseudo-Haptics, Visuo-Haptic Illusions, Object
Primitives.
For graphics solutions, users are redirected towards their object of interest
(Azmandian et al., 2016) using visuo-haptic illusions. However, physically
overlaying a prop or primitive over a virtual object has a tracking cost,
as it usually relies on trackers that can be operationally costly (e.g.,
OptiTrack (Optitrack, 2019) or HTC Trackers).
Otherwise, the users’ intentions have to be predicted for the interaction to
occur. The users’ hands are then redirected to the appropriate motionless prop,
for them to explore their object of interest (Cheng et al., 2017).
Operationally, the cost only relies on the proxy fabrication (Figure 2 \- 5).
These implementations offer various scenarios in terms of interaction (even
non-deterministic), at an affordable cost.
#### 7.1.3. Surface Haptic Displays.
These techniques exclusively allow for exploration through multiple haptic
features such as friction or textures. They also can integrate a tablet or a
smartphone (Savino, 2020), on which the user can interact at any location.
#### 7.1.4. Human Actuators.
This technique consists in using human operators to displace props in the VR
arena. The designers however come across reliability and speed issues with
these operators. Even though they are only used in scenario-based experiences,
delay mechanisms based on graphics need to be implemented (Cheng et al., 2015)
(Figure 4 \- 2) to overcome these issues. Conceptually, they broaden the
interaction scope, however this solution is operationally very costly.
#### 7.1.5. Real Props Reassignment.
Instead of using a tracking system for passive props, a depth camera for
instance allows to reassign props to different virtual objects of the same
primitive (Hettiarachchi and Wigdor, 2016) (Figure 2 \- 6). The objects are
hence all available to be interacted with. This drastically reduces the
operational costs as they only rely on computer vision. This enables non-
deterministic scenarios as the real world is literally substituted for a
virtual one (Simeone et al., 2015) and objects can be reassigned with
virtual:physical (He et al., 2017a) mappings.
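Such primitive-based reassignment can be sketched as a greedy matching between the virtual objects of a scene and the props detected in the room. All names below are hypothetical, and real systems also weigh size and pose, so this is illustrative only:

```python
def assign_props(virtual_objects, physical_props):
    """Greedily pair each virtual object with a free physical prop of
    the same primitive (sphere, cube, cylinder...). Illustrative sketch
    of primitive-based substitution; size and pose are ignored here.
    """
    free = list(physical_props)
    mapping = {}
    for name, primitive in virtual_objects.items():
        for prop in free:
            if prop["primitive"] == primitive:
                mapping[name] = prop["id"]
                free.remove(prop)  # each prop serves one virtual object
                break
    return mapping
```

For example, a virtual sphere would be overlaid on a real ball found by the depth camera, and a virtual crate on a real box.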
### 7.2. Robotics & No Real Objects
This section gathers technologies simulating the virtual environment through
actuation: they replicate it to constrain the users.
#### 7.2.1. Desktop Haptic Interfaces.
The SPIDAR (Sato, 2002), the Virtuose (Haption, 2019) and other classic
desktop haptic interfaces are already compared in multiple surveys (Dominjon
et al., 2007; Seifi et al., 2019; Wang et al., 2020a) (see Figure 2 \- 1).
They are safe, as they are controlled by the user and only constrain arm
movements with kinesthetic feedback, and they adapt to any available object
from the virtual scene (non-deterministic scenarios). They show a high
perceived stiffness and robustness, but remain expensive (>$10k).
#### 7.2.2. Shape-Changing Interfaces, Roboxels, 2.5D Tabletops.
These technologies present a high perceived stiffness and change their shapes
accordingly with the virtual environment (Fitzgerald and Ishii, 2018;
Leithinger et al., 2013). They hence do not require any operator and allow for
non-deterministic scenarios whenever their displacements are enabled (Siu et
al., 2018) (see Figure 3 \- 3). They are however complex to build and require
multiple motors as they are composed of arrays of numerous pins, which define
their haptic fidelity resolution. Even though they present high voltages, they
remain safe around the users. As they involve bare-hands interactions, they
show a high ease of use.
#### 7.2.3. Wearables, Controllers, EMS.
These rely on small torques, which are sufficient to constrain the users’ body
parts. They are safe and easy to use, but in return are not robust enough to
resist users’ actions. As they are continuously changing the users’ haptic
perception, they do allow non-deterministic scenarios and change their
rendered stiffness and rigidity as a function of the distance to a virtual
prop (de Tinguy et al., 2020; Kovacs et al., 2020). A customised controller
usually relies on 3D printed parts and small servomotors and can be easily
replicated (Sun et al., 2019) (Figure 2 \- 2,3; Figure 3 \- 1,2).
#### 7.2.4. Mid-Air Haptics.
Providing contactless interactions, mid-air haptics also provide a high level
of safety around the user. They however do not allow navigating the VR
environment, and hence cannot consider non-deterministic scenarios. Their
robustness is very low, as they send ultrasounds to the users and do not
physically constrain them (Rakkolainen et al., 2020).
#### 7.2.5. Inflatable Floor.
The floor topology can be modified and inflated to create interactions at the
body-scale (Teng et al., 2019). The users cannot inflate them, however they
can push some tiles down and hence, edit them. These are safe, though they do
not provide a wide range of interactions, but offer multiple static body
postures.
### 7.3. Robotics & Real Objects
In this subsection, we detail the different types of Robotic Shape Displays -
otherwise known as ”encountered-type of haptic devices”, mentioned in the
Table 1. First, these interfaces move to encounter the users: this feature
optimises their ease of use. Second, as these interfaces move within the user
vicinity, safety concerns are raised in this section, depending on the
interfaces robustness.
Encountered-type of haptic devices combine different types of interaction
techniques: they can provide the users with passive props, textures or
primitives, and allow navigation, exploration, manipulation tasks. Their
mechanical implementations offer a good repeatability and reliability.
#### 7.3.1. Cartesian Robot:
In (Bouzbib et al., 2020), CoVR, a physical column mounted over a Cartesian XY
ceiling robot enables interactions at any height and any position of a room-
scale VR arena (see Figure 1 \- 2; Figure 4 \- 4). This implementation
presents a high perceived stiffness, and because it carries passive props
around the arena, enables a high fidelity haptic rendering. It displays high
accuracy and speed, and presents an algorithm which optimises the column’s
displacements as a function of the users’ intentions. It hence enables non-
deterministic scenarios. Safety measures have been validated in the field. In
practice, the column’s speed decreases around the user, as it is repelled by
them. Its software implementation ensures a safe
environment for the user to perambulate in the arena without unexpected
collision. However, in order to display many props in different scenarios, an
operator is required to create panels and modify them. The materials however
remain cheap, and even though its structure and motors are more expensive than
3D printed cases and servomotors, as per customised controllers for instance,
this solution provides a wide range of interactions.
#### 7.3.2. Robotic Arm:
A robotic arm provides more degrees of freedom than the previous Cartesian
robot. This primarily means a higher cost and a higher safety risk. For
instance, H-Wall, using a Kuka LBR Iiwa robot, presents high motor torques and
can hence increase the safety risks around the users. This implementation
hence does not allow non-deterministic scenarios, and presents either a wall
or a revolving door to the user, with a high robustness. Implementations with
smaller torques, such as (Vonach et al., 2017; Araujo et al., 2016) are safer
but display a reduced perceived stiffness. The use-cases for all these
interactions are hence drastically different: H-Wall simulates a rigid wall
while VRRobot (Vonach et al., 2017) and Snake Charmer (Araujo et al., 2016)
(Figure 4 \- 5) present more interaction opportunities. The latter is also the
only Robotic Shape Display that autonomously changes its end-effector,
without an operator.
#### 7.3.3. Drones, Swarm Robots, Mobile Platforms:
With drones, the interactions are limited to the available props, for instance
with a single wall at a given position (Yamaguchi et al., 2016). Going from an
active mode (flying) to a passive one (graspable by the user) has a long delay
(10s) (Abtahi et al., 2019), which on top of the safety concerns, does not
allow non-deterministic scenarios. (Tsykunov et al., 2019) however allows the
user to change the drone trajectory to fetch and magnetically recover an
object of interest. Their accuracy and speed are limited (Gomes et al., 2016;
Rubens et al., 2015) compared to the previous grounded interfaces, and can
require dynamic redirection techniques to improve their performance (Abtahi
et al., 2019). As they are ungrounded, they have neither a high robustness nor
perceived stiffness. This is also valid for mobile robots, such as (He et al.,
2017a; Gonzalez et al., 2020), which only display passive props. To decrease
the conception cost, existing vacuuming robots are used as mobile platforms in
(Wang et al., 2020c; Yixian et al., 2020). Designers can choose to duplicate
them, as swarm robots, to enable non-deterministic scenarios (Suzuki et al.,
2020). These are safe to use around the users, as their speed and robustness
are limited. Instead of swarm mobile interfaces, a merry-go-round platform can
also be designed to display various props at an equidistant position from the
user (Huang et al., 2020). All of the previous interfaces require an operator
cost on top of their mechanical and software ones, to modify the interactable
props available, depending on the use-cases.
In contrast, (Zhao et al., 2017) proposes autonomous reconfigurable
interfaces intertwining both the Robotic Shape Display and Roboxel (McNeely,
1993) principles to get rid of the operator cost (see Figure 1 \- 4). These
small robotic volume elements reconfigure themselves into the users’ objects of
interest. They have a sufficient perceived stiffness to represent objects, but
are not robust enough to resist body-scale forces, for instance to simulate a
rigid wall.
## 8\. Evaluation Protocols
On top of choosing from the different trade-offs between conception and
interaction opportunities, the designer also needs to pick-up an evaluation
protocol. These protocols depend on the VR use-cases. For instance, the haptic
benefits for medical or industrial assembly training can be evaluated against
a real experience condition (Poyade et al., 2012), with criteria such as
completion time, number of errors, user cognitive load (Gutierrez et al.,
2010). In contrast, the haptic benefits for a gaming experience are more
likely to be evaluated through immersion and presence, comparing ”with/without
haptics” conditions (Cheng et al., 2015). Although some papers do compare
multiple haptic displays (Escobar-Castillejos et al., 2016; Ullrich, 2012), we
point out the lack of referenced evaluation protocols for evaluating haptic
solutions in VR.
### 8.1. Current Reference Evaluation Methods
The most common evaluation methods in VR are the SUS or WS presence
questionnaires (Witmer and Singer, 1998; Slater et al., 1994). These
questionnaires mainly focus on graphics rendering and only two Likert-scale
questions actually focus on haptic feedback: ”How well could you actively
survey the VE using touch?” and ”How well could you manipulate objects in the
VE?”. Besides, most of the above technologies are evaluated against ”no haptic
feedback”, hence the results can seem biased and most of all, expected. This
justifies why some implementations provide results on single parts of the
questionnaire, or arbitrarily combine their results (Choi et al., 2018) with
new subsections (e.g., ”ability to examine/act”) or task-specific questions
(e.g., ”How realistic was it to feel different textures?”).
Table 2. Comparison & Evaluation of 4 Encountered-type of Haptic Devices,
according to the ”Evaluation section” parameters.
### 8.2. Evaluation Recommendations
Haptics should be more incorporated into the different factors enunciated in
(Witmer and Singer, 1998) (”Control, Sensory, Distraction, Realism”). In this
direction, Kim et al. defined the Haptic Experience model (Kim and Schneider,
2020), where they take into account both of the designer and user experiences.
It depicts how Design parameters (”timeliness, intensity, density and timbre”)
impact Usability requirements (”utility, causality, consistency, saliency”)
and target Experiential dimensions (”harmony, expressivity, autotelics,
immersion, realism”) on the user’s side.
In the same regards, we propose additional guidelines to evaluate haptic
solutions in VR experiments (see Table 2). We believe that the different
elements of interaction opportunities should be added to the users’ control
parameters.
In the sensory factors, the number of haptic features available should be
added (e.g., shape, texture, friction, temperature), in line with their quality,
in terms of ”timeliness, intensity, density and timbre”. The usability
requirements should identify the use-cases and number of scenarios with the
proposed solutions. Hence, a good evaluation of the interface timeliness and
usability should anticipate future deployments and avoid unnecessary
developments.
## 9\. Examples: Encountered-Type of Haptic Devices
We propose in this section to compare four encountered-type of haptic devices:
Beyond the Force (BTF) drone (Abtahi et al., 2019) (Figure 4 \- 3), ShapeShift
(Siu et al., 2018) (Figure 3 \- 3), Snake Charmer (Araujo et al., 2016)
(Figure 4 \- 5), and CoVR (Bouzbib et al., 2020) (Figure 4 \- 4).
In terms of interactions and number of props, the drone is the most limited
one. Indeed, because of both safety and implementation limitations, it only
enables free navigation in a reduced workspace. It also allows exploration
(through textures) and manipulation tasks. However, the manipulation task is
at the moment limited to a single light object as BTF cannot handle large
embedded masses yet. Whenever grabbed, it does not provide a haptic
transparency (Hayward and Maclean, 2007) during the interactions because of
its thrust and inertia. For the users to perform different tasks, an operator
needs to manually change the drone configuration. Its mechanical
implementation does not provide a sufficient speed for overlaying virtual
props in non-deterministic scenarios; its accuracy is also unsatisfactory
and requires dynamic redirection techniques for the interactions to occur. It
also provides unwanted noise and wind, which reduces the interaction realism.
ShapeShift (Siu et al., 2018) is drastically different: it is a 2.5D desktop
interface that displaces itself. Even though a drone is theoretically
available in an infinite workspace, in practice they do share approximately
the same one. As (Siu et al., 2018) relies on a shape-changing interface, no
operator is required: it changes its own shape to overlay the users’ virtual
objects of interest, in non-deterministic scenarios. It allows a free
navigation at a desktop scale, as well as bimanual manipulation and
exploration. Both devices’ haptic transparency is limited, as they are
ungrounded solutions. We believe that ShapeShift could be updated to allow
Edition tasks, by synchronising the users force actions with the actuated pins
stiffness. In terms of haptic features, it simulates shapes and stimulates
both tactile and kinesthetic cues. As per all 2.5D tabletops, it can be used
in various applications: 3D terrain exploration, volumetric data etc. Its
resolution seems promising as its studies shows successful object recognition
and haptic search.
The same interactions are available at a desktop scale with Snake Charmer
(Araujo et al., 2016), which provides a wide range of props and stimulation,
as each of its end-effectors includes six faces with various interaction
opportunities (textures to explore, buttons to push, a heater and fan to
perceive temperature, a handle and lightbulb to grasp and manipulate…). It can
also change its shape approximation device, or SAD (i.e. its end-effector),
autonomously, using magnets. It follows the user's hand and orients the
expected interaction face of its SAD prior to the interactions: it hence
enables non-deterministic scenarios.
Besides, Snake Charmer has a promising future
regarding its deployment: LobbyBot (noa, [n.d.]) is already used in Renault's
industrial research lab to enable VR haptic feedback in the automotive
industry.
Finally, CoVR (Bouzbib et al., 2020) enables the largest workspace as well as
the highest range of interactions. The user is free to navigate in a 30
$m^{3}$ VR arena, and CoVR predicts and physically overlays the user's object
of interest prior to interaction. These interactions include tactile
exploration, manipulation of untethered objects (full haptic transparency),
and body postures. Indeed, CoVR is robust enough to resist body-scaled users,
shows over 100 N of perceived stiffness, and can carry over 80 kg of embedded
mass. CoVR can
also initiate the interactions with the users, and is strong enough to guide
them through forces or even to transport them. Moreover, with an appropriate
physical:virtual mapping (He et al., 2017a), one physical prop can overlay
multiple virtual ones of the same approximate primitive without redirection
techniques. It however requires an operator to create, assemble and display
panels on its sides.
Room-scale VR is becoming increasingly relevant, and Snake Charmer could
benefit from being attached to an interface such as CoVR. Similarly,
combining CoVR with a robotic arm that autonomously changes its SAD, like
Snake Charmer, or with a shape-changing interface could reduce its operational
costs. This would demonstrate the full capabilities of the Robotic Graphics
concept.
## 10\. Conclusion
In this paper, we analysed haptic interactions in VR and their corresponding
haptic solutions. We analysed them from both the user and designer
perspectives by considering interaction opportunities and visuo-haptic
consistency, as well as implementation and operation costs. We proposed a
novel framework to classify haptic displays, through a two-dimensional design
space: the interfaces' degree of physicality and degree of actuation.
We then evaluated these solutions from interaction and conception
perspectives. Implementation-wise, we evaluated the interfaces' robustness,
their ease of use, and their safety considerations. From an operation
perspective, we also evaluated the costs of the proposed solutions.
This survey highlights the variety of props, tasks and haptic features that a
haptic solution can potentially provide in VR. It can be used to analytically
evaluate existing haptic interactions. It can also help VR designers choose
the desired haptic interaction and/or haptic solution depending on their
needs (tasks, workspace, use-cases, etc.).
We believe that combining multiple haptic solutions benefits the user
experience, as it optimises the above criteria. Encountered-type haptic
interfaces were highlighted as they already combine multiple interaction
techniques: they displace passive props in potentially large VR arenas and
allow for numerous tasks, such as navigation, exploration and manipulation,
and even allow the user to be interacted with.
## References
* noa ([n.d.]) [n.d.]. Renault. https://www.clarte-lab.fr/component/tags/tag/renault
* noa (2019a) 2019a. CyberGrasp. http://www.cyberglovesystems.com/cybergrasp
* noa (2019b) 2019b. Teslasuit | Full body haptic VR suit for motion capture and training. https://teslasuit.io/
* Abtahi and Follmer (2018) Parastoo Abtahi and Sean Follmer. 2018. Visuo-Haptic Illusions for Improving the Perceived Performance of Shape Displays. In _Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ’18_. ACM Press, Montreal QC, Canada, 1–13. https://doi.org/10.1145/3173574.3173724
* Abtahi et al. (2019) Parastoo Abtahi, Benoit Landry, Jackie (Junrui) Yang, Marco Pavone, Sean Follmer, and James A. Landay. 2019. Beyond The Force: Using Quadcopters to Appropriate Objects and the Environment for Haptics in Virtual Reality. In _Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI ’19_. ACM Press, Glasgow, Scotland Uk, 1–13. https://doi.org/10.1145/3290605.3300589
* Achibet et al. (2015) Merwan Achibet, Adrien Girard, Anthony Talvas, Maud Marchal, and Anatole Lecuyer. 2015. Elastic-Arm: Human-scale passive haptic feedback for augmenting interaction and perception in virtual environments. In _2015 IEEE Virtual Reality (VR)_. IEEE, Arles, Camargue, Provence, France, 63–68. https://doi.org/10.1109/VR.2015.7223325
* Achibet et al. (2017) Merwan Achibet, Benoit Le Gouis, Maud Marchal, Pierre-Alexandre Leziart, Ferran Argelaguet, Adrien Girard, Anatole Lecuyer, and Hiroyuki Kajimoto. 2017. FlexiFingers: Multi-finger interaction in VR combining passive haptics and pseudo-haptics. In _2017 IEEE Symposium on 3D User Interfaces (3DUI)_. IEEE, Los Angeles, CA, USA, 103–106. https://doi.org/10.1109/3DUI.2017.7893325
* Achibet et al. (2014) Merwan Achibet, Maud Marchal, Ferran Argelaguet, and Anatole Lecuyer. 2014. The Virtual Mitten: A novel interaction paradigm for visuo-haptic manipulation of objects using grip force. In _2014 IEEE Symposium on 3D User Interfaces (3DUI)_. IEEE, MN, USA, 59–66. https://doi.org/10.1109/3DUI.2014.6798843
* Alexandrovsky et al. (2020) Dmitry Alexandrovsky, Susanne Putze, Michael Bonfert, Sebastian Höffner, Pitt Michelmann, Dirk Wenig, Rainer Malaka, and Jan David Smeddinck. 2020. Examining Design Choices of Questionnaires in VR User Studies. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_. ACM, Honolulu HI USA, 1–21. https://doi.org/10.1145/3313831.3376260
* Amirpour et al. (2019) E. Amirpour, M. Savabi, A. Saboukhi, M. Rahimi Gorii, H. Ghafarirad, R. Fesharakifard, and S. Mehdi Rezaei. 2019. Design and Optimization of a Multi-DOF Hand Exoskeleton for Haptic Applications. In _2019 7th International Conference on Robotics and Mechatronics (ICRoM)_. 270–275. https://doi.org/10.1109/ICRoM48714.2019.9071884 ISSN: 2572-6889.
* Araujo et al. (2016) Bruno Araujo, Ricardo Jota, Varun Perumal, Jia Xian Yao, Karan Singh, and Daniel Wigdor. 2016\. Snake Charmer: Physically Enabling Virtual Objects. In _Proceedings of the TEI ’16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction - TEI ’16_. ACM Press, Eindhoven, Netherlands, 218–226. https://doi.org/10.1145/2839462.2839484
* Auda et al. (2019) Jonas Auda, Max Pascher, and Stefan Schneegass. 2019. Around the (Virtual) World: Infinite Walking in Virtual Reality Using Electrical Muscle Stimulation. In _Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI ’19_. ACM Press, Glasgow, Scotland Uk, 1–8. https://doi.org/10.1145/3290605.3300661
* Azmandian et al. (2016) Mahdi Azmandian, Mark Hancock, Hrvoje Benko, Eyal Ofek, and Andrew D. Wilson. 2016. Haptic Retargeting: Dynamic Repurposing of Passive Haptics for Enhanced Virtual Reality Experiences. In _Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems - CHI ’16_. ACM Press, Santa Clara, California, USA, 1968–1979. https://doi.org/10.1145/2858036.2858226
* Baloup et al. (2018) Marc Baloup, Veïs Oudjail, Thomas Pietrzak, and Géry Casiez. 2018. Pointing techniques for distant targets in virtual reality. In _Proceedings of the 30th Conference on l’Interaction Homme-Machine - IHM ’18_. ACM Press, Brest, France, 100–107. https://doi.org/10.1145/3286689.3286696
* Ban et al. (2012a) Y. Ban, T. Kajinami, T. Narumi, T. Tanikawa, and M. Hirose. 2012a. Modifying an identified curved surface shape using pseudo-haptic effect. In _2012 IEEE Haptics Symposium (HAPTICS)_. 211–216. https://doi.org/10.1109/HAPTIC.2012.6183793
* Ban et al. (2012b) Yuki Ban, Takuji Narumi, Tomohiro Tanikawa, and Michitaka Hirose. 2012b. Modifying an identified position of edged shapes using pseudo-haptic effects. In _Proceedings of the 18th ACM symposium on Virtual reality software and technology - VRST ’12_. ACM Press, Toronto, Ontario, Canada, 93. https://doi.org/10.1145/2407336.2407353
* Barnaby and Roudaut (2019) Gareth Barnaby and Anne Roudaut. 2019. Mantis: A Scalable, Lightweight and Accessible Architecture to Build Multiform Force Feedback Systems. In _Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology - UIST ’19_. ACM Press, New Orleans, LA, USA, 937–948. https://doi.org/10.1145/3332165.3347909
* Bau et al. (2010) Olivier Bau, Ivan Poupyrev, Ali Israr, and Chris Harrison. 2010. TeslaTouch: electrovibration for touch surfaces. In _Proceedings of the 23nd annual ACM symposium on User interface software and technology - UIST ’10_. ACM Press, New York, New York, USA, 283\. https://doi.org/10.1145/1866029.1866074
* Benko et al. (2016) Hrvoje Benko, Christian Holz, Mike Sinclair, and Eyal Ofek. 2016\. NormalTouch and TextureTouch: High-fidelity 3D Haptic Shape Rendering on Handheld Virtual Reality Controllers. In _Proceedings of the 29th Annual Symposium on User Interface Software and Technology - UIST ’16_. ACM Press, Tokyo, Japan, 717–728. https://doi.org/10.1145/2984511.2984526
* Berg and Vance (2017) Leif P. Berg and Judy M. Vance. 2017. Industry use of virtual reality in product design and manufacturing: a survey. _Virtual Reality_ 21, 1 (March 2017), 1–17. https://doi.org/10.1007/s10055-016-0293-9
* Bergström et al. (2019) Joanna Bergström, Aske Mottelson, and Jarrod Knibbe. 2019\. Resized Grasping in VR: Estimating Thresholds for Object Discrimination. In _Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology_. ACM, New Orleans LA USA, 1175–1183. https://doi.org/10.1145/3332165.3347939
* Bloomfield et al. (2003) A. Bloomfield, Yu Deng, J. Wampler, P. Rondot, D. Harth, M. McManus, and N. Badler. 2003. A taxonomy and comparison of haptic actions for disassembly tasks. In _IEEE Virtual Reality, 2003\. Proceedings._ IEEE Comput. Soc, Los Angeles, CA, USA, 225–231. https://doi.org/10.1109/VR.2003.1191143
* Boldt et al. (2018) Mette Boldt, Boxuan Liu, Tram Nguyen, Alina Panova, Ramneek Singh, Alexander Steenbergen, Rainer Malaka, Jan Smeddinck, Michael Bonfert, Inga Lehne, Melina Cahnbley, Kim Korschinq, Loannis Bikas, Stefan Finke, Martin Hanci, and Valentin Kraft. 2018\. You Shall Not Pass: Non-Intrusive Feedback for Virtual Walls in VR Environments with Room-Scale Mapping. In _2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)_. IEEE, Reutlingen, 143–150. https://doi.org/10.1109/VR.2018.8446177
* Bouzbib et al. (2020) Elodie Bouzbib, Gilles Bailly, Sinan Haliyo, and Pascal Frey. 2020. CoVR: A Large-Scale Force-Feedback Robotic Interface for Non-Deterministic Scenarios in VR. In _Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology_. ACM, Virtual Event USA, 209–222. https://doi.org/10.1145/3379337.3415891
* Bowman and Wingrave (2001) D.A. Bowman and C.A. Wingrave. 2001. Design and evaluation of menu systems for immersive virtual environments. In _Proceedings IEEE Virtual Reality 2001_. IEEE Comput. Soc, Yokohama, Japan, 149–156. https://doi.org/10.1109/VR.2001.913781
* Bryson (2005) Steve Bryson. 2005\. Direct Manipulation in Virtual Reality. In _Visualization Handbook_. Elsevier, 413–430. https://doi.org/10.1016/B978-012387582-2/50023-X
* Cheng (2019) Lung-Pan Cheng. 2019\. VRoamer: Generating On-The-Fly VR Experiences While Walking inside Large, Unknown Real-World Building Environments. (2019), 8.
* Cheng et al. (2018) Lung-Pan Cheng, Li Chang, Sebastian Marwecki, and Patrick Baudisch. 2018. iTurk: Turning Passive Haptics into Active Haptics by Making Users Reconfigure Props in Virtual Reality. In _Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ’18_. ACM Press, Montreal QC, Canada, 1–10. https://doi.org/10.1145/3173574.3173663
* Cheng et al. (2014) Lung-Pan Cheng, Patrick Lühne, Pedro Lopes, Christoph Sterz, and Patrick Baudisch. 2014. Haptic Turk: a Motion Platform Based on People. (2014), 11.
* Cheng et al. (2017) Lung-Pan Cheng, Eyal Ofek, Christian Holz, Hrvoje Benko, and Andrew D. Wilson. 2017. Sparse Haptic Proxy: Touch Feedback in Virtual Environments Using a General Passive Prop. In _Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems - CHI ’17_. ACM Press, Denver, Colorado, USA, 3718–3728. https://doi.org/10.1145/3025453.3025753
* Cheng et al. (2015) Lung-Pan Cheng, Thijs Roumen, Hannes Rantzsch, Sven Köhler, Patrick Schmidt, Robert Kovacs, Johannes Jasper, Jonas Kemper, and Patrick Baudisch. 2015. TurkDeck: Physical Virtual Reality Based on People. In _Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology - UIST ’15_. ACM Press, Daegu, Kyungpook, Republic of Korea, 417–426. https://doi.org/10.1145/2807442.2807463
* Choi et al. (2017) Inrak Choi, Heather Culbertson, Mark R. Miller, Alex Olwal, and Sean Follmer. 2017. Grabity: A Wearable Haptic Interface for Simulating Weight and Grasping in Virtual Reality. In _Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology - UIST ’17_. ACM Press, Québec City, QC, Canada, 119–130. https://doi.org/10.1145/3126594.3126599
* Choi et al. (2016) Inrak Choi, Elliot W. Hawkes, David L. Christensen, Christopher J. Ploch, and Sean Follmer. 2016. Wolverine: A wearable haptic interface for grasping in virtual reality. In _2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, Daejeon, South Korea, 986–993. https://doi.org/10.1109/IROS.2016.7759169
* Choi et al. (2018) Inrak Choi, Eyal Ofek, Hrvoje Benko, Mike Sinclair, and Christian Holz. 2018. CLAW: A Multifunctional Handheld Haptic Controller for Grasping, Touching, and Triggering in Virtual Reality. In _Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ’18_. ACM Press, Montreal QC, Canada, 1–13. https://doi.org/10.1145/3173574.3174228
* Coles et al. (2011) Timothy R. Coles, Dwight Meglan, and Nigel W. John. 2011\. The Role of Haptics in Medical Training Simulators: A Survey of the State of the Art. _IEEE Transactions on Haptics_ 4, 1 (Jan. 2011), 51–66. https://doi.org/10.1109/TOH.2010.19
* Danieau et al. (2012) Fabien Danieau, Julien Fleureau, Philippe Guillotel, Nicolas Mollet, Anatole Lécuyer, and Marc Christie. 2012. HapSeat: producing motion sensation with multiple force-feedback devices embedded in a seat. In _Proceedings of the 18th ACM symposium on Virtual reality software and technology - VRST ’12_. ACM Press, Toronto, Ontario, Canada, 69\. https://doi.org/10.1145/2407336.2407350
* Danieau et al. (2018) Fabien Danieau, Philippe Guillotel, Olivier Dumas, Thomas Lopez, Bertrand Leroy, and Nicolas Mollet. 2018\. HFX studio: haptic editor for full-body immersive experiences. In _Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology - VRST ’18_. ACM Press, Tokyo, Japan, 1–9. https://doi.org/10.1145/3281505.3281518
* De Araújo et al. (2013) Bruno R. De Araújo, Géry Casiez, Joaquim A. Jorge, and Martin Hachet. 2013. Mockup Builder: 3D modeling on and above the surface. _Computers & Graphics_ 37, 3 (May 2013), 165–178. https://doi.org/10.1016/j.cag.2012.12.005
* de Tinguy et al. (2020) Xavier de Tinguy, Thomas Howard, Claudio Pacchierotti, Maud Marchal, and Anatole Lécuyer. 2020\. WeATaViX: WEarable Actuated TAngibles for VIrtual reality eXperiences. (2020), 9.
* de Tinguy et al. (2019) Xavier de Tinguy, Claudio Pacchierotti, Maud Marchal, and Anatole Lecuyer. 2019. Toward Universal Tangible Objects: Optimizing Haptic Pinching Sensations in 3D Interaction. In _2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)_. IEEE, Osaka, Japan, 321–330. https://doi.org/10.1109/VR.2019.8798205
* Degraen et al. (2019) Donald Degraen, André Zenner, and Antonio Krüger. 2019\. Enhancing Texture Perception in Virtual Reality Using 3D-Printed Hair Structures. In _Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI ’19_. ACM Press, Glasgow, Scotland Uk, 1–12. https://doi.org/10.1145/3290605.3300479
* Dominjon et al. (2007) Lionel Dominjon, Jérôme Perret, and Anatole Lécuyer. 2007\. Novel devices and interaction techniques for human-scale haptics. _The Visual Computer_ 23, 4 (March 2007), 257–266. https://doi.org/10.1007/s00371-007-0100-4
* Ducatelle et al. (2011) Frederick Ducatelle, Gianni A. Di Caro, Carlo Pinciroli, and Luca M. Gambardella. 2011\. Self-organized cooperation between robotic swarms. _Swarm Intelligence_ 5, 2 (June 2011), 73–96. https://doi.org/10.1007/s11721-011-0053-0
* Escobar-Castillejos et al. (2016) David Escobar-Castillejos, Julieta Noguez, Luis Neri, Alejandra Magana, and Bedrich Benes. 2016\. A Review of Simulators with Haptic Devices for Medical Training. _Journal of Medical Systems_ 40, 4 (April 2016), 1–22. https://doi.org/10.1007/s10916-016-0459-8
* Fang et al. (2020) Cathy Fang, Yang Zhang, Matthew Dworman, and Chris Harrison. 2020\. Wireality: Enabling Complex Tangible Geometries in Virtual Reality with Worn Multi-String Haptics. (2020), 10.
* Feick et al. (2020) Martin Feick, Scott Bateman, Anthony Tang, André Miede, and Nicolai Marquardt. 2020. TanGi: Tangible Proxies for Embodied Object Exploration and Manipulation in Virtual Reality. _arXiv:2001.03021 [cs]_ (Jan. 2020). http://arxiv.org/abs/2001.03021 arXiv: 2001.03021.
* Fitzgerald and Ishii (2018) Daniel Fitzgerald and Hiroshi Ishii. 2018. Mediate: A Spatial Tangible Interface for Mixed Reality. In _Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems_. ACM, Montreal QC Canada, 1–6. https://doi.org/10.1145/3170427.3188472
* Follmer et al. (2013) Sean Follmer, Daniel Leithinger, Alex Olwal, Akimitsu Hogge, and Hiroshi Ishii. 2013. inFORM: dynamic physical affordances and constraints through shape and object actuation. In _Proceedings of the 26th annual ACM symposium on User interface software and technology - UIST ’13_. ACM Press, St. Andrews, Scotland, United Kingdom, 417–426. https://doi.org/10.1145/2501988.2502032
* Formaglio et al. (2005) A. Formaglio, A. Giannitrapani, M. Franzini, D. Prattichizzo, and F. Barbagli. 2005\. Performance of Mobile Haptic Interfaces. In _Proceedings of the 44th IEEE Conference on Decision and Control_. 8343–8348. https://doi.org/10.1109/CDC.2005.1583513
* Frissen et al. (2013) Ilja Frissen, Jennifer L. Campos, Manish Sreenivasa, and Marc O. Ernst. 2013. Enabling Unconstrained Omnidirectional Walking Through Virtual Environments: An Overview of the CyberWalk Project. In _Human Walking in Virtual Environments: Perception, Technology, and Applications_ , Frank Steinicke, Yon Visell, Jennifer Campos, and Anatole Lécuyer (Eds.). Springer, New York, NY, 113–144. https://doi.org/10.1007/978-1-4419-8432-6_6
* Funk et al. (2019) Markus Funk, Florian Müller, Marco Fendrich, Megan Shene, Moritz Kolvenbach, Niclas Dobbertin, Sebastian Günther, and Max Mühlhäuser. 2019. Assessing the Accuracy of Point & Teleport Locomotion with Orientation Indication for Virtual Reality using Curved Trajectories. In _Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI ’19_. ACM Press, Glasgow, Scotland Uk, 1–12. https://doi.org/10.1145/3290605.3300377
* Galais et al. (2019) Thomas Galais, Alexandra Delmas, and Rémy Alonso. 2019\. Natural interaction in virtual reality: impact on the cognitive load. In _Proceedings of the 31st Conference on l’Interaction Homme-Machine Adjunct - IHM ’19_. ACM Press, Grenoble, France, 1–9. https://doi.org/10.1145/3366551.3370342
* Galambos (2012) Péter Galambos. 2012\. Vibrotactile Feedback for Haptics and Telemanipulation: Survey, Concept and Experiment. _Acta Polytechnica Hungarica_ 9, 1 (2012), 25\.
* Gomes et al. (2016) Antonio Gomes, Calvin Rubens, Sean Braley, and Roel Vertegaal. 2016. BitDrones: Towards Using 3D Nanocopter Displays as Interactive Self-Levitating Programmable Matter. In _Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems - CHI ’16_. ACM Press, Santa Clara, California, USA, 770–780. https://doi.org/10.1145/2858036.2858519
* Gonzalez et al. (2020) Eric J. Gonzalez, Parastoo Abtahi, and Sean Follmer. 2020\. REACH+: Extending the Reachability of Encountered-type Haptics Devices through Dynamic Redirection in VR. In _Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology_. ACM, Virtual Event USA, 236–248. https://doi.org/10.1145/3379337.3415870
* Gonzalez and Follmer (2019) Eric J. Gonzalez and Sean Follmer. 2019. Investigating the Detection of Bimanual Haptic Retargeting in Virtual Reality. In _25th ACM Symposium on Virtual Reality Software and Technology on - VRST ’19_. ACM Press, Parramatta, NSW, Australia, 1–5. https://doi.org/10.1145/3359996.3364248
* Gu et al. (2016) Xiaochi Gu, Yifei Zhang, Weize Sun, Yuanzhe Bian, Dao Zhou, and Per Ola Kristensson. 2016\. Dexmo: An Inexpensive and Lightweight Mechanical Exoskeleton for Motion Capture and Force Feedback in VR. In _Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems - CHI ’16_. ACM Press, Santa Clara, California, USA, 1991–1995. https://doi.org/10.1145/2858036.2858487
* Gugenheimer et al. (2016) Jan Gugenheimer, Dennis Wolf, Eythor R. Eiriksson, Pattie Maes, and Enrico Rukzio. 2016. GyroVR: Simulating Inertia in Virtual Reality using Head Worn Flywheels. In _Proceedings of the 29th Annual Symposium on User Interface Software and Technology_. ACM, Tokyo Japan, 227–232. https://doi.org/10.1145/2984511.2984535
* Gutierrez et al. (2010) T. Gutierrez, J. Rodriguez, Y. Velaz, S. Casado, A. Suescun, and E. J. Sanchez. 2010\. IMA-VR: A multimodal virtual training system for skills transfer in Industrial Maintenance and Assembly tasks. _19th International Symposium in Robot and Human Interactive Communication_ (2010). https://www.academia.edu/15623406/IMA_VR_A_multimodal_virtual_training_system_for_skills_transfer_in_Industrial_Maintenance_and_Assembly_tasks
* Günther et al. (2020) Sebastian Günther, Dominik Schön, Florian Müller, Max Mühlhäuser, and Martin Schmitz. 2020\. PneumoVolley: Pressure-based Haptic Feedback on the Head through Pneumatic Actuation. (2020), 10.
* Han et al. (2018) Dustin T. Han, Mohamed Suhail, and Eric D. Ragan. 2018\. Evaluating Remapped Physical Reach for Hand Interactions with Passive Haptics in Virtual Reality. _IEEE Transactions on Visualization and Computer Graphics_ 24, 4 (April 2018), 1467–1476. https://doi.org/10.1109/TVCG.2018.2794659
* Haption (2019) Haption. 2019. Virtuose™ 6D - HAPTION SA. https://www.haption.com/en/products-en/virtuose-6d-en.html
* Hayward and Maclean (2007) Vincent Hayward and Karon Maclean. 2007. Do it yourself haptics: part I. _IEEE Robotics & Automation Magazine_ 14, 4 (Dec. 2007), 88–104. https://doi.org/10.1109/M-RA.2007.907921
* He et al. (2017b) Zhenyi He, Fengyuan Zhu, Aaron Gaudette, and Ken Perlin. 2017b. Robotic Haptic Proxies for Collaborative Virtual Reality. _arXiv:1701.08879 [cs]_ (Jan. 2017). http://arxiv.org/abs/1701.08879 arXiv: 1701.08879.
* He et al. (2017a) Zhenyi He, Fengyuan Zhu, and Ken Perlin. 2017a. PhyShare: Sharing Physical Interaction in Virtual Reality. _arXiv:1708.04139 [cs]_ (Aug. 2017). http://arxiv.org/abs/1708.04139 arXiv: 1708.04139.
* Held and Durlach (1992) Richard M. Held and Nathaniel I. Durlach. 1992. Telepresence. _Presence: Teleoperators and Virtual Environments_ 1, 1 (Jan. 1992), 109–112. https://doi.org/10.1162/pres.1992.1.1.109
* Heo et al. (2018) Seongkook Heo, Christina Chung, Geehyuk Lee, and Daniel Wigdor. 2018. Thor’s Hammer: An Ungrounded Force Feedback Device Utilizing Propeller-Induced Propulsive Force. In _Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ’18_. ACM Press, Montreal QC, Canada, 1–11. https://doi.org/10.1145/3173574.3174099
* Heo et al. (2019) Seongkook Heo, Jaeyeon Lee, and Daniel Wigdor. 2019\. PseudoBend: Producing Haptic Illusions of Stretching, Bending, and Twisting Using Grain Vibrations. In _Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology - UIST ’19_. ACM Press, New Orleans, LA, USA, 803–813. https://doi.org/10.1145/3332165.3347941
* Hettiarachchi and Wigdor (2016) Anuruddha Hettiarachchi and Daniel Wigdor. 2016. Annexing Reality: Enabling Opportunistic Use of Everyday Objects as Tangible Proxies in Augmented Reality. In _Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems - CHI ’16_. ACM Press, Santa Clara, California, USA, 1957–1967. https://doi.org/10.1145/2858036.2858134
* Hoppe et al. (2018) Matthias Hoppe, Pascal Knierim, Thomas Kosch, Markus Funk, Lauren Futami, Stefan Schneegass, Niels Henze, Albrecht Schmidt, and Tonja Machulla. 2018. VRHapticDrones: Providing Haptics in Virtual Reality through Quadcopters. In _Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia - MUM 2018_. ACM Press, Cairo, Egypt, 7–18. https://doi.org/10.1145/3282894.3282898
* Hoppe et al. (2020) Matthias Hoppe, Daniel Neumann, Stephan Streuber, Albrecht Schmidt, and Tonja-Katrin Machulla. 2020\. _A Human Touch: Social Touch Increases the Perceived Human-likeness of Agents in Virtual Reality_. https://doi.org/10.1145/3313831.3376719
* Huang et al. (2020) Hsin-Yu Huang, Chih-Wei Ning, Po-Yao Wang, Jen-Hao Cheng, and Lung-Pan Cheng. 2020. Haptic-go-round: A Surrounding Platform for Encounter-type Haptics in Virtual Reality Experiences. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_. ACM, Honolulu HI USA, 1–10. https://doi.org/10.1145/3313831.3376476
* Insko (2001) Brent Edward Insko. 2001\. Passive Haptics Significantly Enhances Virtual Environments. (2001), 111.
* Iwata (2005) Hiroo Iwata. 2005\. CirculaFloor. https://ieeexplore.ieee.org/abstract/document/1381227
* Iwata (2013) Hiroo Iwata. 2013\. Locomotion Interfaces. In _Human Walking in Virtual Environments: Perception, Technology, and Applications_ , Frank Steinicke, Yon Visell, Jennifer Campos, and Anatole Lécuyer (Eds.). Springer, New York, NY, 199–219. https://doi.org/10.1007/978-1-4419-8432-6_9
* Iwata et al. (2001) Hiroo Iwata, Hiroaki Yano, Fumitaka Nakaizumi, and Ryo Kawamura. 2001. Project FEELEX: adding haptic surface to graphics. In _Proceedings of the 28th annual conference on Computer graphics and interactive techniques - SIGGRAPH ’01_. ACM Press, Not Known, 469–476. https://doi.org/10.1145/383259.383314
* Jones (2000) Lynette Jones. 2000\. Kinesthetic Sensing. _Human and Machine Haptics_ (2000). http://bdml.stanford.edu/twiki/pub/Haptics/PapersInProgress/jones00.pdf
* Kim and Schneider (2020) Erin Kim and Oliver Schneider. 2020. Defining Haptic Experience: Foundations for Understanding, Communicating, and Evaluating HX. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_. ACM, Honolulu HI USA, 1–13. https://doi.org/10.1145/3313831.3376280
* Kim et al. (2020) Lawrence H. Kim, Daniel S. Drew, Veronika Domova, and Sean Follmer. 2020. User-defined Swarm Robot Control. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_. ACM, Honolulu HI USA, 1–13. https://doi.org/10.1145/3313831.3376814
* Kim et al. (2018) Yaesol Kim, Hyun Jung Kim, and Young J. Kim. 2018. Encountered-type haptic display for large VR environment using per-plane reachability maps: Encountered-type Haptic Display for Large VR Environment. _Computer Animation and Virtual Worlds_ 29, 3-4 (May 2018), e1814. https://doi.org/10.1002/cav.1814
* Knierim et al. (2017) Pascal Knierim, Thomas Kosch, Valentin Schwind, Markus Funk, Francisco Kiss, Stefan Schneegass, and Niels Henze. 2017. Tactile Drones - Providing Immersive Tactile Feedback in Virtual Reality through Quadcopters. In _Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems - CHI EA ’17_. ACM Press, Denver, Colorado, USA, 433–436. https://doi.org/10.1145/3027063.3050426
* Kohli (2010) Luv Kohli. 2010\. Redirected touching: Warping space to remap passive haptics. In _2010 IEEE Symposium on 3D User Interfaces (3DUI)_. IEEE, Waltham, MA, USA, 129–130. https://doi.org/10.1109/3DUI.2010.5444703
* Kohli et al. (2012) L. Kohli, M. C. Whitton, and F. P. Brooks. 2012. Redirected touching: The effect of warping space on task performance. In _2012 IEEE Symposium on 3D User Interfaces (3DUI)_. IEEE, Costa Mesa, CA, 105–112. https://doi.org/10.1109/3DUI.2012.6184193
* Kohli et al. (2013) Luv Kohli, Mary C. Whitton, and Frederick P. Brooks. 2013\. Redirected Touching: Training and adaptation in warped virtual spaces. In _2013 IEEE Symposium on 3D User Interfaces (3DUI)_. IEEE, Orlando, FL, 79–86. https://doi.org/10.1109/3DUI.2013.6550201
* Kovacs et al. (2020) Robert Kovacs, Eyal Ofek, Mar Gonzalez Franco, Alexa Fay Siu, Sebastian Marwecki, Christian Holz, and Mike Sinclair. 2020. Haptic PIVOT: On-Demand Handhelds in VR. In _Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology_ _(UIST ’20)_. Association for Computing Machinery, New York, NY, USA, 1046–1059. https://doi.org/10.1145/3379337.3415854
# Sending or not sending twin-field quantum key distribution
with distinguishable decoy states
Yi-Fei Lu, Mu-Sheng Jiang<EMAIL_ADDRESS>, Yang Wang, Xiao-Xu Zhang, Fan Liu, Chun Zhou, Hong-Wei Li, Wan-Su Bao<EMAIL_ADDRESS>
Henan Key Laboratory of Quantum Information and Cryptography, SSF IEU, Zhengzhou, Henan 450001, China
Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China
###### Abstract
Twin-field quantum key distribution (TF-QKD) and its variants can overcome the
fundamental rate-distance limit of QKD and have been demonstrated both in the
laboratory and in the field, but the security of their physical implementations
against side channels remains to be further studied. Through testing, we find
that the external modulation of different intensity states, required in TF-QKD
variants with post-phase compensation, introduces a side channel in the
frequency domain. Based on this observation, we propose a complete and
undetectable eavesdropping attack, named the passive frequency-shift attack, on
the sending-or-not-sending (SNS) TF-QKD protocol; the attack applies whenever
the signal and decoy states differ in the frequency domain and can be extended
to other imperfections that make decoy states distinguishable. We analyze this
attack by deriving an upper bound on the real secure key rate and comparing it
with the lower bound on the secret key rate as estimated by Alice and Bob,
taking into account the actively odd-parity pairing (AOPP) method and
finite-key effects. The simulation results show that Eve can obtain full
information about the secret key bits at long distances without being detected.
Our results emphasize the importance of practical security at the source and
may provide a valuable reference for practical implementations of TF-QKD.
## I Introduction
Quantum key distribution (QKD) allows two distant parties, Alice and Bob, to
share secret keys securely in the presence of an eavesdropper, Eve, by
harnessing the laws of physics [1, 2, 3]. Combined with the one-time pad, Alice
and Bob can achieve unconditionally secure private communication. Notable
progress has been made to improve performance, such as the communication
distance and secret key rate, and to bridge the gap between the idealized
device models assumed in security proofs and the functioning of realistic
devices in practical systems.
Measurement-device-independent (MDI) QKD [4] perfectly removes both known and
unknown security loopholes, or so-called side channels, in the measurement
unit, shifting the focus of quantum attacks to the source. The photon-number-
splitting (PNS) attack [5, 6], the major threat at the source since single-
photon sources are not yet available and weak laser light is widely used in
practical QKD systems, has been overcome by the decoy-state method [7, 8].
Combining these two methods, decoy-state MDI-QKD equipped with some security
patches performs well with imperfect single-photon sources. However, the key
rate and communication distance remain two implementation bottlenecks. To
exceed the linear scaling of the key rate [9, 10], twin-field QKD (TF-QKD) was
proposed by Lucamarini _et al._ [11]; its key rate scales linearly with the
square root of the channel transmittance $\eta$ by harnessing single-photon
interference over long distances. Its security proof was incomplete, however,
with a loophole caused by the later announcement of the phase information
[12], so many variants of TF-QKD [12, 13, 14, 15, 16, 17] have been proposed
to close this loophole, each with its own advantages. Many effects have been
considered in real-life implementations to accelerate its application,
including finite-key effects, the number of states with different intensities,
the appropriate phase slice, and asymmetric transmission distances [18, 19,
20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34]. Meanwhile,
several TF-QKD experiments have been completed in laboratory and field
settings to demonstrate its ability to overcome the rate-distance limit [35,
36, 37, 38, 39, 40, 41, 42].
However, the physical implementations of TF-QKD protocols with side channels
remain to be further studied. Since TF-QKD retains the MDI characteristic, we
need only focus on the light source. Ideally, it is assumed that the sending
devices are placed in a protected laboratory and can prepare and encode
quantum states correctly. Unfortunately, these conditions are not met in
practical systems, and state preparation flaws (SPFs) and leakage may be
induced by imperfect devices or Eve's disturbance. A small imperfection at the
source does not necessarily mean a small impact on the secret key rate,
because Eve can enhance such an imperfection by exploiting the channel loss.
Therefore, Eve can steal secret information actively, by performing Trojan-
horse attacks [43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55] on
modulators, lasers, attenuators, and so on, or passively, by harnessing the
SPFs and leakage caused by imperfect devices [5, 6, 56, 57, 58, 59].
In QKD protocols with imperfect single-photon sources, the decoy-state method
is vital: it is employed to monitor channel eavesdropping, and its security
rests on the assumption that Eve cannot distinguish between decoy and signal
states. In practice, however, this assumption does not always hold, owing to
the behavior of real apparatuses or Eve's disturbance. For instance, the
probability distributions of signal states and decoy states do not overlap
completely in the time domain under pump-current modulation [58]. Besides,
signal states and decoy states can be distinguished in the frequency domain
under external intensity modulation when Eve actively applies a
wavelength-selected photon-number-splitting attack in 'plug-and-play' systems
[46]. This loophole is introduced by the decoy-state method and caused by the
imperfections of modulators and modulation voltages. As the decoy-state method
is employed in TF-QKD protocols, it is of significance to analyze their
practical security in this respect.
In this paper, we concentrate on the sending-or-not-sending (SNS) TF-QKD
protocol [12] with the actively odd-parity pairing (AOPP) method and
finite-key effects taken into consideration [20, 21], and propose a complete
and undetectable eavesdropping scheme, named the passive frequency shift
attack, which can take advantage of the most general side channels in the
frequency domain and can be extended to other cases with distinguishable
decoy states. In Sec. II, we recap the frequency shift of intensity
modulators (IMs) and experimentally test the spectral distribution of signal
pulses under the external modulation method, which reveals a side channel in
the frequency domain. In Sec. III, we propose the passive frequency shift
attack on the SNS protocol and analyze its adverse impact by deriving the
upper bound of the secure key rate and comparing it with the lower bound of
the secret key rate estimated by Alice and Bob, with the AOPP method and
finite-key effects taken into account. In Sec. IV, we present our simulation
results. Last, we discuss countermeasures against side channels in Sec. V and
conclude in Sec. VI.
## II Frequency shift of intensity modulators
In this section, we recap the frequency shift of IMs and test it
experimentally to show a side channel in the frequency domain.
There are several kinds of intensity modulators, such as Mach-Zehnder-type
electro-optical intensity modulators (EOIMs), electro-absorption modulators
(EAMs), and acousto-optical modulators (AOMs). EOIMs, especially LiNbO3-based
devices, offer wavelength-independent modulation characteristics, excellent
extinction performance (typically 20 dB), and low insertion losses (typically
5 dB) [60].
LiNbO3-based EOIMs work on the principle of interference, controlled by
modulating the optical phase. The incoming light is coupled into a waveguide,
split equally into the two paths of a Mach-Zehnder interferometer, and made to
interfere at an output coupler. The two arms, made of lithium niobate, induce
a phase change when voltages are applied. Accordingly, the intensity and phase
of the output light are modulated after interference, depending on the applied
electrical voltages. Assuming voltages $V_{1}(t)$ and $V_{2}(t)$ are applied
to the two arms separately, with an input field of amplitude $E_{0}$ and
frequency $\omega_{0}$, the output field can be written as [46]
$\displaystyle E_{\rm out}(t)=E_{0}\cos[\Delta\varphi(t)]e^{i[\omega_{0}t+\varphi(t)]},$ (1)
where $\Delta\varphi(t)=[\gamma V_{1}(t)+\varphi_{1}-\gamma
V_{2}(t)-\varphi_{2}]/2$ and $\varphi(t)=[\gamma V_{1}(t)+\varphi_{1}+\gamma
V_{2}(t)+\varphi_{2}]/2$, $\gamma=\pi/V_{\pi}$ is the voltage-to-phase
conversion coefficient of the two arms, and $\varphi_{1}$ and $\varphi_{2}$
are static phases, which we omit for simplicity. Here, $V_{\pi}$ is the
half-wave voltage required to change the phase in one modulator arm by $\pi$
radians. The output intensity is given by
$\displaystyle P_{\rm out}(t)=|E_{\rm out}(t)|^{2}=\frac{P_{0}}{2}\bigl[1+\cos[\gamma V(t)]\bigr],$ (2)
where $V(t)=V_{1}(t)-V_{2}(t)$ and $P_{0}$ is the input optical power. The
phase is maintained and the intensity is determined by Eq. 2 on condition that
the two modulator arms are driven by the same amount but in opposite
directions (i.e., $V_{1}(t)=-V_{2}(t)$), which is known as balanced driving or
push-pull operation. When $V(t)$ is constant, we get pure intensity modulation
without frequency shift. However, once $V(t)$ is no longer constant, the
output field acquires unexpected features. Specifically, a frequency shift is
induced: for example, if $V_{1}(t)=-V_{2}(t)=V_{0}+kt$, the output field can
be expressed as [46]
$\displaystyle E_{\rm out}=\frac{E_{0}}{2}\Bigl[e^{i[(\omega_{0}+\gamma k)t+\gamma V_{0}]}+e^{i[(\omega_{0}-\gamma k)t-\gamma V_{0}]}\Bigr].$ (3)
From Eq. 3, we can see a frequency shift of the light pulses of
$\pm\omega_{m}=\pm\gamma k$ relative to the original $\omega_{0}$, where $k$
is the slope of the modulation voltage. Moreover, the frequency shift of the
output field becomes more involved when the modulation voltages are more
complicated, as in practical systems. We can analyze the spectrum of the
output field using the fast Fourier transform method.
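This FFT analysis can be sketched numerically. The snippet below is a minimal illustration in arbitrary units: the shift `f_shift` (standing for $\gamma k/2\pi$) and the static phase `V0_phase` (standing for $\gamma V_{0}$) are made-up values, not measured ones. It builds the baseband envelope of Eq. (1) for $V_{1}(t)=-V_{2}(t)=V_{0}+kt$ and recovers the two sidebands at $\pm\gamma k$ predicted by Eq. (3):

```python
import numpy as np

fs, N = 4096, 4096              # sample rate and record length (arbitrary units)
t = np.arange(N) / fs
f_shift = 50.0                  # gamma*k / (2*pi): hypothetical ramp-induced shift
V0_phase = 0.3                  # gamma*V0: hypothetical static phase
# Baseband envelope of Eq. (1) with V1 = -V2 = V0 + k*t: E = E0*cos(gamma*(V0 + k*t))
env = np.cos(2 * np.pi * f_shift * t + V0_phase)
spec = np.fft.fftshift(np.fft.fft(env))
freqs = np.fft.fftshift(np.fft.fftfreq(N, d=1 / fs))
# the two strongest spectral bins are the sidebands of Eq. (3)
peaks = freqs[np.argsort(np.abs(spec))[-2:]]
print(sorted(peaks.tolist()))
```

With a linear voltage ramp the envelope is a pure tone, so the spectrum collapses onto exactly two lines at $\pm\gamma k$; a more complicated drive waveform spreads power over more bins, which is what the measurement in the next paragraph probes.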
To evaluate the frequency shift of pulses of different intensities, we test it
in a proof-of-principle setup. Optical pulses with 1 ns pulse width are first
produced with constant intensity from a laser diode (Keysight 8164B), then
modulated by an IM driven by an arbitrary waveform generator (Keysight
M9505A), and finally measured by an optical spectrum analyzer (Yokogawa
AQ637D). Note that the measurement is taken before the fixed attenuation
implemented in actual systems, since the pulses emitted at the single-photon
level follow the same spectral distribution. Fig. 1 illustrates the
wavelength spectra of three states with the intensity ratio taken from the
SNS experiment [39], 0.1 : 0.384 : 0.447 ($\mu_{a}=0.1$, $\mu_{b}=0.384$,
$\mu_{z}=0.447$). The normalized intensity probability distributions are
shown in Fig. 2 to highlight the differences.
Figure 1: The wavelength spectrum of three different states with intensity
ratio as 0.1: 0.384: 0.447 ($\mu_{a}=0.1$, $\mu_{b}=0.384$, $\mu_{z}=0.447$).
Figure 2: The normalized intensity distributions of three different states
with intensity ratio 0.1 : 0.384 : 0.447 ($\mu_{a}=0.1$, $\mu_{b}=0.384$,
$\mu_{z}=0.447$). The two dashed lines at 1550.1125 nm and 1550.131 nm are
boundaries that can be used to distinguish the states of intensity $\mu_{z}$
($\mu_{b}$) from $\mu_{a}$. There are also some subtle differences between
the states $\mu_{z}$ and $\mu_{b}$ in the frequency domain.
Obviously, the states of different intensities modulated by the IMs do not
overlap completely in the frequency domain. The distinction between signal
states (also strong decoy states) and weak decoy states is evident, as
expected, because the modulation voltage amplitudes of the signal and strong
decoy states are higher, which induces more frequency shift; thus the peaks of
the signal and strong decoy states are lower than those of the weak decoy
states. More precisely, the normalized probability of the weak decoy states is
higher than that of the signal and strong decoy states between 1550.112 nm and
1550.131 nm. There are also slight differences between signal and strong decoy
states. The difference grows when the pulses or the linewidth are narrowed, as
TF-QKD requires, or when the intensity difference between the states becomes
larger. On this foundation, Eve can apply a passive frequency shift attack by
harnessing this side channel.
## III Passive frequency shift attack on SNS protocol
In this section, we propose a passive frequency shift attack scheme on
practical SNS TF-QKD systems that exploits the side channels in the frequency
domain, and analyze its adverse impact by deriving the upper bound of the
secure key rate with the AOPP method and finite-key effects taken into
account. We note that this attack can also be applied with side channels
other than those in the frequency domain.
In fact, the signal pulses (including signal states and decoy states) and
reference pulses must be modulated from a stable continuous-wave laser source
by the external modulation method so as to estimate and compensate the phase
noise in those TF-QKD protocols that need post-phase compensation, such as SNS
TF-QKD [12] and phase-matching (PM) TF-QKD [13]. Even if the synchronization
cannot be controlled by Eve, unlike the cases discussed in Refs. [46, 47], the
probability distributions of signal states and decoy states inevitably may
not overlap completely in the frequency domain.
In the 4-intensity SNS TF-QKD protocol (see Appendix A for details), Alice and
Bob need to modulate the continuous light to 5 different intensities: the
maximum-intensity pulses are used as phase reference pulses, the minimum as
vacuum states, and the others as signal states, weak decoy states, and strong
decoy states. In practical SNS systems [38, 39], three IMs are used to
modulate these 5 different pulses to ensure that the output signal intensities
agree with the theoretical requirements and the reference detections are
strong enough for phase compensation. Side channels in the frequency domain
will inevitably arise. In what follows, we introduce the passive frequency
shift attack.
Suppose Eve intercepts all signal pulses at Alice's and Bob's output ports,
where the signal pulses have not yet been attenuated by the channels, and then
distinguishes signal and decoy states with a wavelength division multiplexer
(WDM) and three single-photon detectors (SPDs), as illustrated in Fig. 3. She
sets the intervals $T_{\alpha}$ appropriately, where $\alpha\in\{z,a,b\}$,
according to the wavelength spectra of the different states, to distinguish
the states of intensity $\mu_{\alpha}$, though this may fail with a certain
probability. Theoretically, $T_{\alpha}$ can be set as the union of two
symmetric intervals according to Eq. 3.
Figure 3: Schematic of the passive frequency shift attack. PM: phase
modulator, IM: intensity modulator, ATT: attenuator, WDM: wavelength division
multiplexer, SPD: single-photon detector, S: light path selector, BS: beam
splitter, SNSPD: superconducting nanowire single-photon detector. The light
path selector S1 (S2) is controlled by SPD1 (SPD1 and SPD2), and Bob's device,
which is the same as Alice's, is omitted from the figure.
Suppose the four ports of the WDM, 1, 2, 3 and 4, export photons with
frequencies located in $T_{z}$, $T_{a}$, $T_{b}$, and elsewhere, respectively.
The light path selector S1 (S2) is controlled by SPD1 (SPD1 and SPD2). Denote
an SPD (i.e., SPD1, SPD2, or SPD3) as 1 or 0 when it clicks or not, and a
light path selector (i.e., S1 or S2) as 1 or 0 when it selects the upper or
lower path. Then we set
$\displaystyle{\rm S1}$ $\displaystyle=\overline{{\rm SPD1}},$
$\displaystyle{\rm S2}$ $\displaystyle={\rm SPD1}\vee{\rm SPD2}.$ (4)
Note that at most one SPD will click under this principle. According to the
response of the SPDs, Eve sets the total transmittance to $\eta_{z}$,
$\eta_{a}$, $\eta_{b}$, or $\eta_{k}$ when SPD1, SPD2, or SPD3 clicks, or none
of them clicks, respectively. In this process, Eve can get partial raw key
bits after Alice (Bob) announces their signal and decoy windows: Eve can
conclude that a key bit is 1 (0) when ${\rm SPD1}\vee{\rm SPD2}\vee{\rm
SPD3}=1$ in a $Z$ window. There is no bit-flip error between Alice (Bob) and
Eve because Eve intercepts photons at the output ports without stray photons.
The raw bits in $Z$ windows are secure only when ${\rm SPD1}\vee{\rm
SPD2}\vee{\rm SPD3}=0$ on both sides. On the one hand, once Eve detects
photons successfully on one side, the bit either matches the other side or
constitutes a bit-flip error, which will be revealed in the error correction
step (and in the pre-error-correction process when AOPP is performed). On the
other hand, the bits are balanced (i.e., random for Eve) in one-detector
heralded events with ${\rm SPD1}\vee{\rm SPD2}\vee{\rm SPD3}=0$, which means
the raw bits are secure in those windows. Though Eve cannot distinguish the
decoy states and signal states without errors, once the transmittances of
signal and decoy states differ, the decoy-state method may not estimate the
count rate and phase-flip error rate of the single-photon states correctly.
When the actual secure key rate is lower than the estimated one, the final
secret string is partially insecure.
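The routing logic of Eq. (4) and the resulting transmittance choice can be written out as a small truth-table check. This is a hypothetical sketch of Eve's decision rule, not code from any experiment; the `eta_*` strings simply name which of the four transmittances from the text is applied:

```python
def eve_routing(spd1: bool, spd2: bool, spd3: bool):
    """Light-path selectors per Eq. (4) and the transmittance Eve applies."""
    s1 = not spd1            # S1 = NOT SPD1
    s2 = spd1 or spd2        # S2 = SPD1 OR SPD2
    if spd1:
        eta = "eta_z"        # pulse identified as signal intensity mu_z
    elif spd2:
        eta = "eta_a"        # identified as weak decoy intensity mu_a
    elif spd3:
        eta = "eta_b"        # identified as strong decoy intensity mu_b
    else:
        eta = "eta_k"        # no click: pulse escaped Eve's measurement
    return s1, s2, eta

# at most one SPD clicks under this principle, so the cases are exclusive
assert eve_routing(True, False, False) == (False, True, "eta_z")
assert eve_routing(False, True, False) == (True, True, "eta_a")
assert eve_routing(False, False, False) == (True, False, "eta_k")
```

Only the last case (no click anywhere, routed with `eta_k`) corresponds to raw bits that remain unknown to Eve, matching the security condition stated above.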
We emphasize that this attack does not introduce unnecessary errors, as Eve's
beam splitting and measurement can be seen as a loss, and the remaining pulses
are coherent states with no added phase noise. What is more, Eve can
completely control the errors, except for the inherent errors of the protocol,
through the channels, the superconducting nanowire single-photon detectors
(SNSPDs), and the classical information she announces. In the following, we
analyze the negative effect of this passive frequency shift attack.
Considering the most general case, we assume that the envelope of the
wavelength spectrum can be written as $f_{\beta}(\lambda)$, where
$\beta\in\{z,a,b\}$. The wavelength spectra of the signal states and decoy
states cannot overlap completely. By setting the intervals $T_{\alpha}$
properly, Eve can distinguish the decoy states and signal states, with some
errors. The proportion of state $\mu_{\beta}$ falling in interval $T_{\alpha}$
is
$\displaystyle r_{\beta|\alpha}=\int_{T_{\alpha}}f_{\beta}(\lambda)d\lambda,$ (5)
where $\alpha,\beta\in\{z,a,b\}$. Which detector clicks obeys a certain
probability distribution. Therefore, a state of intensity $\mu_{\beta}$ is
transmitted with one of four different transmittances
$\varOmega_{\beta}=\{\eta_{\beta z},\eta_{\beta a},\eta_{\beta b},\eta_{\beta
k}\}$, each with a finite probability, where
$\displaystyle\eta_{\beta z}=\eta_{z}(1-r_{\beta|z}),\quad\eta_{\beta a}=\eta_{a}(1-r_{\beta|z}-r_{\beta|a}),\quad\eta_{\beta b}=\eta_{b}(1-r_{\beta|z}-r_{\beta|a}-r_{\beta|b}),\quad\eta_{\beta k}=\eta_{k}(1-r_{\beta|z}-r_{\beta|a}-r_{\beta|b}).$ (6)
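As a quick sanity check of Eq. (6), the sketch below evaluates the four effective transmittances for a state of intensity $\mu_{a}$ using the Group 1 overlap proportions from Table 3; the base transmittances in `eta` are illustrative choices only, since Eve is free to set them (she may even use a lossless channel):

```python
def effective_transmittances(eta, r):
    """Eq. (6): effective transmittances seen by a state of intensity mu_beta,
    given Eve's base transmittances eta and overlap proportions r_{beta|alpha}."""
    rz, ra, rb = r["z"], r["a"], r["b"]
    return {
        "z": eta["z"] * (1 - rz),
        "a": eta["a"] * (1 - rz - ra),
        "b": eta["b"] * (1 - rz - ra - rb),
        "k": eta["k"] * (1 - rz - ra - rb),
    }

# mu_a state under Group 1 of Table 3: r_{a|z}=0.008, r_{a|a}=0.1, r_{a|b}=0.008;
# base transmittances are hypothetical (Eve minimizes eta_k, see below)
etas = effective_transmittances(
    {"z": 1.0, "a": 1.0, "b": 1.0, "k": 0.01},
    {"z": 0.008, "a": 0.1, "b": 0.008},
)
print(etas)
```
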
Table 1: List of experimental parameters. Here, $\gamma$ is the fiber loss coefficient (dB/km), $\eta_{d}$ is the detection efficiency of the detectors, $e_{d}$ is the misalignment-error probability, $f_{\rm EC}$ is the error-correction inefficiency, $\xi$ is the failure probability of the statistical fluctuation analysis, $p_{d}$ is the dark count rate, and $M$ is the number of phase slices.

$\gamma$ | $\eta_{d}$ | $e_{d}$ | $f_{\rm EC}$ | $\xi$ | $p_{d}$ | $M$
---|---|---|---|---|---|---
0.2 | 56% | 0.1 | 1.1 | $2.2\times 10^{-9}$ | $10^{-10}$ | 16

Table 2: Intensities and probabilities selected by Alice and Bob.

$\mu_{a}$ | $\mu_{b}$ | $\mu_{z}$ | $p_{z}$ | $p_{a}$ | $p_{b}$ | $p_{z0}$
---|---|---|---|---|---|---
0.1 | 0.384 | 0.447 | 0.776 | 0.85 | 0.073 | 0.732

Table 3: Five groups of the proportions $r_{\beta|\alpha}$ of state $\mu_{\beta}$ in interval $T_{\alpha}$.

| $r_{z|z}$ | $r_{a|z}$ | $r_{b|z}$ | $r_{z|a}$ | $r_{a|a}$ | $r_{b|a}$ | $r_{z|b}$ | $r_{a|b}$ | $r_{b|b}$
---|---|---|---|---|---|---|---|---|---
G1 | 0.01 | 0.008 | 0.01 | 0.01 | 0.1 | 0.01 | 0.01 | 0.008 | 0.01
G2 | 0.01 | 0.008 | 0.01 | 0.005 | 0.1 | 0.005 | 0.01 | 0.008 | 0.01
G3 | 0.01 | 0.008 | 0.01 | 0.01 | 0.2 | 0.01 | 0.01 | 0.008 | 0.01
G4 | 0.01 | 0.005 | 0.01 | 0.005 | 0.2 | 0.005 | 0.01 | 0.005 | 0.01
G5 | 0.01 | 0.008 | 0.01 | 0 | 0.1 | 0 | 0.01 | 0.008 | 0.01
Here, the attenuation comes from Eve's interception and detection, and the
total transmittance can be controlled by Eve completely, meaning that Eve is
allowed to use a lower-loss or even lossless channel and perfect detectors
with 100% detection efficiency and no dark counts, and can select the
intervals freely to obtain satisfactory results. For a state of intensity
$\mu_{\beta}$, the probability of being transmitted with
$\eta_{\beta\gamma}\in\varOmega_{\beta}$, $\gamma\in M=\{z,a,b,k\}$, is given
by
$\displaystyle p_{\beta z}=(1-e^{-\mu_{\beta|z}}),\quad p_{\beta a}=e^{-\mu_{\beta|z}}(1-e^{-\mu_{\beta|a}}),\quad p_{\beta b}=e^{-\mu_{\beta|z}}e^{-\mu_{\beta|a}}(1-e^{-\mu_{\beta|b}}),\quad p_{\beta k}=e^{-\mu_{\beta|z}}e^{-\mu_{\beta|a}}e^{-\mu_{\beta|b}},$ (7)
where $\mu_{\beta|\alpha}=\mu_{\beta}r_{\beta|\alpha}$. Here,
$e^{-\mu_{\beta|\alpha}}$ is the probability of finding zero photons in
interval $T_{\alpha}$ for a state of intensity $\mu_{\beta}$.
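Since Eq. (7) enumerates an exhaustive, mutually exclusive set of outcomes, the four probabilities must sum to one. The sketch below verifies this for $\mu_{a}=0.1$ with the Group 1 proportions of Table 3 (illustrative values only):

```python
from math import exp

def detection_probs(mu_beta, r):
    """Eq. (7): probabilities that a state of intensity mu_beta triggers SPD1,
    SPD2, SPD3, or none of them; exp(-mu_{beta|alpha}) is the zero-photon
    probability in interval T_alpha, with mu_{beta|alpha} = mu_beta * r."""
    mz, ma, mb = (mu_beta * r[key] for key in ("z", "a", "b"))
    p_z = 1 - exp(-mz)
    p_a = exp(-mz) * (1 - exp(-ma))
    p_b = exp(-mz) * exp(-ma) * (1 - exp(-mb))
    p_k = exp(-mz) * exp(-ma) * exp(-mb)
    return p_z, p_a, p_b, p_k

# mu_a = 0.1 with the Group 1 proportions of Table 3
probs = detection_probs(0.1, {"z": 0.008, "a": 0.1, "b": 0.008})
assert abs(sum(probs) - 1.0) < 1e-12   # the four outcomes are exhaustive
```

The telescoping structure of Eq. (7), where each term carries the zero-photon factors of the earlier intervals, is what makes the sum collapse to one.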
Since TF-QKD protocols are proposed for long-distance optical fiber
communication, Eve's best strategy is to acquire a larger fraction of the keys
by minimizing $\eta_{k}$ as far as possible while keeping the key rate and
communication distance consistent with Alice and Bob's estimation. When the
communication distance is long enough, Eve may steal all the secret key bits.
Two key rates matter: the lower bound of the secret key rate under Alice and
Bob's estimation, $R_{e}$, and the upper bound of the real secure key rate,
$R_{u}$. Note that Alice and Bob cannot estimate $R_{e}$ correctly under this
attack, since it is impossible to pick out decoy states that have undergone
the same operation as the signal states, i.e., the decoy-state method does not
work properly.
When the AOPP method is performed with partial bits leaked to Eve, Bob only
chooses odd-parity bit pairs, and they keep the second bit if Alice's bit
pairs are odd-parity as well. In this process, Eve's information on the raw
bits is not reduced, as the parity information is public. Therefore, the upper
bound of the secure key rate $R_{u}$ can be written as
$\displaystyle R_{u}=n_{1,{\rm sec}}[1-H(e_{\rm in})]-n_{t}^{\prime}fH(E_{Z}^{\prime}),$ (8)
where $n_{1,{\rm sec}}$ is the upper bound on the number of single photons
that can be used to distill secure key bits (i.e., the lower bound of untagged
bits) and $e_{\rm in}$ is the inherent phase-flip error rate determined by the
number of phase slices $M$. These two parameters are discussed in Appendix B.
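A minimal numeric reading of Eq. (8), with $H$ the binary Shannon entropy; the parameter values below are invented purely to exercise the formula and do not come from Appendix B:

```python
from math import log2

def H(x):
    """Binary Shannon entropy H(x) = -x*log2(x) - (1-x)*log2(1-x)."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * log2(x) - (1 - x) * log2(1 - x)

def upper_bound_key(n1_sec, e_in, n_t_prime, f, E_Z_prime):
    """Eq. (8): upper bound of the real secure key length after AOPP."""
    return n1_sec * (1 - H(e_in)) - n_t_prime * f * H(E_Z_prime)

# hypothetical counts and error rates, chosen only to exercise the formula
R_u = upper_bound_key(n1_sec=1e4, e_in=0.05, n_t_prime=2e4, f=1.1, E_Z_prime=0.01)
assert R_u > 0
```

The first term is the privacy-amplified yield of the secure single photons; the second is the error-correction leakage, so $R_{u}$ drops to zero once the leakage dominates.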
This attack can be applied as long as the wavelength spectrum distributions of
the signal and decoy states are different; its effect varies with the
discernibility of the different states. Moreover, the attack can be extended
to other imperfections with distinguishable decoy states, such as polarization
or temporal shape, by replacing the WDM with appropriate devices.
## IV Numerical simulations
Figure 4: The estimated lower bound of the secret key rate $R_{e}$ and the
upper bounds of the real secure key rates $R_{ui}$, on a logarithmic scale,
versus the transmission distance (between Alice and Bob) under the passive
frequency shift attack, where $i\in\{1,2,3,4,5\}$, with the experimental
parameters listed in Tables 1, 2 and 3. The solid line corresponds to the
estimated key rate $R_{e}$, which is the same as that without the attack,
while the dashed lines represent the upper bounds of the real secure key
rates $R_{ui}$. The real secure key rate $R_{u5}$, not shown in the figure,
is identically 0.
In this section we numerically simulate the behaviour of the SNS protocol
under the passive frequency shift attack. Nine parameters are obtained by
statistics in practical systems before AOPP, including $n_{\alpha\beta}$,
$n_{\Delta^{+}}^{R}$, $n_{\Delta^{-}}^{L}$, $n_{t}$ and $E_{z}$, where
$n_{t}=n_{\rm sig}+n_{\rm err}$ is the length of the raw keys and
$\alpha\beta\in S=\{vv,va,av,vb,bv\}$. Here, $E_{z}=n_{\rm err}/n_{t}$, where
$n_{\rm sig}$ and $n_{\rm err}$ are the numbers of correct and erroneous raw
bits, respectively. Under the passive frequency shift attack, these
parameters can be simulated as discussed in Appendix C. The extension to AOPP
is trivial, as the attack does not act on that step. For simulation purposes,
the experimental parameters listed in Tables 1 and 2 are taken from the SNS
experiment [39] with slight modifications. We then simulate the normal secret
key rate without the frequency shift attack, the key rate under Alice and
Bob's estimation, and the upper bound of the secure key rate under the
frequency shift attack, with AOPP and finite-key effects, by selecting the
five groups of plausible parameters $r_{\beta|\alpha}$ listed in Table 3,
following the principle that the difference between signal states (strong
decoy states) and weak decoy states is significant while that between signal
and strong decoy states is small.
In Fig. 4, the estimated key rate $R_{e}$ under the passive frequency shift
attack, represented by the solid line, is the same as that without the attack,
which means this attack will not be detected by Alice and Bob with the
experimental parameters listed in Tables 1, 2 and 3. In comparison, the
dashed lines represent the upper bounds of the secure key rates $R_{u}$ under
the frequency shift attack. Denote the upper bound of the secure key rate
under the parameters of Group $i$ (G$i$) in Table 3 as $R_{ui}$, where
$i\in\{1,2,3,4,5\}$. Comparing $R_{u1}$ with $R_{u2}$, we find that the
difference between the weak decoy states and the signal states (strong decoy
states) significantly affects the impact of the attack: the communication
distance shrinks from 472 km to 368 km when the difference changes from
10-fold to 20-fold. The negative effects become even greater when the ratio
$r_{z|a}:r_{a|a}:r_{b|a}$ is kept constant but the absolute values increase,
as seen by comparing $R_{u2}$ with $R_{u3}$. The secure distance is shortened
further as the difference becomes larger, as seen by contrasting $R_{u2}$ and
$R_{u4}$. Moreover, Alice and Bob cannot distribute keys at all when Eve can
identify the weak decoy states without error, as with the parameters of
Group 5. On the one hand, if Alice and Bob attach importance to these side
channels and know Eve's action, this attack limits the communication
distance; note that the actual distance and key rate will be even more
pessimistic, since the key rates $R_{ui}$ are only upper bounds. Otherwise,
the secret key bits will be insecure, especially over long distances. For
example, the key bits are all insecure when the estimated key rate is
$10^{-8}$ per pulse at 479 km (an acceptable value at long distance).
In this attack, we have assumed that Eve intercepts photons in the order
$T_{z}$, $T_{a}$, $T_{b}$, but our numerical simulations show no obvious
difference when this order is changed. This can be understood as follows:
only the photons with ${\rm SPD1}\vee{\rm SPD2}\vee{\rm SPD3}=0$, i.e., the
pulses not detected by Eve in any interval $T_{\alpha}$, are secure, and the
order makes no difference when $r_{\alpha|\beta}=0$ for $\alpha\neq\beta$.
## V Discussion
The eavesdropping attack proposed above is a passive attack harnessing side
channels and is hard to detect. To guarantee security in practical systems
with side channels, the first potential way is to improve the experimental
techniques or modulation methods so as to restrain the side channels, but
this may prove an endless chase: when one channel is closed, another appears.
The second alternative is to develop theoretical models that maximize the
secure key rates under attacks, like the loss-tolerant method [61, 62, 63,
64, 65], but this needs an accurate characterization of the real apparatuses.
Guaranteeing the practical security of QKD systems may thus be an ongoing
search for side channels.
## VI Conclusion
The goal of QKD at present is to provide long-distance and high-speed key
distribution, which will inevitably induce side channels. Increasing the
repetition rate and narrowing the pulses to improve speed make the pulses
complex and distinguishable, in frequency, polarization, temporal shape, and
so on. Any small imperfection may be exploited and enhanced by Eve through
the channel loss, especially at long distances. Therefore, it is necessary to
pay more attention to the practical security of TF-QKD systems.
In this paper, we have investigated and tested the side channels of the
external modulation required in those TF-QKD protocols with post-phase
compensation, such as SNS TF-QKD [12] and PM TF-QKD [13]. Based on this, we
propose a complete and undetectable eavesdropping scheme, named the passive
frequency shift attack, on the SNS protocol, which can be applied whenever
the different states differ in the frequency domain and can be extended to
other imperfections with distinguishable decoy states. Normally, Alice and
Bob can estimate the lower bound of the secret key rate correctly no matter
what Eve does. But this estimation fails once Eve's operations on the signal
and decoy states differ, which may yield insecure bits when the upper bound
of the secure key rate falls below the estimated lower bound. According to
the numerical results, Eve can get full information about the secret key bits
at long distances if Alice and Bob neglect this distinguishability. For
example, the key bits are all insecure when the estimated key rate is
$10^{-8}$ per pulse at 479 km under the five selected groups of parameters.
As there is a variety of potentially exploitable loopholes at the source, our
results emphasize the practical security of the light source. Building
hardened implementations of practical QKD systems is a constant pursuit.
###### Acknowledgements.
This work is supported by the National Key Research and Development Program
of China (Grant No. 2020YFA0309702) and the National Natural Science
Foundation of China (Grants No. 61605248, No. 61675235 and No. 61505261).
## Appendix A SNS TF-QKD protocol
We review the SNS protocol and the key rate formula with the AOPP method and
finite-key effects [12, 20, 21] in this section.
(1) Preparation and measurement. In any time window $i$, Alice (Bob) randomly
determines whether it is a signal window or a decoy window, with probabilities
$p_{z}$ and $p_{x}=1-p_{z}$. If it is a signal window, Alice (Bob) sends a
phase-randomized coherent state of intensity $\mu_{z}$, denoted as 1 (0), or a
vacuum state $|0\rangle$, denoted as 0 (1), with probabilities
$p_{z1}=1-p_{z0}$ and $p_{z0}$, respectively. If it is a decoy window, Alice
(Bob) sends a phase-randomized coherent state
$|\sqrt{\mu_{a}}e^{i\theta_{A}}\rangle$,
$|\sqrt{\mu_{b}}e^{i\theta_{A}^{\prime}}\rangle$ or $|0\rangle$
($|\sqrt{\mu_{a}}e^{i\theta_{B}}\rangle$,
$|\sqrt{\mu_{b}}e^{i\theta_{B}^{\prime}}\rangle$ or $|0\rangle$) with
probabilities $p_{a}$, $p_{b}$ and $p_{v}=1-p_{a}-p_{b}$, where
$\mu_{a}<\mu_{b}$. The third party, Charlie, hereafter treated as Eve, is
supposed to perform interferometric measurements on the incoming pulses and
announce the results.
(2) Different types of time windows. Suppose Alice and Bob repeat the above
process $N$ times; they then announce their signal windows and decoy windows
through public channels. If both Alice and Bob choose a signal window, it is
a $Z$ window. The effective events in $Z$ windows are defined as one-detector
heralded events, no matter which detector clicks. Alice and Bob obtain two raw
$n_{t}$-bit strings $Z_{A}$ and $Z_{B}$ according to the effective events in
$Z$ windows. Note that a phase-randomized coherent state of intensity $\mu$
is equivalent to a probabilistic mixture of photon-number states,
$\sum_{k=0}^{\infty}\frac{e^{-\mu}\mu^{k}}{k!}|k\rangle\langle k|$.
Therefore, we can define $Z_{1}$ windows as the subset of $Z$ windows in
which only one party decides to send and she (he) actually sends a
single-photon state $|1\rangle$. The bits from effective $Z_{1}$ windows are
regarded as untagged bits in the tagged model [66]. The intensities of the
pulses are then announced to each other, except for the intensities in $Z$
windows. If both commit to a decoy window, it is an $X$ window. Alice and Bob
also announce their phase information $\theta_{A}$, $\theta_{B}$ when they
choose the same intensity $\mu_{a}$ in an $X$ window, denoted an $X_{a}$
window. If only one detector clicks in an $X_{a}$ window, with phases
satisfying
$\displaystyle|\theta_{A}-\theta_{B}-\varphi_{AB}|\leq\Delta/2$ (9)
or
$\displaystyle|\theta_{A}-\theta_{B}-\pi-\varphi_{AB}|\leq\Delta/2,$ (10)
it is an effective event in the $X_{a}$ windows. All effective events in
$X_{a}$ windows can be divided into two subsets, $C_{\Delta^{+}}$ and
$C_{\Delta^{-}}$, according to Eq. 9 and Eq. 10, respectively, and the
numbers of events in $C_{\Delta^{+}}$ and $C_{\Delta^{-}}$ are defined as
$N_{\Delta^{+}}$ and $N_{\Delta^{-}}$. Here, $\varphi_{AB}$ is set properly
to obtain a satisfactory key rate; it varies over time due to phase drift and
can be obtained with the reference pulses. In the following, we omit the
phase drift without loss of generality and set $\varphi_{AB}=0$.
(3) Parameter estimation. They can estimate parameters, including the bit-flip
error rate of the raw bits $E_{Z}$, the lower bound of untagged bits
$\underline{n}_{1}$ (or the lower bound of the counting rate
$\underline{s}_{1}$ equivalently) and the upper bound of the phase-flip error
rate of untagged bits $\overline{e}_{1}^{ph}$. The bit-flip error rate $E_{Z}$
can be obtained by error test, $\underline{s}_{1}$ and $\overline{e}_{1}^{ph}$
can be estimated with decoy state method as follows.
Denote $\rho_{v}=|0\rangle\langle 0|$,
$\rho_{a}=\sum_{k=0}^{\infty}e^{-\mu_{a}}\mu_{a}^{k}/k!|k\rangle\langle k|$
and $\rho_{b}=\sum_{k=0}^{\infty}e^{-\mu_{b}}\mu_{b}^{k}/k!|k\rangle\langle
k|$, where $\rho_{a}$ and $\rho_{b}$ are density operators of the phase-
randomized coherent states used in $X$ windows in which the phase is not
announced. Let $N_{\alpha\beta}$ be the number of instances where Alice sends state
$\rho_{\alpha}$ and Bob sends state $\rho_{\beta}$ and $n_{\alpha\beta}$ be
the number of corresponding one-detector heralded events, where
$\alpha\beta\in S=\\{vv,va,av,vb,bv\\}$. Thus, the counting rate can be
defined as $S_{\alpha\beta}=n_{\alpha\beta}/N_{\alpha\beta}$. And
$\underline{s}_{1}$ can be estimated with decoy-state method as [67, 18]
$\displaystyle\underline{s}_{1}\geq$
$\displaystyle\frac{1}{2\mu_{a}\mu_{b}(\mu_{b}-\mu_{a})}[\mu_{b}^{2}e^{\mu_{a}}(S_{va}+S_{av})$
(11)
$\displaystyle-\mu_{a}^{2}e^{\mu_{b}}(S_{vb}+S_{bv})-2(\mu_{b}^{2}-\mu_{a}^{2})S_{vv}].$
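As a numerical sketch, the bound of Eq. (11) is straightforward to evaluate once the counting rates are known. All counting rates and intensities below are illustrative placeholders (not taken from any experiment):

```python
import math

def s1_lower_bound(mu_a, mu_b, S_va, S_av, S_vb, S_bv, S_vv):
    """Decoy-state lower bound on the single-photon counting rate, Eq. (11)."""
    pref = 1.0 / (2.0 * mu_a * mu_b * (mu_b - mu_a))
    return pref * (mu_b ** 2 * math.exp(mu_a) * (S_va + S_av)
                   - mu_a ** 2 * math.exp(mu_b) * (S_vb + S_bv)
                   - 2.0 * (mu_b ** 2 - mu_a ** 2) * S_vv)

# Illustrative counting rates, loosely mimicking a lossy channel with
# dark counts (placeholder values, not experimental data).
s1 = s1_lower_bound(mu_a=0.1, mu_b=0.4,
                    S_va=5e-4, S_av=5e-4, S_vb=2e-3, S_bv=2e-3, S_vv=1e-8)
```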
Denote the bit-flip errors in $C_{\Delta^{+}}$ ($C_{\Delta^{-}}$) as the
effective events when the right (left) detector clicks and its total number as
$n_{\Delta^{+}}^{R}$ ($n_{\Delta^{-}}^{L}$). The bit-flip error rate in
$C_{\Delta}=C_{\Delta^{+}}\bigcup C_{\Delta^{-}}$ can be shown as
$\displaystyle
T_{\Delta}=\frac{n_{\Delta^{+}}^{R}+n_{\Delta^{-}}^{L}}{N_{\Delta^{+}}+N_{\Delta^{-}}}.$
(12)
Therefore $\overline{e}_{1}^{ph}$ can be estimated with decoy-state method as
[12, 18]
$\displaystyle\overline{e}_{1}^{ph}\leq\frac{T_{\Delta}-1/2e^{-2\mu_{a}}S_{vv}}{2\mu_{a}e^{-2\mu_{a}}\underline{s}_{1}}.$
(13)
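Eq. (13) can be sketched the same way, with the Eq. (11) lower bound $\underline{s}_{1}$ passed in as a parameter; again the input values are illustrative only:

```python
import math

def e1ph_upper_bound(T_delta, mu_a, S_vv, s1_lower):
    """Upper bound on the phase-flip error rate of untagged bits, Eq. (13)."""
    num = T_delta - 0.5 * math.exp(-2.0 * mu_a) * S_vv
    den = 2.0 * mu_a * math.exp(-2.0 * mu_a) * s1_lower
    return num / den

# Placeholder inputs; s1_lower would come from the Eq. (11) estimate.
e1 = e1ph_upper_bound(T_delta=1e-4, mu_a=0.1, S_vv=1e-8, s1_lower=5e-3)
```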
(4) Key rate formula. With these quantities, the final key length can be
expressed as [68, 12]
$\displaystyle R=$ $\displaystyle
2p_{z0}(1-p_{z0})\mu_{z}e^{-\mu_{z}}\underline{s}_{1}[1-H(\overline{e}_{1}^{ph})]$
(14) $\displaystyle-n_{t}fH(E_{Z})/N,$
where $R=N_{f}/N$ with $N_{f}$ the number of final bits, $H(x)=-x{\rm
log}_{2}x-(1-x){\rm log}_{2}(1-x)$ is the binary entropy function, and $f$ is
the error-correction efficiency factor.
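The key rate formula (14) combines these pieces with the binary entropy function; a minimal sketch with placeholder parameters follows:

```python
import math

def binary_entropy(x):
    """H(x) = -x log2 x - (1 - x) log2 (1 - x), with H(0) = H(1) = 0."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

def key_rate(p_z0, mu_z, s1_lower, e1ph_upper, n_t, f, E_Z, N):
    """Key rate per pulse pair, Eq. (14)."""
    untagged = 2.0 * p_z0 * (1.0 - p_z0) * mu_z * math.exp(-mu_z) * s1_lower
    return untagged * (1.0 - binary_entropy(e1ph_upper)) \
        - n_t * f * binary_entropy(E_Z) / N

# Placeholder parameters only.
R = key_rate(p_z0=0.5, mu_z=0.4, s1_lower=5e-3, e1ph_upper=0.05,
             n_t=1000, f=1.1, E_Z=0.01, N=10**7)
```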
(5) AOPP method. The AOPP method [20, 21] is a pre-error-correction process on
the raw strings $Z_{A}$ and $Z_{B}$ proposed to improve the direct transmission
key rate. In the AOPP method, Bob randomly pairs two unequal bits and will
obtain $n_{p}={\rm min}(n_{t0},n_{t1})$ pairs, where $n_{t0}$ ($n_{t1}$) is
the number of bits 0 (1) in the raw string $Z_{B}$. Only two types of pairs
survive, namely those for which Alice made exactly the same or exactly the
opposite decision as Bob for the two bits; denote their numbers as $n_{vd}$ and
$n_{cc}$, respectively. Therefore, the bit error rate after AOPP is
$\displaystyle E_{Z}^{\prime}=\frac{n_{vd}}{n_{cc}+n_{vd}}.$ (15)
The lower bound of the number of untagged bits is
$\displaystyle\underline{n}_{1}^{\prime}=n_{p}\frac{\underline{n}_{1}^{0}}{n_{t0}}\frac{\underline{n}_{1}^{1}}{n_{t1}},$
(16)
where $\underline{n}_{1}^{0}$ and $\underline{n}_{1}^{1}$ are the lower bounds
of untagged bits when they make the opposite decision and obtain bits 0 and 1,
respectively. The phase-flip error rate then changes into
${\overline{e}^{\prime}}_{1}^{ph}=2\overline{e}_{1}^{ph}(1-\overline{e}_{1}^{ph})$.
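Eqs. (15)-(16) and the pairing-induced doubling of the phase-flip error rate can be collected into a short sketch; all counts below are made-up illustrative values:

```python
def aopp_estimates(n_t0, n_t1, n_cc, n_vd, n1_0, n1_1, e1ph):
    """AOPP bit-error rate (Eq. 15), untagged-bit lower bound (Eq. 16),
    and the pairing-induced phase-flip error rate 2e(1 - e)."""
    n_p = min(n_t0, n_t1)                            # number of pairs
    E_Z_prime = n_vd / (n_cc + n_vd)                 # Eq. (15)
    n1_prime = n_p * (n1_0 / n_t0) * (n1_1 / n_t1)   # Eq. (16)
    e1ph_prime = 2.0 * e1ph * (1.0 - e1ph)
    return E_Z_prime, n1_prime, e1ph_prime

# Illustrative counts only.
E, n1p, e1p = aopp_estimates(n_t0=4000, n_t1=6000, n_cc=3800, n_vd=40,
                             n1_0=2000, n1_1=3000, e1ph=0.05)
```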
Besides, finite-key effects should be considered in practical systems using
Chernoff bound [69, 70] and the parameters can be estimated as
$n_{1}^{\prime}=\varphi^{L}(\underline{n}_{1}^{\prime})$ and $e_{1}^{\prime
ph}=\varphi^{U}(\underline{n}_{1}^{\prime}\overline{e}_{1}^{\prime
ph})/\underline{n}_{1}^{\prime}$. Finally, the improved key length can be
shown as [21, 20, 68]
$\displaystyle N_{f}^{\prime}=$ $\displaystyle
n_{1}^{\prime}[1-H({e^{\prime}}_{1}^{ph})]-n_{t}^{\prime}fH(E_{Z}^{\prime})-{\rm
log}_{2}\frac{2}{\varepsilon_{cor}}$ (17) $\displaystyle-2{\rm
log}_{2}\frac{1}{\sqrt{2}\varepsilon_{PA}\hat{\varepsilon}}.$
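Eq. (17) then assembles the AOPP quantities with the finite-key security terms. The sketch below uses illustrative inputs and assumes failure probabilities of $10^{-10}$ for the security parameters:

```python
import math

def final_key_length(n1_prime, e1ph_prime, n_t_prime, f, E_Z_prime,
                     eps_cor=1e-10, eps_PA=1e-10, eps_hat=1e-10):
    """Improved key length after AOPP with finite-key terms, Eq. (17)."""
    def H(x):
        if x <= 0.0 or x >= 1.0:
            return 0.0
        return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)
    return (n1_prime * (1.0 - H(e1ph_prime))
            - n_t_prime * f * H(E_Z_prime)
            - math.log2(2.0 / eps_cor)
            - 2.0 * math.log2(1.0 / (math.sqrt(2.0) * eps_PA * eps_hat)))

# Illustrative inputs only.
Nf = final_key_length(n1_prime=5000, e1ph_prime=0.095, n_t_prime=8000,
                      f=1.1, E_Z_prime=40 / 3840)
```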
## Appendix B Details of the upper bound of secure key rate
To obtain the upper bound of secure key rate, we should consider the ideal
situation, i.e. the upper bound of the number of secure single photons and the
lower bound of phase-flip error rate without finite-key effects.
Ideally, the number of single photons that could be used to distill secure key
bits (i.e., the upper bound on the number of secure untagged bits) after AOPP
can be expressed as
$\displaystyle n_{1,{\rm
sec}}=n_{p}\frac{n_{1s}^{0}}{n_{t0}}\frac{n_{1s}^{1}}{n_{t1}},$ (18)
where
$\displaystyle
n_{1s}^{0}=n_{1s}^{1}=Np_{z}^{2}p_{z0}(1-p_{z0})p_{zk}\mu_{z}\eta_{zk}e^{-\mu_{z}\eta_{zk}}.$
(19)
In the rest, we will analyze this inherent phase-flip error rate from the
perspective of virtual protocol. In the virtual protocol [12], Alice and Bob
will prepare an extended state
$\displaystyle|\varPsi\rangle=\frac{1}{\sqrt{2}}(e^{i\delta_{B}}|01\rangle\otimes|01\rangle+e^{i\delta_{A}}|10\rangle\otimes|10\rangle),$
(20)
with restriction of Eq. (9) or (10) which is equivalent to the state
$[|01\rangle\langle 01|\otimes|01\rangle\langle 01|+|10\rangle\langle
10|\otimes|10\rangle\langle 10|]/2$ when Alice and Bob measure ancillary
photons in the photon-number basis in advance. The states to the left of
$\otimes$ are the real states that will be sent to Charlie, while those to the
right are local ancillary states encoding the bit values. The local state
$|0\rangle$ corresponds to a bit 0
(1), and state $|1\rangle$ corresponds to a bit 1 (0) for Alice (Bob). In
order to obtain a lower bound on the phase-flip error rate, consider the ideal
situation in which the phase shift can be compensated perfectly, which is
equivalent to no phase shift. After interference and detection, the local
states change into
$\displaystyle\rho_{l}=\frac{1}{2}[|\varphi_{1}\rangle\langle\varphi_{1}|+|\varphi_{2}\rangle\langle\varphi_{2}|],$
(21)
where
$\displaystyle|\varphi_{1}\rangle$
$\displaystyle=[|01\rangle+e^{i\delta}|10\rangle]/\sqrt{2},$
$\displaystyle|\varphi_{2}\rangle$
$\displaystyle=[|01\rangle-e^{i\delta}|10\rangle]/\sqrt{2},$ (22)
with $\delta=\delta_{A}-\delta_{B}$. When the local ancillary states are
measured virtually in the basis $|\Phi^{0}\rangle=[|01\rangle+|10\rangle]/\sqrt{2}$ and
$|\Phi^{1}\rangle=[|01\rangle-|10\rangle]/\sqrt{2}$, the phase-flip error rate
before AOPP can be shown as
$\displaystyle e_{\rm in}^{\prime}=\frac{e_{\rm in}^{0}+e_{\rm in}^{1}}{2},$
(23)
where $e_{\rm in}^{0}$ and $e_{\rm in}^{1}$ are phase-flip error rate when
$\delta_{A}$ and $\delta_{B}$ satisfy Eq. (9) and (10), respectively,
$\displaystyle e_{\rm in}^{0}$ $\displaystyle={\rm
Tr}[|\Phi^{1}\rangle\langle\Phi^{1}|\rho_{l}],$ $\displaystyle e_{\rm in}^{1}$
$\displaystyle={\rm Tr}[|\Phi^{0}\rangle\langle\Phi^{0}|\rho_{l}].$ (24)
On average, the inherent phase-flip error will be $\overline{e}_{\rm
in}^{\prime}=\int_{-\pi/M}^{\pi/M}e_{\rm in}^{\prime}\,\frac{M\,{\rm d}\delta}{2\pi}$,
which is approximately equal to 0.0032 when $M=16$. Thus, we can obtain the inherent
phase-flip error rate with AOPP method as $e_{\rm in}=2\overline{e}_{\rm
in}^{\prime}(1-\overline{e}_{\rm in}^{\prime})$.
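The quoted value 0.0032 can be checked numerically. Assuming the error probability for the Eq. (9) branch is $|\langle\Phi^{1}|\varphi_{1}\rangle|^{2}=(1-\cos\delta)/2$ (and symmetrically for the Eq. (10) branch), the uniform average over $\delta\in[-\pi/M,\pi/M]$ has the closed form $\tfrac{1}{2}[1-\sin(\pi/M)/(\pi/M)]$:

```python
import math

M = 16
half = math.pi / M

# Closed form of the uniform average of (1 - cos(delta))/2 over [-pi/M, pi/M].
e_avg_closed = 0.5 * (1.0 - math.sin(half) / half)

# Midpoint-rule quadrature as a cross-check of the closed form.
n = 10_000
e_avg_quad = sum((1.0 - math.cos(-half + (i + 0.5) * 2.0 * half / n)) / 2.0
                 for i in range(n)) / n
```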
## Appendix C Details of numerical simulations
Under the passive frequency shift attack, the parameters obtained from
statistics can be expressed as follows
$\displaystyle n_{\rm sig}=$ $\displaystyle
4Np_{z}^{2}p_{z0}p_{z1}\sum_{\gamma\in
M}p_{z\gamma}\bigl{[}\overline{p}_{d}e^{-\frac{\mu_{z\gamma}}{2}}-\overline{p}_{d}^{2}e^{-\mu_{z\gamma}}\bigr{]},$
(25) $\displaystyle n_{\rm err}=$ $\displaystyle
2Np_{z}^{2}\Big{[}p_{z1}^{2}\sum_{\gamma_{1},\gamma_{2}\in
M}p_{z\gamma_{1}}p_{z\gamma_{2}}\bigl{[}-\overline{p}_{d}^{2}e^{-(\mu_{z\gamma_{1}}+\mu_{z\gamma_{2}})}$
(26)
$\displaystyle+\overline{p}_{d}e^{-\frac{\mu_{z\gamma_{1}}+\mu_{z\gamma_{2}}}{2}}I_{0}(\sqrt{\mu_{z\gamma_{1}}\mu_{z\gamma_{2}}})\bigr{]}+p_{z0}^{2}p_{d}\overline{p}_{d}\Bigr{]},$
$\displaystyle n_{\alpha v}=$ $\displaystyle 2N_{\alpha v}\sum_{\gamma\in
M}p_{\alpha\gamma}[\overline{p}_{d}e^{-\mu_{\alpha\gamma}/2}-\overline{p}_{d}^{2}e^{-\mu_{\alpha\gamma}}],$
(27) $\displaystyle n_{v\beta}=$ $\displaystyle 2N_{v\beta}\sum_{\gamma\in
M}p_{\beta\gamma}[\overline{p}_{d}e^{-\mu_{\beta\gamma}/2}-\overline{p}_{d}^{2}e^{-\mu_{\beta\gamma}}],$
(28) $\displaystyle n_{vv}=$ $\displaystyle 2N_{vv}p_{d}\overline{p}_{d},$
(29)
where $p_{d}$ is the dark count rate and $\overline{p}_{d}=1-p_{d}$, and
$\mu_{\beta\gamma}=\mu_{\beta}\eta_{\beta\gamma}$. Here, $\beta\in\\{a,b,z\\}$
and $\gamma\in M$. Note that the intensities of state
$|e^{i\theta_{A}}\sqrt{\mu_{a}\eta_{a\gamma_{1}}}\rangle$ and
$|e^{i\theta_{B}}\sqrt{\mu_{a}\eta_{a\gamma_{2}}}\rangle$ from Alice and Bob
in $X_{a}$ windows may differ after the passive frequency shift attack, where
$\gamma_{1}$, $\gamma_{2}\in M$, but this does not mean that such pulses cannot
cause a correct detection. After interference, the intensities at the left and
right detectors will be
$\displaystyle\mu_{l}(\gamma_{1},\gamma_{2})=$
$\displaystyle\frac{1}{2}\Bigl{[}\mu_{a\gamma_{1}}+\mu_{a\gamma_{2}}+2\sqrt{\mu_{a\gamma_{1}}\mu_{a\gamma_{2}}}{\rm
cos}\delta\Bigr{]},$ (30) $\displaystyle\mu_{r}(\gamma_{1},\gamma_{2})=$
$\displaystyle\frac{1}{2}\Bigl{[}\mu_{a\gamma_{1}}+\mu_{a\gamma_{2}}-2\sqrt{\mu_{a\gamma_{1}}\mu_{a\gamma_{2}}}{\rm
cos}\delta\Bigr{]},$ (31)
where $\delta=\theta_{B}-\theta_{A}$. We can see that the difference between
the two output intensities is determined not by the difference between the two
input intensities but by the phase difference. Then the number of error events in
$C_{\Delta^{\pm}}$ can be shown as
$\displaystyle n_{\Delta^{+}}^{R}$
$\displaystyle=N_{\Delta^{+}}\sum_{\gamma_{1},\gamma_{2}\in
W}p_{a\gamma_{1}}p_{a\gamma_{2}}\Big{[}-\overline{p}_{d}^{2}e^{-\mu_{a\gamma_{1}}-\mu_{a\gamma_{2}}}$
(32)
$\displaystyle+\overline{p}_{d}\int_{-\Delta/2}^{\Delta/2}e_{d}e^{-\mu_{r}(\gamma_{1},\gamma_{2})}+\overline{e}_{d}e^{-\mu_{l}(\gamma_{1},\gamma_{2})}d\frac{\delta}{\Delta}\Bigr{]},$
where $e_{d}$ is the misalignment-error probability and
$\overline{e}_{d}=1-e_{d}$. Similarly, we can obtain $n_{\Delta^{-}}^{L}$.
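Eqs. (30)-(31) are easy to verify numerically: with unequal input intensities and zero phase difference, nearly all light still exits one port, and the two outputs always sum to the input energy. A small sketch (input values illustrative):

```python
import math

def output_intensities(mu1, mu2, delta):
    """Left/right detector intensities after interference, Eqs. (30)-(31)."""
    cross = 2.0 * math.sqrt(mu1 * mu2) * math.cos(delta)
    mu_l = 0.5 * (mu1 + mu2 + cross)
    mu_r = 0.5 * (mu1 + mu2 - cross)
    return mu_l, mu_r

# Unequal inputs, zero phase difference: the split is governed by the phase,
# not by the intensity mismatch, and energy is conserved (mu_l + mu_r = mu1 + mu2).
mu_l, mu_r = output_intensities(0.10, 0.14, 0.0)
```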
## References
* Bennett and Brassard [2014] C. H. Bennett and G. Brassard, Quantum cryptography: Public key distribution and coin tossing, Theor. Comput. Sci. 560, 7 (2014).
* Shor and Preskill [2000] P. W. Shor and J. Preskill, Simple proof of security of the bb84 quantum key distribution protocol, Phys. Rev. Lett. 85, 441 (2000).
* Xu _et al._ [2020a] F. Xu, X. Ma, Q. Zhang, H.-K. Lo, and J.-W. Pan, Secure quantum key distribution with realistic devices, Rev. Mod. Phys. 92, 025002 (2020a).
* Lo _et al._ [2012] H. K. Lo, M. Curty, and B. Qi, Measurement-device-independent quantum key distribution, Phys. Rev. Lett. 108, 130503 (2012).
* Brassard _et al._ [2000] G. Brassard, N. Lütkenhaus, T. Mor, and B. C. Sanders, Limitations on practical quantum cryptography, Phys. Rev. Lett. 85, 1330 (2000).
* Lütkenhaus and Jahma [2002] N. Lütkenhaus and M. Jahma, Quantum key distribution with realistic states: photon-number statistics in the photon-number splitting attack, New J. Phys. 4, 44 (2002).
* Hwang [2003] W. Y. Hwang, Quantum key distribution with high loss: toward global secure communication, Phys. Rev. Lett. 91, 057901 (2003).
* Lo _et al._ [2005] H. K. Lo, X. Ma, and K. Chen, Decoy state quantum key distribution, Phys. Rev. Lett. 94, 230504 (2005).
* Takeoka _et al._ [2014] M. Takeoka, S. Guha, and M. M. Wilde, Fundamental rate-loss tradeoff for optical quantum key distribution, Nat. Commun. 5, 5235 (2014).
* Pirandola _et al._ [2017] S. Pirandola, R. Laurenza, C. Ottaviani, and L. Banchi, Fundamental limits of repeaterless quantum communications, Nat. Commun. 8, 15043 (2017).
* Lucamarini _et al._ [2018] M. Lucamarini, Z. L. Yuan, J. F. Dynes, and A. J. Shields, Overcoming the rate-distance limit of quantum key distribution without quantum repeaters, Nature 557, 400 (2018).
* Wang _et al._ [2018] X.-B. Wang, Z.-W. Yu, and X.-L. Hu, Twin-field quantum key distribution with large misalignment error, Phys. Rev. A 98, 062323 (2018).
* Ma _et al._ [2018] X. Ma, P. Zeng, and H. Zhou, Phase-matching quantum key distribution, Phys. Rev. X 8, 031043 (2018).
* Curty _et al._ [2019] M. Curty, K. Azuma, and H.-K. Lo, Simple security proof of twin-field type quantum key distribution protocol, npj Quant. Inf. 5, 64 (2019).
* Cui _et al._ [2019] C. Cui, Z.-Q. Yin, R. Wang, W. Chen, S. Wang, G.-C. Guo, and Z.-F. Han, Twin-field quantum key distribution without phase postselection, Phys. Rev. Appl. 11, 034053 (2019).
* Lin and Lütkenhaus [2018] J. Lin and N. Lütkenhaus, Simple security analysis of phase-matching measurement-device-independent quantum key distribution, Phys. Rev. A 98, 042332 (2018).
* Tamaki _et al._ [2018] K. Tamaki, H. Lo, W. Wang, and M. Lucamarini, Information theoretic security of quantum key distribution overcoming the repeaterless secret key capacity bound, arXiv: 1805.05511v1 (2018).
* Yu _et al._ [2019] Z. W. Yu, X. L. Hu, C. Jiang, H. Xu, and X. B. Wang, Sending-or-not-sending twin-field quantum key distribution in practice, Sci. Rep. 9, 3080 (2019).
* Jiang _et al._ [2019] C. Jiang, Z.-W. Yu, X.-L. Hu, and X.-B. Wang, Unconditional security of sending or not sending twin-field quantum key distribution with finite pulses, Phys. Rev. Appl. 12, 024061 (2019).
* Xu _et al._ [2020b] H. Xu, Z.-W. Yu, C. Jiang, X.-L. Hu, and X.-B. Wang, Sending-or-not-sending twin-field quantum key distribution: Breaking the direct transmission key rate, Phys. Rev. A 101, 042330 (2020b).
* Jiang _et al._ [2020] C. Jiang, X.-L. Hu, H. Xu, Z.-W. Yu, and X.-B. Wang, Zigzag approach to higher key rate of sending-or-not-sending twin field quantum key distribution with finite-key effects, New J. Phys. 22, 053048 (2020).
* Hu _et al._ [2019] X.-L. Hu, C. Jiang, Z.-W. Yu, and X.-B. Wang, Sending-or-not-sending twin-field protocol for quantum key distribution with asymmetric source parameters, Phys. Rev. A 100, 062337 (2019).
* Zhou _et al._ [2019] X.-Y. Zhou, C.-H. Zhang, C.-M. Zhang, and Q. Wang, Asymmetric sending or not sending twin-field quantum key distribution in practice, Phys. Rev. A 99, 062316 (2019).
* Yin and Fu [2019] H.-L. Yin and Y. Fu, Measurement-device-independent twin-field quantum key distribution, Sci. Rep. 9, 3045 (2019).
* Zeng _et al._ [2020] P. Zeng, W. Wu, and X. Ma, Symmetry-protected privacy: Beating the rate-distance linear bound over a noisy channel, Phys. Rev. Appl. 13, 064013 (2020).
* Currás-Lorenzo _et al._ [2021] G. Currás-Lorenzo, L. Wooltorton, and M. Razavi, Twin-field quantum key distribution with fully discrete phase randomization, Phys. Rev. Appl. 15, 014016 (2021).
* Zhang _et al._ [2020] C.-M. Zhang, Y.-W. Xu, R. Wang, and Q. Wang, Twin-field quantum key distribution with discrete-phase-randomized sources, Phys. Rev. Appl. 14, 064070 (2020).
* Grasselli and Curty [2019] F. Grasselli and M. Curty, Practical decoy-state method for twin-field quantum key distribution, New J. Phys. 21, 073001 (2019).
* Teng _et al._ [2020] J. Teng, F.-Y. Lu, Z.-Q. Yin, G.-J. Fan-Yuan, R. Wang, S. Wang, W. Chen, W. Huang, B.-J. Xu, G.-C. Guo, and Z.-F. Han, Twin-field quantum key distribution with passive-decoy state, New J. Phys. 22, 103017 (2020).
* Lu _et al._ [2019] F.-Y. Lu, Z.-Q. Yin, R. Wang, G.-J. Fan-Yuan, S. Wang, D.-Y. He, W. Chen, W. Huang, B.-J. Xu, G.-C. Guo, and Z.-F. Han, Practical issues of twin-field quantum key distribution, New J. Phys. 21, 123030 (2019).
* Wang _et al._ [2020] R. Wang, Z.-Q. Yin, F.-Y. Lu, S. Wang, W. Chen, C.-M. Zhang, W. Huang, B.-J. Xu, G.-C. Guo, and Z.-F. Han, Optimized protocol for twin-field quantum key distribution, Commun. Phys. 3, 149 (2020).
* Wang and Lo [2020] W. Wang and H.-K. Lo, Simple method for asymmetric twin-field quantum key distribution, New J. Phys. 22, 013020 (2020).
* Lorenzo _et al._ [2019] G. C. Lorenzo, Á. Navarrete, K. Azuma, G. Kato, M. Curty, and M. Razavi, Tight finite-key security for twin-field quantum key distribution, arXiv: 1910.11407v4 (2019).
* Mao _et al._ [2021] Y. Mao, P. Zeng, and T.-Y. Chen, Recent advances on quantum key distribution overcoming the linear secret key capacity bound, Adv. Quantum Technol. 4, 2000084 (2021).
* Minder _et al._ [2019] M. Minder, M. Pittaluga, G. L. Roberts, M. Lucamarini, J. F. Dynes, Z. L. Yuan, and A. J. Shields, Experimental quantum key distribution beyond the repeaterless secret key capacity, Nat. Photon. 13, 334 (2019).
* Wang _et al._ [2019] S. Wang, D.-Y. He, Z.-Q. Yin, F.-Y. Lu, C.-H. Cui, W. Chen, Z. Zhou, G.-C. Guo, and Z.-F. Han, Beating the fundamental rate-distance limit in a proof-of-principle quantum key distribution system, Phys. Rev. X 9, 021046 (2019).
* Zhong _et al._ [2019] X. Zhong, J. Hu, M. Curty, L. Qian, and H. K. Lo, Proof-of-principle experimental demonstration of twin-field type quantum key distribution, Phys. Rev. Lett. 123, 100506 (2019).
* Liu _et al._ [2019] Y. Liu, Z. W. Yu, W. Zhang, J. Y. Guan, J. P. Chen, C. Zhang, X. L. Hu, H. Li, C. Jiang, J. Lin, T. Y. Chen, L. You, Z. Wang, X. B. Wang, Q. Zhang, and J. W. Pan, Experimental twin-field quantum key distribution through sending or not sending, Phys. Rev. Lett. 123, 100505 (2019).
* Chen _et al._ [2020] J. P. Chen, C. Zhang, Y. Liu, C. Jiang, W. Zhang, X. L. Hu, J. Y. Guan, Z. W. Yu, H. Xu, J. Lin, M. J. Li, H. Chen, H. Li, L. You, Z. Wang, X. B. Wang, Q. Zhang, and J. W. Pan, Sending-or-not-sending with independent lasers: Secure twin-field quantum key distribution over 509 km, Phys. Rev. Lett. 124, 070501 (2020).
* Fang _et al._ [2020] X.-T. Fang, P. Zeng, H. Liu, M. Zou, W. Wu, Y.-L. Tang, Y.-J. Sheng, Y. Xiang, W. Zhang, H. Li, Z. Wang, L. You, M.-J. Li, H. Chen, Y.-A. Chen, Q. Zhang, C.-Z. Peng, X. Ma, T.-Y. Chen, and J.-W. Pan, Implementation of quantum key distribution surpassing the linear rate-transmittance bound, Nat. Photon. 14, 422 (2020).
* Liu _et al._ [2021] H. Liu, C. Jiang, H.-T. Zhu, M. Zou, Z. Yu, X.-L. Hu, H. Xu, S. Ma, Z. Han, J. Chen, Y. Dai, S.-B. Tang, W. Zhang, H. Li, L. You, Z. Wang, F. Zhou, Q. Zhang, X.-b. Wang, T.-Y. Chen, and J. Pan, Field test of twin-field quantum key distribution through sending-or-not-sending over 428 km, arXiv: 2101.00276v1 (2021).
* Chen _et al._ [2021] J.-P. Chen, C. Zhang, Y. Liu, C. Jiang, W.-J. Zhang, Z.-Y. Han, S.-Z. Ma, X.-L. Hu, Y.-H. Li, H. Liu, F. Zhou, H.-F. Jiang, T.-Y. Chen, H. Li, L.-X. You, Z. Wang, X.-B. Wang, Q. Zhang, and J.-W. Pan, Twin-field quantum key distribution over 511 km optical fiber linking two distant metropolitans, arXiv:2102.00433v1 (2021).
* Vakhitov _et al._ [2001] A. Vakhitov, V. Makarov, and D. R. Hjelme, Large pulse attack as a method of conventional optical eavesdropping in quantum cryptography, J. Mod. Opt. 48, 2023 (2001).
* Gisin _et al._ [2006] N. Gisin, S. Fasel, B. Kraus, H. Zbinden, and G. Ribordy, Trojan-horse attacks on quantum-key-distribution systems, Phys. Rev. A 73, 022320 (2006).
* Fung _et al._ [2007] C.-H. F. Fung, B. Qi, K. Tamaki, and H.-K. Lo, Phase-remapping attack in practical quantum-key-distribution systems, Phys. Rev. A 75, 032314 (2007).
* Jiang _et al._ [2012] M.-S. Jiang, S.-H. Sun, C.-Y. Li, and L.-M. Liang, Wavelength-selected photon-number-splitting attack against plug-and-play quantum key distribution systems with decoy states, Phys. Rev. A 86, 032310 (2012).
* Jiang _et al._ [2014] M.-S. Jiang, S.-H. Sun, C.-Y. Li, and L.-M. Liang, Frequency shift attack on ‘plug-and-play’ quantum key distribution systems, J. Mod. Opt. 61, 147 (2014).
* Jain _et al._ [2014] N. Jain, E. Anisimova, I. Khan, V. Makarov, C. Marquardt, and G. Leuchs, Trojan-horse attacks threaten the security of practical quantum cryptography, New J. Phys. 16, 123030 (2014).
* Lucamarini _et al._ [2015] M. Lucamarini, I. Choi, M. B. Ward, J. F. Dynes, Z. L. Yuan, and A. J. Shields, Practical security bounds against the trojan-horse attack in quantum key distribution, Phys. Rev. X 5, 031030 (2015).
* Bugge _et al._ [2014] A. N. Bugge, S. Sauge, A. M. M. Ghazali, J. Skaar, L. Lydersen, and V. Makarov, Laser damage helps the eavesdropper in quantum cryptography, Phys. Rev. Lett. 112, 070503 (2014).
* Sun _et al._ [2015] S.-H. Sun, F. Xu, M.-S. Jiang, X.-C. Ma, H.-K. Lo, and L.-M. Liang, Effect of source tampering in the security of quantum cryptography, Phys. Rev. A 92, 022304 (2015).
* Makarov _et al._ [2016] V. Makarov, J.-P. Bourgoin, P. Chaiwongkhot, M. Gagné, T. Jennewein, S. Kaiser, R. Kashyap, M. Legré, C. Minshull, and S. Sajeed, Creation of backdoors in quantum communications via laser damage, Phys. Rev. A 94, 030302 (2016).
* Huang _et al._ [2019] A. Huang, l. Navarrete, S.-H. Sun, P. Chaiwongkhot, M. Curty, and V. Makarov, Laser-seeding attack in quantum key distribution, Phys. Rev. Appl. 12, 064043 (2019).
* Huang _et al._ [2020] A. Huang, R. Li, V. Egorov, S. Tchouragoulov, K. Kumar, and V. Makarov, Laser-damage attack against optical attenuators in quantum key distribution, Phys. Rev. Appl. 13, 034017 (2020).
* Pang _et al._ [2020] X.-L. Pang, A.-L. Yang, C.-N. Zhang, J.-P. Dou, H. Li, J. Gao, and X.-M. Jin, Hacking quantum key distribution via injection locking, Phys. Rev. Appl. 13, 034008 (2020).
* Tang _et al._ [2013] Y.-L. Tang, H.-L. Yin, X. Ma, C.-H. F. Fung, Y. Liu, H.-L. Yong, T.-Y. Chen, C.-Z. Peng, Z.-B. Chen, and J.-W. Pan, Source attack of decoy-state quantum key distribution using phase information, Phys. Rev. A 88, 022308 (2013).
* Tamaki _et al._ [2016] K. Tamaki, M. Curty, and M. Lucamarini, Decoy-state quantum key distribution with a leaky source, New J. Phys. 18, 065008 (2016).
* Huang _et al._ [2018] A. Huang, S.-H. Sun, Z. Liu, and V. Makarov, Quantum key distribution with distinguishable decoy states, Phys. Rev. A 98, 012330 (2018).
* Sajeed _et al._ [2015] S. Sajeed, I. Radchenko, S. Kaiser, J.-P. Bourgoin, A. Pappa, L. Monat, M. Legré, and V. Makarov, Attacks exploiting deviation of mean photon number in quantum key distribution and coin tossing, Phys. Rev. A 91, 032326 (2015).
* Winzer and Essiambre [2006] P. J. Winzer and R. Essiambre, Advanced optical modulation formats, Proc. IEEE 94, 952 (2006).
* Tamaki _et al._ [2014] K. Tamaki, M. Curty, G. Kato, H.-K. Lo, and K. Azuma, Loss-tolerant quantum cryptography with imperfect sources, Phys. Rev. A 90, 052314 (2014).
* Pereira _et al._ [2019] M. Pereira, M. Curty, and K. Tamaki, Quantum key distribution with flawed and leaky sources, npj Quant. Inf. 5, 62 (2019).
* Mizutani _et al._ [2019] A. Mizutani, T. Sasaki, Y. Takeuchi, K. Tamaki, and M. Koashi, Quantum key distribution with simply characterized light sources, npj Quant. Inf. 5, 87 (2019).
* Navarrete _et al._ [2020] A. Navarrete, M. Pereira, M. Curty, and K. Tamaki, Practical quantum key distribution secure against side-channels, arXiv: 2007.03364v1 (2020).
* Pereira _et al._ [2020] M. Pereira, G. Kato, A. Mizutani, M. Curty, and K. Tamaki, Quantum key distribution with correlated sources, Sci. Adv. 6, 4487 (2020).
* Inamori _et al._ [2007] H. Inamori, N. Lütkenhaus, and D. Mayers, Unconditional security of practical quantum key distribution, Eur. Phys. J. D 41, 599 (2007).
* Yu _et al._ [2013] Z.-W. Yu, Y.-H. Zhou, and X.-B. Wang, Three-intensity decoy-state method for measurement-device-independent quantum key distribution, Phys. Rev. A 88, 062339 (2013).
* Tomamichel _et al._ [2012] M. Tomamichel, C. C. W. Lim, N. Gisin, and R. Renner, Tight finite-key analysis for quantum cryptography, Nat. Commun. 3, 634 (2012).
* Chernoff [1952] H. Chernoff, A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations, Ann. Math. Stat. 23, 493 (1952).
* Curty _et al._ [2014] M. Curty, F. Xu, W. Cui, C. C. Lim, K. Tamaki, and H. K. Lo, Finite-key analysis for measurement-device-independent quantum key distribution, Nat. Commun. 5, 3732 (2014).
# Ghost distributions on supersymmetric spaces I: Koszul induced superspaces,
branching, and the full ghost centre
Alexander Sherman
###### Abstract.
Given a Lie superalgebra $\mathfrak{g}$, Gorelik defined the anticentre
$\mathcal{A}$ of its enveloping algebra, which consists of certain elements
that square to the center. We seek to generalize and enrich the anticentre to
the context of supersymmetric pairs $(\mathfrak{g},\mathfrak{k})$, or more
generally supersymmetric spaces $G/K$. We define certain invariant
distributions on $G/K$, which we call ghost distributions, and which in some
sense are induced from invariant distributions on $G_{0}/K_{0}$. Ghost
distributions, and in particular their Harish-Chandra polynomials, give
information about branching from $G$ to a symmetric subgroup $K^{\prime}$
which is related (and sometimes conjugate) to $K$. We discuss the case of
$G\times G/G$ for an arbitrary quasireductive supergroup $G$, where our
results prove the existence of a polynomial which determines projectivity of
irreducible $G$-modules. Finally, a generalization of Gorelik’s ghost centre
is defined called the full ghost centre, $\mathcal{Z}_{full}$. For type I
basic Lie superalgebras $\mathfrak{g}$ we fully describe $\mathcal{Z}_{full}$,
and prove that if $\mathfrak{g}$ contains an internal grading operator,
$\mathcal{Z}_{full}$ consists exactly of those elements in
$\mathcal{U}\mathfrak{g}$ acting by $\mathbb{Z}$-graded constants on every
finite-dimensional irreducible representation.
## 1\. Introduction
Let $\mathfrak{g}$ be a Lie superalgebra over an algebraically closed field
$k$ of characteristic zero. In [Gor00], Gorelik defined a certain natural
`twisted' adjoint action of a Lie superalgebra $\mathfrak{g}$ on its
enveloping algebra $\mathcal{U}\mathfrak{g}$. The action was originally
considered in [ABF97] for $\mathfrak{o}\mathfrak{s}\mathfrak{p}(1|2n)$, where
a certain element $T$ in the enveloping algebra was constructed, called
Casimir’s ghost, which squared to the center.
The action defined by Gorelik in general is remarkable in that the structure
of $\mathcal{U}\mathfrak{g}$ becomes that of an induced module,
$\operatorname{Ind}_{\mathfrak{g}_{\overline{0}}}^{\mathfrak{g}}\mathcal{U}\mathfrak{g}_{\overline{0}}$.
Further the invariants of this action, denoted $\mathcal{A}$ and called the
anticentre in [Gor00], are both a module over the center
$\mathcal{Z}:=\mathcal{Z}(\mathcal{U}\mathfrak{g})$ and multiply into the
center, so that $\tilde{\mathcal{Z}}:=\mathcal{Z}+\mathcal{A}$ is an algebra
which Gorelik called the ghost centre of $\mathfrak{g}$. If $\mathfrak{g}$ is
quasireductive, i.e. $\mathfrak{g}_{\overline{0}}$ is reductive and acts
semisimply on $\mathfrak{g}$, and $\Lambda^{top}\mathfrak{g}_{\overline{1}}$
is a trivial $\mathfrak{g}_{\overline{0}}$-module, Gorelik obtained an
explicit identification as vector spaces of $\mathcal{A}$ with the center of
$\mathcal{U}\mathfrak{g}_{\overline{0}}$. Further, for basic classical Lie
superalgebras, Gorelik gave a complete description of the Harish-Chandra image
of $\mathcal{A}$ and the structure of $\tilde{\mathcal{Z}}$. In particular she
showed that $\tilde{\mathcal{Z}}$ consists of exactly those elements of
$\mathcal{U}\mathfrak{g}$ that act by superconstants on every irreducible
representation. We also note that Gorelik fully computed $\mathcal{A}$ for
$Q$-type superalgebras.
### 1.1. Generalization to supersymmetric spaces
We seek to understand Gorelik’s original results from the geometric
perspective, and thereby understand how they may be generalized. Consider the
setting of symmetric supervarieties (or supersymmetric spaces, if one prefers
that term) $G/K$ and their corresponding supersymmetric pairs
$(\mathfrak{g},\mathfrak{k})$, corresponding to an involution $\theta$. We
have a decomposition $\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}$ into the
$\pm 1$ eigenspaces of $\theta$. We define a new subalgebra
$\mathfrak{k}^{\prime}:=\mathfrak{k}_{\overline{0}}\oplus\mathfrak{p}_{\overline{1}},$
which is itself the fixed-point subalgebra of the involution $\delta\circ\theta$, where
$\delta(x)=(-1)^{\overline{x}}x$ is the grading operator on $\mathfrak{g}$.
Let $K^{\prime}$ be the subgroup of $G$ with $K^{\prime}_{0}=K_{0}$ and
$\operatorname{Lie}(K^{\prime})=\mathfrak{k}^{\prime}$. Then because
$\mathfrak{k}_{\overline{1}}^{\prime}\oplus\mathfrak{k}_{\overline{1}}=\mathfrak{g}_{\overline{1}}$,
we will see that the action of $K^{\prime}$ on $G/K$ enjoys many nice
properties, many of which are not shared by the action of $K$ on $G/K$. The
first result indicating this is the following.
Recall that for an affine supervariety $X$ with a closed point $x$ and maximal
ideal $\mathfrak{m}_{x}\subseteq k[X]$ in the space of functions, we may
consider the super vector space of distributions
$\operatorname{Dist}(X,x)=\\{\psi:k[X]\to k:\psi(\mathfrak{m}_{x}^{n})=0\text{
for }n\gg 0\\}.$
If we consider the point $eK$ on $G/K$, $K_{0}^{\prime}$ fixes it, and thus
$K^{\prime}$ has a natural action on $\operatorname{Dist}(G/K,eK)$.
###### Theorem 1.1.
We have an isomorphism of $K^{\prime}$-modules
$\operatorname{Dist}(G/K,eK)\cong\operatorname{Ind}_{\mathfrak{k}_{\overline{0}}}^{\mathfrak{k}}\operatorname{Dist}(G_{0}/K_{0},eK_{0}).$
In particular, if $K^{\prime}$ is quasireductive and
$\Lambda^{top}\mathfrak{p}_{\overline{1}}$ is a trivial $K_{0}$-module, then
we have an explicit isomorphism of vector spaces
$\operatorname{Dist}(G/K,eK)^{K^{\prime}}\cong\operatorname{Dist}(G_{0}/K_{0},eK_{0})^{K_{0}}.$
For the symmetric supervariety $G\times G/G\cong G$, we have identifications
$G^{\prime}\cong G$ and
$\operatorname{Dist}(G,eG)\cong\mathcal{U}\mathfrak{g}$, and the action we
obtain in this case of $G^{\prime}$ on $\operatorname{Dist}(G,eG)$ is exactly
Gorelik’s twisted adjoint action, thus reproducing her results from this
context.
The proof of 1.1 uses a construction due to Koszul ([Kos82]) which allows one
to take the variety $G_{0}/K_{0}$, which has an action of
$K_{0}^{\prime}=K_{0}$, and induce it to a $K^{\prime}$-supervariety denoted
$(G_{0}/K_{0})^{\mathfrak{k}^{\prime}}$. The latter supervariety has the
special property that the vector fields $\mathfrak{k}_{\overline{1}}$ are
everywhere non-vanishing, and its algebra of functions and spaces of
distributions, respectively, are (co)induced from $G_{0}/K_{0}$. Then by a
general result, $G/K$ and $(G_{0}/K_{0})^{\mathfrak{k}^{\prime}}$ are locally
isomorphic as $K^{\prime}$-supervarieties, implying an isomorphism of their
spaces of distributions as in 1.1.
### 1.2. The Harish-Chandra morphism
Now consider a symmetric supervariety $G/K$ where we assume $G$ is
quasireductive and Cartan-even (i.e. its Lie superalgebra has an even Cartan
subalgebra), $\Lambda^{top}\mathfrak{p}_{\overline{1}}$ is the trivial
$K_{0}$-module, and we have an Iwasawa decomposition
$\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{a}\oplus\mathfrak{n}$. We write
$\mathcal{A}_{G/K}:=\operatorname{Dist}(G/K,eK)^{K^{\prime}},$
for the $K^{\prime}$-invariant distributions, and call elements of
$\mathcal{A}_{G/K}$ ghost distributions on $G/K$. One important problem is to
compute the image of $\mathcal{A}_{G/K}$ under the Harish-Chandra homomorphism
$HC:\operatorname{Dist}(G/K,eK)\to S(\mathfrak{a}).$
In this case $G/K$ is also spherical, and admits rational functions
$f_{\lambda}$ indexed by a full rank lattice
$\Lambda\subseteq\mathfrak{a}^{*}$ which are $\mathfrak{n}$-invariant and
eigenfunctions for $\mathfrak{a}$, where we normalize by requiring that
$f_{\lambda}(eK)=1$. If $\lambda$ is a dominant weight, then generically
$f_{\lambda}$ will be a regular function. Then for a distribution $\psi$,
$HC(\psi)(\lambda)=\psi(f_{\lambda}).$
Using this, we have the following representation-theoretic interpretation of
the values of $\mathcal{A}_{G/K}$ on such eigenfunctions.
###### Theorem 1.2.
Let $f_{\lambda}\in k[G/K]$ be a highest weight vector of weight $\lambda$.
Then the $K^{\prime}$-module generated by $f_{\lambda}$ contains a copy of
$I_{K^{\prime}}(k)$ if and only if there exists $\gamma\in\mathcal{A}_{G/K}$
such that $HC(\gamma)(\lambda)\neq 0$.
Here $I_{K^{\prime}}(k)$ denotes the injective indecomposable
$K^{\prime}$-module with socle $k$. From this we obtain, as one corollary:
###### Corollary 1.3.
Keep the hypotheses of 1.2, and suppose further that $L$, the $G$-module
generated by $f_{\lambda}$, is irreducible. Then $I_{G}(L)$ is a submodule of
$k[G/K^{\prime}]$.
Thus obtaining $HC(\mathcal{A}_{G/K})$ is of interest, and this will be taken
up in more detail for the case of basic classical Lie superalgebras in a
subsequent article. However even for such Lie superalgebras the answer is not
known in general (outside the case $G\times G/G$, originally done by Gorelik).
### 1.3. The case of $G\times G/G$
For $G\times G/G$, we have the following nice application of 1.3. Note that in
the following we remove the restriction that $G$ be Cartan-even. For this one
needs to generalize the Harish-Chandra morphism to the non-Cartan-even case,
which is done in the appendix.
###### Theorem 1.4.
Let $G$ be a quasireductive supergroup with Cartan subalgebra
$\mathfrak{h}\subseteq\mathfrak{g}$, and choose a Borel subgroup $B$ with Lie
superalgebra $\mathfrak{b}$ containing $\mathfrak{h}$. Then there exists a
polynomial $p_{G,B}\in S(\mathfrak{h})$ of degree less than or equal to
$\operatorname{dim}\mathfrak{b}_{\overline{1}}$, such that for a
$\mathfrak{b}$-dominant weight $\lambda$,
$p_{G,B}(\lambda)\neq 0\ \text{ if and only if }\ L_{B}(\lambda)\text{ is
projective},$
where $L_{B}(\lambda)$ is the irreducible $G$-module of $B$-highest weight
$\lambda$.
In particular the above result implies that if one simple $G$-module is
projective then this is a generic property of simple $G$-modules. However it
is possible that $p_{G,B}$ is the zero polynomial, so that no simple
$G$-modules are projective. Although this was already known for many Lie
superalgebras by direct study, this gives a general explanation for this
phenomenon.
###### Remark 1.5.
In every example the author is aware of, $p_{G,B}$ is a product of linear
polynomials. We do not see an a priori reason for this, and it would be
interesting to find an example where this does not occur (or to prove it
always happens).
Recall that $\mathcal{A}$ denotes the ghost center of
$\mathcal{U}\mathfrak{g}$. Then $\mathcal{A}$ contains an element of least
degree, which we write as $T_{\mathfrak{g}}$. This operator can test whether a
given semisimple $G$-module is projective in the following sense:
###### Proposition 1.6.
Let $L$ be a simple $G$-module. Then $T_{\mathfrak{g}}$ acts by $0$ on $L$ if
and only if $L$ is not projective. If $L$ is projective, then
$T_{\mathfrak{g}}$ acts by one of the following automorphisms:
* •
if $T_{\mathfrak{g}}$ is even, then up to scalar it acts by $\delta_{L}$, the
parity operator on $L$;
* •
if $T_{\mathfrak{g}}$ is odd, then it acts by $\delta_{L}\circ\sigma_{L}$,
where $\sigma_{L}:L\to L$ is a $G$-equivariant odd automorphism of $L$.
For a different application of 1.4, we can prove the following general
sufficient criterion for the existence of projective irreducible modules.
###### Theorem 1.7.
Let $G$ be a Cartan-even quasireductive supergroup with a chosen Cartan
subalgebra $\mathfrak{h}$ such that the following conditions hold on its Lie
superalgebra:
1. (1)
for every root $\alpha\in\mathfrak{h}^{*}$,
$\operatorname{dim}[\mathfrak{g}_{\alpha},\mathfrak{g}_{-\alpha}]=1$; and
2. (2)
for every root $\alpha\in\mathfrak{h}^{*}$, the pairing
$[-,-]:\mathfrak{g}_{\alpha}\otimes\mathfrak{g}_{-\alpha}\to[\mathfrak{g}_{\alpha},\mathfrak{g}_{-\alpha}]$
is nondegenerate.
Choose an arbitrary Borel subgroup $B$ whose Lie superalgebra contains
$\mathfrak{h}$, and let $\alpha_{1},\dots,\alpha_{n}$ denote the odd positive
roots with $r_{i}=\operatorname{dim}\mathfrak{g}_{\alpha_{i}}$. Write
$h_{\alpha_{i}}$ for a nonzero element of
$[\mathfrak{g}_{\alpha_{i}},\mathfrak{g}_{-\alpha_{i}}]$. Then we have (up to
a scalar)
$p_{G,B}=h_{\alpha_{1}}^{r_{1}}\cdots h_{\alpha_{n}}^{r_{n}}+l.o.t.$
In particular $p_{G,B}\neq 0$, so $G$ admits irreducible projective modules.
Note that the above conditions hold in particular for a Cartan-even quadratic
quasireductive supergroup, i.e. a Cartan-even quasireductive supergroup whose
Lie superalgebra admits a nondegenerate, invariant, and even supersymmetric
form. See [Ben00] for a classification of quadratic quasireductive Lie
superalgebras.
### 1.4. The full ghost centre $\mathcal{Z}_{full}$
We may generalize Gorelik’s results in a different way as follows to produce
an interesting subalgebra of $\mathcal{U}\mathfrak{g}$ which contains
Gorelik’s ghost centre. Let
$\operatorname{Aut}(\mathfrak{g},\mathfrak{g}_{\overline{0}})$ be those
automorphisms of $\mathfrak{g}$ that fix $\mathfrak{g}_{\overline{0}}$
pointwise. For
$\phi\in\operatorname{Aut}(\mathfrak{g},\mathfrak{g}_{\overline{0}})$, define
the $\phi$-twisted adjoint action $\operatorname{ad}_{\phi}$ of $\mathfrak{g}$
on $\mathcal{U}\mathfrak{g}$ by
$\operatorname{ad}_{\phi}(u)(v)=uv-(-1)^{\overline{u}\overline{v}}v\phi(u).$
When $\phi=\delta$, we obtain the twisted adjoint action studied by Gorelik.
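Concretely, writing $\delta$ for the parity automorphism $\delta(x)=(-1)^{\overline{x}}x$, the definition unwinds for homogeneous $u,v$ as follows (a direct expansion of the displayed formula, included for orientation):

```latex
% delta-twisted adjoint action, expanded for homogeneous u, v,
% using delta(u) = (-1)^{|u|} u:
\operatorname{ad}_{\delta}(u)(v)
  = uv - (-1)^{\overline{u}\,\overline{v}}\, v\,\delta(u)
  = uv - (-1)^{\overline{u}(\overline{v}+1)}\, vu.
% For phi = id the same definition gives the ordinary super adjoint action:
% ad(u)(v) = uv - (-1)^{|u||v|} vu = [u, v].
```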
Then using 1.1 we can prove that:
###### Theorem 1.8.
If $\phi(x)=x$ implies that $x\in\mathfrak{g}_{\overline{0}}$, then
$\mathcal{U}\mathfrak{g}\cong\operatorname{Ind}_{\mathfrak{g}_{\overline{0}}}^{\mathfrak{g}}\mathcal{U}\mathfrak{g}_{\overline{0}}$
under the $\phi$-twisted adjoint action.
Write $\mathcal{A}_{\phi}\subseteq\mathcal{U}\mathfrak{g}$ for the
$\operatorname{ad}_{\phi}$-invariant elements in $\mathcal{U}\mathfrak{g}$,
for any $\phi\in\operatorname{Aut}(\mathfrak{g},\mathfrak{g}_{\overline{0}})$.
Then $\mathcal{A}_{\operatorname{id}}=\mathcal{Z}$, and
$\mathcal{A}_{\delta}=\mathcal{A}$. Further, for
$\phi,\psi\in\operatorname{Aut}(\mathfrak{g},\mathfrak{g}_{\overline{0}})$
multiplication induces a morphism
$\mathcal{A}_{\phi}\otimes\mathcal{A}_{\psi}\to\mathcal{A}_{\psi\phi}.$
Therefore if we set
$\mathcal{Z}_{full}:=\sum\limits_{\phi\in\operatorname{Aut}(\mathfrak{g},\mathfrak{g}_{\overline{0}})}\mathcal{A}_{\phi},$
we obtain a subalgebra of $\mathcal{U}\mathfrak{g}$, which also contains
$\tilde{\mathcal{Z}}$. For $\mathfrak{g}$ one of the type I basic classical
Lie superalgebras $\mathfrak{g}\mathfrak{l}(m|n)$,
$\mathfrak{s}\mathfrak{l}(m|n)$, $\mathfrak{p}\mathfrak{s}\mathfrak{l}(n|n)$
with $n>2$, or $\mathfrak{o}\mathfrak{s}\mathfrak{p}(2|2n)$, we have an
explicit description of this algebra. Here,
$\operatorname{Aut}(\mathfrak{g},\mathfrak{g}_{\overline{0}})\cong
k^{\times}$, so we write
$\phi_{c}\in\operatorname{Aut}(\mathfrak{g},\mathfrak{g}_{\overline{0}})$ for
the automorphism corresponding to $c\in k^{\times}$ and $\mathcal{A}_{c}$ for
the invariants of the $\phi_{c}$-twisted adjoint action. Then $\phi_{c}$
satisfies the conditions of 1.8 exactly if $c\neq 1$.
###### Theorem 1.9.
Let $N=\operatorname{dim}\mathfrak{g}_{\overline{1}}/2$. Then
$HC(\mathcal{A}_{c})=HC(\mathcal{A}_{-1})$ for all $c\neq 1$, and
$\mathcal{Z}_{full}=\bigoplus\limits_{\zeta^{N}=1}\mathcal{A}_{\zeta}.$
Further, for $\mathfrak{g}\mathfrak{l}(m|n)$, $\mathfrak{s}\mathfrak{l}(m|n)$
with $m\neq n$, and $\mathfrak{o}\mathfrak{s}\mathfrak{p}(2|2n)$,
$\mathcal{Z}_{full}$ consists of exactly the set of elements in
$\mathcal{U}\mathfrak{g}$ which act by $\mathbb{Z}$-graded constants on all
finite-dimensional irreducible representations of $\mathfrak{g}$.
We exclude $\mathfrak{s}\mathfrak{l}(n|n)$ and
$\mathfrak{p}\mathfrak{s}\mathfrak{l}(n|n)$ in the last statement due to the
lack of an internal grading operator. Further,
$\mathfrak{p}\mathfrak{s}\mathfrak{l}(2|2)$ is fully excluded because in this
case $\operatorname{Aut}(\mathfrak{g},\mathfrak{g}_{\overline{0}})=SL_{2}$
(see section 5.5 of [Mus12]). For this reason it would be interesting to
compute the full ghost center for this superalgebra.
### 1.5. Future work
This article is the first of two on ghost distributions and their
applications. In the subsequent article we study in more detail two questions
of interest: (1) when is it possible to produce an algebra using ghost
distributions on a general supersymmetric space? In particular can we form an
algebra of differential operators, and if not can we at least produce an
algebra of polynomials using the Harish-Chandra homomorphism? (2) Computing
$HC(\mathcal{A}_{G/K})$ as much as possible in the case when $\mathfrak{g}$ is
an almost simple basic classical Lie superalgebra and the involution under
consideration preserves the form on $\mathfrak{g}$. In this case we give some
general properties of $HC(\mathcal{A}_{G/K})$, and seek to compute it for all
rank one supersymmetric pairs.
### 1.6. Outline of paper
In section 2 we introduce the basic algebraic supergeometry we need, in
particular the algebra of differential operators and the space of
distributions at a given point. Section 3 recalls basic facts about algebraic
supergroups and their actions, and gives a description of invariant
differential operators on homogeneous spaces. In section 4, we explain the
main technical construction, the induced superspace as defined by Koszul.
Section 5 applies the ideas of section 4 to homogeneous superspaces, and
deduces what we need to generalize the results of [Gor00]. Section 6 studies
the case of $G/G_{0}$, looking in particular at a certain invariant
distribution, $v_{\mathfrak{g}}$, which will play an important role in the
theory of ghost distributions. Section 7 looks at applications to a symmetric
supervariety $G/K$, and section 8 studies more closely the case when an
Iwasawa decomposition is present, giving the definition of the Harish-Chandra
map and its interpretations. In section 9 we take a special look at the case
of $G\times G/G$, where the theory of ghost distributions is most developed,
and prove 1.4. Finally in section 10 we define and study the full ghost centre
$\mathcal{Z}_{full}$, and look especially at the cases of type I algebras. The
appendix proves 1.4 by generalizing the Harish-Chandra homomorphism to the
case when $G$ is not Cartan-even, and studies the image of the Harish-Chandra
homomorphism in this case.
### 1.7. Acknowledgments
The author would like to thank Alexander Alldridge for many enlightening
discussions about this project. Thanks to Vera Serganova for her patience,
tremendous support and guidance throughout my PhD, during which much of this
work was done. Thank you to Maria Gorelik for patiently explaining many
aspects of her relevant papers to me. Finally thank you to Siddartha Sahi,
Johannes Flakes, and Inna Entova-Aizenbud for helpful comments and
suggestions. This research was partially supported by ISF grant 711/18 and
NSF-BSF grant 2019694.
## 2\. Preliminaries from algebraic supergeometry
### 2.1. Linear super algebra notation
We work throughout over an algebraically closed field $k$ of characteristic
zero. For a super vector space $V$ we write $V=V_{\overline{0}}\oplus
V_{\overline{1}}$ for its parity decomposition. Strictly speaking, we
consider $V_{\overline{0}}$ and $V_{\overline{1}}$ to be ordinary (even)
vector spaces, though we will occasionally (and abusively) view them as super
vector spaces where $V_{\overline{0}}$ is purely even and $V_{\overline{1}}$
is purely odd. We
write
$\operatorname{dim}V=\operatorname{dim}V_{\overline{0}}+\operatorname{dim}V_{\overline{1}}$
for the dimension of the underlying vector space of $V$, and the
superdimension of $V$ is given by
$\operatorname{sdim}V=\operatorname{dim}V_{\overline{0}}-\operatorname{dim}V_{\overline{1}}$.
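For instance, for the standard super vector space $k^{m|n}$ these invariants read (an immediate consequence of the definitions):

```latex
% V = k^{m|n}: dim V_0 = m, dim V_1 = n, so
\operatorname{dim} k^{m|n} = m + n,
\qquad
\operatorname{sdim} k^{m|n} = m - n.
% E.g. k^{1|1} has dimension 2 but superdimension 0.
```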
### 2.2. Algebraic supergeometry notation
We will use the symbols $X,Y,\dots$ for supervarieties with even subschemes
$X_{0},Y_{0},\dots.$ We will be considering supervarieties in the sense of
[She19], however all spaces of interest will be smooth and affine. A smooth
affine supervariety is always given by $\Lambda^{\bullet}E$, the exterior
algebra of a vector bundle $E$ on a smooth affine variety $X_{0}$ (see
[VMP90]). Thus one will lose almost nothing if one simply works with
supervarieties of this form in this article. Note that affine supervarieties
and morphisms between them are entirely determined by their spaces of global
functions and maps between them, just as with affine varieties. See [CCF11]
for more on the basics of algebraic supergeometry.
If $X$ is a supervariety, there is a canonical closed embedding
$i_{X}:X_{0}\to X$, and this is a homeomorphism of underlying topological
spaces. The closed points of $X$ are the $k$-points, which we write as $X(k)$,
and they are canonically identified with the closed points of $X_{0}$ via
$i_{X}$. If $x$ is a closed point of $X$ and $\mathcal{F}$ is a sheaf on $X$,
we write $\mathcal{F}_{x}$ for the stalk of $\mathcal{F}$ at $x$. Then
$\mathcal{O}_{X,x}$ is a local superalgebra, and we write $\mathfrak{m}_{x}$
for its corresponding maximal ideal and ${}_{0}\mathfrak{m}_{x}$ for the
maximal ideal in $\mathcal{O}_{X_{0},x}$. For affine supervarieties we will
also write $\mathfrak{m}_{x}$ for the maximal ideal of $k[X]$ corresponding to
$x$, and similarly ${}_{0}\mathfrak{m}_{x}$ for the maximal ideal of
$k[X_{0}]$ corresponding to $x$.
### 2.3. Differential operators and distributions
###### Definition 2.1.
For a supervariety $X$, let $\mathcal{D}_{X}$ denote the sheaf of filtered
algebras which is the subsheaf of $\mathcal{E}nd(\mathcal{O}_{X})$ defined
inductively as follows. We set $\mathcal{D}_{X}^{n}=0$ for $n<0$, and for
$n\geq 0$ and an open subset $U$ of $X$, set
$\Gamma(U,\mathcal{D}_{X}^{n}):=\{D\in\operatorname{End}(\mathcal{O}_{U}):[D,f]\in\Gamma(U,\mathcal{D}_{X}^{n-1})\text{
for all }f\in\mathcal{O}_{X}(U)\}.$
We call $\mathcal{D}_{X}$ the sheaf of differential operators on $X$, and
refer to its sections as differential operators. If $\mathcal{F}$ is a sheaf
on $X$, we say that it is a left, resp. right $\mathcal{D}_{X}$-module if it
is a left, resp. right module over the sheaf of algebras $\mathcal{D}_{X}$.
###### Definition 2.2.
If $x$ is a closed point of $X$, define the super vector space of
distributions at $x$ to be all (not necessarily even) linear maps
$\psi:\mathcal{O}_{X,x}\to k$ such that for some $n\in\mathbb{N}$ we have
$\psi(\mathfrak{m}_{x}^{n})=0$. We denote this super vector space by
$\operatorname{Dist}(X,x)$. Define
$\operatorname{Dist}^{n}(X,x)\subseteq\operatorname{Dist}(X,x)$ to be those
distributions vanishing on $\mathfrak{m}_{x}^{n+1}$ so that
$\operatorname{Dist}(X,x)$ obtains a filtration. Note that
$\operatorname{Dist}^{0}(X,x)$ is one-dimensional and consists of the
distinguished even distribution given by evaluation at $x$, which we denote by
$\operatorname{ev}_{x}$.
We may give $\operatorname{Dist}(X,x)$ the structure of a right
$\mathcal{D}_{X}$-module as follows. We view $\operatorname{Dist}(X,x)$ as a
sheaf on $X$ supported on $x$. Given a differential operator $D$ defined in a
neighborhood of $x$, and a distribution $\psi$, define
$(\psi D)(f):=\psi(Df).$
This action respects the filtration on $\operatorname{Dist}(X,x)$, so it
becomes a filtered right $\mathcal{D}_{X}$-module.
The following lemma is proved in the same way as in the classical setting, so
we omit the proof.
###### Lemma 2.3.
Let $X$ be a supervariety with a closed point $x$.
1. (1)
Given a map of supervarieties $\phi:X\to Y$, we have a natural map of filtered
super vector spaces
$d\phi_{x}:\operatorname{Dist}(X,x)\to\operatorname{Dist}(Y,\phi(x))$.
2. (2)
The chain rule holds: if $\phi:X\to Y$ and $\psi:Y\to Z$, then
$d(\psi\circ\phi)=d\psi\circ d\phi$.
3. (3)
If $X$ is affine, then the natural pairing $\operatorname{Dist}(X,x)\otimes
k[X]\to k$ has the property that if $\psi(f)=0$ for all $f\in k[X]$, then
$\psi=0$.
4. (4)
There is a natural restriction morphism
$\operatorname{res}_{x}:\Gamma(U,\mathcal{D}_{X})\to\operatorname{Dist}(X,x)$
for any open subscheme $U$ containing $x$, given by
$\operatorname{res}_{x}(D)(f)=D(f)(x)$. This is a morphism of filtered right
$\mathcal{D}_{X}$-modules, where $\mathcal{D}_{X}$ acts on itself by right
multiplication.
###### Remark 2.4.
We have the following identifications:
$\operatorname{Dist}^{n}(X,x)=(\mathcal{O}_{X,x}/\mathfrak{m}_{x}^{n+1})^{*},\ \
\operatorname{Dist}(X,x)=\lim\limits_{\rightarrow}(\mathcal{O}_{X,x}/\mathfrak{m}_{x}^{n+1})^{*}.$
Furthermore when $X$ is affine, we have:
$\operatorname{Dist}^{n}(X,x)=(k[X]/\mathfrak{m}_{x}^{n+1})^{*},\ \
\operatorname{Dist}(X,x)=\lim\limits_{\rightarrow}(k[X]/\mathfrak{m}_{x}^{n+1})^{*}.$
In general we have an isomorphism of $\mathcal{D}_{X}$-modules
$\operatorname{Dist}(X,x)=\Gamma_{\mathfrak{m}_{x}}(\mathcal{O}_{X,x})^{*}$
and for $X$ affine an isomorphism of $\Gamma(X,\mathcal{D}_{X})$-modules
$\operatorname{Dist}(X,x)=\Gamma_{\mathfrak{m}_{x}}k[X]^{*}.$
We recall the definition of the functor $\Gamma_{\mathfrak{m}_{x}}$:
$\Gamma_{\mathfrak{m}_{x}}M=\{m\in M:\mathfrak{m}_{x}^{n}m=0\text{ for }n\gg
0\}.$
###### Definition 2.5.
Define the sheaf $\mathcal{T}_{X}$ of vector fields on $X$ by setting
$\Gamma(\text{Spec}A,\mathcal{T}_{X})=\operatorname{Der}(A)$ for an affine
open subscheme $\text{Spec}A$ of $X$. In this way $\mathcal{T}_{X}$ becomes a
subsheaf of $\mathcal{D}_{X}^{1}$, and a sheaf of Lie superalgebras under
supercommutator.
###### Definition 2.6.
For a supervariety $X$ and a closed point $x$ of $X$, we define the tangent
space of $X$ at $x$ to be the super vector space
$T_{x}X:=(\mathfrak{m}_{x}/\mathfrak{m}_{x}^{2})^{*}$. In this way $T_{x}X$ is
exactly the subspace of $\operatorname{Dist}^{1}(X,x)$ given by functionals
$\psi:\mathcal{O}_{X,x}/\mathfrak{m}_{x}^{2}\to k$ such that $\psi(1)=0$.
Observe the restriction morphism
$\operatorname{res}_{x}:\mathcal{D}_{X,x}\to\operatorname{Dist}(X,x)$
restricts to a morphism $\mathcal{T}_{X,x}\to T_{x}X$.
###### Definition 2.7.
A supervariety $X$ is smooth if for all $x\in X(k)$ the morphism
$\operatorname{res}_{x}:\mathcal{T}_{X,x}\to T_{x}X$ is surjective.
See the appendix of [She19] for more equivalent conditions of smoothness.
We have the following standard theorem for $\mathcal{D}_{X}$-modules which we
will use later on. The proof is almost verbatim from the classical case, so
we omit it.
###### Proposition 2.8.
Suppose that $X$ is a smooth supervariety. Then if a left (or right)
$\mathcal{D}_{X}$-module $\mathcal{F}$ is coherent over $\mathcal{O}_{X}$,
then it is locally free over $\mathcal{O}_{X}$.
###### Lemma 2.9.
Let $X$ be a supervariety and $x\in X(k)$ a closed point at which $X$ is
smooth. Suppose $V$ is a subspace of vector fields defined in a neighborhood
$U$ of $x$, such that the restriction map $V\to T_{x}X$ is an isomorphism. If
$v_{1},\dots,v_{n}$ is a homogeneous basis of $V$, then the restriction of the
set of all monomials in $v_{1},\dots,v_{n}$, in any order, is a spanning set
of $\operatorname{Dist}(X,x)$.
###### Proof.
Choose homogeneous functions $f_{1},\dots,f_{n}$ defined in a neighborhood of
$x$ which vanish at $x$ such that they project to a basis of
$\mathfrak{m}_{x}/\mathfrak{m}_{x}^{2}$. Then
$\mathcal{O}_{X,x}/\mathfrak{m}_{x}^{\ell}$ is isomorphic to the set of
polynomials in $f_{1},\dots,f_{n}$ of degree less than $\ell$. By
applying a linear automorphism, we may assume that
$\operatorname{res}_{x}(v_{i})(f_{j})=\delta_{ij}$. Then we see that
$\operatorname{res}_{x}(v_{1}^{r_{1}}\cdots v_{n}^{r_{n}})(f_{1}^{s_{1}}\cdots
f_{n}^{s_{n}})=\pm s_{1}!\cdots
s_{n}!\delta_{r_{1}s_{1}}\cdots\delta_{r_{n}s_{n}},$
where the sign is determined by the number of parity changes which occurs in
the computation. From this the statement follows. ∎
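To illustrate 2.2 and Lemma 2.9 in the smallest possible case, consider the odd affine line $\mathbb{A}^{0|1}$ (this example is not in the text, but follows directly from the definitions):

```latex
% k[A^{0|1}] = k[theta] = k + k.theta, with theta odd and theta^2 = 0.
% At the unique closed point 0 we have m_0 = (theta), so m_0^2 = 0 and every
% linear functional on k[theta] is already a distribution:
\operatorname{Dist}(\mathbb{A}^{0|1},0)
  = \operatorname{Dist}^{1}(\mathbb{A}^{0|1},0)
  = k\,\operatorname{ev}_{0}\ \oplus\ k\,\operatorname{res}_{0}(\partial_{\theta}),
% where ev_0(a + b.theta) = a is even and res_0(d/dtheta)(a + b.theta) = b is
% odd. This matches Lemma 2.9: the monomials 1 and d/dtheta in the single odd
% vector field d/dtheta restrict to a spanning set of Dist.
```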
###### Definition 2.10.
Suppose $\mathcal{F}$ is a quasi-coherent sheaf on $X$. For a closed point $x$
of $X$, define distributions on $\mathcal{F}$ at $x$ to be all linear maps
$\psi:\mathcal{F}_{x}\to k$ such that for some $n\in\mathbb{N}$,
$\psi(\mathfrak{m}_{x}^{n}\mathcal{F}_{x})=0$. We write this as
$\operatorname{Dist}(\mathcal{F},x)$.
###### Remark 2.11.
Notice that if $\mathcal{F}$ has a connection, hence an action by vector
fields, then $\operatorname{Dist}(\mathcal{F},x)$ naturally admits a right
action by vector fields, given by
$(\psi v)(s)=\psi(vs).$
If the connection is flat and $X$ is smooth, then this action extends in the
natural way to a right action by all differential operators.
As before, we have an identification for each $x\in X(k)$:
$\operatorname{Dist}(\mathcal{F},x)\cong\Gamma_{\mathfrak{m}_{x}}(\mathcal{F}_{x})^{*}$
and this identification respects the right action by vector fields when
$\mathcal{F}$ admits a connection.
## 3\. Differential Operators on a $G$-variety
### 3.1. Preliminaries on algebraic supergroups
Recall that an algebraic supergroup is a group object in the category of
algebraic supervarieties. Morphisms of supergroups are those which respect the
multiplication morphisms. We only consider affine algebraic supergroups, or
equivalently those algebraic supergroups that are linear, i.e. have a faithful
finite-dimensional representation. The letters $G,H,\dots$ will denote an
affine algebraic supergroup. To avoid cumbersome language, we will not write
the adjectives affine and algebraic when referring to affine algebraic
supergroups, and instead simply call them supergroups. Similarly, we use the
term subgroup instead of subsupergroup. We refer again to [CCF11] for basics
on algebraic supergroups.
A supergroup has a Lie superalgebra which we will denote with the letters
$\mathfrak{g},\mathfrak{h},\dots$. The Lie superalgebra may be defined as the
super vector space of left-invariant (or right-invariant) vector fields on
$G$. As in the classical case, the Lie superalgebra is canonically identified
with the tangent space of $G$ at the identity, $T_{e}G$. Morphisms of
algebraic supergroups induce morphisms of the corresponding Lie superalgebras.
If $G$ is a supergroup, then $G_{0}$ is an algebraic group, and the morphism
$i_{G}:G_{0}\to G$ is a morphism of supergroups. Further, $i_{G}$ induces an
isomorphism of Lie algebras
$\operatorname{Lie}(G_{0})\cong\operatorname{Lie}(G)_{\overline{0}}$.
### 3.2. Representations of supergroups
In this article we will be using some basic facts about the representation
theory of quasireductive supergroups. We refer to [Ser11] for further
background.
Recall that for an affine algebraic supergroup $G$, $k[G]$ has the natural
structure of a supercommutative Hopf superalgebra. We define a left $G$-module
to be a left $k[G]$-comodule, and a right $G$-module to be a right
$k[G]$-comodule. It will be necessary for us to consider both left and right
$G$-modules due to the fact that distributions form a right module over the
algebra of differential operators, as we have seen. The category of left
$G$-modules is equivalent to the category of right $G$-modules because $k[G]$
has a Hopf structure. We will sometimes call a left or right $G$-module simply
a representation of $G$, or even a $G$-module, without specifying whether it
has a left or right action. In this case either the type of action is apparent
or is of no importance.
A left (resp. right) $G$-module induces in a natural way a left (resp. right)
representation of the Lie superalgebra. Recall that the category of
representations of $G$ is equivalent to the category of
$(G_{0},\mathfrak{g}=\operatorname{Lie}(G))$-modules such that the action of
$G_{0}$ and $\mathfrak{g}_{\overline{0}}\subseteq\mathfrak{g}$ are compatible
(see [CCF11]).
Given a representation $V$ of a supergroup $G$ such that
$\operatorname{dim}V_{\overline{1}}=n$, we write $\operatorname{Ber}(V)$ for
the Berezinian of $V$, which is the one-dimensional $G$-module with the same
parity as $n$, where the action by $G$ is given by the Berezinian morphism
$G\to GL(V)\xrightarrow{\operatorname{Ber}}\mathbb{G}_{m}$ (see chapter 3 of
[Man13]). If $V$ is purely even (resp. purely odd) then
$\operatorname{Ber}(V)$ coincides with the top exterior power (resp. top
symmetric power) of $V$. We write $\operatorname{ber}_{V}:G\to\mathbb{G}_{m}$
for the character of $G$ determined by $\operatorname{Ber}(V)$, and by abuse
of notation we also write $\operatorname{ber}_{V}$ for the character of
$\mathfrak{g}$ that this determines. Then we have
$\operatorname{ber}_{V}=\det_{V_{\overline{0}}}\cdot\det_{V_{\overline{1}}}^{-1}$
as a character of $G_{0}$, and
$\operatorname{ber}_{V}=\text{tr}_{V_{\overline{0}}}-\text{tr}_{V_{\overline{1}}}$
as a character of $\mathfrak{g}_{\overline{0}}$.
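For orientation, recall the standard block formula for the Berezinian (see chapter 3 of [Man13]); restricting to block-diagonal elements recovers the character formula on $G_{0}$ above:

```latex
% For an even invertible endomorphism of V = V_0 + V_1, written in block form
% with respect to the parity decomposition, and with D invertible:
\operatorname{Ber}\begin{pmatrix} A & B \\ C & D \end{pmatrix}
  = \det\!\big(A - B D^{-1} C\big)\,\det(D)^{-1}.
% For B = C = 0 this is det(A) det(D)^{-1}, i.e. ber_V = det_{V_0}.det_{V_1}^{-1}
% on G_0, whose differential is tr_{V_0} - tr_{V_1} on g_0.
```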
If $\chi:G\to\mathbb{G}_{m}$ is a character of $G$, and $V$ is a
representation of $G$, we will write $V^{\chi}$ for the subspace of $V$ where
$G$ acts by $\chi$. If $\chi$ is the trivial character, we just write
$V^{G}:=V^{\chi}$.
If $V$ is a $G$-module and $v\in V$ is a homogeneous element, the $G$-module
generated by $v$, which we write as $\langle G\cdot v\rangle$, is given by
$\mathcal{U}\mathfrak{g}\cdot\langle G_{0}\cdot v\rangle$. That is, we first
take the $G_{0}$-module generated by $v$, and then take the
$\mathcal{U}\mathfrak{g}$-module which that generates.
Finally, if $V$ is a $G$-module then we will write (when they exist)
$I_{G}(V)$, resp. $P_{G}(V)$ (or $I(V)$, resp. $P(V)$ when the context is
clear) for the injective hull, resp. projective cover of $V$.
### 3.3. Actions of supergroups
If $X$ is a supervariety, $G$ a supergroup, and $G$ acts (on the left) on $X$,
then we call $X$ a $G$-supervariety. We will usually reserve the letter
$a_{X}=a$ for the action morphism, i.e. $a:G\times X\to X$. In this case we
will consider $k[X]$ as a right $G$-module via translation. Explicitly, $g\in
G_{0}(k)$ acts by pullback $L_{g}^{*}$ along the left translation morphism
$L_{g}:X\to X$. The Lie superalgebra acts, for $u\in T_{e}G$, by
$u\mapsto(u\otimes 1)\circ a^{*}.$
This induces a map of superalgebras
$\mathcal{U}\mathfrak{g}\to\Gamma(X,\mathcal{D}_{X})^{op}$, and in this way
$\operatorname{Dist}(X,x)$ becomes a left $\mathcal{U}\mathfrak{g}$-module for
any $x\in X(k)$. In general, we will say $\mathfrak{g}$ acts on a supervariety
$X$ if it admits a homomorphism of algebras
$\mathcal{U}\mathfrak{g}\to\Gamma(X,\mathcal{D}_{X})^{op}$ such that
$\mathfrak{g}$ maps into $\Gamma(X,\mathcal{T}_{X})$.
###### Remark 3.1.
Suppose that $G$ acts on $X$, and $x\in X(k)$ is a closed point which is fixed
by $G_{0}$. Then $G_{0}$ acts on $\operatorname{Dist}(X,x)$. However
$\mathfrak{g}$ also acts on $\operatorname{Dist}(X,x)$, and in this way
$\operatorname{Dist}(X,x)$ obtains the structure of a $G$-module. Notice that
this will happen even if $x$ is not stabilized by all of $G$.
### 3.4. Differential operators on a $G$-supervariety
Let $X$ be an affine $G$-supervariety, with action morphism $a:G\times X\to
X$, and consider $D_{X}=\Gamma(X,\mathcal{D}_{X})$.
###### Lemma 3.2.
For a differential operator $D\in D_{X}$, the following are equivalent:
1. (1)
The map $D:k[X]\to k[X]$ is $G$-equivariant;
2. (2)
We have $a^{*}\circ D=\operatorname{id}\otimes D\circ a^{*}$;
and in the case when $G$ is connected, we have the third equivalent condition:
* (3)
For all $u\in\mathfrak{g}$, $[u,D]=0$.
In this case we say that $D$ is $G$-invariant.
###### Proof.
(1)$\iff$(3) is clear when $G$ is connected, and (2) says that $D$ is a
$k[G]$-comodule homomorphism, equivalently a $G$-module homomorphism, giving
(1)$\iff$(2). ∎
###### Definition 3.3.
Write $D_{X}^{G}$ for the superalgebra of $G$-invariant differential operators
on $X$.
For the meaning of an open orbit of an algebraic supergroup, see [She19].
###### Proposition 3.4.
Let $X$ be a $G$-supervariety, and $x$ a point of $X$ with stabilizer subgroup
$K\subseteq G$. If $G$ has an open orbit at $x$, then the morphism
$\operatorname{res}_{x}:D_{X}\to\operatorname{Dist}(X,x)$ restricts to an
injection $D_{X}^{G}\to\operatorname{Dist}(X,x)^{K}$.
###### Proof.
The map $\operatorname{res}_{x}$ is $K$-equivariant, so certainly
$\operatorname{res}_{x}(D_{X}^{G})\subseteq\operatorname{Dist}(X,x)^{K}$. To
see that it is injective, let $a_{x}:G\to X$ be the orbit map at $x$, and
$D\in D_{X}^{G}$. Then
$a_{x}^{*}:\mathcal{O}_{X}\to(a_{x})_{*}\mathcal{O}_{G}$ is an injective
morphism of sheaves by Prop. 3.11 of [She19]. Therefore, if
$\operatorname{res}_{x}(D)=0$, we have $D(f)(x)=0$ for all $f$ defined in an
open neighborhood of $x$. Equivalently, $a_{x}^{*}(D(f))(e)=0$ for all such
$f$. But we have the factorization $a_{x}=a\circ(\operatorname{id}_{G}\times
i_{x})$, so this says that
$\displaystyle(\operatorname{id}_{G}\times i_{x})^{*}\circ a^{*}(D(f))$
$\displaystyle=$ $\displaystyle(\operatorname{id}_{G}\times
i_{x})^{*}(\operatorname{id}\otimes D)(a^{*}(f))$ $\displaystyle=$
$\displaystyle(\operatorname{id}\otimes\operatorname{res}_{x}(D))(a^{*}(f))=0.$
This implies $D(f)=0$ for all $f$ defined in an open neighborhood of $x$.
Since by definition the restriction morphism on functions is injective for a
supervariety, this implies that $D=0$. ∎
###### Proposition 3.5.
In the context of the previous proposition, if $X\cong G/K$ via the orbit map
at $x$ then the map $D_{X}^{G}\to\operatorname{Dist}(X,x)^{K}$ is an
isomorphism.
###### Proof.
It remains to show surjectivity. For this, if
$\psi\in\operatorname{Dist}(X,x)^{K}$, define $D_{\psi}$ by
$f\mapsto(\operatorname{id}\otimes\psi)\circ a^{*}(f)$; then
$\operatorname{res}_{x}(D_{\psi})=\psi$, and $D_{\psi}$ is $G$-invariant
since it satisfies condition (2) of 3.2. Indeed,
$\displaystyle a^{*}\circ D_{\psi}$ $\displaystyle=$ $\displaystyle
a^{*}\circ(\operatorname{id}\otimes\psi)\circ a^{*}$ $\displaystyle=$
$\displaystyle(\operatorname{id}\otimes\operatorname{id}\otimes\psi)\circ(a^{*}\otimes\operatorname{id})\circ
a^{*}$ $\displaystyle=$
$\displaystyle(\operatorname{id}\otimes\operatorname{id}\otimes\psi)\circ(\operatorname{id}\otimes
a^{*})\circ a^{*}$ $\displaystyle=$
$\displaystyle[\operatorname{id}\otimes((\operatorname{id}\otimes\psi)\circ
a^{*})]\circ a^{*}$ $\displaystyle=$ $\displaystyle(\operatorname{id}\otimes
D_{\psi})\circ a^{*}.$
∎
## 4\. Induced supervarieties in the sense of Koszul
We present a construction that is originally due to Koszul in [Kos82].
Although it may be defined for non-affine supervarieties, for simplicity we
stick to the affine case since that is all we need. Throughout, we let $H$ be
a supergroup and write $\mathfrak{h}=\operatorname{Lie}(H)$.
### 4.1. Induced and coinduced modules
###### Definition 4.1.
Let $V_{0}$ be an $\mathfrak{h}_{\overline{0}}$-module, and define the
$\mathfrak{h}$-module
$\operatorname{Ind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}}V_{0}$ to be
$\operatorname{Ind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}}V_{0}:=\mathcal{U}\mathfrak{h}\otimes_{\mathcal{U}\mathfrak{h}_{\overline{0}}}V_{0}.$
The action by $\mathfrak{h}$ is left multiplication. If the action of
$\mathfrak{h}_{\overline{0}}$ on $V_{0}$ integrates to an action of $H_{0}$,
then the $\mathfrak{h}$ action on
$\operatorname{Ind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}}V_{0}$
integrates to an action of $H$, where $H_{0}$ acts by
$h\cdot(u\otimes v)=\operatorname{Ad}(h)(u)\otimes h\cdot v.$
Similarly we define the $\mathfrak{h}$-module
$\operatorname{Coind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}}V_{0}$ by
$\operatorname{Coind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}}V_{0}:=\operatorname{Hom}_{\mathfrak{h}_{\overline{0}}}(\mathcal{U}\mathfrak{h},V_{0}).$
Here we consider $\mathcal{U}\mathfrak{h}$ as a left
$\mathcal{U}\mathfrak{h}_{\overline{0}}$-module. The action by $\mathfrak{h}$
on this module is
$(u\eta)(v)=(-1)^{\overline{u}(\overline{\eta}+\overline{v})}\eta(vu).$
Once again, if $V_{0}$ is actually an $H_{0}$-module then
$\operatorname{Coind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}}V_{0}$ will
be an $H$-module, where the action by $H_{0}$ is given by
$(h\cdot\eta)(v)=h\cdot\eta(\operatorname{Ad}(h^{-1})(v)).$
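By the PBW theorem, both constructions are, as super vector spaces, exterior-algebra twists of $V_{0}$ (a standard observation, recorded here for orientation):

```latex
% PBW: U(h) is free over U(h_0) on a basis of Lambda(h_1), hence
\operatorname{Ind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}} V_{0}
  \cong \Lambda^{\bullet}\mathfrak{h}_{\overline{1}} \otimes V_{0},
\qquad
\operatorname{Coind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}} V_{0}
  \cong \Lambda^{\bullet}\mathfrak{h}_{\overline{1}}^{*} \otimes V_{0}
% as super vector spaces. In particular, when dim V_0 is finite both have
% dimension 2^{dim h_1} . dim V_0.
```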
###### Remark 4.2.
If $A_{0}$ is an algebra on which $\mathfrak{h}_{\overline{0}}$ acts by
derivations, then
$\operatorname{Coind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}}A_{0}$ is
naturally a superalgebra with multiplication given by
$(\eta\xi)(u)=m_{A_{0}}\circ(\eta\otimes\xi)(\Delta(u)).$
In this case, the action of $\mathfrak{h}$ on
$\operatorname{Coind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}}A_{0}$ is by
super derivations. A similar statement holds if $A_{0}$ is a Hopf algebra on
which $\mathfrak{h}_{\overline{0}}$ acts by derivations preserving the Hopf
algebra structure.
###### Lemma 4.3.
For an $\mathfrak{h}_{\overline{0}}$-module $V_{0}$, we have a canonical
isomorphism of $\mathfrak{h}$-modules
$(\operatorname{Coind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}}V_{0})^{*}\cong\operatorname{Ind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}}(V_{0})^{*}.$
This extends to an isomorphism of $H$-modules when $V_{0}$ is an
$H_{0}$-module.
###### Proof.
We always have a canonical map of $\mathfrak{h}$-modules (or $H$-modules when
applicable)
$\operatorname{Ind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}}(V_{0})^{*}\to(\operatorname{Coind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}}V_{0})^{*}$
given by
$(u\otimes\varphi)(\eta)=(-1)^{\overline{u}(\overline{\varphi}+\overline{\eta})}\varphi(\eta(s(u))),$
where $s:\mathcal{U}\mathfrak{h}\to\mathcal{U}\mathfrak{h}$ is the antipode.
This map is an isomorphism by the PBW theorem, since $\mathcal{U}\mathfrak{h}$
is a finitely generated free $\mathcal{U}\mathfrak{h}_{\overline{0}}$-module.
∎
### 4.2. Koszul’s induced superspace
###### Definition 4.4.
For an affine variety $X_{0}$ with an action of $\mathfrak{h}_{\overline{0}}$,
define $(X_{0})^{\mathfrak{h}}$ to be the affine variety with coordinate ring
$k[(X_{0})^{\mathfrak{h}}]:=\operatorname{Hom}_{\mathfrak{h}_{\overline{0}}}(\mathcal{U}\mathfrak{h},k[X_{0}])=\operatorname{Coind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}}k[X_{0}].$
By remark 4.2, $(X_{0})^{\mathfrak{h}}$ is an affine supervariety with an
action by $\mathfrak{h}$. If the action of $\mathfrak{h}_{\overline{0}}$ on
$X_{0}$ comes from an action of $H_{0}$ on $X_{0}$, then
$(X_{0})^{\mathfrak{h}}$ will be an $H$-supervariety. We have that
$((X_{0})^{\mathfrak{h}})_{0}=X_{0}$, and the natural projection
$k[(X_{0})^{\mathfrak{h}}]\to k[X_{0}]$ is given by $\eta\mapsto\eta(1)$.
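The simplest example is worth keeping in mind: take $X_{0}=\operatorname{pt}$ with the (necessarily trivial) action of $\mathfrak{h}_{\overline{0}}$ on $k[\operatorname{pt}]=k$. Then by the PBW theorem,
$k[(\operatorname{pt})^{\mathfrak{h}}]=\operatorname{Hom}_{\mathfrak{h}_{\overline{0}}}(\mathcal{U}\mathfrak{h},k)\cong\operatorname{Hom}(\Lambda\mathfrak{h}_{\overline{1}},k)=\Lambda\mathfrak{h}_{\overline{1}}^{*},$
so $(\operatorname{pt})^{\mathfrak{h}}$ is the purely odd affine superspace attached to $\mathfrak{h}_{\overline{1}}$. For $\mathfrak{h}=\mathfrak{g}$ this recovers $G/G_{0}$, as we will see in section 6.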
###### Remark 4.5.
The construction of the induced superspace given above can be done for any
supervariety $X_{0}$ with an $\mathfrak{h}_{\overline{0}}$-action as follows.
We define $(X_{0})^{\mathfrak{h}}$ to have underlying space $X_{0}$ and sheaf
of functions given by, for an open subset $U_{0}\subseteq X_{0}$,
$\Gamma(U_{0},\mathcal{O}_{(X_{0})^{\mathfrak{h}}})=\operatorname{Coind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}}k[U_{0}]=k[(U_{0})^{\mathfrak{h}}].$
One can check this construction respects localization and thus gives a
well-defined supervariety.
###### Remark 4.6.
If $X_{0}$ is smooth, then $(X_{0})^{\mathfrak{h}}$ is also smooth. In any
case, $\mathfrak{h}_{\overline{1}}$ defines everywhere non-vanishing vector
fields on $(X_{0})^{\mathfrak{h}}$ so that $\mathfrak{h}_{\overline{1}}\to
T_{x}(X_{0})^{\mathfrak{h}}$ is an isomorphism for all $x\in X_{0}(k)$.
###### Remark 4.7.
In [Kos82], it was shown that $G=(G_{0})^{\mathfrak{g}}$. Thus we have an
explicit description of the algebra of functions on $k[G]$ given by
$k[G]=\operatorname{Hom}_{\mathfrak{g}_{\overline{0}}}(\mathcal{U}\mathfrak{g},k[G_{0}]).$
From this one can also determine the Hopf algebra structure on $k[G]$ from the
Hopf algebra structures on $\mathcal{U}\mathfrak{g}$ and $k[G_{0}]$.
###### Proposition 4.8.
Let $X_{0}$ be an affine variety with an action of
$\mathfrak{h}_{\overline{0}}$. Then $(X_{0})^{\mathfrak{h}}$ has the following
universal property: given an affine supervariety $Y$ with an action of
$\mathfrak{h}$, and an $\mathfrak{h}_{\overline{0}}$-equivariant map
$\overline{\phi}:X_{0}\to Y_{0}$, there exists a unique
$\mathfrak{h}$-equivariant map $\phi:(X_{0})^{\mathfrak{h}}\to Y$ such that
$\phi_{0}=\overline{\phi}$ and the following diagram commutes:
$\begin{CD}(X_{0})^{\mathfrak{h}}@>{\phi}>>Y\\ @A{i_{X_{0}}}AA@AA{i_{Y_{0}}}A\\ X_{0}@>{\overline{\phi}}>>Y_{0}.\end{CD}$
If the actions of $\mathfrak{h}_{\overline{0}}$ and $\mathfrak{h}$ integrate
to actions of $H_{0}$ and $H$, respectively, then $\phi$ is a morphism of
$H$-supervarieties.
###### Proof.
Since everything is affine, we may work instead with the algebras of
functions. Then the commutativity of the square says that for such a map
$\phi$, we must have that for $f\in k[Y]$,
$\phi^{*}(f)(1)=(\overline{\phi})^{*}\circ(i_{Y_{0}})^{*}(f).$
The property of being an $\mathfrak{h}$-equivariant map then forces, for
$u\in\mathcal{U}\mathfrak{h}$,
$\phi^{*}(f)(u)=(-1)^{\overline{u}\overline{f}}(u\phi^{*}(f))(1)=(-1)^{\overline{u}\overline{f}}\phi^{*}(uf)(1)=(-1)^{\overline{u}\overline{f}}(\overline{\phi})^{*}((i_{Y_{0}})^{*}(uf))$
so the definition of $\phi^{*}$ is forced on us. One can check that this
formula defines an algebra homomorphism, and so we are done. ∎
For the following, observe that if $X_{0}$ is an $H_{0}$-variety and $x$ is a
closed point of $X_{0}$ which is fixed by $H_{0}$, then $H_{0}$ preserves
${}_{0}\mathfrak{m}_{x}$ and thus acts on $\operatorname{Dist}(X_{0},x)$.
###### Proposition 4.9.
Let $X_{0}$ be an affine $H_{0}$-variety, and $x\in X_{0}(k)$ a closed point
which is fixed by $H_{0}$. Then we have an isomorphism of $H$-modules
$\operatorname{Dist}((X_{0})^{\mathfrak{h}},x)\cong\operatorname{Ind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}}\operatorname{Dist}(X_{0},x).$
(See remark 3.1 for the $H$-module structure on
$\operatorname{Dist}((X_{0})^{\mathfrak{h}},x)$.)
###### Proof.
We have the following string of isomorphisms of $H$-modules:
$\displaystyle\operatorname{Dist}((X_{0})^{\mathfrak{h}},x)$
$\displaystyle\cong$
$\displaystyle\Gamma_{\mathfrak{m}_{x}}(k[(X_{0})^{\mathfrak{h}}])^{*}$
$\displaystyle=$
$\displaystyle\Gamma_{\mathfrak{m}_{x}}(\operatorname{Coind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}}k[X_{0}])^{*}$
$\displaystyle\cong$
$\displaystyle\Gamma_{\mathfrak{m}_{x}}\operatorname{Ind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}}(k[X_{0}])^{*}$
$\displaystyle=$
$\displaystyle\operatorname{Ind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}}\Gamma_{{}_{0}\mathfrak{m}_{x}}(k[X_{0}])^{*}$
$\displaystyle=$
$\displaystyle\operatorname{Ind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}}\operatorname{Dist}(X_{0},x)$
The non-trivial step here is that we can move $\Gamma$ past
$\operatorname{Ind}$. To see this, observe that
$\mathfrak{m}_{x}=\\{\eta\in\operatorname{Hom}_{\mathfrak{h}_{\overline{0}}}(\mathcal{U}\mathfrak{h},k[X_{0}]):\eta(1)\in{{}_{0}\mathfrak{m}_{x}}\\}.$
It follows that for any $d\in\mathbb{N}$, for $n\gg d$, if
$\eta\in\mathfrak{m}_{x}^{n}$, then $\eta(u)\in{}_{0}\mathfrak{m}_{x}^{d}$
for all $u\in\mathcal{U}\mathfrak{h}$, because $\mathfrak{h}_{\overline{0}}$
preserves the ideal ${}_{0}\mathfrak{m}_{x}$. The claim now follows. ∎
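As a sanity check, consider again $X_{0}=\operatorname{pt}$ with $x$ the unique point. Then $\operatorname{Dist}(\operatorname{pt},x)=k$, and since $k[(\operatorname{pt})^{\mathfrak{h}}]\cong\Lambda\mathfrak{h}_{\overline{1}}^{*}$ is finite-dimensional with nilpotent maximal ideal, the proposition reads
$\operatorname{Dist}((\operatorname{pt})^{\mathfrak{h}},x)\cong(\Lambda\mathfrak{h}_{\overline{1}}^{*})^{*}\cong\Lambda\mathfrak{h}_{\overline{1}}\cong\operatorname{Ind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}}k,$
consistent with the PBW description $\mathcal{U}\mathfrak{h}\otimes_{\mathcal{U}\mathfrak{h}_{\overline{0}}}k\cong\Lambda\mathfrak{h}_{\overline{1}}$ of the induced module.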
### 4.3. Induced vector bundles
Let us extend the above framework to more general vector bundles. Note that
this subsection can largely be skipped, as it is only used to prove the
$\operatorname{Ind}-\operatorname{Coind}$ isomorphism in section 6.1, which is
already known via other methods. See section 4.6 of [She20a] for the
definition of $\mathfrak{h}$-equivariant vector bundle.
Again assume that $X_{0}$ is affine, and suppose that $F_{0}$ is an
$\mathfrak{h}_{\overline{0}}$-equivariant vector bundle over $X_{0}$. Then
$\mathfrak{h}_{\overline{0}}$ acts on $\Gamma(X_{0},F_{0})$ and satisfies, for
$f_{0}\in k[X_{0}]$, $s_{0}\in\Gamma(X_{0},F_{0})$, and
$u_{0}\in\mathfrak{h}_{\overline{0}}$,
$u_{0}\cdot(f_{0}s_{0})=u_{0}(f_{0})s_{0}+f_{0}u_{0}(s_{0}),$
where the action of $u_{0}$ on $f_{0}$ is coming from the action on
$k[X_{0}]$. If $F_{0}$ is actually an $H_{0}$-equivariant vector bundle, then
for $h\in H_{0}$ we have
$h\cdot(f_{0}s_{0})=L_{h}^{*}(f_{0})h\cdot(s_{0}).$
Then we define an $\mathfrak{h}$-equivariant vector bundle
$(F_{0})^{\mathfrak{h}}$ over $(X_{0})^{\mathfrak{h}}$ to have the space of
sections given by
$\Gamma((X_{0})^{\mathfrak{h}},(F_{0})^{\mathfrak{h}}):=\operatorname{Coind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}}\Gamma(X_{0},F_{0})=\operatorname{Hom}_{\mathfrak{h}_{\overline{0}}}(\mathcal{U}\mathfrak{h},\Gamma(X_{0},F_{0})).$
Then this admits a natural $\mathfrak{h}$-action by virtue of being a
coinduced module, and if $F_{0}$ is $H_{0}$-equivariant,
$(F_{0})^{\mathfrak{h}}$ will be $H$-equivariant. The
$k[(X_{0})^{\mathfrak{h}}]$-module structure is defined by
$(\varphi s)(u)=a_{0,F_{0}}^{*}\circ(\varphi\otimes s)(\Delta(u))$
where
$a_{0,F_{0}}^{*}:k[X_{0}]\otimes\Gamma(X_{0},F_{0})\to\Gamma(X_{0},F_{0})$ is
the action map. We check that this makes sense: for
$v_{0}\in\mathfrak{h}_{\overline{0}}$, writing $\Delta(u)=\sum u_{i}\otimes
u^{i}$,
$\displaystyle(\varphi s)(v_{0}u)$ $\displaystyle=$ $\displaystyle
a_{0,F_{0}}^{*}\circ(\varphi\otimes s)((v_{0}\otimes 1+1\otimes
v_{0})\Delta(u))$ $\displaystyle=$
$\displaystyle\sum_{i}(-1)^{\overline{s}\overline{u_{i}}}(\varphi(v_{0}u_{i})s(u^{i})+\varphi(u_{i})s(v_{0}u^{i}))$
$\displaystyle=$
$\displaystyle\sum_{i}(-1)^{\overline{s}\overline{u_{i}}}(v_{0}(\varphi(u_{i}))s(u^{i})+\varphi(u_{i})v_{0}(s(u^{i})))$
$\displaystyle=$ $\displaystyle v_{0}((\varphi s)(u)).$
It’s also straightforward to show that for $v\in\mathfrak{h}$ we have
$v(\varphi s)=v(\varphi)s+(-1)^{\overline{v}\overline{\varphi}}\varphi v(s),$
and (when applicable) for $h\in H_{0}$ we have
$h\cdot(\varphi s)=L_{h}^{*}(\varphi)h\cdot s.$
This construction is local in the sense that we have, for an open subvariety
$U_{0}\subseteq X_{0}$,
$\Gamma(U_{0},(F_{0})^{\mathfrak{h}})=\operatorname{Coind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}}\Gamma(U_{0},F_{0})=\operatorname{Hom}_{\mathfrak{h}_{\overline{0}}}(\mathcal{U}\mathfrak{h},\Gamma(U_{0},F_{0})).$
From this it is not hard to show that $(F_{0})^{\mathfrak{h}}$ is an even rank
vector bundle on $(X_{0})^{\mathfrak{h}}$ of the same rank as $F_{0}$.
###### Proposition 4.10.
There is a natural bijection between even rank $\mathfrak{h}$-equivariant
vector bundles on $(X_{0})^{\mathfrak{h}}$ and
$\mathfrak{h}_{\overline{0}}$-equivariant vector bundles on $X_{0}$, given
explicitly as follows:
* •
given an $\mathfrak{h}_{\overline{0}}$-equivariant vector bundle $F_{0}$ on
$X_{0}$, we may produce the $\mathfrak{h}$-equivariant vector bundle
$(F_{0})^{\mathfrak{h}}$ on $(X_{0})^{\mathfrak{h}}$; and
* •
given an $\mathfrak{h}$-equivariant vector bundle $F$ on
$(X_{0})^{\mathfrak{h}}$ we may take the
$\mathfrak{h}_{\overline{0}}$-equivariant vector bundle on $X_{0}$ gotten by
pulling back along the embedding $i:X_{0}\hookrightarrow(X_{0})^{\mathfrak{h}}$.
This restricts to a bijection of even rank $H$-equivariant vector bundles on
$(X_{0})^{\mathfrak{h}}$ and $H_{0}$-equivariant vector bundles on $X_{0}$,
when applicable.
###### Proof.
If we start with an $\mathfrak{h}_{\overline{0}}$-equivariant vector bundle
$F_{0}$, then we have a natural surjection
$\Gamma(X_{0},i^{*}(F_{0})^{\mathfrak{h}})\to\Gamma(X_{0},F_{0})$
given by evaluating an element of
$\operatorname{Hom}_{\mathfrak{h}_{\overline{0}}}(\mathcal{U}\mathfrak{h},\Gamma(X_{0},F_{0}))$
at $1\in\mathcal{U}\mathfrak{h}$. It is an
$\mathfrak{h}_{\overline{0}}$-equivariant map, and since the vector bundles
have the same rank, it must be an isomorphism.
In the other direction, suppose we have an $\mathfrak{h}$-equivariant bundle
$F$ on $X:=(X_{0})^{\mathfrak{h}}$. Then we have a natural map of
$\mathfrak{h}$-equivariant vector bundles $F\to(i^{*}F)^{\mathfrak{h}}$, given
on sections
$\Gamma(X,F)\to\operatorname{Hom}_{\mathfrak{h}_{\overline{0}}}(\mathcal{U}\mathfrak{h},\Gamma(X,i^{*}F))$
by
$s\mapsto(u\mapsto i^{*}(us)).$
To see this is an isomorphism, it suffices to check the induced maps on
fibers, which are isomorphisms. ∎
###### Proposition 4.11.
Under the same hypotheses as 4.9, let $F_{0}$ be an $H_{0}$-equivariant vector
bundle on $X_{0}$. Then we have a natural isomorphism of $H$-modules
$\operatorname{Dist}((F_{0})^{\mathfrak{h}},x)\cong\operatorname{Ind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}}\operatorname{Dist}(F_{0},x).$
###### Proof.
The proof extends almost verbatim from 4.9. ∎
## 5\. Induced spaces and homogeneous spaces
### 5.1. A local isomorphism for certain homogeneous supervarieties
Given a supergroup $G$ and a closed algebraic subgroup $K$, we may form the
homogeneous supervariety $G/K$. For technical aspects of such spaces see
[MT18] and [MZ10]. In particular $G/K$ is always smooth and
$(G/K)_{0}=G_{0}/K_{0}$, so that $G/K$ is affine if and only if $G_{0}/K_{0}$
is.
Let $X=G/K$ be a homogeneous affine supervariety, and suppose that there
exists a subgroup $H$ of $G$ such that
$\mathfrak{h}_{\overline{1}}\oplus\mathfrak{k}_{\overline{1}}=\mathfrak{g}_{\overline{1}}$.
Then consider the $H$-supervariety $(G_{0}/K_{0})^{\mathfrak{h}}$. By its
universal property it admits a canonical $H$-equivariant morphism to $G/K$.
###### Proposition 5.1.
The canonical $H$-equivariant map $\phi:(G_{0}/K_{0})^{\mathfrak{h}}\to G/K$
induces an isomorphism of supervarieties in a Zariski open neighborhood of
$eK_{0}$. In particular, the map on functions
$\phi^{*}:k[G/K]\to
k[(G_{0}/K_{0})^{\mathfrak{h}}]=\operatorname{Hom}_{\mathfrak{h}_{\overline{0}}}(\mathcal{U}\mathfrak{h},k[G_{0}/K_{0}])$
is an injective $H$-module homomorphism.
First we need a lemma.
###### Lemma 5.2.
Suppose that $f:X\to Y$ is a morphism of smooth affine supervarieties such
that $f_{0}:X_{0}\to Y_{0}$ is an isomorphism and $df_{x}$ is an isomorphism
for all closed points $x\in X(k)$. Then $f$ is an isomorphism.
###### Proof.
Because $X$ and $Y$ are smooth and affine, we may present them as exterior
algebras of vector bundles $E_{X_{0}}$, $E_{Y_{0}}$ on $X_{0}$, $Y_{0}$.
Working on a cover, we may assume these vector bundles are trivial.
Let $\xi_{1},\dots,\xi_{n}\in\Gamma(Y_{0},E_{Y_{0}})\subseteq
k[Y]_{\overline{1}}$ be a $k[Y_{0}]$-basis for $\Gamma(Y_{0},E_{Y_{0}})$, so
that these elements project to a basis of $(T_{y}Y)_{\overline{1}}$ for each
$y\in Y(k)$. Then $f^{*}(\xi_{1}),\dots,f^{*}(\xi_{n})\in k[X]_{\overline{1}}$
must project to a basis of $(T_{x}X)_{\overline{1}}$ for all $x\in X(k)$. It
follows that
$f^{*}(\xi_{1}),\dots,f^{*}(\xi_{n})$ project to a $k[X_{0}]$-basis of
$\Gamma(X_{0},E_{X_{0}})$ in the associated graded of $k[X]$. Hence the
associated graded morphism of $f^{*}$ is an isomorphism, which implies that
$f^{*}$ is an isomorphism, and we are done. ∎
###### Proof of 5.1.
Each space admits an action by $\mathfrak{h}$ as vector fields acting on
functions, and since $\phi^{*}$ is an $\mathfrak{h}$-homomorphism we have
$\phi^{*}(uf)=u\phi^{*}(f)$
for $u\in\mathfrak{h}$ and $f\in k[G/K]$. Hence for any closed point $x$ of
$G/K$ the following diagram is commutative:
$\begin{CD}\mathfrak{h}@>>>T_{x}(G_{0}/K_{0})^{\mathfrak{h}}\\ @|@VV{d\phi_{x}}V\\ \mathfrak{h}@>>>T_{x}(G/K).\end{CD}$
In particular, wherever $\mathfrak{h}_{\overline{1}}$ spans the odd part of
the tangent space of $G/K$ at $x$, $d\phi_{x}$ will be an isomorphism of
vector spaces when restricted to the odd part. However, we see that
$\phi_{0}^{*}$ is the identity map by construction, hence we get a commutative
diagram:
$\begin{CD}(G_{0}/K_{0})^{\mathfrak{h}}@>{\phi}>>G/K\\ @AAA@AAA\\ G_{0}/K_{0}@>{\phi_{0}=\operatorname{id}}>>G_{0}/K_{0}.\end{CD}$
If we restrict to the open neighborhood of $eK_{0}$ upon which
$\mathfrak{h}_{\overline{1}}$ spans the odd tangent space of $G/K$, $\phi$
will be an isomorphism by lemma 5.2. ∎
###### Corollary 5.3.
Maintain the assumptions of 5.1, and suppose further that $H_{0}\subseteq
K_{0}$. Then we have a canonical isomorphism of $H$-modules
$\operatorname{Dist}(G/K,eK)\cong\operatorname{Ind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}}\operatorname{Dist}(G_{0}/K_{0},eK_{0}).$
Or in terms of enveloping algebras,
$\mathcal{U}\mathfrak{g}/(\mathcal{U}\mathfrak{g}\mathfrak{k})\cong\operatorname{Ind}_{\mathfrak{h}_{\overline{0}}}^{\mathfrak{h}}\mathcal{U}\mathfrak{g}_{\overline{0}}/(\mathcal{U}\mathfrak{g}_{\overline{0}}\mathfrak{k}_{\overline{0}}).$
###### Proof.
The first statement follows from 4.9. The second statement follows from the
usual identification of $\mathcal{U}\mathfrak{g}$-modules
$\operatorname{Dist}(G/K,eK)\cong\mathcal{U}\mathfrak{g}/(\mathcal{U}\mathfrak{g}\mathfrak{k}),$
which comes from the composition
$\mathcal{U}\mathfrak{g}\to\Gamma(G/K,\mathcal{D}_{G/K})\xrightarrow{\operatorname{res}_{eK}}\operatorname{Dist}(G/K,eK).$
∎
### 5.2. Supersymmetric spaces
From now on, we assume that $G$ is a connected supergroup, i.e. an affine
algebraic supergroup such that $G_{0}$ is connected. Let $\theta$ be an
involution of $G$, and let $K$ be a closed subgroup of $G$ such that
$(G^{\theta})^{0}\subseteq K\subseteq G^{\theta}$. In particular $K$ need not
be connected. From this we may consider the homogeneous supervariety $G/K$,
and we call $G$-supervarieties of this form symmetric supervarieties (or
supersymmetric spaces).
On the level of Lie superalgebras, $\theta$ induces an involution on
$\mathfrak{g}$, which by abuse of notation we again write as $\theta$, giving
rise to the decomposition $\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}$, where
$\mathfrak{k}=\operatorname{Lie}K$ is the fixed subspace of $\theta$ and
$\mathfrak{p}$ the $(-1)$-eigenspace of $\theta$.
###### Definition 5.4.
Define
$\mathfrak{k}^{\prime}:=\mathfrak{k}_{\overline{0}}\oplus\mathfrak{p}_{\overline{1}}$,
which is the fixed points of the involution $\delta\circ\theta$, where
$\delta(x)=(-1)^{\overline{x}}x$ is the grading operator on $\mathfrak{g}$.
Let $K^{\prime}$ denote the closed algebraic subgroup of $G$ such that
$K^{\prime}_{0}=K_{0}$ and
$\operatorname{Lie}K^{\prime}=\mathfrak{k}^{\prime}$.
Notice that $\delta\circ\theta$ is an involution on $G$ such that
$(G^{\delta\circ\theta})^{0}\subseteq K^{\prime}\subseteq
G^{\delta\circ\theta}$, hence $G/K^{\prime}$ is another symmetric
supervariety.
###### Proposition 5.5.
We have a $K^{\prime}$-equivariant morphism
$(G_{0}/K_{0})^{\mathfrak{k}^{\prime}}\to G/K$
which is an isomorphism in a neighborhood of $eK_{0}$. In particular, the
pullback morphism of functions
$k[G/K]\to
k[(G_{0}/K_{0})^{\mathfrak{k}^{\prime}}]=\operatorname{Coind}_{\mathfrak{k}_{\overline{0}}}^{\mathfrak{k}^{\prime}}k[G_{0}/K_{0}]$
is injective.
###### Proof.
This follows immediately from 5.1. ∎
###### Corollary 5.6.
We have a natural injective morphism of algebras
$k[K^{\prime}\backslash G/K]=k[G/K]^{K^{\prime}}\to k[K_{0}\backslash
G_{0}/K_{0}]=k[G_{0}/K_{0}]^{K_{0}}.$
In particular, $k[K^{\prime}\backslash G/K]$ is an integral domain.
###### Proof.
Taking $K^{\prime}$ invariants of the pullback morphism we obtain an injection
$k[G/K]^{K^{\prime}}\to\left(\operatorname{Coind}_{\mathfrak{k}_{\overline{0}}}^{\mathfrak{k}^{\prime}}k[G_{0}/K_{0}]\right)^{K^{\prime}}.$
Now one may use Frobenius reciprocity to identify
$\left(\operatorname{Coind}_{\mathfrak{k}_{\overline{0}}}^{\mathfrak{k}^{\prime}}k[G_{0}/K_{0}]\right)^{K^{\prime}}$
with $k[G_{0}/K_{0}]^{K_{0}}$ as an algebra. ∎
The following result, which now follows easily from 5.3, is an appropriate
generalization of the fundamental observation made in [Gor00].
###### Proposition 5.7.
We have a canonical isomorphism of $K^{\prime}$-modules
$\operatorname{Dist}(G/K,eK)\cong\operatorname{Ind}_{\mathfrak{k}_{\overline{0}}}^{\mathfrak{k}^{\prime}}\operatorname{Dist}(G_{0}/K_{0},eK_{0}).$
## 6\. The symmetric space $G/G_{0}$
Consider the involution $\theta=\delta$, the canonical grading operator on
$\mathfrak{g}$, defined as above by $\delta(x)=(-1)^{\overline{x}}x$. In this
case $G^{\delta}=G_{0}$, and the local isomorphism of 5.5 becomes a global
isomorphism of $G$-supervarieties (the underlying spaces of both sides
consisting of just one point):
$G/G_{0}\cong(G_{0}/G_{0})^{\mathfrak{g}}.$
### 6.1. $\operatorname{Ind}$-$\operatorname{Coind}$ isomorphism
The homogeneous space $G/G_{0}$ has one closed point, which we will call $e$.
Let $V$ be a $G$-equivariant vector bundle on $G/G_{0}$, and write $V$ again
for its space of sections. Since $\mathfrak{m}_{e}$ is nilpotent, we have the
identification $\operatorname{Dist}(V,e)=V^{*}$.
A $G_{0}$-equivariant vector bundle on $G_{0}/G_{0}$ is the same data as a
finite-dimensional $G_{0}$-representation $V_{0}$, and its sections are again
$V_{0}$. Then 4.11 tells us that
$\operatorname{Dist}((V_{0})^{\mathfrak{g}},e)=(\operatorname{Coind}_{\mathfrak{g}_{\overline{0}}}^{\mathfrak{g}}V_{0})^{*}\cong\operatorname{Ind}_{\mathfrak{g}_{\overline{0}}}^{\mathfrak{g}}(V_{0})^{*},$
which we already showed in lemma 4.3.
We will write $(V_{0})^{\mathfrak{g}}$ once again for the sections of
$(V_{0})^{\mathfrak{g}}$ on $G/G_{0}$ since this space has only one point.
Since $D_{G/G_{0}}$ is generated by $\mathcal{U}\mathfrak{g}$ and $k[G/G_{0}]$
as an algebra, $(V_{0})^{\mathfrak{g}}$ will be a $D_{G/G_{0}}$-module on
$G/G_{0}$.
Thus $\operatorname{Dist}((V_{0})^{\mathfrak{g}},e)$ is a
$D_{G/G_{0}}$-module, and, being finite-dimensional, must also be coherent. By
2.8, $\operatorname{Dist}((V_{0})^{\mathfrak{g}},e)$ must be a vector bundle
on $G/G_{0}$, and is $G$-equivariant via its $\mathcal{D}_{G/G_{0}}$-module
structure. Further it is of even rank since $(V_{0})^{\mathfrak{g}}$ is of
even rank, and therefore by 4.10 there exists a $G_{0}$-equivariant vector
bundle on $G_{0}/G_{0}$, that is a $G_{0}$-representation $W_{0}$, such that
$\operatorname{Dist}((V_{0})^{\mathfrak{g}},e)\cong(W_{0})^{\mathfrak{g}}.$
By 4.10, $W_{0}$ is obtained by taking the sections of the restriction of
$(V_{0})^{\mathfrak{g}}$ to $G_{0}/G_{0}$, i.e.
$W_{0}\cong\operatorname{Dist}((V_{0})^{\mathfrak{g}},e)/\mathfrak{m}_{e}\operatorname{Dist}((V_{0})^{\mathfrak{g}},e).$
Now let $n=\operatorname{dim}\mathfrak{g}_{\overline{1}}$ so that
$\mathfrak{m}_{e}^{n+1}=0$ but $\mathfrak{m}_{e}^{n}\neq 0$. Then
$\operatorname{Dist}(V,e)\cong((V_{0})^{\mathfrak{g}}/\mathfrak{m}_{e}^{n+1}(V_{0})^{\mathfrak{g}})^{*}$,
and therefore
$((V_{0})^{\mathfrak{g}}/\mathfrak{m}_{e}^{n+1}(V_{0})^{\mathfrak{g}})^{*}/\mathfrak{m}_{e}((V_{0})^{\mathfrak{g}}/\mathfrak{m}_{e}^{n+1}(V_{0})^{\mathfrak{g}})^{*}\cong(\mathfrak{m}_{e}^{n}(V_{0})^{\mathfrak{g}})^{*}.$
Now as a $G_{0}$-module,
$(V_{0})^{\mathfrak{g}}=\operatorname{Hom}_{\mathfrak{g}_{\overline{0}}}(\mathcal{U}\mathfrak{g},V_{0})\cong\operatorname{Hom}(\Lambda\mathfrak{g}_{\overline{1}},V_{0})\cong\Lambda\mathfrak{g}_{\overline{1}}^{*}\otimes
V_{0}$
and $\mathfrak{m}_{e}^{n}(V_{0})^{\mathfrak{g}}$ sits inside as the
$\mathfrak{g}_{\overline{0}}$-submodule
$\Pi^{n}\Lambda^{n}\mathfrak{g}_{\overline{1}}^{*}\otimes V_{0}$. Therefore,
$W_{0}\cong\Pi^{n}\Lambda^{n}\mathfrak{g}_{\overline{1}}\otimes
V_{0}^{*}=\operatorname{Ber}(\mathfrak{g}_{\overline{1}})\otimes V_{0}^{*}$.
Putting everything together and replacing $V_{0}$ by $V_{0}^{*}$, we have
reproduced the following well-known result (see for instance section 9 of
[Ser11] for an algebraic proof):
###### Proposition 6.1.
For a $G_{0}$-module $V_{0}$, we have a canonical isomorphism of $G$-modules
$\operatorname{Ind}_{\mathfrak{g}_{\overline{0}}}^{\mathfrak{g}}V_{0}\cong\operatorname{Coind}_{\mathfrak{g}_{\overline{0}}}^{\mathfrak{g}}(\operatorname{Ber}(\mathfrak{g}_{\overline{1}})\otimes
V_{0}).$
### 6.2. The invariant differential ghost operator
Now we study differential operators on $G/G_{0}$. By 3.5, the invariant
differential operators are given by $G_{0}$-invariant distributions, i.e.
$(\mathcal{U}\mathfrak{g}/\mathcal{U}\mathfrak{g}\mathfrak{g}_{\overline{0}})^{G_{0}},$
which can be identified, via symmetrization, with
$S(\mathfrak{g}_{\overline{1}})^{G_{0}}$. However, we also have
$\operatorname{Dist}(G/G_{0},e)\cong\operatorname{Ind}_{\mathfrak{g}_{\overline{0}}}^{\mathfrak{g}}k$
as a $\mathfrak{g}$-module. Therefore by 6.1 we have
$\operatorname{Dist}(G/G_{0},e)\cong\operatorname{Coind}_{\mathfrak{g}_{\overline{0}}}^{\mathfrak{g}}(\operatorname{Ber}(\mathfrak{g}_{\overline{1}})).$
Recall that $\operatorname{ber}_{\mathfrak{g}}$ is the character of $G$
determined by $\operatorname{Ber}(\mathfrak{g})$. If $V$ is a
$G$-module, we write $V^{\operatorname{ber}_{\mathfrak{g}}}$ for the subspace
of eigenvectors for $G$ of weight $\operatorname{ber}_{\mathfrak{g}}$.
###### Corollary 6.2.
Suppose that $\Lambda^{top}\mathfrak{g}_{\overline{0}}$ is a trivial
$G_{0}$-module. Then the subspace
$\operatorname{Dist}(G/G_{0},e)^{\operatorname{ber}_{\mathfrak{g}}}$ is
one-dimensional.
###### Proof.
Under this assumption,
$\operatorname{Ber}(\mathfrak{g})=\operatorname{Ber}(\mathfrak{g}_{\overline{1}})$,
and so the result follows by Frobenius reciprocity. ∎
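Explicitly, writing the eigenspace as a Hom space out of the one-dimensional $G$-module of weight $\operatorname{ber}_{\mathfrak{g}}$ and applying Frobenius reciprocity to the coinduced description above:
$\operatorname{Dist}(G/G_{0},e)^{\operatorname{ber}_{\mathfrak{g}}}\cong\operatorname{Hom}_{G}(\operatorname{Ber}(\mathfrak{g}),\operatorname{Coind}_{\mathfrak{g}_{\overline{0}}}^{\mathfrak{g}}\operatorname{Ber}(\mathfrak{g}_{\overline{1}}))\cong\operatorname{Hom}_{G_{0}}(\operatorname{Ber}(\mathfrak{g}),\operatorname{Ber}(\mathfrak{g}_{\overline{1}})),$
which is one-dimensional precisely because the two characters of $G_{0}$ agree under the hypothesis.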
Observe that the conditions of 6.2 hold if $G_{0}$ is reductive. When it
exists, we write
$v_{\mathfrak{g}}\in\operatorname{Dist}(G/G_{0},e)^{\operatorname{ber}_{\mathfrak{g}}}$
for a chosen non-zero element. If $\beta=0$, i.e.
$\operatorname{Ber}(\mathfrak{g}_{\overline{1}})$ is a trivial $G_{0}$-module,
then
$v_{\mathfrak{g}}\in\operatorname{Dist}(G/G_{0},e)^{G}\subseteq\operatorname{Dist}(G/G_{0},e)^{G_{0}}\cong
D_{G/G_{0}}^{G}$. Therefore in this case $v_{\mathfrak{g}}$ corresponds to a
$G$-invariant differential operator on $G/G_{0}$, which we write as
$D_{\mathfrak{g}}$.
###### Definition 6.3.
We say a supergroup $G$ is quasireductive if $G_{0}$ is reductive. We say a
Lie superalgebra $\mathfrak{g}$ is quasireductive if it is the Lie
superalgebra of a quasireductive supergroup.
Assumption: We assume for the rest of the section that $G$ is quasireductive
and
$\operatorname{Ber}(\mathfrak{g})=\operatorname{Ber}(\mathfrak{g}_{\overline{1}})$
is the trivial $G_{0}$-module.
Under this assumption, $v_{\mathfrak{g}}$ corresponds, as we said, to a
certain $G$-invariant differential operator. We now determine what it is.
###### Lemma 6.4.
$D_{G/G_{0}}=\operatorname{End}_{k}(k[G/G_{0}])$.
###### Proof.
This easily follows from the definition of differential operators. ∎
Recall that since $G_{0}$ is reductive,
$k[G/G_{0}]=\operatorname{Coind}_{\mathfrak{g}_{\overline{0}}}^{\mathfrak{g}}k$
is projective, and hence is a sum of injective indecomposable modules $I(L)$
for $L$ a simple $G$-module in the socle of
$\operatorname{Coind}_{\mathfrak{g}_{\overline{0}}}^{\mathfrak{g}}k$. We have
that the trivial module, $k$, shows up exactly once in
$\operatorname{Coind}_{\mathfrak{g}_{\overline{0}}}^{\mathfrak{g}}k$, hence we
can write:
$k[G/G_{0}]=\operatorname{Coind}_{\mathfrak{g}_{\overline{0}}}^{\mathfrak{g}}k=I(k)\oplus
V.$
Since $\operatorname{Ber}(\mathfrak{g}_{\overline{1}})$ is trivial, we have
that $I(k)=P(k)$ (see [Ser11]), and thus the head and tail of $I(k)$ are both
the trivial module. It follows that $\operatorname{End}(I(k))$ contains a
unique up to scalar endomorphism $\phi$ taking the head to the tail, which is
nilpotent exactly when the representation theory of $G$ is not semisimple
(i.e. $k$ is not projective).
###### Proposition 6.5.
Up to a non-zero scalar, $D_{\mathfrak{g}}$ is the endomorphism of
$k[G/G_{0}]=I(k)\oplus V$ given by $\phi\oplus 0_{V}$.
###### Proof.
Since $D_{\mathfrak{g}}$ is $G$-invariant by construction, it suffices to show
that $\operatorname{res}_{e}(D_{\mathfrak{g}})$ is $\mathfrak{g}$-invariant.
However, we observe that $uD_{\mathfrak{g}}=D_{\mathfrak{g}}u=0$ for all
$u\in\mathfrak{g}$, so since $D_{G/G_{0}}\to\operatorname{Dist}(G/G_{0},e)$ is
right $D_{G/G_{0}}$-equivariant, we are done. ∎
The following is now easy to show:
###### Corollary 6.6.
For a $G_{0}$-module $V_{0}$, $D_{\mathfrak{g}}$ acts on
$\operatorname{Coind}_{\mathfrak{g}_{\overline{0}}}^{\mathfrak{g}}V_{0}$ by
$\phi$ on each summand isomorphic to $I(k)$, and zero otherwise.
We have the following characterization of $v_{\mathfrak{g}}$:
###### Corollary 6.7.
Let $\mathfrak{g}$ be a quasireductive Lie superalgebra such that
$\operatorname{Ber}(\mathfrak{g}_{\overline{1}})$ is the trivial
$G_{0}$-module, and write $v_{\mathfrak{g}}$ for a non-zero element of
$(\mathcal{U}\mathfrak{g}/\mathcal{U}\mathfrak{g}\mathfrak{g}_{\overline{0}})^{G}$.
Then $v_{\mathfrak{g}}$ is the unique non-zero element up to scalar with the
property that
$uv_{\mathfrak{g}}\in\mathcal{U}\mathfrak{g}\mathfrak{g}_{\overline{0}}$ for
all $u\in\mathfrak{g}$.
### 6.3. Relation to Gorelik’s element $v_{\emptyset}$
###### Proposition 6.8.
Let $\mathfrak{g}$ be a Lie superalgebra such that
$\Lambda^{top}\mathfrak{g}_{\overline{1}}$ is the trivial $G_{0}$-module. Then
for a $G_{0}$-module $V_{0}$ we have a natural isomorphism
$(V_{0})^{G}\to(\operatorname{Ind}_{\mathfrak{g}_{\overline{0}}}^{\mathfrak{g}}V_{0})^{\operatorname{ber}_{\mathfrak{g}}}$
given by
$z\mapsto v_{\mathfrak{g}}z.$
###### Proof.
This easily follows from the work already done in this section. ∎
In [Gor00], it was proven that if
$\operatorname{Ber}(\mathfrak{g}_{\overline{1}})$ is trivial then there exists
an element $v_{\emptyset}\in\mathcal{U}\mathfrak{g}$ with the property that
for a $\mathfrak{g}_{\overline{0}}$-module $V$, the map
$z\mapsto v_{\emptyset}z$
defines an isomorphism
$V^{\mathfrak{g}_{\overline{0}}}\to(\operatorname{Ind}_{\mathfrak{g}_{\overline{0}}}^{\mathfrak{g}}V)^{\mathfrak{g}}.$
###### Corollary 6.9.
The element $v_{\mathfrak{g}}$ agrees with the construction of such an element
given by Gorelik in [Gor00].
###### Proof.
Gorelik’s element has the property that it defines a nonzero element of
$(\mathcal{U}\mathfrak{g}/\mathcal{U}\mathfrak{g}\mathfrak{g}_{\overline{0}})^{\mathfrak{g}}$,
so we are done. ∎
### 6.4. Computations of $v_{\mathfrak{g}}$ for some Lie superalgebras
We compute explicitly the element
$v_{\mathfrak{g}}\in\left(\mathcal{U}\mathfrak{g}/\mathcal{U}\mathfrak{g}\mathfrak{g}_{\overline{0}}\right)^{\operatorname{ber}_{\mathfrak{g}}}$
for certain quasireductive Lie superalgebras. First, an easy example:
###### Lemma 6.10.
If $[\mathfrak{g}_{\overline{1}},\mathfrak{g}_{\overline{1}}]$ is central in
$\mathfrak{g}$, then $v_{\mathfrak{g}}=v_{1}\cdots v_{n}$ for any choice of
basis $v_{1},\dots,v_{n}$ of $\mathfrak{g}_{\overline{1}}$.
###### Proof.
This element is acted on by $\mathfrak{g}_{\overline{0}}$ according to its
action on $\operatorname{Ber}(\mathfrak{g})$. Observe that
$v_{i}v_{\mathfrak{g}}=(-1)^{i-1}v_{1}\cdots v_{i}^{2}\cdots
v_{n}+\sum\limits_{1\leq j<i}(-1)^{j-1}v_{1}\dots
v_{j-1}[v_{i},v_{j}]v_{j+1}\cdots v_{n}.$
Since both $[v_{j},v_{i}]$ and $v_{i}^{2}$ are central, we may rewrite this
sum as
$(-1)^{i-1}v_{1}\cdots\widehat{v_{i}}\cdots
v_{n}v_{i}^{2}+\sum\limits_{1\leq j<i}(-1)^{j-1}v_{1}\cdots
v_{j-1}\widehat{v_{j}}v_{j+1}\cdots
v_{n}[v_{j},v_{i}]\in\mathcal{U}\mathfrak{g}\mathfrak{g}_{\overline{0}}.$
∎
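An extreme illustration: suppose $\mathfrak{g}_{\overline{0}}=0$, so that $\mathfrak{g}=\mathfrak{g}_{\overline{1}}$ is purely odd and abelian ($[\mathfrak{g}_{\overline{1}},\mathfrak{g}_{\overline{1}}]\subseteq\mathfrak{g}_{\overline{0}}=0$), trivially quasireductive, with $\operatorname{Ber}(\mathfrak{g}_{\overline{1}})$ the trivial module. Then $\mathcal{U}\mathfrak{g}=\Lambda\mathfrak{g}_{\overline{1}}$, $\mathcal{U}\mathfrak{g}\mathfrak{g}_{\overline{0}}=0$, and the lemma gives
$v_{\mathfrak{g}}=v_{1}\cdots v_{n}\in\Lambda^{top}\mathfrak{g}_{\overline{1}},$
which is visibly the unique line (up to scalar) killed by left multiplication by every element of $\mathfrak{g}$, in accordance with the characterization in 6.7.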
For type I algebras, we have the following:
###### Proposition 6.11.
Suppose that $\mathfrak{g}$ is a quasireductive Lie superalgebra with a
$\mathbb{Z}$-grading
$\mathfrak{g}=\mathfrak{g}_{-1}\oplus\mathfrak{g}_{0}\oplus\mathfrak{g}_{1}$
such that $[\mathfrak{g}_{i},\mathfrak{g}_{j}]\subseteq\mathfrak{g}_{i+j}$,
$\mathfrak{g}_{\overline{0}}=\mathfrak{g}_{0}$ and
$\mathfrak{g}_{\overline{1}}=\mathfrak{g}_{-1}\oplus\mathfrak{g}_{1}$. Suppose
further that for an odd weight $\alpha$, of a Cartan subalgebra of
$\mathfrak{g}_{\overline{0}}$,
$[\mathfrak{g}_{\alpha},\mathfrak{g}_{-\alpha}]$ acts trivially on
$\Lambda^{top}\mathfrak{g}_{1}$. Let $v_{1},\dots,v_{n}$ be any basis of
$\mathfrak{g}_{-1}$ and $w_{1},\dots,w_{m}$ any basis of $\mathfrak{g}_{1}$.
Then
$v_{\mathfrak{g}}=(v_{1}\cdots v_{n})\cdot(w_{1}\cdots w_{m}).$
###### Proof.
Clearly $\mathfrak{g}_{\overline{0}}$ acts on this element as on the top
exterior power of $\mathfrak{g}_{\overline{1}}$, and $\mathfrak{g}_{-1}$
annihilates it.
Therefore it remains to show that $\mathfrak{g}_{1}$ also annihilates it. Let
$w\in\mathfrak{g}_{1}$ be a weight vector, of weight $\alpha$ (note that
$\alpha$ could be 0). The above expression is seen to be independent of the
choice of bases up to a nonzero scalar, so let us assume that
$v_{1},\dots,v_{n}$ are weight vectors and that $v_{1},\dots,v_{i}$ are a
basis of $\mathfrak{g}_{-\alpha}$ (and if $\mathfrak{g}_{-\alpha}=0$ then this
condition is vacuous). We see that (working up to
$\mathcal{U}\mathfrak{g}\mathfrak{g}_{\overline{0}}$)
$\displaystyle w(v_{1}\cdots v_{n})\cdot(w_{1}\cdots w_{m})$ $\displaystyle=$
$\displaystyle\sum\limits_{j}(-1)^{j-1}v_{1}\cdots
v_{j-1}[w,v_{j}]v_{j+1}\cdots v_{n}w_{1}\cdots w_{m}$ $\displaystyle=$
$\displaystyle\sum\limits_{j\leq i}(-1)^{j-1}v_{1}\cdots
v_{j-1}[[w,v_{j}],v_{j+1}\cdots v_{n}w_{1}\cdots w_{m}]$ $\displaystyle+$
$\displaystyle\sum\limits_{j>i,k>j}(-1)^{j-1}v_{1}\cdots v_{j-1}v_{j+1}\cdots
v_{k-1}[[w,v_{j}],v_{k}]v_{k+1}\cdots v_{n}w_{1}\cdots w_{m}$ $\displaystyle+$
$\displaystyle\sum\limits_{j>i}(-1)^{j-1}v_{1}\cdots v_{j-1}v_{j+1}\cdots
v_{n}[[w,v_{j}],w_{1}\cdots w_{m}].$
In the first sum, we have the action of
$[w,v_{j}]\in[\mathfrak{g}_{\alpha},\mathfrak{g}_{-\alpha}]$ on the vector
$v_{j+1}\cdots v_{n}w_{1}\cdots w_{m}$, which has weight
$j\alpha+\sum\limits_{\beta\in\Delta}\beta.$
By assumption,
$\sum\limits_{\beta\in\Delta}\beta([\mathfrak{g}_{\alpha},\mathfrak{g}_{-\alpha}])=0$,
and by the Jacobi identity we have
$\alpha([\mathfrak{g}_{\alpha},\mathfrak{g}_{-\alpha}])=0$ as well. Therefore
this first term is zero.
For the second sum, the weight of $[[w,v_{j}],v_{k}]$ is
$\alpha+\alpha_{j}+\alpha_{k}$, where $\alpha_{j}$, resp. $\alpha_{k}$ is the
weight of $v_{j}$, resp. $v_{k}$. Since $\alpha\neq-\alpha_{j},-\alpha_{k}$,
we have that $\alpha+\alpha_{j}+\alpha_{k}\neq\alpha_{j},\alpha_{k}$, and
therefore $[[w,v_{j}],v_{k}]$ is either zero or a root vector in
$\mathfrak{g}_{-1}$ which we may assume already appears in the product
$v_{1}\cdots v_{j-1}v_{j+1}\cdots v_{k-1}v_{k+1}\cdots v_{n}$, giving zero.
For the final sum, we know that $[w,v_{j}]$ is a nilpotent element of
$\mathfrak{g}_{\overline{0}}$ and thus acts trivially on
$\Lambda^{top}\mathfrak{g}_{1}$, so we once again get zero. ∎
###### Corollary 6.12.
For $\mathfrak{g}\mathfrak{l}(m|n)$,
$(\mathfrak{p})\mathfrak{s}\mathfrak{l}(m|n)$,
$\mathfrak{o}\mathfrak{s}\mathfrak{p}(2|2n)$, and
$(\mathfrak{s})\mathfrak{p}(n)$ the formula for $v_{\mathfrak{g}}$ is given by
6.11.
###### Proof.
It is straightforward to check the conditions of 6.11 for these superalgebras.
∎
###### Proposition 6.13.
Let $\pm\delta_{1},\dots,\pm\delta_{n}$ denote the odd roots of
$\mathfrak{g}=\mathfrak{o}\mathfrak{s}\mathfrak{p}(1|2n)$ for the standard
presentation of $\mathfrak{o}\mathfrak{s}\mathfrak{p}(1|2n)$ as given for
example in [Mus12]. Choose elements $u_{1},\dots,u_{n}$ of weight
$\delta_{1},\dots,\delta_{n}$ and $v_{1},\dots,v_{n}$ of weight
$-\delta_{1},\dots,-\delta_{n}$ such that if $h_{i}=[u_{i},v_{i}]$, then
$\delta_{i}(h_{i})=1$. Write $t_{i}=u_{i}v_{i}$. Then we have
$v_{\mathfrak{g}}=(1+t_{1})(3+t_{2})\cdots((2n-1)+t_{n}).$
###### Proof.
This is in fact proven in section 4 of [DH+76]. There they prove that the
above element is equal, mod
$\mathcal{U}\mathfrak{g}\mathfrak{g}_{\overline{0}}$, to
$(1+t_{\sigma(1)})(3+t_{\sigma(2)})\cdots((2n-1)+t_{\sigma(n)})$
for any permutation $\sigma$. Now
$u_{i}(1+t_{i})=-v_{i}u_{i}^{2},\ \ \ \ \
v_{i}(1+t_{i})=-u_{i}v_{i}^{2}+v_{i}h_{i}.$
Since $u_{i}^{2},v_{i}^{2},$ and $h_{i}$ commute with $t_{j}$ for $j\neq i$,
we may move them all the way to the right and obtain elements of
$\mathcal{U}\mathfrak{g}\mathfrak{g}_{\overline{0}}$. ∎
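As a sanity check (our own observation, assuming $k$ has characteristic zero): the counit $\varepsilon$ is an algebra homomorphism vanishing on $\mathfrak{g}$, so $\varepsilon(t_{i})=\varepsilon(u_{i})\varepsilon(v_{i})=0$, and applying $\varepsilon$ to the formula above yields

```latex
% Counit applied to v_g for osp(1|2n); each t_i = u_i v_i is killed by ε:
\varepsilon(v_{\mathfrak{g}})
  = \varepsilon\bigl((1+t_{1})(3+t_{2})\cdots((2n-1)+t_{n})\bigr)
  = 1\cdot 3\cdots(2n-1)
  = (2n-1)!!\neq 0.
```

This is consistent with criterion (3) of 6.14 below, and with the classical fact that $\operatorname{Rep}(OSp(1|2n))$ is semisimple.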
It would be interesting to obtain formulas for $v_{\mathfrak{g}}$ when
$\mathfrak{g}=\mathfrak{o}\mathfrak{s}\mathfrak{p}(m|2n)$ with $m>2$,
$\mathfrak{g}=\mathfrak{q}(n)$, or when $\mathfrak{g}$ is exceptional simple.
Explicit formulas are important in computing the image of ghost distributions
under the Harish-Chandra homomorphism, which we discuss later.
### 6.5. Semisimplicity criteria
We give a brief application of the ideas above.
###### Theorem 6.14.
Let $G$ be a quasireductive supergroup. Then the following are equivalent:
1. (1)
The category $\operatorname{Rep}(G)$ of representations of $G$ is semisimple;
2. (2)
$\operatorname{Ber}(\mathfrak{g}_{\overline{1}})$ is trivial and
$D_{\mathfrak{g}}$ is not nilpotent;
3. (3)
$\operatorname{Ber}(\mathfrak{g}_{\overline{1}})$ is trivial and
$\varepsilon(v_{\mathfrak{g}})\neq 0$, where $\varepsilon$ is the counit on
$\mathcal{U}\mathfrak{g}$ (and is well-defined on
$\mathcal{U}\mathfrak{g}/\mathcal{U}\mathfrak{g}\mathfrak{g}_{\overline{0}}$).
Note that the condition that $G$ be quasireductive is necessary in order for
$\operatorname{Rep}(G)$ to be semisimple.
###### Proof.
The equivalence $(3)\iff(2)$ is clear, so we show $(2)\iff(1)$. Since $G$ is
quasireductive,
$k[G/G_{0}]=\operatorname{Coind}_{\mathfrak{g}_{\overline{0}}}^{\mathfrak{g}}k$
is projective. Since $\operatorname{Rep}(G)$ is semisimple if and only if the
trivial module $k$ is projective, it is equivalent to show that $k$ splits off
of $k[G/G_{0}]$. For this it is equivalent that $D_{\mathfrak{g}}(1)\neq 0$,
which gives $(2)\iff(1)$. ∎
It is well-known, going back to a result of Djokovic and Hochschild in
[DH+76], that if $G$ is a connected algebraic supergroup such that
$\operatorname{Rep}(G)$ is semisimple, then $G\cong K\times
SOSp(1|2n_{1})\times\dots\times SOSp(1|2n_{k})$, where $K$ is a reductive
algebraic group. Using 6.14 and 6.13 one can obtain a simple proof of this statement,
and this has been carried out in [She20b].
## 7\. General symmetric space $G/K$
We come back to the general case of symmetric supervarieties $G/K$. For the
rest of the article we assume that $G$ is quasireductive, so that the
connected component of the identity of $K$ is quasireductive, and in
particular $K_{0}$ has semisimple representation theory.
### 7.1. Ghost distributions $\mathcal{A}_{G/K}$
We know by 5.7 that
$\operatorname{Dist}(G/K,eK)\cong\operatorname{Ind}_{\mathfrak{k}_{0}}^{\mathfrak{k}^{\prime}}\operatorname{Dist}(G_{0}/K_{0},eK_{0})$
as $K^{\prime}$-modules. Since $K_{0}$ has semisimple representation theory,
$\operatorname{Dist}(G_{0}/K_{0},eK_{0})$ is a sum of finite-dimensional
$K_{0}$-modules. Hence by 6.1 we have that
$\operatorname{Ind}_{\mathfrak{k}_{0}}^{\mathfrak{k}^{\prime}}\operatorname{Dist}(G_{0}/K_{0},eK_{0})\cong\operatorname{Coind}_{\mathfrak{k}_{0}}^{\mathfrak{k}^{\prime}}(\operatorname{Dist}(G_{0}/K_{0},eK_{0})\otimes\operatorname{Ber}(\mathfrak{k}^{\prime}))$
where we have used that $\mathcal{U}\mathfrak{k}^{\prime}$ is a finite rank,
free left $\mathcal{U}\mathfrak{k}_{0}$-module. Now we have
$\displaystyle\operatorname{Dist}(G/K,eK)^{\operatorname{ber}_{\mathfrak{k}^{\prime}}}$
$\displaystyle\cong$
$\displaystyle(\operatorname{Coind}_{\mathfrak{k}_{0}}^{\mathfrak{k}^{\prime}}(\operatorname{Dist}(G_{0}/K_{0},eK_{0})\otimes\operatorname{Ber}(\mathfrak{k}^{\prime}))^{\operatorname{ber}_{\mathfrak{k}^{\prime}}}$
$\displaystyle\cong$
$\displaystyle(\operatorname{Dist}(G_{0}/K_{0},eK_{0})\otimes\operatorname{Ber}(\mathfrak{k}^{\prime}))^{\operatorname{ber}_{\mathfrak{k}^{\prime}}}$
$\displaystyle\cong$
$\displaystyle\operatorname{Dist}(G_{0}/K_{0},eK_{0})^{K_{0}}$
Write $v_{\mathfrak{k}^{\prime}}$ for an element of
$\mathcal{U}\mathfrak{k}^{\prime}$ which projects to a nonzero element of
$(\mathcal{U}\mathfrak{k}^{\prime}/\mathcal{U}\mathfrak{k}^{\prime}\mathfrak{k}_{\overline{0}})^{\operatorname{ber}_{\mathfrak{k}^{\prime}}}$.
Then by 6.8 we have the following:
###### Proposition 7.1.
The isomorphism
$\eta:\operatorname{Dist}(G_{0}/K_{0},eK_{0})^{K_{0}}\to\operatorname{Dist}(G/K,eK)^{\operatorname{ber}_{\mathfrak{k}^{\prime}}}$
is given by
$z\mapsto v_{\mathfrak{k}^{\prime}}\cdot z.$
###### Remark 7.2.
Recall we have an identification
$\operatorname{Dist}(G_{0}/K_{0},eK_{0})^{K_{0}}\cong D^{G_{0}}(G_{0}/K_{0})$,
and this algebra is identified with $S(\mathfrak{a})^{W_{\mathfrak{a}}}$,
where $\mathfrak{a}\subseteq\mathfrak{p}_{\overline{0}}$ is a Cartan subspace
and $W_{\mathfrak{a}}$ is the little Weyl group associated to the symmetric
space.
Stated in terms of enveloping algebras, we have shown:
###### Corollary 7.3.
We have an isomorphism
$(\mathcal{U}\mathfrak{g}_{\overline{0}}/\mathcal{U}\mathfrak{g}_{\overline{0}}\mathfrak{k}_{\overline{0}})^{K_{0}}\to(\mathcal{U}\mathfrak{g}/\mathcal{U}\mathfrak{g}\mathfrak{k})^{\operatorname{ber}_{\mathfrak{k}^{\prime}}}$
given by $\phi(z)=v_{\mathfrak{k}^{\prime}}z$.
###### Definition 7.4.
We define $\mathcal{A}_{G/K}$ to be
$\operatorname{Dist}(G/K,eK)^{\operatorname{ber}_{\mathfrak{k}^{\prime}}}$,
and refer to elements of $\mathcal{A}_{G/K}$ as ghost distributions on $G/K$.
We will often use the letter $\gamma$ to denote such a distribution.
###### Remark 7.5 (Caution).
In [Gor00], $\mathcal{A}$ is used to denote the $G^{\prime}$ invariants in
$\operatorname{Dist}(G\times G/G,eG)$ as we will see later on. However in our
notation, $\mathcal{A}_{G\times G/G}$ denotes the
$\operatorname{ber}_{\mathfrak{g}^{\prime}}$ semi-invariants of $G^{\prime}$
acting on $\operatorname{Dist}(G\times G/G,eG)$. Thus these will agree only
when $\operatorname{Ber}(\mathfrak{g})$ is the trivial module.
In section 10, we will introduce another object, $\mathcal{A}_{\phi}$, which
is a subspace of $\mathcal{U}\mathfrak{g}$ that is invariant under a certain
twisted adjoint action depending on an automorphism $\phi$. For this notation,
we have that $\mathcal{A}=\mathcal{A}_{\delta}$.
### 7.2. Module structure of $\mathcal{A}_{G/K}$
Write $\mathcal{Z}_{G/K}$ for
$\operatorname{Dist}(G/K,eK)^{K}=(\mathcal{U}\mathfrak{g}/\mathcal{U}\mathfrak{g}\mathfrak{k})^{K}$.
This is identified with the algebra of invariant differential operators on
$G/K$, as explained in 3.5.
###### Proposition 7.6.
We have a natural map
$\mathcal{A}_{G/K}\otimes\mathcal{Z}_{G/K}=(\mathcal{U}\mathfrak{g}/\mathcal{U}\mathfrak{g}\mathfrak{k})^{\operatorname{ber}_{\mathfrak{k}^{\prime}}}\otimes(\mathcal{U}\mathfrak{g}/\mathcal{U}\mathfrak{g}\mathfrak{k})^{K}\to(\mathcal{U}\mathfrak{g}/\mathcal{U}\mathfrak{g}\mathfrak{k})^{\operatorname{ber}_{\mathfrak{k}^{\prime}}}=\mathcal{A}_{G/K}$
making $\mathcal{A}_{G/K}$ into a right module over $\mathcal{Z}_{G/K}$.
###### Proof.
We define
$(\gamma+\mathcal{U}\mathfrak{g}\mathfrak{k})(z+\mathcal{U}\mathfrak{g}\mathfrak{k}):=\gamma
z+\mathcal{U}\mathfrak{g}\mathfrak{k}$
Since $z$ is $K$-invariant, it is easy to check this is well-defined. ∎
###### Lemma 7.7.
Suppose that $\operatorname{Ber}(\mathfrak{k}_{\overline{1}})$ and
$\operatorname{Ber}(\mathfrak{p}_{\overline{1}})$ are trivial $K_{0}$-modules.
Then we have a natural map
$\mathcal{A}_{G/K}\otimes\mathcal{A}_{G/K^{\prime}}\to\mathcal{Z}_{G/K}$, or
$(\mathcal{U}\mathfrak{g}/\mathcal{U}\mathfrak{g}\mathfrak{k})^{K^{\prime}}\otimes(\mathcal{U}\mathfrak{g}/\mathcal{U}\mathfrak{g}\mathfrak{k}^{\prime})^{K}\to(\mathcal{U}\mathfrak{g}/\mathfrak{k}\mathcal{U}\mathfrak{g})^{K},$
given by
$(\gamma+\mathcal{U}\mathfrak{g}\mathfrak{k})\otimes(\gamma^{\prime}+\mathcal{U}\mathfrak{g}\mathfrak{k}^{\prime})\mapsto\gamma\gamma^{\prime}+\mathfrak{k}\mathcal{U}\mathfrak{g}.$
###### Proof.
The proof is straightforward. ∎
### 7.3. $k[G/K]$ as a $K^{\prime}$-module
Observe that via pullback on functions and the isomorphism of distributions we
have a commutative triangle
$\operatorname{Dist}(G/K,eK)^{\operatorname{ber}_{\mathfrak{k}^{\prime}}}\otimes
k[G/K]\to\operatorname{Dist}((G_{0}/K_{0})^{\mathfrak{k}^{\prime}},eK_{0})^{\operatorname{ber}_{\mathfrak{k}^{\prime}}}\otimes
k[(G_{0}/K_{0})^{\mathfrak{k}^{\prime}}]\to k,$
in which both composites to $k$ are given by pairing distributions with
functions.
Now let $\gamma$ be an element of
$\operatorname{Dist}((G_{0}/K_{0})^{\mathfrak{k}^{\prime}},eK_{0})^{\operatorname{ber}_{\mathfrak{k}^{\prime}}}$,
and suppose that $\gamma(f)\neq 0$ for $f\in
k[(G_{0}/K_{0})^{\mathfrak{k}^{\prime}}]$. Then necessarily the
$K^{\prime}$-module generated by $f$ contains a copy of $I_{K^{\prime}}(k)$,
the injective hull of $k$ for $K^{\prime}$. It follows that the same must be
true for the space $G/K$, and so we have:
###### Proposition 7.8.
Let $\gamma\in\mathcal{A}_{G/K}$, and suppose that $f\in k[G/K]$ is such that
$\gamma(f)\neq 0$. Then the $K^{\prime}$-module generated by $f$ contains a
copy of $I_{K^{\prime}}(k)$. In particular, if
$\mathfrak{k}_{\overline{0}}\neq\mathfrak{g}_{\overline{0}}$ then $k[G/K]$
contains $I_{K^{\prime}}(k)$ with infinite multiplicity.
###### Proof.
For the last statement, we observe that since $G/K$ is affine the pairing of
distributions with functions is nondegenerate. Using the fact that the
$K^{\prime}$-semi-invariant distributions form an infinite-dimensional vector
space (being isomorphic, as a vector space, to
$S(\mathfrak{a})^{W_{\mathfrak{a}}}$), it is not hard to prove that the
multiplicity must be infinite. ∎
###### Remark 7.9.
We observe that since $k[G_{0}/K_{0}]^{K_{0}}$ is a subalgebra of
$k[G_{0}/K_{0}]$,
$A:=\operatorname{Coind}_{\mathfrak{k}_{\overline{0}}}^{\mathfrak{k}^{\prime}}k[G_{0}/K_{0}]^{K_{0}}$
is a subalgebra of $k[(G_{0}/K_{0})^{\mathfrak{k}^{\prime}}]$. In particular
$A$ is the sum of all copies of $I_{K^{\prime}}(k)$ appearing in
$k[(G_{0}/K_{0})^{\mathfrak{k}^{\prime}}]$. It follows that $k[G/K]\cap A$ is
a subalgebra of $k[G/K]$ which contains all copies of $I_{K^{\prime}}(k)$ in
$k[G/K]$, as well as all of $k[G/K]^{K^{\prime}}$. The author has a rather
limited understanding of $A\cap k[G/K]$.
Further, we do not have a good answer to, or even a good formulation of, the
question of ‘how many’ copies of $I_{K^{\prime}}(k)$ are within $k[G/K]$. The copies of
$I_{K^{\prime}}(k)$ in $k[(G_{0}/K_{0})^{\mathfrak{k}^{\prime}}]$ may be
indexed by a basis of $k[G_{0}/K_{0}]^{K_{0}}$, which itself is indexed by the
irreducible summands of $k[G_{0}/K_{0}]$, i.e. by certain dominant weights in
$\mathfrak{a}$. For each copy of $I_{K^{\prime}}(k)$ in $k[G/K]$, one could
record which dominant weights of $\mathfrak{a}$ it is supported on in
$k[(G_{0}/K_{0})^{\mathfrak{k}^{\prime}}]$. One could then look at all weights
that appear in the supports of such copies of $I_{K^{\prime}}(k)$ in $k[G/K]$.
This collection of weights would be infinite and could not, for example, lie
within any hyperplane.
We also observe that since each copy of $I_{K^{\prime}}(k)$ contains a
$K^{\prime}$-invariant function, we can deduce there are many
$K^{\prime}$-invariant functions in $k[G/K]$ (recall we already know from 5.6
that $k[G/K]^{K^{\prime}}$ is an integral domain). Again the structure of
$k[G/K]^{K^{\prime}}$ is not generally understood, although it has been
partially computed in certain examples, e.g. $G\times G/G$ and the
superspheres $OSP(m|2n)/OSP(m-1|2n)$.
###### Remark 7.10.
Suppose again that
$\mathfrak{k}_{\overline{0}}\neq\mathfrak{g}_{\overline{0}}$. Following a
similar argument to the one above, one may deduce the existence of many
projective $K^{\prime}$-submodules of $k[G/K]$ as follows: given an
irreducible $K^{\prime}$-submodule $L$ of $\operatorname{Dist}(G/K,eK)$, it
defines an irreducible submodule of
$\operatorname{Dist}((G_{0}/K_{0})^{\mathfrak{k}^{\prime}},eK_{0})$ via our
isomorphism. Thus $L$ must pair nontrivially with some projective
indecomposable summand $P=P_{K^{\prime}}(V)$ of
$k[(G_{0}/K_{0})^{\mathfrak{k}^{\prime}}]$. However this is only possible if
$V\cong L^{*}$, and $L$ pairs with $P$ via $L\otimes P\to L\otimes L^{*}\to
k$. It follows that $k[G/K]$ must contain a copy of $P_{K^{\prime}}(L^{*})$ as
well.
## 8\. Pairs that have an Iwasawa decomposition
### 8.1. The Iwasawa decomposition
We continue to assume $G$ is quasireductive. Due to technical difficulties
with Lie superalgebras like $\mathfrak{q}(n)$, which have Cartan subalgebras
that are not purely even, we will from now on also assume $G$ is Cartan-even.
###### Definition 8.1.
A quasireductive supergroup $G$ is Cartan-even if for a Cartan subalgebra
$\mathfrak{h}\subseteq\mathfrak{g}$ we have
$\mathfrak{h}=\mathfrak{h}_{\overline{0}}$ (i.e. $\mathfrak{h}_{\overline{0}}$
is self-centralizing in $\mathfrak{g}$).
Let $(\mathfrak{g},\mathfrak{k})$ be a supersymmetric pair, and let
$\mathfrak{a}\subseteq\mathfrak{p}_{\overline{0}}$ be a Cartan subspace, i.e.
a maximal subalgebra of $\mathfrak{p}_{\overline{0}}$ consisting only of
semisimple elements. Then we may decompose $\mathfrak{g}$ into weight spaces
according to the action of $\mathfrak{a}$ as
$\mathfrak{g}=\mathfrak{m}\oplus\bigoplus\limits_{0\neq\overline{\alpha}\in\mathfrak{a}^{*}}\mathfrak{g}_{\overline{\alpha}}$
where $\mathfrak{m}$ is the centralizer of
$\mathfrak{a}$ in $\mathfrak{g}$. We write $\overline{\Delta}$ for the set of
nonzero weights of this action, and call them restricted roots. Note that the
weights of the action are exactly the restriction to $\mathfrak{a}$ of the
roots under the action of a maximal torus in $\mathfrak{g}_{\overline{0}}$
which contains $\mathfrak{a}$. We say that the pair
$(\mathfrak{g},\mathfrak{k})$ admits an Iwasawa decomposition if there is some
choice of positive roots in $\overline{\Delta}$,
$\overline{\Delta}=\overline{\Delta}^{+}\sqcup\overline{\Delta}^{-}$, such
that
$\text{if }\
\mathfrak{n}=\bigoplus\limits_{\overline{\alpha}\in\overline{\Delta}^{+}}\mathfrak{g}_{\overline{\alpha}},\
\text{ then }\ \mathfrak{g}=\mathfrak{k}\oplus\mathfrak{a}\oplus\mathfrak{n}.$
In this case, we may extend the choice of positive restricted roots to a
choice of positive roots $\Delta=\Delta^{+}\sqcup\Delta^{-}$ for
$\mathfrak{g}$. In that case, if $\mathfrak{b}$ is the Borel subalgebra
determined by this choice of positive roots then we have
$\mathfrak{a}\oplus\mathfrak{n}\subseteq\mathfrak{b}$. In particular,
$\mathfrak{b}+\mathfrak{k}=\mathfrak{g}$, i.e. $\mathfrak{k}$ has a
complementary Borel subalgebra.
###### Definition 8.2.
We call a Borel subalgebra arising in this way an Iwasawa Borel subalgebra for
$\mathfrak{k}$ with respect to $\mathfrak{a}$. If $B\subseteq G$ integrates
$\mathfrak{b}$, then we call $B$ an Iwasawa Borel subgroup of $G$.
### 8.2. $G/K$ is spherical; the rational functions $f_{\lambda}$
Recall that a $G$-supervariety $X$ is spherical if $X$ admits an open
$B$-orbit for some Borel $B$. For $G/K$, this is equivalent to $\mathfrak{k}$
admitting a complementary Borel subalgebra. Hence if
$(\mathfrak{g},\mathfrak{k})$ admits an
Iwasawa decomposition, $G/K$ is spherical. Choose a Cartan subspace
$\mathfrak{a}$ and let $B$ be an Iwasawa Borel subgroup with respect to
$\mathfrak{a}$.
In this case, $G_{0}/K_{0}$ is spherical with respect to $B_{0}$. Write
$\Lambda_{0}^{+}\subseteq\mathfrak{a}^{*}$ for the dominant weights that
appear as $B_{0}$-highest weights in $k[G_{0}/K_{0}]$. Then the lattice
generated by $\Lambda_{0}^{+}$ is a full rank lattice in $\mathfrak{a}^{*}$
(see, for instance, [Tim11]).
By the general theory of spherical supervarieties (see [She19]), the
$B$-highest weights in $k[G/K]$, which we will call $\Lambda^{+}$, are a
subset of $\Lambda_{0}^{+}$, and there is at most one function $f_{\lambda}\in
k[G/K]$ of highest weight $\lambda$ for any $\lambda\in\Lambda_{0}^{+}$.
Further, $f_{\lambda}$ is even and non-nilpotent, hence it maps down to a non-
zero function of highest weight $\lambda$ on $G_{0}/K_{0}$. Further,
$\Lambda^{+}$ generates the same lattice as $\Lambda_{0}^{+}$ in
$\mathfrak{a}^{*}$, which we call $\Lambda$, and in particular is Zariski
dense. It follows that by inverting the functions $f_{\lambda}$ for
$\lambda\in\Lambda^{+}$ and taking arbitrary products, we obtain rational
functions $f_{\lambda}$ for all $\lambda\in\Lambda$ which are
$\mathfrak{a}\oplus\mathfrak{n}$-eigenfunctions and are regular in a
neighborhood of $eK$.
### 8.3. A different perspective via induced spaces
We may gain another perspective on the sphericity of $G/K$ using the notion of
induced spaces. Write $A$ and $N$ for the subgroups of $G$ integrating
$\mathfrak{a}$ and $\mathfrak{n}$, respectively. Then $AN$ is a subgroup of
$G$ which integrates $\mathfrak{a}\oplus\mathfrak{n}$. Since
$(\mathfrak{a}\oplus\mathfrak{n})_{\overline{1}}$ is complementary to
$\mathfrak{k}_{\overline{1}}$ by the Iwasawa decomposition, we may apply 5.1
to obtain the existence of a canonical map of $AN$-supervarieties
$(G_{0}/K_{0})^{\mathfrak{a}\oplus\mathfrak{n}}\to G/K$
which is an isomorphism in a neighborhood of $eK_{0}$. In particular the map
on functions
$k[G/K]\to k[(G_{0}/K_{0})^{\mathfrak{a}\oplus\mathfrak{n}}]$
is an injective map of $AN$-modules. It follows that we have an injective
morphism
$k[G/K]^{N}\to k[(G_{0}/K_{0})^{\mathfrak{a}\oplus\mathfrak{n}}]^{N},$
and the right hand side is isomorphic to the subalgebra of highest weight
vectors in $k[G_{0}/K_{0}]$. Further, in a neighborhood of $eK_{0}$ in
$(G_{0}/K_{0})^{\mathfrak{a}\oplus\mathfrak{n}}$, we have functions
$f_{\lambda}$ annihilated by $\mathfrak{n}$ and of weight
$\lambda\in\mathfrak{a}^{*}$ for every $\lambda\in\Lambda$. Using the local
isomorphism, we find that such functions also exist in a neighborhood of $eK$
in $G/K$, and in particular they are the highest weight functions of weight
$\lambda\in\Lambda$.
###### Definition 8.3.
In the setup above, for $\lambda\in\Lambda$ we let $f_{\lambda}$ denote the
rational $\mathfrak{a}\oplus\mathfrak{n}$-eigenfunction on $G/K$ such that
$f_{\lambda}(eK)=1$.
Note that we can always normalize as above, because the
$\mathfrak{a}\oplus\mathfrak{n}$-eigenfunctions are all non-nilpotent, and if
one vanished at $eK$ then necessarily it would vanish everywhere.
### 8.4. Highest weight submodules of $k[G/K]$
Recall that for a homogeneous vector $v$ we write $\langle K^{\prime}\cdot
v\rangle$ for the $K^{\prime}$-module generated by $v$.
###### Lemma 8.4.
If $\lambda\in\Lambda^{+}$, then $\mathcal{U}\mathfrak{k}^{\prime}f_{\lambda}$
is stable under $K^{\prime}$, and thus
$\langle K^{\prime}\cdot
f_{\lambda}\rangle=\mathcal{U}\mathfrak{k}^{\prime}f_{\lambda}.$
###### Proof.
It suffices to prove that $\mathcal{U}\mathfrak{k}^{\prime}f_{\lambda}$ is
stable under $K_{0}$. For this we notice that by the Iwasawa decomposition for
$\mathfrak{g}_{\overline{0}}$ and the fact that $G_{0}$ is connected,
$\langle G_{0}\cdot
f_{\lambda}\rangle=\mathcal{U}\mathfrak{g}_{\overline{0}}f_{\lambda}=\mathcal{U}\mathfrak{k}_{\overline{0}}f_{\lambda}\subseteq\langle
K_{0}\cdot f_{\lambda}\rangle.$
Since $\langle K_{0}\cdot f_{\lambda}\rangle\subseteq\langle G_{0}\cdot
f_{\lambda}\rangle$ it follows that $\langle K_{0}\cdot
f_{\lambda}\rangle=\mathcal{U}\mathfrak{k}_{\overline{0}}f_{\lambda}$. From
here it is easy to check that $\mathcal{U}\mathfrak{k}^{\prime}f_{\lambda}$ is
$K_{0}$-stable. ∎
###### Lemma 8.5.
For $\lambda\in\Lambda^{+}$, $\langle K^{\prime}\cdot f_{\lambda}\rangle$
contains at most one summand isomorphic to $I_{K^{\prime}}(k)$.
###### Proof.
By the classical Iwasawa decomposition, we have $\langle K_{0}\cdot
f_{\lambda}\rangle=\mathcal{U}\mathfrak{g}_{\overline{0}}f_{\lambda}\cong
L_{0}(\lambda)$, since $f_{\lambda}$ is a $B_{0}$-highest weight vector. Hence
we have a surjection of $K^{\prime}$-modules
$\operatorname{Ind}_{\mathfrak{k}_{0}}^{\mathfrak{k}^{\prime}}L_{0}(\lambda)\to\langle
K^{\prime}\cdot f_{\lambda}\rangle.$
Each copy of $I_{K^{\prime}}(k)$ appearing in $\langle K^{\prime}\cdot
f_{\lambda}\rangle$ splits off as a direct summand, and hence lifts to a copy
of $I_{K^{\prime}}(k)$ inside
$\operatorname{Ind}_{\mathfrak{k}_{0}}^{\mathfrak{k}^{\prime}}L_{0}(\lambda)$.
However, since $K_{0}$ is a spherical subgroup of $G_{0}$,
$L_{0}(\lambda)^{K_{0}}$ is one-dimensional, so the induced module has only
one copy of $I_{K^{\prime}}(k)$. ∎
###### Remark 8.6.
If $(\mathfrak{g},\mathfrak{k}^{\prime})$ also admits an Iwasawa
decomposition then we have $\langle K^{\prime}\cdot f_{\lambda}\rangle=\langle
G\cdot f_{\lambda}\rangle$. Also, it will be shown in the subsequent article
that if $\mathfrak{g}$ is basic classical and the involution $\theta$
preserves the nondegenerate form on $\mathfrak{g}$, then we always have
$\langle K^{\prime}\cdot f_{\lambda}\rangle=\langle G\cdot
f_{\lambda}\rangle$.
### 8.5. Harish-Chandra Homomorphism
The Iwasawa decomposition
$\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{a}\oplus\mathfrak{n}$ implies that
$\mathcal{U}\mathfrak{g}=S(\mathfrak{a})\oplus(\mathfrak{n}\mathcal{U}\mathfrak{g}+\mathcal{U}\mathfrak{g}\mathfrak{k})$.
Thus we have a vector space isomorphism
$\operatorname{Dist}(G/K,eK)\cong\mathcal{U}\mathfrak{g}/\mathcal{U}\mathfrak{g}\mathfrak{k}\cong
S(\mathfrak{a})\oplus\mathfrak{n}\mathcal{U}\mathfrak{g}/(\mathfrak{n}\mathcal{U}\mathfrak{g}\cap\mathcal{U}\mathfrak{g}\mathfrak{k}).$
###### Definition 8.7.
The Harish-Chandra morphism $HC:\operatorname{Dist}(G/K,eK)\to
S(\mathfrak{a})$ is given by the projection onto $S(\mathfrak{a})$ along
$\mathfrak{n}\mathcal{U}\mathfrak{g}/(\mathfrak{n}\mathcal{U}\mathfrak{g}\cap\mathcal{U}\mathfrak{g}\mathfrak{k})$.
Choose a homogeneous basis $p_{1},\dots,p_{r}$ of $\mathfrak{p}$. Recall from
lemma 2.9 that $\operatorname{Dist}(G/K,eK)$ is spanned by monomials
$p_{1}^{k_{1}}\cdots p_{r}^{k_{r}},$
where we abusively identify them with their restrictions to
$\operatorname{Dist}(G/K,eK)$. Define a filtration $F^{\bullet}$ on
$\operatorname{Dist}(G/K,eK)$ by setting
$F^{0}=\langle\operatorname{ev}_{eK}\rangle$, and let $F^{i}$ be the span
of all monomials as above of total degree at most $i$, where each even $p_{j}$
contributes $2$ to the degree and each odd $p_{j}$ contributes $1$. The
following is a generalization of 4.2.2 in
[Gor00].
###### Lemma 8.8.
$HC(F^{r})\subseteq\sum\limits_{j\leq r/2}S^{j}\mathfrak{a}$.
###### Proof.
Since the Harish-Chandra projection is linear, it suffices to prove this on
monomials. We induct on the length of the monomial. If the monomial has
length zero, the result is clear. Suppose we have a monomial $b_{1}\cdots b_{t}\in
F^{r}$. Using the Iwasawa decomposition, we may write $b_{1}=k+a+n$ where
$k\in\mathfrak{k}$, $a\in\mathfrak{a}$, and $n\in\mathfrak{n}$. Since
$nb_{2}\cdots b_{t}\in\mathfrak{n}\mathcal{U}\mathfrak{g}$ it vanishes under
the Harish-Chandra projection, so we have
$\displaystyle HC(b_{1}\dots b_{t})$ $\displaystyle=$ $\displaystyle
HC(ab_{2}\cdots b_{t})+HC(kb_{2}\cdots b_{t})$ $\displaystyle=$ $\displaystyle
HC(ab_{2}\cdots b_{t})+\sum\limits_{i}HC(b_{2}\cdots[k,b_{i}]\cdots
b_{t})+HC(b_{2}\cdots b_{t}k).$
Since $b_{2}\cdots b_{t}k\in\mathcal{U}\mathfrak{g}\mathfrak{k}$, the last
term vanishes. Now note that $a\neq 0$ only if $b_{1}$ is even, and in that
case $b_{2}\cdots b_{t}\in F^{r-2}$, and $HC(ab_{2}\cdots
b_{t})=HC(a)HC(b_{2}\cdots b_{t})$. Thus by induction,
$\deg HC(ab_{2}\cdots b_{t})\leq\deg HC(b_{2}\cdots
b_{t})+1\leq\frac{r-2}{2}+1=\frac{r}{2}.$
As for $HC(b_{2}\cdots[k,b_{i}]\cdots b_{t})$, if $b_{1}$ is even then the
parity of $[k,b_{i}]$ is the same as the parity of $b_{i}$, and thus
$b_{2}\cdots[k,b_{i}]\cdots b_{t}\in F^{r-2}$. Therefore
$\deg HC(b_{2}\cdots[k,b_{i}]\cdots b_{t})\leq(r-2)/2\leq r/2.$
If $b_{1}$ is odd, then the parity of $[k,b_{i}]$ is opposite the parity of
$b_{i}$, so $b_{2}\cdots[k,b_{i}]\cdots b_{t}\in F^{r}$, and by induction we
have
$\deg HC(b_{2}\cdots[k,b_{i}]\cdots b_{t})\leq r/2.$
∎
###### Corollary 8.9.
Let $z\in\operatorname{Dist}^{r}(G_{0}/K_{0},eK_{0})^{K_{0}}$ lie in the $r$th
part of the standard filtration on $\operatorname{Dist}(G_{0}/K_{0},eK_{0})$
defined in definition 2.2. Then $v_{\mathfrak{k}^{\prime}}\cdot
z\in\mathcal{A}_{G/K}$ has
$\deg HC(v_{\mathfrak{k}^{\prime}}\cdot z)\leq
r+\operatorname{dim}\mathfrak{p}_{\overline{1}}/2.$
###### Proof.
We have that $z\in F^{2r}$ and $v_{\mathfrak{k}^{\prime}}\in
F^{\operatorname{dim}\mathfrak{p}_{\overline{1}}}$, so
$v_{\mathfrak{k}^{\prime}}\cdot z=zv_{\mathfrak{k}^{\prime}}\in
F^{2r+\operatorname{dim}\mathfrak{p}_{\overline{1}}}$. ∎
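In particular (our own specialization, not stated explicitly in the source), taking $z=\operatorname{ev}_{eK_{0}}$, i.e. $r=0$, bounds the degree of the Harish-Chandra image of the minimal ghost distribution $\operatorname{ev}_{eK}v_{\mathfrak{k}^{\prime}}$, i.e. of the polynomial $p_{G/K,B}$ introduced in section 8.6:

```latex
% Corollary 8.9 with z = ev_{eK_0} (so r = 0):
\deg HC(\operatorname{ev}_{eK}v_{\mathfrak{k}^{\prime}})
  = \deg p_{G/K,B}
  \leq \operatorname{dim}\mathfrak{p}_{\overline{1}}/2.
```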
###### Lemma 8.10.
$HC(\mathcal{A}_{G/K})$ is naturally a module over $HC(\mathcal{Z}_{G/K})$
such that $HC:\mathcal{A}_{G/K}\to S(\mathfrak{a})$ induces a morphism of
$\mathcal{Z}_{G/K}$-modules.
###### Proof.
Let $\gamma\in\mathcal{A}_{G/K}$ and $z\in\mathcal{Z}_{G/K}$. Write
$z=n+HC(z)+\mathcal{U}\mathfrak{g}\mathfrak{k}$ and
$\gamma=n^{\prime}+HC(\gamma)+\mathcal{U}\mathfrak{g}\mathfrak{k}$. Then we
see that
$\gamma
z=(n^{\prime}+HC(\gamma)+\mathcal{U}\mathfrak{g}\mathfrak{k})(n+HC(z)+\mathcal{U}\mathfrak{g}\mathfrak{k})=n^{\prime}(n+HC(z))+HC(\gamma)n+HC(z)HC(\gamma)+\mathcal{U}\mathfrak{g}\mathfrak{k}.$
Clearly $n^{\prime}(n+HC(z))\in\mathfrak{n}\mathcal{U}\mathfrak{g}$. And since
$HC(\gamma)\in S(\mathfrak{a})$, it preserves
$\mathfrak{n}\mathcal{U}\mathfrak{g}$ under commutator and thus
$HC(\gamma)n\in\mathfrak{n}\mathcal{U}\mathfrak{g}$ as well. Hence we find
that
$HC(\gamma z)=HC(\gamma)HC(z),$
as desired. ∎
###### Lemma 8.11.
For $\gamma\in\operatorname{Dist}(G/K,eK)$ and $\lambda\in\Lambda$ we have
that $HC(\gamma)(\lambda)=\gamma(f_{\lambda})$.
###### Proof.
Since $\mathfrak{n}\mathcal{U}\mathfrak{g}$ annihilates $f_{\lambda}$, the
result follows from our normalization of $f_{\lambda}$. ∎
The following illustrates the importance of understanding
$HC(\mathcal{A}_{G/K})$, especially in light of remark 8.6.
###### Corollary 8.12.
Let $\lambda\in\Lambda^{+}$. Then $\langle K^{\prime}\cdot f_{\lambda}\rangle$
contains a copy of $I_{K^{\prime}}(k)$ if and only if there exists
$\gamma\in\mathcal{A}_{G/K}$ such that
$HC(\gamma)(\lambda)=\gamma(f_{\lambda})\neq 0$.
###### Proof.
These results follow from the work in section 7.3. ∎
For the next lemma, let $\mathfrak{h}\subseteq\mathfrak{g}$ be a
$\theta$-stable Cartan subalgebra which contains $\mathfrak{a}$, so that
$\mathfrak{h}=\mathfrak{t}\oplus\mathfrak{a}$, where $\mathfrak{t}$ are the
fixed points of $\theta$ in $\mathfrak{h}$. In particular,
$\mathfrak{t}\subseteq\mathfrak{k}_{\overline{0}}$.
###### Proposition 8.13.
$HC(\mathcal{A}_{G/K})=0$ if one of the following two conditions holds.
1. (1)
$\operatorname{dim}\mathfrak{p}_{\overline{1}}$ is odd;
2. (2)
The restriction of $\operatorname{ber}_{\mathfrak{k}^{\prime}}$ to
$\mathfrak{t}$ is non-zero. Note this holds if
$\operatorname{ber}_{\mathfrak{k}^{\prime}}\neq 0$ and the map
$\mathfrak{t}\to\mathfrak{k}^{\prime}/[\mathfrak{k}^{\prime},\mathfrak{k}^{\prime}]$
is surjective.
###### Proof.
If $\mathfrak{p}_{\overline{1}}$ is odd-dimensional then
$v_{\mathfrak{k}^{\prime}}$ is odd, and thus all elements of
$\mathcal{A}_{G/K}$ are odd. Since $S(\mathfrak{a})$ is purely even and $HC$
is an even map, we must have $HC(\mathcal{A}_{G/K})=0$.
Now suppose condition (2) holds. Let
$\gamma\in\mathcal{A}_{G/K}=\operatorname{Dist}(G/K,eK)^{\operatorname{ber}_{\mathfrak{k}^{\prime}}}$,
and let $\lambda\in\Lambda^{+}$. Then since $\lambda\in\mathfrak{a}^{*}$,
$\lambda(\mathfrak{t})=0$. If $t\in\mathfrak{t}$ we see that
$\operatorname{ber}_{\mathfrak{k}^{\prime}}(t)\gamma(f_{\lambda})=(t\cdot\gamma)(f_{\lambda})=\gamma
tf_{\lambda}=\lambda(t)\gamma(f_{\lambda})=0.$
However by assumption there exists $t\in\mathfrak{t}$ such that
$\operatorname{ber}_{\mathfrak{k}^{\prime}}(t)\neq 0$, so using this $t$
we find that $\gamma(f_{\lambda})=0$ for all $\lambda\in\Lambda^{+}$. Since
$\Lambda^{+}$ is Zariski dense in $\mathfrak{a}^{*}$, 8.11 gives
$HC(\gamma)=0$. ∎
### 8.6. The polynomial $p_{G/K,B}$
We now interpret the polynomial
$p_{G/K,B}:=HC(\operatorname{ev}_{eK}v_{\mathfrak{k}^{\prime}})$, noting that
it depends on our Iwasawa Borel subgroup $B$. Recall that
$\operatorname{ev}_{eK}v_{\mathfrak{k}^{\prime}}=v_{\mathfrak{k}^{\prime}}\cdot\operatorname{ev}_{eK}$
is a ghost distribution, and is the one of minimal degree. Observe that we
have a $K^{\prime}$-equivariant embedding
$i:K^{\prime}/K_{0}\to G/K.$
This induces a $K^{\prime}$-equivariant surjective morphism on functions
$i^{*}:k[G/K]\to k[K^{\prime}/K_{0}].$
Write $\operatorname{ev}_{eK}$ for the evaluation distribution at $eK$ on
$G/K$. Then on $K^{\prime}$ distributions we obtain a map which sends
$v_{\mathfrak{k}^{\prime}}\mapsto
v_{\mathfrak{k}^{\prime}}\cdot\operatorname{ev}_{eK}=\operatorname{ev}_{eK}v_{\mathfrak{k}^{\prime}}.$
Thus for $\lambda\in\Lambda^{+}$ we have
$p_{G/K,B}(\lambda)=(\operatorname{ev}_{eK}v_{\mathfrak{k}^{\prime}})f_{\lambda}=v_{\mathfrak{k}^{\prime}}i^{*}(f_{\lambda})=\operatorname{ev}_{eK_{0}}\circ
D_{\mathfrak{k}^{\prime}}\circ i^{*}(f_{\lambda}).$
It follows that:
###### Lemma 8.14.
For $\lambda\in\Lambda^{+}$, $p_{G/K,B}(\lambda)\neq 0$ if and only if
$\langle K^{\prime}\cdot i^{*}f_{\lambda}\rangle$ contains
$I_{K^{\prime}}(k)$.
###### Corollary 8.15.
If $\lambda\in\Lambda^{+}$ and $\langle K^{\prime}\cdot f_{\lambda}\rangle$
contains a copy of $I_{K^{\prime}}(k)$, then $p_{G/K,B}(\lambda)\neq 0$ if and
only if the $K^{\prime}$-invariant in the copy of $I_{K^{\prime}}(k)$ is non-
zero at $eK$.
We have now shown that:
###### Corollary 8.16.
We have $p_{G/K,B}(\lambda)\neq 0$ if and only if $\langle K^{\prime}\cdot
f_{\lambda}\rangle$ contains a copy of $I_{K^{\prime}}(k)$ and the
$K^{\prime}$-invariant of $I_{K^{\prime}}(k)$ does not vanish at $eK$.
The following gives one reason for the importance of the polynomial
$p_{G/K,B}$.
###### Proposition 8.17.
Suppose that whenever $\langle K^{\prime}\cdot f_{\lambda}\rangle$ contains a
copy of $I_{K^{\prime}}(k)$ for $\lambda\in\Lambda^{+}$, the
$K^{\prime}$-invariant in it does not vanish at $eK$. Then,
1. (1)
For $\lambda\in\Lambda^{+}$, $p_{G/K,B}(\lambda)\neq 0$ if and only if
$\langle K^{\prime}\cdot f_{\lambda}\rangle$ contains a copy of
$I_{K^{\prime}}(k)$.
2. (2)
For any $\lambda\in\Lambda^{+}$, if $p_{G/K,B}(\lambda)=0$, then
$HC(\gamma)(\lambda)=0$ for all $\gamma\in\mathcal{A}_{G/K}$.
###### Proof.
(1) follows from 8.16.
(2) follows from 8.12. ∎
### 8.7. Relationship between $G/K$ and $G/K^{\prime}$
We continue to suppose that $(\mathfrak{g},\mathfrak{k})$ satisfies the
Iwasawa decomposition.
###### Proposition 8.18.
For $\lambda\in\Lambda^{+}$, suppose that
1. (1)
$\langle K^{\prime}\cdot f_{\lambda}\rangle$ contains a copy of
$I_{K^{\prime}}(k)$ (equivalently $HC(\gamma)(\lambda)\neq 0$ for some
$\gamma\in\mathcal{A}_{G/K}$); and
2. (2)
$\langle G\cdot f_{\lambda}\rangle\cong L(\lambda)$ is an irreducible
$G$-module.
Then $I_{G}(L(\lambda))$ is a submodule of
$k[G]^{\operatorname{ber}_{\mathfrak{k}^{\prime}}}$.
Note that $k[G]^{\operatorname{ber}_{\mathfrak{k}^{\prime}}}$ is equal to the
induced bundle
$\operatorname{Ind}_{K^{\prime}}^{G}\operatorname{Ber}(\mathfrak{k}^{\prime})$,
and if $\operatorname{Ber}(\mathfrak{k}^{\prime})$ is trivial then
$k[G]^{\operatorname{ber}_{\mathfrak{k}^{\prime}}}=k[G/K^{\prime}]$.
###### Proof.
Since $I_{K^{\prime}}(k)\cong
P_{K^{\prime}}(\operatorname{Ber}(\mathfrak{k}^{\prime}))$,
$I_{K^{\prime}}(k)$ has a morphism
$\varphi:I_{K^{\prime}}(k)\to\operatorname{Ber}(\mathfrak{k}^{\prime})$
determined by the projection onto its head. Since $I_{K^{\prime}}(k)$ splits
off $L(\lambda)$ as a $K^{\prime}$-module, it also must split off
$I_{G}(L(\lambda))$ as a $K^{\prime}$-module. Therefore we may extend
$\varphi$ to a $K^{\prime}$-equivariant morphism
$\phi:I_{G}(L(\lambda))\to\operatorname{Ber}(\mathfrak{k}^{\prime})$. By
construction $\phi$ is non-zero on $L(\lambda)$, the socle of
$I_{G}(L(\lambda))$. Thus by Frobenius reciprocity, $\phi$ defines an
injective morphism of $G$-modules
$\Phi:I_{G}(L(\lambda))\to k[G]^{\operatorname{ber}_{\mathfrak{k}^{\prime}}}.$
∎
###### Remark 8.19.
The above proposition is especially useful when $G/K\cong G/K^{\prime}$, as it
helps to determine the structure of $k[G/K]$ as a $G$-module. We will see one
nice application of this in the next section.
## 9\. The Group Case $G\times G/G$
We now consider the case of the supersymmetric space $G\times G/G$, where $G$
is embedded diagonally. This space is isomorphic to $G$ with the $G\times G$
action given by left and right translation. Since $G$ is assumed to be
quasireductive, it must be connected in particular, so we can work with the
Lie superalgebra without losing much. The involution we take in this case,
$\theta$, is given on the Lie superalgebra $\mathfrak{g}\times\mathfrak{g}$ by
$\theta(x,y)=(y,x)$.
###### Lemma 9.1.
We have a $\mathfrak{g}\times\mathfrak{g}$-module isomorphism
$\mathcal{U}\mathfrak{g}\cong\operatorname{Dist}(G\times
G/G,eG)=\mathcal{U}(\mathfrak{g}\times\mathfrak{g})/\mathcal{U}(\mathfrak{g}\times\mathfrak{g})\mathfrak{g}$
given by
$u\mapsto u\otimes 1.$
###### Proof.
Let $(v_{1},v_{2})\in\mathfrak{g}\times\mathfrak{g}$ and
$u\in\mathcal{U}\mathfrak{g}$. We see that $(v_{1},v_{2})\cdot
u=v_{1}u-(-1)^{\overline{u}\overline{v_{2}}}uv_{2}$, and this will map to
(before modding out by
$\mathcal{U}(\mathfrak{g}\times\mathfrak{g})\mathfrak{g}$)
$\displaystyle v_{1}u\otimes
1-(-1)^{\overline{u}\overline{v_{2}}}uv_{2}\otimes 1$ $\displaystyle=$
$\displaystyle v_{1}u\otimes 1+(-1)^{\overline{v_{2}}\overline{u}}u\otimes
v_{2}-(-1)^{\overline{v_{2}}\overline{u}}uv_{2}\otimes
1-(-1)^{\overline{v_{2}}\overline{u}}u\otimes v_{2}$ $\displaystyle=$
$\displaystyle(v_{1}\otimes 1+1\otimes v_{2})(u\otimes
1)-(-1)^{\overline{v_{2}}\overline{u}}(u\otimes 1)(v_{2}\otimes 1+1\otimes
v_{2})$
which shows the $\mathfrak{g}\times\mathfrak{g}$-equivariance of the map. It
is an isomorphism by the PBW theorem. ∎
From now on we work with $\mathcal{U}\mathfrak{g}$ as our space of
distributions. Observe that in this case,
$\mathfrak{g}^{\prime}=\\{(u,\delta(u)):u\in\mathfrak{g}\\}\cong\mathfrak{g},$
and so we can identify $\mathfrak{g}^{\prime}$ with $\mathfrak{g}$ as a Lie
superalgebra. With this setup, the action of $\mathfrak{g}^{\prime}$ on
$\operatorname{Dist}(G\times G/G,eG)$ is given by, for $u\in\mathfrak{g}$ and
$v\in\mathcal{U}\mathfrak{g}$,
$u\cdot v=uv-(-1)^{\overline{u}\overline{v}+\overline{u}}vu.$
This is exactly Gorelik’s twisted adjoint action defined in [Gor00]. There,
she proved algebraically that $\mathcal{U}\mathfrak{g}$ is an induced module
from $\mathfrak{g}_{\overline{0}}$ under this action. However this follows
from our geometric perspective via 5.7. Gorelik defined
$\mathcal{A}\subseteq\mathcal{U}\mathfrak{g}$, the anticentre of
$\mathcal{U}\mathfrak{g}$, to be the $\mathfrak{g}^{\prime}$-invariant
distributions on $G$. Note, however, remark 7.5 regarding notation.
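To make the identification with Gorelik's twist explicit, one can check the sign directly; here $\delta(u)=(-1)^{\overline{u}}u$ is the parity automorphism (a one-line verification, included for convenience):

```latex
% Verification that the g'-action above is the delta-twisted adjoint
% action, where \delta(u) = (-1)^{\bar u} u is the parity automorphism:
u\cdot v
  = uv-(-1)^{\overline{u}\,\overline{v}+\overline{u}}\,vu
  = uv-(-1)^{\overline{u}\,\overline{v}}\,v\,\bigl((-1)^{\overline{u}}u\bigr)
  = uv-(-1)^{\overline{u}\,\overline{v}}\,v\,\delta(u).
```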
### 9.1. Structure of $k[G]$ as a $G\times G$-module
We assume that $G$ is Cartan-even. The following is proven in [She19].
###### Proposition 9.2.
1. (1)
As a $G\times G$-module, we may write
$k[G]=\bigoplus\limits_{\mathcal{B}}M_{\mathcal{B}},$
where each $M_{\mathcal{B}}$ is an indecomposable $G\times G$-module, and the
sum runs over all blocks of $\operatorname{Rep}(G)$ up to parity.
2. (2)
The socle of $k[G]$ is the submodule generated by all highest weight vectors
with respect to an Iwasawa Borel, and is equal to
$\bigoplus\limits_{L}L^{*}\boxtimes L$, where $L$ runs over all simple modules
of $G$ up to parity.
3. (3)
The socle of $M_{\mathcal{B}}$ is $\bigoplus\limits_{L}L^{*}\boxtimes L$,
where the sum runs over all simple $G$-modules $L$ in $\mathcal{B}$, up to
parity. In particular $M_{\mathcal{B}}$ is simple if and only if its socle is
simple.
4. (4)
For a simple $G$-module $L$, the submodule $L^{*}\boxtimes L$ is given by the
image of $\operatorname{End}(L)^{*}$ in $k[G]$ under the pullback morphism
$k[\operatorname{End}(L)]\to k[G]$
coming from the representation $G\to\operatorname{End}(L)$.
###### Proposition 9.3.
Let $L$ be a simple $G$-module. Then $\operatorname{End}(L)$, as a $G\times
G$-module, admits a unique, up to scalar, $G^{\prime}$-invariant given by
$\delta_{L}$, the parity involution. Consequently,
$\text{tr}_{L}\in\operatorname{End}(L)^{*}$ defines a nonzero
$G^{\prime}$-invariant function on $G$.
###### Proof.
Clearly $\delta_{L}$ is $\mathfrak{g}_{\overline{0}}$-invariant. For
$u\in\mathfrak{g}_{\overline{1}}$, we see that
$((u,-u)\delta)(v)=u\delta(v)+\delta(uv)=(-1)^{\overline{v}}uv+(-1)^{\overline{u}+\overline{v}}uv=0$
so $\delta_{L}$ is $\mathfrak{g}^{\prime}$-invariant. Now as a tensor,
$\delta_{L}=\sum\limits_{i}(-1)^{\overline{e_{i}}}e_{i}\otimes\varphi_{i}$,
where $\varphi_{i}(e_{i})=1$, so after applying the braiding to switch the
order of tensors, we obtain the trace in $\operatorname{End}(L)^{*}$. ∎
###### Corollary 9.4.
If $L$ is a simple $G$-module, $L^{*}\boxtimes L\subseteq k[G]$ has a unique
$\mathfrak{g}^{\prime}$-invariant, given by $\text{tr}_{L}$. In particular,
$\text{tr}_{L}(eG)=\operatorname{dim}L\neq 0$, so that the hypotheses of 8.17
apply.
The following is well-known:
###### Lemma 9.5.
Write $\mathfrak{a}=\\{(h,-h):h\in\mathfrak{h}\\}$. Then $\mathfrak{a}$ is a
Cartan subspace for $(\mathfrak{g}\times\mathfrak{g},\mathfrak{g})$.
Now choose a positive system for $\mathfrak{g}$, say $\Delta^{+}$, with Borel
$\mathfrak{b}^{+}$ and opposite Borel $\mathfrak{b}^{-}$. Write $\rho$ for the
Weyl vector. Then we obtain a positive system for
$\mathfrak{g}\times\mathfrak{g}$ with Borel subalgebra
$\mathfrak{b}^{-}\times\mathfrak{b}^{+}$.
###### Corollary 9.6.
The above choice of Borel subalgebra is an Iwasawa Borel subalgebra. Further,
we have that $\mathfrak{n}=\mathfrak{u}^{-}\times\mathfrak{u}^{+}$, where
$\mathfrak{u}^{\pm}\subseteq\mathfrak{b}^{\pm}$ are the nilpotent radicals of
$\mathfrak{b}^{\pm}$, and the Iwasawa decompositions:
$\mathfrak{g}\times\mathfrak{g}=\mathfrak{g}\oplus\mathfrak{a}\oplus\mathfrak{n}=\mathfrak{g}^{\prime}\oplus\mathfrak{a}\oplus\mathfrak{n}$
###### Lemma 9.7.
For the above choice of $\mathfrak{a}$ and positive system,
$\Lambda^{+}=\\{(-\lambda,\lambda):\lambda\text{ is a
}\mathfrak{b}\text{-dominant weight}\\}.$
Given this lemma, we identify $\Lambda^{+}$ with the set of
$\mathfrak{b}$-dominant weights, and write $f_{\lambda}\in k[G]$ for the
function of highest weight $(-\lambda,\lambda)$ such that $f_{\lambda}(eG)=1$.
###### Proposition 9.8.
We have
$\mathcal{U}(\mathfrak{g}\times\mathfrak{g})f_{\lambda}=\mathcal{U}\mathfrak{g}f_{\lambda}=\mathcal{U}\mathfrak{g}^{\prime}f_{\lambda}=L_{\lambda}^{*}\boxtimes
L_{\lambda}$.
###### Proof.
This follows from the Iwasawa decompositions. ∎
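For the reader's convenience, the PBW step implicit here can be sketched as follows (our paraphrase):

```latex
% Sketch: by PBW applied to the Iwasawa decomposition
%   g x g = g' + a + n,
% and since f_\lambda is an n-highest weight vector and an a-weight vector,
\mathcal{U}(\mathfrak{g}\times\mathfrak{g})f_{\lambda}
  =\mathcal{U}\mathfrak{g}^{\prime}\,\mathcal{U}\mathfrak{a}\,
   \mathcal{U}\mathfrak{n}\,f_{\lambda}
  =\mathcal{U}\mathfrak{g}^{\prime}f_{\lambda};
% the same argument with the decomposition g x g = g + a + n gives
% U(g x g) f_\lambda = U(g) f_\lambda.
```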
Recall that for this supersymmetric pair, the Harish-Chandra morphism on
distributions corresponds to the usual Harish-Chandra morphism
$HC:\mathcal{U}\mathfrak{g}\to S(\mathfrak{h})$ on the enveloping algebra.
###### Corollary 9.9.
$p_{G\times G/G,B}(\lambda)\neq 0$ if and only if $L_{\lambda}^{*}\boxtimes
L_{\lambda}$ has a copy of $I_{G^{\prime}}(k)$ as a $G^{\prime}$-module.
###### Proof.
This follows from 9.8, 9.4, and 8.17. ∎
### 9.2. Projectivity criteria for irreducible modules
For this section we work with a fixed Borel subalgebra $\mathfrak{b}$ of
$\mathfrak{g}$, which as stated above determines a fixed Iwasawa Borel
subalgebra of $\mathfrak{g}\times\mathfrak{g}$.
We observe that $\operatorname{id}\times\delta$ defines an automorphism of
$G\times G$ which takes $G$ to $G^{\prime}$ and vice-versa. In particular it
defines an isomorphism
$G\times G/G^{\prime}\to G\times G/G$
which is $\operatorname{id}\times\delta$-equivariant, meaning that the
pullback morphism defines a $G\times G$-equivariant isomorphism
$k[G]=k[G\times G/G]\to k[G\times
G/G^{\prime}]^{\operatorname{id}\times\delta}.$
Here we use the notation $V^{\phi}$ for the $\phi$ twist of a $G$-module $V$,
where $\phi$ is an automorphism of $G$. Notice that since
$\operatorname{id}\times\delta$ is the identity on $G_{0}\times G_{0}$, we
have that $(L^{*}\boxtimes L)^{\operatorname{id}\times\delta}\cong
L^{*}\boxtimes L$ for a simple $G$-module $L$, by highest weight theory.
###### Proposition 9.10.
Let $(-\lambda,\lambda)\in\Lambda^{+}$. Then $L(\lambda)^{*}\boxtimes
L(\lambda)$ contains a copy of $I_{G^{\prime}}(k)$ if and only if $L(\lambda)$
is a projective $G$-module.
###### Proof.
For the backward direction, if $L(\lambda)$ is projective then
$L(\lambda)^{*}\boxtimes L(\lambda)$ is too, thus its restriction to
$G^{\prime}$ must be projective since $G$ is quasireductive. By 9.4,
$L(\lambda)^{*}\boxtimes L(\lambda)$ contains a $G^{\prime}$-invariant, and
thus it must contain a copy of $I_{G^{\prime}}(k)$.
Conversely, suppose that $L(\lambda)^{*}\boxtimes L(\lambda)$ contains a copy
of $I_{G^{\prime}}(k)$. Since $L(\lambda)^{*}\boxtimes L(\lambda)$ is
irreducible, we may apply 8.18 to obtain that $I_{G\times
G}(L(\lambda)^{*}\boxtimes L(\lambda))$ is a $G\times G$-submodule of
$k[G\times G/G^{\prime}]$. Using the isomorphism $k[G\times
G/G^{\prime}]^{\operatorname{id}\times\delta}\to k[G\times G/G]$, we obtain
that $I_{G\times G}(L(\lambda)^{*}\boxtimes L(\lambda))$ is a submodule of
$k[G]$.
Since $I_{G\times G}(L(\lambda)^{*}\boxtimes L(\lambda))$ is projective, it
splits off $k[G]$ as a $G\times G$-module. By 9.2, we find that $L(\lambda)$
must be in its own block, and is therefore projective. ∎
From this, we obtain the following remarkable result. We write
$p_{G,B}:=p_{G\times
G/G,B}=HC(\operatorname{ev}_{eG}v_{\mathfrak{g}^{\prime}}).$
###### Theorem 9.11.
For a $\mathfrak{b}$-dominant weight $\lambda$, the simple $G$-module
$L(\lambda)$ is projective if and only if $p_{G,B}(\lambda)\neq 0$. In
particular, the set of dominant weights $\lambda$ for which $L(\lambda)$ is
projective is the intersection of $\Lambda^{+}$ with the Zariski open
subset of $\mathfrak{h}^{*}$ given by the non-vanishing set of the polynomial
$p_{G,B}$. Moreover, if $\operatorname{dim}\mathfrak{g}_{\overline{1}}$ is
odd or $\operatorname{Ber}(\mathfrak{g}_{\overline{1}})$ is not the trivial
$\mathfrak{g}$-module, then no irreducible $G$-modules are projective.
###### Proof.
The first statement follows from 9.10 and 9.9. The last statement follows from
8.13. Note that the last statement could be proven more easily using the
result in [Ser11] that $I_{G}(L)\cong
P_{G}(L\otimes\operatorname{Ber}(\mathfrak{g}_{\overline{1}}))$. ∎
In the appendix, we will prove 9.11 in full generality, i.e. without the
assumption that $G$ is Cartan-even.
###### Corollary 9.12.
If $HC(\mathcal{Z})=k$ (e.g. if $\mathcal{Z}=k$) and $\mathfrak{g}\neq 0$,
then no finite-dimensional irreducible $G$-modules are projective.
###### Proof.
By 9.11, if there exists a finite-dimensional irreducible projective
$G$-module, then $p_{G,B}\in S(\mathfrak{h})$ is a nonzero polynomial. By
9.11, $\operatorname{Ber}(\mathfrak{g})$ must be trivial and thus
$\mathcal{A}_{G\times G/G}=\mathcal{A}$. Thus we know from [Gor00] that
$p_{G,B}^{2}\in HC(\mathcal{Z})=k$, and is nonzero, so $p_{G,B}$ must be a
constant polynomial. Therefore $\operatorname{Rep}(G)$ is semisimple, and thus
$HC(\mathcal{Z})$ must be strictly larger than the constants by the
classification of algebraic supergroups with semisimple representation theory
(see [She20b]), contradicting the assumption that $HC(\mathcal{Z})=k$. ∎
###### Corollary 9.13.
If there are no finite-dimensional irreducible projective $G$-modules, then
$HC(\mathcal{A}_{G/K})=0$.
###### Proof.
If for $\lambda\in\Lambda^{+}$ we have $HC(\gamma)(\lambda)\neq 0$, then
$L(\lambda)^{*}\boxtimes L(\lambda)$ contains a copy of $I_{G^{\prime}}(k)$,
so that $L(\lambda)$ is projective, a contradiction. ∎
We now can prove a nice sufficient criterion for when $G$ admits projective
irreducible modules.
###### Theorem 9.14.
Suppose that $G$ is a Cartan-even quasireductive supergroup such that the
following conditions on its Lie superalgebra, $\mathfrak{g}$, hold:
1. (1)
$\operatorname{dim}[\mathfrak{g}_{\alpha},\mathfrak{g}_{-\alpha}]=1$ for all
$\alpha\in\Delta$; and
2. (2)
for all $\alpha\in\Delta$, the pairing
$[-,-]:\mathfrak{g}_{\alpha}\otimes\mathfrak{g}_{-\alpha}\to[\mathfrak{g}_{\alpha},\mathfrak{g}_{-\alpha}]$
is nondegenerate.
Make a choice of positive roots and thus a Borel subalgebra $\mathfrak{b}$
containing $\mathfrak{h}$, and let
$\Delta_{\overline{1}}^{+}=\\{\alpha_{1},\dots,\alpha_{n}\\}$. Let
$h_{\alpha_{i}}\in[\mathfrak{g}_{\alpha_{i}},\mathfrak{g}_{-\alpha_{i}}]$ be a
chosen nonzero element, and write
$r_{i}=\operatorname{dim}\mathfrak{g}_{\alpha_{i}}$. Then up to a scalar we
have:
$p_{G,B}=h_{\alpha_{1}}^{r_{1}}\cdots h_{\alpha_{n}}^{r_{n}}+l.o.t.$
In particular $p_{G,B}\neq 0$, so that $G$ admits projective irreducible
modules.
###### Proof.
We may choose a weight basis of $\mathfrak{g}_{\overline{1}}$ as follows. Let
$u_{0},\dots,u_{m}$ be a root vector basis of $\mathfrak{n}_{\overline{1}}$,
where $u_{i}$ has weight $\beta_{i}$, and the $\beta_{i}$’s need not be
distinct. Then there exists a root vector basis $v_{0},\dots,v_{m}$ of
$\mathfrak{n}^{-}_{\overline{1}}$ such that $v_{i}$ has weight $-\beta_{i}$
and $[u_{i},v_{j}]=\delta_{ij}h_{\beta_{i}}$ whenever $\beta_{i}=\beta_{j}$.
Write $u_{I}$ and $v_{I}$ for the corresponding ordered products when
$I\subseteq\\{0,\dots,m\\}$, and set $u_{\emptyset}=v_{\emptyset}=1$. For a
subset $J\subseteq\\{0,\dots,m\\}$ with $J=\\{j_{1}<\dots<j_{l}\\}$ we set
$\widetilde{v_{J}}=v_{j_{l}}\cdots v_{j_{1}}.$
Then we may take as a basis for
$\mathcal{U}\mathfrak{g}/\mathcal{U}\mathfrak{g}\mathfrak{g}_{\overline{0}}$
the elements $u_{I}\widetilde{v_{J}}$ for $I,J\subseteq\\{0,\dots,m\\}$.
Therefore we may write
$v_{\mathfrak{g}}=u_{0}\cdots u_{m}v_{m}\cdots v_{0}+v^{\prime},$
where
$v^{\prime}\in\mathcal{F}^{\operatorname{dim}\mathfrak{g}_{\overline{1}}-2}$.
Thus
$\operatorname{ad}^{\prime}(v_{\mathfrak{g}})(1)=\operatorname{ad}^{\prime}(u_{0}\cdots
u_{m}v_{m}\cdots v_{0})(1)+\operatorname{ad}^{\prime}(v^{\prime})(1).$
By lemma 8.8, $HC(\operatorname{ad}^{\prime}(v^{\prime})(1))$ has degree at
most $m$, so it suffices to show that (up to a scalar)
$\operatorname{ad}^{\prime}(u_{0}\cdots u_{m}v_{m}\cdots
v_{0})(1)=h_{\beta_{0}}\cdots h_{\beta_{m}}+l.o.t.$
Observe that
$\operatorname{ad}^{\prime}(v_{m}\cdots
v_{0})(1)=\sum\limits_{I\subseteq\\{1,\dots,m\\}}(-1)^{i_{1}+\dots+i_{j}}\widetilde{v_{I^{c}}}v_{I},$
(note the $\\{1,\dots,m\\}$ in the sum is not a typo) where
$I=\\{i_{1},\dots,i_{j}\\}$ and $I^{c}$ denotes the complement of $I$. Now if
we apply $\operatorname{ad}^{\prime}(u_{0}\cdots u_{m})$ to
$\operatorname{ad}^{\prime}(v_{m}\cdots v_{0})(1)$, the only term that does
not vanish under the Harish-Chandra morphism is
$\sum\limits_{I\subseteq\\{1,\dots,m\\}}(-1)^{i_{1}+\dots+i_{j}}u_{0}\cdots
u_{m}\widetilde{v_{I^{c}}}v_{I}.$
Now it suffices to show that
$HC((-1)^{i_{1}+\dots+i_{j}}u_{0}\cdots
u_{m}\widetilde{v_{I^{c}}}v_{I})=h_{\beta_{0}}\cdots h_{\beta_{m}}+l.o.t.$
The proof of this is inductive. If $m\notin I$, then we may write
$u_{0}\cdots u_{m}\widetilde{v_{I^{c}}}v_{I}=u_{0}\cdots
u_{m}v_{m}\widetilde{v_{I^{c}\setminus\\{m\\}}}v_{I}.$
This is equal to (after removing the term in $v_{m}\mathcal{U}\mathfrak{g}$)
$\displaystyle\sum\limits_{j}(-1)^{m-j}u_{0}\cdots[u_{j},v_{m}]\cdots
u_{m}\widetilde{v_{I^{c}\setminus\\{m\\}}}v_{I\setminus\\{m\\}}$
$\displaystyle+$ $\displaystyle u_{0}\cdots
u_{m-1}\widetilde{v_{I^{c}\setminus\\{m\\}}}v_{I}h_{\beta_{m}}$
$\displaystyle+$ $\displaystyle cu_{0}\cdots
u_{m-1}\widetilde{v_{I^{c}\setminus\\{m\\}}}v_{I}$
In the last summand $c$ is the constant given by
$(-\beta_{0}-\dots-\beta_{m-1})(h_{\beta_{m}})$, but its precise value does
not matter as this term will have Harish-Chandra projection of degree $m$ or
less. For the first summand, let
$e_{j,m}=[u_{j},v_{m}]\in\mathfrak{g}_{\overline{0}}$, and suppose that
$e_{j,m}$ lies in $\mathfrak{n}^{-}$ (note that it cannot lie in
$\mathfrak{h}$ by our choice of basis). The argument for when it lies in
$\mathfrak{n}^{+}$ is entirely analogous. Then we have (after removing the
term in $e_{j,m}\mathcal{U}\mathfrak{g}$)
$u_{0}\cdots e_{j,m}\cdots
u_{m}\widetilde{v_{I^{c}\setminus\\{m\\}}}v_{I}=\sum\limits_{l}u_{0}\cdots[u_{l},e_{j,m}]\cdots\hat{u_{j}}\cdots
u_{m}\widetilde{v_{I^{c}\setminus\\{m\\}}}v_{I}$
Thus we end up with an element in $\mathcal{F}^{2m}$ so that its Harish-
Chandra projection is of degree at most $m$, and so does not contribute to the
top degree.
Now suppose instead $m\in I$, so that we can write
$u_{0}\cdots u_{m}\widetilde{v_{I^{c}}}v_{I}=u_{0}\cdots
u_{m}\widetilde{v_{I^{c}}}v_{I\setminus\\{m\\}}v_{m}.$
Write $I=\\{i_{1},\dots,i_{s},m\\}$ and $I^{c}=\\{j_{1},\dots,j_{t}\\}$. Then
the above expression is equal to
$\displaystyle(-1)^{m}u_{0}\cdots
u_{m-1}\widetilde{v_{I^{c}}}v_{I\setminus\\{m\\}}h_{\beta_{m}}$
$\displaystyle+$ $\displaystyle\sum\limits_{p}u_{0}\cdots
u_{m-1}v_{j_{t}}\cdots[u_{m},v_{j_{p}}]\cdots v_{j_{1}}v_{I}$ $\displaystyle+$
$\displaystyle\sum\limits_{q}u_{0}\cdots
u_{m-1}\widetilde{v_{I^{c}}}v_{i_{1}}\cdots[u_{m},v_{i_{q}}]\cdots v_{m}.$
Now in the last two summands we may apply a similar argument as above to prove
that their Harish-Chandra projections are of degree less than or equal to $m$.
Therefore, we have shown that the following two elements have the same top
degree term:
$HC((-1)^{i_{1}+\dots+i_{j}}u_{0}\cdots u_{m}\widetilde{v_{I^{c}}}v_{I}),\ \ \
\ HC((-1)^{j_{1}+\dots+j_{l}}u_{0}\cdots
u_{m-1}\widetilde{v_{J^{c}}}v_{J})h_{\beta_{m}},$
where $\\{j_{1},\dots,j_{l}\\}=J=I\cap\\{0,\dots,m-1\\}$ and the complement of
$J$ is taken in $\\{0,\dots,m-1\\}$. Now we may apply an induction argument to
show that
$HC((-1)^{i_{1}+\dots+i_{j}}u_{0}\cdots
u_{m}\widetilde{v_{I^{c}}}v_{I})=h_{\beta_{0}}\cdots h_{\beta_{m}}+l.o.t.$
which completes the proof. ∎
We say a Lie superalgebra is quadratic if it admits a nondegenerate,
invariant, even, supersymmetric form $(-,-)$. The following is
straightforward to deduce from the properties of being quadratic.
###### Corollary 9.15.
Let $G$ be a Cartan-even quasireductive supergroup whose Lie superalgebra is
quadratic. Then the conditions of 9.14 hold. In particular $G$ admits
projective irreducible modules.
### 9.3. $\mathfrak{g}$ basic classical
We now additionally assume that $\mathfrak{g}$ is almost simple and basic
classical, by which we mean it is either basic simple,
$\mathfrak{s}\mathfrak{l}(n|n)$, or $\mathfrak{g}\mathfrak{l}(m|n)$. The
following two results could be proven more easily using 9.11 and facts about
the representation theory of basic Lie superalgebras, but we present the
proofs below as they demonstrate a different approach which generalizes to
other supersymmetric pairs.
###### Lemma 9.16.
Suppose that $\alpha\in\Delta^{+}$ is a simple isotropic root, and that we
have $(\lambda,\alpha)=0$. Then $p_{G,B}(\lambda)=0$.
###### Proof.
Let $u_{\alpha}=(e_{\alpha},0)-(0,e_{\alpha})\in\mathfrak{g}^{\prime}$. Then
$u_{\alpha}f_{\lambda}=0$ because $\alpha$ is simple isotropic and
$(\lambda,\alpha)=0$.
Now suppose that $\mathcal{U}\mathfrak{g}^{\prime}f_{\lambda}$ contained a
copy of $I_{G^{\prime}}(k)$, so that we may write
$\mathcal{U}\mathfrak{g}^{\prime}f_{\lambda}=I_{G^{\prime}}(k)\oplus M$ for some
complementary module $M$. Then with respect to this direct sum decomposition
we can write $f_{\lambda}=g+m$, where $g\in I_{G^{\prime}}(k)$ generates. But
then we must have $u_{\alpha}g=0$, implying by the projectivity of
$I_{G^{\prime}}(k)$ that $g=u_{\alpha}h$ for some $h\in I_{G^{\prime}}(k)$.
However this contradicts $g$ being non-zero on some
$\mathfrak{g}^{\prime}$-coinvariant.
This finishes the proof. ∎
###### Proposition 9.17.
If $\alpha$ is a positive isotropic root such that $(\lambda+\rho,\alpha)=0$,
then $p_{G,B}(\lambda)=0$.
###### Proof.
If $\alpha$ is simple isotropic, we are done by lemma 9.16. Otherwise, there
exists a sequence of odd reflections transforming $\alpha$ into a simple root.
If $r_{\beta}$ is such an odd reflection, then we see that $\rho$ becomes
$\rho+\beta$ and $\lambda$ changes to $\lambda-\beta$ if $(\lambda,\beta)\neq
0$; if $(\lambda,\beta)=0$ then by lemma 9.16 we are done anyway, so we
may assume this does not happen. Writing $\rho^{\prime}$ for the new Weyl
vector and $\lambda^{\prime}$ for the new highest weight, we see that we still
have
$(\lambda^{\prime}+\rho^{\prime},\alpha)=(\lambda+\rho,\alpha)=0$
The proof follows by induction. ∎
###### Corollary 9.18.
$\prod\limits_{\alpha\in\Delta^{+}_{\overline{1}},(\alpha,\alpha)=0}(h_{\alpha}+(\rho,\alpha))$
divides $p_{G,B}$ and thus divides $HC(\gamma)$ for any
$\gamma\in\operatorname{Dist}(G/K,eK)^{\mathfrak{k}^{\prime}}$.
###### Proof.
The first statement follows by a density argument, and the second statement
follows from 8.17. ∎
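In slightly more detail, the density argument can be sketched as follows (our paraphrase; we assume the dominant weights satisfying $(\lambda+\rho,\alpha)=0$ are Zariski dense in the corresponding hyperplane, as holds in the cases at hand):

```latex
% For a fixed positive isotropic root \alpha, proposition 9.17 gives
% p_{G,B}(\lambda) = 0 for all dominant \lambda in the hyperplane
H_{\alpha}=\{\mu\in\mathfrak{h}^{*}:(\mu+\rho,\alpha)=0\},
% so by density p_{G,B} vanishes on all of H_\alpha, and the linear polynomial
h_{\alpha}+(\rho,\alpha)
% divides p_{G,B}.  Running over the isotropic roots in \Delta^+_{\bar 1}
% yields the stated product.
```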
The above result recovers part of Gorelik’s result in [Gor00], where she
proved that
$HC(\mathcal{A})=S(\mathfrak{h})^{W_{.}}\prod\limits_{\alpha\in\Delta^{+}_{\overline{1}}}(h_{\alpha}+(\rho,\alpha)).$
Here $S(\mathfrak{h})^{W_{.}}$ denotes the $\rho$-shifted $W$-invariant
polynomials in $\mathfrak{h}$, where $W$ has a slightly different action than
usual (see [Gor00] for details). Gorelik proved this result by studying the
action of $\mathcal{A}$ on Verma modules of $\mathfrak{g}$, and using known
results on highest weight vectors in Verma modules. Thus far the author does
not know how to reprove her results from a purely geometric standpoint. In
particular the invariance under an even Weyl group remains elusive.
## 10\. Full ghost centre
As explained previously, in [Gor00] an algebra called the ghost centre
$\tilde{\mathcal{Z}}\subseteq\mathcal{U}\mathfrak{g}$ was defined as
$\tilde{\mathcal{Z}}:=\mathcal{Z}+\mathcal{A},$
where $\mathcal{A}=(\mathcal{U}\mathfrak{g})^{\mathfrak{g}^{\prime}}$. One can
check that for $a,b\in\mathcal{A}$ we have $ab\in\mathcal{Z}$, so this indeed
forms an algebra. We seek to expand this algebra to what we will call the full
ghost centre, $\mathcal{Z}_{full}$, of $\mathfrak{g}$. It is a subalgebra of
$\mathcal{U}\mathfrak{g}$ which we now describe.
### 10.1. Twisted adjoint actions
Let $\operatorname{Aut}(\mathfrak{g},\mathfrak{g}_{\overline{0}})$ denote the
set of automorphisms of $\mathfrak{g}$ which fix $\mathfrak{g}_{\overline{0}}$
pointwise. For
$\phi\in\operatorname{Aut}(\mathfrak{g},\mathfrak{g}_{\overline{0}})$, define
the $\phi$-twisted adjoint action on $\mathcal{U}\mathfrak{g}$ by
$\operatorname{ad}_{\phi}(u)(v)=uv-(-1)^{\overline{u}\overline{v}}v\phi(u)$
where $u\in\mathfrak{g}$, $v\in\mathcal{U}\mathfrak{g}$.
###### Definition 10.1.
Define $\mathcal{A}_{\phi}$ to be the $\operatorname{ad}_{\phi}$-invariant
elements of $\mathcal{U}\mathfrak{g}$.
Observe that $\mathcal{A}_{\operatorname{id}}=\mathcal{Z}$ and
$\mathcal{A}_{\delta}=\mathcal{A}$, where $\delta(u)=(-1)^{\overline{u}}u$.
Another way to understand this action is as follows: consider the subalgebra
$\mathfrak{g}_{\phi}$ of $\mathfrak{g}\times\mathfrak{g}$ given by
$\\{(u,\phi(u)):u\in\mathfrak{g}\\}$. Then
$\mathfrak{g}_{\phi}\cong\mathfrak{g}$, and its even part is
$(\mathfrak{g}_{\phi})_{\overline{0}}=\\{(u,u):u\in\mathfrak{g}_{\overline{0}}\\}$.
Then the action of $\mathfrak{g}_{\phi}$ on $\operatorname{Dist}(G\times
G/G,eG)\cong\mathcal{U}\mathfrak{g}$ exactly induces the $\phi$-twisted
adjoint action.
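Explicitly, under the identification of lemma 9.1 this is a one-line check:

```latex
% For u in g and v in Ug, the element (u, \phi(u)) of g_\phi acts by
% (cf. the formula in the proof of lemma 9.1, with \phi(u) in place of
% v_2, noting that \phi preserves parity):
(u,\phi(u))\cdot v
  = uv-(-1)^{\overline{u}\,\overline{v}}\,v\,\phi(u)
  = \operatorname{ad}_{\phi}(u)(v).
```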
Let $G_{\phi}\subseteq G\times G$ be the connected subgroup which integrates
$\mathfrak{g}_{\phi}$.
###### Lemma 10.2.
Suppose that $\phi(u)=u$ implies $u\in\mathfrak{g}_{\overline{0}}$. Then
$\mathcal{U}\mathfrak{g}\cong\operatorname{Ind}_{\mathfrak{g}_{\overline{0}}}^{\mathfrak{g}}\mathcal{U}\mathfrak{g}_{\overline{0}}$
with respect to the $\phi$-twisted action. In particular if
$\operatorname{Ber}(\mathfrak{g}_{\overline{1}})$ is trivial, then there is a
natural isomorphism of vector spaces
$\mathcal{Z}(\mathcal{U}\mathfrak{g}_{\overline{0}})\to\mathcal{A}_{\phi}$
given by
$z\mapsto\operatorname{ad}_{\phi}(v_{\mathfrak{g}})(z).$
###### Proof.
If $\phi(u)=u$ implies $u\in\mathfrak{g}_{\overline{0}}$, then
$(\mathfrak{g}_{\phi})_{\overline{1}}\to(T_{eG}(G\times
G/G))_{\overline{1}}$ is an isomorphism. Therefore, 5.3 applies. ∎
Observe that $\operatorname{id}\times\phi$ defines an isomorphism of $G\times
G$ taking $G$ to $G_{\phi}$, and thus an isomorphism
$G\times G/G\to G\times G/G_{\phi}$
which is $\operatorname{id}\times\phi$-equivariant. Since
$\operatorname{id}\times\phi$ is the identity on
$\mathfrak{g}_{\overline{0}}\times\mathfrak{g}_{\overline{0}}$, we obtain the
following generalization of 9.11.
###### Proposition 10.3.
Suppose that $\operatorname{Ber}(\mathfrak{g})$ is trivial and that
$\phi\in\operatorname{Aut}(\mathfrak{g},\mathfrak{g}_{\overline{0}})$
satisfies the conditions of lemma 10.2. Then for $\gamma\in\mathcal{A}_{\phi}$
and $\lambda\in\Lambda^{+}$, $HC(\gamma)(\lambda)=0$ if $L(\lambda)$ is not
projective. Further, if the unique $\operatorname{ad}_{\phi}$-invariant element
of $\operatorname{End}(L)^{*}$ is always nonzero at $eG$ for a simple
$G$-module $L$, then
$HC(\operatorname{ad}_{\phi}(v_{\mathfrak{g}})1)(\lambda)\neq 0$ if and only
if $L(\lambda)$ is projective.
The unique $\operatorname{ad}_{\phi}$-invariant element of
$\operatorname{End}(L)^{*}$ can be thought of as a $\phi$-twisted trace
function. We describe these elements for basic type I algebras in section
10.6.
### 10.2. The full ghost centre
For
$\phi,\psi\in\operatorname{Aut}(\mathfrak{g},\mathfrak{g}_{\overline{0}})$, it
is easy to check that multiplication in $\mathcal{U}\mathfrak{g}$ induces a
morphism
$\mathcal{A}_{\phi}\otimes\mathcal{A}_{\psi}\to\mathcal{A}_{\psi\phi}.$
###### Definition 10.4.
Define the full ghost centre of $\mathcal{U}\mathfrak{g}$ to be the algebra
given by
$\mathcal{Z}_{full}:=\sum\limits_{\phi\in\operatorname{Aut}(\mathfrak{g},\mathfrak{g}_{\overline{0}})}\mathcal{A}_{\phi}.$
Notice that
$\tilde{\mathcal{Z}}\subseteq\mathcal{Z}_{full}\subseteq(\mathcal{U}\mathfrak{g})^{\mathfrak{g}_{\overline{0}}}$.
### 10.3. The case of an abelian Lie superalgebra
As a toy case, we consider the above constructions for the abelian
superalgebra $\mathfrak{g}=k^{m|n}$. Then
$\operatorname{Aut}(\mathfrak{g},{\mathfrak{g}_{\overline{0}}})\cong GL(n)$.
Let $A\in GL(n)$, and let $\xi_{A}$ be a nonzero element of
$\Lambda^{top}\operatorname{im}(A-\operatorname{id})$, where
$\xi_{\operatorname{id}}=1$.
###### Lemma 10.5.
$\mathcal{A}_{A}=\mathcal{U}\mathfrak{g}\xi_{A}$.
###### Proof.
For $u\in\mathcal{U}\mathfrak{g}$ and $\eta\in\mathfrak{g}$, we see that
$\displaystyle\operatorname{ad}_{A}(\eta)(u)$ $\displaystyle=$
$\displaystyle\eta u-(-1)^{\overline{u}}uA\eta$ $\displaystyle=$
$\displaystyle(\eta-A\eta)u.$
From here the result is straightforward. ∎
In this case we see that the ghost centres for different automorphisms in
$\operatorname{Aut}(\mathfrak{g},\mathfrak{g}_{\overline{0}})$ may overlap in
myriad ways. Now suppose
$A\in\operatorname{Aut}(\mathfrak{g},\mathfrak{g}_{\overline{0}})$ doesn’t fix
any non-zero odd vectors. Then under the twisted adjoint action determined by
$A$, $\mathcal{U}\mathfrak{g}$ is isomorphic to a direct sum of copies of the
left (or right) regular representation of
$\mathcal{U}\mathfrak{g}_{\overline{1}}$, and we have
$\mathcal{A}_{A}=S(\mathfrak{g}_{\overline{0}})\Lambda^{n}\mathfrak{g}_{\overline{1}}$.
By lemma 6.10, we have $v_{\mathfrak{g}}=\xi_{1}\dots\xi_{n}$, where
$\xi_{1},\dots,\xi_{n}$ is any basis of $\mathfrak{g}_{\overline{1}}$. Thus if
$A$ is fixed point free, the following element must be non-zero:
$\operatorname{ad}_{A}(\xi_{1}\cdots\xi_{n})(1)=(1-A)\xi_{1}\cdots(1-A)\xi_{n}=\det(1-A)\xi_{1}\cdots\xi_{n}.$
So this is nonzero if and only if $A-1$ is invertible, i.e. $1$ is not an
eigenvalue of $A$.
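As a minimal worked example (ours), take $n=1$ with a single odd basis vector $\xi$ and $A\xi=a\xi$ for $a\in k^{\times}$:

```latex
% For v in Ug, using v\xi = (-1)^{\bar v}\xi v in the abelian superalgebra:
\operatorname{ad}_{A}(\xi)(v)
  =\xi v-(-1)^{\overline{v}}\,v\,(a\xi)
  =(1-a)\,\xi v.
% Hence for a \neq 1 we get A_A = Ug\,\xi = S(g_0)\,\xi, consistent with
% lemma 10.5 (with \xi_A = \xi) and with det(1-A) = 1-a \neq 0; for a = 1,
% ad_id(\xi) = 0 and A_id = Ug = Z, consistent with \xi_id = 1.
```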
###### Remark 10.6.
Using the above computation, it is possible to give a purely algebraic proof
of lemma 10.2 with the same proof as given in [Gor00]. Namely, given
$u_{1},\dots,u_{n}\in\mathfrak{g}_{\overline{1}}$,
$v\in\mathcal{U}\mathfrak{g}$, and
$\phi\in\operatorname{Aut}(\mathfrak{g},\mathfrak{g}_{\overline{0}})$, we have
$\operatorname{gr}(\operatorname{ad}_{\phi}(u_{1}\cdots
u_{n})(v))=\operatorname{ad}_{\phi}(\operatorname{gr}(u_{1}\cdots
u_{n}))(\operatorname{gr}v),$
where $\operatorname{gr}$ is the associated graded morphism
$\mathcal{U}\mathfrak{g}\to S(\mathfrak{g})$, and we view $S(\mathfrak{g})$ as
the enveloping algebra of $\mathfrak{g}$ with trivial bracket. Thus if
$L_{0}\subseteq\mathcal{U}\mathfrak{g}_{\overline{0}}$ is a
$\mathfrak{g}_{\overline{0}}$-submodule and $\\{v_{j}\\}\subseteq L_{0}$ is a
basis such that $\\{\operatorname{gr}v_{j}\\}$ is linearly independent in
$S(\mathfrak{g}_{\overline{0}})$, then by passing to the associated
graded we find that
$\mathcal{U}\mathfrak{g}L_{0}\cong\operatorname{Ind}_{\mathfrak{g}_{\overline{0}}}^{\mathfrak{g}}L_{0}$.
### 10.4. $\mathcal{Z}_{full}$ for basic classical Lie superalgebras
Let $\mathfrak{g}$ be either $\mathfrak{g}\mathfrak{l}(m|n)$,
$\mathfrak{s}\mathfrak{l}(m|n)$, $\mathfrak{p}\mathfrak{s}\mathfrak{l}(n|n)$
for $n>2$, $\mathfrak{o}\mathfrak{s}\mathfrak{p}(m|2n)$, or a basic
exceptional simple Lie superalgebra. Recall that such a superalgebra is called
type I if it admits a $\mathbb{Z}$-grading
$\mathfrak{g}=\mathfrak{g}_{-1}\oplus\mathfrak{g}_{0}\oplus\mathfrak{g}_{1}$
compatible with the $\mathbb{Z}_{2}$-grading. Of these,
$\mathfrak{g}\mathfrak{l}(m|n)$, $\mathfrak{s}\mathfrak{l}(m|n)$,
$\mathfrak{p}\mathfrak{s}\mathfrak{l}(n|n)$ and
$\mathfrak{o}\mathfrak{s}\mathfrak{p}(2|2n)$ are the type I algebras.
###### Remark 10.7 (Caution).
We do not consider $\mathfrak{p}\mathfrak{s}\mathfrak{l}(2|2)$ here because
lemma 10.8 fails for it (see section 5.5 of [Mus12]). Further, in section 10.5, we
will not consider $\mathfrak{s}\mathfrak{l}(n|n)$ or
$\mathfrak{p}\mathfrak{s}\mathfrak{l}(n|n)$ due to the lack of an internal
grading operator.
###### Lemma 10.8.
$\operatorname{Aut}(\mathfrak{g},\mathfrak{g}_{\overline{0}})\cong k^{\times}$
if $\mathfrak{g}$ is of type I, and otherwise
$\operatorname{Aut}(\mathfrak{g},\mathfrak{g}_{\overline{0}})=\langle\delta\rangle\cong\mathbb{Z}/2$.
In particular $\tilde{\mathcal{Z}}=\mathcal{Z}_{full}$ for $\mathfrak{g}$ not
of type I.
###### Proof.
We refer to [Mus12]. In the type I case, we identify
$\operatorname{Aut}(\mathfrak{g},\mathfrak{g}_{\overline{0}})\cong k^{\times}$
by assigning to $c\in k^{\times}$ the automorphism
$c\mathbf{1}_{\mathfrak{g}_{-1}}\oplus\mathbf{1}_{\mathfrak{g}_{0}}\oplus
c^{-1}\mathbf{1}_{\mathfrak{g}_{1}}$. ∎
So for these algebras we only get something new in the type I case. We study
this case now. Since
$\operatorname{Aut}(\mathfrak{g},\mathfrak{g}_{\overline{0}})\cong
k^{\times}$, we label the automorphisms by scalars $c\in k^{\times}$
according to the identification given in the proof of lemma 10.8.
Write
$\mathfrak{g}=\mathfrak{g}_{-1}\oplus\mathfrak{g}_{0}\oplus\mathfrak{g}_{1}$
for the $\mathbb{Z}$-grading on $\mathfrak{g}$. Choose a Cartan subalgebra
$\mathfrak{h}\subseteq\mathfrak{g}_{\overline{0}}$ and consider a Borel
subalgebra $\mathfrak{b}=\mathfrak{b}_{0}\oplus\mathfrak{g}_{1}$ where
$\mathfrak{b}_{0}$ is a Borel subalgebra of $\mathfrak{g}_{0}$ containing
$\mathfrak{h}$. We call this a standard Borel subalgebra of
$\mathfrak{g}$. Then we obtain a Harish-Chandra morphism with respect to this
Borel. Let $W$ denote the Weyl group of $\mathfrak{g}_{\overline{0}}$ and
consider its $\rho$-shifted action on $\mathfrak{h}^{*}$, where $\rho$ is
the Weyl vector.
###### Lemma 10.9.
Every $\mathfrak{b}$-highest weight module $M$ admits a $\mathbb{Z}$-grading
that is compatible with the $\mathbb{Z}$-grading on $\mathfrak{g}$.
###### Proof.
We may set $M_{0}=M^{\mathfrak{g}_{1}}$, and then define
$M_{-i}=\Lambda^{i}\mathfrak{g}_{-1}M_{0}$. ∎
We will use the $\mathbb{Z}$-grading defined in the proof of lemma 10.9 many
times in what follows.
###### Theorem 10.10.
1. (1)
$\mathcal{Z}_{full}$ is a commutative algebra.
2. (2)
For $c\neq 1$, $HC$ is injective on $\mathcal{A}_{c}$, and we have
$HC(\mathcal{A}_{c})=HC(\mathcal{A}_{-1})=S(\mathfrak{h})^{W}\prod\limits_{\alpha\in\Delta^{+}_{\overline{1}}}(h_{\alpha}+(\rho,\alpha)).$
###### Remark 10.11.
In this case we see that $HC(\mathcal{A}_{c})\subseteq HC(\mathcal{Z})$. See
for instance [CW12] for a description of the Harish-Chandra image of the
centre for $\mathfrak{g}\mathfrak{l}(m|n)$ and
$\mathfrak{o}\mathfrak{s}\mathfrak{p}(m|2n)$.
###### Proof.
Let
$t_{\mathfrak{g}}:=\prod\limits_{\alpha\in\Delta_{1}^{+}}(h_{\alpha}+(\alpha,\rho)).$
By 10.3, if $u\in\mathcal{A}_{c}$ then we must have $HC(u)(\lambda)=0$ for
$\lambda$ an atypical dominant integral weight, i.e. $\lambda$ such that
$(\lambda+\rho,\alpha)=0$ for some odd root $\alpha$, or equivalently
$t_{\mathfrak{g}}(\lambda)=0$. The set of such $\lambda$ is dense in the zero
set of $t_{\mathfrak{g}}$, and thus $t_{\mathfrak{g}}$ divides
$HC(u)$.
To obtain Weyl group invariance, let $\lambda$ be a dominant integral weight,
and consider the Verma module $M_{\mathfrak{b}}(\lambda)$ of highest weight
$\lambda$ and highest weight vector $v$. Choose a simple even root $\alpha$,
and write $f_{\alpha}$ for a root vector of weight $-\alpha$. Then
$f_{\alpha}^{(\lambda,\alpha)+1}v$ will be a highest weight vector of weight
$\lambda-((\lambda,\alpha)+1)\alpha=s_{\alpha}(\lambda)$. Since
$f_{\alpha}\in\mathfrak{g}_{0}$,
$uf_{\alpha}^{(\lambda,\alpha)+1}v=f_{\alpha}^{(\lambda,\alpha)+1}uv=HC(u)(\lambda)f_{\alpha}^{(\lambda,\alpha)+1}v.$
Thus $HC(u)(\lambda)=HC(u)(s_{\alpha}(\lambda))$. Since such $\lambda$ are
dense, and the reflections in the even simple roots of this Borel subalgebra
generate the Weyl group, it follows that $HC(u)\in S(\mathfrak{h})^{W}$. Since
$t_{\mathfrak{g}}$ is itself $W$-invariant, we have therefore shown that
$HC(\mathcal{A}_{c})\subseteq S(\mathfrak{h})^{W}t_{\mathfrak{g}}.$
To finish the proof, first observe that $HC$ is injective on
$\mathcal{A}_{c}$, as if $HC(u)=0$ for $u\in\mathcal{A}_{c}$ then it must act
by zero on every Verma module, implying it is zero by [LM94]. Now if we apply
lemma 8.8, we can make the same degree arguments as in [Gor00] to obtain the
equality $HC(\mathcal{A}_{c})=S(\mathfrak{h})^{W}t_{\mathfrak{g}}$.
∎
With 10.10 we can now completely describe the structure of
$\mathcal{Z}_{full}$ for type I algebras. Let
$N=\operatorname{dim}\mathfrak{g}_{1}=\operatorname{dim}\mathfrak{g}_{\overline{1}}/2$,
and let $\zeta_{N}\in k$ be a primitive $N$th root of unity.
###### Theorem 10.12.
$\mathcal{Z}_{full}=\bigoplus\limits_{i=0}^{N-1}\mathcal{A}_{\zeta_{N}^{i}}.$
###### Proof.
Let $u\in\mathcal{A}_{c}$ for $c\neq 1$, and write $p=HC(u)$. Define
$u_{i}\in\mathcal{A}_{\zeta_{N}^{i}}$ to be the unique element such that
$HC(u_{i})=p$. We want to solve the equation
$u=a_{0}u_{0}+\cdots+a_{N-1}u_{N-1}=\sum\limits_{i=0}^{N-1}a_{i}u_{i}.$
Let $M:=M_{\mathfrak{b}}(\lambda)$ be the Verma module of highest weight
$\lambda$ for a standard Borel $\mathfrak{b}$, and write $M_{j}$ for the $j$th
graded part according to the $\mathbb{Z}$-grading defined in the proof of
lemma 10.9. Then the scalar action of each side of the above equation applied
to $M_{j}$ gives the equation
$c^{j}p(\lambda)=\sum\limits_{i=0}^{N-1}a_{i}\zeta_{N}^{ji}p(\lambda).$
If $p(\lambda)=0$ then this equation always holds. If $p(\lambda)\neq 0$, we
divide by it to get the system of equations
$c^{j}=\sum\limits_{i=0}^{N-1}a_{i}\zeta_{N}^{ji}.$
This has a unique solution for $a_{0},\dots,a_{N-1}$ since it is a linear
system for the Vandermonde matrix determined by
$1,\zeta_{N},\dots,\zeta_{N}^{N-1}$. Since $u-a_{0}u_{0}-\dots-a_{0}u_{0}$
then acts trivially on every Verma module, it is zero in
$\mathcal{U}\mathfrak{g}$. It follows that
$\mathcal{Z}_{full}\subseteq\bigoplus\limits_{i=0}^{N-1}\mathcal{A}_{\zeta_{N}^{i}}.$
The sum is direct by the nonsingularity of the Vandermonde matrix, so we are
done. ∎
###### Remark 10.13.
If $c\in k^{\times}$, we have an inclusion
$\mathcal{A}_{c}\to\mathcal{Z}_{full}=\bigoplus\limits_{i=0}^{N-1}\mathcal{A}_{\zeta_{N}^{i}}$.
Assume that $c\neq 1$, and for $u\in\mathcal{A}_{c}$ write $p=HC(u)$. Let
$u_{i}\in\mathcal{A}_{\zeta_{N}^{i}}$ such that $HC(u_{i})=p$. Then by
inverting the Vandermonde matrix, we obtain the decomposition
$u=\frac{1}{N}\sum\limits_{i=0}^{N-1}\left(\sum\limits_{j=0}^{N-1}c^{j}\zeta_{N}^{-ij}\right)u_{i}.$
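As a minimal sanity check, take $N=2$, so that $\zeta_{2}=-1$: the formula specializes to
$u=\frac{1}{2}\left((1+c)u_{0}+(1-c)u_{1}\right),$
and one verifies directly that $a_{0}=\frac{1}{2}(1+c)$, $a_{1}=\frac{1}{2}(1-c)$ solve the system $c^{j}=a_{0}+(-1)^{j}a_{1}$ for $j=0,1$ from the proof of theorem 10.12.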
### 10.5. Characterization of $\mathcal{Z}_{full}$
We now give an intrinsic description of $\mathcal{Z}_{full}$ for the type I
algebras $\mathfrak{g}\mathfrak{l}(m|n)$, $\mathfrak{s}\mathfrak{l}(m|n)$ for
$m\neq n$, and $\mathfrak{o}\mathfrak{s}\mathfrak{p}(2|2n)$. These are
distinguished by their having an internal grading operator, that is, an element
$h\in\mathfrak{g}_{0}$ such that $[h,u]=\deg(u)u$ for $u\in\mathfrak{g}$. Thus
we assume for this subsection that $\mathfrak{g}$ is one of these
superalgebras.
Given a $\mathfrak{g}$-module $V$, recall that $V$ is $\mathbb{Z}$-graded if
there exists a $\mathbb{Z}$-grading of $V$ which is compatible with the
$\mathbb{Z}$-grading on $\mathfrak{g}$. In lemma 10.9 it was shown that a
highest weight module for the Borel $\mathfrak{b}_{0}\oplus\mathfrak{g}_{1}$
is $\mathbb{Z}$-graded. In particular all finite-dimensional irreducible
representations of $\mathfrak{g}$ are $\mathbb{Z}$-graded.
If $V$ is $\mathbb{Z}$-graded, we say an element $u\in\mathcal{U}\mathfrak{g}$
acts on it by a $\mathbb{Z}$-graded constant if $u$ acts by a scalar on each
component of the $\mathbb{Z}$-grading. We now seek to prove the following
analogue of corollary 4.4.4 of [Gor00].
###### Theorem 10.14.
$\mathcal{Z}_{full}$ consists exactly of all elements of
$\mathcal{U}\mathfrak{g}$ which act by $\mathbb{Z}$-graded constants on all
finite-dimensional irreducible representations of $\mathfrak{g}$.
Our proof follows the same strategy as taken in [Gor00]. Observe that the
$\mathbb{Z}$-grading on $\mathfrak{g}$ induces a $\mathbb{Z}$-grading on
$\mathcal{U}\mathfrak{g}$:
$\mathcal{U}\mathfrak{g}=\bigoplus\limits_{n\in\mathbb{Z}}(\mathcal{U}\mathfrak{g})_{n}.$
Then $(\mathcal{U}\mathfrak{g})_{0}$ is a subalgebra of
$\mathcal{U}\mathfrak{g}$. Let $C$ denote its centralizer, and let
$\mathfrak{b}=\mathfrak{b}_{0}\oplus\mathfrak{g}_{1}$ denote a standard Borel
subalgebra. Clearly $\mathcal{Z}_{full}\subseteq C$. We would like to show
that $\mathcal{Z}_{full}=C$.
###### Lemma 10.15.
$C\subseteq(\mathcal{U}\mathfrak{g})_{0}$.
###### Proof.
Recall that $\mathfrak{g}$ has an internal grading operator, $h$, in
$\mathfrak{g}_{\overline{0}}$. Thus $[h,C]=0$, implying that
$C\subseteq(\mathcal{U}\mathfrak{g})_{0}$. ∎
Now let $c\in C$.
###### Lemma 10.16.
If $L(\lambda)$ is an irreducible $\mathfrak{b}$-highest weight module, then
$c$ acts on $L(\lambda)$ by a $\mathbb{Z}$-graded constant.
###### Proof.
Write $L(\lambda)=\bigoplus\limits_{n}L(\lambda)_{n}$ for a
$\mathbb{Z}$-grading on $L(\lambda)$. Then $L(\lambda)_{n}$ is a
$(\mathcal{U}\mathfrak{g})_{0}$-module, and it suffices to prove it is
irreducible. If $W\subseteq L(\lambda)_{n}$ is a nonzero
$(\mathcal{U}\mathfrak{g})_{0}$-submodule, then since
$(\mathcal{U}\mathfrak{g})_{m}W\subseteq L(\lambda)_{n+m}$ for all $m$, we have
$\mathcal{U}\mathfrak{g}W\cap L(\lambda)_{n}=(\mathcal{U}\mathfrak{g})_{0}W=W$.
But since $L(\lambda)$ is irreducible, $\mathcal{U}\mathfrak{g}W=L(\lambda)$,
forcing $W=L(\lambda)_{n}$, so we are done. ∎
Let $S\subseteq\mathfrak{h}^{*}$ denote the collection of weights $\lambda$
such that $M_{\mathfrak{b}}(\lambda)=L_{\mathfrak{b}}(\lambda)$. Note that $S$
is a Zariski dense subset of $\mathfrak{h}^{*}$.
We give all Verma modules for $\mathfrak{g}$ with respect to the standard
Borel the $\mathbb{Z}$-grading defined in the proof of lemma 10.9. Then for
our fixed $c\in C$, we write $f_{i}(\lambda)$ for the constant by which $c$
acts on $M(\lambda)_{-i}$, where $f_{i}:S\to k$ is some function determined by
$c$.
###### Proposition 10.17.
The functions $f_{i}$ are polynomials on $\mathfrak{h}^{*}$, i.e. $f_{i}\in
S(\mathfrak{h})$.
For this, we need a lemma:
###### Lemma 10.18.
Let $1\leq n\leq\operatorname{dim}\mathfrak{g}_{1}$, and let
$\alpha_{1},\dots,\alpha_{n}$ be the $n$ smallest distinct positive odd roots
of $\mathfrak{g}$ with respect to our choice of Borel. Then
$(\mathcal{U}\mathfrak{n}^{-})_{-\alpha_{1}-\dots-\alpha_{n}}$
is one-dimensional.
###### Proof.
The dimension of
$(\mathcal{U}\mathfrak{n}^{-})_{-\alpha_{1}-\dots-\alpha_{n}}$ is given by the
number of ways to write $-\alpha_{1}-\dots-\alpha_{n}$ as a sum of negative
roots of $\mathfrak{g}$, where each odd root can show up at most once. Suppose
that we have
$-\alpha_{1}-\dots-\alpha_{n}=-\beta_{1}-\dots-\beta_{m}-r_{1}\gamma_{1}-\dots-
r_{m^{\prime}}\gamma_{m^{\prime}},$
where $\beta_{1},\dots,\beta_{m}$ are positive odd roots and
$\gamma_{1},\dots,\gamma_{m^{\prime}}$ are positive even roots. Writing again
$h\in\mathfrak{g}_{0}$ for the internal grading operator on $\mathfrak{g}$,
if we apply $h$ to the above equality of roots we learn that $n=m$. However by
our choice of $\alpha_{1},\dots,\alpha_{n}$, this clearly forces $r_{i}=0$ for
all $i$ and
$\\{\beta_{1},\dots,\beta_{m}\\}=\\{\alpha_{1},\dots,\alpha_{n}\\}$, so we are
done. ∎
###### Proof of 10.17.
Observe that $f_{0}=HC(c)$, so this is a polynomial. For $1\leq
n\leq\operatorname{dim}\mathfrak{g}_{1}$, let $\alpha_{1},\dots,\alpha_{n}$ be
the $n$ smallest distinct positive odd roots with root vectors
$u_{1},\dots,u_{n}$, and write $v_{1},\dots,v_{n}$ for the root vectors of
weight $-\alpha_{1},\dots,-\alpha_{n}$, where we assume
$[u_{i},v_{i}]=h_{\alpha_{i}}$, and $h_{\alpha_{i}}$ is the coroot of
$\alpha_{i}$. Let $\lambda\in S$, and write $v_{\lambda}$ for the highest
weight vector of $L(\lambda)$. Then observe that
$u_{1}\cdots u_{n}cv_{n}\cdots v_{1}v_{\lambda}=f_{n}(\lambda)HC(u_{1}\cdots
u_{n}v_{n}\cdots v_{1})v_{\lambda}.$
On the other hand,
$u_{1}\cdots u_{n}cv_{n}\cdots v_{1}v_{\lambda}=HC(u_{1}\cdots
u_{n}cv_{n}\cdots v_{1})v_{\lambda}.$
Thus on $S$ we have an equality of functions:
$HC(u_{1}\cdots u_{n}v_{n}\cdots v_{1})f_{n}(c)=HC(u_{1}\cdots
u_{n}cv_{n}\cdots v_{1}).$
Now let $\lambda\in\mathfrak{h}^{*}$ be arbitrary, and write $v_{\lambda}$ for
the highest weight vector of $M_{\mathfrak{b}}(\lambda)$. If $HC(u_{1}\cdots
u_{n}v_{n}\cdots v_{1})(\lambda)=0$, then $u_{1}\cdots u_{n}v_{n}\cdots
v_{1}v_{\lambda}=0$. However by lemma 10.18, $cv_{n}\cdots v_{1}v_{\lambda}$
is again a multiple of $v_{n}\cdots v_{1}v_{\lambda}$, so in this case we also
have $u_{1}\cdots u_{n}cv_{n}\cdots v_{1}v_{\lambda}=0$, and thus
$HC(u_{1}\cdots u_{n}cv_{n}\cdots v_{1})=0$.
Therefore, wherever $HC(u_{1}\cdots u_{n}v_{n}\cdots v_{1})$ vanishes,
$HC(u_{1}\cdots u_{n}cv_{n}\cdots v_{1})$ also vanishes. Further,
$HC(u_{1}\cdots u_{n}v_{n}\cdots v_{1})$ will have top degree term given by
$h_{\alpha_{1}}\cdots h_{\alpha_{n}}$, and thus this polynomial is a product
of distinct irreducible polynomials. These facts together imply it divides
$HC(u_{1}\cdots u_{n}cv_{n}\cdots v_{1})$ so that
$f_{n}(c)=HC(u_{1}\cdots u_{n}cv_{n}\cdots v_{1})/HC(u_{1}\cdots
u_{n}v_{n}\cdots v_{1})\in S(\mathfrak{h}).$
∎
We thank Maria Gorelik for helping us work through the following argument.
###### Proposition 10.19.
$C$ acts by $\mathbb{Z}$-graded constants on every Verma module $M(\lambda)$.
###### Proof.
Let
$M=\mathcal{U}\mathfrak{g}\otimes_{\mathcal{U}\mathfrak{b}}S(\mathfrak{h})$
denote the universal Verma module, where $\mathfrak{n}^{+}$ acts trivially on
$S(\mathfrak{h})$ and $\mathfrak{h}$ acts by multiplication. This module
admits a $\mathbb{Z}$-grading given by
$M_{-i}=\Lambda^{i}\mathfrak{g}_{-1}\mathcal{U}\mathfrak{g}_{0}S(\mathfrak{h}).$
For each $\lambda\in\mathfrak{h}^{*}$, we have a surjective
$\mathfrak{g}$-equivariant morphism
$M\to M(\lambda)$
given by evaluation at $\lambda$ on $S(\mathfrak{h})$. For
$u\in(\mathcal{U}\mathfrak{n}^{-})_{i}$, consider the element
$[c,u]+(HC(c)(\lambda)-f_{i}(\lambda))u\in\mathcal{U}\mathfrak{g}$, and then
consider
$\left([c,u]+(HC(c)(\lambda)-f_{i}(\lambda))u\right)\otimes 1\in M.$
Then we have shown that this element evaluates to $0$ on $S$, a Zariski dense
subset of $\mathfrak{h}^{*}$, and therefore it must vanish under every
evaluation. Since
$c\cdot uv_{\lambda}=[c,u]v_{\lambda}+HC(c)(\lambda)uv_{\lambda},$
it follows that for every $\lambda$, $c$ acts by $f_{i}(\lambda)$ on
$M(\lambda)_{-i}$, so that it acts by $\mathbb{Z}$-graded constants. ∎
###### Proposition 10.20.
$\mathcal{Z}_{full}=C$.
###### Proof.
Define polynomials $p_{0},\dots,p_{N-1}\in S(\mathfrak{h})$ by:
$\begin{bmatrix}f_{0}\\\ f_{1}\\\ \vdots\\\
f_{N-1}\end{bmatrix}=p_{0}\begin{bmatrix}1\\\ 1\\\ \vdots\\\
1\end{bmatrix}+p_{1}\begin{bmatrix}1\\\ \zeta_{N}\\\ \vdots\\\
\zeta_{N}^{N-1}\end{bmatrix}+\dots+p_{N-1}\begin{bmatrix}1\\\
\zeta_{N}^{N-1}\\\ \vdots\\\ \zeta_{N}^{(N-1)(N-1)}\end{bmatrix}.$
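For instance, when $N=2$ (so $\zeta_{2}=-1$) this system reads $f_{0}=p_{0}+p_{1}$ and $f_{1}=p_{0}-p_{1}$, which inverts to
$p_{0}=\frac{1}{2}(f_{0}+f_{1}),\ \ \ \ p_{1}=\frac{1}{2}(f_{0}-f_{1}).$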
Using the same argument as in 10.10, we must have that $p_{i}\in
S(\mathfrak{h})^{W}$ for all $i$.
Let $\alpha$ be the unique simple odd isotropic root of this Borel subalgebra,
and write $f_{\alpha}$ for a root vector of weight $-\alpha$. Then when
$(\lambda,\alpha)=0$, $f_{\alpha}v$, where $v$ denotes the highest weight
vector, will be a highest weight vector in $M(\lambda)$. Thus
$p_{i}(\lambda-\alpha)=\zeta_{N}^{i}p_{i}(\lambda).$
For $i=0$ this implies that $p_{0}(\lambda+r\alpha)=p_{0}(\lambda)$ for all
$r\in k$, and for $i>0$ this forces
$p_{i}(\lambda-n\alpha)=\zeta_{N}^{ni}p_{i}(\lambda).$
Since $p_{i}$ is a polynomial, this forces $p_{i}(\lambda)=0$, so that $p_{i}$
vanishes on the hyperplane $(\lambda,\alpha)=0$.
Using the $\rho$-shifted $W$-invariance of these polynomials, these conditions
imply that $p_{0}$ is constant along all hyperplanes of the form
$(\lambda+\rho,\alpha)=0$, so that $p_{0}\in HC(\mathcal{Z})$ by Sergeev’s
description of $HC(\mathcal{Z})$. On the other hand we find that for $i>0$,
$t_{\mathfrak{g}}$ divides $p_{i}$ (see the proof of 10.10), so that $p_{i}\in
HC(\mathcal{A}_{-1})$.
Now we may take, for each $i$, an element
$u_{i}\in\mathcal{A}_{\zeta_{N}^{i}}$ such that
$HC(u_{i})=p_{i}$, and consider the element:
$c-u_{0}-\cdots-u_{N-1}.$
By construction this acts by 0 on all Verma modules, and thus is zero so that
$c\in\mathcal{Z}_{full}$, finishing the proof. ∎
###### Theorem 10.21.
The following subalgebras of $\mathcal{U}\mathfrak{g}$ agree with each other:
1. (1)
$\mathcal{Z}_{full}$;
2. (2)
the centralizer of $(\mathcal{U}\mathfrak{g})_{0}$;
3. (3)
the center of $(\mathcal{U}\mathfrak{g})_{0}$; and
4. (4)
the collection of elements in $\mathcal{U}\mathfrak{g}$ which act by
$\mathbb{Z}$-graded constants on every irreducible finite-dimensional
representation of $\mathfrak{g}$.
###### Proof.
The only nontrivial equality is (2)$\iff$(4). However by [LM94], the
finite-dimensional irreducible representations of $\mathfrak{g}$ form a
complete set, meaning their collective annihilator is trivial. Therefore if an
element of $\mathcal{U}\mathfrak{g}$ acts by $\mathbb{Z}$-graded constants on
every irreducible finite-dimensional representation of $\mathfrak{g}$, it
commutes with $(\mathcal{U}\mathfrak{g})_{0}$ under every such representation,
and thus commutes with $(\mathcal{U}\mathfrak{g})_{0}$. ∎
### 10.6. Twisted trace functions
We continue to let $\mathfrak{g}$ be a basic algebra of type I as in the
previous section, and let $c\in k^{\times}$. Then for a simple $G$-module $L$,
let $L_{0}=L^{\mathfrak{g}_{1}}\subseteq L$ be the invariants of
$\mathfrak{g}_{1}$ so that
$L=L_{0}\oplus\mathfrak{g}_{-1}L_{0}\oplus\Lambda^{2}\mathfrak{g}_{-1}L_{0}\oplus\cdots\oplus\Lambda^{top}\mathfrak{g}_{-1}L_{0}$
defines a $\mathbb{Z}$-grading on $L$. Write
$L_{-i}=\Lambda^{i}\mathfrak{g}_{-1}L_{0}$, and define the operator
$T_{c}\in\operatorname{End}(L)$ by declaring that $T_{c}$ preserves the
$\mathbb{Z}$-grading and $T_{c}$ acts on $L_{-i}$ by the scalar $c^{-i}$.
###### Lemma 10.22.
The submodule $L^{*}\boxtimes L\subseteq k[G]$ contains a unique
$G_{c}$-invariant function $f_{c}$ such that
$f_{c}(eG)=\sum\limits_{i\geq 0}(-1)^{i}c^{i}\operatorname{dim}L_{-i}.$
###### Proof.
The operator $T_{c}$ defined above is $G_{c^{-1}}$-invariant in
$\operatorname{End}(L)$. If we apply the braiding isomorphism $L\otimes
L^{*}\cong L^{*}\otimes L$, $T_{c}$ becomes a $G_{c}$-invariant element, which
we write as $f_{c}$. It is now straightforward to check the above formula for
$f_{c}(eG)$. ∎
###### Definition 10.23.
Let $L$ be a simple $G$-module with the $\mathbb{Z}$-grading as above. Define
the polynomial $p_{L}\in\mathbb{Z}[c]$ to be
$p_{L}(c)=f_{c}(eG)=\sum\limits_{i\geq
0}(-1)^{i}c^{i}\operatorname{dim}L_{-i}.$
Observe that $0\leq\deg p_{L}\leq\operatorname{dim}\mathfrak{g}_{-1}$.
###### Lemma 10.24.
If $L$ is projective, then $p_{L}(c)\neq 0$ if and only if $c\neq 1$. In fact,
$p_{L}=\operatorname{dim}L_{0}(1-c)^{\operatorname{dim}\mathfrak{g}_{1}}.$
###### Proof.
The second statement clearly implies the first, and it follows from the fact
that
$L=\operatorname{Ind}_{\mathfrak{g}_{0}\oplus\mathfrak{g}_{1}}^{\mathfrak{g}}L_{0}$
is a Kac-module (see for instance [Kac77]). Thus we have
$p_{L}(c)=\sum\limits_{i}(-1)^{i}c^{i}\operatorname{dim}L_{0}\begin{pmatrix}\operatorname{dim}\mathfrak{g}_{1}\\\
i\end{pmatrix}=\operatorname{dim}L_{0}(1-c)^{\operatorname{dim}\mathfrak{g}_{1}}.$
However we may give another proof which generalizes to other situations. If
$L$ is projective of highest weight $\lambda$, then by 10.3 we have
$HC(\operatorname{ev}_{eK}v_{\mathfrak{g}_{c}})(\lambda)\neq 0$. Thus
$f_{c}\in L^{*}\boxtimes L$ must not vanish at $eG$, i.e. we must have
$p_{L}(c)\neq 0$. ∎
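As a minimal illustration (assuming the standard conventions for $\mathfrak{g}\mathfrak{l}(1|1)$, with $L_{0}$ even), let $L$ be a typical simple module: it is a Kac module with $\operatorname{dim}\mathfrak{g}_{1}=1$ and $\operatorname{dim}L_{0}=1$, so that
$p_{L}(c)=1-c.$
Then $p_{L}(-1)=2=\operatorname{dim}L$, while $p_{L}(1)=0=\operatorname{sdim}L$, and the order of vanishing at $c=1$ is $\operatorname{dim}\mathfrak{g}_{-1}=1$, consistent with lemma 10.24.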
###### Remark 10.25.
It is now possible to define, for $c\in k^{\times}$ and a simple $G$-module
$L$, the $c$-graded dimension of $L$ to be $p_{L}(c)$. This definition also
naturally arises if we consider type I algebras as Lie algebra objects in
the tensor category of $\mathbb{Z}$-graded vector spaces with the symmetric
structure lifted from the category of super vector spaces. Observe that
$p_{L}(-1)=\operatorname{dim}L$ and $p_{L}(1)=\operatorname{sdim}(L)$.
###### Remark 10.26.
It would be interesting to understand the roots of $p_{L}$ for irreducible
$L$, and in particular the order of vanishing at $c=1$, in terms of the
representation theory of $L$. For instance, $L$ is maximally atypical if and
only if $p_{L}(1)\neq 0$, while $L$ is projective if and only if the order of
vanishing at 1 is $\operatorname{dim}\mathfrak{g}_{-1}$.
### 10.7. Limiting to the center
Write
$\operatorname{Aut}:=\operatorname{Aut}(\mathfrak{g},\mathfrak{g}_{\overline{0}})$,
which is an algebraic group, and let $S\subseteq\operatorname{Aut}$ denote the
subset of automorphisms without nonzero fixed vectors in
$\mathfrak{g}_{\overline{1}}$. Then $S$ is open in $\operatorname{Aut}$, and
further is nonempty because $\delta\in S$.
If $\operatorname{Aut}$ has dimension bigger than 0 and
$\operatorname{Aut}^{0}\cap S$ is nonempty, then $\operatorname{Aut}^{0}\cap S$
will be Zariski dense in $\operatorname{Aut}^{0}$. Thus if we choose $(\phi_{c})_{c\in
k^{\times}}\subseteq S$ such that $\lim\limits_{c\to
0}\phi_{c}=\operatorname{id}_{\mathfrak{g}}$, it is reasonable to consider,
for $u\in\mathcal{Z}(\mathcal{U}\mathfrak{g}_{\overline{0}})$, the element
$\lim\limits_{c\to 0}\operatorname{ad}_{\phi_{c}}(v_{\mathfrak{g}})(u).$
This limit exists and is equal to
$\operatorname{ad}(v_{\mathfrak{g}})(u)\in\mathcal{Z}$. However it is quite
possibly zero, and in particular the above limit need not preserve Harish-
Chandra polynomials. A more fruitful approach is to choose elements (if they
exist) $u_{c}\in\mathcal{A}_{\phi_{c}}$ for each $c$ such that $HC(u_{c})$ is
constant. Then we may consider the limit (if it exists):
$u_{0}:=\lim\limits_{c\to 0}u_{c}.$
If $u_{0}$ does exist then it must be in $\mathcal{Z}$ and have
$HC(u_{0})=HC(u_{c})$ for all $c$. However in general such a limit need not
exist, for example in the case that $HC(\mathcal{A}_{c})=0$ for all $c$.
However for type I basic algebras this limit does exist, which we now prove.
Thus let $\mathfrak{g}$ be a type I basic Lie superalgebra, and let
$N=\operatorname{dim}\mathfrak{g}_{1}$.
###### Definition 10.27.
Consider the filtration by degree on $S(\mathfrak{h})$, and pull this back
under $HC$ to each $\mathcal{A}_{\zeta_{N}^{i}}$ to obtain a filtration
$K^{\bullet}\mathcal{A}_{\zeta_{N}^{i}}$. Then let
$G^{\bullet}\mathcal{Z}_{full}$ be the filtration on $\mathcal{Z}_{full}$
given by
$G^{n}\mathcal{Z}_{full}:=\sum\limits_{i}K^{n}\mathcal{A}_{\zeta_{N}^{i}}.$
This defines an algebra filtration on $\mathcal{Z}_{full}$ such that
$G^{n}\mathcal{Z}_{full}$ is finite-dimensional for each $n$.
Now we have the following:
###### Lemma 10.28.
For each $n\in\mathbb{N}$, there exist typical integral dominant weights
$\lambda_{1},\dots,\lambda_{s}$ such that the map
$G^{n}\mathcal{Z}_{full}\to\operatorname{End}(L(\lambda_{1})\oplus\cdots\oplus
L(\lambda_{s}))$ is injective.
###### Proof.
Choose a basis $p_{1},\dots,p_{k}$ of $HC(K^{n}\mathcal{A}_{-1})$, and extend
this to a basis of $HC(K^{n}\mathcal{Z})$,
$p_{1},\dots,p_{k},p_{k+1},\dots,p_{\ell}$.
The integral typical dominant weights are Zariski dense in $\mathfrak{h}^{*}$,
so necessarily there exist typical integral dominant weights
$\lambda_{1},\dots,\lambda_{s}$ such that evaluation at these points induces
an injective map on $HC(K^{n}\mathcal{Z})$. Now consider the map
$\phi:G^{n}\mathcal{Z}_{full}\to\operatorname{End}(L(\lambda_{1})\oplus\cdots\oplus
L(\lambda_{s})),$
and suppose it is not injective. We may write an arbitrary element in
$G^{n}\mathcal{Z}_{full}$ as
$\sum\limits_{0\leq j\leq N-1}\sum\limits_{1\leq i\leq
k}\alpha_{i,j}a_{i,j}+\sum\limits_{k<i\leq\ell}\beta_{i}z_{i}$
where $z_{i}\in K^{n}\mathcal{Z}$ and $a_{i,j}\in
K^{n}\mathcal{A}_{\zeta_{N}^{j}}$ such that $HC(z_{i})=p_{i}$ and
$HC(a_{i,j})=p_{i}$ for all valid $i$. Now suppose that
$\sum\limits_{0\leq j\leq N-1}\sum\limits_{1\leq i\leq
k}\alpha_{i,j}\phi(a_{i,j})+\sum\limits_{k<i\leq\ell}\beta_{i}\phi(z_{i})=0.$
Looking at the action on the highest weight vectors, this implies by our
choice of $\lambda_{1},\dots,\lambda_{s}$ that
$\sum\limits_{1\leq i\leq k}\left(\sum\limits_{0\leq j\leq
N-1}\alpha_{i,j}\right)p_{i}+\sum\limits_{k<i\leq\ell}\beta_{i}p_{i}=0.$
Thus we must have $\beta_{i}=0$ for all $i$, and $\sum\limits_{0\leq j\leq
N-1}\alpha_{i,j}=0$ for all $i$. Looking further at the action on the $(-r)$th
graded component of $L(\lambda_{1})\oplus\cdots\oplus L(\lambda_{s})$
according to the grading defined in the proof of lemma 10.9, we find that
$\sum\limits_{i,j}\alpha_{i,j}\zeta_{N}^{rj}p_{i}=0,$
which implies that $\sum\limits_{j}\alpha_{i,j}\zeta_{N}^{jr}=0$ for all $i,r$. By
the nonsingularity of the Vandermonde matrix, this implies $\alpha_{i,j}=0$,
and we are done. ∎
###### Corollary 10.29.
Choose an element $p\in HC(\mathcal{A}_{-1})$, and let
$a_{\lambda,p}\in\mathcal{A}_{\lambda}$ satisfy $HC(a_{\lambda,p})=p$ for all
$\lambda\in k^{\times}$. Then as $\lambda\to 1$, $a_{\lambda,p}$ converges in
$G^{2\deg p}\mathcal{Z}_{full}$ to the unique central element $z_{p}$ with
$HC(z_{p})=p$.
###### Proof.
Choose an embedding $\phi:G^{2\deg
p}\mathcal{Z}_{full}\to\operatorname{End}(L(\lambda_{1})\oplus\cdots\oplus
L(\lambda_{s}))$ using lemma 10.28. Now in
$\operatorname{End}(L(\lambda_{i}))$ we have that $a_{\lambda,p}-z_{p}$ acts
on $\Lambda^{j}\mathfrak{g}_{-1}\otimes L_{0}(\lambda_{i})$ as
$p(\lambda_{i})(\lambda^{j}-1).$
Thus if we take $\lambda\to 1$ we get convergence as elements of
$\operatorname{End}(L(\lambda_{1})\oplus\cdots\oplus L(\lambda_{s}))$. Since
$\phi$ is an injective linear map, we are done. ∎
###### Remark 10.30.
It is now possible, in principle, to obtain explicit formulas for the elements
of $\mathcal{Z}$ whose Harish-Chandra image lies in $HC(\mathcal{A}_{-1})$. In
the case of $\mathfrak{g}\mathfrak{l}(1|1)$ for instance, this would give all
elements of the center with trivial constant term. For example, we may produce
the known formula for the element of the center whose Harish-Chandra
polynomial is a scalar multiple of
$t_{\mathfrak{g}}=\prod\limits_{\alpha\in\Delta_{1}^{+}}(h_{\alpha}+(\alpha,\rho)).$
Let $u_{1},\dots,u_{N}$ be a basis of $\mathfrak{g}_{1}$ and
$v_{1},\dots,v_{N}$ a basis of $\mathfrak{g}_{-1}$, and write $V=v_{1}\cdots
v_{N}\in\mathcal{U}\mathfrak{g}$. Then by 6.11, $v_{\mathfrak{g}}=u_{1}\cdots
u_{N}v_{1}\cdots v_{N}$, and we see that
$\operatorname{ad}_{c}(v_{\mathfrak{g}})(1)=(1-c)^{N}\operatorname{ad}_{c}(u_{1}\cdots u_{N})(V)=(1-c)^{N}\sum\limits_{I\subseteq\\{1,\dots,N\\}}(-1)^{Nl+i_{1}+\dots+i_{l}}c^{-l}u_{I^{c}}V\tilde{u_{I}}.$
Here the sum runs over all subsets $I=\\{i_{1}<\dots<i_{l}\\}$ of
$\\{1,\dots,N\\}$, we write $l=|I|$, and $I^{c}$ denotes the complement of $I$.
Further, for a subset $J=\\{j_{1}<\dots<j_{l}\\}\subseteq\\{1,\dots,N\\}$ we define
$u_{J}=u_{j_{1}}\cdots u_{j_{l}},\ \ \ \ \tilde{u_{J}}=u_{j_{l}}\cdots
u_{j_{1}}.$
If we divide $\operatorname{ad}_{c}(v_{\mathfrak{g}})(1)$ by $(1-c)^{N}$, the
Harish-Chandra projection of the resulting element is constant and equal to
$HC(u_{1}\cdots u_{N}v_{1}\cdots v_{N})$, which is $t_{\mathfrak{g}}$ up to a
constant. Taking the limit $c\to 1$ we obtain the following element of the
center:
$\sum\limits_{I\subseteq\\{1,\dots,N\\}}(-1)^{Nl+i_{1}+\dots+i_{l}}u_{I^{c}}V\tilde{u_{I}}.$
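As a check in the smallest case $\mathfrak{g}=\mathfrak{g}\mathfrak{l}(1|1)$ (so $N=1$, with $u_{1}$ odd positive and $V=v_{1}$), the sum has two terms, $I=\emptyset$ and $I=\\{1\\}$, giving
$u_{1}v_{1}+(-1)^{1+1}v_{1}u_{1}=[u_{1},v_{1}],$
which, taking $u_{1}=E_{12}$ and $v_{1}=E_{21}$, is the central element $E_{11}+E_{22}$ of $\mathcal{U}\mathfrak{g}\mathfrak{l}(1|1)$.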
In general the above process will not be as straightforward, as for a general
$z_{0}\in\mathcal{Z}(\mathcal{U}\mathfrak{g}_{\overline{0}})$ we have that
$\operatorname{ad}_{c}(v_{\mathfrak{g}})(z_{0})=(1-c)^{N}\sum\limits_{I\subseteq\\{1,\dots,N\\}}(-1)^{Nl+i_{1}+\dots+i_{l}}c^{-l}u_{I^{c}}Vz_{0}\tilde{u_{I}}+l.o.t.$
where $l.o.t.$ denotes terms of lower order in $(1-c)$. Thus we cannot divide
by $(1-c)^{N}$. An (albeit tedious) way to overcome this is to take
$k[c,(1-c)^{-1}]$ linear combinations of elements
$\operatorname{ad}_{c}(v_{\mathfrak{g}})(z_{0})$ in order to obtain an element
in $\mathcal{A}_{c}$ with constant Harish-Chandra polynomial. For
$\mathfrak{g}\mathfrak{l}(1|1)$ this could certainly be worked out, but for
higher rank superalgebras the elements of the center of
$\mathcal{U}\mathfrak{g}_{\overline{0}}$ are more complicated, making this
process more challenging.
However we note that in [Gor04] a method is given for algorithmically
computing any element of $\mathcal{Z}$ with a given Harish-Chandra projection,
based on an idea originally due to Kac.
## 11. Appendix
In this section we generalize the results of section 8 and section 9 to the
case where $G$ is an arbitrary quasireductive supergroup, i.e. we remove the
assumption that it is Cartan-even. (We thank Maria Gorelik and Vera Serganova
for discussions that inspired the results of this section.) We will define
Cartan subspaces and the Harish-Chandra homomorphism, with the ultimate goal
of proving 9.11.
### 11.1. Cartan subspaces and the Iwasawa decomposition
Let $G$ be an arbitrary quasireductive supergroup with involution $\theta$ and
a corresponding symmetric subgroup $K$. As always write
$\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}$ for the eigenspace decomposition
of $\theta$ on $\mathfrak{g}$. Choose a Cartan subspace
$\mathfrak{a}_{\overline{0}}\subseteq\mathfrak{p}_{\overline{0}}$, and extend
it to a $\theta$-stable Cartan subalgebra
$\mathfrak{h}_{\overline{0}}\subseteq\mathfrak{g}_{\overline{0}}$. Then let
$\mathfrak{h}=\mathfrak{c}(\mathfrak{h}_{\overline{0}})$, so that
$\mathfrak{h}$ is a $\theta$-stable Cartan subalgebra of $\mathfrak{g}$. We
may then write $\mathfrak{h}=\mathfrak{t}\oplus\mathfrak{a}$ for the
eigenspace decomposition of $\theta$, where $\mathfrak{t}$ is the fixed
subspace and $\mathfrak{a}$ is the $(-1)$-eigenspace.
###### Definition 11.1.
We define $\mathfrak{a}$ to be a Cartan subspace of $\mathfrak{p}$ for the
pair $(\mathfrak{g},\mathfrak{k})$.
###### Remark 11.2.
It is known that all choices of $\mathfrak{h}_{\overline{0}}$ constructed in
this way are conjugate under $K_{0}$, thus all Cartan subalgebras constructed
in this way are too, so that a Cartan subspace $\mathfrak{a}$ is well-defined
up to conjugation by $K_{0}$.
###### Remark 11.3 (Caution).
If $\mathfrak{a}\neq\mathfrak{a}_{\overline{0}}$ then $\mathfrak{a}$ need not
be a subalgebra of $\mathfrak{g}$. Indeed,
$[\mathfrak{a}_{\overline{1}},\mathfrak{a}_{\overline{1}}]\subseteq\mathfrak{t}_{\overline{0}}$.
We may decompose $\mathfrak{g}$ into eigenspaces under the action of
$\mathfrak{a}_{\overline{0}}$, and write $\overline{\Delta}$ for the non-zero
weights of this action. Then we may choose a decomposition
$\overline{\Delta}=\overline{\Delta}^{+}\sqcup\overline{\Delta}^{-}$ into
positive and negative roots. Define
$\mathfrak{n}=\bigoplus\limits_{\overline{\alpha}\in\overline{\Delta}^{+}}\mathfrak{g}_{\overline{\alpha}}.$
###### Definition 11.4.
We say that the supersymmetric pair $(\mathfrak{g},\mathfrak{k})$ admits an
Iwasawa decomposition if we have a decomposition
$\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{a}\oplus\mathfrak{n}$ for some
choice of $\mathfrak{n}$ as above.
### 11.2. Harish-Chandra homomorphism
We suppose from now on that $(\mathfrak{g},\mathfrak{k})$ admits an Iwasawa
decomposition. Consider the Lie superalgebra
$\mathfrak{t}_{\overline{0}}\oplus\mathfrak{a}$. Then
$\mathfrak{t}_{\overline{0}}$ is a central subalgebra and contains the derived
subalgebra, so we may take the quotient by it to obtain the abelian Lie
superalgebra
$(\mathfrak{t}_{\overline{0}}\oplus\mathfrak{a})/\mathfrak{t}_{\overline{0}}$.
We write
$\mathfrak{A}:=\mathcal{U}((\mathfrak{t}_{\overline{0}}\oplus\mathfrak{a})/\mathfrak{t}_{\overline{0}})$,
and we view it as the supersymmetric polynomial algebra on $\mathfrak{a}$.
If we restrict the natural map
$\mathcal{U}\mathfrak{g}\to\mathcal{U}\mathfrak{g}/\mathcal{U}\mathfrak{g}\mathfrak{k}$
to $\mathcal{U}(\mathfrak{t}_{\overline{0}}\oplus\mathfrak{a})$, we obtain the
projection
$\mathcal{U}(\mathfrak{t}_{\overline{0}}\oplus\mathfrak{a})\to\mathcal{U}(\mathfrak{t}_{\overline{0}}\oplus\mathfrak{a})/(\mathfrak{t}_{\overline{0}})\cong\mathfrak{A},$
so that $\mathfrak{A}$ is naturally a subspace of
$\mathcal{U}\mathfrak{g}/\mathcal{U}\mathfrak{g}\mathfrak{k}$. Further, by the
PBW theorem we have a decomposition
$\mathcal{U}\mathfrak{g}/\mathcal{U}\mathfrak{g}\mathfrak{k}=\mathfrak{A}\oplus\mathfrak{n}\mathcal{U}\mathfrak{g}/(\mathfrak{n}\mathcal{U}\mathfrak{g}\cap\mathcal{U}\mathfrak{g}\mathfrak{k}).$
###### Definition 11.5.
We define the Harish-Chandra homomorphism
$HC:\mathcal{U}\mathfrak{g}/\mathcal{U}\mathfrak{g}\mathfrak{k}\to\mathfrak{A}$
to be the projection along
$\mathfrak{n}\mathcal{U}\mathfrak{g}/(\mathfrak{n}\mathcal{U}\mathfrak{g}\cap\mathcal{U}\mathfrak{g}\mathfrak{k})$.
Notice that this agrees with the usual Harish-Chandra map when
$\mathfrak{h}=\mathfrak{h}_{\overline{0}}$, i.e. $G$ is Cartan-even.
Now $\mathfrak{t}_{\overline{0}}\oplus\mathfrak{a}_{\overline{1}}$ is a
subalgebra of $\mathfrak{k}^{\prime}$, and this acts on $\mathfrak{A}$ by left
multiplication on the quotient, and $\mathfrak{t}_{\overline{0}}$ acts
trivially while $\mathfrak{a}_{\overline{1}}$ acts freely. Thus we obtain a
free action on $\mathfrak{A}$ of the exterior algebra
$\Lambda^{\bullet}\mathfrak{a}_{\overline{1}}$, and therefore the invariants of
this action are given by
$\mathfrak{A}^{\mathfrak{a}_{\overline{1}}}=S(\mathfrak{a}_{\overline{0}})\Lambda^{top}\mathfrak{a}_{\overline{1}}.$
Since the decomposition
$\mathcal{U}\mathfrak{g}/\mathcal{U}\mathfrak{g}\mathfrak{k}=\mathfrak{A}\oplus\mathfrak{n}\mathcal{U}\mathfrak{g}/(\mathfrak{n}\mathcal{U}\mathfrak{g}\cap\mathcal{U}\mathfrak{g}\mathfrak{k})$
is $\mathfrak{t}_{\overline{0}}\oplus\mathfrak{a}_{\overline{1}}$-invariant,
we clearly obtain the following lemma.
###### Lemma 11.6.
The Harish-Chandra morphism restricts to a map
$HC:\mathcal{A}_{G/K}\to\mathfrak{A}^{\mathfrak{a}_{\overline{1}}}=S(\mathfrak{a}_{\overline{0}})\Lambda^{top}\mathfrak{a}_{\overline{1}}.$
###### Remark 11.7.
By lemma 11.6, it follows that given a ghost distribution
$\gamma\in\mathcal{A}_{G/K}$, we may obtain a polynomial in
$S(\mathfrak{a}_{\overline{0}})$ by writing $HC(\gamma)=p_{\gamma}\xi$, where
$\xi\in\Lambda^{top}\mathfrak{a}_{\overline{1}}$ is some chosen nonzero
element. Thus $p_{\gamma}$ is well-defined up to a scalar, so that its
vanishing set is well-defined.
The following lemma and corollary are proven in the same way as in lemma 8.8,
although note that the statement is slightly different.
###### Lemma 11.8.
Give $\mathcal{U}\mathfrak{g}/\mathcal{U}\mathfrak{g}\mathfrak{k}$ the same
filtration $F^{\bullet}$ as defined in lemma 8.8. Viewing $\mathfrak{A}$ as
the supersymmetric polynomial algebra on $\mathfrak{a}$, give it a grading by
degree, where elements of $\mathfrak{a}_{\overline{0}}$ have degree one and
elements of $\mathfrak{a}_{\overline{1}}$ have degree $1/2$. Then we have
$HC(F^{r})\subseteq\sum\limits_{s\leq r/2}\mathfrak{A}^{s}.$
###### Corollary 11.9.
Let $z\in\operatorname{Dist}^{r}(G_{0}/K_{0},eK_{0})^{K_{0}}$ lie in the $r$th
part of the standard filtration on $\operatorname{Dist}(G_{0}/K_{0},eK_{0})$
defined in definition 2.2. Then $v_{\mathfrak{k}^{\prime}}\cdot
z\in\mathcal{A}_{G/K}$ has
$HC(v_{\mathfrak{k}^{\prime}}\cdot
z)\in\mathfrak{A}^{r+\operatorname{dim}\mathfrak{p}_{\overline{1}}/2}.$
Further, in the notation of remark 11.7, $\deg
p_{v_{\mathfrak{k}^{\prime}}\cdot
z}\leq\operatorname{dim}\mathfrak{n}_{\overline{1}}/2+r$.
###### Proof.
The first statement follows from lemma 11.8. The second statement follows from
remark 11.7, where we showed that $HC(v_{\mathfrak{k}^{\prime}}\cdot
z)=p_{\gamma}\xi$ for some $p_{\gamma}\in S(\mathfrak{a}_{\overline{0}})$ and
a non-zero element $\xi\in\Lambda^{top}\mathfrak{a}_{\overline{1}}$. Since
$\deg\xi=\operatorname{dim}\mathfrak{a}_{\overline{1}}/2$, and
$\operatorname{dim}\mathfrak{p}_{\overline{1}}=\operatorname{dim}\mathfrak{a}_{\overline{1}}+\operatorname{dim}\mathfrak{n}_{\overline{1}}$,
the bound follows. ∎
### 11.3. Ghost distributions
Since $(\mathfrak{g},\mathfrak{k})$ admits an Iwasawa decomposition, it will
admit an open orbit at $eK$ under an Iwasawa Borel subgroup $B$, whose Lie
superalgebra contains $\mathfrak{a}\oplus\mathfrak{n}$. We write
$\Lambda^{+}\subseteq\mathfrak{a}_{\overline{0}}^{*}$ for the set of
$B$-dominant weights $\lambda$ such that there exists an irreducible
$B$-submodule of highest weight $\lambda$ in $k[G/K]$.
Let $\lambda\in\Lambda^{+}$, and let $L_{\lambda}$ be an irreducible
$B$-submodule of $k[G/K]$ of highest weight $\lambda$. In particular
$L_{\lambda}$ is an irreducible $\mathfrak{h}$-module. Because $B$ admits an
open orbit at $eK$ we must have that $\operatorname{ev}_{eK}:L_{\lambda}\to k$
is non-zero.
###### Remark 11.10.
Unlike in the Cartan-even case, there is no multiplicity-free statement for
irreducible $B$-submodules of $k[G/K]$. This is due to the fact that the
Cartan subalgebra is no longer abelian so its irreducible representations need
not be dimension 1. Therefore our choice of $L_{\lambda}$ is not unique.
However, in the case of $G\times G/G$ we will have multiplicity-freeness for
irreducible submodules, due to the extraordinary amount of symmetry present.
Let $\gamma\in\mathcal{A}_{G/K}$. Then $HC(\gamma)$ defines a functional on
$L_{\lambda}$. For what follows, note that lemma 8.4 still holds here, i.e.
$\langle K^{\prime}\cdot
L_{\lambda}\rangle=\mathcal{U}\mathfrak{k}^{\prime}L_{\lambda}$.
###### Lemma 11.11.
If $HC(\gamma):L_{\lambda}\to k$ is nonzero then
$\mathcal{U}\mathfrak{k}^{\prime}L_{\lambda}$ contains a copy of
$I_{K^{\prime}}(k)$. Further, $\mathcal{U}\mathfrak{k}^{\prime}L_{\lambda}$
contains a copy of $I_{K^{\prime}}(k)$ if and only if
$HC(\gamma):L_{\lambda}\to k$ is non-zero for some
$\gamma\in\mathcal{A}_{G/K}$.
###### Proof.
Follows from the work done in section 7.3. ∎
Consider the irreducible $\mathfrak{h}$-module $L_{\lambda}$. We have already
noted that $\mathfrak{t}_{\overline{0}}$ acts trivially, thus it is really a
$\mathfrak{h}/\mathfrak{t}_{\overline{0}}$-module. Both
$\mathfrak{a}_{\overline{1}}$ and $\mathfrak{t}_{\overline{1}}$ sit inside the
quotient superalgebra as commutative subalgebras. The action of
$\mathfrak{h}/\mathfrak{t}_{\overline{0}}$ on $L_{\lambda}$ is given by the
unique irreducible super representation of the Clifford superalgebra
$Cl(\mathfrak{h}_{\overline{1}},(-,-)_{\lambda})$, where
$(u,v)_{\lambda}=\lambda([u,v]).$
Thus $W_{\lambda}:=\operatorname{ker}(-,-)_{\lambda}$ acts trivially on
$L_{\lambda}$, and
$\mathfrak{a}_{\overline{1}}/(W_{\lambda}\cap\mathfrak{a}_{\overline{1}})$ and
$\mathfrak{t}_{\overline{1}}/(W_{\lambda}\cap\mathfrak{t}_{\overline{1}})$
define complementary maximal isotropic subspaces of
$\mathfrak{h}_{\overline{1}}/\operatorname{ker}(-,-)_{\lambda}$.
###### Lemma 11.12.
1. (1)
As an $\mathfrak{a}_{\overline{1}}$-module, $L_{\lambda}$ is isomorphic to
$\Lambda^{\bullet}\left(\mathfrak{a}_{\overline{1}}/(W_{\lambda}\cap\mathfrak{a}_{\overline{1}})\right)$;
2. (2)
as an $\mathfrak{t}_{\overline{1}}$-module, $L_{\lambda}$ is isomorphic to
$\Lambda^{\bullet}\left(\mathfrak{t}_{\overline{1}}/(W_{\lambda}\cap\mathfrak{t}_{\overline{1}})\right)$.
Further, the socle of $L_{\lambda}$ as an $\mathfrak{a}_{\overline{1}}$-module
generates $L_{\lambda}$ as a $\mathfrak{t}_{\overline{1}}$-module, and the
socle as a $\mathfrak{t}_{\overline{1}}$-module generates it as an
$\mathfrak{a}_{\overline{1}}$-module.
###### Proof.
Both of these statements follow from the fact that we may realize the irreducible
representation of an even-dimensional Clifford algebra as an exterior algebra
$\Lambda^{\bullet}\langle\xi_{1},\dots,\xi_{n}\rangle$. Then if $V_{1},V_{2}$
are complementary maximal isotropic subspaces, we may realize $V_{1}$ as
acting by multiplication by $\xi_{1},\dots,\xi_{n}$, and $V_{2}$ as acting by
$\partial_{\xi_{1}},\dots,\partial_{\xi_{n}}$. ∎
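As a minimal illustration of this realization (our own example, not from the text, and up to normalization of the bilinear form): take the rank-one case with $V_{1}=\langle u\rangle$, $V_{2}=\langle t\rangle$, $(u,t)=1$ and $(u,u)=(t,t)=0$.

```latex
% Rank-one case: the irreducible Clifford module is
% \Lambda^{\bullet}\langle\xi\rangle = k \oplus k\xi, with u acting by
% multiplication by \xi and t acting by \partial_{\xi}:
u\cdot 1=\xi,\qquad u\cdot\xi=0,\qquad t\cdot 1=0,\qquad t\cdot\xi=1,
% so (ut+tu) acts by (u,t)\,\mathrm{id}, as the Clifford relation requires.
% The socle as a u-module is k\xi, and t\cdot\xi = 1 shows that this socle
% generates the whole module under V_2, matching the final claim of the lemma.
```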
###### Proposition 11.13.
$\mathcal{U}\mathfrak{k}^{\prime}L_{\lambda}$ contains at most one copy of
$I_{K^{\prime}}(k)$.
###### Proof.
The $\mathfrak{k}^{\prime}$-semicoinvariants of
$\mathcal{U}\mathfrak{k}^{\prime}L_{\lambda}$ are determined by their values
on $L_{\lambda}$, and give rise to an
$\mathfrak{a}_{\overline{1}}$-coinvariant on $L_{\lambda}$. However as an
$\mathfrak{a}_{\overline{1}}$-module $L_{\lambda}$ admits a unique
$\mathfrak{a}_{\overline{1}}$-coinvariant by lemma 11.12, so there can be at
most one $\mathfrak{k}^{\prime}$-semicoinvariant on
$\mathcal{U}\mathfrak{k}^{\prime}L_{\lambda}$ up to scalar. Thus at most one
copy of $I_{K^{\prime}}(k)$ can arise. ∎
###### Lemma 11.14.
If $\mathcal{U}\mathfrak{k}^{\prime}L_{\lambda}$ contains a copy of
$I_{K^{\prime}}(k)$, then
$\operatorname{ker}(-,-)_{\lambda}\cap\mathfrak{a}_{\overline{1}}=0$, or
equivalently $L_{\lambda}\cong\Lambda^{\bullet}\mathfrak{a}_{\overline{1}}$ as
an $\mathfrak{a}_{\overline{1}}$-module.
###### Proof.
Write $\mathcal{U}\mathfrak{k}^{\prime}L_{\lambda}=I_{K^{\prime}}(k)\oplus M$
for some complementary submodule $M$. Then the restriction of
$I_{K^{\prime}}(k)$ to
$\mathfrak{t}_{\overline{0}}\oplus\mathfrak{a}_{\overline{1}}$ must remain
injective. In particular if $g\in I_{K^{\prime}}(k)$ generates the head, then
we may assume it is a weight vector of weight $0$ so that
$\mathfrak{t}_{\overline{0}}g=0$, and further we will have
$\Lambda^{top}\mathfrak{a}_{\overline{1}}g\neq 0$. If we choose
$f\notin\mathfrak{a}_{\overline{1}}L_{\lambda}$, then
$\mathcal{U}\mathfrak{k}^{\prime}f$ will generate the copy of
$I_{K^{\prime}}(k)$ and thus we must also have
$\Lambda^{top}\mathfrak{a}_{\overline{1}}f\neq 0$. The statement follows. ∎
###### Proposition 11.15.
Let $\gamma\in\mathcal{A}_{G/K}$ and write $HC(\gamma)=p_{\gamma}\xi$ as in
remark 11.7. Then the following are equivalent:
1. (1)
$HC(\gamma):L_{\lambda}\to k$ is nonzero;
2. (2)
$\mathcal{U}\mathfrak{k}^{\prime}L_{\lambda}$ contains a copy of
$I_{K^{\prime}}(k)$ and $p_{\gamma}(\lambda)\neq 0$;
3. (3)
$L_{\lambda}$ is a projective $\mathfrak{a}_{\overline{1}}$-module and
$p_{\gamma}(\lambda)\neq 0$.
###### Proof.
Clearly $(2)\Rightarrow(3)$. If either $(3)$ or $(1)$ holds, then by lemma 11.14
$L_{\lambda}$ is projective over
$\mathfrak{t}_{\overline{0}}\oplus\mathfrak{a}_{\overline{1}}$, so in these
cases let $f\notin\mathfrak{a}_{\overline{1}}L_{\lambda}$ so that
$\mathcal{U}\mathfrak{k}^{\prime}f$ generates $I_{K^{\prime}}(k)$ whenever
$I_{K^{\prime}}(k)$ is a submodule of
$\mathcal{U}\mathfrak{k}^{\prime}L_{\lambda}$. Then $HC(\gamma)$ is non-zero
on $L_{\lambda}$ if and only if $HC(\gamma)(f)\neq 0$. However
$HC(\gamma)(f)=p_{\gamma}(\lambda)(\xi f)(eK).$
Since $\xi f$ spans the socle of $L_{\lambda}$ as an
$\mathfrak{a}_{\overline{1}}$-module, it must be non-vanishing under the
unique (up to scalar) $\mathfrak{t}_{\overline{1}}$-coinvariant on
$L_{\lambda}$ by lemma 11.12. Since $\operatorname{ev}_{eK}:L_{\lambda}\to k$
is non-zero and is a $\mathfrak{t}_{\overline{1}}$-coinvariant, necessarily
$(\xi f)(eK)\neq 0$. Thus $HC(\gamma)(f)\neq 0$ if and only if
$p_{\gamma}(\lambda)\neq 0$. From these arguments $(3)\Rightarrow(1)$ and
$(1)\Rightarrow(2)$ follow.
∎
We have a linear map
$\mathfrak{a}_{\overline{1}}\otimes\mathfrak{a}_{\overline{0}}^{*}\to\mathfrak{t}_{\overline{1}}^{*}$
given by
$(u\otimes\lambda)\mapsto(u,-)_{\lambda}.$
Let $U_{reg}\subseteq\mathfrak{a}_{\overline{0}}^{*}$ denote the locus where
this defines an injective morphism
$\mathfrak{a}_{\overline{1}}\to\mathfrak{t}_{\overline{1}}^{*}$. Clearly
$U_{reg}$ is Zariski open, although it need not be nonempty. Then we have
shown:
###### Corollary 11.16.
If $\lambda\in U_{reg}\cap\Lambda^{+}$, then $HC(\gamma)$ is non-zero on
$L_{\lambda}$ if and only if $p_{\gamma}(\lambda)\neq 0$.
Further, we clearly have the following sufficient condition for when $U_{reg}$
is empty.
###### Corollary 11.17.
If
$\operatorname{dim}\mathfrak{a}_{\overline{1}}>\operatorname{dim}\mathfrak{t}_{\overline{1}}$,
or if there exists a nonzero element $u\in\mathfrak{a}_{\overline{1}}$ such
that $[u,\mathfrak{t}_{\overline{1}}]=0$, then $U_{reg}=\emptyset$.
We also note the following proposition, which is proven in the exact same way
as 8.18.
###### Proposition 11.18.
Suppose that $L$ is an irreducible $G$-submodule of $k[G/K]$ of $B$-highest
weight $\lambda$, and suppose that $L$ contains a copy of $I_{K^{\prime}}(k)$
(equivalently $HC(\gamma):L_{\lambda}\to k$ is nonzero for some
$\gamma\in\mathcal{A}_{G/K}$). Then $I_{G}(L)$ is a submodule of
$k[G]^{\operatorname{ber}_{\mathfrak{k}^{\prime}}}$.
### 11.4. The distribution $\operatorname{ev}_{eK}v_{\mathfrak{k}^{\prime}}$
The distribution $\operatorname{ev}_{eK}v_{\mathfrak{k}^{\prime}}$ arises in
the same way as in section 8.6, and the analogous results of section 8.6 still
hold. We write them out now. Let
$HC(\operatorname{ev}_{eK}v_{\mathfrak{k}^{\prime}})=p_{G/K}\xi$ as in the
notation of remark 11.7.
###### Proposition 11.19.
The following are equivalent:
* •
$\operatorname{ev}_{eK}v_{\mathfrak{k}^{\prime}}:L_{\lambda}\to k$ is nonzero;
* •
$\mathcal{U}\mathfrak{k}^{\prime}L_{\lambda}$ contains a copy of
$I_{K^{\prime}}(k)$, and the $K^{\prime}$-invariant in $I_{K^{\prime}}(k)$ is
non-vanishing at $eK$;
* •
$L_{\lambda}$ is a projective $\mathfrak{a}_{\overline{1}}$-module and
$p_{G/K}(\lambda)\neq 0$.
###### Proof.
These follow from 11.15. ∎
###### Proposition 11.20.
Suppose that whenever a $B$-irreducible submodule $L_{\lambda}\subseteq
k[G/K]$ has $\mathcal{U}\mathfrak{k}^{\prime}L_{\lambda}$ containing
$I_{K^{\prime}}(k)$, the $K^{\prime}$-invariant is non-vanishing at $eK$. Then
1. (1)
$\mathcal{U}\mathfrak{k}^{\prime}L_{\lambda}$ contains $I_{K^{\prime}}(k)$ if
and only if $L_{\lambda}$ is projective over $\mathfrak{a}_{\overline{1}}$ and
$p_{G/K}(\lambda)\neq 0$; and
2. (2)
if $HC(\operatorname{ev}_{eK}v_{\mathfrak{k}^{\prime}}):L_{\lambda}\to k$ is
zero, then $HC(\gamma):L_{\lambda}\to k$ is zero for all
$\gamma\in\mathcal{A}_{G/K}$.
### 11.5. The case $G\times G/G$
Many of the results in section 9 still hold, and we will try to mention
explicitly when this is the case. To begin with, lemma 9.1 still holds.
Choose a Cartan subalgebra $\mathfrak{h}\subseteq\mathfrak{g}$. Then
$\mathfrak{h}\times\mathfrak{h}$ is a Cartan subalgebra of
$\mathfrak{g}\times\mathfrak{g}$ such that
$\mathfrak{a}=\\{(h,-h):h\in\mathfrak{h}\\}$ is a Cartan subspace of
$\mathfrak{p}$. If we choose a Borel subalgebra $\mathfrak{b}$ of
$\mathfrak{g}$ containing $\mathfrak{h}$, then
$\mathfrak{b}^{-}\times\mathfrak{b}^{+}$ becomes an Iwasawa Borel subalgebra
of $\mathfrak{g}\times\mathfrak{g}$. As in section 9, both
$(\mathfrak{g}\times\mathfrak{g},\mathfrak{g})$ and
$(\mathfrak{g}\times\mathfrak{g},\mathfrak{g}^{\prime})$ admit an Iwasawa
decomposition for any such choice of Borel subalgebra.
Further, if $L_{\lambda}\subseteq k[G]$ is an irreducible $B^{-}\times B^{+}$
submodule, then
$\mathcal{U}(\mathfrak{g}\times\mathfrak{g})L_{\lambda}=\mathcal{U}\mathfrak{g}L_{\lambda}=\mathcal{U}\mathfrak{g}^{\prime}L_{\lambda}.$
We again have $\Lambda^{+}=\\{(-\lambda,\lambda):\lambda\text{ is a
}\mathfrak{b}\text{-dominant weight}\\}$. By abuse of notation we will also
write $\Lambda^{+}$ for the set of $B$-dominant weights in
$\mathfrak{h}_{\overline{0}}^{*}$.
###### Lemma 11.21.
Let $L(\lambda):=L_{B}(\lambda)$ be the irreducible $G$-module of highest
weight $\lambda\in\Lambda^{+}$. Then one of the following two must occur:
1. (1)
$L(\lambda)^{*}\boxtimes L(\lambda)$ is an irreducible $G\times G$-module and
admits a unique even $G^{\prime}$-invariant.
2. (2)
$L(\lambda)^{*}\boxtimes L(\lambda)=L\oplus\Pi L$ is a sum of the two irreducible
$G\times G$-modules $L$ and $\Pi L$ which are parity shifts of one another. In
this case, both $L$ and $\Pi L$ admit a unique $G^{\prime}$-invariant, one
even and one odd.
###### Proof.
If $L(\lambda)\not\cong\Pi L(\lambda)$ then the first case happens. Otherwise
the second case happens. ∎
###### Definition 11.22.
For $\lambda\in\Lambda^{+}$, define $d(\lambda)\in\\{0,1\\}$ to be $0$ if
$L(\lambda)\not\cong\Pi L(\lambda)$, and $1$ otherwise. If $d(\lambda)=1$, set
$\frac{1}{2}L(\lambda)^{*}\boxtimes L(\lambda)$ to be the irreducible $G\times
G$-submodule of $L(\lambda)^{*}\boxtimes L(\lambda)$ which contains an even
$G^{\prime}$-invariant.
###### Proposition 11.23.
We have the following decomposition:
$\operatorname{soc}k[G]=\bigoplus\limits_{\lambda\in\Lambda^{+}}\frac{1}{2^{d(\lambda)}}L(\lambda)^{*}\boxtimes
L(\lambda).$
Further, the unique $G^{\prime}$-invariant of
$\frac{1}{2^{d(\lambda)}}L(\lambda)^{*}\boxtimes L(\lambda)$ evaluates (up to
a scalar) at $eG$ to $\text{tr}(L)$.
###### Proof.
The irreducible $G\times G$ modules are all of the form
$\frac{1}{2^{d(\lambda)d(\mu)}}L(\lambda)\boxtimes L(\mu)$, where
$\lambda,\mu\in\Lambda^{+}$. By Frobenius reciprocity, this admits an even
$G\times G$-equivariant morphism into $k[G]$ if and only if $\lambda=-\mu$ and
the unique $G^{\prime}$-invariant of it is even. The evaluation of the unique
$G^{\prime}$-invariant is the same computation as done in 9.4. ∎
###### Remark 11.24.
We have shown that the irreducible $B^{-}\times B^{+}$-submodules of $k[G]$
are given by $\frac{1}{2^{d(\lambda)}}L(\lambda)_{-\lambda}^{*}\boxtimes
L(\lambda)_{\lambda}$ and
$\mathcal{U}\mathfrak{g}^{\prime}\frac{1}{2^{d(\lambda)}}L(\lambda)_{-\lambda}^{*}\boxtimes
L(\lambda)_{\lambda}=\frac{1}{2^{d(\lambda)}}L(\lambda)^{*}\boxtimes
L(\lambda).$
Notice that in this setting
$\operatorname{dim}\mathfrak{t}_{\overline{1}}=\operatorname{dim}\mathfrak{a}_{\overline{1}}$,
so by lemma 11.14 if $\frac{1}{2^{d(\lambda)}}L(\lambda)^{*}\boxtimes
L(\lambda)$ contains $I_{G^{\prime}}(k)$ then necessarily
$\frac{1}{2^{d(\lambda)}}L(\lambda)_{-\lambda}^{*}\boxtimes
L(\lambda)_{\lambda}$ is a projective $\mathfrak{h}\times\mathfrak{h}$-module,
and it is not difficult to show this is equivalent to $L(\lambda)_{\lambda}$
being a projective $\mathfrak{h}$-module.
###### Lemma 11.25.
$L(\lambda)$ is a projective $G$-module if and only if
$\frac{1}{2^{d(\lambda)}}L(\lambda)^{*}\boxtimes L(\lambda)$ contains a copy
of $I_{G^{\prime}}(k)$.
###### Proof.
The forward direction is clear.
Conversely, if this module contains $I_{G^{\prime}}(k)$ then we know by 11.18
that $I_{G\times G}(\frac{1}{2^{d(\lambda)}}L(\lambda)^{*}\boxtimes
L(\lambda))$ is a $G\times G$ submodule of $k[G]$. (As in section 9, we once
again use that $G\times G/G\cong G\times G/G^{\prime}$ via
$\operatorname{id}\times\delta$.) Further, $L(\lambda)_{\lambda}$ must be a
projective $H$-module by remark 11.24, so that $L(\lambda)$ has no self-
extensions. If there was a non-trivial extension $V$ between $L(\lambda)$ and
$L(\mu)$, where $\mu\neq\lambda$, then the matrix coefficients morphism would
induce a $G\times G$ morphism
$V^{*}\boxtimes V\to k[G]$
such that the image is an indecomposable module with socle
$\frac{1}{2^{d(\lambda)}}L(\lambda)^{*}\boxtimes
L(\lambda)\oplus\frac{1}{2^{d(\mu)}}L(\mu)^{*}\boxtimes L(\mu).$
However $I_{G\times G}(\frac{1}{2^{d(\lambda)}}L(\lambda)^{*}\boxtimes
L(\lambda))$ splits off $k[G]$, so this cannot happen. It follows that
$L(\lambda)$ cannot have nontrivial extensions with any modules, and is
therefore projective. ∎
Let us look at the Harish-Chandra homomorphism. Translating to the enveloping
algebra, it defines a homomorphism
$HC:\mathcal{U}\mathfrak{g}\to\mathcal{U}\mathfrak{h}.$
Note that $\mathfrak{h}$ is no longer abelian in general. However this induces
a morphism
$HC:\mathcal{A}_{G\times G/G}\to\mathcal{A}_{H\times H/H}.$
Since $H_{0}$ is a central subgroup,
$\Lambda^{top}\mathfrak{h}_{\overline{1}}$ is a trivial $H_{0}$-module, and so
$\mathcal{A}_{H\times H/H}=(\mathcal{U}\mathfrak{h})^{\mathfrak{h}^{\prime}}$.
Write $T_{\mathfrak{h}}:=\operatorname{ad}^{\prime}(v_{\mathfrak{h}})(1)$.
Then since $\mathfrak{h}_{\overline{0}}$ is central, we clearly have
$\mathcal{A}_{H\times H/H}=S(\mathfrak{h}_{\overline{0}})T_{\mathfrak{h}}.$
Since
$HC(\operatorname{ad}^{\prime}(v_{\mathfrak{g}})(1))\in(\mathcal{U}\mathfrak{h})^{\mathfrak{h}^{\prime}}$,
we may write
$HC(\operatorname{ad}^{\prime}(v_{\mathfrak{g}})(1))=p_{1}T_{\mathfrak{h}}$.
For each $\lambda\in\mathfrak{h}_{\overline{0}}^{*}$ we have a bilinear form
$(-,-)_{\lambda}$ on $\mathfrak{h}_{\overline{1}}$, in other words we have a
linear map $(-,-):\mathfrak{h}_{\overline{0}}^{*}\to
S^{2}\mathfrak{h}_{\overline{1}}^{*}$. If we choose a basis for
$\mathfrak{h}_{\overline{1}}$, then we obtain a map
$S^{2}\mathfrak{h}_{\overline{1}}^{*}\to k$ given by taking the determinant of
the corresponding bilinear form. The composition defines a polynomial
$b_{H}\in S(\mathfrak{h}_{\overline{0}})$ of degree
$\operatorname{dim}\mathfrak{h}_{\overline{1}}$. Note that $b_{H}$ is
well-defined only up to
scalar. Further, $b_{H}(\lambda)\neq 0$ if and only if the irreducible
$H$-module of weight $\lambda$ is projective.
###### Definition 11.26.
Define the projectivity polynomial of $G$ with respect to $B$ and $H$ to be
$p_{G,B}:=p_{1}b_{H}$.
Notice that if $G=H$ we have $p_{G,B}=b_{H}$. The following theorem justifies
the name.
###### Theorem 11.27.
Let $G$ be a quasireductive supergroup with the Cartan subgroup $H$ and Borel
subgroup $B$ containing $H$. Then for a $B$-dominant weight $\lambda$,
$p_{G,B}(\lambda)\neq 0$ if and only if $L_{B}(\lambda)$ is projective.
Further $p_{G,B}$ is a polynomial of degree at most
$\operatorname{dim}\mathfrak{b}_{\overline{1}}$.
###### Proof.
If $p_{G,B}(\lambda)\neq 0$, then $b_{H}(\lambda)\neq 0$ which implies that
$\frac{1}{2^{d(\lambda)}}L(\lambda)^{*}_{-\lambda}\boxtimes
L(\lambda)_{\lambda}$ is projective. Since $p_{1}(\lambda)\neq 0$, by 11.20
$\frac{1}{2^{d(\lambda)}}L(\lambda)^{*}\boxtimes L(\lambda)$ contains
$I_{G^{\prime}}(k)$ and so is projective by lemma 11.25.
Conversely if $L(\lambda)$ is projective then $L(\lambda)_{\lambda}$ is
projective over $H$ so that $b_{H}(\lambda)\neq 0$. Further,
$\frac{1}{2^{d(\lambda)}}L(\lambda)^{*}_{-\lambda}\boxtimes
L(\lambda)_{\lambda}$ contains $I_{G^{\prime}}(k)$, and so by 11.20 and 11.23
$p_{1}(\lambda)\neq 0$. It follows that $p_{G,B}(\lambda)\neq 0$. ∎
Define
$T_{\mathfrak{g}}:=\operatorname{ad}^{\prime}(v_{\mathfrak{g}})(1)\in\mathcal{U}\mathfrak{g}$.
Then if $L$ is an irreducible $G$-module, $T_{\mathfrak{g}}$ acts on $L$ by a
twisted-invariant operator, and thus is either equal, up to a scalar, to
$\delta_{L}$ if $T_{\mathfrak{g}}$ is even, or $\delta_{L}\circ\sigma$ if
$T_{\mathfrak{g}}$ is odd, where $\sigma$ is an odd $G$-equivariant
automorphism of $L$. In particular it either acts by a linear automorphism or
by 0. By 11.27, we have:
###### Corollary 11.28.
If $L$ is an irreducible $G$-module, then $T_{\mathfrak{g}}$ acts by an
automorphism on $L$ if and only if $L$ is projective, and otherwise
$T_{\mathfrak{g}}$ acts by $0$.
## References
* [ABF97] Daniel Arnaudon, Michel Bauer, and L Frappat, _On Casimir's ghost_ , Communications in Mathematical Physics 187 (1997), no. 2, 429–439.
* [Ben00] Said Benayadi, _Quadratic Lie superalgebras with the completely reducible action of the even part on the odd part_ , Journal of Algebra 223 (2000), no. 1, 344–366.
* [CCF11] Claudio Carmeli, Lauren Caston, and Rita Fioresi, _Mathematical foundations of supersymmetry_ , vol. 15, European Mathematical Society, 2011.
* [CW12] Shun-Jen Cheng and Weiqiang Wang, _Dualities and representations of Lie superalgebras_ , American Mathematical Soc., 2012.
* [DH+76] DŽ Djoković, G Hochschild, et al., _Semisimplicity of $2$-graded Lie algebras, ii_, Illinois Journal of Mathematics 20 (1976), no. 1, 134–143.
* [Gor00] Maria Gorelik, _On the ghost centre of Lie superalgebras_ , Annales de l’institut Fourier, vol. 50, 2000, pp. 1745–1764.
* [Gor04] by same author, _The Kac construction of the centre of $U(\mathfrak{g})$ for Lie superalgebras_ , Journal of Nonlinear Mathematical Physics 11 (2004), no. 3, 325–349.
* [Kac77] Victor G Kac, _Lie superalgebras_ , Advances in mathematics 26 (1977), no. 1, 8–96.
* [Kos82] Jean-Louis Koszul, _Graded manifolds and graded Lie algebras_ , Proceedings of the International Meeting on Geometry and Physics (Bologna), Pitagora, 1982, pp. 71–84.
* [LM94] Edward S Letzter and Ian M Musson, _Complete sets of representations of classical Lie superalgebras_ , Letters in Mathematical Physics 31 (1994), no. 3, 247–253.
* [Man13] Yuri I Manin, _Gauge field theory and complex geometry_ , vol. 289, Springer Science & Business Media, 2013.
* [MT18] Akira Masuoka and Yuta Takahashi, _Geometric construction of quotients ${G/H}$ in supersymmetry_, arXiv preprint arXiv:1808.05753 (2018).
* [Mus12] Ian Malcolm Musson, _Lie superalgebras and enveloping algebras_ , vol. 131, American Mathematical Soc., 2012.
* [MZ10] Akira Masuoka and Alexander N Zubkov, _Quotient sheaves of algebraic supergroups are superschemes_ , arXiv preprint arXiv:1007.2236 (2010).
* [Ser11] Vera Serganova, _Quasireductive supergroups_ , New developments in Lie theory and its applications 544 (2011), 141–159.
* [She19] Alexander Sherman, _Spherical supervarieties_ , arXiv preprint arXiv:1910.09610 (2019).
* [She20a] by same author, _Spherical and symmetric supervarieties_ , Ph.D. thesis, UC Berkeley, 2020.
* [She20b] by same author, _Two geometric proofs of the classification of algebraic supergroups with semisimple representation theory_ , arXiv preprint arXiv:2012.11317 (2020).
* [Tim11] Dmitry A Timashev, _Homogeneous spaces and equivariant embeddings_ , vol. 138, Springer Science & Business Media, 2011.
* [VMP90] AA Voronov, Yu I Manin, and IB Penkov, _Elements of supergeometry_ , Journal of Soviet Mathematics 51 (1990), no. 1, 2069–2083.
Dept. of Mathematics, Ben Gurion University, Beer-Sheva, Israel
Email address<EMAIL_ADDRESS>
# A Note on the Representation Power of GHHs
Zhou Lu (This work was done during LZ's visit to SQZ institution.)
Princeton University
<EMAIL_ADDRESS>
(January 2021)
###### Abstract
In this note we prove a sharp lower bound on the necessary number of nestings
of nested absolute-value functions of generalized hinging hyperplanes (GHH) to
represent arbitrary CPWL functions. Previous upper bound states that $n+1$
nestings is sufficient for GHH to achieve universal representation power, but
the corresponding lower bound was unknown. We prove that $n$ nestings is
necessary for universal representation power, which provides an almost tight
lower bound. We also show that one-hidden-layer neural networks don’t have
universal approximation power over the whole domain. The analysis is based on
a key lemma showing that any finite sum of periodic functions is either non-
integrable or the zero function, which might be of independent interest.
## 1 Introduction
We consider the complexity of representing continuous piecewise linear
functions using the generalized hinging hyperplane model Wang and Sun (2005).
We begin with a short review of these two notions.
### 1.1 Continuous Piecewise Linear (CPWL) Functions
Continuous piecewise linear (CPWL) functions play an important role in
non-linear function approximation, such as nonlinear circuits or neural
networks. We introduce the definition of CPWL functions, borrowed from Chua and
Deng (1988).
###### Definition 1.1 (CPWL function).
A function $f(x):R^{n}\to R$ is said to be a CPWL function iff it satisfies:
1) The domain space $R^{n}$ is divided into a finite number of polyhedral
regions by a finite number of disjoint boundaries. Each boundary is a subset
of a hyperplane and takes non-zero measure (standard Lebesgue measure) on the
hyperplane (as $R^{n-1}$).
2) The restriction of $f(x)$ to each polyhedral region is an affine function.
3) $f(x)$ is continuous on $R^{n}$.
### 1.2 Generalized Hinging Hyperplanes (GHH)
The model of hinging hyperplanes (HH) is a sum of hinges like
$\pm\max\\{w_{1}^{\top}x+b_{1},w_{2}^{\top}x+b_{2}\\}$ (1)
where $w_{1},w_{2}\in R^{n}$ and $b_{1},b_{2}\in R$ are parameters. The HH
model (in fact equivalent to a one hidden-layer ReLU network) can approximate
any continuous function over a compact domain to arbitrary precision as the
number of hinges go infinity Breiman (1993).
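The parenthetical equivalence with a one-hidden-layer ReLU network comes from the identity $\max\\{u,v\\}=v+\max\\{u-v,0\\}$; a quick numerical check of this identity (our own illustration, using NumPy):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

rng = np.random.default_rng(0)
n = 4
w1, w2 = rng.normal(size=n), rng.normal(size=n)
b1, b2 = rng.normal(), rng.normal()

x = rng.normal(size=(1000, n))
# A hinge max{w1.x + b1, w2.x + b2} ...
hinge = np.maximum(x @ w1 + b1, x @ w2 + b2)
# ... equals a linear term plus a single ReLU unit: v + relu(u - v).
as_relu = (x @ w2 + b2) + relu(x @ (w1 - w2) + (b1 - b2))
assert np.allclose(hinge, as_relu)
```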
However, this model can't exactly represent all CPWL functions, as pointed out
in He et al. (2018), which brings doubt on its approximation efficiency. To
overcome this problem, Wang and Sun (2005) first proposed a generalization of
the HH model, called GHH, which allows more than 2 affine functions within the
nested maximum operator:
###### Definition 1.2 ($n$-order hinge).
An $n$-order hinge is a function of the following form:
$\pm\max\\{w_{1}^{\top}x+b_{1},w_{2}^{\top}x+b_{2},\cdots,w_{n+1}^{\top}x+b_{n+1}\\}$
(2)
where $w_{i}\in R^{n}$ and $b_{i}\in R$ are parameters.
A linear combination of a finite number of $n$-order hinges is called an
$n$-order hinging hyperplane ($n$-HH) model. Such a model has universal
representation power over all CPWL functions, as formalized in the theorem
below:
###### Theorem 1.3 (Theorem 1 in Wang and Sun (2005)).
For any positive integer $n$ and CPWL function $f(x):R^{n}\to R$, there exists
an $n$-HH which exactly represents $f(x)$.
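For example, in dimension $n=1$ the hat function $\max\\{0,1-|x|\\}$ is represented exactly by a linear combination of three $1$-order hinges (each $\max\\{a,0\\}$ is a $1$-order hinge). A numerical sketch of this instance (our own, not from the paper):

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)  # relu(a) = max{a, 0} is a 1-order hinge

# Hat function: a CPWL function on R with breakpoints at -1, 0, 1.
hat = lambda x: np.maximum(0.0, 1.0 - np.abs(x))

x = np.linspace(-3.0, 3.0, 1001)
# Exact representation as a linear combination of three 1-order hinges.
as_hinges = relu(x + 1.0) - 2.0 * relu(x) + relu(x - 1.0)
assert np.allclose(hat(x), as_hinges)
```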
The question is whether we can give a sharp lower bound on the necessary
number of affine functions within the nested maximum operator. Wang and Sun
(2005) conjectured that $(n-1)$-HH can't represent all CPWL functions, but this
open problem has been left unanswered for more than a decade. In the following
section we will prove our main result that $(n-2)$-HH can’t represent all CPWL
functions, yielding an almost tight lower bound.
## 2 Main Result
Observe that any $(n-2)$-order hinge depends on only $n-1$ affine transforms
of $x$, thus there always exists a direction in which the value of the
$(n-2)$-order hinge remains the same. We make this observation precise by
introducing the definition of low-dimensional and periodic functions.
###### Definition 2.1 (Low-dimensional/periodic function).
A function $f(x):R^{n}\to R$ is said to be low-dimensional, if there exists a
vector $v\neq 0$, such that for any $x\in R^{n}$ and $c\in R$, we have that
$f(x)=f(x+cv)$. If we have only $f(x)=f(x+v)$ then $f$ is said to be periodic
(a weaker notion). $v$ is called an invariant vector of $f$.
Any $(n-2)$-order hinge is a low-dimensional function on $R^{n}$, so our
problem reduces to proving that the class of finite sums of low-dimensional
functions has limited representation power. The following key lemma proves a
stronger result: a finite sum of periodic functions can't represent any
non-trivial integrable function.
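This reduction can be checked numerically; the sketch below (our own illustration) draws a random $(n-2)$-order hinge on $R^{4}$, extracts an invariant vector from the null space of its weight matrix, and verifies the invariance of Definition 2.1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# An (n-2)-order hinge uses n-1 affine functions, so its weight matrix
# has n-1 rows and cannot have full column rank in R^n.
W = rng.standard_normal((n - 1, n))
b = rng.standard_normal(n - 1)

def hinge(x):
    return np.max(W @ x + b)

# The last right-singular vector of W spans the (almost surely
# one-dimensional) null space: W v = 0, so v is an invariant vector.
v = np.linalg.svd(W)[2][-1]

x = rng.standard_normal(n)
invariant = all(np.isclose(hinge(x), hinge(x + c * v))
                for c in (-3.0, 0.5, 10.0))
```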
###### Lemma 2.2.
Any finite sum of periodic functions is either non-integrable or the zero
function, i.e. given periodic functions $f_{i}(x)$, $i=1,...,m$, then
$f(x)\triangleq\sum_{i=1}^{m}f_{i}(x)$ satisfies
$\int_{R^{n}}|f|=\infty\quad or\quad f\equiv 0$ (3)
###### Proof.
We prove Lemma 2.2 by induction on $m$. Suppose each $f_{i}$ has an invariant
vector $v_{i}$. The base case $m=1$ is straightforward: denoting by
$H_{1}=\\{x|x^{\top}v_{1}=0\\}$ the hyperplane orthogonal to $v_{1}$, the
invariance of $f$ along $v_{1}$ gives
$\int_{R^{n}}|f|=\int_{R}\int_{H_{1}}|f|$ (4)
so $\int_{R^{n}}|f|<\infty$ forces $\int_{H_{1}}|f|=0$ and hence $f\equiv 0$
(almost everywhere). For the inductive step, assume $f=\sum_{i=1}^{m}f_{i}$ is
integrable; then $g(x)\triangleq f(x+v_{m})-f(x)$
is also integrable. We make the following decomposition of $g$:
$g(x)=\sum_{i=1}^{m}\left(f_{i}(x+v_{m})-f_{i}(x)\right)=\sum_{i=1}^{m-1}\left(f_{i}(x+v_{m})-f_{i}(x)\right)$
(5)
where each $f_{i}(x+v_{m})-f_{i}(x)$ is again periodic with invariant vector
$v_{i}$, and the $i=m$ term vanishes since $v_{m}$ is an invariant vector of
$f_{m}$. As $g$ is an integrable sum of $m-1$ periodic functions, the
inductive hypothesis gives $g\equiv 0$, i.e. $f$ is itself periodic with
invariant vector $v_{m}$. Applying the base case to $f$ again concludes our
proof. ∎
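The telescoping step of the proof can be illustrated numerically in one variable (our own sketch, with incommensurable periods $v_{1}=1$ and $v_{2}=\sqrt{2}$): shifting $f$ by $v_{2}$ kills the $f_{2}$ term, and the remainder inherits the invariant vector $v_{1}$.

```python
import numpy as np

v1, v2 = 1.0, np.sqrt(2.0)
f1 = lambda x: np.cos(2 * np.pi * x / v1)   # periodic, invariant vector v1
f2 = lambda x: np.cos(2 * np.pi * x / v2)   # periodic, invariant vector v2
f = lambda x: f1(x) + f2(x)

# g(x) = f(x + v2) - f(x) drops f2, leaving one fewer periodic term.
g = lambda x: f(x + v2) - f(x)

xs = np.linspace(0.0, 5.0, 101)
# g inherits the invariant vector v1 of the surviving term ...
inherits_v1 = np.allclose(g(xs + v1), g(xs))
# ... but is not identically zero: this f is not integrable, so the
# induction of the lemma produces no contradiction here, as expected.
nonzero = np.max(np.abs(g(xs))) > 0.1
```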
Our main result is a direct corollary of Lemma 2.2, as stated below:
###### Theorem 2.3.
For any positive integer $n\geq 2$, there exists a CPWL function
$g(x):R^{n}\to R$, such that no $(n-2)$-HH can exactly represent $g(x)$.
###### Proof.
Let $g(x)\triangleq\max\\{0,1-||x||_{\infty}\\}$. It is straightforward to
check that $g(x)$ is a CPWL function with at most $2^{n+1}$ affine polyhedral
regions, and meanwhile an integrable function with positive integral. As
any $(n-2)$-HH can be written as a finite sum of low-dimensional functions, it
can't represent $g(x)$ by Lemma 2.2. ∎
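The witness function is easy to probe numerically; the sketch below (ours) checks the peak, the compact support, and, in dimension one, the positive integral $\int g=1$:

```python
import numpy as np

def g(x):
    # The CPWL witness of Theorem 2.3: a "tent" over the unit cube.
    return max(0.0, 1.0 - np.max(np.abs(x)))

peak = g(np.zeros(3))                     # value 1 at the origin
outside = g(np.array([2.0, 0.0, 0.0]))    # vanishes outside the unit cube
# In dimension n = 1 the graph is a triangle of base 2 and height 1,
# so the integral is exactly 1; a Riemann sum recovers it.
xs = np.linspace(-2.0, 2.0, 40001)
integral = float(sum(g(np.array([x])) for x in xs) * (xs[1] - xs[0]))
```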
Theorem 2.3 implies that, in order to achieve universal representation power
over all CPWL functions, an $(n-1)$-HH model is necessary, providing an
almost tight lower bound matching the upper bound in Theorem 1.3.
## 3 Implications on Universal Approximation of ANNs
Traditional universal approximation theorems for artificial neural networks
(ANN) Cybenko (1989); Hornik et al. (1989); Barron (1994) typically state
that an ANN with one hidden layer and unbounded width can approximate any
measurable function with arbitrary precision on a compact set. Our result
demonstrates that the compact-set assumption is indeed necessary for ANNs with
traditional activation (the composition of an affine transform and a fixed
univariate function $\sigma$):
###### Corollary 3.1.
Given an integrable function $f$ on $R^{n}$ ($n\geq 2$), for any
one-hidden-layer neural network $g$ with traditional activation
$\sigma(w^{\top}x+b)$, we have that
$\int_{R^{n}}|f-g|=\infty\quad or\quad\int_{R^{n}}|f-g|=\int_{R^{n}}|f|$ (6)
###### Proof.
Any unit $\sigma(w^{\top}x+b)$ is obviously a low-dimensional function when
$n\geq 2$, thus by Lemma 2.2 we finish our proof. ∎
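Concretely (our own sketch), a single traditional unit on $R^{n}$, $n\geq 2$, is constant along any direction orthogonal to its weight vector, which is exactly the low-dimensionality used above:

```python
import numpy as np

sigma = np.tanh                      # any fixed univariate activation
w = np.array([1.0, 2.0, -1.0])
b = 0.5
unit = lambda x: sigma(w @ x + b)    # a "traditional" hidden unit on R^3

v = np.array([2.0, -1.0, 0.0])       # chosen so that w . v = 0
x = np.array([0.3, -0.7, 1.1])
constant_along_v = all(np.isclose(unit(x), unit(x + c * v))
                       for c in (-5.0, 1.0, 100.0))
```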
Corollary 3.1 reveals a fundamental gap in representation power between
one-hidden-layer neural networks and deeper ones, as Theorem 1.3 indicates
that a neural network with $\lceil\log_{2}(n+1)\rceil$ hidden layers can
represent any CPWL function He et al. (2018), showing the benefits of depth
in universal approximation Lu et al. (2017).
## 4 Conclusion
In this note we give a sharp lower bound on the necessary number of nestings
of nested absolute-value functions of generalized hinging hyperplanes (GHH) to
represent arbitrary CPWL functions, which is the first non-trivial lower bound
to the best of our knowledge. Our results fully characterize the
representation power (and its limit) of the GHH model.
Our result also has implications for ANNs, a much more popular model in
machine learning. It shows that one-hidden-layer neural networks with
traditional activation can't control the approximation error on the whole
domain despite existing universal approximation theorems, a fundamental gap
between one-hidden-layer networks and deeper ones. We conjecture that similar
depth-separation results should hold for deeper networks and that the
$\lceil\log_{2}(n+1)\rceil$ bound should be tight for representing CPWL
functions. Beyond low-dimensionality (periodicity), other structural
properties remain to be discovered for analyzing deeper networks.
## Acknowledgements
The author would like to thank Fedor Petrov for giving an elegant proof of
Lemma 2.2 on MathOverflow.
## References
* Barron (1994) Andrew R Barron. Approximation and estimation bounds for artificial neural networks. _Machine learning_ , 14(1):115–133, 1994.
* Breiman (1993) Leo Breiman. Hinging hyperplanes for regression, classification, and function approximation. _IEEE Transactions on Information Theory_ , 39(3):999–1013, 1993.
* Chua and Deng (1988) Leon O Chua and A-C Deng. Canonical piecewise-linear representation. _IEEE Transactions on Circuits and Systems_ , 35(1):101–111, 1988.
* Cybenko (1989) George Cybenko. Approximation by superpositions of a sigmoidal function. _Mathematics of control, signals and systems_ , 2(4):303–314, 1989.
* He et al. (2018) Juncai He, Lin Li, Jinchao Xu, and Chunyue Zheng. Relu deep neural networks and linear finite elements. _arXiv preprint arXiv:1807.03973_ , 2018.
* Hornik et al. (1989) Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. _Neural networks_ , 2(5):359–366, 1989.
* Lu et al. (2017) Zhou Lu, Hongming Pu, Feicheng Wang, Zhiqiang Hu, and Liwei Wang. The expressive power of neural networks: A view from the width. _arXiv preprint arXiv:1709.02540_ , 2017.
* Wang and Sun (2005) Shuning Wang and Xusheng Sun. Generalization of hinging hyperplanes. _IEEE Transactions on Information Theory_ , 51(12):4425–4431, 2005.
# Which Nilpotent Groups are Self-Similar?
Olivier Mathieu, Institut Camille Jordan, Université de Lyon. Email:
<EMAIL_ADDRESS>
Research supported by CNRS-UMR 5028 and Labex MILYON/ANR-10-LABX-0070.
###### Abstract
Let $\Gamma$ be a finitely generated torsion-free nilpotent group, and let
$A^{\omega}$ be the space of infinite words over a finite alphabet $A$. We
investigate two types of self-similar actions of $\Gamma$ on $A^{\omega}$,
namely the faithful actions with dense orbits and the free actions. A
criterion for the existence of a self-similar action of each type is
established.
Two corollaries about nilmanifolds are deduced. The first involves
nilmanifolds endowed with an Anosov diffeomorphism, and the second concerns
the existence of an affine structure.
Then we investigate the virtual actions of $\Gamma$, i.e. actions of a
subgroup $\Gamma^{\prime}$ of finite index. A formula, with some number
theoretical content, is found for the minimal cardinal of an alphabet $A$
endowed with a virtual self-similar action on $A^{\omega}$ of each type.
Mathematics Subject Classification 37B10-20G30-53C30
## Introduction
1. General introduction
Let $A$ be a finite alphabet and let $A^{\omega}$ be the topological space of
infinite words $a_{1}a_{2}\dots$ over $A$, where the topology of
$A^{\omega}=\varprojlim A^{n}$ is the pro-finite topology.
An action of a group $\Gamma$ on $A^{\omega}$ is called self-similar iff for
any $\gamma\in\Gamma$ and $a\in A$ there exists $\gamma_{a}\in\Gamma$ and
$b\in A$ such that
$\gamma(aw)=b\gamma_{a}(w)$ for any $w\in A^{\omega}$.
The group $\Gamma$ is called self-similar (respectively densely self-similar,
respectively freely self-similar, respectively freely densely self-similar) if
$\Gamma$ admits a faithful self-similar action (respectively a faithful
self-similar action with dense orbits, respectively a free self-similar
action, respectively a free self-similar action with dense orbits) on
$A^{\omega}$ for some finite alphabet $A$.
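A classical example is the binary odometer (the "adding machine"): the generator $\gamma$ of $\mathbb{Z}$ acts on $\\{0,1\\}^{\omega}$ by $\gamma(0w)=1w$ and $\gamma(1w)=0\gamma(w)$, i.e. addition of one with carry, a free self-similar action with dense orbits. The sketch below (our own, on finite prefixes) makes this concrete:

```python
def odometer(word):
    # Apply the generator to a finite prefix (least significant digit first):
    # gamma(0 w) = 1 w, gamma(1 w) = 0 gamma(w), i.e. add 1 with carry.
    out, carry = [], True
    for a in word:
        if carry:
            out.append(1 - a)
            carry = (a == 1)     # the carry propagates past a digit 1
        else:
            out.append(a)
    return out
```

On prefixes of length $n$ this realizes addition of $1$ modulo $2^{n}$, and the self-similarity recursion $\gamma(1w)=0\gamma(w)$ can be read off directly.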
Self-similar groups appeared in the early eighties, in the works of R.
Grigorchuk [10] [11] and in the joint works of N. Gupta and S. Sidki [13]
[14]. See also the monograph [24] for an extensive account before 2005 and
[25] [2] [9] [16] [12] for more recent works. A general question is:
which groups $\Gamma$ are (merely, or densely …) self-similar?
This paper brings an answer for finitely generated torsion-free nilpotent
groups $\Gamma$, called FGTF nilpotent groups in the sequel. Then we will
connect the main result with topics involving differential geometry and
arithmetic groups.
The systematic study of self-similar actions of nilpotent groups started with
[4], and the previous question has been raised in some talks of S. Sidki.
2. The main results
A few definitions are now required. A grading of a Lie algebra $\mathfrak{m}$
is a decomposition $\mathfrak{m}=\oplus_{n\in\mathbb{Z}}\,\mathfrak{m}_{n}$
such that $[\mathfrak{m}_{n},\mathfrak{m}_{m}]\subset\mathfrak{m}_{n+m}$ for
all $n,\,m\in\mathbb{Z}$. It is called special if
$\mathfrak{m}_{0}\cap\mathfrak{z}=0$, where $\mathfrak{z}$ is the center of
$\mathfrak{m}$. It is called very special if $\mathfrak{m}_{0}=0$.
Let’s assume now that $\Gamma$ is a FGTF nilpotent group. By Malcev Theory
[18][26], $\Gamma$ is a cocompact lattice in a unique connected, simply
connected (or CSC in what follows) nilpotent Lie group $N$. Let
$\mathfrak{n}^{\mathbb{R}}$ be the Lie algebra of $N$ and set
$\mathfrak{n}^{\mathbb{C}}=\mathbb{C}\otimes_{\mathbb{R}}\mathfrak{n}^{\mathbb{R}}$.
The main results, proved in Section 7, are the following
###### Theorem 2.
The group $\Gamma$ is densely self-similar iff the Lie algebra
$\mathfrak{n}^{\mathbb{C}}$ admits a special grading.
###### Theorem 3.
The following assertions are equivalent
(i) The group $\Gamma$ is freely self-similar,
(ii) the group $\Gamma$ is freely densely self-similar, and
(iii) the Lie algebra $\mathfrak{n}^{\mathbb{C}}$ admits a very special
grading.
As a consequence, let’s mention
###### Corollary 4.
Let $M$ be a nilmanifold endowed with an Anosov diffeomorphism. Then there is
a free self-similar action of $\pi_{1}(M)$ with dense orbits on $A^{\omega}$,
for some finite alphabet $A$.
###### Corollary 8.
Let $M$ be a nilmanifold. If $\pi_{1}(M)$ is freely self-similar, then $M$ is
affine complete.
Among FGTF nilpotent groups, some of them are self-similar but not densely
self-similar. Some of them are not even self-similar, since Theorem 2 implies
the following
###### Corollary 7.
Let $M$ be one of the non-affine nilmanifolds appearing in [3]. Then
$\pi_{1}(M)$ is not self-similar.
3. A concrete version of Theorems 2 and 3
Let $N$ be a CSC nilpotent Lie group, with Lie algebra
$\mathfrak{n}^{\mathbb{R}}$. Let’s assume that $N$ contains some cocompact
lattices $\Gamma$. By definition, the degree of a self-similar action of
$\Gamma$ on $A^{\omega}$ is $\mathrm{Card\,}\,A$. We ask the following
question
For a given cocompact lattice $\Gamma\subset N$, what is the minimal degree
of a faithful (or a free) self-similar action with dense orbits?
More notions are now defined. Recall that the commensurable class $\xi$ of a
cocompact lattice $\Gamma_{0}\subset N$ is the set of all cocompact lattices
of $N$ which share with $\Gamma_{0}$ a subgroup of finite index. The
complexity $\mathrm{cp}\,\xi$ (respectively the free complexity
$\mathrm{fcp}\,\xi$) of the class $\xi$ is the minimal degree of a self-
similar action of $\Gamma$ with dense orbits (respectively a free self-similar
action of $\Gamma$), for some $\Gamma\in\xi$.
For any algebraic number $\lambda\neq 0$, set
$d(\lambda)=\mathrm{Card\,}\,{\cal O}(\lambda)/\pi_{\lambda}$, where ${\cal
O}(\lambda)$ is the ring of integers of $\mathbb{Q}(\lambda)$ and
$\pi_{\lambda}=\\{x\in{\cal O}(\lambda)|x\lambda\in{\cal O}(\lambda)\\}$. For
any isomorphism $h$ of a finite dimensional vector space over $\mathbb{Q}$,
set
${\mathrm{ht}}\,h=\prod_{\lambda\in{\mathrm{Spec}}\,h/{\mathrm{Gal}}(\mathbb{Q})}\,d(\lambda)^{m_{\lambda}}$,
where ${\mathrm{Spec}}\,h/{\mathrm{Gal}}(\mathbb{Q})$ is the list of
eigenvalues of $h$ up to conjugacy by ${\mathrm{Gal}}(\mathbb{Q})$ and where
$m_{\lambda}$ is the multiplicity of the eigenvalue $\lambda$.
By Malcev’s Theory, the commensurable class $\xi$ determines a canonical
$\mathbb{Q}$-form $\mathfrak{n}(\xi)$ of the Lie algebra
$\mathfrak{n}^{\mathbb{R}}$. Let ${\cal S}(\mathfrak{n}(\xi))$ (respectively
${\cal V}(\mathfrak{n}(\xi))$) be the set of all
$f\in\mathrm{Aut}\,\mathfrak{n}(\xi)$ such that
${\mathrm{Spec}}\,\,f|\mathfrak{z}^{\mathbb{C}}$ (respectively
${\mathrm{Spec}}\,\,f$) contains no algebraic integer.
###### Theorem 9.
We have
$\mathrm{cp}\,\xi=\mathrm{Min}_{h\in{\cal
S}(\mathfrak{n}(\xi))}\,{\mathrm{ht}}\,h$, and
$\mathrm{fcp}\,\xi=\mathrm{Min}_{h\in{\cal
V}(\mathfrak{n}(\xi))}\,{\mathrm{ht}}\,h$.
If, in the previous statement, ${\cal S}(\mathfrak{n}(\xi))$ is empty, then
the equality $\mathrm{cp}\,\xi=\infty$ means that no $\Gamma\in\xi$ admits a
faithful self-similar action with dense $\Gamma$-orbits.
Theorem 9 answers the previous question only for the commensurable classes
$\xi$. For an individual $\Gamma\in\xi$, it provides some ugly estimates for
the minimal degree of $\Gamma$-actions, and nothing better can be expected.
The framework of nonabelian Galois cohomology shows the concreteness of
Theorem 9. Up to conjugacy, the commensurable classes in $N$ are classified by
the $\mathbb{Q}$-forms of some classical objects with a prescribed
$\mathbb{R}$-form, see Corollary 4 of ch. 9, and their complexity is an
invariant of the arithmetic group $\mathrm{Aut}\,\mathfrak{n}(\xi)$.
As an illustration of the previous vague sentence, we investigate a class
${\cal N}$ of CSC nilpotent Lie groups $N$, with Lie algebra
$\mathfrak{n}^{\mathbb{R}}$. The commensurable classes $\xi(q)$ in $N$ are
classified, up to conjugacy, by the positive definite quadratic forms $q$ on
$\mathbb{Q}^{2}$. Then, we have
$\mathrm{cp}\,\xi(q)=F(d)^{e(N)}$
where $e(N)$ is an invariant of the special grading of
$\mathbb{C}\otimes\mathfrak{n}^{\mathbb{R}}$, where $-d$ is the discriminant
of $q$, and where $F(d)$ is the norm of a specific ideal in
$\mathbb{Q}(\sqrt{-d})$, see Theorem 11 and Lemma 28.
In particular, $N$ contains some commensurable classes of arbitrarily high
complexity. In a forthcoming paper [22], more complicated examples are
investigated, but the formulas are less explicit.
4. About the proofs. The proofs of the paper are based on different ideas.
Theorem 1, which is a statement about rational points of algebraic tori, is
the key step in the proof of Theorems 2, 3 and 11. It is based on standard
results of number theory, including Chebotarev's Theorem. It is connected
with the density of rational points for connected groups proved by A. Borel
[6], see also [27].
Also, the proof of Corollary 4 is based on a paper of A. Manning [20] about
Anosov diffeomorphisms. The proof of Corollary 7 is based on very difficult
computations, which, fortunately, were entirely done in [3].
## 1 Self-similar actions and self-similar data
Let $\Gamma$ be a group. This section explains the correspondence between the
faithful transitive self-similar $\Gamma$-actions and some virtual
endomorphisms of $\Gamma$, called self-similar data. Usually self-similar
actions are actions on a rooted tree $A^{*}$, see [24]. Here the groups act
on the boundary $A^{\omega}$ of $A^{*}$. This equivalent viewpoint is
better adapted to our setting.
1.1 Transitive self-similar actions
In addition to the definitions of the introduction, the following technical
notion of transitivity will be used.
A self-similar action of $\Gamma$ on $A^{\omega}$ induces an action of
$\Gamma$ on $A$. Indeed, for $a,\,b\in A$ and $\gamma\in\Gamma$, we have
$\gamma(a)=b$ if
$\gamma(aw)=b\gamma_{a}(w)$,
for all $w\in A^{\omega}$. A self-similar action is called transitive if the
induced action on $A$ is transitive. The group $\Gamma$ is called transitive
self-similar if it admits a faithful transitive self-similar action.
Similarly, the self-similar action of $\Gamma$ on $A^{\omega}$ induces an
action of $\Gamma$ on each level set $A^{n}$. Such an action is called
level transitive if $\Gamma$ acts transitively on each level $A^{n}$.
Obviously, the level transitive actions on $A^{*}$ of [24] correspond to
the actions on $A^{\omega}$ with dense orbits.
1.2 Core and $f$-core
Let $\Gamma$ be a group and $\Gamma^{\prime}$ be a subgroup. The core of
$\Gamma^{\prime}$ is the biggest normal subgroup $K\triangleleft\Gamma$ with
$K\subset\Gamma^{\prime}$. Equivalently, the core is the kernel of the left
action of $\Gamma$ on $\Gamma/\Gamma^{\prime}$.
Now let $f:\Gamma^{\prime}\to\Gamma$ be a group morphism. By definition, the
$f$-core is the biggest normal subgroup $K\triangleleft\Gamma$ with
$K\subset\Gamma^{\prime}$ and $f(K)\subset K$.
1.3 Self-similar data
Let $\Gamma$ be a group. A virtual endomorphism of $\Gamma$ is a pair
$(\Gamma^{\prime},f)$, where $\Gamma^{\prime}$ is a subgroup of finite index
and $f:\Gamma^{\prime}\to\Gamma$ is a group morphism. A self-similar datum is
a virtual endomorphism $(\Gamma^{\prime},f)$ with a trivial $f$-core.
Assume given a faithful transitive self-similar action of $\Gamma$ on
$A^{\omega}$. Let $a\in A$, and let $\Gamma^{\prime}$ be the stabilizer of
$a$. By definition, for each $\gamma\in\Gamma^{\prime}$ there is a unique
$\gamma_{a}\in\Gamma$ such that
$\gamma(aw)=a\gamma_{a}(w)$,
for any $w\in A^{\omega}$. Let $f:\Gamma^{\prime}\rightarrow\Gamma$ be the map
$\gamma\mapsto\gamma_{a}$. Since the action is faithful, $\gamma_{a}$ is
uniquely determined and $f$ is a group morphism. Also, it follows from
Propositions 2.7.4 and 2.7.5 of [24] that the $f$-core of $\Gamma^{\prime}$ is
the kernel of the action, therefore it is trivial. Hence $(\Gamma^{\prime},f)$
is a self-similar datum.
Conversely, a virtual endomorphism $(\Gamma^{\prime},f)$ determines a
transitive self-similar action of $\Gamma$ on $A^{\omega}$, where
$A\simeq\Gamma/\Gamma^{\prime}$. Moreover, the $f$-core is the kernel of the
corresponding action (see ch. 2 of [24] for details, especially subsection
2.5.5). In conclusion, we have
###### Lemma 1.
Let $\Gamma$ be a group. There is a correspondence between the self-similar
data $(\Gamma^{\prime},f)$ and the faithful transitive self-similar actions
of $\Gamma$ on $A^{\omega}$, where $A\simeq\Gamma/\Gamma^{\prime}$.
This correspondence is indeed a bijection up to conjugacy, see [24] for a
precise statement.
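For example (our own sketch of the converse direction), take $\Gamma=\mathbb{Z}$, $\Gamma^{\prime}=2\mathbb{Z}$ and $f(\gamma)=\gamma/2$, with coset representatives $\\{0,1\\}$ for $A\simeq\mathbb{Z}/2\mathbb{Z}$. The standard recursion $\gamma\cdot(aw)=b\,(f(\gamma+a-b)\cdot w)$ with $b=(\gamma+a)\,\mathrm{mod}\,2$ unwinds on finite prefixes and recovers binary addition:

```python
def act(gamma, word):
    # gamma . (a w) = b (f(gamma + a - b) . w), with b = (gamma + a) mod 2
    # and f the halving map on 2Z; word is a least-significant-first prefix.
    if not word:
        return []
    a = word[0]
    b = (gamma + a) % 2
    return [b] + act((gamma + a - b) // 2, word[1:])

def encode(m, n):
    # n-digit binary expansion of m, least significant digit first.
    return [(m >> i) & 1 for i in range(n)]
```

On length-$n$ prefixes the action is addition modulo $2^{n}$; for $\gamma=1$ one recovers the binary odometer.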
1.4 Good self-similar data
Let $\Gamma$ be a group, and let $(\Gamma^{\prime},f)$ be a virtual
endomorphism. Let $\Gamma_{n}$ be the subgroups of $\Gamma$ inductively
defined by $\Gamma_{0}=\Gamma$, $\Gamma_{1}=\Gamma^{\prime}$ and for $n\geq 2$
$\Gamma_{n}=\\{\gamma\in\Gamma_{n-1}|\,f(\gamma)\in\Gamma_{n-1}\\}$
###### Lemma 2.
The sequence $n\mapsto[\Gamma_{n}:\Gamma_{n+1}]$ is non-increasing.
###### Proof.
For $n>0$, the map $f$ induces an injection of the set
$\Gamma_{n}/\Gamma_{n+1}$ into $\Gamma_{n-1}/\Gamma_{n}$, thus we have
$[\Gamma_{n}:\Gamma_{n+1}]\leq[\Gamma_{n-1}:\Gamma_{n}]$. ∎
The virtual endomorphism $(\Gamma^{\prime},f)$ is called good if
$[\Gamma_{n}:\Gamma_{n+1}]=[\Gamma:\Gamma^{\prime}]$ for all $n$.
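For the datum $\Gamma=\mathbb{Z}$, $\Gamma^{\prime}=2\mathbb{Z}$, $f(x)=x/2$ (our own sketch), one computes $\Gamma_{n}=2^{n}\mathbb{Z}$, so every index equals $[\Gamma:\Gamma^{\prime}]=2$ and the datum is good:

```python
def next_gen(gen):
    # Gamma_n = gen * Z; Gamma_{n+1} is the set of x in gen * Z that are
    # even (so f(x) = x / 2 is defined) with f(x) again in gen * Z.
    x = gen
    while not (x % 2 == 0 and (x // 2) % gen == 0):
        x += gen
    return x

gens = [1]
for _ in range(5):
    gens.append(next_gen(gens[-1]))
indices = [gens[i + 1] // gens[i] for i in range(5)]
```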
Let $(\Gamma^{\prime},f)$ be a self-similar datum, and let $A^{*}$ be the
corresponding tree on which $\Gamma$ acts. If $a$ is the distinguished point
in $A\simeq\Gamma/\Gamma^{\prime}$, then $\Gamma_{n}$ is the stabilizer of
$a^{n}$. If the self-similar datum $(\Gamma^{\prime},f)$ is good, then
$[\Gamma:\Gamma_{n}]=\mathrm{Card\,}A^{n}$ and therefore $\Gamma$ acts
transitively on $A^{n}$. Exactly as before, we have
###### Lemma 3.
Let $\Gamma$ be a group. There is a correspondence between the good self-
similar data $(\Gamma^{\prime},f)$ and the faithful self-similar actions of
$\Gamma$ on $A^{\omega}$ with dense orbits, where
$A\simeq\Gamma/\Gamma^{\prime}$.
1.5 Fractal self-similar data
Let $\Gamma$ be a group. A self-similar datum $(\Gamma^{\prime},f)$ is called
fractal (or recurrent) if $f(\Gamma^{\prime})=\Gamma$. A self-similar action
of $\Gamma$ on some $A^{\omega}$ is called fractal if it is transitive and the
corresponding self-similar datum is fractal, see [24] section 2.8. Obviously a
fractal action has dense orbits.
The group $\Gamma$ is called fractal (respectively freely fractal) if $\Gamma$
admits a faithful (respectively free) fractal action on some $A^{\omega}$.
## 2 Rational points of a torus
We are going to prove Theorem 1, about the rational points of algebraic tori.
For the whole chapter, let $\bf H$ be an algebraic torus defined over
$\mathbb{Q}$ and let $X({\bf H})$ be the group of characters of $\bf H$. For a
number field $K$, let’s denote by
${\mathrm{Gal}}(K):={\mathrm{Gal}}(\overline{\mathbb{Q}}/K)$ its absolute
Galois group. The group $X({\bf H})$ is a ${\mathrm{Gal}}(\mathbb{Q})$-module
which is isomorphic to $\mathbb{Z}^{r}$ as an abelian group, where $r=\dim{\bf
H}$. The splitting field of $\bf H$ is the smallest Galois extension $L$ of
$\mathbb{Q}$ such that ${\mathrm{Gal}}(L)$ acts trivially on $X({\bf H})$, or,
equivalently such that $\bf H$ is $L$-isomorphic to $\mathbb{G}_{m}^{r}$,
where $\mathbb{G}_{m}$ denotes the multiplicative group. Moreover, we have
$\chi(h)\in L^{*}$
for any $\chi\in X({\bf H})$ and any $h\in{\bf H}(\mathbb{Q})$.
Let ${\cal O}$ be the ring of integers of $L$. Recall that a fractional ideal
is a nonzero finitely generated ${\cal O}$-submodule of $L$. A fractional
ideal $I$ is called integral if $I\subset{\cal O}$. If the fractional ideal
$I$ is integral and $I\neq{\cal O}$, then $I$ is merely an ideal of ${\cal
O}$.
Let ${\cal I}$ be the set of all fractional ideals and ${\cal I}^{+}$ be the
subset of all integral ideals. Given $I$ and $J$ in ${\cal I}$, their product
is the ${\cal O}$-module generated by all products $ab$ where $a\in I$ and
$b\in J$. Since ${\cal O}$ is a Dedekind ring, we have
${\cal I}\simeq\oplus_{\pi\in{\cal P}}\,\mathbb{Z}\,[\pi]$
${\cal I}^{+}\simeq\oplus_{\pi\in{\cal P}}\,\mathbb{Z}_{\geq 0}\,[\pi]$,
where ${\cal P}$ is the set of prime ideals of ${\cal O}$. Indeed, additive
notation is used for the group ${\cal I}$ and the monoid ${\cal I}^{+}$:
viewed as an element of ${\cal I}$, the fractional ideal
$\pi_{1}^{m_{1}}\dots\pi_{n}^{m_{n}}$ is denoted
$m_{1}[\pi_{1}]+\dots+m_{n}[\pi_{n}]$.
Since ${\mathrm{Gal}}(L/\mathbb{Q})$ acts by permutation on ${\cal P}$, ${\cal
I}$ is a $\mathbb{Z}{\mathrm{Gal}}(L/\mathbb{Q})$-module. For $S\subset{\cal
P}$, set
${\cal I}_{S}=\oplus_{\pi\in{\cal P}\setminus S}\,\mathbb{Z}\,[\pi]$.
###### Lemma 4.
Let $S\subset{\cal P}$ be a finite subset and let $r>0$ be an integer.
The ${\mathrm{Gal}}(L/\mathbb{Q})$-module ${\cal I}$ contains a free
$\mathbb{Z}{\mathrm{Gal}}(L/\mathbb{Q})$-module $M(r)$ of rank $r$ such that
(i) $M(r)\cap{\cal I}^{+}=\\{0\\}$, and
(ii) $M(r)\subset{\cal I}_{S}$.
###### Proof.
Let $S^{\prime}$ be the set of all prime numbers which are divisible by some
$\pi\in S$. Let $\Sigma$ be the set of prime numbers $p$ that are completely
split in $L$, i.e. such that ${\cal O}/p{\cal
O}\simeq\mathbb{F}_{p}^{[L:\mathbb{Q}]}$. For $p\in\Sigma$, let $\pi\in{\cal
P}$ be a prime ideal over $p$. When $\sigma$ runs over
${\mathrm{Gal}}(L/\mathbb{Q})$ the ideals $\pi^{\sigma}$ are all distinct, and
therefore $[\pi]$ generates a free
$\mathbb{Z}{\mathrm{Gal}}(L/\mathbb{Q})$-submodule of rank one in ${\cal I}$.
By Chebotarev's theorem, the set $\Sigma$ is infinite. Choose $r+1$ distinct
prime numbers $p_{0},\dots,p_{r}$ in $\Sigma\setminus S^{\prime}$, and let
$\pi_{0},\dots,\pi_{r}\in{\cal P}$ be such that ${\cal
O}/\pi_{i}=\mathbb{F}_{p_{i}}$. For $1\leq i\leq r$, set
$\tau_{i}=[\pi_{i}]-[\pi_{0}]$ and let $M(r)$ be the
$\mathbb{Z}{\mathrm{Gal}}(L/\mathbb{Q})$-module generated by
$\tau_{1},\dots,\tau_{r}$.
Obviously, the ${\mathrm{Gal}}(L/\mathbb{Q})$-module $M(r)$ is free of rank
$r$ and $M(r)\subset{\cal I}_{S}$. It remains to prove that $M(r)\cap{\cal
I}^{+}=\\{0\\}$. Let
$A=\sum\limits_{1\leq i\leq
r,\sigma\in{\mathrm{Gal}}(L/\mathbb{Q})}\,m_{i,\sigma}\,\tau_{i}^{\sigma}$
be an element of $M(r)\cap{\cal I}^{+}$. We have $A=B-C$, where
$B=\sum\limits_{1\leq i\leq
r,\sigma\in{\mathrm{Gal}}(L/\mathbb{Q})}\,m_{i,\sigma}\,[\pi_{i}^{\sigma}]$,
and
$C=\sum\limits_{\sigma\in{\mathrm{Gal}}(L/\mathbb{Q})}\,(\sum\limits_{1\leq
i\leq r}\,m_{i,\sigma})[\pi_{0}^{\sigma}]$.
Thus the condition $A\in{\cal I}^{+}$ implies that
$m_{i,\sigma}\geq 0$, for any $1\leq i\leq r$ and
$\sigma\in{\mathrm{Gal}}(L/\mathbb{Q})$, and
$\sum\limits_{1\leq i\leq r}\,m_{i,\sigma}\leq 0$, for any
$\sigma\in{\mathrm{Gal}}(L/\mathbb{Q})$.
Thus all the integers $m_{i,\sigma}$ vanish. Therefore $M(r)$ intersects
${\cal I}^{+}$ trivially.
∎
For $\pi\in{\cal P}$, let $v_{\pi}:L^{*}\to\mathbb{Z}$ be the corresponding
valuation.
###### Lemma 5.
Let $S\subset{\cal P}$ be a finite ${\mathrm{Gal}}(L/\mathbb{Q})$-invariant
subset and let $r>0$ be an integer.
The ${\mathrm{Gal}}(L/\mathbb{Q})$-module $L^{*}$ contains a free
$\mathbb{Z}{\mathrm{Gal}}(L/\mathbb{Q})$-module $N(r)$ of rank $r$ such that
(i) $N(r)\cap{\cal O}=\\{1\\}$, and
(ii) $v_{\pi}(x)=0$ for any $x\in N(r)$ and any $\pi\in S$.
###### Proof.
Set $L^{*}_{S}=\\{x\in L^{*}|v_{\pi}(x)=0,\,\forall\pi\in S\\}$ and let
$\theta:L^{*}_{S}\rightarrow{\cal I}_{S}$ be the map
$x\mapsto\sum_{\pi\in{\cal P}}\,v_{\pi}(x)\,[\pi]$.
By Lemma 4, ${\cal I}_{S}$ contains a free
$\mathbb{Z}{\mathrm{Gal}}(L/\mathbb{Q})$-module $M(r)$ of rank $r$ such that
$M(r)\cap{\cal I}^{+}=\\{0\\}$. Let's remark that $\mathrm{Coker}\,\theta$ is
a subgroup of the class group $\mathrm{Cl}(L)$ of $L$. Since, by Dirichlet's
Theorem, $\mathrm{Cl}(L)$ is finite, there is a positive integer $d$ such that
$d.M(r)$ lies in the image of $\theta$. Since $M(r)$ is free, there is a free
$\mathbb{Z}{\mathrm{Gal}}(L/\mathbb{Q})$-module $N(r)\subset L^{*}_{S}$ of
rank $r$ which is a lift of $d.M(r)$, i.e. such that $\theta$ induces an
isomorphism $N(r)\simeq d.M(r)$. Since $\theta({\cal O}\setminus 0)$ lies in
${\cal I}^{+}$, we have $\theta(N(r)\cap{\cal O})=\\{0\\}$. It follows that
$N(r)\cap{\cal O}=\\{1\\}$.
The second assertion follows from the fact that $N(r)$ lies in $L^{*}_{S}$. ∎
For $\pi\in{\cal P}$, let ${\cal O}_{\pi}$ and $L_{\pi}$ be the $\pi$-adic
completions of ${\cal O}$ and $L$. Let $x,\,y\in L$ and let $n>0$ be an
integer. In what follows, the congruence
$x\equiv y$ modulo $n{\cal O}_{\pi}$
means $x_{\pi}\equiv y_{\pi}\,\mathrm{mod}\,n{\cal O}_{\pi}$, where $x_{\pi}$
and $y_{\pi}$ are the images of $x$ and $y$ in $L_{\pi}$.
The case $n=1$ of the next statement will be used in further sections. In such
a case, Assertion (ii) is tautological.
###### Theorem 1.
Let ${\bf H}$ be an algebraic torus defined over $\mathbb{Q}$, and let $L$ be
its splitting field. Let $n>0$ be an integer and let $S\subset{\cal P}$ be the
set of prime divisors of $n$.
There exists $h\in{\bf H}(\mathbb{Q})$ such that
(i) $\chi(h)$ is not an algebraic integer, for any non-trivial $\chi\in X({\bf
H})$, and
(ii) $\chi(h)\equiv 1\,\mathrm{mod}\,n{\cal O}_{\pi}$ for any $\chi\in X({\bf
H})$ and any $\pi\in S$.
###### Proof.
Step 1. We first find an element $h^{\prime}\in{\bf H}(\mathbb{Q})$ satisfying
Assertion (i) and
(iii) $v_{\pi}(\chi(h^{\prime}))=0$, for any $\pi\in S$ and any $\chi\in
X({\bf H})$.
The abelian group $X({\bf H})$ is free of rank $r$ where $r=\dim\,{\bf H}$.
Therefore, the comultiplication $\Delta:X({\bf H})\rightarrow X({\bf
H})\otimes\mathbb{Z}{\mathrm{Gal}}(L/\mathbb{Q})$ provides an embedding of
$X({\bf H})$ into a free $\mathbb{Z}{\mathrm{Gal}}(L/\mathbb{Q})$-module of
rank $r$. By Lemma 5, there is a free
$\mathbb{Z}{\mathrm{Gal}}(L/\mathbb{Q})$-module $N(r)\subset L^{*}_{S}$ of
rank $r$ with $N(r)\cap{\cal O}=\\{1\\}$. Let
$\mu:X({\bf H})\otimes\mathbb{Z}{\mathrm{Gal}}(L/\mathbb{Q})\to N(r)$,
be an isomorphism of $\mathbb{Z}{\mathrm{Gal}}(L/\mathbb{Q})$-modules, and set
$h^{\prime}=\mu\circ\Delta$.
Since ${\bf
H}(\mathbb{Q})={\mathrm{Hom}}_{{\mathrm{Gal}}(L/\mathbb{Q})}(X({\bf
H}),L^{*})$, the embedding $h^{\prime}$ is indeed an element of ${\bf
H}(\mathbb{Q})$. Viewed as a map from $X({\bf H})$ to $L^{*}$, $h^{\prime}$ is
the morphism $\chi\in X({\bf H})\mapsto\chi(h^{\prime})$.
Since $\mathrm{Im}\,h^{\prime}\cap{\cal O}=1$ and $h^{\prime}$ is injective,
$\chi(h^{\prime})$ is not an algebraic integer if $\chi$ is a non-trivial
character. Since $\mathrm{Im}\,h^{\prime}\subset L^{*}_{S}$, we have
$v_{\pi}(\chi(h^{\prime}))=0$ for any $\chi\in X({\bf H})$. Therefore
$h^{\prime}$ satisfies Assertions (i) and (iii).
Step 2. Let $\chi_{1},\dots,\chi_{r}$ be a basis of $X({\bf H})$. Since
$v_{\pi}(\chi_{i}(h^{\prime}))=0$ for any $\pi\in S$, the element
$\chi_{i}(h^{\prime})\,\mathrm{mod}\,n{\cal O}_{\pi}$ is an invertible element
in the finite ring ${\cal O}_{\pi}/n{\cal O}_{\pi}$. Therefore there are
positive integers $m_{i,\pi}$ such that
$\chi_{i}(h^{\prime})^{m_{i,\pi}}\equiv 1\,\mathrm{mod}\,n{\cal O}_{\pi}$,
for all $1\leq i\leq r$ and all $\pi\in S$. Set $m=\mathrm{lcm}(m_{i,\pi})$
and set $h=h^{\prime m}$. Obviously $h$ satisfies Assertion (i). Moreover we
have $\chi_{i}(h)\equiv 1\,\mathrm{mod}\,n{\cal O}_{\pi}$, for all $\pi\in S$
and all $1\leq i\leq r$, and therefore $h$ satisfies Assertion (ii) as well.
∎
## 3 Special Gradings
Let $\mathfrak{n}$ be a finite dimensional Lie algebra defined over
$\mathbb{Q}$ and let $\mathfrak{z}$ be its center. The relations between the
gradings of $\mathbb{C}\otimes\mathfrak{n}$ and the automorphisms of
$\mathfrak{n}$ are investigated now.
The following important definitions will be used in the whole paper. Let
${\cal S}(\mathfrak{n})$ (respectively ${\cal V}(\mathfrak{n})$) be the set of
all $f\in\mathrm{Aut}\,\mathfrak{n}$ such that
${\mathrm{Spec}}\,\,f|_{\mathfrak{z}}$ (respectively ${\mathrm{Spec}}\,\,f$)
contains no algebraic integers. Moreover let ${\cal F}(\mathfrak{n})$ be the
set of all $f\in{\cal S}(\mathfrak{n})$ such that all eigenvalues of $f^{-1}$
are algebraic integers. Also set ${\cal F}^{+}(\mathfrak{n})={\cal
F}(\mathfrak{n})\cap{\cal V}(\mathfrak{n})$. Here, by eigenvalues of a
$\mathbb{Q}$-linear endomorphism $F$, we always mean the eigenvalues of $F$ in
$\overline{\mathbb{Q}}$.
For any field $K$ of characteristic zero, set
$\mathfrak{n}^{K}=K\otimes\mathfrak{n}$ and
$\mathfrak{z}^{K}=K\otimes\mathfrak{z}$. Let ${\bf G}={\bf Aut}\,\mathfrak{n}$
be the algebraic group of automorphisms of $\mathfrak{n}$. By definition,
${\bf G}$ is defined over $\mathbb{Q}$, and we have ${\bf
G}(K)=\mathrm{Aut}\,\mathfrak{n}^{K}$ for any field $K$ of characteristic
zero. The notation $\mathfrak{n}$ underlines that $\mathfrak{n}$ can be viewed
as the functor in Lie algebras $K\mapsto\mathfrak{n}^{K}$. Let ${\bf
H}\subset{\bf G}$ be a maximal torus defined over $\mathbb{Q}$, whose
existence is proved in [7], see also [6], Theorem 18.2.
By definition, a $K$-grading of $\mathfrak{n}$ is a decomposition of
$\mathfrak{n}^{K}$
$\mathfrak{n}^{K}=\oplus_{n\in\mathbb{Z}}\,\mathfrak{n}^{K}_{n}$
such that
$[\mathfrak{n}^{K}_{n},\mathfrak{n}^{K}_{m}]\subset\mathfrak{n}^{K}_{n+m}$ for
all $n,\,m\in\mathbb{Z}$. A grading is called special (respectively very
special) if $\mathfrak{z}^{K}\cap\mathfrak{n}^{K}_{0}=0$ (respectively if
$\mathfrak{n}^{K}_{0}=0$). A grading is called non-negative (respectively
positive) if $\mathfrak{n}^{K}_{n}=0$ for $n<0$ (respectively
$\mathfrak{n}^{K}_{n}=0$ for $n\leq 0$).
For any field $K$ of characteristic zero, a $K$-grading of $\mathfrak{n}$ can
be identified with an algebraic group morphism
$\rho:\mathbb{G}_{m}\rightarrow{\bf G}$
defined over $K$, where $\mathbb{G}_{m}$ denotes the multiplicative group.
Consider the following two hypotheses
(${\cal H}_{K}$) The Lie algebra $\mathfrak{n}$ admits a special $K$-grading,
(${\cal H}^{0}_{K}$) The Lie algebra $\mathfrak{n}$ admits a very special
$K$-grading.
###### Lemma 6.
Let $K$ be the splitting field of ${\bf H}$. Up to conjugacy, any grading of
$\mathfrak{n}^{\mathbb{C}}$ is defined over $K$. In particular
(i) The hypotheses ${\cal H}_{\mathbb{C}}$ and ${\cal
H}_{\overline{\mathbb{Q}}}$ are equivalent.
(ii) The hypotheses ${\cal H}^{0}_{\mathbb{C}}$ and ${\cal
H}^{0}_{\overline{\mathbb{Q}}}$ are equivalent.
###### Proof.
Let
$\mathfrak{n}^{\mathbb{C}}=\oplus_{n\in\mathbb{Z}}\,\mathfrak{n}_{n}^{\mathbb{C}}$
be a grading of $\mathfrak{n}^{\mathbb{C}}$ and let
$\rho:\mathbb{G}_{m}\rightarrow{\bf G}$ be the corresponding algebraic group
morphism. Since any maximal torus of ${\bf G}$ is ${\bf
G}(\mathbb{C})$-conjugate to $\bf H$, it can be assumed that
$\rho(\mathbb{G}_{m})\subset{\bf H}$.
Let $X({\bf H})$ be the character group of ${\bf H}$. The group morphism
$\rho$ is determined by the dual morphism $L:X({\bf
H})\rightarrow\mathbb{Z}=X(\mathbb{G}_{m})$. However, ${\mathrm{Gal}}(K)$ acts
trivially on $X({\bf H})$. Thus $\rho$ is automatically defined over $K$.
∎
###### Lemma 7.
Let $\Lambda$ be a finitely generated abelian group and let $S\subset\Lambda$
be a finite subset containing no element of finite order. Then there exists a
morphism $L:\Lambda\rightarrow\mathbb{Z}$ such that
$L(\lambda)\neq 0$ for any $\lambda\in S$.
###### Proof.
Let $F$ be the subgroup of finite order elements in $\Lambda$. Using
$\Lambda/F$ instead of $\Lambda$, it can be assumed that
$\Lambda=\mathbb{Z}^{d}$ for some $d$ and that $0\notin S$. Let’s choose a positive
integer $N$ such that $S\subset\,]-N,N[^{d}$ and let
$L:\Lambda\rightarrow\mathbb{Z}$ be the function defined by
$L(a_{1},\dots,a_{d})=\sum_{1\leq i\leq d}a_{i}N^{i-1}$.
For any $\lambda=(a_{1},\dots,a_{d})\in S$, there is a smallest index $i$ with
$a_{i}\neq 0$. We have $L(\lambda)=a_{i}N^{i-1}$ modulo $N^{i}$. Since
$|a_{i}|<N$, it follows that $L(\lambda)\neq 0\,\mathrm{mod}\,N^{i}$ and
therefore $L(\lambda)\neq 0$. ∎
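The construction in the proof is effective. The following sketch (Python; the name `build_L` is ours) assumes the finite-order part has already been quotiented out, so that $\Lambda=\mathbb{Z}^{d}$, the elements of $S$ are integer tuples and $0\notin S$; the coordinates are read as digits in base $N$.

```python
# Sketch of the morphism L from the proof of Lemma 7.  The finite-order
# part is assumed already quotiented out: Lambda = Z^d and 0 is not in S.

def build_L(S):
    """Return L: Z^d -> Z with L(a_1,...,a_d) = sum_i a_i N^(i-1)."""
    N = 1 + max(abs(a) for s in S for a in s)   # S is contained in ]-N, N[^d
    def L(v):
        return sum(a * N**i for i, a in enumerate(v))
    return L

S = [(1, 0, -2), (0, -3, 1), (0, 0, 4)]
L = build_L(S)
assert all(L(s) != 0 for s in S)   # L vanishes on no element of S
assert L((0, 0, 0)) == 0
```

The nonvanishing follows exactly as in the proof: if $i$ is the smallest index with $a_{i}\neq 0$, then $L(\lambda)\equiv a_{i}N^{i-1}\not\equiv 0$ modulo $N^{i}$.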
###### Lemma 8.
Let $f\in{\bf G}(\mathbb{Q})$. There is an $f$-invariant $\mathbb{Z}$-grading
of $\mathfrak{n}^{\overline{\mathbb{Q}}}$ such that all eigenvalues of $f$ on
$\mathfrak{n}_{0}^{{\overline{\mathbb{Q}}}}$ are roots of unity.
In particular, if ${\mathrm{Spec}}\,\,f$ contains no root of unity, then
$\mathfrak{n}^{\overline{\mathbb{Q}}}$ admits a very special grading.
###### Proof.
Let $\Lambda\subset\overline{\mathbb{Q}}^{*}$ be the subgroup generated by
${\mathrm{Spec}}\,\,f$. For any $\lambda\in\Lambda$, denote by
$E_{(\lambda)}\subset\mathfrak{n}^{\overline{\mathbb{Q}}}$ the corresponding
generalized eigenspace of $f$. Let $R$ be the set of all roots of unity in
${\mathrm{Spec}}\,\,f$ and set $S={\mathrm{Spec}}\,\,f\setminus R$.
By Lemma 7, there is a morphism $L:\Lambda\rightarrow\mathbb{Z}$ such that
$L(\lambda)\neq 0$ for any $\lambda\in S$. Let ${\cal G}$ be the decomposition
$\mathfrak{n}^{\overline{\mathbb{Q}}}=\oplus_{k\in\mathbb{Z}}\,\mathfrak{n}^{\overline{\mathbb{Q}}}_{k}$
of $\mathfrak{n}^{\overline{\mathbb{Q}}}$ defined by
$\mathfrak{n}^{\overline{\mathbb{Q}}}_{k}=\oplus_{L(\lambda)=k}\,E_{(\lambda)}$.
Since $[E_{(\lambda)},E_{(\mu)}]\subset E_{(\lambda\mu)}$ and
$L(\lambda\mu)=L(\lambda)+L(\mu)$ for any $\lambda,\,\mu\in\Lambda$, it
follows that ${\cal G}$ is a grading of the Lie algebra
$\mathfrak{n}^{\overline{\mathbb{Q}}}$. Moreover we have
$\mathfrak{n}^{\overline{\mathbb{Q}}}_{0}=\oplus_{\lambda\in
R}\,E_{(\lambda)}$,
from which the lemma follows. ∎
###### Lemma 9.
With the previous notations
(i) the Lie algebra $\mathfrak{n}^{\mathbb{C}}$ admits a special grading iff
${\cal S}(\mathfrak{n})\neq\emptyset$.
(ii) the Lie algebra $\mathfrak{n}^{\mathbb{C}}$ admits a very special grading
iff ${\cal V}(\mathfrak{n})\neq\emptyset$.
###### Proof.
In order to prove Assertion (i), let’s consider the following assertion
(${\cal A}$) $H^{0}({\bf
H}(\overline{\mathbb{Q}}),\mathfrak{z}^{\overline{\mathbb{Q}}})=0$.
The proof is based on the following "cycle" of implications
$\mathfrak{n}^{\mathbb{C}}$ has a special grading $\Rightarrow({\cal A})$
$\Rightarrow{\cal
S}(\mathfrak{n})\neq\emptyset\Rightarrow\mathfrak{n}^{\mathbb{C}}$ has a
special grading.
Step 1: the existence of a special grading of $\mathfrak{n}^{\mathbb{C}}$
implies (${\cal A}$). By hypothesis and Lemma 6,
$\mathfrak{n}^{\overline{\mathbb{Q}}}$ admits a special grading. Let
$\rho:\mathbb{G}_{m}\rightarrow{\bf G}$ be the corresponding group morphism.
Since all maximal tori of ${\bf G}$ are conjugate to ${\bf H}$, we can assume
that $\rho(\mathbb{G}_{m})\subset{\bf H}$. Therefore we have
$H^{0}({\bf
H}({\overline{\mathbb{Q}}}),\mathfrak{z}^{{\overline{\mathbb{Q}}}})\subset
H^{0}(\rho({\overline{\mathbb{Q}}}^{*}),\mathfrak{z}^{{\overline{\mathbb{Q}}}})=0$.
Thus Assertion ${\cal A}$ is proved.
Step 2: proof that (${\cal A}$) implies ${\cal S}(\mathfrak{n})\neq\emptyset$.
By Theorem 1, there exists $f\in{\bf H}(\mathbb{Q})$ such that $\chi(f)$ is
not an algebraic integer for any non-trivial character $\chi\in X({\bf H})$.
If we assume (${\cal A}$), then ${\mathrm{Spec}}\,\,f|_{\mathfrak{z}}$
contains no algebraic integers and therefore ${\cal
S}(\mathfrak{n})\neq\emptyset$.
Step 3: proof that ${\cal S}(\mathfrak{n})\neq\emptyset$ implies the existence
of a special grading. Let $f\in{\cal S}(\mathfrak{n})$. Since
${\mathrm{Spec}}\,\,f|_{\mathfrak{z}}$ contains no roots of unity, it follows
from Lemma 8 that the Lie algebra $\mathfrak{n}^{\overline{\mathbb{Q}}}$ (and
therefore $\mathfrak{n}^{\mathbb{C}}$) admits a special grading.
The proof of Assertion (ii) is almost identical. Instead of $({\cal A})$, the
"cycle" of implications uses the following assertion
(${\cal A}^{0}$) $H^{0}({\bf
H}(\overline{\mathbb{Q}}),\mathfrak{n}^{\overline{\mathbb{Q}}})=0$. ∎
###### Lemma 10.
The following are equivalent:
(i) the Lie algebra $\mathfrak{n}^{\mathbb{Q}}$ admits a non-negative special
grading,
(ii) the Lie algebra $\mathfrak{n}^{\mathbb{C}}$ admits a non-negative special
grading, and
(iii) The set ${\cal F}(\mathfrak{n})$ is not empty.
###### Proof.
The implication $(i)\Rightarrow(ii)$ is obvious. Proof that
$(ii)\Rightarrow(iii)$. Let
$\mathfrak{n}^{\mathbb{C}}=\oplus_{k\geq 0}\,\mathfrak{n}^{\mathbb{C}}_{k}$ be
a non-negative special grading of $\mathfrak{n}^{\mathbb{C}}$ and let
$\rho:\mathbb{G}_{m}\to{\bf G}$ be the corresponding group morphism. Up to
conjugacy, we can assume that $\rho(\mathbb{G}_{m})\subset{\bf H}$. It follows
that the grading is defined over the splitting field $K$ of ${\bf H}$.
Let $g_{1}\in{\bf H}(K)$ be the isomorphism defined by $g_{1}x=2^{k}x$ if
$x\in\mathfrak{n}_{k}^{\mathbb{C}}$. Set $n=[K:\mathbb{Q}]$ and let
$g_{1},g_{2},\dots,g_{n}$ be the ${\mathrm{Gal}}(K/\mathbb{Q})$-conjugates of
$g_{1}$. Since all $g_{i}$ belong to ${\bf H}(K)$, the automorphisms $g_{i}$
commute. Hence the product $g:=g_{1}\dots g_{n}$ is well defined and $g$
belongs to ${\bf H}(\mathbb{Q})$. By hypothesis, all eigenvalues of $g_{i}$
are powers of $2$, and all eigenvalues of $g_{i}|_{\mathfrak{z}^{\mathbb{C}}}$
are distinct from $1$. Therefore all eigenvalues of $g$ are integers, and all
eigenvalues of $g|_{\mathfrak{z}^{\mathbb{C}}}$ are $\neq\pm 1$. It follows
that $g^{-1}$ belongs to ${\cal F}(\mathfrak{n})$. Therefore ${\cal
F}(\mathfrak{n})\neq\emptyset$.
Proof that $(iii)\Rightarrow(i)$. Let $f\in{\cal F}(\mathfrak{n})$ and set
$g=f^{-1}$. Set $K=\mathbb{Q}({\mathrm{Spec}}\,\,g)$ and let
$L:K^{*}\rightarrow\mathbb{Z}$ be the map defined by
$L(x)=\sum_{p}\,v_{p}(N_{K/\mathbb{Q}}(x))$
where the sum runs over all prime numbers $p$ and where
$N_{K/\mathbb{Q}}:K^{*}\rightarrow\mathbb{Q}^{*}$ denotes the norm map.
For any integer $k$, set
$\mathfrak{n}_{k}^{\overline{\mathbb{Q}}}=\bigoplus\limits_{L(x)=k}E_{(x)}$
where $E_{(x)}\subset\mathfrak{n}^{\overline{\mathbb{Q}}}$ denotes the
generalized eigenspace associated to $x\in{\mathrm{Spec}}\,\,g$. We have
$[E_{(x)},E_{(y)}]\subset E_{(xy)}$ and $L(xy)=L(x)+L(y)$, for any $x,\,y\in
K^{*}$. Therefore the decomposition
$\mathfrak{n}^{\overline{\mathbb{Q}}}=\oplus_{k\in\mathbb{Z}}\,\mathfrak{n}_{k}^{\overline{\mathbb{Q}}}$
is a grading $\cal G$ of the Lie algebra
$\mathfrak{n}^{\overline{\mathbb{Q}}}$. Since the function $L$ is
${\mathrm{Gal}}(\mathbb{Q})$-invariant, the grading $\cal G$ is indeed defined
over $\mathbb{Q}$. It remains to prove that $\cal G$ is non-negative and
special.
Since any $x\in{\mathrm{Spec}}\,\,g$ is an algebraic integer, we have
$L(x)\geq 0$ and the grading is non-negative. Since no
$x\in{\mathrm{Spec}}\,\,g|_{\mathfrak{z}}$ is an algebraic unit, we have
$N_{K/\mathbb{Q}}(x)\neq\pm 1$ and $L(x)>0$. Thus the grading is special,
which proves that $(iii)\Rightarrow(i)$.
∎
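In the rational case $K=\mathbb{Q}$, the norm map is the identity and $L(x)=\sum_{p}v_{p}(x)$. The following sketch (Python; the trial-division implementation is ours) illustrates the three properties used in the proof above: $L$ is a group morphism, $L(x)\geq 0$ when $x$ is an algebraic integer (here: an integer), and $L(x)>0$ when $x$ is moreover not a unit.

```python
from fractions import Fraction

def L(x):
    """Sum of the p-adic valuations v_p(x) over all primes p, for a
    nonzero rational x (computed by trial division)."""
    x = Fraction(x)
    total = 0
    for m, sign in ((abs(x.numerator), 1), (x.denominator, -1)):
        p = 2
        while m > 1:
            while m % p == 0:
                total += sign
                m //= p
            p += 1
    return total

a, b = Fraction(12), Fraction(5, 8)
assert L(a) == 3 and L(b) == -2      # 12 = 2^2 * 3,  5/8 = 5 * 2^(-3)
assert L(a * b) == L(a) + L(b)       # L is a morphism Q* -> Z
assert L(1) == L(-1) == 0            # the units +1, -1 are sent to 0
assert all(L(n) > 0 for n in (2, 6, 10))   # non-unit integers: L > 0
```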
###### Lemma 11.
The following are equivalent:
(i) the Lie algebra $\mathfrak{n}^{\mathbb{Q}}$ admits a positive grading,
(ii) the Lie algebra $\mathfrak{n}^{\mathbb{C}}$ admits a positive grading,
and
(iii) The set ${\cal F}^{+}(\mathfrak{n})$ is not empty.
Since the proof is almost identical to the previous proof, it will be skipped.
The equivalence $(i)\Leftrightarrow(ii)$ also appears in [8].
## 4 Height and relative complexity
Throughout the whole chapter, $V$ denotes a finite dimensional vector space
over $\mathbb{Q}$. We define the notion of the height of an isomorphism $h\in
GL(V)$ and the notion of a minimal lattice.
4.1 Height, complexity and minimality
Let $h\in GL(V)$. Recall that a lattice of $V$ is a finitely generated
subgroup $\Lambda$ which contains a basis of $V$. Let ${\cal D}(h)$ be the set
of all couples of lattices $(\Lambda,E)$ of $V$ such that $E\subset\Lambda$ and
$h(E)\subset\Lambda$. By definition, the height of $h$ is the integer
${\mathrm{ht}}(h):=\mathrm{Min}_{(\Lambda,E)\in{\cal D}(h)}\,[\Lambda:E]$.
Let ${\cal D}_{min}(h)$ be the set of all couples $(\Lambda,E)\in{\cal D}(h)$
such that $[\Lambda:E]={\mathrm{ht}}(h)$.
Similarly, for a lattice $\Lambda$ of $V$, the $h$-complexity of $\Lambda$ is
the integer
$\mathrm{cp}_{h}(\Lambda):=\mathrm{Min}_{(\Lambda,E)\in{\cal
D}(h)}\,[\Lambda:E]$.
It is clear that $\mathrm{cp}_{h}(\Lambda)=[\Lambda:E]$, where $E=\Lambda\cap
h^{-1}(\Lambda)$. The lattice $\Lambda$ is called minimal relative to $h$ if
$\mathrm{cp}_{h}(\Lambda)={\mathrm{ht}}(h)$.
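For rational $h$, the complexity of the standard lattice is directly computable: $E=\Lambda\cap h^{-1}(\Lambda)$ contains $q\Lambda$ for any common denominator $q$ of the entries of $h$, so membership in $E$ only depends on residues modulo $q$. A sketch (Python, restricted to the $2\times 2$ case over $\Lambda=\mathbb{Z}^{2}$; the function name is ours):

```python
from fractions import Fraction as F
from itertools import product
from math import lcm

def cp_Z2(h):
    """cp_h(Z^2) = [Z^2 : E], where E is the set of x in Z^2 with h.x
    in Z^2, for a rational 2x2 matrix h.  Since E contains q.Z^2 for a
    common denominator q of the entries of h, the index equals
    q^2 / #{x mod q : h.x is integral}."""
    q = lcm(*(a.denominator for row in h for a in row))
    good = sum(
        1 for x in product(range(q), repeat=2)
        if all((h[i][0] * x[0] + h[i][1] * x[1]).denominator == 1
               for i in range(2))
    )
    return q * q // good

assert cp_Z2([[F(3, 2), F(0)], [F(0), F(5)]]) == 2     # E = 2Z x Z
assert cp_Z2([[F(1, 2), F(1, 2)], [F(0), F(1)]]) == 2  # E = {x0 + x1 even}
```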
For the proofs, a technical notion of relative height is needed. Let
$\mathrm{End}_{h}(V)$ be the commutant of $h$ and let $A\subset
\mathrm{End}_{h}(V)$ be a subring. By definition, an $A$-lattice
$\Lambda$ means a lattice $\Lambda$ which is an $A$-module. Let ${\cal
D}^{A}(h)$ be the set of all couples of $A$-lattices $(\Lambda,E)$ in ${\cal
D}(h)$. The $A$-height of $h$ is the integer
${\mathrm{ht}}_{A}(h):=\mathrm{Min}_{(\Lambda,E)\in{\cal
D}^{A}(h)}\,[\Lambda:E]$.
Obviously, we have
${\mathrm{ht}}_{A}(h)\geq{\mathrm{ht}}(h)={\mathrm{ht}}_{\mathbb{Z}}(h)$. Let
${\cal D}^{A}_{min}(h)$ be the set of all couples $(\Lambda,E)\in{\cal
D}^{A}(h)$ such that $[\Lambda:E]={\mathrm{ht}}_{A}(h)$.
4.2 Height and filtrations
Let $V$ be a finite dimensional vector space over $\mathbb{Q}$ and let $h\in
GL(V)$. Let $A$ be a subring of $\mathrm{End}_{h}(V)$ and let $A[h]$ be the
subring of $\mathrm{End}_{h}(V)$ generated by $A$ and $h$.
###### Lemma 12.
Let $0=V_{0}\subset V_{1}\subset\dots\subset V_{n}=V$ be a filtration of $V$,
where each vector space $V_{i}$ is an $A[h]$-submodule. For $i=1$ to $n$, set
$h_{i}=h_{V_{i}/V_{i-1}}$. Then we have
${\mathrm{ht}}_{A}(h)\geq\prod_{1\leq i\leq n}\,{\mathrm{ht}}_{A}(h_{i})$.
Moreover if $V\simeq\oplus V_{i}/V_{i-1}$ as an $A[h]$-module, we have
${\mathrm{ht}}_{A}(h)=\prod_{1\leq i\leq n}\,{\mathrm{ht}}_{A}(h_{i})$.
###### Proof.
Clearly it is enough to prove the lemma for $n=2$. Let $(\Lambda,E)\in{\cal
D}^{A}_{min}(h)$. Set $\Lambda_{1}=\Lambda\cap V_{1}$, $E_{1}=E\cap V_{1}$,
$\Lambda_{2}=\Lambda/\Lambda_{1}$ and $E_{2}=E/E_{1}$. We have
$[\Lambda:E]=[\Lambda_{1}:E_{1}][\Lambda_{2}:E_{2}]$.
Since $(\Lambda_{1},E_{1})\in{\cal D}^{A}(h_{1})$ and
$(\Lambda_{2},E_{2})\in{\cal D}^{A}(h_{2})$, we have
${\mathrm{ht}}_{A}(h)\geq{\mathrm{ht}}_{A}(h_{1})\,{\mathrm{ht}}_{A}(h_{2})$,
which proves the first assertion.
Next, we assume that $V\simeq V_{1}\oplus V_{2}$ as an $A[h]$-module. Let
$(\Lambda_{1},E_{1})\in{\cal D}_{min}^{A}(h_{1})$,
$(\Lambda_{2},E_{2})\in{\cal D}_{min}^{A}(h_{2})$ and set
$\Lambda=\Lambda_{1}\oplus\Lambda_{2}$ and $E=E_{1}\oplus E_{2}$. We have
$[\Lambda:E]=[\Lambda_{1}:E_{1}][\Lambda_{2}:E_{2}]={\mathrm{ht}}_{A}(h_{1})\,{\mathrm{ht}}_{A}(h_{2})$.
Therefore
${\mathrm{ht}}_{A}(h)\leq{\mathrm{ht}}_{A}(h_{1})\,{\mathrm{ht}}_{A}(h_{2})$.
Hence
${\mathrm{ht}}_{A}(h)={\mathrm{ht}}_{A}(h_{1})\,{\mathrm{ht}}_{A}(h_{2})$.
∎
Let $h\in GL(V)$ be as before. Its Chevalley decomposition $h=h_{s}h_{u}$ is
uniquely defined by the following three conditions: $h_{s}$ and $h_{u}$
commute, $h_{s}$ is semi-simple and $h_{u}$ is unipotent.
###### Lemma 13.
We have
${\mathrm{ht}}(h)={\mathrm{ht}}(h_{s})$.
###### Proof.
By Lemma 12, it can be assumed that the $\mathbb{Q}[h]$-module $V$ is
indecomposable. Therefore there is a vector space $V_{0}$, a semi-simple
endomorphism $h_{0}\in\mathrm{End}(V_{0})$ and an isomorphism
$V\simeq V_{0}\otimes\mathbb{Q}[t]/(t^{n})$,
relative to which $h_{s}$ acts as $h_{0}\otimes 1$ and $h_{u}$ acts as
$1\otimes t$. Let $(\Lambda_{0},E_{0})\in{\cal D}_{min}(h_{0})$ and set
$\Lambda=\Lambda_{0}\otimes\mathbb{Z}[t]/(t^{n})$ and
$E=E_{0}\otimes\mathbb{Z}[t]/(t^{n})$. By Lemma 12, we have
${\mathrm{ht}}(h)\geq{\mathrm{ht}}(h_{s})={\mathrm{ht}}(h_{0})^{n}$. Since
$(\Lambda,E)\in{\cal D}(h)$ and
$[\Lambda:E]=[\Lambda_{0}:E_{0}]^{n}={\mathrm{ht}}(h_{0})^{n}$,
it follows that ${\mathrm{ht}}(h)={\mathrm{ht}}(h_{s})$.
∎
4.3 Complexity of ${\cal O}(h)$-lattices
For any algebraic number $\lambda$, let ${\cal O}(\lambda)$ be the ring of
integers of the number field $\mathbb{Q}(\lambda)$. Set
$\pi_{\lambda}=\\{x\in{\cal O}(\lambda)|\,x\lambda\in{\cal O}(\lambda)\\}$.
Then $\pi_{\lambda}$ is an integral ideal and its norm is the integer
$d(\lambda):=\mathrm{N}_{\mathbb{Q}(\lambda)/\mathbb{Q}}(\pi_{\lambda})=\mathrm{Card\,}{\cal
O}(\lambda)/\pi_{\lambda}$.
Let $h\in GL(V)$ be semi-simple. Let $P(t)$ be its minimal polynomial, let
$P=P_{1}\dots P_{k}$ be its factorization into irreducible factors. For $1\leq
i\leq k$, set $K_{i}=\mathbb{Q}[t]/(P_{i}(t))$ and let ${\cal O}_{i}$ be the
ring of integers of the number field $K_{i}$. Set ${\cal O}(h)=\oplus_{1\leq
i\leq k}\,{\cal O}_{i}$.
For each $\lambda\in{\mathrm{Spec}}\,\,h$, let $m_{\lambda}$ be its
multiplicity. Note that the functions $\lambda\mapsto m_{\lambda}$ and
$\lambda\mapsto d(\lambda)$ are ${\mathrm{Gal}}(\mathbb{Q})$-invariant, so
they can be viewed as functions defined over
${\mathrm{Spec}}\,\,h/{\mathrm{Gal}}(\mathbb{Q})$.
###### Lemma 14.
Let $\Lambda$ be an ${\cal O}(h)$-lattice of $V$. Then
$\mathrm{cp}_{h}(\Lambda)=\prod\,d(\lambda)^{m_{\lambda}}$,
where the product runs over ${\mathrm{Spec}}\,h/{\mathrm{Gal}}(\mathbb{Q})$.
###### Proof.
With the previous notations, let $e_{i}$ be the unit of ${\cal O}_{i}$ and set
$\Lambda_{i}=e_{i}\Lambda$. Since $\Lambda=\oplus_{1\leq i\leq
k}\,\Lambda_{i}$, it is enough to prove the lemma for $k=1$, i.e. when the
minimal polynomial of $h$ is irreducible.
Let $\lambda$ be one eigenvalue of $h$. With these new hypotheses, we have
$\mathbb{Q}[h]/(P(t))\simeq\mathbb{Q}(\lambda)$, ${\cal O}(h)\simeq{\cal
O}(\lambda)$ and $V$ is a vector space of dimension $m_{\lambda}$ over
$\mathbb{Q}(\lambda)$, relative to which $h$ is identified with the
multiplication by $\lambda$. We have
$\pi_{\lambda}\Lambda=\Lambda\cap h^{-1}\Lambda$.
Since $\Lambda/\pi_{\lambda}\Lambda\simeq({\cal
O}(\lambda)/\pi_{\lambda})^{m_{\lambda}}$, it follows that
$\mathrm{cp}_{h}(\Lambda)=d(\lambda)^{m_{\lambda}}$.
∎
4.4 Computation of the height
Let $h\in GL(V)$ be semi-simple.
###### Lemma 15.
We have
${\mathrm{ht}}(h)=\prod\,d(\lambda)^{m_{\lambda}}$,
where the product runs over ${\mathrm{Spec}}\,h/{\mathrm{Gal}}(\mathbb{Q})$.
###### Proof.
Using Lemmas 12 and 13, it can be assumed that $V$ is a simple
$\mathbb{Q}[h]$-module; let $n$ be its dimension. The eigenvalues
$\lambda_{1},\dots,\lambda_{n}$ of $h$ are conjugate by
${\mathrm{Gal}}(\mathbb{Q})$. Under these simplifying hypotheses, the formula
to be proved is
${\mathrm{ht}}(h)=d(\lambda_{1})$.
Step 1: scalar extension. Set $K=\mathbb{Q}(\lambda_{1},\dots,\lambda_{n})$,
let $U=K\otimes V$, let $\tilde{h}=1\otimes h$ be the extension of $h$ to $U$
and let $\\{v_{1},\dots,v_{n}\\}$ be a $K$-basis of $U$ such that
$\tilde{h}.v_{i}=\lambda_{i}\,v_{i}$. We have $U=\oplus_{1\leq i\leq n}U_{i}$,
where $U_{i}=K\,v_{i}$.
Let $\cal O$ be the ring of integers of $K$. For each $1\leq i\leq n$, set
$\tilde{h}_{i}=\tilde{h}|_{U_{i}}$. Since each $U_{i}$ is an ${\cal
O}[\tilde{h}]$-module, Lemma 12 shows that
${\mathrm{ht}}_{\cal O}(\tilde{h})=\prod_{1\leq i\leq n}\,{\mathrm{ht}}_{\cal
O}(\tilde{h}_{i})$.
Next, the integers ${\mathrm{ht}}_{\cal O}(\tilde{h}_{i})$ are computed. Let
$\Lambda_{i}\subset U_{i}$ be any ${\cal O}$-lattice. Since ${\cal O}$
contains ${\cal O}(\lambda_{i})={\cal O}(\tilde{h}_{i})$, it follows from
Lemma 14 that
$\mathrm{cp}_{\tilde{h}_{i}}(\Lambda_{i})=d(\lambda_{i})^{r}$
where $r=\mathrm{rk}_{\cal O}(\Lambda_{i})=[K:\mathbb{Q}(\lambda_{i})]$. Hence
we have ${\mathrm{ht}}_{\cal
O}(\tilde{h}_{i})=d(\lambda_{i})^{[K:\mathbb{Q}(\lambda_{i})]}$. It follows
that
${\mathrm{ht}}_{\cal O}(\tilde{h})=\prod_{1\leq i\leq
n}\,d(\lambda_{i})^{[K:\mathbb{Q}(\lambda_{i})]}=d(\lambda_{1})^{[K:\mathbb{Q}]}$.
Step 2: end of the proof. Now let $(\Lambda,E)\in{\cal D}_{min}(h)$. Set
$\tilde{\Lambda}={\cal O}\otimes\Lambda$ and $\tilde{E}={\cal O}\otimes E$.
Since $\tilde{E}$ is an ${\cal O}$-module, we have
$[\tilde{\Lambda}:\tilde{E}]\geq{\mathrm{ht}}_{\cal O}(\tilde{h})$. It follows
that
${\mathrm{ht}}(h)^{[K:\mathbb{Q}]}=[\Lambda:E]^{[K:\mathbb{Q}]}=[\tilde{\Lambda}:\tilde{E}]\geq{\mathrm{ht}}_{\cal
O}(\tilde{h})=d(\lambda_{1})^{[K:\mathbb{Q}]}$.
Thus we have $d(\lambda_{1})\leq{\mathrm{ht}}(h)$. By Lemma 14, we have
${\mathrm{ht}}_{{\cal O}(h)}(h)=d(\lambda_{1})$. It follows that
$d(\lambda_{1})\leq{\mathrm{ht}}(h)\leq{\mathrm{ht}}_{{\cal
O}(h)}(h)=d(\lambda_{1})$,
which proves the formula.
∎
Remark: In number theory, the Weil height of an algebraic number $\lambda$ is
$H(\lambda)=\theta d(\lambda)^{1/n}$, where $\theta$ involves the norms at the
infinite places. Therefore ${\mathrm{ht}}(h)$ is essentially the Weil height
of $h$, up to the factor at the infinite places.
4.5 A simple criterion of minimality
An obvious consequence of Lemmas 14 and 15 is
###### Lemma 16.
Let $h\in GL(V)$ be semi-simple and let $\Lambda$ be an ${\cal O}(h)$-lattice
of $V$. Then $\Lambda$ is minimal relative to $h$.
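Minimality can fail for lattices which are not ${\cal O}(h)$-modules. A small sketch (Python; the matrices form our own toy example) contrasts the ${\cal O}(h)$-lattice $\mathbb{Z}^{2}$ with the skew lattice $\Lambda^{\prime}=\mathbb{Z}(1,1)+\mathbb{Z}(0,2)$ for $h=\mathrm{diag}(1/2,2)$, whose height is ${\mathrm{ht}}(h)=d(1/2)\,d(2)=2$ by Lemma 15:

```python
from fractions import Fraction as F
from itertools import product
from math import lcm

def index_of_E(h):
    """[Z^2 : E] where E is the set of x in Z^2 with h.x in Z^2, for a
    rational 2x2 matrix h.  E contains q.Z^2 for q a common denominator
    of the entries of h, so counting residues modulo q suffices."""
    q = lcm(*(a.denominator for row in h for a in row))
    good = sum(1 for x in product(range(q), repeat=2)
               if all((h[i][0] * x[0] + h[i][1] * x[1]).denominator == 1
                      for i in range(2)))
    return q * q // good

# The O(h)-lattice Z^2 attains the height, as Lemma 16 predicts:
h = [[F(1, 2), F(0)], [F(0), F(2)]]
assert index_of_E(h) == 2

# The skew lattice L' = Z(1,1) + Z(0,2) is not an O(h)-module; its
# complexity equals index_of_E of B^{-1} h B, where B = [[1,0],[1,2]]
# is the basis matrix of L' (the conjugate is precomputed by hand):
conj = [[F(1, 2), F(0)], [F(3, 4), F(2)]]
assert index_of_E(conj) == 4    # strictly larger than ht(h) = 2
```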
## 5 Malcev’s Theorem and self-similar data
In this chapter, we recall Malcev’s Theorem. We then collect some related
results, due to Malcev or regarded as folklore. It is then easy to
characterize the self-similar data for FGTF nilpotent groups.
5.1 Three types of lattices
Let $\mathfrak{n}$ be a finite dimensional nilpotent Lie algebra over
$\mathbb{Q}$. The Lie algebra $\mathfrak{n}$ is endowed with two group
structures, the addition and the Campbell-Hausdorff product. To avoid
confusion, the Campbell-Hausdorff product is called the multiplication and it
is denoted accordingly.
A multiplicative subgroup $\Gamma$ of $\mathfrak{n}$ means a subgroup relative
to the Campbell-Hausdorff product. In general, a multiplicative subgroup
$\Gamma$ is not an additive subgroup of $\mathfrak{n}$. However, notice that
$\mathbb{Z}.x\subset\Gamma$ for any $x\in\Gamma$, because $x^{n}=nx$ for any
$n\in\mathbb{Z}$.
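For a Lie algebra of nilpotency index $2$, the Campbell-Hausdorff series truncates to $u\cdot v=u+v+\frac{1}{2}[u,v]$. The following sketch (Python; the Heisenberg Lie algebra with basis $X,Y,Z$ and $[X,Y]=Z$ is our choice of example) checks that $x^{n}=nx$ while the multiplication differs from the addition:

```python
from fractions import Fraction as F

def bracket(u, v):
    """[aX+bY+cZ, a'X+b'Y+c'Z] = (ab' - a'b) Z in the Heisenberg algebra."""
    return (F(0), F(0), u[0] * v[1] - u[1] * v[0])

def ch(u, v):
    """Truncated Campbell-Hausdorff product u.v = u + v + [u,v]/2."""
    w = bracket(u, v)
    return tuple(u[i] + v[i] + w[i] / 2 for i in range(3))

X, Y = (F(1), F(0), F(0)), (F(0), F(1), F(0))

assert ch(X, Y) == (1, 1, F(1, 2))   # X.Y = X + Y + Z/2, not X + Y
x, acc = (F(1), F(2), F(3)), (F(0),) * 3
for _ in range(5):
    acc = ch(acc, x)                 # [kx, x] = 0, so powers stay on Q.x
assert acc == (5, 10, 15)            # x^5 = 5x
```

In particular, the multiplicative subgroup generated by $X$ and $Y$ is not an additive subgroup, since $X\cdot Y-X-Y=\frac{1}{2}Z$.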
A finitely generated multiplicative subgroup $\Gamma$ is called a
multiplicative lattice if $\Gamma\,\mathrm{mod}\,[\mathfrak{n},\mathfrak{n}]$
generates the $\mathbb{Q}$-vector space
$\mathfrak{n}/[\mathfrak{n},\mathfrak{n}]$, or, equivalently, if $\Gamma$
generates the Lie algebra $\mathfrak{n}$. Let $N$ be the CSC nilpotent Lie
group with Lie algebra $\mathfrak{n}^{\mathbb{R}}=\mathbb{R}\otimes\mathfrak{n}$. A
discrete subgroup $\Gamma$ of $N$ is called a cocompact lattice if $N/\Gamma$
is compact.
It should be noted that three distinct notions of lattices will be used in the
sequel: the additive lattices, the multiplicative lattices and the cocompact
lattices. When it is used alone, a lattice is always an additive lattice. This
very common terminology could be confusing: the reader should read
"multiplicative lattice" and "cocompact lattice" as single words.
5.2 Malcev’s Theorem
Any multiplicative lattice $\Gamma$ of a finite dimensional nilpotent Lie
algebra over $\mathbb{Q}$ is a FGTF nilpotent group. Conversely, Malcev proved
in [18]
###### Malcev’s Theorem.
Let $\Gamma$ be a FGTF nilpotent group.
1\. There exists a unique nilpotent Lie algebra $\mathfrak{n}$ over
$\mathbb{Q}$ which contains $\Gamma$ as a multiplicative lattice.
2\. There exists a unique CSC nilpotent Lie group $N$ which contains $\Gamma$
as a cocompact lattice.
3\. The Lie algebra of $N$ is $\mathbb{R}\otimes\mathfrak{n}$.
The Lie algebra $\mathfrak{n}$ of the previous theorem will be called the
Malcev Lie algebra of $\Gamma$.
5.3 The coset index
From now on, $\mathfrak{n}$ will be a finite dimensional nilpotent Lie
algebra. The coset index, which is defined now, generalizes the notions of
indices for additive lattices and for multiplicative lattices.
A subset $X$ of $\mathfrak{n}$ is called a coset union if $X$ is a finite
union of $\Lambda$-cosets for some additive lattice $\Lambda$.
Recall that the nilpotency index of $\mathfrak{n}$ is the smallest integer $n$
such that $C^{n+1}\mathfrak{n}=0$, where $(C^{n}\,\mathfrak{n})_{n\geq 0}$ is
its descending central series. The following lemma is easily proved by
induction on the nilpotency index of $\mathfrak{n}$.
###### Lemma 17.
Any multiplicative lattice $\Gamma$ of $\mathfrak{n}$ is a coset union.
Let $X\supset Y$ be two coset unions in $\mathfrak{n}$. Obviously, there is a
lattice $\Lambda$ such that $X$ and $Y$ are both a finite union of
$\Lambda$-cosets. The coset index of $Y$ in $X$ is the number
$[X:Y]_{coset}={\mathrm{Card\,}\,X/\Lambda\over\mathrm{Card\,}\,Y/\Lambda}$
The numerator and denominator of the previous expression depend on the choice
of $\Lambda$, but $[X:Y]_{coset}$ is well defined. In general, the coset index
is not an integer. Obviously if $\Lambda\supset\Lambda^{\prime}$ are additive
lattices in $\mathfrak{n}$, we have
$[\Lambda:\Lambda^{\prime}]_{coset}=[\Lambda:\Lambda^{\prime}]$.
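Since the value does not depend on the choice of $\Lambda$, the coset index is easy to compute in examples. A sketch (Python, restricted to the one-dimensional case $\mathfrak{n}=\mathbb{Q}$; the encoding of a coset union by a list of (residue, modulus) pairs is ours):

```python
from fractions import Fraction
from math import lcm

def cosets_mod(pieces, q):
    """Residues modulo q covered by the union of r + m.Z over (r, m) in
    pieces; every modulus m is assumed to divide q."""
    return {(r + k * m) % q for (r, m) in pieces for k in range(q // m)}

def coset_index(X, Y):
    """[X:Y]_coset for coset unions in Z, each given as a list of
    (residue, modulus) pairs: both are rewritten as unions of cosets of
    the common sublattice lcm.Z and the ratio of coset counts is taken."""
    q = lcm(*(m for (_, m) in X + Y))
    return Fraction(len(cosets_mod(X, q)), len(cosets_mod(Y, q)))

assert coset_index([(0, 1)], [(0, 2)]) == 2                       # [Z : 2Z] = 2
assert coset_index([(0, 1)], [(0, 2), (1, 4)]) == Fraction(4, 3)  # fractional
```

The second example, $X=\mathbb{Z}\supset Y=2\mathbb{Z}\cup(1+4\mathbb{Z})$, shows that the coset index need not be an integer.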
Similarly, for multiplicative lattices there is
###### Lemma 18.
Let $\Gamma\supset\Gamma^{\prime}$ be multiplicative lattices in
$\mathfrak{n}$. Then we have
$[\Gamma:\Gamma^{\prime}]_{coset}=[\Gamma:\Gamma^{\prime}]$.
The proof, done by induction on the nilpotency index of $\mathfrak{n}$, is
skipped.
5.4 Morphisms of FGTF nilpotent groups
###### Lemma 19.
Let $\Gamma$, $\Gamma^{\prime}\subset\mathfrak{n}$ be multiplicative lattices
in $\mathfrak{n}$ and let $f:\Gamma^{\prime}\to\Gamma$ be a group morphism.
Then $f$ extends uniquely to a Lie algebra morphism
$\tilde{f}:\mathfrak{n}\to\mathfrak{n}$.
Moreover $\tilde{f}$ is an isomorphism if $f$ is injective.
When $f$ is an isomorphism, the result is due to Malcev, see [18], Theorem 5.
In general, the lemma is a folklore result and it is implicitly used in
Homotopy Theory, see e.g. [1]. Since we did not find a precise reference, a
proof, essentially based on Hall’s collecting formula (see Theorem 12.3.1 in
[15]), is now provided.
###### Proof.
Let $x\in\mathfrak{n}$. Since $\Gamma$ contains an additive lattice by Lemma
17, we have $m\mathbb{Z}x\subset\Gamma$ for some $m>0$. Thus there is a unique
map $\tilde{f}:\mathfrak{n}\to\mathfrak{n}$ extending $f$ such that
$\tilde{f}(nx)=n\tilde{f}(x)$ for any $x\in\mathfrak{n}$ and $n\in\mathbb{Z}$.
It remains to prove that
$\tilde{f}(x+y)=\tilde{f}(x)+\tilde{f}(y)$, and
$\tilde{f}([x,y])=[\tilde{f}(x),\tilde{f}(y)]$,
for any $x,\,y\in\mathfrak{n}$.
Let $n$ be the nilpotency index of $\mathfrak{n}$. Set ${\cal L}(2,n)={\cal
L}(2)/C^{n+1}{\cal L}(2)$, where ${\cal L}(2)$ denotes the free Lie algebra
over $\mathbb{Q}$ freely generated by $X$ and $Y$. Let
$\Gamma(2,n)\subset{\cal L}(2,n)$ be the multiplicative subgroup generated by
$X$ and $Y$.
As before, $m(X+Y)$ and $m[X,Y]$ belong to $\Gamma(2,n)$ for some $m>0$. Thus
there are $w_{1},\,w_{2}$ in the free group over two generators, such that
$w_{1}(X,Y)=m(X+Y)$ and $w_{2}(X,Y)=m[X,Y]$.
Since ${\cal L}(2,n)$ is free in the category of nilpotent Lie algebras of
nilpotency index $\leq n$, we have
$w_{1}(x,y)=m(x+y)$ and $w_{2}(x,y)=m[x,y]$
for any $x,y\in\mathfrak{n}$. From this it follows easily that $\tilde{f}$ is
a Lie algebra morphism. ∎
5.6 Self-similar data for FGTF nilpotent groups
Let $\mathfrak{z}$ be the center of $\mathfrak{n}$. Recall that ${\cal
S}(\mathfrak{n})$ (respectively ${\cal V}(\mathfrak{n})$) is the set of all
$f\in\mathrm{Aut}\,\mathfrak{n}$ such that
${\mathrm{Spec}}\,\,f|_{\mathfrak{z}}$ (respectively ${\mathrm{Spec}}\,\,f$)
contains no algebraic integers. Let $\Gamma\supset\Gamma^{\prime}$ be
multiplicative lattices of $\mathfrak{n}$, let $f:\Gamma^{\prime}\to\Gamma$ be
a morphism and let $\tilde{f}:\mathfrak{n}\to\mathfrak{n}$ be its extension.
###### Lemma 20.
Let’s assume that $f$ is injective. Then
(i) $(\Gamma^{\prime},f)$ is a self-similar datum iff $\tilde{f}$ belongs to
${\cal S}(\mathfrak{n})$,
(ii) $(\Gamma^{\prime},f)$ is a free self-similar datum iff $\tilde{f}$
belongs to ${\cal V}(\mathfrak{n})$,
(iii) if $(\Gamma^{\prime},f)$ is a fractal datum, then $\tilde{f}$ belongs to
${\cal F}(\mathfrak{n})$.
###### Proof.
Let $V$ be a finite dimensional vector space over $\mathbb{Q}$ and let $f\in
GL(V)$. We will repeatedly use the fact that ${\mathrm{Spec}}\,\,f$ contains
an algebraic integer iff $V$ contains a finitely generated subgroup $E\neq 0$
such that $f(E)\subset E$.
Proof of Assertion (i). Since $\Gamma^{\prime}$ contains a set of generators
of $\mathfrak{n}$, the subgroup
$Z(\Gamma^{\prime}):=\Gamma^{\prime}\cap\mathfrak{z}$ is the center of
$\Gamma^{\prime}$. Let $K$ be the $f$-core of the virtual endomorphism
$(\Gamma^{\prime},f)$.
Let’s assume that $(\Gamma^{\prime},f)$ is not a self-similar datum. Since
$K\neq 1$, the additive group $K\cap Z(\Gamma^{\prime})$ is non-trivial,
finitely generated and $\tilde{f}$-invariant. Therefore
${\tilde{f}}\notin{\cal S}(\mathfrak{n})$.
Conversely let’s assume that ${\tilde{f}}\notin{\cal S}(\mathfrak{n})$. Then
there is a nonzero finitely generated subgroup $E\subset\mathfrak{z}$ such
that $\tilde{f}(E)\subset E$. By Lemma 17, $Z(\Gamma^{\prime})$ is an additive
lattice of $\mathfrak{z}$. Therefore we have $mE\subset Z(\Gamma^{\prime})$
for some $m>0$. Since $K$ contains $mE$, it follows that $(\Gamma^{\prime},f)$
is not a self-similar datum.
Proof of Assertion (ii). Let $A\subset\Gamma$ be a set of representatives of
$\Gamma/\Gamma^{\prime}$. Let’s consider the action of $\Gamma$ on
$A^{\omega}$ associated with the virtual endomorphism $(\Gamma^{\prime},f)$.
Let’s assume that ${\tilde{f}}\notin{\cal V}(\mathfrak{n})$. Then there is a
nonzero finitely generated abelian subgroup $F\subset\mathfrak{n}$ such that
$\tilde{f}(F)\subset F$. As before, it can be assumed that $F$ lies in
$\Gamma^{\prime}$. Let $e\in A$ be the representative of the trivial coset and
let $e^{\omega}=ee\dots$ be the infinite word over the single letter $e$.
Since $f(F)\subset F$, it follows that $\gamma(e^{\omega})=e^{\omega}$ for any
$\gamma\in F$. Hence $\Gamma$ does not act freely on $A^{\omega}$.
Conversely, let’s assume that $\Gamma$ does not act freely on $A^{\omega}$.
Let’s define inductively the subsets ${\cal H}(n)\subset\Gamma$ by ${\cal
H}(1)=\cup_{a\in A}\,a\Gamma^{\prime}a^{-1}$ and
${\cal H}(n+1)=\\{\gamma\in\Gamma|\,\exists\,a\in A:a\gamma
a^{-1}\in\Gamma^{\prime}\land f(a\gamma a^{-1})\in{\cal H}(n)\\}$,
for $n\geq 1$. Indeed ${\cal H}(n)$ is the set of all $\gamma\in\Gamma$ which
have at least one fixed point on $A^{n}$. It follows easily that ${\cal
H}:=\cap_{n\geq 1}\,{\cal H}(n)$ is the set of all $\gamma\in\Gamma$ which
have at least one fixed point on $A^{\omega}$. There is an integer $k$ such
that
${\cal H}\subset C^{k}\mathfrak{n}$ but ${\cal H}\not\subset
C^{k+1}\mathfrak{n}$.
Let $\overline{\cal H}$ be the image of ${\cal H}$ in
$C^{k}\mathfrak{n}/C^{k+1}\mathfrak{n}$ and let $F$ be the additive subgroup
of $C^{k}\mathfrak{n}/C^{k+1}\mathfrak{n}$ generated by $\overline{\cal H}$.
Since $\Gamma$ lies in a lattice, $F$ is finitely generated. Moreover we have
$axa^{-1}\equiv x\,\mathrm{mod}\,\,C^{k+1}\mathfrak{n}$, for any $x\in
C^{k}\mathfrak{n}$ and $a\in A$. It follows that $\tilde{f}_{k}(\overline{\cal
H})\subset\overline{\cal H}$, where $\tilde{f}_{k}$ is the linear map induced
by $\tilde{f}$ on $C^{k}\mathfrak{n}/C^{k+1}\mathfrak{n}$. Hence
$\tilde{f}_{k}(F)\subset F$ and ${\mathrm{Spec}}\,\,\tilde{f}_{k}$ contains an
algebraic integer. Therefore ${\tilde{f}}\notin{\cal V}(\mathfrak{n})$.
Proof of Assertion (iii). Let $(\Gamma^{\prime},f)$ be a fractal datum. Let
$\Lambda$ be the additive lattice generated by $\Gamma$. Since
$\tilde{f}^{-1}(\Lambda)\subset\Lambda$,
all $x\in{\mathrm{Spec}}\,\tilde{f}^{-1}$ are algebraic integers. Therefore
$\tilde{f}$ belongs to ${\cal F}(\mathfrak{n})$. ∎
## 6 Relative complexity of multiplicative lattices
This chapter is the multiplicative analogue of Chapter 4. The main result is
the refined criterion of minimality. Together with Theorem 1, it is the main
ingredient of the proof of Theorems 2 and 3.
Throughout the whole chapter, $\mathfrak{n}$ is a finite dimensional nilpotent
Lie algebra over $\mathbb{Q}$, and $\mathfrak{z}$ is its center.
6.1 Complexity of multiplicative lattices
Let $f\in\mathrm{Aut}\,\mathfrak{n}$ and let $\Gamma$ be a multiplicative
lattice of $\mathfrak{n}$. The complexity of $\Gamma$ relative to $f$ is the
integer
$\mathrm{cp}_{f}(\Gamma)=[\Gamma:\Gamma^{\prime}]$,
where $\Gamma^{\prime}=\Gamma\cap f^{-1}(\Gamma)$. The multiplicative lattice
$\Gamma$ is called minimal relative to $f$ if
$\mathrm{cp}_{f}(\Gamma)={\mathrm{ht}}(f)$. Thanks to Lemma 18 the notation
$\mathrm{cp}_{f}(\Gamma)$ is unambiguous.
###### Lemma 21.
Let $\Gamma$ be a multiplicative lattice of $\mathfrak{n}$. Then we have
$\mathrm{cp}_{f}(\Gamma)\geq{\mathrm{ht}}(f)$.
###### Proof.
The proof goes by induction on the nilpotency index of $\mathfrak{n}$.
Let $Z$ be the center of $\Gamma$. Set $\Gamma^{\prime}=\Gamma\cap
f^{-1}(\Gamma)$, $Z^{\prime}=Z\cap f^{-1}(Z)$, $\overline{\Gamma}=\Gamma/Z$,
$\overline{\Gamma^{\prime}}=\Gamma^{\prime}/Z^{\prime}$. Also set
${\overline{\mathfrak{n}}}=\mathfrak{n}/\mathfrak{z}$ and let
$\overline{f}:{\overline{\mathfrak{n}}}\to{\overline{\mathfrak{n}}}$ and
$f_{0}:\mathfrak{z}\to\mathfrak{z}$ be the isomorphisms induced by $f$.
By induction hypothesis, we have
$\mathrm{cp}_{\overline{f}}(\overline{\Gamma})\geq{\mathrm{ht}}(\overline{f})$
and therefore
$[\overline{\Gamma}:\overline{\Gamma^{\prime}}]\geq{\mathrm{ht}}(\overline{f})$.
By definition, we have
$[Z:Z^{\prime}]=\mathrm{cp}_{f_{0}}\,Z\geq{\mathrm{ht}}(f_{0})$. Moreover by
Lemma 15 we have
${\mathrm{ht}}(f)={\mathrm{ht}}(f_{0}){\mathrm{ht}}(\overline{f})$. It follows
that
$\mathrm{cp}_{f}\,\Gamma=[\Gamma:\Gamma^{\prime}]=[Z:Z^{\prime}]\,[\overline{\Gamma}:\overline{\Gamma^{\prime}}]\geq{\mathrm{ht}}(f_{0}){\mathrm{ht}}(\overline{f})={\mathrm{ht}}(f)$,
and the statement is proved. ∎
6.2 A property of the minimal multiplicative lattices
Let $\Gamma$ be a multiplicative lattice of $\mathfrak{n}$ and let
$h\in\mathrm{Aut}\,\mathfrak{n}$. For simplicity, let’s assume that $h$ is
semi-simple.
###### Lemma 22.
The following assertions are equivalent
(i) $\Gamma$ is minimal relative to $h$, and
(ii) the virtual morphism $(\Gamma^{\prime},h)$ is good, where
$\Gamma^{\prime}=\Gamma\cap h^{-1}(\Gamma)$.
In particular, there is a multiplicative lattice $\tilde{\Gamma}\subset\Gamma$
which is minimal relative to $h$.
###### Proof.
By Lemma 17, $\Gamma$ is a coset union. Any additive lattice contains an
${\cal O}(h)$-module of finite index. Therefore there is an ${\cal
O}(h)$-lattice $\Lambda$ such that $\Gamma$ is a union of $\Lambda$-cosets.
Let $\Gamma_{0},\Gamma_{1},\dots$ be the multiplicative lattices inductively
defined by $\Gamma_{0}=\Gamma$, $\Gamma_{1}=\Gamma^{\prime}$ and
$\Gamma_{n+1}=\Gamma_{n}\cap h^{-1}(\Gamma_{n})$ for $n\geq 1$. Similarly let
$\Lambda_{0},\Lambda_{1},\dots$ be the additive lattices defined by
$\Lambda_{0}=\Lambda$, and $\Lambda_{n+1}=\Lambda_{n}\cap h^{-1}(\Lambda_{n})$
for $n\geq 0$.
By Lemma 2, the sequence $[\Gamma_{n}:\Gamma_{n+1}]$ is non-increasing. By
Lemma 21, we have $[\Gamma_{n}:\Gamma_{n+1}]\geq{\mathrm{ht}}(h)$. Moreover,
it follows from Lemma 16 that $[\Lambda_{n}:\Lambda_{n+1}]={\mathrm{ht}}(h)$
for all $n$.
Let’s assume now that $\Gamma$ is minimal relative to $h$. We have
$[\Gamma_{n}:\Gamma_{n+1}]={\mathrm{ht}}(h)$ for all $n$, and therefore the
virtual morphism $(\Gamma^{\prime},h)$ is good.
Conversely, let’s assume that the virtual morphism $(\Gamma^{\prime},h)$ is
good. By hypotheses we have
$[\Gamma_{0}:\Gamma_{n}]=[\Gamma_{0}:\Gamma_{1}]^{n}$ and
$[\Lambda_{0}:\Lambda_{n}]={\mathrm{ht}}(h)^{n}$ for all $n\geq 1$. It follows
that
$[\Gamma_{0}:\Lambda_{n}]_{coset}=[\Gamma_{0}:\Lambda_{0}]_{coset}\,{\mathrm{ht}}(h)^{n}$.
Since $\Gamma_{n}\supset\Lambda_{n}$, we have
$[\Gamma_{0}:\Gamma_{n}]\leq[\Gamma_{0}:\Lambda_{n}]_{coset}$,
and therefore
$[\Gamma_{0}:\Gamma_{1}]^{n}\leq[\Gamma_{0}:\Lambda_{0}]_{coset}\,{\mathrm{ht}}(h)^{n}$,
for all $n\geq 1$.
Hence $[\Gamma_{0}:\Gamma_{1}]\leq{\mathrm{ht}}(h)$. It follows from Lemma 21
that $[\Gamma_{0}:\Gamma_{1}]={\mathrm{ht}}(h)$, thus $\Gamma$ is minimal
relative to $h$.
In order to prove the last assertion, notice that the sequence
$[\Gamma_{n}:\Gamma_{n+1}]$ is stationary for $n\geq N$, for some $N>0$.
Therefore $(\Gamma_{N+1},h)$ is a good virtual morphism of $\Gamma_{N}$. Thus
the subgroup $\tilde{\Gamma}=\Gamma_{N}$ is minimal relative to $h$.
∎
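The stabilization mechanism in the last step can be illustrated in the simplest abelian toy case (an illustration added here, not part of the text): take $\mathfrak{n}=\mathbb{Q}$, $\Gamma=\Lambda=\mathbb{Z}$ and $h(x)=x/2$, so that $h^{-1}(a\mathbb{Z})=2a\mathbb{Z}$ and ${\mathrm{ht}}(h)=2$. Then $\Gamma_{n}=2^{n}\mathbb{Z}$ and the indices $[\Gamma_{n}:\Gamma_{n+1}]$ are constant from the start.

```python
from math import lcm

# Toy abelian model: lattices in Q are a*Z, represented by the generator a;
# h(x) = x/2, so h^{-1}(a*Z) = 2a*Z and ht(h) = 2.
def preimage(a):          # generator of h^{-1}(a*Z)
    return 2 * a

def intersect(a, b):      # a*Z ∩ b*Z = lcm(a, b)*Z
    return lcm(a, b)

def index(a, b):          # [a*Z : b*Z] when b*Z ⊂ a*Z
    assert b % a == 0
    return b // a

gamma = [1]               # Γ_0 = Z
for n in range(5):        # Γ_{n+1} = Γ_n ∩ h^{-1}(Γ_n)
    gamma.append(intersect(gamma[-1], preimage(gamma[-1])))

indices = [index(gamma[n], gamma[n + 1]) for n in range(5)]
print(indices)            # → [2, 2, 2, 2, 2], the constant value ht(h)
```

Here the sequence of indices is stationary from $n=0$, so $\Gamma$ itself is already minimal relative to $h$ in this toy case.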
6.3 A refined criterion of minimality
A refined version of Lemma 16 is now provided. Let $\Gamma$ be a
multiplicative lattice in $\mathfrak{n}$ and let
$h\in\mathrm{Aut}\,\mathfrak{n}$ be semi-simple. Let $L$ be the field
generated by ${\mathrm{Spec}}\,h$, let ${\cal O}$ be its ring of integers and
let ${\cal P}$ be the set of prime ideals of ${\cal O}$.
Let $\Lambda$ be an ${\cal O}(h)$-lattice and let $n>0$ be an integer. Let’s
assume that
$\Lambda\supset\Gamma$ and $\Gamma$ is a union of $n\Lambda$-cosets.
###### Lemma 23.
Let $S$ be the set of divisors of $n$ in ${\cal P}$. Assume that
$\lambda\equiv 1\,\mathrm{mod}\,n{\cal O}_{\pi}$,
for any $\lambda\in{\mathrm{Spec}}\,\,h$ and any $\pi\in S$. Then $\Gamma$ is
minimal relative to $h$.
###### Proof.
Step 1. Since ${\mathrm{Spec}}\,\,h$ lies in ${\cal O}_{\pi}$ for all $\pi\in
S$, there exists a positive integer $d$, which is prime to $n$, such that
$d\lambda\in{\cal O}$ for all $\lambda\in{\mathrm{Spec}}\,\,h$. Moreover we
can assume that $d\equiv 1\,\mathrm{mod}\,n$.
Let $\lambda\in{\mathrm{Spec}}\,\,h$. We have $d\lambda\equiv
1\,\mathrm{mod}\,n{\cal O}_{\pi}$ for all $\pi\in S$. Therefore we have
$d\lambda\in 1+n{\cal O}$,
for all $\lambda\in{\mathrm{Spec}}\,\,h$. Set $H=dh$. Since
${\mathrm{Spec}}\,\,H$ and ${\mathrm{Spec}}\,\,(H-1)/n$ lie in ${\cal O}$, it
follows that
$H\in{\cal O}(h)$ and $H\in 1+n{\cal O}(h)$.
Step 2. Set $\Lambda^{\prime}=\Lambda\cap h^{-1}\Lambda$. Since all
eigenvalues of $h$ are units in ${\cal O}_{\pi}$ whenever $\pi$ divides $n$,
the height of $h$ is prime to $n$. By Lemma 16, we have
$[\Lambda:\Lambda^{\prime}]={\mathrm{ht}}(h)$. Therefore we get
$\Lambda=\Lambda^{\prime}+n\Lambda$.
It follows that
$\Gamma=\coprod\limits_{1\leq i\leq k}\,g_{i}+n\Lambda$
for some $g_{1},...,g_{k}\in\Lambda^{\prime}$, where $k=[\Gamma:n\Lambda]$ and
where $\coprod$ is the symbol of the disjoint union. Since $H(g_{i})\equiv
g_{i}\,\mathrm{mod}\,n\Lambda$, we get that $h(g_{i})\in
g_{i}+n\Lambda\subset\Gamma$. Therefore we have
$\Gamma^{\prime}\supset\coprod\limits_{1\leq i\leq
k}\,g_{i}+n\Lambda^{\prime}$.
Therefore we have $[\Gamma^{\prime}:n\Lambda^{\prime}]\geq
k=[\Gamma:n\Lambda]$. It follows that
$[\Gamma:\Gamma^{\prime}]\leq[n\Lambda:n\Lambda^{\prime}]={\mathrm{ht}}(h)$.
By Lemma 21, we have $[\Gamma:\Gamma^{\prime}]={\mathrm{ht}}(h)$. Thus
$\Gamma$ is minimal relative to $h$. ∎
## 7 Proof of Theorems 2 and 3
7.1 Proof of Theorems 2 and 3
Let $\mathfrak{n}$ be a finite dimensional nilpotent Lie algebra over
$\mathbb{Q}$ and let $\mathfrak{z}$ be its center and let $\Gamma$ be a
multiplicative lattice of $\mathfrak{n}$.
###### Theorem 2.
The following assertions are equivalent
(i) The group $\Gamma$ is transitive self-similar,
(ii) the group $\Gamma$ is densely self-similar, and
(iii) the Lie algebra $\mathfrak{n}^{\mathbb{C}}$ admits a special grading.
###### Proof.
Let’s consider the following assertion
$({\cal A})$ ${\cal S}(\mathfrak{n})\neq\emptyset$.
The implication $(ii)\Rightarrow(i)$ is tautological. Together with Lemmas
6(i) and 9(i), the following implications are already proved
$(ii)\Rightarrow(i)\Rightarrow({\cal A})\Leftrightarrow(iii)$.
Therefore, it is enough to prove that $({\cal A})\Rightarrow(ii)$.
Step 1. Definition of some $h\in{\bf G}(\mathbb{Q})$. Let’s assume that ${\cal
S}(\mathfrak{n})\neq\emptyset$, and let $f\in{\cal S}(\mathfrak{n})$. Since
the semi-simple part of $f$ is also in ${\bf G}(\mathbb{Q})$, it can be
assumed that $f$ is semi-simple. Let ${\bf K}\subset{\bf G}$ be the Zariski
closure of the subgroup generated by $f$ and set ${\bf H}={\bf K}^{0}$.
Let $\Lambda$ be the ${\cal O}(f)$-module generated by $\Gamma$. By Lemma 17,
$\Gamma$ is a coset union. Therefore $\Lambda$ is a lattice and $\Gamma$ is a
union of $n\Lambda$-cosets for some positive integer $n$.
Let $X({\bf H})$ be the group of characters of ${\bf H}$, let $K$ be the
splitting field of ${\bf H}$, let ${\cal O}$ be the ring of integers of $K$,
let ${\cal P}$ be the set of prime ideals of ${\cal O}$ and let $S$ be the set
of all $\pi\in{\cal P}$ dividing $n$.
By Theorem 1, there exists $h\in{\bf H}(\mathbb{Q})$ such that, for any
non-trivial $\chi\in X({\bf H})$, we have
(i) $\chi(h)$ is not an algebraic integer, and
(ii) $\chi(h)\equiv 1\,\mathrm{mod}\,n{\cal O}_{\pi}$ for any $\pi\in S$.
Step 2. Let $\Gamma^{\prime}=\Gamma\cap h^{-1}(\Gamma)$. We claim that the
virtual morphism $(\Gamma^{\prime},h)$ is a good self-similar datum.
Since ${\bf K}\subset{\bf G}$ is the Zariski closure of the subgroup generated
by $f$, we have $\mathbb{Q}[h]\subset\mathbb{Q}[f]$ and therefore $\Lambda$ is
a ${\cal O}(h)$-lattice. It follows from Lemma 23 that the virtual
endomorphism $(\Gamma^{\prime},h)$ is good.
Moreover, let $\Omega_{0}$ be the set of weights of ${\bf H}$ over
$\mathfrak{z}^{\overline{\mathbb{Q}}}$. There is an integer $l$ such that
$f^{l}\in{\bf K}^{0}={\bf H}$. The spectrum of $f^{l}$ on
$\mathfrak{z}^{\overline{\mathbb{Q}}}$ consists of the numbers $\chi(f^{l})$,
where $\chi$ runs over $\Omega_{0}$. It follows that $\Omega_{0}$ does not
contain the trivial character, hence $h$ belongs to ${\cal S}(\mathfrak{n})$.
Therefore by Lemma 20, the virtual endomorphism $(\Gamma^{\prime},h)$ is a
good self-similar datum. Thus by Lemma 3, $\Gamma$ is a densely self-similar
group.
∎
###### Theorem 3.
The following assertions are equivalent
(i) The group $\Gamma$ is freely self-similar,
(ii) the group $\Gamma$ is freely densely self-similar, and
(iii) the Lie algebra $\mathfrak{n}^{\mathbb{C}}$ admits a very special
grading.
###### Proof.
Let’s assume Assertion (i). Let’s consider a free self-similar action of
$\Gamma$ on some $A^{\omega}$ and let $A^{\prime}$ be any $\Gamma$-orbit in
$A$. Then the action of $\Gamma$ on $A^{\prime\omega}$ is free transitive
self-similar, thus $\Gamma$ is freely transitive self-similar.
The rest of the proof is identical to the previous proof, except that
1) the assertion $({\cal A})$ is replaced by $({\cal A^{\prime}})$: ${\cal
V}(\mathfrak{n})\neq\emptyset$,
2) the Lemmas 6(ii) and 9(ii) are used instead of Lemmas 6(i) and 9(i) in
order to prove that $(ii)\Rightarrow(i)\Rightarrow({\cal
A^{\prime}})\Leftrightarrow(iii)$,
3) the proof that ${\cal A^{\prime}}\Rightarrow(ii)$ uses the weights of ${\bf
H}$ and the eigenvalues of $f$ on $\mathfrak{n}$ instead of $\mathfrak{z}$. ∎
7.2 Manning’s Theorem
Let $N$ be a CSC nilpotent Lie group and let $\Gamma$ be a cocompact
lattice. The manifold $M=N/\Gamma$ is called a nilmanifold.
A diffeomorphism $f:M\to M$ is called an Anosov diffeomorphism if
(i) there is a continuous splitting of the tangent bundle $TM$ as
$TM=E_{u}\oplus E_{s}$ which is invariant by $df$, and
(ii) there is a Riemannian metric relative to which $df|_{E_{s}}$ and
$df^{-1}|_{E_{u}}$ are contracting.
For any $x\in M$, $f$ induces a group automorphism $f_{*}$ of
$\Gamma\simeq\pi_{1}(M)$. By Lemma 19, $f_{*}$ extends to an isomorphism
$\tilde{f}_{*}:\mathfrak{n}^{\mathbb{R}}\to\mathfrak{n}^{\mathbb{R}}$, where
$\mathfrak{n}^{\mathbb{R}}$ is the Lie algebra of $N$. Strictly speaking,
$\tilde{f}_{*}$ is only defined up to an inner automorphism. Since $f_{*}$ is
well defined modulo the unipotent radical of
$\mathrm{Aut}\,\mathfrak{n}^{\mathbb{R}}$, the set ${\mathrm{Spec}}\,f_{*}$ is
unambiguously defined.
###### Manning’s Theorem.
The set ${\mathrm{Spec}}\,f_{*}$ contains no root of unity.
See [18]. Later on, A. Manning proved a much stronger result: namely,
${\mathrm{Spec}}\,f_{*}$ contains no eigenvalue of absolute value $1$, and
$f$ is topologically conjugated to an Anosov automorphism, see [19].
7.3 A Corollary for nilmanifolds with an Anosov diffeomorphism
###### Corollary 4.
Let $M$ be a nilmanifold endowed with an Anosov diffeomorphism. Then
$\pi_{1}(M)$ is freely densely self-similar.
###### Proof.
By definition, we have $M=N/\Gamma$, where $N$ is a CSC nilpotent Lie group
and $\Gamma\simeq\pi_{1}(M)$ is a cocompact lattice. Set
$\mathfrak{n}^{\mathbb{R}}=\mathrm{Lie}\,N$ and
$\mathfrak{n}^{\mathbb{C}}=\mathbb{C}\otimes\mathfrak{n}^{\mathbb{R}}$. By
Manning’s Theorem and Lemma 8, $\mathfrak{n}^{\mathbb{C}}$ has a very special
grading. Therefore $\Gamma$ is freely densely self-similar by Theorem 3. ∎
7.4 Characterisation of fractal FGTF nilpotent groups
For completeness, we will now investigate the non-negative gradings of
$\mathfrak{n}^{\mathbb{C}}$. Unlike Theorems 2 and 3, the proofs of
Propositions 5 and 6 are quite obvious.
Let $\mathfrak{n}^{\mathbb{Q}}$ be a finite dimensional nilpotent Lie algebra
and let $\Gamma$ be a multiplicative lattice in $\mathfrak{n}^{\mathbb{Q}}$. Set
$\mathfrak{n}^{\mathbb{C}}=\mathbb{C}\otimes\mathfrak{n}^{\mathbb{Q}}$.
###### Proposition 5.
The following assertions are equivalent
(i) The group $\Gamma$ is fractal,
(ii) $\mathfrak{n}^{\mathbb{C}}$ admits a non-negative special grading, and
(iii) $\mathfrak{n}^{\mathbb{Q}}$ admits a non-negative special grading.
###### Proof.
It follows from Lemma 10 that Assertions (ii) and (iii) are equivalent.
Proof that (i) $\Rightarrow$ (ii). By assumption, there is a fractal datum
$(\Gamma^{\prime},f)$. Let $g:\Gamma\rightarrow\Gamma^{\prime}$ be the inverse
of $f$ and let $\tilde{g}\in\mathrm{Aut}\,\mathfrak{n}^{\mathbb{Q}}$ be its unique
extension.
Let $\Lambda\subset\mathfrak{n}^{\mathbb{Q}}$ be the additive subgroup generated by
$\Gamma$. By Lemma 17, $\Lambda$ is an additive lattice. Since we have
$\tilde{g}(\Lambda)\subset\Lambda$, it follows that all eigenvalues of
$\tilde{g}$ are algebraic integers.
Moreover $(\Gamma^{\prime},g^{-1})$ is a self-similar datum, thus
${\mathrm{Spec}}\,\,\tilde{g}^{-1}|_{\mathfrak{z}}$ contains no root of unity.
Therefore, by Lemma 10, Assertion (ii) holds.
Proof that (iii) $\Rightarrow$ (i). Let’s assume Assertion (iii) and let
$\mathfrak{n}^{\mathbb{Q}}=\oplus_{k\geq 0}\,\mathfrak{n}_{k}^{\mathbb{Q}}$
be a non-negative special grading of $\mathfrak{n}^{\mathbb{Q}}$.
By Lemma 17, $\Gamma$ lies in a lattice $\Lambda$. Since it is possible to
enlarge $\Lambda$, we can assume that
$\Lambda=\oplus_{k\geq 0}\,\Lambda_{k}$,
where $\Lambda_{k}=\Lambda\cap\mathfrak{n}_{k}^{\mathbb{Q}}$. Since $\Gamma$
is a coset union, there is an integer $d\geq 1$ such that $\Gamma$ is a union
of $d\Lambda$-cosets.
Let $g$ be the automorphism of $\mathfrak{n}^{\mathbb{Q}}$ defined by
$g(x)=(d+1)^{k}\,x$ if $x\in\mathfrak{n}_{k}^{\mathbb{Q}}$. We claim that
$g(\Gamma)\subset\Gamma$. Let $x\in\Gamma$ and let $x=\sum_{k\geq 0}\,x_{k}$
be its decomposition into homogeneous components. We have
$g(x)=x+\sum_{k\geq 1}\,((d+1)^{k}-1)x_{k}$.
By hypothesis each homogeneous component $x_{k}$ belongs to $\Lambda$. Since
$(d+1)^{k}-1$ is divisible by $d$, we have $g(x)\in x+d\Lambda\subset\Gamma$
and the claim is proved.
Set $\Gamma^{\prime}=g(\Gamma)$ and let $f:\Gamma^{\prime}\rightarrow\Gamma$
be the inverse of $g$. It is clear that $(\Gamma^{\prime},f)$ is a fractal
datum for $\Gamma$, which proves Assertion (i). ∎
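The divisibility used in the claim above, namely that $d$ divides $(d+1)^{k}-1$, follows from $(d+1)^{k}\equiv 1\ \mathrm{mod}\ d$; the following short sketch (an illustration, not part of the text) checks it numerically.

```python
# Numerical check of the divisibility d | (d+1)^k - 1,
# which holds because (d+1)^k ≡ 1^k ≡ 1 (mod d).
for d in range(1, 50):
    for k in range(0, 12):
        assert ((d + 1) ** k - 1) % d == 0
print("divisibility verified")
```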
###### Proposition 6.
The following assertions are equivalent
(i) The group $\Gamma$ is freely fractal,
(ii) $\mathfrak{n}^{\mathbb{C}}$ admits a positive grading, and
(iii) $\mathfrak{n}^{\mathbb{Q}}$ admits a positive grading.
Since the proof is strictly identical, it will be skipped.
## 8 Not self-similar FGTF nilpotent groups and affine nilmanifolds
This section provides an example of a FGTF nilpotent group which is not even
self-similar, see subsection 8.5. The end of the section is about the
Scheuneman-Milnor conjecture.
8.1 FGTF nilpotent groups with rank one center
Let $\Gamma$ be a FGTF nilpotent group and let $Z(\Gamma)$ be its center.
###### Lemma 24.
Let’s assume that $\Gamma$ is self-similar and $Z(\Gamma)\simeq\mathbb{Z}$.
Then $\Gamma$ is transitive self-similar.
###### Proof.
Assume that $\Gamma$ admits a faithful self-similar action on some
$A^{\omega}$, where $A$ is a finite alphabet. Let $a_{1},\dots,a_{k}$ be a set
of representatives of $A/\Gamma$, where $k$ is the number of $\Gamma$-orbits
on $A$. For each $1\leq i\leq k$, let $\Gamma_{i}$ be the stabilizer of
$a_{i}$. For any $h\in\Gamma_{i}$, there is $h_{i}\in\Gamma$ such that
$h(a_{i}w)=a_{i}h_{i}(w)$,
for all $w\in A^{\omega}$. Since the action is faithful, $h_{i}$ is uniquely
determined and the map $f_{i}:\Gamma_{i}\to\Gamma,h\mapsto h_{i}$ is a group
morphism.
Let $\mathfrak{n}^{\mathbb{Q}}$ be the Malcev Lie algebra of $\Gamma$, and let
$\mathfrak{z}$ be its center, and let $z\neq 0$ be a generator of
$\cap_{i}\,Z(\Gamma_{i})$. By Lemma 19, the group morphism $f_{i}$ extends to
a Lie algebra morphism
$\tilde{f}_{i}:\mathfrak{n}^{\mathbb{Q}}\to\mathfrak{n}^{\mathbb{Q}}$. Since
$\mathfrak{z}=\mathbb{Q}\otimes Z(\Gamma)$ is one dimensional, it follows that
either $\tilde{f}_{i}$ is an isomorphism or $\tilde{f}_{i}(\mathfrak{z})=0$.
In any case, we have $\tilde{f}_{i}(z)=x_{i}z$, for some $x_{i}\in\mathbb{Q}$.
However $\mathbb{Z}z$ is not invariant under all $\tilde{f}_{i}$, otherwise it
would be in the kernel of the action. It follows that at least one $x_{i}$ is
not an integer.
For such an index $i$, the $f_{i}$-core of $\Gamma_{i}$ is trivial, and the
virtual morphism $(\Gamma_{i},f_{i})$ is a self-similar datum for $\Gamma$.
Thus $\Gamma$ is transitive self-similar.
∎
8.2 Small representations
Let $N$ be a CSC nilpotent Lie group with Lie algebra
$\mathfrak{n}^{\mathbb{R}}$ and let $\Gamma$ be a cocompact lattice.
###### Lemma 25.
If $\Gamma$ is transitive self-similar, then there exists a faithful
$\mathfrak{n}^{\mathbb{R}}$-module of dimension
$1+\dim\mathfrak{n}^{\mathbb{R}}$.
###### Proof.
By hypothesis, $\Gamma$ is transitive self-similar. By Theorem 2,
$\mathfrak{n}^{\mathbb{C}}$ admits a special grading
$\mathfrak{n}^{\mathbb{C}}=\oplus_{n\in\mathbb{Z}}\,\mathfrak{n}^{\mathbb{C}}_{n}$.
Let $\delta:\mathfrak{n}^{\mathbb{C}}\to\mathfrak{n}^{\mathbb{C}}$ be the
derivation defined by $\delta(x)=nx$ if $x\in\mathfrak{n}^{\mathbb{C}}_{n}$. Since
$\delta|\mathfrak{z}^{\mathbb{C}}$ is injective, it follows that there is some
$\partial\in\mathrm{Der}\,\mathfrak{n}^{\mathbb{R}}$ such that
$\partial|\mathfrak{z}^{\mathbb{R}}$ is injective.
Set
$\mathfrak{m}^{\mathbb{R}}=\mathbb{R}\partial\ltimes\mathfrak{n}^{\mathbb{R}}$.
Relative to the adjoint action, $\mathfrak{m}^{\mathbb{R}}$ is a faithful
$\mathfrak{n}^{\mathbb{R}}$-module, and it has the prescribed dimension
$1+\dim\mathfrak{n}^{\mathbb{R}}$. ∎
8.3 Filiform nilpotent Lie algebras
Let $\mathfrak{n}$ be a nilpotent Lie algebra over $\mathbb{Q}$. Let
$C^{n}\mathfrak{n}$ be the decreasing central series, which is inductively
defined by $C^{1}\mathfrak{n}=\mathfrak{n}$ and
$C^{n+1}\mathfrak{n}=[\mathfrak{n},C^{n}\mathfrak{n}]$. The nilpotent Lie
algebra $\mathfrak{n}$ is called filiform if $\dim
C^{1}\mathfrak{n}/C^{2}\mathfrak{n}=2$ and $\dim
C^{k}\mathfrak{n}/C^{k+1}\mathfrak{n}\leq 1$ for any $k>1$. Set
$n=\dim\mathfrak{n}$. It follows from the definition that $\dim
C^{k}\mathfrak{n}/C^{k+1}\mathfrak{n}=1$ for any $1<k\leq n-1$ and
$C^{k}\mathfrak{n}=0$ for any $k\geq n$.
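A standard model, stated here only as an illustration (it is not discussed in the text), is the filiform Lie algebra with basis $e_{1},\dots,e_{n}$ whose only non-trivial brackets are $[e_{1},e_{i}]=e_{i+1}$:

```latex
% Model filiform Lie algebra of dimension n (illustrative example):
% basis e_1, ..., e_n, with the only non-trivial brackets
%   [e_1, e_i] = e_{i+1}, \qquad 2 \le i \le n-1.
% Its central series is
%   C^k\mathfrak{n} = \langle e_{k+1}, \dots, e_n \rangle \quad (k \ge 2),
% so that
%   \dim C^1\mathfrak{n}/C^2\mathfrak{n} = 2, \qquad
%   \dim C^k\mathfrak{n}/C^{k+1}\mathfrak{n} = 1 \quad (1 < k \le n-1),
% and the center \langle e_n \rangle is one dimensional.
```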
###### Lemma 26.
Let $\mathfrak{n}$ be a filiform nilpotent Lie algebra over $\mathbb{Q}$, with
$\dim\mathfrak{n}\geq 3$. Then its center $\mathfrak{z}$ has dimension one.
###### Proof.
Let $z\in\mathfrak{z}$ be nonzero and let $k$ be the integer such that $z\in
C^{k}\mathfrak{n}\setminus C^{k+1}\mathfrak{n}$. Since
$C^{k}\mathfrak{n}=C^{k+1}\mathfrak{n}\oplus\mathbb{Q}z$, we have
$C^{k+1}\mathfrak{n}=[\mathfrak{n},C^{k}\mathfrak{n}]=[\mathfrak{n},C^{k+1}\mathfrak{n}]+[\mathfrak{n},z]=C^{k+2}\mathfrak{n}$.
It follows that $C^{k+1}\mathfrak{n}=0$, hence $k=n-1$. Therefore $\mathfrak{z}$ lies in
$C^{n-1}\mathfrak{n}$, which is a one dimensional ideal. ∎
8.4 Benoist Theorem
###### Benoist’s Theorem.
There is a nilpotent Lie algebra $\mathfrak{n}_{B}^{\mathbb{R}}$ of dimension
$11$ over $\mathbb{R}$, with the following properties
(i) The Lie algebra $\mathfrak{n}_{B}^{\mathbb{R}}$ has no faithful
representation of dimension $12$,
(ii) the Lie algebra $\mathfrak{n}_{B}^{\mathbb{R}}$ is defined over
$\mathbb{Q}$, and
(iii) the Lie algebra $\mathfrak{n}_{B}^{\mathbb{R}}$ is filiform.
The three assertions appear in different places of [3]. Indeed Assertion (i),
which is explicitly stated in Theorem 2 of [3], holds for a one-parameter
family of eleven dimensional Lie algebras, which are denoted
$\mathfrak{a}_{-2,1,t}$ in section 2.1 of [3]. These Lie algebras are filiform
by Lemma 4.2.2 of [3]. Moreover, when $t$ is rational, $\mathfrak{a}_{-2,1,t}$
is defined over $\mathbb{Q}$. Therefore the Benoist Theorem holds for the Lie
algebras $\mathfrak{n}_{B}=\mathfrak{a}_{-2,1,t}$ where $t$ is any rational
number.
8.5 A FGTF group which is not self-similar
Let $N_{B}$ be the CSC nilpotent Lie group with Lie algebra
$\mathfrak{n}_{B}^{\mathbb{R}}$. Since $\mathfrak{n}_{B}^{\mathbb{R}}$ is
defined over $\mathbb{Q}$, $N_{B}$ contains some cocompact lattice.
###### Corollary 7.
Let $\Gamma$ be any cocompact lattice in $N_{B}$. Then $\Gamma$ is not self-
similar.
###### Proof.
Let’s assume otherwise. By Benoist’s Theorem and Lemma 26, the center of
$\mathfrak{n}_{B}^{\mathbb{R}}$ is one dimensional. Thus the center of
$\Gamma$ has rank one, and by Lemma 24, $\Gamma$ is transitive self-similar.
By Lemma 25, $\mathfrak{n}_{B}^{\mathbb{R}}$ admits a faithful representation
of dimension 12, which contradicts Benoist’s Theorem.
Therefore $\Gamma$ is not self-similar. ∎
8.6 On the Scheuneman-Milnor conjecture
A smooth manifold $M$ is called affine if it admits a torsion-free and flat
connection. Scheuneman [28] and Milnor [23] asked the following question:
is any nilmanifold $M$ affine?
The story of the Scheuneman-Milnor conjecture is quite interesting. For many
years, there has been a succession of proofs followed by refutations, but
there was no doubt that the conjecture would ultimately be proved… until a
counterexample was found by Benoist [3]. Indeed it is an easy corollary
of his previously mentioned Theorem.
The following question is a refinement of the previous conjecture:
if $\pi_{1}(M)$ is densely self-similar, is the nilmanifold $M$ affine?
A positive result in that direction is
###### Corollary 8.
Let $M$ be a nilmanifold. If $\pi_{1}(M)$ is freely self-similar, then $M$ is
affine complete.
###### Proof.
Set $M=N/\Gamma$, where $N$ is a CSC nilpotent Lie group and $\Gamma$ is a
cocompact lattice. Let $\mathfrak{n}^{\mathbb{R}}$ be the Lie algebra of $N$.
By Theorem 3, $\mathbb{C}\otimes\mathfrak{n}^{\mathbb{R}}$ admits a very
special grading, which implies that a generic derivation is injective.
Therefore there is a derivation $\delta$ of $\mathfrak{n}^{\mathbb{R}}$ which
is injective. Set
$\mathfrak{m}^{\mathbb{R}}=\mathbb{R}\delta\ltimes\mathfrak{n}^{\mathbb{R}}$. Then $N$
is equivariantly diffeomorphic to the affine space
$\delta+\mathfrak{n}^{\mathbb{R}}\subset\mathfrak{m}^{\mathbb{R}}$. Therefore
$M$ is affine complete.
∎
## 9 Absolute Complexities
For the whole chapter, $N$ will be a CSC nilpotent Lie group, with Lie
algebra $\mathfrak{n}^{\mathbb{R}}$. Let’s assume that $N$ contains some
cocompact lattice.
Under the condition of Theorem 2 or 3, any cocompact lattice $\Gamma$ in $N$
admits a transitive or free self-similar action on some $A^{\omega}$. In this
section, we try to determine the minimal degree of these actions.
9.1 Three types of absolute complexities
The complexity of a cocompact lattice $\Gamma\subset N$, denoted by
$\mathrm{cp}\,\Gamma$, is the smallest degree of a faithful transitive
self-similar action of $\Gamma$ on some $A^{\omega}$, with the convention that
$\mathrm{cp}\,\Gamma=\infty$ if $\Gamma$ is not transitive self-similar.
Similarly, the free complexity of $\Gamma$, denoted by $\mathrm{fcp}\,\Gamma$,
is the smallest degree of a free self-similar action of $\Gamma$. Two
cocompact lattices are called commensurable if they share a common subgroup
of finite index. The complexity and the free complexity of a commensurable
class $\xi$ are the integers
$\mathrm{cp}\,\xi=\mathrm{Min}_{\Gamma\in\xi}\,\mathrm{cp}\,\Gamma$, and
$\mathrm{fcp}\,\xi=\mathrm{Min}_{\Gamma\in\xi}\,\mathrm{fcp}\,\Gamma$.
Then, the complexity of the nilpotent Lie group $N$ is
$\mathrm{cp}\,N=\mathrm{Max}_{\xi}\,\mathrm{cp}\,\xi$,
where $\xi$ runs over all commensurable classes in $N$. In what follows, we
will provide a formula for the complexity of commensurable classes. The
question
under which condition is $\mathrm{cp}\,N<\infty$?
is not solved, but it is a deep question. In chapter 10, a class of CSC
nilpotent Lie groups of infinite complexity is investigated.
9.2 Theorem 9
Let $\xi$ be a commensurable class of cocompact lattices in $N$, and let
$\Gamma\in\xi$. The Malcev Lie algebra of $\Gamma$ is a $\mathbb{Q}$-form of
the Lie algebra $\mathfrak{n}^{\mathbb{R}}$. Since it depends only on $\xi$,
it will be denoted by $\mathfrak{n}(\xi)$.
###### Theorem 9.
We have
$\mathrm{cp}\,\xi=\mathrm{Min}_{h\in{\cal
S}(\mathfrak{n}(\xi))}\,{\mathrm{ht}}(h)$, and
$\mathrm{fcp}\,\xi=\mathrm{Min}_{h\in{\cal
V}(\mathfrak{n}(\xi))}\,{\mathrm{ht}}(h)$.
###### Proof.
Let $h\in{\cal S}(\mathfrak{n}(\xi))$ be an isomorphism of minimal height. In
order to show that $\mathrm{cp}\,\xi={\mathrm{ht}}(h)$, we can assume that $h$
is semi-simple, by Lemma 13.
Further, let $\Gamma$ be any cocompact lattice in $\xi$. By Lemma 20, we have
$\mathrm{cp}\,\Gamma=\mathrm{Min}_{f\in{\cal
S}(\mathfrak{n}(\xi))}\,\mathrm{cp}_{f}\,\Gamma$. By Lemma 21, we have
$\mathrm{cp}_{f}\,\Gamma\geq{\mathrm{ht}}(f)$, therefore we have
$\mathrm{cp}\,\Gamma\geq{\mathrm{ht}}(h)$. In particular
$\mathrm{cp}\,\xi\geq{\mathrm{ht}}(h)$.
By Lemma 22, $\Gamma$ contains a finite index subgroup $\tilde{\Gamma}$ which
is minimal relative to $h$. Since
$\mathrm{cp}_{h}\,\tilde{\Gamma}={\mathrm{ht}}(h)$, it follows that
$\mathrm{cp}\,\xi\leq{\mathrm{ht}}(h)$.
Therefore $\mathrm{cp}\,\xi={\mathrm{ht}}(h)$ and the first assertion is
proved.
For the second assertion, let’s notice that a free action of minimal degree
is automatically transitive, see the proof of Theorem 3. Then the rest of the
proof is strictly identical to the previous proof. ∎
9.3 Classification of lattices in a CSC nilpotent Lie group
Obviously Malcev’s Theorem implies the following
###### Malcev’s Corollary.
The map $\xi\mapsto\mathfrak{n}(\xi)$ establishes a bijection between the
commensurable classes of lattices and the $\mathbb{Q}$-forms of the Lie
algebra $\mathfrak{n}^{\mathbb{R}}$.
For the next chapter, it is interesting to translate this into the framework
of non-abelian Galois cohomology. Somehow, it is more concrete, since the non-
abelian Galois cohomology classifies $\mathbb{Q}$-forms of classical objects.
Set ${\bf G}=\mathrm{Aut}\mathfrak{n}^{\mathbb{C}}$, let ${\bf U}$ be its
unipotent radical and set $\overline{\bf G}={\bf G}/{\bf U}$. From now on, fix
once for all a commensurable class $\xi_{0}$ of cocompact lattices. Then
$\mathfrak{n}(\xi_{0})$ is a $\mathbb{Q}$-form of $\mathfrak{n}^{\mathbb{C}}$,
what provides a $\mathbb{Q}$-form of the algebraic groups ${\bf G}$ and
$\overline{\bf G}$. It induces an action of ${\mathrm{Gal}}(\mathbb{Q})$ over
$\overline{\bf G}(\overline{\mathbb{Q}})$.
Set ${\overline{\mathbb{Q}}}_{re}=\overline{\mathbb{Q}}\cap\mathbb{R}$ and let
$\overline{\pi}:H^{1}({\mathrm{Gal}}(\mathbb{Q}),{\bf\overline{G}}(\overline{\mathbb{Q}}))\rightarrow
H^{1}({\mathrm{Gal}}({\overline{\mathbb{Q}}}_{re}),{\bf\overline{G}}(\overline{\mathbb{Q}}))$
be the natural map. Recall that these two non-abelian cohomologies are pointed
sets, where the distinguished point $*$ comes from the given $\mathbb{Q}$-form
and the induced ${\overline{\mathbb{Q}}}_{re}$-form. Denote by
$\mathrm{Ker}\,\overline{\pi}$ the kernel of $\overline{\pi}$, i.e. the fiber
${\overline{\pi}}^{-1}(*)$ of the distinguished point.
Let ${\cal L}(N)$ be the set of all commensurable classes of lattices
of $N$, up to conjugacy.
###### Corollary 10.
There is a natural identification
${\cal L}(N)\simeq\,\mathrm{Ker}\,\overline{\pi}$.
###### Proof.
For any field $K\subset\mathbb{C}$, set
$\mathfrak{n}^{K}=K\otimes\mathfrak{n}(\xi_{0})$. For any two fields $K\subset
L\subset\mathbb{C}$, let ${\cal F}(L/K)$ be the set of $K$-forms of
$\mathfrak{n}^{L}$, up to conjugacy. Then ${\cal F}(L/K)$ is a pointed set,
whose distinguished point is the $K$-form $\mathfrak{n}^{K}$.
By the Lefschetz principle, the $\mathbb{Q}$-forms of
$\mathfrak{n}^{\mathbb{C}}$ (up to conjugacy) are in bijection with the
$\mathbb{Q}$-forms of $\mathfrak{n}^{\overline{\mathbb{Q}}}$. Similarly by the
Tarski-Seidenberg principle the real forms (up to conjugacy) of
$\mathfrak{n}^{\mathbb{C}}$ are in bijection with the
${\overline{\mathbb{Q}}}_{re}$-forms of
$\mathfrak{n}^{\overline{\mathbb{Q}}}$. So we have
${\cal F}(\mathbb{C}/\mathbb{Q})\simeq{\cal
F}(\overline{\mathbb{Q}}/\mathbb{Q})$ and ${\cal
F}(\mathbb{C}/\mathbb{R})\simeq{\cal
F}(\overline{\mathbb{Q}}/{\overline{\mathbb{Q}}}_{re})$.
Since a Lie algebra is a vector space endowed with a tensor (its Lie bracket),
it follows from [29], III-2, Proposition 1 that
${\cal
F}(\overline{\mathbb{Q}}/\mathbb{Q})=H^{1}({\mathrm{Gal}}(\mathbb{Q}),{\bf
G}(\overline{\mathbb{Q}}))$, and
${\cal
F}(\overline{\mathbb{Q}}/{\overline{\mathbb{Q}}}_{re})=H^{1}({\mathrm{Gal}}({\overline{\mathbb{Q}}}_{re}),{\bf
G}(\overline{\mathbb{Q}}))$.
Moreover since ${\bf U}$ is unipotent, we have
$H^{1}({\mathrm{Gal}}(\mathbb{Q}),{\bf G}(\overline{\mathbb{Q}}))\simeq
H^{1}({\mathrm{Gal}}(\mathbb{Q}),{\bf\overline{G}}(\overline{\mathbb{Q}}))$,
and
$H^{1}({\mathrm{Gal}}({\overline{\mathbb{Q}}}_{re}),{\bf G}(\overline{\mathbb{Q}}))\simeq
H^{1}({\mathrm{Gal}}({\overline{\mathbb{Q}}}_{re}),{\bf\overline{G}}(\overline{\mathbb{Q}}))$.
There is a commutative diagram of pointed sets
${\cal F}(\mathbb{C}/\mathbb{Q})$ | $\overset{\theta}{\longrightarrow}$ | ${\cal F}(\mathbb{C}/\mathbb{R})$
---|---|---
$\downarrow$ | | $\downarrow$
${\cal F}(\overline{\mathbb{Q}}/\mathbb{Q})$ | $\overset{\theta^{\prime}}{\longrightarrow}$ | ${\cal F}(\overline{\mathbb{Q}}/{\overline{\mathbb{Q}}}_{re})$
$\downarrow$ | | $\downarrow$
$H^{1}({\mathrm{Gal}}(\mathbb{Q}),{\bf G}(\overline{\mathbb{Q}}))$ | $\overset{\pi}{\longrightarrow}$ | $H^{1}({\mathrm{Gal}}({\overline{\mathbb{Q}}}_{re}),{\bf G}(\overline{\mathbb{Q}}))$
$\downarrow$ | | $\downarrow$
$H^{1}({\mathrm{Gal}}(\mathbb{Q}),{\bf\overline{G}}(\overline{\mathbb{Q}}))$ | $\overset{\overline{\pi}}{\longrightarrow}$ | $H^{1}({\mathrm{Gal}}({\overline{\mathbb{Q}}}_{re}),{\bf\overline{G}}(\overline{\mathbb{Q}}))$
where $\theta$ is the map $\mathbb{R}\otimes_{\mathbb{Q}}\,-$,
$\theta^{\prime}$ is the map
${\overline{\mathbb{Q}}}_{re}\otimes_{\mathbb{Q}}\,-$, and $\pi$ and
$\overline{\pi}$ are the restriction maps. It is tautological that ${\cal
L}(N)=\mathrm{Ker}\,\theta$. Since all vertical maps are bijective, it follows
that ${\cal L}(N)$ is isomorphic to $\mathrm{Ker}\,\overline{\pi}$.
∎
## 10 Some Nilpotent Lie groups of infinite complexity.
This chapter is devoted to the analysis of a class of CSC nilpotent Lie groups
${\cal N}$, for which the classification of commensurable classes and the
computation of their complexity are very explicitly connected with the
arithmetic of complex quadratic fields.
For $K=\mathbb{R}$ or $\mathbb{C}$, let $O(2,K)$ be the group of linear
automorphisms of $K^{2}$ preserving the quadratic form $x^{2}+y^{2}$.
Let ${\cal L}$ be the class of nilpotent Lie algebras $\mathfrak{n}^{\mathbb{R}}$ over
$\mathbb{R}$ satisfying the following properties
(i) $\mathfrak{n}^{\mathbb{R}}$ has a $\mathbb{Q}$-form,
(ii) the quotient
$\mathfrak{n}^{\mathbb{R}}/[\mathfrak{n}^{\mathbb{R}},\mathfrak{n}^{\mathbb{R}}]\simeq\mathbb{R}^{2}$
has dimension two,
(iii) the Lie algebra
$\mathfrak{n}^{\mathbb{C}}:=\mathbb{C}\otimes\mathfrak{n}^{\mathbb{R}}$ has a
special grading, and
(iv) for $K=\mathbb{R}$ or $\mathbb{C}$, the image of
$\mathrm{Aut}\,\mathfrak{n}^{K}$ in
$GL(\mathfrak{n}^{K}/[\mathfrak{n}^{K},\mathfrak{n}^{K}])$ is $O(2,K)$.
Let ${\cal N}$ be the class of CSC nilpotent Lie groups $N$ whose Lie algebra
$\mathfrak{n}^{\mathbb{R}}$ is in ${\cal L}$.
It should be noted that the class ${\cal N}$ is not empty. There is one Lie
group $N_{112}\in{\cal N}$ of dimension $112$, see [22]. Indeed [22] contains
a general method to find nilpotent Lie algebras with a prescribed group of
automorphisms, modulo its unipotent radical. For the group $O(2,\mathbb{R})$,
$N_{112}$ is the Lie group of minimal dimension obtained with this method.
However it is difficult to provide more details without going into very long
explanations.
From now on, $N$ will be any Lie group in the class ${\cal N}$, $\xi_{0}$ will
be one commensurable class of lattices in $N$ and
$\mathfrak{n}:=\mathfrak{n}(\xi_{0})$ will be the corresponding
$\mathbb{Q}$-form of $\mathfrak{n}^{\mathbb{R}}$. As before, set
$\mathfrak{n}^{K}=K\otimes\mathfrak{n}$ for any field $K\subset\mathbb{C}$.
Let ${\bf G}={\bf Aut}\,\mathfrak{n}$ be the algebraic automorphism group of
$\mathfrak{n}$, let ${\bf U}$ be its unipotent radical and set
${\bf\overline{G}}={\bf G}/{\bf U}$. By hypothesis, ${\bf\overline{G}}$ is the
algebraic group $O(2)$.
10.1 The $\mathbb{Z}$-grading of $\mathfrak{n}^{\mathbb{C}}$
Since ${\bf\overline{G}}(\mathbb{C})=O(2,\mathbb{C})$, a maximal torus ${\bf
H}$ of ${\bf G}(\mathbb{C})$ has dimension $1$. Therefore
$\mathfrak{n}^{\mathbb{C}}$ has a $\mathbb{Z}$-grading
$\mathfrak{n}^{\mathbb{C}}=\oplus_{k\in\mathbb{Z}}\,\mathfrak{n}^{\mathbb{C}}_{k}$,
satisfying the following properties
(i) the grading is essentially unique, namely any other grading is a multiple
of the given grading,
(ii) $\dim\mathfrak{n}^{\mathbb{C}}_{k}=\dim\mathfrak{n}^{\mathbb{C}}_{-k}$
for any $k$. In particular $\mathfrak{n}^{\mathbb{C}}$ does not admit a (non-
trivial) non-negative grading, and
(iii) the grading is not defined over $\mathbb{R}$.
Indeed since ${\bf\overline{G}}(\mathbb{C})=O(2,\mathbb{C})$, the normalizer
${\bf K}(\mathbb{C})$ of ${\bf H}(\mathbb{C})$ has two connected components,
and any $\sigma\in{\bf K}(\mathbb{C})\setminus{\bf K}(\mathbb{C})^{0}$
exchanges $\mathfrak{n}^{\mathbb{C}}_{k}$ and
$\mathfrak{n}^{\mathbb{C}}_{-k}$, which shows Assertion (ii). Since
${\bf\overline{G}}(\mathbb{R})=O(2,\mathbb{R})$, no torus of ${\bf
G}(\mathbb{R})$ is split, which implies Assertion (iii).
Moreover, the grading is not very special, so $\mathrm{fcp}(\xi)=\infty$ for
any commensurable class $\xi$. For the forthcoming computation of
$\mathrm{cp}(\xi)$, the following quantity will be involved
$e(N)=\sum_{k>0}\,k\dim\mathfrak{n}^{\mathbb{C}}_{k}$.
For example, for the Lie group $N_{112}$ of [22], we have $e(N_{112})=126$.
10.2 Classification of commensurable lattices in $N$
###### Lemma 27.
Let $N\in{\cal N}$. Up to conjugacy, there is a bijection between
(i) the commensurable classes of cocompact lattices in $N$, and
(ii) the positive definite quadratic forms on $\mathbb{Q}^{2}$.
###### Proof.
Let $q_{0}$ be a given positive definite quadratic form on $\mathbb{Q}^{2}$. It
determines a $\mathbb{Q}$-form of the algebraic group $O(2)$, and
$H^{1}({\mathrm{Gal}}(\mathbb{Q}),O(2,\overline{\mathbb{Q}}))$ classifies the
quadratic forms on $\mathbb{Q}^{2}$, while the kernel of
$H^{1}({\mathrm{Gal}}(\mathbb{Q}),O(2,\overline{\mathbb{Q}}))\to
H^{1}({\mathrm{Gal}}(\mathbb{R}),O(2,\overline{\mathbb{Q}}))$
classifies the positive definite quadratic forms on $\mathbb{Q}^{2}$. Thus the
lemma follows from Corollary 10.
∎
The classification of positive definite quadratic forms $q$ on
$\mathbb{Q}^{2}$ is well known. Up to conjugacy, $q$ can be written as
$q(x,y)=ax^{2}+ady^{2}$,
where $a,\,d$ are positive and $d$ is a square-free integer. Then $q$ is
determined by the following two invariants:
(i) its discriminant $-d$, viewed as an element of
$\mathbb{Q}^{*}/\mathbb{Q}^{*2}$,
(ii) the value $a$, viewed as an element in
$\mathbb{Q}^{*}/N_{K/\mathbb{Q}}(K^{*})$, where $K=\mathbb{Q}(\sqrt{-d})$.
Equivalently, this means that $q(\mathbb{Q}^{2}\setminus
0)=aN_{K/\mathbb{Q}}(K^{*})$.
For any positive definite quadratic forms $q$ on $\mathbb{Q}^{2}$, let
$\xi(q)$ be the corresponding commensurable class (or more precisely, the
conjugacy class of the commensurable class). By Theorem 9,
$\mathrm{cp}\,\xi(q)$ only depends on $O(q)$, therefore it only depends on the
discriminant $-d$.
10.3 The function $F(d)$
Let $d$ be a positive square-free integer. Set $K=\mathbb{Q}(\sqrt{-d})$, let
${\cal O}$ be its ring of integers, let $R$ be the set of roots of unity in $K$
and set $K_{1}=\\{z\in K|z\overline{z}=1\\}$. For $z\in K^{*}$, recall that
the integer $d(z)$ is defined by
$d(z)=N_{K/\mathbb{Q}}(\pi_{z})=\mathrm{Card\,}{\cal O}/\pi_{z}$, where
$\pi_{z}$ is the ideal $\pi_{z}=\\{a\in{\cal O}|az\in{\cal O}\\}$. Set
$F(d)=\mathrm{Min}_{z\in K_{1}\setminus R}\,d(z)$.
We will now show two formulas for $F(d)$. Indeed $F(d)$ is the norm of some
specific ideal in $K=\mathbb{Q}(\sqrt{-d})$, and it is also the minimal
solution of some diophantine equation.
Let ${\cal J}$ be the set of all ideals $\pi$ of ${\cal O}$ such that $\pi$
and $\overline{\pi}$ are coprime and $\pi^{2}$ is principal.
###### Lemma 28.
We have
$F(d)=\mathrm{Min}_{\pi\in{\cal J}}\,N_{K/\mathbb{Q}}(\pi)$.
In particular, we have $F(1)=5$ and $F(3)=7$.
###### Proof.
The map $z\mapsto\pi_{z}$ induces a bijection $(K_{1}\setminus R)/R\simeq{\cal
J}$, from which the first assertion follows. Moreover, if
$\mathrm{Cl}(K)=\\{0\\}$, then $F(d)$ is the smallest split prime number.
Therefore $F(1)=5$ and $F(3)=7$. ∎
Let’s consider the following diophantine equation
$({\cal E})$ $4n^{2}=a^{2}+db^{2}$, with $n>0$, $a>0$ and $b\neq 0$.
A solution $(n,a,b)$ of $({\cal E})$ is called primitive if $\gcd(n,a)=1$. Let
${\mathrm{Sol}}\,({\cal E})$ (respectively ${\mathrm{Sol}}\,_{prim}({\cal
E})$) be the set of solutions (respectively of primitive solutions) of $({\cal
E})$.
Let $\pi\in{\cal J}$. Since $\pi^{2}$ is principal, there are integers
$a(\pi)>0$ and $b(\pi)$ such that $a(\pi)+b(\pi)\sqrt{-d}$ is a generator of
$4\pi^{2}$. Moreover, let’s assume that $d\neq 1$ or $3$. Then $R=\\{\pm 1\\}$
and the integers $a(\pi)$ and $b(\pi)$ are uniquely determined. Thus there is
a map $\theta:{\cal J}\to{\mathrm{Sol}}\,({\cal E})$ defined by
$\theta(\pi)=(N_{K/\mathbb{Q}}(\pi),a(\pi),b(\pi))$.
###### Lemma 29.
Under the hypothesis that $d\neq 1$ or $3$, the map $\theta$ induces a
bijection from ${\cal J}$ to ${\mathrm{Sol}}\,_{prim}({\cal E})$. In
particular
$F(d)=\mathrm{Min}_{(n,a,b)\in{\mathrm{Sol}}\,({\cal E})}\,n$.
###### Proof.
Step 1: proof that $\theta({\cal J})\subset{\mathrm{Sol}}\,_{prim}({\cal E})$.
An algebraic integer $z\in{\cal O}$ is called primitive if there is no
integer $m>1$ such that $z/m$ is an algebraic integer. Equivalently, there is
no integer $m>1$ such that $m\mid z+\overline{z}$ and $m^{2}\mid
z\overline{z}$.
Let $\pi\in{\cal J}$ and set $z=1/2(a(\pi)+b(\pi)\sqrt{-d})$. Since
$z+{\overline{z}}=a(\pi)$ and $z.{\overline{z}}=N_{K/\mathbb{Q}}(\pi^{2})$,
$z$ is an algebraic integer which is a generator of $\pi^{2}$. Since $\pi^{2}$
and $\overline{\pi}^{2}$ are coprime, $z$ is primitive. Since
$z.{\overline{z}}=N_{K/\mathbb{Q}}(\pi)^{2}$, it follows that
$N_{K/\mathbb{Q}}(\pi)$ and $a(\pi)$ are coprime. Hence
$\theta(\pi)\in{\mathrm{Sol}}\,_{prim}({\cal E})$ and the claim is proved.
Step 2: proof that $\theta({\cal J})={\mathrm{Sol}}\,_{prim}({\cal E})$. Let
$(n,a,b)\in{\mathrm{Sol}}\,_{prim}({\cal E})$ and
$z=\frac{1}{2}(a+b\sqrt{-d})$. Since $z\neq\overline{z}$,
$z+{\overline{z}}=a$ and $z{\overline{z}}=n^{2}$, the number $z$ is an algebraic
integer. Set $\tau=z{\cal O}$ and let
$\tau=\pi_{1}^{m_{1}}\dots\pi_{k}^{m_{k}}$
be the factorization of $\tau$ into a product of prime ideals of ${\cal O}$,
where, as usual, we assume that $\pi_{i}\neq\pi_{j}$ for $i\neq j$ and all
$m_{i}$ are positive.
For $1\leq i\leq k$, let $p_{i}$ be the characteristic of the field ${\cal
O}/\pi_{i}$. Since $n$ and $a$ are coprime, $\tau$ and $\overline{\tau}$ are
coprime. It follows that $\overline{\pi_{i}}$ does not divide $\tau$. In
particular $\pi_{i}\neq\overline{\pi_{i}}$ and
$N_{K/\mathbb{Q}}(\pi_{i})=p_{i}$. Since $\pi_{i}$ and $\overline{\pi_{i}}$
are the only two ideals over $p_{i}$, we have
$m_{i}=v_{p_{i}}(N_{K/\mathbb{Q}}(\tau))=v_{p_{i}}(n^{2})$. Since each $m_{i}$
is even, we have $\tau=\pi^{2}$ for some ideal $\pi\in{\cal J}$. Therefore
$\theta(\pi)=(n,a,b)$, and the claim is proved.
Step 3. It follows easily that $\theta$ is a bijection from ${\cal J}$ to
${\mathrm{Sol}}\,_{prim}({\cal E})$. In particular
$F(d)=\mathrm{Min}_{(n,a,b)\in{\mathrm{Sol}}\,_{prim}({\cal E})}\,n$, from
which the lemma follows.
∎
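Lemma 29 makes $F(d)$ effectively computable by a finite search over solutions of $({\cal E})$, valid for square-free $d\neq 1,3$. A minimal sketch (the search cutoff `n_max` is an arbitrary assumption, not part of the lemma):

```python
from math import gcd, isqrt

def F(d, n_max=1000):
    """F(d) via Lemma 29: the least n > 0 such that 4n^2 = a^2 + d*b^2
    admits a primitive solution (a > 0, b != 0, gcd(n, a) = 1).
    Valid for square-free d different from 1 and 3."""
    for n in range(1, n_max + 1):
        for b in range(1, 2 * n + 1):
            a2 = 4 * n * n - d * b * b
            if a2 <= 0:
                break                    # d*b^2 already exceeds 4n^2
            a = isqrt(a2)
            if a * a == a2 and gcd(n, a) == 1:
                return n
    raise ValueError("no primitive solution with n <= n_max")
```

For instance this recovers $F(7)=F(15)=2$, as used in Corollary 13 below.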
10.4 Complexity computation
###### Theorem 11.
Let $q$ be a positive definite quadratic form on $\mathbb{Q}^{2}$ of
discriminant $-d$. Then we have
$\mathrm{cp}\,\xi(q)=F(d)^{e(N)}.$
###### Proof.
Step 1. Let $G\subset\mathrm{End}_{\mathbb{Q}}(K)$ be the group generated by
the multiplication by elements in $K_{1}$ and by the complex conjugation. We
have $G\simeq O(2)$ and $SO(2)\simeq K_{1}$. As an $O(q)$-module, there is an
isomorphism
$V\simeq\mathbb{Q}(\sqrt{-d})$,
where $V=\mathfrak{n}(\xi(q))/[\mathfrak{n}(\xi(q)),\mathfrak{n}(\xi(q))]$.
Step 2. Let ${\overline{\cal S}}(\mathfrak{n}(\xi(q)))$ be the image of ${\cal
S}(\mathfrak{n}(\xi(q)))$ in $O(q)$. We claim that
${\overline{\cal S}}(\mathfrak{n}(\xi(q)))=K_{1}\setminus R$.
Indeed $O(q)$ can be identified with a Levi factor of ${\bf G}(\mathbb{Q})$;
let $\rho:O(q)\to{\bf G}(\mathbb{Q})$ be a corresponding lift. Any element in
$R\cup(O(q)\setminus SO(q))$ has finite order, hence we have
${\overline{\cal S}}(\mathfrak{n}(\xi(q)))\subset K_{1}\setminus R$.
Let $z\in K_{1}\setminus R$. It is clear that $z$ is not an algebraic integer.
Since the grading is special, we have
$\mathfrak{z}^{\mathbb{C}}=\oplus_{k\neq 0}\mathfrak{z}^{\mathbb{C}}_{k}$.
Since the eigenvalue of $\rho(z)$ on $\mathfrak{z}_{k}$ is $z^{k}$, it
follows that $z$ belongs to ${\overline{\cal S}}(\mathfrak{n}(\xi(q)))$, which
proves the claim.
Step 3. Let $z\in K_{1}\setminus R$. We have $\overline{z}=z^{-1}$.
Therefore by Lemma 15 we have
${\mathrm{ht}}\,\rho(z)=\prod_{k\geq
1}d(z^{k})^{\dim\,\mathfrak{n}^{\mathbb{C}}_{k}}=d(z)^{e(N)}$.
Therefore Theorem 4 implies Theorem 5. ∎
Since $F(d)\geq{\sqrt{1+d}\over 2}$, it follows that
###### Corollary 12.
The group $N$ has infinite complexity.
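The bound $F(d)\geq\frac{\sqrt{1+d}}{2}$ invoked above can be read off from Lemma 29 (for $d\neq 1,3$; the excluded cases have $F(1)=5$ and $F(3)=7$ by Lemma 28, which also satisfy the bound): any primitive solution of $({\cal E})$ has $a\geq 1$ and $b^{2}\geq 1$, so for the minimizing solution

```latex
4F(d)^{2} = a^{2} + d\,b^{2} \;\geq\; 1 + d
\qquad\Longrightarrow\qquad
F(d) \;\geq\; \frac{\sqrt{1+d}}{2}.
```

Hence $\mathrm{cp}\,\xi(q)=F(d)^{e(N)}$ is unbounded as the discriminant $-d$ varies, which is the content of Corollary 12.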
Since $F(7)=F(15)=2$ and $F(d)\geq 3$ otherwise, it follows that
###### Corollary 13.
If the positive definite quadratic form $q$ has discriminant $-7$ or $-15$ we
have
$\mathrm{cp}\,\xi(q)=2^{e(N)}$,
and $\mathrm{cp}\,\xi(q)\geq 3^{e(N)}$ otherwise.
## References
* [1] A. Bousfield and D. Kan, Homotopy limits, completions and localizations. Springer Lecture Notes in Mathematics 304 (1972).
* [2] L. Bartholdi and S. Sidki, Self-similar products of groups. Groups Geom. Dyn. 14 (2020) 107-115.
* [3] Y. Benoist, Une nilvariete non-affine. J. Differential Geometry 41 (1995) 21-52.
* [4] A. Berlatto and S. Sidki, Virtual endomorphisms of nilpotent groups. Groups Geom. Dyn. 1 (2007) 21-46.
* [5] N. Bourbaki, Groupe et algèbres de Lie, ch. 2-3. Masson (1972).
* [6] A. Borel Linear Algebraic Groups. Springer Graduate Text in Math. 126 (1991).
* [7] C. Chevalley, Théorie des groupes de Lie, Tome III: Groupes algébriques. Hermann (1955).
* [8] Y. Cornulier, Gradings on Lie algebras, systolic growth, and cohopfian properties of nilpotent groups. Bull. de la SMF 144 (2016) 693-744.
* [9] V. Futorny, D. Kochloukova and S.Sidki, On self-similar Lie algebras and virtual endomorphisms. Math. Z. 292 (2019) 1123–1156.
* [10] R. I. Grigorchuk, On Burnside’s problem on periodic groups. Functional Anal. Appl. 14 (1980) 41-43.
* [11] R. I. Grigorchuk, On the Milnor problem of group growth. Dokl. Akad. Nauk SSSR 271 (1983) 30-33.
* [12] R. Grigorchuk, V. Nekrashevych and Z. Šunić, From Self-Similar Groups to Self-Similar Sets and Spectra. Fractal Geometry and Stochastics V, Progress in Probability 70 (2015) 175-207.
* [13] N. D. Gupta and S. N. Sidki, On the Burnside problem for periodic groups. Math. Z. 182 (1983) 385-388.
* [14] N. Gupta and S. Sidki, Extension of groups by tree automorphisms, in Contributions to group theory. Contemp. Math. 33 (1984) 232-246.
* [15] M. Hall, The Theory of Groups. Macmillan Company (1959).
* [16] D. Kochloukova and S. Sidki, Self-similar groups of type FPn. Geom. Dedicata 204 (2020) 241-264.
* [17] J.-L. Loday and B. Vallette, Algebraic Operads. Grundlehren der mathematischen Wissenschaften 346 (2012).
* [18] A. I. Malcev, On a class of homogeneous spaces. Izvestiya Akad. Nauk. SSSR. Ser. Mat.13 (1949) 9-22.
* [19] A. I. Malcev, Nilpotent groups without torsion. Izv. Akad. Nauk. SSSR, Math. 13 (1949) 201-212.
* [20] A. Manning, Anosov diffeomorphisms on nilmanifolds. Proc. A.M.S. 38 (1973) 423-426.
* [21] A. Manning, There are No New Anosov Diffeomorphisms on Tori. Am. J. of Math. 96 (1974) 422-429.
* [22] O. Mathieu, Automorphisms Groups of nilpotent Lie algebras, and applications. In preparation.
* [23] J. Milnor, On fundamental groups of complete affinely flat manifolds. Advances in Math., 25(1977) 178-187.
* [24] V. Nekrashevych, Self-Similar Groups. Math. Survey and Monographs 117 (2005).
* [25] V. Nekrashevych and S.Sidki, Automorphisms of the binary tree: state-closed subgroups and dynamics of 1/2-endomorphisms, in Groups: topological, combinatorial and arithmetic aspects. London Math. Soc. Lecture Note 311 (2004) 375–404.
* [26] M.S. Raghunathan, Discrete subgroups in Lie groups. Springer-Verlag, Ergebnisse der Math. 68 (1972).
* [27] M. Rosenlicht, Some rationality questions on algebraic groups. Annali di Matematica 43 (1957) 25-50.
* [28] J. Scheuneman, Examples of locally affine spaces. Bull. Amer. Math. Soc. 77 (1971) 589-592.
* [29] J. P. Serre, Cohomologie galoisienne. Springer Lecture Notes in Math. 5 (1965).
* [30] S. Sidki: Regular trees and their automorphisms. Monografías de Matemática 56. Instituto de Matemática Pura e Aplicada (1998).
# Distributed Learning over Markovian Fading Channels for Stable Spectrum
Access
Tomer Gafni and Kobi Cohen are with the School of
Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer
Sheva 8410501, Israel. Email: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>. This
work has been submitted to the IEEE for possible publication. Copyright may be
transferred without notice, after which this version may no longer be
accessible. A short version of this paper was presented at the 57th Annual
Allerton Conference on Communication, Control, and Computing, 2019 [1].
###### Abstract
We consider the problem of multi-user spectrum access in wireless networks.
The bandwidth is divided into $K$ orthogonal channels, and $M$ users aim to
access the spectrum. Each user chooses a single channel for transmission at
each time slot. The state of each channel is modeled by a restless unknown
Markovian process. Previous studies have analyzed a special case of this
setting, in which each channel yields the same expected rate for all users. By
contrast, we consider a more general and practical model, where each channel
yields a different expected rate for each user. This model adds a significant
challenge of how to efficiently learn a channel allocation in a distributed
manner to yield a global system-wide objective. We adopt the stable matching
utility as the system objective, which is known to yield strong performance in
multichannel wireless networks, and develop a novel Distributed Stable
Strategy Learning (DSSL) algorithm to achieve the objective. We prove
theoretically that DSSL converges to the stable matching allocation, and the
regret, defined as the loss in total rate with respect to the stable matching
solution, has a logarithmic order with time. Finally, simulation results
demonstrate the strong performance of the DSSL algorithm.
## I Introduction
We consider the spectrum access problem, where a shared bandwidth is divided
into $K$ orthogonal channels (i.e., sub-bands), and $M$ users want to access
the spectrum, where $K\geq M$. Each channel is modeled by a Finite-State
Markovian Channel (FSMC), which is independent and non-identically distributed
across channels. The FSMC is a tractable model widely used to capture the
time-varying behavior of a radio communication channel [2, 3]. It is often
employed to model radio channel dynamics due to primary user occupancy effects
in hierarchical cognitive radio networks (where the $M$ secondary (unlicensed)
users are cognitive in terms of learning and adapting good access strategies),
or the external interference effects in the open sharing model among $M$ users
in the wireless network (e.g., ISM band) [4, 5]. At each time step, each user
experiences a different transmission rate over each channel depending on its
FSMC distribution, where the FSMC parameters (i.e., the transition
probabilities that govern the Markov chain) are unknown. At each time step,
each user is allowed to choose one channel to access, and observe the
instantaneous channel state. If two users or more access the same channel at
the same time, a collision occurs and the achievable rate is zero.
We adopt the stable matching utility (see Section II for details) as the
system objective, which is known to yield strong performance in multichannel
wireless networks [6]. We define the regret as the loss in total rate with
respect to the stable matching solution with known FSMCs. The objective is to
develop a distributed learning algorithm for channel allocation and access
under unknown FSMCs that minimizes the growth rate of the regret with time
$t$.
### I-A Main Results
The stable matching problem for multi-user spectrum access was first
introduced in [6] under the assumption that the expected rates are known, and
a distributed opportunistic CSMA algorithm that solves the problem was
proposed. The model with an unknown expected rate matrix and rested setting
(i.e., the states of the Markovian process do not change if not observed by
the user) was studied in [7, 8]. A regret (with respect to the optimal
allocation) of near-$O(\log t)$ was achieved. However, these algorithms
require intensive communication between users in order to apply the auction
algorithm [9]. In [10], the authors reduced the communication burden, but
without guarantees on the achievable regret. Recently, it was shown in [11,
12] that achieving a sum-regret of near-$O(\log t)$ is possible without
communication between users, but only for the case of i.i.d. channels. In this
paper we focus on the general case where the channel states may change whether
or not they are being observed (i.e., the restless Markovian setting), and
improve the regret scaling with the system parameters by a simple distributed
implementation. The main contributions are summarized below.
##### A general model for spectrum access using a restless Markovian channel
model
As explained above, by contrast to [6, 7, 8, 10, 11, 12], in this paper we
first solve the channel allocation and access problem under general unknown
restless Markovian channel model. Handling this model adds significant
challenges in algorithm design and regret analysis. Due to the restless nature
of the channels and potential reward loss due to transient effects as compared
to steady state when switching channels, learning the Markovian channel
characteristics requires that the channels be accessed in a judicious
consecutive manner for a period of time. This is reflected in a novel
algorithm design that guarantees efficient learning, as detailed next.
##### Algorithm Development
We are facing an online learning problem constituted by the well-known
exploration versus exploitation dilemma. To remedy this, we propose a novel
Distributed Stable Strategy Learning (DSSL) algorithm for solving the problem.
Since the FSMCs are unknown, the rate means must be learned by accessing all
channels via exploration phases. This results in increasing the regret, since
the stable allocation is not performed. Thus, the exploration time must be
minimized, while guaranteeing efficient learning. Roughly speaking, each
channel may require a different exploration time, depending on its
unknown parameters (see more details in Section III-D). The algorithm design
in this paper contributes to both tackling the more general model, as well as
improving the learning efficiency in a fully-distributed manner. Specifically,
in existing algorithms [7, 8, 10, 11, 12], the exploration phase of all
channels is determined by the channel that requires the largest exploration
time. This results in oversampling the channels and significantly increases
the regret. By contrast, the DSSL algorithm estimates online the desired
(unknown) exploration rate of each channel. Thus, by sampling the channels
according to the desired exploration rate, it avoids oversampling the
channels, and thus reduces the regret scaling significantly as compared to
existing algorithms.
##### Performance analysis
In terms of theoretical performance analysis, we prove that the DSSL algorithm
converges to the stable matching allocation, and the regret has a logarithmic
order with time. When comparing to existing approaches, DSSL achieves this
under the more general restless Markovian model, and also has significantly
better scaling with the system parameters. Specifically, under a common
benchmark setting of equal rates among users (but still vary among channels),
and $K>M$, which allows a theoretical comparison of learning efficiency
between different algorithms, in [8] and [13] the regret scales as
$O(\frac{MK}{(\Delta_{\min})^{2}}\log(t))$, in [12] as
$O(\frac{M^{3}K}{(\Delta_{\min})^{2}}\log(t))$ and in [11] the regret scales
as $O(\frac{MK^{2}}{(\Delta_{\min})^{2}}\log(t))$, where $\Delta_{\min}$ is
the difference in rates between the $M$th and $(M+1)$th best channels. In
contrast, under DSSL, the regret scales as
$O((\frac{1}{(\Delta_{\min})^{2}}+MK)\log(t))$. In addition, extensive
numerical experiments were performed to demonstrate the efficiency of the
proposed DSSL algorithm.
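As a quick sanity check on these rates, the leading $\log t$ coefficients can be compared numerically; a sketch (the parameter values $M=10$, $K=20$, $\Delta_{\min}=0.1$ are illustrative assumptions, and absolute constants plus lower-order terms are ignored):

```python
def regret_prefactors(M, K, delta_min):
    """Leading log(t) coefficients (up to absolute constants) of the
    regret bounds compared above."""
    d2 = delta_min ** 2
    return {
        "[8],[13]": M * K / d2,        # O(MK / Delta^2 * log t)
        "[12]":     M ** 3 * K / d2,   # O(M^3 K / Delta^2 * log t)
        "[11]":     M * K ** 2 / d2,   # O(M K^2 / Delta^2 * log t)
        "DSSL":     1 / d2 + M * K,    # O((1/Delta^2 + MK) * log t)
    }

# Illustrative setting: 10 users, 20 channels, minimal rate gap 0.1.
factors = regret_prefactors(M=10, K=20, delta_min=0.1)
```

In this setting the DSSL coefficient is roughly 300, versus 20,000 or more for the earlier bounds, since the $1/\Delta_{\min}^{2}$ term no longer multiplies $MK$.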
### I-B Related Work
A number of studies have developed distributed learning algorithms for a
special case of the restless Markovian channel model considered in this paper,
where each channel yields the same expected rate for all users [14, 15, 16].
This special case significantly simplifies the channel allocation problem and
the analysis (for instance, switching between assigned users does not affect
the resulting regret in this special case). In this paper, we consider the
general model where each channel yields a different expected rate for each
user. This models the situation of different channel fading states across
users and channels in actual wireless networks, and adds a significant
challenge of how to learn the desired channel allocation in a distributed
manner to achieve a global system-wide objective.
Another set of related work on multi-user channel allocation has approached it
from the angle of game theory and congestion control ([17, 18, 19, 20, 21,
22, 23, 24, 25, 26, 27] and references therein), hidden channel states [28],
and graph coloring ([29, 30, 31, 32] and references therein). The game
theoretic aspects of the problem have been investigated from both non-
cooperative (i.e., each user aims at maximizing an individual utility) [18,
19, 24, 25, 33], and cooperative (i.e., each user aims at maximizing a system-
wide global utility) [17, 34, 26, 35] settings. Model-free learning strategies
were developed in [36, 37] for orthogonal channels and in [38] for compact
models, and multiple-access channel strategies were developed in [39, 40]. Graph coloring
formulations have dealt with modeling the spectrum access problem as a graph
coloring problem, in which users and channels are represented by vertices and
colors, respectively (see [29, 30, 32, 31] and references therein for related
studies). Finally, none of these studies have considered the problem of
achieving provable stable strategies in the learning context under unknown
restless Markovian dynamics, as considered in this paper.
## II System Model and Problem Formulation
We consider a wireless network consisting of $K$ orthogonal channels indexed
by the set $\mathcal{K}=\\{1,2,...,K\\}$ and $M$ cognitive users (referred to
as users) indexed by the set $\mathcal{M}=\\{1,2,...,M\\}$, where $K\geq M$.
The users aim at accessing the spectrum to send their data. Each user is
allowed to choose a single channel for transmission at each time slot, and
transmit if the channel is not occupied by a primary user. The users operate
in a synchronous time-slotted fashion. Due to spatial geographic dispersion,
each user can potentially experience different achievable rates over the
channels. When a user $i$ transmits on channel $k$ (when the channel is free)
at time slot $t$, its data rate is given by $r_{i,k}(t)$. This information is
concisely represented by an $M\times K$ rate matrix $V(t)=\\{r_{i,k}(t)\\}$,
$i=1,...,M,k=1,...,K$.
We consider the case where the rate process $r_{i,k}(t)$ is Markovian and has
a well-defined steady state distribution. The transition probabilities
associated with the Markov chain are unknown to the users. The process
$r_{i,k}(t)$ evolves independently of the user’s actions (i.e., external
process). Furthermore, the channel states may change depending on whether or
not they are observed (i.e., restless setting). Specifically, the rate of user
$i$ on channel $k$, $r_{i,k}(t)$, is modeled as a discrete time, irreducible
and aperiodic Markov chain on a finite-state space $\mathcal{X}^{i,k}$ and is
represented by a transition probability matrix
$P^{i,k}\triangleq(p^{i,k}_{x,x^{\prime}}:x,x^{\prime}\in\mathcal{X}^{i,k})$.
The process mean (i.e., the expected rate) is denoted by $\mu_{i,k}$ and is
unknown to the users. We define the $M\times K$ expected rate matrix by
$U=\\{\mu_{i,k}\\}$, $i=1,...,M,k=1,...,K$.
Let $X_{i,k}(t)$ be the actual achievable rate for user $i$ on channel $k$ at
time $t$. If two or more users choose to access the same channel at the same
time slot, a collision occurs. In this case, $X_{i,k}(t)=0$. Otherwise, if
user $i$ has accessed channel $k$ without colliding with other users, then
$X_{i,k}(t)=r_{i,k}(t)$. The users implement carrier sensing to observe the
current channel state at each time slot as is typically done in cognitive
radio networks [14, 22]. Hence, the channel states are observed regardless of
collisions. The transmission scheme for the multi-user spectrum access model
is detailed in Section III.
### II-A Notations
We present the other notations that are used throughout the paper. Let
$\vec{\pi}_{i,k}\triangleq(\pi^{x}_{i,k},x\in\mathcal{X}^{i,k})$ be the
stationary distribution of the Markov chain $P^{i,k}$, and let:
$\displaystyle\pi_{\min}\triangleq\min_{i\in\mathcal{M},k\in\mathcal{K},x\in\mathcal{X}^{i,k}}\
\pi^{x}_{i,k},\quad\hat{\pi}_{i,k}^{x}\triangleq\max\\{\pi_{i,k}^{x},1-\pi_{i,k}^{x}\\},\quad\hat{\pi}_{\max}\triangleq\max_{i\in\mathcal{M},k\in\mathcal{K},x\in\mathcal{X}^{i,k}}\\{\pi_{i,k}^{x},1-\pi_{i,k}^{x}\\}$.
We define
$X_{\max}\triangleq\max_{i\in\mathcal{M},k\in\mathcal{K}}\\{|\mathcal{X}^{i,k}|\\}$
as the maximal cardinality among the state spaces, and
$\displaystyle
x_{\max}\triangleq\max_{i\in\mathcal{M},k\in\mathcal{K},x\in\mathcal{X}^{i,k}}x,\quad
r_{\max}\triangleq\max_{i\in\mathcal{M},k\in\mathcal{K}}\sum_{x\in{\mathcal{X}^{i,k}}}x$.
Let $\lambda_{i,k}$ be the second largest eigenvalue of $P^{i,k}$, and
$\displaystyle\lambda_{\max}\triangleq\max_{i\in\mathcal{M},k\in\mathcal{K}}\
\lambda_{i,k}$ be the maximal one among all channels and users. Also,
$\displaystyle\overline{\lambda}_{\min}\triangleq
1-\lambda_{\max},\displaystyle\overline{\lambda}_{i,k}\triangleq
1-\lambda_{i,k}$ is the eigenvalue gap. Let $M^{i,k}_{x,y}$ be the mean
hitting time of state $y$ starting at initial state $x$ for channel $k$ used
by user $i$, and $\displaystyle
M^{i,k}_{\max}\triangleq\max_{x,y\in\mathcal{X}^{i,k},x\neq y}M^{i,k}_{x,y}$.
We also define:
$A_{\max}\triangleq\max_{i,k}\;(\min_{x\in\mathcal{X}^{i,k}}\
\pi_{i,k}^{x})^{-1}\sum\limits_{x\in{\mathcal{X}^{i,k}}}x,$
and
$\begin{array}[]{l}\displaystyle
L\triangleq\frac{28x_{\max}^{2}r_{\max}^{2}\hat{\pi}_{\max}^{2}}{\bar{\lambda}_{\min}}.\end{array}$
(1)
The expectations $\mu_{i,k}$ are given by:
$\displaystyle\mu_{i,k}=\sum\limits_{x\in{\mathcal{X}^{i,k}}}x\cdot\pi_{i,k}^{x}$,
and we define $\sigma_{i}$, for $i=1,...,M$, as a permutation of
$\\{1,\ldots,K\\}$ such that
$\displaystyle\mu_{i,\sigma_{i}(1)}>\mu_{i,\sigma_{i}(2)}>\ldots>\mu_{i,\sigma_{i}(K)}$.
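These quantities are straightforward to compute for a concrete chain. A minimal sketch for a single two-state channel (the transition matrix and the rate states $\mathcal{X}=\{0,1\}$ are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Illustrative transition matrix P^{i,k} over rate states X = {0, 1}.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
states = np.array([0.0, 1.0])

# Stationary distribution pi: left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

mu = states @ pi                        # expected rate mu_{i,k}
lam2 = sorted(np.abs(eigvals))[-2]      # second-largest eigenvalue lambda_{i,k}
gap = 1.0 - lam2                        # eigenvalue gap bar{lambda}_{i,k}
```

For this chain $\vec{\pi}=(5/6,1/6)$, so $\mu_{i,k}=1/6$, $\lambda_{i,k}=0.4$ and the eigenvalue gap is $0.6$.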
### II-B A Stable Channel Allocation
Let $a_{i}(t)\in\mathcal{K}$ be a selection rule, indicating which channel is
chosen by user $i$ at time $t$, which is a mapping from the observed history
of the process (i.e., all past actions and observations up to time $t-1$) to
$\left\\{1,...,K\right\\}$. The expected aggregated data rate for all users up
to time $t$ is given by:
$\displaystyle
R(t)=\mathbb{E}\left[\sum\limits_{n=1}^{t}\sum\limits_{i=1}^{M}X_{i,a_{i}(n)}(n)\right].$
(2)
A policy $\phi_{i}$ is a time series vector of selection rules:
$\phi_{i}=(a_{i}(t),t=1,2,...)$ for user $i$.
Definition 1 ([6]): A bipartite matching between channels and users is a
permutation $P:\mathcal{M}\rightarrow\mathcal{K}$. The optimal centralized
allocation problem is to find a bipartite matching:
$\displaystyle\mathbf{k}^{**}=\arg\max_{\mathbf{k}\in
P}\sum\limits_{i=1}^{M}\mu_{i,k(i)}$.
Definition 2 ([6]): A matching $S:\mathcal{M}\rightarrow\mathcal{K}$ is stable
if for every $i\in\mathcal{M}$ and $k\in\mathcal{K}$ satisfying $S(i)\neq k$,
if $\mu_{i,S(i)}<\mu_{i,k}$ then there exists some user
$i^{\prime}\in\mathcal{M}$ such that $S(i^{\prime})=k$ and
$\mu_{i^{\prime},k}>\mu_{i,k}$.
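Definition 2 can be verified mechanically for a given matching; a minimal sketch (the function name and the example rate matrix are our own illustrations, not taken from [6]):

```python
def is_stable(U, S):
    """Check Definition 2: U[i][k] = mu_{i,k}; S maps user i -> channel S[i]."""
    occupant = {S[i]: i for i in S}                # channel -> its user
    for i in S:
        for k in range(len(U[i])):
            if k == S[i] or U[i][k] <= U[i][S[i]]:
                continue                           # user i does not prefer k
            j = occupant.get(k)                    # current user of k, if any
            if j is None or U[j][k] <= U[i][k]:
                return False                       # (i, k) blocks S
    return True

U = [[5, 3, 1],      # expected rates of user 0 on channels 0..2
     [4, 2, 6]]      # expected rates of user 1
```

Here the matching $\{0\mapsto 0,\,1\mapsto 2\}$ is stable, while $\{0\mapsto 1,\,1\mapsto 0\}$ is not, since user 0 prefers channel 0 and its occupant has a lower rate there.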
Achieving the optimal allocation in Definition 1 requires implementing a
centralized solution, or a distributed solution with heavy complexity and slow
convergence [41]. Therefore, we are interested in developing a distributed
algorithm with low complexity that converges to the stable matching solution
in Definition 2 which is known to yield strong performance and very fast
convergence (when the expected rates are known) by using distributed
opportunistic CSMA (see Section III-B and [6] for more details on
opportunistic CSMA for stable channel allocation).
We assume that the entries in the matrix $U$ are all different, as in [6],
which holds in wireless networks due to continuous-valued Shannon rates
(otherwise, noise can be added to the matrix). Thus, there is a unique
stable matching solution under our assumptions, and the expected aggregated
rate under the stable matching solution $S$ is given by:
$\sum\limits_{i=1}^{M}\mu_{i,S(i)}$. The channel $S(i)$ (i.e., the channel
that user $i$ selects under the stable matching configuration) is referred to
as the stable channel selection of user $i$.
###### Remark 1
We point out that under an i.i.d. or rested Markovian channel model (in the
rested model the Markov chain $P^{i,k}$ makes a state transition only when
user $i$ accesses channel $k$), the optimal policy is to transmit on the same
channels that achieve the optimal centralized allocation in terms of the sum
expected rate. However, the optimal policy in the restless Markovian setting
has been shown to be PSPACE-hard even under known Markovian dynamics [42].
Therefore, a commonly adopted approach in this setting is to use a weaker
definition of the regret, first introduced in [43] and used later; e.g., in
[14, 15, 44, 45], where the policy is compared to a “partially informed” genie
who knows the expected rates of the channels, instead of the complete system
dynamics. In this paper we adopt this approach as well.
### II-C The Objective
Since the expected rates $\mu_{i,k}$ are unknown in our setting, the users
must learn this information online effectively so as to converge to the stable
matching solution. A widely used performance measure of online learning
algorithms is the regret, which is defined as the reward loss with respect to
an algorithm with a side information on the model. In our setting, we define
the regret for policy $\phi=(\phi_{i},1\leq i\leq M)$ as the loss in the
expected aggregated data rate with respect to the stable matching solution
that uses the true expected rates:
$\displaystyle r_{\phi}(t)\triangleq
t\cdot\sum\limits_{i=1}^{M}\mu_{i,S(i)}-\mathbb{E}_{\phi}\left[\sum\limits_{n=1}^{t}\sum\limits_{i=1}^{M}X_{i,\phi_{i}(n)}(n)\right].$
(3)
A policy $\phi$ that achieves a sublinear scaling rate of the regret with time
(and consequently the time averaged regret tends to zero) approaches the
required stable matching solution. The essence of the problem is thus to
design an algorithm that learns the unknown expected rates efficiently to
achieve the best sublinear scaling of the regret with time.
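The regret in (3) can be approximated empirically by replacing the expectation with a realized trace; a sketch (the toy trace below is an assumption for illustration, not the output of any particular policy):

```python
def empirical_regret(stable_rates, realized):
    """Empirical version of the regret in (3).

    stable_rates: list of mu_{i,S(i)} for each user i under the
                  stable matching S (the benchmark of Definition 2).
    realized:     realized[n][i] = X_{i, phi_i(n)}(n), the achieved
                  rate of user i at slot n (0 on collision).
    Returns the cumulative regret after each time slot.
    """
    benchmark = sum(stable_rates)
    regret, total = [], 0.0
    for t, slot in enumerate(realized, start=1):
        total += sum(slot)
        regret.append(t * benchmark - total)
    return regret

# Toy trace: 2 users, stable benchmark 5 + 6 = 11 per slot; user 1
# collides in the first slot and both play stably afterwards.
r = empirical_regret([5, 6], [[5, 0], [5, 6], [5, 6]])
```

A policy whose time-averaged empirical regret tends to zero is, in this sense, converging to the stable matching benchmark.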
## III The Distributed Stable Strategy Learning (DSSL) Algorithm
To achieve the objective, as detailed in Section II-C, we divide the time
horizon into three types of phases, which we term exploration, allocation, and
exploitation.
These three phases are performed repeatedly during the algorithm according to
judiciously designed policy rules, as detailed later.
The purpose of the exploration phase is to allow each user to explore all the
channels to identify its $M$ best channels (i.e., the $M$ channels that yield
the highest expected rates for the user). The users use the sample means as
estimators for the expected rates of the channels to achieve this goal. This
phase results in a regret loss, since users access sub-optimal channels to
explore them, and the stable allocation is not performed. However, this phase
is essential to identifying the $M$ best channels and consequently minimizing
the regret scaling with time. The purpose of the exploitation phase is to use
the currently learned information to execute the stable matching solution. The
allocation phase allows users to allocate the channels among themselves
properly in a distributed manner using opportunistic carrier sensing [46].
Since the rate process $r_{i,k}(t)$ can evolve even when channel $k$ is not
selected by user $i$, learning the Markovian rate statistics requires using
the channels in a consecutive manner for a period of time [14, 15]. Moreover,
frequent switching between channels can cause a loss due to the transient
effect. The high-level structure of the DSSL algorithm works as follows. Each
user $i$ computes its sufficient number of samples in the exploration phases
(condition (13), defined in Section III-E) for each channel $k$ at the end of every
exploitation phase $t$. If the number of samples is greater than the required
number for all $k$, user $i$ performs another exploitation phase. Otherwise,
if the number of samples is smaller than the sufficient number for one or more
channels, user $i$ carries out an exploration phase for those channels. When
no exploration phase is needed, an allocation phase is performed. At the end
of the allocation phase, each user identifies its stable channel selection,
and an exploitation phase is carried out. We now discuss the structure of the
DSSL algorithm in detail.
### III-A The structure of the exploration phase:
Let $n_{O}^{i,k}(t)$ be the number of exploration phases in which channel $k$
was selected by user $i$ up to time $t$. Each exploration phase is divided
into two sub epochs: a Random size Epoch (RE), and a Deterministic size Epoch
(DE). Let $\gamma^{i,k}(n_{O}^{i,k}(t)-1)$ be the last channel state observed
at the $(n_{O}^{i,k}(t)-1)^{th}$ exploration phase. RE starts at the beginning
of the exploration phase until state $\gamma^{i,k}(n_{O}^{i,k}(t)-1)$ is
observed. This epoch ensures that the generated sample path (after removing
the samples observed in the RE epochs) is equivalent to a sample path
generated by continuously sensing the Markovian channel without switching.
This step guarantees a consistent estimation of the expected rates. Then, DE
starts by sensing the channel for a deterministic period of time
$4^{n_{O}^{i,k}(t)}$. The deterministic period of time grows geometrically
with time to ensure a relatively small number of channel switching.
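A minimal sketch of this RE/DE structure on a hypothetical two-state Gilbert-Elliott channel (all parameter values below are illustrative, not from the paper's experiments): RE samples are discarded, DE blocks of length $4^{n}$ are kept, and the kept samples still estimate the stationary expected rate consistently even though the restless channel evolves while the user is away:

```python
import random

random.seed(0)
p01, p10 = 0.1, 0.2          # illustrative 0->1 and 1->0 transition probabilities
rate = {0: 0.1, 1: 1.0}      # per-state rates; stationary mean is 0.4

def step(s):
    """One step of the two-state Markov channel."""
    if s == 0:
        return 1 if random.random() < p01 else 0
    return 0 if random.random() < p10 else 1

state, last_seen = 0, 0
kept = []                    # DE samples only; RE samples are discarded
for n in range(1, 9):
    for _ in range(50):               # channel evolves while the user is away
        state = step(state)
    while state != last_seen:         # RE: wait for the previously seen state
        state = step(state)
    for _ in range(4 ** n):           # DE: keep a geometrically growing block
        state = step(state)
        kept.append(rate[state])
    last_seen = state
estimate = sum(kept) / len(kept)
print(round(estimate, 3))
```

Because each DE block restarts from the state observed at the end of the previous one, concatenating the kept blocks behaves like one uninterrupted sample path of the chain, which is exactly the consistency property the RE epoch buys.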
### III-B The structure of the allocation phase:
The allocation phase applies opportunistic CSMA among users. In opportunistic
CSMA, the backoff function maps from an index (i.e., expected rate) to a
backoff time [46]. The backoff function decreases monotonically with the
rates, so that the user with the highest rate on a certain channel waits the
minimal time before transmission. All other users sense that the channel is
occupied and do not transmit on that channel. To obtain the stable matching
allocation, this procedure continues until all $M$ users occupy $M$ channels.
For more details on opportunistic CSMA for stable matching see [6].
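The backoff mechanism can be sketched as follows; the linear backoff form and the constants are illustrative assumptions, not the exact function of [46]:

```python
# Sketch of opportunistic carrier sensing: the backoff decreases monotonically
# with the (sample-mean) rate, so the user with the highest rate on a channel
# transmits first. T_MAX, R_MAX and the linear form are illustrative choices.
T_MAX, R_MAX = 100.0, 100.0

def backoff(rate):
    return T_MAX * (1.0 - rate / R_MAX)   # higher rate -> shorter wait

# Users 1 and 2 contend for channel 2 with rates 70 and 90 (Table I values).
contenders = {1: 70.0, 2: 90.0}
winner = min(contenders, key=lambda u: backoff(contenders[u]))
print(winner)
```

Since the backoff function is a known monotone map, a loser that senses when the winner started transmitting can invert it to recover the winner's sample mean, which is the side information sub-phase $S_{2}$ relies on.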
The allocation phase has two goals in our setting. The first is to assign
channels to users to yield a stable matching solution as in [6]. However,
since the expected rates are unknown in our setting, the allocation phase is
executed by using the sample means. The second goal is to use the backoff
function to identify the differences in sample means among users and channels,
which is needed for setting efficient learning rates. This requires a new
mechanism that performs opportunistic CSMA, as detailed below.
Let $\mathcal{T}_{k}$ be the set of all users that attempt to transmit on
channel $k$ at a certain stage of the allocation phase. We initialize the
phase by declaring each user to be unassigned. We divide the time horizon of
the allocation phase into two sub-phases. In the first sub-phase, referred to
as $S_{1}$, we perform opportunistic CSMA for stable matching as in [6], while
replacing the expected rates by the sample means. Specifically, each
unassigned user attempts to transmit on its best channel, out of those it has
not yet attempted using opportunistic CSMA. On each channel $k$, the best user
out of $\mathcal{T}_{k}$ in this sub-phase ($S_{1}$) is declared to be
assigned. All the other users in $\mathcal{T}_{k}$ store the sample mean of
the assigned user (by mapping from the sensed backoff time to the sample
mean). This sub-phase continues until all $M$ users are assigned to $M$
channels. The second sub-phase, referred to as $S_{2}$, is used to obtain the
side information required for efficient learning. Specifically, the
opportunistic CSMA is executed again, but the assigned users of each channel
do not transmit. All other users that attempted to transmit in $S_{1}$
transmit again on the same channel $k$. The sample mean of the best user in
$S_{2}$ (i.e., the second best user in $\mathcal{T}_{k}$ for each channel $k$)
is stored by the assigned user. This sub-phase continues until all $M$ users
in $S_{2}$ were observed, and the phase ends.
An example for $M=K=3$ is given next. The expected rate matrix is shown in
Table I. Table II shows the transmission attempts made by the users in the
allocation phase before the stable matching was achieved (the assigned users
are shown in bold). At time $t=1$, each user transmits on its best channel
(sub-phase $S_{1}$). Users $1$ and $2$ aim to access the same channel (channel
$2$), and the channel is assigned to user $2$ since it has a higher expected
rate on this channel (i.e., smaller backoff time). At time $t=2$, sub-phase
$S_{2}$ is performed, in which user $1$ transmits again on channel $2$. At
time $t=3$, user $1$ (the only unassigned user) tries to access its second
best channel; i.e., channel $1$. However, the channel is assigned to user $3$
since it has a higher expected rate. The algorithm continues until the three
users are assigned to the three channels.
TABLE I: Expected rate matrix
U | channel 1 | channel 2 | channel 3
---|---|---|---
user 1 | 45 | 70 | 35
user 2 | 30 | 90 | 60
user 3 | 65 | 10 | 50
TABLE II: Allocation phase
Sub-phase | Time | channel 1 | channel 2 | channel 3
---|---|---|---|---
$S_{1}$ | t=1 | 3 | 1,2 |
$S_{2}$ | t=2 | | 1 |
$S_{1}$ | t=3 | 1,3 | 2 |
$S_{2}$ | t=4 | 1 | |
$S_{1}$ | t=5 | 3 | 2 | 1
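The outcome of sub-phase $S_{1}$ on this example can be reproduced with a small deferred-acceptance-style sketch (the function and variable names are illustrative; the actual phase runs distributedly through carrier sensing rather than centrally):

```python
# Sketch of sub-phase S1: each unassigned user bids for its best channel among
# those not yet attempted, and each channel keeps the bidder with the highest
# (sample-mean) rate, displacing any weaker holder.
U = [[45, 70, 35],
     [30, 90, 60],
     [65, 10, 50]]  # expected-rate matrix of Table I

def allocate(rates):
    M, K = len(rates), len(rates[0])
    prefs = [sorted(range(K), key=lambda k: -rates[i][k]) for i in range(M)]
    holder = [None] * K            # channel -> currently assigned user
    nxt = [0] * M                  # next preference index per user
    free = list(range(M))
    while free:
        i = free.pop(0)
        k = prefs[i][nxt[i]]
        nxt[i] += 1
        j = holder[k]
        if j is None or rates[i][k] > rates[j][k]:
            holder[k] = i
            if j is not None:
                free.append(j)     # displaced user bids again
        else:
            free.append(i)         # a better user holds this channel
    return holder

print(allocate(U))   # channel -> user (0-indexed)
```

On the Table I matrix this returns channel 1 held by user 3, channel 2 by user 2, and channel 3 by user 1, matching the stable allocation reached at $t=5$ in Table II.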
### III-C The structure of the exploitation phase:
Let $n_{I}(t)$ be the number of exploitation phases up to time $t$. In the
exploitation phase, each user transmits on the channel it was assigned
according to the last allocation phase (during $S_{1}$) for a deterministic
period of time $2\cdot 4^{n_{I}(t)-1}$ (for the $n_{I}^{th}$ exploitation
phase). There are no channel switching and no sample mean updating during the
exploitation phase.
### III-D Parameter setting for efficient learning:
As discussed earlier, exploring the channels increases the regret since the
stable matching allocation is not used. On the other hand, it is essential to
reduce the estimation error and hence reduce the regret scaling order with
time. In this section, we establish the sufficient exploration rate of each
channel for each user to achieve efficient learning of the stable matching
allocation. We next establish two parameters used in the learning strategy.
#### III-D1 Identifying $M$ best channels
We show in the analysis that a user (say user $i$) who is interested in
distinguishing with a sufficiently high accuracy between two channels $k,l$
that yield expected rates $\mu_{i,k},\mu_{i,\ell}$, respectively, must explore
them at least
$\displaystyle\frac{4L}{(\mu_{i,k}-\mu_{i,\ell})^{2}}\cdot\log(t)$ times. Let
$\mathcal{M}_{i}$ be the set of the $M$ best channels of user $i$. For each
channel $k\in\mathcal{M}_{i}$ we define the deterministic row exploration
coefficient (this definition is consistent with the definition of the $M\times
K$ expected rate matrix $U=\{\mu_{i,k}\}$, $i=1,\ldots,M$, $k=1,\ldots,K$) as
$\begin{array}[]{l}\displaystyle
D_{i,k}^{(R)}\triangleq\frac{4L}{\displaystyle\min_{\ell\neq
k}\\{(\mu_{i,k}-\mu_{i,\ell})^{2}\\}},\end{array}$ (4)
and for channel $k{\not\in}\mathcal{M}_{i}$,
$\begin{array}[]{l}\displaystyle
D_{i,k}^{(R)}\triangleq\frac{4L}{(\mu_{i,k}-\mu_{i,\sigma_{i}(M)})^{2}}.\end{array}$
(5)
Since the expected rates are unknown, the users need to estimate
$D_{i,k}^{(R)}$ for each channel $k\in\mathcal{K}$. This estimator is denoted
by $\widehat{D}_{i,k}^{(R)}(t)$. Let $\bar{s}_{i,k}(t)$ be the mean
transmission rate of user $i$ on channel $k$. Thus, the adaptive row
exploration coefficient for channels $k\in\mathcal{M}_{i}$ is defined by
$\begin{array}[]{l}\displaystyle\widehat{D}_{i,k}^{(R)}(t)\triangleq\frac{4L}{\max\big{\\{}\Delta_{\min}^{2},\displaystyle\min_{\ell\neq
k}\\{(\bar{s}_{i,k}(t)-\bar{s}_{i,\ell}(t))^{2}\\}-\epsilon\big{\\}}},\end{array}$
(6)
and similarly for $k{\not\in}\mathcal{M}_{i}$ we have:
$\begin{array}[]{l}\displaystyle\widehat{D}_{i,k}^{(R)}(t)\triangleq\frac{4L}{\max\\{\Delta_{\min}^{2},(\bar{s}_{i,k}(t)-\bar{s}_{i,\sigma_{i}(M)}(t))^{2}-\epsilon\\}},\end{array}$
(7)
where $\Delta_{\min}$ is the smallest difference between two entries in the
expected rate matrix $U$; i.e.,
$\Delta_{\min}\triangleq\displaystyle\min_{i\in\mathcal{M}}\Delta_{i},\qquad\Delta_{i}\triangleq\displaystyle\min_{k,\ell\in\mathcal{K},\,k\neq\ell}|\mu_{i,k}-\mu_{i,\ell}|.$
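For the Table I matrix these gap quantities are easy to check numerically (a small illustrative sketch):

```python
# Sketch: Delta_i (smallest within-row rate gap) and Delta_min for the
# Table I expected-rate matrix.
U = [[45, 70, 35], [30, 90, 60], [65, 10, 50]]
delta = [min(abs(row[k] - row[l]) for k in range(3) for l in range(3) if k != l)
         for row in U]
delta_min = min(delta)
print(delta, delta_min)
```

Here user 1 is the hardest to serve: its 10-unit gap between channels 1 and 3 sets $\Delta_{\min}=10$, which is the worst-case denominator appearing in the adaptive coefficients (6)-(7).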
#### III-D2 CSMA protocol identification
Consistent with the opportunistic CSMA protocol described above, each user $i$
needs to distinguish between a channel $k\in\mathcal{T}_{k}$ (this channel is
in $\mathcal{M}_{i}$ as well), and the best channel in $\mathcal{T}_{k}$ (and
the second best channel in $\mathcal{T}_{k}$ if $k$ is the best channel in
$\mathcal{T}_{k}$), for all $k$. Hence, we define the deterministic column
exploration coefficient for user $i$ for channel $k\in\mathcal{T}_{k}$ by:
$\begin{array}[]{l}\displaystyle
D_{i,k}^{(C)}\triangleq\frac{4L}{(\mu_{i,k}-\displaystyle\max_{j\neq
i,j\in\mathcal{T}_{k}}\mu_{j,k})^{2}},\end{array}$ (8)
and the adaptive column exploration coefficient by:
$\begin{array}[]{l}\displaystyle\widehat{D}_{i,k}^{(C)}(t)\triangleq\frac{4L}{\displaystyle\max\\{\Delta_{\min}^{2},(\bar{s}_{i,k}(t)-\max_{j\neq
i}\bar{s}_{j,k}(t))^{2}-\epsilon\\}}.\end{array}$ (9)
Note that $\max_{j\neq i,j\in\mathcal{T}_{k}}\bar{s}_{j,k}(t)$ is known to
user $i$ by the design of the opportunistic CSMA (by sub-phase $S_{2}$). By
combining (4) and (8), the deterministic exploration-rate coefficient of user
$i$ for channels $k\in\mathcal{M}_{i}\cap\mathcal{T}_{k}$ is given by:
$\begin{array}[]{l}\displaystyle
D_{i,k}\triangleq\max\\{D_{i,k}^{(R)},D_{i,k}^{(C)}\\},\end{array}$ (10)
and by combining (6) and (9), the adaptive exploration-rate coefficient of
user $i$ for channels $k\in\mathcal{M}_{i}\cap\mathcal{T}_{k}$ is given by:
$\begin{array}[]{l}\displaystyle\widehat{D}_{i,k}(t)=\max\\{\widehat{D}_{i,k}^{(R)}(t),\widehat{D}_{i,k}^{(C)}(t)\\}.\end{array}$
(11)
###### Remark 2
The design of the adaptive exploration-rate coefficients under DSSL
significantly reduces the regret as compared to existing algorithms that use
deterministic exploration-rate coefficients determined by the channel that
requires the largest exploration time [8, 10, 11, 12]. For example, consider
the expected rate matrix $U$ given in Table I, where parameter $L$ in (1)
equals $10^{4}$. In Table III, we present the deterministic exploration-rate
coefficients $D_{i,k}$ defined in (10) for each channel-user pair under DSSL,
where $D_{i,k}\cdot\log(t)$ is the number of samples required to achieve
consistent estimates of the expected rates. By contrast, in other existing
algorithms [8, 10, 11, 12], all channels are explored with the same
exploration-rate coefficient, which is inversely proportional to the squared
difference between the mean rate of the optimal allocation and the second best
one. When applying this to our example, each channel should be explored for
$1600\cdot\log(t)$ time steps (as seen in Table IV), which significantly
increases the exploration times unnecessarily, and consequently increases the
regret.
TABLE III: Exploration coefficients under the DSSL algorithm
$D_{i,k}$ | channel 1 | channel 2 | channel 3
---|---|---|---
user 1 | 400 | 100 | 400
user 2 | 45 | 100 | 45
user 3 | 178 | 25 | 178
TABLE IV: Exploration coefficients under other existing algorithms [8, 10, 11, 12]
$D_{i,k}$ | channel 1 | channel 2 | channel 3
---|---|---|---
user 1 | 1600 | 1600 | 1600
user 2 | 1600 | 1600 | 1600
user 3 | 1600 | 1600 | 1600
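The DSSL coefficients of Table III can be reproduced from (4), (8), and (10) with a short sketch; the per-user attempt sets are read off Table II, the helper names are illustrative, and entries are rounded up:

```python
import math

# Sketch: reproduce the exploration-rate coefficients D_{i,k} of Table III
# from the Table I rate matrix, with L = 1e4 as in Remark 2.
U = [[45, 70, 35],
     [30, 90, 60],
     [65, 10, 50]]
L = 1e4
attempts = {0: [1, 0, 2], 1: [1], 2: [0]}    # user -> channels attempted (Table II)
contenders = {0: [0, 2], 1: [0, 1], 2: [0]}  # channel -> users in T_k

def D_row(i, k):
    """Row coefficient (4): 4L over the min squared gap to another channel."""
    gap2 = min((U[i][k] - U[i][l]) ** 2 for l in range(3) if l != k)
    return 4 * L / gap2

def D_col(i, k):
    """Column coefficient (8): 4L over squared gap to the best other user in T_k."""
    others = [U[j][k] for j in contenders[k] if j != i]
    if not others:
        return 0.0
    return 4 * L / (U[i][k] - max(others)) ** 2

table = [[math.ceil(max(D_row(i, k),
                        D_col(i, k) if k in attempts[i] else 0.0))
          for k in range(3)] for i in range(3)]
print(table)
```

The column coefficient only binds where a user actually contends with others (e.g., user 1 on channel 2, where the gap to user 2's rate of 90 dominates its row gap); everywhere else the row coefficient (4) determines the entry.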
### III-E Choosing between phase types:
Since $D_{i,k}$ is unknown, the algorithm replaces $D_{i,k}$ by its estimate
$\widehat{D}_{i,k}(t)$. Furthermore, to ensure that $\widehat{D}_{i,k}(t)$
overestimates $D_{i,k}$, the users need to sense at least $I\cdot\log(t)$
times each of their channels in exploration phases, where
$\begin{array}[]{l}\displaystyle
I\triangleq\frac{7\epsilon^{2}}{48(r_{\max}+2)^{2}\cdot L},\end{array}$ (12)
which can be viewed as the rate function of the estimators among all channels.
At the end of the exploitation phases, the users check the condition:
$\begin{array}[]{l}\displaystyle
T_{i,k}^{(O)}(t)>\max\left\\{\widehat{D}_{i,k}(t),\frac{2}{I}\right\\}\cdot\log(t),\end{array}$
(13)
where $T_{i,k}^{(O)}(t)$ is the number of samples in the exploration phases
accessed in sub epochs DE for user $i$ on channel $k$ up to time $t$.
If the condition holds for user $i$, the user enters another exploitation
phase by transmitting on the same channel in which it transmitted during the
last exploitation phase. Otherwise, if the condition does not hold, the user
enters an exploration phase by sensing channel $k$. At the end of the phase,
the user signals the other users that it has finished the exploration phase.
If such an interruption occurred, all the users again check condition (13). If
it holds for all users, they start an allocation phase. At the end of the
allocation phase, an exploitation phase starts. A pseudocode of the DSSL
algorithm is provided in Algorithm 1.
Algorithm 1 DSSL Algorithm for user $i$
Initialization: For all $K$ channels, execute an exploration phase where a
single observation is taken from each channel;
while $t\leq T$ do
if Condition (13) does not hold for channel $k$ then
Enter an exploration phase with length $4^{n_{O}^{i,k}(t)}$;
Update $\bar{s}_{i,k}(t)$ and increment $n_{O}^{i,k}(t)=n_{O}^{i,k}(t)+1$;
goto step 3
end if
Send an interrupt signal;
Start an allocation phase;
Start an exploitation phase with length $2\cdot 4^{n_{I}(t)}$. If an
interruption occurs, go to step $3$;
$n_{I}(t)=n_{I}(t)+1$;
end while
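The phase-selection test can be sketched as follows, with illustrative parameter values; the constant $I$ follows (12):

```python
import math

# Sketch of the exploration-sufficiency test (13): user i re-enters an
# exploration phase for channel k unless its DE sample count T exceeds
# max{D_hat, 2/I} * log(t). L, r_max, and epsilon are illustrative values.
L, r_max, eps = 1e4, 1.0, 0.1
I = 7 * eps ** 2 / (48 * (r_max + 2) ** 2 * L)   # rate function (12)

def explored_enough(T, D_hat, t):
    return T > max(D_hat, 2 / I) * math.log(t)

# With D_hat = 400 (a Table III-sized coefficient) the 2/I term dominates here:
threshold = max(400, 2 / I) * math.log(1e6)
print(f"{threshold:.3e}")
print(explored_enough(1e6, 400, 1e6), explored_enough(2e9, 400, 1e6))
```

Because the threshold grows only as $\log(t)$ while exploitation epochs grow geometrically, the fraction of time spent exploring vanishes, which is what keeps the regret logarithmic.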
## IV Regret Analysis
Success in obtaining a logarithmic regret order depends on how fast
$\widehat{D}_{i,k}(t)$ converges to a value which is no smaller than $D_{i,k}$
(so that user $i$ senses channel $k$ at least $D_{i,k}\cdot\log t$ time slots
in most of the times). The analysis in the Appendix shows that exploring
channels as in (13) guarantees the desired convergence speed. Specifically, in
the following theorem we establish a finite-sample bound on the regret with
time, which results in a logarithmic scaling of the regret.
###### Theorem 1
Assume that the proposed DSSL algorithm is implemented and that the
assumptions on the system model described in Section II hold. Then, the regret
at time $t$ is upper bounded by:
$\begin{array}{l}
\displaystyle r(t)\leq A_{\max}\cdot\bigg(\sum\limits_{i=1}^{M}\sum\limits_{k=1}^{K}\big(\lfloor\log_{4}(3A_{i,k}\log(t)+1)\rfloor+1\big)\bigg)\\
\displaystyle\quad+\sum\limits_{i=1}^{M}\sum\limits_{k=1}^{K}\bigg[\Big(4A_{i,k}\cdot\log(t)+1+M_{\max}^{i,k}\big(\lfloor\log_{4}(3A_{i,k}\log(t)+1)\rfloor+1\big)\Big)\\
\displaystyle\qquad\cdot\Big(\mu_{i,S(i)}+\mu_{S^{-1}(k),k}-\mu_{i,k}\Big)\bigg]\\
\displaystyle\quad+M^{2}\cdot A_{\max}\cdot\bigg(\sum\limits_{i=1}^{M}\sum\limits_{k=1}^{K}\big(\lfloor\log_{4}(3A_{i,k}\log(t)+1)\rfloor+1\big)\bigg)\\
\displaystyle\quad+\Big(2e\log(M+1)\Big)\cdot\bigg(\sum\limits_{i=1}^{M}\sum\limits_{k=1}^{K}\big(\lfloor\log_{4}(3A_{i,k}\log(t)+1)\rfloor+1\big)\bigg)\cdot\bigg(\sum\limits_{j=1}^{M}\mu_{j,S(j)}\bigg)\\
\displaystyle\quad+\bigg(A_{\max}+(M^{2}K+MK)\frac{6X_{\max}}{\pi_{\min}}\Big(\sum\limits_{j=1}^{M}\mu_{j,S(j)}\Big)\bigg)\cdot\bigg\lceil\log_{4}\Big(\frac{3}{2}t+1\Big)\bigg\rceil+O(1),
\end{array}$
(14)
where $A_{i,k}$ is given by:
$\displaystyle\vspace{0.0cm}A_{i,k}\triangleq\left\\{\begin{matrix}\max\\{2/I\;,\;D_{i,k}^{(\max)}\\}\;,&\mbox{if
$k\in\mathcal{G}_{i}$}\vspace{0.0cm}\\\
\max\\{2/I\;,\;4L/\Delta_{\min}^{2}\\}\;,&\mbox{if
$k{\not\in}\mathcal{G}_{i}$}\end{matrix}\right.\;,$ (15)
$\mathcal{G}_{i}$ is defined as the set of all indices $k\in\mathcal{K}$ of
user $i$ that satisfy:
$\displaystyle\min\\{(\displaystyle\min_{\ell\neq
k}\\{\mu_{i,k}-\mu_{i,\ell}\\})^{2},(\mu_{i,k}-\max_{j\neq
i}\mu_{j,k})^{2}\\}-2\epsilon>\Delta_{\min}^{2},$
for $k\in\mathcal{T}_{k}$, and
$\displaystyle(\displaystyle\min_{\ell\neq
k}\\{\mu_{i,k}-\mu_{i,\ell}\\})^{2}-2\epsilon>\Delta_{\min}^{2},$
for $k{\not\in}\mathcal{T}_{k}$, where $D_{i,k}^{(\max)}$ is defined as:
$\begin{array}[]{l}\displaystyle
D_{i,k}^{(\max)}\triangleq{\frac{4L}{\min\big{\\{}(\displaystyle\min_{\ell\neq
k}\\{\mu_{i,k}-\mu_{i,\ell}\\})^{2},(\mu_{i,k}-\max_{j\neq
i}\mu_{j,k})^{2}\big{\\}}-2\epsilon}}.\vspace{0.0cm}\end{array}$ (16)
The proof is given in the Appendix.
Note that Theorem 1 shows that similar to [13, 8, 11, 12], the regret under
DSSL has a logarithmic order with time. DSSL, however, achieves this under the
more general restless Markovian model, and also has significantly better
scaling with $M,K$ and $\Delta_{\min}$. Specifically, under a common benchmark
setting of equal rates among users (but still varying across channels), and $K>M$,
which allows a theoretical comparison of learning efficiency between different
algorithms, in [8] and [13] the regret scales as
$O(\frac{MK}{(\Delta_{\min})^{2}}\log(t))$, in [12] as
$O(\frac{M^{3}K}{(\Delta_{\min})^{2}}\log(t))$, and in [11] as
$O(\frac{MK^{2}}{(\Delta_{\min})^{2}}\log(t))$. In contrast, under DSSL,
the regret scales as $O((\frac{1}{(\Delta_{\min})^{2}}+MK)\log(t))$ due to the
novel algorithm design that explores every channel according to its unique
adaptive exploration rate, while guaranteeing efficient learning.
## V Simulation Results
In this section we present simulation results to evaluate the performance of
DSSL numerically. In Subsection V-A we start by evaluating the convergence of
DSSL under unknown restless fading FSMCs with respect to the stable matching
solution solved under known restless fading FSMCs. We also evaluate the
performance as compared to random allocation and the optimal centralized
allocation schemes. Then, in Subsection V-B we examine the learning efficiency of
DSSL as compared to other online learning algorithms under unknown restless
FSMC, and verify our theoretical logarithmic regret. We performed $1,000$
Monte-Carlo experiments and averaged the performance over the experiments.
### V-A Convergence of DSSL to stable matching
We start by describing the wireless channel model used in the simulations.
Each user experiences a block fading channel which remains constant during
each time slot, and varies between time slots. The channel response
experienced by user $i$ at time slot $t$ is given by
$h(i,t)=r(i,t)e^{j\rho(i,t)}$, where $r(i,t)=|h(i,t)|$ denotes the channel
rate, and $\rho(i,t)$ denotes the channel phase experienced by user $i$ at
time $t$. Let $f(i,r)$ denote the Probability Density Function (PDF) of the
fading channel rate $r(i)$ experienced by user $i$ (e.g., Rayleigh fading
distribution in the simulations). We consider independent but non-identically
distributed channels across users, and Markovian correlated channels across
time slots. The FSMC model [2, 3] partitions the range of the channel gain
values into a finite number of intervals and represents each interval as a
state of a Markov chain. The thresholds of the intervals at user $i$ are
denoted by $\tau_{n}(i),n=0,\ldots N$, where
$0=\tau_{0}(i)<\tau_{1}(i)<\ldots<\tau_{N-1}(i)<\tau_{N}(i)=\infty$. The
channel rate $r(i,t)$ experienced by user $i$ is said to be in state
$g_{n}(i)$, $1\leq n\leq N$, if it lies in the interval $\tau_{n-1}(i)\leq
r(i,t)<\tau_{n}(i)$. The states are partitioned to yield an equal initial
state probability for all states:
$\displaystyle\int_{\tau_{n-1}(i)}^{\tau_{n}(i)}f(i,r)dr=\frac{1}{N},n=1,\ldots,N$.
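For a Rayleigh-distributed rate this equal-probability partition has a closed form, since the CDF $F(r)=1-e^{-r^{2}/(2\sigma^{2})}$ can be inverted at the quantiles $n/N$; the sketch below uses an illustrative scale $\sigma=1$:

```python
import math

# Sketch: equal-probability FSMC thresholds tau_n for a Rayleigh fading rate,
# obtained by inverting F(r) = 1 - exp(-r^2 / (2 sigma^2)) at n/N.
sigma, N = 1.0, 6
taus = [sigma * math.sqrt(-2.0 * math.log(1.0 - n / N)) for n in range(N)]
taus.append(math.inf)        # tau_N = infinity
F = lambda r: 1.0 - math.exp(-r * r / (2 * sigma ** 2))
probs = [F(taus[n]) - F(taus[n - 1]) for n in range(1, N)]  # finite intervals
print([round(t, 3) for t in taus[:-1]], [round(p, 3) for p in probs])
```

Each finite interval carries probability exactly $1/N$, and the last (unbounded) interval absorbs the remaining $1/N$.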
The transition probability from state $g_{n}(i)$ to state $g_{\ell}(i)$ is
defined by:
$\displaystyle p_{n,\ell}(i)\triangleq\Pr\big(\tau_{\ell-1}(i)\leq r(i,t+1)<\tau_{\ell}(i)\;\big|\;\tau_{n-1}(i)\leq r(i,t)<\tau_{n}(i)\big),$
where $r(i,t)$ and $r(i,t+1)$ are the current channel gain and the channel
gain in the next time slot experienced by user $i$, respectively. In the
simulations, we quantized the channel gain to $6$ states; i.e., $N=6$, and we
simulated a case of $3$ users and $5$ channels. The transition probability
matrix $P$ and the expected rate matrix $U$ are given by:
$P=\left(\begin{array}{cccccc}3/6&2/6&1/6&0&0&0\\ 2/8&3/8&2/8&1/8&0&0\\ 1/9&2/9&3/9&2/9&1/9&0\\ 0&1/9&2/9&3/9&2/9&1/9\\ 0&0&1/8&2/8&3/8&2/8\\ 0&0&0&1/6&2/6&3/6\end{array}\right),$
$U=\left(\begin{array}{ccccc}45&70&35&17.5&12.5\\ 27.5&90&60&15&20\\ 65&10&50&16.5&30\end{array}\right).$
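As a numeric sanity check on this $P$, its stationary distribution can be computed by power iteration in pure Python; detailed balance for this tridiagonal-band chain gives $\pi\propto(6,8,9,9,8,6)$:

```python
# Sketch: stationary distribution of the 6-state transition matrix P above
# via power iteration (no external libraries).
P = [[3/6, 2/6, 1/6, 0, 0, 0],
     [2/8, 3/8, 2/8, 1/8, 0, 0],
     [1/9, 2/9, 3/9, 2/9, 1/9, 0],
     [0, 1/9, 2/9, 3/9, 2/9, 1/9],
     [0, 0, 1/8, 2/8, 3/8, 2/8],
     [0, 0, 0, 1/6, 2/6, 3/6]]
pi = [1/6] * 6
for _ in range(1000):
    pi = [sum(pi[i] * P[i][j] for i in range(6)) for j in range(6)]
print([round(p, 4) for p in pi])
```

The check also confirms that every row of $P$ sums to one, i.e., the matrix is a valid stochastic matrix.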
We compared the expected rate evolution of DSSL under unknown FSMCs against
stable matching, random allocation and the optimal centralized allocation
solved under known FSMCs. The optimal centralized algorithm served as an upper
bound benchmark for all algorithms, and the stable matching served as an upper
bound for DSSL. In the random allocation scheme users access an arbitrary
channel with equal probability. As shown in Fig. 1 the average rate under DSSL
converged to that of the stable matching, as desired. The stable matching
allocation allocates user 1 to channel 3, user 2 to channel 2, and user 3 to
channel 1. Fig. 2 shows that the average achievable rate of each user in the
DSSL algorithm converged to the stable allocation.
Figure 1: Comparison of the system average rate of various schemes.
Figure 2: Comparison of users’ average rate for the proposed DSSL algorithm.
### V-B Learning efficiency of DSSL
We next evaluated the learning efficiency of DSSL as compared to other online
learning algorithms under unknown restless FSMCs. We considered the
hierarchical access channel model in spectrum access networks. This models the
situation of primary and secondary users that share the spectrum. Primary
users (licensed) occupy the spectrum occasionally, and a secondary user is
allowed to transmit over a single channel when the channel is free. Thus, each
channel has two states, good (free) and bad (occupied). The good state results
in a positive expected rate, whereas the bad state results in a zero rate. The
occupancies of the channels by the primary users are modeled as Markov
processes (i.e., Gilbert-Elliott channel).
First, we simulated a special case of our model where each channel yielded the
same expected rate for all users. In [14, 15], the RCA and DSEE algorithms
were proposed to solve this special case. The RCA algorithm performs random
regenerative cycles until catching predefined states in each phase, which
results in oversampling the channels, and therefore is expected to increase
the regret as compared to DSSL. The DSEE algorithm overcomes this issue by
performing deterministic sequencing for both the exploration and exploitation
phases. However, the deterministic sequencing requires the algorithm to
explore all channels using the maximal exploration rate among all channels,
which is expected to increase the regret as compared to DSSL (that learns the
desired exploration rate for each channel) as well. We simulated the case of
$2$ users, $6$ channels, each with two states: 0, 1. The transition
probabilities for all channels to transition from 0 to 1 and from 1 to 0,
respectively, were $p_{01}=[0.1,0.1,0.5,0.1,0.1,0.7]$,
$p_{10}=[0.2,0.3,0.1,0.4,0.5,0.08]$, and the expected rates for all channels
at states 1, 0, respectively, were $r_{1}=[1,1,1,1,1,1]$,
$r_{0}=[0.1,0.1,0.1,0.1,0.1,0.1]$. As can be seen in Fig. 3, the DSSL
algorithm outperformed both RCA and DSEE and achieved the logarithmic regret
order with time.
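The stationary expected rate of each Gilbert-Elliott channel in this experiment follows directly from the stated parameters, with $\pi_{1}=p_{01}/(p_{01}+p_{10})$; a quick sketch:

```python
# Sketch: stationary expected rates mu_k = pi_1 * r_1 + pi_0 * r_0 for the
# six Gilbert-Elliott channels of the Section V-B experiment.
p01 = [0.1, 0.1, 0.5, 0.1, 0.1, 0.7]
p10 = [0.2, 0.3, 0.1, 0.4, 0.5, 0.08]
r1, r0 = 1.0, 0.1
mu = [(p01[k] * r1 + p10[k] * r0) / (p01[k] + p10[k]) for k in range(6)]
best_two = sorted(range(6), key=lambda k: -mu[k])[:2]
print([round(m, 3) for m in mu], [k + 1 for k in best_two])
```

Since all users share the same rates in this special case, a successful learner should settle the two users on the two best channels, 6 and 3.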
Finally, we simulated the scenario where the stable matching allocation was
also the optimal centralized allocation, and the channels were i.i.d. across
time slots (and not Markovian). We compared DSSL to the $dE^{3}$ algorithm
which was designed for this setting. However, $dE^{3}$ requires communication
between users since it implements a distributed auction that requires users to
observe the bids of other users [8]. We used the same parameters as selected
and tuned by the authors in [8]. Similar to the DSEE algorithm, in $dE^{3}$
the exploration-rate coefficient was determined by the channel with the
largest exploration time. Thus, we expected that DSSL would yield a faster
convergence rate due to the adaptive design of the exploration epochs. As
shown in Fig. 4, DSSL indeed outperformed the $dE^{3}$ algorithm.
Figure 3: The regret (normalized by $\log t$) under DSSL, DSEE, and RCA as a
function of time. Parameter setting: 2 users, 6 channels, each with two
states: 0, 1. Transition probabilities for all channels to transition from 0
to 1 and from 1 to 0, respectively: $p_{01}=[0.1,0.1,0.5,0.1,0.1,0.7]$,
$p_{10}=[0.2,0.3,0.1,0.4,0.5,0.08]$, expected rates for all channels at states
1, 0, respectively: $r_{1}=[1,1,1,1,1,1]$, $r_{0}=[0.1,0.1,0.1,0.1,0.1,0.1]$.
Figure 4: The regret under DSSL and $dE^{3}$ as a function of time. Parameter
setting: 3 users, 3 channels, with mean transmission rates:
$[0.2,0.25,0.3;0.4,0.6,0.5;0.7,0.9,0.8]$.
## VI Conclusion
We developed a novel algorithm for the multi-user spectrum access problem in
wireless networks, dubbed the Distributed Stable Strategy Learning (DSSL)
algorithm. In contrast to existing models, for the first time we considered
the case of restless Markov channels, which requires a different algorithm
structure to accurately learn the channel statistics. Moreover, the channel
selection rules are adaptive in order to reduce the exploration time required
for efficient learning. We showed theoretically that DSSL achieves a
logarithmic regret with time, and better regret scaling with the system
parameters as compared to existing approaches that have studied special cases
of the model. Extensive simulation results supported the theoretical study and
demonstrated the strong performance of DSSL.
## VII Appendix
In this appendix we prove Theorem 1.
###### Definition 1
Let $T_{1}$ be the smallest integer, such that for all $t\geq T_{1}$ the
following holds: $D_{i,k}\leq\widehat{D}_{i,k}(t)$ for all
$i\in\mathcal{M},k\in\mathcal{K}$, and also $\widehat{D}_{i,k}(t)\leq
D_{i,k}^{(\max)}$ for all $i\in\mathcal{M},k\in\mathcal{G}_{i}$.
###### Lemma 1
Assume that the DSSL algorithm is implemented as described in Section III.
Then, $E(T_{1})<\infty$ is bounded independent of $t$.
Proof: $E[T_{1}]$ can be written as follows:
$\begin{array}{l}
\displaystyle E[T_{1}]=\sum\limits_{n=1}^{\infty}n\cdot\Pr\left(T_{1}=n\right)=\sum\limits_{n=1}^{\infty}\Pr\left(T_{1}\geq n\right)\\
\displaystyle=\sum\limits_{n=1}^{\infty}\Pr\bigg(\bigcup\limits_{i\in\mathcal{M}}\bigcup\limits_{k\in\mathcal{G}_{i}}\bigcup\limits_{l=n}^{\infty}\big(\widehat{D}_{i,k}(l)<D_{i,k}\mbox{ or }\widehat{D}_{i,k}(l)>D_{i,k}^{(\max)}\big)\\
\displaystyle\qquad\mbox{or }\bigcup\limits_{i\in\mathcal{M}}\bigcup\limits_{k\not\in\mathcal{G}_{i}}\bigcup\limits_{l=n}^{\infty}\big(\widehat{D}_{i,k}(l)<D_{i,k}\big)\bigg)\\
\displaystyle\leq\sum\limits_{i\in\mathcal{M}}\sum\limits_{k\in\mathcal{G}_{i}}\sum\limits_{n=1}^{\infty}\sum\limits_{l=n}^{\infty}\Pr\big(\widehat{D}_{i,k}(l)<D_{i,k}\mbox{ or }\widehat{D}_{i,k}(l)>D_{i,k}^{(\max)}\big)\\
\displaystyle\quad+\sum\limits_{i\in\mathcal{M}}\sum\limits_{k\not\in\mathcal{G}_{i}}\sum\limits_{n=1}^{\infty}\sum\limits_{l=n}^{\infty}\Pr\big(\widehat{D}_{i,k}(l)<D_{i,k}\big).
\end{array}$
Note that if we show that
$\begin{array}[]{l}\Pr\big{(}\widehat{D}_{i,k}(l)<D_{i,k}\mbox{\;or\;}\widehat{D}_{i,k}(l)>D_{i,k}^{(\max)}\big{)}\leq
C\cdot l^{-(2+\delta)}\end{array}$ (17)
for some constants $C>0,\delta>0$ for all
$i\in\mathcal{M},k\in\mathcal{G}_{i}$ for all $l\geq n$, then we get:
$\begin{array}{l}
\displaystyle\sum\limits_{i\in\mathcal{M}}\sum\limits_{k\in\mathcal{G}_{i}}\sum\limits_{n=1}^{\infty}\sum\limits_{l=n}^{\infty}\Pr\big(\widehat{D}_{i,k}(l)<D_{i,k}\mbox{ or }\widehat{D}_{i,k}(l)>D_{i,k}^{(\max)}\big)\\
\displaystyle\leq MK\cdot C\left[\sum\limits_{l=1}^{\infty}l^{-(2+\delta)}+\sum\limits_{n=2}^{\infty}\sum\limits_{l=n}^{\infty}l^{-(2+\delta)}\right]\\
\displaystyle\leq MK\cdot C\left[\sum\limits_{l=1}^{\infty}l^{-(2+\delta)}+\sum\limits_{n=2}^{\infty}\int\limits_{n-1}^{\infty}l^{-(2+\delta)}dl\right]\\
\displaystyle=MK\cdot C\left[\sum\limits_{l=1}^{\infty}l^{-(2+\delta)}+\frac{1}{1+\delta}\sum\limits_{n=2}^{\infty}(n-1)^{-(1+\delta)}\right]<\infty,
\end{array}$
which is bounded independent of $t$. Similarly, showing that
$\Pr\big{(}\widehat{D}_{i,k}(l)<D_{i,k}\big{)}\leq C\cdot l^{-(2+\delta)}$ for
some constants $C,\delta>0$ for all
$i\in\mathcal{M},k{\not\in}\mathcal{G}_{i}$ and all $l\geq n$ completes the
statement. We start by bounding (17). We look at the first inequality of (17) for
user $i$ with channel $k\in\mathcal{M}_{i}\cap\mathcal{T}_{k}$. The event
$\widehat{D}_{i,k}(t)<D_{i,k}$ implies:
$\begin{array}{l}
\displaystyle\max\bigg\{\Delta_{\min}^{2},\min\Big\{\min_{\ell\neq k}\{(\bar{s}_{i,k}(t)-\bar{s}_{i,\ell}(t))^{2}\}-\epsilon,\\
\displaystyle\qquad(\bar{s}_{i,k}(t)-\max_{j\neq i}\bar{s}_{j,k}(t))^{2}-\epsilon\Big\}\bigg\}\\
\displaystyle>\min\Big\{\min_{\ell\neq k}\{(\mu_{i,k}-\mu_{i,\ell})^{2}\},(\mu_{i,k}-\max_{j\neq i}\mu_{j,k})^{2}\Big\},
\end{array}$
which after algebraic manipulations implies
that at least one of the following holds:
$\begin{array}{l}
\displaystyle\min_{\ell\neq k}\{(\bar{s}_{i,k}(t)-\bar{s}_{i,\ell}(t))^{2}\}-\epsilon>\min_{\ell\neq k}\{(\mu_{i,k}-\mu_{i,\ell})^{2}\},\\
\displaystyle(\bar{s}_{i,k}(t)-\max_{j\neq i}\bar{s}_{j,k}(t))^{2}-\epsilon>(\mu_{i,k}-\max_{j\neq i}\mu_{j,k})^{2}.
\end{array}$
Similarly, the second inequality of (17) implies one of the following:
$\begin{array}{l}
\displaystyle\min_{\ell\neq k}\{(\bar{s}_{i,k}(t)-\bar{s}_{i,\ell}(t))^{2}\}-\epsilon<\min_{\ell\neq k}\{(\mu_{i,k}-\mu_{i,\ell})^{2}\}-2\epsilon,\\
\displaystyle(\bar{s}_{i,k}(t)-\max_{j\neq i}\bar{s}_{j,k}(t))^{2}-\epsilon<(\mu_{i,k}-\max_{j\neq i}\mu_{j,k})^{2}-2\epsilon.
\end{array}$
Let $k^{*}=\displaystyle\arg\min_{\ell\neq k}(\mu_{i,k}-\mu_{i,\ell})^{2}$ (i.e.,
$(\mu_{i,k}-\mu_{i,k^{*}})^{2}=\displaystyle\min_{\ell\neq k}\{(\mu_{i,k}-\mu_{i,\ell})^{2}\}$). Combining the events written above we
get:
$\begin{array}{l}
\displaystyle\Pr\big(\widehat{D}_{i,k}(t)<D_{i,k}\mbox{ or }\widehat{D}_{i,k}(t)>D_{i,k}^{(\max)}\big)\\
\displaystyle\leq\Pr\big(|(\bar{s}_{i,k}(t)-\bar{s}_{i,k^{*}}(t))^{2}-(\mu_{i,k}-\mu_{i,k^{*}})^{2}|>\epsilon\big)\\
\displaystyle\quad+\Pr\big(|(\bar{s}_{i,k}(t)-\max_{j\neq i}\bar{s}_{j,k}(t))^{2}-(\mu_{i,k}-\max_{j\neq i}\mu_{j,k})^{2}|>\epsilon\big).
\end{array}$
(18)
Each of the terms in (18) is the probability of a deviation of the squared
difference for two Markov chains’ sample means from the squared difference of
their expected means by an $\epsilon$. We look at the first term of (18).
Using conventional steps from set theory, it can be shown that:
$\begin{array}{l}
\displaystyle\Pr\big(|(\bar{s}_{i,k}(t)-\bar{s}_{i,k^{*}}(t))^{2}-(\mu_{i,k}-\mu_{i,k^{*}})^{2}|>\epsilon\big)\\
\displaystyle\leq\Pr\big(|(\bar{s}_{i,k}(t)-\bar{s}_{i,k^{*}}(t))[(\bar{s}_{i,k}(t)-\bar{s}_{i,k^{*}}(t))-(\mu_{i,k}-\mu_{i,k^{*}})]|>\tfrac{\epsilon}{2}\big)\\
\displaystyle\quad+\Pr\big(|(\mu_{i,k}-\mu_{i,k^{*}})[(\bar{s}_{i,k}(t)-\bar{s}_{i,k^{*}}(t))-(\mu_{i,k}-\mu_{i,k^{*}})]|>\tfrac{\epsilon}{2}\big)\\
\displaystyle\leq\Big[\Pr\big(|(\bar{s}_{i,k}(t)-\bar{s}_{i,k^{*}}(t))-(\mu_{i,k}-\mu_{i,k^{*}})|>1\big)\\
\displaystyle\quad+\Pr\big(|(\bar{s}_{i,k}(t)-\bar{s}_{i,k^{*}}(t))-(\mu_{i,k}-\mu_{i,k^{*}})|>\tfrac{\epsilon}{2(R+1)}\big)\\
\displaystyle\quad+\Pr\big(|(\mu_{i,k}-\mu_{i,k^{*}})+1|>R\big)\Big]\\
\displaystyle\quad+\Big[\Pr\big(\mu_{i,k}>R^{\prime}\big)\\
\displaystyle\quad+\Pr\big(|(\bar{s}_{i,k}(t)-\bar{s}_{i,k^{*}}(t))-(\mu_{i,k}-\mu_{i,k^{*}})|>\tfrac{\epsilon}{2(R^{\prime}+1)}\big)\Big],
\end{array}$
for every $R,R^{\prime}>0$. We choose $R=R^{\prime}=r_{\max}+1$, hence the
third and fourth terms are equal to $0$, and we get the concentration
inequalities:
$\Pr\big{(}|(\bar{s}_{i,k}(t)-\bar{s}_{i,k^{*}}(t))^{2}-(\mu_{i,k}-\mu_{i,k^{*}})^{2}|>\epsilon\big{)}$
$\displaystyle<6\cdot\max\bigg\{\Pr\big{(}|\bar{s}_{i,k}(t)-\mu_{i,k}|>\frac{\epsilon}{4(r_{\max}+2)}\big{)},$
(19)
$\displaystyle\Pr\big{(}|\bar{s}_{i,k^{*}}(t)-\mu_{i,k^{*}}|>\frac{\epsilon}{4(r_{\max}+2)}\big{)}\bigg\}.$
(20)
Similar bounds can be obtained for the second term in (18). To bound (19) and
(20) we use Lezaud’s results [47]:
###### Lemma 2 ([47])
Consider a finite-state, irreducible Markov chain $\{X_{t}\}_{t\geq 1}$ with
state space $S$, matrix of transition probabilities $P$, an initial
distribution $q$, and stationary distribution $\pi$. Let
$N_{\textbf{q}}=\left\|(\frac{q^{(x)}}{\pi^{(x)}},x\in S)\right\|_{2}$. Let
$\widehat{P}=P^{\prime}P$ be the multiplicative symmetrization of $P$, where
$P^{\prime}$ is the adjoint of $P$ on $l_{2}(\pi)$. Let
$\epsilon=1-\lambda_{2}$, where $\lambda_{2}$ is the second largest eigenvalue
of the matrix $\widehat{P}$; $\epsilon$ will be referred to as the eigenvalue
gap of $\widehat{P}$. Let $f:S\rightarrow\mathbb{R}$ be such that
$\sum\limits_{y\in S}\pi_{y}f(y)=0$, $\|f\|_{2}\leq 1$ and
$0\leq\|f\|_{2}^{2}\leq 1$ if $\widehat{P}$ is irreducible. Then, for any
positive integer $n$ and all $0<\lambda\leq 1$, we have:
$\Pr\displaystyle\left(\frac{\sum\limits_{t=1}^{n}f(X_{t})}{n}\geq\lambda\right)\leq
N_{\textbf{q}}\exp\left[-\frac{n\lambda^{2}\epsilon}{12}\right].$
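As a sanity check, the bound in Lemma 2 can be verified exactly on a small chain by enumerating all sample paths. The sketch below uses an illustrative two-state reversible chain (so the adjoint equals $P$ and $\widehat{P}=P^{2}$) started from its stationary distribution; the transition probabilities, the function $f$, the horizon $n$, and the level $\lambda$ are all our own choices, not values from the paper.

```python
import itertools
import math

# Two-state reversible chain: the adjoint P' equals P, so P_hat = P^2.
p, q = 0.4, 0.4
P = [[1 - p, p], [q, 1 - q]]
pi = (q / (p + q), p / (p + q))          # stationary distribution
f = (1.0, -1.0)                          # pi-weighted mean zero, |f| <= 1
assert abs(pi[0] * f[0] + pi[1] * f[1]) < 1e-12

gap = 1 - (1 - p - q) ** 2               # eigenvalue gap of P_hat = P^2

q0 = pi                                  # start from stationarity
n_q = math.sqrt(sum((q0[x] / pi[x]) ** 2 for x in (0, 1)))

n, lam = 12, 0.5
exact = 0.0                              # exact tail probability over all 2^n paths
for path in itertools.product((0, 1), repeat=n):
    prob = q0[path[0]]
    for a, b in zip(path, path[1:]):
        prob *= P[a][b]
    if sum(f[s] for s in path) / n >= lam:
        exact += prob

bound = n_q * math.exp(-n * lam ** 2 * gap / 12)
print(exact, bound)                      # the exact tail lies below the bound
```

The enumeration is exact, so no Monte Carlo error enters; the exponential bound is loose at this small horizon, as expected.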
Consider an initial distribution $\textbf{q}^{i,k}$ for channel $k$ of user
$i$. We have:
$\displaystyle
N_{\textbf{q}}^{(i,k)}=\left\|\Big{(}\frac{q_{i,k}^{x}}{\pi_{i,k}^{x}},x\in
X^{i,k}\Big{)}\right\|_{2}\leq\sum\limits_{x\in
X^{i,k}}\left|\frac{q_{i,k}^{x}}{\pi_{i,k}^{x}}\right|\leq\frac{1}{\pi_{\min}}.$
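The inequality chain above (an $\ell_{2}$-to-$\ell_{1}$ norm comparison followed by $\sum_{x}q_{i,k}^{x}=1$) can be spot-checked numerically; the distributions below are randomly drawn for illustration.

```python
import math
import random

# Check ||q/pi||_2 <= sum_x q_x/pi_x <= 1/pi_min for random distributions.
random.seed(0)
results = []
for _ in range(1000):
    k = random.randint(2, 6)                 # number of states
    q = [random.random() for _ in range(k)]
    pi = [random.random() + 0.05 for _ in range(k)]
    sq, spi = sum(q), sum(pi)
    q = [v / sq for v in q]                  # normalise both to distributions
    pi = [v / spi for v in pi]
    l2 = math.sqrt(sum((qx / px) ** 2 for qx, px in zip(q, pi)))
    l1 = sum(qx / px for qx, px in zip(q, pi))
    results.append((l2, l1, 1 / min(pi)))

ok = all(l2 <= l1 + 1e-9 and l1 <= inv + 1e-9 for l2, l1, inv in results)
print(ok)
```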
We point out that the sample rate mean $\bar{s}_{i,k}(t)$ is computed from the
$T^{(O)}_{i,k}(t)$ observations taken only during the DE sub-epochs of the
exploration phases; thus the sample path that generated $\bar{s}_{i,k}(t)$ can
be viewed as a sample path generated by a Markov chain with a transition
matrix identical to that of the original channel $\{i,k\}$, so we can apply
Lezaud’s result to bound (19) and (20). For equation (19),
we define $n_{x}^{i,k}(t)$ to be the number of occurrences of state $x$ on
channel $k$ sensed by user $i$ up to time $t$:
$\Pr\big{(}\bar{s}_{i,k}(t)-\mu_{i,k}>\frac{\epsilon}{4(r_{\max}+2)}\big{)}\\
=\Pr\big{(}\sum\limits_{x\in\mathcal{X}^{i,k}}x\cdot n_{x}^{i,k}(t)-T^{(O)}_{i,k}(t)\sum\limits_{x\in\mathcal{X}^{i,k}}x\cdot\pi_{i,k}^{x}>\frac{T^{(O)}_{i,k}(t)\cdot\epsilon}{4(r_{\max}+2)}\big{)}\\
=\Pr\big{(}\sum\limits_{x\in\mathcal{X}^{i,k}}(x\cdot n_{x}^{i,k}(t)-T^{(O)}_{i,k}(t)\,x\cdot\pi_{i,k}^{x})>\frac{T^{(O)}_{i,k}(t)\cdot\epsilon}{4(r_{\max}+2)}\big{)}\\
\leq\sum\limits_{x\in\mathcal{X}^{i,k}}\Pr\big{(}x\cdot n_{x}^{i,k}(t)-T^{(O)}_{i,k}(t)\,x\cdot\pi_{i,k}^{x}>\frac{T^{(O)}_{i,k}(t)\cdot\epsilon}{4(r_{\max}+2)|\mathcal{X}^{i,k}|}\big{)}\\
=\sum\limits_{x\in\mathcal{X}^{i,k}}\Pr\big{(}n_{x}^{i,k}(t)-T^{(O)}_{i,k}(t)\cdot\pi_{i,k}^{x}>\frac{T^{(O)}_{i,k}(t)\cdot\epsilon}{4(r_{\max}+2)|\mathcal{X}^{i,k}|\cdot x}\big{)}\\
=\sum\limits_{x\in\mathcal{X}^{i,k}}\Pr\bigg{(}\frac{\sum\limits_{n=1}^{t}\textbf{1}(x_{i,k}(n)=x)-T^{(O)}_{i,k}(t)\pi_{i,k}^{x}}{\hat{\pi}_{i,k}^{x}\cdot T^{(O)}_{i,k}(t)}>\frac{\epsilon}{4(r_{\max}+2)|\mathcal{X}^{i,k}|\cdot x\,\hat{\pi}_{i,k}^{x}}\bigg{)}\\
\leq|\mathcal{X}^{i,k}|\cdot N_{\textbf{q}}^{(i,k)}\exp\bigg{(}-T^{(O)}_{i,k}(t)\cdot\frac{\epsilon^{2}}{16(r_{\max}+2)^{2}\cdot x^{2}\cdot|\mathcal{X}^{i,k}|^{2}\cdot(\hat{\pi}_{i,k}^{x})^{2}}\cdot\frac{(1-\lambda_{i,k})}{12}\bigg{)},$
and from (13), we have: $T^{(O)}_{i,k}(t)>\frac{2}{I}\log(t)$ with $I$ defined
in (12). Thus,
$\displaystyle\Pr\big{(}|\bar{s}_{i,k}(t)-\mu_{i,k}|>\frac{\epsilon}{4(r_{\max}+2)}\big{)}\leq\frac{|X_{\max}|}{\pi_{\min}}\cdot
t^{-2+\delta}.$ (21)
The same bound can be obtained for (20), and with the same steps, for all
terms in (18). The proof for all $i\in\mathcal{M},k{\not\in}\mathcal{G}_{i}$
is similar, and thus Lemma 1 follows. $\square$
We now bound the expected regret defined in (3). We divide the time horizon
for $t<T_{1}$ and $t>T_{1}$. Since $T_{1}$ is finite (due to Lemma 1), the
regret for all $t<T_{1}$ results in a constant term $O(1)$ which is
independent of $t$. For $t>T_{1}$, we know that the adaptive exploration
coefficient is no smaller than the deterministic exploration coefficient, and
no larger than $D_{i,k}^{(\max)}$ defined in (16); i.e.,
$D_{i,k}\leq\widehat{D}_{i,k}(t)\leq D_{i,k}^{(\max)},$ (22)
for all $i\in\mathcal{M},k\in\mathcal{G}_{i}$, and the LHS of the inequality
for $i\in\mathcal{M},k\in\mathcal{K}$. Thus, the exploration phases provide
sufficient learning of the channel statistics (and the upper bound ensures
that the channels are judiciously oversampled in the exploration phases).
We continue bounding the regret for $t>T_{1}$:
$\displaystyle\displaystyle
r(t)\leq(t-T_{1})\cdot\sum\limits_{i=1}^{M}\mu_{i,S(i)}-\mathbb{E}[\sum\limits_{n=T_{1}+1}^{t}\sum\limits_{i=1}^{M}X_{i,a_{i}(n)}(n)].$
(23)
For convenience, we will develop (23) between $n=1$ and $t$, with (22) (and the
LHS for $k{\not\in}\mathcal{G}_{i}$) holding for all $1\leq n\leq t$, which
upper bounds (23):
$\vspace{0.0cm}r(t)\leq(t-T_{1})\cdot\sum\limits_{i=1}^{M}\mu_{i,S(i)}-\mathbb{E}[\sum\limits_{n=T_{1}+1}^{t}\sum\limits_{i=1}^{M}X_{i,a_{i}(n)}(n)]$
$\displaystyle\leq
t\cdot\sum\limits_{i=1}^{M}\mu_{i,S(i)}-\mathbb{E}[\sum\limits_{n=1}^{t}\sum\limits_{i=1}^{M}X_{i,a_{i}(n)}(n)].$
(24)
We can rewrite (24) as:
$\displaystyle r(t)$
$\displaystyle\leq\sum\limits_{i=1}^{M}\sum\limits_{k=1}^{K}\big{(}\mu_{i,k}\cdot
E[T_{i,k}(t)]-E[\sum\limits_{n=1}^{t}X_{i,k}(n)]\big{)}$ (25)
$\displaystyle+\big{(}t\cdot\sum\limits_{i=1}^{M}\mu_{i,S(i)}-\sum\limits_{i=1}^{M}\sum\limits_{k=1}^{K}\mu_{i,k}\cdot
E[T_{i,k}(t)]\big{)},$ (26)
where $T_{i,k}(t)$ is the total number of transmissions by user $i$ on channel
$k$ up to time $t$ (and $X_{i,k}(n)=0$ if user $i$ did not try to access
channel $k$ at time $n$).
Equation (25) can be considered as the regret due to the transient effect (the
initial state of the channel may not be given by the stationary distribution),
and (26) is the regret caused by not playing the stable matching allocation.
Both (25) and (26) can be thought of as the sum of three different regret
terms, corresponding to the three phases described in Section III. We denote
by $r^{O}(t),r^{A}(t),r^{I}(t)$ the regret caused in the exploration,
allocation and exploitation phases respectively; i.e., the regret can be
written as:
$\displaystyle r(t)=r^{O}(t)+r^{A}(t)+r^{I}(t).$ (27)
We next bound the regret in each of the three phases.
Regret in the exploration phases:
To bound the regret in the exploration phases, we first bound the number of
exploration phases $n_{O}^{i,k}(t)$ for each user $i\in\mathcal{M}$ on each
channel $k\in\mathcal{K}$ by time $t$. As described in Section (III-A), the
total number of samples from the exploration phases in sub epochs DE for user
$i$ on channel $k$ up to time $t$ is:
$\displaystyle
T_{i,k}^{(O)}(t)=\sum\limits_{n=1}^{n_{O}^{i,k}(t)}4^{n-1}=\frac{1}{3}(4^{n_{O}^{i,k}(t)}-1)$.
Since we are in an exploration phase, from (13) together with (22), we have
$T_{i,k}^{(O)}(t)<A_{i,k}\cdot\log(t)$ ($A_{i,k}$ is defined in (15)). Hence,
$\begin{array}[]{l}n_{O}^{i,k}(t)\leq\lfloor\log_{4}(3A_{i,k}\log(t)+1)\rfloor+1.\end{array}$
(28)
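The counting argument behind (28) is easy to verify numerically. The sketch below (with an illustrative grid of values for $A_{i,k}$ and $t$, not taken from the paper) finds the largest number of completed exploration phases consistent with the sampling budget $T_{i,k}^{(O)}(t)=\frac{1}{3}(4^{n}-1)<A_{i,k}\log(t)$ and checks it against the bound.

```python
import math

# While T^(O) = (4^n - 1)/3 stays below A*log(t), another exploration phase
# fits; the resulting count n must obey bound (28).
ok = True
for A in (0.5, 1.0, 3.7, 20.0):          # illustrative values of A_{i,k}
    for t in (10, 100, 10_000, 10 ** 7):
        budget = A * math.log(t)
        n = 0
        while (4 ** (n + 1) - 1) / 3 < budget:   # add phases while under budget
            n += 1
        cap = math.floor(math.log(3 * budget + 1, 4)) + 1   # RHS of (28)
        ok = ok and n <= cap
print(ok)
```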
We use the following lemma to show that the regret caused by channel switching
is upper bounded by a constant independent of the number of transmissions on
the channel in each phase.
###### Lemma 3 ([48])
Consider an irreducible, aperiodic Markov chain with state space $S$, a matrix
of transition probabilities $P$, an initial distribution $\overrightarrow{q}$
which is positive in all states, and stationary distribution
$\overrightarrow{\pi}$ ($\pi_{s}$ is the stationary probability of state $s$).
The state (reward) at time $t$ is denoted by $s(t)$. Let $\mu$ denote the mean
reward. If we play the chain for an arbitrary time $T$, then there exists a
value $A_{p}\leq(\min_{s\in S}\pi_{s})^{-1}\sum\limits_{s\in S}s$, such that:
$E[\sum\limits_{t=1}^{T}s(t)-\mu T]\leq A_{p}$.
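Lemma 3 can be checked exactly for a small chain by propagating the state distribution and accumulating the expected reward gap; the chain, the rewards, and the horizon below are illustrative choices of ours, not values from the paper.

```python
# Exact computation of E[sum_{t=1}^T s(t) - mu*T] for a two-state chain by
# propagating the state distribution; the gap must stay below A_p for every T.
rewards = (0.2, 1.0)                     # the state value doubles as the reward
P = [[0.9, 0.1], [0.3, 0.7]]
pi = (0.75, 0.25)                        # stationary distribution of P
mu = sum(p_s * s for p_s, s in zip(pi, rewards))
A_p = (1 / min(pi)) * sum(rewards)       # upper bound on A_p from Lemma 3

d = [0.05, 0.95]                         # initial distribution, positive everywhere
gap, worst = 0.0, 0.0
for _ in range(200):                     # horizons T = 1, ..., 200
    gap += sum(dx * s for dx, s in zip(d, rewards)) - mu
    worst = max(worst, gap)
    d = [d[0] * P[0][0] + d[1] * P[1][0],
         d[0] * P[0][1] + d[1] * P[1][1]]
print(worst, A_p)
```

Starting the chain near the high-reward state makes the transient gap as large as possible, yet it still settles well below $A_{p}$.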
Lemma 3 bounds the expected deviation of the accumulated reward from its
stationary mean (which we refer to as the transient effect). By
the construction of the exploration phases described in Section (III-A), in
each exploration phase there is no channel switching (each channel has its own
unique exploration phases); therefore, (25) in the exploration phases is
bounded by:
$\begin{array}[]{l}A_{\max}\cdot\big{(}\sum\limits_{i=1}^{M}\sum\limits_{k=1}^{K}(\lfloor\log_{4}(3A_{i,k}\log(t)+1)\rfloor+1)\big{)}.\end{array}$
(29)
We next bound (26) in the exploration phases. Note that each user has its own
exploration time, independent of the other users; i.e., when user $i$
explores, the other users (for which condition (13) holds) continue to
exploit. However, user $i$’s exploration may affect other users exploring
during that time due to collisions. Specifically, when user $i$ explores
channel $k$ it affects the regret in two ways. First, user $i$ does not
transmit on its stable channel; hence, the regret is increased by
$\mu_{i,S(i)}-\mu_{i,k}$. Second, if $k$ is the stable channel of another user,
then because of the collision, the regret will increase by $\mu_{S^{-1}(k),k}$
($S^{-1}(k)$ is the user for which channel $k$ is the stable channel).
Combining these two terms, we bound (26) in exploration phases by:
$\begin{array}[]{l}\sum\limits_{i=1}^{M}\sum\limits_{k=1}^{K}\bigg{(}E[N_{i,k}^{(O)}(t)]\cdot(\mu_{i,S(i)}+\mu_{S^{-1}(k),k}-\mu_{i,k})\bigg{)},\end{array}$
(30)
where $N_{i,k}^{(O)}(t)$ counts the time indices from the RE and DE sub-epochs,
and depends on the mean hitting time of the channel due to the regenerative
cycles. With (28) we have:
$E[N_{i,k}^{(O)}(t)]\leq\sum\limits_{n=0}^{n_{O}^{i,k}-1}(4^{n}+M^{i,k}_{\max})\\
=\frac{1}{3}(4^{n_{O}^{i,k}(t)}-1)+M^{i,k}_{\max}\cdot n_{O}^{i,k}(t)$
$\begin{array}[]{l}\leq\frac{1}{3}[4(3A_{i,k}\cdot\log(t)+1)-1]\\
+M^{i,k}_{\max}\cdot\log_{4}(3A_{i,k}\log(t)+1).\end{array}$
(31)
Combining (29) and (30) we can bound the first term in (27):
$\begin{array}[]{l}\vspace{0.0cm}r^{O}(t)\leq
A_{\max}\cdot\big{(}\sum\limits_{i=1}^{M}\sum\limits_{k=1}^{K}(\lfloor\log_{4}(3A_{i,k}\log(t)+1)\rfloor+1)\big{)}\\\
+\vspace{0.0cm}\sum\limits_{i=1}^{M}\sum\limits_{k=1}^{K}\bigg{(}E[N_{i,k}^{(O)}(t)]\cdot(\mu_{i,S(i)}+\mu_{S^{-1}(k),k}-\mu_{i,k})\bigg{)},\end{array}$
(32)
which coincides with the first and second terms on the RHS of (14).
Regret in the allocation phases:
Since an allocation phase will only come after an exploration phase, the
number of allocation phases by time $t$, $n_{A}(t)$ is bounded by the total
number of exploration phases by time $t$; i.e.,
$\vspace{0.0cm}n_{A}(t)\leq\sum\limits_{i=1}^{M}\sum\limits_{k=1}^{K}n_{O}^{i,k}(t),$
and by using (28) we have:
$\begin{array}[]{l}n_{A}(t)\leq\sum\limits_{i=1}^{M}\sum\limits_{k=1}^{K}\lfloor\log_{4}(3A_{i,k}\log(t)+1)\rfloor+1.\end{array}$
(33)
Since the expected rates are unknown in our setting, the allocation phase is
executed using the sample means. To bound the expected time required for each
allocation phase, we use Proposition VI.4 in [6]:
###### Lemma 4 ([6])
Denote the expected delay to reach a stable matching configuration by $T_{M}$.
There is some constant $C$ s.t. for every $M$ we have:
$T_{M}\leq C\log(M+1).$
Specifically, it was shown in [6] that it is sufficient to choose $C=2e$ for
the bound to hold.
Lemma 4 states that each allocation phase is finite with respect to $t$, and
depends only on the number of users. The total time in allocation phases by
time $t$, denoted by $T_{A}(t)$, can be bounded by combining (33) with Lemma
4:
$\begin{array}[]{l}\vspace{0.3cm}E[T_{A}(t)]\leq\big{(}2C\log(M+1)\big{)}\\\
\vspace{0.0cm}\hskip
8.5359pt\cdot\big{(}\sum\limits_{i=1}^{M}\sum\limits_{k=1}^{K}\lfloor\log_{4}(3A_{i,k}\log(t)+1)\rfloor+1\big{)},\end{array}$
(34)
with $\displaystyle C=2e$.
We now bound (25) and (26) for the allocation phases. In each allocation
phase, the maximum number of channel switchings is $M\cdot M$; thus, the
regret caused by the transient effect is bounded by:
$\begin{array}[]{l}A_{\max}\cdot
M^{2}\cdot\big{(}\sum\limits_{i=1}^{M}\sum\limits_{k=1}^{K}(\lfloor\log_{4}(3A_{i,k}\log(t)+1)\rfloor+1)\big{)}.\end{array}$
(35)
The regret due to sub-optimal allocation can be bounded by:
$\begin{array}[]{l}E[T_{A}(t)]\cdot\big{(}\sum\limits_{i=1}^{M}\mu_{i,S(i)}\big{)}.\end{array}$
(36)
Combining (35) and (36), we have:
$\begin{array}[]{l}\vspace{0.3cm}r^{A}(t)\leq A_{\max}\cdot
M^{2}\cdot\big{(}\sum\limits_{i=1}^{M}\sum\limits_{k=1}^{K}(\lfloor\log_{4}(3A_{i,k}\log(t)+1)\rfloor+1)\big{)}\\\
\vspace{0.3cm}+\big{[}\big{(}C\log(M+1)\big{)}\cdot\big{(}\sum\limits_{i=1}^{M}\sum\limits_{k=1}^{K}\lfloor\log_{4}(3A_{i,k}\log(t)+1)\rfloor+1\big{)}\big{]}\\\
\vspace{0.0cm}\hskip
56.9055pt\cdot\big{(}\sum\limits_{i=1}^{M}\mu_{i,S(i)}\big{)},\end{array}$
(37)
which coincides with the third and fourth terms on the RHS of (14).
Regret in the exploitation phases:
We first bound the number of exploitation phases up to time $t$. As described
in Section III-C, the number of time slots in the $n^{th}$ exploitation phase
is $2\cdot 4^{(n-1)}$. Thus we have:
$\sum\limits_{n=1}^{n_{I}(t)}2\cdot 4^{n-1}=\frac{2}{3}(4^{n_{I}}-1)\leq t,$
which implies
$\begin{array}[]{l}n_{I}\leq\lceil\log_{4}(\frac{3}{2}t+1)\rceil.\end{array}$
(38)
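The phase-counting step behind (38) can be reproduced numerically. In the sketch below (the horizon values are illustrative), whole exploitation phases of length $2\cdot 4^{n-1}$ are packed into $t$ slots and the resulting count is compared with the stated ceiling.

```python
import math

# The n-th exploitation phase lasts 2*4^(n-1) slots; counting how many whole
# phases fit into a horizon of t slots reproduces bound (38).
ok = True
for t in (1, 5, 42, 1000, 10 ** 6):
    n, used = 0, 0
    while used + 2 * 4 ** n <= t:        # the next phase has 2*4^n slots
        used += 2 * 4 ** n
        n += 1
    cap = math.ceil(math.log(1.5 * t + 1, 4))   # RHS of (38)
    ok = ok and n <= cap
print(ok)
```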
During the exploitation phases, there are no channel switchings (each user
exploits its stable channel). As a result, the regret caused by the transient
effect in the exploitation phases is upper bounded by:
$\begin{array}[]{l}A_{\max}\cdot\lceil\log_{4}(\frac{3}{2}t+1)\rceil.\end{array}$
(39)
It remains to bound the regret as a result of not playing the stable matching
allocation (which we refer to as a sub-optimal allocation) in the exploitation
phases. The event of playing a sub-optimal allocation in an exploitation phase
occurs if the previous allocation phase resulted in a sub-optimal allocation,
which happens if one of the following takes place. The first is that user $i$
did not correctly identify the order of its $M$ best channels entering the
allocation phase; this event is denoted by $Y_{i}$. The second is that the
user with the highest expected rate on channel $k$ was not identified
correctly in the allocation phase; this event is denoted by $Z_{k}$. We write
these events explicitly:
$\displaystyle
Y_{i}(t_{n})=\bigcup\limits_{k\in\mathcal{M}_{i}}\bigcup\limits_{l\in\mathcal{K}}\big\{\bar{s}_{i,k}(t_{n})<\bar{s}_{i,l}(t_{n})\,|\,\mu_{i,k}>\mu_{i,l}\big\}$
$\displaystyle
Z_{k}(t_{n})=\bigcup\limits_{j\in\mathcal{T}_{k}}\big\{\bar{s}_{i,k}(t_{n})<\bar{s}_{j,k}(t_{n})\,|\,\mu_{i,k}=\max_{l\in\mathcal{T}_{k}}\mu_{l,k}\big\},$
where $t_{n}$ denotes the starting time of the $n^{th}$ exploitation phase.
Based on the above notations, the probability for a sub-optimal allocation
($P_{S}(n)$) in an exploitation phase at time $t_{n}$ is given by:
$\displaystyle
P_{S}(n)\triangleq\Pr\big{(}\bigcup\limits_{i\in\mathcal{M}}Y_{i}(t_{n})\mbox{\;or\;}\bigcup\limits_{k\in\mathcal{K}}Z_{k}(t_{n})\big{)}.$
The number of time slots in a sub-optimal allocation in the exploitation
phases can be written as:
$\displaystyle
E[\tilde{T}(t)]=\sum\limits_{n=1}^{n_{I}(t)}2\cdot 4^{n-1}\cdot
P_{S}(n)\leq\sum\limits_{n=1}^{\lceil\log_{4}(\frac{3}{2}t+1)\rceil}2\cdot
4^{n-1}\cdot P_{S}(n)$
$\displaystyle\leq\sum\limits_{n=1}^{\lceil\log_{4}(\frac{3}{2}t+1)\rceil}3t_{n}\cdot
P_{S}(n).$ (40)
To complete Theorem 1, we need to show that:
$\displaystyle\displaystyle
P_{S}(n)=\Pr\big{(}\bigcup\limits_{i\in\mathcal{M}}Y_{i}(t_{n})\mbox{\;or\;}\bigcup\limits_{k\in\mathcal{K}}Z_{k}(t_{n})\big{)}\leq
B\cdot t_{n}^{-1},$ (41)
for some $B>0$ (there is only a logarithmic number of terms in (40)). Using
union bounds we have:
$\displaystyle\Pr\big{(}\bigcup\limits_{i\in\mathcal{M}}Y_{i}(t_{n})\mbox{\;or\;}\bigcup\limits_{k\in\mathcal{K}}Z_{k}(t_{n})\big{)}$
$\displaystyle\leq
M^{2}K\cdot\Pr\big{(}\bar{s}_{i,k}(t_{n})<\bar{s}_{i,l}(t_{n})\,|\,\mu_{i,k}>\mu_{i,l}\big{)}$
(42)
$\displaystyle+MK\cdot\Pr\big{(}\bar{s}_{i,k}(t_{n})<\bar{s}_{j,k}(t_{n})\,|\,\mu_{i,k}=\max_{l\in\mathcal{T}_{k}}\mu_{l,k}\big{)}.$
(43)
To bound (42) and (43), we define $C_{t,v}=\sqrt{L\log(t)/v}$. The event in
(42) implies that at least one of the following must hold:
$\displaystyle\bar{s}_{i,k}(t_{n})\leq\mu_{i,k}-C_{t_{n},T_{i,k}^{(O)}}$ (44)
$\displaystyle\bar{s}_{i,l}(t_{n})\geq\mu_{i,l}+C_{t_{n},T_{i,l}^{(O)}}$ (45)
$\displaystyle\mu_{i,k}<\mu_{i,l}+C_{t_{n},T_{i,l}^{(O)}}+C_{t_{n},T_{i,k}^{(O)}}.$
(46)
First, we show that the probability of event (46) is zero.
$\Pr\big{(}\mu_{i,k}<\mu_{i,l}+C_{t_{n},T_{i,l}^{(O)}}+C_{t_{n},T_{i,k}^{(O)}}\big{)}\\
\displaystyle=\Pr\bigg{(}\mu_{i,k}-\mu_{i,l}<\sqrt{\frac{L\log
t_{n}}{T_{i,l}^{(O)}(t_{n})}}+\sqrt{\frac{L\log
t_{n}}{T_{i,k}^{(O)}(t_{n})}}\bigg{)}\\
\displaystyle\leq\Pr\bigg{(}\mu_{i,k}-\mu_{i,l}<2\sqrt{\frac{L\log
t_{n}}{\min\left\{T_{i,k}^{(O)}(t_{n}),T_{i,l}^{(O)}(t_{n})\right\}}}\bigg{)}\\
\displaystyle\leq\Pr\bigg{(}\min\left\{T_{i,k}^{(O)}(t_{n}),T_{i,l}^{(O)}(t_{n})\right\}<\frac{4L}{(\mu_{i,k}-\mu_{i,l})^{2}}\log(t_{n})\bigg{)}.$
Combining (22) with (13) (which holds since we started an allocation phase),
we have:
$\displaystyle
T_{i,k}^{(O)}(t_{n})>\frac{4L}{\displaystyle\min_{\ell\neq
k}\{(\mu_{i,k}-\mu_{i,\ell})^{2}\}}\log(t_{n})\geq\frac{4L}{(\mu_{i,k}-\mu_{i,l})^{2}}\log(t_{n})\\
\displaystyle
T_{i,l}^{(O)}(t_{n})>\frac{4L}{\displaystyle\min_{j\neq\ell}\{(\mu_{i,l}-\mu_{i,j})^{2}\}}\log(t_{n})\geq\frac{4L}{(\mu_{i,k}-\mu_{i,l})^{2}}\log(t_{n}),$
which ensures that the probability of (46) is zero. Note that here we used the
fact that $D_{i,k}\geq D_{i,k}^{(R)}.$
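The threshold argument can be illustrated numerically: once both observation counts clear $\frac{4L}{\Delta^{2}}\log t_{n}$ with $\Delta=\mu_{i,k}-\mu_{i,l}$, the sum of the two confidence radii is at most $\Delta$, so event (46) cannot occur. The constants $L$, $\Delta$, and the grids below are illustrative choices.

```python
import math

# With C_{t,v} = sqrt(L log t / v), counts at or above (4L/Delta^2) log t
# force C_{t,T_k} + C_{t,T_l} <= Delta, making event (46) impossible.
L, delta = 2.0, 0.3
ok = True
for t in (10, 1000, 10 ** 6):
    T_threshold = 4 * L * math.log(t) / delta ** 2
    for scale in (1.0, 1.7, 10.0):       # counts at or above the threshold
        T_k = T_l = scale * T_threshold
        radii = (math.sqrt(L * math.log(t) / T_k)
                 + math.sqrt(L * math.log(t) / T_l))
        ok = ok and radii <= delta + 1e-12
print(ok)
```

At exactly the threshold the two radii each equal $\Delta/2$, which is why the factor $4L$ appears in the exploration coefficient.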
We now bound (44) and (45) using Lezaud’s result (Lemma 2). With similar steps
as used above to bound (19), we can show:
$\displaystyle\Pr\big{(}\bar{s}_{i,k}(t_{n})\leq\mu_{i,k}-C_{t_{n},v_{i,k}}\big{)}\leq\frac{|\mathcal{X}^{i,k}|}{\pi_{\min}}t^{-\frac{L\bar{\lambda}_{\min}}{28X_{\max}^{2}r_{\max}^{2}\hat{\pi}_{\max}^{2}}}$
(47)
$\displaystyle\Pr\big{(}\bar{s}_{i,l}(t_{n})\geq\mu_{i,l}+C_{t_{n},v_{i,l}}\big{)}\leq\frac{|\mathcal{X}^{i,l}|}{\pi_{\min}}t^{-\frac{L\bar{\lambda}_{\min}}{28X_{\max}^{2}r_{\max}^{2}\hat{\pi}_{\max}^{2}}}.$
(48)
Using (1), (42) is bounded by:
$\displaystyle
M^{2}K\cdot\Pr\big{(}\bar{s}_{i,k}(t_{n})<\bar{s}_{i,l}(t_{n})|\mu_{i,k}>\mu_{i,l}\big{)}$
$\displaystyle\vspace{0.0cm}\leq$ $\displaystyle
M^{2}K\cdot\frac{2X_{\max}}{\pi_{\min}}\cdot t^{-1}.$ (49)
Equation (43) can be bounded using similar techniques, this time using the
fact that $D_{i,k}\geq D_{i,k}^{(C)}$, and we can bound (41):
$\displaystyle\displaystyle\Pr\big{(}\bigcup\limits_{i\in\mathcal{M}}Y_{i}(t_{n})\mbox{\;or\;}\bigcup\limits_{k\in\mathcal{K}}Z_{k}(t_{n})\big{)}\leq(M^{2}K+MK)\frac{2X_{\max}}{\pi_{\min}}\cdot
t^{-1}.$ (50)
With (50) we can bound (40), and therefore the regret due to sub-optimal
allocation in the exploitation phases is bounded by:
$\begin{array}[]{l}\displaystyle
3\big{(}\sum\limits_{i=1}^{M}\mu_{i,S(i)}\big{)}(M^{2}K+MK)\frac{2X_{\max}}{\pi_{\min}}\cdot\lceil\log_{4}(\frac{3}{2}t+1)\rceil.\end{array}$
(51)
By combining (51) with (39), the total regret in the exploitation phases is:
$\begin{array}[]{l}\vspace{0.0cm}\displaystyle r^{I}(t)\leq
A_{\max}\cdot\lceil\log_{4}(\frac{3}{2}t+1)\rceil\\\ \vspace{0.0cm}\hskip
22.76228pt\displaystyle+3\big{(}\sum\limits_{i=1}^{M}\mu_{i,S(i)}\big{)}(M^{2}K+MK)\frac{2X_{\max}}{\pi_{\min}}\cdot\lceil\log_{4}(\frac{3}{2}t+1)\rceil,\end{array}$
(52)
which coincides with the last two terms on the RHS of (14).
## References
* [1] T. Gafni and K. Cohen, “A distributed stable strategy learning algorithm for multi-user dynamic spectrum access,” in 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 347–351, 2019.
* [2] H. S. Wang and N. Moayeri, “Finite-state Markov channel: A useful model for radio communication channels,” IEEE Transactions on Vehicular Technology, vol. 44, no. 1, pp. 163–171, 1995.
* [3] P. Sadeghi, R. A. Kennedy, P. B. Rapajic, and R. Shams, “Finite-state Markov modeling of fading channels: A survey of principles and applications,” IEEE Signal Processing Magazine, vol. 25, no. 5, pp. 57–80, 2008.
* [4] Q. Zhao and B. Sadler, “A survey of dynamic spectrum access,” IEEE Signal Processing Magazine, vol. 24, no. 3, pp. 79–89, 2007.
* [5] N. Slamnik-Kriještorac, H. Kremo, M. Ruffini, and J. M. Marquez-Barja, “Sharing distributed and heterogeneous resources toward end-to-end 5g networks: A comprehensive survey and a taxonomy,” IEEE Communications Surveys & Tutorials, vol. 22, no. 3, pp. 1592–1628, 2020.
* [6] A. Leshem, E. Zehavi, and Y. Yaffe, “Multichannel opportunistic carrier sensing for stable channel access control in cognitive radio systems,” IEEE Journal on Selected Areas in Communications, vol. 30, no. 1, pp. 82–95, 2012.
* [7] D. Kalathil, N. Nayyar, and R. Jain, “Decentralized learning for multiplayer multiarmed bandits,” IEEE Transactions on Information Theory, vol. 60, no. 4, pp. 2331–2345, 2014.
* [8] N. Nayyar, D. Kalathil, and R. Jain, “On regret-optimal learning in decentralized multiplayer multiarmed bandits,” IEEE Transactions on Control of Network Systems, vol. 5, no. 1, pp. 597–606, 2016.
* [9] D. P. Bertsekas, “The auction algorithm: A distributed relaxation method for the assignment problem,” Annals of Operations Research, vol. 14, no. 1, pp. 105–123, 1988.
* [10] O. Avner and S. Mannor, “Multi-user lax communications: a multi-armed bandit approach,” in IEEE INFOCOM 2016-The 35th Annual IEEE International Conference on Computer Communications, pp. 1–9, IEEE, 2016.
* [11] I. Bistritz and A. Leshem, “Distributed multi-player bandits-a game of thrones approach,” in Advances in Neural Information Processing Systems, pp. 7222–7232, 2018.
* [12] E. Boursier, V. Perchet, E. Kaufmann, and A. Mehrabian, “A Practical Algorithm for Multiplayer Bandits when Arm Means Vary Among Players,” arXiv e-prints, p. arXiv:1902.01239, Feb 2019.
* [13] H. Liu, K. Liu, and Q. Zhao, “Learning in a changing world: Restless multiarmed bandit with unknown dynamics,” IEEE Transactions on Information Theory, vol. 59, no. 3, pp. 1902–1916, 2013.
* [14] C. Tekin and M. Liu, “Online learning of rested and restless bandits,” IEEE Transactions on Information Theory, vol. 58, no. 8, pp. 5588–5611, 2012.
* [15] H. Liu, K. Liu, and Q. Zhao, “Learning in a changing world: Restless multiarmed bandit with unknown dynamics,” IEEE Transactions on Information Theory, vol. 59, no. 3, pp. 1902–1916, 2012.
* [16] T. Gafni and K. Cohen, “Learning in restless multi-armed bandits using adaptive arm sequencing rules,” in Proc. of the IEEE International Symposium on Information Theory (ISIT), pp. 1206–1210, Jun. 2018.
* [17] Z. Han, Z. Ji, and K. R. Liu, “Fair multiuser channel allocation for OFDMA networks using Nash bargaining solutions and coalitions,” IEEE Transactions on Communications, vol. 53, no. 8, pp. 1366–1376, 2005.
* [18] I. Menache and N. Shimkin, “Rate-based equilibria in collision channels with fading,” IEEE Journal on Selected Areas in Communications, vol. 26, no. 7, pp. 1070–1077, 2008.
* [19] U. O. Candogan, I. Menache, A. Ozdaglar, and P. A. Parrilo, “Competitive scheduling in wireless collision channels with correlated channel state,” in Game Theory for Networks, 2009. GameNets’ 09. International Conference on, pp. 621–630, 2009.
* [20] I. Menache and A. Ozdaglar, “Network games: Theory, models, and dynamics,” Synthesis Lectures on Communication Networks, vol. 4, no. 1, pp. 1–159, 2011.
* [21] L. M. Law, J. Huang, and M. Liu, “Price of anarchy for congestion games in cognitive radio networks,” IEEE Transactions on Wireless Communications, vol. 11, no. 10, pp. 3778–3787, 2012.
* [22] K. Cohen, A. Leshem, and E. Zehavi, “Game theoretic aspects of the multi-channel ALOHA protocol in cognitive radio networks,” IEEE Journal on Selected Areas in Communications, vol. 31, pp. 2276–2288, 2013.
* [23] H. Wu, C. Zhu, R. J. La, X. Liu, and Y. Zhang, “Fasa: Accelerated S-ALOHA using access history for event-driven M2M communications,” IEEE/ACM Transactions on Networking (TON), vol. 21, no. 6, pp. 1904–1917, 2013.
* [24] C. Singh, A. Kumar, and R. Sundaresan, “Combined base station association and power control in multichannel cellular networks,” IEEE/ACM Transactions on Networking, vol. 24, no. 2, pp. 1065–1080, 2016.
* [25] K. Cohen and A. Leshem, “Distributed game-theoretic optimization and management of multichannel aloha networks,” IEEE/ACM Transactions on Networking, vol. 24, no. 3, pp. 1718–1731, 2016.
* [26] K. Cohen, A. Nedić, and R. Srikant, “Distributed learning algorithms for spectrum sharing in spatial random access wireless networks,” IEEE Transactions on Automatic Control, vol. 62, no. 6, pp. 2854–2869, 2017.
* [27] D. Malachi and K. Cohen, “Queue and channel-based aloha algorithm in multichannel wireless networks,” IEEE Wireless Communications Letters, vol. 9, no. 8, pp. 1309–1313, 2020.
* [28] M. Yemini, A. Leshem, and A. Somekh-Baruch, “Restless hidden markov bandits with linear rewards,” arXiv preprint arXiv:1910.10271, 2019.
* [29] W. Wang and X. Liu, “List-coloring based channel allocation for open-spectrum wireless network,” In proc. of IEEE Vehic. Tech. Conf., 2005.
* [30] J. Wang, Y. Huang, and H. Jiang, “Improved algorithm of spectrum allocation based on graph coloring model in cognitive radio,” in WRI International Conference on Communications and Mobile Computing, vol. 3, pp. 353–357, 2009.
* [31] A. Checco and D. Leith, “Learning-based constraint satisfaction with sensing restrictions,” IEEE Journal of Selected Topics in Signal Processing, vol. 7, pp. 811–820, Oct 2013.
* [32] A. Checco and D. J. Leith, “Fast, responsive decentralised graph colouring,” arXiv preprint arXiv:1405.6987, 2014.
* [33] H. Cao and J. Cai, “Distributed opportunistic spectrum access in an unknown and dynamic environment: A stochastic learning approach,” IEEE Transactions on Vehicular Technology, vol. 67, no. 5, pp. 4454–4465, 2018.
* [34] A. Leshem and E. Zehavi, “Bargaining over the interference channel,” in IEEE International Symposium on Information Theory, pp. 2225–2229, 2006.
* [35] I. Bistritz and A. Leshem, “Approximate best-response dynamics in random interference games,” IEEE Transactions on Automatic Control, vol. 63, no. 6, pp. 1549–1562, 2018.
* [36] O. Naparstek and K. Cohen, “Deep multi-user reinforcement learning for dynamic spectrum access in multichannel wireless networks,” in IEEE Global Communications Conference (GLOBECOM), pp. 1–7, 2017.
* [37] O. Naparstek and K. Cohen, “Deep multi-user reinforcement learning for distributed dynamic spectrum access,” IEEE Transactions on Wireless Communications, vol. 18, no. 1, pp. 310–323, 2019.
* [38] D. Livne and K. Cohen, “PoPS: Policy Pruning and Shrinking for deep reinforcement learning,” IEEE Journal of Selected Topics in Signal Processing, vol. 14, no. 4, pp. 789–801, 2020.
* [39] T. Sery and K. Cohen, “On analog gradient descent learning over multiple access fading channels,” IEEE Transactions on Signal Processing, vol. 68, pp. 2897–2911, 2020.
* [40] K. Cohen and D. Malachi, “A time-varying opportunistic multiple access for delay-sensitive inference in wireless sensor networks,” IEEE Access, vol. 7, pp. 170475–170487, 2019.
* [41] O. Naparstek and A. Leshem, “Fully distributed optimal channel assignment for open spectrum access,” IEEE Transactions on Signal Processing, vol. 62, no. 2, pp. 283–294, 2013.
* [42] C. H. Papadimitriou and J. N. Tsitsiklis, “The complexity of optimal queuing network control,” Mathematics of Operations Research, vol. 24, no. 2, pp. 293–305, 1999.
* [43] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire, “The nonstochastic multiarmed bandit problem,” SIAM Journal on Computing, vol. 32, no. 1, pp. 48–77, 2002.
* [44] A. Lesage-Landry and J. A. Taylor, “The multi-armed bandit with stochastic plays,” IEEE Transactions on Automatic Control, vol. 63, no. 7, pp. 2280–2286, 2017.
* [45] P. Reverdy, V. Srivastava, and N. E. Leonard, “Satisficing in multi-armed bandit problems,” IEEE Transactions on Automatic Control, vol. 62, no. 8, pp. 3788–3803, 2016.
* [46] Q. Zhao and L. Tong, “Opportunistic carrier sensing for energy-efficient information retrieval in sensor networks,” EURASIP Journal on Wireless Communications and Networking, vol. 2005, no. 2, pp. 231–241, 2005.
* [47] P. Lezaud, “Chernoff-type bound for finite Markov chains,” Annals of Applied Probability, pp. 849–867, 1998.
* [48] V. Anantharam, P. Varaiya, and J. Walrand, “Asymptotically efficient allocation rules for the multiarmed bandit problem with multiple plays, Part II: Markovian rewards,” IEEE Transactions on Automatic Control, vol. 32, no. 11, pp. 977–982, 1987.
# Dense Suspension Flow in a Penny-Shaped Crack
Part I : Theory
George R. Wyatt$^{1}$ & Herbert E. Huppert$^{2}$
$^{1}$Emmanuel College, St. Andrew’s Street, Cambridge, CB2 3AP<EMAIL_ADDRESS>
$^{2}$King’s College, King’s Parade, Cambridge, CB2 1ST<EMAIL_ADDRESS>
###### Abstract.
We study the dynamics of proppants carried by fluid driven into an evolving
penny-shaped fracture. The behaviour of the slurry flow is investigated in two
phases: pressurised injection and elastic closure. During injection the slurry
is modelled using a frictional rheology that takes into account the shear-
induced migration and jamming of the proppants. Making pragmatic assumptions
of negligible toughness and cross-fracture fluid slip, we find self-similar
solutions supporting a range of proppant concentration profiles. In
particular, we define an effective viscosity, which equates the fracture
evolution of a slurry flow with a given proppant volume fraction, to a
Newtonian flow with a particular viscosity. Using this framework, we are able
to make predictions about the geometry of the growing fracture and the
significance of tip screen-out. In the closure phase, proppants are modelled
as incompressible and radially immobile within the narrowing fracture. The
effects of proppant concentration on the geometry of the residual propped
fracture are explored in full. The results have important applications to
industrial fracking and geological dike formation by hot, intruding magma.
###### Key words and phrases:
Hydraulic fracture, suspension flow, rheology, proppant transport, elastic,
tip screen-out, penny-shaped, cavity flow
## 1\. Introduction
Receiving a patent for his ‘exploding torpedo’ in 1865, US Civil War veteran
Col. Edward Roberts established the practice of fracturing bedrock to
stimulate oil wells [1]. The technique known as hydraulic fracturing, which
uses pressurised fluid rather than explosives to develop fracture networks,
only came into practice much later, in 1947 [2], and is the topic of this
paper. In particular, we will concentrate on the convective transport of
proppants within an evolving cavity. These are small particles added to the
fracturing fluid in order to prop open the developed fracture, which closes
under far-field stress once the fluid pressure is released. Aside from its use
in hydrocarbon recovery, hydraulic fracturing, or fracking, has uses including
the measurement of in-situ stresses in rocks [3], generation of electricity in
enhanced geothermal systems [4] and improvement of injection rates in CO2
sequestration [5]. Hydraulic fracturing processes are also ubiquitous in
geology: dikes and sills arise from cracks whose growth is driven by magma,
with magmatic crystals taking the place of synthetic proppants. Phenomena such
as crystallisation and gas exsolution in the cooling magma mean models of dike
propagation vary widely, as is summarised in [6]. Notably, Petford & Koenders
[7] utilise granular flow theory to model the ascent of a granitic melt
containing solids.
This paper combines two significant, but often disconnected, fields of
fracking study, cavity flow and suspension flow:
* •
The study of (elastohydrodynamic) cavity flow focusses on the interplay
between hydrodynamic properties of the fracturing fluid and material
properties of the medium being fractured. In the zero-proppant case, the
problem of a fluid-driven, penny-shaped crack requires the joint solution of a
nonlinear Reynolds equation, which governs flow within the crack, and a
singular integral boundary condition, which takes into account the elastic
properties of the surrounding medium. The general strategy used in this paper
takes inspiration from the work of Spence & Sharp [8], who in 1985,
restricting to the two-dimensional case, were the first to solve these
integro-differential equations. In particular, we will focus on cavities that
keep the same shape in some evolving coordinate system, using series
expansions to represent both the width and pressure profiles within the
fracture. More recently, in 2002, Savitski & Detournay [9] solved similar
three-dimensional versions of these equations, allowing them to find fracture
evolutions with simple time dependence in both the viscous and toughness
dominated regimes. In the former, the principal energy dissipation is by
viscous flow, and in the latter, energy dissipation is mostly by creating new
fracture surfaces. Notably, the same paper [9] verifies that industrial
fracking occurs in the viscous regime; this assumption makes the problem
considered in this paper tractable to a semi-analytical approach.
* •
The mathematical study of suspension flow dates back to 1906, when Einstein
used properties of suspensions to estimate the size of a water molecule [10].
In particular, he showed that very dilute particle-laden flows are Newtonian,
with a viscosity which increases with the concentration of particles. However,
during hydraulic fracturing it is necessary to model a full range of proppant
volume fractions, which we denote by $\phi$. It is typical to have both dilute
flow near the crack walls, as well as plug flow at the centre of the cavity,
where the slurry behaves as a porous granular medium. More recent experiments
by Boyer et al. in 2011 [11] investigate dense suspension rheology. They show
that particles in suspension, subject to a constant normal particle pressure
that is applied by a porous plate, expand when a shear is applied to the
mixture. As a result, it is possible to write $\phi=\phi(I)$, where the
dimensionless parameter, $I$, is the ratio between the fluid shear stress,
which is proportional to the shear rate, and the particle normal stress.
Likewise, fixing the solid volume fraction, they showed that the normal
particle pressure is proportional to the mixture shear stress. It is also
shown that the constant of proportionality, $\mu$, can be expressed as a
decreasing function of $\phi$. In the same paper [11], forms of the
rheological functions $I$ and $\mu$ are suggested, showing good agreement with
experimental data. Since then, several papers have suggested slightly
different rheological models, which are reviewed by Dontsov et al. in [12]. These
all feature a jamming limit, $\phi_{m}$, which is the volume fraction at which
the flowing slurry transitions into a granular solid. We will utilise the
frictional rheology given by Lecampion & Garagash [13], which is unique in
allowing packings with $\phi>\phi_{m}$. These denser packings form due to ‘in-
cage’ particle rearrangements caused by velocity and pressure fluctuations in
the surrounding flow.
The endeavours of this paper may be condensed into three main objectives. The
first is to establish a mathematical framework that captures the behaviour of
the proppant suspension as it interacts with the growing cavity. Here we will
utilise a lubrication model, along with the assumption that the proppant flow
is fully developed; equivalently, that the transverse fluid slip is
negligible. Crucially, we will try to justify these assumptions using typical
parameters from industrial fracking. We will also make a zero-toughness
assumption, which is validated in [9]. Once we have developed this framework,
an important step will be to compare its features to those derived in the
zero-proppant, viscosity dominated case by Savitski & Detournay [9],
particularly because we utilise a frictional rheology fitted to the dense
regime. The second objective is to find and examine accurate numerical
solutions modelling the developing cavity, given a range of proppant
concentrations. We will explore the empirical effects of changing proppant
concentration on the geometry of the developing fracture, as well as the
distribution of proppants. Where possible, we will evaluate the consistency of
our model and forecast potential shortfalls such as proppant screen-out near
the crack tip. The third, and final, objective is to leverage our results to
make predictions about the geometry of the fracture after the fluid pressure
is released. By assuming the remaining proppants are immobile and
incompressible, we aim to establish simple formulae predicting the width and
radius of the developed fracture. Since these relate directly to the
conductivity of the formation, this third objective is potentially the most
significant.
Aside from the availability of semi-analytical solutions, the problem of
proppant flow in a penny-shaped crack is particularly appealing because of the
potential of practical verification. Recent experiments by O’Keeffe, Huppert &
Linden [14] have explored fluid-driven, penny-shaped fractures in transparent,
brittle hydrogels, making use of small particle concentrations to measure in-
crack velocities. This paper is the first of two; the second of which will be
a practical treatise on slurry-driven fractures in hydrogels, aiming to verify
the predictions made here by repeating the experiments of [14] including
proppant concentrations.
## 2\. Injection: Problem Formulation
Figure 1. Schematic of the penny-shaped crack.
### 2.1. Fracture Mechanics
We model the propagation of a penny-shaped crack similar to that shown in
Figure 1, using the framework of Savitski & Detournay [9]. We will make the
following assumptions:
* •
The crack is axisymmetric and has reflectional symmetry in $z=0$, with half
width $w(r,t)$ and total radius $R(t)$, so $w(R,t)=0$.
* •
The fluid is injected from a point source, with the wellbore radius negligible
compared to the fracture radius.
* •
The lag between the fracture tip and the fluid front is negligible compared to
the fracture radius.
* •
The fracture propagates in continuous mobile equilibrium.
* •
The normal stress on the fracture walls due to proppants is negligible
compared to the fluid pressure.
The third assumption is validated by Garagash & Detournay [15] and introduces
a negative pressure singularity at the tip of the crack ($r=R$). The fourth
and fifth assumptions lead to the following integral equations from linear
elastic fracture mechanics. These relate the net fluid pressure, $p(r,t)$, to
the opening of the fracture and the toughness of the surrounding rock.
(1) $\displaystyle w(r,t)$ $\displaystyle=\frac{4R}{\pi
E^{\prime}}\int_{r/R}^{1}\frac{y}{\sqrt{y^{2}-(r/R)^{2}}}\int_{0}^{1}\frac{xp(xyR,t)}{\sqrt{1-x^{2}}}dxdy,$
(2) $\displaystyle K_{Ic}$ $\displaystyle=\frac{2}{\sqrt{\pi
R}}\int_{0}^{R}\frac{p(r,t)r}{\sqrt{R^{2}-r^{2}}}dr,$
where $E^{\prime}$ is the plane strain modulus, given by the Young modulus,
$E$, and the Poisson ratio, $\nu$, as $E^{\prime}=E/(1-\nu^{2})$. $K_{Ic}$ is
the material toughness. These equations can be attributed to Sneddon [16] and
Rice [17] respectively. We note that $p$ represents the fluid pressure minus
the in-situ stress of the surrounding rock, which is assumed to be isotropic.
We write $p$ with radial spatial dependence only; this will be validated
later, along with the fifth assumption, using a lubrication argument.
### 2.2. Frictional Rheology
We model the injected flow as a Newtonian fluid containing identical spherical
particles. Recent approaches in modelling dense slurry flow are characterised
by empirical relations originally proposed by Boyer et al. [11]. The first of
these relates the fluid shear stress to the normal stress required to confine
the particles; the second gives the ratio of the mixture shear stress to the
particle confining stress,
(3) $\displaystyle I(\phi)$
$\displaystyle=\eta_{f}\dot{\gamma}/\sigma_{n}^{s},$ $\displaystyle\mu(\phi)$
$\displaystyle=\tau/\sigma_{n}^{s}.$
Here $\eta_{f}$ is the carrying fluid’s dynamic viscosity, $\phi$ is the
volume fraction of the proppants, $\dot{\gamma}$ is the solid shear rate and
$\sigma_{n}^{s}$ is the normal particle stress, which we will sometimes refer
to as the particle pressure. The second ratio is given the symbol $\mu$, not
to be confused with dynamic viscosity, because it resembles a friction
coefficient. These relations are given a clear experimental grounding in [11],
which is discussed in the introduction. Various forms of the dimensionless
functions $I(\phi)$ and $\mu(\phi)$ have been compared to experimental results
in [12] using the equivalent formulation:
$\tau=\eta_{s}(\phi)\eta_{f}\dot{\gamma}$ and
$\sigma_{n}=\eta_{n}(\phi)\eta_{f}\dot{\gamma}$, where
$\eta_{s}=\mu(\phi)/I(\phi)$ and $\eta_{n}=1/{I(\phi)}$.
In our calculations we will utilise the frictional rheology provided by
Lecampion & Garagash [13], which is unique in allowing packings with
volume concentrations greater than $\phi_{m}$. Here $I(\phi)=0$, meaning the
proppants have zero shear rate and effectively resemble a permeable solid.
Explicitly, we use the expressions
(4)
$\displaystyle\mu=\mu_{1}+\frac{\phi_{m}}{\delta}\left(1-\frac{\phi}{\phi_{m}}\right)$
$\displaystyle+\left(I(\phi)+\left[\frac{5}{2}\phi_{m}+2\right]I(\phi)^{0.5}\right)\left(1-\frac{\phi}{\phi_{m}}\right)^{2},$
(7) $\displaystyle I(\phi)$
$\displaystyle=\left\\{\begin{array}[]{rl}\left(\phi_{m}/\phi-1\right)^{2}&\textrm{
if }\phi<\phi_{m}\\\ 0&\textrm{ if }\phi\geq\phi_{m},\end{array}\right.$
where $\phi_{m}=0.585$, $\mu_{1}=0.3$ and $\delta=0.158$; these are plotted in
Figure 2. We might have used a different rheology, but this model shows good
agreement with the data of Boyer et al. [11] and Dagois-Bohy et al. [18] for
$0.4<\phi<\phi_{m}$. Furthermore, owing to its linear extension beyond
$\phi_{m}$, $\mu$ is a simple monotonic function, meaning we can invert it
easily to find $\phi$. In other models $\phi(\mu)$ is constant for
$\mu<\mu(\phi_{m})$; this means that $\phi_{m}$ is the maximum volume
fraction, regardless of how small shear stresses in the jammed slurry become.
An important observation is that $\mu=0$ implies
$\phi=\phi_{m}+\delta\mu_{1}\approx 0.63\approx\phi_{rcp}$. Here $\phi_{rcp}$
is the random close packing limit, the maximal observed volume fraction due to
random packing. This reflects the fact that, for a given confining stress, as
the shear stress tends to zero, the particles pack to this maximal density.
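As a quick sanity check, the rheological functions (4) and (7) can be evaluated directly; the following minimal Python sketch, using the constants quoted above, recovers the random-close-packing root of $\mu$:

```python
import numpy as np

# Constants of the Lecampion & Garagash rheology quoted in the text
PHI_M, MU_1, DELTA = 0.585, 0.3, 0.158

def I(phi):
    """Eq (7): dimensionless shear-rate ratio; zero at and above jamming."""
    phi = np.asarray(phi, dtype=float)
    return np.where(phi < PHI_M, (PHI_M / phi - 1.0) ** 2, 0.0)

def mu(phi):
    """Eq (4): friction coefficient, linear in phi beyond the jamming limit."""
    phi = np.asarray(phi, dtype=float)
    i = I(phi)
    return (MU_1 + (PHI_M / DELTA) * (1.0 - phi / PHI_M)
            + (i + (2.5 * PHI_M + 2.0) * np.sqrt(i)) * (1.0 - phi / PHI_M) ** 2)

# mu is monotone decreasing and vanishes at phi_m + delta*mu_1,
# close to the random close packing fraction phi_rcp ~ 0.63
phi_rcp = PHI_M + DELTA * MU_1
print(round(phi_rcp, 4))                      # 0.6324
print(round(abs(float(mu(phi_rcp))), 12))     # 0.0 (I = 0 here, so only the linear branch survives)
```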
This rheology uses a continuum model that requires particles to be small
compared to the size of the fracture. This is in order to well-define the
proppant volume concentration, $\phi$. In our model the relevant ratio is that
of the particle diameter to the typical crack width, the smallest cavity
length scale. In [13], good results are obtained using the same rheological
model, with this ratio taking values as large as $1/10$. However, as the ratio
approaches unity we have to consider non-local effects, such as proppant
bridging across the crack width. This is particularly important near the
fracture tip, where $w$ approaches zero. These effects will be discussed in
greater detail in Section 7, once we have formed a model of the evolving
fracture. We must also be cautious applying these rheological models to dilute
flows, since they are fitted to experimental data from the dense regime, where
$\phi>0.4$. This difficulty is somewhat inevitable, since the determination of
$I$ and $\mu$ requires measurement of the particle normal stress, or particle
pressure, which becomes very small in the dilute regime.
(a) $I$
(b) $\mu$
(c) $I/\mu$
(d) $I/\mu$ data
Figure 2. Plots of the rheological functions $I$, $\mu$ and $I/\mu$ given by
Lecampion & Garagash [13]. Also plotted is the experimental data of Boyer et
al. [11] using polystyrene spheres of diameter 580$\mu$m in $2.15$Pa s fluid
(red), as well as poly(methyl methacrylate) spheres of diameter 1100$\mu$m
suspended in $3.10$Pa s fluid (orange); and of Dagois-Bohy et al. [18] using
polystyrene spheres of diameter 580$\mu$m suspended in $2.27$Pa s fluid
(purple). All experiments are carried out with a fixed particle pressure,
applied by a porous plate.
### 2.3. Fluid Slip
We define $\mathbf{u}$ as the slurry velocity, $\mathbf{v}$ as the particle
velocity and $\mathbf{q}=\mathbf{u}-\mathbf{v}$ as the slip velocity. We then
employ the slip relation
(8) $\displaystyle\mathbf{q}$
$\displaystyle=\frac{a^{2}\kappa(\phi)}{\eta_{f}}\nabla\cdot\sigma^{f},$ (9)
$\displaystyle\kappa(\phi)$ $\displaystyle=\frac{2(1-\phi)^{5.1}}{9\phi},$
where $a$ is the particle radius and $\sigma^{f}$ is the fluid stress tensor.
Since fluid and particle shear rates are often similar, we ignore fluid shear
stresses and take $\sigma^{f}=-pI$; this is typical in the analysis of porous
media flow. This simplifies (8) to Darcy’s law. However, the effect of fluid
shear stress is taken into account in the frictional rheology, where it is
included as part of the solid shear stress. $\kappa$ is a normalised form of
the permeability of the solid particles; we use the function suggested by
Garside & Al-Dibouni [19], which is based on the phenomenology first described
by Richardson & Zaki [20]. This choice of permeability function shows
excellent agreement with the experimental results of Bacri et al. [21].
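The Darcy-like reduction above is straightforward to evaluate numerically. A minimal sketch (the pressure gradient value below is purely illustrative) shows how steeply slip falls as the packing densifies:

```python
import numpy as np

def kappa(phi):
    """Eq (9): normalised permeability (Garside & Al-Dibouni form)."""
    return 2.0 * (1.0 - phi) ** 5.1 / (9.0 * phi)

# With sigma^f = -p*I, the slip relation (8) reduces to a Darcy form,
# q = -(a^2 kappa(phi) / eta_f) * grad(p).  Illustrative values only:
a, eta_f = 5e-5, 0.01        # particle radius (m) and fluid viscosity (Pa s)
grad_p = 1e5                 # hypothetical radial pressure gradient (Pa/m)

phi = np.array([0.30, 0.45, 0.585])
slip = a ** 2 * kappa(phi) / eta_f * grad_p   # slip speed magnitude (m/s)
print(slip)

# Permeability, and hence slip, decreases sharply with concentration
assert np.all(np.diff(kappa(phi)) < 0.0)
```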
### 2.4. Conservation Equations
We consider the effective Reynolds number,
(10) $\displaystyle\textrm{Re}_{\textrm{eff}}=\frac{\rho
u_{r}w^{2}}{\eta_{f}R},$
to be negligible. We also neglect the effect of gravity, since we are mainly
concerned with small or neutrally buoyant proppants, which settle slowly.
Hence, our momentum balance becomes
(11) $\displaystyle\nabla\cdot\sigma=0,$
where $\sigma=\sigma^{s}+\sigma^{f}$ is the mixture stress tensor, composed of
the particle and fluid stresses respectively. We also note that, subtracting
the hydrostatic pressure term, we write $\sigma=\tau-pI$. Since we assumed
$\sigma^{f}=-pI$ in deriving the fluid slip equation, we deduce
$\sigma^{s}=\tau$. This is a notational quirk arising from the frictional
rheology because $\tau$ does include shear stress originating from the viscous
carrier fluid. Herein we will refer to $\sigma^{s}_{zz}$ and $\tau_{rz}$,
since the former generally arises from the proppants and the latter stems from
both the proppants and the carrier fluid. The assumption of axisymmetry gives
(12) $\displaystyle\frac{1}{r}\frac{\partial(r\tau_{rr})}{\partial
r}+\frac{\partial\tau_{rz}}{\partial z}-\frac{\partial p}{\partial r}$
$\displaystyle=0,$
$\displaystyle\frac{1}{r}\frac{\partial(r\tau_{rz})}{\partial
r}+\frac{\partial\sigma^{s}_{zz}}{\partial z}-\frac{\partial p}{\partial z}$
$\displaystyle=0.$
We also have the continuity equations
(13) $\displaystyle\nabla\cdot(\mathbf{v}+\mathbf{q})$ $\displaystyle=0,$
$\displaystyle\frac{\partial\phi}{\partial t}+\nabla\cdot(\phi\mathbf{v})$
$\displaystyle=0.$
The first of these can be integrated over the fracture volume to give
$Qt=4\pi\int_{0}^{R}rw(r,t)dr.$ Here, $Q$ is the rate at which the slurry is
pumped into the crack, which we will assume is constant. We will also assume
that the proppants are injected at a constant rate, meaning the average
concentration at the wellbore is constant.
## 3\. Injection: Scalings
To help implement the assumptions of a lubrication model, where the crack
width is far smaller than the crack radius, we introduce the scaled
coordinates,
$\displaystyle T$ $\displaystyle=T(t),$ $\displaystyle r$
$\displaystyle=L(t)\Gamma(T)\xi,$ $\displaystyle z$
$\displaystyle=\epsilon(t)L(t)\eta.$
Here $T(t)$ is the internal time scale, a monotonic function to be specified
later; $\epsilon(t)$ is a small number; and $\Gamma(T)$ is the crack radius,
measured in the scaled coordinates, so $\xi=1$ implies $r=R$. We multiply the
variables accordingly,
$\displaystyle w(r,t)$ $\displaystyle\to\epsilon Lw(\xi,T),$ $\displaystyle
p(r,z,t)$ $\displaystyle\to\epsilon E^{\prime}p(\xi,\eta,T),$ $\displaystyle
R(t)$ $\displaystyle\to L\Gamma(T),$ $\displaystyle v_{z}(r,z,t)$
$\displaystyle\to-\dot{\epsilon}Lv_{z}(\xi,\eta,T),$ $\displaystyle
v_{r}(r,z,t)$
$\displaystyle\to\frac{-\dot{\epsilon}L}{\epsilon}v_{r}(\xi,\eta,T),$
$\displaystyle q_{r}(r,z,t)$
$\displaystyle\to\frac{\epsilon}{L}\frac{a^{2}E^{\prime}}{\eta_{f}\Gamma}q_{r}(\xi,\eta,T),$
$\displaystyle q_{z}(r,z,t)$
$\displaystyle\to\frac{1}{L}\frac{a^{2}E^{\prime}}{\eta_{f}}q_{z}(\xi,\eta,T),$
$\displaystyle\tau(r,z,t)$
$\displaystyle\to-\frac{\dot{\epsilon}}{\epsilon^{2}}\eta_{f}\tau(\xi,\eta,T),$
$\displaystyle\sigma^{s}(r,z,t)$
$\displaystyle\to-\frac{\dot{\epsilon}}{\epsilon^{2}}\eta_{f}\sigma^{s}(\xi,\eta,T).$
The appearance of minus signs reflects the fact that $\epsilon$, the ratio of
the characteristic width to the characteristic radius of the fracture, is
decreasing. We also assume the scaling is suitable so that all the scaled
variables are $\mathcal{O}(1)$. Herein, we will use $(\dot{})$ for derivatives
with respect to $t$ and $(^{\prime})$ for those with respect to $T$.
In the new, rescaled coordinates the equations describing the frictional
rheology become $I(\phi)=\dot{\gamma}/\sigma_{n}^{s}$ and
$\mu(\phi)=\tau/\sigma_{n}^{s}$. The slip equation becomes
$\mathbf{q}=-\kappa(\phi)\nabla p,$ where $\nabla$ is now with respect to
$(\xi,\eta)$. The integral equations become
(14) $\displaystyle w(\xi,T)$
$\displaystyle=\frac{4\Gamma}{\pi}\int_{\xi}^{1}\frac{y}{\sqrt{y^{2}-\xi^{2}}}\int_{0}^{1}\frac{xp(xy,T)}{\sqrt{1-x^{2}}}dxdy,$
$\displaystyle\aleph\equiv\frac{K_{Ic}}{\epsilon E^{\prime}\sqrt{L}}$
$\displaystyle=2\sqrt{\frac{\Gamma}{\pi}}\int_{0}^{1}\frac{p(\xi,T)\xi}{\sqrt{1-\xi^{2}}}d\xi.$
The momentum equations are
(15)
$\displaystyle\frac{\epsilon}{\Gamma\xi}\frac{\partial(\xi\tau_{rr})}{\partial\xi}+\frac{\partial\tau_{rz}}{\partial\eta}+\frac{\epsilon^{3}E^{\prime}t}{\eta_{f}}\frac{\epsilon}{\dot{\epsilon}t\Gamma}\frac{\partial
p}{\partial\xi}$ $\displaystyle=0,$
$\displaystyle\frac{\epsilon^{2}}{\Gamma\xi}\partialderivative{(\xi\tau_{rz})}{\xi}+\epsilon\partialderivative{\sigma^{s}_{zz}}{\eta}+\frac{\epsilon}{\dot{\epsilon}t}\frac{\epsilon^{3}E^{\prime}t}{\eta_{f}}\partialderivative{p}{\eta}$
$\displaystyle=0.$
Since we expect the radial pressure gradient to be comparable to the shear
stress, $\tau_{rz}$, we choose $\epsilon$ so that the dimensionless quantity
$\epsilon^{3}E^{\prime}t/\eta_{f}=1$. Finally, the global volume conservation
equation then becomes $Qt/(\epsilon L^{3})=4\pi\Gamma^{2}\int_{0}^{1}\xi
w(\xi,T)d\xi,$ so in a similar manner we choose the dimensionless quantity
$Qt/\epsilon L^{3}=1.$ These choices mean
(16) $\displaystyle\epsilon(t)$
$\displaystyle=(\eta_{f}/E^{\prime})^{\frac{1}{3}}t^{-1/3},$ $\displaystyle
L(t)$ $\displaystyle=(E^{\prime}Q^{3}/\eta_{f})^{\frac{1}{9}}t^{4/9}.$
We will repeatedly use the relations $\dot{\epsilon}t/\epsilon=-1/3$ and
$\dot{L}t/L=4/9$. Using this choice of $\epsilon$ we note that, before
scaling, $\sigma^{s}/p=\mathcal{O}(\epsilon)$; this validates the assumption
that particle pressure is negligible compared to hydrostatic pressure at the
crack walls. Also, by the scaled momentum equations,
(17) $\displaystyle\frac{\partial\tau_{rz}}{\partial\eta}$
$\displaystyle=\frac{3}{\Gamma}\frac{\partial
p}{\partial\xi}+\mathcal{O}(\epsilon),$ $\displaystyle\frac{\partial
p}{\partial\eta}$
$\displaystyle=\frac{\epsilon}{3}\frac{\partial\sigma^{s}_{zz}}{\partial\eta}+\mathcal{O}(\epsilon^{2}),$
the second of which verifies the assumption that $p$ has spatial dependence in
the radial direction only. Because of the $\eta=0$ reflectional symmetry, we
note that $\tau_{rz}(\xi,0)=0$. So, ignoring $\mathcal{O}(\epsilon)$ terms and
integrating the first of equations (17), we see that
(18) $\displaystyle\tau_{rz}=\frac{3\eta}{\Gamma}\frac{\partial
p}{\partial\xi},$
and, using the scaled equations from the frictional rheology,
(19) $\displaystyle\sigma_{zz}^{s}$
$\displaystyle=\frac{3|\eta|}{\Gamma}\frac{1}{\mu(\phi)}\frac{\partial
p}{\partial\xi},$ $\displaystyle\frac{\partial v_{r}}{\partial\eta}$
$\displaystyle=\frac{3\eta}{\Gamma}\frac{I(\phi)}{\mu(\phi)}\frac{\partial
p}{\partial\xi}.$
Then, using the condition $v_{r}(\xi,\pm w)=0$, we deduce that
(20) $\displaystyle
v_{r}(\xi,\eta)=-\frac{3}{\Gamma}\partialderivative{p}{\xi}\int_{\eta}^{w}\frac{I(\phi)\eta}{\mu(\phi)}d\eta.$
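For a uniform concentration the integral in (20) can be done by hand, giving a Poiseuille-like profile scaled by the mobility ratio $I/\mu$. A sketch with hypothetical values ($\phi_{0}$, $w$, $\Gamma$ and the pressure gradient are all illustrative):

```python
import numpy as np

PHI_M, MU_1, DELTA = 0.585, 0.3, 0.158

def I(phi):
    """Eq (7) for a scalar volume fraction."""
    return (PHI_M / phi - 1.0) ** 2 if phi < PHI_M else 0.0

def mu(phi):
    """Eq (4) for a scalar volume fraction."""
    i = I(phi)
    return (MU_1 + (PHI_M / DELTA) * (1.0 - phi / PHI_M)
            + (i + (2.5 * PHI_M + 2.0) * i ** 0.5) * (1.0 - phi / PHI_M) ** 2)

# Hypothetical inputs: uniform concentration phi0, unit half-width and
# Gamma, and a unit (negative, i.e. outward-driving) pressure gradient.
phi0, w, Gamma, dpdxi = 0.3, 1.0, 1.0, -1.0
eta = np.linspace(-w, w, 201)

# With phi constant, eq (20) integrates to a parabola:
# v_r = -(3/Gamma) dp/dxi * (I/mu)(phi0) * (w^2 - eta^2)/2
v_r = -(3.0 / Gamma) * dpdxi * (I(phi0) / mu(phi0)) * (w ** 2 - eta ** 2) / 2.0

assert np.argmax(v_r) == len(eta) // 2   # maximum at the channel centre
assert v_r[0] == 0.0 and v_r[-1] == 0.0  # no slip at the crack walls
```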
## 4\. Injection: Time Regimes
In this choice of scaling, the slurry conservation equation becomes
(21) $\displaystyle\frac{1}{3\Gamma\xi}\frac{\partial(\xi
v_{r})}{\partial\xi}+\frac{1}{3}\frac{\partial
v_{z}}{\partial\eta}+\left(\frac{a}{L\Gamma}\right)^{2}\frac{1}{\epsilon^{2}\xi}\frac{\partial(\xi
q_{r})}{\partial\xi}+\left(\frac{a}{L}\right)^{2}\frac{1}{\epsilon^{4}}\frac{\partial
q_{z}}{\partial\eta}=0.$
Combining this with the scaled slip equation, noting (17), we obtain
(22) $\displaystyle\frac{1}{3\Gamma\xi}\frac{\partial(\xi
v_{r})}{\partial\xi}+\frac{1}{3}\frac{\partial
v_{z}}{\partial\eta}-\frac{\epsilon\lambda}{\Gamma^{2}\xi}\partialderivative{\xi}\left[\xi\kappa(\phi)\partialderivative{p}{\xi}\right]-\frac{\lambda}{3}\partialderivative{\eta}\left[\kappa(\phi)\partialderivative{\sigma^{s}_{zz}}{\eta}\right]=0.$
Here $\lambda=a^{2}/(L^{2}\epsilon^{3})$ is a constant; we will later identify
it as the ratio of the fracture length scale to the development length scale,
over which we expect proppant flow to stabilise.
Following Shiozawa & McClure [22], Chen Zhixi et al. [23] and Liang et al.
[24], we adopt the constants relevant to hydraulic fracturing given in Table 1.
Constant | Typical Value
---|---
$Q$ | $0.04\textrm{ m}^{3}\textrm{ s}^{-1}$
$E^{\prime}$ | $40\textrm{ GPa}$
$\eta_{f}$ | $0.01\textrm{ Pa s}$
$\rho_{f}$ | $1000\textrm{ kg m}^{-3}$
$K_{Ic}$ | $0.5\textrm{ MPa m}^{0.5}$
$a$ | $5\times 10^{-5}\textrm{ m}$
Table 1. Typical values of constants, given by Shiozawa & McClure [22], Chen
Zhixi et al. [23] and Liang et al. [24].
The choice of $a$ represents a typical radius for the finer proppants
commonly used at the initiation of fracturing [24]. This gives us the
following estimates
$\displaystyle\epsilon$ $\displaystyle\approx 6\times 10^{-5}\cdot t^{-1/3},$
$\displaystyle L$ $\displaystyle\approx 9\times 10^{0}\cdot t^{4/9},$
$\displaystyle\textrm{Re}_{\textrm{eff}}$ $\displaystyle\approx 1\times
10^{-2}\cdot t^{-7/9},$ $\displaystyle\aleph$ $\displaystyle\approx 4\times
10^{-2}\cdot t^{1/9},$ $\displaystyle\lambda$ $\displaystyle\approx 1\times
10^{2}\cdot t^{1/9},$ $\displaystyle a/(\epsilon L)$ $\displaystyle\approx
1\times 10^{-1}\cdot t^{-1/9}.$
The value of $\textrm{Re}_{\textrm{eff}}$ is calculated using formula (10),
substituting each term with its typical scaling.
Considering the same problem in the zero-proppant case, Savitski & Detournay
[9] show that when $1.6\aleph<1$, the fracture evolution is well approximated
by taking the dimensionless toughness $\aleph=0$. Also, the choice $T=\aleph$
is taken, reflecting the dependence of the scaled solution on this
monotonically increasing parameter; assuming $\aleph$ is negligible it is
possible to neglect any $T$ dependence. We will also use these assumptions,
since toughness plays its greatest role near the fracture tip, where the crack
is typically too narrow for proppants to interfere. Given our estimate for
$\aleph$, this means we must take $t<1.5\times 10^{7}$.
In general we will assume $t>250$, so we may ignore $\epsilon$ and
$\textrm{Re}_{\textrm{eff}}$ terms. This also means $2a/(\epsilon L)<1/10$, so
the fracture is typically more than 10 particles wide. Lecampion & Garagash
[13], conclude that non-local phenomena such as proppant-bridging aren’t
important in such cases; however we can still expect to see these effects near
the narrow crack tip. The significance of this behaviour will be discussed in
greater detail in Section 7.
We also note that $\lambda$ is large; so in an effort to remove time
dependence from our equations, we may neglect the first three terms in the
continuity equation (22),
(23)
$\displaystyle\partialderivative{\eta}\left[\kappa(\phi)\partialderivative{\sigma^{s}_{zz}}{\eta}\right]=0.$
By the assumption of reflectional symmetry, the particle pressure gradient
must vanish at $\eta=0$. Because $\kappa$ is generally non-zero, we deduce
that the particle pressure is constant with $\eta$; and, by (19), so is
$|\eta|/\mu(\phi)$. Hence,
(24)
$\displaystyle\phi(\xi,\eta)=\mu^{-1}\left(\mu_{w}(\xi)\frac{|\eta|}{w(\xi)}\right),$
where $\mu_{w}$ is an undetermined function of $\xi$, which we recognise as
the value of $\mu$ at the crack wall. Noting that $\mu$ is a decreasing
function, we see that $\mu_{w}$ also describes the rate at which the
concentration drops from the centre to the wall of the cavity. We also notice
that, in accordance with Dontsov et al. [25], we have plug flow in the centre of
the channel, where concentrations are greater than $\phi_{m}$. Because the
slurry flows away from the wellbore, the distribution of proppants, which is
described by $\mu_{w}$, depends on the concentration of proppants in the
injected mixture and how that changes with time. Hence, an important step in
the determination of $\mu_{w}$ will be implementing the assumption that the
average concentration at the wellbore is constant. This will be discussed in
greater detail in Section 7.
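Since $\mu$ is monotone under this rheology, (24) can be inverted numerically. A sketch with a hypothetical wall value $\mu_{w}$ illustrates the central plug, where $\phi>\phi_{m}$:

```python
import numpy as np

PHI_M, MU_1, DELTA = 0.585, 0.3, 0.158
PHI_RCP = PHI_M + DELTA * MU_1

def mu(phi):
    """Eqs (4) and (7) for a scalar volume fraction."""
    i = (PHI_M / phi - 1.0) ** 2 if phi < PHI_M else 0.0
    return (MU_1 + (PHI_M / DELTA) * (1.0 - phi / PHI_M)
            + (i + (2.5 * PHI_M + 2.0) * i ** 0.5) * (1.0 - phi / PHI_M) ** 2)

def mu_inv(m):
    """Invert the monotone-decreasing mu(phi) by bisection on (0, phi_rcp]."""
    lo, hi = 1e-9, PHI_RCP
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if mu(mid) > m:
            lo = mid          # mu too large -> phi too small
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Eq (24): phi(eta) = mu_inv(mu_w * |eta| / w); mu_w = 1.5 is hypothetical
mu_w, w = 1.5, 1.0
eta = np.linspace(0.0, w, 6)
phi = np.array([mu_inv(mu_w * e / w) for e in eta])

assert phi[0] > PHI_M                # plug flow at the centre (phi > phi_m)
assert phi[-1] < PHI_M               # sheared, more dilute flow at the wall
assert np.all(np.diff(phi) < 0.0)    # concentration falls towards the wall
```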
It is interesting to note that [13] verifies a length scale of
$\epsilon^{3}L^{3}/a^{2}$ for proppant flow in a channel, or pipe, to become
fully established. This means the particle pressure gradient becomes
negligible, and the cross fracture concentration profile becomes independent
of the distance from the channel, or pipe, entrance. As a result, the constant
$\lambda=a^{2}/(L^{2}\epsilon^{3})$ can be interpreted as the ratio of the
fracture length to the development length. Because this is large, an
alternative route to (24) would have been to assume the transverse particle
pressure is constant, reflecting the full development of the flow.
## 5\. Injection: Governing Equation for Fracture Width
In scaled coordinates, the governing equation for the conservation of proppant
mass becomes
(25)
$\displaystyle\frac{\xi\dot{L}t}{L}\frac{\partial\phi}{\partial\xi}+\left[\frac{\dot{\epsilon}t}{\epsilon}+\frac{\dot{L}t}{L}\right]\eta\frac{\partial\phi}{\partial\eta}=-\frac{\dot{\epsilon}t}{\epsilon\Gamma\xi}\partialderivative{(\xi\phi
v_{r})}{\xi}-\frac{\dot{\epsilon}t}{\epsilon}\partialderivative{(\phi
v_{z})}{\eta}.$
Then, implementing our choices of $\epsilon$ and $L$, we obtain
(26)
$\displaystyle\frac{4\xi}{3}\frac{\partial\phi}{\partial\xi}+\frac{\eta}{3}\frac{\partial\phi}{\partial\eta}=\frac{1}{\Gamma\xi}\partialderivative{(\xi\phi
v_{r})}{\xi}+\partialderivative{(\phi v_{z})}{\eta}.$
Integrating from $-w$ to $w$ with respect to $\eta$, leaving details to
Appendix A for brevity, we obtain
(27) $\displaystyle
4\xi\partialderivative{\xi}\left[w\Pi\circ\mu_{w}(\xi)\right]-w\Pi\circ\mu_{w}(\xi)=-\frac{9}{\Gamma^{2}\xi}\partialderivative{\xi}\left[\frac{\xi
w^{3}}{\mu_{w}(\xi)^{2}}\partialderivative{p}{\xi}\Omega\circ\mu_{w}(\xi)\right].$
Here we have defined the rheological functions
(28) $\displaystyle\Pi(x)$
$\displaystyle=\frac{1}{x}\int_{0}^{x}\mu^{-1}(u)du,$ $\displaystyle\Omega(x)$
$\displaystyle=\frac{1}{x}\int_{0}^{x}[\Pi(u)I\circ\mu^{-1}(u)u]du,$
which we plot in Figure 3.
(a) $\Pi(x)$ as a function of $x$
(b) $\Omega(x)$ as a function of $x$
(c) $x^{2}\Pi/\Omega$ as a function of $\Pi(x)$
Figure 3. Plots of the rheological functions $\Omega$, $\Pi$ and
$x^{2}\Pi/\Omega$.
Multiplying by $\xi$ and integrating from $\rho$ to $1$, we obtain
(29) $\displaystyle\int_{\rho}^{1}\xi
w\Pi\circ\mu_{w}(\xi)d\xi+\frac{4}{9}\rho^{2}w\Pi\circ\mu_{w}(\rho)=-\frac{\rho
w^{3}}{\Gamma^{2}\mu_{w}^{2}}\partialderivative{p}{\rho}\Omega\circ\mu_{w}(\rho),$
which lends itself more easily to computation. Here we have taken
$w^{3}\partial p/\partial\xi\to 0$ as $\xi\to 1$; this is physically motivated
by the fact that this term is proportional to the radial flux, which vanishes
at the crack tip. Moreover, Spence & Sharp [8] show that, in the zero-
proppant, zero-toughness regime, near the crack tip, $p\propto(1-\xi)^{-1/3}$
and $w\propto(1-\xi)^{2/3}$.
In order to compare this equation to the zero-proppant case, we assume
$\mu_{w}$ is independent of $\xi$ and take $\mu_{w}\to\infty$, to obtain
(30) $\displaystyle\int_{\rho}^{1}\xi
w(\xi)d\xi+\frac{4}{9}\rho^{2}w=-\frac{\rho
w^{3}}{\Gamma^{2}}\partialderivative{p}{\rho}\lim_{\mu_{w}\to\infty}\left[\frac{\Omega(\mu_{w})}{\mu_{w}^{2}\Pi(\mu_{w})}\right].$
From Figure 3(c) we deduce that the limit on the right-hand side is approximately $2/5$, which
is confirmed exactly in Appendix B. Modelling the fluid as Newtonian, also
leaving the details to Appendix B, we obtain the same equation, with a factor
of $1/3$ instead. We conclude that the equations governing Newtonian flow are
not the same as those in the zero-proppant slurry flow limit. This is clearly
a limitation of our approach, which arises from using a dense-fitted rheology
in the dilute regime. However, the fact that the equations share a nearly
identical form is promising, as we expect the qualitative behaviour of slurry
flow to be similar to that of Newtonian flow.
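The $2/5$ limit can also be checked numerically by building $\Pi$ and $\Omega$ from (28) with cumulative quadrature. The sketch below evaluates the ratio at a large but finite $\mu_{w}$; convergence in $\mu_{w}$ is slow, so only rough agreement with $2/5$ is expected:

```python
import numpy as np

PHI_M, MU_1, DELTA = 0.585, 0.3, 0.158
PHI_RCP = PHI_M + DELTA * MU_1

def mu(phi):
    i = np.where(phi < PHI_M, (PHI_M / phi - 1.0) ** 2, 0.0)
    return (MU_1 + (PHI_M / DELTA) * (1.0 - phi / PHI_M)
            + (i + (2.5 * PHI_M + 2.0) * np.sqrt(i)) * (1.0 - phi / PHI_M) ** 2)

def mu_inv(m):
    """Vectorised bisection for the monotone-decreasing mu."""
    lo = np.full_like(m, 1e-9)
    hi = np.full_like(m, PHI_RCP)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        too_big = mu(mid) > m
        lo = np.where(too_big, mid, lo)
        hi = np.where(too_big, hi, mid)
    return 0.5 * (lo + hi)

x_max = 1.0e4
u = np.linspace(1e-6, x_max, 200_001)
phi = mu_inv(u)
I_of_phi = np.where(phi < PHI_M, (PHI_M / phi - 1.0) ** 2, 0.0)

def cumtrapz0(f):
    """Cumulative trapezoid rule with a leading zero."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(u))))

Pi = cumtrapz0(phi) / u                     # eq (28), first definition
Omega = cumtrapz0(Pi * I_of_phi * u) / u    # eq (28), second definition

ratio = Omega[-1] / (x_max ** 2 * Pi[-1])
print(round(ratio, 3))   # close to 2/5 = 0.4 at this finite mu_w
```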
## 6\. Injection: Numerical Solution
We implement the numerical method first used by Spence & Sharp [8], with the
adaptations of Savitski & Detournay [9], to solve the equations we have derived
so far. It will be useful to introduce $h(\xi)=w(\xi)/\Gamma$. The lubrication
equation derived above, the elasticity equations and the global volume
conservation equation become
(31) $\displaystyle\int_{\rho}^{1}(\xi h\Pi\circ\mu_{w})d\xi+\frac{4}{9}\rho^{2}h\Pi\circ\mu_{w}=-\rho h^{3}\partialderivative{p}{\rho}\frac{\Omega\circ\mu_{w}}{\mu_{w}^{2}},$
(32) $\displaystyle h(\xi)=\frac{4}{\pi}\int_{\xi}^{1}\frac{y}{\sqrt{y^{2}-\xi^{2}}}\int_{0}^{1}\frac{xp(xy)}{\sqrt{1-x^{2}}}dxdy,$
(33) $\displaystyle 0=\int_{0}^{1}\frac{p(\xi)\xi}{\sqrt{1-\xi^{2}}}d\xi,$
(34) $\displaystyle 1=4\pi\Gamma^{3}\int_{0}^{1}(\xi h)d\xi.$
These equations alone do not give unique solutions for $\\{p,h,\mu_{w}\\}$, so
we will prescribe $\mu_{w}$ as part of the problem data. This allows us to
uniquely determine a solution for $\\{p,h\\}$. We seek series approximations
of the form
(35) $\displaystyle p(\xi)$ $\displaystyle=\sum_{i=-1}^{N-1}A_{i}p_{i}(\xi),$
$\displaystyle h(\xi)$ $\displaystyle=\sum_{i=-1}^{N}B_{i}h_{i}(\xi),$
where we define
(42) $\displaystyle p_{i}(\xi)=\left\{\begin{array}{ll}-\ln\xi+\ln 2-1&(i=-1)\\ (1-\xi)^{-1/3}J_{i}(\frac{4}{3},2,\xi)+\omega_{i}&(i\geq 0)\end{array}\right.,\qquad h_{i}(\xi)=\left\{\begin{array}{ll}\frac{4}{\pi}\left[(1-\xi^{2})^{1/2}-\xi\cos^{-1}(\xi)\right]&(i=-1)\\ (1-\xi)^{2/3}J_{i}(\frac{10}{3},2,\xi)&(i\geq 0)\end{array}\right..$
Here the $i=-1$ terms are used to account for the logarithmic singularity in
pressure at the inlet, expected as a result of the point source injection; the
other terms allow for a general solution of (32). Importantly, we note that
the $p_{i}$ terms have a $(1-\xi)^{-1/3}$ singularity near the crack tip and
the $h_{i}$ terms are proportional to $(1-\xi)^{2/3}$ (for $i\geq 0$). This
deliberately matches the asymptotic calculations from Spence & Sharp [8],
which arise from the assumptions of zero-lag and zero-toughness in an
expanding hydraulic fracture. This allows the numerical method to converge
accurately with few terms. The $J_{i}(p,q,\xi)$ are Jacobi Polynomials of
order $i$ defined on the interval $[0,1]$, in the sense defined by Abramowitz
& Stegun [26], normalised to satisfy the orthonormality condition,
(43)
$\displaystyle\int_{0}^{1}(1-\xi)^{p-q}\xi^{q-1}J_{i}(p,q,\xi)J_{j}(p,q,\xi)d\xi=\delta_{ij}.$
This means that the $h_{i}$ ($i\geq 0$) are orthonormal with respect to an
inner product weighted by $\xi$. The $\omega_{i}$ are simply constants to
ensure each of the $p_{i}$ obey the zero-toughness equation; adding these
constants means that the $p_{i}$ lose their orthonormality properties, however
this doesn’t affect the solution finding process.
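The orthonormality condition (43) is easy to verify numerically. The sketch below is illustrative only, not the paper's code: it rescales SciPy's shifted Jacobi polynomials $G_n(p,q,x)$, which follow the Abramowitz & Stegun convention but are not unit-normalised, for the $h_i$ case $p=10/3$, $q=2$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_sh_jacobi

p, q = 10.0 / 3.0, 2.0                    # the h_i case in (42)

def weight(x):                            # weight function from (43)
    return (1.0 - x) ** (p - q) * x ** (q - 1)

def inner(i, j):                          # weighted inner product on [0, 1]
    integrand = lambda x: weight(x) * eval_sh_jacobi(i, p, q, x) \
                                    * eval_sh_jacobi(j, p, q, x)
    value, _ = quad(integrand, 0.0, 1.0)
    return value

n = 3
norms = [np.sqrt(inner(i, i)) for i in range(n)]
gram = np.array([[inner(i, j) / (norms[i] * norms[j]) for j in range(n)]
                 for i in range(n)])      # should be close to the identity
```

Dividing by the computed norms reproduces exactly the normalisation of (43), so the Gram matrix of the rescaled polynomials comes out as the identity to quadrature precision.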
Because of its linearity, these series approximations reduce (32) to a linear
equation,
(44) $\displaystyle B_{i}=\sum_{j=-1}^{N-1}P_{ij}A_{j}.$
Here $(P)_{ij}$ is an $(N+2)\times(N+1)$ matrix whose entries we only have to
calculate once, using the orthogonality relation given above along with the
fact that $\{p_{-1},h_{-1}\}$ are a solution pair to (32). The entries of
$(P)_{ij}$, taken from [9], are listed in Appendix C for $N=4$. The
subtleties of calculating elements of $P_{ij}$, in the face of strongly singular
behaviour, are important and described in depth in [9]. Finally, using the
values of $B_{i}$ given above, we assign a cost to each choice of $A$ given by
(45)
$\displaystyle\Delta(A)=\sum_{\xi\in\\{0,1/M,...,1\\}}\left(\frac{\textrm{RHS}(\xi;A)}{\textrm{LHS}(\xi;A)}-1\right)^{2}.$
This is calculated by considering the discrepancies between the left and right
hand sides of (31), evaluated at $M+1$ equally spaced control points. We then
minimise $\Delta$ with respect to $A$ using the Nelder-Mead simplex method
[27].
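To make this last step concrete, here is a minimal sketch using SciPy's Nelder-Mead implementation. The right-hand side and the two-term basis are toy stand-ins (not the Jacobi series of the paper), chosen so that the exact minimiser of the cost is known in advance.

```python
import numpy as np
from scipy.optimize import minimize

xi = np.linspace(0.0, 1.0, 501)           # M + 1 = 501 control points

def rhs(x):                               # synthetic right-hand-side data
    return 1.0 + 2.0 * x

def lhs(x, A):                            # toy two-term series in place of (35)
    return A[0] + A[1] * x

def cost(A):                              # discrepancy measure, as in (45)
    return np.sum((rhs(xi) / lhs(xi, A) - 1.0) ** 2)

res = minimize(cost, x0=[1.1, 1.8], method="Nelder-Mead",
               options={"xatol": 1e-12, "fatol": 1e-14,
                        "maxiter": 5000, "maxfev": 5000})
A_opt = res.x                             # should recover A = (1, 2)
```

In the actual scheme the cost is evaluated through the linear map (44), so each trial $A$ costs one matrix-vector product plus the control-point sums; the simplex method needs no derivatives, which suits the singular tip behaviour.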
## 7\. Injection: Solutions for a constant $\mu_{w}$
For most monotonic choices of $\mu_{w}$, the numerical method above shows good
convergence. We see that the coefficients $A_{i}$ and $B_{i}$ drop off quickly
with $i$, and the final value of $\Delta$ tends to zero rapidly as we increase
$N$. If $\mu_{w}$ is a more complicated function, like in the case of Figure
4, we may need to use a larger value of $N$, but good convergence is still
possible.
Figure 4. Plot of cavity width profile and proppant distribution in the case
where $\mu_{w}$ is sinusoidal. Here $N=8$ is used.
This leads us to consider which choices of $\mu_{w}$ are most likely to appear
in reality. We note that by (24),
(46)
$\displaystyle\Pi\circ\mu_{w}(\xi)=\frac{1}{2w}\int_{-w}^{w}\phi(\xi,\eta)d\eta,$
so we may view $\Pi\circ\mu_{w}(\xi)$ as the average proppant concentration at
a given value of $\xi$. Since $\Pi\circ\mu_{w}$ is independent of time, we
automatically satisfy the condition that the injection rates of the proppants
and the fluid are constant. However this condition also means that the average
concentration at the wellbore, $\Pi\circ\mu_{w}(0)$, must equal the average
concentration taken by integrating over the entire crack volume. For a
monotonic choice of $\mu_{w}$ this implies that $\mu_{w}$ must be independent
of $\xi$. Herein we will make the assumption that $\mu_{w}$ is a constant and,
as a result, so is $\Pi=\Pi(\mu_{w})$. This is a natural assumption: at early
times we don’t expect significant concentration differences along the crack
because radial length scales are small.
A great advantage of a constant $\Pi$ is that we can define an ‘effective
viscosity’, which we can absorb into our scaled variables the same way as we
did with fluid viscosity. Under the assumption that $\mu_{w}$ is constant,
(31) becomes
(47) $\displaystyle\int_{\rho}^{1}\xi
h(\xi)d\xi+\frac{4}{9}\rho^{2}h=-\frac{\rho
h^{3}}{\eta_{e}}\partialderivative{p}{\rho},$
where $\eta_{e}=\mu_{w}^{2}\Pi/\Omega$ is what we call the effective
viscosity. It is plotted in Figure 3(c), and is best thought of as a function
of the average concentration, $\Pi$. Making the transformations
(48) $\displaystyle h$ $\displaystyle=\eta_{e}^{1/3}\tilde{h},$ $\displaystyle
p$ $\displaystyle=\eta_{e}^{1/3}\tilde{p},$ $\displaystyle\Gamma$
$\displaystyle=\eta_{e}^{-1/9}\tilde{\Gamma},$
our governing equations become
(49) $\displaystyle\int_{\rho}^{1}\xi\tilde{h}d\xi$
$\displaystyle+\frac{4}{9}\rho^{2}\tilde{h}=-\rho\tilde{h}^{3}\partialderivative{p}{\rho},$
$\displaystyle\tilde{h}(\xi)$
$\displaystyle=\frac{4}{\pi}\int_{\xi}^{1}\frac{y}{\sqrt{y^{2}-\xi^{2}}}\int_{0}^{1}\frac{x\tilde{p}(xy)}{\sqrt{1-x^{2}}}dxdy,$
$\displaystyle 0$
$\displaystyle=\int_{0}^{1}\frac{\tilde{p}(\xi)\xi}{\sqrt{1-\xi^{2}}}d\xi,$
$\displaystyle 1$
$\displaystyle=4\pi\tilde{\Gamma}^{3}\int_{0}^{1}(\xi\tilde{h})d\xi.$
We will solve them using the numerical method described before, except with
(49) in the place of (31-34).
Figure 5 plots $\tilde{h}$ and $\tilde{p}$, calculated using $N=4$ and
$M+1=501$ control points. Promisingly, we note that $\tilde{h}>0$ and $\tilde{p}$
shows the expected asymptotic behaviour. The value $\tilde{h}(0)=1.36$ will be
important in later discussion. The first column of table 3 shows the
coefficients $A_{i}$ and $B_{i}$, as well as the calculated value of
$\tilde{\Gamma}=0.598$. Significantly, we see that $A_{i}$ and $B_{i}$
decrease rapidly with $i$, suggesting that a solution with higher order terms
is unnecessary. This is supported by the small value of $\Delta\approx 5\times
10^{-5}$, with evenly spread contributions from control points along the
radius of the crack. This suggests that we have found a genuine solution, and
that the tip asymptotics are indeed suitable.
Figure 5. $(\xi,\eta)$ plots of $\tilde{h}$ and $\tilde{p}$, the scaled width
and pressure solutions to the absorbed effective viscosity system.
We now focus on finding numerical solutions for different concentrations in
order to consider features such as the velocity profile and proppant
distribution within the cavity. We consider the case of four different values
of the average concentration, $\Pi$. These are given in table 2, along with
the corresponding values of $\mu_{w}$ and $\eta_{e}$.
$\Pi$ | $\mu_{w}$ | $\eta_{e}$
---|---|---
0.05 | 487.3 | 2.74
0.20 | 23.35 | 3.92
0.40 | 3.93 | 10.37
0.55 | 1.06 | 96.60
Table 2. Test values of $\Pi$, $\mu_{w}$ and $\eta_{e}$.
The latter columns of table 3 show the values of $A$, $B$ and $\Gamma$
calculated using the exact method suggested in Section 6. Again we use
$M+1=501$ control points and $N=4$. Happily, the same values are recovered by
taking the values of $A$, $B$ and $\Gamma$ listed in the first column,
calculated after absorbing the effective viscosity, and applying the relations
(48) to return to the concentration-specific values. We calculate the same
value of $\Delta\approx 5\times 10^{-5}$ each time; this is to be expected as
the equations are equivalent once the solutions have been scaled.
| $\Pi$ | | Absorbed | 0.05 | 0.20 | 0.40 | 0.55
---|---|---|---|---|---|---|---
| $A_{-1}$ | | 0.14786 | 0.20710 | 0.23326 | 0.32238 | 0.67830
| $A_{0}$ | | 0.53529 | 0.74974 | 0.84444 | 1.16709 | 2.45559
| $A_{1}$ | | 0.01929 | 0.02702 | 0.03043 | 0.04206 | 0.08849
| $A_{2}$ | | 0.00402 | 0.00563 | 0.00634 | 0.00877 | 0.01844
| $A_{3}$ | | 0.00035 | 0.00049 | 0.00055 | 0.00076 | 0.00159
| $B_{-1}$ | | 0.14786 | 0.20710 | 0.23326 | 0.32238 | 0.67830
| $B_{0}$ | | 0.53805 | 0.75361 | 0.84879 | 1.17311 | 2.46825
| $B_{1}$ | | 0.05435 | 0.07612 | 0.08573 | 0.11849 | 0.24931
| $B_{2}$ | | 0.00012 | 0.00016 | 0.00019 | 0.00026 | 0.00054
| $B_{3}$ | | 0.00081 | 0.00114 | 0.00128 | 0.00177 | 0.00373
| $B_{4}$ | | 0.00029 | 0.00041 | 0.00046 | 0.00064 | 0.00134
| $\Gamma$ | | 0.59812 | 0.534579 | 0.513799 | 0.461261 | 0.359968
Table 3. Values of $A_{i}$, $B_{i}$ and $\Gamma$ obtained using (49) with
effective viscosity absorbed into the scaling and (31-34) with
$\Pi\in\\{0.05,0.20,0.40,0.55\\}$. We use $M=500$ and $N=4$ throughout.
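As a consistency check, the rescaling (48) can be applied directly to the tabulated values. The sketch below (values transcribed from tables 2 and 3) confirms that, since $p=\eta_{e}^{1/3}\tilde{p}$, the coefficients scale as $\eta_{e}^{1/3}$, while $\Gamma=\eta_{e}^{-1/9}\tilde{\Gamma}$, to the precision of the quoted $\eta_{e}$.

```python
import numpy as np

eta_e = np.array([2.74, 3.92, 10.37, 96.60])   # table 2, Pi = 0.05 ... 0.55
A_tilde_m1, Gamma_tilde = 0.14786, 0.59812     # "absorbed" column, table 3

A_m1 = eta_e ** (1.0 / 3.0) * A_tilde_m1       # predicted A_{-1} per column
Gamma = eta_e ** (-1.0 / 9.0) * Gamma_tilde    # predicted Gamma per column

A_m1_table = np.array([0.20710, 0.23326, 0.32238, 0.67830])
Gamma_table = np.array([0.534579, 0.513799, 0.461261, 0.359968])
# both should agree with table 3 to the precision of the quoted eta_e
```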
Figure 6 shows the distribution of proppants within the fracture for each
value of $\Pi$. They are overlaid with an arrow plot of the proppant velocity
profile, $\mathbf{v}$, scaled by $\xi$ to show the equivalent two-dimensional
flux. The calculation of $\mathbf{v}$ is omitted since it is lengthy and
similar to the derivation of (27) in Appendix A. As $\Pi$ increases we see a
growing disk of plug flow where $\phi>\phi_{m}$, marked with a magenta
contour. We also see a tendency towards proppant velocity across the crack,
rather than along it; this is because the shape of the crack becomes shorter
and wider as the effective viscosity increases.
(a) $\Pi=0.05$
(b) $\Pi=0.20$
(c) $\Pi=0.40$
(d) $\Pi=0.55$
Figure 6. Concentration-specific $(\Gamma\xi,\eta)$ plots of developing
fractures with total solid volume fraction, $\Pi$, taking the values $0.05$,
$0.20$, $0.40$ and $0.55$. These are presented with filled contours displaying
proppant concentration; arrows showing $\xi$-scaled velocity; and magenta
contours indicating the transition into plug flow at the centre of each
cavity.
Drawing on calculations we have made so far, we are now in a position to
assess the significance of tip screen-out in our model, something we have
neglected so far by adopting a continuum model of proppant transport. This is
where, near the crack tip, the narrowing crack aperture causes proppants to
jam and block the fracture, significantly affecting the development of the
evolving formation and the convective transport of proppants. In [28] this
problem is addressed using a ‘blocking function’ which reduces proppant flux
to zero in apertures smaller than three times the average particle’s diameter.
We will use this threshold to weigh the significance of ignoring screen-out in
our model. Figure 7(a) shows the volume-proportion of proppants predicted in
fracture regions of width less than this threshold, dependent on the time,
$t$, and the average proppant concentration, $\Pi$. We see that for early
times and low concentrations, our model predicts a significant proportion of
proppants in these regions, where the fracturing fluid is clear in reality.
However, in concentrations greater than $0.3$ this proportion is relatively
small; this means our model, which ignores tip screen-out, is self-consistent.
This difference arises from the effective viscosity, which increases with
$\Pi$ and causes the ratio of fracture width to length to decrease.
Lecampion & Garagash [13] conclude that their rheology, which is employed
throughout this paper, agrees very well with experimental results when the
predicted width of plug flow is greater than a particle’s width. In Figure
7(b), we see this condition holds for moderate times when $\Pi>0.4$, but not
for $\Pi<0.4$. Therefore, in the latter regime we can expect slight mismatches
between predicted and practical concentration profiles; this arises from a
breakdown of the continuum model in the jammed part of the flow [13].
(a) $w<6a$
(b) Plug width $<2a$
Figure 7. Proportion of proppants by volume, predicted in fracture regions
where $w<6a$, or plug width $<2a$, given average concentration, $\Pi$, and
time, $t$.
## 8\. Crack Closure: Problem Formulation
In the zero-proppant case, Lai et al. [29] have confirmed experimentally that
for late times after the fluid pressure is released, the crack radius is
constant and volume scales as $t^{-1/3}$. It is tempting to repeat our
previous work in order to find an asymptotic solution with a generalised total
fracture volume $Qt^{\alpha}$. We would then let $\alpha=-1/3$ to model the
case of closure. This approach leads us to
(50) $\displaystyle\alpha\int_{\rho}^{1}\xi
h(\xi)d\xi+\beta\rho^{2}h=-\frac{\rho
h^{3}}{\eta_{e}}\partialderivative{p}{\rho},$
in the place of (47). Here $\beta=(3\alpha+1)/9$ is the exponent for $L$,
giving the radial growth of the fracture. However, we see that attempts to
solve (50) using the previous numerical method fail as
$(\alpha,\beta)\to(-1/3,0)$, corresponding to the case in [29]. This is
because the tip asymptotes $w\propto(1-\xi)^{2/3}$ and
$p\propto(1-\xi)^{-1/3}$ are a result of an advancing fracture in a zero-
toughness medium. Spence & Sharp [8] note that $h\sim C(1-\xi)^{\tau}$ implies
$p\sim C\tau(\cot\pi\tau)(1-\xi)^{\tau-1}$. Balancing terms in (50), we are
forced to take $C\leq 0$ if $\beta\leq 0$, which clearly cannot lead to physical
solutions, given the constraint $h\geq 0$. In the same paper, solutions for
$\beta=0$ are shown to exist without the assumption of zero-toughness; these
have $h\sim(1-\xi^{2})^{1/2}$. However, this causes difficulties in the case
of an evolving fracture, since a non-zero toughness parameter, $\aleph$,
brings time dependence to the scaled equations we have derived. An alternative
solution would be the addition of a non-zero fluid lag, providing a region of
negative pressure between the fluid front and the crack tip. Such a region
exists in reality, containing either vapour from the fracturing fluid or, if
the surrounding medium is permeable, pore fluid [30, 31]. Zero-toughness
solutions using this formulation are explored in [32]. Schematics of each
possible solution type are shown in Figure 8.
Figure 8. Possibilities for modelling the crack tip.
Any model utilising a time-independent concentration profile is likely to fail
in describing fracture closure at late times. This is because the width of the
crack is decreasing as $t^{-1/3}$, so it is bound to become comparable to the
proppant diameter. At the point where $\epsilon L/a\approx 6$, the proppants
begin to bridge across the fracture, effectively fixing them in position [28];
thereafter, concentrations will increase as the carrier fluid is forced from the
cavity. For this reason, we will instead address the problem of finding the
residual crack shape, given some axisymmetric initial distribution of
proppants; we will assume these are radially immobile from the moment pressure
is released. This method has been used with success to model the closure of a
bi-wing fracture by Wang et al. [33, 34].
## 9\. Crack Closure: Residual Width Profiles
We model the residual shape of the fracture using $w_{p}(r)$, defined as the
close packed width of proppants. That is to say, after packing the proppants
as tightly as possible in the z direction, so $\phi=\phi_{rcp}$, this is the
residual width. Given some radial distribution of proppants described by the
average concentration, $\Pi$, and un-scaled width profile, $w$, we deduce that
$w_{p}=w\Pi/\phi_{rcp}$. This description is compatible with the frictional
rheology of Lecampion & Garagash [13], used previously, which asserts that a
non-zero normal force on the proppants, along with vanishing shear stress,
causes compression up to the random close packing limit. We then assume that
the surrounding fracture simply collapses around the proppant pack. Our
primary interest will be in using proppant distributions, arising from the
injection phase described previously, to predict the geometry of the residual
formation.
In [34] a more complicated model is offered; this considers stress from the
contact of opposing crack asperities, proppant embedment into the fracture
walls, and compression of proppants. Since we will be concerned with cases
where $w_{p}$ is non-zero along the entire crack radius, the contact term
arising from the crack asperities, which is significant in the un-propped
case, will not be necessary. Furthermore, in the same paper [34] the depth of
proppant embedment is shown to be of the order
$K_{e}=a\left(3/(4E^{\prime})\right)^{2}\left(16mE^{\prime 2}/(9c_{p})\right)^{2/3}$. Here, $m\approx
2\sqrt{3}$ is a constant which depends on the packing of proppants. Using the
value of $c_{p}=3.9\times 10^{-8}\textrm{Pa}^{-1}$ [34], as well as the
typical values of $a=50\mu\textrm{m}$ and $E^{\prime}=40\textrm{GPa}$
mentioned earlier, we note that $K_{e}\approx 1\mu\textrm{m}$, around 100
times smaller than the given proppant diameter. Since we will generally model
proppant packs which are several times the size of the proppant diameter in
width, we will ignore this phenomenon. Finally, we note that, according to our
previous estimates, more than $10\textrm{s}$ into the injection phase we
should expect pressures of less than $1\textrm{MPa}$. In [34] the compressive
stress required to reduce the width of the closely packed proppant bed from
$w_{p}$ to $w$ is given by $1/c_{p}\ln(w_{p}/w)$; using this, the same stress
would only cause a $4\%$ reduction in width. Since typical stresses involved
in the closure phase are much smaller than this, we will model the proppants
as incompressible.
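These order-of-magnitude estimates are straightforward to reproduce. The sketch below restates the typical values quoted above; nothing here is new data.

```python
import math

a = 50e-6             # proppant radius, m (typical value quoted earlier)
E = 40e9              # plane-strain modulus E', Pa
m = 2 * math.sqrt(3)  # packing constant
c_p = 3.9e-8          # pack compressibility, 1/Pa  [34]

# embedment depth K_e = a (3/(4E'))^2 (16 m E'^2 / (9 c_p))^(2/3)
K_e = a * (3 / (4 * E)) ** 2 * (16 * m * E ** 2 / (9 * c_p)) ** (2 / 3)
# K_e comes out around 1e-6 m, roughly 100x smaller than the 100 um diameter

# width reduction of the pack under 1 MPa: w/w_p = exp(-c_p * sigma)
reduction = 1 - math.exp(-c_p * 1e6)      # roughly 4%
```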
This model of crack closure leads to a simple description of the residual
crack profile. We have two parameters: one for average concentration, $\Pi$,
and another for the time that injection ceases, $t_{0}$. Herein we will denote
$\\{\tilde{h},\tilde{p},\tilde{\Gamma}\\}$ as the solution to the system of
equations given in (49); $\tilde{h}$ and $\tilde{p}$ are plotted in Figure 5
and we use the value $\tilde{\Gamma}=0.598$. Then, using (48) and the original
scaling arguments, we deduce that
(51) $\displaystyle w_{p}(\xi;t_{0},\Pi)$
$\displaystyle=\frac{\Pi}{\phi_{rcp}}\epsilon(t_{0})L(t_{0})\eta_{e}(\Pi)^{2/9}\tilde{\Gamma}\tilde{h}(\xi),$
(52) $\displaystyle R(t_{0},\Pi)$
$\displaystyle=L(t_{0})\eta_{e}(\Pi)^{-1/9}\tilde{\Gamma}.$
From Figure 5 we recall that $\max(\tilde{h})=\tilde{h}(0)\approx 1.36$. Using this, we
may plot Figure 9(a), which shows the effect of average concentration on the
maximum residual width of the formation. It is interesting to note that the
propped width doesn’t grow in proportion to the proppant concentration, as one
might expect from the close packing of the suspended proppants. Instead, the
dependence is superlinear, because greater proppant concentrations lead to a
higher effective viscosity; this causes the fracture to take a wider shape
before the release of injection pressure. We can also see that $t_{0}$ has
relatively little effect on the maximum crack width. This is because the
$t_{0}$ dependent term, $\epsilon L$, grows with $t_{0}^{1/9}$. By contrast,
in Figure 9(b) we see a greater time dependence in the final radius, which
grows with $L\propto t^{4/9}$. As the proppant concentration increases, with
$t_{0}$ fixed, we see a decrease in the final radius of fracture achieved,
arising from an increase in the effective viscosity.
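The scalings (51)-(52), combined with the time dependences just quoted ($\epsilon L\propto t_{0}^{1/9}$, $L\propto t_{0}^{4/9}$), can be sketched as follows. Prefactors are set to one and $\phi_{rcp}=0.64$ is an assumed value, so only ratios between parameter choices are meaningful.

```python
import numpy as np

phi_rcp = 0.64                       # random close packing (assumed value)
h_tilde_0, Gamma_tilde = 1.36, 0.598 # from Figure 5 and table 3

def wp_max(t0, Pi, eta_e):
    # maximum residual width, eq. (51), up to a constant prefactor
    return (Pi / phi_rcp) * t0 ** (1.0 / 9.0) * eta_e ** (2.0 / 9.0) \
           * Gamma_tilde * h_tilde_0

def radius(t0, eta_e):
    # final fracture radius, eq. (52), up to a constant prefactor
    return t0 ** (4.0 / 9.0) * eta_e ** (-1.0 / 9.0) * Gamma_tilde

# a tenfold increase in injection time widens the pack by only 10**(1/9)
ratio_w = wp_max(1000.0, 0.2, 3.92) / wp_max(100.0, 0.2, 3.92)
# ...but grows the radius by the much larger factor 10**(4/9)
ratio_R = radius(1000.0, 3.92) / radius(100.0, 3.92)
```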
(a) Maximum fracture width.
(b) Fracture radius.
Figure 9. Plots showing the effect of average concentration on the maximum
residual fracture width and radius for $t_{0}\in\\{100,500,1000\\}$.
## 10\. Conclusions
We have established a mathematical framework that captures the behaviour of a
slurry within a pressure driven cavity. Using typical parameters from
industrial fracking, we predict that the development length, required to
establish stable proppant flow away from the wellbore, is negligible compared
to the typical radius of the penny-shaped fracture generated. As a result, we
may assume the flow is fully developed, reducing the in-fracture distribution
of proppants to a function of the radial distance from the wellbore. A further
assumption of constant proppant injection rate allows us to describe the
proppant distribution with one parameter, the total solid volume fraction. In
the zero-concentration limit, our model becomes similar to one derived using
Newtonian flow, with some disagreement arising from our choice of a dense
frictional rheology.
Within this framework, we are able to define an effective viscosity, which we
may absorb into our equations using a suitable choice of scaling. This is a
particularly striking result because it establishes an equivalence between
slurry flow of a given solid fraction and simple Newtonian flow with some
particular viscosity, at least in the sense of fracture development. Solving
the resulting set of equations numerically, we may then return to our original
scaling to investigate concentration-specific solutions. Unsurprisingly, we
predict width and pressure profiles with the tip-asymptotic behaviour
described in [9]. As the proppant concentration increases we expect shorter
and wider fractures with steeper fluid pressure gradients. In the centre of
the fracture, where shear rate vanishes, we predict the formation of a disk of
plug flow with width, in relation to the crack, increasing with the average
proppant concentration. Evaluating our model, we see that the unaccounted
effect of tip screen-out is likely to be significant in the low concentration,
low effective viscosity case, particularly at early times. Here, the cavity
formed is narrow, so near its tip, particle bridging is likely. Moreover, we
observe that for typical fracturing timescales, if $\Pi<0.4$, our model
predicts plug flow thinner than one particle width, suggesting that our use of
a continuum model may not be appropriate. Otherwise, the plug flow is broader
than a particle’s width, meaning it is physically realisable and the results
of [13] suggest we should have good experimental agreement.
Lastly, we have adopted a simple model of crack closure which regards the
remaining proppants to be immobile and incompressible. This allows us to
predict the shape of the residual crack, based on two parameters: the average
proppant concentration within the injected fluid and the length of time
between the initiation of fracking and the release of pressure. Simple
formulae show that the residual fracture width increases significantly with
proppant concentration, and grows very slowly with time; fracture radius
however, decreases with proppant concentration and increases with time.
The results established here have important applications in both contexts of
industrial fracking and geological dike formation. Diagnostics of tip screen-
out and forecasts of residual fracture geometry are relevant to the formation
of conductive fractures, whilst predictions about the shape and particle
distribution of a slurry driven crack relate more to a cooling magma. The
discovery of an effective viscosity may also provide a foothold in
understanding slurry driven fractures, particularly given the bounty of
literature surrounding cracks generated by Newtonian fluid. In spite of all
this, experimental investigation is necessary to bolster the predictions we
have made. We hope this will form the basis of a second article, with
tentative title: ‘Proppant flow in a penny-shaped crack. Part II :
Experimental Investigation’.
## 11\. Acknowledgements
The authors would like to thank Derek Elsworth (Pennsylvania State
University), Elisabeth Guazzelli (Centre National de la Recherche
Scientifique) and Emmanuel Detournay (University of Minnesota) for their
support and guidance in the drafting of this paper; with special gratitude to
Elisabeth for providing the data used in Figure 2. We would also like to thank
John Willis (University of Cambridge) for his support in the publication of
the paper.
## Appendix A Integrating the $\phi$ conservation equation over the crack
width
In this Appendix we integrate equation (25) over $(-w,w)$ to yield (27); we
will take a term-by-term approach. First, we note that by (24),
(53) $\displaystyle\int_{-z}^{z}\phi(\xi,\eta)d\eta$
$\displaystyle=2\int_{0}^{z}\mu^{-1}\left(\mu_{w}(\xi)\frac{\eta}{w}\right)d\eta,$
(54) $\displaystyle=2z\Pi\left(\mu_{w}(\xi)\frac{z}{w}\right).$
Hence, we see that
(55) $\displaystyle\int_{-w}^{w}\partialderivative{\phi}{\xi}d\eta$
$\displaystyle=\partialderivative{\xi}\int_{-w}^{w}\phi
d\eta-2\phi(\xi,w)\partialderivative{w}{\xi},$ (56)
$\displaystyle=2\partialderivative{\xi}\left[w\Pi\circ\mu_{w}(\xi)\right]-2\phi(\xi,w)\partialderivative{w}{\xi}.$
Then, integrating by parts, we find
(57)
$\displaystyle\int_{-w}^{w}\eta\partialderivative{\phi}{\eta}d\eta=2\left[w\phi(\xi,w)-w\Pi\circ\mu_{w}(\xi)\right].$
Furthermore, utilising the expression of $v_{r}$ given in (20) and the
condition $v_{r}(\xi,\pm w)=0$ we determine
(58) $\displaystyle\int_{-w}^{w}\partialderivative{(\xi\phi v_{r})}{\xi}d\eta$
$\displaystyle=\partialderivative{\xi}\left[\xi\int_{-w}^{w}\phi
v_{r}d\eta\right],$ (59)
$\displaystyle=-\frac{6}{\Gamma}\partialderivative{\xi}\left[\xi\partialderivative{p}{\xi}\int_{0}^{w}\phi(\xi,\eta)\int_{\eta}^{w}\frac{I(\phi(\xi,z))z}{\mu(\phi(\xi,z))}dzd\eta\right],$
(60)
$\displaystyle=-\frac{6}{\Gamma}\partialderivative{\xi}\left[\xi\partialderivative{p}{\xi}\int_{0}^{w}\int_{0}^{z}\phi(\xi,\eta)\frac{I(\phi(\xi,z))z}{\mu(\phi(\xi,z))}d\eta
dz\right],$ (61)
$\displaystyle=-\frac{6}{\Gamma}\partialderivative{\xi}\left[\xi\partialderivative{p}{\xi}\int_{0}^{w}z^{2}\Pi\left(\frac{\mu_{w}z}{w}\right)\frac{I(\phi(\xi,z))}{\mu(\phi(\xi,z))}dz\right].$
However, by (24), $\mu(\phi(\xi,z))=\mu_{w}z/w$, so
(62) $\displaystyle\int_{-w}^{w}\partialderivative{(\xi\phi v_{r})}{\xi}d\eta$
$\displaystyle=-\frac{6}{\Gamma}\partialderivative{\xi}\left[\frac{w\xi}{\mu_{w}}\partialderivative{p}{\xi}\int_{0}^{w}z\Pi\left(\frac{\mu_{w}z}{w}\right)I\circ\mu^{-1}\left(\frac{\mu_{w}z}{w}\right)dz\right],$
(63) $\displaystyle=-\frac{6}{\Gamma}\partialderivative{\xi}\left[\frac{\xi
w^{3}}{\mu_{w}(\xi)^{2}}\partialderivative{p}{\xi}\Omega\circ\mu_{w}(\xi)\right].$
Finally, we know that
(64) $\displaystyle\int_{-w}^{w}\partialderivative{(\phi
v_{z})}{\eta}d\eta=2\phi(\xi,w)v_{z}(\xi,w).$
In the original scaling we have the boundary condition
$v_{z}(x,w)=\partialderivative{w}{t}(x,t)$; in the lubrication scaling this
becomes
(65) $\displaystyle-\dot{\epsilon}Lv_{z}(\xi,w)$
$\displaystyle=\left[\dot{\epsilon}L+\epsilon\dot{L}\right]w(\xi,T)-\epsilon
L\xi\left[\frac{\dot{L}}{L}+\frac{\Gamma^{\prime}\dot{T}}{\Gamma}\right]\partialderivative{w}{\xi}+\dot{T}\partialderivative{w}{T}.$
Hence,
(66) $\displaystyle
v_{z}(\xi,w)=\frac{w}{3}-\frac{4\xi}{3}\partialderivative{w}{\xi},$
and so
(67) $\displaystyle\int_{-w}^{w}\partialderivative{(\phi
v_{z})}{\eta}d\eta=2\phi(\xi,w)\left[\frac{w}{3}-\frac{4\xi}{3}\partialderivative{w}{\xi}\right].$
Adding these terms together and making various cancellations, we derive
equation (27).
## Appendix B Zero-Concentration Limit
In this Appendix, we will compare the properties of equation (29) to the
equivalent zero-proppant equation. Modelling the flow as Newtonian instead, we
would have used the relation $\tau=\eta_{f}\dot{\gamma}$. In our choice of
scaling this becomes $\tau=\dot{\gamma}$. Hence (19.2) is replaced by
(68)
$\displaystyle\partialderivative{v_{r}}{\eta}=\frac{3\eta}{\Gamma}\partialderivative{p}{\xi},$
where $\mathbf{v}$ is the fluid velocity. With the assumption that
$\nabla\cdot v=0$, our scaled continuity equation is simply
(69) $\displaystyle\frac{1}{\Gamma\xi}\partialderivative{(\xi
v_{r})}{\xi}+\partialderivative{v_{z}}{\eta}=0.$
Integrating first over $(-w,w)$ as in Appendix A, making use of (66), (68) and
$\tau=\dot{\gamma}$, we obtain
(70)
$\displaystyle\frac{w}{3}-\frac{4\xi}{3}\partialderivative{w}{\xi}=\frac{1}{\xi\Gamma^{2}}\partialderivative{\xi}\left[\partialderivative{p}{\xi}\xi
w^{3}\right].$
Then, multiplying by $\xi$ and integrating from $\rho$ to 1, we use the
$w^{3}\partial p/\partial\xi\to 0$ limit employed to derive (29),
(71) $\displaystyle\int_{\rho}^{1}\xi wd\xi+\frac{4}{9}\rho^{2}w=-\frac{\rho
w^{3}}{3\Gamma^{2}}\partialderivative{p}{\rho}.$
In order to compare (29) and (71), we are required to find the limit of
$\Omega/(x^{2}\Pi)$ as $x\to\infty$. Explicitly we see that
(72) $\displaystyle\lim_{x\to\infty}\frac{\Omega(x)}{x^{2}\Pi(x)}$
$\displaystyle=\lim_{x\to\infty}\frac{1}{x^{3}\Pi(x)}\int_{0}^{x}\Pi(u)I\circ\mu^{-1}(u)udu,$
(73)
$\displaystyle=\lim_{x\to\infty}\int_{0}^{1}\frac{\Pi(vx)}{\Pi(x)}\cdot\frac{I\circ\mu^{-1}(vx)}{vx}\cdot
v^{2}dv,$ (74)
$\displaystyle=\int_{0}^{1}v^{2}\lim_{x\to\infty}\left[\frac{\Pi(vx)}{\Pi(x)}\right]dv,$
(75)
$\displaystyle=\int_{0}^{1}v\lim_{x\to\infty}\left[\frac{\int_{0}^{vx}\mu^{-1}(u)du}{\int_{0}^{x}\mu^{-1}(u)du}\right]dv,$
(76)
$\displaystyle=\int_{0}^{1}v^{2}\lim_{x\to\infty}\left[\frac{\mu^{-1}(vx)}{\mu^{-1}(x)}\right]dv,$
(77)
$\displaystyle=\int_{0}^{1}v^{2}\lim_{x\to\infty}\left[\frac{I^{-1}(vx)}{I^{-1}(x)}\right]dv,$
(78)
$\displaystyle=\int_{0}^{1}v^{2}\lim_{x\to\infty}\left[\frac{1+\sqrt{x}}{1+\sqrt{vx}}\right]dv,$
(79) $\displaystyle=\int_{0}^{1}v^{3/2}dv,$ (80) $\displaystyle=2/5.$
Here (74) and (77) arise from the fact $I(\phi)\sim\mu(\phi)$ as $\phi\to 0$,
because the fluid shear stress approaches the slurry shear stress. (76) comes
from L’Hôpital’s rule. We conclude that the equations governing Newtonian flow
are not the same as those in the zero-proppant slurry flow limit.
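As a quick symbolic check, the limit chain (78)-(80) can be reproduced with SymPy; this is a sketch of the final steps only.

```python
import sympy as sp

v, x = sp.symbols("v x", positive=True)

# step (78): the inner limit evaluates to 1/sqrt(v)
inner_limit = sp.limit((1 + sp.sqrt(x)) / (1 + sp.sqrt(v * x)), x, sp.oo)

# steps (79)-(80): the integrand v**2 * v**(-1/2) = v**(3/2) over (0, 1)
result = sp.integrate(v ** 2 * inner_limit, (v, 0, 1))   # equals 2/5
```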
## Appendix C Matrix $(P)_{ij}$, when $N=4$
The matrix $(P)_{ij}$ for $N=4$, as provided in [9], is given in table 4.
| | | | | j | |
---|---|---|---|---|---|---|---
| | | -1 | 0 | 1 | 2 | 3
| -1 | | 1.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
| 0 | | 0.0000 | 0.9560 | 1.2730 | 0.4101 | 0.3145
i | 1 | | 0.0000 | 0.0991 | -0.0185 | 0.4068 | 0.0610
| 2 | | 0.0000 | 0.0018 | -0.0429 | -0.0244 | 0.2293
| 3 | | 0.0000 | 0.0017 | 0.0039 | -0.0416 | -0.0141
| 4 | | 0.0000 | 0.0005 | 0.0026 | -0.0032 | -0.0372
Table 4. Matrix $(P)_{ij}$, for $N=4$.
## References
* [1] Wells, Bruce A., ed. (2007). ”Shooters”. The Petroleum Age. American Oil and Gas Historical Society. 4 (3): 8–9. ISSN 1930-5915
* [2] Charlez, Philippe A. (1997). Rock Mechanics: Petroleum Applications. Paris: Editions Technip. p. 239. ISBN 9782710805861.
* [3] National Earthquake Hazards Reduction Program (U.S.), Geological Survey (U.S.), Office of Earthquakes, Volcanoes, and Engineering, U.S. National Committee for Rock Mechanics (1983). Hydraulic Fracturing Stress Measurements. Volume 26 of International journal of rock mechanics and mining sciences and geomechanics abstracts.
* [4] Pierce, Brenda (2010). Geothermal Energy Resources. National Association of Regulatory Utility Commissioners (NARUC).
* [5] Miller, Bruce G. (2005). Coal Energy Systems. Sustainable World Series. Academic Press. p. 380. ISBN 9780124974517.
* [6] E. Rivalta, B. Taisne, A.P. Bunger, R.F. Katz (2015). A review of mechanical models of dike propagation: Schools of thought, results and future directions. Tectonophysics. Volume 638,2015. Pages 1-42. ISSN 0040-1951.
* [7] Petford, N., Koenders, M.A. (1998). Granular flow and viscous fluctuations in low Bagnold number granitic magmas. Journal of the Geological Society, 155 (5), pp. 873-881. 10.1144/gsjgs.155.5.0873
* [8] Spence, D.A., Sharp, P.W. (1985). Self-similar solution for elastohydrodynamic cavity flow. Proc. Roy. Soc. London, Ser. A (400),289–313.
* [9] A.A. Savitski, E. Detournay (2002). Propagation of a penny-shaped fluid-driven fracture in an impermeable rock: asymptotic solutions, International Journal of Solids and Structures, Volume 39, Issue 26, Pages 6311-6337.
* [10] Einstein, A. (1906). A new determination of molecular dimensions. Ann. Phys. 4 (19), 289–306.
* [11] Boyer F., Guazzelli É., Pouliquen O. (2011). Unifying suspension and granular rheology. Phys. Rev. Lett. 107 (18), 188301.
* [12] Dontsov EV, Boronin SA, Osiptsov AA, Derbyshev DY. (2019). Lubrication model of suspension flow in a hydraulic fracture with frictional rheology for shear-induced migration and jamming. Proc. R. Soc. A 475: 20190039.
* [13] Lecampion, Garagash (2014). Confined flow of suspensions modelled by a frictional rheology. J. Fluid Mech. (2014), vol. 759, pp. 197–235. Cambridge University Press 2014. doi:10.1017/jfm.2014.557
* [14] Niall J. O’Keeffe, Herbert E. Huppert & P. F. Linden (2018). Experimental exploration of fluid-driven cracks in brittle hydrogels. J. Fluid Mech., vol. 844, pp. 435–458.
* [15] Garagash, D.I., Detournay, E. (2000). The tip region of a fluid-driven fracture in an elastic medium. ASME J. Appl. Mech. 67, 183–192.
* [16] Sneddon, I.N., (1951). Fourier Transforms. McGraw-Hill, New York, NY
* [17] Rice, J.R., (1968). Mathematical analysis in the mechanics of fracture. In: Liebowitz, H. (Ed.), Fracture, an Advanced Treatise. Vol. II. Academic Press, New York, NY, pp. 191–311 (Chapter 3).
* [18] Dagois-Bohy S., Hormozi S., Guazzelli É, Pouliquen O. (2015). Rheology of dense suspensions of non-colloidal spheres in yield-stress fluids. Journal of Fluid Mechanics, 776, R2. doi:10.1017/jfm.2015.329
* [19] Garside, J., Al-Dibouni, M. R. (1977). Velocity-voidage relationships for fluidization and sedimentation in solid–liquid systems. Ind. Eng. Chem. Process Des. Dev. 16 (2), 206–214.
* [20] Richardson, J., Zaki, W. (1954) Sedimentation and fluidization: Part I. Trans. Inst. Chem. Engrs 32, 35–47.
* [21] Bacri, J.-C., Frenois, C., Hoyos, M., Perzynski, R., Rakotomalala, N. & Salin, D. (1986). Acoustic study of suspension sedimentation. Europhys. Lett. 2 (2), 123–128.
* [22] Shiozawa, S., Mcclure, M. (2016). Simulation of proppant transport with gravitational settling and fracture closure in a three-dimensional hydraulic fracturing simulator. J. Petrol. Sci. Engng, 138, 298–314.
* [23] Chen Zhixi, Chen Mian, Jin Yan, Huang Rongzun (1997). Determination of rock fracture toughness and its relationship with acoustic velocity, International Journal of Rock Mechanics and Mining Sciences, Volume 34, Issues 3–4, 1997, Pages 49.e1-49.e11, ISSN 1365-1609
* [24] Feng Liang, Mohammed Sayed, Ghaithan A. Al-Muntasheri, Frank F. Chang, Leiming Li (2016). A comprehensive review on proppant technologies. Petroleum, Volume 2, Issue 1, March 2016, Pages 26-39.
* [25] Dontsov EV, Boronin SA, Osiptsov AA, Derbyshev DY. (2019). Lubrication model of suspension flow in a hydraulic fracture with frictional rheology for shear-induced migration and jamming. Proc. R. Soc. A 475: 20190039.
* [26] Abramowitz, M., Stegun, I.A. (Eds.), (1964). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series, 55. US Govt. Print. Off., Washington, DC.
* [27] Lagarias, J. C., J. A. Reeds, M. H. Wright, & P. E. Wright (1998). Convergence Properties of the Nelder-Mead Simplex Method in Low Dimensions. SIAM Journal of Optimization. Vol. 9, Number 1, 1998, pp. 112–147.
* [28] Dontsov, E. V., Peirce, A. P. (2014). Slurry flow, gravitational settling and a proppant transport model for hydraulic fractures. J. Fluid Mech. 760, 567–590.
* [29] Ching-Yao Lai, Zhong Zheng, Emilie Dressaire, Guy Z. Ramon, Herbert E. Huppert, & Howard A. Stone (2016). Elastic Relaxation of Fluid-Driven Cracks and the Resulting Backflow. Physical Review Letters 117, 268001.
* [30] A.M. Rubin. (1993). Tensile fracture of rock at high confining pressure: implications for dike propagation. J. Geophys. Res., 98 (B9) (1993), pp. 15,919-15,935.
* [31] E. Detournay & D. Garagash (2003). The tip region of a fluid-driven fracture in a permeable elastic solid. J. Fluid Mech., 494, pp. 1-32.
* [32] D. Garagash (2006). Propagation of a plane-strain hydraulic fracture with a fluid lag: Early-time solution, International Journal of Solids and Structures 43, 5811–5835.
* [33] Jiehao Wang, Derek Elsworth & Martin K. Denison (2018). Propagation, proppant transport and the evolution of transport properties of hydraulic fractures. J. Fluid Mech., vol. 855, pp. 503–534.
* [34] Wang, J. & Elsworth, D. (2018). Role of proppant distribution on the evolution of hydraulic fracture conductivity. J. Petrol. Sci. Engng 166, 249–262.
# Multilingual and cross-lingual document classification:
A meta-learning approach
Niels van der Heijden♣ Helen Yannakoudakis♠ Pushkar Mishra♢ Ekaterina Shutova♣
♣ILLC, University of Amsterdam, the Netherlands
♠Dept. of Informatics, King’s College London, United Kingdom
♢Facebook AI, London, United Kingdom
<EMAIL_ADDRESS><EMAIL_ADDRESS>
<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
The great majority of languages in the world are considered under-resourced
for the successful application of deep learning methods. In this work, we
propose a meta-learning approach to document classification in a limited-
resource setting and demonstrate its effectiveness in two different settings:
few-shot, cross-lingual adaptation to previously unseen languages; and
multilingual joint training when limited target-language data is available
during training. We conduct a systematic comparison of several meta-learning
methods, investigate multiple settings in terms of data availability and show
that meta-learning thrives in settings with a heterogeneous task distribution.
We propose a simple, yet effective adjustment to existing meta-learning
methods which allows for better and more stable learning, and set a new state
of the art on several languages while performing on-par on others, using only
a small amount of labeled data.
## 1 Introduction
There are more than 7000 languages around the world and, of them, around 6%
account for 94% of the population (https://www.ethnologue.com/statistics).
Even for the 6% most spoken languages, very few of them possess adequate
resources for natural language research and, when they do, resources in
different domains are highly imbalanced. Additionally, human language is
dynamic in nature: new words and domains emerge continuously and hence no
model learned in a particular time will remain valid forever.
With the aim of extending the global reach of Natural Language Processing
(NLP) technology, much recent research has focused on the development of
multilingual models and methods to efficiently transfer knowledge across
languages. Among these advances are multilingual word vectors which aim to
give word-translation pairs a similar encoding in some embedding space Mikolov
et al. (2013a); Lample et al. (2017). There has also been a lot of work on
multilingual sentence and word encoders that either explicitly utilize
corpora of bi-texts Artetxe and Schwenk (2019); Lample and Conneau (2019) or
jointly train language models for many languages in one encoder Devlin et al.
(2018); Conneau et al. (2019). Although great progress has been made in cross-
lingual transfer learning, these methods either do not close the gap with
performance in a single high-resource language Artetxe and Schwenk (2019);
Conneau et al. (2019); van der Heijden et al. (2019), e.g., because of
cultural differences in languages which are not accounted for, or are
impractically expensive Lai et al. (2019).
Meta-learning, or learning to learn Schmidhuber (1987); Bengio et al. (1990);
Thrun and Pratt (1998), is a learning paradigm which focuses on the quick
adaption of a learner to new tasks. The idea is that by training a learner to
adapt quickly and from a few examples on a diverse set of training tasks, the
learner can also generalize to unseen tasks at test time. Meta-learning has
recently emerged as a promising technique for few-shot learning for a wide
array of tasks Finn et al. (2017); Koch et al. (2015); Ravi and Larochelle
(2017) including NLP Dou et al. (2019); Gu et al. (2018). To our best
knowledge, no previous work has been done in investigating meta-learning as a
framework for multilingual and cross-lingual few-shot learning. We propose
such a framework and demonstrate its effectiveness in document classification
tasks. The only current study on meta-learning for cross-lingual few-shot
learning is the one by Nooralahzadeh et al. (2020), focusing on natural
language inference and multilingual question answering. In their work, the
authors focus on applying meta-learning to learn to adapt a monolingually
trained classifier to new languages. In contrast to this work, we instead show
that, in many cases, it is more favourable to not initialize the meta-learning
process from a monolingually trained classifier, but rather reserve its
respective training data for meta-learning instead.
Our contributions are as follows: 1) We propose a meta-learning approach to
few-shot cross-lingual and multilingual adaptation and demonstrate its
effectiveness on document classification tasks over traditional supervised
learning; 2) We provide an extensive comparison of meta-learning methods on
multilingual and cross-lingual few-shot learning and release our code to
facilitate further research in the
field (https://github.com/mrvoh/meta_learning_multilingual_doc_classification);
3) We analyse the effectiveness of meta-learning under a number of different
parameter initializations and multiple settings in terms of data availability,
and show that meta-learning can effectively learn from few examples and
diverse data distributions; 4) We introduce a simple yet effective
modification to existing methods and empirically show that it stabilizes
training and converges faster to better local optima; 5) We set a new state of
the art on several languages and achieve on-par results on others using only a
small amount of data.
## 2 Meta-learning methods
Algorithm 1 Meta-training procedure.
Require: $p(\mathcal{D})$: distribution over tasks
Require: $\alpha,\beta$: step-size hyper-parameters
Initialize $\theta$
while not done do
Sample batch of tasks $\{D^{l}\}=\{(S^{l},Q^{l})\}\sim p(\mathcal{D})$
for all $(S^{l},Q^{l})$ do
Initialize $\theta_{l}^{(0)}=\theta$
for all steps k do
Compute:
$\theta_{l}^{(k+1)}=\theta_{l}^{(k)}-\alpha(\nabla_{\theta_{l}^{(k)}}\mathcal{L}_{S_{l}}(f_{\theta_{l}^{(k)}}))$
end for
end for
Update $\theta=\theta-\beta($MetaUpdate$(f_{\theta_{l}^{(K)}},Q^{l}))$
end while
Meta-learning, or learning to learn, aims to create models that can learn new
skills or adapt to new tasks rapidly from few training examples. Unlike
traditional machine learning, datasets for either training or testing, which
are referred to as meta-train and meta-test datasets, comprise many tasks
sampled from a distribution of tasks $p(\mathcal{D})$ rather than individual
data points. Each task is associated with a dataset $\mathcal{D}$ which
contains both feature vectors and ground truth labels and is split into a
support set and a query set, $\mathcal{D}=\{S,Q\}$. The support set is used
for fast adaptation and the query set is used to evaluate performance and
compute a loss with respect to model parameter initialization. Generally, some
model $f_{\theta}$ parameterized by $\theta$, often referred to as the base-
learner, is considered. A cycle of fast-adaptation on a support-set followed
by updating the parameter initialization of the base-learner based on the loss
on the query-set is called an episode. In the case of classification, the
optimal parameters maximize the probability of the true labels across multiple
batches $Q\subset\mathcal{D}$
$\displaystyle\displaystyle\theta^{*}:=arg\underset{\theta}{max}\mathbb{E}_{Q\subset\mathcal{D}}[\sum_{(x,y)\in
Q}P_{\theta}(y|x)]$ (1)
In few-shot classification/fast learning, the goal is to minimize the
prediction error on data samples with unknown labels given a small support set
for learning. Meta-training (Algorithm 1) consists of updating the parameters
of the base-learner by performing many of the formerly described episodes,
until some stop criterion is reached.
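The episode construction described above can be sketched in plain Python. The support/query sizes and uniform sampling below are illustrative assumptions, not the paper's exact protocol:

```python
import random

def make_episode(task_data, n_support=16, n_query=16, rng=random):
    """Split one task's labelled examples into disjoint support and query sets.

    task_data: list of (x, y) pairs for a single task (e.g. one language).
    """
    sample = rng.sample(task_data, n_support + n_query)
    return sample[:n_support], sample[n_support:]

def sample_batch_of_tasks(all_tasks, batch_size, rng=random):
    """Draw a batch of (support, query) episodes from the task distribution."""
    tasks = rng.sample(list(all_tasks.values()), batch_size)
    return [make_episode(t, rng=rng) for t in tasks]
```

Each (support, query) pair returned here corresponds to one episode in Algorithm 1.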
Following this procedure, the extended definition of optimal parameters is
given in Eq. 2 to include fast adaptation based on the support set. The
underlined parts mark the difference between traditional supervised-learning
and meta-learning. The optimal parameters $\theta^{*}$ are obtained by solving
$\displaystyle\scriptstyle
arg\underset{\theta}{max}\underline{\mathbb{E}_{l\subset
L}[}\mathbb{E}_{\underline{S^{l}\subset\mathcal{D}},Q^{l}\subset\mathcal{D}}[\sum_{(x,y)\in
Q^{l}}P_{\theta}(y|x,\underline{S^{l}})]\underline{]}$ (2)
In this work, we focus on metric- and optimization-based meta-learning
algorithms. In the following sections, their respective characteristics and
the update methods in Algorithm 1 are introduced.
### 2.1 Prototypical Networks
Prototypical Networks Snell et al. (2017) belong to the metric-based family of
meta-learning algorithms. Typically they consist of an embedding network
$f_{\theta}$ and a distance function $d(x_{1},x_{2})$ such as Euclidean
distance. The embedding network is used to encode all samples in the support
set $S_{c}$ and compute prototypes $\mu_{c}$ per class $c\in C$ by computing
the mean of the sample encodings of that respective class
$\displaystyle\mu_{c}:=\frac{1}{|S_{c}|}\sum_{(x_{i},y_{i})\in
S_{c}}f_{\theta}(x_{i})$ (3)
Using the computed prototypes, Prototypical Networks classify a new sample as
$\displaystyle
p(y=c|x)=\frac{\exp(-d(f_{\theta}(x),\mu_{c}))}{\sum_{c^{\prime}\in
C}\exp(-d(f_{\theta}(x),\mu_{c^{\prime}}))}$ (4)
Wang et al. (2019) show that despite their simplicity, Prototypical Networks
can perform on par or better than other state-of-the-art meta-learning methods
when all sample encodings are centered around the overall mean of all classes
and consecutively L2-normalized. We also adopt this strategy.
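Eqs. 3 and 4 amount to only a few lines of code. The sketch below uses plain Python lists with Euclidean distance, and replaces the encoder $f_{\theta}$ by an identity function for illustration (an assumption; in the paper the encoder is XLM-RoBERTa):

```python
import math
from collections import defaultdict

def prototypes(support, encode=lambda x: x):
    """Eq. 3: per-class mean of encoded support samples."""
    by_class = defaultdict(list)
    for x, y in support:
        by_class[y].append(encode(x))
    protos = {}
    for c, vecs in by_class.items():
        dim = len(vecs[0])
        protos[c] = [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]
    return protos

def classify(x, protos, encode=lambda x: x):
    """Eq. 4: softmax over negative Euclidean distances to the prototypes."""
    z = encode(x)
    dist = {c: math.dist(z, mu) for c, mu in protos.items()}
    denom = sum(math.exp(-d) for d in dist.values())
    return {c: math.exp(-d) / denom for c, d in dist.items()}
```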
### 2.2 MAML
Model-Agnostic Meta-Learning (MAML) Finn et al. (2017) is an optimization-
based method that uses the following objective function
$\displaystyle\theta^{*}:=arg\underset{\theta}{min}\sum_{D_{l}\sim
p(D)}\mathcal{L}_{l}(f_{\theta_{l}^{(k)}})$ (5)
$\mathcal{L}_{l}(f_{\theta_{l}^{(k)}})$ is the loss on the query set after
updating the base-learner for $k$ steps on the support set. Hence, MAML
directly optimizes the base-learner such that fast-adaptation of $\theta$,
often referred to as inner-loop optimization, results in task-specific
parameters $\theta_{l}^{(k)}$ which generalize well on the task. Setting $B$
as the batch size, MAML implements its MetaUpdate, which is also referred to
as outer-loop optimization, as
$\displaystyle\theta=\theta-\beta\frac{1}{B}\sum_{D_{l}\sim
p(\mathcal{D})}(\nabla_{\theta}\mathcal{L}_{l}(f_{\theta_{l}^{(k)}}))$ (6)
Such a MetaUpdate requires computing second order derivatives and, in turn,
holding $\theta_{l}^{(j)}\forall j=1,\dots,k$ in memory. A first-order
approximation of MAML (foMAML), which ignores second order derivatives, can be
used to bypass this problem:
$\displaystyle\theta=\theta-\beta\frac{1}{B}\sum_{D_{l}\sim
p(\mathcal{D})}(\nabla_{\theta_{l}^{(k)}}\mathcal{L}_{l}(f_{\theta_{l}^{(k)}}))$
(7)
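As a concrete toy instance of Eq. 7, the sketch below runs foMAML on scalar tasks with losses $\mathcal{L}_{l}(\theta)=(\theta-a_{l})^{2}$. The quadratic tasks and learning rates are illustrative assumptions, not the paper's setup:

```python
def grad(theta, a):
    """Gradient of the toy task loss L_a(theta) = (theta - a)^2."""
    return 2.0 * (theta - a)

def fomaml_step(theta, task_targets, k=3, alpha=0.1, beta=0.1):
    """One outer-loop update (Eq. 7) over a batch of toy tasks.

    Inner loop: k gradient steps from theta on each task's support loss.
    Outer loop: first-order update using the query-loss gradient evaluated
    at the adapted parameters theta_l^(k) (second-order terms are ignored).
    """
    outer_grads = []
    for a in task_targets:
        theta_l = theta
        for _ in range(k):                     # fast adaptation (inner loop)
            theta_l -= alpha * grad(theta_l, a)
        outer_grads.append(grad(theta_l, a))   # query gradient at theta_l^(k)
    return theta - beta * sum(outer_grads) / len(outer_grads)
```

Repeated outer steps drive $\theta$ towards an initialization from which each task can be reached quickly; for these symmetric quadratics that is the mean of the task optima.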
Following previous work Antoniou et al. (2018), we also adopt the following
improvements in our framework for all MAML-based methods:
#### Per-step Layer Normalization weights
Layer normalization weights and biases are not updated in the inner-loop.
Sharing one set of weights and biases across inner-loop steps implicitly
assumes that the feature distribution between layers stays the same at every
step of the inner optimization.
#### Per-layer per-step learnable inner-loop learning rate
Instead of using a shared learning rate for all parameters, the authors
propose to initialize a learning rate per layer and per step and jointly learn
their values in the MetaUpdate steps.
#### Cosine annealing of outer-loop learning rate
It has shown to be crucial to model performance to anneal the learning rate
using some annealing function Loshchilov and Hutter (2016).
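The annealing function referred to above can be written as a standard cosine schedule; the specific $\eta_{max}$, $\eta_{min}$ and horizon below are assumed hyper-parameters, not values from the paper:

```python
import math

def cosine_annealed_lr(step, total_steps, lr_max, lr_min=0.0):
    """Cosine-anneal the learning rate from lr_max (step 0) to lr_min (final step)."""
    t = min(step, total_steps) / total_steps
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * t))
```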
### 2.3 Reptile
Reptile Nichol et al. (2018) is a first-order optimization-based meta-learning
algorithm which is designed to move the weights towards a manifold of the
weighted averages of task-specific parameters $\theta_{l}^{(k)}$:
$\displaystyle\theta=\theta-\beta\frac{1}{B}\sum_{D^{l}\sim
p(\mathcal{D})}(\theta-\theta_{l}^{(k)})$ (8)
Despite its simplicity, it has shown competitive or superior performance
against MAML, e.g., on Natural Language Understanding Dou et al. (2019).
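Reptile differs from foMAML only in its outer update: it moves $\theta$ towards the average of the task-adapted parameters instead of applying a query-loss gradient. A toy sketch on the same quadratic tasks (an illustrative assumption, not the paper's setup):

```python
def reptile_step(theta, task_targets, k=3, alpha=0.1, beta=0.5):
    """One Reptile outer update: move theta towards the average of the
    task-adapted parameters theta_l^(k)."""
    adapted = []
    for a in task_targets:
        theta_l = theta
        for _ in range(k):                        # inner-loop SGD on task l
            theta_l -= alpha * 2.0 * (theta_l - a)
        adapted.append(theta_l)
    mean_adapted = sum(adapted) / len(adapted)
    return theta + beta * (mean_adapted - theta)
```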
### 2.4 ProtoMAML
Triantafillou et al. (2020) introduce ProtoMAML as a meta-learning method
which combines the complementary strengths of Prototypical Networks and MAML
by leveraging the inductive bias of the use of prototypes instead of random
initialization of the final linear layer of the network. Snell et al. (2017)
show that Prototypical Networks are equivalent to a linear model when
Euclidean distance is used. Using the definition of prototypes $\mu_{c}$ as
per Eq. 3, the weights $w_{c}$ and bias $b_{c}$ corresponding to class $c$ can
be computed as follows
$\displaystyle\mathbf{w}_{c}:=2\mu_{c}\qquad b_{c}:=-\mu_{c}^{T}\mu_{c}$ (9)
ProtoMAML is defined as the adaptation of MAML where the final linear layer is
parameterized as per Eq. 9 at the start of each episode using the support set.
Due to this initialization, it allows modeling a varying number of classes per
episode.
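Eq. 9 can be checked directly: with $w_{c}=2\mu_{c}$ and $b_{c}=-\mu_{c}^{T}\mu_{c}$, the linear score $w_{c}^{T}\hat{x}+b_{c}$ equals $-\|\hat{x}-\mu_{c}\|^{2}$ plus a class-independent term $\|\hat{x}\|^{2}$, so the prototype-initialized head ranks classes exactly like the prototype distances. A small self-contained sketch (toy prototypes, not from the paper):

```python
def head_from_prototypes(protos):
    """Eq. 9: build final-layer weights and biases from class prototypes."""
    weights = {c: [2.0 * m for m in mu] for c, mu in protos.items()}
    biases = {c: -sum(m * m for m in mu) for c, mu in protos.items()}
    return weights, biases

def linear_scores(x, weights, biases):
    return {c: sum(w * xi for w, xi in zip(weights[c], x)) + biases[c]
            for c in weights}

def neg_sq_dist(x, protos):
    return {c: -sum((xi - mi) ** 2 for xi, mi in zip(x, mu))
            for c, mu in protos.items()}
```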
#### ProtoMAMLn
Inspired by Wang et al. (2019), we propose a simple, yet effective adaptation
to ProtoMAML by applying $L_{2}$ normalization to the prototypes themselves,
referred to as ProtoMAMLn, and, again, use a first-order approximation
(foProtoMAMLn). We demonstrate that doing so leads to a more stable, faster
and effective learning algorithm at only constant extra computational cost
($\mathcal{O}(1)$).
We hypothesize the normalization to be particularly beneficial in case of a
relatively high-dimensional final feature space – in case of BERT-like models
typically 768 dimensions. Let $x$ be a sample and $\hat{x}=f_{\theta}(x)$ be
the encoding of the sample in the final feature space. Since the final
activation function is the tanh activation, all entries of both $\hat{x}$ and
$\mu_{c}$ have values between -1 and 1. The pre-softmax activation for class
$c$ is computed as $\hat{x}^{T}\mu_{c}$. Due to the size of the vectors and
the scale of their respective entries, this inner product can yield a wide range
of values, which in turn results in relatively high loss values, making the
inner-loop optimization unstable.
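The ProtoMAMLn modification is then a one-line change: L2-normalize each prototype before building the head as in Eq. 9. A minimal sketch (the toy prototypes are assumptions; note that for unit-norm prototypes the bias $-\mu_{c}^{T}\mu_{c}$ reduces to $-1$ for every class):

```python
import math

def l2_normalize(v, eps=1e-12):
    """Scale a vector to unit L2 norm (the ProtoMAMLn modification)."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / max(norm, eps) for x in v]

def protomaml_n_head(protos):
    """Eq. 9 applied to L2-normalized prototypes."""
    normed = {c: l2_normalize(mu) for c, mu in protos.items()}
    weights = {c: [2.0 * m for m in mu] for c, mu in normed.items()}
    biases = {c: -1.0 for c in normed}   # -mu^T mu = -1 for unit-norm mu
    return weights, biases
```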
## 3 Related work
### 3.1 Multilingual NLP
Just as the deep learning era for monolingual NLP started with the invention
of dense, low-dimensional vector representations for words Mikolov et al.
(2013b) so did cross-lingual NLP with works like those of Mikolov et al.
(2013a); Faruqui et al. (2014). More recently, multilingual and/or cross-
lingual NLP is approached by training one shared encoder for multiple
languages at once, either by explicitly aligning representations with the use
of parallel corpora Artetxe and Schwenk (2019); Lample and Conneau (2019) or
by jointly training on some monolingual language model objective, such as the
Masked Language Model (MLM) Devlin et al. (2018), in multiple languages Devlin
et al. (2018); Conneau et al. (2019).
The formerly described language models aim to create a shared embedding space
for multiple languages with the hope that fine-tuning in one language does not
degrade performance in others. Lai et al. (2019) argue that just aligning
languages is not sufficient to generalize performance to new languages due to
the phenomenon they describe as domain drift. Domain drift accounts for all
differences for the same tasks in different languages which cannot be captured
by a perfect translation system, such as differences in culture. They instead
propose a multi-step approach which utilizes a multilingual teacher trained
with Unsupervised Data Augmentation (UDA) Xie et al. (2019) to create labels
for a student model that is pretrained on large amounts of unlabeled data in
the target language and domain using the MLM objective. With their method, the
authors obtain state-of-the-art results on the MLDoc document classification
task Schwenk and Li (2018) and the Amazon Sentiment Polarity Review task
Prettenhofer and Stein (2010). A downside, however, is the high computational
cost involved. For every language and domain combination: 1) a machine
translation system has to be run on a large amount of unlabeled samples;
2) the UDA method needs to be applied to obtain a teacher model to generate
pseudo-labels on the unlabeled in-domain data; 3) a language model must be
finetuned, which involves forwards and backwards computation of a softmax
function over a large output space (e.g., 50k tokens for mBERT and 250k tokens
for XLM-RoBERTa). The final classifier is then obtained by 4) training the
finetuned language model on the pseudo-labels generated by the teacher.
### 3.2 Meta-learning in NLP
#### Monolingual
Bansal et al. (2019) apply meta-learning to a wide range of NLP tasks within a
monolingual setting and show superior performance for parameter initialization
over self-supervised pretraining and multi-task learning. Their method is an
adaptation of MAML where a combination of a text-encoder, BERT Devlin et al.
(2018), is coupled with a parameter generator that learns to generate task-
dependent initializations of the classification head such that meta-learning
can be performed across tasks with disjoint label spaces. Obamuyide and
Vlachos (2019b) apply meta-learning on the task of relation extraction;
Obamuyide and Vlachos (2019a) apply lifelong meta-learning for relation
extraction; Chen et al. (2019) apply meta-learning for few-shot learning on
missing link prediction in knowledge graphs.
#### Multilingual
Gu et al. (2018) apply meta-learning to Neural Machine Translation (NMT) and
show its advantage over strong baselines such as cross-lingual transfer
learning. By viewing each language pair as a task, the authors apply MAML to
obtain competitive NMT systems with as little as 600 parallel sentences. To
our best knowledge, the only application of meta-learning for cross-lingual
few-shot learning is the one by Nooralahzadeh et al. (2020). The authors study
the application of X-MAML, a MAML-based variant, to cross-lingual Natural
Language Inference (XNLI) Conneau et al. (2018) and Multilingual Question
Answering (MLQA) Lewis et al. (2019) in both a cross-domain and cross-language
setting. X-MAML works by pretraining some model $M$ on a high-resource task
$h$ to obtain initial model parameters $\theta_{mono}$. Consecutively, a set
$L$ of one or more auxiliary languages is taken, and MAML is applied to
achieve fast adaptation of $\theta_{mono}$ for $l\in L$. In their experiments,
the authors use either one or two auxiliary languages and evaluate their
method in both a zero- and few-shot setting. It should be noted that, in the
few-shot setting, the full development set (2.5k instances) is used to
finetune the model, which is not in line with other work on few-shot learning,
such as Bansal et al. (2019). Also, there is a discrepancy in the training set
used for the baselines and their proposed method. All reported baselines are
either zero-shot evaluations of $\theta_{mono}$ or of $\theta_{mono}$
finetuned on the development set of the target language, whereas their
proposed method additionally uses the development set in either one or two
auxiliary languages during meta-training.
MetaUpdate Method | Num inner-loop steps | Inner-loop lr | Class-head lr multiplier | Inner-optimizer lr
---|---|---|---|---
Reptile | 2,3,5 | 1e-5, 5e-5, 1e-4 | 1, 10 | -
foMAML | 2,3,5 | 1e-5, 1e-4, 1e-3 | 1, 10 | 3e-5, 6e-5, 1e-4
foProtoMAMLn | 2,3,5 | 1e-5, 1e-4, 1e-3 | 1, 10 | 3e-5, 6e-5, 1e-4
Table 1: Search range per hyper-parameter. We consider the number of update
steps in the inner-loop, Num inner-loop steps, the (initial) learning rate of
the inner-loop, Inner-loop lr, the factor by which the learning rate of the
classification head is multiplied, Class-head lr multiplier, and, if
applicable, the learning rate with which the inner-loop optimizer is updated,
Inner-optimizer lr. The chosen value is underlined.
## 4 Data
In this section, we give an overview of the datasets we use and the respective
classification tasks.
#### MLDoc
Schwenk and Li (2018) published an improved version of the Reuters Corpus
Volume 2 Lewis et al. (2004) with balanced class priors for all languages.
MLDoc consists of news stories in 8 languages: English, Spanish, French,
Italian, Russian, Japanese and Chinese. Each news story is manually classified
into one of four groups: Corporate/Industrial, Economics, Government/Social
and Markets. The train datasets contain 10k samples whereas the test sets
contain 4k samples.
#### Amazon Sentiment Polarity
Another widely used dataset for cross-lingual text classification is the
Amazon Sentiment Analysis dataset Prettenhofer and Stein (2010). The dataset
is a collection of product reviews in English, French, German and Japanese in
three categories: books, DVDs and music. Each sample consists of the original
review accompanied by meta-data such as the rating of the reviewed product
expressed as an integer on a scale from one to five. In this work, we consider
the sentiment polarity task where we distinguish between positive (rating $>$
3) and negative (rating $<$ 3) reviews. When all product categories are
concatenated, the dataset consists of 6k samples per language per split
(train, test). We extend this with Chinese product reviews in the cosmetics
domain from JD.com Zhang et al. (2015), a large e-commerce website in China.
The train and test sets contain 2k and 20k samples respectively.
## 5 Experiments
We use XLM-RoBERTa Conneau et al. (2019), a strong multilingual model, as the
base-learner in all models. We quantify the strengths and weaknesses of meta-
learning as opposed to traditional supervised learning in both a cross- and a
multilingual joint-training setting with limited resources.
#### Cross-lingual adaptation
Here, the available data is split into multiple subsets: the auxiliary
languages $l_{aux}$ which are used in meta-training, the validation language
$l_{dev}$ which is used to monitor performance, and the target languages
$l_{tgt}$ which are kept unseen until meta-testing. Two scenarios in terms of
amounts of available data are considered. A small sample of the available
training data of $l_{aux}$ is taken to create a limited-resource setting,
whereas all available training data of $l_{aux}$ is used in a high-resource
setting. The chosen training data per language is split evenly and stratified
over two disjoint sets from which the meta-training support and query samples
are sampled, respectively. For meta-testing, one batch (16 samples) is taken
from the training data of each target language as support set, while we test
on the whole test set per target language (i.e., the query set).
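The meta-test protocol above (one 16-sample support batch from each target language's training data, the full test set as query) can be sketched as follows; the `model_adapt` and `model_eval` callables are placeholders (assumptions), not the paper's implementation:

```python
import random

def meta_test(model_adapt, model_eval, train_data, test_data,
              support_size=16, rng=random):
    """Evaluate cross-lingual transfer per target language.

    model_adapt(support) should return an adapted model; model_eval(model,
    query) should return a score. Both stand in for the actual base-learner.
    """
    results = {}
    for lang in test_data:
        support = rng.sample(train_data[lang], support_size)  # one batch
        adapted = model_adapt(support)                        # fast adaptation
        results[lang] = model_eval(adapted, test_data[lang])  # full test set
    return results
```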
#### Multilingual joint training
We also investigate meta-learning as an approach to multilingual joint-
training in the same limited-resource setting as previously described for the
cross-lingual experiments. The difference is that instead of learning to
generalize to $l_{tgt}\neq l_{aux}$ from few examples, here $l_{tgt}=l_{aux}$.
If we can show that one can learn many similar tasks across languages from few
examples per language, using a total number of examples in the same order of
magnitude as in “traditional” supervised learning for training a monolingual
classifier, this might be an incentive to change data collection processes in
practice.
For both experimental settings above, we examine the influence of additionally
using all training data from a high-resource language $l_{src}$ during meta-
training, English.
$\mathbf{l_{src}}$ = en | Method | Limited-resource setting | High-resource setting | |
---|---|---|---|---|---
de | fr | it | ja | ru | zh | $\Delta$ | de | fr | it | ja | ru | zh | $\Delta$
Excluded | Non-episodic | 82.0 | 86.7 | 68.3 | 71.9 | 70.9 | 81.0 | 76.8 | 95.3 | 90.9 | 80.9 | 82.9 | 74.5 | 89.6 | 85.7
ProtoNet | 90.5 | 85.0 | 76.6 | 75.0 | 69.6 | 82.0 | 79.8 | 95.5 | 91.7 | 82.0 | 82.2 | 76.6 | 87.4 | 85.9
foMAML | 89.7 | 85.5 | 74.1 | 74.1 | 74.0 | 83.2 | 80.1 | 95.0 | 91.4 | 81.4 | 82.7 | 76.9 | 87.8 | 86.1
foProtoMAMLn | 90.6 | 86.2 | 77.8 | 75.6 | 73.6 | 83.8 | 80.7 | 95.6 | 92.1 | 82.6 | 83.1 | 77.9 | 88.9 | 86.7
Reptile | 87.9 | 81.8 | 72.7 | 74.4 | 73.9 | 80.9 | 78.6 | 95.0 | 90.1 | 81.1 | 82.7 | 72.5 | 88.7 | 85.0
Included | Zero-shot | 92.4 | 92.1 | 80.3 | 81.0 | 71.7 | 89.1 | 84.4 | 92.4 | 92.1 | 80.3 | 81.0 | 71.7 | 89.1 | 84.4
Non-episodic | 93.7 | 91.3 | 81.5 | 80.6 | 71.1 | 88.4 | 84.4 | 93.7 | 92.9 | 82.4 | 82.3 | 72.1 | 90.1 | 85.6
ProtoNet | 93.4 | 91.9 | 79.1 | 81.3 | 72.2 | 87.8 | 84.5 | 95.0 | 91.7 | 81.1 | 82.7 | 72.0 | 88.0 | 85.9
foMAML | 95.1 | 91.2 | 79.5 | 79.6 | 73.3 | 89.7 | 84.6 | 94.8 | 93.2 | 79.9 | 82.4 | 75.7 | 90.6 | 86.1
foProtoMAMLn | 94.9 | 91.7 | 81.5 | 81.4 | 75.2 | 89.9 | 85.5 | 95.8 | 94.1 | 82.7 | 83.0 | 81.2 | 90.4 | 87.9
| Reptile | 92.3 | 91.4 | 79.7 | 79.5 | 71.8 | 88.1 | 83.8 | 94.8 | 91.0 | 80.2 | 82.0 | 72.7 | 89.9 | 85.1
Table 2: Average accuracy of 5 different seeds on the unseen target languages
for MLDoc. $\Delta$ corresponds to the average accuracy across test languages.
### 5.1 Specifics per dataset
#### MLDoc
As MLDoc has sufficient languages, we set $l_{src}=$ English and $l_{dev}=$
Spanish. The remaining languages are split in two groups:
$l_{aux}=\\{\textrm{German, Italian, Japanese}\\}$; and
$l_{tgt}=\\{\textrm{French, Russian, Chinese}\\}$. In the limited-resource
setting, we randomly sample 64 samples per language in $l_{aux}$ for training.
Apart from comparing low- and high-resource settings, we also quantify the
influence of augmenting the training set $l_{aux}$ with a high-resource source
language $l_{src}$, English.
#### Amazon Sentiment Polarity
The fact that the Amazon dataset (augmented with Chinese) comprises only
five languages has some implications for our experimental design. In the
cross-lingual experiments, where $l_{aux}$, $l_{dev}$ and $l_{tgt}$ should be
disjoint, only three languages, including English, remain for meta-training.
As we consider two languages too little data for meta-training, we do not
experiment with leaving out the English data. Hence, for meta-training, the
data consists of $l_{src}=$ English, as well as two languages in $l_{aux}$. We
always keep one language unseen until meta-testing, and alter $l_{aux}$ such
that we can meta-test on every language. We set $l_{dev}=$ French in all cases
except when French is used as the target language; then, $l_{dev}=$ Chinese.
In the limited-resource setting, a total of 128 samples per language in
$l_{aux}$ is used.
For the multilingual joint-training experiments there are enough languages
available to quantify the influence of English during meta-training. When
English is excluded, it is used for meta-validation. When included, we average
results over two sets of experiments: one where $l_{dev}=$ French and one
where $l_{dev}=$ Chinese.
| Method | de | fr | ja | zh | $\Delta$ | de | fr | ja | zh | $\Delta$ |
|---|---|---|---|---|---|---|---|---|---|---|
| Zero-shot | 91.2 | 90.7 | 87.0 | 84.6 | 88.4 | 91.2 | 90.7 | 87.0 | 84.6 | 88.4 |
| Non-episodic | 90.9 | 90.6 | 86.1 | 86.9 | 88.6 | 91.6 | 91.0 | 85.5 | 87.9 | 89.0 |
| ProtoNet | 89.7 | 90.2 | 86.6 | 85.2 | 87.9 | 90.7 | 92.0 | 86.7 | 84.0 | 88.4 |
| foMAML | 88.3 | 90.5 | 86.8 | 88.1 | 88.4 | 91.4 | 92.5 | 88.0 | 90.4 | 90.6 |
| foProtoMAMLn | 89.0 | 91.1 | 87.3 | 88.8 | 89.1 | 92.0 | 93.1 | 88.6 | 89.8 | 90.9 |
| Reptile | 88.1 | 87.9 | 86.8 | 87.5 | 87.6 | 90.6 | 91.7 | 87.3 | 86.2 | 89.0 |

Table 3: Average accuracy of 5 different seeds on the unseen target languages
for Amazon; the first block of columns reports the limited-resource setting and
the second block the high-resource setting. $\Delta$ corresponds to the average
accuracy across test languages.
### 5.2 Baselines
We introduce baselines trained in a standard supervised, non-episodic fashion.
Again, we use XLM-RoBERTa-base as the base-learner in all models.
#### Zero-shot
This baseline assumes sufficient training data for the task to be available in
one language $l_{src}$ (English). The base-learner is trained in a non-
episodic manner using mini-batch gradient descent with cross-entropy loss.
Performance is monitored during training on a held-out validation set in
$l_{src}$, the model with the lowest loss is selected, and then evaluated on
the same task in the target languages.
#### Non-episodic
The second baseline aims to quantify the exact impact of learning a model
through the meta-learning paradigm versus standard supervised learning. The
model learns from exactly the same data as the meta-learning algorithms, but
in a non-episodic manner: i.e., merging support and query sets in $l_{aux}$
(and $l_{src}$ when included) and training using mini-batch gradient descent
with cross-entropy loss. During testing, the trained model is independently
finetuned for 5 steps on the support set (one mini-batch) of each target
language $l_{tgt}$.
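This test-time adaptation can be sketched as follows, with a generic PyTorch classifier standing in for XLM-RoBERTa; the deep copy, SGD optimizer, and function name are illustrative choices, not the paper's exact configuration:

```python
import copy

import torch
import torch.nn as nn

def finetune_on_support(model, support_x, support_y, steps=5, lr=3e-5):
    """Finetune a copy of the trained model for a few gradient steps on
    the support set (one mini-batch) of a target language, leaving the
    original model untouched for the next language."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    adapted.train()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(adapted(support_x), support_y)
        loss.backward()
        opt.step()
    return adapted
```

Copying the model before adapting keeps the evaluation of each target language independent, mirroring the "independently finetuned" phrasing above.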
### 5.3 Training setup and hyper-parameters
We use the Ranger optimizer, an adapted version of Adam Kingma and Ba (2014)
with improved stability at the beginning of training – by accounting for the
variance in adaptive learning rates Liu et al. (2019) – and improved
robustness and convergence speed Zhang et al. (2019); Yong et al. (2020). We
use a batch size of 16 and a learning rate of 3e-5 to which we apply cosine
annealing. For meta-training, we perform 100 epochs of 100 episodes and
perform evaluation with 5 different seeds on the meta-validation set after
each epoch. One epoch consists of 100 update steps where each update step
consists of a batch of 4 episodes. Early-stopping with a patience of 3 epochs
is performed to avoid overfitting. For the non-episodic baselines, we train
for 10 epochs on the auxiliary languages while validating after each epoch.
All models are created using the PyTorch library Paszke et al. (2017) and
trained on a single 24 GB NVIDIA Titan RTX GPU.
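The outer training loop described above can be sketched as below. AdamW stands in for Ranger here, and the per-step episode loss and the validation routine are placeholders supplied by the caller; all names are illustrative:

```python
import torch
import torch.nn as nn

def train_with_early_stopping(model, episode_loss, validate,
                              max_epochs=100, steps_per_epoch=100,
                              lr=3e-5, patience=3):
    """Meta-training loop sketch: cosine-annealed learning rate,
    validation after each epoch, early stopping with fixed patience."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=max_epochs)
    best, bad = float("inf"), 0
    for epoch in range(max_epochs):
        for _ in range(steps_per_epoch):      # each step: a batch of episodes
            opt.zero_grad()
            episode_loss(model).backward()
            opt.step()
        sched.step()
        val = validate(model)                 # e.g. averaged over 5 seeds
        if val < best:
            best, bad = val, 0
        else:
            bad += 1
            if bad >= patience:               # stop after `patience` bad epochs
                break
    return best
```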
We perform grid search on MLDoc in order to determine optimal hyperparameters
for the MetaUpdate methods. The hyper-parameters resulting in the lowest loss
on $l_{dev}=$ Spanish are used in all experiments. The number of update steps
in the inner-loop is 5; the (initial) learning rate of the inner-loop is 1e-5
for MAML and ProtoMAML and 5e-5 for Reptile; the factor by which the learning
rate of the classification head is multiplied is 10 for MAML and ProtoMAML and
1 for Reptile; when applicable, the learning rate with which the inner-loop
optimizer is updated is 6e-5. See Table 1 for the considered grid.
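A plain grid search over the inner-loop hyper-parameters can be sketched as follows; the grid values shown are illustrative (the actual search space is the one given in Table 1), and `evaluate` is a placeholder that meta-trains with a config and returns the validation loss on $l_{dev}$:

```python
from itertools import product

# Hypothetical grid; the actual search space is given in Table 1.
grid = {
    "inner_steps":  [5, 10],
    "inner_lr":     [1e-5, 5e-5, 1e-4],
    "head_lr_mult": [1, 10],
}

def grid_search(evaluate):
    """Return the config with the lowest validation loss on l_dev."""
    best_cfg, best_loss = None, float("inf")
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        loss = evaluate(cfg)   # meta-train + validate, e.g. on l_dev = Spanish
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg
```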
| $\mathbf{l_{src}}$ = en | Method | de | fr | ja | zh | $\Delta$ | de | fr | it | ja | ru | zh | $\Delta$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Excluded | Non-episodic | 88.4 | 88.6 | 85.7 | 88.2 | 87.7 | 92.8 | 89.1 | 81.2 | 83.2 | 84.0 | 87.4 | 86.3 |
| | ProtoNet | 86.7 | 88.0 | 86.2 | 87.3 | 87.1 | 89.7 | 87.6 | 80.5 | 82.2 | 80.6 | 85.2 | 84.3 |
| | foMAML | 88.3 | 87.5 | 84.6 | 89.1 | 86.3 | 94.1 | 89.7 | 81.5 | 84.2 | 77.6 | 87.5 | 85.8 |
| | foProtoMAMLn | 88.9 | 89.5 | 86.5 | 89.0 | 88.5 | 94.8 | 89.5 | 81.5 | 84.8 | 81.0 | 88.7 | 86.6 |
| | Reptile | 86.1 | 86.3 | 82.9 | 87.0 | 85.6 | 92.4 | 88.2 | 80.5 | 82.5 | 79.5 | 87.8 | 85.3 |
| Included | Non-episodic | 91.0 | 91.0 | 87.3 | 89.4 | 89.8 | 94.9 | 92.1 | 84.7 | 84.8 | 83.7 | 91.4 | 88.6 |
| | ProtoNet | 90.3 | 91.3 | 87.5 | 88.7 | 89.5 | 95.5 | 91.7 | 83.4 | 85.1 | 82.8 | 88.3 | 87.8 |
| | foMAML | 90.1 | 90.7 | 87.2 | 89.5 | 89.4 | 95.1 | 92.5 | 83.1 | 84.9 | 84.3 | 90.6 | 88.4 |
| | foProtoMAMLn | 90.7 | 91.5 | 88.0 | 90.4 | 90.2 | 96.0 | 93.6 | 85.0 | 85.7 | 84.8 | 90.8 | 89.3 |
| | Reptile | 90.0 | 89.5 | 86.5 | 87.6 | 88.4 | 94.4 | 93.1 | 83.8 | 85.2 | 83.6 | 90.4 | 88.4 |

Table 4: Average accuracy of 5 different seeds on the target languages in the
joint-training setting; the first block of columns reports Amazon and the
second block MLDoc. $\Delta$ corresponds to the average accuracy across test
languages.
## 6 Results
#### Cross-lingual adaptation
Tables 2 and 3 show the accuracy scores on the target languages on MLDoc and
Amazon respectively. We start by noting the strong multilingual capabilities
of XLM-RoBERTa as our base-learner: Adding the full training datasets in three
extra languages (i.e., comparing the zero-shot with the non-episodic baseline
in the high-resource, ‘Included’ setting) results in a mere 1.2% points
increase in accuracy on average for MLDoc and 0.6% points for Amazon. Although
the zero-shot and non-episodic baselines are strong (the zero-shot baseline is
only applicable in the ‘Included’ setting, as the English data is not available
under ‘Excluded’), in the majority of cases a meta-learning
approach improves performance. This holds especially for our version of
ProtoMAML (ProtoMAMLn), which achieves the highest average accuracy in all
considered settings.
The substantial improvements for Russian on MLDoc and Chinese on Amazon
indicate that meta-learning is most advantageous when the considered task
distribution is somewhat heterogeneous or, in other words, when domain drift
Lai et al. (2019) is present. For the Chinese data used for the sentiment
polarity task, the presence of domain drift is obvious as the data is
collected from a different website and concerns different products than the
other languages. For Russian in the MLDoc dataset, the non-episodic baseline
shows the smallest gain in performance when adding English data ($l_{src}$) in
the limited-resource setting (0.2% absolute gain as opposed to 5.7% on average
for the remaining languages) and even a decrease of 2.4% points when adding
English data in the high-resource setting. Especially for these languages with
domain drift, our version of ProtoMAML (foProtoMAMLn) outperforms the
non-episodic baselines by a relatively large margin. For instance, in Table 2,
in the high-resource setting with English included during training,
foProtoMAMLn improves over the non-episodic baseline by 9.1% points, whereas
the average gain over the remaining languages is 0.9% points. A similar trend
can be seen in Table 3, where, in the limited-resource setting, foProtoMAMLn
outperforms the non-episodic baseline by 1.9% points on Chinese, with
comparatively smaller gains on average for the remaining languages.
#### Joint training
In this setting, we achieve a new state of the art on MLDoc for German,
Italian, Japanese and Russian using our method, foProtoMAMLn (Table 4). (The
zero-shot baselines are the same as in Tables 2 and 3.) The previous state of
the art for German and Russian is held by Lai et al. (2019) (95.73% and 84.65%
respectively). For Japanese and Italian, it is held by Eisenschlos et al.
(2019) (80.55% and 80.12% respectively). The state of the art for French and
Chinese is also held by Lai et al. (2019) (96.05% and 93.32% respectively). On
the Amazon dataset, foProtoMAMLn also outperforms all other methods on
average. The state of the art is held by Lai et al. (2019) with 93.3%, 94.2%
and 90.6% for French, German and Chinese respectively and, although we do not
outperform it, the differences are rather small – between 0.2% (Chinese) and
3.4% points (German) – even when grid search is based on MLDoc, while we use a
much less computationally expensive approach.
Figure 1: Validation accuracy for 3 seeds for original foProtoMAML and our new
method, foProtoMAMLn.
Again, we use Russian in MLDoc to exemplify the difference between meta-
learning and standard supervised learning. When comparing the difference in
performance between excluding and including English meta-training episodes
($l_{src}$), opposite trends are noticeable: for standard supervised, non-
episodic learning, performance drops slightly by 0.3%, whereas all meta-
learning algorithms gain between 2.2% and 6.7% in absolute accuracy. This
confirms our earlier finding that meta-learning benefits from, and usefully
exploits, heterogeneity in data distributions; in contrast, such heterogeneity harms
performance in the standard supervised-learning case.
| Dataset | de | fr | it | ja | ru | zh | Diff |
|---|---|---|---|---|---|---|---|
| Amazon | 90.4 | 90.9 | - | 87.3 | - | 88.3 | -1.7 |
| MLDoc | 92.8 | 92.4 | 78.6 | 79.3 | 69.3 | 88.9 | -4.3 |

Table 5: Average accuracy of 5 different seeds on unseen target languages using
the original/unnormalized foProtoMAML model. Diff is the difference in average
accuracy $\Delta$ across languages against foProtoMAMLn.

| Method | de | fr | ja | zh | Diff | de | fr | ja | zh | Diff |
|---|---|---|---|---|---|---|---|---|---|---|
| ProtoNet | 91.1 | 90.9 | 87.1 | 85.5 | +0.75 | 91.3 | 91.1 | 87.4 | 88.7 | +1.44 |
| foMAML | 90.8 | 87.4 | 87.3 | 85.2 | -0.75 | 91.7 | 91.2 | 87.2 | 88.1 | -1.13 |
| foProtoMAMLn | 87.7 | 87.8 | 83.9 | 84.4 | -3.1 | 90.8 | 89.8 | 86.2 | 82.3 | -3.96 |
| Reptile | 89.3 | 90.2 | 86.7 | 85.5 | +0.35 | 90.0 | 89.3 | 87.1 | 85.7 | -1.04 |

Table 6: Average accuracy of 5 different seeds on unseen target languages for
Amazon when initializing from a monolingual classifier in $l_{src}$; the first
block of columns reports the limited-resource setting and the second block the
high-resource setting. Diff: difference in average accuracy $\Delta$ across
languages compared to initializing from the XLM-RoBERTa language model.
## 7 Ablations
#### foProtoMAMLn
Figure 1 shows the development of the validation accuracy during training for
25 epochs for the original foProtoMAML and our model, foProtoMAMLn. By
applying $L_{2}$ normalization to the prototypes, we obtain a more stable
version of foProtoMAML which empirically converges faster. We furthermore re-
run the high-resource experiments with English for both MLDoc and Amazon using
the original foProtoMAML (Table 5) and find it performs 4.3% and 1.7% accuracy
points worse on average, respectively, further demonstrating the effectiveness
of our approach.
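The change from foProtoMAML to foProtoMAMLn amounts to one extra normalization step when building the classifier head from the prototypes. A minimal sketch, assuming support-set embeddings are already computed and using the standard ProtoMAML head initialization from the prototypes ($W_k = 2c_k$, $b_k = -\lVert c_k\rVert^2$); the function name and interface are illustrative:

```python
import torch
import torch.nn.functional as F

def protomaml_head(support_emb, support_y, n_classes, normalize=True):
    """Build the classifier head from class prototypes (ProtoMAML).
    With normalize=True the prototypes are L2-normalized first, which
    corresponds to the foProtoMAMLn variant studied here."""
    protos = torch.stack([support_emb[support_y == c].mean(0)
                          for c in range(n_classes)])
    if normalize:
        protos = F.normalize(protos, p=2, dim=1)   # unit-norm prototypes
    weight = 2.0 * protos                          # W_k = 2 c_k
    bias = -protos.pow(2).sum(1)                   # b_k = -||c_k||^2
    return weight, bias
```

With unit-norm prototypes every bias equals -1 and every weight row has norm 2, so the initial head logits live on a comparable scale across classes and episodes, which is one plausible reading of the faster, more stable convergence in Figure 1.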
#### Initializing from a monolingual classifier
In our experiments, we often assume the presence of a source language
(English). We now investigate (in the $l_{src}$ = en ‘Excluded’ setting)
whether it is beneficial to pre-train the base-learner in a standard
supervised way on this source language and use the obtained checkpoint
$\theta_{mono}$ as an initialization for meta-training (Table 6) rather than
initializing from the transformer checkpoint.
We observe that only ProtoNet consistently improves performance, whereas
foProtoMAMLn suffers the most with a decrease of 3.1% and 3.96% in accuracy in
the low- and high-resource setting respectively. We surmise this difference is
attributable to two factors. Intuitively, the monolingual classifier aims to
learn a transformation from the input space to the final feature space, from
which the prototypes for ProtoNet and ProtoMAML are created, in which the
learned classes are encoded in their own disjoint sub-spaces such that a
linear combination of these features can be used to correctly classify
instances. ProtoNet aims to learn a similar transformation, but uses a
nearest-neighbour approach to classify instances instead. ProtoMAML, on the other hand,
benefits the most from prototypes which can be used to classify instances
after the inner-loop updates have been performed. This, in combination with
the fact that the first-order approximation of ProtoMAML cannot differentiate
through the creation of the prototypes, could explain the difference in
performance gain with respect to ProtoNet.
## 8 Conclusion
We proposed a meta-learning framework for few-shot cross- and multilingual
joint-learning for document classification tasks in different domains. We
demonstrated that it leads to consistent gains over traditional supervised
learning on a wide array of data availability and diversity settings, and
showed that it thrives in settings with a heterogeneous task distribution. We
presented an effective adaptation to ProtoMAML and, among others, obtained a
new state of the art on German, Italian, Japanese and Russian in the few-shot
setting on MLDoc.
## 9 Acknowledgements
This work was supported by Deloitte Risk Advisory B.V., the Netherlands.
## References
* Antoniou et al. (2018) Antreas Antoniou, Harrison Edwards, and Amos Storkey. 2018. How to train your maml. _arXiv preprint arXiv:1810.09502_.
* Artetxe and Schwenk (2019) Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. _Transactions of the Association for Computational Linguistics_ , 7:597–610.
* Bansal et al. (2019) Trapit Bansal, Rishikesh Jha, and Andrew McCallum. 2019. Learning to few-shot learn across diverse natural language classification tasks. _arXiv preprint arXiv:1911.03863_.
* Bengio et al. (1990) Yoshua Bengio, Samy Bengio, and Jocelyn Cloutier. 1990. _Learning a synaptic learning rule_. Citeseer.
* Chen et al. (2019) Mingyang Chen, Wen Zhang, Wei Zhang, Qiang Chen, and Huajun Chen. 2019. Meta relational learning for few-shot link prediction in knowledge graphs. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 4217–4226, Hong Kong, China. Association for Computational Linguistics.
* Conneau et al. (2019) Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. _arXiv preprint arXiv:1911.02116_.
* Conneau et al. (2018) Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. Xnli: Evaluating cross-lingual sentence representations. _arXiv preprint arXiv:1809.05053_.
* Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_.
* Dou et al. (2019) Zi-Yi Dou, Keyi Yu, and Antonios Anastasopoulos. 2019. Investigating meta-learning algorithms for low-resource natural language understanding tasks. _arXiv preprint arXiv:1908.10423_.
* Eisenschlos et al. (2019) Julian Eisenschlos, Sebastian Ruder, Piotr Czapla, Marcin Kardas, Sylvain Gugger, and Jeremy Howard. 2019. Multifit: Efficient multi-lingual language model fine-tuning. _arXiv preprint arXiv:1909.04761_.
* Faruqui et al. (2014) Manaal Faruqui, Jesse Dodge, Sujay K Jauhar, Chris Dyer, Eduard Hovy, and Noah A Smith. 2014. Retrofitting word vectors to semantic lexicons. _arXiv preprint arXiv:1411.4166_.
* Finn et al. (2017) Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In _Proceedings of the 34th International Conference on Machine Learning-Volume 70_ , pages 1126–1135. JMLR. org.
* Gu et al. (2018) Jiatao Gu, Yong Wang, Yun Chen, Kyunghyun Cho, and Victor OK Li. 2018. Meta-learning for low-resource neural machine translation. _arXiv preprint arXiv:1808.08437_.
* van der Heijden et al. (2019) Niels van der Heijden, Samira Abnar, and Ekaterina Shutova. 2019. A comparison of architectures and pretraining methods for contextualized multilingual word embeddings. _arXiv preprint arXiv:1912.10169_.
* Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_.
* Koch et al. (2015) Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. 2015. Siamese neural networks for one-shot image recognition. In _ICML deep learning workshop_ , volume 2. Lille.
* Lai et al. (2019) Guokun Lai, Barlas Oguz, and Veselin Stoyanov. 2019. Bridging the domain gap in cross-lingual document classification. _arXiv preprint arXiv:1909.07009_.
* Lample and Conneau (2019) Guillaume Lample and Alexis Conneau. 2019. Cross-lingual language model pretraining. _arXiv preprint arXiv:1901.07291_.
* Lample et al. (2017) Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2017\. Unsupervised machine translation using monolingual corpora only. _arXiv preprint arXiv:1711.00043_.
* Lewis et al. (2004) David D Lewis, Yiming Yang, Tony G Rose, and Fan Li. 2004. Rcv1: A new benchmark collection for text categorization research. _Journal of machine learning research_ , 5(Apr):361–397.
* Lewis et al. (2019) Patrick Lewis, Barlas Oğuz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2019. Mlqa: Evaluating cross-lingual extractive question answering. _arXiv preprint arXiv:1910.07475_.
* Liu et al. (2019) Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. 2019. On the variance of the adaptive learning rate and beyond. _arXiv preprint arXiv:1908.03265_.
* Loshchilov and Hutter (2016) Ilya Loshchilov and Frank Hutter. 2016. Sgdr: Stochastic gradient descent with warm restarts. _arXiv preprint arXiv:1608.03983_.
* Mikolov et al. (2013a) Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. _arXiv preprint arXiv:1309.4168_.
* Mikolov et al. (2013b) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In _Advances in neural information processing systems_ , pages 3111–3119.
* Nichol et al. (2018) Alex Nichol, Joshua Achiam, and John Schulman. 2018. On first-order meta-learning algorithms. _arXiv preprint arXiv:1803.02999_.
* Nooralahzadeh et al. (2020) Farhad Nooralahzadeh, Giannis Bekoulis, Johannes Bjerva, and Isabelle Augenstein. 2020. Zero-shot cross-lingual transfer with meta learning. _arXiv preprint arXiv:2003.02739_.
* Obamuyide and Vlachos (2019a) Abiola Obamuyide and Andreas Vlachos. 2019a. Meta-learning improves lifelong relation extraction. In _Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)_ , pages 224–229, Florence, Italy. Association for Computational Linguistics.
* Obamuyide and Vlachos (2019b) Abiola Obamuyide and Andreas Vlachos. 2019b. Model-agnostic meta-learning for relation classification with limited supervision. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 5873–5879, Florence, Italy. Association for Computational Linguistics.
* Paszke et al. (2017) Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. In _NIPS 2017 Workshop Autodiff Submission_.
* Prettenhofer and Stein (2010) Peter Prettenhofer and Benno Stein. 2010. Cross-language text classification using structural correspondence learning. In _Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics_ , pages 1118–1127, Uppsala, Sweden. Association for Computational Linguistics.
* Ravi and Larochelle (2017) Sachin Ravi and Hugo Larochelle. 2017. Optimization as a model for few-shot learning. In _International Conference on Learning Representations_.
* Schmidhuber (1987) Jurgen Schmidhuber. 1987. Evolutionary principles in self-referential learning. _On learning how to learn: The meta-meta-… hook.) Diploma thesis, Institut f. Informatik, Tech. Univ. Munich_ , 1(2).
* Schwenk and Li (2018) Holger Schwenk and Xian Li. 2018. A corpus for multilingual document classification in eight languages. In _Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)_ , Paris, France. European Language Resources Association (ELRA).
* Snell et al. (2017) Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. In _Advances in neural information processing systems_ , pages 4077–4087.
* Thrun and Pratt (1998) Sebastian Thrun and Lorien Pratt. 1998. Learning to learn: Introduction and overview. In _Learning to learn_ , pages 3–17. Springer.
* Triantafillou et al. (2020) Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Utku Evci, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Manzagol, and Hugo Larochelle. 2020. Meta-dataset: A dataset of datasets for learning to learn from few examples. In _International Conference on Learning Representations_.
* Wang et al. (2019) Yan Wang, Wei-Lun Chao, Kilian Q Weinberger, and Laurens van der Maaten. 2019. Simpleshot: Revisiting nearest-neighbor classification for few-shot learning. _arXiv preprint arXiv:1911.04623_.
* Xie et al. (2019) Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V Le. 2019. Unsupervised data augmentation for consistency training. _arXiv preprint arXiv:1904.12848_.
* Yong et al. (2020) Hongwei Yong, Jianqiang Huang, Xiansheng Hua, and Lei Zhang. 2020. Gradient centralization: A new optimization technique for deep neural networks. _arXiv preprint arXiv:2004.01461_.
* Zhang et al. (2019) Michael Zhang, James Lucas, Jimmy Ba, and Geoffrey E Hinton. 2019. Lookahead optimizer: k steps forward, 1 step back. In _Advances in Neural Information Processing Systems_ , pages 9597–9608.
* Zhang et al. (2015) Yongfeng Zhang, Min Zhang, Yi Zhang, Guokun Lai, Yiqun Liu, Honghui Zhang, and Shaoping Ma. 2015. Daily-aware personalized recommendation based on feature-level time series analysis. In _Proceedings of the 24th international conference on world wide web_ , pages 1373–1383.
# Compactness within the space of complete, constant $Q$-curvature metrics on
the sphere with isolated singularities
João Henrique Andrade, João Marcos do Ó and Jesse Ratzkin

Institute of Mathematics and Statistics, University of São Paulo, 05508-090, São Paulo-SP, Brazil
and Department of Mathematics, Federal University of Paraíba, 58051-900, João Pessoa-PB, Brazil
<EMAIL_ADDRESS> <EMAIL_ADDRESS>

Department of Mathematics, Federal University of Paraíba, 58051-900, João Pessoa-PB, Brazil
<EMAIL_ADDRESS>

Department of Mathematics, Universität Würzburg, 97070, Würzburg-BA, Germany
<EMAIL_ADDRESS>
###### Abstract.
In this paper we consider the moduli space of complete, conformally flat
metrics on a sphere with $k$ punctures having constant positive $Q$-curvature
and positive scalar curvature. Previous work has shown that such metrics admit
an asymptotic expansion near each puncture, allowing one to define an
asymptotic necksize of each singular point. We prove that any set in the
moduli space such that the distances between distinct punctures and the
asymptotic necksizes all remain bounded away from zero is sequentially
compact, mirroring a theorem of D. Pollack about singular Yamabe metrics.
Along the way we define a radial Pohozaev invariant at each puncture and
refine some a priori bounds of the conformal factor, which may be of
independent interest.
###### Key words and phrases:
Paneitz operator, $Q$-curvature, Critical exponent, Isolated singularities,
Compactness, Pohozaev invariant
###### 2000 Mathematics Subject Classification:
35J60, 35B09, 35J30, 35B40
Research supported in part by Conselho Nacional de Desenvolvimento Científico
e Tecnológico (CNPq): grant 305726/2017-0, Coordenação de Aperfeiçoamento de
Pessoal de Nível Superior (CAPES): grant 88882.440505/2019-01, and Fundação de
Apoio à Pesquisa do Estado de São Paulo (FAPESP): grant 2020/07566-3
## 1\. Introduction
In 1960 H. Yamabe [28] proposed a program to find optimal metrics in a
conformal class on a manifold of dimension at least three by minimizing the
total scalar curvature functional, obtaining a constant scalar curvature
representative in each conformal class. This program, which now bears his
name, led to many advancements by N. Trudinger [26], T. Aubin [3], R. Schoen
[25], and many others in the understanding of how geometry, topology and
analysis interact with each other in compact Riemannian manifolds. The reader
can find an excellent survey of the resolution of the original Yamabe problem
in [17]. The lack of compactness of the group of conformal transformations of
the sphere presents one of the many complications in carrying out Yamabe’s
program. This same lack of compactness forces one to examine singular
solutions which blow up along a closed subset. Many people continue to study
both the regular and the singular Yamabe problems, and many open questions in
both programs remain. More recent results include [19], in which the authors
prove the set of solutions to the Yamabe problem within a conformal class is
compact provided $3\leq n\leq 24$ and the conformal class is not that of the
round sphere.
In recent years many people have pursued parts of Yamabe’s program for other
notions of curvature. In the present note, we explore a part of the singular
Yamabe program as applied to the fourth order $Q$-curvature, which is a higher
order analog of scalar curvature. On a Riemannian manifold $(M,g)$ of
dimension $n\geq 5$, the $Q$-curvature is
$Q_{g}=-\frac{1}{2(n-1)}\Delta_{g}R_{g}-\frac{2}{(n-2)^{2}}|\operatorname{Ric}_{g}|^{2}+\frac{n^{3}-4n^{2}+16n-16}{8(n-1)^{2}(n-2)^{2}}R_{g}^{2},$
(1)
where $R_{g}$ is the scalar curvature of $g$, $\operatorname{Ric}_{g}$ is the
Ricci curvature of $g$, and $\Delta_{g}$ is the Laplace–Beltrami operator of
$g$. After a conformal change, the $Q$-curvature transforms as
$\widetilde{g}=u^{\frac{4}{n-4}}g\Rightarrow
Q_{\widetilde{g}}=\frac{2}{n-4}u^{-\frac{n+4}{n-4}}P_{g}u,$ (2)
where $P_{g}$ is the Paneitz operator
$\displaystyle
P_{g}u=\Delta_{g}^{2}u+\operatorname{div}\left(\frac{4}{n-2}\operatorname{Ric}_{g}(\nabla
u,\cdot)-\frac{(n-2)^{2}+4}{2(n-1)(n-2)}R_{g}\langle\nabla
u,\cdot\rangle\right)+\frac{n-4}{2}Q_{g}u.$ (3)
Paneitz [22] first discovered the operator $P_{g}$ and investigated its
conformal invariance. Thereafter Branson [5, 6] began a thorough investigation
of $Q_{g}$ and its variants. The reader can find excellent summaries of the
fourth order $Q$-curvature in [7, 9, 14].
The $Q$-curvature of the round metric $\overset{\circ}{g}$ is
$\frac{n(n^{2}-4)}{8}$, and setting $Q_{g}$ to be this value gives the
equation
$P_{g}u=\frac{n(n-4)(n^{2}-4)}{16}u^{\frac{n+4}{n-4}}.$ (4)
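As a sanity check on (1) and the value quoted above, one can verify symbolically that the round metric indeed has $Q=\frac{n(n^{2}-4)}{8}$, assuming the standard unit-sphere normalization $R_{\overset{\circ}{g}}=n(n-1)$ and $\operatorname{Ric}_{\overset{\circ}{g}}=(n-1)\overset{\circ}{g}$; a minimal sympy sketch:

```python
import sympy as sp

n = sp.symbols("n", positive=True)

# Unit round sphere: R = n(n-1), Ric = (n-1)g, so |Ric|^2 = n(n-1)^2,
# and Delta R = 0 because R is constant.
R = n * (n - 1)
ric_sq = n * (n - 1) ** 2

# Formula (1) with the Laplacian term dropped (R is constant).
Q = (-2 / (n - 2) ** 2 * ric_sq
     + (n**3 - 4 * n**2 + 16 * n - 16)
       / (8 * (n - 1) ** 2 * (n - 2) ** 2) * R**2)

# Should reduce to the constant n(n^2 - 4)/8 quoted in the text.
assert sp.simplify(Q - n * (n**2 - 4) / 8) == 0
```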
Just as in the scalar curvature setting, one can search for constant
$Q$-curvature metrics in a conformal class by minimizing the total
$Q$-curvature. However, because of the conformal invariance one encounters the
same lack of compactness and presence of singular solutions. Hang and Yang
[15] carry out part of this program in the regular case, assuming that the
background metric also has positive Yamabe invariant.
In any event, a complete understanding of the fourth order analog of the
Yamabe problem would require an understanding of the following singular
problem: let $(M,g)$ be a compact Riemannian manifold and let $\Lambda\subset
M$ be a closed subset. A conformal metric $\widetilde{g}=u^{\frac{4}{n-4}}g$
is a singular constant $Q$-curvature metric if $Q_{\widetilde{g}}$ is constant
and $\widetilde{g}$ is complete on $M\backslash\Lambda$. According to (2) we
can write this geometric problem as
$P_{g}u=\frac{n(n-4)(n^{2}-4)}{16}u^{\frac{n+4}{n-4}}\textrm{ on }M\backslash\Lambda,\qquad\liminf_{x\rightarrow x_{0}}u(x)=\infty\textrm{ for each }x_{0}\in\Lambda.$ (5)
For the remainder of our work we concentrate on the case that
$(M,g)=(\mathbf{S}^{n},\overset{\circ}{g})$ is the round metric on the sphere
and $\Lambda=\\{p_{1},\dots,p_{k}\\}$ is a finite set of distinct points. Thus
we examine, given a singular set $\Lambda$ with $\\#(\Lambda)=k$, the set of
functions
$u:\mathbf{S}^{n}\backslash\Lambda=\mathbf{S}^{n}\backslash\\{p_{1},\dots,p_{k}\\}\rightarrow(0,\infty)$
that satisfy
$\overset{\circ}{P}u=P_{\overset{\circ}{g}}u=\frac{n(n-4)(n^{2}-4)}{16}u^{\frac{n+4}{n-4}},\qquad\liminf_{x\rightarrow p_{j}}u(x)=\infty\textrm{ for each }j=1,2,\dots,k.$ (6)
For technical reasons we will also require $R_{g}\geq 0$.
Following [21] we define the marked moduli space
$\mathcal{M}_{\Lambda}=\left\\{g\in[\overset{\circ}{g}]:Q_{g}=\frac{n(n^{2}-4)}{8},\
R_{g}\geq 0,\ g\textrm{ is complete on }\
\mathbf{S}^{n}\backslash\Lambda\right\\}$ (7)
and the unmarked moduli space
$\displaystyle\mathcal{M}_{k}=\left\\{g\in[\overset{\circ}{g}]:Q_{g}=\frac{n(n^{2}-4)}{8},\
R_{g}\geq 0,\ g\textrm{ is complete on }\ \mathbf{S}^{n}\backslash\Lambda,\
\\#(\Lambda)=k\right\\}.$ (8)
We equip each moduli space with the Gromov–Hausdorff topology. C. S. Lin [20]
proved that $\mathcal{M}_{1}$ is the empty set, and recently Frank and König
[11] classified all metrics in $\mathcal{M}_{2}$, proving
$\mathcal{M}_{\\{p,q\\}}=(0,\epsilon_{n}]\textrm{ for each pair }p\neq
q\in\mathbf{S}^{n},$
where
$\epsilon_{n}=\left(\frac{n(n-4)}{n^{2}-4}\right)^{\frac{n-4}{8}}\in(0,1).$
(9)
It follows that
$\mathcal{M}_{2}=(0,\epsilon_{n}]\times((\mathbf{S}^{n}\times\mathbf{S}^{n}\backslash\textrm{diag})/SO(n+1,1)),$
where the group $SO(n+1,1)$ of conformal transformations acts on each
$\mathbf{S}^{n}$ factor simultaneously. These metrics corresponding to a
doubly punctured sphere are all rotationally invariant, and are called the
Delaunay metrics. We describe them in detail in Section 2.1.
In the present work we explore some of the structure of $\mathcal{M}_{k}$ when
$k\geq 3$. Let $\Lambda=\\{p_{1},\dots,p_{k}\\}$ with $k\geq 3$ and let
$g=u^{\frac{4}{n-4}}\overset{\circ}{g}\in\mathcal{M}_{\Lambda}$. As it
happens, the metric $g$ is asymptotic to a Delaunay metric near each puncture
$p_{j}$, and so one can associate a Delaunay parameter
$\epsilon_{j}(g)\in(0,\epsilon_{n}]$ to each $p_{j}$ and
$g\in\mathcal{M}_{\Lambda}$. (See Section 2.2.) Our main compactness theorem
is the following.
###### Theorem 1.
Let $k\geq 3$ and let $\delta_{1}>0,\delta_{2}>0$ be positive numbers. Then
the set
$\Omega_{\delta_{1},\delta_{2}}=\\{g\in\mathcal{M}_{k}:\operatorname{dist}_{\overset{\circ}{g}}(p_{j},p_{l})\geq\delta_{1}\textrm{
for each }j\neq l,\epsilon_{j}(g)\geq\delta_{2}\\}$
is sequentially compact in the Gromov–Hausdorff topology.
We model this result on a compactness theorem of Pollack [23], which states
that the similarly defined set in the moduli space of singular Yamabe metrics
is sequentially compact. Very recently Wei [27] proved a similar theorem in
the context of constant $\sigma_{k}$-curvature. Pollack’s theorem was an
important first step in understanding the structure of the moduli space of
singular Yamabe metrics on a finitely punctured sphere, a program that is still
not complete. We hope our theorem above can play a similar role in advancing
the general theory of constant $Q$-curvature metrics with isolated
singularities.
## 2\. Preliminaries
In this section we present some prerequisite analysis proven elsewhere which
we will use below.
We first rewrite (6). Pulling back by (the inverse of) stereographic
projection, we can write
$\overset{\circ}{g}=\left(\frac{2}{1+|x|^{2}}\right)^{2}\delta=U_{\textrm{sph}}^{\frac{4}{n-4}}\delta,\qquad
U_{\textrm{sph}}=\left(\frac{1+|x|^{2}}{2}\right)^{\frac{4-n}{2}}.$ (10)
In these coordinates (6) takes the form
$u:\mathbf{R}^{n}\backslash\\{q_{1},\dots,q_{k}\\}\rightarrow(0,\infty),\qquad\Delta_{0}^{2}(U_{\textrm{sph}}u)=\frac{n(n-4)(n^{2}-4)}{16}(U_{\textrm{sph}}u)^{\frac{n+4}{n-4}},$
(11)
where $\Delta_{0}$ is the usual flat Laplacian and $q_{j}$ is the image of
$p_{j}$ under the stereographic map. Also, the condition $R_{g}\geq 0$ is
equivalent to the differential inequality
$-\Delta_{0}(U_{\textrm{sph}}u)^{\frac{n-2}{n-4}}\geq
0\Leftrightarrow-\Delta_{0}(U_{\textrm{sph}}u)\geq\frac{2}{n-4}\frac{|\nabla(U_{\textrm{sph}}u)|^{2}}{U_{\textrm{sph}}u}.$
(12)
In this Euclidean setting the transformation rule (2) reads
$\Delta_{0}^{2}u=Au^{\frac{n+4}{n-4}}\Rightarrow
u_{\lambda}(x)=\lambda^{\frac{n-4}{2}}u(\lambda x)\textrm{ satisfies
}\Delta_{0}^{2}u_{\lambda}=Au_{\lambda}^{\frac{n+4}{n-4}}\textrm{ for each
}\lambda>0$ (13)
for any constant $A$.
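None of what follows is needed for the arguments of this paper, but the reader may find it reassuring to verify (11) for the spherical solution symbolically. The sketch below (using the `sympy` library and the radial form $\Delta_{0}f=f''+\frac{n-1}{r}f'$ of the flat Laplacian) checks that $U_{\textrm{sph}}$ satisfies $\Delta_{0}^{2}U_{\textrm{sph}}=\frac{n(n-4)(n^{2}-4)}{16}U_{\textrm{sph}}^{\frac{n+4}{n-4}}$ for a few sample dimensions.

```python
import sympy as sp

r = sp.symbols('r', positive=True)

def bilap_radial(f, n):
    """Flat bilaplacian of a radial function, using Delta f = f'' + (n-1) f'/r."""
    lap = lambda g: sp.diff(g, r, 2) + (n - 1)/r*sp.diff(g, r)
    return lap(lap(f))

for n in (5, 6, 8):
    U = ((1 + r**2)/2)**sp.Rational(4 - n, 2)        # U_sph in radial form
    rhs = sp.Rational(n*(n - 4)*(n**2 - 4), 16)*U**sp.Rational(n + 4, n - 4)
    # the residual of (11) vanishes at every sample point
    for r0 in (sp.Rational(1, 3), 1, 2):
        assert abs((bilap_radial(U, n) - rhs).subs(r, r0).evalf(40)) < 1e-25
```

This is only a numerical sanity check of the constant in (11); it plays no role in the proofs.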
### 2.1. Delaunay metrics
Let $p\neq q\in\mathbf{S}^{n}$ and let $g\in\mathcal{M}_{\\{p,q\\}}$. We may
precompose by an appropriate dilation and assume $p=-q$, and then rotate
$\mathbf{S}^{n}$ so that $p$ is the north pole and $q$ is the south pole.
After reframing as in the previous paragraph we obtain a function
$u:\mathbf{R}^{n}\backslash\\{0\\}\rightarrow(0,\infty)$ satisfying the PDE
(11). Lin [20] proved that this solution must be rotationally invariant about
$0$, and later Frank and König [11] classified all the ODE solutions.
Their classification is easiest to see after changing to cylindrical
coordinates. We let
$t=-\log|x|,\qquad\theta=\frac{x}{|x|},$ (14)
$v(t,\theta)=e^{\left(\frac{4-n}{2}\right)t}U_{\textrm{sph}}(e^{-t}\theta)u(e^{-t}\theta)=(\cosh t)^{\frac{4-n}{2}}u(e^{-t}\theta).$
This transforms the Paneitz operator into
$P_{\textrm{cyl}}=\frac{\partial^{4}}{\partial t^{4}}+\Delta_{\theta}^{2}+2\Delta_{\theta}\frac{\partial^{2}}{\partial t^{2}}-\left(\frac{n(n-4)+8}{2}\right)\frac{\partial^{2}}{\partial t^{2}}-\frac{n(n-4)}{2}\Delta_{\theta}+\frac{n^{2}(n-4)^{2}}{16},$
so that (11) becomes
$v:\mathbf{R}\times\mathbf{S}^{n-1}\rightarrow(0,\infty),\qquad
P_{\textrm{cyl}}v=\frac{n(n-4)(n^{2}-4)}{16}v^{\frac{n+4}{n-4}}.$ (16)
The fact that the original function $u$ is radial implies $v$ is a function of
$t$ alone, and so (16) reduces to the ODE
$\ddddot{v}-\left(\frac{n(n-4)+8}{2}\right)\ddot{v}+\frac{n^{2}(n-4)^{2}}{16}v=\frac{n(n-4)(n^{2}-4)}{16}v^{\frac{n+4}{n-4}}.$
(17)
We find two solutions explicitly. The cylindrical solution is the only
constant solution, namely $v_{\textrm{cyl}}=\epsilon_{n}$, given in (9). Also,
the spherical solution $U_{\textrm{sph}}$ given in (10) transforms under the
change of variables (14) into $v_{\textrm{sph}}=(\cosh t)^{\frac{4-n}{2}}$.
The Delaunay solutions found by Frank and König in [11] interpolate between
the cylindrical and spherical solutions. Indeed, for each
$\epsilon\in(0,\epsilon_{n})$ there exists a unique solution $v_{\epsilon}$ of
the ODE (17) realizing its minimal value of $\epsilon$ at $t=0$. Each
$v_{\epsilon}$ is periodic with minimal period $T_{\epsilon}$, and these
Delaunay solutions account for all global solutions of the ODE (17).
Transforming back to Euclidean coordinates, we of course obtain the solutions
$u_{\epsilon}:\mathbf{R}^{n}\backslash\\{0\\}\rightarrow(0,\infty),\qquad
u_{\epsilon}(x)=|x|^{\frac{4-n}{2}}v_{\epsilon}(-\log|x|).$ (18)
We may then apply global conformal transformations to construct the translated
Delaunay solutions. The first such family is
$\widetilde{u}_{\epsilon,a}(x)=u_{\epsilon}(x-a)$ for some fixed vector
$a\in\mathbf{R}^{n}$. The second family is more important to our later
analysis, and is given by translating the point at infinity. More precisely,
we define
$u_{\epsilon,a}(x)=\widehat{\mathbb{K}}_{0}\left(\widehat{\mathbb{K}}_{0}(u_{\epsilon})(\cdot-a)\right)(x)=|x|^{\frac{4-n}{2}}\left|\frac{x}{|x|}-|x|a\right|^{\frac{4-n}{2}}v_{\epsilon}\left(-\log|x|+\log\left|\frac{x}{|x|}-|x|a\right|\right),$
where
$\widehat{\mathbb{K}}_{0}(u)(x)=|x|^{4-n}u\left(\frac{x}{|x|^{2}}\right)$
is the Kelvin transform of $u$. In cylindrical coordinates we can write this
expression for $u_{\epsilon,a}$ as
$v_{\epsilon,a}(t,\theta)=|\theta-e^{-t}a|^{\frac{4-n}{2}}v_{\epsilon}(t+\log|\theta-e^{-t}a|)=v_{\epsilon}(t)+e^{-t}\langle\theta,a\rangle\left(-\dot{v}_{\epsilon}(t)+\frac{n-4}{2}v_{\epsilon}(t)\right)+\mathcal{O}(e^{-2t}).$
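The expansion above can be checked symbolically: writing $h=e^{-t}$, $A=\langle\theta,a\rangle$ and $S=|a|^{2}$, so that $|\theta-e^{-t}a|=\sqrt{1-2hA+h^{2}S}$, the zeroth and first order terms in $h$ can be computed with `sympy` for an arbitrary Delaunay profile (denoted `v` below).

```python
import sympy as sp

t, h, n, A, S = sp.symbols('t h n A S', positive=True)
v = sp.Function('v')

# h plays the role of e^{-t}; A = <theta, a>, S = |a|^2,
# so that |theta - e^{-t} a| = sqrt(1 - 2 h A + h^2 S)
norm = sp.sqrt(1 - 2*h*A + h**2*S)
expr = norm**((4 - n)/2) * v(t + sp.log(norm))

order0 = expr.subs(h, 0)                      # zeroth order term in h
order1 = sp.diff(expr, h).subs(h, 0).doit()   # first order coefficient

assert order0 == v(t)
expected = A*((n - 4)/2*v(t) - sp.diff(v(t), t))
assert sp.simplify(order1 - expected) == 0
```

This reproduces exactly the first-order term $e^{-t}\langle\theta,a\rangle\big(-\dot{v}_{\epsilon}+\frac{n-4}{2}v_{\epsilon}\big)$ displayed above.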
### 2.2. Asymptotics
In [16] Jin and Xiong proved that any positive, superharmonic solution of (11)
in a punctured ball is asymptotically symmetric. In other words, they show
there exists $\alpha>0$ such that
$u(x)=\overline{u}(|x|)(1+\mathcal{O}(|x|^{\alpha})),\qquad\overline{u}(r)=\frac{1}{r^{n-1}|\mathbf{S}^{n-1}|}\int_{|x|=r}u(\theta)d\theta.$
(20)
Later the first two authors [2] and the third author [24] independently
derived refined asymptotics for positive, singular solutions of (11). Roughly
speaking, the translated Delaunay solutions of (2.1) give the next order term
in the expansion of $u$. They show there exists $\beta>1$,
$\epsilon\in(0,\epsilon_{n}]$, $T\in[0,T_{\epsilon})$, and
$a\in\mathbf{R}^{n}$ such that
$u(x)=|x|^{\frac{4-n}{2}}\left(v_{\epsilon}(-\log|x|+T)+|x|\left\langle\frac{x}{|x|},a\right\rangle\left(-\dot{v}_{\epsilon}(-\log|x|+T)+\frac{n-4}{2}v_{\epsilon}(-\log|x|+T)\right)+\mathcal{O}(|x|^{\beta})\right).$
In cylindrical coordinates this estimate has the form
$v(t,\theta)=v_{\epsilon}(t+T)+e^{-t}\langle\theta,a\rangle\left(-\dot{v}_{\epsilon}(t+T)+\frac{n-4}{2}v_{\epsilon}(t+T)\right)+\mathcal{O}(e^{-\beta
t}).$ (22)
### 2.3. Some other useful theorems
For the sake of completeness, we state some background results which will be
required later in the proof of our main result.
We first quote the following theorem of Chang, Han and Yang [10, Theorem 1.1].
###### Theorem 2 (Chang, Han and Yang).
Let $n\geq 5$, let $\Lambda\subset\mathbf{S}^{n}$ be a proper closed set, and
let $g=u^{\frac{4}{n-4}}\overset{\circ}{g}$ be a complete metric on
$\mathbf{S}^{n}\backslash\Lambda$ such that
$Q_{g}=\frac{n(n^{2}-4)}{8},\qquad R_{g}\geq 0.$
Then $\partial\overset{\circ}{\mathbf{B}}_{\rho}(x_{0})$ has positive mean
curvature with respect to $g$, computed with the inward pointing normal, where
$\overset{\circ}{\mathbf{B}}_{\rho}(x_{0})$ is any ball with respect to the
round metric contained in $\mathbf{S}^{n}\backslash\Lambda$.
We will also need a version of Harnack’s inequality, which was proven by
Caristi and Mitidieri [8, Theorem 3.6].
###### Theorem 3 (Caristi and Mitidieri).
Let $u$ be a superharmonic function defined in a domain
$\Omega\subset\mathbf{R}^{n}$ such that $\Delta_{0}^{2}u=f(u)$, where $f$ is
either linear or superlinear and $f(0)=0$. Then there exists $\rho_{0}>0$ such
that for $\rho\leq\rho_{0}$ we have
$\sup_{\overset{\circ}{\mathbf{B}}_{\rho}(p)}u\leq
C\inf_{\overset{\circ}{\mathbf{B}}_{\rho}(p)}u,$ (23)
where the constant $C$ depends only on the domain $\Omega$, the function $f$,
and $\rho$.
Gursky and Malchiodi [12, Proposition 2.5] prove the existence of a positive
Green's function for the Paneitz operator of the round sphere.
###### Theorem 4 (Gursky and Malchiodi).
Let $(M,g)$ be a compact Riemannian manifold such that $R_{g}\geq 0$ and
$Q_{g}>0$. Then for each $p\in M$ there exists a Green's function $G_{p}$
satisfying
$G_{p}:M\backslash\\{p\\}\rightarrow(0,\infty),\qquad P_{g}G_{p}=\delta_{p},$
where $\delta_{p}$ is the Dirac $\delta$-function with a singularity at $p$.
Furthermore, if either $n=5,6,7$ or $g$ is conformally flat then there exists
$c>0$ depending only on $n$ and $\alpha$ such that
$G_{p}(x)=\frac{1}{2n(n-2)(n-4)\omega_{n}}\left(\operatorname{dist}_{g}(x,p)\right)^{4-n}+\mathcal{O}(1)$
(24)
in conformal normal coordinates, where $\omega_{n}$ is the volume of a unit
ball in $\mathbf{R}^{n}$.
## 3\. Pohozaev invariants
One often finds integral invariants in geometric variational problems. The
reader can find a general abstract framework for constructing these invariants
in the paper by Gover and Ørsted [13]. In future work we will explicitly write
out the full Pohozaev invariant using the first variation tensor defined in
[18].
We consider a function
$v:(a,b)\times\mathbf{S}^{n-1}\rightarrow\mathbf{R}$
satisfying (16). Given such a function $v$ we define the Hamiltonian
functional
$\mathcal{H}(v)=-\frac{\partial v}{\partial t}\frac{\partial^{3}v}{\partial t^{3}}+\frac{1}{2}\left(\frac{\partial^{2}v}{\partial t^{2}}\right)^{2}-\frac{1}{2}(\Delta_{\theta}v)^{2}-\left|\nabla_{\theta}\frac{\partial v}{\partial t}\right|^{2}+\frac{n(n-4)}{4}|\nabla_{\theta}v|^{2}+\left(\frac{n(n-4)+8}{4}\right)\left(\frac{\partial v}{\partial t}\right)^{2}-\frac{n^{2}(n-4)^{2}}{32}v^{2}+\frac{(n-4)^{2}(n^{2}-4)}{32}v^{\frac{2n}{n-4}}.$
Integrating by parts we find
$\frac{d}{dt}\int_{\\{t\\}\times\mathbf{S}^{n-1}}\mathcal{H}(v)d\theta=0,$
(26)
which allows us to define our first integral invariant as
$\widetilde{\mathcal{P}}_{\textrm{rad}}(v)=\int_{\\{t\\}\times\mathbf{S}^{n-1}}\mathcal{H}(v)d\theta.$
(27)
Now we can define the radial (or dilational) Pohozaev invariants associated to
a metric $g\in\mathcal{M}_{k}$ at each puncture point $p_{j}$. Recall that
$g=u^{\frac{4}{n-4}}\overset{\circ}{g}$ is a complete, conformally flat metric
on $\mathbf{S}^{n}\backslash\\{p_{1},\dots,p_{k}\\}$ with
$Q_{g}=\frac{n(n^{2}-4)}{8}$ and $R_{g}\geq 0$. Completeness forces
$\liminf_{\operatorname{dist}_{\overset{\circ}{g}}(x,p_{j})\rightarrow
0}u(x)=\infty$
for each $j$, while $Q_{g}=\frac{n(n^{2}-4)}{8}$ is equivalent to the PDE
(11), after stereographic projection down to
$\mathbf{R}^{n}\backslash\\{p_{1},\dots,p_{k}\\}$. Choose coordinates centered
at one of the punctures $p_{j}$, and then perform the cylindrical change of
variables (14), which gives us a function
$v:(A,\infty)\times\mathbf{S}^{n-1}\rightarrow(0,\infty)$
satisfying (16). We define
$\mathcal{P}_{\textrm{rad}}(g,p_{j})=\widetilde{\mathcal{P}}_{\textrm{rad}}(v)=\int_{\\{t\\}\times\mathbf{S}^{n-1}}\mathcal{H}(v)d\theta,$
which is well-defined by (26).
In the special case that $k=2$ we can evaluate the dilational Pohozaev
invariant more explicitly. In this situation we may as well let the puncture
points be the north and south poles, and thus we obtain a function
$v=v_{\epsilon}:\mathbf{R}\times\mathbf{S}^{n-1}\rightarrow(0,\infty)$
satisfying (17). Thus the Hamiltonian (3) reduces to
$\overline{\mathcal{H}}(v)=-\dot{v}\dddot{v}+\frac{1}{2}\ddot{v}^{2}+\left(\frac{n(n-4)+8}{4}\right)\dot{v}^{2}-\frac{n^{2}(n-4)^{2}}{32}v^{2}+\frac{(n-4)^{2}(n^{2}-4)}{32}v^{\frac{2n}{n-4}}.$
(28)
Moreover, because $\overline{\mathcal{H}}$ does not depend on $\theta$ and its
integral over a sphere does not depend on $t$, this reduced Hamiltonian must
be constant on solutions of (17). (One can, of course, explicitly verify this
constancy by taking a derivative.)
A computation reveals
$\overline{\mathcal{H}}(v_{\textrm{sph}})=0,\qquad\overline{\mathcal{H}}(v_{\epsilon_{n}})=-\frac{(n-4)(n^{2}-4)}{8}\left(\frac{n(n-4)}{n^{2}-4}\right)^{n/4}<0.$
Furthermore, Proposition 6 of [4] implies the Delaunay solutions are ordered
(in fact, uniquely determined!) by their energy (28).
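The two values displayed above are easy to confirm numerically; the sketch below evaluates the reduced Hamiltonian (again with a plus sign on the $v^{\frac{2n}{n-4}}$ term, as in (3)) on the spherical and cylindrical solutions for a few sample dimensions.

```python
import sympy as sp

t = sp.symbols('t', real=True)

def H_bar(v, n):
    """Reduced Hamiltonian (28), with a plus sign on the last term as in (3)."""
    return (-sp.diff(v, t)*sp.diff(v, t, 3) + sp.Rational(1, 2)*sp.diff(v, t, 2)**2
            + sp.Rational(n*(n - 4) + 8, 4)*sp.diff(v, t)**2
            - sp.Rational(n**2*(n - 4)**2, 32)*v**2
            + sp.Rational((n - 4)**2*(n**2 - 4), 32)*v**sp.Rational(2*n, n - 4))

for n in (5, 6, 9):
    # the spherical solution has zero energy
    v_sph = sp.cosh(t)**sp.Rational(4 - n, 2)
    for t0 in (0, 1, sp.Rational(5, 2)):
        assert abs(H_bar(v_sph, n).subs(t, t0).evalf(40)) < 1e-25
    # the cylindrical solution has the negative energy displayed above
    X = sp.Rational(n*(n - 4), n**2 - 4)
    claimed = -sp.Rational((n - 4)*(n**2 - 4), 8)*X**sp.Rational(n, 4)
    assert abs((H_bar(X**sp.Rational(n - 4, 8), n) - claimed).evalf(40)) < 1e-25
```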
Combining our analysis above with (2.2) and (22) we immediately see the
following
###### Lemma 5.
Let $g=u^{\frac{4}{n-4}}\overset{\circ}{g}$ be a complete, conformally flat
metric on $\mathbf{S}^{n}\backslash\\{p_{1},\dots,p_{k}\\}$ with
$Q_{g}=\frac{n(n^{2}-4)}{8}$ and $R_{g}\geq 0$. For each puncture $p_{j}$
define $\mathcal{P}_{\textrm{rad}}(g,p_{j})$ as above. Then
$\mathcal{P}_{\textrm{rad}}(g,p_{j})<0$ and depends only on the necksize
$\epsilon_{j}$ of the Delaunay asymptote at $p_{j}$. Moreover, decreasing
$\epsilon_{j}$ will increase $\mathcal{P}_{\textrm{rad}}(g,p_{j})$ towards
$0$. In particular, bounding the radial Pohozaev invariants
$\mathcal{P}_{\textrm{rad}}(g,p_{j})$ away from zero is equivalent to bounding
the necksizes $\epsilon_{j}$ away from zero.
###### Proof.
We have shown that
$\mathcal{P}_{\textrm{rad}}(g,p_{j})=\int_{\\{t\\}\times\mathbf{S}^{n-1}}\mathcal{H}(v)d\theta$
is well-defined, because the integral does not depend on our choice of $t$.
Now let $t\rightarrow\infty$, and observe that $v\rightarrow
v_{\epsilon_{j}}$, the Delaunay asymptote of $g$ at the puncture $p_{j}$. In
particular,
$\mathcal{H}(v)\rightarrow\overline{\mathcal{H}}(v_{\epsilon_{j}})$. We
conclude that
$\mathcal{P}_{\textrm{rad}}(v)=\mathcal{P}_{\textrm{rad}}(v_{\epsilon_{j}})=|\mathbf{S}^{n-1}|\overline{\mathcal{H}}(v_{\epsilon_{j}})<0.$
The remainder of the lemma follows from the energy ordering theorem of van den
Berg [4] applied to the Delaunay solutions, as described in the paragraph
above. ∎
###### Remark 1.
Our radial Pohozaev invariant is basically the same as the one defined in
Proposition 4.1 of [1]. Jin and Xiong [16] write out the same invariant for
higher order equations.
It will actually be useful for later computations to decompose the Hamiltonian
energy $\mathcal{H}$ given in (3) as
$\mathcal{H}(v)=\mathcal{H}_{\textrm{cyl}}(v)+\frac{(n-4)^{2}(n^{2}-4)}{32}v^{\frac{2n}{n-4}}.$
(29)
The same computation as in (27) shows the following lemma.
###### Lemma 6.
Let $v$ satisfy
$v:(a,b)\times\mathbf{S}^{n-1}\rightarrow\mathbf{R},\qquad
P_{\textrm{cyl}}v=Av^{\frac{n+4}{n-4}}$
for some constant $A$. Then the integral
$\int_{\\{t\\}\times\mathbf{S}^{n-1}}\mathcal{H}_{\textrm{cyl}}(v)+\frac{(n-4)}{2n}Av^{\frac{2n}{n-4}}d\theta$
does not depend on $t$.
## 4\. Proof of the compactness theorem
In this section we prove Theorem 1. We first use standard blow-up techniques
to prove a priori bounds on the $\mathcal{C}^{4}$-norm of solutions of (5).
Once we obtain these bounds, we use them to extract a convergent subsequence.
Finally we prove that our limit is non-trivial, using the fact that the radial
Pohozaev invariants of our original sequence of metrics remain bounded away
from zero.
### 4.1. A priori bounds
We prove some a priori bounds for solutions of
$\overset{\circ}{P}u=\frac{n(n-4)(n^{2}-4)}{16}u^{\frac{n+4}{n-4}}.$
###### Theorem 7.
Let $n\geq 5$, let $\Lambda\subset\mathbf{S}^{n}$ be a proper closed set, and
let $g=u^{\frac{4}{n-4}}\overset{\circ}{g}$ be a complete metric on
$\mathbf{S}^{n}\backslash\Lambda$ such that
$Q_{g}=\frac{n(n^{2}-4)}{8},\qquad R_{g}\geq 0.$
Then there exists $C>0$ depending only on the dimension $n$ such that
$u(x)\leq
C\left(\operatorname{dist}_{\overset{\circ}{g}}(x,\Lambda)\right)^{\frac{4-n}{2}}.$
(30)
###### Remark 2.
In the context of $g\in\mathcal{M}_{k}$ with $k\geq 2$, our upper bound (30)
is very similar to, but slightly stronger than, Proposition 3.2 of [16],
because our constant $C$ depends only on the dimension $n$.
###### Proof.
Our proof borrows from Pollack’s proof of the corresponding upper bound in the
scalar curvature case.
Given any $g$ satisfying the hypotheses of Theorem 7, $x_{0}\not\in\Lambda$,
and $\rho>0$ such that
$\overset{\circ}{\mathbf{B}}_{\rho}\subset\mathbf{S}^{n}\backslash\Lambda$ we
define the auxiliary function
$f:\overset{\circ}{\mathbf{B}}_{\rho}\rightarrow\mathbf{R},\qquad
f(x)=(\rho-\operatorname{dist}_{\overset{\circ}{g}}(x,x_{0}))^{\frac{n-4}{2}}u(x).$
(31)
Observe that choosing
$\rho=\frac{1}{2}\operatorname{dist}_{\overset{\circ}{g}}(x_{0},\Lambda)$
yields
$f(x_{0})=\rho^{\frac{n-4}{2}}u(x_{0})=\left(\frac{1}{2}\operatorname{dist}_{\overset{\circ}{g}}(x_{0},\Lambda)\right)^{\frac{n-4}{2}}u(x_{0}),$
(32)
so it will suffice to find $C$ depending only on $n$ such that $f(x)\leq C$
for all admissible choices of $\Lambda$, $u$, $x_{0}$, and $\rho$.
We suppose the contrary and derive a contradiction. To this end, let
$\Lambda_{i}$, $u_{i}$, $x_{0,i}$ and $\rho_{i}$ be admissible as described
above and suppose
$M_{i}=f(x_{1,i})=\sup_{x\in\overset{\circ}{\mathbf{B}}_{\rho_{i}}(x_{0,i})}f(x)\rightarrow\infty.$
(33)
Observe that
$\left.f\right|_{\partial\overset{\circ}{\mathbf{B}}_{\rho_{i}}(x_{0,i})}=0$,
so $x_{1,i}$ must lie in the interior of the ball
$\overset{\circ}{\mathbf{B}}_{\rho_{i}}(x_{0,i})$. Next let
$r_{i}=\rho_{i}-\operatorname{dist}_{\overset{\circ}{g}}(x_{1,i},x_{0,i}),$
let $y$ be geodesic normal coordinates centered at $x_{1,i}$, and define
$\lambda_{i}=2(u_{i}(x_{1,i}))^{-\frac{2}{n-4}},\qquad
R_{i}=\frac{r_{i}}{\lambda_{i}}=\frac{r_{i}}{2}(u_{i}(x_{1,i}))^{\frac{2}{n-4}}=\frac{1}{2}M_{i}^{\frac{2}{n-4}}$
(34)
and
$w_{i}:\mathbf{B}_{R_{i}}(0)\rightarrow(0,\infty),\qquad
w_{i}(y)=\lambda_{i}^{\frac{n-4}{2}}u_{i}(\lambda_{i}y).$ (35)
By (2) (or, equivalently, (13)) the function $w_{i}$ solves
$P_{\lambda_{i}\overset{\circ}{g}}w_{i}=\frac{n(n-4)(n^{2}-4)}{16}w_{i}^{\frac{n+4}{n-4}}.$
Moreover, by construction
$2^{\frac{n-4}{2}}=w_{i}(0)=\sup_{\mathbf{B}_{R_{i}}(0)}w_{i}(x).$
Using the Arzela-Ascoli theorem we extract a subsequence, which we still
denote by $w_{i}$, that converges uniformly on compact subsets of
$\mathbf{R}^{n}$. Furthermore, as $i\rightarrow\infty$ the rescaled metrics
$\lambda_{i}\overset{\circ}{g}$ converge to the Euclidean metric. Therefore, in
the limit we obtain a function
$\overline{w}:\mathbf{R}^{n}\rightarrow[0,\infty),\quad\Delta_{0}^{2}\overline{w}=\frac{n(n-4)(n^{2}-4)}{16}\overline{w}^{\frac{n+4}{n-4}},\quad\overline{w}(0)=\sup\overline{w}=2^{\frac{n-4}{2}}.$
(36)
By the classification theorem in [20, Theorem 1.3] we must have
$\overline{w}(x)=\left(\frac{1+|x|^{2}}{2}\right)^{\frac{4-n}{2}}.$
Thus each solution $u_{i}$ has a “bubble” when $i$ is sufficiently large: that
is, for $i$ sufficiently large the metric on a small neighborhood of $x_{1,i}$
is close (in $\mathcal{C}^{4}$-norm) to the round metric, and hence this
neighborhood contains geodesic spheres that are mean concave.
We verify this by computing the mean curvature of a geodesic sphere
explicitly. The round metric has the form
$g_{lm}=\frac{4}{(1+|x|^{2})^{2}}\delta_{lm}$ in stereographic coordinates,
and in general the mean curvature of a hypersurface $\Sigma$ in a Riemannian
manifold with unit normal $\eta$ is given by
$H_{\Sigma}=-\operatorname{tr}_{g}\langle\nabla_{\partial
l}\eta,\partial_{m}\rangle=-\partial_{l}\eta^{l}-\eta^{p}\Gamma_{lp}^{l}.$
(37)
A geodesic sphere centered at $0$ coincides with a Euclidean round sphere
centered at the origin (with a different radius, of course), and so the inward
normal vector is
$\eta=-\left(\frac{1+|x|^{2}}{2|x|}\right)x^{l}\partial_{l}.$
A computation shows
$H=\frac{(n-1)(1-|x|^{2})}{2|x|},$
which is negative precisely when $|x|>1$ and, in particular, when $|x|>3$.
$\|w_{i}-\overline{w}\|_{\mathcal{C}^{4}(\mathbf{B}_{\frac{3R_{i}}{4}}(0))}$
is arbitrarily small when $i$ is sufficiently large, we see that
$\partial\mathbf{B}_{\frac{3R_{i}}{4}}(0)$ is also mean concave with respect
to the metric $w_{i}^{\frac{4}{n-4}}\delta_{lm}$, which in turn implies
$\partial\mathbf{B}_{\frac{3|x_{1,i}|}{8}}(x_{1,i})$ is mean concave with
respect to the metric induced by $u_{i}^{\frac{4}{n-4}}\delta_{lm}$. This
contradicts Theorem 2. ∎
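As an independent check on the mean-curvature computation used in the proof, one can redo it in radial form. For the conformal metric $g=e^{2\phi}\delta$ with $e^{\phi}=\frac{2}{1+r^{2}}$ and inward normal $\eta=-h(r)\frac{x}{|x|}$ with $h=e^{-\phi}$, formula (37) reduces (for this radial normal) to $H=h'+\frac{(n-1)h}{r}+nh\phi'$. The sketch below evaluates this with `sympy`; it returns $H=\frac{(n-1)(1-r^{2})}{2r}$, which vanishes at the equator $r=1$ and is negative for $r>1$.

```python
import sympy as sp

r, n = sp.symbols('r', positive=True), sp.symbols('n', positive=True)

# Round metric in stereographic coordinates: g = e^{2 phi} delta
phi = sp.log(2/(1 + r**2))
h = sp.exp(-phi)          # eta = -h(r) x/|x| is the inward g-unit normal

# H = -div(eta) - n * eta . grad(phi), computed radially:
# div(a(r) x/|x|) = a' + (n-1) a / r
H = sp.diff(h, r) + (n - 1)*h/r + n*h*sp.diff(phi, r)

assert sp.simplify(H - (n - 1)*(1 - r**2)/(2*r)) == 0
```

This agrees with the classical formula $(n-1)\cot\rho$ for a geodesic sphere of geodesic radius $\rho$, since the Euclidean radius satisfies $r=\tan(\rho/2)$.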
We immediately obtain the following Corollary.
###### Corollary 8.
For each compact subset $\Omega\subset\mathbf{S}^{n}\backslash\Lambda$,
$l\in\mathbb{N}$ and $\alpha\in(0,1)$ there exists $C_{1}$ depending only on
$n$, $l$, $\Omega$, and $\alpha$ such that
$\|u_{i}\|_{\mathcal{C}^{l,\alpha}(\Omega)}\leq C_{1}.$ (38)
We also record here a lower bound due to Jin and Xiong [16, Theorem 1.3].
###### Theorem 9 (Jin and Xiong).
Let
$v:[A,\infty)\times\mathbf{S}^{n-1}\rightarrow(0,\infty)$
solve (16). Then $\mathcal{P}_{\textrm{rad}}(v)\leq 0$ with equality if and
only if
$\liminf_{t\rightarrow\infty}v(t,\theta)=\limsup_{t\rightarrow\infty}v(t,\theta)=\lim_{t\rightarrow\infty}v(t,\theta)=0.$
Otherwise, if $\mathcal{P}_{\textrm{rad}}(v)<0$, there exists $C_{2}>0$ (which
depends on the solution $v$!) such that $v(t,\theta)\geq C_{2}$.
###### Corollary 10.
Let $g=u^{\frac{4}{n-4}}\overset{\circ}{g}\in\mathcal{M}_{k}$ have the
singular set $\Lambda=\\{p_{1},\dots,p_{k}\\}$. Then there exists $C_{2}>0$
(depending on the solution $u$!) such that
$u(x)\geq C_{2}\left(\min_{1\leq j\leq
k}\operatorname{dist}_{\overset{\circ}{g}}(x,p_{j})\right)^{\frac{4-n}{2}}.$
### 4.2. Sequential compactness
In this section we complete our proof of sequential compactness. To this end,
let
$\\{g_{i}=u_{i}^{\frac{4}{n-4}}\overset{\circ}{g}\\}\subset\Omega_{\delta_{1},\delta_{2}}\subset\mathcal{M}_{k}$
and denote the singular set of the conformal factor $u_{i}$ by
$\Lambda_{i}=\\{p_{1}^{i},\dots,p_{k}^{i}\\}$.
The following lemma will simplify our later analysis since it allows us to
assume the singular points are fixed.
###### Lemma 11.
Let $g_{i}=u_{i}^{\frac{4}{n-4}}\overset{\circ}{g}$ be a sequence in
$\mathcal{M}_{k}$ as described above. After passing to a subsequence, we may
assume that when $i$ is sufficiently large both $g_{i}$ and $u_{i}$ are
regular on
$\mathbf{S}^{n}\backslash\left(\cup_{j=1}^{k}\overset{\circ}{\mathbf{B}}_{\delta_{1}/2}(p_{j}^{i})\right),$
where $\overset{\circ}{\mathbf{B}}_{r}(p)$ is the geodesic ball centered at
$p$ with radius $r$, with respect to the round metric $\overset{\circ}{g}$.
###### Proof.
The set
$\left\\{(q_{1},\dots,q_{k})\in(\mathbf{S}^{n})^{k}:\operatorname{dist}_{\overset{\circ}{g}}(q_{j},q_{l})\geq\delta_{1}\textrm{
for each }j\neq l\right\\}$
is compact and contains each singular set
$\Lambda_{i}=\\{p_{1}^{i},\dots,p_{k}^{i}\\}$. Thus we may extract a
convergent subsequence, which we still denote as
$\Lambda_{i}=\\{p_{1}^{i},\dots,p_{k}^{i}\\}$, with
$p_{j}^{i}\rightarrow\bar{p}_{j}$. The lemma now follows from
$p_{j}^{i}\rightarrow\bar{p}_{j}$ for each $j$. ∎
To set notation, we define the compact sets
$K_{m}=\mathbf{S}^{n}\backslash\left(\cup_{j=1}^{k}\overset{\circ}{\mathbf{B}}_{2^{-m}\delta_{1}}(\bar{p}_{j})\right)$
(39)
for each natural number $m\in\mathbf{N}$. By construction the family
$\\{K_{m}\\}$ is a compact exhaustion of
$\mathbf{S}^{n}\backslash\\{\bar{p}_{1},\dots,\bar{p}_{k}\\}$. Furthermore, by
the convergence $p_{j}^{i}\rightarrow\bar{p}_{j}$, for each fixed $m$ there
exists $i_{0}$ such that $i\geq i_{0}$ implies $u_{i}$ is smooth in $K_{m}$.
Therefore, combining Corollary 8 and the Arzela-Ascoli theorem we obtain a
convergent subsequence, which we again denote by $u_{i}$, that converges
uniformly on compact subsets of $\mathbf{S}^{n}\backslash\overline{\Lambda}$
to a limit $\overline{u}$, where
$\overline{\Lambda}=\\{\bar{p}_{1},\dots,\bar{p}_{k}\\}$. Furthermore,
combining our a priori bounds and elliptic regularity, we see that the limit
function satisfies
$\overline{u}:\mathbf{S}^{n}\backslash\overline{\Lambda}\rightarrow[0,\infty),\quad\overset{\circ}{P}\overline{u}=\frac{n(n-4)(n^{2}-4)}{16}\overline{u}^{\frac{n+4}{n-4}}.$
(40)
###### Proposition 12.
The limit function $\overline{u}$ constructed in the paragraph above is
positive on $\mathbf{S}^{n}\backslash\\{\bar{p}_{1},\dots,\bar{p}_{k}\\}$.
###### Proof.
If the proposition does not hold then there exists
$q\in\mathbf{S}^{n}\backslash\\{\bar{p}_{1},\dots,\bar{p}_{k}\\}$ such that
$0=\overline{u}(q)=\lim_{i\rightarrow\infty}u_{i}(q).$
Let $\epsilon_{i}=u_{i}(q)$ and
$w_{i}:\mathbf{S}^{n}\backslash\\{p_{1}^{i},\dots,p_{k}^{i}\\}\rightarrow(0,\infty),\qquad
w_{i}(x)=\frac{1}{\epsilon_{i}}u_{i}(x).$
As a consequence of (13), we have
$\overset{\circ}{P}w_{i}=\epsilon_{i}^{\frac{8}{n-4}}\frac{n(n-4)(n^{2}-4)}{16}w_{i}^{\frac{n+4}{n-4}}.$
(41)
In addition, $w_{i}$ satisfies the normalization
$w_{i}(q)=1$ (42)
for each $i$ by construction.
By (38), for each $m\in\mathbf{N}$ there exists $C_{1}$ depending on $m$ and
the dimension $n$ such that
$\sup_{K_{m}}u_{i}\leq C_{1}.$ (43)
Next we find an upper bound for $w_{i}$ on $K_{m}$; fix $m$ large enough that
$q\in K_{m}$. Using the fact that $0<U_{\textrm{sph}}\leq 2^{\frac{n-4}{2}}$
and the Harnack inequality in Theorem 3 we get
$2^{\frac{n-4}{2}}\epsilon_{i}\geq U_{\textrm{sph}}(q)u_{i}(q)\geq\inf_{K_{m}}(U_{\textrm{sph}}u_{i})\geq\frac{1}{\widetilde{C}(m,n)}\sup_{K_{m}}(U_{\textrm{sph}}u_{i}),$
and so
$\frac{1}{\epsilon_{i}}\sup_{K_{m}}(U_{\textrm{sph}}u_{i})\leq C_{2}.$ (44)
Since $U_{\textrm{sph}}$ is bounded below on the compact set $K_{m}$ by a
positive constant depending only on $m$, the bound (44) yields a uniform upper
bound
$\sup_{K_{m}}w_{i}\leq C_{3},$
where $C_{3}$ depends only on $n$ and $m$.
We conclude that $w_{i}$ converges uniformly on compact subsets of
$\mathbf{S}^{n}\backslash\overline{\Lambda}$ to a function
$\overline{w}:\mathbf{S}^{n}\backslash\overline{\Lambda}\rightarrow[0,\infty),\qquad\overset{\circ}{P}\overline{w}=0.$
By Theorem 4 we have
$\overline{w}=\sum_{j=1}^{k}\alpha_{j}G_{\bar{p}_{j}},$ (45)
for some coefficients $\alpha_{j}\geq 0$. By the normalization (42) at least
one of the $\alpha_{j}$’s is positive, so (in particular) $\overline{w}$ is a
smooth, positive function.
Without loss of generality, we may assume $\alpha_{1}\neq 0$ and center our
coordinate system at $\bar{p}_{1}$. We now use the cylindrical coordinates
$t=-\log|x|$ and $\theta=\frac{x}{|x|}$ in a punctured ball centered on
$\bar{p}_{1}=0$, and define
$v_{i}(t,\theta)=e^{\left(\frac{4-n}{2}\right)t}u_{i}(e^{-t}\theta)U_{\textrm{sph}}(e^{-t}\theta),\quad
z_{i}(t,\theta)=\frac{1}{\epsilon_{i}}v_{i}(t,\theta)=e^{\left(\frac{4-n}{2}\right)t}w_{i}(e^{-t}\theta)U_{\textrm{sph}}(e^{-t}\theta)$
and
$\overline{v}(t,\theta)=e^{\left(\frac{4-n}{2}\right)t}\overline{u}(e^{-t}\theta)(\cosh
t)^{\frac{4-n}{2}},\quad\overline{z}(t,\theta)=e^{\left(\frac{4-n}{2}\right)t}\overline{w}(e^{-t}\theta)(\cosh
t)^{\frac{4-n}{2}}.$
By the expansion (24) we have
$\displaystyle\overline{z}(t,\theta)$ $\displaystyle=$ $\displaystyle
e^{\left(\frac{4-n}{2}\right)t}(\cosh
t)^{\frac{4-n}{2}}\left(\frac{\alpha_{1}}{2n(n-2)(n-4)\omega_{n}}e^{\left(\frac{n-4}{2}\right)t}+\mathcal{O}(1)\right)$
$\displaystyle=$
$\displaystyle\frac{\alpha_{1}}{2n(n-2)(n-4)\omega_{n}}+\mathcal{O}(e^{(4-n)t}).$
Observe that $z_{i}$ satisfies the PDE
$P_{\textrm{cyl}}z_{i}=\epsilon_{i}^{\frac{8}{n-4}}\frac{n(n-4)(n^{2}-4)}{16}z_{i}^{\frac{n+4}{n-4}},$
so, following Lemma 6, the integral
$\int_{\\{t\\}\times\mathbf{S}^{n-1}}\mathcal{H}_{\textrm{cyl}}(z_{i})+\epsilon_{i}^{\frac{8}{n-4}}\frac{(n-4)^{2}(n^{2}-4)}{32}z_{i}^{\frac{2n}{n-4}}d\theta$
does not depend on $t$. Moreover, taking a limit as $i\rightarrow\infty$, we
obtain
$\displaystyle\lim_{i\rightarrow\infty}\int_{\\{t\\}\times\mathbf{S}^{n-1}}\mathcal{H}_{\textrm{cyl}}(z_{i})+\epsilon_{i}^{\frac{8}{n-4}}\frac{(n-4)^{2}(n^{2}-4)}{32}z_{i}^{\frac{2n}{n-4}}d\theta$
(47) $\displaystyle=$
$\displaystyle\int_{\\{t\\}\times\mathbf{S}^{n-1}}\mathcal{H}_{\textrm{cyl}}(\overline{z})d\theta$
$\displaystyle=$
$\displaystyle-\int_{\\{t\\}\times\mathbf{S}^{n-1}}\frac{n^{2}(n-4)^{2}}{32}\cdot\frac{\alpha_{1}^{2}}{4n^{2}(n-2)^{2}(n-4)^{2}\omega_{n}^{2}}+\mathcal{O}(e^{(4-n)t})$
$\displaystyle=$
$\displaystyle-\frac{n\alpha_{1}^{2}}{128(n-2)^{2}\omega_{n}}+\mathcal{O}(e^{(4-n)t}).$
On the other hand, by construction
$\displaystyle\mathcal{P}_{\textrm{rad}}(v_{i})$ $\displaystyle=$
$\displaystyle\int_{\\{t\\}\times\mathbf{S}^{n-1}}\mathcal{H}_{\textrm{cyl}}(v_{i})+\frac{(n-4)^{2}(n^{2}-4)}{32}v_{i}^{\frac{2n}{n-4}}d\theta$
$\displaystyle=$
$\displaystyle\int_{\\{t\\}\times\mathbf{S}^{n-1}}\epsilon_{i}^{2}\mathcal{H}_{\textrm{cyl}}(z_{i})+\frac{(n-4)^{2}(n^{2}-4)}{32}\epsilon_{i}^{\frac{2n}{n-4}}z_{i}^{\frac{2n}{n-4}}d\theta\rightarrow
0,$
and so
$\lim_{i\rightarrow\infty}\mathcal{P}_{\textrm{rad}}(g_{i},p_{1}^{i})=0.$
This contradicts the hypothesis that the asymptotic necksizes of
$g_{i}=u_{i}^{\frac{4}{n-4}}\overset{\circ}{g}$ at the puncture points
$p_{1}^{i}$ are all bounded away from $0$. ∎
We finally complete the proof of Theorem 1.
###### Proof.
Given a sequence $\\{g_{i}\\}\in\Omega_{\delta_{1},\delta_{2}}$, we have
already obtained a limit
$\overline{g}=\overline{u}^{\frac{4}{n-4}}\overset{\circ}{g}$ as a limit of a
subsequence. We know that
$\overline{u}:\mathbf{S}^{n}\backslash\\{\bar{p}_{1},\dots,\bar{p}_{k}\\}\rightarrow(0,\infty),\quad\overset{\circ}{P}\overline{u}=\frac{n(n-4)(n^{2}-4)}{16}\overline{u}^{\frac{n+4}{n-4}},$
where $\bar{p}_{j}=\lim_{i\rightarrow\infty}p_{j}^{i}$. We have also shown
that $\overline{u}>0$ on
$\mathbf{S}^{n}\backslash\\{\bar{p}_{1},\dots,\bar{p}_{k}\\}$.
It only remains to verify that $\overline{g}$ is complete. If $\overline{g}$
is incomplete then there exists $j\in\\{1,\dots,k\\}$ such that
$\liminf_{x\rightarrow\bar{p}_{j}}\overline{u}(x)<\infty.$
In this case Theorem 9 implies
$\mathcal{P}_{\textrm{rad}}(\overline{g},\bar{p}_{j})=0$. However, by
construction
$\mathcal{P}_{\textrm{rad}}(\overline{g},\bar{p}_{j})=\lim_{i\rightarrow\infty}\mathcal{P}_{\textrm{rad}}(g_{i},p_{j}^{i})\geq\delta_{2},$
giving a contradiction. We conclude that $\overline{g}$ is indeed in
$\Omega_{\delta_{1},\delta_{2}}$. ∎
## References
* [1] M. O. Ahmedou, Z. Djadli, and A. Malchiodi. Prescribing a fourth-order conformal invariant on the standard sphere II: blow-up analysis and applications. Ann. Scuola Norm. Sup. Pisa 5 (2002), 387–434.
* [2] J. H. Andrade and J. M. do Ó. Asymptotics for singular solutions of conformally invariant fourth order systems in the punctured ball. preprint, arXiv:2003.03487.
* [3] T. Aubin. Équations différentielles non linéaires et problème de Yamabe concernant la courbure scalaire. J. Math. Pures Appl. 55 (1976), 269–296.
* [4] J. van den Berg. The phase-plane picture for a class of fourth-order conservative differential equations. J. Differential Equations. 161 (2000), 110–153.
* [5] T. Branson. Differential operators canonically associated to a conformal structure. Math. Scandinavia. 57 (1985), 293–345.
* [6] T. Branson. Group representations arising from Lorentz conformal geometry. J. Funct. Anal. 74 (1987), 199–291.
* [7] T. Branson and A. R. Gover. Origins, applications and generalisations of the $Q$-curvature. Acta Appl. Math. 102 (2008), 131–146.
* [8] G. Caristi and E. Mitidieri. Harnack inequalities and applications to solutions of biharmonic equations. Operator Theory: Advances and Applications. 168 (2006), 1–26.
* [9] S.-Y. A. Chang, M. Eastwood, B. Ørsted, and P. Yang. What is $Q$-curvature? Acta Appl. Math. 102 (2008), 119–125.
* [10] S.-Y. A. Chang, Z.-C. Han, and P. Yang. Some remarks on the geometry of a class of locally conformally flat metrics. Progress in Math. 333 (2020), 37–56.
* [11] R. Frank and T. König. Classification of positive solutions to a nonlinear biharmonic equation with critical exponent. Anal. PDE 12 (2019), 1101–1113.
* [12] M. Gursky and A. Malchiodi. A strong maximum principle for the Paneitz operator and a non-local flow for the $Q$-curvature. J. Eur. Math. Soc. 17 (2015), 2137–2173.
* [13] A. R. Gover and B. Ørsted. Universal principles for Kazdan-Warner and Pohozaev-Schoen type identities. Comm. Contemp. Math. 15 (2013),
* [14] F. Hang and P. Yang. Lectures on the fourth order $Q$-curvature equation. Geometric analysis around scalar curvature, Lect. Notes Ser. Inst. Math. Sci. Natl. Univ. Singap. 31 (2016), 1–33.
* [15] F. Hang and P. Yang. $Q$-curvature on a class of manifolds with dimension at least $5$. Comm. Pure Appl. Math. 69 (2016), 1452–1491.
* [16] T. Jin and J. Xiong. Asymptotic symmetry and local behavior of solutions of higher order conformally invariant equations with isolated singularities. preprint, arXiv:1901.01678.
* [17] J. Lee and T. Parker. The Yamabe problem. Bull. Amer. Math. Soc. 17 (1987), 37–91.
* [18] Y.-J. Lin and W. Yuan. A symmetric $2$-tensor canonically associated to $Q$-curvature and its applications. Pac. J. Math. 291 (2017), 425–438.
* [19] M. Khuri, F. C. Marques, and R. Schoen. A compactness theorem for the Yamabe problem. J. Differential Geom. 81 (2009), 143–196.
* [20] C. S. Lin. A classification of solutions of a conformally invariant fourth order equation in $\mathbf{R}^{n}$. Comment. Math. Helv. 73 (1998), 206–231.
* [21] R. Mazzeo, D. Pollack, and K. Uhlenbeck. Moduli spaces of singular Yamabe metrics. J. Amer. Math. Soc. 9 (1996), 303–344.
* [22] S. Paneitz. A quartic conformally covariant differential operator for arbitrary pseudo-Riemannian manifolds. SIGMA Symmetry Integrability Geom. Methods Appl. 4 (2008), 3 pages (preprint from 1983).
* [23] D. Pollack. Compactness results for complete metrics of constant positive scalar curvature on subdomains of $\mathbf{S}^{n}$. Indiana Univ. Math. J. 42 (1993), 1441–1456.
* [24] J. Ratzkin. On constant $Q$-curvature metrics with isolated singularities. preprint, arXiv:2001.07984.
* [25] R. Schoen. Conformal deformation of a Riemannian metric to constant scalar curvature. J. Diff. Geom. 20 (1984), 479–495.
* [26] N. Trudinger. Remarks concerning the conformal deformation of Riemannian structures on compact manifolds. Ann. Scuola Norm. Pisa. 22 (1968), 265–274.
* [27] W. Wei. Compactness theorem of complete $k$-curvature manifolds with isolated singularities. preprint, arxiv.2008.08777.
* [28] H. Yamabe. On the deformation of Riemannian structures on a compact manifold. Osaka Math. J. 12 (1960), 21–37.
|
# Doppler Estimation for High-Velocity Targets Using Subpulse Processing and the Classic Chinese Remainder Theorem
Fernando Darío Almeida García, André Saito Guerreiro, Gustavo Rodrigues de
Lima Tejerina,
José Cândido S. Santos Filho, Gustavo Fraidenraich, and Michel Daoud Yacoub F.
D. A. García, A. S. Guerreiro, G. R. de Lima Tejerina, J. C. S. Santos Filho,
G. Fraidenraich, and M. D. Yacoub are with the Wireless Technology Laboratory,
Department of Communications, School of Electrical and Computer Engineering,
University of Campinas, 13083-852 Campinas, SP, Brazil, Tel:+55(19)3788-5106,
e-mail: {ferdaral, andsaito, tejerina, candido, gf,
<EMAIL_ADDRESS>
###### Abstract
In pulsed Doppler radars, the classic Chinese remainder theorem (CCRT) is a
common method to resolve Doppler ambiguities caused by fast-moving targets.
Another issue concerning high-velocity targets is related to the loss in the
signal-to-noise ratio (SNR) after performing range compression. In particular,
this loss can be partially mitigated by the use of subpulse processing (SP).
Modern radars combine these techniques in order to reliably unfold the target
velocity. However, the presence of background noise may compromise the Doppler
estimates. Hence, a rigorous statistical analysis is imperative. In this work,
we provide a comprehensive analysis on Doppler estimation. In particular, we
derive novel closed-form expressions for the probability of detection (PD) and
probability of false alarm (PFA). To this end, we consider the newly introduced
SP along with the CCRT. A comparative analysis between SP and the classic pulse
processing (PP) technique is also carried out. Numerical results and Monte-
Carlo simulations corroborate the validity of our expressions and show that
the SP–plus–CCRT technique helps to greatly reduce the PFA compared to
previous studies, thereby improving radar detection.
###### Index Terms:
Classic Chinese remainder theorem, robust Chinese remainder theorem, Doppler
frequency estimation, subpulse processing, probability of detection.
## I Introduction
One important concern in modern pulsed radars is the Doppler frequency
estimation of high-velocity targets. Due to the target's high radial
velocity, ambiguous estimates are more likely to occur. More specifically,
ambiguous estimates appear whenever the target’s Doppler shift is greater than
the pulse repetition frequency (PRF) [1]. It seems obvious to think that
increasing the PRF will overcome this problem. However, if we are interested
in detecting targets located at long distances, then the PRF will be
restricted to a maximum value. Therefore, the choice of PRF is a trade-off
between range and Doppler requirements [2]. Fortunately, there are some
techniques that can resolve ambiguities, although at the cost of extra
measurement time and processing load. These techniques make use of multiple
PRFs [3, 4, 5, 6, 7]. The best-known and most widely used one is the classic Chinese
remainder theorem (CCRT). The CCRT is a fast and accurate method to resolve
the unambiguous Doppler frequency. This is accomplished by solving a set of
congruences, formed by the estimated measurements of each PRF [7, 8, 9].
Nevertheless, in this method, the number of PRFs limits how many targets can be
resolved. In general, $L$ PRFs are required to successfully disambiguate $L-1$
targets. If the number of targets exceeds $L-1$, then ghosts can appear.
(Ghosts are false targets resulting from false coincidences of
Doppler-ambiguous or range-ambiguous data [4].) Unless
additional data (e.g., tracking information) is available, the radar has no
way of recognizing possible false detections [4]. Care must be taken in the
analysis and design since the number of PRFs and the number of targets to be
detected have a direct relationship.
Another issue concerning high-velocity targets is related to the signal-to-
noise ratio (SNR) loss. This occurs because the Doppler shift of fast-moving
targets will provoke a mismatch between the received signal and its replica
[2]. Consequently, the SNR after range compression may be drastically
reduced. (Range compression refers to the convolution operation between the
received signal and the replica of the transmitted signal [10].) Some radar
systems estimate and remove the Doppler shift prior to applying range
compression. Nonetheless, some residual or uncompensated Doppler typically
remains. This concern was partially alleviated in [11, 12]. Specifically, in
[11], the authors proposed a subpulse processing (SP) scheme, which proved to
have a higher Doppler tolerance (Doppler tolerance refers to the degree of
degradation in the compressed response due to uncompensated Doppler [13]),
increasing the ability to detect fast-moving targets. The shortcomings of SP
are computation time (critical for most radars), processing load, and poor
velocity resolution.
As stated before, the CCRT and SP have hardware and physical limitations when
it comes to estimating high target velocities. In practice, modern pulsed
radars take advantage of these two techniques so as to improve the system’s
capability to accurately detect the target's true Doppler frequency. Since both
SP and the CCRT are affected by the presence of background noise, a thorough
statistical analysis involving these two estimation techniques must be carried
out. Recently in [14], the authors proposed a novel expression for the
probability to correctly estimate the unambiguous Doppler frequency
considering the CCRT and the common pulse processing (PP) technique [2].
However, to the best of our knowledge, there is no performance analysis
considering the SP–plus–CCRT technique.
The main objective of this research is to combine the statistical analysis
conducted in [14] along with the newly introduced SP and the CCRT. To do so,
we adopt a stochastic model that suits our Doppler estimation techniques.
Then, we derive novel and closed-form expressions for: i) the probability to
correctly estimate the Doppler frequency, also called probability of detection
(PD); and ii) the probability to erroneously estimate the Doppler frequency,
also called probability of false alarm (PFA).
The remainder of this paper is organized as follows. Section II introduces
some key concepts to understand how the velocity estimation is performed.
Section III describes the system model. Section IV analyzes the Doppler
estimation using multiple PRFs; Section V discusses the representative
numerical results. Finally, Section VI concludes this paper.
In what follows, $(a)\text{mod}(b)$ denotes the remainder of the Euclidean
division of $a$ by $b$; $\left|\cdot\right|$, absolute value;
$\lfloor\cdot\rfloor$, floor operation; $\text{round}(\cdot)$, rounding
operation; $\text{Pr}\left[\cdot\right]$, probability; $\mathbb{E}(\cdot)$,
expectation; $\text{Var}(\cdot)$, variance; $(\cdot)^{*}$, complex conjugate;
$\bigcap$, intersection of events; $\bigcup$, union of events;
$\mathcal{N}(\mu,\sigma^{2})$ denotes a Gaussian distribution with mean $\mu$
and variance $\sigma^{2}$; $\mathcal{N}_{c}(\mu,\sigma^{2})$ denotes a complex
Gaussian distribution with mean $\mu$ and variance $\sigma^{2}$, and
$j=\sqrt{-1}$ is the imaginary unit.
## II Preliminaries
In this section, we present a brief introduction to the PP and SP
techniques. Later, we describe the basics needed to understand the CCRT algorithm.
Finally, we show how the combined technique SP–plus–CCRT works in order to
improve Doppler estimation.
### II-A Pulse Processing
PP is the common technique employed by the radar to estimate the target
velocity and improve the SNR. In this processing technique, the radar
transmits a sequence of $M$ pulses during a coherent processing interval (CPI)
[15]. Then, range compression is performed on each pulse to improve the
radar’s range resolution. Finally, the discrete Fourier transform (DFT) is
applied along the slow-time samples to increase the SNR and to estimate the
target Doppler frequency [2]. These samples are collected at a rate equal to
the PRF. The maximum Doppler frequency shift that the radar manages to detect
using PP is $\Psi_{max}=\pm\text{PRF}/2$. If the target Doppler frequency,
$\mathit{f}_{d}$, exceeds this value, then the radar will deliver ambiguous
Doppler measurements. The Doppler frequency shift will be positive for closing
targets and negative for receding targets. The target velocity,
$\mathit{v}_{t}$, and its corresponding Doppler shift are related by the
following equation [16]:
$\displaystyle\mathit{f}_{d}=\frac{2\mathit{v}_{t}\mathit{f}_{R}}{\mathit{c}}=\frac{2\mathit{v}_{t}}{\lambda},$
(1)
where $\mathit{f}_{R}$ is the radar's operating frequency, $\mathit{c}$ is the
speed of light, and $\lambda$ is the radar wavelength.
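As a quick numerical check of (1), the sketch below (an illustrative snippet of ours; the function names are not from the paper) evaluates the Doppler shift for the setup used later in Section V, where $\lambda=0.05$ [m] and $\mathit{v}_{t}=900$ [m/s]:

```python
C = 299_792_458.0  # speed of light [m/s]

def doppler_shift(v_t: float, f_R: float) -> float:
    """Doppler shift f_d = 2 * v_t * f_R / c, per Eq. (1)."""
    return 2.0 * v_t * f_R / C

def doppler_shift_from_wavelength(v_t: float, lam: float) -> float:
    """Equivalent form f_d = 2 * v_t / lambda."""
    return 2.0 * v_t / lam

# Example: v_t = 900 m/s, lambda = 0.05 m -> f_d = 36 kHz,
# far above PRF/2 for the PRFs considered in Section V, hence ambiguous.
```

Such a shift dwarfs $\Psi_{max}$ for any of the PRFs used here, which is precisely why disambiguation is needed.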
### II-B Subpulse Processing
SP improves Doppler tolerance by mitigating the loss in SNR caused by the
uncompensated Doppler shift of fast-moving targets [2, 11]. Moreover, SP is
used to overcome the problem of ambiguous Doppler measurements. The SP
algorithm runs as follows:
1. 1.
First, the replica of the transmitted signal is divided into $N$ subpulses –
unlike PP, which uses the entire replica.
2. 2.
Next, range compression is carried out between each subpulse and the
received signal (cf. [11, 12] for a detailed discussion). The use of
shorter replicas enhances the system's Doppler tolerance [10], increasing
the detection capability of fast-moving targets. Of course, this process leads
to a reduction in the peak amplitude of the sub-compression response (by a
factor of $1/N$). Here, the slow-time samples are collected at a rate of
$\Phi=N/\tau$, where $\tau$ is the pulse width. It is important to emphasize
that PP and SP are performed simultaneously, that is, for each of the $M$
compressions, the radar carries out $N$ sub-compressions [12].
3. 3.
Finally, the slow-time samples are coherently integrated to estimate the
target Doppler frequency and to “restore” the peak amplitude of the sub-
compression response.
The number of subpulses can be chosen as high as needed, as long as it is
taken into consideration that each additional subpulse requires an extra range
compression operation, increasing the computational load and computation time.
The maximum Doppler frequency shift that the radar can now manage to detect is
$\Phi_{max}=\pm N/2\tau$ [11]. Since $\Phi_{max}>\Psi_{max}$, SP provides a
higher frequency range of detection for fast-moving targets.
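The gap between $\Psi_{max}$ and $\Phi_{max}$ can be made concrete with a short sketch (illustrative code of ours, using the pulse width $\tau=25\ [\mu s]$ and $N=8$ adopted later in Section V):

```python
def pp_max_doppler(prf: float) -> float:
    """Maximum unambiguous Doppler with pulse processing: +/- PRF/2."""
    return prf / 2.0

def sp_max_doppler(n_subpulses: int, tau: float) -> float:
    """Maximum unambiguous Doppler with subpulse processing: +/- N/(2*tau)."""
    return n_subpulses / (2.0 * tau)

# With tau = 25 us and N = 8, SP covers +/-160 kHz, versus only
# +/-850 Hz for PP at PRF = 1700 Hz.
```

The SP bound grows with $N$, but each extra subpulse costs one more range compression, which is the trade-off noted above.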
Computation time is critical for most radars and depends strongly on the
radar’s operation mode (e.g. tracking, searching or imaging), thereby limiting
the number of subpulses. Commonly, the number of subpulses is set between 5
and 10. However, such a small number yields a poor discretization in the
frequency domain, consequently producing inaccurate estimates.
Figure 1: Block diagram for Doppler estimation.
### II-C Classic Chinese Remainder Theorem
The use of multiple PRFs is a common approach to resolve range and Doppler
ambiguities [3, 8, 17, 4]. In this work, we only focus on solving Doppler
ambiguities. Consider for the moment a target with Doppler shift
$\mathit{f}_{d}>\Psi_{max}$. In this scenario, the radar will detect the
target with an apparent Doppler shift, $\mathit{f}_{d_{ap}}$, that satisfies
$\displaystyle\mathit{f}_{d}=\mathit{f}_{d_{ap}}+n\text{PRF},$ (2)
where $n$ is some integer. It is convenient to express the target's Doppler
shift $\mathit{f}_{d}$ in terms of its corresponding Doppler bin, $b_{d}$.
Thus, (2) becomes
$b_{d}=b_{ap}+nM,$ (3)
in which $b_{{ap}}\in\left\\{0,1,2,\ldots,M-1\right\\}$ is the apparent
Doppler bin, defined as
$\displaystyle b_{ap}=\left\lfloor\left|\frac{\mathit{f}_{d_{ap}}}{\Delta D}\right|\right\rfloor,\qquad\mathit{f}_{d_{ap}}\geq 0$ (4)
$\displaystyle b_{ap}=M-\left\lfloor\left|\frac{\mathit{f}_{d_{ap}}}{\Delta D}\right|\right\rfloor,\qquad\mathit{f}_{d_{ap}}<0$ (5)
with $\Delta D=\text{PRF}/M$ being the Doppler bin spacing. Under this
scenario, the radar is incapable of detecting the target's true Doppler
frequency.
Now, suppose that we have $L$ PRFs. Then, the unambiguous target's Doppler bin
must satisfy the following congruences:
$\displaystyle b_{d}\equiv b_{{ap}_{i}}+n_{i}M_{i},\ \ \ \ 1\leq i\leq L$ (6)
The CCRT states that if all the moduli $M_{i}$ are pairwise coprime, then the set of
congruences in (6) will have a unique solution given by [17, 4, 18]
$\displaystyle
b_{d}=\left(\sum_{i=1}^{L}b_{{ap}_{i}}\beta_{i}\right)\text{mod}\left(\Theta\right),$
(7)
where $\Theta=\prod_{i=1}^{L}M_{i}$, $\beta_{i}=b_{i}\Theta/M_{i}$,
and $b_{i}$ is the smallest integer satisfying the
following expression:
$\displaystyle\left(\frac{b_{i}\Theta}{M_{i}}\right)\text{mod}\left(M_{i}\right)=1.$
(8)
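The reconstruction in (7)–(8) can be sketched numerically as follows (an illustrative Python snippet of ours, assuming pairwise-coprime moduli $M_i$ and using the language's built-in modular inverse to obtain $b_i$):

```python
from math import prod

def ccrt_doppler_bin(b_ap: list[int], M: list[int]) -> int:
    """Recover the unambiguous Doppler bin b_d from the apparent
    bins b_ap_i via the classic Chinese remainder theorem, Eq. (7).
    Assumes the moduli M_i are pairwise coprime."""
    theta = prod(M)                   # Theta = prod_i M_i
    b_d = 0
    for b_ap_i, M_i in zip(b_ap, M):
        partial = theta // M_i        # Theta / M_i
        b_i = pow(partial, -1, M_i)   # smallest b_i with (b_i*Theta/M_i) mod M_i = 1, Eq. (8)
        b_d += b_ap_i * b_i * partial # beta_i = b_i * Theta / M_i
    return b_d % theta

# Example with M = (11, 13, 17, 19): a true bin b_d folds to the
# apparent bins (b_d mod M_i) and is recovered exactly for b_d < Theta.
```

With the moduli $M_{1\ldots4}=11,13,17,19$ used later, $\Theta=46189$, so any Doppler bin below that value is uniquely recovered.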
### II-D Doppler Estimation
Fig. 1 depicts the entire block diagram for Doppler estimation. First, the
received signal passes through two types of independent range compression
blocks, one for PP and one for SP. This process is performed in sequence for
each pulse repetition interval (PRI). The outputs of both blocks are combined
and stored in memory to form a datacube [2]. (The datacube’s data is organized
by range, number of pulses, and number of subpulses.) More datacubes are
needed when using more than one PRF, as shown in Fig. 1. Next, a 2D-DFT block
is applied to each datacube to perform coherent integration. (The 2D-DFT block
refers to a two-dimensional DFT applied along the pulse and subpulse
dimensions.) Next, the output of the 2D-DFT block is a matrix of the same
size containing the estimated Doppler shifts. This new matrix is referred to
as Doppler datacube. Finally, the CCRT is applied over the Doppler datacubes.
This process will be clarified in Section V by means of simulation.
Noise, jamming, and clutter are major concerns in all radar systems. In this
work, we consider the presence of complex white Gaussian noise (CWGN). Thus, the
Doppler spectrum of fast moving targets will be compromised due to the
intrinsic characteristics of noise. For example, a high noise power could mask
small target returns, degrading radar performance. Even if the target return
is entirely deterministic, the combined signal (target–plus–noise) is a random
process and must be treated as such. Therefore, we need to assess the
statistics underlying Doppler analysis, but first, we need to come up with a
specific stochastic model that suits the requirements and design of our
radar’s estimation scheme. This is discussed in the next section.
## III System Model
In this section, we propose a stochastic model that fits our signal processing
schemes. In addition, we describe the premises (hypotheses) used for Doppler
estimation.
According to Sections II-A and II-B, the collected signals in the slow-time
domain corresponding to PP and SP can be expressed, respectively, as
$\displaystyle g_{1}\left[m\right]=$ $\displaystyle
s_{1}\left[m\right]+w_{1}\left[m\right]$ $\displaystyle=$ $\displaystyle
a_{1}\exp\left(j2\pi\mathit{f}_{d}m/\text{PRF}\right)+w_{1}\left[m\right],\ \
0\leq m\leq M-1$ (9) $\displaystyle g_{2}\left[n\right]=$ $\displaystyle
s_{2}\left[n\right]+w_{2}\left[n\right]$ $\displaystyle=$ $\displaystyle
a_{2}\exp\left(j2\pi\mathit{f}_{d}n/\Phi\right)+w_{2}\left[n\right],\ \ \ \ \
\ \ 0\leq n\leq N-1$ (10)
where $s_{1}\left[m\right]$ and $s_{2}\left[n\right]$ are discrete complex
sine signals originated by changes in the target position (in most systems,
the radio frequency (RF) signal is mixed to baseband prior to compression, and
a coherent detector is used in the downconversion process to form in-phase (I)
and quadrature (Q) receive channels, thereby creating a complex baseband
signal); $w_{1}\left[m\right]$ and $w_{2}\left[n\right]$ are
discrete additive complex Gaussian noises; and finally, $a_{1}$ and $a_{2}$
are the amplitudes at the output of the matched filters. Depending on the
target velocity, the output amplitudes $a_{1}$ and $a_{2}$ may be greatly
attenuated. However, the attenuation in $a_{2}$ is partially mitigated by the
use of SP. In particular, it follows that $a_{2}>a_{1}$ for high-velocity
targets [11]. Additionally, we define $2\sigma_{t_{1}}^{2}$ and
$2\sigma_{t_{2}}^{2}$ as the total mean powers – in the time domain – for
$w_{1}\left[m\right]$ and $w_{2}\left[n\right]$, respectively. As seen in
practice, and due to the stationary characteristic of noise, we have that
$\sigma_{t_{1}}^{2}=\sigma_{t_{2}}^{2}$ [19]. However, we retain
separate notations for $\sigma_{t_{1}}^{2}$ and $\sigma_{t_{2}}^{2}$ so as to
distinguish the noise powers of PP and SP. Of course, these separate
notations do not alter our performance analysis in any way.
The SNR measured in the time domain considering PP and SP can be expressed,
respectively, as
$\displaystyle\text{SNR}_{t_{1}}=$
$\displaystyle\frac{\left|a_{1}\right|^{2}}{2\sigma_{t_{1}}^{2}}$ (11)
$\displaystyle\text{SNR}_{t_{2}}=$
$\displaystyle\frac{\left|a_{2}/N\right|^{2}}{2\sigma_{t_{2}}^{2}}.$ (12)
Observe in (12) that dividing the replica into $N$ subpulses reduces the SNR
by a factor of $1/N^{2}$, as mentioned in Section II-B.
The DFT is the primary operation to implement coherent integration. More
precisely, the DFT provides a mechanism to test multiple candidate frequencies
to maximize the integration gain [2]. The corresponding DFTs for (9) and
(10) are given, respectively, by
$\displaystyle G_{1}\left[k^{\prime}\right]\triangleq$ $\displaystyle\
\mathscr{F}\left\\{g_{1}\left[m\right]\right\\}$ $\displaystyle=$
$\displaystyle\sum_{m=0}^{M-1}g_{1}\left[m\right]\exp\left(-j2\pi
k^{\prime}m/M\right)$ $\displaystyle=$ $\displaystyle
S_{1}\left[k^{\prime}\right]+W_{1}\left[k^{\prime}\right],\ \ \ 0\leq
k^{\prime}\leq M-1$ (13) $\displaystyle
G_{2}\left[l^{\prime}\right]\triangleq$ $\displaystyle\
\mathscr{F}\left\\{g_{2}\left[n\right]\right\\}$ $\displaystyle=$
$\displaystyle\sum_{n=0}^{N-1}g_{2}\left[n\right]\exp\left(-j2\pi
l^{\prime}n/N\right)$ $\displaystyle=$ $\displaystyle
S_{2}\left[l^{\prime}\right]+W_{2}\left[l^{\prime}\right],\ \ \ \ \ \ 0\leq
l^{\prime}\leq N-1$ (14)
The SNRs measured in the frequency domain considering PP and SP are given,
respectively, by [2, Eq. (17.37)]
$\displaystyle\text{SNR}_{1}=$
$\displaystyle\frac{|Ma_{1}|^{2}}{2\sigma_{1}^{2}}$ (15)
$\displaystyle\text{SNR}_{2}=$
$\displaystyle\frac{|a_{2}|^{2}}{2\sigma_{2}^{2}},$ (16)
in which $\sigma_{1}^{2}=M\sigma_{t_{1}}^{2}$ and
$\sigma_{2}^{2}=N\sigma_{t_{2}}^{2}$ are half of the noise powers – in the
frequency domain – for $W_{1}\left[k^{\prime}\right]$ and
$W_{2}\left[l^{\prime}\right]$, respectively.
The Doppler estimates are based on the absolute values of
$G_{1}\left[k^{\prime}\right]$ and $G_{2}\left[l^{\prime}\right]$. That is,
(13) and (14) will provide estimates for $\mathit{f}_{d}$, say
$\hat{\mathit{f}}_{1}$ and $\hat{\mathit{f}}_{2}$, by searching $k^{\prime}$
and $l^{\prime}$, in which the absolute values of
$G_{1}\left[k^{\prime}\right]$ and $G_{2}\left[l^{\prime}\right]$ are maximum.
It is worth mentioning that if $\Psi_{max}<\mathit{f}_{d}$ and
$\Phi_{max}<\mathit{f}_{d}$, then $\hat{\mathit{f}}_{1}$ and
$\hat{\mathit{f}}_{2}$ will display ambiguous Doppler estimates.
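The PP estimator just described – build $g_{1}[m]$ as in (9), apply the DFT as in (13), and pick the bin maximizing $|G_{1}[k^{\prime}]|$ – can be sketched as follows (an illustrative snippet of ours; the noiseless case is shown so the result is deterministic, and the bin is mapped back to a signed frequency rather than the $[0,M-1]$ convention of (4)–(5)):

```python
import numpy as np

def pp_doppler_estimate(f_d: float, prf: float, M: int, a1: complex = 1.0,
                        noise_std: float = 0.0, rng=None) -> float:
    """Estimate the (possibly ambiguous) Doppler frequency with PP:
    build g1[m] as in (9), apply the DFT as in (13), and pick the
    bin where |G1[k']| is maximum."""
    rng = rng or np.random.default_rng(0)
    m = np.arange(M)
    g1 = a1 * np.exp(2j * np.pi * f_d * m / prf)
    g1 += noise_std * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
    k_hat = int(np.argmax(np.abs(np.fft.fft(g1))))
    if k_hat > M // 2:          # map bins above M/2 to negative frequencies
        k_hat -= M
    return k_hat * prf / M      # Doppler bin spacing Delta_D = PRF / M

# With f_d on a bin center and no noise the estimate is exact; for
# f_d > PRF/2 the estimate wraps, illustrating the ambiguity the CCRT resolves.
```

For instance, a target at $\mathit{f}_{d}=1500$ Hz seen with $\text{PRF}=2000$ Hz is reported at $-500$ Hz, exactly the folding in (2).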
Now, considering $L$ PRFs (say, $\text{PRF}_{1},\ldots,\text{PRF}_{L}$), we
can define the absolute values for $G_{1}\left[k^{\prime}\right]$ and
$G_{2}\left[l^{\prime}\right]$ at the $i$-th PRF, respectively, as
$\displaystyle H_{1,i}\left[k^{\prime}\right]\triangleq$
$\displaystyle|G_{1,i}\left[k^{\prime}\right]|\ \ \ \ \ 0\leq k^{\prime}\leq
M_{i}-1$ (17) $\displaystyle H_{2,i}\left[l^{\prime}\right]\triangleq$
$\displaystyle|G_{2,i}\left[l^{\prime}\right]|\ \ \ \ \ \ 0\leq l^{\prime}\leq
N_{i}-1$ (18)
where the subscript $i\in\\{1,\ldots,L\\}$ denotes the association to the
$i$-th PRF.
Herein, we assume that $G_{1,i}\left[k^{\prime}\right]$ is composed of
$M_{i}-1$ independent and identically distributed noise samples and one
target–plus–noise sample, denoted as $\mathcal{G}_{1,i}$. On the other hand,
$G_{2,i}\left[l^{\prime}\right]$ is composed of $N_{i}-1$ independent and
identically distributed noise samples and one combined sample, denoted as
$\mathcal{G}_{2,i}$. The target–plus–noise samples $\mathcal{G}_{1,i}$ and
$\mathcal{G}_{2,i}$ can be modeled, respectively, by [20, Eq. (1)]
$\displaystyle\mathcal{G}_{1,i}=$
$\displaystyle\sigma_{1,i}\left(\sqrt{1-\lambda_{1,i}^{2}}A_{1,i}+\lambda_{1,i}A_{0,i}\right)$
$\displaystyle+j\sigma_{1,i}\left(\sqrt{1-\lambda_{1,i}^{2}}B_{1,i}+\lambda_{1,i}B_{0,i}\right)$
(19) $\displaystyle\mathcal{G}_{2,i}=$
$\displaystyle\sigma_{2,i}\left(\sqrt{1-\lambda_{2,i}^{2}}A_{2,i}+\lambda_{2,i}A_{0,i}\right)$
$\displaystyle+j\sigma_{2,i}\left(\sqrt{1-\lambda_{2,i}^{2}}B_{2,i}+\lambda_{2,i}B_{0,i}\right),$
(20)
where $A_{p,i}$ and $B_{p,i}$ ($p=1,2$) are mutually independent random
variables (RVs) distributed as $\mathcal{N}(0,\frac{1}{2})$, and
$\lambda_{p,i}\in(0,1]$. Then, for any $p$ and $q$ ($q=1,2$), it follows that
$\mathbb{E}(A_{p,i}B_{q,i})=0$ and
$\mathbb{E}(A_{p,i}A_{q,i})=\mathbb{E}(B_{p,i}B_{q,i})=\frac{1}{2}\delta_{pq}$.
($\delta_{pq}=1$ if $p=q$, and $\delta_{pq}=0$ otherwise.) In addition,
$A_{0,i}$ and $B_{0,i}$ are mutually independent RVs distributed as
$\mathcal{N}(m_{\textbf{Re},i},\frac{1}{2})$ and
$\mathcal{N}(m_{\textbf{Im},i},\frac{1}{2})$, respectively. Thus,
$\mathcal{G}_{1,i}$ and $\mathcal{G}_{2,i}$ are non-zero mean complex Gaussian
RVs with probability density functions (PDFs) given, respectively, by
$\mathcal{N}_{c}(\lambda_{1,i}(m_{\textbf{Re},i}+jm_{\textbf{Im},i}),\sigma_{1,i}^{2})$
and
$\mathcal{N}_{c}(\lambda_{2,i}(m_{\textbf{Re},i}+jm_{\textbf{Im},i}),\sigma_{2,i}^{2})$.
The correlation coefficient between any pair of ($\mathcal{G}_{1,i}$,
$\mathcal{G}_{2,i}$), can be calculated as [20, Eq. (2)]
$\displaystyle\rho_{kl,i}\triangleq$
$\displaystyle\frac{\mathbb{E}(\mathcal{G}_{1,i}\mathcal{G}_{2,i}^{*})-\mathbb{E}(\mathcal{G}_{1,i})\mathbb{E}(\mathcal{G}_{2,i}^{*})}{\sqrt{\text{Var}(\mathcal{G}_{1,i})\text{Var}(\mathcal{G}_{2,i})}}$
$\displaystyle=$ $\displaystyle\lambda_{1,i}\lambda_{2,i}.$ (21)
This correlation exists because both PP and SP use the same received signal
when performing range compression [2]. Observe that the parameters
$\lambda_{1,i}^{2}$, $\lambda_{2,i}^{2}$, $m_{\textbf{Re},i}$ and
$m_{\textbf{Im},i}$ can be used to model the compressed responses
$\left|M_{i}a_{1,i}\right|^{2}$ and $\left|a_{2,i}\right|^{2}$. This can be
done by making the following substitutions:
$|M_{i}a_{1,i}|^{2}=\lambda_{1,i}^{2}(m_{\textbf{Re},i}^{2}+m_{\textbf{Im}}^{2})$
and
$|a_{2,i}|^{2}=\lambda_{2,i}^{2}(m_{\textbf{Re},i}^{2}+m_{\textbf{Im}}^{2})$.
On the other hand, $\lambda_{1,i}$ and $\lambda_{2,i}$ can be chosen to meet a
desired correlation coefficient.
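The construction (19)–(20) and the resulting correlation (21) can be verified empirically with a short Monte-Carlo sketch (illustrative code of ours; the parameter values are arbitrary and the shared components $A_{0,i}$, $B_{0,i}$ induce the correlation):

```python
import numpy as np

def correlated_target_samples(n, sigma1, sigma2, lam1, lam2,
                              m_re, m_im, seed=0):
    """Draw n samples of (G1, G2) per Eqs. (19)-(20): the shared
    components A0, B0 (means m_re, m_im, variance 1/2) induce a
    complex correlation coefficient lam1*lam2, per Eq. (21)."""
    rng = np.random.default_rng(seed)
    std = np.sqrt(0.5)
    A0 = rng.normal(m_re, std, n); B0 = rng.normal(m_im, std, n)
    A1 = rng.normal(0, std, n);    B1 = rng.normal(0, std, n)
    A2 = rng.normal(0, std, n);    B2 = rng.normal(0, std, n)
    c1 = np.sqrt(1 - lam1**2); c2 = np.sqrt(1 - lam2**2)
    G1 = sigma1 * ((c1*A1 + lam1*A0) + 1j*(c1*B1 + lam1*B0))
    G2 = sigma2 * ((c2*A2 + lam2*A0) + 1j*(c2*B2 + lam2*B0))
    return G1, G2

def complex_corr(G1, G2):
    """Empirical version of the correlation coefficient in Eq. (21)."""
    cov = np.mean(G1*np.conj(G2)) - np.mean(G1)*np.mean(np.conj(G2))
    return (cov / np.sqrt(np.var(G1)*np.var(G2))).real
```

With $\lambda_{1,i}=0.5$ and $\lambda_{2,i}=0.99$ (the values used in Figs. 2–3), the empirical coefficient settles near $\lambda_{1,i}\lambda_{2,i}=0.495$, as (21) predicts.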
By the above, it follows that $H_{1,i}\left[k^{\prime}\right]$ is composed of
$M_{i}-1$ Rayleigh distributed samples, denoted as $X_{k,i}$
$\left(k\in\left\\{1,2,\ldots,M_{i}-1\right\\}\right)$, and one Rice
distributed sample, denoted as $R_{1,i}$. Similarly,
$H_{2,i}\left[l^{\prime}\right]$ is composed of $N_{i}-1$ Rayleigh distributed
samples, denoted as $Y_{l,i}$
$\left(l\in\left\\{1,2,\ldots,N_{i}-1\right\\}\right)$, and one Rice
distributed sample, denoted as $R_{2,i}$. The PDFs of $X_{k,i}$ and $Y_{l,i}$
are given, respectively, by
$\displaystyle f_{X_{k,i}}(x_{k,i})=$
$\displaystyle\frac{x_{k,i}\exp\left(-\frac{x_{k,i}^{2}}{2\sigma_{k,i}^{2}}\right)}{\sigma_{k,i}}$
(22) $\displaystyle f_{Y_{l,i}}(y_{l,i})=$
$\displaystyle\frac{y_{l,i}\exp\left(-\frac{y_{l,i}^{2}}{2\sigma_{l,i}^{2}}\right)}{\sigma_{l,i}}.$
(23)
Moreover, since $R_{1,i}$ and $R_{2,i}$ bear a certain degree of correlation,
they are governed by a bivariate Rician distribution, given by [20, 21]
$\displaystyle\mathit{f}_{R_{1,i},R_{2,i}}$
$\displaystyle\left(r_{1,i},r_{2,i}|\mathcal{H}_{1}\right)=\int_{0}^{\infty}\exp\left(-t\xi_{i}\right)$
$\displaystyle\times\exp\left(-\textbf{m}_{i}\right)I_{0}\left(2\sqrt{\textbf{m}_{i}t}\right)\prod_{p=1}^{2}\frac{r_{p,i}}{\Omega_{p,i}^{2}}$
$\displaystyle\times\exp\left(-\frac{r_{p,i}^{2}}{2\Omega_{p,i}^{2}}\right)I_{0}\left(\frac{r_{p,i}\sqrt{t\sigma_{p,i}^{2}\lambda_{p,i}^{2}}}{\Omega_{p,i}^{2}}\right)\text{d}t,$
(24)
where $I_{0}(\cdot)$ is the modified Bessel function of the first kind and
order zero [22, Eq. (9.6.16)],
$\textbf{m}_{i}=m_{\textbf{Re},i}^{2}+m_{\textbf{Im},i}^{2}$, and
$\displaystyle\Omega_{p,i}^{2}$
$\displaystyle=\sigma_{p,i}^{2}\left(\frac{1-\lambda_{p,i}^{2}}{2}\right)$
(25a) $\displaystyle\xi_{i}$
$\displaystyle=1+\sum_{p=1}^{2}\frac{\sigma_{p,i}^{2}\lambda_{p,i}^{2}}{2\Omega_{p,i}^{2}}.$
(25b)
## IV Doppler Analysis
In this section, we provide a comprehensive statistical analysis on Doppler
estimation. To do so, we derive the performance metrics for both SP and
SP–plus–CCRT.
### IV-A SP Analysis
First, let us define the following events:
$\displaystyle\mathcal{A}_{k,i}=$
$\displaystyle\left\\{R_{1,i}>X_{k,i}\right\\}$ (26)
$\displaystyle\mathcal{B}_{l,i}=$
$\displaystyle\left\\{R_{2,i}>Y_{l,i}\right\\}$ (27)
$\displaystyle\mathcal{C}_{k,i}=$
$\displaystyle\left\\{X_{k,i}>R_{1,i}\right\\}$ (28)
$\displaystyle\mathcal{D}_{l,i}=$
$\displaystyle\left\\{Y_{l,i}>R_{2,i}\right\\}.$ (29)
###### Proposition I.
Let $\text{PD}_{i}$ be the probability of detection at the $i$-th PRF.
Specifically, $\text{PD}_{i}$ is defined as the probability that $R_{1,i}$ is
greater than $X_{k,i}$ and, simultaneously, that $R_{2,i}$ is greater than
$Y_{l,i}$, i.e.,
$\displaystyle\text{PD}_{i}\triangleq\text{Pr}$
$\displaystyle\left[\left(\bigcap_{k=1}^{M_{i}-1}\mathcal{A}_{k,i}\right)\bigcap\left(\bigcap_{l=1}^{N_{i}-1}\mathcal{B}_{l,i}\right)\right].$
(30)
Then, from (22)–(24), (30) can be expressed in closed form as
$\displaystyle\text{PD}_{i}=\sum_{k=0}^{M_{i}-1}\sum_{l=0}^{N_{i}-1}\binom{M_{i}-1}{k}\binom{N_{i}-1}{l}\frac{(-1)^{-k-l+M_{i}+N_{i}}\mathcal{V}_{i}(k,l)}{\mathcal{U}_{i}(k,l)}\exp\left(-\textbf{m}_{i}+\frac{\textbf{m}_{i}}{\mathcal{U}_{i}(k,l)}\right),$ (35)
wherein $\mathcal{U}_{i}(k,l)$ and $\mathcal{V}_{i}(k,l)$ are auxiliary
functions defined, respectively, as
$\displaystyle\mathcal{U}_{i}(k,l)=$ $\displaystyle\
\xi_{i}-\frac{\xi_{i}\lambda_{1,i}^{2}\sigma_{1,i}^{4}}{2\Omega_{1,i}^{2}\left(\Omega_{1,i}^{2}(k-M_{i}+1)-\sigma_{1,i}^{2}\right)}$
$\displaystyle-\frac{\xi_{i}\lambda_{2,i}^{2}\sigma_{2,i}^{4}}{2\Omega_{2,i}^{2}\left(\Omega_{2,i}^{2}(l-N_{i}+1)-\sigma_{2,i}^{2}\right)}$
(37a) $\displaystyle\mathcal{V}_{i}(k,l)=$ $\displaystyle\
\frac{\sigma_{1,i}^{2}}{\left(\Omega_{1,i}^{2}(-k+M_{i}-1)+\sigma_{1,i}^{2}\right)}$
$\displaystyle\times\frac{\sigma_{2,i}^{2}}{\left(\Omega_{2,i}^{2}(-l+N_{i}-1)+\sigma_{2,i}^{2}\right)}.$
(37b)
###### Proof.
See Appendix A. ∎
###### Corollary I.
Let $\text{PFA}_{i}$ be the probability of false alarm at the $i$-th PRF. More
precisely, $\text{PFA}_{i}$ is defined as the probability that at least one of
$X_{k,i}$ is greater than $R_{1,i}$ and, simultaneously, that at least one of
$Y_{l,i}$ is greater than $R_{2,i}$, i.e.,
$\displaystyle\text{PFA}_{i}\triangleq\text{Pr}$
$\displaystyle\left[\underset{k=1}{\overset{M_{i}-1}{\bigcup}}\underset{l=1}{\overset{N_{i}-1}{\bigcup}}\left(\mathcal{C}_{k,i}\bigcap\mathcal{D}_{l,i}\right)\right].$
(38)
Then, from (22)–(24), (38) can be written in closed form as in (40),
shown at the top of the next page, where $\mathcal{P}_{i}\left(k,l\right)$ and
$\mathcal{Q}_{i}\left(k,l\right)$ are auxiliary functions defined,
respectively, by
$\displaystyle\mathcal{P}_{i}\left(k,l\right)=$ $\displaystyle\
\xi_{i}-\frac{\lambda_{1,i}^{2}\sigma^{4}_{1,i}}{2\Omega_{1,i}^{2}\left(k\
\Omega_{1,i}^{2}+\sigma^{2}_{1,i}\right)}$
$\displaystyle-\frac{\lambda_{2,i}^{2}\sigma^{4}_{2,i}}{2\Omega_{2,i}^{2}\left(l\
\Omega_{2,i}^{2}+\sigma^{2}_{2,i}\right)}$ (39a)
$\displaystyle\mathcal{Q}_{i}\left(k,l\right)=$ $\displaystyle\
\frac{\sigma^{2}_{1,i}\sigma^{2}_{2,i}}{\left(k\
\Omega_{1,i}^{2}+\sigma^{2}_{1,i}\right)\left(l\
\Omega_{2,i}^{2}+\sigma^{2}_{2,i}\right)}.$ (39b)
$\displaystyle\textit{PFA}_{i}=$
$\displaystyle\frac{\left(M_{i}-1\right)\left(N_{i}-1\right)\mathcal{Q}_{i}\left(1,1\right)}{\mathcal{P}_{i}\left(1,1\right)}\exp\left(-\textbf{m}_{i}+\frac{\textbf{m}_{i}}{\mathcal{P}_{i}\left(1,1\right)}\right)-\binom{M_{i}-1}{2}\binom{N_{i}-1}{2}\frac{\mathcal{Q}_{i}\left(2,2\right)}{\mathcal{P}_{i}\left(2,2\right)}\exp\left(-\textbf{m}_{i}+\frac{\textbf{m}_{i}}{\mathcal{P}_{i}\left(2,2\right)}\right)+\ldots$
$\displaystyle+(-1)^{M_{i}-N_{i}-1}\frac{\mathcal{Q}_{i}\left(M_{i}-1,N_{i}-1\right)}{\mathcal{P}_{i}\left(M_{i}-1,N_{i}-1\right)}\exp\left(-\textbf{m}_{i}+\frac{\textbf{m}_{i}}{\mathcal{P}_{i}\left(M_{i}-1,N_{i}-1\right)}\right)$
(40)
###### Proof.
See Appendix B. ∎
It is worth mentioning that (35) and (40) are novel and original
contributions of this work, derived in closed form even though (24) is given
in integral form.
### IV-B SP–Plus–CCRT Analysis
Similar to [14], we assume that each individual pulse on each sweep results in
an independent random value for the target returns.
Now, using (35) and taking into account the $\mathcal{M}$–of–$L$ detection
criterion (instead of detecting a target on the basis of at least one
detection in $L$ tries, system designers often require that some number
$\mathcal{M}$ or more detections be obtained in $L$ tries before a target
detection is accepted [2]), the probability of detection for the combined
technique SP–plus–CCRT can be calculated as follows [23]
$\displaystyle\text{PD}_{\text{CCRT}}\triangleq$
$\displaystyle\sum_{l=\mathcal{M}}^{L}\sum_{\mathcal{E}\in\mathcal{F}_{l}}\left\\{\left(\prod_{i\in\mathcal{E}}\text{PD}_{i}\right)\left(\prod_{j\in\mathcal{E}^{c}}\left(1-\text{PD}_{j}\right)\right)\right\\},$
(41)
where $\mathcal{F}_{l}$ is the set of all subsets of $l$ integers that can be
selected from $\left\\{1,2,\ldots,L\right\\}$, and $\mathcal{E}^{c}$ is the
complement of $\mathcal{E}$. For example, if $l=2$ and $L=3$, then
$\mathcal{F}_{2}=\left\\{\left\\{1,2\right\\},\left\\{1,3\right\\},\left\\{2,3\right\\}\right\\}$,
and $\mathcal{E}^{c}=\left\\{1,2,\ldots,L\right\\}\backslash\mathcal{E}$.
On the other hand, the probability of false alarm for the combined technique
SP–plus-CCRT can be calculated as [23]
$\displaystyle\text{PFA}_{\text{CCRT}}\triangleq$
$\displaystyle\sum_{l=\mathcal{M}}^{L}\sum_{\mathcal{E}\in\mathcal{F}_{l}}\left\\{\left(\prod_{i\in\mathcal{E}}\text{PFA}_{i}\right)\left(\prod_{j\in\mathcal{E}^{c}}\left(1-\text{PFA}_{j}\right)\right)\right\\}.$
(42)
For the case where $\mathcal{M}=L$, (41) and (42) reduce, respectively, to
$\displaystyle\text{PD}_{\text{CCRT}}=$
$\displaystyle\prod_{i=1}^{L}\text{PD}_{i}$ (43)
$\displaystyle\text{PFA}_{\text{CCRT}}=$
$\displaystyle\prod_{i=1}^{L}\text{PFA}_{i}.$ (44)
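The $\mathcal{M}$–of–$L$ combinations (41)–(42) are straightforward to evaluate by enumerating subsets, as in the following sketch (illustrative code of ours, applicable to either the per-PRF $\text{PD}_{i}$ or $\text{PFA}_{i}$ values, assumed independent across PRFs):

```python
from itertools import combinations
from math import prod

def m_of_l(probs: list[float], m_req: int) -> float:
    """M-of-L combination of per-PRF probabilities, Eqs. (41)-(42):
    probability that at least m_req of the L independent per-PRF
    events occur."""
    L = len(probs)
    total = 0.0
    for l in range(m_req, L + 1):
        for subset in combinations(range(L), l):      # the set F_l
            inside = prod(probs[i] for i in subset)
            outside = prod(1 - probs[j] for j in range(L)
                           if j not in subset)        # complement E^c
            total += inside * outside
    return total

# For m_req = L the sum collapses to the plain product, Eqs. (43)-(44).
```

For example, with per-PRF detection probabilities $(0.9, 0.8, 0.7)$ the $3$-of-$3$ rule gives $0.504$, while relaxing to $2$-of-$3$ raises it to $0.902$.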
## V Numerical Results
Figure 2: $\text{PD}_{i}$ vs $\text{SNR}_{1}$ using $N_{i}=8$, $\lambda_{1,i}=0.5$, $\lambda_{2,i}=0.99$, and different values of $M_{i}$ ($i\in\left\\{1,2,3,4\right\\}$).
Figure 3: $\text{PMD}_{i}$ vs $\text{SNR}_{1}$ using $N_{i}=8$, $\lambda_{1,i}=0.5$, $\lambda_{2,i}=0.99$, and different values of $M_{i}$ ($i\in\left\\{1,2,3,4\right\\}$).
[Figure 4 panels: (a–1)–(a–4) PP outputs at $\text{PRF}_{1}$–$\text{PRF}_{4}=1700, 1900, 2100, 2300$ [Hz] with $M_{1}$–$M_{4}=11, 13, 17, 19$; (b–1)–(b–4) SP outputs at the same PRFs with $N_{i}=8$; (c–1)–(c–4) combined outputs at the same PRFs.]
Figure 4: Doppler estimation.
In this section, we illustrate through Fig. 4 how the Doppler estimation
process is carried out. Later, we validate our derived expressions by means
of Monte-Carlo simulations (the number of realizations in the Monte-Carlo
simulations was set to $10^{6}$). To do so, we make use of the following radar
setup: $\text{PRF}_{1}=700$ [Hz], $\text{PRF}_{2}=1100$ [Hz],
$\text{PRF}_{3}=1300$ [Hz], $\text{PRF}_{4}=1700$ [Hz], $L=\mathcal{M}=4$,
$\mathit{f}_{R}=6\ [\text{GHz}]$, $\tau=25\ [\mu s]$, $\lambda=0.05\
[\text{m}]$, $M_{1}=11$, $M_{2}=13$, $M_{3}=17$, $M_{4}=19$, and $N_{i}=8\
\forall i\in\left\\{1,2,3,4\right\\}$. In addition, we consider a linear
frequency-modulated pulse with bandwidth $B=2\ [\text{MHz}]$.
Fig. 4 illustrates the output data after the 2D-DFT blocks. In this simulation
example, we placed a target at an initial range of 10 [Km], traveling with a
constant velocity of $\mathit{v}_{t}=900$ [m/s] in the opposite direction of
the radar (i.e., the target is receding). Fig. 4(a) shows the normalized
output data – Velocity vs Range – using PP. Observe that in all 4 scenarios,
the target at 10 [Km] is unlikely to be detected due to the high loss in SNR.
On the other hand, Fig. 4(b) shows the normalized output data – Velocity vs
Range – using SP. Observe that the loss in SNR is partially mitigated by means
of SP. Therefore, the target located at 10 [Km] can now easily be detected
without further processing. Lastly, Fig. 4(c) shows the combined pulse and
subpulse information. Note in Fig. 4(c) that SP provides a better intuition
about the target location, but due to its poor discretization, it is not
sufficient to determine the exact velocity. Conversely, PP provides a better
discretization but, unfortunately, its velocity estimation is more likely to
be ambiguous. Thus, by combining SP and the CCRT, we provide the system with
a high capability to unfold the target’s true velocity.
Fig. 2 shows $\text{PD}_{i}$ versus $\text{SNR}_{1}$ using different values of
$M_{i}$. Note how radar performance improves as $M_{i}$ increases, requiring a
lower SNR for a given PD. This is because when increasing $M_{i}$, we are, in
fact, increasing the compressed response of PP by means of coherent
integration. In particular, for a fixed $\text{SNR}_{1}=10$ [dB], we obtain
the following probabilities of detection: $\text{PD}_{1}=0.66$ for $M_{1}=7$;
$\text{PD}_{2}=0.78$ for $M_{2}=11$; $\text{PD}_{3}=0.85$ for $M_{3}=13$; and
$\text{PD}_{4}=0.93$ for $M_{4}=17$. Also, observe that for the high and
medium SNR regime, our derived expression matches perfectly the PD of [14, Eq.
(28)]. Nevertheless, there is a small difference in the PD for the low SNR
regime. This occurs because if the compressed response of PP is less than the
background noise, then the intersection probability in (30) will be less than
the probability of $\bigcap_{k=1}^{M_{i}-1}\mathcal{A}_{k,i}$ . For example,
given $\text{SNR}_{1}=4$ [dB] and $M_{1}=7$, we obtain $\text{PD}_{1}=0.15$
with our proposed SP–plus–CCRT technique, and $\text{PD}_{1}=0.18$ with [14,
Eq. (28)]. However, this small reduction in the PD is compensated by a greater
reduction in the PFA, as shall be seen next.
Figure 5: $\text{PD}_{\text{CCRT}}$ vs $\text{SNR}_{1}$ using $N_{i}=8$,
$\lambda_{1,i}=0.5$, $\lambda_{2,i}=0.99$, $\mathcal{M}=4$, and different
values of $M_{i}$ ($i\in\left\\{1,2,3,4\right\\}$). Figure 6:
$\text{PMD}_{\text{CCRT}}$ vs $\text{SNR}_{1}$ using $N_{i}=8$,
$\lambda_{1,i}=0.5$, $\lambda_{2,i}=0.99$, $\mathcal{M}=4$, and different
values of $M_{i}$ ($i\in\left\\{1,2,3,4\right\\}$).
Fig. 3 shows $\text{PFA}_{i}$ versus $\text{SNR}_{1}$ using different values
for $M_{i}$. Observe how $\text{PFA}_{i}$ decreases as $M_{i}$ increases. This
occurs because as we increase $M_{i}$, the received target echo becomes
stronger compared to the noise background. For example, for a fixed
$\text{SNR}_{1}=5$ [dB], we obtain the following probabilities of false alarm:
$\text{PFA}_{1}=0.83$ for $M_{1}=7$; $\text{PFA}_{2}=0.77$ for $M_{2}=11$;
$\text{PFA}_{3}=0.73$ for $M_{3}=13$; and $\text{PFA}_{4}=0.60$ for
$M_{4}=17$. More interestingly, observe how $\text{PFA}_{i}$ decays rapidly
compared to [14]. This difference in $\text{PFA}_{i}$ arises because,
intuitively, SP acts as a backup detection process. That is, since the
compressed response of SP is greater than the PP response (for high-velocity
targets), the
probability in (38) is lower than the probability of
$\underset{k=1}{\overset{M_{i}-1}{\bigcup}}\mathcal{C}_{k,i}$ . For example,
using the classic PP technique [14], we obtain the following probabilities of
false alarm: $\text{PFA}_{1}=0.96$ for $M_{1}=7$; $\text{PFA}_{2}=0.97$ for
$M_{2}=11$; $\text{PFA}_{3}=0.98$ for $M_{3}=13$; and $\text{PFA}_{4}=0.99$
for $M_{4}=17$.
Finally, Figs. 5 and 6 show $\text{PD}_{\text{CCRT}}$ and
$\text{PFA}_{\text{CCRT}}$ versus $\text{SNR}_{1}$, respectively. Observe in
Fig. 5, the perfect agreement between (41) and [14, Eq. (29)]. Hence, in this
case, we have no advantage when using SP–plus–CCRT. On the other hand, observe
in Fig. 6, the large difference in the PFA between (42) and that in [14]. In
this case, the use of SP–plus–CCRT improves radar performance by considerably
reducing the false alarms. For instance, for a given $\text{SNR}_{1}=2$ [dB],
we obtain probabilities of $\text{PFA}_{\text{CCRT}}=0.94$ using PP–plus–CCRT,
and $\text{PFA}_{\text{CCRT}}=0.54$ using SP–plus–CCRT.
## VI Conclusion
In this work, we provided a thorough statistical analysis on Doppler
estimation when both SP and the CCRT were employed. To do so, we derived novel
and closed-form expressions for the PD and PFA. Moreover, a comparison
analysis between our proposed SP–plus–CCRT technique and the classic
PP–plus–CCRT was carried out. Numerical results and Monte-Carlo simulations
corroborated the validity of our expressions and showed that the PFA when
using the SP–plus–CCRT technique was greatly reduced compared to [14], thereby
enhancing radar detection.
## Appendix A Proof of Proposition I
Applying [24, Eq. (5.48)] and using the fact that $X_{k,i}$ and $Y_{l,i}$ are
independent RVs, (30) can be rewritten as follows:
$\displaystyle\text{PD}_{i}=$
$\displaystyle\int_{0}^{\infty}\int_{0}^{\infty}\left(\prod_{k=1}^{M_{i}-1}\text{Pr}\left[X_{k,i}<r_{1,i}|R_{1,i}=r_{1,i}\right]\right)$
$\displaystyle\times\left(\prod_{l=1}^{N_{i}-1}\text{Pr}\left[Y_{l,i}<r_{2,i}|R_{2,i}=r_{2,i}\right]\right)$
$\displaystyle\times\mathit{f}_{R_{1,i},R_{2,i}}(r_{1,i},r_{2,i})\
\text{d}r_{1,i}\ \text{d}r_{2,i}.$ (45)
Now, with the aid of [24, Eq. (4.11)] and taking into account that $X_{k,i}$
and $Y_{l,i}$ are identically distributed RVs, we obtain
$\displaystyle\text{PD}_{i}=$
$\displaystyle\int_{0}^{\infty}\int_{0}^{\infty}\left(\int_{0}^{r_{1,i}}f_{X_{1,i}}(x_{1,i})\
\text{d}x_{1,i}\right)^{M_{i}-1}$
$\displaystyle\times\left(\int_{0}^{r_{2,i}}f_{Y_{1,i}}(y_{1,i})\
\text{d}y_{1,i}\right)^{N_{i}-1}$
$\displaystyle\times\mathit{f}_{R_{1,i},R_{2,i}}(r_{1,i},r_{2,i})\
\text{d}r_{1,i}\ \text{d}r_{2,i}.$ (46)
Replacing (22)–(III) in (A), we obtain
$\displaystyle\text{PD}_{i}=$
$\displaystyle\int_{0}^{\infty}\int_{0}^{\infty}\underbrace{\left(\int_{0}^{r_{1,i}}\frac{x_{1,i}\exp\left(-\frac{x_{1,i}^{2}}{2\sigma_{1,i}^{2}}\right)}{\sigma_{1,i}}\text{d}x_{1,i}\right)^{M_{i}-1}}_{\triangleq\
\mathcal{I}_{1}}$
$\displaystyle\times\underbrace{\left(\int_{0}^{r_{2,i}}\frac{y_{1,i}\exp\left(-\frac{y_{1,i}^{2}}{2\sigma_{2,i}^{2}}\right)}{\sigma_{2,i}}\text{d}y_{1,i}\right)^{N_{i}-1}}_{\triangleq\
\mathcal{I}_{2}}$
$\displaystyle\times\int_{0}^{\infty}\exp(-\xi_{i}t)\exp\left(-\textbf{m}_{i}\right)I_{0}\left(2\sqrt{\textbf{m}_{i}t}\right)$
$\displaystyle\times\prod_{p=1}^{2}\frac{r_{p,i}}{\Omega_{p,i}^{2}}\exp\left(-\frac{r_{p,i}^{2}}{2\Omega_{p,i}^{2}}\right)$
$\displaystyle\times
I_{0}\left(\frac{r_{p,i}\sqrt{t\sigma_{p,i}^{2}\lambda_{p,i}^{2}}}{\Omega_{p,i}^{2}}\right)\text{d}t\
\text{d}r_{1,i}\ \text{d}r_{2,i}.$ (47)
In order to solve (A), we must first evaluate $\mathcal{I}_{1}$ and
$\mathcal{I}_{2}$. In particular, $\mathcal{I}_{1}$ can be calculated as
follows:
$\displaystyle\mathcal{I}_{1}$
$\displaystyle\overset{(a)}{=}\left(1-\exp\left(-\frac{r_{1,i}^{2}}{2\sigma_{1,i}^{2}}\right)\right)^{M_{i}-1}$
$\displaystyle\overset{(b)}{=}\sum_{k=0}^{M_{i}-1}\left(\begin{array}[]{c}M_{i}-1\\\
k\\\
\end{array}\right)\left(-\exp\left(-\frac{r_{1,i}^{2}}{2\sigma_{1,i}^{2}}\right)\right)^{M_{i}-1-k},$
(50)
where in step (a), we have developed the inner integral; and in step (b), we
have used the binomial theorem [24].
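As a quick numerical sanity check of the expansion in step (b) (an illustration only; the values of $r_{1,i}$, $\sigma_{1,i}$, and $M_{i}$ below are arbitrary assumptions, not the paper's setup):

```python
from math import comb, exp

# Spot-check of the binomial expansion in step (b): the closed inner integral
# (1 - e^{-r^2/(2 sigma^2)})^{M-1} must equal its term-by-term expansion.
# The test values below are arbitrary assumptions.
r, sigma, M = 1.3, 0.8, 7
u = exp(-r ** 2 / (2 * sigma ** 2))
lhs = (1 - u) ** (M - 1)
rhs = sum(comb(M - 1, k) * (-u) ** (M - 1 - k) for k in range(M))
print(abs(lhs - rhs) < 1e-12)  # True
```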
Using a similar approach to that used in (A), $\mathcal{I}_{2}$ can be
calculated as
$\displaystyle\mathcal{I}_{2}=\sum_{l=0}^{N_{i}-1}\left(\begin{array}[]{c}N_{i}-1\\\
l\\\
\end{array}\right)\left(-\exp\left(-\frac{r_{2,i}^{2}}{2\sigma_{2,i}^{2}}\right)\right)^{N_{i}-1-l}.$
(53)
Inserting (A) and (53) in (A), followed by changing the order of integration
(performed without loss of generality, since (22), (23) and (III) are
non-negative real functions [25]) and by minor manipulations, we obtain (58),
displayed at the top of the next page.
$\displaystyle\text{PD}_{i}=$
$\displaystyle\sum_{k=0}^{M_{i}-1}\sum_{l=0}^{N_{i}-1}1^{k+l}\left(\begin{array}[]{c}M_{i}-1\\\
k\\\ \end{array}\right)\left(\begin{array}[]{c}N_{i}-1\\\ l\\\
\end{array}\right)\int_{0}^{\infty}\exp(-\xi_{i}t)\exp\left(-\textbf{m}_{i}\right)I_{0}\left(2\sqrt{\textbf{m}_{i}t}\right)$
(58)
$\displaystyle\times\underbrace{\int_{0}^{\infty}\left(-\exp\left(-\frac{r_{1,i}^{2}}{2\sigma_{1,i}^{2}}\right)\right)^{M_{i}-1-k}\frac{r_{1,i}}{\Omega_{1,i}^{2}}\exp\left(-\frac{r_{1,i}^{2}}{2\Omega_{1,i}^{2}}\right)I_{0}\left(\frac{r_{1,i}\sqrt{t\sigma_{1,i}^{2}\lambda_{1,i}^{2}}}{\Omega_{1,i}^{2}}\right)\text{d}r_{1,i}}_{\triangleq\
\mathcal{I}_{3}}$
$\displaystyle\times\underbrace{\int_{0}^{\infty}\left(-\exp\left(-\frac{r_{2,i}^{2}}{2\sigma_{2,i}^{2}}\right)\right)^{N_{i}-1-l}\frac{r_{2,i}}{\Omega_{2,i}^{2}}\exp\left(-\frac{r_{2,i}^{2}}{2\Omega_{2,i}^{2}}\right)I_{0}\left(\frac{r_{2,i}\sqrt{t\sigma_{2,i}^{2}\lambda_{2,i}^{2}}}{\Omega_{2,i}^{2}}\right)\text{d}r_{2,i}}_{\triangleq\
\mathcal{I}_{4}}\text{d}t.$ (59)
Now, it remains to find $\mathcal{I}_{3}$ and $\mathcal{I}_{4}$. More
precisely, $\mathcal{I}_{3}$ can be computed as
$\displaystyle\mathcal{I}_{3}\overset{(a)}{=}$
$\displaystyle\int_{0}^{\infty}\left(-\exp\left(-\frac{r_{1,i}^{2}}{2\sigma_{1,i}^{2}}\right)\right)^{M_{i}-1-k}\frac{r_{1,i}}{\Omega_{1,i}^{2}}$
$\displaystyle\times\exp\left(-\frac{r_{1,i}^{2}}{2\Omega_{1,i}^{2}}\right)\sum_{q=0}^{\infty}\frac{\left(\frac{r_{1,i}\sqrt{t\lambda_{1,i}^{2}\sigma_{1,i}^{2}}}{2\Omega_{1,i}^{2}}\right)^{2q}}{q!\
\Gamma(q+1)}\text{d}r_{1,i}$ $\displaystyle\overset{(b)}{=}$
$\displaystyle\frac{(-1)^{-k+M_{i}+1}}{\Omega_{1,i}^{2}\left(\frac{-k+M_{i}-1}{\sigma_{1,i}^{2}}+\frac{1}{\Omega_{1,i}^{2}}\right)}$
$\displaystyle\times\sum_{q=0}^{\infty}\frac{\left(\frac{t\lambda_{1,i}^{2}\sigma_{1,i}^{4}}{2\Omega_{1,i}^{2}\left(\Omega_{1,i}^{2}(-k+M_{i}-1)+\sigma_{1,i}^{2}\right)}\right)^{q}}{q!}$
$\displaystyle\overset{(c)}{=}$
$\displaystyle\frac{(-1)^{-k+M_{i}+1}}{\Omega_{1,i}^{2}\left(\frac{-k+M_{i}-1}{\sigma_{1,i}^{2}}+\frac{1}{\Omega_{1,i}^{2}}\right)}$
$\displaystyle\times\exp\left(\frac{t\lambda_{1,i}^{2}\sigma_{1,i}^{4}}{2\Omega_{1,i}^{2}\left(\Omega_{1,i}^{2}(-k+M_{i}-1)+\sigma_{1,i}^{2}\right)}\right),$
(60)
where in step (a), we have used the series representation of the modified
Bessel function of the first kind and order zero [26, Eq. (03.02.02.0001.01)];
in step (b), we have solved the integral by first changing the order of
integration; finally, in step (c), we have used [26, Eq. (01.03.06.0002.01)]
and performed some algebraic manipulations.
In like manner as in (A), $\mathcal{I}_{4}$ can be computed as
$\displaystyle\mathcal{I}_{4}=$
$\displaystyle\frac{(-1)^{-l+N_{i}+1}}{\Omega_{2,i}^{2}\left(\frac{-l+N_{i}-1}{\sigma_{2,i}^{2}}+\frac{1}{\Omega_{2,i}^{2}}\right)}$
$\displaystyle\times\exp\left(\frac{t\lambda_{2,i}^{2}\sigma_{2,i}^{4}}{2\Omega_{2,i}^{2}\left(\Omega_{2,i}^{2}(-l+N_{i}-1)+\sigma_{2,i}^{2}\right)}\right).$
(61)
Now, replacing (A) and (A) in (58), we obtain
$\displaystyle\text{PD}_{i}=$
$\displaystyle\sum_{k=0}^{M_{i}-1}\sum_{l=0}^{N_{i}-1}\left(\begin{array}[]{c}M_{i}-1\\\
k\\\ \end{array}\right)\left(\begin{array}[]{c}N_{i}-1\\\ l\\\
\end{array}\right)$ (66)
$\displaystyle\times\exp\left(-\textbf{m}_{i}\right)\left(\frac{\sigma_{1,i}^{2}(-1)^{-k+M_{i}+1}}{\Omega_{1,i}^{2}(-k+M_{i}-1)+\sigma_{1,i}^{2}}\right)$
$\displaystyle\times\left(\frac{\sigma_{2,i}^{2}(-1)^{-l+N_{i}+1}}{\Omega_{2,i}^{2}(-l+N_{i}-1)+\sigma_{2,i}^{2}}\right)$
$\displaystyle\times\int_{0}^{\infty}\exp(-\xi_{i}t)I_{0}\left(2\sqrt{\textbf{m}_{i}t}\right)$
$\displaystyle\times\exp\left(\frac{t\lambda_{1,i}^{2}\sigma_{1,i}^{4}}{2\Omega_{1,i}^{2}\left(\Omega_{1,i}^{2}(-k+M_{i}-1)+\sigma_{1,i}^{2}\right)}\right)$
$\displaystyle\times\exp\left(\frac{t\lambda_{2,i}^{2}\sigma_{2,i}^{4}}{2\Omega_{2,i}^{2}\left(\Omega_{2,i}^{2}(-l+N_{i}-1)+\sigma_{2,i}^{2}\right)}\right)\text{d}t.$
(67)
Finally, using the following identity [27, Eq. (1.11.2.4)]
$\int_{0}^{\infty}\exp(tb)I_{0}(\sqrt{t}a)\
\text{d}t=-\frac{\exp\left(-\frac{a^{2}}{4b}\right)}{b},$ (68)
and after performing some minor simplifications, we can express (66) in
closed-form as in (35), which completes the proof.
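The identity (68) can be verified numerically for $b<0$; the sketch below (an illustration with arbitrary test values) evaluates the left-hand side term-by-term using the power series of $I_{0}$ together with $\int_{0}^{\infty}e^{bt}t^{q}\,\text{d}t=q!/(-b)^{q+1}$:

```python
from math import exp, factorial

def integral_series(a, b, terms=60):
    """Left-hand side of Eq. (68) for b < 0, evaluated term-by-term using
    I_0(x) = sum_q (x/2)^{2q} / (q!)^2 and
    int_0^inf e^{bt} t^q dt = q! / (-b)^{q+1}."""
    return sum((a * a / 4) ** q / (factorial(q) * (-b) ** (q + 1))
               for q in range(terms))

a, b = 1.5, -1.0  # arbitrary test values (b must be negative)
closed_form = -exp(-a * a / (4 * b)) / b  # right-hand side of Eq. (68)
print(abs(integral_series(a, b) - closed_form) < 1e-9)  # True
```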
## Appendix B Proof of Corollary I
By making use of [24, Coroll. 6], we can express (38) as
$\displaystyle\text{PFA}_{i}=\sum_{k=1}^{M_{i}-1}\sum_{l=1}^{N_{i}-1}\text{Pr}\left[\mathcal{C}_{k,i}\bigcap\mathcal{D}_{l,i}\right]$
$\displaystyle\
-\underset{k<p,l<q}{\sum_{k=1}^{M_{i}-1}\sum_{l=1}^{N_{i}-1}\sum_{p=2}^{M_{i}-1}\sum_{q=2}^{N_{i}-1}}\text{Pr}\left[\mathcal{C}_{k,i}\bigcap\mathcal{D}_{l,i}\bigcap\mathcal{C}_{p,i}\bigcap\mathcal{D}_{q,i}\right]+\ldots$
$\displaystyle\
+(-1)^{M_{i}-N_{i}-1}\text{Pr}\left[\mathcal{C}_{1,i}\bigcap\mathcal{D}_{1,i}\bigcap\ldots\bigcap\mathcal{C}_{M_{i}-1,i}\bigcap\mathcal{D}_{N_{i}-1,i}\right].$
(69)
Now, we need to find the event probabilities. First, let us derive the last
event probability of (B), that is,
Pr
$\displaystyle\left[\mathcal{C}_{1,i}\bigcap\mathcal{D}_{1,i}\bigcap\ldots\bigcap\mathcal{C}_{M_{i}-1,i}\bigcap\mathcal{D}_{N_{i}-1,i}\right]$
$\displaystyle\overset{a}{=}$
$\displaystyle\int_{0}^{\infty}\int_{0}^{\infty}\left(\prod_{k=1}^{M_{i}-1}\text{Pr}\left[X_{k,i}>r_{1,i}|R_{1,i}=r_{1,i}\right]\right)$
$\displaystyle\times\left(\prod_{l=1}^{N_{i}-1}\text{Pr}\left[Y_{l,i}>r_{2,i}|R_{2,i}=r_{2,i}\right]\right)$
$\displaystyle\times\mathit{f}_{R_{1,i},R_{2,i}}(r_{1,i},r_{2,i})\
\text{d}r_{1,i}\ \text{d}r_{2,i}$ $\displaystyle\overset{b}{=}$
$\displaystyle\int_{0}^{\infty}\int_{0}^{\infty}\left(\int_{r_{1,i}}^{\infty}f_{X_{1,i}}(x_{1,i})\
\text{d}x_{1,i}\right)^{M_{i}-1}$
$\displaystyle\times\left(\int_{r_{2,i}}^{\infty}f_{Y_{1,i}}(y_{1,i})\
\text{d}y_{1,i}\right)^{N_{i}-1}$
$\displaystyle\times\mathit{f}_{R_{1,i},R_{2,i}}(r_{1,i},r_{2,i})\
\text{d}r_{1,i}\ \text{d}r_{2,i},$ (70)
where in step (a) we have used [24, Eq. (5.48)]; and in step (b) we have used
[24, Eq. (4.11)] along with the fact that $X_{k,i}$ and $Y_{l,i}$ are
identically distributed RVs.
Replacing (22)–(III) in (B), yields
Pr
$\displaystyle\left[\mathcal{C}_{1,i}\bigcap\mathcal{D}_{1,i}\bigcap\ldots\bigcap\mathcal{C}_{M_{i}-1,i}\bigcap\mathcal{D}_{N_{i}-1,i}\right]$
$\displaystyle=$
$\displaystyle\int_{0}^{\infty}\int_{0}^{\infty}\underbrace{\left(\int_{r_{1,i}}^{\infty}\frac{x_{1,i}\exp\left(-\frac{x_{1,i}^{2}}{2\sigma_{1,i}^{2}}\right)}{\sigma_{1,i}}\text{d}x_{1,i}\right)^{M_{i}-1}}_{\triangleq\
\mathcal{I}_{5}}$
$\displaystyle\times\underbrace{\left(\int_{r_{2,i}}^{\infty}\frac{y_{1,i}\exp\left(-\frac{y_{1,i}^{2}}{2\sigma_{2,i}^{2}}\right)}{\sigma_{2,i}}\text{d}y_{1,i}\right)^{N_{i}-1}}_{\triangleq\
\mathcal{I}_{6}}$
$\displaystyle\times\int_{0}^{\infty}\exp(-\xi_{i}t)\exp\left(-\textbf{m}_{i}\right)I_{0}\left(2\sqrt{\textbf{m}_{i}t}\right)$
$\displaystyle\times\prod_{p=1}^{2}\frac{r_{p,i}}{\Omega_{p,i}^{2}}\exp\left(-\frac{r_{p,i}^{2}}{2\Omega_{p,i}^{2}}\right)$
$\displaystyle\times
I_{0}\left(\frac{r_{p,i}\sqrt{t\sigma_{p,i}^{2}\lambda_{p,i}^{2}}}{\Omega_{p,i}^{2}}\right)\text{d}t\
\text{d}r_{1,i}\ \text{d}r_{2,i}.$ (71)
After some mathematical manipulations, $\mathcal{I}_{5}$ and $\mathcal{I}_{6}$
can be calculated, respectively, as
$\displaystyle\mathcal{I}_{5}=$
$\displaystyle\exp\left(-\frac{r_{1,i}^{2}(M_{i}-1)}{2\sigma_{1,i}^{2}}\right)$
(72) $\displaystyle\mathcal{I}_{6}=$
$\displaystyle\exp\left(-\frac{r_{2,i}^{2}(N_{i}-1)}{2\sigma_{2,i}^{2}}\right).$
(73)
$\displaystyle\text{PFA}_{i}=$
$\displaystyle\binom{M_{i}-1}{1}\binom{N_{i}-1}{1}\frac{\mathcal{Q}_{i}\left(1,1\right)}{\mathcal{P}_{i}\left(1,1\right)}\exp\left(-\textbf{m}_{i}+\frac{\textbf{m}_{i}}{\mathcal{P}_{i}\left(1,1\right)}\right)-\binom{M_{i}-1}{2}\binom{N_{i}-1}{2}\frac{\mathcal{Q}_{i}\left(2,2\right)}{\mathcal{P}_{i}\left(2,2\right)}\exp\left(-\textbf{m}_{i}+\frac{\textbf{m}_{i}}{\mathcal{P}_{i}\left(2,2\right)}\right)+\ldots$
$\displaystyle+(-1)^{M_{i}-N_{i}-1}\binom{M_{i}-1}{M_{i}-1}\binom{N_{i}-1}{N_{i}-1}\frac{\mathcal{Q}_{i}\left(M_{i}-1,N_{i}-1\right)}{\mathcal{P}_{i}\left(M_{i}-1,N_{i}-1\right)}\exp\left(-\textbf{m}_{i}+\frac{\textbf{m}_{i}}{\mathcal{P}_{i}\left(M_{i}-1,N_{i}-1\right)}\right)$
(74)
Now, replacing (72) and (73) in (B), and after solving the remaining three
integrals by applying the same procedure as in (66), we obtain
Pr
$\displaystyle\left[\mathcal{C}_{1,i}\bigcap\mathcal{D}_{1,i}\bigcap\ldots\bigcap\mathcal{C}_{M_{i}-1,i}\bigcap\mathcal{D}_{N_{i}-1,i}\right]$
$\displaystyle=\frac{\mathcal{Q}_{i}\left(M_{i}-1,N_{i}-1\right)}{\mathcal{P}_{i}\left(M_{i}-1,N_{i}-1\right)}\exp\left(-\textbf{m}_{i}+\frac{\textbf{m}_{i}}{\mathcal{P}_{i}\left(M_{i}-1,N_{i}-1\right)}\right),$
(75)
where $\mathcal{P}_{i}\left(k,l\right)$ and $\mathcal{Q}_{i}\left(k,l\right)$
are auxiliary functions defined in (39), and the parameters
$k\in\left\\{1,2,\ldots,M_{i}-1\right\\}$ and
$l\in\left\\{1,2,\ldots,N_{i}-1\right\\}$ denote the number of events for
$\mathcal{C}_{k,i}$ and $\mathcal{D}_{l,i}$, respectively. Thus, the remaining
event probabilities in (B) can be easily obtained by a proper choice of the
parameters $k$ and $l$. For example, for $k=1$ and $l=3$, we obtain
Pr
$\displaystyle\left[\mathcal{C}_{1,i}\bigcap\mathcal{D}_{1,i}\bigcap\mathcal{D}_{2,i}\bigcap\mathcal{D}_{3,i}\right]$
$\displaystyle=\frac{\mathcal{Q}_{i}\left(1,3\right)}{\mathcal{P}_{i}\left(1,3\right)}\exp\left(-\textbf{m}_{i}+\frac{\textbf{m}_{i}}{\mathcal{P}_{i}\left(1,3\right)}\right),$
(76)
whereas for $k=3$ and $l=2$, we have
Pr
$\displaystyle\left[\mathcal{C}_{1,i}\bigcap\mathcal{D}_{1,i}\bigcap\mathcal{C}_{2,i}\bigcap\mathcal{D}_{2,i}\bigcap\mathcal{C}_{3,i}\right]$
$\displaystyle=\frac{\mathcal{Q}_{i}\left(3,2\right)}{\mathcal{P}_{i}\left(3,2\right)}\exp\left(-\textbf{m}_{i}+\frac{\textbf{m}_{i}}{\mathcal{P}_{i}\left(3,2\right)}\right).$
(77)
Then, with the aid of (B) and after some algebraic manipulations, we can
rewrite (B) as in (B), displayed at the top of the next page. Finally, after
minor simplifications, (B) reduces to (Corollary I), which completes the
proof.
## References
* [1] G. Morris and L. Harkness, _Airborne Pulsed Doppler Radar_ , 2nd ed. Norwood, MA, USA: Artech House, 1996.
* [2] M. A. Richards, J. Scheer, W. A. Holm, and W. L. Melvin, _Principles of Modern Radar: Basic Principles_ , 1st ed. West Perth, WA, Australia: SciTech, 2010.
* [3] G. V. Trunk, “Range resolution of targets using automatic detectors,” _IEEE Trans. Aerosp. Electron. Syst._ , vol. AES-14, no. 5, pp. 750–755, Sept. 1978.
* [4] S. A. Hovanessian, “An algorithm for calculation of range in a multiple PRF radar,” _IEEE Trans. Aerosp. Electron. Syst._ , vol. AES-12, no. 2, pp. 287–290, Mar. 1976.
* [5] X.-G. Xia and G. Wang, “Phase unwrapping and a robust Chinese remainder theorem,” _IEEE Signal Process. Lett._ , vol. 14, no. 4, pp. 247–250, Apr. 2007.
* [6] X. Li, H. Liang, and X. Xia, “A robust Chinese remainder theorem with its applications in frequency estimation from undersampled waveforms,” _IEEE Trans. Signal Process._ , vol. 57, no. 11, pp. 4314–4322, Nov. 2009.
* [7] W. Wang and X. Xia, “A closed-form robust Chinese remainder theorem and its performance analysis,” _IEEE Trans. Signal Process._ , vol. 58, no. 11, pp. 5655–5666, Nov. 2010.
* [8] G. V. Trunk and W. M. Kim, “Ambiguity resolution of multiple targets using pulse-Doppler waveforms,” _IEEE Trans. Aerosp. Electron. Syst._ , vol. 30, no. 4, pp. 1130–1137, Oct. 1994.
* [9] F. D. A. García, A. S. Guerreiro, G. R. L. Tejerina, J. C. S. Santos Filho, G. Fraidenraich, M. D. Yacoub, M. A. M. Miranda, and H. Cioqueta, “Probability of detection for unambiguous Doppler frequencies in pulsed radars using the Chinese remainder theorem and subpulse processing,” in _Proc. 53rd Asilomar Conference on Signals, Systems, and Computers_ , Pacific Grove, CA, USA, Nov. 2019, pp. 138–142.
* [10] M. I. Skolnik, _Introduction to Radar Systems_ , 3rd ed. New York, NY, USA: McGraw-Hill, 2001.
* [11] G. Beltrao, L. Pralon, M. Menezes, P. Vyplavin, B. Pompeo, and M. Pralon, “Subpulse processing for long range surveillance noise radars,” in _Proc. International Conference on Radar Systems (Radar 2017)_ , Belfast, UK, Oct. 2017, pp. 1–4.
* [12] A. Barreto, L. Pralon, B. Pompeo, G. Beltrao, and M. Pralon, “FPGA design and implementation of a real-time subpulse processing architecture for noise radars,” in _Proc. 2019 International Radar Conference (RADAR)_ , Toulon, France, Sept. 2019, pp. 1–6.
* [13] D. S. Doviak and R. J. Zrnic, _Doppler Radar and Weather Observations_ , 2nd ed. San Diego, CA, USA: Academic Press, 2001.
* [14] B. Silva and G. Fraidenraich, “Performance analysis of the classic and robust Chinese remainder theorems in pulsed Doppler radars,” _IEEE Trans. Signal Process._ , vol. 66, no. 18, pp. 4898–4903, Sept. 2018.
* [15] M. A. Richards, _Fundamentals of Radar Signal Processing_ , 2nd ed. New York, NY, USA: McGraw-Hill, 2014.
* [16] D. K. Barton, _Radar Equations for Modern Radar_ , 1st ed. Massachusetts, MA, USA: Artech House, 2013.
* [17] G. Trunk and S. Brockett, “Range and velocity ambiguity resolution,” in _Proc. Record IEEE Nat. Radar Conf._ , Lynnfield, MA, USA, Apr. 1993, pp. 146–149.
* [18] A. Ferrari, C. Berenguer, and G. Alengrin, “Doppler ambiguity resolution using multiple PRF,” _IEEE Trans. Aerosp. Electron. Syst._ , vol. 33, no. 3, pp. 738–751, Jul. 1997.
* [19] A. Papoulis, _Probability, Random Variables, and Stochastic Processes_ , 4th ed. New York, NY, USA: McGraw-Hill, 2002.
* [20] N. C. Beaulieu and K. T. Hemachandra, “Novel representations for the bivariate Rician distribution,” _IEEE Trans. Commun._ , vol. 59, no. 11, pp. 2951–2954, Nov. 2011.
* [21] A. Behnad, N. C. Beaulieu, and K. T. Hemachandra, “Correction to “Novel representations for the bivariate Rician distribution”,” _IEEE Trans. Commun._ , vol. 60, no. 6, pp. 1486–1486, Jun. 2012.
* [22] M. Abramowitz and I. A. Stegun, _Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables_. Washington, DC: US Dept. of Commerce: National Bureau of Standards, 1972.
* [23] Y. H. Wang, “On the number of successes in independent trials,” _Statistica Sinica_ , vol. 3, no. 2, pp. 295–312, 1993.
* [24] A. Leon-Garcia, _Probability and Random Processes for Electrical Engineering_ , 3rd ed. New Jersey, NJ, USA: Pearson Prentice Hall, 1994.
* [25] H. Friedman, “A consistent Fubini-Tonelli theorem for nonmeasurable functions,” _Illinois J. Math._ , vol. 24, no. 3, pp. 390–395, 1980.
* [26] Wolfram Research, Inc. (2018), _Wolfram Research_ , Accessed: Sept. 19, 2018. [Online]. Available: http://functions.wolfram.com
* [27] A. P. Prudnikov, Y. A. Bryčkov, and O. I. Maričev, _Integrals and Series: Vol. 2_ , 2nd ed., Fizmatlit, Ed. Moscow, Russia: Fizmatlit, 1992.
University of Queensland, Brisbane QLD 4072, Australia
<EMAIL_ADDRESS>, <EMAIL_ADDRESS>, <EMAIL_ADDRESS>
# Towards a Standard Feature Set for
Network Intrusion Detection System Datasets
Mohanad Sarhan 11 Siamak Layeghy 11 Marius Portmann 11
###### Abstract
Network Intrusion Detection Systems (NIDSs) are important tools for the
protection of computer networks against increasingly frequent and
sophisticated cyber attacks. Recently, a lot of research effort has been
dedicated to the development of Machine Learning (ML) based NIDSs. As in any
ML-based application, the availability of high-quality datasets is critical
for the training and evaluation of ML-based NIDS. One of the key problems with
the currently available NIDS datasets is the lack of a standard feature set.
The use of a unique and proprietary set of features for each of the publicly
available datasets makes it virtually impossible to compare the performance of
ML-based traffic classifiers on different datasets, and hence to evaluate the
ability of these systems to generalise across different network scenarios. To
address that limitation, this paper proposes and evaluates standard NIDS
feature sets based on the NetFlow network meta-data collection protocol and
system. We evaluate and compare two NetFlow-based feature set variants, a
version with 12 features, and another one with 43 features. For our
evaluation, we converted four widely used NIDS datasets (UNSW-NB15, BoT-IoT,
ToN-IoT, CSE-CIC-IDS2018) into new variants with our proposed NetFlow based
feature sets. Based on an Extra Tree classifier, we compared the
classification performance of the NetFlow-based feature sets with the
proprietary feature sets provided with the original datasets. While the
smaller feature set cannot match the classification performance of the
proprietary feature sets, the larger set with 43 NetFlow features,
surprisingly achieves a consistently higher classification performance
compared to the original feature set, which was tailored to each of the
considered NIDS datasets. The proposed NetFlow-based standard NIDS feature
set, together with the four benchmark datasets made available to the research
community, allows a fair comparison of ML-based network traffic classifiers
across different NIDS datasets. We believe that having a standard feature set
is critical for allowing a more rigorous and thorough evaluation of ML-based
NIDSs and that it can help bridge the gap between academic research and the
practical deployment of such systems.
###### keywords:
Machine Learning, NetFlow, Network Intrusion Detection System
## 1 Introduction
Network Intrusion Detection Systems (NIDSs) aim to detect network attacks and
to preserve the three principles of information security: confidentiality,
integrity, and availability [1]. Signature-based NIDSs match attack signatures
to observed traffic, giving a high detection accuracy to known attacks.
However, these systems are unable to detect previously unseen (zero-day)
attacks or new variants of known attacks. Therefore, researchers have
investigated anomaly-based NIDSs that focus on matching attack behaviours and
patterns [2]. Machine Learning (ML), a sub-field of artificial intelligence,
is capable of learning and extracting complex network attack patterns that may
threaten computer networks if undetected [3]. All network intrusions generate
a unique set of security events that aid in their classification
process. These identifying patterns can be extracted from network traffic in
the form of data features. To generate a dataset, corresponding data features
form network data flows that are ideally labelled with an attack or a benign
class to allow for a supervised ML methodology.
Real-world network flow datasets with labels that identify attack and benign
flows are challenging to obtain, mainly due to security and privacy concerns.
Therefore, researchers have designed network test-beds to generate synthetic
datasets that consist of labelled network data flows [4]. The data flows are
made of several network features that are often preselected based on the
authors’ domain knowledge and available extraction tools. As a result, the
currently available NIDS datasets are very distinct in terms of their feature
sets and therefore the security events represented by the data flows. Due to
the great impact of data features on the performance of ML models [5], the
evaluation of the proposed ML-based NIDSs is often unreliable when tested on
multiple datasets using their original feature sets. Finally, as certain
network data features require complex and deep packet inspection, the
computational complexity of feature extraction and processing is not feasible
in practical, large-scale deployments.
The importance of having a standard feature set for all datasets is paramount.
It will facilitate a fair and reliable evaluation of proposed ML models across
various network environments and attack scenarios. This also enables an
evaluation of the generalisability of the model, and hence its performance
when deployed in practical network scenarios. Moreover, a standard feature set
will ensure that the security events and network information presented by NIDS
datasets are the same and in a controlled manner. NetFlow is an industry-
standard protocol for network traffic collection [6]. Its practical and
scalable deployment properties are capable of enhancing the deployment
feasibility of ML-based NIDSs. NetFlow features are capable of presenting key
security events that are crucial in the identification of network attacks.
Therefore, we believe that applying NetFlow-based features in the design of a
universal feature set will facilitate the successful deployment of ML-based
NIDS in practical network scenarios.
Four widely used NIDS datasets, referred to as UNSW-NB15 [7], BoT-IoT [8],
ToN-IoT [9], and CIC-CSE-IDS2018 [10] have been converted into a common basic
NetFlow-based feature set [11]. The NetFlow datasets address some of the
current research issues by applying a common feature set across multiple
datasets. However, due to the insufficient security information represented by
the basic NetFlow feature set, the ML models achieve only limited detection
accuracy, in particular when performing multi-class experiments. Therefore,
this paper proposes an extended NetFlow feature set as the standard version to
be used in future NIDS datasets. As part of its evaluation, the features have
been extracted and labelled from four well-known datasets. The datasets
generated are named NF-UNSW-NB15-v2, NF-BoT-IoT-v2, NF-ToN-IoT-v2, NF-CSE-CIC-
IDS2018-v2 and NF-UQ-NIDS-v2, and have been made publicly available for
research purposes [12].
This paper explores two variants of NetFlow-based feature sets along with
their proprietary feature sets. The rest of the paper is organised as follows.
Existing NIDS datasets and their limitations are discussed in Section 2.
Section 3 motivates the case for having a standard and a common feature set in
NIDS datasets. It also illustrates our methodology of extracting the proposed
features. Then, in Section 4, we use an Extra Tree classifier to compare the
predictive power of our proposed NetFlow-based feature set with the
proprietary feature sets provided with the original benchmark NIDS datasets.
Finally, Section 5 concludes the paper.
## 2 Limitations of Existing Datasets
Researchers have created engineered benchmark NIDS datasets due to the
difficulty of obtaining labelled realistic network traffic. A network testbed
is designed to simulate the network behaviour of multiple end nodes. The
artificial network environment overcomes the security and privacy issues faced
by real-world networks. Besides, labelling the network flows generated by such
controlled environments is more reliable than labelling traffic of open-world
realistic networks. During the experiments, benign network traffic and various
attack scenarios are generated and conducted over the network testbed.
Meanwhile, the network packets are captured in their native packet capture
(pcap) format and dumped onto storage devices. A set of network data features
are extracted from the pcap files using appropriate tools and methods, forming
network data flows. The result is a data source of labelled network flows
reflecting benign and malicious network behaviour. The generated datasets are
published and made publicly accessible for use in the design and evaluation
phases of ML-based NIDS models [13].
The network data features that form these data flows are critical as they need
to represent an adequate amount of security events that would aid in the ML
model’s classification of benign and attack classes. They also need to be
feasible in number and extraction complexity for scalable and practical
deployments. A key task of designing an ML-based NIDS is the selection of the
utilised data features. However, due to the lack of a standard feature set in
generating NIDS datasets, the authors have applied their domain experience in
the selection of these features. As a result, each available dataset is made
up of its own unique set of features that their authors believe would lead to
the best possible results in the classification stage. Each of the current
feature sets is almost exclusive and completely different from other sets,
sharing only a small number of features. The current evaluation method of ML
models across multiple datasets requires the usage of the unique feature sets
presented by each dataset.
The differences in the security information represented by each dataset’s
feature set have caused limitations and concerns regarding the reliability of
the evaluation methods followed. The three main issues of not having a
standard feature set are: 1) the complex extraction of several features from
network traffic, some of which are irrelevant due to the lack of security
events; 2) the limited ability to evaluate an ML model's generalisation to a
targeted feature set across multiple datasets; and 3) the lack of a universal
dataset containing network data flows collected over multiple network
environments. It is believed that the lack of reliable evaluation methods has
caused a gap between the extensive academic research produced and the
practical deployment of ML-based NIDS models in production networks [14]. Four
of the most recent and widely-used NIDS datasets are discussed, which
represent modern behavioural network attacks due to their production time.
* UNSW-NB15: The Cyber Range Lab of the Australian Centre for Cyber Security
(ACCS) released the widely used, UNSW-NB15, dataset in 2015. The IXIA
PerfectStorm tool was utilised to generate a hybrid of testbed-based benign
network activities as well as synthetic attack scenarios. The tcpdump tool was
implemented to capture a total of 100 GB of pcap files. Argus and Bro-IDS, now
called Zeek [15], and twelve additional SQL algorithms were used to extract
the dataset’s original 49 features [7]. The dataset contains 2,218,761
(87.35%) benign flows and 321,283 (12.65%) attack ones, that is, 2,540,044
flows in total.
* BoT-IoT: The Cyber Range Lab of the Australian Centre for Cyber Security (ACCS)
designed a network environment in 2018 that consists of normal and botnet
traffic [8]. The Ostinato and Node-red tools were utilised to generate the
non-IoT and IoT traffic respectively. A total of 69.3GB of pcap files were
captured and the Argus tool was used to extract the dataset’s original 42
features. The dataset contains 477 (0.01%) benign flows and 3,668,045 (99.99%)
attack ones, that is, 3,668,522 flows in total.
* ToN-IoT: A recent heterogeneous dataset released in 2019 [9] that includes
telemetry data of Internet of Things (IoT) services, network traffic of IoT
networks, and operating system logs. In this paper, the portion containing
network traffic flows is utilised. The dataset is made up of a large number of
attack scenarios conducted in a representation of a realistic large-scale
network at the ACCS Cyber Range Lab. Bro-IDS, now called Zeek [15], was
used to extract the dataset’s original 44 features. The dataset is made up of
796,380 (3.56%) benign flows and 21,542,641 (96.44%) attack samples, that is,
22,339,021 flows in total.
* CSE-CIC-IDS2018: A dataset released by a collaborative project between the
Communications Security Establishment (CSE) & Canadian Institute for
Cybersecurity (CIC) in 2018 [10]. The victim network consisted of five
different organisational departments and an additional server room. The benign
packets were generated by network events using the abstract behaviour of human
users. The attack scenarios were executed by one or more machines outside the
target network. The CICFlowMeter-V3 tool was used to extract the original
dataset’s 75 features. The full dataset has 13,484,708 (83.07%) benign flows
and 2,748,235 (16.93%) attack flows, that is, 16,232,943 flows in total.
Figure 1: Venn diagram of the shared and exclusive features of four NIDS
datasets
In Figure 1, the shared and unique features of the aforementioned datasets are
displayed. The set of features available in all four datasets contains 3
features, and the pairwise shared feature numbers vary from 1 to 5. As most of
the features are exclusive to individual datasets, the evaluation of proposed
ML models using a targeted feature set across the four datasets is
challenging. Moreover, the ratio of the classes, i.e., benign and attack
flows, varies widely across datasets: the UNSW-NB15 and CSE-CIC-IDS2018
datasets have very high benign-to-attack ratios, whereas the ToN-IoT and
BoT-IoT datasets are mainly made up of attack samples, which does not
represent realistic network behaviour. Also, some of the features in the
UNSW-NB15, BoT-IoT, and CSE-CIC-IDS2018 datasets are handcrafted features that
are not originally found in network packets but are statistically calculated
based on other features, such as the total number of bytes transferred over
the last 100 seconds. All these differences in the security information
presented by the datasets have led to the design of a standard feature set for
NIDS datasets.
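The overlap analysis behind Figure 1 amounts to plain set operations over the datasets' feature lists. The sketch below illustrates the idea; the feature names are illustrative stand-ins, not the datasets' full 49/42/44/75-feature sets, so the counts do not match the Venn diagram.

```python
from itertools import combinations

# Toy stand-ins for the four proprietary feature sets (NOT the real lists).
datasets = {
    "UNSW-NB15": {"dur", "proto", "sbytes", "dbytes", "sttl"},
    "BoT-IoT": {"dur", "proto", "spkts", "dpkts", "sbytes"},
    "ToN-IoT": {"proto", "duration", "src_bytes", "dst_bytes"},
    "CSE-CIC-IDS2018": {"proto", "flow_duration", "tot_fwd_pkts"},
}

# Features common to all four datasets (the centre of the Venn diagram).
common = set.intersection(*datasets.values())

# Pairwise overlap counts, one per dataset pair.
pairwise = {(a, b): len(datasets[a] & datasets[b])
            for a, b in combinations(datasets, 2)}
```

With real feature lists substituted in, `common` and `pairwise` reproduce the shared-feature counts reported above (3 features common to all four, pairwise overlaps between 1 and 5).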
## 3 Benchmarking a Standard Feature Set
Due to the aforementioned limitations faced by current NIDS datasets made up
of unique feature sets, in this paper, a standard feature set is proposed. The
feature set will be evaluated and benchmarked to be used in the releases of
new NIDS datasets to efficiently design ML-based NIDS. The design of ML-based
NIDS requires a feature set to be extracted and scanned for intrusions when
implemented. The choice of these features significantly alters the performance
of the NIDS as they need to contain an adequate amount of security events to
aid the ML model classification. By having a standard feature set, researchers
can evaluate their model’s classification ability based on their chosen
features, across multiple datasets and hence different attack scenarios
conducted over several network environments. This can be used to make sure
their measured model performance generalises when deployed over different
networks. Moreover, by having datasets sharing a common ground feature set,
they can be merged to create a universal comprehensive source of data.
Finally, having a standard feature set will grant control over the security
information presented by NIDS datasets. We believe that a standard feature set
will narrow the gap between the number of research experiments and the
practical deployment of ML-based NIDS [14].
### 3.1 NetFlow
The collection and storage of network traffic are important for organisations
to monitor, analyse, and audit their network environments. However, network
traffic is overwhelming in volume and is therefore aggregated in terms of
flows. A network data flow is a sequence of packets, in either uni- or bi-
direction, between two unique endpoints sharing some attributes such as
source/destination IP address and L4 (transport layer) ports, and the L4
protocol, also known as the five-tuple [11]. A data flow can also be enhanced
with additional features, each representing details of the respective network
traffic. The information provided by these features contains security events
that are essential in analysing network traffic in case of a threat [16].
Network flows can be represented in various formats where the NetFlow is the
de-facto industry standard, developed and proposed by Darren Kerr and Barry Bruins
from Cisco in 1996 [17]. NetFlow evolved over the years, where version 9 is
the most common due to its larger variety of data features and bidirectional
flow support [18].
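The aggregation of packets into flows keyed on the five-tuple can be sketched as follows. The packet records are hypothetical, and only two counters (incoming packets and bytes) are shown out of the feature set of Table 1.

```python
from collections import defaultdict

# Hypothetical packet records: (src IP, dst IP, src port, dst port,
# L4 protocol, packet size in bytes).
packets = [
    ("10.0.0.1", "10.0.0.2", 51000, 80, "TCP", 60),
    ("10.0.0.1", "10.0.0.2", 51000, 80, "TCP", 1500),
    ("10.0.0.3", "10.0.0.2", 52000, 443, "TCP", 80),
]

# Packets sharing the same five-tuple are accumulated into one flow record.
flows = defaultdict(lambda: {"IN_PKTS": 0, "IN_BYTES": 0})
for src, dst, sport, dport, proto, size in packets:
    key = (src, dst, sport, dport, proto)  # the five-tuple
    flows[key]["IN_PKTS"] += 1
    flows[key]["IN_BYTES"] += size
```

In practice this aggregation is performed by a NetFlow exporter such as nProbe rather than in application code, but the keying principle is the same.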
Table 1: List of the proposed standard NetFlow features

Feature | Description
---|---
IPV4_SRC_ADDR | IPv4 source address
IPV4_DST_ADDR | IPv4 destination address
L4_SRC_PORT | IPv4 source port number
L4_DST_PORT | IPv4 destination port number
PROTOCOL | IP protocol identifier byte
L7_PROTO | Layer 7 protocol (numeric)
IN_BYTES | Incoming number of bytes
OUT_BYTES | Outgoing number of bytes
IN_PKTS | Incoming number of packets
OUT_PKTS | Outgoing number of packets
FLOW_DURATION_MILLISECONDS | Flow duration in milliseconds
TCP_FLAGS | Cumulative of all TCP flags
CLIENT_TCP_FLAGS | Cumulative of all client TCP flags
SERVER_TCP_FLAGS | Cumulative of all server TCP flags
DURATION_IN | Client to Server stream duration (msec)
DURATION_OUT | Server to Client stream duration (msec)
MIN_TTL | Min flow TTL
MAX_TTL | Max flow TTL
LONGEST_FLOW_PKT | Longest packet (bytes) of the flow
SHORTEST_FLOW_PKT | Shortest packet (bytes) of the flow
MIN_IP_PKT_LEN | Len of the smallest flow IP packet observed
MAX_IP_PKT_LEN | Len of the largest flow IP packet observed
SRC_TO_DST_SECOND_BYTES | Src to dst Bytes/sec
DST_TO_SRC_SECOND_BYTES | Dst to src Bytes/sec
RETRANSMITTED_IN_BYTES | Number of retransmitted TCP flow bytes (src->dst)
RETRANSMITTED_IN_PKTS | Number of retransmitted TCP flow packets (src->dst)
RETRANSMITTED_OUT_BYTES | Number of retransmitted TCP flow bytes (dst->src)
RETRANSMITTED_OUT_PKTS | Number of retransmitted TCP flow packets (dst->src)
SRC_TO_DST_AVG_THROUGHPUT | Src to dst average thpt (bps)
DST_TO_SRC_AVG_THROUGHPUT | Dst to src average thpt (bps)
NUM_PKTS_UP_TO_128_BYTES | Packets whose IP size <= 128
NUM_PKTS_128_TO_256_BYTES | Packets whose IP size > 128 and <= 256
NUM_PKTS_256_TO_512_BYTES | Packets whose IP size > 256 and <= 512
NUM_PKTS_512_TO_1024_BYTES | Packets whose IP size > 512 and <= 1024
NUM_PKTS_1024_TO_1514_BYTES | Packets whose IP size > 1024 and <= 1514
TCP_WIN_MAX_IN | Max TCP Window (src->dst)
TCP_WIN_MAX_OUT | Max TCP Window (dst->src)
ICMP_TYPE | ICMP Type * 256 + ICMP code
ICMP_IPV4_TYPE | ICMP Type
DNS_QUERY_ID | DNS query transaction Id
DNS_QUERY_TYPE | DNS query type (e.g., 1=A, 2=NS..)
DNS_TTL_ANSWER | TTL of the first A record (if any)
FTP_COMMAND_RET_CODE | FTP client command return code
Most of the network devices such as routers and switches are capable of
extracting NetFlow records. This is a great motivation for standardising
NetFlow features for NIDS datasets, as the level of complexity and resources
required to collect and store them is lower. In this paper, NetFlow v9
features have been utilised to form the proposed feature set, listed and
described in Table 1. There are 43 features in total with some providing
information on general flow statistics and others on specific protocol
applications such as DNS and FTP. All features are flow-based, meaning they
are extracted from packet headers and do not depend on the payload information
which is often encrypted in secure communications due to privacy concerns. The
chosen features are numerical in type for efficient ML experiments. These
features contain useful security events to enhance the models' intrusion
detection capabilities.
### 3.2 Datasets
Figure 2 shows the procedure of generating NIDS datasets using the proposed
feature set. The nProbe tool by Ntop [19] is utilised to extract 43 NetFlow
version 9 features from the publicly available pcap files. The output format
is chosen as text flows, in which each feature is separated by a comma (,) to
be utilised as CSV files. Two label features are created by matching the five
flow identifiers; source/destination IPs and ports and protocol to the ground
truth attack events published by the original dataset. If a data flow is
located in the attack events it would be labelled as an attack (class 1) in
the binary label and its respective attack’s type would be recorded in the
attack label, otherwise, the sample is labelled as a benign flow (class 0).
Figure 2: Feature set extraction and labelling procedure

Table 2: Specifications of the datasets proposed in this paper, compared to the original and basic NetFlow datasets

Dataset | Release year | Feature extraction tool | Number of features | CSV size (GB) | Benign to attack samples ratio
---|---|---|---|---|---
UNSW-NB15 | 2015 | Argus, Bro-IDS and MS SQL | 49 | 0.55 | 8.7 to 1.3
NF-UNSW-NB15 | 2020 | nProbe | 12 | 0.11 | 9.6 to 0.4
NF-UNSW-NB15-v2 | 2021 | nProbe | 43 | 0.41 | 9.6 to 0.4
BoT-IoT | 2018 | Argus | 42 | 0.95 | 0.0 to 10
NF-BoT-IoT | 2020 | nProbe | 12 | 0.05 | 0.2 to 9.8
NF-BoT-IoT-v2 | 2021 | nProbe | 43 | 5.60 | 0.0 to 10.0
ToN-IoT | 2020 | Bro-IDS | 44 | 3.02 | 0.4 to 9.6
NF-ToN-IoT | 2020 | nProbe | 12 | 0.09 | 2.0 to 8.0
NF-ToN-IoT-v2 | 2021 | nProbe | 43 | 2.47 | 3.6 to 6.4
CSE-CIC-IDS2018 | 2018 | CICFlowMeter-V3 | 75 | 6.41 | 8.3 to 1.7
NF-CSE-CIC-IDS2018 | 2020 | nProbe | 12 | 0.58 | 8.8 to 1.2
NF-CSE-CIC-IDS2018-v2 | 2021 | nProbe | 43 | 2.80 | 8.8 to 1.2
NF-UQ-NIDS | 2020 | nProbe | 12 | 1.0 | 7.7 to 2.3
NF-UQ-NIDS-v2 | 2021 | nProbe | 43 | 12.5 | 3.3 to 6.7
In this paper, the proposed feature set has been extracted from four well-
known datasets; UNSW-NB15, BoT-IoT, ToN-IoT, and CSE-CIC-IDS2018. Their
publicly available pcap files and ground truth events have been utilised in
the features extraction and labelling processes respectively. The generated
datasets have been named NF-UNSW-NB15-v2, NF-BoT-IoT-v2, NF-ToN-IoT-v2, NF-
CSE-CIC-IDS2018-v2 and NF-UQ-NIDS-v2. The last of these is a merge of all the
other datasets, which is a practical advantage of having a common feature set.
Table 2 lists the NetFlow datasets and compares their properties to the
original datasets in terms of the Feature Extraction (FE) tool utilised, the
number of features, file size and the benign to attack samples ratio. As
illustrated, two NetFlow datasets correspond to each original NIDS
dataset, where v1 and v2 are the basic and extended versions respectively. The
fifth NetFlow dataset is a comprehensive dataset that combines all four.
* NF-UNSW-NB15-v2: The NetFlow-based format of the UNSW-NB15 dataset, named NF-
UNSW-NB15, has been extended with additional NetFlow features and labelled
with its respective attack categories. The total number of data flows is
2,390,275 out of which 95,053 (3.98%) are attack samples and 2,295,222
(96.02%) are benign. The attack samples are further classified into nine
subcategories, Table 3 represents the NF-UNSW-NB15-v2 dataset’s distribution
of all flows.
Table 3: NF-UNSW-NB15-v2 distribution

Class | Count | Description
---|---|---
Benign | 2295222 | Normal unmalicious flows
Fuzzers | 22310 | An attack in which the attacker sends large amounts of random data that cause a system to crash, and also aims to discover security vulnerabilities in a system.
Analysis | 2299 | A group that presents a variety of threats targeting web applications through ports, emails and scripts.
Backdoor | 2169 | A technique that aims to bypass security mechanisms by replying to specific constructed client applications.
DoS | 5794 | Denial of Service is an attempt to overload a computer system's resources with the aim of preventing access to or availability of its data.
Exploits | 31551 | Sequences of commands controlling the behaviour of a host through a known vulnerability.
Generic | 16560 | A method that targets cryptography and causes a collision with each block-cipher.
Reconnaissance | 12779 | A technique for gathering information about a network host, also known as a probe.
Shellcode | 1427 | A malware that penetrates a code to control a victim's host.
Worms | 164 | Attacks that replicate themselves and spread to other computers.
* NF-BoT-IoT-v2: An IoT NetFlow-based dataset generated by expanding the NF-
BoT-IoT dataset. The features were extracted from the publicly available pcap
files and the flows were labelled with their respective attack categories. The
total number of data flows is 37,763,497, out of which 37,628,460 (99.64%) are
attack samples and 135,037 (0.36%) are benign. There are four attack
categories in the dataset, Table 4 represents the NF-BoT-IoT-v2 distribution
of all flows.
Table 4: NF-BoT-IoT-v2 distribution

Class | Count | Description
---|---|---
Benign | 135037 | Normal unmalicious flows
Reconnaissance | 2620999 | A technique for gathering information about a network host, also known as a probe.
DDoS | 18331847 | Distributed Denial of Service is an attempt similar to DoS but with multiple distributed sources.
DoS | 16673183 | An attempt to overload a computer system's resources with the aim of preventing access to or availability of its data.
Theft | 2431 | A group of attacks that aims to obtain sensitive data, such as data theft and keylogging.
* NF-ToN-IoT-v2: The publicly available pcaps of the ToN-IoT dataset are utilised
to generate its NetFlow records, leading to a NetFlow-based IoT network
dataset called NF-ToN-IoT-v2. The total number of data flows is 16,940,496 out
of which 10,841,027 (63.99%) are attack samples and 6,099,469 (36.01%) are
benign. Table 5 lists and defines the distribution of the NF-ToN-IoT-v2
dataset.
Table 5: NF-ToN-IoT-v2 distribution

Class | Count | Description
---|---|---
Benign | 6099469 | Normal unmalicious flows
Backdoor | 16809 | A technique that aims to attack remote-access computers by replying to specific constructed client applications.
DoS | 712609 | An attempt to overload a computer system's resources with the aim of preventing access to or availability of its data.
DDoS | 2026234 | An attempt similar to DoS but with multiple distributed sources.
Injection | 684465 | A variety of attacks that supply untrusted inputs aiming to alter the course of execution, with SQL and code injections two of the main ones.
MITM | 7723 | Man In The Middle is a method that places an attacker between a victim and the host with which the victim is trying to communicate, with the aim of intercepting traffic and communications.
Password | 1153323 | Covers a variety of attacks aimed at retrieving passwords, by either brute force or sniffing.
Ransomware | 3425 | An attack that encrypts the files stored on a host and asks for compensation in exchange for the decryption technique/key.
Scanning | 3781419 | A group consisting of a variety of techniques that aim to discover information about networks and hosts, also known as probing.
XSS | 2455020 | Cross-site Scripting is a type of injection in which an attacker uses web applications to send malicious scripts to end-users.
* NF-CSE-CIC-IDS2018-v2: The original pcap files of the CSE-CIC-IDS2018 dataset
are utilised to generate a NetFlow-based dataset called NF-CSE-CIC-IDS2018-v2.
The total number of flows is 18,893,708, out of which 2,258,141 (11.95%) are
attack samples and 16,635,567 (88.05%) are benign ones, Table 6 represents the
dataset’s distribution.
Table 6: NF-CSE-CIC-IDS2018-v2 distribution

Class | Count | Description
---|---|---
Benign | 16635567 | Normal unmalicious flows
BruteForce | 120912 | A technique that aims to obtain username and password credentials by iterating over a list of predefined possibilities.
Bot | 143097 | An attack that enables an attacker to remotely control several hijacked computers to perform malicious activities.
DoS | 483999 | An attempt to overload a computer system's resources with the aim of preventing access to or availability of its data.
DDoS | 1390270 | An attempt similar to DoS but with multiple distributed sources.
Infiltration | 116361 | An inside attack that sends a malicious file via email to exploit an application, followed by a backdoor that scans the network for other vulnerabilities.
Web Attacks | 3502 | A group that includes SQL injections, command injections and unrestricted file uploads.
* NF-UQ-NIDS-v2: A comprehensive dataset, merging all the aforementioned
datasets. The newly published dataset represents the benefits of the shared
dataset feature sets, where the merging of multiple smaller datasets is
possible. This will eventually lead to a bigger and universal NIDS dataset
containing flows from multiple network setups and different attack settings.
It includes an additional label feature, identifying the original dataset of
each flow. This can be used to compare the same attack scenarios conducted
over two or more different testbed networks. The attack categories have been
modified to combine all parent categories. Attacks named DoS attacks-Hulk, DoS
attacks-SlowHTTPTest, DoS attacks-GoldenEye and DoS attacks-Slowloris have
been renamed to the parent DoS category. Attacks named DDoS attack-LOIC-UDP,
DDoS attack-HOIC and DDoS attacks-LOIC-HTTP have been renamed to DDoS. Attacks
named FTP-BruteForce, SSH-Bruteforce, Brute Force -Web and Brute Force -XSS
have been combined as a brute-force category. Finally, SQL Injection attacks
have been included in the injection attacks category. The NF-UQ-NIDS-v2 dataset
has a total of 75,987,976 records, out of which 25,165,295 (33.12%) are benign
flows and 50,822,681 (66.88%) are attacks. Table 7 lists the distribution of
the final attack categories.
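The merge-and-relabel procedure described above can be sketched as follows. The per-dataset frames and the (partial) parent-category mapping are illustrative toy values; the mapping entries themselves follow the renamings listed in the text.

```python
import pandas as pd

# Toy per-dataset frames sharing the common feature set
# (only IN_BYTES and the attack label are shown here).
nf_ids2018 = pd.DataFrame({"IN_BYTES": [100, 200],
                           "Attack": ["DoS attacks-Hulk", "FTP-BruteForce"]})
nf_unsw = pd.DataFrame({"IN_BYTES": [300], "Attack": ["Exploits"]})

# Extra label identifying the original dataset of each flow.
nf_ids2018["Dataset"] = "NF-CSE-CIC-IDS2018-v2"
nf_unsw["Dataset"] = "NF-UNSW-NB15-v2"

# Fold fine-grained attack names into their parent categories.
parent = {
    "DoS attacks-Hulk": "DoS", "DoS attacks-SlowHTTPTest": "DoS",
    "DoS attacks-GoldenEye": "DoS", "DoS attacks-Slowloris": "DoS",
    "DDoS attack-LOIC-UDP": "DDoS", "DDoS attack-HOIC": "DDoS",
    "DDoS attacks-LOIC-HTTP": "DDoS",
    "FTP-BruteForce": "Brute Force", "SSH-Bruteforce": "Brute Force",
    "Brute Force -Web": "Brute Force", "Brute Force -XSS": "Brute Force",
    "SQL Injection": "Injection",
}

merged = pd.concat([nf_ids2018, nf_unsw], ignore_index=True)
merged["Attack"] = merged["Attack"].replace(parent)
```

Because all frames share the same columns, the concatenation is trivial; this is the practical advantage of a common feature set that the NF-UQ-NIDS-v2 dataset demonstrates.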
Table 7: NF-UQ-NIDS-v2 distribution

Class | Count | Class | Count
---|---|---|---
Benign | 25165295 | Scanning | 3781419
DDoS | 21748351 | Fuzzers | 22310
Reconnaissance | 2633778 | Backdoor | 18978
Injection | 684897 | Bot | 143097
DoS | 17875585 | Generic | 16560
Brute Force | 123982 | Analysis | 2299
Password | 1153323 | Shellcode | 1427
XSS | 2455020 | MITM | 7723
Infilteration | 116361 | Worms | 164
Exploits | 31551 | Ransomware | 3425
Theft | 2431 | |
## 4 Evaluation
In this section, the proposed NetFlow feature set is evaluated across five
NIDS datasets; NF-UNSW-NB15-v2, NF-BoT-IoT-v2, NF-ToN-IoT-v2, NF-CSE-CIC-
IDS2018-v2 and NF-UQ-NIDS-v2. An ensemble ML classifier, known as Extra Trees,
that belongs to the trees family has been utilised for this purpose. The
evaluation is conducted by comparing the classifier performance with the
corresponding metrics of the basic NetFlow and original datasets. Various
classification metrics are collected such as accuracy, Area Under the Curve
(AUC), F1 Score, Detection Rate (DR), False Alarm Rate (FAR) and time required
to predict a single test sample in microseconds (µs). As part of the data pre-
processing, the flow identifiers such as IDs, source/destination IP and ports,
timestamps, and start/end time are dropped to avoid learning bias towards
attacking and victim end nodes. For the UNSW-NB15 and NF-UNSW-NB15-v2
datasets, the Time To Live (TTL)-based features are dropped due to their
extreme correlation with the labels. Additionally, the min-max normalisation
technique has been applied to scale all datasets’ values between 0 and 1. The
datasets have been split into 70%-30% for training and testing purposes. For a
fair evaluation, five cross-validation splits are conducted and the mean is
measured.
### 4.1 Binary-class Classification
In Table 8, the attack detection (binary classification) performance of the
datasets has been measured and compared to the original and basic NetFlow
datasets. Using the NF-UNSW-NB15-v2 dataset, the ML model’s performance has
significantly increased with an AUC of 0.9845, compared to 0.9485 and 0.9545
when using the NF-UNSW-NB15 and UNSW-NB15 datasets respectively. The model
achieved the highest F1 score of 0.97 in the shortest prediction time when
using the extended NetFlow feature set. The NF-BoT-IoT-v2 dataset has enabled
the ML model to achieve the highest possible detection accuracy and F1 score,
the same as the BoT-IoT dataset. However, the model has a significantly lower
FAR and prediction time, resulting in an increased AUC of 0.9987 and a
prediction time of only 3.90 µs. Using the extended NetFlow feature set,
the ML model achieved a significantly higher accuracy than with NF-BoT-IoT:
100% compared to 93.82%.
Table 8: Binary-class classification results

Dataset | Accuracy | AUC | F1 Score | DR | FAR | Prediction Time (µs)
---|---|---|---|---|---|---
UNSW-NB15 | 99.25% | 0.9545 | 0.92 | 91.25% | 0.35% | 10.05
NF-UNSW-NB15 | 98.62% | 0.9485 | 0.85 | 90.70% | 1.01% | 7.79
NF-UNSW-NB15-v2 | 99.73% | 0.9845 | 0.97 | 97.07% | 0.16% | 5.92
BoT-IoT | 100.00% | 0.9948 | 1.00 | 100.00% | 1.05% | 7.62
NF-BoT-IoT | 93.82% | 0.9628 | 0.97 | 93.70% | 1.13% | 5.37
NF-BoT-IoT-v2 | 100.00% | 0.9987 | 1.00 | 100.00% | 0.26% | 3.90
ToN-IoT | 97.86% | 0.9788 | 0.99 | 97.86% | 2.10% | 8.93
NF-ToN-IoT | 99.66% | 0.9965 | 1.00 | 99.67% | 0.37% | 6.05
NF-ToN-IoT-v2 | 99.64% | 0.9959 | 1.00 | 99.76% | 0.58% | 8.47
CSE-CIC-IDS2018 | 98.31% | 0.9684 | 0.94 | 94.75% | 1.07% | 23.01
NF-CSE-CIC-IDS2018 | 95.33% | 0.9506 | 0.83 | 94.71% | 4.59% | 17.04
NF-CSE-CIC-IDS2018-v2 | 99.35% | 0.9829 | 0.97 | 96.89% | 0.31% | 21.75
NF-UQ-NIDS | 97.25% | 0.9669 | 0.94 | 95.66% | 2.27% | 14.35
NF-UQ-NIDS-v2 | 97.90% | 0.9830 | 0.98 | 97.12% | 0.52% | 14.18
The intrusion detection results of the ML model using the NF-ToN-IoT-v2
dataset are superior to its original ToN-IoT dataset. Compared to NF-ToN-IoT,
it achieved a higher DR (99.76% vs 99.67%) but a slightly higher FAR. Overall, the accuracy
achieved by the model using the NF-ToN-IoT-v2 is 99.64%, which is higher than
ToN-IoT (97.86%) and similar to NF-ToN-IoT (99.66%). The model performance
when using the NF-CSE-CIC-IDS2018-v2 dataset is notably more efficient than
that with the CSE-CIC-IDS2018 and NF-CSE-CIC-IDS2018 datasets. It achieved a high DR
of 96.89% and a low FAR of 0.31% and required 21.75 µs per sample prediction.
The overall accuracy achieved is 99.35%, which is higher than both the CSE-
CIC-IDS2018 (98.31%) and NF-CSE-CIC-IDS2018 (95.33%) datasets. The merged NF-
UQ-NIDS-v2 dataset enabled the model to achieve an accuracy of 97.90%, a DR of
97.12% and a FAR of 0.52%, outperforming the NF-UQ-NIDS dataset with a lower
prediction time of 14.18 µs.
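The reported binary-class metrics can be derived from a model's predictions as shown below. The labels and scores are toy values; DR is taken as the attack-class recall TP/(TP+FN), and FAR as FP/(FP+TN).

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score

# Toy ground-truth labels (1 = attack), hard predictions, and
# predicted attack probabilities for the AUC.
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_pred = np.array([0, 0, 0, 1, 1, 1, 1, 0])
y_score = np.array([0.1, 0.2, 0.3, 0.6, 0.9, 0.8, 0.7, 0.4])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
dr = tp / (tp + fn)    # Detection Rate: detected attacks / all attacks
far = fp / (fp + tn)   # False Alarm Rate: misflagged benign / all benign
f1 = f1_score(y_true, y_pred)
auc = roc_auc_score(y_true, y_score)
```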
Figure 3: Binary-class classification F1 score
Figure 3 visually represents the F1 score obtained when applying an Extra
Trees classifier on the three different feature sets of five NIDS datasets:
the original as well as the basic and proposed NetFlow feature sets. This fair
comparison between the NetFlow feature sets demonstrates the benefit of having
a common feature set across multiple datasets. It enables the evaluation of
various attack detections using a common feature set. Overall, the proposed
(extended) NetFlow feature set has outperformed the original and basic feature
sets in terms of attack detection performance. On every dataset, the model
achieved an F1 score higher than or similar to that of the respective original
feature set. It is clear that using the proposed feature set achieves reliable
detection performance. Further feature selection experiments are required to
identify its key features and to streamline the extraction task.
### 4.2 Multi-class Classification
Table 9: NF-UNSW-NB15-v2 multi-class classification results

| UNSW-NB15 | NF-UNSW-NB15 | NF-UNSW-NB15-v2
---|---|---|---
Class Name | DR | F1 Score | DR | F1 Score | DR | F1 Score
Benign | 99.72% | 1.00 | 99.02% | 0.99 | 99.85% | 1.00
Analysis | 4.39% | 0.03 | 28.28% | 0.15 | 30.89% | 0.17
Backdoor | 13.96% | 0.08 | 39.17% | 0.17 | 40.30% | 0.18
DoS | 13.63% | 0.18 | 31.84% | 0.41 | 29.57% | 0.36
Exploits | 83.25% | 0.80 | 81.04% | 0.82 | 80.41% | 0.84
Fuzzers | 50.50% | 0.57 | 62.63% | 0.55 | 80.57% | 0.85
Generic | 86.08% | 0.91 | 57.13% | 0.66 | 85.15% | 0.90
Reconnaissance | 75.90% | 0.80 | 76.89% | 0.82 | 80.02% | 0.83
Shellcode | 53.61% | 0.59 | 87.91% | 0.75 | 87.67% | 0.69
Worms | 5.26% | 0.09 | 52.91% | 0.55 | 85.98% | 0.69
Weighted Average | 98.19% | 0.98 | 97.62% | 0.98 | 98.90% | 0.99
Prediction Time (µs) | 9.94 | 9.35 | 8.81
To further evaluate the proposed NetFlow feature set, multi-classification
experiments are conducted to measure the weighted average of DR, F1 score and
prediction time of each class present in the datasets. Tables 9, 10, 11, 12 and
13 represent the performances of the NF-UNSW-NB15-v2, NF-BoT-IoT-v2, NF-ToN-
IoT-v2, NF-CSE-CIC-IDS2018-v2 and NF-UQ-NIDS-v2 datasets respectively. The
datasets made up of the original and basic NetFlow feature sets are provided
for comparison purposes. In Table 9, the benefits of using the NF-UNSW-NB15-v2
over the former datasets are realised by increasing the ML model's F1 score to
0.99 from 0.98 and decreasing the prediction time to 8.81 µs. The DRs of
certain attack types such as fuzzers, generic, and worms have significantly
improved, while the others have remained at roughly the same rate. The
detection of the analysis, backdoor and DoS attacks is still unreliable when
using the extended NetFlow feature set; further analysis is required to
identify the missing key features. However, due to their small number of
samples, the overall accuracy of the NF-UNSW-NB15-v2 is higher (98.90%) than
UNSW-NB15 (98.19%) and NF-UNSW-NB15 (97.62%).
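The per-class DR and weighted-average F1 reported in the multi-class tables correspond to per-class recall and the support-weighted F1 score in scikit-learn. The class labels below are toy values, not drawn from any of the datasets.

```python
from sklearn.metrics import f1_score, recall_score

# Toy multi-class labels and predictions.
y_true = ["Benign", "Benign", "DoS", "DoS", "Exploits", "Exploits"]
y_pred = ["Benign", "Benign", "DoS", "Benign", "Exploits", "Exploits"]

classes = ["Benign", "DoS", "Exploits"]

# Per-class DR = per-class recall (one value per class, in order).
per_class_dr = recall_score(y_true, y_pred, labels=classes, average=None)

# Headline number: F1 averaged over classes, weighted by class support.
weighted_f1 = f1_score(y_true, y_pred, average="weighted")
```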
Table 10: NF-BoT-IoT-v2 multi-class classification results

| BoT-IoT | NF-BoT-IoT | NF-BoT-IoT-v2
---|---|---|---
Class Name | DR | F1 Score | DR | F1 Score | DR | F1 Score
Benign | 99.58% | 0.99 | 98.65% | 0.43 | 99.76% | 1.00
DDoS | 100.00% | 1.00 | 30.37% | 0.28 | 99.99% | 1.00
DoS | 100.00% | 1.00 | 36.33% | 0.31 | 99.99% | 1.00
Reconnaissance | 100.00% | 1.00 | 89.95% | 0.90 | 99.93% | 1.00
Theft | 91.16% | 0.95 | 88.06% | 0.18 | 83.01% | 0.85
Weighted Average | 100.00% | 1.00 | 73.58% | 0.77 | 99.99% | 1.00
Prediction Time (µs) | 12.63 | 9.19 | 11.86
Table 10 shows that, when using the NF-BoT-IoT-v2 dataset, the ML model
achieves almost the same near-perfect multi-classification performance as when
using the BoT-IoT dataset: 100% accuracy and a 1.00 F1 score. The four attack
categories are almost fully detected except for the theft attacks, where only
83.01% were successfully detected. The accuracy of the ML model is increased
from 73.58% to 99.99% and the F1 score from 0.77 to 1.00 when applied to the
extended NetFlow feature set compared to the basic set. Overall, it is a
significant improvement that overcomes the performance limitations faced by
the basic NetFlow datasets, despite the slight increase in prediction time.
Table 11: NF-ToN-IoT-v2 multi-class classification results

| ToN-IoT | NF-ToN-IoT | NF-ToN-IoT-v2
---|---|---|---
Class Name | DR | F1 Score | DR | F1 Score | DR | F1 Score
Benign | 89.97% | 0.94 | 98.97% | 0.99 | 99.44% | 0.99
Backdoor | 98.05% | 0.31 | 99.22% | 0.98 | 99.79% | 1.00
DDoS | 96.90% | 0.98 | 63.22% | 0.72 | 98.76% | 0.99
DoS | 53.89% | 0.57 | 95.91% | 0.48 | 89.41% | 0.91
Injection | 96.67% | 0.96 | 41.47% | 0.51 | 90.14% | 0.91
MITM | 66.25% | 0.16 | 52.81% | 0.38 | 37.45% | 0.45
Password | 86.99% | 0.92 | 27.36% | 0.24 | 97.16% | 0.97
Ransomware | 89.87% | 0.11 | 87.33% | 0.83 | 97.29% | 0.98
Scanning | 75.05% | 0.85 | 31.30% | 0.08 | 99.67% | 1.00
XSS | 98.83% | 0.99 | 24.49% | 0.19 | 96.83% | 0.96
Weighted Average | 84.61% | 0.87 | 56.34% | 0.60 | 98.05% | 0.98
Prediction Time (µs) | 12.02 | 21.21 | 12.15
In Table 11, the NF-ToN-IoT-v2 dataset enables the ML model to achieve
outstanding multi-class classification results. The extended NetFlow feature
set notably outperforms both the ToN-IoT and NF-ToN-IoT feature sets,
increasing the model's weighted F1 score from 0.87 and 0.60, respectively, to
0.98. The model also requires less prediction time than with the basic
NetFlow dataset. The extended NetFlow feature set increases the DR of all
attack types except DoS, MITM, and XSS; further analysis of which features
carry useful security events for these attacks is needed to aid their
detection. Overall, the NF-ToN-IoT-v2 feature set helps the ML model detect
the attacks present in the dataset, with an accuracy of 98.05% confirming the
reliability of the extended NetFlow feature set.
Table 12: NF-CSE-CIC-IDS2018-v2 multi-class classification results

| Class Name | CSE-CIC-IDS2018 DR | CSE-CIC-IDS2018 F1 Score | NF-CSE-CIC-IDS2018 DR | NF-CSE-CIC-IDS2018 F1 Score | NF-CSE-CIC-IDS2018-v2 DR | NF-CSE-CIC-IDS2018-v2 F1 Score |
|---|---|---|---|---|---|---|
| Benign | 89.50% | 0.94 | 69.83% | 0.82 | 99.69% | 1.00 |
| Bot | 99.92% | 0.99 | 100.00% | 1.00 | 100.00% | 1.00 |
| Brute Force -Web | 71.36% | 0.01 | 50.21% | 0.52 | 28.05% | 0.01 |
| Brute Force -XSS | 72.17% | 0.72 | 49.16% | 0.39 | 29.34% | 0.00 |
| DDoS attack-HOIC | 100.00% | 1.00 | 45.66% | 0.39 | 57.33% | 0.73 |
| DDoS attack-LOIC-UDP | 83.59% | 0.82 | 80.98% | 0.82 | 99.29% | 1.00 |
| DDoS attacks-LOIC-HTTP | 99.93% | 1.00 | 99.93% | 0.71 | 100.00% | 1.00 |
| DoS attacks-GoldenEye | 99.97% | 1.00 | 99.32% | 0.98 | 100.00% | 1.00 |
| DoS attacks-Hulk | 100.00% | 1.00 | 99.65% | 0.99 | 100.00% | 1.00 |
| DoS attacks-SlowHTTPTest | 69.80% | 0.60 | 0.00% | 0.00 | 100.00% | 1.00 |
| DoS attacks-Slowloris | 99.44% | 0.62 | 99.95% | 1.00 | 99.99% | 1.00 |
| FTP-BruteForce | 68.76% | 0.75 | 100.00% | 0.79 | 100.00% | 1.00 |
| Infilteration | 36.15% | 0.08 | 62.66% | 0.04 | 39.58% | 0.43 |
| SQL Injection | 49.34% | 0.30 | 25.00% | 0.22 | 41.44% | 0.00 |
| SSH-Bruteforce | 99.99% | 1.00 | 99.93% | 1.00 | 100.00% | 1.00 |
| Weighted Average | 90.28% | 0.94 | 71.92% | 0.80 | 96.90% | 0.98 |
| Prediction Time (µs) | 24.17 | | 17.29 | | 27.28 | |
Table 12 presents the detection results on the NF-CSE-CIC-IDS2018-v2 dataset.
The ML model improves the DR of most attacks in the dataset, achieving an
accuracy of 96.90% and an F1 score of 0.98. Most attacks are almost fully
detected, with DRs between 99% and 100%. However, the detection of certain
attack types, such as Brute Force, DDoS attack-HOIC, infiltration, and SQL
injection, remains unreliable with the extended NetFlow feature set; their F1
scores are low due to a high number of false positives. Overall, the model's
performance on NF-CSE-CIC-IDS2018-v2 is superior to that on the
CSE-CIC-IDS2018 and NF-CSE-CIC-IDS2018 datasets, at the cost of an increased
prediction time of 27.28 µs compared to 24.17 µs and 17.29 µs, respectively.
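The combination of a decent DR and a near-zero F1 score noted above arises when precision collapses under class imbalance. A small worked example with toy counts (not taken from the table):

```python
# A rare attack class: 100 true positives, all detected (DR = 100%), but the
# model also mislabels 20,000 majority-class flows as this attack.
tp, fn, fp = 100, 0, 20_000
dr = tp / (tp + fn)                        # detection rate (recall) = 1.0
precision = tp / (tp + fp)                 # ~0.005: swamped by false positives
f1 = 2 * precision * dr / (precision + dr)
```

This is why DR alone can look reassuring while the F1 score exposes an unusable detector for minority classes.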
Table 13: NF-UQ-NIDS-v2 multi-class classification results

| Class Name | NF-UQ-NIDS DR | NF-UQ-NIDS F1 Score | NF-UQ-NIDS-v2 DR | NF-UQ-NIDS-v2 F1 Score |
|---|---|---|---|---|
| Analysis | 69.63% | 0.21 | 78.43% | 0.24 |
| Backdoor | 90.95% | 0.92 | 89.61% | 0.93 |
| Benign | 71.70% | 0.83 | 93.45% | 0.96 |
| Bot | 100.00% | 1.00 | 100.00% | 1.00 |
| Brute Force | 99.94% | 0.85 | 98.16% | 0.74 |
| DoS | 55.54% | 0.62 | 99.46% | 1.00 |
| Exploits | 80.65% | 0.81 | 85.16% | 0.84 |
| Fuzzers | 63.24% | 0.54 | 80.58% | 0.84 |
| Generic | 58.90% | 0.61 | 85.41% | 0.88 |
| Infilteration | 60.57% | 0.03 | 21.62% | 0.19 |
| Reconnaissance | 88.96% | 0.88 | 98.24% | 0.76 |
| Shellcode | 83.89% | 0.15 | 89.35% | 0.34 |
| Theft | 87.22% | 0.15 | 81.66% | 0.22 |
| Worms | 52.97% | 0.46 | 87.20% | 0.71 |
| DDoS | 77.08% | 0.69 | 99.43% | 1.00 |
| Injection | 40.58% | 0.50 | 90.03% | 0.90 |
| MITM | 57.99% | 0.10 | 35.97% | 0.43 |
| Password | 30.79% | 0.27 | 97.09% | 0.97 |
| Ransomware | 90.85% | 0.85 | 96.82% | 0.87 |
| Scanning | 39.67% | 0.08 | 97.36% | 0.98 |
| XSS | 30.80% | 0.21 | 95.72% | 0.95 |
| Weighted Average | 70.81% | 0.79 | 96.93% | 0.97 |
| Prediction Time (µs) | 14.74 | | 25.67 | |
Table 13 compares the attack detection results of the merged NIDS dataset,
NF-UQ-NIDS-v2, with those of its predecessor, NF-UQ-NIDS. The DR of most
attacks increases with the extended NetFlow feature set; the detection of
DoS, Generic, Worms, DDoS, Injection, Password, Scanning, and XSS attacks
improves significantly. However, attacks such as infiltration and MITM are
detected less accurately, and the time consumed to predict a single test
sample rises from 14.74 µs to 25.67 µs. The increase in accuracy from 70.81%
to 96.93% and in F1 score from 0.79 to 0.97 confirms the enhanced detection
capabilities of the ML model when applied to the extended NetFlow feature set
across 20 attack types collected over several network environments.
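Merging several NIDS datasets into one, as done for NF-UQ-NIDS-v2, is only possible because all of them share a single feature schema. A minimal sketch of such a merge, using hypothetical records and a shortened feature list rather than the full 43-feature set:

```python
def merge_on_common_features(datasets, feature_set, source_names):
    """Concatenate per-dataset records, keeping only the shared NetFlow
    features and tagging each record with its source dataset."""
    merged = []
    for records, name in zip(datasets, source_names):
        for rec in records:
            row = {f: rec[f] for f in feature_set}  # drop proprietary columns
            row["Dataset"] = name                   # provenance tag
            merged.append(row)
    return merged

# Toy records standing in for two NetFlow datasets; the feature names below
# are a small illustrative subset, not the full standard set of Table 2
common = ["IN_BYTES", "OUT_BYTES", "PROTOCOL", "Label"]
ds_a = [{"IN_BYTES": 100, "OUT_BYTES": 40, "PROTOCOL": 6,
         "Label": "Benign", "EXTRA": 1}]   # EXTRA: a proprietary column to drop
ds_b = [{"IN_BYTES": 900, "OUT_BYTES": 10, "PROTOCOL": 17, "Label": "DDoS"}]
merged = merge_on_common_features([ds_a, ds_b], common,
                                  ["NF-BoT-IoT-v2", "NF-ToN-IoT-v2"])
```

Keeping a provenance column allows per-source evaluation of a classifier trained on the merged data, which is the generalisation test the paper advocates.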
Figure 4: Multi-class classification F1 score
Overall, the proposed NetFlow feature set significantly improves the
multi-class classification performance on the datasets, as displayed in
Figure 4, where the F1 score is plotted on the y-axis and the datasets in
their three feature sets on the x-axis. The detection performance is often
comparable to that of the original feature sets and remarkably superior to
the basic NetFlow feature set. Hence, the generated datasets enjoy the
benefits of a standard, common NetFlow feature set together with enhanced
detection performance. This motivates the use of the proposed feature set in
future NIDS datasets and encourages researchers to release their datasets in
the proposed format for efficient and reliable ML experiments.
## 5 Conclusion
This paper proposes a NetFlow based standard feature set for NIDS datasets, as
listed in Table 2. The importance of having a standard feature set allows the
reliable evaluation of ML-based NIDS across multiple datasets, network
environments, and attack scenarios. Moreover, the use of a standard feature
set allows multiple NIDS datasets to be merged, leading to a larger variety of
labelled datasets. As part of the proposed feature set evaluation, five new
NIDS datasets have been generated from existing NIDS benchmark datasets. These
new dataset variants have been made publicly available to the research
community. Our evaluation based on an Extra Tree classifier has shown that our
NetFlow-based feature set with 43 features achieves a higher classification
performance (F1-Score) than the proprietary feature sets, for all the
considered benchmark NIDS datasets, for both binary and multi-class
classification scenarios.
The proposed NetFlow-based feature sets have the further advantage of being
highly practical and scalable, due to the wide availability of efficient
NetFlow exporters and collectors. The key benefit of having a standard feature
set for NIDS datasets, and the key contribution of this paper, is the ability
to more rigorously and reliably evaluate ML-based traffic classifiers across a
wide range of datasets, and hence a wider range of attack types, network
topologies, etc. This allows the evaluation of how well these ML-based NIDSs
can generalise from the dataset they have been trained on, to other network
scenarios. We believe the inability to perform such thorough and rigorous
evaluation is one of the reasons for the limited deployment of ML-based NIDSs
in practical network settings. Therefore, we believe the contributions of this
paper can provide a step towards bridging the gap between academic research on
ML-based NIDSs and their practical deployment.
# Optically induced Kondo effect in semiconductor quantum wells
I. V. Iorsh (1,2) and O. V. Kibis (2) (Oleg.Kibis(c)nstu.ru)
(1) Department of Physics and Engineering, ITMO University, Saint-Petersburg, 197101, Russia
(2) Department of Applied and Theoretical Physics, Novosibirsk State Technical University, Karl Marx Avenue 20, Novosibirsk 630073, Russia
###### Abstract
It is demonstrated theoretically that the circularly polarized irradiation of
two-dimensional electron systems can induce the localized electron states
which antiferromagnetically interact with conduction electrons, resulting in
the Kondo effect. Conditions of experimental observation of the effect are
discussed for semiconductor quantum wells.
## I Introduction
In 1964, Jun Kondo in his pioneering article kondo1964resistance suggested
the physical mechanism responsible for the minimum in the temperature
dependence of the resistivity of noble divalent metals, which had remained a
mystery for more than three decades de1934electrical . Within the developed
theory, he showed that the antiferromagnetic interaction between the spins of
conduction electrons and electrons localized on magnetic impurities leads to
$\log(T)$ corrections to the relaxation time of conduction electrons (the
Kondo effect). The subsequent studies on the subject wilson1975renormalization
; Wiegmann_1981 ; Andrei_1983 demonstrated that the physics of the Kondo
effect is universal, describing the transformation of the ground state of
various many-body systems over a broad range of energies. Particularly, the
transformation is characterized by a single energy scale $T_{K}$ (the Kondo
temperature) and can be effectively treated by the powerful methods of
renormalization group theory. Therefore, the Kondo problem is currently
considered an effective testing ground for solving many challenging many-body
problems, including heavy-fermion materials, high-temperature superconductors,
etc. steglich1979superconductivity ; andres19754 ; tsunetsugu1993phase ;
RevModPhys.56.755 .
The Kondo temperature is defined by the Coulomb repulsion of the impurity
atoms, hybridization of the conduction and impurity electrons, and other
condensed-matter parameters which are fixed in bulk materials but can be
effectively tuned in nanostructures. Since the first observation of the
tunable Kondo effect in such nanostructures as quantum dots goldhaber1998kondo
, it has attracted enormous attention from the research community
cronenwett1998tunable ; iftikhar2018tunable ; park2002coulomb ; Borzenets_2020
. While the tuning of the Kondo temperature in nanostructures is usually
achieved by stationary fields (produced, e.g., by the gate voltage
iftikhar2018tunable ), all their physical properties can also be effectively
controlled by optical methods. Particularly, it has been demonstrated
that the resonant laser driving of the impurity spins (such as quantum dot
spins or single atom spins in the optical lattices) allows for the control
over the onset and destruction of the Kondo resonance Latta_2011 ; Haupt_2013
; Tureci_2011 ; Sbierski_2013 ; Nakagawa_2015 . An alternative optical way of
controlling the Kondo effect could be based on the modification of electronic
properties by an off-resonant high-frequency electromagnetic field (the
Floquet engineering Basov_2017 ; Oka_2019 ), which has become an established
research area of modern physics and has led to many fundamental effects in
various nanostructures Goldman_2014 ; Bukov_2015 ; Lindner_2011 ; Savenko_2012
; Iorsh_2017 ; Kibis_2016 ; Kibis_2017 ; Kozin_2018 ; Kozin_2018_1 ;
Rechtsman_2013 ; Wang_2013 ; Glazov_2014 ; Torres_2014 ; Sentef_2015 ;
Sie_2015 ; Cavalleri_2020 . Since the frequency of the off-resonant field lies
far from characteristic resonant frequencies of the electron system, the field
cannot be absorbed and only “dresses” electrons (dressing field), changing
their physical characteristics. Particularly, it was demonstrated recently
that a high-frequency circularly polarized dressing field crucially modifies
the interaction of two-dimensional (2D) electron systems with repulsive
scatterers, inducing the attractive area in the core of a repulsive potential
Kibis_2019 . As a consequence, the light-induced electron states localized at
repulsive scatterers appear Kibis_2020 ; Iorsh_2020 . Since such localized
electron states are immersed into the continuum of conduction electrons and
interact with them antiferromagnetically, the Kondo effect can exist. The
present article is dedicated to theoretical analysis of this optically induced
effect for 2D electron gas in semiconductor quantum wells (QWs).
The article is organized as follows. In the second section, the model of Kondo
effect based on the dressing field approach is developed. The third section is
dedicated to the analysis of the Kondo resonance and the Kondo temperature in
QWs. The last two sections contain conclusion and acknowledgements.
## II Model
For definiteness, let us consider a semiconductor QW of the area $S$ in the
plane $x,y$, which is filled by 2D gas of conduction electrons with the
effective mass $m_{e}$ and irradiated by a circularly polarized
electromagnetic wave incident normally to the $x,y$ plane. Then the behavior
of a conduction electron near a scatterer with the repulsive potential
$U(\mathbf{r})$ is described by the time-dependent Hamiltonian $\hat{\cal
H}_{e}(t)=(\hat{\mathbf{p}}-e\mathbf{A}(t)/c)^{2}/2m_{e}+U(\mathbf{r})$, where
$\hat{\mathbf{p}}$ is the plane momentum operator of conduction electron,
$\mathbf{r}=(x,y)$ is the plane radius vector of the electron,
$\mathbf{A}(t)=(A_{x},A_{y})=[cE_{0}/\omega_{0}](\sin\omega_{0}t,\,\cos\omega_{0}t)$
(1)
is the vector potential of the wave, $E_{0}$ is the electric field amplitude
of the wave, and $\omega_{0}$ is the wave frequency. If the field frequency
$\omega_{0}$ is high enough and lies far from characteristic resonant
frequencies of the QW, this time-dependent Hamiltonian can be reduced to the
effective stationary Hamiltonian, $\hat{\cal
H}_{0}=\hat{\mathbf{p}}^{2}/2m_{e}+U_{0}(\mathbf{r})$, where
$U_{0}(\mathbf{r})=\frac{1}{2\pi}\int_{-\pi}^{\pi}U\big{(}\mathbf{r}-\mathbf{r}_{0}(t)\big{)}\,d(\omega_{0}t)$
(2)
is the repulsive potential modified by the incident field (dressed potential),
$\mathbf{r}_{0}(t)=(-r_{0}\cos\omega_{0}t,\,r_{0}\sin\omega_{0}t)$ is the
radius-vector describing the classical circular trajectory of a free electron
in the circularly polarized field (1), and
$r_{0}={|e|E_{0}}/{m_{e}\omega^{2}_{0}}$ is the radius of the trajectory
Kibis_2019 ; Kibis_2020 . In the case of the short-range scatterers
conventionally modelled by the repulsive delta potential,
$U(\mathbf{r})=u_{0}\delta(\mathbf{r}),$ (3)
the corresponding dressed potential (2) reads Kibis_2020
$U_{0}(\mathbf{r})=\frac{u_{0}\,\delta({r}-{r}_{0})}{2\pi r_{0}}.$ (4)
Thus, the circularly polarized dressing field (1) turns the repulsive delta
potential (3) into the delta potential barrier of ring shape (4), which
defines dynamics of an electron near the scatterer. As a consequence, the
bound electron states which are localized inside the area fenced by the ring-
shape barrier ($0<r<r_{0}$) appear. Certainly, such bound electron states are
quasi-stationary since they can decay via the tunnel transition through the
potential barrier (4) into the continuum of conduction electrons. As a
consequence, the energy broadening of the localized states appears. In the
following, we will restrict the analysis to the ground localized state with
the energy $\varepsilon_{0}$, the energy broadening $\Gamma_{0}$ and the wave
function $\psi_{0}(r)$ (see Fig. 1). Assuming the repulsive delta potential to
be strong enough ($\alpha=2\hbar^{2}/m_{e}u_{0}\ll 1$), the solution of the
Schrödinger problem with the stationary potential (4) can be written
approximately as Kibis_2020
$\displaystyle\varepsilon_{0}=\frac{\hbar^{2}\xi^{2}_{0}}{2m_{e}r^{2}_{0}},\,\,\,{\Gamma}_{0}=\frac{2\varepsilon_{0}\alpha^{2}}{N^{3}_{0}(\xi_{0})J_{1}(\xi_{0})},$
$\displaystyle\psi_{0}(r)=\frac{J_{0}\left({\xi_{0}r}/{r_{0}}\right)}{\sqrt{\pi}r_{0}J_{1}(\xi_{0})}\theta(r_{0}-r),$
(5)
where $J_{m}(\xi)$ and $N_{m}(\xi)$ are the Bessel functions of the first and
second kind, respectively, $\xi_{0}$ is the first zero of the Bessel function
$J_{0}(\xi)$, and $\theta(r)$ is the Heaviside function. It follows from the
theory of the dressed potential approach Kibis_2019 that the discussed model
based on the dressed potential (4) is applicable if the dressing field
frequency, $\omega_{0}$, much exceeds the characteristic frequency of the
bound electron state, $\varepsilon_{0}/\hbar$.
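A numerical sketch of Eq. (5), assuming illustrative GaAs-like parameters: the field amplitude `E0` and barrier strength `u0` below are hypothetical choices (not values from the paper), and the Bessel-function constants are hard-coded to keep the example dependency-free:

```python
import math

# Bessel-function constants at xi0 (standard tabulated values)
XI0 = 2.404825557695773     # first zero of J0
J1_XI0 = 0.519147           # J1(xi0)
Y0_XI0 = 0.510376           # N0(xi0), Bessel function of the second kind

HBAR = 1.054571817e-34      # J*s
M0 = 9.1093837015e-31       # kg, free-electron mass
E_CH = 1.602176634e-19      # C, elementary charge

def bound_state(E0, omega0, me, u0):
    """Orbit radius r0, ground-state energy eps0, and broadening Gamma0 of
    the ring-barrier problem, Eq. (5), for field amplitude E0 (V/m)."""
    r0 = E_CH * E0 / (me * omega0**2)              # classical orbit radius
    eps0 = HBAR**2 * XI0**2 / (2.0 * me * r0**2)   # ground-state energy (J)
    alpha = 2.0 * HBAR**2 / (me * u0)              # strong-barrier parameter << 1
    gamma0 = 2.0 * eps0 * alpha**2 / (Y0_XI0**3 * J1_XI0)  # tunnel broadening
    return r0, eps0, gamma0

# GaAs-like illustration: 200 GHz circularly polarized driving
me = 0.067 * M0
omega0 = 2.0 * math.pi * 200e9
r0, eps0, gamma0 = bound_state(E0=1e5, omega0=omega0, me=me, u0=3.6e-36)
```

For these numbers the orbit radius is of order a hundred nanometres and $\varepsilon_{0}\ll\hbar\omega_{0}$, so the high-frequency applicability condition of the dressed-potential approach is satisfied.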
Figure 1: Sketch of the system under consideration: (a) the optically induced
potential (4) depicted by the vertical blue line, which confines the bound
electron state (II) marked by the horizontal yellow strip and separates it
from the states of conduction electrons with the wave vectors $\mathbf{k}$
marked by the wave arrow; (b) the energy structure of singly-occupied
($\uparrow$) and doubly occupied ($\uparrow\downarrow$) electron states (II)
near the Fermi energy $\varepsilon_{F}$.
Assuming the condition $\hbar\omega_{0}\gg\varepsilon_{0}$ to be satisfied,
interaction between the localized electron state (II) and the conduction
electrons can be described by the Hamiltonian
$\displaystyle\hat{\cal H}$ $\displaystyle=$
$\displaystyle\sum_{\mathbf{k},\sigma}(\varepsilon_{\mathbf{k}}-\varepsilon_{F})\hat{c}_{\mathbf{k}\sigma}^{\dagger}\hat{c}_{\mathbf{k}\sigma}+\sum_{\sigma}(\varepsilon_{0}-\varepsilon_{F})\hat{d}_{\sigma}^{\dagger}\hat{d}_{\sigma}$
(6) $\displaystyle+$ $\displaystyle
U_{C}\hat{d}_{\uparrow}^{\dagger}\hat{d}_{\uparrow}\hat{d}_{\downarrow}^{\dagger}\hat{d}_{\downarrow}+\sum_{\mathbf{k},\sigma}T_{\mathbf{k}}\left[\hat{c}_{\mathbf{k},\sigma}^{\dagger}\hat{d}_{\sigma}+\mathrm{H.c.}\right],$
where $\varepsilon_{\mathbf{k}}=\hbar^{2}k^{2}/2m_{e}$ is the energy spectrum
of conduction electrons, $\mathbf{k}$ is the electron wave vector,
$\varepsilon_{F}$ is the Fermi energy of conduction electrons,
$\sigma=\uparrow,\downarrow$ is the spin quantum number,
$\hat{c}_{\mathbf{k},\sigma}^{\dagger}$($\hat{c}_{\mathbf{k},\sigma}$) are the
creation (annihilation) operators for conduction electron states,
$\hat{d}_{\sigma}^{\dagger}$($\hat{d}_{\sigma}$) are the creation
(annihilation) operators for the light-induced localized electron states (II),
$\displaystyle
U_{C}=e^{2}\int_{S}d^{2}\mathbf{r}\int_{S}d^{2}\mathbf{r}^{\prime}\frac{|\psi_{0}({r})|^{2}|\psi_{0}({r^{\prime}})|^{2}}{|\mathbf{r}-\mathbf{r}^{\prime}|}=\frac{\gamma
e^{2}}{\epsilon r_{0}},$ (7)
is the Coulomb interaction energy of two electrons with opposite spins
$\sigma=\uparrow,\downarrow$ in the state (II), $\epsilon$ is the permittivity
of QW, $\gamma\approx 0.8$ is the numerical constant, and $T_{\mathbf{k}}$ is
the tunneling matrix element connecting the localized electron state (II) and
the conduction electron state with the wave vector $\mathbf{k}$. Physically,
the first term of the Hamiltonian (6) describes the energy of conduction
electrons, the second term describes the energy of the localized electron in
the state (II), the third term describes the Coulomb energy shift of the
double occupied state (II), and the fourth term describes the tunnel
interaction between the conduction electrons and the localized electrons.
Assuming the tunneling to be weak enough, one can replace the matrix element
$T_{\mathbf{k}}$ with its resonant value, $|T_{\mathbf{k}}|^{2}=\Gamma_{0}/\pi
N_{\varepsilon}$, corresponding to the energy
$\varepsilon_{\mathbf{k}}=\varepsilon_{0}$, where
$N_{\varepsilon}=Sm_{e}/\pi\hbar^{2}$ is the density of conduction electron
states (see, e.g., Appendix B in Ref. Kibis_2020, ). If the localized and
delocalized (conduction) electron states are decoupled from each other
($T_{\mathbf{k}}=0$), the localized eigenstates of the Hamiltonian (6)
correspond to the singly occupied state (II) with the eigenenergy
$\varepsilon_{0}-\varepsilon_{F}$ and the doubly occupied state (II) with the
eigenenergy $2(\varepsilon_{0}-\varepsilon_{F})+U_{C}$, which are marked
schematically in Fig. 1b by the symbols $\uparrow$ and $\uparrow\downarrow$,
correspondingly. For completeness, it should be noted that the empty state
(II) is also an eigenstate of the considered Hamiltonian, with zero
eigenenergy corresponding to the Fermi level. Since the Kondo effect
originates due to the emergence of magnetic moment (spin) of a localized
electron, it appears only if the singly occupied state is filled by an
electron but the doubly occupied state is empty. Assuming the temperature to
be zero, this corresponds to the case of $\varepsilon_{0}-\varepsilon_{F}<0$
and $U_{C}>\varepsilon_{F}-\varepsilon_{0}$. Taking into account that the
characteristic energies of the considered problem, $U_{C}$ and
$\varepsilon_{0}$, depend differently on the irradiation amplitude $E_{0}$ and
frequency $\omega_{0}$, the optically induced Kondo effect can exist only in
the range of these irradiation parameters defined by the inequality
$\sqrt{\frac{\hbar^{2}\xi_{0}^{2}}{2m_{e}\varepsilon_{F}}}<r_{0}<\frac{\gamma
e^{2}}{2\epsilon\varepsilon_{F}}+\sqrt{\frac{\hbar^{2}\xi_{0}^{2}}{2m_{e}\varepsilon_{F}}+\left(\frac{\gamma
e^{2}}{2\epsilon\varepsilon_{F}}\right)^{2}}.$ (8)
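Inequality (8) can be evaluated numerically to find the window of orbit radii $r_{0}$ in which the optically induced Kondo effect can exist. The sketch below converts the Gaussian-units Coulomb term $e^{2}/\epsilon r_{0}$ to SI and assumes illustrative GaAs parameters ($\varepsilon_{F}=5$ meV, $\epsilon_{r}\approx 12.9$):

```python
import math

HBAR = 1.054571817e-34      # J*s
E_CH = 1.602176634e-19      # C
EPS0_SI = 8.8541878128e-12  # vacuum permittivity, F/m
XI0 = 2.404825557695773     # first zero of J0

def kondo_window(eps_F, me, eps_r, gamma=0.8):
    """Lower and upper bounds on r0 from inequality (8); the Gaussian-units
    term e^2/(eps*r0) is rewritten as e^2/(4*pi*eps0*eps_r*r0) in SI."""
    r_low = math.sqrt(HBAR**2 * XI0**2 / (2.0 * me * eps_F))
    b = gamma * E_CH**2 / (4.0 * math.pi * EPS0_SI * eps_r * 2.0 * eps_F)
    r_up = b + math.sqrt(r_low**2 + b**2)
    return r_low, r_up

# GaAs QW: eps_F = 5 meV, m_e = 0.067 m0, eps_r ~ 12.9 (illustrative numbers)
me = 0.067 * 9.1093837015e-31
eps_F = 5e-3 * E_CH
r_low, r_up = kondo_window(eps_F, me, 12.9)
```

For these parameters the window comes out on the order of a few tens of nanometres, which the irradiation amplitude and frequency must be tuned to reach via $r_{0}=|e|E_{0}/m_{e}\omega_{0}^{2}$.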
Mathematically, the Hamiltonian (6) is identical to the famous Anderson
Hamiltonian describing the microscopic mechanism for the magnetic moment
formation in metals anderson1961localized . Therefore, one can apply the known
Schrieffer-Wolff (SW) unitary transformation coleman2015introduction to turn
the Hamiltonian (6) into the Hamiltonian of the Kondo problem Andrei_1983 .
Assuming the condition (8) to be satisfied, we arrive at the Kondo Hamiltonian
$\hat{\cal
H}_{K}=\sum_{\mathbf{k}\sigma}(\varepsilon_{\mathbf{k}}-\varepsilon_{F})\hat{c}_{\mathbf{k}\sigma}^{\dagger}\hat{c}_{\mathbf{k}\sigma}+J\bm{\sigma}(0)\cdot\mathbf{S}_{0}-\frac{{V}}{2}\hat{\psi}_{\sigma}^{\dagger}(0)\hat{\psi}_{\sigma}(0),$
(9)
where $\hat{\psi}_{\sigma}(0)=\sum_{\mathbf{k}}\hat{c}_{\mathbf{k}\sigma}$ is
the field operator $\hat{\psi}(\mathbf{r})$ of conduction electrons at the
repulsive delta potential ($\mathbf{r}=0$),
${\hat{\bm{\sigma}}}(0)=\hat{\psi}^{\dagger}(0){\hat{\bm{\sigma}}}\hat{\psi}(0)$
is the spin density of conduction electrons at $\mathbf{r}=0$,
$\mathbf{S}_{0}=\hat{d}^{\dagger}({{\hat{\bm{\sigma}}}}/{2})\hat{d}$ is the
spin density of an electron in the localized state (II),
$\hat{\bm{\sigma}}=(\sigma_{x},\sigma_{y},\sigma_{z})$ is the spin vector
matrix, and the coupling coefficients $J$ and ${V}$ read
$\displaystyle J$ $\displaystyle=$ $\displaystyle\frac{\Gamma_{0}}{\pi
N_{\varepsilon}}\left[\frac{1}{\varepsilon_{0}-\varepsilon_{F}+U_{C}}+\frac{1}{\varepsilon_{F}-\varepsilon_{0}}\right],$
(10) $\displaystyle{V}$ $\displaystyle=$ $\displaystyle-\frac{\Gamma_{0}}{2\pi
N_{\varepsilon}}\left[\frac{1}{\varepsilon_{0}-\varepsilon_{F}+U_{C}}-\frac{1}{\varepsilon_{F}-\varepsilon_{0}}\right].$
(11)
It should be noted that the denominators in Eqs. (10)–(11) are the energy
detunings between the singly occupied and empty states,
$\varepsilon_{F}-\varepsilon_{0}$, and the singly and doubly occupied states,
$\varepsilon_{0}-\varepsilon_{F}+U_{C}$. Since excitation of the empty (doubly
occupied) state creates an electron (hole) in the Fermi sea, we will label the
corresponding detunings as $D_{e}=\varepsilon_{F}-\varepsilon_{0}$ and
$D_{h}=\varepsilon_{0}-\varepsilon_{F}+U_{C}$.
It should be stressed also that the SW transformation is only applicable to
the case of weak coupling between the localized electron state (II) and the
conduction electrons as compared to the energy difference between the ground
(singly occupied) and excited (empty and doubly occupied) localized states. As
a consequence, the Hamiltonian (9) accurately describes the asymmetric Kondo
problem under consideration for
$\Gamma_{0}\ll[\varepsilon_{F}-\varepsilon_{0},\varepsilon_{0}-\varepsilon_{F}+U_{C}]$
and, therefore, the detunings $D_{e,h}$ are assumed to meet the condition
$\Gamma_{0}\ll D_{e,h}$. Beyond the condition, the system enters the so-called
mixed-valence regime, where the localized states (II) with different occupancy
become effectively coupled riseborough2016mixed . Although the mixed-valence
regime hosts a rich class of interesting phase transitions, in the following
we will focus exclusively on the Kondo regime, where the singly occupied
ground localized state is well separated from the excited states.
## III Results and discussion
Figure 2: Effect of the circularly polarized irradiation with the frequency
$\omega_{0}/2\pi=200$ GHz and the intensity $I$ on (a) the hole and electron
detunings $D_{h,e}$ and (b) the Kondo temperature in a GaAs-based QW filled by
2D electron gas with the Fermi energy $\varepsilon_{F}=5$ meV, energy
broadening $\Gamma_{0}=0.1\varepsilon_{0}$ and the electron effective mass
$m_{e}=0.067m_{0}$ ($m_{0}$ is the free electron mass) for the zero
temperature. The green shadow areas mark the validity range of the model,
where the applicability conditions are satisfied for both the Kondo
Hamiltonian ($\Gamma_{0}\ll D_{e},D_{h}$) and the dressed potential approach
($\varepsilon_{0}\ll\hbar\omega_{0}$).
For the particular case of 2D electrons in a GaAs-based QW, the dependence of
the detunings $D_{e,h}$ on the irradiation is plotted in Fig. 2a. It should be
noted that excitations of virtual electrons and holes should be considered
within the whole conduction band of width $2D_{0}$, where $D_{0}\approx 1.5$
eV for GaAs. Since the typical Kondo temperature is essentially smaller than
the bandwidth $D_{0}$, one needs to transform the initial high-energy
Hamiltonian (9) to the low-energy range in order to find the Kondo
temperature. Such a transformation can be performed within the poor man’s
scaling renormalization approach anderson1970poor ; anderson1973kondo , which
was originally proposed by Anderson. Following Anderson, the higher-energy
excitations corresponding to the first-order processes pictured in Fig. 3a can
be integrated out from the Hamiltonian using the SW transformation. Then the
highest-energy excitations which remain in the renormalized Hamiltonian
correspond to the second-order processes pictured in Fig. 3b-c.
It should be stressed that the renormalized Hamiltonian has the same structure
as the initial one with the coupling constants depending on the renormalized
(decreased) bandwidth $D<D_{0}$, where the renormalized coupling constant
$J(D)$ diverges at some critical bandwidth $D=D_{K}$. This critical bandwidth
defines the sought Kondo temperature, $T_{K}=D_{K}$, which, particularly,
indicates the applicability limit of the perturbation theory anderson1970poor
; anderson1973kondo .
Figure 3: The diagrams illustrating all possible first order processes (a) and
the second order processes for electrons (b) and holes (c). The solid lines
depict propagators of conduction electrons and holes, the dashed lines depict
the localized spin propagator, whereas the symbols $\sigma$ and
$\alpha$($\beta$) mark the spins of localized electrons and conduction
electrons, respectively.
The only difference between the considered system and the original Kondo
problem anderson1961localized is the strong electron-hole asymmetry, since
the typical Fermi energy in GaAs-based QWs is $\varepsilon_{F}\ll D_{0}$. As
a consequence, the hole process shown in Fig. 3c cannot exist for
$D>\varepsilon_{F}$ and only the second-order process involving a virtual
electron (see Fig. 3b) should be taken into account. On the contrary, both
processes contribute to the effective coupling rescaling for the case of
$D<\varepsilon_{F}$. Applying the known general solution of the asymmetric
electron-hole Kondo problem Zitko2016 to the considered system, the flow
equations for the effective exchange constant $J^{\prime}(D)$ and the scalar
potential $V^{\prime}(D)$ can be written as
$\displaystyle\frac{1}{{\pi
N_{\varepsilon}}}\frac{\partial{J}^{\prime}}{{\partial\ln(D_{0}/D)}}=[1+\theta(\varepsilon_{F}-D)]{{J^{\prime}}^{2}-\theta(D-\varepsilon_{F}){J^{\prime}}{V^{\prime}}},$
$\displaystyle\frac{2}{{\pi
N_{\varepsilon}}}\frac{\partial{V}^{\prime}}{{\partial\ln(D_{0}/D)}}=-\theta(D-\varepsilon_{F})\left[{3{J^{\prime}}^{2}+{V^{\prime}}^{2}}\right],$
(12)
with the boundary conditions $J^{\prime},V^{\prime}|_{D=D_{0}}=J,V$.
Equations (12) should be solved in two steps, as follows. At the first step,
we consider the interval $\varepsilon_{F}\leq D\leq D_{0}$. Within
this interval, the two nonlinear differential equations (12) can be solved
analytically (see, e.g., Ref. Zitko2016, for details) and result in the
boundary condition
$\displaystyle J^{\prime}(\varepsilon_{F})=$ (13)
$\displaystyle=\frac{J}{\left[1+\frac{\pi
N_{\varepsilon}}{2}\ln\frac{D_{0}}{\varepsilon_{F}}(J+V)\right]\left[1-\frac{\pi
N_{\varepsilon}}{2}\ln\frac{D_{0}}{\varepsilon_{F}}(3J-V)\right]}.$
In the second step, we consider the interval $D\leq\varepsilon_{F}$. Within
this interval, the scalar potential $V^{\prime}$ is constant and the
differential equation defining the effective exchange constant $J^{\prime}$
does not depend on the scalar potential. Therefore, the system of two
nonlinear differential equations (12) reduces to two independent
differential equations. Solving them under the boundary condition (13), one
can find the effective exchange coupling constant $J^{\prime}(D)$. Taking into
account that the coupling constant diverges at the critical (Kondo) bandwidth
$D=D_{K}$, we arrive at the Kondo temperature
$\displaystyle T_{K}=\varepsilon_{F}\exp\left[-\frac{1}{2\pi
N_{\varepsilon}J^{\prime}(\varepsilon_{F})}\right].$ (14)
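The two-step construction can be condensed into a few lines of arithmetic: rescale the couplings to the Fermi energy via Eq. (13), then evaluate Eq. (14) with $\varepsilon_{F}$ as the effective bandwidth. The parameter values below are assumptions for illustration only:

```python
import math

# Illustrative (assumed) parameters -- not the paper's actual values.
N_eps, J0, V0, D0, eF = 0.05, 0.3, 0.1, 1.0, 0.1

# Step 1: J'(eF) from the analytic boundary condition, Eq. (13).
a = 0.5 * math.pi * N_eps * math.log(D0 / eF)
J_eF = J0 / ((1 + a * (J0 + V0)) * (1 - a * (3 * J0 - V0)))

# Step 2: Kondo temperature, Eq. (14); the Fermi energy plays the role
# of the effective bandwidth for the second stage of the flow.
T_K = eF * math.exp(-1.0 / (2 * math.pi * N_eps * J_eF))

# Symmetric (half-filled band) limit, quoted after Eq. (14) for eF = D0.
T_K_sym = D0 * math.exp(-1.0 / (2 * math.pi * N_eps * J0))
```

The resulting $T_{K}$ is exponentially small compared with $\varepsilon_{F}$, consistent with the weak-coupling form of Eq. (14).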
As expected, Eq. (14) reduces to the known expression for the Kondo temperature
$T_{K}=D_{0}\exp[-1/2J\pi N_{\varepsilon}]$ corresponding to the symmetric
Kondo problem anderson1970poor if $\varepsilon_{F}=D_{0}$ (the particular
case of the half-filled band). The dependence of the Kondo temperature on the
irradiation is plotted in Fig. 2b, where the Kondo temperature is found to be
of the order of several kelvin. In the theory developed above, the field frequency,
$\omega_{0}$, was assumed to satisfy the high-frequency condition
$\omega_{0}\tau\gg 1$, where $\tau$ is the mean free time of conduction
electrons in the QW. To satisfy this condition for modern QWs and keep the
irradiation intensity $I$ reasonable, we chose the field frequency for
our calculations (see Fig. 2) near the upper limit of the microwave range,
$\omega_{0}/2\pi=200$ GHz, which can be easily realized in experiments. It
should also be noted that the present theory is developed for the case of a
symmetric QW, although effects in asymmetric QWs are also studied actively
(see, e.g., Refs. Stavrou_2001; Arulmozhi_2020). In such asymmetric QWs,
particularly, there is the Rashba spin-orbit coupling which can lead to the
exponential increase of the Kondo temperature Wong_2016.
To observe the discussed effect experimentally, the known contribution of the
Kondo resonance to the electron mean free time kondo1964resistance,
$\displaystyle 1/\tau\sim J^{4}\left[\frac{1}{\pi
N_{\varepsilon}J}+2\ln\frac{D_{0}}{T}\right]^{2},$ (15)
can be used. Indeed, the Kondo temperature (14) found above corresponds to the
minimum of the contribution (15) as a function of the temperature $T$. Since
all electron transport phenomena depend on the electron mean free time $\tau$,
this minimum can be detected in various transport experiments (e.g.,
conductivity measurements). In order to exclude effects arising from the
irradiation-induced heating of the electron gas, a differential scheme based on
using both a circularly polarized field and a linearly polarized one can be
applied. Indeed, the heating does not depend on the field polarization,
whereas the electron states bound at repulsive scatterers — and the related
Kondo effect, respectively — can be induced only by a circularly polarized
field Kibis_2019.
## IV Conclusion
We showed within the Floquet theory that a circularly polarized
electromagnetic field irradiating a two-dimensional electron system can induce
the localized electron states which antiferromagnetically interact with
conduction electrons. As a consequence, the Kondo effect appears. For
semiconductor quantum wells irradiated by a microwave electromagnetic wave of
intensity $\sim$kW/cm$^{2}$, the Kondo temperature is found to be of the order
of several kelvin and, therefore, the effect can be detected in state-of-the-art
transport measurements.
## V Acknowledgements
The reported study was funded by the Russian Science Foundation (project
20-12-00001).
## References
* (1) Kondo J 1964 Resistance Minimum in Dilute Magnetic Alloys Prog. Theor. Phys. 32 37
* (2) De Haas W J, De Boer J and Van den Berg G J 1934 The electrical resistance of gold, copper and lead at low temperatures Physica 1 1115
* (3) Wilson K G 1975 The renormalization group: Critical phenomena and the Kondo problem Rev. Mod. Phys. 47 773
* (4) Fateev V A and Wiegmann P B 1981 The exact solution of the s-d exchange model with arbitrary impurity spin S (Kondo problem) Phys. Lett. A 81 179
* (5) Andrei N, Furuya K and Lowenstein J H 1983 Solution of the Kondo problem Rev. Mod. Phys. 55 331
* (6) Steglich F, Aarts J, Bredl C D, Lieke W, Meschede D, Franz W and Schäfer H 1979 Superconductivity in the Presence of Strong Pauli Paramagnetism: CeCu2Si2 Phys. Rev. Lett. 43 1892
* (7) Andres K, Graebner J E and Ott H R 1975 4f-Virtual-Bound-State Formation in CeAl3 at Low Temperatures Phys. Rev. Lett. 35 1779
* (8) Tsunetsugu H, Sigrist M and Ueda K 1993 Phase diagram of the one-dimensional Kondo-lattice model Phys. Rev. B 47 8345
* (9) Stewart G R 1984 Heavy-fermion systems Rev. Mod. Phys. 56 755
* (10) Kouwenhoven L and Glazman L 2001 Revival of the Kondo effect Phys. World 14(1) 33
* (11) Goldhaber-Gordon D, Göres J, Kastner M A, Shtrikman H, Mahalu D and Meirav U 1998 From the Kondo Regime to the Mixed-Valence Regime in a Single-Electron Transistor Phys. Rev. Lett. 81 5225
* (12) Cronenwett S M, Oosterkamp T H and Kouwenhoven L P 1998 A Tunable Kondo Effect in Quantum Dots Science 281 540
* (13) Iftikhar Z, Anthore A, Mitchell A, Parmentier F, Gennser U, Ouerghi A, Cavanna A, Mora C, Simon P and Pierre F 2018 Tunable quantum criticality and super-ballistic transport in a “charge” Kondo circuit Science 360 1315
* (14) Park J, Pasupathy A N, Goldsmith J I, Chang C, Yaish Y, Petta J R, Rinkoski M, Sethna J P, Abruna H D, McEuen P L and Ralph D C 2002 Coulomb blockade and the Kondo effect in single-atom transistors Nature 417 722
* (15) Borzenets I V, Shim J, Chen J C H, Ludwig A, Wieck A D, Tarucha S, Sim H-S and Yamamoto M 2020 Observation of the Kondo screening cloud Nature 579 210
* (16) Latta C, Haupt F, Hanl M, Weichselbaum A, Claassen M, Wuester W, Fallahi P, Faelt S, Glazman L, von Delft J, Türeci H E and Imamoglu A 2011 Quantum quench of Kondo correlations in optical absorption Nature 474 627
* (17) Haupt F, Smolka S, Hanl M, Wüster W, Miguel-Sanchez J, Weichselbaum A, von Delft J and Imamoglu A 2013 Nonequilibrium dynamics in an optical transition from a neutral quantum dot to a correlated many-body state Phys. Rev. B 88 161304
* (18) Türeci H E, Hanl M, Claassen M, Weichselbaum A, Hecht T, Braunecker B, Govorov A, Glazman L, Imamoglu A and von Delft J 2011 Many-Body Dynamics of Exciton Creation in a Quantum Dot by Optical Absorption: A Quantum Quench towards Kondo Correlations Phys. Rev. Lett. 106 107402
* (19) Sbierski B, Hanl M, Weichselbaum A, Türeci H E, Goldstein M, Glazman L I, von Delft J and Imamoglu A 2013 Proposed Rabi-Kondo Correlated State in a Laser-Driven Semiconductor Quantum Dot Phys. Rev. Lett. 111 157402
* (20) Nakagawa M and Kawakami N 2015 Laser-Induced Kondo Effect in Ultracold Alkaline-Earth Fermions Phys. Rev. Lett. 115 165303
* (21) Basov D N, Averitt R D and Hsieh D 2017 Towards properties on demand in quantum materials Nat. Mater. 16 1077
* (22) Oka T and Kitamura S 2019 Floquet Engineering of Quantum Materials Ann. Rev. Cond. Matt. Phys. 10 387
* (23) Goldman N and Dalibard J 2014 Periodically Driven Quantum Systems: Effective Hamiltonians and Engineered Gauge Fields Phys. Rev. X 4 031027
* (24) Bukov M, D’Alessio L and Polkovnikov A 2015 Universal high-frequency behavior of periodically driven systems: From dynamical stabilization to Floquet engineering Adv. Phys. 64 139
* (25) Lindner N H, Refael G and Galitski V 2011 Floquet topological insulator in semiconductor quantum wells Nat. Phys. 7 490
* (26) Savenko I G, Kibis O V and Shelykh I A 2012 Asymmetric quantum dot in a microcavity as a nonlinear optical element Phys. Rev. A 85 053818
* (27) Rechtsman M C, Zeuner J M, Plotnik Y, Lumer Y, Podolsky D, Dreisow F, Nolte S, Segev M and Szameit A 2013 Photonic Floquet topological insulator Nature 496 196
* (28) Wang Y H, Steinberg H, Jarillo-Herrero P and Gedik N 2013 Observation of Floquet-Bloch states on the surface of a topological insulator Science 342 453
* (29) Glazov M M and Ganichev S D 2014 High frequency electric field induced nonlinear effects in graphene Phys. Rep. 535 101
* (30) Usaj G, Perez-Piskunow P M, Foa Torres L E F and Balseiro C A 2014 Irradiated graphene as a tunable Floquet topological insulator Phys. Rev. B 90 115423
* (31) Sentef M A, Claassen M, Kemper A F, Moritz B, Oka T, Freericks J K and Devereaux T P 2015 Theory of Floquet band formation and local pseudospin textures in pump-probe photoemission of graphene Nat. Commun. 6 7047
* (32) Sie E J, McIver J W, Lee Y-H, Fu L, Kong J and Gedik N 2015 Valley-selective optical Stark effect in monolayer WS2 Nat. Mater. 14 290
* (33) Dini K, Kibis O V and Shelykh I A 2016 Magnetic properties of a two-dimensional electron gas strongly coupled to light Phys. Rev. B 93 235411
* (34) Kibis O V, Dini K, Iorsh I V and Shelykh I A 2017 All-optical band engineering of gapped Dirac materials Phys. Rev. B 95 125401
* (35) Iorsh I V, Dini K, Kibis O V and Shelykh I A 2017 Optically induced Lifshitz transition in bilayer graphene Phys. Rev. B 96 155432
* (36) Kozin V K, Iorsh I V, Kibis O V and Shelykh I A 2018 Quantum ring with the Rashba spin-orbit interaction in the regime of strong light-matter coupling Phys. Rev. B 97 155434
* (37) Kozin V K, Iorsh I V, Kibis O V and Shelykh I A 2018 Periodic array of quantum rings strongly coupled to circularly polarized light as a topological insulator Phys. Rev. B 97 035416
* (38) McIver J W, Schulte B, Stein F-U, Matsuyama T, Jotzu G, Meier G and Cavalleri A 2020 Light-induced anomalous Hall effect in graphene Nat. Phys. 16 38
* (39) Kibis O V 2019 Electron pairing in nanostructures driven by an oscillating field Phys. Rev. B 99 235416
* (40) Kibis O V, Boev M V and Kovalev V M 2020 Light-induced bound electron states in two-dimensional systems: Contribution to electron transport Phys. Rev. B 102 075412
* (41) Kibis O V, Kolodny S A and Iorsh I V 2021 Fano resonances in optical spectra of semiconductor quantum wells dressed by circularly polarized light Opt. Lett. 46 50
* (42) Anderson P W 1961 Localized Magnetic States in Metals Phys. Rev. 124 41
* (43) Coleman P 2015 Introduction to many-body physics (Cambridge: University Press)
* (44) Riseborough P S and Lawrence J M 2016 Mixed Valent Metals Rep. Prog. Phys. 79 084501
* (45) Anderson P W 1970 A poor man’s derivation of scaling laws for the Kondo problem J. Phys. C: Solid State Phys. 3 2436
* (46) Žitko R and Horvat A 2016 Kondo effect at low electron density and high particle-hole asymmetry in 1D, 2D, and 3D Phys. Rev. B 94 125138
* (47) Anderson P W 1973 Kondo effect Comments on Solid State Phys. 5 73
* (48) Stavrou V N, Babiker M and Bennett C R 2001 Influences of asymmetric quantum wells on electron-phonon interactions J. Phys.: Condens. Matter 13 6489
* (49) Arulmozhi R, John Peterb A and Lee C W 2020 Optical absorption in a CdS/CdSe/CdS asymmetric quantum well Chem. Phys. Lett. 742 137129
* (50) Wong A, Ulloa S E, Sandler N and Ingersent K 2016 Influence of Rashba spin-orbit coupling on the Kondo effect Phys. Rev. B 93 075148
11institutetext: Institute of Theoretical Astrophysics, University of Oslo,
P.O. Box 1029 Blindern, N-0315 Oslo, Norway 22institutetext: Rosseland Centre
for Solar Physics, University of Oslo, P.O. Box 1029 Blindern, N-0315 Oslo,
Norway
# Signatures of ubiquitous magnetic reconnection in the deep atmosphere of
sunspot penumbrae
Luc H. M. Rouppe van der Voort 1122 Jayant Joshi 1122 Vasco M. J. Henriques
1122 Souvik Bose 1122
(submitted to A&A December 18, 2020 / accepted January 26, 2021)
###### Abstract
Context. Ellerman bombs are regions with enhanced Balmer line wing emission
and mark magnetic reconnection in the deep solar atmosphere in active regions
and quiet Sun. They are often found in regions where opposite magnetic
polarities are in close proximity. Recent high resolution observations suggest
that Ellerman bombs are more prevalent than previously thought.
Aims. We aim to determine the occurrence of Ellerman bombs in the penumbra of
sunspots.
Methods. We analyze high spatial resolution observations of sunspots in the
Balmer H $\alpha$ and H $\beta$ lines as well as auxiliary continuum channels
obtained with the Swedish 1-m Solar Telescope and apply the $k$-means
clustering technique to systematically detect and characterize Ellerman Bombs.
Results. Features with all the defining characteristics of Ellerman bombs are
found in large numbers over the entire penumbra. The true prevalence of these
events is only fully appreciated in the H $\beta$ line owing to its higher
spatial resolution and lower chromospheric opacity. We find that the penumbra hosts
some of the highest Ellerman bomb densities, only surpassed by the moat in the
immediate surroundings of the sunspot. Some penumbral Ellerman bombs show
flame morphology and rapid dynamical evolution. Many penumbral Ellerman bombs
are fast moving with a typical speed of 3.7 km s$^{-1}$ and sometimes more than
10 km s$^{-1}$. Many penumbral Ellerman bombs migrate from the inner to the outer
penumbra over hundreds of km and some continue moving beyond the outer
penumbral boundary into the moat. Many penumbral Ellerman bombs are found in
the vicinity of regions with opposite magnetic polarity.
Conclusions. We conclude that reconnection is a near continuous process in the
low atmosphere of the penumbra of sunspots as manifest in the form of
penumbral Ellerman bombs. These are so prevalent that they may be a major sink
of sunspot magnetic energy.
###### Key Words.:
Sun: activity – Sun: atmosphere – Sun: magnetic fields – sunspots – Magnetic
reconnection
## 1 Introduction
Magnetic reconnection is a fundamental process in magnetized astrophysical
plasmas for which magnetic energy is dissipated and converted into heat. In
the lower solar atmosphere, the hydrogen Balmer lines provide effective
tracers of reconnection sites as they exhibit remarkable enhanced emission in
their extended line wings as result of localised heating. This phenomenon of
enhanced wing emission, referred to as Ellerman “bombs” (EBs, Ellerman, 1917),
is particularly pronounced in emerging active regions with vigorous magnetic
flux emergence. At locations where opposite polarities are in close proximity
(i.e., at the polarity inversion line), EBs appear as subarcsecond sized
brightenings in H $\alpha$ line wing (see, e.g., Georgoulis et al., 2002;
Pariat et al., 2004; Fang et al., 2006; Pariat et al., 2007; Matsumoto et al.,
2008; Watanabe et al., 2008) and H $\beta$ line wing (Libbrecht et al., 2017;
Joshi et al., 2020) images. The fact that the enhancement is only in the wings
and that the EBs are invisible in the H $\alpha$ line-core locate the height
of the reconnection below the chromospheric canopy of fibrils (Watanabe et
al., 2011; Vissers et al., 2013; Nelson et al., 2013b). When observed from an
inclined observing angle, sufficiently away from the center of the solar disk,
and at sufficient spatial resolution, H $\alpha$ wing images show EBs as tiny
(1–2 Mm), bright, upright flames that flicker rapidly on a time scale of
seconds (Watanabe et al., 2011; Rutten et al., 2013; Nelson et al., 2015).
There is considerable spread in EB lifetimes but they rarely live longer than
a few minutes. We refer to Rutten et al. (2013) and Vissers et al. (2019) for
recent reviews of observational EB properties and their visibility in
different spectral diagnostics.
Traditionally, EBs have been associated with strong magnetic field
environments and therefore regarded as a typical active region phenomenon.
This view changed when Rouppe van der Voort et al. (2016) and later Shetye et
al. (2018) reported the existence of tiny ($\lesssim
0\aas@@fstack{\prime\prime}5$) Ellerman-like brightenings in quiet Sun when
observed at extremely high spatial resolution. Nelson et al. (2017) found
cases of quiet Sun EBs (QSEBs) that were also visible in UV channels,
suggesting that at least some QSEBs are energetic enough to become detectable
in higher energy diagnostics. New high spatial resolution quiet Sun H $\beta$
observations presented by Joshi et al. (2020) show that QSEBs are much more
ubiquitous than the lower spatial resolution H $\alpha$ observations
suggested. The shorter wavelength H $\beta$ line allows for higher spatial
resolution and higher temperature sensitivity and the observations suggest
that about half a million QSEBs are present in the solar atmosphere at any
time.
The interpretation of EBs as markers of small-scale magnetic reconnection in
the lower solar atmosphere has been reinforced by the advanced numerical
simulations of Hansteen et al. (2017, 2019) and Danilovic (2017). In these
simulations, heating occurs along current sheets that extend over several
scale heights from the photosphere into the chromosphere. In synthetic H
$\alpha$ wing images, these current sheets are at the core of flame like
structures that resemble the characteristic EB flames in observations.
The sunspot penumbra is another environment in the lower solar atmosphere
where magnetic reconnection is likely to occur. In the penumbra, harboring an
“uncombed” magnetic field topology with strong magnetic fields at highly
variable inclination angles and considerable dynamic forcing from convective
flows, one may arguably expect ample occurrences of magnetic fields with
differing angles at sufficient close proximity to effectively interact and
reconnect (for reviews on the sunspot magnetic structure with strong-field
vertical spines and weaker-field horizontal inter-spines, see e.g.,
Borrero & Ichimoto, 2011; Tiwari, 2017). Scharmer et al. (2013) detected small
regions of opposite polarity in a sunspot penumbra (also see Ruiz Cobo &
Asensio Ramos, 2013; Franz & Schlichenmaier, 2013), and found that these regions
harbor convective downflows. Tiwari et al. (2015) found ample regions with
polarity opposite to the dominant sunspot polarity in a high quality Hinode
SOT/SP map.
Based on the experience that EBs are often found at the interface between
photospheric opposite polarity patches, we searched for EB signatures in high
quality H $\alpha$ and H $\beta$ sunspot observations. In particular, we
concentrated on the presence of flames in limbward observations as the
telltale EB signature. After close inspection of 13 datasets acquired over
more than a decade of observation campaigns, we conclude that EBs are
prevalent in sunspot penumbrae. The signature of penumbral EBs (PEBs) however
is often subtle and requires excellent observing quality. The H $\beta$ line
offers clearer detection than H $\alpha$, where the EB spectral
signature is often hidden by dense superpenumbral filaments. In this paper, we
present results from analysis of the best datasets.
## 2 Observations
The observations were obtained with the Swedish 1-m Solar Telescope (SST,
Scharmer et al., 2003a) on the island of La Palma, Spain. We used the CRisp
Imaging SpectroPolarimeter (CRISP, Scharmer et al., 2008) and the
CHROMospheric Imaging Spectrometer (CHROMIS) to perform imaging spectrometry
in the H $\alpha$ and H $\beta$ spectral lines. We used the standard SST data
reduction pipelines (de la Cruz Rodríguez et al., 2015; Löfdahl et al., 2018)
to process the data. This includes image restoration with the multi-object
multi-frame blind deconvolution (MOMFBD, van Noort et al., 2005) method and
the procedure for consistency across narrowband channels of Henriques (2012).
High image quality was further aided with the SST adaptive optics system
(Scharmer et al., 2003b) which has an 85-electrode deformable mirror operating
at 2 kHz.
The dataset with the best seeing conditions was recorded on 22
September 2017. During the best periods, Fried’s parameter $r_{0}$ was
above 50 cm, with a maximum of 79 cm (for a discussion of measurements of
$r_{0}$ by the SST adaptive optics system, see Scharmer et al., 2019).
Unfortunately, the seeing was not consistently of high quality and the data
set is not optimal for temporal evolution studies. Most of the analysis and
data presented in Figs. 1–5 is based on the CHROMIS and CRISP spectral scans
recorded at 10:00:48 UT. The target area was the main sunspot in AR12681 at
$(X,Y)=(-749\arcsec,-296\arcsec)$, $\mu=\cos\theta=0.54$ with $\theta$ the
observing angle. With CHROMIS, we sampled the H $\beta$ line at 32 positions
between $\pm$1.37 Å with equidistant steps of 0.074 Å around the line core and
sparser in the wings to avoid line blends. The time to complete a full H
$\beta$ scan was 11.1 s. The CHROMIS data has a pixel scale of
0$\aas@@fstack{\prime\prime}$038 and the telescope diffraction limit
($\lambda/D$) is 0$\aas@@fstack{\prime\prime}$1 at $\lambda=4861$ Å. The
CHROMIS instrument has an auxiliary wide-band (WB) channel that is equipped
with a continuum filter which is centered at 4846 Å and has a full-width at
half-maximum (FWHM) of the transmission profile of 6.5 Å. This filter covers a
spectral region that is dominated by continuum and has relatively weak
spectral lines (see Löfdahl et al., 2018, for a plot of the transmission
profile in comparison with an atlas spectrum).
With CRISP, we sampled the H $\alpha$ line at 32 positions between $\pm$1.85 Å
from the line core with equidistant steps of 0.1 Å between $-1.6$ and $+1.3$
Å. In addition, CRISP was sampling the Fe i 6301 and 6302 Å line pair in
spectropolarimetric mode, with 9 positions in Fe i 6301 and 6 positions in Fe
i 6302, avoiding the telluric blend in the red wing. Furthermore, a continuum
position was sampled between the two lines. The time to complete full scans of
the H $\alpha$ and Fe i spectral lines was 19.1 s. The pixel scale of the
CRISP data is 0$\aas@@fstack{\prime\prime}$058.
The other dataset that we analyzed in detail was observed on 29 April 2016 and
was centered on the main sunspot in AR12533 at $(X,Y)=(623\arcsec,8\arcsec)$,
$\mu=0.75$. The seeing conditions were very good for the whole 1 h 30 m
duration of the time series which started at 09:43:09 UT. The $r_{0}$ values
were averaging at about 20 cm with peaks up to 30 cm. The online material
includes movies of the temporal evolution of the sunspot. For these movies we
have applied frame selection by rejecting 32 low quality images, which
corresponds to 12% of the total of 267 time steps. The CRISP instrument was
running a program with Ca ii 8542 Å spectropolarimetry and H $\alpha$ imaging
spectrometry at a cadence of 20 s. The H $\alpha$ line was sampled at 15
positions between $\pm$1.5 Å with 0.2 Å steps between $\pm$1.2 Å. We compare H
$\alpha$ wing images with images from the CRISP 8542 Å WB channel. For CRISP,
the WB channel branches off after the prefilter so that contrary to CHROMIS,
one cannot have imaging in a clean continuum band. The prefilter has a FWHM of
9.3 Å and is centered on the Ca ii 8542 Å line. The Ca ii 8542 Å spectra were
not included in our analysis. This data was earlier analyzed by Drews & Rouppe
van der Voort (2020) to study penumbral micro jets and the co-aligned SST and
IRIS data were publicly released as described by Rouppe van der Voort et al.
(2020).
For the exploration of all data, verification of detected events, and the
study and measurement of the dynamical evolution of PEBs in the 29 April 2016
data, we made use of CRISPEX (Vissers & Rouppe van der Voort, 2012), a widget-
based graphical user interface for exploration of multi-dimensional data sets
written in the Interactive Data Language (IDL).
## 3 Methods
#### Inversions.
We have performed Milne-Eddington (ME) inversions of the Fe i line pair
observed on 22 September 2017 to infer the magnetic field vector utilizing a
parallel C++/Python implementation111https://github.com/jaimedelacruz/pyMilne
(de la Cruz Rodríguez, 2019). The magnetic field vector retrieved from the ME
inversions is an average over the formation height of the Fe i line pair. For
these lines, the response of Stokes profiles to magnetic field is maximum
around optical depth 0.1 at 5000 Å in sunspot penumbrae (e.g., see Fig. 9 of
Joshi et al., 2017).
We resolved the 180° ambiguity in our magnetic field vector measurements using
the acute angle method (Sakurai et al., 1985; Cuperman et al., 1992). The
inferred magnetic field vector in the line-of-sight frame of reference is
projected to the disk center coordinates where $B_{z}$ represents the magnetic
field component normal to the solar surface and $B_{x}$ and $B_{y}$ are the
two orthogonal components projected onto the solar surface.
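The acute angle method amounts to a per-pixel sign choice: the transverse field from the ME inversion is ambiguous by 180°, and the orientation making an acute angle with a reference field is kept. The sketch below is a minimal illustration; the function name, toy arrays, and the uniform reference field are assumptions (in practice the reference is typically a potential-field extrapolation):

```python
import numpy as np

def acute_angle_disambiguation(Bx, By, Bx_ref, By_ref):
    # Flip (Bx, By) wherever it makes an obtuse angle (negative dot
    # product) with the reference transverse field.
    flip = Bx * Bx_ref + By * By_ref < 0
    return np.where(flip, -Bx, Bx), np.where(flip, -By, By)

# Toy transverse field for three pixels; reference field points along +x.
Bx = np.array([1.0, -1.0, 0.5])
By = np.array([0.0, 0.0, -2.0])
Bx_fixed, By_fixed = acute_angle_disambiguation(Bx, By, 1.0, 0.0)
```

After disambiguation every pixel's transverse field has a non-negative projection onto the reference direction, which is the defining property of the method.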
To better resolve opposite polarity patches in the penumbra, we corrected for
stray light prior to the inversions. We assumed a Gaussian point spread
function (PSF) with FWHM of 1$\aas@@fstack{\prime\prime}$2 and 45% stray
light, following similar stray light corrections that were considered for
CRISP/SST observations in earlier studies. For example, Scharmer et al. (2011)
and Scharmer & Henriques (2012) compensated for stray light using a PSF with
FWHM of 1$\aas@@fstack{\prime\prime}$2 and 56% stray light contribution. Joshi
et al. (2011) assumed 35% stray light and a Gaussian PSF with FWHM of
1$\aas@@fstack{\prime\prime}$6. Moreover, from a detailed analysis of solar
granulation contrast, Scharmer et al. (2019) concluded that stray light at the
SST comes mainly from small-angle scattering and that the wings of the
uncorrected PSF do not extend beyond 2″.
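A common way to implement such a correction is to model the observed image as $(1-\alpha)\,I_{\mathrm{true}} + \alpha\,(I_{\mathrm{true}}\ast \mathrm{PSF})$ and divide out this transfer function in Fourier space. The sketch below is our own hedged illustration of that scheme, not the pipeline used in the paper; the FWHM (1.2″), stray-light fraction (45%), and CRISP pixel scale (0.058″) echo values quoted in the text, while the function name and the flat test image are assumptions:

```python
import numpy as np

def straylight_correct(img, fwhm_arcsec=1.2, alpha=0.45, scale=0.058):
    # Assumed model: observed = (1 - alpha) * true + alpha * (true * Gaussian).
    ny, nx = img.shape
    sigma_pix = fwhm_arcsec / scale / (2 * np.sqrt(2 * np.log(2)))
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    # Fourier transform of a unit-area Gaussian of width sigma_pix (pixels).
    gauss_ft = np.exp(-2 * np.pi**2 * sigma_pix**2 * (fx**2 + fy**2))
    otf = (1 - alpha) + alpha * gauss_ft  # never smaller than 1 - alpha
    return np.real(np.fft.ifft2(np.fft.fft2(img) / otf))

# Sanity check: a flat image is unchanged, since the model conserves flux.
flat = np.ones((64, 64))
corrected = straylight_correct(flat)
```

Because the transfer function is bounded below by $1-\alpha$, the division is numerically stable and the correction mainly boosts contrast at intermediate spatial frequencies.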
#### k-means clustering.
We used the $k$-means clustering technique (Everitt, 1972) to identify EB
spectra in the H $\beta$ spectral line observed on 22 September 2017. The
$k$-means method is widely used for the characterization of a variety of solar
phenomena and observations. Examples include the classification of Mg ii h and
k line profiles observed with IRIS (Sainz Dalda et al., 2019), the
identification of Mg ii h and k spectra in flares (Panos et al., 2018), and Ca
ii K observations of on-disk spicules (Bose et al., 2019, 2021). Our approach
for clustering the H $\beta$ spectra is very similar to that employed by Joshi
et al. (2020) and Joshi & Rouppe van der Voort (2020) to identify QSEBs in
their H $\beta$ observations. With the $k$-means method we divided H $\beta$
spectra into 100 clusters and each cluster is represented by the mean of all
profiles in that cluster. This mean profile is referred to as representative
profile (RP). Out of 100 RPs we found that 29 RPs show line wing enhancement
that is characteristic of EBs. Of these 29 selected RPs with enhanced wings,
25 RPs have essentially an unaffected line core, while the rest show an
intensity enhancement in the line core. The inclusion of the four RPs with an
enhanced line core as EB profiles is motivated by Joshi et al. (2020), who
found that, unlike typical H $\alpha$ EB profiles, some EBs can show a raised
intensity level even in the H $\beta$ line core. A detailed description of
selected RPs with EB-like H $\beta$ spectral profiles is provided in Sect. 4.
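The RP extraction step can be sketched as follows, assuming the spectral scan has been reshaped to an array of shape (n_spectra, n_wavelengths). This is a minimal Lloyd's-algorithm k-means written out explicitly, run with k = 3 on synthetic profiles rather than the paper's 100 clusters; the synthetic line shapes below are our own illustrative assumptions:

```python
import numpy as np

def kmeans_rps(spectra, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = spectra[rng.choice(len(spectra), k, replace=False)]
    for _ in range(n_iter):
        # Assign each spectrum to the nearest representative profile (RP).
        d = ((spectra[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Each RP is the mean of all profiles in its cluster.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = spectra[labels == j].mean(axis=0)
    return centers, labels

# Synthetic "spectra": a plain absorption line, an EB-like wing-enhanced
# profile, and a core-brightened profile, each repeated with noise.
wav = np.linspace(-1, 1, 32)
base = 1 - 0.6 * np.exp(-wav**2 / 0.05)
templates = [base, base + 0.5 * wav**2, base + 0.4 * np.exp(-wav**2 / 0.02)]
spectra = np.array([templates[i % 3]
                    + 0.01 * np.random.default_rng(i).standard_normal(32)
                    for i in range(60)])
rps, labels = kmeans_rps(spectra, k=3)
```

In the real analysis one would then inspect the returned RPs by eye, as in Fig. 3, and flag those with EB-like wing enhancement.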
Based on spatial locations of selected RPs, we created a binary mask which was
then used to perform two dimensional (2D) connected component labeling (Fiorio
& Gustedt, 1996), which assigns a unique label to each isolated patch in the
binary mask. We then used the labels to estimate their area, brightness
enhancement and radial distance from the geometric center of the sunspot for
each individual EB. A detailed statistical analysis of these parameters is
presented in Sect. 4.
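The labeling step can be sketched as follows, with a simple breadth-first 4-connected labeling standing in for the Fiorio & Gustedt (1996) algorithm; the toy binary mask is an assumption for illustration:

```python
import numpy as np
from collections import deque

def label_components(mask):
    # Assign a unique integer label to each 4-connected patch of True pixels.
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        current += 1
        queue = deque([(sy, sx)])
        labels[sy, sx] = current
        while queue:  # breadth-first flood fill of one patch
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

mask = np.zeros((6, 6), dtype=bool)
mask[0:2, 0:2] = True          # one 4-pixel patch
mask[4, 4] = True              # one isolated pixel
labels, n_events = label_components(mask)
areas = [int((labels == j).sum()) for j in range(1, n_events + 1)]
```

With the labels in hand, per-event quantities such as area, peak brightness enhancement, and radial distance from the sunspot center follow from masked reductions over each label.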
## 4 Results
### 4.1 PEB morphology and general appearance
Figure 1: Limb-side part of the sunspot in AR12681 observed on 22 September
2017 in H $\beta$ and H $\alpha$ blue wing and CHROMIS WB 4846 Å. PEBs are
visible as small bright features all over the penumbra, some with clear flame
morphology sticking straight up between filaments. These PEBs are invisible in
the continuum WB image. The direction to the nearest limb is approximately
upward along the $y$-axis. The top image includes six squares labelled A–F
that mark ROIs that are shown in detail in Fig. 2. An animation of this figure
is available as online material at
https://www.mn.uio.no/astro/english/people/aca/rouppe/movies/. This animation
shows a spectral scan through the H $\beta$ and H $\alpha$ lines.
Figure 2: Details of EBs in and outside the penumbra of the sunspot shown in
Fig. 1 in 6 ROIs. The spatial $X,Y$ coordinates are at the same scale as Fig.
1. The top row of panels for each ROI shows H $\beta$ and H $\alpha$ blue wing
and CHROMIS WB 4846 Å images. The bottom left panels show $\lambda x$-diagrams
of the spectral profiles along the red dotted line in the panels above. The
bottom right panel shows spectral profiles for H $\beta$ (solid black line)
and H $\alpha$ (dashed line) from the position of the red cross in the top
left panels. The thin gray profiles are reference spectral profiles averaged
over an area outside the sunspot. The intensity scaling is normalized to the
level of the far red wing of the reference profile. The red tickmark in the
bottom row panels indicates the line position of the wing images in the top
left. ROI A is centered on a strong EB outside the sunspot. ROI B is centered
on an EB at the outer edge of the penumbra. All other examples are PEBs inside
the penumbra.
Figure 1 shows the limb-side part of the 22 September 2017 sunspot in the blue
wings of H $\beta$ and H $\alpha$ as well as in CHROMIS WB 4846 Å. The offset
from line core was chosen to be close to the maximum of the typical EB profile
as to show EBs at highest contrast. Some prominent EBs are visible as
pronounced flames in the moat around the sunspot outside the penumbra. As
expected, the EBs are not visible in the continuum dominated WB image. Inside
the penumbra, there are a large number of small bright features present in the
Balmer wing images but clearest in the H $\beta$ wing image and not visible in
the WB image. Some of these appear as small linear features sticking straight
up from between the penumbral filaments, resembling the larger EB flames in
the surrounding sunspot moat.
The animation associated with Fig. 1 shows a spectral line scan through the H
$\beta$ and H $\alpha$ lines for comparison. From the animation it is evident
that in the penumbra the EB-like brightenings in the H $\beta$ wings also
persist in and close to the line core wavelength positions. However, these
compact brightenings in the H $\beta$ line core are absent in the H $\alpha$
line core which predominantly show chromospheric superpenumbral fibril
structures.
Figure 2 zooms in on 6 regions of interest (ROI). In the upper left, ROI A is
centered on the most prominent EB in the FOV, with the telltale flame towering
about 600 km above the intergranular lane from where it appears to emanate.
The CHROMIS WB image shows no trace of the EB, only some striations in the
background faculae, which are unrelated to the EB phenomenon. The $\lambda
x$-diagrams and spectral panel show the well-known characteristic EB Balmer
profile with enhanced wings and unaffected line core. The peak wing
enhancement is more than 2 times the level of the reference profile which is
averaged over a quiet region. The higher contrast and higher spatial
resolution of the H $\beta$ data compared to H $\alpha$ are clear, for example
from the fine structure and spatial variation in the $\lambda x$-diagram. The
EB in ROI A serves as reference for the EBs presented in the other ROIs.
In ROI B, a clear EB flame is located at the outer edge of the penumbra. The
vertical extension of this flame has a length of about 450 km. The other four
ROIs are all inside the penumbra and are centered on PEBs. Of these, ROI F is
centered on the tallest flame which has a length of about 350 km. The H
$\beta$ wing image shows clear substructure in the PEB while it is more an
extended fuzzy feature in the H $\alpha$ wing image. For this case, the wing
enhancement in the H $\beta$ profile is only slightly larger than in H
$\alpha$. For the PEB examples in ROIs C and D, the differences in wing
enhancement are larger, in particular in ROI C where the peak in wing
enhancement is almost as high as for the large EB in ROI A. Flame morphology
in ROI C might be difficult to discern because the PEB is aligned along the
penumbral filaments that, in this part of the penumbra, are aligned in the
direction of the nearest limb (i.e. along the line-of-sight). ROI E is
centered on a PEB that shows hardly any wing enhancement in the profile plot but
displays a clear little flame in the H $\beta$ wing image and is unmistakably
present in the $\lambda x$-diagram. While this PEB might be weak, its absence
in the WB image is striking. This weak event is detected as a PEB with the
$k$-means method.
In all of these ROIs the penumbra in WB appears smoother than in the Balmer
wing images. Particularly in the H $\beta$ wing there are many small bright
features, like bright “crumbs”, scattered over the penumbra. Some of these are
very bright and show the characteristic EB wing enhancement and are clear
PEBs. Many others show only subtle wing enhancement but are notably absent in
the WB image. To the left of the PEB, in ROI F, the red dashed line crosses
some of these “crumbs” and the $\lambda x$-diagram shows wing enhancement when
compared to their surroundings, but clearly not so much as the central PEB.
Figure 3 shows all H $\beta$ RPs that have been identified as showing EB
spectral signatures. Representative profiles (RPs) 0–24 are similar to
typical H $\alpha$ EB profiles, with enhanced wings and an essentially unaffected
line core, whereas RPs 25–28 display intensity enhancement in the line core
along with enhancement in the wings. Each detected EB in our dataset displays
a combination of RPs plotted in Fig. 3. For example, the PEB shown in the ROI
E of Fig. 2 is identified as a line core brightening and represented by a
combination of RPs 27 and 28. Similarly, a part of the PEB in ROI C is
identified as a line core brightening by RP 25. The rest of the EBs and PEBs
in Fig. 2 predominantly exhibit wing intensity enhancement in combination with
unaffected line cores and are clustered under RPs 0–24.
Besides the RPs, Fig. 3 shows a density distribution of all H $\beta$ profiles
that are included in each cluster. The density distributions are narrow and
centered around the RPs. However, in some clusters the farthest profile shows
some significant deviation from the corresponding RP. For example, in the clusters
represented by RPs 7 and 23, the farthest profiles have shapes quite different
from their respective RPs. Nevertheless, these farthest profiles
also show characteristic EB-like spectral profiles.
Figure 3: Twenty-nine representative profiles (RPs) from the $k$-means
clustering of the H $\beta$ line that are identified as signatures of EBs. The
black lines show RPs whereas shaded colored areas represent density
distribution of H $\beta$ spectra within a cluster; darker shades indicate
higher density. Within a particular cluster, the H $\beta$ profile that is
farthest (measured in Euclidean distance) from the corresponding RPs is shown
by the black dotted line. As reference, the average quiet Sun profile (gray
line) is plotted in each panel. RPs 0–24 show the typical EB-like H $\beta$
profiles, i.e., enhanced wings and unaffected line core, while RPs 25–28
display both an enhancement in the wings as well as in the line core. The
parameter $n$ represents the number of pixels in a cluster as a percentage of
the total of $\sim 1.73\times 10^{6}$ pixels.
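The detection approach described above (clustering the H $\beta$ profiles with $k$-means and flagging clusters whose RP shows EB-like enhanced wings with an unaffected line core) can be sketched in a few lines. This is a minimal illustration under assumptions, not the authors' pipeline: profiles are short lists on a common wavelength grid, and the wing/core sample positions and thresholds (`wing_boost`, `core_tol`) are invented placeholders.

```python
# Minimal sketch of the detection idea (not the authors' pipeline): cluster
# spectral profiles with k-means, then flag clusters whose representative
# profile (RP) shows EB-like enhanced wings with an unaffected line core.
# The thresholds and the quiet-Sun reference used here are illustrative only.
import random

def kmeans(profiles, k, n_iter=50, seed=0):
    """Plain k-means on equal-length intensity profiles (Euclidean distance)."""
    rng = random.Random(seed)
    centers = rng.sample(profiles, k)
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for p in profiles:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centers[i])))
            clusters[nearest].append(p)
        for i, members in enumerate(clusters):
            if members:  # recompute center as the cluster mean (the RP)
                centers[i] = [sum(col) / len(members) for col in zip(*members)]
    return centers, clusters

def is_eb_like(rp, quiet_sun, wing_boost=1.5, core_tol=0.2):
    """EB-like: both wings strongly enhanced, line core close to quiet Sun."""
    mid = len(rp) // 2  # assume the line core is the central sample
    wings = rp[0] > wing_boost * quiet_sun[0] and rp[-1] > wing_boost * quiet_sun[-1]
    core = abs(rp[mid] - quiet_sun[mid]) < core_tol * quiet_sun[mid]
    return wings and core
```

For example, with a quiet-Sun reference `[1.0, 0.7, 0.3, 0.7, 1.0]`, a profile `[2.5, 1.6, 0.32, 1.6, 2.5]` is flagged as EB-like while the reference itself is not.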
### 4.2 Magnetic field environment
Figure 4: The location of PEBs compared to the vertical magnetic field
$B_{z}$. The top left panel shows a split image of the sunspot observed on 22
September 2017 with the left part in the H $\beta$ blue wing at $-0.2$ Å
offset and the right part at $-0.6$ Å. The blue contours indicate the radial
distance $r/\mathrm{R_{spot}}$ to the umbral center that is marked with the
blue cross. The contour for $r/\mathrm{R_{spot}}=1.00$ is the outer penumbra
boundary, defined from the associated WB image. The top right panel shows the
$B_{z}$ map, derived from ME inversions of the Fe i lines, scaled between
$-400$ and $+1600$ G. Regions with artifacts due to the de-projection method
are marked in green. The sets of panels at the bottom show four ROIs in H
$\beta$ $-0.2$ Å, $-0.6$ Å, and $B_{z}$ respectively. Red contours outline
PEBs detected through the $k$-means method.
Figure 5: Distribution of EBs and
their properties with respect to radial distance from the sunspot center
(observed on 22 September 2017). The outer sunspot boundary is at
$r/\mathrm{R}_{\mathrm{spot}}=1$ and is marked with the yellow vertical line,
also see the contours in Fig. 4. The statistics are based on $k$-means
detections; the total number of EB detections is 372, of which 108 are PEBs.
The top panel shows the EB occurrence. The red curve shows the ratio of
negative magnetic polarity flux relative to total absolute flux (the sunspot
is dominated by positive polarity). The blue curve shows the fraction of
pixels with negative polarity relative to all pixels with significant magnetic
signal ($|B_{z}|>50$ G). The middle panels show the area of the EB detections.
The bottom panels show the H $\beta$ wing brightness enhancement of the
brightest pixel in the EB detection relative to the local background in a
100$\times$100 pixel area and excluding EB detection pixels. The brightness
enhancement is relative to the outermost wavelength positions on both sides
of the line center of the background H $\beta$ profile and on a scaling set by
the normalized quiet Sun reference profile. For the two bottom rows, the right
panels show occurrence histograms with the black line outlining the histograms
for PEBs. The histogram bin size is 0.003 Mm$^2$ in area and 0.12 in brightness
enhancement. The grey lines in the left panels mark the average values for
each radial distance $r/\mathrm{R}_{\mathrm{spot}}$.
In order to study the occurrence of PEBs with respect to the magnetic field in
the vicinity, we compare EB detections from the $k$-means method with the
$B_{z}$ map derived from the Fe i lines. This is illustrated in Fig. 4. The
sunspot is dominated by positive magnetic polarity but the $B_{z}$ map also
shows many small isolated patches with significant opposite (negative)
polarity within the outer penumbra boundary. We find that many PEBs are
located in the vicinity of these opposite polarity patches. This can be seen
on closer inspection of the four ROIs in the bottom of Fig. 4. Red contours outline
EB detections and there are some clear examples of PEBs that are located at or
close to the interface where opposite polarities meet. We note, however, that
we also find PEBs located in unipolar regions, for example the PEB in the
center of the lower left ROI.
As mentioned before, PEB brightenings can also be visible in and close to the
H $\beta$ line core, see the left H $\beta$ $-0.2$ Å part of Fig. 4 (top
left), that displays numerous compact brightenings in the penumbra. A number
of PEBs that can be seen in the H $\beta$ line core are shown in more detail
in the ROIs presented in the bottom of Fig. 4. For example, in the upper-left
ROI, two big PEBs at the sunspot boundary are visible in the wing as well as
close to the line core. The PEBs at
$(X,Y)=(41\farcs5,23\farcs6)$ and
$(X,Y)=(41\farcs0,22\farcs8)$ in the
upper-right ROI are predominantly visible at $-0.2$ Å while they have only
subtle brightenings in the outer H $\beta$ line wing.
The statistics shown in Fig. 5 provide a quantified context of the observation
that PEBs are often found in the vicinity of opposite polarities: the top
diagram shows that both the number of PEBs and the contribution from opposite
polarity patches increase towards the outer penumbra. Both the relative area
(blue curve) and opposite polarity flux (red curve) increase to more than 10%
at the outer penumbra boundary.
With the $k$-means clustering method, we detected a total of 372 EBs of which
108 are in the penumbra. We found no EBs in the umbra. In the inner penumbra,
$0.5\leq r/\mathrm{R}_{\mathrm{spot}}\leq 0.75$, the number density of
detected PEBs is 0.29 Mm$^{-2}$, and the fraction of the total area covered by PEBs
is 0.007 (i.e., the area filling factor). In the outer penumbra,
$0.75<r/\mathrm{R}_{\mathrm{spot}}\leq 1$, the number density is 0.76 Mm$^{-2}$,
and the area filling factor 0.032. In the immediate surroundings of the
sunspot, in the moat, $1<r/\mathrm{R}_{\mathrm{spot}}\leq 1.25$, the EB number
density is 1.72 Mm$^{-2}$, and the area filling factor 0.037. The number density of
all 372 EBs detected over the full CHROMIS FOV is 0.27 Mm$^{-2}$. For two other H
$\beta$ spectral scans of this sunspot, recorded under less optimal seeing
conditions, we find fewer but comparable numbers of EB detections: for a scan
with quiet Sun granulation contrast 15.0%, there are 304 EB detections of
which 90 are PEBs, and for a scan with 14.6% contrast, there are 252 EBs of which
75 are PEBs (the contrast for the best scan is 15.7%). So about 30% of the EBs
detected in the FOV are inside the penumbra.
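The densities and filling factors quoted above follow from simple ratios of counts and areas. A sketch with hypothetical inputs (the function names and example numbers are illustrative, not taken from the data):

```python
# Illustrative helpers (not the authors' code) for the statistics quoted above:
# number density = detections per unit area; filling factor = fraction of the
# region's area covered by EB-detection pixels.
def number_density(n_detections, region_area_mm2):
    """EB detections per Mm^2 of the region considered."""
    return n_detections / region_area_mm2

def area_filling_factor(eb_area_mm2, region_area_mm2):
    """Fraction of the region's area covered by EB pixels."""
    return eb_area_mm2 / region_area_mm2

# Hypothetical example: 20 detections covering 1.0 Mm^2 in a 50 Mm^2 annulus.
density = number_density(20, 50.0)     # 0.4 Mm^-2
fill = area_filling_factor(1.0, 50.0)  # 0.02
```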
Figure 5 further provides statistics on the area and brightness enhancement of
the EB detections. The largest PEBs are found towards the outer penumbra and
PEBs do not stand out as being smaller or larger than EBs outside the sunspot.
The mean area for PEBs is 0.039 Mm$^2$ (standard deviation $\sigma=0.055$ Mm$^2$)
and for EBs outside the sunspot 0.022 Mm$^2$ ($\sigma=0.041$ Mm$^2$). The
area distribution has a sharp cutoff at 0.0037 Mm$^2$ (five pixels), which is
set by the spatial resolution. This suggests that there exist smaller EBs that
are not resolved. Many small bright features in the H $\beta$ wings described
as “crumbs” in Sect. 4.1 were not detected by the $k$-means method. In some
cases where these features were detected, only a few of the brightest pixels
were identified as PEBs and not the whole morphological structure. Thus, these
detections also contribute to the population of PEBs with the smallest areas. We
excluded all EB events with area less than five pixels in our statistical
analysis.
Also in terms of wing brightness enhancement, shown in the bottom of Fig.
5, PEBs do not stand out compared to EBs. The average wing brightness
enhancement for PEBs is 0.72 ($\sigma=0.33$) and for EBs outside the sunspot
0.78 ($\sigma=0.35$). Here, the H $\beta$ wing enhancement was measured
against the average intensity of the outermost wavelength positions in the
local background (over an area of $100\times 100$ pixels). The majority of the
EBs, within the penumbra as well as in the surroundings of the sunspot, have a
brightness enhancement between 0.5 and 1. However, some PEBs in the outer
penumbra and some EBs in close proximity of the sunspot are brighter, and for
some, the intensity enhancement relative to the local surroundings is larger
than 2. The brightest EBs were classified as RP 0–2 (see Fig. 3) for which the
blue wing is raised to more than three times the level of the reference quiet
Sun.
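The brightness-enhancement measure used here (brightest EB pixel relative to the mean of the local background, excluding EB-detection pixels) can be sketched as follows. This is our reading of the procedure, not the authors' code; `img` (a 2D list of wing intensities) and `eb_pixels` (a set of detection coordinates) are hypothetical names, and edge handling is simplified.

```python
# Sketch of the wing brightness-enhancement measure (our reading of the text,
# not the authors' code): compare the brightest EB pixel to the mean intensity
# of a surrounding box, excluding all EB-detection pixels from the background.
def wing_enhancement(img, eb_pixels, box=100):
    """Relative enhancement of the brightest EB pixel over the local background."""
    peak_r, peak_c = max(eb_pixels, key=lambda rc: img[rc[0]][rc[1]])
    half = box // 2
    background = [
        img[r][c]
        for r in range(max(0, peak_r - half), min(len(img), peak_r + half))
        for c in range(max(0, peak_c - half), min(len(img[0]), peak_c + half))
        if (r, c) not in eb_pixels
    ]
    mean_bg = sum(background) / len(background)
    return (img[peak_r][peak_c] - mean_bg) / mean_bg
```

By this measure, an EB pixel twice as bright as a uniform background yields an enhancement of 1.0.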
### 4.3 Temporal evolution
Figure 6: Temporal evolution of PEBs in the sunspot in AR12533 observed on 29
April 2016. The top left image shows an H $\alpha$ blue wing image at $-0.8$ Å
with three regions of interest (ROI) marked with labels A, B, and C. The
temporal evolution for these ROIs is shown in the rows with smaller H $\alpha$
wing images where the time is marked in the top left. The bottom row of images
shows the temporal evolution in ROI C in WB 8542 Å for comparison. The spacing
between large tick marks in the ROI images is 1″. The contrast in the top left
overview image is enhanced by applying a gamma correction with $\Gamma=2$; all
other images have linear scaling on a common scale for each ROI. Three
animations associated with this figure are available as online material: a movie
of the sunspot in the H $\alpha$ blue wing as in the upper left panel, the
corresponding movie in WB 8542 Å, and a combined movie showing the left part
of the sunspot. See
https://www.mn.uio.no/astro/english/people/aca/rouppe/movies/.
The temporal evolution of PEBs was studied in the 29 April 2016 observations
of the sunspot in AR12533, see Fig. 6. The online material includes a movie of
the full 90 min sequence of the entire sunspot at $-0.8$ Å offset from H
$\alpha$ line center, equivalent to the FOV shown in the upper left panel. The
movie shows many small bright PEBs in the penumbra that generally move
radially outward, away from the umbra. These are not visible in the reference
WB 8542 Å movie. This difference in visibility is best seen in the third movie
that zooms in on the left part of the penumbra and combines the H $\alpha$
blue wing and WB 8542 Å, as well as a panel that blinks between these
diagnostics. There are numerous examples of PEBs that originate in the
penumbra, migrate outwards and eventually cross the outer penumbra boundary
where they continue their outward migration in the sunspot moat flow. From
inspection of the H $\alpha$ blue wing movie, it is clear that most of the
PEBs are found in the outer regions of the penumbra. This is similar to what
is described above for the 22 September 2017 sunspot and what was found from
the $k$-means detections.
We tracked 32 events to measure lifetimes, trajectories and velocities. These
PEBs were selected on the basis of clear visibility throughout their lifetime
and regarded as a representative sample. We measured PEB lifetimes ranging
between 1 and 9 min, and an average lifetime of about 3 min. During their
lifetime, these PEBs traveled distances ranging between 100 and 1640 km, with
an average of about 650 km. They traveled at an average speed of 3.7 km s$^{-1}$,
and the maximum speed measured is almost 13 km s$^{-1}$. These velocities are
apparent motions, and from these observations we cannot determine whether they
are real plasma flows or result from a moving front, for example of
reconnection progressing along a magnetic interface.
Figure 6 shows the evolution of selected PEBs in sequences of small images for
three ROIs. The three images for ROI A cover 2:43 min during which the PEB
migrates over a distance of 340 km with an average speed of 2 km s$^{-1}$. The PEB
flares up for a duration of 102 s, with its brightest moment at 10:13:16 UT in
the middle panel. The PEB in ROI B migrates over 365 km with an average speed
of 1.4 km s$^{-1}$. During its lifetime, this PEB splits into a number of bright
substructures, which is visible in the third panel. The sequence for ROI C covers
1:41 min of a total lifetime of the PEB of 5:04 min. This PEB shows a clear
flame structure which is strikingly absent in the reference row of WB 8542 Å
images. This flame appears to eject a small bright blob that is visible in the
three last panels. This ejection can be followed for 2:22 min during which it
moves at a maximum speed of almost 11 km s$^{-1}$. The PEB itself appears to move
at a maximum speed of almost 4 km s$^{-1}$, while it moves at an average speed of 2
km s$^{-1}$ during a migration over 650 km.
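The quoted average speeds follow directly from migration distance over elapsed time; for ROI A, 340 km in 2:43 min gives about 2.1 km s$^{-1}$, consistent with the $\sim$2 km s$^{-1}$ stated above. A trivial sketch (the function name is ours):

```python
# Average apparent speed of a tracked PEB: migration distance divided by the
# elapsed time (ROI A values taken from the text above).
def avg_speed_km_s(distance_km, minutes, seconds=0):
    return distance_km / (60 * minutes + seconds)

speed_roi_a = avg_speed_km_s(340, 2, 43)  # ~2.1 km/s
```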
The rapid variability we see for these selected examples and other PEBs in the
time series is clearly limited by the temporal resolution. There often are
significant variations in brightness and morphology (e.g., in the form of
splitting, merging, and ejections) between subsequent time steps. This
suggests that PEBs change on time scales shorter than 20 s.
## 5 Discussion and conclusion
Using high spatial resolution observations in the Balmer H $\alpha$ and H
$\beta$ lines, we find large numbers of EBs in the penumbrae of sunspots. The
EB nature of these penumbral events is established by (1) characteristic
spectral profiles with often strongly enhanced wings, (2) flame morphology
under a slanted viewing angle, (3) rapid temporal variability in brightness and
morphology, and (4) absence in concurrent continuum passbands.
We find many small patches in the penumbra with characteristic EB wing
enhancement and note that there is considerable spread in the level of
enhancement: some reach the level of strong EBs traditionally found in active
region flux emergence regions with wing enhancement well above twice that of
quiet Sun, others have weak wing enhancement that is only discernible in
contrast to weak background penumbral profiles. In the H $\beta$ line, we find
that PEBs do not stand out in terms of area or wing brightness as compared to
EBs in the surroundings of the sunspot. We do note, however, that PEBs are
easier to discern in H $\beta$ than in H $\alpha$. The shorter wavelength of H
$\beta$ offers the advantage of higher spatial resolution and higher contrast.
Furthermore, we observe that the sunspot in the H $\alpha$ line is much more
dominated by dense chromospheric fibrils. It appears that the sunspot
chromosphere has much less opacity in H $\beta$. Recently, from non-LTE
radiative transfer calculations, Zhang (2020) concluded that H $\beta$ Stokes
signals originate from the sunspot umbra photosphere. The difference between
the H $\alpha$ and H $\beta$ lines is well illustrated by the line scan
animation associated with Fig. 1 in the online material. At wing offsets that
have highest EB contrast, the H $\alpha$ line is much more dominated by the
chromospheric superpenumbra fibrils than the H $\beta$ line. These combined
reasons make it more difficult to detect PEBs in H $\alpha$ and appreciate the
ubiquity of PEBs. Here we present detailed analysis of two different sunspots,
but we note that we observe large numbers of PEBs in at least 11 other sunspot
datasets that we have acquired over the past ten years. In the 22 September
2017 dataset, we find more than 100 PEBs in the highest quality H $\beta$ line
scan which corresponds to almost 30% of all detected EBs. The number density
of PEBs is higher than the average number density of EBs over the whole FOV.
It is only in the sunspot moat, just outside the penumbra, that the number
density of EBs is higher than in the outer penumbra. In the moat we detect on
average about 3 EBs per typical granule, considering an average area of a
granule of 1.75 Mm$^2$ (see Rincon & Rieutord, 2018). For the outer penumbra, we
detect about 1.3 PEBs per typical granule area. In the quiet Sun, Joshi et al.
(2020) found a QSEB number density of 0.09 Mm$^{-2}$, which is more than 8 times
lower than the number density of PEBs (0.76 Mm$^{-2}$) and 19 times lower than
that of EBs in the moat (1.72 Mm$^{-2}$).
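The per-granule figures above are simply the number densities multiplied by the mean granule area; as a quick arithmetic check (constant and function names are ours):

```python
# EBs per typical granule = number density (Mm^-2) x mean granule area (Mm^2).
GRANULE_AREA_MM2 = 1.75  # mean granule area as quoted above

def ebs_per_granule(density_per_mm2, granule_area_mm2=GRANULE_AREA_MM2):
    return density_per_mm2 * granule_area_mm2

moat = ebs_per_granule(1.72)   # ~3 EBs per granule area in the moat
outer = ebs_per_granule(0.76)  # ~1.3 PEBs per granule area in the outer penumbra
```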
Many events show clear flame morphology under an inclined viewing angle, which
underlines the similarity with EBs in active regions and the quiet Sun. The rapid
variability and dynamics we see in the 29 April 2016 time series are reminiscent of the
rapid variability found in EB flames in active regions (Watanabe et al., 2011)
and the quiet Sun (Rouppe van der Voort et al., 2016). We note, however, that the
temporal cadence of these studies ($\sim 1$ s) is much faster than for the
time series presented here (20 s).
Establishing the ubiquity of EBs in the penumbra is aided by the concurrent
continuum observations that are available through the CHROMIS WB channel.
Absence of a bright feature in the associated continuum image confirms the
Balmer line wing enhancement. The EB features are just as absent in WB in the penumbra as
they are outside the sunspot, and this further confirms the EB nature of PEBs.
The H $\beta$ wing images show many small bright features that are absent in
the WB image but have too weak wing enhancement to be (fully) detected by the
$k$-means method (described as “crumbs” in Sect. 4.1). This suggests that PEBs
are more prevalent than the detection numbers from the $k$-means method
suggest.
For CRISP H $\alpha$ observations, a clean continuum channel is not as readily
available, since the CRISP WB channel shares the same prefilter as the CRISP
narrowband images. CRISP WB 6563 Å images show EBs because the CRISP
prefilter transmission profile has a relatively wide passband (FWHM = 4.9 Å) and
is centered on the H $\alpha$ line. For the 29 April 2016 time series (see
Fig. 6) we compare H $\alpha$ blue wing images with concurrent CRISP WB 8542 Å
images, since the EB signature in this channel is weaker due to the wider passband and
the generally weaker EB emission in Ca ii 8542 Å.
The presence of EBs in the sunspot penumbra has been reported before. For
example, a number of small EBs inside the penumbra of a small sunspot can be
seen in the H $\alpha$ wing detection maps of Nelson et al. (2013a), and
Reardon et al. (2013) report the observation of EB profiles in the Ca ii 8542
Å line for two events in a study of penumbral transients. However, this is the
first time that the presence of large numbers of EBs in the penumbra is
reported.
The significance of EBs lies in their capacity as markers of magnetic
reconnection in the low solar atmosphere. Numerical simulations demonstrated
that enhanced Balmer wing emission and flame morphology stem from heating
along current sheets at reconnection sites (Hansteen et al., 2017, 2019;
Danilovic, 2017). The flames we observe for PEBs appear to be rooted deep down
in the penumbral photosphere, in a similar fashion as EB flames in active
regions and the quiet Sun. Further support for PEBs being markers of magnetic
reconnection in the deep penumbra photosphere comes from PEB detections being
located in areas where opposite polarities are in close proximity. The sunspot
of 22 September 2017 is of positive magnetic polarity. The $B_{z}$ map (Fig.
4) reveals the presence of many isolated patches of opposite (negative)
polarity within the penumbra. Many PEBs are located in the vicinity of these
opposite polarity patches and some are located right at the interface where
the two magnetic polarities meet. We also observe that the number density of
PEBs increases toward the outer penumbra, following the same trend of
increasing opposite polarity flux with increasing distance from the sunspot
umbra. We note, however, that there are a few limitations that need to be kept
in mind when combining the $B_{z}$ and EB detection maps for inferring that
magnetic reconnection is taking place: spectral line inversions are sensitive
to a limited range in height and simplifications assumed for the ME inversion
method imply uncertainties. We estimate that our ME inversions of the Fe i
lines are valid as $\vec{B}$ field measurements over a height range of a
few hundred km in the upper photosphere (see Grec et al., 2010). Joshi et al.
(2017) have shown that the opposite polarity magnetic flux found in the deeper
penumbra could be more than four times larger than that in the middle and
upper photosphere. Therefore, there are solid grounds to believe that our ME
inversions which provide height-independent magnetic field vectors are not
able to resolve all opposite polarity patches in the penumbra. Furthermore,
stray light makes it difficult to detect weak signals and adds to the
uncertainty in the interpretation. We have applied a correction for stray
light that is consistent with previous studies but the full impact of stray
light on our measurements remains unknown. Further uncertainties come from
line-of-sight obscuration due to corrugation of the penumbral optical surface
and it may be possible that regions with opposite polarity are hidden behind
elevated foreground structures. Apart from these observational limitations
that hamper the detection of opposite polarity patches, it should be
stressed that the condition of diametrically opposed field directions is not
strictly required for reconnection to take place. Even in areas that appear
unipolar in observations, the complex magnetic topology of the penumbra can be
expected to host gradients in the magnetic field that allow for small-angle
magnetic reconnection.
The large number of PEBs we observe suggests that magnetic reconnection is a
very frequently occurring process in the low penumbra atmosphere. A
significant amount of magnetic energy may be dissipated through reconnection
in the highly abundant PEBs and as such PEBs may play an important role in
sunspot decay. Outward moving magnetic elements that leave the penumbra and
migrate through the sunspot moat, commonly referred to as moving magnetic
features (MMF), carry net flux away from the sunspot and are traditionally
regarded as main actors in sunspot decay (see, e.g. Solanki, 2003). The
ubiquity of PEBs we find here may imply that some fraction of the magnetic
energy is already dissipated and lost from the sunspot before MMFs cross the
sunspot boundary. Moreover, the high density of EBs in the immediate vicinity of
the sunspot suggests that a significant fraction of the magnetic field in the moat
flow region might also dissipate through magnetic reconnection
occurring in the photosphere.
What impact do PEBs have on the upper atmosphere? There exist several
transient phenomena in sunspots that may be related to energy release in PEBs.
Penumbral micro-jets (PMJ) are short-lived elongated brightenings that can be
observed in the core of Ca ii lines (Katsukawa et al., 2007; Reardon et al.,
2013). Magnetic reconnection has been suggested as their driver but the idea
that they carry high-speed plasma flows as their name suggests has been
contested (Esteban Pozuelo et al., 2019; Rouppe van der Voort & Drews, 2019).
They can be observed in transition region diagnostics (Vissers et al., 2015;
Drews & Rouppe van der Voort, 2020) and Tiwari et al. (2016) reported the
existence of large PMJs originating from the outer penumbra in the regions
with abundant mixed polarities. These large PMJs leave signatures in some of
the transition region/coronal channels of the AIA instrument of NASA’s Solar
Dynamics Observatory. Drews & Rouppe van der Voort (2017) found that there
exist on average 21 PMJs per time step in a time series of Ca ii 8542 Å
observations. This is significantly fewer than the number of PEBs that we
detect. Furthermore, clear PMJ detections are mostly found in the inner
penumbra where we find fewer PEBs as compared to the outer penumbra. Recently,
Buehler et al. (2019) and Drews & Rouppe van der Voort (2020) connected Ca ii
8542 Å PMJs with dark fibrilar structures close to the line core in H
$\alpha$. The connection between PEBs and PMJs warrants further study and
requires simultaneous observation of multiple spectral lines at extremely high
temporal resolution to resolve the onset of PMJs (Rouppe van der Voort & Drews,
2019). Possibly there is also a connection with transition region bright dots
observed above sunspots with IRIS (Tian et al., 2014; Samanta et al., 2017)
and Hi-C (Alpert et al., 2016). Furthermore, magnetic reconnection in the deep
atmosphere as marked by PEBs may play a role in the heating of bright coronal
loops that are rooted in penumbrae (see, e.g., Tiwari et al., 2017).
Finally, we conclude that EBs in the penumbra of sunspots are an excellent
target for new telescopes such as the 4-m DKIST (Rimmele et al., 2020) and the
planned EST (Schlichenmaier et al., 2019) since PEBs offer opportunities to
study magnetic reconnection in kG magnetic field environments at the smallest
resolvable scales in astrophysical plasmas.
###### Acknowledgements.
The Swedish 1-m Solar Telescope is operated on the island of La Palma by the
Institute for Solar Physics of Stockholm University in the Spanish
Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de
Canarias. The Institute for Solar Physics is supported by a grant for research
infrastructures of national importance from the Swedish Research Council
(registration number 2017-00625). This research is supported by the Research
Council of Norway, project number 250810, and through its Centres of
Excellence scheme, project number 262622. VMJH is supported by the European
Research Council (ERC) under the European Union’s Horizon 2020 research and
innovation programme (SolarALMA, grant agreement No. 682462). We thank Shahin
Jafarzadeh, Ainar Drews, Tiago Pereira and Ada Ortiz for their help with the
observations. We made much use of NASA’s Astrophysics Data System
Bibliographic Services.
## References
* Alpert et al. (2016) Alpert, S. E., Tiwari, S. K., Moore, R. L., Winebarger, A. R., & Savage, S. L. 2016, ApJ, 822, 35
* Borrero & Ichimoto (2011) Borrero, J. M. & Ichimoto, K. 2011, Living Reviews in Solar Physics, 8, 4
* Bose et al. (2019) Bose, S., Henriques, V. M. J., Joshi, J., & Rouppe van der Voort, L. 2019, A&A, 631, L5
* Bose et al. (2021) Bose, S., Joshi, J., Henriques, V. M. J., & Rouppe van der Voort, L. 2021, arXiv e-prints, arXiv:2101.07829
* Buehler et al. (2019) Buehler, D., Esteban Pozuelo, S., de la Cruz Rodriguez, J., & Scharmer, G. B. 2019, ApJ, 876, 47
* Cuperman et al. (1992) Cuperman, S., Li, J., & Semel, M. 1992, A&A, 265, 296
* Danilovic (2017) Danilovic, S. 2017, A&A, 601, A122
* de la Cruz Rodríguez (2019) de la Cruz Rodríguez, J. 2019, A&A, 631, A153
* de la Cruz Rodríguez et al. (2015) de la Cruz Rodríguez, J., Löfdahl, M. G., Sütterlin, P., Hillberg, T., & Rouppe van der Voort, L. 2015, A&A, 573, A40
* Drews & Rouppe van der Voort (2017) Drews, A. & Rouppe van der Voort, L. 2017, A&A, 602, A80
* Drews & Rouppe van der Voort (2020) Drews, A. & Rouppe van der Voort, L. 2020, A&A, 638, A63
* Ellerman (1917) Ellerman, F. 1917, ApJ, 46, 298
* Esteban Pozuelo et al. (2019) Esteban Pozuelo, S., de la Cruz Rodríguez, J., Drews, A., et al. 2019, ApJ, 870, 88
* Everitt (1972) Everitt, B. S. 1972, British Journal of Psychiatry, 120, 143–145
* Fang et al. (2006) Fang, C., Tang, Y. H., Xu, Z., Ding, M. D., & Chen, P. F. 2006, ApJ, 643, 1325
* Fiorio & Gustedt (1996) Fiorio, C. & Gustedt, J. 1996, Theoretical Computer Science, 154, 165
* Franz & Schlichenmaier (2013) Franz, M. & Schlichenmaier, R. 2013, A&A, 550, A97
* Georgoulis et al. (2002) Georgoulis, M. K., Rust, D. M., Bernasconi, P. N., & Schmieder, B. 2002, ApJ, 575, 506
* Grec et al. (2010) Grec, C., Uitenbroek, H., Faurobert, M., & Aime, C. 2010, A&A, 514, A91
* Hansteen et al. (2019) Hansteen, V., Ortiz, A., Archontis, V., et al. 2019, A&A, 626, A33
* Hansteen et al. (2017) Hansteen, V. H., Archontis, V., Pereira, T. M. D., et al. 2017, ApJ, 839, 22
* Henriques (2012) Henriques, V. M. J. 2012, A&A, 548, A114
# Minimum energy with infinite horizon:
from stationary to non-stationary states
P. Acquistapace (Dipartimento di Matematica, Università di Pisa; e-mail: <EMAIL_ADDRESS>) and F. Gozzi (Dipartimento di Economia e Finanza, Università _LUISS - Guido Carli_, Roma; e-mail: <EMAIL_ADDRESS>)
###### Abstract
We study a non-standard infinite horizon, infinite dimensional linear-quadratic control problem arising in the physics of non-stationary states (see e.g. [7, 9]): finding the minimum energy to drive a given stationary state $\bar{x}=0$ (at time $t=-\infty$) into an arbitrary non-stationary state $x$ (at time $t=0$). This is the opposite of what is commonly studied in the literature on null controllability (where one drives a generic state $x$ into the equilibrium state $\bar{x}=0$). Consequently, the Algebraic Riccati Equation (ARE) associated to this problem is non-standard, since the sign of the linear part is opposite to the usual one and since its solution is intrinsically unbounded. Hence the standard theory of AREs does not apply. The analogous finite horizon problem has been studied in the companion paper [1]. Here, as in that paper, we prove that the linear selfadjoint operator associated to the value function is a solution of the above mentioned ARE. Moreover, unlike in [1], we prove that such solution is the maximal one.
The first main result (Theorem 4.7) is proved by approximating the problem
with suitable auxiliary finite horizon problems (which are different from the
one studied in [1]). Finally in the special case where the involved operators
commute we characterize all solutions of the ARE (Theorem 5.5) and we apply
this to the Landau-Ginzburg model.
Keywords: Minimum energy; Null controllability; Landau-Ginzburg model; Optimal
control with infinite horizon; Algebraic Riccati Equation in infinite
dimension; Value function as maximal solution.
###### Contents
1 Introduction
 1.1 Plan of the paper
2 The problem and the main results
 2.1 The state equation
 2.2 Minimum energy problems with infinite horizon and associated Riccati equation
 2.3 The method and the main results
3 The auxiliary problem
 3.1 A key comparison result
4 Minimum energy with (negative) infinite horizon
 4.1 Optimal strategies
 4.2 Connection with the finite horizon case
 4.3 Algebraic Riccati Equation
5 The selfadjoint commuting case
6 A motivating example: from equilibrium to non-equilibrium states
A Minimum Energy with finite horizon
 A.1 General formulation of the problem
 A.2 The space $H$ and its properties
## 1 Introduction
We study a non-standard infinite dimensional, infinite horizon, linear-quadratic control problem: finding the minimum energy to drive a given stationary state $\bar{x}=0$ (at time $t=-\infty$) into an arbitrary non-stationary state $x$ (at time $t=0$).
This kind of problems arises in the control representation of the rate
function for a class of large deviation problems (see e.g. [13] and the
references quoted therein; see also [18, Chapter 8] for an introduction to the
subject). It is motivated by applications in the physics of non-equilibrium
states and in this context it has been studied in various papers, see e.g. [4,
5, 6, 7, 8, 9] (see Section 6 for a description of a model case).
The main goal here, as a departure point of the theory, is to apply the
dynamic programming approach to characterize the value function as the unique
(or maximal/minimal) solution of the associated Hamilton-Jacobi-Bellman (HJB)
equation, a problem left open e.g. in [7, 9]. This problem is quite difficult since it deals with the opposite of what is commonly studied in the literature on null controllability (where one drives a generic state $x$ into the equilibrium state $\bar{x}=0$). For this reason we start studying here the
simplest case, i.e. when the state equation is linear and the energy
functional is purely quadratic: so the problem falls into the class of linear-
quadratic optimal control problems, the value function is quadratic, and the
associated HJB equation reduces to an Algebraic Riccati Equation (ARE).
The above feature (i.e. the fact we bring $0$ to $x$ instead of the opposite)
implies that the ARE associated to this problem is non-standard for two main
reasons: first, the sign of the linear part is opposite to the usual one;
second, since the set of reachable $x$ is strictly smaller than the whole
state space $X$, the solution is intrinsically unbounded in $X$. The combination of these two difficulties prevents the application of the standard theory of AREs.
In the companion paper [1] we studied, as a first step, the associated finite
horizon case. Here we partially exploit the results of such paper to deal with
the more interesting infinite horizon case, which is the one that arises in
the above mentioned papers in physics.
Our main results (Theorems 4.7 and 5.5) show that, under a null
controllability assumption (after a given time $T_{0}\geq 0$) and a coercivity
assumption on the control operator, the linear selfadjoint operator $P$
associated to the value function is the maximal solution of the above
mentioned ARE. The first result concerns the general case with some
restrictions on the class of solutions, while the second one looks at the case
where the state and the control operators commute, without any restriction on
the class of solutions.
This is only partially similar to what has been done in [1]. Indeed, the proof
that $P$ is a solution of ARE is substantially similar to what is done in [1,
Section 4.3]. On the other hand, while in [1, Section 4.4] we prove a partial
uniqueness result (i.e. uniqueness in a suitable family of invertible
operators), here we are able to prove, through a delicate comparison argument
(based on a nontrivial approximation procedure), that $P$ is the maximal
solution of the associated ARE.
To prove the comparison argument (which is the content of the key Lemma 3.10)
we need to introduce a family of auxiliary finite horizon problems, which are
different from the one studied in [1].
Finally, in the special case where the involved operators commute, we are able, again unlike in the finite horizon case, to characterize all solutions of the ARE. This allows us to apply our result to the case of the Landau-Ginzburg model.
### 1.1 Plan of the paper
In Section 2 we illustrate the problem and the strategy used to prove the main results. It is divided into three subsections: in the first we present the state equation and the main hypotheses; in the second we describe our minimum energy problem; the third briefly explains the method used to prove our main results. Section 3 concerns the study of the auxiliary problem. After
devoting the first part of the section to some basic results on it, we show,
in Subsection 3.1, the comparison Lemma 3.10 which will be used to prove the
maximality result in the infinite horizon case. Section 4 is devoted to the
main problem and the main maximality result. In Section 5 we analyze the case
when the operators $A$ and $BB^{*}$ commute. In Section 6 we present, as an
example, a special case of the motivating problem given in [7] (the case of
the so-called Landau-Ginzburg model): we show that it falls into the class of
problems treated in this paper.
## 2 The problem and the main results
### 2.1 The state equation
###### Notation 2.1.
Given any two Banach spaces $Y$ and $Z$, we denote by ${\cal L}(Y,Z)$ the set
of all linear bounded operators from $Y$ to $Z$, writing ${\cal L}(Y)$ when
$Z=Y$. When $Y$ is a Hilbert space we denote by ${\cal L}_{+}(Y)$ the set of
all elements of ${\cal L}(Y)$ which are selfadjoint and nonnegative.
Let $-\infty<s<t<+\infty$. Consider the abstract linear equation
$\left\{\begin{array}{l}y^{\prime}(r)=Ay(r)+Bu(r),\quad r\in\,]s,t],\\[5.69054pt] y(s)=x\in X,\end{array}\right.$ (1)
under the following assumption.
###### Hypothesis 2.2.
(i)
$X$, the state space, and $U$, the control space, are real separable Hilbert
spaces;
(ii)
$A:{\cal D}(A)\subseteq X\rightarrow X$ is the generator of a
$C_{0}$-semigroup on $X$ such that
$\|e^{tA}\|_{{\cal L}(X)}\leq Me^{-\omega t},\qquad t\geq 0,$ (2)
for given constants $M>0$ and $\omega>0$;
(iii)
$B:U\rightarrow X$ is a bounded linear operator;
(iv)
$u$, the control strategy, belongs to $L^{2}(s,t;U)$.
We recall the following well known result, pointed out e.g. in [1, Proposition
2.2].
###### Proposition 2.3.
For $-\infty<s<t<+\infty$, $x\in X$ and $u\in L^{2}(s,t;U)$, the mild solution
of (1), defined by
$y(r;s,x,u)=e^{(r-s)A}x+\int_{s}^{r}e^{(r-\sigma)A}Bu(\sigma)\,\mathrm{d}\sigma,\quad
r\in[s,t],$ (3)
is in $C([s,t],X)$.
We now consider the state equation in the half-line $\,]-\infty,t]$:
$\left\{\begin{array}{l}y^{\prime}(r)=Ay(r)+Bu(r),\quad r\in\,]-\infty,t],\\[5.69054pt] \displaystyle\lim_{s\rightarrow-\infty}y(s)=0.\end{array}\right.$ (4)
Since (4) is not completely standard we introduce the following definition of
solution.
###### Definition 2.4.
Given $u\in L^{2}(-\infty,t;U)$, we say that $y\in C(\,]-\infty,t];X)$ is a
solution of (4) if for every $-\infty<r_{1}\leq r_{2}\leq t$ we have
$y(r_{2})=e^{(r_{2}-r_{1})A}y(r_{1})+\int_{r_{1}}^{r_{2}}e^{(r_{2}-\tau)A}Bu(\tau)\,\mathrm{d}\tau$ (5)
and
$\lim_{s\rightarrow-\infty}y(s)=0.$ (6)
###### Lemma 2.5.
Given any $u\in L^{2}(-\infty,t;U)$, there exists a unique solution of the
Cauchy problem (4) and it is given by
$y(r;-\infty,0,u):=\int_{-\infty}^{r}e^{(r-\tau)A}Bu(\tau)\,\mathrm{d}\tau,\qquad
r\leq t.$ (7)
###### Proof.
We first prove that the function $y(\cdot;-\infty,0,u)$ given by (7) is continuous. Fix $r_{1}<r_{2}\leq t$; we have
$\displaystyle y(r_{2};-\infty,0,u)-y(r_{1},-\infty,0,u)=$
$\displaystyle=\int_{-\infty}^{r_{2}}e^{(r_{2}-\tau)A}Bu(\tau)\,\mathrm{d}\tau-\int_{-\infty}^{r_{1}}e^{(r_{1}-\tau)A}Bu(\tau)\,\mathrm{d}\tau=$
$\displaystyle=\int_{-\infty}^{r_{1}}\left(e^{(r_{2}-r_{1})A}-I\right)e^{(r_{1}-\tau)A}Bu(\tau)\,\mathrm{d}\tau+\int_{r_{1}}^{r_{2}}e^{(r_{2}-\tau)A}Bu(\tau)\,\mathrm{d}\tau,$
and then continuity follows by standard arguments. We now prove that (5)
holds. For $-\infty<r_{1}\leq r_{2}\leq t$, we have
$\displaystyle y(r_{2};-\infty,0,u)$ $\displaystyle=$
$\displaystyle\int_{-\infty}^{r_{2}}e^{(r_{2}-\tau)A}Bu(\tau)\,\mathrm{d}\tau=$
$\displaystyle=$ $\displaystyle
e^{(r_{2}-r_{1})A}\int_{-\infty}^{r_{1}}e^{(r_{1}-\tau)A}Bu(\tau)\,\mathrm{d}\tau+\int_{r_{1}}^{r_{2}}e^{(r_{2}-\tau)A}Bu(\tau)\,\mathrm{d}\tau=$
$\displaystyle=$ $\displaystyle
e^{(r_{2}-r_{1})A}y(r_{1};-\infty,0,u)+\int_{r_{1}}^{r_{2}}e^{(r_{2}-\tau)A}Bu(\tau)\,\mathrm{d}\tau,$
so (5) is satisfied. Moreover, since $u\in L^{2}(-\infty,t;U)$ and thanks to (2), letting $s\rightarrow-\infty$ in (7) we get $y(s;-\infty,0,u)\rightarrow 0$, so that (6) holds as well.
In order to prove uniqueness, consider two solutions $y_{1}(\cdot)$ and
$y_{2}(\cdot)$ and a point $r\in(-\infty,t)$. Since $y_{1}(\cdot)$ and
$y_{2}(\cdot)$ satisfy (5), for their difference we have, for $r_{0}<r<t$,
$\|y_{1}(r)-y_{2}(r)\|_{X}=\|e^{(r-r_{0})A}(y_{1}(r_{0})-y_{2}(r_{0}))\|_{X}\leq
M\,e^{-(r-r_{0})\omega}\|(y_{1}(r_{0})-y_{2}(r_{0}))\|_{X}\,.$
As $y_{1}(\cdot)$ and $y_{2}(\cdot)$ satisfy (6), letting
$r_{0}\rightarrow-\infty$ above we get $y_{1}(r)=y_{2}(r)$ for every $r<t$. ∎
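Let us also note explicitly the estimate, implicit in the proof above, guaranteeing that the integral in (7) is absolutely convergent: by (2) and the Cauchy–Schwarz inequality, for every $r\leq t$,
$\|y(r;-\infty,0,u)\|_{X}\leq M\|B\|_{{\cal L}(U,X)}\int_{-\infty}^{r}e^{-\omega(r-\tau)}\|u(\tau)\|_{U}\,\mathrm{d}\tau\leq\frac{M\|B\|_{{\cal L}(U,X)}}{\sqrt{2\omega}}\,\|u\|_{L^{2}(-\infty,t;U)},$
so that the solution (7) is bounded on $\,]-\infty,t]$.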
###### Remark 2.6.
Notice that, if the initial condition (6) is not zero, then the above equation
cannot have any solution. Indeed any solution $y(\cdot;-\infty,x,u)$ of the
state equation (4), with $0$ replaced by $x\in X\setminus\\{0\\}$ in (6), must
satisfy (5) and $\lim_{s\rightarrow-\infty}y(s)=x$. But, as
$r_{1}\rightarrow-\infty$, (5) implies, as in (7), that
$y(r_{2};-\infty,x,u):=\int_{-\infty}^{r_{2}}e^{(r_{2}-\tau)A}Bu(\tau)\mathrm{d}\tau,\qquad
r_{2}\leq t.$ (8)
Taking the limit as $r_{2}\rightarrow-\infty$ we get $x=0$, a contradiction.
### 2.2 Minimum energy problems with infinite horizon and associated Riccati
equation
To better clarify our results we state, roughly and informally, the
mathematical problem (see Section 4 for a precise description). The state
space $X$ and the control space $U$ are both real separable Hilbert spaces. We
take the linear controlled system in $X$
$\left\{\begin{array}{ll}y^{\prime}(s)=Ay(s)+Bu(s),\quad s\in\,]-\infty,0],\\[5.69054pt] y(-\infty)=0,\end{array}\right.$ (9)
where $A:{\cal D}(A)\subset X\rightarrow X$ generates a strongly continuous
semigroup and $B:U\rightarrow X$ is a linear, possibly unbounded operator.
Given a point $x\in X$ we consider the set ${\cal U}_{[-\infty,0]}(0,x)$ of
all square integrable control strategies that drive the system from the
equilibrium state $0$ (at time $t=-\infty$) into the generic non-equilibrium
state $x$ (at time $t=0$). It is well known (see Proposition 4.2) that the set
${\cal U}_{[-\infty,0]}(0,x)$ is nonempty if and only if $x\in H$, where $H$
is a suitable subspace of $X$ that can be endowed with its own Hilbert
structure (see next subsection for the precise definition of $H$ and
Subsection A.2 for its properties).
We want to minimize the “energy-like” functional
$J_{[-\infty,0]}(u)=\frac{1}{2}\int_{-\infty}^{0}\|u(s)\|^{2}_{U}\,\mathrm{d}s.$
(10)
As usual the value function $V_{\infty}$ is defined as
$V_{\infty}(x)=\inf_{u\in{\cal U}_{[-\infty,0]}(0,x)}J_{[-\infty,0]}(u),$ (11)
and it is finite only when $x\in H$.
The peculiarity of the problem, with respect to the most studied minimum energy problems in Hilbert spaces (see e.g. [10], [13], [14, 15], [20], [28], and the general surveys [3], [11], [22, 23], [32]), is that it gives rise to an Algebraic Riccati Equation with a ‘wrong’ sign in the linear term, to which, to our knowledge, the standard theory developed in the current literature does not apply.
Indeed, the associated ARE in $X$ (with unknown $R$), which can be found by applying the dynamic programming principle, is, formally,
$0=-\langle Ax,Ry\rangle_{X}-\langle Rx,Ay\rangle_{X}-\langle
B^{*}Rx,B^{*}Ry\rangle_{U}\,,\quad x,y\in{\cal D}(A)\cap{\cal D}(R).$ (12)
Since $R$ is unbounded (this comes from the fact that $V_{\infty}$ is defined
only in $H$), it is convenient to rewrite (12) in $H$ (with unknown $P$, which
is now a bounded operator on $H$). This way we get the equation
$0=-\langle Ax,Py\rangle_{H}-\langle Px,Ay\rangle_{H}-\langle
B^{*}Q_{\infty}^{-1}Px,B^{*}Q_{\infty}^{-1}Py\rangle_{U},$ (13)
or, transforming the inner products in $H$ into inner products in $X$,
$0=-\langle Ax,Q_{\infty}^{-1}Py\rangle_{X}-\langle Q_{\infty}^{-1}Px,Ay\rangle_{X}-\langle B^{*}Q_{\infty}^{-1}Px,B^{*}Q_{\infty}^{-1}Py\rangle_{U}.$ (14)
In the last two equations $Q_{\infty}$ is the so-called controllability
operator (see (19)) and $Q_{\infty}^{-1}$ denotes its pseudoinverse, which is,
in general, unbounded. Moreover the last two equations make sense for $x,y$
belonging to suitable sets to be specified later on. For more details on how
these equations arise, the definitions of solution, and the relations among
them, see the discussion at the beginning of Subsection 4.3. Here we just
observe that the form of equation (14) turns out to be more suitable to prove
our main results.
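As a simple illustration of these equations (a toy example of ours, not taken from the quoted literature), consider the one-dimensional case $X=U=\mathbb{R}$, $Ax=-ax$ with $a>0$, $B=1$. Then $Q_{\infty}=1/(2a)$ and (12) becomes, for a scalar unknown $R$,
$0=aR+aR-R^{2}=R(2a-R),$
whose solutions are $R=0$ and $R=2a=Q_{\infty}^{-1}$. The maximal solution $R=2a$ is the one associated to the value function, since $V_{\infty}(x)=\frac{1}{2}Q_{\infty}^{-1}x^{2}=ax^{2}$. Note that here the linear and the quadratic terms carry the same sign, so that both roots are nonnegative: this is precisely the ‘wrong’ sign phenomenon of this problem.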
The ‘wrong’ sign in the linear term333Evidently the two terms in equation (14) (or (12), or (13)) have the same sign, while in the standard case they do not. We infer that the ‘wrong’ sign is in the linear term by looking at the corresponding finite horizon problem in [1]. of (14) (or (12), or (13)) does not allow us to approach it using the standard method (described e.g. in [3, pp. 390-394 and 479-486], see also [28, p. 1018]), which consists in taking the associated evolutionary Riccati equation, proving that it has a solution $P(t)$ (using an a priori estimate, due to the fact that both the linear and the quadratic terms have the same sign), and taking the limit of $P(t)$ as $t\rightarrow\infty$.
On the other hand the ‘wrong’ sign comes from the nature of the motivating
problem: to look at minimum energy paths from equilibrium to non-equilibrium
states (see Section 6), which is the opposite direction of the standard one
considered in the above quoted papers. This means that the value function
depends on the final point, while in the above quoted problems it depends on
the initial one (see Remark 3.3 to see what happens to our auxiliary problem
using a time inversion). Therefore we are driven to use a different approach, which exploits the structure of the problem; we partly borrow some ideas from [28] and from the literature about model reduction444We thank Prof. R. Vinter for providing us with these references. (see e.g. [25] and [30]: indeed our results partly generalize Theorem 2.2 of [30], see Remark 4.4).
### 2.3 The method and the main results
We now briefly explain our approach. First of all we consider the associated
finite horizon problem (which has been studied in the companion paper [1]
whose results, for the part needed here, are recalled in Appendix A), where
the state equation is
$\left\{\begin{array}{ll}y^{\prime}(r)=Ay(r)+Bu(r),\quad r\in\,]-t,0],\\[5.69054pt] y(-t)=0,\end{array}\right.$ (15)
and the energy to be minimized is
$J_{[-t,0]}(u)=\frac{1}{2}\int_{-t}^{0}\|u(r)\|^{2}_{U}\,\mathrm{d}r.$ (16)
The value function is
$V(t,x)=\inf_{u\in{\cal U}_{[-t,0]}(0,x)}J_{[-t,0]}(u),$ (17)
where
${\cal U}_{[-t,0]}(0,x)=\{u\in L^{2}(-t,0;U):\;y(0)=x\}.$ (18)
We now recall the well known expression of the controllability operator
$Q_{t}x=\int_{0}^{t}e^{rA}BB^{*}e^{rA^{*}}x\,\mathrm{d}r,\quad x\in X,\quad
t\in[0,+\infty].$ (19)
It is well known (see e.g. [32, Part IV, Theorem 2.3]) that, for $t\geq 0$, the reachable set of the control systems (15) (finite horizon case) and (4) (infinite horizon case) is ${\cal R}(Q^{1/2}_{t})$, i.e. the range of $Q_{t}^{1/2}$ ($t\in[0,+\infty]$). This is clearly the set where the value functions $V$ of (17) (finite horizon case) and $V_{\infty}$ of (11) (infinite horizon case) are well defined. Moreover, as pointed out, e.g., in [1, Proposition C.2-(i)], for $0\leq t_{1}\leq t_{2}$ we have ${\cal R}(Q^{1/2}_{t_{1}})\subseteq{\cal R}(Q^{1/2}_{t_{2}})$. It will be
often useful to assume, beyond Hypothesis 2.2, also the following null
controllability assumption.
###### Hypothesis 2.7.
There exists $T_{0}\geq 0$ such that
${\cal R}(e^{T_{0}A})\subseteq{\cal R}(Q_{T_{0}}^{1/2}).$ (20)
Under such assumption we get
${\cal R}(Q^{1/2}_{t_{1}})={\cal R}(Q^{1/2}_{t_{2}}),\qquad T_{0}\leq t_{1}\leq t_{2}\leq+\infty.$
Consequently
$\ker Q_{t_{1}}=\ker Q^{1/2}_{t_{1}}=\ker Q^{1/2}_{t_{2}}=\ker Q_{t_{2}}\,,\qquad T_{0}\leq t_{1}\leq t_{2}\leq+\infty.$ (21)
We can now introduce the already announced space $H$. We define
$H={\cal R}(Q_{\infty}^{1/2}).$ (22)
Of course it holds
$H\subseteq\overline{{\cal R}(Q_{\infty}^{1/2})}=[\ker Q_{\infty}^{1/2}]^{\perp}=[\ker Q_{\infty}]^{\perp}.$
The inclusion is in general proper. Define in $H$ the inner product
$\langle x,y\rangle_{H}=\langle
Q_{\infty}^{-1/2}x,Q_{\infty}^{-1/2}y\rangle_{X}\,,\qquad x,y\in H.$ (23)
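The following example (ours, for illustration) shows that the inclusion above can indeed be proper. Let $X=U=\ell^{2}$, let $A$ be diagonal, $Ae_{n}=-a_{n}e_{n}$ with $\omega\leq a_{n}\uparrow+\infty$, and $B=I$. Then $Q_{\infty}=\mathrm{diag}\,(1/(2a_{n}))$, so that $\ker Q_{\infty}=\{0\}$ and
$H={\cal R}(Q_{\infty}^{1/2})=\Big\{x\in\ell^{2}:\ \sum_{n}2a_{n}x_{n}^{2}<+\infty\Big\},\qquad\langle x,y\rangle_{H}=\sum_{n}2a_{n}x_{n}y_{n},$
which is strictly contained in $[\ker Q_{\infty}]^{\perp}=\ell^{2}$.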
Some useful results on the space $H$ which form the ground for our main
results and are partly proved in [1], are recalled (and proved, when needed)
in Appendix A.2.
Using $H$ as the ground space we know (see [1, Proposition 4.8-(ii)]) that $V(t,x)=\frac{1}{2}\langle P(t)x,x\rangle_{H}$, where $P(t)$ is a suitable
extension of $Q_{\infty}Q_{t}^{-1}$ (here $Q_{t}^{-1}$ is the pseudoinverse of
$Q_{t}$, see [1, Appendix A] or [32, Part IV, end of Section 2.1]). Moreover,
using this explicit expression it is proved in [1, Theorem 4.12] that $P(t)$
solves the following Riccati equation in $H$:
$\frac{d}{dt}\langle P(t)x,y\rangle_{H}=-\langle Ax,P(t)y\rangle_{H}-\langle
P(t)x,Ay\rangle_{H}-\langle{B}^{*}Q_{\infty}^{-1}P(t)x,{B}^{*}Q_{\infty}^{-1}P(t)y\rangle_{U},\quad
t>0,$ (24)
whose natural condition at $t=0$ is, heuristically,
$\lim_{t\rightarrow 0^{+}}P(t)=+\infty.$
It is not difficult to prove (see Proposition 4.3) that
$V_{\infty}(x)=\lim_{t\rightarrow+\infty}V(t,x)$ (25)
and that $V_{\infty}(x)=\frac{1}{2}\langle x,x\rangle_{H}$. This allows us to prove that $P=I_{H}$ (the identity on $H$) solves the ARE (14) in $H$ (Theorem 4.7-(ii)).
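The identity $V_{\infty}(x)=\frac{1}{2}\langle x,x\rangle_{H}$ can be checked numerically in a one-dimensional toy instance (our own choice of data, not from the text): with $X=U=\mathbb{R}$, $Ax=-ax$, $a>0$, $B=1$, one has $Q_{\infty}=1/(2a)$, hence $V_{\infty}(x)=\frac{1}{2}Q_{\infty}^{-1}x^{2}=ax^{2}$, and the standard minimum-energy control $u^{*}(s)=B^{*}e^{-sA^{*}}Q_{\infty}^{-1}x=2axe^{as}$ ($s\leq 0$) should drive $0$ (at $-\infty$) to $x$ (at $0$) with energy exactly $ax^{2}$. A minimal sketch verifying both facts by quadrature:

```python
import math

# Illustrative scalar instance (our choice, not from the paper):
# X = U = R, A = -a with a > 0, B = 1, so Q_inf = 1/(2a) and V_inf(x) = a x^2.
a, x = 1.0, 1.0

def u_star(s):
    # Candidate minimum-energy control u*(s) = B* e^{-s A*} Q_inf^{-1} x = 2 a x e^{a s}.
    return 2.0 * a * x * math.exp(a * s)

def trapezoid(f, lo, hi, n):
    """Composite trapezoidal rule on [lo, hi] with n subintervals."""
    h = (hi - lo) / n
    total = 0.5 * (f(lo) + f(hi))
    for k in range(1, n):
        total += f(lo + k * h)
    return total * h

# Terminal state y(0) = \int_{-inf}^{0} e^{(0-tau)A} B u(tau) dtau, cf. (7);
# the tail below s = -30 is O(e^{-60}) and negligible.
y0 = trapezoid(lambda s: math.exp(a * s) * u_star(s), -30.0, 0.0, 60000)

# Energy J = (1/2) \int_{-inf}^{0} |u(s)|^2 ds, cf. (10).
energy = 0.5 * trapezoid(lambda s: u_star(s) ** 2, -30.0, 0.0, 60000)

print(round(y0, 6), round(energy, 6))  # both approximately x = 1 and a x^2 = 1
```

Changing the truncation point or the step count only affects accuracy in the expected way; the exact values are $y(0)=x$ and $J=ax^{2}$.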
However, due to the infinite initial condition of $P(t)$ at $t=0$ (similarly to what happens in [28]), the above limit does not help to prove any comparison theorem for (14). Here lies the main difficulty: even in very simple cases, it is not known in the literature whether the ARE characterizes $V_{\infty}$ or not (see e.g. [7]). To get a comparison result we proceed as follows.
* •
We first introduce a suitable auxiliary problem (beginning of Section 3).
* •
Next, we prove a comparison result for the auxiliary problem (Subsection 3.1,
Lemma 3.10).
* •
Finally we use the relation between the auxiliary problem and the original problem to prove our main maximality result (Theorem 4.7).
The idea of introducing an auxiliary problem is exploited in [28], too.
However the method used there cannot work here, due to the different sign of
the linear part of our equation.
## 3 The auxiliary problem
In this section we introduce an auxiliary problem which can be considered a
“time reversed” version of the auxiliary problem considered in [28] (see also
Remark 3.3 about this). This problem will be a key tool to prove the main
result, Theorem 4.7. Indeed, as we will see, any solution of our Algebraic Riccati Equation (14) is also, under appropriate assumptions, a solution of this auxiliary problem with itself as initial datum; a comparison argument will then allow us to obtain the main result.
Throughout this section Hypothesis 2.2 will be always assumed, while
Hypothesis 2.7 will be used when necessary.
Let us consider, for $x\in X$, the following set of controls:
$\overline{{\cal U}}_{[-t,0]}(x)=\{(z,u)\in H\times L^{2}(-t,0;U):y(0)=x\},$
(26)
where $y(\cdot):=y(\cdot;-t,z,u)$ is the solution of the Cauchy problem
(similar to (15) but with generic initial datum $z$)
$\left\{\begin{array}{l}y^{\prime}(r)=Ay(r)+Bu(r),\quad r\in\,]-t,0],\\ y(-t)=z.\end{array}\right.$ (27)
Note that a control in $\overline{{\cal U}}_{[-t,0]}(x)$ is a pair: an initial point $z\in H$ and a control $u\in{\cal U}_{[-t,0]}(z,x)$, where
${\cal U}_{[-t,0]}(z,x)=\{u\in L^{2}(-t,0;U):\;y(0;-t,z,u)=x\}.$ (28)
(this is similar to the set (18) but with a generic initial datum $z$). The
following is true:
###### Proposition 3.1.
Define the reachable set from the point $z$ as
${\mathbf{R}}_{[-t,0]}^{z}:=\left\{x\in X:\ {\cal U}_{[-t,0]}(z,x)\neq\emptyset\right\},$ (29)
and set
$\bar{\mathbf{R}}_{[-t,0]}:=\bigcup_{z\in H}{\mathbf{R}}^{z}_{[-t,0]}.$ (30)
Then the set $\overline{{\cal U}}_{[-t,0]}(x)$ introduced in (26) is nonempty
if and only if $x\in\bar{\mathbf{R}}_{[-t,0]}$. Moreover we have
$\bar{\mathbf{R}}_{[-t,0]}\subseteq H,$ (31)
with equality for $t\geq T_{0}$, if Hypothesis 2.7 holds.
###### Proof.
The first statement is an immediate consequence of the definition of reachable
set in (29). The second one follows from (99), Lemma A.4-(i), the fact that
${\cal R}(Q_{t}^{1/2})\subseteq{\cal R}(Q_{\infty}^{1/2})$ (with equality, for
$t\geq T_{0}$, when Hypothesis 2.7 holds), and the equality
${\cal R}\left({\cal L}_{-t,0}\right)={\mathbf{R}}^{0}_{[-t,0]}={\cal
R}(Q_{t}^{1/2}),\qquad t\in[0,+\infty]$ (32)
(here ${\cal L}_{-t,0}$ is the operator defined in (98)), which is proved in
[32, Theorem 2.3] for $t<+\infty$. Such equality holds also when $t=+\infty$
with exactly the same proof. ∎
Given a bounded selfadjoint positive operator $N$ on $H$ we want to minimize,
in the class $\overline{{\cal U}}_{[-t,0]}(x)$, the following functional with
an initial cost:
$J^{N}_{[-t,0]}(z,u)=\frac{1}{2}\langle
Nz,z\rangle_{H}+\frac{1}{2}\int_{-t}^{0}\|u(s)\|_{U}^{2}\,\mathrm{d}s.$ (33)
The presence of the operator $N\in{\cal L}_{+}(H)$ forces us to fix the
starting point $z$ at time $-t$ in $H$, rather than in $X$. Define
$V^{N}(t,x)=\inf_{(z,u)\in\overline{{\cal
U}}_{[-t,0]}(x)}J^{N}_{[-t,0]}(z,u)=\inf_{z\in H}\left[\inf_{u\in{\cal
U}_{[-t,0]}(z,x)}J^{N}_{[-t,0]}(z,u)\right],\ t>0,\ x\in X,$ (34)
with the agreement that the infimum over the empty set is $+\infty$, so that $V^{N}(t,x)$ is finite only when $x\in H$. Now we provide a relation between
$V^{N}$ and the value function $V$ defined in (17).
###### Proposition 3.2.
We have
$V^{N}(t,x)=\inf_{z\in H}\left[V(t,x-e^{tA}z)+\frac{1}{2}\langle
Nz,z\rangle_{H}\right],\quad t>0,\ x\in X$ (35)
and, in particular,
$V^{N}(t,x)\leq V(t,x)\qquad\forall x\in X,\quad\forall t>0.$ (36)
###### Proof.
We use (96), (100) and (101) getting
$\inf_{u\in{\cal
U}_{[-t,0]}(z,x)}J^{N}_{[-t,0]}(z,u)=V_{1}(-t,0;z,x)+\frac{1}{2}\langle
Nz,z\rangle_{H}=V(t,x-e^{tA}z)+\frac{1}{2}\langle Nz,z\rangle_{H}\,.$
This equality immediately implies (35). Taking $z=0$ we get (36). ∎
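In the one-dimensional toy case $X=U=\mathbb{R}$, $Ax=-ax$ ($a>0$), $B=1$ (an illustration of ours, not from [1]), formula (35) can be made fully explicit: here $Q_{t}=(1-e^{-2at})/(2a)$, $V(t,\xi)=\frac{1}{2}Q_{t}^{-1}\xi^{2}$ and $\langle Nz,z\rangle_{H}=2aNz^{2}$ for a scalar $N\geq 0$, so that minimizing the quadratic function of $z$ in (35) gives
$V^{N}(t,x)=\frac{1}{2}\,\frac{2aN}{2aNQ_{t}+e^{-2at}}\,x^{2}.$
In particular $V^{N}(t,x)\leq\frac{1}{2}Q_{t}^{-1}x^{2}=V(t,x)$, in agreement with (36), and $V^{N}(t,x)\rightarrow ax^{2}=V_{\infty}(x)$ as $t\rightarrow+\infty$ for every $N>0$, independently of $N$.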
The following remark is crucial for identifying the “natural” Riccati equation associated to this auxiliary problem.
###### Remark 3.3.
If $A$ generates not just a $C_{0}$-semigroup but a $C_{0}$-group, the
auxiliary problem can be shown, under appropriate assumptions, to be
equivalent, reversing the time, to a standard optimization problem with final
cost. Indeed, given $x\in H$, consider the problem of minimizing, over all
$v(\cdot)\in L^{2}(0,t;U)$, the functional
$\widehat{J}^{N}_{[0,t]}(x,v)=\frac{1}{2}\langle
Nw(t),w(t)\rangle_{H}+\frac{1}{2}\int_{0}^{t}\|v(s)\|_{U}^{2}\,\mathrm{d}s,$
(37)
where $w(\cdot):=w(\cdot;0,x,v)$ is the mild solution of the Cauchy problem
$w^{\prime}(s)=-Aw(s)+Bv(s),\quad s\in\,]0,t],\qquad w(0)=x.$ (38)
Assume now that, for every $x\in H$, the mild solution $w(\cdot;0,x,v)$
belongs to $H$ for every $t>0$. Setting
$\widehat{V}^{N}(t,x)=\inf_{v\in L^{2}(0,t;U)}\widehat{J}^{N}_{[0,t]}(x,v),$
it can be seen that
$\widehat{V}^{N}(t,x)=V^{N}(t,x).$
To see this, fix $(t,x)\in[0,+\infty[\,\times H$ and recall that, for every
$(z,u)\in\overline{\cal U}_{[-t,0]}(x)$, we have
$e^{tA}z+\int_{-t}^{0}e^{-sA}Bu(s)ds=x\quad\Longleftrightarrow\quad
z+\int_{-t}^{0}e^{(-t-s)A}Bu(s)ds=e^{-tA}x;$
hence, changing variable in the integral,
$z=e^{t(-A)}x+\int_{0}^{t}e^{(t-s)(-A)}B(-u(-s))ds.$
This means that $\bar{\mathbf{R}}_{[-t,0]}=H$ (see (30)). Moreover, to any
$(z,u)\in\overline{\cal U}_{[-t,0]}(x)$ we can associate a function $v\in
L^{2}(0,t;U)$ such that $w(t)=z$, namely, $v(s)=-u(-s)$; consequently
$J^{N}_{[-t,0]}(z,u)=\widehat{J}^{N}_{[0,t]}(x,v).$ (39)
Conversely, given any $v\in L^{2}(0,t;U)$, set $z=w(t;0,x,v)$ and
$u(s)=-v(-s)$: then, clearly, $(z,u)\in\overline{\cal U}_{[-t,0]}(x)$ and,
again, (39) holds. In conclusion, there is a one-to-one correspondence between
the control set of the two problems and, in particular,
$\widehat{V}^{N}(t,x)=V^{N}(t,x)$.
The equation for the “time-reversed” problem (37)-(38) turns out to be the
following:
$\left\{\begin{array}{ll}\displaystyle\frac{d}{ds}\langle P^{N}(s)x,y\rangle_{H}=&-\langle Ax,P^{N}(s)y\rangle_{H}-\langle P^{N}(s)x,Ay\rangle_{H}-\\[5.69054pt] &-\langle{B}^{*}Q_{\infty}^{-1}P^{N}(s)x,{B}^{*}Q_{\infty}^{-1}P^{N}(s)y\rangle_{U}\,,\qquad s\in\,]0,t],\\[5.69054pt] P^{N}(0)=N.\end{array}\right.$ (40)
To give sense to (40) we must take $x,y\in{\cal D}(A)\cap H$ with $Ax,Ay\in H$ and $P^{N}(s)x,P^{N}(s)y\in{\cal R}(Q_{\infty})$. When
${B}^{*}Q_{\infty}^{-1}$ can be extended to a bounded operator $H\rightarrow
U$ and $A$ generates a group, then it is known that the value function
$\widehat{V}^{N}$ is quadratic and
$\widehat{V}^{N}(t,x)=\langle\widehat{P}^{N}(t)x,x\rangle_{H}$, where
$\widehat{P}^{N}:[0,+\infty[\rightarrow{\cal L}_{+}(H)$ is the unique solution
of (40). In our case this is not obvious, but it suggests anyway the right
form of the Riccati equation for our auxiliary problem. Note, finally, that
the right hand side of (40) is exactly one of the forms of the ARE we aim to
study (see (13)).
###### Remark 3.4.
As in the case $N=0$ treated in [1], in the above Riccati equations the sign
of the linear part is opposite to the usual one. In fact the control problem
(27)-(33) involves an “initial cost”, instead of a final cost like in the
standard problems (see e.g. [28]).
Our aim now is to prove that for every stationary solution $Q$ of the Riccati
equation (40) (in a suitable class to be defined later) there exists an
operator $N$, namely $Q$ itself, such that
$\frac{1}{2}\langle Qx,x\rangle_{H}\leq V^{N}(t,x),\qquad\hbox{for
sufficiently large $t$.}$
###### Remark 3.5.
It is possible to prove much more about the auxiliary problem, namely:
* (i)
that, for every $N\in{\cal L}_{+}(H)$ the value function $V^{N}$ is continuous
and is a quadratic form in $H$;
* (ii)
that, when $N$ is coercive (i.e., for some $\nu>0$, $\langle
Nx,x\rangle_{H}\geq\nu|x|^{2}_{H}$ for all $x\in H$), the linear operator
$P^{N}$ associated to the value function solves the Riccati equation (40);
* (iii)
that the comparison result mentioned above translates into the inequality
$P^{N}\geq Q^{N}$, in the preorder of positive operators, for every constant
solution $Q^{N}$ of the Riccati equation (40) in a suitable class.
This is the subject of a paper in progress.
### 3.1 A key comparison result
Given any initial datum $N\in{\cal L}_{+}(H)$, we want to compare the
“stationary” solutions of the Riccati equation (40) with the value function
$V^{N}$ of the auxiliary problem. This fact will be used, in the next section,
as a key tool to prove our main results. In order to do this we need first to
give a precise meaning to the concept of stationary solution of (40).
Roughly speaking, a stationary solution $P\in{\cal L}_{+}(H)$ of the Riccati
Equation (40) should also be a solution of the following Algebraic Riccati
Equation (ARE), which comes from the right hand side of (40):
$0=-\langle Ax,Py\rangle_{H}-\langle Px,Ay\rangle_{H}-\langle
B^{*}Q_{\infty}^{-1}Px,B^{*}Q_{\infty}^{-1}Py\rangle_{U}.$ (41)
This equation is meaningful for every $x,y\in{\cal D}(A)\cap H$ with
$Px,Py\in{\cal R}(Q_{\infty})$ and $Ax,Ay\in H$. Since the last requirement
appears too restrictive, we rewrite (41) by taking the first two inner
products in $X$, getting:
$0=-\langle Ax,Q_{\infty}^{-1}Py\rangle_{X}-\langle
Q_{\infty}^{-1}Px,Ay\rangle_{X}-\langle
B^{*}Q_{\infty}^{-1}Px,B^{*}Q_{\infty}^{-1}Py\rangle_{U}.$ (42)
This makes sense in a larger set of vectors $x,y$, namely for every
$x,y\in{\cal D}(A)\cap H$ with $Px,Py\in{\cal R}(Q_{\infty})$. (Note that
(41) is the same as (13), while (42) is the same as (14).) We can now provide
the precise definition of solution of (42).
###### Definition 3.6.
Let $P\in{\cal L}_{+}(H)$ and define the operator $\Lambda_{P}$ as follows:
$\left\\{\begin{array}[]{l}{\cal D}(\Lambda_{P})=\\{x\in H:\ Px\in{\cal
R}(Q_{\infty})\\}\\\\[5.69054pt] \Lambda_{P}x=Q_{\infty}^{-1}Px\qquad\forall
x\in{\cal D}(\Lambda_{P}).\end{array}\right.$ (43)
We say that $P$ is a solution of (42) (or, alternatively, a stationary
solution of (40)) if ${\cal D}(A)\cap{\cal D}(\Lambda_{P})$ is dense in $[\ker
Q_{\infty}]^{\perp}$ and
$0=-\langle
Ax,\Lambda_{P}y\rangle_{X}-\langle\Lambda_{P}x,Ay\rangle_{X}-\langle
B^{*}\Lambda_{P}x,B^{*}\Lambda_{P}y\rangle_{U}\qquad\forall x,y\in{\cal
D}(A)\cap{\cal D}(\Lambda_{P}).$ (44)
We now define a subclass ${\cal Q}$ of the class of all stationary solutions
of (40). First of all we recall that, by Lemma A.4-(i), $e^{tA}|_{H}$ is a
strongly continuous semigroup in $H$. We then use the following notation.
###### Notation 3.7.
We denote by $A_{0}:{\cal D}(A_{0})\subseteq H\rightarrow H$ the generator of
$e^{tA}|_{H}$, and we write $e^{tA_{0}}$ in place of $e^{tA}|_{H}$.
###### Definition 3.8.
Let $P\in{\cal L}_{+}(H)$. We say that $P\in{\cal Q}$ if there exists
$D\subseteq{\cal D}(\Lambda_{P})$ such that $D$ is dense in ${\cal D}(A)\cap
H$ with respect to the norm $\|\cdot\|_{H}+\|A\cdot\|_{X}$.
###### Lemma 3.9.
The set ${\cal R}(Q_{\infty})\cap{\cal D}(A)$ is dense in ${\cal D}(A)\cap H$,
equipped with the norm $\|\cdot\|_{H}+\|A\cdot\|_{X}$. Hence, choosing
$D={\cal R}(Q_{\infty})\cap{\cal D}(A)$, we have $P=I_{H}\in{\cal Q}$.
###### Proof.
Let $x\in H\cap{\cal D}(A)$ such that
$\langle x,z\rangle_{H}+\langle Ax,Az\rangle_{X}=0,\qquad\forall z\in{\cal
R}(Q_{\infty})\cap{\cal D}(A).$
It is enough to prove that $x=0$. Observe that, writing $z=Q_{\infty}y$,
$\langle x,Q_{\infty}y\rangle_{H}+\langle
Ax,AQ_{\infty}y\rangle_{X}=0,\qquad\forall y\in{\cal D}(AQ_{\infty}).$
Then
$\langle Ax,AQ_{\infty}y\rangle_{X}=-\langle x,Q_{\infty}y\rangle_{H}=-\langle
x,y\rangle_{X}\qquad\forall y\in{\cal D}(AQ_{\infty}).$
This means that $Ax\in{\cal D}((AQ_{\infty})^{*})$ and
$(AQ_{\infty})^{*}Ax=-x$. Hence
$\langle(AQ_{\infty})^{*}Ax,Ax\rangle_{X}=-\langle
x,Ax\rangle_{X}=|(-A)^{1/2}x|_{X}^{2}\geq 0.$
On the other hand we know, from [1, Lemma 3.1-(ii)], that, for every
$y\in{\cal D}((AQ_{\infty})^{*})\subseteq{\cal D}(AQ_{\infty})$
$2\langle(AQ_{\infty})^{*}y,y\rangle_{X}=-\|B^{*}y\|^{2}_{U}\,,$
so that
$2\langle(AQ_{\infty})^{*}Ax,Ax\rangle_{X}=-\|B^{*}Ax\|^{2}_{U}\leq 0.$
This implies that $\|(-A)^{1/2}x\|_{X}^{2}=0$; hence $Ax=0$ and, since $A$ is
invertible, $x=0$. ∎
###### Lemma 3.10.
Assume Hypothesis 2.7. Let $P\in{\cal L}_{+}(H)$ be a solution of (42)
according to Definition 3.6. Assume also that $P\in{\cal Q}$ and that $BB^{*}$
is coercive, which is equivalent to requiring that, for some $\mu>0$,
$\|B^{*}x\|^{2}_{U}\geq\mu\|x\|^{2}_{X}$ for all $x\in X$. Then, the following
estimate holds:
$\frac{1}{2}\langle Px,x\rangle_{H}\leq V^{P}(t-T_{0},x)\qquad\forall x\in H,\
\ \forall t>T_{0},$
where $V^{P}$ is the value function defined in (34) with $N=P$.
###### Proof.
Step 1. We prove the estimate
$\langle Px,x\rangle_{H}\leq\langle
Py(T_{0}-t),y(T_{0}-t)\rangle_{H}+\int_{T_{0}-t}^{0}\|u(s)\|_{U}^{2}\,\mathrm{d}s,\qquad
t>T_{0},$ (45)
for every $(z,u)\in\overline{{\cal U}}_{[-t,0]}(x)$ with $x\in H$, where $y$
is the state corresponding to $(z,u)$, i.e.
$y(s)=e^{(s+t)A}z+\int_{-t}^{s}e^{(s-\sigma)A}\,Bu(\sigma)\,d\sigma,\quad
s\in[-t,0].$ (46)
Such inequality would be easy to prove if we were able to compute
$\frac{d}{ds}\langle Py(s),y(s)\rangle_{H}$ and prove that
$\frac{d}{ds}\langle Py(s),y(s)\rangle_{H}\leq\|u(s)\|^{2}_{U},\qquad
s\in[-t,0].$
Unfortunately, we do not even know whether such a derivative exists. Hence we
need to build a delicate approximation procedure, as follows.
Fix $t>T_{0}$ and $x\in H$; consider any $(z,u)\in\overline{{\cal
U}}_{[-t,0]}(x)$. It is not restrictive to assume in (46) that
$u(\sigma)\in\overline{{\cal R}(B^{*})}$ for every $\sigma\in[-t,0]$: indeed,
writing, for every such $\sigma$,
$u(\sigma)=u_{1}(\sigma)+u_{2}(\sigma),\quad u_{1}(\sigma)\in\overline{{\cal
R}(B^{*})},\quad u_{2}(\sigma)\in\overline{{\cal R}(B^{*})}^{\perp}=\ker B,$
it is clear that $e^{(s-\sigma)A}Bu_{2}(\sigma)=0$. Hence
$y(s)=e^{(s+t)A}z+\int_{-t}^{s}e^{(s-\sigma)A}\,Bu_{1}(\sigma)\,d\sigma,\quad
s\in[-t,0].$
Since, evidently, $J^{P}_{[-t,0]}(z,u)\geq J^{P}_{[-t,0]}(z,u_{1})$, we can
always choose $u_{1}$ in place of $u$. Next, select a sequence
$\\{(z_{n},u_{n})\\}\subseteq\big{[}{\cal D}(A_{0})\big{]}\times
C^{1}_{0}([-t,0];U)$ (here $C^{1}_{0}([-t,0];U)$ denotes the set of $C^{1}$
$U$-valued functions which take the value $0$ at the boundary), such that $u_{n}$ is
${\cal R}(B^{*})$-valued and $(z_{n},u_{n})\rightarrow(z,u)$ in $H\times
L^{2}(-t,0;U)$. Thus we can set $u_{n}=B^{*}v_{n}$, where $v_{n}\in
C^{1}_{0}([-t,0],X)$ and, denoting by $y_{n}$ the corresponding state, we have
$y_{n}\in C^{1}([-t,0];H)\cap C([-t,0];{\cal D}(A))$ (see e.g. [27, Chapter 4,
Corollary 2.5]) and
$y_{n}(s)=e^{(s+t)A}z_{n}+\int_{-t}^{s}e^{(s-\sigma)A}\,BB^{*}v_{n}(\sigma)\,d\sigma,\qquad
s\in[-t,0].$
Thanks to the properties of the set $D$ of Definition 3.8, we can now choose,
for every $n\in\mathbb{N}$, another approximating sequence $\\{y_{nk}\\}_{k\in\mathbb{N}}\subset C^{1}([-t,0],H)\cap C^{0}_{0}([-t,0],{\cal D}(A))$, such that
$y_{nk}(s)\in D$ for every $s\in[-t,0]$ and satisfying, as
$k\rightarrow+\infty$,
$y_{nk}\rightarrow y_{n}\hbox{ in }C^{1}([-t,0];H),\qquad Ay_{nk}\rightarrow
Ay_{n}\hbox{ in }C([-t,0];X)$ (47)
(see e.g. [27, Chapter 4, Theorem 2.7]). Set now
$w_{nk}=y^{\prime}_{nk}-Ay_{nk}$. By (47) we get, for every $n\in\mathbb{N}$,
$w_{nk}\rightarrow y_{n}^{\prime}-Ay_{n}=BB^{*}v_{n}\hbox{ in
}C^{0}([-t,0];X)\qquad\hbox{as $k\rightarrow+\infty$.}$ (48)
We now can differentiate the quantity $\langle
Py_{nk}(s),y_{nk}(s)\rangle_{H}$ for $s\in[-t,0]$. Indeed, taking into account
the above definition of $w_{nk}$, we obtain, for $s\in[-t,0]$ and
$n,k\in\mathbb{N}$:
$\displaystyle\frac{d}{ds}\langle Py_{nk}(s),y_{nk}(s)\rangle_{H}=\langle
y_{nk}^{\prime}(s),Py_{nk}(s)\rangle_{H}+\langle
Py_{nk}(s),y_{nk}^{\prime}(s)\rangle_{H}=$ $\displaystyle=\langle
y_{nk}^{\prime}(s),\Lambda_{P}y_{nk}(s)\rangle_{X}+\langle\Lambda_{P}y_{nk}(s),y_{nk}^{\prime}(s)\rangle_{X}=$
$\displaystyle=\langle
Ay_{nk}(s)+w_{nk}(s),\Lambda_{P}y_{nk}(s)\rangle_{X}+\langle\Lambda_{P}y_{nk}(s),Ay_{nk}(s)+w_{nk}(s)\rangle_{X}.$
Since $P$ solves the ARE (44) we get, for every $s\in[-t,0]$,
$\displaystyle\frac{d}{ds}\langle Py_{nk}(s),y_{nk}(s)\rangle_{H}=$
$\displaystyle=-\|B^{*}\Lambda_{P}y_{nk}(s)\|_{U}^{2}+\langle
w_{nk}(s),\Lambda_{P}y_{nk}(s)\rangle_{X}+\langle\Lambda_{P}y_{nk}(s),w_{nk}(s)\rangle_{X}=$
$\displaystyle=-\|B^{*}\Lambda_{P}y_{nk}(s)\|_{U}^{2}+\langle
B^{*}v_{n}(s),B^{*}\Lambda_{P}y_{nk}(s)\rangle_{U}+\langle
B^{*}\Lambda_{P}y_{nk}(s),B^{*}v_{n}(s)\rangle_{U}+$
$\displaystyle\quad+\langle
w_{nk}(s)-BB^{*}v_{n}(s),\Lambda_{P}y_{nk}(s)\rangle_{X}+\langle\Lambda_{P}y_{nk}(s),w_{nk}(s)-BB^{*}v_{n}(s)\rangle_{X}=$
$\displaystyle=-\|B^{*}\Lambda_{P}y_{nk}(s)-B^{*}v_{n}(s)\|_{U}^{2}+\|B^{*}v_{n}(s)\|_{U}^{2}+$
$\displaystyle\quad+\langle
w_{nk}(s)-BB^{*}v_{n}(s),\Lambda_{P}y_{nk}(s)\rangle_{X}+\langle\Lambda_{P}y_{nk}(s),w_{nk}(s)-BB^{*}v_{n}(s)\rangle_{X}.$
Hence, recalling that $u_{n}=B^{*}v_{n}$, we may write for every
$\varepsilon>0$,
$\begin{array}[]{lcl}\displaystyle\frac{d}{ds}\langle
Py_{nk}(s),y_{nk}(s)\rangle_{H}&\leq&-\|B^{*}\Lambda_{P}y_{nk}(s)-B^{*}v_{n}(s)\|_{U}^{2}+\|u_{n}(s)\|_{U}^{2}+\\\\[8.53581pt]
&&+2\|w_{nk}(s)-Bu_{n}(s)\|_{X}\|\Lambda_{P}y_{nk}(s)\|_{X}\leq\\\\[5.69054pt]
&\leq&-\|B^{*}\Lambda_{P}y_{nk}(s)-B^{*}v_{n}(s)\|_{U}^{2}+\|u_{n}(s)\|_{U}^{2}+\\\\[5.69054pt]
&&\displaystyle+\frac{1}{\varepsilon}\|w_{nk}(s)-Bu_{n}(s)\|_{X}^{2}+\varepsilon\|\Lambda_{P}y_{nk}(s)\|_{X}^{2}.\end{array}$
(49)
Now observe that
$\varepsilon\|\Lambda_{P}y_{nk}(s)\|_{X}^{2}\leq\frac{\varepsilon}{\mu}\|B^{*}\Lambda_{P}y_{nk}(s)\|_{U}^{2}\leq
2\frac{\varepsilon}{\mu}\|B^{*}\Lambda_{P}y_{nk}(s)-B^{*}v_{n}(s)\|_{U}^{2}+2\frac{\varepsilon}{\mu}\|B^{*}v_{n}(s)\|_{U}^{2}.$
Inserting this inequality into (49) we get
$\begin{array}[]{lcl}\displaystyle\frac{d}{ds}\langle
Py_{nk}(s),y_{nk}(s)\rangle_{H}&\leq&\displaystyle-\left(1-2\frac{\varepsilon}{\mu}\right)\|B^{*}\Lambda_{P}y_{nk}(s)-B^{*}v_{n}(s)\|_{U}^{2}+\\\\[11.38109pt]
&&\displaystyle+\left(1+2\frac{\varepsilon}{\mu}\right)\|u_{n}(s)\|_{U}^{2}+\frac{1}{\varepsilon}\|w_{nk}(s)-Bu_{n}(s)\|_{X}^{2}.\end{array}$
(50)
Hence, for all positive $\varepsilon$ such that
$2\frac{\varepsilon}{\mu}\leq\frac{1}{2}$ we get
$\frac{d}{ds}\langle
Py_{nk}(s),y_{nk}(s)\rangle_{H}\leq\left(1+2\frac{\varepsilon}{\mu}\right)\|u_{n}(s)\|_{U}^{2}+\frac{1}{\varepsilon}\|w_{nk}(s)-Bu_{n}(s)\|_{X}^{2}\,.$
(51)
Now we have for every $s\in[-t,0]$, as $k\rightarrow\infty$,
$\|y_{nk}(s)-y_{n}(s)\|_{H}\rightarrow
0,\quad\|y_{nk}^{\prime}(s)-y_{n}^{\prime}(s)\|_{H}\rightarrow
0,\quad\|w_{nk}(s)-Bu_{n}(s)\|_{X}\rightarrow 0;$
thus we get, for every $n\in\mathbb{N}^{+}$, $s\in[-t,0]$ and
$0<\varepsilon\leq\mu/4$,
$\frac{d}{ds}\langle
Py_{n}(s),y_{n}(s)\rangle_{H}\leq\left(1+2\frac{\varepsilon}{\mu}\right)\|u_{n}(s)\|_{U}^{2}\,.$
Finally, letting $\varepsilon\rightarrow 0$,
$\frac{d}{ds}\langle
Py_{n}(s),y_{n}(s)\rangle_{H}\leq\|u_{n}(s)\|_{U}^{2}\quad\forall
n\in\mathbb{N}^{+},\quad\forall s\in[-t,0].$
We now integrate in the smaller interval $[T_{0}-t,0]$:
$\langle Py_{n}(0),y_{n}(0)\rangle_{H}\leq\langle
Py_{n}(T_{0}-t),y_{n}(T_{0}-t)\rangle_{H}+\int_{T_{0}-t}^{0}\|u_{n}(s)\|_{U}^{2}\,\mathrm{d}s.$
Letting $n\rightarrow\infty$, since $y_{n}(s)\rightarrow y(s)$ for every
$s\in[-t,0]$, $y(0)=x$, and $u_{n}\rightarrow u$ in $L^{2}(-t,0;U)$, we deduce
for every $(z,u)\in\overline{{\cal U}}_{[-t,0]}(x)$
$\langle Px,x\rangle_{H}\leq\langle
Py(T_{0}-t),y(T_{0}-t)\rangle_{H}+\int_{T_{0}-t}^{0}\|u(s)\|_{U}^{2}\,\mathrm{d}s,\qquad
t>T_{0};$
this is equation (45).
Step 2. We complete the proof of the lemma. Consider a sequence
$(\hat{z}_{n},\hat{u}_{n})\in\overline{{\cal U}}_{[T_{0}-t,0]}(x)$, such that,
as $n\rightarrow\infty$,
$J^{P}_{[T_{0}-t,0]}(\hat{z}_{n},\hat{u}_{n})\rightarrow\inf_{(z,u)\in\overline{{\cal
U}}_{[T_{0}-t,0]}(x)}J^{P}_{[T_{0}-t,0]}(z,u)=V^{P}(t-T_{0},x).$ (52)
Thus $\hat{z}_{n}\in H$, $\hat{u}_{n}\in L^{2}(T_{0}-t,0;U)$ and the
corresponding state is
$\hat{y}_{n}(s)=e^{(s+t-T_{0})A}\hat{z}_{n}+\int_{T_{0}-t}^{s}e^{(s-\sigma)A}B\hat{u}_{n}(\sigma)\,\mathrm{d}\sigma,\quad
s\in[T_{0}-t,0];$
in particular $\hat{y}_{n}(0)=x$. Now choose $\hat{v}_{n}\in
L^{2}(-t,T_{0}-t;U)$ such that
$\int_{-t}^{T_{0}-t}e^{(T_{0}-t-\sigma)A}B\hat{v}_{n}(\sigma)\,\mathrm{d}\sigma=\hat{z}_{n};$
(53)
this is possible since, due to Hypothesis 2.7, the range of the operator
(defined in (98))
$v\mapsto{\cal L}_{-t,T_{0}-t}(v)={\cal L}_{-T_{0},0}(v(\cdot+t-T_{0}))$
is all of $H$ (see [32, Theorem 2.3]). Then, setting
$\overline{u}_{n}=\left\\{\begin{array}[]{ll}\hat{v}_{n}&\textrm{in
}[-t,T_{0}-t]\\\\[5.69054pt] \hat{u}_{n}&\textrm{in
}[T_{0}-t,0],\end{array}\right.$
the state corresponding to $(0,\overline{u}_{n})$ in $[-t,0]$ is
$\overline{y}_{n}(s)=\int_{-t}^{s}e^{(s-\sigma)A}B\overline{u}_{n}(\sigma)\,\mathrm{d}\sigma.$
By (53) we have
$\overline{y}_{n}(T_{0}-t)=\int_{-t}^{T_{0}-t}e^{(T_{0}-t-\sigma)A}B\overline{u}_{n}(\sigma)\,\mathrm{d}\sigma=\hat{z}_{n};$
hence, by uniqueness,
$\overline{y}_{n}(s)=e^{(s+t-T_{0})A}\hat{z}_{n}+\int_{T_{0}-t}^{s}e^{(s-\sigma)A}B\hat{u}_{n}(\sigma)\,\mathrm{d}\sigma=\hat{y}_{n}(s)\qquad\forall
s\in[T_{0}-t,0],$
so that $\overline{y}_{n}(0)=\hat{y}_{n}(0)=x$. This shows that
$(0,\overline{u}_{n})\in\overline{{\cal U}}_{[-t,0]}(x)$, and consequently, by
(45),
$\langle Px,x\rangle_{H}\leq\langle
P\hat{z}_{n},\hat{z}_{n}\rangle_{H}+\int_{T_{0}-t}^{0}\|\hat{u}_{n}(s)\|^{2}_{U}\,\mathrm{d}s=2J^{P}_{[T_{0}-t,0]}(\hat{z}_{n},\hat{u}_{n}).$
Finally, by (52), as $n\rightarrow\infty$ we get
$\frac{1}{2}\langle Px,x\rangle_{H}\leq V^{P}(t-T_{0},x)\qquad\forall
t>T_{0},\quad\forall x\in H.$
∎
## 4 Minimum energy with (negative) infinite horizon
We now give a precise formulation of our infinite horizon problem (see
Subsection 2.2 and also [1, Remark 2.8]). We assume that Hypothesis 2.2 holds
throughout this section without repeating it. For any given control $u\in
L^{2}(-\infty,s;U)$ we take the state equation
$\left\\{\begin{array}[]{l}y^{\prime}(r)=Ay(r)+Bu(r),\quad
r\in\,]-\infty,s],\\\ y(-\infty)=0.\end{array}\right.$ (54)
By Lemma 2.5 we know that the unique solution of (54) belongs to
$C(\,]-\infty,s];X)$, is given by
$y(r):=y(r;-\infty,0,u)=\int_{-\infty}^{r}e^{(r-\tau)A}Bu(\tau)\,\mathrm{d}\tau,\quad-\infty<r\leq
s,$
and satisfies, for every $-\infty<r_{1}\leq r_{2}\leq s$,
$y(r_{2})=e^{(r_{2}-r_{1})A}y(r_{1})+\int_{r_{1}}^{r_{2}}e^{(r_{2}-\tau)A}Bu(\tau)\,\mathrm{d}\tau,\quad\textrm{and}\quad\lim_{r\rightarrow-\infty}y(r)=0\quad\hbox{
in $X$}.$
As for the finite horizon case, we define:
${\cal
U}_{[-\infty,s]}(0,x)\stackrel{{\scriptstyle\textrm{def}}}{{=}}\left\\{u\in
L^{2}(-\infty,s;U)\;:\;y(s;-\infty,0,u)=x\right\\},$ (55)
$J_{[-\infty,s]}(u)=\frac{1}{2}\int_{-\infty}^{s}\|u(r)\|^{2}_{U}\;\mathrm{d}r,$
$V_{1}(-\infty,s;0,x)\stackrel{{\scriptstyle\textrm{def}}}{{=}}\inf_{u\in{\cal
U}_{[-\infty,s]}(0,x)}J_{[-\infty,s]}(u),$
with the convention that the infimum over the empty set is $+\infty$. From (55)
it is easy to see that
$u(\cdot)\in{\cal U}_{[-\infty,s]}(0,x)\ \iff\ u(\cdot-s)\in{\cal
U}_{[-\infty,0]}(0,x);$ (56)
this implies that
$V_{1}(-\infty,s;0,x)=V_{1}(-\infty,0;0,x).$
From now on we set, as in (101)
$V_{\infty}(x)=V_{1}(-\infty,0;0,x)=\inf_{u\in{\cal
U}_{[-\infty,0]}(0,x)}J_{[-\infty,0]}(u),\quad x\in X.$ (57)
We collect now some results about the above problem and the function
$V_{\infty}$.
### 4.1 Optimal strategies
We start proving the existence of optimal strategies.
###### Proposition 4.1.
The set ${\cal U}_{[-\infty,0]}(0,x)$ is nonempty if and only if $x\in H$.
Moreover, for every $x\in H$ there exists a unique $\hat{u}_{x}\in{\cal
U}_{[-\infty,0]}(0,x)$ such that
$V_{\infty}(x)=J_{[-\infty,0]}(\hat{u}_{x}).$
###### Proof.
The first statement follows from (32) as in Proposition 3.1. Now take $x\in H$
and observe that any minimizing sequence $\\{u_{n}\\}_{n\in{\mathbb{N}}}$ must
be bounded in $L^{2}(-\infty,0;U)$; so, passing to a subsequence, we have
$u_{n}\rightharpoonup\hat{u}_{x}$ in $L^{2}(-\infty,0;U)$. As the functional
$J_{[-\infty,0]}$ is weakly lower semicontinuous, we get
$V_{\infty}(x)\leq
J_{[-\infty,0]}(\hat{u}_{x})\leq\liminf_{n\rightarrow\infty}J_{[-\infty,0]}(u_{n})=V_{\infty}(x),$
i.e. $\hat{u}_{x}$ is optimal. Uniqueness is an easy consequence of the strict
convexity of the functional $J_{[-\infty,0]}$. ∎
Moreover we have the following result about the optimal couples when
$x\in{\cal R}(Q_{\infty})$ (see [1, Proposition C.3 and Remark C.4]).
###### Proposition 4.2.
Let $x\in{\cal R}(Q_{\infty})$. Let $(\hat{y}_{x},\hat{u}_{x})$ be the optimal
couple for our problem with target $x$. Then we have
$\hat{u}_{x}(r)=B^{*}e^{-rA^{*}}{Q}^{-1}_{\infty}x,\quad r\in\,]-\infty,0].$
(58)
Moreover the corresponding optimal state $\hat{y}_{x}$ satisfies
$\hat{y}_{x}(r)=Q_{\infty}e^{-rA^{*}}Q_{\infty}^{-1}x,\quad
r\in\,]-\infty,0];$ (59)
hence the optimal couple satisfies the feedback formula
$\hat{u}_{x}(r)=B^{*}Q_{\infty}^{-1}\hat{y}_{x}(r),\quad r\in\,]-\infty,0],$
(60)
and, formally, $\hat{y}_{x}$ is a solution of the backward closed loop
equation (BCLE)
$y^{\prime}(r)=(A+BB^{*}Q_{\infty}^{-1})y(r),\quad r\in\,]-\infty,0[\,,\quad
y(0)=x,$ (61)
which, since $Q_{\infty}$ solves the Lyapunov equation (see [1, Proposition
3.3]), rewrites as
$y^{\prime}(r)=-Q_{\infty}A^{*}Q_{\infty}^{-1}y(r),\quad r\in\,]-\infty,0[\,.$
(62)
If $A^{*}$ commutes with $Q_{\infty}$ (e.g. when $A$ is selfadjoint and
invertible, and $A$ and $BB^{*}$ commute), then (62) becomes
$y^{\prime}(r)=-A^{*}y(r),\quad r\in\,]-\infty,0[\,.$ (63)
This means that, in such case, the optimal trajectory arriving at $x$ is given
by
$y(r)=e^{-rA^{*}}x,\quad r\in\,]-\infty,0].$
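As a hedged numerical illustration (not taken from the paper), the formulas (58)-(60) and the closed loop dynamics can be checked in the scalar case $X=U=\mathbb{R}$, $A=-a$, $B=b$, where $A$ is trivially selfadjoint and commutes with $BB^{*}$; the constants $a$, $b$, and the target $x$ below are arbitrary:

```python
import math

# Hypothetical scalar data: X = U = R, A = -a (a > 0), B = b (b != 0).
a, b, x = 1.3, 0.7, 2.0
Q_inf = b * b / (2.0 * a)                              # Gramian

def u_hat(r):
    # Formula (58): u(r) = B* e^{-r A*} Q_inf^{-1} x, here = b e^{a r} x / Q_inf.
    return b * math.exp(a * r) * x / Q_inf

def y_hat(r):
    # Formula (59); in the scalar case e^{-r A*} commutes with Q_inf.
    return math.exp(a * r) * x

# Feedback formula (60): u = B* Q_inf^{-1} y.
for r in (-3.0, -1.0, -0.1):
    assert abs(u_hat(r) - b * y_hat(r) / Q_inf) < 1e-12

# y_hat solves y' = A y + B u with y(0) = x (central finite differences).
h = 1e-6
for r in (-2.0, -0.5):
    dy = (y_hat(r + h) - y_hat(r - h)) / (2.0 * h)
    assert abs(dy - (-a * y_hat(r) + b * u_hat(r))) < 1e-5
assert abs(y_hat(0.0) - x) < 1e-12

# The minimal energy (1/2) \int_{-inf}^0 u^2 dr equals (1/2) Q_inf^{-1} x^2.
energy = 0.5 * (b * x / Q_inf) ** 2 / (2.0 * a)
assert abs(energy - 0.5 * x * x / Q_inf) < 1e-12
```

The last assertion anticipates the identification $V_{\infty}(x)=\frac{1}{2}\langle Q_{\infty}^{-1}x,x\rangle_{X}$ discussed in the next subsections.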
### 4.2 Connection with the finite horizon case
We now prove the connection between $V_{\infty}$ and the value function $V$ of
the corresponding finite horizon problem which is studied in [1] (see also
Appendix A).
###### Proposition 4.3.
Under Hypothesis 2.7, for every $x\in H$ we have
$V_{\infty}(x)=\lim_{t\rightarrow+\infty}V(t,x)=\inf_{t>0}V(t,x).$
Moreover $V_{\infty}(x)=\frac{1}{2}\|x\|_{H}^{2}$.
###### Proof.
First of all, by [1, Proposition 4.8-(i)], the function $V(\cdot,x)$ is
decreasing for every $x\in H$; hence, for every such $x$
$\exists\,\lim_{t\rightarrow+\infty}V(t,x)=\inf_{t>0}V(t,x).$
We now prove that $V_{\infty}(x)\leq\inf_{t>0}V(t,x)$. With an abuse of
notation we can write
${\cal U}_{[-t,0]}(0,x)\subseteq{\cal U}_{[-\infty,0]}(0,x)\qquad\forall t>0:$
indeed, given a control bringing $0$ to $x$ in the interval $[-t,0]$, we can
extend it to a control bringing $0$ to $x$ in the interval $[-\infty,0]$ just
taking the null control on $\,]-\infty,-t]$. So, if the set ${\cal
U}_{[-t,0]}(0,x)$ is not empty, then a fortiori the set ${\cal
U}_{[-\infty,0]}(0,x)$ will not be empty. This fact, together with the
monotonicity of $V(\cdot,x)$, implies that $V_{\infty}(x)\leq\inf_{t>0}V(t,x)$.
We now prove that $V_{\infty}(x)=\inf_{t>0}V(t,x)$. Assume by contradiction
that $V_{\infty}(x)<\inf_{t>0}V(t,x)$, and let $\varepsilon>0$ be such that
$V_{\infty}(x)+2\varepsilon<\inf_{t>0}V(t,x)$. Take $u_{\varepsilon}\in{\cal
U}_{[-\infty,0]}(0,x)$ such that
$J_{[-\infty,0]}(u_{\varepsilon})<V_{\infty}(x)+\varepsilon$. By (5) we get
$x=\int_{-\infty}^{0}e^{-\tau
A}Bu_{\varepsilon}(\tau)\,\mathrm{d}\tau=e^{tA}y(-t)+\int_{-t}^{0}e^{-\tau
A}Bu_{\varepsilon}(\tau)\,\mathrm{d}\tau\qquad\forall t>0;$
hence we have $u_{\varepsilon}|_{[-t,0]}\in{\cal U}_{[-t,0]}(y(-t),x)$, which
in turn implies that
$V(t,x-e^{tA}y(-t))\leq\frac{1}{2}\int_{-t}^{0}\|u_{\varepsilon}(s)\|_{U}^{2}\,\mathrm{d}s.$
(64)
Now we observe that for every $\delta\in\,]0,1[\,$ we may choose
$t_{\delta}>T_{0}+1$ such that $\|e^{tA}y(-t)\|_{H}\leq\delta$ for every
$t>t_{\delta}$: indeed, by Hypothesis 2.7 and Lemma A.1-(v) we have
$\displaystyle\|e^{tA}y(-t)\|_{H}$ $\displaystyle=$
$\displaystyle\|Q_{\infty}^{-1/2}e^{tA}y(-t)\|_{X}\leq\|Q_{\infty}^{-1/2}e^{A}\|_{{\cal
L}(X)}\|e^{(t-1)A}y(-t)\|_{X}\leq$ $\displaystyle\leq$
$\displaystyle\|Q_{\infty}^{-1/2}e^{A}\|_{{\cal
L}(X)}Me^{-\omega(t-1)}\|y(-t)\|_{X}\,.$
Since $y(-t)$ is uniformly bounded in $X$ for $t>0$, we have the claim.
Proceeding with the proof, we recall that, by [1, Proposition 4.8-(iii)-(b)],
we have uniform continuity of $V$ on $[T_{0},+\infty]\times B_{H}(0,R)$ for
every $R>0$, where $B_{H}(0,R)$ is the ball of center $0$ and radius $R$ in
$H$. So, setting $R=\|x\|_{H}+1$, and denoting by $\rho_{R}$ the continuity
modulus of $V$ on $[T_{0},+\infty]\times B_{H}(0,R)$, we have for
$t>t_{\delta}$
$V\left(t,x-e^{tA}y(-t)\right)>V(t,x)-\rho_{R}(\delta).$
The above, together with (64), implies that
$V(t,x)-\rho_{R}(\delta)\leq V_{\infty}(x)+\varepsilon\qquad\forall
t>t_{\delta}\,.$
Now it is enough to choose $\delta$ such that $\rho_{R}(\delta)<\varepsilon$
to get a contradiction.
Finally the last statement follows from [1, Proposition 4.8-(iii)-(d)]. ∎
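A scalar sketch of Proposition 4.3 can be carried out numerically. Here we assume (this is an assumption for illustration, not a statement of this section) the standard minimum-energy formula $V(t,x)=\frac{1}{2}\langle Q_{t}^{-1}x,x\rangle_{X}$ with finite-horizon Gramian $Q_{t}=\int_{0}^{t}e^{2sA}b^{2}\,\mathrm{d}s$ in the one-dimensional case $A=-a$, $B=b$, with made-up constants:

```python
import math

# Scalar sketch; the finite-horizon formula V(t, x) = x^2 / (2 Q_t) with
# Q_t = \int_0^t e^{2 s A} b^2 ds is an assumed (standard) minimum-energy
# Gramian formula, used here only for illustration.
a, b, x = 0.8, 1.1, 1.5

def Q(t):
    return b * b * (1.0 - math.exp(-2.0 * a * t)) / (2.0 * a)

def V(t):
    return 0.5 * x * x / Q(t)

Q_inf = b * b / (2.0 * a)
V_inf = 0.5 * x * x / Q_inf

# V(., x) decreases and converges to V_inf = (1/2) <Q_inf^{-1} x, x> from above.
ts = [0.5, 1.0, 2.0, 5.0, 10.0, 20.0]
vals = [V(t) for t in ts]
assert all(v1 >= v2 >= V_inf for v1, v2 in zip(vals, vals[1:]))
assert abs(V(20.0) - V_inf) < 1e-10
```

Since $Q_{t}\uparrow Q_{\infty}$, the map $t\mapsto V(t,x)$ is decreasing with limit $V_{\infty}(x)$, matching $V_{\infty}(x)=\lim_{t\rightarrow+\infty}V(t,x)=\inf_{t>0}V(t,x)$.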
### 4.3 Algebraic Riccati Equation
We deal with the Algebraic Riccati Equation (ARE from now on) associated to
our infinite horizon problem. As is well known, when the value function is a
quadratic form in the state space $X$, the ARE is an equation whose unknown is
an operator $R$. A typical goal in studying such ARE is to prove that the
operator representing the quadratic form in $X$ given by the value function is
a solution (possibly unique) of the associated ARE. Formally our ARE is given
as follows:
$0=-\langle Ax,Ry\rangle_{X}-\langle
Rx,Ay\rangle_{X}-\langle{B}^{*}Rx,{B}^{*}Ry\rangle_{U},\qquad x,y\in{\cal
D}(A).$ (65)
In our case (see Proposition 4.1) the value function $V_{\infty}$ is finite
only in $H$ so that the operator $R$ above must be unbounded and the above
equation makes sense only for $x,y\in{\cal D}(A)\cap{\cal D}(R)$. Moreover, by
Proposition 4.3, $V_{\infty}$ is a quadratic form on the space $H$,
represented by the identity operator $I_{H}\in{\cal L}(H)$, i.e.
$V_{\infty}(x)=\frac{1}{2}\|x\|^{2}_{H}$. Consequently, transforming such norm
in $X$, it must be $V_{\infty}(x)=\frac{1}{2}\|Q_{\infty}^{-1/2}x\|_{X}^{2}$,
and, when $x\in{\cal R}(Q_{\infty})$, $V_{\infty}(x)=\frac{1}{2}\langle
Q_{\infty}^{-1}x,x\rangle_{X}$. Hence it is natural to deduce that the
operator representing $V_{\infty}$ in the space $X$ is $Q_{\infty}^{-1}$.
Due to the unboundedness of the candidate solution $R$ of the ARE (65), it
seems better to study the corresponding ARE in the space $H$, with unknown
$P\in{\cal L}(H)$ whose form (taking $R=Q_{\infty}^{-1}P$) must be (compare
with (14)):
$0=-\langle Ax,Q^{-1}_{\infty}Py\rangle_{X}-\langle
Q^{-1}_{\infty}Px,Ay\rangle_{X}-\langle{B}^{*}Q_{\infty}^{-1}Px,{B}^{*}Q_{\infty}^{-1}Py\rangle_{U}.$
(66)
Note that such expression makes sense only when $Px,Py\in{\cal R}(Q_{\infty})$
and $x,y\in{\cal D}(A)\cap H$.
By Proposition 4.3, we expect that the positive selfadjoint operator $P=I_{H}$
associated with the value function $V_{\infty}$ is a solution of the above ARE
(66). Similarly we expect that $R=Q_{\infty}^{-1}$ is a solution of the above
ARE (65). As they cannot be unique (the zero operator is always a solution of
both), we somehow expect such solutions to be maximal in some suitable sense.
###### Remark 4.4.
In the finite-dimensional case, when the operator $Q_{\infty}$ is invertible,
it is proved that the operator $R=Q_{\infty}^{-1}$ solves (65), using the fact
that its inverse $W=Q_{\infty}$ is the unique solution of the Lyapunov
equation
$AW+WA^{*}=-BB^{*}$ (67)
among all positive definite bounded operators $X\rightarrow X$. This is
reported by Scherpen [30, Theorem 2.2], who quotes Moore [25] for the proof
(see also [21, Chapters 5 and 7] for related results). In fact, as we will
see, this procedure works in our infinite-dimensional case too, but with more
difficulties.
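The finite-dimensional mechanism described in Remark 4.4 can be checked directly on a small made-up diagonal example (a numerical sketch only, not part of any proof; the data below are hypothetical):

```python
# Hypothetical diagonal data on R^2 (not from the paper): A = diag(-a_i),
# B B* = diag(b_i^2).
a = [1.0, 2.5]
bsq = [0.6, 1.9]

# The Gramian W = Q_inf = diag(b_i^2 / (2 a_i)) solves A W + W A* = -B B*.
W = [bsq[i] / (2.0 * a[i]) for i in range(2)]
for i in range(2):
    assert abs((-a[i]) * W[i] + W[i] * (-a[i]) + bsq[i]) < 1e-12

# R = Q_inf^{-1} solves the ARE (65):
# 0 = -<Ax, Ry> - <Rx, Ay> - <B* R x, B* R y>  for all x, y.
R = [1.0 / w for w in W]

def are_lhs(x, y):
    s = 0.0
    for i in range(2):
        s += -((-a[i]) * x[i]) * (R[i] * y[i])            # -<Ax, Ry>
        s += -(R[i] * x[i]) * ((-a[i]) * y[i])            # -<Rx, Ay>
        s += -bsq[i] * (R[i] * x[i]) * (R[i] * y[i])      # -<B*Rx, B*Ry>
    return s

assert abs(are_lhs([1.0, -2.0], [0.3, 0.7])) < 1e-10
```

Componentwise, each diagonal entry contributes $r_{i}x_{i}y_{i}(2a_{i}-b_{i}^{2}r_{i})$, which vanishes exactly for $r_{i}=2a_{i}/b_{i}^{2}$, i.e. for $R=Q_{\infty}^{-1}$.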
###### Definition 4.5.
* (i)
An operator $P\in{\cal L}_{+}(H)$ is a solution of the ARE (66) if the set
${\cal D}(A)\cap{\cal D}(\Lambda_{P})$ (see (43)) is dense in $H$ and the
equation (66) is satisfied for all $x,y\in{\cal D}(A)\cap{\cal
D}(\Lambda_{P})$.
* (ii)
A positive, selfadjoint, possibly unbounded operator $R:{\cal D}(R)\subset
X\rightarrow X$ is a solution of the ARE (65) if the set ${\cal D}(A)\cap{\cal
D}(R)$ is dense in $[\ker Q_{\infty}]^{\perp}$ (in the topology inherited by
$X$) and the equation (65) is satisfied for all $x,y\in{\cal D}(A)\cap{\cal
D}(R)$.
###### Proposition 4.6.
The following facts are equivalent.
(i) $P\in{\cal L}_{+}(H)$ is a solution to (66);
(ii) $R=Q_{\infty}^{-1}P$ is a solution to (65) and it satisfies, in addition,
$Q_{\infty}^{1/2}RQ_{\infty}^{1/2}\in{\cal L}(X)$.
###### Proof.
(i) Assume that $P\in{\cal L}_{+}(H)$ solves (66). Then, in particular the set
${\cal D}(A)\cap{\cal D}(\Lambda_{P})$ is dense in $H$. Setting
$R=Q_{\infty}^{-1}P$ we see that its domain is exactly ${\cal
D}(\Lambda_{P})$, which is dense in $[\ker Q_{\infty}]^{\perp}$. The fact that
such $R$ satisfies (65) for every $x,y\in{\cal D}(A)\cap{\cal D}(\Lambda_{P})$
follows by simple substitution. Finally, for every $x\in X$ we have
$\|Q_{\infty}^{1/2}RQ_{\infty}^{1/2}x\|_{X}=\|Q_{\infty}^{-1/2}PQ_{\infty}^{1/2}x\|_{X}=\|PQ_{\infty}^{1/2}x\|_{H}\leq\|P\|_{{\cal
L}(H)}\|Q_{\infty}^{1/2}x\|_{H}=\|P\|_{{\cal L}(H)}\|x\|_{X}\,.$
(ii) Let $R:{\cal D}(R)\rightarrow X$ be a solution of (65), having the
property $Q_{\infty}^{1/2}RQ_{\infty}^{1/2}\in{\cal L}(X)$: note that, in this
case, ${\cal D}(R)$ must coincide with $H$. Thus ${\cal D}(A)\cap{\cal D}(R)$
is dense in $H$, since it contains ${\cal D}(A_{0})$. We set $P=Q_{\infty}R$:
then $P\in{\cal L}_{+}(H)$ since, for every $x\in H$,
$\displaystyle\|Px\|_{H}$ $\displaystyle=$
$\displaystyle\|Q_{\infty}Rx\|_{H}=\|Q_{\infty}^{1/2}[Q_{\infty}^{1/2}RQ_{\infty}^{1/2}]Q_{\infty}^{-1/2}x\|_{H}=\|[Q_{\infty}^{1/2}RQ_{\infty}^{1/2}]Q_{\infty}^{-1/2}x\|_{X}\leq$
$\displaystyle\leq$ $\displaystyle\|Q_{\infty}^{1/2}RQ_{\infty}^{1/2}\|_{{\cal
L}(X)}\|Q_{\infty}^{-1/2}x\|_{X}=\|Q_{\infty}^{1/2}RQ_{\infty}^{1/2}\|_{{\cal
L}(X)}\|x\|_{H}\,.$
Moreover, we see immediately that ${\cal D}(\Lambda_{P})=H$. In addition, (65)
transforms into (66), and it holds for every $x,y\in{\cal D}(A)\cap{\cal
D}(R)$, i.e. it holds for every $x,y\in{\cal D}(A)\cap H={\cal D}(A)\cap{\cal
D}(\Lambda_{P})$, as required by Definition 4.5. ∎
Concerning the two AREs (66) and (65) we have the following result.
###### Theorem 4.7.
Let Hypothesis 2.7 hold true.
* (i)
The operator $R=Q_{\infty}^{-1}$ is a solution of the Riccati equation (65) in
the sense of Definition 4.5(ii).
* (ii)
The operator $P=I_{H}$ is a solution of the Riccati equation (66) in the sense
of Definition 4.5(i).
* (iii)
Assume that $BB^{*}$ is coercive. Then the operator $I_{H}$ is the maximal
solution of (66) in the following sense: if $\hat{P}$ is another solution of
(66) in the sense of Definition 4.5-(i), belonging to the class ${\cal Q}$
introduced in Definition 3.8, then
$\frac{1}{2}\langle\hat{P}x,x\rangle_{H}\leq\frac{1}{2}\langle
x,x\rangle_{H}=V_{\infty}(x)\qquad\forall x\in H.$
###### Proof.
(i) By [1, Proposition 3.3], $Q_{\infty}$ solves the Lyapunov equation, i.e.
we have for every $\xi\in{\cal D}(A^{*})$
$AQ_{\infty}\xi+Q_{\infty}A^{*}\xi+BB^{*}\xi=0.$
This implies that, for every $\xi\in{\cal D}(A^{*})$ and $\eta\in X$,
$\langle AQ_{\infty}\xi,\eta\rangle_{X}+\langle
Q_{\infty}A^{*}\xi,\eta\rangle_{X}+\langle B^{*}\xi,B^{*}\eta\rangle_{U}=0.$
When $\eta\in{\cal D}(AQ_{\infty})$ the second term above rewrites as
$\langle\xi,AQ_{\infty}\eta\rangle_{X}$. Consequently, when $\eta\in{\cal
D}(AQ_{\infty})$, the functional $\xi\rightarrow\langle
AQ_{\infty}\xi,\eta\rangle_{X}$, well defined since $\xi\in{\cal D}(A^{*})$,
can be extended to a bounded linear operator on $X$, since it is equal to
$-\langle\xi,AQ_{\infty}\eta\rangle_{X}-\langle
B^{*}\xi,B^{*}\eta\rangle_{U}$. Hence, choosing $\xi\in{\cal D}(AQ_{\infty})$,
we get, for $\xi,\eta\in{\cal D}(AQ_{\infty})$, that
$\langle
AQ_{\infty}\xi,\eta\rangle_{X}+\langle\xi,AQ_{\infty}\eta\rangle_{X}+\langle
B^{*}\xi,B^{*}\eta\rangle_{U}=0.$ (68)
Now set $x=Q_{\infty}\xi$ and $y=Q_{\infty}\eta$. Then $x,y\in{\cal D}(A)$ and
the above rewrites as
$\langle Ax,\eta\rangle_{X}+\langle\xi,Ay\rangle_{X}+\langle
B^{*}\xi,B^{*}\eta\rangle_{U}=0.$ (69)
Observe that $\xi=Q_{\infty}^{-1}x+\xi_{0}$ and
$\eta=Q_{\infty}^{-1}y+\eta_{0}$ for suitable $\xi_{0},\eta_{0}\in\ker
Q_{\infty}\subseteq\ker B^{*}$. Hence, using the fact that $Q_{\infty}$ solves
the Lyapunov equation in the form (68), we have, for $\xi\in{\cal
D}(AQ_{\infty})$,
$\langle Ax,\eta_{0}\rangle_{X}=\langle
AQ_{\infty}\xi,\eta_{0}\rangle_{X}=-\langle\xi,AQ_{\infty}\eta_{0}\rangle_{X}-\langle
B^{*}\xi,B^{*}\eta_{0}\rangle_{U}=0$
and, similarly, for $\eta\in{\cal D}(AQ_{\infty})$,
$\langle\xi_{0},Ay\rangle_{X}=0$. We then get, substituting into (69) and
observing that $B^{*}\xi_{0}=B^{*}\eta_{0}=0$,
$\langle Ax,Q_{\infty}^{-1}y\rangle_{X}+\langle
Q_{\infty}^{-1}x,Ay\rangle_{X}+\langle
B^{*}Q_{\infty}^{-1}x,B^{*}Q_{\infty}^{-1}y\rangle_{U}=0,\qquad\forall x,y\in
Q_{\infty}({\cal D}(AQ_{\infty})).$ (70)
The above is exactly equation (65) for $R=Q_{\infty}^{-1}$. To end the proof
of (i), it is enough to observe that $Q_{\infty}({\cal D}(AQ_{\infty}))$ is
dense in $[\ker Q_{\infty}]^{\perp}$ (using Remark A.3 and the fact that it
contains $Q_{\infty}({\cal D}(A^{*}))$), and moreover that
$Q_{\infty}({\cal D}(AQ_{\infty}))={\cal D}(A)\cap{\cal R}(Q_{\infty})={\cal
D}(A)\cap{\cal D}(Q_{\infty}^{-1}).$
Indeed if $x\in Q_{\infty}({\cal D}(AQ_{\infty}))$ then it must be
$x=Q_{\infty}\xi$ with $\xi\in{\cal D}(AQ_{\infty})$, so that
$AQ_{\infty}\xi$ is well defined and, clearly, it coincides with $Ax$, proving
that $x\in{\cal D}(A)$. Obviously it must also be $x\in{\cal R}(Q_{\infty})$.
The converse is similar.
(ii) It is enough to observe that (70) coincides with (66) with $P=I_{H}$, and
that ${\cal D}(\Lambda_{I_{H}})={\cal R}(Q_{\infty})$.
(iii) Let $\hat{P}$ be a solution of (66) belonging to the class ${\cal Q}$
introduced in Definition 3.8. It is immediate to see that $\hat{P}$ is a
stationary solution of (40) in the sense of Definition 3.6. Now we apply Lemma
3.10 and (36), getting
$\frac{1}{2}\langle\hat{P}x,x\rangle_{H}\leq V^{\hat{P}}(t,x)\leq V(t,x)\quad
x\in H,\quad t>T_{0}.$
Taking the limit as $t\rightarrow+\infty$, the result follows by Proposition
4.3. ∎
###### Remark 4.8.
The statement of Theorem 4.7 still holds if we consider the slightly more
general problem where the energy functional has the integrand $\langle
Cu,u\rangle_{U}$ instead of $\langle u,u\rangle_{U}\,$, where
$C\in{\mathcal{L}}_{+}(U)$ is coercive and hence invertible. Indeed it is
enough to define the new control variable $v=C^{1/2}u$ and, consequently, to
replace the control operator $B$ in the state equation by $BC^{-1/2}$.
###### Remark 4.9.
Theorem 4.7 can be applied to a variety of cases (e.g. delay equations treated
in [1, Subsection 5.1] or wave equations). Here, according to our motivating
example arising in physics, we develop more deeply the analysis when the
operator $A$ is selfadjoint and commutes with $BB^{*}$ and, in particular,
when both are diagonal. This will be done in the next section.
## 5 The selfadjoint commuting case
We consider the case where $A$ is selfadjoint and invertible and commutes with
$BB^{*}$. To apply Theorem 4.7 we need that $BB^{*}$ is coercive; hence we
assume the following:
###### Hypothesis 5.1.
$A$ is selfadjoint and invertible and commutes with $BB^{*}$, i.e., for every
$x\in\mathcal{D}(A)$ we have $BB^{*}x\in\mathcal{D}(A)$ and
$ABB^{*}x=BB^{*}Ax$. Moreover, $BB^{*}$ is coercive, i.e., for a suitable
$\mu>0$, $\|B^{*}x\|^{2}_{U}\geq\mu\|x\|^{2}_{X}$ for all $x\in X$.
From [1, Proposition C.1-(v)] we know that, for every $x\in X$,
$Q_{\infty}x=-\frac{1}{2}A^{-1}BB^{*}x.$
This implies that ${\cal R}(Q_{\infty})={\cal D}(A)$, and, as $BB^{*}$ is
invertible in $X$, we have $Q_{\infty}^{-1}x=-2(BB^{*})^{-1}Ax$ for every
$x\in{\cal R}(Q_{\infty})$ (see again [1, Proposition C.1-(v)]). Hence the
Riccati equation (66) in $H$ (with unknown $P\in{\cal L}(H)$), becomes
$0=-\langle Ax,Q_{\infty}^{-1}Py\rangle_{X}-\langle
Q_{\infty}^{-1}Px,Ay\rangle_{X}+2\langle APx,Q_{\infty}^{-1}Py\rangle_{X}.$
(71)
This makes sense, as for (66), when $x,y\in{\cal D}(A)\cap{\cal
D}(\Lambda_{P})$ (see Definition 3.6). We now want to rewrite this equation
using the inner products in $H$. Observe first that in ${\cal R}(Q_{\infty})$
we have $Q_{\infty}^{-1}=Q_{\infty}^{-1/2}Q_{\infty}^{-1/2}$. Then, if $Ax$,
$Ay$ and $APx$ belong to $H$, we rewrite (71) as
$0=-\langle Ax,Py\rangle_{H}-\langle Px,Ay\rangle_{H}+2\langle
APx,Py\rangle_{H}.$ (72)
Now, recalling the definition of $A_{0}$ (see Notation 3.7 and Lemma
A.4-(ii)), equation (72) can be equivalently rewritten as
$0=-\langle A_{0}x,Py\rangle_{H}-\langle Px,A_{0}y\rangle_{H}+2\langle
A_{0}Px,Py\rangle_{H},$ (73)
provided that $x,y,Px,Py$ belong to ${\cal D}(A_{0})$.
We now clarify the relationship between (71) and (73). First we set
$D^{P}:=\left\\{x\in{\cal D}(A_{0}):\;Px\in{\cal D}(A_{0})\right\\}.$ (74)
Next, we provide the following definition of solution for (73) (compare with
Definition 4.5):
###### Definition 5.2.
An operator $P\in{\cal L}_{+}(H)$ is a solution of the ARE (73) if the set
$D^{P}$ is dense in $H$ and the equation (73) is satisfied for every $x,y\in
D^{P}$.
Finally, we observe that every solution of (71) is also a solution of (73):
indeed, if $P\in{\cal L}_{+}(H)$, then, by definition, we have
$D^{P}\subseteq{\cal D}(A)\cap{\cal D}(\Lambda_{P})$. Hence, if $P\in{\cal
L}_{+}(H)$ solves equation (71), then, choosing in particular $x,y\in D^{P}$
we can turn (71) into (73).
The reverse procedure is also possible: we postpone the proof to the end of
the Section, since some more information on the solutions $P$ of (73) is
needed. We now give a preparatory result about the properties of such solutions.
###### Proposition 5.3.
Assume Hypothesis 5.1. Then any solution $P$ of (73) satisfies
$\langle A_{0}x,A_{0}Pz\rangle_{H}=\langle
A_{0}Px,A_{0}z\rangle_{H}\qquad\forall x,z\in D^{P}.$ (75)
###### Proof.
Let $P$ be a solution of (73). We observe that for all $x,y\in D^{P}$ we have,
since $A_{0}$ is selfadjoint in $H$ (see Lemma A.5-(iii)),
$\langle A_{0}Px,y\rangle_{H}+\langle PA_{0}x,y\rangle_{H}=2\langle
PA_{0}Px,y\rangle_{H}\,.$ (76)
By density, this equation holds for every $x\in D^{P}$ and $y\in H$.
Symmetrically we have also
$\langle x,PA_{0}y\rangle_{H}+\langle x,A_{0}Py\rangle_{H}=2\langle
x,PA_{0}Py\rangle_{H}$ (77)
for every $x\in H$ and $y\in D^{P}$. We choose in (76) $y=PA_{0}z-A_{0}Pz$,
with $z\in D^{P}$, and we obtain:
$\displaystyle\langle A_{0}Px,PA_{0}z\rangle_{H}-\langle
A_{0}Px,A_{0}Pz\rangle_{H}+\langle PA_{0}x,PA_{0}z\rangle_{H}-\langle
PA_{0}x,A_{0}Pz\rangle_{H}$
$\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad=2\langle
PA_{0}Px,PA_{0}z\rangle_{H}-2\langle PA_{0}Px,A_{0}Pz\rangle_{H}\,.$
We isolate on the left the symmetric terms:
$\displaystyle 2\langle PA_{0}Px,A_{0}Pz\rangle_{H}-\langle
A_{0}Px,A_{0}Pz\rangle_{H}+\langle PA_{0}x,PA_{0}z\rangle_{H}$
$\displaystyle\qquad\qquad\qquad=-\langle A_{0}Px,PA_{0}z\rangle_{H}+\langle
PA_{0}x,A_{0}Pz\rangle_{H}+2\langle PA_{0}Px,PA_{0}z\rangle_{H}\,.$
Next, we apply (76) to the last term on the right:
$\displaystyle 2\langle PA_{0}Px,A_{0}Pz\rangle_{H}-\langle
A_{0}Px,A_{0}Pz\rangle_{H}+\langle PA_{0}x,PA_{0}z\rangle_{H}$
$\displaystyle=-\langle A_{0}Px,PA_{0}z\rangle_{H}+\langle
PA_{0}x,A_{0}Pz\rangle_{H}+\langle A_{0}Px,PA_{0}z\rangle_{H}+\langle
PA_{0}x,PA_{0}z\rangle_{H}\,,$
which simplifies to
$2\langle PA_{0}Px,A_{0}Pz\rangle_{H}-\langle
A_{0}Px,A_{0}Pz\rangle_{H}=\langle PA_{0}x,A_{0}Pz\rangle_{H}\,.$
Applying (77) to the term on the right, rewritten as $\langle
A_{0}x,PA_{0}Pz\rangle_{H}$, we obtain for every $x,z\in D^{P}$
$2\langle PA_{0}Px,A_{0}Pz\rangle_{H}-\langle
A_{0}Px,A_{0}Pz\rangle_{H}-\frac{1}{2}\langle
PA_{0}x,A_{0}z\rangle_{H}=\frac{1}{2}\langle A_{0}x,A_{0}Pz\rangle_{H}\,.$
(78)
We now restart from (77), and choose $x=PA_{0}z-A_{0}Pz$, with $z\in D^{P}$:
acting on the left variable of the inner product, and proceeding exactly in
the same way as before, we get for every $z,y\in D^{P}$
$2\langle A_{0}Pz,PA_{0}Py\rangle_{H}-\langle
A_{0}Pz,A_{0}Py\rangle_{H}-\frac{1}{2}\langle
PA_{0}z,A_{0}y\rangle_{H}=\frac{1}{2}\langle A_{0}Pz,A_{0}y\rangle_{H}\,.$
(79)
Comparing equations (78) and (79), both written with variables $x,y$, we
immediately obtain
$\frac{1}{2}\langle A_{0}x,A_{0}Py\rangle_{H}=\frac{1}{2}\langle
A_{0}Px,A_{0}y\rangle_{H}\,,\quad x,y\in D^{P},$
which is (75). ∎
We can now prove:
###### Theorem 5.4.
Assume Hypothesis 5.1. Then any solution $P$ of (73) commutes with $A_{0}$,
i.e. $Px\in{\cal D}(A_{0})$ for every $x\in{\cal D}(A_{0})$ and
$A_{0}Px=PA_{0}x\qquad\forall x\in{\cal D}(A_{0}).$
In particular $D^{P}={\cal D}(A_{0})$.
###### Proof.
We start from (75) with $w=A_{0}x$ and $y=A_{0}z$, i.e.
$\langle w,A_{0}PA_{0}^{-1}y\rangle_{H}=\langle
A_{0}PA_{0}^{-1}w,y\rangle_{H}\qquad\forall w,y\in A_{0}(D^{P}).$ (80)
Notice that $A_{0}(D^{P})$ is the natural domain of the operator
$A_{0}PA_{0}^{-1}$, which might (a priori) not be dense in $H$. Let us denote
by $Z$ the closure of ${\cal D}(A_{0}PA_{0}^{-1})$ in $H$; so we have
$Z:=\overline{A_{0}(D^{P})}=\overline{{\cal D}(A_{0}PA_{0}^{-1})}.$
Obviously $Z$ is a Hilbert space with the inner product of $H$. Equation (80)
then tells us that $A_{0}(D^{P})\subseteq{\cal D}((A_{0}PA_{0}^{-1})^{*})$ and
$(A_{0}PA_{0}^{-1})^{*}w=A_{0}PA_{0}^{-1}w\qquad\forall w\in
A_{0}(D^{P})={\cal D}(A_{0}PA_{0}^{-1}).$ (81)
On the other hand, if $x\in{\cal D}(A_{0})$ and $y\in{\cal
D}(A_{0}PA_{0}^{-1})$ we may write
$\langle x,A_{0}PA_{0}^{-1}y\rangle_{H}=\langle
A_{0}^{-1}PA_{0}x,y\rangle_{H}\,;$
consequently
${\cal D}(A_{0})\subseteq{\cal D}((A_{0}PA_{0}^{-1})^{*})$ (82)
and
$(A_{0}PA_{0}^{-1})^{*}x=A_{0}^{-1}PA_{0}x\qquad\forall x\in{\cal D}(A_{0}).$
(83)
We now claim that $A_{0}PA_{0}^{-1}$ is selfadjoint in the space $H$, i.e.
${\cal D}((A_{0}PA_{0}^{-1})^{*})={\cal D}(A_{0}PA_{0}^{-1})=A_{0}(D^{P})$
(84)
is dense in $H$ and (81) holds.
Indeed, assume that $z\in{\cal D}((A_{0}PA_{0}^{-1})^{*})$: then there is
$c>0$ such that
$|\langle A_{0}PA_{0}^{-1}x,z\rangle_{H}|\leq c\|x\|_{H}\qquad\forall
x\in{\cal D}(A_{0}PA_{0}^{-1}).$
In particular, by (81),
$\langle x,(A_{0}PA_{0}^{-1})^{*}z\rangle_{H}=\langle
A_{0}PA_{0}^{-1}x,z\rangle_{H}=\langle(A_{0}PA_{0}^{-1})^{*}x,z\rangle_{H}\quad\forall
x\in{\cal D}(A_{0}PA_{0}^{-1}).$
This shows that $z\in{\cal D}(A_{0}PA_{0}^{-1})$ and
$A_{0}PA_{0}^{-1}z=(A_{0}PA_{0}^{-1})^{*}z$. Hence
${\cal D}((A_{0}PA_{0}^{-1})^{*})\subseteq{\cal
D}(A_{0}PA_{0}^{-1})\quad\textrm{and}\quad
A_{0}PA_{0}^{-1}=(A_{0}PA_{0}^{-1})^{*}\ \textrm{on}\ {\cal
D}((A_{0}PA_{0}^{-1})^{*}).$
Conversely, we know from (81) that
${\cal D}(A_{0}PA_{0}^{-1})=A_{0}(D^{P})\subseteq{\cal
D}((A_{0}PA_{0}^{-1})^{*})\quad\textrm{and}\quad(A_{0}PA_{0}^{-1})^{*}=A_{0}PA_{0}^{-1}\
\textrm{on}\ {\cal D}(A_{0}PA_{0}^{-1}).$
In particular, by (82), $Z$ coincides with $H$, i.e. both domains in (84) are
dense in $H$. This proves our claim.
Take now $x\in{\cal D}(A_{0})$. As, by (82) and the above claim, ${\cal
D}(A_{0})\subseteq{\cal D}(A_{0}PA_{0}^{-1})$, we have
$PA_{0}^{-1}x\in{\cal D}(A_{0})\quad\forall x\in{\cal
D}(A_{0}),\quad\textrm{i.e.}\quad D^{P}={\cal D}(A_{0}).$ (85)
(see (74)). Moreover, by (83) and by the above claim we deduce
$A_{0}^{-1}PA_{0}x=A_{0}PA_{0}^{-1}x\qquad\forall x\in{\cal D}(A_{0}).$
Applying $A_{0}^{-1}$ we have $A_{0}^{-2}PA_{0}x=PA_{0}^{-1}x$ for every
$x\in{\cal D}(A_{0})$, or, equivalently,
$A_{0}^{-2}PA_{0}^{2}z=z\qquad\forall z\in{\cal
D}(A_{0}^{2}),\qquad\textrm{i.e.}\qquad A_{0}^{-2}Pw=PA_{0}^{-2}w\qquad\forall
w\in H.$
This means that the bounded operators $A_{0}^{-2}$ and $P$ commute. Now, since
$A_{0}^{-1}$ is a non-negative operator such that
$(A_{0}^{-1})^{2}=A_{0}^{-2}$, by a well known result (see [29, Theorem
VI.9]), $A_{0}^{-1}$ must commute with every bounded operator $B$ which
commutes with $A_{0}^{-2}$, for instance $B=P$. So
$A_{0}^{-1}Pw=PA_{0}^{-1}w\qquad\forall w\in H,\qquad\textrm{i.e.}\qquad
Pz=A_{0}^{-1}PA_{0}z\qquad\forall z\in{\cal D}(A_{0});$
this implies that $P({\cal D}(A_{0}))\subseteq{\cal D}(A_{0})$ and
$A_{0}Pz=PA_{0}z$ for every $z\in{\cal D}(A_{0})$. Thus $P$ commutes with
$A_{0}$, as required. Moreover $P({\cal D}(A_{0}))\subseteq{\cal D}(A_{0})$
implies ${\cal D}(A_{0})\subseteq D^{P}$. The reverse inclusion immediately
follows from the definition of $D^{P}$. ∎
We are now able to characterize all solutions of the ARE (73).
###### Theorem 5.5.
Assume Hypothesis 5.1 and let $P\in{\cal L}_{+}(H)$. Then $P$ is a solution of
(73) if and only if $P$ is an orthogonal projection in $H$ and it commutes
with $A_{0}$. In particular the identity $I_{H}$ is the maximal solution among
all solutions of (73).
###### Proof.
Let $P$ be a solution of (73): by Theorem 5.4 we have $Px\in{\cal D}(A_{0})$
for every $x\in{\cal D}(A_{0})$ and $A_{0}Px=PA_{0}x$. Hence the ARE (76),
equivalent to (73), becomes
$0=-2\langle PA_{0}x,y\rangle_{H}+2\langle PA_{0}Px,y\rangle_{H},\quad x\in
D^{P},\ y\in H.$
Since $y$ is arbitrary, using (85) we get $2PA_{0}x=2PA_{0}Px$ for every
$x\in{\cal D}(A_{0})$, and successively, for all $x\in{\cal D}(A_{0})$:
$PA_{0}x-PA_{0}Px=0$, $PA_{0}(I_{H}-P)x=0$, $A_{0}P(I_{H}-P)x=0$,
$P(I_{H}-P)x=0$, $Px=P^{2}x$; finally, by density, $P=P^{2}$.
Assume, conversely, that $P$ is an orthogonal projection in $H$ and it
commutes with $A_{0}$. Then
$PA_{0}Pz=P^{2}A_{0}z=PA_{0}z=A_{0}Pz\quad\forall z\in{\cal D}(A_{0}),$
and consequently $P$ solves (76), and hence (73). Finally, since $I_{H}$
solves (73), the last statement is immediate. ∎
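As a finite-dimensional sanity check of Theorem 5.5 (our illustration, not part of the proof), one can take a symmetric negative definite matrix as a stand-in for $A_{0}$ and verify numerically that orthogonal projections onto spans of eigenvectors of $A_{0}$, as well as the identity, solve the matrix form $-A_{0}P-PA_{0}+2PA_{0}P=0$ of the ARE (73):

```python
import numpy as np

rng = np.random.default_rng(0)

# A0: a symmetric negative definite matrix, a finite-dimensional
# stand-in for the selfadjoint generator A0 of Theorem 5.5.
n = 5
M = rng.standard_normal((n, n))
A0 = -(M @ M.T + n * np.eye(n))
lam, V = np.linalg.eigh(A0)

# P: orthogonal projection onto the span of a subset of eigenvectors
# of A0; such a P is symmetric, idempotent and commutes with A0.
S = V[:, [0, 2, 3]]
P = S @ S.T

def are_residual(P, A0):
    # Matrix form of the ARE (73): 0 = -A0 P - P A0 + 2 P A0 P,
    # obtained from 0 = -<A0 x, P y> - <P x, A0 y> + 2 <A0 P x, P y>
    # using that A0 and P are symmetric.
    return -A0 @ P - P @ A0 + 2 * P @ A0 @ P

assert np.linalg.norm(are_residual(P, A0)) < 1e-8          # P solves the ARE
assert np.linalg.norm(are_residual(np.eye(n), A0)) < 1e-8  # so does the identity
assert np.allclose(P @ P, P) and np.allclose(A0 @ P, P @ A0)
```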
We conclude this Section proving the equivalence of the two forms (71) and
(73) of the ARE.
###### Proposition 5.6.
Every solution of (71) is also a solution of (73) and vice versa.
###### Proof.
We have already seen that every solution of (71) is also a solution of (73).
Consider now a solution $P$ of (73). First of all, if $x,y\in D^{P}={\cal
D}(A_{0})$, equation (73) transforms into (71), so that (71) holds true for
$x,y\in D^{P}$.
We claim that $D^{P}$ is dense in ${\cal D}(A)\cap{\cal D}(\Lambda_{P})$ (see
(74)) with respect to the norm $\|\cdot\|_{H}+\|A\cdot\|_{X}+\|AP\cdot\|_{X}$.
Indeed, for $z\in{\cal D}(A)\cap{\cal D}(\Lambda_{P})$, recalling Lemma A.4,
we set
$z_{n}=nR(n,A)z=nR(n,A)|_{H}\,z=nR(n,A_{0})z.$
Then $z_{n}\in{\cal D}(A_{0})=D^{P}$ and, as $n\rightarrow\infty$,
$\begin{array}[]{c}z_{n}\rightarrow z\quad\textrm{in }H,\\\\[5.69054pt]
A_{0}z_{n}=nA_{0}R(n,A_{0})z=nAR(n,A)z\rightarrow Az\quad\textrm{in
}X,\\\\[5.69054pt] A_{0}Pz_{n}=nA_{0}PR(n,A_{0})z=nAR(n,A)Pz\rightarrow
APz\quad\textrm{in }X;\end{array}$
this proves our claim.
Let now $x,y\in{\cal D}(A)\cap{\cal D}(\Lambda_{P})$; select
$\\{x_{n}\\},\\{y_{n}\\}\subseteq{\cal D}(A_{0})$ such that, as
$n\rightarrow\infty$,
$\begin{array}[]{l}x_{n}\rightarrow x\ \textrm{in }H,\quad Ax_{n}\rightarrow
Ax\ \textrm{in }X,\quad APx_{n}\rightarrow APx\ \textrm{in }X,\\\\[5.69054pt]
y_{n}\rightarrow y\ \textrm{in }H,\quad Ay_{n}\rightarrow Ay\ \textrm{in
}X,\quad APy_{n}\rightarrow APy\ \textrm{in }X.\end{array}$
As a consequence,
$Q_{\infty}^{-1}Px_{n}=-2(BB^{*})^{-1}APx_{n}\rightarrow-2(BB^{*})^{-1}APx=Q_{\infty}^{-1}Px\
\textrm{in }X\ \textrm{as }n\rightarrow\infty,$
and similarly $Q_{\infty}^{-1}Py_{n}\rightarrow Q_{\infty}^{-1}Py$ in $X$ as
$n\rightarrow\infty$. For $x_{n}$ and $y_{n}$, (71) holds:
$0=-\langle Ax_{n},Q_{\infty}^{-1}Py_{n}\rangle_{X}-\langle
Q_{\infty}^{-1}Px_{n},Ay_{n}\rangle_{X}+2\langle
APx_{n},Q_{\infty}^{-1}Py_{n}\rangle_{X}.$
In all terms, by what established above, we can pass to the limit as
$n\rightarrow\infty$, obtaining
$0=-\langle Ax,Q_{\infty}^{-1}Py\rangle_{X}-\langle
Q_{\infty}^{-1}Px,Ay\rangle_{X}+2\langle
APx,Q_{\infty}^{-1}Py\rangle_{X}\qquad\forall x,y\in{\cal D}(A)\cap{\cal
D}(\Lambda_{P}),$
i.e. $P$ solves (71). ∎
###### Remark 5.7.
It is easy to verify that for every solution $P$ of (73) the space
$D^{P}={\cal D}(A_{0})$ is dense in ${\cal D}(A)\cap H$ with respect to the
norm $\|\cdot\|_{H}+\|A\cdot\|_{X}$: it suffices to repeat the argument above,
i.e. to consider, for fixed $x\in{\cal D}(A)\cap H$, the approximation
$x_{n}=nR(n,A_{0})x$, observing that $x_{n}\rightarrow x$ in $H$ and
$A_{0}x_{n}=Ax_{n}\rightarrow Ax$ in $X$. Thus, $P$ belongs to the class
${\cal Q}$ introduced in Definition 3.8, and consequently, by Theorem 4.7, we
have $P\leq I_{H}$. Of course, this follows as well by Theorem 5.5.
###### Corollary 5.8.
Assume that $A_{0}$ is a diagonal operator with respect to an orthonormal
complete system $\\{e_{n}\\}$ in $H$ with sequence of eigenvalues
$\\{\lambda_{n}\\}\subset\,]-\infty,0[\,$. Let $P$ be a solution of the ARE
(73). Then
* (i)
every eigenspace of $A_{0}$ is invariant for $P$;
* (ii)
if all eigenvalues are simple, then $P$ is diagonal with respect to the system
$\\{e_{n}\\}$, too;
* (iii)
if at least one eigenspace $M$ has dimension $m\geq 2$, then the restriction
of $P$ to $M$ need not be diagonal: for instance, if $m=2$ a non-diagonal $P$
on $M$ must have the following explicit form:
$\left(\begin{array}[]{cc}a&\pm\sqrt{a(1-a)}\\\ \pm\sqrt{a(1-a)}&1-a\\\
\end{array}\right)\qquad\textrm{for some }\ a\in\,]0,1[\,.$ (86)
###### Proof.
To prove (i) it is enough to show that, for every eigenvalue $\lambda$ of
$A_{0}$ and $x$ eigenvector of $A_{0}$ associated to $\lambda$, we have
$\lambda Px=A_{0}Px$. This is immediate since $A_{0}$ and $P$ commute.
Concerning (ii) we observe that, for every $n\in\mathbb{N}$, we have
$A_{0}e_{n}=\lambda_{n}e_{n}$, so that $\lambda_{n}Pe_{n}=A_{0}Pe_{n}$. Since
$\lambda_{n}$ is simple, we must have $Pe_{n}=ke_{n}$ for some $k\in\mathbb{R}$.
Since $P$ is a projection, it follows that $k=0$ or $k=1$.
Finally (iii) can be proved with straightforward algebraic calculations, using
the fact that $M$ is invariant under $P$ and that $P$ is a projection. ∎
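The explicit form (86) can also be checked numerically; the following short verification (ours, purely illustrative) confirms that for both sign choices and any $a\in\,]0,1[\,$ the matrix is a symmetric rank-one projection:

```python
import numpy as np

# The claimed form (86) of a non-diagonal projection on a
# 2-dimensional eigenspace: both sign choices, several a in ]0,1[.
for a in (0.1, 0.5, 0.9):
    for sign in (1.0, -1.0):
        b = sign * np.sqrt(a * (1.0 - a))
        P = np.array([[a, b],
                      [b, 1.0 - a]])
        assert np.allclose(P @ P, P)          # idempotent
        assert np.allclose(P, P.T)            # symmetric: orthogonal projection
        assert np.isclose(np.trace(P), 1.0)   # trace 1, hence rank one
```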
###### Remark 5.9.
Let $A$ be a diagonal operator with respect to an orthonormal complete system
$\\{e_{n}\\}$ in $H$ with sequence of eigenvalues
$\\{\lambda_{n}\\}\subset\,]-\infty,0[\,$, where all $\lambda_{n}$ are
distinct and simple. Then $BB^{*}$ must be diagonal, too. Indeed we have, for
every $n\in\mathbb{N}$,
$\sum_{k=0}^{+\infty}\langle
BB^{*}e_{n},e_{k}\rangle_{H}\,e_{k}=BB^{*}e_{n}=\frac{1}{\lambda_{n}}BB^{*}Ae_{n}=\frac{1}{\lambda_{n}}ABB^{*}e_{n}=\frac{1}{\lambda_{n}}\sum_{k=0}^{+\infty}\lambda_{k}\langle
BB^{*}e_{n},e_{k}\rangle_{H}\,e_{k}\,,$
which implies
$\langle
BB^{*}e_{n},e_{k}\rangle_{H}\left(1-\frac{\lambda_{k}}{\lambda_{n}}\right)=0\quad\forall
k,n\in\mathbb{N}.$
Since all eigenvalues are distinct, we must have $BB^{*}e_{n}=b_{n}e_{n}$ for
all $n\in\mathbb{N}$, for a suitable sequence $\\{b_{n}\\}\in\ell^{\infty}$.
This implies that $Q_{\infty}$ and $Q_{\infty}^{1/2}$ are diagonal with
respect to $\\{e_{n}\\}$, too. Following [1, Subsection 5.2] we may also
consider the case when $BB^{*}$ is unbounded and characterize the space $H$,
for specific choices of $BB^{*}$, in terms of the domain of suitable powers of
$(-A)$. In Section 6 we will consider a specific diagonal case arising in
mathematical physics.
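The entrywise identity used in Remark 5.9 has an elementary finite-dimensional analogue, which we verify below (our illustration, with arbitrarily chosen distinct eigenvalues): the commutator of a diagonal matrix $A$ with any matrix $C$ has entries $C_{nk}(\lambda_{n}-\lambda_{k})$, so commutation with distinct eigenvalues forces $C$ to be diagonal:

```python
import numpy as np

rng = np.random.default_rng(1)

# A diagonal with distinct negative eigenvalues, as in Remark 5.9.
lam = np.array([-1.0, -2.0, -3.5, -7.0])
A = np.diag(lam)

# For ANY matrix C the commutator has entries
# (A C - C A)[n, k] = C[n, k] * (lam[n] - lam[k]),
# the analogue of <BB* e_n, e_k>(1 - lam_k/lam_n) = 0 above.
C = rng.standard_normal((4, 4))
comm = A @ C - C @ A
assert np.allclose(comm, C * np.subtract.outer(lam, lam))

# Hence A C = C A with distinct eigenvalues kills every off-diagonal
# entry of C; conversely, any diagonal C does commute with A:
D = np.diag(rng.standard_normal(4))
assert np.allclose(A @ D, D @ A)
```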
## 6 A motivating example: from equilibrium to non-equilibrium states
In this section we describe, in a simple one-dimensional case, the optimal
control problem outlined in the papers [4, 5, 6, 7, 8, 9]. Such special case
fits into the application studied e.g. in [7, 9], in the case of the Landau-
Ginzburg model.
We consider a controlled dynamical system whose state variable is described by
a function $\rho:\left]-\infty,0\right]\times\left[0,1\right]\rightarrow\mathbb{R}$ (the choice of the letter $\rho$
comes from the fact that in many physical models $\rho$ is a density). The
control variable is a function
$F:\left]-\infty,0\right]\times\left[0,1\right]\rightarrow\mathbb{R}$ which we
assume to belong to $L^{2}\left(-\infty,0;L^{2}(0,1)\right)$. The state
equation is formally given by
$\begin{cases}\frac{\partial\rho}{\partial
t}\left(t,x\right)=\frac{1}{2}\frac{\partial^{2}\rho}{\partial
x^{2}}\left(t,x\right)+\nabla F\left(t,x\right),&t\in\,]-\infty,0[\,,\
x\in\,]0,1[\,,\\\\[2.84526pt]
\rho\left(-\infty,x\right)=\bar{\rho}(x),&x\in[0,1],\\\
\rho\left(t,0\right)=\rho_{-},\qquad\rho\left(t,1\right)=\rho_{+},&t\in\,]-\infty,0[\,,\\\
\rho\left(0,x\right)=\rho_{0}\left(x\right),&x\in[0,1],\end{cases}$ (87)
where $\rho_{+},\rho_{-}\in\left(0,1\right)$, and $\bar{\rho}$ is an
equilibrium state for the uncontrolled problem. Hence $\bar{\rho}$ is the
unique solution of the following system
$\left\\{\begin{array}[]{l}v^{\prime\prime}\left(x\right)=0,\\\
v\left(0\right)=\rho_{-},\\\ v\left(1\right)=\rho_{+};\end{array}\right.$
so we have $\bar{\rho}(x)=(\rho_{+}-\rho_{-})x+\rho_{-}$.
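As a quick numerical sanity check (ours; the boundary values $\rho_{-}=0.2$, $\rho_{+}=0.8$ are arbitrary illustrative choices), the affine profile $\bar{\rho}$ indeed satisfies the boundary conditions and has vanishing second derivative:

```python
import numpy as np

rho_m, rho_p = 0.2, 0.8                       # illustrative boundary values
xs = np.linspace(0.0, 1.0, 101)
rho_bar = (rho_p - rho_m) * xs + rho_m        # the equilibrium profile

assert np.isclose(rho_bar[0], rho_m)          # rho_bar(0) = rho_-
assert np.isclose(rho_bar[-1], rho_p)         # rho_bar(1) = rho_+
# second differences of an affine profile vanish identically
assert np.allclose(np.diff(rho_bar, 2), 0.0)
```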
For any datum $\rho_{0}\in L^{2}(0,1)$ we consider any control driving (in
equation (87)) the equilibrium state $\bar{\rho}$ (at time $t=-\infty$) to
$\rho_{0}$ at time $t=0$. Then we consider the problem of minimizing, over the
set of such controls, the energy functional
$J^{0}_{\infty}\left(F\right)=\frac{1}{2}\int_{-\infty}^{0}\|F(s)\|^{2}_{L^{2}(0,1)}\,\mathrm{d}s.$
Given the above structure it is natural to consider the new control
$\nu=\nabla F\in L^{2}\left(-\infty,0;H^{-1}(0,1)\right)$
and take both the state space $X$ and the control space $U$ equal to
$H^{-1}\left(0,1\right)$. We now rewrite (87) in our abstract setting as
follows. First we denote by $A$ the Laplace operator in the space
$H^{-1}(0,1)$ with Dirichlet boundary conditions, i.e.
${\cal D}\left(A\right)=H^{1}_{0}\left(0,1\right),\qquad
A\eta=\eta^{\prime\prime}\quad\forall\eta\in H^{1}_{0}\left(0,1\right).$
Hence, formally, the state equation (87) becomes
$\left\\{\begin{array}[]{l}\rho^{\prime}(t)=A[\rho(t)-\bar{\rho}]+\nu(t),\quad
t<0,\\\\[2.84526pt] \rho(-\infty)=\bar{\rho}.\end{array}\right.$ (88)
Using a standard argument (see e.g. [17, Appendix C]), the state equation (87)
can be rewritten in the space $X$ and in the new variable
$y(t):=\rho(t)-\bar{\rho}$ as
$\left\\{\begin{array}[]{l}y^{\prime}(t)=Ay(t)+\nu(t),\quad
t<0,\\\\[2.84526pt] y(-\infty)=0.\end{array}\right.$ (89)
The function
$y(t;-\infty,0,\nu)=\int_{-\infty}^{t}e^{(t-s)A}\nu(s)\,\mathrm{d}s,\qquad
t\leq 0,$ (90)
corresponding to
$\rho(t;\nu)=\bar{\rho}+\int_{-\infty}^{t}e^{(t-s)A}\nu(s)\,\mathrm{d}s$, is
the unique solution of (89) in the sense of Definition 2.4, by Lemma 2.5.
The energy functional, in the new control variable $\nu$, becomes
$\bar{J}^{0}_{\infty}\left(\nu\right)=\frac{1}{2}\int_{-\infty}^{0}\|A^{-1/2}\nu(s)\|^{2}_{L^{2}(0,1)}\,\mathrm{d}s=\frac{1}{2}\int_{-\infty}^{0}\|\nu(s)\|^{2}_{H^{-1}(0,1)}\,\mathrm{d}s.$
The set of admissible controls here is exactly
$\mathcal{U}_{[-\infty,0]}(0,y_{0})$ (see Subsection 2.2), which is nonempty
if and only if $y_{0}\in H:=R(Q_{\infty}^{1/2})=D(A^{1/2})=L^{2}(0,1)$ (see
e.g. [1, Section 5.2]). The value function $V_{\infty}$ is defined as
$V_{\infty}\left(y_{0}\right):=\inf_{\nu\in\mathcal{U}_{[-\infty,0]}(0,y_{0})}\bar{J}^{0}_{\infty}\left(\nu\right).$
(91)
Now, recalling that $X=U=H^{-1}(0,1)$ and setting $B=I_{H^{-1}(0,1)}\in{\cal
L}(U,X)$, this problem belongs to the class of the minimum energy problems
studied in this paper. We know, from Proposition 4.3, that the value function
is given by
$V_{\infty}(y_{0})=\frac{1}{2}\|y_{0}\|^{2}_{L^{2}(0,1)}.$
We can now apply Theorem 4.7, obtaining that:
* •
the identity in $L^{2}$, $I_{L^{2}(0,1)}$, solves the ARE (66) where we
replace $B$ and $B^{*}$ by $I_{H^{-1}(0,1)}$;
* •
the operator $Q_{\infty}^{-1}=-2A$ solves the ARE (65) where we replace $B^{*}$
by $I_{H^{-1}(0,1)}$;
* •
$I_{L^{2}(0,1)}$ is the maximal solution of the ARE (66) among those in the
class $\mathcal{Q}$ introduced in Definition 3.8.
Moreover, here Hypothesis 5.1 holds; hence we can apply Theorem 5.5. Then,
noting that $A_{0}$ is the Laplace operator with Dirichlet boundary conditions
in the space $H=L^{2}(0,1)$, whose domain is $H^{2}(0,1)\cap H^{1}_{0}(0,1)$,
we obtain that:
* •
the identity in $L^{2}$, $I_{L^{2}(0,1)}$, is a solution of the two
(equivalent) AREs (71) and (73);
* •
the set of all solutions of (71) and (73) consists of all orthogonal
projections $P$ which commute with $A_{0}$, i.e. all projections whose image
is generated by a subset of the eigenvectors of $A_{0}$;
* •
$I_{L^{2}(0,1)}$ is the maximal solution among all solutions of (71) and (73).
## Appendix A Minimum Energy with finite horizon
This part of the Appendix is devoted to recall the formulation of the finite
horizon minimum energy problem studied in [1] (briefly described at the
beginning of Subsection 2.3) and to provide some related results which are
useful in treating the infinite horizon problem (9)–(10). Throughout this
section we will assume that Hypothesis 2.2 holds without repeating it.
### A.1 General formulation of the problem
We take the Hilbert spaces $X$ (state space) and $U$ (control space), as well
as the operators $A$ and $B$, as in Hypothesis 2.2. Given a time interval
$[s,t]\subset\mathbb{R}$, an initial state $z\in X$ and a control $u\in
L^{2}(s,t;U)$ we consider the state equation (1), which we rewrite here:
$\left\\{\begin{array}[]{l}y^{\prime}(r)=Ay(r)+Bu(r),\quad
r\in\,]s,t],\\\\[2.84526pt] y(s)=z.\end{array}\right.$ (92)
Denote by $y(\cdot;s,z,u)$ the mild solution of (92) (see Proposition 2.3):
$y(r;s,z,u):=e^{(r-s)A}z+\int_{s}^{r}e^{(r-\tau)A}Bu(\tau)\,\mathrm{d}\tau,\qquad
r\in[s,t].$ (93)
We define the class of controls $u(\cdot)$ bringing the state $y(\cdot)$ from
a fixed $z\in X$ at time $s$ to a given target $x\in X$ at time $t$:
${\cal U}_{[s,t]}(z,x)\stackrel{{\scriptstyle\textrm{def}}}{{=}}\left\\{u\in
L^{2}(s,t;U)\;:\;y(t;s,z,u)=x\right\\}.$ (94)
Consider the quadratic functional (the energy)
$J_{[s,t]}(u)=\frac{1}{2}\int_{s}^{t}\|u(r)\|_{U}^{2}\,\mathrm{d}r.$ (95)
The minimum energy problem at $(s,t;z,x)$ is the problem of minimizing the
functional $J_{[s,t]}(u)$ over all $u\in{\cal U}_{[s,t]}(z,x)$. The value
function of this control problem (the minimum energy) is
$V_{1}(s,t;z,x)\stackrel{{\scriptstyle\textrm{def}}}{{=}}\inf_{u\in{\cal
U}_{[s,t]}(z,x)}J_{[s,t]}(u),$ (96)
with the agreement that the infimum over the empty set is $+\infty$. Similarly
to what we did in Proposition 3.1, given any $z\in X$ we define the reachable
set in the interval $[s,t]$, starting from $z$, as
${\mathbf{R}}_{[s,t]}^{z}:=\left\\{x\in X:\ {\cal
U}_{[s,t]}(z,x)\neq\emptyset\right\\}.$ (97)
Defining the operator
${\cal L}_{s,t}:L^{2}(s,t;U)\rightarrow X,\qquad{\cal
L}_{s,t}u=\int_{s}^{t}e^{(t-\tau)A}Bu(\tau)\,\mathrm{d}\tau,$ (98)
it is clear that
${\mathbf{R}}_{[s,t]}^{z}=e^{(t-s)A}z+{\cal
L}_{s,t}\left(L^{2}(s,t;U)\right).$ (99)
The use of [1, Proposition 2.6] allows us to reduce the number of variables
from 4 to 2. In particular
$V_{1}(s,t;z,x)=V_{1}(s-t,0;0,x-e^{(t-s)A}z)=V_{1}(0,t-s;0,x-e^{(t-s)A}z).$
(100)
Hence from now on we set, for simplicity of notation,
$V(t,x):=V_{1}(-t,0;0,x)=\inf_{u\in{\cal U}_{[-t,0]}(0,x)}J_{[-t,0]}(u)\qquad
t\in\,]0,+\infty[,\quad x\in X.$ (101)
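In finite dimensions this minimum energy problem can be solved explicitly through the controllability Gramian, and the formula $V(t,x)=\frac{1}{2}\langle Q_{t}^{-1}x,x\rangle$ (compare Proposition 4.3) can be checked numerically. The matrices $A$, $B$, the horizon $t$ and the midpoint discretization below are our own illustrative choices; the optimal control is $u(r)=B^{*}e^{-rA^{*}}Q_{t}^{-1}x$ for $r\in[-t,0]$:

```python
import numpy as np

# State equation y' = A y + B u on [-t, 0], y(-t) = 0, target y(0) = x.
# Gramian: Q_t = \int_0^t e^{sA} B B^T e^{sA^T} ds; minimum energy
# V(t, x) = (1/2) <Q_t^{-1} x, x>, attained by u(r) = B^T e^{-rA^T} Q_t^{-1} x.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])              # nilpotent, so e^{sA} = I + sA exactly
B = np.array([[0.0],
              [1.0]])

def expA(s):
    return np.eye(2) + s * A

t, x = 2.0, np.array([1.0, -1.0])
N = 4000
h = t / N

# Gramian by a midpoint rule.
s = (np.arange(N) + 0.5) * h
Q = sum((expA(si) @ B) @ (expA(si) @ B).T for si in s) * h
Qinv_x = np.linalg.solve(Q, x)

# Simulate the optimal control: accumulate y(0) and the spent energy.
y, energy = np.zeros(2), 0.0
for ri in -t + (np.arange(N) + 0.5) * h:
    u = B.T @ expA(-ri).T @ Qinv_x       # optimal control at time ri
    y += (expA(-ri) @ (B @ u)) * h       # y(0) = \int_{-t}^0 e^{-rA} B u(r) dr
    energy += 0.5 * float(u @ u) * h

assert np.allclose(y, x, atol=1e-3)                   # the target is reached
assert abs(energy - 0.5 * float(x @ Qinv_x)) < 1e-3   # energy equals V(t, x)
```

The nilpotent choice of $A$ makes $e^{sA}$ exact, so the only error comes from the quadrature; any controllable pair $(A,B)$ would work.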
### A.2 The space $H$ and its properties
In this subsection we provide some useful properties of the space $H$
introduced in Subsection 2.3 (see (22)-(23)). First recall that
$H={\cal R}(Q_{\infty}^{1/2})\qquad\hbox{and}\qquad\langle
x,y\rangle_{H}=\langle
Q_{\infty}^{-1/2}x,Q_{\infty}^{-1/2}y\rangle_{X}\,,\quad x,y\in H,$ (102)
and that (with, in general, proper inclusion)
$H\subseteq\overline{{\cal R}(Q_{\infty}^{1/2})}=[\ker
Q_{\infty}^{1/2}]^{\perp}=[\ker Q_{\infty}]^{\perp}.$
The next Lemmas A.1 and A.2 are exactly Lemmas 4.2 and 4.3 of [1].
###### Lemma A.1.
* (i)
The space $H$ is a Hilbert space continuously embedded into $X$.
* (ii)
The space ${\cal R}(Q_{\infty})$ is dense in $H$.
* (iii)
The operator $Q_{\infty}^{-1/2}$ is an isometric isomorphism from $H$ to
$[\ker Q_{\infty}^{1/2}]^{\perp}$, and in particular
$\|Q_{\infty}^{-1/2}x\|_{X}=\|x\|_{H}\qquad\forall x\in H.$ (103)
* (iv)
We have $Q_{\infty}^{1/2}\in{\cal L}(H)$ and
$\|Q_{\infty}^{1/2}\|_{{\cal L}(X)}=\|Q_{\infty}^{1/2}\|_{{\cal L}(H)}.$
* (v)
For every $F\in{\cal L}(X)$ such that ${\cal R}(F)\subseteq H$ we have
$Q_{\infty}^{-1/2}F\in{\cal L}(X)$, so that $F\in{\cal L}(X,H)$.
###### Lemma A.2.
For $0<t\leq+\infty$ let $Q_{t}$ be the operator defined by (19). Then, for
every $t\in[T_{0},+\infty]$, the space $Q_{t}({\cal D}(A^{*}))$ is dense in
$H$ and contained in ${\cal D}(A)$. In particular ${\cal D}(A)\cap H$ is dense
in $H$.
###### Remark A.3.
The above Lemma immediately implies that, for every $t\in[T_{0},+\infty]$,
$Q_{t}({\cal D}(A^{*}))$ is dense in $[\ker Q_{\infty}]^{\perp}$ with the
topology inherited by $X$, since the inclusion of $H$ into $[\ker
Q_{\infty}]^{\perp}$ is continuous.
Now we state and prove three very useful lemmas.
###### Lemma A.4.
Assume Hypothesis 2.7. Then we have the following:
* (i)
For every $z\in H$ and $r\geq 0$ we have $e^{rA}z\in H$; moreover the
semigroup $e^{tA}|_{H}$ is strongly continuous in $H$. In particular, for each
$T>0$ there exists $c_{T}>0$ such that
$\|e^{rA}z\|_{H}\leq c_{T}\|z\|_{H}\qquad\forall z\in H,\quad\forall
r\in[0,T].$
* (ii)
For every $\lambda\in\rho(A)$ we have $\lambda\in\rho(A_{0})$ and
$R(\lambda,A_{0})=R(\lambda,A)|_{H}$.
* (iii)
The generator $A_{0}$ of the semigroup $e^{tA}|_{H}$ is given by
$\left\\{\begin{array}[]{l}{\cal D}(A_{0})=\left\\{x\in{\cal D}(A)\cap
H\,:\;Ax\in H\right\\}\\\\[5.69054pt] A_{0}x=Ax\quad\forall x\in{\cal
D}(A_{0}).\end{array}\right.$ (104)
We denote the semigroup $e^{tA}|_{H}$ by $e^{tA_{0}}$ (see Notation 3.7): thus
$e^{tA_{0}}$ is a strongly continuous semigroup in $H$.
###### Proof.
(i) Fix any $z\in H$ and $t>T_{0}$: then, by Hypothesis 2.7, $z\in{\cal
R}(Q_{\infty}^{1/2})={\cal R}(Q_{t}^{1/2})={\cal R}({\cal L}_{-t,0})$ (see
(32)), i.e. there exists $u\in L^{2}(-t,0;U)$ such that
$z={\cal L}_{-t,0}(u)=\int_{-t}^{0}e^{-\sigma
A}\,Bu(\sigma)\,\mathrm{d}\sigma.$
Hence, for every $r>0$,
$e^{rA}z=\int_{-t}^{0}e^{(r-\sigma)A}\,Bu(\sigma)\,\mathrm{d}\sigma=\int_{-t}^{r}e^{(r-\sigma)A}\,B\overline{u}(\sigma)\,\mathrm{d}\sigma,$
where
$\overline{u}(s)=\left\\{\begin{array}[]{ll}u(s)&\textrm{if
}s\in[-t,0]\\\\[5.69054pt] 0&\textrm{if }s\in\,]0,r].\end{array}\right.$
Setting $r-\sigma=-s$ and $v(s)=\overline{u}(s+r)$, it follows that
$e^{rA}z=\int_{-t-r}^{0}e^{-sA}\,Bv(s)\,\mathrm{d}s={\cal
L}_{-t-r,0}(v)\in{\cal R}({\cal L}_{-t-r,0})={\cal R}(Q_{t+r}^{1/2})={\cal
R}(Q_{\infty}^{1/2})=H.$
Let us now prove that the restriction of $e^{rA}$ to $H$ has closed graph in
$H$: if $z,\\{z_{n}\\}\subset H$ and $z_{n}\rightarrow z$ in $H$,
$e^{rA}z_{n}\rightarrow w\in H$ in $H$, then, since $H$ is continuously
embedded into $X$,
$z_{n}\rightarrow z\textrm{ in }X,\qquad e^{rA}z_{n}\rightarrow w\textrm{ in
}X;$
but $e^{rA}\in{\cal L}(X)$, so that $w=e^{rA}z$. Thus $e^{rA}z_{n}\rightarrow
e^{rA}z$ in $H$, and by the closed graph theorem it follows that
$e^{rA}|_{H}\in{\cal L}(H)$.
Now fix $x\in H$ and consider for $t>0$ the quantity $e^{tA}x-x$. We have
$\|e^{tA}x-x\|_{H}=\sup_{\|y\|_{H}=1}\langle e^{tA}x-x,y\rangle_{H}\,;$
thus, for every $\varepsilon\in\,]0,1[\,$ there exists $y_{\varepsilon}\in H$
with $\|y_{\varepsilon}\|_{H}=1$ such that
$\|e^{tA}x-x\|_{H}<\varepsilon+\langle e^{tA}x-x,y_{\varepsilon}\rangle_{H};$
then, using Lemma A.2 and choosing $z_{\varepsilon}\in{\cal R}(Q_{\infty})$
such that $\|z_{\varepsilon}-y_{\varepsilon}\|_{H}<\varepsilon$, we obtain
$\displaystyle\|e^{tA}x-x\|_{H}$ $\displaystyle<$
$\displaystyle\varepsilon+\langle
e^{tA}x-x,y_{\varepsilon}-z_{\varepsilon}\rangle_{H}+\langle
e^{tA}x-x,z_{\varepsilon}\rangle_{H}\leq$ $\displaystyle\leq$
$\displaystyle\varepsilon+\|e^{tA}x-x\|_{H}\,\|y_{\varepsilon}-z_{\varepsilon}\|_{H}+\langle
e^{tA}x-x,Q_{\infty}^{-1}z_{\varepsilon}\rangle_{X}\leq$ $\displaystyle\leq$
$\displaystyle\varepsilon+\|e^{tA}x-x\|_{H}\,\varepsilon+\|e^{tA}x-x\|_{X}\,\|Q_{\infty}^{-1}z_{\varepsilon}\|_{X}\,.$
Hence
$(1-\varepsilon)\|e^{tA}x-x\|_{H}<\varepsilon+\|e^{tA}x-x\|_{X}\,\|Q_{\infty}^{-1}z_{\varepsilon}\|_{X}\,,$
and letting $t\rightarrow 0^{+}$, since $e^{tA}x\rightarrow x$ in $X$, we get
$\limsup_{t\rightarrow
0^{+}}\|e^{tA}x-x\|_{H}\leq\frac{\varepsilon}{1-\varepsilon}\,.$
The arbitrariness of $\varepsilon$ leads to the conclusion.
(ii) This is an immediate consequence of point (i) and of the well known
resolvent formula $R(\lambda,A)=\int_{0}^{+\infty}e^{-\lambda
t}e^{tA}\,\mathrm{d}t$.
(iii) If $z\in{\cal D}(A_{0})$ then it must be, by definition, $z\in{\cal
D}(A)$, $z\in H$, $Az\in H$; hence ${\cal D}(A_{0})\subseteq\left\\{x\in{\cal
D}(A)\cap H\,:\;Ax\in H\right\\}$. To prove the converse we first observe
that, using (ii), we get, for $n\in\mathbb{N}-\\{0\\}$ and $x\in H$,
$nAR(n,A)x=n^{2}R(n,A)x-nx=n^{2}R(n,A_{0})x-nx=nA_{0}R(n,A_{0})x.$
Now assume that $z\in{\cal D}(A)\cap H$ with $Az\in H$. To prove that
$z\in{\cal D}(A_{0})$ it is enough to show that $nA_{0}R(n,A_{0})z$ converges
to some element $y$ of $H$ when $n\rightarrow+\infty$. In this case such
element is $A_{0}z$. To do this we observe that, by the above remarks for the
resolvents and by the assumptions on $z$, we get
$nA_{0}R(n,A_{0})z=nAR(n,A)z=nR(n,A)Az=nR(n,A_{0})Az.$
The latter, by the properties of resolvents, converges in $H$ to $Az$ as
$n\rightarrow+\infty$, since $Az\in H$. This shows that $z\in{\cal D}(A_{0})$
and $A_{0}z=Az$. ∎
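The resolvent identity used in the proof above is easy to verify in finite dimensions (our illustration, with an arbitrarily chosen stable matrix and the convention $R(n,A)=(nI-A)^{-1}$ from the resolvent formula in point (ii)), together with the Yosida-type convergence $nAR(n,A)x\rightarrow Ax$:

```python
import numpy as np

# R(n, A) = (n I - A)^{-1}; then n A R(n,A) x = n^2 R(n,A) x - n x,
# and n A R(n,A) x converges to A x as n grows.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])             # an arbitrary stable matrix
x = np.array([1.0, 2.0])

for n in (10.0, 100.0, 1000.0):
    R = np.linalg.inv(n * np.eye(2) - A)
    assert np.allclose(n * (A @ R @ x), n**2 * (R @ x) - n * x)

# Yosida-type approximation: n A R(n,A) x ~ A x for large n.
n = 1e6
R = np.linalg.inv(n * np.eye(2) - A)
assert np.allclose(n * (A @ R @ x), A @ x, atol=1e-4)
```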
###### Lemma A.5.
Assume Hypothesis 2.7. Then we have the following.
* (i)
$Q_{\infty}(H)$ is dense in $H$.
* (ii)
$Q_{\infty}({\cal D}(A_{0}^{*}))$ is dense in $H$.
* (iii)
Let $A$ be selfadjoint and commuting with $BB^{*}$. Then $Q_{\infty}({\cal
D}(A_{0}^{*}))\subseteq{\cal D}(A_{0})$. Moreover $A_{0}$ is selfadjoint
in $H$.
###### Proof.
(i) Since $\ker Q_{\infty}^{1/2}=\ker Q_{\infty}$, we have $\overline{{\cal
R}(Q_{\infty}^{1/2})}=\overline{{\cal R}(Q_{\infty})}$. Fix $x\in H$ and set
$z:=Q_{\infty}^{-1/2}x\in\overline{{\cal R}(Q_{\infty}^{1/2})}$. Then there
exists $\\{w_{n}\\}\subset X$ such that, defining
$z_{n}:=Q_{\infty}w_{n}\in{\cal R}(Q_{\infty})$, we have $z_{n}\rightarrow z$
in $X$. Set
$x_{n}:=Q_{\infty}^{1/2}z_{n}=Q_{\infty}^{1/2}Q_{\infty}w_{n}=Q_{\infty}Q_{\infty}^{1/2}w_{n}.$
Clearly $x_{n}\in Q_{\infty}(H)$. Moreover
$\|x_{n}-x\|_{H}=\|Q_{\infty}^{1/2}z_{n}-x\|_{H}=\|z_{n}-z\|_{X}\rightarrow
0\quad\hbox{as $n\rightarrow+\infty$,}$
which proves the claim.
(ii) Fix $x\in H$. By part (i) there exists $\\{x_{n}\\}\subset
Q_{\infty}(H)$ such that $x_{n}\rightarrow x$ in $H$. We can write
$x_{n}=Q_{\infty}z_{n}$, with $z_{n}\in H$. Since ${\cal D}(A^{*}_{0})$ is
dense in $H$, then, for every $n\in\mathbb{N}_{+}$ there exists $w_{n}\in{\cal
D}(A^{*}_{0})$ such that $\|z_{n}-w_{n}\|_{H}<1/n$. Consequently, setting
$y_{n}:=Q_{\infty}w_{n}$, we have, using Lemma A.1-(iv),
$\|y_{n}-x\|_{H}\leq\|Q_{\infty}(w_{n}-z_{n})\|_{H}+\|x_{n}-x\|_{H}\leq\|Q_{\infty}\|_{{\cal
L}(H)}\frac{1}{n}+\|x_{n}-x\|_{H}\,.$
This proves the claim.
(iii) Let $A$ be selfadjoint and commuting with $BB^{*}$. Observe first that
${\cal D}(A_{0}^{*})\subseteq{\cal D}(A^{*})={\cal D}(A)$. Indeed, when
$x\in{\cal D}(A_{0}^{*})$, the linear map $y\rightarrow\langle
x,A_{0}y\rangle_{H}$ is bounded in $H$. Using such boundedness and the fact
that $A$ and $Q_{\infty}$ commute (see [1, Proposition C.1-(v)]), we get, for
every $y\in{\cal D}(A)$,
$\langle x,Ay\rangle_{X}=\langle x,Q_{\infty}Ay\rangle_{H}=\langle
x,AQ_{\infty}y\rangle_{H}=\langle x,A_{0}Q_{\infty}y\rangle_{H}\leq
C\|Q_{\infty}y\|_{H}\leq C^{\prime}\|y\|_{X}\,,$
which implies $x\in{\cal D}(A^{*})={\cal D}(A)$.
Now, let $x\in Q_{\infty}({\cal D}(A_{0}^{*}))$ (which is contained in ${\cal
D}(A)$ by Lemma A.2, since ${\cal D}(A_{0}^{*})\subseteq{\cal D}(A^{*})$) and
let $z\in{\cal D}(A_{0}^{*})$ be such that $x=Q_{\infty}z$. Using again the
fact that $A$ and $Q_{\infty}$ commute, we get
$Ax=AQ_{\infty}z=Q_{\infty}Az\in H$. Hence, by definition of $A_{0}$, we
deduce that $x\in{\cal D}(A_{0})$ and $A_{0}x=Ax$.
Now we prove that $A_{0}$ is selfadjoint in $H$. Let $x\in{\cal D}(A_{0})$ and
$y\in Q_{\infty}({\cal D}(A_{0}^{*}))$. Then for some $z\in{\cal
D}(A_{0}^{*})$ we have $y=Q_{\infty}z$ and $Q_{\infty}^{-1}y=z+z_{0}$, where
$z_{0}\in\ker Q_{\infty}$. Hence it must be $\langle Ax,z_{0}\rangle_{X}=0$,
since $Ax=A_{0}x\in H\subseteq[\ker Q_{\infty}]^{\perp}$. Using this fact, we
get
$\langle A_{0}x,y\rangle_{H}=\langle Ax,y\rangle_{H}=\langle Ax,Q_{\infty}^{-1}y\rangle_{X}=\langle Ax,z\rangle_{X}=\langle x,Az\rangle_{X}=\langle x,Q_{\infty}Az\rangle_{H}=\langle x,AQ_{\infty}z\rangle_{H}=\langle x,Ay\rangle_{H}=\langle x,A_{0}y\rangle_{H}\,,$
where in the last step we used the inclusion ${\cal
D}(A_{0}^{*})\subseteq{\cal D}(A)$, the fact that $Q_{\infty}$ and $A$
commute, and the inclusion $Q_{\infty}({\cal D}(A_{0}^{*}))\subseteq{\cal
D}(A_{0})$. This implies that, for every $x\in{\cal D}(A_{0})$, the linear map
$y\rightarrow\langle x,A_{0}y\rangle_{H}$ is defined on $Q_{\infty}({\cal
D}(A_{0}^{*}))$ (which is dense in $H$) and is bounded in $H$. This implies
that $x\in{\cal D}(A_{0}^{*})$ and $A_{0}^{*}x=A_{0}x$. Hence $A_{0}^{*}$
extends $A_{0}$. Since both $A_{0}$ and $A_{0}^{*}$ generate a semigroup, we
can choose $\lambda>0$ such that $\lambda\in\rho(A_{0})\cap\rho(A_{0}^{*})$.
For such $\lambda$ we now prove that $R(\lambda,A_{0}^{*})=R(\lambda,A_{0})$,
which immediately implies that ${\cal D}(A_{0})={\cal D}(A_{0}^{*})$. Indeed
for $z\in H$ we have
$z=(\lambda-A_{0})R(\lambda,A_{0})z=(\lambda-A_{0}^{*})R(\lambda,A_{0})z,$
where in the last equality we used that ${\cal D}(A_{0})\subseteq{\cal
D}(A_{0}^{*})$ and that $A_{0}^{*}x=A_{0}x$ for all $x\in{\cal D}(A_{0})$.
Applying $R(\lambda,A_{0}^{*})$ to both sides we get the claim. ∎
###### Lemma A.6.
Assume Hypothesis 2.7.
(i)
For $0\leq s\leq T_{0}$ we have $Q_{\infty}^{-1/2}e^{sA}\in{\cal L}(H,X)$, and
there exists $C_{1}(T_{0})>0$ such that
$\|Q_{\infty}^{-1/2}e^{sA}x\|_{X}\leq C_{1}({T_{0}})\|x\|_{H}\qquad\forall
x\in H,\quad\forall s\in[0,T_{0}],$
and $(Q_{\infty}^{-1/2}e^{sA})^{*}=e^{sA_{0}^{*}}Q_{\infty}^{1/2}\in{\cal L}(X,H)$, with
$\|(Q_{\infty}^{-1/2}e^{sA})^{*}\|_{{\cal L}(X,H)}\leq
C_{1}({T_{0}})\qquad\forall s\in[0,T_{0}].$
(ii)
For $s>T_{0}$ we have $Q_{\infty}^{-1/2}e^{sA}\in{\cal L}(X)$, with
$\|Q_{\infty}^{-1/2}e^{sA}x\|_{X}\leq
C_{1}({T_{0}})Me^{-\omega(s-T_{0})}\|x\|_{X}\qquad\forall x\in X,\quad\forall
s>T_{0},$
and
$\|(Q_{\infty}^{-1/2}e^{sA})^{*}\|_{{\cal L}(X)}\leq
C_{1}({T_{0}})Me^{-\omega(s-T_{0})}\qquad\forall s>T_{0}.$
(iii)
For $s\geq 0$, $x\in X$ we have
$e^{sA_{0}^{*}}Q_{\infty}x=Q_{\infty}e^{sA^{*}}x.$
(iv)
For $x\in{\cal D}(A^{*})$ we have $Q_{\infty}x\in{\cal D}(A_{0}^{*})$.
Moreover, for every $s\geq 0$ we have
$A_{0}^{*}e^{sA_{0}^{*}}Q_{\infty}x=Q_{\infty}A^{*}e^{sA^{*}}x.$
###### Proof.
(i) We have by Lemma A.4-(i)
$\|Q_{\infty}^{-1/2}e^{sA}x\|_{X}=\|e^{sA}x\|_{H}\leq
c_{T_{0}}\|x\|_{H}\qquad\forall x\in H,\quad\forall s\in[0,T_{0}];$
moreover (identifying here $X$ and $H$ with their duals),
$(Q_{\infty}^{-1/2}e^{sA})^{*}\in{\cal L}(X,H)$ and, for all $z\in H$ and
$x\in X$, we have
$\langle(Q_{\infty}^{-1/2}e^{sA})^{*}x,z\rangle_{H}=\langle
x,Q_{\infty}^{-1/2}e^{sA}z\rangle_{X}=\langle
Q_{\infty}^{1/2}x,e^{sA_{0}}z\rangle_{H}=\langle
e^{sA_{0}^{*}}Q_{\infty}^{1/2}x,z\rangle_{H}\,.$
This shows that
$(Q_{\infty}^{-1/2}e^{sA})^{*}=e^{sA_{0}^{*}}Q_{\infty}^{1/2}\in{\cal
L}(X,H),$
with
$\|(Q_{\infty}^{-1/2}e^{sA})^{*}\|_{{\cal L}(X,H)}=\|e^{sA_{0}^{*}}Q_{\infty}^{1/2}\|_{{\cal L}(X,H)}=\|Q_{\infty}^{-1/2}e^{sA}\|_{{\cal L}(H,X)}\leq c_{T_{0}}\quad\forall s\in[0,T_{0}].$
(ii) By Hypothesis 2.7 we have $Q_{\infty}^{-1/2}e^{sA}\in{\cal L}(X)$, and by
(i) we get
$\|Q_{\infty}^{-1/2}e^{sA}\|_{{\cal L}(X)}=\|Q_{\infty}^{-1/2}e^{T_{0}A}e^{(s-T_{0})A}\|_{{\cal L}(X)}\leq\|Q_{\infty}^{-1/2}e^{T_{0}A}\|_{{\cal L}(X)}Me^{-\omega(s-T_{0})}\leq C_{1}({T_{0}})Me^{-\omega(s-T_{0})}\quad\forall s>T_{0}.$
The claim easily follows.
(iii) For $s\geq 0$, $x\in X$, $z\in H$, we have
$\langle e^{sA_{0}^{*}}Q_{\infty}x,z\rangle_{H}=\langle
Q_{\infty}x,e^{sA_{0}}z\rangle_{H}=\langle x,e^{sA}z\rangle_{X}=\langle
e^{sA^{*}}x,z\rangle_{X}=\langle Q_{\infty}e^{sA^{*}}x,z\rangle_{H}\,,$
which proves the claim.
(iv) Let $x\in{\cal D}(A^{*})$ and $s\geq 0$. For $h>0$ we have, using the point
(iii) above,
$\frac{e^{(s+h)A_{0}^{*}}-e^{sA_{0}^{*}}}{h}Q_{\infty}x=Q_{\infty}\frac{e^{(s+h)A^{*}}-e^{sA^{*}}}{h}x\,.$
Letting $h\rightarrow 0$, the claim follows. ∎
## References
* [1] P. Acquistapace, and F. Gozzi. Minimum energy for linear systems with finite horizon: a non-standard Riccati equation. Math. Control Signals Syst. 29, 19 (2017).
* [2] E. Barucci, and F. Gozzi. Technology Adoption and Accumulation in a Vintage Capital Model. J. of Economics, 74, N.1, 1–38 (2001).
* [3] A. Bensoussan, G. Da Prato, M.C. Delfour, and S.K. Mitter. Representation and control of Infinite dimensional system. Second edition, Birkhäuser, Boston 2007.
* [4] L. Bertini, A. De Sole, D. Gabrielli, G. Jona–Lasinio, and C. Landim. Fluctuations in stationary nonequilibrium states of irreversible processes. Phys. Rev. Lett. 87, 040601 (2001).
* [5] L. Bertini, A. De Sole, D. Gabrielli, G. Jona–Lasinio, and C. Landim. Macroscopic fluctuation theory for stationary non equilibrium state. J. Statist. Phys. 107, 635–675 (2002).
* [6] L. Bertini, A. De Sole, D. Gabrielli, G. Jona–Lasinio, and C. Landim. Large deviations for the boundary driven simple exclusion process. Math. Phys. Anal. Geom. 6, 231–267 (2003).
* [7] L. Bertini, A. De Sole, D. Gabrielli, G. Jona–Lasinio, and C. Landim. Minimum Dissipation Principle in Stationary Non-Equilibrium States. J. Statist. Phys. 116, 831–841 (2004).
* [8] L. Bertini, A. De Sole, D. Gabrielli, G. Jona–Lasinio, and C. Landim. Action Functional and Quasi-Potential for the Burgers Equation in a Bounded Interval. Comm. Pure. Appl. Math. 64, 649–696 (2011).
* [9] L. Bertini, D. Gabrielli, and J.L. Lebowitz. Large deviations for a stochastic model of heat flow. J. Statist. Phys. 121, 843–885 (2005).
* [10] O. Carja. The minimal time function in infinite dimensions. SIAM J. Contr. Optimiz. 31 (5), 1103-1114 (1993).
* [11] R. Curtain, and A. J. Pritchard. Infinite Dimensional Linear Systems Theory. Springer 1978.
* [12] G. Da Prato, and J. Zabczyk. Stochastic Equations in Infinite Dimension. Springer 1992.
* [13] G. Da Prato, A. J. Pritchard, and J. Zabczyk. Null controllability with vanishing energy. SIAM J. Control Optim. 29 (1), 209–221 (1991).
* [14] Z. Emirsajlow. A feedback for an infinite-dimensional linear-quadratic control problem with a fixed terminal state. IMA J. Math. Control Inf. 6 (1), 97-117 (1989).
* [15] Z. Emirsajlow, and SD. Townley. Uncertain systems and minimum energy control. J. Appl. Math. Comput. Sci. 5 (3), 533–545 (1995).
* [16] K-J. Engel, and R. Nagel. One Parameter Semigroups for Linear Evolution Equations. Springer 1999.
* [17] G. Fabbri, F. Gozzi, and A. Swiech. Stochastic optimal control in infinite dimension. Springer 2017.
* [18] J. Feng, and T. Kurtz. Large Deviations for Stochastic Processes. Mathematical Surveys and Monographs, AMS 2006.
* [19] J.A. Goldstein. Semigroups of linear operators and applications. Oxford Mathematical Monographs, 1985.
* [20] F. Gozzi, and P. Loreti. Regularity of the minimum time function and minimum energy problems: The linear case. SIAM J. Control Optim. 37 (4), 1195–1221 (1999).
* [21] P. Lancaster, and L. Rodman. Algebraic Riccati Equations. Oxford Science Publications, Clarendon Press, Oxford, 1995.
* [22] I. Lasiecka, and R. Triggiani. Control Theory for Partial Differential Equations: Continuous and Approximation Theories. Part 1. Cambridge University Press 2000.
* [23] I. Lasiecka, and R. Triggiani. Control Theory for Partial Differential Equations: Continuous and Approximation Theories. Part 2. Cambridge University Press 2000.
* [24] A. Lunardi. Analytic semigroups and optimal regularity in parabolic problems. Birkhäuser Verlag, Basel, 1995.
* [25] B. C. Moore. Principal component analysis in linear systems: controllability, observability, and model reduction. IEEE Trans. Automat. Control 26 (1), 17–32 (1981).
* [26] A. W. Olbrot, and L. Pandolfi. Null controllability of a class of functional-differential systems. Internat. J. Control 47 (1), 193–208 (1988).
* [27] A. Pazy. Semigroups of linear operators and applications to partial differential equations. Springer Verlag, New-York 1983.
* [28] E. Priola, and J. Zabczyk. Null controllability with vanishing energy. SIAM J. Control and Optim. 42 (6), 1013-1032, 2003.
* [29] M. Reed, B. Simon. Functional Analysis. Academic Press, London 1980.
* [30] J.M.A. Scherpen. Balancing for Nonlinear Systems. Syst. Contr. Lett. 21 (2), 143-153 (1993).
* [31] H. Triebel. Interpolation theory, function spaces, differential operators. North-Holland Publishing Co., Amsterdam 1978.
* [32] J. Zabczyk. Mathematical control theory: an introduction. Birkhäuser Verlag, Boston 1995.
# Echo State Network for two-dimensional turbulent moist Rayleigh-Bénard
convection
Florian Heyder Institut für Thermo- und Fluiddynamik, Technische Universität
Ilmenau, Postfach 100565, D-98684 Ilmenau, Germany Jörg Schumacher Institut
für Thermo- und Fluiddynamik, Technische Universität Ilmenau, Postfach 100565,
D-98684 Ilmenau, Germany Tandon School of Engineering, New York University,
New York City, NY 11201, USA
###### Abstract
Recurrent neural networks are machine learning algorithms which are well
suited to predict time series. Echo state networks are one specific
implementation of such neural networks that can describe the evolution of
dynamical systems by supervised machine learning without solving the
underlying nonlinear mathematical equations. In this work, we apply an echo
state network to approximate the evolution of two-dimensional moist Rayleigh-
Bénard convection and the resulting low-order turbulence statistics. We
conduct long-term direct numerical simulations in order to obtain training and
test data for the algorithm. Both sets are pre-processed by a Proper
Orthogonal Decomposition (POD) using the snapshot method to reduce the amount
of data. Training data comprise long time series of the first 150 most
energetic POD coefficients. The reservoir is subsequently fed by these data
and predicts future flow states. The predictions are thoroughly validated
against the original simulations. Our results show good agreement of the low-order
statistics. This also incorporates derived statistical quantities such as the
cloud cover close to the top of the convection layer and the flux of liquid
water across the domain. We conclude that our model is capable of learning
complex dynamics which is introduced here by the tight interaction of
turbulence with the nonlinear thermodynamics of phase changes between vapor
and liquid water. Our work opens new ways for the dynamic parametrization of
subgrid-scale transport in larger-scale circulation models.
## I Introduction
Machine learning (ML) algorithms have changed our way to model and analyse
turbulent flows including thermally driven convection flows Brenner et al.
(2019); Brunton et al. (2020); Pandey et al. (2020). The applications reach
from subgrid-scale stress models Ling et al. (2016); Duraisamy et al. (2019),
via the detection of large-scale patterns in mesoscale convection Fonda et al.
(2019) to ML-based parametrizations of turbulent transport and clouds in
climate and global circulation models Brenowitz and Bretherton (2018);
O’Gorman and Dwyer (2018); Gentine et al. (2018). The implementations of such
algorithms help to process and reduce increasing amounts of data coming from
numerical simulations and laboratory experiments Moller et al. (2020) by
detecting patterns and statistical correlations Goodfellow et al. (2016).
Moreover, the computational cost associated with a direct numerical
simulation (DNS) of the underlying highly nonlinear Navier-Stokes equations
can often be reduced significantly by running a neural network instead that
generates synthetic fields with the same low-order statistics as in a full
simulation Schneider et al. (2017); Mohan et al. (2020).
Turbulent convection, as all other turbulent flows, is inherently highly
chaotic so that specific flow states in the future are hard to predict after
exceeding a short horizon. The additional possibility of the fluid to change
its thermodynamic phase, as is the case in moist turbulent convection
Stevens (2005); Mellado (2017), adds further nonlinearities and feedbacks to
the turbulence dynamics. Learning low-order statistics of such a turbulent
flow provides a challenge to an ML algorithm. For such a task, an internal
memory is required since statistical correlations decay in a finite time. This
necessitates the implementation of a short-term memory or internal cyclic
feedbacks in the network architecture. That is why a particular class of
supervised machine learning algorithms – recurrent neural networks (RNNs) –
will be the focus of the present work Hochreiter and Schmidhuber (1997);
Lukoševičius and Jaeger (2009); Tanaka et al. (2019).
In this paper we apply a specific implementation of an RNN, the echo state
network (ESN) Jaeger and Haas (2004); Yildiz et al. (2012), to two-dimensional
(2d) turbulent moist Rayleigh-Bénard convection (RBC) flow. We use this RNN to
predict the low-order statistics, such as the buoyancy and liquid water fluxes
or fluctuation profiles of these quantities across the layer. Our present work
extends a recent application of ESNs to two-dimensional dry Rayleigh-Bénard
convection Pandey and Schumacher (2020) in several points. (1) The physical
complexity of turbulent convection is enhanced in the present moist case. This
is due to the total water content, an additional active scalar field which
comprises the vapor and liquid water contents. The total water content couples
as an additional field to the original dry Boussinesq model for temperature
and velocity. (2) The size of the convection domain is increased by a factor
of 4 such that the necessary degree of data reduction is significantly higher.
(3) Moist convection also requires a satisfactory reproduction of new physical
quantities which are derived from the different turbulence fields, e.g., the cloud
cover in the layer or the liquid water flux across it. This can be considered as
a firmer test of the capabilities of the ESN to model complex dynamics. Such
quantities are of particular interest for larger-scale models of atmospheric
circulation that include mesoscale convection processes in form of parameters
or minimal models Grabowski and Smolarkiewicz (1999); Grabowski (2001). (4)
Finally, the hyperparameters of the ESN, in particular the spectral radius
$\rho(W^{r})$ of the reservoir matrix $W^{r}$, have been tested in more detail
(exact definitions follow). The grid search in our work thus sums up to a
total of more than 1800 different hyperparameter sets.
Echo state networks have recently received renewed attention for their
capability of equation-free modeling of several chaotic systems, such as of
the Lorenz96 model Vlachas et al. (2020) or the one-dimensional partial
differential Kuramoto-Sivashinsky equation Lu et al. (2017); Pathak et al.
(2018). Furthermore, low-order flow statistics in 2d dry Rayleigh-Bénard
convection with ${\rm Ra}=10^{7}$ have been successfully reproduced using an
ESN that is trained on the most energetic time coefficients of a Karhunen-Loève
expansion (or proper orthogonal decomposition) of the convection fields Holmes
et al. (2012). This latter step can be considered as an autoencoder that
provides training and test data for the ESN which cannot take the data
directly, even for the present 2d case.
A second popular implementation of RNNs, which we want to mention here for
completeness, are long short-term memory networks (LSTM) Hochreiter and
Schmidhuber (1997) which have been also applied to fluid mechanical problems,
such as the dynamics of Galerkin models of shear flows in Srinivasan et al.
(2019). These models also demonstrated to capture the longer-term time
dependency and low-order statistics of important turbulent transport
properties well (see also Pandey et al. (2020) for a direct comparison). An
acceleration of the training, which requires the backpropagation of the errors
through the whole network in contrast to ESNs, was obtained recently with
Momentum LSTMs that apply the momentum extension of gradient descent search of
the cost function minimum to determine the weights at the network nodes Nguyen
et al. (2020).
In this work, a moist Rayleigh-Bénard convection model with moist Rayleigh
number ${\rm Ra}_{\rm M}\simeq 10^{8}$ and Prandtl number $\text{Pr}=0.7$ is
considered as an example case. We choose a 2d domain $\Omega=L\times H$ with
aspect ratio $A=L/H=24$. Here $L$ is the horizontal length and $H$ the height
of the simulation domain. The number of grid points was chosen as $N_{x}\times
N_{y}=7200\times 384$. The data are obtained by direct numerical simulations
(DNS) which apply a spectral element method Fischer (1997); Scheel et al.
(2013); nek (2017). Comprehensive studies of further data sets with different
parameters, such as Rayleigh numbers, to study the generalization properties
will be presented elsewhere. Our intention is to demonstrate the capability of
the ESN to deliver reliable low-order statistics for a turbulent convection
flow with phase changes.
The manuscript is structured as follows. The next section introduces the moist
RBC model and the central ideas of ESNs. Then the generation and analysis of
the DNS data, including a brief description of the numerical scheme and the
proper orthogonal decomposition (POD), is specified. Finally the results of
our machine learning approach to moist turbulent convection will be discussed
in detail. In section V we summarize our results and provide a conclusion and
an outlook.
## II Methods
### II.1 Moist Rayleigh-Bénard Convection Model
We now briefly review our model for moist Rayleigh-Bénard convection in two
spatial dimensions. A detailed derivation can be found in Pauluis and
Schumacher (2010); Weidauer et al. (2010); Schumacher and Pauluis (2010). The
framework is based on the mathematically equivalent formulation by Bretherton
Bretherton (1987, 1988). Similar simplified models of moist convection with
precipitation have been developed by Smith and Stechmann Smith and Stechmann
(2017) and Vallis et al. Vallis et al. (2019). Evaporative cooling and
buoyancy reversal effects were discussed for example by Abma et al. Abma et
al. (2013).
The buoyancy $B$ in atmospheric convection is given by Emmanuel (1994)
$\displaystyle
B=-g\frac{\rho(S,q_{v},q_{l},q_{i},p)-\overline{\rho}}{\overline{\rho}}$ (1)
with the gravity acceleration $g$, a mean density $\overline{\rho}$, the
pressure $p$, the entropy $S$ and the contents of water vapor $q_{v}$, liquid
water $q_{l}$ and ice $q_{i}$. We consider warm clouds only, i.e. $q_{i}=0$
and assume local thermodynamic equilibrium. From the latter assumption, it
follows that no precipitation is possible and the number of independent
variables in eq. (1) reduces to three. By introducing the total water content
$q_{T}=q_{v}+q_{l}$ the buoyancy can be expressed as $B(S,q_{T},p)$. In the
Boussinesq approximation, pressure variations about a hydrostatic equilibrium
profile are small, such that the buoyancy becomes $B(S,q_{T},y)$ with the
vertical spatial coordinate $y$ for the present 2d case. Furthermore, the
convection layer is near the vapor-liquid phase boundary. The buoyancy can
then be expressed as a piecewise linear function of $S$ and $q_{T}$ on both
sides of the saturation line. This step preserves the discontinuity of the
first partial derivatives of $B$ and therefore the physics of a first-order
phase transition. The advantage of this formulation is that locally the
saturation state of the air can be determined. In the last step, we substitute
the linear combinations of $S$ and $q_{T}$ on both sides of the phase boundary
by a dry buoyancy $D$ and a moist buoyancy $M$. Consequently the buoyancy
field $B$ is given by Pauluis and Schumacher (2010)
$\displaystyle B(x,y,t)=\max\left(M(x,y,t),D(x,y,t)-N_{s}^{2}y\right)$ (2)
where the fixed Brunt-Väisälä frequency
$N_{s}=\sqrt{g(\Gamma_{u}-\Gamma_{s})/T_{\rm ref}}$ is determined by the lapse
rates of the saturated and unsaturated moist air, $\Gamma_{s}$ and
$\Gamma_{u}$, and a reference temperature $T_{\rm ref}$. An air parcel at
height $y$ and time $t$ is unsaturated if $M(x,y,t)<D(x,y,t)-N_{s}^{2}y$ and
saturated if $M(x,y,t)>D(x,y,t)-N_{s}^{2}y$. Note that the newly introduced
dry buoyancy field $D$ is proportional to the liquid water static energy and
the moist buoyancy field $M$ to the moist static energy. As in dry Rayleigh-
Bénard convection with Dirichlet boundary conditions for the temperature, the
static diffusive profiles $\overline{D}(y),\overline{M}(y)$ are vertically
linear
$\displaystyle\overline{D}(y)=D_{0}+\frac{D_{H}-D_{0}}{H}y$ (3)
$\displaystyle\overline{M}(y)=M_{0}+\frac{M_{H}-M_{0}}{H}y$ (4)
where $D_{0}$, $M_{0}$ and $D_{H}$, $M_{H}$ are the imposed values of $D$, $M$
at the bottom ($y=0$) and top ($y=H$) of the computational domain. Here,
$D_{0}=M_{0}$. The governing equations of the moist Boussinesq system are
given by
$\displaystyle\frac{d\mathbf{v}}{dt}=-\nabla\tilde{p}+\nu\nabla^{2}\mathbf{v}+B\left(D,M,y\right)\hat{\mathbf{e}}_{y}$ (5)
$\displaystyle\nabla\cdot\mathbf{v}=0$ (6)
$\displaystyle\frac{dD}{dt}=\kappa\nabla^{2}D$ (7)
$\displaystyle\frac{dM}{dt}=\kappa\nabla^{2}M.$ (8)
Here $\mathbf{v}=(v_{x}(x,y,t),v_{y}(x,y,t))^{T}$ is the two-dimensional
velocity field, $\tilde{p}=p/\rho_{\rm ref}$ the kinematic pressure, $\nu$ the
kinematic viscosity and $\kappa$ the scalar diffusivity. The term
$d/dt=\partial/\partial t+(\mathbf{v}\cdot\nabla)$ is the material derivative.
This idealized model describes the formation of warm, non-precipitating low
clouds in a shallow layer up to a depth of $\sim 1$ km. The assumptions made
here hold, for example, to a good approximation over the subtropical
oceans.
The problem is made dimensionless using length scale $[x,y]=H$, buoyancy scale
$[B]=M_{0}-M_{H}$, and (free-fall) time scale $[t]=\sqrt{H/(M_{0}-M_{H})}$.
The characteristic velocity scale then follows as
$[v_{x},v_{y}]=\sqrt{(M_{0}-M_{H})H}$. Four dimensionless numbers can be
identified: the Prandtl number, dry Rayleigh number and moist Rayleigh number
are given by
$\displaystyle{\rm Pr}=\frac{\nu}{\kappa}$ (9)
$\displaystyle{\rm Ra}_{\rm D}=\frac{\left(D_{0}-D_{H}\right)H^{3}}{\nu\kappa}$ (10)
$\displaystyle{\rm Ra}_{\rm M}=\frac{\left(M_{0}-M_{H}\right)H^{3}}{\nu\kappa}\,.$ (11)
A fourth dimensionless parameter arises from the phase changes and appears in
the dimensionless form of (2),
${\rm CSA}=\frac{N_{s}^{2}H}{M_{0}-M_{H}}\,.$ (12)
The condensation in saturated ascent (CSA) controls the amount of latent heat
an ascending saturated parcel can release on its way to the top. The
saturation condition (2) implies that liquid water is immediately formed at a
point in space and time when $M>D-N_{s}^{2}y$. There is no supersaturation
considered in this model and the liquid water content field $q_{l}$ and thus
the clouds are given by
$q_{l}(x,y,t)=M(x,y,t)-(D(x,y,t)-N_{s}^{2}y)\,.$ (13)
Note that in this formulation, $q_{l}$ can become negative, as it is a measure
of the degree of saturation. In a nutshell, $q_{l}<0$ thus stands for a liquid
water deficit. When the atmosphere is saturated, $q_{l}\geq 0$ and the
conventional liquid water content is retained.
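Pointwise, eqs. (2) and (13) reduce to a maximum and a difference of the two buoyancy fields. A minimal NumPy sketch (illustrative only, not the simulation code; the function name and the cloud indicator are our own):

```python
import numpy as np

def buoyancy_and_liquid_water(M, D, y, Ns2):
    """Evaluate eqs. (2) and (13) on discrete fields.

    M, D : moist and dry buoyancy fields (arrays of equal shape)
    y    : vertical coordinate at each point
    Ns2  : the (dimensionless) squared Brunt-Vaisala frequency N_s^2
    """
    dry_branch = D - Ns2 * y       # buoyancy of unsaturated air
    B = np.maximum(M, dry_branch)  # eq. (2)
    q_l = M - dry_branch           # eq. (13); q_l < 0 marks a saturation deficit
    cloud = q_l >= 0               # saturated (cloudy) points
    return B, q_l, cloud
```

The boolean `cloud` field is the kind of derived quantity (e.g. cloud cover near the top of the layer) that the ESN predictions are later checked against.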
Here we study the case of $D_{0}>D_{H}$ and $M_{0}>M_{H}$. Both fields are
linearly unstable. For the case of a conditionally unstable moist layer with
$D_{0}\leq D_{H}$ we refer to refs. Bretherton (1987, 1988) or Pauluis and
Schumacher (2011).
### II.2 Reservoir Computing
Reservoir computing (RC) is a special type of RNN implementation. Contrary to
standard feed-forward networks, neurons in the hidden layers of an RNN are
recurrently connected to each other. In this way, RNNs have a similar
architecture to biological brains and are said to possess an internal memory, as
cyclic connections allow information to stay inside the network for a certain
amount of time before fading out, known as the echo state property. Yet the
training of such RNNs is exceedingly difficult, as common training schemes
like backpropagation through time struggle with vanishing error gradients,
slow convergence Jaeger (2002) and bifurcations Doya (1992). An alternative
training method was proposed by Jaeger Jaeger (2001) and in an independent
work by Maass Maass et al. (2002). Their idea, which is now known as reservoir
computing Lukoševičius and Jaeger (2009), was to train the weights of the
output layer only, which connect the RNN, referred to as the reservoir, to the
output neurons. The weights of the input layer as well as the internal
reservoir weights should be initialized at random and then kept constant. This
training procedure reduces the computational costs for training significantly
and shifts the focus to an adequate initialization of the input and reservoir
weight matrix $W^{r}$ (which is an adjacency matrix in network theory). While
Jaeger’s approach is known as ESN, Maass’ framework is called liquid state
machine. They differ in their field of origin, as the ESN stems from the field
of machine learning and the liquid state machine from computational
neuroscience. We will stick to the ESN formulation, but note that RC refers to
the concept mentioned above and summarizes both models.
Despite their simple training scheme, ESNs have been shown to tackle many
tasks, from forecasting closing prices of stock markets Lin et al. (2008) to
estimating the life span of a fuel cell Morando et al. (2013). Especially their
application to dynamical systems shows great promise. For instance, it has been
demonstrated that the dynamics of two of the three degrees of freedom of the
Rössler system can be inferred from the evolution of the third one Lu et al.
(2017). Further, the Lyapunov exponents of the dynamical system that a trained
ESN represents have been shown to match the exponents of the data generating
system Pathak et al. (2017).
Figure 1: Sketch of the echo state network in the training phase (a) for time
steps $n<0$ and the prediction phase (b) for time steps $n\geq 0$.
Figure 1 shows the concept and components of the ESN, for the training phase
in panel (a) and for the prediction phase in panel (b). The input
$\mathbf{u}(n)\in\mathbb{R}^{\rm N_{\rm in}}$ at a time instance $n$ as well
as a constant scalar bias $b=1$ are passed to the reservoir via the input
weight matrix $W^{\rm in}\in\mathbb{R}^{\rm N_{\rm r}\times(1+\rm N_{\rm
in})}$. The weighted input contributes to the dynamics of the reservoir state
$\mathbf{s}\in\left[-1,1\right]^{\rm N_{\rm r}}$ at time $n$ which is given by
$\displaystyle\mathbf{s}(n)=(1-\gamma)\mathbf{s}(n-1)+\gamma\tanh\left[W^{\rm in}\left[b;\mathbf{u}(n)\right]+W^{\rm r}\mathbf{s}(n-1)\right].$ (14)
Here $\left[b;\mathbf{u}(n)\right]$ stands for the vertical concatenation of
the scalar bias and the input (in some cases the bias $b=0$ and
$\mathbf{u}(n-1)$ are used in eq. (14)). This update rule comprises external
forcing by the inputs $\mathbf{u}(n)$ as well as a self-interaction with the
past instance $\mathbf{s}(n-1)$. The reservoir weight matrix $W^{\rm
r}\in\mathbb{R}^{\rm N_{\rm r}\times\rm N_{\rm r}}$ blends the state
dimensions, while the nonlinear hyperbolic tangent $\tanh\left(\cdot\right)$,
applied to each component of its argument vector, is the nonlinear activation
function of the neurons in this model. The leaking rate
$\gamma\in\left(0,1\right]$ moderates the linear and nonlinear contributions
and assures that the state is confined to $\left[-1,1\right]^{N_{\rm r}}$. As
mentioned above, the existence of echo states, i.e., states that are purely
defined by the input history, is crucial. An ESN is said to include such echo
states when two different states $\mathbf{s}(n-1)$, $\mathbf{s}^{\prime}(n-1)$
converge to the same state $\mathbf{s}(n)$, provided the same input
$\mathbf{u}(n)$ is given and the system has been running for many iterations
$n$ Jaeger (2001). Therefore, the first few state iterations are considered as
a reservoir washout, even for a reservoir with echo state property. After this
transient phase, the updated state is concatenated with the bias and the
current input to form the extended reservoir state
$\tilde{\mathbf{s}}(n)=\left[b;\mathbf{u}(n);\mathbf{s}(n)\right]$. Finally,
$\tilde{\mathbf{s}}$ is mapped via the output matrix $W^{\rm
out}\in\mathbb{R}^{\rm N_{\rm in}\times\left(1+\rm N_{\rm in}+\rm N_{\rm
r}\right)}$ to form the reservoir output $\mathbf{y}(n)\in\mathbb{R}^{\rm
N_{\rm in}}$
$\mathbf{y}(n)=\mathrm{W}^{\rm out}\tilde{\mathbf{s}}(n).$ (15)
For our application the output dimension matches the input dimension $\rm
N_{\rm in}$, which generally does not need to be the case.
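The update rule (14) and the readout (15) can be sketched in a few lines of NumPy. The function names are illustrative and the bias is passed explicitly; this is a reading aid, not the implementation used in this work:

```python
import numpy as np

def esn_step(s, u, W_in, W_r, gamma, b=1.0):
    """One reservoir update, eq. (14): leaky integration of a tanh neuron
    layer driven by the biased input [b; u(n)] and the past state s(n-1)."""
    pre = W_in @ np.concatenate(([b], u)) + W_r @ s
    return (1.0 - gamma) * s + gamma * np.tanh(pre)

def esn_output(s, u, W_out, b=1.0):
    """Readout, eq. (15): linear map W_out of the extended state [b; u; s]."""
    return W_out @ np.concatenate(([b], u, s))
```

Iterating `esn_step` over the input series (discarding the first washout steps) produces the extended states $\tilde{\mathbf{s}}(n)$ that enter the training of $W^{\rm out}$ described next; note the convex combination in (14) keeps the state confined to $[-1,1]^{N_{\rm r}}$.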
Before the ESN can be used in the prediction phase, as sketched in Fig. 1(b),
the elements of $W^{\rm out}$ have to be computed first. This process is known
as the training phase of this supervised machine learning algorithm. Only when the
reservoir is properly trained will it produce reasonable output. A set of
$n_{\rm train}$ training data instances $\{\mathbf{u}(n),\mathbf{y}^{\rm
target}(n)\}$ (where $n=-n_{\rm train},-(n_{\rm train}-1),...,-1$) needs to
be prepared. The target output $\mathbf{y}^{\rm target}(n)$ represents the
desired output the ESN should produce for the given input $\mathbf{u}(n)$. The
reservoir state $\mathbf{s}$ is computed for all inputs in the training data
set and assembled into a mean square cost function with an additional $L_{2}$
regularization $C\left(W^{\rm out}\right)$ which is given by
$\displaystyle C\left(W^{\rm out}\right)=\frac{1}{n_{\rm train}}\sum_{n=-n_{\rm train}}^{-1}\left(W^{\rm out}\tilde{\mathbf{s}}(n)-\mathbf{y}^{\rm target}(n)\right)^{2}+\beta\sum_{i=1}^{N_{\rm in}}\|w_{i}^{\rm out}\|_{2}^{2},$ (16)
and has to be minimized according to
$\displaystyle W_{\ast}^{\rm out}=\arg\min C\left(W^{\rm out}\right)\,.$ (17)
Here $w_{i}^{\rm out}$ denotes the $i$-th row of $W^{\rm out}$ and
$\|\cdot\|_{2}$ the $L_{2}$ norm. Equations (16) and (17) are known as ridge
regression with the ridge regression parameter $\beta$. The last term in (16)
suppresses large values of the rows of $W^{\rm out}$, which could
inadvertently amplify small differences of the state dimensions in (15). This
well known regression problem is solved by the fitted output matrix
$\displaystyle W_{\ast}^{\rm out}=Y^{\rm target}S^{\rm T}\left(SS^{\rm
T}+\beta{\rm Id}\right)^{-1}$ (18)
where $\left(\cdot\right)^{\rm T}$ denotes the transpose and ${\rm Id}$ the
identity matrix. $Y^{\rm target}$ and $S$ are matrices where the $n$-th column
is the target output $\mathbf{y}^{\rm target}(n)$ and the extended reservoir
state $\tilde{\mathbf{s}}(n)$, respectively.
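The closed-form solution (18) translates directly into linear algebra. In the following sketch (illustrative names; the columns of `S` and `Y_target` hold the extended states and target outputs as above) the symmetric system is solved rather than forming $\left(SS^{\rm T}+\beta\,{\rm Id}\right)^{-1}$ explicitly:

```python
import numpy as np

def train_readout(S, Y_target, beta):
    """Ridge-regression readout, eq. (18): W_out = Y S^T (S S^T + beta Id)^-1.

    S        : (state_dim, n_train) matrix of extended reservoir states
    Y_target : (out_dim, n_train) matrix of target outputs
    beta     : ridge regression parameter
    """
    G = S @ S.T + beta * np.eye(S.shape[0])  # S S^T + beta Id (symmetric)
    # Solve G X = S Y^T, then transpose: X^T = Y S^T G^{-1}  (G = G^T)
    return np.linalg.solve(G, S @ Y_target.T).T
```

For $\beta\to 0$ this reduces to ordinary least squares; a finite $\beta$ suppresses large readout weights as intended by the last term in (16).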
As the output weights are the only parameters that are trained, RC is
computationally inexpensive. However, the algebraic properties of the
initially randomly generated matrices $W^{\rm in}$ and $W^{\rm r}$ are
hyperparameters which have to be tuned beforehand. In our approach we draw the
elements of $W^{\rm in}$ from a uniform distribution in
$\left[-0.5,0.5\right]$ and impose no further restrictions on this matrix. For
the generation of the reservoir weights in $W^{\rm r}$ it has been reported
that the proportion of non-zero elements, i.e., the reservoir density D, and
the spectral radius $\varrho\left(W^{\rm r}\right)$, i.e., the largest
absolute eigenvalue of $W^{\rm r}$, are crucial parameters that determine
whether the desired echo state property holds Lukoševičius (2012).
sparse reservoir ($\text{D}<1$) with few internal node connections and draw
the elements from a uniform distribution in $\left[-1,1\right]$. We then
normalize $W^{\rm r}$ by its largest absolute eigenvalue and multiply it with
the desired spectral radius $\varrho\left(W^{\rm r}\right)$. This scaling
approach, initially proposed by Jaeger Jaeger (2002), has established itself
as one of the standard ways Morando et al. (2013); Lu et al. (2017); Pathak et
al. (2018); Pandey and Schumacher (2020) to control the spectral radius.
Nevertheless, other procedures have been proposed Yildiz et al. (2012);
Strauss et al. (2012). In addition, the size of the reservoir $\rm N_{\rm r}$
is a hyperparameter. Usually $\mathbf{s}$ should be a high-dimensional
extension of the inputs $\mathbf{u}$, so that $\rm N_{\rm r}\gg\rm N_{\rm in}$
is satisfied. Moreover, we consider the leaking rate $\gamma$ and ridge
regression parameter $\beta$ as further quantities that have to be adjusted to
our data.
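The initialization just described can be sketched in a few lines of NumPy. The dimensions below are toy values, and a dense eigenvalue computation is used for simplicity; for large reservoirs a sparse matrix representation with an iterative eigensolver would be preferable:

```python
import numpy as np

def make_reservoir(n_r, n_in, density=0.2, rho=0.95, seed=1):
    """Random ESN weight matrices with the spectral-radius scaling described above."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, size=(n_r, 1 + n_in))  # input weights incl. bias column
    W_r = rng.uniform(-1.0, 1.0, size=(n_r, n_r))
    W_r[rng.random((n_r, n_r)) > density] = 0.0          # sparse reservoir, D < 1
    lam = np.max(np.abs(np.linalg.eigvals(W_r)))         # largest absolute eigenvalue
    W_r *= rho / lam                                     # rescale to the desired spectral radius
    return W_in, W_r

W_in, W_r = make_reservoir(n_r=200, n_in=150)
```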
## III Echo state network for 2d Moist Convection
### III.1 Direct Numerical Simulations
DNS using the spectral element solver Nek5000 Fischer (1997); Scheel et al.
(2013); nek (2017) were conducted to solve the two-dimensional moist Rayleigh-
Bénard system (5)-(8) in a domain $\Omega=L\times H$ with aspect ratio
$A=L/H=24$. The Rayleigh numbers are ${\rm Ra}_{\rm D}=2\cdot 10^{8}$, ${\rm
Ra}_{\rm M}=4\cdot 10^{8}$. The Prandtl number is ${\rm Pr}=0.7$ representing
moist air. The additional parameter is set to ${\rm CSA=0.3}$. In the vertical
direction $y$, Dirichlet boundary conditions are imposed for both buoyancy
fields at top and bottom in combination with free-slip boundary conditions for
the velocity field. Periodic boundaries are set for all fields in the
horizontal direction. We chose a spatial resolution of $N_{x}\times
N_{y}=7200\times 384$ grid points and a time step size of $5.0\cdot 10^{-4}$.
This setup corresponds to an absolutely unstable atmosphere, i.e. where both
unsaturated and saturated air are unstable w.r.t. vertical displacements. The
initial conditions are small perturbations around the diffusive equilibrium
state $\overline{M}(y)$ and $\overline{D}(y)$, which result in turbulent
convection. The flow statistics relax into a statistically stationary state
(see Fig. 2) which provides training and test data for further processing.
Figure 2: Turbulent kinetic energy $E_{\rm kin}(t)=\langle
v_{x}^{2}+v_{y}^{2}\rangle_{x,y}$ of the moist Rayleigh-Bénard flow versus
time $t$. After an initial transient the values such as those of $E_{\rm
kin}(t)$ become statistically stationary. In the statistically stationary
regime, $2000$ snapshots, each separated by $\Delta t=0.25$, are analyzed by a
POD (see Sec. III.2).
### III.2 Data reduction by POD
We sample a total of $n_{s}=2000$ snapshots of $v_{x}$, $v_{y}$, $M$ and $D$
at a time interval $\Delta t=0.25$ in the statistically stationary regime. The
original DNS snapshots were sampled at a constant time interval, and we keep
this constant time step throughout this work, including in the subsequent RC
model. Furthermore, the data are spectrally interpolated on a
uniform grid with a resolution of $N_{x}^{\prime}\times
N_{y}^{\prime}=640\times 80$ points from the originally unstructured element
mesh. They are decomposed into temporal mean and fluctuations subsequently,
$\displaystyle v_{x}(x,y,t)$ $\displaystyle=$ $\displaystyle\langle
v_{x}\rangle_{t}(x,y)+v_{x}^{\prime}(x,y,t)$ (19) $\displaystyle v_{y}(x,y,t)$
$\displaystyle=$ $\displaystyle\langle
v_{y}\rangle_{t}(x,y)+v_{y}^{\prime}(x,y,t)$ (20) $\displaystyle D(x,y,t)$
$\displaystyle=$ $\displaystyle\langle D\rangle_{t}(x,y)+D^{\prime}(x,y,t)$
(21) $\displaystyle M(x,y,t)$ $\displaystyle=$ $\displaystyle\langle
M\rangle_{t}(x,y)+M^{\prime}(x,y,t)$ (22) $\displaystyle q_{l}(x,y,t)$
$\displaystyle=$ $\displaystyle\langle
q_{l}\rangle_{t}(x,y)+q_{l}^{\prime}(x,y,t)\,.$ (23)
Figure 3: Eigenvalue spectrum of the POD mode obtained from the analysis of
2000 snapshots. (a) Individual and cumulative contribution of each mode. The
shaded region indicates the first $\rm N_{\rm POD}=150$ modes which capture
$81\%$ of the total energy of the original snapshot data. (b) Time
coefficients $a_{p}(n)$ for the 1st, 2nd, 10th, 50th, and 100th mode are
shown. The first coefficients show a slow variation compared to higher
coefficients. Time series are shifted with respect to each other for better
visibility.
A grid dimension of $640\times 80$ points is still too large in terms of ESN
input dimensions ${\rm N_{\rm in}}$. We therefore follow the approach in Pandey and
Schumacher (2020) and introduce an intermediate step of data reduction before
handing the data to the ESN. We make use of the periodicity and expand the
data in a Fourier series in the horizontal $x$-direction and take the
Karhunen-Loève expansion, also known as the POD, of our data. In particular we
choose the snapshot method Sirovich (1987) which decomposes the $k$-th
component of a vector field $\mathbf{g}(x,y,t)$ into
$\displaystyle g_{k}(x,y,t)$ $\displaystyle=\sum\limits_{p=1}^{\rm n_{\rm
s}}\sum_{n_{x}=-N_{x}^{\prime}/2}^{N_{x}^{\prime}/2}a_{p,n_{x}}(t)\Phi_{k,n_{x}}^{(p)}(y)\exp{\left(i\frac{2\pi
n_{x}x}{L}\right)}$ $\displaystyle=\sum\limits_{p=1}^{\rm n_{\rm
s}}a_{p}(t)\Phi_{k}^{(p)}(x,y).$ (24)
Here $a_{p}(t)$ and $\Phi_{k}^{(p)}(x,y)$ are the $p$-th time coefficient and
the corresponding spatial POD mode respectively. In our approach we take the
POD of $\mathbf{g}=(v_{x}^{\prime},v_{y}^{\prime},D^{\prime},M^{\prime})^{T}$.
The POD spectrum of the turbulent convection data can be seen in Fig. 3(a).
The eigenvalues of the covariance matrix fall off quickly and we therefore
truncate the POD expansion at $\rm N_{\rm POD}=150\ll\rm n_{\rm s}$ and
include only the most energetic modes (green shaded region). These capture
$81\%$ of the total energy of the original data. In Fig. 3(b) the time
coefficients $a_{p}(n)$ are shown for $p=1,2,10,50$, and 100 for all POD time
steps. The first time coefficients ($a_{1}$ to $a_{10}$) possess only a few
temporal features as opposed to higher coefficients. This range of active scales
is inherent to turbulence as kinetic energy of large-scale motion is
transferred to small eddies down to the Kolmogorov scale. Moreover, the
influence of the additional nonlinearity due to the phase changes impacts the
dynamics, as the first coefficients varied more in the dry RBC case with
aspect ratio $6$ at ${\rm Ra=10^{7}}$ Pandey and Schumacher (2020). This is
one order of magnitude below our Rayleigh number values. Nevertheless our RC
model will receive values for all $\rm N_{\rm POD}$ coefficients and therefore
for a wide range of temporal frequencies. We note here that the design of the
RC model could be adapted to the different frequencies in future approaches.
Figure 4 shows the spatial modes $\Phi_{2}^{(1)}$, $\Phi_{2}^{(50)}$ and
$\Phi_{4}^{(1)}$, $\Phi_{4}^{(50)}$.
Figure 4: Spatial structure of two POD modes for $v_{y}$: (a)
$\Phi_{2}^{(1)}(x,y)$, (b) $\Phi_{2}^{(50)}(x,y)$ and the moist buoyancy field
$M$: (c) $\Phi_{4}^{(1)}(x,y)$, (d) $\Phi_{4}^{(50)}(x,y)$. For visualization
purposes the aspect proportions do not match the actual aspect ratio of
$A=24$.
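The snapshot method itself reduces to an eigen-decomposition of the small $n_{s}\times n_{s}$ temporal correlation matrix. The sketch below uses random data in place of the actual stacked fluctuation fields and toy sizes; it follows the standard snapshot-POD construction rather than the exact implementation used here:

```python
import numpy as np

rng = np.random.default_rng(2)
n_s, n_dof = 400, 512                 # toy numbers of snapshots and spatial points
G = rng.standard_normal((n_dof, n_s)) # stacked fluctuation fields, one snapshot per column
G -= G.mean(axis=1, keepdims=True)    # remove the temporal mean, eqs. (19)-(23)

# Snapshot method: eigen-decompose the small n_s x n_s temporal correlation matrix.
C = G.T @ G / n_s
w, V = np.linalg.eigh(C)
order = np.argsort(w)[::-1]           # sort modes by energy content
w, V = w[order], V[:, order]

N_pod = 150                           # keep only the most energetic modes
Phi = G @ V[:, :N_pod] / np.sqrt(w[:N_pod] * n_s)  # orthonormal spatial POD modes
a = Phi.T @ G                         # time coefficients a_p(n)
captured = w[:N_pod].sum() / w.sum()  # fraction of total energy retained
```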
We limit ourselves to the last $1400={\rm n}_{\rm train}+{\rm n}_{\rm test}$
time instances of our data and use the first 150 time coefficients
$\mathbf{a}(n)=\left(a_{1}(n),a_{2}(n),...,a_{150}(n)\right)^{T}$ as the input
for the ESN. The first ${\rm n}_{\rm train}=700$ snapshots are assembled into
a training data set
$\displaystyle\{\mathbf{u}(n),\mathbf{y}^{\rm
target}(n)\}=\{\mathbf{a}(n),\mathbf{a}(n+1)\}$ (25)
with $-{\rm n}_{\rm train}\leq n\leq-1$. The training data span 175 free-fall
time units $T_{f}$ that correspond to 61 eddy turnover times. This time scale
is given by $\tau_{\rm eddy}=H/u_{\rm rms}\approx 2.9T_{f}$ with $u_{\rm
rms}=\langle u_{x}^{2}+u_{y}^{2}\rangle^{1/2}_{x,y,t}$. The ESN is trained to
predict the time instance $\mathbf{a}(n+1)$ when given the POD time
coefficients $\mathbf{a}(n)$ at the last time step as input. We use the first
$46$ time steps to initialize the reservoir. Using eq. (18) we compute $W^{\rm
out}$, which can then be used for prediction. In the prediction phase, we give
the initial input $\mathbf{u}(0)=\mathbf{a}(0)$ to the reservoir and redirect
the output to the input layer (see Fig. 1(b)) such that
$\displaystyle\mathbf{u}(n)$ $\displaystyle=\mathbf{y}(n-1)\qquad n=1,...,{\rm
n}_{\rm test}-1$ (26)
with ${\rm n}_{\rm test}=700$. This coupling creates an autonomous system that
generates new outputs without providing external inputs. Contrary to the
teacher forcing approach, where at each time step the input is given by the
actual time coefficients, this method is more suited for real world
application. Finally, the outputs at each time step are gathered and validated
by the last ${\rm n}_{\rm test}$ snapshots $\{\mathbf{y}^{\rm
val}(n)\}=\{\mathbf{a}(n+1)\}$.
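The closed prediction loop of eq. (26) can be sketched as follows. Randomly generated toy matrices stand in for a trained ESN, and the state update mirrors the leaky-integrator form of eq. (II.2); the crude reservoir scaling is for the demo only:

```python
import numpy as np

def esn_autonomous(W_in, W_r, W_out, u0, n_steps, gamma=0.9):
    """Closed-loop prediction, eq. (26): every output is fed back as the next input."""
    s = np.zeros(W_r.shape[0])
    u = u0.copy()
    outputs = []
    for _ in range(n_steps):
        # leaky-integrator state update with tanh activation, cf. eq. (II.2)
        s = (1.0 - gamma) * s + gamma * np.tanh(W_r @ s + W_in @ np.concatenate(([1.0], u)))
        y = W_out @ np.concatenate(([1.0], u, s))  # extended state [b; u; s], cf. eq. (15)
        outputs.append(y)
        u = y                                      # redirect output to the input layer
    return np.array(outputs)

rng = np.random.default_rng(4)
n_r, n_in = 100, 10
W_in = rng.uniform(-0.5, 0.5, (n_r, 1 + n_in))
W_r = rng.uniform(-1.0, 1.0, (n_r, n_r)) * 0.95 / n_r   # crude scaling for the demo
W_out = rng.standard_normal((n_in, 1 + n_in + n_r)) * 0.01
pred = esn_autonomous(W_in, W_r, W_out, u0=rng.standard_normal(n_in), n_steps=700)
```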
### III.3 Training of ESN and choice of hyperparameters
We quantify the quality of ESN predictions with the set of hyperparameters
$\rm h=\{\gamma,\beta,\rm N_{\rm r},{\rm D},\varrho(W^{\rm r})\}$ by two
types of measures. The mean square error ${\rm MSE}_{\rm h}$ of ESN output to
the validation data
$\displaystyle{\rm MSE}_{\rm h}$ $\displaystyle=\frac{1}{{\rm n}_{\rm
test}}\sum\limits_{n=0}^{{\rm n}_{\rm test}}{\rm mse}(n)$ (27) where
$\displaystyle{\rm mse}(n)$ $\displaystyle=\frac{1}{\rm N_{\rm
POD}}\sum\limits_{i=1}^{{\rm N}_{\rm POD}}\left(y_{i}(n)-y^{\rm
val}_{i}(n)\right)^{2}$ (28)
is the mean square error at time step $n$ averaged over all ${\rm N}_{\rm
POD}$ modes. Additionally, we take the physically more relevant normalized
average relative error (NARE) as defined in Srinivasan et al. (2019) into
account. For the moist buoyancy field $M$, it is for example given by
$\displaystyle E_{\rm h}\left[\langle M\rangle_{x,t}\right]$
$\displaystyle=\frac{1}{C_{\max}}\int\limits_{0}^{1}\Big{|}\langle
M\rangle_{x,t}^{\rm ESN}(y)-\langle M\rangle_{x,t}^{\rm POD}(y)\Big{|}dy$ (29)
with $\displaystyle C_{\max}$
$\displaystyle=\frac{1}{2\max_{y\in[0,1]}(|\langle M\rangle_{x,t}^{\rm
POD}|)}.$ (30)
The superscript defines whether the field was reconstructed with $a_{i}(n)$
(POD) or $y_{i}(n)$ (ESN). It measures the integral deviation of the
reconstructed line-time average profile $\langle\cdot\rangle_{x,t}$ of a
specific field. We consider the three NAREs: $E_{\rm h}\left[\langle
M\rangle_{x,t}\right]$, $E_{\rm h}\left[\langle q_{l}^{\prime
2}\rangle_{x,t}\right]$ and $E_{\rm h}\left[\langle
v_{y}^{\prime}M^{\prime}\rangle_{x,t}\right]$, that is the NARE of the total
moist buoyancy field $M$, the liquid water content fluctuations
$q_{l}^{\prime}$, and the moist buoyancy flux fluctuations
$v_{y}^{\prime}M^{\prime}$.
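Both error measures translate directly into short functions. The sketch below uses toy profiles; the constant-offset example merely illustrates the normalization of eqs. (29)-(30), and the simple Riemann sum stands in for the integral:

```python
import numpy as np

def mse_per_step(Y_esn, Y_val):
    """mse(n) of eq. (28): squared error averaged over all N_POD modes per time step."""
    return np.mean((Y_esn - Y_val) ** 2, axis=1)   # arrays of shape (n_test, N_POD)

def nare(profile_esn, profile_pod, dy):
    """Normalized average relative error, eqs. (29)-(30), of a line-time averaged profile."""
    c_max = 1.0 / (2.0 * np.max(np.abs(profile_pod)))
    return c_max * np.sum(np.abs(profile_esn - profile_pod)) * dy

y = np.linspace(0.0, 1.0, 81)
pod_profile = np.sin(np.pi * y)          # toy mean profile with maximum 1
esn_profile = pod_profile + 0.01         # toy reconstruction with a constant offset
err = nare(esn_profile, pod_profile, dy=y[1] - y[0])
```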
$\gamma$ | $\beta$ | $D$ | $\varrho(W^{\rm r})$
---|---|---|---
$0.50$ | $5\cdot 10^{-4}$ | $0.1$ | $0.00$
$0.60$ | $5\cdot 10^{-3}$ | $0.2$ | $0.90$
$0.70$ | $5\cdot 10^{-2}$ | $0.3$ | $0.91$
$0.80$ | $5\cdot 10^{-1}$ | $0.4$ | $0.92$
$0.90$ | | $0.5$ | $0.93$
$0.95$ | | $0.6$ | $0.94$
| | $0.7$ | $0.95$
| | | $0.96$
| | | $0.97$
| | | $0.98$
| | | $0.99$
| | | $1.00$
Table 1: Range of the four hyperparameters upon which a grid search was
conducted. For each of the $1848$ combinations, an ESN was trained and
validated with the training and validation data set. The reservoir dimension
$N_{\rm r}$ was set to $4000$ for all studies. The MSE and NARE measures were
computed and evaluated to find the optimal parameter set $h^{*}$.
$\gamma^{*}$ | $\beta^{*}$ | $N_{\rm r}^{*}$ | $D^{*}$ | $\varrho(W^{\rm r})^{*}$
---|---|---|---|---
$0.9$ | $5\cdot 10^{-1}$ | $4000$ | $0.1$ | $1.0$
MSE($\rm h^{*}$) | $E_{\rm h}\left[\langle M\rangle_{x,t}\right]$ | $E_{\rm h}\left[\langle q_{l}^{\prime 2}\rangle_{x,t}\right]$ | $E_{\rm h}\left[\langle v_{y}^{\prime}M^{\prime}\rangle_{x,t}\right]$
---|---|---|---
$8.18\cdot 10^{-4}$ | $0.032$% | $0.033$% | $4.5$%
Table 2: Hyperparameter set $h^{*}$ that was chosen for the ESN setup and the
associated errors, which this ESN run has produced. The results of the
reservoir with these listed hyperparameters are presented in section IV.
Figure 5: Representative profiles taken from the error landscape for the
leaking rate $\gamma$ (a,b,c) and the spectral radius $\varrho(W^{\rm r})$
(d,e,f). The data are obtained by a grid search study (see Table 1). We find a
systematic dependence for the two quantities in this parameter domain. Note
the different magnitudes between single quantity-NARE in (b,e) and multiple
quantity-NARE in panels (c,f). The legends in (c) and (f) hold also for panels
(a,b) and (d,e), respectively.
A grid search for the four quantities $\gamma$, $\beta$, $D$, $\varrho(W^{\rm
r})$ was conducted in a suitable range (see Table 1), based on the results in
Pandey and Schumacher (2020). The reservoir size was fixed to $N_{\rm r}=4000$
for all runs. The resulting ${\rm MSE}_{h}$ and NAREs were computed to find an
adequate parameter setting. Figure 5 shows ${\rm MSE}_{h}$, $E_{\rm
h}\left[\langle q_{l}^{\prime 2}\rangle_{x,t}\right]$, $E_{\rm h}\left[\langle
v_{y}^{\prime}M^{\prime}\rangle_{x,t}\right]$ and their dependence on
$\varrho\left(W^{\rm r}\right)$ and $\gamma$. We detect a systematic
dependence on both the spectral radius and the leaking rate, even when
slightly changing a third parameter (see legends). Interestingly, as both
parameters are increased, the mean square error increases as well, while the
NAREs either decrease or pass through a local minimum. We emphasize that our grid search is only an
excerpt of the much bigger error landscape. Further, we did not average over
multiple random realizations which would be the basis for a more rigorous
discussion of parameter dependencies. Nevertheless, a starting point of the
discussion of the hyperparameter dependencies is as follows: as the spectral
radius grows, the magnitude of the argument of the hyperbolic tangent builds
up too. This will saturate the activation function, which in turn will act in
an increasingly binary way since
$\lim\limits_{x\rightarrow\pm\infty}\tanh(x)=\pm 1$. In this limit of a fully
saturated activation function, eq. (II.2) simplifies to
$\displaystyle\mathbf{s}(n)\simeq(1-\gamma)\mathbf{s}(n-1)\pm\gamma\mathbf{1},$
(31)
where $\mathbf{1}=(1,1,...,1)^{T}\in\mathbb{R}^{N_{\rm r}}$. This corresponds
to a linear dependence of each reservoir state on its last instance plus the
constant leaking rate $\gamma$ with stochastically changing sign, depending on
the randomly generated weight matrices. As the leaking rate is increased
towards unity, the memory effect is lost as well and the reservoir state is
basically updated by the constant last term in (31). The resulting reservoir
output will lead to increased mean square deviations from the varying ground
truth signal. We thus speculate that such activation saturation is already
reached for several reservoir state components at $\varrho(W^{\rm r})\lesssim 1$,
which in turn contribute to the increasing ${\rm MSE}_{\rm h}$ in panel (d) of
Fig. 5.
Finally, we chose the hyperparameter set $h^{*}$, listed in Table 2 as it
results in a minimum of $E_{\rm h}\left[\langle
v_{y}^{\prime}M^{\prime}\rangle_{x,t}\right]$. We settle on this measure
because it depends on two ESN estimates and is therefore more sensitive to
prediction errors. Quantities like
$E_{\rm h}\left[\langle q_{l}^{\prime 2}\rangle_{x,t}\right]$, which depend on
only one ESN estimate, exhibit small values for many parameter settings (see
Fig. 5 (b),(e)).
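Schematically, the grid search reduces to a minimization over the Cartesian product of the parameter lists. In the sketch below, `validation_error` is a hypothetical stand-in for "train an ESN with hyperparameters h and return its validation error": it is a fabricated toy surface whose minimum has been placed by hand at the $h^{*}$ of Table 2, purely to illustrate the selection step:

```python
import itertools
from math import log10

# Hypothetical toy error surface; minimum placed by construction at h* of Table 2.
# A real run would train and validate an ESN for each hyperparameter combination.
def validation_error(gamma, beta, density, rho):
    return (gamma - 0.9) ** 2 + (log10(beta) + 0.3) ** 2 + density + (rho - 1.0) ** 2

grid = itertools.product(
    [0.5, 0.7, 0.9, 0.95],   # leaking rate gamma
    [5e-4, 5e-2, 5e-1],      # ridge parameter beta
    [0.1, 0.3, 0.5],         # reservoir density D
    [0.90, 0.95, 1.00],      # spectral radius
)
best = min(grid, key=lambda h: validation_error(*h))
print(best)  # (0.9, 0.5, 0.1, 1.0)
```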
## IV Results for the moist RBC case
Figure 6: Time evolution of the POD time coefficients $a_{i}(t)$. The gray
shaded area marks the training phase (reservoir output not shown). At the end
of the training phase the prediction phase starts. The curves labeled POD
stand for the ground truth of the evolution of the coefficient, while those
labeled ESN are the network predictions. Panels (f)–(j) show the initial part
of the forecast and correspond to (a)–(e). Figure 7: Fourier spectrum
$\|\mathcal{F}(a_{i})\|^{2}$ of the POD time coefficients and of the
corresponding reservoir prediction. Only the first 200 frequencies are shown.
After the ESN receives the initial input $\mathbf{u}(0)=\mathbf{a}(0)$, the
autonomous predictor (see eq. (26)) produces estimates for the POD time
coefficients which can be seen in Fig. 6. From the predicted coefficients and
the known POD modes we can reconstruct all fields and compare these with the
ground truth which is the POD expansion of the test data.
Deviations for the first ten coefficients are detected while predictions for
subsequent POD coefficients associated with less energetic modes agree with
the values of the validation data for the first few time steps. Nevertheless,
the ESN manages to produce a time series whose temporal frequencies match
those of the actual data, although it shows larger deviations when reproducing
the trend of the slowly varying first coefficients.
The frequency spectra of the ESN predictions for the coefficients $a_{i}(t)$
in comparison to those of the test data are shown in Figure 7 for 5 different
cases. While the spectral values of the first 100 frequencies are captured
well by the ESN, the higher frequency part starts to deviate in most cases. As
discussed already above, this might be due to a simple RC model architecture,
which does not differentiate between significantly different time scales that
are always present in a turbulent flow. Nevertheless, the result underlines
that the ESN is able to learn the time scales of the most significant POD
coefficients.
Figure 8(a) shows the mean square error ${\rm mse}(n)$ over all modes, as
defined in (28), as a function of time steps after training. The mean error
initially rises and then saturates. The fact that errors increase stems from
the coupling scheme of output to input; small errors will inadvertently be
amplified by the nonlinearity of the activation function in (II.2). Figure
8(b) shows the deviations $(y_{i}^{\rm val}(n)-y_{i}(n))$ for $i=1,10,50,100$.
Figure 8: ESN prediction error: (a) Mean square error over all modes ${\rm
mse}(n)$, see eq. (28), as a function of time steps $n$ after the training
phase. (b) The difference $(y_{i}^{\rm val}(n)-y_{i}(n))$, i.e. the deviation
of the ESN prediction $y_{i}(n)$ of $a_{i}$ at time steps $n$ after training.
Figures 9(a)–(c) show the three weight matrices of the RC model. As described
in section II.2, the input and reservoir weights are initialized randomly and
left unchanged. With a reservoir density of $D=0.2$ the reservoir matrix
$W^{\rm r}$ is a sparse matrix containing many weights equal to zero. Note
further, that the fitted output weights $W^{\rm out}$ have low magnitudes in
comparison to the entries of the input matrix $W^{\rm in}$. This is adjusted
according to the number of reservoir nodes $N_{\rm r}$ and the magnitude of
the training data. Moreover, the magnitudes of the first $1+N_{\rm in}$ columns
are close to zero. This indicates that the contributions of the output bias
$b$ and the current input $\mathbf{u}$ to the reservoir output (see eq. (15))
are small.
Figure 10 (a,b) shows the training and prediction phase dynamics of three
exemplary hidden reservoir state components $s_{1},s_{1000}$, and $s_{4000}$.
As the first $46$ time steps of the training data were used to initialize the
reservoir state, the last $n_{\rm train}-46$ time steps are shown in panel (a)
only. During both phases the individual time series $s_{i}$ are confined to a
certain subrange of the whole range $\left[-1,1\right]$. They have comparable
amplitudes. Nevertheless, the prediction phase time series differ from
their training phase counterparts by slightly smoother variations with respect
to time, see also the corresponding Fourier frequency spectra in panels (c,d)
of the same figure. This might be explained by the fact that the states for
$n\geq 0$ experience the learned output matrix $W^{\rm out}_{\ast}$ via the
closed feedback loop. The states in the training phase, $n<0$, on the other
hand, neither experience the fitted output matrix, nor is the last output
fed back to the reservoir. We suspect that this has a significant impact on
the evolution of the $s_{i}$.
Figure 9: Reservoir weight matrices: (a) Input weight matrix $W^{\rm in}$
which is a $4000\times 151$ matrix in our case. (b) Reservoir weight matrix
$W^{\rm r}$ which is a $4000\times 4000$ matrix in the present case. All
weights that are unequal to zero are marked as black dots. (c) Optimized
output weight matrix $W_{\ast}^{\rm out}$ which is a $150\times 4151$ matrix.
The aspect ratios of $W^{\rm in}$ and $W_{\ast}^{\rm out}$ have been adjusted
for illustration purposes. Figure 10: Reservoir state components
$s_{1},s_{1000},s_{4000}$ versus time step $n$ during (a) training and (b)
prediction phases. Panels (c) and (d) show their corresponding Fourier spectra
$\|\mathcal{F}(s_{i})\|^{2}$. Note that in (a), the first $46$ time steps are
not shown, as they were used for the initialization of the reservoir.
We now take a look at the reconstruction of the three fields $v_{x}(x,y)$,
$v_{y}(x,y)$ and $M(x,y)$ with the reservoir outputs as temporal coefficients
to see whether large-scale features are captured correctly. An instantaneous
snapshot at the half-time of the prediction phase at time step $n=350$ is
depicted in Fig. 11. We apply a POD-mode composition (24) to obtain the
fluctuation fields $v_{x}^{\prime}$, $v_{y}^{\prime}$ and $M^{\prime}$ from
the reservoir outputs $a_{p}(t)$. The mean profiles $\langle v_{x}\rangle_{t}$
and $\langle v_{y}\rangle_{t}$ and $\langle M\rangle_{t}$ are subsequently
added to obtain the full fields, see eqns. (20) and (23). The resulting ESN
predictions are displayed in panels (b), (d), and (f). For reference, the
validation (POD) fields are shown in panels (a), (c), and (e).
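Per snapshot, this reconstruction is a single matrix-vector product: the truncated POD composition (24) plus the temporal mean. A sketch with random placeholders for the modes, predicted coefficients, and mean field (toy grid sizes following the interpolated resolution):

```python
import numpy as np

rng = np.random.default_rng(3)
ny, nx, n_pod = 80, 640, 150
Phi = rng.standard_normal((ny * nx, n_pod))   # spatial POD modes, fields flattened
a_pred = rng.standard_normal(n_pod)           # ESN-predicted time coefficients y_p(n)
mean_field = rng.standard_normal(ny * nx)     # temporal mean <g>_t from eqs. (19)-(23)

# POD-mode composition, eq. (24), truncated to N_POD modes, plus the temporal mean.
field = (mean_field + Phi @ a_pred).reshape(ny, nx)
```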
The horizontal velocity field $v_{x}(x,y)$ in (a) and (b) shows some
differences in the structure of the right- and left-going fluid patches, but
the large-scale structure as a whole is in surprisingly good qualitative
agreement. The structure of vertical velocity field $v_{y}(x,y)$ in panel (c)
and (d) does not show a systematic distinction, even though slight differences
in shape and maximum values of up- and downdrafts are detectable. Finally the
moist buoyancy field $M(x,y)$ in (f) does not fully reproduce all moist plumes
that detach from the bottom plate, see validation field in (e). Nevertheless
the predicted time coefficients lead to reconstructed fields that contain the
same features as the original fields.
Figure 11: Instantaneous snapshot of the fields (a, b) $v_{x}(x,y)$, (c, d)
$v_{y}(x,y)$ and (e, f) $M(x,y)$ at time step $n=350$ after the training
phase. Panels (a, c, e) are validation data from the POD and (b, d, f) the ESN
output data. The fields were reconstructed using the first 150 $a_{p}(n)$
(POD) and the predictions $y_{p}(n)$ (ESN). Here, $n$ is a discrete time step.
The ESN predictions deviate locally from the POD fields, but capture large-
scale features of the flow. The aspect ratio has been adjusted again for
illustration purposes. The corresponding colorbars can be seen on the right.
To get a better grasp on the time evolution of the error in the field
reconstruction, we compute a normalized field deviation at constant height $y$
of the vertical velocity field component in Fig. 12 which is given by
$\displaystyle{\rm Err}(x,n)=\frac{|v_{y}^{\prime\rm
POD}(x,n)-v_{y}^{\prime\rm ESN}(x,n)|}{\max_{x,y,n}\left(v_{y}^{\prime\rm
POD}\right)}\Bigg{|}_{y={\rm const}}$ (32)
where the superscript defines whether the field was reconstructed with
$a_{i}(n)$ (POD) or $y_{i}(n)$ (ESN). We find that the fast growing errors in
the time coefficients lead to fast amplifications of local field errors.
Furthermore, different horizontal and vertical positions in the domain show
different error magnitudes.
Figure 12: Time evolution of the prediction error ${\rm Err}(x,n)$, which is
given by eq. (32), at specified height $y$ (see top). As time progresses, the
errors start to grow in magnitude. Different positions $(x,y)$ in the domain
give rise to different magnitudes of the deviation.
We now discuss the ability of the ESN to reproduce the low-order statistical
properties of the turbulent flow. This is done by comparison of vertical line-
time average profiles $\langle\cdot\rangle_{x,t}(y)$. The averages are taken
along $x$-direction in combination with respect to time $t$. Such profiles are
for example of interest in larger-scale atmospheric models for the
parameterization of sub-grid scale transport Grabowski (2001); Khairoutdinov
and Randall (2001). Figure 13 depicts the profiles as a function of the domain
height $y$. The actual profiles obtained by the original DNS are plotted as a
dash-dotted, the POD reconstruction as a solid and the reconstruction from the
ESN outputs as a dashed line. Figure 13(a) shows the moist buoyancy $M$. Here
the time mean $\langle M\rangle_{t}$ was added to see whether the
reconstructed POD and ESN fields would deviate from the full DNS data. We
observe an excellent agreement and find that the ESN produces correct
fluctuations which preserve this profile. The fluctuations of the vertical
moist buoyancy flux $\langle v_{y}^{\prime}M^{\prime}\rangle$ are shown in
Fig. 13(b). Again, an excellent agreement between the curves in the bulk of
the domain and small deviations in the boundary layers only are observable,
despite the fact that this quantity is much more susceptible to errors since
it consists of two ESN estimates. Finally the fluctuations from the liquid
water content and the liquid water flux, further derived fields, are shown in
13(c) and (d), respectively. Here we see that POD and ESN curves match
throughout the whole of the domain. We thus conclude that the ESN is able to
reproduce essential low-order statistics very well.
For reference, we also compare the test data with the output of an LSTM
network. The network parameters are as follows: the number of hidden states is
300 and the number of stacked LSTM layers 3. The loss function is again the
mean-squared error, the optimization applies the method of adaptive moments,
and the learning rate is $10^{-3}$ Goodfellow et al. (2016). A training of the
network proceeds over 1000 epochs. We can conclude that the LSTM performs
similarly well as the ESN even though the reproduced profiles deviate a bit
for both fluxes in the center of the convection layer.
Figure 13: Line-time averaged vertical profiles $\langle\cdot\rangle_{x,t}$ of (a) the full moist buoyancy $M$, (b) the vertical moist buoyancy flux $v_{y}^{\prime}M^{\prime}$, (c) the fluctuations of the liquid water content $q_{l}$ and (d) the vertical liquid water flux $v_{y}^{\prime}q_{l}^{\prime}$. The time averages for DNS (dash-dotted) and POD (solid) were computed over the whole range of 1400 time steps, while for the ESN (dashed) and LSTM (dotted) only the range of the prediction phase ($700$ time steps) was taken into account.
$\langle{\rm CC}^{\rm DNS}\rangle_{t}$ | $\langle q_{l}^{\rm DNS}\geq 0\rangle_{x,y,t}\cdot 10^{3}$ | $\langle{\rm CC}^{\rm POD}\rangle_{t}$ | $\langle q_{l}^{\rm POD}\geq 0\rangle_{x,y,t}\cdot 10^{3}$ | $\langle{\rm CC}^{\rm ESN}\rangle_{t}$ | $\langle q_{l}^{\rm ESN}\geq 0\rangle_{x,y,t}\cdot 10^{3}$
---|---|---|---|---|---
$89.43$% | $3.24$ | $83.25$% | $2.72$ | $82.49$% | $2.72$
Table 3: Time mean average $\langle\rm CC\rangle_{t}$ and $\langle q_{l}\geq
0\rangle_{x,y,t}$ of the cloud cover CC and the volume average of liquid water
for the DNS, POD and ESN case. See also Fig. 14. Figure 14: Clouds, i.e.
$q_{l}(x,y,n)\geq 0$ at time step $n=350$ of (a) the POD and the (b) ESN
prediction. The liquid water content $q_{l}$ differs in the magnitude and
shape of its envelope, the isosurface $q_{l}=0$. (c) shows the cloud cover as
defined in eq. (33) computed for the DNS data (brown), POD data (green) and
the ESN predictions (blue). The POD approximation does not capture all of the
original cloud cover; the value of the DNS exceeds the one of the POD by a few
per cent. The cloud cover prediction of the ESN itself deviates by a few per
cent in comparison to that of the POD. (d) shows the change of the volume
average of positive liquid water content $\langle q_{l}\geq 0\rangle_{x,y}$
with time.
Motivated by these results, we now investigate whether quantities such as the
cloud cover can be also modeled by the ESN. We define the cloud cover CC of
the two-dimensional domain $N_{x}^{\prime}\times N_{y}^{\prime}$ as the ratio
of the number of vertical grid lines $N_{q_{l}>0}$ that contain at least one
mesh point with $q_{l}>0$ along their vertical line of sight and the total
number of vertical grid lines, $N_{x}^{\prime}$. Thus follows
${\rm CC}=\frac{N_{q_{l}>0}}{N_{x}^{\prime}}\times 100\%.$ (33)
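Eq. (33) translates directly into a column-wise reduction. The sketch below assumes the liquid water field is stored as an $(N_{x}^{\prime}, N_{y}^{\prime})$ array; the toy field is constructed by hand so the expected result is obvious:

```python
import numpy as np

def cloud_cover(q_l):
    """Cloud cover CC of eq. (33): per cent of vertical grid lines that contain
    at least one mesh point with q_l > 0 along their line of sight."""
    covered = np.any(q_l > 0.0, axis=1)       # q_l assumed to have shape (N_x', N_y')
    return 100.0 * covered.sum() / q_l.shape[0]

q_l = np.full((640, 80), -1.0)                # toy field: fully unsaturated
q_l[:320, 40] = 0.5                           # make half of the columns "cloudy"
print(cloud_cover(q_l))                       # 50.0
```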
The time average $\langle{\rm CC}\rangle_{t}$ and volume-time average of
positive liquid water content $\langle q_{l}\geq 0\rangle_{x,y,t}$ are given
in Table 3. The truncation to the first $\rm N_{\rm POD}$ POD modes leads to a
loss of about $6.9\%$ of the original DNS CC and a $16.1\%$ loss of $\langle
q_{l}\geq 0\rangle_{x,y,t}$. We find good agreement between ESN estimate and
POD results. In Fig. 14, the POD and ESN results of the cloud distribution at
time step $n=350$ are shown. Despite small discrepancies in the local
distribution of the liquid water content and the shape of the cloud
boundaries, i.e. the isosurfaces $q_{l}=0$, the overall distribution is
comparable. In panels (c) and (d) of the same figure, the time evolution of
cloud cover and volume-time average positive liquid water content are
displayed. While the predicted cloud cover does not deviate too much from the
reference case, the variations in the amount of liquid water are less well
reproduced.
## V Summary and Conclusion
In the present work, we have applied a machine learning algorithm to two-
dimensional turbulent moist Rayleigh-Bénard convection, in order to infer the
large-scale evolution and low-order statistics of convective flow undergoing
phase changes. We apply a specific learning scheme for recurrent neural
networks, called echo state network, which has been applied for learning the
dynamics of nonlinear systems, such as turbulent shear flows. Here, we test
its capabilities successfully by fitting a reservoir to complex convection
flow dynamics which result from the interaction of turbulence with the
nonlinear thermodynamics originating from the first-order phase changes
between vapor and liquid water, as present for example in atmospheric clouds.
We therefore generate comprehensive data by means of a simple moist convection
model in the Boussinesq framework, the moist Rayleigh-Bénard convection model.
We obtain moist convection data from direct numerical simulations. As the 2d
data set still has a large number of degrees of freedom and therefore
cannot be passed directly to the echo state network, we introduce the POD as
an additional dimensionality reduction step. We therefore decompose the data
into a data-driven, spatially dependent basis with temporal coefficients,
where the latter are fed to the reservoir. We truncate the POD to its most
energetic modes and coefficients, reducing the degrees of freedom of the
dynamical system at hand considerably. This reduced data set serves as the
ground truth for the training of the echo state network as well as validation
of its outputs. The network setup is tuned by conducting a comprehensive grid
search of important hyperparameters. By coupling the output of the trained
network back to its input, the autonomous system estimates the evolution of
the temporal coefficients. Reconstructing the velocity and thermodynamic
fields from these estimates allows us to check whether the dynamics have been
learned. We find an excellent agreement of the vertical profiles of moist
buoyancy, vertical moist buoyancy transport as well as liquid water content.
Furthermore, we report a good agreement of essential parameters in moist
convection such as the fraction of clouds covering the two-dimensional
atmosphere, as well as its liquid water content.
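The training and prediction loop summarized above (drive the reservoir with the truncated POD coefficients, fit a linear read-out, then close the loop) can be condensed into a short numerical sketch. This is a minimal illustration under stated assumptions, not the implementation used in this work: the sinusoidal series stands in for the POD time coefficients, and the reservoir size, spectral radius, and ridge parameter below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the truncated POD time coefficients.
T, n_modes = 400, 4
t = np.linspace(0.0, 8.0 * np.pi, T)
data = np.stack([np.sin((k + 1) * 0.5 * t) for k in range(n_modes)], axis=1)

# Reservoir: fixed random input and recurrent weights, rescaled so the
# recurrent matrix has spectral radius below one (echo-state regime).
N = 200
W_in = rng.uniform(-0.5, 0.5, size=(N, n_modes))
W = rng.uniform(-0.5, 0.5, size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def advance(r, u):
    """One reservoir update driven by input u."""
    return np.tanh(W @ r + W_in @ u)

# Teacher-forced phase: drive with the data, collect reservoir states.
r = np.zeros(N)
states = []
for u in data[:-1]:
    r = advance(r, u)
    states.append(r)
states = np.array(states)          # (T-1, N)
targets = data[1:]                 # one-step-ahead targets

# Ridge regression for the read-out -- the only trained part of the ESN.
beta = 1e-6
W_out = np.linalg.solve(states.T @ states + beta * np.eye(N),
                        states.T @ targets).T   # (n_modes, N)

# Autonomous phase: feed the prediction back as the next input.
preds = []
u = data[-1]
for _ in range(100):
    r = advance(r, u)
    u = W_out @ r
    preds.append(u)
preds = np.array(preds)
print(preds.shape)  # (100, 4)
```

Only the linear read-out `W_out` is trained; the input and recurrent weights stay fixed, which is what makes the echo state network cheap to fit compared to fully trained recurrent networks.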
This first application of our reservoir computing model to moist Rayleigh-Bénard
convection shows its potential to infer low-order statistics from a set of
training data. Though the reservoir output quickly diverges from the actual
system trajectory, time-averaged quantities are robustly reproduced. This
result might seem trivial at first glance, yet the reservoir produces velocity
and thermodynamic fluctuation fields which do not deviate too strongly from
those of the original flow, even for combined quantities such as the liquid
water flux across the layer. This indicates that the present echo state
network did not just learn the statistics, but the dynamical system itself.
Our additional comparison with an LSTM network gives a similar outcome. A more
detailed comparison of both RNN implementations has, however, to be left as
future work.
Our approach can be considered as a first step of applying reservoir computing
as a machine learning-based parameterization. General circulation models
already use multi-scale modeling methods where small-scale resolving models
interact with the large-scale motion by their low-order statistics,
essentially relaxing one to each other, e.g. in superparametrizations
Grabowski and Smolarkiewicz (1999); Grabowski (2001); Khairoutdinov and
Randall (2001). An echo state network can serve as a simple dynamical
substitute for the unresolved subgrid scale transport.
Even though the present results are promising, the development is still in its
infancy. We note that, for example, the mathematical foundations of reservoir
computing, which could provide deeper insights into the role of the
hyperparameters in the prediction quality, are still mostly unexplored.
Moreover, we expect that for an extension of the ESN approach to a
three-dimensional flow, the data reduction step via POD will not suffice to cope
with the large amount of simulation data. For this scenario, one might propose
the use of a convolutional autoencoder/decoder neural network in
combination with the RC model. Furthermore, we mention that the machine
learning algorithm is supposed here to learn dynamics of a nonlinear system
which incorporates processes on different spatial and temporal scales. This
circumstance is so far not fully captured by the network architecture.
Particularly for turbulence, this might imply that the different spatial
scales which interact with each other and exchange their energy, could be
trained separately allowing for a subsequent coupling. The exploration of such
ideas is currently under way and will be reported elsewhere.
## Acknowledgments
This work is supported by the project “DeepTurb – Deep Learning in and of
Turbulence”, which is funded by the Carl Zeiss Foundation. The authors
gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-
centre.eu) for funding this project by providing computing time through the
John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS
at Jülich Supercomputing Centre (JSC). We thank Martina Hentschel, Erich
Runge, and Sandeep Pandey for helpful comments.
## References
* Brenner et al. (2019) M. P. Brenner, J. D. Eldredge, and J. B. Freund, Phys. Rev. Fluids 4, 100501 (2019).
* Brunton et al. (2020) S. L. Brunton, B. R. Noack, and P. Koumoutsakos, Annu. Rev. Fluid Mech. 52, 477 (2020).
* Pandey et al. (2020) S. Pandey, J. Schumacher, and K. R. Sreenivasan, J. Turbul. 21, 567 (2020).
* Ling et al. (2016) J. Ling, A. Kurzawski, and J. Templeton, J. Fluid Mech. 807, 155 (2016).
* Duraisamy et al. (2019) K. Duraisamy, G. Iaccarino, and H. Xiao, Annu. Rev. Fluid Mech. 51, 357 (2019).
* Fonda et al. (2019) E. Fonda, A. Pandey, J. Schumacher, and K. R. Sreenivasan, Proc. Natl. Acad. Sci. 116, 8667 (2019).
* Brenowitz and Bretherton (2018) N. D. Brenowitz and C. S. Bretherton, Geophys. Res. Lett. 45, 6289 (2018).
* O’Gorman and Dwyer (2018) P. A. O’Gorman and J. G. Dwyer, J. Adv. Model Earth Sy. 10, 2548 (2018).
* Gentine et al. (2018) P. Gentine, M. Pritchard, S. Rasp, G. Reinaudi, and G. Yacalis, Geophys. Res. Lett. 45, 5742 (2018).
* Moller et al. (2020) S. Moller, C. Resagk, and C. Cierpka, Exp. Fluids 61, 111 (2020).
* Goodfellow et al. (2016) I. Goodfellow, Y. Bengio, and A. Courville, _Deep Learning_ (MIT Press, 2016).
  * Schneider et al. (2017) T. Schneider, S. Lan, A. Stuart, and J. Teixeira, Geophys. Res. Lett. 44, 12396 (2017).
* Mohan et al. (2020) A. Mohan, D. Tretiak, M. Chertkov, and D. Livescu, J. Turbul. 21, 525 (2020).
* Stevens (2005) B. Stevens, Annu. Rev. Earth Planet. Sci. 33, 605 (2005).
* Mellado (2017) J. P. Mellado, Annu. Rev. Fluid Mech. 49, 145 (2017).
* Hochreiter and Schmidhuber (1997) S. Hochreiter and J. Schmidhuber, Neural Comput. 9, 1735 (1997).
* Lukoševičius and Jaeger (2009) M. Lukoševičius and H. Jaeger, Comp. Sci. Rev. 3, 127 (2009).
* Tanaka et al. (2019) G. Tanaka, T. Yamane, J. B. Héroux, R. Nakane, N. Kanazawa, S. Takeda, H. Numata, D. Nakano, and A. Hirose, Neural Netw. 115, 100 (2019).
* Jaeger and Haas (2004) H. Jaeger and H. Haas, Science 304, 78 (2004).
* Yildiz et al. (2012) I. B. Yildiz, H. Jaeger, and S. J. Kiebel, Neural Netw. 35, 1 (2012).
* Pandey and Schumacher (2020) S. Pandey and J. Schumacher, Phys. Rev. Fluids 5, 113506 (2020).
  * Grabowski and Smolarkiewicz (1999) W. W. Grabowski and P. K. Smolarkiewicz, Physica D 133, 171 (1999).
* Grabowski (2001) W. W. Grabowski, J. Atm. Sci. 58, 978 (2001).
* Vlachas et al. (2020) P. R. Vlachas, J. Pathak, B. R. Hunt, T. P. Sapsis, G. M., E. Ott, and P. Koumoutsakos, Neural Netw. 126, 191 (2020).
* Lu et al. (2017) Z. Lu, J. Pathak, B. Hunt, M. Girvan, R. Brockett, and E. Ott, Chaos 27 (2017).
* Pathak et al. (2018) J. Pathak, B. Hunt, M. Girvan, Z. Lu, and E. Ott, Phys. Rev. Lett. 120, 024102 (2018).
* Holmes et al. (2012) P. Holmes, J. L. Lumley, G. Berkooz, and C. W. Rowley, _Turbulence, Coherent Structures, Dynamical Systems and Symmetry_ , Cambridge Monographs on Mechanics (Cambridge University Press, Cambridge, UK, 2012), 2nd ed.
* Srinivasan et al. (2019) P. A. Srinivasan, L. Guastoni, H. Azizpour, P. Schlatter, and R. Vinuesa, Phys. Rev. Fluids 4, 054603 (2019).
* Nguyen et al. (2020) T. Nguyen, R. Baraniuk, A. Bertozzi, S. Osher, and B. Wang, in _Advances in Neural Information Processing Systems_ , edited by H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (Curran Associates, Inc., 2020), vol. 33, pp. 1924–1936.
* Fischer (1997) P. F. Fischer, J. Comput. Phys. 133, 84 (1997).
* Scheel et al. (2013) J. D. Scheel, M. S. Emran, and J. Schumacher, New J. Phys. 15, 113063 (2013).
* nek (2017) _nek5000 version 17.0_ (2017), URL https://nek5000.mcs.anl.gov.
* Pauluis and Schumacher (2010) O. Pauluis and J. Schumacher, Commun. Math. Sci. 8, 295 (2010).
* Weidauer et al. (2010) T. Weidauer, O. Pauluis, and J. Schumacher, New J. of Phys. 12, 105002 (2010).
* Schumacher and Pauluis (2010) J. Schumacher and O. Pauluis, J. Fluid Mech. 648, 509–519 (2010).
* Bretherton (1987) C. S. Bretherton, J. Atmos. Sci. 44, 1809 (1987).
* Bretherton (1988) C. S. Bretherton, J. Atmos. Sci. 45, 2391 (1988).
* Smith and Stechmann (2017) L. M. Smith and S. N. Stechmann, J. Atmos. Sci. 74, 3285 (2017).
* Vallis et al. (2019) G. K. Vallis, D. J. Parker, and S. M. Tobias, J. Fluid Mech. 862, 162–199 (2019).
* Abma et al. (2013) D. Abma, T. Heus, and J. P. Mellado, J. Atmos. Sci. 70, 2088 (2013).
  * Emanuel (1994) K. A. Emanuel, _Atmospheric Convection_ (Oxford University Press, 1994).
* Pauluis and Schumacher (2011) O. Pauluis and J. Schumacher, Proc. Natl. Acad. Sci. USA 108, 12623 (2011).
* Jaeger (2002) H. Jaeger, GMD-Forschungszentrum Informationstechnik (2002).
* Doya (1992) K. Doya, [Proceedings] 1992 IEEE International Symposium on Circuits and Systems 6, 2777 (1992).
* Jaeger (2001) H. Jaeger, GMD-Forschungszentrum Informationstechnik Technical Report 148 (2001).
* Maass et al. (2002) W. Maass, T. Natschläger, and H. Markram, Neural Computation 14, 2531 (2002).
* Lin et al. (2008) X. Lin, Z. Yang, and Y. Song, in _Advances in Knowledge Discovery and Data Mining_ (Springer Berlin Heidelberg, Berlin, Heidelberg, 2008), pp. 932–937.
* Morando et al. (2013) S. Morando, S. Jemei, R. Gouriveau, N. Zerhouni, and D. Hissel, in _IECON 2013 - 39th Annual Conference of the IEEE Industrial Electronics Society_ (2013), pp. 1632–1637.
* Pathak et al. (2017) J. Pathak, Z. Lu, B. R. Hunt, M. Girvan, and E. Ott, Chaos 27, 121102 (2017).
* Lukoševičius (2012) M. Lukoševičius, LNCS 7700, 659 (2012).
* Strauss et al. (2012) T. Strauss, W. Wustlich, and R. Labahn, Neural Comput. 24, 3246 (2012).
* Sirovich (1987) L. Sirovich, Q. Appl. Math. XLV, 561 (1987).
* Khairoutdinov and Randall (2001) M. F. Khairoutdinov and D. A. Randall, Geophys. Res. Lett. 28, 3617 (2001).
LATTICE ISOMORPHISMS OF LEIBNIZ ALGEBRAS
DAVID A. TOWERS
Department of Mathematics and Statistics
Lancaster University
Lancaster LA1 4YF
England
<EMAIL_ADDRESS>
###### Abstract
Leibniz algebras are a non-anticommutative version of Lie algebras. They play
an important role in different areas of mathematics and physics and have
attracted much attention over the last thirty years. In this paper we
investigate whether conditions such as being a Lie algebra, cyclic, simple,
semisimple, solvable, supersolvable or nilpotent in such an algebra are
preserved by lattice isomorphisms.
Mathematics Subject Classification 2000: 17B05, 17B20, 17B30, 17B50.
Key Words and Phrases: Lie algebras, Leibniz algebras, cyclic, simple,
semisimple, solvable, supersolvable, nilpotent, lattice isomorphism.
## 1 Introduction
An algebra $L$ over a field $F$ is called a Leibniz algebra if, for every
$x,y,z\in L$, we have
$[x,[y,z]]=[[x,y],z]-[[x,z],y].$
In other words, the right multiplication operator $R_{x}:L\rightarrow
L:y\mapsto[y,x]$ is a derivation of $L$. As a result such algebras are
sometimes called right Leibniz algebras, and there is a corresponding notion
of left Leibniz algebras, which satisfy
$[x,[y,z]]=[[x,y],z]+[y,[x,z]].$
Clearly the opposite of a right (left) Leibniz algebra is a left (right)
Leibniz algebra, so, in most situations, it does not matter which definition
we use. Leibniz algebras which satisfy both the right and left identities are
sometimes called symmetric Leibniz algebras.
Every Lie algebra is a Leibniz algebra and every Leibniz algebra satisfying
$[x,x]=0$ for every element is a Lie algebra. They were introduced in 1965 by
Bloh ([7]) who called them $D$-algebras, though they attracted more widespread
interest, and acquired their current name, through work by Loday and
Pirashvili ([18], [19]). They have natural connections to a variety of areas,
including algebraic $K$-theory, classical algebraic topology, differential
geometry, homological algebra, loop spaces, noncommutative geometry and
physics. A number of structural results have been obtained as analogues of
corresponding results in Lie algebras.
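That every Lie algebra satisfies the (right) Leibniz identity is a one-line computation from the Jacobi identity; we record it here for completeness:

```latex
% Jacobi identity: [[x,y],z] + [[y,z],x] + [[z,x],y] = 0.
% By anticommutativity, [x,[y,z]] = -[[y,z],x], so
[x,[y,z]] = [[x,y],z] + [[z,x],y] = [[x,y],z] - [[x,z],y].
```

Conversely, as stated above, a Leibniz algebra with $[x,x]=0$ for every element is a Lie algebra.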
The Leibniz kernel is the set $I=$ span$\\{x^{2}:x\in L\\}$. Then $I$ is the
smallest ideal of $L$ such that $L/I$ is a Lie algebra. Also $[L,I]=0$.
We define the following series:
$L^{1}=L,\ L^{k+1}=[L^{k},L]\ (k\geq 1)\quad\hbox{and}\quad L^{(0)}=L,\ L^{(k+1)}=[L^{(k)},L^{(k)}]\ (k\geq 0).$
Then $L$ is nilpotent of class $n$ (resp. solvable of derived length $n$) if
$L^{n+1}=0$ but $L^{n}\neq 0$ (resp. $L^{(n)}=0$ but $L^{(n-1)}\neq 0$) for
some $n\in{\mathbb{N}}$. It is straightforward to check that $L$ is nilpotent
of class $n$ precisely when every product of $n+1$ elements of $L$ is zero, but
some product of $n$ elements is non-zero. The nilradical, $N(L)$, (resp.
radical, $R(L)$) is the largest nilpotent (resp. solvable) ideal of $L$.
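As a small illustration of these series (an added example, in the notation above), take the two-dimensional cyclic Leibniz algebra $L=Fx+Fx^{2}$ with $[x,x]=x^{2}$ and all other products zero. Then

```latex
L^{1} = L, \qquad
L^{2} = [L,L] = Fx^{2}, \qquad
L^{3} = [L^{2},L] = F[x^{2},x] = 0,
```

so $L$ is nilpotent of class $2$; the derived series also terminates, since $L^{(1)}=Fx^{2}$ and $L^{(2)}=[Fx^{2},Fx^{2}]=0$, and here the Leibniz kernel is $I=Fx^{2}=L^{2}$.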
The set of subalgebras of a nonassociative algebra forms a lattice under the
operations of union, $\cup$, where the union of two subalgebras is the
subalgebra generated by their set-theoretic union, and the usual intersection,
$\cap$. The relationship between the structure of a Lie algebra $L$ and that
of the lattice ${\cal L}(L)$ of all subalgebras of $L$ has been studied by
many authors. Much is known about modular subalgebras (modular elements in
${\cal L}(L)$) through a number of investigations including [1, 12, 14, 27,
28, 29]. Other lattice conditions, together with their duals, have also been
studied. These include semimodular, upper semimodular, lower semimodular,
upper modular, lower modular and their respective duals (see [8] for
definitions). For a selection of results on these conditions see [9, 13, 15,
16, 17, 20, 24, 26, 30, 31].
The subalgebra lattice of a Leibniz algebra, however, is rather different; in
a Lie algebra every element generates a one-dimensional subalgebra, whereas in
a Leibniz algebra elements can generate subalgebras of any dimension. So, one
could expect different results to hold for Leibniz algebras, and this has been
shown to be the case in [21].
Of particular interest is the extent to which important classes of Leibniz
algebras are determined by their subalgebra lattices. In order to investigate
this question we introduce the notion of a lattice isomorphism. If we denote
the subalgebra lattice of $L$ by ${\mathcal{L}}(L)$, then a lattice
isomorphism from $L$ to $L^{*}$ is a bijective map
$\theta:{\mathcal{L}}(L)\rightarrow{\mathcal{L}}(L^{*})$ such that
$\theta(A\cup B)=\theta(A)\cup\theta(B)$ and $\theta(A\cap
B)=\theta(A)\cap\theta(B)$ for all $A,B\in{\mathcal{L}}(L)$. If $L$ is a Lie
algebra over a field of characteristic zero the following were proved in [23].
###### Theorem 1.1
* (i)
If $L$ is simple then either
* (a)
$L^{*}$ is simple, or
* (b)
$L$ is three-dimensional non-split simple and $L^{*}$ is two-dimensional.
* (ii)
If $L$ is semisimple then either
* (a)
$L^{*}$ is semisimple, or
* (b)
$L$ is three-dimensional non-split simple and $L^{*}$ is two-dimensional.
* (iii)
If $\dim L,L^{*}>2$ and $R$ is the radical of $L$, then $R^{*}$ is the radical
of $L^{*}$.
* (iv)
If $L$ is supersolvable of dimension $>2$, then $L^{*}$ is supersolvable.
In [15] the following was proved.
###### Theorem 1.2
If $L$ is a solvable Lie algebra over a perfect field of characteristic
different from $2,3$, then either
* (i)
$L^{*}$ is solvable, or
* (ii)
$L^{*}$ is three-dimensional non-split simple.
We say that a Lie algebra $L$ is almost abelian if it is a split extension
$L=L^{2}\dot{+}Fa$ with ad $a$ acting as the identity map on the abelian ideal
$L^{2}$; $L$ is quasi-abelian if it is abelian or almost abelian. The quasi-
abelian Lie algebras are precisely the ones in which every subspace is a
subalgebra. The following is well-known and easy to show.
###### Proposition 1.3
If $L$ is a quasi-abelian Lie algebra over a field of characteristic zero then
$L^{*}$ is quasi-abelian unless $\dim L=2$ and $L^{*}$ is three-dimensional
non-split simple.
In this paper we consider corresponding results for Leibniz algebras. First,
in section two, we show that cyclic Leibniz algebras are characterised by
their subalgebra lattice, and that a non-Lie Leibniz algebra cannot be lattice
isomorphic to a Lie algebra. In section three we see that if $L$ is a non-Lie
simple or semisimple Leibniz algebra then so is $L^{*}$. In section four, it
is shown that if $L$ is a non-Lie solvable or supersolvable Leibniz algebra
then so is $L^{*}$. It is also proved that the radical of a non-Lie Leibniz
algebra is preserved by lattice isomorphisms. The final section is devoted to
showing that if $L$ is a non-Lie nilpotent Leibniz algebra then so is $L^{*}$.
Most of the above results are over fields of characteristic zero.
Throughout, $L$ will denote a finite-dimensional Leibniz algebra over a field
$F$. Algebra direct sums will be denoted by $\oplus$, whereas vector space
direct sums will be denoted by $\dot{+}$. The notation ‘$A\subseteq B$’ will
indicate that $A$ is a subset of $B$, whereas ‘$A\subset B$’ will mean that
$A$ is a proper subset of $B$. If $A$ and $B$ are subalgebras of $L$ we will
write $\langle A,B\rangle$ for $A\cup B$.
The centre of $L$ is $Z(L)=\\{z\in L\mid[z,x]=[x,z]=0$ for all $x\in L\\}$.
The Frattini ideal of $L$, $\phi(L)$, is the largest ideal of $L$ contained in
all maximal subalgebras of $L$.
## 2 Cyclic Leibniz algebras
The only previous paper that we are aware of on this topic is by Barnes ([5]).
The following example shows that the Leibniz kernel of a non-Lie Leibniz
algebra is not necessarily preserved by a lattice isomorphism.
###### Example 2.1
Let $L=Fb+Fa$ where the only non-zero products are $[b,b]=a$, $[a,b]=a$. Then
the only subalgebras of $L$ are ${0}$, $Fa$, $F(b-a)$ and $L$, and $I=Fa$.
Then we can define a lattice automorphism of $L$ which interchanges $Fa$ and
$F(b-a)$, and the latter is not an ideal of $L$ as $[b,b-a]=a$.
Barnes called the above example the diamond algebra because of the structure
of its lattice of subalgebras as a Hasse diagram, but that name has since been
used for a different Leibniz algebra. He further showed that this example is
exceptional in the following result.
###### Theorem 2.1
([5, Theorem 3.1]) Let $L,L^{*}$ be Leibniz algebras with Leibniz kernels
$I,I^{*}$ respectively, and let $\theta:L\rightarrow L^{*}$ be a lattice
isomorphism. Suppose that $\dim L\geq 3$. Then $\theta(I)=I^{*}$.
However, this paper does not appear to have been followed by further
investigations into the subalgebra structure of a Leibniz algebra. Theorem
2.1, of course, has an immediate corollary.
###### Corollary 2.2
Let $L$ be a non-Lie Leibniz algebra. Then $L$ cannot be lattice isomorphic to
a Lie algebra $L^{*}$.
Proof. If $\dim L\geq 3$ then $I\neq 0$ if and only if $I^{*}\neq 0$. If $\dim
L=2$ there are only two possibilities for $L$, both of them cyclic with basis
$x,x^{2}$. In the first, $[x^{2},x]=0$ and the only proper subalgebra is
$Fx^{2}$, and in the second, $[x^{2},x]=x^{2}$ and the only proper subalgebras
are $Fx^{2}$ and $F(x-x^{2})$. However, every Lie algebra of dimension greater
than one has more than two proper subalgebras.
There is no non-Lie Leibniz algebra of dimension one. $\Box$
A Leibniz algebra $L$ is called cyclic if it is generated by a single element.
In this case, $L$ has a basis $x,x^{2},\ldots,x^{n}(n>1)$ and products
$[x^{i},x]=x^{i+1}$ for $1\leq i\leq n-1$,
$[x^{n},x]=\alpha_{2}x^{2}+\ldots+\alpha_{n}x^{n}$, all other products being
zero. Then we have the following.
###### Theorem 2.3
If $L$ is a cyclic Leibniz algebra over an infinite field $F$, then $L^{*}$ is
also a cyclic Leibniz algebra of the same dimension.
Proof. Over an infinite field a Leibniz algebra is cyclic if and only if it
has finitely many maximal subalgebras, by [21, Corollary 2.3]. Moreover, the
length of a maximal chain of subalgebras of a cyclic algebra is equal to its
dimension. $\Box$
###### Corollary 2.4
If $L$ is a nilpotent cyclic Leibniz algebra, then $L^{*}\cong L$.
Proof. A nilpotent cyclic Leibniz algebra has only one maximal subalgebra,
namely $I$, its Leibniz kernel. It follows that $L^{*}$ is nilpotent of the
same dimension. Note that the restriction on the field is unnecessary here,
since, if $M$ is the only maximal subalgebra of $L$ and $x\in L\setminus M$,
we must have $L=\langle x\rangle$. $\Box$
Note that, in both of the above results, if $L=\langle x\rangle$ then
$L^{*}=\langle x^{*}\rangle$, since $x$ does not belong to any of the maximal
subalgebras of $L$, and this is inherited by $x^{*}$ in $L^{*}$.
###### Proposition 2.5
Let $L=A\dot{+}Fx$ be a non-Lie Leibniz algebra in which $A$ is a minimal
abelian ideal of $L$ and $x^{2}=0$. Then $L$ is cyclic and $A=I$.
Proof. Since $L$ is not a Lie algebra, $A=I$, $[L,A]=0$ and $[A,L]\neq 0$, so
$[A,x]=A$. Let $0\neq a\in A$. Then $(x+a)^{n}=R_{x}^{n-1}(a)$ for $n\geq 2$,
which implies that $[(x+a)^{n},x]=R_{x}^{n}(a)\in\langle x+a\rangle$ for
$n\geq 1$. Hence $\langle x+a\rangle\cap A$ is an ideal of $L$ and so equals
$A$ or $0$. However, the latter implies that $[a,x]=0$, whence $A=Fa$ and
$[A,x]=0$, a contradiction. It follows that $L=\langle x+a\rangle$. $\Box$
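A minimal concrete instance of this proposition (our added illustration): let $L=Fa\dot{+}Fx$ with $[a,x]=a$ as the only non-zero product, so that $A=Fa$ is a minimal abelian ideal, $x^{2}=0$ and $[L,A]=0$. Then

```latex
(x+a)^{2} = [x+a,\,x+a] = x^{2} + [x,a] + [a,x] + a^{2} = 0 + 0 + a + 0 = a,
```

so $\langle x+a\rangle$ contains $a$ and hence $x$, giving $L=\langle x+a\rangle$ with $A=I=Fa$.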
## 3 Semisimple Leibniz algebras
The following useful result was proved by Barnes in [4]. Note that we have
modified the statement to take account of the fact that Barnes’ result is
stated for left Leibniz algebras and we are dealing with right Leibniz
algebras.
###### Lemma 3.1
Let $A$ be a minimal ideal of the Leibniz algebra $L$. Then $[L,A]=0$ or
$[x,a]=-[a,x]$ for all $a\in A$, $x\in L$.
A Leibniz algebra $L$ is called simple if its only ideals are $0$, $I$
and $L$, and $L^{2}\neq I$. If $L/I$ is a simple Lie algebra then $L$ is not
necessarily a simple Leibniz algebra. It is said to be semisimple if $R(L)=I$.
This definition agrees with that of a semisimple Lie algebra, since, in this
case, $I=0$. Semisimple Leibniz algebras are not necessarily direct sums of
simple Leibniz algebras (see [10] or [2]).
We have the following version of Levi’s Theorem.
###### Theorem 3.2
(Barnes [3]) Let $L$ be a finite-dimensional Leibniz algebra over a field of
characteristic $0$. Then there is a semisimple Lie subalgebra $S$ of $L$ such
that $L=S\dot{+}R(L)$.
We shall need the following result which was proved by Gein in [11, p. 23].
###### Lemma 3.3
Let $S$ be a three-dimensional non-split simple Lie algebra, and let $R$ be an
irreducible $S$-module. Then, for any $s\in S$, $R$ has an ad $s$-invariant
subspace of dimension less than or equal to two.
If $U$ is a subalgebra of $L$ and $0=U_{0}<U_{1}<\ldots<U_{n}=U$ is a maximal
chain of subalgebras of $U$ we will say that $U$ has length $n$.
###### Theorem 3.4
Let $L=S\dot{+}A$ be a Leibniz algebra over a field of characteristic zero,
where $S$ is a three-dimensional non-split simple Lie algebra and $A$ is a
minimal abelian ideal of $L$. Then $L^{*}$ has a simple Lie subalgebra.
Proof. Suppose that $L^{*}$ does not have a simple Lie subalgebra. Then
$R(L^{*})\neq 0$, by Theorem 3.2, and so $L^{*}$ has a minimal abelian ideal
$B^{*}$. As $S^{*}$ is a maximal subalgebra of $L^{*}$ we must have that
$L^{*}=S^{*}\dot{+}B^{*}$. If $\dim A=1$ we have that $L=S\oplus A$ is a Lie
algebra and hence, so is $L^{*}$, giving that $S^{*}\cong S$, by [23, Lemma
3.3] and contradicting our supposition. Hence $\dim A\geq 2$.
Now maximal subalgebras of $L$ are of two types: they are isomorphic to $S$,
and so have length $2$, or they are of the form $Fs\dot{+}A$, where $s\in S$,
and so are solvable of length at least $3$. Moreover, $A$ is the intersection
of those of the second type. The same must be true of the maximal subalgebras
of $L^{*}$ and so $B^{*}=A^{*}$ and $L^{*}=S^{*}\dot{+}A^{*}$. Also, $\dim
S^{*}=2$, by Theorem 1.1. Now $\phi(L^{*})=(\phi(L))^{*}=0$, so
$L^{*}=A^{*}\dot{+}C^{*}$, where $C^{*}$ is abelian, by [6, Corollary 2.9].
Since $S^{*}\cong C^{*}$, we have that $S^{*}$ is abelian.
Let $0\neq s^{*}\in S^{*}$, $0\neq a^{*}\in A^{*}$ and let $f(\theta)$ be the
polynomial of smallest degree for which $f(R_{s^{*}})(a^{*})=0$. It follows
from the fact that $S^{*}$ is abelian that $\\{x^{*}\in
A^{*}:f(R_{s^{*}})(x^{*})=0\\}$ is an ideal of $L^{*}$, and hence that it
coincides with $A^{*}$. Clearly then $f(\theta)$ is the minimum polynomial of
$R_{s^{*}}|_{A^{*}}$.
Suppose that there is an $s_{1}^{*}\in S^{*}$ for which the minimum polynomial
for $R_{s_{1}^{*}}$ has degree two, and let this polynomial be
$f(\theta)=\theta^{2}-\lambda_{2}\theta-\lambda_{1}$. Pick $s_{2}^{*}\in
S^{*}$ linearly independent of $s_{1}^{*}$. Then
$\displaystyle
\begin{aligned}
{}[[a^{*},s_{1}^{*}],s_{1}^{*}]&=\lambda_{1}a^{*}+\lambda_{2}[a^{*},s_{1}^{*}]\quad\hbox{and}\\
{}[[a^{*},s_{2}^{*}],s_{2}^{*}]&=\alpha_{1}a^{*}+\alpha_{2}[a^{*},s_{2}^{*}],\quad\hbox{so}\\
{}[[a^{*},s_{1}^{*}],s_{2}^{*}]&=[a^{*},[s_{1}^{*},s_{2}^{*}]]+[[a^{*},s_{2}^{*}],s_{1}^{*}]=[[a^{*},s_{2}^{*}],s_{1}^{*}]\\
&=\beta_{1}a^{*}+\beta_{2}[a^{*},s_{1}^{*}]+\beta_{3}[a^{*},s_{2}^{*}],
\end{aligned}$
since $[[a^{*},s_{1}^{*}+s_{2}^{*}],s_{1}^{*}+s_{2}^{*}]\in
Fa^{*}+F[a^{*},s_{1}^{*}+s_{2}^{*}]$. Now
$[[[a^{*},s_{2}^{*}],s_{1}^{*}],s_{1}^{*}]=\lambda_{1}[a^{*},s_{2}^{*}]+\lambda_{2}[[a^{*},s_{2}^{*}],s_{1}^{*}],$
so
$(\beta_{2}\lambda_{1}+\beta_{3}\beta_{1})a^{*}+(\beta_{1}+\beta_{2}\lambda_{2}+\beta_{2}\beta_{3})[a^{*},s_{1}^{*}]+\beta_{3}^{2}[a^{*},s_{2}^{*}]=\lambda_{2}\beta_{1}a^{*}+\lambda_{2}\beta_{2}[a^{*},s_{1}^{*}]+(\lambda_{1}+\lambda_{2}\beta_{3})[a^{*},s_{2}^{*}].$
Since $f(\theta)$ is irreducible,
$\beta_{3}^{2}\neq\lambda_{1}+\lambda_{2}\beta_{3}$ and so
$[a^{*},s_{2}^{*}]=\gamma_{1}a^{*}+\gamma_{2}[a^{*},s_{1}^{*}]$. Hence $A^{*}$
is two-dimensional.
Put $A=Fa+F[a,s]$. Choose $s_{1},s_{2}$ to be elements of $S$ such that
$s,s_{1},s_{2}$ are linearly independent. Then $[a,s_{1}]=\alpha a+\beta[a,s]$
and $[a,s_{2}]=\gamma a+\delta[a,s]$ for some $\alpha,\beta,\gamma,\delta\in
F$. Thus $[a,s_{1}-\beta s]=\alpha a$ and $[a,s_{2}-\delta s]=\gamma a$. But
$s_{1}-\beta s$ and $s_{2}-\delta s$ are linearly independent, so
$[a,S]=[a,\langle s_{1}-\beta s,s_{2}-\delta s\rangle]\subseteq Fa$
and $A$ is one-dimensional, a contradiction. $\Box$
###### Corollary 3.5
Let $L$ be a non-Lie semisimple Leibniz algebra over a field of characteristic
zero. Then $L^{*}$ is a non-Lie semisimple Leibniz algebra.
Proof. We have that $I\neq 0$, so $L=I\dot{+}S$ where $S$ is a semisimple Lie
algebra, by Theorem 3.2. Then $L^{*}/I^{*}$ is a semisimple Lie algebra or
$\dim L^{*}/I^{*}=2$ and $S$ is $3$-dimensional non-split simple, by Theorem
1.1(ii).
Suppose that the latter holds, so $L^{*}$ is solvable. Let $A$ be a minimal
ideal of $L$ inside $I$ and put $B=A\dot{+}S$. Then $B^{*}$ has a simple Lie
subalgebra, by Theorem 3.4 and $L^{*}$ cannot be solvable. Hence the former
holds and $L^{*}$ is a non-Lie semisimple Leibniz algebra. $\Box$
A subalgebra $U$ of $L$ is called upper semi-modular if $U$ is a maximal
subalgebra of $\langle U,B\rangle$ for every subalgebra $B$ of $L$ such that
$U\cap B$ is maximal in $B$. Using this concept we have a further corollary.
###### Corollary 3.6
Let $L$ be a non-Lie simple Leibniz algebra over a field of characteristic
zero. Then $L^{*}$ is a non-Lie simple Leibniz algebra.
Proof. We have that $L=I\dot{+}S$ where $S$ is a simple Lie subalgebra of $L$
and $I\neq 0$. If $L^{*}/I^{*}$ is not simple then $S$ must be
three-dimensional non-split simple, by Theorem 1.1(i), and we get a contradiction as
in the previous corollary.
Let $0\neq A^{*}$ be an ideal of $L^{*}$. Suppose first that $A^{*}\subseteq
I^{*}$. Then $A$ is an upper semi-modular subalgebra of $L$ with $A\subseteq
I$. Let $s\in S$. Then $A\cap Fs=0$ is a maximal subalgebra of $Fs$. Hence $A$
is a maximal subalgebra of $C=\langle A,s\rangle$. Now $A\subseteq C\cap
I\subset C$, so $A=C\cap I$. Thus $[s,A],[A,s]\subseteq C\cap I=A$, so $A$ is
an ideal of $L$, whence $A=I$. It follows that $A^{*}=I^{*}$.
Next, suppose that $A^{*}\not\subseteq I^{*}$. Then $I^{*}+A^{*}=L^{*}$ and
$I^{*}\cap A^{*}=I^{*}$ or $0$, by the previous paragraph. The former implies
that $A^{*}=L^{*}$; the latter gives that $L^{*}=I^{*}\oplus A^{*}$ giving
$I^{*}=0$ and $L^{*}=A^{*}$ again.
Clearly $(L^{*})^{2}\neq I^{*}$, so $L^{*}$ is a non-Lie simple Leibniz
algebra. $\Box$
## 4 Solvable and supersolvable Leibniz algebras
###### Proposition 4.1
Let $L$ be a non-Lie solvable Leibniz algebra over a field of characteristic
zero. Then $L^{*}$ is a non-Lie solvable Leibniz algebra.
Proof. Let $L$ be a minimal counter-example. Then $L^{*}$ has a semisimple Lie
subalgebra $S^{*}$, and so $S\ (\neq L)$ must be two-dimensional and $S^{*}$
must be three-dimensional non-split simple. Moreover,
$L^{*}=S^{*}\dot{+}A^{*}$, where $A^{*}$ is a minimal ideal of $L^{*}$, since,
otherwise, this is a smaller counter-example. But then $L$ has a simple
subalgebra, by Theorem 3.4, a contradiction. $\Box$
###### Lemma 4.2
Let $L$ be a Leibniz algebra over a field of characteristic zero. Then the
radical, $R$, of $L$ is the intersection of the maximal solvable subalgebras
of $L$.
Proof. Let $\Gamma$ be the intersection of the maximal solvable subalgebras of
$L$. Then $R\subseteq\Gamma$. Furthermore, $\Gamma$ is invariant under all
automorphisms of $L$, and hence is invariant under all derivations of $L$, by
[22, Corollary 3.2]. It follows that $\Gamma$ is a right ideal of $L$. But
$[x,y]+[y,x]\in I\subseteq\Gamma$ for all $x\in L$, $y\in\Gamma$, so $\Gamma$
is an ideal of $L$, whence $\Gamma\subseteq R$. $\Box$
Then we have the following corollaries to Proposition 4.1.
###### Corollary 4.3
Let $L$ be a non-Lie Leibniz algebra over a field of characteristic zero, and
let $R$ be the radical of $L$. Then $R^{*}$ is the radical of $L^{*}$.
Proof. Let $U$ be a maximal solvable subalgebra of $L$. If $U$ is non-Lie then
$U^{*}$ is solvable, by Proposition 4.1. If $U$ is Lie, then $U^{*}$ is solvable,
unless $\dim U=2$ and $U^{*}$ is three-dimensional non-split simple. If
$R^{*}=0$ then $L^{*}$, and hence $L$, is a Lie algebra, a contradiction.
Hence $\dim R^{*}\neq 0$.
Moreover, $R\subseteq U$. If $R=U$ then $R$ is a maximal solvable subalgebra
of $L$, which is impossible unless $R=L$. But then the result follows from
Proposition 4.1. So suppose $\dim R=0,1$. The former implies that $L$ is a
semisimple Lie algebra, which is impossible. The latter implies that
$L=S\oplus Fa$, where $S$ is a semisimple Lie algebra. But this is also a Lie
algebra and so is impossible.
It follows that $U^{*}$ must be a maximal solvable subalgebra of $L^{*}$. The
result now follows from Lemma 4.2. $\Box$
A subalgebra $U$ of $L$ is called lower semi-modular in $L$ if $U\cap B$ is
maximal in $B$ for every subalgebra $B$ of $L$ such that $U$ is maximal in
$\langle U,B\rangle$. We say that $L$ is lower semi-modular if every
subalgebra of $L$ is lower semi-modular in $L$.
###### Corollary 4.4
Let $L$ be a non-Lie supersolvable Leibniz algebra over a field of
characteristic zero. Then $L^{*}$ is supersolvable.
Proof. We have that $L$ is solvable and lower semi-modular, by [21,
Proposition 5.1]. It follows from Proposition 4.1 that the same is true of
$L^{*}$. Hence $L^{*}$ is supersolvable, by [21, Proposition 5.1] again.
$\Box$
## 5 Nilpotent Leibniz algebras
A Lie algebra $L$ is called almost nilpotent of index $n$ if it has a basis
$\\{x;e_{11},\ldots,e_{1r_{1}};\ldots;e_{n1},\ldots,e_{nr_{n}}\\}$
such that
$-[e_{ij},x]=[x,e_{ij}]=e_{ij}+e_{i+1,j}\hbox{ for }1\leq i\leq n-1,\ 1\leq j\leq r_{i},$
$-[e_{nj},x]=[x,e_{nj}]=e_{nj},\hbox{ and }r_{j}\leq r_{j+1}\hbox{ for }1\leq j\leq n-1,$
all other products being zero.
The following result was proved in [25].
###### Theorem 5.1
Let $L$ be a nilpotent Lie algebra of index $n$ and of dimension greater than
two for which $L^{*}$ is not nilpotent, over a field of characteristic zero.
Then $L^{*}$ is almost nilpotent of index $n$. Moreover, every almost
nilpotent Lie algebra is lattice isomorphic to a nilpotent Lie algebra.
For non-Lie Leibniz algebras we have the following result.
###### Theorem 5.2
Let $L$ be a nilpotent non-Lie Leibniz algebra over a field of characteristic
zero. Then $L^{*}$ is a non-Lie nilpotent Leibniz algebra.
First we need a lemma.
###### Lemma 5.3
Let $L$ be a nilpotent Leibniz algebra and let $W=Fw$ be a minimal ideal of
$L$ contained in the Leibniz kernel, $I$, of $L$. Then $W^{*}$ is a minimal
ideal of $L^{*}$ and $W^{*}\subseteq Z(L^{*})$.
Proof. Suppose that $x\notin I$, where $x^{n}=0$ but $x^{n-1}\neq 0$. Then
$S=\langle x,W\rangle=\langle x\rangle+W$ and $\langle x\rangle\cap W=0$ or
$W$. The former implies that $\langle x\rangle$ is a maximal subalgebra of
$S$, whence $\langle x^{*}\rangle$ is a maximal subalgebra, and hence an
ideal, of $S^{*}=\langle x^{*},W^{*}\rangle$. The latter implies that
$W\subseteq\langle x\rangle$, whence $W^{*}\subseteq\langle x^{*}\rangle$. In
either case, $[w^{*},x^{*}]\in\langle x^{*}\rangle\cap I^{*}$. Hence
$[w^{*},x^{*}]=\sum_{i=2}^{n}\lambda_{i}(x^{*})^{i}$.
Suppose that $\lambda_{2}\not=0$ and consider $\langle\lambda_{2}x-w\rangle$.
If $W\subseteq\langle x\rangle$ then $W=Fx^{n}$ and
$\langle\lambda_{2}x-w\rangle=\langle x\rangle$. If $W\not\subseteq\langle
x\rangle$ then
$(\lambda_{2}x-w)^{k}=\lambda_{2}^{k}x^{k}-\lambda_{2}^{k-1}\mu^{k-1}w$, where
$[w,x]=\mu w$. In either case, $\langle\lambda_{2}x-w\rangle$ is a cyclic
subalgebra of dimension $n$.
However,
$(\lambda_{2}x^{*}-w^{*})^{2}=\lambda_{2}^{2}(x^{*})^{2}-\lambda_{2}\sum_{i=2}^{n}\lambda_{i}(x^{*})^{i}=\lambda_{2}\sum_{i=3}^{n}\lambda_{i}(x^{*})^{i},$
so $\langle\lambda_{2}x^{*}-w^{*}\rangle$ is a cyclic subalgebra of dimension
$n-1$, contradicting Corollary 2.4. It follows that $\lambda_{2}=0$. A similar
argument shows that $\lambda_{i}=0$ for all $2\leq i\leq n$, so
$[w^{*},x^{*}]=0$. Also, $[x^{*},w^{*}]=0$, since $w^{*}\in I^{*}$, from which
the result follows. $\Box$
Now we can prove Theorem 5.2.
Proof. We have that $L/L^{2}$ is abelian and $L^{2}=\phi(L)$, so
$L^{*}/\phi(L^{*})$ is almost abelian or three-dimensional non-split simple.
The latter is impossible, as it would imply that
$L^{*}=\phi(L^{*})\dot{+}S^{*}=S^{*}$, where $S^{*}$ is three-dimensional non-
split simple, by Theorem 3.2. But then $L$ is a two-dimensional Lie algebra,
by Theorem 2.2, a contradiction. It follows that $L^{*}/\phi(L^{*})$, and
hence $L^{*}$, is supersolvable (see [5, Theorems 3.9 and 5.2]) and has
nilradical
$N^{*}=\phi(L^{*})+Fe_{11}^{*}+\dots+Fe_{1r_{1}}^{*}.$
Let $L$ be a minimal counter-example, so $L$ is non-Lie and nilpotent, but
$L^{*}$ is not nilpotent.
Now $I$ is non-zero, so choose a minimal ideal $W=Fw$ of $L$ inside $I$. We
have that $W^{*}$ is a minimal ideal of $L^{*}$ inside $Z(L^{*})$, by Lemma
5.3. Then $L^{*}/W^{*}$ is not nilpotent, so $L/W$ is a Lie algebra and
$L^{*}/W^{*}$ is almost nilpotent. Hence there is a basis
$\\{x^{*};e_{11}^{*},\ldots,e_{1r_{1}}^{*};\ldots;e_{n1}^{*},\ldots,e_{nr_{n}}^{*},w^{*}\\}\hbox{
for }L^{*}$
such that
$[x^{*},e_{ij}^{*}]=e_{ij}^{*}+e_{i+1,j}^{*}+\lambda_{ij}w^{*}$ for $1\leq i\leq n-1$, $1\leq j\leq r_{i}$,
$[x^{*},e_{nj}^{*}]=e_{nj}^{*}+\lambda_{nj}w^{*}$ and $r_{j}\leq r_{j+1}$ for $1\leq j\leq n-1$,
where $\lambda_{ij}\in F$, $I^{*}=Fw^{*}$ and $(N^{*})^{2}\subseteq W^{*}$.
Let $M^{*}$ be spanned by all of the basis vectors for $L^{*}$ apart from
$e_{11}^{*}$. Then $M^{*}$ is not nilpotent and has nilradical $F^{*}$ spanned
by all of the basis vectors apart from $e_{11}^{*}$ and $x^{*}$. By the
minimality, we must have that $M$ is Lie and $M^{*}$ is almost nilpotent, so
$(F^{*})^{2}=0$, $(x^{*})^{2}=0$ and $[e_{ij}^{*},x^{*}]=-[x^{*},e_{ij}^{*}]$
for all of the $e_{ij}^{*}$’s apart from $e_{11}^{*}$. But also
$[e_{11}^{*},x^{*}]=-e_{11}^{*}-e_{21}^{*}+\mu w^{*}$ for some $\mu\in F$, so
$\displaystyle[e_{11}^{*},x^{*}]$
$\displaystyle=[[x^{*},e_{11}^{*}],x^{*}]-[e_{21}^{*},x^{*}]$
$\displaystyle=[x^{*},[e_{11}^{*},x^{*}]]+[(x^{*})^{2},e_{11}^{*}]+[x^{*},e_{21}^{*}]$
$\displaystyle=-[x^{*},e_{11}^{*}]-[x^{*},e_{21}^{*}]+[x^{*},e_{21}^{*}]=-[x^{*},e_{11}^{*}].$
We now claim that $(N^{*})^{2}=0$. It suffices to show that
$[N^{*},e_{11}^{*}]=0$, which we do by a backwards induction argument. We
have, for any $f^{*}\in F^{*}$,
$\displaystyle[f^{*},e_{11}^{*}]$
$\displaystyle=[f^{*},[x^{*},e_{11}^{*}]-e_{21}^{*}-\lambda_{11}w^{*}]=[f^{*},[x^{*},e_{11}^{*}]]$
$\displaystyle=[[f^{*},x^{*}],e_{11}^{*}]-[[f^{*},e_{11}^{*}],x^{*}]=[[f^{*},x^{*}],e_{11}^{*}],$
(1)
since $[f^{*},e_{11}^{*}]\in W^{*}\subseteq Z(L^{*})$. Now putting
$f^{*}=e_{nj}^{*}$ gives
$[e_{nj}^{*},e_{11}^{*}]=[[e_{nj}^{*},x^{*}],e_{11}^{*}]=-[e_{nj}^{*},e_{11}^{*}],$
whence $[e_{nj}^{*},e_{11}^{*}]=0$. So now suppose that
$[e_{ij}^{*},e_{11}^{*}]=0$ for some $2\leq i\leq n$.
Putting $f^{*}=e_{i-1,j}^{*}$ ($(i-1,j)\neq(2,1)$) in (1) gives
$[e_{i-1,j}^{*},e_{11}^{*}]=[[e_{i-1,j}^{*},x^{*}],e_{11}^{*}]=-[e_{i-1,j}^{*},e_{11}^{*}]$
which, again, yields that $[e_{i-1,j}^{*},e_{11}^{*}]=0$. Finally, note that,
if we now put $f^{*}=e_{11}^{*}$, then (1) remains valid, so
$(e_{11}^{*})^{2}=0$ and $(N^{*})^{2}=0$.
Now replace $e_{nj}^{*}$ by $e_{nj}^{*}+\lambda_{nj}w^{*}$, $e_{ij}^{*}$ by
$e_{ij}^{*}+(-1)^{n-i}\lambda_{i+1,j}w^{*}$ to see that $L^{*}$ is almost
nilpotent and $L$ is a Lie algebra, a contradiction. Hence the result holds.
$\Box$
## References
* [1] R.K. Amayo and J. Schwarz, ‘Modularity in Lie Algebras’, Hiroshima Math. J. 10 (1980), 311-322.
* [2] Sh. Ayupov, K. Kudaybergenov, B. Omirov and K. Zhao, ‘Semisimple Leibniz algebras, their derivations and automorphisms’, Linear and Multilinear Alg. (2019), https://doi.org/10.1080/03081087.2019.1567674.
* [3] D.W. Barnes, ‘On Levi’s Theorem for Leibniz algebras’, Bull. Aust. Math. Soc. 86 (2012), 184-185.
* [4] D.W. Barnes, ‘Some theorems on Leibniz algebras’, Comm. Alg. 39 (7) (2011), 2463-2472.
* [5] D.W. Barnes, ‘Lattices of subalgebras of Leibniz algebras’, Comm. Alg. 40 (11) (2012), 4330-4335.
* [6] C. Batten Ray, L. Bosko-Dunbar, A. Hedges, J.T. Hird, K. Stagg and E. Stitzinger, ‘A Frattini theory for Leibniz algebras’, Comm. Alg. 41(4) (2013), 1547–1557.
* [7] A. Bloh. ‘On a generalization of the concept of Lie algebra’. Dokl. Akad. Nauk SSSR. 165 (1965), 471–473.
* [8] K. Bowman and D.A.Towers, ‘Modularity conditions in Lie algebras’, Hiroshima Math. J. 19 (1989), 333-346.
* [9] K. Bowman and V.R. Varea, ‘Modularity* in Lie algebras’, Proc. Edin. Math. Soc. 40(2) (1997), 99-110.
* [10] I. Demir, K.C. Misra and E. Stitzinger, ‘On some structures of Leibniz algebras’, in Recent Advances in Representation Theory, Quantum Groups, Algebraic Geometry, and Related Topics, Contemporary Mathematics, 623. Amer. Math. Soc., Providence, RI, (2014), 41–54.
* [11] A.G. Gein, ‘Projections of a Lie algebra of characteristic zero’, Izvestija vyss̃. ucebn. Zaved. Mat. 22(4) (1978), 26-31.
* [12] A.G. Gein, ‘Modular rule and relative complements in the lattice of subalgebras of a Lie algebra’, Sov. Math. 31(3) (1987), 22-32; translated from Izv. Vyssh. Uchebn. Zaved. Mat. 83 (1987), 18-25.
* [13] A.G. Gein, ‘Semimodular Lie algebras’, Siberian Math. J. 17 (1976), 243-248; translated from Sibirsk Mat. Z. 17 (1976), 243-248.
* [14] A.G. Gein, ‘On modular subalgebras of Lie algebras’, Ural Gos. Univ. Mat. Zap. 14 (1987), 27-33.
* [15] A.G. Gein and V.R. Varea, ‘Solvable Lie algebras and their subalgebra lattices’, Comm. Alg. 20(8) (1992), 2203-2217. Corrigenda: Comm. Alg. 23(1) (1995), 399-403.
* [16] B. Kolman, ‘Semi-modular Lie algebras’, J. Sci. Hiroshima Univ. Ser. A-I 29 (1965), 149-163.
* [17] A.A. Lashi, ‘On Lie algebras with modular lattices of subalgebras’, J. Algebra 99 (1986), 80-88.
* [18] J.-L. Loday, ‘Une version non commutative des algèbres de Lie: les algèbres de Leibniz’. Enseign. Math. (2) 39 (3–4) (1993), 269–293.
* [19] J.-L. Loday and T. Pirashvili, ‘Universal enveloping algebras of Leibniz algebras and (co)homology’, Math. Annalen 296 (1) (1993) 139–158.
* [20] C. Scheiderer, ‘Intersections of maximal subalgebras in Lie algebras’, J. Algebra 105 (1987), 268-270.
* [21] S. Siciliano and D.A. Towers, ‘On the subalgebra lattice of a Leibniz algebra’, arXiv:2010.12254 (2020).
* [22] D.A. Towers, ‘A Frattini theory for algebras’, Proc. London Math. Soc. (3) 27 (1973), 440–462.
* [23] D.A. Towers, ‘Lattice isomorphisms of Lie algebras’, Math. Proc. Camb. Phil. Soc. 89 (1981), 285-292.
* [24] D.A. Towers, ‘Semimodular subalgebras of a Lie algebra’, J. Algebra 103 (1986), 202-207.
* [25] D.A. Towers, ‘Almost nilpotent Lie algebras’, Glasgow Math. J. 29 (1987), 7-11.
* [26] D.A. Towers, ‘On modular* subalgebras of a Lie algebra’, J. Algebra 190 (1997), 461-473.
* [27] V.R. Varea, ‘Modular subalgebras, Quasi-ideals and inner ideals in Lie Algebras of prime characteristic’, Comm. Alg. 21(11) (1993), 4195–4218.
* [28] V.R. Varea, ‘The subalgebra lattice of a supersolvable Lie algebra’, In Lecture Notes in Mathematics. Springer-Verlag: New York, 1989; Vol. 1373, 81-92.
* [29] V.R. Varea, ‘On modular subalgebras in Lie algebras of prime characteristic’, Contemporary Math. 110 (1990), 289-307.
* [30] V.R. Varea, ‘Lower Semimodular Lie algebras’, Proc. Edin. Math. Soc. 42(1999), 521-540.
* [31] V.R. Varea, ‘Lie algebras whose maximal subalgebras are modular’, Proc. Roy. Soc. Edinburgh Sect. A 94 (1983), 9-13.
# Charge affinity and solvent effects in numerical simulations of ionic
microgels
Giovanni Del Monte1,2,3,∗, Fabrizio Camerin1,4,∗, Andrea Ninarello1,2,
Nicoletta Gnan1,2, Lorenzo Rovigatti2,1, Emanuela Zaccarelli1,2,∗ 1 CNR
Institute of Complex Systems, Uos Sapienza, piazzale Aldo Moro 2, 00185, Roma,
Italy 2 Department of Physics, Sapienza University of Rome, piazzale Aldo
Moro 2, 00185 Roma, Italy 3 Center for Life NanoScience, Istituto Italiano di
Tecnologia, viale Regina Elena 291, 00161 Rome, Italy 4 Department of Basic
and Applied Sciences for Engineering, via Antonio Scarpa 14, 00161 Roma, Italy
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Ionic microgel particles are intriguing systems in which the properties of
thermo-responsive polymeric colloids are enriched by the presence of charged
groups. In order to rationalize their properties and predict the behaviour of
microgel suspensions, it is necessary to develop a coarse-graining strategy
that starts from the accurate modelling of single particles. Here, we provide
a numerical advancement of a recently-introduced model for charged co-
polymerized microgels by improving the treatment of ionic groups in the
polymer network. We investigate the thermoresponsive properties of the
particles, in particular their swelling behaviour and structure, finding that,
when charged groups are considered to be hydrophilic at all temperatures,
highly charged microgels do not achieve a fully collapsed state, in favorable
comparison to experiments. In addition, we explicitly include the solvent in
the description and put forward a mapping between the solvophobic potential in
the absence of the solvent and the monomer-solvent interactions in its
presence, which is found to work very accurately for any charge fraction of
the microgel. Our work paves the way for comparing single-particle properties
and swelling behaviour of ionic microgels to experiments and to tackle the
study of these charged soft particles at a liquid-liquid interface.
††: J. Phys.: Condens. Matter
* August 27, 2024
Keywords: ionic microgels, charge affinity, solvophobic attraction, volume
phase transition, form factors
## 1 Introduction
Soft matter is a very active branch of condensed matter physics, which
comprises, among other systems, colloidal suspensions, whose constituent
particles can greatly vary in shape, softness and function. Soft matter
encompasses not only synthetic particles, but also constituents of many
biological systems, such as proteins, viruses and even cells, whose size
ranges between the nano and the micrometer scale. A peculiar aspect of soft
matter systems is the great variety of amorphous states they can form,
including glasses [1, 2] and gels [3, 4, 5]. Indeed, a large amount of work in
this field is devoted to the study of these non-ergodic states which may form
due to different kinds of interactions, such as steric, hydrophobic or
electrostatic ones, both of attractive and repulsive nature.
Sometimes, a single colloidal particle is already quite a complex object whose
behaviour at the collective level is strongly connected to the microscopic
features of the particle itself. This situation is typical of soft colloids,
i.e. deformable particles with internal degrees of freedom strongly
influencing their mutual interactions, which makes them already intrinsically
multi-scale. For these systems a theoretical approach is quite challenging
even at the single-particle level, so it is convenient to rely on the
development of suitable coarse-grained models [6] that greatly reduce
the system's complexity while still capturing the ingredients needed
for a correct description of the collective behavior. This strategy
is very profitable for the case of microgel particles [7] which, combining
the properties of colloids and polymers, can be viewed as a prototype
example of soft particles [8, 9]. A microgel is a microscale gel whose
internal polymeric network controls its peculiar properties. By varying the
constituent monomers, microgels can be made responsive to temperature, pH or
to external forces [7]. For their intriguing properties, they are employed in
a wide variety of applications, ranging from biomedical purposes [10, 11] to
paper restoration [12].
In order to be able to predict the behaviour of dense microgel suspensions and
the formation of arrested states, it is important to properly take into
account the internal degrees of freedom of the particles, by modelling their
effective interactions in such a way that the resulting object can still
shrink, deform and interpenetrate [13, 14, 15, 16]. Hence, an accurate
modelling of a single microgel is a necessary pre-requisite for a correct
description of bulk suspensions. To validate numerical models at the single-
particle level, there are a number of different experiments we can refer to.
One of the most straightforward is the measurement of the effective size of
the microgels via dynamic light scattering experiments. Upon varying the
controlling parameter of the dispersion, the so-called swelling curves can be
determined. For instance, microgels synthesized by employing a
thermoresponsive polymer, such as Poly(N-isopropyl-acrylamide) (PNIPAM),
undergo a so-called Volume Phase Transition (VPT) [7] at a temperature
$T_{\scriptscriptstyle\mathit{VPT}}\approx 32^{\circ}$C from a swollen to a
collapsed state.
In addition, form factors can be measured by small-angle scattering
experiments of dilute microgel suspensions, either using neutrons [17], x-rays
[18] or even visible light for large enough microgels [19]. This observable
directly provides information on the inner structure of the microgels and
shows that microgels prepared via precipitation polymerization [20] can be
modelled as effective fuzzy spheres [17], where a rather homogeneous core is
surrounded by a fluffy corona, giving rise to what is usually called a core-
corona structure. A more complex situation arises when ionic groups are added
to the synthesis to make microgels responsive also to external electric fields
[21, 22] and to pH variations [23]. A case study of such co-polymerized
microgels is the one made of PNIPAM and polyacrylic acid (PAAc) [20, 24, 25,
26], which is pH-responsive due to the weak acidic nature of AAc monomers.
An increasing amount of work in the last years has focused on modelling
single-particle behaviour both of neutral [27, 28, 29] and ionic microgels
[30, 31, 32, 33]. For the latter case, we have recently shown [33] that it is
important to take into account both the disordered nature of the network, as
opposed to the diamond lattice structure [29], and an explicit treatment of
charges and counterions. Indeed, mean-field approaches completely neglect the
complex, heterogeneous nature of the charge distribution within the microgel.
In this work, we extend our previous effort by going one step further towards
a realistic numerical treatment of ionic co-polymerized microgels. In Ref.
[33], we modelled a single microgel particle such that all of its monomers,
including charged ones, experienced a solvophobic attraction on increasing
temperature. Here, instead, charged monomers experience Coulomb and steric
interactions only. This is expected to be more realistic, since charged or
polar groups always remain hydrophilic irrespectively of temperature, thus
having a distinct behaviour with respect to all other monomers. We examine the
consequences of this difference on the microgel swelling behaviour as well as
on its structure and charge distributions across the VPT. In the second part
of the manuscript, we consider the presence of an explicit solvent, to examine
its effects on the structural properties of the microgel. In this way, we aim
to make our model suitable for situations where solvent effects become
important. In particular, this will enable us to study the effect of charges
for microgels adsorbed at liquid-liquid interfaces, similarly to what we
recently put forward for neutral microgels [34, 35].
## 2 Methods
The coarse-grained microgels used in this work are prepared as in Ref. [27]
starting from $N$ patchy particles of diameter $\sigma$, which sets the unit
of length, confined in a spherical cavity. A fraction $c=0.05$ of these
particles has four patches on their surface to mimic the typical crosslinker
connectivity in a chemical synthesis, while all the others have two patches to
represent monomers in a polymer chain. During the assembly, an additional
force is applied to the crosslinking particles to increase their
concentration in the core of the microgel in agreement with experimental
features [18]. Once a fully-bonded configuration is reached (when the fraction
of formed bonds is greater than $0.995$), a permanent topology is obtained by
enforcing the Kremer-Grest bead-spring model [36]. In this way, all particles
experience a steric repulsion via the Weeks-Chandler-Andersen (WCA) potential,
$V_{\rm
WCA}(r)=\begin{cases}4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{\sigma}{r}\right)^{6}\right]+\epsilon&\text{if
$r\leq 2^{\frac{1}{6}}\sigma$}\\\ 0&\text{otherwise,}\end{cases}$ (1)
where $\epsilon$ sets the energy scale and $r$ is the distance between the
centers of two beads. Additionally, bonded particles interact via the Finitely
Extensible Nonlinear Elastic potential (FENE),
$V_{\rm FENE}(r)=-\epsilon
k_{F}R_{0}^{2}\ln\left[1-\left(\frac{r}{R_{0}\sigma}\right)^{2}\right]\text{
if $r<R_{0}\sigma$,}$ (2)
with $k_{F}=15$ which determines the stiffness of the bond and $R_{0}=1.5$
which determines the maximum bond distance. The resulting microgel is thus
constituted by a disordered polymer network with a core-corona structure and
form factors across the VPT that closely resemble experimental ones for
microgels synthesized via the precipitation polymerization procedure [18].
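As an illustration (our sketch; the actual simulations use LAMMPS), Eqs. (1)-(2) with the stated parameters $k_F=15$, $R_0=1.5$ and reduced units $\sigma=\epsilon=1$ can be written as:

```python
import numpy as np

SIGMA, EPS = 1.0, 1.0       # units of length and energy
KF, R0 = 15.0, 1.5          # FENE stiffness and maximum bond distance

def v_wca(r):
    """Purely repulsive WCA potential, Eq. (1): a Lennard-Jones curve
    shifted up by eps and truncated at its minimum r = 2^(1/6) sigma."""
    r = np.asarray(r, dtype=float)
    sr6 = (SIGMA / r) ** 6
    v = 4.0 * EPS * (sr6 ** 2 - sr6) + EPS
    return np.where(r <= 2.0 ** (1.0 / 6.0) * SIGMA, v, 0.0)

def v_fene(r):
    """Attractive FENE bond potential, Eq. (2), defined for r < R0*sigma."""
    r = np.asarray(r, dtype=float)
    return -EPS * KF * R0 ** 2 * np.log(1.0 - (r / (R0 * SIGMA)) ** 2)

# A Kremer-Grest bond feels WCA + FENE; for these parameters the sum
# has its minimum near r ~ 0.96 sigma, which sets the equilibrium
# bond length of the network.
r = np.linspace(0.8, 1.4, 601)
v_bond = v_wca(r) + v_fene(r)
r_min = r[np.argmin(v_bond)]
```

The diverging FENE term at $r\to R_0\sigma$ makes bonds finitely extensible, which is what enforces the permanent topology of the network.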
Once the microgels are prepared, we add electrostatic interactions to mimic
co-polymerized polymer networks with charged groups. To this aim, we randomly
assign a negative charge $-e^{*}$ to a given fraction $f$ of microgel
monomers, to mimic the acrylic acid dissociation in water, where
$e^{*}=\sqrt{4\pi\varepsilon_{0}\varepsilon_{r}\sigma\epsilon}$ is the reduced
unit charge (which roughly corresponds to the elementary charge $e$,
considering $\epsilon\approx k_{B}T$ at room temperature and $\sigma$ as the
polymer’s Kuhn length), and $\varepsilon_{0}$ and $\varepsilon_{r}$ are the
vacuum and relative dielectric constants. Accordingly, we insert in the
simulation box an equal number of positively charged counterions with charge
$e^{*}$ to ensure the neutrality of the system. Interactions among charged
beads are given by the reduced Coulomb potential
$V_{\rm coul}(r)=\frac{q_{i}q_{j}\sigma}{e^{*2}r}\epsilon,$ (3)
where $q_{i}$ and $q_{j}$ are the charges of counterions or charged monomers.
We adopt the particle-particle-particle-mesh method [37] as a long-range
solver for the Coulomb interactions. As discussed in a previous contribution
[33], the size of the counterions is set to $0.1\sigma$ to facilitate their
diffusion within the microgel network and to avoid spurious excluded volume
effects. They interact with all other species simply via the WCA potential.
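In the reduced units above (charges in units of $e^{*}$, distances in units of $\sigma$, energies in units of $\epsilon$), Eq. (3) reduces to a bare $1/r$ pair energy. A minimal sketch of ours (the long-range part is handled by the PPPM solver in the actual simulations):

```python
def v_coulomb(r, qi, qj):
    """Reduced Coulomb pair energy of Eq. (3): charges qi, qj in units
    of e*, distance r in units of sigma, energy in units of epsilon."""
    return qi * qj / r

# e.g. a charged monomer (-1) and a counterion (+1) at contact distance 1.0
# attract with energy -1 in reduced units.
```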
The swelling behaviour of a thermoresponsive microgel can be reproduced in
molecular dynamics simulations either by means of an implicit solvent, namely
by adding a potential that mimics the effect of the temperature on the
polymer, or by explicitly adding coarse-grained solvent particles within the
box. In the first case, we employ a solvophobic potential
$V_{\alpha}(r)=\begin{cases}-\epsilon\alpha&\text{if }r\leq 2^{1/6}\sigma\\\
\frac{1}{2}\alpha\epsilon\left\\{\cos\left[\gamma{\left(\frac{r}{\sigma}\right)}^{2}+\beta\right]-1\right\\}&\text{if
}2^{1/6}\sigma<r\leq R_{0}\sigma\\\ 0&\text{if }r>R_{0}\sigma\end{cases}$ (4)
with $\gamma=\pi\left(\frac{9}{4}-2^{1/3}\right)^{-1}$ and
$\beta=2\pi-\frac{9}{4}\gamma$ [38]. This potential introduces an effective
attraction among polymer beads, modulated by the parameter $\alpha$, whose
increase corresponds to an increase in the temperature of the dispersion. For
$\alpha=0$ no attraction is present, which corresponds to fully swollen, i.e.
low-temperature, conditions. For neutral microgels, the VPT is encountered at
$\alpha\approx 0.65$, while a full collapse is usually reached for
$\alpha\gtrsim 1.2$.
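A sketch of ours of Eq. (4) with the stated $\gamma$ and $\beta$; by construction the three branches join continuously, taking the value $-\epsilon\alpha$ at the WCA cutoff $2^{1/6}\sigma$ and vanishing at $R_0\sigma$:

```python
import numpy as np

SIGMA, EPS, R0 = 1.0, 1.0, 1.5
GAMMA = np.pi / (9.0 / 4.0 - 2.0 ** (1.0 / 3.0))
BETA = 2.0 * np.pi - 9.0 / 4.0 * GAMMA
RCUT = 2.0 ** (1.0 / 6.0) * SIGMA        # WCA cutoff

def v_alpha(r, alpha):
    """Solvophobic attraction of Eq. (4); alpha plays the role of an
    effective temperature (alpha = 0: good solvent, no attraction)."""
    r = np.asarray(r, dtype=float)
    mid = 0.5 * alpha * EPS * (np.cos(GAMMA * (r / SIGMA) ** 2 + BETA) - 1.0)
    return np.where(
        r <= RCUT, -EPS * alpha,
        np.where(r <= R0 * SIGMA, mid, 0.0),
    )
```

In Model II one would simply evaluate this with $\alpha=0$ for any pair involving a charged monomer, leaving only WCA (and Coulomb) interactions for those beads.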
In the first part of this work, we analyze two different models, based on a
different treatment of the interactions of charged ions on the microgels: in
Model I all monomers experience a total interaction potential where
$V_{\alpha}$ (Eq. 4) is added to the Kremer-Grest interactions, as previously
done in Ref. [33]; in Model II only neutral monomers experience this total
interaction potential, while the charged monomers do not interact with
$V_{\alpha}$, i.e. $\alpha=0$ for them in all cases. This second situation is
equivalent to leaving unaltered the behaviour of charged groups of the
microgel as the solvent conditions change, so that they always retain a good
affinity for the solvent ($\alpha=0$). A similar treatment is also adopted for
the counterions, for which $\alpha=0$ for both Model I and Model II.
In the second part of this work, we explicitly consider the presence of the
solvent in driving the Volume Phase Transition. Solvent particles are modelled
within the Dissipative Particle Dynamics (DPD) framework in order to avoid
spurious effects which may arise from the use of a standard Lennard-Jones
potential due to the excessive excluded volume of the solvent [39]. In the DPD
scheme, two particles $i$ and $j$ experience a force
$\vec{F}_{ij}=\vec{F}^{C}_{ij}+\vec{F}^{D}_{ij}+\vec{F}^{R}_{ij}$, where:
$\vec{F}^{C}_{ij}=a_{ij}\,w(r_{ij})\,\hat{r}_{ij}$ (5)
$\vec{F}^{D}_{ij}=-\gamma\,w^{2}(r_{ij})(\vec{v}_{ij}\cdot\hat{r}_{ij})\,\hat{r}_{ij}$ (6)
$\vec{F}^{R}_{ij}=2\gamma\frac{k_{B}T}{m}\,w(r_{ij})\,\frac{\theta}{\sqrt{\Delta t}}\,\hat{r}_{ij}$ (7)
where $\vec{F}^{C}_{ij}$ is a conservative repulsive force, with
$w(r_{ij})=1-r_{ij}/r_{c}$ for $r_{ij}<r_{c}$ and $0$ elsewhere,
$\vec{F}^{D}_{ij}$ and $\vec{F}^{R}_{ij}$ are a dissipative and a random
contribution of the DPD, respectively; $\gamma$ is a friction coefficient,
$\theta$ is a Gaussian random variable with average $0$ and unit variance, and
$\Delta t$ is the integration time-step. We set $r_{c}=1.75\sigma$ and
$\gamma=2.0$ in all the simulations. Here $a_{ij}$ quantifies the repulsion
between two particles $i$ and $j$, which effectively allows the tuning of the
monomer-solvent (m,s) and solvent-solvent (s,s) interactions. Following our
previous work [39], we fix $a_{s,s}=25.0$ while we vary $a_{m,s}\equiv a$
between 5.0 and 16.0, which is the range where the collapse of a neutral
microgel takes place. The reduced DPD density is set to
$\rho_{s}r_{c}^{3}=3.9$ (with $\rho_{s}$ the actual number density of solvent
beads). With this choice of parameters, we previously showed that this model
reproduces the swelling behaviour and structural properties of a neutral
microgel particle, in quantitative agreement with the implicit solvent model
that was explicitly tested against experiments [18]. To compare the explicit
solvent model with the implicit one, we only consider Model II, where charged
monomers always retain a high affinity for the solvent. We will show later
that, in the explicit treatment, this translates to having charged monomer-
solvent interactions (ch,s) always set to $a_{ch,s}=0$. All other interactions
are identical to the implicit solvent model.
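A minimal per-pair sketch of ours of Eqs. (5)-(7) (the production runs use the LAMMPS DPD implementation; the random-force prefactor is reproduced as printed in Eq. (7), with the paper's parameters $r_c=1.75\sigma$, $\gamma=2$, $m=1$, $\Delta t=0.002$):

```python
import numpy as np

RC, GAMMA_DPD, KT, MASS, DT = 1.75, 2.0, 1.0, 1.0, 0.002

def dpd_pair_force(rij_vec, vij_vec, a_ij, rng):
    """Total DPD pair force F^C + F^D + F^R of Eqs. (5)-(7) acting on
    bead i due to bead j; rij_vec = r_i - r_j, vij_vec = v_i - v_j."""
    rij = np.linalg.norm(rij_vec)
    if rij >= RC:                    # w(r) = 0 beyond the cutoff
        return np.zeros(3)
    rhat = rij_vec / rij
    w = 1.0 - rij / RC               # linear weight function w(r)
    fc = a_ij * w * rhat             # conservative, soft repulsion
    fd = -GAMMA_DPD * w ** 2 * np.dot(vij_vec, rhat) * rhat  # dissipative
    theta = rng.standard_normal()    # zero-mean, unit-variance noise
    fr = 2.0 * GAMMA_DPD * (KT / MASS) * w * theta / np.sqrt(DT) * rhat
    return fc + fd + fr
```

Tuning `a_ij` for monomer-solvent pairs (the parameter $a$ in the text) plays the same role as $\alpha$ in the implicit-solvent model, while charged monomer-solvent pairs keep $a_{ch,s}=0$ in Model II.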
Simulations are performed with LAMMPS [40]. The equations of motion are
integrated with the velocity-Verlet algorithm. All particles have unit mass
$m$, the integration time-step is $\Delta t=0.002\sqrt{m\sigma^{2}/\epsilon}$
and the reduced temperature $T^{*}=k_{B}T/\epsilon$ is set to 1.0 by means of
a Nosé-Hoover thermostat for implicit solvent simulations or via the DPD
thermostat for explicit solvent ones. In the former case the number of
monomers in the microgels is fixed to $N\approx 42000$, while for the latter
case we limit the analysis to $N\approx 5000$ due to the very large
computational times owing to the presence of the solvent.
From equilibrated trajectories, we directly calculate the form factor of the
microgel as,
$P(q)=\left\langle\frac{1}{N}\sum_{i,j=1}^{N}\exp{(-i\vec{q}\cdot\vec{r}_{ij})}\right\rangle,$
(8)
where $r_{ij}$ is the distance between monomers $i$ and $j$, while the angular
brackets indicate an average over different configurations and over different
orientations of the wavevector $\vec{q}$ (for each $q$ we consider $300$
distinct directions randomly chosen on a sphere of radius $q$).
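The double sum in Eq. (8) equals $|\sum_{i}e^{-i\vec{q}\cdot\vec{r}_{i}}|^{2}$, which is how it is evaluated in practice. A sketch of ours of the orientational average over random directions:

```python
import numpy as np

def form_factor(pos, q_mods, n_dirs=300, seed=0):
    """P(q) of Eq. (8) for one configuration pos (N x 3 array): for each
    modulus |q|, average |sum_i exp(-i q.r_i)|^2 / N over n_dirs random
    directions on the sphere of radius q."""
    rng = np.random.default_rng(seed)
    n = len(pos)
    pq = np.empty(len(q_mods))
    for k, q in enumerate(q_mods):
        u = rng.standard_normal((n_dirs, 3))
        u /= np.linalg.norm(u, axis=1, keepdims=True)   # random unit vectors
        phases = np.exp(-1j * q * (u @ pos.T))          # shape (n_dirs, N)
        amp = phases.sum(axis=1)                        # sum_i exp(-i q.r_i)
        pq[k] = np.mean(np.abs(amp) ** 2) / n
    return pq
```

As a consistency check, $P(q)\to N$ for $q\to 0$ and $P(q)=1$ for a single bead, which the sketch reproduces.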
Also, we determine the radius of gyration $R_{g}$ of the microgels as a
measure of their swelling degree. This is calculated as
$R_{g}=\left\langle\left[\frac{1}{N}\sum_{i=1}^{N}(\vec{r}_{i}-\vec{r}_{CM})^{2}\right]^{\frac{1}{2}}\right\rangle,$
(9)
where $\vec{r}_{CM}$ is the position of the center of mass of the microgel.
For each swelling curve, representing $R_{g}$ as a function of the effective
temperature $\alpha$ (implicit solvent) or $a$ (explicit solvent), we define
an effective VPT temperature, either $\alpha_{\scriptscriptstyle\mathit{VPT}}$
or $a_{\scriptscriptstyle\mathit{VPT}}$, as the inflection point of a cubic
spline interpolating the simulation points.
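A sketch of ours of Eq. (9) and of the inflection-point criterion, assuming a standard cubic-spline routine such as SciPy's (for a sigmoidal, decreasing swelling curve the inflection is the point of steepest descent, i.e. the minimum of the spline's first derivative):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def radius_of_gyration(pos):
    """Rg of Eq. (9) for a single configuration pos (N x 3 array)."""
    com = pos.mean(axis=0)                      # center of mass r_CM
    return np.sqrt(((pos - com) ** 2).sum(axis=1).mean())

def vpt_from_swelling(alphas, rgs):
    """Effective VPT point: inflection of a cubic spline through the
    (alpha, Rg) simulation points, located as the minimum of dRg/dalpha."""
    cs = CubicSpline(alphas, rgs)
    grid = np.linspace(alphas[0], alphas[-1], 2001)
    return grid[np.argmin(cs(grid, 1))]         # nu=1: first derivative
```

The same routine applies verbatim to the explicit-solvent curves, with $a$ in place of $\alpha$.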
Finally, the radial density profile for all monomers is defined as
$\rho(r)=\left\langle\frac{1}{N}\sum_{i=1}^{N}\delta(|\vec{r}_{i}-\vec{r}_{CM}|-r)\right\rangle.$
(10)
By restricting the sum in Eq. 10 to only charged monomers or to counterions,
we also calculate $\rho_{CH}(r)$ and $\rho_{CI}(r)$, that are the radial
density profiles of charged microgel monomers and of counterions,
respectively. In addition, we define the net charge density profile as
$\rho_{Q}(r)=-\rho_{CH}(r)+\rho_{CI}(r).$ (11)
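As written, Eq. (10) is the normalized distribution of radial distances (no $4\pi r^{2}$ shell factor appears in the formula). A sketch of ours for a single configuration:

```python
import numpy as np

def radial_profile(pos, n_bins=50, r_max=None):
    """Radial profile of Eq. (10): histogram of distances |r_i - r_CM|,
    normalized by N and the bin width so that it integrates to 1 in r."""
    com = pos.mean(axis=0)
    d = np.linalg.norm(pos - com, axis=1)
    if r_max is None:
        r_max = d.max()
    hist, edges = np.histogram(d, bins=n_bins, range=(0.0, r_max))
    dr = edges[1] - edges[0]
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist / (len(pos) * dr)

# Passing only the charged-monomer or counterion coordinates as pos gives
# rho_CH and rho_CI, from which the net charge profile of Eq. (11) follows.
```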
## 3 Results and Discussion
Figure 1: Simulation snapshots. Ionic microgels with $f=0.2$ and $N\approx
42000$ obtained in implicit solvent for (top) Model I and (bottom) Model II
for $\alpha=0,0.74$ and $1.4$ (from left to right panels). Blue (cyan)
particles are neutral (charged) microgel monomers; smaller purple particles
represent counterions.
### 3.1 On the role of the affinity of charged monomers for the solvent
#### 3.1.1 Swelling behaviour
Figure 2: Swelling curves. Radius of gyration $R_{g}$ as a function of the
solvophobic parameter $\alpha$ and different values of $f$ for microgels with
$N\approx 42000$ for the case where charged monomers (a) have a varying
affinity for the solvent (Model I) and (b) have always a good affinity for the
solvent (Model II). Figure 3: Form factors. Form factors for charged
microgels with $N\approx 42000$ and (top) $f=0.032$ and (bottom) $f=0.2$,
simulated in implicit solvent for Models I and II. The models are compared at
the same $\alpha$: for $f=0.032$, $\alpha=0,0.48,0.64,0.8,1.4$; for $f=0.2$,
$\alpha=0,0.6,0.74,1.0,1.4$ (from left to right). The corresponding neutral
case ($f=0$) is also displayed for comparison. Straight lines in the central
panel of the bottom row highlight the two power-law behaviours of the form
factors at intermediate (full line) and high (dashed line) $q$ values, that
are present for both models, extensively discussed in Ref. [33].
We start by discussing the influence of charges on the VPT for microgels with
$N\approx 42000$ in implicit solvent. As explained in the Methods section, we
directly compare the case where the affinity of charged beads for the solvent
varies (Model I) to the case where it remains unchanged (Model II).
Representative simulation snapshots of the two models for the highest value of
charge fraction investigated in this work ($f=0.2$) are reported in Fig. 1.
Here we focus on different swelling stages of the microgels upon increasing
$\alpha$. We notice immediately that a large amount of inhomogeneities
persists in Model II at large $\alpha$, in contrast with the behavior of Model
I where a full collapse is achieved. This can be better quantified by the
swelling curves, reporting the variation of the radius of gyration $R_{g}$
versus the effective temperature $\alpha$, that are shown in Fig. 2 for
different values of the charge fraction $f$. For both models we observe that
the increase of $f$ shifts the transition towards larger effective
temperatures, but important differences arise at large $\alpha$, as displayed
in the snapshots. In Model I, where charged beads experience Coulomb as well
as solvophobic interactions, the VPT occurs at all studied $f$, as shown in
Fig. 2(a). Using the $\alpha$-temperature mapping established in Ref. [18]
through a comparison to experiments, the VPT temperature observed for $f=0.2$
microgels would correspond to $T\approx 38^{\circ}$C. However, experiments on
ionic microgels, for which the amount of charges was systematically varied
[41, 26], have shown that even for values of $f$ smaller than $0.2$, the
microgel does not collapse below $40^{\circ}$C.
As hypothesized in our earlier work [33], Model I neglects the interplay
between the hydrophilic character of the co-polymer and its charge content.
However, charged or polar groups, such as AAc groups, are known to remain
hydrophilic even at high temperatures [42], which would increase the stability
of the microgel in the swollen state with increasing $f$. We thus incorporate
such a feature in Model II by removing solvophobic interactions for charged
microgel beads. The resulting swelling curves, shown in Fig. 2(b), clearly
demonstrate that for $f=0.20$ the VPT is not encountered within the
investigated solvophobicity range, up to values of $\alpha$ that would
correspond to temperatures above $50^{\circ}$C, in qualitative agreement with
experimental observations [41, 26, 43].
#### 3.1.2 Structural properties
It is now important to compare the two models from the structural point of
view, to check whether major differences arise. We start the analysis by
looking at the form factors which, in our previous work on Model I [33], were
shown to exhibit novel features with respect to neutral microgels. In
particular, we found evidence that for
$\alpha<\alpha_{\scriptscriptstyle\mathit{VPT}}$, the standard fuzzy-sphere-
like model was not able to describe the numerical form factors. Instead, the
emergence of two distinct power-law behaviours was found immediately after the
first peak, at intermediate and high $q$ values, respectively [33]. This was
attributed to the presence of charges in the inhomogeneous structure of the
microgel, which gives rise to different features for core and corona regions,
each being characterized by a different domain size. It is now crucial to
verify whether such distinctive behaviour also persists when the interactions
among charged beads are modelled more realistically.
Figure 4: Density profiles. Top panels show the monomers density profiles for
an ionic microgel with $f=0.2$ and $N\approx 42000$ as a function of the
distance from the microgel center of mass $r$ obtained in implicit solvent for
Models I and II. Bottom panels report the ions and counterions (c-ions)
density profiles for $f=0.2$ for both models. The models are compared at the
same $\alpha$: $0,0.6,0.74,1.0,1.4$ (from left to right). The corresponding
neutral case ($f=0$) is also displayed for comparison. Figure 5: Comparison
of Models I and II at the same $R_{g}$. Radial density profiles for an ionic
microgel with $f=0.2$ and $N\approx 42000$ at $R_{g}\approx 21$, where
$\alpha=0.9$ and $\alpha=1.4$ for Models I and II, respectively. The inset
shows the corresponding ions and counterions (c-ions) density profiles.
Figure 6: Charge density profile. Net charge density profile $\rho_{Q}(r)$ as
defined in Eq. 11 for ionic microgels with $N\approx 42000$ and (top)
$f=0.032$ , (bottom) $f=0.2$, as a function of the distance from the microgel
center of mass $r$, simulated in implicit solvent for Models I and II. The
models are compared at the same $\alpha$: for $f=0.032$,
$\alpha=0,0.48,0.64,0.8,1.4$; for $f=0.2$, $\alpha=0,0.6,0.74,1.0,1.4$ (from
left to right panels).
Fig. 3 reports the form factors for Models I and II with $f=0.032$ and
$f=0.2$, in comparison to the neutral case ($f=0$), at different values of
$\alpha$. For $f=0.032$, the amount of charges in the microgel is still too
low to observe differences between the two models and the neutral microgel.
Also, at large $\alpha$, the form factor is that of a collapsed microgel in
all cases, as expected from the swelling curves in Fig. 2. For $f=0.2$ and low
enough $\alpha$, the behaviour of the two models is again very similar, with
the form factors of ionic microgels showing a first peak that is
systematically smaller with respect to that of the neutral case. At
intermediate $\alpha$, we find that two power-law-like behaviours are
compatible with both sets of data for charged microgels, while the neutral
case does not show such a feature. This finding, already elaborated in Ref.
[33], appears to be a distinctive feature of our numerical model of ionic
microgels and is the result of the combination of a random charge distribution
within a disordered, heterogeneous network topology with the explicit
treatment of ions and counterions. Such a distinctive feature was tentatively
attributed to the different degree of swelling of the corona and of the core,
but still awaits a direct experimental confirmation. However, hints of a
similar two-step decay for $P(q)$ were reported in Ref. [44] and would
certainly deserve further investigation in future experiments.
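As an illustration of the observable discussed here, the orientationally averaged form factor of a bead configuration can be estimated with the Debye formula, $P(q)=N^{-2}\sum_{ij}\sin(qr_{ij})/(qr_{ij})$. The following is a minimal brute-force sketch, not the code used in this work; the function name and the $1/N^{2}$ normalization are our own choices:

```python
import math

def form_factor(positions, q_values):
    """Orientationally averaged form factor via the Debye formula:
    P(q) = (1/N^2) * sum_{i,j} sin(q r_ij) / (q r_ij),
    with the i == j terms contributing 1 each."""
    n = len(positions)
    pq = []
    for q in q_values:
        total = 0.0
        for i in range(n):
            for j in range(n):
                r = math.dist(positions[i], positions[j])
                total += 1.0 if r == 0.0 else math.sin(q * r) / (q * r)
        pq.append(total / n ** 2)
    return pq
```

With this normalization $P(q)\to 1$ as $q\to 0$. For realistic microgels with $N\approx 42000$ beads the double loop is far too slow; production analyses typically vectorize the pair distances or subsample bead pairs.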
On the other hand, major differences between the two charged models arise for
large values of $\alpha$. Indeed, in Model I the microgel approaches and
crosses the VPT leading to a fully collapsed state, while in Model II it
remains in a quasi-swollen configuration for all studied $\alpha$.
Consequently, for high $\alpha$ values, the form factor does not resemble that
of a homogeneous sphere, with only a second peak becoming evident, as opposed
to the neutral case where many sharp peaks emerge. We notice that Model I
fully coincides with the neutral case for very large $\alpha$, even for
$f=0.2$.
In Fig. 4, we compare the monomers density profiles for the two models as a
function of $\alpha$. These data further indicate that, for Model II, the
microgel does not achieve a collapsed state, as also visible from the
behaviour of the profiles of charged monomers and of counterions,
respectively. These are reported in the bottom panels of Fig. 4, showing that,
for both models, the counterions are always found to be very close to the
charged monomers, in order to neutralize the overall charge of the microgel.
However, all profiles remain much more extended for Model II as compared to
Model I, for all $\alpha$. We stress that the comparison is performed for
microgels with different affinity of the charged monomers for the solvent at
the same $\alpha$, which corresponds to very different swelling conditions, as
evident from Fig. 2. Additional information can be extracted by comparing the
two cases for a similar value of $R_{g}$, as reported in Fig. 5. Also in this
case, we find that Model II displays a more slowly decaying radial profile,
albeit having a very similar mass distribution with respect to Model I, which
is due to the presence of more stretched external dangling chains. Similar
results also apply to the ion and counterion profiles, which are shown in the
inset of Fig. 5: even at the same $R_{g}$, there is a surplus of charges at
the surface in the case where the affinity of charged monomers for the solvent
does not change with the effective temperature (Model II). Overall, these
findings confirm an enhanced stabilization of the swollen configuration
operated by the charged groups of the microgel, hindering the tendency of the
remaining (neutral) monomers to collapse.
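The radial profiles compared above can be obtained from bead coordinates with a spherical-shell histogram around the centre of mass. The following is a minimal sketch (equal bead masses are assumed and the function names are illustrative, not the code used in this work), including the radius of gyration used for the equal-$R_g$ comparison:

```python
import math

def radius_of_gyration(positions):
    """Rg^2 = <|r_i - r_cm|^2> for equal-mass beads."""
    n = len(positions)
    com = [sum(p[k] for p in positions) / n for k in range(3)]
    return math.sqrt(sum(math.dist(p, com) ** 2 for p in positions) / n)

def radial_density_profile(positions, r_max, n_bins):
    """Number density rho(r) around the centre of mass,
    from a histogram of spherical shells of width r_max / n_bins."""
    n = len(positions)
    com = [sum(p[k] for p in positions) / n for k in range(3)]
    dr = r_max / n_bins
    counts = [0] * n_bins
    for p in positions:
        b = int(math.dist(p, com) / dr)
        if b < n_bins:
            counts[b] += 1
    rho = []
    for b in range(n_bins):
        shell_volume = 4.0 / 3.0 * math.pi * ((b + 1) ** 3 - b ** 3) * dr ** 3
        rho.append(counts[b] / shell_volume)
    return rho
```

Restricting the histogram to the charged beads, the counterions, or the signed charges yields the ion and net-charge profiles of Figs. 4-6 in the same way.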
To complete the structural analysis of the two models, it is instructive to
consider the net charge density profile inside the microgels, which is reported
in Fig. 6 for both $f=0.032$ and $f=0.2$. We confirm that, for both models,
the net charge of the core region is roughly zero. However, it was shown in
Ref. [33] that in the collapsed configuration a charged double layer arises at
the surface of microgels, signalling the onset of a charge imbalance that
grows with $\alpha$. This feature, which is clearly visible in the behaviour of
Model I at high $\alpha$ for all values of $f$, is also present for Model II
for the low charge case ($f=0.032$). However, the double peak in the net
charge distribution is smeared out for $f=0.2$, due to the fact that, up to
the largest explored values of $\alpha$, the microgel does not fully collapse.
In this way, it maintains a low concentration of charged beads, which is always
roughly balanced by counterions, resulting in a rather uniform charge profile.
Instead, the peaks at the surface appear when the microgel collapses: this is
indeed the case for both models at low charge fraction and even for large $f$
when charged monomers are assigned a solvophobic behaviour (Model I).
We conclude from this analysis that the hydrophilicity of the charged monomers
at all effective temperatures enhances the tendency of the microgel to remain
swollen, even when most of the monomers experience a very large solvophobic
attraction. Thanks to the charge neutralization operated by counterions, the
microgel remains very stable in a rather swollen configuration up to very
large $\alpha$, avoiding collapse for large enough values of $f$. This
scenario agrees well with experimental observations, where the suppression of
the VPT [26, 43, 42] is found when the concentration of charged hydrophilic
groups in the polymer network is large enough. These considerations imply that
Model I should not be used to describe microgels with high charge content.
Indeed, its identical treatment of the solvophilic character of both neutral
and charged monomers leads the particle to collapse at extremely high
$\alpha$. Incidentally, we report that this was observed also for unrealistic
values of $f$ up to $0.4$ (not shown), in evident contrast with experiments.
We will thus rely on Model II in the future to correctly incorporate charge
effects in modelling microgels in a realistic fashion.
### 3.2 Solvent effects
We now go one step further in modelling ionic microgels, by explicitly adding
the solvent to the simulations. This is a necessary prerequisite to tackle
phenomena that cannot be described with an implicit solvent, e.g. situations
in which hydrodynamics or surface tension effects at a liquid-liquid interface
[34] play a fundamental role. In this subsection, we compare results for
swelling behaviour and structural properties of the microgels for implicit and
explicit solvent simulations. In particular, we restrict our discussion to
Model II, having established this to be more in line with experimental
observations. Since simulations with an explicit solvent require a much higher
computational effort, we limit the following discussion to microgels with
$N\approx 5000$.
#### 3.2.1 Swelling curves and explicit-implicit ($a$-$\alpha$) mapping
We start by reporting the swelling curves of charged microgels, stressing the
point that they have been obtained by fixing the value of $a_{ch,s}$, which
tunes the solvophilic properties of charged beads and counterions. We find
that, by setting $a_{ch,s}=0$ while $a_{m,s}\equiv a$ varies, the explicit
model is essentially equivalent to the implicit one. This means that it is possible
to find a relation that links every implicit system with a certain value of
the solvophobic attraction $\alpha$ to an explicit one with solvophobic
parameter $a$ that shows the same structure and swelling properties.
Figure 7: Implicit-explicit solvent mapping and swelling curves. (a) Mapping
between $\alpha$ and $a$ obtained by comparing neutral microgels with implicit
(Model II) and explicit solvents: the linear mapping is expressed by Eq. 12
and the numerical mapping via Eq. 13; (b-e) Normalized radius of gyration
$R_{g}/R_{g,max}$ as a function of the swelling parameter $a$ for microgels
with different charge content: (b) neutral, (c) $f=0.032$, (d) $f=0.1$ and (e)
$f=0.2$, for explicit (full lines and filled diamonds) and implicit solvent
conditions (rescaled along the horizontal axis using the linear mapping
$a_{\text{lin}}(\alpha)$, dashed lines and empty squares, and using the
numerical mapping $a_{\text{num}}(\alpha)$, full lines and filled squares).
The present figure and the following ones refer to the same microgel topology
with $N\approx 5000$.
In order to establish such a $a$-$\alpha$ mapping, we explored two different
routes. The first one, referred to as linear mapping in the following, is
based on the assumption that the dependence of $a$ on $\alpha$ is linear, as
previously adopted for neutral microgels [39]. In this way, the mapping
relation is obtained through a horizontal rescaling of the relative swelling
curves $R_{g}^{\text{imp}}(\alpha)/R_{g}^{\text{imp}}(\alpha=0)$ and
$R_{g}^{\text{exp}}(a)/R_{g}^{\text{exp}}(a=0)$ for the neutral implicit and
explicit microgels onto each other. Specifically, given two points for each
curve, $(a_{1},a_{2})$ and ($\alpha_{1},\alpha_{2}$), the rescaled
$x$-coordinate is calculated using the following relationship:
$a_{\text{lin}}(\alpha)=\left(\alpha-\langle\alpha\rangle\right)\Delta
a/\Delta\alpha+\langle a\rangle$ (12)
where $\langle x\rangle=0.5(x_{1}+x_{2})$ and $\Delta x=x_{1}-x_{2}$ with
$x=a,\alpha$. The second mapping $a_{\text{num}}(\alpha)$, referred to as
numerical mapping, has been obtained by numerically inverting the equation
$R_{g}^{\text{imp}}(\alpha)/R_{g}^{\text{imp}}(\alpha=0)=R_{g}^{\text{exp}}(a)/R_{g}^{\text{exp}}(a=0),$
(13)
after spline fitting the two swelling curves. We report both mapping relations
in Fig. 7(a), finding that they fall onto each other for almost the entire
range of investigated solvophobic parameters in the two models, confirming the
overall correctness of the assumption of linearity. However, we find some
differences in the region $\alpha>1.0$ ($a>15$). Having established the
mapping for neutral microgels, we now use it to directly remap also the
results for ionic microgels for all studied $f$ without any further
adjustments.
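The two mapping routes can be sketched as follows. Eq. 12 is reproduced verbatim from two anchor points on each swelling curve, while Eq. 13 is inverted numerically; in this sketch we use piecewise-linear interpolation in place of the spline fits, and the swelling curves in the usage example are toy data, not those of the paper:

```python
def linear_mapping(alpha, anchors_alpha, anchors_a):
    """Eq. 12: a_lin(alpha) = (alpha - <alpha>) * (Delta a / Delta alpha) + <a>,
    with <x> = 0.5 * (x1 + x2) and Delta x = x1 - x2."""
    al1, al2 = anchors_alpha
    a1, a2 = anchors_a
    mean_al, mean_a = 0.5 * (al1 + al2), 0.5 * (a1 + a2)
    return (alpha - mean_al) * (a1 - a2) / (al1 - al2) + mean_a

def numerical_mapping(alpha, alpha_grid, rg_implicit, a_grid, rg_explicit):
    """Eq. 13: find a such that
    Rg_exp(a) / Rg_exp(a=0) = Rg_imp(alpha) / Rg_imp(alpha=0).
    Assumes both swelling curves decrease monotonically."""
    def interp(x, xs, ys):
        # piecewise-linear interpolation; xs must be increasing
        for i in range(len(xs) - 1):
            if xs[i] <= x <= xs[i + 1]:
                t = (x - xs[i]) / (xs[i + 1] - xs[i])
                return ys[i] + t * (ys[i + 1] - ys[i])
        raise ValueError("x outside grid")
    target = interp(alpha, alpha_grid, rg_implicit) / rg_implicit[0]
    norm = [r / rg_explicit[0] for r in rg_explicit]
    # invert the decreasing explicit curve: interpolate a against Rg/Rg(0)
    return interp(target, list(reversed(norm)), list(reversed(a_grid)))
```

For example, with identical normalized toy curves sampled on $\alpha\in[0,1]$ and $a\in[0,14]$, both routes return the same $a$ for a given $\alpha$, as expected when the linearity assumption holds exactly.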
#### 3.2.2 Swelling behaviour
The normalized swelling curves with varying charge fraction $f$, comparing
implicit and explicit solvent, are reported in Fig. 7(b-e). Data from implicit
simulations are mapped via both methods described above. For the neutral case,
the presence of the solvent does not affect the swelling behaviour, as shown
in Fig. 7(b), where no appreciable differences are found between linear and
numerical mapping even at high $\alpha$. Applying the same relations to
charged microgels in explicit and implicit solvent, we find that, remarkably,
the mapping holds for all charge contents. The
swelling curves are virtually identical, which ensures that the inclusion of
the solvent does not alter the microgel behavior in temperature even in the
presence of charges. Small deviations, as expected from Fig. 7(a), appear only
at large $\alpha$ values, being more pronounced for high charge content. This
confirms the robustness of the DPD model which, as already discussed in Ref.
[39], does not induce spurious effects, e.g. due to excluded volume, even in
the collapsed state. An important result of this work is that, even in the
presence of an explicit solvent, the microgel at high $f$ does not fully
collapse at large $\alpha$, being entirely equivalent to implicit Model II and
compatible with experimental findings.
Figure 8: Density profiles. Density profiles of monomers (top row), charged
beads and counterions (middle row) and net charge (bottom row) for ionic
microgels with $f=0.1$ as a function of the distance from the microgel center
of mass $r$ obtained in explicit and implicit solvent conditions. Curves from
the explicit case refer to values of $a=5,11,12.3,14,16$, from the (left)
swollen to the (right) collapsed state. Implicit and explicit solvent cases
are compared at values of $\alpha$ approximately corresponding to each $a$
value according to both the linear ($\alpha=0,0.56,0.74,1.0,1.1$) and the
numerical ($\alpha=0,0.56,0.74,1.0,1.2$) mapping.
#### 3.2.3 Structural properties
In this subsection, we will show that the implicit and explicit solvent
treatments with the newly established numerical mapping (Eq. 13) lead to
identical structural features of the microgels. Small differences arise when
using the linear mapping (Eq. 12) at high $f$ and large values of $\alpha$.
We show in Fig. 8 the monomer (top panels), ion and counterion (middle panels)
and charge (bottom panels) density profiles only for the $f=0.1$ case, since
similar results are also found for the other studied charge fractions.
Reported data for different values of monomer-solvent interactions show an
overall similarity between implicit and explicit solvent descriptions at all
swelling conditions. Small deviations arise only for $f=0.2$ for states with
the highest values of $a$ or $\alpha$, when using the linear mapping: as we
can observe from the rightmost panels of Fig. 8, the linear mapping fails to
associate implicit and explicit states in the most collapsed state, where a
visible difference arises between the profiles.
The distribution of ions and counterions within the microgel is an observable
that should be more sensitive to the presence of the solvent. However, quite
remarkably, also in this case, we find excellent agreement between the two
models, as shown in the middle panels of Fig. 8. In particular, the emergence
of a clear double-peak structure in the ion distribution is found in both
models for large $\alpha$ (implicit) and $a$ (explicit), signalling an
accumulation of ions at the exterior surface of the microgels. This can be
understood from the fact that ions, remaining always hydrophilic, never
completely collapse onto the core of the particle. Thus, the appearance of a
peak at distances corresponding to the outer region of the microgel is the
result of an attempt of ions to maximize their contact with solvent. This is
preceded by a minimum, which indicates a region where ions are depleted within
the particle.
This feature is the echo of the minimum that arises in the net charge density
distributions, already anticipated for the large microgel treated with the
implicit model in Fig. 6. Importantly, a minimum also occurs in $\rho_{Q}(r)$
for smaller microgels, as shown in the bottom panels of Fig. 8, for the most
collapsed conditions. Here a charged double layer is clearly present, with an
excess of positive charges inside the microgel corona due to the increased
amount of counterions in this region. At the same time, a negative charge
surplus is found at the surface of the microgel, since charged ions preferably
remain in contact with solvent particles. The net charge distribution is also
identical for explicit and implicit solvent when using the numerical mapping,
with again very small differences arising for the linear mapping at large
$\alpha$.
Figure 9: Solvent density profiles for charged microgels of different $f$
values, as a function of the distance from the microgel center of mass $r$.
The different panels refer to $a=5,11,12,14,16$ from (left) good to (right)
bad solvent conditions.
It is important to notice that, although a double layer was also observed with
the implicit solvent in Ref. [33] (equivalent to Model I), the two
distributions (the one in Fig. 8 of the current manuscript and that reported
in Fig. 6 of Ref. [33]) have opposite signs. Indeed in Ref. [33] the
superposition of electrostatic and solvophobic effects led to an accumulation
of counterions at the microgel surface, with the onset of what appears to be a
Donnan equilibrium [45]. Notwithstanding the different origin of the double layer,
both models demonstrate that an almost perfect neutrality is achieved within
the core of the microgel, and it is only at the surface that inhomogeneous
distributions appear. Besides, the reduced size of the microgels studied with
the explicit solvent facilitates the onset of peaks due to the increased
surface-to-volume ratio of the microgels. A more precise assessment of size
effects and a careful comparison to experiments will be the subject of future
works.
Finally, the explicit solvent model allows us to quantify the amount of
solvent that is located inside the microgel as temperature increases. This is
illustrated in Fig. 9, where the solvent density profile $\rho_{s}(r)$ is
reported for different values of $a$ and all investigated charge fractions.
These plots confirm the reduced tendency to collapse of charged microgels
which retain a large amount of solvent within the network structure. No
inhomogeneities within the microgel are observed in general. At large $f$ and
$\alpha$ some oscillations arise, which may be due to limited statistics.
Finally, this study confirms that, even at temperatures above the VPT, a
considerable residual amount of solvent remains within the microgel, an amount
that is significantly enhanced by increasing the charge. These findings are in
line with expectations [43, 46], which are thus confirmed by our simulations.
## 4 Conclusions
In this work we report an extensive numerical study of single microgel
particles, a prototype of soft colloids that is of great interest for the
colloidal community, particularly for the formation of arrested states with
tunable rheological properties [9], including glasses [47, 48] and gels [49].
The use of different polymers within the microgel network makes it possible to exploit
responsiveness to different control parameters, such as temperature and pH,
giving rise also to unusual responses in the fragility of the system [47, 50,
16].
In order to be able to model dense suspensions of these soft particles, we can
rely on two possible strategies. On one hand, we can exploit highly coarse-
grained models, such as the Hertzian one, which completely neglect the
polymeric degrees of freedom of the particles and thus cannot reproduce the
complex phenomenology observed in experiments in the gel or glassy regimes,
such as shrinking, faceting and interpenetration [51, 15]. On the other hand,
we can try to model a single microgel in a realistic way, aiming to reproduce
its structural properties and, from this, to build effective interactions
which retain the polymeric features of the single particle.
Adopting the second strategy, the aim of this work is to improve the current
numerical modelling of single ionic microgels with randomly distributed
charged groups, aiming to describe PNIPAM-co-PAAc microgels across the Volume
Phase Transition. In particular, we assess two different ways to model the
interactions of the charged monomers belonging to the polymer network, either
including or excluding a solvophobic attraction that mimics their
hydrophilic/hydrophobic interactions. We find that, as long as the charged
groups maintain the same affinity for the solvent, the tendency of the
microgel to remain in swollen conditions is enhanced even at high effective
temperatures. Thus, for a charge fraction of $f=0.2$ we find no evidence of
the collapse of the microgel within the investigated range of our simulations,
in agreement with experimental observations that are currently available [41,
26, 43, 42]. This result differs from the case where charged beads also
attract each other like neutral monomers upon increasing temperature, in which
the microgel undergoes a Volume Phase Transition to a fully collapsed state [33]. Despite
this fundamental difference, the structural properties of the microgels
treated with both models are rather similar, especially at low and
intermediate temperatures. For instance, we confirm that the peculiar power-
law regimes observed in the form factors are independent of the chosen model.
Having established the most appropriate modelling for charged monomers, we
then performed another necessary step in the modelling of ionic microgels,
namely to explicitly consider the presence of the solvent, which may affect
the rearrangement of the charges during the swelling-deswelling transition. To
this aim, we build on previous results showing that for neutral microgels a
description with an explicit solvent can be directly and quantitatively
superimposed to the implicit modelling by using a DPD representation of the
solvent, leaving unchanged the treatment of the polymer network with a bead-
spring model. In this way, the solvophobic potential in Eq. 4, modulated by
the parameter $\alpha$, is replaced by the DPD repulsive interactions between
monomers and solvent. The latter is varied through a change of the parameter
$a$ controlling the repulsion between non-charged beads, while the interaction
between charged monomers always retains a solvophilic nature.
We have thus carried out a careful comparison between explicit and implicit
solvent treatments, finding quantitative agreement between the two.
Interestingly, the relation between $a$ and $\alpha$ established by the
comparison of neutral microgels can be used also to compare charged microgels,
even with large values of $f$ (some deviations occur only at $f=0.2$ and large
$\alpha,a$ values), where the same correspondence between implicit and
explicit solvent states is retrieved. We showed that a linear mapping between
the two control parameters of the interactions in the implicit and explicit
case is sufficient to obtain a very good agreement between the two
descriptions.
From our analysis of the internal structure of the microgels across the VPT,
we found that counterions have a rather similar distribution within the
microgel core, effectively neutralizing the internal charge at small
distances, but being in excess close to the surface. This gives rise to a
charged double layer for large values of $a$ and $\alpha$. Interestingly, such
peaks in the charge density distributions are swapped with respect to the case
of Model I, where ions do not experience a tendency to remain at the surface,
since they are also treated as solvophobic. These detailed predictions will
have to be compared to experiments on ionic microgels as a function of charge
fraction, pH and $T$, in order to establish the limit of validity of our model
and to further improve it, towards a more realistic description of
experimental microgels.
In perspective, this work paves the way to study realistic charged microgels
in diffusing conditions, such as in electrophoresis and thermophoresis
experiments [52], or at liquid-liquid interfaces and to calculate their
effective interactions, similarly to what has been done for neutral microgels
[34, 35]. In this way, we will be able to determine the conditions under which
electrostatic effects play a dominant role over elastic ones. Another
important line of research will be the assessment of the role of the network
topology: examples of interesting cases whose properties could be investigated
are microgels consisting of two interpenetrated networks [50, 53] or ultra-low
crosslinked [54, 55] and hollow [56, 57] microgels. Finally, we hope that our
theoretical efforts will stimulate further experimental activity on charged
microgels to verify the predicted behaviour so that it will be possible to
tackle the investigation of dense suspensions in the near future.
## Acknowledgments
This research has been performed within the PhD program in “Mathematical
Models for Engineering, Electromagnetics and Nanosciences”. We acknowledge
financial support from the European Research Council (ERC Consolidator Grant
681597, MIMIC). FC and EZ also acknowledge funding from Regione Lazio, through
L.R. 13/08 (Progetto Gruppo di Ricerca GELARTE, n.prot.85-2017-15290).
## References
* [1] Sciortino F and Tartaglia P 2005 Advances in Physics 54 471–524
* [2] Pusey P 2008 Journal of Physics: Condensed Matter 20 494202
* [3] Zaccarelli E 2007 Journal of Physics: Condensed Matter 19 323101
* [4] Lu P J and Weitz D A 2013 Annu. Rev. Condens. Matter Phys. 4 217–233
* [5] Joshi Y M 2014 Annu. Rev. Chem. Biomol. Eng 5 181–202
* [6] Likos C N 2001 Physics Reports 348 267–439
* [7] Fernandez-Nieves A, Wyss H, Mattsson J and Weitz D A (eds) 2011 Microgel suspensions: fundamentals and applications (New York, New York, USA: John Wiley & Sons)
* [8] Lyon L A and Fernandez-Nieves A 2012 Annual review of physical chemistry 63 25–43
* [9] Vlassopoulos D and Cloitre M 2014 Current opinion in colloid & interface science 19 561–574
* [10] Oh J K, Drumright R, Siegwart D J and Matyjaszewski K 2008 Progress in Polymer Science 33 448–477
* [11] Karg M, Pich A, Hellweg T, Hoare T, Lyon L A, Crassous J J, Suzuki D, Gumerov R A, Schneider S and Potemkin I I 2019 Langmuir 35 6231–6255
* [12] Di Napoli B, Franco S, Severini L, Tumiati M, Buratti E, Titubante M, Nigro V, Gnan N, Micheli L, Ruzicka B et al. 2020 ACS Applied Polymer Materials 2 2791–2801
* [13] Rovigatti L, Gnan N, Ninarello A and Zaccarelli E 2019 Macromolecules 52 4895–4906
* [14] Bergman M J, Gnan N, Obiols-Rabasa M, Meijer J M, Rovigatti L, Zaccarelli E and Schurtenberger P 2018 Nature communications 9 1–11
* [15] Conley G M, Zhang C, Aebischer P, Harden J L and Scheffold F 2019 Nature communications 10 1–8
* [16] Gnan N and Zaccarelli E 2019 Nature Physics 15 683–688
* [17] Stieger M, Richtering W, Pedersen J S and Lindner P 2004 The Journal of chemical physics 120 6197–6206
* [18] Ninarello A, Crassous J J, Paloli D, Camerin F, Gnan N, Rovigatti L, Schurtenberger P and Zaccarelli E 2019 Macromolecules 52 7584–7592
* [19] Bergman M J, Pedersen J S, Schurtenberger P and Boon N 2020 Soft Matter 16 2786–2794
* [20] Pelton R and Hoare T 2011 Microgels and their synthesis: An introduction pp 1–32 in Fernandez-Nieves et al. [7]
* [21] Nöjd S, Mohanty P S, Bagheri P, Yethiraj A and Schurtenberger P 2013 Soft Matter 9 9199–9207
* [22] Colla T, Mohanty P S, Nojd S, Bialik E, Riede A, Schurtenberger P and Likos C N 2018 ACS nano 12 4321–4337
* [23] Nigro V, Angelini R, Bertoldo M, Castelvetro V, Ruocco G and Ruzicka B 2015 Journal of Non-Crystalline Solids 407 361–366
* [24] Nöjd S, Holmqvist P, Boon N, Obiols-Rabasa M, Mohanty P S, Schweins R and Schurtenberger P 2018 Soft Matter 14 4150–4159
* [25] Rochette D, Kent B, Habicht A and Seiffert S 2017 Colloid and Polymer Science 295 507–520
* [26] Capriles-González D, Sierra-Martín B, Fernández-Nieves A and Fernández-Barbero A 2008 The Journal of Physical Chemistry B 112 12195–12200
* [27] Gnan N, Rovigatti L, Bergman M and Zaccarelli E 2017 Macromolecules 50 8777–8786
* [28] Moreno A J and Verso F L 2018 Soft Matter 14 7083–7096
* [29] Rovigatti L, Gnan N, Tavagnacco L, Moreno A J and Zaccarelli E 2019 Soft matter 15 1108–1119
* [30] Quesada-Pérez M, Ramos J, Forcada J and Martín-Molina A 2012 The Journal of chemical physics 136 244903
* [31] Kobayashi H and Winkler R G 2014 Polymers 6 1602–1617
* [32] Martín-Molina A and Quesada-Pérez M 2019 Journal of Molecular Liquids 280 374–381
* [33] Del Monte G, Ninarello A, Camerin F, Rovigatti L, Gnan N and Zaccarelli E 2019 Soft matter 15 8113–8128
* [34] Camerin F, Fernandez-Rodriguez M A, Rovigatti L, Antonopoulou M N, Gnan N, Ninarello A, Isa L and Zaccarelli E 2019 ACS nano 13 4548–4559
* [35] Camerin F, Gnan N, Ruiz-Franco J, Ninarello A, Rovigatti L and Zaccarelli E 2020 Physical Review X 10 031012
* [36] Kremer K and Grest G S 1990 The Journal of Chemical Physics 92 5057–5086
* [37] Deserno M and Holm C 1998 The Journal of chemical physics 109 7678–7693
* [38] Soddemann T, Dünweg B and Kremer K 2001 The European Physical Journal E 6 409–419
* [39] Camerin F, Gnan N, Rovigatti L and Zaccarelli E 2018 Scientific reports 8 1–12
* [40] Plimpton S 1995 Journal of computational physics 117 1–19
* [41] Holmqvist P, Mohanty P, Nägele G, Schurtenberger P and Heinen M 2012 Physical review letters 109 048302
* [42] Wiehemeier L, Brändel T, Hannappel Y, Kottke T and Hellweg T 2019 Soft matter 15 5673–5684
* [43] Brändel T, Wiehemeier L, Kottke T and Hellweg T 2017 Polymer 125 110–116
* [44] Fernandez-Barbero A, Fernandez-Nieves A, Grillo I and Lopez-Cabarcos E 2002 Physical Review E 66 051803
* [45] Hunter R J 2001 Foundations of colloid science (Oxford, New York, USA: Oxford University Press)
* [46] Bischofberger I and Trappe V 2015 Scientific reports 5 15520
* [47] Mattsson J, Wyss H M, Fernandez-Nieves A, Miyazaki K, Hu Z, Reichman D R and Weitz D A 2009 Nature 462 83–86
* [48] Philippe A M, Truzzolillo D, Galvan-Myoshi J, Dieudonné-George P, Trappe V, Berthier L and Cipelletti L 2018 Physical Review E 97 040601
* [49] Wu J, Huang G and Hu Z 2003 Macromolecules 36 440–448
* [50] Nigro V, Angelini R, Bertoldo M, Bruni F, Ricci M A and Ruzicka B 2017 Soft matter 13 5185–5193
* [51] Conley G M, Aebischer P, Nöjd S, Schurtenberger P and Scheffold F 2017 Science advances 3 e1700969
* [52] Wongsuwarn S, Vigolo D, Cerbino R, Howe A M, Vailati A, Piazza R and Cicuta P 2012 Soft Matter 8 5857–5863
* [53] Nigro V, Angelini R, Rosi B, Bertoldo M, Buratti E, Casciardi S, Sennato S and Ruzicka B 2019 Journal of colloid and interface science 545 210–219
* [54] Bachman H, Brown A C, Clarke K C, Dhada K S, Douglas A, Hansen C E, Herman E, Hyatt J S, Kodlekere P, Meng Z et al. 2015 Soft Matter 11 2018–2028
* [55] Scotti A, Brugnoni M, Lopez C G, Bochenek S, Crassous J J and Richtering W 2020 Soft Matter 16 668–678
* [56] Nayak S, Gan D, Serpe M J and Lyon L A 2005 Small 1 416–421
* [57] Nickel A C, Scotti A, Houston J E, Ito T, Crassous J, Pedersen J S and Richtering W 2019 Nano letters 19 8161–8170
# OffCon3: What is State-of-the-Art Anyway?
Philip J. Ball
Department of Engineering Science
University of Oxford
Oxford, UK
<EMAIL_ADDRESS>
Stephen J. Roberts
Department of Engineering Science
University of Oxford
Oxford, UK
<EMAIL_ADDRESS>
###### Abstract
Two popular approaches to model-free continuous control tasks are SAC and TD3.
At first glance these approaches seem rather different; SAC aims to solve the
entropy-augmented MDP by minimising the KL-divergence between a stochastic
proposal policy and a hypothetical energy-based soft Q-function policy, whereas
TD3 is derived from DPG, which uses a deterministic policy to perform policy
gradient ascent along the value function. In reality, both approaches are
remarkably similar, and belong to a family of approaches we call ‘Off-Policy
Continuous Generalized Policy Iteration’. This illuminates their similar
performance in most continuous control benchmarks, and indeed when
hyperparameters are matched, their performance can be statistically
indistinguishable. To further remove any difference due to implementation, we
provide OffCon3 (_Off_-Policy _Con_tinuous _Con_trol: _Con_solidated), a
code base featuring state-of-the-art versions of both algorithms.
## 1 Introduction
State-of-the-art performance in model-free continuous control reinforcement
learning (RL) has been dominated by off-policy maximum-entropy/soft-policy
based methods, namely Soft Actor Critic [1, 2]. This is evidenced by the
plethora of literature, both model-free and model-based, that chooses SAC as
the standard [3, 4, 5, 6, 7], often showing it as the best performing model-
free approach.
### 1.1 Deterministic Policy Gradients
At this point we introduce the notion of deterministic policy gradients (DPGs)
for off-policy reinforcement learning. This is in contrast to stochastic
policy gradients (SPGs) that rely on a stochastic policy for gradient
estimation. It can be shown that DPG is simply a limiting case of SPG [8], and
intuitively the key difference between them focuses on how they each rely on
samples for estimating gradients.
For both approaches the policy gradient proof is required. For details see [8,
9], but through changing the order of integration and/or differentiation we
can remove the reliance of the derivative on having access to the underlying
state distribution.
As aforementioned, DPG is a limiting case of SPG, specifically when the
variance parameter of the SPG policy tends to $0$ (i.e., $\sigma\rightarrow
0$). However the similarities and differences between these two methods are
nuanced and merit further investigation. This is under-explored and often
incorrect equivalences are drawn (i.e., DPG necessitates off-policy learning,
SPG necessitates on-policy learning).
We start by presenting a simple explanation as to why DPG facilitates off-
policy learning:
$\displaystyle Q(s,a)$ $\displaystyle=\mathbb{E}_{r,s^{\prime}\sim
E}\left[r_{t}+\gamma Q(s^{\prime},\mu(s^{\prime}))\right].$ (1)
Observing Eq 1, we note that the expectation is only dependent on the
environment itself, and not the policy. Therefore, all we need to train the
Q-function is environment samples (i.e., tuples $(s,a,r,s^{\prime})$ from a
replay buffer), and the deterministic policy $\mu$. We are now in a position
to write down the objectives we wish to maximize for both the critic and the
actor. For the critic, we use a standard Bellman update, and for the actor, we
maximize the expected return under a Q-function:
$\displaystyle J_{Q}$ $\displaystyle=\mathbb{E}_{s,a,r,s^{\prime}\sim
E}\left[\left(Q(s,a)-\left(r+\gamma
Q(s^{\prime},a^{\prime})|_{a^{\prime}=\pi(s^{\prime})}\right)\right)^{2}\right]$
(2)
For the actor $\pi$, we wish to maximize the expected return:
$\displaystyle J_{\pi}$ $\displaystyle=\mathbb{E}_{s\sim E}\left[V(s)\right]$
(3) $\displaystyle=\mathbb{E}_{s\sim E}\left[Q(s,a)|_{a=\pi(s)}\right]$ (4)
We can now write out the update steps required for DPG-style algorithms, or
more specifically DDPG [10] considering the use of neural networks. This will
facilitate the comparisons to SAC later on. We now denote the neural network
weights of the Q-function and policy as $\theta$ and $\phi$ respectively, with
$Q_{\theta}$ and $\pi(\cdot)=f_{\phi}(\cdot)$ respectively. We define the
policy as a deterministic function for now, as it will make it clearer later
when we start defining policy $\pi$ as a distribution that is dependent on a
deterministic function. We now write the critic and actor update rules:
Critic:
$\displaystyle\nabla_{\theta}J_{Q}$
$\displaystyle\approx\nabla_{\theta}\mathbb{E}_{r,s,s^{\prime}\sim
E}\left[\left(Q_{\theta}(s,a)-\left(r+\gamma
Q_{\theta}(s^{\prime},a^{\prime})|_{a^{\prime}=f_{\phi}(s^{\prime})}\right)\right)^{2}\right]\text{}.$
(5)
Actor:
$\displaystyle\nabla_{\phi}J_{\pi}$ $\displaystyle\approx\mathbb{E}_{s\sim
E}\left[\nabla_{a}Q_{\theta}(s,a)|_{a=f_{\phi}(s)}\nabla_{\phi}f_{\phi}(s)\right].$
(6)
Note that, due to the chain rule, the gradient through the actor requires a
Q-value estimator that is differentiable with respect to actions. Finally, we
observe that in the generalized policy iteration analogues of Actor-Critic
with dynamic programming ([9] Chapter 4); critic training is analogous to
policy evaluation, and actor training is analogous to policy improvement.
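As a concrete illustration of Eqs 5 and 6, the following sketch performs one DDPG-style critic and actor step. It is a minimal sketch under stated assumptions: linear function approximators stand in for the neural networks, and the transition values and learning rate are illustrative, not the OffCon3 implementation.

```python
import numpy as np

# Illustrative linear stand-ins for the neural networks:
#   Q_theta(s, a) = theta[0] * s + theta[1] * a   (critic)
#   f_phi(s)      = phi * s                       (deterministic actor)
theta = np.array([0.5, 0.1])
phi = 0.0
gamma, lr = 0.99, 1e-2

def q(s, a, th):
    return th[0] * s + th[1] * a

def critic_grad(s, a, r, s_next, th, ph):
    # Squared Bellman residual (Eq 5); the bootstrapped target is treated
    # as a constant, as is standard in Q-learning.
    td_err = q(s, a, th) - (r + gamma * q(s_next, ph * s_next, th))
    return 2.0 * td_err * np.array([s, a])

def actor_grad(s, th, ph):
    # Chain rule of Eq 6: dQ/da at a = f_phi(s), times df_phi/dphi = s.
    dq_da = th[1]
    return dq_da * s

# A single illustrative transition (s, a, r, s').
s, a, r, s_next = 1.0, 0.2, 1.0, 0.5
theta = theta - lr * critic_grad(s, a, r, s_next, theta, phi)  # descent on J_Q
phi = phi + lr * actor_grad(s, theta, phi)                     # ascent on J_pi
```

Note that the critic step is gradient descent on $J_{Q}$ while the actor step is gradient ascent on $J_{\pi}$, mirroring the policy evaluation/improvement split described above.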
### 1.2 Stochastic Value Gradients
Here we discuss Stochastic Value Gradients [11], an algorithm that introduces
the idea of taking a gradient through a learned model of the environment and
associated stochastic policy. We specifically focus on the model-free limiting
case of this approach, SVG(0). This is of particular interest as it represents
a stepping stone between DPG and the maximum entropy methods introduced later.
We observe that, unlike the other versions of SVG, SVG(0) specifically uses a
Q-function to estimate expected return (as opposed to a state-value function).
Therefore we must derive the stochastic Bellman equation in the form of the
Q-function, following a similar approach to [11]:
$\displaystyle Q^{\pi}(s_{t},a_{t})$
$\displaystyle=\mathbb{E}\left[\sum_{\tau=t}^{\infty}\gamma^{\tau-t}r^{\tau}\,\middle|\,s=s_{t},a=a_{t}\right]$
(7) $\displaystyle=\int r_{t}p(r_{t}|s_{t},a_{t})+\gamma\left[\int\int
Q^{\pi}(s_{t+1},a_{t+1})\pi(a_{t+1}|s_{t+1})p(s_{t+1}|s_{t},a_{t})\differential{a_{t+1}}\differential{s_{t+1}}\right]\differential{r_{t}}$
(8) $\displaystyle=\mathbb{E}_{r_{t},s_{t+1}\sim
E}\left[r_{t}+\gamma\mathbb{E}_{a_{t+1}\sim\pi}\left[Q^{\pi}(s_{t+1},a_{t+1})\right]\right].$
(9)
Observe how Eq 9 is just Eq 1 except with a stochastic policy
$a\sim\pi(\cdot)$. To make its derivative tractable, we treat the policy as a
spherical Gaussian, and amortize inference of its parameters ($\mu,\Sigma$)
using a neural network with weights $\theta$. This allows the use of the
reparameterization/pathwise derivative trick [12, 13]. This means
$a\sim\pi(s,\eta;\theta)$ where $\eta\sim\mathcal{N}(0,I)$. As a result, we
move policy action sampling outside (i.e., we sample from both the environment
$E$ and a $\mathcal{N}(0,I)$), and can backpropagate through the policy
weights:
$\displaystyle Q^{\pi}(s_{t},a_{t})$
$\displaystyle=\mathbb{E}_{r_{t},s_{t+1}\sim
E}\left[r_{t}+\gamma\mathbb{E}_{\eta\sim\mathcal{N}(0,I)}\left[Q^{\pi}(s_{t+1},\pi(s_{t+1},\eta;\theta))\right]\right].$
(10) $\displaystyle=\mathbb{E}_{r_{t},s_{t+1}\sim
E,\eta\sim\mathcal{N}(0,I)}\left[r_{t}+\gamma\left[Q^{\pi}(s_{t+1},\pi(s_{t+1},\eta;\theta))\right]\right].$
(11)
This means we can write the derivative of the Actor and Critic as follows:
Critic:
$\displaystyle\nabla_{\theta^{Q}}J$
$\displaystyle\approx\nabla_{\theta^{Q}}\mathbb{E}_{r,s,s^{\prime}\sim\rho^{\pi},\eta\sim\mathcal{N}(0,I)}\left[\left(Q(s,a;\theta^{Q})-\left(r+\gamma
Q(s^{\prime},a^{\prime};\theta^{Q^{\prime}})|_{a^{\prime}=\pi(s^{\prime},\eta)}\right)\right)^{2}\right].$
(12)
Actor:
$\displaystyle\nabla_{\theta^{\pi}}J$
$\displaystyle\approx\mathbb{E}_{s\sim\rho^{\pi},\eta\sim\mathcal{N}(0,I)}\left[\nabla_{a}Q(s,a;\theta^{Q})|_{a=\pi(s,\eta)}\nabla_{\theta^{\pi}}\pi(s,\eta;{\theta^{\pi}})\right].$
(14)
Observe how similar this is to the DPG-style gradients; note that when
determining actions an additional sample from a Gaussian distribution is all
that is necessary. Furthermore, we observe that this is still an off-policy
algorithm, with no dependency on the policy that gave rise to the trajectory
samples.
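The reparameterization trick underlying SVG(0) can be sanity-checked numerically: with $a=\mu+\sigma\eta$, the pathwise derivative of $\mathbb{E}[Q(a)]$ with respect to $\mu$ should match the analytic gradient. A minimal sketch with an illustrative quadratic critic (all values assumed for demonstration):

```python
import numpy as np

# Reparameterized Gaussian policy a = mu + sigma * eta, eta ~ N(0, 1),
# checked against an illustrative critic Q(a) = -(a - 2)^2, whose analytic
# gradient is d/dmu E[Q(a)] = -2 * (mu - 2).
rng = np.random.default_rng(0)
mu, sigma = 0.5, 0.3

eta = rng.standard_normal(200_000)
a = mu + sigma * eta                 # reparameterized action samples

# dQ/da = -2 (a - 2), and da/dmu = 1, so the pathwise estimate is:
grad_est = np.mean(-2.0 * (a - 2.0))
grad_true = -2.0 * (mu - 2.0)        # analytic value: 3.0
```

Because the sampling noise is moved outside the expectation, the same Monte Carlo estimator works regardless of which policy collected the transitions, which is why SVG(0) remains off-policy.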
### 1.3 Soft Actor Critic
SAC is an actor-critic method which aims to learn a policy that maximizes both
return and entropy over each visited state in a trajectory [14]:
$\displaystyle\pi^{*}={\arg\max}_{\pi}\sum_{t}\mathbb{E}_{(\mathbf{s}_{t},\mathbf{a}_{t})\sim\rho_{\pi}}\left[r(\mathbf{s}_{t},\mathbf{a}_{t})\color[rgb]{0,0.88,0}+\alpha\mathcal{H}(\pi(\cdot|\mathbf{s}_{t}))\right]$
(15)
where the part of the equation in green describes the additional entropy
objective (N.B.: the conventional objective is therefore recovered as
$\alpha\rightarrow 0$). This is done using soft-policy iteration, and involves
repeatedly applying the following entropy Bellman operator [1]:
$\displaystyle\mathcal{T}^{\pi}Q(s,a)=$ $\displaystyle
r(s,a)+\gamma\mathbb{E}_{s^{\prime}\sim p}[V(s^{\prime})]$ (16)
where
$\displaystyle V(s)=$
$\displaystyle\mathbb{E}_{a\sim\pi}[Q(s,a)-\alpha\log\pi(a|s)].$ (17)
For consistency of presentation, we present this as a (soft) Bellman update:
$\displaystyle Q^{\pi}(s,a)$ $\displaystyle=\int
rp(r|s,a)+\gamma\left[\iint(Q^{\pi}(s^{\prime},a^{\prime})-\alpha\log\pi(a^{\prime}|s^{\prime}))\pi(a^{\prime}|s^{\prime})p(s^{\prime}|s,a)\differential{a^{\prime}}\differential{s^{\prime}}\right]\differential{r}$
$\displaystyle=\mathbb{E}_{r,s^{\prime}\sim
E}\left[r+\gamma\mathbb{E}_{a^{\prime}\sim\pi}\left[Q^{\pi}(s^{\prime},a^{\prime})-\alpha\log\pi(a^{\prime}|s^{\prime})\right]\right]$
$\displaystyle=\mathbb{E}_{r,s^{\prime}\sim
E,\eta\sim\mathcal{N}(0,I)}\left[r+\gamma\left[Q^{\pi}(s^{\prime},\pi(s^{\prime},\eta;\theta))-\alpha\log\pi(s^{\prime},\eta;\theta)\right]\right]$
(18)
where in the last line we make the same assumption about amortizing the policy
distribution as in SVG.
At this point we can directly write down the objective of the actor/policy,
namely to maximize expected return _and_ entropy, i.e., Eq 17. This follows
the method for determining the objective function for the policy gradient in
DPG (Eq 4):
$\displaystyle J_{\pi}$
$\displaystyle=\mathbb{E}_{s\sim\rho^{\mu}}\left[V(s)\right]$ (19)
$\displaystyle=\mathbb{E}_{s\sim\rho^{\mu}}\left[\mathbb{E}_{a\sim\pi}\left[Q(s,a;\theta^{Q})-\alpha\log\pi(a|s)\right]\right]$
(20)
Similarly for the critic $Q$, we have $J_{Q}$:
$\displaystyle J_{Q}$
$\displaystyle=\mathbb{E}_{r,s,s^{\prime}\sim\rho^{\mu},a^{\prime}\sim\pi}\left[\left(Q(s,a;\theta^{Q})-\left(r+\gamma(Q(s^{\prime},a^{\prime};\theta^{Q^{\prime}})-\alpha\log\pi(a^{\prime}|s^{\prime}))\right)\right)^{2}\right]$
(21)
The ‘soft’ critic gradient is similar to the DPG style update as the Q-value
parameters don’t depend on the additional entropy term, however the actor
gradient requires both the chain rule and the law of total derivatives. Here
we write down the gradients directly:
Critic:
$\displaystyle\nabla_{\theta^{Q}}J$
$\displaystyle\approx\nabla_{\theta^{Q}}\mathbb{E}_{r,s,s^{\prime}\sim\rho^{\pi},\eta\sim\mathcal{N}(0,I)}\left[\left(Q(s,a;\theta^{Q})-\left(r+\gamma(Q(s^{\prime},a^{\prime};\theta^{Q^{\prime}})-\alpha\log\pi(a^{\prime}|s^{\prime}))|_{a^{\prime}=\pi(s^{\prime},\eta)}\right)\right)^{2}\right].$
(22)
Actor:
$\displaystyle\nabla_{\theta^{\pi}}J$
$\displaystyle\approx\mathbb{E}_{s\sim\rho^{\pi},\eta\sim\mathcal{N}(0,I)}\left[\left(-\nabla_{\theta^{\pi}}\log\pi(a|s)+\nabla_{a}\left(Q(s,a;\theta^{Q})-\alpha\log\pi(s,\eta;\theta^{\pi})\right)\right)|_{a=\pi(s,\eta)}\nabla_{\theta^{\pi}}\pi(s,\eta;{\theta^{\pi}})\right].$
(23)
What remains to be optimized is the temperature $\alpha$, which balances the
entropy/reward trade-off in Eq 15. In [2] the authors learn this during
training using an approximation to constrained optimization, where the mean
trajectory entropy $\mathcal{H}$ is the constraint.
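The soft target of Eq 18 and the temperature step can be sketched numerically; here we use TD3-style clipped double-Q (as in the 'applied' SAC paper) and a dual-style update on $\log\alpha$. All quantities are illustrative placeholders rather than values from the paper:

```python
import numpy as np

# Soft Bellman target (Eq 18) with clipped double-Q, plus the dual-style
# temperature step from [2]. All numbers are illustrative placeholders.
gamma, lr_alpha = 0.99, 1e-2
r = 1.0
q1_next, q2_next = 5.0, 4.5    # two critics' estimates at (s', a')
log_pi_next = -1.2             # log pi(a'|s') for the sampled next action
log_alpha = np.log(0.2)

alpha = np.exp(log_alpha)
soft_target = r + gamma * (min(q1_next, q2_next) - alpha * log_pi_next)

# Temperature dual step: the entropy here (-log_pi_next = 1.2 nats) exceeds
# the target entropy H_bar = -dim(A) = -1, so alpha should decrease.
h_bar = -1.0
alpha_grad = -(log_pi_next + h_bar)       # gradient of the dual objective
log_alpha = log_alpha - lr_alpha * alpha_grad
```

Parameterizing the temperature through $\log\alpha$ keeps $\alpha$ positive without an explicit constraint, a common implementation choice.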
### 1.4 DPG $\rightarrow$ SVG $\rightarrow$ SAC
Having outlined DPG, SVG(0), and SAC we are now in a position to directly
compare all three approaches. We do this by observing the Critic and Actor
objectives, highlighting in different colors the components that are
attributable to each:
$\displaystyle J_{\pi}$ $\displaystyle=\mathbb{E}_{s\sim
E,\color[rgb]{1,0,1}\eta\sim\mathcal{N}(0,I)}\left[Q_{\theta}(s,a)\color[rgb]{0,0.88,0}-\alpha\log\pi(a|s)|_{a=f_{\phi}(s,\color[rgb]{1,0,1}\eta)}\right]$
(24) $\displaystyle J_{Q}$ $\displaystyle=\mathbb{E}_{r,s,s^{\prime}\sim
E,\color[rgb]{1,0,1}\eta\sim\mathcal{N}(0,I)}\left[\left(Q_{\theta}(s,a)-\left(r+\gamma(Q_{\theta}(s^{\prime},a^{\prime})\color[rgb]{0,0.88,0}-\alpha\log\pi(a^{\prime}|s^{\prime}))\right)\right)^{2}|_{a^{\prime}=f_{\phi}(s^{\prime},\color[rgb]{1,0,1}\eta)}\right]$
(25)
where terms in pink are introduced by SVG(0), and terms in green are
introduced by SAC. Here we describe the natural progression of DPG to SAC:
1.
DPG introduces the policy iteration framework, including the deterministic
policy gradient, that allows the learning of policies through Q-learning over
continuous action spaces. DDPG introduces heuristics that allow the use of
neural network function approximators.
2.
SVG introduces the idea of stochastic policies, and its limiting model-free
case SVG(0) allows the learning of stochastic policies in the Q-learning
policy improvement framework proposed in DPG. This uses the pathwise
derivative through the amortized Gaussian policy.
3.
SAC leverages the policy variance learning in amortized inference by ensuring
a maximum-entropy action distribution for any given state through the addition
of an entropy term into the traditional maximum return objective.
We observe therefore that all three algorithms can be considered as belonging
to the same family, namely ‘Off-Policy Continuous Generalized Policy
Iteration’, where the policy evaluation step represents a gradient step along
$J_{Q}$, and policy improvement a gradient step along $J_{\pi}$. All that
distinguishes these approaches is whether the actor is deterministic, and
whether there is an additional entropy objective. We note that the SAC policy
has been derived using standard gradient ascent of the value function (as in
[8]), and similarly the DPG policy gradient can be derived as a KL-
minimization (as in [1]).
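This family view can be made concrete: a single target computation covers all three members, with DPG recovered in the $\alpha\rightarrow 0$, $\sigma\rightarrow 0$ limit. A sketch (the function name and values are illustrative, not from any of the papers):

```python
def td_target(r, q_next, log_pi_next, gamma=0.99, alpha=0.0):
    """Generalized soft target covering the whole family (illustrative):
       DPG/DDPG : alpha = 0, deterministic a' (log_pi_next is ignored)
       SVG(0)   : alpha = 0, stochastic a' via reparameterization
       SAC      : alpha > 0, stochastic a' plus the entropy bonus."""
    return r + gamma * (q_next - alpha * log_pi_next)

# With alpha = 0 the SAC-style target collapses to the DPG target:
dpg_y = td_target(1.0, 3.0, log_pi_next=-0.7, alpha=0.0)   # 1 + 0.99 * 3
sac_y = td_target(1.0, 3.0, log_pi_next=-0.7, alpha=0.2)   # adds entropy term
```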
## 2 Practical Reinforcement Learning
Two methods derived from the aforementioned approaches have emerged as being
most popular, namely SAC with entropy adjustment [2] and the DDPG derived TD3
[15]. At first glance, it may appear coincidental that both approaches have
achieved similar levels of success in continuous control tasks, such as OpenAI
Gym [16], but the above analysis shows that they are closely related. We
briefly explain the merits of TD3, and understand how this has influenced SAC.
### 2.1 TD3
TD3 [15] is based on DDPG, and introduces several heuristics to improve upon
it. These include:
* Training two Q-functions, then taking their minimum when evaluating, to address Q-function overestimation bias.
* Updating target parameters and actor parameters less frequently than critic updates.
* Adding noise to the target policy action during critic training, making it harder for the actor to directly exploit the critic.
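The first and third of these heuristics can be sketched as follows (hypothetical helper names; $\sigma$, clip range, and action limit follow Table 1):

```python
import numpy as np

rng = np.random.default_rng(0)

def td3_target_action(mu_next, sigma=0.2, clip=0.5, act_limit=1.0):
    # Target policy smoothing: clipped Gaussian noise on the target action,
    # then clip to the valid action range.
    noise = np.clip(sigma * rng.standard_normal(mu_next.shape), -clip, clip)
    return np.clip(mu_next + noise, -act_limit, act_limit)

def td3_target(r, q1_next, q2_next, gamma=0.99):
    # Clipped double-Q: bootstrap from the minimum of the two critics.
    return r + gamma * np.minimum(q1_next, q2_next)

a_next = td3_target_action(np.array([0.9]))
y = td3_target(1.0, np.array([5.0]), np.array([4.0]))
```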
The original SAC paper [1] does not train two Q-functions, and instead trains
a Q-function and a state-value (V) function. Furthermore the trade-off between
entropy and reward is fixed. The ‘applied’ SAC paper [2] removes the state-
value function, and instead trains two Q-functions similar to TD3, and
automatically adjusts the temperature trade-off to ensure some expected policy
entropy (a function of action dimension). Interestingly, in their original
papers, TD3 and SAC claim to outperform each other, and it would appear that
the incorporation of the TD3-style Q-learning and temperature adjustment
results in the ultimately better performance in the ‘applied’ SAC paper.
However there are still key differences between SAC and TD3 training, namely
heuristics such as network architecture, learning rate, and batch size. For
the purposes of fair comparison, we choose these to be the same across both
SAC and TD3, as shown in Table 1 (note we include an additional hidden layer;
see Appendix C for details).
| Hyperparameter | TD3 | SAC |
|---|---|---|
| Collection Steps | 1,000 | 1,000 |
| Random Action Steps | 10,000 | 10,000 |
| Network Hidden Layers | $256:256:256$ | $256:256:256$ |
| Learning Rate | $3\times10^{-4}$ | $3\times10^{-4}$ |
| Optimizer | Adam | Adam |
| Replay Buffer Size | $1\times10^{6}$ | $1\times10^{6}$ |
| Action Limit | $[-1,1]$ | $[-1,1]$ |
| Exponential Moving Averaging Parameter | $5\times10^{-3}$ | $5\times10^{-3}$ |
| (Critic Update:Environment Step) Ratio | 1 | 1 |
| (Policy Update:Environment Step) Ratio | 2 | 1 |
| Has Target Policy? | Yes | No |
| Expected Entropy Target | N/A | $-\text{dim}(\mathcal{A})$ |
| Policy Log-Variance Limits | N/A | $[-20, 2]$ |
| Target Policy $\sigma$ | 0.2 | N/A |
| Target Policy Clip Range | $[-0.5, 0.5]$ | N/A |
| Rollout Policy $\sigma$ | 0.1 | N/A |

Table 1: Hyperparameters used in OffCon3
### 2.2 What is the effect of Gaussian exploration?
One difference between TD3 and DDPG is the noise injection applied during
Q-value function training. This turns a previously deterministic mapping
$a=\mu(s)$ into a stochastic one
($a\sim\mu(s)+\text{clip}(\mathcal{N}(0,I)\times 0.2,-0.5,0.5)$). This means
that the policies used in both data collection and critic training are in fact
stochastic, making TD3 closer to SAC. We may ask how this compares to a
deterministic objective; evidently, veering from the mean action selected by
the deterministic actor should reduce expected return, so what does this
stochasticity provide? To explore this question, we split our analysis into
two sections: the effect on the Critic, and the effect on the Actor.
#### Effect on Critic:
We simplify analysis by assuming all added noise is diagonal Gaussian (action
clipping technically makes this assumption untrue, but in reality policies are
still very nearly Gaussian), and write the deterministic objective as $J_{D}$.
We also assume 1-D actions without loss of generality. Performing a Taylor
series expansion of the stochastic policy, we find the objective maximized by
this Gaussian actor ($J_{R}$) to be (see Appendix A for proof):
$\displaystyle J_{R}$ $\displaystyle\approx
J_{D}+\frac{\sigma^{2}}{2}\mathbb{E}_{s_{t}\sim
E}\left[\nabla^{2}_{a}Q(s_{t},a)|_{a=\mu(s_{t})}\right]$ (26)
This is the deterministic objective with an additional term proportional to
the fixed variance of the policy, as well as the 2nd derivative (Hessian for
multi-dimensional actions) of the critic with respect to actions. Unpacking
this latter term, we consider the residual between the stochastic ($J_{R}$)
and deterministic ($J_{D}$) objectives:
$\displaystyle J_{R}-J_{D}\approx\frac{\sigma^{2}}{2}\mathbb{E}_{s_{t}\sim
E}\left[\nabla^{2}_{a}Q(s_{t},a)|_{a=\mu(s_{t})}\right]$ (27)
First, let us consider a well trained policy that is able to produce actions
that maximize the critic $Q$ for all states. This means that the value of the
2nd order term must be negative (equivalently, the Hessian must be negative
semi-definite). Evidently, any non-zero $\sigma^{2}$ will result in the
stochastic return $J_{R}$ being lower than the deterministic return $J_{D}$.
This implies that the stochastic policy objective $J_{R}$ can only ever
realistically lower bound the deterministic objective $J_{D}$.
However in Gaussian exploration we fix this $\sigma^{2}$ to be non-zero,
therefore the only degree of freedom is in the second-order term itself.
Evidently we want a policy that maximizes $Q$ (i.e., 0th order term),
therefore making this term positive is not viable. However the magnitude of
the second-order term can be reduced by making $Q$ ‘smoother’. Since $Q$ is
twice differentiable w.r.t. $a$, we can invoke the identity
$\nabla^{2}_{a}Q(s,a)\preceq\beta\text{I}$ [17], implying that the largest
eigenvalue of the Hessian of $Q$ is smaller than $\beta$, where $\beta$ is the
Lipschitz constant of $Q$. Therefore, to minimize the magnitude of
$\nabla^{2}_{a}Q(s,a)$, we must learn a $Q$ that is smoother with respect to
actions. This can be viewed as a spectral norm regularizer of the $Q$ function
[18], and the mechanism that is used in [15] to ensure the stability of the
critic can be viewed as approximating this approach. It must be noted that we
get this smoothing behavior by default in SAC as the learned policy has non-
zero variance due to its entropy objective.
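Eq 27 can be verified numerically for a quadratic critic, for which the Taylor expansion is exact (the critic and all values here are illustrative):

```python
import numpy as np

# Monte Carlo check of Eq 27 for an illustrative quadratic critic
# Q(a) = -(a - 1)^2, where Q'' = -2 exactly, so
#   J_R - J_D = (sigma^2 / 2) * Q''(mu) = -sigma^2.
rng = np.random.default_rng(0)
mu, sigma = 1.0, 0.1

a = mu + sigma * rng.standard_normal(500_000)
j_r = np.mean(-(a - 1.0) ** 2)    # stochastic-policy objective
j_d = -(mu - 1.0) ** 2            # deterministic objective (= 0 here)
residual = j_r - j_d              # expected to be close to -0.01
```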
#### Effect on Actor:
The forced non-zero $\sigma^{2}$ variance term also has implications to
entropy based approaches. SAC learns a non-zero variance to ensure some
minimum entropy per trajectory timestep:
$\displaystyle\max_{\pi}\left[Q^{\pi}(s_{0},a_{0})\right]\quad\text{s.t.}\quad\mathbb{E}_{s_{t}\sim
E}[\mathbb{E}_{a_{t}\sim\pi}[-\log\pi(a_{t}|s_{t})]]>\mathcal{H}\quad\forall
t$ (28)
where, for a Gaussian policy, we can write
$\displaystyle\max_{\pi}\left[Q^{\pi}(s_{0},a_{0})\right]\quad\text{s.t.}\quad\mathbb{E}_{s_{t}\sim
E}\left[\log(\sigma(s_{t})\sqrt{2\pi e})\right]>\mathcal{H}\quad\forall t.$
(29)
This optimization is non-trivial for SAC (see Sec. 5 in [2]) as the amount of
policy variance is learned by the policy (hence the $s_{t}$ dependency above).
However for a policy with fixed variance $\sigma$, the optimization becomes
trivial as we can drop the state dependency, and simply maximize policy over
the critic444Consider that for a fixed $\sigma$ we can always find an
$\mathcal{H}$ such that the constraint inequality evaluates to an exact
equality for all policies, therefore the dual simply collapses into a
maximization of the primal without a constraint. (which is done in standard
actor training):
$\displaystyle\max_{\pi}\left[Q^{\pi}(s_{0},a_{0})\right]\quad\text{s.t.}\quad\log(\sigma\sqrt{2\pi
e})>\mathcal{H}\quad\forall t.$ (30)
In the case of a deployed policy which has standard deviation $0.1$, such as
in TD3, we can view this as performing exploration with a maximum entropy
policy that ensures a policy entropy of $\approx-0.884$ nats.
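The quoted figure follows directly from the Gaussian entropy formula in Eq 29 with $\sigma=0.1$:

```python
import math

# Entropy of a fixed-variance Gaussian, log(sigma * sqrt(2 * pi * e)),
# evaluated at the TD3 rollout sigma = 0.1 from Table 1.
def gaussian_entropy(sigma):
    return math.log(sigma * math.sqrt(2.0 * math.pi * math.e))

h = gaussian_entropy(0.1)   # approximately -0.884 nats
```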
### 2.3 Why not SVG(0)?
In principle, SVG(0) appears to be a strong candidate for policy training; it
is stochastic yet does not incorporate an entropy term, which would add
computation and hyperparameters. In reality, since the environments evaluated are
deterministic, the variance head of the SVG(0) policy tends to 0 very quickly.
The reason for this is outlined in Sec. 2.2. As a consequence, the resultant
algorithm is effectively DDPG. Indeed this is supported in the analysis
performed in Appendix A, where only the 0th order Taylor term remains for
relatively small $\sigma$. We illustrate this effect in Figure 1, and include
this algorithm for illustrative purposes as ‘TDS’ in OffCon3.
Figure 1: Variance of rollout policies on HalfCheetah
## 3 Experiments
For these experiments, we run both algorithms for 5 seeds on 4 different
MuJoCo environments: HalfCheetah, Hopper, Walker2d, and Ant. We then perform a
two-tailed Welch’s $t$-test [19] to determine whether final performance is
statically significantly different. Observing Table 2, Ant and Walker2d
performance is statistically indistinguishable (although TD3 learns quicker in
Ant, see Appendix B); in HalfCheetah SAC does convincingly outperform TD3, but
in Hopper, the opposite is true.
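For reference, Welch's $t$ statistic and the Welch-Satterthwaite degrees of freedom can be computed from per-seed returns with the standard library alone (the sample data here are hypothetical, not the values behind Table 2):

```python
import math
from statistics import mean, variance

def welch_t(x, y):
    # Welch's t statistic and Welch-Satterthwaite degrees of freedom for
    # two samples with unequal variances.
    vx, vy = variance(x) / len(x), variance(y) / len(y)
    t = (mean(x) - mean(y)) / math.sqrt(vx + vy)
    df = (vx + vy) ** 2 / (vx ** 2 / (len(x) - 1) + vy ** 2 / (len(y) - 1))
    return t, df

# Hypothetical per-seed final returns for two algorithms:
t_stat, dof = welch_t([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
```

Unlike Student's $t$-test, Welch's variant does not assume equal variances across the two seed populations, which suits RL runs with very different spreads.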
(a) HalfCheetah
(b) Hopper
(c) Ant
(d) Walker2d
(e) Humanoid
Figure 2: SAC and TD3 Training Curves on MuJoCo Environments.
### 3.1 Authors’ Results
For completeness we compare the results from each algorithm’s available code
with ours to ensure our implementation does not unfairly penalize either
approach. The results are shown in Tables 3 and 4 (authors' SAC and TD3 code
is here and here respectively; we tabulate the results provided in these
repos, taking the max performance, averaged over seeds, up to and including
the Timesteps column). We note that our implementation appears to generally
match, or exceed, the authors' code. Note that unlike the original code in
[15], we do not discard ‘failure’ seeds (see discussion here); this may
explain why our implementation doesn't always outperform the authors' code,
especially on less stable environments (such as Hopper and Walker2d).
| Environment | $t$ | $p$ |
|---|---|---|
| HalfCheetah | 4.29 | 0.00927 |
| Hopper | $-2.92$ | 0.0293 |
| Ant | $-0.481$ | 0.653 |
| Walker2d | 1.59 | 0.155 |
| Humanoid | 1.29 | 0.29 |

Table 2: Two-tailed Welch’s $t$-test results

| Environment | Ours (SAC Return) | Author (SAC Return) | Timesteps |
|---|---|---|---|
| HalfCheetah | $16,784\pm 292$ | $12,219\pm 4,899$ | $3\times10^{6}$ |
| Hopper | $3,142\pm 654$ | $3,319\pm 175$ | $1\times10^{6}$ |
| Ant | $4,987\pm 784$ | $3,845\pm 759$ | $1\times10^{6}$ |
| Walker2d | $5,703\pm 408$ | $5,523\pm 466$ | $3\times10^{6}$ |
| Humanoid | $5,871\pm 171$ | $6,268\pm 186$ | $3\times10^{6}$ |

Table 3: SAC Implementation Comparison to Author’s Code

| Environment | Ours (TD3 Return) | Author (TD3 Return) | Timesteps |
|---|---|---|---|
| HalfCheetah | $12,804\pm 493$ | $9,637\pm 859$ | $1\times10^{6}$ |
| Hopper | $3,498\pm 99$ | $3,564\pm 115$ | $1\times10^{6}$ |
| Ant | $5,700\pm 334$ | $4,372\pm 1,000$ | $1\times10^{6}$ |
| Walker2d | $4,181\pm 607$ | $4,683\pm 540$ | $1\times10^{6}$ |
| Humanoid | $5,085\pm 144$ | N/A | $1\times10^{6}$ |

Table 4: TD3 Implementation Comparison to Author’s Code
## 4 Conclusion
In conclusion, we show that TD3 and SAC are closely related algorithms, and
that it is possible to categorize them as belonging to the same general family
of algorithms, namely ‘Off-Policy Continuous Generalized Policy Iteration’. We
make this comparison complete by comparing against an oft-forgotten approach
SVG(0). We then show that by matching hyperparameters, their performance is
more similar than is often shown in the literature, and can be statistically
indistinguishable; furthermore TD3 can in fact outperform SAC on certain
environments whilst being more computationally efficient. To make this link
from theory to practice explicit, we have implemented both in the open-source
code base OffCon3, whereby many major elements of the code are shared for each
algorithm.
## References
* [1] Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel and Sergey Levine “Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor” In _ICML_ , 2018
* [2] Tuomas Haarnoja et al. “Soft Actor-Critic Algorithms and Applications”, 2018 arXiv:1812.05905 [cs.LG]
* [3] Michael Janner, Justin Fu, Marvin Zhang and Sergey Levine “When to Trust Your Model: Model-Based Policy Optimization” In _Advances in Neural Information Processing Systems_ 32 Curran Associates, Inc., 2019, pp. 12519–12530 URL: https://proceedings.neurips.cc/paper/2019/file/5faf461eff3099671ad63c6f3f094f7f-Paper.pdf
* [4] Ignasi Clavera, Yao Fu and Pieter Abbeel “Model-Augmented Actor-Critic: Backpropagating through Paths” In _International Conference on Learning Representations_ , 2020 URL: https://openreview.net/forum?id=Skln2A4YDB
* [5] Yinlam Chow, Brandon Cui, MoonKyung Ryu and Mohammad Ghavamzadeh “Variational Model-based Policy Optimization”, 2020 arXiv:2006.05443 [cs.LG]
* [6] Kate Rakelly et al. “Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables” 97, Proceedings of Machine Learning Research Long Beach, California, USA: PMLR, 2019, pp. 5331–5340 URL: http://proceedings.mlr.press/v97/rakelly19a.html
* [7] Wenjie Shi, Shiji Song and Cheng Wu “Soft Policy Gradient Method for Maximum Entropy Deep Reinforcement Learning” In _Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19_ International Joint Conferences on Artificial Intelligence Organization, 2019, pp. 3425–3431 DOI: 10.24963/ijcai.2019/475
* [8] David Silver et al. “Deterministic Policy Gradient Algorithms” 32.1, Proceedings of Machine Learning Research Bejing, China: PMLR, 2014, pp. 387–395 URL: http://proceedings.mlr.press/v32/silver14.html
* [9] Richard S Sutton and Andrew G Barto “Reinforcement learning: An introduction” MIT press, 2018
* [10] Timothy P. Lillicrap et al. “Continuous control with deep reinforcement learning.” In _ICLR (Poster)_ , 2016 URL: http://arxiv.org/abs/1509.02971
* [11] Nicolas Heess et al. “Learning Continuous Control Policies by Stochastic Value Gradients” In _Advances in Neural Information Processing Systems 28_ Curran Associates, Inc., 2015, pp. 2944–2952 URL: http://papers.nips.cc/paper/5796-learning-continuous-control-policies-by-stochastic-value-gradients.pdf
* [12] Diederik P. Kingma and Max Welling “Auto-Encoding Variational Bayes” In _2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings_ , 2014
* [13] Shakir Mohamed, Mihaela Rosca, Michael Figurnov and Andriy Mnih “Monte Carlo Gradient Estimation in Machine Learning” In _Journal of Machine Learning Research_ 21.132, 2020, pp. 1–62
* [14] Brian D. Ziebart “Modeling Purposeful Adaptive Behavior with the Principle of Maximum Causal Entropy” USA: Carnegie Mellon University, 2010
* [15] Scott Fujimoto, Herke Hoof and David Meger “Addressing Function Approximation Error in Actor-Critic Methods” In _ICML_ , 2018, pp. 1582–1591 URL: http://proceedings.mlr.press/v80/fujimoto18a.html
* [16] Greg Brockman et al. “OpenAI Gym”, 2016 eprint: arXiv:1606.01540
* [17] Sébastien Bubeck “Convex Optimization: Algorithms and Complexity” In _Found. Trends Mach. Learn._ 8.3–4 Hanover, MA, USA: Now Publishers Inc., 2015, pp. 231–357 DOI: 10.1561/2200000050
* [18] Yuichi Yoshida and Takeru Miyato “Spectral Norm Regularization for Improving the Generalizability of Deep Learning”, 2017 arXiv:1705.10941 [stat.ML]
* [19] B.. Welch “The Generalization of ’Student’s’ Problem when Several Different Population Variances are Involved” In _Biometrika_ 34.1-2, 1947, pp. 28–35 DOI: 10.1093/biomet/34.1-2.28
* [20] Aviral Kumar, Aurick Zhou, George Tucker and Sergey Levine “Conservative Q-Learning for Offline Reinforcement Learning” In _Advances in Neural Information Processing Systems_ 33 Curran Associates, Inc., 2020, pp. 1179–1191 URL: https://proceedings.neurips.cc/paper/2020/file/0d2b2061826a5df3221116a5085a6052-Paper.pdf
## Appendix A Objective of a Stochastic Gaussian Policy with fixed Variance
Consider a Deterministic Policy:
$\displaystyle J_{D}$ $\displaystyle=\mathbb{E}_{s_{t}\sim E}\left[Q(s_{t},a_{t})|_{a_{t}=\mu(s_{t})}\right].$
The Random Policy is defined as
$\pi(a_{t}|s_{t})=\mathcal{N}(a_{t}|\mu(s_{t}),\sigma^{2})$ where $\sigma$ is
fixed:
$\displaystyle J_{R}$ $\displaystyle=\mathbb{E}_{s_{t}\sim E}\left[\mathbb{E}_{a_{t}\sim\pi(a_{t}|s_{t})}\left[Q(s_{t},a_{t})\right]\right]$
$\displaystyle=\mathbb{E}_{s_{t}\sim E}\left[\int\mathcal{N}(a_{t}|\mu(s_{t}),\sigma^{2})Q(s_{t},a_{t})da_{t}\right].$
Performing a Taylor expansion of $Q(s_{t},a_{t})$ around $a_{t}=\mu(s_{t})$
provides:
$Q(s_{t},a_{t})=Q(s_{t},\mu(s_{t}))+\nabla_{a}Q(s_{t},a)|_{a=\mu(s_{t})}(a_{t}-\mu(s_{t}))\\
+\frac{1}{2}\nabla^{2}_{a}Q(s_{t},a)|_{a=\mu(s_{t})}(a_{t}-\mu(s_{t}))^{2}+\dots.$
We address the different Taylor expansion orders separately (labeled 0, 1, 2,
etc.).
First 0th order:
0 $\displaystyle=\mathbb{E}_{s_{t}\sim E}\left[\int\mathcal{N}(a_{t}|\mu(s_{t}),\sigma^{2})Q(s_{t},\mu(s_{t}))da_{t}\right]$
$\displaystyle=\mathbb{E}_{s_{t}\sim E}\left[Q(s_{t},\mu(s_{t}))\int\mathcal{N}(a_{t}|\mu(s_{t}),\sigma^{2})da_{t}\right]$
$\displaystyle=\mathbb{E}_{s_{t}\sim E}\left[Q(s_{t},\mu(s_{t}))\right]$
$\displaystyle=J_{D}.$
Now 1st order:
1 $\displaystyle=\mathbb{E}_{s_{t}\sim E}\left[\int\mathcal{N}(a_{t}|\mu(s_{t}),\sigma^{2})\left(\nabla_{a}Q(s_{t},a)|_{a=\mu(s_{t})}(a_{t}-\mu(s_{t}))\right)da_{t}\right]$
$\displaystyle=\mathbb{E}_{s_{t}\sim E}\left[\nabla_{a}Q(s_{t},a)|_{a=\mu(s_{t})}\int\mathcal{N}(a_{t}|\mu(s_{t}),\sigma^{2})\left(a_{t}-\mu(s_{t})\right)da_{t}\right]$
$\displaystyle=\mathbb{E}_{s_{t}\sim E}\left[\nabla_{a}Q(s_{t},a)|_{a=\mu(s_{t})}\left(\int\mathcal{N}(a_{t}|\mu(s_{t}),\sigma^{2})a_{t}da_{t}-\mu(s_{t})\right)\right]$
$\displaystyle=\mathbb{E}_{s_{t}\sim E}\left[\nabla_{a}Q(s_{t},a)|_{a=\mu(s_{t})}\left(\mu(s_{t})-\mu(s_{t})\right)\right]$
$\displaystyle=0.$
Now 2nd order:
2 $\displaystyle=\mathbb{E}_{s_{t}\sim E}\left[\int\mathcal{N}(a_{t}|\mu(s_{t}),\sigma^{2})\left(\nabla^{2}_{a}Q(s_{t},a)|_{a=\mu(s_{t})}(a_{t}-\mu(s_{t}))^{2}\right)da_{t}\right]$
$\displaystyle=\mathbb{E}_{s_{t}\sim E}\left[\nabla^{2}_{a}Q(s_{t},a)|_{a=\mu(s_{t})}\int\mathcal{N}(a_{t}|\mu(s_{t}),\sigma^{2})\left(a_{t}-\mu(s_{t})\right)^{2}da_{t}\right]$
$\displaystyle=\mathbb{E}_{s_{t}\sim E}\left[\nabla^{2}_{a}Q(s_{t},a)|_{a=\mu(s_{t})}\mathbb{E}_{a_{t}}\left[(a_{t}-\mu(s_{t}))^{2}\right]\right]$
$\displaystyle=\mathbb{E}_{s_{t}\sim E}\left[\nabla^{2}_{a}Q(s_{t},a)|_{a=\mu(s_{t})}\right]\sigma^{2}.$
So putting it all (0, 1, 2) together:
$\displaystyle J_{R}$
$\displaystyle=J_{D}+\frac{\sigma^{2}}{2}\mathbb{E}_{s_{t}\sim
E}\left[\nabla^{2}_{a}Q(s_{t},a)|_{a=\mu(s_{t})}\right]$
## Appendix B Efficiency Gain of TD3 in Ant
Figure 3: Efficiency gain of TD3 over SAC
## Appendix C 2 v.s. 3 Layers
Some recent Q-learning work for MuJoCo continuous control has used 3 hidden
layers instead of the 2 hidden layers in the original authors' code, such as
[20]. Following their lead, and noticing the particularly strong performance
on HalfCheetah, we choose to implement 3 hidden layers in OffCon3. For
fairness, however, we also run a set of experiments with 2 hidden layers; the
results are displayed in Figure 4. Apart from the significantly improved
HalfCheetah performance, and SAC improving on Hopper, the convergent
differences are marginal. Note that the small-network plots are smoother
because we evaluate performance at longer intervals.
Figure 4: SAC and TD3 training curves on MuJoCo environments with different
network depths (all 5 seeds); panels: (a) HalfCheetah, (b) Hopper, (c) Ant,
(d) Walker2d, (e) Humanoid. (Small) denotes a 2-hidden-layer network for both
actor and critic.
## Acronyms
AI: artificial intelligence
ANN: artificial neural network
ASIC: application-specific integrated circuit
BD: bounded delay
BC: breast cancer
BNN: binarized neural network
CB: connectionist bench
CD: completion detection
CNN: convolutional neural network
CTM: convolutional Tsetlin machine
CPOG: conditional partial order graph
DI: delay insensitive
DR: dual-rail
DRAM: dynamic RAM
DSP: digital signal processing
DVFS: dynamic voltage and frequency scaling
FD-SOI: fully-depleted silicon-on-insulator
FPGA: field-programmable gate array
FSM: finite state machine
HDL: hardware description language
HVR: house voting record
INWE: inverse-narrow-width effect
ISA: instruction set architecture
IoT: Internet of Things
LA: learning automaton
LEC: logical equivalence checking
LFSR: linear feedback shift register
LU: learning unit
MAC: multiply-accumulate
MNIST: Modified National Institute of Standards and Technology
MEP: minimum energy point
ML: machine learning
MLP: multi-layer perceptron
MPP: maximum power point
MPPT: maximum power point tracking
MRAM: magnetoresistive RAM
NCL: null convention logic
NN: neural network
OCV: on-chip variation
PVT: process, variation and temperature
QDI: quasi delay insensitive
RAM: random-access memory
RDF: random dopant fluctuation
RTM: recurrent Tsetlin machine
SCM: standard cell memory
SI: speed independent
SR: single-rail
SRAM: static RAM
STG: signal transition graph
SVM: support vector machine
TA: Tsetlin Automaton
TAT: Tsetlin automaton team
TM: Tsetlin machine
ULV: ultra-low voltage
VLSI: very-large-scale integration
WSN: wireless sensor network
IC: integrated circuit
MFCC: Mel-frequency cepstrum coefficients
KWS: keyword spotting
DNN: Deep Neural Network
QCNN: Quantized Convolutional Neural Network
BCNN: Binarized Convolutional Neural Network
LSTM: Long Short-term Memory
SoC: system on a chip
GPU: Graphics Processing Unit
HPC: High Performance Computing
CUDA: Compute Unified Device Architecture
# Low-Power Audio Keyword Spotting using Tsetlin Machines
Jie Lei
Microsystems Research Group
School of Engineering,
Newcastle University, NE1 7RU, UK
<EMAIL_ADDRESS>
Tousif Rahman
Microsystems Research Group
School of Engineering,
Newcastle University, NE1 7RU, UK
<EMAIL_ADDRESS>
Rishad Shafik
Microsystems Research Group
School of Engineering,
Newcastle University, NE1 7RU, UK
<EMAIL_ADDRESS>
Adrian Wheeldon
Microsystems Research Group
School of Engineering,
Newcastle University, NE1 7RU, UK
<EMAIL_ADDRESS>
Alex Yakovlev
Microsystems Research Group
School of Engineering,
Newcastle University, NE1 7RU, UK
<EMAIL_ADDRESS>
Ole-Christoffer Granmo
Centre for AI Research (CAIR),
University of Agder, Kristiansand, Norway;
<EMAIL_ADDRESS>
Fahim Kawsar
Pervasive Systems Centre,
Nokia Bell Labs Cambridge, UK;
<EMAIL_ADDRESS>
Akhil Mathur
Pervasive Systems Centre,
Nokia Bell Labs Cambridge, UK;
<EMAIL_ADDRESS>
###### Abstract
The emergence of Artificial Intelligence (AI) driven Keyword Spotting (KWS)
technologies has revolutionized human-to-machine interaction. Yet the
challenges of end-to-end energy efficiency, memory footprint and system
complexity in current Neural Network (NN) powered AI-KWS pipelines have
remained ever-present. This paper evaluates KWS using a learning-automata-powered
machine learning algorithm called the Tsetlin Machine (TM). Through a
significant reduction in parameter requirements, and by choosing logic- over
arithmetic-based processing, the TM offers new opportunities for low-power KWS
while maintaining high learning efficacy. We explore a TM-based KWS pipeline
and demonstrate low complexity with a faster rate of convergence compared to
NNs. Further, we investigate scalability with an increasing number of keywords
and explore the potential for enabling low-power on-chip KWS.
Keywords: Speech Command $\cdot$ Keyword Spotting $\cdot$ MFCC $\cdot$
Tsetlin Machine $\cdot$ Learning Automata $\cdot$ Pervasive AI $\cdot$ Machine
Learning $\cdot$ Artificial Neural Network
## 1 Introduction
Continued advances in Internet of Things (IoT) and embedded system design have
allowed for accelerated progress in artificial intelligence (AI) based
applications [1]. AI driven technologies utilizing sensory data have already
had a profoundly beneficial impact to society, including those in personalized
medical care [2], intelligent wearables [3] as well as disaster prevention and
disease control [4].
A major aspect of widespread AI integration into modern living is underpinned
by the ability to bridge the human-machine interface, viz. through sound
recognition. Current advances in sound classification have allowed for AI to
be incorporated into self-driving cars, home assistant devices and aiding
those with vision and hearing impairments [5]. One of the core techniques
enabling these applications is KWS [6]. Selecting specifically chosen keywords
narrows the training data volume, thereby giving the AI a more focused
functionality [7].
Modern keyword detection applications usually rely on responsive, real-time
results [8]. As such, the practicality of transitioning keyword recognition
based machine learning to wearables and other smart devices is still dominated
by three challenges: the algorithmic complexity of the KWS pipeline, the
energy efficiency of the target device, and the AI model's learning efficacy.
The algorithmic complexity of KWS stems from the pre-processing requirements
of speech activity detection, noise reduction, and subsequent signal
processing for audio feature extraction, gradually increasing application and
system latency [7]. When considering on-chip processing, the issue of
algorithmic complexity driving operational latency may still be inherently
present in the AI model [7, 9].
AI-based speech recognition systems often offload computation to a cloud service.
However, ensuring real-time responses from such a service requires constant
network availability and offers poor return on end-to-end energy efficiency
[10]. Dependency on cloud services also leads to issues involving data
reliability and more increasingly, user data privacy [11].
Currently, the most commonly used AI methods apply a neural network (NN) based
architecture, or some derivative of it, to KWS [9, 12, 8, 13] (see Section 5.1
for a relevant review). NN based models employ arithmetically intensive
gradient descent computations for fine-tuning feature weights. The adjustment
of these weights requires a large number of system-wide parameters, called
hyperparameters, to balance the dichotomy between performance and accuracy
[14]. Given that these components, as well as their complex controls, are
intrinsic to the NN model, energy efficiency has remained challenging [15].
To enable alternative avenues toward real-time energy efficient KWS, low-
complexity machine learning (ML) solutions should be explored. A different ML
model will eliminate the need to focus on issues NN designers currently face
such as optimizing arithmetic operations or automating hyper-parameter
searches. In doing so, new methodologies can be evaluated against the
essential application requirements of energy efficiency and learning efficacy.
The challenge of energy efficiency is often tackled through intelligent
hardware-software co-design techniques or a highly customized AI accelerator,
the principal goal being to exploit the available resources as much as
possible.
To obtain adequate learning efficacy for keyword recognition the KWS-AI
pipeline must be tuned to adapt to speech speed and irregularities, but most
crucially it must be able to extract the significant features of the keyword
from the time-domain to avoid redundancies that lead to increased latency.
Overall, to effectively transition keyword detection to miniature form-factor
devices, there must be a conscious design effort in minimizing the latency of
the KWS-AI pipeline through algorithmic optimizations and exploration of
alternative AI models, in developing dedicated hardware accelerators to
minimize power consumption, and in understanding the relationships between
specific audio features and their associated keywords, and how they impact
learning accuracy.
This paper establishes an analytical and experimental methodology for
addressing the design challenges mentioned above. A new automata-based
learning method called the Tsetlin machine (TM) is evaluated in the KWS-AI
design in place of the traditional perceptron based NNs. The TM operates by
deriving propositional logic that describes the input features [16]. It has
shown great potential over NN based models in delivering energy-frugal AI
applications while maintaining faster convergence and high learning efficacy
[17, 18, 19].
Through exploring design optimizations utilizing the TM in the KWS-AI pipeline
we address the following research questions:
* •
How effective is the TM at solving real-world KWS problems?
* •
Does the Tsetlin Machine scale well as the KWS problem size is increased?
* •
How robust is the Tsetlin Machine in the KWS-AI pipeline when dealing with
dataset irregularities and overlapping features?
This initial design exploration will uncover the relationships concerning how
the Tsetlin Machine’s parameters affect the KWS performance, thus enabling
further research into energy efficient KWS-TM methodology.
### 1.1 Contributions
The contributions of this paper are as follows:
* •
Development of a pipeline for KWS using the TM.
* •
Using data encoding techniques to control feature granularity in the TM
pipeline.
* •
Exploration of how the Tsetlin Machine’s parameters and architectural
components can be adjusted to deliver better performance.
### 1.2 Paper Structure
The rest of the paper is organized as follows: Section 2 offers an
introduction to the core building blocks and hyper-parameters of the Tsetlin
Machine. Through exploring the methods of feature extraction and encoding
process blocks, the KWS-TM pipeline is proposed in Section 3.3. We then
analyze the effects of manipulating the pipeline hyper-parameters in Section 4
showing the Experimental Results. We examine the effects of changing the
number of Mel-frequency cepstrum coefficientss generated, the granularity of
the encoding and the the robustness of the pipeline through the impact of
acoustically similar keywords. We then apply our understanding of the Tsetlin
Machines attributes to optimize performance and energy expenditure through
Section 4.5.
Through the related works presented in Section 5.2, we explore the current
research progress on AI powered audio recognition applications and offer an
in-depth look at the key component functions of the TM. We summarize the major
findings in Section 6 and present the direction of future work in Section 7.
## 2 A Brief Introduction to Tsetlin Machine
The Tsetlin Machine is a promising new ML algorithm based on the formulation of
propositional logic [16]. This section offers a high level overview of the
main functional blocks; a detailed review of relevant research progress can be
found in Section 5.2.
The core components of the Tsetlin Machine are: _a team of Tsetlin Automata
(TA) in each clause_ , _conjunctive clauses_ , a _summation and threshold_
module and a _feedback_ module, as seen in Figure 1. The TAs are finite state
machines (FSMs) used to form the propositional-logic relationships that
describe an output class through the inclusion or exclusion of input features
and their complements. The states of the TAs for each feature and its
complement are aligned to stochastically independent clause computation
modules. Through a voting mechanism built into the summation and threshold
module, the expected output class ${Y}$ is generated. During the training
phase this class is compared against the target class $\hat{Y}$ and the TA
states are incremented or decremented accordingly (also referred to as issuing
rewards or penalties).
Figure 1: Block diagram of TM (dashed green arrow indicates penalties and
rewards)[19]
A fundamental difference between the TM and NNs is the requirement of a
_Booleanizer_ module. The key premise is to convert the raw input features and
their complements to Boolean features rather than the binary encoded features
seen with NNs. These Boolean features are also referred to as literals:
$\hat{X}$ and ${X}$. Current research has shown that significance-driven
Booleanization of features is vital in controlling the Tsetlin Machine's size
and processing requirements [18]. Increasing the number of features increases
the number of TAs, the computations for the clause module, and subsequently
the energy spent incrementing and decrementing states in the feedback module.
The choice of the number of clauses to represent the problem is also available
as a design knob, which directly affects energy/accuracy tradeoffs [19].
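The clause-and-vote computation described above can be sketched in a few lines (a simplified, single-class illustration; the array names and shapes are our own, not taken from any TM reference implementation):

```python
import numpy as np

def clause_votes(X, include, polarity, T):
    """Sketch of TM inference: each clause ANDs the literals its Tsetlin
    automata chose to include; clause outputs are summed with +/-1
    polarities and the vote total is clamped by the threshold T."""
    literals = np.concatenate([X, 1 - X]).astype(bool)  # X and its complements
    # A clause fires only if every included literal is 1; excluded
    # literals are ignored (treated as satisfied).
    fired = np.all(literals[None, :] | ~include, axis=1)
    votes = int(np.sum(np.where(fired, polarity, 0)))
    return max(-T, min(T, votes))

# Toy example: 2 Boolean features -> 4 literals [x0, x1, ~x0, ~x1].
X = np.array([1, 0])
include = np.array([[1, 0, 0, 0],   # clause 0 includes x0 -> fires
                    [0, 1, 0, 0]],  # clause 1 includes x1 -> does not fire
                   dtype=bool)
polarity = np.array([+1, -1])
print(clause_votes(X, include, polarity, T=2))  # -> 1
```

In the full multi-class TM, one such clamped sum is computed per class and the prediction is the class with the largest vote; during training the feedback module adjusts the `include` decisions.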
The Tsetlin Machine also has two hyperparameters, the _s_ value and the
_Threshold (T)_. The Threshold parameter determines the clause selection used
in the voting mechanism; larger Thresholds mean more clauses partake in the
voting and influence the feedback to TA states. The _s_ value controls the
fluidity with which the TAs can transition between states. Careful
manipulation of these parameters can determine the flexibility of the feedback
module and therefore control the TM's learning stability [17]. As seen in
Figure 2, increasing the Threshold and decreasing the _s_ value leads to more
events being triggered as more states are transitioned. These parameters must
be carefully tuned to balance energy efficiency, through minimizing triggered
events, against good performance, through finding the optimum _s_ -_T_ range
for learning stability in the KWS application.
Figure 2: The effect of $T$ and $s$ on reinforcements in the TM [19]
In order to optimize the TM for KWS, due diligence must be given to design
steps that minimize the Boolean feature set. This allows a balance to be found
between performance and energy usage by varying the TM hyperparameters and the
number of clause computation modules. Through exploiting these relationships
and properties of the TM, the KWS pipeline can be designed with particular
emphasis on feature extraction and on minimizing the number of the TM's clause
computation modules. An extensive algorithmic description of the Tsetlin
Machine can be found in [16]. The following section details how these ideas
can be implemented through audio pre-processing and Booleanization techniques
for KWS.
## 3 Audio Pre-processing Techniques for KWS
When dealing with audio data, the fundamental design efforts in pre-processing
should be to find the correct balance between reducing data volume and
preserving data veracity. That is, while removing the redundancies from the
audio stream the data quality and completeness should be preserved. This is
interpreted in the proposed KWS-TM pipeline through two methods: feature
extraction through MFCCs, followed by discretization control through quantile
based binning for Booleanization. These methods are expanded below.
### 3.1 Audio Feature Extraction using MFCC
Audio data streams are always subject to redundancies in the channel that
formalize as nonvocal noise, background noise and silence [20, 21]. Therefore
the challenge becomes identification and extraction of the desired linguistic
content (the keyword) and maximally discarding everything else. To achieve
this we must consider transformation and filtering techniques that can amplify
the characteristics of the speech signals against the background information.
This is often done through the generation of MFCCs as seen in the signal
processing flow in Figure 3.
Figure 3: MFCC pipeline.
The MFCC is a widely used audio file pre-processing method for speech related
classification applications [22, 21, 23, 24, 25, 12]. The component blocks in
the MFCC pipeline are specifically designed for extracting speech data taking
into account the intricacies of the human voice.
The _Pre-Emphasis_ step is used to compensate for the structure of the human
vocal tract and provide initial noise filtration. When glottal sounds are
produced during speech, higher frequencies are damped by the vocal tract,
which manifests as a steep roll-off in the signal's frequency spectrum [26].
The Pre-Emphasis step, as its name suggests, amplifies (adds emphasis to) the
energy in the high-frequency regions, which leads to an overall normalization
of the signal [27].
Speech signals hold a quasi-stationary quality when examined over a very short
time period, which is to say that the statistical information they hold
remains near constant [20]. This property is exploited through the _Framing
and Windowing_ step. The signal is divided into frames of around 20 ms, and
window functions around 10-15 ms long are multiplied with these overlapping
frames; in doing so we preserve the temporal changes of the signal between
frames and minimize discontinuities (realized through the smoothed spectral
edges and enhanced harmonics of the signal after the subsequent transformation
to the frequency domain) [28]. The windowed signals are then transformed to
the frequency domain through a Discrete Fourier Transform (DFT) process using
the _Fast Fourier Transform (FFT)_ algorithm. The FFT is chosen as it exploits
redundancies in the DFT to reduce the number of computations required,
offering quicker run-times.
The human hearing system interprets frequencies linearly up to a certain range
(around 1 kHz) and logarithmically thereafter. Therefore, adjustments are
required to map the FFT frequencies onto this non-linear scale [29]. This is
done by passing the signal through the _Mel Filter Banks_ in order to
transform it to the _Mel Spectrum_ [30]. The filter is realized by overlapping
band-pass filters to create the required warped axis. Next, the logarithm of
the signal is taken; this brings the data values closer together and makes
them less sensitive to slight variations in the input signal [30]. Finally, we
perform a _Discrete Cosine Transform (DCT)_ to take the resultant signal to
the _Cepstrum_ domain
[31]. The DCT function is used as energies present in the signal are very
correlated as a result of the overlapping Mel filterbanks and the smoothness
of the human vocal tract; the DCT finds the co-variance of the energies and is
used to calculate the MFCC features vector [27, 32]. This vector can be passed
to the Booleanizer module to produce the input Boolean features, as described
next.
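The processing flow above can be sketched compactly with numpy alone (the constants below, such as frame sizes and filter counts, are illustrative defaults, not necessarily the exact values used in our experiments):

```python
import numpy as np

def mfcc(signal, sr=16000, frame_len=0.020, frame_step=0.010,
         n_filters=26, n_coeffs=13, n_fft=512):
    """Minimal MFCC sketch following the processing flow of Figure 3."""
    # Pre-emphasis: boost the high frequencies damped by the vocal tract.
    x = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # Framing and windowing: overlapping ~20 ms frames, Hamming-tapered.
    L, S = int(frame_len * sr), int(frame_step * sr)
    n_frames = 1 + (len(x) - L) // S
    idx = S * np.arange(n_frames)[:, None] + np.arange(L)[None, :]
    frames = x[idx] * np.hamming(L)
    # Power spectrum via the FFT.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Mel filter bank: linear below ~1 kHz, logarithmic above.
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = imel(np.linspace(mel(0), mel(sr / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):          # triangular band-pass filters
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    log_e = np.log(power @ fbank.T + 1e-10)   # log filter-bank energies
    # DCT-II decorrelates the energies, yielding the cepstral coefficients.
    k = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * k + 1)
                 / (2 * n_filters))
    return log_e @ dct.T
```

A 1 s clip at 16 kHz with these defaults yields a 99-frame by 13-coefficient matrix, which is the raw feature set handed to the Booleanizer described next.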
### 3.2 Feature Booleanization
As described in Section 2, Booleanization is an essential step for logic based
feature extraction in Tsetlin Machines. Minimizing the Boolean feature space
is crucial to the Tsetlin Machine’s optimization. The size and processing
volume of a TM is primarily dictated by the number of Booleans [18].
Therefore, a pre-processing stage for the audio features must be embedded into
the pipeline before the TM to allow for granularity control of the raw MFCC
data. The number of the Booleanized features should be kept as low as possible
while capturing the critical features for classification [18].
The discretization method should be able to adapt to, and preserve the
statistical distribution of the MFCC data. The most frequently used method of
discretizing data is binning: the process of dividing data into groups, with
individual data points then represented by the group they belong to. Data
points that are close to each other are put into the same group, thereby
reducing data granularity [16]. Fixed-width binning methods are not effective
at representing skewed distributions and often result in empty bins; they also
require manual decisions about bin boundaries.
Therefore, for adaptive and scalable Booleanization quantile based binning is
preferred. Through binning the data using its own distribution, we maintain
its statistical properties and do not need to provide bin boundaries, merely
the number of bins the data should be discretized into. The control over the
number of quantiles is an important parameter in obtaining the final Boolean
feature set. Choosing two quantiles will result in each MFCC coefficient being
represented using only one bit; however, choosing ten quantiles (or bins) will
result in four bits per coefficient. Given the large number of coefficients
present in the KWS problem, controlling the number of quantiles is an
effective way to reduce the total TM size.
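The encoding can be sketched as follows (an illustrative helper of our own, capturing the idea rather than the exact implementation used in our experiments):

```python
import numpy as np

def booleanize(features, n_bins=10):
    """Quantile-based binning followed by fixed-width binary encoding.
    features: (n_samples, n_coeffs) raw MFCC matrix."""
    # Interior quantiles give the bin edges, so the bins follow the data's
    # own distribution and skewed features do not leave bins empty.
    edges = np.quantile(features, np.linspace(0, 1, n_bins + 1)[1:-1], axis=0)
    bins = np.stack([np.digitize(features[:, j], edges[:, j])
                     for j in range(features.shape[1])], axis=1)
    # ceil(log2(n_bins)) Booleans per coefficient: 2 bins -> 1 bit,
    # 10 bins -> 4 bits, matching the counts discussed above.
    width = max(1, int(np.ceil(np.log2(n_bins))))
    bits = (bins[..., None] >> np.arange(width)) & 1
    return bits.reshape(features.shape[0], -1).astype(np.uint8)
```

For example, a (100, 5) feature matrix with ten quantile bins becomes a (100, 20) Boolean matrix, while two bins give (100, 5): the quantile count is the design knob that sets the total TM input size.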
### 3.3 The KWS-TM pipeline
The KWS-TM pipeline is composed of the data encoding and classification
blocks presented in Figure 4. The data encoding scheme encompasses the
generation of MFCCs and the quantile binning based Booleanization method. The
resulting Booleans are then fed to the Tsetlin Machine for classification. The
figure highlights the core attributes of the pre-processing blocks: the
ability to extract the audio features only associated with speech through
MFCCs and the ability to control their Boolean granularity through quantile
binning.
To explore the functionality of the pipeline and the optimizations that can be
made, we return to our primary intentions, i.e., to achieve energy efficiency
and high learning efficacy in KWS applications. We can now use the design
knobs offered in the pre-processing blocks, such as variable window size in
the MFCC generation, and control over the number of quantiles to understand
how these parameters can be used in presenting the Boolean data to the TM in a
way that returns good performance while utilizing the fewest Booleans.
Through Section 2 we have also seen the design knobs available through
variation of the hyperparameters _s_ and Threshold _T_ , as well as the number
of clause computation modules used to represent the problem. Varying the
parameters in both the encoding and classification stages through an
experimental context will uncover the impact they have on the overall KWS
performance and energy usage.
Figure 4: The data encoding and classification stages in the KWS-TM pipeline
## 4 Experimental Results
To evaluate the proposed KWS-TM pipeline, the Tensorflow speech command
dataset was used (Tensorflow speech command: https://tinyurl.com/TFSCDS). The
dataset consists of many spoken keywords collected from a variety of speakers
with different accents and genders. The datapoints are stored as 1-second-long
audio files in which the background noise is negligible. This reduces the
effect of added redundancies in the MFCC generation; given that our main aim
here is to test functionality, we will explore the impact of noisy channels in
future work. This dataset is commonly used in testing the functionality of ML
models and will therefore allow for fair comparisons to be drawn [33].
From the Tensorflow dataset, 10 keywords: "Yes", "No", "Stop", "Seven",
"Zero", "Nine", "Five", "One", "Go" and "Two", have been chosen to explore the
functionality of the pipeline using some basic command words. Considering
other works comparing NN based pipelines, 10 keywords is the maximum used [34,
13]. Among the keywords chosen, there is an acoustic similarity between "No"
and "Go", therefore, we explore the impact of 9 keywords together (without
"Go") and then the effect of "No" and "Go" together. The approximate ratio of
training data, testing data and validation data is given as 8:1:1 respectively
with a total of 3340 datapoints per class. Using this setup, we will conduct a
series of experiments to examine the impact of the various parameters of the
KWS-TM pipeline discussed earlier. The experiments are as follows:
* •
Manipulating the window length and window steps to control the number of MFCCs
generated.
* •
Exploring the effect of different quantile bins to change the number of
Boolean features.
* •
Using different numbers of keywords, ranging from 2 to 9, to explore
the scalability of the pipeline.
* •
Testing the effect on performance of acoustically different and similar
keywords.
* •
Changing the size of the TM through manipulating the number of clause
computation modules, optimizing performance through tuning the feedback
control parameters _s_ and _T_.
### 4.1 MFCC Setup
It is well established that the number of input features to the TM is one of
the major factors affecting its resource usage [17, 18, 19]. More raw input
features mean more Booleans are required to represent them, and thus the
number of Tsetlin Automata (TAs) in the TM will also increase, leading to more
energy spent providing feedback to them. Therefore, reducing the number of
features at the earliest point of the data encoding stage of the pipeline is
crucial to implementing energy-frugal TM applications.
The first set of parameters available for manipulating the number of features
comes in the form of the _Window Step_ and the _Window Length_ (applied in the
"Framing and Windowing" stage in Figure 4) in MFCC generation, as seen in
Figure 5(a).
Figure 5: The Hamming window function applied to audio pre-processing;
panels: (a) the windowing process, (b) effect of increasing window length,
(c) effect of increasing window step.
The window function is effective in reducing spectral distortion by tapering
the sample signal at the beginning and end of each frame (overlapping frames
ensure signal continuity is not lost). Smaller Window Steps lead to a more
fine-grained and descriptive representation of the audio features through more
frames, and therefore more MFCCs, but this also increases computations and
latency.
Increasing the Window Length leads to a linear decrease in the total number of
frames and therefore the MFCCs as seen in Figure 6(a). Given that the Window
Steps are kept constant for this experiment, we have a linearly decreasing
number of window overlaps resulting in a linearly decreasing total number of
window functions, FFTs and subsequent computations. This leads to the linear
decrease in the MFCCs across all frames.
Increasing the Window Step leads to a much sharper fall, since the overlapping
regions no longer decrease linearly, as seen in Figure 6(b). This results in a
non-linearly decreasing total number of window functions, and therefore far
fewer FFTs and subsequent computations, leading to far fewer MFCCs across all
frames. As a result, the smaller the increase in the Window Step, the larger
the decrease in the number of frames and therefore MFCCs.
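Both trends follow from the frame-count relation n_frames = 1 + floor((N - L) / S) for N samples, window length L and step S (the helper below is our own illustration, not the paper's code):

```python
# Frames fall linearly in the window length L but non-linearly
# (hyperbolically) in the window step S, since S sits in the denominator.
def n_frames(n_samples, win_len_s, win_step_s, sr=16000):
    L, S = int(win_len_s * sr), int(win_step_s * sr)
    return 1 + (n_samples - L) // S

# 1 s clip at 16 kHz: doubling the step removes far more frames
# than doubling the length does.
print(n_frames(16000, 0.020, 0.010))  # 99
print(n_frames(16000, 0.040, 0.010))  # 97  (length doubled: mild, linear drop)
print(n_frames(16000, 0.020, 0.020))  # 50  (step doubled: sharp drop)
```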
Figure 6: Changing the number of MFCC coefficients by manipulating the Window
parameters; panels: (a) the effect of increasing window length, (b) the effect
of increasing window step.
To test the effectiveness of manipulating the Window Length and Window Step,
the MFCC coefficients were produced for 4 keywords and the TM's classification
performance was examined, as seen in Figure 7(a) and Figure 7(b). Changing the
Window Length results in much bigger falls in accuracy compared to the Window
Step. This is due to the diminished signal amplitudes at the window edges:
longer windows mean more tapering of the edge amplitudes and fewer overlaps to
preserve signal continuity, as seen in Figure 5(b). As a result, the fidelity
of the generated MFCC features is reduced.
Figure 7: Effect of changing window parameters on classification accuracy;
panels: (a) effect of window length, (b) effect of window step.
Increasing the Window Step leads to a smaller drop in accuracy. The testing
and validation accuracy remain roughly constant at around 90.5$\%$ between
0.03 and 0.10 second Window Steps and then experience a slight drop. Once
again this is due to the tapering effect of the window function: given that
the window length remains the same for this experiment, increasing the window
step means far fewer total overlaps and a shrinking overlapping region, as
seen in Figure 5(c). The overlaps preserve the continuity of the signal
against the window function's edge tapering; as the overlapping regions
shrink, the effect of edge tapering increases, leading to increased loss of
information. The accuracy remains constant up to a Window Step of 0.1 s
because the Window Length is sufficiently long to capture enough of the signal
information; once the overlapping regions start to shrink, we see the loss in
accuracy.
Increasing the Window Step is thus very effective in reducing the number of
frames, and therefore the total number of MFCC coefficients across all frames;
provided the Window Length is long enough, the reduction in performance is
minimal. To translate these findings into energy-efficient implementations,
design focus must go to balancing the size of the Window Step against the
achieved accuracy, given the reduction in computations that follows from
producing fewer features.
### 4.2 Impact of Number of Quantiles
One might expect that increased granularity through more bins would improve
performance, but we observe this is not entirely the case. Table 1 shows the
impact on KWS-TM performance of increasing the number of bins. The testing and
validation accuracy remain around the same with 1 Boolean per feature compared
with 4 Booleans per feature. Figure 8 shows the large variance in some feature
columns and no variance in others. The zero-variance features are redundant in
the subsequent Booleanization; they are represented by the same Boolean
sequence. The features with large variances are of main interest. The mean of
these features is relatively close to zero compared to their variance (as seen
in Figure 9), so a one-Boolean-per-feature representation is sufficient: a 1
represents values above the mean and a 0 values below it. The logical
conclusion from these explorations is that the MFCC alone is sufficient both
to eliminate redundancies and to extract the keyword properties, and does not
require granularity beyond one Boolean per feature to distinguish classes.
Figure 8: Variance between MFCC features.
Figure 9: Mean of MFCC features.
We have seen that the large variance of the MFCCs means they are easily
represented by 1 Boolean per feature, which is sufficient to achieve high
performance. This is an important initial result; for offline learning we can
now also evaluate, in future work, the effect of removing the zero-variance
features to further reduce the total number of Booleans. From the perspective
of the Tsetlin Machine there is an additional explanation as to why
performance remains high even when additional Boolean granularity is allocated
to the MFCC features. Given that there are a large number of datapoints in
each class (3340), if the MFCCs that describe these datapoints are very
similar then the TM has more than sufficient training data to settle on the
best propositional logic descriptors. This is further seen in the high
training accuracy compared to the testing and validation accuracy.
Table 1: Impact of increasing quantiles with 4 classes
Training | Testing | Validation | Num. Bins | Bools per Feature | Total Bools
---|---|---|---|---|---
94.8% | 91.3% | 91.0% | 2 | 1 | 378
96.0% | 92.0% | 90.7% | 4 | 2 | 758
95.9% | 90.5% | 91.0% | 6 | 3 | 1132
95.6% | 91.8% | 92.0% | 8 | 3 | 1132
97.1% | 91.0% | 90.8% | 10 | 4 | 1512
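The Bools-per-feature column of Table 1 grows as the number of bits needed to binary-encode the quantile-bin index. A hedged sketch of such a scheme (the function name and exact encoding are our assumptions, not the paper's code):

```python
import numpy as np

def quantile_bin_booleanize(col: np.ndarray, num_bins: int) -> np.ndarray:
    """Assign each value its quantile-bin index, then binary-encode the
    index with ceil(log2(num_bins)) Booleans per feature; consistent
    with the Bools-per-Feature column of Table 1 (illustrative)."""
    # Inner quantile edges, e.g. [0.25, 0.5, 0.75] for 4 bins
    edges = np.quantile(col, np.linspace(0, 1, num_bins + 1)[1:-1])
    idx = np.searchsorted(edges, col, side="right")  # bin index 0..num_bins-1
    n_bits = max(1, int(np.ceil(np.log2(num_bins))))
    return ((idx[:, None] >> np.arange(n_bits)) & 1).astype(np.uint8)
```

With `num_bins=2` this reduces to one Boolean per feature, matching the mean/median-threshold case above.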
### 4.3 Impact of Increasing the Number of Keywords
Figure 10(a) shows the linear manner in which the training, testing and
validation accuracy decrease as the number of keywords is increased, for a TM
with 450 clauses trained for 200 epochs. We note that the testing and
validation accuracy start to veer further away from the training accuracy as
keywords are added. This performance drop is expected in ML methods as the
problem scales [35]. Despite the large number of datapoints per keyword, this
is an indicator of overfitting, as confirmed through Figure 10(b), which shows
around a 4$\%$ increase. The implication is that an increased number of
keywords makes it difficult for the TM to create sufficiently distinct
propositional logic to separate the classes. The performance drop occurs when
the correlation between keywords outweighs the number of datapoints available
to distinguish each of them. This behavior is commonly observed in ML models
for audio classification applications [23].
The explained variance ratio of the dataset with an increasing number of
keywords was taken for the first 100 Principal Component Analysis eigenvalues,
as seen in Figure 10(b). We observe that as the number of keywords is
increased, the system variance decreases, i.e. the inter-class features become
increasingly correlated. Correlated inter-class features will lead to class
overlap and degrade TM performance [18]. Through examination of the two
largest Linear Discriminant component values for the 9-keyword dataset, we
clearly see in Figure 11 that there is very little class separability present.
Figure 10: The effect of increasing the number of keywords. (a) The effect on accuracy. (b) The amount of overfitting.
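The explained-variance analysis above can be sketched with scikit-learn (an assumption on our part; the paper does not name its tooling):

```python
import numpy as np
from sklearn.decomposition import PCA

def explained_variance_first_k(X: np.ndarray, k: int = 100) -> np.ndarray:
    """Fraction of dataset variance captured by each of the first k
    principal components. A flatter, lower curve as keywords are added
    indicates increasingly correlated inter-class features (sketch)."""
    pca = PCA(n_components=min(k, min(X.shape)))
    pca.fit(X)
    return pca.explained_variance_ratio_
```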
To mitigate the effect of increasing keywords on performance, two methods are
available. Firstly, the Tsetlin Machine's hyperparameters can be adjusted to
enable more events to be triggered (see Figure 2). In doing so, the TM may
create more differing logic to describe the classes. Then, by increasing the
number of clause computation modules, the TM will have a larger voting group
in the Summation and Threshold module and potentially reach the correct
classification more often. Secondly, the quantity of datapoints can be
increased; however, for this to be effective the new dataset should hold more
variance and completeness when describing each class. This method of data
regularization is often used in audio ML applications to deliberately
introduce small variance between datapoints [21].
Figure 11: LDA of 9 keywords.
### 4.4 Acoustically Similar Keywords
In order to test the robustness of the KWS-TM pipeline, we must emulate
real-world conditions where a user will use commands that are acoustically
similar to others. Table 2 shows the results under such circumstances. The
_Baseline_ experiment is a KWS dataset consisting of 3 keywords: ’Yes’, ’No’
and ’Stop’. The second experiment then introduces the keyword ’Seven’ to the
dataset and the third experiment introduces the keyword ’Go’.
The addition of ’Seven’ causes a slight drop in accuracy, adhering to our
previous arguments of increased correlation and the presence of overfitting.
The key result, however, is the inclusion of ’Go’: ’Go’ is acoustically
similar to ’No’, and this increases the difficulty of separating these two
classes. We see from Figure 12(a), showing the first two LDA components, that
adding ’Seven’ does not lead to as much class overlap as adding ’Go’, as seen
in Figure 12(b). As expected, the acoustic similarity of ’No’ and ’Go’ leads
to significant overlap. We have seen from the previous result (Figure 11) that
distinguishing classes is increasingly difficult when class overlaps are
present.
Table 2: Impact of acoustically similar keywords.

Experiments | Training | Testing | Validation
---|---|---|---
Baseline | 94.7% | 92.6% | 93.1%
Baseline + ‘Seven’ | 92.5% | 90.1% | 90.2%
Baseline + ‘Go’ | 85.6% | 82.6% | 80.9%
Figure 12: The LDA of 4 keywords, the Baseline with one other. (a) The Baseline with ‘Seven’. (b) The Baseline with ‘Go’.
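The class-separability plots can likewise be reproduced with an LDA projection; a sketch using scikit-learn (our assumed tooling, not necessarily the paper's):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def lda_components(X: np.ndarray, y: np.ndarray, n: int = 2) -> np.ndarray:
    """Project the features onto the first n linear discriminant
    components. Heavy overlap of the projected classes (as with 'No'
    vs 'Go') signals poor separability (illustrative sketch)."""
    lda = LinearDiscriminantAnalysis(n_components=n)
    return lda.fit_transform(X, y)
```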
### 4.5 Number of Clauses per Class
So far we have considered the impact of Booleanization granularity, problem
scalability and robustness when dealing with acoustically similar classes. We
now turn our attention towards optimizing the KWS-TM pipeline to find the
right functional balance between performance and energy efficiency. This is
made possible through two streams of experimentation: manipulating the number
of clauses for each keyword class in the TM while observing the energy
expenditure and accuracy, and experimenting with the TM's hyperparameters to
enable better performance using fewer clauses.
Figure 13: Effect of increasing the number of clauses on accuracy and overfitting. (a) The effect on accuracy. (b) The effect on overfitting.
The influence of increasing the number of clauses was briefly discussed in
Section 2; here we can see the experimental result in Figure 13(a), showing
the impact of increasing clauses with 4 classes.
An increased number of clauses leads to better performance. However, upon
closer examination we can also see the impact of overfitting at the clause
level, i.e., increasing the number of clauses has resulted in a larger
difference between the training accuracy and the testing and validation
accuracies. The datapoints for the 4 classes were sufficient to create largely
different sub-patterns for the TAs during training, but not complete enough to
describe new data in testing and validation.
As a result, when clauses are increased, more clauses reach incorrect
decisions and sway the voting in the summation and threshold module toward an
incorrect classification, as seen through Figure 14(a). The TM has two types
of feedback: Type I, which introduces stochasticity into the system, and Type
II, which bases state transitions on the corresponding clause value. Type II
feedback is predominantly used to diminish the effect of false positives. We
see that as the number of clauses increases the TM uses more Type II feedback,
indicating increased false-positive classifications. This is due to the
incompleteness of the training data in describing all possible logic
propositions for each class. We see this through Figure 14(b): despite
increasing the number of epochs we do not experience a boost in testing and
validation accuracy, and through Figure 13(b) we find the point where the
overfitting outweighs the accuracy improvement, at around 190-200 clauses.
Figure 14: Effect of increasing the number of clauses on TM feedback (a), and the effect of increasing the number of epochs on accuracy (b).
From the perspective of energy efficiency, these results offer two possible
implications for the KWS-TM pipeline. If a small degradation of performance in
the KWS application is acceptable, then operating in a lower clause range will
be more beneficial for the TM. The performance can then be boosted through the
hyperparameters available to adjust feedback fluidity. This approach will
reduce energy expenditure through fewer clause computations and reduce the
effects of overfitting when the training data lacks completeness.
Alternatively, if performance is the main goal, then the design focus should
be on injecting the training data with more diverse datapoints to increase the
descriptiveness of each class. In that case, increased clauses will provide
more robust functionality.
Table 3: Impact of the number of clauses on energy/accuracy tradeoffs.

 | Clauses | Current | Time | Energy | Accuracy
---|---|---|---|---|---
Training | 100 | 0.50 A | 68 s | 426.40 J | -
Training | 240 | 0.53 A | 96 s | 636.97 J | -
Inference | 100 | 0.43 A | 12 s | 25.57 J | 80 %
Inference | 240 | 0.47 A | 37 s | 87.23 J | 90 %
The impacts of being resource efficient and energy frugal are most prevalent
when implementing KWS applications in dedicated hardware and embedded systems.
To explore this practically, the KWS-TM pipeline was implemented on a
Raspberry Pi. The same 4-keyword experiment was run with 100 and 240 clauses.
As expected, we see that increased clause computations lead to increased
current, time and energy usage, but also deliver better performance. We can
potentially boost the performance of the Tsetlin Machine at lower clause
counts by manipulating the hyperparameters, as seen in Table 4.
Table 4: Impact of the T values on accuracy.

Clauses | T | Training | Testing | Validation | Better Classification
---|---|---|---|---|---
30 | 2 | 83.5 % | 80.5 % | 83.8 % | ✓
30 | 23 | 74.9 % | 71.1 % | 76.1 % |
450 | 2 | 89.7 % | 86.1 % | 84.9 % |
450 | 23 | 96.8 % | 92.5 % | 92.7 % | ✓
The major factor that has impacted the performance of the KWS is the capacity
of the TM, which is determined by the number of clauses per class. The higher
the number of clauses, the higher the overall classification accuracy [18].
Yet the resource usage will increase linearly, along with the energy
consumption and memory footprint. Through Table 4 we see that at 30 clauses
the accuracy can be boosted by reducing the Threshold hyperparameter. The
table offers two design scenarios. Firstly, very high accuracy is achievable
through a large number of clauses (450 in this case) and a large Threshold
value. With a large number of clauses, an increased number of events must be
triggered in terms of state transitions (see Figure 2) to encourage more
feedback to clauses, which increases the TM's decisiveness. While this offers
a very good return on performance, the number of computations is increased
with more clauses and more events triggered, and this leads to increased
energy expenditure, as seen through Table 3.
In contrast, using 30 clauses and a lower Threshold still yields good accuracy
but at a much lower energy expenditure, through fewer clause computations and
feedback events. A smaller number of clauses means that the vote of each
clause has more impact; even at a smaller Threshold, the inbuilt stochasticity
of the TM's feedback module allows the TAs to reach the correct propositional
logic. Through these attributes it is possible to create more energy-frugal
TMs requiring fewer computations and operating at a much lower latency.
### 4.6 Comparative Learning Convergence and Complexity Analysis of KWS-TM
Both TMs and NNs have modular design components in their architecture; for the
TM this is in the form of clauses, and for the NN it is the number of neurons.
NNs require input weights for the learning mechanism, which define the
neurons' output patterns. The number of weights and the number of neurons are
variable; however, more neurons will lead to better overall NN connectivity
due to more refined arithmetic pathways to define a learning problem.
For the TM, the clauses are composed of TAs. The number of TAs is defined by
the number of Boolean features, which remains static throughout the course of
the TM's learning. It is the number of clauses that is variable; increasing
the clauses typically offers more propositional diversity to define a learning
problem.
Through Figure 15 and Table 5 we investigate the learning convergence rate of
the TM against 4 ’vanilla’ NN implementations. The TM is able to converge to
90.5$\%$ after fewer than 10 epochs, highlighting its quick learning rate
compared to the NNs, which require around 100 epochs to converge to the
isoaccuracy target ($\approx$90%). After a further 100 epochs the NN
implementations reach only marginally better accuracy than the TM. The
indirect implication of faster convergence is improved energy efficiency, as
fewer training epochs result in fewer computations required for the TA states
to settle.
Table 5 shows one of the key advantages of the TM over all types of NNs: the
significantly fewer parameters required, i.e. low complexity. The large
numbers of parameters needed by NNs are known to limit their practicality for
on-chip KWS solutions [34, 36, 12, 13], whereas the TM offers a more
resource-frugal alternative. With only 960 clauses, which require only
logic-based processing, the TM outperforms even the most capable large and
deep NNs. In our future work, we aim to exploit this to enable on-chip
learning based KWS solutions.
Figure 15: Training convergence of TM and NN implementations. (a) Convergence of the TM against shallow NNs. (b) Convergence of the TM against deep NNs.

Table 5: The required parameters for different NNs and the TM for a 4 keyword problem.

KWS-ML Configuration | Num. neurons | Num. parameters
---|---|---
NN Small & Shallow: 256+512X2 | 1,280 | 983,552
NN Small & Deep: 256+512X5 | 2,816 | 2,029,064
NN Large & Shallow: 256+1024X2 | 2,304 | 2,822,656
NN Large & Deep: 256+1024X5 | 5,376 | 7,010,824
TM with 240 Clauses per Class | 960 (clauses) | 2 hyperparameters with 725,760 TAs
## 5 Related Work
This section provides a brief examination of current KWS research and its
industrial challenges, a deeper look at the component blocks of the TM, and
insight into current developments and future research directions.
### 5.1 Current KWS developments
The first KWS classification methods, proposed in the late 1970s, used MFCCs
for their feature extraction ability and because the coefficients produced
offered a very small dimensionality compared to the raw input data being
considered at the time [37]. It was later shown that, compared to other audio
extraction methods such as linear prediction cepstral coefficients (LPCCs) and
perceptual linear prediction (PLP), MFCCs perform much better with increased
background noise and low SNR [12].
For the classifier, Hidden Markov Models (HMMs) were favored after the MFCC
stage due to their effectiveness in modelling sequences [37]. However, they
rely on many summation and Bayesian-probability-based arithmetic operations,
as well as the computationally intensive _Viterbi_ decoding, to identify the
final keyword [34, 38, 39].
Later it was shown that Recurrent Neural Networks (RNNs) outperform HMMs but
suffer from operational latency as the problem scales, albeit RNNs still have
faster run-times than HMM pipelines given that they do not require a decoder
algorithm [38]. To solve the latency issue, the Deep Neural Network (DNN) was
used; it has a smaller memory footprint and reduced run-times compared to HMMs
[12, 39]. However, DNNs are unable to efficiently model the temporal
correlations of the MFCCs and their transitional variance [36, 34]. In
addition, commonly used optimization techniques for DNNs such as pruning,
encoding and quantization lead to large accuracy losses in KWS applications
[12].
The MFCC features exist as a 2D array, as seen in Figure 4; to preserve the
temporal correlations and transitional variance, this array can be treated as
an image and a convolutional neural network (CNN) can be used for
classification [13, 36]. With the use of convolution comes the preservation of
the spatial and temporal dependencies of the 2D data, as well as the reduction
of features and computations from the convolution and pooling stages [13].
However, once again both the CNN and the DNN suffer from a large number of
parameters (250K for the dataset used in [36], and 9 million multiplies
required for the CNN). Despite the gains in performance and reductions in
latency, the computational complexity and large memory requirements of
parameter storage are ever present in all NN-based KWS solutions.
The storage and memory requirements played a major part in transitioning to a
micro-controller system for inference, where memory is limited by the size of
the SRAM [34]. In order to accommodate the large throughput of running NN
workloads, micro-controllers with integrated DSP instructions or integrated
SIMD and MAC instructions can accelerate low-precision computations [34]. When
testing with 10 keywords, it was shown experimentally in [34] that for systems
with limited memory and compute abilities DNNs are favorable, given that they
use fewer operations despite having a lower accuracy (around 6$\%$ less)
compared to CNNs.
It is when transitioning to hardware that the limitations of memory and
compute resources become more apparent. In these cases it is better to settle
for energy efficiency through classifiers with lower memory requirements and
fewer operations per second, even if there is a slight drop in performance.
A 22nm CMOS based Quantized Convolutional Neural Network (QCNN) Always-On KWS
accelerator is implemented in [12]; the authors explore the practicalities of
CNNs in hardware through quantized weights, activation values and approximate
compute units. Their findings illustrate the effectiveness of hardware design
techniques: the use of approximate compute units led to a significant decrease
in energy expenditure, and the hardware unit is able to classify 10 real-time
keywords under different SNRs with a power consumption of 52$\mu$W. The impact
of approximate computing is also argued in [13], with design focus given to
adder design; they propose an adder with a critical path that is 49.28$\%$
shorter than standard 16-bit Ripple Carry Adders.
Through their research work with earables, Nokia Bell Labs Cambridge have
brought an industrial perspective to the idea of functionality while
maintaining energy frugality as a design focus for AI-powered KWS [40, 41],
with particular emphasis on user-oriented ergonomics and commercial form
factor. They discovered that earable devices are not as influenced by
background noise as smartphones and smartwatches, and offer a better
signal-to-noise ratio for moving artefacts due to their largely fixed wearing
position in daily activities (e.g. walking or descending stairs) [41]. This
was confirmed when testing using Random Forest classifiers.
### 5.2 The Tsetlin Machine
We briefly discussed the overall mechanism of the TM and its main building
blocks in Section 2. In this section, we take a closer look at the fundamental
learning element of the TM, namely the Tsetlin Automaton, as described in
Figure 16. We also present a more detailed look at the clause computing
module, as seen in Figure 17, and discuss the first application-specific
integrated circuit (ASIC) implementation of the TM, the Mignon (Mignon AI:
http://mignon.ai/), as seen in Figure 18.
Figure 16: Mechanism of a TA.
The TA is the most fundamental part of the TM, forming the core learning
element that drives classification (Figure 16). Developed by Mikhail Tsetlin
in the 1950s, the TA is an FSM whose current state transitions towards or away
from the middle state upon receiving Reward or Penalty reinforcements during
the TM's training stage. The current state of the TA decides the output of the
automaton, which will be either an Include (aA) or an Exclude (aB).
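The TA mechanism of Figure 16 can be captured in a few lines; a minimal sketch, assuming a 2N-state automaton with the lower half mapped to Exclude and the upper half to Include (class and method names are illustrative):

```python
class TsetlinAutomaton:
    """Minimal 2N-state TA sketch: states 0..N-1 output Exclude,
    states N..2N-1 output Include. A reward pushes the state deeper
    into its current action; a penalty pushes it toward the other."""

    def __init__(self, n_states_per_action: int = 100):
        self.n = n_states_per_action
        self.state = self.n - 1              # start at the Exclude boundary

    def action(self) -> str:
        return "Include" if self.state >= self.n else "Exclude"

    def reward(self) -> None:
        if self.state >= self.n:             # deepen Include
            self.state = min(self.state + 1, 2 * self.n - 1)
        else:                                # deepen Exclude
            self.state = max(self.state - 1, 0)

    def penalty(self) -> None:
        if self.state >= self.n:             # move toward Exclude
            self.state -= 1
        else:                                # move toward Include
            self.state += 1
```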
Figure 17: Mechanism of a Clause computing module (assuming TA1= 1 means
_Include_ and TA1= 0 means _Exclude_).
Figure 17 shows how the clause module creates logic propositions that describe
the literals: based on the TA decisions, a logic _OR_ operation is taken
between the negated TA decision and the literal. The TA decision is used to
bit-mask the literal, and through this we can determine which literals are to
be excluded. The proposition is then logic _AND_-ed, and this forms the raw
vote for the clause. Clauses can be of positive or negative polarity; as such,
a sign is added to the clause output before it partakes in the class voting.
It is important to note the reliance purely on logic operations, making the TM
well suited to hardware implementations. Clauses are largely independent of
each other, only coalescing for voting, giving the TM good scalability
potential.
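The clause compute just described reduces to a one-liner; a sketch following Figure 17, where each literal is OR-ed with its negated Include decision and the results are AND-ed (the function name is our own):

```python
def clause_output(literals, include):
    """Clause compute sketch per Figure 17: for each literal, take
    (NOT include) OR literal, then AND all results. Excluded literals
    (include = 0) contribute 1 and so drop out of the conjunction."""
    return int(all((not inc) or lit for lit, inc in zip(literals, include)))
```

Because the whole computation is Boolean masking and a conjunction, it maps directly onto gates, which is the hardware-friendliness the text points to.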
The feedback to the TM can be thought of on three levels: the TM level, the
clause level and the TA level. At the TM level, the type of feedback to issue
is decided based on the target class and whether the TM is in learning or
inference mode. For inference no feedback is given; we simply take the clause
computes for each class and pass them to the summation and threshold module to
generate the predicted class. In training mode, however, there is a choice of
Type I feedback to combat false negatives or Type II feedback to combat false
positives. This feedback choice is further considered at the clause level.
At the clause level there are three main factors that determine the feedback
type to the TAs: the feedback-type decision from the TM level, the current
clause value, and whether the magnitude of the clause vote is above the
magnitude of the Threshold.
At the TA level, the feedback type from the clause level is used in
conjunction with the current TA state and the s parameter to determine whether
inaction, penalty or reward is given to the TA states.
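The first two levels can be sketched as follows. The function names are our own, and the clause-level probability expression is an assumption drawn from the standard TM formulation (Type I gating with probability (T - clamp(v, -T, T)) / 2T), not from this paper:

```python
import random

def tm_level_feedback(is_target_class: bool, training: bool):
    """TM-level choice (sketch): Type I feedback combats false
    negatives for the target class, Type II combats false positives
    for the other classes; inference issues no feedback."""
    if not training:
        return None
    return "Type I" if is_target_class else "Type II"

def clause_level_gate(class_sum: int, threshold: int, rng=random.random):
    """Clause-level gate (sketch, assumed standard-TM behavior):
    Type I feedback is issued stochastically with probability
    (T - clamp(v, -T, T)) / 2T, so clauses of an already-decisive
    class stop receiving updates as the vote sum approaches T."""
    clamped = max(-threshold, min(threshold, class_sum))
    return rng() < (threshold - clamped) / (2 * threshold)
```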
The simplicity of the TM shows its potential as a promising NN alternative.
Lei et al. [19] comparatively analyzed the architecture, memory footprint and
convergence of the two algorithms for different datasets. This research shows
that the smaller number of hyperparameters of the TM reduces the complexity of
the design. The convergence rate of the TM was higher than that of the NN in
all experiments conducted.
The most distinctive architectural advance of the TM is its
propositional-logic-based learning mechanism, which will be beneficial in
achieving energy-frugal hardware AI. Wheeldon et al. [18] presented the first
ASIC implementation of the TM, for Iris flower classification (see Figure 18).
Figure 18: The Mignon AI: an ASIC microchip accelerating the TM.
This 65-nm-technology-based design is a breakthrough, achieving an energy
efficiency of up to 63 Tera Operations per Joule (TOps/J) while maintaining a
high convergence rate and performance. The early results from this microchip
have been extensively compared with Binarized Convolutional Neural Network
(BCNN) and neuromorphic designs in [18].
In addition, Wheeldon et al. [18] proposed a system-wide design-space
exploration pipeline for deploying the TM in ASIC design. They introduced a
detailed methodology running from 1) dataset encoding, building on the work
seen in [4], to 2) software-based design exploration, 3) an FPGA-based
hyperparameter search, and 4) final ASIC synthesis. A follow-up work [42] also
implemented a self-timed and event-driven hardware TM. This implementation
showed power and timing elasticity properties suitable for low-end AI
implementations at the micro-edge.
Other works include mathematical lemma-based analysis of clause convergence
using the XOR dataset [43], natural language (text) processing [44], disease
control [4], methods of automating the s parameter [45], as well as
exploration of regression and convolutional TMs [46, 47].
The TM has so far been implemented in many different programming languages,
such as C, C++, C#, Python and Node.js, to name a few. It has also been
optimized for High Performance Computing (HPC) through Compute Unified Device
Architecture (CUDA) for accelerating Graphics Processing Unit (GPU) based
solutions, and currently through OpenCL for heterogeneous embedded systems
[48].
Exploiting this natural logic underpinning, there are currently ongoing
efforts to establish explainability evaluation and analysis of TMs [17]. The
deterministic implementation of clause selection in the TM, reported in [49],
is a promising direction to this end.
Besides published works, there are numerous talks, tutorials and multimedia
resources currently available online to mobilize the hardware/software
community around this emerging AI algorithm. Below are some key sources:
Videos: https://tinyurl.com/TMVIDEOSCAIR.
Publications: https://tinyurl.com/TMPAPERCAIR & www.async.org.uk.
Software implementations: https://tinyurl.com/TMSWCAIR
Hardware implementations, Mignon AI: http://www.mignon.ai/.
A short video demonstrating KWS using TM can be found here:
https://tinyurl.com/KWSTMDEMO.
## 6 Summary and Conclusions
This paper presented the first ever TM-based KWS application. Through
experimenting with the hyperparameters of the proposed KWS-TM pipeline, we
established relationships between the different component blocks that can be
exploited to bring about increased energy efficiency while maintaining high
learning efficacy.
From current research work we have already determined that the best way to
optimize the TM is to find the right balance between reducing the number of
features, the number of clauses and the number of events triggered through the
feedback hyperparameters, weighed against the resulting performance of these
changes. These insights were carried into our pipeline design exploration
experiments.
Firstly, we fine-tuned the window function in the generation of the MFCCs; we
saw that increasing the window step leads to far fewer MFCCs, and if the
window length is long enough to reduce edge tapering then the performance
degradation is minimal. Using quantile binning to manipulate the
discretization of the Boolean MFCCs did not yield a change in performance. The
MFCC features of interest have very large variances in each feature column,
and as such less precision can be afforded to them, even as low as one Boolean
per feature. This was extremely useful in reducing the resulting TM size.
Through manipulating the number of clause units of the TM on a Raspberry Pi,
we confirmed the energy and latency savings possible by running the pipeline
with a lower clause number; using the Threshold hyperparameter, the
classification accuracy can also be boosted. Through these design
considerations we are able to increase the energy frugality of the whole
system and transition toward low-power hardware accelerators of the pipeline
to tackle real-time applications.
The KWS-TM pipeline was then compared against several NN implementations,
demonstrating its much faster convergence to the same accuracy during
training. Through these comparisons we also highlighted the far fewer
parameters required by the TM, as well as the smaller number of clauses
compared to neurons. The faster convergence, fewer parameters and
logic-over-arithmetic processing make the KWS-TM pipeline more energy
efficient and enable future work into hardware accelerators for better
performance and low-power on-chip KWS.
Acknowledgement: The authors gratefully acknowledge the funding from the EPSRC
IAA project “Whisperable” and EPSRC grant STRATA (EP/N023641/1). The research
also received help from the computational powerhouse at CAIR
(https://cair.uia.no/house-of-cair/).
## 7 Future Work
In testing the KWS-TM pipeline against the Tensorflow Speech dataset we did
not account for background-noise effects. In-field IoT applications must be
robust enough to minimize the effects of additional noise; therefore, future
work in this direction should examine the behavior of the pipeline under
changing signal-to-noise ratios. The pipeline will also be deployed to a
micro-controller in order to benefit from energy frugality by operating at a
lower power level.
## References
* [1] T. Rausch and S. Dustdar. Edge intelligence: The convergence of humans, things, and ai. In 2019 IEEE International Conference on Cloud Engineering (IC2E), pages 86–96, 2019.
* [2] Itsuki Osawa, Tadahiro Goto, Yuji Yamamoto, and Yusuke Tsugawa. Machine-learning-based prediction models for high-need high-cost patients using nationwide clinical and claims data.
* [3] Tiago M. Fernández-Caramés and Paula Fraga-Lamas. Towards the internet-of-smart-clothing: A review on iot wearables and garments for creating intelligent connected e-textiles. Electronics (Switzerland), 7, 12 2018.
* [4] K. D. Abeyrathna, O. C. Granmo, X. Zhang, and M. Goodwin. Adaptive continuous feature binarization for tsetlin machines applied to forecasting dengue incidences in the philippines. In 2020 IEEE Symposium Series on Computational Intelligence (SSCI), pages 2084–2092, 2020.
* [5] K. Hirata, T. Kato, and R. Oshima. Classification of environmental sounds using convolutional neural network with bispectral analysis. In 2019 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), pages 1–2, 2019.
* [6] Hadas Benisty, Itamar Katz, Koby Crammer, and David Malah. Discriminative keyword spotting for limited-data applications. Speech Communication, 99:1 – 11, 2018.
* [7] J. S. P. Giraldo, C. O’Connor, and M. Verhelst. Efficient keyword spotting through hardware-aware conditional execution of deep neural networks. In 2019 IEEE/ACS 16th International Conference on Computer Systems and Applications (AICCSA), pages 1–8, 2019.
* [8] J. S. P. Giraldo, S. Lauwereins, K. Badami, H. Van Hamme, and M. Verhelst. 18uw soc for near-microphone keyword spotting and speaker verification. In 2019 Symposium on VLSI Circuits, pages C52–C53, 2019.
* [9] S. Leem, I. Yoo, and D. Yook. Multitask learning of deep neural network-based keyword spotting for iot devices. IEEE Transactions on Consumer Electronics, 65(2):188–194, 2019.
* [10] A depthwise separable convolutional neural network for keyword spotting on an embedded system. EURASIP Journal on Audio, 2020:10, 2020.
* [11] Massimo Merenda, Carlo Porcaro, and Demetrio Iero. Edge machine learning for ai-enabled iot devices: A review. Sensors (Switzerland), 20, 5 2020.
* [12] B. Liu, Z. Wang, W. Zhu, Y. Sun, Z. Shen, L. Huang, Y. Li, Y. Gong, and W. Ge. An ultra-low power always-on keyword spotting accelerator using quantized convolutional neural network and voltage-domain analog switching network-based approximate computing. IEEE Access, 7:186456–186469, 2019.
* [13] S. Yin, P. Ouyang, S. Zheng, D. Song, X. Li, L. Liu, and S. Wei. A 141 uw, 2.46 pj/neuron binarized convolutional neural network based self-learning speech recognition processor in 28nm cmos. In 2018 IEEE Symposium on VLSI Circuits, pages 139–140, 2018.
* [14] Nebojsa Bacanin, Timea Bezdan, Eva Tuba, Ivana Strumberger, and Milan Tuba. Optimizing convolutional neural network hyperparameters by enhanced swarm intelligence metaheuristics. 2020.
* [15] Rishad Shafik, Alex Yakovlev, and Shidhartha Das. Real-power computing. IEEE Transactions on Computers, 2018.
* [16] Ole-Christoffer Granmo. The Tsetlin Machine - A Game Theoretic Bandit Driven Approach to Optimal Pattern Recognition with Propositional Logic. arXiv, April 2018.
* [17] Rishad Shafik, Adrian Wheeldon, and Alex Yakovlev. Explainability and dependability analysis of learning automata based AI hardware. In IEEE IOLTS, 2020.
* [18] Adrian Wheeldon, Rishad Shafik, Tousif Rahman, Jie Lei, Alex Yakovlev, and Ole-Christoffer Granmo. Learning automata based AI hardware design for IoT. Philosophical Trans. A of the Royal Society, 2020.
* [19] J. Lei, A. Wheeldon, R. Shafik, A. Yakovlev, and O. C. Granmo. From arithmetic to logic based ai: A comparative analysis of neural networks and tsetlin machine. In 2020 27th IEEE International Conference on Electronics, Circuits and Systems (ICECS), pages 1–4, 2020.
* [20] S. Chu, S. Narayanan, and C. . J. Kuo. Environmental sound recognition with time–frequency audio features. IEEE Transactions on Audio, Speech, and Language Processing, 17(6):1142–1158, 2009.
* [21] Zohaib Mushtaq and Shun-Feng Su. Environmental sound classification using a regularized deep convolutional neural network with data augmentation. Applied Acoustics, 167:107389, 2020.
* [22] W. Shan, M. Yang, J. Xu, Y. Lu, S. Zhang, T. Wang, J. Yang, L. Shi, and M. Seok. 14.1 a 510nw 0.41v low-memory low-computation keyword-spotting chip using serial fft-based mfcc and binarized depthwise separable convolutional neural network in 28nm cmos. In 2020 IEEE International Solid- State Circuits Conference - (ISSCC), pages 230–232, 2020.
* [23] Muqing Deng, Tingting Meng, Jiuwen Cao, Shimin Wang, Jing Zhang, and Huijie Fan. Heart sound classification based on improved mfcc features and convolutional recurrent neural networks. Neural Networks, 130:22 – 32, 2020.
* [24] L. Xiang, S. Lu, X. Wang, H. Liu, W. Pang, and H. Yu. Implementation of lstm accelerator for speech keywords recognition. In 2019 IEEE 4th International Conference on Integrated Circuits and Microsystems (ICICM), pages 195–198, 2019.
* [25] Kirandeep Kaur and N. Jain. Feature extraction and classification for automatic speaker recognition system – a review. 2015\.
* [26] Joseph W. Picone. Signal modeling techniques in speech recognition. In PROCEEDINGS OF THE IEEE, pages 1215–1247, 1993.
* [27] Uday Kamath, John Liu, and James Whitaker. Automatic Speech Recognition, pages 369–404. Springer International Publishing, Cham, 2019.
* [28] Automatic speech recognition. In Speech and Audio Signal Processing, pages 299–300. John Wiley & Sons, Inc., oct 2011.
* [29] N.J. Nalini and S. Palanivel. Music emotion recognition: The combined evidence of mfcc and residual phase. Egyptian Informatics Journal, 17(1):1 – 10, 2016.
* [30] Q. Li, Y. Yang, T. Lan, H. Zhu, Q. Wei, F. Qiao, X. Liu, and H. Yang. Msp-mfcc: Energy-efficient mfcc feature extraction method with mixed-signal processing architecture for wearable speech recognition applications. IEEE Access, 8:48720–48730, 2020.
* [31] C. Paseddula and S. V. Gangashetty. Dnn based acoustic scene classification using score fusion of mfcc and inverse mfcc. In 2018 IEEE 13th International Conference on Industrial and Information Systems (ICIIS), pages 18–21, 2018.
* [32] S. Jothilakshmi, V. Ramalingam, and S. Palanivel. Unsupervised speaker segmentation with residual phase and mfcc features. Expert Systems with Applications, 36(6):9799 – 9804, 2009.
* [33] Pete Warden. Speech commands: A dataset for limited-vocabulary speech recognition, 2018\.
* [34] Yundong Zhang, Naveen Suda, Liangzhen Lai, and Vikas Chandra. Hello edge: Keyword spotting on microcontrollers. CoRR, abs/1711.07128, 2017.
* [35] Z. Zhang, S. Xu, S. Zhang, T. Qiao, and S. Cao. Learning attentive representations for environmental sound classification. IEEE Access, 7:130327–130339, 2019.
* [36] Tara Sainath and Carolina Parada. Convolutional neural networks for small-footprint keyword spotting. In Interspeech, 2015.
* [37] J. G. Wilpon, L. R. Rabiner, C. . Lee, and E. R. Goldman. Automatic recognition of keywords in unconstrained speech using hidden markov models. IEEE Transactions on Acoustics, Speech, and Signal Processing, 38(11):1870–1878, 1990.
* [38] Santiago Fernández, Alex Graves, and Jürgen Schmidhuber. An application of recurrent neural networks to discriminative keyword spotting. In Proceedings of the 17th International Conference on Artificial Neural Networks, ICANN’07, page 220–229, Berlin, Heidelberg, 2007\. Springer-Verlag.
* [39] G. Chen, C. Parada, and G. Heigold. Small-footprint keyword spotting using deep neural networks. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4087–4091, 2014.
* [40] Chulhong Min, Akhil Mathur, and Fahim Kawsar. Exploring audio and kinetic sensing on earable devices. In Proceedings of the 4th ACM Workshop on Wearable Systems and Applications, WearSys ’18, page 5–10, New York, NY, USA, 2018. Association for Computing Machinery.
* [41] F. Kawsar, C. Min, A. Mathur, and A. Montanari. Earables for personal-scale behavior analytics. IEEE Pervasive Computing, 17(3):83–89, 2018.
* [42] Adrian Wheeldon, Alex Yakovlev, Rishad Shafik, and Jordan Morris. Low-latency asynchronous logic design for inference at the edge. arXiv preprint arXiv:2012.03402, 2020.
* [43] Lei Jiao, Xuan Zhang, Ole-Christoffer Granmo, and K. Darshana Abeyrathna. On the convergence of tsetlin machines for the xor operator, 2021.
* [44] Bimal Bhattarai, Ole-Christoffer Granmo, and Lei Jiao. Measuring the novelty of natural language text using the conjunctive clauses of a tsetlin machine text classifier, 2020.
* [45] Saeed Rahimi Gorji, Ole-Christoffer Granmo, Adrian Phoulady, and Morten Goodwin. A tsetlin machine with multigranular clauses, 2019.
* [46] K Darshana Abeyrathna, Ole-Christoffer Granmo, Xuan Zhang, Lei Jiao, and Morten Goodwin. The regression tsetlin machine: a novel approach to interpretable nonlinear regression. Philosophical Trans. A of the Royal Society, 2019.
* [47] Ole-Christoffer Granmo, Sondre Glimsdal, Lei Jiao, Morten Goodwin, Christian W. Omlin, and Geir Thore Berge. The convolutional tsetlin machine. CoRR, abs/1905.09688, 2019.
* [48] K Darshana Abeyrathna, Bimal Bhattarai, Morten Goodwin, Saeed Gorji, Ole-Christoffer Granmo, Lei Jiao, Rupsa Saha, and Rohan K Yadav. Massively parallel and asynchronous tsetlin machine architecture supporting almost constant-time scaling. arXiv preprint arXiv:2009.04861, 2020.
* [49] K Darshana Abeyrathna, Ole-Christoffer Granmo, Rishad Shafik, Alex Yakovlev, Adrian Wheeldon, Jie Lei, and Morten Goodwin. A novel multi-step finite-state automaton for arbitrarily deterministic tsetlin machine learning. In International Conference on Innovative Techniques and Applications of Artificial Intelligence, pages 108–122. Springer, 2020.
*[KWS]: keyword spotting
*[IoT]: Internet of Things
*[AI]: artificial intelligence
*[NN]: neural network
*[ML]: machine learning
*[TM]: Tsetlin machine
*[FSM]: finite state machine
*[MFCC]: Mel-frequency cepstrum coefficients
*[MFCCs]: Mel-frequency cepstrum coefficients
*[TA]: Tsetlin Automaton
*[TMs]: Tsetlin machine
*[NNs]: neural network
*[DNN]: Deep Neural Network
*[CNN]: convolutional neural network
*[QCNN]: Quantized Convolutional Neural Network
*[ASIC]: application-specific integrated circuit
*[BCNN]: Binarized Convolutional Neural Network
*[HPC]: High Performance Computing
*[CUDA]: Compute Unified Device Architecture
*[GPU]: Graphics Processing Unit
|
# Launchers and Targets in Social Networks
Pedro Martins
Polytechnic Institute of Coimbra, Coimbra Business School - ISCAC, Portugal,
and
Centro de Matemática, Aplicações Fundamentais e Investigação Operacional
(CMAFcIO), Universidade de Lisboa, 1749-016 Lisboa, Portugal
e-mail: <EMAIL_ADDRESS>
Filipa Alarcão Martins
Digital Marketing Analyst, Portugal
e-mail: <EMAIL_ADDRESS>
###### Abstract
Influence propagation in social networks is a subject of growing interest. A
relevant issue in those networks involves the identification of key
influencers. These players have an important role on viral marketing
strategies and message propagation, including political propaganda and fake
news. In effect, an important way to fight malicious usage on social networks
is to understand their properties, their structure and the way messages
propagate.
This paper proposes two new indices for analysing message propagation in
social networks, based on the network topological nature and the power of the
message. The first index involves the strength of each node as a launcher of
the message, dividing the nodes into launchers and non-launchers. The second
index addresses the potential of each member as a receiver (target) of the
message, dividing the nodes into targets and non-targets. Launcher individuals
should indicate strong influencers and target individuals should identify the
best target consumers. These indices can assist other known metrics when used
to select efficient influencers in a social network. For instance, instead of
choosing a strong and probably expensive member according to its degree in the
network (number of followers), we may first select those belonging to the
launchers group and look for the lowest-degree members, which are probably
cheaper while still guaranteeing almost the same influence effectiveness as the
largest-degree members.
In a different direction, using the second index, the strong target members
should characterize relevant consumers of information in the network, possibly
including regular collectors of fake news.
We discuss these indices using small-world randomly generated graphs and a
number of real-world social networks available in known datasets repositories.
Keywords: influencers, influence propagation, social networks, launchers and
targets in social networks
## 1 Introduction
Social networks have long existed in society, but the fast growth of the web
made these networks emerge at an unimaginable scale. These networks represent
linkage among people and besides their relevancy in communication and in
society, they also provide influence exertion, information dissemination (true
or false) and, in some cases, gossip spread.
Message or information propagation follows a cascade pattern in an online
social network. As an example, suppose that John writes on a friend’s Facebook
wall about a party he attended last night. This post is communicated to his
friends who may typically comment, getting the message across their friends,
and so on. This way, John’s initial message propagates transitively throughout
the network. In another example, suppose that Mary posts on Twitter about a
new mobile phone she bought. Some of her followers on Twitter reply to her
tweet while others retweet it, producing again a cascade of information
propagation. In these examples, the messages start on a single person as is
usual in many posts and comments on social networks. Our study will focus on
these cases, where a message is launched by a single user. In these cases, an
interesting point to observe is how far the message can go throughout the
network when sent by each of the users; and who are the most targeted users in
the network. These are the aspects that we propose discussing in the present
work. Throughout the text, we will use the word “message” in a broad sense,
representing any content launched by a member into the network.
This paper considers a social network on a simple and oriented graph (or
network) $G=(N,A)$, with $N=\\{1,\ldots,n\\}$ the set of nodes or individuals
or members (homogeneous) and $A\subseteq\\{(i,j)\in N\times N:i\neq j\\}$ the
set of arcs, representing the existing links among individuals. We denote by
$\delta^{-}(j)$ and $\delta^{+}(j)$ the set of predecessors and the set of
successors of $j$ in $G$, respectively. We also denote by $g^{-}(i)$ and
$g^{+}(i)$ the in-degree and out-degree of node $i$, respectively, thus,
$g^{-}(i)=|\delta^{-}(i)|$ and $g^{+}(i)=|\delta^{+}(i)|$. Arc $(i,j)\in A$
indicates that node $i$ influences node $j$. In the context of a social
network, this means that individual $i$ is followed by individual $j$ (note
that the orientation of the arc can seem contradictory, however, we are
assuming that an individual influences its followers, as is usual). A node is
said to be a _seed_ if it is the starting point of a message launched into the
network, meaning that this node/individual is the launcher of a message. The
_strength of influence_ that node $i$ exerts on $j$ is characterized by
$d(i,j)$, for all $(i,j)\in A$; and the _hurdle_ of each node $j\in N$ to
adopt the message is denoted by $h(j)$. Thus, following other authors’
notation, node $j$ is said to be _active_ (or _covered_) if it is a seed node
or if $\sum_{i\in S\cap\delta^{-}(j)}d(i,j)\geq h(j)$, for $S\subseteq N$ the
current subset of active nodes. This expression is called the _activating
condition_. So, in an online marketing setting for promoting a product, a
node/individual is active if it has adopted or promoted the product, otherwise
it is inactive. In this study we assume that an active member is an adopter. A
more detailed discussion about the roles of an influencer node is conducted in
[Chen et al. 2013]. In addition, if the original graph is undirected, then
each edge $\\{i,j\\}$ should be substituted by the two arcs $(i,j)$ and
$(j,i)$.
The activating condition previously described is based on the Linear Threshold
(LT) model proposed in [Kempe et al. 2003], where a person will adopt a
product/message if the influence received from its neighbors has reached a
certain threshold (the hurdle value of the node). This condition follows
similar expressions used on influence propagation and influence maximization
problems in the literature, namely in [Chen 2009], and in very recent
publications in [Fischetti et al. 2018], [Raghavan and Zhang 2019] and [Günneç
et al. 2020]. All these works address propagation originating from a set of
members and not on a single individual as in our case. Finding such a set of
the smallest size that would lead the entire network to repass the message or
adopt the product is known as the Target Set Selection (TSS) problem,
discussed in [Chen 2009]. In the TSS, 100% adoption is required, the hurdle
$h(i)$ of a node $i\in N$ is a value between 1 and its degree in $G$ and a
node $i$ becomes active if it has at least $h(i)$ active neighbors, that is,
equal influence is assumed. Later on, [Raghavan and Zhang 2019] considered a
weighted version of the TSS, denoted by WTSS problem. In the WTSS, equal
influence from neighbors and 100% adoption is still present, but the hurdle of
a node is a value (a weight) that characterizes the amount of effort required
by an individual (node) to be convinced to adopt the product. Equal influence
and 100% adoption are no longer required on the Least-Cost Influence Problem
(LCIP) discussed in [Günneç et al. 2020]. In the LCIP, the activating
condition includes a tailored (i.e., partial) incentive (monetary in most
cases) on each node, exerting influence on that node together with the usual
influence employed by its predecessors to promote adoption. The LCIP seeks
minimizing the sum of all tailored incentives provided to individuals in a
social network while ensuring that a given fraction of the network is
influenced. This problem was also addressed in [Fischetti et al. 2018],
considering a nonlinear influence structure, besides the usual settings that
characterize the LCIP.
An extensive survey on influence propagation in social networks is described
in [Peng et al. 2018]. It encompasses various types of social networks, their
properties, social influence, state-of-the-art evaluation metrics and models,
and an overview of known methods for influence maximization.
The interpretation of the strength of influence $d(i,j)$ that individual $i$
exerts on $j$ is not a theme of consensus, as observed in most works
previously reported on influence propagation and influence maximization. In
fact, it is not easy to set a function for characterizing the influence
parameters $d(i,j)$, neither the hurdle of a node as in most cases it is
individual dependent, characterized by the message/product and it may also
vary along the time. In our case, we set these parameters as functions of the
nodes’ degrees, and thus defining the activating condition as a linear
function of nodes’ degrees. This way, we do not have to assess influence
strength and nodes’ hurdles for each particular instance. Instead, the
activating conditions are automatically built, translating the inherent
topological nature of the network for characterizing influence propagation,
with an additional single parameter to assess the viral power of the
message/product involved. Thus, in the present work we consider that the
influence strength of an individual is related to the number of other individuals
that he/she directly exerts influence on. In this case, we define this
influence as its out-degree if working on an oriented graph, or degree if the
graph is undirected. The same way, the hurdle can be associated with the
influence strength of the individual (its out-degree) and with the
influence/viral power of the message (parameter $\alpha$, denoted by _hurdle
coefficient_). It acts as a threshold for this individual to become active,
that is, to adopt a product or to repass a message. Thus, we define:
* •
$d(i,j)=g^{+}(i)$, for all $(i,j)\in A$, that is, $d(i,j)$ is the out-degree
of node $i$, indicating that the strength of influence that $i$ exerts on $j$
is defined by the number of individuals that $i$ can influence, that is, the
number of followers of node $i$; and
* •
$h(j)=\alpha\cdot g^{+}(j)$, for all $j\in N$, where $\alpha$ is the hurdle
coefficient, used to leverage the difficulty to activate node $j$, being
related with the viral power of the message sent by the seed; and $g^{+}(j)$
is, again, node $j$’s own strength.
Hence, for any node $j\in N$ and $S\subseteq N$ the current subset of active
nodes, the activating condition becomes:
$\sum_{i\in S\cap\delta^{-}(j)}g^{+}(i)\geq\alpha\cdot g^{+}(j)$ (1)
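In code, condition (1) reduces to a single comparison. The following is a minimal sketch (function and variable names are hypothetical), where `preds[j]` stands for $\delta^{-}(j)$, `active` for the current set $S$, and `out_deg[i]` for $g^{+}(i)$:

```python
def is_activated(j, active, preds, out_deg, alpha):
    """Activating condition (1): node j becomes active when the summed
    out-degrees of its already-active predecessors reach alpha * g+(j)."""
    influence = sum(out_deg[i] for i in preds[j] if i in active)
    return influence >= alpha * out_deg[j]
```

For example, with $g^{+}(3)=5$, $g^{+}(4)=1$ and $g^{+}(7)=3$ (the values of graph $G1$ below), node 7 is activated by node 3 alone when $\alpha=1.5$ (since $5\geq 4.5$), but needs the joint influence of nodes 3 and 4 when $\alpha=2.0$ (since $5+1\geq 6$).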
According to the previous definition of the influence that a node $j\in N$
receives from their direct neighbors (characterized by parameters $d(i,j)$,
for all $(i,j)\in A$), equal influence from predecessor neighbors is not
present, except if $G$ is regular, which is quite unlikely in a social
network. Equal influence from predecessors was considered on the TSS and WTSS
problems previously mentioned, being unavoidable when privacy concerns are
present in the network. In addition, the hurdle is also node dependent as in
the WTSS and LCIP problems, indicating that different nodes require different
levels of effort to become active. In this paper, we assume that this hurdle
is proportional to the node’s out-degree (its number of followers), as
mentioned above. The relevancy of the number of followers is also stressed in
the literature (see, e.g., [Huang et al. 2012, Bakshy et al. 2011]) and widely
used on a number of public social activities, namely to select marketing
influencers, or people to TV shows, or other public exhibitions, which
motivates our option. This way, the whole influence process is based on the
topological nature of the graph, using just the degree information of the
nodes in the entire activating condition. This excludes other external
incident features, like monetary incentives, user specific characterizations
of the influence among individuals or specific characterizations of the
hurdle.
This paper proposes two new indices for analysing message propagation in
social networks. These indices are based on two new concepts:
* •
the _Individual Launching Power_ (ILP) of node $i\in N$, denoted by
$ilp^{\prime}(i)$, representing the number of activated nodes in $G$ when $i$
is the launcher of a message; and
* •
the _Individual Target Potential_ (ITP) of node $i\in N$, denoted by
$itp^{\prime}(i)$, representing the number of times that node $i$ is activated
when each of the other nodes $j\in N\setminus\\{i\\}$ launch their own
messages.
Thus, we define the following two indices, respectively:
* •
the ILP index of $i$: $ilp(i)=\frac{ilp^{\prime}(i)}{(n-1)}$, for all $i\in
N$; and
* •
the ITP index of $i$: $itp(i)=\frac{itp^{\prime}(i)}{(n-1)}$, for all $i\in
N$.
Index $ilp(i)$ represents the potential strength of individual $i$ as a
launcher of a message, while index $itp(i)$ is the potential of individual $i$
as a receiver (target) of a message sent from the nodes of $G$. Launcher
individuals should correspond to strong influencers and target individuals
should identify the best target consumers or the most prominent message
collectors. These indices are assuming that all nodes are receptive for the
message/product and all nodes have equal chance to be the seeds of a
message/product.
There are three relevant issues involved on message propagation: i) the
launcher strength, ii) the message power, and iii) the network topological
structure. The launcher strength can be set by the ILP index, the message
influence/viral power is characterized by the hurdle coefficient $\alpha$ and
the network structure is the particular topology of graphs $G$. Note that the
relevancy of classifying the viral power of the message was also stressed in
[Bakshy et al. 2011] on the cascade size of influence propagation.
To exemplify, consider the oriented graph $G1$ in Figure 1a, with 9 nodes and
19 arcs.
Figure 1: Oriented graph $G1$ in (a) and the active nodes when node 3 is the
seed, for $\alpha=1.5$ and $\alpha=2.0$, in (b) and (c), respectively.
Table 1 shows the ILP and ITP indices’ values for all the nodes in $G1$, for
$\alpha=1.5,2.0\mbox{ and }2.5$. It also includes the out-degree of each node.
node $i$ | $g^{+}(i)$ | $ilp(i)$, $\alpha=1.5$ | $ilp(i)$, $\alpha=2.0$ | $ilp(i)$, $\alpha=2.5$ | $itp(i)$, $\alpha=1.5$ | $itp(i)$, $\alpha=2.0$ | $itp(i)$, $\alpha=2.5$
---|---|---|---|---|---|---|---
1 | 1 | 0.000 | 0.000 | 0.000 | 0.125 | 0.125 | 0.125
2 | 1 | 0.000 | 0.000 | 0.000 | 0.125 | 0.125 | 0.125
3 | 5 | 0.875 | 0.625 | 0.500 | 0.000 | 0.000 | 0.000
4 | 1 | 0.000 | 0.000 | 0.000 | 0.375 | 0.375 | 0.375
5 | 0 | 0.000 | 0.000 | 0.000 | 0.250 | 0.250 | 0.250
6 | 5 | 0.625 | 0.625 | 0.625 | 0.000 | 0.000 | 0.000
7 | 3 | 0.375 | 0.125 | 0.125 | 0.250 | 0.250 | 0.125
8 | 1 | 0.000 | 0.000 | 0.000 | 0.500 | 0.250 | 0.125
9 | 2 | 0.125 | 0.125 | 0.000 | 0.375 | 0.125 | 0.125
Table 1: ILP and ITP indices for graph $G1$, including the nodes’ out-degree
($g^{+}(i)$)
In this example and considering a message launched in the network by node 3,
if the hurdle coefficient is $\alpha=1.5$, representing a rather viral
message, then it can reach all nodes but node 6, covering 87.5% of the entire
network (excluding node 3), represented in Figure 1b. In this case, node 7 is
made active just by influence of node 3. Then, in turn, node 7 can influence
node 9, which will then influence node 8. Alternatively, if node 3 releases a
less viral message, with hurdle coefficient 2.0, then it only reaches 62.5% of
the other nodes, shown in Figure 1c. In this case, node 7 is activated through
the joint influence of 3 and 4; and nodes 8 and 9 are no longer influenceable.
However, if the hurdle coefficient is $\alpha=2.5$, the message sent by 3 will
reach only nodes 1, 2, 4 and 5, covering just 50% of the nodes in
$N\setminus\\{3\\}$. Besides, note that although nodes 3 and 6 have the same
out-degree ($g^{+}(3)=g^{+}(6)=5$), their influence power in the entire graph
is different due to the influential cascade each one generates. In effect, in
this case, node 3 can go further than node 6.
On the other hand, if observing the $itp(i)$ indices, representing the
targeting potential of each node $i\in N$, when the message is very viral,
with $\alpha=1.5$, node 8 can be reached by half of the other individuals.
However, when the message is less viral, with $\alpha=2.0$ or $\alpha=2.5$,
the easiest individual to reach is node 4, which can be targeted by 3 of the
remaining individuals (37.5%).
If graph $G1$ is undirected, by substituting each arc by an edge between the
same nodes, the connectivity in the network may seem extended. However,
considering again node 3 as the seed of a message, the ILP indices are
$ilp(3)=1$ for $\alpha=1.5\mbox{ and }2.0$, and $ilp(3)=0.375$ when
$\alpha=2.5$. While the influence capability increases for $\alpha=1.5\mbox{
and }2.0$, it shrinks when the message becomes less viral
($\alpha=2.5$). The reason for the influence reduction in the last case is
related with the hurdle increase in most nodes, due to the out-degree growth,
making some nodes harder to cover when the message is less viral.
The various social networks available these days vary quite a lot concerning
oriented and undirected graph topologies. For example, Facebook and LinkedIn
should be represented by undirected graphs, as each connection requires both
users to accept the link. On the other hand, Instagram and ResearchGate, for
instance, should be described using directed graphs, as user A can decide to
follow B, but B may not be interested in following A.
The contribution of the present paper involves the following three aspects:
* •
We propose two new indices especially devoted to identifying strong
influencers and strong targeting nodes in a social network, through the ILP
and the ITP indices, respectively. These indices incorporate a hurdle criterion
on each node to model how hard it is for the node to forward the message, while
most link-topological measures in the literature are entirely built upon the
relevancy of neighbors, ignoring the nodes’ ability to decide on a message
propagation scheme.
* •
The construction of the new indices uses just the underlying topological
nature of the graph and a hurdle coefficient ($\alpha$) to characterize the
viral strength of the message/product.
* •
The ILP index results divide the nodes into launchers and non-launchers. The
launchers class can be used to assist other metrics to find the best
influencers, providing better decisions. The ITP results divide the nodes into
targets and non-targets, allowing us to identify strong consumers of
information, who may attract malicious content in the network.
The paper is organized as follows: an algorithm for constructing the ILP and
ITP indices is described in the next section; computational tests are
conducted in Section 3; and the paper ends with a conclusions section (Section
4).
## 2 Algorithm for calculating the ILP and ITP indices
The construction of the two indices (ILP and ITP) is made in the same
algorithm, on a given simple oriented graph $G=(N,A)$. It calculates the
$ilp(i)$ index for each node $i\in N$ and updates the $itp(i)$ values along
the calculation of the ILP indices. So, starting from a given node $i\in N$
and for a given input parameter value $\alpha$, the algorithm generates a
sequential cascade of newly active nodes through influence propagation, using
the activating condition set in (1). Its pseudocode is depicted in Figure 2.
Before presenting it, we describe the sets and variables involved.
$S$ | is the current subset of activated nodes ($S\subset N$)
---|---
$L$ | is the current list of nodes to analyse (still non activated nodes)
$\delta^{-}(i)$ | is the set of nodes converging to $i$ in $G$, for all $i\in N$
$\delta^{+}(i)$ | is the set of nodes diverging from $i$ in $G$, for all $i\in N$
$g^{+}(i)$ | is the out-degree of node $i$ in $G$, for all $i\in N$ (that is, $g^{+}(i)=|\delta^{+}(i)|$)
$\alpha$ | is the hurdle coefficient
| Algorithm ILP-ITP($\alpha$) |
---|---|---
| Input: $G=(N,A)$ and $\alpha$ |
| Output: $ilp(i)$ and $itp(i)$ for all $i\in N$ |
1 | for all $i\in N$ do |
2 | $itp(i)\leftarrow 0$ | //_initialize vector itp_ //
3 | end_do |
4 | for all $i\in N$ do |
5 | $S\leftarrow\\{i\\}$, $L\leftarrow\\{j\in N\setminus S:(i,j)\in A\\}$ | //_initialize the sets_ //
6 | while ($L\neq\emptyset$) do |
7 | $v\leftarrow\arg\min_{j\in L}\\{g^{+}(j)\\}$ | //_select the next candidate to activate_ //
8 | if $\left(\sum_{j\in S\cap\delta^{-}(v)}g^{+}(j)\geq\alpha\cdot g^{+}(v)\right)$ then | //_if it becomes active, then_ //
9 | $S\leftarrow S\cup\\{v\\}$ | //_put the newly activated node in $S$_//
10 | $L\leftarrow L\cup\\{j\in N\setminus S:(v,j)\in A\\}$ | //_add to $L$ all inactive successors of $v$_//
11 | $itp(v)\leftarrow itp(v)+1$ | //_$i$ is also able to activate node $v$_//
12 | end_if |
13 | $L\leftarrow L\setminus\\{v\\}$ | //_remove node $v$ from $L$_//
14 | end_while |
15 | $ilp(i)\leftarrow|S|-1$ | //_$|S|-1$ is the number of activated nodes, excluding $i$ itself_//
16 | end_do |
17 | $ilp(i)\leftarrow\frac{ilp(i)}{n-1}$ and $itp(i)\leftarrow\frac{itp(i)}{n-1}$ for all $i\in N$ |
18 | return $ilp(i)$ and $itp(i)$ for all $i\in N$ |
Figure 2: Algorithm for computing the ILP and ITP indices.
The algorithm starts initializing the $itp(i)$ variables, for all $i\in N$.
Then, for each node $i\in N$, it generates a cascade of influences, trying to
activate the nodes placed in the candidates list $L$. So, $i$ is the first
node to enter the active nodes list $S$ and the activating candidates list $L$
is initialized with all the successors of $i$ in $G$. Then, while list $L$ is
nonempty, we take from $L$ the node ($v$) with the lowest out-degree in $G$
and test it for activation. If $v$ passes the test, then we include it in $S$
and put in $L$ all the inactive successors of $v$. In addition, if $v$
becomes active, we increase variable $itp(v)$ by one unit, meaning that one
additional node ($i$) is able to activate $v$ through influence propagation.
Then, node $v$ is removed from $L$, whether it entered $S$ or not. When $L$
becomes empty, the process terminates for the influential search started in
node $i$, allowing us to set the $ilp(i)$ result, which is equal to the number of
nodes that were made active, excluding node $i$, divided by $n-1$.
Each new candidate to activate taken from list $L$ is selected according to
the minimum out-degree of the nodes in $L$. The advantage of this option is to
promote a fast activation of the easiest nodes. The execution times increase
when priority is given to the nodes with largest out-degree.
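The pseudocode of Figure 2 translates almost line by line into a few dozen lines of Python. The sketch below is for illustration only (the authors’ own implementation, described in Section 3, is in Fortran); function and variable names are hypothetical, with sets standing in for $S$ and $L$:

```python
from collections import defaultdict

def ilp_itp(nodes, arcs, alpha):
    """Sketch of Algorithm ILP-ITP(alpha): for each seed i, grow the set S
    of active nodes by repeatedly testing the lowest out-degree candidate
    in L against the activating condition (1)."""
    succ, pred = defaultdict(set), defaultdict(set)
    for i, j in arcs:
        succ[i].add(j)
        pred[j].add(i)
    out_deg = {v: len(succ[v]) for v in nodes}
    n = len(nodes)
    ilp, itp = {}, {v: 0 for v in nodes}
    for i in nodes:
        S, L = {i}, set(succ[i])
        while L:
            v = min(L, key=out_deg.get)          # lowest out-degree first
            if sum(out_deg[j] for j in pred[v] & S) >= alpha * out_deg[v]:
                S.add(v)
                L |= succ[v] - S                 # inactive successors of v
                itp[v] += 1                      # one more seed reaches v
            L.discard(v)                         # remove v, activated or not
        ilp[i] = (len(S) - 1) / (n - 1)          # activated nodes, excl. i
    return ilp, {v: c / (n - 1) for v, c in itp.items()}
```

On a toy 3-node graph with arcs $(1,2)$, $(1,3)$, $(2,3)$ and $\alpha=1$, node 1 activates the whole graph ($ilp(1)=1$), while node 3, having no successors, activates nobody.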
## 3 Computational tests and discussion
In this section we use the ILP-ITP($\alpha$) algorithm for computing the ILP
and ITP indices on a number of randomly generated and real-world graphs. We
start describing the instances in the first subsection; then in Subsection
3.2, we use a small known example to compare the ILP and ITP indices with
other metrics usually considered in the literature. The computational results
and comments on larger sized graphs are conducted in Subsection 3.3.
All tests were run on an Intel Core i7-2600 with 3.40 GHz and 8 GB RAM. The
experiments were performed under Microsoft Windows 10 operating system. The
algorithm described in Figure 2 was coded in Fortran and compiled on gfortran.
Times are reported in seconds.
### 3.1 Instances
In the tests conducted in this section, we consider two classes of instances:
randomized and real-world. Randomized instances represent small-world
networks, following the methodology described in [Watts and Strogatz 1998].
Real-world instances are taken from known online repositories and used in a
number of publications addressing virtual social networks. The repositories
are: the Stanford Large Network Dataset Collection (SNAP, [Leskovec and Krevl
2014]), the Koblenz Network Collection (KONECT, [Kunegis 2017]), and from the
Social Networks Security Research Group from the Ben-Gurion University of the
Negev (BGU; [Lesser et al. 2013]). All these instances are described next.
Small-world randomly generated instances:
The randomly generated instances are classified by nodes’ average degree and
sparsity. All instances involve $n=10,000$ nodes. Their initial average
degrees are 10, 20 and 50. We follow the methodology described in [Watts and
Strogatz 1998] for generating small-world networks. Depending on the
construction process, these graphs can have social network properties, as
described in [Barabási 2016, Chap. 3] and [Günneç and Raghavan 2017]. The
small-world construction of each network is initially made for the undirected
version. Then, we use the undirected graph to build two additional oriented
versions by substituting some of the edges by an arc in one of the two
directions, while substituting the remaining edges by the two associated
oriented arcs. The two oriented versions involve the substitution of $(2/3)m$
and $(1/3)m$ edges by a single directed arc, for $m$ the number of edges in
the undirected counterpart. The edges selected to be oriented are taken at
random. These networks are denoted by WS-_k_ -_o_ , with $k$ representing the
initial average nodes’ degrees ($k=10,20\mbox{ and }50$) and $o$ indicating
the proportion of bidirected links between pairs of nodes (edges in the
original graph) that will remain, taking values $o=1.00,0.66\mbox{ and
}0.33$. In version $o=1.00$, all links are bidirected, representing the
undirected graph; for $o=0.66$, the graph is oriented, keeping 66% of the
initial set of edges (as bidirected arcs) and 33% of unidirected arcs; and for
$o=0.33$ the graph is also oriented, keeping 33% of the initial set of edges
(as bidirected arcs) and 66% unidirected arcs.
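The orientation step described above can be sketched as follows (function name hypothetical); the undirected small-world base graph can come from any Watts-Strogatz generator, as only its edge list is needed here:

```python
import random

def orient_edges(edges, o, seed=None):
    """Replace a fraction (1 - o) of the undirected edges by a single arc
    in a random direction; the remaining fraction o stays bidirected (two
    opposite arcs), as in the WS-k-o instances."""
    rng = random.Random(seed)
    edges = list(edges)
    rng.shuffle(edges)                    # edges to orient are taken at random
    n_bidirected = round(o * len(edges))
    arcs = []
    for idx, (u, v) in enumerate(edges):
        if idx < n_bidirected:
            arcs += [(u, v), (v, u)]      # keep both directions
        else:
            arcs.append((u, v) if rng.random() < 0.5 else (v, u))
    return arcs
```

For an undirected base graph with $m$ edges, this yields roughly $m(1+o)$ arcs, consistent with the arc counts reported in Table 2 (e.g. $50{,}000\times 1.66=83{,}000$ for WS-10-66).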
The Watts and Strogatz procedure starts with a regular graph with nodes’
degree _k_. Each edge of the graph is rewired, being reconnected, with
probability _p_ , to another node chosen uniformly at random (duplicate edges
are forbidden). The process is repeated for all original edges. Following a
number of works in the literature, also addressing social networks (see, e.g.,
[Günneç and Raghavan 2017, Fischetti et al. 2018, Raghavan and Zhang 2019]),
we consider the most destructive rewiring probability in the recommended range
($0.1\leq p\leq 0.3$), setting $p=0.3$. This rewiring probability represents
most closely the social networks studied by Watts and Strogatz ([Watts and
Strogatz 1998]).
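The construction just described (ring lattice, probabilistic rewiring, then partial orientation) can be sketched in pure Python. This is an illustrative sketch, not the authors' implementation; the helper names, the small $n=100$ example and the RNG seed are ours:

```python
import random

def ring_lattice(n, k):
    # Regular ring lattice: each node is linked to its k nearest neighbours.
    return {tuple(sorted((i, (i + j) % n)))
            for i in range(n) for j in range(1, k // 2 + 1)}

def rewire(edges, n, p, rng):
    # Each edge is reconnected, with probability p, to a node chosen
    # uniformly at random (self-loops and duplicate edges forbidden).
    edges = set(edges)
    for e in list(edges):
        if rng.random() < p:
            u = e[0]
            edges.remove(e)
            while True:
                w = rng.randrange(n)
                cand = tuple(sorted((u, w)))
                if w != u and cand not in edges:
                    edges.add(cand)
                    break
    return edges

def orient(edges, single_frac, rng):
    # Replace a fraction single_frac of the edges by a single directed arc
    # (random direction); the remaining edges become two opposite arcs.
    edges = list(edges)
    rng.shuffle(edges)
    cut = round(len(edges) * single_frac)
    arcs = []
    for idx, (u, v) in enumerate(edges):
        if idx < cut:
            arcs.append((u, v) if rng.random() < 0.5 else (v, u))
        else:
            arcs.extend([(u, v), (v, u)])
    return arcs

rng = random.Random(42)
ws_edges = rewire(ring_lattice(100, 10), 100, 0.3, rng)  # small WS graph, k=10, p=0.3
arcs = orient(ws_edges, 1 / 3, rng)                      # the o = 0.66 oriented version
```

With $(1/3)m$ edges replaced by a single arc, the oriented version has $2m-(1/3)m$ arcs, matching the arc counts reported in Table 2.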
Table 2 summarizes the main characteristics of the WS randomly generated
graphs. The information concerning nodes’ degrees and density is adapted to
the correspondent version of the graph, considering the degrees’ values for
the undirected versions and the out-degrees ($g^{+}(i)$) for the oriented
cases.
instance | nodes ($n$) | edges/arcs ($m$) | density | min | average | max | type
---|---|---|---|---|---|---|---
WS-10-33 | 10,000 | 66,500 | 0.0007 | 0 | 6.65 | 15 | oriented
WS-10-66 | 10,000 | 83,000 | 0.0008 | 2 | 8.30 | 16 | oriented
WS-10-100 | 10,000 | 50,000 | 0.0010 | 5 | 10.00 | 18 | undirected
WS-20-33 | 10,000 | 133,000 | 0.0013 | 3 | 13.30 | 25 | oriented
WS-20-66 | 10,000 | 166,000 | 0.0017 | 6 | 16.60 | 28 | oriented
WS-20-100 | 10,000 | 100,000 | 0.0020 | 12 | 20.00 | 30 | undirected
WS-50-33 | 10,000 | 332,500 | 0.0033 | 17 | 33.25 | 50 | oriented
WS-50-66 | 10,000 | 415,000 | 0.0042 | 20 | 41.50 | 59 | oriented
WS-50-100 | 10,000 | 500,000 | 0.0100 | 38 | 50.00 | 66 | undirected
Table 2: Main characteristics of the WS randomly generated graphs
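The density column of Table 2 can be reproduced from $n$ and $m$ alone: over the $n(n-1)$ ordered pairs of distinct nodes, an oriented graph with $m$ arcs has density $m/(n(n-1))$, while an undirected graph with $m$ edges occupies two ordered pairs per edge. A minimal sketch under these conventions (the function name is ours):

```python
def density(n, m, oriented):
    """Density over the n*(n-1) ordered pairs of distinct nodes."""
    pairs = n * (n - 1)
    return (m if oriented else 2 * m) / pairs

# Reproducing three rows of Table 2:
print(round(density(10_000, 66_500, True), 4))    # WS-10-33  -> 0.0007
print(round(density(10_000, 50_000, False), 4))   # WS-10-100 -> 0.001
print(round(density(10_000, 500_000, False), 4))  # WS-50-100 -> 0.01
```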
Real-world instances:
$\bullet$ Zachary karate club [Zachary 1977]:
Data source: http://vlado.fmf.uni-lj.si/pub/networks/data/ucinet/ucidata.htm
This dataset was collected from the members of a university karate club by the
sociologist Wayne Zachary in 1977 [Zachary 1977]. Each node represents a
member of the club and an edge between two members indicates that they are
connected, generating an undirected graph. Each club member knew all the
others, but the network only represents links between members who regularly
interact outside the club. This network is widely used in a number of papers
in the literature. Most of these works try to find the two groups of people
into which the karate club split after an argument between the president (John
A.) and an instructor (Mr. Hi). We use this network to compare the ILP and ITP
indices with other known metrics in the literature.
$\bullet$ Konect - Advogato network [Massa et al. 2009]:
Data source: http://konect.uni-koblenz.de/networks/advogato
The Advogato trust network is built using the Advogato online community
platform for developers of free software. The original dataset has 3,992 loops
that were removed. The resulting graph is oriented, with 5,155 nodes (Advogato
users) and 47,135 arcs (trust relationships, called a "certification" on
Advogato).
$\bullet$ Konect - Hamsterster network [Kunegis 2013]:
Data source: http://konect.uni-koblenz.de/networks/petster-friendships-hamster
This network is based on friendships between users of the website
hamsterster.com. The network is undirected, with 1,858 nodes (users) and
12,534 edges (friendships).
$\bullet$ SNAP - ego-Facebook [Leskovec and McAuley 2012]:
Data source: https://snap.stanford.edu/data/index.html
This network represents social circles (circles of friends) from Facebook
(anonymized), collected from survey participants using this online social
network. Nodes are Facebook users and edges represent interactions between
users. The dataset includes users’ profiles, circles and ego networks. The
resulting graph is undirected, with 4,039 nodes (users) and 88,234 edges
(interactions). The anonymization process permits relating users by their
affiliations but does not allow identifying those affiliations.
$\bullet$ SNAP - email-EU-core network [Yin et al. 2017, Leskovec et al.
2007]:
Data source: https://snap.stanford.edu/data/email-Eu-core.html
This network was generated using email data from a large European research
institution. It used anonymized information about all incoming and outgoing
email messages between members. The resulting graph is oriented, with 1,005
nodes (members of the research institution) and 25,571 arcs, where an arc
$(i,j)$ in the graph indicates that member $i$ sent at least one email message
to $j$, considering just email messages shared between members (the core).
$\bullet$ SNAP - CollegeMsg temporal network [Panzarasa et al. 2009]:
Data source: https://snap.stanford.edu/data/CollegeMsg.html
This network involves an online social network at the University of
California, Irvine. It is a temporal network based on private messages shared
among members, derived from a dataset hosted by Tore Opsahl [Panzarasa et al.
2009]. Users could search the network for others and then initiate
conversation based on profile information. The resulting graph is oriented,
with 1,899 nodes (members) and 20,296 arcs, involving 59,835 messages shared
along a given time frame. An arc $(i,j)$ means that user $i$ sent at least one
message to $j$ within a given time frame.
$\bullet$ BGU - Ning network [Lesser et al. 2013]:
Data source: http://proj.ise.bgu.ac.il/sns/ning.html
Ning is a very large online community building platform for people and
organizations to create social networks. This particular network is a snapshot
of the friendship and group affiliation networks from Ning, harvested during
September 2012. The resulting graph is oriented, with 10,298 nodes (members)
and 76,262 arcs. It has 5,512 pairs of nodes linked by a single arc. The
remaining arcs represent bidirected links among pairs of nodes. We have
removed one loop from the original dataset.
We have switched all arcs' orientations in the real-world networks because a
link from node $i$ to node $j$ in the original dataset indicates that $i$
follows $j$, that is, $j$ is followed by $i$. Thus, according to the
definition of the influence graph $G$ introduced in Section 1, this link
should be represented by arc $(j,i)$, that is, node $j$ influences node $i$.
Table 4 summarizes the main characteristics of the graphs, using the same
column labels considered in Table 2, besides "source", representing the online
repository of the dataset.
instance | source | nodes ($n$) | edges/arcs ($m$) | density | min | average | max | type
---|---|---|---|---|---|---|---|---
Zachary karate club | Konect | 34 | 78 | 0.1390 | 1 | 4.59 | 17 | undirected
Advogato | Konect | 5,155 | 47,135 | 0.0018 | 0 | 9.14 | 721 | oriented
Hamsterster | Konect | 1,858 | 12,534 | 0.0073 | 2 | 13.49 | 272 | undirected
ego-Facebook | SNAP | 4,039 | 88,234 | 0.0108 | 1 | 43.69 | 1045 | undirected
email-EU | SNAP | 1,005 | 25,571 | 0.0253 | 0 | 25.44 | 212 | oriented
CollegeMsg | SNAP | 1,899 | 20,296 | 0.0056 | 0 | 10.69 | 137 | oriented
Ning | BGU | 10,298 | 76,262 | 0.0007 | 0 | 7.41 | 633 | oriented
Table 3: Main characteristics of the real world instances used in the
computational tests
An important difference between the WS randomly generated instances and the
real-world examples considered here is the range of variation of the nodes'
degrees. Although the average values are comparable, the differences between
minimum and maximum degrees in the real-world examples are much larger, which
is closer to the usual behavior of a social network.
### 3.2 Other metrics
The most usual metrics in the literature and in real-life problems belong to
two classes: centralized and link topological metrics (see, e.g., [Kiss and
Bichler 2008, Peng et al. 2018]). Centralized metrics characterize the
spreading capabilities of the nodes and also describe each node's proximity to
the other players in the network, while link topological metrics emphasize
important neighbors, benefiting from their relevancy. These metrics are also
included in the list of measures considered in Gephi [Bastian et al. 2009].
The lists in these classes are vast, particularly in the centralized group. In
the present paper, we consider the selection described next.
Before that, we must introduce the concept of _path length_ between two nodes,
denoted by $p_{ij}$ for all pairs in $\left\\{\\{i,j\\}:i,j\in N\mbox{ and
}i\neq j\right\\}$, where $p_{ij}$ is the length (number of arcs or edges) in
the shortest path between $i$ and $j$ in $G$.
Centralized metrics:
$\bullet$ degree centrality ([Foulds 2012, Peng et al. 2018]):
The degree of a node $i\in N$ is the number of links incident on $i$. If the
graph is oriented, we consider just its out-degree, as defined in Section 1,
that is, $g^{+}(i)=|\delta^{+}(i)|$, related to the strength of influence
and the hurdle of a node, as considered in (1). We use the notation $g(i)$ when
addressing the undirected case. Nodes with higher degree (more friends or more
followers) are considered to be more influential.
$\bullet$ eccentricity ([Hage and Harary 1995, Foulds 2012]):
The eccentricity of a node $i\in N$ is the maximum path length between $i$ and
any other node in the graph, that is, $ec(i)=\max_{j\in N}p_{ij}$. The central
nodes in the graph are those with the lowest eccentricity.
$\bullet$ closeness centrality ([Okamoto et al. 2008]):
The closeness centrality of node $i\in N$ is the inverse of the average path
length between $i$ and all the other nodes in the graph, that is,
$cc(i)=\frac{n-1}{\sum_{j\in N\setminus\\{i\\}}p_{ij}}$. Nodes with larger
closeness centrality can reach sooner the other nodes in the graph, on
average, being better positioned to spread the information.
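For unweighted graphs, the path lengths $p_{ij}$, the eccentricity $ec(i)$ and the closeness centrality $cc(i)$ can all be computed with one breadth-first search per node. A self-contained sketch, assuming a connected graph stored as an adjacency dict (the helper names and the tiny path-graph example are ours):

```python
from collections import deque

def bfs_lengths(adj, s):
    # Shortest path lengths p_{sj} (number of links) from s to all reachable nodes.
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def eccentricity(adj, i):
    # ec(i) = max_{j in N} p_{ij}
    return max(bfs_lengths(adj, i).values())

def closeness(adj, i):
    # cc(i) = (n - 1) / sum_{j != i} p_{ij}
    d = bfs_lengths(adj, i)
    return (len(adj) - 1) / sum(d.values())

# Path graph 0 - 1 - 2: the middle node is the most central one.
path = {0: [1], 1: [0, 2], 2: [1]}
```

On the path graph, node 1 has eccentricity 1 and closeness 1.0, while the endpoints have eccentricity 2 and closeness 2/3.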
$\bullet$ betweenness centrality ([Freeman 1977, Brandes 2001, Boccaletti et
al. 2006]):
The betweenness centrality (or load) of node $i\in N$ represents a measure of
the number of shortest paths passing through node $i$, being defined by
$bc(i)=\sum_{s,t\in N\setminus\\{i\\},s\neq
t}\frac{\sigma_{st}(i)}{\sigma_{st}}$, with $\sigma_{st}(i)$ representing the
number of shortest paths between $s$ and $t$ passing through node $i$, and
$\sigma_{st}$ representing the number of shortest paths between $s$ and $t$ in
$G$. Nodes with larger betweenness centrality lie on the "preferred" path
between many pairs of nodes, acting as bridges. Under the assumption that
information flows through shortest paths, these nodes are better placed to
control most of the communication traffic in the graph. In addition, the
betweenness centrality value of node $i$ also indicates the disruption induced
in the graph when $i$ is removed.
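The $bc(i)$ values can be accumulated efficiently with Brandes' algorithm [Brandes 2001], which counts shortest paths during a BFS and then back-propagates path dependencies. A sketch for unweighted graphs; as in the formula above, ordered pairs $(s,t)$ are counted, so on undirected graphs each unordered pair contributes twice:

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm on an unweighted graph (ordered (s, t) pairs)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack = []
        pred = {v: [] for v in adj}                # predecessors on shortest paths
        sigma = {v: 0 for v in adj}; sigma[s] = 1  # counts of shortest paths
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            stack.append(u)
            for w in adj[u]:
                if dist[w] < 0:
                    dist[w] = dist[u] + 1
                    q.append(w)
                if dist[w] == dist[u] + 1:
                    sigma[w] += sigma[u]
                    pred[w].append(u)
        delta = {v: 0.0 for v in adj}              # dependency accumulation
        while stack:
            w = stack.pop()
            for u in pred[w]:
                delta[u] += sigma[u] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Path graph 0 - 1 - 2: only the middle node lies between other pairs.
path = {0: [1], 1: [0, 2], 2: [1]}
```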
Link topological metrics ([Peng et al. 2018]):
$\bullet$ eigenvector centrality ([Bonacich 2007, Lü et al. 2016]):
The eigenvector centrality of node $i\in N$ measures the level of influence of
node $i$ based on the influential strength of its direct neighbors, being
denoted by $ev(i)$. This means that the spreading ability of $i$ is reinforced
if it is connected to other influential nodes. In this case, a person $i$ with
few connections can have a high $ev(i)$ if those connections are also well-
connected with others in the network. Nodes with higher eigenvector value are
expected to be more influential in the network. The eigenvector metric is the
same as degree centrality if all neighbor nodes have equal degree, namely on
regular graphs. The eigenvector centralities can be computed iteratively
through the expression $ev(i)=\frac{1}{\lambda}\sum_{j\in\delta^{-}(i)}ev(j)$,
that is, $Ax=\lambda x$, where $A$ is the adjacency matrix, $x$ is the $ev$
vector and $\lambda$ is the largest eigenvalue in absolute value of $A$
($\lambda\neq 0$), related to the principal eigenvector of $A$.
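The fixed point of $ev(i)=\frac{1}{\lambda}\sum_{j\in\delta^{-}(i)}ev(j)$ can be approximated by power iteration, normalizing at every step. An illustrative sketch (names and the small example graph are ours; it assumes a non-bipartite graph so the iteration settles):

```python
def eigenvector_centrality(adj, iters=200):
    """Power iteration for ev(i); adj maps each node to its out-neighbors,
    so an arc (u, v) sends u's score to v (the sum over v's in-neighborhood)."""
    x = {v: 1.0 for v in adj}
    for _ in range(iters):
        nxt = {v: 0.0 for v in adj}
        for u in adj:
            for v in adj[u]:
                nxt[v] += x[u]
        norm = max(nxt.values()) or 1.0   # rescale so the largest entry is 1
        x = {v: s / norm for v, s in nxt.items()}
    return x

# Triangle 0-1-2 plus a pendant node 3 attached to node 0 (undirected).
g = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
ev = eigenvector_centrality(g)
```

Node 0 is connected to every other node and ends up with the largest score, while the pendant node 3, whose single neighbor is influential, still scores above zero but below all triangle members.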
$\bullet$ PageRank ([Brin and Page 2012, Lü et al. 2016, Liu et al. 2017]):
The PageRank index, proposed by Sergey Brin and Lawrence Page [Brin and Page
1998, Brin and Page 2012], and used by Google’s search engine, is a measure of
the relevancy of a given web page in a webgraph (oriented, in general). It is
also used in other network environments ([Gleich 2015]), including social
networks. Like the eigenvector metric, this index is based on the number and
relevancy of the hyperlinks a website receives. Thus, higher PageRank values
correspond to more meaningful websites in a webgraph. The PageRank considers
the number of inward arcs (i.e., links converging to a website) and the
relevancy of the linkers (i.e., the PageRank of the adjacent converging
websites). The PageRank index of a node $i$, here denoted by $pr(i)$ for all
$i\in N$, is calculated using a recursive procedure, defined by
$pr_{t}(i)=\beta\sum_{j\in\delta^{-}(i)}\frac{pr_{t-1}(j)}{g^{+}(j)}+(1-\beta)\frac{1}{n}$,
where $\beta$ is the probability that a user clicks on a hyperlink on the
current page; while $1-\beta$ is the probability of teleportation by typing
the name of a web page of choice in the address bar, moving elsewhere. The
procedure starts with $pr_{1}(i)=1$ for all $i\in N$ and terminates when the
PageRank values become stable. The final result is $pr(i)=pr_{tt}(i)$ for all
$i\in N$, with $tt$ the total number of iterations of the recursive procedure.
It has good convergence properties when $\beta$ is not too close to 1 ([Gleich
2015]). The usually recommended value for $\beta$ is 0.85 (e.g., [Brin and
Page 2012, Gleich 2015, Lü et al. 2016]).
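The recursion above, with the starting point $pr_{1}(i)=1$ used in the text, can be sketched directly. This is an illustrative sketch (names ours); the adjacency dict maps each node to its out-neighbors, and nodes without outgoing arcs simply contribute nothing, so some mass can leak on dangling nodes:

```python
def pagerank(adj, beta=0.85, tol=1e-10):
    """Iterates pr_t(i) = beta * sum_{j in in(i)} pr_{t-1}(j) / g+(j) + (1-beta)/n,
    starting from pr_1(i) = 1, until the values become stable."""
    nodes = list(adj)
    n = len(nodes)
    out_deg = {u: len(adj[u]) for u in nodes}
    incoming = {v: [] for v in nodes}
    for u in nodes:
        for v in adj[u]:
            incoming[v].append(u)
    pr = {v: 1.0 for v in nodes}
    while True:
        nxt = {i: beta * sum(pr[j] / out_deg[j] for j in incoming[i])
                  + (1 - beta) / n
               for i in nodes}
        if max(abs(nxt[i] - pr[i]) for i in nodes) < tol:
            return nxt
        pr = nxt

cycle = {0: [1], 1: [2], 2: [0]}   # directed 3-cycle: fully symmetric
pr = pagerank(cycle)
```

On the symmetric 3-cycle the fixed point solves $x=\beta x+(1-\beta)/3$, giving $pr(i)=1/3$ for every node.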
$\bullet$ HITS (Hyperlink-Induced Topic Search) ([Kleinberg 1999, Lü et al.
2016]):
HITS was also developed to produce metrics on webgraphs, classifying websites
into authoritative and hub pages, here denoted by $au(i)$ and $hu(i)$,
respectively, for all $i\in N$. Authoritative pages are those with many
incoming links from hub websites, exposing the value of the content of the
page; while hub pages are those pointing to many relevant authoritative pages.
These two types of nodes are intimately related to each other, being
calculated through a recursive procedure that should converge to stable
authoritative and hub scores. The recursive expressions are:
$au_{t}(i)=\sum_{j\in\delta^{-}(i)}hu_{t-1}^{\prime}(j)$ and
$hu_{t}(i)=\sum_{j\in\delta^{+}(i)}au_{t-1}^{\prime}(j)$, with
$au_{t}^{\prime}(i)=\frac{au_{t}(i)}{\sqrt{\sum_{k\in
N}\left(au_{t}(k)\right)^{2}}}$ and
$hu_{t}^{\prime}(i)=\frac{hu_{t}(i)}{\sqrt{\sum_{k\in
N}\left(hu_{t}(k)\right)^{2}}}$ the normalized scores. The procedure starts
with $au_{1}(i)=hu_{1}(i)=1$ for all $i\in N$; and terminates when the scores
of all nodes reach the steady state, controlled by a given threshold, denoted
by $\varepsilon$ and set to 0.0001 in our computational tests. The final
values are $au(i)=au_{tt}(i)$ and $hu(i)=hu_{tt}(i)$ for all $i\in N$, with
$tt$ the total number of iterations of the recursive procedure. The two
metrics’ results are similar on undirected graphs. A good authoritative page
should have many incoming links from many good hubs, that is, being linked
from pages known as hubs for information; and a good hub page should point to
many good authoritative pages, that is, linked to pages that are considered to
be authorities on the subject. The nodes with higher authoritative value are
the most central in the graph.
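The HITS recursion, with L2 normalization and the stopping threshold $\varepsilon$, can be sketched as follows (an illustrative sketch with our own names; the adjacency dict maps each node to its out-neighbors):

```python
import math

def hits(adj, eps=1e-4):
    nodes = list(adj)
    incoming = {v: [] for v in nodes}
    for u in nodes:
        for v in adj[u]:
            incoming[v].append(u)
    au = {v: 1.0 for v in nodes}   # authoritative scores, au_1(i) = 1
    hu = {v: 1.0 for v in nodes}   # hub scores, hu_1(i) = 1
    while True:
        new_au = {i: sum(hu[j] for j in incoming[i]) for i in nodes}
        new_hu = {i: sum(au[j] for j in adj[i]) for i in nodes}
        for scores in (new_au, new_hu):           # L2 normalization
            norm = math.sqrt(sum(s * s for s in scores.values())) or 1.0
            for i in scores:
                scores[i] /= norm
        steady = (max(abs(new_au[i] - au[i]) for i in nodes) < eps and
                  max(abs(new_hu[i] - hu[i]) for i in nodes) < eps)
        au, hu = new_au, new_hu
        if steady:
            return au, hu

# Two hubs (0 and 1) both pointing to one authority (2).
chain = {0: [2], 1: [2], 2: []}
au, hu = hits(chain)
```

Here node 2, linked from both hubs, receives the whole authoritative mass, while nodes 0 and 1 share identical hub scores.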
Unlike other authors, we chose to include the eigenvector metric within the
link topological class because it is built upon neighborhood influence, as
are PageRank and HITS.
All these centralized measures are focused on direct neighbors (without
distinguishing among them) or on the path length distance to the other nodes
in the graph. They do not reveal information spread based on the nodes'
influence strength over their neighbors. This influence strength over the
neighbors is essential to model influence spread cascades, used to show each
node's influence capability over the entire network, as performed by the ILP
measure proposed here.
In the same way, the selected link topological metrics described above are
based on direct neighborhood influence, distinguishing the relevancy of the
various neighbors. These neighbors also benefit from their own neighbors'
influence, and so on, producing a kind of influence propagation towards the
nodes. However, in these cases, the relevancy of a member is entirely built
upon the relevancy of its neighbors, ignoring its own ability to decide on a
message propagation scheme. In effect, message propagation capability depends
on the strength of inward neighbors, but the message must then break the
node's own hurdle to propagate further. This point establishes the main
difference between the link topological metrics described above and the
influence propagation considered in Threshold and Cascade models, including
the architecture of the ILP and ITP indices. Instead of just reading the
neighbors' direct influence, the ILP index uses the network topology and the
message viral power to truly simulate message propagation. From a different
perspective, the ITP index describes each node's capability as a consumer of
information flowing in the graph. It may resemble the betweenness centrality
measure, but instead of receiving the flow due to its geodesic position in
the graph, the node receives the flow through true influence propagation
cascades started at each node in the graph, being also sensitive to the
message viral power.
To highlight these differences, we use the Zachary karate club network
described in Subsection 3.1, represented by an undirected graph.
Table 8 in the Appendix shows the ILP and ITP indices’ values for all the
nodes in the Zachary’s club graph, for $\alpha=1.5\mbox{ and }3.0$. It also
includes all the results produced by the metrics described above, calculated
using Gephi [Bastian et al. 2009].
Starting with the ILP index results, when $\alpha=1.5$, representing a rather
viral message (for instance, an opinion about an adversary), there are 5
members able to spread this opinion throughout the entire network
(members 1 (Mr. Hi), 2, 3, 33 and 34 (John A.)). The other members have low
or null influence capability in the graph. This is a typical behavior of this
index, which separates the members into two groups: strong and low/null message
launchers. However, when $\alpha=3.0$, representing a less viral message (for
instance, a non-consensual opinion about one of the leaders, Mr. Hi or John
A.), the message propagation should be harder because they are all members of
the same club. In this case, only node 1 (Mr. Hi) and node 34 (John A.) can
spread the message, each being able to cover only 20 other members when
starting from it, as represented in Figure 3 (a) and (b), for Mr. Hi and John
A., respectively. There is another member (node 33) that can also do some
damage, being able to reach 9 other members, but clearly with lower strength
compared to the two leaders.
Figure 3: ILP index solution with $\alpha=3.0$, for node 1 (in (a)) and node
34 (in (b)) as the origins, for Zachary's karate club graph.
The two solutions in Figure 3 also show that there are some members being
reached by the two potential influencers: Mr. Hi and John A., namely members
3, 9, 10, 14, 20, 29, 31 and 32. Some of these members stayed with Mr. Hi (3,
9, 14, 20) and the others stayed with John A. (10, 29, 31, 32) after the
club’s fission.
Concerning the ITP index results, when the message is volatile ($\alpha=1.5$),
there are two nodes (17 and 26) that are more influenceable than the others,
being reached by 7 members of the club. These two members have great potential
for being consumers or collectors of information in the network, which may
suggest that they should be observed closely for all sorts of message
propagation in the graph, namely gossip or fake news. However, for messages
with less spread capability ($\alpha=3.0$), namely classified information, the
members to observe are no longer the former, but 9, 10, 20, 29 and 31.
Curiously, they all belong to the subset of those influenced by both sides, as
observed above.
Compared to the other metrics, the ILP and ITP indices incorporate the
capability to classify the message being spread. In effect, the viral strength
of the message is a key issue for its spreadability, while the previously
mentioned metrics make no distinction on this matter. If we observe the degree
information of each node, it provides close answers for the more volatile
message (with $\alpha=1.5$), but it helps less when the message has more
friction (with $\alpha=3.0$), namely on nodes 2, 3 and 33. However, the nodes'
degree can complement the ILP index information, revealing, for instance, that
some lower degree members (2 and 3) are also able to cover the entire network
when the message is volatile. These members may be easier to convince to act
as influencers in a marketing campaign, for instance. In addition, among the
members with the lowest eccentricity there are a number with low or
null ability to spread the message, namely members 4, 9, 14, 20 and 32.
Instead, John A. (member 34) has higher eccentricity. Also, closeness
centrality singles out members 9, 14, and 32, with low average path length
(below $\frac{1}{0.52}$) and very low or null capability to be message
spreaders. On the other hand, member 17 has a high average path length
($\frac{1}{0.28}$) while being a good consumer/collector of volatile messages
when $\alpha=1.5$, with $itp(17)=0.21$. The betweenness centrality metric, in
turn, singles out members 3 and 32 as interesting bridges for the flow passing
among nodes in the graph, but they have low capability to be good influencers,
especially for less viral messages ($\alpha=3.0$). This is curious because
they can be highly disruptive if removed from the graph. In addition, the
stronger ability of some members as consumers/collectors (index ITP) is not
followed by higher values of betweenness centrality. Now, considering the link
topological metrics, they all bring similar results for this example,
highlighting members 1 and 34, and also members 2, 3 and 33, for their
relevancy in the graph, which is directly related to the number of neighbors
and their weight. However, they fail to indicate how far this relevancy can go
considering message propagation. In effect, they also stress nodes 4, 9, 14
and 32 (as do the eccentricity and closeness centrality metrics) with moderate
relevancy, although these members have low or null ability for message
spreading in the graph. To conclude, these observations stress that the new
indices should not be used as substitutes for the known metrics, but in
conjunction with them, to complement the decision process on influencer
selection in social networks.
In the next subsection we provide a few more observations involving the
metrics described here, using a larger sized graph.
### 3.3 Computational results
This section describes the computational tests on the ILP-ITP($\alpha$)
algorithm described in Section 2 for generating the ILP (Individual Launching
Power) and ITP (Individual Target Potential) indices on two given classes of
instances. These classes involve the randomized and real-world instances
selected in Subsection 3.1.
Table 4 reports the ILP and ITP values on the WS randomly generated graphs,
involving $n=10,000$ nodes. The table includes the execution times (in
seconds) of the algorithm and the percentage of nodes that are able to cover
(activate) at least 99% of the remaining nodes in the graph, representing
strong candidates to be influencers, using the ILP index results. The tests
were conducted considering the following values for the hurdle coefficient:
$\alpha=$ 1.0, 1.5 and 2.0, indicating the virality level of a message
generated in each of the nodes.
instance | density | type | strong influencers (%): $\alpha=1.0$ | $\alpha=1.5$ | $\alpha=2.0$ | execution time (sec.): $\alpha=1.0$ | $\alpha=1.5$ | $\alpha=2.0$
---|---|---|---|---|---|---|---|---
WS-10-33 | 0.0007 | oriented | 80.99 | 4.23 | 0.00 | 1853 | 70 | $<1$
WS-10-66 | 0.0008 | oriented | 86.57 | 14.57 | 0.35 | 2443 | 371 | 7
WS-10-100 | 0.0010 | undirected | 90.31 | 16.87 | 0.44 | 3472 | 552 | 13
WS-20-33 | 0.0013 | oriented | 91.70 | 21.01 | 5.32 | 4227 | 1730 | 231
WS-20-66 | 0.0017 | oriented | 94.23 | 23.05 | 1.66 | 4923 | 1214 | 89
WS-20-100 | 0.0020 | undirected | 94.87 | 6.77 | 0.03 | 7183 | 502 | 3
WS-50-33 | 0.0033 | oriented | 97.18 | 21.01 | 0.37 | 9808 | 2142 | 38
WS-50-66 | 0.0042 | oriented | 98.24 | 8.53 | 0.09 | 11023 | 993 | 11
WS-50-100 | 0.0100 | undirected | 97.87 | 0.14 | 0.00 | 16792 | 24 | $<1$
Table 4: Percentage of strong influencers and execution times of the ILP-
ITP($\alpha$) algorithm on the WS randomly generated graphs, considering
$\alpha=$ 1.0, 1.5 and 2.0.
As expected, all the ILP index results observed on the WS graphs divide the
nodes into two classes: launchers and non-launchers. Likewise, the results
with the ITP index also found two classes of nodes: targets and non-targets.
The breaking point that separates launchers from non-launchers represents the
minimum proportion of nodes that launchers can cover. This value was observed
to be higher than 0.9959 in all experiments reported in Table 4, except for
instances WS-10-33 and WS-50-100 with $\alpha=2.0$.
Considering the results in this table, when the message is very viral
($\alpha=1.0$), most nodes are able to act as strong launchers in the graph,
especially when it becomes denser, as expected. As an example, suppose a
social network (connected) of football (soccer) supporters. If we think of a
very viral message in this network, for instance, ”Ronaldo returns to
Manchester”, it can easily reach most nodes of the network if launched by any
member, no matter its strength, in most cases. A similar result holds for a
less viral message ($\alpha=1.5$) when $k=10$, but the situation suddenly
changes when the graph becomes denser. In effect, for $k=20$ the number of
strong launchers is lower in the undirected graph (WS-20-100) compared with
the lower density instances (WS-20-33 and WS-20-66); and this behavior is even
clearer for the $k=50$ instances, where the number of strong launchers
decreases as the density of the graphs increases, which is somewhat
unexpected. This performance is even more noticeable for the less viral
message cases, with $\alpha=2.0$. Actually, although density growth increases
the number of connections, it also makes the nodes stronger in their own
hurdle, making message dissemination harder. These small-world networks are
particularly sensitive to this aspect, due to the homogeneity of their nodes'
degrees. As observed in the forthcoming tests, this low variation in nodes'
degrees is not typical of a social network, which may cast doubt on the
suitability of small-world artificial graphs for simulating social networks.
Now, considering the larger sized real-world instances proposed in Subsection
3.1, we show in Tables 6 and 7 the results of the ILP and ITP indices returned
by the ILP-ITP($\alpha$) algorithm, considering the following values for the
hurdle coefficient: $\alpha=$ 1.0, 1.5, 2.0 and 3.0. The algorithm was run on
the entire graphs, despite the fact that most of them are not connected (or
strongly connected, in the oriented cases). The sizes of these connected (or
strongly connected) components (as percentages of nodes over the entire graph)
and the execution times of the algorithm for the various hurdle coefficients
are reported in Table 5.
instance | type | largest component (in %) | execution time (sec.): $\alpha=1.0$ | $\alpha=1.5$ | $\alpha=2.0$ | $\alpha=3.0$
---|---|---|---|---|---|---
Advogato | oriented | 60.91 | 12 | 6 | 4 | 2
Hamsterster | undirected | 96.23 | 3 | 2 | 1 | $<1$
ego-Facebook | undirected | 100.00 | 189 | 125 | 88 | 30
email-EU | oriented | 79.90 | 2 | 1 | $<1$ | $<1$
CollegeMsg | oriented | 68.14 | 2 | 1 | 1 | $<1$
Ning | oriented | 87.37 | 69 | 32 | 18 | 7
Table 5: Sizes (in percentage) of the largest connected (strongly connected)
components in the real-world instances; and the execution times of the ILP-
ITP($\alpha$) algorithm for $\alpha=$ 1.0, 1.5, 2.0 and 3.0.
The execution time of the algorithm is influenced by the size and density of
the graph, but also by the hurdle coefficient. In effect, when the hurdle
coefficient increases, the number of launcher nodes diminishes, which is
reflected in a lower execution effort by the algorithm. In addition, the
largest connected (or strongly connected) component has almost the full graph
size in the undirected instances, being smaller in the oriented counterparts,
especially in the Advogato graph. Despite these differences, we chose to run
and report the tests on the original graphs because it is closer to reality.
Here again, all the ILP index results on the real-world graphs divided the
nodes into two classes: launchers and non-launchers. Likewise, the results
with the ITP index found two classes of nodes: targets and non-targets. We
consider again the breaking point that separates launchers from non-launchers
as the minimum proportion of nodes that launcher members can cover, here
denoted as the _minimum influential breaking point_ (mibp, in short). We also
consider the breaking point that separates targeting from non-targeting nodes
as the minimum proportion of nodes covering target nodes, denoted by the
_minimum targeting breaking point_ (mtbp, in short). Thus, Table 6 reports the
percentage of launcher nodes and the mibp values found for each instance and
each hurdle coefficient. Table 7 reports the percentage of targeting nodes and
the mtbp values for the same instances and hurdle coefficients.
instance | launchers (%): $\alpha=1.0$ | $\alpha=1.5$ | $\alpha=2.0$ | $\alpha=3.0$ | mibp: $\alpha=1.0$ | $\alpha=1.5$ | $\alpha=2.0$ | $\alpha=3.0$
---|---|---|---|---|---|---|---|---
Advogato | 21.51 | 11.70 | 7.29 | 3.16 | 0.7241 | 0.7115 | 0.6995 | 0.6814
Hamsterster | 32.88 | 20.61 | 12.70 | 5.71 | 0.9553 | 0.9467 | 0.9435 | 0.9413
ego-Facebook | 73.31 | 52.34 | 36.37 | 15.23 | 1.0000 | 0.9861 | 0.9861 | 0.9854
email-EU | 52.64 | 36.42 | 26.27 | 12.14 | 0.8137 | 0.8137 | 0.8118 | 0.8078
CollegeMsg | 25.43 | 12.06 | 6.79 | 2.05 | 0.6965 | 0.6907 | 0.6886 | 0.6797
Ning | 14.58 | 7.44 | 4.47 | 1.93 | 0.8757 | 0.8642 | 0.8527 | 0.8196
Table 6: ILP index results on the real-world selected instances, considering
$\alpha=$ 1.0, 1.5, 2.0 and 3.0.
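The text does not spell out how the breaking point separating launchers from non-launchers is located. One natural way, shown here purely as an illustration (the heuristic and names are ours, not the authors' procedure), is to split the sorted coverage proportions at their largest gap and take the mibp as the smallest coverage on the launcher side:

```python
def split_by_largest_gap(coverage):
    """coverage: node -> proportion of the graph that node can activate.
    Returns (non_launchers, launchers, mibp)."""
    ranked = sorted(coverage.items(), key=lambda kv: kv[1])
    # Index of the largest jump between consecutive coverage values.
    cut = max(range(len(ranked) - 1),
              key=lambda i: ranked[i + 1][1] - ranked[i][1])
    low = dict(ranked[:cut + 1])
    high = dict(ranked[cut + 1:])
    return low, high, min(high.values())

# Toy coverage values: two clear non-launchers and two clear launchers.
cov = {"a": 0.01, "b": 0.02, "c": 0.95, "d": 0.99}
low, high, mibp = split_by_largest_gap(cov)
```

This matches the two-class behavior reported above, where launchers cover almost the entire graph while the remaining nodes reach only a handful of others.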
The mibp percentage on these instances is lower, in general, compared to the
WS instances results, especially on the Advogato and CollegeMsg datasets,
probably due to the smaller size of the largest strongly connected components.
On the other instances, the launcher nodes can cover more than 80% of the
nodes.
If we observe the largest oriented graph (Ning), when the message is very
viral ($\alpha=1.0$), 14.58% of the nodes (1,501 members) can be classified as
launchers, and the nodes in this group can reach at least 87.57% (9,018 nodes)
of the entire set of members. The remaining 85.42% (non-launcher nodes) can
reach no more than 83 other nodes in the graph. However, if the message
becomes less viral, with $\alpha=1.5$, the number of successful launcher nodes
falls to 7.44% (766 members), and each of these nodes can cover 86.42% (8,900
nodes) of the graph, or more. The remaining 9,532 non-launcher nodes can reach
fewer than 184 other members in the graph. Further, if the message has low
virality (with $\alpha=3.0$), the percentage of launcher nodes is 1.93%,
representing only 199 members that are able to cover at least 81.96% (8,440
nodes) of the graph. The remaining 98.07% of the members (non-launchers) can
only cover at most 509 other nodes. As an additional observation, considering,
for instance, the $\alpha=1.0$ case, the mibp of the launchers (0.8757,
representing 9,018 nodes) is larger than the largest strongly connected
component in that graph (87.37%, that is, 8,997 nodes). The reason for this is
that the message can propagate across nodes in and out of the largest strongly
connected component, namely among nodes in weakly connected components.
Considering a different case, if we observe the largest undirected graph (ego-
Facebook network), which is entirely connected, it has 73.31% launcher members
that are able to cover the entire graph through influence propagation if the
message is very viral, with $\alpha=1.0$. The remaining 26.69% members can
reach no more than 60 other nodes in the graph, thus, being non-launchers.
Once again, if the virality of the message decreases, considering
$\alpha=1.5$, the percentage of launcher nodes decreases to 51.34%, each of
which is able to cover 98.61% of the graph; and if the message becomes
harder to pass, with $\alpha=2.0$, the percentage of launcher nodes decreases
further, to 36.37%, falling even deeper (15.23%) if the virality of the
message is further decreased. These 15.23% launchers are 615 Facebook members
that are able to reach (individually) 98.54% of the other nodes in the graph,
at least.
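The launcher statistics discussed above can be illustrated with a deterministic Linear-Threshold cascade. The paper's exact activating condition (1) also involves node degree; as a stand-in we assume here, purely for illustration, that a node activates once at least $\lceil\alpha\rceil$ of its in-neighbors are active (so a larger hurdle coefficient means a less viral message):

```python
import math
from collections import deque

def cascade(adj, seed, alpha):
    """Deterministic cascade from a single seed node.
    Illustrative activating condition: a node becomes active once it has
    at least ceil(alpha) active in-neighbors (a stand-in for the paper's
    degree-based condition (1); higher alpha = less viral message)."""
    need = math.ceil(alpha)              # activation hurdle per node
    active, hits, frontier = {seed}, {}, deque([seed])
    while frontier:
        u = frontier.popleft()
        for v in adj.get(u, ()):
            if v in active:
                continue
            hits[v] = hits.get(v, 0) + 1
            if hits[v] >= need:          # hurdle reached: v activates
                active.add(v)
                frontier.append(v)
    return active

def ilp_index(adj, alpha):
    """ilp(i): fraction of the other nodes reached by i's cascade."""
    n = len(adj)
    return {v: (len(cascade(adj, v, alpha)) - 1) / (n - 1) for v in adj}
```

On a star graph, for example, every node is a launcher when $\alpha=1.0$ (any seed ends up covering the whole graph), while no cascade leaves its seed when $\alpha=2.0$, mirroring how the launcher group shrinks as the hurdle coefficient grows.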
The launcher members able to cover almost the entire graph are still
too numerous if we intend to choose only some of them to propagate a message or
initiate a marketing campaign. Therefore, we propose complementing the
information with some of the metrics discussed in Subsection 3.2. To assist on
this discussion we show in Figure 4 an image with the ego-Facebook instance,
where the nodes’ color (from red to light yellow) and size are proportional to
their degree in the graph. The network was built using Gephi [Bastian et al.
2009]. Observing this image, a natural choice for strong influencers (as
launchers) would be those with larger degree, according to the criteria used
in the construction of the activating condition (1) described in Section 1. In
fact, the five nodes with largest degree are 108, 1685, 1913, 3438 and 1, with
degrees 1045, 792, 755, 547 and 347, respectively. These 5 nodes are also on
top of the list for closeness centrality, betweenness centrality and PageRank,
indicating that they are central on communication and neighbors influence.
These are probably the most relevant players in the graph, but they should
also be the most expensive if we think about a financial incentive paid to
these members to support a marketing campaign or to send a message.
In effect, they all belong to the launchers’ list determined by the ILP index.
However, still thinking about the cost of paying these members, are there
strong launcher nodes that may cost less? A possible answer to this question
can arise from lower-degree nodes, or nodes ranked lower by other metrics, still
belonging to the launchers’ list. In effect, among the nodes in this list, we can find a number
of members with degree below 30 (nodes 679, 3081, 3232 and 991) which may
represent good candidates to act as launchers (influencers) in practice.
Probably due to their position in the graph, these apparently weak members are
as effective at spreading influence as node 108, which exhibits the largest
degree ($g(108)=1045$) in the graph. Note that the average degree in this
graph is 43.69 and the standard-deviation is 52.41. A curious aspect to
mention involves node 568 that has the lowest eccentricity ($ec(568)=4$), node
degree slightly above average ($g(568)=63$) and very high betweenness
centrality (above 750,000), suggesting that it could be placed in a privileged
position as an influencer. Yet, it is classified in the non-launchers class by
the ILP index for virality level $\alpha=3.0$, as it is able to reach only 21
other nodes in the graph. So, the other known metrics may not be tailored for
assessing influence propagation on their own, but their performance can be
improved if used together with the ILP index information.
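The two-stage selection described above, first filtering by the ILP launcher list and then ranking by a cost proxy such as degree, can be sketched as follows (the `ilp` and `degree` inputs and the threshold defaults are hypothetical, chosen only to mirror the discussion):

```python
def cheap_launchers(ilp, degree, coverage=0.8, max_degree=30):
    """Return launchers (ILP coverage at least `coverage`) whose degree is
    below `max_degree`: candidates that can cover most of the graph yet,
    having few followers, should be cheaper to recruit."""
    return sorted(v for v in ilp
                  if ilp[v] >= coverage and degree[v] < max_degree)
```

The same filter-then-rank pattern applies with any of the other metrics of Subsection 3.2 (closeness, betweenness, PageRank) in place of degree.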
Figure 4: ego-Facebook network, built using Gephi.
Besides instances ego-Facebook and email-EU, the launchers’ groups on all
other networks are relatively small. The number of members in these groups
should fall below 1% when $\alpha>3.0$ in most of the studied graphs. This
percentage is close to the number of nodes that are able to launch an
effective influence cascade over the network, as reported in [Goel et al. 2016],
based on a very extensive variety of content launched on Twitter. As
shown in our experiments, these proportions are strongly influenced by the
viral power of the message ($\alpha$) and the network topological nature.
instance | targets % ($\alpha$=1.0) | targets % ($\alpha$=1.5) | targets % ($\alpha$=2.0) | targets % ($\alpha$=3.0) | mtbp ($\alpha$=1.0) | mtbp ($\alpha$=1.5) | mtbp ($\alpha$=2.0) | mtbp ($\alpha$=3.0)
---|---|---|---|---|---|---|---|---
Advogato | 72.42 | 71.15 | 69.95 | 68.15 | 0.2150 | 0.1168 | 0.0728 | 0.0314
Hamsterster | 95.53 | 94.67 | 94.35 | 94.13 | 0.3285 | 0.2057 | 0.1265 | 0.0565
ego-Facebook | 100.00 | 98.61 | 98.61 | 98.54 | 0.7330 | 0.5233 | 0.3777 | 0.1521
email-EU | 81.39 | 81.39 | 81.19 | 80.80 | 0.5259 | 0.3635 | 0.2620 | 0.1205
CollegeMsg | 69.67 | 69.09 | 68.88 | 67.98 | 0.2540 | 0.1201 | 0.0674 | 0.0200
Ning | 87.57 | 86.42 | 85.27 | 81.96 | 0.1457 | 0.0743 | 0.0446 | 0.0192
Table 7: ITP index results on the real-world selected instances, considering
$\alpha=$ 1.0, 1.5, 2.0 and 3.0.
Now, concerning the ITP index results reported in Table 7, most nodes in the
graphs belong to the targets group, with a slight exception on instances
CollegeMsg and Advogato, possibly due to their lower connectivity properties.
However, most of these targeting nodes are reachable by a relatively small
number of members, except on the ego-Facebook and email-EU graphs. For
instance, the targeting group in the ego-Facebook graph includes all members
when the message is very viral ($\alpha=1.0$) and these members are reachable
by at least 73.30% (2961 members) of the other nodes in the graph. In this case,
there are no non-targeting members. However, when the message has low
virality, with $\alpha=3.0$, the targeting group includes 98.54% members (3980
nodes), but these members are only reachable by 15.21% of the other nodes in the
graph. The remaining 1.46% non-targeting members (59 nodes) can be reached by
no more than 0.07% of the nodes, that is, at most 3 members. The targeting
class, in particular, includes a large variety of members, all of them acting
as message consumers (or collectors). This group may well concentrate the
usual targets of fake news, and can be used to start the process of
tracking fake news’ origins. To further explore the selection
within the targeting members’ group, we may use the betweenness centrality
metric as an additional filter. If we focus again on the $\alpha=3.0$ virality
level and still on the ego-Facebook network, the nodes with largest
betweenness centrality value (larger than 1 million) are, again, members 108,
1685, 3438, 1913, 1086 and 1, among those in the targeting nodes’ class.
Curiously, all these nodes are both good launchers and good targets. They are
also among the top-degree members, except node 1086 ($g(1086)=66$), so,
unsurprisingly, they are strong players in the graph. However, they are also
heavyweight players that likely belong to the expensive nodes’ class.
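Symmetrically to the ILP, the ITP index counts, for each node, how many other nodes' cascades reach it. A minimal self-contained sketch, under the same illustrative activation rule as before (at least $\lceil\alpha\rceil$ active in-neighbors to activate, which is a stand-in and not necessarily the paper's exact condition (1)):

```python
import math
from collections import deque

def _cascade(adj, seed, alpha):
    # Illustrative activation rule: a node activates once ceil(alpha)
    # of its in-neighbors are active (stand-in for condition (1)).
    need = math.ceil(alpha)
    active, hits, frontier = {seed}, {}, deque([seed])
    while frontier:
        u = frontier.popleft()
        for v in adj.get(u, ()):
            if v not in active:
                hits[v] = hits.get(v, 0) + 1
                if hits[v] >= need:
                    active.add(v)
                    frontier.append(v)
    return active

def itp_index(adj, alpha):
    """itp(i): fraction of the other nodes whose cascades reach i."""
    n = len(adj)
    reached_by = {v: 0 for v in adj}
    for seed in adj:                     # one cascade per potential launcher
        for v in _cascade(adj, seed, alpha) - {seed}:
            reached_by[v] += 1
    return {v: reached_by[v] / (n - 1) for v in adj}
```

This also makes the closing observation of the section concrete: the nodes with high `itp` are exactly those covered by the launchers' cascades.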
To conclude, an interesting and expected observation in these tests shows that
the percentage of launchers is almost the same as the mtbp proportion; and the
percentage of targets is also close to the mibp proportion. This illustrates
that the nodes able to reach the target members are basically the launchers;
and the nodes that are reached by the launchers are mostly the targets.
## 4 Conclusions
In the present paper, we have considered an entirely deterministic process for
characterizing adoption and influence using an activating condition based on
the Linear Threshold (LT) model. The activating condition uses the topological
information of the graph, namely the nodes’ out-degree (or degree if it is
undirected) and a single parameter - the hurdle coefficient - for classifying
the viral propagation strength of the message/product under consideration.
Thus, it does not depend on personal information of the users and hence can be
applied easily in practice.
Based on this process, we have proposed an algorithm that produces two
influence propagation indices for online social networks: the ILP (Individual
Launching Power) and the ITP (Individual Target Potential). The ILP provides a
clear division of the nodes into launchers and non-launchers, with very
little variation inside each group. Each of the launchers can cover most of the
graph through influence cascades, reaching more than 70% of the members in
most social networks used in our tests. The size of the launchers’ group
diminishes significantly with the hurdle coefficient increase, reflecting the
natural virality variation of the message or marketing campaign (message
virality weakens with the hurdle coefficient increase). It also depends on the
social network topology. When the message has low virality (with
$\alpha>3.0$), the launchers group’s size can fall below 1%, in line
with [Goel et al. 2016], which observed that fewer than 1% of
influence cascades are able to pass beyond the very next neighbors.
This partition of the nodes into launchers and non-launchers can be used as a
first filter for the selection of the best candidates for influence
propagation. Then, we can use other metrics in the literature to choose the
members with the best characteristics. In effect, if we choose influencers on
a social network without this filter, considering, for instance, just the
nodes’ degree information, we would possibly tend to select the members with
largest degree (individuals with more followers). However, those are also
typically the more expensive if used in a marketing campaign or to pass a
message. As we have observed in the computational tests here conducted, there
are other members in the launchers’ group that are able to reach a similar
performance as the mentioned very strong members, but having a significantly
lower degree (much fewer followers). These lower-degree influencers are
probably much less expensive, being able to perform almost as well as the
highest degree members on the given network. So, instead of searching for the
largest degree nodes, we recommend looking for the lower degree ones but
belonging to the launchers’ group, earlier found by the ILP index. These are
probably the “ordinary influencers” (individuals who exert average or even
less-than-average influence) observed in [Bakshy et al. 2011] when studying
influence propagation on Twitter. This process can be conducted using other
centralized or link topological metrics besides nodes’ degree, depending on
the kind of members we are looking for.
We have also observed that the ILP index information obtained from artificial
small-world networks (instances WS, generated according to [Watts and Strogatz
1998]) produces smaller launchers’ groups when the density of the graph
increases, especially when the hurdle coefficient is higher (lower virality
messages). Although higher density graphs have more connections, offering more
chances for message propagation, higher density also raises each node’s own
hurdle, making the nodes harder to activate and thus less collaborative in
message transmission. This may explain the unexpected behavior found on the
small-world graphs, possibly reinforced by the low variation of their nodes’
degrees. Such low variation is not typical of social networks, which
may cast doubt on the suitability of small-world artificial graphs to simulate
social networks.
The ITP index, instead, divides the nodes into targeting and non-targeting
members. According to our tests, most nodes in the graphs belong to the
targets group (more than 70% on most social networks). However, each of those
targeting nodes is reachable by only a small number of members (no more than 25% of
the nodes in most cases), a share that falls sharply as the hurdle coefficient
increases, that is, when the message becomes less viral. These targeting nodes
can represent compulsive consumers of information in online networks, which
may include fake news’ easy targets. As mentioned above, the targeting group
is very large, so, once again, we can use it as a first filter and then resort
to other available metrics to assist on the search for specific members
selection profiles.
From another perspective, the ILP and ITP indices could be used to restrict the
set of candidate seed nodes in most Influence Maximization problems based on
the Linear Threshold model. That restricted subset should focus on strong
influencers (launchers) with low chance of sharing common nodes in their
propagation cascades.
An additional aspect to stress involves the choice of adequate values for the
hurdle parameter $\alpha$. The present work introduces a brief discussion on
this matter, but further work is needed, especially in real-world
environments, for adequately tuning this parameter. In the meantime, we
recommend considering sensitivity analysis on $\alpha$ in any new real-world
instance.
In future works, the ILP and ITP indices can be discussed using the
Independent Cascade model [Kempe et al. 2003] instead of the Linear Threshold
model considered here. Also, along this line, instead of assuming that all
individuals are equally receptive to a message/product to be launched, we
could let this receptiveness be governed stochastically.
## Acknowledgements
Pedro Martins acknowledges support from the Portuguese National Funding:
Fundação para a Ciência e a Tecnologia - FCT, under the project
UIDB/04561/2020.
## References
* [Bakshy et al. 2011] Bakshy, E., Hofman, J. M., Mason, W.A., Watts, D.J., 2011. Everyone’s an influencer: quantifying influence on twitter. Proceedings of the fourth ACM international conference on Web search and data mining (pp. 65-74). doi: 10.1145/1935826.1935845
* [Barabási 2016, Chap. 3] Barabási, A.L., 2016. Network science. Cambridge university press, Chapter 3.
* [Bastian et al. 2009] Bastian, M., Heymann, S., and Jacomy M., 2009. Gephi: an open source software for exploring and manipulating networks. International AAAI Conference on Weblogs and Social Media.
* [Boccaletti et al. 2006] Boccaletti, S., Latora, V., Moreno, Y., Chavez, M., Hwang, D.U., 2006. Complex networks: Structure and dynamics. Physics reports 424(4-5), 175-308. doi: 10.1016/j.physrep.2005.10.009
* [Bonacich 2007] Bonacich, P., 2007. Some unique properties of eigenvector centrality. Social networks 29(4), 555-564. doi: 10.1016/j.socnet.2007.04.002
* [Brandes 2001] Brandes, U., 2001. A faster algorithm for betweenness centrality. Journal of Mathematical Sociology 25(2), 163-177. doi: 10.1080/0022250X.2001.9990249
* [Brin and Page 1998] Brin, S., and Page, L., 1998. The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems 30, 107-117.
* [Brin and Page 2012] Brin, S., and Page, L., 2012. Reprint of: The anatomy of a large-scale hypertextual web search engine. Computer networks 56(18), 3825-3833. doi: 10.1016/j.comnet.2012.10.007
* [Chen 2009] Chen, N., 2009. On the approximability of influence in social networks. SIAM Journal on Discrete Mathematics 23(3), 1400-1415. doi: 10.1137/08073617X
* [Chen et al. 2013] Chen, W., Lakshmanan, L. V., and Castillo, C., 2013. Information and influence propagation in social networks. Synthesis Lectures on Data Management 5(4), 1-177. doi: 10.2200/S00527ED1V01Y201308DTM037
* [Fazio et al. 2015] Fazio, L.K., Brashier, N.M., Payne, B.K., Marsh, E.J., 2015. Knowledge does not protect against illusory truth. Journal of Experimental Psychology: General 144(5), 993-1002. doi: 10.1037/xge0000098
* [Fire et al. 2013] Fire, M., Puzis, R., Elovici, Y., 2013. Link prediction in highly fractional data sets. In: Subrahmanian V. (eds) Handbook of computational approaches to counterterrorism, Springer, New York, NY (pp. 283-300). doi: 10.1007/978-1-4614-5311-6_14
* [Fischetti et al. 2018] Fischetti, M., Kahr, M., Leitner, M., Monaci, M., and Ruthmair, M., 2018. Least cost influence propagation in (social) networks. Mathematical Programming 170(1), 293-325. doi: 10.1007/s10107-018-1288-y
* [Foulds 2012] Foulds, L.R., 2012. Graph theory applications. Springer-Verlag, New York. doi: 10.1007/s10107-018-1288-y
* [Freeman 1977] Freeman, L.C., 1977. A set of measures of centrality based on betweenness. Sociometry 35-41. doi: 10.2307/3033543
* [Gleich 2015] Gleich, D.F., 2015. PageRank beyond the Web. Siam Review 57(3), 321-363. doi: 10.1137/140976649
* [Goel et al. 2016] Goel, S., Anderson, A., Hofman, J., and Watts, D.J., 2016. The structural virality of online diffusion. Management Science 62(1), 180-196. doi: 10.1287/mnsc.2015.2158
* [Günneç and Raghavan 2017] Gunnec, D., Raghavan, S., 2017. Integrating social network effects in the share-of-choice problem. Decision Sciences 48(6), 1098-1131. doi: 10.1111/deci.12246
* [Günneç et al. 2020] Günneç, D., Raghavan, S., and Zhang, R., 2020. Least-Cost Influence Maximization on Social Networks. INFORMS Journal on Computing 32(2), 289-302. doi: 10.1287/ijoc.2019.0886
* [Hage and Harary 1995] Hage, P., Harary, F., 1995. Eccentricity and centrality in networks. Social networks 17(1), 57-63. doi: 10.1016/0378-8733(94)00248-9
* [Huang et al. 2012] Huang, J., Cheng, X. Q., Shen, H. W., Zhou, T., Jin, X., 2012. Exploring social influence via posterior effect of word-of-mouth recommendations. In Proceedings of the fifth ACM international conference on Web search and data mining (pp. 573-582). doi: 10.1145/2124295.2124365
* [Kempe et al. 2003] Kempe, D., Kleinberg, J., and Tardos, É., 2003. Maximizing the spread of influence through a social network. In Proceedings of the ninth ACM SIGKDD international conference on knowledge discovery and data mining, 137-146. doi: 10.1145/956750.956769
* [Kiss and Bichler 2008] Kiss, C., Bichler, M., 2008. Identification of influencers-measuring influence in customer networks. Decision Support Systems 46(1), 233-253. doi: 10.1016/j.dss.2008.06.007
* [Kleinberg 1999] Kleinberg, J.M., 1999. Authoritative sources in a hyperlinked environment. Journal of the ACM (JACM) 46(5), 604-632. doi: 10.1145/324133.324140
  * [Kunegis 2013] Kunegis, J., 2013. KONECT - The Koblenz Network Collection. Proc. Int. Conf. on World Wide Web Companion, 1343-1350.
* [Kunegis 2017] Kunegis, J., 2017. Konect network dataset - KONECT. Accessed March 27, 2020, http://konect.uni-koblenz.de/networks/konect.
* [Leskovec et al. 2007] Leskovec, J., Kleinberg, J., Faloutsos, C., 2007. Graph evolution: Densification and shrinking diameters. ACM transactions on Knowledge Discovery from Data (TKDD) 1(1), Article 2. doi: 10.1145/1217299.1217301
* [Leskovec and Krevl 2014] Leskovec, J., Krevl, A., 2014. SNAP datasets: Stanford large network dataset collection. Accessed June 30, 2020, http://snap.stanford.edu/data.
* [Leskovec and McAuley 2012] Leskovec, J., McAuley, J.J., 2012. Learning to discover social circles in ego networks. In Proceedings of the Neural Information Processing Systems Conference 2012 (NIPS 2012), Advances in Neural Information Processing Systems 25, (pp. 539-547).
* [Lesser et al. 2013] Lesser, O., Tenenboim-Chekina, L., Rokach, L., Elovici, Y., 2013. Intruder or welcome friend: Inferring group membership in online social networks. In International Conference on Social Computing, Behavioral-Cultural Modeling, and Prediction (LNCS 7812), Springer, Berlin, Heidelberg (pp. 368-376). doi: 10.1007/978-3-642-37210-0_40
* [Liu et al. 2017] Liu, Q., Xiang, B., Yuan, N.J., Chen, E., Xiong, H., Zheng, Y., Yang, Y., 2017. An influence propagation view of pagerank. ACM Transactions on Knowledge Discovery from Data (TKDD) 11(3), 1-30. doi: 10.1145/3046941
* [Lü et al. 2016] Lü, L., Chen, D., Ren, X.L., Zhang, Q.M., Zhang, Y.C., Zhou, T., 2016. Vital nodes identification in complex networks. Physics Reports 650:1-63. doi: 10.1016/j.physrep.2016.06.007
* [Massa et al. 2009] Massa, P., Salvetti, M., Tomasoni, D., 2009. Bowling alone and trust decline in social network sites. In 2009 Eighth IEEE International Conference on Dependable, Autonomic and Secure Computing (pp. 658-663). doi: 10.1109/DASC.2009.130
* [Okamoto et al. 2008] Okamoto, K., Chen, W., Li, X.Y., 2008. Ranking of closeness centrality for large-scale social networks. In International workshop on frontiers in algorithmics (LNCS 5059). Springer, Berlin, Heidelberg (pp. 186-195). doi: 10.1007/978-3-540-69311-6_21
* [Panzarasa et al. 2009] Panzarasa, P., Opsahl, T., Carley, K.M., 2009. Patterns and dynamics of users’ behavior and interaction: Network analysis of an online community. Journal of the American Society for Information Science and Technology 60(5), 911-932. doi: 10.1002/asi.21015
* [Peng et al. 2018] Peng, S., Zhou, Y., Cao, L., Yu, S., Niu, J., and Jia, W., 2018. Influence analysis in social networks: A survey. Journal of Network and Computer Applications 106, 17-32. doi: 10.1016/j.jnca.2018.01.005
* [Raghavan and Zhang 2019] Raghavan, S., and Zhang, R., 2019. A branch-and-cut approach for the weighted target set selection problem on social networks. INFORMS Journal on Optimization 1(4), 304-322. doi: 10.1287/ijoo.2019.0012
* [Ripeanu et al. 2002] Ripeanu, M., Foster, I., Iamnitchi, A., 2002. Mapping the gnutella network: Properties of large-scale peer-to-peer systems and implications for system design. arXiv preprint cs/0209028.
  * [Watts and Strogatz 1998] Watts, D.J., Strogatz, S.H., 1998. Collective dynamics of “small-world” networks. Nature 393(6684), 440-442. doi: 10.1038/30918
* [Yin et al. 2017] Yin, H., Benson, A.R., Leskovec, J., Gleich, D.F., 2017. Local higher-order graph clustering. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 555-564). doi: 10.1145/3097983.3098069
* [Zachary 1977] Zachary, W.W., 1977. An information flow model for conflict and fission in small groups. Journal of anthropological research 33(4), 452-473. doi: 10.1086/jar.33.4.3629752
## Appendix
Table 8 shows the ILP and ITP indices’ values for the Zachary’s club graph. It
also includes the results of the metrics described in Subsection 3.2,
calculated using Gephi [Bastian et al. 2009].
$i$ | $ilp(i)$ ($\alpha$=1.5) | $ilp(i)$ ($\alpha$=3.0) | $itp(i)$ ($\alpha$=1.5) | $itp(i)$ ($\alpha$=3.0) | $g(i)$ | $ec(i)$ | $cc(i)$ | $bc(i)$ | $ev(i)$ | $pr(i)$ | $au(i)$ | $hu(i)$
---|---|---|---|---|---|---|---|---|---|---|---|---
1 | 1 | 0.61 | 0.12 | 0 | 16 | 3 | 0.57 | 231.07 | 0.96 | 0.1 | 0.36 | 0.36
2 | 1 | 0.09 | 0.12 | 0.03 | 9 | 3 | 0.49 | 28.48 | 0.7 | 0.05 | 0.27 | 0.27
3 | 1 | 0.06 | 0.12 | 0.06 | 10 | 3 | 0.56 | 75.85 | 0.84 | 0.06 | 0.32 | 0.32
4 | 0.06 | 0.03 | 0.15 | 0.03 | 6 | 3 | 0.46 | 6.29 | 0.56 | 0.04 | 0.21 | 0.21
5 | 0 | 0 | 0.15 | 0.03 | 3 | 4 | 0.38 | 0.33 | 0.21 | 0.02 | 0.08 | 0.08
6 | 0.06 | 0 | 0.18 | 0.03 | 4 | 4 | 0.38 | 15.83 | 0.23 | 0.03 | 0.08 | 0.08
7 | 0.06 | 0 | 0.18 | 0.03 | 4 | 4 | 0.38 | 15.83 | 0.23 | 0.03 | 0.08 | 0.08
8 | 0 | 0 | 0.18 | 0.03 | 4 | 4 | 0.44 | 0 | 0.45 | 0.02 | 0.17 | 0.17
9 | 0 | 0 | 0.15 | 0.09 | 5 | 3 | 0.52 | 29.53 | 0.61 | 0.03 | 0.23 | 0.23
10 | 0 | 0 | 0.15 | 0.09 | 2 | 4 | 0.43 | 0.45 | 0.27 | 0.01 | 0.1 | 0.1
11 | 0 | 0 | 0.15 | 0.03 | 3 | 4 | 0.38 | 0.33 | 0.21 | 0.02 | 0.08 | 0.08
12 | 0 | 0 | 0.15 | 0.03 | 1 | 4 | 0.37 | 0 | 0.14 | 0.01 | 0.05 | 0.05
13 | 0 | 0 | 0.18 | 0.06 | 2 | 4 | 0.37 | 0 | 0.22 | 0.01 | 0.08 | 0.08
14 | 0 | 0 | 0.15 | 0.06 | 5 | 3 | 0.52 | 24.22 | 0.6 | 0.03 | 0.23 | 0.23
15 | 0 | 0 | 0.15 | 0.06 | 2 | 5 | 0.37 | 0 | 0.27 | 0.01 | 0.1 | 0.1
16 | 0 | 0 | 0.15 | 0.06 | 2 | 5 | 0.37 | 0 | 0.27 | 0.01 | 0.1 | 0.1
17 | 0 | 0 | 0.21 | 0.03 | 2 | 5 | 0.28 | 0 | 0.07 | 0.02 | 0.02 | 0.02
18 | 0 | 0 | 0.15 | 0.06 | 2 | 4 | 0.38 | 0 | 0.25 | 0.01 | 0.09 | 0.09
19 | 0 | 0 | 0.15 | 0.06 | 2 | 5 | 0.37 | 0 | 0.27 | 0.01 | 0.1 | 0.1
20 | 0 | 0 | 0.15 | 0.09 | 3 | 3 | 0.5 | 17.15 | 0.4 | 0.02 | 0.15 | 0.15
21 | 0 | 0 | 0.15 | 0.06 | 2 | 5 | 0.37 | 0 | 0.27 | 0.01 | 0.1 | 0.1
22 | 0 | 0 | 0.15 | 0.06 | 2 | 4 | 0.38 | 0 | 0.25 | 0.01 | 0.09 | 0.09
23 | 0 | 0 | 0.15 | 0.06 | 2 | 5 | 0.37 | 0 | 0.27 | 0.01 | 0.1 | 0.1
24 | 0.03 | 0 | 0.15 | 0.06 | 5 | 5 | 0.39 | 9.3 | 0.41 | 0.03 | 0.15 | 0.15
25 | 0 | 0 | 0.18 | 0.03 | 3 | 4 | 0.38 | 1.17 | 0.16 | 0.02 | 0.06 | 0.06
26 | 0 | 0 | 0.21 | 0.03 | 3 | 4 | 0.38 | 2.03 | 0.17 | 0.02 | 0.06 | 0.06
27 | 0 | 0 | 0.18 | 0.03 | 2 | 5 | 0.36 | 0 | 0.2 | 0.02 | 0.08 | 0.08
28 | 0 | 0 | 0.15 | 0.03 | 4 | 4 | 0.46 | 11.79 | 0.36 | 0.03 | 0.13 | 0.13
29 | 0 | 0 | 0.18 | 0.09 | 3 | 4 | 0.45 | 0.95 | 0.35 | 0.02 | 0.13 | 0.13
30 | 0.03 | 0 | 0.15 | 0.06 | 4 | 5 | 0.38 | 1.54 | 0.36 | 0.03 | 0.13 | 0.13
31 | 0 | 0 | 0.15 | 0.09 | 4 | 4 | 0.46 | 7.61 | 0.46 | 0.02 | 0.17 | 0.17
32 | 0.09 | 0 | 0.15 | 0.06 | 6 | 3 | 0.54 | 73.01 | 0.52 | 0.04 | 0.19 | 0.19
33 | 1 | 0.27 | 0.12 | 0.03 | 12 | 4 | 0.52 | 76.69 | 0.83 | 0.07 | 0.31 | 0.31
34 | 1 | 0.61 | 0.12 | 0 | 17 | 4 | 0.55 | 160.55 | 1 | 0.1 | 0.37 | 0.37
Table 8: ILP and ITP indices and the results of the metrics described in
Subsection 3.2 for the Zachary’s club graph
# Homogenization of Schrödinger equations.
Extended Effective Mass Theorems
for non-crystalline matter
Vernny Ccajma1, Wladimir Neves1, Jean Silva2
###### Abstract
This paper concerns the homogenization of Schrödinger equations for non-
crystalline matter, that is to say the coefficients are given by the
composition of stationary functions with stochastic deformations. Two rigorous
results of so-called effective mass theorems in solid state physics are
obtained: a general abstract result (beyond the classical stationary ergodic
setting), and one for quasi-perfect materials (i.e. the disorder in the non-
crystalline matter is limited). The former relies on the double-scale limits
and the wave function spanned on the Bloch basis. Therefore, we have
extended the Bloch Theory, which was restricted until now to crystals (the
periodic setting). The second result relies on Perturbation Theory and a special
case of stochastic deformations, namely stochastic perturbation of the
identity.
1. Instituto de Matemática, Universidade Federal do Rio de Janeiro, C.P. 68530,
Cidade Universitária 21945-970, Rio de Janeiro, Brazil.
E-mail<EMAIL_ADDRESS><EMAIL_ADDRESS>
2. Departamento de Matemática, Universidade Federal de Minas Gerais. E-mail:
<EMAIL_ADDRESS>
Key words and phrases. Homogenization theory, stochastic Schrödinger
equations, initial value problem, extended effective mass theorems.
###### Contents
1. 1 Introduction
1. 1.1 Contextualization
2. 1.2 Summary of the main results
2. 2 Preliminaries and Background
1. 2.1 Anisotropic Schrödinger equations
2. 2.2 Stochastic configuration
1. 2.2.1 Ergodic theorems
2. 2.2.2 Analysis of stationary functions
3. 2.3 $\Phi_{\omega}-$Two-scale Convergence
4. 2.4 Perturbations of bounded operators
3. 3 Bloch Waves Analysis
1. 3.1 The WKB method
2. 3.2 Sobolev spaces on groups
1. 3.2.1 Groups and Dynamical systems
2. 3.2.2 Rellich–Kondrachov type Theorem
3. 3.2.3 On a class of Quasi-periodic functions
    3. 3.3 Auxiliary cellular equations
4. 4 On Schrödinger Equations Homogenization
1. 4.1 The Abstract Theorem.
    2. 4.2 Random Perturbations of the Quasi-Periodic Case
5. 5 Homogenization of Quasi-Perfect Materials
1. 5.1 Perturbed Periodic Case: Spectral Analysis
2. 5.2 Homogenization Analysis of the Perturbed Model
1. 5.2.1 Expansion of the effective coefficients
## Part I: Conceptual Framework
## 1 Introduction
In this work we study the homogenization (asymptotic limit as $\varepsilon\to
0$) of the anisotropic Schrödinger equation in the following Cauchy problem
$\left\{\begin{aligned}
&i\,\frac{\partial u_{\varepsilon}}{\partial t}-{\rm div}\big(A(\Phi^{-1}(\tfrac{x}{\varepsilon},\omega),\omega)\nabla u_{\varepsilon}\big)+\frac{1}{\varepsilon^{2}}\,V(\Phi^{-1}(\tfrac{x}{\varepsilon},\omega),\omega)\,u_{\varepsilon}\\
&\qquad+U(\Phi^{-1}(\tfrac{x}{\varepsilon},\omega),\omega)\,u_{\varepsilon}=0,\quad\text{in }\mathbb{R}^{n+1}_{T}\times\Omega,\\
&u_{\varepsilon}=u_{\varepsilon}^{0},\quad\text{in }\mathbb{R}^{n}\times\Omega,
\end{aligned}\right.$ (1.1)
where $\mathbb{R}^{n+1}_{T}:=(0,T)\times\mathbb{R}^{n}$, for any real number
$T>0$, $\Omega$ is a probability space, and the unknown function
$u_{\varepsilon}(t,x,\omega)$ is complex-valued.
The coefficients in (1.1), that is, the matrix-valued function $A$ and the
real-valued (potential) functions $V$, $U$, are random perturbations of stationary
functions accomplished by stochastic diffeomorphisms
$\Phi:\mathbb{R}^{n}\times\Omega\to\mathbb{R}^{n}$ (called stochastic
deformations). The stationarity property of random functions will be precisely
defined in Section 2.2, together with the definition of stochastic deformations,
which were introduced by X. Blanc, C. Le Bris, P.-L. Lions (see [9, 10]). In those
papers they consider the homogenization problem of an elliptic operator whose
coefficients are periodic or stationary functions perturbed by stochastic
deformations.
In particular, we assume that $A=(A_{k\ell})$, $V$ and $U$ are measurable and
bounded functions, i.e. for $k,\ell=1,\ldots,n$
$A_{k\ell},\;V,\;U\in L^{\infty}(\mathbb{R}^{n}\times\Omega).$ (1.2)
Moreover, the matrix $A$ is symmetric and uniformly positive definite, that is,
there exists $a_{0}>0$, such that, for a.a.
$(y,\omega)\in\mathbb{R}^{n}\times\Omega$, and each $\xi\in\mathbb{R}^{n}$
$\sum_{k,\ell=1}^{n}A_{k\ell}(y,\omega)\,\xi_{k}\,\xi_{\ell}\geqslant
a_{0}{|\xi|}^{2}.$ (1.3)
This paper is the second part of the Project initiated with T. Andrade, W.
Neves, J. Silva [6] (Homogenization of Liouville Equations beyond stationary
ergodic setting) concerning the study of moving electrons in non-crystalline
matter, which justifies the form considered for the coefficients in (1.1). We
recall that crystalline materials, also called perfect materials, are
described by periodic functions. Thus any homogenization result for
Schrödinger equations with periodic coefficients is restricted to crystalline
matter. Moreover, perfect materials are rare in Nature; there exist many more
non-crystalline than crystalline materials. For instance, there exists a huge
class called quasi-perfect materials (see Section 5, also [6]), which are
close to perfect ones. Indeed, the concept of stochastic deformations is
very suitable for describing interstitial defects in materials science (see
Cances, Le Bris [12], and Myers [28]).
One remarks that the homogenization of the Schrödinger equation in (1.1),
when the stochastic deformation $\Phi(y,\omega)$ is the identity mapping and
the coefficients are periodic, was studied by Allaire, Piatnitski [4].
Notably, that paper presents a discussion of the differences between the
scaling considered in (1.1) and the one called the semi-classical limit. We are
not going to rephrase this point here, and address the reader to Chapter 4 in
[8] for more general considerations. It should be mentioned that, to the best
of our knowledge, the present work is the first to study the homogenization of
the Schrödinger equations beyond the periodic setting, applying the
double-scale limits with the wave function spanned on the Bloch basis.
Therefore, we have extended the Bloch Theory, which was restricted until now
to periodic potentials.
Last but not least, one observes that the initial data $u_{\varepsilon}^{0}$
shall be considered well-prepared, see equation (4.85). This assumption is
fundamental for the abstract homogenization result established in Theorem 4.2,
where the limit function obtained from $u_{\varepsilon}$ satisfies a simpler
Schrödinger equation, called the effective mass equation, with effective
constant coefficients, namely matrix $A^{*}$, and potential $V^{*}$. This
homogenization procedure is well known in solid state physics as Effective
Mass Theorems, see Section 4.
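In schematic form, the effective mass equation mentioned above is a constant-coefficient Schrödinger equation for the limit profile extracted from $u_{\varepsilon}$ (this is only a sketch; the precise statement, including the well-prepared data assumption (4.85), is Theorem 4.2):

$i\,\frac{\partial v}{\partial t}-{\rm div}\big(A^{*}\nabla v\big)+V^{*}\,v=0,\quad\text{in }\mathbb{R}^{n+1}_{T},$

where the constant matrix $A^{*}$ plays the role of the (inverse) effective mass tensor of solid state physics and $V^{*}$ is the effective potential.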
Finally, we stress Section 5, which is related to the homogenization of the
Schrödinger equation for quasi-perfect materials, and it is also an important
part of this paper. Indeed, a very special case occurs in situations where the
amount of randomness is small, more specifically the disorder in the material
is limited. In particular, this section is interesting for numerical
applications, where specific computationally efficient techniques already
designed to deal with the homogenization of the Schrödinger equation in the
periodic setting, can be employed to treat the case of quasi-perfect
materials.
### 1.1 Contextualization
Let us briefly recall that the homogenization problem for (1.1) has been
treated for the periodic case ($A_{\rm per}(y)$, $V_{\rm per}(y)$, $U_{\rm
per}(y)$), and $\Phi(y,\omega)=y$, by several authors. Besides the paper by G.
Allaire, A. Piatnitski [4] already mentioned, we address the following papers
for the case of $A_{\rm per}=I_{n\times n}$, i.e. the isotropic Schrödinger
equation in (1.1): G. Allaire, M. Vanninathan [3], L. Barletti, N. Ben Abdallah
[7], V. Chabu, C. Fermanian-Kammerer, F. Marcià [14], and we observe that this
list is by no means exhaustive. In [3], the authors study a semiconductor
model excited by an external potential $U_{\rm per}(t,x)$, which depends on
the time $t$ and the macroscopic variable $x$. In [7] the authors treat the
homogenization problem when the external potential $U_{\rm per}(x,y)$
also depends on the macroscopic variable $x$. Finally, in [14] an external
potential $U_{\rm per}(t,x)$ was considered, which models the effects of
impurities on the otherwise perfect matter.
All the references cited above treat the homogenization problem for (1.1) by
studying the spectrum of the associated Bloch spectral cell equation, that is,
for each $\theta\in\mathbb{R}^{n}$, find an eigenvalue-eigenfunction pair
$(\lambda,\psi)$ satisfying
$\left\{\begin{aligned} L_{\rm per}(\theta)\big[\psi\big]&=\lambda\,\psi,\quad\text{in $[0,1)^{n}$},\\ \psi(y)&\not=0,\quad\text{periodic function},\end{aligned}\right.$ (1.4)
where $L_{\rm per}(\theta)$ is the Hamiltonian given by
$L_{\rm per}(\theta)\big[f\big]=-\big({\rm div}_{y}+2i\pi\theta\big)\big[A_{\rm per}(y)\big(\nabla_{\!y}+2i\pi\theta\big)f\big]+V_{\rm per}(y)f.$
The above eigenvalue problem is precisely stated (in the more general context
studied in this paper) in Section 3. Concerning mathematical solutions of
(1.4) in the periodic setting, we refer the reader to C. H. Wilcox [37] (see
in particular Section 2 therein: A discussion of related literature). Once
this eigenvalue problem is solved, the goal is to pass to the limit as
$\varepsilon\to 0$. One remarks that there is no uniform estimate in
$H^{1}(\mathbb{R}^{n})$ for the family of solutions $\{u_{\varepsilon}\}$
of (1.1), due to the scale $\varepsilon^{-2}$ multiplying the internal
potential $V_{\rm per}(y)$. To accomplish the desired asymptotic limit under
this lack of compactness, a convenient strategy is to use two-scale
convergence; see for instance the proof of Theorem 3.2 in [4].
Let us now focus on the stochastic setting proposed in this paper, more
precisely, when the coefficients of the Schrödinger equation in (1.1) are
compositions of stationary functions with stochastic deformations. Hence we
have the following natural questions:
$(Q.1)$ Is it possible to obtain a Bloch spectral cell equation analogous to
(1.4) in this stochastic setting?
$(Q.2)$ Can this new stochastic spectral problem be solved in such a way that
the eigenvalues do not depend on $\omega\in\Omega$ (see Remark 3.2)?
$(Q.3)$ Is it feasible to adapt two-scale convergence to this new stochastic
setting? We remark that the approaches to stochastic two-scale convergence
developed by Bourgeat, Mikelic, Wright [11], and also by Zhikov, Pyatnitskii
[38], do not fit the present context because of the presence of the stochastic
deformation $\Phi$.
Question $(Q.1)$ is answered in Section 3.1. Indeed, assuming that the
solution of equation (1.1) is given by a plane wave, the stochastic Bloch
spectral cell equation (3.56) is obtained by applying the WKB asymptotic
expansion method (developed by Wentzel, Kramers, and Brillouin; see G.
Allaire [2]). More specifically, the Hamiltonian in (3.56) is given by
$L^{\Phi}(\theta)\big[F\big]=-\big({\rm div}_{z}+2i\pi\theta\big)\left[A(\Phi^{-1}(z,\omega),\omega)\big(\nabla_{\!z}+2i\pi\theta\big)F\right]+V(\Phi^{-1}(z,\omega),\omega)F,$
for each $F(z,\omega)=f\left(\Phi^{-1}(z,\omega),\omega\right)$, where
$f(y,\omega)$ is a stationary function.
To answer $(Q.2)$, we have to study the spectrum of the operator
$L^{\Phi}(\theta)$, for each fixed $\theta\in\mathbb{R}^{n}$. The first idea
is to follow the techniques applied in the periodic setting, that is, for the
operator $L_{\rm per}(\theta)$ in (1.4), where the fundamental tool is the
compact embedding of $H^{1}_{\rm per}([0,1)^{n})$ in $L^{2}([0,1)^{n})$.
However, since $\omega\in\Omega$ cannot be treated as a fixed parameter, we
have to consider the more general theory of Sobolev spaces on locally compact
Abelian groups, which is developed in Section 3.2. In fact, we have
established in detail a Rellich-Kondrachov type theorem (see Theorem 3.22),
which, together with the study of continuous dynamical systems on compact
Abelian groups, enables us to answer this question positively, at least when
$\Omega$ has some structure. The second strategy applied here to answer
$(Q.2)$ is perturbation theory, that is to say, taking advantage of the
well-known spectrum of $L_{\rm per}(\theta)$. To this end, we first
consider that the coefficients of the Schrödinger equation in (1.1) are
compositions of the periodic functions $A_{\rm per}$, $V_{\rm per}$ and $U_{\rm
per}$ with a special case of stochastic deformations, namely a stochastic
perturbation of the identity (see Definition 5.1), which is given by
$\Phi_{\eta}(y,\omega):=y+\eta\,Z(y,\omega)+\mathrm{O}(\eta^{2}),$
where $Z$ is some stochastic deformation and $\eta\in(0,1)$. This concept was
introduced by X. Blanc, C. Le Bris, P.-L. Lions [10], and applied for the
first time to evolutionary equations by T. Andrade, W. Neves, J. Silva [6].
Then, for this special case $\Phi_{\eta}$, the operator
$L^{\Phi_{\eta}}(\theta)$ has the following expansion in a neighborhood of
$(0,\theta_{0})\in\mathbb{R}^{n+1}$:
$L^{\Phi_{\eta}}(\theta)=L_{\rm per}(\theta_{0})+\sum_{|\varrho|=1}^{3}\big((\eta,\theta)-(0,\theta_{0})\big)^{\varrho}L_{\varrho}+\mathrm{O}(\eta^{2}),$
where
$\varrho=(\varrho_{1},\ldots,\varrho_{n},\varrho_{n+1})\in\mathbb{N}^{n+1}$,
${|\varrho|}=\sum_{k=1}^{n+1}\varrho_{k}$, and $L_{\varrho}$ is a bounded
operator; see Section 5. From the above equation, it follows that the point
spectrum (i.e. the set of eigenvalues) of $L^{\Phi_{\eta}}(\theta)$ is not
empty in a neighborhood of $(0,\theta_{0})$, provided $\lambda_{\rm
per}(\theta_{0})$ is an isolated eigenvalue with finite multiplicity. This
last property is studied in detail in Section 2.4; see Theorem 2.26.
The question $(Q.3)$ is answered positively in Section 2.3, where we establish
a two-scale convergence in a stochastic setting that goes beyond the classical
stationary ergodic setting. Indeed, the main difference from earlier
stochastic extensions of the periodic setting is that the test functions used
are random perturbations of stationary functions obtained through stochastic
deformations. These compositions lie beyond the stationary class, so we lack
the stationarity property for this kind of test function (see the introduction
of [6] for a deeper discussion of this subject). We introduce a
compactification argument that preserves the ergodic nature of the setting
involved and allows us to overcome these difficulties.
### 1.2 Summary of the main results
In this section we summarize the main results of this paper. Since some of the
theorems cited below are of independent interest, we briefly describe the main
point of each one.
First, Theorem 2.18 allows us to overcome the lack of topological structure of
a given probability space by reducing it to a separable compact space whose
topological basis is dictated by the coefficients of problem (1.1).
Then, Theorem 2.21 uses all the topological features brought forth by
Theorem 2.18 in order to give a result about two-scale convergence where
the test functions are random perturbations of stationary functions obtained
through stochastic diffeomorphisms. It is worth mentioning that this
result generalizes the corresponding one for the deterministic case in [16]
and the corresponding one for the stochastic case in [11].
Theorem 2.26 considers a sequence of bounded operators in a Hilbert space,
which defines a symmetric operator via a power series in several complex
variables. It states that, if the first coefficient operator of this series
has isolated eigenvalues of finite multiplicity, then the holomorphically
defined operator inherits a similar point spectrum structure.
Theorem 3.15 establishes a necessary condition for the Rellich–Kondrachov
Theorem on compact Abelian groups to hold true. More precisely, the dual
group must be a countable set.
A complete characterization of the Rellich–Kondrachov Theorem on compact
Abelian groups is given by Theorem 3.22. Moreover, as a byproduct of this
characterization, we provide a proof of the Rellich–Kondrachov Theorem in
this precise context.
Theorem 4.2 is one of the main results of this paper. It is an abstract
homogenization result for Schrödinger equations that encompasses the
corresponding one given by Allaire and Piatnitski [4] in the periodic
context.
Theorem 5.9 shows how the periodic setting can be used to deal with the
homogenization of equation (1.1) for materials in which the amount of
randomness is small. This has important numerical implications.
Theorem 5.11 reveals an interesting splitting property of the solution of
the homogenized equation associated to (1.1) in the specific case of
quasi-perfect materials.
## 2 Preliminaries and Background
This section introduces the basic theory that will be used throughout the
paper. To begin, we fix some notation and collect some preliminary results.
Material which is well known or a direct extension of existing work is given
without proof; otherwise we present the proofs.
We denote by $\mathbb{G}$ the group $\mathbb{Z}^{n}$ (or $\mathbb{R}^{n}$),
with $n\in\mathbb{N}$. The set $[0,1)^{n}$ denotes the unit cube, also called
the unit cell, which will be used as the reference period for periodic
functions. The symbol $\left\lfloor x\right\rfloor$ denotes the unique element
of $\mathbb{Z}^{n}$ such that $x-\left\lfloor x\right\rfloor\in[0,1)^{n}$.
Let $H$ be a complex Hilbert space; we denote by $\mathcal{B}(H)$ the Banach
space of bounded linear operators from $H$ to $H$.
Let $U\subset\mathbb{R}^{n}$ be an open set, $p\geqslant 1$, and
$s\in\mathbb{R}$. We denote by $L^{p}(U)$ the set of (real or complex)
$p-$summable functions with respect to the Lebesgue measure (vector ones
should be understood componentwise). Given a Lebesgue measurable set
$E\subset\mathbb{R}^{n}$, $|E|$ denotes its $n-$dimensional Lebesgue measure.
Moreover, we will use the standard notations for the Sobolev spaces
$W^{s,p}(U)$ and $H^{s}(U)\equiv W^{s,2}(U)$.
### 2.1 Anisotropic Schrödinger equations
The aim of this section is to present the well-posedness of the Schrödinger
equation and some properties of its solutions. Most of the material can be
found in Cazenave, Haraux [13].
First, let us consider the following Cauchy problem, which is driven by a
linear anisotropic Schrödinger equation, that is
$\left\{\begin{aligned} &i\,\partial_{t}u(t,x)-{\rm div}\big(A(x)\nabla u(t,x)\big)+V(x)\,u(t,x)=0\quad\text{in $\mathbb{R}^{n+1}_{T}$},\\ &u(0,x)=u_{0}(x)\quad\text{in $\mathbb{R}^{n}$},\end{aligned}\right.$ (2.5)
where the unknown $u(t,x)$ is a complex-valued function, and $u_{0}$ is a given
initial datum. The coefficient $A(x)$ is a symmetric real $n\times n$
matrix-valued function, and the potential $V(x)$ is a real function. We always
assume that
$A(x),V(x)\quad\text{are measurable bounded functions}.$ (2.6)
One recalls that a matrix $A$ is called (uniformly) coercive when there
exists $a_{0}>0$ such that, for each $\xi\in\mathbb{R}^{n}$ and almost all
$x\in\mathbb{R}^{n}$, $A(x)\xi\cdot\xi\geqslant a_{0}|\xi|^{2}$.
The following definition tells us in which sense a complex function $u(t,x)$ is
a mild solution to (2.5).
###### Definition 2.1.
Let $A,V$ be coefficients satisfying (2.6). Given $u_{0}\in
H^{1}(\mathbb{R}^{n})$, a function
$u\in C([0,T];H^{1}(\mathbb{R}^{n}))\cap C^{1}((0,T);H^{-1}(\mathbb{R}^{n}))$
is called a mild solution to the Cauchy problem (2.5), when for each
$t\in(0,T)$, it follows that
$i\partial_{t}u(t)-{\rm div}\big{(}A\nabla u(t)\big{)}+Vu(t)=0\quad\text{in
$H^{-1}(\mathbb{R}^{n})$},$ (2.7)
and $u(0)=u_{0}$ in $H^{1}(\mathbb{R}^{n})$.
Then, we state the following
###### Proposition 2.2.
Let $A$ be a coercive matrix-valued function, $V$ a potential, and $u_{0}\in
H^{1}(\mathbb{R}^{n})$ a given initial datum. Assume that $A,V$ satisfy (2.6).
Then, there exists a unique mild solution of the Cauchy problem (2.5).
###### Proof.
The proof follows by applying Lemma 4.1.5 and Corollary 4.1.2 in [13]. ∎
###### Remark 2.3.
In the homogenization procedure of the Schrödinger equation it is very
important to consider the case where the coefficients $A$ and $V$ in (2.5) are
constants, the matrix $A$ is not necessarily coercive, and the initial data
$u_{0}\in L^{2}(\mathbb{R}^{n})$. Then, a function $u\in
L^{2}(\mathbb{R}^{n+1}_{T})$ is called a weak solution to (2.5), if it satisfies
$i\partial_{t}u-{\rm tr}(AD^{2}u)+Vu=0\quad\text{in the distribution sense}.$
Since $A,V$ are constant, we may apply the Fourier transform, and obtain the
existence of a unique solution $u\in H^{1}((0,T);L^{2}(\mathbb{R}^{n}))$.
Therefore, after being redefined on a set of measure zero, the solution $u\in
C([0,T];L^{2}(\mathbb{R}^{n}))$, and we have $u(0)=u_{0}$ in
$L^{2}(\mathbb{R}^{n})$.
Now, let us recall the standard a priori estimates for the solutions of the
Cauchy problem (2.5). First, under the conditions of Proposition 2.2, a
function $u\in C([0,T];H^{1}(\mathbb{R}^{n}))\cap
C^{1}((0,T);H^{-1}(\mathbb{R}^{n}))$, which is the mild solution of (2.5),
satisfies for each $t\in[0,T]$
$\displaystyle(i)\
\int_{\mathbb{R}^{n}}|u(t)|^{2}dx=\int_{\mathbb{R}^{n}}|u_{0}|^{2}dx,$ (2.8)
$\displaystyle(ii)\ \int_{\mathbb{R}^{n}}|\nabla u(t)|^{2}dx\leqslant C\
\big{(}\int_{\mathbb{R}^{n}}|\nabla
u_{0}|^{2}dx+\int_{\mathbb{R}^{n}}|u_{0}|^{2}dx\big{)},$
where $C=C(\|V\|_{L^{\infty}},\|A\|_{L^{\infty}},a_{0})$ is a positive
constant. Clearly, in the constant coefficients case, with $A$ non-coercive
and $u_{0}\in L^{2}(\mathbb{R}^{n})$, a function $u\in
C([0,T];L^{2}(\mathbb{R}^{n}))$, which is the weak solution of (2.5),
satisfies only item $(i)$ above. These estimates follow by a density argument.
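For the reader's convenience, here is a minimal sketch of how the mass conservation $(i)$ follows; the gradient estimate $(ii)$ is obtained similarly from the energy identity. Pairing equation (2.7) with $u(t)$ in the $H^{-1}$-$H^{1}$ duality gives

```latex
i\,\langle \partial_{t}u(t),u(t)\rangle
  +\int_{\mathbb{R}^{n}} A\nabla u(t)\cdot\overline{\nabla u(t)}\,dx
  +\int_{\mathbb{R}^{n}} V\,|u(t)|^{2}\,dx = 0 .
```

Since $A$ is real symmetric and $V$ is real, the last two terms are real; taking the imaginary part yields $\operatorname{Re}\langle\partial_{t}u(t),u(t)\rangle=\tfrac{1}{2}\tfrac{d}{dt}\|u(t)\|^{2}_{L^{2}}=0$, which is exactly $(i)$.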
### 2.2 Stochastic configuration
Here we present the stochastic context, which will be used throughout the
paper. To begin, let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space.
For each random variable $f$ in $L^{1}(\Omega;\mathbb{P})$ ($L^{1}(\Omega)$
for short), we denote its expectation value by
$\mathbb{E}[f]=\int_{\Omega}f(\omega)\ d\mathbb{P}(\omega).$
A mapping $\tau:\mathbb{G}\times\Omega\to\Omega$ is said to be an
$n$-dimensional dynamical system if:
1. (i)
(Group Property) $\tau(0,\cdot)=id_{\Omega}$ and
$\tau(x+y,\omega)=\tau(x,\tau(y,\omega))$ for all $x,y\in\mathbb{G}$ and
$\omega\in\Omega$.
2. (ii)
(Invariance) The mappings $\tau(x,\cdot):\Omega\to\Omega$ are
$\mathbb{P}$-measure preserving, that is, for each $x\in\mathbb{G}$ and every
$E\in\mathcal{F}$, we have
$\tau(x,E)\in\mathcal{F},\qquad\mathbb{P}(\tau(x,E))=\mathbb{P}(E).$
For simplicity, we shall write $\tau(k)\omega$ for $\tau(k,\omega)$.
Moreover, it is usual to say that $\tau(k)$ is a discrete (respectively,
continuous) dynamical system if $k\in\mathbb{Z}^{n}$ (respectively,
$k\in\mathbb{R}^{n}$), but we only stress this when it is not obvious from
the context.
A measurable function $f$ on $\Omega$ is called $\tau$-invariant, if for each
$k\in\mathbb{G}$
$f(\tau(k)\omega)=f(\omega)\quad\text{for almost all $\omega\in\Omega$}.$
Hence a measurable set $E\in\mathcal{F}$ is $\tau$-invariant if its
characteristic function $\chi_{E}$ is $\tau$-invariant. In fact, it is
straightforward to show that a $\tau$-invariant set $E$ can be equivalently
defined by
$\tau(k)E=E\quad\text{for each $k\in\mathbb{G}$}.$
Moreover, we say that the dynamical system $\tau$ is ergodic when every
$\tau$-invariant set $E$ has measure $\mathbb{P}(E)$ equal to either zero or
one. Equivalently, we may characterize an ergodic dynamical system in terms of
invariant functions. Indeed, a dynamical system is ergodic if each
$\tau$-invariant function is constant almost everywhere, that is to say
$\Big{(}f(\tau(k)\omega)=f(\omega)\quad\text{for each $k\in\mathbb{G}$ and
a.e. $\omega\in\Omega$}\Big{)}\Rightarrow\text{ $f(\cdot)=const.$ a.e.}.$
###### Example 2.4.
Let $\Omega=[0,1)^{n}$ be a sample space, $\mathcal{F}$ the appropriate
$\sigma$-algebra on $\Omega$, and $\mathbb{P}$ the probability measure, i.e.
the Lebesgue measure restricted to $\Omega$. Then, we consider the
$n$-dimensional dynamical system $\tau:\mathbb{R}^{n}\times\Omega\to\Omega$,
defined by
$\tau(x)\omega:=x+\omega-\left\lfloor x+\omega\right\rfloor.$
The group property for $\tau(x)$ follows from the properties of the greatest
integer function, and its invariance from the translation invariance of the
Lebesgue measure.
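For instance, the group property can be checked directly using the identity $\lfloor z+k\rfloor=\lfloor z\rfloor+k$, valid for all $z\in\mathbb{R}^{n}$ and $k\in\mathbb{Z}^{n}$:

```latex
\tau(x)\big(\tau(y)\omega\big)
  = x + \big(y+\omega-\lfloor y+\omega\rfloor\big)
      - \big\lfloor x+y+\omega-\lfloor y+\omega\rfloor\big\rfloor
  = x+y+\omega-\lfloor x+y+\omega\rfloor
  = \tau(x+y)\omega .
```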
###### Example 2.5.
Let $(\Omega_{0},\mathscr{F}_{0},\mathbb{P}_{0})$ be a probability space. For
$m\in\mathbb{N}$ fixed, we consider the set $S=\\{0,1,2,\ldots,m\\}$ and the
real numbers
$\text{$p_{0},p_{1},p_{2},\ldots,p_{m}$ in $(0,1)$, such that
$\sum_{\ell=0}^{m}p_{\ell}=1$}.$
If $\{X_{k}:\Omega_{0}\to S\}_{k\in\mathbb{Z}^{n}}$ is a family of random
variables, then it induces a probability measure on the measurable space
$\big(S^{\mathbb{Z}^{n}},\bigotimes_{k\in\mathbb{Z}^{n}}2^{S}\big)$.
Indeed, we may define the probability measure
$\mathbb{P}(E):=\mathbb{P}_{0}{\left\\{X\in
E\right\\}},\;\;E\in\bigotimes_{k\in\mathbb{Z}^{n}}2^{S},$
where the mapping $X:\Omega_{0}\to S^{\mathbb{Z}^{n}}$ is given by
$X(\omega_{0})=(X_{k}(\omega_{0}))_{k\in\mathbb{Z}^{n}}$.
Now, we denote for convenience $\Omega=S^{\mathbb{Z}^{n}}$ and
$\mathscr{F}=\bigotimes_{k\in\mathbb{Z}^{n}}2^{S}$, that is,
$\mathscr{F}=\sigma(\mathscr{A})$, where $\mathscr{A}$ is the algebra given by
the finite union of sets (cylinders of finite base) of the form
$\prod_{k\in\mathbb{Z}^{n}}E_{k},$ (2.9)
where $E_{k}\in 2^{S}$ is different from $S$ for a finite number of indices
$k$. Additionally we assume that, the family
$\\{X_{k}\\}_{k\in\mathbb{Z}^{n}}$ is independent, and for each
$k\in\mathbb{Z}^{n}$, we have
$\mathbb{P}_{0}{\\{X_{k}=0\\}}=p_{0},\,\,\mathbb{P}_{0}{\\{X_{k}=1\\}}=p_{1},\,\,\ldots,\,\,\mathbb{P}_{0}{\\{X_{k}=m\\}}=p_{m}.$
(2.10)
Then, we may define an ergodic dynamical system
$\tau:\mathbb{Z}^{n}\times\Omega\to\Omega$, by
${\left(\tau(\ell)\omega\right)}(k):=\omega(k+\ell),\quad\text{for any
$k,\ell\in\mathbb{Z}^{n}$},$
where $\omega=(\omega(k))_{k\in\mathbb{Z}^{n}}$.
$(i)$ The group property follows from the definition. Indeed, for each
$\omega\in\Omega$ and $\ell_{1},\ell_{2}\in\mathbb{Z}^{n}$, it follows that
${\big{(}\tau(\ell_{1}+\ell_{2})\omega\big{)}}(k)=\omega(k+\ell_{1}+\ell_{2})={\big{(}\tau(\ell_{1})\tau(\ell_{2})\omega\big{)}}(k),$
for any $k\in\mathbb{Z}^{n}$.
$(ii)$ The mappings $\tau(\ell,\cdot):\Omega\to\Omega$ are
$\mathbb{P}$-measure preserving. First, we observe from (2.9) that, for all
$\ell\in\mathbb{Z}^{n}$
$\tau(\ell)\big{(}\prod_{k\in\mathbb{Z}^{n}}E_{k}\big{)}=\prod_{k\in\mathbb{Z}^{n}}E_{k+\ell}.$
Therefore, for any $\ell\in\mathbb{Z}^{n}$
$\displaystyle\mathbb{P}{\Big{(}\tau(\ell){\big{(}\prod_{k\in\mathbb{Z}^{n}}E_{k}\big{)}}\Big{)}}$
$\displaystyle=\mathbb{P}{\big{(}\prod_{k\in\mathbb{Z}^{n}}E_{k+\ell}\big{)}}=\mathbb{P}_{0}{\big{(}\bigcap_{k\in\mathbb{Z}^{n}}\\{X_{k}\in
E_{k+\ell}\\}\big{)}}$
$\displaystyle=\prod_{k\in\mathbb{Z}^{n}}\mathbb{P}_{0}{\left\\{X_{k}\in
E_{k+\ell}\right\\}}$
$\displaystyle=\prod_{k\in\mathbb{Z}^{n}}\mathbb{P}_{0}{\left\\{X_{k+\ell}\in
E_{k+\ell}\right\\}}=\prod_{k\in\mathbb{Z}^{n}}\mathbb{P}_{0}{\left\\{X_{k}\in
E_{k}\right\\}},$
where we have used, in the second line, that the family of random variables is
independent and, in the third line, that it is identically distributed,
equation (2.10). Then, the measure-preserving property is satisfied for each
element of the algebra $\mathscr{A}$, and hence for each element of
$\mathscr{F}$.
$(iii)$ The ergodicity. Given the cylinders
${\prod_{k\in\mathbb{Z}^{n}}E_{k}}$ and ${\prod_{k\in\mathbb{Z}^{n}}F_{k}}$,
there exists $\ell_{0}\in\mathbb{Z}^{n}$, such that
$\mathbb{P}{\Big{(}\tau(\ell_{0}){\big{(}\prod_{k\in\mathbb{Z}^{n}}E_{k}\big{)}}\cap{\big{(}\prod_{k\in\mathbb{Z}^{n}}F_{k}\big{)}}\Big{)}}=\mathbb{P}{\big{(}\prod_{k\in\mathbb{Z}^{n}}E_{k}\big{)}}\,\mathbb{P}{\big{(}\prod_{k\in\mathbb{Z}^{n}}F_{k}\big{)}}.$
Indeed, let us define
$e_{0}:={\rm max}{\\{{|k|}\,;\,k\in\mathbb{Z}^{n},\,E_{k}\not=S\\}},\,\quad
f_{0}:={\rm max}{\\{{|k|}\,;\,k\in\mathbb{Z}^{n},\,F_{k}\not=S\\}},$
and observe that, if $\ell_{0}\in\mathbb{Z}^{n}$ satisfies
${{|\ell_{0}|}>e_{0}+f_{0}}$, then
$E_{k+\ell_{0}}\cap
F_{k}=\left\\{\begin{array}[]{ll}F_{k}&\text{if}\;{|k|}\leqslant
f_{0},\\\\[5.0pt] E_{k}&\text{if}\;f_{0}<{|k|}\leqslant
e_{0}+f_{0},\\\\[5.0pt] S&\text{if}\;{|k|}>e_{0}+f_{0}.\end{array}\right.$
Therefore, we have
$\displaystyle\mathbb{P}{\big{(}\tau(\ell_{0}){\big{(}\prod_{k\in\mathbb{Z}^{n}}E_{k}\big{)}}\cap{\big{(}\prod_{k\in\mathbb{Z}^{n}}F_{k}\big{)}}\big{)}}$
$\displaystyle=$
$\displaystyle\mathbb{P}{\big{(}{\big{(}\prod_{k\in\mathbb{Z}^{n}}E_{k+\ell_{0}}\big{)}}\cap{\big{(}\prod_{k\in\mathbb{Z}^{n}}F_{k}\big{)}}\big{)}}$
$\displaystyle=$
$\displaystyle\mathbb{P}{\big{(}\prod_{k\in\mathbb{Z}^{n}}{\big{(}E_{k+\ell_{0}}\cap
F_{k}\big{)}}\big{)}}$ $\displaystyle=$
$\displaystyle\prod_{k\in\mathbb{Z}^{n}}\mathbb{P}_{0}{\left\\{X_{k}\in
E_{k+\ell_{0}}\cap F_{k}\right\\}}$ $\displaystyle=$
$\displaystyle\mathbb{P}{\big{(}\prod_{k\in\mathbb{Z}^{n}}E_{k}\big{)}}\mathbb{P}{\big{(}\prod_{k\in\mathbb{Z}^{n}}F_{k}\big{)}}.$
The above property extends to finite unions of cylinders, that is to say,
given $E_{1},E_{2}\in\mathscr{A}$, there exists $\ell_{0}\in\mathbb{Z}^{n}$,
such that
$\mathbb{P}{\left(\tau(\ell_{0}){E}_{1}\cap{E}_{2}\right)}=\mathbb{P}({E}_{1})\,\mathbb{P}({E}_{2}).$
Now, let $E\in\mathscr{F}$ be a $\tau$-invariant set. For each
$\varepsilon>0$, there exists ${E}_{0}\in\mathscr{A}$ such that,
$\mathbb{P}{\left({E}\Delta\,{E}_{0}\right)}<\varepsilon$. Then, since $E$ is
$\tau$-invariant we have for each $\ell\in\mathbb{Z}^{n}$
$\displaystyle\mathbb{P}{\big{(}\tau(\ell){E}_{0}\,\Delta\,{E}_{0}\big{)}}$
$\displaystyle\leq\mathbb{P}{\big{(}\tau(\ell){E}_{0}\,\Delta\,\tau(\ell){E}\big{)}}+\mathbb{P}{\big{(}\tau(\ell){E}\,\Delta\,{E}\big{)}}+\mathbb{P}{\big{(}{E}\Delta\,{E}_{0}\big{)}}$
(2.11) $\displaystyle=2\,\mathbb{P}{\big{(}{E}\Delta\,{E}_{0}\big{)}}\leq
2\varepsilon.$
On the other hand, since ${E}_{0}\in\mathscr{A}$, for some
$\ell_{0}\in\mathbb{Z}^{n}$, it follows that
$\mathbb{P}{\left(\tau(\ell_{0}){E}_{0}\cap{E}_{0}^{c}\right)}=\mathbb{P}({E}_{0})\mathbb{P}({E}_{0}^{c})\quad\text{and}\quad\mathbb{P}{\left(\tau(\ell_{0}){E}_{0}^{c}\cap{E}_{0}\right)}=\mathbb{P}({E}_{0}^{c})\mathbb{P}({E}_{0}),$
and thus
$\displaystyle\mathbb{P}{\big{(}\tau(\ell_{0}){E}_{0}\,\Delta\,{E}_{0}\big{)}}$
$\displaystyle=\mathbb{P}{\left(\tau(\ell_{0}){E}_{0}\cap{E}_{0}^{c}\right)}+\mathbb{P}{\left(\tau(\ell_{0}){E}_{0}^{c}\cap{E}_{0}\right)}$
(2.12) $\displaystyle=2\mathbb{P}({E}_{0})(1-\mathbb{P}({E}_{0})).$
From (2.11) and (2.12), it follows for each $\varepsilon>0$
$\mathbb{P}({E}_{0})(1-\mathbb{P}({E}_{0}))<\varepsilon.$
Consequently, we obtain that $\mathbb{P}({E})=0$ or $\mathbb{P}({E})=1$.
Now, let $(\Gamma,\mathcal{G},\mathbb{Q})$ be a given probability space. We
say that a measurable function $g:\mathbb{R}^{n}\times\Gamma\to\mathbb{R}$ is
stationary, if for any finite set consisting of points
$x_{1},\ldots,x_{j}\in\mathbb{R}^{n}$, and any $k\in\mathbb{G}$, the
distribution of the random vector
$\Big{(}g(x_{1}+k,\cdot),\cdots,g(x_{j}+k,\cdot)\Big{)}$
is independent of $k$. Further, subjecting the stationary function $g$ to some
natural conditions, it can be shown that there exist another probability space
$(\Omega,\mathcal{F},\mathbb{P})$, an $n$-dimensional dynamical system
$\tau:\mathbb{G}\times\Omega\to\Omega$ and a measurable function
$f:\mathbb{R}^{n}\times\Omega\to\mathbb{R}$ satisfying
* •
For all $x\in\mathbb{R}^{n}$, $k\in\mathbb{G}$ and $\mathbb{P}-$almost every
$\omega\in\Omega$
$f(x+k,\omega)=f(x,\tau(k)\omega).$ (2.13)
* •
For each $x\in\mathbb{R}^{n}$ the random variables $g(x,\cdot)$ and
$f(x,\cdot)$ have the same law. We recall that equality almost surely
implies equality in law, but the converse is not true.
One remarks that the set of stationary functions forms an algebra, which is
also stable under limit processes. For instance, the product of two stationary
functions is stationary, and the derivative of a stationary function is
stationary. Moreover, the stationarity concept is the most general extension
of the notions of periodicity and almost periodicity for a function to have
some “self-averaging” behaviour.
###### Example 2.6.
Under the conditions of Example 2.4, let $F:\Omega\to\mathbb{C}$ be a
measurable function. Then, the function
$f:\mathbb{R}^{n}\times\Omega\to\mathbb{C}$, defined by
$f(x,\omega):=F(\tau(x)\omega)$
is a stationary function. In fact, for continuous dynamical systems, any
stationary function can be written in this way. Therefore, even if
$f(\cdot,\omega)$ is just a measurable function, it makes sense to write, for
instance, $f(0,\cdot)$ due to the stationarity property.
###### Example 2.7.
Under the conditions of Example 2.5, we take $m=1$, and consider the following
functions: $\varphi_{0}=0$, and $\varphi_{1}$ a Lipschitz vector field which
is periodic, with ${\rm supp}\,\varphi_{1}\subset(0,1)^{n}$. Consequently,
the function
$f(y,\omega):=\varphi_{\omega({\lfloor
y\rfloor})}(y),\;\;(y,\omega)\in\mathbb{R}^{n}\times\Omega$
satisfies: ${f(y,\cdot)}$ is ${\mathscr{F}}$-measurable, ${f(\cdot,\omega)}$
is continuous, and for each $k\in\mathbb{Z}^{n}$,
$f(y+k,\omega)=f(y,\tau(k)\omega).$
Therefore, ${f}$ is a stationary function.
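Indeed, the last identity can be checked directly. Since $\lfloor y+k\rfloor=\lfloor y\rfloor+k$ and $\varphi_{0},\varphi_{1}$ are periodic,

```latex
f(y+k,\omega)
  = \varphi_{\omega(\lfloor y\rfloor+k)}(y+k)
  = \varphi_{(\tau(k)\omega)(\lfloor y\rfloor)}(y)
  = f(y,\tau(k)\omega),
```

where we used $(\tau(k)\omega)(\lfloor y\rfloor)=\omega(\lfloor y\rfloor+k)$ and the periodicity $\varphi_{\ell}(y+k)=\varphi_{\ell}(y)$ for $\ell\in\{0,1\}$.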
Now, we present the precise definition of the stochastic deformation as
presented in [6].
###### Definition 2.8.
A mapping $\Phi:\mathbb{R}^{n}\times\Omega\to\mathbb{R}^{n},(y,\omega)\mapsto
z=\Phi(y,\omega)$, is called a stochastic deformation (for short
$\Phi_{\omega}$), when it satisfies:
* i)
For $\mathbb{P}-$almost every $\omega\in\Omega$, $\Phi(\cdot,\omega)$ is a
bi–Lipschitz diffeomorphism.
* ii)
There exists $\nu>0$, such that
$\underset{\omega\in\Omega,\,y\in\mathbb{R}^{n}}{\rm ess\,inf}\big{(}{\rm
det}\big{(}\nabla\Phi(y,\omega)\big{)}\big{)}\geq\nu.$
* iii)
There exists $M>0$, such that
$\underset{\omega\in\Omega,\,y\in\mathbb{R}^{n}}{\rm
ess\,sup}\big{(}|\nabla\Phi(y,\omega)|\big{)}\leq M<\infty.$
* iv)
The gradient of $\Phi$, i.e. $\nabla\Phi(y,\omega)$, is stationary in the
sense (2.13).
Here, we first recall from [6] a general example of stochastic deformations
$\Phi:\mathbb{R}^{n}\times\Omega\to\mathbb{R}^{n}$ associated to a dynamical
system $T:\mathbb{R}^{n}\times\Omega\to\Omega$, where the sample space
$\Omega$ is arbitrary. Then, following the idea of Example 2.7, we present an
example of stochastic deformation
$\Phi:\mathbb{R}^{n}\times\Omega\to\mathbb{R}^{n}$ associated to a dynamical
system $T:\mathbb{Z}^{n}\times\Omega\to\Omega$, where $\Omega$ is prescribed.
Let $(\Omega_{i},\mathcal{F}_{i},\mathbb{P}_{i})_{i=1}^{n}$ be probability
spaces, and let $f_{i}:\Omega_{i}\to\mathbb{R}$ be measurable functions, such
that $0<c_{0}\leq f_{i}(\omega)\leq c_{1}$ for a.e. $\omega\in\Omega_{i}$,
$i=1,\ldots,n$. Let $T_{i}:\mathbb{R}\times\Omega_{i}\to\Omega_{i}$ be a
$1$-dimensional dynamical system such that the function
$f_{i}\left(T_{i}(\cdot)\omega\right)$ is continuous. We then define
$\Phi_{i}(\lambda,\omega):=\text{sgn}(\lambda)\int_{\min{\{\lambda,0\}}}^{\max{\{\lambda,0\}}}f_{i}\left(T_{i}(s)\omega\right)\,ds.$
The map $\Phi_{i}:\mathbb{R}\times\Omega_{i}\to\mathbb{R}$ is not a stationary
function, yet it verifies all conditions of Definition 2.8. Finally, define
the probability space $(\Omega,\mathcal{F},\mathbb{P})$, where
$\Omega:=\otimes_{i=1}^{n}\Omega_{i}$,
$\mathcal{F}:=\otimes_{i=1}^{n}\mathcal{F}_{i}$ and
$\mathbb{P}:=\otimes_{i=1}^{n}\mathbb{P}_{i}$ and the following
$n-$dimensional dynamical system $T:\mathbb{R}^{n}\times\Omega\to\Omega$
$T(x,\omega):=\big{(}T_{1}(x_{1},\omega_{1}),\ldots,T_{n}(x_{n},\omega_{n})\big{)},$
where we denote $x=(x_{1},\cdots,x_{n})$ and
$\omega=(\omega_{1},\ldots,\omega_{n})$. Thus, the function
$\Phi(x,\omega)=(\Phi_{1}(x_{1},\omega_{1}),\ldots,\Phi_{n}(x_{n},\omega_{n}))$
fulfills the conditions of Definition 2.8. Moreover, for any orthogonal
$n\times n$ matrix $Q$, with $\det Q=1$, the mapping $Q\Phi$ is also a
stochastic deformation, whose gradient is not necessarily diagonal.
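As a quick check of Definition 2.8 for this construction (a sketch): by the fundamental theorem of calculus, $\partial_{\lambda}\Phi_{i}(\lambda,\omega)=f_{i}(T_{i}(\lambda)\omega)\in[c_{0},c_{1}]$, so each $\Phi_{i}(\cdot,\omega)$ is strictly increasing and bi-Lipschitz, and

```latex
\nabla\Phi(x,\omega)
  = {\rm diag}\big(f_{1}(T_{1}(x_{1})\omega_{1}),\ldots,f_{n}(T_{n}(x_{n})\omega_{n})\big),
\qquad
{\rm det}\big(\nabla\Phi(x,\omega)\big)=\prod_{i=1}^{n} f_{i}(T_{i}(x_{i})\omega_{i})\geq c_{0}^{n},
```

with $|\nabla\Phi|\leq\sqrt{n}\,c_{1}$; the stationarity of $\nabla\Phi$ follows from that of each $f_{i}(T_{i}(\cdot)\omega_{i})$, so conditions i)-iv) hold with $\nu=c_{0}^{n}$ and $M=\sqrt{n}\,c_{1}$.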
Now, let us present a second example of stochastic deformations.
###### Example 2.9.
Under the conditions of Example 2.7, let us consider (for $\eta>0$) the
following map
$\Phi(y,\omega):=y+\eta\,\varphi_{\omega({\lfloor
y\rfloor})}(y),\;\;(y,\omega)\in\mathbb{R}^{n}\times\Omega.$
Then, $\nabla_{\!y}\Phi(y,\omega)=I_{\mathbb{R}^{n\times
n}}+\eta\,\nabla\varphi_{\omega({\lfloor y\rfloor})}(y)$, and for $\eta$
sufficiently small all the conditions in Definition 2.8 are satisfied.
Hence, ${\Phi}$ is a stochastic deformation.
Given a stochastic deformation $\Phi$, let us consider the following spaces
$\mathcal{L}_{\Phi}:=\big\{F(z,\omega)=f(\Phi^{-1}(z,\omega),\omega);\;f\in L^{2}_{\rm loc}(\mathbb{R}^{n};L^{2}(\Omega))\;\;\text{stationary}\big\}$
(2.14)
and
$\mathcal{H}_{\Phi}:=\big\{F(z,\omega)=f(\Phi^{-1}(z,\omega),\omega);\;f\in H^{1}_{\rm loc}(\mathbb{R}^{n};L^{2}(\Omega))\;\;\text{stationary}\big\}$
(2.15)
which are Hilbert spaces, endowed respectively with the inner products
$\displaystyle{\langle F,G\rangle}_{\mathcal{L}_{\Phi}}$
$\displaystyle:=\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}\\!\\!F(z,\omega)\,\overline{G(z,\omega)}\,dz\,d\mathbb{P}(\omega),$
$\displaystyle{\langle F,G\rangle}_{\mathcal{H}_{\Phi}}$
$\displaystyle:=\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}\\!\\!F(z,\omega)\,\overline{G(z,\omega)}\,dz\,d\mathbb{P}(\omega)$
$\displaystyle\;\quad+\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}\\!\\!\nabla_{\\!\\!z}F(z,\omega)\cdot\overline{\nabla_{\\!\\!z}G(z,\omega)}\,dz\,d\mathbb{P}(\omega).$
###### Remark 2.10.
Under the above notations, when $\Phi=Id$ we denote $\mathcal{L}_{\Phi}$ and
$\mathcal{H}_{\Phi}$ by $\mathcal{L}$ and $\mathcal{H}$ respectively.
Moreover, a function $F\in{\mathcal{H}}_{\Phi}$ if, and only if,
$F\circ\Phi\in{\mathcal{H}}$, and there exist constants $C_{1},C_{2}>0$, such
that
$C_{1}\|F\circ\Phi\|_{{\mathcal{H}}}\leq\|F\|_{{\mathcal{H}}_{\Phi}}\leq
C_{2}\|F\circ\Phi\|_{{\mathcal{H}}}.$
Analogously, $F\in{\mathcal{L}}_{\Phi}$ if, and only if,
$F\circ\Phi\in{\mathcal{L}}$, and there exist constants $C_{1},C_{2}>0$, such
that
$C_{1}\|F\circ\Phi\|_{{\mathcal{L}}}\leq\|F\|_{{\mathcal{L}}_{\Phi}}\leq
C_{2}\|F\circ\Phi\|_{{\mathcal{L}}}.$
Indeed, let us show the former equivalence. Applying a change of variables, we
obtain
$\displaystyle\|F\|^{2}_{{\mathcal{H}}_{\Phi}}$
$\displaystyle=\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}\!\!|F(z,\omega)|^{2}\,dz\,d\mathbb{P}(\omega)+\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}\!\!|\nabla_{\!z}F(z,\omega)|^{2}\,dz\,d\mathbb{P}(\omega)$
$\displaystyle=\int_{\Omega}\int_{[0,1)^{n}}\!\!|f(y,\omega)|^{2}\det[\nabla\Phi(y,\omega)]\,dy\,d\mathbb{P}(\omega)$
$\displaystyle\quad+\int_{\Omega}\int_{[0,1)^{n}}\!\!|[\nabla\Phi(y,\omega)]^{-T}\nabla_{\!y}f(y,\omega)|^{2}\det[\nabla\Phi(y,\omega)]\,dy\,d\mathbb{P}(\omega).$
The equivalence follows from the properties of the stochastic deformation
$\Phi$.
#### 2.2.1 Ergodic theorems
We begin this section with the concept of mean value, which is connected with
the notion of stationarity. A function $f\in
L^{1}_{\text{loc}}(\mathbb{R}^{n})$ is said to possess a mean value if the
family $\{f(\cdot/\varepsilon)\}_{\varepsilon>0}$ converges, in the
duality with compactly supported $L^{\infty}$ functions, to a constant
$M(f)$. This convergence is equivalent to
$\lim_{t\to\infty}\frac{1}{t^{n}|A|}\int_{A_{t}}f(x)\,dx=M(f),$ (2.16)
where $A_{t}:=\\{x\in\mathbb{R}^{n}\,:\,t^{-1}x\in A\\}$, for $t>0$ and any
$A\subset\mathbb{R}^{n}$, with $|A|\neq 0$.
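A standard illustration (classical, not specific to this paper): every continuous $\mathbb{Z}^{n}$-periodic function possesses a mean value, namely its average over the unit cell.

```latex
% For f continuous and Z^n-periodic and any compactly supported
% varphi in L^infty:
\lim_{\varepsilon\to 0}\int_{\mathbb{R}^{n}}
 f\!\left(\frac{x}{\varepsilon}\right)\varphi(x)\,dx
=\left(\int_{[0,1)^{n}}f(y)\,dy\right)\int_{\mathbb{R}^{n}}\varphi(x)\,dx,
\qquad\text{so that}\qquad
M(f)=\int_{[0,1)^{n}}f(y)\,dy.
```

For instance, in dimension $n=1$, $f(y)=c+\sin(2\pi y)$ has $M(f)=c$.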
###### Remark 2.11.
Unless otherwise stated, we assume that the dynamical system
$\tau:\mathbb{G}\times\Omega\to\Omega$ is ergodic, and we will also use the
notation
$\fint_{\mathbb{R}^{n}}f(x)\,dx\quad\text{for $M(f)$}.$
Now, we state the result due to Birkhoff, which connects all the notions
considered before, see [27].
###### Theorem 2.12 (Birkhoff Ergodic Theorem).
Let $f\in L^{1}_{\text{loc}}(\mathbb{R}^{n};L^{1}(\Omega))$ (similarly, $f\in
L^{\infty}(\mathbb{R}^{n};L^{1}(\Omega))$) be a stationary random variable.
Then, for almost every $\widetilde{\omega}\in\Omega$ the function
$f(\cdot,\widetilde{\omega})$ possesses a mean value in the sense of (2.16).
Moreover, the mean value $M\left(f(\cdot,\widetilde{\omega})\right)$ as a
function of $\widetilde{\omega}\in\Omega$ satisfies for almost every
$\widetilde{\omega}\in\Omega$:
i) Discrete case (i.e. $\tau:\mathbb{Z}^{n}\times\Omega\to\Omega$);
$\fint_{\mathbb{R}^{n}}f(x,\widetilde{\omega})\,dx=\mathbb{E}\left[\int_{[0,1)^{n}}f(y,\cdot)\,dy\right].$
ii) Continuous case (i.e. $\tau:\mathbb{R}^{n}\times\Omega\to\Omega$);
$\fint_{\mathbb{R}^{n}}f(x,\widetilde{\omega})\,dx=\mathbb{E}\left[f(0,\cdot)\right].$
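A classical example of the discrete case (the random checkerboard, included here only for illustration): take $\Omega=\{0,1\}^{\mathbb{Z}^{n}}$ with an i.i.d. Bernoulli($p$) product measure, the shift $(\tau(k)\omega)_{j}:=\omega_{j+k}$, and $f(y,\omega):=\omega_{\lfloor y\rfloor}$. Then $f$ is stationary in the discrete sense, $\tau$ is ergodic, and Theorem 2.12 gives, for a.e. $\widetilde{\omega}\in\Omega$,

```latex
% Mean value of a realization of the random checkerboard:
M\big(f(\cdot,\widetilde{\omega})\big)
=\mathbb{E}\!\left[\int_{[0,1)^{n}}f(y,\cdot)\,dy\right]
=\mathbb{E}\left[\omega_{0}\right]=p .
```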
The following lemma shows that the Birkhoff Ergodic Theorem still holds when a
stationary function is composed with a stochastic deformation.
###### Lemma 2.13.
Let $\Phi$ be a stochastic deformation and $f\in
L^{\infty}_{\text{loc}}(\mathbb{R}^{n};L^{1}(\Omega))$ be a stationary random
variable in the sense of (2.13). Then, for almost every $\widetilde{\omega}\in\Omega$,
the function
$f\left(\Phi^{-1}(\cdot,\widetilde{\omega}),\widetilde{\omega}\right)$
possesses a mean value in the sense of (2.16) and satisfies:
i) Discrete case;
$\fint_{\mathbb{R}^{n}}f\left(\Phi^{-1}(z,\widetilde{\omega}),\widetilde{\omega}\right)\,dz=\frac{\mathbb{E}\left[\int_{\Phi([0,1)^{n},\cdot)}f{\left(\Phi^{-1}\left(z,\cdot\right),\cdot\right)}\,dz\right]}{\det\left(\mathbb{E}\left[\int_{[0,1)^{n}}\nabla_{\!y}\Phi(y,\cdot)\,dy\right]\right)}\quad\text{for a.a. }\widetilde{\omega}\in\Omega.$
ii) Continuous case;
$\fint_{\mathbb{R}^{n}}f\left(\Phi^{-1}(z,\widetilde{\omega}),\widetilde{\omega}\right)\,dz=\frac{\mathbb{E}\left[f(0,\cdot)\det\left(\nabla\Phi(0,\cdot)\right)\right]}{\det\left(\mathbb{E}\left[\nabla\Phi(0,\cdot)\right]\right)}\quad\text{for a.a. }\widetilde{\omega}\in\Omega.$
###### Proof.
See Blanc, Le Bris, Lions [9], also Andrade, Neves, Silva [6]. ∎
#### 2.2.2 Analysis of stationary functions
In the rest of this paper, unless otherwise explicitly stated, we assume
discrete dynamical systems and therefore, stationary functions are considered
in this discrete sense.
We begin the analysis of stationary functions with the concept of realization.
###### Definition 2.14.
Let $f:\mathbb{R}^{n}\\!\times\\!\Omega\to\mathbb{R}$ be a stationary
function. For $\omega\in\Omega$ fixed, the function $f(\cdot,\omega)$ is
called a realization of $f$.
Due to Theorem 2.12, almost every realization $f(\cdot,\omega)$ possesses a
mean value in the sense of (2.16). On the other hand, if $f$ is a stationary
function, then the mapping
$y\in\mathbb{R}^{n}\mapsto\int_{\Omega}f(y,\omega)\,d\mathbb{P}(\omega)$
is a periodic function.
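Indeed, the periodicity follows in one line from the stationarity relation $f(y+k,\omega)=f(y,\tau(k)\omega)$ together with the invariance of $\mathbb{P}$ under $\tau(k)$: for every $k\in\mathbb{Z}^{n}$,

```latex
\int_{\Omega}f(y+k,\omega)\,d\mathbb{P}(\omega)
=\int_{\Omega}f(y,\tau(k)\omega)\,d\mathbb{P}(\omega)
=\int_{\Omega}f(y,\omega)\,d\mathbb{P}(\omega).
```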
In fact, it is enough to consider the realizations to study some properties of
stationary functions. For instance, the following theorem will be used more
than once through this paper.
###### Theorem 2.15.
For $p>1$, let $u,v\in L^{1}_{\rm loc}(\mathbb{R}^{n};L^{p}(\Omega))$ be
stationary functions. Then, for any $i\in\{1,\ldots,n\}$ fixed, the
following statements are equivalent:
$(A)\quad\int_{[0,1)^{n}}\int_{\Omega}u(y,\omega)\frac{\partial{\zeta}}{\partial
y_{i}}(y,\omega)\,d\mathbb{P}(\omega)\,dy=-\int_{[0,1)^{n}}\int_{\Omega}v(y,\omega)\,{\zeta}(y,\omega)\,d\mathbb{P}(\omega)\,dy,\hskip
20.0pt$ (2.17)
for each stationary function $\zeta\in C^{1}(\mathbb{R}^{n};L^{q}(\Omega))$,
with $1/p+1/q=1$.
$(B)\quad\int_{\mathbb{R}^{n}}u(y,\omega)\frac{\partial{\varphi}}{\partial
y_{i}}(y)\,dy=-\int_{\mathbb{R}^{n}}v(y,\omega)\,{\varphi}(y)\,dy,\hskip
87.0pt$ (2.18)
for any $\varphi\in C^{1}_{\rm c}(\mathbb{R}^{n})$, and almost sure
$\omega\in\Omega$.
###### Proof.
1\. First, let us show that $(A)$ implies $(B)$. To begin, given
$\gamma\in\mathbb{R}^{n}$, there exists a $\mathscr{F}$-measurable set
$N_{\gamma}$ such that, $\mathbb{P}(N_{\gamma})=0$ and
$\int_{\mathbb{R}^{n}}u(y,\omega)\,\frac{\partial{\varphi}}{\partial
y_{i}}(y)\,dy=-\int_{\mathbb{R}^{n}}v(y,\omega)\,{\varphi}(y)\,dy,$
for each $\varphi\in C^{1}_{\rm c}((0,1)^{n}+\gamma)$ and
$\omega\in\Omega\setminus N_{\gamma}$. Indeed, for $\varphi\in C^{1}_{\rm
c}((0,1)^{n}+\gamma)$ and ${\rho\in L^{q}(\Omega)}$, let us define
$\zeta_{\gamma}:\mathbb{R}^{n}\\!\times\\!\Omega\to\mathbb{R}$, by
$\zeta_{\gamma}(y,\omega):=\varphi(y-\left\lfloor
y-\gamma\right\rfloor)\rho(\tau(\left\lfloor y-\gamma\right\rfloor)\omega),$
where $\tau:\mathbb{Z}^{n}\times\Omega\to\Omega$ is a (discrete) dynamical
system. Then, $\zeta_{\gamma}(\cdot,\omega)$ is a continuous function,
$\zeta_{\gamma}(y,\cdot)$ is a $\mathscr{F}$-measurable function, and for each
$k\in\mathbb{Z}^{n}$, it follows that
$\zeta_{\gamma}(y+k,\omega)=\zeta_{\gamma}(y,\tau(k)\omega).$
Consequently, $\zeta_{\gamma}\in C^{1}(\mathbb{R}^{n};L^{q}(\Omega))$ is a
stationary Carathéodory function. Moreover, since $\left\lfloor
y-\gamma\right\rfloor=0$ for each $y\in(0,1)^{n}+\gamma$, we have
$\zeta_{\gamma}(y,\omega)=\varphi(y)\rho(\omega),$ (2.19)
for each $(y,\omega)\in((0,1)^{n}+\gamma)\times\Omega$. Therefore, taking
$\zeta_{\gamma}$ as a test function in (2.17), we obtain
$\int_{[0,1)^{n}}\int_{\Omega}u(y,\omega)\,\frac{\partial{\zeta_{\gamma}}}{\partial
y_{i}}(y,\omega)\,d\mathbb{P}(\omega)\,dy=-\int_{[0,1)^{n}}\int_{\Omega}v(y,\omega)\,{\zeta_{\gamma}}(y,\omega)\,d\mathbb{P}(\omega)\,dy.$
(2.20)
Since the space of stationary functions forms an algebra, the functions
$y\mapsto\int_{\Omega}u(y,\omega)\,\frac{\partial{\zeta_{\gamma}}}{\partial
y_{i}}(y,\omega)\,d\mathbb{P}(\omega)\quad\text{and}\quad
y\mapsto\int_{\Omega}v(y,\omega)\,{\zeta_{\gamma}}(y,\omega)\,d\mathbb{P}(\omega)$
are periodic, and hence translation invariant. Then, we have from (2.20)
$\int_{(0,1)^{n}+\gamma}\int_{\Omega}u\,\frac{\partial{\varphi}}{\partial
y_{i}}(y)\,{\rho}(\omega)\,d\mathbb{P}(\omega)\,dy=-\int_{(0,1)^{n}+\gamma}\int_{\Omega}v\,{\varphi}(y){\rho}(\omega)\,d\mathbb{P}(\omega)\,dy,$
where we have used (2.19). Applying Fubini’s Theorem, it follows that
$\int_{\Omega}{\left(\int_{(0,1)^{n}+\gamma}u(y,\omega)\,\frac{\partial{\varphi}}{\partial
y_{i}}(y)\,dy+\int_{(0,1)^{n}+\gamma}v(y,\omega)\,{\varphi}(y)\,dy\right)}{\rho}(\omega)\,d\mathbb{P}(\omega)=0$
for each $\rho\in L^{q}(\Omega)$. Therefore, for each $\varphi\in C^{1}_{\rm
c}((0,1)^{n}+\gamma)$ there exists a set $N_{\varphi}\in\mathscr{F}$ with
$\mathbb{P}(N_{\varphi})=0$ (which may depend on $\varphi$), such that, for
each $\omega\in\Omega\setminus N_{\varphi}$ we have
$\int_{(0,1)^{n}+\gamma}u(y,\omega)\,\frac{\partial{\varphi}}{\partial
y_{i}}(y)\,dy=-\int_{(0,1)^{n}+\gamma}v(y,\omega)\,{\varphi}(y)\,dy.$
By a standard density argument, we may remove the dependence of the null set
on the test function $\varphi$, obtaining a single null set $N_{\gamma}$
valid for every $\varphi\in C^{1}_{\rm c}((0,1)^{n}+\gamma)$.
2\. Finally, to pass from $\varphi\in C^{1}_{c}((0,1)^{n}+\gamma)$ to the case
where $\varphi\in C^{1}_{\rm c}(\mathbb{R}^{n})$, we use a standard
partition-of-unity argument, which we carry out in detail to clarify it in
our setting. Given $\varphi\in C^{1}_{\rm c}(\mathbb{R}^{n})$, since
${\rm supp}\,\varphi$ is a compact set, there exists
$\left\\{\gamma_{j}\right\\}_{j=1}^{m}$ a finite subset of $\mathbb{R}^{n}$,
such that
${\rm
supp}\,\varphi\subset\bigcup_{j=1}^{m}\left((0,1)^{n}+\gamma_{j}\right).$
Then, we consider a partition of unity $\\{\theta_{j}\\}_{j=0}^{m}$
subordinated to this open covering, that is to say
* i)
$\theta_{j}\in C^{1}_{c}(\mathbb{R}^{n})$, $0\leqslant\theta_{j}\leqslant 1$,
$j=0,\ldots,m$,
* ii)
${\sum_{j=0}^{m}\theta_{j}(y)=1}$, for all $y\in\mathbb{R}^{n}$,
* iii)
${\rm supp}\,\theta_{j}\subset(0,1)^{n}+\gamma_{j}$, $j=1,\ldots,m$, and ${\rm
supp}\,\theta_{0}\subset\mathbb{R}^{n}\setminus{\rm supp}\,\varphi$.
Since $\varphi=0$ on the support of $\theta_{0}$, it follows that, for each
$y\in\mathbb{R}^{n}$,
$\varphi(y)=\varphi(y)\sum_{j=1}^{m}\theta_{j}(y)=\sum_{j=1}^{m}(\varphi\theta_{j})(y).$
(2.21)
Moreover, from item 1, there exist sets
$N_{\gamma_{1}},\ldots,N_{\gamma_{m}}\in\mathscr{F}$ with
$\mathbb{P}(N_{\gamma_{j}})=0$, for any $j\in\\{1,\ldots,m\\}$, such that
$\int_{\mathbb{R}^{n}}u(y,\omega)\,\frac{\partial({\varphi\theta_{j}})}{\partial
y_{i}}\,dy=-\int_{\mathbb{R}^{n}}v(y,\omega)\,({\varphi\theta_{j}})\,dy,$
for each $\omega\in\Omega\setminus N_{\gamma_{j}}$. To follow, we define
$N:=\bigcup_{j=1}^{m}N_{\gamma_{j}}$ (which may depend on $\varphi$), then
$\mathbb{P}(N)=0$ and summing from 1 to $m$, we obtain from the above equation
$\sum_{j=1}^{m}\int_{\mathbb{R}^{n}}u(y,\omega)\,\frac{\partial({\varphi\theta_{j}})(y)}{\partial
y_{i}}\,dy=-\sum_{j=1}^{m}\int_{\mathbb{R}^{n}}v(y,\omega)\,({\varphi\theta_{j}})(y)\,dy,$
for each $\omega\in\Omega\setminus N$. Therefore, since the above sum is
finite and using (2.21), we obtain
$\int_{\mathbb{R}^{n}}u(y,\omega)\,\frac{\partial{\varphi(y)}}{\partial
y_{i}}\,dy=-\int_{\mathbb{R}^{n}}v(y,\omega)\,\varphi(y)\,dy.$
Again, by a standard argument, we may remove the dependence of the null set
$N$ on the test function $\varphi$. Consequently, we have obtained (2.18),
that is, statement $(B)$.
3\. Now, let us deduce statement $(A)$ from $(B)$. For each $\ell\in\mathbb{N}$,
we define the set ${Q}_{\ell}:=(-\ell,\ell)^{n}$ and the function
$\chi_{\ell}\in C^{1}_{c}(\mathbb{R}^{n})$, such that
$\chi_{\ell}\equiv 1$ in ${Q}_{\ell}$, $\chi_{\ell}\equiv 0$ in
$\mathbb{R}^{n}\setminus{Q}_{\ell+1}$, and
${\|\nabla\chi_{\ell}\|_{\infty}\leqslant 2}$.
Then, given $\zeta\in C^{1}(\mathbb{R}^{n};L^{q}(\Omega))$ and
$i\in\\{1,\ldots,n\\}$, we consider $\zeta(\cdotp,\omega)\chi_{\ell}$, (for
$\ell\in\mathbb{N}$ and $\omega\in\Omega$ fixed), as test function in (2.18),
that is
$\int_{\mathbb{R}^{n}}u(y,\omega)\,\frac{\partial}{\partial
y_{i}}{\left({\zeta}(y,\omega)\chi_{\ell}(y)\right)}\,dy=-\int_{\mathbb{R}^{n}}v(y,\omega)\,{{\zeta}(y,\omega)\chi_{\ell}(y)}\,dy.$
From the definition of $\chi_{\ell}$, and applying the product rule we obtain
$\displaystyle\int_{Q_{\ell+1}}u(y,\omega)\frac{\partial{\zeta(y,\omega)}}{\partial
y_{i}}\,\chi_{\ell}(y)\,dy$ $\displaystyle+\int_{Q_{\ell+1}\setminus
Q_{\ell}}u(y,\omega){\zeta}(y,\omega)\,\frac{\partial\chi_{\ell}(y)}{\partial
y_{i}}\,dy$
$\displaystyle=-\int_{Q_{\ell+1}}v(y,\omega)\,{{\zeta}(y,\omega)\chi_{\ell}(y)}\,dy,$
or conveniently using that $Q_{\ell+1}=Q_{\ell}\cup(Q_{\ell+1}\setminus
Q_{\ell})$, we have
$\displaystyle\int_{Q_{\ell}}u(y,\omega)\,\frac{\partial{\zeta}}{\partial
y_{i}}(y,\omega)\,dy$
$\displaystyle+\int_{Q_{\ell}}v(y,\omega)\,{\zeta}(y,\omega)\,dy$ (2.22)
$\displaystyle=-\int_{Q_{\ell+1}\setminus
Q_{\ell}}u(y,\omega)\,\frac{\partial{\zeta}}{\partial
y_{i}}(y,\omega)\,\chi_{\ell}(y)\,dy$
$\displaystyle\quad-\int_{Q_{\ell+1}\setminus
Q_{\ell}}u(y,\omega)\,{\zeta}(y,\omega)\,\frac{\partial\chi_{\ell}(y)}{\partial
y_{i}}\,dy$ $\displaystyle\quad-\int_{Q_{\ell+1}\setminus
Q_{\ell}}v(y,\omega)\,{\zeta}(y,\omega)\,\chi_{\ell}(y)\,dy$
$\displaystyle=I_{1}(\omega)+I_{2}(\omega)+I_{3}(\omega),$
with obvious notation.
Claim: For $j=1,2,3$,
$\lim_{\ell\to\infty}\int_{\Omega}\frac{|I_{j}(\omega)|}{|Q_{\ell}|}d\mathbb{P}(\omega)=0.$
Proof of Claim: Let us show for $j=2$, that is
$\lim_{\ell\to\infty}\int_{\Omega}\frac{1}{|Q_{\ell}|}{\Big{|}\int_{Q_{\ell+1}\setminus
Q_{\ell}}u(y,\omega)\,{\zeta}(y,\omega)\,\frac{\partial\chi_{\ell}(y)}{\partial
y_{i}}\,dy\Big{|}}d\mathbb{P}(\omega)=0,$
the others are similar. Then, applying Fubini’s Theorem
$\displaystyle\int_{\Omega}\frac{1}{|Q_{\ell}|}\Big{|}\int_{Q_{\ell+1}\setminus
Q_{\ell}}$ $\displaystyle
u(y,\omega)\,{\zeta}(y,\omega)\,\frac{\partial\chi_{\ell}(y)}{\partial
y_{i}}\,dy\Big{|}d\mathbb{P}(\omega)$
$\displaystyle\leq\,\frac{1}{|Q_{\ell}|}\int_{Q_{\ell+1}\setminus
Q_{\ell}}\int_{\Omega}{|u(\cdotp,\omega)\,{\zeta}(\cdotp,\omega)|}\,{\|\nabla\chi_{\ell}\|}_{\infty}\,d\mathbb{P}\,dy$
$\displaystyle\leq\,\frac{2}{|Q_{\ell}|}\int_{Q_{\ell+1}\setminus
Q_{\ell}}\int_{\Omega}{|u(y,\omega)\,{\zeta}(y,\omega)|}\,d\mathbb{P}(\omega)\,dy$
$\displaystyle=\frac{2\,{((2(\ell+1))^{n}-(2\ell)^{n})}}{(2\ell)^{n}}\\!\\!\int_{[0,1)^{n}}\int_{\Omega}{|u(y,\omega)\,{\zeta}(y,\omega)|}\,d\mathbb{P}(\omega)\,dy$
$\displaystyle=\,{2\,{{((1+\ell^{-1})^{n}-1)}}}\int_{[0,1)^{n}}\int_{\Omega}{|u(y,\omega)\,{\zeta}(y,\omega)|}\,d\mathbb{P}(\omega)\,dy,$
from which, passing to the limit as $\ell\to\infty$, the claim follows.
Then, dividing equation (2.22) by $|Q_{\ell}|$, integrating over $\Omega$, and
using the Claim, we obtain
$\liminf_{\ell\to\infty}\int_{\Omega}{\Big{|}\frac{1}{|Q_{\ell}|}\int_{Q_{\ell}}(u\,\frac{\partial{\zeta}}{\partial
y_{i}})(y,\omega)dy+\frac{1}{|Q_{\ell}|}\int_{Q_{\ell}}(v\,{\zeta})(y,\omega)dy\Big{|}}d\mathbb{P}(\omega)=0,$
and applying Fatou’s Lemma,
$\int_{\Omega}\liminf_{\ell\to\infty}{\Big{|}\frac{1}{|Q_{\ell}|}\int_{Q_{\ell}}(u\,\frac{\partial{\zeta}}{\partial
y_{i}})(y,\omega)dy+\frac{1}{|Q_{\ell}|}\int_{Q_{\ell}}(v\,{\zeta})(y,\omega)\,dy\Big{|}}d\mathbb{P}(\omega)=0.$
Therefore, there exists a $\mathscr{F}$-measurable set
$\widetilde{\Omega}\subset\Omega$ of full measure, such that, for each
$\omega\in\widetilde{\Omega}$, we have
$\liminf_{\ell\to\infty}{\Big{|}\frac{1}{|Q_{\ell}|}\int_{Q_{\ell}}u(y,\omega)\,\frac{\partial{\zeta}}{\partial
y_{i}}(y,\omega)\,dy+\frac{1}{|Q_{\ell}|}\int_{Q_{\ell}}v(y,\omega)\,{\zeta}(y,\omega)\,dy\Big{|}}=0.$
Then, applying Theorem 2.12 and from equation (2.16), it follows that
$\int_{[0,1)^{n}}\int_{\Omega}u(y,\omega)\,\frac{\partial{\zeta}}{\partial
y_{i}}(y,\omega)\,d\mathbb{P}(\omega)\,dy=-\int_{[0,1)^{n}}\int_{\Omega}v(y,\omega)\,{\zeta}(y,\omega)\,d\mathbb{P}(\omega)\,dy,$
which finishes the proof of the theorem. ∎
Similarly to the above theorem, we have the characterization of weak
derivatives of stationary functions composed with stochastic deformations,
given by the following
###### Theorem 2.16.
Let $u,v\in L^{1}_{\rm loc}(\mathbb{R}^{n};L^{p}(\Omega))$ be stationary
functions, $(p>1)$. Then, for any $k\in\{1,\ldots,n\}$ fixed, the following
statements are equivalent:
$(A)\quad\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}u{\left(\Phi^{-1}\left(z,\omega\right),\omega\right)}\,{\frac{\partial{\left(\zeta{\left(\Phi^{-1}(z,\omega),\omega\right)}\right)}}{\partial
z_{k}}}\,dz\,d\mathbb{P}(\omega)\\\\[5.0pt]
=-\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}v{\left(\Phi^{-1}\left(z,\omega\right),\omega\right)}\,{\zeta{\left(\Phi^{-1}(z,\omega),\omega\right)}}\,dz\,d\mathbb{P}(\omega),$
for each stationary function $\zeta\in C^{1}(\mathbb{R}^{n};L^{q}(\Omega))$,
with $1/p+1/q=1$.
$(B)\quad\int_{\mathbb{R}^{n}}u{\left(\Phi^{-1}\left(z,\omega\right),\omega\right)}\,{\frac{\partial\varphi}{\partial
z_{k}}(z)}\,dz=-\int_{\mathbb{R}^{n}}v{\left(\Phi^{-1}\left(z,\omega\right),\omega\right)}\,{\varphi(z)}\,dz,\hskip
12.0pt$
for any $\varphi\in C^{1}_{\rm c}(\mathbb{R}^{n})$, and almost sure
$\omega\in\Omega$.
###### Proof.
The proof follows the same lines as in the proof of Theorem 2.15 after the
change of variables $y=\Phi^{-1}(z,\omega)$. ∎
### 2.3 $\Phi_{\omega}-$Two-scale Convergence
In this subsection, we shall consider the two-scale convergence in a
stochastic setting that goes beyond the classical stationary ergodic setting.
The classical concept of two-scale convergence was introduced by Nguetseng
[29] and further developed by Allaire [1] to deal with periodic problems.
The notion of two-scale convergence has been successfully extended to
non-periodic settings in several papers, as in [20, 16] for the ergodic
algebra setting and in [11] for the stochastic setting. The main difference
here from the earlier studies is that the test functions used are random
perturbations, accomplished by stochastic diffeomorphisms, of stationary
functions. The main difficulties brought by this kind of test function are
the lack of the stationarity property (see [6] for a thorough discussion
about that), which prevents us from using the results described in [11], and
the lack of a topology compatible with the probability space considered. This
is overcome by using a compactification argument that preserves the ergodic
nature of the setting involved. For this, we will make use of the following
lemma, whose simple proof can be found in [5].
###### Lemma 2.17.
Let $X_{1},X_{2}$ be compact spaces, $R_{1}$ a dense subset of $X_{1}$ and
$W:R_{1}\to X_{2}$. Suppose that for all $g\in C(X_{2})$ the function $g\circ
W$ is the restriction to $R_{1}$ of some (unique) $g_{1}\in C(X_{1})$. Then
$W$ can be uniquely extended to a continuous mapping $\underline{W}:X_{1}\to
X_{2}$. Further, suppose in addition that $R_{2}$ is a dense set of $X_{2}$,
$W$ is a bijection from $R_{1}$ onto $R_{2}$ and for all $f\in C(X_{1})$,
$f\circ W^{-1}$ is the restriction to $R_{2}$ of some (unique) $f_{2}\in
C(X_{2})$. Then, $W$ can be uniquely extended to a homeomorphism
$\underline{W}:X_{1}\to X_{2}$.
Now, we can prove the following result.
###### Theorem 2.18.
Let $\mathbb{S}\subset L^{\infty}(\mathbb{R}^{n}\times\Omega)$ be a countable
set of stationary functions. Then there exist a compact (separable)
topological space $\widetilde{\Omega}$ and a one-to-one function
$\delta:\Omega\to\widetilde{\Omega}$ with dense image satisfying the following
properties:
1. (i)
The probability space $\Big{(}\Omega,\mathscr{F},\mathbb{P}\Big{)}$ and the
ergodic dynamical system $\tau:\mathbb{Z}^{n}\times\Omega\to\Omega$ acting on
it extend, respectively, to a Radon probability space
$\Big{(}\widetilde{\Omega},\mathscr{B},\widetilde{\mathbb{P}}\Big{)}$ and to
an ergodic dynamical system
$\widetilde{\tau}:\mathbb{Z}^{n}\times\widetilde{\Omega}\to\widetilde{\Omega}$.
2. (ii)
The stochastic deformation $\Phi:\mathbb{R}^{n}\times\Omega\to\mathbb{R}^{n}$
extends to a stochastic deformation
$\tilde{\Phi}:\mathbb{R}^{n}\times\widetilde{\Omega}\to\mathbb{R}^{n}$
satisfying
$\Phi(x,\omega)=\tilde{\Phi}(x,\delta(\omega)),$
for a.e. $\omega\in\Omega$.
3. (iii)
Any function $f\in\mathbb{S}$ extends to a $\tilde{\tau}-$stationary function
$\tilde{f}\in L^{\infty}(\widetilde{\Omega}\times\mathbb{R}^{n})$ satisfying
$\fint_{\mathbb{R}^{n}}f\left(\Phi^{-1}(z,\omega),\omega\right)\,dz=\fint_{\mathbb{R}^{n}}\tilde{f}\left(\tilde{\Phi}^{-1}(z,\delta(\omega)),\delta(\omega)\right)\,dz,$
for a.e. $\omega\in\Omega$.
###### Proof.
1\. Let $\mathbb{S}$ be the countable set in the statement of the theorem. Given $f\in\mathbb{S}$, define
$f_{j}(y,\omega):=\int_{\mathbb{R}^{n}}f(y+x,\omega)\,\rho_{j}(x)\,dx,$
where $\rho_{j}$ is the classical approximation of the identity in
$\mathbb{R}^{n}$. Note that for a.e. $y\in\mathbb{R}^{n}$, we have that
$f_{j}(y,\cdot)\to f(y,\cdot)$ in $L^{1}(\Omega)$ as $j\to\infty$. Define
$\mathcal{A}$ as the closed algebra with unity generated by the set
$\Big\{f_{j}(y,\cdot);\,j\geq
1,\,y\in\mathbb{Q}^{n},\,f\in\mathbb{S}\Big\}\cup\Big\{\partial_{j}\Phi_{i}(y,\cdot);\,1\leq
j,i\leq n,\,y\in\mathbb{Q}^{n}\Big\}.$
Since $[-1,1]$ is a compact set, by the well-known Tychonoff Theorem, the
set
$[-1,1]^{\mathcal{A}}:=\Big{\\{}\text{the functions
$\gamma:\mathcal{A}\to[-1,1]$}\Big{\\}}$
is a compact set in the product topology. Define
$\delta:\Omega\to[-1,1]^{\mathcal{A}}$ by
$\delta(\omega)(g):=\left\\{\begin{array}[]{rc}\frac{g(\omega)}{\|g{\|}_{\infty}},&\mbox{if}\quad
g\neq 0,\\\ 0,&\mbox{if}\quad g=0.\end{array}\right.$
We may assume that the algebra $\mathcal{A}$ distinguishes between points of
$\Omega$, that is, given any two distinct points
$\omega_{1},\omega_{2}\in\Omega$, there exists $g\in\mathcal{A}$ such that
$g(\omega_{1})\neq g(\omega_{2})$. If this is not the case, we may
replace $\Omega$ by its quotient by the natural equivalence relation, in a
standard way, and proceed correspondingly with the $\sigma$-algebra
$\mathscr{F}$ and with the probability measure $\mathbb{P}$. Thus, the
function $\delta$ is one-to-one. Define
$\widetilde{\Omega}:=\overline{\delta(\Omega)}.$
Now, we can see that the set $\Omega$ inherits all topological features of the
compact space $\widetilde{\Omega}$ in a natural way which allows us to
identify it homeomorphically with the image $\delta(\Omega)$.
2\. Define the mapping $i:\mathcal{A}\to C(\delta(\Omega))$ by
$i(g)(\delta(\omega)):=g(\omega).$
We claim that there exists a continuous function
$\tilde{g}:\widetilde{\Omega}\to\mathbb{R}$ such that
$i(g)=\tilde{g}\,\text{on $\delta(\Omega)$}.$
In fact, take $g\in\mathcal{A}$ and $Y:=\overline{g(\Omega)}$. Define the
function $f^{*}:C(Y)\to\mathcal{A}$ by
$f^{*}(h):=h\circ g\quad\text{(here the algebra structure is used)}.$
Hence, we can define $f^{**}:[-1,1]^{\mathcal{A}}\to[-1,1]^{C(Y)}$ by
$f^{**}(h):=h\circ f^{*}.$
Note that the function $f^{**}$ is continuous. To see this, recall that a
function $H$ from a topological space to a product space
$\prod_{\alpha\in\mathcal{I}}X_{\alpha}$ is continuous if and only if each
component $\pi_{\alpha}\circ H:=H_{\alpha}$ is continuous. Hence, if
$\alpha\in C(Y)$, then the projection function $f^{**}_{\alpha}$ must satisfy
$\displaystyle f^{**}_{\alpha}(h):=\left(\pi_{\alpha}\circ
f^{**}\right)(h)=\pi_{\alpha}\circ\left(f^{**}(h)\right)=\pi_{\alpha}\left(h\circ
f^{*}\right)$ $\displaystyle\qquad=\left(h\circ
f^{*}\right)(\alpha)=h\left(f^{*}(\alpha)\right)=h\left(\alpha\circ
g\right)=\pi_{\alpha\circ g}(h).$
Now, consider the function $\tilde{\delta}:Y\to[-1,1]^{C(Y)}$ given by
$\tilde{\delta}(y)(h):=\left\\{\begin{array}[]{rc}\frac{h(y)}{\|h{\|}_{\infty}},&\mbox{if}\quad
h\neq 0,\\\ 0,&\mbox{if}\quad h=0.\end{array}\right.$
Since the algebra $C(Y)$ has the following property: If $F\subset Y$ is a
closed set and $y\notin F$ then $f(y)\notin\overline{f(F)}$ for some $f\in
C(Y)$, we can conclude that the function $\tilde{\delta}$ is a homeomorphism
onto its image. Furthermore, given $\omega\in\Omega$ we have that
$\left(f^{**}\circ\delta\right)(\omega)=f^{**}\left(\delta(\omega)\right)=\delta(\omega)\circ
f^{*}$. Hence, if $0\neq h\in C(Y)$ it follows that
$\displaystyle\left(f^{**}\circ\delta\right)\big{(}\omega\big{)}(h)=\left(\delta(\omega)\circ
f^{*}\right)(h)=\delta(\omega)\left(f^{*}(h)\right)$
$\displaystyle\quad=\delta(\omega)\left(h\circ g\right)=\frac{\left(h\circ
g\right)(\omega)}{\|h\circ
g{\|}_{\infty}}=\tilde{\delta}\left(g(\omega)\right)(h).$
Thus, we see that ${\tilde{\delta}}^{-1}\circ f^{**}=i(g)$. Defining
$\tilde{g}:={\tilde{\delta}}^{-1}\circ f^{**}$ we have clearly that
$i:\mathcal{A}\to C(\widetilde{\Omega})$ is a one-to-one isometry satisfying
$i(g)=\tilde{g}$ and our claim is proved. Moreover, it is easy to see that
$i(\mathcal{A})$ is an algebra of functions over $C(\widetilde{\Omega})$
containing the unity. As before, if $i(\mathcal{A})$ does not separate points
of $\widetilde{\Omega}$, then we may replace $\widetilde{\Omega}$ by its
quotient $\widetilde{\Omega}/\sim$, where
$\tilde{\omega}_{1}\sim\tilde{\omega}_{2}\,\Leftrightarrow\,\tilde{g}(\tilde{\omega}_{1})=\tilde{g}(\tilde{\omega}_{2})\quad\forall
g\in\mathcal{A}.$
Therefore, we can assume that $i(\mathcal{A})$ separates the points of
$\widetilde{\Omega}$. Hence, by Stone’s Theorem (see [Ru2], p. 162,
Theorem 7.3.2) we must have $i(\mathcal{A})=C(\widetilde{\Omega})$.
3\. Define
$\widetilde{\tau}:\mathbb{Z}^{n}\times\delta(\Omega)\to\delta(\Omega)$ by
$\widetilde{\tau}_{k}\left(\delta(\omega)\right):=\delta(\tau_{k}\omega).$
It is easy to see that
$\widetilde{\tau}_{k_{1}+k_{2}}(\delta(\omega))=\widetilde{\tau}_{k_{1}}\Big{(}\widetilde{\tau}_{k_{2}}(\delta(\omega))\Big{)},$
for all $k_{1},k_{2}\in\mathbb{Z}^{n}$ and $\omega\in\Omega$. Since
$\tilde{g}\circ{\widetilde{\tau}_{k}}=\widetilde{g\circ{\tau}_{k}}$ for all
$g\in\mathcal{A}$ and $k\in\mathbb{Z}^{n}$, Lemma 2.17 allows us to extend
the mapping $\widetilde{\tau}_{k}$ from $\delta(\Omega)$ to
$\widetilde{\Omega}$ satisfying the group property
$\widetilde{\tau}_{k_{1}+k_{2}}=\widetilde{\tau}_{k_{1}}\circ{\widetilde{\tau}}_{k_{2}}$.
Given a Borel set $\tilde{A}\subset{\widetilde{\Omega}}$ and defining
$\widetilde{\mathbb{P}}(\tilde{A}):=\mathbb{P}(\delta^{-1}(\tilde{A}\cap\delta(\Omega)))$,
we can deduce that
$\widetilde{\mathbb{P}}\circ{\widetilde{\tau}_{k}}=\widetilde{\mathbb{P}}$.
Thus, the mapping $\widetilde{\tau}_{k}$ is an ergodic dynamical system over
the Radon probability space
$\Big{(}\widetilde{\Omega},\mathscr{B},\widetilde{\mathbb{P}}\Big{)}$. This
concludes the proof of item (i).
4\. Now, note that for each $\omega\in\widetilde{\Omega}$ and each integer
$j\geq 1$, the function $f_{j}(\cdot,\omega)$ is uniformly continuous over
$\mathbb{Q}^{n}$. Hence, it can be extended uniquely to a function
$\widetilde{f_{j}}(\cdot,\omega)$ defined in $\mathbb{R}^{n}$ that satisfies
$\limsup_{j,l\to\infty}\int_{[0,1)^{n}\times\widetilde{\Omega}}|\widetilde{f_{j}}(y,\omega)-\widetilde{f_{l}}(y,\omega)|\,d\widetilde{\mathbb{P}}(\omega)\,dy=0.$
Therefore, there exists a $\widetilde{\tau}$-stationary function
$\widetilde{f}\in
L^{1}_{\text{loc}}\left(\mathbb{R}^{n}\times\widetilde{\Omega}\right)$, such
that $\widetilde{f_{j}}\to\widetilde{f}$ as $j\to\infty$ in
$L^{1}_{\text{loc}}(\mathbb{R}^{n}\times\widetilde{\Omega})$. Since
$\|\widetilde{f_{j}}{\|}_{\infty}\leq\|{f}{\|}_{\infty}$, for all $j\geq 1$ we
have that $\widetilde{f}\in
L^{\infty}(\mathbb{R}^{n}\times\widetilde{\Omega})$. In the same way, the
stochastic deformation $\Phi:\mathbb{R}^{n}\times\Omega\to\mathbb{R}^{n}$
extends to a stochastic deformation
$\tilde{\Phi}:\mathbb{R}^{n}\times\widetilde{\Omega}\to\mathbb{R}^{n}$
satisfying $\Phi(y,\omega)=\tilde{\Phi}(y,\delta(\omega))$ for all
$\omega\in\Omega$ and
$\fint_{\mathbb{R}^{n}}f\left(\Phi^{-1}(z,\omega),\omega\right)\,dz=\fint_{\mathbb{R}^{n}}\tilde{f}\left(\tilde{\Phi}^{-1}(z,\delta(\omega)),\delta(\omega)\right)\,dz,$
for a.e. $\omega\in\Omega$. This completes the proof of Theorem 2.18. ∎
In practice, in our context, the set $\mathbb{S}$ shall be a countable set
generated by the coefficients of our equation and by the eigenfunctions of
the spectral equation associated to it. Thus, Theorem 2.18 allows us to
assume, without loss of generality, that our probability space
$\Big{(}\Omega,\mathscr{F},\mathbb{P}\Big{)}$ is a separable compact space.
Using the Ergodic Theorem, given a stationary function $f\in
L^{\infty}(\mathbb{R}^{n}\times\Omega)$ there exists a set of full measure
$\Omega_{f}\subset\Omega$ such that
$\fint_{\mathbb{R}^{n}}f\left(\Phi^{-1}(z,\tilde{\omega}),\tilde{\omega}\right)\,dz=c_{\Phi}^{-1}\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}f\left(\Phi^{-1}(z,\omega),\omega\right)\,dz\,d\mathbb{P}(\omega),$
(2.23)
for every $\tilde{\omega}\in{\Omega}_{f}$. Due to the separability of the
compact probability space $\Big{(}\Omega,\mathscr{F},\mathbb{P}\Big{)}$, we
can find a set $\mathbb{D}\subset C_{b}(\mathbb{R}^{n}\times\Omega)$ such
that:
* •
Each $f\in\mathbb{D}$ is a stationary function.
* •
$\mathbb{D}$ is a countable and dense set in
$C_{0}\big{(}[0,1)^{n}\times\Omega\big{)}$.
In this case, there exists a set $\Omega_{0}\subset\Omega$ of full measure
such that the equality (2.23) holds for any $\tilde{\omega}\in\Omega_{0}$ and
$f\in\mathbb{D}$.
Now, we proceed with the definition of two-scale convergence in this
stochastically deformed setting. In what follows, the set
$O\subset\mathbb{R}^{n}$ is an open set.
###### Definition 2.19.
Let $1<p<\infty$ and $v_{\varepsilon}:O\times\Omega\to\mathbb{C}$ be a
sequence such that $v_{\varepsilon}(\cdot,\tilde{\omega})\in L^{p}(O)$. The
sequence $\{v_{\varepsilon}(\cdot,\tilde{\omega})\}_{\varepsilon>0}$ is
said to $\Phi_{\omega}$-two-scale converge to a stationary function
$V_{\tilde{\omega}}\in L^{p}\left(O\times[0,1)^{n}\times\Omega\right)$ when,
for a.e. $\tilde{\omega}\in\Omega$, the following holds:
$\displaystyle\lim_{\varepsilon\to
0}\int_{O}v_{\varepsilon}(x,\tilde{\omega})\,\varphi(x)\,\Theta\left(\Phi^{-1}\left(\frac{x}{\varepsilon},\tilde{\omega}\right),\tilde{\omega}\right)\,dx$
$\displaystyle=c_{\Phi}^{-1}\int_{O\times\Omega}\int_{\Phi([0,1)^{n},\omega)}\\!\\!\\!V_{\tilde{\omega}}\left(x,\Phi^{-1}\left(z,\omega\right),\omega\right)\,\varphi(x)\,\Theta(\Phi^{-1}(z,\omega),\omega)\,dz\,d{\mathbb{P}(\omega)}\,dx,$
for all $\varphi\in C_{c}^{\infty}(O)$ and $\Theta\in
L^{q}_{\text{loc}}(\mathbb{R}^{n}\times\Omega)$ stationary. Here,
$p^{-1}+q^{-1}=1$ and
$c_{\Phi}:=\det\Big{(}\int_{[0,1)^{n}\times\Omega}\nabla\Phi(y,\omega)\,d{\mathbb{P}(\omega)}\,dy\Big{)}$.
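As a basic consistency check of Definition 2.19 (a sketch, not a statement from this paper): an oscillating realization of a bounded stationary function should be its own two-scale limit. Let $\Theta_{0}\in L^{\infty}(\mathbb{R}^{n}\times\Omega)$ be stationary and set $v_{\varepsilon}(x,\tilde{\omega}):=\Theta_{0}\big(\Phi^{-1}(x/\varepsilon,\tilde{\omega}),\tilde{\omega}\big)$. For bounded stationary $\Theta$ (the general case following by density), applying (2.23) to the stationary product $\Theta_{0}\,\Theta$ yields, for a.e. $\tilde{\omega}$,

```latex
\lim_{\varepsilon\to 0}\int_{O}v_{\varepsilon}(x,\tilde{\omega})\,\varphi(x)\,
  \Theta\!\left(\Phi^{-1}\!\left(\tfrac{x}{\varepsilon},\tilde{\omega}\right),
  \tilde{\omega}\right)dx
= c_{\Phi}^{-1}\!\int_{O\times\Omega}\int_{\Phi([0,1)^{n},\omega)}
  \Theta_{0}\!\left(\Phi^{-1}(z,\omega),\omega\right)\varphi(x)\,
  \Theta\!\left(\Phi^{-1}(z,\omega),\omega\right)dz\,d\mathbb{P}(\omega)\,dx,
```

that is, $v_{\varepsilon}(\cdot,\tilde{\omega})$ $\Phi_{\omega}$-two-scale converges to $V_{\tilde{\omega}}(x,y,\omega)=\Theta_{0}(y,\omega)$.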
###### Remark 2.20.
From now on, we shall use the notation
$v_{\varepsilon}(x,\widetilde{\omega})\;\xrightharpoonup[\varepsilon\to
0]{2-{\rm
s}}\;V_{\widetilde{\omega}}{\left(x,\Phi^{-1}(z,\omega),\omega\right)},$
to indicate that $v_{\varepsilon}(\cdot,\tilde{\omega})$
$\Phi_{\omega}$-two-scale converges to $V_{\tilde{\omega}}$.
The most important result about two-scale convergence needed in this paper
is the following compactness theorem, which generalizes the corresponding one
for the deterministic case in [16] (see Theorem 4.8) and the corresponding one
for the stochastic case in [11] (see Theorem 3.4).
###### Theorem 2.21.
Let $1<p<\infty$ and $v_{\varepsilon}:O\times\Omega\to\mathbb{C}$ be a
sequence such that
$\sup_{\varepsilon>0}\int_{O}|v_{\varepsilon}(x,\tilde{\omega})|^{p}\,dx<\infty,$
for almost all $\tilde{\omega}\in\Omega$. Then, for almost all
$\tilde{\omega}\in\Omega_{0}$, there exists a subsequence
$\{v_{\varepsilon^{\prime}}(\cdot,\tilde{\omega})\}_{\varepsilon^{\prime}>0}$,
which may depend on $\tilde{\omega}$, and a stationary function
$V_{\tilde{\omega}}\in L^{p}(O\times[0,1)^{n}\times\Omega)$, such that
$v_{\varepsilon^{\prime}}(x,\widetilde{\omega})\;\xrightharpoonup[\varepsilon^{\prime}\to
0]{2-{\rm
s}}\;V_{\widetilde{\omega}}{\left(x,\Phi^{-1}(z,\omega),\omega\right)}.$
###### Proof.
1\. We begin by fixing $\tilde{\omega}\in\Omega_{0}$. Due to our assumption,
there exists ${c(\widetilde{\omega})>0}$, such that for all $\varepsilon>0$
${\left\|v_{\varepsilon}(\cdot,\widetilde{\omega})\right\|}_{L^{p}(O)}\leqslant
c(\widetilde{\omega}).$
Now, taking $\phi\in\Xi\times\mathbb{D}$ with $\Xi\subset C_{c}^{\infty}(O)$
dense in $L^{q}(O)$, we have after applying the Hölder inequality and the
Ergodic Theorem,
$\displaystyle\limsup_{\varepsilon\to 0}\Big|\int_{O}v_{\varepsilon}(x,\widetilde{\omega})\,\phi\Big(x,\Phi^{-1}\Big(\frac{x}{\varepsilon},\widetilde{\omega}\Big),\widetilde{\omega}\Big)\,dx\Big|$
$\displaystyle\qquad\leq c(\widetilde{\omega})\Big[\limsup_{\varepsilon\to 0}\int_{O}\Big|\phi\Big(x,\Phi^{-1}\Big(\frac{x}{\varepsilon},\widetilde{\omega}\Big),\widetilde{\omega}\Big)\Big|^{q}dx\Big]^{1/q}$
$\displaystyle\qquad=c(\widetilde{\omega})\Big[c_{\Phi}^{-1}\int_{O}\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}\Big|\phi\left(x,\Phi^{-1}(z,\omega),\omega\right)\Big|^{q}dz\,d\mathbb{P}(\omega)\,dx\Big]^{1/q}.$
(2.24)
Thus, the countability of the set $\Xi\times\mathbb{D}$ combined
with a diagonal argument yields a subsequence $\\{\varepsilon^{\prime}\\}$
(possibly depending on $\tilde{\omega}$) such that the functional
$\mu:\Xi\times\mathbb{D}\to\mathbb{C}$ given by
$\langle\mu,\phi\rangle:=\lim_{\varepsilon^{\prime}\to
0}\int_{O}v_{\varepsilon^{\prime}}(x,\widetilde{\omega})\,{\phi{\left(x,\Phi^{-1}{\left(\frac{x}{\varepsilon^{\prime}},\widetilde{\omega}\right)},\widetilde{\omega}\right)}}dx$
(2.25)
is well-defined and bounded with respect to the norm $\|\cdot{\|}_{q}$ defined
as
$\|\phi{\|}_{q}:=\big{[}c_{\Phi}^{-1}\int_{O}\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}{\left|\phi{\left(x,\Phi^{-1}(z,\omega),\omega\right)}\right|}^{q}dz\,d\mathbb{P}(\omega)\,dx\big{]}^{1/q}$
by (2.24). Since the set $\Xi\times\mathbb{D}$ is dense in
$L^{q}\left(O\times[0,1)^{n}\times\Omega\right)$, we can extend the functional
$\mu$ to a bounded functional $\tilde{\mu}$ over
$L^{q}\left(O\times[0,1)^{n}\times\Omega\right)$. Hence, we find
$V_{\tilde{\omega}}\in L^{p}(O\times[0,1)^{n}\times\Omega)$ which can be
extended to $O\times\mathbb{R}^{n}\times\Omega$ in a stationary way by setting
$V_{\tilde{\omega}}(x,y,\omega)=V_{\tilde{\omega}}\left(x,y-\left\lfloor
y\right\rfloor,\tau_{\left\lfloor y\right\rfloor}\omega\right),$
and satisfying for all $\phi\in
L^{q}\left(O\times[0,1)^{n}\times\Omega\right)$,
$\langle\tilde{\mu},\phi\rangle\\!\\!=c_{\Phi}^{-1}\int_{O\times\Omega}\int_{\Phi([0,1)^{n},\omega)}\\!\\!\\!\\!V_{\tilde{\omega}}\left(x,\Phi^{-1}\left(z,\omega\right),\omega\right)\phi\left(x,\Phi^{-1}(z,\omega),\omega\right)dzd\mathbb{P}(\omega)dx.$
2\. Now, take $\varphi\in C^{\infty}_{c}(O)$ and $\Theta\in
L^{q}_{\text{loc}}(\mathbb{R}^{n}\times\Omega)$ a $\tau$-stationary function.
Since the set $\Xi\times\mathbb{D}$ is dense in
$L^{q}\left(O\times[0,1)^{n}\times\Omega\right)$, we can pick a sequence
$\\{(\varphi_{j},\Theta_{j}){\\}}_{j\geq 1}\subset\Xi\times\mathbb{D}$ such
that
$\lim_{j\to\infty}(\varphi_{j},\Theta_{j})=(\varphi,\Theta)\quad\text{in
$L^{q}\Big{(}O\times[0,1)^{n}\times\Omega\Big{)}$}.$
Then, observing that
$\displaystyle\limsup_{\varepsilon^{\prime}\to
0}\Big{|}\int_{O}v_{\varepsilon^{\prime}}(x,\tilde{\omega})\varphi(x)\Theta\left(\Phi^{-1}\left(\frac{x}{\varepsilon^{\prime}},\tilde{\omega}\right),\tilde{\omega}\right)\,dx$
$\displaystyle-\int_{O}v_{\varepsilon^{\prime}}(x,\tilde{\omega})\varphi_{j}(x)\Theta_{j}\left(\Phi^{-1}\left(\frac{x}{\varepsilon^{\prime}},\tilde{\omega}\right),\tilde{\omega}\right)\,dx\Big{|}$
$\displaystyle\leq
c\|\varphi-\varphi_{j}{\|}_{L^{q}(O)}[c_{\Phi}^{-1}\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}{\left|(\Theta-\Theta_{j}){\left(\Phi^{-1}(z,\omega),\omega\right)}\right|}^{q}dz\,d\mathbb{P}(\omega)]^{1/q},$
where $c=c(\tilde{\omega})$ is a positive constant. Then, combining the
previous inequality with (2.25), we conclude the proof of the theorem. ∎
Let us recall the following space (see Remark 2.10)
$\mathcal{H}:=\Big{\\{}w\in
H^{1}_{\text{loc}}(\mathbb{R}^{n};L^{2}(\Omega));\,\text{$w$ is a stationary
function}\Big{\\}},$
which is a Hilbert space with respect to the following inner product
$\displaystyle\langle
w,v{\rangle}_{\mathcal{H}}:=\int_{[0,1)^{n}\times\Omega}$
$\displaystyle\nabla_{\\!y}w(y,\omega)\cdot\nabla_{y}v(y,\omega)\,d{\mathbb{P}}(\omega)\,dy$
$\displaystyle+\int_{[0,1)^{n}\times\Omega}\\!\\!\\!\\!w(y,\omega)v(y,\omega)\,d{\mathbb{P}}(\omega)\,dy.$
The next lemma will be important in the homogenization process.
###### Lemma 2.22.
Let $O\subset\mathbb{R}^{n}$ be an open set and assume that
$\\{u_{\varepsilon}(\cdot,\tilde{\omega}){\\}}_{\varepsilon>0}$ and
$\\{\varepsilon\nabla
u_{\varepsilon}(\cdot,\tilde{\omega}){\\}}_{\varepsilon>0}$ are bounded
sequences in $L^{2}(O)$ and in $L^{2}(O;\mathbb{R}^{n})$ respectively for a.e.
$\tilde{\omega}\in\Omega$. Then, for a.e. $\tilde{\omega}\in\Omega$, there
exists a subsequence $\\{\varepsilon^{\prime}\\}$ (which may depend on
$\tilde{\omega}$) and $u_{\tilde{\omega}}\in L^{2}(O;\mathcal{H})$, such that
$u_{\varepsilon^{\prime}}(\cdot,\tilde{\omega})\;\xrightharpoonup[\varepsilon^{\prime}\to
0]{2-{\rm s}}\;u_{\tilde{\omega}},$
and
$\varepsilon^{\prime}\nabla
u_{\varepsilon^{\prime}}(\cdot,\tilde{\omega})\;\xrightharpoonup[\varepsilon^{\prime}\to
0]{2-{\rm s}}\;[\nabla_{y}\Phi]^{-1}\nabla_{y}u_{\tilde{\omega}}.$
###### Proof.
Applying Theorem 2.21 to the sequences
${\\{u_{\varepsilon}(\cdot,\widetilde{\omega})\\}_{\varepsilon>0}},\quad{\\{\varepsilon\nabla
u_{\varepsilon}(\cdot,\widetilde{\omega})\\}_{\varepsilon>0}}$
for a.e. ${\widetilde{\omega}\in\Omega}$, we can find a subsequence
$\\{\varepsilon^{\prime}\\}$, and functions
${u_{\widetilde{\omega}}\in
L^{2}({O}\\!\times\\![0,1)^{n}\\!\times\\!\Omega)},\quad{V_{\widetilde{\omega}}\in
L^{2}({O}\\!\times[0,1)^{n}\\!\times\\!\Omega;\mathbb{R}^{n})}$
with
${V_{\widetilde{\omega}}=(v^{(1)}_{\widetilde{\omega}},\ldots,v^{(n)}_{\widetilde{\omega}})}$
satisfying for ${k\in\\{1,2,\ldots,n\\}}$,
$u_{\varepsilon^{\prime}}(\cdot,\widetilde{\omega})\;\xrightharpoonup[\varepsilon^{\prime}\to
0]{2-{\rm s}}\;u_{\widetilde{\omega}},$ (2.26)
and
$\varepsilon^{\prime}\frac{\partial u_{\varepsilon^{\prime}}}{\partial
x_{k}}\;\xrightharpoonup[\varepsilon^{\prime}\to 0]{2-{\rm
s}}\;v^{(k)}_{\widetilde{\omega}}.$ (2.27)
Hence, for each ${k\in\\{1,\ldots,n\\}}$, performing an integration by parts
we have
$\int_{{O}}\varepsilon^{\prime}\frac{\partial
u_{\varepsilon^{\prime}}}{\partial
x_{k}}(x,\widetilde{\omega})\,{\varphi(x)\,\Theta{\left(\Phi^{-1}\left(\frac{x}{\varepsilon^{\prime}},\widetilde{\omega}\right),\widetilde{\omega}\right)}}dx\\\
=-\varepsilon^{\prime}\int_{{O}}u_{\varepsilon^{\prime}}(x,\widetilde{\omega})\,{\frac{\partial\varphi}{\partial
x_{k}}(x)\,\Theta{\left(\Phi^{-1}\left(\frac{x}{\varepsilon^{\prime}},\widetilde{\omega}\right),\widetilde{\omega}\right)}}dx\\\
\quad\quad-\int_{{O}}u_{\varepsilon^{\prime}}(x,\widetilde{\omega})\,{\varphi(x)\,{[\nabla_{\\!\\!y}\Phi]}^{-1}{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon^{\prime}},\widetilde{\omega}\right)},\widetilde{\omega}\right)}\,\nabla_{\\!\\!y}\Theta{\left(\Phi^{-1}\left(\frac{x}{\varepsilon^{\prime}},\widetilde{\omega}\right),\widetilde{\omega}\right)}\cdotp
e_{k}}\,dx,$
for every $\varphi\in C^{\infty}_{c}(O)$ and $\Theta\in
C_{c}^{\infty}\big{(}[0,1)^{n};L^{\infty}(\Omega)\big{)}$ extended in a
stationary way to $\mathbb{R}^{n}$. Then, using the relations (2.26)-(2.27)
and a density argument in the space of test functions, after letting
$\varepsilon^{\prime}\to 0$ we arrive at
$\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}v^{(k)}_{\widetilde{\omega}}{\left(x,\Phi^{-1}\left(z,\omega\right),\omega\right)}\,{\Theta{\left(\Phi^{-1}(z,\omega),\omega\right)}}\,d\mathbb{P}\,dz\\\
=-\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}u_{\widetilde{\omega}}\left(x,\Phi^{-1}\left(z,\omega\right),\omega\right){\frac{\partial}{\partial
z_{k}}{\left(\Theta{\left(\Phi^{-1}(z,\omega),\omega\right)}\right)}}\,d\mathbb{P}\,dz,$
for a.e. $x\in{O}$ and for any $\Theta\in
C_{c}^{\infty}\big{(}[0,1)^{n};L^{\infty}(\Omega)\big{)}$.
Hence applying Theorem 2.16, we obtain
$\int_{\mathbb{R}^{n}}v^{(k)}_{\widetilde{\omega}}{\left(x,\Phi^{-1}\left(z,\omega\right),\omega\right)}\,{\varphi(z)}\,dz\,=\,-\int_{\mathbb{R}^{n}}u_{\widetilde{\omega}}{\left(x,\Phi^{-1}\left(z,\omega\right),\omega\right)}\,{\frac{\partial\varphi}{\partial
z_{k}}(z)}\,dz,$
for all ${\varphi\in C_{\rm c}^{\infty}(\mathbb{R}^{n})}$ and a.e.
${\omega\in\Omega}$. This completes the proof of our lemma. ∎
### 2.4 Perturbations of bounded operators
The aim of this section is to study the point spectrum, that is, the set of
eigenvalues, of perturbations of a given bounded operator. More precisely,
given a complex Hilbert space $H$, and a sequence of operators
$\\{A_{\alpha}\\}$, with $A_{\alpha}\in\mathcal{B}(H)$ for each
$\alpha\in\mathbb{N}^{n}$, we analyse the point spectrum of the power series
of $n-$complex variables $\boldsymbol{z}=(z_{1},\ldots,z_{n})$, which is
$\sum_{\alpha\in\mathbb{N}^{n}}\boldsymbol{z}^{\alpha}A_{\alpha}\equiv\sum_{\alpha_{1},\ldots,\alpha_{n}=0}^{\infty}z_{1}^{\alpha_{1}}\ldots
z_{n}^{\alpha_{n}}A_{\alpha_{1},\ldots,\alpha_{n}},$ (2.28)
from the properties of the spectrum $\sigma(A_{0,\ldots,0})$. This subject was
studied for instance by T. Kato [25] and F. Rellich [33].
In what follows, we define $|\alpha|:=\alpha_{1}+\ldots+\alpha_{n}$,
$(\alpha\in\mathbb{N}^{n})$,
$r:=\Big{(}\underset{k\in\mathbb{N}}{\rm
inf}\big{\\{}\underset{{\|\alpha\|}_{\infty}>k}{\rm
sup}\sqrt[{|\alpha|}]{{\|A_{\alpha}\|}}\big{\\}}\Big{)}^{-1},$ (2.29)
and for $R>0$
$\Delta_{R}:=\prod_{\nu=1}^{n}B(0,R).$
Then, we have the following
###### Lemma 2.23.
Let ${\left\\{A_{\alpha}\right\\}}$ be a sequence of operators, such that
$A_{\alpha}\in\mathcal{B}(H)$ for each $\alpha\in\mathbb{N}^{n}$. Then, the
series (2.28) is convergent for each $\boldsymbol{z}\in\Delta_{r}$, with $r>0$ given by
(2.29).
###### Proof.
Given $\boldsymbol{z}\in\Delta_{r}$, there exists $\varepsilon>0$ such that
${\big{(}\frac{1}{r}+\varepsilon\big{)}}{|z_{\nu}|}<1,\quad\text{for any
$\nu\in\\{1,\ldots,n\\}$}.$ (2.30)
On the other hand, from (2.29) there exists $k_{0}\in\mathbb{N}$, such that,
for each $k\geqslant k_{0}$
$\underset{{\|\alpha\|}_{\infty}>k}{\rm
sup}\sqrt[|\alpha|]{{\|A_{\alpha}\|}}<\frac{1}{r}+\varepsilon.$
Then, for $\|\alpha\|_{\infty}>k_{0}$
${\|A_{\alpha}\|}<{\big{(}\frac{1}{r}+\varepsilon\big{)}}^{{|\alpha|}},$
and hence we have
${|z_{1}|}^{\alpha_{1}}\ldots{|z_{n}|}^{\alpha_{n}}{\|A_{\alpha}\|}<{\Big{(}\big{(}\frac{1}{r}+\varepsilon\big{)}{|z_{1}|}\Big{)}}^{\alpha_{1}}\ldots{\Big{(}\big{(}\frac{1}{r}+\varepsilon\big{)}{|z_{n}|}\Big{)}}^{\alpha_{n}}.$
Therefore, we obtain
$\displaystyle\sum_{{\|\alpha\|}_{\infty}>k_{0}}{|\boldsymbol{z}|}^{\alpha}{\|A_{\alpha}\|}$
$\displaystyle\leqslant$
$\displaystyle\sum_{\alpha_{1},\ldots,\alpha_{n}=0}^{\infty}{\left\\{{\left[{\left(\frac{1}{r}+\varepsilon\right)}{|z_{1}|}\right]}^{\alpha_{1}}\ldots{\left[{\left(\frac{1}{r}+\varepsilon\right)}{|z_{n}|}\right]}^{\alpha_{n}}\right\\}}$
$\displaystyle=$
$\displaystyle{\left\\{\sum_{\alpha_{1}=0}^{\infty}{\left[{\left(\frac{1}{r}+\varepsilon\right)}{|z_{1}|}\right]}^{\alpha_{1}}\right\\}}\ldots{\left\\{\sum_{\alpha_{n}=0}^{\infty}{\left[{\left(\frac{1}{r}+\varepsilon\right)}{|z_{n}|}\right]}^{\alpha_{n}}\right\\}},$
and due to (2.30) the power series (2.28) is absolutely convergent for each
$\boldsymbol{z}\in\Delta_{r}$. ∎
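As a simple illustration of (2.29) (our example, not taken from [25] or [33]): fix $T\in\mathcal{B}(H)$ with $\|T\|=1$ and $c>0$, and set $A_{\alpha}:=c^{|\alpha|}T$. Then $\sqrt[|\alpha|]{\|A_{\alpha}\|}=c$ for every $\alpha\neq 0$, so (2.29) gives $r=1/c$, and the series (2.28) factors into geometric series:

```latex
\sum_{\alpha\in\mathbb{N}^{n}}\boldsymbol{z}^{\alpha}c^{|\alpha|}\,T
 \;=\;\Big(\prod_{\nu=1}^{n}\sum_{\alpha_{\nu}=0}^{\infty}(c\,z_{\nu})^{\alpha_{\nu}}\Big)T
 \;=\;\Big(\prod_{\nu=1}^{n}\frac{1}{1-c\,z_{\nu}}\Big)T,
 \qquad |z_{\nu}|<\frac{1}{c}\quad(\nu=1,\ldots,n),
```

which converges absolutely exactly on $\Delta_{1/c}=\Delta_{r}$, in agreement with Lemma 2.23.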
One remarks that, for each $r_{0}<r$, the series (2.28) converges uniformly in
$\Delta_{r_{0}}$. Moreover, it follows from definition (2.29) that there
exists $c>0$ such that, for any $\alpha\in\mathbb{N}^{n}$,
${\|A_{\alpha}\|}\leqslant{c}^{{|\alpha|}+1}$.
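The last bound can be justified as follows (a sketch filling in the step): by (2.29) there exists $k_{0}\in\mathbb{N}$ such that $\|A_{\alpha}\|\leqslant(\frac{1}{r}+1)^{|\alpha|}$ whenever $\|\alpha\|_{\infty}>k_{0}$, and the finitely many remaining coefficients are absorbed by one extra factor of $c$. Indeed, with $c:=\max\big\{\frac{1}{r}+1,\;1,\;\max_{\|\alpha\|_{\infty}\leqslant k_{0}}\|A_{\alpha}\|\big\}$,

```latex
\|A_{\alpha}\|\leqslant\Big(\frac{1}{r}+1\Big)^{|\alpha|}\leqslant c^{|\alpha|+1}
  \quad(\|\alpha\|_{\infty}>k_{0}),
\qquad
\|A_{\alpha}\|\leqslant c\leqslant c^{|\alpha|+1}
  \quad(\|\alpha\|_{\infty}\leqslant k_{0}).
```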
Now, let us recall the definition of operator-valued maps of several complex
variables, and after that consider some important results. Let
$\mathcal{O}\subset\mathbb{C}^{n}$ be an open set. A map
$f:\mathcal{O}\to\mathcal{B}(H)$ is called holomorphic in
$\boldsymbol{w}\in\mathcal{O}$, when there exists an open set
$U\subset\mathcal{O}$, $\boldsymbol{w}\in U$, such that $f$ is equal to the
(absolutely convergent) power series in $\boldsymbol{z}-\boldsymbol{w}$, with
coefficients $A_{\alpha}\in\mathcal{B}(H)$, that is
$\displaystyle f(\boldsymbol{z})\equiv f(z_{1},\ldots,z_{n})$
$\displaystyle=\sum_{\alpha\in\mathbb{N}^{n}}(\boldsymbol{z}-\boldsymbol{w})^{\alpha}A_{\alpha}$
$\displaystyle\equiv\sum_{\alpha_{1},\ldots,\alpha_{n}=0}^{\infty}(z_{1}-w_{1})^{\alpha_{1}}\ldots(z_{n}-w_{n})^{\alpha_{n}}A_{\alpha_{1},\ldots,\alpha_{n}}$
for each $\boldsymbol{z}\in U$. Moreover, the function $f$ is called
holomorphic in $\mathcal{O}$, if it is holomorphic for any
$\boldsymbol{w}\in\mathcal{O}$.
Moreover, assume that $A\in\mathcal{B}(H)$ is a symmetric operator and
$\lambda\in\mathbb{R}$ is an eigenvalue of $A$ with finite multiplicity $h$.
Therefore, the operator $A-\lambda I$ is not invertible and there exists a
symmetric operator $R\in\mathcal{B}(H)$, uniquely defined, such that
$\displaystyle R(A-\lambda I)f$ $\displaystyle=f-\sum_{k=1}^{h}{\langle
f,\psi_{k}\rangle}\psi_{k},\quad\text{for each $f\in H$, and}$ (2.31)
$\displaystyle R\psi_{k}$ $\displaystyle=0,\quad\text{for all
$k\in\\{1,\ldots,h\\}$},$
where $\\{\psi_{1},\ldots,\psi_{h}\\}$ is an orthonormal basis of ${\rm
Ker}(A-\lambda I)$. The operator $R$ is called a pseudo-inverse of $A-\lambda
I$, and one observes that $AR=RA$.
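To fix ideas, consider an illustrative finite-dimensional case (our example, not from the text): let $H=\mathbb{C}^{m}$ and let $A=\sum_{j}\mu_{j}P_{j}$ be the spectral decomposition of a symmetric matrix, with orthogonal eigenprojections $P_{j}$, and suppose $\lambda=\mu_{j_{0}}$ has multiplicity $h$. Then the pseudo-inverse of $A-\lambda I$ is

```latex
R=\sum_{\mu_{j}\neq\lambda}\frac{1}{\mu_{j}-\lambda}\,P_{j},
\qquad
R(A-\lambda I)f=\sum_{\mu_{j}\neq\lambda}P_{j}f
 \;=\; f-P_{j_{0}}f
 \;=\; f-\sum_{k=1}^{h}\langle f,\psi_{k}\rangle\psi_{k},
```

and $R\psi_{k}=0$ for $k=1,\ldots,h$, in accordance with (2.31); the identity $AR=RA$ is immediate since each $P_{j}$ commutes with $A$.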
It is also important to consider the following results on complex-valued
functions.
###### Lemma 2.24 (Osgood’s Lemma).
Let $\mathcal{O}\subset\mathbb{C}^{n}$ be an open set, and
$f:\mathcal{O}\to\mathbb{C}$ a continuous function that is holomorphic in each
variable separately. Then, the function $f$ is holomorphic.
Then, in order to state the Weierstrass Preparation Theorem, let us recall
the concept of a Weierstrass polynomial. A complex function
$W(\varrho,\boldsymbol{z})$, which is holomorphic in a neighborhood of
$(0,\boldsymbol{0})\in\mathbb{C}\\!\times\\!\mathbb{C}^{n}$, is called a
Weierstrass polynomial of degree $m$, when
$W(\varrho,\boldsymbol{z})=\varrho^{m}+a_{1}(\boldsymbol{z})\varrho^{m-1}+\ldots+a_{m-1}(\boldsymbol{z})\varrho+a_{m}(\boldsymbol{z}),$
where each $a_{i}(\boldsymbol{z})$, $(i=1,\ldots,m)$, is a holomorphic
function in a neighborhood of $\boldsymbol{0}\in\mathbb{C}^{n}$ that vanishes at
$\boldsymbol{z}=\boldsymbol{0}$. Then, we have the following
###### Theorem 2.25 (Weierstrass Preparation Theorem).
Let $m$ be a positive integer, and $F(\varrho,\boldsymbol{z})$ holomorphic in
a neighborhood of $(0,\boldsymbol{0})\in\mathbb{C}\\!\times\\!\mathbb{C}^{n}$
such that, the mapping $\varrho\mapsto F(\varrho,\boldsymbol{0})/\varrho^{m}$
is holomorphic in a neighborhood of $0\in\mathbb{C}$ and is non-zero at $0$.
Then, there exist a Weierstrass polynomial $W(\varrho,\boldsymbol{z})$ of
degree $m$, and a holomorphic function $E(\varrho,\boldsymbol{z})$ which does
not vanish in a neighborhood $U$ of $(0,\boldsymbol{0})$, such that, for all
$(\varrho,\boldsymbol{z})\in U$
$F(\varrho,\boldsymbol{z})=W(\varrho,\boldsymbol{z})E(\varrho,\boldsymbol{z}).$
###### Proof.
See S. G. Krantz, H. R. Parks [26, p. 96]. ∎
At this point, we are in a position to establish the main result of this
section, that is to say, the perturbation theory for bounded operators with
isolated eigenvalues of finite multiplicity. The theorem considered here is a
convenient and direct version for our purposes in this paper.
###### Theorem 2.26.
Let $H$ be a Hilbert space, and let $\\{A_{\alpha}\\}$ be a sequence of operators
with $A_{\alpha}\in\mathcal{B}(H)$ for each $\alpha\in\mathbb{N}^{n}$. Consider the
power series of $n-$complex variables $\boldsymbol{z}=(z_{1},\ldots,z_{n})$
with coefficients $A_{\alpha}$, which is absolutely convergent in a
neighborhood $\mathcal{O}$ of $\boldsymbol{z}=\boldsymbol{0}$. Define the
holomorphic map $A:\mathcal{O}\to\mathcal{B}(H)$,
$A(\boldsymbol{z}):=\sum_{\alpha\in\mathbb{N}^{n}}\boldsymbol{z}^{\alpha}A_{\alpha}$
and assume that it is symmetric. If $\lambda$ is an eigenvalue of
$A_{0}\equiv A(\boldsymbol{0})$ with finite multiplicity $h$ (and respective
eigenvectors $\psi_{i}$, $i=1,\ldots,h$), then there exist a neighborhood
$U\subset\mathcal{O}$ of ${\boldsymbol{0}}$, and holomorphic functions
$\displaystyle\boldsymbol{z}\in U$
$\displaystyle\,\mapsto\,\lambda_{1}(\boldsymbol{z}),\lambda_{2}(\boldsymbol{z}),\ldots,\lambda_{h}(\boldsymbol{z})\in\mathbb{R},$
$\displaystyle\boldsymbol{z}\in U$
$\displaystyle\,\mapsto\,\psi_{1}(\boldsymbol{z}),\psi_{2}(\boldsymbol{z}),\ldots,\psi_{h}(\boldsymbol{z})\in
H\setminus\\{0\\},$
satisfying for each $\boldsymbol{z}\in U$ and $i\in\\{1,\ldots,h\\}:$
* $(i)$
$A(\boldsymbol{z})\psi_{i}(\boldsymbol{z})=\lambda_{i}(\boldsymbol{z})\psi_{i}(\boldsymbol{z})$,
* $(ii)$
${\lambda_{i}(\boldsymbol{0})=\lambda}$,
* $(iii)$
${{\rm dim}{\\{w\in
H\;;\;A(\boldsymbol{z})w=\lambda_{i}(\boldsymbol{z})w\\}}\leqslant h}$.
Moreover, if there exists $d>0$ such that
$\sigma(A_{0})\cap(\lambda-d,\lambda+d)={\left\\{\lambda\right\\}},$
then for each $d^{\prime}\in(0,d)$ there exists a neighborhood $W\subset U$ of
$\boldsymbol{0}$, such that
$\sigma(A(\boldsymbol{z}))\cap(\lambda-d^{\prime},\lambda+d^{\prime})={\left\\{\lambda_{1}(\boldsymbol{z}),\ldots,\lambda_{h}(\boldsymbol{z})\right\\}}$
(2.32)
for all $\boldsymbol{z}\in W$.
###### Proof.
1\. First, we conveniently define
$B(\boldsymbol{z}):=A(\boldsymbol{z})-A_{0}=\sum_{{|\alpha|}\not=0}\boldsymbol{z}^{\alpha}A_{\alpha}.$
(2.33)
Then, there exists a neighborhood of
$(\varrho,\boldsymbol{z})=(0,\boldsymbol{0})$ such that, the function
$(\varrho,\boldsymbol{z})\,\mapsto\,\sum_{l=0}^{\infty}{\left[R{\left(\varrho-B(\boldsymbol{z})\right)}\right]}^{l}\in\mathcal{B}(H)$
is well defined (see equation (2.31)) and holomorphic. Indeed, first we
recall that there exists $c>0$ such that, for any $\alpha\in\mathbb{N}^{n}$,
${\|A_{\alpha}\|}\leqslant{c}^{{|\alpha|}+1}$. Then, it follows from (2.33)
that
$\displaystyle{\|B(\boldsymbol{z})\|}$
$\displaystyle\leqslant\sum_{{|\alpha|}\not=0}{|z_{1}|}^{\alpha_{1}}\ldots{|z_{n}|}^{\alpha_{n}}{\|A_{\alpha}\|}\leqslant\sum_{{|\alpha|}\not=0}{|\boldsymbol{z}|}^{{|\alpha|}}c^{{|\alpha|}+1}$
$\displaystyle=\sum_{k=1}^{\infty}{\sum_{{|\alpha|}=k}{|\boldsymbol{z}|}^{{|\alpha|}}c^{{|\alpha|}+1}}=\sum_{k=1}^{\infty}{\sum_{{|\alpha|}=k}{|\boldsymbol{z}|}^{k}c^{k+1}}$
$\displaystyle=\sum_{k=1}^{\infty}{\left(\\#{\left\\{\alpha\in\mathbb{N}^{n}\;;\;{|\alpha|}=k\right\\}}{|\boldsymbol{z}|}^{k}c^{k+1}\right)}$
$\displaystyle\leqslant\sum_{k=1}^{\infty}(k+1)^{n}{|\boldsymbol{z}|}^{k}c^{k+1}={|\boldsymbol{z}|}c^{2}\sum_{k=1}^{\infty}(k+1)^{n}{|\boldsymbol{z}|}^{k-1}c^{k-1}$
$\displaystyle\leqslant{|\boldsymbol{z}|}c^{2}\sum_{k=0}^{\infty}(k+2)^{n}{|\boldsymbol{z}|}^{k}c^{k}.$
Therefore, it follows that
$\sum_{k=0}^{\infty}(k+2)^{n}{|\boldsymbol{z}|}^{k}c^{k}$ is absolutely
convergent for each $\boldsymbol{z}\in
B{\left(\boldsymbol{0},\frac{1}{4^{n}c}\right)}$. Moreover, there exists
$\tilde{c}>0$, such that
$\big{|}\sum_{k=0}^{\infty}(k+2)^{n}{|\boldsymbol{z}|}^{k}c^{k}\big{|}\leqslant\tilde{c}\quad\text{for
each $\boldsymbol{z}\in B(\boldsymbol{0},\frac{1}{4^{n+1}c})$}.$
Hence we have from (2.33) that
$\displaystyle{\left\|R(\varrho-B(\boldsymbol{z}))\right\|}$
$\displaystyle\leq{\|R\|}({|\varrho|}+{|\boldsymbol{z}|}c^{2}\tilde{c})$
(2.34) $\displaystyle\leq{\|R\|}\ {\rm
max}{\left\\{1,c^{2}\tilde{c}\right\\}}({|\varrho|}+{|\boldsymbol{z}|}),$
for ${\varrho\in\mathbb{C}}$ and ${\boldsymbol{z}\in
B{\left(\boldsymbol{0},\frac{1}{4^{n+1}c}\right)}}$. In what follows, we define
$r:=\min\Big{\\{}\frac{1}{8{\|R\|}{\rm
max}{\left\\{1,c^{2}\tilde{c}\right\\}}},\frac{1}{4^{n+1}c}\Big{\\}},\quad\Delta_{r}:=B(0,r)\\!\times\\!B(\boldsymbol{0},r)\subset\mathbb{C}\\!\times\\!\mathbb{C}^{n}.$
(2.35)
Then, for any $m_{1},m_{2}\in\mathbb{N}$ with $m_{1}>m_{2}$, and all
$(\varrho,\boldsymbol{z})\in\Delta_{r}$, we have
$\displaystyle{\|\sum_{l=0}^{m_{1}}{\left[R(\varrho-B(\boldsymbol{z}))\right]}^{l}-\sum_{l=0}^{m_{2}}{\left[R(\varrho-B(\boldsymbol{z}))\right]}^{l}\|}$
$\displaystyle\leq\sum_{l=m_{2}+1}^{m_{1}}{\|R(\varrho-B(\boldsymbol{z}))\|}^{l}$
$\displaystyle\leq\sum_{l=m_{2}+1}^{m_{1}}{\left(\frac{1}{4}\right)}^{l}.$
Consequently, for any $(\varrho,\boldsymbol{z})\in\Delta_{r}$,
$\\{\sum_{l=0}^{m}{\left[R(\varrho-B(\boldsymbol{z}))\right]}^{l}\\}_{m\in\mathbb{N}}$
is a Cauchy sequence in $\mathcal{B}(H)$. Therefore, the mapping
$(\varrho,\boldsymbol{z})\in\Delta_{r}\;\mapsto\;\sum_{l=0}^{\infty}{\left[R(\varrho-B(\boldsymbol{z}))\right]}^{l}$
(2.36)
is holomorphic, since it is the uniform limit of holomorphic functions.
2\. Now, for $i,j=1,\ldots,h$ and $(\varrho,\boldsymbol{z})\in\Delta_{r}$, let
us consider
$f_{ij}(\varrho,\boldsymbol{z})=\Big{\langle}\sum_{l=0}^{\infty}(\varrho-B(\boldsymbol{z})){\left[R(\varrho-B(\boldsymbol{z}))\right]}^{l}\psi_{i},\psi_{j}\Big{\rangle}.$
Therefore, the function $F:\Delta_{r}\rightarrow\mathbb{C}$, defined by
$F(\varrho,\boldsymbol{z}):={\rm
det}{\left[{\left(f_{ij}(\varrho,\boldsymbol{z})\right)}\right]}$ is
holomorphic. In fact, $F(\varrho,\boldsymbol{z})$ is real-valued when
$\varrho\in\mathbb{R}$. Moreover, $F(\varrho,\boldsymbol{0})={\rm
det}{\left[\varrho\,(\delta_{ij})\right]}=\varrho^{h}$ for each $\varrho\in
B(0,r)$, where $\delta_{ij}$ is the Kronecker delta. Indeed, we have
$\displaystyle f_{ij}(\varrho,\boldsymbol{0})$ $\displaystyle=$
$\displaystyle\Big{\langle}\sum_{l=0}^{\infty}(\varrho-B(\boldsymbol{0})){\left[R(\varrho-B(\boldsymbol{0}))\right]}^{l}\psi_{i},\psi_{j}\Big{\rangle}$
$\displaystyle=$
$\displaystyle\left\langle\sum_{l=0}^{\infty}\varrho^{l+1}R^{l}\psi_{i},\psi_{j}\right\rangle$
$\displaystyle=$
$\displaystyle\left\langle\varrho\,\psi_{i},\psi_{j}\right\rangle+\sum_{l=1}^{\infty}\left\langle\varrho^{l+1}R^{l}\psi_{i},\psi_{j}\right\rangle=\varrho\,\delta_{ij},$
where we used that $R\psi_{i}=0$, and hence $R^{l}\psi_{i}=0$ for every $l\geq 1$; from this the result follows.
3\. At this point, we show that there exist $h$ holomorphic functions
$\varrho_{k}(\boldsymbol{z})$, $(k=1,\ldots,h)$, defined in a neighborhood of
$\boldsymbol{z}=\boldsymbol{0}$, such that
$\lim_{\boldsymbol{z}\to\boldsymbol{0}}\varrho_{k}(\boldsymbol{z})=0,\quad\text{for
$k\in\\{1,\ldots,h\\}$}.$
Indeed, applying Theorem 2.25 (Weierstrass Preparation Theorem) there exists a
Weierstrass polynomial of degree $h$
$W(\varrho,\boldsymbol{z})=\varrho^{h}+a_{1}(\boldsymbol{z})\varrho^{h-1}+\ldots+a_{h-1}(\boldsymbol{z})\varrho+a_{h}(\boldsymbol{z}),$
and also a holomorphic function $E(\varrho,\boldsymbol{z})$, which does not
vanish in a neighborhood $U\times V$ of $(0,\boldsymbol{0})$, with $U\subset
B(0,r)\subset\mathbb{C}$ and $V\subset
B(\boldsymbol{0},r)\subset\mathbb{C}^{n}$, such that for each
$(\varrho,\boldsymbol{z})\in U\\!\times\\!V$
$F(\varrho,\boldsymbol{z})=E(\varrho,\boldsymbol{z}){\left(\varrho^{h}+a_{1}(\boldsymbol{z})\varrho^{h-1}+\ldots+a_{h-1}(\boldsymbol{z})\varrho+a_{h}(\boldsymbol{z})\right)}.$
Since the coefficients of the Weierstrass polynomial are holomorphic functions
which vanish at $\boldsymbol{z}=\boldsymbol{0}$, there exist holomorphic
functions $\varrho_{k}(\boldsymbol{z})$, such that
$\displaystyle F(\varrho,\boldsymbol{z})$
$\displaystyle=E(\varrho,\boldsymbol{z})\,\prod_{k=1}^{h}{\left(\varrho-\varrho_{k}(\boldsymbol{z})\right)},$
(2.37)
$\displaystyle\lim_{\boldsymbol{z}\to\boldsymbol{0}}\varrho_{k}(\boldsymbol{z})$
$\displaystyle=0,\quad(k=1,\ldots,h).$
4\. At this point, let us show that, for $k\in\\{1,\ldots,h\\}$, there exists a
map $\psi_{k}(\boldsymbol{z})\in H\setminus\\{0\\}$ such that
$A(\boldsymbol{z})\psi_{k}(\boldsymbol{z})=(\lambda+\varrho_{k}(\boldsymbol{z}))\psi_{k}(\boldsymbol{z})$,
for each $\boldsymbol{z}$ in a neighborhood of $\boldsymbol{0}$. Indeed, let
$k\in\\{1,\ldots,h\\}$ be fixed. From item 3, there exists a set
$V\subset\mathbb{C}^{n}$, which is a neighborhood of
$\boldsymbol{z}=\boldsymbol{0}$, such that
${\rm
det}{\left[{\left(f_{ij}(\varrho_{k}(\boldsymbol{z}),\boldsymbol{z})\right)}\right]}=0$
for each $\boldsymbol{z}\in V$. Therefore, for each $\boldsymbol{z}\in V$ the
linear system
$\left(f_{ji}(\varrho_{k}(\boldsymbol{z}),\boldsymbol{z})\right)(c_{1},\ldots,c_{h})^{T}=0$
has a non-trivial solution. Consequently, there exist $h$ holomorphic
functions $\boldsymbol{z}\in V\mapsto
c_{1}^{k}(\boldsymbol{z}),\ldots,c_{h}^{k}(\boldsymbol{z})$, such that for all
$j=1,\ldots,h$
$\sum_{i=1}^{h}f_{ij}(\varrho_{k}(\boldsymbol{z}),\boldsymbol{z})\,c_{i}^{k}(\boldsymbol{z})=0,$
and without loss of generality we may assume
$\sum_{i=1}^{h}{|c_{i}^{k}(\boldsymbol{z})|}^{2}=1.$ (2.38)
From equation (2.37) it is possible to find a neighborhood $\tilde{V}$ of
$\boldsymbol{0}$, which is compactly embedded in $V$, such that
$\underset{\boldsymbol{z}\in\tilde{V}}{\rm
sup}{|\varrho_{k}(\boldsymbol{z})|}<\frac{1}{8{\|R\|}{\rm
max}(1,c^{2}\tilde{c})},$
for all $k\in\\{1,\ldots,h\\}$. Hence we
obtain for each $\boldsymbol{z}\in\tilde{V}$
${\|R(\varrho_{k}(\boldsymbol{z})-B(\boldsymbol{z}))\|}\leq{\|R\|}{\rm
max}(1,c^{2}\tilde{c}){\left({|\varrho_{k}(\boldsymbol{z})|}+{|\boldsymbol{z}|}\right)}\leq\frac{1}{4},$
(2.39)
and then
$\sum_{l=0}^{\infty}{\|R(\varrho_{k}(\boldsymbol{z})-B(\boldsymbol{z}))\|}^{l}\leqslant\frac{4}{3}.$
(2.40)
Now, we define for any $\boldsymbol{z}\in\tilde{V}$
$\phi_{k}(\boldsymbol{z}):=\sum_{i=1}^{h}c_{i}^{k}(\boldsymbol{z})\psi_{i},\quad\text{and}\quad\psi_{k}(\boldsymbol{z}):=\sum_{l=0}^{\infty}{\left[R(\varrho_{k}(\boldsymbol{z})-B(\boldsymbol{z}))\right]}^{l}\phi_{k}(\boldsymbol{z}).$
Therefore, we have
$\displaystyle\psi_{k}(\boldsymbol{z})$ $\displaystyle=$
$\displaystyle\phi_{k}(\boldsymbol{z})+\sum_{l=1}^{\infty}{\left[R(\varrho_{k}(\boldsymbol{z})-B(\boldsymbol{z}))\right]}^{l}\phi_{k}(\boldsymbol{z})$
$\displaystyle=$
$\displaystyle\phi_{k}(\boldsymbol{z})+{\left[R(\varrho_{k}(\boldsymbol{z})-B(\boldsymbol{z}))\right]}\sum_{l=1}^{\infty}{\left[R(\varrho_{k}(\boldsymbol{z})-B(\boldsymbol{z}))\right]}^{l-1}\phi_{k}(\boldsymbol{z})$
$\displaystyle=$
$\displaystyle\phi_{k}(\boldsymbol{z})+{\left[R(\varrho_{k}(\boldsymbol{z})-B(\boldsymbol{z}))\right]}\psi_{k}(\boldsymbol{z}),$
and it follows that
$\displaystyle(A_{0}-\lambda)\psi_{k}(\boldsymbol{z})$
$\displaystyle=(A_{0}-\lambda)\phi_{k}(\boldsymbol{z})+(A_{0}-\lambda){\left[R(\varrho_{k}(\boldsymbol{z})-B(\boldsymbol{z}))\right]}\psi_{k}(\boldsymbol{z})$
(2.41)
$\displaystyle=\sum_{i=1}^{h}c_{i}^{k}(\boldsymbol{z})(A_{0}-\lambda)\psi_{i}+(A_{0}-\lambda)R{\left[(\varrho_{k}(\boldsymbol{z})-B(\boldsymbol{z}))\psi_{k}(\boldsymbol{z})\right]}$
$\displaystyle=R(A_{0}-\lambda){\left[(\varrho_{k}(\boldsymbol{z})-B(\boldsymbol{z}))\psi_{k}(\boldsymbol{z})\right]}$
$\displaystyle=(\varrho_{k}(\boldsymbol{z})-B(\boldsymbol{z}))\psi_{k}(\boldsymbol{z})-\sum_{j=1}^{h}\left\langle(\varrho_{k}(\boldsymbol{z})-B(\boldsymbol{z}))\psi_{k}(\boldsymbol{z}),\psi_{j}\right\rangle\psi_{j}$
$\displaystyle=(\varrho_{k}(\boldsymbol{z})-B(\boldsymbol{z}))\psi_{k}(\boldsymbol{z})$
since
$\displaystyle\left\langle(\varrho_{k}(\boldsymbol{z})\right.$
$\displaystyle-\left.B(\boldsymbol{z}))\psi_{k}(\boldsymbol{z}),\psi_{j}\right\rangle$
$\displaystyle=\sum_{i=1}^{h}\Big{\langle}\sum_{l=0}^{\infty}(\varrho_{k}(\boldsymbol{z})-B(\boldsymbol{z})){\left[R(\varrho_{k}(\boldsymbol{z})-B(\boldsymbol{z}))\right]}^{l}\psi_{i},\psi_{j}\Big{\rangle}c_{i}^{k}(\boldsymbol{z})$
$\displaystyle=\sum_{i=1}^{h}f_{ij}(\varrho_{k}(\boldsymbol{z}),\boldsymbol{z})\,c_{i}^{k}(\boldsymbol{z})=0.$
Thus, for each $\boldsymbol{z}\in\tilde{V}$,
$A(\boldsymbol{z})\psi_{k}(\boldsymbol{z})=(\lambda+\varrho_{k}(\boldsymbol{z}))\psi_{k}(\boldsymbol{z})$.
On the other hand,
$\psi_{k}(\boldsymbol{z})=\phi_{k}(\boldsymbol{z})+{\left[R(\varrho_{k}(\boldsymbol{z})-B(\boldsymbol{z}))\right]}\sum_{l=1}^{\infty}{\left[R(\varrho_{k}(\boldsymbol{z})-B(\boldsymbol{z}))\right]}^{l-1}\phi_{k}(\boldsymbol{z}),$
hence from (2.38), (2.39), and (2.40), we have for each
$\boldsymbol{z}\in\tilde{V}$
$\displaystyle{\|\psi_{k}(\boldsymbol{z})-\phi_{k}(\boldsymbol{z})\|}$
$\displaystyle\leq$
$\displaystyle{\|R(\varrho_{k}(\boldsymbol{z})-B(\boldsymbol{z}))\|}{\|\sum_{l=0}^{\infty}{\left[R(\varrho_{k}(\boldsymbol{z})-B(\boldsymbol{z}))\right]}^{l}\|}{\|\phi_{k}(\boldsymbol{z})\|}$
$\displaystyle\leq$
$\displaystyle\frac{4}{3}{\|R(\varrho_{k}(\boldsymbol{z})-B(\boldsymbol{z}))\|}\leq\frac{1}{3}.$
Consequently, for each $\boldsymbol{z}\in\tilde{V}$ we have
$\psi_{k}(\boldsymbol{z})\not=0$, since
$1={\|\phi_{k}(\boldsymbol{z})\|}\leq{\|\phi_{k}(\boldsymbol{z})-\psi_{k}(\boldsymbol{z})\|}+{\|\psi_{k}(\boldsymbol{z})\|}\leq\frac{1}{3}+{\|\psi_{k}(\boldsymbol{z})\|}.$
5\. Now, let us show item $(iii)$ of the statement, that is,
${{\rm dim}{\\{w\in
H\;;\;A(\boldsymbol{z})w=\lambda_{i}(\boldsymbol{z})w\\}}\leqslant h}.$
From the previous item, there exists
$\lambda_{k}(\boldsymbol{z})=\lambda+\varrho_{k}(\boldsymbol{z})$,
$k\in\\{1,\ldots,h\\}$, an eigenvalue of the operator $A(\boldsymbol{z})$, for
$\boldsymbol{z}$ in a neighborhood of $\boldsymbol{z}=\boldsymbol{0}$. We set
$\lambda(\boldsymbol{z})=\lambda_{k}(\boldsymbol{z})$, for any
$k\in\\{1,\ldots,h\\}$ fixed, and let $\psi(\boldsymbol{z})$ be any function
satisfying
$A(\boldsymbol{z})\psi(\boldsymbol{z})=\lambda(\boldsymbol{z})\psi(\boldsymbol{z}),$
which is not necessarily the eigenfunction $\psi_{k}(\boldsymbol{z})$. Then,
we are going to show that there exist a neighborhood of
$\boldsymbol{z}=\boldsymbol{0}$ and, for each $\boldsymbol{z}$ in this
neighborhood, an invertible holomorphic operator
$S(\boldsymbol{z})\in\mathcal{B}(H)$, such that,
$\psi(\boldsymbol{z})\in{\rm
span}{\big{\\{}S(\boldsymbol{z})\psi_{1},S(\boldsymbol{z})\psi_{2},\ldots,S(\boldsymbol{z})\psi_{h}\big{\\}}}.$
(2.42)
Indeed, to show (2.42) let us define
$\varrho(\boldsymbol{z}):=\lambda(\boldsymbol{z})-\lambda$, then we have
${\big{(}\varrho(\boldsymbol{z})I-B(\boldsymbol{z})\big{)}}\psi(\boldsymbol{z})={\big{(}A_{0}-\lambda\big{)}}\psi(\boldsymbol{z}).$
Hence from the first equation in (2.31), it follows that
$R{\big{(}\varrho(\boldsymbol{z})I-B(\boldsymbol{z})\big{)}}\psi(\boldsymbol{z})=\psi(\boldsymbol{z})-\sum_{i=1}^{h}{\langle\psi(\boldsymbol{z}),\psi_{i}\rangle}\psi_{i},$
or equivalently
${\big{[}I-R{\big{(}\varrho(\boldsymbol{z})I-B(\boldsymbol{z})\big{)}}\big{]}}\psi(\boldsymbol{z})=\sum_{i=1}^{h}{\langle\psi(\boldsymbol{z}),\psi_{i}\rangle}\psi_{i}.$
On the other hand, from (2.34) it is possible to find a neighborhood $V$ of
$\boldsymbol{z}=\boldsymbol{0}$, such that,
$\|R\big{(}\varrho(\boldsymbol{z})I-B(\boldsymbol{z})\big{)}\|<1.$
Therefore, there exists an invertible operator
$S(\boldsymbol{z})={\big{[}I-R{\big{(}\varrho(\boldsymbol{z})I-B(\boldsymbol{z})\big{)}}\big{]}}^{-1}=\sum_{\nu=0}^{\infty}{\big{[}R{\big{(}\varrho(\boldsymbol{z})I-B(\boldsymbol{z})\big{)}}\big{]}}^{\nu},$
and hence
$\psi(\boldsymbol{z})=S(\boldsymbol{z}){\left(\sum_{i=1}^{h}{\langle\psi(\boldsymbol{z}),\psi_{i}\rangle}\psi_{i}\right)}=\sum_{i=1}^{h}{\langle\psi(\boldsymbol{z}),\psi_{i}\rangle}{\big{[}S(\boldsymbol{z})\psi_{i}\big{]}}.$
6\. Finally, we show that the perturbed eigenvalues are isolated. To this end,
we consider
$N(\boldsymbol{z}):={\rm
span}{\left\\{\psi_{1}(\boldsymbol{z}),\psi_{2}(\boldsymbol{z}),\ldots,\psi_{h}(\boldsymbol{z})\right\\}},$
the operator $P(\boldsymbol{z}):H\to H$, which is a projection on
$N(\boldsymbol{z})$, given by
$P(\boldsymbol{z})u=\sum_{i=1}^{h}\left\langle
u,\psi_{i}(\boldsymbol{z})\right\rangle\psi_{i}(\boldsymbol{z}),$
and for $d>0$ the operator $D(\boldsymbol{z}):H\to H$, defined by
$D(\boldsymbol{z}):=A(\boldsymbol{z})-2dP(\boldsymbol{z}).$
One observes that
$D(\boldsymbol{z})u=\sum_{i=1}^{h}(\lambda_{i}(\boldsymbol{z})-2d)\left\langle
u_{1},\psi_{i}(\boldsymbol{z})\right\rangle\psi_{i}(\boldsymbol{z})+A(\boldsymbol{z})u_{2},$
(2.43)
where we have used the direct sum $u=u_{1}+u_{2}$, $u_{1}\in
N(\boldsymbol{z})$ and $u_{2}\in N(\boldsymbol{z})^{\perp}$.
Claim 1.
* a)
For
$\xi\in\mathbb{R}\setminus{\left\\{\lambda_{1}(\boldsymbol{z}),\lambda_{2}(\boldsymbol{z}),\ldots,\lambda_{h}(\boldsymbol{z})\right\\}}$,
$D(\boldsymbol{z})-\xi\;\,\text{is
bijective}\;\Rightarrow\;A(\boldsymbol{z})-\xi\;\,\text{is bijective}.$
* b)
For
$\xi\in\mathbb{R}\setminus{\left\\{\lambda_{1}(\boldsymbol{z})-2d,\lambda_{2}(\boldsymbol{z})-2d,\ldots,\lambda_{h}(\boldsymbol{z})-2d\right\\}}$,
$A(\boldsymbol{z})-\xi\;\,\text{is
bijective}\;\Rightarrow\;D(\boldsymbol{z})-\xi\;\,\text{is bijective}.$
Proof of Claim 1. First, we show item (a). Let
$\xi\in\mathbb{R}\setminus{\left\\{\lambda_{1}(\boldsymbol{z}),\ldots,\lambda_{h}(\boldsymbol{z})\right\\}}$
be such that, $D(\boldsymbol{z})-\xi$ is bijective. Then, we must show that
$A(\boldsymbol{z})-\xi$ is injective and surjective:
Injective. Let $u\in{\rm Ker}(A(\boldsymbol{z})-\xi)$. Since $A(\boldsymbol{z})$ is self-adjoint and $\xi\in\mathbb{R}$,
$(\lambda_{i}(\boldsymbol{z})-\xi)\left\langle u,\psi_{i}(\boldsymbol{z})\right\rangle=\left\langle u,(A(\boldsymbol{z})-\xi)\psi_{i}(\boldsymbol{z})\right\rangle=\left\langle(A(\boldsymbol{z})-\xi)u,\psi_{i}(\boldsymbol{z})\right\rangle=0,$
and since $\xi\not=\lambda_{i}(\boldsymbol{z})$, for $i\in\\{1,\ldots,h\\}$, we have
$\left\langle u,\psi_{i}(\boldsymbol{z})\right\rangle=0$ for all
$i\in\\{1,\ldots,h\\}$. Therefore, $u\in N(\boldsymbol{z})^{\perp}$ and from
(2.43),
$(D(\boldsymbol{z})-\xi)u=(A(\boldsymbol{z})-\xi)u=0.$
Since $D(\boldsymbol{z})-\xi$ is bijective, we obtain $u=0$.
Surjective. Applying the surjectivity of $D(\boldsymbol{z})-\xi$, for each $v\in
H$ there exists $u\in H$, such that
$(D(\boldsymbol{z})-\xi)u=v.$ (2.44)
On the other hand, we write $u=u_{1}+u_{2}$, with $u_{1}\in N(\boldsymbol{z})$
and $u_{2}\in N(\boldsymbol{z})^{\perp}$, hence from equations (2.43) and
(2.44), we obtain
$v=\sum_{i=1}^{h}(\lambda_{i}(\boldsymbol{z})-2d-\xi)\left\langle
u_{1},\psi_{i}(\boldsymbol{z})\right\rangle\psi_{i}(\boldsymbol{z})+(A(\boldsymbol{z})-\xi)u_{2}.$
(2.45)
Moreover, since $\xi\not=\lambda_{i}(\boldsymbol{z})$, for
$i\in\\{1,\ldots,h\\}$, it follows that
$(A(\boldsymbol{z})-\xi){\left[\frac{\psi_{i}(\boldsymbol{z})}{\lambda_{i}(\boldsymbol{z})-\xi}\right]}=\psi_{i}(\boldsymbol{z}),$
and hence applying it in (2.45), we have
$\displaystyle v$ $\displaystyle=$
$\displaystyle\sum_{i=1}^{h}(A(\boldsymbol{z})-\xi){\left[{\left(\frac{\lambda_{i}(\boldsymbol{z})-2d-\xi}{\lambda_{i}(\boldsymbol{z})-\xi}\right)}\left\langle
u_{1},\psi_{i}(\boldsymbol{z})\right\rangle\psi_{i}(\boldsymbol{z})\right]}+(A(\boldsymbol{z})-\xi)u_{2}$
$\displaystyle=$
$\displaystyle(A(\boldsymbol{z})-\xi){\left[\sum_{i=1}^{h}{\left(\frac{\lambda_{i}(\boldsymbol{z})-2d-\xi}{\lambda_{i}(\boldsymbol{z})-\xi}\right)}\left\langle
u_{1},\psi_{i}(\boldsymbol{z})\right\rangle\psi_{i}(\boldsymbol{z})+u_{2}\right]}.$
Thus, the operator $A(\boldsymbol{z})-\xi$ is surjective.
Now, let us show item (b). Let
$\xi\in\mathbb{R}\setminus{\left\\{\lambda_{1}(\boldsymbol{z})-2d,\ldots,\lambda_{h}(\boldsymbol{z})-2d\right\\}}$
be such that, $A(\boldsymbol{z})-\xi$ is bijective. Similarly, we must show
that $D(\boldsymbol{z})-\xi$ is injective and surjective:
Injective. Let $u\in H$ be such that $(D(\boldsymbol{z})-\xi)u=0$. Then
writing $u=u_{1}+u_{2}$, with $u_{1}\in N(\boldsymbol{z})$ and $u_{2}\in
N(\boldsymbol{z})^{\perp}$, it follows from equation (2.43) that
$0=\sum_{i=1}^{h}(\lambda_{i}(\boldsymbol{z})-2d-\xi)\left\langle
u_{1},\psi_{i}(\boldsymbol{z})\right\rangle\psi_{i}(\boldsymbol{z})+(A(\boldsymbol{z})-\xi)u_{2},$
(2.46)
thus $(A(\boldsymbol{z})-\xi)u_{2}\in N(\boldsymbol{z})$. Consequently, we
have
$\displaystyle(A(\boldsymbol{z})-\xi)u_{2}$ $\displaystyle=$ $\displaystyle
P(\boldsymbol{z}){\left[(A(\boldsymbol{z})-\xi)u_{2}\right]}=P(\boldsymbol{z})A(\boldsymbol{z})u_{2}-\xi
P(\boldsymbol{z})u_{2}$ $\displaystyle=$
$\displaystyle\sum_{i=1}^{h}\left\langle
A(\boldsymbol{z})u_{2},\psi_{i}(\boldsymbol{z})\right\rangle\psi_{i}(\boldsymbol{z})-\xi
P(\boldsymbol{z})u_{2}$ $\displaystyle=$
$\displaystyle\sum_{i=1}^{h}\left\langle
u_{2},A(\boldsymbol{z})\psi_{i}(\boldsymbol{z})\right\rangle\psi_{i}(\boldsymbol{z})-\xi
P(\boldsymbol{z})u_{2}$ $\displaystyle=$
$\displaystyle\sum_{i=1}^{h}\lambda_{i}(\boldsymbol{z})\left\langle
u_{2},\psi_{i}(\boldsymbol{z})\right\rangle\psi_{i}(\boldsymbol{z})-\xi
P(\boldsymbol{z})u_{2}=0$
since $u_{2}\in N(\boldsymbol{z})^{\perp}$. By hypothesis
$A(\boldsymbol{z})-\xi$ is injective, thus $u_{2}=0$. Then, from equation
(2.46) we obtain
$\sum_{i=1}^{h}(\lambda_{i}(\boldsymbol{z})-2d-\xi)\left\langle
u_{1},\psi_{i}(\boldsymbol{z})\right\rangle\psi_{i}(\boldsymbol{z})=0,$
and since $\\{\psi_{i}(\boldsymbol{z})\\}_{i=1}^{h}$ is a linearly independent
set of vectors and, by hypothesis,
$\lambda_{i}(\boldsymbol{z})-2d-\xi\not=0$ for all $i\in\\{1,\ldots,h\\}$, we
have $\left\langle u_{1},\psi_{i}(\boldsymbol{z})\right\rangle=0$ for each
$i\in\\{1,\ldots,h\\}$. Therefore, we obtain $u_{1}=0$.
Surjective. Again, applying the surjectivity of $A(\boldsymbol{z})-\xi$, for
each $v\in H$ there exists $u\in H$, such that
$(A(\boldsymbol{z})-\xi)u=v.$ (2.47)
Then, writing $u=u_{1}+u_{2}$, with $u_{1}\in N(\boldsymbol{z})$ and $u_{2}\in
N(\boldsymbol{z})^{\perp}$, from equations (2.43) and (2.47), we have
$v=\sum_{i=1}^{h}(\lambda_{i}(\boldsymbol{z})-\xi){\left\langle
u_{1},\psi_{i}(\boldsymbol{z})\right\rangle}\psi_{i}(\boldsymbol{z})+(D(\boldsymbol{z})-\xi)u_{2}.$
(2.48)
Moreover, since $\xi\not=\lambda_{i}(\boldsymbol{z})-2d$, for
$i\in\\{1,\ldots,h\\}$,
$(D(\boldsymbol{z})-\xi){\left[\frac{\psi_{i}(\boldsymbol{z})}{\lambda_{i}(\boldsymbol{z})-2d-\xi}\right]}=\psi_{i}(\boldsymbol{z})$
and then from (2.48), it follows that
$\displaystyle v$ $\displaystyle=$
$\displaystyle\sum_{i=1}^{h}(D(\boldsymbol{z})-\xi){\left[{\left(\frac{\lambda_{i}(\boldsymbol{z})-\xi}{\lambda_{i}(\boldsymbol{z})-2d-\xi}\right)}{\left\langle
u_{1},\psi_{i}(\boldsymbol{z})\right\rangle}\psi_{i}(\boldsymbol{z})\right]}+(D(\boldsymbol{z})-\xi)u_{2}$
$\displaystyle=$
$\displaystyle(D(\boldsymbol{z})-\xi){\left[\sum_{i=1}^{h}{\left(\frac{\lambda_{i}(\boldsymbol{z})-\xi}{\lambda_{i}(\boldsymbol{z})-2d-\xi}\right)}{\left\langle
u_{1},\psi_{i}(\boldsymbol{z})\right\rangle}\psi_{i}(\boldsymbol{z})+u_{2}\right]}.$
Therefore, the operator $D(\boldsymbol{z})-\xi$ is surjective.
Claim 2. The spectrum of the operator $D(\boldsymbol{0})$ does not contain
elements of the interval $(\lambda-d,\lambda+d)$, i.e.
$\sigma(D(\boldsymbol{0}))\cap(\lambda-d,\lambda+d)=\emptyset.$
Proof of Claim 2. From item (b) of Claim 1, we have
$\sigma(D(\boldsymbol{0}))\subset\sigma(A_{0})\cup\\{\lambda-2d\\}$, and since
$\lambda-2d\notin(\lambda-d,\lambda+d)$, it follows that
$\sigma(D(\boldsymbol{0}))\cap(\lambda-d,\lambda+d)\subset\sigma(A_{0})\cap(\lambda-d,\lambda+d)=\\{\lambda\\}.$
Suppose that $\lambda\in\sigma(D(\boldsymbol{0}))\cap(\lambda-d,\lambda+d)$;
then $\lambda$ is an isolated element of the spectrum of
$D(\boldsymbol{0})$. Therefore, $\lambda$ is an eigenvalue of
$D(\boldsymbol{0})$, but this is not possible since
$D(\boldsymbol{0})-\lambda$ is an injective operator (see the proof of item
(b) of Claim 1). Consequently, we have
$\sigma(D(\boldsymbol{0}))\cap(\lambda-d,\lambda+d)=\emptyset.$
It remains to show (2.32). First, by definition $P(\boldsymbol{z})u$ is
holomorphic for each $\boldsymbol{z}$ in a neighborhood of $\boldsymbol{0}$.
Therefore, the mapping $\boldsymbol{z}\mapsto P(\boldsymbol{z})$ is
holomorphic in this neighborhood. Then, the mapping $\boldsymbol{z}\ \mapsto
D(\boldsymbol{z})\in\mathcal{B}(H)$ is continuous. Moreover, since the subset
of invertible operators in $\mathcal{B}(H)$ is an open set, there exists a
(small) neighborhood of $\boldsymbol{0}$, such that the function
$\boldsymbol{z}\mapsto(D(\boldsymbol{z})-\lambda)^{-1}\in\mathcal{B}(H)$ is
continuous.
On the other hand, there exists $d^{\prime}\in(0,d)$ such that
${\left\|(D(\boldsymbol{0})-\lambda)^{-1}\right\|}\leqslant\frac{1}{{\rm
dist}(\lambda,\sigma(D(\boldsymbol{0})))}\leqslant\frac{1}{d}<\frac{1}{d^{\prime}},$
see Reed, Simon [32, Chapter VIII]. Therefore, by the continuity of the map
$\boldsymbol{z}\mapsto(D(\boldsymbol{z})-\lambda)^{-1}\in\mathcal{B}(H)$,
there exists a neighborhood of $\boldsymbol{0}$, namely $W$, such that for all
$\boldsymbol{z}\in W$
${\left\|(D(\boldsymbol{z})-\lambda)^{-1}\right\|}<\frac{1}{d^{\prime}}.$
Thus for any $u\in H$ and $\boldsymbol{z}\in W$, it follows that
$\displaystyle{\|u\|}$
$\displaystyle={\left\|(D(\boldsymbol{z})-\lambda)^{-1}{\left[{\left(D(\boldsymbol{z})-\lambda\right)}u\right]}\right\|}$
$\displaystyle\leq{\left\|(D(\boldsymbol{z})-\lambda)^{-1}\right\|}{\left\|(D(\boldsymbol{z})-\lambda)u\right\|}<\frac{1}{d^{\prime}}\,{\left\|(D(\boldsymbol{z})-\lambda)u\right\|}.$
Hence for $d^{\prime\prime}\in(0,d^{\prime})$ and
$\xi\in(\lambda-d^{\prime\prime},\lambda+d^{\prime\prime})$, we have
$\displaystyle{\left\|(D(\boldsymbol{z})-\xi)u\right\|}\geq{\left\|(D(\boldsymbol{z})-\lambda)u\right\|}-{\left|\lambda-\xi\right|}{\|u\|}\geq(d^{\prime}-d^{\prime\prime}){\|u\|}.$
Consequently, every $\xi\in(\lambda-d^{\prime\prime},\lambda+d^{\prime\prime})$
is an element of the resolvent of $D(\boldsymbol{z})$, that is,
$\xi\in\rho(D(\boldsymbol{z}))$. Since $d^{\prime\prime}\in(0,d^{\prime})$ is
arbitrary, for each $\boldsymbol{z}\in W$ we have
$(\lambda-d^{\prime},\lambda+d^{\prime})\subset\rho(D(\boldsymbol{z})).$
Finally, by item (a) of Claim 1, for each $\boldsymbol{z}\in W$
$\sigma(A(\boldsymbol{z}))\setminus\\{\lambda_{1}(\boldsymbol{z}),\ldots,\lambda_{h}(\boldsymbol{z})\\}\subset\sigma(D(\boldsymbol{z}))\setminus\\{\lambda_{1}(\boldsymbol{z}),\ldots,\lambda_{h}(\boldsymbol{z})\\},$
and hence
$\big{(}\sigma(A(\boldsymbol{z}))\setminus\\{\lambda_{1}(\boldsymbol{z}),\ldots,\lambda_{h}(\boldsymbol{z})\\}\big{)}\cap(\lambda-d^{\prime},\lambda+d^{\prime})=\emptyset,$
which finishes the proof. ∎
## 3 Bloch Waves Analysis
Bloch waves analysis is important in the theory of solid-state physics. More
precisely, the displacement of an electron in a crystal (periodic setting) is
often described by Bloch waves, and this application is supported by Bloch’s
Theorem, which states that the energy eigenstates for an electron in a crystal
can be written as Bloch waves.
The aim of this section is to extend Bloch waves theory, known so far only for
periodic functions, to the stochastic setting considered here, that is,
stationary functions composed with stochastic deformations, which are used to
describe non-crystalline matter. Therefore, we would like to show that the
electron waves in non-crystalline matter can have a basis consisting
entirely of Bloch wave energy eigenstates (now solutions to a stochastic Bloch
spectral cell equation). Consequently, we are extending the concept of
electronic band structures to non-crystalline matter.
### 3.1 The WKB method
Here we formally obtain the Bloch spectral cell equation (see Definition
3.1), applying the asymptotic Wentzel-Kramers-Brillouin (WKB for short)
expansion method, that is, we assume that the solution of equation (1.1) is
given by a modulated plane wave. More precisely, for each $\varepsilon>0$ let
us assume that, the solution $u_{\varepsilon}(t,x,\omega)$ of the equation
(1.1) has the following asymptotic expansion
$u_{\varepsilon}(t,x,\omega)=e^{2\pi
iS_{\varepsilon}(t,x)}\sum_{k=0}^{\infty}\varepsilon^{k}u_{k}\Big{(}t,x,\Phi^{-1}\left(\frac{x}{\varepsilon},\omega\right),\omega\Big{)},$
(3.49)
where the functions $u_{k}(t,x,y,\omega)$ are conveniently stationary in $y$,
and $S_{\varepsilon}$ is a real valued function to be established a posteriori
(not necessarily a polynomial in $\varepsilon$), which enters the modulated
plane wave (3.49) through the factor $e^{2\pi iS_{\varepsilon}(t,x)}$.
The spatial derivative of the above ansatz (3.49) is
$\displaystyle\nabla u_{\varepsilon}$ $\displaystyle(t,x,\omega)=e^{2i\pi
S_{\varepsilon}(t,x)}\big{(}2i\pi\nabla
S_{\varepsilon}(t,x)\sum_{k=0}^{\infty}\varepsilon^{k}\,u_{k}\big{(}t,x,\Phi^{-1}\left(\frac{x}{\varepsilon},\omega\right),\omega\big{)}$
$\displaystyle\qquad+\sum_{k=0}^{\infty}\varepsilon^{k}\Big{\\{}\left(\partial_{x}u_{k}\right)\Big{(}t,x,\Phi^{-1}\left(\frac{x}{\varepsilon},\omega\right),\omega\Big{)}$
$\displaystyle\qquad+\frac{1}{\varepsilon}(\nabla\Phi)^{-1}\left(\Phi^{-1}\left(\frac{x}{\varepsilon},\omega\right),\omega\right)\left(\partial_{y}u_{k}\right)\Big{(}t,x,\Phi^{-1}\left(\frac{x}{\varepsilon},\omega\right),\omega\Big{)}\Big{\\}}\Big{)}$
$\displaystyle=e^{2i\pi
S_{\varepsilon}(t,x)}\Big{(}\sum_{k=0}^{\infty}\varepsilon^{k}\left(\frac{{\nabla}_{z}}{\varepsilon}+2i\pi\nabla
S_{\varepsilon}(t,x)\right)u_{k}\Big{(}t,x,\Phi^{-1}(\frac{x}{\varepsilon},\omega),\omega\Big{)}$
$\displaystyle\qquad+\sum_{k=0}^{\infty}\varepsilon^{k}\left(\nabla_{x}u_{k}\right)\Big{(}t,x,\Phi^{-1}\left(\frac{x}{\varepsilon},\omega\right),\omega\Big{)}\Big{)}.$
Now, computing the second derivatives of the expansion (3.49) and writing as a
cascade of the power of $\varepsilon$, we have
$\displaystyle e^{-2i\pi S_{\varepsilon}(t,x)}$ $\displaystyle{\rm
div}{\big{(}A{(\Phi^{-1}{\left(\frac{x}{\varepsilon},\omega\right)},\omega)}\nabla
u_{\varepsilon}(t,x,\omega)\big{)}}$ (3.50)
$\displaystyle=\frac{1}{\varepsilon^{2}}\Big{(}{\rm
div}_{\\!z}+2i\pi\varepsilon\nabla
S_{\varepsilon}(t,x)\Big{)}\Big{(}A{\left(\Phi^{-1}(\cdot,\omega),\omega\right)}\Big{(}\nabla_{\\!\\!z}+2i\pi\varepsilon\nabla
S_{\varepsilon}(t,x)\Big{)}$ $\displaystyle\qquad\qquad\qquad\qquad
u_{0}(t,x,\Phi^{-1}(\cdot,\omega),\omega)\Big{)}{\Bigg{\rvert}}_{z=x/\varepsilon}$
$\displaystyle+\frac{1}{\varepsilon}\Big{(}{\rm
div}_{\\!z}+2i\pi\varepsilon\nabla
S_{\varepsilon}(t,x)\Big{)}\Big{(}A{\left(\Phi^{-1}(\cdot,\omega),\omega\right)}\Big{(}\nabla_{\\!\\!z}+2i\pi\varepsilon\nabla
S_{\varepsilon}(t,x)\Big{)}$ $\displaystyle\qquad\qquad\qquad\qquad
u_{1}(t,x,\Phi^{-1}(\cdot,\omega),\omega)\Big{)}{\Big{\rvert}}_{z=x/\varepsilon}+I_{\varepsilon},$
where
$\displaystyle I_{\varepsilon}=\sum_{k=0}^{\infty}\varepsilon^{k}\Big{(}{\rm
div}_{\\!z}+2i\pi\varepsilon\nabla
S_{\varepsilon}(t,x)\Big{)}\Big{(}A{\left(\Phi^{-1}(\cdot,\omega),\omega\right)}\Big{(}\nabla_{\\!\\!z}+2i\pi\varepsilon\nabla
S_{\varepsilon}(t,x)\Big{)}$ $\displaystyle\qquad\qquad
u_{k+2}(t,x,\Phi^{-1}(\cdot,\omega),\omega)\Big{)}{\Big{\rvert}}_{z=x/\varepsilon}$
$\displaystyle+\frac{1}{\varepsilon}\Big{(}{\rm
div}_{\\!z}+2i\pi\varepsilon\nabla
S_{\varepsilon}(t,x)\Big{)}\Big{(}A{\left(\Phi^{-1}(\cdot,\omega),\omega\right)}\nabla_{x}u_{0}(t,x,\Phi^{-1}(\cdot,\omega),\omega)\Big{)}{\Big{\rvert}}_{z=x/\varepsilon}$
$\displaystyle+\sum_{k=0}^{\infty}\varepsilon^{k}\Big{(}{\rm
div}_{\\!z}+2i\pi\varepsilon\nabla
S_{\varepsilon}(t,x)\Big{)}\Big{(}A{\left(\Phi^{-1}(\cdot,\omega),\omega\right)}\nabla_{x}u_{k+1}(t,x,\Phi^{-1}(\cdot,\omega),\omega)\Big{)}{\Big{\rvert}}_{z=x/\varepsilon}$
$\displaystyle+\frac{1}{\varepsilon}{\rm
div}_{\\!x}\Big{(}A{\left(\Phi^{-1}(\cdot,\omega),\omega\right)}\Big{(}\nabla_{\\!\\!z}+2i\pi\varepsilon\nabla
S_{\varepsilon}(t,x)\Big{)}u_{0}(t,x,\Phi^{-1}(\cdot,\omega),\omega)\Big{)}{\Big{\rvert}}_{z=x/\varepsilon}$
$\displaystyle+\sum_{k=0}^{\infty}\varepsilon^{k}{\rm
div}_{\\!x}\Big{(}A{\left(\Phi^{-1}(\cdot,\omega),\omega\right)}\Big{(}\nabla_{\\!\\!z}+2i\pi\varepsilon\nabla
S_{\varepsilon}(t,x)\Big{)}u_{k+1}(t,x,\Phi^{-1}(\cdot,\omega),\omega)\Big{)}{\Big{\rvert}}_{z=x/\varepsilon}$
$\displaystyle+\sum_{k=0}^{\infty}\varepsilon^{k}{\rm
div}_{\\!x}\Big{(}A{\left(\Phi^{-1}(\cdot,\omega),\omega\right)}\nabla_{\\!\\!x}u_{k}{\Big{(}t,x,\Phi^{-1}(\cdot,\omega),\omega\Big{)}}\Big{)}{\Big{\rvert}}_{z=x/\varepsilon}.$
(3.51)
Proceeding in the same way with respect to the temporal derivative, we have
$\displaystyle e^{-2i\pi S_{\varepsilon}(t,x)}\,{\partial}_{t}u_{\varepsilon}$
$\displaystyle\qquad\qquad=\frac{1}{\varepsilon^{2}}\Big{(}2i\pi\varepsilon^{2}{\partial}_{t}S_{\varepsilon}(t,x)\Big{)}u_{0}\Big{(}t,x,\Phi^{-1}\left(\frac{x}{\varepsilon},\omega\right),\omega\Big{)}$
$\displaystyle\qquad\qquad+\frac{1}{\varepsilon}\Big{(}2i\pi\varepsilon^{2}{\partial}_{t}S_{\varepsilon}(t,x)\Big{)}u_{1}\Big{(}t,x,\Phi^{-1}\left(\frac{x}{\varepsilon},\omega\right),\omega\Big{)}$
$\displaystyle\qquad\qquad+\Big{(}2i\pi\varepsilon^{2}{\partial}_{t}S_{\varepsilon}(t,x)\Big{)}\sum_{k=0}^{\infty}\varepsilon^{k}u_{k+2}\Big{(}t,x,\Phi^{-1}\left(\frac{x}{\varepsilon},\omega\right),\omega\Big{)}$
$\displaystyle\qquad\qquad\qquad\qquad+\sum_{k=0}^{\infty}\varepsilon^{k}{\partial}_{t}u_{k}\Big{(}t,x,\Phi^{-1}\left(\frac{x}{\varepsilon},\omega\right),\omega\Big{)}.$
(3.52)
Thus, if we insert the equations (3.50) and (3.52) in (1.1) and compute the
$\varepsilon^{-2}$ order term, we arrive at
$L^{\Phi}(\varepsilon\nabla
S_{\varepsilon}(t,x))u_{0}{\big{(}t,x,\Phi^{-1}(\cdot,\omega),\omega\big{)}}=2\pi\big{(}\varepsilon^{2}\partial_{t}S_{\varepsilon}(t,x)\big{)}u_{0}{\big{(}t,x,\Phi^{-1}(\cdot,\omega),\omega\big{)}},$
where for each $\theta\in\mathbb{R}^{n}$, the linear operator
$L^{\Phi}(\theta)$ is defined by
$\displaystyle L^{\Phi}(\theta)[\cdot]:=$ $\displaystyle-\big{(}{\rm
div}_{\\!z}+2i\pi\theta\big{)}\big{(}A{(\Phi^{-1}(z,\omega),\omega)}{\big{(}\nabla_{\\!\\!z}+2i\pi\theta\big{)}}[\cdot]\big{)}$
(3.53)
$\displaystyle+V{\big{(}\Phi^{-1}\left(z,\omega\right),\omega\big{)}}[\cdot].$
Therefore, $2\pi\Big{(}\varepsilon^{2}\partial_{t}S_{\varepsilon}(t,x)\Big{)}$
is an eigenvalue of $L^{\Phi}(\varepsilon\nabla S_{\varepsilon}(t,x))$.
Consequently, if $\lambda(\theta)$ is any eigenvalue of $L^{\Phi}(\theta)$
(which is sufficiently regular with respect to $\theta$), then the following
(eikonal) Hamilton-Jacobi equation must be satisfied
$2\pi\varepsilon^{2}\partial_{t}S_{\varepsilon}(t,x)-\lambda(\varepsilon\nabla
S_{\varepsilon}(t,x))=0.$
Thus, if we suppose for $t=0$ (companion to (3.49)) the modulated plane wave
initial data
$u_{\varepsilon}(0,x,\omega)=e^{2i\pi\frac{\theta\cdot
x}{\varepsilon}}\sum_{k=0}^{\infty}\varepsilon^{k}u_{k}\Big{(}0,x,\Phi^{-1}\left(\frac{x}{\varepsilon},\omega\right),\omega\Big{)},$
(3.54)
then the unique solution for the above Hamilton-Jacobi equation is, for each
parameter $\theta\in\mathbb{R}^{n}$,
$S_{\varepsilon}(t,x)=\frac{\lambda(\theta)\
t}{2\pi\varepsilon^{2}}+\frac{\theta\cdot x}{\varepsilon}.$ (3.55)
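As a numerical sanity check (not part of the derivation), one can verify that (3.55) solves the eikonal equation above for a smooth Bloch energy; here we take $n=1$ and $\lambda(\theta)=\theta^{2}$, an assumed toy band function, and differentiate by central differences:

```python
import math

# Check of the eikonal relation 2*pi*eps^2 * dS/dt = lambda(eps * grad S)
# for the explicit solution S_eps of (3.55).  The band function
# lambda(theta) = theta^2 is a hypothetical choice; any smooth lambda works.
eps, theta = 0.1, 0.7
lam = lambda th: th * th

def S(t, x):
    return lam(theta) * t / (2 * math.pi * eps**2) + theta * x / eps

h = 1e-6
t0, x0 = 0.3, 1.2
dS_dt = (S(t0 + h, x0) - S(t0 - h, x0)) / (2 * h)   # central difference in t
dS_dx = (S(t0, x0 + h) - S(t0, x0 - h)) / (2 * h)   # central difference in x
residual = 2 * math.pi * eps**2 * dS_dt - lam(eps * dS_dx)
print(abs(residual))  # ~ 0, up to floating-point error
```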
To sum up, the above expansion, that is, the solution $u_{\varepsilon}$ of
equation (1.1) given by (3.49) with initial data (3.54), suggests the
following
###### Definition 3.1 (Bloch or shifted spectral cell equation).
Let $\Phi$ be a stochastic deformation. For any $\theta\in\mathbb{R}^{n}$
fixed, the following time independent asymptotic equation
$\left\\{\begin{array}[]{l}L^{\Phi}(\theta)[\Psi(z,\omega)]=\lambda\
\Psi(z,\omega),\hskip 40.0pt\text{in $\mathbb{R}^{n}\times\Omega$},\\\\[5.0pt]
\hskip
32.0pt\Psi(z,\omega)=\psi{\left(\Phi^{-1}(z,\omega),\omega\right)},\quad\text{$\psi$
is a stationary function},\end{array}\right.$ (3.56)
is called Bloch’s spectral cell equation companion to the Schrödinger equation
in (1.1), where $L^{\Phi}(\theta)$ is given by (3.53). Moreover, each
$\theta\in\mathbb{R}^{n}$ is called a Bloch frequency, $\lambda(\theta)$ is
called a Bloch energy and the corresponding $\Psi(\theta)$ is called a Bloch
wave. Moreover, if $\Phi$ is well understood from the context, then we write
$L\equiv L^{\Phi}$.
The unknown $(\lambda,\Psi)$ in (3.56), which is an eigenvalue-eigenfunction
pair, is obtained by the associated variational formulation, that is
$\displaystyle\langle L(\theta)[F],G\rangle$ (3.57)
$\displaystyle=\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}\\!\\!\\!\\!\\!\\!\\!\\!\\!A(\Phi^{-1}(z,\omega),\omega)(\nabla_{\\!\\!z}+2i\pi\theta)F(z,\omega)\cdot\overline{{(\nabla_{\\!\\!z}+2i\pi\theta)}G(z,\omega)}\,dz\,d\mathbb{P}(\omega)$
$\displaystyle+\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}V{(\Phi^{-1}(z,\omega),\omega)}\
F(z,\omega)\,\overline{G(z,\omega)}\,dz\,d\mathbb{P}(\omega).$
###### Remark 3.2.
One remarks that, $\lambda=\lambda(\theta)\in\mathbb{R}$, that is to say,
$\lambda$ depends on the parameter $\theta$. However, $\lambda$ cannot
depend on $\omega$, since the homogenized effective matrix is obtained from
the Hessian of $\lambda$ at some point $\theta^{*}$, and should be constant.
Therefore, the probabilistic variable $\omega$ cannot be considered as a
fixed parameter in (3.56).
### 3.2 Sobolev spaces on groups
The main motivation to study Sobolev spaces on groups, besides being an
elegant and modern mathematical theory, is related to the eigenvalue problem:
Find $\lambda(\theta)\in\mathbb{R}$ and
$\Psi(\theta)\in\mathcal{H}_{\Phi}\setminus\\{0\\}$ satisfying (3.56).
Indeed, we may use a compactness argument, that is, the space
$\mathcal{H}_{\Phi}$ is compactly embedded in $\mathcal{L}_{\Phi}$, in order
to solve the associated variational formulation (3.57). However, as observed
in Remark 3.2, $\omega\in\Omega$ cannot be fixed; hence we are going to
establish an equivalence between the space $\mathcal{H}_{\Phi}$ and the
Sobolev space on groups, and then consider a related Rellich-Kondrachov’s
Theorem. This is the main issue of this section. Let us recall that the
subject of Sobolev spaces on Abelian locally compact groups was, to the best
of our knowledge, introduced by P. Górka, E. G. Reyes [21].
To begin, we sum up some definitions and properties of topological groups,
which will be used throughout this section. Most of the material can be found
in E. Hewitt, A. Ross [23] and G. B. Folland [19] (with more details).
A nonempty set $G$ endowed with an application, $\ast:G\\!\times\\!G\to G$, is
called a group, when for each $x,y,z\in G$:
* 1.
${(x\ast y)\ast z=x\ast(y\ast z)}$;
* 2.
There exists ${e\in G}$, such that ${x\ast e=e\ast x=x}$;
* 3.
For all ${y\in G}$, there exists ${y^{-1}\in G}$, such that ${y\ast
y^{-1}=y^{-1}\ast y=e}$.
Moreover, if $x\ast y=y\ast x$ for all $x,y\in G$, then $G$ is called an
Abelian group. From now on, we write for simplicity $x\,z$ instead of $x\ast
z$. A topological group is a group $G$ together with a topology, such that
both the group’s binary operation $(x,y)\mapsto x\,y$ and the inversion map
$x\mapsto x^{-1}$ are continuous with respect to the topology. Unless the
contrary is explicitly stated, any group mentioned here is a locally compact
Abelian (LCA for short) group, and we may assume without loss of generality
that the associated topology is Hausdorff (see G. B. Folland [19], Corollary
2.3).
A complex-valued function $\xi:G\to\mathbb{S}^{1}$ is called a character of
$G$, when
$\xi(x\,y)=\xi(x)\xi(y),\quad\quad\text{(for each $x,y\in G$)}.$
We recall that, the set of characters of $G$ is an Abelian group with the
usual product of functions, identity element $e=1$, and inverse element
$\xi^{-1}=\overline{\xi}$. The characters’ group of the topological group $G$,
called the dual group of $G$ and denoted by $G^{\wedge}$, is the set of all
continuous characters, that is to say
$G^{\wedge}:=\\{\xi:G\to\mathbb{S}^{1}\;;\;\text{$\xi$ is a continuous
homomorphism}\\}.$
Moreover, we may endow $G^{\wedge}$ with a topology with respect to which,
$G^{\wedge}$ itself is a LCA group.
We denote by $\mu$, $\nu$ the unique (up to a positive multiplicative
constant) Haar measures in $G$ and $G^{\wedge}$ respectively. The $L^{p}$
spaces over $G$ and its dual are defined as usual, with their respective
measures. Let us recall two important properties when $G$ is compact:
$\displaystyle i)\quad\text{If $\mu(G)=1$, then $G^{\wedge}$ is an orthonormal
set in $L^{2}(G;\mu)$}.$ (3.58) $\displaystyle ii)\quad\text{The dual group
$G^{\wedge}$ is discrete, and $\nu$ is the counting measure}.$
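Property i) can be illustrated numerically on the simplest compact group $G=\mathbb{T}^{1}$ with normalized Haar (Lebesgue) measure, whose characters are $\xi_{m}(x)=e^{2\pi imx}$, $m\in\mathbb{Z}$. The Python sketch below (an illustration only) approximates the $L^{2}$ inner products of two characters:

```python
import cmath

# Orthonormality of the characters xi_m(x) = e^{2 pi i m x} on G = T^1 = [0,1)
# with normalized Haar measure: <xi_m, xi_k> = delta_{mk}.
def inner(m, k, steps=10000):
    # <xi_m, xi_k> = int_0^1 xi_m(x) * conj(xi_k(x)) dx, midpoint rule
    s = 0j
    for i in range(steps):
        x = (i + 0.5) / steps
        s += cmath.exp(2j * cmath.pi * m * x) * cmath.exp(-2j * cmath.pi * k * x)
    return s / steps

print(abs(inner(3, 3)))   # ~ 1 : same character has unit norm
print(abs(inner(3, 5)))   # ~ 0 : distinct characters are orthogonal
```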
One remarks that, the study of Sobolev spaces on LCA groups essentially uses
the concept of Fourier Transform; hence we have the following
###### Definition 3.3.
Given a complex-valued function $f\in L^{1}(G;\mu)$, the function
$\widehat{f}:G^{\wedge}\to\mathbb{C}$, defined by
$\widehat{f}(\xi):=\int_{G}f(x)\,\overline{\xi(x)}\,d\mu(x)$ (3.59)
is called the Fourier transform of $f$ on $G$.
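For instance, on $G=\mathbb{T}^{1}$ with normalized Haar measure, formula (3.59) reduces to the classical Fourier coefficients. The following Python sketch (an illustration, not part of the theory) approximates $\widehat{f}(\xi_{m})$ for $f(x)=\cos(2\pi x)$, whose only nonzero coefficients are $\widehat{f}(\xi_{\pm 1})=1/2$:

```python
import cmath

# Definition 3.3 on G = T^1: hat f(xi_m) = int_0^1 f(x) * conj(xi_m(x)) dx,
# approximated by a midpoint Riemann sum.
def fourier(f, m, steps=20000):
    s = 0j
    for i in range(steps):
        x = (i + 0.5) / steps
        s += f(x) * cmath.exp(-2j * cmath.pi * m * x)
    return s / steps

f = lambda x: cmath.cos(2 * cmath.pi * x)   # = (xi_1 + xi_{-1}) / 2
print(abs(fourier(f, 1)))   # ~ 0.5
print(abs(fourier(f, 0)))   # ~ 0.0
```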
Usually, the Fourier Transform of $f$ is denoted by ${\mathcal{F}}f$ to
emphasize that it is an operator, but we prefer to adopt the usual notation
$\widehat{f}$. Moreover, we recall that the Fourier transform is a
homomorphism from $L^{1}(G;\mu)$ to $C_{0}(G^{\wedge})$ (or $C(G^{\wedge})$
when $G$ is compact), see Proposition 4.13 in [19]. Also, we address the
reader to [19], Chapter 4, for the Plancherel Theorem and the Inverse Fourier
Transform.
Before we establish the definition of (energy) Sobolev spaces on LCA groups,
let us consider the following set
${\rm P}=\\{p:G^{\wedge}\times G^{\wedge}\to[0,\infty)\;/\;\text{$p$ is a continuous invariant pseudo-metric on $G^{\wedge}$}\\}.$
The Birkhoff-Kakutani Theorem (see [23] p.68) ensures that, the set P is not
empty. Any pseudo-metric $p\in{\rm P}$ is well defined for each $(x,y)\in
G^{\wedge}\times G^{\wedge}$, hence we may define
$\gamma(x):=p(x,e)\equiv p(x,1).$ (3.60)
Moreover, one observes that $\gamma(1)=0$. Then, we have the following
###### Definition 3.4 (Energy Sobolev Spaces on LCA Groups).
Let $s$ be a non-negative real number and $\gamma(x)$ be given by (3.60) for
some fixed $p\in{\rm P}$. The energy Sobolev space $H^{s}_{\gamma}(G)$ is the
set of functions $f\in L^{2}(G;\mu)$, such that
$\int_{G^{\wedge}}(1+\gamma(\xi)^{2})^{s}\,|\widehat{f}(\xi)|^{2}d\nu(\xi)<\infty.$
(3.61)
Moreover, given a function $f\in H^{s}_{\gamma}(G)$ its norm is defined as
$\|f\|_{H^{s}_{\gamma}(G)}:=\left(\int_{G^{\wedge}}\left(1+\gamma(\xi)^{2}\right)^{s}|\widehat{f}(\xi)|^{2}d\nu(\xi)\right)^{1/2}.$
(3.62)
Below, taking specific functions $\gamma$, the usual Sobolev spaces on
$\mathbb{R}^{n}$ and other examples are considered. In particular, the
Plancherel Theorem implies that $H^{0}_{\gamma}(G)=L^{2}(G;\mu)$.
###### Example 3.5.
Let $G=(\mathbb{R}^{n},+)$ which is known to be a LCA group, and consider its
dual group $(\mathbb{R}^{n})^{\wedge}=\\{\xi_{y}\;;\;y\in\mathbb{R}^{n}\\}$,
where for each $x\in\mathbb{R}^{n}$
$\xi_{y}(x)=e^{2\pi i\,y\cdot x},$ (3.63)
hence $|\xi_{y}(x)|=1$ and $\xi_{0}(x)=1$. One remarks that, here we denote
(without invoking the vector space structure)
$a\cdot b=a_{1}b_{1}+a_{2}b_{2}+\ldots+a_{n}b_{n},\quad\text{(for all $a,b\in
G$)}.$
For any $x,y\in\mathbb{R}^{n}$ let us consider
$p(\xi_{x},\xi_{y})=2\pi\|x-y\|,$
where $\|\cdot\|$ is the Euclidean norm in $\mathbb{R}^{n}$. Hence
$\gamma(\xi_{x})=p(\xi_{x},1)=2\pi\|x\|$. Since
$(\mathbb{R}^{n})^{\wedge}\cong\mathbb{R}^{n}$, the Sobolev space
$H^{s}_{\gamma}(G)$ coincides with the usual Sobolev space on $\mathbb{R}^{n}$.
###### Example 3.6.
Let us recall that, the set $[0,1)^{n}$ endowed with the binary operation
$(x,y)\in[0,1)^{n}\\!\times\\![0,1)^{n}\;\;\mapsto\;\;x+y-\left\lfloor
x+y\right\rfloor\in[0,1)^{n}$
is an Abelian group, and the function $\Lambda:\mathbb{R}^{n}\to[0,1)^{n}$,
$\Lambda(x):=x-\left\lfloor x\right\rfloor$ is a homomorphism of groups.
Moreover, under the topology induced by $\Lambda$, that is to say
$\\{U\subset[0,1)^{n}\;;\;\Lambda^{-1}(U)\;\text{is an open set
of}\;\,\mathbb{R}^{n}\\},$
$[0,1)^{n}$ is a compact Abelian group, which is called the $n-$dimensional
Torus and denoted by $\mathbb{T}^{n}$. Its dual group is characterized by the
integers $\mathbb{Z}^{n}$, that is
$\text{ $(\mathbb{T}^{n})^{\wedge}=\\{\xi_{m}\;;\;m\in\mathbb{Z}^{n}\\}$,
where $\xi_{m}(x)$ is given by (3.63) for all
$x\in\mathbb{R}^{n}$}.$
For each $m,k\in\mathbb{Z}^{n}$, we consider
$p(\xi_{m},\xi_{k})=2\pi\sum_{j=1}^{n}{|m_{j}-k_{j}|},\quad\text{and thus
$\gamma(\xi_{m})=2\pi\sum_{j=1}^{n}{|m_{j}|}$}.$
Then, the Sobolev space $H^{s}_{\gamma}(\mathbb{T}^{n})$ coincides with the
usual Sobolev space on $\mathbb{T}^{n}$.
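As a concrete numerical illustration of Definition 3.4 with this choice of $\gamma$, the Python sketch below approximates the norm (3.62) of $f(x)=\cos(2\pi x)$ on $\mathbb{T}^{1}$, for which only $\widehat{f}(\xi_{\pm 1})=1/2$ are nonzero, so $\|f\|^{2}_{H^{s}_{\gamma}}=\tfrac{1}{2}(1+4\pi^{2})^{s}$:

```python
import cmath, math

# Norm (3.62) on G = T^1 with gamma(xi_m) = 2*pi*|m| (Example 3.6):
# ||f||^2 = sum_m (1 + gamma(xi_m)^2)^s |hat f(m)|^2, truncated at |m| <= M.
def fhat(f, m, steps=20000):
    s = 0j
    for i in range(steps):
        x = (i + 0.5) / steps
        s += f(x) * cmath.exp(-2j * cmath.pi * m * x)
    return s / steps

def sobolev_norm(f, s, M=5):
    total = sum((1 + (2 * math.pi * abs(m))**2)**s * abs(fhat(f, m))**2
                for m in range(-M, M + 1))
    return math.sqrt(total)

f = lambda x: cmath.cos(2 * cmath.pi * x)
s = 1.0
expected = math.sqrt((1 + 4 * math.pi**2)**s / 2)
print(sobolev_norm(f, s), expected)  # the two values agree
```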
Now, following the above discussion let us consider the infinite Torus
$\mathbb{T}^{I}$, where $I$ is an index set. Since an arbitrary product of
compact spaces is compact in the product topology (Tychonoff Theorem),
$\mathbb{T}^{I}$ is a compact Abelian group. Here, the binary operation on
$\mathbb{T}^{I}\times\mathbb{T}^{I}$ is defined coordinate by coordinate, that
is, for each $\ell\in I$
$(g+h)_{\ell}:=g_{\ell}+h_{\ell}-\left\lfloor
g_{\ell}+h_{\ell}\right\rfloor.$
Moreover, the set $\mathbb{Z}^{I}_{\rm c}:=\\{m\in\mathbb{Z}^{I};\text{{\rm
supp} $m$ is compact}\\}$ characterizes the elements of the dual group
$(\mathbb{T}^{I})^{\wedge}$. Indeed, applying Theorem 23.21 in [23], similarly
we have
$(\mathbb{T}^{I})^{\wedge}={\left\\{\xi_{m}\;;\;m\in\mathbb{Z}^{I}_{\rm
c}\right\\}},$
where $\xi_{m}$ is given by (3.63) for each $m\in\mathbb{Z}_{\rm c}^{I}$,
the pseudo-metric
$p(\xi_{m},\xi_{k})=2\pi\sum_{\ell\in I}{|m_{\ell}-k_{\ell}|},\quad\text{and
$\gamma(\xi_{m})=2\pi\sum_{\ell\in I}{|m_{\ell}|}$}.$
Consequently, we have established the Sobolev spaces
$H^{s}_{\gamma}(\mathbb{T}^{I})$.
#### 3.2.1 Groups and Dynamical systems
In this section, we are interested in bringing together the discussion about
dynamical systems studied in Section 2.2 and the theory developed in the last
section for LCA groups. To this end, we consider stationary functions in the
continuous sense (continuous dynamical systems). Moreover, we recall that all
the groups in this paper are assumed to be Hausdorff.
To begin, let $G$ be a locally compact group with Haar measure $\mu$; we know
that $\mu(G)<\infty$ if, and only if, $G$ is compact. Therefore, we consider
from now on that $G$ is a compact Abelian group, hence $\mu$ is a finite
measure and, up to a normalization, $(G,\mu)$ is a probability space. We are
going to consider the dynamical systems, $\tau:\mathbb{R}^{n}\times G\to G$,
defined by
$\tau(x)\omega:=\varphi(x)\,\omega,$ (3.64)
where $\varphi:\mathbb{R}^{n}\to G$ is a given (continuous) homomorphism.
Indeed, first $\tau(0)\omega=\omega$ and
$\tau(x+y)\omega=\varphi(x)\varphi(y)\omega=\tau(x)\big{(}\tau(y)\omega\big{)}$.
Moreover, since $\mu$ is a translation invariant Haar measure, the mapping
$\tau(x):G\to G$ is $\mu-$measure preserving. Recall from Remark 2.11 that we
have assumed that the dynamical systems we are interested in here are ergodic.
Then, it is important to characterize the conditions for the mapping
$\varphi$, under which the dynamical system defined by (3.64) is ergodic. To
this end, first let us consider the following
###### Lemma 3.7.
Let $H$ be a topological group, $F\subset H$ closed, $F\neq H$ and $x\notin
F$. Then, there exists a neighborhood $V$ of the identity $e$, such that
$FV\cap xV=\emptyset.$
###### Proof.
First, we observe that:
i) Since $F\subset H$ is closed and $x\notin F$, there exists a neighborhood
$U$ of the identity $e$, such that $F\cap xU=\emptyset$.
ii) There exists a symmetric neighborhood $V$ of the identity $e$, such that
$VV\subset U$.
Now, suppose that $FV\cap xV\neq\emptyset$. Therefore, there exist
$v_{1},v_{2}\in V$ and $k_{0}\in F$ such that, $k_{0}v_{1}=xv_{2}$.
Consequently, $k_{0}=xv_{2}v_{1}^{-1}$ and from $(ii)$ it follows that,
$k_{0}\in xU$. Then, we have a contradiction from $(i)$. ∎
Claim 1: The dynamical system defined by (3.64) is ergodic if, and only if,
$\varphi(\mathbb{R}^{n})$ is dense in $G$.
Proof of Claim 1: 1. Let us show first the necessity. Therefore, we suppose
that $\varphi(\mathbb{R}^{n})$ is not dense in $G$, that is
$K:=\overline{\varphi(\mathbb{R}^{n})}\neq G$. Then, applying Lemma 3.7 there
exists a neighborhood $V$ of $e$, such that $KV\cap xV=\emptyset$, for some
$x\notin K$. Recall that the Haar measure on open sets are positive, moreover
$KV=\bigcup_{k\in K}kV,$
which is an open set, thus we have
$0<\mu(KV)+\mu(xV)\leq 1.$
Consequently, it follows that $0<\mu(\varphi(\mathbb{R}^{n})V)<1$. For
convenience, let us denote $E=\varphi(\mathbb{R}^{n})V$, hence $\tau(x)E=E$ for
each $x\in\mathbb{R}^{n}$. Then, the dynamical system $\tau$ is not ergodic,
since $E\subset G$ is a $\tau$-invariant set with $0<\mu(E)<1$.
2\. It remains to show the sufficiency. Let $E\subset G$ be a $\mu-$measurable
$\tau$-invariant set, hence $\omega E=E$ for each
$\omega\in\varphi(\mathbb{R}^{n})$. Assume by contradiction that,
$0<\mu(E)<1$, thus $\mu(G\setminus E)>0$. Denote by $\mathcal{B}_{G}$ the
Borel $\sigma-$algebra on $G$, and define, $\lambda:=\mu_{\lfloor E}$, that is
$\lambda(A)=\mu(E\cap A)$ for all $A\in\mathcal{B}_{G}$. Recall that $G$ is
not necessarily metric, therefore, it is not clear if each Borel set is
$\mu-$measurable. Then, it follows that:
$(i)$ For any $A\in\mathcal{B}_{G}$ fixed, the mapping $\omega\in
G\mapsto\lambda(\omega A)$ is continuous. Indeed, for $\omega\in G$ and
$A\in\mathcal{B}_{G}$, we have
$\displaystyle\lambda(\omega A)$ $\displaystyle=\int_{G}1_{E}(\varpi)1_{\omega
A}(\varpi)d\mu(\varpi)$
$\displaystyle=\int_{G}1_{E}(\varpi)1_{A}(\omega^{-1}\varpi)d\mu(\varpi)=\int_{G}1_{E}(\omega\varpi)1_{A}(\varpi)d\mu(\varpi).$
Therefore, for $\omega,\omega_{0}\in G$
$\displaystyle|\lambda(\omega A)-\lambda(\omega_{0}A)|$
$\displaystyle=\big{|}\int_{G}\big{(}1_{E}(\omega\varpi)-1_{E}(\omega_{0}\varpi)\big{)}1_{A}(\varpi)d\mu(\varpi)\big{|}$
$\displaystyle\leq\big{(}\mu(A)\big{)}^{1/2}\big{(}\int_{G}|1_{E}(\omega\varpi)-1_{E}(\omega_{0}\varpi)|^{2}d\mu(\varpi)\big{)}^{1/2}\xrightarrow[\omega\to\omega_{0}]{}0.$
$(ii)$ $\lambda$ is invariant, i.e. for all $\omega\in G$, and
$A\in\mathcal{B}_{G}$, $\lambda(\omega A)=\lambda(A)$. Indeed, for each
$\omega\in\varphi(\mathbb{R}^{n})$, and $A\in\mathcal{B}_{G}$, we have
$(\omega A)\cap E=(\omega A)\cap(\omega E)=\omega(A\cap E).$
Thus since $\mu$ is invariant, $\mu_{\lfloor E}(\omega A)=\mu_{\lfloor E}(A)$.
Consequently, due to item $(i)$ and $\overline{\varphi(\mathbb{R}^{n})}=G$, it
follows that $\lambda$ is invariant.
From item $(ii)$ the Radon measure $\lambda$ is a Haar measure on $G$. By the
uniqueness of the Haar measure on $G$, there exists $\alpha>0$, such that for
all $A\in\mathcal{B}_{G}$, $\alpha\lambda(A)=\mu(A)$. In particular,
$\alpha\lambda(G\setminus E)=\mu(G\setminus E)$. But $\lambda(G\setminus E)=0$
by definition and $\mu(G\setminus E)>0$, which is a contradiction and hence
$\tau$ is ergodic.
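The equivalence just proved can be sanity-checked numerically in the simplest concrete case (our own illustration, not part of the proof): $G=\mathbb{T}$ and the sampled orbit $\omega\mapsto\omega+k\alpha\bmod 1$ with $\alpha$ irrational, where ergodicity manifests as Birkhoff averages of characters converging to their mean (Weyl equidistribution).

```python
import cmath
import math

# Sanity check (our own illustration) of "dense image <=> ergodic" on the
# circle G = T: sample the orbit omega -> omega + k*alpha mod 1 with alpha
# irrational, so the orbit is dense.  Ergodicity then forces the Birkhoff
# average of the character omega -> e^{2*pi*i*omega} to converge to its
# spatial mean, which is 0.
alpha = math.sqrt(2.0)   # irrational rotation number (assumed for the demo)
N = 10_000
birkhoff = sum(cmath.exp(2j * math.pi * k * alpha) for k in range(N)) / N
# Geometric-sum bound: |birkhoff| <= 1/(N*|sin(pi*alpha)|), tiny for large N.
assert abs(birkhoff) < 1e-3
```

For rational $\alpha=p/q$ the character $\omega\mapsto e^{2\pi iq\omega}$ is invariant along the orbit, so its Birkhoff average equals its initial value rather than the mean $0$, matching the necessity part of the claim.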
###### Remark 3.8.
1\. One remarks that, in order to show that $\tau$ given by (3.64) is ergodic,
we used neither the continuity of $\varphi$ nor that $G$ is metric. Compare
with the statement in [24] p.225 (after Theorem 7.2).
2\. From now on, we assume that $\varphi(\mathbb{R}^{n})$ is dense in $G$.
Now, for the dynamical system established before, the main issue is to show
how the Sobolev space $H^{1}_{\gamma}(G)$ is related to the space
$\mathcal{H}_{\Phi}$ given by (2.15) for $\Phi=Id$, that is
$\mathcal{H}={\big{\\{}f(y,\omega);\;f\in H^{1}_{\rm
loc}(\mathbb{R}^{n};L^{2}(G))\;\;\text{stationary}\big{\\}}},$
which is a Hilbert space endowed with the following inner product
${\langle
f,g\rangle}_{\mathcal{H}}=\int_{G}f(0,\omega)\,\overline{g(0,\omega)}\,d\mu(\omega)+\int_{G}\nabla_{\\!\\!y}f(0,\omega)\cdot\overline{\nabla_{\\!\\!y}g(0,\omega)}\,d\mu(\omega).$
Let $\chi$ be a character on $G$, i.e. $\chi\in G^{\wedge}$. Since
$\varphi:\mathbb{R}^{n}\to G$ is a continuous homomorphism, the function
$(\chi\circ\varphi):\mathbb{R}^{n}\to\mathbb{C}$ is a continuous character in
$\mathbb{R}^{n}$. More precisely, given any fixed $\chi\in G^{\wedge}$ we may
find $y\in\mathbb{R}^{n}$, $(y\equiv y(\chi))$, such that, for each
$x\in\mathbb{R}^{n}$
$\big{(}\chi\circ\varphi\big{)}(x)=:\xi_{y(\chi)}(x)=e^{2\pi i\,y(\chi)\cdot
x}.$
Following Example 3.5 we define the pseudo-metric
$p_{\varphi}:G^{\wedge}\times G^{\wedge}\to[0,\infty)$ by
$p_{\varphi}(\chi_{1},\chi_{2}):=p(\xi_{y_{1}(\chi_{1})},\xi_{y_{2}(\chi_{2})})=2\pi\|y_{1}(\chi_{1})-y_{2}(\chi_{2})\|.$
(3.65)
Then, we have
$\gamma(\chi)=p_{\varphi}(\chi,1)=2\pi\|y(\chi)\|.$
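For concreteness, consider the following instance of the construction above (our own illustration, in the spirit of Example 3.5): $n=1$, $G=\mathbb{T}^{2}$ and $\varphi(x)=(\alpha_{1}x,\alpha_{2}x)\bmod 1$ with $\alpha_{1},\alpha_{2}\neq 0$. The characters are $\chi_{k}(\omega)=e^{2\pi i\,k\cdot\omega}$, $k\in\mathbb{Z}^{2}$, and

```latex
(\chi_{k}\circ\varphi)(x)=e^{2\pi i\,(k_{1}\alpha_{1}+k_{2}\alpha_{2})\,x},
\qquad y(\chi_{k})=k_{1}\alpha_{1}+k_{2}\alpha_{2},
\qquad \gamma(\chi_{k})=2\pi\,|k_{1}\alpha_{1}+k_{2}\alpha_{2}|.
```

Here $p_{\varphi}$ fails to be a metric exactly when some nonzero $k\in\mathbb{Z}^{2}$ satisfies $k_{1}\alpha_{1}+k_{2}\alpha_{2}=0$, i.e. when $\alpha_{1}/\alpha_{2}$ is rational; by Kronecker's theorem this is also exactly when $\varphi(\mathbb{R})$ fails to be dense in $\mathbb{T}^{2}$, in agreement with Claim 2 below.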
Let us observe that the above construction of $\gamma$ relies on the
continuity of the homomorphism $\varphi:\mathbb{R}^{n}\to G$; that is to say,
the continuity of $\varphi$ was essential. In fact, the function $\gamma$
was given by the pseudo-metric $p_{\varphi}$, which is not necessarily a
metric. Nevertheless, we have the following
Claim 2: The pseudo-metric $p_{\varphi}:G^{\wedge}\times
G^{\wedge}\to[0,\infty)$ given by (3.65) is a metric if, and only if,
$\varphi(\mathbb{R}^{n})$ is dense in $G$.
Proof of Claim 2: 1. First, let us assume that
$\overline{\varphi(\mathbb{R}^{n})}\neq G$, and show that $p_{\varphi}$ is
not a metric; this proves the necessity. From Corollary 24.12 in [23], since
$\overline{\varphi(\mathbb{R}^{n})}$ is a closed proper subgroup of $G$,
there exists $\xi\in G^{\wedge}\setminus\\{1\\}$, such that
$\xi(\overline{\varphi(\mathbb{R}^{n})})=\\{1\\}$. In particular,
$\xi(\varphi(x))=1$ for each $x\in\mathbb{R}^{n}$, i.e. $y(\xi)=0$.
Therefore, we have $p_{\varphi}(\xi,1)=0$, which implies that $p_{\varphi}$
is not a metric.
2\. Now, let us assume that $\overline{\varphi(\mathbb{R}^{n})}=G$, and it is
enough to show that if $p_{\varphi}(\xi,1)=0$, then $\xi=1$. Indeed, if
$0=p_{\varphi}(\xi,1)=2\pi\|y(\xi)\|$, then $y(\xi)=0$. Therefore,
$\xi(\varphi(x))=1$ for each $x\in\mathbb{R}^{n}$, since $\xi$ is continuous
and $\overline{\varphi(\mathbb{R}^{n})}=G$, it follows that, for each
$\omega\in G$, $\xi(\omega)=1$, which finishes the proof of the claim.
###### Remark 3.9.
Since we have already assumed that $\varphi(\mathbb{R}^{n})$ is dense in $G$,
it follows that $p_{\varphi}$ is indeed a metric, which does not imply
necessarily that $G$, itself, is metric.
Under the assumptions considered above, we have the following
###### Lemma 3.10.
If $f\in\mathcal{H}$, then for $j\in\\{1,\ldots,d\\}$ and all $\xi\in
G^{\wedge}$
$\widehat{\partial_{j}f(0,\xi)}=2\pi i\;y_{j}(\xi)\widehat{f(0,\xi)}.$ (3.66)
###### Proof.
First, for each $x\in\mathbb{R}^{d}$ and $\omega\in G$, define
$\displaystyle\xi_{\tau}(x,\omega)$
$\displaystyle:=\xi(\tau(x,\omega))=\xi(\varphi(x)\omega)=\xi(\varphi(x))\;\xi(\omega)$
$\displaystyle=e^{2\pi ix\cdot y(\xi)}\;\xi(\omega).$
Therefore $\xi_{\tau}\in C^{\infty}(\mathbb{R}^{d};L^{2}(G))$, and we have for
$j\in\\{1,\ldots,d\\}$
$\partial_{j}\xi_{\tau}(0,\omega)=2\pi i\;y_{j}(\xi)\;\xi(\omega).$ (3.67)
Finally, applying Theorem 2.15 we obtain
$\displaystyle\int_{G}\partial_{j}f(0,\omega)\;\overline{\xi_{\tau}}(0,\omega)d\mu(\omega)$
$\displaystyle=-\int_{G}f(0,\omega)\;\partial_{j}\overline{\xi_{\tau}}(0,\omega)d\mu(\omega)$
$\displaystyle=2\pi
i\;y_{j}(\xi)\int_{G}f(0,\omega)\;\overline{\xi}(\omega)d\mu(\omega),$
where we have used (3.67). From the above equation and the definition of the
Fourier transform on groups we obtain (3.66), and the lemma is proved. ∎
Now we are able to state the equivalence between the spaces $\mathcal{H}$ and
$H^{1}_{\gamma}(G)$, which is to say, we have the following
###### Theorem 3.11.
A function $f\in\mathcal{H}$ if, and only if, $f(0,\cdot)\in
H^{1}_{\gamma}(G)$, and
$\|f\|_{\mathcal{H}}=\|f(0,\cdot)\|_{H_{\gamma}^{1}(G)}.$
###### Proof.
1\. Let us first show that, if $f\in\mathcal{H}$ then $f(0,\cdot)\in
H^{1}_{\gamma}(G)$. To follow we observe that
$\displaystyle\int_{G^{\wedge}}(1+\gamma(\xi)^{2})|\widehat{f(0,\xi)}|^{2}\;d\nu(\xi)$
$\displaystyle=\int_{G^{\wedge}}|\widehat{f(0,\xi)}|^{2}\;d\nu(\xi)$
$\displaystyle+\int_{G^{\wedge}}|2\pi
i\;y(\xi)\widehat{f(0,\xi)}|^{2}\;d\nu(\xi)$
$\displaystyle=\int_{G^{\wedge}}|\widehat{f(0,\xi)}|^{2}\;d\nu(\xi)+\int_{G^{\wedge}}|\widehat{\nabla_{\\!\\!y}f(0,\xi)}|^{2}\;d\nu(\xi),$
where we have used (3.66). Therefore, applying Plancherel theorem
$\int_{G^{\wedge}}\\!(1+\gamma(\xi)^{2})|\widehat{f(0,\xi)}|^{2}\;d\nu(\xi)=\\!\\!\int_{G}\\!|{f(0,\omega)}|^{2}\;d\mu(\omega)+\\!\int_{G}|\nabla_{\\!\\!y}{f(0,\omega)}|^{2}\;d\mu(\omega)\\!<\\!\infty,$
and thus $f(0,\cdot)\in H^{1}_{\gamma}(G)$.
2\. Now, let $f(x,\omega)$ be a stationary function such that $f(0,\cdot)\in
H^{1}_{\gamma}(G)$; we show that $f\in\mathcal{H}$. Given a stationary
function $\zeta\in C^{1}(\mathbb{R}^{d};L^{2}(G))$, applying the Plancherel
theorem and polarization identity
$\int_{G}\partial_{j}\zeta(0,\omega)\;\overline{f(0,\omega)}d\mu(\omega)=\int_{G^{\wedge}}\widehat{\partial_{j}\zeta(0,\xi)}\;\overline{\widehat{f(0,\xi)}}d\nu(\xi)$
for $j\in\\{1,\ldots,d\\}$. Due to (3.66), we may write
$\displaystyle\int_{G}\partial_{j}\zeta(0,\omega)\;\overline{f(0,\omega)}d\mu(\omega)$
$\displaystyle=\int_{G^{\wedge}}2\pi
i\;y_{j}(\xi)\widehat{\zeta(0,\xi)}\;\overline{\widehat{f(0,\xi)}}d\nu(\xi)$
(3.68) $\displaystyle=-\int_{G^{\wedge}}\widehat{\zeta(0,\xi)}\;\overline{2\pi
i\;y_{j}(\xi)\widehat{f(0,\xi)}}d\nu(\xi).$
For $j\in\\{1,\ldots,d\\}$ we define, $g_{j}(\omega):=\big{(}2\pi
i\;y_{j}(\xi)\widehat{f(0,\xi)}\big{)}^{\vee}$, then $g_{j}\in L^{2}(G)$.
Indeed, we have
$\int_{G}|g_{j}(\omega)|^{2}d\mu(\omega)=\int_{G^{\wedge}}|\widehat{g_{j}(\xi)}|^{2}d\nu(\xi)\leq\int_{G^{\wedge}}(1+\gamma(\xi)^{2})|\widehat{f(0,\xi)}|^{2}d\nu(\xi)<\infty.$
Therefore, we obtain from (3.68)
$\int_{G}\partial_{j}\zeta(0,\omega)\;\overline{f(0,\omega)}d\mu(\omega)=-\int_{G}\zeta(0,\omega)\;\overline{g_{j}(\omega)}d\mu(\omega)$
for any stationary function $\zeta\in C^{1}(\mathbb{R}^{d};L^{2}(G))$, and
$j\in\\{1,\ldots,d\\}$. Then $f\in\mathcal{H}$ due to Theorem 2.15. ∎
###### Corollary 3.12.
Let $f\in L^{2}_{\text{loc}}(\mathbb{R}^{d};L^{2}(G))$ be a stationary
function and $\Phi$ a stochastic deformation. Then,
$f\circ\Phi^{-1}\in{\mathcal{H}}_{\Phi}$ if, and only if, $f(0,\cdot)\in
H^{1}_{\gamma}(G)$, and there exist constants $C_{1},C_{2}>0$, such that
$C_{1}\|f\circ\Phi^{-1}\|_{\mathcal{H}_{\Phi}}\leq\|f(0,\cdot)\|_{H_{\gamma}^{1}(G)}\leq
C_{2}\|f\circ\Phi^{-1}\|_{\mathcal{H}_{\Phi}}.$
###### Proof.
Follows from Theorem 3.11 and Remark 2.10. ∎
#### 3.2.2 Rellich–Kondrachov type Theorem
The aim of this section is to characterize when the Sobolev space
$H^{1}_{\gamma}(G)$ is compactly embedded in $L^{2}(G)$, written
$H^{1}_{\gamma}(G)\subset\subset L^{2}(G)$, where $G$ is considered a compact
Abelian group and $\gamma:G^{\wedge}\to[0,\infty)$ is given by (3.60). We
observe that, $H^{1}_{\gamma}(G)\subset\subset L^{2}(G)$ is exactly the
Rellich–Kondrachov Theorem on compact Abelian groups, which was established
under some conditions on $\gamma$ in [21]. Nevertheless, as a byproduct of the
characterization established here, we provide the proof of this theorem in a
more precise context.
To start the investigation, let $(G,\mu)$ be a probability space and consider
the operator $T:L^{2}(G^{\wedge})\to L^{2}(G^{\wedge})$, defined by
$[T(f)](\xi):=\frac{f(\xi)}{\sqrt{(1+\gamma(\xi)^{2})}}.$ (3.69)
We remark that, $T$ as defined above is a bounded linear, ($\|T\|\leqslant
1$), self-adjoint operator, which is injective and satisfies for each $f\in
L^{2}(G^{\wedge})$
$\int_{G^{\wedge}}\left(1+\gamma(\xi)^{2}\right)\,{|[T(f)](\xi)|}^{2}d\nu(\xi)=\int_{G^{\wedge}}|f(\xi)|^{2}d\nu(\xi).$
(3.70)
Moreover, a function $f\in H^{1}_{\gamma}(G)$ if, and only if, $\widehat{f}\in
T(L^{2}(G^{\wedge}))$, that is to say
$f\in H^{1}_{\gamma}(G)\Leftrightarrow\widehat{f}\in T(L^{2}(G^{\wedge})).$
(3.71)
Indeed, if $f\in H^{1}_{\gamma}(G)$ then, we have $f\in L^{2}(G)$ and
$\int_{G^{\wedge}}\left(1+\gamma(\xi)^{2}\right)|\widehat{f}(\xi)|^{2}d\nu(\xi)=\int_{G^{\wedge}}|\sqrt{\left(1+\gamma(\xi)^{2}\right)}\,\widehat{f}(\xi)|^{2}d\nu(\xi)<\infty.$
Therefore, defining
$g(\xi):=\sqrt{\left(1+\gamma(\xi)^{2}\right)}\widehat{f(\xi)}$, hence $g\in
L^{2}(G^{\wedge})$ and we have $\widehat{f}\in T(L^{2}(G^{\wedge}))$.
Now, if $\widehat{f}\in T(L^{2}(G^{\wedge}))$ let us show that, $f\in
H^{1}_{\gamma}(G)$. First, there exists $g\in L^{2}(G^{\wedge})$ such that,
$\widehat{f}=T(g)$. Thus from equation (3.70), we obtain
$\int_{G^{\wedge}}(1+\gamma(\xi)^{2})\,|\widehat{f}(\xi)|^{2}d\nu(\xi)=\int_{G^{\wedge}}|g(\xi)|^{2}d\nu(\xi)<\infty,$
that is, by definition $f\in H^{1}_{\gamma}(G)$.
Then we have the following Equivalence Theorem:
###### Theorem 3.13.
The Sobolev space $H^{1}_{\gamma}(G)$ is compactly embedded in $L^{2}(G)$ if,
and only if, the operator $T$ defined by (3.69) is compact.
###### Proof.
1\. First, let us assume that $H^{1}_{\gamma}(G)\subset\subset L^{2}(G)$, and
take a bounded sequence $\\{f_{m}\\}$, $f_{m}\in L^{2}(G^{\wedge})$ for each
$m\in\mathbb{N}$. Thus $T(f_{m})\in L^{2}(G^{\wedge})$, and defining
$g_{m}:=T(f_{m})^{\vee}$, we obtain by Plancherel Theorem that $g_{m}\in
L^{2}(G)$ for each $m\in\mathbb{N}$. Moreover, from equation (3.70), we have
for any $m\in\mathbb{N}$
$\displaystyle\infty>\int_{G^{\wedge}}|f_{m}(\xi)|^{2}d\nu(\xi)$
$\displaystyle=\int_{G^{\wedge}}(1+\gamma(\xi)^{2})\,|[T(f_{m})](\xi)|^{2}d\nu(\xi)$
$\displaystyle=\int_{G^{\wedge}}(1+\gamma(\xi)^{2})\,|\widehat{g_{m}(\xi)}|^{2}d\nu(\xi).$
Therefore, the sequence $\\{g_{m}\\}$ is uniformly bounded in
$H^{1}_{\gamma}(G)$, with respect to $m\in\mathbb{N}$. By hypothesis there
exists a subsequence of $\\{g_{m}\\}$, say $\\{g_{m_{j}}\\}$, and a function
$g\in L^{2}(G)$ such that, $g_{m_{j}}$ converges strongly to $g$ in $L^{2}(G)$
as $j\to\infty$. Consequently, we have
$T(f_{m_{j}})=\widehat{g_{m_{j}}}\to\widehat{g}\quad\text{in
$L^{2}(G^{\wedge})$ as $j\to\infty$},$
that is, the operator $T$ is compact.
2\. Now, let us assume that the operator $T$ is compact and then show that
$H^{1}_{\gamma}(G)\subset\subset L^{2}(G)$. To this end, we take a sequence
$\\{f_{m}\\}_{m\in\mathbb{N}}$ uniformly bounded in $H^{1}_{\gamma}(G)$. Then,
due to the equivalence (3.71) there exists for each $m\in\mathbb{N}$,
$g_{m}\in L^{2}(G^{\wedge})$, such that $\widehat{f_{m}}=T(g_{m})$. Thus for
any $m\in\mathbb{N}$, we have from equation (3.70) that
$\displaystyle\int_{G^{\wedge}}|g_{m}(\xi)|^{2}d\nu(\xi)$
$\displaystyle=\int_{G^{\wedge}}(1+\gamma(\xi)^{2})\,|[T(g_{m})](\xi)|^{2}d\nu(\xi)$
$\displaystyle=\int_{G^{\wedge}}(1+\gamma(\xi)^{2})\,|\widehat{f_{m}(\xi)}|^{2}d\nu(\xi)<\infty.$
Then, the sequence $\\{g_{m}\\}$ is uniformly bounded in $L^{2}(G^{\wedge})$. Since the
operator $T$ is compact, there exist $\\{m_{j}\\}_{j\in\mathbb{N}}$ and $g\in
L^{2}(G^{\wedge})$, such that
$\widehat{f_{m_{j}}}=T(g_{m_{j}})\xrightarrow[j\to\infty]{}g\quad\text{in
$L^{2}(G^{\wedge})$}.$
Consequently, the subsequence $\\{f_{m_{j}}\\}$ converges to $g^{\vee}$
strongly in $L^{2}(G)$, and thus $H^{1}_{\gamma}(G)$ is compactly embedded in
$L^{2}(G)$. ∎
###### Remark 3.14.
Due to Theorem 3.13, the compact embedding $H^{1}_{\gamma}(G)\subset\subset
L^{2}(G)$ follows once we establish conditions under which the operator $T$
is compact. The study of the dual group $G^{\wedge}$ and of the function
$\gamma$ will be essential for this characterization.
Recall from (3.58) item $(ii)$ that, $G^{\wedge}$ is discrete since $G$ is
compact. Then, $\nu$ is the counting measure, and $\nu(\\{\chi\\})=1$ for each
singleton $\\{\chi\\}$, $\chi\in G^{\wedge}$. Now, for any $\chi\in
G^{\wedge}$ fixed, we define the point mass function at $\chi$ by
$\delta_{\chi}(\xi):=1_{\\{\chi\\}}(\xi),\quad\text{for each $\xi\in
G^{\wedge}$}.$
Hence the set $\\{\delta_{\xi}\;;\;\xi\in G^{\wedge}\\}$ is an orthonormal
basis for $L^{2}(G^{\wedge})$. Indeed, we first show the orthonormality. For
each $\chi,\pi\in G^{\wedge}$, we have
$\langle\delta_{\chi},\delta_{\pi}\rangle_{L^{2}(G^{\wedge})}=\int_{G^{\wedge}}\delta_{\chi}(\xi)\;\delta_{\pi}(\xi)\,d\nu(\xi)=\left\\{\begin{array}[]{ccl}1,&\text{if}&\chi=\pi,\\\
0,&\text{if}&\chi\not=\pi.\end{array}\right.$ (3.72)
Now, let us show the density, that is $\overline{\\{\delta_{\xi}\;;\;\xi\in
G^{\wedge}\\}}=L^{2}(G^{\wedge})$, or equivalently $\\{\delta_{\xi}\;;\;\xi\in
G^{\wedge}\\}^{\perp}=\\{0\\}$. For any $w\in\\{\delta_{\xi}\;;\;\xi\in
G^{\wedge}\\}^{\perp}$, we obtain
$0=\langle\delta_{\xi},w\rangle_{L^{2}(G^{\wedge})}=\int_{G^{\wedge}}\delta_{\xi}(\chi)w(\chi)\,d\nu(\chi)=\int_{\\{\xi\\}}w(\chi)\,d\nu(\chi)=w(\xi)$
for any $\xi\in G^{\wedge}$, which proves the density.
From the above discussion, it is important to study the operator $T$ on
elements of the set $\\{\delta_{\xi}\;;\;\xi\in G^{\wedge}\\}$. Then, we have
the following
###### Theorem 3.15.
If the operator $T$ defined by (3.69) is compact, then $G^{\wedge}$ is an
enumerable set.
###### Proof.
1\. First, let $\\{\delta_{\xi}\;;\;\xi\in G^{\wedge}\\}$ be the orthonormal
basis for $L^{2}(G^{\wedge})$, and $T$ the operator defined by (3.69). Then,
the function $\delta_{\xi}\in L^{2}(G^{\wedge})$ is an eigenfunction of $T$
corresponding to the eigenvalue $(1+\gamma(\xi)^{2})^{-1/2}$, that is,
$\delta_{\xi}\neq 0$, and
$T(\delta_{\xi})=\frac{\delta_{\xi}}{\sqrt{1+\gamma(\xi)^{2}}}.$ (3.73)
2\. Now, since $T$ is compact, for each $c>0$ only finitely many of the
eigenvalues in (3.73) satisfy $(1+\gamma(\xi)^{2})^{-1/2}\geq c$; taking
$c=1/j$, $j\in\mathbb{N}$, it follows that the basis
$\\{\delta_{\xi}\;;\;\xi\in G^{\wedge}\\}$ of $L^{2}(G^{\wedge})$ is
enumerable. On the other hand, the function $\xi\in
G^{\wedge}\mapsto\delta_{\xi}\in L^{2}(G^{\wedge})$ is injective, hence
$G^{\wedge}$ is enumerable. ∎
###### Corollary 3.16.
If the operator $T$ defined by (3.69) is compact, then $L^{2}(G)$ is
separable.
###### Proof.
First, the Hilbert space $L^{2}(G^{\wedge})$ is separable, since
$\\{\delta_{\xi}\;;\;\xi\in G^{\wedge}\\}$ is an enumerable orthonormal basis
of it. Then, the proof follows applying the Plancherel Theorem. ∎
###### Corollary 3.17.
Let $G_{B}$ be the Bohr compactification of $\mathbb{R}^{n}$ (see A. Pankov
[30]). Then $H^{1}_{\gamma}(G_{B})$ is not compactly embedded in
$L^{2}(G_{B})$.
###### Proof.
Indeed, $G_{B}^{\wedge}$ is $\mathbb{R}^{n}$ endowed with the discrete
topology, which is non-enumerable; the result then follows from Theorems 3.15
and 3.13. ∎
Consequently, $G^{\wedge}$ being enumerable is a necessary condition for the
operator $T$ to be compact, but it is not sufficient, as shown by Example
3.20 below. Indeed, compactness might depend on the chosen $\gamma$; see also
Example 3.23.
To follow, we first recall the
###### Definition 3.18.
Let $G$ be a group (not necessarily a topological one) and $S$ a subset of it.
The smallest subgroup of $G$ containing every element of $S$, denoted $\langle
S\rangle$, is called the subgroup generated by $S$. Equivalently, see Dummit,
Foote [17] p.63,
$\langle
S\rangle=\big{\\{}g^{\varepsilon_{1}}_{1}g^{\varepsilon_{2}}_{2}\ldots
g^{\varepsilon_{k}}_{k}/\text{$k\in\mathbb{N}$ and for each $j$, $g_{j}\in
S,\varepsilon_{j}=\pm 1$}\big{\\}}.$
Moreover, if a group $G=\langle S\rangle$, then $S$ is called a generator of
$G$; in this case, when $S$ is finite, $G$ is called finitely generated.
###### Theorem 3.19.
If the operator $T$ defined by (3.69) is compact and there exists a generator
of $G^{\wedge}$ such that $\gamma$ is bounded on it, then $G^{\wedge}$ is
finitely generated.
###### Proof.
Let $S_{0}$ be a generator of $G^{\wedge}$, such that $\gamma$ is bounded on
it. Therefore, there exists $d_{0}\geq 0$ such that,
$\text{for each $\xi\in S_{0}$, $\;\gamma(\xi)\leq d_{0}$}.$
Now, since $T$ is compact and $\|T\|\leq 1$, there exists $0<c\leq 1$ such
that, the set of eigenvectors
$\Big{\\{}\delta_{\xi}\;;\;\xi\in
G^{\wedge}\;\;\text{and}\;\;\frac{1}{\sqrt{1+\gamma(\xi)^{2}}}\geq
c\Big{\\}}\equiv\Big{\\{}\delta_{\xi}\;;\;\xi\in
G^{\wedge}\;\;\text{and}\;\;\gamma(\xi)\leq\sqrt{\frac{1}{c^{2}}-1}\Big{\\}}$
is finite, where we have used the Spectral Theorem for bounded compact
operators. Therefore, since
$\left\\{\delta_{\xi}\;;\;\xi\in
S_{0}\right\\}\subset\left\\{\delta_{\xi}\;;\;\xi\in
G^{\wedge}\;\;\text{and}\;\;\gamma(\xi)\leq d_{0}\right\\}$
it follows that $S_{0}$ is a finite set, and thus $G^{\wedge}$ is finitely
generated. ∎
###### Example 3.20 (Infinite enumerable Torus).
Let us recall the Sobolev space $H^{1}_{\gamma}(\mathbb{T}^{\mathbb{N}})$,
where $\mathbb{T}^{\mathbb{N}}$ is the infinite enumerable Torus. We claim
that: $H^{1}_{\gamma}(\mathbb{T}^{\mathbb{N}})$ is not compactly embedded in
$L^{2}(\mathbb{T}^{\mathbb{N}})$, for $\gamma$ defined in Example 3.6. Indeed,
given $k\in\mathbb{N}$ we define $1_{k}\in\mathbb{Z}^{\mathbb{N}}$ to be zero
in every coordinate $\ell\neq k$, and one in the $k$-th coordinate.
Therefore, the set
$S_{0}:=\\{\xi_{1_{k}}\;;\;k\in\mathbb{N}\\}$
is an infinite generator of the dual group
$(\mathbb{T}^{\mathbb{N}})^{\wedge}$. Since for each $k\in\mathbb{N}$,
$\gamma(\xi_{1_{k}})=1$, i.e. $\gamma$ is bounded on $S_{0}$, while
$(\mathbb{T}^{\mathbb{N}})^{\wedge}$ is not finitely generated, applying
Theorem 3.19 (by contraposition) together with Theorem 3.13, it follows that
$H^{1}_{\gamma}(\mathbb{T}^{\mathbb{N}})$ is not compactly embedded in
$L^{2}(\mathbb{T}^{\mathbb{N}})$.
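The obstruction in this example can also be seen at the level of the operator $T$ (a numerical sketch under the example's statement that $\gamma(\xi_{1_{k}})=1$ for every $k$): infinitely many orthonormal eigenvectors share the same nonzero eigenvalue, which is impossible for a compact operator.

```python
import math

# Eigenvalue of T along the basis vector delta_xi, cf. (3.73):
def t_eigenvalue(gamma_val: float) -> float:
    return 1.0 / math.sqrt(1.0 + gamma_val ** 2)

# In Example 3.20, gamma(xi_{1_k}) = 1 for every k in N, so each of the
# infinitely many orthonormal vectors delta_{xi_{1_k}} is an eigenvector
# with the SAME eigenvalue 1/sqrt(2):
eigs = [t_eigenvalue(1.0) for _ in range(10)]  # first 10 of infinitely many
assert all(abs(e - 1.0 / math.sqrt(2.0)) < 1e-12 for e in eigs)
# A compact self-adjoint operator has only finitely many orthonormal
# eigenvectors for any eigenvalue bounded away from 0, so T is not compact.
```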
###### Remark 3.21.
The above discussion in Example 3.20 applies as well to the Sobolev space
$H^{1}_{\gamma}(\mathbb{T}^{I})$, where $I$ is an index set (enumerable or
not). Clearly, the Sobolev space $H^{1}_{\gamma}(\mathbb{T}^{I})$ is not
compactly embedded in $L^{2}(\mathbb{T}^{I})$ when $I$ is a non-enumerable
index set. Indeed, the set $(\mathbb{T}^{I})^{\wedge}$ is non-enumerable.
Now, we characterize the condition on $\gamma:G^{\wedge}\to[0,\infty)$ for
the operator $T$ to be compact. More precisely, let us consider the following
property:
${\bf C}.\quad\text{For each $d>0$, the set $\left\\{\xi\in
G^{\wedge}\;;\;\gamma(\xi)\leq d\right\\}$ is finite}.$ (3.74)
###### Theorem 3.22.
If $\gamma:G^{\wedge}\to[0,\infty)$ satisfies ${\bf C}$, then the operator $T$
defined by (3.69) is compact.
###### Proof.
By hypothesis, for each $d>0$ the set $\\{\xi\in
G^{\wedge}\;;\;\gamma(\xi)\leq d\\}$ is finite; then we have
$G^{\wedge}=\bigcup_{k\in\mathbb{N}}\left\\{\xi\in
G^{\wedge}\;;\;\gamma(\xi)\leq k\right\\}.$
Consequently, the set $G^{\wedge}$ is enumerable and we may write
$G^{\wedge}=\\{\xi_{i}\\}_{i\in\mathbb{N}}$.
Again, due to condition ${\bf C}$ for each $c\in(0,1)$ the set
$\Big{\\{}\xi\in G^{\wedge}\;;\;\frac{1}{\sqrt{1+\gamma(\xi)^{2}}}\geq
c\Big{\\}}$ (3.75)
is finite. Since the function $\xi\in G^{\wedge}\mapsto\delta_{\xi}\in
L^{2}(G^{\wedge})$ is injective, the set
$\\{\delta_{\xi_{i}}\;;\;i\in\mathbb{N}\\}$ is an enumerable orthonormal basis
of eigenvectors for $T$, whose corresponding eigenvalues satisfy
$\lim_{i\to\infty}\frac{1}{\sqrt{1+\gamma(\xi_{i})^{2}}}=0,$
where we have used (3.75). Consequently, $T$ is a compact operator. ∎
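The limit at the end of the proof can be made concrete (our own numerical sketch): for $G=\mathbb{T}$ one has $\gamma(\xi_{k})=2\pi|k|$, $k\in\mathbb{Z}$, which clearly satisfies condition ${\bf C}$, and the eigenvalues of $T$ tend to $0$ along any enumeration of $\mathbb{Z}$.

```python
import math

# For G = T (the circle), gamma(xi_k) = 2*pi*|k| for k in Z satisfies
# condition C: each sublevel set {k : 2*pi*|k| <= d} is finite.
def eig(k: int) -> float:
    """Eigenvalue of T along delta_{xi_k}, cf. (3.73)."""
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi * k) ** 2)

eigs = sorted((eig(k) for k in range(-50, 51)), reverse=True)
assert eigs[0] == 1.0    # largest eigenvalue, attained at k = 0
assert eigs[-1] < 0.01   # the tail is already small at |k| = 50
# Only finitely many eigenvalues exceed any c > 0, so the diagonal
# operator T is compact -- the criterion used in the proof above.
```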
###### Example 3.23 (Bis: Infinite enumerable Torus).
There exists a function $\gamma_{0}$ such that,
$H^{1}_{\gamma_{0}}(\mathbb{T}^{\mathbb{N}})$ is compactly embedded in
$L^{2}(\mathbb{T}^{\mathbb{N}})$. Indeed, we are going to show that,
$\gamma_{0}$ satisfies ${\bf C}$. Let
$\alpha\equiv(\alpha_{\ell})_{\ell\in\mathbb{N}}$ be a sequence in
$\mathbb{R}^{\mathbb{N}}$, such that for each $\ell\in\mathbb{N}$,
$\alpha_{\ell}\geq 0$ and
$\lim_{\ell\to\infty}\alpha_{\ell}=+\infty.$ (3.76)
Then, we define the following pseudo-metric on the dual group
$(\mathbb{T}^{\mathbb{N}})^{\wedge}$:
$p_{0}(\xi_{m},\xi_{n}):=2\pi\sum_{\ell=1}^{\infty}\alpha_{\ell}\;{|m_{\ell}-n_{\ell}|},\quad(m,n\in\mathbb{Z}^{\mathbb{N}}_{\rm
c}),$
and consider $\gamma_{0}(\xi_{m})=p_{0}(\xi_{m},1)$. Thus for each $d>0$, the
set
$\\{m\in\mathbb{Z}^{\mathbb{N}}_{\rm c}\;;\;\gamma_{0}(\xi_{m})\leq
d\\}\quad\text{is finite.}$
Indeed, from (3.76) there exists $\ell_{0}\in\mathbb{N}$, such that
$\alpha_{\ell}>d$, for each $\ell\geq\ell_{0}$. Therefore, if
$m\in\mathbb{Z}^{\mathbb{N}}_{\rm c}$ and the support of $m$ is not contained
in $\\{1,\ldots,\ell_{0}-1\\}$, that is to say, if there exists
$\tilde{\ell}\geq\ell_{0}$ such that $m_{\tilde{\ell}}\neq 0$, then
$2\pi\sum_{\ell=1}^{\infty}\alpha_{\ell}\;{|m_{\ell}|}\geq\alpha_{\tilde{\ell}}>d.$
Consequently, we have
$\\{m\in\mathbb{Z}^{\mathbb{N}}_{\rm c}\;;\;\gamma_{0}(\xi_{m})\leq
d\\}\subset\\{m\in\mathbb{Z}^{\mathbb{N}}_{\rm c}\;;\;{\rm supp}\
m\subset\\{1,\ldots,\ell_{0}-1\\}\\},$
which is a finite set. Finally, applying Theorem 3.22 we obtain that, the
Sobolev space $H^{1}_{\gamma_{0}}(\mathbb{T}^{\mathbb{N}})$ is compactly
embedded in $L^{2}(\mathbb{T}^{\mathbb{N}})$.
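Condition ${\bf C}$ for this $\gamma_{0}$ can also be checked by direct enumeration (our own sketch, with the concrete choice $\alpha_{\ell}=\ell$, which tends to infinity):

```python
import itertools
import math

# Enumerate the sublevel set {m : gamma_0(xi_m) = 2*pi*sum_l alpha_l*|m_l| <= d}
# over finitely supported integer sequences m, with alpha_l = l.
def sublevel(d: float, alpha: list) -> list:
    """alpha is a prefix of (alpha_l); every index beyond it must satisfy
    2*pi*alpha_l > d, so those coordinates of m are forced to vanish."""
    bound = d / (2.0 * math.pi)
    ranges = [range(-int(bound // a), int(bound // a) + 1) for a in alpha]
    return [m for m in itertools.product(*ranges)
            if sum(a * abs(c) for a, c in zip(alpha, m)) <= bound]

# d = 20: since 2*pi*4 > 20, only the coordinates l = 1, 2, 3 can be nonzero.
hits = sublevel(20.0, [1, 2, 3])
assert (0, 0, 0) in hits
assert len(hits) == 15   # the sublevel set is finite, as condition C requires
```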
#### 3.2.3 On a class of Quasi-periodic functions
In this section we consider an important class of quasi-periodic functions,
which includes for instance the periodic functions.
Let $\lambda_{1},\lambda_{2},\ldots,\lambda_{m}\in\mathbb{R}^{n}$ be $m$
linearly independent vectors with respect to $\mathbb{Z}$, and consider the
following matrix
$\Lambda:={\left(\begin{array}[]{c}\lambda_{1}\\\ \lambda_{2}\\\ \vdots\\\
\lambda_{m}\end{array}\right)}_{m\times n}$
such that, for each $d>0$ the set
$\\{k\in\mathbb{Z}^{m}\;;\;{|\Lambda^{T}k|}\leqslant d\\}\quad\text{is
finite.}$ (3.77)
Therefore, we consider the class of quasi-periodic functions satisfying
condition (3.77). This class is not empty: for instance, setting
$B:=\Lambda\Lambda^{T}$, the functions with $\det B>0$ are called here
positive quasi-periodic functions. It is not difficult to see that positive
quasi-periodic functions satisfy (3.77). Indeed, it suffices to observe that,
for each $k\in\mathbb{Z}^{m}$, we have
$|k|=|B^{-1}Bk|\leq\|B^{-1}\|\|\Lambda\||\Lambda^{T}k|.$
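This estimate, and with it condition (3.77), can be verified numerically for a concrete positive quasi-periodic example (our own choice of $\Lambda$; we use the Frobenius norm, which dominates the operator norm, so the displayed inequality is only strengthened):

```python
import math

# Concrete positive quasi-periodic case (our own choice): m = n = 2, with
# rows lambda_1 = (1, 0) and lambda_2 = (sqrt(2), 1), Z-linearly independent.
r2 = math.sqrt(2.0)
Lam = [[1.0, 0.0], [r2, 1.0]]
# B = Lam Lam^T = [[1, r2], [r2, 3]] has det B = 1 > 0, so its inverse is:
Binv = [[3.0, -r2], [-r2, 1.0]]

def frob(M):
    """Frobenius norm; it dominates the operator norm used in the text."""
    return math.sqrt(sum(x * x for row in M for x in row))

C = frob(Binv) * frob(Lam)
for k1 in range(-5, 6):
    for k2 in range(-5, 6):
        # Lambda^T k = k1*lambda_1 + k2*lambda_2
        v = (k1 + k2 * r2, float(k2))
        assert math.hypot(k1, k2) <= C * math.hypot(*v) + 1e-9
# Hence {k in Z^2 : |Lambda^T k| <= d} lies in the finite ball {|k| <= C*d},
# which is exactly condition (3.77).
```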
Moreover, since $\lambda_{1},\lambda_{2},\ldots,\lambda_{m}\in\mathbb{R}^{n}$
are $m$ linearly independent vectors with respect to $\mathbb{Z}$ (this
property does not imply $\det B>0$), the dynamical system
$\tau:\mathbb{R}^{n}\times\mathbb{T}^{m}\to\mathbb{T}^{m}$, given by
$\tau(x)\omega:=\omega+\Lambda x-\left\lfloor\omega+\Lambda x\right\rfloor$
(3.78)
is ergodic.
Now we remark that the application
${\varphi:\mathbb{R}^{n}\to\mathbb{T}^{m}}$, $\varphi(x):=\Lambda
x-\left\lfloor\Lambda x\right\rfloor$, is a continuous homomorphism of
groups. Then, we have
$\tau(x)\omega=\varphi(x)\omega\equiv\omega+\Lambda
x-\left\lfloor\omega+\Lambda x\right\rfloor.$
Consequently, under the conditions of the previous sections, we obtain for
each $k\in\mathbb{Z}^{m}$
$\gamma(\xi_{k})=2\pi{|\Lambda^{T}k|},$
and applying Theorem 3.22 (recall (3.77)), it follows that
$H^{1}_{\gamma}{\left(\mathbb{T}^{m}\right)}\subset\\!\subset
L^{2}{\left(\mathbb{T}^{m}\right)}.$
Therefore, given a stochastic deformation $\Phi$, we have
$\mathcal{H}_{\Phi}\subset\\!\subset\mathcal{L}_{\Phi}$ for the class of
quasi-periodic functions satisfying (3.77), and hence Bloch’s spectral cell
equation admits a solution.
### 3.3 Auxiliary cellular equations
The proposition below, which is an immediate consequence of Theorem 2.26,
gives us the necessary ingredients to deduce from the cell equation (3.56)
other equations (called here auxiliary cellular equations), which will be
essential in our homogenization analysis.
###### Proposition 3.24.
Given ${\theta\in\mathbb{R}^{n}}$, let
$\big{(}\lambda(\theta),\Psi(\theta)\big{)}$ be a spectral point of the cell
equation (3.56). Suppose that for some $\theta_{0}\in\mathbb{R}^{n}$ the
corresponding eigenvalue ${\lambda(\theta_{0})}$ has finite multiplicity.
Then, there exists a neighborhood ${{\mathcal{U}}\subset\mathbb{R}^{n}}$ of
${\theta_{0}}$, such that the following functions
$\theta\in{\mathcal{U}}\mapsto\Psi(\theta)\in\mathcal{H}_{\Phi}\;\;\;\text{and}\;\;\;\theta\in{\mathcal{U}}\mapsto\lambda(\theta)\in\mathbb{R}-\\{0\\},$
are analytic.
Now, introducing the operator ${\mathbb{A}(\theta)}$,
(${\theta\in\mathbb{R}^{n}}$), defined on $\mathcal{H}_{\Phi}$ by
$\displaystyle\mathbb{A}(\theta)[F]=-({\rm
div}_{\\!z}+2i\pi\theta){\Big{(}A{\left(\Phi^{-1}(z,\omega),\omega\right)}{(\nabla_{\\!\\!z}+2i\pi\theta)}F\Big{)}}$
$\displaystyle\qquad\qquad\qquad\qquad+V{\left(\Phi^{-1}(z,\omega),\omega\right)}F-\lambda(\theta)F,$
and writing $\theta=(\theta_{1},\cdots,\theta_{n})$, we obtain for
$k=1,\ldots,n$,
$\displaystyle\mathbb{A}(\theta){\left[\frac{\partial\Psi(\theta)}{\partial\theta_{k}}\right]}=({\rm
div}_{\\!z}+2i\pi\theta){\Big{(}A{\left(\Phi^{-1}(z,\omega),\omega\right)}{(2i\pi
e_{k}\Psi(\theta))}\Big{)}}$ $\displaystyle\qquad\qquad+{(2i\pi
e_{k})}{\Big{(}A{\left(\Phi^{-1}(z,\omega),\omega\right)}{({\rm
div}_{\\!z}+2i\pi\theta)}\Psi(\theta)\Big{)}}$ $\displaystyle\hskip
170.71652pt+\frac{\partial\lambda}{\partial\theta_{k}}(\theta)\Psi(\theta),$
(3.79)
where $\\{e_{k}\\}_{1\leq k\leq n}$ is the canonical basis of
$\mathbb{R}^{n}$. The equation (3.79) is called the first auxiliary cellular
equation (or f.a.c. equation, in short). In the same way, we have for
$k,\ell=1,\ldots,n$,
$\displaystyle\mathbb{A}(\theta){\left[\frac{\partial^{2}\Psi(\theta)}{\partial\theta_{\ell}\,\partial\theta_{k}}\right]}=({\rm
div}_{\\!z}+2i\pi\theta){\Big{(}A{\left(\Phi^{-1}(z,\omega),\omega\right)}{2i\pi
e_{\ell}\frac{\partial\Psi(\theta)}{\partial\theta_{k}}}\Big{)}}$
$\displaystyle\qquad\qquad+({\rm
div}_{\\!z}+2i\pi\theta){\Big{(}A{\left(\Phi^{-1}(z,\omega),\omega\right)}{2i\pi
e_{k}\frac{\partial\Psi(\theta)}{\partial\theta_{\ell}}}\Big{)}}$
$\displaystyle\qquad\qquad+{(2i\pi
e_{k})}{\Big{(}A{\left(\Phi^{-1}(z,\omega),\omega\right)}{({\rm
div}_{\\!z}+2i\pi\theta)}\frac{\partial\Psi(\theta)}{\partial\theta_{\ell}}\Big{)}}$
$\displaystyle\qquad\qquad+{(2i\pi
e_{\ell})}{\Big{(}A{\left(\Phi^{-1}(z,\omega),\omega\right)}{({\rm
div}_{\\!z}+2i\pi\theta)}\frac{\partial\Psi(\theta)}{\partial\theta_{k}}\Big{)}}$
$\displaystyle\qquad\qquad\qquad+{(2i\pi
e_{k})}{\Big{(}A{\left(\Phi^{-1}(z,\omega),\omega\right)}{\left(2i\pi
e_{\ell}\Psi(\theta)\right)}\Big{)}}$ (3.80)
$\displaystyle\qquad\qquad\qquad\qquad+{(2i\pi
e_{\ell})}{\Big{(}A{\left(\Phi^{-1}(z,\omega),\omega\right)}{\left(2i\pi
e_{k}\Psi(\theta)\right)}\Big{)}}$
$\displaystyle\qquad\qquad\qquad\qquad+\frac{\partial\lambda(\theta)}{\partial\theta_{k}}\frac{\partial\Psi(\theta)}{\partial\theta_{\ell}}+\,\frac{\partial\lambda(\theta)}{\partial\theta_{\ell}}\frac{\partial\Psi(\theta)}{\partial\theta_{k}}+\frac{\partial^{2}\lambda(\theta)}{\partial\theta_{\ell}\,\partial\theta_{k}}\Psi(\theta),$
which we call the second auxiliary cellular equation (or s.a.c. equation, in
short).
In order to make clear in which sense the auxiliary cellular equations are
understood, we note that if ${G\in\mathcal{H}_{\Phi}}$ then the variational
formulation of the f.a.c. equation (3.79) is given by
$\displaystyle\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}\Big{\\{}A{\left(\Phi^{-1}(z,\omega),\omega\right)}{\left(\nabla_{\\!\\!z}+2i\pi\theta\right)}\frac{\partial\Psi(\theta)}{\partial\theta_{k}}\cdot\overline{{\left(\nabla_{\\!\\!z}+2i\pi\theta\right)}G}$
$\displaystyle\qquad\qquad+V{\left(\Phi^{-1}(z,\omega),\omega\right)}\frac{\partial\Psi(\theta)}{\partial\theta_{k}}\,\overline{G}-\lambda(\theta)\frac{\partial\Psi(\theta)}{\partial\theta_{k}}\,\overline{G}\Big{\\}}\,dz\,d\mathbb{P}(\omega)$
$\displaystyle\qquad\qquad\qquad=:\Big{\langle}\mathbb{A}(\theta){\left[\frac{\partial\Psi(\theta)}{\partial\theta_{k}}\right]},G\Big{\rangle}$
(3.81)
$\displaystyle\qquad=-\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}\Big{\\{}A{\left(\Phi^{-1}(z,\omega),\omega\right)}{\left(\nabla_{\\!\\!z}+2i\pi\theta\right)}\Psi(\theta)\cdot\overline{{(2i\pi
e_{k}G)}}$
$\displaystyle\qquad\qquad\qquad\qquad-A{\left(\Phi^{-1}(z,\omega),\omega\right)}{(2i\pi
e_{k}\Psi(\theta))}\cdot\overline{{\left(\nabla_{\\!\\!z}+2i\pi\theta\right)}G}$
$\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad+\frac{\partial\lambda(\theta)}{\partial\theta_{k}}\,\Psi(\theta)\,\overline{G}\Big{\\}}\,dz\,d\mathbb{P}(\omega).$
Similar reasoning applies to the s.a.c. equation (3.80).
In the following, we highlight an important fact that is fundamental to
determine the Hessian nature of the effective tensor in our homogenization
analysis concerning the Schrödinger equation (1.1). This fact is brought out
by choosing ${\theta\in{\mathcal{U}}}$, ${k\in\\{1,\ldots,n\\}}$ and defining
$\Lambda_{k}(z,\omega,\theta):=\frac{1}{2i\pi}\frac{\partial\Psi}{\partial\theta_{k}}(z,\omega,\theta)$.
Hence, taking ${\Psi(\theta)}$ as a test function in the s.a.c. equation
(3.80), we get
$\displaystyle\frac{1}{4\pi^{2}}\frac{\partial^{2}\lambda(\theta)}{\partial\theta_{\ell}\,\partial\theta_{k}}\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}{|\Psi(\theta)|}^{2}\,dz\,d\mathbb{P}(\omega)$
$\displaystyle\qquad=\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}\Big{\\{}-A{\left(\Phi^{-1}(z,\omega),\omega\right)}{(e_{\ell}\Lambda_{k}(\theta))}\cdot\overline{{\left(\nabla_{\\!\\!z}+2i\pi\theta\right)}\Psi(\theta)}$
$\displaystyle\qquad\qquad\qquad\,-A{\left(\Phi^{-1}(z,\omega),\omega\right)}{(e_{k}\Lambda_{\ell}(\theta))}\cdot\overline{{\left(\nabla_{\\!\\!z}+2i\pi\theta\right)}\Psi(\theta)}$
$\displaystyle\qquad\qquad\qquad\,+A{\left(\Phi^{-1}(z,\omega),\omega\right)}{\left(\nabla_{\\!\\!z}+2i\pi\theta\right)}\Lambda_{k}(\theta)\cdot\overline{{(e_{\ell}\,\Psi(\theta))}}$
$\displaystyle\qquad\qquad\qquad\,+A{\left(\Phi^{-1}(z,\omega),\omega\right)}{\left(\nabla_{\\!\\!z}+2i\pi\theta\right)}\Lambda_{\ell}(\theta)\cdot\overline{{(e_{k}\,\Psi(\theta))}}$
$\displaystyle\qquad\qquad\qquad\qquad\,+A{\left(\Phi^{-1}(z,\omega),\omega\right)}{(e_{k}\Psi(\theta))}\cdot\overline{{(e_{\ell}\,\Psi(\theta))}}$
(3.82)
$\displaystyle\qquad\qquad\qquad\qquad\,+A{\left(\Phi^{-1}(z,\omega),\omega\right)}{(e_{\ell}\Psi(\theta))}\cdot\overline{{(e_{k}\,\Psi(\theta))}}$
$\displaystyle\qquad\qquad\qquad\qquad+\frac{1}{2i\pi}\Big{(}\frac{\partial\lambda(\theta)}{\partial\theta_{\ell}}\Lambda_{k}(\theta)+\frac{\partial\lambda(\theta)}{\partial\theta_{k}}\Lambda_{\ell}(\theta)\Big{)}\,\overline{\Psi(\theta)}\Big{\\}}\,dz\,d\mathbb{P}(\omega).$
On the other hand, using $\Lambda_{k}(z,\omega,\theta)$ as a test function in
the f.a.c. equation (3.3) and due to Theorem 2.16, we arrive at
$\begin{array}[]{l}\displaystyle\int_{\mathbb{R}^{n}}A{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\omega\right)},\omega\right)}{\left(\nabla+2i\pi\frac{\theta}{\varepsilon}\right)}\Lambda_{k,\varepsilon}(\theta)\cdot\overline{{\left(\nabla+2i\pi\frac{\theta}{\varepsilon}\right)}\varphi}\,dx\\\\[12.0pt]
\displaystyle+\frac{1}{\varepsilon^{2}}\int_{\mathbb{R}^{n}}V{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\omega\right)},\omega\right)}\,\Lambda_{k,\varepsilon}(\theta)\,\overline{\varphi}\,dx-\frac{\lambda(\theta)}{\varepsilon^{2}}\int_{\mathbb{R}^{n}}\Lambda_{k,\varepsilon}(\theta)\,\overline{\varphi}\,dx\\\\[12.0pt]
\hskip
56.9055pt\displaystyle=-\frac{1}{\varepsilon}\int_{\mathbb{R}^{n}}A{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\omega\right)},\omega\right)}{\left(\nabla+2i\pi\frac{\theta}{\varepsilon}\right)}\Psi_{\varepsilon}(\theta)\cdot\overline{{(e_{k}\varphi)}}\,dx\\\\[12.0pt]
\hskip
71.13188pt\displaystyle-\frac{1}{\varepsilon}\int_{\mathbb{R}^{n}}A{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\omega\right)},\omega\right)}{(e_{k}\Psi_{\varepsilon}(\theta))}\cdot\overline{{\left(\nabla+2i\pi\frac{\theta}{\varepsilon}\right)}\varphi}\,dx\\\\[12.0pt]
\hskip
78.24507pt\displaystyle+\frac{1}{\varepsilon^{2}}\frac{1}{2i\pi}\frac{\partial\lambda}{\partial\theta_{k}}(\theta)\int_{\mathbb{R}^{n}}\Psi_{\varepsilon}(\theta)\,\overline{\varphi}\,dx,\end{array}$
(3.83)
for any ${\varphi\in C^{\infty}_{\rm c}(\mathbb{R}^{n})}$ and a.e.
${\omega\in\Omega}$. Here,
$\Lambda_{k,\varepsilon}(x,\omega,\theta):=\Lambda_{k}{\left(\frac{x}{\varepsilon},\omega,\theta\right)}$.
Proceeding in a similar way with the cell equation (3.56), we can find
$\int_{\mathbb{R}^{n}}A{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\omega\right)},\omega\right)}{\left(\nabla+2i\pi\frac{\theta}{\varepsilon}\right)}\Psi_{\varepsilon}(\theta)\cdot\overline{{\left(\nabla+2i\pi\frac{\theta}{\varepsilon}\right)}\varphi}\,dx\\\
+\frac{1}{\varepsilon^{2}}\int_{\mathbb{R}^{n}}V{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\omega\right)},\omega\right)}\,\Psi_{\varepsilon}(\theta)\,\overline{\varphi}\,dx-\frac{\lambda(\theta)}{\varepsilon^{2}}\int_{\mathbb{R}^{n}}\Psi_{\varepsilon}(\theta)\,\overline{\varphi}\,dx=0,$
(3.84)
for any ${\varphi\in C^{\infty}_{\rm c}(\mathbb{R}^{n})}$ and a.e.
${\omega\in\Omega}$.
## Part II: Asymptotic Equations
## 4 On Schrödinger Equations Homogenization
In this section, we describe the asymptotic behaviour of the family of
solutions $\\{u_{\varepsilon}{\\}_{\varepsilon>0}}$ of the equation (1.1);
this is the content of Theorem 4.2 below. It generalizes the analogous result of
Allaire and Piatnitski [4], who consider a similar problem in the periodic
setting. Our scenario differs substantially from theirs: here, the
coefficients of equation (1.1) are random perturbations obtained through
stochastic diffeomorphisms of stationary functions. Since the two-scale
convergence technique is well suited to the asymptotic analysis of
linear operators, we employ it in a way analogous to [4]. However,
the presence of the stochastic deformation in the coefficients brings
several complications, which we were able to overcome.
To begin, we need some basic a priori estimates for the solution of the
Schrödinger equation (1.1). We have the following
###### Lemma 4.1 (Energy Estimates).
Assume that the conditions (1.2), (1.3) hold and let ${u_{\varepsilon}}\in
C\big{(}[0,T);H^{1}(\mathbb{R}^{n})\big{)}$ be the solution of the equation
(1.1) with initial data $u_{\varepsilon}^{0}$. Then, for all ${t\in[0,T]}$ and
a.e. ${\omega\in\Omega}$, the following a priori estimates hold:
* (i)
$($Energy Conservation$.)$
${\displaystyle\int_{\mathbb{R}^{n}}{|u_{\varepsilon}(t,x,\omega)|}^{2}dx=\int_{\mathbb{R}^{n}}{|u_{\varepsilon}^{0}(x,\omega)|}^{2}dx}$.
* (ii)
$(\varepsilon\nabla-$ Estimate$.)$
$\displaystyle\int_{\mathbb{R}^{n}}|\varepsilon\nabla
u_{\varepsilon}(t,x,\omega)|^{2}\,dx\leq
C\int_{\mathbb{R}^{n}}\Big{\\{}|\varepsilon\nabla
u_{\varepsilon}^{0}(x,\omega)|^{2}+|u_{\varepsilon}^{0}(x,\omega)|^{2}\Big{\\}}\,dx,$
where
${C:=C\big{(}\Lambda,{\|A\|}_{\infty},{\|V\|}_{\infty},{\|U\|}_{\infty}}\big{)}$
is a positive constant which does not depend on $\varepsilon>0$.
###### Proof.
1\. If we multiply the Eq. (1.1) by $\overline{u_{\varepsilon}}$ and take the
imaginary part, then we obtain
$\frac{d}{dt}\int_{\mathbb{R}^{n}}|u_{\varepsilon}(t,x,\omega)|^{2}\,dx=0,$
which gives the proof of the item $(i)$.
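In more detail, the step behind item $(i)$ can be sketched as follows (a sketch, assuming the form of (1.1) indicated by its variational formulation in the proof of Theorem 4.2, with $A$, $V$ and $U$ real-valued): after multiplying by $\overline{u_{\varepsilon}}$ and integrating by parts, the terms involving $A$, $V$ and $U$ are real, so taking the imaginary part retains only

```latex
\operatorname{Im}\int_{\mathbb{R}^{n}} i\,\partial_{t}u_{\varepsilon}\,\overline{u_{\varepsilon}}\,dx
  \;=\;\operatorname{Re}\int_{\mathbb{R}^{n}} \partial_{t}u_{\varepsilon}\,\overline{u_{\varepsilon}}\,dx
  \;=\;\frac{1}{2}\,\frac{d}{dt}\int_{\mathbb{R}^{n}} |u_{\varepsilon}(t,x,\omega)|^{2}\,dx
  \;=\;0.
```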
2\. Now, multiplying the Eq. (1.1) by $\overline{\partial_{t}u_{\varepsilon}}$
and taking the real part, we get
$\displaystyle\frac{1}{2}\frac{d}{dt}\int_{\mathbb{R}^{n}}\Big{\\{}\varepsilon^{2}A\left(\Phi^{-1}\left(\frac{x}{\varepsilon},\omega\right),\omega\right)\nabla
u_{\varepsilon}\cdot\nabla\overline{u_{\varepsilon}}$
$\displaystyle\qquad\qquad+\Big{(}V\left(\Phi^{-1}\left(\frac{x}{\varepsilon},\omega\right),\omega\right)+\varepsilon^{2}U\left(\Phi^{-1}\left(\frac{x}{\varepsilon},\omega\right),\omega\right)\Big{)}|u_{\varepsilon}|^{2}\Big{\\}}\,dx=0,$
which provides the proof of the item $(ii)$. ∎
It is important to recall the following facts, which will be necessary in this
section:
* •
The initial data of the equation (1.1) is assumed to be well-prepared, that
is, for $(x,\omega)\in\mathbb{R}^{n}\\!\times\\!\Omega$, and
${\theta^{\ast}\in\mathbb{R}^{n}}$
$u_{\varepsilon}^{0}(x,\omega)=e^{2i\pi\frac{\theta^{\ast}\cdot
x}{\varepsilon}}\psi\left(\Phi^{-1}(x/\varepsilon,\omega),\omega,\theta^{\ast}\right)v^{0}(x),$
(4.85)
where ${v^{0}\in C_{\rm c}^{\infty}(\mathbb{R}^{n})}$, and
${\psi(\theta^{\ast})}$ is an eigenfunction of the cell problem (3.56).
* •
Using the Ergodic Theorem, it is easily seen that the sequences
${\\{u_{\varepsilon}^{0}(\cdot,\omega)\\}_{\varepsilon>0}}\quad\text{and}\quad{\\{\varepsilon\nabla
u_{\varepsilon}^{0}(\cdot,\omega)\\}_{\varepsilon>0}}$
are bounded in ${L^{2}(\mathbb{R}^{n})}$ and ${[L^{2}(\mathbb{R}^{n})]^{n}}$,
respectively.
One observes that the main importance of the well-preparedness of the initial
data is the following: trivially, the sequence of solutions
$\\{u_{\varepsilon}{\\}_{\varepsilon>0}}$ of the equation (1.1) two-scale
converges to zero. However, if the initial data is well-prepared, we are able
to correct the oscillations present in $u_{\varepsilon}$ in such a way that,
after this correction, the weak convergence is strengthened to convergence to
the solution of a nontrivial homogenized Schrödinger equation. For background,
we refer the reader to Allaire, Piatnitski [4], Bensoussan, Lions,
Papanicolaou [8, Chapter 4] and Poupaud, Ringhofer [31].
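Concretely, the correction in question amounts to factoring out of $u_{\varepsilon}$ the rapidly oscillating phase and profile. In the notation of Theorem 4.2 below, one expects the approximation

```latex
u_{\varepsilon}(t,x,\omega)\;\approx\;
e^{\,i\frac{\lambda(\theta^{\ast})t}{\varepsilon^{2}}
  +2i\pi\frac{\theta^{\ast}\cdot x}{\varepsilon}}\,
\psi\!\left(\Phi^{-1}\!\left(\tfrac{x}{\varepsilon},\omega\right),\omega,\theta^{\ast}\right)
v(t,x),
```

so that, after dividing out the oscillatory factor, the corrected sequence $v_{\varepsilon}$ converges strongly in the two-scale sense to $v(t,x)\,\psi$, where $v$ solves the homogenized equation (4.86).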
### 4.1 The Abstract Theorem.
Next, we establish an abstract homogenization theorem for Schrödinger
equations.
###### Theorem 4.2.
Let $\Phi(y,\omega)$ be a stochastic deformation, and
$\tau:\mathbb{Z}^{n}\times\Omega\to\Omega$ an ergodic $n-$dimensional
dynamical system. Assume that the conditions (1.2), (1.3) hold, and there
exists a Bloch frequency ${\theta^{\ast}\\!\in\mathbb{R}^{n}}$ which is a
critical point of $\lambda(\cdot)$, that is
${\nabla_{\\!\\!\theta}\,\lambda(\theta^{\ast})=0}$, where
${\lambda(\theta^{\ast})}$ is a simple eigenvalue of the spectral cell
equation (3.56) associated to the eigenfunction
$\Psi(z,\omega,\theta^{\ast})\equiv\psi{\left(\Phi^{-1}(z,\omega),\omega,\theta^{\ast}\right)}$.
Assume also that the initial data is well-prepared in the sense of (4.85). If
${u_{\varepsilon}}\in C\big{(}[0,T);H^{1}(\mathbb{R}^{n})\big{)}$ is the
solution of (1.1) for each $\varepsilon>0$ fixed, then the sequence
${v_{\varepsilon}}$ defined by
$v_{\varepsilon}(t,x,\omega):=e^{-{\left(i\frac{\lambda(\theta^{\ast})t}{\varepsilon^{2}}+2i\pi\frac{\theta^{\ast}\\!\cdot
x}{\varepsilon}\right)}}u_{\varepsilon}(t,x,\omega),\;\,(t,x)\in\mathbb{R}^{n+1}_{T},\;\omega\in\Omega,$
$\Phi_{\omega}-$two-scale converges to
${v(t,x)\,\psi{\left(\Phi^{-1}(z,\omega),\omega,\theta^{\ast}\right)}}$, and
satisfies for a.e. ${\omega\in\Omega}$
$\lim_{\varepsilon\to
0}\iint_{\mathbb{R}^{n+1}_{T}}\\!{\left|v_{\varepsilon}(t,x,\omega)-v(t,x)\,\psi{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\omega\right)},\omega,\theta^{\ast}\right)}\right|}^{2}dx\,dt\,=\,0,$
where the function ${v\in C\big{(}[0,T);L^{2}(\mathbb{R}^{n})\big{)}}$ is the
unique solution of the homogenized Schrödinger equation
$\left\\{\begin{aligned} &i\displaystyle\frac{\partial v}{\partial t}-{\rm
div}{\left(A^{\ast}\nabla
v\right)}+U^{\ast}v=0\,,\;\,\text{in}\;\,\mathbb{R}^{n+1}_{T},\\\\[5.0pt]
&v(0,x)=v^{0}(x)\,,\;\,x\in\mathbb{R}^{n},\end{aligned}\right.$ (4.86)
with effective (constant) coefficients: matrix
${A^{\ast}=D_{\theta}^{2}\lambda(\theta^{\ast})}$, and potential
$U^{\ast}=c^{-1}_{\psi}\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}U{\left(\Phi^{-1}(z,\omega),\omega\right)}{\left|\psi{\left(\Phi^{-1}(z,\omega),\omega,\theta^{\ast}\right)}\right|}^{2}dz\,d\mathbb{P}(\omega),$
where
$c_{\psi}=\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}{\left|\psi{\left(\Phi^{-1}(z,\omega),\omega,\theta^{\ast}\right)}\right|}^{2}dz\,d\mathbb{P}(\omega).$
###### Proof.
In order to better understand the main difficulties brought by the presence of
the stochastic deformation $\Phi$, we split our proof into five steps.
1.(A priori estimates and $\Phi_{\omega}-$two-scale convergence.) First, we
define
$v_{\varepsilon}(t,x,\widetilde{\omega}):=e^{-{\left(i\frac{\lambda(\theta^{\ast})t}{\varepsilon^{2}}+2i\pi\frac{\theta^{\ast}\\!\cdot
x}{\varepsilon}\right)}}u_{\varepsilon}(t,x,\widetilde{\omega}),\;\,(t,x,\widetilde{\omega})\in\mathbb{R}^{n+1}_{T}\\!\times\\!\Omega.$
(4.87)
Then, computing the first derivatives with respect to the variable $x$, we get
$\varepsilon\nabla
u_{\varepsilon}(t,x,\widetilde{\omega})\,e^{-{\left(i\frac{\lambda(\theta^{\ast})t}{\varepsilon^{2}}+2i\pi\frac{\theta^{\ast}\\!\cdot
x}{\varepsilon}\right)}}=(\varepsilon\nabla+2i\pi\theta^{\ast})v_{\varepsilon}(t,x,\widetilde{\omega}).$
(4.88)
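For the reader's convenience, (4.88) follows from the product rule applied to (4.87): since the time-dependent part of the phase does not depend on $x$,

```latex
\nabla v_{\varepsilon}
  = e^{-\left(i\frac{\lambda(\theta^{\ast})t}{\varepsilon^{2}}
        +2i\pi\frac{\theta^{\ast}\cdot x}{\varepsilon}\right)}
    \left(\nabla u_{\varepsilon}
        -\frac{2i\pi\theta^{\ast}}{\varepsilon}\,u_{\varepsilon}\right),
\quad\text{hence}\quad
(\varepsilon\nabla+2i\pi\theta^{\ast})\,v_{\varepsilon}
  = \varepsilon\,
    e^{-\left(i\frac{\lambda(\theta^{\ast})t}{\varepsilon^{2}}
        +2i\pi\frac{\theta^{\ast}\cdot x}{\varepsilon}\right)}
    \nabla u_{\varepsilon}.
```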
Applying Lemma 4.1 yields:
* •
${\displaystyle\int_{\mathbb{R}^{n}}{|v_{\varepsilon}(t,x,\widetilde{\omega})|}^{2}dx=\int_{\mathbb{R}^{n}}{|u_{\varepsilon}^{0}(x,\widetilde{\omega})|}^{2}dx},$
* •
${\displaystyle\int_{\mathbb{R}^{n}}{|\varepsilon\nabla
v_{\varepsilon}(t,x,\widetilde{\omega})|}^{2}dx\leq\widetilde{C}{\displaystyle\int_{\mathbb{R}^{n}}\Big{(}{|\varepsilon\nabla
u_{\varepsilon}^{0}(x,\widetilde{\omega})|}^{2}+{|u_{\varepsilon}^{0}(x,\widetilde{\omega})|}^{2}\Big{)}dx}}$
for all ${t\in[0,T)}$ and a.e. $\widetilde{\omega}\in\Omega$, where the
constant ${\widetilde{C}}$ depends on $\|A{\|}_{\infty}$, $\|V{\|}_{\infty}$,
$\|U{\|}_{\infty}$ and $\theta^{\ast}$. Then, from the uniform boundedness of
the sequences
${\\{u_{\varepsilon}^{0}(\cdot,\widetilde{\omega})\\}_{\varepsilon>0}}$ and
${\\{\varepsilon\nabla
u_{\varepsilon}^{0}(\cdot,\widetilde{\omega})\\}_{\varepsilon>0}}$, we deduce
that the sequences
${{\\{v_{\varepsilon}(\cdot,\cdot\cdot,\widetilde{\omega})\\}}_{\varepsilon>0}}\quad\text{and}\quad{{\\{\varepsilon\nabla
v_{\varepsilon}(\cdot,\cdot\cdot,\widetilde{\omega})\\}}_{\varepsilon>0}}$
are bounded, respectively, in ${L^{2}(\mathbb{R}^{n+1}_{T})}$ and
${{[L^{2}(\mathbb{R}^{n+1}_{T})]}^{n}}$ for a.e.
${\widetilde{\omega}\in\Omega}$. Therefore, applying Lemma 2.22, there exists
a subsequence ${\\{\varepsilon^{\prime}\\}}$ (which may depend on
$\widetilde{\omega}$), and a stationary function
${v^{\ast}_{\widetilde{\omega}}\in L^{2}(\mathbb{R}^{n+1}_{T},\mathcal{H})}$,
for a.e. ${\widetilde{\omega}\in\Omega}$, such that
$v_{\varepsilon^{\prime}}(t,x,\widetilde{\omega})\;\xrightharpoonup[\varepsilon^{\prime}\to
0]{2-{\rm
s}}\;v^{\ast}_{\widetilde{\omega}}{\left(t,x,\Phi^{-1}(z,\omega),\omega\right)},$
and
$\varepsilon^{\prime}\frac{\partial v_{\varepsilon^{\prime}}}{\partial
x_{k}}(t,x,\widetilde{\omega})\;\xrightharpoonup[\varepsilon^{\prime}\to
0]{2-{\rm s}}\;\frac{\partial}{\partial
z_{k}}{\big{(}v^{\ast}_{\widetilde{\omega}}{\left(t,x,\Phi^{-1}{(z,\omega)},\omega\right)}\big{)}},$
which means that, for ${k\in\\{1,\ldots,n\\}}$, we have
$\displaystyle\lim_{\varepsilon^{\prime}\to 0}\iint_{\mathbb{R}^{n+1}_{T}}$
$\displaystyle
v_{\varepsilon^{\prime}}\left(t,x,\widetilde{\omega}\right)\,\overline{\varphi(t,x)\,\Theta\left(\Phi^{-1}{\left(\frac{x}{\varepsilon^{\prime}},\widetilde{\omega}\right)},\widetilde{\omega}\right)}\,dx\,dt$
(4.89)
$\displaystyle=c_{\Phi}^{-1}\\!\\!\iint_{\mathbb{R}^{n+1}_{T}}\\!\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}\\!\\!\\!\\!\\!v^{\ast}_{\widetilde{\omega}}{\left(t,x,\Phi^{-1}{(z,\omega)},\omega\right)}\,$
$\displaystyle\hskip
90.0pt\times\;\overline{\varphi(t,x)\,\Theta\left(\Phi^{-1}(z,\omega),\omega\right)}\,dz\,d\mathbb{P}\,dx\,dt$
and
$\displaystyle\lim_{\varepsilon^{\prime}\to 0}\iint_{\mathbb{R}^{n+1}_{T}}$
$\displaystyle\varepsilon^{\prime}\frac{\partial
v_{\varepsilon^{\prime}}}{\partial
x_{k}}\left(t,x,\widetilde{\omega}\right)\,\overline{\varphi(t,x)\,\Theta\left(\Phi^{-1}{\left(\frac{x}{\varepsilon^{\prime}},\widetilde{\omega}\right)},\widetilde{\omega}\right)}\,dx\,dt$
(4.90)
$\displaystyle=c_{\Phi}^{-1}\\!\\!\iint_{\mathbb{R}^{n+1}_{T}}\\!\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}\frac{\partial}{\partial
z_{k}}{\left(v^{\ast}_{\widetilde{\omega}}{\left(t,x,\Phi^{-1}{(z,\omega)},\omega\right)}\right)}\,$
$\displaystyle\hskip
90.0pt\times\,\overline{\varphi(t,x)\,\Theta\left(\Phi^{-1}(z,\omega),\omega\right)}\,dz\,d\mathbb{P}\,dx\,dt,$
for all functions $\varphi\in C^{\infty}_{\rm
c}((-\infty,T)\times\mathbb{R}^{n})$ and $\Theta\in
L^{2}_{\text{loc}}\left(\mathbb{R}^{n}\times\Omega\right)$ stationary.
Moreover, the sequence
${{\\{v_{\varepsilon}^{0}(\cdot,\widetilde{\omega})\\}}_{\varepsilon>0}}$
defined by,
$v_{\varepsilon}^{0}(x,\omega):=\psi{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\omega\right)},\omega,\theta^{\ast}\right)}v^{0}(x),\;\;(x,\omega)\in\mathbb{R}^{n}\\!\times\\!\Omega,$
(4.91)
satisfies
$v_{\varepsilon}^{0}(\cdot,\widetilde{\omega})\;\xrightharpoonup[\varepsilon\to
0]{2-{\rm
s}}\;v^{0}(x)\,\psi{\left(\Phi^{-1}(z,\omega),\omega,\theta^{\ast}\right)},$
(4.92)
for each stationary function ${\psi(\theta^{\ast})}$.
2.(The Split Process.) We consider the following
Claim: There exists ${v_{\widetilde{\omega}}\in L^{2}(\mathbb{R}^{n+1}_{T})}$,
such that
$\displaystyle
v^{\ast}_{\widetilde{\omega}}{\left(t,x,\Phi^{-1}(z,\omega),\omega\right)}$
$\displaystyle=v_{\widetilde{\omega}}(t,x)\,\psi{\left(\Phi^{-1}(z,\omega),\omega,\theta^{\ast}\right)}$
$\displaystyle\equiv
v_{\widetilde{\omega}}(t,x)\,\Psi(z,\omega,\theta^{\ast}).$
Proof of Claim: First, for any $\widetilde{\omega}\in\Omega$ fixed, we take
the function
$Z_{\varepsilon}(t,x,\widetilde{\omega})=\varepsilon^{2}e^{i\frac{\lambda(\theta^{\ast})t}{\varepsilon^{2}}+2i\pi\frac{\theta^{\ast}\\!\cdot
x}{\varepsilon}}\varphi(t,x)\,\Theta{\big{(}\Phi^{-1}\big{(}\frac{x}{\varepsilon},\widetilde{\omega}\big{)},\widetilde{\omega}\big{)}}$
(4.93)
as a test function in the associated variational formulation of the equation
(1.1), where ${\varphi\in C^{\infty}_{\rm
c}((-\infty,T)\\!\times\\!\mathbb{R}^{n})}$ and $\Theta\in
L^{\infty}\left(\mathbb{R}^{n}\times\Omega\right)$ stationary, with
$\Theta(\cdot,\omega)$ smooth. Therefore, we obtain
$\displaystyle-i\iint_{\mathbb{R}^{n+1}_{T}}u_{\varepsilon}(t,x,\widetilde{\omega})\,\frac{\partial\overline{Z_{\varepsilon}}}{\partial
t}(t,x,\widetilde{\omega})\,dx\,dt+i\int_{\mathbb{R}^{n}}u_{\varepsilon}^{0}(x,\widetilde{\omega})\,\overline{Z_{\varepsilon}}(0,x,\widetilde{\omega})\,dx$
$\displaystyle+\iint_{\mathbb{R}^{n+1}_{T}}A{\left(\Phi^{-1}\left(\frac{x}{\varepsilon},\widetilde{\omega}\right),\widetilde{\omega}\right)}\nabla
u_{\varepsilon}(t,x,\widetilde{\omega})\cdot\nabla\overline{Z_{\varepsilon}}(t,x,\widetilde{\omega})\,dx\,dt$
$\displaystyle+\frac{1}{\varepsilon^{2}}\iint_{\mathbb{R}^{n+1}_{T}}V{\left(\Phi^{-1}\left(\frac{x}{\varepsilon},\widetilde{\omega}\right),\widetilde{\omega}\right)}u_{\varepsilon}(t,x,\widetilde{\omega})\,\overline{Z_{\varepsilon}}(t,x,\widetilde{\omega})\,dx\,dt$
$\displaystyle+\iint_{\mathbb{R}^{n+1}_{T}}U{\left(\Phi^{-1}\left(\frac{x}{\varepsilon},\widetilde{\omega}\right),\widetilde{\omega}\right)}u_{\varepsilon}(t,x,\widetilde{\omega})\,\overline{Z_{\varepsilon}}(t,x,\widetilde{\omega})\,dx\,dt=0,$
and since
$\displaystyle\frac{\partial Z_{\varepsilon}}{\partial
t}(t,x,\widetilde{\omega})$
$\displaystyle=i\lambda(\theta^{\ast})\,e^{i\frac{\lambda(\theta^{\ast})t}{\varepsilon^{2}}+2i\pi\frac{\theta^{\ast}\\!\cdot
x}{\varepsilon}}\,\varphi(t,x)\,\Theta{(\Phi^{-1}(\frac{x}{\varepsilon},\widetilde{\omega}),\widetilde{\omega})}+\mathrm{O}(\varepsilon^{2}),$
$\displaystyle\nabla Z_{\varepsilon}(t,x,\widetilde{\omega})$
$\displaystyle=\varepsilon\,e^{i\frac{\lambda(\theta^{\ast})t}{\varepsilon^{2}}+2i\pi\frac{\theta^{\ast}\\!\cdot
x}{\varepsilon}}\,(\varepsilon\nabla+2i\pi\theta^{\ast})\big{(}\varphi(t,x)\,\Theta{(\Phi^{-1}{(\frac{x}{\varepsilon},\widetilde{\omega})},\widetilde{\omega})}\big{)},$
it follows that
$\displaystyle-\lambda(\theta^{\ast})\iint_{\mathbb{R}^{n+1}_{T}}v_{\varepsilon}(t,x,\widetilde{\omega})\,\overline{\varphi(t,x)\,\Theta{(\Phi^{-1}\big{(}\frac{x}{\varepsilon},\widetilde{\omega}\big{)},\widetilde{\omega})}}\,dx\,dt$
$\displaystyle+\iint_{\mathbb{R}^{n+1}_{T}}A{(\Phi^{-1}{\big{(}\frac{x}{\varepsilon},\widetilde{\omega}\big{)}},\widetilde{\omega})}\,{(\varepsilon\nabla+2i\pi\theta^{\ast})}v_{\varepsilon}(t,x,\widetilde{\omega})$
$\displaystyle\hskip
60.0pt\cdot\overline{{(\varepsilon\nabla+2i\pi\theta^{\ast})}{\left(\varphi(t,x)\,\Theta{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega}\right)}\right)}}\,dx\,dt$
$\displaystyle+\iint_{\mathbb{R}^{n+1}_{T}}V{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega}\right)}\,v_{\varepsilon}(t,x,\widetilde{\omega})\,\overline{\varphi(t,x)\,\Theta{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega}\right)}}\,dx\,dt=\mathrm{O}(\varepsilon^{2}),$
where we have used (4.87), (4.88), (4.91), and (4.93). However, it is more
convenient to rewrite this as
$\displaystyle-\lambda(\theta^{\ast})\iint_{\mathbb{R}^{n+1}_{T}}v_{\varepsilon}(t,x,\widetilde{\omega})\,\overline{\varphi(t,x)\,\Theta{(\Phi^{-1}\big{(}\frac{x}{\varepsilon},\widetilde{\omega}\big{)},\widetilde{\omega})}}\,dx\,dt$
(4.94)
$\displaystyle+\iint_{\mathbb{R}^{n+1}_{T}}{(\varepsilon\nabla+2i\pi\theta^{\ast})}v_{\varepsilon}(t,x,\widetilde{\omega})$
$\displaystyle\hskip
20.0pt\cdot\overline{A{(\Phi^{-1}{\big{(}\frac{x}{\varepsilon},\widetilde{\omega}\big{)}},\widetilde{\omega})}\,{(\varepsilon\nabla+2i\pi\theta^{\ast})}{\big{(}\varphi(t,x)\,\Theta{(\Phi^{-1}{\big{(}\frac{x}{\varepsilon},\widetilde{\omega}\big{)}},\widetilde{\omega})}\big{)}}}\,dx\,dt$
$\displaystyle+\iint_{\mathbb{R}^{n+1}_{T}}v_{\varepsilon}(t,x,\widetilde{\omega})\,\overline{\varphi(t,x)\,V{(\Phi^{-1}{\big{(}\frac{x}{\varepsilon},\widetilde{\omega}\big{)}},\widetilde{\omega})}\,\Theta{(\Phi^{-1}{\big{(}\frac{x}{\varepsilon},\widetilde{\omega}\big{)}},\widetilde{\omega})}}\,dx\,dt=\mathrm{O}(\varepsilon^{2}).$
Now, taking $\varepsilon={\varepsilon^{\prime}}$, letting
$\varepsilon^{\prime}\to 0$ and using Definition 2.19, we have for a.e.
${\widetilde{\omega}\in\Omega}$, for all ${\varphi\in C^{\infty}_{\rm
c}((-\infty,T)\\!\times\\!\mathbb{R}^{n})}$, $\Theta\in
L^{\infty}\left(\mathbb{R}^{n}\times\Omega\right)$ stationary and
$\Theta(\cdot,\omega)$ smooth,
$\displaystyle-\lambda(\theta^{\ast})\,c_{\Phi}^{-1}\\!\\!\\!\iint_{\mathbb{R}^{n+1}_{T}}\\!\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}v^{\ast}_{\widetilde{\omega}}{\left(t,x,\Phi^{-1}(z,\omega),\omega\right)}$
$\displaystyle\hskip
90.0pt\times\overline{\varphi(t,x)\,\Theta{\left(\Phi^{-1}(z,\omega),\omega\right)}}\,dz\,d\mathbb{P}(\omega)\,dx\,dt$
$\displaystyle+c_{\Phi}^{-1}\\!\\!\\!\iint_{\mathbb{R}^{n+1}_{T}}\\!\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}{(\nabla_{\\!\\!z}+2i\pi\theta^{\ast})}{\left(v^{\ast}_{\widetilde{\omega}}{\left(t,x,\Phi^{-1}(z,\omega),\omega\right)}\right)}$
$\displaystyle\cdot\overline{\varphi(t,x)\,A{\left(\Phi^{-1}(z,\omega),\omega\right)}{\left[{(\nabla_{\\!\\!z}+2i\pi\theta^{\ast})}{\left(\Theta{\left(\Phi^{-1}(z,\omega),\omega\right)}\right)}\right]}}\,dz\,d\mathbb{P}(\omega)\,dx\,dt$
$\displaystyle+c_{\Phi}^{-1}\\!\\!\\!\iint_{\mathbb{R}^{n+1}_{T}}\\!\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}v^{\ast}_{\widetilde{\omega}}{\left(t,x,\Phi^{-1}(z,\omega),\omega\right)}\,$
$\displaystyle\hskip
60.0pt\times\overline{\varphi(t,x)\,V{\left(\Phi^{-1}(z,\omega),\omega\right)}\,\Theta{\left(\Phi^{-1}(z,\omega),\omega\right)}}\,dz\,d\mathbb{P}(\omega)\,dx\,dt=0.$
Therefore, by a density argument in the test functions (thanks to the
topological structure of $\Omega$), we conclude that
$\displaystyle-\lambda(\theta^{\ast})\,\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}v^{\ast}_{\widetilde{\omega}}{\left(t,x,\Phi^{-1}(z,\omega),\omega\right)}\,\overline{\Theta{\left(\Phi^{-1}(z,\omega),\omega\right)}}\,dz\,d\mathbb{P}(\omega)$
$\displaystyle+\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}A{\left(\Phi^{-1}(z,\omega),\omega\right)}{(\nabla_{\\!\\!z}+2i\pi\theta^{\ast})}{\left(v^{\ast}_{\widetilde{\omega}}{\left(t,x,\Phi^{-1}(z,\omega),\omega\right)}\right)}$
$\displaystyle\hskip
60.0pt\cdot\overline{{(\nabla_{\\!\\!z}+2i\pi\theta^{\ast})}{\left(\Theta{\left(\Phi^{-1}(z,\omega),\omega\right)}\right)}}\,dz\,d\mathbb{P}(\omega)$
$\displaystyle+\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}\\!\\!\\!V{(\Phi^{-1}(z,\omega),\omega)}\,v^{\ast}_{\widetilde{\omega}}{(t,x,\Phi^{-1}(z,\omega),\omega)}\,$
$\displaystyle\hskip
120.0pt\times\overline{\Theta{(\Phi^{-1}(z,\omega),\omega)}}\,dz\,d\mathbb{P}(\omega)=0,$
for a.e. ${(t,x)\in\mathbb{R}^{n+1}_{T}}$ and for all ${\Theta}$ as above.
Thus, the simplicity of the eigenvalue $\lambda(\theta^{\ast})$ assures us
that for a.e. ${(t,x)\in\mathbb{R}^{n+1}_{T}}$, the function
${(z,\omega)\mapsto
v^{\ast}_{\widetilde{\omega}}{\left(t,x,\Phi^{-1}(z,\omega),\omega\right)}}$
(which belongs to the space ${\mathcal{H}}$) is parallel to the function
${\Psi(\theta^{\ast})}$, i.e., we can find
${v_{\widetilde{\omega}}(t,x)\in\mathbb{C}}$, such that
$\displaystyle
v^{\ast}_{\widetilde{\omega}}{\left(t,x,\Phi^{-1}(z,\omega),\omega\right)}$
$\displaystyle=$ $\displaystyle
v_{\widetilde{\omega}}(t,x)\,\Psi(z,\omega,\theta^{\ast})$
$\displaystyle\equiv$ $\displaystyle
v_{\widetilde{\omega}}(t,x)\,\psi{\left(\Phi^{-1}(z,\omega),\omega,\theta^{\ast}\right)}.$
Finally, since ${v^{\ast}_{\widetilde{\omega}}\in
L^{2}(\mathbb{R}^{n+1}_{T};\mathcal{H})}$, we conclude that
${v_{\widetilde{\omega}}\in L^{2}(\mathbb{R}^{n+1}_{T})}$, which completes the
proof of our claim.
3.(Homogenization Process.) Let ${\Lambda_{k}(\theta^{\ast})}$, for any
$k\in\\{1,\ldots,n\\}$, be the function defined by
$\Lambda_{k}(z,\omega,\theta^{\ast})=\frac{1}{2i\pi}\frac{\partial\Psi}{\partial\theta_{k}}(z,\omega,\theta^{\ast})=\frac{1}{2i\pi}\frac{\partial\psi}{\partial\theta_{k}}{\left(\Phi^{-1}(z,\omega),\omega,\theta^{\ast}\right)},\;(z,\omega)\in\mathbb{R}^{n}\\!\times\\!\Omega,$
where the function
$\Psi(z,\omega,\theta^{\ast})=\psi{\left(\Phi^{-1}(z,\omega),\omega,\theta^{\ast}\right)}$
is the eigenfunction of the spectral cell problem (3.56). Then, we consider
the following test function
$Z_{\varepsilon}(t,x,\widetilde{\omega})=e^{i\frac{\lambda(\theta^{\ast})t}{\varepsilon^{2}}+2i\pi\frac{\theta^{\ast}\\!\cdot
x}{\varepsilon}}{\big{(}\varphi(t,x)\,\Psi_{\varepsilon}(x,\widetilde{\omega},\theta^{\ast})+\varepsilon\sum_{k=1}^{n}\frac{\partial\varphi}{\partial
x_{k}}(t,x)\,\Lambda_{k,\varepsilon}(x,\widetilde{\omega},\theta^{\ast})\big{)}},$
where ${\varphi\in C^{\infty}_{\rm c}((-\infty,T)\\!\times\\!\mathbb{R}^{n})}$
and
$\Psi_{\varepsilon}(x,\widetilde{\omega},\theta^{\ast})=\Psi{\left(\frac{x}{\varepsilon},\widetilde{\omega},\theta^{\ast}\right)},\quad\;\;\Lambda_{k,\varepsilon}(x,\widetilde{\omega},\theta^{\ast})=\Lambda_{k}{\left(\frac{x}{\varepsilon},\widetilde{\omega},\theta^{\ast}\right)}.$
Using the function ${Z_{\varepsilon}}$ as test function in the variational
formulation of the equation (1.1), we obtain
$\displaystyle\big{[}i\int_{\mathbb{R}^{n}}u_{\varepsilon}^{0}(x,\widetilde{\omega})\,\overline{Z_{\varepsilon}}(0,x,\widetilde{\omega})\,dx-i\iint_{\mathbb{R}^{n+1}_{T}}u_{\varepsilon}(t,x,\widetilde{\omega})\,\frac{\partial\overline{Z_{\varepsilon}}}{\partial
t}(t,x,\widetilde{\omega})\,dx\,dt\big{]}$ (4.95)
$\displaystyle+\big{[}\iint_{\mathbb{R}^{n+1}_{T}}A{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega}\right)}\,\nabla
u_{\varepsilon}(t,x,\widetilde{\omega})\cdot\nabla\overline{Z_{\varepsilon}}(t,x,\widetilde{\omega})\,dx\,dt\big{]}$
$\displaystyle+\big{[}\frac{1}{\varepsilon^{2}}\iint_{\mathbb{R}^{n+1}_{T}}V{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega}\right)\,u_{\varepsilon}(t,x,\widetilde{\omega})\,\overline{Z_{\varepsilon}}(t,x,\widetilde{\omega})\,dx\,dt}$
$\displaystyle\hskip
30.0pt+\iint_{\mathbb{R}^{n+1}_{T}}U{\left(x,\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega}\right)}\,u_{\varepsilon}(t,x,\widetilde{\omega})\,\overline{Z_{\varepsilon}}(t,x,\widetilde{\omega})\,dx\,dt\big{]}=0.$
In order to simplify the manipulation of the above equation, we denote by
$I_{k}^{\varepsilon}$ $(k=1,2,3)$ the respective term in the $k^{\text{th}}$
pair of brackets, so that we can rewrite the equation (4.95) as
$I_{1}^{\varepsilon}+I_{2}^{\varepsilon}+I_{3}^{\varepsilon}=0$.
The analysis of the term $I_{1}^{\varepsilon}$ begins with the following
computation
$\displaystyle\frac{\partial Z_{\varepsilon}}{\partial
t}(t,x,\widetilde{\omega})$
$\displaystyle=e^{i\frac{\lambda(\theta^{\ast})t}{\varepsilon^{2}}+2i\pi\frac{\theta^{\ast}\\!\cdot
x}{\varepsilon}}{\Big{[}i\frac{\lambda(\theta^{\ast})}{\varepsilon^{2}}{\big{(}\varphi(t,x)\,\Psi_{\varepsilon}(x,\widetilde{\omega},\theta^{\ast})}}$
$\displaystyle+\,{{\varepsilon\sum_{k=1}^{n}\frac{\partial\varphi}{\partial
x_{k}}(t,x)\,\Lambda_{k,\varepsilon}(x,\widetilde{\omega},\theta^{\ast})\big{)}}+\frac{\partial\varphi}{\partial
t}(t,x)\,\Psi_{\varepsilon}(x,\widetilde{\omega},\theta^{\ast})}$
$\displaystyle{+\,\varepsilon\sum_{k=1}^{n}\frac{\partial^{2}\varphi}{\partial
t\,\partial
x_{k}}(t,x)\,\Lambda_{k,\varepsilon}(x,\widetilde{\omega},\theta^{\ast})\Big{]}},$
therefore we have
$\displaystyle I_{1}^{\varepsilon}$
$\displaystyle=i\int_{\mathbb{R}^{n}}v_{\varepsilon}^{0}\,\overline{{\big{(}\varphi(0,x)\,\Psi_{\varepsilon}(x,\widetilde{\omega},\theta^{\ast})+\varepsilon\sum_{k=1}^{n}\frac{\partial\varphi}{\partial
x_{k}}(0,x)\,\Lambda_{k,\varepsilon}(x,\widetilde{\omega},\theta^{\ast})\big{)}}}dx$
$\displaystyle-\frac{\lambda(\theta^{\ast})}{\varepsilon^{2}}\iint_{\mathbb{R}^{n+1}_{T}}v_{\varepsilon}\,\overline{{\big{(}\varphi(t,x)\,\Psi_{\varepsilon}(x,\widetilde{\omega},\theta^{\ast})+\varepsilon\sum_{k=1}^{n}\frac{\partial\varphi}{\partial
x_{k}}(t,x)\,\Lambda_{k,\varepsilon}(x,\widetilde{\omega},\theta^{\ast})\big{)}}}dx\,dt$
$\displaystyle-i\iint_{\mathbb{R}^{n+1}_{T}}v_{\varepsilon}\,\overline{{\big{(}\frac{\partial\varphi}{\partial
t}(t,x)\,\Psi_{\varepsilon}(x,\widetilde{\omega},\theta^{\ast})+\varepsilon\sum_{k=1}^{n}\frac{\partial^{2}\varphi}{\partial
t\,\partial
x_{k}}(t,x)\Lambda_{k,\varepsilon}(x,\widetilde{\omega},\theta^{\ast})\big{)}}}.$
For the analysis of the term $I_{2}^{\varepsilon}$, we need the following
computations
$\displaystyle\nabla Z_{\varepsilon}(t,x,\widetilde{\omega})$
$\displaystyle=e^{i\frac{\lambda(\theta^{\ast})t}{\varepsilon^{2}}+2i\pi\frac{\theta^{\ast}\\!\cdot
x}{\varepsilon}}{\big{[}\nabla\varphi(t,x)\,\Psi_{\varepsilon}(x,\widetilde{\omega},\theta^{\ast})+\varphi(t,x)\,\nabla\Psi_{\varepsilon}(z,\widetilde{\omega},\theta^{\ast})}$
$\displaystyle+\varepsilon\sum_{k=1}^{n}\nabla{\big{(}\frac{\partial\varphi}{\partial
x_{k}}(t,x)\big{)}}\,\Lambda_{k,\varepsilon}(x,\widetilde{\omega},\theta^{\ast})+\varepsilon\sum_{k=1}^{n}\frac{\partial\varphi}{\partial
x_{k}}(t,x)\,\nabla\Lambda_{k,\varepsilon}(x,\widetilde{\omega},\theta^{\ast})$
$\displaystyle+\,2i\pi\frac{\theta^{\ast}}{\varepsilon}{{\big{(}\varphi(t,x)\,\Psi_{\varepsilon}(x,\widetilde{\omega},\theta^{\ast})+\varepsilon\sum_{k=1}^{n}\frac{\partial\varphi}{\partial
x_{k}}(t,x)\,\Lambda_{k,\varepsilon}(x,\widetilde{\omega},\theta^{\ast})\big{)}}\big{]}}$
$\displaystyle=e^{i\frac{\lambda(\theta^{\ast})t}{\varepsilon^{2}}+2i\pi\frac{\theta^{\ast}\\!\cdot
x}{\varepsilon}}{\big{[}\varphi(t,x)\,{\big{(}\nabla+2i\pi\frac{\theta^{\ast}}{\varepsilon}\big{)}}\Psi_{\varepsilon}(x,\widetilde{\omega},\theta^{\ast})}$
$\displaystyle+{\,\varepsilon\sum_{k=1}^{n}\frac{\partial\varphi}{\partial
x_{k}}(t,x)\,{\big{(}\nabla+2i\pi\frac{\theta^{\ast}}{\varepsilon}\big{)}}\Lambda_{k,\varepsilon}(x,\widetilde{\omega},\theta^{\ast})+\nabla\varphi(t,x)\,\Psi_{\varepsilon}(x,\widetilde{\omega},\theta^{\ast})}$
$\displaystyle+\,{\varepsilon\sum_{k=1}^{n}\nabla{\big{(}\frac{\partial\varphi}{\partial
x_{k}}(t,x)\big{)}}\,\Lambda_{k,\varepsilon}(x,\widetilde{\omega},\theta^{\ast})\big{]}},$
and from this, we have
$\displaystyle
A{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega}\right)}\nabla
u_{\varepsilon}(t,x,\widetilde{\omega})\cdot\overline{\nabla
Z_{\varepsilon}}(t,x,\widetilde{\omega})$
$\displaystyle=A{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega}\right)}{\big{[}\nabla
u_{\varepsilon}(t,x,\widetilde{\omega})\,e^{-\left({i\frac{\lambda(\theta^{\ast})t}{\varepsilon^{2}}+2i\pi\frac{\theta^{\ast}\\!\cdot
x}{\varepsilon}}\right)}\big{]}}$
$\displaystyle\cdot\big{[}\overline{\varphi}(t,x)\,(\nabla-2i\pi\frac{\theta^{\ast}}{\varepsilon})\overline{\Psi_{\varepsilon}}(x,\widetilde{\omega},\theta^{\ast})}+\varepsilon\\!\sum_{k=1}^{n}\frac{\partial\overline{\varphi}}{\partial
x_{k}}(t,x)\,{(\nabla\\!\\!-2i\pi\frac{\theta^{\ast}}{\varepsilon})\overline{\Lambda_{k,\varepsilon}}(x,\widetilde{\omega},\theta^{\ast})$
$\displaystyle+\nabla\overline{\varphi}(t,x)\,\overline{\Psi_{\varepsilon}}(x,\widetilde{\omega},\theta^{\ast})+\varepsilon\sum_{k=1}^{n}\nabla{\big{(}\frac{\partial\overline{\varphi}}{\partial
x_{k}}(t,x)\big{)}}\,\overline{\Lambda_{k,\varepsilon}}(x,\widetilde{\omega},\theta^{\ast})\big{]}.$
Then, from equation (4.88) and using the terms
${\big{(}\nabla+2i\pi\frac{\theta^{\ast}}{\varepsilon}\big{)}}(v_{\varepsilon}(t,x,\widetilde{\omega})\,\overline{\varphi}(t,x)),\quad\big{(}\nabla+2i\pi\frac{\theta^{\ast}}{\varepsilon}\big{)}(v_{\varepsilon}(t,x,\widetilde{\omega})\,\frac{\partial\overline{\varphi}}{\partial
x_{k}}(t,x)),$
it follows from the above equation that
$A{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega}\right)}\nabla
u_{\varepsilon}\cdot\overline{\nabla
Z_{\varepsilon}}=\sum_{k=1}^{n}\left(I_{2,1}^{\varepsilon,k}+I_{2,2}^{\varepsilon,k}+I_{2,3}^{\varepsilon,k}\right)(t,x,\widetilde{\omega}),$
where
$\displaystyle
I_{2,1}^{\varepsilon,k}(t,x,\widetilde{\omega}):=\varepsilon\,A{(\Phi^{-1}{\big{(}\frac{x}{\varepsilon},\widetilde{\omega}\big{)}},\widetilde{\omega})}\big{[}{\big{(}\nabla+2i\pi\frac{\theta^{\ast}}{\varepsilon}\big{)}}{(v_{\varepsilon}(t,x,\widetilde{\omega})\,\frac{\partial\overline{\varphi}}{\partial
x_{k}}(t,x))}\big{]}$
$\displaystyle\quad\cdot\big{[}{\big{(}\nabla-2i\pi\frac{\theta^{\ast}}{\varepsilon}\big{)}}\overline{\Lambda_{k,\varepsilon}}(x,\widetilde{\omega},\theta^{\ast})\big{]}$
$\displaystyle-A(\Phi^{-1}{\big{(}\frac{x}{\varepsilon},\widetilde{\omega}\big{)}},\widetilde{\omega})\big{[}v_{\varepsilon}(t,x,\widetilde{\omega})\,\frac{\partial\overline{\varphi}}{\partial
x_{k}}(t,x)\,e_{k}\big{]}\\!\cdot\\!\big{[}{\big{(}\nabla-2i\pi\frac{\theta^{\ast}}{\varepsilon}\big{)}}\overline{\Psi_{\varepsilon}}(x,\widetilde{\omega},\theta^{\ast})\big{]}$
$\displaystyle+A(\Phi^{-1}{\big{(}\frac{x}{\varepsilon},\widetilde{\omega}\big{)}},\widetilde{\omega}){\big{[}{\big{(}\nabla+2i\pi\frac{\theta^{\ast}}{\varepsilon}\big{)}}(v_{\varepsilon}(t,x,\widetilde{\omega})\,\frac{\partial\overline{\varphi}}{\partial
x_{k}}(t,x))}\big{]}\\!\cdot\\!\big{[}{e_{k}}\overline{\Psi_{\varepsilon}}(x,\widetilde{\omega},\theta^{\ast})\big{]},$
$\displaystyle
I_{2,2}^{\varepsilon,k}(t,x,\widetilde{\omega}):=\frac{1}{n}A{(\Phi^{-1}{\big{(}\frac{x}{\varepsilon},\widetilde{\omega}\big{)}},\widetilde{\omega})}{\big{[}{\big{(}\nabla+2i\pi\frac{\theta^{\ast}}{\varepsilon}\big{)}}(v_{\varepsilon}(t,x,\widetilde{\omega})\overline{\varphi}(t,x))\big{]}}$
$\displaystyle\quad\quad\cdot\big{[}{\big{(}\nabla-2i\pi\frac{\theta^{\ast}}{\varepsilon}\big{)}}\overline{\Psi_{\varepsilon}}(x,\widetilde{\omega},\theta^{\ast})\big{]}$
$\displaystyle\quad\quad-A(\Phi^{-1}{\big{(}\frac{x}{\varepsilon},\widetilde{\omega}\big{)}},\widetilde{\omega}){\big{[}v_{\varepsilon}(t,x,\widetilde{\omega})\,\nabla\frac{\partial\overline{\varphi}}{\partial
x_{k}}(t,x)\big{]}}\cdot{\big{[}e_{k}\,\overline{\Psi_{\varepsilon}}(x,\widetilde{\omega},\theta^{\ast})\big{]}},$
and
$\displaystyle
I_{2,3}^{\varepsilon,k}(t,x,\widetilde{\omega}):=A(\Phi^{-1}{\big{(}\frac{x}{\varepsilon},\widetilde{\omega}\big{)}},\widetilde{\omega})\big{[}{\left(\varepsilon\nabla+2i\pi\theta^{\ast}\right)}v_{\varepsilon}(t,x,\widetilde{\omega})\big{]}$
$\displaystyle\quad\cdot\big{[}\nabla{\big{(}\frac{\partial\overline{\varphi}}{\partial
x_{k}}(t,x)\big{)}}\,\overline{\Lambda_{k,\varepsilon}}(x,\widetilde{\omega},\theta^{\ast})\big{]}$
$\displaystyle-A(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega})\big{[}v_{\varepsilon}(t,x,\widetilde{\omega})\,\nabla\frac{\partial\overline{\varphi}}{\partial
x_{k}}(t,x)\big{]}\\!\cdot\\!\big{[}{\left(\varepsilon\nabla-2i\pi\theta^{\ast}\right)}\overline{\Lambda_{k,\varepsilon}}(x,\widetilde{\omega},\theta^{\ast})\big{]}.$
Thus, integrating over $\mathbb{R}^{n+1}_{T}$, we recover the
$I_{2}^{\varepsilon}$ term, that is
$\displaystyle
I_{2}^{\varepsilon}=\iint_{\mathbb{R}^{n+1}_{T}}A{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega}\right)}\nabla
u_{\varepsilon}(t,x,\widetilde{\omega})\cdot\overline{\nabla
Z_{\varepsilon}}(t,x,\widetilde{\omega})\,dx\,dt$
$\displaystyle\qquad\qquad=\sum_{k=1}^{n}\iint_{\mathbb{R}^{n+1}_{T}}\left(I_{2,1}^{\varepsilon,k}+I_{2,2}^{\varepsilon,k}+I_{2,3}^{\varepsilon,k}\right)(t,x,\widetilde{\omega})\,dx\,dt.$
(4.96)
Now, with the help of the first auxiliary cell equation (3.3), we intend to
simplify the expression of $I_{2}^{\varepsilon}$ into a more convenient one.
To this end, we shall take
${v_{\varepsilon}(t,\cdot,\widetilde{\omega})\,\overline{\varphi}(t,\cdot)}$,
$t\in(0,T)$, as a test function in equation (3.84). Then, we obtain
$\displaystyle\int_{\mathbb{R}^{n}}\\!\\!A(\Phi^{-1}{\big{(}\frac{x}{\varepsilon},\widetilde{\omega}\big{)}},\widetilde{\omega})[\big{(}\nabla\\!+2i\pi\frac{\theta^{\ast}}{\varepsilon}\big{)}{(v_{\varepsilon}(t,x,\widetilde{\omega})\,\overline{\varphi})}]\\!\cdot\\![\big{(}\nabla\\!-2i\pi\frac{\theta^{\ast}}{\varepsilon}\big{)}\overline{\Psi_{\varepsilon}}(x,\widetilde{\omega},\theta^{\ast})]dx$
$\displaystyle\quad=\frac{\lambda(\theta^{\ast})}{\varepsilon^{2}}\int_{\mathbb{R}^{n}}{(v_{\varepsilon}(t,x,\widetilde{\omega})\,\overline{\varphi}(t,x))}\,\overline{\Psi_{\varepsilon}}(x,\widetilde{\omega},\theta^{\ast})\,dx$
$\displaystyle\quad-\frac{1}{\varepsilon^{2}}\int_{\mathbb{R}^{n}}V(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega})\,{(v_{\varepsilon}(t,x,\widetilde{\omega})\,\overline{\varphi}(t,x))}\,\overline{\Psi_{\varepsilon}}(x,\widetilde{\omega},\theta^{\ast})\,dx.$
Therefore, comparing with the $I_{2,2}^{\varepsilon,k}(t,x,\widetilde{\omega})$
term obtained before, we have
$\displaystyle\iint_{\mathbb{R}^{n+1}_{T}}I_{2,2}^{\varepsilon,k}(t,x,\widetilde{\omega})\,dx\,dt$
(4.97)
$\displaystyle=\frac{\lambda(\theta^{\ast})}{n\varepsilon^{2}}\iint_{\mathbb{R}^{n+1}_{T}}{(v_{\varepsilon}(t,x,\widetilde{\omega})\,\overline{\varphi}(t,x))}\,\overline{\Psi_{\varepsilon}}(x,\widetilde{\omega},\theta^{\ast})\,dx\,dt$
$\displaystyle-\frac{1}{n\varepsilon^{2}}\iint_{\mathbb{R}^{n+1}_{T}}V{(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega})}\,(v_{\varepsilon}(t,x,\widetilde{\omega})\,\overline{\varphi}(t,x))\,\overline{\Psi_{\varepsilon}}(x,\widetilde{\omega},\theta^{\ast})\,dx\,dt$
$\displaystyle-\iint_{\mathbb{R}^{n+1}_{T}}\\!\\!\\!A(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega}){[v_{\varepsilon}(t,x,\widetilde{\omega})\,\nabla\frac{\partial\overline{\varphi}}{\partial
x_{k}}(t,x)]}\cdot{[e_{k}\,\overline{\Psi_{\varepsilon}}(x,\widetilde{\omega},\theta^{\ast})]}\,dx\,dt.$
Analogously, taking
${v_{\varepsilon}(t,\cdot,\widetilde{\omega})\,\displaystyle\frac{\partial\overline{\varphi}}{\partial
x_{k}}(t,\cdot)}$, $t\in(0,T)$, with ${k\in\\{1,\ldots,n\\}}$ as a test
function in the equation (3.83), taking into account that
$\nabla_{\\!\theta}\lambda(\theta^{\ast})=0$ and comparing this expression
with $I_{2,1}^{\varepsilon,k}(t,x,\widetilde{\omega})$, we deduce that
$\displaystyle\iint_{\mathbb{R}^{n+1}_{T}}I_{2,1}^{\varepsilon,k}(t,x,\widetilde{\omega})\,dx\,dt$
$\displaystyle\quad=\frac{\lambda(\theta^{\ast})}{\varepsilon}\iint_{\mathbb{R}^{n+1}_{T}}{(v_{\varepsilon}(t,x,\widetilde{\omega})\,\frac{\partial\overline{\varphi}}{\partial
x_{k}}(t,x))}\,\overline{\Lambda_{k,\varepsilon}}(x,\widetilde{\omega},\theta^{\ast})\,dx\,dt$
(4.98)
$\displaystyle\quad-\frac{1}{\varepsilon}\iint_{\mathbb{R}^{n+1}_{T}}V{(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega})}\,{(v_{\varepsilon}(t,x,\widetilde{\omega})\,\frac{\partial\overline{\varphi}}{\partial
x_{k}}(t,x))}\,\overline{\Lambda_{k,\varepsilon}}(x,\widetilde{\omega},\theta^{\ast})\,dx\,dt.$
Therefore, summing equations (4.97) and (4.98), we arrive at
$\displaystyle\sum_{k=1}^{n}\iint_{\mathbb{R}^{n+1}_{T}}\Big{(}I_{2,1}^{\varepsilon,k}+I_{2,2}^{\varepsilon,k}\Big{)}(t,x,\widetilde{\omega})\,dxdt$
$\displaystyle\quad\\!\\!={\frac{\lambda(\theta^{\ast})}{\varepsilon^{2}}\iint_{\mathbb{R}^{n+1}_{T}}\\!\\!\\!v_{\varepsilon}(t,x,\widetilde{\omega})\,\overline{\varphi(t,x)\,\Psi_{\varepsilon}(x,\widetilde{\omega},\theta^{\ast})+\varepsilon\\!\sum_{k=1}^{n}\frac{\partial\varphi}{\partial
x_{k}}(t,x)\,\Lambda_{k,\varepsilon}(x,\widetilde{\omega},\theta^{\ast})}\,dxdt}$
$\displaystyle\quad-\frac{1}{\varepsilon^{2}}\iint_{\mathbb{R}^{n+1}_{T}}\\!V{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega}\right)}\,v_{\varepsilon}(t,x,\widetilde{\omega})$
(4.99) $\displaystyle\hskip
85.35826pt\times\,\overline{\varphi(t,x)\,\Psi_{\varepsilon}(x,\widetilde{\omega},\theta^{\ast})+\varepsilon\sum_{k=1}^{n}\frac{\partial\varphi}{\partial
x_{k}}(t,x)\,\Lambda_{k,\varepsilon}(x,\widetilde{\omega},\theta^{\ast})}\,dxdt$
$\displaystyle\quad-\sum_{k=1}^{n}\iint_{\mathbb{R}^{n+1}_{T}}\\!A{(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega})}{[v_{\varepsilon}(t,x,\widetilde{\omega})\,\nabla\frac{\partial\overline{\varphi}}{\partial
x_{k}}(t,x)]}\\!\cdot\\!{[e_{k}\,\overline{\Psi_{\varepsilon}}(x,\widetilde{\omega},\theta^{\ast})]}\,dxdt.$
Moreover, expressing the $I_{3}^{\varepsilon}$ term as
$\displaystyle
I_{3}^{\varepsilon}=\iint_{\mathbb{R}^{n+1}_{T}}\frac{1}{\varepsilon^{2}}\,V{(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega})}\,v_{\varepsilon}(t,x,\widetilde{\omega})$
$\displaystyle\qquad\qquad\qquad\qquad\times\,\overline{\varphi(t,x)\,\Psi_{\varepsilon}(x,\widetilde{\omega},\theta^{\ast})+\varepsilon\sum_{k=1}^{n}\frac{\partial\varphi}{\partial
x_{k}}(t,x)\Lambda_{k,\varepsilon}(x,\widetilde{\omega},\theta^{\ast})}\,dx\,dt$
$\displaystyle\qquad\quad+\iint_{\mathbb{R}^{n+1}_{T}}U{(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega})}\,v_{\varepsilon}(t,x,\widetilde{\omega})$
$\displaystyle\qquad\qquad\qquad\qquad\times\,\overline{\varphi(t,x)\,\Psi_{\varepsilon}(x,\widetilde{\omega},\theta^{\ast})+\varepsilon\sum_{k=1}^{n}\frac{\partial\varphi}{\partial
x_{k}}(t,x)\Lambda_{k,\varepsilon}(x,\widetilde{\omega},\theta^{\ast})}\,dx\,dt,$
adding this with (4.99) and $I_{1}^{\varepsilon}$, we obtain
$\displaystyle
I_{1}^{\varepsilon}+\sum_{k=1}^{n}\iint_{\mathbb{R}^{n+1}_{T}}\Big{(}I_{2,1}^{\varepsilon,k}+I_{2,2}^{\varepsilon,k}\Big{)}(t,x,\widetilde{\omega})\,dx\,dt+I_{3}^{\varepsilon}$
$\displaystyle={i\int_{\mathbb{R}^{n}}\\!\\!\\!v_{\varepsilon}^{0}(x,\widetilde{\omega})\,\overline{\varphi(0,x)\,\Psi_{\varepsilon}(x,\widetilde{\omega},\theta^{\ast})}dx-i\\!\iint_{\mathbb{R}^{n+1}_{T}}\\!\\!\\!\\!v_{\varepsilon}(t,x,\widetilde{\omega})\,\overline{\frac{\partial\varphi}{\partial
t}(t,x)\,\Psi_{\varepsilon}(x,\widetilde{\omega},\theta^{\ast})}}$
$\displaystyle-\sum_{k,\ell=1}^{n}\iint_{\mathbb{R}^{n+1}_{T}}\\!\\!\\!{v_{\varepsilon}(t,x,\widetilde{\omega})\,e_{\ell}\,\frac{\partial^{2}\overline{\varphi}}{\partial
x_{\ell}\,\partial
x_{k}}(t,x)}\cdot\overline{A{(\Phi^{-1}{(\frac{x}{\varepsilon},\widetilde{\omega})},\widetilde{\omega})}{\;e_{k}\Psi_{\varepsilon}(x,\widetilde{\omega},\theta^{\ast})}}\,dxdt$
$\displaystyle+\iint_{\mathbb{R}^{n+1}_{T}}U{(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega})}\,v_{\varepsilon}(t,x,\widetilde{\omega})\,\overline{\varphi(t,x)\,\Psi_{\varepsilon}(x,\widetilde{\omega},\theta^{\ast})}\,dxdt+\,\mathrm{O}(\varepsilon).$
Thus, for $\varepsilon=\varepsilon^{\prime}(\widetilde{\omega})$ and due to
Step 2, that is to say
$v_{\varepsilon^{\prime}}(t,x,\widetilde{\omega})\;\xrightharpoonup[\varepsilon^{\prime}\to
0]{2-{\rm s}}\;v_{\widetilde{\omega}}(t,x)\,\Psi(z,\omega,\theta^{\ast}),$
we obtain, after letting $\varepsilon^{\prime}\to 0$ in the previous equation,
$\displaystyle\lim_{\varepsilon^{\prime}\to
0}\Big{(}I_{1}^{\varepsilon^{\prime}}+\sum_{k=1}^{n}\iint_{\mathbb{R}^{n+1}_{T}}\Big{(}I_{2,1}^{\varepsilon^{\prime},k}+I_{2,2}^{\varepsilon^{\prime},k}\Big{)}(t,x,\widetilde{\omega})\,dx\,dt+I_{3}^{\varepsilon^{\prime}}\Big{)}$
(4.100)
$\displaystyle=i\int_{\mathbb{R}^{n}}{\left(\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}{\left|\Psi(z,\omega,\theta^{\ast})\right|}^{2}dz\,d\mathbb{P}\right)}v^{0}(x)\,\overline{\varphi}(0,x)\,dx$
$\displaystyle-i\iint_{\mathbb{R}^{n+1}_{T}}\\!\\!{(\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}{\left|\Psi(z,\omega,\theta^{\ast})\right|}^{2}dz\,d\mathbb{P})}\,v_{\widetilde{\omega}}(t,x)\,\frac{\partial\overline{\varphi}}{\partial
t}(t,x)\,dx\,dt$
$\displaystyle-\sum_{k,\ell=1}^{n}\iint_{\mathbb{R}^{n+1}_{T}}\\!\\!{(\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}\\!\\!A{\left(\Phi^{-1}(z,\omega),\omega\right)}\,{(e_{\ell}\,\Psi)}\cdot{(e_{k}\,\overline{\Psi})}\,dz\,d\mathbb{P})}$
$\displaystyle\quad\times\,v_{\widetilde{\omega}}(t,x)\,\frac{\partial^{2}\overline{\varphi}}{\partial
x_{\ell}\,\partial x_{k}}(t,x)\,dx\,dt$
$\displaystyle+\iint_{\mathbb{R}^{n+1}_{T}}\\!\\!{(\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}\\!\\!\\!\\!U{\left(\Phi^{-1}(z,\omega),\omega\right)}{|\Psi|}^{2}dz\,d\mathbb{P})}\,v_{\widetilde{\omega}}(t,x)\,\overline{\varphi}(t,x)\,dx\,dt.$
Proceeding in the same way with respect to the term
$I_{2,3}^{\varepsilon,k}(t,x,\widetilde{\omega})$, we obtain
$\displaystyle\lim_{\varepsilon^{\prime}\to
0}\sum_{k=1}^{n}\iint_{\mathbb{R}^{n+1}_{T}}I_{2,3}^{\varepsilon^{\prime},k}(t,x,\widetilde{\omega})\,dx\,dt$
$\displaystyle\quad=\sum_{k,\ell=1}^{n}\iint_{\mathbb{R}^{n+1}_{T}}\\!\\!\Big{(}\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}\\!\\!A{\left(\Phi^{-1}(z,\omega),\omega\right)}\,{\left({\left(\nabla_{\\!\\!z}+2i\pi\theta^{\ast}\right)}\Psi(z,\omega,\theta^{\ast})\right)}$
$\displaystyle\hskip
56.9055pt\cdot{\left(e_{\ell}\,\overline{\Lambda_{k}}(z,\omega,\theta^{\ast})\right)}\,dz\,d\mathbb{P}\Big{)}v_{\widetilde{\omega}}(t,x)\,\frac{\partial^{2}\overline{\varphi}}{\partial
x_{\ell}\,\partial x_{k}}(t,x)\,dx\,dt$
$\displaystyle\quad-\sum_{k,\ell=1}^{n}\iint_{\mathbb{R}^{n+1}_{T}}\\!\\!\Big{(}\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}\\!\\!A{\left(\Phi^{-1}(z,\omega),\omega\right)}\,{\left(e_{\ell}\,\Psi(z,\omega,\theta^{\ast})\right)}$
(4.101) $\displaystyle\hskip
56.9055pt\cdot{\left(\nabla_{\\!\\!z}-2i\pi\theta^{\ast}\right)}\overline{\Lambda_{k}}(z,\omega,\theta^{\ast})\,dz\,d\mathbb{P}\Big{)}v_{\widetilde{\omega}}(t,x)\,\frac{\partial^{2}\overline{\varphi}}{\partial
x_{\ell}\,\partial x_{k}}(t,x)\,dx\,dt.$
Therefore, since
$I_{1}^{\varepsilon^{\prime}}+I_{2}^{\varepsilon^{\prime}}+I_{3}^{\varepsilon^{\prime}}=0$
(see (4.95)), combining the last two equations we conclude that the function
$v_{\widetilde{\omega}}$ is a distributional solution of the following
homogenized Schrödinger equation
$\left\\{\begin{array}[]{c}i\displaystyle\frac{\partial
v_{\widetilde{\omega}}}{\partial t}(t,x)-{\rm div}{\big{(}B^{\ast}\nabla
v_{\widetilde{\omega}}(t,x)\big{)}}+U^{\ast}v_{\widetilde{\omega}}(t,x)=0,\;\,(t,x)\in\mathbb{R}^{n+1}_{T},\\\\[5.0pt]
v_{\widetilde{\omega}}(0,x)=v^{0}(x),\;\,x\in\mathbb{R}^{n},\end{array}\right.$
(4.102)
where the effective tensor
$\displaystyle
B_{k,\ell}^{\ast}=\frac{1}{c_{\psi}}{\int_{\Omega}\int_{\Phi\left([0,1)^{n},\omega\right)}\\!\\!\\!\big{\\{}A{\left(\Phi^{-1}(z,\omega),\omega\right)}\,{\left(e_{\ell}\,\Psi(z,\omega,\theta^{\ast})\right)}}\cdot{\left(e_{k}\,\overline{\Psi}(z,\omega,\theta^{\ast})\right)}$
$\displaystyle\qquad\qquad+A{\left(\Phi^{-1}(z,\omega),\omega\right)}\,{\left(e_{\ell}\,\Psi(z,\omega,\theta^{\ast})\right)}\cdot{\left((\nabla_{\\!\\!z}-2i\pi\theta^{\ast})\overline{\Lambda_{k}}(z,\omega,\theta^{\ast})\right)}$
$\displaystyle\qquad\qquad\qquad-A{\left(\Phi^{-1}(z,\omega),\omega\right)}\,{\Big{(}(\nabla_{\\!\\!z}+2i\pi\theta^{\ast})\Psi(z,\omega,\theta^{\ast})\Big{)}}$
$\displaystyle\hskip
170.71652pt\cdot{\left(e_{\ell}\,\overline{\Lambda_{k}}(z,\omega,\theta^{\ast})\right)}\big{\\}}\,dz\,d\mathbb{P}(\omega),$
(4.103)
for ${k,\ell\in\\{1,\ldots,n\\}}$, and the effective potential
$U^{\ast}=c_{\psi}^{-1}\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}U{\left(\Phi^{-1}(z,\omega),\omega\right)}{|\Psi(z,\omega,\theta^{\ast})|}^{2}dz\,d\mathbb{P}(\omega)$
with
$\displaystyle c_{\psi}=\\!\\!\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}$
$\displaystyle\\!\\!\\!{|\Psi(z,\omega,\theta^{\ast})|}^{2}dz\,d\mathbb{P}(\omega)$
$\displaystyle\equiv\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}\\!\\!\\!{|\psi{(\Phi^{-1}{(\frac{x}{\varepsilon},\omega)},\omega)}|}^{2}dz\,d\mathbb{P}(\omega).$
Moreover, we are allowed to replace the tensor $B^{\ast}$ in the equation
(4.102) by its symmetric part, that is,
$A^{\ast}=\big{(}B^{\ast}+(B^{\ast})^{t}\big{)}/2.$
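This replacement is harmless: for the constant tensor $B^{\ast}$, Schwarz's theorem makes the second-order derivatives of a smooth $v$ symmetric in $k,\ell$, so only the symmetric part of $B^{\ast}$ contributes to the equation:

```latex
% Only the symmetric part of a constant tensor acts on the Hessian of v:
{\rm div}\big(B^{\ast}\nabla v\big)
  =\sum_{k,\ell=1}^{n} B^{\ast}_{k,\ell}\,
    \frac{\partial^{2} v}{\partial x_{k}\,\partial x_{\ell}}
  =\sum_{k,\ell=1}^{n}\frac{B^{\ast}_{k,\ell}+B^{\ast}_{\ell,k}}{2}\,
    \frac{\partial^{2} v}{\partial x_{k}\,\partial x_{\ell}}
  ={\rm div}\big(A^{\ast}\nabla v\big).
```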
4.(The Form of the Matrix $A^{\ast}$.) Now, we show that the homogenized
tensor $A^{\ast}$ is a real-valued matrix and that it coincides, up to a
constant factor, with the Hessian matrix of the function
${\theta\mapsto\lambda(\theta)}$ at the point $\theta^{\ast}$. In fact, using
that ${\nabla_{\\!\\!\theta}\lambda(\theta^{\ast})=0}$, the equation (3.3) can
be written as
$\displaystyle\quad\frac{1}{4\pi^{2}}\frac{\partial^{2}\lambda(\theta^{\ast})}{\partial\theta_{\ell}\,\partial\theta_{k}}\,c_{\psi}$
$\displaystyle\qquad\qquad=\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}\big{\\{}A{\left(\Phi^{-1}(z,\omega),\omega\right)}{\left(e_{\ell}\,\Psi(z,\omega,\theta^{\ast})\right)}\cdot{\left(e_{k}\,\overline{\Psi}(z,\omega,\theta^{\ast})\right)}$
$\displaystyle\qquad\qquad\qquad+A{\left(\Phi^{-1}(z,\omega),\omega\right)}\,{\left(e_{\ell}\,\Psi(z,\omega,\theta^{\ast})\right)}\cdot{\left((\nabla_{\\!\\!z}-2i\pi\theta^{\ast})\overline{\Lambda_{k}}(z,\omega,\theta^{\ast})\right)}$
$\displaystyle\qquad\qquad\qquad-A{\left(\Phi^{-1}(z,\omega),\omega\right)}\,{\left[(\nabla_{\\!\\!z}+2i\pi\theta^{\ast})\Psi(z,\omega,\theta^{\ast})\right]}\cdot{\left(e_{\ell}\,\overline{\Lambda_{k}}(z,\omega,\theta^{\ast})\right)}$
$\displaystyle\qquad\qquad\qquad+A{\left(\Phi^{-1}(z,\omega),\omega\right)}\,{\left(e_{k}\,\Psi(z,\omega,\theta^{\ast})\right)}\cdot{\left(e_{\ell}\,\overline{\Psi}(z,\omega,\theta^{\ast})\right)}$
$\displaystyle\qquad\qquad\qquad+A{\left(\Phi^{-1}(z,\omega),\omega\right)}\,{\left(e_{k}\,\Psi(z,\omega,\theta^{\ast})\right)}\cdot{\left((\nabla_{\\!\\!z}-2i\pi\theta^{\ast})\overline{\Lambda_{\ell}}(z,\omega,\theta^{\ast})\right)}$
$\displaystyle\qquad\qquad\qquad-A{\left(\Phi^{-1}(z,\omega),\omega\right)}\,{\left((\nabla_{\\!\\!z}+2i\pi\theta^{\ast})\Psi(z,\omega,\theta^{\ast})\right)}$
(4.104) $\displaystyle\hskip
227.62204pt\cdot{\left(e_{k}\,\overline{\Lambda_{\ell}}(z,\omega,\theta^{\ast})\right)}\big{\\}}\,dz\,d\mathbb{P}(\omega),$
from which we obtain
$A^{\ast}=\frac{1}{8\pi^{2}}\,D^{2}_{\\!\theta}\lambda(\theta^{\ast}).$
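Indeed, the integrand in (4.104) is precisely the $k,\ell$-symmetrization of the integrand defining $B^{\ast}_{k,\ell}$ in (4.103); comparing the two identities gives

```latex
\frac{1}{4\pi^{2}}\,
\frac{\partial^{2}\lambda(\theta^{\ast})}{\partial\theta_{\ell}\,\partial\theta_{k}}\,
c_{\psi}
 = c_{\psi}\big(B^{\ast}_{k,\ell}+B^{\ast}_{\ell,k}\big)
 = 2\,c_{\psi}\,A^{\ast}_{k,\ell},
\qquad\text{hence}\qquad
A^{\ast}_{k,\ell}
 = \frac{1}{8\pi^{2}}\,
   \frac{\partial^{2}\lambda(\theta^{\ast})}{\partial\theta_{\ell}\,\partial\theta_{k}}.
```

In particular, $A^{\ast}$ is real and symmetric, being a constant multiple of the Hessian of the real-valued function $\lambda$.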
Therefore, from Remark 2.3 we deduce the well-posedness of the homogenized
Schrödinger equation (4.102). Hence the function ${v_{\widetilde{\omega}}\in
L^{2}(\mathbb{R}^{n+1}_{T})}$ does not depend on
${\widetilde{\omega}\in{\Omega}}$. Moreover, denoting by $v$ the unique
solution of the problem (4.102), we have that the sequence
${\\{v_{\varepsilon}(t,x,\widetilde{\omega})\\}_{\varepsilon>0}\subset
L^{2}(\mathbb{R}^{n+1}_{T})}$ $\Phi_{\omega}-$two-scale converges to the
function
$v(t,x)\,\Psi(z,\omega,\theta^{\ast})\equiv
v(t,x)\,\psi{\left(\Phi^{-1}(z,\omega),\omega,\theta^{\ast}\right)}.$
5.(A Corrector-type Result.) Finally, we show the following corrector-type
result, that is, for a.e. ${\widetilde{\omega}\in\Omega}$
$\lim_{\varepsilon\to
0}\iint_{\mathbb{R}^{n+1}_{T}}\big{|}v_{\varepsilon}(t,x,\widetilde{\omega})-v(t,x)\,\psi{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega},\theta^{\ast}\right)}\big{|}^{2}dx\,dt=0.$
We begin with the simple observation
$\displaystyle\iint_{\mathbb{R}^{n+1}_{T}}|v_{\varepsilon}(t,x,\widetilde{\omega})-v(t,x)\,\psi{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega},\theta^{\ast}\right)}|^{2}dxdt$
(4.105)
$\displaystyle\quad=\iint_{\mathbb{R}^{n+1}_{T}}{\left|v_{\varepsilon}(t,x,\widetilde{\omega})\right|}^{2}dx\,dt$
$\displaystyle\quad-\iint_{\mathbb{R}^{n+1}_{T}}v_{\varepsilon}(t,x,\widetilde{\omega})\,\overline{v(t,x)\,\psi{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega},\theta^{\ast}\right)}}\,dxdt$
$\displaystyle\quad-\iint_{\mathbb{R}^{n+1}_{T}}\overline{v_{\varepsilon}(t,x,\widetilde{\omega})}\,v(t,x)\,\psi{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega},\theta^{\ast}\right)}\,dx\,dt$
$\displaystyle\quad+\iint_{\mathbb{R}^{n+1}_{T}}{\left|v(t,x)\,\psi{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega},\theta^{\ast}\right)}\right|}^{2}dx\,dt.$
From Lemma 4.1 we see that the first integral on the right-hand side of the
above equation satisfies, for all ${t\in[0,T]}$ and a.e.
${\widetilde{\omega}\in\Omega}$,
$\displaystyle\int_{\mathbb{R}^{n}}{\left|v_{\varepsilon}(t,x,\widetilde{\omega})\right|}^{2}dx$
$\displaystyle=$
$\displaystyle\int_{\mathbb{R}^{n}}{\left|u_{\varepsilon}(t,x,\widetilde{\omega})\right|}^{2}dx$
$\displaystyle=$
$\displaystyle\int_{\mathbb{R}^{n}}{\left|u_{\varepsilon}^{0}(x,\widetilde{\omega})\right|}^{2}dx\;\,=\;\,\int_{\mathbb{R}^{n}}{\left|v_{\varepsilon}^{0}(x,\widetilde{\omega})\right|}^{2}dx$
$\displaystyle=$
$\displaystyle\int_{\mathbb{R}^{n}}{\left|v^{0}(x)\,\psi{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega},\theta^{\ast}\right)}\right|}^{2}dx.$
Using elliptic regularity theory (see E. De Giorgi [15], G. Stampacchia
[36]), it follows that $\psi(\theta)\in
L^{\infty}(\mathbb{R}^{n};L^{2}(\Omega))$, and we can apply the Ergodic
Theorem to obtain
$\displaystyle\lim_{\varepsilon\to 0}$
$\displaystyle\iint_{\mathbb{R}^{n+1}_{T}}{\left|v_{\varepsilon}(t,x,\widetilde{\omega})\right|}^{2}dx\,dt$
$\displaystyle=\lim_{\varepsilon\to
0}\iint_{\mathbb{R}^{n+1}_{T}}{\left|v^{0}(x)\right|}^{2}{\left|\psi{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega},\theta^{\ast}\right)}\right|}^{2}dxdt$
$\displaystyle=c_{\Phi}^{-1}\iint_{\mathbb{R}^{n+1}_{T}}\\!\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}{\left|v^{0}(x)\,\psi{\left(\Phi^{-1}(z,\omega),\omega,\theta^{\ast}\right)}\right|}^{2}dz\,d\mathbb{P}\,dxdt.$
Similarly, we have
$\lim_{\varepsilon\to
0}\iint_{\mathbb{R}^{n+1}_{T}}{\left|v(t,x)\,\psi{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega},\theta^{\ast}\right)}\right|}^{2}dxdt\\\
=c_{\Phi}^{-1}\iint_{\mathbb{R}^{n+1}_{T}}\\!\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}{\left|v(t,x)\,\psi{\left(\Phi^{-1}(z,\omega),\omega,\theta^{\ast}\right)}\right|}^{2}dz\,d\mathbb{P}\,dxdt.$
Moreover, seeing that for a.e. ${\widetilde{\omega}\in\Omega}$
$\displaystyle\lim_{\varepsilon\to 0}$
$\displaystyle\iint_{\mathbb{R}^{n+1}_{T}}v_{\varepsilon}(t,x,\widetilde{\omega})\,\overline{v(t,x)\,\psi{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega},\theta^{\ast}\right)}}\,dxdt$
$\displaystyle=c_{\Phi}^{-1}\iint_{\mathbb{R}^{n+1}_{T}}\\!\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}\\!\\!\\!v(t,x)\,\psi{\left(\Phi^{-1}(z,\omega),\omega,\theta^{\ast}\right)}\,$
$\displaystyle\qquad\qquad\qquad\qquad\times\overline{v(t,x)\,\psi{\left(\Phi^{-1}(z,\omega),\omega,\theta^{\ast}\right)}}\,dz\,d\mathbb{P}\,dxdt,$
we can let ${\varepsilon\to 0}$ in the equation (4.105) to find
$\displaystyle\lim_{\varepsilon\to
0}\iint_{\mathbb{R}^{n+1}_{T}}{\left|v_{\varepsilon}(t,x,\widetilde{\omega})-v(t,x)\,\psi{\left(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega},\theta^{\ast}\right)}\right|}^{2}dxdt$
$\displaystyle\qquad=c_{\Phi}^{-1}{\big{(}\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}{\left|\psi{\left(\Phi^{-1}(z,\omega),\omega,\theta^{\ast}\right)}\right|}^{2}dz\,d\mathbb{P}(\omega)\big{)}}$
$\displaystyle\qquad\qquad\qquad\qquad\times\big{(}{\iint_{\mathbb{R}^{n+1}_{T}}{\left|v^{0}(x)\right|}^{2}dx\,dt}-{\iint_{\mathbb{R}^{n+1}_{T}}{\left|v(t,x)\right|}^{2}dx\,dt}\big{)},$
for a.e. ${\widetilde{\omega}\in\Omega}$. Therefore, using the energy
conservation of the homogenized Schrödinger equation (4.86), that is, for all
${t\in[0,T]}$
$\int_{\mathbb{R}^{n}}{\left|v(t,x)\right|}^{2}dx=\int_{\mathbb{R}^{n}}{\left|v^{0}(x)\right|}^{2}dx,$
we obtain that, for a.e. ${\widetilde{\omega}\in\Omega}$
$\lim_{\varepsilon\to
0}\iint_{\mathbb{R}^{n+1}_{T}}|v_{\varepsilon}(t,x,\widetilde{\omega})-v(t,x)\,\psi{(\Phi^{-1}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega},\theta^{\ast})}|^{2}dxdt=0,$
completing the proof of the theorem. ∎
### 4.2 Random Perturbations of the Quasi-Periodic Case
In this section, we give an application of the framework introduced in this
paper: it can be used to homogenize a model beyond the periodic setting
considered by Allaire and Piatnitski in [4]. To this end, we shall make use
of some results discussed in Section 3.2 (Sobolev spaces on groups), in
particular Section 3.2.3. Another interesting application will be given in
the last section.
Let $n,m\geq 1$ be integers and let $\lambda_{1},\cdots,\lambda_{m}$ be
vectors in $\mathbb{R}^{n}$, linearly independent over $\mathbb{Z}$,
satisfying the condition that
$\big{\\{}k\in\mathbb{Z}^{m};\,|k_{1}\lambda_{1}+\cdots+k_{m}\lambda_{m}|<d\big{\\}}$
is a finite set for any $d>0$. Let
$\left(\Omega_{0},\mathcal{F}_{0},\mathbb{P}_{0}\right)$ be a probability
space, let $\tau_{0}:\mathbb{Z}^{m}\times\Omega_{0}\to\Omega_{0}$ be a
discrete ergodic dynamical system, and let $\mathbb{R}^{m}/{\mathbb{Z}^{m}}$
be the $m-$dimensional torus, which can be identified with the cube
$[0,1)^{m}$. For $\Omega:=\Omega_{0}\times[0,1)^{m}$, consider the continuous
dynamical system $T:\mathbb{R}^{n}\times\Omega\to\Omega$ defined by
$T(x)(\omega_{0},s):=\Big{(}\tau_{\left\lfloor
s+Mx\right\rfloor}\omega_{0},s+Mx-\left\lfloor s+Mx\right\rfloor\Big{)},$
where $M$ is the matrix $M=\Big{(}\lambda_{i}\cdot
e_{j}{\Big{)}}_{i=1,j=1}^{m,n}$ and $\left\lfloor y\right\rfloor$ denotes the
unique element in $\mathbb{Z}^{m}$ such that $y-\left\lfloor
y\right\rfloor\in[0,1)^{m}$. Now, we consider $[0,1)^{m}-$periodic functions
$A_{\rm per}:\mathbb{R}^{m}\to\mathbb{R}^{n^{2}},\,V_{\rm
per}:\mathbb{R}^{m}\to\mathbb{R}$ and $U_{\rm
per}:\mathbb{R}^{m}\to\mathbb{R}$ such that
* •
There exist $a_{0},a_{1}>0$ such that for all $\xi\in\mathbb{R}^{n}$ and for
a.e. $y\in\mathbb{R}^{m}$ we have
$a_{0}|\xi|^{2}\leq A_{\rm per}(y)\xi\cdot\xi\leq a_{1}|\xi|^{2}.$
* •
$V_{\rm per},\,U_{\rm per}\in L^{\infty}(\mathbb{R}^{m})$.
Let $B_{\rm per}:\mathbb{R}^{m}\to\mathbb{R}^{n^{2}}$ be a
$[0,1)^{m}-$periodic matrix and
$\Upsilon:\mathbb{R}^{n}\times[0,1)^{m}\to\mathbb{R}^{n}$ be any stochastic
diffeomorphism satisfying
$\nabla\Upsilon(x,s)=B_{\rm per}\Big{(}T(x)(\omega_{0},s)\Big{)}.$
Thus, we define the following stochastic deformation
$\Phi:\mathbb{R}^{n}\times\Omega\to\mathbb{R}^{n}$ by
$\Phi(x,\omega)=\Upsilon(x,s)+{\bf X}(\omega_{0}),$
where we have used the notation $\omega$ for the pair
$(\omega_{0},s)\in\Omega$ and ${\bf X}:\Omega_{0}\to\mathbb{R}^{n}$ is a
random vector. Now, taking
$A(x,\omega):=A_{\rm per}\left(T(x)\omega\right),\,V(x,\omega):=V_{\rm
per}\left(T(x)\omega\right),\;U(x,\omega):=U_{\rm per}\left(T(x)\omega\right)$
in the equation (1.1), it can be seen after some computations that the
corresponding spectral equation is
$\left\\{\begin{array}[]{l}-{\Big{(}{\rm
div}_{\rm{QP}}+2i\pi\theta\Big{)}}{\left[A_{\rm
per}{\left(\cdot\right)}{\Big{(}\nabla^{\rm{QP}}+2i\pi\theta\Big{)}}{\Psi}_{\rm
per}(\cdot)\right]}\\\\[7.5pt] \hskip 56.9055pt+V_{\rm
per}{\left(\cdot\right)}{\Psi}_{\rm per}(\cdot)=\lambda{\Psi}_{\rm
per}(\cdot)\;\;\text{in}\,\;[0,1)^{m},\\\\[7.5pt] \hskip 42.67912pt{\Psi}_{\rm
per}(\cdot)\;\,\text{is a $[0,1)^{m}-$periodic
function},\end{array}\right.$ (4.106)
where the operators ${\rm div}_{\rm{QP}}$ and $\nabla^{\rm{QP}}$ are defined
as
* •
$\left(\nabla^{\rm{QP}}u_{\rm per}\right)(y):=B_{\rm
per}^{-1}(y)M^{\ast}\left(\nabla u_{\rm per}\right)(y)$;
* •
$\left(\rm{div}_{\rm{QP}}\,a\right)(y):=\rm{div}\left(MB_{\rm
per}^{-1}(\cdot)a(\cdot)\right)(y)$.
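The shape of these operators can be read off from the chain rule. If $u(x)=u_{\rm per}(s+Mx)$ is the quasi-periodic function generated by $u_{\rm per}$, then, componentwise,

```latex
\frac{\partial u}{\partial x_{j}}(x)
  =\sum_{i=1}^{m}\frac{\partial u_{\rm per}}{\partial y_{i}}(s+Mx)\,M_{ij},
\qquad\text{i.e.}\qquad
\nabla_{x}u(x)=M^{\ast}\big(\nabla u_{\rm per}\big)(s+Mx),
```

and the extra factor $B_{\rm per}^{-1}$ in $\nabla^{\rm{QP}}$ accounts for the composition with the stochastic diffeomorphism $\Upsilon$, since $\nabla\Upsilon=B_{\rm per}$.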
Although the coefficients of the spectral equation (4.106) can be seen as
periodic functions, its analysis is possible thanks to the results developed
in Section 3.2.3. This is due to the fact that the bilinear form associated
with the problem (4.106) can lose its coercivity, which prevents us from
applying the classical theory.
Assume that for some $\theta^{\ast}\in\mathbb{R}^{n}$, the spectral equation
(4.106) admits a solution $\big{(}\lambda(\theta^{\ast}),\Psi_{\rm
per}(\theta^{\ast})\big{)}\in\mathbb{R}\times H^{1}\left([0,1)^{m}\right)$,
such that
* •
$\lambda(\theta^{\ast})$ is a simple eigenvalue;
* •
$\nabla\lambda(\theta^{\ast})=0$.
Now, we consider the problem (1.1) with the new coefficients highlighted
above and with well-prepared initial data, that is,
$u_{\varepsilon}^{0}(x,\omega):=e^{2\pi i\frac{\theta^{\ast}\cdot
x}{\varepsilon}}\,{\Psi}_{\rm
per}\Big{(}T\left(\Phi^{-1}\left(\frac{x}{\varepsilon},\omega\right)\right)\omega,\theta^{\ast}\Big{)}v^{0}(x),$
for $(x,\omega)\in\mathbb{R}^{n}\times\Omega$ and $v^{0}\in
C^{\infty}_{c}(\mathbb{R}^{n})$. Applying Theorem 4.2, the function
$v_{\varepsilon}(t,x,\omega):=e^{-{\left(i\frac{\lambda(\theta^{\ast})t}{\varepsilon^{2}}+2i\pi\frac{\theta^{\ast}\\!\cdot
x}{\varepsilon}\right)}}u_{\varepsilon}(t,x,\omega),\;\,(t,x)\in\mathbb{R}^{n+1}_{T},\;\omega\in\Omega,$
$\Phi_{\omega}-$two-scale converges strongly to ${v(t,x)\,{\Psi}_{\rm
per}\Big{(}{T\left(\Phi^{-1}(z,\omega)\right)\omega,\theta^{\ast}}}\Big{)}$,
where ${v\in C([0,T],L^{2}(\mathbb{R}^{n}))}$ is the unique solution of the
homogenized Schrödinger equation
$\left\\{\begin{array}[]{c}i\displaystyle\frac{\partial v}{\partial t}-{\rm
div}{\left(A^{\ast}\nabla
v\right)}+U^{\ast}v=0\,,\;\,\text{in}\;\,\mathbb{R}^{n+1}_{T},\\\\[7.5pt]
v(0,x)=v^{0}(x)\,,\;\,x\in\mathbb{R}^{n},\end{array}\right.$
with effective matrix ${A^{\ast}=D_{\theta}^{2}\lambda(\theta^{\ast})}$ and
effective potential
$U^{\ast}=c^{-1}_{\psi}\int_{[0,1)^{m}}U_{\rm
per}{\left(y\right)}\,{\left|{\Psi}_{\rm
per}{\left(y,\theta^{\ast}\right)}\right|}^{2}|\det\left(B_{\rm
per}(y)\right)|\,dy,$
where
$c_{\psi}=\int_{[0,1)^{m}}{\left|{\Psi}_{\rm
per}{\left(y,\theta^{\ast}\right)}\right|}^{2}\,|\det\left(B_{\rm
per}(y)\right)|\,dy.$
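The weight $|\det(B_{\rm per})|$ appearing in $U^{\ast}$ and $c_{\psi}$ is the Jacobian produced when the integrals over $\Phi([0,1)^{n},\omega)$ are pulled back by the change of variables $z=\Phi(y,\omega)$, since $\nabla\Phi(y,\omega)=\nabla\Upsilon(y,s)=B_{\rm per}(T(y)\omega)$; schematically,

```latex
\int_{\Phi([0,1)^{n},\omega)} f\big(\Phi^{-1}(z,\omega)\big)\,dz
 =\int_{[0,1)^{n}} f(y)\,\big|\det\nabla\Phi(y,\omega)\big|\,dy
 =\int_{[0,1)^{n}} f(y)\,\big|\det B_{\rm per}\big(T(y)\omega\big)\big|\,dy.
```

The reduction of the $\mathbb{P}$-average to a single integral over the torus $[0,1)^{m}$ then follows from the Ergodic Theorem.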
It is worth highlighting that this particular example encompasses the setting
considered by Allaire and Piatnitski in [4]. For this, it is enough to take
$n=m,\,\lambda_{j}=e_{j},\,\Upsilon(\cdot,s)\equiv I_{n\times n},\;\text{and
${\bf X}(\cdot)\equiv 0$.}$
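Indeed, with these choices the construction collapses to the purely periodic one:

```latex
M=\big(\lambda_{i}\cdot e_{j}\big)_{i,j=1}^{n}=I_{n},
\qquad
\nabla\Upsilon=B_{\rm per}\equiv I_{n},
\qquad
\Phi(x,\omega)=x,
```

so that $\nabla^{\rm{QP}}=\nabla$, ${\rm div}_{\rm{QP}}={\rm div}$, the deformation disappears, and, up to the random shift $s$, the coefficients reduce to periodic functions, recovering the setting of [4].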
## 5 Homogenization of Quasi-Perfect Materials
Perfect materials (which represent the periodic setting) are rare in nature.
However, there is a huge class of materials with small deviations from
perfect ones, called here quasi-perfect materials. In this section we
consider an interesting context, namely small random perturbations of the
periodic setting. In particular, this context is important for numerical
applications.
To begin, we remind the reader that, in the previous section, it was seen
that our homogenization analysis of the equation (1.1) (see Theorem 4.2)
relies on the spectral study of the operator $L^{\Phi}(\theta)$
$(\theta\in\mathbb{R}^{n})$, posed in the dual space ${\mathcal{H}^{\ast}}$
with domain ${D(L^{\Phi}(\theta))=\mathcal{H}}$, and defined by
$\begin{array}[]{l}L^{\Phi}(\theta)[f]:=-{\big{(}{\rm
div}_{\\!z}+2i\pi\theta\big{)}}{\Big{[}A{\big{(}\Phi^{-1}(\cdot,{\cdot\cdot}),{\cdot\cdot}\big{)}}{\big{(}\nabla_{\\!\\!z}+2i\pi\theta\big{)}}f{\big{(}\Phi^{-1}(\cdot,{\cdot\cdot}),{\cdot\cdot}\big{)}}\Big{]}}\\\\[10.0pt]
\hskip
113.81102pt+\,V{\big{(}\Phi^{-1}(\cdot,{\cdot\cdot}),{\cdot\cdot}\big{)}}f{\big{(}\Phi^{-1}(\cdot,{\cdot\cdot}),{\cdot\cdot}\big{)}},\end{array}$
(5.107)
where $\Phi:\mathbb{R}^{n}\times\Omega\to\mathbb{R}^{n}$ is a stochastic
deformation, $A:\mathbb{R}^{n}\times\Omega\to\mathbb{R}^{n^{2}}$ and
$V:\mathbb{R}^{n}\times\Omega\to\mathbb{R}$ are stationary functions. Recall
also that the variational formulation of the operator $L^{\Phi}(\theta)$ is
given by
$\begin{split}&{\left\langle
L^{\Phi}(\theta)[f],g\right\rangle}:=\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}A{\left(\Phi^{-1}(z,\omega),\omega\right)}{\big{(}\nabla_{\\!\\!z}+2i\pi\theta\big{)}}f{\left(\Phi^{-1}(z,\omega),\omega\right)}\cdot\\\
&\hskip
199.16928pt\overline{{\big{(}\nabla_{\\!\\!z}+2i\pi\theta\big{)}}g{\left(\Phi^{-1}(z,\omega),\omega\right)}}\,dz\,d\mathbb{P}(\omega)\\\
&+\int_{\Omega}\int_{\Phi([0,1)^{n},\omega)}V{\left(\Phi^{-1}(z,\omega),\omega\right)}f{\left(\Phi^{-1}(z,\omega),\omega\right)}\,\overline{g{\left(\Phi^{-1}(z,\omega),\omega\right)}}\,dz\,d\mathbb{P}(\omega),\end{split}$
for ${f,g\in\mathcal{H}}$.
More precisely, we required the existence of a pair
${{\big{(}\theta^{\ast},\lambda(\theta^{\ast})\big{)}}\in\mathbb{R}^{n}\times\mathbb{R}}$
satisfying:
$\left\\{\,\begin{split}&\lambda(\theta^{\ast})\;\text{is a simple eigenvalue
of}\;L^{\Phi}(\theta^{\ast}),\\\ &\theta^{\ast}\;\text{is a critical point
of}\;\lambda(\cdot),\,\text{that
is},\nabla_{\\!\\!\theta}\lambda(\theta^{\ast})=0.\end{split}\right.$ (5.108)
As observed before, the existence of a pair
$(\theta^{\ast},\lambda(\theta^{\ast}))$ satisfying the two above conditions
is not clear in general stochastic environments. The reason is mainly the lack
of a compact embedding of ${\mathcal{H}}$ in ${\mathcal{L}}$. However, in the
periodic setting there are concrete situations where such conditions hold
(see, for instance, [4, 7, 8]). Our aim in this section is to present
realistic models whose spectral nature is inherited from the periodic ones.
### 5.1 Perturbed Periodic Case: Spectral Analysis
In this section we shall study the spectral properties of the operator
${L^{\Phi}(\theta)}$ when the diffeomorphism ${\Phi}$ is a stochastic
perturbation of the identity. This concept was introduced in [10] and further
developed by T. Andrade, W. Neves, J. Silva [6] for modelling quasi-perfect
materials.
Let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space,
$\tau:\mathbb{Z}^{n}\times\Omega\to\Omega$ a discrete dynamical system, and
$Z$ any fixed stochastic deformation. Then, we consider the concept of
stochastic perturbation of the identity given by the following
###### Definition 5.1.
Given $\eta\in(0,1)$, let
$\Phi_{\eta}:\mathbb{R}^{n}\times\Omega\to\mathbb{R}^{n}$ be a stochastic
deformation. Then $\Phi_{\eta}$ is said to be a stochastic perturbation of the
identity when it can be written as
$\Phi_{\eta}(y,\omega)=y+\eta\,Z(y,\omega)+\mathrm{O}(\eta^{2}),$ (5.109)
for some stochastic deformation $Z$.
We emphasize that the equality (5.109) is understood in the sense of ${\rm
Lip}_{\text{loc}}\big{(}\mathbb{R}^{n};L^{2}(\Omega)\big{)}$, i.e. for each
bounded open subset ${\mathcal{O}\subset\mathbb{R}^{n}}$, there exist
$\delta,C>0$, such that for all ${\eta\in(0,\delta)}$
$\displaystyle\underset{y\in\mathcal{O}}{\rm
sup}\,{\left\|\Phi_{\eta}(y,\cdotp)-y-\eta
Z(y,\cdotp)\right\|}_{L^{2}(\Omega)}$
$\displaystyle\qquad+\,\underset{y\in\mathcal{O}}{\rm
ess\,sup}\,{\left\|\nabla_{\\!\\!y}\Phi_{\eta}(y,\cdotp)-I-\eta\,\nabla_{\\!\\!y}Z(y,\cdotp)\right\|}_{L^{2}(\Omega)}\leqslant
C\,\eta^{2}.$
Moreover, after some computations, we have
$\left\\{\begin{aligned}
\big{(}\nabla_{y}\Phi_{\eta}\big{)}^{-1}&=I-\eta\,\nabla_{y}Z+O(\eta^{2}),\\\\[5.0pt]
\det\big{(}\nabla_{y}\Phi_{\eta}\big{)}&=1+\eta\,{\rm
div}_{y}Z+O(\eta^{2}).\end{aligned}\right.$ (5.110)
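The computations behind (5.110) amount to a short Neumann-series argument; a sketch, with the $O(\eta^{2})$ terms understood in the ${\rm Lip}_{\text{loc}}$ sense described above:

```latex
% Jacobian of the perturbation of the identity, from (5.109):
%   \nabla_y \Phi_\eta = I + \eta\,\nabla_y Z + O(\eta^2).
% Inverting via the Neumann series (I + \eta M)^{-1} = I - \eta M + O(\eta^2):
\big(\nabla_y \Phi_\eta\big)^{-1}
  = \big(I + \eta\,\nabla_y Z + O(\eta^2)\big)^{-1}
  = I - \eta\,\nabla_y Z + O(\eta^2).
% Determinant, using \det(I + \eta M) = 1 + \eta\,\mathrm{tr}(M) + O(\eta^2)
% together with \mathrm{tr}(\nabla_y Z) = \mathrm{div}_y Z:
\det\big(\nabla_y \Phi_\eta\big)
  = 1 + \eta\,\mathrm{tr}\big(\nabla_y Z\big) + O(\eta^2)
  = 1 + \eta\,{\rm div}_y Z + O(\eta^2).
```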
Now, we consider the periodic functions $A_{\rm
per}:\mathbb{R}^{n}\to\mathbb{R}^{n^{2}},\,V_{\rm
per}:\mathbb{R}^{n}\to\mathbb{R}$ and $U_{\rm
per}:\mathbb{R}^{n}\to\mathbb{R}$, such that
* •
There exist $a_{0},a_{1}>0$ such that for all $\xi\in\mathbb{R}^{n}$ and for
a.e. $y\in\mathbb{R}^{n}$ we have
$a_{0}|\xi|^{2}\leq A_{\rm per}(y)\xi\cdot\xi\leq a_{1}|\xi|^{2}.$
* •
$V_{\rm per},\,U_{\rm per}\in L^{\infty}(\mathbb{R}^{n})$.
The following lemma is well known and is stated explicitly here only for
reference. For a proof, we refer the reader to [18].
###### Lemma 5.2.
For $\theta\in\mathbb{R}^{n}$ and $f\in H_{\rm per}^{1}([0,1)^{n})$, let
${L_{\rm per}(\theta)}$ be the operator defined by
$L_{\rm per}(\theta){[f]}:=-({\rm div}_{\\!y}+2i\pi\theta){\big{[}A_{\rm
per}(y){(\nabla_{\\!\\!y}+2i\pi\theta)}f(y)\big{]}}+V_{\rm per}(y)f(y),$
(5.111)
with variational formulation
$\begin{array}[]{c}\displaystyle{\left\langle L_{\rm
per}(\theta){\big{[}f\big{]}},g\right\rangle}:=\int_{[0,1)^{n}}A_{\rm
per}(y){\left(\nabla_{\\!\\!y}+2i\pi\theta\right)}f(y)\cdot\overline{{\left(\nabla_{\\!\\!y}+2i\pi\theta\right)}g(y)}\,dy\\\\[10.0pt]
\displaystyle\hskip 48.36958pt+\int_{[0,1)^{n}}V_{\rm
per}(y)\,f(y)\,\overline{g(y)}\,dy,\end{array}$
for ${f,g\in H_{\rm per}^{1}({[0,1)^{n}})}$. Then ${L_{\rm per}(\theta)}$ has
the following properties:
1. (i)
There exist ${\gamma_{0},b_{0}>0}$, such that ${L_{\gamma_{0}}:=L_{\rm
per}(\theta)+{\gamma_{0}}I}$ satisfies for all $f\in H_{\rm
per}^{1}({[0,1)^{n}})$,
${\langle L_{\gamma_{0}}{\big{[}f\big{]}},f\rangle}\geq b_{0}{\|f\|}_{H_{\rm
per}^{1}({[0,1)^{n}})}^{2}.$
2. (ii)
The point spectrum of ${L_{\rm per}(\theta)}$ is not empty and its
eigenspaces have finite dimension, that is, the set
$\sigma_{\rm point}{\big{(}L_{\rm
per}(\theta)\big{)}}=\\{\lambda\in\mathbb{C}\;;\;\lambda\;\text{an eigenvalue
of}\;L_{\rm per}(\theta)\\}$
is not empty and for all ${\lambda\in\sigma_{\rm point}{\big{(}L_{\rm
per}(\theta)\big{)}}}$ fixed,
${\rm dim}{\big{\\{}f\in H^{1}_{\rm per}({[0,1)^{n}})\;;\;L_{\rm
per}(\theta){\big{[}f\big{]}}=\lambda f\big{\\}}}<\infty.$
3. (iii)
Every point in ${\sigma_{\rm point}\big{(}L_{\rm per}(\theta)\big{)}}$ is
isolated.
###### Remark 5.3.
We observe that the properties of ${L_{\rm per}(\theta)}$,
${\theta\in\mathbb{R}^{n}}$, given by Lemma 5.2 carry over to the
space ${\mathcal{H}}$ in a natural way.
In what follows, we study the spectral properties of the
operator ${L^{\Phi_{\eta}}(\theta)}$ whose variational formulation is given by
$\begin{split}&{\left\langle
L^{\Phi_{\eta}}(\theta)[f],g\right\rangle}:=\int_{\Omega}\int_{\Phi_{\eta}([0,1)^{n},\omega)}A_{\rm
per}{\left(\Phi_{\eta}^{-1}(z,\omega)\right)}{\big{(}\nabla_{\\!\\!z}+2i\pi\theta\big{)}}f{\left(\Phi_{\eta}^{-1}(z,\omega),\omega\right)}\cdot\\\
&\hskip
199.16928pt\overline{{\big{(}\nabla_{\\!\\!z}+2i\pi\theta\big{)}}g{\left(\Phi_{\eta}^{-1}(z,\omega),\omega\right)}}\,dz\,d\mathbb{P}(\omega)\\\
&+\int_{\Omega}\int_{\Phi_{\eta}([0,1)^{n},\omega)}V_{\rm
per}{\left(\Phi_{\eta}^{-1}(z,\omega)\right)}f{\left(\Phi_{\eta}^{-1}(z,\omega),\omega\right)}\,\overline{g{\left(\Phi_{\eta}^{-1}(z,\omega),\omega\right)}}\,dz\,d\mathbb{P}(\omega),\end{split}$
(5.112)
for ${f,g\in\mathcal{H}}$. As we shall see in the next theorem, some of the
spectral properties of the operator ${L^{\Phi_{\eta}}(\theta)}$ are inherited
from the periodic case.
###### Theorem 5.4.
Let ${\Phi_{\eta}}$, ${\eta\in(0,1)}$, be a stochastic perturbation of the identity
and ${\theta_{0}\in\mathbb{R}^{n}}$. If ${\lambda_{0}}$ is an eigenvalue of
${L_{\rm per}(\theta_{0})}$ with multiplicity ${k_{0}\in\mathbb{N}}$, that is,
${\rm dim}{\big{\\{}f\in H^{1}_{\rm per}([0,1)^{n})\;;\;L_{\rm
per}(\theta_{0}){\big{[}f\big{]}}=\lambda_{0}f\big{\\}}}=k_{0},$
then there exist a neighbourhood ${\mathcal{U}}$ of ${(0,\theta_{0})}$,
${k_{0}}$ real analytic functions
$(\eta,\theta)\in\mathcal{U}\;\mapsto\;\lambda_{k}(\eta,\theta)\in\mathbb{R},\;\;k\in\\{1,\ldots,k_{0}\\},$
and ${k_{0}}$ vector-valued analytic maps
$(\eta,\theta)\in\mathcal{U}\;\mapsto\;\psi_{k}(\eta,\theta)\in\mathcal{H}\setminus\\{0\\},\;\;k\in\\{1,\ldots,k_{0}\\},$
such that, for all ${k\in\\{1,\ldots,k_{0}\\}}$,
* (i)
${\lambda_{k}(0,\theta_{0})=\lambda_{0}}$,
* (ii)
${L^{\Phi_{\eta}}(\theta){\big{[}\psi_{k}(\eta,\theta)\big{]}}=\lambda_{k}(\eta,\theta)\,\psi_{k}(\eta,\theta)}$,
${\forall(\eta,\theta)\in\mathcal{U}}$,
* (iii)
${{\rm
dim}{\big{\\{}f\in\mathcal{H}\;;\;L^{\Phi_{\eta}}(\theta){\big{[}f\big{]}}=\lambda_{k}(\eta,\theta)f\big{\\}}}\leqslant
k_{0}}$, ${\forall(\eta,\theta)\in\mathcal{U}}$.
###### Proof.
1\. The aim of this step is to rewrite the operator
${L^{\Phi_{\eta}}(\theta)\in\mathcal{B}(\mathcal{H},\mathcal{H}^{\ast})}$, for
${\eta\in(0,1)}$ and ${\theta\in\mathbb{R}^{n}}$ as an expansion in the
variable ${(\eta,\theta)}$ of operators in
${\mathcal{B}(\mathcal{H},\mathcal{H}^{\ast})}$ around the point
${(\eta,\theta)=(0,\theta_{0})}$. For this, using the variational formulation
(5.112), a change of variables and the expansions (5.110) we obtain
$\begin{split}&\\!\\!\\!{\langle
L^{\Phi_{\eta}}(\theta){\big{[}f\big{]}},g\rangle}=\\\
&{\left[\int_{[0,1)^{n}}\int_{\Omega}A_{\rm
per}(y){\left(\nabla_{\\!\\!y}+2i\pi\theta\right)}f\cdot\overline{{\left(\nabla_{\\!\\!y}+2i\pi\theta\right)}g}\,d\mathbb{P}\,dy+\int_{[0,1)^{n}}\int_{\Omega}V_{\rm
per}(y)\,f\,\overline{g}\,d\mathbb{P}\,dy\right]}\\\ &\hskip
7.11317pt+\eta{\left[\int_{[0,1)^{n}}\int_{\Omega}A_{\rm
per}(y){\left(-[\nabla_{\\!\\!y}Z](y,\omega)\nabla_{\\!\\!y}f\right)}\cdot\overline{{\left(\nabla_{\\!\\!y}+2i\pi\theta\right)}g}\,d\mathbb{P}\,dy\right.}\\\
&\hskip 42.67912pt{+\int_{[0,1)^{n}}\int_{\Omega}A_{\rm
per}(y){\left(\nabla_{\\!\\!y}+2i\pi\theta\right)}f\cdot\overline{{\left(-[\nabla_{\\!\\!y}Z](y,\omega)\nabla_{\\!\\!y}g\right)}}\,d\mathbb{P}\,dy}\\\
&\hskip 59.75095pt{\left.+\int_{[0,1)^{n}}\int_{\Omega}A_{\rm
per}(y){\left(\nabla_{\\!\\!y}+2i\pi\theta\right)}f\cdot\overline{{\left(\nabla_{\\!\\!y}+2i\pi\theta\right)}g}\,\,{\rm
div}_{\\!y}Z(y,\omega)\,d\mathbb{P}\,dy\right]}\\\ &\hskip
177.82971pt+\mathrm{O}(\eta^{2}),\end{split}$
in $\mathbb{C}$ as ${\eta\to 0}$, for ${f,g\in\mathcal{H}}$.
Now, expanding in the variable ${\theta}$ about the point
${\theta=\theta_{0}}$, it is convenient to rewrite the above expansion in the
form
$\begin{split}&\\!{\left\langle
L^{\Phi_{\eta}}(\theta){\big{[}f\big{]}},g\right\rangle}=\\\
&((\eta,\theta)-(0,\theta_{0}))^{(0,\boldsymbol{0})}{\left(\int_{[0,1)^{n}}\int_{\Omega}A_{\rm
per}(y){(\nabla_{\\!\\!y}+2i\pi\theta_{0})}f\cdot\overline{{(\nabla_{\\!\\!y}+2i\pi\theta_{0})}g}\,d\mathbb{P}\,dy\right.}\\\
&\hskip 256.0748pt{\left.+\int_{[0,1)^{n}}\int_{\Omega}V_{\rm
per}(y)\,f\,\overline{g}\,d\mathbb{P}\,dy\right)}\\\
&+\sum_{k=1}^{n}((\eta,\theta)-(0,\theta_{0}))^{(0,e_{k})}{\left(\int_{[0,1)^{n}}\int_{\Omega}A_{\rm
per}(y){(\nabla_{\\!\\!y}+2i\pi\theta_{0})}f\cdot\overline{(2i\pi
e_{k}g)}\,d\mathbb{P}\,dy\right.}\\\ &\hskip
149.37697pt{\left.+\int_{[0,1)^{n}}\int_{\Omega}A_{\rm per}(y)(2i\pi
e_{k}f)\cdot\overline{{(\nabla_{\\!\\!y}+2i\pi\theta_{0})}g}\,d\mathbb{P}\,dy\right)}\end{split}$
$\begin{split}&+\sum_{k,\ell=1}^{n}((\eta,\theta)-(0,\theta_{0}))^{(0,e_{k}+e_{\ell})}{\left(\int_{[0,1)^{n}}\int_{\Omega}A_{\rm
per}(y)(2i\pi e_{k}f)\cdot\overline{(2i\pi
e_{\ell}g)}\,d\mathbb{P}\,dy\right)}\\\
&+((\eta,\theta)-(0,\theta_{0}))^{(1,\boldsymbol{0})}{\left(\int_{[0,1)^{n}}\int_{\Omega}A_{\rm
per}(y){\left(-[\nabla_{\\!\\!y}Z](y,\omega)\nabla_{\\!\\!y}f\right)}\cdot\overline{{(\nabla_{\\!\\!y}+2i\pi\theta_{0})}g}\,d\mathbb{P}\,dy\right.}\\\
&\hskip 61.17325pt+\int_{[0,1)^{n}}\int_{\Omega}A_{\rm
per}(y){(\nabla_{\\!\\!y}+2i\pi\theta_{0})}f\cdot\overline{{\left(-[\nabla_{\\!\\!y}Z](y,\omega)\nabla_{\\!\\!y}g\right)}}\,d\mathbb{P}\,dy\\\
&\hskip 61.17325pt{\left.+\int_{[0,1)^{n}}\int_{\Omega}A_{\rm
per}(y){(\nabla_{\\!\\!y}+2i\pi\theta_{0})}f\cdot\overline{{(\nabla_{\\!\\!y}+2i\pi\theta_{0})}g}\,\,{\rm
div}_{\\!y}Z(y,\omega)\,d\mathbb{P}\,dy\right)}\\\
&+\sum_{k=1}^{n}((\eta,\theta)-(0,\theta_{0}))^{(1,e_{k})}{\left(\int_{[0,1)^{n}}\int_{\Omega}A_{\rm
per}(y){\left(-[\nabla_{\\!\\!y}Z](y,\omega)\nabla_{\\!\\!y}f\right)}\cdot\overline{(2i\pi
e_{k}g)}\,d\mathbb{P}\,dy\right.}\\\ &\hskip
85.35826pt+\int_{[0,1)^{n}}\int_{\Omega}A_{\rm per}(y){(2i\pi
e_{k}f)}\cdot\overline{{\left(-[\nabla_{\\!\\!y}Z](y,\omega)\nabla_{\\!\\!y}g\right)}}\,d\mathbb{P}\,dy\\\
&\hskip 85.35826pt+\int_{[0,1)^{n}}\int_{\Omega}A_{\rm
per}(y){(\nabla_{\\!\\!y}+2i\pi\theta_{0})}f\cdot\overline{(2i\pi
e_{k}g)}\,\,{\rm div}_{\\!y}Z(y,\omega)\,d\mathbb{P}\,dy\\\ &\hskip
85.35826pt{\left.+\int_{[0,1)^{n}}\int_{\Omega}A_{\rm per}(y){(2i\pi
e_{k}f)}\cdot\overline{{(\nabla_{\\!\\!y}+2i\pi\theta_{0})}g}\,\,{\rm
div}_{\\!y}Z(y,\omega)\,d\mathbb{P}\,dy\right)}\\\
&+\sum_{k,\ell=1}^{n}((\eta,\theta)-(0,\theta_{0}))^{(1,e_{k}+e_{\ell})}{\left(\int_{[0,1)^{n}}\int_{\Omega}A_{\rm
per}(y){(2i\pi e_{k}f)}\cdot\overline{(2i\pi e_{\ell}g)}\,\,{\rm
div}_{\\!y}Z(y,\omega)\,d\mathbb{P}\,dy\right)}\\\ &\hskip
177.82971pt+\mathrm{O}(\eta^{2}),\end{split}$
in ${\mathbb{C}}$ as ${\eta\to 0}$, for ${f,g\in\mathcal{H}}$, which is the
expansion in the variable ${(\eta,\theta)}$ around the point
${(0,\theta_{0})}$. Here, for
${(\alpha,\beta)\in\mathbb{N}\times\mathbb{N}^{n}}$ and
${\beta=(\beta_{1},\ldots,\beta_{n})}$, we are using the multi-index notation
${((\eta,\theta)-(0,\theta_{0}))^{(\alpha,\beta)}=\eta^{\alpha}\prod_{k=1}^{n}(\theta_{k}-\theta_{0k})^{\beta_{k}}}$.
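For concreteness, the lowest-order instances of this multi-index notation read:

```latex
((\eta,\theta)-(0,\theta_{0}))^{(0,\boldsymbol{0})} = 1, \qquad
((\eta,\theta)-(0,\theta_{0}))^{(1,\boldsymbol{0})} = \eta,
\\
((\eta,\theta)-(0,\theta_{0}))^{(0,e_{k})} = \theta_{k}-\theta_{0k}, \qquad
((\eta,\theta)-(0,\theta_{0}))^{(1,e_{k})} = \eta\,(\theta_{k}-\theta_{0k}).
```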
Now, noting that the term of order ${(\eta,\theta)^{(0,\boldsymbol{0})}}$ is
the variational formulation of ${L_{\rm per}(\theta_{0})}$ as in (5.111), we
can rewrite the above expansion in the form
$L^{\Phi_{\eta}}(\theta)=L_{\rm
per}(\theta_{0})+\sum_{{|(\alpha,\beta)|}=1}^{3}((\eta,\theta)-(0,\theta_{0}))^{(\alpha,\beta)}L_{(\alpha,\beta)}+\mathrm{O}(\eta^{2}),$
(5.113)
in ${\mathcal{B}(\mathcal{H},\mathcal{H}^{\ast})}$ as ${\eta\to 0}$, where
${L_{(\alpha,\beta)}\in\mathcal{B}(\mathcal{H},\mathcal{H}^{\ast})}$ and
${{|(\alpha,\beta)|}=\alpha+\sum_{k=1}^{n}\beta_{k}}$.
Clearly, we can consider the parameters ${(\eta,\theta)}$ in the set
$B(0,1)\times\mathbb{C}^{n}$.
2\. In this step, we shall modify the expansion (5.113) conveniently in order
to obtain a holomorphic invertible operator in the variable
${(\eta,\theta)}$. For this, recall that according to item ${(i)}$ of
Lemma 5.2, there exists $\gamma_{0}>0$ such that the operator ${L_{\rm
per}(\theta_{0})+{\gamma_{0}}I}$ is invertible. Then there exists ${\delta>0}$
such that the expansion
$L^{\Phi_{\eta}}(\theta)+{\gamma_{0}}I=(L_{\rm
per}(\theta_{0})+{\gamma_{0}}I)+\sum_{{|(\alpha,\beta)|}=1}^{3}((\eta,\theta)-(0,\theta_{0}))^{(\alpha,\beta)}L_{(\alpha,\beta)}+\mathrm{O}(\eta^{2}),$
(5.114)
in ${\mathcal{B}(\mathcal{H},\mathcal{H}^{\ast})}$ as ${\eta\to 0}$, is
invertible for all ${(\eta,\theta)\in B(0,\delta)\times
B(\theta_{0},\delta)}$, since the set of invertible bounded operators
${GL(\mathcal{H},\mathcal{H}^{\ast})}$ is an open subset of
${\mathcal{B}(\mathcal{H},\mathcal{H}^{\ast})}$. Now, we denote by
${S(\eta,\theta)}$ the inverse operator of
${L^{\Phi_{\eta}}(\theta)+{\gamma_{0}}I}$, ${(\eta,\theta)\in
B(0,\delta)\times B(\theta_{0},\delta)}$. Since the map ${L\in
GL(\mathcal{H},\mathcal{H}^{\ast})\mapsto
L^{-1}\in\mathcal{B}(\mathcal{H}^{\ast},\mathcal{H})}$ is continuous, the map
$(\eta,\theta)\in B(0,\delta)\times B(\theta_{0},\delta)\mapsto
S(\eta,\theta)\in\mathcal{B}(\mathcal{H}^{\ast},\mathcal{H})$
is continuous. As a consequence of this, for
${(\widetilde{\eta},\widetilde{\theta})\in B(0,\delta)\times
B(\theta_{0},\delta)}$ fixed, the limit of
$\frac{S(\eta,\widetilde{\theta})-S(\widetilde{\eta},\widetilde{\theta})}{\eta-\widetilde{\eta}}=-S(\eta,\widetilde{\theta}){\left[\frac{(L^{\Phi_{\eta}}(\widetilde{\theta})+{\gamma_{0}}I)-(L^{\Phi_{\widetilde{\eta}}}(\widetilde{\theta})+{\gamma_{0}}I)}{\eta-\widetilde{\eta}}\right]}S(\widetilde{\eta},\widetilde{\theta}),$
as ${\eta\to\widetilde{\eta}}$ (${\eta\not=\widetilde{\eta}}$), exists. Thus, ${\eta\in
B(0,\delta)\mapsto S(\eta,\widetilde{\theta})}$ is a holomorphic map.
Analogously, for ${j\in{\\{1,\ldots,n\\}}}$, we can prove that
$\theta_{j}\mapsto
S(\widetilde{\eta},\widetilde{\theta}_{1},\ldots,\widetilde{\theta}_{j-1},\theta_{j},\widetilde{\theta}_{j+1},\ldots,\widetilde{\theta}_{n})$
is a holomorphic map. Therefore, by Osgood’s Lemma (see, for instance, [22]), we
conclude that
$(\eta,\theta)\in B(0,\delta)\times B(\theta_{0},\delta)\mapsto
S(\eta,\theta)\in\mathcal{B}(\mathcal{H}^{\ast},\mathcal{H})$ (5.115)
is a holomorphic function.
3\. Finally, we are in a position to prove items ${(i)}$, ${(ii)}$ and
${(iii)}$ (the spectral analysis of the operator ${S(\eta,\theta)}$). First,
we note that for ${(\eta,\theta)}$ in a neighbourhood of
${(0,\theta_{0})}$, the map ${(\eta,\theta)\mapsto S(\eta,\theta)}$ satisfies
the assumptions of Theorem 2.26. We begin by recalling that the restriction
operator ${T\in\mathcal{B}(\mathcal{H}^{\ast},\mathcal{H})\mapsto
T\big{|}_{\mathcal{L}}\in\mathcal{B}(\mathcal{L},\mathcal{L})}$ is continuous
and satisfies
${\|T\|}_{\mathcal{B}(\mathcal{L},\mathcal{L})}\leqslant{\|T\|}_{\mathcal{B}(\mathcal{H}^{\ast},\mathcal{H})}\,,\;\,\forall
T\in\mathcal{B}(\mathcal{H}^{\ast},\mathcal{H}).$ (5.116)
Then, by (5.115), the map ${(\eta,\theta)\in B(0,\delta)\times
B(\theta_{0},\delta)\mapsto
S(\eta,\theta)\in\mathcal{B}(\mathcal{L},\mathcal{L})}$ is holomorphic. Since
holomorphic maps are locally analytic, there exists a neighbourhood
${\mathcal{U}}$ of ${(0,\theta_{0})}$,
${(0,\theta_{0})\in\mathcal{U}\subset\mathbb{C}\times\mathbb{C}^{n}}$, and a
family ${\\{S_{\sigma}\\}_{\sigma\in\mathbb{N}\times\mathbb{N}^{n}}}$
contained in ${\mathcal{B}(\mathcal{L},\mathcal{L})}$, such that
$S(\eta,\theta)=S_{0}+\sum_{\begin{subarray}{c}\sigma\in\mathbb{N}\times\mathbb{N}^{n}\\\
{|\sigma|}\neq
0\end{subarray}}(\eta,\theta)^{\sigma}S_{\sigma}\,,\;\,\forall(\eta,\theta)\in\mathcal{U}.$
(5.117)
Using (5.114) and (5.117), it is easy to see that ${S_{0}=(L_{\rm
per}(\theta_{0})+{\gamma_{0}}I)^{-1}\big{|}_{\mathcal{L}}}$. Notice also that
${\mu_{0}:={\left(\lambda_{0}+\gamma_{0}\right)}^{-1}}$ is an eigenvalue of
${S_{0}}$ if and only if ${\lambda_{0}}$ is an eigenvalue of ${L_{\rm
per}(\theta_{0})}$; that is,
$g\in{\\{f\in\mathcal{L}\;;\;S_{0}{\big{[}f\big{]}}=\mu_{0}f\\}}\;\Leftrightarrow\;g\in{\\{f\in\mathcal{L}\;;\;L_{\rm
per}(\theta_{0}){\big{[}f\big{]}}=\lambda_{0}f\\}}.$
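The stated equivalence is a one-line manipulation of the definition ${S_{0}=(L_{\rm per}(\theta_{0})+{\gamma_{0}}I)^{-1}\big{|}_{\mathcal{L}}}$; a sketch:

```latex
S_{0}[f] = \mu_{0} f
\;\Longleftrightarrow\;
f = \mu_{0}\,\big(L_{\rm per}(\theta_{0}) + \gamma_{0} I\big)[f]
\;\Longleftrightarrow\;
L_{\rm per}(\theta_{0})[f]
  = \big(\mu_{0}^{-1} - \gamma_{0}\big)\, f
  = \lambda_{0}\, f,
% using \mu_0 = (\lambda_0 + \gamma_0)^{-1} \neq 0.
```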
The final part of the proof is a direct application of Theorem 2.26. Due
to our assumption, $\mu_{0}$ is a real eigenvalue of the operator $S_{0}$ with
multiplicity $k_{0}$. Hence, by Theorem 2.26, there exists a neighbourhood
${\widetilde{\mathcal{U}}}$ of ${(0,\theta_{0})}$, with
${\widetilde{\mathcal{U}}\subset\mathcal{U}}$ and analytic maps
$\begin{array}[]{l}(\eta,\theta)\in\widetilde{\mathcal{U}}\;\longmapsto\;\mu_{01}(\eta,\theta),\mu_{02}(\eta,\theta),\ldots,\mu_{0k_{0}}(\eta,\theta)\in(0,\infty),\\\\[5.0pt]
(\eta,\theta)\in\widetilde{\mathcal{U}}\;\longmapsto\;\psi_{01}(\eta,\theta),\psi_{02}(\eta,\theta),\ldots,\psi_{0k_{0}}(\eta,\theta)\in\mathcal{L}-\\{0\\},\end{array}$
such that
* •
${\mu_{0\ell}(0,\theta_{0})=\mu_{0}}$,
* •
${S(\eta,\theta){\big{[}\psi_{0\ell}(\eta,\theta)\big{]}}=\mu_{0\ell}(\eta,\theta)\psi_{0\ell}(\eta,\theta)}$,
${\forall(\eta,\theta)\in\widetilde{\mathcal{U}}}$,
* •
${{\rm
dim}{\\{f\in\mathcal{L}\;;\;S(\eta,\theta){\big{[}f\big{]}}=\mu_{0\ell}(\eta,\theta)f\\}}\leqslant
k_{0}}$, ${\forall(\eta,\theta)\in\widetilde{\mathcal{U}}}$,
for all ${\ell\in\\{1,\ldots,k_{0}\\}}$. Thus, the proof of item ${(i)}$
is clear.
Using the second equality above, we obtain
$\displaystyle(L^{\Phi_{\eta}}(\theta)+{\gamma_{0}}I){\big{[}\psi_{0\ell}(\eta,\theta)\big{]}}$
$\displaystyle=$
$\displaystyle\frac{1}{\mu_{0\ell}(\eta,\theta)}(L^{\Phi_{\eta}}(\theta)+{\gamma_{0}}I){\left\\{S(\eta,\theta){\big{[}\psi_{0\ell}(\eta,\theta)\big{]}}\right\\}}$
$\displaystyle=$
$\displaystyle\frac{1}{\mu_{0\ell}(\eta,\theta)}\psi_{0\ell}(\eta,\theta),$
which implies that
${L^{\Phi_{\eta}}(\theta){\big{[}\psi_{0\ell}(\eta,\theta)\big{]}}=\lambda_{0\ell}(\eta,\theta)\psi_{0\ell}(\eta,\theta)}$,
for ${(\eta,\theta)\in\widetilde{\mathcal{U}}}$,
${\ell\in\\{1,\ldots,k_{0}\\}}$ and
${\lambda_{0\ell}(\eta,\theta):=[\mu_{0\ell}(\eta,\theta)]^{-1}-{\gamma_{0}}}$.
This finishes the proof of item ${(ii)}$.
Finally, note that
${S(\eta,\theta)\big{[}\mathcal{L}\big{]}\subset\mathcal{H}}$ and
$g\\!\in\\!\big{\\{}f\in\mathcal{H};S(\eta,\theta){\big{[}f\big{]}}\\!=\\!\mu_{0\ell}(\eta,\theta)f\big{\\}}\Leftrightarrow
g\\!\in\\!\big{\\{}f\in\mathcal{H}\;;\;L^{\Phi_{\eta}}(\theta){\big{[}f\big{]}}\\!=\\!\lambda_{0\ell}(\eta,\theta)f\big{\\}},$
which concludes the proof of item ${(iii)}$, completing the proof. ∎
### 5.2 Homogenization Analysis of the Perturbed Model
In this section, we investigate how the stochastic perturbation of the
identity characterizes the form of the coefficients in the asymptotic limit
of the Schrödinger equation
$\left\\{\begin{array}[]{l}i\displaystyle\frac{\partial
u_{\eta\varepsilon}}{\partial t}-{\rm div}{\bigg{(}A_{\rm
per}{\left(\Phi_{\eta}^{-1}{\left(\frac{x}{\varepsilon},\omega\right)}\right)}\nabla
u_{\eta\varepsilon}\bigg{)}}\\\\[14.0pt]
+{\bigg{(}\displaystyle\frac{1}{\varepsilon^{2}}V_{\rm
per}{\left(\Phi_{\eta}^{-1}{\left(\displaystyle\frac{x}{\varepsilon},\omega\right)}\right)}+U_{\rm
per}{\left(\Phi_{\eta}^{-1}{\left(\displaystyle\frac{x}{\varepsilon},\omega\right)}\right)}\bigg{)}}u_{\eta\varepsilon}=0\quad\text{in}\;\,\mathbb{R}^{n+1}_{T}\\!\times\\!\Omega,\\\\[14.0pt]
u_{\eta\varepsilon}(0,x,\omega)=u_{\eta\varepsilon}^{0}(x,\omega),\;\;(x,\omega)\in\mathbb{R}^{n}\\!\times\\!\Omega,\end{array}\right.$
(5.118)
where ${0<T<\infty}$, ${\mathbb{R}^{n+1}_{T}=(0,T)\times\mathbb{R}^{n}}$. The
coefficients are compositions of the periodic functions ${A_{\rm per}(y)}$,
${V_{\rm per}(y)}$, ${U_{\rm per}(y)}$ (as defined in the last subsection)
with a stochastic perturbation of the identity ${\Phi_{\eta}}$, ${\eta\in(0,1)}$,
oscillating at rate ${\varepsilon^{-1}}$, ${\varepsilon>0}$. The
function ${u_{\eta\varepsilon}^{0}(x,\omega)}$ is a well-prepared initial
datum (see (5.126)); this well-preparedness rests on natural periodic
conditions, to wit, on the existence of a pair
${\big{(}\theta^{\ast},\lambda_{\rm
per}(\theta^{\ast})\big{)}}\in\mathbb{R}^{n}\times\mathbb{R}$ such that
$\begin{split}(i)&\;\;\,\lambda_{\rm per}(\theta^{\ast})\;\text{is a simple
eigenvalue of}\;L_{\rm per}(\theta^{\ast}),\\\
(ii)&\;\;\,\theta^{\ast}\;\text{is a critical point of}\;\lambda_{\rm
per}(\cdot),\,\text{that is},\nabla_{\\!\\!\theta}\lambda_{\rm
per}(\theta^{\ast})=0.\end{split}$ (5.119)
By condition ${(i)}$ and Theorem 5.4, there exist a neighborhood
${\mathcal{U}}$ of ${(0,\theta^{\ast})}$ and analytic maps
$\begin{split}(i)&\;\;\,(\eta,\theta)\in\mathcal{U}\;\mapsto\;\lambda(\eta,\theta)\in\mathbb{R},\\\
(ii)&\;\;\,(\eta,\theta)\in\mathcal{U}\;\mapsto\;\psi(\eta,\theta)\in\mathcal{H}\setminus\\{0\\},\end{split}$
(5.120)
such that ${\lambda(0,\theta^{\ast})=\lambda_{\rm per}(\theta^{\ast})}$,
${L^{\Phi_{\eta}}(\theta)\big{[}\psi(\eta,\theta)\big{]}=\lambda(\eta,\theta)\,\psi(\eta,\theta)}$
and
${\rm
dim}\big{\\{}f\in\mathcal{H}\;;\;L^{\Phi_{\eta}}(\theta)=\lambda(\eta,\theta)\,f\big{\\}}=1,\;\forall(\eta,\theta)\in\mathcal{U}.$
Thus,
$\lambda(\eta,\theta)\;\text{is a simple eigenvalue
of}\;L^{\Phi_{\eta}}(\theta),\forall(\eta,\theta)\in\mathcal{U}.$ (5.121)
Additionally, as ${\lambda(0,\theta^{\ast})=\lambda_{\rm per}(\theta^{\ast})}$
is an isolated point of ${\sigma_{\rm point}\big{(}L_{\rm
per}(\theta^{\ast})\big{)}}$ (every point has this property),
${\lambda(\eta,\theta)}$ is an isolated point of ${\sigma_{\rm
point}\big{(}L^{\Phi_{\eta}}(\theta)\big{)}}$ for each
${(\eta,\theta)\in\mathcal{U}}$. Thus, we have ${\lambda(0,\cdot)=\lambda_{\rm
per}(\cdot)}$ in a neighbourhood of ${\theta^{\ast}}$. We now denote
${\psi_{\rm per}(\cdot):=\psi(0,\cdot)}$. Without loss of generality, we
assume ${\int_{[0,1)^{n}}{|\psi_{\rm per}(\theta^{\ast})|}^{2}dy=1}$.
Moreover, we shall assume that the homogenized (periodic) matrix ${A_{\rm
per}^{\ast}=D_{\\!\theta}^{2}\lambda_{\rm per}(\theta^{\ast})}$ is invertible,
which happens if $\theta=\theta^{\ast}$ is a strict local minimum or
maximum point of $\mathbb{R}^{n}\ni\theta\mapsto\lambda_{\rm per}(\theta)$.
Thus, an immediate application of the Implicit Function Theorem gives us the
following lemma:
###### Lemma 5.5.
Let the condition (5.119) be satisfied and ${A_{\rm per}^{\ast}}$ be an
invertible matrix. Then, there exists a neighborhood ${\mathcal{V}}$ of ${0}$,
${0\in\mathcal{V}\subset\mathbb{R}}$, and a ${\mathbb{R}^{n}}$-value analytic
map
$\theta(\cdot):\eta\in\mathcal{V}\mapsto\theta(\eta)\in\mathbb{R}^{n},$
such that ${\theta(0)=\theta^{\ast}}$ and
$\nabla_{\\!\\!\theta}\lambda\big{(}\eta,\theta(\eta)\big{)}=0,\;\;\forall\eta\in\mathcal{V}.$
(5.122)
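Although the proof of Lemma 5.5 is omitted, the Implicit Function Theorem setup behind it can be sketched as follows, in the notation above:

```latex
% Define, on a neighbourhood of (0,\theta^\ast) where \lambda is analytic,
%   F(\eta,\theta) := \nabla_{\!\theta}\,\lambda(\eta,\theta) \in \mathbb{R}^n.
% By (5.119)(ii):
%   F(0,\theta^\ast) = \nabla_{\!\theta}\,\lambda_{\rm per}(\theta^\ast) = 0,
% and the \theta-derivative at that point is
%   D_\theta F(0,\theta^\ast) = D_{\!\theta}^{2}\lambda_{\rm per}(\theta^\ast)
%                             = A_{\rm per}^{\ast},
% which is invertible by hypothesis. The analytic Implicit Function Theorem
% then yields an analytic map \eta \mapsto \theta(\eta), \theta(0) = \theta^\ast, with
F\big(\eta,\theta(\eta)\big)
  = \nabla_{\!\theta}\,\lambda\big(\eta,\theta(\eta)\big) = 0,
\qquad \forall\,\eta\in\mathcal{V}.
```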
By the analytic structure of the functions in (5.120) and Lemma 5.5, there
exists a neighborhood ${\mathcal{V}}$ of ${0}$,
${0\in\mathcal{V}\subset\mathbb{R}}$, such that
$\begin{split}(i)&\;\;\,\eta\in\mathcal{V}\;\mapsto\;\lambda\big{(}\eta,\theta(\eta)\big{)}\in\mathbb{R},\\\
(ii)&\;\;\,\eta\in\mathcal{V}\;\mapsto\;\psi\big{(}\eta,\theta(\eta)\big{)}\in\mathcal{H}\setminus\\{0\\},\\\
(iii)&\;\;\,\eta\in\mathcal{V}\;\mapsto\;\xi_{k}\big{(}\eta,\theta(\eta)\big{)}\in\mathcal{H},\forall k\in\\{1,\ldots,n\\},\end{split}$
(5.123)
are analytic functions, where
${\xi_{k}(\eta,\theta):=(2i\pi)^{-1}{\partial_{\theta_{k}}\psi}(\eta,\theta)}$,
for ${k\in\\{1,\ldots,n\\}}$. We also consider ${\xi_{k,{\rm
per}}(\cdot)=\xi_{k}(0,\cdot)}$. Furthermore, by (5.121) and (5.122), for each
fixed ${\eta\in\mathcal{V}}$ we have that the pair
${\big{(}\theta(\eta),\lambda\big{(}\eta,\theta(\eta)\big{)}\big{)}\in\mathbb{R}^{n}\times\mathbb{R}}$
satisfies:
$\begin{split}(i)&\;\;\,\lambda(\eta,\theta(\eta))\;\text{is a simple
eigenvalue of}\;L^{\Phi_{\eta}}\big{(}\theta(\eta)\big{)},\\\
(ii)&\;\;\,\theta(\eta)\;\text{is a critical point
of}\;\lambda(\eta,\cdot),\,\text{that
is},\nabla_{\\!\\!\theta}\lambda(\eta,\theta(\eta))=0.\end{split}$ (5.124)
This means that Theorem 4.2 can be applied. Before doing so, we introduce a
simplified notation for the functions in (5.123), as follows:
$\begin{split}(i)&\;\;\,\theta_{\eta}:=\theta(\eta),\\\
(ii)&\;\;\,\lambda_{\eta}:=\lambda\big{(}\eta,\theta(\eta)\big{)},\\\
(iii)&\;\;\,\psi_{\eta}:=\psi\big{(}\eta,\theta(\eta)\big{)},\\\
(iv)&\;\;\,\xi_{k,\eta}:=\xi_{k}\big{(}\eta,\theta(\eta)\big{)},\,k\in\\{1,\ldots,n\\}.\end{split}$
(5.125)
Finally, from (5.124), for each fixed ${\eta\in\mathcal{V}}$, the notion of
well-preparedness for the initial data $u_{\eta\varepsilon}^{0}$ is given as
follows:
$u_{\eta\varepsilon}^{0}(x,\omega)=e^{2i\pi\frac{\theta_{\eta}\cdot
x}{\varepsilon}}\,v^{0}(x)\,\psi_{\eta}{\left(\Phi_{\eta}^{-1}{\left(\frac{x}{\varepsilon},\omega\right)},\omega\right)},\;(x,\omega)\in\mathbb{R}^{n}\times\Omega,$
(5.126)
where ${v^{0}\in C_{\rm c}^{\infty}(\mathbb{R}^{n})}$. Thus, applying
Theorem 4.2, if ${u_{\eta\varepsilon}}$ is the solution of (5.118), then the
sequence in ${\varepsilon>0}$
$v_{\eta\varepsilon}(t,x,\widetilde{\omega})=e^{-{\left(i\frac{\lambda_{\eta}t}{\varepsilon^{2}}+2i\pi\frac{\theta_{\eta}\cdot
x}{\varepsilon}\right)}}u_{\eta\varepsilon}(t,x,\widetilde{\omega}),\;\,(t,x,\widetilde{\omega})\in\mathbb{R}^{n+1}_{T}\times\Omega,$
$\Phi_{\omega}$-two-scale converges to the limit
${v_{\eta}(t,x)\,\psi_{\eta}{\big{(}\Phi_{\eta}^{-1}(z,\omega),\omega\big{)}}}$ with
$\lim_{\varepsilon\to
0}\iint_{\mathbb{R}^{n+1}_{T}}\\!{\left|v_{\eta\varepsilon}(t,x,\widetilde{\omega})-v_{\eta}(t,x)\,\psi_{\eta}{\left(\Phi^{-1}_{\eta}{\left(\frac{x}{\varepsilon},\widetilde{\omega}\right)},\widetilde{\omega}\right)}\right|}^{2}dx\,dt\,=\,0,$
for a.e. ${\widetilde{\omega}\in\Omega}$, where ${v_{\eta}\in
C\big{(}[0,T],L^{2}(\mathbb{R}^{n})\big{)}}$ is the unique solution of the
homogenized Schrödinger equation
$\left\\{\begin{array}[]{c}i\displaystyle\frac{\partial v_{\eta}}{\partial
t}-{\rm div}{\left(A^{\ast}_{\eta}\nabla
v_{\eta}\right)}+U_{\\!\eta}^{\ast}v_{\eta}=0\,,\;\,\text{in}\;\,\mathbb{R}^{n+1}_{T},\\\\[7.5pt]
v_{\eta}(0,x)=v^{0}(x)\,,\;\,x\in\mathbb{R}^{n},\end{array}\right.$ (5.127)
with effective coefficients
${A^{\ast}_{\eta}=D_{\\!\theta}^{2}\lambda\big{(}\eta,\theta(\eta)\big{)}}$
and
$U^{\ast}_{\\!\eta}=c^{-1}_{\eta}\int_{\Omega}\int_{\Phi_{\eta}([0,1)^{n},\omega)}U_{\rm
per}{\big{(}\Phi^{-1}_{\eta}(z,\omega)\big{)}}{\left|\psi_{\eta}{\big{(}\Phi^{-1}_{\eta}(z,\omega),\omega\big{)}}\right|}^{2}dz\,d\mathbb{P}(\omega),$
(5.128)
where
$c_{\eta}=\int_{\Omega}\int_{\Phi_{\eta}([0,1)^{n},\omega)}{\left|\psi_{\eta}{\big{(}\Phi^{-1}_{\eta}(z,\omega),\omega\big{)}}\right|}^{2}dz\,d\mathbb{P}(\omega).$
(5.129)
###### Remark 5.6.
We recall that, using equality (4.1), for each fixed ${\eta}$ the matrix
${B_{\eta}\in\mathbb{R}^{n\times n}}$ must satisfy
$\begin{split}&(B_{\eta})_{k\ell}:=c_{\eta}^{-1}\bigg{[}\int_{\Omega}\int_{\Phi_{\eta}([0,1)^{n},\omega)}A_{\rm
per}{\left(\Phi_{\eta}^{-1}(z,\omega)\right)}{\left(e_{\ell}\,\psi_{\eta}{\left(\Phi_{\eta}^{-1}(z,\omega),\omega\right)}\right)}\cdot\\\
&\hskip
223.3543pt\overline{\left(e_{k}\,\psi_{\eta}{\left(\Phi_{\eta}^{-1}(z,\omega),\omega\right)}\right)}\,dz\,d\mathbb{P}(\omega)\\\
&+\int_{\Omega}\int_{\Phi_{\eta}([0,1)^{n},\omega)}A_{\rm
per}{\left(\Phi_{\eta}^{-1}(z,\omega)\right)}{\left(e_{\ell}\,\psi_{\eta}{\left(\Phi_{\eta}^{-1}(z,\omega),\omega\right)}\right)}\cdot\\\
&\hskip
170.71652pt\overline{{\left(\nabla_{\\!\\!z}+2i\pi\theta_{\eta}\right)}{\left(\xi_{k,\eta}{\left(\Phi_{\eta}^{-1}(z,\omega),\omega\right)}\right)}}\,dz\,d\mathbb{P}(\omega)\\\
&-\int_{\Omega}\int_{\Phi_{\eta}([0,1)^{n},\omega)}A_{\rm
per}{\left(\Phi_{\eta}^{-1}(z,\omega)\right)}{\left(\nabla_{\\!\\!z}+2i\pi\theta_{\eta}\right)}{\left(\psi_{\eta}{\left(\Phi_{\eta}^{-1}(z,\omega),\omega\right)}\right)}\cdot\\\
&\hskip
221.93158pt\overline{{\left(e_{\ell}\,\xi_{k,\eta}{\left(\Phi_{\eta}^{-1}(z,\omega),\omega\right)}\right)}}\,dz\,d\mathbb{P}(\omega)\bigg{]},\end{split}$
(5.130)
for ${k,\ell\in\\{1,\ldots,n\\}}$ and the homogenized matrix can be written as
${A_{\eta}^{\ast}=2^{-1}{\big{(}B_{\eta}+B_{\eta}^{t}\big{)}}}$.
#### 5.2.1 Expansion of the effective coefficients
As a consequence of the formula of the effective coefficients of the
homogenized equation (5.127), we have the following proposition:
###### Proposition 5.7.
The maps ${\eta\mapsto A_{\eta}^{\ast}\in\mathbb{R}^{n\times n}}$,
${\eta\mapsto B_{\eta}\in\mathbb{R}^{n\times n}}$ and ${\eta\mapsto
U_{\eta}^{\ast}\in\mathbb{R}}$ are analytic in a neighbourhood of ${\eta=0}$.
###### Proof.
Let ${\mathcal{U}}$ and ${\mathcal{V}}$ be as in (5.120) and (5.123),
respectively. For each ${\eta\in\mathcal{V}}$, the above arguments give us the
formula
${A^{\ast}_{\eta}=D_{\\!\theta}^{2}\lambda\big{(}\eta,\theta(\eta)\big{)}}$.
Thus, as ${(\eta,\theta)\in\mathcal{U}\mapsto
D_{\\!\theta}^{2}\lambda(\eta,\theta)\in\mathbb{R}^{n\times n}}$ and
${\eta\in\mathcal{V}\mapsto\theta(\eta)\in\mathbb{R}^{n}}$ are analytic maps,
we conclude that ${\eta\in\mathcal{V}\mapsto
D_{\theta}^{2}\lambda(\eta,\theta(\eta))\in\mathbb{R}^{n\times n}}$ is also an
analytic map. This means that ${\eta\mapsto A_{\eta}^{\ast}}$ is an analytic
map.
From (5.128) and (5.129), making a change of variables, we have
$U^{\ast}_{\\!\eta}=c^{-1}_{\eta}\int_{\Omega}\int_{[0,1)^{n}}U_{\rm
per}(y){\left|\psi_{\eta}(y,\omega)\right|}^{2}{\rm
det}[\nabla_{\\!\\!y}\Phi_{\eta}(y,\omega)]\,dy\,d\mathbb{P}(\omega)$
and
$c_{\eta}=\int_{\Omega}\int_{[0,1)^{n}}{\left|\psi_{\eta}(y,\omega)\right|}^{2}{\rm
det}[\nabla_{\\!\\!y}\Phi_{\eta}(y,\omega)]\,dy\,d\mathbb{P}(\omega)\not=0.$
Then, as the map ${\eta\mapsto\psi_{\eta}\in\mathcal{H}\setminus\\{0\\}}$ is
analytic, the map ${\eta\mapsto c_{\eta}\not=0}$ is also analytic. Hence the
map ${\eta\mapsto c_{\eta}^{-1}}$ is analytic. Therefore, ${\eta\mapsto
U_{\eta}^{\ast}}$ is analytic. ∎
As a consequence of this proposition, there exist
${\\{A^{(j)},\,B^{(j)}\\}_{j\in\mathbb{N}}\subset\mathbb{R}^{n\times n}}$ and
${\\{U^{(j)}\\}_{j\in\mathbb{N}}\subset\mathbb{R}}$ such that
$\left\\{\begin{array}[]{lll}A_{\eta}^{\ast}&=&A^{(0)}+\eta
A^{(1)}+\eta^{2}A^{(2)}+\ldots,\\\\[5.0pt] U_{\eta}^{\ast}&=&U^{(0)}+\eta
U^{(1)}+\eta^{2}U^{(2)}+\ldots,\\\\[5.0pt] B_{\eta}^{\ast}&=&B^{(0)}+\eta
B^{(1)}+\eta^{2}B^{(2)}+\ldots.\end{array}\right.$ (5.131)
Now, our interest is to determine the terms of order ${\eta^{0}}$
and ${\eta}$ of these homogenized coefficients. For this purpose, guided by
(5.110) and by the formulas (5.130), (5.128) and (5.129), we shall analyse the
expansion of the analytic functions in (5.125). By analyticity, there
exist sequences
${{\\{\theta^{(j)}\\}}_{j\in\mathbb{N}}\subset\mathbb{R}^{n}}$,
${{\\{\lambda^{(j)}\\}}_{j\in\mathbb{N}}\subset\mathbb{R}}$,
${{\\{\psi^{(j)}\\}}_{j\in\mathbb{N}}\subset\mathcal{H}}$ and
${{\\{\xi_{k}^{(j)}\\}}_{j\in\mathbb{N}}\subset\mathcal{H}}$,
${k\in\\{1,\ldots,n\\}}$, such that
$\displaystyle\theta_{\eta}$ $\displaystyle=$
$\displaystyle\theta^{(0)}+\eta\theta^{(1)}+\eta^{2}\theta^{(2)}+\ldots=\theta^{(0)}+\eta\theta^{(1)}+\mathrm{O}(\eta^{2}),$
(5.132) $\displaystyle\lambda_{\eta}$ $\displaystyle=$
$\displaystyle\lambda^{(0)}+\eta\lambda^{(1)}+\eta^{2}\lambda^{(2)}+\ldots=\lambda^{(0)}+\eta\lambda^{(1)}+\mathrm{O}(\eta^{2}),$
(5.133) $\displaystyle\psi_{\eta}$ $\displaystyle=$
$\displaystyle\psi^{(0)}+\eta\psi^{(1)}+\eta^{2}\psi^{(2)}+\ldots=\psi^{(0)}+\eta\psi^{(1)}+\mathrm{O}(\eta^{2}),$
(5.134) $\displaystyle\xi_{k,\eta}$ $\displaystyle=$
$\displaystyle\xi_{k}^{(0)}+\eta\xi_{k}^{(1)}+\eta^{2}\xi_{k}^{(2)}+\ldots=\xi_{k}^{(0)}+\eta\xi_{k}^{(1)}+\mathrm{O}(\eta^{2}),$
(5.135)
where ${k\in\\{1,\ldots,n\\}}$.
At first glance, in order to determine the coefficients of the expansions in
(5.131) we should solve, a priori, auxiliary problems that involve both the
deterministic and the stochastic variables. This can be a disadvantage from the
point of view of numerical analysis. Our aim hereafter is to prove that we
can simplify the computation of these coefficients by working in a periodic
environment, which is computationally cheaper. In order to do this, note that
${\theta^{(0)}=\theta^{\ast}}$, ${\lambda^{(0)}=\lambda_{\rm
per}(\theta^{\ast})}$, ${\psi^{(0)}=\psi_{\rm per}(\theta^{\ast})}$ and
${\xi_{k}^{(0)}=\xi_{k,{\rm per}}(\theta^{\ast})}$, ${k\in\\{1,\ldots,n\\}}$,
which satisfy
$\displaystyle\left\\{\begin{array}[]{l}{\left(L_{\rm
per}(\theta^{\ast})-\lambda_{\rm per}(\theta^{\ast})\right)}{\big{[}\psi_{\rm
per}(\theta^{\ast})\big{]}}=0\;\,\text{in}\;\,[0,1)^{n},\\\\[6.0pt] \hskip
42.67912pt\psi_{\rm
per}(\theta^{\ast})\;\;[0,1)^{n}\text{-periodic},\end{array}\right.$ (5.138)
$\displaystyle\left\\{\begin{array}[]{l}{\left(L_{\rm
per}(\theta^{\ast})-\lambda_{\rm
per}(\theta^{\ast})\right)}{\big{[}\xi_{k,{\rm
per}}(\theta^{\ast})\big{]}}=\mathcal{X}{\big{[}\psi_{\rm
per}(\theta^{\ast})\big{]}}\;\,\text{in}\;\,[0,1)^{n},\\\\[6.0pt] \hskip
42.67912pt\xi_{k,{\rm
per}}(\theta^{\ast})\;\;[0,1)^{n}\text{-periodic},\end{array}\right.$ (5.141)
where
$\mathcal{X}{\big{[}f\big{]}}:={\left({\rm
div}_{\\!y}+2i\pi\theta^{\ast}\right)}{\big{\\{}A_{\rm
per}(y){(e_{k}f)}\big{\\}}}+{(e_{k})}{\big{\\{}A_{\rm
per}(y){(\nabla_{\\!\\!y}+2i\pi\theta^{\ast})}f\big{\\}}},$
for ${f\in\mathcal{H}}$. The equation (5.138) is the spectral cell equation
and (5.141) is the first auxiliary cell equation related to the periodic case
(see Section 3.3).
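As an aside on numerics, the spectral cell equation (5.138) is a standard Bloch eigenvalue problem with periodic boundary conditions, and can be approximated by any discretization that respects periodicity. The sketch below is our own illustration (not part of the text): a one-dimensional finite-difference version of ${L_{\rm per}(\theta)=-({\rm div}_{y}+2i\pi\theta)A_{\rm per}(\nabla_{y}+2i\pi\theta)+V_{\rm per}}$ on ${[0,1)}$, where the coefficient samples `A` and `V` are hypothetical placeholders.

```python
import numpy as np

def bloch_ground_state(theta, n=64, A=None, V=None):
    """Lowest eigenpair of a 1D discretization of the spectral cell operator
    L_per(theta) = -(d/dy + 2i*pi*theta) A(y) (d/dy + 2i*pi*theta) + V(y)
    on [0,1) with periodic boundary conditions.  A and V are hypothetical
    coefficient profiles (callables of y); defaults are A = 1, V = 0."""
    h = 1.0 / n
    y = np.arange(n) * h
    a = np.ones(n) if A is None else A(y)      # sample of A_per on the grid
    v = np.zeros(n) if V is None else V(y)     # sample of V_per on the grid
    # forward difference with periodic wrap-around
    D = (np.eye(n, k=1) - np.eye(n)) / h
    D[-1, 0] = 1.0 / h
    # shifted gradient (d/dy + 2i*pi*theta)
    G = D + 2j * np.pi * theta * np.eye(n)
    # L = G^* diag(a) G + diag(v) is Hermitian by construction
    L = G.conj().T @ np.diag(a) @ G + np.diag(v)
    lam, psi = np.linalg.eigh(L)
    return lam[0], psi[:, 0]
```

For $A\equiv 1$, $V\equiv 0$ the ground state at $\theta=0$ is the constant function with eigenvalue $0$, which gives a quick consistency check of the assembly.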
The following theorem shows us that the terms ${\psi^{(1)}}$ and
${\xi_{k}^{(1)}}$, $k\in\\{1,\ldots,n\\}$, given by (5.134) and (5.135)
respectively, satisfy auxiliary-type cell equations.
###### Theorem 5.8.
Let ${\psi^{(1)}}$ and ${\xi_{k}^{(1)}}$, ${k\in\\{1,\ldots,n\\}}$, be as
above. Then these functions satisfy the following equations:
$\displaystyle\left\\{\begin{array}[]{l}{\left(L_{\rm
per}(\theta^{\ast})-\lambda_{\rm
per}(\theta^{\ast})\right)}{\big{[}\psi^{(1)}\big{]}}=\mathcal{Y}{\big{[}\psi_{\rm
per}(\theta^{\ast})\big{]}}\;\,\text{in}\;\,[0,1)^{n}\times\Omega,\\\\[6.0pt]
\hskip 42.67912pt\psi^{(1)}\;\text{stationary},\end{array}\right.$ (5.144)
$\displaystyle\left\\{\begin{array}[]{l}{\left(L_{\rm
per}(\theta^{\ast})-\lambda_{\rm
per}(\theta^{\ast})\right)}{\big{[}\xi_{k}^{(1)}\big{]}}=\mathcal{X}{\big{[}\psi^{(1)}\big{]}}\\\\[6.0pt]
\hskip 56.9055pt+\,\mathcal{Y}{\big{[}\xi_{k,{\rm
per}}(\theta^{\ast})\big{]}}+\mathcal{Z}_{k}{\big{[}\psi_{\rm
per}(\theta^{\ast})\big{]}}\;\,\text{in}\;\,[0,1)^{n}\times\Omega,\\\\[6.0pt]
\hskip 42.67912pt\xi_{k}^{(1)}\;\text{stationary},\end{array}\right.$ (5.148)
where the operators ${\mathcal{Y}}$ and ${\mathcal{Z}_{k}}$,
${k\in\\{1,\ldots,n\\}}$, are defined by
$\displaystyle\mathcal{Y}{\big{[}f\big{]}}$ $\displaystyle\\!\\!:=\\!\\!$
$\displaystyle{\left({\rm
div}_{\\!y}+2i\pi\theta^{\ast}\right)}{\big{\\{}A_{\rm
per}(y){\big{(}-[\nabla_{\\!\\!y}Z](y,\omega)\nabla_{\\!\\!y}f+2i\pi\theta^{(1)}f\big{)}}\big{\\}}}$
$\displaystyle-\,{\rm
div}_{\\!y}{\big{\\{}[\nabla_{\\!\\!y}Z]^{t}(y,\omega)A_{\rm
per}(y){(\nabla_{\\!\\!y}+2i\pi\theta^{\ast})}f\big{\\}}}$
$\displaystyle+{\left(2i\pi\theta^{(1)}\right)}{\big{\\{}A_{\rm
per}(y){\left(\nabla_{\\!\\!y}+2i\pi\theta^{\ast}\right)}f\big{\\}}}$
$\displaystyle+\,{\left({\rm
div}_{\\!y}+2i\pi\theta^{\ast}\right)}{\big{\\{}{\left[{\rm
div}_{\\!y}Z(y,\omega)A_{\rm
per}(y)\right]}{(\nabla_{\\!\\!y}+2i\pi\theta^{\ast})}f\big{\\}}}+\lambda^{(1)}f$
$\displaystyle+\,{\big{\\{}{\rm div}_{\\!y}Z(y,\omega)\,{\left[\lambda_{\rm
per}(\theta^{\ast})-V_{\rm per}(y)\right]}\big{\\}}}f,$
$\displaystyle\mathcal{Z}_{k}{\big{[}f\big{]}}$ $\displaystyle\\!\\!:=\\!\\!$
$\displaystyle{\left({\rm
div}_{\\!y}+2i\pi\theta^{\ast}\right)}{\big{\\{}{\left[{\rm
div}_{\\!y}Z(y,\omega)A_{\rm per}(y)\right]}{(e_{k}f)}\big{\\}}}$
$\displaystyle-\,{\rm
div}_{\\!y}{\big{\\{}[\nabla_{\\!\\!y}Z]^{t}(y,\omega)A_{\rm
per}(y){(e_{k}f)}\big{\\}}}+{\left(2i\pi\theta^{(1)}\right)}{\left\\{A_{\rm
per}(y){(e_{k}f)}\right\\}}$
$\displaystyle+\,{\left(e_{k}\right)}{\big{\\{}{\big{[}{\rm
div}_{\\!y}Z(y,\omega)A_{\rm
per}(y)\big{]}}{\left(\nabla_{\\!\\!y}+2i\pi\theta^{\ast}\right)}f\big{\\}}}$
$\displaystyle-\,{\left(e_{k}\right)}{\left\\{A_{\rm
per}(y){\left[\nabla_{\\!\\!y}Z\right]}(y,\omega)\nabla_{\\!\\!y}f\right\\}}+{\left(e_{k}\right)}{\big{\\{}A_{\rm
per}(y){(2i\pi\theta^{(1)}f)}\big{\\}}},$
for ${f\in\mathcal{H}}$.
For the proof of this theorem, we shall essentially use the structure of the
spectral cell equation (3.56) and of the f.a.c. equation (3.3) with periodic
coefficients, composed with the stochastic deformation of the identity
${\Phi_{\eta}}$, together with the identities (5.110).
###### Proof.
1\. To begin, let us consider the set ${\mathcal{V}}$ as in (5.123). Then,
making a change of variables in the spectral cell equation (3.56) adapted to
this context, we find
$\displaystyle\int_{[0,1)^{n}}\int_{\Omega}\Big{\\{}A_{\rm
per}(y)\big{(}[\nabla_{\\!\\!y}\Phi_{\eta}]^{-1}\nabla_{\\!\\!y}\psi_{\eta}+2i\pi\theta_{\eta}\psi_{\eta}\big{)}\cdot\overline{\big{(}[\nabla_{\\!\\!y}\Phi_{\eta}]^{-1}\nabla_{\\!\\!y}\zeta+2i\pi\theta_{\eta}\zeta\big{)}}\,$
$\displaystyle\qquad\qquad\qquad+{\left(V_{\rm
per}(y)-\lambda_{\eta}\right)}\,\psi_{\eta}\,\overline{\zeta}\,\Big{\\}}{\rm
det}[\nabla_{\\!\\!y}\Phi_{\eta}]\,d\mathbb{P}(\omega)\,dy=0,$ (5.149)
for all ${\eta\in\mathcal{V}}$ and ${\zeta\in\mathcal{H}}$. If we insert the
equations (5.110), (5.132), (5.133) and (5.134) in equation (5.149) and
compute the term of order $\eta$, we arrive at
$\begin{array}[]{l}\displaystyle{\int_{[0,1)^{n}}\\!\int_{\Omega}\\!\Big{\\{}A_{\rm
per}(y){\left(\nabla_{\\!\\!y}\psi^{(1)}+2i\pi\theta^{\ast}\psi^{(1)}\right)}\cdot\overline{{\left(\nabla_{\\!\\!y}\zeta+2i\pi\theta^{\ast}\zeta\right)}}+{\left(V_{\rm
per}(y)-\lambda_{\rm
per}(\theta^{\ast})\right)}\psi^{(1)}\,\overline{\zeta}}\\\\[15.0pt]
\displaystyle\qquad\qquad\qquad+\,\\!A_{\rm
per}(y){\left(-[\nabla_{\\!\\!y}Z]\,\nabla_{\\!\\!y}\psi_{\rm
per}(\theta^{\ast})+2i\pi\theta^{(1)}\psi_{\rm
per}(\theta^{\ast})\right)}\\!\cdot\\!\overline{{\left(\nabla_{\\!\\!y}\zeta+2i\pi\theta^{\ast}\zeta\right)}}\\\\[15.0pt]
\displaystyle\qquad\qquad\qquad+A_{\rm per}(y){\left(\nabla_{\\!\\!y}\psi_{\rm
per}(\theta^{\ast})+2i\pi\theta^{\ast}\psi_{\rm
per}(\theta^{\ast})\right)}\cdot\overline{{\left(-[\nabla_{\\!\\!y}Z]\nabla_{\\!\\!y}\zeta+2i\pi\theta^{(1)}\zeta\right)}}\\\\[15.0pt]
\displaystyle\qquad\qquad\qquad+A_{\rm per}(y){\left(\nabla_{\\!\\!y}\psi_{\rm
per}(\theta^{\ast})+2i\pi\theta^{\ast}\psi_{\rm
per}(\theta^{\ast})\right)}\cdot\overline{{\left(\nabla_{\\!\\!y}\zeta+2i\pi\theta^{\ast}\zeta\right)}}\,{\rm
div}_{\\!y}Z\\\\[15.0pt]
\displaystyle\qquad\qquad\qquad-\lambda^{(1)}\,\psi_{\rm
per}(\theta^{\ast})\,\overline{\zeta}+{\left(V_{\rm per}(y)-\lambda_{\rm
per}(\theta^{\ast})\right)}\psi^{(0)}\,\overline{\zeta}\,{\rm
div}_{\\!y}Z\Big{\\}}\,d\mathbb{P}(\omega)\,dy=0,\end{array}$
for all ${\zeta\in\mathcal{H}}$. This equation is
the variational formulation of the equation (5.144), which concludes the first
part of the proof.
2\. For the second part of the proof, we proceed similarly with respect to the
f.a.c. equation (3.3) and obtain
$\begin{split}&\displaystyle\int_{[0,1)^{n}}\int_{\Omega}\Big{\\{}A_{\rm
per}(y)\big{(}[\nabla_{\\!\\!y}\Phi_{\eta}]^{-1}\nabla_{\\!\\!y}\xi_{k,\eta}+2i\pi\theta_{\eta}\xi_{k,\eta}\big{)}\cdot\overline{\big{(}[\nabla_{\\!\\!y}\Phi_{\eta}]^{-1}\nabla_{\\!\\!y}\zeta+2i\pi\theta_{\eta}\zeta\big{)}}\\\
&\displaystyle\qquad\qquad\qquad+A_{\rm
per}(y){\left(e_{k}\,\psi_{\eta}\right)}\cdot\overline{\big{(}[\nabla_{\\!\\!y}\Phi_{\eta}]^{-1}\nabla_{\\!\\!y}\zeta+2i\pi\theta_{\eta}\zeta\big{)}}\\\
&\displaystyle\qquad\qquad\qquad-A_{\rm
per}(y)\big{(}[\nabla_{\\!\\!y}\Phi_{\eta}]^{-1}\nabla_{\\!\\!y}\psi_{\eta}+2i\pi\theta_{\eta}\psi_{\eta}\big{)}\cdot\overline{{\left(e_{k}\,\zeta\right)}}\\\
&\displaystyle\qquad\quad+{\left(V_{\rm
per}(y)-\lambda_{\eta}\right)}\,\xi_{k,\eta}\,\overline{\zeta}-\,\frac{1}{2i\pi}\frac{\partial\lambda}{\partial\theta_{k}}(\eta,\theta(\eta))\,\psi_{\eta}\,\overline{\zeta}\Big{\\}}\,{\rm
det}[\nabla_{\\!\\!y}\Phi_{\eta}]\,d\mathbb{P}(\omega)\,dy=0,\end{split}$
(5.150)
for all ${\eta\in\mathcal{V}}$, ${\zeta\in\mathcal{H}}$ and
${k\in\\{1,\ldots,n\\}}$. Hence, taking into account Lemma 5.5 and
inserting the equations (5.110), (5.132), (5.133), (5.134) and (5.135) in
equation (5.150), a computation of the term of order ${\eta}$ leads us to
$\begin{array}[]{l}\displaystyle\int_{[0,1)^{n}}\int_{\Omega}\Big{\\{}A_{\rm
per}(y)\big{(}\nabla_{\\!\\!y}\xi_{k}^{(1)}+2i\pi\theta^{\ast}\xi_{k}^{(1)}\big{)}\cdot\overline{\big{(}\nabla_{\\!\\!y}\zeta+2i\pi\theta^{\ast}\zeta\big{)}}+{\left(V_{\rm
per}(y)-\lambda_{\rm
per}(\theta^{\ast})\right)}\xi_{k}^{(1)}\,\overline{\zeta}\\\\[15.0pt]
\displaystyle\qquad\qquad+A_{\rm
per}(y){\left(e_{k}\,\psi^{(1)}\right)}\cdot\overline{\big{(}\nabla_{\\!\\!y}\zeta+2i\pi\theta^{\ast}\zeta\big{)}}-A_{\rm
per}(y)\big{(}\nabla_{\\!\\!y}\psi^{(1)}+2i\pi\theta^{\ast}\psi^{(1)}\big{)}\cdot\overline{{\left(e_{k}\,\zeta\right)}}\\\\[15.0pt]
\displaystyle\qquad\qquad+A_{\rm
per}(y)\big{(}-[\nabla_{\\!\\!y}Z]\nabla_{\\!\\!y}\xi_{k,{\rm
per}}(\theta^{\ast})+2i\pi\theta^{(1)}\xi_{k,{\rm
per}}(\theta^{\ast})\big{)}\cdot\overline{\big{(}\nabla_{\\!\\!y}\zeta+2i\pi\theta^{\ast}\zeta\big{)}}\\\\[15.0pt]
\displaystyle\qquad\qquad+A_{\rm per}(y)\big{(}\nabla_{\\!\\!y}\xi_{k,{\rm
per}}(\theta^{\ast})+2i\pi\theta^{\ast}\xi_{k,{\rm
per}}(\theta^{\ast})\big{)}\cdot\overline{\big{(}-[\nabla_{\\!\\!y}Z]\nabla_{\\!\\!y}\zeta+2i\pi\theta^{(1)}\zeta\big{)}}\\\\[15.0pt]
\displaystyle\qquad\qquad+A_{\rm per}(y)\big{(}\nabla_{\\!\\!y}\xi_{k,{\rm
per}}(\theta^{\ast})+2i\pi\theta^{\ast}\xi_{k,{\rm
per}}(\theta^{\ast})\big{)}\cdot\overline{\big{(}\nabla_{\\!\\!y}\zeta+2i\pi\theta^{\ast}\zeta\big{)}}\,{\rm
div}_{\\!y}Z\\\\[15.0pt]
\displaystyle\qquad\qquad\qquad\qquad-\lambda^{(1)}\,\xi_{k,{\rm
per}}(\theta^{\ast})\,\overline{\zeta}+{\left(V_{\rm per}(y)-\lambda_{\rm
per}(\theta^{\ast})\right)}\xi_{k,{\rm
per}}(\theta^{\ast})\,\overline{\zeta}\,{\rm div}_{\\!y}Z\\\\[15.0pt]
\displaystyle\qquad\qquad\qquad+\,A_{\rm per}(y){\left(e_{k}\,\psi_{\rm
per}(\theta^{\ast})\right)}\cdot\Big{(}\overline{\big{(}\nabla_{\\!\\!y}\zeta+2i\pi\theta^{\ast}\zeta\big{)}\,{\rm
div}_{\\!y}Z-[\nabla_{\\!\\!y}Z]\nabla_{\\!\\!y}\zeta+2i\pi\theta^{(1)}\zeta}\Big{)}\\\\[15.0pt]
\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad-A_{\rm
per}(y)\big{(}\nabla_{\\!\\!y}\psi_{\rm
per}(\theta^{\ast})+2i\pi\theta^{\ast}\psi_{\rm
per}(\theta^{\ast})\big{)}\cdot\overline{{\left(e_{k}\,\zeta\right)}}\,{\rm
div}_{\\!y}Z\\\\[15.0pt] \displaystyle\qquad\qquad-\,A_{\rm
per}(y)\big{(}-[\nabla_{\\!\\!y}Z]\,\nabla_{\\!\\!y}\psi_{\rm
per}(\theta^{\ast})+2i\pi\theta^{(1)}\psi_{\rm
per}(\theta^{\ast})\big{)}\cdot\overline{{\left(e_{k}\,\zeta\right)}}\Big{\\}}\,d\mathbb{P}(\omega)\,dy=0,\end{array}$
for all ${\zeta\in\mathcal{H}}$. Noting that this is the variational
formulation of the equation (5.148), we conclude the proof.
∎
We remind the reader that if $f:\mathbb{R}^{n}\times\Omega\to\mathbb{R}$ is
a stationary function, then we shall use the following notation:
$\mathbb{E}[f(x,\cdot)]=\int_{\Omega}f(x,\omega)\,d\mathbb{P}(\omega),$
for any $x\in\mathbb{R}^{n}$. Roughly speaking, the theorem below tells us that
the homogenized matrix of the problem (5.118) can be obtained by solving
periodic problems.
###### Theorem 5.9.
Let ${A_{\eta}^{\ast}}$ be the homogenized matrix as in (5.127). Then
$A_{\eta}^{\ast}=A_{\rm per}^{\ast}+\eta A^{(1)}+\mathrm{O}(\eta^{2}).$
Moreover, the term of order ${\eta^{0}}$ is given by the homogenized matrix of
the periodic case, that is, ${A_{\rm
per}^{\ast}=2^{-1}{\left(B^{(0)}+(B^{(0)})^{t}\right)}}$, where the matrix
${B^{(0)}}$ is the term of order ${\eta^{0}}$ in (5.131) and it is defined by
$\begin{array}[]{r}\displaystyle(B^{(0)})_{k\ell}:=\int_{[0,1)^{n}}A_{\rm
per}(y){(e_{\ell}\,\psi_{\rm
per}(\theta^{\ast}))}\cdot{(e_{k}\,\overline{\psi_{\rm
per}(\theta^{\ast})})}\,dy\\\\[10.0pt] \displaystyle+\,\int_{[0,1)^{n}}A_{\rm
per}(y){(e_{\ell}\,\psi_{\rm
per}(\theta^{\ast}))}\cdot\overline{\left(\nabla_{\\!\\!y}\xi_{k,{\rm
per}}(\theta^{\ast})+2i\pi\theta^{\ast}\xi_{k,{\rm
per}}(\theta^{\ast})\right)}\,dy\\\\[10.0pt]
\displaystyle-\,\int_{[0,1)^{n}}A_{\rm
per}(y){\Big{(}{\left(\nabla_{\\!\\!y}\psi_{\rm
per}(\theta^{\ast})+2i\pi\theta^{\ast}\psi_{\rm
per}(\theta^{\ast})\right)}\Big{)}}\cdot\overline{{(e_{\ell}\,\xi_{k,{\rm
per}}(\theta^{\ast}))}}\,dy.\end{array}$
The term of order ${\eta}$ is given by
${A^{(1)}=2^{-1}{\left(B^{(1)}+(B^{(1)})^{t}\right)}}$, where the matrix
${B^{(1)}}$ is the term of order ${\eta}$ in (5.131) and it is defined by
$\begin{array}[]{l}\displaystyle(B^{(1)})_{k\ell}={{\bigg{[}\int_{[0,1)^{n}}A_{\rm
per}(y){(e_{\ell}\,\psi_{\rm
per}(\theta^{\ast}))}\cdot{(e_{k}\,\overline{\psi_{\rm
per}(\theta^{\ast})})}\,\mathbb{E}\Big{[}{\rm
div}_{\\!y}Z(y,\cdot)\Big{]}\,dy}}\\\\[11.0pt]
\displaystyle+\,\int_{[0,1)^{n}}A_{\rm per}(y){(e_{\ell}\,\psi_{\rm
per}(\theta^{\ast}))}\cdot{(e_{k}\,\overline{\mathbb{E}\big{[}\psi^{(1)}(y,\cdot)\big{]}})}\,dy\\\\[11.0pt]
\displaystyle+\int_{[0,1)^{n}}A_{\rm
per}(y){\left(e_{\ell}\,{\mathbb{E}\big{[}\psi^{(1)}(y,\cdot)\big{]}}\right)}\cdot{(e_{k}\,\overline{\psi_{\rm
per}(\theta^{\ast})})}\,dy\\\\[11.0pt] \displaystyle+\int_{[0,1)^{n}}A_{\rm
per}(y){(e_{\ell}\,\psi_{\rm
per}(\theta^{\ast}))}\cdot\overline{\left(\nabla_{\\!\\!y}\xi_{k,{\rm
per}}(\theta^{\ast})+2i\pi\theta^{\ast}\xi_{k,{\rm
per}}(\theta^{\ast})\right)}\,{\mathbb{E}\big{[}{\rm
div}_{\\!y}Z(y,\cdot)\big{]}}\,dy\\\\[11.0pt]
\displaystyle+\int_{[0,1)^{n}}A_{\rm per}(y){(e_{\ell}\,\psi_{\rm
per}(\theta^{\ast}))}\cdot\overline{(\nabla_{\\!\\!y}{\mathbb{E}\big{[}\xi_{k}^{(1)}(y,\cdot)\big{]}}+2i\pi\theta^{\ast}{\mathbb{E}\Big{[}\xi_{k}^{(1)}(y,\cdot)\Big{]}}}\\\\[11.0pt]
\hskip 113.81102pt\overline{+\,2i\pi\theta^{(1)}\xi_{k,{\rm
per}}(\theta^{\ast})-{\mathbb{E}\Big{[}[\nabla_{\\!\\!y}Z](y,\cdot)\Big{]}}\nabla_{\\!\\!y}\xi_{k,{\rm
per}}(\theta^{\ast})})\,dy\\\\[11.0pt]
\displaystyle+\int_{[0,1)^{n}}\\!\\!\\!A_{\rm
per}(y){\left(e_{\ell}\,{\mathbb{E}\Big{[}\psi^{(1)}(y,\cdot)\Big{]}}\right)}\cdot\overline{\left(\nabla_{\\!\\!y}\xi_{k,{\rm
per}}(\theta^{\ast})+2i\pi\theta^{\ast}\xi_{k,{\rm
per}}(\theta^{\ast})\right)}\,dy\\\\[11.0pt]
\displaystyle-\int_{[0,1)^{n}}\\!\\!\\!A_{\rm
per}(y){({\left(\nabla_{\\!\\!y}\psi_{\rm
per}(\theta^{\ast})+2i\pi\theta^{\ast}\psi_{\rm
per}(\theta^{\ast})\right)})}\cdot\overline{{(e_{\ell}\,\xi_{k,{\rm
per}}(\theta^{\ast}))}}\,{\mathbb{E}\big{[}{\rm
div}_{\\!y}Z(y,\cdot)\big{]}}\,dy\\\\[11.0pt]
\displaystyle-\int_{[0,1)^{n}}\\!\\!\\!A_{\rm
per}(y){\Big{(}{\left(\nabla_{\\!\\!y}\psi_{\rm
per}(\theta^{\ast})+2i\pi\theta^{\ast}\psi_{\rm
per}(\theta^{\ast})\right)}\Big{)}}\cdot\overline{{\Big{(}e_{\ell}\,{\mathbb{E}\Big{[}\xi_{k}^{(1)}(y,\cdot)\Big{]}}\Big{)}}}\,dy\end{array}$
$\begin{array}[]{l}\displaystyle\hskip 21.33955pt-\int_{[0,1)^{n}}A_{\rm
per}(y){\left(\nabla_{\\!\\!y}{\mathbb{E}\Big{[}\psi^{(1)}(y,\cdot)\Big{]}}+2i\pi\theta^{\ast}{\mathbb{E}\Big{[}\psi^{(1)}(y,\cdot)\Big{]}}\right.}\\\\[10.0pt]
\hskip 56.9055pt{{\left.+\,2i\pi\theta^{(1)}\psi_{\rm
per}(\theta^{\ast})-{\mathbb{E}\Big{[}[\nabla_{\\!\\!y}Z](y,\cdot)\Big{]}}\nabla_{\\!\\!y}\psi_{\rm
per}(\theta^{\ast})\right)}\cdot\overline{{(e_{\ell}\,\xi_{k,{\rm
per}}(\theta^{\ast}))}}\,dy\bigg{]}}\\\\[11.0pt]
\displaystyle-{\bigg{[}\int_{[0,1)^{n}}{|\psi_{\rm
per}(\theta^{\ast})|}^{2}{\mathbb{E}\Big{[}{\rm
div}_{\\!y}Z(y,\cdot)\Big{]}}\,dy+\int_{[0,1)^{n}}\psi_{\rm
per}(\theta^{\ast})\,\overline{\mathbb{E}\Big{[}\psi^{(1)}(y,\cdot)\Big{]}}}\,dy\\\\[11.0pt]
\displaystyle\hskip
21.33955pt{+\int_{[0,1)^{n}}{\mathbb{E}\Big{[}\psi^{(1)}(y,\cdot)\Big{]}}\,\overline{\psi_{\rm
per}(\theta^{\ast})}\,dy\bigg{]}}\cdot{\bigg{[}\int_{[0,1)^{n}}A_{\rm
per}(y){(e_{\ell}\,\psi_{\rm
per}(\theta^{\ast}))}\cdot{(e_{k}\,\overline{\psi_{\rm
per}(\theta^{\ast})})}\,dy}\\\\[11.0pt] \displaystyle\hskip
21.33955pt+\int_{[0,1)^{n}}A_{\rm per}(y){(e_{\ell}\,\psi_{\rm
per}(\theta^{\ast}))}\cdot\overline{\left(\nabla_{\\!\\!y}\xi_{k,{\rm
per}}(\theta^{\ast})+2i\pi\theta^{\ast}\xi_{k,{\rm
per}}(\theta^{\ast})\right)}\,dy\\\\[11.0pt] \displaystyle\hskip
21.33955pt-\,{{\int_{[0,1)^{n}}A_{\rm
per}(y){\left[{\left(\nabla_{\\!\\!y}\psi_{\rm
per}(\theta^{\ast})+2i\pi\theta^{\ast}\psi_{\rm
per}(\theta^{\ast})\right)}\right]}\cdot\overline{{(e_{\ell}\,\xi_{k,{\rm
per}}(\theta^{\ast}))}}\,dy\bigg{]}}}.\end{array}$
###### Proof.
1\. Taking into account ${\mathcal{V}}$ as in (5.123), we get from (5.130),
for ${\eta\in\mathcal{V}}$, that the homogenized matrix is given by
${A_{\eta}^{\ast}=2^{-1}(B_{\eta}+B_{\eta}^{t})}$. Thus, in order to describe
the terms of the expansion of ${A_{\eta}^{\ast}}$, we only need to determine
the terms in the expansion of ${B_{\eta}}$.
2\. Using the equations (5.110) and (5.134), the map ${\eta\mapsto
c_{\eta}\in(0,+\infty)}$ has an expansion about ${\eta=0}$. Remembering that
${\int_{[0,1)^{n}}{|\psi_{\rm per}(\theta^{\ast})|}^{2}dy=1}$, we have
$\begin{split}c_{\eta}^{-1}=&1-\eta{\left[\int_{\Omega}\int_{[0,1)^{n}}{|\psi_{\rm
per}(\theta^{\ast})|}^{2}{\rm
div}_{\\!y}Z(y,\omega)\,dy\,d\mathbb{P}\right.}\\\
&{\left.+\int_{\Omega}\int_{{[0,1)^{n}}}\psi_{\rm
per}(\theta^{\ast})\overline{\psi^{(1)}}\,dy\,d\mathbb{P}+\int_{\Omega}\int_{[0,1)^{n}}\psi^{(1)}\overline{\psi_{\rm
per}(\theta^{\ast})}\,dy\,d\mathbb{P}\right]}+\;\mathrm{O}(\eta^{2}),\end{split}$
in ${\mathbb{C}}$ as ${\eta\to 0}$. Thus, using the expansions (5.110),
(5.132), (5.134) and (5.135) in the formula (5.130), the computation of the
resulting term of order ${\eta^{0}}$ of ${B_{\eta}}$ gives us the desired
expression for $(B^{(0)})_{k\ell}$. The same reasoning, with a few more
computations that we leave as an exercise to the reader, allows us to
obtain the expression for $(B^{(1)})_{k\ell}$. ∎
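The expansion of ${c_{\eta}^{-1}}$ used in step 2 is simply the first-order inversion of an analytic function with ${c_{0}=1}$. As a quick symbolic sanity check (our illustration; the symbols `c1`, `c2` stand for abstract first- and second-order coefficients of ${c_{\eta}}$, not for the specific integrals in the text), one can verify the formula with sympy:

```python
import sympy as sp

eta, c1, c2 = sp.symbols('eta c1 c2')
# c_eta = 1 + eta*c1 + eta^2*c2 + ..., with c_0 = 1 because
# psi_per(theta*) is L^2-normalized on the unit cell
c_eta = 1 + eta * c1 + eta**2 * c2
# reciprocal expanded about eta = 0: expected 1 - eta*c1 + O(eta^2)
inv_first_order = sp.series(1 / c_eta, eta, 0, 2).removeO()
```

This confirms that, to first order, only the order-$\eta$ coefficient of $c_{\eta}$ enters, with a flipped sign, exactly as in the bracketed expression above.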
###### Remark 5.10.
We next record the observation that the computation of the coefficients of
${A_{\rm per}^{\ast}}$ is performed by solving the equations (5.138) and
(5.141), which are equations with periodic boundary conditions. In order to
compute the coefficients of ${A^{(1)}}$, we need to know the functions
${\psi^{(1)}}$ and ${\xi_{k}^{(1)}}$, ${k\in\\{1,\ldots,n\\}}$, which are a
priori stochastic in nature (see the equations (5.144) and (5.148),
respectively). But in Theorem 5.9 we have seen that we only need their
expectations, ${\mathbb{E}\Big{[}\psi^{(1)}(y,\cdot)\Big{]}}$ and
${\mathbb{E}\Big{[}\xi_{k}^{(1)}(y,\cdot)\Big{]}}$, ${k\in\\{1,\ldots,n\\}}$,
which are ${[0,1)^{n}}$-periodic functions and, respectively, solutions of the
following equations:
$\displaystyle\left\\{\begin{array}[]{l}{\Big{(}L_{\rm
per}(\theta^{\ast})-\lambda_{\rm
per}(\theta^{\ast})\Big{)}}\,{\mathbb{E}\Big{[}\psi^{(1)}(y,\cdot)\Big{]}}=\mathcal{Y}_{\rm
per}{\big{[}\psi_{\rm
per}(\theta^{\ast})\big{]}}\;\,\text{in}\;\,[0,1)^{n},\\\\[6.0pt] \hskip
56.9055pt{\mathbb{E}\Big{[}\psi^{(1)}(y,\cdot)\Big{]}}\;\text{is
$[0,1)^{n}$-periodic},\end{array}\right.$
$\displaystyle\left\\{\begin{array}[]{l}{\left(L_{\rm
per}(\theta^{\ast})-\lambda_{\rm
per}(\theta^{\ast})\right)}{\mathbb{E}\Big{[}\xi_{k}^{(1)}(y,\cdot)\Big{]}}=\mathcal{X}\Big{[}{\mathbb{E}\Big{[}\psi^{(1)}(y,\cdot)\Big{]}}\Big{]}\\\\[6.0pt]
\hskip 49.79231pt+\,\mathcal{Y}_{\rm per}{\big{[}\xi_{k,{\rm
per}}(\theta^{\ast})\big{]}}+\mathcal{Z}_{k,{\rm per}}{\big{[}\psi_{\rm
per}(\theta^{\ast})\big{]}}\;\,\text{in}\;\,[0,1)^{n},\\\\[6.0pt] \hskip
56.9055pt{\mathbb{E}\Big{[}\xi_{k}^{(1)}(y,\cdot)\Big{]}}\;\text{is
$[0,1)^{n}$-periodic},\end{array}\right.$
where
$\displaystyle\mathcal{Y}_{\rm per}{\big{[}f\big{]}}$
$\displaystyle\\!\\!:=\\!\\!$ $\displaystyle{\left({\rm
div}_{\\!y}+2i\pi\theta^{\ast}\right)}{\Big{\\{}A_{\rm
per}(y){\big{(}-\mathbb{E}\Big{[}[\nabla_{\\!\\!y}Z](y,\cdot)\Big{]}\nabla_{\\!\\!y}f+2i\pi\theta^{(1)}f\big{)}}\Big{\\}}}$
$\displaystyle-\,{\rm
div}_{\\!y}{\Big{\\{}\mathbb{E}\Big{[}[\nabla_{\\!\\!y}Z](y,\cdot)\Big{]}^{t}A_{\rm
per}(y){(\nabla_{\\!\\!y}+2i\pi\theta^{\ast})}f\Big{\\}}}$
$\displaystyle+{\left(2i\pi\theta^{(1)}\right)}{\big{\\{}A_{\rm
per}(y){\left(\nabla_{\\!\\!y}+2i\pi\theta^{\ast}\right)}f\big{\\}}}$
$\displaystyle+\,{\left({\rm
div}_{\\!y}+2i\pi\theta^{\ast}\right)}{\Big{\\{}{\left[\mathbb{E}\Big{[}{\rm
div}_{\\!y}Z(y,\cdot)\Big{]}A_{\rm
per}(y)\right]}{(\nabla_{\\!\\!y}+2i\pi\theta^{\ast})}f\Big{\\}}}+\lambda^{(1)}f$
$\displaystyle+\,{\Big{\\{}\mathbb{E}\Big{[}{\rm
div}_{\\!y}Z(y,\cdot)\Big{]}\,{\left[\lambda_{\rm per}(\theta^{\ast})-V_{\rm
per}(y)\right]}\Big{\\}}}f,$ $\displaystyle\mathcal{Z}_{k,{\rm
per}}{\big{[}f\big{]}}$ $\displaystyle\\!\\!:=\\!\\!$
$\displaystyle{\left({\rm
div}_{\\!y}+2i\pi\theta^{\ast}\right)}{\Big{\\{}{\left[\mathbb{E}\Big{[}{\rm
div}_{\\!y}Z(y,\cdot)\Big{]}A_{\rm per}(y)\right]}{(e_{k}f)}\Big{\\}}}$
$\displaystyle-\,{\rm
div}_{\\!y}{\Big{\\{}\mathbb{E}\Big{[}[\nabla_{\\!\\!y}Z](y,\cdot)\Big{]}^{t}A_{\rm
per}(y){(e_{k}f)}\Big{\\}}}+{\left(2i\pi\theta^{(1)}\right)}{\left\\{A_{\rm
per}(y){(e_{k}f)}\right\\}}$
$\displaystyle+\,{\left(e_{k}\right)}{\Big{\\{}{\Big{[}\mathbb{E}\Big{[}{\rm
div}_{\\!y}Z(y,\cdot)\Big{]}A_{\rm
per}(y)\Big{]}}{\left(\nabla_{\\!\\!y}+2i\pi\theta^{\ast}\right)}f\Big{\\}}}$
$\displaystyle-\,{\left(e_{k}\right)}{\left\\{A_{\rm
per}(y)\mathbb{E}\Big{[}[\nabla_{\\!\\!y}Z](y,\cdot)\Big{]}\nabla_{\\!\\!y}f\right\\}}+{\left(e_{k}\right)}{\big{\\{}A_{\rm
per}(y){(2i\pi\theta^{(1)}f)}\big{\\}}},$
for ${f\in H^{1}_{\rm per}([0,1)^{n})}$.
Summing up, the determination of the homogenized coefficients for (1.1) is
stochastic in nature. However, when we consider the interesting context of
materials which deviate only slightly from perfect ones (modeled by periodic
functions), this problem, in the specific case (5.1), reduces, at the first
two orders in $\eta$, to solving the two simpler periodic problems above,
both of which are of the same nature. Importantly, note that
$Z$ in (5.1) enters only through $\mathbb{E}\Big{[}{\rm
div}_{\\!y}Z(y,\cdot)\Big{]}$ and
$\mathbb{E}\Big{[}[\nabla_{\\!\\!y}Z](y,\cdot)\Big{]}$.
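Numerically, this remark means that sample-wise knowledge of ${Z}$ is never needed: Monte Carlo estimates of the two expectations suffice. A minimal sketch with a hypothetical deformation field (the profile `div_Z` below is a toy model invented for illustration, not one from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def div_Z(y, omega):
    # hypothetical stationary field: a random amplitude omega
    # modulating a fixed periodic profile in y
    return omega * np.sin(2 * np.pi * y)

y_grid = np.linspace(0.0, 1.0, 128, endpoint=False)
omegas = rng.normal(loc=0.0, scale=0.1, size=10_000)  # draws of omega
# Monte Carlo approximation of E[div_y Z(y, .)] on the grid; for this
# toy model the exact expectation vanishes since E[omega] = 0
E_divZ = np.mean([div_Z(y_grid, w) for w in omegas], axis=0)
```

The resulting periodic function (and, analogously, the estimate of $\mathbb{E}[\nabla_{y}Z]$) is then fed into the two periodic cell problems above.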
In the theorem below, we assume that the homogenized matrix of the periodic
case satisfies the uniform coercivity condition
$A_{\rm per}^{\ast}\xi\cdot\xi\geq\Lambda|\xi|^{2},$
for some $\Lambda>0$ and for all $\xi\in\mathbb{R}^{n}$, which has
experimental evidence for metals and semiconductors. Therefore, due to Theorem
5.9, the homogenized matrix of the perturbed case ${A_{\eta}^{\ast}}$ enjoys a
similar property for $\eta\sim 0$.
###### Theorem 5.11.
Let ${v_{\eta}}$ be the solution of homogenized equation (5.127). Then
$v_{\eta}\Big{(}t,\sqrt{A_{\eta}^{\ast}}\,x\Big{)}=v_{\rm
per}\Big{(}t,\sqrt{A_{\rm
per}^{\ast}}\,x\Big{)}+\eta\,v^{(1)}\Big{(}t,\sqrt{A_{\rm
per}^{\ast}}\,x\Big{)}+\mathrm{O}(\eta^{2}),$
weakly in ${L^{2}(\mathbb{R}^{n}_{T})}$ as ${\eta\to 0}$, that is,
$\displaystyle\int_{\mathbb{R}^{n}_{T}}{\Bigg{(}v_{\eta}\Big{(}t,\sqrt{A_{\eta}^{\ast}}\,x\Big{)}-v_{\rm
per}\Big{(}t,\sqrt{A_{\rm
per}^{\ast}}\,x\Big{)}-\eta\,v^{(1)}\Big{(}t,\sqrt{A_{\rm
per}^{\ast}}\,x\Big{)}\Bigg{)}}\,h(t,x)\,dx\,dt$
$\displaystyle\qquad\qquad\qquad=\mathrm{O}(\eta^{2}),$
for each ${h\in L^{2}(\mathbb{R}^{n}_{T})}$, where ${v_{\rm per}}$ is the
solution of the periodic homogenized problem
$\left\\{\begin{array}[]{c}i\displaystyle\frac{\partial v_{\rm per}}{\partial
t}-{\rm div}{\left(A_{\rm per}^{\ast}\nabla v_{\rm per}\right)}+U_{\\!\rm
per}^{\ast}v_{\rm per}=0,\;\,\text{in}\;\,\mathbb{R}^{n+1}_{T},\\\\[7.5pt]
v_{\rm per}(0,x)=v_{0}(x)\,,\;\,x\in\mathbb{R}^{n},\end{array}\right.$ (5.153)
and ${v^{(1)}}$ is the solution of
$\left\\{\begin{array}[]{c}i\displaystyle\frac{\partial v^{(1)}}{\partial
t}-{\rm div}{\left(A_{\rm per}^{\ast}\nabla v^{(1)}\right)}+U_{\\!\rm
per}^{\ast}v^{(1)}={\rm div}{\left(A_{\rm per}^{\ast}\nabla v_{\rm
per}\right)}-U^{(1)}v_{\rm
per},\;\,\text{in}\;\,\mathbb{R}^{n+1}_{T},\\\\[7.5pt]
v^{(1)}(0,x)=v^{1}_{0}(x)\,,\;\,x\in\mathbb{R}^{n},\end{array}\right.$ (5.154)
where ${U^{(1)}}$ is the coefficient of the term of order ${\eta}$ in the
expansion of ${U_{\eta}^{\ast}}$ and $v_{0}^{1}\in
C_{c}^{\infty}(\mathbb{R}^{n})$ is given by the limit
$v_{0}^{1}\Big{(}\sqrt{A_{\rm per}^{\ast}}\,x\Big{)}:=\lim_{\eta\to
0}\frac{v_{0}\Big{(}\sqrt{A_{\eta}^{\ast}}\,x\Big{)}-v_{0}\Big{(}\sqrt{A_{\rm
per}^{\ast}}\,x\Big{)}}{\eta}.$
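The limit defining ${v_{0}^{1}}$ is a directional derivative in ${\eta}$ and can be probed by a difference quotient. Below is a one-dimensional toy check, our own illustration: the profile `v0` and the matrix family `A_of_eta` are hypothetical examples, not data from the paper.

```python
import numpy as np

def v0_correction(v0, A_of_eta, x, eta=1e-6):
    """Difference quotient behind the definition of v_0^1:
    (v0(sqrt(A_eta) x) - v0(sqrt(A_0) x)) / eta, in dimension n = 1,
    where the square root of the 1x1 matrix is a scalar square root."""
    a0 = np.sqrt(A_of_eta(0.0))
    ae = np.sqrt(A_of_eta(eta))
    return (v0(ae * x) - v0(a0 * x)) / eta

# toy family A_eta = 1 + eta and Gaussian profile v0(x) = exp(-x^2):
# the exact limit at x = 1 is d/d(eta) exp(-(1+eta)) at eta = 0
approx = v0_correction(lambda x: np.exp(-x**2), lambda e: 1.0 + e, x=1.0)
```

For this toy family the exact value at $x=1$ is $-e^{-1}$, and the quotient with $\eta=10^{-6}$ reproduces it to several digits.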
###### Proof.
1\. Taking into account the set ${\mathcal{V}}$ as in (5.123), we have, for
${\eta\in\mathcal{V}}$, from the conservation of energy of the homogenized
Schrödinger equation (5.127), that the solution
${v_{\eta}:\mathbb{R}^{n}_{T}\to\mathbb{C}}$ satisfies
${\|v_{\eta}\|}_{L^{2}(\mathbb{R}^{n+1}_{T})}=\sqrt{T}\,{\|v_{0}\|}_{L^{2}(\mathbb{R}^{n})},\;\,\forall\eta\in\mathcal{V}.$
Thus, after possible extraction of a subsequence, we have the existence of a
function ${v^{(0)}\in L^{2}(\mathbb{R}^{n+1}_{T})}$ such that
$v_{\eta}\;\xrightharpoonup[\eta\to
0]{}\;v^{(0)}\;\text{in}\;L^{2}(\mathbb{R}^{n+1}_{T}).$ (5.155)
By the variational formulation of the equation (5.127), we find
$\begin{array}[]{l}0=\displaystyle
i\int_{\mathbb{R}^{n}}v_{0}(x)\,\overline{\varphi}(0,x)\,dx-i\int_{\mathbb{R}^{n}_{T}}v_{\eta}(t,x)\,\frac{\partial\overline{\varphi}}{\partial
t}(t,x)\,dx\,dt\\\\[15.0pt]
\displaystyle\qquad\qquad+\int_{\mathbb{R}^{n}_{T}}\Bigg{\\{}-{\left\langle
A_{\eta}^{\ast}v_{\eta}(t,x),D^{2}{\varphi}(t,x)\right\rangle}+U_{\eta}^{\ast}v_{\eta}(t,x)\,\overline{\varphi}(t,x)\Bigg{\\}}\,dx\,dt,\end{array}$
(5.156)
for all ${\varphi\in C_{\rm c}^{1}((-\infty,T))\otimes C_{\rm
c}^{2}(\mathbb{R}^{n})}$. Recall that ${{\left\langle P,Q\right\rangle}:={\rm
tr}(P\overline{Q}^{t})}$, for ${P,Q}$ in ${\mathbb{C}^{n\times n}}$. Then,
using (5.155) and Theorem 5.7, letting ${\eta\to 0}$ and invoking the
uniqueness property of the equation (5.153), we conclude that ${v^{(0)}=v_{\rm
per}}$.
2\. Now, using that ${U_{\eta}^{\ast}=U_{\rm
per}^{\ast}+\eta\,U^{(1)}+\mathrm{O}(\eta^{2})}$ as ${\eta\to 0}$, defining
$V_{\eta}(t,x):=v_{\eta}\Big{(}t,\sqrt{A_{\eta}^{\ast}}\,x\Big{)}$ and using
the homogenized equation (5.127), we arrive at
$\left\\{\begin{array}[]{c}i\displaystyle\frac{\partial V_{\eta}}{\partial
t}-\Delta V_{\eta}+U_{\rm
per}^{\ast}V_{\eta}=-\Big{(}\eta\,U^{(1)}+\mathrm{O}(\eta^{2})\Big{)}\,V_{\eta}\,,\;\,\text{in}\;\,\mathbb{R}^{n+1}_{T},\\\\[7.5pt]
V_{\eta}(0,x)=v_{0}\Big{(}\sqrt{A_{\eta}^{\ast}}\,x\Big{)}\,,\;\,x\in\mathbb{R}^{n}.\end{array}\right.$
(5.157)
Proceeding similarly with respect to $V(t,x):=v_{\rm per}\Big{(}t,\sqrt{A_{\rm
per}^{\ast}}\,x\Big{)}$, we obtain
$\left\\{\begin{array}[]{c}i\displaystyle\frac{\partial V}{\partial t}-\Delta
V+U_{\rm
per}^{\ast}V=0\,,\;\,\text{in}\;\,\mathbb{R}^{n+1}_{T},\\\\[7.5pt]
V(0,x)=v_{0}\Big{(}\sqrt{A_{\rm
per}^{\ast}}\,x\Big{)}\,,\;\,x\in\mathbb{R}^{n}.\end{array}\right.$ (5.158)
Now, the difference between the equations (5.157) and (5.158) yields
$\left\\{\begin{array}[]{c}i\displaystyle\frac{\partial(V_{\eta}-V)}{\partial
t}-\Delta(V_{\eta}-V)+U_{\rm
per}^{\ast}(V_{\eta}-V)=-\Big{(}\eta\,U^{(1)}+\mathrm{O}(\eta^{2})\Big{)}\,V_{\eta}\,,\;\,\text{in}\;\,\mathbb{R}^{n+1}_{T},\\\\[7.5pt]
(V_{\eta}-V)(0,x)=v_{0}\Big{(}\sqrt{A_{\eta}^{\ast}}\,x\Big{)}-v_{0}\Big{(}\sqrt{A_{\rm
per}^{\ast}}\,x\Big{)}\,,\;\,x\in\mathbb{R}^{n}.\end{array}\right.$ (5.159)
Hence, multiplying the last equation by $\overline{V_{\eta}-V}$, integrating
over $\mathbb{R}^{n}$ and taking the imaginary part yields
$\frac{d}{dt}\|V_{\eta}-V{\|}_{L^{2}(\mathbb{R}^{n})}\leq\mathrm{O}(\eta),$
for $\eta\in\mathcal{V}$. Defining
$W_{\eta}(t,x):=\frac{V_{\eta}(t,x)-V(t,x)}{\eta},\;\,\eta\in\mathcal{V},$
the last inequality, together with the fact that the initial difference in
(5.159) is of order $\eta$, gives
$\sup_{\eta\in\mathcal{V}}\|W_{\eta}{\|}_{L^{2}(\mathbb{R}^{n+1}_{T})}<+\infty.$
Thus, taking a subsequence if necessary, there exists ${v^{(1)}\in
L^{2}(\mathbb{R}^{n+1}_{T})}$ such that
$W_{\eta}(t,x)\;\xrightharpoonup[\eta\to 0]{}\;v^{(1)}\Big{(}t,\sqrt{A_{\rm
per}^{\ast}}\,x\Big{)},\;\text{in}\;L^{2}(\mathbb{R}^{n+1}_{T}).$ (5.160)
Hence, multiplying the equation (5.159) by $\eta^{-1}$, letting $\eta\to 0$
and performing a change of variables, we reach the equation (5.154), finishing
the proof of the theorem.
∎
## Acknowledgements
Conflict of Interest: Author Wladimir Neves has received research grants from
CNPq through the grant 308064/2019-4. Author Jean Silva has received research
grants from CNPq through the Grant 302331/2017-4.
## References
* [1] Allaire G., Homogenization and two-scale convergence, SIAM J. Math. Anal. 23(6), 1482-1518, 1992.
* [2] Allaire G., Periodic homogenization and effective mass theorems for the Schrödinger Equation. In: Abdallah N. B., Frosali G. (eds) Quantum transport. Lecture Notes in Mathematics, vol 1946. Springer, Berlin, Heidelberg, 2008.
* [3] Allaire G., Vanninathan M., Homogenization of the Schrödinger equation with a time oscillating potential, Discrete Contin. Dyn. Syst.-Ser. B 6(1), 1-16, 2006.
* [4] Allaire G., Piatnitski A., Homogenization of the Schrödinger Equation and Effective Mass Theorems, Commun. Math. Phys. 258(1), 1-22, 2005.
* [5] Ambrosio L., Frid H., Multiscale Young Measure in almost periodic homogenization and applications, Arch. Ration. Mech. Anal. 192(1), 37-85, 2009.
* [6] Andrade T., Neves W., Silva J., Homogenization of Liouville Equations beyond stationary ergodic setting, Arch. Ration. Mech. Anal. 237(2), 999-1040, 2020.
* [7] Barletti L., Abdallah N. B., Quantum transport in crystals: effective mass theorem and K$\cdot$P Hamiltonians, Comm. Math. Phys. 307(3), 567-607, 2011.
* [8] Bensoussan A., Lions J.-L., Papanicolaou G., Asymptotic analysis for periodic structures, Amsterdam: North-Holland Pub. Co., 1978.
* [9] Blanc X., Le Bris C., Lions P.-L., Une variante de la théorie de l’homogénéisation stochastique des opérateurs elliptiques, C. R. Math. Acad. Sci. Paris 343(11-12), 717-724, 2006.
* [10] Blanc X., Le Bris C., Lions P.-L., Stochastic homogenization and random lattices, J. Math. Pures Appl. 88(1), 34-63, 2007.
* [11] Bourgeat A., Mikelić A., Wright S., Stochastic two-scale convergence in the mean and applications, J. Reine Angew. Math. 456, 19-51, 1994.
* [12] Cancès E., Le Bris C., Mathematical modeling of point defects in materials science, Math. Models Methods Appl. Sci. 23(10), 1795-1859, 2013.
* [13] Cazenave T., Haraux A., An introduction to semilinear evolution equations, Oxford Lecture Series in Mathematics and Its Applications, 13. Clarendon, Oxford University Press, New York, 1998.
* [14] Chabu V., Fermanian-Kammerer C., Macià F., Wigner measures and effective mass theorems, Ann. H. Lebesgue 3, 1049-1089, 2020.
* [15] De Giorgi E., Sulla differenziabilità e l’analiticità delle estremali degli integrali multipli regolari, Mem. Accad. Sci. Torino Cl. Sci. Fis. Mat. Nat. (3) 3, 25-43, 1957.
* [16] Diaz J. C., Gayte I., The two-scale convergence method Applied to generalized Besicovitch spaces, R. Soc. Lond. Proc. Ser. A Math. Phys. Eng. Sci. 458(2028), 2925-2946, 2002.
* [17] Dummit D., Foote R., Abstract algebra, Third edition, John Wiley and Sons, New York, 2004.
* [18] Evans L. C., Partial differential equations, Second edition. Graduate Studies in Mathematics, vol. 19. AMS, Providence, 2010.
* [19] Folland G. B., A course in abstract harmonic analysis, Studies in Advanced Mathematics, CRC Press, Boca Raton, FL, 1995.
* [20] Frid H., Silva J., Versieux H., Homogenization of a generalized Stefan problem in the context of ergodic algebra, J. Funct. Anal. 268(11), 3232-3277, 2015.
* [21] Górka P., Reyes E. G., Sobolev spaces on locally compact Abelian groups and the Bosonic string equation, J. Aust. Math. Soc. 98(1), 39-53, 2015.
* [22] Gunning R. C., Rossi H., Analytic functions of several complex variables, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1965.
* [23] Hewitt E., Ross K. A., Abstract harmonic analysis. Vol. I, Springer-Verlag, Berlin, 1963.
* [24] Jikov V. V., Kozlov S. M., Oleinik O. A., Homogenization of differential operators and integral functionals, Springer-Verlag, Berlin, 1994.
* [25] Kato T., Perturbation theory for linear operators, Springer-Verlag, Berlin, 1995.
* [26] Krantz S. G., Parks H. R., The implicit function theorem. History, theory and applications, Birkhäuser, Boston, 2002.
* [27] Krengel U., Ergodic theorems, Gruyter Studies in Mathematics, vol. 6, de Gruyter, Berlin, 1985.
* [28] Myers H. P., Introductory solid state physics, London: Taylor & Francis, 1990.
* [29] Nguetseng G., A General convergence result for a functional related to the theory of homogenization, SIAM J. Math. Anal. 20(3), 608-623, 1989.
* [30] Pankov A. A., Bounded and almost periodic solutions of nonlinear operator differential equations, Dordrecht, Springer, 2013.
* [31] Poupaud F., Ringhofer C., Semi-classical limits in a crystal with exterior potentials and effective mass theorems, Comm. Partial Differential Equations 21(11-12), 1897-1918, 1996.
* [32] Reed M., Simon B., Methods of modern mathematical physics. Vol. I, Academic Press, Inc., New York, 1980.
* [33] Rellich F., Perturbation theory of eigenvalue problems, Gordon & Breach, New York, 1969.
* [34] RodrÃguez-Vega J. J., Zúñiga-Galindo W. A., Elliptic pseudodifferential equations and Sobolev spaces over $p$-adic fields, Pacific J. Math. 246(2), 407-420, 2010.
* [35] Rudin W., Real and complex analysis, Third edition. McGraw-Hill Book Co., New York, 1987.
* [36] Stampacchia G., Le problème de Dirichlet pour les équations elliptiques du second ordre à coefficients discontinus, Ann. Inst. Fourier (Grenoble) 15, 189-258, 1965.
* [37] Wilcox C. H., Theory of Bloch waves, J. Analyse Math. 33, 146-167, 1978.
* [38] Zhikov V. V., Pyatnitskii A. L., Homogenization of random singular structures and random measures, Izv. Math. 70(1), 19-67, 2006.
|
# A diffuse interface box method for elliptic problems
G. Negrinia, N. Parolinia and M. Verania
###### Abstract
We introduce a diffuse interface box method (DIBM) for the numerical
approximation on complex geometries of elliptic problems with Dirichlet
boundary conditions. We derive a priori $H^{1}$ and $L^{2}$ error estimates
highlighting the rôle of the mesh discretization parameter and of the diffuse
interface width. Finally, we present a numerical result assessing the
theoretical findings.
Keywords: box method, diffuse interface, complex geometries
a MOX, Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da
Vinci 32, I-20133 Milano, Italy
## 1 Introduction
The finite volume method (FVM) is a popular numerical strategy for solving
partial differential equations modelling real life problems. One crucial and
attractive property of FVM is that, by construction, many physical
conservation laws possessed in a given application are naturally preserved.
Besides, similar to the finite element method, the FVM can be used to deal
with domains with complex geometries. In this respect, one crucial issue is
the construction of the computational grid. To face this problem, one can
basically resort to two different types of approaches. In the first approach,
a mesh is constructed on a sufficiently accurate approximation of the exact
physical domain (see, e.g., isoparametric finite elements [9], isogeometric
analysis [10], or Arbitrary Lagrangian-Eulerian formulation [11, 16, 17]),
while in the second approach (see, e.g., Immersed Boundary methods [19], the
Penalty Methods [2], the Fictitious Domain/Embedding Domain Methods [6, 5, 4],
the cut element method [7, 8] and the Diffuse Interface Method [18]) one
embeds the physical domain into a simpler computational mesh whose elements
can intersect the boundary of the given domain. Clearly, the mesh generation
process is extremely simplified in the second approach, while the imposition
of boundary conditions requires extra work. Among the methods sharing the
second approach, in this paper we focus on the diffuse interface approach
developed in [20]. In parallel, we consider, for its simplicity, the piecewise
linear FVM, or box method, that has been the object of an intense study in the
literature (see, e.g., the pioneering works [3, 15] and the more recent [12,
21]).
The goal of this paper is to propose and analyse a diffuse interface variant
of the box method, in the sequel named DIBM (diffuse interface box method),
obtaining a priori $H^{1}$ and $L^{2}$ error estimates depending both on the
discretization parameter $h$ (dictating the accuracy of the approximation of
the PDE) and the width $\epsilon$ of the diffuse interface (dictating the
accuracy of the domain approximation). Up to our knowledge, this is new in the
literature. Besides, the study of DIBM for elliptic problems, despite its
simplicity, opens the door to the study of more complicated differential
problems and to the analysis of diffuse interface variants of more
sophisticated finite volume schemes.
The outline of the paper is as follows. In section 2 we briefly recall the box
method, while in section 3, we present the diffuse interface box method (DIBM)
along with a priori error estimates. Finally in section 4 we will provide a
numerical test to validate the theoretical results. The numerical results have
been obtained using the open-source library OpenFOAM®.
## 2 The box method
In this section, we recall (see [3, 15, 21]) the box method for the solution
of an elliptic problem. Let $D\subset\mathbb{R}^{2}$ be a polygonal bounded
domain (in the following section this hypothesis will be relaxed). We consider
the following problem:
$\begin{cases}-\Delta u=f,\quad&\mathrm{in}\leavevmode\nobreak\ D\\\
u=g,\quad&\mathrm{on}\leavevmode\nobreak\ \Gamma=\partial D,\end{cases}$ (2.1)
where $f\in L^{2}(D)$ and $g\in H^{1/2}(\Gamma)$.
Let $\mathcal{T}_{h}=\\{t_{i}\\}$ be a conforming and shape regular
triangulation of $D$. We denote by $h_{t}$ the diameter of
$t\in\mathcal{T}_{h}$ and we introduce the set
$\textsf{V}_{h}=\\{\textsf{v}_{i}\\}$ of vertices of $\mathcal{T}_{h}$ with
$\textsf{V}_{h}=\textsf{V}_{h}^{\partial}\cup\textsf{V}_{h}^{o}$, the set
$\textsf{V}_{h}^{o}$ containing the interior vertices of $\mathcal{T}_{h}$. We
denote by $w_{\textsf{v}}$ the set of triangles sharing the vertex v. On
$\mathcal{T}_{h}$ we define the space of linear finite elements
$\mathcal{V}_{h,g_{h}}=\left\\{v_{h}\in
C^{0}(\bar{D}):v_{h}|_{t}\in\mathbb{P}^{1}(t)\leavevmode\nobreak\ \forall
t\in\mathcal{T}_{h}\leavevmode\nobreak\ \mathrm{and}\leavevmode\nobreak\
v_{h}=g_{h}\leavevmode\nobreak\ \mathrm{on}\leavevmode\nobreak\ \partial
D\right\\},$
where $g_{h}$ is a suitable piecewise linear approximation of $g$ on $\partial
D$.
Let $\mathcal{B}_{h}=\\{b_{\textsf{v}}\\}_{\textsf{v}\in{\textsf{V}}_{h}^{o}}$
be the “box mesh” (or dual mesh) associated to $\mathcal{T}_{h}$. Each box
$b_{\textsf{v}}$ is a polygon with a boundary consisting of two straight lines
in each related triangle $t\in w_{\textsf{v}}$. These lines are defined by the
mid-points of the edges and the barycentres of the triangles in
$w_{\textsf{v}}$.
On $\mathcal{B}_{h}$ we introduce the space of piecewise constant functions,
$\mathcal{W}_{h}=\left\\{w_{h}\in
L^{2}(D):w_{h}\in\mathbb{P}^{0}(b_{\textsf{v}})\leavevmode\nobreak\ \forall
b_{\textsf{v}}\in\mathcal{B}_{h}\right\\}.$
The box method for the approximation of (2.1) reads as follows: find
$u_{B,h}\in\mathcal{V}_{h,g_{h}}$ such that
$a_{\mathcal{T}_{h}}(u_{B,h},w_{h})=(f,w_{h})_{D}\quad\forall
w_{h}\in\mathcal{W}_{h},$ (2.2)
where
$a_{\mathcal{T}_{h}}(u_{B,h},w_{h})=-\sum_{\textsf{v}\in\textsf{V}_{h}^{o}}\int_{\partial b_{\textsf{v}}}\frac{\partial u_{B,h}}{\partial\boldsymbol{\textbf{n}}_{b}}w_{h}\mathrm{d}s,$ (2.3)
being $\boldsymbol{\textbf{n}}_{b}$ the outer normal to $b_{\textsf{v}}$ and
$(\cdot,\cdot)_{D}$ is the usual $L^{2}$ scalar product on $D$.
Note that there holds (see [3, 15] for the two dimensional case and [21] for
the extension to any dimension)
$-\int_{\partial b_{\textsf{v}}}\frac{\partial\phi_{\textsf{v}^{\prime}}}{\partial\boldsymbol{\textbf{n}}_{b}}\,\mathrm{d}s=\int_{D}\nabla\phi_{\textsf{v}}\cdot\nabla\phi_{\textsf{v}^{\prime}}\,\mathrm{d}x,\quad\forall\textsf{v}\in\textsf{V}_{h}^{o},\forall\textsf{v}^{\prime}\in\textsf{V}_{h},$
(2.4)
where $\phi_{\textsf{v}}$ is the usual hat basis function with support equal
to $w_{\textsf{v}}$.
The relation (2.4) is crucial to show the following perturbation results (see
[15, 21]):
$\displaystyle\left\lVert\nabla(u_{B,h}-u_{G,h})\right\rVert_{L^{2}(D)}$
$\displaystyle\leq Ch\left\lVert f\right\rVert_{L^{2}(D)},$ (2.5)
$\displaystyle\left\lVert u_{B,h}-u_{G,h}\right\rVert_{L^{2}(D)}$
$\displaystyle\leq Ch^{2}\left\lVert f\right\rVert_{L^{2}(D)},$
where $h=\max_{t\in\mathcal{T}_{h}}h_{t}$ and
$u_{G,h}\in\mathcal{V}_{h,g_{h}}$ is the linear finite element approximation
to the solution of problem (2.1).
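To make the identity behind (2.4) concrete, the following minimal sketch (ours, not part of the paper) implements the one-dimensional analogue of the box method for $-u''=f$ on $(0,1)$ with homogeneous Dirichlet data, where the flux balance over each box reproduces the linear finite element stiffness matrix; the midpoint rule is used for the box integrals of $f$.

```python
import math

def box_method_1d(f, n):
    """One-dimensional box (vertex-centred finite volume) method for
    -u'' = f on (0,1) with u(0) = u(1) = 0, uniform mesh, n interior nodes.
    Flux balance over the box (x_i - h/2, x_i + h/2), with fluxes taken from
    the piecewise linear interpolant, gives the three-point stencil, i.e.
    the same matrix as linear finite elements (the 1D analogue of (2.4))."""
    h = 1.0 / (n + 1)
    x = [(i + 1) * h for i in range(n)]
    b = [h * f(xi) for xi in x]          # midpoint rule for the box integral
    diag = [2.0 / h] * n                 # tridiagonal (2, -1)/h system
    off = -1.0 / h
    for i in range(1, n):                # Thomas algorithm: elimination
        m = off / diag[i - 1]
        diag[i] -= m * off
        b[i] -= m * b[i - 1]
    u = [0.0] * n
    u[-1] = b[-1] / diag[-1]
    for i in range(n - 2, -1, -1):       # back substitution
        u[i] = (b[i] - off * u[i + 1]) / diag[i]
    return x, u

# Exact solution u = sin(pi x) for f = pi^2 sin(pi x).
x, u = box_method_1d(lambda t: math.pi ** 2 * math.sin(math.pi * t), 63)
err = max(abs(ui - math.sin(math.pi * xi)) for xi, ui in zip(x, u))
```

Halving $h$ reduces the maximum nodal error by roughly a factor of four (second-order accuracy at the nodes).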
## 3 The box method with diffuse interface (DIBM)
Figure 1: Diffuse interface representation: $D$ is a surrogate domain of
$\Omega$; $\Gamma$ is the Dirichlet boundary and $S^{\epsilon}$ is its tubular neighbourhood.
The aim of this section is to introduce a variant of the box method for the
approximate solution of problem (2.1) in case of a general (non-polygonal)
domain $D\subset\mathbb{R}^{2}$, where in the spirit of [20] the Dirichlet
boundary condition is treated with a diffuse interface approach. To this aim
we introduce a hold-all domain $\Omega$ such that $D\subset\Omega$. In the
sequel we will work under the hypothesis $\Gamma=\partial D\in C^{1,1}$. With
a slight abuse of notation we denote by $\mathcal{T}_{h}$ a shape regular
triangulation of $\Omega$. It is worth noting that $\mathcal{T}_{h}$ is not
conforming with $D$. Following [20] we first select a tubular neighbourhood
$S^{\epsilon}$ of $\Gamma$, where $\epsilon$ denotes the width of
$S^{\epsilon}$ (see Figure 1). Then we introduce the set $S^{\epsilon}_{h}$
which contains all the triangles of $\mathcal{T}_{h}$ having non-empty
intersection with $S^{\epsilon}$. Note that the width of the discrete tubular
neighbourhood $S^{\epsilon}_{h}$ is $\delta+\epsilon$ where $\delta$ is the
maximum diameter of triangles crossed by $\partial S^{\epsilon}$.
To proceed, we assume that there exists an extension $\tilde{g}\in
H^{2}(\Omega)$ of the boundary datum $g$.
We set $D^{\epsilon}_{h}=D\backslash S^{\epsilon}_{h}$ and introduce the
function $u^{\epsilon,h}\in H^{1}(D_{h}^{\epsilon})$ such that
$u^{\epsilon,h}=\tilde{g}$ on $\partial D^{\epsilon}_{h}$, which solves the following
continuous problem:
$\int_{D^{\epsilon}_{h}}\nabla u^{\epsilon,h}\cdot\nabla
v=\int_{D^{\epsilon}_{h}}fv\quad\forall v\in H^{1}_{0}(D^{\epsilon}_{h}).$
(3.1)
The solution $u^{\epsilon,h}$ is then extended to $S^{\epsilon}_{h}$ by
setting $u^{\epsilon,h}=\tilde{g}$ in $S^{\epsilon}_{h}$.
The following results have been proved in [20, Thm 1.2]:
$\frac{1}{\epsilon+\delta}\left\lVert
u-u^{\epsilon,h}\right\rVert_{L^{2}(D)}+\frac{1}{\sqrt{\epsilon+\delta}}\left\lVert\nabla
u-\nabla u^{\epsilon,h}\right\rVert_{L^{2}(D)}\leq C\left(\left\lVert
f\right\rVert_{L^{2}(D)}+\left\lVert g\right\rVert_{H^{2}(D)}\right).$ (3.2)
Let
$\mathcal{V}_{h,\tilde{g}_{h}}^{\epsilon}=\left\\{v_{h}|_{D^{\epsilon}_{h}}:v_{h}\in\mathbb{P}^{1}(t)\forall
t\in\mathcal{T}_{h}\leavevmode\nobreak\ \mathrm{and}\leavevmode\nobreak\
v_{h}=\tilde{g}_{h}\leavevmode\nobreak\ \mathrm{on}\leavevmode\nobreak\
\partial D^{\epsilon}_{h}\right\\}$, with $\tilde{g}_{h}$ the Lagrangian
piecewise linear interpolant of $\tilde{g}$.
It has been proved (cf. [20, Thms 5.1 and 5.3]) that the linear finite element
approximation $u^{\epsilon}_{G,h}\in\mathcal{V}_{h,\tilde{g}_{h}}^{\epsilon}$
of $u^{\epsilon,h}$ satisfies the following estimates:
$\displaystyle\left\lVert\nabla(u^{\epsilon,h}-u^{\epsilon}_{G,h})\right\rVert_{L^{2}(D)}$
$\displaystyle\leq C(\sqrt{\delta}+\kappa^{\frac{2}{3}}+h)\left(\left\lVert
f\right\rVert_{L^{2}(D)}+\left\lVert\tilde{g}\right\rVert_{H^{2}(D)}\right),$
(3.3) $\displaystyle\left\lVert
u^{\epsilon,h}-u^{\epsilon}_{G,h}\right\rVert_{L^{2}(D)}$ $\displaystyle\leq
C(\delta+\kappa^{\frac{4}{3}}+h^{2})\left(\left\lVert
f\right\rVert_{L^{2}(D)}+\left\lVert\tilde{g}\right\rVert_{H^{2}(D)}\right),$
where $\kappa$ is the maximum diameter of the triangles intersecting $\partial
S^{\epsilon+h}$, and $u^{\epsilon}_{G,h}$ has been extended to $D$ by setting $u^{\epsilon}_{G,h}=\tilde{g}_{h}$ in $S^{\epsilon}_{h}$.
Let us now introduce the box method with diffuse interface (DIBM). We denote
by $u^{\epsilon}_{B,h}\in\mathcal{V}_{h,\tilde{g}_{h}}^{\epsilon}$, the
approximation obtained from applying the box method to (3.1) (cf. (2.2)). The
solution $u^{\epsilon}_{B,h}$ is then extended to $D$ by setting
$u^{\epsilon}_{B,h}=\tilde{g}_{h}$ in $S^{\epsilon}_{h}$. Then employing the
triangle inequality in combination with (3.2), (3.3) and (2.5) we get the
following estimates for DIBM:
$\displaystyle\left\lVert\nabla(u-u^{\epsilon}_{B,h})\right\rVert_{L^{2}(D)}$
$\displaystyle\lesssim\sqrt{\epsilon+\delta}+\sqrt{\delta}+\kappa^{\frac{2}{3}}+h,$
(3.4) $\displaystyle\left\lVert u-u^{\epsilon}_{B,h}\right\rVert_{L^{2}(D)}$
$\displaystyle\lesssim\epsilon+\delta+\kappa^{\frac{4}{3}}+h^{2}.$
Figure 2: Discrete diffuse interface representation on triangulation (left)
and on box mesh (right). Constrained cells are marked with red dots while the
continuous and discrete diffuse interfaces are coloured in darker and lighter red, respectively.
## 4 Numerical experiments
In this section we numerically assess the theoretical estimates obtained in
Section 3. To this aim, we consider the test case originally introduced in
[20, Section 6] that is briefly recalled in the sequel. Let
$\Omega=(-1,1)^{2}$ and let $\Gamma$ be the boundary of the circle $B_{1}(0)$
with centre $(0,0)$ and unitary radius. Thus, $\Gamma$ splits the domain
$\Omega$ into two subregions: $D_{1}=B_{1}(0)$ and
$D_{2}=\Omega\setminus\overline{D}_{1}$. Let $u$ be the solution of the
following problem
$-\Delta u=f\leavevmode\nobreak\ \leavevmode\nobreak\
\text{in\leavevmode\nobreak\ }\Omega,\qquad u=g\leavevmode\nobreak\
\leavevmode\nobreak\ \text{on\leavevmode\nobreak\ }\Gamma,\qquad
u=0\leavevmode\nobreak\ \leavevmode\nobreak\ \text{on\leavevmode\nobreak\
}\partial\Omega,$ (4.1)
where $g(x,y)=(4-x^{2})(4-y^{2})$ on $\Gamma$ and extended to $\Omega$ as
$\tilde{g}(x,y)=(4-x^{2})(4-y^{2})\cos(1-x^{2}-y^{2}).$
Setting the solution equal to:
$u(x,y)=(4-x^{2})(4-y^{2})\left(\chi_{D_{2}}+\exp(1-x^{2}-y^{2})\chi_{\bar{D}_{1}}\right),$
(4.2)
where $\chi_{D_{i}}$, $i=1,2$ are the characteristic functions of the two
parts of $\Omega$, the source term $f$ is chosen as:
$f=\begin{cases}-\Delta u&\quad\mathrm{in}\leavevmode\nobreak\
\Omega\backslash\Gamma,\\\ 0&\quad\mathrm{on}\leavevmode\nobreak\
\Gamma.\end{cases}$
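As a quick sanity check (ours, not reported in the paper), one can verify numerically that the two branches of (4.2) agree on $\Gamma$, so that $u$ is continuous across the interface, and that the extension $\tilde{g}$ restricts to $g$ there, since $1-x^{2}-y^{2}=0$ on the unit circle:

```python
import math

def u_inner(x, y):    # branch of (4.2) on the closed disc D1
    return (4 - x**2) * (4 - y**2) * math.exp(1 - x**2 - y**2)

def u_outer(x, y):    # branch of (4.2) on D2
    return (4 - x**2) * (4 - y**2)

def g_tilde(x, y):    # extension of the boundary datum g to Omega
    return (4 - x**2) * (4 - y**2) * math.cos(1 - x**2 - y**2)

# On Gamma (x^2 + y^2 = 1) we have exp(0) = cos(0) = 1, so all three
# expressions coincide with g(x, y) = (4 - x^2)(4 - y^2).
for k in range(32):
    t = 2 * math.pi * k / 32
    x, y = math.cos(t), math.sin(t)
    assert abs(u_inner(x, y) - u_outer(x, y)) < 1e-12
    assert abs(g_tilde(x, y) - u_outer(x, y)) < 1e-12
```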
All the computations have been performed employing a Voronoi dual mesh of a
Delaunay triangulation (i.e., the dual mesh is obtained by connecting the
barycentres of the triangles with straight lines).
To validate the estimates (3.4) we consider in a separate way the influence of
$h$ and $\epsilon$ on the error. More precisely, we first explore the
convergence with respect to $h$ and then we study the convergence with respect
to $\epsilon$. In both cases we consider a uniform discretization of the
domain $\Omega$ so to have $\kappa=\delta=h$.
### Convergence w.r.t. $\boldsymbol{h}$
We set $\epsilon=2^{-20}\ll h$ while we let $h$ vary as
$h=0.056,0.028,0.0139,0.00694.$
From Figure 3 we observe that the error decreases with order $1$ in the $L^{2}$-norm and with order $1/2$ in the $H^{1}$-norm. These
rates of convergence are in agreement with (3.4).
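The orders reported above can be extracted from the error data by a least-squares fit of $\log(\mathrm{error})$ against $\log(h)$. The sketch below (ours; the error values are synthetic placeholders with the expected rates, not the paper's data) illustrates the computation on the mesh sizes used in this test.

```python
import math

def observed_order(hs, errors):
    """Least-squares slope of log(error) vs log(h), i.e. the empirical
    convergence order p in error ~ C * h^p."""
    xs = [math.log(h) for h in hs]
    ys = [math.log(e) for e in errors]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    num = sum((a - xbar) * (b - ybar) for a, b in zip(xs, ys))
    den = sum((a - xbar) ** 2 for a in xs)
    return num / den

hs = [0.056, 0.028, 0.0139, 0.00694]        # mesh sizes of the experiment
l2_err = [0.9 * h for h in hs]              # synthetic O(h) L2-type data
h1_err = [0.9 * math.sqrt(h) for h in hs]   # synthetic O(h^{1/2}) H1-type data
print(round(observed_order(hs, l2_err), 2))  # -> 1.0
print(round(observed_order(hs, h1_err), 2))  # -> 0.5
```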
###### Remark 4.1.
If a local refinement of the diffuse interface region is performed in such a
way that $\delta\simeq\kappa\simeq h^{2}$ (Figure 5), then first and second
order of convergence are recovered for $H^{1}$ and $L^{2}$ norms, respectively
(cf. [20, Section 6]).
### Convergence w.r.t. $\boldsymbol{\epsilon}$
We employ a fine mesh ($h=0.00694$) and let the value of $\epsilon$ vary as:
$\epsilon=2^{i},\leavevmode\nobreak\ i=-1,...,-20.$
The results are collected in Figure 4. The theoretical rates of convergence
with respect to $\epsilon$ (cf. (3.4)) are obtained both in the $L^{2}$-norm
(order $1$) and in the $H^{1}$-norm (order $1/2$). It is worth noticing
that when the value of $\epsilon$ becomes smaller than the chosen value of
$h$, a plateau is observed as the (fixed) contribution from the discretization
of the PDE (related to $h$) dominates over the contribution from the
introduction of the diffuse interface (related to $\epsilon$).
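The plateau is already visible in the crude error model suggested by (3.4) (our illustration, with all constants set to one): with $\delta=\kappa=h$ fixed, the $H^{1}$ bound behaves like $\sqrt{\epsilon+h}+\sqrt{h}+h^{2/3}+h$ and stops decreasing once $\epsilon\ll h$.

```python
import math

def h1_bound(eps, h):
    """H1 error bound of (3.4) with delta = kappa = h and unit constants."""
    return math.sqrt(eps + h) + math.sqrt(h) + h ** (2.0 / 3.0) + h

h = 0.00694
vals = [h1_bound(2.0 ** i, h) for i in range(-1, -21, -1)]
# The bound decreases while eps dominates h ...
assert vals[0] > vals[5]
# ... and plateaus once eps << h: eps = 2^-15 and eps = 2^-20 barely differ.
assert abs(vals[14] - vals[19]) / vals[19] < 1e-2
```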
Figure 3: Error behaviour with respect to $h$ (fixed $\epsilon=2^{-20}$):
(left) $L^{2}$-norm error, (right) $H^{1}$-norm error. Dashed lines are
theoretical convergence orders. Figure 4: Error behaviour with respect to
$\epsilon$ (fixed $h=0.00694$): (left) $L^{2}$-norm error, (right)
$H^{1}$-norm error. Dotted lines are theoretical convergence orders.
Figure 5: On the left: example of a dual mesh with local mesh refinement
around surrogate boundary. On the right: error behaviour with respect to $h$
with local mesh refinement around the interface (fixed $\epsilon=2^{-20}$):
(left) $L^{2}$-norm error, (right) $H^{1}$-norm error. Dashed lines are
theoretical convergence orders.
## 5 Conclusions
In this paper we introduced a diffuse interface variant of a finite volume
method, namely the so-called box method, and obtained $L^{2}$ and
$H^{1}$ error estimates highlighting the contributions from the discretization
parameter $h$ associated to the polygonal computational mesh and the width
$\epsilon$ of the diffuse interface. Despite the simplicity of the method, the
present contribution seems to be novel in the literature. Moreover, the
present work may represent the first step towards the study of the diffuse
interface variant of more sophisticated finite volume schemes (possibly for
more complex differential problems).
This work opens the analysis of fictitious boundary methods to the box method and, more generally, to the finite volume framework. A possible extension of this research is the application of the many penalization techniques originally devised for finite element implementations, such as the shifted boundary method, Nitsche penalties, CutFEM, or Brinkman penalization.
## 6 Acknowledgements
The first author acknowledges the financial support of Fondazione Politecnico.
The third author acknowledges the financial support of PRIN research grant
number 201744KLJL “ _Virtual Element Methods: Analysis and Applications_ ”
funded by MIUR. The second and third authors acknowledge the financial support
of INdAM-GNCS.
## References
* [1] Ivo Babuška. The finite element method with Lagrangian multipliers. Numer. Math., 20:179–192, 1972/73.
* [2] Ivo Babuška. The finite element method with penalty. Math. Comp., 27:221–228, 1973.
* [3] Randolph E. Bank and Donald J. Rose. Some error estimates for the box method. SIAM Journal on Numerical Analysis, 24(4):777–787, 1987.
* [4] Stefano Berrone, Andrea Bonito, Rob Stevenson, and Marco Verani. An optimal adaptive fictitious domain method. Math. Comp., 88(319):2101–2134, 2019.
* [5] Daniele Boffi and Lucia Gastaldi. A finite element approach for the immersed boundary method. volume 81, pages 491–501. 2003. In honour of Klaus-Jürgen Bathe.
* [6] Christoph Börgers and Olof B. Widlund. On finite element domain imbedding methods. SIAM J. Numer. Anal., 27(4):963–978, 1990.
* [7] Erik Burman and Peter Hansbo. Fictitious domain finite element methods using cut elements: I. A stabilized Lagrange multiplier method. Comput. Methods Appl. Mech. Engrg., 199(41-44):2680–2686, 2010.
* [8] Erik Burman and Peter Hansbo. Fictitious domain finite element methods using cut elements: II. A stabilized Nitsche method. Appl. Numer. Math., 62(4):328–341, 2012.
* [9] Philippe G. Ciarlet. The finite element method for elliptic problems, volume 40 of Classics in Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2002. Reprint of the 1978 original [North-Holland, Amsterdam; MR0520174 (58 #25001)].
* [10] J. Austin Cottrell, Thomas J. R. Hughes, and Yuri Bazilevs. Isogeometric analysis. John Wiley & Sons, Ltd., Chichester, 2009. Toward integration of CAD and FEA.
* [11] J. Donea, S. Giuliani, and J.P. Halleux. An arbitrary lagrangian-eulerian finite element method for transient dynamic fluid-structure interactions. Computer Methods in Applied Mechanics and Engineering, 33(1):689 – 723, 1982.
* [12] Richard E. Ewing, Tao Lin, and Yanping Lin. On the accuracy of the finite volume element method based on piecewise linear polynomials. SIAM J. Numer. Anal., 39(6):1865–1888, 2002.
* [13] V. Girault and R. Glowinski. Error analysis of a fictitious domain method applied to a Dirichlet problem. Japan J. Indust. Appl. Math., 12(3):487–514, 1995.
* [14] Roland Glowinski, Tsorng-Whay Pan, and Jacques Périaux. A fictitious domain method for Dirichlet problem and applications. Comput. Methods Appl. Mech. Engrg., 111(3-4):283–303, 1994.
* [15] W. Hackbusch. On first and second order box schemes. Computing, 41(4):277–296, 1989.
* [16] C. W. Hirt, A. A. Amsden, and J. L. Cook. An arbitrary Lagrangian-Eulerian computing method for all flow speeds. J. Comput. Phys., 14(3):227–253, 1974. Reprinted in J. Comput. Phys., 135:198–216, 1997, with an introduction by L. G. Margolin.
* [17] Thomas J. R. Hughes, Wing Kam Liu, and Thomas K. Zimmermann. Lagrangian-Eulerian finite element formulation for incompressible viscous flows. Comput. Methods Appl. Mech. Engrg., 29(3):329–349, 1981.
* [18] X. Li, J. Lowengrub, A. Rätz, and A. Voigt. Solving PDEs in complex geometries: a diffuse domain approach. Commun. Math. Sci., 7(1):81–107, 2009.
* [19] Charles S. Peskin. The immersed boundary method. Acta Numer., 11:479–517, 2002.
* [20] Matthias Schlottbom. Error analysis of a diffuse interface method for elliptic problems with Dirichlet boundary conditions. Appl. Numer. Math., 109:109–122, 2016.
* [21] Jinchao Xu and Qingsong Zou. Analysis of linear and quadratic simplicial finite volume methods for elliptic equations. Numer. Math., 111(3):469–492, 2009.
# Coverage Analysis of Broadcast Networks with Users Having Heterogeneous
Content/Advertisement Preferences
Kanchan K. Chaurasia, Reena Sahu, Abhishek K. Gupta The authors are with the
Department of Electrical Engineering, Indian Institute of Technology Kanpur,
Kanpur, India 208016. Email<EMAIL_ADDRESS>
###### Abstract
This work is focused on the system-level performance of a broadcast network.
Since all transmitters in a broadcast network transmit the same signal,
received signals from multiple transmitters can be combined to improve system
performance. We develop a stochastic geometry based analytical framework to
derive the coverage of a typical receiver. We show that there may exist an
optimal connectivity radius that maximizes the rate coverage. Our analysis
includes the fact that users may have their individual content/advertisement
preferences. We assume that there are multiple classes of users, with each class preferring a particular type of content/advertisements, and that users pay the network only when they can see content aligned with their interests. The operator may choose to transmit multiple contents simultaneously to cater to more users’ interests and thereby increase its revenue. We present revenue models to study the impact of the number of contents on the operator revenue. We consider two scenarios for the users’ distribution: one where users’ interests depend on their geographical location and one where they do not. With the
help of numerical results and analysis, we show the impact of various
parameters including content granularity, connectivity radius, and rate
threshold and present important design insights.
###### Index Terms:
Stochastic geometry, Broadcast networks, Coverage.
## I Introduction
Broadcasting networks provide society with various services including TV
communication, delivery of critical information and alerts, general
entertainment, and educational services and thus have been a key wireless
technology. With the recent advancement in wireless technologies and handheld
electronic devices including smartphone and tablets, the use of broadcasting
services has been extended to include many modern applications including
delivery of traffic information to vehicles in vehicle-to-infrastructure
networks, advertisement industry and mobile TV services. Many broadcasting
standards have been recently proposed including the digital video broadcast-
terrestrial standard (DVB-T2), the advanced television systems committee
standard (ATSC 3.0), and the DVB-next generation handheld standard (DVB-NGH)
to assist the delivery of TV broadcasting services to mobile devices [1, 2, 3].
The advent of digital broadcasting has led to a significant increase in the
demand for multimedia services for handheld devices including mobile TV, live
video streaming, and on-demand video in the last decade [1]. From these
applications’ perspective, broadcasting based multimedia services can provide
better data rate and performance compared to the uni-cast cellular network
based mobile TV. Note that in a cellular network where the desired data is
transmitted to each user via orthogonal resources, users may suffer from spectral congestion in regions with high user density due to limited bandwidth, resulting in performance degradation. In contrast, in a broadcasting network providing multimedia services, all transmitters transmit identical data to all users and hence do not require orthogonal resources. In these networks, each
transmitter can use the complete spectrum to serve their users, and hence,
these are also called single frequency networks (SFN). Due to this, users may
experience a better quality of service.
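A small Monte Carlo sketch (ours, not the analytical framework developed in this paper; all parameter values are illustrative) can probe the qualitative claim above: transmitters form a Poisson point process, signals from transmitters within a connectivity radius $R$ of the typical user at the origin add up as desired power, and the remaining transmitters act as interference.

```python
import math
import random

def poisson(rng, mean):
    # Knuth's method; adequate for the moderate means used here.
    limit, k, p = math.exp(-mean), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def sinr_coverage(lam, R, alpha, theta, noise=1e-9, trials=2000, L=20.0, seed=0):
    """Empirical P(SINR > theta) for a typical SFN user at the origin.
    Transmitters: PPP of density lam on [-L, L]^2, unit transmit power,
    power-law path loss r^{-alpha}, i.i.d. exponential (Rayleigh) fading.
    Received powers from transmitters within distance R combine as the
    desired signal; transmitters beyond R contribute interference."""
    rng = random.Random(seed)
    covered = 0
    for _ in range(trials):
        n = poisson(rng, lam * (2 * L) ** 2)
        S = I = 0.0
        for _ in range(n):
            x, y = rng.uniform(-L, L), rng.uniform(-L, L)
            r = max(math.hypot(x, y), 1e-3)   # guard against r = 0
            p = rng.expovariate(1.0) * r ** (-alpha)
            if r <= R:
                S += p
            else:
                I += p
        if S / (I + noise) > theta:
            covered += 1
    return covered / trials

# Enlarging the connectivity region converts interferers into useful
# signal, so SINR coverage is non-decreasing in R for a fixed realization.
pc_small = sinr_coverage(lam=0.05, R=1.0, alpha=4.0, theta=1.0)
pc_large = sinr_coverage(lam=0.05, R=5.0, alpha=4.0, theta=1.0)
```

Note that this monotonicity concerns the SINR coverage only; the point made in this paper is that the rate coverage, which also brings in the network bandwidth, can instead be maximized at a finite connectivity radius.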
### I-A Related Work
Given the increasing demand for broadcasting services, it is interesting to analyze broadcast networks in terms of the signal-to-interference-plus-noise ratio (SINR) and
achievable data rate to understand their limitations and potential to meet
these demands. There have been some recent works in the system-level analysis
of broadcast networks. In [4], the authors evaluated the blocking probability
for users accessing a network delivering mobile TV services over a hybrid
broadcast unicast communication. In [5], the authors studied a cellular
network with uni-cast and multicast-broadcast deployments. However, these works did not include the effect of the transmitters’ locations in the evaluation, which is required for a system-level analysis of broadcast networks.
A stochastic geometry framework can be utilized to analyze wireless networks from a system-level perspective [6, 7, 8]. Stochastic geometry based models have been validated for various types of networks, including cellular networks and ad hoc networks [9, 6, 10, 11]. In [12], the authors describe the analytical
approach to calculate the coverage probability of a hybrid broadcast and uni-cast network; however, the authors considered only a single broadcast transmitter along with many uni-cast transmitters. In our past work [13], we
have considered a broadcast network with multiple broadcasting transmitters to
compute the coverage performance of users. However, the work assumed a static
connectivity region around the user where transmitters need to be located to
be able to serve the users. As shown in this paper, this connectivity region
is of variable size, depending on the location of the closest transmitter. To the best of our knowledge, there exists no other past work
which analyzes the SINR and rate performance of a broadcast network with
multiple broadcasting transmitters which is one of the main focuses of this
paper.
Another important metric to evaluate broadcast networks is the revenue earned
by the network operator. In a broadcast network, the revenue is generated
either from subscribers as network access fees for viewing content of their
choice or from advertisers to show their advertisements to interested
subscribers. With digital broadcast, subscriptions and user-targeted advertisements have added a new dimension to the revenue. Due to advancements in technologies over the past few decades, advertising has become more user-targeted and location-adaptive, and can be planned according to user demographics and preferences to improve network revenue. In [14], the
authors studied location-based mobile marketing and advertising to show the
positive interest of mobile consumers in receiving relevant promotions. It is
intuitive that targeted and localized content will have a better engagement factor. It is interesting to analyze the network revenue earned from users whose preferences depend on their choices and geographical location. As far as we know, there does not exist any past work that analyses the network revenue of a broadcast network with subscribers having preferences for content and advertisements, which is another focus of this paper.
### I-B Contributions
In this paper, we derive an analytical framework to evaluate the performance
of a broadcast network with multiple broadcasting transmitters with users
having content preference. We also present a revenue model to quantify the
network revenue earned by the network operator. In particular, the
contributions of this paper are as follows:
1.
We consider a broadcast network with multiple transmitters. Since all
transmitters in a broadcast network are transmitting the same signal, received
signals from multiple transmitters from a certain connectivity region around
the user can be combined to improve the coverage at this user. Using tools
from stochastic geometry, we derive the expression for SINR coverage and rate
coverage of a typical receiver located at the origin. Due to the contribution
in the desired signal power from multiple transmitters, the analysis is
significantly different from, and more difficult than, its cellular counterpart. Our
main contribution lies in developing the framework and deriving techniques to
evaluate the analytical expressions of SINR and rate coverage. We show that
this connectivity region depends on network bandwidth.
2.
We present some numerical results to validate our analysis and present design
insights. We show the impact of connectivity region size, path-loss exponents,
and the network density on the SINR and rate coverage. We also find that there
exists an optimal size of connectivity region that maximizes the rate
coverage.
3.
In this paper, we also include the fact that users may have individual content or advertisement preferences. We assume that there are multiple classes of users, with each class preferring a particular type of content/advertisement, and that users pay the network only when they can see content of their interest. We assume that one unit of revenue comes to the network from a particular class of users if every user of this class can see the content as per the preference of this class. We study the revenue thus obtained by the network from users. The broadcast operator may choose to transmit multiple contents simultaneously to cater to more users’ interests and increase its revenue. However, given the limited resources, the network can cater to only a few classes, and this capability depends on how these user classes are distributed spatially. We consider two scenarios for the users’ distribution: in the first, users’ interests depend on their geographical position in the network, and in the second they do not. We calculate the analytical expressions for SINR coverage and rate coverage at a typical user and evaluate the total revenue. We present many important design insights via numerical results.
Notation: Let $\mathcal{B}({\mathbf{x}},r)$ denote the ball of radius $r$ with
center at ${\mathbf{x}}$. $\|{\mathbf{x}}\|$ denotes the norm of the vector
${\mathbf{x}}$ and $\|{\mathbf{x}}_{i}\|=r_{i}$ denotes the random distance of
BBS located at ${{\mathbf{x}}_{i}}$. Let $\mathbf{o}$ denote the origin.
$\mathsf{B}\left(x,y;z\right)$ is the incomplete Beta function which is
defined as
$\mathsf{B}\left(x,y;z\right)=\int_{0}^{z}u^{x-1}(1-u)^{y-1}\mathrm{d}u.$
Let $c$ denote the speed of EM waves in the media. $\mathsf{A}^{\complement}$
denotes the complement of set $\mathsf{A}$.
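The incomplete Beta function defined above can be evaluated with standard libraries. A minimal sketch, assuming SciPy is available; note that SciPy's `betainc` is the regularized incomplete Beta, so it must be rescaled by the complete Beta function:

```python
from scipy.special import betainc, beta

def inc_beta(x, y, z):
    # B(x, y; z) = int_0^z u^(x-1) (1-u)^(y-1) du.
    # SciPy's betainc(x, y, z) is the REGULARIZED incomplete Beta,
    # i.e. B(x, y; z) / B(x, y), so rescale by the complete Beta.
    return betainc(x, y, z) * beta(x, y)

# Sanity checks against closed forms: B(1,1;z) = z and B(2,1;z) = z^2 / 2.
print(inc_beta(1, 1, 0.3), inc_beta(2, 1, 0.5))
```

For the arguments used later (of the form $z=1/\alpha$ with $\alpha>1$), both shape parameters stay positive, as `betainc` requires.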
## II System Model
In this paper, we consider a broadcast network with multiple broadcasting base
stations (BBSs), deployed in the 2D region $\mathsf{R}=\mathbb{R}^{2}$. The
considered system model is as follows:
Figure 1: Illustration of system model of a broadcast network. A typical user
is considered at the origin. ${X_{0}}$ is the distance of the nearest BS from
the typical user. The 2D region
$\mathcal{B}(\mathbf{o},{X_{0}}+R_{\mathrm{s}})$ denotes the connectivity
region of the user.
### II-A Network Model
The location of BBSs can be modeled as a homogeneous Poisson point process
$\Phi=\\{\mathbf{X}_{i}\in\mathbb{R}^{2}\\}$ with density $\lambda$ in the
region $\mathsf{R}$ (See Fig. 1). Let $R_{e}=1/\sqrt{\lambda\pi}$ denote the cell radius of an average cell. The subscribers (users) of the broadcasting service are assumed to form a stationary point process. We consider a typical user located at the origin $\mathbf{o}$. Each BBS operates in the same frequency band with transmission bandwidth $W$. Let $T_{\mathrm{s}}$ be the symbol time of the transmitted symbol, which is inversely proportional to the bandwidth $W$. Let the transmit power of each BBS be $p_{\mathsf{t~{}\\!}}$; all devices are equipped with a single isotropic transmit antenna. The analysis can be extended to finite networks
by taking $\mathsf{R}=\mathcal{B}(\mathbf{o},R)$ with a finite $R$.
### II-B Channel Model
We assume the standard path-loss model. Hence, the received signal power from
the $i$th BBS at the typical user at origin is given as
$\displaystyle P_{i}$
$\displaystyle=p_{\mathsf{t~{}\\!}}a\beta_{i}{\|\mathbf{X}_{i}\|}^{-\alpha},$
(1)
where $X_{i}=\|\mathbf{X}_{i}\|$ denotes the random distance of this BBS from
the typical user. Here, $\alpha$ is the path-loss exponent and $a$ is near-
field gain which depends on the propagation environment. $\beta_{i}$ denotes
the fading between the $i^{th}$ BBS and the user. We assume Rayleigh fading,
i.e., $\beta_{i}\sim\mathrm{Exp}(1)$, for tractability.
### II-C Serving Signal and Interference Model
In a broadcast system, multiple BBSs may transmit the same data in the same frequency band (as suggested by the name SFN). Therefore, at the receiver end, the transmission can be seen as a single transmission with multi-path propagation, and the signals transmitted from multiple BBSs can be combined at the user. However, since the signals from different BBSs arrive with delays that depend on their distances, some of these signals may be delayed significantly and may overlap with the next transmission slots. Therefore, only those signals whose delay is within a certain limit can be combined to successfully decode the received symbol [15]. The remaining BBSs contribute to inter-symbol interference (ISI), which can be significant depending on the BBS density.
Let $\mathbf{X}_{0}$ denote the nearest serving BBS. The probability density function of the distance $X_{0}=\|\mathbf{X}_{0}\|$ from the user to the nearest BBS is given as [7]
$\displaystyle f_{X_{0}}(u)=2\pi\lambda ue^{-\pi\lambda
u^{2}}\mathbbm{1}\left({u\geq 0}\right).$ (2)
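As a quick consistency check on (2) (illustration only, assuming NumPy), the empirical mean of the nearest-BBS distance in a simulated PPP should match $\mathbb{E}[X_{0}]=1/(2\sqrt{\lambda})$, which follows from this Rayleigh-type density:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, R, trials = 0.0014, 200.0, 5000   # density (km^-2), window radius (km), sample count

def nearest_bbs_distance():
    # Homogeneous PPP in B(o, R): Poisson point count, radii with CDF (r/R)^2.
    n = rng.poisson(lam * np.pi * R**2)
    if n == 0:
        return np.inf
    return (R * np.sqrt(rng.uniform(size=n))).min()

samples = np.array([nearest_bbs_distance() for _ in range(trials)])
# The density in (2) implies E[X0] = 1/(2 sqrt(lam)), about 13.4 km here.
print(samples.mean(), 1.0 / (2.0 * np.sqrt(lam)))
```

The finite window is harmless here because the nearest point is essentially always far inside it.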
Let the time taken by the signal to travel from the $i^{th}$ BBS located at $\mathbf{X}_{i}$ to the typical user at $\mathbf{o}$ be $T_{i}={X_{i}}/c$. In particular, $T_{0}$ denotes the time taken by the signal to travel from $\mathbf{X}_{0}$ to the typical user at $\mathbf{o}$. Let the propagation delay of the signal transmitted from the $i^{th}$ BBS relative to the nearest serving BBS be $\Delta_{i}=T_{i}-T_{0}$.
We assume that the receiver design allows a maximum delay of $\delta T_{\mathrm{s}}$ for the received signals to be combined at the user, where $\delta\in[0,1]$ is a design parameter. This means that the received signal from the $i^{th}$ BBS may contribute to the serving signal power if
$\Delta_{i}\leq\delta T_{\mathrm{s}}$. This condition is equivalent to the
condition $\|\mathbf{X}_{i}\|-\|\mathbf{X}_{0}\|\leq
R_{\mathrm{s}}\stackrel{{\scriptstyle\Delta}}{{=}}T_{\mathrm{s}}\delta c$ on
the BBSs location $\mathbf{X}_{i}$. In other words, this means that all the
BBSs that are located in the 2D region
$\\{\mathbf{X}:\|\mathbf{X}\|\leq\|\mathbf{X}_{0}\|+R_{\mathrm{s}}\\}=\mathcal{B}(\mathbf{o},{X_{0}}+R_{\mathrm{s}})$
can contribute to the serving signal at the typical receiver at origin
$\mathbf{o}$. We term this region
$\mathcal{B}(\mathbf{o},{X_{0}}+R_{\mathrm{s}})$ as the connectivity region
for the user and ${X_{0}}+R_{\mathrm{s}}$ can be termed as the connectivity
radius. Let $\overline{m}^{2}={\lambda\pi R_{\mathrm{s}}^{2}}$ denote the mean
number of BBSs in this connectivity radius.
On the other hand, all the BBSs located outside $\mathcal{B}(\mathbf{o},{X_{0}}+R_{\mathrm{s}})$, i.e., all the BBSs with ${X_{i}}\geq{X_{0}}+R_{\mathrm{s}}$, will contribute to the interference power even though they transmit the same data, since their signals are delayed beyond the specified limit.
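As a small worked example of the delay condition above (with hypothetical BBS distances chosen purely for illustration), a BBS combines coherently exactly when its distance is within $R_{\mathrm{s}}$ of the nearest BBS's distance:

```python
import numpy as np

Rs = 19.18                               # connectivity parameter T_s * delta * c (km), Table I value
r = np.array([5.0, 12.0, 30.0, 60.0])    # hypothetical sorted BBS distances (km)
x0 = r[0]                                # distance of the nearest serving BBS
serving = r <= x0 + Rs                   # BBSs inside B(o, X0 + Rs) combine coherently
interfering = ~serving                   # the rest contribute to ISI
print(serving, interfering)
```

Here the BBSs at 5 and 12 km combine (both within $5+19.18$ km), while those at 30 and 60 km interfere.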
### II-D Modeling Content Preferences of Users
We also include the fact that users may have individual content or advertisement preferences. We assume that there are $N_{\mathrm{c}}$ classes of users, where $N_{\mathrm{c}}$ is termed the content/advertisement granularity. Each class of users prefers a particular type of content/advertisement. We assume that users pay the network only when they can see content of their interest. Each class consists of some quanta of users. For simplicity, we assume that each class has the same number of users; however, the presented framework can be trivially extended to user classes with unequal sizes. We assume that one unit of revenue comes to the network from a particular class of users if every user of this class can see the content as per the preference of this class. Given the limited resources, the network can cater to only a few classes, and this capability depends on how these user classes are distributed spatially. We will consider two types of user-class distributions over the geographical space. We will also discuss a revenue model to characterize the network’s revenue, which helps us understand optimal scheduling policies for the two scenarios.
## III Coverage Analysis for Common Content Transmission
We first consider the scenario in which all users seek the same content; hence, all BBSs transmit the same content to everyone. Examples include systems transmitting emergency information or traffic data that is common to every user. In this section, we derive the SINR and rate coverage probability for a typical user at the origin $\mathbf{o}$ for such a system.
### III-A SINR
Since all BBSs are transmitting the same content, all BBSs located inside the
connectivity region $\mathcal{B}(0,X_{0}+R_{\mathrm{s}})$ contribute
to the signal power. Therefore, the desired received signal power for the
typical user at origin is given as
$\displaystyle S^{\prime}$
$\displaystyle=p_{\mathsf{t~{}\\!}}a\beta_{0}{\|\mathbf{X}_{0}\|}^{-\alpha}+\sum_{\mathbf{X}_{i}\in\Phi\cap\mathcal{B}(0,{X_{0}}+R_{\mathrm{s}})\setminus\mathbf{X}_{0}}p_{\mathsf{t~{}\\!}}a\beta_{i}{\|\mathbf{X}_{i}\|}^{-\alpha}.$
(3)
Similarly, the total interference can be given as
$\displaystyle I^{\prime}$
$\displaystyle=\sum_{\mathbf{X}_{j}\in\Phi\cap\mathcal{B}(0,\,X_{0}+R_{\mathrm{s}})^{\complement}}p_{\mathsf{t~{}\\!}}a\beta_{j}{\|\mathbf{X}_{j}\|}^{-\alpha}.$
(4)
The signal-to-interference-plus-noise ratio (SINR) at the typical receiver is
given as
$\displaystyle\mathtt{SINR}$
$\displaystyle=\frac{S^{\prime}}{I^{\prime}+N}=\frac{\beta_{0}{{X_{0}}}^{-\alpha}+\sum_{\mathbf{X}_{i}\in\Phi\cap\mathcal{B}(0,{X_{0}}+R_{\mathrm{s}})\setminus\mathbf{X}_{0}}\beta_{i}{{X_{i}}}^{-\alpha}}{\sum_{\mathbf{X}_{j}\in\Phi\cap\mathcal{B}(0,{X_{0}}+R_{\mathrm{s}})^{\complement}}\beta_{j}{{X_{j}}}^{-\alpha}+\sigma^{2}}.$
(5)
Here, $\sigma^{2}$ is the normalized noise power given as
$\sigma^{2}=N/(p_{\mathsf{t~{}\\!}}a)$ where $N$ is the noise power. Similarly
normalized desired received signal power and interference are denoted by $S$
and $I$ which are given as $S=S^{\prime}/(p_{\mathsf{t~{}\\!}}a)$ and
$I=I^{\prime}/(p_{\mathsf{t~{}\\!}}a)$. Hence, the SINR is equal to
$\displaystyle\mathtt{SINR}$ $\displaystyle=\frac{S}{I+\sigma^{2}}.$ (6)
Let $K=\sigma^{2}/R_{e}^{-\alpha}$, which represents the inverse of the SNR at the edge of an average cell.
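The SINR in (6) can also be estimated by Monte Carlo simulation, which is how the analytical results are validated later. A minimal sketch, assuming NumPy; the finite window radius and sample count are kept small here for speed, which slightly truncates the far-field interference:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, Rs, alpha = 0.0014, 19.18, 4.0   # Table I values
R = 300.0                              # finite simulation window (km); truncates far interference
sigma2 = 0.0                           # interference-limited case (K = 0)

def sinr_sample():
    # One realization: PPP of BBSs in B(o, R), Rayleigh fading, standard path loss.
    n = rng.poisson(lam * np.pi * R**2)
    if n == 0:
        return 0.0
    r = np.sort(R * np.sqrt(rng.uniform(size=n)))   # radial CDF (r/R)^2 gives uniform points
    beta = rng.exponential(size=n)                  # Rayleigh fading powers, Exp(1)
    p = beta * r**(-alpha)                          # normalized received powers
    serving = r <= r[0] + Rs                        # BBSs in the connectivity region B(o, X0 + Rs)
    S = p[serving].sum()
    I = p[~serving].sum()
    return S / (I + sigma2) if I + sigma2 > 0 else np.inf

tau = 1.0                                           # SINR threshold (0 dB)
pc = np.mean([sinr_sample() > tau for _ in range(4000)])
print(pc)
```

The empirical coverage `pc` is the Monte Carlo counterpart of $\mathrm{p_{c}}(\tau,\lambda)$ defined next.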
### III-B SINR Coverage Probability
The SINR coverage probability $\mathrm{p_{c}}(\tau,\lambda)$ of a user is defined as the probability that the SINR at the user is above the threshold $\tau$, i.e.,
$\displaystyle\mathrm{p_{c}}(\tau,\lambda)$
$\displaystyle=\mathbb{P}\left[\mathtt{SINR}>\tau\right]$ (7)
Using the conditioning on the nearest serving BBS’s location $\mathbf{X}_{0}$,
the SINR coverage for typical user at $\mathbf{o}$ is given as
$\displaystyle\mathrm{p_{c}}(\tau,\lambda)$
$\displaystyle=\mathbb{E}_{\mathbf{X}_{0}}\left[\mathbb{P}\left(\mathtt{SINR}>\tau\right)|\,\mathbf{X}_{0}\right]$
$\displaystyle=\mathbb{E}_{\mathbf{X}_{0}}\left[\mathbb{P}\left(\frac{S}{I+\sigma^{2}}>\tau\right)|\,\mathbf{X}_{0}\right]=\mathbb{E}_{\mathbf{X}_{0}}\left[\mathbb{P}\left(S>(I+\sigma^{2})\tau|\,\mathbf{X}_{0}\right)\right].$
(8)
Using the distribution of $\|\mathbf{X}_{0}\|=X_{0}$, the SINR coverage
probability can be further written as
$\displaystyle\mathrm{p_{c}}(\tau,\lambda)$
$\displaystyle=\mathbb{E}_{\mathbf{X}_{0}}\left[\mathbb{P}\left(S>\tau\left(I+\sigma^{2}\right)\,|\,\mathbf{X}_{0}\right)\right]$
$\displaystyle=\int_{0}^{\infty}2\pi\lambda ue^{-\pi\lambda
u^{2}}\mathbb{P}\left(S>\tau\left(I+\sigma^{2}\right)\,|\,\|\mathbf{X}_{0}\|=u\right)\mathrm{d}u.$
(9)
To solve the inner term further, we will use the Gil-Pelaez theorem [16], which states that the CDF of a random variable $Y$ can be written in terms of its Laplace transform $\mathcal{L}_{Y}(t)$ as
$\displaystyle F_{Y}(s)=\mathbb{P}\left[Y\leq
s\right]=\frac{1}{2}-\frac{1}{\pi}\int_{0}^{\infty}\frac{1}{t}\mathsf{Im}\left[e^{-jts}\mathcal{L}_{Y}(-jt)\right]\mathrm{d}t.$
(10)
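As a numerical sanity check on (10) (not part of the paper's derivation), the inversion can be tested on a distribution with a known CDF. For $Y\sim\mathrm{Exp}(1)$, $\mathcal{L}_{Y}(s)=1/(1+s)$, so the formula should recover $F_{Y}(s)=1-e^{-s}$. A sketch assuming SciPy:

```python
import numpy as np
from scipy.integrate import quad

# Check (10) for Y ~ Exp(1): L_Y(s) = 1/(1+s), hence L_Y(-jt) = 1/(1 - jt),
# and the inversion should reproduce F_Y(s) = 1 - exp(-s).
def cdf_gil_pelaez(s):
    integrand = lambda t: np.imag(np.exp(-1j * t * s) / (1.0 - 1j * t)) / t
    val, _ = quad(integrand, 0.0, np.inf, limit=500)
    return 0.5 - val / np.pi

print(cdf_gil_pelaez(1.0), 1.0 - np.exp(-1.0))
```

The integrand is finite at $t\to 0$ and decays like $1/t^{2}$, so a standard adaptive quadrature handles it.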
Using this Lemma, we get,
$\displaystyle\mathbb{P}\left(S>(I+\sigma^{2})\tau|\,\mathbf{X}_{0}\right)$
$\displaystyle=\mathbb{E}_{I|\,\mathbf{X}_{0}}\left[\frac{1}{2}+\frac{1}{\pi}\int_{0}^{\infty}\frac{1}{t}\mathsf{Im}\left[e^{-jt\tau(I+\sigma^{2})}\mathcal{L}_{S}(-jt)\right]\mathrm{d}t\right]$
$\displaystyle=\frac{1}{2}+\frac{1}{\pi}\int_{0}^{\infty}\frac{1}{t}\mathsf{Im}\left[\mathbb{E}_{I|\,\mathbf{X}_{0}}\left[e^{-jt\tau(I+\sigma^{2})}\right]\mathcal{L}_{S|\,\mathbf{X}_{0}}(-jt)\right]\mathrm{d}t$
$\displaystyle=\frac{1}{2}+\frac{1}{\pi}\int_{0}^{\infty}\frac{1}{t}\mathsf{Im}\left[\mathcal{L}_{I|\,\mathbf{X}_{0}}(jt\tau)e^{-jt\tau\sigma^{2}}\mathcal{L}_{S|\,\mathbf{X}_{0}}(-jt)\right]\mathrm{d}t.$
(11)
Now, substituting (11) into (9), the SINR coverage probability is
$\displaystyle\mathrm{p_{c}}(\tau,\lambda)$
$\displaystyle=\frac{1}{2}+\frac{1}{\pi}\int_{0}^{\infty}\int_{0}^{\infty}2\pi\lambda
ue^{-\pi\lambda
u^{2}}\frac{1}{t}\mathsf{Im}\left[\mathcal{L}_{I|\,\mathbf{X}_{0}}(jt\tau)e^{-jt\tau\sigma^{2}}\mathcal{L}_{S|\,\mathbf{X}_{0}}(-jt)\right]\mathrm{d}t\
\mathrm{d}u.$ (12)
Here, $\mathcal{L}_{I|\,\mathbf{X}_{0}}(.)$ and $\mathcal{L}_{S|\,\mathbf{X}_{0}}(.)$ are the Laplace transforms of the sum interference $I$ and of the desired received signal power $S$, respectively, which are given in the following lemma.
###### Lemma 1.
The Laplace transforms of the desired signal power and the sum interference at
the receiver located at origin $\mathbf{o}$ are given as
$\displaystyle\mathcal{L}_{S\,|\,\mathbf{X}_{0}}(s)$
$\displaystyle=\frac{1}{1+sX_{0}^{-\alpha}}\exp\left(-2\pi\lambda\int_{X_{0}}^{X_{0}+R_{\mathrm{s}}}\frac{sr^{-\alpha}}{1+sr^{-\alpha}}r\,\mathrm{d}r\right)$
(13) $\displaystyle\mathcal{L}_{I}(s)$
$\displaystyle=\exp{\left(-2\pi\lambda\int_{X_{0}+R_{\mathrm{s}}}^{\infty}\frac{sr^{-\alpha}}{1+sr^{-\alpha}}r\,\mathrm{d}r\right)}$
(14)
###### Proof.
See Appendix A. ∎
Using Lemma 1 in (12), we obtain the SINR coverage probability, which is given in Theorem 1.
###### Theorem 1.
The probability of the SINR coverage for the user located at the origin in a
broadcast network with $\lambda$ density of BBSs, is given as
$\displaystyle\mathrm{p_{c}}(\tau,\lambda)=$
$\displaystyle\frac{1}{2}+\frac{1}{\pi}\int_{0}^{\infty}2\pi\lambda
ue^{-\pi\lambda
u^{2}}\int_{0}^{\infty}\frac{1}{t}\mathsf{Im}\left[{\color[rgb]{0,0,0}\frac{e^{-jt\tau\sigma^{2}}}{1-jtu^{-\alpha}}}\right.$
$\displaystyle\hskip
56.9055pt\times\left.\exp\left(-2\pi\lambda\left(\int_{u}^{u+R_{\mathrm{s}}}\frac{-jtr^{-\alpha}}{1-jtr^{-\alpha}}r\,\mathrm{d}r+\int_{u+R_{\mathrm{s}}}^{\infty}\frac{jt\tau
r^{-\alpha}}{1+jt\tau
r^{-\alpha}}r\,\mathrm{d}r\right)\right)\right]\mathrm{d}t\ \mathrm{d}u$
$\displaystyle=$
$\displaystyle\frac{1}{2}+\frac{1}{\pi}\int_{0}^{\infty}\int_{0}^{\infty}2v\frac{1}{s}\left[\frac{1}{1+s^{2}v^{-2\alpha}}\right]e^{-v^{2}}e^{-s^{\frac{2}{\alpha}}M_{d}(s,v)}$
$\displaystyle\times\left[sv^{-\alpha}\cos\left(s^{\frac{2}{\alpha}}N_{d}(s,v)+\tau
sK\right)-\sin\left(s^{\frac{2}{\alpha}}N_{d}(s,v)+\tau
sK\right)\right]\mathrm{d}v\;\mathrm{d}s$ (15)
where $M_{d}(s,\,v)$ and $N_{d}(s,\,v)$ are given as
$\displaystyle M_{d}(s,v)$
$\displaystyle=\frac{1}{\alpha}\left[Q\left(\frac{1}{\alpha},s^{2}(v+\overline{m})^{-2\alpha},s^{2}v^{-2\alpha}\right)+\tau^{2/\alpha}Q\left(\frac{1}{\alpha},0,\tau^{2}s^{2}(v+\overline{m})^{-2\alpha}\right)\right]$
(16) $\displaystyle N_{d}(s,v)$
$\displaystyle=\frac{1}{\alpha}\left[-Q\left(\frac{1}{\alpha}+\frac{1}{2},s^{2}(v+\overline{m})^{-2\alpha},s^{2}v^{-2\alpha}\right)+\tau^{\frac{2}{\alpha}}Q\left(\frac{1}{\alpha}+\frac{1}{2},0,\tau^{2}s^{2}(v+\overline{m})^{-2\alpha}\right)\right]$
(17) with $\displaystyle
Q\left(z,a,b\right)=\mathsf{B}\left(z,-z+1;\frac{1}{1+a}\right)-\mathsf{B}\left(z,-z+1;\frac{1}{1+b}\right).$
(18)
###### Proof.
See Appendix B. ∎
Theorem 1 provides the SINR coverage in terms of two parameters: $K$, which denotes the inverse of the SNR at the cell edge, and $\overline{m}^{2}$, which denotes the mean number of BBSs within the connectivity radius. We can further derive the following remarks.
###### Remark 1.
In the interference-limited scenario, $K=0$, which means the coverage probability is a function of $\overline{m}$ only. Hence, if $\overline{m}$ is kept fixed, individual variations of $\lambda$ and $R_{\mathrm{s}}$ will not change the coverage.
###### Remark 2.
For a broadcast network, an increase in the BBS density $\lambda$ increases both the desired signal power and the interference power. However, the accompanying increase in the number of serving BBSs improves the overall SINR coverage (which can also be seen in the numerical results). This behavior is different from the conventional cellular case. Recall that in an interference-limited cellular network with a single serving BS, the SINR coverage is not affected by an increase in the BS density, a property known as SINR invariance [6]. This can be shown from (15) by performing a comparative study between $\lambda$ and $\lambda(1+\epsilon)$ with $\epsilon<1$ for some $\lambda$.
###### Remark 3.
It can be observed that the SINR coverage probability increases with an increase in the connectivity radius $R_{\mathrm{s}}$, since a larger $R_{\mathrm{s}}$ increases the serving power and decreases the interference.
### III-C Rate Coverage Probability
The rate coverage probability of a user is defined as the probability that the maximum achievable rate for the considered user is above a threshold $\rho$, i.e.,
$\displaystyle\mathrm{r_{c}}(\rho)$
$\displaystyle=\mathbb{P}\left[\mathtt{Rate}>\rho\right].$
Note that the maximum achievable rate for the typical user is given as
$\displaystyle\mathtt{Rate}$ $\displaystyle=\xi W\log_{2}(1+\mathtt{SINR})$
(19)
where $\xi$ is the spectrum-utilization coefficient and $W$ denotes the system bandwidth available to each BBS. Hence, the rate coverage for the typical user is
$\displaystyle\mathrm{r_{c}}(\rho)$
$\displaystyle=\mathbb{P}\left[\mathtt{Rate}>\rho\right]$
$\displaystyle=\mathbb{P}\left[\xi W\log_{2}(1+\mathtt{SINR})>\rho\right]$
$\displaystyle=\mathbb{P}\left[\mathtt{SINR}>2^{\rho/(\xi
W)}-1\right]=\mathrm{p_{c}}(2^{\rho/(\xi W)}-1)$ (20)
where $\mathrm{p_{c}}$ is the SINR coverage probability given in (15). Note
that the available bandwidth $W$ affects $T_{\mathrm{s}}$ and hence,
$R_{\mathrm{s}}$. If the BBSs use orthogonal frequency division multiplexing
(OFDM) for transmission with FFT size $N_{\mathrm{s}}$, then, $W$ is related
to $T_{\mathrm{s}}$ as
$W=\frac{N_{\mathrm{s}}}{T_{\mathrm{s}}}.$
Hence, the connectivity radius is
$\displaystyle R_{\mathrm{s}}=T_{\mathrm{s}}\delta
c=\frac{N_{\mathrm{s}}\delta c}{W}.$ (21)
Hence, an increase in the system bandwidth increases the pre-log factor in (19); however, it also decreases the connectivity radius, resulting in a lower SINR coverage probability. Therefore, we observe a trade-off in the rate coverage with increasing bandwidth.
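The coupling in (21) is easy to tabulate. A minimal sketch using the Table I value $N_{\mathrm{s}}\delta=512$ and the approximation $c\approx 3\times 10^{5}$ km/s (the exact speed of light yields the 19.18 km entry of Table I); note how doubling $W$ halves $R_{\mathrm{s}}$:

```python
c = 3e5          # propagation speed of EM waves (km/s), approximate
Ns_delta = 512   # N_s * delta, as in Table I

for W_MHz in [4, 8, 16, 80]:
    Rs = Ns_delta * c / (W_MHz * 1e6)   # connectivity radius from (21), in km
    print(f"W = {W_MHz} MHz  ->  R_s = {Rs:.2f} km")
```

At $W=8$ MHz this gives $R_{\mathrm{s}}\approx 19.2$ km, matching Table I up to the approximation of $c$.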
### III-D Numerical Results
We now validate our results for SINR and rate coverage probabilities through
numerical simulation. We will also explore the impact of different parameters
on the coverage probabilities via numerical evaluations of derived expressions
to develop design insights. The default parameters are given in Table I which
are according to [17, 18].
TABLE I: Default parameters for numerical evaluations
Parameters | Numerical value | Parameters | Numerical value
---|---|---|---
$R_{\mathrm{s}}$ | 19.18 km | $\lambda$ | 0.0014 BBSs / $\text{km}^{2}$
$N$ | 0 | Path-loss $a,\ \alpha$ | $1.6\times 10^{-3},\ 4$
$W$ | 8 MHz | $p_{\mathsf{t~{}\\!}}$ | 20 dB
$N_{\mathrm{s}}\delta$ | 512 | Coefficients | $a=10^{-3},\ \xi=1$
$N_{\mathrm{c}}$ | 15 | Simulation radius | $800$ km
Figure 2: SINR coverage vs. SINR threshold ($\tau$) for various BBS density
$\lambda$ in a broadcast system with multiple BBSs. Here, the solid lines
represent the analytical expression and markers represent simulation values.
The parameters are according to Table I. It can be seen that the analysis matches the simulation results.
Figure 3: Rate coverage vs. rate threshold
($\rho$) for various BBS density $\lambda$ in a broadcast system with multiple
BBSs. Here, the solid lines represent the analytical expression and markers
represent simulation values. The parameters are according to Table I. It can be seen that the analysis matches the simulation results.
Validation of results: Fig. 2 shows the SINR coverage probability vs SINR
threshold ($\tau$) for different values of BBSs density ($\lambda$). Here, the
solid lines represent the analytical expression and markers represent
simulation values. It can be seen that the analysis matches with simulation
results which establishes the validity of the presented analytical results.
From Fig. 2, it can be seen that SINR coverage increases with an increase in
the BBS density consistent with Remark 2. Similarly, Fig. 3 shows the rate
coverage probability vs rate threshold ($\rho$) for different values of
$\lambda$. It can be seen that the rate coverage increases with an increase in
$\lambda$ which is expected due to the SINR coverage behavior with $\lambda$.
Figure 4: SINR coverage vs. $R_{\mathrm{s}}$ for different values of SINR
threshold ($\tau$) in a broadcast network. Here, bandwidth varies with
$R_{\mathrm{s}}$ according to (21) with maximum value at 80 MHz. The rest of
the parameters are according to Table I. It is observed that the SINR coverage improves with $R_{\mathrm{s}}$.
Figure 5: Rate coverage vs. $R_{\mathrm{s}}$ for different values of the rate threshold ($\rho$ in Mbps) in a broadcast network. Here, bandwidth varies with $R_{\mathrm{s}}$ according to (21) with maximum value at 50 MHz. The rest of the parameters are according to Table I. A trade-off is seen in the rate coverage with varying $R_{\mathrm{s}}$.
Impact of connectivity radius on SINR and rate coverage: Fig. 4 shows the
variation of SINR coverage with the connectivity radius ($R_{\mathrm{s}}$) for
different values of target SINR threshold. It is observed that the SINR
coverage increases with $R_{\mathrm{s}}$. This can be justified: as $R_{\mathrm{s}}$ increases, the number of contributing BBSs increases and the number of interfering BBSs decreases.
Fig. 5 shows the variation of the rate coverage with $R_{\mathrm{s}}$. We observe that the rate coverage first increases up to a certain value of $R_{\mathrm{s}}$ and afterwards starts decreasing. From (21), an increase in $R_{\mathrm{s}}$ requires a decrease in the bandwidth $W$ in order to allow a larger symbol time. This causes a trade-off in the system performance: since the bandwidth appears as a pre-log factor in the rate expression, its decrease has the larger impact on the rate coverage. Hence, beyond a certain value of $R_{\mathrm{s}}$, the effect of the decrease in $W$ dominates the increase in SINR caused by the larger $R_{\mathrm{s}}$, which results in a decrease in the rate. For the same reason, there exists an optimal $R_{\mathrm{s}}$ that maximizes the rate coverage. Knowledge of the optimal $R_{\mathrm{s}}$ can be helpful in designing the broadcast network.
Impact of network density on SINR coverage: Fig. 6(a) and (b) show the variation of the SINR and rate coverage with the network density $\lambda$ for different values of the target SINR threshold and connectivity radius (achieved by changing the bandwidth while keeping the other parameters the same as in Table I). The coverage obtained while ignoring the noise is also shown. From Fig. 6(a), it can be seen that densification of the network helps the SINR coverage. When the BBS density is very small, the network is noise limited. As $\lambda$ increases, BBSs come closer to the user, improving the SINR coverage while the SIR coverage remains constant. Beyond a certain $\lambda$, a further increase in $\lambda$ also reduces the interference; hence, both the SIR and SINR coverage improve. At high values of $\lambda$, the coverage approaches 1 as all dominant BBSs provide serving power. The behavior of the SIR coverage is similar to that seen in networks with dual-slope path loss [19].
Figure 6: Coverage vs. BBS density $\lambda$ for different values of $R_{\mathrm{s}}$ in a broadcast network. Here, the rest of the parameters are
according to Table I. (a) SINR coverage. Dashed lines indicate SIR coverage
while ignoring noise. (b) Rate coverage. Dashed lines indicate rate coverage
ignoring the noise.
## IV Scenario I: Coverage Analysis for Networks with Users having a High
Level of Spatial Heterogeneity in Content Preference
We now extend the system model to networks in which users have individual content or advertisement preferences. In this section, we consider the first scenario, where there is a high level of spatial heterogeneity among users. This means that all classes of users are present in any region. Given the limited resources, the network selects $n$ classes of users and shows $n$ contents (one for each class) at any point of time. Here, $n$ is a design parameter decided by the broadcast network. Since the user classes are spatially inseparable, each BBS should transmit the same $n$ contents to improve SINR coverage. We assume OFDM-based transmission, where a BBS transmits the $n$ contents on orthogonal resources.
### IV-A SINR and SINR Coverage
To improve coverage, the network can use the same bands for a particular
content across all BBSs. Since all the BBSs are transmitting the same data in
a band, the SINR for a typical user is the same as given in (5). Similarly, in
this case, the SINR coverage probability of a typical user is the same as
given in Theorem 1.
### IV-B Rate Coverage
Now, the available resources are divided among $n$ contents. If the total
available bandwidth is $W$, the bandwidth available for each content is $W/n$.
The instantaneous achievable rate for a typical user located at origin, for
each content is
$\displaystyle\mathtt{Rate}$
$\displaystyle=\xi\frac{W}{n}\log_{2}(1+\mathtt{SINR}).$ (22)
From (20), the rate coverage probability is given as:
$\displaystyle\mathrm{r_{c}}(\rho)$
$\displaystyle=\mathbb{P}\left[\mathtt{SINR}>2^{n\rho/(\xi
W)}-1\right]=\mathrm{p_{c}}(2^{n\rho/(\xi W)}-1)$ (23)
where $\mathrm{p_{c}}$ is given in (15).
### IV-C Network Revenue
Let $\rho$ denote the minimum rate required for a user to be able to view the
content. Then, the rate coverage $\mathrm{r_{c}}$ at $\rho$ denotes the
fraction of users that are able to view this content. Therefore, $\mathrm{r}_{c}$ units of revenue will be earned by the network from a particular class, since only a fraction $\mathrm{r_{c}}$ of its users can watch the content. Hence, the network’s total revenue $\mathrm{R}_{\mathrm{n}}$ can be given as:
$\displaystyle\mathrm{R}_{\mathrm{n}}$ $\displaystyle=n\mathrm{r}_{c}(\rho).$
(24)
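The trade-off behind (23) and (24) can be illustrated with a toy coverage curve. The placeholder $\mathrm{p_{c}}(\tau)=e^{-0.3\tau}$ below is not the paper's expression (15); it merely mimics a decreasing coverage function so that the tension between the linear factor $n$ and the shrinking per-content bandwidth $W/n$ is visible:

```python
import numpy as np

W, xi, rho = 8e6, 1.0, 2e6     # bandwidth (Hz), utilization, rate threshold (bit/s); illustrative values

def revenue(n):
    # R_n = n * r_c(rho) with r_c from (23); p_c(tau) = exp(-0.3 tau) is a
    # PLACEHOLDER, not the paper's Theorem 1 expression.
    tau = 2.0 ** (n * rho / (xi * W)) - 1.0
    return n * np.exp(-0.3 * tau)

best = max(range(1, 16), key=revenue)
print(best, round(revenue(best), 3))
```

Even with this stand-in coverage curve, the revenue peaks at an interior $n$, reproducing the qualitative behavior discussed below.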
Figure 7: Variation of total revenue with respect to allowed number of user
classes for different rate threshold $\rho$ (Mbps) (which is a proxy for
content quality requested). Content granularity is $N_{\mathrm{c}}=15$. Other
parameters are according to Table I. It is observed that an optimal value of
$n$ can provide the maximum revenue to the network which depends on the
content quality requested.
### IV-D Numerical Results: Impact of $n$ on Total Revenue
Fig. 7 shows the variation of the network revenue $\mathrm{R}_{\mathrm{n}}$
with $n$ for a system with $N_{\mathrm{c}}=15$. We can observe that initially,
the revenue increases with an increase in $n$ up to a certain value, and
thereafter, starts decreasing. This can be justified in the following way. If
$n$ increases, the following two effects take place– (1) more user classes are
served, causing a linear increase in $\mathrm{R}_{\mathrm{n}}$, and (2)
available bandwidth $W/n$ for each content/advertisement decreases causing the
rate coverage to drop. As a combined effect dictated by (24), the revenue is
optimal at a particular value of $n$. However, this behavior also depends on
the target rate threshold. At a higher rate threshold,
$\mathrm{R}_{\mathrm{n}}$ decreases with $n$, showing $n=1$ as the optimal
choice. This implies that the optimal number of user classes that can be served depends on the quality of the content. If the quality requested is high, it may be better to serve fewer user classes, while more user classes can be served when the quality requested is lower.
## V Scenario II: Coverage Analysis for Networks with Users having Spatially
Separated Classes for Content Preference
The second scenario we consider corresponds to the case where user classes are spatially separated. For tractability, we assume that the coverage area of each BBS comprises users of a single class; thus, users of different classes are spatially separated. Let $S_{i}$ denote the user class of the $i$th BBS located at $\mathbf{X}_{i}$, which is a uniform discrete random variable with PMF given as
$\displaystyle p_{S_{i}}(k)$
$\displaystyle=\frac{1}{N_{\mathrm{c}}}\mathbbm{1}\left({1\leq k\leq
N_{\mathrm{c}}}\right).$ (25)
We assume that $S_{i}$’s are independent of each other.
The network can cater to all user classes by letting each BBS transmit content according to the class of the users lying in its coverage area. Note that, for a typical user, only those BBSs that transmit the same content can contribute to the serving signal power at this user. Therefore, this strategy reduces the number of serving BBSs and hence reduces the SINR. On the other hand, the network can decide to cater to only one user class by forcing all BBSs to transmit a single content; this reduces the revenue, as users of only one class receive their preferred content. It is therefore interesting to find the optimal number of user classes that the network should cater to. As a general problem, we consider that the network decides to cater to $n$ user classes out of the total $N_{\mathrm{c}}$ classes. Let us denote the set of selected class indices by $\mathcal{M}$.
Let us denote the content transmitted by the $i$th BBS by $M_{i}$. If the BBS’s user class is one of the $n$ selected classes (i.e., $S_{i}\in\mathcal{M}$), it will transmit the content corresponding to its user class, i.e., $M_{i}=S_{i}$. If the BBS’s user class is not one of the selected classes, it will transmit the content of a class randomly selected from $\mathcal{M}$ to help boost that content’s signal strength.
Let us consider a typical user at $\mathbf{o}$. Without loss of generality,
assume that its user class is 1. The probability that its class is one of the
$n$ selected classes is $n/N_{\mathrm{c}}$. Let us condition on the fact that
it is one of the selected classes.
Let the tagged BBS of this typical user transmit the content $M_{0}=1$. Then, for the $i$th BBS, $M_{i}=M_{0}$ if
1. 1.
$S_{i}=M_{0}$ which occurs with probability $\frac{1}{N_{\mathrm{c}}}$, or
2. 2.
$S_{i}\notin\mathcal{M}$ and $S_{i}=M_{i}$ which occurs with probability
$\frac{N_{\mathrm{c}}-n}{N_{\mathrm{c}}}\cdot\frac{1}{n}$.
Therefore, the probability that the $i$th BBS is transmitting the content as
per the preference of this typical user is
$\displaystyle p$
$\displaystyle=\mathbb{P}\left[M_{i}=M_{0}\right]=\frac{1}{N_{\mathrm{c}}}+\frac{N_{\mathrm{c}}-n}{N_{\mathrm{c}}}\cdot\frac{1}{n}=\frac{1}{n}.$
(26)
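The case analysis above can be checked numerically. The sketch below (plain Python; the function name and parameter values are illustrative assumptions, not from the paper) draws each BBS's class and transmitted content as described and estimates $\mathbb{P}[M_i=M_0]$, which should come out close to $1/n$ regardless of $N_{\mathrm{c}}$:

```python
import random

def match_probability(Nc, n, trials=200_000, seed=0):
    """Estimate P[M_i = M_0] for a BBS with a uniform class S_i in {1..Nc},
    when the selected classes are {1..n} and the tagged content is M_0 = 1.
    Illustrative sketch; names are not from the paper."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = rng.randint(1, Nc)                  # BBS user class, as in (25)
        m = s if s <= n else rng.randint(1, n)  # transmitted content M_i
        hits += (m == 1)
    return hits / trials

# (26) predicts 1/Nc + (Nc - n)/Nc * 1/n = 1/n, independent of Nc:
print(match_probability(Nc=15, n=5))  # ≈ 0.2
```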
### V-A SINR
Now, note that the BBSs that are transmitting the same content as $M_{0}$ and
are located inside $\mathcal{B}(\mathbf{o},{X_{0}}+R_{\mathrm{s}})$ will
contribute to the desired signal power at the typical user at the origin
$\mathbf{o}$. Therefore, the desired signal power for the typical user at the
origin is given as
$\displaystyle S=$ $\displaystyle
p_{\mathsf{t~{}\\!}}a\beta_{0}{{X_{0}}}^{-\alpha}+\sum_{\mathbf{X}_{i}\in\Phi\cap\mathcal{B}(0,X_{0}+R_{\mathrm{s}})\setminus\mathbf{X}_{0}}p_{\mathsf{t~{}\\!}}a\beta_{i}{{X_{i}}}^{-\alpha}\mathbbm{1}\left({M_{i}=M_{0}}\right).$
(27)
Similarly, the interference for the typical user is caused by the BBSs that
are either located outside $\mathcal{B}(\mathbf{o},{X_{0}}+R_{\mathrm{s}})$ or
located inside $\mathcal{B}(\mathbf{o},{X_{0}}+R_{\mathrm{s}})$ but
transmitting a different content than $M_{0}$. Hence, the total interference
is given as
$\displaystyle I=$
$\displaystyle\sum_{\mathbf{X}_{i}\in\Phi\cap\mathcal{B}(0,X_{0}+R_{\mathrm{s}})\setminus\mathbf{X}_{0}}p_{\mathsf{t~{}\\!}}a\beta_{i}{{X_{i}}}^{-\alpha}\mathbbm{1}\left({M_{i}\neq
M_{0}}\right)+\sum_{\mathbf{X}_{j}\in\Phi\cap\mathcal{B}(0,{X_{0}}+R_{\mathrm{s}})^{\complement}}p_{\mathsf{t~{}\\!}}a\beta_{j}{{X_{j}}}^{-\alpha}.$
(28)
Now, the SINR for this user is given as
$\displaystyle\mathtt{SINR}=\frac{S}{I+N}$
$\displaystyle=\frac{\beta_{0}{\|\mathbf{X}_{0}\|}^{-\alpha}+\sum_{\mathbf{X}_{i}\in\Phi\cap\mathcal{B}(0,u+R_{\mathrm{s}})\setminus\mathbf{X}_{0}}\beta_{i}{\|\mathbf{X}_{i}\|}^{-\alpha}\mathbbm{1}\left({M_{i}=M_{0}}\right)}{\sum_{\mathbf{X}_{i}\in\Phi\cap\mathcal{B}(0,u+R_{\mathrm{s}})\setminus\mathbf{X}_{0}}\beta_{i}{\|\mathbf{X}_{i}\|}^{-\alpha}\mathbbm{1}\left({M_{i}\neq
M_{0}}\right)+\sum_{\mathbf{X}_{j}\in\Phi\cap\mathcal{B}(0,u+R_{\mathrm{s}})^{\complement}}\beta_{j}{\|\mathbf{X}_{j}\|}^{-\alpha}+\sigma^{2}}.$
(29)
Here, $\sigma^{2}$ is the normalized noise power which is given as
$\sigma^{2}=N/(p_{\mathsf{t~{}\\!}}a)$ where $N$ is the noise power.
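The conditional SINR in (29) lends itself to a simple Monte Carlo check: drop a PPP of BBSs in a large disc, serve the user from the nearest one, and mark every other BBS as content-matching independently with probability $p=1/n$. The sketch below is illustrative only (function names, the window size, and all parameter values are assumptions, not from the paper):

```python
import numpy as np

def sinr_samples(lam, Rs, n, alpha=4.0, sigma2=0.01,
                 n_samples=2000, window=60.0, seed=1):
    """Monte Carlo samples of the SINR in (29). `lam` is the BBS density,
    `Rs` the connectivity radius, and p = 1/n the probability that a
    non-tagged BBS transmits the tagged content (see (26))."""
    rng = np.random.default_rng(seed)
    p = 1.0 / n
    out = []
    for _ in range(n_samples):
        N = rng.poisson(lam * np.pi * window ** 2)    # PPP in a disc of radius `window`
        if N == 0:
            continue
        r = np.sort(window * np.sqrt(rng.random(N)))  # BBS distances from the origin
        beta = rng.exponential(1.0, N)                # Rayleigh fading powers
        match = rng.random(N) < p                     # indicator of M_i = M_0
        match[0] = True                               # tagged BBS always serves
        inside = r <= r[0] + Rs                       # within B(o, X_0 + R_s)
        g = beta * r ** (-alpha)
        S = g[inside & match].sum()                   # desired power, as in (27)
        I = g[inside & ~match].sum() + g[~inside].sum()  # interference, as in (28)
        out.append(S / (I + sigma2))
    return np.array(out)

samples = sinr_samples(lam=0.014, Rs=5.0, n=3)
coverage_at_10dB = (samples > 10.0).mean()  # empirical p_c at tau = 10 (10 dB)
```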
### V-B SINR Coverage Probability
We now calculate the SINR coverage for a typical user. Similar to Section
III-B, the SINR coverage probability is given as
$\displaystyle\mathrm{p_{c}}(\tau,\lambda)$
$\displaystyle=\frac{1}{2}+\frac{1}{\pi}\int_{0}^{\infty}\int_{0}^{\infty}2\pi\lambda
ue^{-\pi\lambda
u^{2}}\frac{1}{t}\mathsf{Im}\left[\mathcal{L}_{I|\,\mathbf{X}_{0}}(jt\tau)e^{-jt\tau\sigma^{2}}\mathcal{L}_{S|\,\mathbf{X}_{0}}(-jt)\right]\mathrm{d}t\
\mathrm{d}u$ (30)
where $S$ and $I$ are given in (27) and (28).
###### Lemma 2.
Conditioned on the location of the closest serving BBS, the Laplace transforms
of the desired signal power and the sum interference at the receiver are given
as
$\displaystyle\mathcal{L}_{S|\,\mathbf{X}_{0}}(s)=\frac{1}{1+su^{-\alpha}}\exp\left(-2\pi\lambda
p\int_{X_{0}}^{X_{0}+R_{\mathrm{s}}}\frac{sr^{-\alpha}}{1+sr^{-\alpha}}r\mathrm{d}r\right)$
(31)
$\displaystyle\mathcal{L}_{I|\mathbf{X}_{0}}(s)=\exp\left(-2\pi\lambda(1-p)\int_{X_{0}}^{X_{0}+R_{\mathrm{s}}}\frac{sr^{-\alpha}}{1+sr^{-\alpha}}r\mathrm{d}r-2\pi\lambda\int_{X_{0}+R_{\mathrm{s}}}^{\infty}\frac{sr^{-\alpha}}{1+sr^{-\alpha}}r\mathrm{d}r\right).$
(32)
###### Proof.
See Appendix C. ∎
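The Laplace transforms (31) and (32) involve only one-dimensional integrals over the serving ring $[X_0, X_0+R_{\mathrm{s}}]$ and its exterior, so they are straightforward to evaluate numerically before feeding them into the inversion formula (30). A minimal sketch, truncating the semi-infinite integral at an assumed `rmax` (all names and parameter values are illustrative):

```python
import numpy as np

def ring_integral(s, a, b, alpha=4.0, npts=4000):
    """Trapezoidal value of  int_a^b  s r^{-alpha} / (1 + s r^{-alpha}) r dr."""
    r = np.linspace(a, b, npts)
    f = s * r ** (1.0 - alpha) / (1.0 + s * r ** (-alpha))
    return np.sum((f[1:] + f[:-1]) * np.diff(r)) / 2.0

def laplace_S(s, x0, lam, Rs, p, alpha=4.0):
    """L_{S|X0}(s) as in (31): the tagged-BBS factor times the PGFL of the
    thinned serving process (density lam * p) on the ring [x0, x0 + Rs]."""
    ring = ring_integral(s, x0, x0 + Rs, alpha)
    return np.exp(-2.0 * np.pi * lam * p * ring) / (1.0 + s * x0 ** (-alpha))

def laplace_I(s, x0, lam, Rs, p, alpha=4.0, rmax=1e4):
    """L_{I|X0}(s) as in (32): non-matching BBSs on the ring plus all BBSs
    beyond x0 + Rs; the semi-infinite integral is truncated at rmax."""
    near = ring_integral(s, x0, x0 + Rs, alpha)
    far = ring_integral(s, x0 + Rs, rmax, alpha)
    return np.exp(-2.0 * np.pi * lam * ((1.0 - p) * near + far))
```

Both transforms equal 1 at $s=0$ and decay as $s$ grows, which provides a quick sanity check on the implementation.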
Using Lemma 2 and (30) we can calculate the SINR coverage which is given in
Theorem 2.
###### Theorem 2.
The probability of SINR coverage for a user located at the origin in a
broadcast network with BBS density $\lambda$ is given as:
$\displaystyle\mathrm{p_{c}}(\tau,\lambda)=$
$\displaystyle\frac{1}{2}+\frac{1}{\pi}\int_{0}^{\infty}2\pi\lambda
ue^{-\pi\lambda
u^{2}}\int_{0}^{\infty}\frac{1}{t}\mathsf{Im}\left[{\color[rgb]{0,0,0}\frac{e^{-jt\tau\sigma^{2}}}{1-jtu^{-\alpha}}}\right.$
$\displaystyle\indent\times\left.\exp\left(-2\pi\lambda\left(p\int_{u}^{u+R_{\mathrm{s}}}\frac{-jtr^{-\alpha}}{1-jtr^{-\alpha}}r\,\mathrm{d}r+(1-p)\int_{u}^{u+R_{\mathrm{s}}}\frac{jt\tau
r^{-\alpha}}{1+jt\tau r^{-\alpha}}r\,\mathrm{d}r\right.\right.\right.$
$\displaystyle\hskip
42.67912pt\left.\left.\left.+\int_{u+R_{\mathrm{s}}}^{\infty}\frac{jt\tau
r^{-\alpha}}{1+jt\tau
r^{-\alpha}}r\,\mathrm{d}r\right)\right)\right]\mathrm{d}t\mathrm{d}u$
$\displaystyle=$
$\displaystyle\frac{1}{2}+\frac{1}{\pi}\int_{0}^{\infty}\int_{0}^{\infty}2v\frac{1}{s}\left[\frac{1}{1+s^{2}v^{-2\alpha}}\right]e^{-v^{2}}e^{-s^{\frac{2}{\alpha}}M^{\prime}_{d}(s,v)}$
$\displaystyle\times\left[sv^{-\alpha}\cos\left(s^{\frac{2}{\alpha}}N^{\prime}_{d}(s,v)+\tau
sK\right)-\sin\left(s^{\frac{2}{\alpha}}N^{\prime}_{d}(s,v)+\tau
sK\right)\right]\mathrm{d}v\;\mathrm{d}s$ (33)
where $M^{\prime}_{d}(s,v)$ and $N^{\prime}_{d}(s,v)$ are given as
$\displaystyle M^{\prime}_{d}(s,v)=$
$\displaystyle\frac{1}{\alpha}\left[pQ\left(\frac{1}{\alpha},s^{2}(v+\overline{m})^{-2\alpha},s^{2}v^{-2\alpha}\right)+(1-p)\tau^{2/\alpha}Q\left(\frac{1}{\alpha},\tau^{2}s^{2}(v+\overline{m})^{-2\alpha},\tau^{2}s^{2}v^{-2\alpha}\right)\right.$
$\displaystyle\left.+\tau^{2/\alpha}Q\left(\frac{1}{\alpha},0,\tau^{2}s^{2}(v+\overline{m})^{-2\alpha}\right)\right]$
(34) $\displaystyle N^{\prime}_{d}(s,v)=$
$\displaystyle\frac{1}{\alpha}\left[-pQ\left(\frac{1}{\alpha}+\frac{1}{2},s^{2}(v+\overline{m})^{-2\alpha},s^{2}v^{-2\alpha}\right)\right.$
$\displaystyle\left.+(1-p)\tau^{2/\alpha}Q\left(\frac{1}{\alpha}+\frac{1}{2},\tau^{2}s^{2}(v+\overline{m})^{-2\alpha},\tau^{2}s^{2}v^{-2\alpha}\right)\right.$
$\displaystyle\left.+\tau^{2/\alpha}Q\left(\frac{1}{\alpha}+\frac{1}{2},0,\tau^{2}s^{2}(v+\overline{m})^{-2\alpha}\right)\right]$
(35)
###### Proof.
See Appendix D. ∎
### V-C Rate Coverage
Since each BBS shows only one content, it can use the total available
bandwidth $W$. The instantaneous achievable rate for a typical user located at
the origin, while receiving the content, is
$\displaystyle\mathtt{Rate}$ $\displaystyle=\xi{W}\log_{2}(1+\mathtt{SINR}).$
(36)
From (20), the rate coverage probability is given as:
$\displaystyle\mathrm{r_{c}}(\rho)$
$\displaystyle=\mathbb{P}\left[\mathtt{SINR}>2^{\rho/(\xi
W)}-1\right]=\mathrm{p_{c}}(2^{\rho/(\xi W)}-1)$ (37)
where $\mathrm{p_{c}}$ is given in Theorem 2.
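Equation (37) says that rate coverage is simply the SINR coverage evaluated at the mapped threshold $2^{\rho/(\xi W)}-1$; a small helper makes this explicit (the callable `pc` stands for any SINR coverage curve, e.g. from Theorem 2 or a simulation; all names here are illustrative):

```python
def rate_coverage(pc, rho, W, xi=1.0):
    """r_c(rho) = p_c(2^{rho/(xi W)} - 1), i.e. eq. (37).
    `pc` is any callable returning SINR coverage at a given linear threshold."""
    tau = 2.0 ** (rho / (xi * W)) - 1.0
    return pc(tau)

# With an empirical coverage curve from Monte Carlo SINR samples one would call
# rate_coverage(lambda tau: (samples > tau).mean(), rho=15e6, W=80e6)
```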
### V-D Network Revenue
Let $\rho$ denote the minimum rate required for a user to be able to view the
content. Then, the rate coverage $\mathrm{r_{c}}$ at $\rho$ denotes the
fraction of users that are able to view this content. Therefore,
$\mathrm{r_{c}}$ units of revenue will be earned by the network from a
particular user class, since only an $\mathrm{r_{c}}$ fraction of its users
can watch the content. The probability that the typical user receives the
content as per its preference is $n/N_{\mathrm{c}}$, which is also the
probability that the network will earn revenue from this typical user. Similar
to previous sections, the total revenue can be computed as
$\displaystyle\mathrm{R}_{\mathrm{n}}$
$\displaystyle=\frac{n}{N_{\mathrm{c}}}\cdot\mathrm{r_{c}}(\rho),$ (38)
where the rate coverage $\mathrm{r_{c}}$ for the typical user is given by
(37).
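Given a rate-coverage curve as a function of $n$, the revenue (38) can be swept over $n=1,\dots,N_{\mathrm{c}}$ to locate the optimum. A sketch with a deliberately toy coverage model (the callable and its functional form are assumptions for illustration, not results from the paper):

```python
def network_revenue(rc_of_n, Nc):
    """Revenue (38) for each n = 1..Nc: R_n = (n / Nc) * r_c(n).
    `rc_of_n` maps the number of catered classes to a rate-coverage value;
    in practice it would come from Theorem 2, here it is user-supplied."""
    return {n: (n / Nc) * rc_of_n(n) for n in range(1, Nc + 1)}

# Toy model r_c(n) = 1/n (coverage falling as more classes interfere)
# makes the revenue exactly flat at 1/Nc for every n:
rev = network_revenue(lambda n: 1.0 / n, Nc=15)
```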
### V-E Numerical Results
We now present numerical results for Scenario II described above. The
parameters are stated in Table I.
Figure 8: Variation of SINR and rate coverage probability with respect to
allowed number of user classes for different values of connectivity radius
($R_{\mathrm{s}}$ in km) at a typical user in a broadcast system with
geographically separated user classes. Content granularity is
$N_{\mathrm{c}}=15$, SINR threshold $\tau=10$ dB and rate threshold $\rho=15$
Mbps. Network density is $\lambda=.014$/km2. The bandwidth varies with
$R_{\mathrm{s}}$ according to (21) with maximum value at 80 MHz. Other
parameters are according to Table I. The coverage decreases with $n$ due to
increased interference at the typical user.
Impact of $n$ on SINR and rate coverage probability: Fig. 8(a) shows the
variation of SINR coverage with $n$ for different values of the connectivity
radius $R_{\mathrm{s}}$. Here, the SINR threshold is $\tau=10$ dB and there
are $N_{\mathrm{c}}=15$ user classes. From Fig. 8, we observe that for a fixed
value of $R_{\mathrm{s}}$, SINR coverage decreases as $n$ increases. This is
due to the fact that more BBSs interfere as $n$ increases. However, beyond a
certain $n$, the coverage doesn't change much with $n$. This is because the
additional fraction of BBSs that interfere when $n$ increases by 1 is equal
to $\frac{n}{n+1}-\frac{n-1}{n}=\frac{1}{n(n+1)}$, which decreases very fast
with $n$. Therefore, beyond a certain $n$, there is no significant increase in
the interference, which makes the SINR coverage roughly constant with $n$.
It is also observed that the decrease in the SINR coverage probability is
faster when $R_{\mathrm{s}}$ is large. This can be justified as follows. First
note that $n$ only affects the BBSs that can be either an interferer or a
serving BBS, depending on the content they are showing. These BBSs lie in a
ring of width $R_{\mathrm{s}}$, and their number approximately scales as
$\lambda\pi R_{\mathrm{s}}^{2}$. Note that this number is large when
$R_{\mathrm{s}}$ is large. When we allow BBSs to show more
advertisements/contents, a large number of these BBSs means that there is a
larger number of potential interferers. When $R_{\mathrm{s}}$ is small, there
are fewer such BBSs (or even none), hence allowing more advertisements doesn't
affect the coverage significantly. Fig. 8(b) shows the variation of rate
coverage with respect to $n$ for different sizes of the connectivity region.
The rate coverage follows behavior similar to the SINR coverage, as described
by (37).
Figure 9: Variation of the total network revenue with respect to allowed
number of user classes $n$ for different values of rate threshold $\rho$ (in
Mbps) with $R_{\mathrm{s}}=150\,\text{km}$ for a broadcast system with
geographically separated user classes. Content granularity is
$N_{\mathrm{c}}=15$. Here, bandwidth varies with $R_{\mathrm{s}}$ according to
(21). Other parameters are according to Table I.
Impact of $n$ on the total network revenue: Fig. 9 shows the impact of $n$ on
the total network revenue for different values of the rate threshold. An
increase in $n$ means catering to a larger number of user classes. From Fig.
9, we can observe that the revenue initially decreases and then increases with
$n$. The initial decrease in the revenue seen from $n=1$ to $n=2$ for some
configurations is due to the decrease in rate coverage from $n=1$ to $n=2$, as
observed in Fig. 8(b), which dominates the increase in revenue generated by
catering to an additional user class. However, this behavior may depend on the
target rate threshold and the value of $R_{\mathrm{s}}$.
Figure 10: Variation of the total network revenue with respect to the
connectivity radius $R_{\mathrm{s}}$ (in km) for different values of $n$ with
$\rho=5\,\text{Mbps}$ for a broadcast system with geographically separated
user classes. Content granularity is $N_{\mathrm{c}}=15$. Here, bandwidth
varies with $R_{\mathrm{s}}$ according to (21) with maximum value at 50 MHz.
Other parameters are according to Table I.
Impact of $R_{\mathrm{s}}$ on the total network revenue: Fig. 10 shows the
total revenue with respect to the size of the connectivity region
$R_{\mathrm{s}}$ for different $n$ with $\rho=10\,\text{Mbps}$ and
$N_{\mathrm{c}}=15$. We can observe that the revenue decreases with increasing
$R_{\mathrm{s}}$. Also, the revenue increases with $n$ for lower values of
$R_{\mathrm{s}}$. For higher values of $R_{\mathrm{s}}$, the behavior with $n$
is not monotonic: the revenue for $n>1$ may fall below the revenue for $n=1$.
As discussed previously, the rate coverage decreases drastically from $n=1$ to
$n=2$ for large $R_{\mathrm{s}}$, which can dominate the increase in revenue
generated by catering to an additional user class.
## VI Conclusions
In this paper, we presented an analytical framework for the system performance
of a broadcast network using stochastic geometry. Since all BBSs in the
broadcast network transmit the same signal, signals from multiple BSs can be
used to improve the coverage. We showed that there exists a region such that
all BBSs lying in it may contribute to the desired signal power. We computed
the SINR and rate coverage probability for a typical user located at the
origin and validated our results using numerical analysis. Using these
results, we found that there exists an optimal region size which maximizes the
rate coverage. When users consist of many classes with heterogeneous content
preferences, the network can schedule content to maximize its revenue. We
presented an analytical model of the revenue thus obtained from users,
validated through numerical analysis. We also presented the variation of total
revenue with respect to various parameters, including the number of user
classes to be catered to, the size of the connectivity region, and the rate
threshold. We showed how content quality also affects the network's decision
on the variety of content shown by the operator.
## Appendix A Proof for Lemma 1
Using (4), the Laplace transform of the sum interference $I|\mathbf{X}_{0}$ is
given as
$\displaystyle\mathcal{L}_{I\,|\,\mathbf{X}_{0}}(s)=\mathbb{E}\left[e^{-sI}|\mathbf{X}_{0}\right]=\mathbb{E}\left[\exp{\left(-s\sum_{\mathbf{X}_{j}\in\Phi\cap\mathcal{B}(0,{X_{0}}+R_{\mathrm{s}})^{\complement}}\beta_{j}{\|\mathbf{X}_{j}\|}^{-\alpha}\right)}\right]$
$\displaystyle\overset{(a)}{=}\exp{\left(-\lambda\int_{\Phi\cap\mathcal{B}(0,{X_{0}}+R_{\mathrm{s}})^{\complement}}\left(1-\mathbb{E}_{\beta}\left[e^{-s\beta\|\mathbf{X}\|^{-\alpha}}\right]\right)\mathrm{d}{\mathbf{x}}\right)}$
$\displaystyle\stackrel{{\scriptstyle(b)}}{{=}}\exp{\left(-2\pi\lambda\int_{X_{0}+R_{\mathrm{s}}}^{\infty}\left(1-\mathbb{E}_{\beta}\left[e^{-s\beta
r^{-\alpha}}\right]\right)r\,\mathrm{d}r\right)}$ (39)
where (a) is due to the probability generating functional (PGFL) of a
homogeneous PPP [7] and (b) is due to conversion to polar coordinates. Now,
since the $\beta$'s are exponentially distributed, using their MGF, we get
$\displaystyle\mathcal{L}_{I}(s)=\exp{\left(-2\pi\lambda\int_{u+R_{\mathrm{s}}}^{\infty}\frac{sr^{-\alpha}}{1+sr^{-\alpha}}r\,\mathrm{d}r\right)}.$
(40)
Similarly, the Laplace transform of the desired signal power, conditioned on
the nearest BS being located at $\mathbf{X}_{0}$, is given as:
$\displaystyle\mathcal{L}_{S\,|\,\mathbf{X}_{0}}(s)=\mathbb{E}_{S|\,\mathbf{X}_{0}}\left[e^{-sS}|\mathbf{X}_{0}\right]$
$\displaystyle=$
$\displaystyle\mathbb{E}_{\beta_{i},\mathbf{X}_{i}}\left[\exp{\left(-s\left(\beta_{0}\|\mathbf{X}_{0}\|^{-\alpha}+\sum_{\mathbf{X}_{i}\in\Phi\cap\mathcal{B}{(\mathbf{o},X_{0}+R_{\mathrm{s}})}\setminus\mathbf{X}_{0}}\beta_{i}\|\mathbf{X}_{i}\|^{-\alpha}\right)\right)}\right]$
$\displaystyle=$
$\displaystyle\mathbb{E}_{\Phi\,|\,\mathbf{X}_{0}}\left[\mathbb{E}_{\beta_{0}|\,\mathbf{X}_{0}}\left[e^{-s\beta_{0}\|\mathbf{X}_{0}\|^{-\alpha}}\right]\right.$
$\displaystyle\times\left.\prod_{\mathbf{X}_{i}\in\Phi\cap\mathcal{B}{(\mathbf{o},X_{0}+R_{\mathrm{s}})}\setminus\mathbf{X}_{0}}\mathbb{E}_{\beta_{i}|X_{i},\mathbf{X}_{0}}\left[e^{-s\beta_{i}\|\mathbf{X}_{i}\|^{-\alpha})}\right]\right]$
(41)
Now, from Slivnyak's theorem [7], we know that conditioned on $\mathbf{X}_{0}$,
$\{\mathbf{X}_{i}:\mathbf{X}_{i}\in\Phi\}\setminus\{\mathbf{X}_{0}\}$ is a
PPP with the same density. Therefore, using the PGFL of a PPP and noting that
the $\beta_{i}$'s are exponential RVs, we get
$\displaystyle\mathcal{L}_{S\,|\,\mathbf{X}_{0}}(s)=$
$\displaystyle\frac{1}{1+s\|\mathbf{X}_{0}\|^{-\alpha}}\exp\left(-2\pi\lambda\int_{X_{0}}^{X_{0}+R_{\mathrm{s}}}\left(1-\frac{1}{1+sr^{-\alpha}}\right)r\mathrm{d}r\right)$
$\displaystyle=$
$\displaystyle\frac{1}{1+sX_{0}^{-\alpha}}\exp\left(-2\pi\lambda\int_{X_{0}}^{X_{0}+R_{\mathrm{s}}}\frac{sr^{-\alpha}}{1+sr^{-\alpha}}r\mathrm{d}r\right).$
(42)
## Appendix B Proof of Theorem 1
Using Lemma 1, we get
$\displaystyle\mathcal{L}_{S|\,X_{0}=u}(-jt)$
$\displaystyle=\frac{1+jtu^{-\alpha}}{1+t^{2}u^{-2\alpha}}\exp\left(-2\pi\lambda\int_{u}^{u+R_{\mathrm{s}}}\frac{-jtr^{-\alpha}}{1-jtr^{-\alpha}}r\mathrm{d}r\right)$
(43) $\displaystyle\mathcal{L}_{I|\,X_{0}=u}(jt\tau)$
$\displaystyle=\exp{\left(-2\pi\lambda\int_{u+R_{\mathrm{s}}}^{\infty}\frac{jt\tau
r^{-\alpha}}{1+jt\tau r^{-\alpha}}r\,\mathrm{d}r\right)}.$ (44)
Substituting the above values in (11), we get
$\displaystyle\mathbb{P}\left(S>(I+\sigma^{2})\tau|\,X_{0}=u\right)$
$\displaystyle=\frac{1}{2}+\frac{1}{\pi}\int_{0}^{\infty}\frac{1}{t}\mathsf{Im}\left[\left(\frac{1+jtu^{-\alpha}}{1+t^{2}u^{-2\alpha}}\right)e^{-jt\tau\sigma^{2}}\times\right.$
$\displaystyle\hskip
14.22636pt\left.\exp\left(-2\pi\lambda\left(\int_{u+R_{\mathrm{s}}}^{\infty}\frac{jt\tau
r^{-\alpha}}{1+jt\tau
r^{-\alpha}}r\,\mathrm{d}r+\int_{u}^{u+R_{\mathrm{s}}}\frac{-jtr^{-\alpha}}{1-jtr^{-\alpha}}r\,\mathrm{d}r\right)\right)\right]\mathrm{d}t$
$\displaystyle=\frac{1}{2}+\frac{1}{\pi}\int_{0}^{\infty}\frac{1}{t}\mathsf{Im}\left[\left(\frac{1+jtu^{-\alpha}}{1+t^{2}u^{-2\alpha}}\right)e^{-jt\tau\sigma^{2}}\times\right.$
$\displaystyle\left.\exp\left(-2\pi\lambda\left(\int_{u+R_{\mathrm{s}}}^{\infty}\frac{t^{2}\tau^{2}r^{-2\alpha+1}}{1+t^{2}\tau^{2}r^{-2\alpha}}\mathrm{d}r+\int_{u}^{u+R_{\mathrm{s}}}\frac{t^{2}r^{-2\alpha+1}}{1+t^{2}r^{-2\alpha}}\mathrm{d}r\right.\right.\right.$
$\displaystyle\left.\left.\left.+j\int_{u+R_{\mathrm{s}}}^{\infty}\frac{t\tau
r^{-\alpha+1}}{1+t^{2}\tau^{2}r^{-2\alpha}}\mathrm{d}r\,-\,j\int_{u}^{u+R_{\mathrm{s}}}\frac{tr^{-\alpha+1}}{1+t^{2}r^{-2\alpha}}\mathrm{d}r\right)\right)\right]\mathrm{d}t$
(45)
where the last step is obtained using multiplication of conjugate terms. Now,
if we define
$\displaystyle
M(t,u)=2\alpha{t^{-2/\alpha}}\left[\int_{u}^{u+R_{\mathrm{s}}}\frac{t^{2}r^{-2\alpha+1}}{1+t^{2}r^{-2\alpha}}\mathrm{d}r+\,\int_{u+R_{\mathrm{s}}}^{\infty}\frac{t^{2}\tau^{2}r^{-2\alpha+1}}{1+t^{2}\tau^{2}r^{-2\alpha}}\mathrm{d}r\right]$
(46) $\displaystyle
N(t,u)=2\alpha{t^{-2/\alpha}}\left[\int_{u+R_{\mathrm{s}}}^{\infty}\frac{t\tau
r^{-\alpha+1}}{1+t^{2}\tau^{2}r^{-2\alpha}}\mathrm{d}r\,-\,\int_{u}^{u+R_{\mathrm{s}}}\frac{tr^{-\alpha+1}}{1+t^{2}r^{-2\alpha}}\mathrm{d}r\right]$
(47)
(45) can be written as
$\displaystyle\mathbb{P}\left(S>(I+\sigma^{2})\tau|\,X_{0}=u\right)$
$\displaystyle=\frac{1}{2}+\frac{1}{\pi}\int_{0}^{\infty}\frac{1}{t}\mathsf{Im}\left[\left(\frac{1+jtu^{-\alpha}}{1+t^{2}u^{-2\alpha}}\right)\right.$
$\displaystyle\times\left.\exp\left(-\frac{\pi}{\alpha}\lambda{\color[rgb]{0,0,0}t^{2/\alpha}}M(t,u)-j\frac{\pi}{\alpha}\lambda{\color[rgb]{0,0,0}t^{2/\alpha}}N(t,u)-jt\tau\sigma^{2}\right)\right]\mathrm{d}t.$
(48)
Now, with some trivial manipulations and substituting (48) in (9), we get
$\displaystyle\mathrm{p_{c}}(\tau,\lambda)=$
$\displaystyle\frac{1}{2}+\frac{1}{\pi}\int_{0}^{\infty}\int_{0}^{\infty}2\pi\lambda
ue^{-\pi\lambda
u^{2}}\cdot\frac{1}{t}\cdot\left[\frac{1}{1+t^{2}u^{-2\alpha}}\right]e^{-2\pi\lambda
t^{2/\alpha}M(t,\,u)/2\alpha}$
$\displaystyle\times\left[tu^{-\alpha}\cos{\left(\frac{\pi}{\alpha}\lambda{t^{2/\alpha}}N(t,\,u)+\,t\tau\sigma^{2}\right)}-\sin{\left(\frac{\pi}{\alpha}\lambda{t^{2/\alpha}}N(t,\,u)+\,t\tau\sigma^{2}\right)}\right]\mathrm{d}t\mathrm{d}u.$
(49)
Further, the forms of $M$ and $N$ can be simplified by trivial manipulations
and the definition of the incomplete Beta function in (46) and (47) to get
$\displaystyle
M(t,u)=Q\left(\frac{1}{\alpha},t^{2}(u+R_{\mathrm{s}})^{-2\alpha},t^{2}u^{-2\alpha}\right)+\tau^{2/\alpha}Q\left(\frac{1}{\alpha},0,t^{2}\tau^{2}(u+R_{\mathrm{s}})^{-2\alpha}\right),\text{
and }$ (50) $\displaystyle
N(t,\,u)=-Q\left(\frac{1}{\alpha}+\frac{1}{2},t^{2}(u+R_{\mathrm{s}})^{-2\alpha},t^{2}u^{-2\alpha}\right)+\tau^{2/\alpha}Q\left(\frac{1}{\alpha}+\frac{1}{2},0,t^{2}\tau^{2}(u+R_{\mathrm{s}})^{-2\alpha}\right)$
(51)
Now, we can substitute
$\displaystyle t$ $\displaystyle\rightarrow s/{(\lambda\pi)}^{\alpha/2}$
$\displaystyle u$ $\displaystyle\rightarrow v/\sqrt{\lambda\pi}\ $ (52)
in (49), (50) and (51) to get the desired result.
## Appendix C Proof of Lemma 2
From (23), the Laplace transform of the desired signal power $S$, conditioned
on the nearest BBS being located at $\mathbf{X}_{0}$, is given as:
$\displaystyle\mathcal{L}_{S|\,\mathbf{X}_{0}}(s)=\mathbb{E}_{S\,|\,\mathbf{X}_{0}}\left[e^{-sS}\right]$
$\displaystyle=\mathbb{E}_{\{\beta_{i},\mathbf{X}_{i}\}|\,\mathbf{X}_{0}}\left[\exp\left(-s\beta_{0}{X_{0}}^{-\alpha}-\sum_{\mathbf{X}_{i}\in\Phi\cap\mathcal{B}{(\mathbf{o},X_{0}+R_{\mathrm{s}})}\setminus\mathbf{X}_{0}}s\beta_{i}{X_{i}}^{-\alpha}\mathbbm{1}\left(M_{i}=M_{0}\right)\right)\right]$
which is similar to (41) except for the fact that the summation in the last
term is over only those points that satisfy the additional condition
$M_{i}=M_{0}$. By the independent thinning theorem, these points also form a
PPP with density $\lambda\mathbb{P}\left[M_{i}=M_{0}\right]=\lambda p$. Now
using the PGFL of this PPP, we get
$\displaystyle\mathcal{L}_{S|\,\mathbf{X}_{0}}(s)$
$\displaystyle=\frac{1}{1+sX_{0}^{-\alpha}}\exp\left(-2\pi\lambda
p\int_{X_{0}}^{X_{0}+R_{\mathrm{s}}}\frac{sr^{-\alpha}}{1+sr^{-\alpha}}r\mathrm{d}r\right).$
(53)
Now, from (24), the Laplace transform of sum interference is
$\displaystyle\mathcal{L}_{I}(s)=\mathbb{E}_{I}\left[e^{-sI}\right]$
$\displaystyle=\mathbb{E}_{I}\left[\exp\left(-\sum_{\mathbf{X}_{i}\in\Phi\cap\mathcal{B}(0,X_{0}+R_{\mathrm{s}})\setminus\mathbf{X}_{0}}s\beta_{i}{{X_{i}}}^{-\alpha}\mathbbm{1}\left(M_{i}\neq
M_{0}\right)-\sum_{\mathbf{X}_{j}\in\Phi\cap\mathcal{B}(0,X_{0}+R_{\mathrm{s}})^{\complement}}s\beta_{j}{X_{j}}^{-\alpha}\right)\right]$
$\displaystyle\stackrel{{\scriptstyle(a)}}{{=}}\exp\left(-2\pi\lambda(1-p)\int_{X_{0}}^{X_{0}+R_{\mathrm{s}}}\left(1-\mathbb{E}_{\beta}\left[e^{-s\beta
r^{-\alpha}}\right]\right)r\mathrm{d}r-2\pi\lambda\int_{X_{0}+R_{\mathrm{s}}}^{\infty}\left(1-\mathbb{E}_{\beta}\left[e^{-s\beta
r^{-\alpha}}\right]\right)r\mathrm{d}r\right)$
$\displaystyle\overset{(b)}{=}\exp\left(-2\pi\lambda(1-p)\int_{X_{0}}^{X_{0}+R_{\mathrm{s}}}\frac{sr^{-\alpha}}{1+sr^{-\alpha}}r\mathrm{d}r-2\pi\lambda\int_{X_{0}+R_{\mathrm{s}}}^{\infty}\frac{sr^{-\alpha}}{1+sr^{-\alpha}}r\mathrm{d}r\right),$
(54)
where (a) is due to the probability generating functional of homogeneous PPP
and independent thinning theorem and (b) is due to MGF of exponentially
distributed $\beta_{i}$’s.
## Appendix D Proof of Theorem 2
Using Lemma 2, we get
$\displaystyle\mathcal{L}_{S|\,X_{0}=u}(-jt)=\frac{1+jtu^{-\alpha}}{1+t^{2}u^{-2\alpha}}\exp\left(-2\pi\lambda
p\int_{u}^{u+R_{\mathrm{s}}}\frac{-jtr^{-\alpha}}{1-jtr^{-\alpha}}rdr\right)$
(55)
$\displaystyle\mathcal{L}_{I|\,X_{0}=u}(jt\tau)=\exp\left(-2\pi\lambda\left[(1-p)\int_{u}^{u+R_{\mathrm{s}}}\frac{jt\tau
r^{-\alpha}}{1+jt\tau
r^{-\alpha}}r\,\mathrm{d}r+\int_{u+R_{\mathrm{s}}}^{\infty}\frac{jt\tau
r^{-\alpha}}{1+jt\tau r^{-\alpha}}r\,\mathrm{d}r\right]\right).$ (56)
Substituting the above values in (11), we get
$\displaystyle\mathbb{P}\left(S>(I+\sigma^{2})\tau|\,X_{0}=u\right)$
$\displaystyle=$
$\displaystyle\frac{1}{2}+\frac{1}{\pi}\int_{0}^{\infty}\frac{1}{t}\mathsf{Im}\left[\left(\frac{1+jtu^{-\alpha}}{1+t^{2}u^{-2\alpha}}\right)e^{-jt\tau\sigma^{2}}\cdot\right.$
$\displaystyle\left.\exp\left(-2\pi\lambda\left(\int_{u+R_{\mathrm{s}}}^{\infty}\frac{jt\tau
r^{-\alpha}}{1+jt\tau
r^{-\alpha}}r\,\mathrm{d}r+(1-p)\int_{u}^{u+R_{\mathrm{s}}}\frac{jt\tau
r^{-\alpha}}{1+jt\tau r^{-\alpha}}r\,\mathrm{d}r\right.\right.\right.$
$\displaystyle\indent\left.\left.\left.+p\int_{u}^{u+R_{\mathrm{s}}}\frac{-jtr^{-\alpha}}{1-jtr^{-\alpha}}r\,\mathrm{d}r\right)\right)\right]\mathrm{d}t$
$\displaystyle=$
$\displaystyle\frac{1}{2}+\frac{1}{\pi}\int_{0}^{\infty}\frac{1}{t}\mathsf{Im}\left[\left(\frac{1+jtu^{-\alpha}}{1+t^{2}u^{-2\alpha}}\right)e^{-jt\tau\sigma^{2}}\cdot\right.$
$\displaystyle\left.\exp\left(-2\pi\lambda\left(\int_{u}^{u+R_{\mathrm{s}}}\frac{p\,t^{2}r^{-2\alpha+1}}{1+t^{2}r^{-2\alpha}}\mathrm{d}r\,+\,\int_{u+R_{\mathrm{s}}}^{\infty}\frac{t^{2}\tau^{2}r^{-2\alpha+1}}{1+t^{2}\tau^{2}r^{-2\alpha}}\mathrm{d}r\,+\,\int_{u}^{u+R_{\mathrm{s}}}\frac{(1-p)t^{2}\tau^{2}r^{-2\alpha+1}}{1+t^{2}\tau^{2}r^{-2\alpha}}\mathrm{d}r\right.\right.\right.$
$\displaystyle\left.\left.\left.+\,j\left[\int_{u+R_{\mathrm{s}}}^{\infty}\frac{t\tau
r^{-\alpha+1}}{1+t^{2}\tau^{2}r^{-2\alpha}}\mathrm{d}r\,+\,\int_{u}^{u+R_{\mathrm{s}}}\frac{(1-p)\,t\tau
r^{-\alpha+1}}{1+t^{2}\tau^{2}r^{-2\alpha}}\mathrm{d}r\,-\,\int_{u}^{u+R_{\mathrm{s}}}\frac{p\,tr^{-\alpha+1}}{1+t^{2}r^{-2\alpha}}\mathrm{d}r\right]\right)\right)\right]\mathrm{d}t$
(57)
where the last step is obtained using multiplication of conjugate terms and
rearranging into the real and imaginary parts. Now, if we define
$\displaystyle M^{\prime}(t,u)=$ $\displaystyle 2\alpha{t^{-2/\alpha}}\left[\int_{u}^{u+R_{\mathrm{s}}}\frac{p\,t^{2}r^{-2\alpha+1}}{1+t^{2}r^{-2\alpha}}\mathrm{d}r\right.$
$\displaystyle\left.+\int_{u+R_{\mathrm{s}}}^{\infty}\frac{t^{2}\tau^{2}r^{-2\alpha+1}}{1+t^{2}\tau^{2}r^{-2\alpha}}\mathrm{d}r+\int_{u}^{u+R_{\mathrm{s}}}\frac{(1-p)t^{2}\tau^{2}r^{-2\alpha+1}}{1+t^{2}\tau^{2}r^{-2\alpha}}\mathrm{d}r\right],$
(58) $\displaystyle N^{\prime}(t,u)=$ $\displaystyle
2\alpha{t^{-2/\alpha}}\left[\int_{u+R_{\mathrm{s}}}^{\infty}\frac{t\tau
r^{-\alpha+1}}{1+t^{2}\tau^{2}r^{-2\alpha}}\mathrm{d}r\right.$
$\displaystyle+\left.\int_{u}^{u+R_{\mathrm{s}}}\frac{(1-p)\,t\tau
r^{-\alpha+1}}{1+t^{2}\tau^{2}r^{-2\alpha}}\mathrm{d}r-\int_{u}^{u+R_{\mathrm{s}}}\frac{p\,tr^{-\alpha+1}}{1+t^{2}r^{-2\alpha}}\mathrm{d}r\right],$
(59)
(57) can be written as
$\displaystyle\mathbb{P}\left(S>(I+\sigma^{2})\tau|\,X_{0}=u\right)$
$\displaystyle=\frac{1}{2}+\frac{1}{\pi}\int_{0}^{\infty}\frac{1}{t}\mathsf{Im}\left[\left(\frac{1+jtu^{-\alpha}}{1+t^{2}u^{-2\alpha}}\right)\right.$
$\displaystyle\times\left.\exp\left(-\frac{\pi}{\alpha}\lambda{\color[rgb]{0,0,0}t^{2/\alpha}}M^{\prime}(t,u)-j\frac{\pi}{\alpha}\lambda{\color[rgb]{0,0,0}t^{2/\alpha}}N^{\prime}(t,u)-jt\tau\sigma^{2}\right)\right]\mathrm{d}t.$
(60)
Now, with some trivial manipulations and substituting (60) in (9), we get
$\displaystyle\mathrm{p_{c}}(\tau,\lambda)=$
$\displaystyle\frac{1}{2}+\frac{1}{\pi}\int_{0}^{\infty}\int_{0}^{\infty}2\pi\lambda
ue^{-\pi\lambda
u^{2}}\cdot\frac{1}{t}\cdot\left[\frac{1}{1+t^{2}u^{-2\alpha}}\right]\cdot
e^{-\frac{\pi}{\alpha}\lambda t^{2/\alpha}M^{\prime}(t,\,u)}$
$\displaystyle\times\left[tu^{-\alpha}\cos{\left(\frac{\pi}{\alpha}\lambda{t^{2/\alpha}}N^{\prime}(t,u)+\,t\tau\sigma^{2}\right)}-\sin{\left(\frac{\pi}{\alpha}\lambda{t^{2/\alpha}}N^{\prime}(t,u)+\,t\tau\sigma^{2}\right)}\right]\mathrm{d}t\mathrm{d}u.$
(61)
Further, the forms of $M^{\prime}$ and $N^{\prime}$ can be simplified by
trivial manipulations and the definition of the incomplete Beta function in
(58) and (59) to get
$\displaystyle
M^{\prime}(t,u)=pQ\left(\frac{1}{\alpha},t^{2}(u+R_{\mathrm{s}})^{-2\alpha},t^{2}u^{-2\alpha}\right)$
$\displaystyle+\tau^{2/\alpha}(1-p)Q\left(\frac{1}{\alpha},(t\tau)^{2}(u+R_{\mathrm{s}})^{-2\alpha},(t\tau)^{2}u^{-2\alpha}\right)+\tau^{2/\alpha}Q\left(\frac{1}{\alpha},0,t^{2}\tau^{2}(u+R_{\mathrm{s}})^{-2\alpha}\right).$
(62)
and,
$\displaystyle
N^{\prime}(t,u)=-pQ\left(\frac{1}{\alpha}+\frac{1}{2},t^{2}(u+R_{\mathrm{s}})^{-2\alpha},t^{2}u^{-2\alpha}\right)$
$\displaystyle+\tau^{2/\alpha}(1-p)Q\left(\frac{1}{\alpha}+\frac{1}{2},(t\tau)^{2}(u+R_{\mathrm{s}})^{-2\alpha},(t\tau)^{2}u^{-2\alpha}\right)+\tau^{2/\alpha}Q\left(\frac{1}{\alpha}+\frac{1}{2},0,t^{2}\tau^{2}(u+R_{\mathrm{s}})^{-2\alpha}\right).$
(63)
Now, we can substitute
$\displaystyle t$ $\displaystyle\rightarrow s/{(\lambda\pi)}^{\alpha/2}$
$\displaystyle u$ $\displaystyle\rightarrow v/\sqrt{\lambda\pi}\ $ (64)
in (61), (62) and (63) to get the desired result.
## References
* [1] M. El-Hajjar and L. Hanzo, “A survey of digital television broadcast transmission techniques,” _IEEE Communications Surveys and Tutorials_, vol. 15, no. 4, pp. 1924–1949, 2013.
* [2] L. Fay, L. Michael, D. Gómez-Barquero, N. Ammar, and M. W. Caldwell, “An overview of the ATSC 3.0 physical layer specification,” _IEEE Transactions on Broadcasting_, vol. 62, pp. 159–171, 2016.
* [3] D. Gómez-Barquero, C. Douillard, P. Moss, and V. Mignone, “DVB-NGH: The next generation of digital broadcast services to handheld devices,” _IEEE Transactions on Broadcasting_ , vol. 60, no. 2, pp. 246–257, 2014.
* [4] D. Catrein, J. Huschke, and U. Horn, “Analytic evaluation of a hybrid broadcast-unicast TV offering,” in _Proc. IEEE Vehicular Technology Conference-Spring (VTC-Spring)_, 2008, pp. 2864–2868.
* [5] L. Rong, S. E. Elayoubi, and O. B. Haddada, “Performance evaluation of cellular networks offering TV services,” _IEEE Transactions on Vehicular Technology_ , vol. 60, no. 2, 2011.
* [6] J. G. Andrews, F. Baccelli, and R. K. Ganti, “A tractable approach to coverage and rate in cellular networks,” _IEEE Transactions on communications_ , vol. 59, no. 11, pp. 3122–3134, 2011.
* [7] J. G. Andrews, A. K. Gupta, and H. S. Dhillon, “A primer on cellular network analysis using stochastic geometry,” _arXiv preprint arXiv:1604.03183_ , 2016.
* [8] M. Haenggi, _Stochastic Geometry for Wireless Networks_. Cambridge: Cambridge University Press, 2012.
* [9] A. Guo and M. Haenggi, “Asymptotic deployment gain: A simple approach to characterize the SINR distribution in general cellular networks,” _IEEE Transactions on Communications_ , vol. 63, pp. 962–976, Mar. 2015.
* [10] J. G. Andrews, T. Bai, M. N. Kulkarni, A. Alkhateeb, A. K. Gupta, and R. W. Heath, “Modeling and analyzing millimeter wave cellular systems,” _IEEE Trans. Commun._ , vol. 65, no. 1, pp. 403–430, Jan 2017.
* [11] W. Lu and M. Di Renzo, “Stochastic geometry modeling of cellular networks: Analysis, simulation and experimental validation,” in _Proc. ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems (MSWiM)_ , 2015, p. 179–188.
* [12] A. Shokair, Y. Nasser, M. Crussière, J. Hélard, and O. Bazzi, “Analytical study of the probability of coverage in hybrid broadcast-unicast networks,” in _Proc. Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC)_, 2018, pp. 1–6.
* [13] R. Sahu, K. K. Chaurasia, and A. K. Gupta, “SINR and rate coverage of broadcast networks using stochastic geometry,” in _Proc. International Conference on Signal Processing and Communications (SPCOM)_ , July 2020, pp. 1–5.
* [14] L. Peterson and R. Groot, “Location-based advertising: The key to unlocking the most value in the mobile advertising and location-based services markets,” _Peterson Mobility Solutions_ , 2009.
* [15] L. Shi, E. Obregon, K. W. Sung, J. Zander, and J. Bostrom, “CellTV—on the benefit of TV distribution over cellular networks: A case study,” _IEEE Transactions on Broadcasting_ , vol. 60, no. 1, pp. 73–84, 2014.
* [16] J. Gil-Pelaez, “Note on the inversion theorem,” _Biometrika_, vol. 38, no. 3–4, pp. 481–482, 1951.
* [17] International Telecommunication Union, “Recommendation ITU-R P.1546-6, method for point-to-area predictions for terrestrial services in the frequency range 30 MHz to 4 000 MHz,” _P Series Radiowave Propagation_, August 2019.
* [18] ——, _Handbook on Digital Terrestrial Television Broadcasting Networks and Systems Implementation_ , 2016.
* [19] A. K. Gupta, X. Zhang, and J. G. Andrews, “SINR and throughput scaling in ultradense urban cellular networks,” _IEEE Wireless Communications Letters_ , vol. 4, no. 6, pp. 605–608, 2015.
# Fully developed anelastic convection with no-slip boundaries
Chris A. Jones1 <EMAIL_ADDRESS> Krzysztof A. Mizerski2, Mouloud Kessar3
1Department of Applied Mathematics, University of Leeds, Leeds, LS2 9JT, UK
2Department of Magnetism, Institute of Geophysics, Polish Academy of Sciences, ul. Ksiecia Janusza 64, 01-452 Warsaw, Poland
3Université de Paris, Institut de physique du globe de Paris, CNRS, IGN, F-75005 Paris, France
###### Abstract
Anelastic convection at high Rayleigh number in a plane parallel layer with no-slip boundaries is considered. Energy and entropy balance equations are
derived, and they are used to develop scaling laws for the heat transport and
the Reynolds number. The appearance of an entropy structure consisting of a
well-mixed uniform interior, bounded by thin layers with entropy jumps across
them, makes it possible to derive explicit forms for these scaling laws. These
are given in terms of the Rayleigh number, the Prandtl number, and the bottom
to top temperature ratio, which measures how stratified the layer is. The top
and bottom boundary layers are examined and they are found to be very
different, unlike in the Boussinesq case. Elucidating the structure of these
boundary layers plays a crucial part in determining the scaling laws. Physical
arguments governing these boundary layers are presented, concentrating on the
case in which the boundary layers are thin even when the stratification is
large, the incompressible boundary layer case. Different scaling laws are
found, depending on whether the viscous dissipation is primarily in the
boundary layers or in the bulk. The cases of both high and low Prandtl number
are considered. Numerical simulations of no-slip anelastic convection up to a
Rayleigh number of $10^{7}$ have been performed and our theoretical
predictions are compared with the numerical results.
## 1 Introduction
The problem of the influence of density stratification on developed convection
is of great importance from the astrophysical point of view. Giant planets and
stars are typically strongly stratified and the anelastic approximation, see
e.g. Ogura & Phillips (1962), Gough (1969) and Lantz & Fan (1999), is commonly
used to describe convection in their interiors, e.g. Toomre et al. (1976),
Glatzmaier & Roberts (1995), Brun & Toomre (2002), Browning et al. (2006),
Miesch et al. (2008), Jones & Kuzanyan (2009), Jones et al. (2011), Verhoeven
et al. (2015), Kessar et al. (2019) and many others. The anelastic
approximation is based on the convective system being a small departure from
the adiabatic state, which is appropriate for large scale systems with
developed, turbulent and thus strongly-mixing convective regions. The small
departure from adiabaticity induces convective velocities much smaller than
the speed of sound, so sound waves are neglected in the dynamics. We consider
a plane layer of fluid between two parallel plates distance $d$ apart, and we
assume the turbulent flow is spatially homogeneous in the horizontal
direction. We also assume that the convection is in a statistically steady
state, so that the time-averages of time-derivative terms can be neglected.
Most astrophysical applications are in spherical geometry, but the simpler
plane layer problem is a natural place to start our investigation of high
Rayleigh number anelastic convection.
In convection theory, we try to determine the dependencies of the
superadiabatic heat flux and the convective velocities (measured by the
Nusselt, $Nu$, and Reynolds $Re$ numbers) on the driving force measured by the
imposed temperature difference between the top and bottom plates, i.e. on the
Rayleigh number, $Ra$, and on the Prandtl number, $Pr$ (the ratio of the fluid
kinematic viscosity to its thermal diffusivity). Here we aim to discover how
these dependencies are affected by the stratification. We rely strongly on the
theory of Grossmann & Lohse (2000), later updated in Stevens et al. (2013), developed for Boussinesq, i.e. non-stratified, convection. However, compressible convection differs strongly from the
Boussinesq case, with the latter mostly corresponding to experimental
situations. There are two crucial differences, which have very important
consequences for the dynamics of convection. Firstly, in the compressible case
the viscous heating and the work of the buoyancy force are no longer
negligible compared to the heat transport. Secondly, in stratified convection
the boundary layers and the flow velocities are different at the top of the
layer and the bottom of the layer, unlike the Boussinesq case where the top
and bottom boundary layers have the same structure and the temperature of the
well-mixed interior is exactly half way between the temperature of the top and
bottom plates. So although our approach is based on that of Grossmann & Lohse
(2000), there are additional novel features required in the compressible
convection case. We develop the theory of fully developed convection in
stratified systems and study the dependence of the total superadiabatic heat
flux and the amplitude of convective flow on the number of density scale
heights in the layer. The scaling laws, i.e. the dependencies of the Nusselt and Reynolds numbers on the Rayleigh and Prandtl numbers, turn out to be the same as in Boussinesq convection.
In this paper, we concentrate on the convective regimes which seem to be most
relevant to current numerical capabilities, i.e. regimes most easily achieved
by numerical experiments. These are the regimes where the thermal dissipation
is dominated by the thermal boundary layer contribution. It is those regimes,
denoted by $I_{u}$, $I_{l}$, $II_{u}$ and $II_{l}$ on the phase diagram,
figure 2 of Grossmann & Lohse (2000), that in the Boussinesq case correspond
to Rayleigh numbers less than about $10^{12}$. It must be noted, however, that
contrary to the Boussinesq case, which is well established by numerous
experimental and numerical investigations, there are to date no experiments on
fully turbulent stratified convection, due to the difficulties of achieving
significant density stratification in laboratory settings. Some experiments
are being developed using the centrifugal acceleration in rapid rotating
systems to enhance the effective gravity (Menaut et al. 2019). There have also
been some numerical investigations of anelastic convection in a plane layer,
mostly focussed on elucidating how well the anelastic approximation performs
compared to fully compressible convection, Verhoeven et al. (2015) and Curbelo
et al. (2019). This latter paper notes that the top and bottom boundary layer
structures that occur in the case of high Prandtl number are different.
In addition to the dependence on Rayleigh and Prandtl number, our problem
depends on how stratified the layer is, which can be estimated by the ratio
$\Gamma$ of the temperature at the bottom of the layer $T_{B}$ to the
temperature at the top $T_{T}$. When $\Gamma$ is close to unity the layer is
nearly Boussinesq, but when $\Gamma$ is large there are many temperature and
density scale heights within the layer. We aim to derive the functional form
of $Nu(\Gamma,\,Ra,\,Pr)$ and $Re(\Gamma,\,Ra,\,Pr)$, but we cannot derive
reliable numerical values for the prefactors in anelastic convection. Since
experiments are not available, this will require high resolution high Ra
simulations, which are being developed, but are not yet in a state to
determine the prefactors accurately.
In § 2 we derive the anelastic equations and the reference states we use, and
outline the structure of high Rayleigh number anelastic convection. Further
details of the form of the anelastic temperature perturbation are given in
appendix A. In § 3 we derive the energy and entropy production integrals,
which are the fundamental building blocks for developing convective scaling
laws. In sections § 4 and § 5 we derive the physical arguments used to get the
key properties of the top and bottom boundary layers. In § 6 we derive the
scaling laws for the case where the viscous dissipation is primarily in the
boundary layers. The case where the dissipation is mainly in the bulk is dealt
with in appendix B. § 7 gives the results of our numerical simulations,
comparing them with our theoretical results. Our conclusions are in § 8.
## 2 Fully developed compressible convection under the anelastic
approximation
Consider a layer of compressible perfect gas between two parallel plates, of
thickness $d$, periodic in horizontal directions, the evolution of which is
described by the set of the Navier-Stokes, mass conservation and energy
equations under the anelastic approximation,
$\frac{\partial\mathbf{u}}{\partial
t}+\left(\mathbf{u}\cdot\nabla\right)\mathbf{u}=-\nabla\left(\frac{p}{\bar{\varrho}}\right)+\frac{g}{c_{p}}s\hat{\mathbf{e}}_{z}+\frac{\mu}{\bar{\rho}}\left[\nabla^{2}\mathbf{u}+\frac{1}{3}\nabla\left(\nabla\cdot\mathbf{u}\right)\right],$
(1) $\nabla\cdot\left(\bar{\varrho}\mathbf{u}\right)=0,$ (2)
$\bar{\rho}\bar{T}\left[\frac{\partial s}{\partial t}+\mathbf{u}\cdot\nabla
s\right]=k\nabla^{2}T+\mu\left[q+\partial_{j}u_{i}\partial_{i}u_{j}-\left(\nabla\cdot\mathbf{u}\right)^{2}\right],$
(3) $\frac{p}{\bar{p}}=\frac{\rho}{\bar{\rho}}+\frac{T}{\bar{T}},\qquad
s=c_{v}\frac{p}{\bar{p}}-c_{p}\frac{\rho}{\bar{\rho}},\quad\gamma=\frac{c_{p}}{c_{v}},\quad
c_{p}-c_{v}=R,$ (4)
where
$q=(\partial_{j}u_{i})(\partial_{j}u_{i})+\frac{1}{3}\left(\nabla\cdot\mathbf{u}\right)^{2},$
(5)
$\mathbf{u}$ being the fluid velocity, $p$ the pressure, $\rho$ the density,
$T$ the temperature and $s$ the entropy. Barred variables are adiabatic
reference state variables, unbarred variables denote the perturbation from the
reference state due to convection. The dynamic viscosity $\mu=\bar{\rho}\nu$,
the thermal conductivity $k$, gravity $g$ and the specific heats at constant
pressure, $c_{p}$, and constant volume, $c_{v}$, are all assumed constant. The
bounding plates are no-slip and impenetrable, so $\bf u=0$ there. We consider
the constant entropy boundary conditions
$s=\Delta S\quad\textrm{at}\quad z=0,\qquad s=0\quad\textrm{at}\quad z=d.$ (6)
Note that we do not replace the thermal diffusion term in (3) by an entropy
diffusion term, as is often done in anelastic approaches. With our no-slip
boundaries, there will be boundary layers which may be laminar even at very
high Rayleigh number, and entropy diffusion is not appropriate if laminar
boundary layers are present. We discuss the additional issues raised by
adopting constant temperature boundary conditions rather than constant entropy
conditions in Appendix A.
In the anelastic approximation, the full variables, $\hat{p}$, $\hat{\rho}$
and $\hat{T}$, are expanded in terms of the small parameter $\epsilon$, which
is defined precisely in equation (10) below, so
$\hat{p}=\bar{p}+\epsilon
p,\quad\hat{\rho}=\bar{\rho}+\epsilon\rho,\quad\hat{T}=\bar{T}+\epsilon
T,\quad u\sim(\epsilon gd)^{1/2},\quad t\sim\left(\frac{\epsilon
g}{d}\right)^{-1/2},\quad\hat{s}=\bar{s}+\epsilon s.$ (7)
where $\bar{p}$, $\bar{\rho}$ and $\bar{T}$ comprise the adiabatic reference
state. Here $\bar{s}$ is simply a constant, and since
$s=c_{v}\ln{\hat{p}}/{\hat{\rho}}^{\gamma}$ + const., and
$\bar{p}/{\bar{\rho}}^{\gamma}$ is constant, we obtain (4b) by choosing
the constant appropriately.
### 2.1 The adiabatic reference state
The reference state is the adiabatic static equilibrium governed by
$\mathrm{d}\bar{p}/\mathrm{d}z=-g\bar{\rho}$, $\bar{p}=R\bar{\rho}\bar{T}$ and
$\bar{p}=K{\bar{\rho}}^{\gamma}$, where $R$ is the gas constant, $z=0$ is the
bottom of the layer and $z=d$ the top. It follows that
$\bar{T}=T_{B}\left(1-\theta\frac{z}{d}\right),\quad\bar{\rho}=\rho_{B}\left(1-\theta\frac{z}{d}\right)^{m},\quad\bar{p}=\frac{gd\rho_{B}}{\theta\left(m+1\right)}\left(1-\theta\frac{z}{d}\right)^{m+1},$
(8a)
$\frac{gd}{c_{p}}=\Delta{\bar{T}}=T_{B}-T_{T}>0,\quad\theta=\frac{\Delta{\bar{T}}}{T_{B}},\quad
m=\frac{1}{\gamma-1},\quad\Gamma=\frac{T_{B}}{T_{T}}=\frac{1}{1-\theta},$
(8b)
which defines $\theta$, and the polytropic index $m$. We use subscripts $T$
and $B$ to denote conditions at the top and bottom boundary respectively. The
temperature ratio $\Gamma>1$, is a convenient measure of the compressibility.
$\Gamma\to 1$ is the Boussinesq limit, and highly compressible layers have
$\Gamma$ large. Note that $\Gamma^{m}=\rho_{B}/\rho_{T}$ is the ratio of the
highest to lowest density in the layer. The density ratio can be very large in
astrophysical applications, the density of the bottom of the solar convection
zone being $\sim 10^{6}$ times the density at the top. Sometimes the number of
density scale heights, $N_{\rho}$, (the scale height being defined at the top
of the layer) that fit into the layer is used to measure compressibility,
$N_{\rho}=m(\Gamma-1)$.
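The adiabatic polytrope above is easy to verify numerically. The following sketch (all parameter values are illustrative, not taken from the paper) builds the reference state and checks that the density contrast equals $\Gamma^{m}$, that $N_{\rho}=m(\Gamma-1)$, and that the profile is hydrostatic:

```python
import numpy as np

def adiabatic_state(z, d=1.0, T_B=1.0, rho_B=1.0, Gamma=3.5, gamma=5.0/3.0):
    """Adiabatic polytropic reference state; parameter values are illustrative."""
    theta = 1.0 - 1.0/Gamma            # theta = (T_B - T_T)/T_B
    m = 1.0/(gamma - 1.0)              # polytropic index
    f = 1.0 - theta*z/d
    T_bar = T_B*f
    rho_bar = rho_B*f**m
    p_shape = f**(m + 1.0)             # p_bar up to the constant g d rho_B/(theta(m+1))
    return T_bar, rho_bar, p_shape, theta, m

z = np.linspace(0.0, 1.0, 2001)
T_bar, rho_bar, p_shape, theta, m = adiabatic_state(z)

Gamma = T_bar[0]/T_bar[-1]             # temperature ratio T_B/T_T
N_rho = m*(Gamma - 1.0)                # number of density scale heights
density_ratio = rho_bar[0]/rho_bar[-1]
assert np.isclose(density_ratio, Gamma**m)   # rho_B/rho_T = Gamma^m

# Hydrostatic balance: d p_bar/dz is proportional to -rho_bar, with the
# constant of proportionality equal to -(m+1)*theta in these scaled units.
dpdz = np.gradient(p_shape, z)
ratio = dpdz[1:-1]/rho_bar[1:-1]
assert np.allclose(ratio, -(m + 1.0)*theta, rtol=1e-4)
```

With $\gamma=5/3$ the polytropic index is $m=3/2$, so a temperature ratio of $3.5$ gives a density contrast of $3.5^{3/2}\approx 6.5$ and $N_{\rho}=3.75$.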
### 2.2 The conduction state
The adiabatic reference state satisfies $\nabla^{2}{\bar{T}}=0$, but since it is isentropic it does not satisfy the entropy boundary conditions. The
anelastic conduction state is also a polytrope, but with a slightly more
negative temperature gradient, so ${\tilde{T}}_{B}=T_{B},\
{\tilde{T}}_{T}<T_{T}$. The conduction state is
$\tilde{T}=T_{B}\left(1-{\tilde{\theta}}\frac{z}{d}\right),\quad\tilde{\rho}=\rho_{B}\left(1-{\tilde{\theta}}\frac{z}{d}\right)^{\tilde{m}},\quad\tilde{p}=\frac{gd\rho_{B}}{{\tilde{\theta}}\left({\tilde{m}}+1\right)}\left(1-{\tilde{\theta}}\frac{z}{d}\right)^{{\tilde{m}}+1},$
(9a) ${\widetilde{\Delta
T}}=T_{B}-{\tilde{T}}_{T}>0,\quad{\tilde{\theta}}=\frac{{\widetilde{\Delta
T}}}{T_{B}},\quad{\tilde{m}}=\frac{gd}{R{\widetilde{\Delta T}}}-1.$ $None$
The small anelastic parameter $\epsilon$ is now defined as
$\epsilon={\tilde{\theta}}\frac{{\tilde{m}}+1-\gamma{\tilde{m}}}{\gamma}=-\frac{d}{T_{B}}\left[\frac{\mathrm{d}\tilde{T}}{\mathrm{d}z}+\frac{g}{c_{p}}\right]\ll
1,$ (10)
and the entropy in the conduction state is
${\tilde{s}}=c_{v}\ln\frac{\tilde{p}}{\tilde{\rho}^{\gamma}}+\mathrm{const}=\frac{\epsilon
c_{p}}{\theta}\ln\left(1-\theta\frac{z}{d}\right)+\mathrm{const},\
\textrm{so}\
s=\frac{c_{p}}{\theta}\ln\left(1-\theta\frac{z}{d}\right)+\mathrm{const}$
(11)
which is the scaled entropy, see (7), correct to $O(\epsilon)$ since
$\tilde{\theta}$ and $\theta$ differ by only $O(\epsilon)$. Since the
boundaries have fixed entropy, the entropy at the boundaries in the conduction
state defines the entropy drop across the layer for all Rayleigh numbers, so
$\Delta
S=\frac{c_{p}}{\theta}\ln\Gamma=\frac{c_{p}\Gamma\ln\Gamma}{\Gamma-1},$ (12)
relating to the entropy boundary conditions (6). Note that as our entropy
variable $s$ is scaled by $\epsilon$, the entropy drop is $O(\epsilon)$. Some
anelastic papers take the conduction state as the reference state, and some
take the adiabatic state as the reference state. Taking the conduction state
as the reference state is appropriate when convection near critical Rayleigh
number is considered, but at the large Rayleigh numbers considered here, the
conduction state is less relevant. Although the conduction state tends to the
adiabatic reference state as $\epsilon\to 0$, the thermodynamic variables are
not the same in the two systems: $T=0$ with respect to the adiabatic state
corresponds to $T=T_{B}z/d$ if the conduction state is chosen as the reference
state.
In equation (1) we have made use of (4b), (8a) and (10) to write
(Braginsky & Roberts, 1995; Lantz & Fan, 1999)
$\displaystyle-\frac{\nabla
p}{\bar{\rho}}-\frac{\rho}{\bar{\rho}}g\hat{\mathbf{e}}_{z}=-\nabla\left(\frac{p}{\bar{\rho}}\right)+\frac{g}{c_{p}}s\hat{\mathbf{e}}_{z}+O(\epsilon).$
(13)
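Equation (12) fixes the total entropy drop once $\Gamma$ is chosen. A quick numerical check (a sketch, with $c_{p}=1$ for convenience) confirms the Boussinesq limit $\Delta S\to c_{p}$ as $\Gamma\to 1$ and the slow, logarithmic growth at strong stratification:

```python
import numpy as np

def entropy_drop(Gamma, c_p=1.0):
    """Delta S = c_p * Gamma * ln(Gamma) / (Gamma - 1), eq. (12)."""
    return c_p*Gamma*np.log(Gamma)/(Gamma - 1.0)

# Equivalent form Delta S = c_p ln(Gamma)/theta, with theta = 1 - 1/Gamma.
Gamma = 3.0
theta = 1.0 - 1.0/Gamma
assert np.isclose(entropy_drop(Gamma), np.log(Gamma)/theta)

# Boussinesq limit: Gamma -> 1 gives Delta S -> c_p
# (expand ln(Gamma) = (Gamma-1) - (Gamma-1)^2/2 + ...).
assert abs(entropy_drop(1.0 + 1e-8) - 1.0) < 1e-6

# Strongly stratified layers: Delta S ~ c_p ln(Gamma), growing only slowly.
assert entropy_drop(100.0) < 5.0
```

So even a layer with a hundredfold temperature contrast carries an entropy drop of only a few $c_{p}$.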
### 2.3 The Nusselt and Rayleigh numbers in anelastic convection
Next we consider the superadiabatic heat flux. The horizontal average at level
$z$ is denoted by $\langle\ \rangle_{h}$. At the boundaries, all the heat is
carried by conduction, and if the total temperature $\hat{T}=\bar{T}+\epsilon
T$, then the total heat flux at the boundaries is
$-kd{\left\langle\hat{T}\right\rangle_{h}}/dz=-kd\bar{T}/dz-k\epsilon
d\left\langle T\right\rangle_{h}/dz$, but the superadiabatic part is obtained
by subtracting off the heat flux carried along the adiabat, so we let
$\epsilon F^{super}=-\epsilon k\frac{d\left\langle
T\right\rangle_{h}}{dz}\Big{|}_{z=0},\quad{\rm so}\quad
F^{super}=-k\frac{d\left\langle T\right\rangle_{h}}{dz}\Big{|}_{z=0}.$ (14)
The Nusselt number in anelastic convection is defined as the ratio of the
superadiabatic heat flux divided by the heat conducted down the conduction
state superadiabatic gradient. Note that the flux conducted down the adiabatic
gradient is ignored in this definition, so
$Nu=\frac{F^{super}d}{kT_{B}},$ (15)
so $Nu$ is close to unity near onset, and is large in fully developed
convection. For fixed entropy boundary conditions, the Rayleigh number is
defined as
$Ra=\frac{g\Delta Sd^{3}\rho_{B}^{2}}{\mu k}\approx\frac{c_{p}\Delta
S\Delta{\bar{T}}d^{2}\rho_{B}^{2}}{\mu k}.$ (16)
The anelastic approximation is asymptotically valid in the limit $\epsilon\to
0$. Note that a small superadiabatic temperature gradient does not imply a small Rayleigh number $Ra$, since the diffusion coefficients can be small; in fact, to get $Ra\sim O(1)$ in the limit $\epsilon\to 0$ we must have
$\frac{k}{\rho_{B}c_{p}}\sim\left(gd^{3}\epsilon\right)^{1/2},\quad\frac{\mu}{\rho_{B}}\sim\left(gd^{3}\epsilon\right)^{1/2}.$
(17)
which allows large but finite $Ra$ even when the superadiabatic gradient is small.
To derive the anelastic equations (1) to (5), we insert (7) into the full
compressible equations and divide the momentum equation by $\epsilon$, the
mass conservation equation by $\epsilon^{1/2}$ and the energy equation by
$\epsilon^{3/2}$. Having taken this limit, the parameter $\epsilon$ no longer
appears in our analysis. However, if anelastic work is compared to fully
compressible situations, then a finite value of $\epsilon$ must be chosen, and
the anelastic results are only approximate, though there is a growing body of
evidence that the anelastic approximation does capture the main features of
subsonic compressible convection.
### 2.4 High Rayleigh number convection
We have the following physical picture in mind. In strongly turbulent
convection we expect the entropy $s$ to be well-mixed away from boundary
layers near $z=0,d$. We denote the global spatial average over the convecting
layer by $||\ \ ||$ and the horizontal average at level $z$ by $\langle\
\rangle_{h}$. The total entropy drop is the conduction state value $\Delta
S=c_{p}\ln\Gamma/\theta.$ Since the entropy is constant in the bulk interior,
we define the entropy drops $\Delta S_{B}$ and $\Delta S_{T}$ across the
bottom and top boundary layers respectively. These will not be equal, with
$\Delta S_{T}$ normally considerably larger than $\Delta S_{B}$. We must
however have
$\Delta S_{B}+\Delta S_{T}=\Delta S.$ (18)
We consider only the case where both the top and bottom boundary layers are
laminar. At extremely high $Ra$ these layers may become turbulent as can
happen in the Boussinesq case. The laminar boundary layer case is simpler, and
gives predictions which can be broadly compared with numerical simulations,
though it is difficult for numerical simulations to get into the fully
developed large Rayleigh and Nusselt number regime we are aiming at here. A
schematic picture of the horizontally-averaged entropy profile expected in
highly supercritical anelastic convection is shown in figure 1(a).
Since the heat flux through the boundary layers is determined by thermal
diffusion rather than entropy diffusion, we need to express the temperature
jumps across the thermal boundary layers in terms of the entropy jumps. From
(4) we obtain
$\frac{\left(\Delta\rho\right)_{i}}{\rho_{i}}\approx\frac{1}{\gamma-1}\left[\frac{\left(\Delta
T\right)_{i}}{T_{i}}-\gamma\frac{\left(\Delta
s\right)_{i}}{c_{p}}\right],\qquad\frac{\left(\Delta
p\right)_{i}}{p_{i}}\approx\frac{\gamma}{\gamma-1}\left[\frac{\left(\Delta
T\right)_{i}}{T_{i}}-\frac{\left(\Delta s\right)_{i}}{c_{p}}\right],$ (19)
where the $\Delta$ quantities refer to the jump in the horizontally averaged
value across the boundary layer and the subscript $i$ stands either for $B$ or
$T$. We also define the thermal and viscous boundary layer thicknesses,
$\delta_{i}^{th}$ and $\delta_{i}^{\nu}$ with $i=B,T$, which we use to obtain
scaling estimates. Numerical simulations indicate that the horizontal velocity
$U_{H}=\left(\left\langle u_{x}^{2}\right\rangle_{h}+\left\langle
u_{y}^{2}\right\rangle_{h}\right)^{1/2}$ (20)
has local maxima close to both boundaries (see e.g. figure 3(b) below), so
these maxima are a convenient way to define the velocity jumps across the
viscous boundary layers, $\Delta U_{i}$, so
$\Delta U_{B}=U_{H}(z=z_{max,B}),\quad\Delta U_{T}=U_{H}(z=z_{max,T}),$ (21)
where $z=z_{max,B}$, $z=z_{max,T}$ are the locations of the local maxima. The
thermal boundary layer thickness for the entropy, $\delta_{i}^{th}$, and the
viscous boundary layer thickness, $\delta_{i}^{\nu}$, can be defined as
$\delta_{i}^{th}=\left\{-\frac{1}{\Delta S_{i}}\frac{d\left\langle S\right\rangle_{h}}{dz}{\Big{|}}_{z=z_{i}}\right\}^{-1},\quad\delta_{i}^{\nu}=\left\{\pm\frac{1}{\Delta U_{i}}\frac{dU_{H}}{dz}{\Big{|}}_{z=z_{i}}\right\}^{-1},\quad z_{i}=z_{B},\,z_{T}$ (22)
Figure 1: (a) A schematic picture of the entropy profile in developed convection. (b) A schematic picture of the anelastic temperature perturbation in developed convection.
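The slope-based definitions (22) recover the expected thickness when applied to a model profile. As a sketch (the exponential bottom-layer profile and all numbers here are illustrative assumptions, not taken from the simulations):

```python
import numpy as np

# Model bottom boundary layer (illustrative): entropy relaxes from the wall
# value to the well-mixed bulk over a scale delta0.
delta0, dS_B, s_bulk = 0.02, 0.6, 1.0
z = np.linspace(0.0, 0.2, 20001)
s = s_bulk + dS_B*np.exp(-z/delta0)

# Eq. (22): delta^th = { -(1/Delta S_B) d<s>_h/dz at z = 0 }^(-1).
slope_wall = (s[1] - s[0])/(z[1] - z[0])     # one-sided wall derivative, negative
delta_est = (-(1.0/dS_B)*slope_wall)**(-1)
assert np.isclose(delta_est, delta0, rtol=1e-3)
```

For this profile the wall slope is $-\Delta S_{B}/\delta_{0}$, so the definition returns $\delta_{0}$ up to the finite-difference error.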
In the boundary layers, the dominant balance in the $z$-component of the
Navier-Stokes equation occurs between the pressure gradient and the buoyancy
force. Mass conservation in the boundary layers means $u_{z,i}\sim
O(\delta_{i}^{\nu})$ so the vertical component of inertia is small. The
boundary layers are therefore approximately hydrostatic,
$\left(\Delta p\right)_{i}\approx\frac{g}{c_{p}}\rho_{i}\Delta
s_{i}\delta_{i}^{th}.$ (23)
Inserting (23) into (19b) leads to
$\frac{\left(\Delta T\right)_{i}}{T_{i}}\approx\frac{\left(\Delta
s\right)_{i}}{c_{p}}\left(1+\theta\frac{\delta_{i}^{th}}{d}\frac{T_{B}}{T_{i}}\right).$
(24)
Typically the term $(\theta\delta_{i}^{th}T_{B})/(T_{i}d)$ resulting from the
pressure jump across the boundary layers is expected to be small because the
boundary layer is thin. However, in simulations where the Rayleigh number is
bounded above by numerical constraints, the top boundary layer may not be as
thin as desired for accurate asymptotics to apply, and the factor
$T_{B}/T_{T}$ can be large in layers containing many scale heights. We refer
to the case where the pressure term is not negligible as the compressible
boundary layer case. However, in this work we assume the boundary layers are
incompressible, which is valid provided $T_{B}/T_{T}$ remains finite as the
Rayleigh number increases and the boundary layers become very thin. Then the
pressure fluctuation term is constant in both boundary layers, so that in the
boundary layers
$\frac{\left(\Delta T\right)_{i}}{T_{i}}\approx\frac{\left(\Delta
s\right)_{i}}{c_{p}},\ \ \textrm{and}\ \ \frac{\left(\nabla
T\right)_{i}}{T_{i}}\approx\frac{\left(\nabla s\right)_{i}}{c_{p}}$ (25)
and defining the temperature boundary layer thicknesses similarly to (22),
$\delta_{i}^{T}=\left\{-\frac{1}{\Delta T_{i}}\frac{d\left\langle T\right\rangle_{h}}{dz}{\Big{|}}_{z=z_{i}}\right\}^{-1},$ (26)
the temperature boundary layer thicknesses are the same as the entropy
boundary layer thicknesses. Note that in the compressible boundary layer case
the entropy and temperature boundary layer thicknesses will be different. For
incompressible boundary layers and high Rayleigh number, the Nusselt number
can be written in terms of the boundary layer thicknesses, using (15), (14),
(26) and (25),
$Nu=\frac{d}{\delta^{th}_{T}}\frac{\Delta S_{T}}{\Gamma
c_{p}}=\frac{d}{\delta^{th}_{B}}\frac{\Delta S_{B}}{c_{p}}.$ (27)
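Since both forms in (27) must give the same Nusselt number, the two thermal boundary-layer thicknesses cannot be chosen independently: $\delta^{th}_{T}/\delta^{th}_{B}=\Delta S_{T}/(\Gamma\,\Delta S_{B})$. A sketch with illustrative values (the split $\Delta S_{T}=0.8\,\Delta S$ is an assumption for the example, not a result of the paper):

```python
import numpy as np

def nusselt_pair(d, delta_T, delta_B, dS_T, dS_B, Gamma, c_p=1.0):
    """Top and bottom expressions for Nu from eq. (27)."""
    Nu_top = (d/delta_T)*dS_T/(Gamma*c_p)
    Nu_bot = (d/delta_B)*dS_B/c_p
    return Nu_top, Nu_bot

Gamma, c_p, d = 3.0, 1.0, 1.0
dS = c_p*Gamma*np.log(Gamma)/(Gamma - 1.0)   # total entropy drop, eq. (12)
dS_T = 0.8*dS                                 # assumed split; dS_T > dS_B as in the text
dS_B = dS - dS_T                              # eq. (18)
delta_B = 0.01                                # illustrative bottom-layer thickness
delta_T = delta_B*dS_T/(Gamma*dS_B)           # ratio required for consistency of (27)

Nu_top, Nu_bot = nusselt_pair(d, delta_T, delta_B, dS_T, dS_B, Gamma)
assert np.isclose(Nu_top, Nu_bot)
```

The example makes the point that the asymmetry of the entropy split and the stratification $\Gamma$ together fix the ratio of the two boundary-layer thicknesses.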
In figure 1(b) we sketch the horizontally-averaged anelastic temperature
perturbation $\langle T\rangle_{h}$. This is sometimes referred to as the
superadiabatic temperature (e.g. Verhoeven et al. 2015). Note that with our
constant entropy boundary conditions, $\langle T\rangle_{h}$ is not zero at
the boundaries. We show in appendix A that the offsets, $\langle
T\rangle_{hB}$ at $z=0$ and $\langle T\rangle_{hT}$ at $z=d$, are both
positive and we show also that in the bulk, turbulent pressure effects make
the gradient of $T$ positive as shown in figure 1(b). This means that the
total horizontally averaged temperature gradient in the presence of convection
is less negative than the adiabatic reference state, so that on horizontal
average the layer is subadiabatically stratified (e.g. Korre et al. 2017). Of
course, in some parts of the layer the local temperature gradient must be
superadiabatic to drive the convection, but other parts are subadiabatic so
that the horizontal average can be subadiabatic.
To obtain the anelastic temperature fluctuation as sketched in figure 1(b), we
need to make use of equations (4a) and (4b), so we need to make a
specific choice for entropy at the boundaries. Here we have chosen to take the
entropy at the top boundary as zero, so the entropy at the bottom boundary is
$s=\Delta S$. A different choice of entropy constant adds an easily found
function of $z$ to $T$, $\rho$ and $p$ but this does not affect the velocity
field obtained. One further point is that if (1) is horizontally averaged, the
horizontal average satisfies a first order differential equation in $z$ (see
appendix A for details), so a boundary condition on $\langle p\rangle_{h}$ is
required. Here we choose the natural condition that the anelastic density
perturbation vanishes when integrated over the layer, that is
$||\ \rho\ ||=0\quad\Rightarrow\quad\langle p\rangle_{h,T}=\langle
p\rangle_{h,B}.$ (28)
This means that the total mass in the layer is the same as in the adiabatic
reference state. As we see in appendix A, this means the horizontally averaged
anelastic pressure perturbations at the top and bottom of the layer must be
equal.
## 3 Energy and entropy production integrals
Understanding the energy transfer and entropy production in convective flow is
the key to understanding the physics of compressible convection. Therefore we
derive now a few exact relations which will allow us to study some general
aspects of the dynamics of developed compressible convection. We assume any
initial transients in the convection have been eliminated, and we are in a
statistically steady state. Formally, this means we consider time-averaged
quantities throughout the paper.
### 3.1 Energy balance
By multiplying the Navier-Stokes equation (1) by $\bar{\rho}{u}$ and averaging
over the entire volume (recalling that horizontal averages of $x$ and $y$
derivatives are zero) we obtain the following relation
$\frac{g}{c_{p}}||\bar{\rho}u_{z}s||=\mu||q||,$ (29)
stating that the total work per unit volume of the buoyancy force is equal to
the total viscous dissipation in the fluid per unit volume. In deriving (29)
use has been made of the no-slip boundary conditions and equation (2).
We derive the superadiabatic heat flux in the system at every $z$ by averaging
over a horizontal plane and integrating the heat equation (3) from $0$ to $z$,
$\displaystyle F^{super}$ $\displaystyle=$ $\displaystyle-k\frac{d\left\langle
T\right\rangle_{h}}{dz}\Big{|}_{z=0}=\left\langle\bar{\rho}\bar{T}u_{z}s\right\rangle_{h}-k\frac{d\left\langle
T\right\rangle_{h}}{dz}$ (30) $\displaystyle+$
$\displaystyle\frac{g}{c_{p}}\int_{0}^{z}\left\langle\bar{\rho}u_{z}s\right\rangle_{h}\mathrm{d}z-\mu\int_{0}^{z}\left\langle
q\right\rangle_{h}\mathrm{d}z-2\mu\left[\left\langle
u_{z}\frac{du_{z}}{\mathrm{d}z}\right\rangle_{h}-\frac{m\Delta{\bar{T}}}{\bar{T}d}\left\langle
u_{z}^{2}\right\rangle_{h}\right].$
In deriving this expression, we have made use of (4a,d) and
$(\partial_{j}u_{i})(\partial_{i}u_{j})-(\nabla\cdot{\bf
u})^{2}=\partial_{j}\partial_{i}(u_{i}u_{j})-2\partial_{j}(u_{j}\nabla\cdot{\bf
u})=\partial_{j}\partial_{i}(u_{i}u_{j})+2\partial_{j}(u_{j}u_{z}\frac{m\Delta{\bar{T}}}{\bar{T}})$
(31)
since the continuity equation gives
$\nabla\cdot{\bf
u}=\partial_{i}u_{i}=\frac{u_{z}m\Delta{\bar{T}}}{{\bar{T}}d},$ (32)
and we recall that $x$ or $y$ derivatives vanish on horizontal averaging. As
$z\to d$ all terms with a factor $u_{z}$ tend to zero, so on using (29) we
obtain the overall energy balance equation,
$F^{super}=-k\frac{d\left\langle
T\right\rangle_{h}}{dz}\Big{|}_{z=0}=-k\frac{d\left\langle
T\right\rangle_{h}}{dz}\Big{|}_{z=d},$ (33)
thus in a stationary state the heat flux entering the fluid volume at $z=0$
must be equal to the outgoing heat flux $z=d$.
### 3.2 Entropy balance
This energy balance equation alone is not very helpful for evaluating the
Nusselt number. We need the entropy balance equation, obtained by dividing the
energy equation (3) by $\bar{T}$, horizontally averaging, and integrating from
$0$ to $z$,
$\displaystyle\left\langle\bar{\rho}u_{z}s\right\rangle_{h}$ $\displaystyle=$
$\displaystyle-\frac{k}{T_{B}}\frac{\mathrm{d}}{\mathrm{d}z}\left\langle
T\right\rangle_{h}\Big{|}_{z=0}+\frac{k}{\bar{T}}\frac{\mathrm{d}}{\mathrm{d}z}\left\langle
T\right\rangle_{h}\Big{|}_{z}-\int_{0}^{z}\frac{k\Delta{\bar{T}}}{{\bar{T}}^{2}d}\frac{\mathrm{d}}{\mathrm{d}z}\left\langle
T\right\rangle_{h}\,dz+\int_{0}^{z}\frac{\mu}{\bar{T}}\left\langle
q\right\rangle_{h}\,dz$ (34) $\displaystyle+$
$\displaystyle\int_{0}^{z}\frac{\mu}{\bar{T}}\left\langle\partial_{j}(\partial_{i}(u_{i}u_{j}))-2\partial_{j}(u_{j}(\partial_{i}u_{i}))\right\rangle_{h}\,dz$
where use has been made of equation (31). The overall entropy balance equation
comes from taking the limit $z\to d$ of (34), noting $u_{z}\to 0$ in this
limit,
$\displaystyle\frac{k}{T_{B}}\frac{\mathrm{d}}{\mathrm{d}z}\left\langle
T\right\rangle_{h}\Big{|}_{z=0}-\frac{k}{T_{T}}\frac{\mathrm{d}}{\mathrm{d}z}\left\langle
T\right\rangle_{h}\Big{|}_{z=d}$ $\displaystyle=$
$\displaystyle-\int_{0}^{d}\frac{k\Delta{\bar{T}}}{{\bar{T}}^{2}d}\frac{\mathrm{d}}{\mathrm{d}z}\left\langle
T\right\rangle_{h}\,dz+\int_{0}^{d}\frac{\mu}{\bar{T}}\left\langle
q\right\rangle_{h}\,dz$ (35)
Equations (34) and (35) look complicated, but they simplify considerably
when the top and bottom boundary layers are thin. We start with (34), which we
write as
$\displaystyle S_{conv}=S_{diff}+S_{visc}$ (36)
where
$\displaystyle S_{conv}=\left\langle\bar{\rho}u_{z}s\right\rangle_{h},$ (37)
the net entropy carried out of the region $(0,z)$ by the convective velocity
at level $z$,
$\displaystyle
S_{diff}=-\frac{k}{T_{B}}\frac{\mathrm{d}}{\mathrm{d}z}\left\langle
T\right\rangle_{h}\Big{|}_{z=0}+\frac{k}{\bar{T}}\frac{\mathrm{d}}{\mathrm{d}z}\left\langle
T\right\rangle_{h}\Big{|}_{z}-\int_{0}^{z}\frac{k\Delta{\bar{T}}}{{\bar{T}}^{2}d}\frac{\mathrm{d}}{\mathrm{d}z}\left\langle
T\right\rangle_{h}\,dz$ (38)
so that $S_{diff}$ is the entropy balance in region $(0,z)$ of the entropy
change due to thermal diffusion. This is divided into the first term, which
represents the positive entropy being conducted into our region at the bottom
boundary (the gradient of $\langle T\rangle_{h}$ is negative there, see figure
1b), the second term is the entropy conducted across level $z$, and the third
term is the entropy production by internal diffusion in our given region. By
looking at figure 1b it is apparent that when the boundary layers are thin,
the first term is much larger than the other two except when $z$ is in the
boundary layers. If $z$ is in the bulk, the gradient of $\langle T\rangle_{h}$
is $O(\Delta{\bar{T}}/d)$ whereas at the boundary it is
$O(\Delta{\bar{T}}/\delta^{th})$, much larger since the boundary layer is
thin. The third term is always small compared to the first, because the
gradient is $O(\Delta{\bar{T}}/d)$ outside the boundary layers. The integrand
is of order $O(\Delta{\bar{T}}/\delta^{th})$ in the boundary layers, but
because they are thin this only contributes a small amount to the integral. So
when the boundary layers are thin
$\displaystyle
S_{diff}\approx-\frac{k}{T_{B}}\frac{\mathrm{d}}{\mathrm{d}z}\left\langle
T\right\rangle_{h}\Big{|}_{z=0}\quad\textrm{if $z$ is in the bulk},$ (39)
$\displaystyle
S_{diff}\approx-\frac{k}{T_{B}}\frac{\mathrm{d}}{\mathrm{d}z}\left\langle
T\right\rangle_{h}\Big{|}_{z=0}+\frac{k}{T_{T}}\frac{\mathrm{d}}{\mathrm{d}z}\left\langle
T\right\rangle_{h}\Big{|}_{z=d}\quad\textrm{if $z=d$.}$ (40)
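The dominance of the boundary-flux term can be checked numerically. The sketch below is ours, not from the paper: it builds a piecewise-linear model of $\langle T\rangle_{h}$ with thin boundary layers (every parameter value is an illustrative assumption) and evaluates the three terms of (38) at a bulk height.

```python
import numpy as np

# Model horizontally averaged temperature profile with thin boundary
# layers; every numerical value here is an illustrative assumption.
d = 1.0        # layer depth
delta = 0.01   # thermal boundary-layer thickness, delta << d
T_B = 1.0      # bottom temperature scale
dT = 0.5       # superadiabatic temperature drop across the layer
dTbar = 1.0    # adiabatic drop, Delta T-bar
k = 1.0        # thermal conductivity

z = np.linspace(0.0, d, 20001)
# half of dT drops across each thin boundary layer, flat in the bulk
T = np.where(z < delta, T_B - 0.5 * dT * z / delta,
             np.where(z > d - delta,
                      T_B - dT + 0.5 * dT * (d - z) / delta,
                      T_B - 0.5 * dT))
Tbar = T_B - dTbar * z / d       # linear background temperature
grad = np.gradient(T, z)
dz = z[1] - z[0]

i = np.searchsorted(z, 0.5 * d)  # evaluate S_diff at a bulk height
term1 = -(k / T_B) * grad[0]                    # conducted in at z = 0
term2 = (k / Tbar[i]) * grad[i]                 # conducted across level z
term3 = -np.sum(k * dTbar / (Tbar[:i]**2 * d) * grad[:i]) * dz

print(term1, term2, term3)  # term1 exceeds the others by O(d/delta)
```

With $\delta^{th}/d=0.01$ the boundary term is two orders of magnitude larger than the internal-diffusion term, consistent with (39).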
We now turn to
$S_{visc}=\int_{0}^{z}\frac{\mu}{\bar{T}}\left\langle
q\right\rangle_{h}\,dz+\int_{0}^{z}\frac{\mu}{\bar{T}}\left\langle\partial_{j}(\partial_{i}(u_{i}u_{j}))-2\partial_{j}(u_{j}(\partial_{i}u_{i}))\right\rangle_{h}\,dz.$
(41)
Because of the horizontal averaging, and using equations (2), ($None$) and
(32), the second integral can be written
$\displaystyle\int_{0}^{z}\frac{\mu}{\bar{T}}\left\langle\partial_{j}(\partial_{i}(u_{i}u_{j}))-2\partial_{j}(u_{j}(\partial_{i}u_{i}))\right\rangle_{h}\,dz=\qquad\qquad$
$\displaystyle\int_{0}^{z}\frac{\mu}{\bar{T}}\left[\frac{\partial^{2}}{\partial
z^{2}}\left\langle
u_{z}^{2}\right\rangle_{h}-\frac{m\Delta{\bar{T}}}{{\bar{T}}d}\frac{\partial}{\partial
z}\left\langle
u_{z}^{2}\right\rangle_{h}-\frac{m(\Delta{\bar{T}})^{2}}{{\bar{T}}^{2}d^{2}}\left\langle
u_{z}^{2}\right\rangle_{h}\right]\,dz.$ (42)
We now consider the magnitude of the terms in equation (41). If the root mean
square velocity in the bulk is $U$, we expect the horizontal velocity to vary
from 0 to $O(U)$ across the boundary layers of thickness $\delta_{i}^{\nu}$,
so the velocity gradients appearing in $q$ are of magnitude
$O(U/\delta_{i}^{\nu})$. $q$ itself is therefore
$O(U^{2}/(\delta_{i}^{\nu})^{2})$, and since the boundary layers are thin
their contribution to the first integral in $S_{visc}$ is $O(\mu
U^{2}/{\bar{T}}\delta_{i}^{\nu})$. In the boundary layers $u_{z}$ is small,
but in the bulk we expect $u_{z}$ to be $O(U)$ and so $\langle
u_{z}^{2}\rangle_{h}$ is $O(U^{2})$. Because $\langle u_{z}^{2}\rangle_{h}$ is
horizontally averaged, it will vary on a length-scale $d$ with $z$, so the
gradient of $\langle u_{z}^{2}\rangle_{h}$ is $O(U^{2}/d)$ and the second
derivative is $O(U^{2}/d^{2})$. From (42), the order of magnitude of the
second term in (41) is then $O(\mu U^{2}/{\bar{T}}d)$, which is
$O(\delta_{i}^{\nu}/d)$ smaller than the contribution from the term due to
$q$. Therefore provided the Rayleigh number is high enough for the boundary
layers to be thin, equation (34) is asymptotically equivalent to
$\displaystyle\left\langle\bar{\rho}u_{z}s\right\rangle_{h}\sim-\frac{k}{T_{B}}\frac{\mathrm{d}}{\mathrm{d}z}\left\langle
T\right\rangle_{h}\Big{|}_{z=0}+\int_{0}^{z}\frac{\mu}{\bar{T}}\left\langle
q\right\rangle_{h}\,dz.$ (43)
when $z$ is in the bulk. Note that this simplification still holds if the
dissipation in the bulk is larger than the dissipation in the boundary layers,
which can happen, as noted by Grossmann & Lohse (2000). When the dissipation
is primarily in the boundary layers, the left-hand-side of (43) is constant
in the bulk, which we exploit later. In either case, as $z\to d$ we get
$\displaystyle\frac{k}{T_{B}}\frac{\mathrm{d}}{\mathrm{d}z}\left\langle
T\right\rangle_{h}\Big{|}_{z=0}-\frac{k}{T_{T}}\frac{\mathrm{d}}{\mathrm{d}z}\left\langle
T\right\rangle_{h}\Big{|}_{z=d}=\frac{F^{super}\Delta{\bar{T}}}{T_{B}T_{T}}\sim\int_{0}^{d}\frac{\mu}{\bar{T}}\left\langle
q\right\rangle_{h}\,dz.$ (44)
Note also that in these expressions, the term $(\nabla\cdot{\bf u})^{2}/3$ in
equation (5) makes a negligible contribution to (44) compared to the gradient
terms by using (32).
### 3.3 The Boussinesq limit
At first sight, it appears that our entropy balance formulation of the
equation for dissipation (LABEL:eq:3_7) is fundamentally different from that
used by Grossmann & Lohse (2000) in the Boussinesq case, who start from
equation (2.5) of Siggia (1994),
$(Nu-1)Ra=\langle(\nabla v)^{2}\rangle,\ \textrm{or}\quad g\alpha\kappa\Delta
T(Nu-1)=\nu\int_{0}^{d}\langle q\rangle_{h}\,dz$ (45)
in our dimensional units. Here $\Delta T$ is the superadiabatic temperature
difference between the boundaries. In the Boussinesq limit $\Gamma\to 1$, the
basic state temperature and density tend to constant values $T_{B}$ and
$\rho_{B}$ respectively, so the thermal diffusivity $\kappa$ and kinematic
viscosity $\nu$ are constants, $\kappa=k/\rho_{B}c_{p}$ and
$\nu=\mu/\rho_{B}$. For a perfect gas the coefficient of expansion
$\alpha=1/T_{B}$. In this subsection we show that (45) is in fact the
Boussinesq limit of the entropy balance equation (LABEL:eq:3_7), which we use
to derive the scaling laws in §6 below. Our entropy balance formulation is a
generalization of the Grossmann & Lohse (2000) formulation, which is now seen
to be a limiting case of the more general entropy balance approach.
Following Spiegel & Veronis (1960) we note that from the $z$-component of (1)
$p/{\bar{\rho}}d\sim gs/c_{p}$, so
$\frac{p}{\bar{p}}\sim\frac{g{\bar{\rho}}d}{{\bar{p}}}\frac{s}{c_{p}}\sim\frac{d}{H}\frac{s}{c_{p}}$
(46)
where $H$ is the pressure scale height. In the Boussinesq limit $d/H$ becomes
small; the Boussinesq limit $\Gamma\to 1$ is the thin layer limit (Spiegel &
Veronis, 1960). Then ($None$a,b) become
$\frac{T}{T_{B}}\sim-\frac{\rho}{\rho_{B}},\quad\frac{s}{c_{p}}\sim\alpha T,$
(47)
so the entropy variable becomes equivalent to the superadiabatic temperature
variable. The entropy jump $\Delta S$ across the layer can be written as a
superadiabatic temperature jump $\Delta T=\Delta S/\alpha c_{p}$. As
$\Gamma\to 1$, from (12) $\Delta S/c_{p}\to 1$ so $\Delta T\to
1/\alpha=T_{B}$. From energy conservation (33) the gradient of $\langle
T\rangle_{h}$ is the same at the top and bottom of the layer, so
(LABEL:eq:3_7) can be written
$-\frac{k\Delta{\bar{T}}}{T_{B}^{2}}\frac{d}{dz}\langle
T\rangle_{h}{\Big{|}}_{z=0}=-\frac{k\Delta{\bar{T}}}{T_{B}^{2}d}\int_{0}^{d}\frac{d}{dz}\langle
T\rangle_{h}\,dz+\frac{\mu}{T_{B}}\int_{0}^{d}\langle q\rangle_{h}\,dz,$ (48)
using the constancy of $\bar{T}$ in the Boussinesq limit. From (14) and (15)
$Nu=-\frac{d}{T_{B}}\frac{d}{dz}\langle T\rangle_{h}{\Big{|}}_{z=0}\
\textrm{or}\quad Nu=-\frac{d}{\Delta T}\frac{d}{dz}\langle
T\rangle_{h}{\Big{|}}_{z=0},$ (49)
which is the familiar form of the Boussinesq Nusselt number, the ratio of the
total heat flux at the bottom to the conducted heat flux $-k\Delta T/d$. From
($None$d) $\Delta{\bar{T}}$ can be written $gd/c_{p}$ so (48) becomes
$\frac{k}{c_{p}T_{B}}Nug\alpha\Delta T=\frac{k}{c_{p}T_{B}}g\alpha\Delta
T+\frac{\mu}{T_{B}}\int_{0}^{d}\langle q\rangle_{h}\,dz\ \ \textrm{or}\ \
(Nu-1)g\alpha\Delta T\kappa=\nu\int_{0}^{d}\langle q\rangle_{h}\,dz,$ (50)
which is (45), showing that the dissipation integral which plays a key role in
Grossmann & Lohse’s (2000) analysis is indeed the Boussinesq limit of the
entropy balance equation (LABEL:eq:3_7).
## 4 The boundary layers and Prandtl number effects
Figure 2: (a) Thermal and viscous boundary layers in the case $Pr>1$. The
thermal diffusion is smaller, so the thermal boundary layer is nested inside
the viscous boundary layer. (b) The case $Pr<1$, where the viscous boundary
layer is nested inside the thermal boundary layer.
As in the Boussinesq case, the thermal and viscous boundary layers can be
nested inside each other when the Prandtl number is different from unity.
A central idea in the theory of fully developed Boussinesq convection is based
on the assumption that the structure of turbulent convective flow is always
characterized by the presence of a large-scale convective roll called the
_wind of turbulence_, Grossmann & Lohse (2000). This idea, which in the non-
stratified case stems from vast numerical and experimental evidence, is
retained in the case of anelastic convection. However, the significant
stratification in the anelastic case breaks the Boussinesq up-down symmetry,
and thus we must distinguish between the magnitude of the wind of turbulence
near the bottom of the bulk and its magnitude near the top of the bulk,
denoted by $U_{B}$ and $U_{T}$ respectively, which can now significantly
differ. So the label $U_{i}$ can denote either the horizontal velocity just
outside the top or bottom viscous boundary layers. We also assume that this
wind of turbulence forms boundary layers with a horizontal length scale
comparable to the layer depth $d$. It is of course an assumption that such
layers form in anelastic convection, but they are observed to occur in
incompressible flow, and the limited simulations we have available give this
idea some support. Whereas the results in §2 and §3 are asymptotically valid
in the anelastic framework in the limit of large $Ra$, what follows is
dependent on the Grossmann-Lohse (2000) approach being valid for anelastic
convection.
The Prandtl number is a constant in this problem, given by
$Pr=\frac{\mu c_{p}}{k}.$ (51)
In figure 2(a) the high Prandtl number case is shown, with the thinner thermal
boundary layer nested inside the viscous boundary layer. The velocity at the
edge of the thermal boundary layer is then
$\delta_{i}^{th}U_{i}/\delta_{i}^{\nu}$, assuming the velocity falls off
linearly inside the viscous boundary layer over the thinner thermal boundary
layer. In the boundary layers, advection balances diffusion, so from the
momentum equation (1) we estimate that
$\frac{\rho_{i}U_{i}^{2}}{d}\sim\frac{\mu
U_{i}}{\left(\delta_{i}^{\nu}\right)^{2}}\ \ \textrm{so}\ \ U_{i}\sim\frac{\mu
d}{\rho_{i}\left(\delta_{i}^{\nu}\right)^{2}}.$ (52)
For the entropy boundary layers, from (3)
$\frac{\rho_{i}T_{i}U_{i}s}{d}\frac{\delta_{i}^{th}}{\delta_{i}^{\nu}}\sim
k\nabla^{2}T\approx\frac{kT_{i}}{c_{p}}\nabla^{2}s\sim\frac{kT_{i}s}{c_{p}\left(\delta_{i}^{th}\right)^{2}},\
\ \textrm{so}\ \
U_{i}=\frac{kd\delta_{i}^{\nu}}{\rho_{i}c_{p}\left(\delta_{i}^{th}\right)^{3}},$
(53)
where (25) has been used and the factor $\delta_{i}^{th}/{\delta_{i}^{\nu}}$
arises because the horizontal velocity is reduced because the entropy boundary
layer is thinner than the viscous boundary layer. Dividing (52) by (53) we
obtain
$\frac{\delta_{i}^{\nu}}{\delta_{i}^{th}}\sim Pr^{1/3}$ (54)
giving the ratio of the viscous to thermal boundary layer thickness for the
high Prandtl number case. Note that although the top and bottom boundary
layers have different thicknesses, this ratio is the same for both layers.
For the low Prandtl number case, the viscous boundary layer lies inside the
thermal boundary layer, see figure 2(b). Now the velocity at the edge of the
thermal boundary layer is the same as that at the edge of the viscous boundary
layer, so the velocity reduction factor $\delta_{i}^{th}/\delta_{i}^{\nu}$ is
no longer required, so (53) becomes
$\frac{\rho_{i}T_{i}U_{i}s}{d}\sim
k\nabla^{2}T\approx\frac{kT_{i}}{c_{p}}\nabla^{2}s\sim\frac{kT_{i}s}{c_{p}\left(\delta_{i}^{th}\right)^{2}},\
\ \textrm{so}\ \
U_{i}=\frac{kd}{\rho_{i}c_{p}\left(\delta_{i}^{th}\right)^{2}},$ (55)
giving the ratio of the boundary layer thicknesses as
$\frac{\delta_{i}^{\nu}}{\delta_{i}^{th}}\sim Pr^{1/2}$ (56)
in the low Prandtl number case.
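The two regimes (54) and (56) can be collected into a small helper. This is a scaling sketch only (the function name is ours, and order-unity prefactors are dropped):

```python
def boundary_layer_ratio(Pr):
    """Scaling estimate of delta_nu / delta_th from (54) and (56);
    order-unity prefactors are dropped (function name is ours)."""
    if Pr >= 1.0:
        # thermal layer nested inside the viscous layer, eq. (54)
        return Pr ** (1.0 / 3.0)
    # viscous layer nested inside the thermal layer, eq. (56)
    return Pr ** 0.5

print(boundary_layer_ratio(8.0))   # high-Pr case: Pr^(1/3)
print(boundary_layer_ratio(0.25))  # low-Pr case: Pr^(1/2)
```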
## 5 The boundary layer ratio problem
In Boussinesq convection, there is a symmetry about the mid-plane which means
that the top and bottom boundary layers have the same thickness and structure,
and the temperature of the bulk interior is midway between that of the
boundaries. In anelastic convection, this symmetry no longer holds, and the
top and bottom boundary layers can be very different, and the entropy of the
bulk interior is significantly different from $\Delta S/2$. This raises the
question of how the ratios of the thicknesses of the top and bottom boundary
layers, the ratio of the bulk horizontal velocities just outside the boundary
layers, and the ratio of the entropy jumps across the layers are actually
determined. We assume the incompressible boundary layer case holds throughout
this section.
We write the ratio of the entropy jumps across the boundary layers as
$r_{s}=\frac{\Delta S_{T}}{\Delta S_{B}},\ \ \textrm{so}\ \ \Delta
S_{B}=\frac{\Delta S}{1+r_{s}}\ \ \textrm{and the entropy in the bulk
is}\quad\frac{r_{s}}{1+r_{s}}\Delta S,$ (57)
and the ratio of the anelastic temperature jumps across the boundary layers as
$r_{T}=\frac{\Delta T_{T}}{\Delta T_{B}}.$ (58)
We define the ratio of the velocities at the edge of the boundary layers as
$r_{u}=\frac{U_{T}}{U_{B}}.$ (59)
The last important ratio is the ratio of the thicknesses of the boundary
layers. In general the viscous and thermal boundary layers will be of
different thickness, but here we start with the thermal boundary layers which
have thicknesses at the top and bottom of $\delta^{th}_{T}$ and
$\delta^{th}_{B}$ with ratio
$r_{\delta}=\frac{\delta^{th}_{T}}{\delta^{th}_{B}}.$ (60)
We have four unknown ratios, so we need four equations to determine them. Our
first equation uses the fact that the heat flux passing through the bottom
boundary must equal the heat flux passing through the top boundary. Since this
heat flux is entirely by conduction close to the boundary,
$-k\frac{dT}{dz}|_{i}\sim k\frac{\Delta T_{i}}{\delta^{th}_{i}}\Rightarrow
r_{\delta}=r_{T}.$ (61)
We can use the balance of advection and diffusion in the boundary layers
exploited in §4 to obtain another equation relating the boundary layer ratios.
In §4 we saw that the ratio of the thermal boundary layer thicknesses was the
same as the ratio of the viscous boundary layer thicknesses,
$\frac{\delta^{th}_{T}}{\delta^{th}_{B}}=\frac{\delta^{\nu}_{T}}{\delta^{\nu}_{B}}=r_{\delta},$
(62)
so we use the viscous boundary balance equation (52) to estimate
$\frac{\rho_{B}U_{B}^{2}}{d}\sim\frac{\mu
U_{B}}{{\delta^{\nu}_{B}}^{2}},\quad\frac{\rho_{T}U_{T}^{2}}{d}\sim\frac{\mu
U_{T}}{{\delta^{\nu}_{T}}^{2}}\quad\Rightarrow
r_{u}{r_{\delta}}^{2}\sim\frac{\rho_{B}}{\rho_{T}}=\Gamma^{m}.$ (63)
We now need an equation for the ratio of bulk large scale flow velocities at
the top and bottom of the layer, $r_{u}$. We consider first the case where the
viscous dissipation occurs primarily in the boundary layers, which is likely
to be true in numerical simulations with no-slip boundaries. Since the entropy
production occurs in the boundary layers and is relatively small in the
interior, and entropy diffusion is small in the bulk interior, the convected
entropy flux $\langle\bar{\rho}u_{z}s\rangle_{h}$ is approximately constant in
the bulk interior. We now multiply the equation of motion (1) by
${\bar{\rho}}{\bf u}$ and average over the bulk interior, ignoring the small
viscous term in the bulk, to get
$\frac{1}{2}\frac{\partial}{\partial z}\left(\bar{\rho}\left\langle
u_{z}u^{2}\right\rangle_{h}\right)\approx-\frac{\partial}{\partial z}\langle
u_{z}p\rangle_{h}+\frac{g}{c_{p}}\left\langle{\bar{\rho}}u_{z}s\right\rangle_{h}.$
(64)
Near the boundary layers, the pressure term $p$ will be approximately constant
as shown in (23), and since $\langle u_{z}\rangle_{h}=0$, the term involving
$\langle u_{z}p\rangle_{h}$ will be small there, and we ignore it. Since we
expect $\langle\bar{\rho}u_{z}s\rangle_{h}$ to be approximately the same just
outside the two boundary layers,
$\frac{\partial}{\partial z}\left(\bar{\rho}\left\langle
u_{z}u^{2}\right\rangle_{h}\right)|_{T}\approx\frac{\partial}{\partial
z}\left(\bar{\rho}\left\langle u_{z}u^{2}\right\rangle_{h}\right)|_{B},$ (65)
where here $T$ and $B$ refer to conditions at the top of the bulk and the
bottom of the bulk respectively. In the turbulent bulk interior (unlike the
boundary layers), we expect the three components of velocity to have similar
magnitudes. It remains to estimate the length-scale associated with the
$z$-derivative, and this is perhaps the most uncertain part of the analysis.
Astrophysical mixing length theory uses the pressure scale height, or a
multiple of the pressure scale height, as the mixing length for the vertical
length scale. Since we are only interested in the top and bottom ratios here,
our results are independent of what multiple of the scale height is chosen.
Some support for the mixing length idea can be derived from Kessar et al.
(2019), which shows that convective length scales decrease in the bulk towards
the top of the layer. We also note that because we are only concerned with
ratios, it doesn’t matter whether the pressure scale height or the density
scale height is used. Adopting the pressure scale height,
$H_{T}=\frac{d}{(m+1)(\Gamma-1)},\quad H_{B}=\frac{\Gamma
d}{(m+1)(\Gamma-1)}\quad\Rightarrow\quad\frac{H_{T}}{H_{B}}=\Gamma^{-1}$ (66)
so that equation (65) gives
$\frac{\rho_{T}u_{T}^{3}}{H_{T}}\sim\frac{\rho_{B}u_{B}^{3}}{H_{B}}\Rightarrow
r_{u}^{3}\sim\Gamma^{m-1}\Rightarrow r_{u}\sim\Gamma^{\frac{m-1}{3}}.$ (67)
From the incompressible boundary layer equation (25) we have $r_{s}=\Gamma
r_{T}$, so with the other three ratio equations (61), (63) and (67) we have
$r_{s}=\Gamma r_{T},\quad r_{\delta}=r_{T},\quad
r_{u}{r_{\delta}}^{2}=\Gamma^{m},\quad{r_{u}}=\Gamma^{\frac{m-1}{3}},$ (68)
with solution
$r_{u}=\Gamma^{\frac{m-1}{3}},\quad r_{\delta}=\Gamma^{\frac{2m+1}{6}},\quad
r_{s}=\Gamma^{\frac{2m+7}{6}},\quad r_{T}=\Gamma^{\frac{2m+1}{6}}.$ (69)
In the case where $m=3/2$, appropriate for ideal monatomic gas,
$r_{u}=\Gamma^{\frac{1}{6}},\quad r_{\delta}=\Gamma^{\frac{2}{3}},\quad
r_{s}=\Gamma^{\frac{5}{3}},\quad r_{T}=\Gamma^{\frac{2}{3}}.$ (70)
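The solution (69) can be verified directly against the four ratio equations. The check below is ours; the values of $\Gamma$ and $m$ are arbitrary illustrative choices:

```python
def ratio_solution(Gamma, m):
    """The boundary-layer ratios of equation (69)."""
    r_u = Gamma ** ((m - 1.0) / 3.0)
    r_delta = Gamma ** ((2.0 * m + 1.0) / 6.0)
    r_s = Gamma ** ((2.0 * m + 7.0) / 6.0)
    r_T = Gamma ** ((2.0 * m + 1.0) / 6.0)
    return r_u, r_delta, r_s, r_T

# verify the four governing relations for a range of Gamma and m
for Gamma in (1.5, 1.9438, 4.6416, 10.0):
    for m in (1.0, 1.5, 3.0):
        r_u, r_delta, r_s, r_T = ratio_solution(Gamma, m)
        assert abs(r_s - Gamma * r_T) < 1e-12 * r_s       # r_s = Gamma r_T
        assert abs(r_delta - r_T) < 1e-12                 # r_delta = r_T
        assert abs(r_u * r_delta**2 - Gamma**m) < 1e-9 * Gamma**m
print("ratio equations satisfied")
```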
Since $\Gamma$ is always greater than 1 and can be large, we see that the
entropy in the bulk is much closer to the entropy of the bottom boundary than
to the entropy of the top boundary. The top boundary layer is thicker than the
bottom boundary layer. The challenge for numerical simulations is to get to
sufficiently high Rayleigh number that the top boundary layer is truly thin,
as required by our asymptotic analysis. The ratio of the bulk velocities at
the top and bottom, is only weakly dependent on $\Gamma$, so again rather
large values of $Ra$ are required to establish the asymptotic trend.
Note that in deriving equation (63) we assumed that the horizontal length
scale for advection along the boundary layer was $d$, as did Grossmann & Lohse
(2000). We found that choosing the vertical length scales in the bulk to be
based on the pressure scale height in equation (66) agreed reasonably with the
numerics, see § 7 below, so a natural question is whether the horizontal
length scale near the top boundary becomes less than $d$ when the layer is
strongly stratified. The picture from our numerics is somewhat mixed, and is
discussed further in § 7 below. For moderate stratification, the numerics
suggest the boundary layers do appear to extend to $d$ at both the top and
bottom boundary; including a factor $\Gamma$ in the horizontal length scales
in the boundary layers gives poorer agreement with numerical estimates of the
boundary layer thickness ratio. However, at the largest values of $\Gamma$ we
did find a departure from the (70) scalings which could be accounted for by
some reduction in the horizontal length scale near the top boundary.
In the case where the viscous dissipation is mainly in the bulk, which happens
at low $Pr$ (Grossmann & Lohse, 2000), the equations (57-63) still hold, but
the argument for equation (67) breaks down: the entropy flux is no longer
approximately constant in the bulk, because viscous dissipation there is no
longer negligible. This case is discussed in Appendix B.
## 6 The Nusselt number and Reynolds number scaling laws
When the boundary layers are thin, the overall entropy balance reduces to
(44),
$\frac{F^{super}\Delta{\bar{T}}}{T_{B}T_{T}}\sim\int_{0}^{d}\frac{\mu}{\bar{T}}\left\langle
q\right\rangle_{h}\,dz.$ (71)
If the dissipation is mainly in laminar boundary layers, the $z$-derivatives
of the horizontal velocities will dominate terms in the expression for $q$, so
$q=(\partial_{j}u_{i})(\partial_{j}u_{i})+\frac{1}{3}\left(\nabla\cdot\mathbf{u}\right)^{2}\approx\left(\frac{\partial
u_{x}}{\partial z}\right)^{2}+\left(\frac{\partial u_{y}}{\partial
z}\right)^{2}\sim\frac{U_{i}^{2}}{(\delta_{i}^{\nu})^{2}},$ (72)
in these layers. So integrating over the boundary layers of thickness
$\delta_{i}^{\nu}$ and assuming $T$ varies little in these layers,
$\frac{F^{super}\Delta{\bar{T}}}{T_{B}T_{T}}\sim\frac{\mu
U_{B}^{2}}{T_{B}\delta_{B}^{\nu}}+\frac{\mu
U_{T}^{2}}{T_{T}\delta_{T}^{\nu}}=\frac{\mu
U_{B}^{2}}{T_{B}\delta_{B}^{\nu}}\left(1+\frac{\Gamma
r_{u}^{2}}{r_{\delta}}\right).$ (73)
where we have used the ratios (59), (60) and ($None$g). Now we can write the
superadiabatic flux in terms of the thermal boundary layer thicknesses, using
(14), (26), (25), (57) and (18) to get
$F^{super}=\frac{k\Delta T_{B}}{\delta_{B}^{th}}=\frac{kT_{B}\Delta
S}{c_{p}(1+r_{s})\delta_{B}^{th}}.$ (74)
Inserting this into (73) and using the definition (16) for the Rayleigh
number, the entropy balance equation can be written
$\frac{k^{2}Ra\Gamma}{c_{p}^{2}(1+r_{s})d^{2}\rho_{B}^{2}}\frac{\delta_{B}^{\nu}}{\delta_{B}^{th}}\sim
U_{B}^{2}\left(1+\frac{\Gamma r_{u}^{2}}{r_{\delta}}\right).$ (75)
Now we introduce the Reynolds number near the bottom boundary
$Re_{B}=\frac{\rho_{B}U_{B}d}{\mu},$ (76)
noting that the Reynolds number near the top, $Re_{T}$, is given by
$r_{u}Re_{B}$. We also use the definition of the Prandtl number, (51), to
write (75) as
$\frac{Ra\Gamma}{Pr^{2}(1+r_{s})}\frac{\delta_{B}^{\nu}}{\delta_{B}^{th}}\sim
Re_{B}^{2}\left(1+\frac{\Gamma r_{u}^{2}}{r_{\delta}}\right).$ (77)
The entropy balance equation has thus given us a relation between the Reynolds
number and the Rayleigh number, which is similar to that of regime I of
Grossmann & Lohse (2000) but with additional factors of $\Gamma$. In the high
Prandtl number limit applying (54) gives
$Re_{B}\sim Ra^{1/2}Pr^{-5/6}\Gamma^{1/2}(1+r_{s})^{-1/2}\left(1+\frac{\Gamma
r_{u}^{2}}{r_{\delta}}\right)^{-1/2},$ (78)
while in the low Prandtl number case we get using (55)
$Re_{B}\sim Ra^{1/2}Pr^{-3/4}\Gamma^{1/2}(1+r_{s})^{-1/2}\left(1+\frac{\Gamma
r_{u}^{2}}{r_{\delta}}\right)^{-1/2}.$ (79)
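The two Reynolds-number scalings (78) and (79) can be collected into a sketch. This is ours, with $O(1)$ prefactors dropped and the boundary-layer ratios taken from (69):

```python
def reynolds_scaling(Ra, Pr, Gamma, m=1.5):
    """Re_B up to an O(1) prefactor, from (78) for Pr >= 1 and (79)
    for Pr < 1, with the boundary-layer ratios taken from (69)."""
    r_u = Gamma ** ((m - 1.0) / 3.0)
    r_delta = Gamma ** ((2.0 * m + 1.0) / 6.0)
    r_s = Gamma ** ((2.0 * m + 7.0) / 6.0)
    pr_exp = -5.0 / 6.0 if Pr >= 1.0 else -3.0 / 4.0
    return (Ra ** 0.5 * Pr ** pr_exp * Gamma ** 0.5
            * (1.0 + r_s) ** -0.5
            * (1.0 + Gamma * r_u**2 / r_delta) ** -0.5)

# Re_B grows as Ra^{1/2} at fixed Pr and Gamma, so quadrupling Ra
# should double the estimate (ratio ~ 2)
print(reynolds_scaling(4e6, 1.0, 2.0) / reynolds_scaling(1e6, 1.0, 2.0))
```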
We now use the viscous boundary layer balance between advection and diffusion,
(52), $\rho_{B}U_{B}/d\sim\mu/(\delta^{\nu}_{B})^{2}$, to obtain a balance
between $Nu$ and $Re_{B}$. The boundary layer balance becomes
$Re_{B}\sim\left(\frac{d}{\delta_{B}^{\nu}}\right)^{2},\ \textrm{but}\
Nu=\frac{d}{\delta_{B}^{th}}\frac{\Gamma\ln\Gamma}{(1+r_{s})(\Gamma-1)}$ (80)
using the expression (27) for the Nusselt number together with (12) and (57).
Eliminating $d/\delta_{B}^{th}$ between these,
$Re_{B}^{1/2}=\frac{\delta_{B}^{th}}{\delta_{B}^{\nu}}\frac{Nu(\Gamma-1)(1+r_{s})}{\Gamma\ln\Gamma}.$
(81)
As above, the ratio of the boundary layer thicknesses can be evaluated in
terms of the Prandtl number, and (81) allows us to eliminate $Re_{B}$ from
(77) to obtain the Nusselt number as a function of Rayleigh number,
$\frac{Ra\Gamma}{Pr^{2}(1+r_{s})}\left(\frac{\delta_{B}^{\nu}}{\delta_{B}^{th}}\right)^{5}\sim\left(\frac{Nu(\Gamma-1)(1+r_{s})}{\Gamma\ln\Gamma}\right)^{4}\left(1+\frac{\Gamma
r_{u}^{2}}{r_{\delta}}\right).$ (82)
At large $Pr$, (54) gives the Nusselt number – Rayleigh number scaling in the
form
$Nu\sim
Ra^{1/4}Pr^{-1/12}\frac{\Gamma^{5/4}\ln\Gamma}{\Gamma-1}(1+r_{s})^{-5/4}\left(1+\frac{\Gamma
r_{u}^{2}}{r_{\delta}}\right)^{-1/4},$ (83)
while at low $Pr$ (56) gives
$Nu\sim
Ra^{1/4}Pr^{1/8}\frac{\Gamma^{5/4}\ln\Gamma}{\Gamma-1}(1+r_{s})^{-5/4}\left(1+\frac{\Gamma
r_{u}^{2}}{r_{\delta}}\right)^{-1/4}.$ (84)
If we accept the ratio scalings derived in §5, then in the case of a monatomic
ideal gas, $\gamma=5/3$, $m=3/2$, we can write
$(1+r_{s})^{-5/4}\left(1+\frac{\Gamma
r_{u}^{2}}{r_{\delta}}\right)^{-1/4}=\left(1+\Gamma^{5/3}\right)^{-5/4}\left(1+\Gamma^{2/3}\right)^{-1/4},$
(85)
so as $\Gamma$ becomes large, $Nu$ decreases as $\Gamma^{-2}\ln\Gamma$. So we
expect the Nusselt number, the dimensionless measure of the heat transport, to
be considerably smaller when the layer is strongly compressible, i.e. when
$\Gamma$ is large and there are many density scale heights in the layer at a
given $Ra$ and $Pr$. Analogous results for the case where the dissipation is
mainly in the bulk rather than in the boundary layers, as can happen at low
$Pr$, are given in Appendix B.
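The claimed large-$\Gamma$ behaviour can be confirmed numerically. The short check below is ours: it evaluates the $\Gamma$-dependent factor of (83) with the ratios of (85) and compares it to $\Gamma^{-2}\ln\Gamma$:

```python
import math

def nu_gamma_factor(Gamma):
    """Gamma-dependent factor of the Nusselt scaling (83) with the
    monatomic-gas ratios (85); O(1) prefactors are dropped."""
    return (Gamma ** 1.25 * math.log(Gamma) / (Gamma - 1.0)
            * (1.0 + Gamma ** (5.0 / 3.0)) ** -1.25
            * (1.0 + Gamma ** (2.0 / 3.0)) ** -0.25)

# the factor should approach Gamma^{-2} ln(Gamma) for large Gamma:
# the printed ratio tends to 1 as Gamma grows
for Gamma in (1e2, 1e4, 1e6):
    print(Gamma, nu_gamma_factor(Gamma) / (Gamma ** -2 * math.log(Gamma)))
```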
In the Boussinesq limit, $\Gamma$ is close to unity and
$\theta=(\Gamma-1)/\Gamma$ is small, so $\ln\Gamma/(\Gamma-1)\to 1$ and
$\rho_{B}\to\rho_{T}$ and $T_{B}\to T_{T}$. Equation (1) reduces to the usual
Boussinesq equation with $s/c_{p}$ replaced by $\alpha T$, where for a perfect
gas $\alpha=1/{\bar{T}}$ is the coefficient of expansion, consistent with
(25). The total jump of entropy across the layer, $\Delta S$, is replaced by
the total temperature jump $\Delta T=\Delta S/\alpha c_{p}$ so the Rayleigh
number (16) reduces to the familiar form $Ra=g\alpha\Delta Td^{3}/\kappa\nu$
where $\kappa=k/{\bar{\rho}}c_{p}$ and $\nu=\mu/{\bar{\rho}}$ are the thermal
diffusivity and kinematic viscosity respectively. These are both constant in
the Boussinesq limit, and the ratios $r_{u}$, $r_{\delta}$ and $r_{s}$ all go
to unity, see §5. Our scaling laws (78), (79), (83) and (84) all reduce to
those of regimes $I_{u}$ and $I_{l}$ of Grossmann & Lohse (2000). Grossmann &
Lohse also give suggested prefactors for their scaling laws in table 2 of
their paper, and since our anelastic scaling laws reduce to theirs in the
Boussinesq limit, our prefactors should in theory be consistent with theirs.
In practice, the prefactors depend on the aspect ratio of the experiments (or
numerical experiments) used to determine them (see e.g. Chong et al., 2018).
For the case of the high Prandtl number regime $I_{u}$, their values were
$Nu\approx 0.33Ra^{1/4}Pr^{-1/12}$ and $Re\approx 0.039Ra^{1/2}Pr^{-5/6}$, so
(83) becomes
$Nu\approx
C_{Nu}Ra^{1/4}Pr^{-1/12}\frac{\Gamma^{5/4}\ln\Gamma}{\Gamma-1}(1+r_{s})^{-5/4}\left(1+\frac{\Gamma
r_{u}^{2}}{r_{\delta}}\right)^{-1/4},\quad C_{Nu}=0.93.$ (86)
In the low $Pr$ limit where (84) applies, regime $I_{l}$ of Grossmann & Lohse
(2000), they suggest a prefactor corresponding to $C_{Nu}=0.76$ rather than
0.93. For the Reynolds number, (78) becomes
$Re_{B}\approx
C_{Re}Ra^{1/2}Pr^{-5/6}\Gamma^{1/2}(1+r_{s})^{-1/2}\left(1+\frac{\Gamma
r_{u}^{2}}{r_{\delta}}\right)^{-1/2},\quad C_{Re}=0.078.$ (87)
There is some uncertainty about the prefactor $C_{Re}$, discussed in §7 below.
Prefactors in the case $I_{l}$, and in the case where dissipation is mainly in
the bulk, their case $II_{l}$ (see Appendix B), can also be found.
## 7 The numerical results and discussion
Figure 3: Horizontally averaged entropy (in units of $c_{p}$) and horizontal
mean velocity profiles (Peclet number units) from the numerical simulations
for $\Gamma=1.9438$, $Ra=10^{6}$. (a) Entropy profile at $Pr=1$. (b)
Horizontal velocity profile at $Pr=1$. (c) Entropy profile at $Pr=10$. (d)
Horizontal velocity profile at $Pr=10$. (e) Entropy profile at $Pr=0.25$. (f)
Horizontal velocity profile at $Pr=0.25$.
Figure 4: Horizontally averaged entropy ${\langle}S{\rangle}_{h}$ and
horizontal mean velocity $U_{H}$ profiles for (a,b) $\Gamma=2.924$,
$Ra=3\times 10^{6}$, $Pr=1$: (c,d) $\Gamma=4.6416$, $Ra=6\times 10^{6}$,
$Pr=1$.
We have tested the theoretical predictions of our asymptotic theory using
numerical simulations of high Rayleigh number plane layer anelastic
convection. The numerical code differs from the theory in one respect, as it
uses entropy diffusion $k_{s}$ rather than temperature diffusion in the heat
equation, so
$\bar{\rho}\bar{T}\left[\frac{\partial s}{\partial t}+\mathbf{u}\cdot\nabla
s\right]=k_{s}\nabla\cdot{\bar{T}}\nabla
s+\mu\left[q+\partial_{j}u_{i}\partial_{i}u_{j}-\left(\nabla\cdot\mathbf{u}\right)^{2}\right],$
(88)
where $k_{s}$ is constant, replaces (3). This simplifies the code because
entropy is the only anelastic thermodynamic variable computed, and it can be
justified in circumstances where turbulent diffusion dominates laminar
diffusion (Braginsky & Roberts, 1995). Constant entropy boundary conditions
were used in the code. The energy balance equation (33) is only changed by
replacing $-kd{\langle T\rangle}_{h}/dz$ by $-k_{s}{\bar{T}}d{\langle
s\rangle}_{h}/dz$. In the entropy balance equation, entropy diffusion changes
$S_{diff}$ to
$\displaystyle S_{diff}=-{k_{s}}\frac{\mathrm{d}}{\mathrm{d}z}\left\langle
s\right\rangle_{h}\Big{|}_{z=0}+{k_{s}}\frac{\mathrm{d}}{\mathrm{d}z}\left\langle
s\right\rangle_{h}\Big{|}_{z}-\int_{0}^{z}\frac{k_{s}\Delta{\bar{T}}}{{\bar{T}}d}\frac{\mathrm{d}}{\mathrm{d}z}\left\langle
s\right\rangle_{h}\,dz.$ (89)
Just as in the temperature diffusion case, the last two terms are negligible
when the entropy boundary layers are thin, except when $z$ is in the boundary
layers, when the second term is significant. In the case when the viscous
dissipation is primarily in the boundary layers, the argument leading to (65)
still holds, so the ratios satisfy (69) in the entropy diffusion case as well
as in the temperature diffusion case. It is therefore reasonable to compare
with the numerical results for the entropy diffusion case in the expectation
that they will be reasonably close to the temperature diffusion case.
Run | A1 | A2 | A3 | A4 | A5 | B1 | B2 | C1 | D1 | D2 | D3
---|---|---|---|---|---|---|---|---|---|---|---
Ra | $10^{6}$ | $3\times 10^{6}$ | $10^{7}$ | $3\times 10^{6}$ | $6\times 10^{6}$ | $10^{6}$ | $3\times 10^{6}$ | $10^{6}$ | $10^{6}$ | $10^{6}$ | $10^{6}$
Pr | $1$ | $1$ | $1$ | $1$ | $1$ | $10$ | $10$ | $0.25$ | $1$ | $10$ | $0.25$
$\Gamma$ | $1.9438$ | $1.9438$ | $1.9438$ | $2.924$ | $4.6416$ | $1.9438$ | $4.6416$ | $1.9438$ | $1$ | $1$ | $1$
$\rho_{B}/\rho_{T}$ | $2.71$ | $2.71$ | $2.71$ | $5$ | $10$ | $2.71$ | $10$ | $2.71$ | $1$ | $1$ | $1$
$r_{\delta}$ | $1.65\pm 0.01$ | $1.70\pm 0.01$ | $1.69\pm 0.01$ | $2.05\pm 0.01$ | $2.27\pm 0.02$ | $1.48\pm 0.01$ | $2.17\pm 0.01$ | $2.05\pm 0.04$ | 1 | 1 | 1
$r_{s}$ | $3.21\pm 0.02$ | $3.29\pm 0.01$ | $3.31\pm 0.01$ | $5.99\pm 0.02$ | $10.62\pm 0.03$ | $2.91\pm 0.02$ | $10.0\pm 0.02$ | $4.04\pm 0.04$ | 1 | 1 | 1
$r_{u}$ | $0.92\pm 0.01$ | $0.92\pm 0.01$ | $0.93\pm 0.01$ | $0.96\pm 0.01$ | $1.06\pm 0.02$ | $1.17\pm 0.01$ | $1.38\pm 0.01$ | $0.82\pm 0.02$ | 1 | 1 | 1
$Nu$ | $5.90\pm 0.02$ | $7.75\pm 0.03$ | $10.69\pm 0.03$ | $5.35\pm 0.02$ | $4.26\pm 0.02$ | $6.18\pm 0.02$ | $4.09\pm 0.02$ | $5.00\pm 0.04$ | $8.78\pm 0.03$ | $8.76\pm 0.04$ | $8.17\pm 0.04$
$U_{T}$ | $186\pm 3$ | $321\pm 3$ | $570\pm 3$ | $298\pm 3$ | $368\pm 3$ | $287\pm 2$ | $410\pm 3$ | $125\pm 4$ | $200\pm 3$ | $260\pm 5$ | $149\pm 3$
$U_{B}$ | $202\pm 3$ | $348\pm 3$ | $613\pm 3$ | $311\pm 3$ | $348\pm 3$ | $244\pm 2$ | $298\pm 3$ | $154\pm 4$ | $200\pm 3$ | $260\pm 5$ | $149\pm 3$
$\Gamma^{2/3}$ | 1.557 | 1.557 | 1.557 | 2.045 | 2.783 | 1.557 | 2.783 | 1.557 | 1 | 1 | 1
$\Gamma^{5/3}$ | 3.028 | 3.028 | 3.028 | 5.979 | 12.915 | 3.028 | 12.915 | 3.028 | 1 | 1 | 1
$\Gamma^{1/6}$ | 1.117 | 1.117 | 1.117 | 1.196 | 1.292 | 1.117 | 1.292 | 1.117 | 1 | 1 | 1
$Nu$-theory | 5.56 | 7.32 | 9.89 | 4.65 | 2.98 | 5.55 | 2.50 | 5.18 | 8.78 | 8.76 | 8.17
$Nu$-nblr | 5.60 | 7.22 | 9.67 | 4.97 | 3.86 | 5.63 | 3.11 | 4.37 | 8.78 | 8.76 | 8.17
$Pe_{T}$-theory | 194 | 336 | 614 | 307 | 376 | 252 | 345 | 144 | 200 | 260 | 149
$Pe_{B}$-theory | 174 | 301 | 549 | 257 | 291 | 226 | 267 | 129 | 200 | 260 | 149
$Pe_{T}$-nblr | 177 | 306 | 559 | 283 | 361 | 256 | 358 | 118 | 200 | 260 | 149
$Pe_{B}$-nblr | 192 | 332 | 601 | 295 | 341 | 219 | 260 | 144 | 200 | 260 | 149
Table 1: Data from the numerical runs all corresponding to $m=3/2$ polytropes.
The first four rows are the input parameters. $r_{\delta}$, $r_{s}$ and
$r_{u}$ are the measured boundary layer ratios for each run. The velocities
$U_{T}$ and $U_{B}$ are the local maxima at the edge of the boundary layers,
measured in velocity units of $k/d\rho_{B}c_{p}$, i.e. they are Peclet numbers
based on the diffusivity at the base of the layer. The theoretical predictions
for the boundary layer ratios are given in the next three rows, see equation
(70). The $Nu$-theory entries are based on (86) with the prefactors $C_{Nu}$
as given in the text, and the boundary ratios come from (70). The $Nu$-nblr
entries also use (86) with the same prefactors, but instead of using (70), the
numerical boundary layer ratios (nblr) above are used. The $Pe_{T}$-theory and
$Pe_{B}$-theory entries come from (78) and (79). The prefactors used are not
those of Grossmann & Lohse (2000), see (87), but those given in the text.
Again, (70) is used to determine the boundary layer ratios. The $Pe_{T}$-nblr
and $Pe_{B}$-nblr entries use the numerical boundary layer ratios.
The numerical code is described in Kessar et al. (2019), though for that paper
stress-free boundary conditions were applied, whereas no-slip boundaries were
imposed in the runs described here. The unit of length is taken as $d$, the
unit of time is $\rho_{B}d^{2}c_{p}/k$, the thermal diffusion time at the
bottom of the layer. The velocities are in units of $k/\rho_{B}dc_{p}$, so
they correspond to local Peclet numbers, where $Pe=RePr$.
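The stratification rows of table 1 can be reproduced directly from the density ratios quoted in the text (2.71, 5 and 10); a minimal Python check, assuming only $\rho_{B}/\rho_{T}=\Gamma^{m}$ with $m=3/2$:

```python
# Reproduce the Gamma-power rows of table 1 from the quoted density ratios.
# With polytropic index m = 3/2, rho_B/rho_T = Gamma^(3/2), so
# Gamma = (rho_B/rho_T)^(2/3).
for ratio in (2.71, 5.0, 10.0):
    gamma = ratio ** (2.0 / 3.0)
    print(f"rho_B/rho_T = {ratio:5.2f}: "
          f"Gamma^(2/3) = {gamma ** (2 / 3):.3f}, "
          f"Gamma^(5/3) = {gamma ** (5 / 3):.3f}, "
          f"Gamma^(1/6) = {gamma ** (1 / 6):.3f}")
```

The printed values agree with the $\Gamma^{2/3}$, $\Gamma^{5/3}$ and $\Gamma^{1/6}$ rows of table 1 to within the last tabulated digit.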
All the runs have polytropic index $m=3/2$. The code assumes periodic boundary
conditions in the horizontal $x$ and $y$ directions, with aspect ratio 2; that
is, the period in the horizontal directions is $2d$. Table 1 gives the
parameters used in the eleven runs, which span a range of Prandtl numbers and
are at Rayleigh numbers which are as large as is numerically feasible, bearing
in mind the need to resolve the small scale structures that develop. The last
three runs are for Boussinesq cases, $\Gamma=1$, for comparison with the
anelastic cases. The density stratification measured by $\Gamma$ varies over a
moderate range only, because for the modest values of $Ra$ that are
numerically accessible, large $\Gamma$ leads to a top boundary layer which is
no longer thin, so our theory will no longer be valid. In figure 3, the
entropy profiles (in units of $c_{p}$) and the horizontal velocity profiles
are shown for the three runs A1, B1 and C1, and the profiles for A4 and A5 are
shown in figure 4. The entropy profiles are constructed by horizontal
averaging and time averaging the vertical profiles. From (12) the entropy
difference between the boundaries is $\Gamma\ln\Gamma/(\Gamma-1)$ and the
constant is chosen so that it is zero at the bottom boundary $z=0$.
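As a quick sanity check on the quoted entropy difference, $\Gamma\ln\Gamma/(\Gamma-1)$ reduces to unity in the Boussinesq limit $\Gamma\to 1$; a minimal sympy sketch:

```python
import sympy as sp

G = sp.symbols('Gamma', positive=True)
# Entropy difference between the boundaries from (12), in units of c_p.
dS = G * sp.log(G) / (G - 1)

print(sp.limit(dS, G, 1))          # Boussinesq limit Gamma -> 1 gives 1
print(float(dS.subs(G, 1.9438)))   # e.g. the Gamma = 1.9438 runs
```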
From figure 3 it is immediately apparent that the entropy is indeed rather
constant in the well-mixed bulk interior, consistent with a key assumption of
the theory. It is also clear that the jump in entropy across the top boundary
layer is greater than that across the bottom boundary layer, and that the top
entropy boundary layer is thicker than the bottom entropy boundary layer. This
is consistent with the boundary layer ratios found in §5. The velocity
profiles have local maxima near the boundaries, which gives a convenient
definition for the top and bottom Reynolds numbers, $Re_{B}$ and $Re_{T}$,
after converting Peclet numbers to Reynolds numbers using $RePr=Pe$. We note
that there is no great difference between the top and bottom horizontal
velocities, consistent with the weak scaling with $\Gamma$ found in (67). This
result is slightly surprising, because astrophysical mixing length theory
predicts faster velocities where the fluid is less dense, but in our problem
the boundaries play an important role. The low Prandtl number case, figures
3(e) and 3(f), has a slightly different entropy boundary layer profile from
those of the $Pr=1$ and $Pr=10$ cases, with a more gradual matching on to the
uniform entropy interior, which is particularly noticeable in the upper
boundary layer. This suggests it may be necessary to go to higher $Ra$ at low
$Pr$ before accurate agreement with a theory that assumes thin boundary layers
can be obtained. The cases shown in figure 4, together with figures 3(a) and
3(b), form a sequence at constant $Pr=1$ with increasing $\Gamma$. The most
noticeable feature is that at larger $\Gamma$ the entropy of the mixed
interior becomes close to that of the bottom boundary, so the boundary layer
ratio $r_{S}$ increases rapidly, consistent with the prediction of (70). Also
notable is that the velocity ratio of the maximum horizontal velocities,
$r_{U}$, is never far from unity, again consistent with the weak scaling with
$\Gamma$ in (70). As expected, the boundary layers become thicker at larger
$\Gamma$, so that at fixed $Ra$ the thin boundary layer assumption breaks down
at large $\Gamma$.
In table 1 we compare various results of the simulations against the
theoretical predictions of §5 and §6. To evaluate the entropy boundary layer
thicknesses for our numerical data we use the definition (22), so the
gradients $d\langle S\rangle_{h}/dz$ at $z=0$ and $z=1$ were obtained by
differentiating a cubic spline representation of the entropy, and the entropy
jumps were obtained by averaging the entropy over the well mixed region,
assuming constant entropy there. The ratios of the top and bottom entropy
thicknesses and entropy jumps are denoted by $r_{\delta}$ and $r_{s}$ in table
1. The velocity ratios at the top and bottom are denoted by $r_{u}$. We can
compare these with the predicted ratios in (70). We see that there is some
variation in the numerical results, but they are roughly in agreement with the
predicted results. Considering that the top boundary layer is not that thin,
as can be seen in figures 3 and 4, these results are as good as can be
expected. We also tested how well the equations leading to our boundary layer
ratios compare individually with the numerical results. Equation (61)
expresses the fact that the heat flux is the same at the top and bottom, and
together with the incompressible boundary layer equation (25), gives
$r_{s}=\Gamma r_{\delta}$, which agrees with our numerics rather well, to the
1% level. The viscous boundary layer balance, (63), has less accurate
agreement with the numerics. All the runs at $\Gamma=1.9438$ had errors less
than 10% (except the low $Pr$ run where as we saw in figure 3 the boundary
layer structure is slightly different), but the runs with density ratio 10, A5
and B2, have $r_{\delta}$ and $r_{s}$ large, but not quite as large as
predicted by (70). The likely explanation is that the horizontal length scale
at the top boundary layer is getting smaller than at the bottom boundary,
though not by as much as the factor $\Gamma$ predicted for the vertical length
scale ratio. There is therefore some doubt as to whether it is correct to have
$d$ as the horizontal length scale in the boundary layers when $\Gamma$ is
large. Further research is needed to elucidate this issue. The velocity ratio
equation (67) correctly predicts that the velocity ratio is always close to
unity, but is less reliable at predicting whether it is above or below unity.
However, the higher density ratio runs do have $r_{u}>1$, as predicted by
(67).
We also evaluated the Nusselt number from the data, using
$\displaystyle Nu=-\frac{d}{c_{p}}\frac{d\langle S\rangle_{h}}{dz}\Big|_{z=0}=-\frac{d}{\Gamma c_{p}}\frac{d\langle S\rangle_{h}}{dz}\Big|_{z=d},$ (90)
again determining the gradients from our spline representation. When the run
has been integrated long enough, initial transients in the numerical run are
eliminated and these two estimates of the Nusselt number become close, and the
average value is used in table 1. Because the flow is turbulent, the Nusselt
number fluctuates continuously at about the 10% level, so a long time average
is used. The finite length of the run means there is a small uncertainty due
to the fluctuations not exactly cancelling, which we estimate as error bars in
table 1.
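The spline-based extraction of (90) can be sketched as follows. The entropy profile here is a synthetic stand-in with thin exponential boundary layers rather than simulation output, and $d$, $\Gamma$, $c_{p}$ and the layer widths are illustrative values only:

```python
import numpy as np
from scipy.interpolate import CubicSpline

d, Gamma, c_p = 1.0, 1.9438, 1.0
dB, dT = 0.02, 0.04                  # bottom / top boundary layer widths (illustrative)
A = 0.4                              # bottom entropy jump (arbitrary)
B = Gamma * A * dT / dB              # chosen so the two estimates in (90) agree

# Synthetic horizontally averaged entropy: thin boundary layers, mixed interior.
z = np.linspace(0.0, d, 2001)
S = -A * (1.0 - np.exp(-z / dB)) - B * np.exp(-(d - z) / dT)

spl = CubicSpline(z, S)
Nu_bottom = float(-(d / c_p) * spl(0.0, 1))        # -(d/c_p) dS/dz at z = 0
Nu_top = float(-(d / (Gamma * c_p)) * spl(d, 1))   # -(d/(Gamma c_p)) dS/dz at z = d
Nu = 0.5 * (Nu_bottom + Nu_top)                    # average the two, as in the text
print(f"Nu estimates: bottom {Nu_bottom:.3f}, top {Nu_top:.3f}, mean {Nu:.3f}")
```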
In table 1 we also give the value of the Nusselt number calculated by our
theory. We used the Boussinesq runs, D1, D2 and D3 to determine the prefactors
$C_{Nu}$ appropriate for each Prandtl number used. This gives $C_{Nu}=0.949$
for $Pr=10$, $C_{Nu}=0.785$ for $Pr=1$ and $C_{Nu}=0.869$ for $Pr=0.25$. Note
that in table 1 this means that for the Boussinesq runs D1, D2, and D3, the
$Nu$-theory entries are the same as the actual $Nu$ entries by construction.
None of these prefactors is very far from the values suggested by Grossmann &
Lohse (2000), who had $C_{Nu}=0.93$ at large $Pr$ and $C_{Nu}=0.76$ at small
Prandtl number, suggesting that the differences in aspect ratio and geometry
only make relatively small changes to the Nusselt number. For consistency, we
use these same prefactors in all runs. Since our main interest is in the
compressible cases, we have not explored why the prefactor for the $Pr=1$ case
is slightly lower than the other values of $C_{Nu}$.
We first consider the Nusselt number for the cases where the density ratio is
only 2.71, runs A1, A2, A3, B1 and C1. We note that the predicted values are
all reasonably close to the numerical values, though generally a little lower
than the actual numerical values. Using the
numerically calculated boundary layer ratios in formula (86) rather than the
theoretically predicted ones gives the result $Nu$-nblr in table 1. These are
only available after the simulation is run, but they are helpful for testing
whether small errors are due to slightly inaccurate boundary layer ratios, or
whether the formula (86) is inaccurate. For the density ratio 2.71 runs with
$Pr\geq 1$, the boundary layer ratios were close to the predicted values, so
not surprisingly, $Nu$-nblr are not significantly better than $Nu$-theory. We
conclude that the under-prediction of the Nusselt number in these cases, which
is less than about the 10% level, is due to the viscous dissipation not being
completely in the boundary layers, as required by the theory. We believe that
at higher $Ra$, where the dissipation progressively goes into the boundary
layers, and using longer runs to average out the fluctuations, the small
discrepancy will disappear. Using runs A1, A2 and A3, where only $Ra$ varies,
we can test the $Ra^{1/4}$ power law predicted in (86). The least squares fit
to a straight line in $\log(Nu)$ vs $\log(Ra)$ space has a slope of 0.257,
rather close to the predicted slope.
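The least-squares fit just described is a straight line in $\log(Nu)$–$\log(Ra)$ space. The sketch below uses the measured $Nu$ values of runs A1–A3 from table 1 but hypothetical $Ra$ values (the actual input parameters sit in rows of table 1 not reproduced here), so the fitted slope is illustrative rather than a reproduction of the quoted 0.257:

```python
import numpy as np

# Nu for runs A1, A2, A3 from table 1; Ra are placeholder values spanning
# one decade (the real Rayleigh numbers are among the input parameters).
Ra = np.array([1.0e6, 3.2e6, 1.0e7])
Nu = np.array([5.90, 7.75, 10.69])

slope, intercept = np.polyfit(np.log(Ra), np.log(Nu), 1)
print(f"fitted slope: {slope:.3f}")
```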
We now look at the larger $\Gamma$ cases, runs A4, A5 and B2, corresponding to
the more compressible cases. In the case run A4, at density ratio 5, the
predicted and numerical Nusselt numbers are reasonably close, but in the most
extreme cases of density ratio 10 the predicted $Nu$ is only 61% of the
numerical value for run B2, and in the case A5 predicted $Nu$ is 71% of the
numerical $Nu$. Part of this discrepancy is down to the boundary layer ratios,
which become very large at high $\Gamma$, and so small inaccuracy can affect
the Nusselt number significantly. If we use the numerical boundary layer
ratios in (86) rather than the theoretical ones for run B2, the predicted
$Nu$-nblr rises to 3.12, but even this is only 76% of the numerical value, and
A5 similarly improves but still is too low. We again conclude that in runs B2
and A5 the assumption that the dissipation occurs dominantly in the boundary
layers is suspect (particularly near the top boundary) and that higher $Ra$ is
needed before it becomes robustly valid.
We now consider the Reynolds number formula, (87), though it is convenient to
express this in terms of the Peclet number $Pe=RePr$. If the table 1 parameter
values are inserted into (87) with the value of $C_{Re}$ quoted there, the
values of the Peclet number are consistently a factor of about 5 too small
compared with the numerical values of $U_{T}$ and $U_{B}$ in table 1. There
are a number of reasons for this, but the two most important are (i) the power
law dependence of the Peclet number with Rayleigh number in Boussinesq
convection is slightly less than the predicted 0.5 (Grossmann & Lohse 2002),
and (ii) our runs are for aspect ratio 2, whereas experiments, and the
numerical simulations that simulate them (e.g. Silano et al. 2010), use aspect
ratios of 1 or less. The prefactor in (87) is based on experiments at large
$Ra$ which mostly used aspect ratios less than unity. The experiments of Qiu &
Tong (2001), see also figure 1 of Grossmann & Lohse (2002), using water (with
$Pr=5.5$) in a cylinder of aspect ratio unity found $Re=0.085Ra^{0.455}$. At
the run A1 parameters this formula gives $Pe=251$ consistent with a prefactor
of $C_{Re}=0.38$ in (87), much larger than the Grossmann & Lohse (2002) value.
They found very similar $Re$ prefactors for both high and low $Pr$. If we
adopt the same procedure as we did to get the Nusselt number prefactors, and
normalise using the Boussinesq runs D1, D2 and D3 we obtain $C_{Re}=0.354$ for
$Pr=10$, $C_{Re}=0.400$ at $Pr=1$ and $C_{Re}=0.421$ at $Pr=0.25$.
Reassuringly, these are all quite close to the Qiu & Tong (2001) value of
$C_{Re}=0.38$. We therefore use these three values of $C_{Re}$ at the
appropriate Prandtl number in all our theory calculations. With these
prefactors, the numerical results for $U_{T}$ and $U_{B}$ agree reasonably
well with the predicted $Pe_{T}$-theory and $Pe_{B}$-theory results. The
results have some scatter, which seems to reflect the scatter in our computed
$r_{U}$. If we use the computed boundary layer ratios, rather than the
asymptotically predicted ratios, there is less scatter in the comparison
between computed and theoretical Peclet numbers, though the theoretical Peclet
numbers are generally a few percent lower than the computed Peclet numbers.
Given that the boundary layers are not very thin, these small discrepancies
are not unexpected, and overall the predicted Reynolds numbers are in
reasonable agreement with those of our §6 asymptotic theory.
## 8 Conclusions
The scaling laws for heat flux and Reynolds number at high Rayleigh number
convection have been derived from the energy balance and entropy balance
equations derived in §3. These scaling laws are derived in terms of the
Rayleigh number, the Prandtl number and the temperature ratio $\Gamma$ which
measures the strength of the stratification. In the Boussinesq limit,
$\Gamma\to 1$, they reduce to the scaling laws of Grossmann & Lohse (2000).
The existence of the well-mixed entropy state, with the entropy changes being
mainly confined to thin boundary layers, makes it possible to estimate the
terms in the entropy balance equation, so allowing Nusselt number and Reynolds
number relationships to be established. The cases treated are those where the
viscous dissipation occurs in the boundary layers, the cases labelled as
$I_{u}$ and $I_{l}$ by Grossmann & Lohse (2000), the subscripts referring to
the high and low Prandtl number regimes, and the cases where the viscous
dissipation is primarily in the bulk, the cases $II_{u}$ and $II_{l}$. A
limitation of the theory is that both the entropy boundary layers do have to
be thin for the theory to be valid. For the top boundary layer to be thin when
the stratification is strong, the Rayleigh number has to be very large, which
is numerically difficult, so the range of $\Gamma$ which can be tested both
numerically and asymptotically is quite limited. This condition that the top
boundary layer is thin is equivalent to the condition that the boundary layers
are incompressible, so that a rather simple relationship holds between
temperature and entropy within the boundary layers. The more difficult case
where the boundary layers are compressible has not yet been solved in closed
form, but it is likely to be significantly different from our solutions.
A feature of this high Rayleigh number anelastic problem is that the top and
bottom boundary layers have a different structure, so to determine the scaling
laws, boundary layer ratios for the top and bottom boundary layers have to be
established. The three key ratios are those for the boundary layer widths, the
boundary layer entropy jumps and the horizontal velocities just outside the
boundary layers. In §5 we proposed formulae based on a simple physical picture
for these ratios. We have performed some numerical simulations to test these
proposed boundary layer ratios, and within the constraints imposed by the
numerics, namely not very high $Ra$, we find broad agreement between the
theory and the numerics. Another important assumption for Grossmann-Lohse
theory to be valid is the existence of a wind of turbulence. Our numerics
suggest that this feature persists in our simulations. There is, however,
still some uncertainty about whether the horizontal length scale of that wind,
which controls the boundary layers, remains at the vertical length scale $d$
as the stratification $\Gamma$ increases, or whether it becomes smaller at the
top boundary than the bottom boundary at large $\Gamma$.
We have also tested the theoretically derived Nusselt number and Reynolds
number relationships against the numerics, in the case where the viscous
dissipation is mainly in the boundary layers, the only numerically accessible
case. Using the prefactors determined in the Boussinesq case, which are the
only free parameters in the theory, the Nusselt numbers obtained are in
reasonable agreement with the theory, again noting the numerical limitations
preventing accurate agreement. A problem was encountered when comparing with
the theoretical Reynolds numbers, in that the theory using the original
Grossmann-Lohse prefactors gave smaller $Re$ than did the numerics. However,
the disagreement seems to be due more to issues with the Boussinesq problem
rather than to its extension to the anelastic case, in particular to the
difficulty of establishing a single prefactor over a huge range of $Ra$ and to
dependence of $Re$ on the aspect ratio. When the prefactors were determined by
normalising on our Boussinesq runs, the issue was resolved.
We have focussed here on the case of no-slip boundaries, as this seems the
simplest case in which scaling laws can be derived from first principles
without introducing arbitrary constants into the formulae. There are, however,
many similar problems which could be addressed which are of great
astrophysical interest: the case of stress-free boundaries is thought to be
particularly relevant to stellar convection zones. Even within the context of
our simplified no-slip problem, the case of compressible boundary layers would
be of interest. We found it most convenient to consider fixed entropy boundary
conditions, but other boundary conditions, such as fixed temperature or fixed
flux are of interest too. Another issue that could be explored are the
differences between temperature diffusion and entropy diffusion cases. In our
particular problem, with incompressible boundary layers, the differences
appear to be quite minor, but this is not necessarily the case if more
challenging cases are addressed. Given the growing importance of the anelastic
approximation in exploring a very wide range of exciting astrophysical
problems, a firmer understanding of the fundamental behaviour of high Rayleigh
number anelastic convection would be very valuable.
###### Acknowledgements.
This work was partially funded by the STFC grant ST/S00047X/1
held at the University of Leeds. The partial funding from the subvention of
the Ministry of Science and Higher Education in Poland as a part of the
statutory activity and the support of the National Science Centre of Poland
(grant No. 2017/26/E/ST3/00554) is gratefully acknowledged. The computational
work was performed on the ARC clusters, part of the high performance computing
facilities at the University of Leeds, and on the COSMA Data Centre system at
Durham University, operated on behalf of the STFC DiRAC HPC.
## Appendix A Form of the anelastic temperature perturbation
Taking the horizontal average of the anelastic continuity equation (2.2), and
using the $u_{z}=0$ boundary conditions gives
$\langle u_{z}\rangle_{h}=0.$ (91)
Using (1) and (13), the $z$-component of the anelastic equation of motion can
be written
${\bar{\rho}}\frac{\partial u_{z}}{\partial
t}+\nabla\cdot({\bar{\rho}}u_{z}{\bf u})+\nabla
p=-g\rho+\mu\left(\nabla^{2}u_{z}+\frac{1}{3}\frac{\partial}{\partial
z}\nabla\cdot{\bf u}\right).$ (92)
Taking the horizontal average of (92) we see that, using (91), the viscous
term vanishes to leave
$\frac{\mathrm{d}}{\mathrm{d}z}\left\langle\bar{\rho}u_{z}^{2}\right\rangle_{h}+\frac{\mathrm{d}\left\langle
p\right\rangle_{h}}{\mathrm{d}z}=-g\left\langle\rho\right\rangle_{h}=-g{\bar{\rho}}\left(\frac{\langle
p\rangle_{h}}{\bar{p}}-\frac{\langle T\rangle_{h}}{\bar{T}}\right)$ (93)
using the linearised equation of state. In the bulk the entropy is well mixed
and hence constant there, so differentiating the linearised entropy relation
and using the equation of state,
$R\frac{d}{dz}\left(\frac{\langle
p\rangle_{h}}{\bar{p}}\right)=c_{p}\frac{d}{dz}\left(\frac{\langle
T\rangle_{h}}{\bar{T}}\right)$ (94)
in the bulk. Using the hydrostatic and perfect gas equations of the adiabatic
reference state, this can be written
$\frac{d\langle p\rangle_{h}}{dz}=c_{p}{\bar{\rho}}\frac{d\langle
T\rangle_{h}}{dz}-\frac{g{\bar{\rho}}}{\bar{p}}\langle
p\rangle_{h}+\frac{g{\bar{\rho}}}{\bar{T}}\langle T\rangle_{h}$ (95)
and on substituting this into (93) we obtain
$\frac{d{\langle
T\rangle_{h}}}{dz}=-\frac{1}{c_{p}{\bar{\rho}}}\frac{d}{dz}\langle\bar{\rho}u_{z}^{2}\rangle_{h},$
(96)
which is valid in the bulk. Integrating this across the bulk from
$z=\delta^{th}_{B}$ to $z=d-\delta^{th}_{T}$, and assuming $u_{z}$ is
negligible close to the boundaries,
$\langle T_{bulk}(d-\delta^{th}_{T})\rangle_{h}-\langle
T_{bulk}(\delta^{th}_{B})\rangle_{h}=\int_{\delta^{th}_{B}}^{d-\delta^{th}_{T}}\langle\bar{\rho}u_{z}^{2}\rangle_{h}\frac{d}{dz}\left(\frac{1}{c_{p}\bar{\rho}}\right)\,dz=\Delta
T_{vel}>0,$ (97)
since $\bar{\rho}$ is monotonic decreasing with $z$. This establishes that in
the bulk the gradient $d{\langle T\rangle_{h}}/dz$ is positive on average,
corresponding to a subadiabatic horizontally averaged temperature gradient. We
denote this jump in $T$ across the bulk by $\Delta T_{vel}$ because it is
physically connected to the pressure changes induced by the fluid velocity. A
natural question is how large $\Delta T_{vel}$ is compared to the jumps in
$\langle T\rangle_{h}$ across the boundary layers, $\Delta T_{B}$ and $\Delta
T_{T}$. Formally they are both of same order of magnitude in the anelastic
approximation, but $\Delta T_{vel}$ will be small if we are close to
Boussinesq or if the Rayleigh number is small. Numerical evidence is sparse,
but figure 4 from Verhoeven et al. (2015) suggests that for their parameters,
$Ra=10^{6}$, $\rho_{B}/\rho_{T}=2.72$ and $Pr=0.7$, their $\Delta T_{vel}$ was
small.
### A.1 Positivity of the temperature offsets
We now consider the temperature offsets at the bottom and top boundaries,
$\left\langle T\right\rangle_{h,T}$ and $\left\langle T\right\rangle_{h,B}$.
Without numerical simulations, we cannot determine their magnitude, but we can
show that they must both be positive, a useful check on future simulations.
By examining the sum of the temperature jumps across the layer in Figure 1b we
can see that
$\left\langle T\right\rangle_{h,T}-\left\langle T\right\rangle_{h,B}+\Delta
T_{B}+\Delta T_{T}=\Delta T_{vel}.$ (98)
Using the incompressible boundary layer forms for the temperature jumps across
the boundary layers, (25), and the formulae for the ratios of these jumps,
$r_{s}=\frac{\Delta S_{T}}{\Delta S_{B}},\quad r_{T}=\frac{\Delta
T_{T}}{\Delta T_{B}},\quad r_{s}=\Gamma r_{T},$ (99)
equation (98) becomes
$\left\langle T\right\rangle_{h,T}-\left\langle
T\right\rangle_{h,B}+\frac{T_{B}\Delta S}{c_{p}}\frac{(1+r_{T})}{(1+\Gamma
r_{T})}=\Delta T_{vel}.$ (100)
A second equation for the temperature offsets can be derived from the boundary
conditions. From the linearised equation of state and the entropy relation we
can deduce
$s=\frac{c_{p}T}{\bar{T}}-\frac{p}{\bar{\rho}\bar{T}}.$ (101)
At $z=0$ and $z=d$ this gives
$\Delta S=\frac{c_{p}\left\langle
T\right\rangle_{h,B}}{T_{B}}-\frac{\left\langle
p\right\rangle_{h,B}}{\rho_{B}T_{B}},\quad 0=\frac{c_{p}\left\langle
T\right\rangle_{h,T}}{T_{T}}-\frac{\left\langle
p\right\rangle_{h,T}}{\rho_{T}T_{T}}.$ (102)
We now use the mass conservation equation (28) to set the pressure
perturbations on the top and bottom boundary equal, giving
$\frac{T_{B}\Delta S}{c_{p}}=\left\langle T\right\rangle_{h,B}-\left\langle
T\right\rangle_{h,T}\frac{\rho_{T}}{\rho_{B}}.$ (103)
Equations (100) and (103) are two equations for the temperature offsets
$\left\langle T\right\rangle_{h,B}$ and $\left\langle T\right\rangle_{h,T}$,
and using $\rho_{B}/\rho_{T}=\Gamma^{m}$ the solutions are
$\left\langle T\right\rangle_{h,B}=\frac{\Delta T_{vel}}{\Gamma^{m}-1}+\frac{T_{B}\Delta S}{c_{p}}\left\{\frac{\Gamma^{m}(1+\Gamma r_{T})-(1+r_{T})}{(\Gamma^{m}-1)(1+\Gamma r_{T})}\right\},$ (104)
$\left\langle T\right\rangle_{h,T}=\frac{\Gamma^{m}\Delta T_{vel}}{\Gamma^{m}-1}+\frac{T_{B}\Delta S}{c_{p}}\left\{\frac{(\Gamma-1)\Gamma^{m}r_{T}}{(\Gamma^{m}-1)(1+\Gamma r_{T})}\right\}.$ (105)
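Since (100) and (103) are linear in the two offsets, the solutions (104) and (105) can be verified symbolically; writing $K=T_{B}\Delta S/c_{p}$ for brevity:

```python
import sympy as sp

a, b = sp.symbols('T_hB T_hT')            # the offsets <T>_{h,B} and <T>_{h,T}
K, dTv, G, m, rT = sp.symbols('K DeltaT_v Gamma m r_T', positive=True)

eq100 = sp.Eq(b - a + K * (1 + rT) / (1 + G * rT), dTv)   # equation (100)
eq103 = sp.Eq(K, a - b / G**m)                            # equation (103)
sol = sp.solve([eq100, eq103], [a, b])

# The claimed closed forms (104) and (105):
a_pred = dTv / (G**m - 1) + K * (G**m * (1 + G * rT) - (1 + rT)) / ((G**m - 1) * (1 + G * rT))
b_pred = G**m * dTv / (G**m - 1) + K * (G - 1) * G**m * rT / ((G**m - 1) * (1 + G * rT))

assert sp.simplify(sol[a] - a_pred) == 0   # matches (104)
assert sp.simplify(sol[b] - b_pred) == 0   # matches (105)
```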
Since $\Gamma>1$ and $\Delta T_{vel}>0$ it follows that both quantities are
positive whatever $\Delta T_{vel}$ is. It is not possible to decide which
offset is larger without having more information about $\Delta T_{vel}$, but
these results confirm that Figure 1b is a plausible sketch of the temperature
perturbation, and will be helpful in testing numerical simulations.
## Appendix B The case when the dissipation in the bulk dominates the dissipation in the boundary layers
Grossmann & Lohse (2000) point out that at low $Pr$ and large $Ra$ it is
possible for the viscous dissipation in the bulk to be larger than the viscous
dissipation in the boundary layers. When this occurs, our arguments about the
boundary layer ratios in §5 and the scaling laws in §6 need revising. We now
consider this scenario.
### B.1 The boundary layer ratios
When the viscous dissipation is mainly in the bulk, equations (57-63) still
hold, but the argument for equation (67) breaks down because the entropy flux
is no longer approximately constant in the bulk because viscous dissipation in
the bulk is no longer negligible. We can however use the energy flux equation
(30) because when the viscous dissipation is mainly in the bulk, the work done
by buoyancy must balance the viscous dissipation in the bulk, since now the
viscous dissipation in the boundary layers is negligible.
So
$\displaystyle\frac{g}{c_{p}}\int_{bulk}\left\langle\bar{\rho}u_{z}s\right\rangle_{h}\mathrm{d}z=\mu\int_{bulk}\left\langle
q\right\rangle\mathrm{d}z,$ (106)
and since thermal diffusion and the last two terms in (30) are negligible in
the bulk when the boundary layers are thin, it follows that
$\langle{\bar{\rho}}{\bar{T}}u_{z}s\rangle_{h}$ will be approximately the same
just outside the two boundary layers at $z=\delta_{B}^{\nu}$ and
$z=d-\delta_{T}^{\nu}$, so
$\displaystyle\rho_{B}T_{B}\langle
u_{z}s\rangle_{h}|_{z=\delta_{B}^{\nu}}\approx\rho_{T}T_{T}\langle
u_{z}s\rangle_{h}|_{z=d-\delta_{T}^{\nu}}.$ (107)
Note this is different from the case where the dissipation was mainly in the
boundary layers, when $\langle{\bar{\rho}}u_{z}s\rangle_{h}$ is approximately
constant. As we did in §5, we horizontally average the dot product of $\bf u$
and (1), and apply it just outside the boundary layers, at
$z=\delta_{B}^{\nu}$ and $z=d-\delta_{T}^{\nu}$. Here we are justified in
neglecting the pressure term as we did in §5, and we also neglect the viscous
term. This is not obvious when most of the viscous dissipation is in the bulk,
but following Grossmann & Lohse (2000), we envisage a turbulent cascade, where
the dissipation at larger scales is dominated by the inertial term. We
therefore adopt
$\frac{1}{2}\frac{\partial}{\partial z}\left(\bar{\rho}\left\langle
u_{z}u^{2}\right\rangle_{h}\right)\approx-\frac{\partial}{\partial z}\langle
u_{z}p\rangle_{h}+\frac{g}{c_{p}}\left\langle{\bar{\rho}}u_{z}s\right\rangle_{h}\approx\frac{g}{c_{p}}\left\langle{\bar{\rho}}u_{z}s\right\rangle_{h}$
(108)
at $z=\delta_{B}^{\nu}$ and $z=d-\delta_{T}^{\nu}$. Then, since we expect all
velocity components in the bulk to be of similar magnitude, using (107)
$\displaystyle\rho_{B}T_{B}\frac{U_{B}^{3}}{H_{B}}\approx\rho_{T}T_{T}\frac{U_{T}^{3}}{H_{T}}\Rightarrow
r_{u}\sim\Gamma^{\frac{m}{3}},$ (109)
where the pressure scale heights are defined in (66). This result differs from
(67), where the dissipation is in the boundary layers, so that now the
horizontal velocity at the top is expected to be considerably faster than the
velocity at the bottom, whereas (69) predicts only a weak dependence on
$\Gamma$.
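The step from (108) to (109) can be checked symbolically, under the assumption (made here purely for the check) that the pressure scale heights are proportional to the local temperature, so that $\rho_{T}=\rho_{B}\Gamma^{-m}$, $T_{T}=T_{B}/\Gamma$ and $H_{T}=H_{B}/\Gamma$:

```python
import sympy as sp

G, m = sp.symbols('Gamma m', positive=True)
rho_B, T_B, H_B, U_B, U_T = sp.symbols('rho_B T_B H_B U_B U_T', positive=True)

# Top-of-layer values in terms of bottom values (H taken proportional to T):
rho_T, T_T, H_T = rho_B / G**m, T_B / G, H_B / G

# The balance rho_B T_B U_B^3/H_B = rho_T T_T U_T^3/H_T from (109):
balance = sp.Eq(rho_B * T_B * U_B**3 / H_B, rho_T * T_T * U_T**3 / H_T)
U_T_cubed = sp.solve(balance, U_T**3)[0]

# The temperature factors cancel, leaving U_T^3 = Gamma^m U_B^3,
# i.e. r_u = U_T/U_B = Gamma^(m/3).
assert sp.simplify(U_T_cubed / U_B**3 - G**m) == 0
```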
### B.2 The scaling laws when dissipation is in the bulk
We still expect thin boundary layers even when the dissipation is mainly in
the bulk, so (71),
$\frac{F^{super}\Delta{\bar{T}}}{T_{B}T_{T}}\sim\int_{0}^{d}\frac{\mu}{\bar{T}}\left\langle
q\right\rangle_{h}\,dz,$ (110)
still applies, but unlike the boundary layer dissipation case, we do not know
how the dissipation is distributed over the interior. We therefore assume that
the dissipation in the interior can be written as $\langle q\rangle_{h}\sim
U_{H}^{3}/2H$ where $U_{H}(z)$ is the horizontally averaged horizontal
velocity and $H$ is the local pressure scale height. We don’t know how
$U_{H}(z)$ is distributed in $z$, but we argued in §B1 above that $\rho
U_{H}^{3}$ is approximately the same at the edge of both boundary layers, so a
reasonable assumption for the purposes of estimation is that
$\rho U_{H}(z)^{3}\sim\textrm{constant}\approx\rho_{B}U_{B}^{3}\approx\rho_{T}U_{T}^{3}.$ (111)
The form of $U_{H}(z)$ from our numerical results suggests this might
overestimate the dissipation integrated over the whole layer, but nevertheless
we adopt (111) for the rest of this section. Equation (110) then becomes
$\frac{F^{super}\Delta{\bar{T}}}{T_{B}T_{T}}\sim\int_{0}^{d}\frac{\bar{\rho}U_{H}^{3}}{2\bar{T}H}\,dz=\int_{0}^{d}\frac{\rho_{B}U_{B}^{3}(m+1)\Delta{\bar{T}}}{2d{\bar{T}}^{2}}\,dz=\frac{\rho_{B}U_{B}^{3}(m+1)(\Gamma-1)}{2T_{B}}.$ (112)
From (15) $F^{super}=NukT_{B}/d$, and writing $U_{B}$ in terms of the bottom
Reynolds number using (76)
$\frac{Nuk\Delta{\bar{T}}}{dT_{T}}\sim\frac{\mu^{3}Re_{B}^{3}(m+1)(\Gamma-1)}{2T_{B}\rho_{B}^{2}d^{3}}.$ (113)
Combining (12) and (16), we can write the Rayleigh number as
$Ra=\frac{c_{p}^{2}\Delta{\bar{T}}d^{2}\rho_{B}^{2}\Gamma\ln\Gamma}{\mu
k(\Gamma-1)},$ (114)
and combining this with (113) and using the definition of the Prandtl number
(51) we obtain
$\frac{NuRa}{Pr^{2}}\sim\frac{(m+1)\ln\Gamma}{2}Re_{B}^{3},$ (115)
which is the entropy balance equation in the case where the dissipation is
mainly in the bulk rather than the boundary layers. We now use the same
boundary layer balance equation as before, but since we expect bulk
dissipation only to dominate at low $Pr$, we only use (55) for the boundary
layer ratio, so (81) becomes
$\left(Re_{B}Pr\right)^{1/2}=\frac{Nu(\Gamma-1)(1+r_{s})}{\Gamma\ln\Gamma}.$
(116)
Combining (115) and (116) we get the Nusselt number in terms of the Rayleigh
number in this case,
$Nu\sim
Ra^{1/5}Pr^{1/5}\left(\frac{2}{m+1}\right)^{1/5}\frac{\Gamma^{6/5}\ln\Gamma}{(\Gamma-1)^{6/5}}(1+r_{s})^{-6/5}.$
(117)
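The elimination of $Re_{B}$ between (115) and (116) that yields (117) can be checked by direct substitution; the sketch below verifies numerically, at arbitrary positive parameter values, that the Nusselt number of (117) together with the Reynolds number implied by (116) satisfies the bulk balance (115):

```python
import sympy as sp

Ra, Pr, G, m, rs = sp.symbols('Ra Pr Gamma m r_s', positive=True)

# Nusselt number from (117), dropping the order-one proportionality constant:
Nu = (2 * Ra * Pr / (m + 1))**sp.Rational(1, 5) * G**sp.Rational(6, 5) \
     * sp.log(G) / ((G - 1) * (1 + rs))**sp.Rational(6, 5)
# Reynolds number from the boundary layer balance (116):
Re_B = (Nu * (G - 1) * (1 + rs) / (G * sp.log(G)))**2 / Pr
# Residual of the bulk dissipation balance (115):
residual = Nu * Ra / Pr**2 - (m + 1) * sp.log(G) / 2 * Re_B**3

# Evaluate at arbitrary positive parameter values (illustrative only):
vals = {Ra: 10**6, Pr: sp.Rational(1, 4), G: 2, m: sp.Rational(3, 2), rs: 3}
rel = residual.subs(vals) / (Nu * Ra / Pr**2).subs(vals)
assert abs(float(rel)) < 1e-10   # the balance is satisfied to rounding error
```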
## References
* Braginsky and Roberts (1995) BRAGINSKY, S. I. & ROBERTS, P. H. 1995 Equations governing convection in Earth’s core and the geodynamo. Geophys. Astrophys. Fluid Dyn. 79, 1–97.
* Browning et al. (2006) BROWNING, M. K., MIESCH, M. S., BRUN, A. S. & TOOMRE, J. 2006 Dynamo action in the Solar convection zone and tachocline: pumping and organization of toroidal fields. Astrophys. J. 648, L157–L160.
* Brun and Toomre (2002) BRUN, A. S. & TOOMRE, J. 2002 Turbulent convection under the influence of rotation: sustaining a strong differential rotation. Astrophys. J. 570, 865–885.
* Chong et al (2018) CHONG, K. L., WAGNER, S., KACZOROWSKI, M., SHISKINA, O. & XIA, K-Q. 2018 Effect of Prandtl number on heat transport enhancement in Rayleigh-Bénard convection under geometrical confinement. Phys. Rev. Fluids 3, 013501.
* Curbelo et al. (2019) CURBELO, J., DUARTE L., ALBOUSSIÈRE, T., DUBUFFET, F., LABROSSE, S. & RICARD, Y. 2019 Numerical solutions of compressible convection with an infinite Prandtl number: comparison of the anelastic and anelastic liquid models with the exact equations. J. Fluid Mech. 3, 646–687.
* Deardorff (1970) DEARDORFF, J. W. 1970 Convective velocity and temperature scales for the unstable planetary boundary layer and for Rayleigh convection. J. Atmos. Sci. 27, 1211–1213.
* Glatzmaier and Roberts (1995) GLATZMAIER, G. A. & ROBERTS, P. H. 1995 A three-dimensional convective dynamo solution with rotating and finitely conducting inner core and mantle. Phys. Earth Planet. Inter. 91, 63–75.
* Gough (1969) GOUGH, D. O. 1969 The anelastic approximation for thermal convection. J. Atmos. Sci. 26, 448–456.
* Grossmann and Lohse (2000) GROSSMANN, S. & LOHSE, D. 2000 Scaling in thermal convection: a unifying theory. J. Fluid Mech. 407, 27–56,
* Grossmann and Lohse (2002) GROSSMANN, S. & LOHSE, D. 2002 Prandtl and Rayleigh number dependence of the Reynolds number in turbulent thermal convection. Phys. Rev. E 66 (1), 016305\.
* Jones and Kuzanyan (2009) JONES, C. A. & KUZANYAN, K. 2009 Compressible convection in the deep atmospheres of giant planets. Icarus 204, 227–238.
* Jones et al. (2011) JONES, C. A., BORONSKI, P., BRUN, A. S., GLATZMAIER, G. A., GASTINE, T., MIESCH, M. S. & WICHT, J. 2011 Anelastic convection-driven dynamo benchmarks. Icarus 216, 120–135.
* Kessar et al. (2019) KESSAR, M., HUGHES, D.W., KERSALÉ, E., MIZERSKI, K. A. & TOBIAS, S. M. 2019 Scale selection in the stratified convection of the solar photosphere. Astrophys. J. 874, 103–117.
* Kolmogorov (1941a) KOLMOGOROV, A. N. 1941a The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers. Proceedings of the USSR Academy of Sciences 30, 299–303.
* Kolmogorov (1941b) KOLMOGOROV, A. N., 1941b Dissipation of Energy in the Locally Isotropic Turbulence. Proceedings of the USSR Academy of Sciences 32, 16–18.
* Korre et al. (2017) KORRE, L., BRUMMELL, N. & GARAUD, P. 2017 Weakly non-Boussinesq convection in a gaseous spherical shell. Phys. Rev. E 96 (3), 033104\.
* Lantz and Fan (1999) LANTZ, S. R. & FAN, Y. 1999 Anelastic magnetohydrodynamic equations for modelling solar and stellar convection zones. Astroph. J. Supp. Series 121, 247–264.
* Menaut et al. (2019) MENAUT, R., CORRE, Y., HUGUET, L., Le REUN, T., ALBOUSSIÈRE, T., BERGMAN, M., DEGUEN, R., LABROSSE, S. & MOULIN, M. 2019 Experimental study of convection in the compressible regime. Phys. Rev. Fluids 4, 033502\.
* Miesch et al. (2000) MIESCH, M. S., ELIOTT, J. R., TOOMRE, J., CLUNE, T. L., GLATZMAIER, G. A. & GILMAN, P. A. 2000 Three-dimensional Spherical Simulations of Solar Convection. I. Differential Rotation and Pattern Evolution Achieved with Laminar and Turbulent States. Astrophys. J. 532, 593–615.
* Ogura and Phillips (1962) OGURA, Y. & PHILLIPS, N. A. 2000 Scale analysis of deep and shallow convection in the atmosphere. J. Atmos. Sci. 19, 173–179.
* Qiu and Tong (2001) QIU, X.-L. & TONG, P. 2001 Large-scale velocity structures in turbulent thermal convection. Phys. Rev. E 64 (3), 036304\.
* Siggia (1994) SIGGIA. E. D. 1994 High Rayleigh number convection. Ann. Rev. Fluid Mech. 26, 137–168.
* Silano et al. (2010) SILANO, G., SREENIVASAN, K. R. & VERZICCO, R. 2010 Numerical simulations of Rayleigh-Bénard convection for Prandtl numbers between $10^{-1}$ and $10^{4}$ and Rayleigh numbers between $10^{5}$ and $10^{9}$. J. Fluid Mech. 662, 409–446.
* Spiegel and Veronis (1960) SPIEGEL, E. A. & VERONIS, G. 1960 On the Boussinesq approximation for a compressible fluid. Astrophys. J., 131, 442–447.
* Stevens et al. (2013) STEVENS, R. J. A. M., VAN DER POEL, E. P., GROSSMANN, S. & LOHSE, D. (2013) The unifying theory of scaling in thermal convection: updated prefactors. J. Fluid Mech. 730, 295–308.
* Toomre et al. (1976) TOOMRE, J., ZAHN, J.-P., LATOUR, J. & SPIEGEL, E. A. 1976 Stellar convection theory. II - Single-mode study of the second convection zone in an A-type star. Astrophys. J. 207, 545–563.
* Verhoeven et al. (2015) VERHOEVEN, J., WIESEHÖFER, T. & STELLMACH, S. 2015 Anelastic versus fully compressible turbulent Rayleigh-Bénard convection. Astrophys. J. 805, 62–75.
# An Empirical Study of Cross-Lingual Transferability in Generative Dialogue
State Tracker
Yen-Ting Lin, Yun-Nung Chen
###### Abstract
There has been a rapid development in data-driven task-oriented dialogue
systems with the benefit of large-scale datasets. However, the progress of
dialogue systems in low-resource languages lags far behind due to the lack of
high-quality data. To advance the cross-lingual technology in building dialog
systems, DSTC9 introduces the task of cross-lingual dialog state tracking,
where we test the DST module in a low-resource language given the rich-
resource training dataset.
This paper studies the transferability of a cross-lingual generative dialogue
state tracking system using a multilingual pre-trained seq2seq model. We
experiment under different settings, including joint-training or pre-training
on cross-lingual and cross-ontology datasets. We also observe low cross-lingual transferability of our approaches and provide an investigation and discussion.
## Introduction
Dialogue state tracking is one of the essential building blocks in task-oriented dialogue systems. With active research breakthroughs in data-
driven task-oriented dialogue technology and the popularity of personal
assistants in the market, the need for task-oriented dialogue systems capable
of doing similar services in low-resource languages is expanding. However,
building a new dataset for task-oriented dialogue systems in a low-resource language is even more laborious and costly. It would be desirable to use
existing data in a high-resource language to train models in low-resource
languages. Therefore, if cross-lingual transfer learning can be applied
effectively and efficiently on dialogue state tracking, the development of
task-oriented dialogue systems on low-resource languages can be accelerated.
The Ninth Dialog System Technology Challenge (DSTC9) Track 2 (Gunasekara et al. 2020) proposed a cross-lingual multi-domain dialogue state tracking task. The
main goal is to build a cross-lingual dialogue state tracker with a rich
resource language training set and a small development set in the low resource
language. The organizers adopt MultiWOZ 2.1 (Eric et al. 2019) and CrossWOZ
(Zhu et al. 2020) as the dataset and provide the automatic translation of
these two datasets for development. In this paper, our task is to build a cross-lingual dialogue state tracker in the setting of CrossWOZ-en, the English translation of CrossWOZ. In the following, we will refer to datasets in different languages, such as MultiWOZ-zh and CrossWOZ-en, as cross-lingual datasets, and to datasets with different ontologies, such as MultiWOZ-en and CrossWOZ-en, as cross-ontology datasets.
Cross-lingual transfer learning promises to transfer knowledge across different languages. However, in our experiments, we encounter tremendous impediments in joint training on cross-lingual or even cross-ontology
datasets. To the best of our knowledge, all previous cross-lingual dialogue
state trackers approach DST as a classification problem (Mrkšić et al. 2017; Liu et al. 2019), which does not guarantee successful transfer to our generative dialogue state tracker.
The contributions of this paper are three-fold:
* •
This paper explores the cross-lingual generative dialogue state tracking
system’s transferability.
* •
This paper compares the joint-training and pre-train-then-fine-tune methods on cross-lingual and cross-ontology datasets.
* •
This paper analyzes and opens a discussion on the colossal performance drop when training with cross-lingual or cross-ontology datasets.
## Problem Formulation
In this paper, we study the cross-lingual multi-domain dialogue state tracking
task. Here we define the multi-domain dialogue state tracking problem and
introduce the cross-lingual DST datasets.
### Multi-domain Dialogue State Tracking
The dialogue state in the multi-domain dialogue state tracking is a set of
(domain, slot name, value) triplets, where the domain indicates the service
that the user is requesting, slot name represents the goal from the user, and
value is the explicit constraint of the goal. For dialogue states not
mentioned in the dialogue context, we assign a null value, $\emptyset$, to the
corresponding values. For example, (Hotel, type, luxury) summarizes one of the
user’s constraints of booking a luxury hotel, and (Attraction, fee, 20 yuan or less) means the user wants to find a tourist attraction with a ticket price of 20 yuan or less. An example is presented in Figure 1.
Our task is to predict the dialogue state at the $t^{th}$ turn, $\mathcal{B}_{t}=\{(\mathcal{D}^{i},\mathcal{S}^{i},\mathcal{V}^{i})\,|\,1\leq i\leq I\}$, where $I$ is the number of states to be tracked, given the historical dialogue context up to now, defined as $\mathcal{C}_{t}=\{\mathcal{U}_{1},\mathcal{R}_{1},\mathcal{U}_{2},\mathcal{R}_{2},\dots,\mathcal{R}_{t-1},\mathcal{U}_{t}\}$, where $\mathcal{U}_{i}$ and $\mathcal{R}_{i}$ are the user utterance and system response, respectively, at the $i^{th}$ turn.
Figure 1: Illustration of dialogue state tracking. The dialogue is sampled
from CrossWOZ-en.
### Dataset
MultiWOZ is the task-oriented dataset often used as the benchmark dataset for
task-oriented dialogue system tasks, including dialogue state tracking,
dialogue policy optimization, and NLG. MultiWOZ 2.1 is a cleaner version of
the previous counterpart with more than 30% updates in dialogue state
annotations. CrossWOZ is a Chinese multi-domain task-oriented dataset with
more than 6,000 dialogues, five domains, and 72 slots. Both of the above
datasets collect human-to-human dialogues in a Wizard-of-Oz setting. Table 1 lists the details of the datasets.
In DSTC9 Track 2, the organizers translate MultiWOZ and CrossWOZ into Chinese and English, respectively, and we refer to the translated versions as MultiWOZ-zh and CrossWOZ-en. The public and private
test of CrossWOZ-en in DSTC9 has 250 dialogues, but only the public test set
has annotations. Therefore, we use the public one as the test set in our
experiments.
Metric | MultiWOZ | CrossWOZ
---|---|---
Language | English | Chinese (Simplified)
# Dialogues | 8,438 | 5,012
Total # turns | 113,556 | 84,692
# Domains | 7 | 5
# Slots | 24 | 72
# Values | 4,510 | 7,871
Table 1: Statistics of MultiWOZ and CrossWOZ. Note that the translated versions of these two datasets have the same statistics.
## Related Work
### Dialogue State Tracker
Traditionally, dialogue state tracking has depended on fixed-vocabulary approaches, in which retrieval-based models rank slot candidates from a given slot ontology (Ramadan, Budzianowski, and Gašić 2018; Lee, Lee, and Kim 2019; Shan et al. 2020). However, recent research efforts in DST have moved towards generation-based approaches, where models generate slot values given the dialogue history. (Wu et al. 2019) proposed a generative multi-domain DST model with a copy mechanism, which ensures the capability to generate unseen slot values.
(Kim et al. 2019) introduced a selectively overwriting mechanism, a memory-
based approach to increase efficiency in training and inference. (Le, Socher,
and Hoi 2020) adopted a non-autoregressive architecture to model potential
dependencies among (domain, slot) pairs and reduce real-time DST latency
significantly. (Hosseini-Asl et al. 2020) took advantage of the powerful generation ability of large-scale auto-regressive language models and formulated the DST problem as a causal language modeling problem.
### Multilingual Transfer Learning in Task-oriented Dialogue
(Schuster et al. 2019) introduced a multilingual multi-domain NLU dataset.
(Mrkšić et al. 2017) annotated two additional languages to WOZ 2.0 (Mrkšic et
al. 2017) and (Liu et al. 2019) proposed a mixed-language training for cross-
lingual NLU and DST tasks. Noted that all previous multilingual DST methods
modeled the dialogue state tracking task as a classification problem. (Mrkšić
et al. 2017)(Liu et al. 2019)
## Methods
This paper considers multi-domain dialogue state tracking as a sequence generation task by adopting a sequence-to-sequence framework.
### Architecture
Following (Liu et al. 2020), we use the sequence-to-sequence Transformer
architecture (Vaswani et al. 2017) with 12 layers in each encoder and decoder.
We denote seq2seq as our model in the following.
### DST as Sequence Generation
The input sequence is the concatenation of the dialogue context, $\mathbf{x^{t}}=\{\mathcal{U}_{1};\mathcal{R}_{1};\mathcal{U}_{2};\mathcal{R}_{2};\dots;\mathcal{R}_{t-1};\mathcal{U}_{t}\}$, where ; denotes the concatenation of texts.
For the target dialogue state, we only consider the slots whose values are non-empty. The target sequence consists of the concatenation of the (domain, slot, value) triplets with a non-empty value, $\mathbf{y^{t}}=\{\mathcal{D}^{i};\mathcal{S}^{i};\mathcal{V}^{i}\,|\,1\leq i\leq I\wedge\mathcal{V}^{i}\neq\emptyset\}$. The model prediction is
$\mathbf{\hat{y}^{t}}=seq2seq(\mathbf{x^{t}}).$
We fix the order of the (domain, slot name, value) triplets for consistency.
The training objective is to minimize the cross-entropy loss between the
ground truth sequence $\mathbf{y^{t}}$ and the predicted sequence
$\mathbf{\hat{y}^{t}}$.
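The serialization above can be sketched in a few lines of Python. The separator string and the sort-based ordering are illustrative assumptions; the paper fixes an order for consistency but does not specify which one:

```python
def serialize_state(state, sep=" ; "):
    """Flatten (domain, slot, value) triplets into the target sequence y^t,
    keeping only triplets with a non-empty value, in a fixed order."""
    kept = sorted((d, s, v) for d, s, v in state if v)  # empty values dropped
    return sep.join(f"{d} {s} {v}" for d, s, v in kept)

def serialize_context(turns, sep=" ; "):
    """Concatenate the dialogue history U1;R1;...;Ut into the input x^t."""
    return sep.join(turns)

state = [("Hotel", "type", "luxury"),
         ("Attraction", "fee", ""),              # null value: not emitted
         ("Attraction", "name", "Summer Palace")]
print(serialize_state(state))
# -> Attraction name Summer Palace ; Hotel type luxury
```

Training then minimizes cross-entropy between this serialized target and the decoder output, exactly as for ordinary sequence-to-sequence generation.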
### Post-processing
The predicted sequence $\mathbf{\hat{y}^{t}}$ is then parsed by heuristic rules to construct $\hat{\mathcal{B}_{t}}=\{\mathcal{D}^{i};\mathcal{S}^{i};\hat{\mathcal{V}}^{i}\,|\,1\leq i\leq I\}$. Using the possible values of slots in the ontology, for predicted slot values $\hat{\mathcal{V}}$ that do not appear in the ontology, we choose the ontology value that best matches our prediction. (This is implemented with difflib.get_close_matches in Python.)
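The `difflib.get_close_matches` step can be sketched as follows. The wrapper function is a hypothetical illustration, and the 0.6 cutoff is difflib's default rather than a value the paper specifies:

```python
from difflib import get_close_matches

def snap_to_ontology(value, ontology_values, cutoff=0.6):
    """Map a generated slot value onto the closest value in the ontology;
    values already present in the ontology pass through unchanged."""
    if value in ontology_values:
        return value
    matches = get_close_matches(value, ontology_values, n=1, cutoff=cutoff)
    # If nothing in the ontology is similar enough, keep the raw prediction.
    return matches[0] if matches else value

hotel_types = ["luxury", "economy", "boutique"]
print(snap_to_ontology("luxry", hotel_types))   # small generation typo
# -> luxury
```

This post-processing only helps for near-misses (typos, minor rephrasings); values that are semantically right but lexically distant from the ontology entry are left untouched.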
## Experiments
In the following section, we describe evaluation metrics, experiment setting
and introduce experimental results.
### Evaluation Metrics
We use joint goal accuracy and slot F1 as our metrics to evaluate our dialogue
state tracking system.
* •
Joint Goal Accuracy: The proportion of dialogue turns where predicted dialogue
states match entirely to the ground truth dialogue states.
* •
Slot F1: The macro-averaged F1 score for all slots in each turn.
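The two metrics can be sketched as below, assuming dialogue states are represented as sets of (domain, slot, value) triplets; the challenge's exact averaging scheme may differ in detail:

```python
def joint_goal_accuracy(pred_states, gold_states):
    """Proportion of turns whose predicted state matches the gold state exactly."""
    hits = sum(set(p) == set(g) for p, g in zip(pred_states, gold_states))
    return hits / len(gold_states)

def slot_f1(pred, gold):
    """F1 between the predicted and gold triplet sets for a single turn."""
    pred, gold = set(pred), set(gold)
    if not pred and not gold:
        return 1.0  # both states empty: perfect agreement
    tp = len(pred & gold)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

gold = [{("Hotel", "type", "luxury")}, {("Taxi", "to", "museum")}]
pred = [{("Hotel", "type", "luxury")}, {("Taxi", "to", "station")}]
print(joint_goal_accuracy(pred, gold))  # only the first turn matches exactly
# -> 0.5
```

Joint goal accuracy is the stricter metric: a single wrong slot value zeroes out the whole turn, which is why the JGA numbers in Tables 2 and 3 are far below the slot F1 numbers.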
### Experiment Settings
We want to examine how different settings affect performance on the target low-resource dataset, CrossWOZ-en. (In our experimental circumstance, English is the low-resource language, since the original language of CrossWOZ is Chinese.) We conduct our experiments in the settings below.
* •
Direct Fine-tuning
* •
Cross-Lingual Training (CLT)
* •
Cross-Ontology Training (COT)
* •
Cross-Lingual Cross-Ontology Training (CL/COT)
* •
Cross-Lingual Pre-Training (CLPT)
* •
Cross-Ontology Pre-Training (COPT)
* •
Cross-Lingual Cross-Ontology Pre-Training (CL/COPT)
Tables 2 and 3 show the datasets used for training and pre-training in the different settings. For experiments with pre-training, all models are pre-trained on the pre-training dataset and then fine-tuned on CrossWOZ-en.
The baseline model provided by DSTC9 is SUMBT (Lee, Lee, and Kim 2019), an ontology-based model trained on CrossWOZ-en.
### Multilingual Denoising Pre-training
All of our models are initialized from mBART25 (Liu et al. 2020). mBART25 is trained with a denoising auto-encoding task on monolingual data in 25 languages, including English and Simplified Chinese. (Liu et al. 2020) show that denoising auto-encoding pre-training on multiple languages improves performance on low-resource machine translation. We hope that using mBART25 as the initial weights will improve cross-lingual transferability.
### Implementation Details
In all experiments, the models are optimized with AdamW (Loshchilov and Hutter 2017) with the learning rate set to $1\times 10^{-4}$ for 4 epochs. The best model is selected by validation loss and is used for testing.
During training, the decoder part of our model is trained in the teacher
forcing fashion (Williams and Zipser 1989). Greedy decoding (Vinyals and Le
2015) is applied at inference time. Following mBART (Liu et al. 2020), we use the SentencePiece tokenizer. Due to GPU memory constraints, source sequences longer than 512 tokens are truncated at the front and target sequences longer than 256 tokens are truncated at the back.
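The truncation rule can be written in a couple of lines. The integer IDs below are stand-ins for tokenizer output, and special-token handling is omitted:

```python
def truncate_pair(src_ids, tgt_ids, max_src=512, max_tgt=256):
    """Truncate the source at the front (keeping the most recent dialogue
    context) and the target at the back, per the limits described above."""
    return src_ids[-max_src:], tgt_ids[:max_tgt]

src, tgt = list(range(600)), list(range(300))
src_t, tgt_t = truncate_pair(src, tgt)
print(len(src_t), src_t[0], len(tgt_t), tgt_t[-1])
# -> 512 88 256 255
```

Truncating the source at the front is the natural choice for DST, since the most recent turns carry most of the information needed to update the state.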
The models are implemented in Transformers (Wolf et al. 2019), PyTorch (Paszke
et al. 2019) and PyTorch Lightning (Falcon 2019).
## Results and Discussion
The results for all experiment settings are shown in Tables 2 and 3.
Experiment | MultiWOZ-en | MultiWOZ-zh | CrossWOZ-en | CrossWOZ-zh | JGA | SF1
---|---|---|---|---|---|---
Baseline | | | ✓ | | 7.41 | 55.27*
Direct Fine-tuning | | | ✓ | | 16.82 | 66.35
CL/COT | ✓ | ✓ | ✓ | ✓ | 4.10 | 26.50
COT | ✓ | | ✓ | | 0.95 | 19.60
CLT | | | ✓ | ✓ | 0.53 | 13.45
Table 2: Experimental results on CrossWOZ-en with different training data (%). *: This slot F1 is averaged over both the public and private test dialogues. JGA: Joint Goal Accuracy. SF1: Slot F1.
Experiment | MultiWOZ-en | MultiWOZ-zh | CrossWOZ-en | CrossWOZ-zh | JGA | SF1
---|---|---|---|---|---|---
Direct Fine-tuning | | | | | 16.82 | 66.35
CL/COPT | ✓ | ✓ | | | 5.94 | 38.36
COPT | ✓ | | | | 2.52 | 27.01
CLPT | | | | ✓ | 0.11 | 15.01
Table 3: Experimental results on CrossWOZ-en with different pre-training data (%).
### Additional Training Data Causes Degeneration
Direct Fine-tuning significantly outperforms other settings, including the
official baseline. We assumed that training mBART on English and Chinese data with the same ontology would bridge the gap between the two languages and increase performance. However, in Cross-Lingual Training, training on the English and Chinese versions of CrossWOZ leads to catastrophic performance on CrossWOZ-en. In Cross-Ontology Training, we combine two datasets in the same language but with different ontologies; performance increases only marginally over Cross-Lingual Training, which shows that more extensive monolingual data with unmatched domains, slots, and ontology confuses the model during inference. In Cross-Lingual Cross-Ontology Training, we collect all four datasets for training, and the performance is still far from Direct Fine-tuning.
In conclusion, additional data deteriorates performance on CrossWOZ-en, regardless of whether the language or ontology matches.
### Does ”First Pre-training, then Fine-tuning” Help?
We hypothesize that training with additional data causes performance
degeneration, and therefore one possible improvement could be first pre-
training the model on cross-lingual / cross-ontology data and then fine-tuning
on the target dataset CrossWOZ-en. Table 3 shows the results.
Comparing COPT to COT and CL/COPT to CL/COT, we observe relative performance gains of over 37% in slot F1. The ”pre-training, then fine-tuning” framework may partially alleviate the catastrophic performance drop observed in joint training.
### Domain Performance Difference across Experiment Settings?
This section further investigates the cause of the performance decrease by
comparing the slot F1 of different models across five domains in Figure 2.
Generally speaking, in attraction, restaurant, and hotel domains, ”pre-train
then fine-tune” methods beat their ”joint training” counterparts by an
observable margin. By contrast, in the metro and taxi domains, despite poor performance overall, ”joint training” settings beat their ”pre-train then fine-tune” counterparts.
The only two trackable slots in the metro and taxi domains, ”from” and ”to”, usually take the address or name of a building and are highly non-transferable across datasets. We conjecture that pre-training on cross-lingual or cross-ontology datasets does not help, or even hurts, those non-transferable slots.
Figure 2: Slot F1 across 5 domains in CrossWOZ-en in different settings.
## Conclusion
In this paper, we build a cross-lingual multi-domain generative dialogue state
tracker with multilingual seq2seq to test on CrossWOZ-en and investigate our
tracker’s transferability under different training settings. We find that jointly training the dialogue state tracker on cross-lingual or cross-ontology data degrades performance. Pre-training on cross-lingual or cross-ontology data and then fine-tuning may alleviate the problem, and we find empirical evidence of a relative improvement in slot F1. A finding from the domain performance shift is that performance on some non-transferable slots, such as name, from, and to, may be limited by the pre-training approach. A future research direction is to investigate why performance declines so significantly in joint training and to try to bridge the gap.
## References
* Eric et al. (2019) Eric, M.; Goel, R.; Paul, S.; Kumar, A.; Sethi, A.; Ku, P.; Goyal, A. K.; Agarwal, S.; Gao, S.; and Hakkani-Tur, D. 2019. MultiWOZ 2.1: A Consolidated Multi-Domain Dialogue Dataset with State Corrections and State Tracking Baselines.
* Falcon (2019) Falcon, W. 2019. PyTorch Lightning. _GitHub_. URL https://github.com/PyTorchLightning/pytorch-lightning.
* Gunasekara et al. (2020) Gunasekara, C.; Kim, S.; D’Haro, L. F.; Rastogi, A.; Chen, Y.-N.; Eric, M.; Hedayatnia, B.; Gopalakrishnan, K.; Liu, Y.; Huang, C.-W.; Hakkani-Tür, D.; Li, J.; Zhu, Q.; Luo, L.; Liden, L.; Huang, K.; Shayandeh, S.; Liang, R.; Peng, B.; Zhang, Z.; Shukla, S.; Huang, M.; Gao, J.; Mehri, S.; Feng, Y.; Gordon, C.; Alavi, S. H.; Traum, D.; Eskenazi, M.; Beirami, A.; Eunjoon; Cho; Crook, P. A.; De, A.; Geramifard, A.; Kottur, S.; Moon, S.; Poddar, S.; and Subba, R. 2020. Overview of the Ninth Dialog System Technology Challenge: DSTC9 URL https://arxiv.org/abs/2011.06486.
* Hosseini-Asl et al. (2020) Hosseini-Asl, E.; McCann, B.; Wu, C.-S.; Yavuz, S.; and Socher, R. 2020. A Simple Language Model for Task-Oriented Dialogue URL http://arxiv.org/abs/2005.00796.
* Kim et al. (2019) Kim, S.; Yang, S.; Kim, G.; and Lee, S.-W. 2019. Efficient Dialogue State Tracking by Selectively Overwriting Memory. _arXiv_ URL http://arxiv.org/abs/1911.03906.
* Le, Socher, and Hoi (2020) Le, H.; Socher, R.; and Hoi, S. C. H. 2020. Non-Autoregressive Dialog State Tracking 1–21. URL http://arxiv.org/abs/2002.08024.
* Lee, Lee, and Kim (2019) Lee, H.; Lee, J.; and Kim, T.-Y. 2019. SUMBT: Slot-Utterance Matching for Universal and Scalable Belief Tracking 5478–5483. doi:10.18653/v1/p19-1546.
* Liu et al. (2020) Liu, Y.; Gu, J.; Goyal, N.; Li, X.; Edunov, S.; Ghazvininejad, M.; Lewis, M.; and Zettlemoyer, L. 2020. Multilingual Denoising Pre-training for Neural Machine Translation URL https://arxiv.org/abs/2001.08210.
* Liu et al. (2019) Liu, Z.; Winata, G. I.; Lin, Z.; Xu, P.; and Fung, P. 2019. Attention-Informed Mixed-Language Training for Zero-shot Cross-lingual Task-oriented Dialogue Systems. _arXiv_ URL http://arxiv.org/abs/1911.09273.
* Loshchilov and Hutter (2017) Loshchilov, I.; and Hutter, F. 2017. Decoupled Weight Decay Regularization URL http://arxiv.org/abs/1711.05101.
* Mrkšic et al. (2017) Mrkšic, N.; Séaghdha, D.; Wen, T. H.; Thomson, B.; and Young, S. 2017\. Neural belief tracker: Data-driven dialogue state tracking. _ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers)_ 1: 1777–1788. doi:10.18653/v1/P17-1163.
* Mrkšić et al. (2017) Mrkšić, N.; Vulić, I.; Séaghdha, D. Ó.; Leviant, I.; Reichart, R.; Gašić, M.; Korhonen, A.; and Young, S. 2017. Semantic Specialisation of Distributional Word Vector Spaces using Monolingual and Cross-Lingual Constraints. _arXiv_ URL http://arxiv.org/abs/1706.00374.
* Paszke et al. (2019) Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; Desmaison, A.; Kopf, A.; Yang, E.; DeVito, Z.; Raison, M.; Tejani, A.; Chilamkurthy, S.; Steiner, B.; Fang, L.; Bai, J.; and Chintala, S. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Wallach, H.; Larochelle, H.; Beygelzimer, A.; d'Alché-Buc, F.; Fox, E.; and Garnett, R., eds., _Advances in Neural Information Processing Systems 32_ , 8024–8035. Curran Associates, Inc. URL http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf.
* Ramadan, Budzianowski, and Gašić (2018) Ramadan, O.; Budzianowski, P.; and Gašić, M. 2018. Large-Scale Multi-Domain Belief Tracking with Knowledge Sharing. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , 432–437. Melbourne, Australia: Association for Computational Linguistics. doi:10.18653/v1/P18-2069. URL https://www.aclweb.org/anthology/P18-2069.
* Schuster et al. (2019) Schuster, S.; Gupta, S.; Shah, R.; and Lewis, M. 2019. Cross-lingual Transfer Learning for Multilingual Task Oriented Dialog. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , 3795–3805. Minneapolis, Minnesota: Association for Computational Linguistics. doi:10.18653/v1/N19-1380. URL https://www.aclweb.org/anthology/N19-1380.
* Shan et al. (2020) Shan, Y.; Li, Z.; Zhang, J.; Meng, F.; Feng, Y.; Niu, C.; and Zhou, J. 2020. A Contextual Hierarchical Attention Network with Adaptive Objective for Dialogue State Tracking 6322–6333. URL http://arxiv.org/abs/2006.01554.
* Vaswani et al. (2017) Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention Is All You Need.
* Vinyals and Le (2015) Vinyals, O.; and Le, Q. 2015. A neural conversational model. _arXiv preprint arXiv:1506.05869_ .
* Williams and Zipser (1989) Williams, R. J.; and Zipser, D. 1989. A learning algorithm for continually running fully recurrent neural networks. _Neural computation_ 1(2): 270–280.
* Wolf et al. (2019) Wolf, T.; Debut, L.; Sanh, V.; Chaumond, J.; Delangue, C.; Moi, A.; Cistac, P.; Rault, T.; Louf, R.; Funtowicz, M.; Davison, J.; Shleifer, S.; von Platen, P.; Ma, C.; Jernite, Y.; Plu, J.; Xu, C.; Scao, T. L.; Gugger, S.; Drame, M.; Lhoest, Q.; and Rush, A. M. 2019. HuggingFace’s Transformers: State-of-the-art Natural Language Processing. _ArXiv_ abs/1910.03771.
* Wu et al. (2019) Wu, C.-S.; Madotto, A.; Hosseini-Asl, E.; Xiong, C.; Socher, R.; and Fung, P. 2019\. Transferable Multi-Domain State Generator for Task-Oriented Dialogue Systems 808–819. doi:10.18653/v1/p19-1078.
* Zhu et al. (2020) Zhu, Q.; Zhang, W.; Liu, T.; and Wang, W. Y. 2020. Counterfactual Off-Policy Training for Neural Response Generation URL http://arxiv.org/abs/2004.14507.
# Marangoni instability of a drop in a stably stratified liquid
Yanshen Li Physics of Fluids group, Max-Planck Center
Twente for Complex Fluid Dynamics, Department of Science and Technology, Mesa+
Institute, and J. M. Burgers Centre for Fluid Dynamics, University of Twente,
P.O. Box 217, 7500 AE Enschede, The Netherlands Christian Diddens Physics of
Fluids group, Max-Planck Center Twente for Complex Fluid Dynamics, Department
of Science and Technology, Mesa+ Institute, and J. M. Burgers Centre for Fluid
Dynamics, University of Twente, P.O. Box 217, 7500 AE Enschede, The
Netherlands Department of Mechanical Engineering, Eindhoven University of
Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands Andrea
Prosperetti Physics of Fluids group, Max-Planck Center Twente for Complex
Fluid Dynamics, Department of Science and Technology, Mesa+ Institute, and J.
M. Burgers Centre for Fluid Dynamics, University of Twente, P.O. Box 217, 7500
AE Enschede, The Netherlands Department of Mechanical Engineering, University
of Houston, Texas 77204-4006, USA Detlef Lohse Physics of
Fluids group, Max-Planck Center Twente for Complex Fluid Dynamics, Department
of Science and Technology, Mesa+ Institute, and J. M. Burgers Centre for Fluid
Dynamics, University of Twente, P.O. Box 217, 7500 AE Enschede, The
Netherlands Max Planck Institute for Dynamics and Self-Organization, Am
Faßberg 17, 37077 Göttingen, Germany
###### Abstract
Marangoni instabilities can emerge when a liquid interface is subjected to a
concentration or temperature gradient. It is generally believed that for these
instabilities bulk effects like buoyancy are negligible as compared to
interfacial forces, especially on small scales. Consequently, the effect of a
stable stratification on the Marangoni instability has hitherto been ignored.
Here we report, for an immiscible drop immersed in a stably stratified
ethanol-water mixture, a new type of oscillatory solutal Marangoni instability
which is triggered once the stratification has reached a critical value. We
experimentally explore the parameter space spanned by the stratification
strength and the drop size and theoretically explain the observed crossover
from levitating to bouncing by balancing the advection and diffusion around
the drop. Finally, the effect of the stable stratification on the Marangoni
instability is surprisingly amplified in confined geometries, leading to an
earlier onset.
A concentration or temperature gradient applied to an interface can induce a
Marangoni instability of the motionless state, resulting in a steady
convection. Similarly, the steady state Marangoni convection can undergo
another instability, leading to an oscillatory motion rednikov1998two . Since
the first quantitative analysis in 1958 pearson1958convection , Marangoni
instabilities have been studied extensively due to their relevance for liquid
extraction sternling1959interfacial ; groothuis1960influence ;
rother1999effect ; berejnov2002spontaneous ; jain2011recent , coating
techniques pearson1958convection ; yarin1995surface ; demekhin2006suppressing
, metal processing gupta1992pore ; ratke2005theoretical ; zhang2006indirect
and crystal growth schwabe1978experiments ; schwabe1979some ;
chang1979thermocapillary ; chun1979experiments ; schwabe1982studies ;
preisser1983steady ; kamotani1984oscillatory , etc. Marangoni instabilities
are also the main mechanism to drive the self-propulsion of active drops
rednikov1994active ; rednikov1994drop ; herminghaus2014interfacial ;
yoshinaga2014spontaneous ; ryazantsev2017thermo ; maass2016swimming ;
morozov2019self , which have attracted lots of recent interest. Such drops are
an example of the rich physicochemical hydrodynamics of droplets far from
equilibrium lohse2020physicochemical which are very relevant for food
processing degner2013influence ; degner2014factors and modelling biological
systems maass2016swimming , etc.
Depending on the application, Marangoni instabilities have been investigated
in different configurations, such as a horizontal interface between two fluid
layers pearson1958convection ; sternling1959interfacial ;
reichenbach1981linear ; takashima1981surface ; levchenko1981instability ;
nepomnyashchii1983thermocapillary ; chu1988sustained ; chu1989transverse ;
hennenberg1992transverse ; rednikov1998two , the surface of a falling film on
a tilted plate nepomnyashchy1976wavy ; chang1994wave ; kliakhandler1997viscous
; miladinova2005effects ; demekhin2006suppressing , a vertical interface of a
liquid column schwabe1978experiments ; schwabe1979some ;
chang1979thermocapillary ; chun1979experiments ; schwabe1982studies ;
preisser1983steady ; kamotani1984oscillatory , and for drops submerged in a
solution rednikov1994active ; rednikov1994drop ; herminghaus2014interfacial ;
yoshinaga2014spontaneous ; ryazantsev2017thermo ; maass2016swimming ;
morozov2019self ; thanasukarn2004impact ; ghosh2008factors ;
degner2013influence ; degner2014factors ; dedovets2018five , etc. In many of
these situations, these systems are subjected to a stabilizing
temperature/concentration gradient takashima1981surface ;
levchenko1981instability ; demekhin2006suppressing ; schwabe1978experiments ;
schwabe1979some ; chang1979thermocapillary ; chun1979experiments ;
schwabe1982studies ; preisser1983steady ; kamotani1984oscillatory ;
chu1988sustained ; chu1989transverse , which induces a continuously stable
density stratification. However, except for a few cases in the horizontal
interface configuration [welander1964convective; wierschem2000internal;
rednikov2000rayleigh], the effect of such a stable density stratification on
Marangoni convection has usually been ignored, due to the generally accepted
view that on small scales bulk effects like buoyancy are negligible
[nepomnyashchy2012interfacial]. Here we report, for an immiscible drop immersed
in an ethanol-water mixture, that the stable stratification can actually
trigger an oscillatory instability once it exceeds a critical value.
Surprisingly, this critical value decreases in a confined geometry,
implying that the effect of the stable stratification is actually amplified on
small scales. Our findings demonstrate that a stable stratification can
strongly affect Marangoni convection and call for further studies in related
geometries.
Figure 1: Bouncing and levitating drops in a linearly and stably stratified
mixture of ethanol (lighter) and water (heavier). (a) Snapshots of two 5 cSt
silicone oil drops at the given times after they were released in the mixture.
The larger drop bounces at $h<3\,\mathrm{mm}$, while the smaller drop is
levitating at a higher position $h\approx 8.7\,\mathrm{mm}$. The snapshots are
taken from one experiment with two drops. To better show them, the upper/lower
half of the snapshots are shown with different scales. (b) Drop’s height $h$
as a function of time $t$ for different drop radii $R$ after the initial
sinking period. $h=0$ is the position where the density of the drop equals
that of the mixture. The filled circles represent the relative size of the
drops. (c) Flow field around a levitating drop ($R=31\pm 1\,\mu\mathrm{m}$)
measured by PIV, in a mixture with
$\mathrm{d}w_{\mathrm{e}}/\mathrm{d}y\approx 5\,\mathrm{m}^{-1}$. The
resolution is not high enough to resolve the velocity close to the drop’s
surface. (d) The ethanol weight fraction $w_{\mathrm{e}}$ at the corresponding
height, with $\mathrm{d}w_{\mathrm{e}}/\mathrm{d}y\approx 5\,\mathrm{m}^{-1}$.
(e) A sketch of the levitating drop (with radius $R$ and density
$\rho^{\prime}$) and the ethanol concentration around it. Deeper red means
higher ethanol concentration. The shaded ring inside the dashed circle
represents the kinematic boundary layer with thickness $\delta$, set by the
Marangoni velocity $V_{\mathrm{M}}$. The ethanol concentration inside this
layer is enhanced and homogenized by Marangoni advection bringing down the
ethanol-rich liquid. The Marangoni flow is represented by the solid arrows.
Dashed arrows represent diffusion across this layer. $\rho$ is the
representative density inside this layer, and $\rho^{*}$ is the undisturbed
density in the far field. $\mu$ and $\mu^{\prime}$ are the viscosities of the
mixture and the drop, respectively. (f) Interfacial tension
$\sigma(w_{\mathrm{e}})$ between $5\,\mathrm{cSt}$ silicone oil and the
ethanol-water mixture. Each point is an average of six measurements and the
error bar is the standard deviation. The solid line is a polynomial fit to the
data points.
To determine the onset of the Marangoni instability, we experimentally explore
the parameter space spanned by the concentration gradient and the drop radius
$R$. Using the double-bucket method [oster1965density], linearly stratified
liquid mixtures are prepared in a cubic glass container (Hellma, 704.001-OG,
Germany) with inner width $L=30\,\mathrm{mm}$, filled to different depths
depending on the degree of stratification. The ethanol weight fraction
$w_{\mathrm{e}}$ at each height is measured by laser deflection
[lin2013one; li2019bouncing], from which the gradient of the ethanol weight
fraction $\mathrm{d}w_{\mathrm{e}}/\mathrm{d}y$ is calculated. The
concentration gradient $\mathrm{d}w_{\mathrm{e}}/\mathrm{d}y$ is varied from
$\sim 3\,\mathrm{m}^{-1}$ to $\sim 130\,\mathrm{m}^{-1}$, corresponding to
density gradients ranging from $-480\,\mathrm{kg/m^{4}}$ to
$-4200\,\mathrm{kg/m^{4}}$. $5\,\mathrm{cSt}$ silicone oil (Sigma-Aldrich,
Germany) is injected through a thin needle (outer diameter
$0.515\,\mathrm{mm}$) to generate drops of different radii $R$. The drops are
released from the top of the stratified mixtures, and their trajectories are
recorded by a side-view camera. During the measurements, only one single drop
exists in the container at a time. The silicone oil has density
$\rho^{\prime}=913\,\mathrm{kg/m^{3}}$ and viscosity
$\mu^{\prime}=4.6\,\mathrm{mPa\cdot s}$.
Two typical behaviors are observed after the initial sinking phase. See Fig.
1(a) for the successive snapshots of two silicone oil drops in a mixture with
$\mathrm{d}w_{\mathrm{e}}/\mathrm{d}y\approx 5\,\mathrm{m}^{-1}$:
While a smaller drop ($R=69\pm 2\,\mu\mathrm{m}$) stays at a fixed position
around $h\approx 8.7\,\mathrm{mm}$, a larger drop
($R=454\pm 2\,\mu\mathrm{m}$) bounces continuously in the range
$0\,\mathrm{mm}<h<3\,\mathrm{mm}$. Here $h=0$ marks the position where the
density of the oil ($\rho^{\prime}=913\,\mathrm{kg/m^{3}}$) equals that of the
mixture (at $w_{\mathrm{e}}\approx 49\,\%$). The drop’s position $h(t)$ as a
function of time $t$ in the same stratified liquid and the ethanol weight
fraction $w_{\mathrm{e}}$ at the corresponding height are respectively shown
in Fig. 1(b) and (d). The smallest drop
($R_{1}\approx 44\,\mu\mathrm{m}$) is levitating at
$h\approx 9.1\,\mathrm{mm}$. As the drop size increases, it levitates at a
lower position, until above a critical radius $R_{\mathrm{cr}}$ it starts to
bounce instead of levitating. If its size is further increased, the drop
bounces around a lower position (but still with $h>0$).
The smaller drops are able to levitate above the density-matched position
$h=0$ against gravity because of a stable Marangoni flow around them, as shown
in Fig. 1(c). The flow field is obtained by PIV measurements for a drop
levitating in the gradient
$\mathrm{d}w_{\mathrm{e}}/\mathrm{d}y\approx 5\,\mathrm{m}^{-1}$. The
interfacial tension of the drop $\sigma$ decreases with increasing ethanol
concentration of the mixture $w_{\mathrm{e}}$, as shown in Fig. 1(f). This
interfacial tension gradient at the drop’s surface pulls liquid downwards,
generating a viscous force acting against gravity, which levitates the drop.
When the drop becomes large enough, however, the equilibrium becomes
oscillatory, and the drop starts to bounce between two different levels. Thus,
the transition from a levitating drop to a bouncing one signals the onset of
the instability.
While exploring the parameter space, we use an easily distinguishable
criterion to determine whether a drop is bouncing: If the drop’s bouncing
amplitude $h_{\mathrm{A}}$ is larger than its radius $R$, then the drop is
considered to be bouncing (see Supplemental Material for more details). The
results are shown in Fig. 2. Surprisingly, while for weak gradients (like for
$\mathrm{d}w_{\mathrm{e}}/\mathrm{d}y\approx 10\,\mathrm{m}^{-1}$)
there is a critical radius $R_{\mathrm{cr}}$
($\approx 80\,\mu\mathrm{m}$) above which the
Marangoni flow becomes unstable, the Marangoni flow is always unstable for
stronger gradients
$\mathrm{d}w_{\mathrm{e}}/\mathrm{d}y>23\,\mathrm{m}^{-1}$ in all
performed experiments. Note that for larger drops
($R>0.1\,\mathrm{mm}$), we could not explore the full
parameter space for
$\mathrm{d}w_{\mathrm{e}}/\mathrm{d}y<3\,\mathrm{m}^{-1}$ since it
would require an unrealistically large container.
Figure 2: Phase diagram of the levitating & bouncing drops in the parameter
space of drop radius $R$ vs. concentration gradient
$\mathrm{d}w_{\mathrm{e}}/\mathrm{d}y$. Black triangles stand for bouncing
drops, red circles for levitating ones. Measurement errors in the $x$
direction are comparable with the size of the symbols.
To get a better understanding of the onset of this Marangoni instability, the
key is to understand the coupling between the Marangoni flow and the
concentration field: The Marangoni flow is induced by the ethanol (solute)
concentration gradient around the drop, which is subjected to change by
advection (caused by the Marangoni flow itself) and diffusion, see the sketch
in Fig. 1(e). The Marangoni flow tends to homogenize the ethanol concentration
around the drop, thereby weakening its own driving force and hence itself. At
the same time, diffusion acts to restore the ethanol gradient in the vicinity
of the drop to its undisturbed value, i.e., the value it takes in the far
field. This competition between advection and diffusion around the drop
determines whether the Marangoni flow is stable or not. Furthermore, once it
becomes unstable, a temporarily strong Marangoni flow homogenizes the
concentration field around the drop, consequently weakening itself. Later the
Marangoni flow restarts once diffusion has restored the concentration field
around the drop, so that the flow is oscillatory and leads to the continuous
bouncing of the drop.
The liquid layer whose concentration is affected by the Marangoni advection is
effectively the Marangoni flow boundary layer with thickness $\delta$ (see
Fig. 1(e)). The time scale for advection to change the concentration in this
layer is the time needed for the Marangoni flow to bring down the ethanol-rich
liquid from the top: $\tau_{\mathrm{a}}\sim R/V_{\mathrm{M}}$, where
$V_{\mathrm{M}}$ is the Marangoni flow velocity at the equator of the drop.
For a drop in the concentration gradient it holds [young1959motion] (see
Supplemental Material) that $V_{\mathrm{M}}\sim-\mathrm{d}\sigma/\mathrm{d}y\cdot
R/(\mu+\mu^{\prime})$. The time scale for diffusion to restore the
concentration across this layer is $\tau_{\mathrm{d}}\sim\delta^{2}/D$, where
$D$ is the diffusivity of ethanol in water. The flow will become unstable when
advection is faster than diffusion, $\tau_{\mathrm{a}}<\tau_{\mathrm{d}}$.
Substituting the two time scales into this relation, we obtain
${V_{\mathrm{M}}R}/{D}>{R^{2}}/{\delta^{2}}$. The left hand side has the form
of a Péclet number, which is the ratio between advection and diffusion, and
which in problems of this type is referred to as the Marangoni number
$Ma=\frac{V_{\mathrm{M}}R}{D}=-\frac{\mathrm{d}\sigma}{\mathrm{d}w_{\mathrm{e}}}\frac{\mathrm{d}w_{\mathrm{e}}}{\mathrm{d}y}R^{2}\cdot\frac{1}{(\mu+\mu^{\prime})D},$
(1)
where we have used the above expression for $V_{\mathrm{M}}$ with an equal sign
and where $\mathrm{d}\sigma/\mathrm{d}w_{\mathrm{e}}$ is a material property
(see Fig. 1(f)) and $\mathrm{d}w_{\mathrm{e}}/\mathrm{d}y$ the undisturbed
ethanol gradient of the mixture. The instability criterion thus is
$Ma>{R^{2}}/{\delta^{2}}.$ (2)
The liquid within the boundary layer is lighter than its surroundings as it is
entrained from the top, and it is held in place by the Marangoni induced
viscous stress against buoyancy:
$\mu\frac{V_{\mathrm{M}}}{\delta^{2}}\sim g\Delta\rho,$ (3)
where $\Delta\rho=\rho^{*}-\rho$ is the density difference between the liquid
inside and outside of the kinematic boundary layer, see Fig. 1(e). The lighter
liquid is brought down by the Marangoni flow along the drop’s surface, so
$\Delta\rho\sim-R\cdot\mathrm{d}\rho/\mathrm{d}y$. Cancelling $\delta$ from
Eqs.(2)&(3), we obtain the instability criterion
$Ma/{Ra}^{1/2}>c,$ (4)
where
$Ra=-\frac{\mathrm{d}\rho}{\mathrm{d}y}\cdot\frac{gR^{4}}{\mu D}$ (5)
is the Rayleigh number for characteristic length $R$ and $c$ is a constant to
be determined.
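As a rough numerical sketch of this criterion (written in Python; all fluid properties below are order-of-magnitude assumptions rather than the measured values underlying Fig. 3), one can evaluate Eqs. (1), (4) and (5) for an illustrative drop:

```python
import math

# Illustrative parameters; the property values are rough assumptions.
g = 9.81              # m/s^2, gravitational acceleration
mu = 2.0e-3           # Pa*s, ethanol-water mixture viscosity (assumed)
mu_drop = 4.6e-3      # Pa*s, 5 cSt silicone oil (given in the text)
D = 1.0e-9            # m^2/s, ethanol diffusivity in the mixture (assumed)
dsigma_dwe = -0.02    # N/m per unit weight fraction, slope of Fig. 1(f) (assumed)
dwe_dy = 10.0         # 1/m, imposed ethanol weight-fraction gradient
drho_dy = -300.0      # kg/m^4, corresponding density gradient (assumed)
R = 80e-6             # m, drop radius near the reported critical size

# Marangoni velocity scale V_M ~ -(dsigma/dy) R/(mu + mu'),
# with dsigma/dy = (dsigma/dw_e)(dw_e/dy)
V_M = -dsigma_dwe * dwe_dy * R / (mu + mu_drop)

Ma = V_M * R / D                          # Eq. (1)
Ra = -drho_dy * g * R**4 / (mu * D)       # Eq. (5)

ratio = Ma / math.sqrt(Ra)
unstable = ratio > 275                    # Eq. (4) with the measured c = 275
```

With these assumed properties, $V_{\mathrm{M}}$ is of order mm/s and $Ma/\sqrt{Ra}$ of order several hundred, so whether a drop of this size levitates or bounces depends sensitively on the actual fluid properties, consistent with a critical radius near $R\approx 80\,\mu\mathrm{m}$ at this gradient.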
To calculate the Marangoni and Rayleigh numbers, ethanol weight fractions at
the positions where the drops levitate are used to obtain the viscosity $\mu$,
diffusivity $D$ and the interfacial tension $\sigma$ (see Supplemental
Material for the concentration dependence of $\mu$ and $D$). In the following,
for bouncing drops, we use values corresponding to their lowest position.
The phase diagram shown in Fig. 2 is replotted with $Ma/Ra^{1/2}$ vs. $Ra$ in
Fig. 3(a). It clearly shows that there is indeed a critical value
$(Ma/{Ra}^{1/2})_{\mathrm{cr}}$ above which the drop will always bounce, and
the instability threshold $c$ in Eq.(4) is measured to be $c=275\pm 10$ in the
range $6\times 10^{-3}\lesssim Ra\lesssim 3$. We cannot carry out experiments
for $Ra<6\times 10^{-3}$ because drops with
$R<$20\text{\,}\mathrm{\SIUnitSymbolMicro m}$$ are too small to observe. For
experiments in the range $Ra>3$, the finite size of the container comes into
play. However, we speculate that $c\approx 275$ still holds for $Ra>3$ as long
as the container is large enough. The existing data on bouncing are consistent
with this value.
Figure 3: (a) Phase diagram replotted in dimensionless numbers: $Ma/Ra^{1/2}$
vs. $Ra$. Black triangles stand for bouncing drops, red circles for levitating
ones. The blue line is the instability threshold
$(Ma/{Ra}^{1/2})_{\mathrm{cr}}=275$, above which the flow is oscillatory and
all drops bounce. The blue solid line (in the range $6\times 10^{-3}<Ra<3$) is
confirmed by experiments. Measurement errors in the $y$ direction are
comparable with the size of the symbols. (b) Phase diagram replotted with
$\mathrm{d}w_{\mathrm{e}}/\mathrm{d}y$ vs. $w_{\mathrm{e}}$, where
$w_{\mathrm{e}}$ is the ethanol weight fraction at the levitation height. The
blue curve is calculated from Eq.(6) with $c=275$. The dashed blue line in the
range $w_{\mathrm{e}}>98\,\mathrm{wt}\%$
($w_{\mathrm{e}}<50\,\mathrm{wt}\%$) corresponds to
$Ra<6\times 10^{-3}$ ($Ra>3$). Measurement errors are comparable with the size
of the symbols.
We now express our stability criterion Eq.(4) in dimensional quantities by
substituting the definition of $Ma$ and $Ra$ to obtain:
$\left(\frac{\mathrm{d}w_{\mathrm{e}}}{\mathrm{d}y}\right)_{\mathrm{cr}}=-c^{2}\left(\mu+\mu^{\prime}\right)^{2}\cdot\frac{gD}{\mu}\frac{\mathrm{d}\rho}{\mathrm{d}\sigma}\frac{\mathrm{d}w_{\mathrm{e}}}{\mathrm{d}\sigma}.$
(6)
Eq.(6) actually predicts a critical concentration gradient above which the
equilibrium is unstable. Note that remarkably the drop radius $R$ does not
enter into this equation. All the fluid properties $\mu$, $D$,
$\mathrm{d}\rho/\mathrm{d}\sigma$ and
$\mathrm{d}w_{\mathrm{e}}/\mathrm{d}\sigma$ depend on $w_{\mathrm{e}}$ – the
ethanol weight fraction at the levitation height. Thus the critical gradient
$(\mathrm{d}w_{\mathrm{e}}/\mathrm{d}y)_{\mathrm{cr}}$ as a function of
$w_{\mathrm{e}}$ is shown in Fig. 3(b) as the blue curve. The data shown in
Fig. 2 are also replotted in Fig. 3(b). As can be seen, the blue curve as
predicted by Eq.(6) nicely separates the levitating drops and the bouncing
ones. The dashed blue line in the range
$w_{\mathrm{e}}>98\,\mathrm{wt}\%$
($w_{\mathrm{e}}<50\,\mathrm{wt}\%$) corresponds to
$Ra<6\times 10^{-3}$ ($Ra>3$), i.e., the region in which we could not perform
experiments.
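The radius-independence of the threshold can be checked numerically. Squaring Eq. (4) and substituting Eqs. (1) and (5) expresses the critical gradient through the measurable slopes $\mathrm{d}\sigma/\mathrm{d}w_{\mathrm{e}}$ and $\mathrm{d}\rho/\mathrm{d}w_{\mathrm{e}}$; the short Python sketch below evaluates it with rough, assumed property values:

```python
c = 275.0             # measured instability threshold
g = 9.81              # m/s^2
mu = 2.0e-3           # Pa*s, mixture viscosity (assumed)
mu_drop = 4.6e-3      # Pa*s, 5 cSt silicone oil (given in the text)
D = 1.0e-9            # m^2/s, ethanol diffusivity (assumed)
dsigma_dwe = -0.02    # N/m per unit weight fraction (assumed)
drho_dwe = -200.0     # kg/m^3 per unit weight fraction (assumed)

# Squaring Ma/sqrt(Ra) > c and inserting Eqs. (1) and (5); the drop
# radius R cancels, leaving a critical concentration gradient:
# (dw_e/dy)_cr = c^2 (mu+mu')^2 g D (-drho/dw_e) / (mu (dsigma/dw_e)^2)
dwe_dy_cr = (c**2 * (mu + mu_drop)**2 * g * D * (-drho_dwe)
             / (mu * dsigma_dwe**2))
```

With these assumptions the critical gradient comes out of order $10\,\mathrm{m}^{-1}$, the same order of magnitude as the transition region in Fig. 2, and $R$ indeed drops out of the expression.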
The above results are all obtained in a large enough container. We will now
discuss the effect of a geometrical confinement, i.e., the dependence of our
findings on the container size $L$. Let $\mathcal{L}$ denote the maximum
extent of the flow field induced by the drop. Then $\mathcal{L}>L$ means that
the flow is confined. In the case of no confinement, i.e., $\mathcal{L}<L$,
the liquid in the far field is not disturbed by the Marangoni flow, so that
the density in the far field is maintained at $\rho^{*}$ (see Fig. 1(e)).
However, when the flow is confined, i.e., $\mathcal{L}>L$, the liquid close to
the side wall is affected by the Marangoni flow. In such a situation, because
the liquid is pulled down in the center by the drop, the liquid close to the
wall will be pushed up due to mass conservation. This effectively increases
the density $\rho^{*}$. Consequently, the density difference
$\Delta\rho=\rho^{*}-\rho$ is increased, which means that the effect of
buoyancy is amplified. According to Eqs. (3)&(4), the instability threshold
$c$ will thus decrease. Decreasing the container size $L$ or increasing
$\mathcal{L}$ both lead to a stronger confinement effect. Since for stable
stratifications $\mathcal{L}\sim\left(-{\mathrm{d}\rho}/{\mathrm{d}y}\cdot{g}/{\mu
D}\right)^{-1/4}$ [phillips1970flows; wunsch1970oceanic], one can thus
also increase the confinement effect by using very weak stratifications. We
have performed experiments for weaker gradients
$\mathrm{d}w_{\mathrm{e}}/\mathrm{d}y<3\,\mathrm{m}^{-1}$ and also in
a larger container to confirm the effect of the confinement. Indeed, for
$\mathrm{d}w_{\mathrm{e}}/\mathrm{d}y\approx 2\,\mathrm{m}^{-1}$, a
cubic container with $L=50\,\mathrm{mm}$ is already not
large enough, and the instability threshold is reduced to $c\approx 172$. A
smaller container ($L=30\,\mathrm{mm}$) further decreases
the threshold to $c\approx 157$. For
$\mathrm{d}w_{\mathrm{e}}/\mathrm{d}y\approx 1\,\mathrm{m}^{-1}$, the
geometry is more confined, and the threshold is further decreased to $c\approx
122$ in the larger container and even to $c\approx 117$ in the smaller one.
The fact that a weaker stratification leads to a more amplified effect of
buoyancy demonstrates that the stable stratification is very relevant for the
Marangoni instability, in particular on small scales where the confinement is
more relevant.
In conclusion, we have discovered a new type of oscillatory Marangoni
instability for an immiscible drop immersed in a stably stratified ethanol-
water mixture. The commonly ignored stable density stratification induced by
the concentration gradient is vital in triggering this instability. Its onset
is indicated by the transition from a levitating drop to a bouncing one. By
experimentally exploring the parameter space spanned by the concentration
gradient $\mathrm{d}w_{\mathrm{e}}/\mathrm{d}y$ and the drop radius $R$, the
instability is found to be determined by the balance between advection and
diffusion through the kinematic boundary layer set by the Marangoni flow. This
yields a critical concentration gradient as the instability criterion.
Remarkably, the critical gradient is decreased in a confined geometry, i.e.,
the effect of the stable stratification is amplified on small confined scales.
Our findings indicate that the stable stratification induced by the
corresponding concentration gradient is very relevant, especially in confined
geometries, and should be further explored in other geometries.
Our results for solutal Marangoni flows can also be extended to thermal
Marangoni flows. We found that a stabilizing temperature gradient as low as
$3\,\mathrm{K/mm}$ can trigger a similar
oscillatory instability on a bubble immersed in water. Such a low temperature
gradient is smaller than those occurring in various applications
[schwabe1978experiments; ratke2006destabilisation; dedovets2018five], where a
stabilizing temperature gradient can easily go beyond
$10\,\mathrm{K/mm}$.
We thank Chao Sun and Vatsal Sanjay for valuable discussions. We acknowledge
support from the Netherlands Center for Multiscale Catalytic Energy Conversion
(MCEC), an NWO Gravitation programme funded by the Ministry of Education,
Culture and Science of the government of the Netherlands, and the ERC-Advanced
Grant Diffusive Droplet Dynamics (DDD) with Project No. 740479.
## References
* (1) A. Y. Rednikov, P. Colinet, M. G. Velarde, and J. C. Legros, Two-layer Benard-Marangoni instability and the limit of transverse and longitudinal waves, Phys. Rev. E 57, 2872 (1998).
* (2) J. R. A. Pearson, On convection cells induced by surface tension, J. Fluid Mech. 4, 489 (1958).
* (3) C. V. Sternling and L. E. Scriven, Interfacial turbulence: Hydrodynamic instability and the Marangoni effect, AIChE J. 5, 514 (1959).
* (4) H. Groothuis and F. J. Zuiderweg, Influence of mass transfer on coalescence of drops, Chem. Eng. Sci 12, 288 (1960).
* (5) M. A. Rother and R. H. Davis, The effect of slight deformation on thermocapillary-driven droplet coalescence and growth, J. Colloid Interface Sci. 214, 297 (1999).
* (6) V. Berejnov, A. M. Leshanksy, O. M. Lavrenteva, and A. Nir, Spontaneous thermocapillary interaction of drops: Effect of surface deformation at nonzero capillary number, Phys. Fluids 14, 1326 (2002).
* (7) A. Jain and K. K. Verma, Recent advances in applications of single-drop microextraction: A review, Anal. Chim. Acta 706, 37 (2011).
* (8) A. L. Yarin, Surface-tension-driven flows at low Reynolds number arising in optoelectronic technology, J. Fluid Mech. 286, 173 (1995).
* (9) E. A. Demekhin, S. Kalliadasis, and M. G. Velarde, Suppressing falling film instabilities by Marangoni forces, Phys. Fluids 18, 042111 (2006).
* (10) A. K. Gupta, B. K. Saxena, S. N. Tiwari, and S. L. Malhotra, Pore formation in cast metals and alloys, J. Mater. Sci. 27, 853 (1992).
* (11) L. Ratke, Theoretical considerations and experiments on microstructural stability regimes in monotectic alloys, Mater. Sci. Eng. A 413, 504 (2005).
* (12) L. Zhang, Indirect methods of detecting and evaluating inclusions in steel—a review, J. Iron Steel Res. Int. 13, 1 (2006).
* (13) D. Schwabe, A. Scharmann, F. Preisser, and R. Oeder, Experiments on surface tension driven flow in floating zone melting, J. Cryst. Growth 43, 305 (1978).
* (14) D. Schwabe and A. Scharmann, Some evidence for the existence and magnitude of a critical Marangoni number for the onset of oscillatory flow in crystal growth melts, J. Cryst. Growth 46, 125 (1979).
* (15) C. E. Chang, W. R. Wilcox, and R. A. Lefever, Thermocapillary convection in floating zone melting: Influence of zone geometry and Prandtl number at zero gravity, Mater. Res. Bull. 14, 527 (1979).
* (16) C. H. Chun and W. Wuest, Experiments on the transition from the steady to the oscillatory Marangoni-convection of a floating zone under reduced gravity effect, Acta Astronaut. 6, 1073 (1979).
* (17) D. Schwabe, A. Scharmann, and F. Preisser, Studies of Marangoni convection in floating zones, Acta Astronaut. 9, 183 (1982).
* (18) F. Preisser, D. Schwabe, and A. Scharmann, Steady and oscillatory thermocapillary convection in liquid columns with free cylindrical surface, J. Fluid Mech. 126, 545 (1983).
* (19) Y. Kamotani, S. Ostrach, and M. Vargas, Oscillatory thermocapillary convection in a simulated floating-zone configuration, J. Cryst. Growth 66, 83 (1984).
* (20) A. Y. Rednikov, Y. S. Ryazantsev, and M. G. Velarde, Active drops and drop motions due to nonequilibrium phenomena, J. Non-Equilib. Thermodyn. 19, 95 (1994).
* (21) A. Y. Rednikov, Y. S. Ryazantsev, and M. G. Velarde, Drop motion with surfactant transfer in a homogeneous surrounding, Phys. Fluids 6, 451 (1994).
* (22) S. Herminghaus, C. C. Maass, C. Krüger, S. Thutupalli, L. Goehring, and C. Bahr, Interfacial mechanisms in active emulsions, Soft Matter 10, 7008 (2014).
* (23) N. Yoshinaga, Spontaneous motion and deformation of a self-propelled droplet, Phys. Rev. E 89, 012913 (2014).
* (24) Y. S. Ryazantsev, M. G. Velarde, R. G. Rubio, E. Guzmán, F. Ortega, and P. López, Thermo-and soluto-capillarity: Passive and active drops, Adv. Colloid Interfac. 247, 52 (2017).
* (25) C. C. Maass, C. Krüger, S. Herminghaus, and C. Bahr, Swimming droplets, Annu. Rev. Condens. Matter Phys. 7, 171 (2016).
* (26) M. Morozov and S. Michelin, Self-propulsion near the onset of Marangoni instability of deformable active droplets, J. Fluid Mech. 860, 711 (2019).
* (27) D. Lohse and X. Zhang, Physicochemical Hydrodynamics of Droplets out of Equilibrium, Nat. Rev. Phys. 2, 426 (2020).
* (28) B. M. Degner, K. M. Olson, D. Rose, V. Schlegel, R. Hutkins, and D. J. McClements, Influence of freezing rate variation on the microstructure and physicochemical properties of food emulsions, J. Food Eng. 119, 244 (2013).
* (29) B. M. Degner, C. Chung, V. Schlegel, R. Hutkins, and D. J. McClements, Factors influencing the freeze-thaw stability of emulsion-based foods, Compr. Rev. Food. Sci. F. 13, 98 (2014).
* (30) J. Reichenbach and H. Linde, Linear perturbation analysis of surface-tension-driven convection at a plane interface (Marangoni instability), J. Colloid Interface Sci. 84, 433 (1981).
* (31) M. Takashima, Surface tension driven instability in a horizontal liquid layer with a deformable free surface. I. Stationary convection, J. Phys. Soc. Japan 50, 2745 (1981).
* (32) E. B. Levchenko and A. L. Chernyakov, Instability of surface waves in a nonuniformly heated liquid, Sov. Phys. JETP 54, 102 (1981).
* (33) A. A. Nepomnyashchii and I. B. Simanovskii, Thermocapillary convection in a two-layer system, Fluid Dyn. 18, 629 (1983).
* (34) X. L. Chu and M. G. Velarde, Sustained transverse and longitudinal-waves at the open surface of a liquid, PhysicoChem. Hydrodyn. 10, 727 (1988).
* (35) X. L. Chu and M. G. Velarde, Transverse and longitudinal waves induced and sustained by surfactant gradients at liquid-liquid interfaces, J. Colloid Interface Sci. 131, 471 (1989).
* (36) M. Hennenberg, X. L. Chu, A. Sanfeld, and M. G. Velarde, Transverse and longitudinal waves at the air-liquid interface in the presence of an adsorption barrier, J. Colloid Interface Sci. 150, 7 (1992).
* (37) A. A. Nepomnyashchy, Wavy motions in the layer of viscous fluid flowing down the inclined plane, Fluid Dynamics, Part 8, Proc. of Perm State University 362, 114 (1976).
* (38) H. Chang, Wave evolution on a falling film, Annu. Rev. Fluid Mech. 26, 103 (1994).
* (39) I. L. Kliakhandler and G. I. Sivashinsky, Viscous damping and instabilities in stratified liquid film flowing down a slightly inclined plane, Phys. Fluids 9, 23 (1997).
* (40) S. Miladinova and G. Lebon, Effects of nonuniform heating and thermocapillarity in evaporating films falling down an inclined plate, Acta Mech. 174, 33 (2005).
* (41) P. Thanasukarn, R. Pongsawatmanit, and D. J. McClements, Impact of fat and water crystallization on the stability of hydrogenated palm oil-in-water emulsions stabilized by whey protein isolate, Colloids Surf. A Physicochem. Eng. Asp. 246, 49 (2004).
* (42) S. Ghosh and J. N. Coupland, Factors affecting the freeze–thaw stability of emulsions, Food Hydrocoll. 22, 105 (2008).
* (43) D. Dedovets, C. Monteux, and S. Deville, Five-dimensional imaging of freezing emulsions with solute effects, Science 360, 303 (2018).
* (44) P. Welander, Convective instability in a two-layer fluid heated uniformly from above, Tellus 16, 349 (1964).
* (45) A. Wierschem, H. Linde, and M. G. Velarde, Internal waves excited by the Marangoni effect, Phys. Rev. E 62, 6522 (2000).
* (46) A. Y. Rednikov, P. Colinet, M. G. Velarde, and J. C. Legros, Rayleigh–Marangoni oscillatory instability in a horizontal liquid layer heated from above: Coupling and mode mixing of internal and surface dilational waves, J. Fluid Mech. 405, 57 (2000).
* (47) A. Nepomnyashchy, J. C. Legros, and I. Simanovskii, Interfacial convection in multilayer systems (Springer, New York, 2012).
* (48) G. Oster, Density gradients, Sci. Am. 213, 70 (1965).
* (49) D. Lin, J. R. Leger, M. Kunkel, and P. McCarthy, One-dimensional gradient-index metrology based on ray slope measurements using a bootstrap algorithm, Opt. Eng. 52, 112108 (2013).
* (50) Y. Li, C. Diddens, A. Prosperetti, K. L. Chong, X. Zhang, and D. Lohse, Bouncing oil droplet in a stratified liquid and its sudden death, Phys. Rev. Lett. 122, 154502 (2019).
* (51) N. O. Young, J. S. Goldstein, and M. J. Block, The motion of bubbles in a vertical temperature gradient, J. Fluid Mech. 6, 350 (1959).
* (52) O. M. Phillips, On flows induced by diffusion in a stably stratified fluid, Deep Sea Res. Ocean. Abstr. 17, 435 (1970).
* (53) C. Wunsch, On oceanic boundary mixing, Deep Sea Res. Ocean. Abstr. 17, 293 (1970).
* (54) L. Ratke and A. Müller, On the destabilisation of fibrous growth in monotectic alloys, Scr. Mater. 54, 1217 (2006).
# KoreALBERT: Pretraining a Lite BERT Model for Korean Language Understanding
Hyunjae Lee, Jaewoong Yoon, Bonggyu Hwang, Seongho Joe, Seungjai Min,
Youngjune Gwon Samsung SDS
###### Abstract
A Lite BERT (ALBERT) has been introduced to scale up deep bidirectional
representation learning for natural languages. Due to the lack of pretrained
ALBERT models for the Korean language, the best available practice is to use
the multilingual model or to resort to some other BERT-based model. In this
paper, we develop and pretrain KoreALBERT, a monolingual ALBERT model
specifically for Korean language understanding. We introduce a new training
objective, namely Word Order Prediction (WOP), and use it alongside the
existing MLM and SOP criteria with the same architecture and model parameters. Despite
having significantly fewer model parameters (thus, quicker to train), our
pretrained KoreALBERT outperforms its BERT counterpart on 6 different NLU
tasks. Consistent with the empirical results in English by Lan et al.,
KoreALBERT seems to improve downstream task performance involving multi-
sentence encoding for Korean language. The pretrained KoreALBERT is publicly
available to encourage research and application development for Korean NLP.
## I Introduction
Pre-trained language models are becoming an essential component to build a
modern natural language processing (NLP) application. Previously, recurrent
neural nets such as LSTM have dominated sequence-to-sequence (seq2seq) [1]
modeling for natural languages, upholding state-of-the-art performances for
core language understanding tasks. Since the introduction of the Transformer
[2], recurrent structures in neural language models have been reconsidered in
favor of attention, a mechanism that relates different positions in a
sequence to compute a representation of the sequence.
Devlin _et al._ [3] have proposed Bidirectional Encoder Representations from
Transformers (BERT) to improve on predominantly unidirectional training of a
language model by using the masked language model (MLM) training objective.
MLM is an old concept dating back to the 1950s [4]. By jointly conditioning on
both left and right context in all layers, the MLM objective has made pre-
training of the deep bidirectional language encoding possible. BERT uses an
additional loss for pre-training known as next-sentence prediction (NSP). NSP
is designed to learn high-level linguistic coherence by predicting whether or
not given two text segments should appear consecutively as in the original
text. NSP can improve performance on downstream NLP tasks such as natural
language inference that would require reasoning about inter-sentence
relations.
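As an aside, the MLM corruption recipe of the original BERT paper (the 80/10/10 split over selected positions) can be sketched in a few lines of Python; the toy vocabulary and the function name are illustrative, not part of any released implementation:

```python
import random

MASK_TOKEN = "[MASK]"
TOY_VOCAB = ["the", "a", "cat", "dog", "runs", "sleeps"]  # illustrative only

def mask_tokens(tokens, rng, select_prob=0.15):
    """BERT-style MLM corruption: ~15% of positions are selected; of those,
    80% become [MASK], 10% become a random token, 10% stay unchanged.
    The model is trained to predict the original token at each selected spot."""
    corrupted, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() < select_prob:
            targets[i] = tok              # label is always the original token
            r = rng.random()
            if r < 0.8:
                corrupted[i] = MASK_TOKEN
            elif r < 0.9:
                corrupted[i] = rng.choice(TOY_VOCAB)
            # else: token left unchanged (but still predicted)
    return corrupted, targets
```

Keeping 10% of selected tokens unchanged forces the encoder to produce useful representations even for positions that look intact, which is the usual rationale for the split.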
A Lite BERT (ALBERT) uses parameter reduction techniques to alleviate scaling
problems for BERT. ALBERT’s cross-layer parameter sharing can be thought of as
a form of regularization that helps stabilize the pre-training and generalize
despite the substantially reduced number of model parameters. Also, the
sentence order prediction (SOP) objective in ALBERT replaces the ineffective
next-sentence prediction (NSP) loss in BERT for better inter-sentence
coherence.
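The practical difference between NSP and SOP lies only in how segment pairs are sampled, which a minimal Python sketch makes concrete (function names and the use of pre-split sentences are simplifying assumptions):

```python
import random

def make_nsp_pair(doc_sents, all_docs, rng):
    """NSP (BERT): 50% a true consecutive pair from one document,
    50% a first segment paired with a segment from a different document."""
    i = rng.randrange(len(doc_sents) - 1)
    first = doc_sents[i]
    if rng.random() < 0.5:
        return first, doc_sents[i + 1], 1            # label 1: IsNext
    others = [d for d in all_docs if d is not doc_sents]
    return first, rng.choice(rng.choice(others)), 0  # label 0: NotNext

def make_sop_pair(doc_sents, rng):
    """SOP (ALBERT): both segments are always consecutive and from the
    same document; the negative class simply swaps their order."""
    i = rng.randrange(len(doc_sents) - 1)
    a, b = doc_sents[i], doc_sents[i + 1]
    return (a, b, 1) if rng.random() < 0.5 else (b, a, 0)
```

Because SOP negatives share the document (and hence the topic) of the positives, the model cannot solve the task from topic cues alone and must learn discourse coherence, which is the rationale given for replacing NSP.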
Downstream tasks provide the critical measures for evaluating emerging language
models and NLP applications today. Pre-trained language models are central to
downstream task evaluations such as machine translation, text classification,
and machine reading comprehension. At a high level, there are two approaches
to use pre-trained language models. First, pre-trained models can provide
additional feature representations for a downstream task. More importantly,
pre-trained models can be a baseline upon which the downstream task is fine-
tuned.
Pairing an expensive but shareable pre-training stage with much lighter
fine-tuning is a powerful paradigm for optimizing the performance
of a downstream NLP task. Self-supervised learning with large corpora allows a
suitable starting point for an outer task-specific layer being optimized from
scratch while reusing the pre-trained model parameters.
Since its introduction, BERT has achieved state-of-the-art accuracy
performances for natural language understanding tasks such as GLUE [5],
MultiNLI [6], SQuAD v1.1 [7] & SQuAD v2.0 [8], and CoNLL-2003 NER [9]. Despite
having fewer parameters than BERT, ALBERT has been able to achieve new state-
of-the-art results on the GLUE, RACE [10], and SQuAD benchmarks.
It is important to remark that a large network is crucial in pushing state-of-
the-art results for downstream tasks. While BERT gives a sound choice to build
a general language model trained on large corpora, it is difficult to
experiment with training large BERT models due to the memory limitations and
computational constraints. Training BERT-large is in fact a lengthy process
that consumes significant hardware resources. Besides, BERT is already
available for a wide variety of languages, including the multilingual BERT
pre-trained on 104 different languages as well as various monolingual models.
ALBERT, however, offers a much narrower choice of languages.
On the premise that a better language model roughly amounts to pre-training a
larger model, ideally without imposing excessive memory and computational
requirements, we choose to go with ALBERT. In this paper, we
develop and train KoreALBERT, a monolingual ALBERT model for Korean language
understanding. Compared to a multilingual model, monolingual language models
are known to optimize the performance for a specific language in every aspect,
including downstream tasks critical to build modern NLP systems and
applications.
In addition to the original ALBERT MLM and SOP training objectives, we
introduce a word order prediction (WOP) loss. WOP is fully compatible with the
MLM and SOP losses and can be added gracefully in implementation. Our pre-
trained KoreALBERT could outperform multilingual BERT and its BERT counterpart
on a brief evaluation with KorQuAD 1.0 benchmark for machine reading
comprehension. Consistent with the empirical results of ALBERT pre-trained in
English reported by Lan _et al._ [11], KoreALBERT seems to improve supervised
downstream task performances involving multiple Korean sentences.
The rest of this paper is organized as follows. In Section II, we provide
background on pre-trained neural language models. Section III presents
KoreALBERT. In Section IV, we describe our implementation, pre-training, and
empirical evaluation of KoreALBERT. Section V concludes the paper. Our pre-
trained KoreALBERT is publicly available to encourage NLP research and
application development for Korean language.
## II Background
### II-A Transformer, BERT, and ALBERT
Transformer [2] is a sequence transduction model based solely on attention
mechanism, skipping any recurrent and convolutional structures of a neural
network. The transformer architecture includes multiple identical encoder and
decoder blocks stacked on top of each other. While the encoder captures
linguistic information of the input sequence and produces the contextual
representations, the decoder generates an output sequence corresponding to its
paired input. Thanks to multi-head self-attention layers in an encoder block,
the transformer can attend to different parts of a single sequence
simultaneously and avoid the sequential bottleneck inherent in training a
recurrent neural network.
BERT distinguishes itself from other language models that predict the next
word given previous words by introducing new training methods. Instead of
predicting the next token given only the previous tokens, BERT must predict
words replaced by the special token [MASK]. This training strategy gives BERT
bidirectionality, meaning access to both the left and right context around
the target word. Thus, BERT can produce deep bidirectional representations of
an input sequence.
RoBERTa [12], ALBERT [11] and other variants [13, 14] utilize bidirectional
context representation and established state-of-the-art results on a wide
range of NLP tasks. BERT is trained with the masked language modeling (MLM)
and the next sentence prediction (NSP) losses. NSP is a binary classification
task to predict whether or not given two segments separated by another special
token [SEP] follow each other in the original text. The task is intended to
learn the relationship between two sentences in order to use on many
downstream tasks of which input template consists of two sentences as in
question answering (QA) and sentence entailment [3].
Recently, NSP has been criticized on the grounds that its loss does not
necessarily help improve downstream task performance [11, 12, 15] because it
models inter-sentential coherence only loosely. Among these variants, ALBERT,
whose architecture is derived from BERT, uses a sentence order prediction
(SOP) task instead. In the
SOP task, negative examples consist of a pair of sentences from the same
document, but the sentence order is swapped, and the model should predict
whether or not the order is swapped. With the improved SOP loss and other
parameter reduction techniques, ALBERT significantly reduces the number of
parameters, _i.e._ , 18x fewer than BERT-large, while achieving similar or
better performance on downstream tasks [11]. KoreALBERT takes the unmodified
ALBERT architecture as a baseline. We train KoreALBERT from scratch on large
Korean corpora collected online.
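The SOP examples described above can be constructed directly from an ordered list of document segments. The following is a minimal sketch; the function name and the 50/50 positive/negative sampling ratio are illustrative, not taken from the ALBERT codebase:

```python
import random

def make_sop_examples(segments, seed=0):
    """Build sentence-order-prediction pairs from an ordered list of text
    segments taken from one document. Returns (segment_a, segment_b, label)
    triples: label 1 means the segments appear in their original order,
    label 0 means their order has been swapped (a negative example)."""
    rng = random.Random(seed)
    examples = []
    for a, b in zip(segments, segments[1:]):  # consecutive segment pairs
        if rng.random() < 0.5:
            examples.append((a, b, 1))        # original order -> positive
        else:
            examples.append((b, a, 0))        # swapped order -> negative
    return examples
```

In this reading, every negative pair still comes from the same document, so the classifier must rely on ordering cues rather than topical mismatch.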
### II-B Related Work
Google has released BERT multilingual model (M-BERT) pre-trained using 104
different languages including the Korean. Karthikeyan _et al._ [16] show why
and how well M-BERT works on many downstream NLP tasks without explicitly
training with a monolingual corpus. More recently, Facebook AI Research
presented the cross-lingual model XLM-R [17], which generally outperforms M-BERT.
Recent literature argues that a monolingual language model is consistently
superior to M-BERT. For French, FlauBERT [18] and CamemBERT [19] with the same
approach as RoBERTa have been released. ALBERTo [20] focuses on Italian social
network data. BERTje [21] for Dutch and FinBERT [22] for Finnish have been
developed. They both have achieved superior results on the majority of
downstream NLP tasks compared to M-BERT.
Some previous work in the Korean language has focused on learning static
representations by using language-specific properties [23]. More recently, SKT
Brain has released BERT (https://github.com/SKTBrain/KoBERT) and GPT-2
(https://github.com/SKT-AI/KoGPT2) pre-trained on large Korean corpora. The
Electronics and Telecommunications Research Institute (ETRI) of Korea has
released two versions of BERT: a morpheme-based model and a syllable-based
model (http://aiopen.etri.re.kr/service_dataset.php). These models are
worthwhile to experiment with and provide good benchmark evaluations in Korean
language model research.
BART [24] features interesting denoising approaches for input text used in
pre-training such as sentence permutation and text infilling. In the sentence
permutation task, an input document is divided into sentences and shuffled in
a random order. A combination of text infilling and sentence shuffling tasks
has shown significant improvement of the performance over either applied
separately. Inspired by BART, we have formulated word order prediction (WOP),
a new pre-training loss used alongside the MLM and SOP losses for KoreALBERT.
Unlike BART's permutation, which is essentially a sentence-level shuffling, WOP
is an intra-sentence, token-level shuffling.
## III KoreALBERT: Training Korean Language Model Using ALBERT
### III-A Architecture
KoreALBERT is a multi-layer bidirectional Transformer encoder with the same
factorized embedding parameterization and cross-layer sharing as ALBERT.
Inheriting ALBERT-base, KoreALBERT-base has 12 parameter sharing layers with
an embedding size of 128 dimensions, 768 hidden units, 12 heads, and GELU
nonlinearities [25]. The total number of parameters in KoreALBERT-base is 12
million, increasing to 18 million for KoreALBERT-large, which has 1024 hidden
dimensions.
Lan _et al._ [11] argue that removing dropout has significantly helped
pretraining with the masked language modeling (MLM) loss. For KoreALBERT,
however, we have made an empirical decision to keep dropout after observing
degraded downstream performances without dropout.
### III-B Training Objectives
ALBERT pretrains on two objectives: masked language modeling (MLM) and
sentence order prediction (SOP) losses. We keep both objectives for KoreALBERT
and introduce an additional training objective called word order prediction
(WOP).
Word Order Prediction (WOP). Korean is an agglutinative language in which a
combination of affixes and word roots determines usage and meaning [26].
Decomposing a Korean word into several morphemes and shuffling their order can
introduce grammatical errors and semantic alterations. We impose a word order
prediction (WOP) loss for pretraining KoreALBERT. The WOP objective is a
cross-entropy loss on predicting a correct order of shuffled tokens.
WOP is fully compatible with the ALBERT MLM and SOP, and we expect to
reinforce correct agglutination (or point out incorrect agglutinative usages)
beyond simply checking intra-sentence word orderings. An interesting way to
view WOP combined with MLM and SOP is as the problem of reconstructing a full
sentence from a small subset of permuted words. Our primary
focus of this paper is on the empirical side of the design and pretraining of
an ALBERT-based foreign language model rather than a formal analysis on
training objectives.
The pretraining of KoreALBERT is illustrated in Fig. 1. A randomly sampled
subset of tokens in the input text is replaced with [MASK]. MLM computes a
cross-entropy loss on prediction of the masked tokens. As with ALBERT-base, we
uniformly choose 15% of the input tokens for possible masking, and the 80% of
the chosen are actually replaced with [MASK], leaving 10% unchanged and the
rest replaced with randomly selected tokens. SOP is known to focus on modeling
inter-sentence coherence. The SOP loss uses two consecutive segments from the
same text as a positive example and as a negative example if their order is
swapped. We have found that if WOP is made too difficult, it can severely hurt
the KoreALBERT performance on downstream evaluations.
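The 15%/80%/10%/10% masking scheme just described can be sketched as follows. This is a simplified token-level illustration (function name and details are ours; the actual implementation operates on subword ids):

```python
import random

def mlm_mask(tokens, vocab, mask_token="[MASK]", rate=0.15, seed=0):
    """BERT-style masking: uniformly choose `rate` of the positions; replace
    80% of the chosen with the mask token, 10% with a random vocabulary
    token, and leave the remaining 10% unchanged. Returns the corrupted
    sequence and a dict mapping chosen positions to their original tokens
    (the MLM prediction targets)."""
    rng = random.Random(seed)
    out = list(tokens)
    labels = {}
    n_pick = max(1, int(round(len(tokens) * rate)))
    for i in rng.sample(range(len(tokens)), n_pick):
        labels[i] = tokens[i]
        r = rng.random()
        if r < 0.8:
            out[i] = mask_token          # 80%: replace with [MASK]
        elif r < 0.9:
            out[i] = rng.choice(vocab)   # 10%: replace with a random token
        # else: 10% of the chosen tokens are kept unchanged
    return out, labels
```

The loss is then computed only at the positions recorded in `labels`, regardless of which of the three corruptions was applied.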
We have experimentally tuned WOP to inter-work with MLM and SOP, limiting the
shuffling rate to at most 15%, which realizes the best empirical performance
in our case. In addition, we apply WOP to only a specific portion of all
batches. We give a more detailed description of our experimental setup in
Section IV. Like MLM, we choose a
uniformly random set of tokens for WOP. The most crucial part of integrating
WOP into pretraining is _not_ switching tokens across [MASK]. This constraint
minimizes the corruption of contextual bidirectionality that acts as essential
information in denoising the [MASK] tokens.
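One plausible reading of the WOP noising step is sketched below: a bounded fraction of non-masked tokens is permuted among themselves, so no token is switched across a [MASK] position, and the labels record the original position of each moved token. The function name and details are our assumptions, not the authors' implementation:

```python
import random

def wop_shuffle(tokens, mask_token="[MASK]", rate=0.15, seed=0):
    """Word-order-prediction noising (an illustrative reading of the paper):
    permute up to `rate` of the tokens, only ever swapping non-masked
    positions among themselves so [MASK] tokens are never moved.

    Returns (shuffled tokens, labels), where labels maps each position whose
    token moved to the position that token originally occupied -- the
    prediction target for the WOP head."""
    rng = random.Random(seed)
    candidates = [i for i, t in enumerate(tokens) if t != mask_token]
    n_pick = min(len(candidates), max(2, int(round(len(tokens) * rate))))
    picked = rng.sample(candidates, n_pick)
    order = picked[:]
    rng.shuffle(order)                 # a permutation of the picked positions
    out = list(tokens)
    labels = {}
    for dst, src in zip(picked, order):
        out[dst] = tokens[src]
        if dst != src:
            labels[dst] = src          # original position of the moved token
    return out, labels
```

Only positions that actually moved enter `labels`, mirroring how the MLM loss is restricted to masked positions.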
Figure 1: Pre-training KoreALBERT with the MLM, SOP, and WOP objectives. The
loss (on top) with respect to all three objectives is calculated for
illustrative purposes. In our implementation, classification layer
(highlighted gray) in the middle consisting of three identical heads produces
a logit vector with respect to each label.
### III-C Optimization
We use the LAMB optimizer [27] with a learning rate of $1.25\times 10^{-3}$
and a warm-up ratio $1.25\times 10^{-2}$. To speed up the pretraining, we
maintain an input sequence length of 128 tokens despite the risk of suboptimal
performance. Due to memory limitations, it is necessary to use gradient
accumulation steps for a batch size of 2,048, which is comparable to BERT. We
apply a dropout rate of 0.1 on all layers and attention weights. We use a GELU
activation function [25].
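Gradient accumulation as used here can be illustrated with a toy example: gradients from several micro-batches are averaged before a single parameter update, so the update matches one large-batch step whenever the loss is a mean over examples. The sketch below simplifies the optimizer to plain SGD (the paper uses LAMB), and all names are illustrative:

```python
import numpy as np

def accumulate_and_step(w, micro_batches, grad_fn, lr=1.25e-3):
    """One optimizer step with gradient accumulation: the gradient is
    averaged over several micro-batches so the update behaves like a single
    large-batch step, at the cost of extra forward/backward passes instead
    of extra memory."""
    acc = np.zeros_like(w)
    for xb, yb in micro_batches:
        acc += grad_fn(w, xb, yb)
    acc /= len(micro_batches)
    return w - lr * acc

def lsq_grad(w, X, y):
    """Mean-squared-error gradient for a toy linear model (for the demo)."""
    return 2 * X.T @ (X @ w - y) / len(y)
```

Accumulating k micro-batches of size m reproduces the update of one batch of size k*m, which is how an effective batch size of 2,048 fits in limited GPU memory.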
## IV Experiments
### IV-A Implementation
We implement KoreALBERT based on Hugging Face’s transformer library [28] with
almost the same model configuration as ALBERT-base. We add another linear
classifier on top of the encoder output for the WOP task. The added layer
predicts the probability of the original position of each word in the sequence
via softmax. Like the MLM objective, we take into account only switched tokens
to compute the cross-entropy loss. We train our model using 4 NVidia V100 GPUs
with half-precision floating-point weights.
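Restricting the cross-entropy to switched tokens, as described above, could look like the following sketch (NumPy, illustrative and unbatched; the real head operates on batched logits inside the model):

```python
import numpy as np

def wop_loss(position_logits, switched_labels):
    """Cross-entropy for word order prediction, computed only on switched
    tokens (mirroring how the MLM loss is restricted to masked tokens).

    position_logits: (seq_len, seq_len) array, where row i scores the
        candidate original positions of the token currently at position i.
    switched_labels: dict mapping a switched position to the position its
        token originally occupied."""
    if not switched_labels:
        return 0.0
    total = 0.0
    for pos, orig in switched_labels.items():
        z = position_logits[pos]
        z = z - z.max()                       # for numerical stability
        log_probs = z - np.log(np.exp(z).sum())
        total -= log_probs[orig]
    return total / len(switched_labels)
```

With uniform logits over a sequence of length n, the loss is log(n), a useful sanity check when wiring up the head.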
### IV-B Data
Many BERT-style language models include Wikipedia in the pre-training corpora
for a wide coverage of topics in relatively high-quality writing. The Korean
Wikipedia currently ranks 23rd by volume, just 7.8% of the size of the English
Wikipedia. To supplement the training examples and the diversity of our
corpus, we also use the text from NamuWiki
(https://en.wikipedia.org/wiki/Namuwiki), another Korean online encyclopedia
that contains more subjective opinions covering a variety of topics and
writing styles.
#### IV-B1 Pretraining corpora
Our pretraining corpora include the following.
* •
Web News: all articles from 8 major Korean newspapers across topics including
politics, society, economics, culture, IT, opinion, and sports, from
January 1, 2007 to December 31, 2019.
* •
Korean Wikipedia: 490,220 documents crawled in October, 2019.
* •
NamuWiki: 740,094 documents crawled in December, 2019.
* •
Book corpus: plots and editorial reviews of all Korean books published from
2010 to December 31, 2019 (http://book.interpark.com/).
#### IV-B2 Text preprocessing
We have preprocessed our text data in the following manner. First, we remove
all meta-tags such as the date of writing and name(s) of the author(s) in
newspapers appearing in the beginning and at the end of each article. We think
that the meta-tags do not contain any contextual or semantic information
essential for NLU tasks. We also adjust the proportion of categories making up
the news corpus in order to avoid topical bias of the examples. We tokenize
the corpora into subwords using SentencePiece tokenizer [29] like ALBERT to
construct a vocabulary of size 32k. We mask a randomly sampled 15% of the
words using the whole-word masking strategy recently introduced for BERT. After
cleaning and regularizing text, we obtain 43GB text with 325 million
sentences, which are equivalent to 4.4 billion words or 18 billion characters.
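Whole-word masking over SentencePiece output can be sketched as follows: subword pieces are grouped into words via the '▁' word-boundary marker, and when a word is selected, all of its pieces are masked together. The function name and sampling details are illustrative:

```python
import random

def whole_word_mask(subwords, rate=0.15, mask_token="[MASK]", seed=0):
    """Whole-word masking over SentencePiece-style subwords: pieces that
    start a word carry the '\u2581' prefix; when a word is chosen for
    masking, every one of its pieces is masked together."""
    rng = random.Random(seed)
    # Group subword indices into words using the word-boundary marker.
    words, cur = [], []
    for i, piece in enumerate(subwords):
        if piece.startswith("\u2581") and cur:
            words.append(cur)
            cur = []
        cur.append(i)
    if cur:
        words.append(cur)
    n_pick = max(1, int(round(len(words) * rate)))
    out = list(subwords)
    for word in rng.sample(words, n_pick):
        for i in word:                 # mask all pieces of the chosen word
            out[i] = mask_token
    return out
```

Compared with piece-level masking, this prevents the model from trivially recovering a masked piece from a sibling piece of the same word.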
### IV-C Compatibility of Word Order Prediction (WOP)
We have performed ablation experiments with and without WOP to empirically
observe its compatibility with the MLM and SOP objectives by pretraining for
125K steps, which is half of the entire pre-training schedule. A critical decision
to introduce new noise via WOP is how many training examples should entail the
additional noising process as well as how many tokens should be shuffled
inside a sentence. We vary the proportion of batches containing re-ordered
tokens from 30 to 100%. We have observed that about 30-50% shuffling achieves
good performance in most cases. Results are averaged over 10 different seeds
and summarized in Table I.
We set up three combinations of the pretraining objectives to compare against
one another in the downstream evaluations to highlight the effect of WOP. We
also observe the intrinsic performance of each objective. In the WOP and MLM
combination, we configure the portion of corrupted examples to 30% for the WOP
objective. From the result averaged over 10 different seeds in Table II, WOP
hardly hurts the performance of MLM or SOP. The accuracy of the MLM and WOP
tasks improves when the SOP objective is left out. We believe the best use of
WOP is one that does not disturb the other intrinsic pretraining tasks; WOP
should be added while carefully observing the performance of the other
objectives under different WOP configurations.
As expected, the deletion of SOP has caused a degradation of more than 3% in
the downstream performances on semantic textual similarity (8,628 examples)
and paraphrase detection (7,576 examples). These two tasks are relatively
small-data experiments. Surprisingly, the performance on KorNLI is better
without SOP, even though NLI tasks depend on inter-sentence coherence. Note that KorNLI is
a much larger dataset (950,354 examples) compared to the semantic textual
similarity and paraphrase detection datasets. Combining the two denoising
objectives MLM and WOP seems to alleviate the performance degradation for a
classification task with multi-sentence input.
TABLE I: Experimental results on downstream tasks for different portions of the word order prediction task.

Portion of WOP | KorNLI (acc) | KorSTS (spearman) | NSMC (acc) | PD (acc) | NER (acc) | KorQuAD1.0 (f1)
---|---|---|---|---|---|---
100% | 76.8 | 74.8 | 88.3 | 92.3 | 80.6 | 89.4
50% | 76.4 | 76.6 | 88.3 | 92.7 | 81.2 | 89.3
30% | 76.6 | 75.4 | 88.4 | 93.2 | 80.7 | 89.8
TABLE II: Downstream task performance for different combinations of pretraining objectives.

Objectives | MLM (acc) | SOP (acc) | WOP (acc) | KorNLI (acc) | KorSTS (spearman) | NSMC (acc) | PD (acc) | NER (acc) | KorQuAD1.0 (f1)
---|---|---|---|---|---|---|---|---|---
MLM + SOP | 35.3 | 79.8 | - | 76.4 | 75.6 | 88.6 | 92.9 | 80.7 | 89.5
MLM + SOP + WOP | 35.1 | 79.1 | 80.7 | 76.9 | 76.6 | 88.4 | 93.2 | 81.2 | 89.8
MLM + WOP | 35.6 | - | 84.0 | 76.8 | 73.3 | 88.5 | 92.3 | 81.0 | 89.3
TABLE III: Experimental results on downstream tasks and model parameters.

Model | Params | Speedup | KorNLI (acc) | KorSTS (spearman) | NSMC (acc) | PD (acc) | NER (acc) | KorQuAD1.0 (f1) | Avg.
---|---|---|---|---|---|---|---|---|---
Multilingual BERT | 172M | 1.0x | 76.8 | 77.8 | 87.5 | 91.1 | 80.3 | 86.5 | 83.3
XLM-R | 270M | 0.5x | 80.0 | 79.4 | 90.1 | 92.6 | 83.9 | 92.3 | 86.4
KoBERT | 92M | 1.2x | 78.3 | 79.2 | 90.1 | 91.1 | 82.1 | 90.3 | 85.2
ETRI BERT | 110M | - | 79.5 | 80.5 | 88.8 | 93.9 | 82.5 | 94.1 | 86.6
KoreALBERT Base | 12M | 5.7x | 79.7 | 81.2 | 89.6 | 93.8 | 82.3 | 92.6 | 86.5
KoreALBERT Large | 18M | 1.3x | 81.1 | 82.1 | 89.7 | 94.1 | 83.7 | 94.5 | 87.5
### IV-D Evaluation
We fine-tune KoreALBERT for downstream performance evaluations. For
comparison, we consider other pretrained BERT-base language models available
off-the-shelf.
#### IV-D1 Fine-tuning
In addition to our KoreALBERT, we have downloaded pretrained models available
online: multilingual BERT (https://github.com/google-research/bert), XLM-R
from Facebook AI Research (https://github.com/facebookresearch/XLM), KoBERT
(https://github.com/SKTBrain/KoBERT), and ETRI BERT
(http://aiopen.etri.re.kr/service_dataset.php). We optimize respective
hyperparameters for each pretrained model before measuring the best and
average scores for each model. For all models, we use a batch size of 64 or
128 and from 3 to 5 epochs with a learning rate from $2.0\times 10^{-5}$ to
$5.0\times 10^{-5}$ and a max-sequence length from 128 to 512. For the NER
task, we have found that training for more epochs tends to work better, so we
fine-tuned for up to 7 epochs.
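The hyperparameter ranges above amount to a small grid search per task. A sketch of how such a grid could be enumerated follows; the exact grid points are our assumptions, since the paper only reports ranges:

```python
from itertools import product

def finetune_grid(task):
    """Enumerate fine-tuning configurations within the ranges reported in
    the paper (batch size 64 or 128, 3-5 epochs, up to 7 for NER, learning
    rate 2e-5 to 5e-5, max sequence length 128 to 512). The specific grid
    points chosen here are illustrative."""
    batch_sizes = [64, 128]
    epochs = range(3, 8) if task == "NER" else range(3, 6)
    lrs = [2e-5, 3e-5, 5e-5]
    max_lens = [128, 256, 512]
    return [
        {"batch_size": b, "epochs": e, "lr": lr, "max_len": m}
        for b, e, lr, m in product(batch_sizes, epochs, lrs, max_lens)
    ]
```

Each configuration would then be run per pretrained model, and the best and average scores reported as in Table III.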
#### IV-D2 Downstream Tasks
We consider six downstream NLP tasks detailed below.
* •
KorNLI: Korean NLU Dataset includes two downstream tasks. In Korean Natural
Language Inference (KorNLI) [30], the input is a pair of sentences, a premise
and a hypothesis. The fine-tuned model should predict their relationship in
one of the three possible labels: entailment, contradiction, and neutral.
KorNLI has a total of 950,354 examples.
* •
KorSTS: the second task from Korean NLU is semantic textual similarity (STS)
for the Korean language. STS requires predicting how semantically similar the two
input sentences are on a 0 (dissimilar) to 5 (equivalent) scale. There are
8,628 KorSTS examples in the Korean NLU dataset.
* •
Sentiment analysis: we use the Naver Sentiment Movie Corpus (NSMC,
https://github.com/e9t/nsmc), the biggest Korean movie review dataset,
collected by the same method proposed for the massive movie review dataset
[31]. NSMC consists of 200k reviews, each shorter than 140 characters,
labeled with human annotations of sentiment.
* •
Paraphrase detection (PD): a PD model predicts whether or not a pair of
sentences are semantically equivalent. The dataset we consider contains 7,576
examples from a publicly available GitHub repository
(https://github.com/songys/Question_pair).
* •
Extractive machine reading comprehension (EMRC): EMRC takes in much longer
text sequences as an input compared to other tasks. The EMRC model needs to
extract the start and end indices inside a paragraph containing the answer of
a question. KorQuAD 1.0 [32] is a Korean dataset for machine reading
comprehension, which is similar to SQuAD 1.0 [7]. Having exactly the same
format as SQuAD, KorQuAD 1.0 comprises 60,407 question-answer pairs.
* •
Named entity recognition (NER): NER identifies real-world objects such as
persons, organizations, and places (locations) in documents. We use the NER
corpus (http://air.changwon.ac.kr/?page_id=10) constructed by Naver Corp.
and Changwon University in South Korea. The corpus has 14 different types of
entities with attached B/I/- tags, denoting multi- or single-word entities as
described in Table IV.
TABLE IV: Distribution of entity types in the NER dataset.

Category | Tag | Amount
---|---|---
NUMBER | NUM | 64,876
CIVILIZATION | CVL | 60,918
PERSON | PER | 48,321
ORGANIZATION | ORG | 45,550
DATE | DAT | 33,944
TERM | TRM | 22,070
LOCATION | LOC | 21,095
EVENT | EVT | 17,430
ANIMAL | ANM | 6,544
ARTIFACTS_WORKS | AFW | 6,069
TIME | TIM | 4,337
FIELD | FLD | 2,386
PLANT | PLT | 267
MATERIAL | MAT | 252
### IV-E Discussion
As indicated in Table III, KoreALBERT consistently outperforms M-BERT on all
downstream NLU tasks considered. While KoreALBERT has the smallest number of
model parameters among all the monolingual and multilingual language models
compared in this paper, it achieves better results in almost all downstream
evaluations. Requiring fewer computations, KoreALBERT-base trains about 5.7x
faster than M-BERT, and KoreALBERT-large trains about 2.2x faster than XLM-R
base.
On NSMC and NER, which are single-sentence classification tasks, KoreALBERT
falls short of XLM-R and KoBERT. For NSMC, KoreALBERT-large cannot produce
more discriminative results than the base model. We suspect the main reason
for the performance drop is the lack of coverage of colloquial usage of words
and phrases in our pretraining corpora, which mostly consist of more formal
styles of writing such as news articles and Wikipedia; examples in NSMC make
heavy use of colloquialisms. Also, XLM-R has shown very good performance on
the NER task. This result is likely because NER does not require as much
high-level language understanding, such as multi-sentence discourse coherence.
## V Conclusion
We have introduced KoreALBERT, a pre-trained monolingual ALBERT model for
Korean language understanding. We have described the details about training
KoreALBERT. In particular, we have proposed a word order prediction loss, a
new training objective, which is compatible with the original MLM and SOP
objectives of ALBERT. KoreALBERT consistently outperforms multilingual and
monolingual baselines on 6 downstream NLP tasks while having far fewer
parameters. In our future work, we plan to experiment more comprehensively
with the KoreALBERT WOP loss: i) replace token-level switching with word-level
switching to increase the difficulty of label prediction; ii) use dynamic
token shuffling with a varying number of tokens to be shuffled instead of a
fixed proportion. We also plan to investigate how well the proposed WOP loss
works with non-agglutinative languages like English.
## References
* [1] I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” _CoRR_ , vol. abs/1409.3215, 2014. [Online]. Available: http://arxiv.org/abs/1409.3215
* [2] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. u. Kaiser, and I. Polosukhin, “Attention is all you need,” in _Advances in Neural Information Processing Systems 30_ , I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, Eds. Curran Associates, Inc., 2017, pp. 5998–6008. [Online]. Available: http://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf
* [3] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training of Deep Bidirectional Transformers for Language Understanding,” _arXiv preprint arXiv:1810.04805_ , 2018.
* [4] W. L. Taylor, “Cloze Procedure: a New Tool for Measuring Readability,” _Journalism Quarterly_ , vol. 30, no. 4, pp. 415–433, 1953.
* [5] A. Wang, A. Singh, J. Michael, F. Hill, O. Levy, and S. Bowman, “Glue: A multi-task benchmark and analysis platform for natural language understanding,” _Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP_ , 2018. [Online]. Available: http://dx.doi.org/10.18653/v1/w18-5446
* [6] A. Williams, N. Nangia, and S. Bowman, “A broad-coverage challenge corpus for sentence understanding through inference,” _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , 2018. [Online]. Available: http://dx.doi.org/10.18653/v1/N18-1101
* [7] P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang, “Squad: 100, 000+ questions for machine comprehension of text,” _CoRR_ , vol. abs/1606.05250, 2016. [Online]. Available: http://arxiv.org/abs/1606.05250
* [8] P. Rajpurkar, R. Jia, and P. Liang, “Know what you don’t know: Unanswerable questions for squad,” _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , 2018. [Online]. Available: http://dx.doi.org/10.18653/v1/P18-2124
* [9] E. F. Tjong Kim Sang and F. De Meulder, “Introduction to the conll-2003 shared task,” _Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003 -_ , 2003. [Online]. Available: http://dx.doi.org/10.3115/1119176.1119195
* [10] G. Lai, Q. Xie, H. Liu, Y. Yang, and E. Hovy, “RACE: Large-scale ReAding comprehension dataset from examinations,” in _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_. Copenhagen, Denmark: Association for Computational Linguistics, Sep. 2017, pp. 785–794. [Online]. Available: https://www.aclweb.org/anthology/D17-1082
* [11] Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma, and R. Soricut, “Albert: A Lite BERT for Self-supervised Learning of Language Representations,” _arXiv preprint arXiv:1909.11942_ , 2019.
* [12] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov, “Roberta: A robustly optimized BERT pretraining approach,” _CoRR_ , vol. abs/1907.11692, 2019. [Online]. Available: http://arxiv.org/abs/1907.11692
* [13] M. Joshi, D. Chen, Y. Liu, D. S. Weld, L. Zettlemoyer, and O. Levy, “Spanbert: Improving pre-training by representing and predicting spans,” _CoRR_ , vol. abs/1907.10529, 2019. [Online]. Available: http://arxiv.org/abs/1907.10529
* [14] Y. Cui, W. Che, T. Liu, B. Qin, Z. Yang, S. Wang, and G. Hu, “Pre-training with whole word masking for chinese BERT,” _CoRR_ , vol. abs/1906.08101, 2019. [Online]. Available: http://arxiv.org/abs/1906.08101
* [15] Z. Yang, Z. Dai, Y. Yang, J. G. Carbonell, R. Salakhutdinov, and Q. V. Le, “Xlnet: Generalized autoregressive pretraining for language understanding,” _CoRR_ , vol. abs/1906.08237, 2019. [Online]. Available: http://arxiv.org/abs/1906.08237
* [16] K. K, Z. Wang, S. Mayhew, and D. Roth, “Cross-lingual ability of multilingual bert: An empirical study,” 2019.
* [17] A. Conneau, K. Khandelwal, N. Goyal, V. Chaudhary, G. Wenzek, F. Guzmán, E. Grave, M. Ott, L. Zettlemoyer, and V. Stoyanov, “Unsupervised cross-lingual representation learning at scale,” 2019.
* [18] H. Le, L. Vial, J. Frej, V. Segonne, M. Coavoux, B. Lecouteux, A. Allauzen, B. Crabbé, L. Besacier, and D. Schwab, “Flaubert: Unsupervised language model pre-training for french,” 2019.
* [19] L. Martin, B. Muller, P. J. O. Suárez, Y. Dupont, L. Romary, Éric Villemonte de la Clergerie, D. Seddah, and B. Sagot, “Camembert: a tasty french language model,” 2019.
* [20] M. Polignano, P. Basile, M. de Gemmis, G. Semeraro, and V. Basile, “Alberto: Italian bert language understanding model for nlp challenging tasks based on tweets,” 11 2019.
* [21] W. Vries, A. Cranenburgh, A. Bisazza, T. Caselli, G. van Noord, and M. Nissim, “Bertje: A dutch bert model,” 12 2019.
* [22] A. Virtanen, J. Kanerva, R. Ilo, J. Luoma, J. Luotolahti, T. Salakoski, F. Ginter, and S. Pyysalo, “Multilingual is not enough: Bert for finnish,” 12 2019.
* [23] S. Park, J. Byun, S. Baek, Y. Cho, and A. Oh, “Subword-level word vector representations for korean,” 01 2018, pp. 2429–2438.
* [24] M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer, “Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension,” 2019.
* [25] D. Hendrycks and K. Gimpel, “Bridging nonlinearities and stochastic regularizers with gaussian error linear units,” _CoRR_ , vol. abs/1606.08415, 2016. [Online]. Available: http://arxiv.org/abs/1606.08415
* [26] J. J. Song, “The korean language:structure, use and context,” _Routledge_ , 2006.
* [27] Y. You, J. Li, J. Hseu, X. Song, J. Demmel, and C. Hsieh, “Reducing BERT pre-training time from 3 days to 76 minutes,” _CoRR_ , vol. abs/1904.00962, 2019. [Online]. Available: http://arxiv.org/abs/1904.00962
* [28] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, and J. Brew, “Huggingface’s transformers: State-of-the-art natural language processing,” _ArXiv_ , vol. abs/1910.03771, 2019.
* [29] T. Kudo and J. Richardson, “Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing,” _CoRR_ , vol. abs/1808.06226, 2018. [Online]. Available: http://arxiv.org/abs/1808.06226
* [30] J. Ham, Y. J. Choe, K. Park, I. Choi, and H. Soh, “Kornli and korsts: New benchmark datasets for korean natural language understanding,” _arXiv preprint arXiv:2004.03289_ , 2020.
* [31] A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts, “Learning word vectors for sentiment analysis,” in _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Portland, Oregon, USA: Association for Computational Linguistics, Jun. 2011, pp. 142–150. [Online]. Available: https://www.aclweb.org/anthology/P11-1015
* [32] M. K. L. Seungyoung Lim, “KorQuAD: Korean QA Dataset for Machine Comprehension,” _Journal of Computing Science and Engineering_ , vol. , pp. 539–541, 2018. [Online]. Available: http://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE07613668
§ INTRODUCTION
With the increase of multidimensional data availability and modern computing power, statistical models for spatial
and spatio-temporal data are developing at a rapid pace. Hence, there is a need for stable and reliable, yet updated and
efficient, software packages. In this section, we briefly discuss multidimensional data in climate and environmental studies as well as statistical software for space-time data.
§.§ Multidimensional data
Large multidimensional data sets often arise when climate and environmental phenomena are observed at the global
scale over extended periods. In climate studies, relevant physical variables are observed on a three-dimensional (3D)
spherical shell (the atmosphere) while time is the fourth dimension.
For instance, measurements are obtained by radiosondes flying from ground level up to the stratosphere [Fassò et al., 2014], by interferometric sensors aboard
satellites [Finazzi et al., 2018] or by laser-based methods, such as Light Detection and Ranging (LIDAR) [Negri et al., 2018].
In this context, statistical modelling of multidimensional data requires describing and exploiting the spatio-temporal correlation of the underlying phenomenon or data-generating process.
This is done using explanatory variables and multidimensional latent variables with covariance functions defined over a convenient spatio-temporal support.
When considering 3D$\times$T data (4D for brevity), covariance functions defined over the 4D support may be adopted.
However, these covariance functions often have a complex form [Porcu et al., 2018].
Moreover, when estimating the model parameters or making inferences,
very large (though possibly sparse) covariance matrices are involved.
In large climate and environmental applications, 4D data are rarely collected at high frequency in all spatial and temporal dimensions.
Often, only one dimension is sampled at high frequency while the remaining dimensions are sampled sparsely.
Radiosonde data, for instance, are sparse over the Earth's sphere, but they are dense along the vertical dimension, providing atmospheric profiles.
This suggests that handling all spatial dimensions equally (e.g. using a 3D covariance function) may not be the best option from a modelling or computational perspective,
and a data reduction technique may be useful instead. In this paper, the functional data analysis (FDA) approach
[Ramsay and Silverman, 2007] is adopted to model the relationship between measurements along the profile, while the remaining dimensions
are handled following the classic spatio-temporal data modelling approach using only 2D spatial covariance functions.
§.§ Statistical software
Various software packages are available for data on a plane or in a two-dimensional (2D) Euclidean space.
The choice is more restricted when considering multidimensional or non-Euclidean spaces arising from atmospheric or remote sensing spatio-temporal data observed
on the surface of a sphere and over time.
For example, Figure <ref> depicts
the spatial locations of measurements collected globally in a single day through radio sounding, as discussed in Section <ref>.
Space is three-dimensional, and measurements are repeated over time at the same spatial locations over the Earth's surface but at different pressure values.
[Figure: Radio sounding data example. Each dot represents the spatial location of a measurement taken by a radiosonde. Dots of the same colour belong to the same radiosonde. (Pressure axis not to scale.)]
The spBayes package [Finley et al., 2015] handles large spatio-temporal data sets, but space is only 2D. The documentation of the spacetime [Pebesma, 2012] and gstat [Pebesma and Heuvelink, 2016] packages does not explicitly address the multidimensional case but, according to [Gasch et al., 2015], both packages have some capabilities to handle the 3D$\times$T case.
However, we want to avoid working with 3D spatial covariance functions or sample spatio-temporal variograms.
Fixed rank kriging [Cressie and Johannesson, 2008], implemented in the R
package FRK [Zammit-Mangion, 2018], handles spatial and spatio-temporal data both on the Euclidean plane and on the
surface of the sphere. FRK implements a set of tools for data gridding and basis function computation, resulting in
efficient dimension reduction, allowing it to handle large satellite data sets [Cressie, 2018].
It is based on a spatio-temporal random effects (SRE) model estimated by the
expectation-maximisation (EM) algorithm.
Recent extensions to FRK include the use of multi-resolution basis functions [Tzeng and Huang, 2018].
A second package based on SRE and the EM algorithm is D-STEM v1 [Finazzi and Fassò, 2014].
This package implements an efficient state-space approach for handling the temporal dimension and a heterotopic multivariate response approach that is useful when correlating heterogeneous networks [Fassò and Finazzi, 2011, Calculli et al., 2015].
D-STEM v1 has been successfully used in various medium-to-large applications, proving that the EM algorithm implementation, being mainly based on closed-form iterations, is quite stable.
These applications include air quality assessment in the metropolitan areas of Milan, Teheran and Beijing [Fassò, 2013, Taghavi-Shahri et al., 2019, Wan et al., 2020];
multivariate spatio-temporal modelling at the country and continental levels in Europe [Finazzi et al., 2013, Fassò et al., 2016];
time series clustering [Finazzi et al., 2015];
emulators of atmospheric dispersion modelling systems [Finazzi et al., 2019]; and
near real-time prediction of earthquake parameters [Finazzi, 2020].
A brief, non-exhaustive list of other models and/or software packages for advanced spatial data modelling is presented below, organised by the principal technique used to handle large data sets.
In general, these techniques aim at avoiding the Cholesky decomposition of large and dense covariance matrices.
Some approaches, including FRK and D-STEM v1, leverage sparse variance–covariance matrices. Others exploit the sparsity of the precision matrix, thanks to a spatial Markovian assumption. This class includes the R packages LatticeKrig [Nychka et al., 2015, Nychka et al., 2016], INLA
[Blangiardo et al., 2013, Lindgren and Rue, 2015, Bivand et al., 2015, Rue et al., 2014] and the multi-resolution approximation approach of [Katzfuss, 2017],
which uses the predictive process and the state space representation [Jurek and Katzfuss, 2018] to model spatio-temporal data. Low-rank models are another
popular approach used by spBayes. Finally, the R package laGP [Gramacy, 2016], based on a machine learning approach, implements an efficient nearest neighbour prediction-oriented method.
[Heaton et al., 2018] present an interesting spatial prediction competition based on a large data set and involving the above-mentioned approaches.
We observe that, although some of the software packages mentioned above consider both space and time, to the best of our knowledge, none of them handles a spatio-temporal FDA approach for data sets of the kind discussed in Section <ref>.
In this paper, we present D-STEM v2, which is a MATLAB package, extending D-STEM v1. The new version introduces modelling of functional data indexed over space and time.
Moreover, new complexity reduction techniques have been added for both model estimation and dynamic mapping, which are especially useful for large data sets.
The rest of the paper is organised as follows.
Section <ref> introduces the methodology adopted in this paper and, in particular, the data modelling approach and the complexity-reduction techniques.
Section <ref> describes the D-STEM v2 software in terms of the MATLAB classes used to define the data structure, model fitting and diagnostics and kriging.
This is followed by an illustration of the software use through two case studies.
The first one, discussed in Section <ref>, considers high-frequency spatio-temporal ozone data in Beijing.
The second one, in Section <ref>, considers modelling of global atmospheric temperature profiles and exploits the complexity-reduction capabilities of the new package. Finally, concluding remarks are provided in Section <ref>.
§ METHODOLOGY
This section discusses the methodology behind the modelling and the complexity-reduction techniques implemented in D-STEM v2
when dealing with functional space-time data sets. Moreover, model estimation, validation and dynamic kriging are briefly discussed.
§.§ Model equations
Let $\bm{s}=(s_{lat},s_{lon})^{ \top }$ be a generic spatial location on the
Earth's sphere, $\mathbb{S}^{2}$, and $t\in \mathbb{N} $ a discrete time index.
It is assumed that the function of interest, $f\left( \bm{s},h,t\right )$, with domain $\mathcal{H}=\left[ h_{1},h_{2}\right] \subset \mathbb{R}$, can be observed at any $\left( \bm{s},t\right) $ and $h\in\mathcal{H}$ through noisy measurements $y(\bm{s},h,t)$ according to the following model:
\begin{align}
y(\bm{s},h,t) & =f\left( \bm{s},h,t\right) +\varepsilon(\bm{s},h,t), \label{eq:model_line1}\\
f\left( \bm{s},h,t\right) & =\bm{x}(\bm{s},h,t)^{ \top }\bm{\beta}\left( h\right) +\bm{\phi}(h)^{\top}\bm{z}(\bm{s},t), \label{eq:model_line2}\\
\bm{z}(\bm{s},t) & =\bm{G}\bm{z}(\bm{s},t-1)+\bm{\eta}(\bm{s},t). \label{eq:model_line3}
\end{align}
This model is referred to as the functional hidden dynamic geostatistical model (f-HDGM). In Equation (<ref>), $\varepsilon$ is a zero-mean Gaussian
measurement error independent in space and time with functional variance
$\sigma_{\varepsilon}^{2}\left( h\right) $, implying
that $\varepsilon$ is heteroskedastic across the domain $\mathcal{H}$. The variance is modelled as
\[
\log(\sigma_{\varepsilon}^{2}\left( h\right) )=\bm{\phi}(h)^{\top}\bm{c}_{\varepsilon},
\]
where $\bm{\phi}(h)$ is a $p\times1$ vector of basis functions evaluated at
$h$, while $\bm{c}_{\varepsilon}$ is a vector of coefficients to be estimated. In Equation (<ref>), $\bm{x}(\bm{s},h,t)$ is a $b \times 1$ vector of covariates while $\bm{\beta}\left( h\right) =(\beta_{1}(h),...,\beta_{b}(h))^{\top}$ is the vector of functional parameters modelled as
\[
\beta_{j}(h)=\bm{\phi}(h)^{\top}\bm{c}_{\beta,j}, j=1,...,b,
\]
and $\bm{c}_{\beta}=\left( \bm{c}_{\beta,1}^{\top},...,\bm{c}_{\beta,b}^{\top}\right) ^{\top}$ is a $pb\times1$ vector of coefficients that needs to be estimated. Additionally, $\bm{z}(\bm{s},t)$ is a $p\times1$ latent space-time
variable with Markovian dynamics given in Equation (<ref>).
The matrix $\bm{G}$ is a diagonal transition matrix with diagonal elements in the $p \times 1$ vector
$\bm{g}$. The innovation vector $\bm{\eta}$ is obtained from a
multivariate Gaussian process that is independent in time but correlated across space
with matrix spatial covariance function given by
\[
\bm{\Gamma}(\bm{s},\bm{s}^{\prime};\bm{\theta})=diag\left( v_{1}\rho(\bm{s},\bm{s}^{\prime};\bm{\theta}_{1}),\ldots,v_{p}\rho(\bm{s},\bm{s}^{\prime};\bm{\theta}_{p})\right),
\]
where $\bm{v}=\left( v_{1},...,v_{p}\right) ^{\top}$ is a vector of variances and $\rho(\bm{s},\bm{s}^{\prime};\bm{\theta}_{j})$ is a valid spatial correlation function for locations $\bm{s},\bm{s}^{\prime}\in\mathbb{S}^{2}$, parametrised by $\bm{\theta}_{j}$, and $\bm{\theta}=(\bm{\theta}_{1},...,\bm{\theta}_{p})^{\top}$. The unknown model parameter vector is given by $\bm{\psi}=\left(
\bm{c}_{\varepsilon}^{\top},\bm{c}_{\beta}^{\top},\bm{g}^{\top},\bm{v}^{\top},\bm{\theta}^{\top}\right) ^{\top} $.
Note that, in order to ease the notation, the same $p$-dimensional basis functions $\bm{\phi}(h)$ are used to model
$\sigma_{\varepsilon}^{2}$, $\beta_{j}$ and $\bm{\phi}(h)^{\top}\bm{z}(\bm{s},t)$ in Equations
(<ref>)-(<ref>). In practice, D-STEM v2 allows one to specify a different number
of basis functions for each model component. Also note that $\varepsilon$ is not a pure measurement error since it
also accounts for model misspecification.
Finally, the covariates $\bm{x}(\bm{s},h,t)$ are assumed to be known without error for any $\bm{s}$, $h$ and $t$,
and thus they do not need a basis function representation.
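As an illustration, the three model equations above can be simulated directly. The following Python sketch (D-STEM v2 itself is written in MATLAB, so this is only an analogue) generates data from a simplified f-HDGM with a single covariate, a polynomial basis in place of the Fourier/B-spline bases, and spatially independent innovations $\bm{\eta}$; all variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

n, p, T = 5, 4, 50           # spatial locations, basis functions, time points
q = 10                       # measurement points per profile
h = np.linspace(0.0, 1.0, q)

# Basis matrix Phi (q x p): a simple polynomial basis as a stand-in
# for the Fourier/B-spline bases supported by the package
Phi = np.vander(h, p, increasing=True)

g = 0.8                      # diagonal of G (same value for all p components)
c_beta = rng.normal(size=p)  # coefficients of a single functional beta(h)
x = rng.normal(size=(n, q, T))   # one covariate observed at every (s, h, t)
sigma_eps = 0.1              # homoskedastic noise (the model allows sigma_eps(h))

z = np.zeros((n, p))
y = np.empty((n, q, T))
for t in range(T):
    eta = rng.normal(scale=0.3, size=(n, p))   # innovations (spatially independent here)
    z = g * z + eta                            # Markovian latent dynamics (third equation)
    beta_h = Phi @ c_beta                      # functional coefficient beta(h), q-vector
    f = x[:, :, t] * beta_h + z @ Phi.T        # signal f(s, h, t) (second equation)
    y[:, :, t] = f + rng.normal(scale=sigma_eps, size=(n, q))  # noisy data (first equation)

print(y.shape)   # (5, 10, 50)
```

In the full model the innovations would be correlated across space through $\bm{\Gamma}$, and the noise variance would itself be a function of $h$.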
§.§ Basis function choice
Choosing basis functions essentially means choosing the basis type and the number of basis functions. D-STEM v2 currently supports Fourier bases and B-spline bases. The former guarantee that the function is periodic in the domain $\mathcal{H}$, while the latter are not (in general) periodic
but have higher flexibility in describing functions with a complex shape.
Whichever basis function type is adopted, the number $p$ of basis functions must be fixed before model estimation. Usually, a high $p$ implies
a better model $R^2$, but over-fitting may be an issue. Moreover, special care must be taken when choosing the number of basis functions for
$\bm{\phi}(h)^{\top}\bm{z}(\bm{s},t)$. The classic FDA approach suggests fixing a high number of basis functions and adopting penalisation to avoid over-fitting.
In our context, this is not viable since the covariance matrices involved in model estimation have dimension $np \times np$, with a cost that is cubic in both $n$ and $p$. Since $n$ is usually large, a large $p$ would make model estimation infeasible, especially if the number of time points $T$ is also high.
When using B-spline basis, a small $p$ implies that the location of knots along the domain $\mathcal{H}$ also matters and may affect the model fitting performance. Ideally, $p$ and knot locations are chosen using a model validation technique (see <ref>) by trying different combinations of $p$ and knot locations.
If, due to time constraints, this is not possible, equally spaced knots are a convenient option.
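The role of the basis matrix $\bm{\phi}(h)$ can be sketched numerically. The snippet below builds a truncated Fourier basis and recovers the coefficient vector of a functional parameter by least squares; it is an illustration of the representation, not D-STEM v2 code, and all names are ours:

```python
import numpy as np

def fourier_basis(h, p, period=1.0):
    """Evaluate a p-dimensional Fourier basis [1, sin, cos, sin2, ...] at points h."""
    h = np.asarray(h, dtype=float)
    cols = [np.ones_like(h)]
    k = 1
    while len(cols) < p:
        w = 2.0 * np.pi * k / period
        cols.append(np.sin(w * h))
        if len(cols) < p:
            cols.append(np.cos(w * h))
        k += 1
    return np.column_stack(cols)          # len(h) x p design matrix

h = np.linspace(0.0, 1.0, 200)
Phi = fourier_basis(h, p=5)

# A functional coefficient beta(h) = phi(h)^T c lying in the span of the basis
c_true = np.array([0.5, 1.0, -0.3, 0.2, 0.1])
beta = Phi @ c_true

# Least-squares recovery of the coefficients from samples of beta(h)
c_hat, *_ = np.linalg.lstsq(Phi, beta, rcond=None)
print(np.allclose(c_hat, c_true))   # True
```

With a B-spline basis the same recovery would additionally depend on the knot placement discussed above.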
§.§ Model estimation
The estimation of $\bm{\psi}$ and the latent space-time variable $\bm{z}(\bm{s},t)$ is based on the maximum likelihood approach considering profile data observed at spatial locations $\mathcal{S}=\{\bm{s}_{i},i=1,...,n\}$ and time points $t=1,...,T$.
At a specific location $\bm{s}_{i}$ and time $t$, $q_{i,t}$ measurements are taken at points $\bm{h}_{\bm{s}_{i},t}=\left( h_{i,1,t},...,h_{i,q_{i,t},t}\right) ^{\top}$ and collected in the vector
\[
\bm{y}_{\bm{s}_{i},t}=(y(\bm{s}_{i},h_{i,1,t},t),...,y(\bm{s}_{i},h_{i,q_{i,t},t},t))^{\top},
\]
here called the observed profile.
Although D-STEM v2 allows for varying $q_{i,t}$, for ease of notation it is assumed here that
all profiles include exactly $q$ measurements, while $\bm{h}_{\bm{s}_{i},t}$ may still differ across profiles.
Profiles observed at time $t$ across spatial locations $\mathcal{S}$ are then stored in the $nq\times1$
vector $\bm{y}_{t}=(\bm{y}_{s_{1},t}^{\top},...,\bm{y}_{s_{n},t}^{\top})^{\top}$.
Applying model (<ref>)-(<ref>) to the defined data above, we have the following matrix representation:
\begin{align*}
\bm{y}_{t} & =\tilde{\bm{X}}_{t}\bm{c}_{\bm{\beta}}+\bm{\Phi}_{\bm{z},t}\bm{z}_{t}+\bm{\varepsilon}_{t},\\
\bm{z}_{t} & =\tilde{\bm{G}}\bm{z}_{t-1}+\bm{\eta}_{t},
\end{align*}
where $\tilde{\bm{X}}_{t}=\bm{X}_{t}\bm{\Phi}_{\bm{\beta},t}$ is a $nq\times bp$
matrix, with $\bm{X}_{t}$ the matrix of covariates and $\bm{\Phi}_{\bm{\beta},t}$
the basis matrix for $\bm{\beta}$. $\bm{\Phi}_{\bm{z},t}$ is the $nq\times np$ basis
matrix for the latent $np\times1$ vector $\bm{z}_{t}=(\bm{z}(\bm{s}_{1},t)^{\top},...,\bm{z}(\bm{s}_{n},t)^{\top})^{\top}$,
$\bm{\eta}_{t}=(\bm{\eta}(\bm{s}_{1},t)^{\top},...,\bm{\eta}(\bm{s}_{n},t)^{\top})^{\top}$ is the $np\times1$ innovation vector, while $\bm{\varepsilon}_{t}$ is the $nq\times1$ vector of measurement errors.
Additionally, $\tilde{\bm{G}}= \bm{I}_{n} \otimes \bm{G}$ is the $np\times np$
diagonal transition matrix.
The complete-data likelihood function $L(\bm{\psi};\bm{Y},\bm{Z})$ can
be written as
\[
L(\bm{\psi};\bm{Y},\bm{Z})=\prod_{t=1}^{T}L\left( \bm{\psi}_{\bm{y}};\bm{y}_{t}\mid\bm{z}_{t}\right) \cdot L\left( \bm{\psi}_{\bm{z}_{0}};\bm{z}_{0}\right) \cdot\prod_{t=1}^{T}L\left( \bm{\psi}_{\bm{z}};\bm{z}_{t}\mid\bm{z}_{t-1}\right),
\]
where $\bm{Y}=\left( \bm{y}_{1},...,\bm{y}_{T}\right) $,
$\bm{Z}=\left( \bm{z}_{0},\bm{z}_{1},...,\bm{z}_{T}\right)
$, $\bm{\psi}_{\bm{z}}=\left( \bm{g}^{\top},\bm{v}^{\top},\bm{\theta}^{\top}\right) ^{\top} $, $\bm{\psi}_{\bm{y}}=\left(
\bm{c}_{\varepsilon}^{\top},\bm{c}_{\beta}^{\top}\right) ^{\top} $, and $\bm{z}_{0}$ is the Gaussian initial vector with parameter $\bm{\psi}_{\bm{z}_{0}}$. Maximum likelihood estimation is based on an extension of the EM algorithm detailed in
[Calculli et al., 2015]. The model parameter set $\bm{\psi}$ is initialised
with starting values $\bm{\psi}^{\left\langle 0\right\rangle }$ and then
updated at each iteration $\iota$ of the EM algorithm.
The algorithm terminates if any of the following conditions is satisfied:
\[
\max_{l}\left\vert \psi _{l}^{\left\langle \iota \right\rangle }-\psi_{l}^{\left\langle \iota -1\right\rangle }\right\vert /\left\vert \psi_{l}^{\left\langle \iota \right\rangle }\right\vert <\epsilon _{1},
\]
\[
\left\vert L(\bm{\psi}^{\left\langle \iota\right\rangle };\bm{Y})-L(\bm{\psi}^{\left\langle \iota-1\right\rangle };\bm{Y})\right\vert /\left\vert L(\bm{\psi}^{\left\langle \iota\right\rangle };\bm{Y})\right\vert <\epsilon _{2},
\]
\[
\iota>\iota^{\ast},
\]
where $\psi _{l}^{\left\langle \iota \right\rangle }$ is the generic element of $\bm{\psi}^{\left\langle \iota \right\rangle }$ at the
$\iota\text{-}th$ iteration, $L(\bm{\psi}^{\left\langle \iota\right\rangle };\bm{Y})$ is the
observed-data likelihood function evaluated at $\bm{\psi}^{\left\langle
\iota\right\rangle }$, $0<\epsilon_1\ll1$ and $0<\epsilon_2\ll1$ are small positive numbers
(e.g. $10^{-4}$), while $\iota^{\ast}$ is a user-defined positive integer (e.g. $100$) that limits the
iterations in the case of convergence failure of the EM algorithm.
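The three stopping rules above can be combined into a single check. The function below is an illustrative Python analogue, not the actual MATLAB implementation; names and defaults are ours:

```python
import numpy as np

def em_should_stop(psi_new, psi_old, loglik_new, loglik_old, iteration,
                   eps1=1e-4, eps2=1e-4, max_iter=100):
    """One EM stopping check combining the three rules: relative parameter
    change, relative observed-data likelihood change, and iteration cap."""
    par_change = np.max(np.abs(psi_new - psi_old) / np.abs(psi_new))
    lik_change = abs(loglik_new - loglik_old) / abs(loglik_new)
    return bool(par_change < eps1 or lik_change < eps2 or iteration > max_iter)

psi_old = np.array([1.0, 2.0])
psi_new = np.array([1.00001, 2.00001])
# Parameters barely moved, so the first rule fires even though the
# likelihood change is still large
print(em_should_stop(psi_new, psi_old, -100.0, -99.0, 5))  # True
```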
Note that $\mathcal{S}$ is not time-varying, which means that spatial locations are fixed. This could be a limit in applications where spatial locations change for each $t$. On the other hand, missing profiles are allowed; that is, $\bm{y}_{\bm{s}_{i},t}$ may be a vector of $q$ missing values at some $t$. In the extreme case, a given spatial location $\bm{s}_{i}$ has only one profile over the entire period (if all the profiles are missing, the spatial location can be dropped from the data set).
<cit.> explains how the likelihood function of a state-space model changes in the case of a missing observation vector and how the EM estimation formulas are derived. Missing data handling in D-STEM v2 is based on the same approach.
§.§ Partitioning
At each iteration of the EM algorithm, the computational complexity of the E-step is
$O\left( Tn^{3}p^{3}\right) $, which may be unfeasible if $n$ is large.
When necessary, D-STEM v2 allows one to use a partitioning approach [Stein, 2013] for model estimation.
The spatial locations $\mathcal{S}$ are divided into $k$ partitions,
and $\bm{z}_{t}$ is partitioned conformably, namely, $\bm{z}_{t}=\left( \bm{z}_{t}^{(1)\top},...,\bm{z}%
_{t}^{(k)\top}\right) ^{\top}$.
Hence, the likelihood function becomes
\[
L(\bm{\psi};\bm{Y},\bm{Z})=\prod_{t=1}^{T}L\left( \bm{\psi}_{\bm{y}};\bm{y}_{t}\mid\bm{z}_{t}\right) \cdot\prod_{j=1}^{k}\left[ L\left( \bm{\psi}_{\bm{z}_{0}};\bm{z}_{0}^{(j)}\right) \cdot\prod_{t=1}^{T}L\left( \bm{\psi}_{\bm{z}};\bm{z}_{t}^{(j)}\mid\bm{z}_{t-1}^{(j)}\right) \right].
\]
From the EM algorithm point of view, this implies
that the E-step is independently applied to each partition, possibly in parallel. When all partitions are equal in size,
the computational complexity reduces to $\mathcal{O}\left( Tkr^{3}p^{3}\right)$, with $r$ as the partition size.
Geographical partitioning, constructed aggregating proximal locations, is a natural choice for environmental applications.
Given the number of partitions $k$, the k-means algorithm applied to spatial coordinates
provides a geographical partitioning of $\mathcal{S}$.
However, the number of points in each partition is not controlled, and a heterogeneous partitioning may arise.
If some subsets are very large and others are small, the reduction in computational complexity given above is far from being achieved.
This can easily happen, for example, when $\mathcal{S}$ is a global network constrained by continent shapes.
For this reason, D-STEM v2 provides a heuristically modified k-means algorithm that encourages
partitions with similar numbers of elements.
The algorithm optimises the following objective function:
\begin{equation}
\sum_{j=1}^{k}\sum_{\bm{s}\in\mathcal{S}_{j}}d\left( \bm{s},\bm{c}_{j}\right) +\lambda\sum_{j=1}^{k}\left(
r_{j}-\frac{n}{k}\right) ^{2}, \label{eq:k-means}%
\end{equation}
where $\lambda\ge0$, $\mathcal{S}_{j} \subset \mathcal{S}$ is the set of coordinates in the $j\text{-}th$
partition, $d$ is the geodesic distance on the sphere $\mathbb{S}^{2}$ and $\bm{c}_{j}$ and $r_{j}$
are the centroid and the number of elements in the $j\text{-}th$ partition, respectively.
The second term in (<ref>) accounts for the variability of the partition sizes and acts as a penalisation for heterogeneous partitionings.
Clearly, when $\lambda=0$, the above-mentioned objective function gives the classic k-means algorithm.
For high values of $\lambda$, solutions with similarly sized partitions are favoured.
Unfortunately, an optimality theory for this algorithm has not yet been developed, and the choice of $\lambda$ is left to the user. Nonetheless, it may be a useful tool to define a partitioning that is appropriate for the application at hand with regard to computing time and geographical properties.
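One possible reading of the size-penalised objective above is sketched below. This is a heuristic Python analogue, not the package's code: it uses the Euclidean distance in place of the geodesic distance $d$, and a sequential greedy assignment in which each point pays the marginal increase of the size penalty:

```python
import numpy as np

def penalised_kmeans(X, k, lam=0.0, n_iter=20, seed=0):
    """Heuristic k-means minimising sum of point-to-centroid distances plus
    lam * sum_j (r_j - n/k)^2, which encourages equally sized partitions."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    centres = X[rng.choice(n, size=k, replace=False)]
    labels = np.zeros(n, dtype=int)
    for _ in range(n_iter):
        sizes = np.zeros(k)
        for i in rng.permutation(n):         # sequential penalised assignment
            d = np.linalg.norm(X[i] - centres, axis=1)
            # marginal increase of the size penalty when adding a point to cluster j
            pen = lam * ((sizes + 1 - n / k) ** 2 - (sizes - n / k) ** 2)
            labels[i] = np.argmin(d + pen)
            sizes[labels[i]] += 1
        for j in range(k):                   # centroid update
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return labels, centres

X = np.random.default_rng(1).normal(size=(200, 2))
labels, _ = penalised_kmeans(X, k=4, lam=10.0)
print(np.bincount(labels, minlength=4))
```

With $\lambda=0$ the assignment step reduces to the classic nearest-centroid rule; with a large $\lambda$ the partition sizes are forced towards $n/k$.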
§.§ Variance-covariance matrix estimation
The EM algorithm provides a point estimate of the parameter vector $\bm{\psi}$
but no uncertainty information. Building on <cit.>, D-STEM v2
estimates the variance–covariance matrix of $\hat{\bm{\psi}}$
by means of the observed Fisher information matrix $\mathbf{I}_{T}$, namely
\[
\hat{\Sigma}_{\bm{\psi},T}=\mathbf{I}_{T}^{-1}.
\]
To understand its computational cost, note that the information matrix given above may be written as a sum,
\[
\mathbf{I}_{T}=\sum_{t=1}^{T}\mathbf{i}_{t},
\]
where $\mathbf{i}_{t}$ is the information contribution of the observations at time $t$.
For large data sets, each matrix $\mathbf{i}_t$ may be expensive to compute, and the total computational cost is linear in $T$, provided missing data are evenly distributed in time.
This results in a time-consuming task with a computational burden even higher than that for model estimation.
For this reason, D-STEM v2 makes it possible to approximate $\hat{\Sigma}_{\bm{\psi},T}$ using a truncated information matrix, namely:
\begin{equation}
\tilde{\Sigma}_{\bm{\psi},t^*}=(\frac{T}{t^*}\mathbf{I}_{t^*})^{-1},
\label{eq:Fisher_approximated}
\end{equation}
which reduces the computational burden by a factor of $1-t^*/T$.
Since $\tilde{\Sigma}_{\bm{\psi},t^*} \rightarrow \hat{\Sigma}_{\bm{\psi},T}$ for $t^* \rightarrow T$,
the truncation time $t^*$ is chosen to control the approximation error in $\hat{\Sigma}_{\bm{\psi},T}$. In particular, $t^*$ is the first integer such that
\begin{equation}
\frac{\left\Vert \tilde{\Sigma}_{\bm{\psi},t}-\tilde{\Sigma}_{\bm{\psi},t-1}%
\right\Vert_{F} }{\left\Vert \tilde{\Sigma}_{\bm{\psi},t}\right\Vert_{F} }\leq
\delta,\label{eq:varcov_approximated}
\end{equation}
where $\left\Vert { \cdot }\right\Vert_{F}$ is the Frobenius norm, and $\delta$ may be defined by the user.
Generally speaking, the behaviour of $\hat{\Sigma}_{\bm{\psi},T}$ for large $T$ and, hence, the behaviour of $\tilde{\Sigma}_{\bm{\psi},t}$ relies on the stationarity and ergodicity of the underlying stochastic process; see, for example,
<cit.> and references therein.
To have operative guidance for the user, let us assume first that no missing values are present, the information matrix is well-conditioned and the covariates have no isolated outliers or extreme trends.
In this case, away from the borders $t\cong1$ and $t\cong T$,
the observed conditional information $\mathbf{i}_t$ has a relatively smooth stochastic behaviour, and the approximation in (<ref>) is expected to be satisfactory at the level defined by $\delta$.
Conversely, if some data are missing at time $t$, the information $\mathbf{i}_t$ is reduced accordingly. If the missing pattern is random over time, this is not an issue.
However, in the unfavourable case of a high percentage of missing data concentrated mostly at the end of the time series ($t\cong T$), the above approximation may over-estimate the information and under-estimate the variances of the parameter estimates.
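The truncation rule can be sketched as follows, assuming the per-time contributions $\mathbf{i}_t$ are available as matrices. This is an illustrative Python analogue of the criterion, not D-STEM v2 code:

```python
import numpy as np

def truncate_information(info_terms, T, delta=1e-3):
    """Accumulate i_t, rescale to (T/t) * I_t, and stop at the first t
    where the inverse stabilises in relative Frobenius norm."""
    I = np.zeros_like(info_terms[0])
    prev = None
    for t, i_t in enumerate(info_terms, start=1):
        I += i_t
        sigma = np.linalg.inv((T / t) * I)      # candidate variance-covariance
        if prev is not None:
            rel = np.linalg.norm(sigma - prev) / np.linalg.norm(sigma)
            if rel <= delta:
                return t, sigma                 # truncation time t* reached
        prev = sigma
    return T, sigma                             # no truncation possible

rng = np.random.default_rng(0)
T = 200
# Synthetic i.i.d. information contributions fluctuating around a fixed matrix
A = np.array([[2.0, 0.3], [0.3, 1.0]])
terms = [A + 0.05 * rng.normal(size=(2, 2)) for _ in range(T)]
terms = [(m + m.T) / 2 for m in terms]          # keep each term symmetric
t_star, sigma = truncate_information(terms, T, delta=1e-3)
print(t_star)
```

In this stationary setting the rescaled inverse settles quickly, so the rule typically fires well before $T$.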
§.§ Dynamic kriging
In this paper, dynamic kriging refers to evaluating the following
\begin{align}
\hat{f} \left( \bm{s},h,t\right) &= \mathbb{E}_{\hat{\bm{\psi}}}\left( f\left( \bm{s},h,t\right) \mid
\bm{Y}\right), \label{eq:krig_exp}\\
\VAR\left( \hat{f} \left( \bm{s},h,t\right) \right) &= \mathbb{V}_{\hat{\bm{\psi}}}\left( f\left( \bm{s},h,t\right) \mid\bm{Y}\right), \label{eq:krig_var}%
\end{align}
for any $\bm{s}\in\mathbb{S}^{2}$, $h\in\mathcal{H}$ and $t=1,...,T$. A common approach is to map the kriging estimates on a regular pixelation $\mathcal{S}^{\ast}=\left\{ \bm{s}_{1}^{\ast},...,\bm{s}_{m}^{\ast}\right\} $. This may be a time-consuming task when $m$ and/or $n$ and/or $T$ are large. To tackle this problem, D-STEM v2 allows one to exploit a nearest-neighbour approach, where the conditioning term in Equations (<ref>) and
(<ref>) is not $\bm{Y}$, but the data at the spatial locations $\mathcal{S}_{\sim j}$, where
$\mathcal{S}_{\sim j}\subset\mathcal{S}$ is the set of the $\tilde{n}\ll n$ nearest
spatial locations to $\bm{s}_{j}^{\ast}$. The use of the nearest-neighbour approach
is justified by the so-called screening effect: even when the spatial correlation function exhibits
long-range dependence, $y$ at spatial location $\bm{s}$ can be assumed to be nearly independent of spatially distant
observations when conditioned on nearby observations <cit.>.
For computational efficiency, D-STEM v2 performs kriging for blocks of pixels.
To do this, $\mathcal{S}^{\ast}$ is partitioned in $u$ blocks $\mathcal{S}^{\ast}=\left\{\mathcal{S}_{1}^{\ast},...,\mathcal{S}_{u}^{\ast}\right\}$, and kriging is
done on each block $\mathcal{S}_{l}^{\ast}$, $l=1,...,u$, with $u\ll m$ controlled by the user.
For each target block $\mathcal{S}_{l}^{\ast}$, the conditioning term in Equations
(<ref>) and (<ref>) is given by the data
observed at $\mathcal{\tilde{S}}_{l}=\bigcup\nolimits_{j\in \mathcal{J}_{l}}\mathcal{S}_{\sim j}$, where $\mathcal{J}_{l}=\left\{ j:\bm{s}_{j}^{\ast }\in \mathcal{S}_{l}^{\ast}\right\}$.
Note that, if $\mathcal{S}_{l}^{\ast}$ is dense and $\mathcal{S}$ is sparse (namely $n\ll m$), then $\mathcal{\tilde{S}}_{l}$ is not
much larger than $\mathcal{S}_{\sim j}$ since most of the spatial locations in $\mathcal{S}_{l}^{\ast}$
tend to have the same neighbours $\mathcal{S}_{\sim j}$.
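The block-wise construction of the conditioning set $\mathcal{\tilde{S}}_{l}$ can be sketched as follows, using Euclidean distance and synthetic locations (an illustrative Python analogue, not package code):

```python
import numpy as np

def block_neighbours(S, S_star_block, n_tilde):
    """For each target pixel in the block, find its n_tilde nearest observed
    locations; return the union of indices used as conditioning set."""
    idx = set()
    for pixel in S_star_block:
        d = np.linalg.norm(S - pixel, axis=1)
        idx.update(np.argsort(d)[:n_tilde].tolist())
    return sorted(idx)

rng = np.random.default_rng(0)
S = rng.uniform(size=(100, 2))                  # sparse observed locations
block = rng.uniform(0.4, 0.6, size=(50, 2))     # dense block of target pixels
cond = block_neighbours(S, block, n_tilde=10)
print(len(cond))
```

Because nearby pixels share most of their neighbours, the size of the union stays close to $\tilde{n}$ rather than growing like $\tilde{n}$ times the number of pixels in the block.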
§.§ Validation
D-STEM v2 allows one to implement an out-of-sample validation by partitioning the original spatial locations
$\mathcal{S}$ into subsets $\mathcal{S}_{est}$ and $\mathcal{S}_{val}$. Data at $\mathcal{S}_{est}$ are used for model estimation while data at $\mathcal{S}_{val}$ are used for validation. Once the model is estimated, the kriging
formula in Equation (<ref>) is used to predict at $\mathcal{S}_{val}$ for all times $t$ and heights $\bm{h}$. The following validation mean squared errors are
then computed
\begin{align*}
MSE_{t} & =\frac{1}{P_{1}}\sum_{\bm{s}\in\mathcal{S}_{val}}\sum_{h\in\bm{h}_{\bm{s},t}}\left( y\left( \bm{s},h,t\right) -\hat{y}\left( \bm{s},h,t\right) \right) ^{2},\\
MSE_{\bm{s}} & =\frac{1}{P_{2}}\sum_{t=1}^{T}\sum_{h\in\bm{h}_{\bm{s},t}}\left( y\left( \bm{s},h,t\right) -\hat{y}\left( \bm{s},h,t\right) \right) ^{2},\\
MSE_{h} & =\frac{1}{P_{3}}\sum_{t=1}^{T}\sum_{\bm{s}\in\mathcal{S}_{val}}\left( y\left( \bm{s},h,t\right) -\hat{y}\left( \bm{s},h,t\right) \right) ^{2},
\end{align*}
where $\hat{y}\left( \bm{s},h,t\right) $ is obtained from Equation (<ref>), while $P_{1}$, $P_{2}$ and $P_{3}$ are the number of
terms in each sum.
When $\bm{h}_{\bm{s},t}$ varies across the profiles, D-STEM v2 provides a binned MSE by splitting the continuous domain $\mathcal{H}$ into $B$ equally spaced intervals. Let $H^*_r$ be the set of observation points in the $r\text{-}th$ interval, let $n_r$ be the corresponding observation number and let $\bar{h}_r = \frac{1}{n_r} \sum_{h\in H^*_r} h$ be the mean of the points in the $r\text{-}th$ interval. Then, $MSE_{\bar{h}_r}$ is computed by
\begin{align*}
MSE_{\bar{h}_r} & =\frac{1}{P_{4}} \sum_{h\in H^*_r} \sum_{t=1}^{T}\sum_{\bm{s}\in\mathcal{S}_{val}}\left( y\left( \bm{s},h,t\right) -\hat{y}\left( \bm{s},h,t\right) \right) ^{2},
\end{align*}
where $P_{4}$ is the total number of observations in the $r\text{-}th$ interval.
D-STEM v2 also provides the validation $R^2$ with respect to time
\begin{align*}
R^{2}_{t} & =1 - \frac{MSE_{t}}{\VAR\left( \{y\left( \bm{s},h,t\right), \bm{s}\in\mathcal{S}_{val}, h\in\bm{h}_{\bm{s},t} \} \right)},
\end{align*}
and the analogous validation $R^2$ values with respect to location $\bm{s}$ and $\bar{h}_r$.
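For a dense validation array with a constant number of measurement points per profile, the three validation MSEs reduce to means over the complementary axes, as the following illustrative Python sketch shows (in the general case, $\bm{h}_{\bm{s},t}$ varies and the binned version above is used instead):

```python
import numpy as np

def validation_mse(y, y_hat, axis):
    """Mean squared error over all axes except the one kept fixed, mirroring
    MSE_t (axis=2), MSE_s (axis=0) and MSE_h (axis=1) for an (n, q, T) array."""
    axes = tuple(a for a in range(3) if a != axis)
    return np.mean((y - y_hat) ** 2, axis=axes)

rng = np.random.default_rng(0)
y = rng.normal(size=(4, 10, 30))             # n=4 locations, q=10 heights, T=30
y_hat = y + rng.normal(scale=0.1, size=y.shape)  # predictions with small error
mse_t = validation_mse(y, y_hat, axis=2)     # one value per time point
print(mse_t.shape)   # (30,)
```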
§ SOFTWARE
This section starts by briefly describing the modelling capabilities of D-STEM v2 inherited by the previous version for dealing with spatio-temporal data sets. Then, it focuses on the D-STEM v2 classes and methods, which implement estimation, validation and dynamic mapping of the model presented in Section <ref>. Although some of the classes are already available in D-STEM v1, they are listed here for completeness.
§.§ Software description
D-STEM v1 implemented a substantial number of models. The dynamic coregionalisation model (DCM) [Finazzi and Fassò, 2014] and the hidden dynamic geostatistical model (HDGM) [Calculli et al., 2015]
are suitable for modelling and mapping multivariate space-time data collected from
unbalanced monitoring networks. Model-based clustering (MBC) [Finazzi et al., 2015] has been introduced
for clustering time series, and it is suitable for large data sets with spatially registered time series.
Moreover, the emulator model [Finazzi et al., 2019] is based on a Gaussian emulator, and it is exploited for
modelling the multivariate output of a complex physical model.
In addition, D-STEM v2 (available at <github.com/graspa-group/d-stem>) provides the functional version of HDGM, denoted by f-HDGM, which handles modelling and mapping of functional space-time data, following the methodology of Section <ref>. For implementing f-HDGM,
D-STEM v2 relies on the MATLAB version of the fda package [Ramsay et al., 2018], which is automatically downloaded and installed by D-STEM v2.
§.§ Data format
Two data formats are available to define observations for the f-HDGM. One is the internal format used by the D-STEM v2 classes, and the other one is the user format based on the more user-friendly table data type implemented in recent versions of MATLAB.
The latter permits storing measurement profiles, covariate profiles, coordinates, timestamps and units of measure in a single object. The internal format is not discussed here.
Considering a table in the user format, each row includes the profiles collected at a given spatial location and time point.
The column labels are defined as follows:
columns Y and Y_name are used for the dependent variable $y$ and its name as a string field, respectively;
the column with prefix X_h_ is used for the values of the domain $h$; eventually, columns with prefix X_beta_ are used for covariates $\bm{x}$.
These tables have only one column for $y$ and only one column for $h$. Instead, we can have any number $b \geqslant 0$ of covariate columns. Additionally, the table has columns X_coordinate and Y_coordinate for spatial location $\bm{s}$ and column Time for the timestamp. Units of measure are stored in the Properties.VariableUnits property of the table columns and used in outputs and plots. Units for X_coordinate and Y_coordinate can be deg for degrees, m for metres and km for kilometres. Geodetic distance is used when the unit is deg; otherwise, the Euclidean distance is used.
At the table row corresponding to location $\bm{s}_i$ and time $t$, the elements related to $y$ and $\bm{x}$ are vectors with $q_{i,t}$ elements.
Vectors related to $y$ may include missing data (NaN). If $y$ is entirely missing for a
given $\left( \bm{s},t\right) $, the row must be removed from the table.
Since spatial locations $\mathcal{S}$ are fixed in time, and as their number $n$ is
determined by the number of unique coordinates in the table,
profiles observed at different time points but the same spatial location $\bm{s}$ must have
the same coordinates.
§.§ Software structure
In D-STEM, a hierarchical structure of object classes and methods is used to handle data definition, model definition and estimation, validation, dynamic kriging and the related plotting capabilities.
The structure is schematically given below.
Further details on the use of each class are given within the two case studies in this paper, while class constructors, methods and property details can be obtained in MATLAB using the command
doc <class_name>.
§.§.§ Data handling
The stem_data class allows the user to define the data used in f-HDGM models, mainly through the following objects and methods.
* Objects of stem_data
* stem_modeltype: model type (DCM, HDGM, MBC, Emulator or f-HDGM); note that model type is needed here because the data structure varies among the different models;
* stem_fda: basis functions specification;
* stem_validation (optional): definition of the learning and testing datasets for model validation.
* Methods and Properties of stem_data
* kmeans_partitioning: data partitioning for parallel EM computations of Section <ref>;
this method is applied to a stem_data object, and its output is used by the EM_estimation method in the stem_model class below;
* shape (optional): structure with geographical borders used for mapping.
* Internal Objects of stem_data
* stem_varset: observed data and covariates;
* stem_gridlist: list of stem_grid objects;
* stem_grid: spatial locations coordinates;
* stem_datestamp: temporal information.
The helper method stem_misc.data_formatter is useful for building stem_varset objects starting from data tables. Its class, stem_misc, is a miscellaneous static class implementing other methods for various intermediate tasks, which are not discussed here for brevity.
§.§.§ Model building
The stem_model class is used to define, estimate, validate and output a f-HDGM, mainly through the following objects and methods.
* Objects of stem_model
* stem_data: defined above;
* stem_par: model parameters;
* stem_EM_result: container of the estimation output, after EM_estimate;
* stem_validation_result (optional): container of validation output, available only if stem_data contains the stem_validation object;
* stem_EM_options (optional): model estimation options; it is an input of the EM_estimate method below.
* Methods of stem_model
* EM_estimate: computation of parameter estimates;
* set_varcov: computation of the estimated variance-covariance matrix;
* plot_profile: plot of functional data;
* print: print estimated model summary;
* beta_Chi2_test: testing significance of covariates;
* plot_par: plot functional parameter;
* plot_validation: plot MSE validation.
§.§.§ Kriging
The kriging handling is implemented with two classes. The first is the stem_krig class, which implements the kriging spatial interpolation.
* Objects of stem_krig
* stem_krig_data: mesh data for kriging;
* stem_krig_options: kriging options;
* Methods of stem_krig
* kriging: computation of kriging, the output is a stem_krig_result object.
The second is the stem_krig_result class, which stores the kriging output and implements the methods for plotting the kriging output.
* Methods of stem_krig_result
* surface_plot: mapping of the kriging estimate and its standard deviation for fixed $h$;
* profile_plot: plot of the kriged function and its variance–covariance matrix at a fixed location and time.
Although, at first reading, the user might prefer a single object for both kriging input and output, these objects may be quite large, and keeping them separate makes the current approach more flexible.
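Putting the classes above together, a typical f-HDGM session follows the pattern sketched below. This is a schematic, not runnable as-is: data_table, input_fda and o_krig_grid are placeholders, and the constructor options detailed in the case studies of the next sections are omitted.

```matlab
% Schematic D-STEM v2 workflow (placeholder inputs, simplified options).
input_data.stem_modeltype = stem_modeltype('f-HDGM');
input_data.data_table     = data_table;            % user-format table
input_data.stem_fda       = stem_fda(input_fda);   % basis functions
o_data  = stem_data(input_data);                   % data handling
o_par   = stem_par(o_data, 'exponential');         % parameter container
o_model = stem_model(o_data, o_par);               % model building
o_model.EM_estimate();                             % ML estimation via EM
o_krig  = stem_krig(o_model, stem_krig_data(o_krig_grid));
o_krig_result = o_krig.kriging(stem_krig_options());  % dynamic kriging
```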
§ CASE STUDY ON OZONE DATA
This section illustrates how to make inferences on an f-HDGM for ground-level high-frequency air
quality data collected by a monitoring network. In particular, hourly ozone ($O_{3}$, in $\mu g/m^3$) measured
in Beijing, China, is considered.
§.§ Air quality data
Ground-level $O_{3}$ is of increasing public concern due to its
central role in air pollution and climate change. In China, $O_{3}$ has
become one of the most severe air pollutants in recent years
[Wang et al., 2017].
In this case study, the aim is to model hourly $O_{3}$ concentrations from 2015 to 2017 with respect to temperature and ultraviolet radiation (UVB) across Beijing.
Concentration and temperature data are available at twelve monitoring stations (Figure <ref>). Hourly UVB data are obtained from the ERA-Interim product of the European Centre for Medium-Range Weather Forecasts (ECMWF) at a grid size of $0.25^{\circ} \times 0.25^{\circ}$ over the city.
Locations of the twelve stations in Beijing [Kahle and Wickham, 2013].
To describe the diurnal cycle of $O_{3}$, which peaks in the afternoon and reaches a minimum at night-time, the 24 hours of the day are used as domain $\mathcal{H}$ of the basis functions, while the time index $t$ is on the daily scale. Moreover, due to the circularity of
time, Fourier basis functions are adopted, which implies that $\beta_{j}\left(
h\right) $, $\sigma_{\varepsilon}^{2}\left( h\right) $ are periodic functions.
The measurement equation for $O_{3}$ is
\begin{equation}
y\left( \bm{s},h,t\right) =\beta_{0}(h)+x_{temp}\left( \bm{s},h,t\right) \beta_{temp}(h)+x_{uvb}\left( t\right) \beta_{uvb}(h)+\bm{\phi}(h)^{\top}\bm{z}(\bm{s},t)+\varepsilon(\bm{s},h,t), \label{eq:O3meaurement}%
\end{equation}
where $\bm{s}$ is the generic spatial location, $h\in\left[ 0,24\right)
$ is the time within the day expressed in hours and $t=1,...,1096$ is the day
index over the period 2015–2017. Based on a preliminary analysis, the number of basis functions for $\beta_{j}\left( h\right) $, $\sigma_{\varepsilon}^{2}\left( h\right) $
and $\bm{\phi}(h)^{\top}\bm{z}(\bm{s},t)$ is chosen to be $5$, $5$
and $7$, respectively.
§.§ Software implementation
This paragraph details the implementation of D-STEM v2 in three
aspects: model estimation, validation and kriging. Relevant
scripts are demo_section4_model_estimate.m,
demo_section4_validation.m and demo_section4_kriging.m,
respectively, which are available in the supplementary material. All the scripts can be executed by choosing the option number from 1 to 3 in the demo_menu_user.m script.
§.§.§ Model estimation
This paragraph describes the demo_section4_model_estimate.m script
devoted to the estimation of the model parameters and of their
variance–covariance matrix.
The data set needed to perform this case study is stored as a MATLAB table in the user format of Section <ref> and named Beijing_O3. It can be loaded from the corresponding file as follows:
load ../Data/Beijing_O3.mat;
In the Beijing_O3 table, each row refers to a fixed space-time point and gives a 24-element hourly ozone profile with the corresponding conformable covariates, which are: a constant, temperature and UVB.
The following lines of code specify the model type and the basis functions, which are stored in an object of class stem_fda:
o_modeltype = stem_modeltype('f-HDGM');
input_fda.spline_type = 'Fourier';
input_fda.spline_range = [0 24];
input_fda.spline_nbasis_z = 7;
input_fda.spline_nbasis_beta = 5;
input_fda.spline_nbasis_sigma = 5;
o_fda = stem_fda(input_fda);
When using a Fourier basis, spline_nbasis_z must be set to a positive odd integer.
Meanwhile, spline_nbasis_beta and/or spline_nbasis_sigma must be left empty if $\bm{\beta}(h)\equiv\bm{\beta}$ and/or $\sigma
_{\varepsilon}^{2}(h)\equiv\sigma_{\varepsilon}^{2}$ are constant functions.
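For instance, under the assumption that one wants a functional $\bm{\beta}(h)$ but a constant $\sigma_{\varepsilon}^{2}$, the corresponding field is simply left unset; the sketch below is based only on the field names introduced above:

```matlab
% Fourier basis with functional beta(h) but constant sigma_eps^2.
input_fda.spline_type        = 'Fourier';
input_fda.spline_range       = [0 24];
input_fda.spline_nbasis_z    = 7;    % must be a positive odd integer
input_fda.spline_nbasis_beta = 5;
% spline_nbasis_sigma not set => sigma_eps^2(h) is constant
o_fda = stem_fda(input_fda);
```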
The next step is to define an object of class stem_data, which specifies the model type and contains the basis function object and the data from the Beijing_O3 table, transformed in the internal data format. This is done using the intermediate input_data structure:
input_data.stem_modeltype = o_modeltype;
input_data.data_table = Beijing_O3;
input_data.stem_fda = o_fda;
o_data = stem_data(input_data);
Then, an object of class stem_model is created by using both information on data, stored in the o_data object, and on parametrisation, contained in the stem_par object named o_par:
o_par = stem_par(o_data,'exponential');
o_model = stem_model(o_data, o_par);
To facilitate visualisation, the method plot_profile of class stem_model shows the $O_3$ profile data at location (lat0, lon0), in the days between t_start and t_end (Figure <ref>):
lat0 = 40; lon0 = 116;
t_start = 880; t_end = 900;
o_model.plot_profile(lat0, lon0, t_start, t_end);
Before running the EM algorithm, the model parameters need to be initialised.
This is done using the method get_beta0 of class stem_model, which
provides the starting values for $\bm{\beta}$, and the method
get_coe_log_sigma_eps0 for the case of a functional $\sigma_{\varepsilon}^{2}(h)$.
Next, the method set_initial_values of the o_model object is called to complete the initialisation of model parameters:
n_basis = o_fda.get_basis_number;
o_par.beta = o_model.get_beta0();
o_par.sigma_eps = o_model.get_coe_log_sigma_eps0();
o_par.theta_z = ones(1, n_basis.z)*0.18;
o_par.G = eye(n_basis.z)*0.5;
o_par.v_z = eye(n_basis.z)*10;
o_model.set_initial_values(o_par);
Note that the theta_z parameter must be provided in the same unit of measure as the spatial coordinates.
$O_3$ concentrations at location 39.92 latitude and 116.19 longitude for 21 days beginning on 29 May 2017. Left: each dot is a concentration measurement. The colour of the dot depicts the concentration. Right: each graph is a daily concentration profile.
Before model estimation, EM exiting conditions $\epsilon_1$ (exit_toll_par), $\epsilon_2$ (exit_toll_loglike) and $\iota^{\ast}$ (max_iterations) introduced in Section <ref> can be optionally defined as follows:
o_EM_options = stem_EM_options();
o_EM_options.exit_toll_par = 0.0001;
o_EM_options.exit_toll_loglike = 0.0001;
o_EM_options.max_iterations = 200;
Model estimation is started by calling the method EM_estimate of the
o_model object, with the optional o_EM_options object passed as an input argument.
After model estimation, the variance–covariance matrix of the estimated parameters is evaluated by calling the method set_varcov, with the optional approximation level $\delta$ of Equation (<ref>) passed as an input parameter.
Finally, set_logL computes the observed data log-likelihood.
o_model.EM_estimate(o_EM_options);
delta = 0.001;
o_model.set_varcov(delta);
o_model.set_logL();
All the relevant estimation results are found in the internal
stem_EM_result object, which can be accessed as a property of the
o_model object:
o_model.stem_EM_result;
Figure <ref> is produced by calling the plot_par method and
shows the estimated $\bm{\beta}_{0}(h)$, $\bm{\beta}_{temp}(h)$, $\bm{\beta}_{uvb}(h)$, and $\sigma_{\varepsilon}^{2}(h)$.
Thanks to the use of a Fourier basis, the functions are periodic with a period of one day. The plot
of $\sigma_{\varepsilon}^{2}(h)$ shows that the unexplained portion of the $O_{3}$ variance
is small during daylight hours, which is
consistent with the results of [Dohan and Masschelein, 1987].
When the confidence bands in the plot_par output contain zero, it
may be useful to test the significance of the covariates. By calling the method beta_Chi2_test,
the results of $\chi^{2}$ tests are obtained, and they are reported in Table <ref>. Although $\beta_{uvb}$ is
close to $0$ in the morning, all fixed effects are highly significant overall. The model output is shown in the
MATLAB command window by calling the print method.
Estimated $\beta_{0}(h)$, $\beta_{temp}(h)$, $\beta_{uvb}(h)$ and
$\sigma_{\varepsilon}^{2}(h)$, with $90\%$, $95\%$ and $99\%$ confidence bands shown through different shades.
Covariate      $\chi^{2}$ statistic   $p$ value
Constant                 136.33              0
Temperature            14266.07              0
UVB                     2094.34              0
$\chi^{2}$ tests for significance of covariates.
§.§.§ Validation
This paragraph describes the script demo_section4_validation.m, which implements validation. Compared
to the code in demo_section4_model_estimate.m, it differs only in
providing an object of class stem_validation.
To create the object called o_validation, the name of the validation
variable is needed as well as the indices of the validation stations. Moreover, if the size of the nearest neighbour
set for each kriging site (nn_size) is not provided as the third input
argument in the stem_validation class constructor, D-STEM v2 uses all the
remaining stations. For example, a validation data set with three stations is constructed as follows:
S_val = [1,7,10];
input_data.stem_validation = stem_validation('O3', S_val);
The validation statistics, computed by EM_estimate, are saved in the internal object
stem_validation_result, which can be accessed as a property of the o_model object. The stem_validation_result object contains the estimated $O_{3}$ residuals for the above-mentioned validation stations as well as the validation mean square errors and $R^2$, as defined in Section <ref>.
§.§.§ Kriging
This paragraph describes the demo_section4_kriging.m script, which applies the
approach of Section <ref> to
the estimated model to map the $O_{3}$ concentrations over Beijing city.
The first step is to create an object of class
stem_grid, which collects the information about the regular grid of
pixels $\mathcal{B}$ to be used for mapping.
Then, an object of class stem_krig_data is created, where the o_krig_grid object is passed as an input argument:
load ../Output/ozone_model;
step_deg = 0.05;
lat = 39.4:step_deg:41.1;
lon = 115.4:step_deg:117.5;
[lon_mat,lat_mat] = meshgrid(lon,lat);
krig_coordinates = [lat_mat(:) lon_mat(:)];
o_krig_grid = stem_grid(krig_coordinates,'deg','regular','pixel',...
    size(lat_mat),'square',0.05,0.05);
o_krig_data = stem_krig_data(o_krig_grid);
Two comments on the above lines follow. First, since the grid in the o_krig_grid object is regular, the dimensions of the grid (size(lat_mat), i.e. $35 \times 43$) must be provided, as well as the shape of the pixels and the spatial resolution of the grid, which is $0.05^{\circ} \times 0.05 ^{\circ}$. Second, the above step using the stem_krig_data constructor may appear redundant at first glance. Indeed, it is needed for compatibility with other model types for which, in addition to the stem_grid object, other information is also necessary for the stem_krig_data constructor.
Next, the stem_krig_options class provides some options for kriging.
By default, the output is back-transformed in the original unit of measure if the observations have been log-transformed and/or standardised. The back_transform property enables handling this.
Moreover, setting the no_varcov property to 1 avoids the time-consuming computation of the kriging variance.
Eventually, the block_size property is used to define the number of spatial locations in each block $\mathcal{S}_{l}^{\ast}$:
o_krig_options = stem_krig_options();
o_krig_options.back_transform = 0;
o_krig_options.no_varcov = 0;
o_krig_options.block_size = 30;
After storing the map of Beijing boundaries into the o_model object, the latter is used with o_krig_data to create an object of class stem_krig.
This and o_krig_options together contain all information for kriging, which is obtained by the corresponding kriging method:
o_model.stem_data.shape = shaperead('../Maps/Beijing_adm1.shp');
o_krig = stem_krig(o_model, o_krig_data);
o_krig_result = o_krig.kriging(o_krig_options);
Note that this task may be time consuming for large grids. The kriging output saved in the o_krig_result object gives the latent process estimate $z_t$ and its variance.
The surface_plot and profile_plot methods may be used to obtain and plot $\hat{f}(\bm{s},h,t)$ of Equation (<ref>). In this case, the user has to provide the corresponding covariate (X_beta) for the scalar/vector h, time t or location $\bm{s}$ (lon0, lat0) of interest.
Specifically, the surface_plot method is used to display the $O_{3}$ map using h, t, X_beta as input arguments. If X_beta is not provided, the mapping concerns the component $\bm{\phi}(h)^{\top}\bm{z}(\bm{s},t)$. Loaded from the file of the same name, the array X_beta_t_100 refers to time $t=100$ and hour $h=10.5$ and has dimension $35 \times 43 \times 3$. Maps of $O_{3}$ concentrations and their standard deviation are shown in Figure <ref>.
load ../Data/kriging/X_beta_t_100;
t = 100;
h = 10.5;
[y_hat, diag_Var_y_hat] = o_krig_result.surface_plot(h, t, X_beta_t_100);
$O_3$ concentrations and their standard deviation at 10:30 am ($h = 10.5$), on 10 April 2015, where 12 stations are marked with black stars.
On the other hand, the profile_plot method is used to display the $O_{3}$ profile at a given spatial location $\bm{s}$ (lon0, lat0) and time t. Again, the profile plot concerns the component $\bm{\phi}(h)^{\top}\bm{z}(\bm{s},t)$ if X_beta is not provided. After loading X_beta_h (dimension $25\times 3$) from the file of the same name, this method represents the profile of $O_{3}$ concentrations and their variance–covariance matrix as in Figure <ref>:
load ../Data/kriging/X_beta_h;
h = 0:24;
lon0 = 116.25;
lat0 = 40.45;
t = 880;
[y_hat, diag_Var_y_hat] = o_krig_result.profile_plot(h, lon0, lat0, ...
t, X_beta_h);
Note that the prediction in Equation (<ref>) and the variance in Equation (<ref>) are stored in the output arguments y_hat, and diag_Var_y_hat, respectively.
$O_3$ concentrations with $90\%$, $95\%$ and $99\%$ confidence bands (different shadings), and their variance–covariance matrix at latitude 40.45, longitude 116.25 on 29 May 2017.
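Since profile_plot also returns the prediction and its variance, pointwise bands like those in the figure can be recomputed outside D-STEM. The Gaussian 95% band below is a post-processing sketch, not part of the package API:

```matlab
% Approximate 95% pointwise confidence band from the kriging output.
se    = sqrt(diag_Var_y_hat);            % kriging standard error
upper = y_hat + 1.96*se;
lower = y_hat - 1.96*se;
plot(h, y_hat, 'k-', h, upper, 'b--', h, lower, 'b--');
```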
§ CASE STUDY ON CLIMATE DATA
In order to show the complexity-reduction capabilities of D-STEM v2, a data set of temperature vertical profiles collected by the radiosondes of the Universal Radiosonde Observation Program (RAOB) is now considered.
The profiles are observed over the Earth's
sphere, and they are misaligned; that is, each profile differs in the number
of observations and in the altitude above the ground of each observation.
Additionally, the computational burden is higher due to the larger number of
spatial locations at which profiles are observed.
§.§ RAOB data
Radiosondes are routinely launched from stations all over the world to
measure the state of the upper troposphere and lower stratosphere. Data collected
by radio sounding have applications in weather prediction and climate studies.
Temperature data from 200 globally distributed stations collected daily
during January 2015 at 00:00 and 12:00 UTC are considered here.
Each profile consists of a given number of measurements taken at different
pressure levels. Since the weather balloon carrying the radiosonde usually
explodes at an unpredictable altitude, the profile measurements
are misaligned across the profiles and have different pressure ranges. A
functional data approach is natural in this case since the underlying temperature
profile can be seen as a continuous function sampled at some pressure levels.
Figure <ref> depicts the spatial locations of temperature measurements
taken on 1 January 2015 at 00:00 UTC. This demo data set, which only covers one month,
includes around $10^5$ data points. When the full data set is used in climate studies,
the number of data points grows to around $10^8$. In this case, a recent server machine with multiple CPUs
with at least 256 GB of RAM is required for model estimation and kriging.
The focus of the case study is on the difference between the radiosonde
measurement and the output of the ERA-Interim global atmospheric reanalysis
model provided by ECMWF. In particular, the aim is to study the spatial
structure of this difference in 4D space, where the
dimensions are latitude, longitude, altitude and time.
The model for temperature $y$ is as follows
\[
y\left( \bm{s},h,t\right) =x_{ERA}\left( \bm{s},h,t\right)
\beta_{ERA}\left( h\right) +\bm{\phi}\left( h\right) ^{\top}\bm{z}\left( \bm{s},t\right) +\varepsilon\left( \bm{s},h,t\right),
\]
where $h\in\left[ 50, 925\right] $ $hPa$ is the pressure level, while
$t=1,...,62$ is a discrete time index for January 2015. Figure <ref> shows the temperature measurements at a given station, where 50 and 925 $hPa$ correspond approximately to 25 and 1.3 km, respectively.
Temperature at location 5.25 latitude and -3.93 longitude in January 2015. Left: each dot is a temperature measurement. The colour of the dot depicts the temperature. Right: each graph is a temperature vertical profile collected through radio sounding.
§.§ Software implementation
This section details the software implementation of the case study described above as in script
demo_section5.m, which can be also executed in the demo_menu_user.m script. To avoid repetition, only the relevant parts of the script that differ from the case study of Section <ref> are reported and commented on here. In particular, data
loading and instantiation of the stem_model object are not described.
§.§.§ Model estimation
The problem of vertical misalignment of the measurements is completely transparent to the user and is handled by the internal stem_misc.data_formatter method when creating the stem_data object.
Note that the dimension of the matrices in o_varset depends on $q$, the maximum number of measurements in each profile.
To prevent out-of-memory problems, it is advisable to avoid data sets in which only a few profiles have a large number of
measurements, which could result in large matrices in
o_varset, with most of the elements set to NaN.
B-spline bases are used, since, in this application, vertical profiles are not periodic with respect to the pressure domain. The corresponding object of class stem_fda is created in the following way:
spline_order = 2;
rng_spline = [50,925];
knots_number = 5;
knots = linspace(rng_spline(1),rng_spline(2),knots_number);
input_fda.spline_type = 'Bspline';
input_fda.spline_order = spline_order;
input_fda.spline_knots = knots;
input_fda.spline_range = rng_spline;
o_fda = stem_fda(input_fda);
Note that the knots are equally spaced along the functional range. In general,
however, non-equally spaced knots can be provided, and each model component
(i.e. $\sigma_{\varepsilon}^{2}$, $\beta_{j}$ and $\bm{\phi}(h)^{\top}\bm{z}(\bm{s}%
,t)$) can have a different set of knots. This is obtained using spline_order and spline_knots with additional suffixes _sigma, _beta, _z.
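As a sketch of this, assuming one wanted a richer basis for the latent component $\bm{\phi}(h)^{\top}\bm{z}(\bm{s},t)$ than for the fixed effects, the suffixed fields might be set as follows; the field names follow the suffix rule stated above, and the knot choices are purely illustrative:

```matlab
% Component-specific B-spline bases via the _beta, _sigma and _z suffixes.
rng_spline = [50, 925];
input_fda.spline_type        = 'Bspline';
input_fda.spline_range       = rng_spline;
input_fda.spline_order_beta  = 2;
input_fda.spline_knots_beta  = linspace(rng_spline(1), rng_spline(2), 5);
input_fda.spline_order_sigma = 2;
input_fda.spline_knots_sigma = linspace(rng_spline(1), rng_spline(2), 5);
input_fda.spline_order_z     = 3;
input_fda.spline_knots_z     = linspace(rng_spline(1), rng_spline(2), 9);
o_fda = stem_fda(input_fda);
```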
Although this data set is not large, the demo shows how to
enable the partitioning discussed in Section <ref>. First, the spatial locations are partitioned using the modified k-means algorithm:
k = 5;
trials = 100;
lambda = 5000;
partitions = o_data.kmeans_partitioning(k, trials, lambda);
where k is the number of partitions, trials is the number of times the k-means
algorithm is executed starting from randomised centroids, and lambda is $\lambda$ in Equation (<ref>).
At the end of the k-means algorithm, data are internally re-ordered for parallel
computing. Model estimation is done after creating and setting an object of class
stem_EM_options. To do this, the output of the kmeans_partitioning method is passed to the
partitions property of the o_EM_options object.
Additionally, for parallel computing, the number of workers must be set to a
value higher than 1. In general, this could be any number up to the number of
cores available on the machine.
o_EM_options = stem_EM_options();
o_EM_options.partitions = partitions;
o_EM_options.workers = 2;
The three validation MSEs defined in Section <ref> are shown in Figures <ref> and <ref>. To generate these figures, the method plot_validation is called with vertical = 1, which provides “atmospheric profile” plots with $h$ on the vertical axis:
vertical = 1;
(Left) Validation MSE with respect to $\bar{h}$, coloured by the number of observations $n_b$; (Right) validation MSE with respect to time $t$.
Validation MSE for the thirty-three stations, where the stations used for estimation are marked with blue stars.
§.§.§ Kriging
Interpolation across space and over time is done as in Section
<ref>. However, complexity reduction is enabled by adopting the
nearest neighbour approach detailed in Section <ref>.
To do this, the stem_krig_options class constructor is first called; its block_size property is used to define the number of spatial locations in $\mathcal{S}_{l}^{\ast}$, and nn_size is used to define $\tilde{n}$. Additionally, setting
o_krig_options.workers makes it possible to do the kriging over the $u$ blocks in
parallel using up to the allocated number of workers:
o_krig_options = stem_krig_options();
o_krig_options.block_size = 150;
o_krig_options.nn_size = 10;
o_krig_options.workers = 2;
Finally, kriging predictions and standard errors are mapped for a given $h\in\mathcal{H}$ and time $t$:
h = 875.3;
t = 12;
o_krig_result.surface_plot(h, t);
Since covariates are not provided to the surface_plot method, the plots concern the component
$\bm{\phi}(h)^{\top}\bm{\hat{z}}(\bm{s},t)$, namely the difference between RAOB and ERA-Interim, and its standard deviation. The output
of the above code is depicted in Figures <ref> and <ref>.
$\bm{\phi}(h)^{\top}\bm{\hat{z}}(\bm{s},t)$ at pressure $875.3$ hPa at 12:00 UTC on 6 January 2015, where the $200$ stations are shown as black stars.
Standard deviation of $\bm{\phi}(h)^{\top}\bm{\hat{z}}(\bm{s},t)$ at pressure $875.3$ hPa at 12:00 UTC on 6 January 2015, where the $200$ stations are shown as black stars.
§ CONCLUDING REMARKS
This paper introduced the package D-STEM v2 through two case studies of spatio-temporal modelling of functional data. It was shown that, in addition to maximum likelihood estimation, Hessian approximation and kriging for large
data sets, D-STEM v2 offers several data-handling capabilities, allows for automatic construction of the relevant objects and provides graphical output. In particular, it produces high-quality global maps and two kinds of functional
plots: the traditional x–y plot and the vertical profile plot, which is popular, for example, in atmospheric data analysis.
With these tools, model validation and kriging are straightforward.
D-STEM v2 fills a gap in functional geostatistics. In fact, although statistical methods for georeferenced functional data have been recently
developed (e.g. [Ignaccolo et al., 2014]), standard geostatistical packages do not consider functional data, especially in the spatio-temporal context.
The successful use of D-STEM v1 in a number of applications proved that the EM algorithm implementation is quite stable. Now, due to improvements in computational efficiency, the new D-STEM v2 has the capability to handle large data sets.
Moreover, thanks to the approximated variance–covariance matrix, it is possible to compute standard errors for all model parameters relatively fast and avoid the large number of iterations typically required by an MCMC approach for making inferences.
However, a limit of the EM algorithm is its limited flexibility to changes in the model equations. Indeed, changes in parametrisation or latent variable structure usually require deriving new closed-form estimation formulas and changing the software accordingly. Moreover, changes in covariance functions are not easy to handle.
Computationally, the main limit of D-STEM v2 is in the number $p$ of basis functions that can be handled.
Even if partitioning into $k$ blocks of size $r$ is exploited, the computational complexity is $\mathcal{O}\left(Tkr^{3}p^{3}\right)$, meaning that $p$ cannot be large.
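To make this constraint concrete: with $T$, $k$ and $r$ held fixed, one EM pass scales cubically in $p$, so doubling the number of basis functions multiplies the cost by
\[
\frac{Tkr^{3}\left( 2p\right) ^{3}}{Tkr^{3}p^{3}}=2^{3}=8.
\]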
Currently, the authors are working on a new version that will handle multivariate functional space-time data and user-defined spatial covariance functions, making D-STEM v2 a valid and comprehensive alternative to the Gaussian process regression models (fitrgp) implemented in the Statistics and Machine Learning Toolbox of MATLAB.
§ ACKNOWLEDGMENTS
We would like to thank China's National Key Research Special Program Grant (No. 2016YFC0207701) and the Center for Statistical Science at Peking University.
[Bivand et al., 2015]
Bivand RS, Gómez-Rubio V, Rue H (2015).
Spatial Data Analysis with R-INLA with Some Extensions.
Journal of Statistical Software, 63(20), 1–31.
[Blangiardo et al., 2013]
Blangiardo M, Cameletti M, Baio G, Rue H (2013).
Spatial and Spatio-Temporal Models with R-INLA.
Spatial and Spatio-Temporal Epidemiology, 4, 33–49.
[Calculli et al., 2015]
Calculli C, Fassò A, Finazzi F, Pollice A, Turnone A (2015).
Maximum Likelihood Estimation of the Multivariate Hidden
Dynamic Geostatistical Model with Application to Air Quality in Apulia, Italy.
Environmetrics, 26(6), 406–417.
[Cressie, 2018]
Cressie N (2018).
Mission CO2ntrol: A Statistical Scientist's Role in Remote
Sensing of Atmospheric Carbon Dioxide.
Journal of the American Statistical Association,
113(521), 152–168.
[Cressie and Johannesson, 2008]
Cressie N, Johannesson G (2008).
Fixed Rank Kriging for Very Large Spatial Data Sets.
Journal of the Royal Statistical Society B, 70(1), 209–226.
[Dohan and Masschelein, 1987]
Dohan JM, Masschelein WJ (1987).
The Photochemical Generation of Ozone: Present State-of-the-Art.
Ozone: Science $\&$ Engineering, 9(4), 315–334.
[Fassò, 2013]
Fassò A (2013).
Statistical Assessment of Air Quality Interventions.
Stochastic Environmental Research and Risk Assessment,
27(7), 1651–1660.
DOI: 10.1007/s00477-013-0702-5.
[Fassò and Finazzi, 2011]
Fassò A, Finazzi F (2011).
Maximum Likelihood Estimation of the Dynamic
Coregionalization Model with Heterotopic Data.
Environmetrics, 22(6), 735–748.
[Fassò et al., 2016]
Fassò A, Finazzi F, Ndongo F (2016).
European Population Exposure to Airborne Pollutants Based on
a Multivariate Spatio-Temporal Model.
Journal of agricultural, biological, and environmental
statistics, 21(3), 492–511.
[Fassò et al., 2014]
Fassò A, Ignaccolo R, Madonna F, Demoz B, Franco-Villoria M (2014).
Statistical Modelling of Collocation Uncertainty in
Atmospheric Thermodynamic Profiles.
Atmospheric Measurement Techniques, 7(6), 1803–1816.
[Finazzi, 2020]
Finazzi F (2020).
Fulfilling the Information Need after an Earthquake:
Statistical Modelling of Citizen Science Seismic Reports for Predicting
Earthquake Parameters in Near Realtime.
Journal of the Royal Statistical Society: Series A,
183(3), 857–882.
[Finazzi and Fassò, 2014]
Finazzi F, Fassò A (2014).
D-STEM: A Software for the Analysis and Mapping of
Environmental Space-Time Variables.
Journal of Statistical Software, 62(6), 1–29.
[Finazzi et al., 2018]
Finazzi F, Fassò A, Madonna F, Negri I, Sun B, Rosoldi M (2018).
Statistical Harmonization and Uncertainty Assessment in the
Comparison of Satellite and Radiosonde Climate Variables.
arXiv preprint arXiv:1803.05835.
[Finazzi et al., 2015]
Finazzi F, Haggarty R, Miller C, Scott M, Fassò A (2015).
A Comparison of Clustering Approaches for the Study of the
Temporal Coherence of Multiple Time Series.
Stochastic Environmental Research and Risk Assessment,
29(2), 463–475.
[Finazzi et al., 2019]
Finazzi F, Napier Y, Scott M, Hills A, Cameletti M (2019).
A Statistical Emulator for Multivariate Model Outputs with
Missing Values.
Atmospheric Environment, 199, 415 – 422.
[Finazzi et al., 2013]
Finazzi F, Scott EM, Fassò A (2013).
A Model-Based Framework for Air Quality Indices and
Population Risk Evaluation, with an Application to the Analysis of Scottish
Air Quality Data.
Journal of the Royal Statistical Society: Series C,
62(2), 287–308.
[Finley et al., 2015]
Finley A, Banerjee S, Gelfand A (2015).
spBayes for Large Univariate and Multivariate
Point-Referenced Spatio-Temporal Data Models.
Journal of Statistical Software, 63(13), 1–28.
ISSN 1548-7660.
[Furrer et al., 2006]
Furrer R, Genton MG, Nychka D (2006).
Covariance Tapering for Interpolation of Large Spatial Datasets.
Journal of Computational and Graphical Statistics,
15(3), 502–523.
[Gasch et al., 2015]
Gasch CK, Hengl T, Gräler B, Meyer H, Magney TS, Brown DJ (2015).
Spatio-Temporal Interpolation of Soil Water, Temperature,
and Electrical Conductivity in 3D+T: The Cook Agronomy Farm Data Set.
Spatial Statistics, 14, 70–90.
[Gramacy, 2016]
Gramacy RB (2016).
laGP: Large-Scale Spatial Modeling via Local
Approximate Gaussian Processes in R.
Journal of Statistical Software, 72(1), 1–46.
[Heaton et al., 2018]
Heaton MJ, Datta A, Finley AO, Furrer R, Guinness J, Guhaniyogi R, Gerber F,
Gramacy RB, Hammerling D, Katzfuss M, Lindgren F, Nychka DW, Sun F,
Zammit-Mangion A (2018).
A Case Study Competition among Methods for Analyzing Large
Spatial Data.
Journal of Agricultural, Biological and Environmental
Statistics, pp. 1–28.
[Ignaccolo et al., 2014]
Ignaccolo R, Mateu J, Giraldo R (2014).
Kriging with External Drift for Functional Data for Air
Quality Monitoring.
Stochastic Environmental Research and Risk Assessment,
28(5), 1171–1186.
[Jurek and Katzfuss, 2018]
Jurek M, Katzfuss M (2018).
Multi-Resolution Filters for Massive Spatio-Temporal Data.
arXiv preprint arXiv:1810.04200.
[Kahle and Wickham, 2013]
Kahle D, Wickham H (2013).
ggmap: Spatial visualization with ggplot2.
The R Journal, 5(1), 144–161.
[Katzfuss, 2017]
Katzfuss M (2017).
A Multi-Resolution Approximation for Massive Spatial Datasets.
Journal of the American Statistical Association,
112(517), 201–214.
[Lindgren and Rue, 2015]
Lindgren F, Rue H (2015).
Bayesian Spatial Modelling with R-INLA.
Journal of Statistical Software, 63(19), 1–25.
[Negri et al., 2018]
Negri I, Fassò A, Mona L, Papagiannopoulos N, Madonna F (2018).
Modeling Spatiotemporal Mismatch for Aerosol Profiles.
In Quantitative Methods in Environmental and Climate Research,
pp. 63–83. Springer-Verlag.
[Nychka et al., 2015]
Nychka D, Bandyopadhyay S, Hammerling D, Lindgren F, Sain S (2015).
A Multiresolution Gaussian Process Model for the Analysis of
Large Spatial Datasets.
Journal of Computational and Graphical Statistics,
24(2), 579–599.
[Nychka et al., 2016]
Nychka D, Hammerling D, Sain S, Lenssen N (2016).
LatticeKrig: Multiresolution Kriging Based on Markov
Random Fields.
University Corporation for Atmospheric Research, Boulder, CO, USA.
R package version 7.0.
[Pebesma, 2012]
Pebesma E (2012).
spacetime: Spatio-temporal Data in R.
Journal of statistical software, 51(7), 1–30.
[Pebesma and Heuvelink, 2016]
Pebesma E, Heuvelink G (2016).
Spatio-temporal Interpolation Using gstat.
RFID Journal, 8(1), 204–218.
[Porcu et al., 2018]
Porcu E, Alegria A, Furrer R (2018).
Modeling Temporally Evolving and Spatially Globally
Dependent Data.
International Statistical Review, 86(2), 344–377.
[Ramsay and Silverman, 2007]
Ramsay JO, Silverman BW (2007).
Applied Functional Data Analysis: Methods and Case Studies.
[Ramsay et al., 2018]
Ramsay JO, Wickham H, Graves S, Hooker G (2018).
fda: Functional Data Analysis.
R package version 2.4.8,
[Rue et al., 2014]
Rue H, Martino S, Blangiardo FL, Simpson D, Riebler A, Krainski ET (2014).
INLA: Functions which Allow to Perform Full Bayesian
Analysis of Latent Gaussian Models using Integrated Nested Laplace
R package version 0.0-1404466487,
[Shumway and Stoffer, 2017]
Shumway RH, Stoffer DS (2017).
Time Series Analysis and Its Applications: with R
[Stein, 2002]
Stein ML (2002).
The Screening Effect in Kriging.
The Annals of Statistics, 30(1), 298–323.
[Stein, 2013]
Stein ML (2013).
Statistical Properties of Covariance Tapers.
Journal of Computational and Graphical Statistics,
22(4), 866–885.
[Taghavi-Shahri et al., 2019]
Taghavi-Shahri S, Fassò A, Mahaki B, Amin H (2019).
Concurrent Spatiotemporal Daily Land Use Regression Modeling
and Missing Data Imputation of Fine Particulate Matter Using Distributed
Space-Time Expectation Maximization.
Atmospheric Environment, 224, 1–11.
[Tzeng and Huang, 2018]
Tzeng S, Huang HC (2018).
Resolution Adaptive Fixed Rank Kriging.
Technometrics, 60(2), 198–208.
[Wan et al., 2020]
Wan Y, Xu M, Huang H, Chen S (2020).
A Spatio-Temporal Model for the Analysis and Prediction of
Fine Particulate Matter Concentration in Beijing.
Environmetrics, accepted.
[Wang et al., 2017]
Wang T, Xue L, Brimblecombe P, Lam YF, Li L, Zhang L (2017).
Ozone Pollution in China: A Review of Concentrations,
Meteorological Influences, Chemical Precursors, and Effects.
Science of the Total Environment, 575, 1582–1596.
[Zammit-Mangion, 2018]
Zammit-Mangion A (2018).
FRK: Fixed Rank Kriging.
R package version 0.2.2,
get arXiv to do 4 passes: Label(s) may have changed. Rerun
|
1 INAF - Osservatorio Astronomico di Brera, via Brera 28, 20121 Milan, Italy (email: <EMAIL_ADDRESS>)
2 DiSAT - Università degli Studi dell'Insubria, via Valleggio 11, 22100 Como, Italy
3 International Centre for Radio Astronomy Research, Curtin University, 1 Turner Avenue, Bentley, WA 6102, Australia
# Radio detection of VIK J2318$-$3113, the most distant radio-loud quasar
($z$=6.44)
L. Ighina1,2, S. Belladitta1,2, A. Caccianiga1, J. W. Broderick3, G. Drouart3, A. Moretti1, and N. Seymour3
We report the 888 MHz radio detection in the Rapid ASKAP Continuum Survey
(RACS) of VIK J2318$-$3113, a $z$=6.44 quasar. Its radio luminosity (1.2
$\times 10^{26}$ W Hz$^{-1}$ at 5 GHz) compared to the optical luminosity (1.8
$\times 10^{24}$ W Hz$^{-1}$ at 4400 Å) makes it the most distant radio-loud quasar
observed so far, with a radio loudness R$\sim$70
(R$=L_{\mathrm{5GHz}}/L_{\mathrm{4400\AA}}$). Moreover, the high
bolometric luminosity of the source ($L_{\mathrm{bol}}$=7.4 $\times 10^{46}$ erg s$^{-1}$)
suggests the presence of a supermassive black hole with a high mass
($\gtrsim$6 $\times 10^{8}$ M$_{\odot}$) at a time when the Universe was younger than a
billion years. Combining the new radio data from RACS with previous ASKAP
observations at the same frequency, we found that the flux density of the
source may have varied by a factor of $\sim$2, which could suggest the
presence of a relativistic jet oriented towards the line of sight, that is, a
blazar nature. However, currently available radio data do not allow us to
firmly characterise the orientation of the source. Further radio and X-ray
observations are needed.
###### Key Words.:
galaxies: active – galaxies: high-redshift – galaxies: jets – quasars: general
– quasars individual: VIKING~J231818.3$-$311346
## 1 Introduction
In recent years, the exploitation of numerous optical and infrared (IR) wide-
area surveys (e.g. the Panoramic Survey Telescope and Rapid Response System,
Pan-STARRS, Chambers et al. 2016; the VISTA Kilo-degree Infrared Galaxy
Survey, VIKING, Edge et al. 2013; the Dark Energy Survey, DES, Dark Energy
Survey Collaboration et al. 2016, etc.) has led to the discovery of thousands
of high-$z$ quasars (QSOs), with more than 200 sources discovered at $z$>6
(e.g. Mazzucchelli et al. 2017; Matsuoka et al. 2019; Fan et al. 2019; Wang et
al. 2019; Andika et al. 2020), the three most distant of which are at
z$\sim$7.5 (Bañados et al., 2018; Yang et al., 2020a; Wang et al., 2021).
These sources have already proved to be very useful tools for investigating
the intergalactic medium (IGM) at early cosmic times through the absorption of
their optical spectra bluewards of Ly$\alpha$ (e.g. Kashikawa et al. 2006;
Gaikwad et al. 2020). Moreover, the mere presence of such powerful and massive
objects in the primordial Universe places strong constraints on theoretical
models describing the evolution and the accretion rate of supermassive black
holes (SMBHs; e.g. Volonteri et al. 2015; Wang et al. 2020).
Decades of studies at low redshift have now established that radio-loud
(RL111We considered a QSO to be radio loud when it has a radio loudness
$R$>10, with $R$ defined as the ratio of the 5 GHz and 4400 Å rest-frame flux
densities, $R=S_{\mathrm{5GHz}}/S_{\mathrm{{4400\AA}}}$ (Kellermann et al.,
1989).) sources represent $\sim$10-15% of the total QSO population (e.g.
Retana-Montenegro & Röttgering 2017), with no significant deviations until
z$\sim$6 (e.g. Stern et al. 2000; Liu et al. 2021; Diana et al. in prep.).
However, of all the $z$>6 QSOs, only a few have a radio detection, which means
that there are far fewer confirmed high-$z$ RL QSOs. To date, only five have
been found at $z$>6 (McGreer et al., 2006; Bañados et al., 2015; Belladitta et
al., 2020; Liu et al., 2021), with the most distant being at $z$=6.21 (Willott
et al. 2010). As described by Kellermann et al. (2016), the RL classification
(R>10), as opposed to the radio quiet (RQ; R<10), should identify sources that
produce the radio emission through a relativistic jet, which can significantly
affect both the accretion process itself and the environment of the source
(see Blandford et al. 2019 for a recent review). Identifying and
characterising powerful RL sources at the highest redshifts therefore is of
key importance for studying the role of relativistic jets in the primordial
Universe.
In this Letter we report the radio detection of the $z$=6.444$\pm$0.005 QSO
VIKING J231818.35$-$311346.3 (hereafter VIK J2318$-$3113; Decarli et al.
2018). With a relatively bright radio flux density ($\sim$1.4 mJy at 888 MHz),
this source is the most distant RL QSO observed to date. VIK J2318$-$3113 was
discovered from the near-IR (NIR) VIKING survey with the dropout technique,
and its redshift was confirmed with both X-Shooter in the NIR and the Atacama
Large Millimetre/submillimetre Array (ALMA) in the submillimetre (Decarli et
al., 2018; Yang et al., 2020b). In this Letter we present its radio properties
using recent observations, and by combining them with archival data, we also
compare VIK J2318$-$3113 with the small number of other high-$z$ RL QSOs.
We use a flat $\Lambda$CDM cosmology with $H_{0}$=70 km s$^{-1}$ Mpc$^{-1}$,
$\Omega_{m}$=0.3, and $\Omega_{\Lambda}$=0.7. Spectral indices are given
assuming $S_{\nu}\propto\nu^{-\alpha}$ , and all errors are reported at
1$\sigma$ unless otherwise specified.
## 2 Radio observations
### 2.1 888 MHz ASKAP observations
VIK J2318$-$3113 has been detected in the first data release of the Rapid
ASKAP Continuum Survey (RACS; McConnell et al. 2020;
https://data.csiro.au/collections/collection/CIcsiro:46532) with a
peak flux density of 1.43 mJy beam$^{-1}$ at 888 MHz, which, considering the
associated RMS (0.19 mJy beam$^{-1}$), corresponds to a signal-to-noise ratio (S/N)
>7 (values as reported in the catalogue released on 2020 December 17).
The overall RACS survey is planned to cover the entire sky south of
declination $+51^{\circ}$ (36656 deg$^{2}$ in total) in three different radio bands
centred at 888, 1296, and 1656 MHz, all with a bandwidth of 288 MHz. These
observations are designed as a pilot project to prepare for the data
calibration and handling of future deeper surveys (e.g. the evolutionary map
of the Universe, EMU, Norris et al. 2011) with the Australian SKA Pathfinder
(ASKAP; Johnston et al. 2008). In the first data release (December 2020), the
sky south of declination $+41^{\circ}$ was covered in the lower frequency band
(888 MHz) with a spatial resolution of $\sim$15′′. By cross-matching this
first data release with the list of $z$>6 QSOs discovered to date in the same
sky area (169 sources in total), we found the radio counterparts of three of
them: VIK J2318$-$3113, and two other RL QSOs. For these last two objects a
discussion of their radio properties has already been reported in the
literature: FIRST J1427385+331241 ($z$=6.12; McGreer et al. 2006) and PSO
J030947.49+271757.31 ($z$=6.10; Belladitta et al. 2020).
Figure 1: 1′ $\times$ 1′ cutout of the $Y$-band VIKING image around VIK
J2318$-$3113, overlaid with the 888 MHz radio contours from RACS (continuous
red lines) and GAMA23 (dashed blue lines). In both cases the contours are
spaced by $\sqrt{2}$ starting from three times the off-source RMS derived in
our analysis, $\sim$0.20 mJy beam$^{-1}$ for RACS and $\sim$0.04 mJy beam$^{-1}$ for
GAMA23. In the bottom left corner the beam sizes from the RACS (12.2′′
$\times$ 11.4′′) and GAMA23 (10.2′′ $\times$ 8.5′′) observations are shown.
The radio source is located 1.6′′ from the optical/NIR counterpart of VIK
J2318$-$3113, which is consistent with the positional error reported in the
RACS catalogue ($\sim$4′′). Even considering typical uncertainties in
interferometric radio positions ($\approx\frac{\Delta\theta}{2\times S/N}\sim$0.9′′,
where $\Delta\theta$ is the size of the beam; Fomalont 1999)
together with the typical astrometric precision of the survey ($\sim$0.8′′;
McConnell et al. 2020), the observed offset is still consistent. Moreover,
from the source density of the RACS survey ($\sim$80 sources deg$^{-2}$; McConnell
et al. 2020), we can also compute the probability of finding an unrelated
radio source within a 1.6′′ radius from any given position (see e.g. eq. 4 in
Condon et al. 1998). In this case, the probability is $\sim$5$\times 10^{-5}$,
which means that the expected number of spurious associations of the 169 $z$>6
QSOs that we based the query on is <0.01. We can therefore conclude that the
association between VIK J2318$-$3113 and the radio source is statistically
significant and unlikely to be spurious.
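As a sanity check, this estimate reduces to the Poisson matching formula P = 1 − exp(−πρr²) (the closed form behind eq. 4 of Condon et al. 1998); a minimal sketch in Python (the function name is ours):

```python
import math

def spurious_probability(density_per_deg2, radius_arcsec):
    """Chance of an unrelated source falling within radius_arcsec of a
    given position, for a Poisson field of surface density
    density_per_deg2: P = 1 - exp(-pi * rho * r^2)."""
    r_deg = radius_arcsec / 3600.0
    return 1.0 - math.exp(-math.pi * density_per_deg2 * r_deg ** 2)

# RACS source density (~80 deg^-2) and the observed 1.6'' offset
p = spurious_probability(80.0, 1.6)

# Expected number of chance matches among the 169 z>6 QSOs queried
n_spurious = 169 * p

print(f"P(spurious) ~ {p:.1e}, expected false matches: {n_spurious:.4f}")
```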
At the same time, VIK J2318$-$3113 also belongs to one of the Galaxy and Mass
Assembly (GAMA; Driver et al. 2011) fields, GAMA23 (339 < R.A. [deg] < 351 and
$-$35 < Dec. [deg] < $-$30). In particular, this region has recently (2019
March) been covered by a deeper ASKAP observation (RMS$\sim$0.04 mJy beam$^{-1}$),
again at 888 MHz, within an ASKAP/EMU early science project
(https://data.csiro.au/collections/collection/CIcsiro:40262) and was
reduced as described in Seymour et al. (2020). We report in Fig. 1 the 888 MHz
radio contours from the RACS and GAMA23 observations, overlaid on the NIR
VIKING image in the $Y$-band.
Table 1: Results of the analysis of the 888 MHz ASKAP observations of VIK
J2318$-$3113.
Project: | RACS | GAMA23
---|---|---
Total flux density (mJy): | 1.44$\pm$0.34∗ | 0.59$\pm$0.07
Peak surf. brightness (mJy/beam): | 1.48$\pm$0.20 | 0.59$\pm$0.04
Major axis∗∗ (arcsec): | 13.2$\pm$2.0 | 10.5$\pm$0.8
Minor axis∗∗ (arcsec): | 10.2$\pm$1.2 | 8.2$\pm$0.5
P.A. east of north (deg): | 45$\pm$18 | 105$\pm$10
Off-source RMS (mJy/beam): | 0.20 | 0.04
∗ In the following we use the more conservative error of 0.60 mJy obtained from eq. 7 in McConnell et al. (2020). See section 2.1 for further details.
∗∗ Convolved with the beam of the instrument.
In Tab. 1 we report the results of a single Gaussian fit performed on the RACS
and GAMA23 images using the Common Astronomy Software Applications package
(CASA; McMullin et al. 2007). In the GAMA23 observation the best-fit position
is only 0.37′′ away from the NIR counterpart, thus providing further strong
evidence for the radio association. Given the very similar angular resolution
in both cases, the source is point-like and not resolved. However, the
estimated flux density varies by a factor $\sim$2.4 in the two images, from
0.59$\pm$0.07 to 1.44$\pm$0.34 mJy.
The time separation between the two observations is one year (2019 March –
2020 March), which in the source rest frame corresponds to $\sim$50 days
(without taking possible relativistic effects into account). In order to
verify whether the source variation between the GAMA23 and RACS observations
is real or is only a systematic effect related to the calibration, we compared
the integrated flux densities of the sources detected in the two images. In
particular, as for VIK J2318$-$3113, we performed a single Gaussian fit with
the CASA software on $\sim$70 sources with a flux density between 1 and 10 mJy
and within one square degree from the QSO position (although a primary beam
correction was performed during the data reduction of the RACS survey and the
GAMA23 images, we applied a search radius cutoff in order to avoid any possible
residual fluctuation of the flux calibration). The distribution of the ratios
of the flux densities measured in the two images is a Gaussian centred at one
and with $\sigma$=0.16, consistent with the statistical errors on the flux
densities and thus indicating that the observed difference for VIK
J2318$-$3113 cannot be attributed to a systematic calibration offset in the
two datasets. When we sum in quadrature the uncertainties related to the two
flux density estimates, the significance of the variation observed in VIK
J2318$-$3113 is $\sim$2.4$\sigma$. A large variation in a short period of time
as observed in this case is usually associated with the presence of a
relativistic jet oriented towards the line of sight, that is, a blazar nature
(e.g. Hovatta et al. 2008).
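The significance quoted above follows from dividing the flux-density difference by the quadrature sum of the two errors, and the rest-frame epoch separation from the (1+z) time dilation; a minimal sketch (variable names are ours):

```python
import math

# 888 MHz flux densities (mJy) and 1-sigma errors from Table 1
s_racs, e_racs = 1.44, 0.34
s_gama, e_gama = 0.59, 0.07

# Significance of the variation: difference over quadrature-summed errors
sigma = abs(s_racs - s_gama) / math.hypot(e_racs, e_gama)

# One observer-frame year shrinks by (1+z) in the source rest frame
z = 6.44
dt_rest_days = 365.25 / (1.0 + z)

print(f"variation: {sigma:.1f} sigma; rest-frame gap: {dt_rest_days:.0f} d")
```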
The uncertainty on the flux density ratios reported above ($\sigma$=0.16) was
derived from the relative comparison of the RACS and GAMA23 images, that is,
from datasets obtained from the same telescope. McConnell et al. (2020) have
studied the uncertainties on the absolute flux density scale of RACS images by
comparing sources with multiple independent RACS observations (i.e. on the
overlapping edges of different tiles), also with other catalogues in the
literature, finding $\Delta S_{\nu}$ = 0.5 mJy + 0.07 $\times S_{\nu}$ (eq. 7 in their
paper). In the particular case of VIK J2318$-$3113, the corresponding value is
$\sim$0.60 mJy. We take this uncertainty into account in section 4 when we
compute the quantities based on the RACS flux density (e.g. radio luminosity
and radio loudness).
### 2.2 Archival radio observations
Even though VIK J2318$-$3113 is not reported in any other public radio
catalogue, we checked archival radio images at the NIR position of the source
to search for the presence of a faint but significant (S/N>2.5) radio signal.
We did not detect the source in the TIFR Giant Metrewave Radio Telescope Sky
Survey (TGSS; Intema et al. 2017) at 150 MHz (image RMS$\sim$2.9 mJy beam$^{-1}$),
the Sydney University Molonglo Sky Survey (SUMSS; Mauch et al. 2003) at 843
MHz (image RMS$\sim$2.5 mJy beam$^{-1}$), or in the NRAO Karl G. Jansky Very Large
Array Sky Survey (NVSS; Condon et al. 1998) at 1.4 GHz (image RMS$\sim$0.45
mJy beam$^{-1}$). In contrast, we did find a radio excess less than 0.6′′ away from
the NIR position of the source in the first (2018 February) and second (2020
November) epochs of the Very Large Array Sky Survey (VLASS; Lacy et al. 2020)
at 3 GHz. The peak flux density of the emission in the two epochs is
0.29$\pm$0.11 mJy beam$^{-1}$ in the first and 0.40$\pm$0.13 mJy beam$^{-1}$ in the
second, which corresponds to a S/N of 2.6 and 3.0, respectively. Even though
the two estimates are marginally consistent, we consider the average of the
two and the overall range of uncertainty because of the possible intrinsic
variability of the source: 0.35$\pm$0.18 mJy. In Tab. 2 we report the radio
data and the 2.5$\sigma$ upper limits obtained from archival observations as
described above.
When we take currently available data with their uncertainties and the upper
limits derived from non-detections into account, the spectral index of a
single power law covering the observed frequency range is poorly constrained
($\alpha_{r}$=0–1.2). However, in addition to information on the flux density
and the dimensions of the sources, the RACS catalogue also reports the
spectral index computed within the 288 MHz band centred at 888 MHz. The
spectral index reported for VIK J2318$-$3113 is $\alpha_{r}$=0.98, which is
similar to what is typically observed in high-$z$ QSOs (e.g. Coppejans et al.
2017; Bañados et al. 2018). In the following, we consider this to be the best-
fit value despite the relatively low S/N across the ASKAP band, even though a
different assumption does not affect the results. A more detailed discussion
of the broad-band radio properties of VIK J2318$-$3113 will be presented in a
forthcoming work.
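For illustration, a nominal two-point spectral index between the RACS and VLASS central values can be computed as follows (a sketch only: it uses the central values and ignores the large uncertainties and possible variability that widen the quoted range to $\alpha_{r}$=0–1.2; the function name is ours):

```python
import math

def two_point_alpha(s1_mjy, nu1_ghz, s2_mjy, nu2_ghz):
    """Spectral index alpha for S_nu proportional to nu^(-alpha),
    from flux densities at two frequencies."""
    return math.log(s1_mjy / s2_mjy) / math.log(nu2_ghz / nu1_ghz)

# Central values: 1.44 mJy at 888 MHz (RACS), 0.35 mJy at 3 GHz (VLASS)
alpha = two_point_alpha(1.44, 0.888, 0.35, 3.0)
print(f"nominal two-point alpha: {alpha:.2f}")  # near the steep end (~1.2)
```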
Table 2: Estimates and 2.5$\sigma$ upper limits on the radio flux densities of VIK J2318$-$3113 from archival radio surveys.
Survey: | TGSS | SUMSS | NVSS | VLASS
---|---|---|---|---
Obs. Freq. (GHz): | 0.15 | 0.843 | 1.4 | 3
Flux density (mJy/beam): | <7.3 | <6.3 | <1.1 | 0.35$\pm$0.18
## 3 Optical/UV properties
Given the high-redshift nature of VIK J2318$-$3113, the NIR photometric data
from the VIKING survey (reported in Tab. 3) cover the UV/optical spectrum in
its rest frame. Therefore we used these photometric points to estimate the
bolometric luminosity ($L_{\mathrm{bol}}$) of the source. In the following, we assume an
optical spectral index given by the slope observed between the $K$ and $J$
bands, $\alpha_{{\mathrm{{o}}}}=0.54$, which is consistent with what is
normally found in other QSOs (e.g. Vanden Berk et al. 2001). We started by
computing the rest-frame monochromatic luminosities at 1350 and 3000 Å using
the observed magnitudes in the filter with the closest corresponding rest-
frame wavelength, that is, $Y$ ($\sim$1370 Å) and $K$ ($\sim$2860 Å),
respectively. The bolometric luminosity can then be inferred using the
correction factors derived in Shen et al. (2008) for 1350 Å
($L_{\mathrm{bol}}$= 8.0 $\pm$ 2.7 $\times 10^{46}$ erg s$^{-1}$) and in Runnoe et
al. (2012) for 3000 Å ($L_{\mathrm{bol}}$= 7.4 $\pm$ 0.3 $\times 10^{46}$ erg
s$^{-1}$). Averaging the two results with the corresponding variances as weights,
we obtain $L_{\mathrm{bol}}$= 7.4 $\pm$ 0.3 $\times 10^{46}$ erg s$^{-1}$. Assuming
an Eddington-limited accretion, that is, $L_{\mathrm{bol}}\leq L_{\mathrm{EDD}}$
(where $L_{\mathrm{EDD}}$ = 1.26 $\times 10^{38}$ ($M_{\mathrm{BH}}/$M$_{\odot}$) erg s$^{-1}$),
this value of the bolometric luminosity
implies that the SMBH mass must be higher than 6 $\times 10^{8}$ M$_{\odot}$.
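The weighted average and the Eddington mass floor above reduce to simple arithmetic; a sketch with inverse-variance weights (variable names are ours):

```python
# Bolometric-luminosity estimates in units of 1e46 erg/s
l1, e1 = 8.0, 2.7   # from the 1350 A correction (Shen et al. 2008)
l2, e2 = 7.4, 0.3   # from the 3000 A correction (Runnoe et al. 2012)

# Inverse-variance weighted mean and its error
w1, w2 = 1.0 / e1 ** 2, 1.0 / e2 ** 2
l_bol = (w1 * l1 + w2 * l2) / (w1 + w2)
e_bol = (w1 + w2) ** -0.5

# Eddington limit L_bol <= 1.26e38 (M_BH / M_sun) erg/s gives a mass floor
m_bh_min = l_bol * 1e46 / 1.26e38

print(f"L_bol = {l_bol:.1f} +/- {e_bol:.1f} x 1e46 erg/s")
print(f"M_BH >~ {m_bh_min:.1e} M_sun")
```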
Table 3: NIR magnitudes of VIK J2318$-$3113 as measured in the VIKING survey (Vega system).
Filter: | $Z$ | $Y$ | $J$ | $H$ | $K$
---|---|---|---|---|---
$\lambda_{eff}$ ($\mu$m): | 0.878 | 1.021 | 1.254 | 1.646 | 2.149
magnitude: | 21.42 | 20.17 | 19.89 | 19.61 | 18.67
mag. error: | 0.11 | 0.08 | 0.11 | 0.18 | 0.14
## 4 Radio loudness and comparison with high-$z$ RL QSOs
In order to estimate the rest-frame monochromatic luminosity at 5 GHz, we
considered the 888 MHz flux density obtained from the RACS observation
($S_{\mathrm{888\,MHz}}$= 1.44 mJy) with an uncertainty that takes the absolute calibration of
the map into account (0.60 mJy, see previous section) and the spectral index
reported in the RACS catalogue ($\alpha_{r}$=0.98). We also considered the
GAMA23 flux density ($S_{\mathrm{888\,MHz}}$= 0.59$\pm$0.07 mJy) and a spectral index in the
range $\alpha_{r}$=0–1.2 to estimate the associated uncertainty. The result,
however, has little dependence on the $\alpha_{r}$ assumption because the
observed frequency of 888 MHz corresponds to a rest-frame frequency of 6.6
GHz, which is very close to 5 GHz. The resulting radio luminosity is $L_{\mathrm{5GHz}}$=
1.2${}_{-0.9}^{+0.6}\,\times 10^{26}$ W Hz$^{-1}$. Combining this estimate with the
optical luminosity at 4400 Å ($L_{\mathrm{4400\AA}}$= 1.8$\pm$0.1 $\times 10^{31}$
erg s$^{-1}$ Hz$^{-1}$), computed from the observed $K$ magnitude, we obtain a
radio loudness R= 66.3${}_{-46.7}^{+36.3}$. Adopting the typical value of R=10
as the threshold between RL and RQ sources, this makes VIK J2318$-$3113 the
most distant RL QSO observed so far, at $z$=6.44. We note that this
classification does not depend on the somewhat arbitrary criterion for
separating the RL and RQ populations. Even if we consider a radio loudness as
defined by Jiang et al. (2007) (in this case, the radio loudness is defined
as the rest-frame ratio R=$S_{\mathrm{5GHz}}/S_{\mathrm{2500\AA}}$) or a single threshold in the
radio luminosity ($L_{\mathrm{5GHz}}$>$10^{32.5}$ erg s$^{-1}$ Hz$^{-1}$; Jiang et al. 2007), the RL
classification still holds.
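The conversion from the observed 888 MHz flux density to the 5 GHz luminosity and radio loudness can be sketched as follows, assuming the paper's flat $\Lambda$CDM cosmology and $\alpha_{r}$=0.98 (the helper name and the simple midpoint integration of the comoving distance are ours):

```python
import math

# Flat LCDM as adopted in the paper: H0 = 70 km/s/Mpc, Om = 0.3, OL = 0.7
H0, OM, OL = 70.0, 0.3, 0.7
C_KM_S = 299792.458
MPC_M = 3.0857e22  # metres per Mpc

def lum_distance_m(z, n=20000):
    """Luminosity distance in metres via midpoint-rule integration of
    the comoving distance for a flat LCDM cosmology."""
    dz = z / n
    integral = sum(
        dz / math.sqrt(OM * (1 + (i + 0.5) * dz) ** 3 + OL)
        for i in range(n)
    )
    return (1 + z) * (C_KM_S / H0) * integral * MPC_M

z, alpha = 6.44, 0.98
s_obs = 1.44e-29  # 1.44 mJy at 888 MHz, in W m^-2 Hz^-1
dl = lum_distance_m(z)

# Rest-frame luminosity at the emitted frequency (1+z)*888 MHz ~ 6.6 GHz,
# then scaled to 5 GHz assuming S_nu ~ nu^-alpha
l_rest = 4 * math.pi * dl ** 2 * s_obs / (1 + z)
nu_rest_ghz = 0.888 * (1 + z)
l_5ghz = l_rest * (5.0 / nu_rest_ghz) ** (-alpha)

# Radio loudness: L_4400A = 1.8e31 erg/s/Hz = 1.8e24 W/Hz
R = l_5ghz / 1.8e24
print(f"L_5GHz ~ {l_5ghz:.1e} W/Hz, R ~ {R:.0f}")
```

This reproduces the quoted values, L$_{\mathrm{5GHz}}\sim$1.2 $\times 10^{26}$ W Hz$^{-1}$ and R$\sim$66, from the central inputs alone.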
In RL QSOs, the radio emission is thought to be produced by relativistic jets
and not by star-formation (SF) processes (e.g. Kellermann et al. 2016). VIK
J2318$-$3113 was found to be very luminous in the far-IR (FIR) ($\log(L_{\mathrm{FIR}}/$L$_{\odot})$
in the range 11.89–12.46, between 42.5 and 122.5 $\mu$m; Decarli et al. 2018;
Venemans et al. 2018, 2020), and this may imply that at least part of the
observed radio emission is due to SF. However, considering the relation
between radio and FIR luminosity observed in SF galaxies (Condon et al.,
2002), we expect that only a few percent (<5%) of the observed radio emission
can be produced by SF. This confirms that the high radio power observed in VIK
J2318$-$3113 is likely produced by relativistic jets, as expected in RL
sources. Interestingly, Venemans et al. (2020) also found that the FIR
continuum and [C II] emissions extend up to $\sim$5 kpc (0.2′′) with an
irregular morphology. Further radio observations at similar resolution would
be fundamental for understanding the role of the different components at work
in this complex QSO.
Figure 2: Left: Rest-frame radio luminosity density at 5 GHz vs. the rest-
frame optical luminosity density at 4400 Å for $z$>5.5 QSOs with a radio
detection in the literature. Diagonal lines indicate constant radio-loudness
values. Adapted from Bañados et al. (2015). Right: Radio loudness as a
function of redshift for the $z$>5.5 confirmed RL QSOs compared to an
optically selected sample of RL QSOs at lower redshift (orange points; Zhu et
al. 2020) and all the RL QSOs at $z$>4 (yellow diamonds) known to date. The
blue squares (circles) report $z$>5.5 (>6) sources in both graphs. The only
confirmed $z$>5.5 blazar (Belladitta et al., 2020) is reported with a green
triangle. At lower redshifts we did not distinguish this class because not all
sources have a reliable classification. The red star represents VIK
J2318$-$3113.
Following Bañados et al. (2015), we report in Fig. 2 (left) the rest-frame
radio luminosity (5 GHz) as a function of the rest-frame optical luminosity
(4400 Å) for the updated list of $z$>5.5 QSOs with a radio observation and
thus a firm RL/RQ classification (data from Bañados et al. 2015, 2018;
Belladitta et al. 2020; and Liu et al. 2021). Clearly, the radio loudness
of VIK J2318$-$3113 is similar to that of the majority of $z$>5.5 RL QSOs,
with 10<R<100.
Moreover, in Fig. 2 (right) we compare the confirmed RL QSOs at $z$>5.5 to the
optically selected sample at lower redshift ($\sim$800 sources) discussed in
Zhu et al. (2020) and to the $z$>4 RL QSOs discovered so far (these are all
the RL QSOs published to date; to estimate their radio loudness, we considered
the radio spectral index, if present; otherwise, we assumed $\alpha_{r}$=0.75
(Bañados et al., 2015) and considered a $\pm$0.25 variation to estimate the
uncertainty; the full list of sources with the corresponding radio data and
references will be presented in Belladitta et al., in prep.). Interestingly,
only a small fraction of very radio-powerful high-$z$ sources (logR>2.5) has
been found at $z$>5.5 compared to low redshifts. This may be a consequence of
the fact that at these redshifts, QSOs have been selected mainly in the
optical/UV, with only three radio-selected sources (which include the two
radio-brightest sources at $z$>6). Nevertheless, we expect that upcoming and
ongoing wide-area surveys such as RACS and the development of dedicated
selection techniques in the radio band (e.g. Drouart et al. 2020) will find
many more radio-powerful sources at $z$>6 (e.g. Amarantidis et al. 2019).
## 5 Conclusions
We have presented the radio detection (at 888 MHz) of VIK J2318$-$3113, a
$z$=6.44 QSO. Combining the new radio information from RACS with the archival
data, we estimate a radio-loudness value of R$\sim$70, which means that this
source is the most distant RL QSO observed to date. The radio association was
made by cross-matching the first data release of the RACS survey and a list of
the 169 previously discovered $z$>6 QSOs in the same area of the sky. As a
result, we found radio counterparts for a total of three RL sources, VIK
J2318$-$3113 included, which corresponds to a radio detection rate of $\sim$2%
in the aforementioned list of $z$>6 QSOs. Because the RACS flux density limit
is not deep enough to detect all the $z$>6 RL QSOs discovered so far, which
have typical NIR magnitudes $\sim$22, this detection rate should be considered
as a lower limit to the actual RL fraction at $z$>6.
We cannot fully characterise the radio spectral properties of VIK
J2318$-$3113, and thus establish whether it is a flat, steep, or peaked
source, with the currently available radio data. This is an important
diagnostic for understanding the orientation of the relativistic jet with
respect to the line of sight, that is, whether VIK J2318$-$3113 is a blazar.
The possible presence of variability at 888 MHz, as found in the comparison of
the RACS and GAMA23 observations, may suggest that the emission of this source
is dominated by the relativistic beaming, which could mean that the jet is
oriented at small angles from the line of sight. More data are required to
confirm this result, however. Assuming an Eddington-limited accretion, the
relatively high bolometric luminosity suggests the presence of a central SMBH
with a mass $\gtrsim$6 $\times 10^{8}$ M$_{\odot}$.
This detection anticipates the discovery of many more RL high-$z$ sources in
the next years when the new generation of all-sky radio surveys will be
performed by the Square Kilometre Array and its precursors.
###### Acknowledgements.
We thank the anonymous referee for the useful comments and suggestions. We
acknowledge financial contribution from the agreement ASI-INAF n. I/037/12/0
and n.2017-14-H.0 and from INAF under PRIN SKA/CTA FORECaST. In this work we
have used data from the ASKAP observatory. The Australian SKA Pathfinder is
part of the Australia Telescope National Facility which is managed by CSIRO.
Operation of ASKAP is funded by the Australian Government with support from
the National Collaborative Research Infrastructure Strategy. ASKAP uses the
resources of the Pawsey Supercomputing Centre. Establishment of ASKAP, the
Murchison Radio-astronomy Observatory and the Pawsey Supercomputing Centre are
initiatives of the Australian Government, with support from the Government of
Western Australia and the Science and Industry Endowment Fund. We acknowledge
the Wajarri Yamatji people as the traditional owners of the Observatory site.
This paper includes archived data obtained through the CSIRO ASKAP Science
Data Archive, CASDA (http://data.csiro.au). This research made use of Astropy
(http://www.astropy.org) a community-developed core Python package for
Astronomy (Astropy Collaboration et al., 2013, 2018). This research has made
use of the SIMBAD database, operated at CDS, Strasbourg, France (Wenger et
al., 2000).
## References
* Amarantidis et al. (2019) Amarantidis, S., Afonso, J., Messias, H., et al. 2019, MNRAS, 485, 2694
* Andika et al. (2020) Andika, I. T., Jahnke, K., Onoue, M., et al. 2020, ApJ, 903, 34
* Astropy Collaboration et al. (2018) Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123
* Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33
* Bañados et al. (2018) Bañados, E., Venemans, B. P., Mazzucchelli, C., et al. 2018, Nature, 553, 473
* Bañados et al. (2015) Bañados, E., Venemans, B. P., Morganson, E., et al. 2015, ApJ, 804, 118
* Belladitta et al. (2020) Belladitta, S., Moretti, A., Caccianiga, A., et al. 2020, A&A, 635, L7
* Blandford et al. (2019) Blandford, R., Meier, D., & Readhead, A. 2019, ARA&A, 57, 467
* Chambers et al. (2016) Chambers, K. C., Magnier, E. A., Metcalfe, N., et al. 2016, eprint arXiv:1612.05560 [arXiv:1612.05560]
* Condon et al. (2002) Condon, J. J., Cotton, W. D., & Broderick, J. J. 2002, AJ, 124, 675
* Condon et al. (1998) Condon, J. J., Cotton, W. D., Greisen, E. W., et al. 1998, AJ, 115, 1693
* Coppejans et al. (2017) Coppejans, R., van Velzen, S., Intema, H. T., et al. 2017, MNRAS, 467, 2039
* Dark Energy Survey Collaboration et al. (2016) Dark Energy Survey Collaboration, Abbott, T., Abdalla, F. B., et al. 2016, MNRAS, 460, 1270
* Decarli et al. (2018) Decarli, R., Walter, F., Venemans, B. P., et al. 2018, ApJ, 854, 97
* Driver et al. (2011) Driver, S. P., Hill, D. T., Kelvin, L. S., et al. 2011, MNRAS, 413, 971
* Drouart et al. (2020) Drouart, G., Seymour, N., Galvin, T. J., et al. 2020, PASA, 37, e026
* Edge et al. (2013) Edge, A., Sutherland, W., Kuijken, K., et al. 2013, The Messenger, 154, 32
# Photon-photon interactions in Rydberg-atom arrays
Lida Zhang (张理达)1 Valentin Walther1,2 Klaus Mølmer1 Thomas Pohl1 1Center
for Complex Quantum Systems, Department of Physics and Astronomy, Aarhus
University, DK-8000 Aarhus C, Denmark 2ITAMP, Harvard-Smithsonian Center for
Astrophysics, Cambridge, Massachusetts 02138, USA
###### Abstract
We investigate the interaction of weak light fields with two-dimensional
lattices of atoms with high lying atomic Rydberg states. This system features
different interactions that act on disparate length scales, from zero-range
defect scattering of atomic excitations and finite-range dipolar exchange
processes to long-range Rydberg-state interactions, which span the entire
array and can block multiple Rydberg excitations. Analyzing their interplay,
we identify conditions that yield a nonlinear quantum mirror which coherently
splits incident fields into correlated photon-pairs in a single transverse
mode, while transmitting single photons unaffected. In particular, we find
strong anti-bunching of the transmitted light with equal-time pair
correlations that decrease exponentially with an increasing range of the
Rydberg blockade. Such strong photon-photon interactions in the absence of
photon losses open up promising avenues for the generation and manipulation of
quantum light, and the exploration of many-body phenomena with interacting
photons.
Photons typically cross each other unimpeded. Yet, the scientific and
technological prospects of effective photon interactions Chang _et al._
(2014) have motivated substantial research efforts into nonlinear optical
processes at the ultimate quantum level. Here, optical resonators Birnbaum
_et al._ (2005); Volz _et al._ (2014); Reiserer and Rempe (2015); Welte _et
al._ (2018); O’Shea _et al._ (2013) and nano-scale photonic structures Goban
_et al._ (2012); Thompson _et al._ (2013); Tiecke _et al._ (2014); Petersen
_et al._ (2014); Lodahl _et al._ (2015); Chang _et al._ (2018); Noaman
(2018); Yu _et al._ (2019); Prasad _et al._ (2020) have made it possible to
couple photons to single saturable emitters, and strong interactions between
highly excited atoms have been used to realize large optical nonlinearities in
atomic ensembles Pritchard _et al._ (2010); Peyronel _et al._ (2012);
Thompson _et al._ (2017); Paris-Mandoki _et al._ (2017); Murray and Pohl
(2016); Firstenberg _et al._ (2016). The use of many-particle systems to
reach a strong collective light-matter coupling presents an attractive
approach, and the exploitation of Rydberg-state interactions in atomic gases
has indeed enabled recent breakthroughs that, for example, demonstrated
single-photon switching Baur _et al._ (2014); Gorniaczyk _et al._ (2014);
Tiarks _et al._ (2014) and photonic quantum gates Tiarks _et al._ (2019).
Yet, photon losses that are intrinsic to such ensemble approaches Gorshkov
_et al._ (2013); Murray _et al._ (2018) limit the performance of these
applications Baur _et al._ (2014); Murray _et al._ (2016) and severely
challenge the exploration of correlated quantum states Otterbach _et al._
(2013) beyond the few-photon regime Zeuthen _et al._ (2017); Bienias _et
al._ (2020). At the same time, studies of ordered arrangements, instead of
random atomic ensembles, have revealed a number of exciting linear optical
properties Facchinetti _et al._ (2016); Perczel _et al._ (2017); Bettles
_et al._ (2017); Guimond _et al._ (2019); Ballantine and Ruostekoski (2020)
that arise from the many-body nature of light-matter interactions in these
systems. In particular, the cooperative response of two-dimensional arrays
facilitates strong photon coupling at greatly reduced losses Bettles _et al._
(2016); Shahmoon _et al._ (2017), as recently demonstrated experimentally
with ultracold atoms in optical lattices Rui _et al._ (2020).
Figure 1: (Color online) (a) A regular array of atoms interacts with weak
coherent light and can induce nonclassical correlations in the transmitted
probe field. (d) Its amplitude, $\mathcal{E}$, couples the ground states,
$|g\rangle$, of the atoms to an excited state, $|e\rangle$, which is coupled
to a high-lying Rydberg state $|s\rangle$ by an additional control field with
a Rabi frequency $\Omega$. Atomic interactions, induced by the driven dipoles
of the lower transition, lead to a collective energy shift, $\Delta_{c}$, and
collective photon emission of the excited array with a rate $\Gamma_{c}$. For
$\Omega=0$, this can result in near-perfect reflection [red line in (e)] when
the probe-field detuning $\Delta$ matches $\Delta_{c}$, while EIT of the
three-level system yields perfect transmission on two-photon resonance for a
finite control-field coupling ($\Omega\neq 0$) [blue line in (e)]. (c) The van
der Waals interaction between Rydberg atoms can be strong enough to inhibit
the excitation of more than a single Rydberg state within a blockade area
(blue circle) that may cover the entire array. In combination with EIT and the
collective photon reflection of the array, this provides a nonlinear mechanism
for strong coherent photon interactions that can generate highly non-classical
states of light. This is shown by the strong pair correlations of transmitted
($\rightarrow$) and reflected ($\leftarrow$) photons in panel (b).
In this work, we investigate the effects of strong Rydberg-state interactions
in regular atomic arrays [Fig.1(a)], and analyse their _nonlinear cooperative_
response. Rydberg-state interactions can be used to couple a single atom to
adjacent lattices Grankin _et al._ (2018); Bekenstein _et al._ (2020), and
here we show that atomic interactions within Rydberg arrays can generate
strong photon-photon interactions at greatly suppressed losses. This quantum
optical nonlinearity emerges from the interplay of various atomic interactions
and a narrow transmission feature [Fig.1(e)] that arises from three-level
photon coupling [Fig.1(d)] under conditions of electromagnetically induced
transparency (EIT), and can produce highly correlated states of light [see
Fig.1(b)]. Compared to two-level systems, where few-photon nonlinearities can
also arise in small arrays with very small distances, $\lesssim 100$nm Cidrim
_et al._ (2020); Williamson _et al._ (2020), or be induced via Rydberg-
dressing of low-lying states Moreno-Cardoner _et al._ (2021); Henkel _et
al._ (2010), Rydberg-EIT in the present lattice-setting provides strong photon
coupling and facilitates large nonlinearities under conditions of present
experiments Rui _et al._ (2020); Zeiher _et al._ (2015).
We consider a two dimensional regular array of closely spaced atoms at
positions ${\bf r}_{j}$, as illustrated in Fig.1(a). A weak probe field with
an amplitude $\mathcal{E}$ drives the transition between the ground state
$|g\rangle$ and an intermediate state $|e\rangle$ at a frequency detuning
$\Delta$, while a high-lying Rydberg state $|s\rangle$ is excited by an
additional control field with a Rabi frequency $\Omega$ [Fig.1(d)]. The
combined action of both light fields leads to two distinct types of atomic
interactions that act on vastly different length scales.
First, the Rydberg atoms feature van der Waals interactions that can be
sufficiently strong to block the excitation of multiple Rydberg
$|s\rangle$-states within distances of several micrometers Jaksch _et al._
(2000); Lukin _et al._ (2001). This Rydberg blockade has been explored for a
range of applications Saffman _et al._ (2010); Adams _et al._ (2019). In
particular, it has already been demonstrated in dense atomic lattices Zeiher
_et al._ (2015), with an excitation blockade over distances of more than $\sim
10$ sites. We focus here on configurations in which the entire atomic array is
covered by the blockade radius, and quantum states with more than a single
Rydberg excitation are blocked by the strong atomic interaction [see
Fig.1(c)].
Second, a small lattice constant $a\sim\lambda$, on the order of the
$|g\rangle-|e\rangle$ transition wavelength $\lambda$, entails strong dipole-
dipole interactions that arise from near-resonant photon exchange on the probe
transition James (1993); Dung _et al._ (2002); Asenjo-Garcia _et al._
(2017), which leads to coherent exchange of atomic $|e\rangle$-excitations
across the atomic array. This results in a collective optical response that
can greatly suppress photon scattering and generate near-perfect coherent
coupling to the single transverse mode of the incident field. One can find
superradiant as well as subradiant states of a single de-localized
$|e\rangle$-excitation, whose collective emission rate $\Gamma_{c}$ is
respectively enhanced or suppressed relative to the single-atom decay rate
$\Gamma$ Zoubi and Ritsch (2011); Sutherland and Robicheaux (2016);
Facchinetti _et al._ (2016); Bettles _et al._ (2015); Guimond _et al._
(2019); Zhang and Mølmer (2019); Piñeiro Orioli and Rey (2019). For large
extended arrays, the resulting collective level shift, $\Delta_{c}$, of the
$|g\rangle-|e\rangle$ transition Glicenstein and Ferioli (2020) marks the
spectral position of reflection resonances at which an incoming photon is
reflected perfectly Bettles _et al._ (2016); Shahmoon _et al._ (2017),
without scattering into other transverse modes.
Moreover, the control-field coupling to the Rydberg state makes it possible
to control the optical response on the $|g\rangle-|e\rangle$ transition. In particular,
on two-photon resonance, the three-level system features a dark eigenstate
that does not contain the intermediate state $|e\rangle$ Fleischhauer and
Lukin (2000); Arimondo (1996), and therefore enables lossless transmission of
the incident light due to EIT Fleischhauer _et al._ (2005). As illustrated in
Fig. 1(e), EIT is restricted to a narrow transparency window in the reflection
spectrum of the array. Its width, $\Omega^{2}/\Gamma_{c}$, allows the
reflectivity to be controlled by tuning the intensity of the classical control
field Manzoni _et al._ (2018); Bekenstein _et al._ (2020).
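The linear response sketched in Fig. 1(e) can be illustrated with a toy single-mode model of the array's reflection amplitude. This is only a sketch under simplifying assumptions (control field on resonance, a small phenomenological Rydberg decoherence rate `gamma_s` to regularize the dark-state pole), not the full many-body calculation:

```python
import numpy as np

def reflection(delta, gamma_c, delta_c, omega, gamma_s=1e-6):
    """Toy single-mode reflection amplitude of the array.

    For omega = 0 the array acts as a Lorentzian mirror of width gamma_c
    centered at the collective shift delta_c; a finite control field omega
    opens an EIT transparency window of width ~ omega**2 / gamma_c around
    two-photon resonance (delta = 0).
    """
    denom = delta - delta_c + 1j * gamma_c / 2 - omega**2 / (delta + 1j * gamma_s)
    return -1j * (gamma_c / 2) / denom

# Two-level mirror: near-perfect reflection on the collective resonance.
r_mirror = reflection(delta=0.3, gamma_c=1.0, delta_c=0.3, omega=0.0)
# EIT: near-perfect transmission on two-photon resonance.
r_eit = reflection(delta=0.0, gamma_c=1.0, delta_c=0.3, omega=0.2)
```

With these parameters $|r|\simeq 1$ on the shifted resonance and $|r|\ll 1$ inside the transparency window, reproducing the qualitative shape of the red and blue curves in Fig. 1(e).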
Figure 2: (Color online) (a) Coefficients for linear reflection ($R$),
transmission ($T$) and loss ($L$) of the incident probe light for an array of
two-level atoms ($\Omega=0$) with a circular boundary as shown in the inset.
The depicted dependence on the waist, $w_{0}$, of the probe beam shows a
maximum with virtually perfect reflection, $R\simeq 0.975$, at $w_{0}\simeq
2\lambda$ for an optimized lattice constant $a=0.75\lambda$ and probe detuning
$\Delta=0.05\Gamma$. Panel (b) shows the average change of the linear response
coefficients for identical parameters when adding a Rydberg defect in the form
of an empty lattice site at ${\bf r}_{j}$ with a probability
$p_{j}\propto|\mathcal{E}({\bf r}_{j})|^{2}$.
A quantum mechanical switching mechanism can emerge from the strong
interaction between the Rydberg states. Here, the Rydberg blockade of
multiple atomic dark states exposes the reflective two-level response to
multi-photon states, while single probe photons can pass the array unimpeded.
Such a nonlinearity may yield effective photon-photon interactions that can
operate at the level of single photons and greatly suppressed scattering
losses. We have studied this behavior by solving the Master equation,
$\partial_{t}\hat{\rho}(t)=-i[\hat{H},\hat{\rho}]+\mathcal{L}(\hat{\rho})$,
for the density matrix, $\hat{\rho}$, of the atomic array. The Hamiltonian,
$\hat{H}=\hat{H}_{\rm LA}+\hat{H}_{\rm dd}$ contains the light-atom coupling
in the rotating wave approximation
$\hat{H}_{\rm LA}=-\sum^{N}_{j=1}\left[g\mathcal{E}({\bf
r}_{j})\hat{\sigma}^{(j)}_{eg}+\Omega\hat{\sigma}^{(j)}_{es}+{\rm
h.c.}\right]+\sum^{N}_{j=1}\Delta\hat{\sigma}^{(j)}_{ee},$ (1)
where $\hat{\sigma}_{\alpha\beta}^{(j)}=|\alpha_{j}\rangle\langle\beta_{j}|$
denote the projection and transition operators for the $j$th atom, and $g$
denotes the atom-photon coupling strength. The probe-field amplitude follows the
paraxial wave equation $[4\pi
i\partial_{z}+\lambda\nabla_{\perp}^{2}]\mathcal{E}=0$, and is normalized such
that $|\mathcal{E}|^{2}$ yields a spatial photon density. The remaining
photonic dynamics can be integrated out to obtain a Hamiltonian
$\hat{H}_{\text{dd}}=-\sum_{i\neq
j}J_{ij}\hat{\sigma}^{(i)}_{eg}\hat{\sigma}^{(j)}_{ge}$ (2)
and Liouvillian
$\mathcal{L}(\rho)=\sum^{N}_{i,j}\frac{1}{2}\Gamma_{ij}(2\hat{\sigma}^{(j)}_{ge}\rho\hat{\sigma}^{(i)}_{eg}-\hat{\sigma}^{(i)}_{eg}\hat{\sigma}^{(j)}_{ge}\rho-\rho\hat{\sigma}^{(i)}_{eg}\hat{\sigma}^{(j)}_{ge})$
(3)
that describe the light-induced atomic interactions within the Born-Markov
approximation Asenjo-Garcia _et al._ (2017). The interaction coefficients
$J_{ij}$ and $\Gamma_{ij}$ for two atoms at positions ${\bf r}_{i}$ and ${\bf
r}_{j}$ are determined by the Green's function tensor of the free-space
electromagnetic field. Knowing the dipolar field from each atom, one can
readily reconstruct the mean values and correlation functions of the photonic
field from the solution, $\hat{\rho}$, of the driven atomic many-body
dynamics. While this yields the entirety of the emitted light field, we focus
here on its projection onto the single transverse mode of the driving field
$\mathcal{E}$. This yields simple relations
$\displaystyle\hat{a}_{\rightarrow}(t)=$
$\displaystyle\sqrt{\mathcal{P}}+i\frac{g}{c\sqrt{\mathcal{P}}}\sum_{j}\mathcal{E}^{*}({\bf
r}_{j})\hat{\sigma}_{eg}^{(j)}(t),$ (4a)
$\displaystyle\hat{a}_{\leftarrow}(t)=$ $\displaystyle
i\frac{g}{c\sqrt{\mathcal{P}}}\sum_{j}\mathcal{E}^{*}({\bf
r}_{j})\hat{\sigma}_{eg}^{(j)}(t)$ (4b)
for the photon operators of the transmitted ($\hat{a}_{\rightarrow}$) and
reflected ($\hat{a}_{\leftarrow}$) field modes, in terms of the atomic
transition operators. Here, $\mathcal{P}=\int|\mathcal{E}({\bf r})|^{2}{\rm
d}{\bf r}_{\perp}$ denotes the transverse integral over the input intensity
profile and defines the probe beam power, which is conserved along the
propagation.
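The coefficients $J_{ij}$ and $\Gamma_{ij}$ entering Eqs. (2) and (3) follow from the dyadic free-space Green's function. A minimal numerical sketch, assuming the normalization convention of Asenjo-Garcia _et al._ (2017), identical real dipole orientations `dhat`, and lengths measured in units of the wavelength:

```python
import numpy as np

def green_tensor(r_vec, k):
    """Dyadic free-space Green's function evaluated at separation r_vec."""
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    kr = k * r
    prefac = np.exp(1j * kr) / (4 * np.pi * r)
    return prefac * ((1 + 1j / kr - 1 / kr**2) * np.eye(3)
                     + (-1 - 3j / kr + 3 / kr**2) * np.outer(rhat, rhat))

def dipole_couplings(positions, k, gamma, dhat):
    """Exchange couplings J_ij and collective decay rates Gamma_ij."""
    N = len(positions)
    J = np.zeros((N, N))
    Gam = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i == j:
                Gam[i, i] = gamma  # single-atom decay rate on the diagonal
                continue
            g = dhat @ green_tensor(positions[i] - positions[j], k) @ dhat
            J[i, j] = -3 * np.pi * gamma / k * g.real
            Gam[i, j] = 6 * np.pi * gamma / k * g.imag
    return J, Gam
```

A quick consistency check of the normalization: for vanishing separation the collective decay rate approaches the single-atom rate, $\Gamma_{ij}\to\Gamma$.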
Figure 3: (Color online) (a) Equal-time two-photon correlation function of the
transmitted light as a function of diameter of the array for
$\sqrt{\mathcal{P}}=0.01\sqrt{\Gamma/c}$. Parameters are optimized to obtain a
maximum linear reflection as shown in Fig.2(a). Panel (b) shows the
correlation function as a function of the probe-beam waist for large arrays,
$\ell\rightarrow\infty$.
For reference, let us first consider the optical properties of the two-level
array in the absence of Rydberg-state coupling ($\Omega=0$). While only an
infinitely extended array can perfectly reflect an incident plane wave
Shahmoon _et al._ (2017), finite arrays can yield high reflection for a
judicious choice of the system parameters. This is illustrated in Fig. 2(a),
where we show the steady-state reflectivity
$R=\langle\hat{a}_{\leftarrow}^{\dagger}\hat{a}_{\leftarrow}\rangle/\mathcal{P}$
along with the transmission coefficient
$T=\langle\hat{a}_{\rightarrow}^{\dagger}\hat{a}_{\rightarrow}\rangle/\mathcal{P}$
and loss $L=1-T-R$ for circular disc-shaped arrays with a diameter of $\ell$
atoms and a Gaussian driving mode $|\mathcal{E}|=\sqrt{2\mathcal{P}/(\pi
w^{2})}{\rm e}^{-r_{\perp}^{2}/w^{2}}$, whose width changes as
$w^{2}=w_{0}^{2}+\lambda^{2}z^{2}/(\pi^{2}w_{0}^{2})$ along the propagation
direction and has its waist centered at the mirror position (Fig.1). Already
for an array with a $10$-atom diameter ($\ell=10$), one can reach near-unity
reflection of $R\simeq 0.975$ for a beam waist of only $w_{0}\simeq 2\lambda$.
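The Gaussian driving mode above can be written down directly; the following short numerical sketch (working in units with $\lambda=1$, an assumption made here for convenience) also verifies the power normalization $\int|\mathcal{E}|^{2}\,{\rm d}{\bf r}_{\perp}=\mathcal{P}$ at any $z$:

```python
import numpy as np

def probe_amplitude(r_perp, z, P, w0, lam):
    """Gaussian probe mode of power P with its waist w0 at the array (z = 0)."""
    w2 = w0**2 + lam**2 * z**2 / (np.pi**2 * w0**2)  # beam width squared
    return np.sqrt(2 * P / (np.pi * w2)) * np.exp(-r_perp**2 / w2)

# Power is conserved along the propagation direction: the radial integral
# of |E|^2 gives P at the waist and away from it.
r = np.linspace(0.0, 20.0, 200001)
dr = r[1] - r[0]
for z in (0.0, 3.0):
    E = probe_amplitude(r, z, P=1.0, w0=2.0, lam=1.0)
    power = np.sum(np.abs(E)**2 * 2 * np.pi * r) * dr
```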
The linear reflectivity for a finite Rydberg-state coupling, $\Omega$, is
determined by the atomic dark state
$|D\rangle\propto\Omega|G\rangle-g\sum_{j}\mathcal{E}({\bf
r}_{j})\hat{\sigma}_{sg}^{(j)}|G\rangle$, where $|G\rangle$ denotes the
$N$-atom ground state. Owing to the long Rydberg-atom lifetime, this state
does not suffer from spontaneous emission and hence it facilitates a vanishing
reflection and perfect transmission of the probe field.
The nonlinear response arises from the intricate interplay between EIT and the
different atomic interactions, from (i) local defect interactions with atomic
excitations, and (ii) finite-range photon-mediated dipole-dipole interactions,
to (iii) the long-range Rydberg interactions that extend across the array. We
can estimate the first effect by sampling a single empty site of the array
from the probability distribution $p_{j}\propto|\mathcal{E}({\bf r}_{j})|^{2}$
of generated Rydberg impurities. As shown in Fig.2(b), such a single de-
localized Rydberg impurity can have significant consequences for the optical
response, causing transverse photon scattering at the expense of the two-level
reflection coefficient.
We have performed stochastic wave function simulations Mølmer _et al._ (1993)
to solve the $N$-body master equation determined by Eqs.(1)-(3). For
sufficiently weak probe fields, one can truncate the many-body wave function
of the atoms at maximally two $|e\rangle$-excitations. This describes the
physics of two interacting photons, which can be analyzed via the two-photon
densities
$\rho_{{\begin{subarray}{c}\alpha\\\
\beta\end{subarray}}}(t,t^{\prime})=\langle\hat{a}_{\alpha}^{\dagger}(t)\hat{a}_{\beta}^{\dagger}(t^{\prime})\hat{a}_{\beta}(t^{\prime})\hat{a}_{\alpha}(t)\rangle,$
(5)
and the associated correlation functions
$g^{(2)}_{{\begin{subarray}{c}\alpha\\\
\beta\end{subarray}}}(|t-t^{\prime}|)\equiv\rho_{{\begin{subarray}{c}\alpha\\\
\beta\end{subarray}}}(t,t^{\prime})/(\langle\hat{a}_{\alpha}^{\dagger}(t)\hat{a}_{\alpha}(t)\rangle\langle\hat{a}_{\beta}^{\dagger}(t^{\prime})\hat{a}_{\beta}(t^{\prime})\rangle)$,
where $\alpha,\beta=\rightarrow,\leftarrow$ labels the forward and backward
propagating mode of emitted probe photons, as also indicated in Eqs.(4). The
pair-correlation functions only depend on the time difference
$\tau=|t-t^{\prime}|$ in the steady-state under cw-driving.
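The truncation at two excitations, combined with a full blockade that removes doubly Rydberg-excited states, can be made explicit by enumerating the many-body basis. A small illustrative sketch; the state encoding here is our own choice and not taken from the actual simulations:

```python
from itertools import combinations

def truncated_basis(N, blockade=True):
    """Enumerate many-body states with at most two excitations.

    A state is a frozenset of (site, level) tuples with level in {'e', 's'}.
    With a full Rydberg blockade, states containing two 's' excitations
    are removed from the basis.
    """
    singles = [(j, lvl) for j in range(N) for lvl in ('e', 's')]
    basis = [frozenset()]                       # N-atom ground state |G>
    basis += [frozenset([x]) for x in singles]  # single excitations
    for a, b in combinations(singles, 2):
        if a[0] == b[0]:
            continue  # at most one excitation per atom
        if blockade and a[1] == 's' and b[1] == 's':
            continue  # Rydberg blockade: no doubly Rydberg-excited states
        basis.append(frozenset([a, b]))
    return basis
```

For $N$ atoms this yields $1+2N$ states with at most one excitation plus $3N(N-1)/2$ blockade-allowed pair excitations, e.g. 16 basis states for $N=3$.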
Figure 4: (Color online) Pair correlation function of the transmitted (a) and
reflected light (b) for different values of the control-field Rabi frequency
$\Omega$, $\ell=10$, $w_{0}=1.7\lambda$, and otherwise identical parameters as
in Fig. 3. For both correlation functions, all data approximately collapse
onto a single curve upon scaling the time between successive photon detections
by the EIT delay time $\tau_{d}$, as given in Eq.(6).
In Fig.3(a) we show the equal-time two-photon correlations
$g_{\rightrightarrows}^{(2)}(0)$ of the transmitted light for different sizes
of the array, assuming parameters optimized to maximize the linear reflection.
As can be seen, the Rydberg blockade can lead to strongly antibunched
transmitted light. This is possible because the Rydberg-state component of the
dark state, $|D\rangle$, generated by absorption of one photon, blocks
excitation of further atoms into the dark state and therefore suppresses the
simultaneous transmission of multiple photons. We find stronger anti-bunching
for larger arrays, i.e. a rapid drop of $g_{\rightrightarrows}^{(2)}(0)$ with
increasing size $\ell$ of the array. This behaviour arises from the effects of
Rydberg impurities, discussed above and illustrated in Fig.2(b). A larger size
of the array and illuminated area implies a lower density of the single
Rydberg impurity, and therefore improves the efficiency of the nonlinear
reflection. Simulations for a fixed beam waist and increasing lattice size
show convergence to a finite value of $g_{\rightrightarrows}^{(2)}(0)$ for
sufficiently large arrays. These asymptotic values are shown in Fig.3(b) and
reveal a rapid exponential drop with increasing waist of the probe beam,
yielding strong anti-bunching with $g_{\rightrightarrows}^{(2)}(0)<0.1$ already
for remarkably small waists, $w_{0}\sim 1.5\lambda$.
These results demonstrate the strong suppression of multi-photon transmission,
and the two-photon density depicted in Fig.1(b) shows how incident photons are
rerouted by their nonlinear interaction with the Rydberg array. Here, the two-
time density defined in Eq.(5) has been converted to a spatial steady-state
correlation function, using the relation between photon position and
time, $z=ct$, set by the speed of light, $c$. In particular, we see that the two-
photon density,
$\rho_{\rightleftarrows}(z,z^{\prime})=\rho_{\leftrightarrows}(z^{\prime},z)$
for counter-propagating photon pairs continuously connects to the density,
$\rho_{\leftleftarrows}(z,z^{\prime})$, of simultaneously reflected photon
pairs. This indicates that the two-photon component of the incident probe
field is symmetrically rerouted into these two modes.
We can understand this behavior as follows. In the linear limit and in the
absence of EIT ($\Omega=0$), the weak probe field $\mathcal{E}$ only drives
weak atomic excitations with small transition dipole moments determined by
$\hat{\sigma}_{eg}^{(j)}$. At the reflection maxima, the field generated by
these weak atomic dipoles just cancels the probe field in Eq.(4a) and
therefore yields high reflection with
$\hat{a}_{\leftarrow}\approx\sqrt{\mathcal{P}}$ according to Eq.(4b). In the
opposite limit of perfect EIT, the atomic dipoles vanish entirely, leading to
perfect transmission with $\hat{a}_{\rightarrow}=\sqrt{\mathcal{P}}$ and
$\hat{a}_{\leftarrow}=0$, according to Eqs.(4). The nonlinear response,
however, differs fundamentally, because the detection of the first reflected
photon causes a projection of the $N$-atom wavefunction into a state with a
definite de-localized excitation. This unit-probability, heralded excitation
consequently contributes a much stronger emission that vastly overwhelms the
incident probe field amplitude. Following Eq.(4), the subsequent conditioned
photon emission becomes virtually symmetric, with
$\hat{a}_{\rightarrow}\approx\hat{a}_{\leftarrow}$ and leads to the typical
form of the correlated two-photon density shown in Fig.1(b).
Fig.4 offers further insights into the dynamics of the nonlinear photon
interaction. From the linear response of the array under EIT conditions, we
find that a transmitted light pulse experiences a delay of
$\tau_{d}=\frac{\Gamma_{c}}{2\Omega^{2}},$ (6)
which coincides with the inverse width of the transparency window shown in
Fig.1(e). This pulse delay emerges in analogy to slow-light propagation
through an extended EIT medium Fleischhauer and Lukin (2000), and corresponds
to the average time for which a transmitted photon is transferred to the de-
localized Rydberg state $\sim\sum_{j}\mathcal{E}({\bf
r}_{j})\hat{\sigma}_{sg}^{(j)}|G\rangle$ and blocks EIT for any other incident
photons. The pair correlation functions and two-photon densities, depicted in
Fig.4, accurately corroborate this picture, showing the same characteristic
correlation time $\tau_{d}$ for bunched and anti-bunched photon states of the
reflected and transmitted light for varying values of the control-field Rabi
frequency $\Omega$. At the same time, we find that the outgoing photons
maintain a high degree of coherence, as quantified by
$g_{\alpha}^{(1)}(t)=\langle\hat{a}_{\alpha}^{\dagger}(t)\hat{a}_{\alpha}(0)\rangle/\langle\hat{a}_{\alpha}^{\dagger}(0)\hat{a}_{\alpha}(0)\rangle\sim
1$, on the scale of the characteristic correlation time of both fields, which
reflects the suppression of photon loss and decoherence by the collective
light-matter coupling of the ordered array.
This combination of high coherence, low photon losses and strong photon-photon
interactions offers a promising outlook for the generation and manipulation of
non-classical light in optical-lattice experiments that have already
demonstrated Rydberg blockade of more than $\sim 100$ atoms Zeiher _et al._
(2015) as well as efficient photon reflection by arrays with sub-wavelength
lattice constants Rui _et al._ (2020). The demonstrated nonlinearity is akin
to that of waveguide QED with single few-level emitters, whereby the
eliminated scattering into other transverse modes effectively corresponds to a
near-perfect coupling into a single guided mode. This limit of strong coherent
photon coupling has thus far been difficult to reach in atomic systems Prasad
_et al._ (2020); Stiesdal _et al._ (2021), but will enable a range of
applications, from the generation of single narrow-bandwidth photons Parkins
_et al._ (1993), and logic gates Ralph _et al._ (2015) to few-photon routing
and sorting Witthaut _et al._ (2012). The Rydberg array can hereby be
employed as an active or passive element under pulsed or cw operation,
exploiting the additional temporal control provided by the control-field
coupling. While we have focussed here on the few-photon domain in order to
analyse the basic interaction mechanism, the multi-photon regime under strong-
driving conditions provides exciting perspectives for exploring quantum
optical many-body phenomena. Similarly to cavity-QED with single emitters, the
described nonlinearities may be further enhanced by positioning the array in
front of mirrors or inside optical cavities Shahmoon _et al._ (2020).
Arrangements of multiple Rydberg arrays, or more complex 3D configurations
could be constructed with atoms in configurable optical tweezer arrays to form
networks of quantum beam splitters and nonlinear resonators that also exploit
multi-photon and multi-mode interference effects and may supplement proposals
for quantum enhanced interferometry Demkowicz-Dobrzański _et al._ (2015).
This work was supported by the Carlsberg Foundation through the ‘Semper
Ardens’ Research Project QCooL, by the NSF through a grant for ITAMP at
Harvard University, by the DFG through the SPP1929, by the European Commission
through the H2020-FETOPEN project ErBeStA (No. 800942), and by the Danish
National Research Foundation through the Center of Excellence “CCQ” (Grant
agreement no.: DNRF156).
_Note added_: During the completion of this manuscript we became aware of a
related work Moreno-Cardoner _et al._ (2021) that describes nonlinearities in
two-level arrays where finite-range interactions are induced by Rydberg
dressing Henkel _et al._ (2010).
## References
* Chang _et al._ (2014) D. E. Chang, V. Vuletić, and M. D. Lukin, Nature Photonics 8, 685 (2014).
* Birnbaum _et al._ (2005) K. M. Birnbaum, A. Boca, R. Miller, A. D. Boozer, T. E. Northup, and H. J. Kimble, Nature 436, 87 (2005).
* Volz _et al._ (2014) J. Volz, M. Scheucher, C. Junge, and A. Rauschenbeutel, Nature Photonics 8, 965 (2014).
* Reiserer and Rempe (2015) A. Reiserer and G. Rempe, Rev. Mod. Phys. 87, 1379 (2015).
* Welte _et al._ (2018) S. Welte, B. Hacker, S. Daiss, S. Ritter, and G. Rempe, Phys. Rev. X 8, 011018 (2018).
* O’Shea _et al._ (2013) D. O’Shea, C. Junge, J. Volz, and A. Rauschenbeutel, Phys. Rev. Lett. 111, 193601 (2013).
* Goban _et al._ (2012) A. Goban, K. S. Choi, D. J. Alton, D. Ding, C. Lacroûte, M. Pototschnig, T. Thiele, N. P. Stern, and H. J. Kimble, Phys. Rev. Lett. 109, 033603 (2012).
* Thompson _et al._ (2013) J. D. Thompson, T. G. Tiecke, N. P. de Leon, J. Feist, A. V. Akimov, M. Gullans, A. S. Zibrov, V. Vuletić, and M. D. Lukin, Science 340, 1202 (2013).
* Tiecke _et al._ (2014) T. G. Tiecke, J. D. Thompson, N. P. de Leon, L. R. Liu, V. Vuletić, and M. D. Lukin, Nature 508, 241 (2014).
* Petersen _et al._ (2014) J. Petersen, J. Volz, and A. Rauschenbeutel, Science 346, 67 (2014).
* Lodahl _et al._ (2015) P. Lodahl, S. Mahmoodian, and S. Stobbe, Rev. Mod. Phys. 87, 347 (2015).
* Chang _et al._ (2018) D. E. Chang, J. S. Douglas, A. González-Tudela, C.-L. Hung, and H. J. Kimble, Rev. Mod. Phys. 90, 031002 (2018).
* Noaman (2018) M. Noaman, M. Langbecker, and P. Windpassinger, Opt. Lett. 43, 3925 (2018).
* Yu _et al._ (2019) S.-P. Yu, J. A. Muniz, C.-L. Hung, and H. J. Kimble, Proceedings of the National Academy of Sciences 116, 12743 (2019) .
* Prasad _et al._ (2020) A. S. Prasad, J. Hinney, S. Mahmoodian, K. Hammerer, S. Rind, P. Schneeweiss, A. S. Sørensen, J. Volz, and A. Rauschenbeutel, Nature Photonics 14, 719 (2020).
* Pritchard _et al._ (2010) J. D. Pritchard, D. Maxwell, A. Gauguet, K. J. Weatherill, M. P. A. Jones, and C. S. Adams, Phys. Rev. Lett. 105, 193603 (2010).
* Peyronel _et al._ (2012) T. Peyronel, O. Firstenberg, Q.-Y. Liang, S. Hofferberth, A. V. Gorshkov, T. Pohl, M. D. Lukin, and V. Vuletić, Nature 488, 57 (2012).
* Thompson _et al._ (2017) J. D. Thompson, T. L. Nicholson, Q.-Y. Liang, S. H. Cantu, A. V. Venkatramani, S. Choi, I. A. Fedorov, D. Viscor, T. Pohl, M. D. Lukin, and V. Vuletić, Nature 542, 206 (2017).
# Learning Abstract Representations through Lossy Compression of Multi-Modal
Signals
Charles Wilmot, Gianluca Baldassarre, and Jochen Triesch. C. Wilmot and J.
Triesch are with the Frankfurt Institute for Advanced Studies,
Ruth-Moufang-Str. 1, 60438 Frankfurt am Main, Germany.
{wilmot,triesch}@fias.uni-frankfurt.de. G. Baldassarre is with the National
Research Council, Institute of Cognitive Sciences and Technologies, Via S.
Martino della Battaglia 44, I-00185 Rome, Italy<EMAIL_ADDRESS>
Manuscript received September 30, 2020; revised September 31, 2020.
###### Abstract
A key competence for open-ended learning is the formation of increasingly
abstract representations useful for driving complex behavior. Abstract
representations ignore specific details and facilitate generalization. Here we
consider the learning of abstract representations in a multi-modal setting
with two or more input modalities. We treat the problem as a lossy compression
problem and show that generic lossy compression of multimodal sensory input
naturally extracts abstract representations that tend to strip away
modality-specific details and preferentially retain information that is shared across
the different modalities. Specifically, we propose an architecture that is
able to extract information common to different modalities based on the
compression abilities of generic autoencoder neural networks. We test the
architecture with two tasks that allow 1) the precise manipulation of the
amount of information contained in and shared across different modalities and
2) testing the method on a simulated robot with visual and proprioceptive
inputs. Our results show the validity of the proposed approach and demonstrate
the applicability to embodied agents.
###### Index Terms:
open-ended learning, abstraction, multimodality, lossy compression,
autoencoder, intrinsic motivation.
## 1 Introduction
Human intelligence rests on the ability to learn abstract representations. An
abstract representation has stripped away many details of specific examples of
a concept and retains what is common, thereby facilitating generalization and
transfer of knowledge to new tasks [25]. A key challenge for natural and
artificial developing agents is to learn such abstract representations. How
can this be done?
### 1.1 Learning abstract representations
In classic supervised learning settings, an external teacher provides the
abstract concept by virtue of explicit labeling of training examples. For
example, in neural network based image recognition explicit labels (cat, dog,
etc.) are provided as a “one-hot” abstract code and the network learns to map
input images to this abstract representation [1, 2, 3]. While such a learned
mapping qualifies as an abstract representation that has “stripped away many
details of specific examples of a concept and retains what is common” it needs
to be provided by the teacher through typically millions of examples. This is
clearly not how human infants learn and often leads to undesirable
generalization behavior [4, 5]. Therefore unsupervised learning and
reinforcement learning are more interesting settings for studying the
autonomous formation of abstract representations.
Reinforcement learning (RL) can also be viewed from the perspective of
learning abstract representations. The essence of RL is to learn a policy,
i.e., a mapping from states of the world to actions that an agent should take
in order to maximize the sum of future rewards [6]. Often, this mapping is
realized through neural networks. In deep Q-learning networks [7, 8], for
example, a neural network learns to map the current state of the world (e.g.,
the current image of a computer game screen) onto the expected future rewards
for performing different actions (e.g., joystick commands) in this particular
state. This can be interpreted as the agent learning an abstract “concept” of
the following kind: the set of all world states for which this particular
joystick command will be the optimal choice of action. While these “concepts”
are not provided explicitly by a teacher, they are provided implicitly through
the definition of the RL problem (states, actions, rewards, environment
dynamics). In fact, from an RL perspective, these “concepts” are the only ones
the agent ever needs to know about. They suffice for behaving optimally in
this particular environment. However, they may also become completely useless
when the task changes, e.g., a different computer game should be played. This
exemplifies the deep and unresolved problem of how abstract knowledge could be
extracted in RL that is likely to transfer to new tasks.
In the domain of unsupervised learning, on which we will focus in the
remainder, a simple approach to learning somewhat abstracted representations
is through clustering. For example, in $k$-means clustering [9] an input
$x\in\mathbb{R}^{n}$ is represented by mapping it to one of $k$ cluster
centers $c_{i}\in\mathbb{R}^{n},\,i\in\\{1,\ldots,k\\}$ based on a suitably
defined distance metric in $\mathbb{R}^{n}$. Representing an input $x$ by the
closest cluster center strips away information about the precise location of
$x$ in $\mathbb{R}^{n}$, achieving a simple form of abstraction. However, the
use of a predefined distance metric is limiting. A second set of approaches
for learning more abstract representations through lossy compression are
dimensionality reduction techniques. A classic example of such an approach is
principal component analysis (PCA). PCA finds linear projections of the data
such that the projected data has maximum variance, while orthogonal directions
are discarded. Thus, the information that gets stripped away corresponds to
linear projections of small variance. The central limitation of PCA is the
restriction to linear projections. A popular and more powerful approach is the
use of autoencoder neural networks [10], on which we will focus in the
following. Like other dimensionality reduction techniques, autoencoders
construct a more compact and abstract representation of the input domain by
learning to map inputs $x\in\mathbb{R}^{n}$ onto a more compact latent
representation $z\in\mathbb{R}^{m}$ with $m\ll n$ via a neural network. For
this, the network has an encoder/decoder structure with a “bottleneck” in the
middle. The $n$-dimensional input is mapped via several layers onto the
$m$-dimensional bottleneck and from there via several layers to an
$n$-dimensional output. The learning objective is to reconstruct the input at
the output, but the trivial solution of a direct identity mapping is avoided,
because the input needs to be “squeezed through” the central $m$-dimensional
bottleneck. After training, the decoder is often discarded and only the
encoder is retained, providing a mapping from the original $n$-dimensional
input $x$ to a compressed lower-dimensional latent representation $z$. Due to
the nonlinear nature of the neural network, it is difficult to characterize
exactly what information will be stripped away in the encoding process. An
exception is the special case of a linear network with a quadratic loss
function. For this case, it can be shown that the network discovers the same
subspace as linear PCA. In the following, we will consider the implications of
a developing agent learning to encode an input $x$ that comprises multiple
sensory modalities.
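The linear special case mentioned above can be made concrete with a tiny, self-contained sketch: a closed-form PCA on 2-D points that encodes each point as a single latent number. The dataset and tolerance below are illustrative choices of ours, not from the paper. The average reconstruction error comes out equal to the variance along the discarded minor axis, i.e., exactly the information that gets "stripped away":

```python
import math

# Toy 2-D dataset scattered roughly along the direction (2, 1).
pts = [(2.0, 1.1), (4.0, 1.9), (-2.0, -1.0), (-4.0, -2.1), (1.0, 0.4), (-1.0, -0.3)]
n = len(pts)
mx = sum(x for x, _ in pts) / n
my = sum(y for _, y in pts) / n

# Population covariance matrix [[a, b], [b, c]].
a = sum((x - mx) ** 2 for x, _ in pts) / n
b = sum((x - mx) * (y - my) for x, y in pts) / n
c = sum((y - my) ** 2 for _, y in pts) / n

# Eigenvalues of a symmetric 2x2 matrix in closed form.
half_tr, radius = (a + c) / 2, math.hypot((a - c) / 2, b)
lam_max, lam_min = half_tr + radius, half_tr - radius

# Unit eigenvector for lam_max: the principal direction (valid since b != 0).
vx, vy = b, lam_max - a
norm = math.hypot(vx, vy)
vx, vy = vx / norm, vy / norm

# Encode each point as one number (its projection), then decode and compare.
recon_err = 0.0
for x, y in pts:
    z = (x - mx) * vx + (y - my) * vy      # 1-D latent code
    rx, ry = mx + z * vx, my + z * vy      # reconstruction from the code
    recon_err += (x - rx) ** 2 + (y - ry) ** 2
recon_err /= n

# What is lost is precisely the variance along the minor axis.
assert abs(recon_err - lam_min) < 1e-9
```

A linear autoencoder with a one-unit bottleneck trained on the same data under a quadratic loss would converge to this same one-dimensional subspace.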
### 1.2 Multimodality, Abstraction, and Lossy Compression: An Information
Theoretic Perspective
Our different sensory modalities (vision, audition, proprioception, touch,
smell, taste) provide us with different “views” of our physical environment.
These views are not independent, but contain shared information about the
underlying physical reality. Therefore, it must be possible to compress the
information coming from the different modalities into a more compact code. As
an example, consider viewing and touching your favorite coffee cup. Some
information such as the color or the text or picture printed on the cup will
only be accessible to the visual modality. Some information, such as the
temperature or roughness of the surface, will only be accessible to the haptic
modality. Some information, however, such as the 3-D shape of the cup, will be
accessible to both modalities. This implies potential for compression. Let
$X_{v}$ represent the visual input and $X_{h}$ the haptic input. We can
quantify the amount of information using concepts from information theory. For
now, we will ignore the fact that the amount of information is a function of
our behavior, but we will return to this point in the Discussion. We can
quantify the amount of information in $X_{v}$ and $X_{h}$ as:
$H(X_{v},X_{h})=H(X_{v})+H(X_{h})-MI(X_{v};X_{h})\;,$ (1)
where $H(X_{v}$) and $H(X_{h})$ are the individual entropies of the visual and
haptic signals, respectively, $H(X_{v},X_{h})$ is their joint entropy, and
$MI(X_{v};X_{h})$ is their mutual information, i.e., the amount of information
that $X_{v}$ and $X_{h}$ have in common. The individual entropies $H(X_{v}$)
and $H(X_{h})$ and the joint entropy $H(X_{v},X_{h})$ indicate, respectively,
how many bits are required on average to encode $X_{v}$ and $X_{h}$
individually or jointly. The mutual information expresses how many bits can be
“saved” by jointly encoding $X_{v}$ and $X_{h}$ compared to encoding them
separately. If the visual and haptic inputs were statistically independent,
then $MI(X_{v};X_{h})=0$ and no savings can be gained. If there are any
statistical dependencies between $X_{v}$ and $X_{h}$, then $MI(X_{v};X_{h})>0$
and $H(X_{v},X_{h})<H(X_{v})+H(X_{h})$, i.e., the visual and haptic signals
can be compressed into a more compact code. In principle, this compression can
be achieved without any loss of information.
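Equation (1) is easy to verify numerically. The sketch below, in plain Python, uses a made-up joint distribution of two correlated binary "modalities" that agree 80% of the time:

```python
import math
from collections import defaultdict

# Illustrative joint distribution of two binary signals (X_v, X_h).
p_joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

def entropy(dist):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Marginal distributions of each modality.
p_v, p_h = defaultdict(float), defaultdict(float)
for (v, h), p in p_joint.items():
    p_v[v] += p
    p_h[h] += p

H_v, H_h, H_joint = entropy(p_v), entropy(p_h), entropy(p_joint)
mi = H_v + H_h - H_joint   # Eq. (1) rearranged for MI(X_v; X_h)

# Each signal alone costs 1 bit, but jointly they cost less than 2 bits:
# the difference (about 0.28 bits here) is the shared information.
assert mi > 0 and H_joint < H_v + H_h
```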
However, it is the very nature of abstraction to “strip away” information
about the details of a situation and only maintain a “coarser” and hopefully
more generalizable representation. Consider again the situation of visuo-
haptic perception of your favorite coffee cup. If we were to strip away the
information that is only accessible to the visual modality (color and printed
text/picture) and strip away the information that is accessible to only the
haptic modality (temperature, surface roughness), then we are left with a much
reduced and abstract representation that maintains essentially the 3-D shape
of the cup and allows for many generalizations, e.g., how to grasp this
particular cup versus many similarly shaped cups with virtually endless
combinations of color, picture, text, surface roughness, and temperature.
Thus, learning an abstract code by retaining information that is shared across
modalities and stripping away information that is specific to only individual
modalities may lead to very useful representations with high potential for
generalization. How can this be done?
Here we investigate a possible solution that relies on autoencoding the
multimodal inputs into a sufficiently compact lossy code. The rationale for
this approach is as follows. Consider an autoencoder that learns to map the
concatenation of $X_{v}$ and $X_{h}$ onto a small latent vector $Z$ such that
$X_{v}$ and $X_{h}$ can be reconstructed from $Z$ with minimal loss. What
information should $Z$ encode? If the information coding capacity of $Z$ is at
least $H(X_{v},X_{h})$, then it can simply encode all the information
contained in the visual and haptic signals. If it is smaller, however, then
some information must be discarded. So what information should be kept and
what information should be discarded? In general, it appears best to keep the
information that $X_{v}$ and $X_{h}$ have in common, because this information
aids in reconstructing both of them, essentially killing two (metaphorical)
birds with one stone. Information that is present in only either $X_{v}$ or
$X_{h}$ cannot be helpful in reconstructing the other. Therefore, an
autoencoder with limited capacity will learn a representation that tends to
prioritize the information that $X_{v}$ and $X_{h}$ have in common and tends
to strip away the information that is unique to either modality. This is
exactly the kind of abstract representation that we would like to achieve.
In the remainder of this article we make these ideas more concrete and study
them in extensive computer simulations. We begin by a discussion of related
work. We then propose a number of models to learn abstract representations
from multimodal input via autoencoding and compare their behavior. In a first
approach, we use synthetically generated inputs, since this allows us to
precisely control the amount of information in individual sensory modalities
and the amount of information that they share. In a second approach, we use
visual and proprioceptive data from a robot simulation. We end by discussing
broader implications of learning abstract representations through lossy
compression of multimodal input for cognitive development.
## 2 Related Work
Our work falls in the general area of unsupervised representation learning
[24]. Representation learning aims to build machine learning algorithms that
extract the explanatory factors of variation in the data. In so doing, the learned
representations are also often compressed/abstracted in that they discard some
information. Algorithms for representation learning can be grouped based on
the general strategy they use to form representations. This strategy amounts
to a choice of generic priors on the process that is assumed to have
generated the data. These priors might for example assume that there are
indeed distinct factors capturing the variability of data (e.g., changing
positions of objects and light sources relative to a camera giving rise to
varying camera images), that observations close in space or time have similar
values (e.g., as in natural images and videos), or that most of the
probability mass of the data concentrates on manifolds with a dimensionality
smaller than that of the perceptual space (e.g., as assumed in autoencoders).
Information theoretic approaches to learning representations are of particular
interest and have a long history. A complete review is beyond the scope of
this article. Here we focus on relating our work to some classic approaches.
In Efficient Coding [27, 23], the goal is to learn a representation for
sensory signals that is “compact” and exploits redundancies in the signals to
arrive at a more efficient code. Often this is formulated as maximizing the
mutual information $I(X;Y)$ between an input $X$ and its representation $Y$
while putting additional constraints on $Y$. For example, in linear and non-
linear independent component analysis (ICA) [28], one attempts to make the
components of $Y$ statistically independent, corresponding to the prior of the
data being generated by mixing independent information sources. Searching for
codes that are factorial while retaining the maximum amount of information
about the input leads to an efficient code. This is because if the extracted
components $Y$ were not independent, then there would be potential for further
compression of $Y$ to generate an even more efficient code.
Sparse coding is a popular variant of efficient coding that is related to ICA
and imposes a sparsity constraint (or prior) on $Y$ [29]. This has given rise
to a large literature on learning sparse representations for sensory signals.
Sparse coding models are frequently employed as models of learning sensory
representations in the brain. More recently, such approaches have also been
extended to active perception. In the Active Efficient Coding (AEC) framework,
the sensory encoding is optimized by simultaneously optimizing the encoding of
the sensory inputs and the movements of the sense organs that shape the
statistics of these inputs [30, 31].
Predictive coding can be viewed as a special case of efficient coding that
forms hierarchical representations, where higher levels predict the activity
of lower levels and lower levels signal prediction errors to higher levels
[32]. Note that in these approaches the goal is generally to retain as much
information about the input as possible. There is no notion of abstraction as
we have defined it, i.e., of deliberately discarding information to arrive at
a representation that generalizes more easily to new situations.
An alternative information theoretic learning objective was proposed by Becker
and Hinton [33]. In their IMAX approach, two distinct inputs are considered
(in particular visual inputs from two neighboring locations of stereoscopic
image pairs) and the objective is to extract information that is shared by
these two input sources. Becker and Hinton demonstrate that this allows
learning to extract disparity information from the stereoscopic images. While
the specific texture projected to the two neighboring retinal locations may be
different, the binocular disparity is typically the same, because it tends to
vary smoothly across the image. Their approach can be viewed as an early
attempt to extract more “abstract” information (disparity) by encoding
information from multiple sources (binocular visual inputs from two
neighboring locations) and only keeping the information that they have in
common.
IMAX can also be viewed as related to Canonical Correlation Analysis (CCA)
[34]. In CCA one tries to find linear combinations of the components of a
random vector $X$ and the components of a random vector $Y$ such that these
linear combinations are maximally correlated. A high correlation between the
two projections implies that they share a substantial amount of information
(correlation implies statistical dependence and therefore non-zero mutual
information while the reverse is generally not true).
The information bottleneck method by Tishby and colleagues considers the
objective of encoding a sensory input $X$ and only keeping the information
that is useful for predicting a second “relevant” signal $Y$ [35]. Thus it
also aims at keeping only information that is shared with (or: predictive of)
another signal, while discarding all other information. However, in contrast
to IMAX there is a clear asymmetry between the input $X$ and the signal $Y$.
The information bottleneck objective can be expressed as follows: Let $T$ be a
compressed version of $X$. $T$ should retain as little information from $X$ as
possible, but at the same time keep as much information about $Y$ as possible.
Thus $T$ functions as the information bottleneck. Formally, one is seeking to
optimize the functional:
$\min_{p(T=t|X=x)}I(X;T)-\beta I(T;Y),$ (2)
where $p(T=t|X=x)$ describes the encoding of an input $x$ via its
representation $t$, $I(X;T)$ is the mutual information between $X$ and $T$,
$I(T;Y)$ is the mutual information between $T$ and $Y$, and $\beta$ is a
parameter for balancing the two objectives. Unfortunately, as shown in the
Appendix, it is not straightforward to derive a symmetric variant of the
information bottleneck, i.e., to find an encoding $p(T=t|X=x,Y=y)$ that would
keep only the information that $X$ and $Y$ share.
Instead, in our approach we utilize the generic ability of (deep) autoencoders
to perform lossy compression of sensory inputs. Autoencoders implicitly assume
that much of the variability of the data can be accounted for by a smaller
number of latent causes. Specifically, in our case we assume that inputs from
different sensory modalities are related since they are consequences of the
same underlying physical causes. For example, a human infant (or robot)
hitting their hand on the table will observe the consequences in multiple
modalities (feeling the contact, seeing the movement, hearing the sound). The
choice of the size of the autoencoder’s bottleneck is analogous to the choice
of $\beta$ in the information bottleneck. Note that many different types of
autoencoders exist including denoising autoencoders [38], sparse autoencoders
[36], contractive autoencoders [37], variational autoencoders [39],
adversarial autoencoders [40], etc. In order to not distract from our main
point about lossy compression of multimodal signals, we restrict our
experiments to “generic” autoencoders. It should be understood that the
results are expected to generalize to other types of autoencoders or, indeed,
other lossy compression schemes. The novelty of the proposed solution resides
in the fact that it exploits the multimodal organisation of the information
available to an agent to form abstract representations. Furthermore, we
propose a specific cross-modality prediction architecture to distill only the
information that is shared across multiple modalities.
Figure 1: Overview of the approaches, assuming only $2$ modalities ($n=2$). A
baseline experiment, jointly encoding (JE) the dependent vectors $y_{i}$. B
control experiment, jointly encoding the dependent vectors $y_{i}$ but
reconstructing the original data $x_{i}$. C cross-modality prediction
experiment (CM), jointly encoding the predicted vectors
$\tilde{y}_{\smallsetminus i}$. In each schema, the red areas represent the
random neural networks generating dependent vectors (section 3.1.1), the
yellow areas represent the encoding and decoding networks (section 3.1.2), the
green areas represent the readout networks (section 3.1.3), and the orange
area represents the cross-modality prediction networks (section 3.1.4).
## 3 Methods
Our general approach comprises three processing steps. The first step consists
in generating multimodal sensory data. In order to have more control over the
amount of mutual information among the different sensory modalities, we
present a way to generate these signals from noise. We call this type of
data synthetic, as it carries no real-world meaning. For this we use random
multi-layer
neural networks that map independent information sources $x$ onto different
“views” $y$ seen by different sensory modalities. This models the process how
multimodal sensory information is generated from unobserved causes in the
world. Our approach applied to the synthetic data will be explained in sub-
section 3.1. We also test our approach on multimodal data from an actual robot
simulation. This setup will be introduced in sub-section 3.2.
In the second step, we train a neural network autoencoder with varying
capacity, i.e., size of the central bottleneck, to learn a compressed (lossy)
representation $z$ from the concatenation of the multimodal inputs $y$. This
models the process of a developing agent learning an abstract representation
from multimodal sensory information. In the third step we analyze the learned
representation $z$ and measure how much information it retains that is unique
to individual modalities versus shared among multiple modalities. For this we
train a third set of neural networks to reconstruct the original information
sources $x$ from the latent code $z$. The reconstruction error is a proxy for
how much information has been lost during the encoding process. We now explain
the three processing steps in detail starting with the synthetic setup.
### 3.1 Experiments with Synthetic Multimodal Input
#### 3.1.1 Step 1: Generating Synthetic Multimodal Input
We produce the multimodal sensory data in such a way that we can precisely
control the amount of mutual information between the different sensory
modalities. We first define a distribution $p_{m}$ from which we sample
information that is shared by all sensory modalities. $p_{m}$ therefore
represents independent information sources in the world that affect multiple
sensory modalities. We define it as a $d_{m}$-dimensional distribution of
independent standard normal distributions. We then define a second
distribution $p_{e}$, from which the information exclusive to each modality is
sampled. We define $p_{e}$ as a $d_{e}$-dimensional distribution of
independent standard normal Gaussians. For a shared vector $x_{m}\sim p_{m}$
and $n$ vectors $x_{e,i}\sim p_{e}$, $i\in\mathbb{N}_{n-1}$, we can create the
vectors $x_{i}=x_{e,i}\oplus x_{m}$ carrying the information of each modality,
where $\oplus$ is the concatenation operation.
Our sensory modalities do not sample the underlying causes directly (e.g.,
objects and light sources), but indirectly (e.g., images provided by the
eyes). To mimic such an indirect sampling of the world without making any
strong assumptions, we generate the sensory inputs $y$ that the learning agent
perceives via random neural networks. Specifically, we define the input to
modality $i$ as:
$y_{i}=\frac{C\left(x_{i},\theta_{C,i}\right)-\mu_{i}}{\sigma_{i}}\;,$ (3)
where $C$ and $\theta_{C,i}$ are the input construction network and its
weights for the modality $i$ and $\mu_{i}$ and $\sigma_{i}$ are constants
calculated to normalize the components of $y_{i}$ to zero mean and unit
variance.
Tuning the amount of mutual information between the vectors $y_{i}$ is done by
changing the dimensionalities $d_{m}$ and $d_{e}$ of the vectors $x_{m}$ and
$x_{e,i}$, respectively. The amount of information preserved from the vectors
$x_{i}$ in the vectors $y_{i}$ depends on the dimension $d_{y}$ of the vectors
$y_{i}$. We define $d_{y}$ to be proportional to the dimension
$d_{x}=d_{m}+d_{e}$:
$d_{y}=k\times d_{x}\;,$ (4)
where $k\gg 1$. This ensures that the sensory inputs $y_{i}$ essentially
retain all information from the sources $x_{m}$ and $x_{e,i}$.
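The whole of Step 1 fits in a short plain-Python sketch. The single random tanh layer standing in for the construction network $C$, the dimensions, and the batch size are illustrative choices of ours:

```python
import math
import random
import statistics

random.seed(0)
d_m, d_e, n_mod, k = 3, 2, 2, 4    # shared dim, exclusive dim, modalities, expansion
d_x = d_m + d_e
d_y = k * d_x                      # Eq. (4): d_y = k * d_x, with k >> 1 in the paper

def random_net(d_in, d_out):
    """A fixed random tanh layer standing in for the construction network C."""
    W = [[random.gauss(0, 1 / math.sqrt(d_in)) for _ in range(d_in)]
         for _ in range(d_out)]
    return lambda x: [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W]

nets = [random_net(d_x, d_y) for _ in range(n_mod)]

# Draw a batch of multimodal samples: x_i is x_{e,i} concatenated with x_m.
batch = []
for _ in range(256):
    x_m = [random.gauss(0, 1) for _ in range(d_m)]       # shared source
    sample = []
    for net in nets:
        x_e = [random.gauss(0, 1) for _ in range(d_e)]   # exclusive source
        sample.append(net(x_e + x_m))                    # y_i before normalization
    batch.append(sample)

# Eq. (3): normalize every component of every y_i to zero mean, unit variance.
for i in range(n_mod):
    for j in range(d_y):
        col = [s[i][j] for s in batch]
        mu, sigma = statistics.fmean(col), statistics.pstdev(col)
        for s in batch:
            s[i][j] = (s[i][j] - mu) / sigma
```

Varying `d_m` against `d_e` now directly tunes how much information the different views share.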
#### 3.1.2 Step 2: Learning an Abstract Representation of the Synthetic
Multimodal Input via Autoencoding
Taken together, the set of vectors $\\{y_{i}\\}_{i\in\mathbb{N}_{n-1}}$
carries the information from each $x_{e,i}$ once and the information from the
mutual vector $x_{m}$ $n$ times. To show that a lossy-compression algorithm
achieves a
better encoding when favoring the reconstruction of the repeated information,
we train an autoencoder to jointly encode the set of the $y_{i}$. We therefore
construct the concatenation $y=y_{0}\oplus\dots\oplus y_{n-1}$ to train the
autoencoder:
$\displaystyle z$ $\displaystyle=E\left(y,\theta_{E}\right)$ (5)
$\displaystyle\tilde{y}$ $\displaystyle=D\left(z,\theta_{D}\right)\;,$ (6)
where $E$ and $\theta_{E}$ are the encoding network and its weights and $D$
and $\theta_{D}$ are the decoding network and its weights. Tuning the
dimension $d_{z}$ of the latent representation $z$ enables us to control the
amount of information lost in the encoding process. The training loss for the
weights $\theta_{E}$ and $\theta_{D}$ is the mean squared error between the
data and its reconstruction, averaged over the component dimension and summed
over the batch dimension:
$\displaystyle L_{E,D}$
$\displaystyle=\sum_{\text{batch}}\frac{1}{nd_{y}}\left(y-\tilde{y}\right)^{2}\;.$
(7)
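The reduction order in Eq. (7), averaging over the $nd_{y}$ components but summing over the batch, can be written out directly. A minimal sketch with made-up numbers:

```python
def autoencoder_loss(y_batch, y_tilde_batch):
    """Eq. (7): squared error averaged over components, summed over the batch."""
    loss = 0.0
    for y, y_tilde in zip(y_batch, y_tilde_batch):
        loss += sum((a - b) ** 2 for a, b in zip(y, y_tilde)) / len(y)
    return loss

# Two samples, each with n * d_y = 4 components (illustrative values).
y_batch       = [[1.0, 0.0, 2.0, 0.0], [0.0, 1.0, 0.0, 1.0]]
y_tilde_batch = [[1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0]]

# First sample contributes 4/4 = 1.0, second contributes 2/4 = 0.5.
assert autoencoder_loss(y_batch, y_tilde_batch) == 1.5
```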
#### 3.1.3 Step 3: Quantifying Independent and Shared Information in the
Learned Latent Representation
Finally, in order to measure what information is preserved in the encoding
$z$, we train readout neural networks to reconstruct the original data $x_{m}$
and the vectors $x_{e,i}$:
$\displaystyle\tilde{x}_{m}$ $\displaystyle=R_{m}\left(z,\theta_{m}\right)$
(8) $\displaystyle\tilde{x}_{e,i}$
$\displaystyle=R_{e}\left(z,\theta_{R,i}\right)\text{,}$ (9)
where $R_{m}$ and $\theta_{m}$ are the mutual information readout network and
its weights, and the $R_{e}$ and $\theta_{R,i}$ are the exclusive information
readout networks and their weights.
The losses for training the readout operations are the mean squared errors
between the readout and the original data summed over the batch dimension and
averaged over the component dimension:
$L_{m}=\frac{1}{d_{m}}\sum_{\text{batch}}\left(\tilde{x}_{m}-x_{m}\right)^{2}\quad\text{and}$ (10)
$L_{e,i}=\frac{1}{d_{e}}\sum_{\text{batch}}\left(\tilde{x}_{e,i}-x_{e,i}\right)^{2}\;\text{.}$ (11)
Finally, once the readout networks are trained, we measure the average
per-data-point mean squared errors
$r_{m}=\frac{1}{d_{m}}\mathbb{E}\left[\left(\tilde{x}_{m}-x_{m}\right)^{2}\right]\quad\text{and}$ (12)
$r_{e}=\frac{1}{d_{e}}\mathbb{E}\left[\left(\tilde{x}_{e,i}-x_{e,i}\right)^{2}\right]\;\text{,}$ (13)
serving as a measure of the portion of the mutual and exclusive data retained
in the encoding $z$.
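To make the meaning of $r_{m}$ and $r_{e}$ concrete, the following hypothetical sketch evaluates them for an idealized code that retains only the shared part of the data (all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
d_m, d_e, N = 4, 4, 100_000
x_m = rng.standard_normal((N, d_m))
x_e = rng.standard_normal((N, d_e))

def readout_error(x_true, x_pred):
    """Eqs. (12)-(13): squared error averaged over components and over data points."""
    return float(np.mean((x_pred - x_true) ** 2))

# A code that retains only the shared part: x_m can be read out perfectly, while the
# best code-independent readout of x_e is its mean (zero for standard normal data).
r_m = readout_error(x_m, x_m)                  # perfect retention
r_e = readout_error(x_e, np.zeros_like(x_e))   # close to 1 for unit-variance data
```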
As a control condition, we also study a second encoding mechanism, where,
instead of reconstructing the dependent vectors $y_{0}\oplus\dots\oplus
y_{n-1}=y$, the decoder part reconstructs the original data
$x_{0}\oplus\dots\oplus x_{n-1}=x$. The loss for training the encoder and
decoder networks $E$ and $D$ from equation 7 then becomes:
$L_{E,D}=\sum_{\text{batch}}\frac{1}{nd_{x}}\left(x-\tilde{x}\right)^{2}\;\text{.}$
(14)
#### 3.1.4 An Alternative to Step 2: Isolating the Shared Information
We also compare the previous approach to an alternative architecture, designed
specifically to isolate the information shared between the modalities. Let
$\left(A,B\right)$ be a pair of random variables defined over the space
$\mathcal{A}\times\mathcal{B}$ with unknown joint distribution
$P\left(A,B\right)$ and marginal distributions $P\left(A\right)$ and
$P\left(B\right)$. Determining the mutual information between the variables
$A$ and $B$ amounts to finding any one of $P\left(A,B\right)$,
$P\left(A|B\right)$ or $P\left(B|A\right)$. With no other assumptions, this
requires sampling many times from the joint distribution
$P\left(A,B\right)$. We instead make the strong assumption that the
conditional probabilities are normal distributions with a fixed
standard deviation $\sigma$:
$P\left(B=b|A=a\right)=\mathcal{N}\left(\mu\left(a\right),\sigma,b\right)\;.$ (15)
We can then approximate the function $\mu\left(a\right)$ with a neural
network $M\left(a,\theta_{M}\right)$ that maximizes the probability
$P\left(B=b|A=a\right)$; $\mu\left(a\right)$ thus represents the most likely
$b$ associated with $a$ under the fixed-variance normal assumption. The
network is trained by minimizing the negative log-likelihood, which is
equivalent to minimizing a mean squared error loss:
$L_{M}=-\mathbb{E}_{a,b\sim P\left(A,B\right)}\left[\log\left(\mathcal{N}\left(\mu,\sigma,b\right)\right)\right]$ (16)
$\phantom{L_{M}}=\mathbb{E}_{a,b\sim P\left(A,B\right)}\left[\left(\mu-b\right)^{2}\right]\cdot K_{1}+K_{2}$ (17)
where $K_{1}$ and $K_{2}$ are constants depending only on $\sigma$.
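The equivalence between Eqs. (16) and (17) can be checked numerically; the explicit values $K_{1}=1/(2\sigma^{2})$ and $K_{2}=\log(\sigma\sqrt{2\pi})$ are our reading of the constants, which the text leaves implicit:

```python
import math

def gaussian_nll(mu, sigma, b):
    """Per-sample negative log-likelihood from Eq. (16): -log N(mu, sigma; b)."""
    return 0.5 * ((b - mu) / sigma) ** 2 + math.log(sigma * math.sqrt(2.0 * math.pi))

# Eq. (17): the NLL is an affine function of the squared error, with
# K1 = 1 / (2 sigma^2) and K2 = log(sigma * sqrt(2 pi)) (our explicit reading).
sigma = 0.5
K1 = 1.0 / (2.0 * sigma ** 2)
K2 = math.log(sigma * math.sqrt(2.0 * math.pi))
```

Since $K_{1}>0$, minimizing the NLL and minimizing the expected squared error $(\mu-b)^{2}$ yield the same optimum.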
More concretely and using the notation from the first architecture, we define
for each modality $i$ a neural network $M\left(y_{i},\theta_{M_{i}}\right)$
learning to predict all other modality vectors $y_{j},j\neq i$. The loss for
the weights $\theta_{M_{i}}$ is defined as
$L_{M_{i}}=\sum_{\text{batch}}\frac{1}{\left(n-1\right)d_{y}}\left(y_{\smallsetminus i}-\tilde{y}_{\smallsetminus i}\right)^{2}$ (18)
with
$y_{\smallsetminus i}=\bigoplus_{j\neq i}y_{j}$ (19)
the concatenation of all vectors $y_{j}$ for $j\neq i$ and
$\tilde{y}_{\smallsetminus i}$ the output of the network. We then consider the
concatenation of the $\tilde{y}_{\smallsetminus i}$ for all $i$ as a high-
dimensional code of the shared information. This code is then compressed using
an autoencoder, similarly to the description in Section 3.1.2. We vary the
dimension of the encoder’s latent code. Finally, similarly to the first
approach, we train readout networks from the compressed latent code to
determine how mutual and exclusive information are preserved in the process.
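The targets and loss of Eqs. (18)-(19) can be sketched as follows; perfect predictions stand in here for the (trained) network outputs $\tilde{y}_{\smallsetminus i}$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d_y = 3, 10
y = [rng.standard_normal(d_y) for _ in range(n)]

def y_without(i):
    """Eq. (19): the concatenation of all y_j with j != i, the target of network M_i."""
    return np.concatenate([y[j] for j in range(n) if j != i])

def cm_loss(i, y_tilde):
    """Eq. (18): squared error averaged over the (n-1)*d_y components, per sample."""
    return float(np.mean((y_without(i) - y_tilde) ** 2))

# The high-dimensional code of the shared information is the concatenation of all
# cross-modality predictions; exact targets stand in for the network outputs here.
code = np.concatenate([y_without(i) for i in range(n)])
```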
Overall, this way of processing the data is analogous to the baseline
experiment in that the cross modality prediction networks and the subsequent
auto-encoder, when considered together, form a network that transforms the
vectors $y_{i}$ into themselves. Together, these two components can thus be
considered as an auto-encoder, subject to a cross-modality prediction
constraint.
#### 3.1.5 Neural Network Training
In the following, we compare three architectures against each other (compare
Fig. 1):
* The baseline architecture (JE), simply auto-encoding the vectors $y_{i}$ jointly (cf. Fig. 1A).
* The control condition with a simpler encoding task, where the vectors $y_{i}$ are encoded into a latent code $z$, from which the decoder tries to reconstruct the original vectors $x_{i}$ from which the inputs $y_{i}$ were generated (cf. Fig. 1B).
* The alternative architecture (CM), where for each modality a neural network tries to predict all other modalities, and all resulting predictions are then jointly encoded, similarly to the baseline architecture (cf. Fig. 1C).
We will now describe the training procedure and implementation details. In
order to show which information is preferentially preserved by the
encoding process, we measure the quality of the readouts obtained as we vary
the dimension of the latent vector $d_{z}$. To this end, for each dimension
$d_{z}\in\left[1;d_{z,max}\right]$, we successively train the cross modality
prediction networks (experiment C only), the autoencoder weights $\theta_{E}$
and $\theta_{D}$ and the readout weights $\theta_{m}$ and $\theta_{R,i}$. Once
training is completed, we measure the average mean squared error of the
readouts $\tilde{x}_{m}$ and $\tilde{x}_{e,i}$.
We choose the distributions of the vectors $x_{m}$ and $x_{e,i}$ to be
multivariate standard normal with a zero mean and unit variance. Therefore, a
random guess would score an average mean squared error of $1$. Each experiment
is repeated $3$ times and results are averaged.
The neural networks for the input construction $C$, cross-modality prediction
$M$, encoding $E$, decoding $D$, mutual readout $R_{m}$, and exclusive readout
$R_{e}$ all have three fully-connected layers. The first two layers always
have a fixed dimension of $200$ and use a ReLU non-linearity. The final
layer is always linear; its dimension for each network is reported in Table I.
Network | $M$ | $C$ | $E$ | $D$ | $R_{m}$ | $R_{e}$
---|---|---|---|---|---|---
Dimension | $\left(n-1\right)\times d_{y}$ | $d_{y}$ | $d_{z}$ | $n\times d_{y}$ | $d_{m}$ | $d_{x}$
TABLE I: Dimension of the last layer of each neural network
For each model architecture A, B, or C, we show the effect of varying the
ratio between mutual and exclusive data and that of varying the number of
modalities. The default experiment used $d_{e}=4$, $d_{m}=4$, $n=2$, $k=10$.
We then varied $d_{e}\in\{4,10,16,22\}$ or $n\in\{2,3,4,5\}$, keeping all
other parameters fixed.
Each network is trained on $2500$ batches of data of size $128$ with a
learning rate of $10^{-3}$ and using the Adam algorithm [11].
### 3.2 Experiments with Multimodal Input from a Robot Simulation
#### 3.2.1 Step 1: Generating Multimodal Input from the Robot Simulation
Figure 2: High resolution image of the $2$ robot arms in the simulated
environment. The images in the dataset have a resolution of $32$ by $64$
pixels only.
In order to validate our approach in a more realistic setting, we applied it
to data generated from a robot simulator, in which we placed two
$7$-degree-of-freedom robot arms side by side (see Fig. 2). We then generated a dataset
comprised of pictures of the $2$ arms, representing the visual modality, and
of the joint positions and speeds, representing the proprioceptive modality.
It has a total size of $2{,}000{,}000$ samples. To generate the dataset, we
sampled random target joint positions uniformly in the joints’ motion range,
and we simulated $10$ iterations of $0.2$ seconds to let the agent reach the
random target using its position PID controllers. At each iteration, a
snapshot of the vision sensors and the joint sensors is recorded. Note how
the position information is present both in the visual and proprioceptive
modalities. However, as each data-sample is composed of a single image, the
velocity information is present only in the proprioceptive modality. To also
have an information stream that is present only in the visual modality, we
decided to provide the encoding networks with only the
proprioceptive information from one of the two arms (the right arm). Thus, the
position information of the other arm (left arm) is only available through the
visual information. Furthermore, by doing so, the velocity information about
the left arm is present in neither of the two modalities and thus serves as a
control factor. Finally, the dataset also contains records of the end
effectors’ positions of the two arms. We consider the end effector position of
the right arm as being implicitly part of the proprioceptive modality, as it
can be accurately deduced from the position information, while not directly
feeding it into the networks. A summary of the information available to each
modality is provided in Fig. 3. In Sec. 3.1.1, we named the vectors
representing the different modalities $y_{i}$. When dealing with the realistic
data, we will use $y_{0}=y_{v}$ for the visual modality and $y_{1}=y_{p}$ for
the proprioceptive one. The vector $y_{p}$ is $z$-scored, i.e., it has zero
mean and unit standard deviation. The vector $y_{v}$ is normalized such that
the pixel values lie in $\left[-1,1\right]$.
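A minimal sketch of the two normalizations, applied to a hypothetical batch (the proprioceptive dimension and the $[0,255]$ raw pixel range are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical batch: raw joint readings and 32x64 grayscale frames in [0, 255].
y_p_raw = rng.normal(loc=2.0, scale=5.0, size=(1000, 28))
frames = rng.integers(0, 256, size=(1000, 32, 64)).astype(np.float64)

# z-score the proprioception: zero mean and unit standard deviation per component.
y_p = (y_p_raw - y_p_raw.mean(axis=0)) / y_p_raw.std(axis=0)

# Normalize the visual modality so that pixel values lie in [-1, 1].
y_v = frames / 127.5 - 1.0
```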
We will now redefine steps 2 and 3, as well as the alternative to step 2, for
this dataset. The main difference from the synthetic setting is that the
visual information is processed with convolutional neural networks.
Moreover, we propose to compare two ways of jointly encoding the modalities,
which we refer to as options. The default option is analogous to the way the
synthetic data is encoded, with the difference that the visual information is
processed by a convolutional neural network. The second option, named the
Alternative Encoding Scheme (AES), consists in learning a latent representation
of the visual information with a convolutional autoencoder prior to jointly
encoding the latent visual code with the proprioceptive information.
Figure 3: Schema representing the information available to each modality for
the robot dataset. $\varphi$ and ${\dot{\varphi}}$ denote the
positions and velocities of the joints, respectively, and $X_{L}$ and $X_{R}$
the left / right parts of the visual information. Note that the positions and
velocities of the left arm are not part of the proprioceptive modality. This
way, the information about the position of the left arm is available only
through the vision sensor. It also results that the velocity information of
the left arm is present in neither of the two modalities, and thus serves as a
control factor in our experiments.
#### 3.2.2 Step 2: Learning an Abstract Representation of the Robot Data
Similar to Sec. 3.1.2, the $y_{v}$ and $y_{p}$ vectors are jointly encoded and
decoded with an autoencoder $\left(E,\theta_{E},D,\theta_{D}\right)$. This
time, however, the encoding and decoding steps are divided into two parts:
$z_{\text{pre}}=E_{v}\left(y_{v},\theta_{E_{v}}\right)\oplus E_{p}\left(y_{p},\theta_{E_{p}}\right)$ (20)
$z=E_{\text{pre}}\left(z_{\text{pre}},\theta_{E_{\text{pre}}}\right)$ (21)
for the encoder and
$z_{\text{post}}=D_{\text{post}}\left(z,\theta_{D_{\text{post}}}\right)$ (22)
$\tilde{y}_{v}=D_{v}\left(z_{{\text{post}},v},\theta_{D_{v}}\right)$ (23)
$\tilde{y}_{p}=D_{p}\left(z_{{\text{post}},p},\theta_{D_{p}}\right)$ (24)
for the decoder, where $z_{{\text{post}},v}$ and $z_{{\text{post}},p}$ form a
partition of $z_{\text{post}}$. The index within $z_{\text{post}}$ at which
the split occurs is a hyper-parameter. The loss is then defined as:
$L_{E,D}=\frac{1}{2d_{p}}\sum_{\text{batch}}\left(\tilde{y}_{p}-y_{p}\right)^{2}+\frac{1}{2d_{v}}\sum_{\text{batch}}\left(\tilde{y}_{v}-y_{v}\right)^{2}\;,$ (25)
where $d_{p}$ and $d_{v}$ are the sizes of the proprioception and vision
tensors, respectively.
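The two-part encoding and decoding of Eqs. (20)-(24) can be sketched with random linear layers standing in for the trained networks; the values of $d_{p}$ and $d_{z}$ below are illustrative, while the $100$-dimensional visual pre-code follows Sec. 3.2.5:

```python
import numpy as np

rng = np.random.default_rng(4)
d_v_code, d_p, d_z = 100, 28, 16   # 100-dim visual pre-code (Sec. 3.2.5); d_p, d_z assumed
split = d_v_code                   # hyper-parameter: index at which z_post is partitioned

def linear(d_in, d_out):
    """A random linear layer standing in for the trained MLPs E_pre and D_post."""
    W = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)
    return lambda x: W @ x

E_pre = linear(d_v_code + d_p, d_z)
D_post = linear(d_z, d_v_code + d_p)

z_v = rng.standard_normal(d_v_code)            # placeholder output of the visual encoder E_v
y_p = rng.standard_normal(d_p)                 # E_p = id, as reported in the text
z = E_pre(np.concatenate([z_v, y_p]))          # Eqs. (20)-(21)
z_post = D_post(z)                             # Eq. (22)
z_post_v, z_post_p = z_post[:split], z_post[split:]  # partition fed to D_v and D_p
```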
Dividing the encoding and decoding process into two parts makes it possible
to use convolutional and deconvolutional networks $E_{v}$ and $D_{v}$ to
encode and decode the visual information. We did not find any difference
between pre-encoding the proprioceptive information with an MLP and feeding
it directly to the $E_{\text{pre}}$ network; we therefore report the
results for $E_{p}=\text{id}$ and $D_{p}=\text{id}$.
In the case of the Alternative Encoding Scheme (AES) option, the $y_{v}$ is a
compressed representation of the visual information and there is no need to
process it with a convolutional neural network. In that case, we also set
$E_{v}=\text{id}$ and $D_{v}=\text{id}$, meaning that the architecture of the
networks in that case is the same as for the synthetic dataset.
#### 3.2.3 Step 3: Deciphering the Latent Code
Similarly to Sec. 3.1.3, we train readout neural networks to decipher the
information contained in the encoding $z$. This time, however, since we do not
have access to the original vectors $x$ which induced the vectors $y_{v}$ and
$y_{p}$, the readout operation aims at reconstructing the proprioceptive
information from both arms $y_{\text{target}}=y_{{\text{pos}},l}\oplus
y_{{\text{vel}},l}\oplus y_{{\text{ee}},l}\oplus y_{{\text{pos}},r}\oplus
y_{{\text{vel}},r}\oplus y_{{\text{ee}},r}$. The readout operation is written
as:
$y_{\text{readout}}=R\left(z,\theta_{\text{readout}}\right)$ (26)
and its loss is:
$L_{\text{readout}}=\frac{1}{d_{\text{target}}}\sum_{\text{batch}}\left(y_{\text{readout}}-y_{\text{target}}\right)^{2}\;.$ (27)
#### 3.2.4 An Alternative to Step 2 for the Robot Data: Isolating the Shared
Information
Finally, similarly to Sec. 3.1.4, we propose an alternative to step 2 that
aims to isolate only the information shared by both modalities. This is
done by training two cross-modality prediction networks:
$\tilde{y}_{\smallsetminus p}=M_{v}\left(y_{p},\theta_{M_{v}}\right)\quad\text{and}$ (28)
$\tilde{y}_{\smallsetminus v}=M_{p}\left(y_{v},\theta_{M_{p}}\right)$ (29)
with the losses
$L_{M_{v}}=\sum_{\text{batch}}\frac{1}{d_{v}}\left(\tilde{y}_{\smallsetminus p}-y_{v}\right)^{2}\quad\text{and}$ (30)
$L_{M_{p}}=\sum_{\text{batch}}\frac{1}{d_{p}}\left(\tilde{y}_{\smallsetminus v}-y_{p}\right)^{2}\;.$ (31)
Again, like in Sec. 3.2.2, the representations $\tilde{y}_{\smallsetminus p}$
and $\tilde{y}_{\smallsetminus v}$ are encoded using the autoencoder networks
$E_{v}$, $E_{p}$, $E_{\text{pre}}$, $D_{\text{post}}$, $D_{v}$ and $D_{p}$:
$z_{\text{pre}}=E_{v}\left(\tilde{y}_{\smallsetminus p},\theta_{E_{v}}\right)\oplus E_{p}\left(\tilde{y}_{\smallsetminus v},\theta_{E_{p}}\right)$ (32)
$z=E_{\text{pre}}\left(z_{\text{pre}},\theta_{E_{\text{pre}}}\right)$ (33)
$z_{\text{post}}=D_{\text{post}}\left(z,\theta_{D_{\text{post}}}\right)$ (34)
$\tilde{y}_{v}=D_{v}\left(z_{{\text{post}},v},\theta_{D_{v}}\right)$ (35)
$\tilde{y}_{p}=D_{p}\left(z_{{\text{post}},p},\theta_{D_{p}}\right)\;.$ (36)
For the AES option, since $y_{v}$ is a one-dimensional vector, we set
$E_{v}=D_{v}=E_{p}=D_{p}=\text{id}$.
#### 3.2.5 Description of the Networks
In the case of the default option, the network $E_{v}$ is a convolutional
neural network composed of $2$ convolutional layers with kernel size $4$ and
stride $2$ followed by a dense layer with output size $100$. The network
$E_{\text{pre}}$ is a $3$-layered MLP where all layer sizes but the last are
$200$. The last layer uses a linear activation function and has a size
$d_{z}$. The network $D_{\text{post}}$ is a $3$-layered MLP where all layer
sizes but the last are $200$. The last layer uses a linear activation function
and has a size $100+d_{p}$. The network $D_{v}$ is a deconvolutional neural
network composed of a dense layer of size $8192$ followed by two transposed
convolutional layers with kernel sizes $4$ and strides $2$. Finally, the
readout network is also a $3$-layered MLP where all layer sizes but the last
are $200$. The last layer uses a linear activation function and has a size
$d_{\text{target}}$.
For the cross-modality prediction, the network $M_{v}$ is a deconvolutional
neural network composed of a dense layer of size $8192$ followed by two
transposed convolutional layers with kernel size $4$ and stride $2$ and the
network $M_{p}$ is a convolutional neural network composed of two
convolutional layers with kernel size $4$ and stride $2$ followed by a dense
layer of size $d_{p}$.
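As a consistency check of these sizes (assuming padding $1$, which the text does not state), the spatial dimensions after the two strided convolutions can be computed with the standard output-size formula; $64$ feature channels would then match the $8192$-unit dense layers:

```python
def conv_out(size, kernel=4, stride=2, padding=1):
    """Standard output-size formula for a strided convolution (padding assumed)."""
    return (size + 2 * padding - kernel) // stride + 1

# Two conv layers with kernel 4 and stride 2 applied to a 32x64 frame.
h, w = 32, 64
for _ in range(2):
    h, w = conv_out(h), conv_out(w)
# The spatial size shrinks to 8x16; 64 channels would give 8 * 16 * 64 = 8192 units.
```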
Finally, as stated above, in the case of the AES option, $y_{v}$ is a learned
code of size $100$ representing the visual information. In this case we set
$E_{v}=D_{v}=E_{p}=D_{p}=\text{id}$. The network learning the code is a
convolutional autoencoder composed of $2$ convolutions, one dense layer of
size $100$, one dense layer of size $8192$ and $2$ transposed convolutions.
## 4 Results
### 4.1 Lossy Compression of Multimodal Input Preferentially Encodes
Information Shared Across Modalities
Figure 4: Each plot represents the reconstruction error of the readout
operation for the exclusive data $r_{e}$ in blue, and for the shared data
$r_{m}$ in red, as a function of the auto-encoder latent dimension. The dotted
vertical line indicates the latent dimension matching $nd_{e}+d_{m}$. The data
point for a latent dimension of $0$ is theoretically inferred to be equal to
$1.0$ (random guess). The four plots in one row correspond to different
dimensions $d_{e}$ of the exclusive data. The results are presented for the
three architectures A, B and C.
Figure 4 shows the reconstruction errors for exclusive vs. shared information
as a function of $d_{z}$, the size of the autoencoder’s bottleneck, for the
three different architectures. Each data point represents the mean of $3$
repetitions of the experiment, and the shaded region around each curve
indicates the standard deviation.
The grey dotted vertical line indicates the latent code dimension $d_{z}$
matching the number of different univariate Gaussian distributions used for
generating the correlated vectors $y_{i}$, $d_{min}=d_{m}+nd_{e}$. Assuming
that each dimension in the latent vector can encode the information in one
normally distributed data source, when $d_{z}=d_{min}$ both the exclusive data
and the shared data can theoretically be encoded with minimal information
loss. Knowing that random guesses would score a reconstruction error of $1.0$,
we can augment the data with the theoretical values $r_{m}=1$ and $r_{e}=1$
for $d_{z}=0$.
The results for the JE architecture (cf. Fig. 1A), jointly encoding the
correlated vectors $y_{i}$, show that the data shared by all modalities is
consistently better reconstructed by the autoencoder for all latent code sizes
$d_{z}$. In particular, this is also true for over-complete codes when
$d_{z}>d_{min}$. Information loss in that regime is due to imperfections of
the function approximator. When the code dimension is below $d_{min}$, the
information loss is greater, as not all of the data can pass through the
autoencoder’s bottleneck. These results confirm our intuition from the
Introduction that shared information should be preferentially encoded during
lossy compression of multimodal inputs.
This information filtering is a consequence of neural networks’ continuity,
which implies topological constraints on the functions that they can learn.
Indeed, while there exist non-continuous functions whose image has a greater
dimensionality than their domain, continuity enforces that the dimension of
the image is less than or equal to that of the domain. As a consequence, the
dimensionality of the image of the decoder network of an autoencoder is less
than or equal to the dimensionality of the latent code. If
the dimension of the latent code is itself lower than that of the data, as can
be enforced by a bottleneck, it follows that the data and its reconstruction
sit on manifolds of different dimensionality, implying information loss.
In the under-complete regime, $d_{z}<d_{min}$, the autoencoder shows a
stronger preference for retaining the shared data, partly filtering out the
exclusive data. The chief reason for this is that the shared data is
essentially counted $n$ times in the network’s reconstruction loss, while the
exclusive data is counted only once. As the dimension of the exclusive data
$d_{e}$ increases, we still observe the two training regimes for $d_{z}$ less
or greater than $d_{min}$, even though the boundary between both tends to
vanish as we reach the network’s capacity.
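This weighting argument can be verified numerically. In the sketch below, the construction $C$ is taken to be the identity (a simplifying assumption), so a perturbation of the shared part appears in all $n$ blocks of $y$ and incurs exactly $n$ times the loss of an equal perturbation of one exclusive part:

```python
import numpy as np

n, d_m, d_e = 3, 4, 4
d_y = d_m + d_e  # identity construction: y_i = x_m concatenated with x_e,i (assumption)
rng = np.random.default_rng(5)
x_m = rng.standard_normal(d_m)
x_e = rng.standard_normal((n, d_e))
y = np.concatenate([np.concatenate([x_m, x_e[i]]) for i in range(n)])

def loss(y_tilde):
    """Eq. (7) for a single sample: squared error averaged over all n*d_y components."""
    return float(np.mean((y - y_tilde) ** 2))

eps = 0.1
# Corrupt the shared part: the same error appears in every one of the n blocks.
y_shared_off = y.copy()
for i in range(n):
    y_shared_off[i * d_y : i * d_y + d_m] += eps
# Corrupt one exclusive part: the error appears in a single block only.
y_excl_off = y.copy()
y_excl_off[d_m : d_m + d_e] += eps
# Shared errors are penalized n times as heavily as exclusive ones.
```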
The results for the second (control) architecture (cf. Fig. 1B), jointly
encoding the correlated vectors $y_{i}$ by reconstructing the original data
vectors $x_{i}$ rather than $y_{i}$, are similar in nature to those of the JE
experiment. The main differences occur at low values of $d_{z}$. The readout
quality of the exclusive data is overall higher and that of the shared data
lower.
Finally, results for the CM architecture (cf. Fig. 1C), encoding the cross-
modality predictions, are significantly different and confirm that it is
possible to isolate the mutual information between different data sources.
Notice how for $d_{e}=4$, the readout error of the exclusive data, $r_{e}$,
seems to decrease slowly as the dimension $d_{z}$ increases. We verified that
the values of $r_{e}$ remain high for high values of $d_{z}$, measuring
reconstruction errors converging around $0.8$. Thus, this architecture is more
effective in stripping away any exclusive information. This is because, by
definition, exclusive information cannot be encoded during the initial cross-
modality prediction (Fig. 1C, orange part).
### 4.2 Increasing the Number of Modalities Promotes Retention of Shared
Information
Figure 5: Similarly to Fig. 4, each plot represents the reconstruction error
of the readout operation for the exclusive data $r_{e}$ in blue, and for the
shared data $r_{m}$ in red, as a function of the auto-encoder latent
dimension. The four plots correspond to a different number $n$ of modalities.
The results are presented for the three architectures A, B and C.
Figure 5 shows the results for varying the number of modalities. For the JE
and control architectures, they show how increasing the number of modalities
reinforces the retention of the shared data over the exclusive data. Note how
the reconstruction errors for the shared information (red curves) decay more
rapidly for higher numbers of modalities $n$. This is in contrast to the CM
architecture, where results are very similar for different numbers of
modalities. This is because the initial cross-modality prediction network
(Fig. 1C, orange part) effectively removes all modality-specific information,
leaving essentially the same encoding task for the subsequent autoencoder
despite the different numbers of modalities $n$.
### 4.3 Results on the Robot Dataset
Figure 6: Readout reconstruction errors for the JE and CM approaches as a
function of the size of the bottleneck of the encoding. Blue and red curves
correspond to right and left arms respectively, solid lines correspond to
information present in both modalities, dashed lines to information present in
one modality only, and the dotted line to information present in none of the
modalities.
Figure 6 shows the readout errors for the proprioceptive information from both
arms, for the JE and CM architecture, as a function of the size of the latent
code. In both cases, the velocity information about the left arm, which is
present in neither of the $2$ modalities and thus serves as a control factor,
is not recovered for any latent code size. This is indicated by a chance-level
reconstruction quality of $1.0$. For the JE architecture, the information
which is present in both modalities (i.e. the right arm’s joint positions and
the right end-effector position) is well reconstructed. For the CM
architecture, however, the right arm’s joint positions are recovered with an
MSE of at best around $0.2$. The reason for this is that the position of some
of the joints is not visible at all in the image frames (for example, the
last joint of the arm, which rotates the wrist). Furthermore, in some positions
occlusion effects occur. The information about these joints is therefore not
present in the mutual information and is thus filtered out by the cross-
modality prediction. The joint velocity information inherently has a low
entropy, making it easier to compress. As a result, the JE approach is very
good at recovering this information. The CM approach, however, has properly
filtered it out, even though some of the information seems to have leaked
through during the proprioception $\rightarrow$ vision cross-modality
prediction. This is understandable given the large increase in dimensionality taking place in
this operation. For the CM approach, the position of the left arm, which is
present only in the visual modality, is properly filtered out with MSEs
greater than $0.7$ for all latent sizes. In the JE approach, however, the
left end-effector position is recovered only for latent sizes greater than
$14$, which corresponds to the point where the proprioception of the right arm
is fully represented in the latent code.
Figure 7: Reconstruction error of the visual modality for the JE and CM
approaches as a function of the size of the bottleneck of the encoding. The
error is split in two parts corresponding to the left and right halves of the
frames. The results show that the pixels which share information with the
proprioceptive modality are better reconstructed.
Figure 7 shows the reconstruction error of the visual information as a
function of the latent code size for the JE and CM approaches. The
reconstruction error is split into two parts corresponding to the left and
right half of the frames. Note that the chance reconstruction error is around
$0.027$ (MSE). The results show that in both approaches, the pixels
corresponding to the right arm are better reconstructed. In the JE approach,
when the latent code size allows it, the entirety of the frame is encoded
while for the CM approach, the left half of the frame is never encoded.
Figure 8: For each bottleneck size in $\{1,10,20,64\}$, we show an image and
its reconstruction through the autoencoder, as well as the mean
reconstruction error map (darker indicates lower error). The JE approach (A)
encodes the visual information about both arms if the information bottleneck
allows for it, while the CM approach (B) reconstructs only one of the two
arms for any bottleneck size.
Finally, Fig. 8 shows concrete reconstructions of images by the encoding
process for bottleneck sizes taken from the set $\{1,10,20,64\}$ for the JE
and CM approaches. The results clearly show that the JE approach (A) goes from
a regime where neither the left nor the right part of the frame is
reconstructed, to a regime where the right part is reconstructed but not the
left, to a regime where the whole frame is correctly reconstructed. For the
CM approach (B), the system only goes through the first two regimes.
## 5 Discussion
Forming abstract representations is critical for higher-level intelligence. We
have argued that the essence of abstraction is the lossy compression of
information — stripping away details to arrive at a representation that
transcends the original context and more easily generalizes to new situations.
In principle, this could be done in many different ways. The critical question
is what information to strip away and what residual information to keep.
Depending on the task, there may be different answers to this question. For
example, in a supervised classification setting a system may learn to strip
away any information that is irrelevant for determining the class label. Or in
a reinforcement learning setting, an agent may attempt to strip away any
information that is not helpful for predicting future rewards.
Here we have focused on an unsupervised approach for the learning of abstract
representations through lossy compression of multimodal information. Our key
result is that lossy compression of multimodal inputs through autoencoding
naturally favors the retention of information that is shared across the
sensory modalities. Such shared information may be particularly useful for
generalizing to new situations.
We first demonstrated our approach using synthetic multimodal data and then
validated it using a simulated embodied agent (two-armed robot). The results
indicate that the approach scales well to a more realistic scenario with
visual and proprioceptive information. However, to compensate for the vastly
different dimensionality of visual and proprioceptive data, we used a more
complex network architecture where visual information was first passed through
several convolutional layers before being integrated with proprioceptive
information.
It is important to stress that different sensory modalities have evolved in
biological organisms (and are used in robots) exactly because they provide
different, complementary information about the world. Discarding information
that is not shared among modalities but unique to a single modality therefore
seems to undermine this modality’s raison d’être. Indeed, we are not arguing
that such information is not useful and should be discarded altogether. What
we are arguing is that such information may be less useful when the goal is to
learn highly abstract and compressed representations of the physical world.
One of the greatest challenges for a developing mind is to make sense of what
William James called the “blooming, buzzing, confusion” of sensations provided
by different modalities, which eventually must become “coalesced together into
one and the same space” [41]. The challenge thus is to identify how the inputs
provided by the different sensory modalities relate to one another, i.e., what
information they share. As we have seen, our generic approach is able to
distill this shared information from raw sensor data.
In this work, we have focused on generic autoencoder networks, as they are
popular tools for dimensionality reduction and learning compressed
representations of sensory signals in many contexts. In deep reinforcement
learning, for example, they are frequently used to learn a compact abstract
representation of high-dimensional (e.g., visual) input. In the future, it
will be interesting to consider extensions of the generic autoencoder
framework such as sparse autoencoders [12, 13] or other forms of regularized
autoencoders such as beta-variational autoencoders [14], or encoder networks
that learn to simultaneously predict rewards to focus limited encoding
resources on relevant aspects of the multimodal sensory inputs that are
associated with rewards.
Our approach to learning abstract representations from multimodal data can
also be related to approaches that learn view-invariant visual object
representations through temporal coherence [16, 15]. In such approaches,
temporal input from a single modality, typically the visual one, is
considered, and a code is learned that is slowly varying, for example through
trace rules or slow feature analysis (SFA). This corresponds to a lossy
compression across time: fast changing information is discarded and slowly
changing information is retained. For example, the fast changing information
may be the pose of the object and the slowly changing information may be the
object’s identity. Both jointly determine the current image. By retaining
information that is slowly varying, a viewpoint invariant representation of
the object’s identity can be learned, which can be used to recognize the
object in different poses. This form of temporal compression of information is
complementary to the multimodal compression we have considered here. In fact,
the simple information theoretic argument from the Introduction applies in the
same way if we replace the visual and haptic inputs $X_{v}$ and $X_{h}$ with,
say, successive inputs from a single sensory modality. Lossy compression of
such data will naturally favor the retention of information that the
successive inputs have in common, i.e., information that is helpful for
predicting $X_{t+1}$ from $X_{t}$ or vice versa. In the future, it will be
interesting to consider learning abstract representations by abstracting
across time and sensory modalities in hierarchical cognitive architectures for
agents that learn abstract dynamic models of their interactions with the
world.
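This argument can be made concrete with a minimal linear sketch (ours, not part of the model used in this work): if successive inputs $x_{t}$ and $x_{t+1}$ share a slowly varying component, the optimal one-dimensional linear lossy code of the pair, i.e., its dominant principal direction, aligns with that shared component.

```python
import random

random.seed(0)

# x_t = s + noise_t, x_{t+1} = s + noise_{t+1}: s is the shared slow component.
pairs = []
for _ in range(20000):
    s = random.gauss(0, 1.0)
    pairs.append((s + random.gauss(0, 0.3), s + random.gauss(0, 0.3)))

# 2x2 sample covariance of the pair (means are ~0 by construction).
n = len(pairs)
c00 = sum(a * a for a, b in pairs) / n
c01 = sum(a * b for a, b in pairs) / n
c11 = sum(b * b for a, b in pairs) / n

# Power iteration: the dominant eigenvector is the direction kept by an
# optimal 1-D linear lossy code (minimum reconstruction error).
v = (1.0, 0.0)
for _ in range(100):
    w = (c00 * v[0] + c01 * v[1], c01 * v[0] + c11 * v[1])
    norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
    v = (w[0] / norm, w[1] / norm)

print(v)  # close to (0.707, 0.707): the code retains the shared component
```

The retained direction weights both time steps equally, so the code keeps the shared slow component and discards the fast, independent noise, mirroring the information-theoretic argument above.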
Multimodal compression as discussed here may also be an effective driver of
intrinsically motivated exploration in infants and robots, reviewed in [26].
Schmidhuber proposed using compression progress as an intrinsic motivation
signal [18]. In our own work, we have proposed the Active Efficient Coding
(AEC) framework, that uses an intrinsic motivation for coding efficiency [19,
20, 21, 22]. AEC is an extension of classic efficient coding ideas [23] to
active perception. Next to learning an efficient code for sensory signals, it
proposes to control behavior in a way to maximize the information coming from
different sources, while reducing the reconstruction loss during lossy
compression. This has been shown to lead to the fully autonomous self-
calibration of active perception systems, e.g., active stereo vision or active
motion vision. However, this approach has not been considered in a multimodal
setting. For example, consider an infant (or a developing robot) moving her
hand in front of her face. In this situation, visual and proprioceptive
signals are coupled. As the hand is felt to move to the left, the visual sense
indicates motion to the left. Thus, the signals can be jointly encoded more
compactly or with less reconstruction loss. An intrinsic motivation trying to
minimize such reconstruction loss will therefore promote behaviors, where
signals from the different modalities are strongly coupled. For example,
banging a toy on the table creates correlated sensations in visual,
proprioceptive, haptic, and auditory modalities and affords strong compression
when jointly encoding these signals. We conjecture that AEC-like intrinsic
motivations may drive infants to engage in such behaviors and may be effective
in guiding open-ended learning in robots who try to understand the world
around them.
## Appendix A Futility of a Symmetric Information Bottleneck
We start from the original Information Bottleneck objective:
$\min_{p(t|x)}I(X;T)-\beta I(T;Y),$ (37)
where $p(T=t|X=x)$ describes the encoding of an input $x$ via its
representation $t$, $I(X;T)$ is the mutual information between $X$ and $T$,
$I(T;Y)$ is the mutual information between $T$ and $Y$, and $\beta$ serves to
balance the two objectives.
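For intuition, the mutual information terms in this objective can be computed directly for small discrete distributions. The following sketch (our illustration; the helper name is ours) evaluates $I(X;T)$ from a joint probability table:

```python
import math

def mutual_information(joint):
    """I(X;T) in nats, for a joint distribution given as {(x, t): p(x, t)}."""
    px, pt = {}, {}
    for (x, t), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        pt[t] = pt.get(t, 0.0) + p
    return sum(p * math.log(p / (px[x] * pt[t]))
               for (x, t), p in joint.items() if p > 0)

# T is a noiseless copy of a fair bit X: I(X;T) = log 2 nats.
copy = {(0, 0): 0.5, (1, 1): 0.5}
# T is independent of X: I(X;T) = 0.
indep = {(x, t): 0.25 for x in (0, 1) for t in (0, 1)}
print(mutual_information(copy), mutual_information(indep))
```

The two extremes bracket the behavior that $\beta$ trades off: a representation $T$ can range from carrying all of a variable's information to carrying none of it.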
We now consider the reverse problem of $T$ trying to extract information from
$Y$ that is useful for predicting $X$. This leads to the following “reverse”
objective:
$\min_{p(t|y)}I(Y;T)-\beta I(T;X),$ (38)
where the roles of $X$ and $Y$ have simply been swapped.
A naive approach to derive a symmetric version of the information bottleneck
is to consider an encoding where $T$ encodes $X$ and $Y$ jointly via a
probability $p(t|x,y)$, while minimizing both the forward and the reverse
functionals:
$\min_{p(t|x,y)}I(X;T)-\beta I(T;Y)+I(Y;T)-\beta I(T;X),$ (39)
which simplifies to:
$\min_{p(t|x,y)}(1-\beta)I(X;T)+(1-\beta)I(Y;T)$ (40)
because of the symmetry of the mutual information (e.g. $I(X;T)=I(T;X)$).
Unfortunately, however, for $\beta<1$ this amounts to minimizing the
information that $T$ contains about $X$ and $Y$, while for $\beta>1$ this
amounts to maximizing the information that $T$ contains about $X$ and $Y$. In
neither case will $T$ contain only the information that is shared between $X$
and $Y$.
## Appendix B Results for the AES Option
Figure 9: Readout reconstruction errors for the JE and CM approaches as a function of the size of the bottleneck of the encoding. Blue and red curves correspond to right and left arms, respectively; solid lines correspond to information present in both modalities, dashed lines to information present in one modality only, and the dotted line to information present in neither modality.

Figure 10: Reconstruction error of the visual modality for the JE and CM approaches as a function of the size of the bottleneck of the encoding. The error is split into two parts, corresponding to the left and right halves of the frames. The results show that the pixels which share information with the proprioceptive modality are better reconstructed.
Figure 11: We show, for each bottleneck size in $\\{1,10,20,64\\}$, an image and its reconstruction through the autoencoder, as well as the mean reconstruction error map (darker indicates lower error). The JE approach (A) encodes the visual information about both arms if the information bottleneck allows for it, while the CM approach (B) reconstructs only one of the two arms for any bottleneck size.
Here we present the results for the AES option. Overall, the results for the
default and the AES options are similar, thus showing that the underlying
principle is independent of the implementation details. Compared to the
default option, the information about the left arm seems to be marginally
better filtered out in the AES option for the CM approach. Figure 9 shows the
reconstruction error of the various components of the proprioceptive data of
both arms as a function of the bottleneck size. Similarly to Fig. 6, the
information contained in both the visual and the proprioceptive modalities is
better reconstructed than the information present in only one of the
modalities. Figure 10 shows the reconstruction error of the data from the
vision sensor as a function of the bottleneck size. The two curves correspond
to the left and right half of the frames. Similarly to the results presented
in Fig. 7, the part of the image corresponding to the arm which is jointly
encoded is better reconstructed than the other. In the CM approach, the other
arm is not reconstructed at all. Finally, Fig. 11 shows concrete
reconstructions obtained from the encoding process, and the mean
reconstruction error map for bottleneck sizes in $\\{1,10,20,64\\}$. For Figs.
10 and 11, the frame reconstruction is
$B\left(D_{\text{post}}\left(z,\theta_{D_{\text{post}}}\right),\theta_{B}\right)$
with $B$ the decoder part of the autoencoder learning the latent
representation $y_{v}$.
## Acknowledgments
This work was supported by the European Union’s Horizon 2020 Research and
Innovation Program under Grant Agreement No 713010 (GOAL-Robots – Goal-based
Open-ended Autonomous Learning Robots). JT acknowledges support from the
Johanna Quandt foundation.
## References
* [1] LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.
* [2] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
* [3] Zhang, X., Zou, J., He, K., & Sun, J. (2015). Accelerating very deep convolutional networks for classification and detection. IEEE transactions on pattern analysis and machine intelligence, 38(10), 1943-1955.
* [4] Su, J., Vargas, D. V., & Sakurai, K. (2019). One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation, 23(5), 828-841.
* [5] Brown, T. B., Mané, D., Roy, A., Abadi, M., & Gilmer, J. (2017). Adversarial patch. arXiv preprint arXiv:1712.09665.
* [6] Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction. MIT press.
* [7] Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. (2013). Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.
* [8] Van Hasselt, H., Guez, A., & Silver, D. (2015). Deep reinforcement learning with double q-learning. arXiv preprint arXiv:1509.06461.
* [9] Lloyd, S. (1982). Least squares quantization in PCM. IEEE transactions on information theory, 28(2), 129-137.
* [10] Kramer, M. A. (1991). Nonlinear principal component analysis using autoassociative neural networks. AIChE journal, 37(2), 233-243.
* [11] Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
* [12] Makhzani, A., & Frey, B. (2013). K-sparse autoencoders. arXiv preprint arXiv:1312.5663.
* [13] Ng, A. (2011). Sparse autoencoder. CS294A Lecture notes, 72(2011), 1-19.
* [14] Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M. & Lerchner, A. (2016). beta-vae: Learning basic visual concepts with a constrained variational framework.
* [15] Einhäuser, W., Hipp, J., Eggert, J., Körner, E., & König, P. (2005). Learning viewpoint invariant object representations using a temporal coherence principle. Biological Cybernetics, 93(1), 79-90.
* [16] Földiák, P. (1991). Learning invariance from transformation sequences. Neural Computation, 3(2), 194-200.
* [17] Berkes, P., & Wiskott, L. (2005). Slow feature analysis yields a rich repertoire of complex cell properties. Journal of vision, 5(6), 9-9.
* [18] Schmidhuber, J. (2008, June). Driven by compression progress: A simple principle explains essential aspects of subjective beauty, novelty, surprise, interestingness, attention, curiosity, creativity, art, science, music, jokes. In Workshop on anticipatory behavior in adaptive learning systems (pp. 48-76). Springer, Berlin, Heidelberg.
* [19] Zhao, Y., Rothkopf, C. A., Triesch, J., & Shi, B. E. (2012, November). A unified model of the joint development of disparity selectivity and vergence control. In 2012 IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL) (pp. 1-6). IEEE.
* [20] Vikram, T. N., Teulière, C., Zhang, C., Shi, B. E., & Triesch, J. (2014, October). Autonomous learning of smooth pursuit and vergence through active efficient coding. In 4th International Conference on Development and Learning and on Epigenetic Robotics (pp. 448-453). IEEE.
* [21] Eckmann, S., Klimmasch, L., Shi, B. E., & Triesch, J. (2020). Active efficient coding explains the development of binocular vision and its failure in amblyopia. Proceedings of the National Academy of Sciences, 117(11), 6156-6162.
* [22] Wilmot, C., Shi, B. E., & Triesch, J. (2020, October). Self-Calibrating Active Binocular Vision via Active Efficient Coding with Deep Autoencoders. In International Conference on Development and Learning and on Epigenetic Robotics. IEEE.
* [23] Barlow, H. B. (1961). Possible principles underlying the transformation of sensory messages. Sensory communication, 1, 217-234.
* [24] Bengio, Y., Courville, A., & Vincent, P. (2013). Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35 (8), 1798–1828.
* [25] Weiss, K., Khoshgoftaar, T. M., & Wang, D. (2016). A survey of transfer learning. Journal of Big Data, 3 (1), 9.
* [26] Baldassarre, G., & Mirolli, M. (Eds.). (2013). Intrinsically motivated learning in natural and artificial systems (No. 907). Berlin: Springer.
* [27] Attneave, F. (1954). Some informational aspects of visual perception. Psychological review , 61 (3), 183.
* [28] Hyvärinen, A., & Oja, E. (2000). Independent component analysis: algorithms and applications. Neural networks, 13 (4-5), 411–430.
* [29] Olshausen, B. A., & Field, D. J. (1997). Sparse coding with an overcomplete basis set: A strategy employed by v1? Vision research, 37 (23), 3311–3325.
* [30] Zhao, Y., Rothkopf, C. A., Triesch, J., & Shi, B. E. (2012). A unified model of the joint development of disparity selectivity and vergence control. In 2012 ieee international conference on development and learning and epigenetic robotics (icdl) (pp. 1–6).
* [31] Eckmann, S., Klimmasch, L., Shi, B. E., & Triesch, J. (2020). Active efficient coding explains the development of binocular vision and its failure in amblyopia. Proceedings of the National Academy of Sciences, 117 (11), 6156–6162.
* [32] Rao, R. P., & Ballard, D. H. (1999). Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature neuroscience, 2 (1), 79–87.
* [33] Becker, S., & Hinton, G. E. (1992). Self-organizing neural network that discovers surfaces in random-dot stereograms. Nature, 355 (6356), 161–163.
* [34] Hotelling, H. (1936). Relations between two sets of variates. Biometrika, 28(3–4), 321–377. doi: 10.1093/biomet/28.3-4.321
* [35] Tishby, N., Pereira, F. C., & Bialek, W. (1999). The information bottleneck method. Proceedings of The 37th Allerton Conference on Communication, Control, & Computing, Univ. of Illinois.
* [36] Ng, A., et al. (2011). Sparse autoencoder. CS294A Lecture notes, 72 (2011), 1–19.
* [37] Rifai, S., Vincent, P., Muller, X., Glorot, X., & Bengio, Y. (2011). Contractive auto-encoders: Explicit invariance during feature extraction. In ICML.
* [38] Vincent, P., Larochelle, H., Bengio, Y., & Manzagol, P.-A. (2008). Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on machine learning (pp. 1096–1103).
* [39] Kingma, D. P., & Welling, M. (2013). Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 .
* [40] Makhzani, A., Shlens, J., Jaitly, N., Goodfellow, I., & Frey, B. (2015). Adversarial autoencoders. arXiv preprint arXiv:1511.05644 .
* [41] James, W. (1890). The principles of psychology. Henry Holt and Company.
Charles Wilmot attended preparatory classes for France’s Grandes Ecoles at the University of Valenciennes and received the B.E. in mathematics in 2013, and
received an engineering diploma and the M.E. in information technologies at
the École nationale supérieure d’électronique, informatique,
télécommunications, mathématiques et mécanique de Bordeaux in 2016. He then joined the research team of Jochen Triesch in Frankfurt, where he studied developmental robotics, intrinsically motivated reinforcement learning, and hierarchical reinforcement learning with the intent of obtaining the Ph.D. in 2021.
Gianluca Baldassarre received the B.A. and M.A. degrees in economics from
the Sapienza University of Rome, Italy, in 1998, the Diploma of the
Specialization Course “Cognitive Psychology and Neural Networks” from the same
University, in 1999, and the Ph.D. degree in Computer Science from the
University of Essex, Colchester, U.K., in 2003. He was later a postdoc with
the Italian Institute of Cognitive Sciences and Technologies, National
Research Council (ISTC-CNR), Rome. Since 2006 he has been a researcher, now a
Research Director, with ISTC-CNR where he founded and is Coordinator of the
Research Group “Laboratory of Embodied Natural and Artificial Intelligence”.
He was the Principal Investigator for the EU project “ICEA–Integrating
Cognition Emotion and Autonomy” from 2006 to 2009, the Coordinator of the EU
Integrated Project “IM-CLeVeR–Intrinsically Motivated Cumulative Learning
Versatile Robots” from 2009 to 2013, and, since 2016, he has been the
Coordinator of the EU FET-OPEN Project “GOALRobots–Goal-Based Open-ended
Autonomous Learning Robots” ending in 2021. His research interests span open-
ended learning of sensorimotor skills, driven by extrinsic and intrinsic
motivations, in animals, humans, and robots.
Jochen Triesch received his Diploma and Ph.D. degrees in Physics from the
University of Bochum, Germany, in 1994 and 1999, respectively. After two years
as a post-doctoral fellow at the Computer Science Department of the University
of Rochester, NY, USA, he joined the faculty of the Cognitive Science
Department at UC San Diego, USA as an Assistant Professor in 2001. In 2005 he
became a Fellow of the Frankfurt Institute for Advanced Studies (FIAS), in
Frankfurt am Main, Germany. In 2006 he received a Marie Curie Excellence
Center Award of the European Union. Since 2007 he has been the Johanna Quandt Research Professor for Theoretical Life Sciences at FIAS. He also holds
professorships at the Department of Physics and the Department of Computer
Science and Mathematics at the Goethe University in Frankfurt am Main,
Germany. In 2019 he obtained a visiting professorship at the Université
Clermont Auvergne, France. His research interests span Computational
Neuroscience, Machine Learning, and Developmental Robotics.
# On the Pythagoras number of the simplest cubic fields
Magdaléna Tinková Charles University, Faculty of Mathematics and Physics,
Department of Algebra, Sokolovská 83, 18600 Praha 8, Czech Republic
<EMAIL_ADDRESS>
###### Abstract.
The simplest cubic fields $\mathbb{Q}(\rho)$ are generated by a root $\rho$ of
the polynomial $x^{3}-ax^{2}-(a+3)x-1$ where $a\geq-1$. In this paper, we will
show that the Pythagoras number of the order $\mathbb{Z}[\rho]$ is equal to
$6$ for $a\geq 3$.
###### Key words and phrases:
Pythagoras number, the simplest cubic fields, indecomposable integers
###### 2010 Mathematics Subject Classification:
11R16, 11R80, 11E25
The author was supported by Czech Science Foundation GAČR, grant 21-00420M, by
projects PRIMUS/20/SCI/002, UNCE/SCI/022, GA UK 1298218 from Charles
University, and by SVV-2020-260589.
## 1\. Introduction
Let $\mathcal{O}$ be a commutative ring, and let $\sum\mathcal{O}^{2}$ and
$\sum^{m}\mathcal{O}^{2}$ be the sets defined by
$\sum\mathcal{O}^{2}=\Big{\\{}\sum_{i=1}^{n}\alpha_{i}^{2};\;\alpha_{i}\in\mathcal{O},n\in\mathbb{N}\Big{\\}},\qquad\sum^{m}\mathcal{O}^{2}=\Big{\\{}\sum_{i=1}^{m}\alpha_{i}^{2};\;\alpha_{i}\in\mathcal{O}\Big{\\}}.$
In this paper, we are concerned with the so-called Pythagoras number
$\mathcal{P}(\mathcal{O})$ of the ring $\mathcal{O}$ given by
$\mathcal{P}(\mathcal{O})=\min\Big{\\{}m\in\mathbb{N};\;\sum\mathcal{O}^{2}=\sum^{m}\mathcal{O}^{2}\Big{\\}}.$
Regarding some basic examples,
$\mathcal{P}(\mathbb{R})=\mathcal{P}(\mathbb{C})=1$, and Lagrange’s famous
four-square theorem implies $\mathcal{P}(\mathbb{Q})=4$. Moreover, it can be
proved that $\mathcal{P}(K)\leq 4$ for every number field $K$ [14, 36].
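For $\mathbb{Z}$ and $\mathbb{Q}$, these statements are easy to check by brute force for small integers. The sketch below (ours, for illustration only) confirms that $7$ is not a sum of three squares, while every $n\leq 200$ is a sum of four:

```python
def is_sum_of_squares(n, m):
    """True iff n is a sum of at most m squares of non-negative integers."""
    if n == 0:
        return True
    if m == 0:
        return False
    k = int(n ** 0.5)
    while (k + 1) ** 2 <= n:  # guard against floating-point rounding
        k += 1
    return any(is_sum_of_squares(n - i * i, m - 1) for i in range(k, 0, -1))

# 7 = 4 + 1 + 1 + 1 requires four squares; no n <= 200 needs more.
print(is_sum_of_squares(7, 3), is_sum_of_squares(7, 4))  # prints: False True
assert all(is_sum_of_squares(n, 4) for n in range(201))
```

Since $7$ requires four squares and (by Lagrange) four always suffice, this exhibits $\mathcal{P}(\mathbb{Z})=4$ on a small range.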
Arguably the most important and classical cases are Pythagoras numbers of
rings of algebraic integers $\mathcal{O}_{K}$ of totally real number fields
$K$. The first result is, of course, Lagrange’s above-mentioned theorem giving $\mathcal{P}(\mathbb{Z})=4$, which led to the study of universal quadratic forms. Let $\mathcal{O}_{K}^{+}$ be the set of totally positive integers of
$\mathcal{O}_{K}$ (by this, we mean those algebraic integers whose conjugates
are all positive). Roughly speaking, a universal quadratic form over $\mathcal{O}_{K}$ is a quadratic form with coefficients in $\mathcal{O}_{K}$ that represents all the elements of $\mathcal{O}_{K}^{+}$. For more details about universal quadratic forms, see
also for example [2, 3, 7, 15, 17, 21, 22, 33].
Considering sums of squares, Maaß has shown that the sum of three squares is
universal over $\mathcal{O}_{K}$ for $K=\mathbb{Q}(\sqrt{5})$, which implies
$\mathcal{P}(\mathcal{O}_{K})=3$ in this case [27]. Nevertheless, a result of Siegel says that a sum of any number of squares can be universal only in the fields $\mathbb{Q}$ and $\mathbb{Q}(\sqrt{5})$ [37]. This means that in the other totally real number fields, we cannot express all the
elements of $\mathcal{O}_{K}^{+}$ as a sum of squares, and thus we must restrict ourselves to those which indeed lie in $\sum\mathcal{O}_{K}^{2}$.
Let now $\mathcal{O}\subseteq\mathcal{O}_{K}$ be an order. Scharlau showed
that the Pythagoras number of an order is always finite, although it can be
arbitrarily large [34]. The case of quadratic orders was studied in great detail by Peters; he proved that, except for a few cases, the Pythagoras
number is always $5$ [30]. Moreover, he also characterized all the elements
which are representable as a sum of squares. Considering the other cases,
recently, Kala and Yatsyna [20] proved that $\mathcal{P}(\mathcal{O})\leq f(d)$ for every order $\mathcal{O}$ in a totally real number field $K$, where $f$ is a function depending only on the degree $d$ of the field $K$. Moreover, one can
take $f(d)=d+3$ if $2\leq d\leq 5$. Note that their subsequent paper [19]
studies sums of squares in certain subrings of $\mathcal{O}_{K}$.
However, given the difficulty of studying $\mathcal{P}(\mathcal{O})$ for orders, most of the research so far has focused on the situation over fields. In the case of non-formally real fields $K$ (i.e., fields in which $-1$ can be expressed as a sum of squares), the Pythagoras number is closely related to the Stufe $s(K)$ of $K$, which is the minimal number of squares whose sum gives $-1$. We have
here $s(K)\leq\mathcal{P}(K)\leq s(K)+1$. By the results of Pfister [31], the
value of $s(K)$ can attain only the powers of $2$, which greatly limits the
possibilities for the value of $\mathcal{P}(K)$. On the other hand, Hoffmann
has shown that for every $n\in\mathbb{N}$ and formally real field $K_{0}$,
there exists a formally real field $K$ over $K_{0}$ with $\mathcal{P}(K)=n$
[14]. Nevertheless, we can find many other results on the Pythagoras number of fields in a number of specific situations, for example, in relation to rational function fields, elliptic curves, the Hasse number, or Laurent series [5,
8, 16, 32].
In this paper, we will focus on orders in the so-called simplest cubic fields
[9, 35]. They are generated by a root $\rho$ of the polynomial
$x^{3}-ax^{2}-(a+3)x-1$ where $a\geq-1$, and have been studied in many different contexts, see for example [1, 4, 11, 24, 25, 26, 38]. This is due to
the fact that they have many useful properties: They contain units of all
signatures, and every totally positive unit is a square [28]. Moreover,
$\mathcal{O}_{K}=\mathbb{Z}[\rho]$ for infinitely many $a$ (for example, whenever $a^{2}+3a+9$, the square root of the discriminant, is squarefree), and these fields are cyclic.
In this case, the result of Kala and Yatsyna gives the upper bound $6$ on
$\mathcal{P}(\mathcal{O})$. We will show that this bound is attained in
infinitely many cases by proving the following theorem:
###### Theorem 1.1.
Let $\rho$ be a root of the polynomial $x^{3}-ax^{2}-(a+3)x-1$ where $a\geq
3$. Then $\mathcal{P}(\mathbb{Z}[\rho])=6$.
To the best of our knowledge, there are no results on the Pythagoras number
for orders in number fields of higher degrees similar to Peters’ results on
quadratic orders. Thus, Theorem 1.1 represents the first breakthrough in this
problem. Moreover, since $\mathcal{O}_{K}=\mathbb{Z}[\rho]$ in infinitely many
cases of $a$, this conclusion also provides us a precise result for the
maximal order $\mathcal{O}_{K}$.
Having the upper bound from the result of Kala and Yatsyna, we will focus on
the determination of the lower bound. To reach this aim, we will primarily
rely on the idea of additively indecomposable integers in totally real number fields. Probably the most studied case is when we consider totally
positive elements. Let $\alpha\in\mathcal{O}_{K}^{+}$. We say that $\alpha$ is
indecomposable in $\mathcal{O}_{K}$ if we cannot express it as
$\alpha=\beta_{1}+\beta_{2}$ where
$\beta_{1},\beta_{2}\in\mathcal{O}_{K}^{+}$. Note that, under the name of extremal elements, they can be found in Siegel’s above-mentioned proof of the (non)-universality of sums of squares in number fields. However, in our
proofs, we will need their extended definition for all the possible signatures
(see Section 2).
Regarding real quadratic fields, the indecomposable integers were fully
described by Perron [29] and Dress and Scharlau [10], and their additive
structure was studied in [13]. Some partial results for the biquadratic case
can be found in the work of Čech, Lachman, Svoboda, Zemková and the present
author [6], and in the following paper [23], which focuses on ternary
quadratic forms in these fields. The cubic fields are in the center of
interest of [18], where we also determined the full structure of
indecomposable integers in the simplest cubic fields. The proof of Theorem 1.1
is based on this result.
Nevertheless, so far, the indecomposable integers have been mainly used in the
study of universal quadratic forms [2, 3, 6, 17, 18, 23, 37, 39] or the
elements of small norms [18], thus this paper also provides a new application
of this phenomenon. Moreover, some of the ideas introduced here can be also
used for the determination of the Pythagoras number for other cubic orders.
The proof of Theorem 1.1 is provided in Section 3. Moreover, in Section 4 we
give some partial results on the Pythagoras number for the remaining cases
$-1\leq a\leq 2$.
## 2\. Preliminaries
Let $K=\mathbb{Q}(\rho)$ be a totally real cubic field, and let
$\mathcal{O}_{K}$ be the ring of algebraic integers of $K$. Moreover, let
$\rho^{\prime}$ and $\rho^{\prime\prime}$ be Galois conjugates of $\rho$. Then
by signature $\sigma$ of $\alpha\in\mathbb{Q}(\rho)$, we mean the triple
$(\text{sgn}(\alpha),\,\text{sgn}(\alpha^{\prime}),\,\text{sgn}(\alpha^{\prime\prime}))$
where sgn is the signum function, and $\alpha^{\prime}$ and
$\alpha^{\prime\prime}$ are images of $\alpha$ under the
$\mathbb{Q}$-isomorphism given by $\rho\mapsto\rho^{\prime}$, and
$\rho\mapsto\rho^{\prime\prime}$, respectively. In the following, we will use
symbols $+$ and $-$ instead of $\pm 1$, e.g., we will replace $(1,1,1)$ by
$(+,+,+)$. Moreover, the norm of $\alpha$ is defined as
$N(\alpha)=\alpha\alpha^{\prime}\alpha^{\prime\prime}$, and the trace of
$\alpha$ as $\text{Tr}(\alpha)=\alpha+\alpha^{\prime}+\alpha^{\prime\prime}$.
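Since $\rho$ satisfies $\rho^{3}=a\rho^{2}+(a+3)\rho+1$, the norm and trace of $\alpha=c_{0}+c_{1}\rho+c_{2}\rho^{2}$ can be computed exactly, in integer arithmetic, as the determinant and trace of the matrix of multiplication by $\alpha$ on the basis $\{1,\rho,\rho^{2}\}$. The following sketch (our illustration; the paper itself works symbolically) checks, e.g., that $N(\rho)=1$ and $\text{Tr}(\rho)=a$:

```python
def mul(u, v, a):
    """Product in Z[rho] of u = (c0, c1, c2) and v, reduced via
    rho^3 = a*rho^2 + (a+3)*rho + 1."""
    c = [0] * 5
    for i in range(3):
        for j in range(3):
            c[i + j] += u[i] * v[j]
    # rho^4 = a*rho^3 + (a+3)*rho^2 + rho
    c[3] += c[4] * a; c[2] += c[4] * (a + 3); c[1] += c[4]
    # rho^3 = a*rho^2 + (a+3)*rho + 1
    c[2] += c[3] * a; c[1] += c[3] * (a + 3); c[0] += c[3]
    return (c[0], c[1], c[2])

def mult_matrix(u, a):
    """Matrix of multiplication by u on the basis {1, rho, rho^2}."""
    cols = [mul(u, e, a) for e in ((1, 0, 0), (0, 1, 0), (0, 0, 1))]
    return [[cols[j][i] for j in range(3)] for i in range(3)]

def norm(u, a):
    m = mult_matrix(u, a)
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def trace(u, a):
    m = mult_matrix(u, a)
    return m[0][0] + m[1][1] + m[2][2]

a = 5
print(norm((0, 1, 0), a), trace((0, 1, 0), a))  # prints: 1 5 (rho is a unit, Tr(rho) = a)
```

The value $N(\rho)=1$ reflects that the product of the roots of $x^{3}-ax^{2}-(a+3)x-1$ is $1$, i.e., $\rho$ is a unit, as used throughout the paper.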
Let $\mathcal{O}\subseteq\mathcal{O}_{K}$ be an order in $K$, i.e., a subring
of finite index in $\mathcal{O}_{K}$. By $\mathcal{O}^{\sigma}$, we will mean
the set of those elements in $\mathcal{O}$ which have the signature $\sigma$.
The element $\alpha\in\mathcal{O}^{\sigma}$ is $\sigma$-indecomposable in
$\mathcal{O}$ if it cannot be written as $\alpha=\beta_{1}+\beta_{2}$ where
$\beta_{1},\beta_{2}\in\mathcal{O}^{\sigma}$. Otherwise, we say that the
element $\alpha$ is $\sigma$-decomposable in $\mathcal{O}$. Note that for
example, all the units are $\sigma$-indecomposable for some signature
$\sigma$.
In particular, the totally positive elements, i.e., the elements with the signature $(+,+,+)$, have been richly studied in the past, and for them, we will introduce some more notation.
elements of $\mathcal{O}$ by $\mathcal{O}^{+}$. We say that
$\alpha\in\mathcal{O}^{+}$ is totally greater than $\beta\in\mathcal{O}^{+}$
if $\alpha>\beta$, $\alpha^{\prime}>\beta^{\prime}$ and
$\alpha^{\prime\prime}>\beta^{\prime\prime}$. We will denote it by
$\alpha\succ\beta$. Sometimes, we will also use the symbol $\succeq$ when we
want to include the case when $\alpha=\beta$. Note that, for example, all non-
zero squares are totally positive.
Let us now recall some facts about the simplest cubic fields, which we study
in this paper. They are generated by a root of the polynomial
$x^{3}-ax^{2}-(a+3)x-1$. Throughout this paper, we will denote the roots of
this polynomial in the following way: $a+1<\rho$, $-2<\rho^{\prime}<-1$, and
$-1<\rho^{\prime\prime}<0$. Nevertheless, if $a\geq 7$, we have more precise
estimates on these roots, namely
(2.1) $a+1<\rho<a+1+\frac{2}{a},\ \
-1-\frac{1}{a+1}<\rho^{\prime}<-1-\frac{1}{a+2},\text{ and
}-\frac{1}{a+2}<\rho^{\prime\prime}<-\frac{1}{a+3}.$
Note that this result mostly comes from [24]; only the original estimate for $\rho^{\prime}$ was too rough for the purposes of this paper, so we have stated a slightly improved form, which can be easily checked. We will use
these estimates many times in the following proofs.
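These bounds are also easy to confirm numerically. A small sketch (ours, using plain bisection; not part of the paper) locates the three roots from the crude brackets $\rho\in(a+1,a+2)$, $\rho^{\prime}\in(-2,-1)$, $\rho^{\prime\prime}\in(-1,0)$ and checks that they satisfy (2.1) for a range of $a\geq 7$:

```python
def f(x, a):
    return x ** 3 - a * x ** 2 - (a + 3) * x - 1

def root(lo, hi, a, iters=200):
    """Bisection; assumes f changes sign on [lo, hi]."""
    sign_lo = f(lo, a) > 0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if (f(mid, a) > 0) == sign_lo:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for a in range(7, 50):
    rho = root(a + 1, a + 2, a)
    rho1 = root(-2, -1, a)   # rho'
    rho2 = root(-1, 0, a)    # rho''
    assert a + 1 < rho < a + 1 + 2 / a
    assert -1 - 1 / (a + 1) < rho1 < -1 - 1 / (a + 2)
    assert -1 / (a + 2) < rho2 < -1 / (a + 3)
print("estimates (2.1) hold for 7 <= a < 50")
```

The crude brackets are valid because $f(a+1,a)=-2a-3<0$, $f(a+2,a)=(a+2)(a+1)-1>0$, $f(-2,a)=-2a-3<0$, $f(-1,a)=1>0$, and $f(0,a)=-1<0$.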
Besides that, we will use the fact that we know the full structure of
$\sigma$-indecomposable integers in $\mathbb{Z}[\rho]$. In particular, in
[18], we have shown the following theorem:
###### Theorem 2.1 ([18, Theorem 1.2]).
Let $K$ be the simplest cubic field with $a\in\mathbb{Z}_{\geq-1}$ such that
$\mathcal{O}_{K}=\mathbb{Z}[\rho]$. The elements $1$, $1+\rho+\rho^{2}$, and
$-v-w\rho+(v+1)\rho^{2}$ where $0\leq v\leq a$ and $v(a+2)+1\leq
w\leq(v+1)(a+1)$ are, up to multiplication by totally positive units, all the
totally positive indecomposable integers in $\mathbb{Q}(\rho)$.
Note that, in fact, Theorem 2.1 provides us with all the totally positive indecomposable integers in the order $\mathbb{Z}[\rho]$ for every $a\geq-1$.
Moreover, although this theorem considers only the totally positive indecomposable integers, it also gives us complete information about $\sigma$-indecomposables for all the other signatures $\sigma$. These
$\sigma$-indecomposables can be obtained as $\varepsilon\alpha$ where
$\varepsilon$ runs over all the units with signature $\sigma$, and $\alpha$
runs over all the elements listed in Theorem 2.1. This property is given by
the fact that $\mathbb{Q}(\rho)$ contains units of all signatures.
Moreover, we can divide the totally positive indecomposable integers from
Theorem 2.1 into three sets: units, the exceptional indecomposable integer
$1+\rho+\rho^{2}$ and the “triangle” of indecomposables of the form
$\blacktriangle=\blacktriangle(a)=\\{-v-w\rho+(v+1)\rho^{2},0\leq v\leq
a\text{ and }v(a+2)+1\leq w\leq(v+1)(a+1)\\}.$
Nevertheless, together with each $\alpha\in\blacktriangle$, the set $\blacktriangle$ also contains some specific unit multiples of conjugates of $\alpha$. Let
$\alpha(v,W)=-v-(v(a+2)+1+W)\rho+(v+1)\rho^{2}\in\blacktriangle$ for some
$0\leq W\leq a-v$, and let $a=3A+a_{0}$ where $a_{0}\in\\{0,1,2\\}$. Instead
of $\blacktriangle$, we can consider its subset of the form
$\blacktriangle_{0}=\blacktriangle_{0}(a)=\left\\{\begin{array}[]{ll}\left\\{\alpha(v,W);0\leq
v\leq A\text{ and }v\leq W\leq a-2v-1\right\\}\text{ if }a_{0}\in\\{1,2\\},\\\
\\{\alpha(v,W);0\leq v\leq A-1\text{ and }v\leq W\leq
a-2v-1\\}\cup\\{\alpha(A,A)\\}\text{ if }a_{0}=0.\end{array}\right.$
The elements of $\blacktriangle$ excluded from $\blacktriangle_{0}$ are precisely such unit multiples of conjugates of elements of $\blacktriangle_{0}$, and are thus, in some sense, covered by the elements in $\blacktriangle_{0}$. For more details, see [18].
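As a quick numerical sanity check of Theorem 2.1 (our own, not an argument used in the paper), one can verify that every element of the triangle $\blacktriangle$ is indeed totally positive for a small value of $a$:

```python
def poly(x, a):
    return x ** 3 - a * x ** 2 - (a + 3) * x - 1

def root(lo, hi, a, iters=200):
    """Bisection; assumes poly changes sign on [lo, hi]."""
    sign_lo = poly(lo, a) > 0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if (poly(mid, a) > 0) == sign_lo:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

a = 5
roots = [root(a + 1, a + 2, a), root(-2, -1, a), root(-1, 0, a)]

# alpha(v, w) = -v - w*rho + (v+1)*rho^2 over the "triangle" of Theorem 2.1:
# 0 <= v <= a and v*(a+2)+1 <= w <= (v+1)*(a+1).
for v in range(a + 1):
    for w in range(v * (a + 2) + 1, (v + 1) * (a + 1) + 1):
        for r in roots:
            assert -v - w * r + (v + 1) * r * r > 0  # positive in every embedding
print("all triangle elements are totally positive for a =", a)
```

Some of these values are strikingly close to zero at the embedding $\rho\mapsto\rho^{\prime\prime}$, which hints at why the total positivity of $\blacktriangle$ requires a careful proof.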
In our proof, we will work with norms of these elements, and in particular, we
will use the following lemma from [18], which partly compares norms of
elements belonging to the set $\blacktriangle_{0}$.
###### Lemma 2.2 ([18, Lemma 6.4]).
Let $a\geq 3$ and assume that $\alpha(v+1,W)\in\blacktriangle_{0}$. Then
$N(\alpha(v,W))<N(\alpha(v+1,W))$.
Note that for fixed $v$, the norm of $\alpha(v,W)$ first increases in $W$ and then may start to decrease (in some cases, it increases on the whole interval for $W$; one of these two behaviors always occurs). For more details, see [18].
As we will see below, we will also need to know more about units in
$\mathbb{Q}(\rho)$. It was proved that the system of fundamental units of
$\mathbb{Q}(\rho)$ (and also of $\mathbb{Z}[\rho]$) is formed by the pair
$\rho$ and $\rho^{\prime}$ [12, 35]. Using this property, the authors of [18] proved the following lemma, which we will often use in this paper.
###### Lemma 2.3 ([18, Lemma 6.2]).
Let $a\geq 7$ and let $\varepsilon$ be a unit such that
$|\varepsilon|,|\varepsilon^{\prime}|,|\varepsilon^{\prime\prime}|<a$. Then
$\varepsilon=1$.
In particular, if $\varepsilon\neq 1$ is a totally positive unit (and thus a square), then by Lemma 2.3, one of its conjugates is greater than $a^{2}$. In some cases, we will also need the stronger result stated in the following
lemma [18].
###### Lemma 2.4 ([18, Lemma 6.3]).
Let $a\geq 7$ and let $\varepsilon$ be a totally positive unit such that
$\varepsilon>a^{2}$. If $\varepsilon\neq\rho^{2},\rho^{\prime\prime-2}$, then
at least one of the following holds:
1. (1)
$\varepsilon>a^{4}$, or
2. (2)
$\varepsilon^{\prime}>a^{2}$, or
3. (3)
$\varepsilon^{\prime\prime}>a^{2}$.
## 3\. Proof of Theorem 1.1
Now we will describe the method which we will use in the proof of Theorem 1.1.
Recall that by the result of Kala and Yatsyna [19], the upper bound on the
Pythagoras number in cubic orders is $6$. Thus, it suffices to prove that the
lower bound is also $6$. To do that, it is enough to find an element
$\gamma\in\mathbb{Z}[\rho]$ which can be written as a sum of six squares but
not as a sum of five squares.
Hence we will proceed as follows. We will suitably choose such an element
$\gamma$ and find all the elements $\omega$ such that
$\gamma\succeq\omega^{2}$. Every square decomposition of $\gamma$ can consist
only of these elements. Then, using some combinatorics, we will show that no sum of five (or fewer) of these squares can give $\gamma$.
In the determination of these squares, we will use the knowledge of
$\sigma$-indecomposable integers in the simplest cubic fields originating from
Theorem 2.1. Let $\omega$ be such that $\gamma\succeq\omega^{2}$. This element $\omega$ has some signature $\sigma$, and can thus be expressed as $\omega=\sum_{i=1}^{n}\beta_{i}$ where $\beta_{i}$ are $\sigma$-indecomposable
integers in $\mathbb{Z}[\rho]$, and $n\in\mathbb{N}$. Having this, we can see
that
$\gamma\succeq\sum_{i=1}^{n}\beta_{i}^{2}+2\sum_{1\leq i<j\leq n}\beta_{i}\beta_{j}.$
Obviously, the squares $\beta_{i}^{2}$ are totally positive, as well as
elements $\beta_{i}\beta_{j}$ for $i\neq j$ since $\beta_{i}$ and $\beta_{j}$
have the same signature $\sigma$. Thus, we can immediately conclude that
$\gamma\succeq\beta_{i}^{2}$ for all $i=1,\ldots,n$. We will use this simple
fact in the following way. First of all, we will find all the
$\sigma$-indecomposable integers $\beta$ for all the signatures $\sigma$ such
that $\gamma\succeq\beta^{2}$. Then, by summing these elements $\beta$ with
the same signature $\sigma$, we will derive all the $\sigma$-decomposable
integers $\omega$ satisfying $\gamma\succeq\omega^{2}$.
Moreover, each of these $\sigma$-indecomposables $\beta$ can be rewritten as
$\beta=\varepsilon\alpha$ where $\varepsilon$ is a unit and $\alpha$ is one of
$1$, $1+\rho+\rho^{2}$ and elements of $\blacktriangle_{0}$, or one of their
conjugates. Thus, we firstly detect the elements of this list whose squares
have the norm smaller than $N(\gamma)$, and consequently use the results on
units from Lemmas 2.3 and 2.4 to determine all the possible units
$\varepsilon$ which indeed give
$\beta^{2}=\varepsilon^{2}\alpha^{2}\preceq\gamma$.
In the case of the simplest cubic fields, we can choose our element as
$\gamma=a^{2}+a+8+(a^{2}-a+1)\rho+(2-a)\rho^{2}=1+1+1+4+\rho^{2}+(a+1+a\rho-\rho^{2})^{2}.$
As we see, we can write $\gamma$ as a sum of six squares. We will fix this
choice of $\gamma$ and work with it for the rest of this paper. Moreover, we
will show that except for a few cases of $a$, there exist only $8$ non-zero
elements $\omega^{2}$ such that $\omega^{2}\preceq\gamma$, which is a great
advantage of the choice of this element.
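The stated six-square representation of $\gamma$ can be verified numerically by evaluating both sides at the three conjugates; the sketch below assumes the defining polynomial $x^{3}-ax^{2}-(a+3)x-1$ of $\rho$:

```python
import numpy as np

for a in [7, 15, 100]:
    # the three conjugates of rho are the real roots of x^3 - a x^2 - (a+3) x - 1
    for x in np.roots([1, -a, -(a + 3), -1]).real:
        gamma = a**2 + a + 8 + (a**2 - a + 1)*x + (2 - a)*x**2
        rhs = 1 + 1 + 1 + 4 + x**2 + (a + 1 + a*x - x**2)**2
        assert abs(gamma - rhs) < 1e-6 * max(1.0, abs(gamma))
```

Since the two sides agree at every conjugate, they agree as elements of $\mathbb{Z}[\rho]$.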
Using estimates given in (2.1), we can easily deduce that for $a\geq 7$,
$\gamma<a^{2}+6a+9+\frac{2}{a},\qquad\gamma^{\prime}<10+\frac{4}{(a+2)^{2}}<11,\qquad\gamma^{\prime\prime}<a^{2}+11+\frac{a^{2}-8a-28}{(a+3)^{2}}.$
In particular, $\gamma$ itself is the largest of the conjugates. Moreover, since $\gamma^{\prime}<11$, we can immediately see that if $\omega$ is a rational integer, then necessarily $\omega^{2}\in\\{1,4,9\\}$.
### 3.1. Units
Our first concern is to find all the totally positive units $\varepsilon$
satisfying $\gamma\succeq\varepsilon$. Recall that every such unit is a
square, thus it can play a role in a square decomposition of the element
$\gamma$.
###### Lemma 3.1.
Let $a\geq 7$ and let $\varepsilon$ be a totally positive unit in
$\mathbb{Z}[\rho]$. If $\gamma\succeq\varepsilon$, then
$\varepsilon\in\\{1,\rho^{2}\\}$.
###### Proof.
If $\varepsilon\neq 1$, Lemma 2.3 implies that one of the conjugates of
$\varepsilon$ is greater than $a^{2}$. Without loss of generality, we can
assume $\varepsilon>a^{2}$. Using the fundamental units, the unit
$\varepsilon$ can be written as $\rho^{k}\rho^{\prime l}$ for some $k,l\in
2\mathbb{Z}$. As we can see from the estimates in (2.1), the value of
$\varepsilon$ can be greater than $a^{2}$ only if $k\geq 2$. On the other
hand, $\varepsilon^{\prime}=\rho^{\prime k}\rho^{\prime\prime l}\leq\gamma$
for $k\geq 2$ only if $l\geq-2$. This condition on $l$ also implies that
$\varepsilon\leq\gamma$ only for $k=2$ and $a\geq 7$. The other cases are not
possible as we have $\gamma>\gamma^{\prime},\gamma^{\prime\prime}$.
Let us first focus on the case when $k=2$ and $l=-2$. Obviously,
$\varepsilon^{\prime}>\gamma^{\prime\prime}$, thus the only conjugate of
$\varepsilon$ which can be totally smaller than $\gamma$ is $\varepsilon^{\prime}$.
However, it can be directly verified that $\gamma-\rho^{\prime
2}\rho^{\prime\prime-2}$ is not totally positive for $a\geq 7$.
Therefore, let $l\geq 0$. In these cases, clearly,
$\varepsilon>\gamma^{\prime\prime}$ for $a\geq 7$, thus $\varepsilon$ is the
only conjugate of $\varepsilon$ which can be totally smaller than $\gamma$.
First of all, let us assume $l\geq 6$. In this case, we have
$\varepsilon^{\prime\prime}=\rho^{\prime\prime
2}\rho^{l}>\frac{1}{(a+3)^{2}}(a+1)^{6}>\gamma^{\prime\prime}$
for $a\geq 7$, and thus we can exclude the cases with $l\geq 6$.
Therefore, except for $\varepsilon=1$, we are left with the units $\rho^{2}$,
$\rho^{2}\rho^{\prime 2}$ and $\rho^{2}\rho^{\prime 4}$. However, the last
unit can be rewritten as $\rho^{\prime 2}\rho^{\prime\prime-2}$, which was
excluded in the previous part. In the same manner, we can show that
$\gamma-\rho^{2}\rho^{\prime 2}$ is not totally positive for $a\geq 7$. Thus,
the only units which can be (and actually are) totally smaller than $\gamma$
are exactly $1$ and $\rho^{2}$. ∎
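Lemma 3.1 can be spot-checked numerically for sample values of $a$. The sketch below assumes the defining polynomial $x^{3}-ax^{2}-(a+3)x-1$, labels $\rho$ as the largest and $\rho^{\prime}$ as the smallest real root (consistent with the estimates (2.1)), and lets conjugation act cyclically:

```python
import numpy as np

def conj_tuples(a):
    # (rho, rho', rho'') and its two cyclic shifts; evaluating an expression
    # on all three tuples yields the element together with its two conjugates
    rp, rpp, rho = np.sort(np.roots([1, -a, -(a + 3), -1]).real)
    return [(rho, rp, rpp), (rp, rpp, rho), (rpp, rho, rp)]

def totally_positive(vals):
    return all(v > 0 for v in vals)

for a in [7, 15, 40]:
    gam = [a**2 + a + 8 + (a**2 - a + 1)*x + (2 - a)*x**2 for x, _, _ in conj_tuples(a)]
    rho2 = [x**2 for x, _, _ in conj_tuples(a)]
    r2rp2 = [x**2 * y**2 for x, y, _ in conj_tuples(a)]      # rho^2 rho'^2
    rp2rppm2 = [y**2 / z**2 for _, y, z in conj_tuples(a)]   # rho'^2 rho''^-2
    assert totally_positive([g - 1 for g in gam])                     # gamma - 1 >> 0
    assert totally_positive([g - u for g, u in zip(gam, rho2)])       # gamma - rho^2 >> 0
    assert not totally_positive([g - u for g, u in zip(gam, r2rp2)])
    assert not totally_positive([g - u for g, u in zip(gam, rp2rppm2)])
```

This confirms, for the sampled $a$, that exactly the units $1$ and $\rho^{2}$ among the candidates are totally smaller than $\gamma$.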
### 3.2. Squares of $\sigma$-indecomposable integers
In this part, we will find all non-unit $\sigma$-indecomposable integers
$\beta$ such that $\gamma\succeq\beta^{2}$. Necessarily, in that case,
$N(\gamma)\geq N(\beta^{2})$. It can be easily computed that
$N(\gamma)=9a^{4}+22a^{3}+247a^{2}+258a+1493.$
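The stated value of $N(\gamma)$ can be cross-checked numerically as the product of the three conjugates of $\gamma$ (the sketch again assumes the defining polynomial $x^{3}-ax^{2}-(a+3)x-1$ of $\rho$):

```python
import numpy as np

for a in [-1, 0, 1, 7, 15, 50]:
    rs = np.roots([1, -a, -(a + 3), -1]).real
    # N(gamma) is the product of the values of gamma at the three conjugates of rho
    n = np.prod([a**2 + a + 8 + (a**2 - a + 1)*x + (2 - a)*x**2 for x in rs])
    closed = 9*a**4 + 22*a**3 + 247*a**2 + 258*a + 1493
    assert abs(n - closed) < 1e-6 * closed
```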
In our investigation, we can use the knowledge of totally positive
indecomposable integers given by Theorem 2.1, and the fact that every square of a non-unit $\sigma$-indecomposable integer is a conjugate of some element of
the form $\varepsilon\alpha^{2}$ where $\varepsilon$ is a totally positive
unit and $\alpha\in\blacktriangle_{0}\cup\\{1+\rho+\rho^{2}\\}$. Thus, we
firstly detect all the elements $\alpha$ for which
$N(\alpha^{2})=N(\beta^{2})\leq N(\gamma)$.
###### Lemma 3.2.
Let $a\geq 15$ and let $\alpha\in\blacktriangle_{0}\cup\\{1+\rho+\rho^{2}\\}$.
If $N(\alpha^{2})\leq N(\gamma)$, then
$\alpha\in\\{-w\rho+\rho^{2};1\leq w\leq
a\\}\cup\\{1+\rho+\rho^{2},-1-(a+4)\rho+2\rho^{2}\\}.$
###### Proof.
It can be easily computed that $N((1+\rho+\rho^{2})^{2})<N(\gamma)$ for
$a\geq-1$. Thus, let us now focus on $\alpha\in\blacktriangle_{0}$. In this
case, we have $\alpha=\alpha(v,W)=-v-(v(a+2)+1+W)\rho+(v+1)\rho^{2}$ for some
admissible values of $v,W$. In the following, we will use Lemma 2.2, which
compares norms of elements belonging to $\blacktriangle_{0}$.
Firstly, let us focus on the case when $v=1$. For $W=1$ (the smallest value of
$W$ for $v=1$), we get $N(\alpha(1,1)^{2})=4a^{4}+24a^{3}-108a+81<N(\gamma)$
for $a\geq-1$, i.e., we obtain the element $-1-(a+4)\rho+2\rho^{2}$ listed in
the statement of the lemma. On the other hand,
$N(\alpha(1,2)^{2})=9a^{4}+54a^{3}-141a^{2}-666a+1369>N(\gamma)$ for $a\geq
15$. Recall that the norm of $\alpha(v,W)$ for fixed $v$ increases in $W$, and
then it can start to decrease. Thus, to complete the proof for $v=1$, it
suffices to check the norm for $W=a-3$ (the largest $W$ for $v=1$ and $a\geq
15$). Nevertheless, we obtain
$N(\alpha(1,a-3)^{2})=16a^{4}-136a^{2}+289>N(\gamma)$ for $a\geq 10$.
By Lemma 2.2 and using the previous part, the norms of $\alpha(v,W)^{2}$ for $v\geq 2$ already exceed $N(\gamma)$. Thus, we are left with
one element with $v=1$ and all the elements with $v=0$, which completes the
proof. ∎
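The concrete norms used in the proof of Lemma 3.2 can be cross-checked numerically; the sketch below (same assumption on the defining polynomial of $\rho$) also confirms the comparisons with $N(\gamma)$ for sample $a\geq 15$:

```python
import numpy as np

def norm(a, c0, c1, c2):
    # N(c0 + c1*rho + c2*rho^2) as the product over the three conjugates
    rs = np.roots([1, -a, -(a + 3), -1]).real
    return np.prod([c0 + c1*x + c2*x**2 for x in rs])

for a in [15, 20, 30]:
    n_gamma = 9*a**4 + 22*a**3 + 247*a**2 + 258*a + 1493
    n11 = norm(a, -1, -(a + 4), 2) ** 2       # N(alpha(1,1)^2)
    n12 = norm(a, -1, -(a + 5), 2) ** 2       # N(alpha(1,2)^2)
    n1e = norm(a, -1, -2*a, 2) ** 2           # N(alpha(1,a-3)^2)
    assert np.isclose(n11, 4*a**4 + 24*a**3 - 108*a + 81, rtol=1e-6)
    assert np.isclose(n12, 9*a**4 + 54*a**3 - 141*a**2 - 666*a + 1369, rtol=1e-6)
    assert np.isclose(n1e, 16*a**4 - 136*a**2 + 289, rtol=1e-6)
    # only alpha(1,1)^2 has norm below N(gamma) in this range of a
    assert n11 < n_gamma < min(n12, n1e)
```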
Therefore, we have determined all the representatives of the
$\sigma$-indecomposable integers $\alpha$ with sufficiently small norms. Now
we will find all the totally positive units $\varepsilon$ for which
$\varepsilon\alpha^{2}$ or one of its conjugates is indeed totally smaller
than $\gamma$. To reach this aim, we will use Lemmas 2.3 and 2.4, which state
some useful results about units in the simplest cubic fields.
###### Lemma 3.3.
Let $a\geq 15$ and let $\beta$ be a non-unit $\sigma$-indecomposable integer
in $\mathbb{Z}[\rho]$ for some signature $\sigma$. If
$\gamma\succeq\beta^{2}$, then $\beta^{2}$ is one of the following elements:
1. (1)
$\rho^{\prime 2}\rho^{\prime\prime 2}(-\rho+\rho^{2})^{2}=1-2\rho+\rho^{2}$,
2. (2)
$\rho^{\prime\prime 2}\rho^{2}(-\rho^{\prime}+\rho^{\prime
2})^{2}=a^{2}+a+1+(a^{2}-a+1)\rho-(a-1)\rho^{2}$,
3. (3)
$\rho^{\prime\prime 2}\rho^{2}(-2\rho^{\prime}+\rho^{\prime
2})^{2}=a^{2}-a+(a^{2}-3a+1)\rho-(a-3)\rho^{2}$,
4. (4)
$\rho^{2}\rho^{\prime 2}(-(a-1)\rho^{\prime\prime}+\rho^{\prime\prime
2})^{2}=a^{2}+a-1+(a^{2}-a-3)\rho-(a-2)\rho^{2}$.
###### Proof.
We will proceed as follows. We will consider the elements $\alpha$ given by
Lemma 3.2 and discuss whether some conjugate of $\varepsilon\alpha^{2}$ can be
totally smaller than $\gamma$ for some totally positive unit $\varepsilon$.
Let us start with $\alpha=1+\rho+\rho^{2}$. Using (2.1), we can see that
$\alpha^{2}>a^{4}+6a^{3}+15a^{2}+18a+9>a^{2}+6a+9+\frac{2}{a}>\gamma>\gamma^{\prime},\gamma^{\prime\prime}$
for $a\geq 15$. Thus, we can immediately exclude all the conjugates of
$(1+\rho+\rho^{2})^{2}$, i.e., when we multiply $\alpha^{2}$ by the totally
positive unit $\varepsilon=1$. Let now $\varepsilon\neq 1$. By Lemma 2.3,
without loss of generality, we can suppose $\varepsilon>a^{2}$. However, we
can immediately exclude the conjugates of $\varepsilon(1+\rho+\rho^{2})^{2}$,
and we are left with the elements of the form
$\varepsilon(1+\rho^{\prime}+\rho^{\prime 2})^{2}$ and
$\varepsilon(1+\rho^{\prime\prime}+\rho^{\prime\prime 2})^{2}$.
Now, we will use Lemma 2.4. If $\varepsilon>a^{4}$, it can be easily verified
that $\varepsilon(1+\rho^{\prime}+\rho^{\prime
2})^{2},\varepsilon(1+\rho^{\prime\prime}+\rho^{\prime\prime 2})^{2}>\gamma$,
thus these cases are not possible. Suppose
$\varepsilon\neq\rho^{2},\rho^{\prime\prime-2}$. In that case, our unit
$\varepsilon$ has a conjugate greater than $a^{2}$. Since this conjugate
cannot be paired with $1+\rho+\rho^{2}$, we can restrict to the elements
$\varepsilon(1+\rho^{\prime}+\rho^{\prime 2})^{2}$ with
$a^{2}<\varepsilon,\varepsilon^{\prime}<a^{4}$ (the case
$\varepsilon(1+\rho^{\prime\prime}+\rho^{\prime\prime 2})^{2}$ is covered by
that). However, using a similar method as in Lemma 3.1, we can show that under
these conditions, $\varepsilon=\rho^{2}\rho^{\prime-2}$. Nevertheless, it can
be easily checked that no conjugate of
$\rho^{2}\rho^{\prime-2}(1+\rho^{\prime}+\rho^{\prime 2})^{2}$ is totally
smaller than $\gamma$.
Thus we are left with the elements of the form
$\varepsilon(1+\rho^{\prime}+\rho^{\prime 2})^{2}$ and
$\varepsilon(1+\rho^{\prime\prime}+\rho^{\prime\prime 2})^{2}$ where
$\varepsilon=\rho^{2},\rho^{\prime\prime-2}$. However, for these cases, we can
directly verify that none of them (or their conjugates) is totally smaller
than $\gamma$.
We will proceed with the elements from $\blacktriangle_{0}$. First of all, let
us assume that $\alpha=-w\rho+\rho^{2}$ where $3\leq w\leq a-2$. In this case,
as before, $\alpha^{2}>\gamma$ (as $\alpha^{2}>((a+1)^{2}-w(a+2))^{2}\geq
4a^{2}+20a+25>\gamma$), thus we can exclude $\varepsilon=1$. Let now
$\varepsilon>a^{2}$. Obviously, $\varepsilon\alpha^{2}>\gamma$, and
$\varepsilon\alpha^{\prime 2}>\gamma$ as $\varepsilon\alpha^{\prime
2}>a^{2}(w+1)^{2}\geq 16a^{2}>\gamma$. Hence only the conjugates of
$\varepsilon\alpha^{\prime\prime 2}$ can be totally smaller than $\gamma$.
Similarly as before, using Lemma 2.4, we can exclude all the units with
$\varepsilon>a^{4}$, $\varepsilon^{\prime}>a^{2}$ and
$\varepsilon^{\prime\prime}>a^{2}$, and neither $\rho^{2}$ nor $\rho^{\prime\prime-2}=\rho^{2}\rho^{\prime 2}$ produces an element totally smaller than $\gamma$, which can be checked by direct computation.
Put $\alpha=-1-(a+4)\rho+2\rho^{2}$. Using Lemma 2.3 and basic estimates
(2.1), we can easily find candidates for totally smaller integers, which are
conjugates of $\varepsilon\alpha^{\prime\prime 2}$ where
$\varepsilon\in\\{\rho^{2}\rho^{\prime 2},\rho^{4}\rho^{\prime
2},\rho^{4}\rho^{\prime 4}\\}$. Nevertheless, by comparing norms, traces, or the remaining coefficient of the minimal polynomials of $\gamma$ and $\alpha$, we can exclude all of these finitely many concrete cases.
Let now $\alpha=-a\rho+\rho^{2}$. In this case, $\alpha^{2}>a^{2}-2a+1$, $\alpha^{\prime 2}>(a+1)^{2}$ and $\alpha^{\prime\prime 2}>\frac{a^{2}}{(a+3)^{2}}$. Nevertheless, $\text{Tr}(\alpha^{2})=2a^{2}+10a+18>2a^{2}+2a+36=\text{Tr}(\gamma)$ for $a\geq 3$, thus we can exclude $\varepsilon=1$. Regarding $\varepsilon\neq 1$, the only possible
candidates are again conjugates of $\rho^{2}\alpha^{\prime\prime 2}$ and
$\rho^{2}\rho^{\prime 2}\alpha^{\prime\prime 2}$. Nevertheless, by direct
calculations, we can easily show that none of them is totally smaller than
$\gamma$.
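The trace comparison for $\alpha=-a\rho+\rho^{2}$ concerns $\alpha^{2}$: indeed, $2a^{2}+10a+18$ equals $\text{Tr}(\alpha^{2})$, while $\text{Tr}(\alpha)$ itself is only $2a+6$. This can be spot-checked numerically under the usual assumption on the defining polynomial of $\rho$:

```python
import numpy as np

for a in [7, 15, 40]:
    rs = np.roots([1, -a, -(a + 3), -1]).real
    # traces are sums over the three conjugates
    tr_alpha_sq = sum((-a*x + x**2)**2 for x in rs)
    tr_gamma = sum(a**2 + a + 8 + (a**2 - a + 1)*x + (2 - a)*x**2 for x in rs)
    assert np.isclose(tr_alpha_sq, 2*a**2 + 10*a + 18, rtol=1e-6)
    assert np.isclose(tr_gamma, 2*a**2 + 2*a + 36, rtol=1e-6)
    assert tr_alpha_sq > tr_gamma   # hence gamma - alpha^2 cannot be totally positive
```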
We will proceed with $\alpha=-(a-1)\rho+\rho^{2}$. In this case, we have
$\alpha^{2}>4a^{2}+8+\frac{4}{a^{2}}>\gamma$, thus we can exclude
$\varepsilon=1$. For $\varepsilon\neq 1$, similarly as before, we are left with $\rho^{2}\alpha^{\prime\prime 2}$ and $\rho^{2}\rho^{\prime 2}\alpha^{\prime\prime 2}$. In this case, we indeed obtain an element which is totally smaller than $\gamma$, namely
$\rho^{2}\rho^{\prime 2}\alpha^{\prime\prime
2}=a^{2}+a-1+(a^{2}-a-3)\rho-(a-2)\rho^{2}.$
Let now $\alpha=-2\rho+\rho^{2}$. We can conclude that only the conjugates of
$\rho^{2}\rho^{\prime 2}\alpha^{\prime\prime 2}$ can be totally smaller than
$\gamma$, of which only one satisfies this condition, namely
$\rho^{\prime\prime 2}\rho^{2}\alpha^{\prime
2}=a^{2}-a+(a^{2}-3a+1)\rho-(a-3)\rho^{2}.$
Therefore, it remains to consider the element $\alpha=-\rho+\rho^{2}$. Using
Lemma 2.4 and estimates (2.1), we can easily derive that some conjugate of our
element has to be of the form $\varepsilon\alpha^{\prime\prime 2}$ where
$\varepsilon\in\\{\rho^{2}\rho^{\prime 2},\rho^{4}\rho^{\prime
2},\rho^{4}\rho^{\prime 4}\\}$. From these nine cases, only two are actually
totally smaller than $\gamma$, specifically
$\rho^{\prime 2}\rho^{\prime\prime 2}\alpha^{2}=1-2\rho+\rho^{2},\qquad\rho^{\prime\prime 2}\rho^{2}\alpha^{\prime 2}=a^{2}+a+1+(a^{2}-a+1)\rho-(a-1)\rho^{2}.$
The others can be excluded by direct calculations, completing the proof. ∎
### 3.3. Sums of $\sigma$-indecomposable integers
In the previous subsections, we have found all the squares of
$\sigma$-indecomposable integers $\beta$ for all signatures $\sigma$ for which
we have $\gamma\succeq\beta^{2}$. Now we will consider the possible $\sigma$-decomposable integers which we can create from these $\sigma$-indecomposables and whose squares are (possibly) totally smaller than $\gamma$. However, to do that, we must know the signatures of these elements, which can be found in Table 1.
$\sigma$-indecomposable integer $\beta$ | Signature of $\beta$ | Signature of $-\beta$
---|---|---
$1$ | $(+,+,+)$ | $(-,-,-)$
$\rho$ | $(+,-,-)$ | $(-,+,+)$
$\rho^{\prime}\rho^{\prime\prime}(-\rho+\rho^{2})$ | $(+,-,-)$ | $(-,+,+)$
$\rho^{\prime\prime}\rho(-\rho^{\prime}+\rho^{\prime 2})$ | $(-,-,+)$ | $(+,+,-)$
$\rho^{\prime\prime}\rho(-2\rho^{\prime}+\rho^{\prime 2})$ | $(-,-,+)$ | $(+,+,-)$
$\rho\rho^{\prime}(-(a-1)\rho^{\prime\prime}+\rho^{\prime\prime 2})$ | $(-,+,-)$ | $(+,-,+)$
Table 1. Signatures of $\sigma$-indecomposable integers whose squares are
totally smaller than $\gamma$.
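The signatures in Table 1 can be verified numerically. The sketch below assumes the defining polynomial $x^{3}-ax^{2}-(a+3)x-1$, labels $\rho$ as the largest and $\rho^{\prime}$ as the smallest real root, and lets conjugation act as the cyclic shift $\rho\mapsto\rho^{\prime}\mapsto\rho^{\prime\prime}\mapsto\rho$:

```python
import numpy as np

def conj_tuples(a):
    rp, rpp, rho = np.sort(np.roots([1, -a, -(a + 3), -1]).real)
    return [(rho, rp, rpp), (rp, rpp, rho), (rpp, rho, rp)]

def signature(a, elem):
    # sign of the element and of its two conjugates
    return tuple('+' if elem(a, x, y, z) > 0 else '-' for x, y, z in conj_tuples(a))

table = [
    (lambda a, x, y, z: 1.0,                     ('+', '+', '+')),
    (lambda a, x, y, z: x,                       ('+', '-', '-')),  # rho
    (lambda a, x, y, z: y*z*(-x + x**2),         ('+', '-', '-')),  # rho'rho''(-rho+rho^2)
    (lambda a, x, y, z: z*x*(-y + y**2),         ('-', '-', '+')),  # rho''rho(-rho'+rho'^2)
    (lambda a, x, y, z: z*x*(-2*y + y**2),       ('-', '-', '+')),  # rho''rho(-2rho'+rho'^2)
    (lambda a, x, y, z: x*y*(-(a - 1)*z + z**2), ('-', '+', '-')),  # rho rho'(-(a-1)rho''+rho''^2)
]
for a in [7, 15, 30]:
    for elem, sig in table:
        assert signature(a, elem) == sig
```

The opposite signatures in the last column of Table 1 follow by multiplying each element by $-1$.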
###### Lemma 3.4.
Let $a\geq 15$ and $\gamma\succeq\omega^{2}$. If $\omega$ is
$\sigma$-decomposable for some $\sigma$, then $\omega^{2}\in\\{4,9\\}$.
###### Proof.
Now we will consider possible sums of our $\sigma$-indecomposable integers. Note that we can sum up only elements with the same signature. Moreover, opposite signatures (i.e., those with all signs reversed) behave in the same manner and give the same squares, and thus it suffices to study only one of each such pair.
1. (1)
Signature $(+,+,+)$ (respectively, $(-,-,-)$): Here we have only one element,
namely $1$, which produces two $\sigma$-decomposable integers $2$ and $3$
whose squares $4$ and $9$ are totally smaller than $\gamma$.
2. (2)
Signature $(+,-,-)$ (respectively, $(-,+,+)$): The set of
$\sigma$-indecomposables for this signature consists of the elements $\rho$
and $\rho^{\prime}\rho^{\prime\prime}(-\rho+\rho^{2})$. However, we can easily
compute that
1. (a)
$(2\rho)^{2}>4(a+1)^{2}>\gamma$,
2. (b)
$(\rho+\rho^{\prime}\rho^{\prime\prime}(-\rho+\rho^{2}))^{2}=1-4\rho+4\rho^{2}>4a^{2}+4a-3>\gamma$,
3. (c)
$(2\rho^{\prime}\rho^{\prime\prime}(-\rho+\rho^{2}))^{2}=4-8\rho+4\rho^{2}>4a^{2}-8>\gamma$
for $a\geq 15$. Moreover, these results imply that our $\omega$ cannot be a sum of more than two $\sigma$-indecomposable integers either. Thus, for this signature, no square of a $\sigma$-decomposable integer is totally smaller than $\gamma$.
3. (3)
Signature $(-,-,+)$ (respectively, $(+,+,-)$): In this case, we consider
exactly two $\sigma$-indecomposable integers
$\rho^{\prime\prime}\rho(-\rho^{\prime}+\rho^{\prime 2})$ and
$\rho^{\prime\prime}\rho(-2\rho^{\prime}+\rho^{\prime 2})$. However, we can
easily show that
1. (a)
$((2\rho^{\prime\prime}\rho(-\rho^{\prime}+\rho^{\prime
2}))^{2})^{\prime\prime}>\gamma^{\prime\prime}$,
2. (b)
$((\rho^{\prime\prime}\rho(-\rho^{\prime}+\rho^{\prime 2})+\rho^{\prime\prime}\rho(-2\rho^{\prime}+\rho^{\prime 2}))^{2})^{\prime\prime}>\gamma^{\prime\prime}$,
3. (c)
$((2\rho^{\prime\prime}\rho(-2\rho^{\prime}+\rho^{\prime
2}))^{2})^{\prime\prime}>\gamma^{\prime\prime}$
for $a\geq 15$. Thus we do not obtain any additional element.
4. (4)
Signature $(-,+,-)$ (respectively, $(+,-,+)$): This case contains exactly one
$\sigma$-indecomposable integer, namely
$\rho\rho^{\prime}(-(a-1)\rho^{\prime\prime}+\rho^{\prime\prime 2})$. However,
it can be easily computed that
$((2\rho\rho^{\prime}(-(a-1)\rho^{\prime\prime}+\rho^{\prime\prime
2}))^{2})^{\prime\prime}>\gamma^{\prime\prime}$
for $a\geq 15$, thus this case does not produce more elements to consider.
∎
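The key inequalities in cases (2)–(4) of this proof can be spot-checked numerically, under the same assumptions on the defining polynomial and the cyclic labeling of conjugates as before:

```python
import numpy as np

def conj_tuples(a):
    rp, rpp, rho = np.sort(np.roots([1, -a, -(a + 3), -1]).real)
    return [(rho, rp, rpp), (rp, rpp, rho), (rpp, rho, rp)]

for a in [15, 25]:
    t0, _, t2 = conj_tuples(a)
    gam = lambda t: a**2 + a + 8 + (a**2 - a + 1)*t[0] + (2 - a)*t[0]**2
    # case (2): rho and rho'rho''(-rho+rho^2), compared at the first conjugate
    b1, b2 = t0[0], t0[1]*t0[2]*(-t0[0] + t0[0]**2)
    assert min((2*b1)**2, (b1 + b2)**2, (2*b2)**2) > gam(t0)
    # case (3): the '' conjugates of the squared sums exceed gamma''
    c = lambda t, k: t[2]*t[0]*(-k*t[1] + t[1]**2)   # rho''rho(-k rho' + rho'^2)
    v1, v2 = c(t2, 1), c(t2, 2)
    assert min((2*v1)**2, (v1 + v2)**2, (2*v2)**2) > gam(t2)
    # case (4): rho rho'(-(a-1)rho'' + rho''^2), doubled, at the '' conjugate
    d = t2[0]*t2[1]*(-(a - 1)*t2[2] + t2[2]**2)
    assert (2*d)**2 > gam(t2)
```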
### 3.4. Proof of Theorem 1.1
Using the results of Lemmas 3.1, 3.3 and 3.4, we can now prove Theorem 1.1
stated in the introduction.
###### Proof of Theorem 1.1.
In Subsections 3.1, 3.2 and 3.3, we have found all the elements $\omega$ such
that $\gamma\succeq\omega^{2}$ for $a\geq 15$. We have obtained the following
squares:
1. (1)
rational integers $1$, $4$ and $9$,
2. (2)
squares of $\sigma$-indecomposable integers of the form
1. (a)
$\rho^{2}$,
2. (b)
$1-2\rho+\rho^{2}$,
3. (c)
$a^{2}+a+1+(a^{2}-a+1)\rho-(a-1)\rho^{2}$,
4. (d)
$a^{2}-a+(a^{2}-3a+1)\rho-(a-3)\rho^{2}$,
5. (e)
$a^{2}+a-1+(a^{2}-a-3)\rho-(a-2)\rho^{2}$.
Using a computer program (all the calculations were performed in Mathematica),
we can show that we get the same elements (and none more) also for $5\leq
a\leq 14$. For $a=3$, we get two additional elements $20+11\rho-3\rho^{2}$ and
$1+2\rho+\rho^{2}$, and for $a=4$, we obtain $1+2\rho+\rho^{2}$. Nevertheless,
using a procedure similar to the one below, we can prove that even in these cases, we need at least $6$ squares to express $\gamma$. Thus, in the following, we will
suppose $a\geq 5$.
Recall that $\gamma=a^{2}+a+8+(a^{2}-a+1)\rho+(2-a)\rho^{2}$. The coefficient
before $\rho$ of $\gamma$ is clearly odd, thus in every square decomposition
of $\gamma$, we need at least one element with this coefficient odd. Looking
at the list, this is satisfied by the elements
$a^{2}+a+1+(a^{2}-a+1)\rho-(a-1)\rho^{2},\quad a^{2}-a+(a^{2}-3a+1)\rho-(a-3)\rho^{2},\quad a^{2}+a-1+(a^{2}-a-3)\rho-(a-2)\rho^{2}.$
Note that these elements are also the only ones which have a positive coefficient before $\rho$. However, for $a^{2}-a+(a^{2}-3a+1)\rho-(a-3)\rho^{2}$ and $a^{2}+a-1+(a^{2}-a-3)\rho-(a-2)\rho^{2}$, the value of this coefficient is
strictly smaller than $a^{2}-a+1$. Thus, if our square decomposition of
$\gamma$ contained one of these two elements, some other summand would have to be one of the three above-mentioned elements. Nevertheless, in that case, the coefficient before $1$ (we mean the coefficients in the basis $1,\rho,$ and $\rho^{2}$) would be at least $2a^{2}-2a>a^{2}+a+8$ for $a\geq 5$. This is not possible since all the squares totally smaller than $\gamma$ have a non-negative coefficient before $1$. Hence no square decomposition of $\gamma$ can
contain the elements $a^{2}-a+(a^{2}-3a+1)\rho-(a-3)\rho^{2}$ and
$a^{2}+a-1+(a^{2}-a-3)\rho-(a-2)\rho^{2}$.
It implies that one summand of our decomposition must be
$a^{2}+a+1+(a^{2}-a+1)\rho-(a-1)\rho^{2}$, and we get
$\gamma=a^{2}+a+1+(a^{2}-a+1)\rho-(a-1)\rho^{2}+\delta,$
where $\delta=7+\rho^{2}$. Obviously, every square decomposition of $\delta$
may consist of only the elements $1$, $4$, $\rho^{2}$ and $1-2\rho+\rho^{2}$
since the coefficient before $1$ of the other elements from the list is too
large. Nevertheless, $1-2\rho+\rho^{2}$ cannot appear in this decomposition
since its coefficient before $\rho$ is negative, and the remaining three
integers have this coefficient equal to zero. Thus, only the elements $1$,
$4$, and $\rho^{2}$ can appear in a square decomposition of $\delta$, and for
that, we need at least $5$ of these elements. It implies that every square
decomposition of $\gamma$ consists of at least $6$ non-zero squares, which
together with the upper bound, gives $\mathcal{P}(\mathbb{Z}[\rho])=6$. ∎
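The final counting step for $\delta=7+\rho^{2}$ can be replicated by a small brute-force search over multisets of the admissible squares $1$, $4$ and $\rho^{2}$, represented by their coordinates in the basis $1,\rho,\rho^{2}$:

```python
from itertools import product

target = (7, 0, 1)           # delta = 7 + rho^2 in the basis (1, rho, rho^2)
counts = []
for p, q, r in product(range(8), range(3), range(2)):
    # p copies of 1, q copies of 4, r copies of rho^2
    if (p + 4*q, 0, r) == target:
        counts.append(p + q + r)
assert min(counts) == 5      # delta needs at least five of these squares
```

Together with the summand $a^{2}+a+1+(a^{2}-a+1)\rho-(a-1)\rho^{2}$, this accounts for the six squares in total.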
## 4\. The case $-1\leq a\leq 2$
We will now focus on the remaining cases of $a$, i.e., $-1\leq a\leq 2$.
However, the situation is different here. Indeed, already the element $\gamma$ considered above can be expressed as a sum of fewer than $6$ squares in all of these cases, thus it cannot provide the same lower bound as before. Moreover, based on computer experiments, we may conjecture that the Pythagoras number of $\mathbb{Z}[\rho]$ is even less than $6$. Nevertheless, our computer program searches only among elements of small trace, and thus we cannot exclude that there exists an element of large trace that can only be written as a sum of more squares.
The lower bounds on $\mathcal{P}(\mathbb{Z}[\rho])$ for $-1\leq a\leq 2$ are
provided in Table 2. We also show here an example of an element for which this
lower bound is attained.
$a$ | $\mathcal{P}(\mathbb{Z}[\rho])$ | Example of element
---|---|---
$-1$ | $\geq 4$ | $7$
$0$ | $\geq 5$ | $-8\rho+8\rho^{2}$
$1$ | $\geq 5$ | $4-3\rho+2\rho^{2}$
$2$ | $\geq 5$ | $7+\rho^{2}$
Table 2. The lower bound on $\mathcal{P}(\mathbb{Z}[\rho])$ for $-1\leq a\leq 2$ and an example of an element for which this lower bound is attained.
## Acknowledgements
The author is greatly indebted to Pavlo Yatsyna and Vítězslav Kala for their
advice during the preparation of this paper.
## References
* [1] S. Balady, Families of cyclic cubic fields, J. Number Theory 167, 394–406 (2016).
* [2] V. Blomer and V. Kala, Number fields without $n$-ary universal quadratic forms, Math. Proc. Cambridge Philos. Soc. 159 (2), 239–252 (2015).
* [3] V. Blomer and V. Kala, On the rank of universal quadratic forms over real quadratic fields, Doc. Math. 23, 15–34 (2018).
* [4] D. Byeon, Class number 3 problem for the simplest cubic fields, Proc. Amer. Math. Soc. 128, 1319–1323 (2000).
* [5] J. W. S. Cassels, W. J. Ellison and A. Pfister, On sums of squares and on elliptic curves over function fields, J. Number Theory 3, 125–149 (1971).
* [6] M. Čech, D. Lachman, J. Svoboda, M. Tinková and K. Zemková, Universal quadratic forms and indecomposables over biquadratic fields, Math. Nachr. 292, 540–555 (2019).
* [7] W. K. Chan, M.-H. Kim and S. Raghavan, Ternary universal integral quadratic forms over real quadratic fields, Japan. J. Math. 22, 263–273 (1996).
* [8] M. D. Choi, Z. D. Dai, T. Y. Lam and B. Reznick. The Pythagoras number of some affine algebras and local algebras, J. Reine Angew. Math. 336, 45–82 (1982).
* [9] H. Cohn, A device for generating fields of even class number, Proc. Amer. Math. Soc. 7, 595–598 (1956).
* [10] A. Dress and R. Scharlau, Indecomposable totally positive numbers in real quadratic orders, J. Number Theory 14, 292–306 (1982).
* [11] K. Foster, HT90 and “simplest” number fields, Illinois J. Math. 55, 1621–1655 (2011).
* [12] H. J. Godwin, The determination of units in totally real cubic fields, Proc. Cambridge Philos. Soc. 56, 318–321 (1960).
* [13] T. Hejda and V. Kala, Additive structure of totally positive quadratic integers, Manuscripta Math. 163, 263–278 (2020).
* [14] D. W. Hoffmann, Pythagoras numbers of fields, J. Amer. Math. Soc. 12 (3), 839–848 (1999).
* [15] J. S. Hsia, Y. Kitaoka and M. Kneser, Representations of positive definite quadratic forms, J. Reine Angew. Math. 301, 132–141 (1978).
* [16] Y. Hu, The Pythagoras number and the $u$-invariant of Laurent series fields in several variables, J. Algebra 426, 243–258 (2015).
* [17] V. Kala, Universal quadratic forms and elements of small norm in real quadratic fields, Bull. Aust. Math. Soc. 94, 7–14 (2016).
* [18] V. Kala and M. Tinková, Universal quadratic forms, small norms and traces in families of number fields, preprint. https://arxiv.org/abs/2005.12312
* [19] V. Kala and P. Yatsyna, Sums of squares in S-integers, New York J. Math. 26, 1145–1154 (2020).
* [20] V. Kala and P. Yatsyna, Lifting problem for universal quadratic forms, Adv. Math. 377, 107497 (2021).
* [21] B. M. Kim, Finiteness of real quadratic fields which admit positive integral diagonal septenary universal forms, Manuscr. Math. 99, 181–184 (1999).
* [22] B. M. Kim, Universal octonary diagonal forms over some real quadratic fields, Commentarii Math. Helv. 75, 410–414 (2000).
* [23] J. Krásenský, M. Tinková and K. Zemková, There are no universal ternary quadratic forms over biquadratic fields, Proc. Edinb. Math. Soc. 63 (3), 861–912 (2020).
* [24] F. Lemmermeyer and A. Pethö, Simplest Cubic Fields, Manuscripta Math. 88, 53–58 (1995).
* [25] G. Lettl, A lower bound for the class number of certain cubic number fields, Math. Comp. 46, 659–666 (1986).
* [26] S. Louboutin, Class-number problems for cubic number fields, Nagoya Math. J. 138, 199–208 (1995).
* [27] H. Maaß, Über die Darstellung total positiver Zahlen des Körpers $R(\sqrt{5})$ als Summe von drei Quadraten, Abh. Math. Sem. Univ. Hamburg 14, 185–191 (1941).
* [28] W. Narkiewicz, Elementary and analytic theory of algebraic numbers, 3rd Edition, Springer-Verlag, Berlin, 2004.
* [29] O. Perron, Die Lehre von den Kettenbrüchen, B. G. Teubner, 1913.
* [30] M. Peters, Summe von Quadraten in Zahlringen, J. Reine Angew. Math. 268/269, 318–323 (1974).
* [31] A. Pfister, Quadratic forms with applications to algebraic geometry and topology, London Math. Soc. Lect. Notes 217, Cambridge University Press, 1995.
* [32] A. Prestel, Remarks on the Pythagoras and Hasse number of real fields, J. Reine Angew. Math. 303/304, 284–294 (1978).
* [33] H. Sasaki, Quaternary universal forms over $\mathbb{Q}[\sqrt{13}]$, Ramanujan J. 18, 73–80 (2009).
* [34] R. Scharlau, On the Pythagoras number of orders in totally real number fields, J. Reine Angew. Math. 316, 208–210 (1980).
* [35] D. Shanks, The simplest cubic number fields, Math. Comp. 28, 1137–1152 (1974).
* [36] C. L. Siegel, Darstellung total positiver Zahlen durch Quadrate, Math. Z. 11, 246–275 (1921).
* [37] C. L. Siegel, Sums of m-th powers of algebraic integers, Ann. of Math. 46, 313–339 (1945).
* [38] L. Washington, Class numbers of the simplest cubic fields, Math. Comp. 48, 371–384 (1987).
* [39] P. Yatsyna, A lower bound for the rank of a universal quadratic form with integer coefficients in a totally real field, Comment. Math. Helvet. 94, 221–239 (2019).
# Self-Calibrating Active Binocular Vision via Active
Efficient Coding with Deep Autoencoders
Charles Wilmot, Frankfurt Institute for Advanced Studies, Frankfurt, Germany, Email: <EMAIL_ADDRESS>
Bertram E. Shi, Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong, Email: <EMAIL_ADDRESS>
Jochen Triesch, Frankfurt Institute for Advanced Studies, Frankfurt, Germany, Email: <EMAIL_ADDRESS>
###### Abstract
We present a model of the self-calibration of active binocular vision
comprising the simultaneous learning of visual representations, vergence, and
pursuit eye movements. The model follows the principle of Active Efficient
Coding (AEC), a recent extension of the classic Efficient Coding Hypothesis to
active perception. In contrast to previous AEC models, the present model uses
deep autoencoders to learn sensory representations. We also propose a new
formulation of the intrinsic motivation signal that guides the learning of
behavior. We demonstrate the performance of the model in simulations.
Keywords: active efficient coding, intrinsic motivation, binocular vision,
vergence, pursuit, self-calibration, autonomous learning
## 1 Introduction
Human vision is an active process and comprises a number of different types of
eye movements. How the human visual system calibrates itself and learns the
required sensory representations of the visual input signals is only poorly
understood. A better understanding of this process might make it possible to build fully
self-calibrating active vision systems that are robust to perturbations, e.g.,
[1], which could in turn form the basis for models describing the autonomous
learning of object manipulation skills such as reaching or grasping, e.g.,
[2]. Here, we present a model of the self-calibration of active binocular
vision that formulates the task as an intrinsically motivated reinforcement
learning problem. In contrast to classic computer vision solutions to vergence
control and object tracking, our model does not require pre-defined visual
representations or kinematic models. Instead, it learns “from scratch” from
raw images. Nevertheless, it achieves sub-pixel accuracy in its vergence and
tracking movements, demonstrating successful self-calibration.
### 1.1 Biological Background
Humans and many other species have two forward facing eyes providing two
largely overlapping views of the world. Initially, the visual signals are
transformed to electrical signals in the retina. Different types of retinal
ganglion cells are responsible for transmitting different kinds of information
to the brain. In particular, the so-called magnocellular pathway conveys information at the high temporal resolution required for motion vision [3], while the so-called parvocellular pathway has lower temporal resolution but offers color sensitivity and higher spatial resolution [4]. From the retina, information is
kept separate. Only in the primary visual cortex, the next processing stage,
individual neurons receive information from both eyes. In particular, there
are cells that detect small differences in local image structures at
corresponding retinal locations in the left and right eye, so-called binocular
disparities [5]. Furthermore, there are neurons that detect local image motion
[6]. How the response properties of primary visual cortex cells develop has
been the subject of a large body of research [7, 5, 8]. A widely accepted view
is that these representations reflect an optimization of the visual system
towards coding efficiency.
### 1.2 Efficient Coding in Perception
In particular, inspired by information theory, Horace Barlow proposed a model
of sensory coding postulating that neurons minimize the number of spikes
needed for transmitting sensory information [9]. This would help to save
energy, which is highly relevant, since the brain has high metabolic demands.
It has been argued in [10] that retinal receptors can receive information at
a rate of $10^{9}{\rm bit}/s$ [11], while the optic nerve can only transmit
information at $10^{7}{\rm bit}/s$ [12]. This implies that the sensory
information must be substantially compressed. Based on the idea of finding
efficient codes for sensory signals, a large number of models have been
proposed to explain the shapes of receptive fields in sensory brain areas for
different modalities (vision, audition, olfaction, touch). More recently, it
has been argued that adaptation of the organism’s behavior can also help to
make the coding of sensory information more efficient. This theory is called
Active Efficient Coding (AEC) [1, 13, 14]. It models the self-calibration of
sensorimotor loops, where sensory input statistics shape the sensory
representation, the sensory representation shapes the behavior and the
behavior in turn shapes the input statistics. AEC has mostly been studied in
the context of vision. There, it has been shown that AEC models can account
for the self-calibration of active stereo vision, active motion-vision, and
accommodation control or combinations thereof. These models have used only
shallow neural network architectures to learn to encode the sensory signals.
Here, we investigate potential benefits of deeper network architectures by
utilizing deep autoencoders and formulate a new intrinsic reward signal to
simultaneously learn the control of vergence and pursuit eye movements through
reinforcement learning. Our results show that the model achieves sub-pixel
accuracy in a simulated agent in a 3-D environment.
## 2 The Model
Figure 1: Architecture of the model. Two regions are extracted at the center
of the left / right camera images and encoded. The condensed representation is
used to train the $Q$-function. The latter is trained to maximize the reward,
which is proportional to the improvement of the reconstruction error of the
encoder.
### 2.1 Sensory encoding via deep autoencoders
When the two eyes verge on the same point, the foveal regions of the two
retinal images become more and more similar. As a consequence, the mutual
information between the left and right foveal image representations ${\rm
MI}(I_{\rm L};I_{\rm R})$ for a given disparity $d$ increases as $d$ goes to
$0$. It indicates how redundant the left and right images are and reflects the
quality of the fixation. Similarly, tracking a moving object can be achieved
by maximizing the information redundancy of the foveal image region across
time by maximizing ${\rm MI}(I_{t};I_{t-1})$ (for one or more eyes).
Here, we propose to measure the redundancy in the visual data via training an
auto-encoder, hypothesizing that an information stream is better reconstructed
when it is more redundant (see measurements in Figs. 2 and 3). The first
advantage of this technique is that it is agnostic to the underlying data
representation (for example the RGB channels in left and right data streams
could be expressed in different bases or be non-linearly transformed, as only
the data redundancy truly matters). Potentially, the algorithm could also
exploit highly non-linear redundancies between different sensory modalities,
given that the encoding network is sufficiently deep such that it captures
these redundancies. The second advantage of this technique is that it learns a
condensed representation of the sensory information which can be used by other
learning components of the system. Such lossy compression of information may
be essential for learning abstract representations at higher processing
levels.
We consider a binocular vision system with $3$ degrees of freedom: the pan and
the tilt control conjugate horizontal and vertical movements of the gaze, while
the vergence controls the inward and outward movement of the eyes with opposite
sign across the two eyes. To properly fixate objects, the agent controlling
the vergence must increase the redundancy in the left and right camera images,
whereas the agents controlling the pan and tilt must increase the redundancy
between consecutive images. We therefore used different inputs for the
vergence, and for the pan and tilt agents. The visual sensory information for
the vergence agent consists of the left and right images concatenated on the
color dimension, whereas that of the pan and tilt joints consists of the
concatenation of left and right images at time $t$ and $t-1$. The processing
taking place on these inputs is the same for all agents. The different visual
inputs for vergence vs. pan and tilt control reflect the distinction of two
separate visual pathways discussed above: the magnocellular pathway with
greater temporal resolution and lower spatial resolution compared to the
parvocellular pathway.
Let $v(t)$ denote one of the two visual sensory information streams (in the
following we will drop the explicit time dependence for compactness of
notation), and $E$ and $D$ be, respectively, the encoder and decoder parts of
an auto-encoder, such that
$\displaystyle s$ $\displaystyle=E\left(v;\theta_{E}\right)\;\text{and}$ (1)
$\displaystyle\tilde{v}$ $\displaystyle=D\left(s;\theta_{D}\right)\;,$ (2)
with $s$ representing the encoding and $\tilde{v}$ its reconstruction. We use
the loss function
$l=\frac{1}{3N_{\text{pixels}}}\sum_{ijk}(v_{ijk}-\tilde{v}_{ijk})^{2}$ for
training the encoder and decoder weights $\theta_{E}$ and $\theta_{D}$, where
$i$, $j$ and $k$ index the height, width, and color dimensions.
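As a concrete illustration, the encoder/decoder pair of Eqs. (1)-(2) and the loss $l$ can be sketched in a few lines of NumPy. This is a minimal single-layer sketch with illustrative sizes (one flattened $8\times 8$ RGB patch encoded into $24$ units, loosely following Table 1), not the network actually used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: one flattened 8x8 RGB patch (192 values) encoded
# into 24 units, loosely following the patch sizes in Table 1.
n_in, n_code = 8 * 8 * 3, 24
W_e = rng.normal(scale=0.05, size=(n_code, n_in))  # encoder weights theta_E
W_d = rng.normal(scale=0.05, size=(n_in, n_code))  # decoder weights theta_D

def encode(v):
    return np.tanh(W_e @ v)        # s = E(v; theta_E), Eq. (1)

def decode(s):
    return W_d @ s                 # v_tilde = D(s; theta_D), Eq. (2)

def reconstruction_loss(v, v_tilde):
    # l = (1 / (3 N_pixels)) * sum_ijk (v_ijk - v_tilde_ijk)^2,
    # i.e. the mean squared error over all pixel/color entries.
    return np.mean((v - v_tilde) ** 2)

v = rng.uniform(0.0, 1.0, size=n_in)  # one flattened input patch
l = reconstruction_loss(v, decode(encode(v)))
```

Training would then adjust $\theta_{E}$ and $\theta_{D}$ by gradient descent on $l$ (Adam in the experiments below).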
Both eyes see the world from a slightly shifted perspective. The apparent
shift of an object on the retinal image is called binocular disparity. In this
paper we measure it in pixels. In order for the vergence control to work, the
encoding / decoding component needs to learn a representation such that
increasing binocular disparities induce a degradation of the reconstruction
quality. While this is the case for all network architectures we tried, it is
important to verify that the range of binocular disparities at which this is
true matches the range of disparities at which we want the system to operate.
For example, if we want the system to have a fixation accuracy better than $1$
pixel, we must check that the reconstruction error at $1$ pixel disparity is
greater than the reconstruction error at $0$ pixel disparity. Similarly, if we
want the agent to be capable of resolving disparities greater than $10$
pixels, we want the corresponding reconstruction error to be greater than for
lower disparities. Since we want the model to operate over a wide range of
disparities (and object velocities) we encode the visual input at different
spatial scales, much like the retina samples the world at lower resolution
towards the periphery. Details are given in Section 3.
### 2.2 Learning of the Behavior Component
#### 2.2.1 Intrinsically motivated reinforcement learning formulation
We consider the classical Markov decision process framework, where at discrete
time $t=0,1,2,\ldots$ an agent observes sensory information
$s_{t}=E\left(v_{t},\theta_{E}\right)$ and chooses action $a_{t}$ according to
the distribution $\pi\left(a_{t}|s_{t}\right)$. After applying the action, the
agent transitions to a new state according to a transition function
$s_{t+1}=T(s_{t},a_{t})$, and receives a reward $r_{t}$. While reinforcement
learning classically considers a reward provided by the agent’s environment
through a potentially stochastic reward model, we here define an intrinsic
reward based on the agent’s sensory encoding of its environment. Specifically,
we define the reward
$r^{\rm new}_{t}=C\left(l_{t}-l_{t+1}\right)\;,$ (3)
where $C$ is a scaling factor. This reward signal measures the improvement of
encoding quality, i.e., it favors movements that cause transitions from high
to low reconstruction error of the autoencoder representing the visual input.
We also compare the training speed obtained when training the agent with the
simpler reward
$r^{\rm old}_{t}=-Cl_{t+1}\;,$ (4)
which has been used in previous AEC models and simply measures the quality of
the encoding.
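The two reward signals can be written down directly; the only assumption here is plain Python scalars for the reconstruction errors:

```python
C = 600.0  # reward scaling factor, as in Table 2

def reward_new(l_t, l_t1):
    """Improvement of encoding quality, Eq. (3): r_new = C * (l_t - l_{t+1})."""
    return C * (l_t - l_t1)

def reward_old(l_t1):
    """Plain encoding quality, Eq. (4): r_old = -C * l_{t+1}."""
    return -C * l_t1

# A transition that lowers the reconstruction error is rewarded:
assert reward_new(0.30, 0.25) > 0 and reward_old(0.25) < 0
```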
The goal of reinforcement learning is to learn a policy function $\pi$ that
maximizes the (discounted) sum of future rewards $R_{t}$ called return
$R_{t}=\sum_{i=0}^{\infty}\gamma^{i}r_{t+i}$ (5)
where $\gamma$ is a discount factor in $\left[0,1\right]$ ensuring the
convergence of the reward sum. In this particular application of reinforcement
learning, the agent does not need to plan its behaviour ahead, which is
consistent with our observation that the algorithm works best for $\gamma=0$.
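For completeness, a finite-horizon version of the return in Eq. (5); with $\gamma=0$, as used here, it reduces to the immediate reward:

```python
def discounted_return(rewards, gamma):
    """R_t = sum_i gamma^i * r_{t+i}, evaluated backwards over a finite episode."""
    R = 0.0
    for r in reversed(rewards):
        R = r + gamma * R
    return R

episode = [1.0, 2.0, 4.0]
R0 = discounted_return(episode, 0.0)  # gamma = 0: just the immediate reward
```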
#### 2.2.2 RL-Algorithm
We opted for an asynchronous version of the DQN algorithm [15, 16]. It
consists of a $Q$-value function approximation
$q=\begin{pmatrix}q_{1}\\ \vdots\\ q_{n}\end{pmatrix}=Q\left(s,\theta_{Q}\right)\;,$ (6)
where $q_{j}$ represents an estimate of the return after performing discrete
action $j$ in state $s$ and $n$ is the number of possible actions. The loss
for training the $Q$-function is the Huber loss between the estimate and the
return target [17]. Exploration during the training phase is performed via an
$\epsilon$-greedy sampling.
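Two ingredients of the DQN training loop mentioned above, $\epsilon$-greedy exploration and the Huber loss, can be sketched as follows (a schematic, not the asynchronous implementation used in the experiments):

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon pick a random action, otherwise the argmax of q."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda j: q_values[j])

def huber(x, delta=1.0):
    """Huber loss on the TD error x: quadratic near zero, linear in the tails."""
    return 0.5 * x * x if abs(x) <= delta else delta * (abs(x) - 0.5 * delta)
```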
Table 1: Network architectures
Network | Architecture
---|---
Encoder (vergence) | conv $96$ filters, size $8\times 8$, stride $4$
 | conv $24$ filters, size $1\times 1$, stride $1$
Decoder (vergence) | conv $384$ filters, size $1\times 1$, stride $1$
Encoder (pan, tilt) | conv $192$ filters, size $8\times 8$, stride $4$
 | conv $48$ filters, size $1\times 1$, stride $1$
Decoder (pan, tilt) | conv $768$ filters, size $1\times 1$, stride $1$
Critic | conv $2\times 2$, stride $1$
 | max-pooling $2\times 2$, stride $2$
 | flatten
 | concatenate all scales
 | fully-connected $200$
 | fully-connected $9$
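Assuming "valid" (no-padding) convolutions, the spatial sizes implied by Table 1 are easy to check: the $8\times 8$, stride-$4$ first layer maps each $32\times 32$ input region (Section 3) to a $7\times 7$ feature map, and the subsequent $1\times 1$ layers preserve that size.

```python
def conv_out(n, kernel, stride):
    """Output side length of a 'valid' (no-padding) convolution."""
    return (n - kernel) // stride + 1

side = conv_out(32, 8, 4)       # first encoder layer on a 32x32 region -> 7
side = conv_out(side, 1, 1)     # the following 1x1 layers keep this size
```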
Table 2: Parameter values
Parameter | Value
---|---
Pan range | $\left[-10\degree,10\degree\right]$
Tilt range | $\left[-10\degree,10\degree\right]$
Vergence range | $\left[0\degree,20\degree\right]$
Discount factor $\gamma$ | $0$
Encoder learning rate | $5\times 10^{-4}$
Critic learning rate | $5\times 10^{-4}$
Episode length | $10$ iterations
Buffer size | $1000$ transitions
Batch size | $200$ transitions
Epsilon $\epsilon$ | $0.05$
Reward scaling factor $C$ | $600$
## 3 Experimental setup
We conducted our experiments using the robot simulator CoppeliaSim (previously
named V-REP) using the python API PyRep [18], within which a robot head
composed of two cameras separated by $6.8$ cm was simulated. Each camera has a
resolution of $240\times 320$ $\mathrm{p}\mathrm{x}$ (height $\times$ width)
for a horizontal field of view of $90\degree$ (therefore $1$ pixel
$=0.28\degree$). To make the results easier to interpret, we expressed all
angles and angular velocities in $\mathrm{px}$ and $\mathrm{px/iteration}$,
respectively. In the simulated environment, a screen was moving at uniform
speeds varying from $0$ to $4$ $\mathrm{px/iteration}$
in front of the robot head, at distances between $0.5$ and $5$ meters. The
screen displayed natural stimuli taken from the McGill Calibrated Color Image
Database [19].
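The pixel/degree conversion quoted above follows directly from the camera geometry; note that it also makes a $32$ px fine-scale region span the stated $9\degree$:

```python
# 320 px of image width cover a 90 degree horizontal field of view.
width_px, fov_deg = 320, 90.0
deg_per_px = fov_deg / width_px   # 0.28125, quoted as ~0.28 deg/px

def px_to_deg(px):
    return px * deg_per_px
```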
The two spatial scales of visual processing of the cameras are realized by
extracting two centered $32\times 32$ pixel regions per camera (cf. Figure 1),
respectively covering a field of view of $9\degree$ (fine scale) and
$27\degree$ (coarse scale). The auto-encoder for each scale corresponds to a
$3$-layered fully-connected network encoding patches of size $8\times 8$
pixels. This patch-wise autoencoder is implemented as a convolutional neural
network with filter size $8\times 8$ in the first layer and $1\times 1$ in the
following layers (see Tab. 1). Figure 3 shows the reconstruction error as a
function of the binocular disparity for each scale after learning.
The critic network $Q$ can be described in $2$ parts. The first part operates
individually on each scale. It is composed of a convolutional layer followed
by a pooling layer. The results are then flattened and concatenated before
being processed by $2$ fully-connected layers in the second part (cf. Tab. 1).
All networks are trained using the Adam algorithm [20] with a learning rate of
$5\times 10^{-4}$. We use a value of $\epsilon=0.05$ for the $\epsilon$-greedy
sampling. The training is divided into episodes of $10$ iterations. Each time
an episode is simulated, all its transitions are placed in a replay buffer of
size 1000 and a batch of data is then sampled uniformly at random from the
buffer for training the networks. We use a batch size of $200$. The training
is spread over multiple processes, each simulating one agent. Each process
asynchronously pulls the current weight values from a server, uses them to
compute the weight updates $\Delta\theta$, and sends them to the server, which
is responsible for performing the updates.
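The asynchronous scheme can be sketched as a parameter server plus workers. The single-process toy below (illustrative names, a scalar "weight", and a made-up quadratic loss) only shows the pull / compute-$\Delta\theta$ / push cycle; the real training runs many workers in separate processes.

```python
server_weights = {"theta": 1.0}  # held by the server process

def pull():
    return dict(server_weights)            # worker fetches current weights

def push(delta):
    for name, d in delta.items():          # server applies updates in arrival order
        server_weights[name] += d

def worker_step(lr=5e-4):
    theta = pull()["theta"]
    grad = 2.0 * theta                     # toy gradient (loss = theta^2)
    push({"theta": -lr * grad})            # send Delta-theta back to the server

for _ in range(3):                         # three (here sequential) worker updates
    worker_step()
```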
The robot has $3$ joints available to control the eyes. All joints used the
same action discretization. However, the vergence joint operates in velocity
control mode, while the pan and tilt actions are interpreted as accelerations.
The action set we used for all joints is the following:
$\left[-4,-2,-1,-\frac{1}{2},0,\frac{1}{2},1,2,4\right]$
$\mathrm{px/iteration}$ (vergence), or $\mathrm{px/iteration^{2}}$
(pan and tilt). The vergence angle of the robot’s eyes is constrained between
$0\degree$ (parallel optical axes of the two eyes) and $20\degree$ (inward
rotated eyes). The pan and tilt joints are constrained to remain in
$\left[-10\degree,10\degree\right]$.
At regular intervals, the training is paused, and the agents’ performances are
measured. For evaluating the agents’ performances, we gather two sets of data
at each testing step. One, the controlled-error set, gauges the performance of
the agents under defined apparent disparities. The other, the behaviour set,
measures how the policies of the agents recover from initial disparities. All
measurements are repeated for $20$ stimuli displayed on the screen $2$
$\mathrm{m}$ away from the eyes of the robot. To construct the controlled-
error set, we simulated various pan, tilt and vergence errors by manually
setting the speed of the screen and the vergence angle of the eyes. Only one
joint was tested at a time, meaning that the errors for the two others were
set to $0$. We then recorded the reconstruction errors of the fine and coarse
scales and the agents’ preferred actions. The behaviour set is the recording
of $20$ iterations of the agents’ behaviour, starting from controlled initial
pan, tilt, or vergence errors.
## 4 Results
For successful learning, the pan, tilt and vergence errors must become
reflected in the reconstruction errors of the encoders, as explained in
Section 2.1. We start by analyzing the reconstruction errors of the encoders
for every pan, tilt, or vergence error at the end of training with reward
function $r^{\rm new}$. Figures 2 and 3 show that for each joint, the
reconstruction error is minimal when the absolute joint error is minimal. In
particular, Figure 2 shows the mean reconstruction error for each stimulus
displayed on the screen (in blue) and the average for all stimuli (in red),
while Figure 3 shows the mean reconstruction error for each scale separately.
Repeating the same analysis using random weights for the encoder and decoder
shows no difference in the reconstruction quality for low or high absolute
errors (the mean error curve is flat instead of being V-shaped, with values
around $0.27$, not shown). The characteristic V-shaped curves are a
consequence of both learning a compact code of the visual input and adapting
the behavior to shape the statistics of the visual input [21].
Figure 2: The autoencoders’ reconstruction errors averaged across the two
scales as a function of the pan, tilt, and vergence error. Each blue curve
corresponds to a different stimulus displayed on the screen. The red curve
represents the mean. For each plot, the error for the two other joints is set
to $0$.
Figure 3: The autoencoders’ mean reconstruction errors plotted separately for
the two scales as a function of the pan, tilt, and vergence error. For the
generation of each plot, the error for the two other joints has been set to
$0$.
To show the precision of the learnt policies, we represent for each joint the
probability of selecting each possible action in the action set as a function
of that joint’s error in Figure 4. The diagonal shapes in the three policies
indicate that the model has learned to accurately compensate for any vergence,
pan, or tilt errors.
Figure 4: Probability of choosing each action in the action sets as a function
of the pan, tilt, and vergence error. For the generation of each plot, the
error for the two other joints has been set to $0$.
To show the speed at which the algorithm converges and compare the two reward
functions $r^{\rm new}$ and $r^{\rm old}$, Figure 5 plots the average training
error at the end of episodes (i.e. the joints’ absolute errors while following
the $\epsilon$-greedy policies) as a function of training time for both
rewards. The testing error is measured at regular intervals as the mean
absolute joint error after $10$ iterations of following the greedy policy,
starting from initial errors of $-4$, $-2$, $2$ and $4$ $\mathrm{px}$
(vergence) or $\mathrm{px/iteration}$
(pan and tilt). From the plots it is evident that the reward $r^{\rm new}$
measuring the improvement in encoding quality (Eq. 3), leads to faster
convergence. Focusing on the improvement of the encoding quality helps the
system to deal with very different levels of absolute difficulty for encoding
different stimuli (cf. Fig. 2).
Figure 5: Reduction of errors with training time for the $2$ different
rewards. The blue curves correspond to the “improvement” reward $r^{\rm new}$
(Eq. 3) and the red curves to $r^{\rm old}$ (Eq. 4). The light curves show the
pan, tilt, and vergence errors at the last iteration of an episode as a
function of training time when following the $\epsilon$-greedy policies. The
data is averaged over 5 independent runs and smoothed for clarity. The dark
curves indicate the performance of the greedy policy after $10$ iterations,
starting from initial (speed) errors in $\left[-4,4\right]$
$\mathrm{px}$ (vergence) or $\mathrm{px/iteration}$
(pan and tilt), with a step size of half a pixel. Vertical bars indicate the
standard deviation across $5$ runs. The testing performance of the new reward
is consistently below $1$ $\mathrm{px}$ (vergence) or $\mathrm{px/iteration}$
(pan and tilt) after $10,000$ episodes of training demonstrating sub-pixel
accuracy.
Finally, to show how quickly and accurately the algorithm fixates objects and
tracks them, Figure 6 shows the mean accuracy and its standard deviation,
during 20 consecutive iterations, starting from all errors in
$\left[-4,4\right]$ $\mathrm{px}$ (vergence) or $\mathrm{px/iteration}$
(pan and tilt) with a step size of half a pixel. Subpixel error levels are
typically reached in just one or two iterations despite the discrete action
set. Note that an error of, e.g., 3 pixels cannot be reduced to zero in a
single step because the two closest discrete actions of 2 pixels and 4 pixels
would both lead to a remaining error of 1 pixel.
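The residual-error argument can be checked numerically with the action set given above (values in px for vergence, px/iteration² for pan and tilt):

```python
ACTIONS = [-4, -2, -1, -0.5, 0, 0.5, 1, 2, 4]

def best_correction(error):
    """Greedy one-step choice: the action that best cancels the current error."""
    return min(ACTIONS, key=lambda a: abs(error + a))

# A 3 px error: the closest actions (-2 and -4) both leave 1 px behind.
residual = abs(3 + best_correction(3))
```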
Figure 6: Rapid object fixation and tracking. Pan speed error, tilt speed
error, and vergence error are decreasing quickly during one episode and
typically reach subpixel levels in one or two steps. The shaded region
indicates one standard deviation.
## 5 Discussion
Understanding the development of the human mind and replicating this
development in artificial cognitive agents is a grand challenge for 21st
century science. Here, we focused on a very early step in this development,
which lays the foundation for most of what follows: the development of early
visual representations and the ability to self-calibrate accurate eye
movements. From learning to manipulate objects to interacting with social
partners, vision is a key sensory modality. In this work, we focused on active
stereo and motion vision, proposing a model for their completely autonomous
self-calibration. Our work falls inside but also extends the recently proposed
Active Efficient Coding (AEC) framework, which is itself an extension of
Barlow’s classic efficient coding hypothesis [9] to active perception and
therefore rooted in Shannon’s information theory. AEC postulates that visual
representations and eye movements are jointly optimized to maximize the
efficiency of the visual system to encode information. Along these lines,
previous models have shown how AEC can explain the development of active
stereo vision [21, 22], active motion vision [23, 13], as well as the control
of torsional eye movements [24] and accommodation [14] and various
combinations thereof, e.g., [25, 26]. The two key innovations of the present
work are to consider “deeper” sensory representations compared to the shallow
sparse coding approaches used earlier and to use a new intrinsic reward
formulation. Regarding the first innovation, we have employed deep
convolutional autoencoders for the sensory encoding stage and a Deep Q-Network
(DQN) [15, 16] to map the learned representations onto behavior and obtained
very good results. The model quickly achieves sub-pixel accuracy in all
degrees of freedom. Regarding the second innovation, we have shown that an
intrinsic reward for improvements in encoding quality leads to faster
convergence.
From the perspective of biological plausibility, this success comes at a
price, however. The learning algorithms used to train the auto-encoder and the
DQN rely on error back-propagation mechanisms, which most researchers in the
field consider biologically implausible. While this should be
considered a weakness when evaluating this work as a model of biological
mechanisms, it is not problematic from a robotics application perspective.
Another limitation of the approach is that accurate performance requires the
object to be fixated and tracked to be sufficiently big. If the object
fills only a fraction of the two regions defining the two spatial scales, then
multiple disparities and velocities are present within these regions, because
the rest is filled by background. In this case, the system will be “confused”
and may decide to fixate and stabilize the background. To deal with this
problem, a mechanism for foreground/background separation needs to be
introduced.
Our work follows the traditional structure of AEC models using a separation
into two distinct learning modules — the first being the deep auto-encoders
for unsupervised learning of a sensory representation and generation of reward
signals and the second being the DQN for learning behavior via reinforcement
learning. It should be questioned however, if this “hard” separation is
strictly necessary. An alternative architecture might try to blend these
functions into a single more homogeneous network. This topic is left for
future work.
Another interesting direction for future work is to consider other sensory
modalities. Ongoing work (unpublished) is revealing that AEC can also be used
to model the self-calibration of echolocation in bats in the auditory
modality. More generally, the combination of different sensory modalities is
an interesting topic for future research. Arguably, a key step in cognitive
development is discovering and understanding the relationships between sensory
signals from different modalities in order to arrive at a unified
representation of the world.
## Funding
This project has received funding from the European Union’s Horizon 2020
research and innovation program under grant 713010. J. Triesch is supported by
the Johanna Quandt Foundation.
## References
* [1] L. Lonini, S. Forestier, C. Teulière, Y. Zhao, B. Shi, and J. Triesch, “Robust active binocular vision through intrinsically motivated learning,” _Frontiers in Neurorobotics_ , vol. 7, p. 20, 2013. [Online]. Available: https://www.frontiersin.org/article/10.3389/fnbot.2013.00020
* [2] F. D. L. Bourdonnaye, C. Teulière, J. Triesch, and T. Chateau, “Stage-wise learning of reaching using little prior knowledge,” _Front. Robotics and AI_ , vol. 2018, 2018.
* [3] A. M. Jeffries, N. J. Killian, and J. S. Pezaris, “Mapping the primate lateral geniculate nucleus: a review of experiments and methods,” _J Physiol Paris_ , 2014.
* [4] X. Xu, J. M. Ichida, J. D. Allison, J. D. Boyd, A. Bonds, and V. A. Casagrande, “A comparison of koniocellular, magnocellular and parvocellular receptive field properties in the lateral geniculate nucleus of the owl monkey (aotus trivirgatus),” _The Journal of physiology_ , vol. 531, no. 1, pp. 203–218, 2001.
* [5] N. Qian, “Binocular disparity and the perception of depth,” _Neuron_ , vol. 18, no. 3, pp. 359–368, 1997.
* [6] A. Borst and M. Egelhaaf, “Principles of visual motion detection,” _Trends in neurosciences_ , vol. 12, no. 8, pp. 297–306, 1989.
* [7] J. Y. Lettvin, H. R. Maturana, W. S. McCulloch, and W. H. Pitts, “What the frog’s eye tells the frog’s brain,” _Proceedings of the IRE_ , vol. 47, no. 11, pp. 1940–1951, 1959.
* [8] B. Scholl, J. Burge, and N. J. Priebe, “Binocular integration and disparity selectivity in mouse primary visual cortex,” _Journal of neurophysiology_ , vol. 109, no. 12, pp. 3013–3024, 2013.
* [9] H. Barlow, “Possible principles underlying the transformations of sensory messages,” _Sensory Communication_ , vol. 1, 01 1961.
* [10] L. Zhaoping, “Theoretical understanding of the early visual processes by data compression and data selection,” _Network (Bristol, England)_ , vol. 17, pp. 301–34, 01 2007.
* [11] D. H. Kelly, “Information capacity of a single retinal channel,” _IRE Trans. Information Theory_ , vol. 8, pp. 221–226, 1962.
* [12] S. Nirenberg, S. M. Carcieri, A. L. Jacobs, and P. E. Latham, “Retinal ganglion cells act largely as independent encoders,” _Nature_ , vol. 411, no. 6838, pp. 698–701, 2001. [Online]. Available: https://doi.org/10.1038/35079612
* [13] T. N. Vikram, C. Teulière, C. Zhang, B. E. Shi, and J. Triesch, “Autonomous learning of smooth pursuit and vergence through active efficient coding,” in _4th International Conference on Development and Learning and on Epigenetic Robotics_ , 2014, pp. 448–453.
* [14] S. Eckmann, L. Klimmasch, B. E. Shi, and J. Triesch, “Active efficient coding explains the development of binocular vision and its failure in amblyopia,” _Proceedings of the National Academy of Sciences_ , vol. 117, no. 11, pp. 6156–6162, 2020. [Online]. Available: https://www.pnas.org/content/117/11/6156
* [15] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, “Playing atari with deep reinforcement learning,” _arXiv preprint arXiv:1312.5602_ , 2013.
* [16] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu, “Asynchronous methods for deep reinforcement learning,” in _International conference on machine learning_ , 2016, pp. 1928–1937.
* [17] P. J. Huber, “Robust estimation of a location parameter,” _Ann. Math. Statist._ , vol. 35, no. 1, pp. 73–101, 03 1964. [Online]. Available: https://doi.org/10.1214/aoms/1177703732
* [18] S. James, M. Freese, and A. J. Davison, “Pyrep: Bringing v-rep to deep robot learning,” _arXiv preprint arXiv:1906.11176_ , 2019.
* [19] A. Olmos and F. A. A. Kingdom, “A biologically inspired algorithm for the recovery of shading and reflectance images,” _Perception_ , vol. 33, no. 12, pp. 1463–1473, 2004, pMID: 15729913. [Online]. Available: https://doi.org/10.1068/p5321
* [20] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” _arXiv preprint arXiv:1412.6980_ , 2014.
* [21] Y. Zhao, C. A. Rothkopf, J. Triesch, and B. E. Shi, “A unified model of the joint development of disparity selectivity and vergence control,” in _2012 IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL)_. IEEE, 2012, pp. 1–6.
* [22] L. Lonini, Y. Zhao, P. Chandrashekhariah, B. E. Shi, and J. Triesch, “Autonomous learning of active multi-scale binocular vision,” in _2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)_ , 2013, pp. 1–6.
* [23] Q. Zhu, J. Triesch, and B. E. Shi, “Autonomous, self-calibrating binocular vision based on learned attention and active efficient coding,” _2017 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)_ , pp. 27–32, 2017.
* [24] Q. Zhu, C. Zhang, J. Triesch, and B. E. Shi, “Autonomous learning of cyclovergence control based on active efficient coding,” in _2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics, ICDL-EpiRob 2018, Tokyo, Japan, September 17-20, 2018_. IEEE, 2018, pp. 251–256. [Online]. Available: https://doi.org/10.1109/DEVLRN.2018.8761033
* [25] A. Priamikov, V. Narayan, B. E. Shi, and J. Triesch, “The role of contrast sensitivity in the development of binocular vision: A computational study,” in _2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)_ , 2015, pp. 33–38.
* [26] A. Lelais, J. Mahn, V. Narayan, C. Zhang, B. E. Shi, and J. Triesch, “Autonomous development of active binocular and motion vision through active efficient coding,” _Frontiers in neurorobotics_ , vol. 13, p. 49, 2019.
# Effect of new jet substructure measurements on Pythia8 tunes
Deepak Kar, School of Physics, University of Witwatersrand, Johannesburg,
South Africa. Email: <EMAIL_ADDRESS>
Pratixan Sarmah, Department of Physics, BITS Pilani, Rajasthan, India.
Email: <EMAIL_ADDRESS>
###### Abstract
This master’s project used the recent ATLAS jet substructure measurements to
see if any improvements can be made to the commonly used Pythia8 Monash and
A14 tunes.
###### keywords:
Pythia8, jet substructure, FSR, tune
## 1 Introduction
The commonly used Pythia8 [1, 2] tunes, Monash [3] and A14 [4], are rather
dated, and the latter was observed to have some tension with LEP measurements,
primarily due to its lower Final State Radiation (FSR) $\alpha_{s}$ value. In
the last couple of years, a plethora of jet substructure [5, 6, 7, 8] measurements
have been published by both ATLAS and CMS collaborations, utilising LHC Run 2
data. Here, we investigate the effect of four such ATLAS measurements on
parameters sensitive to jet substructure observables.
## 2 Tuning setup
The following ATLAS measurements were considered in this study (along with
their Rivet identifiers):
* 1.
Soft-Drop Jet Mass [9] (ATLAS_2017_I1637587)
* 2.
Jet substructure measurements in multijet events [10] (ATLAS_2019_I1724098)
* 3.
Soft-drop observables [11] (ATLAS_2019_I1772062)
* 4.
Lund jet plane with charged particles [12] (ATLAS_2020_I1790256)
The following parameters were considered in this tuning exercise, with the
ranges stated in Table 1.
Parameter | Lower value | Upper value
---|---|---
BeamRemnants:primordialKThard | 1.25 | 3
ColorReconnection:range | 1.25 | 3
TimeShower:pTmin | 0.5 | 1.5
MultipartonInteractions:pT0Ref | 1.5 | 3
TimeShower:alphaSvalue | 0.118 | 0.145
Table 1: Sampling range of the parameters considered
Weighted HardQCD events were generated with a pTHatMin of 300 GeV. 100
sampling runs were performed, each with 100000 events. Rivet 3 [13] and the
Professor tuning system [14] were used. The goodness of the sampling and the
weight file used can be found in Appendix 5.2 and Appendix 5.3.
## 3 Results
The first step was to ascertain where there is scope for improvement. While a
detailed observable-by-observable determination was performed (see Appendix
5.1), here we highlight the most salient features:
* 1.
For the Lund Jet Plane (LJP) distributions, we observed that the hard
wide-angle emissions part is better modelled by the Monash tune, whereas the
region ranging from the UE/MPI to the soft-collinear and collinear limits is
in general better modelled by the A14 tune. However, these distributions also
offer the biggest scope for improved modelling.
* 2.
For the soft drop $\rho$ and $r_{g}$ observables, the Monash tune in general
performs somewhat better than A14. One deviation from this trend is when the
jet construction is cluster based, in which case the A14 tune performs better
over a large range.
* 3.
Both the Jet Substructure and Soft drop jet mass distributions are somewhat
better modelled by the A14 tune.
Table 2 lists the parameter values of A14 and Monash along with our tuned
values. A separate tune for LJP was performed as this analysis had the largest
discrepancy. The LJP tune column shows the parameter values corresponding to
the best tune for LJP and the Common Tune column shows the values of the best
tune for all the analyses considered. Figures 1 and 2 show the tuned
distributions for the one-dimensional vertical and horizontal slices of the
LJP, respectively. Figure 3 shows the tuned distributions for the soft drop
observables, Figure 4 those for the soft drop jet mass, and Figure 5 those
for the jet substructure observables.
Parameters | A14 | Monash | LJP Tune | Common Tune
---|---|---|---|---
BeamRemnants:primordialKThard | 1.88 | 1.8 | 2.288 | 2.065
ColorReconnection:range | 1.71 | 1.8 | 2.73 | 1.69
TimeShower:pTmin | 0.40 | 0.50 | 1.288 | 0.775
MultipartonInteractions:pT0Ref | 2.09 | 2.28 | 2.766 | 2.91
TimeShower:alphaSvalue | 0.127 | 0.1365 | 0.1308 | 0.1309
Table 2: Comparison of tuned values with Monash and A14
Figure 1: Comparison of our tunes with A14 and Monash tunes for Lund Jet Plane
distributions (vertical slices)
Figure 2: Comparison of our tunes with A14 and Monash tunes for Lund Jet Plane
distributions (horizontal slices)
Figure 3: Comparison of our tunes with A14 and Monash tunes for soft drop
observable distributions
Figure 4: Comparison of our tunes with A14 and Monash tunes for soft drop jet
mass distributions
Figure 5: Comparison of our tunes with A14 and Monash tunes for jet
substructure observable distributions for the dijet selection
## 4 Summary
The results obtained show small improvements of roughly 5-10% in the Lund Jet
Plane and soft drop mass distributions over the previous A14 and Monash
tunes. As can be seen in Table 2, the parameter values of the tunes obtained
are pulled up from the A14 and Monash values. In the case of the LJP, we see
that the A14 and Monash tunes deviate most from the data near the peaks of
the distributions. This is the region where soft-collinear effects transition
to UE/MPI effects in the LJP. Since the tunes we obtained improve this region
of the distributions, it can be inferred that higher values of these
parameters facilitate more soft radiation in the final state.
In the case of the soft drop observable distributions, there are regions that
require the generation of more mass to model the data better. These compete
with the LJP values and decrease the values of the parameters:
BeamRemnants:primordialKThard from 2.288 to 2.065, ColorReconnection:range
from 2.73 to 1.69, and TimeShower:pTmin from 1.288 to 0.775. For the other
two parameters, MultipartonInteractions:pT0Ref and TimeShower:alphaSvalue,
the values increased slightly.
## Acknowledgements
DK is funded by the National Research Foundation (NRF), South Africa through
the Competitive Programme for Rated Researchers (CPRR), Grant No: 118515. We
thank Andy Buckley and Holger Schulz for technical assistance with the
Professor program, as well as for physics discussions.
## References
* [1] T. Sjostrand, S. Mrenna, P. Z. Skands, A Brief Introduction to PYTHIA 8.1, Comput. Phys. Commun. 178 (2008) 852–867. arXiv:0710.3820, doi:10.1016/j.cpc.2008.01.036.
* [2] T. Sjöstrand, S. Ask, J. R. Christiansen, R. Corke, N. Desai, P. Ilten, S. Mrenna, S. Prestel, C. O. Rasmussen, P. Z. Skands, An Introduction to PYTHIA 8.2, Comput. Phys. Commun. 191 (2015) 159–177. arXiv:1410.3012, doi:10.1016/j.cpc.2015.01.024.
* [3] P. Skands, S. Carrazza, J. Rojo, Tuning PYTHIA 8.1: the Monash 2013 Tune, Eur. Phys. J. C74 (8) (2014) 3024. arXiv:1404.5630, doi:10.1140/epjc/s10052-014-3024-y.
* [4] ATLAS Collaboration, ATLAS Pythia 8 tunes to $7\;\mbox{TeV}$ data, ATL-PHYS-PUB-2014-021 (2014).
URL https://cds.cern.ch/record/1966419
* [5] A. Altheimer, et al., Jet Substructure at the Tevatron and LHC: New results, new tools, new benchmarks, J. Phys. G39 (2012) 063001. arXiv:1201.0008, doi:10.1088/0954-3899/39/6/063001.
* [6] A. Altheimer, et al., Boosted objects and jet substructure at the LHC. Report of BOOST2012, held at IFIC Valencia, 23rd-27th of July 2012, Eur. Phys. J. C74 (3) (2014) 2792. arXiv:1311.2708, doi:10.1140/epjc/s10052-014-2792-8.
* [7] S. Marzani, G. Soyez, M. Spannowsky, Looking inside jets: an introduction to jet substructure and boosted-object phenomenology, Vol. 958, Springer, 2019. arXiv:1901.10342, doi:10.1007/978-3-030-15709-8.
* [8] R. Kogler, et al., Jet Substructure at the Large Hadron Collider: Experimental Review, Rev. Mod. Phys. 91 (4) (2019) 045003. arXiv:1803.06991, doi:10.1103/RevModPhys.91.045003.
* [9] ATLAS Collaboration, Measurement of the Soft-Drop Jet Mass in pp Collisions at $\sqrt{s}=13$ TeV with the ATLAS Detector, Phys. Rev. Lett. 121 (9) (2018) 092001. arXiv:1711.08341, doi:10.1103/PhysRevLett.121.092001.
* [10] ATLAS Collaboration, Measurement of jet-substructure observables in top quark, $W$ boson and light jet production in proton-proton collisions at $\sqrt{s}=13$ TeV with the ATLAS detector, JHEP 08 (2019) 033. arXiv:1903.02942, doi:10.1007/JHEP08(2019)033.
* [11] ATLAS Collaboration, Measurement of soft-drop jet observables in $pp$ collisions with the ATLAS detector at $\sqrt{s}$ =13 TeV, Phys. Rev. D 101 (5) (2020) 052007. arXiv:1912.09837, doi:10.1103/PhysRevD.101.052007.
* [12] ATLAS Collaboration, Measurement of the Lund Jet Plane Using Charged Particles in 13 TeV Proton-Proton Collisions with the ATLAS Detector, Phys. Rev. Lett. 124 (22) (2020) 222002. arXiv:2004.03540, doi:10.1103/PhysRevLett.124.222002.
* [13] C. Bierlich, et al., Robust Independent Validation of Experiment and Theory: Rivet version 3, SciPost Phys. 8 (2020) 026. arXiv:1912.05451, doi:10.21468/SciPostPhys.8.2.026.
* [14] A. Buckley, H. Hoeth, H. Lacker, H. Schulz, J. E. von Seggern, Systematic event generator tuning for the LHC, Eur. Phys. J. C 65 (2010) 331–357. arXiv:0907.2973, doi:10.1140/epjc/s10052-009-1196-7.
## 5 Appendix
### 5.1 Tune performance
Plots | Observable | $\ln(R/\Delta R)$ or | Better Tune Performance regions | Better Tune
---|---|---|---|---
| | $\ln(1/z)$ Slice | A14 | Monash | (overall)
d03-x01-y01 | $\ln(1/z)$ | 0.00-0.33 | 4.3-6 | 0.7-4.3 | Monash
d04-x01-y01 | $\ln(1/z)$ | 0.33-0.67 | 3.4-4.2 | 0.7-3.4 , 4.4-6 | Monash
d05-x01-y01 | $\ln(1/z)$ | 0.67-1.00 | 3-5 | 0.7-3 , 5-6 | Monash
d06-x01-y01 | $\ln(1/z)$ | 1.00-1.33 | 3-4.2 | 0.7-3 , 4.2-6 | Monash
d07-x01-y01 | $\ln(1/z)$ | 1.33-1.67 | 2.4-4.2 | 0.7-2.4 , 4.3-6 | Monash
d08-x01-y01 | $\ln(1/z)$ | 1.67-2.00 | 2-4 , 5.2 , 5.8 | 0.7-2 , 4-5 , 5.5 | -
d09-x01-y01 | $\ln(1/z)$ | 2.00-2.33 | 1.8-4 , 4.7, 5.2-6 | 0.7-1.8, 4-4.4 | A14
d10-x01-y01 | $\ln(1/z)$ | 2.33-2.67 | 1.6-3, 4.4-5.8 | 0.7-1.4, 3.2-4.2 | A14
d11-x01-y01 | $\ln(1/z)$ | 2.67-3.00 | 1.4-5 | 0.7-1.2, 5.2-5.8 | A14
d12-x01-y01 | $\ln(1/z)$ | 3.00-3.33 | 0.8-1.4, 3-4, 4.7, 5.8 | 1.6-3, 3.3, 4-4.5, 5-5.6 | -
d13-x01-y01 | $\ln(1/z)$ | 3.33-3.67 | 0.7-3.4, 4.6-5 | 3.6-4.4, 5.1-6 | A14
d14-x01-y01 | $\ln(1/z)$ | 3.67-4.00 | 0-3.4 | 3.6-4.6 | A14
d15-x01-y01 | $\ln(1/z)$ | 4.00-4.33 | 0.7-2.6, 3.6, 4-4.6, 5, 5.57 | 2.7-3.4, 3.9, 4.7, 5.23 | A14
d16-x01-y01 | $\ln(R/\Delta R)$ | 0.69-0.97 | 3-4.5 | 0-3 | Monash
d17-x01-y01 | $\ln(R/\Delta R)$ | 0.97-1.25 | 3-4 | 0-3 | Monash
d18-x01-y01 | $\ln(R/\Delta R)$ | 1.25-1.52 | 2.7-3.2, 3.5-4 | 0-2.6 | Monash
d19-x01-y01 | $\ln(R/\Delta R)$ | 1.52-1.80 | 2.4-4.5 | 0.5-2.3 | -
d20-x01-y01 | $\ln(R/\Delta R)$ | 1.80-2.08 | 2-3, 3.3-4.5 | 0-2 | -
d21-x01-y01 | $\ln(R/\Delta R)$ | 2.08-2.36 | 1.6-4.5 | 0-1.6, 2.2, 3.4-4 | -
d22-x01-y01 | $\ln(R/\Delta R)$ | 2.36-2.63 | 1.4-3, 3.6-4.5 | 0-1.3, 3-3.6 | A14
d23-x01-y01 | $\ln(R/\Delta R)$ | 2.63-2.91 | 1.4-3, 3.5 | 0-1.3, 3-4.5 | -
d24-x01-y01 | $\ln(R/\Delta R)$ | 2.91-3.19 | 0.6-4.5 | 0-0.5 | A14
d25-x01-y01 | $\ln(R/\Delta R)$ | 3.19-3.47 | 0.6-2.4, 3.4-4.5 | 0-0.5, 2.4-3.3 | A14
d26-x01-y01 | $\ln(R/\Delta R)$ | 3.47-3.74 | 0.5-4.5 | 0.2, 1.2, 2.5, 3.5 | A14
d27-x01-y01 | $\ln(R/\Delta R)$ | 3.74-4.02 | 0.3-4.5 | 0.2 | A14
d28-x01-y01 | $\ln(R/\Delta R)$ | 4.02-4.30 | 0-1.6, 3.7-4.5 | 1.7-3.6 | -
d29-x01-y01 | $\ln(R/\Delta R)$ | 4.30-4.57 | 0.2, 0.5,1.2, 2.3-4.5, | 0.8, 1-2.2, 3.2, 3.8 | -
d30-x01-y01 | $\ln(R/\Delta R)$ | 4.57-4.85 | 0.2, 1.6-3.3, 3.5 | 0.3-1.5, 3.7-4.5 | -
d31-x01-y01 | $\ln(R/\Delta R)$ | 4.85-5.13 | 0-3, 3.4-4.5 | 3.2 | A14
d32-x01-y01 | $\ln(R/\Delta R)$ | 5.13-5.41 | 0.2, 1.7-4 | 0.4-1.7, 2.75 | A14
d33-x01-y01 | $\ln(R/\Delta R)$ | 5.41-5.68 | 0.2, 2-3 | 0.5-2, 3.2-4 | Monash
d34-x01-y01 | $\ln(R/\Delta R)$ | 5.68-5.96 | 1.7-2.3 | 0-1.7, 2.5-3 | Monash
Table 3: ATLAS_2020_I1790256 (LJP)
Plots | | $\beta$ | $z_{cut}$ | Observable | A14 | Monash | Better Tune
---|---|---|---|---|---|---|---
d01-x01-y01 | Calorimeter based | 0 | 0.1 | $\rho$ | - | all | Monash
d02-x01-y01 | Track based | 0 | 0.1 | $\rho$ | - | all | Monash
d03-x01-y01 | Cluster based | 1 | 0.1 | $\rho$ | [-4.5,-3.7], | [-3.5,-2.1], | A14/
| | | | | [-2,-1.3] | [-1,-0.5] | Monash
d04-x01-y01 | Track based | 1 | 0.1 | $\rho$ | [-4.5,-3.7] | [-3.5,-0.5] | Monash
d05-x01-y01 | Cluster based | 2 | 0.1 | $\rho$ | [-4.5,-1.1] | -0.7 | A14
d06-x01-y01 | Track based | 2 | 0.1 | $\rho$ | [-4.5,-3.7] | [-3.5,-0.5] | Monash
d07-x01-y01 | Track based | 1 | 0.1 | $\rho$ | [-4.5,-3.7] | [-3.5,-0.5] | Monash
d16-x01-y01 | Track based | 1 | 0.1 | $r_{g}$ | - | all | Monash
d17-x01-y01 | Cluster based | 2 | 0.1 | $r_{g}$ | [-1.2,-0.2] | -0.15 | A14
d18-x01-y01 | Track based | 2 | 0.1 | $r_{g}$ | -1.1 | [-1,-0.1] | Monash
d19-x01-y01 | Central jet/Calorimeter | 0 | 0.1 | $r_{g}$ | - | all | Monash
d20-x01-y01 | Central jet/Track | 0 | 0.1 | $r_{g}$ | - | all | Monash
d21-x01-y01 | Central jet/Cluster | 1 | 0.1 | $\rho$ | [-4.5,-1] | -0.7 | A14
d22-x01-y01 | Central jet/Track | 1 | 0.1 | $\rho$ | [-4.5,-3.7] | [-3.5,-0.5] | Monash
d23-x01-y01 | Central jet/Cluster | 2 | 0.1 | $\rho$ | [-3.5,-0.9] | -0.7 | A14
d24-x01-y01 | Central jet/Track | 2 | 0.1 | $\rho$ | [-4.5,-3.7] | [-3.5,-0.7] | Monash
d34-x01-y01 | Central jet/Track | 1 | 0.1 | $r_{g}$ | - | all | Monash
d35-x01-y01 | Central jet/Cluster | 2 | 0.1 | $r_{g}$ | [-1.2,-0.4] | -0.5,-0.15 | A14
d36-x01-y01 | Central jet/Track | 2 | 0.1 | $r_{g}$ | -1.1 | [-1,-0.1] | Monash
d37-x01-y01 | Forward jet/Calorimeter | 0 | 0.1 | $r_{g}$ | - | all | Monash
d38-x01-y01 | Forward jet/Track | 0 | 0.1 | $\rho$ | - | all | Monash
d39-x01-y01 | Forward jet/Cluster | 1 | 0.1 | $\rho$ | -4.3 | [-4,-0.5] | Monash
d40-x01-y01 | Forward jet/Track | 1 | 0.1 | $\rho$ | [-4.5,-3.7] | [-3.5,-0.5] | Monash
d41-x01-y01 | Forward jet/Cluster | 2 | 0.1 | $\rho$ | -3.9,[-3.1,-1] | -3.5,-0.7 | A14
d42-x01-y01 | Forward jet/Track | 2 | 0.1 | $\rho$ | [-4.5,-3.7] | [-3.5,-0.5] | Monash
d49-x01-y01 | Forward jet/Track | 0 | 0.1 | $r_{g}$ | all | all | -
d51-x01-y01 | Forward jet/Cluster | 1 | 0.1 | $r_{g}$ | [-0.8,-0.2] | [-1.2,-0.8],-0.1 | -
d52-x01-y01 | Forward jet/Track | 1 | 0.1 | $r_{g}$ | - | all | Monash
d53-x01-y01 | Forward jet/Cluster | 2 | 0.1 | $r_{g}$ | all | -0.15 | A14
d54-x01-y01 | Forward jet/Track | 2 | 0.1 | $r_{g}$ | -1.1 | [-1,-0.1] | Monash
Table 4: ATLAS_2019_I1772062 (Soft-Drop Jet Observables)
Plots | Observable | Better Tune Performance regions | Better Tune
---|---|---|---
| | A14 | Monash | (overall)
d01-x01-y01 | Nsubjets | 0-10 | - | A14
d02-x01-y01 | $C_{2}$ | 0-0.86 | 0.36-0.42 , 0.64-0.72 | A14
d03-x01-y01 | $D_{2}$ | 0-0.5 | - | A14
d04-x01-y01 | LHA | 0,4.5 | - | A14
d05-x01-y01 | ECF${}_{2}^{norm}$ | 0-0.252 | 0.252-0.35 | A14
d06-x01-y01 | ECF${}_{3}^{norm}$ | 0-0.04 | - | A14
d23-x01-y01 | Nsubjets | 1-2 | 2-5 | Monash
d24-x01-y01 | $C_{2}$ | 0-0.38 , 0.42-0.5 , 0.54-0.62 | 0.4 , 0.52 , 0.62-1.0 | A14
d25-x01-y01 | $D_{2}$ | 0-0.48 | - | A14
d26-x01-y01 | LHA | 0-1.4 | 1.4-9 | A14
d27-x01-y01 | ECF${}_{2}^{norm}$ | 0-0.23 | 0.23-0.35 | A14
d28-x01-y01 | ECF${}_{3}^{norm}$ | 0-0.02 | 0.02-0.32 | A14
Table 5: ATLAS_2019_I1724098 (JSS, Dijet Selection)
Plots | Observable | $\beta$ | Better Tune Performance regions | Better Tune
---|---|---|---|---
| ($p_{T}^{lead}>600$ GeV) | ($z_{cut}=0.1$) | A14 | Monash | (overall)
d01-x01-y01 | $\log_{10}$[($m^{\textrm{soft drop}}/p_{T}^{\textrm{ungroomed}})^{2}$] | 0 | [-2.5,-0.5] | [-4,-2.5] | A14
d02-x01-y01 | $\log_{10}$[($m^{\textrm{soft drop}}/p_{T}^{\textrm{ungroomed}})^{2}$] | 1 | [-4.5,-0.5] | - | A14
d03-x01-y01 | $\log_{10}$[($m^{\textrm{soft drop}}/p_{T}^{\textrm{ungroomed}})^{2}$] | 2 | [-4.5,-0.5] | - | A14
Table 6: ATLAS_2017_I1637587 (Soft Drop Mass)
### 5.2 Envelope plots
Having decided the range of values for each parameter, it is important to
check, before proceeding further, that the chosen ranges are actually useful.
This can be done with the Professor tool by generating envelope plots with
the command prof2-envelopes. The envelope plots show a band covering the
distributions, which indicates the bin values that the observables can take
within the selected parameter ranges. These are shown in the following
subsections.
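The band an envelope plot draws can be pictured as the bin-wise minimum and maximum over the sampled runs. The toy sketch below (our own helper names, random toy data standing in for real Rivet histograms) illustrates the idea and the coverage check applied in the subsections that follow.

```python
import random

def envelope(runs):
    """Bin-wise min/max over the sampled runs: the band shown in an envelope plot."""
    lo = [min(vals) for vals in zip(*runs)]
    hi = [max(vals) for vals in zip(*runs)]
    return lo, hi

def coverage(reference, lo, hi):
    """Fraction of reference-data bins lying inside the envelope."""
    return sum(l <= r <= h for r, l, h in zip(reference, lo, hi)) / len(reference)

# toy stand-in for 100 sampling runs of a 10-bin observable
rng = random.Random(0)
runs = [[1.0 + rng.uniform(-0.2, 0.2) for _ in range(10)] for _ in range(100)]
lo, hi = envelope(runs)
print(coverage([1.0] * 10, lo, hi))
```

A coverage close to 1 in every bin is what justifies the chosen parameter ranges below.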
#### 5.2.1 Soft Drop Mass Distributions
Figure 6: Envelope plots for ATLAS_2017_I1637587
As can be seen in Figure 6, the envelopes cover the reference data in almost
every bin, and hence we can say that the ranges selected for the parameters
are appropriate.
#### 5.2.2 Lund Jet Plane Distributions
Figure 7: Envelope plots for Lund Jet Plane
As can be seen in Figure 7, the envelopes do not entirely cover the reference
data. This is because we have reached a limit on how much further the
distributions can be fitted to the data with Pythia. We therefore consider
this suitable for the purpose of this report and proceed with our set of
parameter ranges.
#### 5.2.3 Soft Drop Observables Distributions
In Figure 8, we see that the envelopes cover the data points in almost all
the bins of our distribution of interest, i.e. the soft drop jet mass from
the soft drop jet observables analysis. Thus the parameter ranges are
suitable for proceeding to tune the distributions.
Figure 8: Envelope plots for Soft Drop Mass observable
### 5.3 Weight file
The weight file assigns weights to the distributions to be tuned and hence is
changed manually depending on our interests. For this project, to obtain the
Common Tune, a total of 16 distributions were assigned weights greater than 1
and 6 distributions were given no weight, i.e. 0. These are:
Analysis | Distribution code | Weight
---|---|---
ATLAS_2020_I1790256 | d03-x01-y01 | 106
ATLAS_2020_I1790256 | d04-x01-y01 | 112
ATLAS_2020_I1790256 | d05-x01-y01 | 108
ATLAS_2020_I1790256 | d06-x01-y01 | 20
ATLAS_2020_I1790256 | d07-x01-y01 | 16.6
ATLAS_2020_I1790256 | d08-x01-y01 | 16
ATLAS_2020_I1790256 | d09-x01-y01 | 16
ATLAS_2019_I1772062 | d19-x01-y01 | 75
ATLAS_2019_I1772062 | d20-x01-y01 | 75
ATLAS_2019_I1772062 | d21-x01-y01 | 200
ATLAS_2019_I1772062 | d22-x01-y01 | 80
ATLAS_2019_I1772062 | d23-x01-y01 | 200
ATLAS_2019_I1772062 | d24-x01-y01 | 80
ATLAS_2017_I1637587 | d01-x01-y01 | 500
ATLAS_2017_I1637587 | d02-x01-y01 | 500
ATLAS_2017_I1637587 | d03-x01-y01 | 500
ATLAS_2019_I1772062 | d61-x01-y01 | 0
ATLAS_2019_I1772062 | d62-x01-y01 | 0
ATLAS_2019_I1772062 | d79-x01-y01 | 0
ATLAS_2019_I1772062 | d80-x01-y01 | 0
ATLAS_2019_I1772062 | d97-x01-y01 | 0
ATLAS_2019_I1772062 | d98-x01-y01 | 0
Table 7: Weights assigned to obtain the Common tune
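The assignments in Table 7 can be written out programmatically. This is only a sketch using a path-per-line layout in the spirit of Professor weight files, with a subset of the table's entries; the exact syntax accepted by a given Professor version should be checked against its documentation.

```python
# Weights from Table 7 (a subset, for illustration).
weights = {
    ("ATLAS_2020_I1790256", "d03-x01-y01"): 106,
    ("ATLAS_2020_I1790256", "d04-x01-y01"): 112,
    ("ATLAS_2017_I1637587", "d01-x01-y01"): 500,
    ("ATLAS_2019_I1772062", "d61-x01-y01"): 0,   # excluded from the Common Tune
}

def weight_lines(weights):
    """One '/<analysis>/<histogram>  <weight>' line per distribution."""
    return [f"/{ana}/{hist}  {w}" for (ana, hist), w in sorted(weights.items())]

print("\n".join(weight_lines(weights)))
```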
|
# Some Estimates of the Generalized Beukers Integral with Techniques of
Partial Fraction Decomposition
Xiaowei Wang(Potsdam)
This paper was written in June 2020
###### Abstract
In this paper we evaluate the generalized Beukers integral
$I_{m}(a_{1},...,a_{n})$ using methods of partial fraction decomposition,
thereby obtaining an explicit expression for it. Further, we estimate the
rational denominator of $I$. In the second section of this paper, we provide
some estimates of the upper and lower bounds of the value $J_{3}$, which
involves the generalized Beukers integral and is related to $\zeta(5)$.
Keywords: generalized Beukers integral, $\zeta(5)$, partial fraction
decomposition
## 1 The Lemmas
###### Lemma 1.
(Homogeneous partial fraction decomposition)
Let $a_{1},...,a_{n}$ be distinct complex numbers and
$x\in\mathbb{C}\backslash\\{-a_{1},...,-a_{n}\\}$. Then there exist
$\lambda_{1},...,\lambda_{n}\in\mathbb{C}$ such that the following identity
holds,
$\prod_{i=1}^{n}\frac{1}{a_{i}+x}=\sum_{i=1}^{n}\frac{\lambda_{i}}{a_{i}+x}$
(1)
where the $\lambda_{i}$ depend only on $a_{1},...,a_{n}$ and have the
explicit expression
$\lambda_{i}=\prod_{j=1,j\neq i}^{n}\frac{1}{a_{j}-a_{i}}$
Further, we have
$\sum_{i=1}^{n}\lambda_{i}=0$
###### Proof.
In order to show (1), we multiply both sides of (1) by
$\prod_{i=1}^{n}(a_{i}+x)$. It becomes
$\sum_{i=1}^{n}\lambda_{i}\prod_{j=1,j\neq i}^{n}(a_{j}+x)=1$
Now let
$p(x)=\sum_{i=1}^{n}\lambda_{i}\prod_{j=1,j\neq
i}^{n}(a_{j}+x)=\sum_{i=1}^{n}\prod_{j=1,j\neq
i}^{n}\frac{a_{j}+x}{a_{j}-a_{i}}$
It is easy to see that $p(x)$ is a polynomial of degree at most $n-1$
satisfying $p(-a_{i})=1$ for all $i=1,...,n$. On the one hand we have already
found $n$ zeros of $p(x)-1$; on the other hand, a nonzero polynomial of
degree at most $n-1$ has at most $n-1$ zeros. Therefore it can only be that
$p(x)\equiv 1$. That is
$\sum_{i=1}^{n}\lambda_{i}\prod_{j=1,j\neq i}^{n}(a_{j}+x)\equiv 1$
Comparing the coefficient of $x^{n-1}$ on both side, we obtain
$\sum_{i=1}^{n}\lambda_{i}=0$
∎
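Both identities of Lemma 1 can be checked in exact rational arithmetic; the sketch below (the helper `lambdas` is ours, not from the text) verifies the decomposition at a sample point and the vanishing of $\sum_i \lambda_i$.

```python
from fractions import Fraction
from math import prod

def lambdas(a):
    # lambda_i = prod_{j != i} 1/(a_j - a_i)  (Lemma 1), for distinct a_i
    return [prod(Fraction(1, aj - ai) for aj in a if aj != ai) for ai in a]

a = [0, 1, 3, 7]                      # distinct a_1, ..., a_n
lam = lambdas(a)
x = Fraction(5, 2)
lhs = prod(Fraction(1, ai + x) for ai in a)
rhs = sum(l / (ai + x) for l, ai in zip(lam, a))
print(lhs == rhs, sum(lam) == 0)      # both identities hold exactly
```

Because everything is a `Fraction`, the comparison is exact rather than floating-point.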
###### Lemma 2.
(Inhomogeneous partial fraction decomposition)
Let $c_{1},...,c_{n}$ be distinct complex numbers and $b_{1},...,b_{n}$ be
positive integers. Then the following decomposition is valid for
$x\in\mathbb{C}\backslash\\{-c_{1},...,-c_{n}\\}$.
$\prod_{i=1}^{n}\frac{1}{(c_{i}+x)^{b_{i}}}=\sum_{i=1}^{n}\sum_{j=1}^{b_{i}}\frac{\mu_{ij}}{(c_{i}+x)^{j}}$
The expression for $\mu_{ij}$ is given by
$\mu_{ij}=\frac{(-1)^{b_{i}-j}}{(b_{i}-j)!}\frac{d^{b_{i}-j}}{dz^{b_{i}-j}}|_{z=c_{i}}(\prod_{\ell=1,\ell\neq
i}^{n}\frac{1}{(c_{\ell}-z)^{b_{\ell}}})$
Note that if $b_{1}=...=b_{n}=1$, then $\mu_{i1}$ is exactly $\lambda_{i}$ in
Lemma 1. Moreover, we have
$\sum_{i=1}^{n}\mu_{i1}=0$
###### Proof.
Let $f$ and $g$ both be functions of $z_{1},...,z_{n}$, namely
$\displaystyle f(z_{1},...,z_{n})$
$\displaystyle:=\prod_{i=1}^{n}\frac{1}{(z_{i}+x)^{b_{i}}}$ $\displaystyle
g(z_{1},...,z_{n})$ $\displaystyle:=\prod_{i=1}^{n}\frac{1}{z_{i}+x}$
According to Lemma 1, we have an equality for
$x\in\mathbb{C}\backslash\\{-z_{1},...,-z_{n}\\}$
$\prod_{i=1}^{n}\frac{1}{z_{i}+x}=\sum_{i=1}^{n}\frac{\lambda_{i}}{z_{i}+x}$
(2)
where
$\lambda_{i}=\prod_{\ell=1,\ell\neq i}^{n}\frac{1}{z_{\ell}-z_{i}}$
holds for all $i$. Now we regard $\lambda_{i}$ as function of
$z_{1},...,z_{n}$. Taking partial derivatives
$\partial_{(b_{1}-1,...,b_{n}-1)}$ on both sides of (2), we obtain following.
Here the notation $\partial_{(N_{1},...,N_{m})}$ means
$\frac{\partial^{N_{1}+...+N_{m}}}{\partial z_{1}^{N_{1}}...\partial
z_{m}^{N_{m}}}$, sometimes $\frac{\partial^{N_{1}+...+N_{m}}F}{\partial
z_{1}^{N_{1}}...\partial z_{m}^{N_{m}}}$ is denoted by $F^{(N_{1},...,N_{m})}$
for convenience.
$\partial_{(b_{1}-1,...,b_{n}-1)}g=\prod_{i=1}^{n}\frac{(-1)^{b_{i}-1}(b_{i}-1)!}{(z_{i}+x)^{b_{i}}}=(\prod_{i=1}^{n}(-1)^{b_{i}-1}(b_{i}-1)!)f$
On the other hand,
$\displaystyle\partial_{(b_{1}-1,...,b_{n}-1)}\sum_{i=1}^{n}\frac{\lambda_{i}}{z_{i}+x}$
$\displaystyle=\sum_{i=1}^{n}\partial_{(0,...,b_{i}-1,...,0)}\partial_{(b_{1}-1,...,b_{i-1}-1,0,b_{i+1}-1,...,b_{n}-1)}\frac{\lambda_{i}}{z_{i}+x}$
$\displaystyle=\sum_{i=1}^{n}\partial_{(0,...,b_{i}-1,...,0)}\frac{\lambda_{i}^{(b_{1}-1,...,b_{i-1}-1,0,b_{i+1}-1,...,b_{n}-1)}}{z_{i}+x}$
$\displaystyle=\sum_{i=1}^{n}\sum_{j=1}^{b_{i}}\binom{b_{i}-1}{j-1}\lambda_{i}^{(b_{1}-1,...,b_{i-1}-1,b_{i}-j,b_{i+1}-1,...,b_{n}-1)}(\frac{1}{z_{i}+x})^{(0,0,...,j-1,...,0)}$
$\displaystyle=\sum_{i=1}^{n}\sum_{j=1}^{b_{i}}\binom{b_{i}-1}{j-1}\lambda_{i}^{(b_{1}-1,...,b_{i-1}-1,b_{i}-j,b_{i+1}-1,...,b_{n}-1)}\frac{(-1)^{j-1}(j-1)!}{(z_{i}+x)^{j}}$
Supposed that
$\prod_{i=1}^{n}\frac{1}{(z_{i}+x)^{b_{i}}}=\sum_{i=1}^{n}\sum_{j=1}^{b_{i}}\frac{\mu_{ij}}{(z_{i}+x)^{j}}$
by comparing the coefficients we obtain
$\displaystyle\mu_{ij}$
$\displaystyle=\frac{(-1)^{j-1}(j-1)!}{\prod_{\ell=1}^{n}(-1)^{b_{\ell}-1}(b_{\ell}-1)!}\binom{b_{i}-1}{j-1}\lambda_{i}^{(b_{1}-1,...,b_{i-1}-1,b_{i}-j,b_{i+1}-1,...,b_{n}-1)}$
$\displaystyle=\frac{(-1)^{b_{i}-j}}{(b_{i}-j)!\prod_{\ell=1,\ell\neq
i}^{n}(-1)^{b_{\ell}-1}(b_{\ell}-1)!}\lambda_{i}^{(b_{1}-1,...,b_{i-1}-1,b_{i}-j,b_{i+1}-1,...,b_{n}-1)}$
Finally, it remains to compute
$\lambda_{i}^{(b_{1}-1,...,b_{i-1}-1,b_{i}-j,b_{i+1}-1,...,b_{n}-1)}$.
$\displaystyle\partial_{(b_{1}-1,...,b_{i-1}-1,b_{i}-j,b_{i+1}-1,...,b_{n}-1)}\lambda_{i}$
$\displaystyle=$
$\displaystyle\partial_{(0,...,b_{i}-j,...,0)}\partial_{(b_{1}-1,...,b_{i-1}-1,0,b_{i+1}-1,...,b_{n}-1)}\prod_{\ell=1,\ell\neq
i}^{n}\frac{1}{z_{\ell}-z_{i}}$ $\displaystyle=$
$\displaystyle\partial_{(0,...,b_{i}-j,...,0)}\prod_{\ell=1,\ell\neq
i}^{n}\frac{(-1)^{b_{\ell}-1}(b_{\ell}-1)!}{(z_{\ell}-z_{i})^{b_{\ell}}}$
$\displaystyle=$ $\displaystyle(\prod_{\ell=1,\ell\neq
i}^{n}(-1)^{b_{\ell}-1}(b_{\ell}-1)!)(\prod_{\ell=1,\ell\neq
i}^{n}\frac{1}{(z_{\ell}-z_{i})^{b_{\ell}}})^{(0,...,b_{i}-j,...,0)}$
That is
$\mu_{ij}=\frac{(-1)^{b_{i}-j}}{(b_{i}-j)!}\frac{d^{b_{i}-j}}{dz^{b_{i}-j}}|_{z=c_{i}}(\prod_{\ell=1,\ell\neq
i}^{n}\frac{1}{(c_{\ell}-z)^{b_{\ell}}})$
In order to prove
$\sum_{i=1}^{n}\mu_{i1}=0$
we need only multiply both sides of the identity below by
$\prod_{i=1}^{n}(x+c_{i})^{b_{i}}$:
$\prod_{i=1}^{n}\frac{1}{(x+c_{i})^{b_{i}}}\equiv\sum_{i=1}^{n}\sum_{j=1}^{b_{i}}\frac{\mu_{ij}}{(x+c_{i})^{j}}$
Then it becomes
$1\equiv\sum_{i=1}^{n}\sum_{j=1}^{b_{i}}\mu_{ij}\frac{\prod_{k=1}^{n}(x+c_{k})^{b_{k}}}{(x+c_{i})^{j}}$
The right-hand side of the equality is a polynomial in $x$ of degree at most
$b_{1}+...+b_{n}-1$. Since this polynomial is actually the constant $1$, the
coefficient of $x^{b_{1}+...+b_{n}-1}$ is $0$, and only the $\mu_{i1}$
contribute to this coefficient. Consequently, we infer that
$\sum_{i=1}^{n}\mu_{i1}=0$
∎
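The coefficients $\mu_{ij}$ can also be checked without any differentiation: sample the decomposition at enough points and solve the resulting linear system exactly. The sketch below (our own helper names; nonnegative $c_i$ assumed so that positive sample points avoid the poles) recovers the classical decomposition $1/(x^2(x+2)) = -\tfrac{1}{4x} + \tfrac{1}{2x^2} + \tfrac{1}{4(x+2)}$ and confirms $\sum_i \mu_{i1}=0$.

```python
from fractions import Fraction
from math import prod

def solve(A, y):
    """Gauss-Jordan elimination over exact rationals."""
    n = len(A)
    M = [row[:] + [yi] for row, yi in zip(A, y)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [u - M[r][col] * v for u, v in zip(M[r], M[col])]
    return [row[-1] for row in M]

def mu_coefficients(c, b):
    """mu_{ij} of Lemma 2, obtained by sampling the identity
    prod_i (c_i+x)^{-b_i} = sum_{i,j} mu_{ij} (c_i+x)^{-j}
    at n = b_1+...+b_r points and solving the linear system."""
    unknowns = [(i, j) for i in range(len(c)) for j in range(1, b[i] + 1)]
    xs = [Fraction(k) for k in range(1, len(unknowns) + 1)]  # avoids poles for c_i >= 0
    A = [[Fraction(1, (c[i] + x) ** j) for (i, j) in unknowns] for x in xs]
    y = [prod(Fraction(1, (ci + x) ** bi) for ci, bi in zip(c, b)) for x in xs]
    return dict(zip(unknowns, solve(A, y)))

mu = mu_coefficients([0, 2], [2, 1])   # the multiset {0^(2), 2^(1)}
print(mu[(0, 1)], mu[(0, 2)], mu[(1, 1)])   # -> -1/4 1/2 1/4
```

In the purely distinct case ($b_1=...=b_n=1$) the sampling matrix is a Cauchy matrix and hence nonsingular, so the recovered $\mu_{i1}$ coincide with the $\lambda_i$ of Lemma 1.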
## 2 The First Attempt
In this section, we discuss the more practical case $n=2$. Hadjicostas [1]
called it the first generalization. The general cases are discussed in the
next section.
###### Theorem 1.
Suppose that $a,b,m$ are nonnegative integers. Define
$I_{m}(a,b)=\frac{(-1)^{m}}{m!}\int_{(0,1)^{2}}\frac{\log^{m}(xy)x^{a}y^{b}}{1-xy}dxdy$
It is easy to see that $I_{m}(a,b)=I_{m}(b,a)$, so without loss of generality
we may suppose that $a\leq b$. Then we have
$I_{m}(a,b)=\begin{cases}&\frac{H_{m+1}(b)-H_{m+1}(a)}{b-a}\text{, if }a<b\\\
&(m+1)\zeta(m+2,a+1)\text{, if }a=b\\\ \end{cases}$
where $H_{m}(x)$ is the generalized harmonic number, which is given by
$H_{m}(x)=\sum_{k=1}^{\lfloor x\rfloor}\frac{1}{k^{m}}$.
###### Proof.
For $t\geq 0$, define
$A(t,a,b):=\int_{(0,1)^{2}}\frac{x^{a+t}y^{b+t}}{1-xy}dxdy$
For $x,y\in(0,1)$, the series $\sum_{k=0}^{\infty}x^{a+t+k}y^{b+t+k}$
converges absolutely and uniformly for all $x,y\in(\varepsilon,1-\varepsilon)$
to $\frac{x^{a+t}y^{b+t}}{1-xy}$. Hence
$\displaystyle A(t,a,b)$
$\displaystyle=\int_{(0,1)^{2}}\frac{x^{a+t}y^{b+t}}{1-xy}dxdy$
$\displaystyle=\sum_{k=0}^{\infty}\int_{0}^{1}x^{a+t+k}dx\int_{0}^{1}y^{b+t+k}dy$
$\displaystyle=\sum_{k=1}^{\infty}\frac{1}{(a+t+k)(b+t+k)}$
In the following we take $\frac{\partial^{m}}{\partial t^{m}}|_{t=0}$
on both sides of the equality. There are two cases:
Case I. If $a=b$,
$\sum_{k=1}^{\infty}\frac{1}{(a+t+k)(b+t+k)}=\sum_{k=1}^{\infty}\frac{1}{(a+t+k)^{2}}=\zeta(2,a+t+1)$
We have
$\displaystyle\frac{(-1)^{m}}{m!}\frac{\partial^{m}}{\partial
t^{m}}|_{t=0}A(t,a,b)$ $\displaystyle=$
$\displaystyle\frac{(-1)^{m}}{m!}\frac{\partial^{m}}{\partial
t^{m}}|_{t=0}\zeta(2,a+t+1)$ $\displaystyle=$
$\displaystyle(m+1)\zeta(m+2,a+1)$
Case II. If $a<b$, then the decomposition
$\frac{1}{(a+t+k)(b+t+k)}=\frac{1}{b-a}(\frac{1}{a+t+k}-\frac{1}{b+t+k})$
is true for all positive integer $k$ and all nonnegative real number $t$. This
implies that
$\sum_{k=1}^{\infty}\frac{1}{(a+t+k)(b+t+k)}=\frac{1}{b-a}\sum_{k=1}^{\infty}(\frac{1}{a+t+k}-\frac{1}{b+t+k})$
Therefore
$\displaystyle\frac{(-1)^{m}}{m!}\frac{\partial^{m}}{\partial
t^{m}}|_{t=0}A(t,a,b)$ $\displaystyle=$
$\displaystyle\frac{(-1)^{m}}{m!}\frac{\partial^{m}}{\partial
t^{m}}|_{t=0}\sum_{k=1}^{\infty}\frac{1}{(a+t+k)(b+t+k)}$ $\displaystyle=$
$\displaystyle\frac{1}{b-a}\sum_{k=1}^{\infty}(\frac{1}{(a+k)^{m+1}}-\frac{1}{(b+k)^{m+1}})$
$\displaystyle=$
$\displaystyle\frac{1}{b-a}(\sum_{k=1}^{\infty}\frac{1}{(a+k)^{m+1}}-\sum_{k=b-a+1}^{\infty}\frac{1}{(a+k)^{m+1}})$
$\displaystyle=$ $\displaystyle\frac{H_{m+1}(b)-H_{m+1}(a)}{b-a}$
On the other hand, in either case, from the above integral representation we
have
$\frac{(-1)^{m}}{m!}\frac{\partial^{m}}{\partial
t^{m}}|_{t=0}A(t,a,b)=\frac{(-1)^{m}}{m!}\int_{(0,1)^{2}}\frac{\log^{m}(xy)x^{a}y^{b}}{1-xy}dxdy=I_{m}(a,b)$
The details about convergence and interchanging the order of integration,
summation and differentiation are omitted here; see [1].
As a consequence,
$I_{m}(a,b)=\begin{cases}&\frac{H_{m+1}(b)-H_{m+1}(a)}{b-a}\text{, if }a<b\\\
&(m+1)\zeta(m+2,a+1)\text{, if }a=b\\\ \end{cases}$
∎
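The closed form of Theorem 1 can be sanity-checked against a truncation of the defining series (a sketch; `H`, `I_closed`, `I_series` are our names). The exact case $I_0(0,1)=1$ also falls out immediately.

```python
from fractions import Fraction

def H(m, x):
    """Generalized harmonic number H_m(x) = sum_{1 <= k <= x} 1/k^m."""
    return sum(Fraction(1, k ** m) for k in range(1, x + 1))

def I_closed(m, a, b):
    """Theorem 1 closed form for a < b."""
    return (H(m + 1, b) - H(m + 1, a)) / (b - a)

def I_series(m, a, b, N=20000):
    """Truncation of the series obtained by differentiating
    A(t,a,b) = sum_k 1/((a+t+k)(b+t+k))  m times at t = 0."""
    return sum((1.0 / (a + k) ** (m + 1) - 1.0 / (b + k) ** (m + 1)) / (b - a)
               for k in range(1, N))

print(I_closed(0, 0, 1))                        # exact: 1
print(float(I_closed(2, 1, 4)), I_series(2, 1, 4))
```

The truncation error decays like $N^{-(m+2)}$, so the two printed values agree to many digits.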
## 3 The Generalized Beukers Integral
In the following we use the notation
$\\{x_{1}^{(N_{1})},...,x_{j}^{(N_{j})}\\}$ to represent a finite multiset,
where $N_{i}$ is the multiplicity of $x_{i}$, $i=1,...,j$.
###### Theorem 2.
(Generalized Beukers Integral Representation)
Assume that $n\geq 2$ and let $m,a_{1},...,a_{n}$ be nonnegative integers. Define
$I_{m}(a_{1},a_{2},\ldots,a_{n}):=\frac{(-1)^{m}}{m!}\int_{(0,1)^{n}}\frac{\log^{m}(\prod_{i=1}^{n}x_{i})\prod_{i=1}^{n}x_{i}^{a_{i}}}{1-\prod_{i=1}^{n}x_{i}}dx_{1}...dx_{n}$
Let $\\{c_{1}^{(b_{1})},...,c_{r}^{(b_{r})}\\}$ be the multiset of
$a_{1},...,a_{n}$ with $c_{1}<...<c_{r}$ and $b_{1}+...+b_{r}=n$, then
* •
if $r=1$,
$I_{m}(a_{1},a_{2},\ldots,a_{n})=\binom{m+n-1}{m}\zeta(n+m,c_{1}+1)$
* •
if $1<r\leq n$,
$I_{m}(a_{1},a_{2},\ldots,a_{n})=\sum_{i=1}^{r-1}\mu_{i1}(H_{m+1}(c_{r})-H_{m+1}(c_{i}))+\sum_{i=1}^{r}\sum_{j\geq
2}\binom{m+j-1}{m}\mu_{ij}\zeta(j+m,c_{i}+1)$
where $\lambda_{i}$ and $\mu_{ij}$ are defined by the homogeneous and
inhomogeneous partial fraction decompositions (Lemma 1 and Lemma 2)
respectively, as follows
$\displaystyle\prod_{i=1}^{n}\frac{1}{a_{i}+x}=\sum_{i=1}^{n}\frac{\lambda_{i}}{a_{i}+x}$
$\displaystyle\prod_{i=1}^{r}\frac{1}{(c_{i}+x)^{b_{i}}}=\sum_{i=1}^{r}\sum_{j=1}^{b_{i}}\frac{\mu_{ij}}{(c_{i}+x)^{j}}$
###### Proof.
Assume that $t\geq 0$, define
$A(t,a_{1},a_{2},...,a_{n}):=\int_{(0,1)^{n}}\frac{\prod_{i=1}^{n}x_{i}^{a_{i}+t}}{1-\prod_{i=1}^{n}x_{i}}dx_{1}...dx_{n}$
Since all $x_{1},...,x_{n}\in(0,1)$, the integrand has the series expansion
$\frac{\prod_{i=1}^{n}x_{i}^{a_{i}+t}}{1-\prod_{i=1}^{n}x_{i}}=\sum_{k=0}^{\infty}\prod_{i=1}^{n}x_{i}^{a_{i}+k+t}$
Therefore
$A(t,a_{1},a_{2},...,a_{n})=\int_{(0,1)^{n}}\frac{\prod_{i=1}^{n}x_{i}^{a_{i}+t}}{1-\prod_{i=1}^{n}x_{i}}dx_{1}...dx_{n}=\sum_{k=0}^{\infty}\prod_{i=1}^{n}\frac{1}{1+a_{i}+k+t}$
The series on the right-hand side converges absolutely and uniformly on
$x_{i}\in(\varepsilon,1-\varepsilon)$, $i=1,2,...,n$. For the details, see [1].
As in the first attempt, the main idea is to take the $m$-th partial
derivative with respect to $t$ at $t=0$ on both sides of the equation. There
are several different cases.
Case I: $r=1$ and $b_{1}=n$, which means $a_{1}=a_{2}=...=a_{n}=a=c_{1}$.
In this case $\sum_{k=0}^{\infty}\prod_{i=1}^{n}\frac{1}{1+a_{i}+k+t}$ becomes
$\sum_{k=0}^{\infty}\frac{1}{(1+a+k+t)^{n}}$. Then
$\displaystyle\frac{\partial^{m}}{\partial
t^{m}}|_{t=0}A(t,a_{1},a_{2},...,a_{n})$
$\displaystyle=(-1)^{m}\frac{(n+m-1)!}{(n-1)!}\sum_{k=0}^{\infty}\frac{1}{(1+a+k)^{n+m}}$
$\displaystyle=(-1)^{m}\frac{(n+m-1)!}{(n-1)!}\zeta(n+m,a+1)$
$\displaystyle=(-1)^{m}m!\binom{m+n-1}{m}\zeta(n+m,a+1)$
Case II: $r=n$, $b_{1}=...=b_{n}=1$, namely $a_{1}<a_{2}<...<a_{n}$.
First, decompose the product $\prod_{i=1}^{n}\frac{1}{1+a_{i}+k+t}$ as
$\prod_{i=1}^{n}\frac{1}{1+a_{i}+k+t}=\sum_{i=1}^{n}\lambda_{i}\frac{1}{1+a_{i}+k+t}$
It follows from Lemma 1 that there exist $\lambda_{1},...,\lambda_{n}$
independent of $k$ and $t$. Clearly
$\sum_{k=0}^{\infty}\prod_{i=1}^{n}\frac{1}{1+a_{i}+k}$ is convergent, hence
$\displaystyle A(0,a_{1},a_{2},...,a_{n})$
$\displaystyle=\sum_{k=0}^{\infty}\sum_{i=1}^{n}\lambda_{i}\frac{1}{1+a_{i}+k}$
$\displaystyle=\lim_{N\rightarrow\infty}(\lambda_{1}\sum_{k=a_{1}+1}^{N}\frac{1}{k}+\ldots+\lambda_{n}\sum_{k=a_{n}+1}^{N}\frac{1}{k})$
$\displaystyle=\lambda_{1}\sum_{k=a_{1}+1}^{a_{n}}\frac{1}{k}+\ldots+\lambda_{n-1}\sum_{k=a_{n-1}+1}^{a_{n}}\frac{1}{k}+(\lambda_{1}+\ldots+\lambda_{n})\lim_{N\rightarrow\infty}\sum_{k=a_{n}+1}^{N}\frac{1}{k}$
recall that $\lambda_{1}+\ldots+\lambda_{n}=0$, therefore
$A(0,a_{1},a_{2},...,a_{n})=\sum_{i=1}^{n-1}\lambda_{i}\sum_{k=a_{i}+1}^{a_{n}}\frac{1}{k}$
Now assume that $m\geq 1$,
$\displaystyle\frac{\partial^{m}}{\partial
t^{m}}|_{t=0}A(t,a_{1},a_{2},...,a_{n})$ $\displaystyle=$
$\displaystyle(-1)^{m}m!\sum_{k=0}^{\infty}\sum_{i=1}^{n}\lambda_{i}\frac{1}{(1+a_{i}+k)^{m+1}}$
$\displaystyle=$
$\displaystyle(-1)^{m}m!(\lambda_{1}\sum_{k=a_{1}+1}^{a_{n}}\frac{1}{k^{m+1}}+\ldots+\lambda_{n-1}\sum_{k=a_{n-1}+1}^{a_{n}}\frac{1}{k^{m+1}}+(\lambda_{1}+\ldots+\lambda_{n})\sum_{k=a_{n}+1}^{\infty}\frac{1}{k^{m+1}})$
$\displaystyle=$
$\displaystyle(-1)^{m}m!\sum_{i=1}^{n-1}\lambda_{i}\sum_{k=a_{i}+1}^{a_{n}}\frac{1}{k^{m+1}}$
In a nutshell, we have
$\frac{\partial^{m}}{\partial
t^{m}}|_{t=0}A(t,a_{1},a_{2},...,a_{n})=(-1)^{m}m!\sum_{i=1}^{n-1}\lambda_{i}\sum_{k=a_{i}+1}^{a_{n}}\frac{1}{k^{m+1}}$
Case III: Some of the $a_{i}$ coincide. In this case $\\{a_{1},...,a_{n}\\}$
can be represented as the multiset $\\{c_{1}^{(b_{1})},...,c_{r}^{(b_{r})}\\}$,
where $c_{1}<...<c_{r}$ and $b_{1}+...+b_{r}=n$. It follows from Lemma 2 that
$\prod_{i=1}^{n}\frac{1}{1+a_{i}+k+t}=\prod_{i=1}^{r}\frac{1}{(1+c_{i}+k+t)^{b_{i}}}=\sum_{i=1}^{r}\sum_{j=1}^{b_{i}}\frac{\mu_{ij}}{(1+c_{i}+k+t)^{j}}$
then
$\displaystyle\frac{\partial^{m}}{\partial
t^{m}}|_{t=0}A(t,a_{1},a_{2},...,a_{n})$ $\displaystyle=$
$\displaystyle\sum_{k=0}^{\infty}\sum_{i=1}^{r}\sum_{j=1}^{b_{i}}\frac{(-1)^{m}(m+j-1)!\mu_{ij}}{(j-1)!}\frac{1}{(1+c_{i}+k)^{j+m}}$
$\displaystyle=$
$\displaystyle(-1)^{m}m!\sum_{k=0}^{\infty}\sum_{i=1}^{r}\mu_{i1}\frac{1}{(1+c_{i}+k)^{1+m}}+\sum_{k=0}^{\infty}\sum_{i=1}^{r}\sum_{j=2}^{b_{i}}\frac{(-1)^{m}(m+j-1)!\mu_{ij}}{(j-1)!}\frac{1}{(1+c_{i}+k)^{j+m}}$
By the conclusion of Lemma 2, $\sum_{i=1}^{r}\mu_{i1}=0$, therefore
$\displaystyle\sum_{k=0}^{\infty}\sum_{i=1}^{r}\mu_{i1}\frac{1}{(1+c_{i}+k)^{1+m}}$
$\displaystyle=\sum_{i=1}^{r}\sum_{k=c_{i}+1}^{\infty}\frac{\mu_{i1}}{k^{1+m}}$
$\displaystyle=\sum_{i=1}^{r-1}\sum_{k=c_{i}+1}^{c_{r}}\frac{\mu_{i1}}{k^{1+m}}+\sum_{i=1}^{r}\mu_{i1}\sum_{k=c_{r}+1}^{\infty}\frac{1}{k^{1+m}}$
$\displaystyle=\sum_{i=1}^{r-1}\sum_{k=c_{i}+1}^{c_{r}}\frac{\mu_{i1}}{k^{1+m}}$
This is a rational number. On the other hand, note that if $j\geq 2$, then
$\sum_{k=0}^{\infty}\frac{1}{(1+c_{i}+k)^{j+m}}=\zeta(j+m)-\sum_{k=1}^{c_{i}}\frac{1}{k^{j+m}}$
Hence
$\displaystyle\sum_{k=0}^{\infty}\sum_{i=1}^{r}\sum_{j=2}^{b_{i}}\frac{(-1)^{m}(m+j-1)!\mu_{ij}}{(j-1)!}\frac{1}{(1+c_{i}+k)^{j+m}}$
$\displaystyle=$
$\displaystyle(-1)^{m}\sum_{i=1}^{r}\sum_{j=2}^{b_{i}}\frac{(m+j-1)!\mu_{ij}}{(j-1)!}(\zeta(j+m)-\sum_{k=1}^{c_{i}}\frac{1}{k^{j+m}})$
$\displaystyle=$
$\displaystyle(-1)^{m}\sum_{i=1}^{r}\sum_{j=2}^{b_{i}}\frac{(m+j-1)!\mu_{ij}}{(j-1)!}\zeta(j+m,c_{i}+1)$
It turns out that
$\displaystyle\frac{\partial^{m}}{\partial
t^{m}}|_{t=0}A(t,a_{1},a_{2},...,a_{n})$ $\displaystyle=$
$\displaystyle(-1)^{m}m!\sum_{i=1}^{r-1}\sum_{k=c_{i}+1}^{c_{r}}\frac{\mu_{i1}}{k^{1+m}}+(-1)^{m}\sum_{i=1}^{r}\sum_{j=2}^{b_{i}}\frac{(m+j-1)!\mu_{ij}}{(j-1)!}\zeta(j+m,c_{i}+1)$
$\displaystyle=$
$\displaystyle(-1)^{m}m!\\{\sum_{i=1}^{r-1}\sum_{k=c_{i}+1}^{c_{r}}\frac{\mu_{i1}}{k^{1+m}}+\sum_{i=1}^{r}\sum_{j=2}^{b_{i}}\binom{m+j-1}{m}\mu_{ij}\zeta(j+m,c_{i}+1)\\}$
$\displaystyle=$
$\displaystyle(-1)^{m}m!\\{\sum_{i=1}^{r-1}\mu_{i1}(H_{m+1}(c_{r})-H_{m+1}(c_{i}))+\sum_{i=1}^{r}\sum_{j\geq
2}\binom{m+j-1}{m}\mu_{ij}\zeta(j+m,c_{i}+1)\\}$
If $r=n$, then $b_{1}=...=b_{n}=1$, that is, $\lambda_{i}=\mu_{i1}$; Case II is
in fact included in Case III. In either case, since
$A(t,a_{1},a_{2},...,a_{n})=\int_{(0,1)^{n}}\frac{\prod_{i=1}^{n}x_{i}^{t+a_{i}}}{1-\prod_{i=1}^{n}x_{i}}dx_{1}\ldots
dx_{n}$
then
$\frac{\partial^{m}}{\partial
t^{m}}|_{t=0}A(t,a_{1},a_{2},...,a_{n})=\int_{(0,1)^{n}}\frac{\log^{m}(\prod_{i=1}^{n}x_{i})\prod_{i=1}^{n}x_{i}^{a_{i}}}{1-\prod_{i=1}^{n}x_{i}}dx_{1}\ldots
dx_{n}$
Therefore as a consequence,
if $r=1$
$I_{m}(a_{1},a_{2},\ldots,a_{n})=\binom{m+n-1}{m}\zeta(n+m,c_{1}+1)$
if $1<r\leq n$,
$I_{m}(a_{1},a_{2},\ldots,a_{n})=\sum_{i=1}^{r-1}\mu_{i1}(H_{m+1}(c_{r})-H_{m+1}(c_{i}))+\sum_{i=1}^{r}\sum_{j\geq
2}\binom{m+j-1}{m}\mu_{ij}\zeta(j+m,c_{i}+1)$
The details concerning convergence and the interchange of the order of
integration, summation and differentiation are omitted here; see [1].
∎
###### Example 3.
Let
$I_{m}(a_{1},a_{2},a_{3})=\frac{(-1)^{m}}{m!}\int_{(0,1)^{3}}\frac{\log^{m}(xyz)x^{a_{1}}y^{a_{2}}z^{a_{3}}}{1-xyz}dxdydz$
where $a_{1},a_{2},a_{3}$ are nonnegative integers.
* •
If $a_{1}=a_{2}=a_{3}=a$, then
$I_{m}(a_{1},a_{2},a_{3})=\binom{m+2}{m}\zeta(m+3,a+1)=\frac{(m+1)(m+2)}{2}(\zeta(m+3)-H_{m+3}(a))$
* •
If $a_{1}<a_{2}<a_{3}$, then
$\displaystyle I_{m}(a_{1},a_{2},a_{3})$ $\displaystyle=$
$\displaystyle\frac{1}{(a_{2}-a_{1})(a_{3}-a_{1})}(H_{m+1}(a_{3})-H_{m+1}(a_{1}))+\frac{1}{(a_{1}-a_{2})(a_{3}-a_{2})}(H_{m+1}(a_{3})-H_{m+1}(a_{2}))$
* •
If $c_{1}=a_{1}=a_{2}<a_{3}=c_{2}$, then
$I_{m}(a_{1},a_{2},a_{3})=\mu_{11}(H_{m+1}(c_{2})-H_{m+1}(c_{1}))+(m+1)\mu_{12}\zeta(m+2,c_{1}+1)$
* •
If $c_{1}=a_{1}<a_{2}=a_{3}=c_{2}$, then
$I_{m}(a_{1},a_{2},a_{3})=\mu_{11}(H_{m+1}(c_{2})-H_{m+1}(c_{1}))+(m+1)\mu_{22}\zeta(m+2,c_{2}+1)$
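For distinct $a_{i}$, the closed form in the second bullet can be checked numerically against the defining series $I_{m}=\sum_{k\geq 0}\sum_{i}\lambda_{i}(1+a_{i}+k)^{-(m+1)}$. The following Python sketch is purely illustrative and not part of the text:

```python
import math

def I_series(m, a, K=20000):
    """Truncated defining series I_m = sum_{k>=0} sum_i lam_i/(1+a_i+k)^(m+1),
    where lam_i = prod_{j != i} 1/(a_j - a_i) (distinct a_i assumed)."""
    lam = [math.prod(1.0 / (aj - ai) for aj in a if aj != ai) for ai in a]
    return sum(l / (1 + ai + k) ** (m + 1)
               for k in range(K) for l, ai in zip(lam, a))

def H(p, n):
    # generalized harmonic number H_p(n) = sum_{k=1}^n 1/k^p
    return sum(1.0 / k ** p for k in range(1, n + 1))

def I_closed(m, a):
    # closed form for a_1 < a_2 < a_3 from the second bullet above
    a1, a2, a3 = a
    l1 = 1.0 / ((a2 - a1) * (a3 - a1))
    l2 = 1.0 / ((a1 - a2) * (a3 - a2))
    return l1 * (H(m + 1, a3) - H(m + 1, a1)) + l2 * (H(m + 1, a3) - H(m + 1, a2))
```

Since $\sum_{i}\lambda_{i}=0$, the truncation error of the series decays quickly, so a modest cutoff already matches the closed form to many digits.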
###### Example 4.
As a special case of $I_{m}(a_{1},...,a_{n})$, let $n=1$, $a_{1}=a$, then
$I_{m}(a)=\frac{(-1)^{m}}{m!}\int_{0}^{1}\frac{\log^{m}(x)x^{a}}{1-x}dx$
In fact this integral converges for $m\geq 1$. To see this, first consider
$f_{N}(x)=x^{a+t}(1+x+...+x^{N-1})=\frac{x^{a+t}(1-x^{N})}{1-x}$
where $N$ is an integer sufficiently large. Observe the integral
$\int_{0}^{1}f_{N}(x)dx=\sum_{k=1}^{N}\frac{1}{a+t+k}$
and taking $\frac{d^{m}}{dt^{m}}|_{t=0}$ on both sides, where $m\geq 1$,
$m\in\mathbb{Z}$, we get
$\int_{0}^{1}\frac{\log^{m}(x)x^{a}(1-x^{N})}{1-x}dx=\sum_{k=1}^{N}\frac{(-1)^{m}m!}{(a+k)^{m+1}}$
Letting $N\rightarrow\infty$, we have $x^{N}\rightarrow 0$ for all $x\in(0,1)$. That
is,
$\int_{0}^{1}\frac{\log^{m}(x)x^{a}}{1-x}dx=\sum_{k=1}^{\infty}\frac{(-1)^{m}m!}{(a+k)^{m+1}}$
Therefore
$I_{m}(a)=\frac{(-1)^{m}}{m!}\int_{0}^{1}\frac{\log^{m}(x)x^{a}}{1-x}dx=\sum_{k=1}^{\infty}\frac{1}{(a+k)^{m+1}}=\zeta(m+1,a+1)$
(3)
It is well defined for integers $m\geq 1$ and $a\geq 0$.
In fact, recall the integral representation of the Hurwitz zeta function
$\zeta(m+1,a+1)=\frac{1}{\Gamma(m+1)}\int_{0}^{\infty}\frac{t^{m}e^{-(a+1)t}}{1-e^{-t}}dt$
for $\Re(m)>0,\Re(a)>-1$.
Substituting $t=-\log(x)$, a simple computation gives
$\zeta(m+1,a+1)=\frac{1}{\Gamma(m+1)}\int_{0}^{1}\frac{(-\log(x))^{m}x^{a}}{1-x}dx$
Formally this is exactly (3), but here $a,m\in\mathbb{C}$ with
$\Re(m)>0$ and $\Re(a)>-1$.
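The identity $I_{m}(a)=\zeta(m+1,a+1)$ can be checked numerically. The sketch below uses the arbitrary-precision library mpmath (an assumption of this illustration; any quadrature routine would do):

```python
from mpmath import mp, quad, log, zeta, factorial

mp.dps = 30  # working precision in decimal digits

def I_m(m, a):
    # direct numerical evaluation of the one-dimensional integral (3)
    return (-1) ** m / factorial(m) * quad(
        lambda x: log(x) ** m * x ** a / (1 - x), [0, 1])

# e.g. I_m(2, 1) should equal zeta(3, 2) = zeta(3) - 1
```

Here `zeta(s, a)` is mpmath's two-argument Hurwitz zeta function; the tanh-sinh quadrature used by `quad` handles the logarithmic singularity at $x=0$ well.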
###### Theorem 5.
Assume that $n\geq 2$ and that $m,a_{1},...,a_{n}$ are nonnegative integers. Let
$\\{c_{1}^{(b_{1})},...,c_{r}^{(b_{r})}\\}$ be the multiset representation of
$a_{1},...,a_{n}$ with $c_{1}<...<c_{r}$ and $b_{1}+...+b_{r}=n$, and set
$b_{+}=\max\\{b_{1},...,b_{r}\\}$. According to Theorem 2, it follows that
$I_{m}(a_{1},...,a_{n})=\frac{p_{1}+p_{2}\zeta(m+2)+...+p_{b_{+}}\zeta(m+b_{+})}{q}$
where $p_{1},...,p_{b_{+}},q\in\mathbb{Z}$ with $(p_{i},q)=1$ for all $i$. We have
the following estimates of $q$.
If $r=1$, then
$q|lcm(1,...,a_{n})^{n+m}$
If $r>1$, then
$q|(b_{+}-1)!\cdot lcm(1,...,c_{r})^{m+b_{+}}\prod_{1\leq s<t\leq
r}(c_{t}-c_{s})^{n-1}$
Before giving the proof, we first recall some concepts and facts. Let
$x\in\mathbb{Q}$ with $x\neq 0$; then there are integers $p,q$
satisfying $x=p/q$, $q>0$ and $(p,q)=1$. We call $q$ the reduced
denominator of $x$, denoted by $\delta(x)$ in this paper. In fact, if
$x\in\mathbb{Q}$ and $a\in\mathbb{Z}$ with $a,x\neq 0$ and
$ax\in\mathbb{Z}$, then $\delta(x)|a$. The lowest common multiple of
$x_{1},...,x_{n}$ is denoted by $lcm(x_{1},...,x_{n})$. A very simple fact is
that if $a,b\in\mathbb{Q}$ and $a,b\neq 0$, then
$\delta(a+b)|lcm(\delta(a),\delta(b))$; this is because
$lcm(\delta(a),\delta(b))\cdot(a+b)\in\mathbb{Z}$.
###### Proof.
We first reformulate the expression of $I_{m}(a_{1},...,a_{n})$; there are
two cases.
Case I: $r=1$, that is, $c_{1}=a_{1}=a_{2}=...=a_{n}$. From the
result of the preceding theorem, we have
$\displaystyle I_{m}(a_{1},...,a_{n})$
$\displaystyle=\binom{m+n-1}{m}\zeta(n+m,c_{1}+1)$
$\displaystyle=\binom{m+n-1}{m}\zeta(n+m)-\binom{m+n-1}{m}H_{n+m}(c_{1})$
Since $\binom{m+n-1}{m}$ is always an integer, it suffices to estimate the
denominator of $H_{n+m}(c_{1})$. Since
$H_{n+m}(c_{1})=\sum_{k=1}^{c_{1}}\frac{1}{k^{n+m}}$
the denominator of $H_{n+m}(c_{1})$ should be a divisor of
$lcm(1,...,c_{1})^{n+m}$. Therefore if we represent $I_{m}(a_{1},...,a_{n})$
as $\frac{p_{1}+p_{2}\zeta(m+2)+...+p_{n}\zeta(m+n)}{q}$ under the condition
of $c_{1}=a_{1}=a_{2}=...=a_{n}$, then
$q|lcm(1,...,c_{1})^{n+m}$
Case II: $1<r\leq n$. It follows from the result of the preceding theorem that
$I_{m}(a_{1},...,a_{n})=\sum_{i=1}^{r-1}\mu_{i1}(H_{m+1}(c_{r})-H_{m+1}(c_{i}))+\sum_{i=1}^{r}\sum_{j\geq
2}\binom{m+j-1}{m}\mu_{ij}\zeta(j+m,c_{i}+1)$
Rewriting $\zeta(j+m,c_{i}+1)$ as $\zeta(j+m)-H_{j+m}(c_{i})$, we
obtain
$\displaystyle I_{m}(a_{1},...,a_{n})$ (4) $\displaystyle=$
$\displaystyle\sum_{i=1}^{r-1}\mu_{i1}(H_{m+1}(c_{r})-H_{m+1}(c_{i}))-\sum_{i=1}^{r}\sum_{j\geq
2}\binom{m+j-1}{m}\mu_{ij}H_{j+m}(c_{i})$ (5)
$\displaystyle+\sum_{i=1}^{r}\sum_{j\geq 2}\binom{m+j-1}{m}\mu_{ij}\zeta(j+m)$
(6) $\displaystyle=$
$\displaystyle\sum_{i=1}^{r-1}\frac{N_{1,i}}{\delta(\mu_{i1})\delta(H_{m+1}(c_{r})-H_{m+1}(c_{i}))}-\sum_{i=1}^{r}\sum_{j\geq
2}\frac{N_{2,ij}}{\delta(\mu_{ij})\delta(H_{j+m}(c_{i}))}$ (7)
$\displaystyle+\sum_{i=1}^{r}\sum_{j\geq
2}\frac{N_{3,ij}\zeta(j+m)}{\delta(\mu_{ij})}$ (8)
where $N_{1,i},N_{2,ij},N_{3,ij}\in\mathbb{Z}$. In the following we divide the
proof into three steps. First, we prove that there are integers $D_{i}$ such
that both $\delta(\mu_{i1})$ and $\delta(\mu_{ij})$ are divisors of $D_{i}$.
Second, we prove that there is an integer $D$ such that both
$\delta(H_{m+1}(c_{r})-H_{m+1}(c_{i}))$ and $\delta(H_{j+m}(c_{i}))$ are
divisors of $D$. Finally, by showing that $q|D\cdot lcm(D_{1},...,D_{r})$
we obtain the required estimate.
STEP 1
Let
$\prod_{i=1}^{r}\frac{1}{(c_{i}+x)^{b_{i}}}=\sum_{i=1}^{r}\sum_{j=1}^{b_{i}}\frac{\mu_{ij}}{(c_{i}+x)^{j}}$
By Lemma 2, we have the following expression for $\mu_{ij}$:
$\mu_{ij}=\frac{(-1)^{j-1}}{(b_{i}-j)!}\frac{\partial^{b_{i}-j}}{\partial
z^{b_{i}-j}}|_{z=c_{i}}\prod_{\ell=1,\ell\neq
i}^{r}\frac{1}{(c_{\ell}-z)^{b_{\ell}}}$
For simplicity, we may let
$A_{\ell}=\begin{cases}c_{\ell}\text{, if }\ell<i\\\ c_{\ell+1}\text{, if
}\ell\geq i\end{cases}$ $B_{\ell}=\begin{cases}b_{\ell}\text{, if }\ell<i\\\
b_{\ell+1}\text{, if }\ell\geq i\end{cases}$
then
$\prod_{\ell=1,\ell\neq
i}^{r}\frac{1}{(c_{\ell}-z)^{b_{\ell}}}=\prod_{\ell=1}^{r-1}\frac{1}{(A_{\ell}-z)^{B_{\ell}}}$
Let $M=b_{i}-j$ and let $0\leq M_{1},...,M_{r-1}\leq M$ be integers. If we denote
$F(z)=\frac{\partial^{M}}{\partial
z^{M}}\prod_{\ell=1}^{r-1}\frac{1}{(A_{\ell}-z)^{B_{\ell}}}$
then
$\displaystyle F(z)$
$\displaystyle=\sum_{M_{1}+...+M_{r-1}=M}\binom{M}{M_{1},...,M_{r-1}}\prod_{\ell=1}^{r-1}(\frac{1}{(A_{\ell}-z)^{B_{\ell}}})^{(M_{\ell})}$
$\displaystyle=\sum_{M_{1}+...+M_{r-1}=M}\binom{M}{M_{1},...,M_{r-1}}\prod_{\ell=1}^{r-1}\frac{(B_{\ell}+M_{\ell}-1)!}{(B_{\ell}-1)!}\frac{1}{(A_{\ell}-z)^{B_{\ell}+M_{\ell}}}$
Note that $\binom{M}{M_{1},...,M_{r-1}}\in\mathbb{Z}$,
$\frac{(B_{\ell}+M_{\ell}-1)!}{(B_{\ell}-1)!}\in\mathbb{Z}$ for all
$1\leq\ell\leq r-1$. then
$F(z)\cdot\prod_{\ell=1}^{r-1}(A_{\ell}-z)^{B_{\ell}+M}$
is a polynomial in $z$ with integer coefficients. This implies that the
denominator of $F(c_{i})$ is a divisor of
$\prod_{\ell=1}^{r-1}(A_{\ell}-c_{i})^{B_{\ell}+M}$. In other words
$\delta(F(c_{i}))|\prod_{\ell=1,\ell\neq
i}^{r}(c_{\ell}-c_{i})^{b_{\ell}+b_{i}-j}$
Since $\mu_{ij}=\frac{(-1)^{j-1}}{(b_{i}-j)!}F(c_{i})$, it follows that
$\delta(\mu_{ij})|(b_{i}-j)!\prod_{\ell=1,\ell\neq
i}^{r}(c_{\ell}-c_{i})^{b_{\ell}+b_{i}-j}$
As a special case,
$\delta(\mu_{i1})|(b_{i}-1)!\prod_{\ell=1,\ell\neq
i}^{r}(c_{\ell}-c_{i})^{b_{\ell}+b_{i}-1}$
It is easy to check that for $j\geq 1$,
$\displaystyle(b_{i}-j)!|(b_{i}-1)!$
$\displaystyle(c_{\ell}-c_{i})^{b_{\ell}+b_{i}-j}|(c_{\ell}-c_{i})^{b_{\ell}+b_{i}-1}$
This implies that
$\delta(\mu_{ij})|(b_{i}-1)!\prod_{\ell=1,\ell\neq
i}^{r}(c_{\ell}-c_{i})^{b_{\ell}+b_{i}-1}$
Now denote $(b_{i}-1)!\prod_{\ell=1,\ell\neq
i}^{r}(c_{\ell}-c_{i})^{b_{\ell}+b_{i}-1}$ by $D_{i}$, thus
$\delta(\mu_{ij})|D_{i}$ for all $j$.
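The key fact from Lemma 2 used above, $\sum_{i}\mu_{i1}=0$, is easy to verify symbolically for a concrete multiset. The following sketch uses SymPy (an assumption of this illustration) together with the standard residue formula for partial-fraction coefficients:

```python
import sympy as sp

x = sp.symbols('x')

# multiset {1^(2), 2^(3)}: f = 1/((x+1)^2 (x+2)^3)
c, b = [1, 2], [2, 3]
f = sp.prod([(x + ci) ** (-bi) for ci, bi in zip(c, b)])

def mu(i, j):
    # residue formula: mu_{ij} = (1/(b_i-j)!) * d^{b_i-j}/dx^{b_i-j}[(x+c_i)^{b_i} f] at x = -c_i
    g = sp.cancel(f * (x + c[i]) ** b[i])
    return sp.diff(g, x, b[i] - j).subs(x, -c[i]) / sp.factorial(b[i] - j)

first_order = [mu(i, 1) for i in range(len(c))]
assert sum(first_order) == 0  # Lemma 2: the first-order coefficients sum to zero
```

For this multiset the coefficients are $\mu_{11}=-3$ and $\mu_{21}=3$, and summing all $\mu_{ij}/(x+c_{i})^{j}$ reconstructs $f$ exactly.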
STEP 2
From the expression of $H_{m+1}(x)$ it is clear that
$\delta(H_{m+1}(c_{r})-H_{m+1}(c_{i}))|lcm(c_{i}+1,...,c_{r})^{m+1}$
On the one hand, since $c_{1}<...<c_{r}$, the following holds for all
$i\geq 1$:
$lcm(c_{i}+1,...,c_{r})^{m+1}|lcm(c_{1}+1,...,c_{r})^{m+1}$
Hence
$\delta(H_{m+1}(c_{r})-H_{m+1}(c_{i}))|lcm(c_{1}+1,...,c_{r})^{m+1}$
On the other hand,
$\delta(H_{m+j}(c_{i}))|lcm(1,...,c_{i})^{m+j}$
and since $c_{1}<...<c_{r}$, for all $i\geq 1$ we have
$lcm(1,...,c_{i})^{m+j}|lcm(1,...,c_{r})^{m+j}$
Hence
$\delta(H_{m+j}(c_{i}))|lcm(1,...,c_{r})^{m+j}$
Now let $D=lcm(1,...,c_{r})^{m+b_{+}}$, where
$b_{+}=\max\\{b_{1},...,b_{r}\\}$; then both
$\delta(H_{m+1}(c_{r})-H_{m+1}(c_{i}))$ and $\delta(H_{m+j}(c_{i}))$ are
divisors of $D$.
STEP 3
Observe (4) and rewrite it as
$\displaystyle I_{m}(a_{1},...,a_{n})$ $\displaystyle=$
$\displaystyle\sum_{i=1}^{r-1}\frac{N_{1,i}}{\delta(\mu_{i1})\delta(H_{m+1}(c_{r})-H_{m+1}(c_{i}))}-\sum_{i=1}^{r}\sum_{j\geq
2}\frac{N_{2,ij}}{\delta(\mu_{ij})\delta(H_{j+m}(c_{i}))}$
$\displaystyle+\sum_{i=1}^{r}\sum_{j\geq
2}\frac{N_{3,ij}\zeta(j+m)}{\delta(\mu_{ij})}$
By the result of Step 2, now multiplying $D=lcm(1,...,c_{r})^{m+b_{+}}$ on
both sides, we have
$D\cdot
I_{m}(a_{1},...,a_{n})=\sum_{i=1}^{r-1}\frac{N^{\prime}_{1,i}}{\delta(\mu_{i1})}-\sum_{i=1}^{r}\sum_{j\geq
2}\frac{N^{\prime}_{2,ij}}{\delta(\mu_{ij})}+\sum_{i=1}^{r}\sum_{j\geq
2}\frac{N^{\prime}_{3,ij}\zeta(j+m)}{\delta(\mu_{ij})}$
Because $\delta(\mu_{ij})|D_{i}$ for all $i,j$, by multiplying
$lcm(D_{1},...,D_{r})$ on both sides, we have
$D\cdot
lcm(D_{1},...,D_{r})I_{m}(a_{1},...,a_{n})=N^{\prime\prime}_{1}+N^{\prime\prime}_{2}\zeta(m+2)+...+N^{\prime\prime}_{b_{+}}\zeta(m+b_{+})$
That is $q|D\cdot lcm(D_{1},...,D_{r})$.
Finally, let
$D_{0}=(b_{+}-1)!\prod_{1\leq s<t\leq r}(c_{t}-c_{s})^{n-1}$
then $lcm(D_{1},...,D_{r})|D_{0}$. It’s easy to see
$(b_{i}-1)!|(b_{+}-1)!$
And by $b_{1}+...+b_{r}=n$ we have
$\prod_{\ell=1,\ell\neq i}^{r}(c_{\ell}-c_{i})^{b_{\ell}+b_{i}-1}|\prod_{1\leq
s<t\leq r}(c_{t}-c_{s})^{n-1}$
Now we can give the estimate of $q$ as
$q|(b_{+}-1)!\cdot lcm(1,...,c_{r})^{m+b_{+}}\prod_{1\leq s<t\leq
r}(c_{t}-c_{s})^{n-1}$
That is what we need. ∎
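The divisibility claim can be tested on concrete multisets by computing the coefficients exactly. The SymPy sketch below (illustrative only; the function name `q_and_bound` is ours, and only the case $r>1$ is covered) assembles $q$ as the least common multiple of the reduced denominators and compares it with the bound $(b_{+}-1)!\,lcm(1,...,c_{r})^{m+b_{+}}\prod_{s<t}(c_{t}-c_{s})^{n-1}$:

```python
import sympy as sp

x = sp.symbols('x')

def q_and_bound(a, m):
    """Exact coefficients of I_m over the multiset a (assumes r > 1) and the
    denominator bound of Theorem 5; returns (q, bound), expecting q | bound."""
    n = len(a)
    cs = sorted(set(a))
    bs = [a.count(c) for c in cs]
    r, bplus = len(cs), max(bs)
    f = sp.prod([(x + c) ** (-bb) for c, bb in zip(cs, bs)])

    def mu(i, j):  # partial-fraction coefficients via the residue formula
        g = sp.cancel(f * (x + cs[i]) ** bs[i])
        return sp.diff(g, x, bs[i] - j).subs(x, -cs[i]) / sp.factorial(bs[i] - j)

    def H(p, N):  # generalized harmonic number H_p(N)
        return sum((sp.Rational(1, k ** p) for k in range(1, N + 1)), sp.Integer(0))

    # rational part and the coefficient of each zeta(j+m), j = 2..b_+
    rational = sum(mu(i, 1) * (H(m + 1, cs[-1]) - H(m + 1, cs[i])) for i in range(r - 1))
    zeta_coeffs = []
    for j in range(2, bplus + 1):
        cj = sp.Integer(0)
        for i in range(r):
            if bs[i] >= j:
                rational -= sp.binomial(m + j - 1, m) * mu(i, j) * H(j + m, cs[i])
                cj += sp.binomial(m + j - 1, m) * mu(i, j)
        zeta_coeffs.append(cj)
    dens = [sp.fraction(sp.together(cf))[1] for cf in [rational] + zeta_coeffs]
    q = sp.ilcm(1, *dens)
    bound = (sp.factorial(bplus - 1)
             * sp.ilcm(1, *range(1, cs[-1] + 1)) ** (m + bplus)
             * sp.prod([(cs[t] - cs[s]) ** (n - 1)
                        for s in range(r) for t in range(s + 1, r)]))
    return q, bound
```

For instance, for the multiset $(0,2,2)$ with $m=1$ one finds $I_{1}(0,2,2)=\frac{23}{16}-\zeta(3)$, so $q=16$, while the bound is $1!\cdot lcm(1,2)^{3}\cdot 2^{2}=32$.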
## 4 Estimates of the Rational Approximation of $\zeta(5)$
In order to prove that $\zeta(3)$ is irrational, the key is to find a parametric
representation of $\zeta(3)$ and to construct an effective rational
approximation; this rational approximation is related to Legendre-type
polynomials. In the last section we discussed the generalized Beukers
integral. On the one hand, it provides a parametric representation of
$\zeta(2n+1)$; on the other hand, the generalization makes it possible to
construct rational approximations of $\zeta(2n+1)$. As a special case, using
Legendre-type polynomials to find an approximation of $\zeta(5)$ is the
most obvious way to attempt a proof of the irrationality of $\zeta(5)$.
Unfortunately, this approximation is not as effective as in the case of
$\zeta(3)$. In this section we prove this result. Before giving the proof,
we first state two lemmas. Throughout this section, $1-xy$ is denoted by
$f$, $1-s$ by $\overline{s}$, $1-r$ by $\overline{r}$, etc.
More specifically, by Theorem 1 we can construct an integral $I(a,b)$ for
nonnegative integers $a,b$ such that
$I(a,b)=\begin{cases}q_{0}\zeta(5)+q_{1}\text{, if }a=b\\\ q_{2}\text{, if
}a\neq b\end{cases}$
where $q_{0},q_{1},q_{2}\in\mathbb{Q}$. It turns out that if we let
$Q_{n}(x),Q_{n}(y)$ be polynomials of $x$ and $y$ respectively with integer
coefficients and degree $n$, then
$-\int_{(0,1)^{2}}\frac{\log^{3}(xy)Q_{n}(x)Q_{n}(y)}{1-xy}dxdy=\alpha_{n}\zeta(5)+\beta_{n}$
where $\alpha_{n},\beta_{n}\in\mathbb{Q}$. That is, we have found a parametric
representation of $\zeta(5)$. By letting $Q_{n}$ be the Legendre-type
polynomial, denoted here by $P_{n}$, namely
$P_{n}(x):=\frac{1}{n!}\frac{d^{n}}{dx^{n}}(x(1-x))^{n}$, we are able to
construct a rational approximation of $\zeta(5)$.
Let
$J_{3}(n):=-\int_{(0,1)^{2}}\frac{\log^{3}(xy)P_{n}(x)P_{n}(y)}{1-xy}dxdy$,
then, according to Theorem 1, we have
$J_{3}(n)=\frac{A_{n}\zeta(5)+B_{n}}{d_{n}^{5}}$, where
$A_{n},B_{n}\in\mathbb{Z}$ and $d_{n}=lcm(1,...,n)$. In the following we prove that
$\frac{6}{(n+1)^{4}}\leq J_{3}(n)\leq\frac{6\pi^{2}}{(n+\frac{1}{2})^{2}}$.
Since $d_{n}^{5}\cdot\frac{6}{(n+1)^{4}}>1$ for all sufficiently large $n$, we are
not able to deduce the irrationality of $\zeta(5)$ from this approximation.
###### Lemma 3.
For any integer $m\geq 2$, the following inequality holds for all
$x\in(0,+\infty)$; moreover, equality holds if and only if $x=1$:
$m(1-\frac{1}{\sqrt[m]{x}})\leq\log(x)\leq m(\sqrt[m]{x}-1)$
###### Proof.
The proof is divided into two parts.
I.
Define $g(x):=\log(x)-m(\sqrt[m]{x}-1)$. Obviously $g(1)=0$ and
$g^{\prime}(x)=\frac{1}{x}-\frac{x^{\frac{1}{m}}}{x}=\frac{1-x^{\frac{1}{m}}}{x}$
If $x\in(0,1)$, then $g^{\prime}(x)>0$. If $x\in(1,\infty)$ then
$g^{\prime}(x)<0$. Therefore $g(x)$ increases strictly from negative values to
$0$ on $(0,1)$ and decreases strictly from $0$ to negative values on
$(1,+\infty)$. This shows $\log(x)\leq m(\sqrt[m]{x}-1)$, with equality if and
only if $x=1$.
II.
Likewise we define $g(x):=\log(x)-m(1-x^{-\frac{1}{m}})$. Observe that
$g(1)=0$ and
$g^{\prime}(x)=\frac{1}{x}-\frac{1}{x^{1+\frac{1}{m}}}=\frac{x^{\frac{1}{m}}-1}{x^{1+\frac{1}{m}}}$
If $x\in(0,1)$, then $g^{\prime}(x)<0$. If $x\in(1,\infty)$ then
$g^{\prime}(x)>0$. Therefore $g(x)$ decreases strictly from positive values to
$0$ on $(0,1)$ and increases strictly from $0$ to positive values on
$(1,+\infty)$. This shows $m(1-\frac{1}{\sqrt[m]{x}})\leq\log(x)$, with
equality if and only if $x=1$.
∎
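The two-sided bound of Lemma 3 is easy to check numerically on a grid; a minimal Python sketch (illustrative only):

```python
import math

def lower(m, x):
    # left-hand side m(1 - x^{-1/m})
    return m * (1 - x ** (-1.0 / m))

def upper(m, x):
    # right-hand side m(x^{1/m} - 1)
    return m * (x ** (1.0 / m) - 1)

# check m(1 - x^{-1/m}) <= log x <= m(x^{1/m} - 1) on a small grid
for m in (2, 5, 10):
    for x in (0.01, 0.5, 1.0, 2.0, 100.0):
        assert lower(m, x) <= math.log(x) <= upper(m, x)
```

At $x=1$ both bounds and $\log x$ vanish, in agreement with the equality case.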
###### Lemma 4.
(Canonical transform)
Define
$L(a,b;n+1):=\int_{0}^{1}\frac{s^{a}\overline{s}^{b}}{(1-fs)^{n+1}}ds$
then the equality is valid
$L(a,b;n+1)=(1-f)^{b-n}L(b,a;a+b+1-n)$
###### Proof.
Substitute $s$ by $\frac{1-r}{1-fr}$, then $1-s=\frac{r(1-f)}{1-fr}$,
$1-fs=\frac{1-f}{1-fr}$ and $ds=-\frac{1-f}{(1-fr)^{2}}dr$. If $s=0$, then
$r=1$, and if $s=1$, then $r=0$. Then
$\displaystyle\int_{0}^{1}\frac{s^{a}\overline{s}^{b}}{(1-fs)^{n+1}}ds$
$\displaystyle=-\int_{1}^{0}(\frac{1-r}{1-fr})^{a}(\frac{r(1-f)}{1-fr})^{b}(\frac{1-f}{1-fr})^{-n-1}\frac{1-f}{(1-fr)^{2}}dr$
$\displaystyle=(1-f)^{b-n}\int_{0}^{1}\frac{r^{b}\overline{r}^{a}}{(1-fr)^{a+b+1-n}}dr$
This is what we need. For convenience, this transform is called the canonical
transform. ∎
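The canonical transform can be verified numerically for concrete parameters; the sketch below uses mpmath (an assumption of this illustration), treating $f$ as a fixed number in $(0,1)$:

```python
from mpmath import mp, mpf, quad

mp.dps = 25

def L(a, b, nu, f):
    # L(a, b; nu) := \int_0^1 s^a (1 - s)^b / (1 - f s)^nu ds
    return quad(lambda s: s ** a * (1 - s) ** b / (1 - f * s) ** nu, [0, 1])

# canonical transform: L(a, b; n+1) = (1 - f)^{b-n} L(b, a; a+b+1-n)
f, a, b, n = mpf('0.3'), 2, 3, 2
lhs = L(a, b, n + 1, f)
rhs = (1 - f) ** (b - n) * L(b, a, a + b + 1 - n, f)
```

The two sides agree to working precision, reflecting that the substitution $s=\frac{1-r}{1-fr}$ is exact.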
###### Lemma 5.
Assume that
$\displaystyle
J_{3}(n):=-\int_{(0,1)^{2}}\frac{\log^{3}(xy)P_{n}(x)P_{n}(y)}{1-xy}dxdy$
$\displaystyle
R_{2}(n)=\int_{(0,1)^{4}}\frac{x^{n}\overline{x}^{n}y^{n}\overline{y}^{n}s^{n}\overline{u}^{n}}{(1-fs)^{n+1}}\frac{\log(\frac{s\overline{u}}{u\overline{s}})}{s-u}dxdydsdu$
then the equality $J_{3}(n)=6R_{2}(n)$ is valid for all $n\in\mathbb{Z}^{+}$.
###### Proof.
Recall that $f:=1-xy$. Since
$-\frac{\log(1-f)}{f}=\int_{0}^{1}\frac{1}{1-fz}dz$, we can rewrite $J_{3}(n)$
as follows:
$\displaystyle J_{3}(n)$
$\displaystyle=-\int_{(0,1)^{2}}\frac{\log^{3}(xy)P_{n}(x)P_{n}(y)}{1-xy}dxdy$
$\displaystyle=-\int_{(0,1)^{2}}\frac{\log^{3}(1-f)}{f^{3}}f^{2}P_{n}(x)P_{n}(y)dxdy$
$\displaystyle=\int_{(0,1)^{5}}\frac{f^{2}P_{n}(x)P_{n}(y)}{(1-fz_{1})(1-fz_{2})(1-fz_{3})}dxdydz_{1}dz_{2}dz_{3}$
By the partial fraction decomposition
$\displaystyle\frac{f^{2}}{(1-fz_{1})(1-fz_{2})(1-fz_{3})}$ $\displaystyle=$
$\displaystyle\frac{1}{(z_{2}-z_{1})(z_{3}-z_{1})}\frac{1}{1-fz_{1}}+\frac{1}{(z_{1}-z_{2})(z_{3}-z_{2})}\frac{1}{1-fz_{2}}+\frac{1}{(z_{1}-z_{3})(z_{2}-z_{3})}\frac{1}{1-fz_{3}}$
we obtain
$J_{3}(n)=Q_{1}(n)+Q_{2}(n)+Q_{3}(n)$
where
$\displaystyle Q_{1}(n)$
$\displaystyle=\int_{(0,1)^{5}}\frac{P_{n}(x)P_{n}(y)}{(1-fz_{1})(z_{2}-z_{1})(z_{3}-z_{1})}dxdydz_{1}dz_{2}dz_{3}$
$\displaystyle Q_{2}(n)$
$\displaystyle=\int_{(0,1)^{5}}\frac{P_{n}(x)P_{n}(y)}{(1-fz_{2})(z_{1}-z_{2})(z_{3}-z_{2})}dxdydz_{1}dz_{2}dz_{3}$
$\displaystyle Q_{3}(n)$
$\displaystyle=\int_{(0,1)^{5}}\frac{P_{n}(x)P_{n}(y)}{(1-fz_{3})(z_{1}-z_{3})(z_{2}-z_{3})}dxdydz_{1}dz_{2}dz_{3}$
By symmetry $Q_{1}(n)=Q_{2}(n)=Q_{3}(n)$, so $J_{3}(n)=3Q_{1}(n)$ and it
suffices to deal with $Q_{1}(n)$. Integrating $Q_{1}(n)$ by parts $n$ times
with respect to $x$, we
have
$Q_{1}(n)=\int_{(0,1)^{5}}\frac{(xyz_{1})^{n}(1-x)^{n}P_{n}(y)}{(1-fz_{1})^{n+1}(z_{2}-z_{1})(z_{3}-z_{1})}dxdydz_{1}dz_{2}dz_{3}$
Now substitute $w_{i}=\frac{1-z_{i}}{1-fz_{i}}$ for $i=1,2,3$; by
straightforward verification of the following:
I,
$z_{i}=\frac{1-w_{i}}{1-fw_{i}},\text{and }z_{i}=0\Leftrightarrow
w_{i}=1,z_{i}=1\Leftrightarrow w_{i}=0$
II,
$dz_{i}=\frac{f-1}{(1-fw_{i})^{2}}dw_{i}\\\ $
III, if $k=1,2,3$ and $k\neq i$, then
$z_{k}-z_{i}=\frac{1-w_{k}}{1-fw_{k}}-\frac{1-w_{i}}{1-fw_{i}}=\frac{(f-1)(w_{k}-w_{i})}{(1-fw_{k})(1-fw_{i})}$
IV,
$\frac{z_{1}^{n}}{(1-fz_{1})^{n+1}}=\frac{(1-fw_{1})(1-w_{1})^{n}}{(1-f)^{n+1}}$
we have
$\displaystyle Q_{1}(n)$
$\displaystyle=\int_{(0,1)^{5}}\frac{(xyz_{1})^{n}(1-x)^{n}P_{n}(y)}{(1-fz_{1})^{n+1}(z_{2}-z_{1})(z_{3}-z_{1})}dxdydz_{1}dz_{2}dz_{3}$
$\displaystyle=\int_{(0,1)^{5}}\frac{x^{n}(1-x)^{n}y^{n}P_{n}(y)(1-fw_{1})(1-w_{1})^{n}}{(1-f)^{n}(1-fw_{2})(1-fw_{3})(w_{2}-w_{1})(w_{3}-w_{1})}dxdydw_{1}dw_{2}dw_{3}$
recall that $1-f=1-(1-xy)=xy$, thus
$Q_{1}(n)=\int_{(0,1)^{5}}(1-x)^{n}(1-w_{1})^{n}P_{n}(y)\frac{(1-fw_{1})}{(1-fw_{2})(1-fw_{3})(w_{2}-w_{1})(w_{3}-w_{1})}dxdydw_{1}dw_{2}dw_{3}\\\
$
Once again using the partial fraction decomposition
$\frac{(1-fw_{1})}{(1-fw_{2})(1-fw_{3})}=\frac{w_{1}-w_{2}}{w_{3}-w_{2}}\frac{1}{1-fw_{2}}+\frac{w_{3}-w_{1}}{w_{3}-w_{2}}\frac{1}{1-fw_{3}}$
Then $Q_{1}(n)=R_{1}(n)+R_{2}(n)$, where
$\displaystyle R_{1}(n)$
$\displaystyle=-\int_{(0,1)^{5}}\frac{(1-x)^{n}(1-w_{1})^{n}P_{n}(y)}{(1-fw_{2})(w_{3}-w_{2})(w_{3}-w_{1})}dxdydw_{1}dw_{2}dw_{3}$
$\displaystyle R_{2}(n)$
$\displaystyle=\int_{(0,1)^{5}}\frac{(1-x)^{n}(1-w_{1})^{n}P_{n}(y)}{(1-fw_{3})(w_{3}-w_{2})(w_{2}-w_{1})}dxdydw_{1}dw_{2}dw_{3}$
Notice that $R_{1}(n)$ and $R_{2}(n)$ are in fact equal, therefore
$Q_{1}(n)=2R_{2}(n)$ and it suffices to compute $R_{2}(n)$. For convenience,
rename $w_{3},w_{2},w_{1}$ as $s,t,u$ respectively, i.e.
$R_{2}(n)=\int_{(0,1)^{5}}\frac{(1-x)^{n}(1-u)^{n}P_{n}(y)}{(1-fs)^{n+1}(s-t)(t-u)}dxdydsdtdu\\\
$
Integrating $R_{2}(n)$ by parts $n$ times with respect to $y$, we have
$R_{2}(n)=\int_{(0,1)^{5}}\frac{x^{n}(1-x)^{n}y^{n}(1-y)^{n}s^{n}(1-u)^{n}}{(1-fs)^{n+1}(s-t)(t-u)}dxdydsdtdu$
Note that if $s\neq u$,
$\int_{0}^{1}\frac{1}{(s-t)(t-u)}dt=\frac{\log(\frac{s}{1-s})-\log(\frac{u}{1-u})}{s-u}=\frac{\log(\frac{s(1-u)}{u(1-s)})}{s-u}$
If $s>u$, then $\log(s)-\log(u)>0$ and $\log(1-u)>\log(1-s)$, therefore
$\frac{\log(\frac{s(1-u)}{u(1-s)})}{s-u}>0$. If $u>s$,
$\frac{\log(\frac{s(1-u)}{u(1-s)})}{s-u}=\frac{\log(\frac{u(1-s)}{s(1-u)})}{u-s}>0$.
That is if $s\neq u$, $\frac{\log(\frac{s(1-u)}{u(1-s)})}{s-u}>0$.
Now we can see $R_{2}(n)>0$, and
$R_{2}(n)=\int_{(0,1)^{4}}\frac{x^{n}\overline{x}^{n}y^{n}\overline{y}^{n}s^{n}\overline{u}^{n}}{(1-fs)^{n+1}}\frac{\log(\frac{s\overline{u}}{u\overline{s}})}{s-u}dxdydsdu$
Since $J_{3}(n)=3Q_{1}(n)=6R_{2}(n)$, this is what we needed to prove. ∎
###### Theorem 6.
For all integers $n\geq 1$, the following inequalities hold:
$\frac{6}{(n+1)^{4}}\leq J_{3}(n)\leq\frac{6\pi^{2}}{(n+\frac{1}{2})^{2}}$
###### Proof.
The proof is divided into two parts.
I.
Firstly we give the upper bound of $J_{3}(n)$. In the preceding Lemma we
proved that $J_{3}(n)=6R_{2}(n)$, where
$R_{2}(n)=\int_{(0,1)^{4}}\frac{(x\overline{x}y\overline{y}s\overline{u})^{n}}{(1-fs)^{n+1}}\frac{\log(\frac{s\overline{u}}{\overline{s}u})}{s-u}dxdydsdu$
Now applying Lemma 3, we obtain for any integer $m\geq 2$
$\displaystyle\frac{\log(\frac{s\overline{u}}{\overline{s}u})}{s-u}$
$\displaystyle\leq
m(\sqrt[m]{\frac{s\overline{u}}{\overline{s}u}}-1)/(s-u)\leq
m\frac{\sqrt[m]{s\overline{u}}-\sqrt[m]{\overline{s}u}}{\sqrt[m]{\overline{s}u}(s\overline{u}-\overline{s}u)}$
Note that $s-u=s\overline{u}-\overline{s}u$ and that
$\frac{s\overline{u}-\overline{s}u}{(s\overline{u})^{\frac{1}{m}}-(\overline{s}u)^{\frac{1}{m}}}=\sum_{k=0}^{m-1}(s\overline{u})^{\frac{m-1-k}{m}}(\overline{s}u)^{\frac{k}{m}}$
Applying the inequality of arithmetic and geometric means, we get
$\sum_{k=0}^{m-1}(s\overline{u})^{\frac{m-1-k}{m}}(\overline{s}u)^{\frac{k}{m}}\geq
m(s\overline{s}u\overline{u})^{\frac{m(m-1)}{2m}\frac{1}{m}}=m(s\overline{s}u\overline{u})^{\frac{(m-1)}{2m}}$
Therefore
$\frac{\log(\frac{s\overline{u}}{\overline{s}u})}{s-u}\leq\frac{1}{(s\overline{u})^{\frac{1}{2}-\frac{1}{2m}}(\overline{s}u)^{\frac{1}{2}+\frac{1}{2m}}}$
It turns out that
$R_{2}(n)\leq\int_{(0,1)^{4}}\frac{(x\overline{x}y\overline{y})^{n}}{(1-fs)^{n+1}}(s)^{n-\frac{1}{2}+\frac{1}{2m}}(\overline{s})^{-\frac{1}{2}-\frac{1}{2m}}(\overline{u})^{n-\frac{1}{2}+\frac{1}{2m}}(u)^{-\frac{1}{2}-\frac{1}{2m}}dxdydsdu$
On the one hand,
$\int_{0}^{1}\overline{u}^{n-\frac{1}{2}+\frac{1}{2m}}u^{-\frac{1}{2}-\frac{1}{2m}}du=B(n+\frac{1}{2m}+\frac{1}{2},-\frac{1}{2m}+\frac{1}{2})$
On the other hand, by the canonical transform (Lemma 4) we obtain
$\displaystyle\int_{(0,1)^{3}}\frac{(x\overline{x}y\overline{y})^{n}}{(1-fs)^{n+1}}(s)^{n-\frac{1}{2}+\frac{1}{2m}}(\overline{s})^{-\frac{1}{2}-\frac{1}{2m}}dxdyds$
$\displaystyle=$
$\displaystyle\int_{(0,1)^{3}}(1-f)^{-\frac{1}{2}-\frac{1}{2m}-n}(x\overline{x}y\overline{y})^{n}z^{-\frac{1}{2}-\frac{1}{2m}}\overline{z}^{n-\frac{1}{2}+\frac{1}{2m}}dxdydz$
$\displaystyle=$
$\displaystyle\int_{(0,1)^{3}}x^{-\frac{1}{2}-\frac{1}{2m}}\overline{x}^{n}y^{-\frac{1}{2}-\frac{1}{2m}}\overline{y}^{n}z^{-\frac{1}{2}-\frac{1}{2m}}\overline{z}^{n-\frac{1}{2}+\frac{1}{2m}}dxdydz$
$\displaystyle=$
$\displaystyle(B(n+1,\frac{1}{2}-\frac{1}{2m}))^{2}B(n+\frac{1}{2}+\frac{1}{2m},\frac{1}{2}-\frac{1}{2m})$
Therefore
$R_{2}(n)\leq(B(n+1,\frac{1}{2}-\frac{1}{2m})B(n+\frac{1}{2}+\frac{1}{2m},\frac{1}{2}-\frac{1}{2m}))^{2}$.
Moreover, both $B(n+1,\frac{1}{2}-\frac{1}{2m})$ and
$B(n+\frac{1}{2}+\frac{1}{2m},\frac{1}{2}-\frac{1}{2m})$ decrease as $m$
increases. Letting $m\rightarrow\infty$, we have
$\displaystyle\lim_{m\rightarrow\infty}(B(n+1,\frac{1}{2}-\frac{1}{2m})B(n+\frac{1}{2}+\frac{1}{2m},\frac{1}{2}-\frac{1}{2m}))^{2}$
$\displaystyle=$
$\displaystyle(B(n+1,\frac{1}{2})B(n+\frac{1}{2},\frac{1}{2}))^{2}$
$\displaystyle=$
$\displaystyle(\frac{\Gamma(n+1)\Gamma(\frac{1}{2})}{\Gamma(n+\frac{3}{2})}\frac{\Gamma(n+\frac{1}{2})\Gamma(\frac{1}{2})}{\Gamma(n+1)})^{2}$
$\displaystyle=$ $\displaystyle\frac{\pi^{2}}{(n+\frac{1}{2})^{2}}$
Therefore $J_{3}(n)\leq\frac{6\pi^{2}}{(n+\frac{1}{2})^{2}}$.
II.
In this part we give the lower bound of $J_{3}(n)$. By Lemma 3,
$\frac{\log(\frac{s\overline{u}}{\overline{s}u})}{s-u}\geq
m\frac{1-(\frac{s\overline{u}}{\overline{s}u})^{-\frac{1}{m}}}{s\overline{u}-{\overline{s}u}}=m\frac{(s\overline{u})^{\frac{1}{m}}-(\overline{s}u)^{\frac{1}{m}}}{(s\overline{u})^{\frac{1}{m}}(s\overline{u}-{\overline{s}u})}$
Likewise
$\frac{s\overline{u}-\overline{s}u}{(s\overline{u})^{\frac{1}{m}}-(\overline{s}u)^{\frac{1}{m}}}=\sum_{k=0}^{m-1}(s\overline{u})^{\frac{m-1-k}{m}}(\overline{s}u)^{\frac{k}{m}}$
If $s\overline{u}>\overline{s}u$,
$\sum_{k=0}^{m-1}(s\overline{u})^{\frac{m-1-k}{m}}(\overline{s}u)^{\frac{k}{m}}<m(s\overline{u})^{\frac{m-1}{m}}<m$.
If $s\overline{u}<\overline{s}u$,
$\sum_{k=0}^{m-1}(s\overline{u})^{\frac{m-1-k}{m}}(\overline{s}u)^{\frac{k}{m}}<m(\overline{s}u)^{\frac{m-1}{m}}<m$
as well. So
$\frac{\log(\frac{s\overline{u}}{\overline{s}u})}{s-u}>\frac{m}{m(s\overline{u})^{\frac{1}{m}}}=(s\overline{u})^{-\frac{1}{m}}$
Now, returning to $R_{2}(n)$,
$R_{2}(n)>\int_{(0,1)^{4}}\frac{(x\overline{x}y\overline{y})^{n}}{(1-fs)^{n+1}}(s)^{n-\frac{1}{m}}\overline{u}^{n-\frac{1}{m}}dxdydsdu$
Obviously on the one hand,
$\int_{0}^{1}\overline{u}^{n-\frac{1}{m}}du=\frac{1}{n-\frac{1}{m}+1}$
On the other hand, with the canonical transform we obtain
$\displaystyle\int_{(0,1)^{3}}\frac{(x\overline{x}y\overline{y})^{n}}{(1-fs)^{n+1}}s^{n-\frac{1}{m}}dxdyds$
$\displaystyle=$
$\displaystyle\int_{(0,1)^{3}}\frac{(x\overline{x}y\overline{y})^{n}\overline{z}^{n-\frac{1}{m}}}{(1-f)^{n}(1-fz)^{1-\frac{1}{m}}}dxdydz$
$\displaystyle\geq$
$\displaystyle\int_{(0,1)^{3}}\overline{x}^{n}\overline{y}^{n}\overline{z}^{n-\frac{1}{m}}dxdydz$
$\displaystyle=$ $\displaystyle\frac{1}{(n+1)^{2}}\frac{1}{(n-\frac{1}{m}+1)}$
Therefore $R_{2}(n)\geq\frac{1}{(n+1)^{2}(n-\frac{1}{m}+1)^{2}}$. Likewise,
taking the limit we have $J_{3}(n)\geq\frac{6}{(n+1)^{4}}$. Therefore, finally
$\frac{6}{(n+1)^{4}}\leq J_{3}(n)\leq\frac{6\pi^{2}}{(n+\frac{1}{2})^{2}}$
∎
Hence we have the conclusion: as $n$ tends to infinity, although
$J_{3}(n)\rightarrow 0$, $lcm(1,2,...,n)^{5}\cdot J_{3}(n)\rightarrow\infty$.
That is, compared to the rational approximation of $\zeta(3)$ (see [2]), the
approximation of $\zeta(5)$ by the generalized Beukers method converges too
slowly. One has to look for another method to prove the irrationality of
$\zeta(5)$. The following table gives a numerical comparison.
Numerical comparison of the lower and upper bounds of $J_{3}(n)$.
$n$ | $\frac{6}{(n+1)^{4}}$ | $J_{3}(n)$ | $\frac{6\pi^{2}}{(n+\frac{1}{2})^{2}}$
---|---|---|---
$1$ | $0.3750$ | $4.4313$ | $26.3189$
$2$ | $0.0741$ | $0.9474$ | $9.4748$
$3$ | $0.0234$ | $0.2996$ | $4.8341$
$4$ | $0.0096$ | $0.1237$ | $2.9243$
$5$ | $0.0046$ | $0.0605$ | $1.9576$
$6$ | $0.0025$ | $0.0332$ | $1.4016$
$7$ | $0.0015$ | $0.0198$ | $1.0528$
$8$ | $0.0009$ | $0.0182$ | $0.8196$
$9$ | $0.0006$ | $0.0126$ | $0.6562$
$10$ | $0.0004$ | $0.0058$ | $0.5371$
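As a sanity check on the table, $J_{3}(n)$ can be evaluated numerically by expanding $1/(1-xy)$ as a geometric series and integrating term by term. The Python sketch below is illustrative only; the coefficient formula $P_{n}(x)=\sum_{k}(-1)^{k}\binom{n}{k}\binom{n+k}{k}x^{k}$ follows from the definition of $P_{n}$:

```python
import math

def P_coeffs(n):
    # P_n(x) = (1/n!) (d/dx)^n (x(1-x))^n = sum_k (-1)^k C(n,k) C(n+k,k) x^k
    return [(-1) ** k * math.comb(n, k) * math.comb(n + k, k) for k in range(n + 1)]

def J3(n, K=4000):
    """Series evaluation of J_3(n): expand 1/(1-xy) = sum_k (xy)^k and use
    -int int x^s y^t log^3(xy) dx dy = 6 sum_{j=0}^{3} (s+1)^{-(j+1)} (t+1)^{-(4-j)}."""
    p = P_coeffs(n)
    total = 0.0
    for k in range(K):
        for a, pa in enumerate(p):
            for b, pb in enumerate(p):
                u, v = a + k + 1, b + k + 1
                total += pa * pb * sum(u ** -(j + 1) * v ** -(4 - j) for j in range(4))
    return 6 * total
```

For $n=1$ the series can even be summed in closed form, giving $J_{3}(1)=120(\zeta(5)-1)\approx 4.4313$, in agreement with the first row of the table.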
## 5 Acknowledgement
I would like to thank the people who defamed me online in 2012/2013. Those
unjust words are still vividly present in my memory. Those unjust words gave
me motivation and drove me to keep moving forward.
## References
* [1] Some Generalizations of Beukers' Integrals, Kyungpook Math. J. 42 (2002), 399–416.
* [2] F. Beukers, A Note on the Irrationality of $\zeta(2)$ and $\zeta(3)$, Bull. London Math. Soc. 11 (1979), 268–272.
Xiaowei Wang(王骁威)
Institut für Mathematik, Universität Potsdam, Potsdam OT Golm, Germany
Email<EMAIL_ADDRESS>
# Remote multi-user control of the production of Bose-Einstein condensates for
research and education
J S Laustsen, R Heck, O Elíasson, J J Arlt, J F Sherson and C A Weidner
Department of Physics and Astronomy, Aarhus University, 8000 Aarhus C, Denmark
<EMAIL_ADDRESS>
###### Abstract
Remote control of experimental systems allows for improved collaboration
between research groups as well as unique remote educational opportunities
accessible by students and citizen scientists. Here, we describe an experiment
for the production and investigation of ultracold quantum gases capable of
asynchronous remote control by multiple remote users. This is enabled by a
queuing system coupled to an interface that can be modified to suit the user,
e.g. a gamified interface for use by the general public or a scripted
interface for an expert. To demonstrate this, the laboratory was opened to
remote experts and the general public. During the available time, remote users
were given the task of optimising the production of a Bose-Einstein condensate
(BEC). This work thus provides a stepping stone towards the exploration and
realisation of more advanced physical models by remote experts, students and
citizen scientists alike.
Keywords: Remote experiment control, ultracold atoms, BEC
## 1 Introduction
Ultracold quantum gases have become one of the prime platforms for simulating
technologically relevant quantum systems within the last decades. In
particular, extremely clean and pure quantum model systems can be realised
that offer a high degree of controllability with respect to parameters such as
the atoms’ interaction strength and temperature. This progress has led to
increasingly powerful and complex experiments in lattice-based quantum
simulation [1, 2], the simulation of strongly-correlated condensed matter
systems [3], and quantum computing with Rydberg atoms [4], among others,
rendering cold atoms a fantastic platform for the development of technologies
that will drive the second quantum revolution [5].
Collaboration between experimental and theoretical groups is an essential part
of developing and evaluating models applicable to quantum simulation
experiments. To optimise experimental procedures, it may indeed be beneficial
to use dedicated, automated protocols developed by theory groups. Opening the
laboratory to direct remote control by collaborators may thus increase the
efficiency of such collaborative efforts. Moreover, a remote control system
opens up new possibilities regarding outreach to students and the general
public. By allowing a broad audience of non-expert users to control some
experimental parameters, one can imagine a number of scenarios geared towards
public outreach and education. First, the public can take part in citizen
science experiments, and, in particular, previous work using the system
described here shows that valuable insight into cognitive science can be
gained [6]. Secondly, such platforms can be used to educate and engage
non-experts in quantum physics, e.g. by allowing students access to cutting-edge
research laboratories regardless of where the laboratory is physically
located. In both cases this creates the need for an intuitive user interface
which allows users to focus on the essential parts and hides the technical
details. At the same time, the experimental system must also contain the
infrastructure to handle the input from one of many users and return the
relevant results to the correct user. A number of these open platforms already
exist, including the IBM Quantum Experience [7], and the open availability of
this platform has allowed for the production of a number of research articles
(see, e.g., Refs. [8, 9, 10, 11, 12]), educational material [13], and games
[14].
In principle, any experimental control program can be modified for remote
control via the addition of a remote server and a suitable front-end for the
user. In terms of experimental control programs, several publicly available
systems for cold atom experiments have been published [15, 16, 17, 18, 19]. In
addition, numerous commercial options are available, such as systems by
ColdQuanta [20], MLab [21], ADwin [22] and IQ Technologies [23] that can be
purchased together with suitable hardware. All of these control systems have
sub-microsecond control of digital and analog channels and some allow for
direct output of radio frequency (RF) signals. Additionally, they typically
allow for communication with external hardware through different protocols or
via implementation of appropriate drivers. These criteria define the typical
minimum viable product for useful cold-atom experiment control. Software for
camera control and analysis of the images enables some systems to optimise
experimental performance in a closed loop optimisation of experimental
parameters. Moreover, all of these systems are remotely-controllable either
directly or via simple screen-sharing protocols. However, to our knowledge,
none of these control programs had been used in a multi-user setting where
several users simultaneously remotely controlled an experiment through the use
of the aforementioned server and front-end, with the exception that, while
preparing this manuscript, we became aware of the Albert system built by
ColdQuanta that came online in late 2020 [24]. This development reflects the growing commercial and academic interest in the remote control of cold atom systems.
Here we discuss the implementation of a remote-controlled experiment usable by a single expert user or by multiple non-expert users.
Previously, we have documented the challenges that we provided to our users,
as well as the main findings that arose from this work [6]. However, we have
not explained the underlying system architecture and the overarching
possibilities that this gives rise to in research and education. The general
knowledge of these details is crucial for other groups to implement similar
systems, and this is what we focus on in this work. In both of the use-cases
considered here, there is a need for a queuing system for the input sequences
and the return of the results. When considering multi-user access there is
also the need to track the sender throughout the process of queuing,
performing the experiment, analysis and reporting the results. The
infrastructure of the experiment also allows for multiple expert users and
this option will be explored in future work. For instance, one could imagine
running multiple collaborative efforts simultaneously.
This paper is organised as follows: In the first section the software enabling
remote control is presented. Following this, the experimental sequence and its
technical implementation is described. We then describe the two different
implementations of remote user access that were used in previous work: single-
and multi-user control [6]. The last section concludes and provides an
outlook.
## 2 The control software
The experimental control system is LabVIEW-based and capable of being expanded
as new hardware is added to the experiment. A field-programmable gate array
(FPGA, PCI-7813R) is used to control $70$ digital and $48$ analog channels
through $4$ digital to analog converter modules (NI 9264, NI 9263). In
addition, the system can communicate with hardware drivers to other hardware
such as motion stages, piezoelectric devices and RF synthesisers. Thus, our
control system meets the aforementioned criteria for usability in a cold atom
experiment.
The control program is based on a server/client architecture. The server
controls all hardware, including the FPGA, and the client provides an
interface for the user and compiles the programmed sequence. On the client
side the sequence is built of waves which correspond to the output of a given
digital or analog channel, a GPIB command or a command through a hardware
driver. Regularly-used sequences of waves can be collected and collapsed into
blocks, e.g., the commands required to load atoms into a trap or image an atom
cloud. For each wave and block, externally-accessible variables can be
declared, e.g. the frequencies of the RF tones applied to acousto-optic
modulators (AOMs) or the duration of the RF pulse applied to the AOM. This
allows the user to create sequences with an adaptive level of abstraction. For
instance one can hide the exact technical implementation of experimental steps
in a block but keep the essential control parameters accessible, which is
useful for reducing the cognitive load of a remote user.
An example of a block used for absorption imaging of ultracold atoms is shown
in Fig. 1, where smaller blocks are incorporated. The waves and blocks are
ordered in a tree structure that controls the timing of an experiment. The
tasks are performed from top to bottom in such a tree. Any waves or blocks on
indented branches are performed simultaneously, and delays can be defined
within individual elements for more precise control of relative timing.
Initialised outputs may be defined such that they either hold their last value
or are reset to a default value after a given time. Thus the user need only
handle the values of relevant outputs at any given point. Wave and block
variables can be scanned individually or jointly in single- and multi-
dimensional scans, respectively. Loops are also available where a subset of
blocks is repeated while one or several variables are changed. For example,
the user can loop the capture of atoms in a trap while changing a given
parameter value during each loop iteration, effectively performing a parameter
scan within a single realisation of the experiment.
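As an illustration, the wave/block tree could be modelled as a simple recursive data structure. The Python sketch below uses hypothetical names (the actual system is implemented in LabVIEW) to show how sequential and indented parallel branches determine the timing of a sequence:

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical sketch of the wave/block tree described above; class and
# channel names are illustrative, not the actual LabVIEW objects.

@dataclass
class Wave:
    channel: str                       # e.g. an analog AOM frequency or a digital shutter
    duration: float                    # seconds
    variables: Dict[str, float] = field(default_factory=dict)  # externally-accessible

@dataclass
class Block:
    name: str
    children: List["Wave | Block"] = field(default_factory=list)
    parallel: bool = False             # indented branches run simultaneously

    def duration(self) -> float:
        """Sequential blocks sum child durations; parallel ones take the max."""
        ds = [c.duration() if isinstance(c, Block) else c.duration
              for c in self.children]
        if not ds:
            return 0.0
        return max(ds) if self.parallel else sum(ds)

# A block hiding the technical details of imaging but exposing its structure:
imaging = Block("absorption_imaging", [
    Wave("imaging_shutter", 0.001),
    Block("pulse_and_camera",
          [Wave("imaging_AOM", 0.0001), Wave("camera_trigger", 0.0001)],
          parallel=True),
])
print(imaging.duration())
```

The nested `parallel` block mirrors the indentation rule of the tree: its two waves overlap in time, so only the longer of the two contributes to the total duration.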
The novel aspects of the control system lie in its capability for
communication with remote users. This includes loading sequences from a queue
either created by a single user or multiple different users. After a remotely-
requested sequence is performed, relevant results (e.g. atom number) are sent
back to the user who designed the sequence. To make the remote control as
flexible as possible, the control software does not provide any user interface
for the remote user but communicates with stand-alone interfaces. Thus a
remote user can easily set up closed-loop optimisation by linking the returned
results into a script running a given optimisation algorithm that then
generates the next desired sequence, as described in detail below.
## 3 The experiment
To demonstrate the use of the control system and the communication necessary
for multi-user operation, we conducted an experiment in which remote users
create a Bose-Einstein Condensate (BEC). The experimental system is described
in Refs. [25, 26] and only its main features are described here.
The experimental sequence starts by precooling a cloud of Rb-87 atoms in a 3D
MOT. Here the atoms are laser-cooled and trapped via a combination of light
pressure and magnetic field gradients. Subsequently polarisation gradient
cooling is performed and the atoms are optically pumped to the low-field-
seeking $|F\,,\,m_{F}\rangle=|2,2\rangle$ state. The atoms are then trapped in
a magnetic quadrupole trap generated by a pair of coils in an anti-Helmholtz
configuration. These coils are mounted on a rail and are used to transport the
atoms through a differential pumping stage to the final chamber. Here the
atoms are evaporatively cooled by transferring the hottest atoms to a high-
field-seeking sublevel. By the end of the evaporation sequence the atoms have
a temperature of roughly $30\,\mu\mathrm{K}$. Subsequently, a crossed optical dipole trap (CDT) consisting of two laser beams (wavelength $\lambda=1064\,\mathrm{nm}$, $1/e^{2}$ waists of $45\,\mu\mathrm{m}$ and $85\,\mu\mathrm{m}$) is superimposed on the atoms.
After the final evaporation stage, the atom cloud is released from the trap
and an absorption image is recorded after a TOF. If the user-defined
evaporation sequence is effective, the cloud is condensed and a BEC is visible
in the image.
Figure 1: An example of a block used to take an absorption image at the end of an
experimental sequence (cf. Sec. 3). The block contains individual analog (A)
and digital (D) waves (W), as well as two embedded blocks (B) used to take the
absorption and background images. The block runs from the top to bottom with
indented elements running in parallel with the element above. In this block,
the imaging shutter is opened while the frequency applied to the imaging AOM
is set and the CDT AOM is turned off (thus turning off the CDT itself and
dropping the atoms from the trap). Then, after a variable time-of-flight (TOF)
the next block (blue squares, with zoom-in to the right of the main block)
simultaneously pulses the imaging AOM and triggers the camera shutter to take
the absorption image of the atoms. After 300 ms of camera processing time, the
same block takes a background image without atoms. The imaging shutter is
closed, and after an additional 300 ms, the camera is triggered again to take
the dark image without any light present. Subtracting the absorption image
from the background and dark images reveals the atom signal.
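The image subtraction described in the caption feeds the standard absorption-imaging analysis. A minimal numerical sketch is given below; the pixel area and scattering cross-section values are illustrative, not those of the actual setup:

```python
import numpy as np

def atom_number(absorption, background, dark,
                pixel_area=2.0e-12, sigma0=2.9e-13):
    """Estimate the atom number from the three images of Fig. 1.

    Standard absorption-imaging relation (parameter values illustrative):
    OD = -ln((I_abs - I_dark) / (I_bg - I_dark)),
    N  = sum(OD) * A_px / sigma0,
    with A_px the pixel area in the atom plane and sigma0 the resonant
    scattering cross-section of the imaging transition.
    """
    signal = np.clip(absorption - dark, 1, None).astype(float)
    reference = np.clip(background - dark, 1, None).astype(float)
    od = -np.log(signal / reference)       # optical density per pixel
    return od.sum() * pixel_area / sigma0

# Synthetic example: a uniform cloud absorbing half the light on a 4x4 patch
bg = np.full((4, 4), 1000.0)
absorp = np.full((4, 4), 500.0)
drk = np.zeros((4, 4))
print(f"{atom_number(absorp, bg, drk):.3e}")
```

The clipping guards against negative or zero pixel values after dark-frame subtraction, a common practical step before taking the logarithm.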
For the remote experiments reported here, the control parameters available to
the users are the laser powers of both laser beams forming the CDT and the
current in the quadrupole coils as a function of time. This configuration
allows the user to cool the atoms using forced evaporative cooling either in a
pure CDT [27], in a so-called hybrid trap consisting of the quadrupole
magnetic field and one of the dipole beams [28], or any combination of the
two. The depth and geometry of the trap depend on these parameters in a non-trivial way, providing an opportunity for external optimisation, the
goal of which is to produce the largest possible BEC. For both expert and non-
expert users a limitation of the available control space is necessary as only
a small fraction of the full control landscape will yield a BEC.
## 4 Two cases of remote user control
For a remotely-controllable system to be useful, appropriate user interfaces
must be developed, and each user class has different requirements to optimally
facilitate the interaction. For experts a scripted interface can be an
advantage as complex algorithms can be directly implemented. A more visual
interface of the control (for instance in a game-like setting) is needed for
non-expert users. Importantly, a different program structure is needed when
handling input from a single user or multiple users. In what follows, we
describe two different implementations of our remote control geared towards
single expert users and asynchronous use among the general public,
respectively. In this section, we elaborate on each of these cases. Again,
note that the data presented here is drawn from the same source as our initial
work [6], and detailed research results can be found there. Here, we focus on
more technical aspects of the experimental implementation and execution.
### 4.1 Single-user remote control
In our first implementation of remote control, an expert user optimised the
evaporation using the so-called dressed chopped random-basis (dCRAB)
optimisation algorithm [29]. Note that the algorithm was implemented on the
user side, so our implementation of remote expert control is algorithm
agnostic. Here the user had access to the CDT laser powers and quadrupole coil
currents as a function of time. Sequences of waveforms corresponding to the
parameter values were created as text files and sent to the experimental
control program through a folder on a cloud drive and placed in a queue. Even
for a single user, a queue is necessary due to the relatively long ($30$ s)
cycle time of the experiment. The queue operated on the first-in-first-out
(FIFO) principle, allowing the user to submit several sequences simultaneously
and easily keep track of the outputs; this is useful, e.g., when initialising the simplex for a Nelder-Mead optimisation.
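A minimal sketch of such a file-based FIFO queue is given below, assuming a shared (cloud-synced) folder and timestamp-prefixed file names; both choices are illustrative, not the actual implementation:

```python
import os
import tempfile
import time

# Illustrative file-based FIFO queue between a remote user and the control
# program; folder layout and file naming are assumptions.

def submit(queue_dir: str, name: str, waveform_text: str) -> str:
    """User side: drop a sequence file into the shared queue folder."""
    path = os.path.join(queue_dir, f"{time.time_ns()}_{name}.txt")
    with open(path, "w") as f:
        f.write(waveform_text)
    return path

def pop_oldest(queue_dir: str):
    """Control side: take the oldest pending sequence (FIFO)."""
    pending = sorted(os.listdir(queue_dir))  # timestamp prefix => FIFO order
    if not pending:
        return None
    path = os.path.join(queue_dir, pending[0])
    with open(path) as f:
        text = f.read()
    os.remove(path)
    return pending[0], text

with tempfile.TemporaryDirectory() as q:
    submit(q, "ramp_A", "t,P1,P2,I\n0,1.0,1.0,2.0\n10,0.1,0.2,0.0")
    submit(q, "ramp_B", "t,P1,P2,I\n0,1.0,1.0,2.0\n8,0.05,0.1,0.0")
    name, _ = pop_oldest(q)
    print(name)
```

Prefixing each file name with a submission timestamp makes a plain lexicographic sort equivalent to first-in-first-out ordering, so no separate queue index needs to be maintained.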
For each user-accessible parameter, the parameter values can be defined at any
desired time, while values between these times are linearly interpolated at
the hardware level. Therefore the effective temporal resolution of the
waveforms can be controlled by the user, and the total number of
parameter/time pairs that can be used is ultimately limited by the memory of
our FPGA.
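The interpolation step can be sketched as follows; the hardware sample rate and the ramp values are illustrative, and in practice the resolution is set by the FPGA clock and memory:

```python
import numpy as np

# Sketch of the hardware-level interpolation: the user supplies sparse
# (time, value) pairs and the output is sampled on a dense hardware clock.

def expand_waveform(times, values, sample_rate=1000.0):
    """Linearly interpolate sparse parameter/time pairs onto hardware samples."""
    t_hw = np.arange(times[0], times[-1], 1.0 / sample_rate)
    return t_hw, np.interp(t_hw, times, values)

# A three-point evaporation ramp of a dipole-beam power (arbitrary units):
t, p = expand_waveform([0.0, 2.0, 6.0], [1.0, 0.4, 0.05])
print(len(t), round(p[1000], 3))   # sample count and the value at t = 1.0 s
```

Adding more parameter/time pairs refines only the segments the user cares about, which is exactly how the effective temporal resolution stays under user control.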
When a given sequence was ready to be run, the relevant experimental sequences were generated by reading in the waveforms from the text files generated by
the expert user. The experiment was then run and the resulting image was
analysed. From this image the BEC atom number was extracted and returned to
the expert user through the same cloud drive, again as a text file. This atom
number was read in by the expert user and served as the cost parameter closing
the optimisation loop.
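The closed loop can be sketched as follows. The quadratic toy model and the simple pattern search below are stand-ins: the actual work used the dCRAB algorithm on the user side and the real experiment as the cost evaluator, with parameters exchanged as text files as described above.

```python
# Hedged sketch of the closed loop: a remote algorithm proposes ramp
# parameters, the "experiment" returns a BEC atom number, and the loop
# keeps whichever sequence produced more atoms.

def run_experiment(params):
    """Stand-in for submitting a sequence and reading back the atom number."""
    p_final, i_final = params
    return 1e5 - 4e5 * (p_final - 0.05) ** 2 - 2e5 * (i_final - 0.1) ** 2

def optimise(start=(0.5, 0.5), step=0.1, n_iter=60):
    p = list(start)
    best = run_experiment(p)
    for _ in range(n_iter):
        improved = False
        for i in (0, 1):               # one coordinate at a time
            for d in (step, -step):
                trial = list(p)
                trial[i] += d
                n = run_experiment(trial)
                if n > best:           # keep the sequence that made more atoms
                    p, best, improved = trial, n, True
        if not improved:
            step /= 2                  # narrow the search, cf. a simplex shrink
    return p, best

params, atoms = optimise()
print(round(params[0], 3), round(params[1], 3))
```

Because the algorithm lives entirely on the user side and only exchanges parameter sets and scalar results, the control system itself stays algorithm-agnostic, as noted above.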
### 4.2 Multi-user remote control
Figure 2: A screenshot of the interface used in the _Alice Challenge_,
showing (a) the spline editor used to create the ramps of the laser powers and
coil current, shown on a logarithmic scale, (b) top score list, (c) latest
executed sequences, and (d) the control buttons, including the estimated wait
time until the submitted sequence is returned.
In the second implementation, called the Alice Challenge, citizen scientists
were given access to the experiment. This subsection details the architectural
considerations required for the challenge as well as some statistics on user
load in real time over the course of the challenge. This information is useful
when considering the future implementation of similar systems.
Citizen scientists were given access to the system via a gamified interface as
shown in Fig. 2, and this is used to provide more intuitive access to the
parameter space. The interface was designed to visualise the ramps of the
laser powers and coil currents sent by the citizen scientists to the
experiment. The control values were normalised and presented for ease of use
on a logarithmic axis in a spline editor where the user could manipulate the
curves by clicking-and-dragging points along the curve. When the user was done
editing the curves, the sequence was submitted and subsequently realised in
the experiment.
This was done in the following manner: The user sequence (encoded as a JSON
file with a unique user ID) was delivered to a web server. The web server then
delivered the sequence to the cloud folder that served as the queue. When a
sequence was ready to be evaluated, it was sent to the LabVIEW control system,
where the JSON data was translated into waveform data identical to the type
used in the single expert user configuration. This was done via a special
_optimisation class_ defined in LabVIEW that was responsible for extracting
the relevant parameters from the JSON file. Once a sequence was completed, the
control program wrote the results to another JSON file, inserted the relevant
user ID, and stored it in a separate folder on the cloud. The webserver then
delivered the results, and the backend of the game interface scaled the BEC
atom number to a _score_ which was displayed to the user. The score and
corresponding sequence was also visible for other players who could then copy
the sequence as inspiration when creating their own sequences.
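The round-trip could look roughly as follows; the JSON field names and the score scaling are assumptions for illustration, not the actual Alice Challenge format:

```python
import json

# Illustrative sketch of the JSON round-trip described above.

def submission_to_waveforms(payload: str):
    """Web-server -> control-system step: unpack a user sequence."""
    msg = json.loads(payload)
    return msg["user_id"], msg["ramps"]      # ramps: {channel: [[t, v], ...]}

def result_to_score(user_id: str, bec_atom_number: float,
                    full_marks=50000.0) -> str:
    """Control-system -> web-server step: attach the ID and scale a score."""
    score = round(100.0 * min(bec_atom_number / full_marks, 1.0), 1)
    return json.dumps({"user_id": user_id, "score": score})

payload = json.dumps({
    "user_id": "player-42",
    "ramps": {"dipole_1": [[0.0, 1.0], [4.0, 0.08]],
              "coil_current": [[0.0, 2.0], [4.0, 0.0]]},
})
uid, ramps = submission_to_waveforms(payload)
print(result_to_score(uid, 31000.0))
```

Carrying the user ID through both directions of the exchange is what lets the web server route each result back to the player who submitted the sequence.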
Figure 3: Schematic view of the data flow for the remote control of the
experiment by multiple, asynchronous outside users during the _Alice
Challenge_. Experimental sequences are submitted through a game-like user
interface to a web server that subsequently sends them to the cloud-based
queue in the order they are received (here, User A has submitted their
sequence first). Each submission has a unique User ID that is tracked
throughout the process. The control system reads the oldest files via the FIFO
principle and runs the corresponding experiment. When image analysis is
completed, the results are returned to the proper user via the UI.
In contrast to the case of a single user, the web server was needed to track
the run number and user ID if multiple users were running the experiment
simultaneously. A schematic view of the multi-user data handling
infrastructure used for the Alice Challenge is presented in Fig. 3. To ensure
that the result of the experiment was linked to the right user sequence a
check was made in the experimental control system such that the experimental
sequence was repeated if no result was returned for a given run.
Moreover, the state of the experiment was checked by inserting an established
benchmark sequence in the queue every tenth run. This benchmark sequence was
known to create a BEC under stable experimental conditions. In the case of a
problem, such as a laser failure, the experiment was paused until the problem
was solved. At the same time, the users were informed of the temporary delay
caused by the disturbance. The benchmark sequence was also executed in case of
an empty queue in order to keep the experiment in a stable condition. This
also allows one to track overall experimental drifts, e.g. due to thermal
fluctuations, which can be useful in advanced closed-loop optimisation
schemes.
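The scheduling policy described above can be sketched as follows; the exact run-counting logic is an assumption consistent with the text:

```python
# Sketch of the run scheduler: every tenth run (and any idle cycle)
# executes a known-good benchmark sequence to monitor the experiment.

BENCHMARK = "benchmark_sequence"

def next_sequence(queue, run_counter):
    """Return the sequence for the next run and the updated run counter."""
    run_counter += 1
    if run_counter % 10 == 0 or not queue:
        return BENCHMARK, run_counter      # health check / idle keep-alive
    return queue.pop(0), run_counter

queue = [f"user_seq_{i}" for i in range(12)]
counter, executed = 0, []
for _ in range(14):
    seq, counter = next_sequence(queue, counter)
    executed.append(seq)
print(executed.count(BENCHMARK))
```

In this trace the benchmark runs twice: once as the tenth run, and once when the queue has drained and the experiment idles.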
Figure 4: The mean rate at which the experimental sequences were performed
during the week in which the experiment was open to non-expert outside users.
The different plateaus arise due to changes in the ramp time whereas the high
run rates are an effect of synchronisation problems. The green shading denotes
the time period depicted in Fig. 5. In the inset the accumulated number of
unique citizen scientists that used the system throughout the week is shown.
The date markers indicate midnight CET on a given day.
The experiment was open to the public for a full week with only minor
interruptions, resulting in a total of $7577$ submitted and evaluated sequences.
Figure 4 shows the rate of the experimental runs during this week. Over the
course of the challenge, the preset duration of the evaporation ramps were
varied. This allowed the citizen scientists to explore different optimisation
landscapes, varying the challenge offered to them and keeping things
interesting for returning users.
The different evaporation durations create some variation of the rate at which
experiments were performed over the course of the week. In addition, when
experimental problems caused the experiment to be paused, the rate decreased.
It should also be noted that the peaks of high run rates were caused by
synchronisation problems between the web server and the control program. This
problem was solved on the third day of the challenge, after which none of the
larger peaks are visible. The inset shows the progress of the accumulated
number of unique users throughout the week.
Throughout the day the number of active users varied as players from several
parts of the world came online. Figure 5 shows the queue and number of active
users on Friday evening (CET), one of the highest peaks in active users. Here
we see that up to 15 unique users were active at any given time during the
evening which created a wait time of above one hour. As the number of users
declined, the length of the queue was slowly reduced. Since each user could
submit several different sequences at a time, the correlation between the
number of unique users and the queue length is nonlinear.
Figure 5: The queue time and number of active users during the busiest period
of the challenge. The blue trace shows the calculated waiting time from a
submission of a sequence until a result is given and the green trace shows the
number of active users at any given point.
Figure 6 shows a histogram of how many times a given BEC number was achieved.
We see that most sequences submitted to the experiment result in the creation
of a BEC. This is despite the fact that citizen scientists had limited insight
to the physical system they were controlling to create the condensates.
Figure 6: A histogram of how many times a given BEC atom number was obtained
by the sequences submitted by the users of the Alice challenge. Above $73\%$
of the submitted sequences created a BEC. This data is also presented in Ref.
[6].
## 5 Outlook
In future work, remote controlled optimisation of a system may be
advantageous, as remote optimisation allows for easy implementation of
advanced optimisation algorithms. Several programs are available that can
implement closed-loop optimisation of cold-atom experiments [15, 16, 17, 18,
19]. Students can also access such systems for educational purposes, as has
already been done with quantum computers [13]. This allows students to explore
complex, cutting-edge research systems that are not accessible in many
educational learning laboratories.
For remote users to be able to optimise the experiment, the relevant
experimental control parameters have to be easily controllable. Collaborative
optimisation between several remote users requires a structure that includes a
multi-user queue and tracking the ID of submitted sequences so that the
results may be returned to the correct user. The control program presented
here can be expanded to give remote access to larger parts of the control
sequence or even the entire experiment. Thus, future work will give remote
users expanded access, allowing them to tackle more advanced scientific
problems in a research or educational setting. For example, with the new
capabilities of the experiment to image single atoms using a quantum gas
microscope [30] in combination with spin addressing [31] and arbitrary light
potential generation techniques [32] the experiment can be used as an analog
quantum simulator with remote control capability.
Such advanced control will require a more complex user interface since the
number of experimental parameters would increase, rendering algorithmic
optimisation more difficult. However other groups have shown that optimisation
of such systems with large parameter spaces is possible using genetic
algorithms [33, 34, 35] or machine learning methods such as neural networks
[36], Gaussian processes [37] or evolutionary optimisation [38].
## 6 Acknowledgements
The authors would like to acknowledge Aske Thorsen for the development of the
LabVIEW control code used for the experiments described here.
## References
* [1] Gross C and Bloch I 2017 Science 357 995–1001 ISSN 0036-8075, 1095-9203
* [2] Schäfer F, Fukuhara T, Sugawa S, Takasu Y and Takahashi Y 2020 Nature Reviews Physics 2 411–425 ISSN 2522-5820
* [3] Hofstetter W and Qin T 2018 Journal of Physics B: Atomic, Molecular and Optical Physics 51 082001 ISSN 0953-4075, 1361-6455
* [4] Saffman M 2016 Journal of Physics B: Atomic, Molecular and Optical Physics 49 202001 ISSN 0953-4075, 1361-6455
* [5] Deutsch I H 2020 PRX Quantum 1 020101 ISSN 2691-3399
* [6] Heck R, Vuculescu O, Sørensen J J, Zoller J, Andreasen M G, Bason M G, Ejlertsen P, Elíasson O, Haikka P, Laustsen J S, Nielsen L L, Mao A, Müller R, Napolitano M, Pedersen M K, Thorsen A R, Bergenholtz C, Calarco T, Montangero S and Sherson J F 2018 Proceedings of the National Academy of Sciences 115 E11231–E11237 ISSN 0027-8424, 1091-6490
* [7] IBM Quantum Experience https://quantum-computing.ibm.com/
* [8] Berta M, Wehner S and Wilde M M 2016 New Journal of Physics 18 073004 ISSN 1367-2630
* [9] Dumitrescu E F, McCaskey A J, Hagen G, Jansen G R, Morris T D, Papenbrock T, Pooser R C, Dean D J and Lougovski P 2018 Physical Review Letters 120 210501 ISSN 0031-9007, 1079-7114
* [10] Wang Y, Li Y, Yin Z q and Zeng B 2018 npj Quantum Information 4 46 ISSN 2056-6387
* [11] Wootton J R and Loss D 2018 Physical Review A 97 052313 ISSN 2469-9926, 2469-9934
* [12] Zulehner A, Paler A and Wille R 2019 IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 38 1226–1236 ISSN 0278-0070, 1937-4151
* [13] Wootton J R, Harkins F, Bronn N T, Vazquez A C, Phan A and Asfaw A T 2020 arXiv:2012.09629 [physics, physics:quant-ph] (Preprint 2012.09629)
* [14] Wootton J R 2017 Proceedings of the Second Gamification and Serious Games Symposium 2 63–64 ISSN 2297-914X
* [15] Owen S F and Hall D S 2004 Review of Scientific Instruments 75 259–265 ISSN 0034-6748, 1089-7623
* [16] Gaskell P E, Thorn J J, Alba S and Steck D A 2009 Review of Scientific Instruments 80 115103 ISSN 0034-6748, 1089-7623
* [17] Starkey P T, Billington C J, Johnstone S P, Jasperse M, Helmerson K, Turner L D and Anderson R P 2013 Review of Scientific Instruments 84 085111 ISSN 0034-6748, 1089-7623
* [18] Keshet A and Ketterle W 2013 Review of Scientific Instruments 84 015105 ISSN 0034-6748, 1089-7623 (Preprint 1208.2607)
* [19] Perego E, Pomponio M, Detti A, Duca L, Sias C and Calosso C E 2018 Review of Scientific Instruments 89 113116 ISSN 0034-6748, 1089-7623
* [20] ColdQuanta control system, www.coldquanta.com
* [21] MLabs ARTIQ https://m-labs.hk/
* [22] ADwin https://www.adwin.de/
* [23] IQ Technologies A Laboratory Control System for Cold Atom Experiments http://www.iq-technologies.net/projects/pc/024/
* [24] ColdQuanta Albert: Quantum matter on the cloud https://www.coldquanta.com/albertcloud/
* [25] Bason M G, Heck R, Napolitano M, Elíasson O, Müller R, Thorsen A, Zhang W Z, Arlt J J and Sherson J F 2018 Journal of Physics B: Atomic, Molecular and Optical Physics 51 175301 ISSN 0953-4075, 1361-6455
* [26] Elíasson O, Heck R, Laustsen J S, Napolitano M, Müller R, Bason M G, Arlt J J and Sherson J F 2019 Journal of Physics B: Atomic, Molecular and Optical Physics 52 075003 ISSN 0953-4075
* [27] Grimm R, Weidemüller M and Ovchinnikov Y B 2000 Optical Dipole Traps for Neutral Atoms Advances In Atomic, Molecular, and Optical Physics vol 42 (Elsevier) pp 95–170 ISBN 978-0-12-003842-8
* [28] Lin Y J, Perry A R, Compton R, Spielman I B and Porto J 2009 Physical Review A 79 063631 ISSN 1050-2947
* [29] Rach N, Müller M M, Calarco T and Montangero S 2015 Physical Review A 92 062343
* [30] Elíasson O, Laustsen J S, Heck R, Müller R, Weidner C A, Arlt J J and Sherson J F 2020 arXiv:1912.03079 [cond-mat, physics:quant-ph] (Preprint 1912.03079)
* [31] Weitenberg C, Endres M, Sherson J F, Cheneau M, Schauß P, Fukuhara T, Bloch I and Kuhr S 2011 Nature 471 319–24 ISSN 1476-4687
* [32] Chiu C S, Ji G, Mazurenko A, Greif D and Greiner M 2018 Physical Review Letters 120 243201
* [33] Rohringer W, Fischer D, Trupke M, Schmiedmayer J and Schumm T 2011 Stochastic Optimization of Bose-Einstein Condensation Using a Genetic Algorithm Stochastic Optimization - Seeing the Optimal for the Uncertain ed Dritsas I (InTech) ISBN 978-953-307-829-8
* [34] Lausch T, Hohmann M, Kindermann F, Mayer D, Schmidt F and Widera A 2016 Applied Physics B 122
* [35] Geisel I, Cordes K, Mahnke J, Jöllenbeck S, Ostermann J, Arlt J J, Ertmer W and Klempt C 2013 Applied Physics Letters 102 ISSN 00036951
* [36] Tranter A D, Slatyer H J, Hush M R, Leung A C, Everett J L, Paul K V, Vernaz-Gris P, Lam P K, Buchler B C and Campbell G T 2018 Nature Communications 9 4360 ISSN 2041-1723
* [37] Wigley P B, Everitt P J, van den Hengel A, Bastian J W, Sooriyabandara M A, McDonald G D, Hardman K S, Quinlivan C D, Manju P, Kuhn C C N, Petersen I R, Luiten A N, Hope J J, Robins N P and Hush M R 2016 Scientific Reports 6 25890
* [38] Barker A J, Style H, Luksch K, Sunami S, Garrick D, Hill F, Foot C J and Bentine E 2020 Machine Learning: Science and Technology 1 015007 ISSN 2632-2153
# The field theoretical ABC of epidemic dynamics
Giacomo Cacciapaglia, Corentin Cot, Stefan Hohenegger, Shahram Vatani
Institut de Physique des 2 Infinis (IP2I), CNRS/IN2P3, UMR5822, 69622
Villeurbanne, France
Université de Lyon, Université Claude Bernard Lyon 1, 69001 Lyon, France
Michele Della Morte IMADA & CP3-Origins. Univ. of Southern Denmark, Campusvej
55, DK-5230 Odense, Denmark Francesco Sannino Scuola Superiore Meridionale,
Largo S. Marcellino, 10, 80138 Napoli NA, Italy
CP3-Origins and D-IAS, Univ. of Southern Denmark, Campusvej 55, DK-5230
Odense, Denmark
Dipartimento di Fisica, E. Pancini, Univ. di Napoli, Federico II and INFN
sezione di Napoli
Complesso Universitario di Monte S. Angelo Edificio 6, via Cintia, 80126
Napoli, Italy
###### Abstract
Infectious diseases are a threat for human health with tremendous impact on
our society at large. They are events that recur with a frequency that is
growing with the exponential increase in the world population and growth of
the human ecological footprint. The latter causes a frequent spillover of
transmissible diseases from wildlife to humans. The recent COVID-19 pandemic,
caused by the SARS-CoV-2, is the latest example of a highly infectious disease
that, since late 2019, is ravaging the globe with a huge toll in terms of
human lives and socio-economic impact. It is therefore imperative to develop
efficient mathematical models, able to substantially curb the damages of a
pandemic by unveiling disease spreading dynamics and symmetries. This will
help inform (non)-pharmaceutical prevention strategies. It is for the reasons
above that we decided to write this report. It goes at the heart of
mathematical modelling of infectious disease diffusion by simultaneously
investigating the underlying microscopic dynamics in terms of percolation
models, effective description via compartmental models and the employment of
temporal symmetries naturally encoded in the mathematical language of critical
phenomena. Our report reviews these approaches and determines their common
denominators, relevant for theoretical epidemiology and its link to important
concepts in theoretical physics. We show that the different frameworks exhibit
common features such as criticality and self-similarity under time rescaling.
These features are naturally encoded within the unifying field theoretical
approach. The latter leads to an efficient description of the time evolution
of the disease via a framework in which (near) time-dilation invariance is
explicitly realised. As important test of the relevance of symmetries we show
how to mathematically account for observed phenomena such as multi-wave
dynamics. Although we consider the COVID-19 pandemic as an explicit
phenomenological application, the models presented here are of immediate
relevance for different realms of scientific enquiry from medical applications
to the understanding of human behaviour. Our review offers novel perspectives
on how to model, capture, organise and understand epidemiological data and
disease dynamics for modelling real-world phenomena, and helps devising public
health and socio-economics strategies.
###### keywords:
epidemiology , field theory
###### MSC:
[2021] 92D30
††journal: Physics Reports
###### Contents
1. 1 Introduction
1. 1.1 Historical Overview
2. 1.2 Current approaches to epidemiology
3. 1.3 Relating different scales in Field Theory
4. 1.4 Organisation of the Review
2. 2 Percolation Approach
1. 2.1 Lattice and Percolation Models
2. 2.2 Numerical Simulations and Criticality
1. 2.2.1 The principle
2. 2.2.2 Results
3. 2.3 Master Action and Field Theory
4. 2.4 Relation to Compartmental Models
3. 3 Compartmental Models
1. 3.1 SIR(S) Model, Basic Definitions
2. 3.2 Numerical Solutions and their Qualitative Properties
3. 3.3 From Lattice to SIR
4. 3.4 Parametric Solution of the Classical SIR Model
5. 3.5 Generalisations of the SIR Model
1. 3.5.1 Time Dependent Infection and Recovery Rates
2. 3.5.2 Spontaneous Creation and Multiple Waves
3. 3.5.3 Heterogeneous Transmission Rates and Superspreaders
6. 3.6 The SIR model as a set of Renormalisation Group Equations
1. 3.6.1 Beta Function
2. 3.6.2 Connection between SIR models and the eRG approach
7. 3.7 Analytic Solution during a Linear Growth Phase
1. 3.7.1 Simplified SIR Model with Constant New Infections
2. 3.7.2 Vanishing $\zeta$ and Constant $\epsilon$
3. 3.7.3 Constant Active Number of Infectious Individuals
4. 4 Epidemic Renormalisation Group
1. 4.1 Beta Function and Asymptotic Fixed Points
1. 4.1.1 Generalisation to multiple regions
2. 4.2 Complex (fixed point) epidemic Renormalisation Group
3. 4.3 Modelling multi-wave patterns
5. 5 COVID-19
6. 6 Outlook and Conclusions
## 1 Introduction
Infectious diseases that can efficiently spread across the human population
and cause a pandemic have always been a threat to humanity. This menace has
been growing with the increase in the population and the progressive
destruction of the wild environment with its impact on wildlife. The last
century has been affected by, at least, three major worldwide pandemics: the
“Spanish” influenza of 1918-1920 [1], HIV/AIDS [2, 3] and the most recent
COVID-19 that started at the end of 2019. Understanding in a mathematically
consistent way the diffusion of a pandemic is of paramount importance in
designing effective policies capable of curbing and limiting its diffusion and
its impact in terms of loss of life and economic damage. In this report we will review
some crucial aspects of the mathematical modelling, ranging from the
microscopic mechanisms encoded in diffusion models, to approaches based on
symmetries. In this discussion, the application of field theory and other
concepts borrowed from theoretical physics will play a crucial role.
The dynamics of physical phenomena, from the fundamental laws of nature to
quantum and ordinary matter phase transitions, even including protein
behaviour, is well captured by effective descriptions in terms of fields and
their interactions. Given the enormous success of the field theoretical
interpretation of physical phenomena, it is highly interesting to review
several main mathematical models employed to describe the diffusion of
infectious diseases and show how the different approaches are related within
the field theoretical framework. We will show that the models exhibit common
features, such as criticality and self-similarity under time rescaling. These
features are naturally encoded within the unifying field theoretical approach.
The latter yields an efficient description of the time evolution of the
disease via a framework in which (near) time-dilation invariance is explicitly
realised. The models are extended to account for observed phenomena such as
multi-wave dynamics. Because of the immediacy of the COVID-19 pandemic and the
high quality data available, we use it as an explicit and relevant
phenomenological test of the models and their effectiveness. It should be
clear, however, that the methodologies presented here are relevant for any
infectious disease, and can be extended to different realms of scientific
enquiry, from medical applications to the understanding of human behaviour.
We will complete this introduction with a historical overview of the
mathematical modelling applied to infectious diseases, the contemporary
applications and the role of field theory concepts, before offering a summary
of the main body of the review.
### 1.1 Historical Overview
The first application of mathematical modelling to an epidemiological process
is the work of Daniel Bernoulli [4] on the effectiveness of an inoculation
against smallpox in 1760. A more systematic application of mathematical
methods to study the spread of infectious diseases occurred after the work of
Robert Koch and Louis Pasteur, which showed that such diseases are caused by
living organisms, triggering the question on how they are passed on from one
individual to another. A related point in this regard is how (and why)
outbreaks and epidemics end. As outlined in [5], there are two prevalent
hypotheses:
* 1.
Farr’s hypothesis (mostly based on the work of W. Farr in 1866 [6]): epidemics
stop because the potency of the microorganisms decreases with every new
individual that is infected.
* 2.
Snow’s hypothesis (mostly based on the work of J. Snow in 1853 [7]): epidemics
end due to a lack of sufficient available new individuals to infect (the
disease runs out of “fuel”).
In view of closer studies of actual data stemming from outbreaks of
communicable diseases, Farr’s hypothesis was gradually dropped from the
scientific discussion. Moreover, the focus of research shifted towards
explaining regularities of observed _epidemic curves_. A first discovery along
these lines can be found in the work of W.H. Hamer [8, 9, 10, 11], who
described the biennial period of measles outbreaks in London and implicitly
[5] introduced the concept of mass-action law into epidemiology. The latter
was firmly established in the pioneering works of Sir R. Ross [12, 13, 14, 15]
and A.G. McKendrick [16, 17, 18]. Specifically, in a model of discretised time
(with time steps $\delta t$) such as in [12], the mass action law can be
formulated as follows:
$\displaystyle\text{number of cases at }t+\delta t\propto(\text{number of cases at }t)\times(\text{susceptibles at }t)\,.$
In models with a continuous time variable (as in later works of Ross and
notably McKendrick), the model can be formulated in terms of differential
equations, including additional contributions capturing the population
dynamics due to recovery from the disease, birth, death, migration, _etc._
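As a minimal numerical sketch of the discrete-time mass-action law above (the rate constant `beta`, the time step `dt` and the initial numbers are illustrative assumptions, not values from the text):

```python
# Minimal sketch of the discrete-time mass-action law:
# new cases in [t, t + dt] are proportional to
# (number of cases at t) x (susceptibles at t).
def mass_action_step(cases, susceptibles, beta, dt):
    new_cases = beta * cases * susceptibles * dt
    # newly infected individuals are moved out of the susceptible pool
    return cases + new_cases, susceptibles - new_cases

cases, susceptibles = 10.0, 990.0   # illustrative initial condition
beta, dt = 5e-4, 1.0                # illustrative rate and time step
for _ in range(5):
    cases, susceptibles = mass_action_step(cases, susceptibles, beta, dt)
```

In this pure mass-action sketch there is no removal term, so the number of cases grows monotonically while the total `cases + susceptibles` stays fixed; recovery, birth, death and migration enter as the additional contributions mentioned in the text.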
Credit for the so-called SIR model, still widely used today (and which we
review in Section 3.1), is given to the work by W.O. Kermack and A.G.
McKendrick in 1927 [19]. The basic idea behind models of this type is that the
disease is passed on among individuals in the form of happenings or
collisions, in analogy to how reactions work in chemistry. This led to
numerous more refined models, see for example the reviews [20, 21, 22, 23, 24,
25, 26, 27, 28], including the historical overview in [29].
In the second half of the 20th century, progress in different disciplines
influenced epidemiological investigations. It was understood that, to describe
(and combat) large scale outbreaks such as HIV/AIDS, human behaviour plays a
crucial role in modulating the spreading of the virus (_e.g._ [30, 31]).
Thereby, mathematical modelling started going beyond models inspired by basic
chemical reactions. The appearance of a large number of reviews and books on
epidemiological modelling is testimony to the depth and complexity of the
analysis as well as the interdisciplinary attention this topic has received,
_e.g._ [32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48,
49, 50, 51, 52, 53, 54, 55, 56, 57]. Besides influences and contributions from
pure mathematics, chemistry and social sciences, certain features and
symmetries of epidemic curves are similar to those found in particular
physical systems. This has led to novel approaches rooted in the physics of
critical phenomena and phase transitions, such as _percolation models_ [58,
59] and their relation to _(scale invariant) field theories_. As we shall
review in Section 2.1, there are various types of percolation models. Here we
shall define them simply as collections of points in a given space, some of
which can be linked pairwise. The sets of points that are linked to each
other are termed clusters and the spread of such clusters (following certain
pre-determined rules) can be used to model the spread of a disease within a
given population. In particular, the transition from finite sized clusters to
the percolation phase (where all points are linked together) is a phase
transition. This important feature allows percolation models to be organised
in terms of universality classes and even to put them in correspondence to
other physical systems. This property is useful to determine important
physical quantities. The first attempts appeared in [60] and their relation to
phase transitions was pointed out in [61], while excellent reviews on more
complicated models can be found in [62, 63, 64, 65, 66, 67, 68, 69, 70, 71].
Direct formulations of percolation models in terms of field theories follow
the approach of M. Doi and L. Peliti [72, 73, 74], which have been reviewed,
for example, in [75]. Further work in this direction (notably the work by J.
Cardy and P. Grassberger [76] and its relation to models in particle physics
[77, 78]) shall be reviewed in Section 2.1.
### 1.2 Current approaches to epidemiology
The aim of our work is to summarise, review and connect various current
approaches to understand and model the time evolution of pandemics. From the
brief historical analysis of the previous subsection it is clear that, over
the course of almost a century, many different approaches have been developed.
Classifying them is often difficult. From a mathematical perspective one can
distinguish stochastic and deterministic approaches, based on how the basic
fundamental (microscopic) processes of the transmission and development of the
disease are modelled: all epidemiological models generally assume that newly
infected individuals can appear when an uninfected one (usually called a
_susceptible_ individual) comes in contact with an _infectious_ individual
such that the disease is passed on. After some time, infected individuals may
turn non-infectious (at least temporarily) via recovering or dying from the
disease or by some other means of _removal_ from the actively involved
population. Mathematically speaking, these processes can be modelled in two
different fashions:
* 1.
Stochastic approach: all (microscopic) processes between individuals are of a
probabilistic nature. For instance, the contact between a susceptible and an
infectious individual has a certain probability to lead to an infection of the
former; infected individuals have a certain probability of removal after a
certain time; _etc_. In these approaches, time is understood as a discrete
variable and time-evolution is typically described in the form of
differential-difference equations (called _master equations_). The solutions
depend on a set of probabilities (_e.g._ the probability of a contact among
individuals leading to an infection), geometric parameters (such as the number
of ’neighbouring’ individuals that a single infectious individual can
potentially infect) as well as the initial conditions. Furthermore, in order
to make predictions or to compare with deterministic approaches, some sort of
averaging process is required.
* 2.
Deterministic approach: the time evolution of the number of susceptible,
infectious and removed individuals is understood as a fully predictable
process and is typically described through systems of coupled, ordinary
differential equations in time (the latter is understood as a continuous
variable). Solutions of these systems are therefore determined by certain
parameters (such as infection and recovery rates) as well as initial
conditions (_e.g._ the number of infectious individuals at the outbreak of the
disease).
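The distinction between the two fashions can be made concrete in a minimal sketch of a single infection step (the probability `p_inf`, the rate `beta` and all numbers are illustrative assumptions, and a single explicit Euler step stands in for a generic ODE integrator):

```python
import random

# Stochastic fashion: each of the I infectious individuals passes the disease
# on with probability p_inf during one discrete time step.
def stochastic_step(S, I, p_inf, rng=random):
    attempts = sum(1 for _ in range(I) if rng.random() < p_inf)
    new_infections = min(attempts, S)  # cannot infect more than S susceptibles
    return S - new_infections, I + new_infections

# Deterministic fashion: the same process as a coupled rate equation in
# continuous time, integrated here with a single explicit Euler step.
def deterministic_step(S, I, beta, N, dt):
    dI = beta * S * I / N * dt
    return S - dI, I + dI
```

Averaging the stochastic step over many realisations reproduces the deterministic rate equation for a large population, which is the sense in which the two approaches are equivalent.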
In this review, we prefer to think of this classification in a somewhat
different (but equivalent) fashion, which (as we shall explain) is closer to
the concept of (energy) _scale_ in particle physics. Indeed, we prefer to
think of models as ranging from microscopic models, in which fundamental
interactions (_i.e._ at the level of individuals) are explicitly modelled, to
more and more macroscopic approaches, in which the microscopic interactions
have been (at least partially) included into the interactions of new,
effective degrees of freedom. A basic overview, with concrete models, is given
in Figure 1: models in the left part of the diagram (red box) incorporate many
details of how the disease spreads at a microscopic level, _i.e._ between
single individuals. These models are mostly of a stochastic nature, using
probabilistic means to simulate the spread of the disease. As we shall
explain, many of them are inspired by chemical models, in which a random
movement of molecules is considered, with collisions leading with a certain
probability to a chemical reaction (and the creation of new molecules). The
models further to the right of the diagram (blue box) are more macroscopic, in
the sense that they no longer model individual interactions (_i.e._ the spread
from one person to the next), but rather describe the time evolution of the
disease in a larger population (_e.g._ an entire country). While,
historically, the oldest models that have been developed to describe the
spread of an infectious disease are in this category, many of them can be
obtained from more microscopic approaches (_e.g._ percolation models) through
a ’replacement’ of the degrees of freedom of the latter by more macroscopic
ones. This can happen, for example, via a _mean field approximation_ or via
certain averaging procedures or by describing the spread of the disease
through suitable flow equations. The resulting models are mostly of a
deterministic nature, but can retain stochastic elements.
Besides the explicit models and approaches listed in Figure 1 (some of which
we shall review in the main part of this article), there are also data- and
computer-driven approaches [79, 80]. These generally use machine learning
(also called statistical learning) tools to analyse existing data with the
goal of finding patterns and predicting the future development of pandemics.
On the one hand, these approaches use the large advances in computer
technology (in particular the development of artificial intelligence). On the
other hand, they are made viable in recent years due to the dramatic
increase in the volume and quality of available data on the spread and
development (_e.g._ its genetic mutations) of diseases in a large population.
This allows data-driven approaches to be applied at any level, ranging from
analysing microscopic interactions (see _e.g._ [81]) to more effective
descriptions that only aim at predicting ’global’ key statistics of epidemics
[82, 83]. Since the current review is aimed at studying field theoretic tools
in epidemiology, we shall not discuss these methods here. However, we point
out a number of excellent review articles [84] in the literature. Another
class of models we will not discuss utilises complex networks to include the
effect of human behaviour [85].
Microscopic approaches on the left side of Figure 1 generally utilise
first principles, albeit at the expense of a lack of symmetries (usually also
entailing a large computational cost). Effective theories on the right side of
the graph are, usually, less intuitive (since basic interactions of the
disease enter into a less obvious manner). However, they incorporate basic
symmetries that appear in the solutions of the microscopic models – in the
sense of making them manifest – typically also leading to more streamlined and
less expensive computations. Here is an incomplete list of the symmetries at
the base of these approaches:
[Figure 1 content: microscopic models (lattice models, percolation models, random walks, diffusion models, (epidemic) field theories, network models) are connected to macroscopic models (compartmental models, epidemic Renormalisation Group) through an effective description, in which microscopic degrees of freedom are replaced by more appropriate effective ones via mean field approximations, averaging, beta-functions, flow equations, etc. Microscopic models are based on ’first principles’, with the basic properties of the disease and the way it spreads as input, but their symmetries are not manifest. Macroscopic models are based on manifest symmetries and are computationally simpler, but the modelling requires more intuition about the system and/or data, with ’effective’ properties of the disease in a specific population as input.]
Figure 1: Schematic overview of different approaches to describe the time
evolution of pandemics and their relation to field theoretical methods.
1. _(i)_
_Criticality:_ depending on the parameters of the model and the starting
conditions, solutions of microscopic models feature either a quick eradication
of the disease, where the total cumulative number of infected individuals
remains relatively low, or a fast and widespread diffusion of the disease,
leading to a much larger total number of infected. Which of these two classes
of solutions is realised is usually governed by a single ordering parameter
(_e.g._ the average number of susceptible individuals infected by a single
infectious individual, also known as the _reproduction number_ $R_{0}$), and the transition
from one type to the other can be very sharp.
2. _(ii)_
_Self-similarity and waves:_ depending on the disease in question, solutions
of microscopic models may exhibit distinct phases in their time evolution in
the form of a wave pattern, where phases of exponential growth of the number
of infected individuals are followed by intermediate periods of slow,
approximately linear, growth. Each wave typically looks similar to the
previous and following ones. Furthermore, certain classes of solutions may
also exhibit spatial self-similarities, _i.e._ the solutions describing the
temporal spread of the disease among individuals follow similar patterns as
the spread among larger clusters (_e.g._ cities, countries _etc._).
3. _(iii)_
_Time-scale invariance:_ several microscopic models exhibit a (nearly) time-
scale invariant behaviour, which is a symmetry under rescaling of the time
variable and of the rates (infection, removal, _etc._). If the solution
exhibits a wave-structure, these near-symmetric regions can appear in specific
regimes, _e.g._ in between two periods of exponential growth.
These properties are familiar from field theoretical models in physics, _e.g._
in solid state and high energy physics, which exhibit phase transitions.
Indeed, over the years, it has been demonstrated that the various approaches
mentioned above can be reformulated as (or at least related to) field theoretical
descriptions. The latter are typically no longer sensitive to microscopic
details of the spread of the disease at the level of individuals, but instead
capture _universal_ properties of their solutions. They are therefore an ideal
arena to study properties of the dynamics of diseases and the mechanisms to
counter their spread.
### 1.3 Relating different scales in Field Theory
The dynamics of physical phenomena, ranging from the fundamental laws of
nature to quantum and ordinary matter phase transitions including protein
behaviour, is well captured by effective descriptions in terms of fields and
their interactions. These fields are meant to capture the overall features of
the phenomenon in question, describe the interaction between (elementary)
constituents and even predict the evolution of the system. Once the field
theoretical dynamics is married to underlying approximate or exact symmetries,
it becomes an extremely powerful tool that, in a given range of energy,
provides a faithful representation of the microscopic physics underlying many
phenomena. Zooming in or out of the relevant physical scales involved in the
dynamics of a given process generically requires a modification of the degrees
of freedom needed to describe that specific process. This property is captured
by the renormalisation group (RG) framework [86, 87]. Within this approach, in
order to take into account the change in degrees of freedom, one modifies
(renormalises) the interaction strengths and rescales the fields. In fact, the
idea of scale transformations and scale invariance is ancient, dating back to
the Pythagorean school. The concept was used in the work by Euclid and much
later by Galileo. The idea received renewed popularity towards the end of the
19th century with the idea of enhanced viscosity of O. Reynolds to address
turbulence in fluids [88, 89].
However, the seed-idea of the RG initially started in 1953 with the work of
Ernst Stueckelberg and André Petermann [90]. They noted that the
renormalisation procedure in quantum field theory exhibits a group of
transformations, which acts on parameters that govern basic interactions of
the system, _e.g._ changing the bare couplings in the Lagrangian by including
(counter) terms needed to correct the theory. For example, the application to
quantum electro-dynamics (QED) was elucidated by Murray Gell-Mann and Francis
E. Low in 1954 [91]: this led to the renowned determination of the variation of
the electromagnetic coupling in QED with the energy of the physical processes.
Hence, the basic idea at the heart of the RG approach stems from the property
that, as the scale of the physical process varies, the theory displays a self-
similar behaviour and any scale can be described by the same physics. In
mathematical terms, this property is reproduced by a group transformation
acting on the interaction strengths of the theory. Thanks to Gell-Mann and Low,
a computational method based on a mathematical flow function of the
interaction strength parameter was introduced. This function determines the
differential change of the interaction strength with respect to a small change
in the energy of the process through a differential equation known as the
renormalisation group equation (RGE). Although mainly devised for particle
physics, nowadays its applications extend to solid-state physics, fluid
mechanics, physical cosmology, and even nanotechnology.
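In modern notation, the flow function of Gell-Mann and Low is the beta function, and the renormalisation group equation takes the schematic form (with $g$ the interaction strength and $\mu$ the energy scale of the process):

```latex
\mu \frac{\mathrm{d}g(\mu)}{\mathrm{d}\mu} = \beta\big(g(\mu)\big)\,,
```

so that zeros of $\beta$ correspond to fixed points at which the theory becomes self-similar under changes of scale.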
In certain cases, such as in particle physics, the field theoretical
description can be elevated to the ultimate description of fundamental
interactions if short distance scale invariance occurs. Once scale invariance
is married to relativity, the group of invariance generically enlarges to the
conformal group.
### 1.4 Organisation of the Review
In the following we shall start by presenting examples of microscopic and
effective (respectively deterministic and stochastic) approaches and show how
they can be related to field theoretical models. We start in Section 2 with
analysing the direct percolation approach, which is based on a microscopic
stochastic description of the diffusion processes. We shall see that the
approach, in the mean field approximation, naturally leads to compartmental
models. The latter (as well as generalisations thereof) are reviewed in
Section 3: we commence this investigation with a basic review of the SIR model
and then investigate how to incorporate multi-wave epidemic dynamics paying
particular attention to the inter-wave period. After highlighting further
possible extensions of compartmental models, we finally provide a formulation
of the SIR model in terms of flow equations, which resembles the
$\beta$-function familiar from the RG approach to particle and high-energy
physics.
We use this last result to motivate the most recent approach to epidemic
dynamics, _i.e._ the epidemiological renormalisation group (eRG) [92, 93] in
Section 4. The latter is inspired by the Wilsonian renormalisation group
approach [86, 87] and uses the approximate short and long time dilation
invariance of the system to organise its description. For COVID-19, the
eRG has been shown to be very efficient in describing the epidemic and
pandemic time evolution across the world [94] and in particular when
predicting the emergence of new waves and the interplay across different
regions of the world [95, 96].
The discussion in Sections 2, 3 and 4 is general in the sense that the methods
apply to generic infectious diseases and populations. In Section 5 we consider
particular features of the current ongoing COVID-19 pandemic, and discuss how
the different approaches can be adapted to it.
Several excellent reviews already exist in the literature [97, 85, 98, 32].
Our work complements and integrates them, adds to the literature on the field
theoretical side and further incorporates more recent approaches.
## 2 Percolation Approach
Executive Summary
* 1.
We introduce percolation and lattice models as _stochastic_ approaches to
directly simulate microscopic interactions down to the individual level.
* 2.
The models are characterised through a set of probabilities (related for
example to the infection and recovery rates of individuals) and the geometry
of the system. Time is assumed to be a quantised variable.
* 3.
The approach naturally models the spatial as well as the temporal evolution of
a disease.
* 4.
The models feature a (sharp) _phase transition_ in terms of the asymptotic
infected fraction of the population. The latter is the order parameter of the
system.
* 5.
Compartmental models (see next Section) emerge as a mean field description of
percolation models.
### 2.1 Lattice and Percolation Models
Arguably the most direct way to (theoretically) study the spread of a
communicable disease is via systems that simulate the process of infection at
a microscopic level, _i.e._ at the level of individuals in a (finite)
population. The most immediate such models are lattice simulations, in which
the individuals are represented by the lattice sites on a spatial grid, some
of which may be infected by the disease. These lattice sites can spread the
disease with a certain probability to neighbouring sites, following an
established set of rules. Lattice models, therefore, allow one to track the
spread of the disease in discretised time steps and, after averaging over
several simulations, to make statements about the time evolution (and
asymptotic values) of the number of infected individuals. As we shall see in
the following, even simple models of this type show particular time-scaling
symmetries, as well as criticality (_i.e._ the fact that the asymptotic number
of infected individuals changes rapidly, when a certain parameter of the model
approaches a specific critical value).
A larger class of models that work with a discrete number of individuals (as
well as discretised time) consists of _percolation models_, which, broadly
speaking, consist of points (sites) scattered in space that can be connected by
links. Depending on the specific details, one distinguishes [71]:
* 1.
_Bond percolation models_ : in this case the points are fixed and the links
between them are created randomly. Examples of this type are (regular)
lattices in various spatial dimensions with nearest neighbour sites being
linked.
* 2.
_Site percolation models_ : in this case the position of the points is random,
while the links between different points are created based on rules that
depend on the positions of the points.
More complex models can also incorporate both aspects. An important quantity
to compute in any percolation model is the so-called _pair connectedness_,
_i.e._ the probability that two points are connected to each other (through a
chain of links with other points). Assuming the system to extend infinitely
(_i.e._ there are infinitely many sites), an important distinction is
whether it is made of only local clusters (in which finitely many sites are
connected) or whether it is in a _percolating state_ (where infinitely many
sites are connected). The probability of occurrence of these two situations
usually depends on the value of a single parameter (typically related to the
probability $p$ that a link exists between two ‘neighbouring’ sites), in such
a way that the transition from local connectedness to percolation can be
described as a _phase transition_ (see _e.g._ [61]). The system close to this
critical value $p_{c}$ lies in the same universality class as several other
models in molecular physics, solid state physics and epidemiology: this
implies that the behaviour of certain quantities follows a characteristic
power law behaviour that is the same for all the theories in the same
universality class. For example, the probability $P(p)$ for a system to be in
the percolating state (as a function of $p$) takes the form
$\displaystyle\lim_{p\to p_{c}}P(p)\sim(p-p_{c})^{\nu}\,,$ (2.1)
where $\nu$ is called the _critical exponent_. Models within the same universality
class share the same critical exponents despite the fact that the concrete
details of the theory, in particular the concrete meaning of the quantity $P$
in Eq. (2.1), may be very different. This connection makes percolation models
very versatile and many of them have been studied extensively (see [71] and
references therein).
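As a minimal sketch of these ideas, the spanning (percolation) probability of two-dimensional bond percolation can be estimated by Monte Carlo on a finite square lattice; the lattice size, sample count and the use of top-to-bottom spanning as a finite-size proxy for the percolating state are all illustrative choices:

```python
import random

def spans(L, p, rng):
    """Open each bond of an L x L square lattice with probability p and test
    whether a cluster connects the top row to the bottom row (finite-size
    proxy for the percolating state)."""
    parent = list(range(L * L))

    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for i in range(L):
        for j in range(L):
            site = i * L + j
            if i + 1 < L and rng.random() < p:  # bond to the site below
                union(site, site + L)
            if j + 1 < L and rng.random() < p:  # bond to the site to the right
                union(site, site + 1)
    top_roots = {find(j) for j in range(L)}
    return any(find((L - 1) * L + j) in top_roots for j in range(L))

def spanning_probability(L, p, samples, seed=0):
    rng = random.Random(seed)
    return sum(spans(L, p, rng) for _ in range(samples)) / samples
```

For bond percolation on the square lattice the critical value is known exactly, $p_{c}=1/2$, and the estimated spanning probability jumps from near 0 to near 1 as $p$ crosses $p_{c}$, ever more sharply as $L$ grows.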
In the following, we shall first present a simple lattice simulation model,
which allows us to reveal important properties of the time evolution of the
infection (notably criticality and time-rescaling symmetry). Furthermore, we
shall discuss a percolation model that, near criticality, is in the same
universality class as time-honoured epidemiological models, along with some of
its extensions and generalisations.
### 2.2 Numerical Simulations and Criticality
Lattice simulations of reaction-diffusion processes have been a well-established
tool to study the epidemic spreading of a disease since the original work by
P. Grassberger in [99]. In specific realisations the models have been studied
to very high precision and the critical values of the parameters are known
with an accuracy reaching the six digits, see for example Ref. [100] and
references therein. Different geometries have been considered as well as
different ranges of interactions, including random long-range couplings among
sites, see [101, 102] for recent discussions. All of these follow a _Markov
decision process_ [103, 104], _i.e._ the population is represented by a discrete
lattice and the time evolution of the disease is organised in discretised time
steps (so-called Markov iterations), between each of which the state of the
lattice is changed based on a set of stochastic decisions. Here we consider a
synchronous algorithm (_i.e._ we update all the lattice sites in each Markov
iteration), and isotropic interactions of range $r$ (in lattice units).
#### 2.2.1 The principle
For our purposes, the simplest and most direct way to study percolation models
is to simulate the time evolution of the spread of a disease via stochastic
processes on a finite dimensional lattice. The individuals, represented by
each lattice site, can be in one and only one of the three given states:
susceptible, infectious or removed. They are defined as follows:
* 1.
_Susceptible_ : these are individuals that are currently not infectious, but
can contract the disease. We do not distinguish between individuals who have
never been infected and those who have recovered from a previous infection,
but are no longer immune.
* 2.
_Infectious_ : these are individuals who are currently infected by the disease
and can actively transmit it to a susceptible individual.
* 3.
_Removed (recovered)_ : these are individuals who currently can neither be
infected themselves, nor can infect susceptible individuals. This comprises
individuals who have (temporary) immunity (either natural, or because they
have recovered from a recent infection), but also all deceased individuals.
The time evolution of the lattice configurations follows a set of rules, which
implements the following two basic mechanisms into an algorithm that models
the spread of the disease within a finite and isolated population in
discretised time steps:
* _i)_
the infection of susceptible individuals in the vicinity of an infectious one;
* _ii)_
the removal (recovery) of an infectious individual (so that it can no longer
infect other individuals).
The infection process depends on the reach of an infectious site over
potential nearby susceptible ones. This reach depends on the geometry of the
lattice (here we always use square lattices) and on the range $r$. The removal
instead depends on the site itself and on an intrinsic removal probability.
Starting from the two principles above, there are two ways to let the lattice
evolve and to define the elementary time steps, starting from a given initial
spatial distribution of infectious and susceptible sites. On the one hand, we
can randomly choose an infectious site and begin the infection process within
its surrounding sites (_i.e._ determine how many susceptible neighbouring
sites are turned infectious). Once the process is over, another infectious
site is chosen randomly, defining the next time step. Such a sequence forbids
multiple infections, as only one infected site is considered at each step of
time. On the other hand, we could take into account all the possible
infections at the same time and consider the susceptible sites that may become
infected by them, according to the rules of the algorithm. The lattice is then
updated with the newly infected sites, and the next time step begins. This process
allows multiple infections to be considered, as susceptible sites can have
multiple infected neighbours infecting them at a single time step. The first
method is called “asynchronous” as opposed to the second “synchronous”
algorithm.
Having discussed the temporal structure of the simulation, we can turn now to
the specific mechanism of the spread, which, in our setup, depends on three
parameters:
* 1.
The _coordination radius_ $r\in\mathbb{R}_{+}$, which is a measure for the
distance (on the lattice) over which direct infections between individuals can
take place, _i.e._ only sites within a distance $r$ from the infectious one
can be infected. We illustrate $r$ in Fig. 2 within a 2-dimensional squared
lattice.
Figure 2: Two-dimensional cubic lattice generated by the lattice vectors
$(\mathbf{e}_{1},\mathbf{e}_{2})$. The blue circle of coordination radius $r$
($r=2$ in the current example) contains all susceptible sites (blue) that may
become infected by a single infectious one (red) at its centre.
* 2.
The _infection probability_ $\mathfrak{g}\in[0,1]$ for an infectious
individual to infect a neighbour site. In practice, the probability of a
single individual in the neighbourhood (defined in terms of the coordination
radius) to be infected is equal to $\mathfrak{g}$ divided by the number of
sites within a radius $r$ from the infectious one. This choice, as we shall
see in Section 3.3, allows us to draw a more direct relation between
$\mathfrak{g}$ and the infection rate parameter defined in other approaches.
* 3.
The _removal probability_ $\mathfrak{e}\in[0,1]$ for an infectious individual
to become removed.
In the following we shall highlight some of the key-features of this approach
and study their dependence on the three parameters above. To do so, we
consider a 2-dimensional lattice with periodic boundary conditions. We follow
the “synchronous” algorithm, with a slightly different procedure compared to the
common approach in the literature [105]. Usually, in order to determine the
time-evolution of the lattice configuration, one needs to go through all
infectious sites and individually apply the infection algorithm to all
susceptible sites within their coordination radius: each contact is simulated
by the call of a randomly generated number $x$, between 0 and 1. If
$x\leq\mathfrak{g}$, an infection occurs and the considered susceptible site
will become infectious at the next time step. Else, nothing happens – the site
will stay susceptible and the whole process is repeated for each of the sites
surrounding a given infectious one.
Instead of this infectious-site-centred procedure, we will consider an
algorithm centred on the susceptible sites: for each susceptible site, we
count the number $n$ of infectious sites within the coordination radius and
calculate the cumulated probability of infection. One can show that, on
average, the probability $\mathcal{P}\left(n\right)$ for this site to become
infectious in the next Markov iteration is given by
$\displaystyle\mathcal{P}\left(n\right)=1-(1-\mathfrak{g})^{n}$. We use this
probability to determine the fate of each susceptible site. This improved
procedure speeds up the algorithm and reduces stochastic fluctuations, as it
is equivalent to performing a local average at each time step. We turn now to
the presentation of our results.
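As an illustration, a single synchronous update of the susceptible-site-centred procedure just described can be sketched in Python. This is our own minimal sketch, not code from the source: the state encoding, the neighbourhood construction and the use of periodic boundaries via `np.roll` are implementation choices; the infection rule uses the cumulated probability $\mathcal{P}\left(n\right)=1-(1-\mathfrak{g})^{n}$ quoted above.

```python
import numpy as np

SUSCEPTIBLE, INFECTIOUS, REMOVED = 0, 1, 2

def neighbour_mask(r):
    """Boolean mask of lattice offsets within coordination radius r
    (the centre site itself is excluded)."""
    d = int(np.floor(r))
    ys, xs = np.mgrid[-d:d + 1, -d:d + 1]
    mask = ys**2 + xs**2 <= r**2
    mask[d, d] = False
    return mask

def synchronous_step(lattice, g, e, mask, rng):
    """One synchronous Markov iteration, centred on the susceptible sites."""
    inf = (lattice == INFECTIOUS).astype(float)
    d = mask.shape[0] // 2
    # n[y] = number of infectious sites within radius r of y (periodic boundaries)
    n = np.zeros_like(inf)
    for dy, dx in zip(*np.nonzero(mask)):
        n += np.roll(inf, (dy - d, dx - d), axis=(0, 1))
    p_infect = 1.0 - (1.0 - g) ** n   # cumulated infection probability P(n)
    new = lattice.copy()
    sus = lattice == SUSCEPTIBLE
    new[sus & (rng.random(lattice.shape) < p_infect)] = INFECTIOUS
    infectious = lattice == INFECTIOUS
    new[infectious & (rng.random(lattice.shape) < e)] = REMOVED
    return new
```

Iterating `synchronous_step` from a given initial configuration until no infectious site remains reproduces, under these assumptions, the qualitative behaviour discussed in the results below.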
#### 2.2.2 Results
Figure 3: Number of infected individuals as a function of the discretised
time for a lattice with $40401$ sites, $\mathfrak{g}=0.7$, $\mathfrak{e}=0.1$
and coordination radius $r=1$.
A plot of the evolution of the cumulative number of infected sites as a
function of the discretised time-steps is shown in Fig. 3 for a sample choice
of the parameters $\mathfrak{g}=0.7$, $\mathfrak{e}=0.1$ and $r=1$ and for a
square lattice with $201$ sites on each side (_i.e._ $40401$ sites in total).
At large $t$, the cumulative number of infected sites approaches an asymptotic value,
which, averaged over a sufficient number of simulations, is a function of the
probabilities $(\mathfrak{g},\mathfrak{e})$ as well as of the coordination
radius $r$. Varying these parameters leads to substantially different
asymptotic values, as is shown in Fig. 4: in the four panels, we plot the
asymptotic values as a function of the infection probability $\mathfrak{g}$.
We use the same lattice as before and fix $\mathfrak{e}=0.1$. For each point,
we repeat the process $50$ times to compute the shown mean and standard
deviation. As expected, the larger $\mathfrak{g}$, the higher the number of
infected sites at the end of the process. The plots also show the critical
behaviour of the system, as the asymptotic value jumps from a very small value
at small $\mathfrak{g}$ to a value of the same order as the total population
(_i.e._ the number of sites in the lattice). For each value of $r$, one can
define a critical value $\mathfrak{g}_{c}(r)$: increasing $r$ reduces the
value of $\mathfrak{g}_{c}$.
(a) $r=1$ (b) $r=2$
(c) $r=5$ (d) $r=50$
Figure 4: Evolution of the asymptotic number of infected sites as a function
of the infection probability $\mathfrak{g}$ for different coordination radii
$r$. The removal probability is fixed to $\mathfrak{e}=0.1$.
In the simulations of Fig. 4 we use the same initial condition, where all the
sites within a radius $5$ (in lattice units) from the centre of the lattice
are set to the infectious state, thus having initially $81$ cases. Due to the
stochastic nature of the process, the final number of infected cases does
depend non-trivially on the initial state, especially for small coordination
radius $r$. For $r=1$ and $\mathfrak{e}=0.1$, this dependence on the initial
infected $N_{I}$ is shown in the left panel of Fig. 5, where we plot the
asymptotic value of infected as a function of $N_{I}$, randomly distributed on
the lattice. We plot the results for three different values of
$\mathfrak{g}=0.4,\;0.5$ and $0.7$, where $\mathfrak{g}=0.5$ is close to the
critical $\mathfrak{g}_{c}$. The critical behaviour described above seems to
be also sensitive to $N_{I}$. This could be due to finite volume effects, as
the evolution of the infection is expected to depend crucially on the density
(rather than on the actual number) of initial infectious cases on the lattice
as well as on their spatial distribution. The dependence on the initial state
is consistent with the result obtained for the SIR compartmental models
discussed in Sec. 3.2. This effect should disappear in the infinite volume
limit. Especially near the critical value, we observe a large spread of
results for the asymptotic numbers. This is particularly evident for small
densities of initial infections, where stochastic effects become relevant. As
an example, we show a bundle of 50 solutions near the critical value in the
right panel of Fig. 5.
Figure 5: Left panel: Evolution of the final number of infected sites as a
function of the initial infectious ones. The mean and the standard deviation
are computed over $50$ simulations for each point with $\mathfrak{e}=0.1$ and
different values of $\mathfrak{g}$. Right panel: Time evolution of the
infected cases for $50$ simulations with $\mathfrak{e}=0.1$,
$\mathfrak{g}=0.5$, $N_{I}=2$ and $r=1$.
### 2.3 Master Action and Field Theory
Here we briefly summarise the percolation approach and the derivation via
field theory of the reaction diffusion processes. We follow G. Pruessner’s
lectures [75] and borrow part of his notation. The overarching goal is to
reproduce and extend the action given in the seminal work of J.L. Cardy and P.
Grassberger [76].
We, therefore, consider a model of random walkers described by a field $W$
that diffuse through a lattice, reproduce themselves and drop some poison $P$
as they stroll around. The poison field $P$ does not diffuse but kills walkers
if they hit a poisoned location. Interpreting the positions of the walkers as
infected sites and those of the poison as simultaneously representing either
the immune or removed individuals, the model effectively describes a disease
diffusion process featuring infection and immunisation dynamics. The
microscopic processes considered in [76] (see also [106, 107]) can be
schematically summarised as follows:
$\displaystyle W\,\rightarrow\,W+W\;,$ $\displaystyle{\rm
with\;rate\;}\sigma\;,$ $\displaystyle W\,\rightarrow\,W+P\;,$
$\displaystyle{\rm with\;rate\;}\alpha\;,$ $\displaystyle
W+P\,\rightarrow\,P\;,$ $\displaystyle{\rm with\;rate\;}\beta\;.$ (2.2)
The first branching process corresponds to infection, while the last two
processes describe immunisation. In addition we will consider a process of
spontaneous creation, by which infected can appear at one site independently
of the presence of other infected at neighbouring sites, with a rate $\xi$.
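To make the reaction scheme concrete, the local processes listed in Eq. (2.2), together with spontaneous creation, can be simulated with a standard Gillespie algorithm. The following Python sketch is ours (not from the source); it ignores diffusion and keeps only the single-site reaction dynamics, with the rate of each channel built from the current occupation numbers:

```python
import numpy as np

def gillespie(w0, p0, sigma, alpha, beta, xi, t_max, rng):
    """Gillespie simulation of the local reactions
    W -> W + W (rate sigma), W -> W + P (rate alpha),
    W + P -> P (rate beta), and spontaneous creation 0 -> W (rate xi)."""
    t, w, p = 0.0, w0, p0
    times, walkers = [t], [w]
    while t < t_max:
        rates = np.array([sigma * w, alpha * w, beta * w * p, xi], dtype=float)
        total = rates.sum()
        if total == 0.0:
            break                      # absorbing state: nothing can happen
        t += rng.exponential(1.0 / total)
        event = rng.choice(4, p=rates / total)
        if event == 0:
            w += 1                     # branching (infection)
        elif event == 1:
            p += 1                     # poison dropped (immunisation)
        elif event == 2:
            w -= 1                     # walker killed by poison
        else:
            w += 1                     # spontaneous creation
        times.append(t)
        walkers.append(w)
    return times, walkers
```

With $\beta$ as the only non-zero rate the walker number decays monotonically to zero, while pure branching ($\sigma$ only) never decreases it, mirroring the infection/immunisation roles of the processes above.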
Figure 6: Schematic presentation of the state
$\\{n_{\mathbf{x}}^{W},n_{\mathbf{x}}^{P}\\}$ with $e_{i}$ the basis vectors
of $\Gamma$.
The field theory is derived from a discretised version of the model,
eventually taking the continuum limit. The starting point is a _Master
Equation_ that directly leads to the action through a process of second-
quantisation. Let $\Gamma\subset\mathbb{Z}^{d}$ be a $d$-dimensional
hypercubic lattice with coordination number $q$, which is generated by a set
of vectors $\mathbf{e}$. We denote by
$\\{n^{W}_{\mathbf{x}},n^{P}_{\mathbf{x}}\\}$ a state with site $\mathbf{x}$
occupied by $n^{W}_{\mathbf{x}}$ and $n^{P}_{\mathbf{x}}$ particles of type
$W$ and $P$ $\forall\mathbf{x}\in\Gamma$ (for a schematic representation see
Fig. 6). The probability that such state is realised at time $t$ is denoted by
$P(\\{n^{W}_{\mathbf{x}},n^{P}_{\mathbf{x}}\\};t)$. Configurations can change
via the different mechanisms described above. The probability thus satisfies
the first order differential equation (Master Equation):
$\displaystyle\frac{dP(\\{{n}^{W}_{\mathbf{x}},n^{P}_{\mathbf{x}}\\};t)}{dt}=$
$\displaystyle\frac{H}{q}\,\sum_{\mathbf{y}\in\Gamma}\sum_{e\in\mathbf{e}}\left[(n^{W}_{\mathbf{y}+e}+1)P(\\{n^{W}_{\mathbf{y}}-1,n^{W}_{\mathbf{y}+e}+1,n^{P}_{\mathbf{x}}\\};t)-n^{W}_{\mathbf{y}}P(\\{n^{W}_{\mathbf{x}},n^{P}_{\mathbf{x}}\\};t)\right]$
$\displaystyle+\sigma\sum_{\mathbf{y}\in\Gamma}\left[(n^{W}_{\mathbf{y}}-1)P(\\{n^{W}_{\mathbf{y}}-1,n^{P}_{\mathbf{x}}\\};t)-n^{W}_{\mathbf{y}}P(\\{n^{W}_{\mathbf{x}},n^{P}_{\mathbf{x}}\\};t)\right]$
$\displaystyle+\alpha\sum_{\mathbf{y}\in\Gamma}\left[n^{W}_{\mathbf{y}}P(\\{n^{W}_{\mathbf{x}},n^{P}_{\mathbf{y}}-1\\};t)-n^{W}_{\mathbf{y}}P(\\{n^{W}_{\mathbf{x}},n^{P}_{\mathbf{x}}\\};t)\right]$
$\displaystyle+\beta\sum_{\mathbf{y}\in\Gamma}\left[(n^{W}_{\mathbf{y}}+1)n_{\mathbf{y}}^{P}P(\\{n^{W}_{\mathbf{y}}+1,n^{P}_{\mathbf{x}}\\};t)-n^{W}_{\mathbf{y}}n^{P}_{\mathbf{y}}P(\\{n^{W}_{\mathbf{x}},n^{P}_{\mathbf{x}}\\};t)\right]$
$\displaystyle+\xi\sum_{\mathbf{y}\in\Gamma}\left[P(\\{n^{W}_{\mathbf{y}}-1,n^{P}_{\mathbf{x}}\\};t)-P(\\{n^{W}_{\mathbf{x}},n^{P}_{\mathbf{x}}\\};t)\right]\,.$
(2.3)
The first line describes diffusion of walkers from one lattice site to one of
its $q$ nearest neighbours with frequency $H/q$. This process is schematically
shown in Fig. 7.
Figure 7: Schematic representation of the process leading to the first line of
Eq.(2.3): a single walker moving to a neighbouring lattice site (with
$n^{W}_{\mathbf{y}}\geq 1$ and
$n^{P}_{\mathbf{y}}\,,n^{W}_{\mathbf{y}+e}\,,n^{P}_{\mathbf{y}+e}\geq 0$).
There $\\{n^{W}_{\mathbf{y}}-1,n^{W}_{\mathbf{y}+e}+1,n^{P}_{\mathbf{x}}\\}$
denotes the state differing from $\\{n^{W}_{\mathbf{x}},n^{P}_{\mathbf{x}}\\}$
by having one walker less at $\mathbf{y}$ and one walker more at $\mathbf{y}+e$,
consistently with the prefactor $(n^{W}_{\mathbf{y}+e}+1)$.
The second and third lines produce the first two branching processes in Eq.
(2.2) respectively and are schematically shown in Figs 8 and 9.
Figure 8: Schematic representation of the branching process leading to the
second line of (2.3): a single walker creating a copy of itself at the
site $\mathbf{y}$ (with $n^{W}_{\mathbf{y}}\geq 2$ and $n^{P}_{\mathbf{y}}\geq
0$).
Figure 9: Schematic representation of the branching process leading to the
third line of (2.3): a walker ’drops’ poison at the lattice site $\mathbf{y}$
(with $n^{P}_{\mathbf{y}}\geq 1$ and $n^{W}_{\mathbf{y}}\geq 1$).
The fourth line accounts for the third process in Eq. (2.2) and is graphically
represented in Fig. 10.
Figure 10: Schematic representation of the branching process leading to the
fourth line of (2.3): a single walker ’dying’ from poison at the lattice site
$\mathbf{y}$ (with $n^{P}_{\mathbf{y}}\geq 1$ and $n^{W}_{\mathbf{y}}\geq 0$).
Finally, the last line gives the spontaneous creation of one walker at site
$\mathbf{y}$ and is schematically shown in Fig. 11.
Figure 11: Schematic representation of the branching process leading to the
fifth line of (2.3): a single walker is spontaneously created at the lattice
site $\mathbf{y}$ (with $n^{W}_{\mathbf{y}}\geq 1$ and $n^{P}_{\mathbf{y}}\geq
0$).
In view of a second quantisation, following the Doi-Peliti approach [72, 73,
74], it is natural to interpret the state
$\\{n^{W}_{\mathbf{x}},n^{P}_{\mathbf{x}}\\}$ as obtained by the action of
creation operators $a^{\dagger}(\mathbf{x})$ (for $W$) and
$b^{\dagger}(\mathbf{x})$ (for $P$) on a vacuum state. One introduces also the
corresponding annihilation operators, $a(\mathbf{x})$ and $b(\mathbf{x})$,
such that
$\displaystyle
a^{\dagger}(\mathbf{x})|\\{n^{W}_{\mathbf{x}},n^{P}_{\mathbf{x}}\\}\rangle=|\\{n^{W}_{\mathbf{x}}+1,n^{P}_{\mathbf{x}}\\}\rangle\,,$
$\displaystyle
b^{\dagger}(\mathbf{x})|\\{n^{W}_{\mathbf{x}},n^{P}_{\mathbf{x}}\\}\rangle=|\\{n^{W}_{\mathbf{x}},n^{P}_{\mathbf{x}}+1\\}\rangle\,,$
(2.4) $\displaystyle
a(\mathbf{x})|\\{n^{W}_{\mathbf{x}},n^{P}_{\mathbf{x}}\\}\rangle=n^{W}_{\mathbf{x}}\,|\\{n^{W}_{\mathbf{x}}-1,n^{P}_{\mathbf{x}}\\}\rangle\,,$
$\displaystyle
b(\mathbf{x})|\\{n^{W}_{\mathbf{x}},n^{P}_{\mathbf{x}}\\}\rangle=n^{P}_{\mathbf{x}}\,|\\{n^{W}_{\mathbf{x}},n^{P}_{\mathbf{x}}-1\\}\rangle\,,$
(2.5)
$\displaystyle\left[a(\mathbf{x}),a^{\dagger}(\mathbf{y})\right]=\delta_{\mathbf{x},\mathbf{y}}\,,$
$\displaystyle\left[b(\mathbf{x}),b^{\dagger}(\mathbf{y})\right]=\delta_{\mathbf{x},\mathbf{y}}\,,$
(2.6)
with all other possible commutators between $(a,a^{\dagger})$ and
$(b,b^{\dagger})$ vanishing. The field theory is realised by considering the
time-evolution of the state
$|\Psi(t)\rangle=\sum_{\\{n_{\mathbf{x}}^{W},n_{\mathbf{x}}^{P}\\}}P(\\{n^{W}_{\mathbf{x}},n^{P}_{\mathbf{x}}\\};t)\,|\\{n_{\mathbf{x}}^{W},n_{\mathbf{x}}^{P}\\}\rangle\;,$
(2.7)
which can be derived from the Master Equation (2.3). Upon mapping each
operator to conjugate fields
$\displaystyle a\rightarrow W\,$ ,
$\displaystyle\quad\tilde{a}=a^{\dagger}-1\rightarrow W^{+}\;,$ $\displaystyle
b\rightarrow P\,$ , $\displaystyle\quad\tilde{b}=b^{\dagger}-1\rightarrow
P^{+}\;,$ (2.8)
where the tilded operators are known as Doi-shifted operators, one finds that
the evolution is controlled by $\exp\\{-\int d^{d}xdt\,S(W^{+},W,P^{+},P)\\}$,
with the action density $S$ given by
$\displaystyle S=$ $\displaystyle
W^{+}\partial_{t}W+P^{+}\partial_{t}P+D\nabla W^{+}\nabla
W-\sigma(1+W^{+})W^{+}W$
$\displaystyle-\alpha(1+W^{+})P^{+}W+\beta(1+P^{+})W^{+}WP-\xi W^{+}\;,$ (2.9)
where $D=\lim_{\mathsf{a}\to 0}H\mathsf{a}^{2}/q$ is the diffusion constant in
the continuum limit ($\mathsf{a}$ is the lattice spacing). The action in Eq.(2.9)
corresponds to the result in [76] augmented here by the last source term due
to spontaneous generation. This produces a background of infected and it is
responsible in this approach for a ‘strolling’ dynamics, as we motivate in
Section 3.5.2.
The renormalisation group equations stemming from the action in Eq.(2.9),
which follow closely those of other theories such as directed percolation
models or reggeon field theory [77, 78], have been analysed in [76]. In
particular, the Fourier transform of the correlation function of a field $W$
and a field $W^{+}$ was computed and shown to satisfy the following scaling
law near criticality
$\displaystyle\mathcal{F}\left(\langle
W(\vec{x},t)\,W^{+}(0,0)\rangle\right)(\omega,\vec{k})=|\vec{k}|^{\eta-2}\,\Phi(\omega\,\Delta^{\nu_{t}}\,,\vec{k}\,\Delta^{\nu})\,,$
(2.10)
for some function $\Phi$. Here $\Delta$ is a measure for the proximity to
criticality (_i.e._ it is proportional to $p-p_{c}$ of Eq. (2.1) in the
context of the percolation model) and $(\eta,\nu_{t},\nu)$ are critical
exponents determining the universality class of the model.111In a dimensional
regularisation scheme, they were found to be [76]
$\displaystyle\eta=-\frac{\epsilon}{21}\,,$
$\displaystyle\nu_{t}=1+\frac{\epsilon}{28}\,,$
$\displaystyle\nu=\frac{1}{2}-\frac{5}{84}\,\epsilon\,,$ (2.11) where
$\epsilon=6-d$. The quantity above is a measure for the probability of finding
a walker at some generic time and position $(\vec{x},t)\in\mathbb{R}^{6}$ if
there was one at the origin, where $d=6$ corresponds to the critical dimension
of the system [76].
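The $\epsilon$-expansion values quoted in Eq. (2.11) are straightforward to evaluate at any spatial dimension $d$; the small helper below is ours, for illustration only, and simply returns $(\eta,\nu_{t},\nu)$ with $\epsilon=6-d$:

```python
def critical_exponents(d):
    """One-loop critical exponents of Eq. (2.11), with epsilon = 6 - d."""
    eps = 6 - d
    eta = -eps / 21
    nu_t = 1 + eps / 28
    nu = 0.5 - 5 * eps / 84
    return eta, nu_t, nu
```

At the critical dimension $d=6$ the corrections vanish and the mean-field values $(\eta,\nu_{t},\nu)=(0,1,1/2)$ are recovered.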
### 2.4 Relation to Compartmental Models
As mentioned before, the model described by the action in Eq.(2.9) is in the
same universality class as numerous other models that are directly relevant
for the study of epidemic processes. As shown in [76] the particular choice
$\xi=0$, in fact, includes the SIR model, which is the most prominent
representative of compartmental models. To make the connection more concrete,
we return to studying the time evolution of a disease on a lattice $\Gamma$
and divide the individuals that are present at a given lattice site
$\mathbf{x}\in\Gamma$ into three classes or compartments [99], as defined in
Section 2.2.1. We shall denote by
$\\{n^{S}_{\mathbf{x}},n^{I}_{\mathbf{x}},n^{R}_{\mathbf{x}}\\}$ the number of
susceptible, infectious and removed individuals at $\mathbf{x}$, respectively.
222The occupation numbers
$(n^{S}_{\mathbf{x}},n^{I}_{\mathbf{x}},n^{R}_{\mathbf{x}})$ are denoted
$(X(\mathbf{x}),Y(\mathbf{x}),Z(\mathbf{x}))$ respectively in [99].
Concretely, for $\xi=0$, the model in [99] is well suited for numerical
Markovian simulations and can be connected to the SIR model. The processes of
the model in [99] are
$\displaystyle n^{S}_{\mathbf{x}}+n^{I}_{\mathbf{x}^{\prime}}$
$\displaystyle\rightarrow n^{I}_{\mathbf{x}}+n^{I}_{\mathbf{x}^{\prime}}\;,$
$\displaystyle{\rm infection\;with\;rate\;}\hat{\gamma}\;,$ $\displaystyle
n^{I}_{\mathbf{x}}$ $\displaystyle\rightarrow n^{R}_{\mathbf{x}}\;,$
$\displaystyle{\rm recovery\;with\;rate\;}\hat{\epsilon}\;,$ (2.12)
where $\mathbf{x}$ and $\mathbf{x}^{\prime}$ are nearest neighbour sites on
$\Gamma$ (_i.e._ $\mathbf{x}^{\prime}=\mathbf{x}+e$ for some basis vector
$e\in\mathbf{e}$). As discussed in [99], treating the process as deterministic
(in particular, interpreting
$(n^{S}_{\mathbf{x}},n^{I}_{\mathbf{x}},n^{R}_{\mathbf{x}})$ as continuous
functions of time) one obtains the following equations of motion
$\displaystyle\frac{dn^{S}_{\mathbf{x}}}{dt}(t)$ $\displaystyle=$
$\displaystyle-\hat{\gamma}\,n^{S}_{\mathbf{x}}(t)\sum_{e\in\mathbf{e}}n^{I}_{\mathbf{x}+e}(t)\;,$
$\displaystyle\frac{dn^{I}_{\mathbf{x}}}{dt}(t)$ $\displaystyle=$
$\displaystyle\hat{\gamma}\,n^{S}_{\mathbf{x}}(t)\sum_{e\in\mathbf{e}}n^{I}_{\mathbf{x}+e}(t)-\hat{\epsilon}\,n^{I}_{\mathbf{x}}(t)\;,$
$\displaystyle\frac{dn^{R}_{\mathbf{x}}}{dt}(t)$ $\displaystyle=$
$\displaystyle\hat{\epsilon}\,n^{I}_{\mathbf{x}}(t)\;,$ (2.13)
where the sums on the right hand side extend over the nearest neighbours of
$\mathbf{x}$. Since the sum of all three equations in (2.13) implies
$\frac{d}{dt}(n^{S}_{\mathbf{x}}+n^{I}_{\mathbf{x}}+n^{R}_{\mathbf{x}})(t)=0$,
the total number of individuals is conserved and we denote its value by
$\displaystyle
N=\sum_{\mathbf{x}\in\Gamma}(n^{S}_{\mathbf{x}}(t)+n^{I}_{\mathbf{x}}(t)+n^{R}_{\mathbf{x}}(t))\,.$
(2.14)
Furthermore, we introduce the relative number of susceptible, infectious and
removed individuals respectively
$\displaystyle
S(t)=\frac{1}{N}\,\sum_{\mathbf{x}\in\Gamma}n^{S}_{\mathbf{x}}(t)\,,$
$\displaystyle
I(t)=\frac{1}{N}\,\sum_{\mathbf{x}\in\Gamma}n^{I}_{\mathbf{x}}(t)\,,$
$\displaystyle
R(t)=\frac{1}{N}\,\sum_{\mathbf{x}\in\Gamma}n^{R}_{\mathbf{x}}(t)\,,$ (2.15)
which satisfy
$\displaystyle S(t)+I(t)+R(t)=1\,.$ (2.16)
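Before taking the mean-field limit, the deterministic lattice equations (2.13) can be integrated directly. The sketch below is ours (not from the source): a single forward-Euler step on a two-dimensional grid with periodic boundaries, where the nearest-neighbour sum is built with `np.roll` ($q=4$):

```python
import numpy as np

def lattice_sir_step(S, I, R, gamma_hat, eps_hat, dt):
    """One forward-Euler step of Eq. (2.13) on a 2D periodic grid
    (q = 4 nearest neighbours)."""
    # sum of infectious occupation over the four nearest neighbours
    nbr_I = sum(np.roll(I, s, axis=a) for a in (0, 1) for s in (1, -1))
    dS = -gamma_hat * S * nbr_I
    dI = gamma_hat * S * nbr_I - eps_hat * I
    dR = eps_hat * I
    return S + dS * dt, I + dI * dt, R + dR * dt
```

Since $dS+dI+dR=0$ at every site, the total population (2.14) is conserved step by step, in line with the conservation law noted above.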
Finally, by taking a _mean-field approximation_ for the infected field in
Eq.(2.13) (_i.e._ replacing $n^{I}_{\mathbf{x}}$ by $I(t)$
$\forall\mathbf{x}\in\Gamma$, such that the sums
$\sum_{e\in\mathbf{e}}n^{I}_{\mathbf{x}+e}$ in Eq.(2.13) are replaced by
$\frac{q}{N}\sum_{\mathbf{x}\in\Gamma}n^{I}_{\mathbf{x}}=qI(t)$) and summing
over all $\mathbf{x}\in\Gamma$, one obtains the following coupled first order
differential equations:
$\displaystyle\frac{dS}{dt}(t)=-q\,\hat{\gamma}\,S(t)\,I(t)\,,$
$\displaystyle\frac{dI}{dt}(t)=q\,\hat{\gamma}\,S(t)\,I(t)-\hat{\epsilon}\,I(t)\,,$
$\displaystyle\frac{dR}{dt}(t)=\hat{\epsilon}\,I(t)\;,$ (2.17)
where $q$ is the coordination number, _i.e._ , the number of nearest
neighbours for each site ($4$ in a two-dimensional rectangular lattice). As we
shall discuss in the next section, this system of differential equations,
which has to be solved under the constraint in Eq.(2.16) and with suitable
initial conditions, is structurally of the same form as the SIR model [19],
one of the oldest deterministic models to describe the spread of a
communicable disease.
Spontaneous generation can be included in Eq.(2.17) as an additional process
$\displaystyle n^{S}_{\mathbf{x}}\rightarrow n^{I}_{\mathbf{x}}\,,$
$\displaystyle\text{ with rate }\hat{\xi}\,.$ (2.18)
In the deterministic and mean-field equations, this amounts to a term
$-\hat{\xi}S(t)$ in the first equation of (2.17), and the corresponding one
with opposite sign in the second equation, as we shall discuss in the context
of the SIR model in Section 3.5.2.
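Written out explicitly (our rewriting of the statement above), the mean-field equations including the spontaneous-generation term read:

```latex
\begin{aligned}
\frac{dS}{dt}(t) &= -q\,\hat{\gamma}\,S(t)\,I(t)-\hat{\xi}\,S(t)\,,\\
\frac{dI}{dt}(t) &= q\,\hat{\gamma}\,S(t)\,I(t)+\hat{\xi}\,S(t)-\hat{\epsilon}\,I(t)\,,\\
\frac{dR}{dt}(t) &= \hat{\epsilon}\,I(t)\,.
\end{aligned}
```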
## 3 Compartmental Models
Executive Summary
* 1.
We introduce compartmental models as _deterministic_ approaches that describe the diffusion of infectious diseases through coupled differential equations in time.
* 2.
These models are characterised through a set of diffusion rates and initial conditions. Time is assumed to be a continuous variable.
* 3.
We discuss mathematical aspects of different models capturing the asymptotic behaviour of their dynamics.
* 4.
We show how to extract several epidemiologically relevant phenomena from compartmental models, including the endemic behaviour of diseases, the impact of superspreaders, the possibility of re-infection and multi-wave patterns.
* 5.
We show that compartmental models can be naturally related to the renormalisation group framework.
### 3.1 SIR(S) Model, Basic Definitions
Independently of percolation models and epidemic field theory descriptions,
the differential equations (2.17) have been proposed as early as 1927 to
describe the dynamic spread of infectious diseases in an isolated population
of total size $N\gg 1$. As reviewed in the Historical Overview (Section 1.1),
a major breakthrough in the systematic study of the time evolution of
infectious diseases was the application of the mass-action law [8, 9, 10, 11,
12, 13, 14, 15, 16, 17, 18, 19], stating roughly that the rate at which
individuals of two different types meet is proportional to the product of
their total numbers. 333In fact H. Heesterbeek [5] remarks that ’In short,
mass-action turned epidemiology into a science.’ This law underlies many
chemical reactions, in which different agents mix. In the context of
epidemiology, it leads to a class of deterministic approaches that are called
_compartmental models_ , whose hallmark is to divide the population into
several distinct classes. Each class or compartment comprises individuals
with a characteristic behaviour in the context of the disease. The
simplest of this class of models is called SIR and, as the name indicates, it
includes three compartments, as already described in detail in Section 2.4:
* 1.
_Susceptible:_ the total number of susceptible individuals at time $t$ shall
be denoted $N\,S(t)$.
* 2.
_Infectious:_ the total number of infectious individuals at time $t$ shall be
denoted $N\,I(t)$.
* 3.
_Removed (recovered):_ the total number of removed individuals at time $t$
shall be denoted $N\,R(t)$.
Depending on the type of disease under consideration, other compartments can
be included to make the model more realistic, _e.g._ [98, 32]
* 1.
_Passively immune ($M$):_ infants that have been born with (temporary)
passive immunity ($M$ stands for maternally-derived immunity).
* 2.
_Exposed ($E$):_ individuals in the latent period, who are infected but not
yet infectious.
* 3.
_Deceased ($D$):_ individuals who have died from the disease (in some models
$D$ is considered to be part of $R$).
* 4.
_Carrier ($C$):_ individuals in a state where they have not completely
recovered and still carry the disease, but do not suffer from it (examples of
diseases for which this compartment is of relevance are tuberculosis and
typhoid fever, see _e.g._ [108]).
* 5.
_Quarantine ($Q$):_ individuals who have been put under quarantine or
lockdown measures (see _e.g._ [109]).
* 6.
_Vaccinated ($V$):_ individuals who are vaccinated against the disease, thus
acquiring partial or total immunity.
Models are usually named/classified according to the compartments they
contain, _e.g._ SIR, MSEIR, _etc._ Repetition of labels indicates that
individuals may return into a given compartment several times, _e.g._ SIS
denotes a model in which infectious individuals may become susceptible again
after an infection. Furthermore, each compartment can be generalised to
include dependences on biological (_e.g._ age, gender, _etc._) and/or
geometric parameters (_e.g._ parameters measuring geographic mobility,
_etc._). To better model social and behavioural particularities among the
population (but also to simulate different variants of a given disease),
models can include multiple copies of a compartment with slightly different
properties (see _e.g._ [48], which includes several different classes of
susceptible, each of which with a different infection rate, to model the
spread of gonorrhoea). Finally, depending on the duration of the epidemic, the
birth and death dynamics need to be taken into account [110, 111]. That is,
into each class, new individuals may be born, or individuals of each
compartment can die from causes other than the disease.
In the following, for simplicity, we shall only consider models including the
compartments S, I and R. We assume that the total size of the population
remains constant, _i.e._ we impose the algebraic relation
$\displaystyle 1=S(t)+I(t)+R(t)\,,$ $\displaystyle\forall
t\in\mathbb{R}_{+}\,,$ (3.1)
where, without restriction of generality, we assume that the outbreak of the
epidemic starts at $t=0$. We shall also refer to $S$, $I$ and $R$ as the
_relative_ number of susceptible, infectious and removed individuals,
respectively. Furthermore, we assume that $N$ is sufficiently large such that
we can treat $S$, $I$ and $R$ as continuous functions of time:
$\displaystyle S\,,I\,,R\,:\;\mathbb{R}_{+}\longrightarrow[0,1]\,.$ (3.2)
While in Section 2.4 the differential equations (2.17) were a consequence of
the basic microscopic processes in Eq.(2.12) on the lattice $\Gamma$, within
compartmental models they are independently argued on the basis of dynamical
mechanisms that change $(S,I,R)$ as functions of time:
* 1.
Infectious individuals can infect susceptible individuals, turning the latter
into infectious individuals themselves. We call an ‘infectious contact’ any
type of contact that results in the transmission of the disease between an
infectious and a susceptible and we denote the average number of such contacts
per infectious individual per unit of time by $\gamma$. In the original SIR
model [19], $\gamma$ is considered to be constant (_i.e._ it does not change
over time), however, in the following sections we shall not always limit
ourselves to this restriction. The total number of susceptible individuals
that are infected per unit of time (and thus become infectious themselves) is
thus $\gamma\,N\,S\,I$.
* 2.
Infectious individuals can be removed by recovering (and thus gaining
temporary immunity) or by being given immunity (_e.g._ via vaccinations), by
death or via any other form of removal. We shall denote $\epsilon$ the rate at
which infected individuals become removed. As before, we consider $\epsilon$
as a function that may change with time.
* 3.
Removed individuals may become susceptible again after some time or,
conversely, susceptible individuals may become directly removed. In both cases
we shall denote the respective rate by $\zeta$, which may be positive or
negative. If removed individuals are only temporarily immune against the
disease, they can become susceptible again. In this case $\zeta>0$, which
corresponds to the rate at which removed individuals become susceptible again.
Susceptible individuals may become immunised against the disease (_e.g._
through vaccinations). In this case $\zeta<0$. We remark that this is not the
only way to implement vaccinations in compartmental models, as the most direct
way is to add a specific compartment.
The flow among susceptible, infectious and removed is schematically shown in
Fig. 12. The dynamics of the system is also crucially determined by the
initial conditions in each compartment. As already mentioned, we consider
$t=0$ as the start of the epidemic diffusion, where a non-zero number of
infectious individuals is needed for the diffusion to start. Without loss of
generality, we start with zero removed at the initial time. Hence, the initial
conditions are given by
$\displaystyle S(t=0)=S_{0}\,,$ $\displaystyle I(t=0)=I_{0}\,,$ $\displaystyle
R(t=0)=0\,,$ (3.3)
where $S_{0},I_{0}\in[0,1]$ are constants that satisfy $S_{0}+I_{0}=1$. With
this notation, the time dependence of $S$, $I$ and $R$ is described by the
following set of coupled first order differential equations 444These equations
coincide with Eq.(2.17) upon identifying $q\hat{\gamma}\equiv\gamma$,
$\hat{\epsilon}\equiv\epsilon$, and for $\zeta=0$. Spontaneous generation of
infectious individuals can be added straightforwardly.
Figure 12: Flow between susceptible, infectious and removed individuals.
$\displaystyle\frac{dS}{dt}=-\gamma\,I\,S+\zeta\,R\,,$
$\displaystyle\frac{dI}{dt}=\gamma\,I\,S-\epsilon\,I\,,$
$\displaystyle\frac{dR}{dt}=\epsilon\,I-\zeta\,R\,,$ (3.4)
together with the initial conditions (3.3). Notice that
$\frac{d}{dt}(S(t)+I(t)+R(t))=0$ such that the initial conditions (3.3) with
$S_{0}+I_{0}=1$ guarantee the algebraic relation (3.1).
For $\zeta=0$, the system of equations (3.4) is indeed the same model as
described in Section 2.4, which is called the SIR-model [19]. For $\zeta>0$,
this model is sometimes referred to as the SIRS model, since it holds the
possibility that recovered individuals may become susceptible again.
### 3.2 Numerical Solutions and their Qualitative Properties
Figure 13: Numerical solution of the differential equations (3.4) for
$S_{0}=0.92$, $\gamma=0.1$ and $\zeta=0$ for two different choices of
$\epsilon$: $\epsilon=0.1001$ such that $R_{e,0}=0.919$ (left) and
$\epsilon=0.05$ such that $R_{e,0}=1.84$ (right).
Eqs (3.4) can be solved analytically for $\zeta=0$, as we will discuss in
the next subsection. First, we shall present some qualitative remarks that can
be deduced by considering numerical solutions, which we obtained by using a
simple forward Euler method (see _e.g._ [112, 113]). We first consider
$\zeta=0$, for which the temporal evolution of $(S,I,R)$ is illustrated in
Fig. 13 in two qualitatively different scenarios, depending on the value of
the _initial effective reproduction number_ $R_{e,0}$, that we define as [114]
(see also [115, 116, 117, 118, 119, 120] for further discussion of the
effective reproduction number)
$\displaystyle
R_{e,0}=S_{0}\,\sigma\,,\qquad\sigma=\frac{\gamma}{\epsilon}\,.$ (3.5)
The quantity $\sigma$, often called _basic reproduction number_ ($R_{0}$), can
be interpreted as the average number of infectious contacts of a single
infectious individual during the entire period they remain infectious. In
other words, $\sigma$ is the average number of susceptible individuals
infected by a single infectious one. In the left panel of Fig. 13,
$(\gamma,\epsilon,S_{0})$ have been chosen such that $R_{e,0}<1$: in this
case, even though at initial time a significant fraction of the population
($8\%$) is infectious, the function $I(t)$ decreases continuously, leading to
a relatively quick eradication of the disease. This is also visible directly
from Eqs (3.4): since (for $\zeta=0$) $S(t)$ is a monotonically decreasing
function (_i.e._ $S(t)\leq S_{0}$ $\forall t>0$), then $\frac{dI}{dt}(t)<0$
$\forall t>0$ such that the number of infectious individuals is continuously
decreasing. In the right panel of Fig. 13, we chose $R_{e,0}>1$: the number of
infectious cases grows to a maximum and starts decreasing once only a small
number of susceptible individuals remain available. This maximum is reached
once $S(t)=\frac{1}{\sigma}$ such that $\frac{dI}{dt}=0$.
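The forward-Euler scheme used for these figures can be sketched in a few lines
(a minimal illustration only; the step size `dt` and the stopping time are
ad-hoc choices, not taken from the text):

```python
def sir_euler(gamma, eps, zeta, S0, I0, dt=0.01, t_max=500.0):
    """Forward-Euler integration of the SIR equations (3.4),
    with S + I + R = 1 and R(0) = 0 as in Eq. (3.3)."""
    S, I, R = S0, I0, 0.0
    traj = [(0.0, S, I, R)]
    t = 0.0
    while t < t_max:
        dS = -gamma * I * S + zeta * R
        dI = gamma * I * S - eps * I
        dR = eps * I - zeta * R
        # simultaneous update: all derivatives use the old state
        S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
        t += dt
        traj.append((t, S, I, R))
    return traj
```

For the parameters of the right panel of Fig. 13 ($S_{0}=0.92$, $\gamma=0.1$,
$\epsilon=0.05$, $\zeta=0$, so $R_{e,0}=1.84$) the trajectory shows the
characteristic rise of $I(t)$ to a maximum followed by decay, while $S+I+R$
stays constant at every step.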
This behaviour is more clearly visible in the asymptotic number of susceptible
(_i.e._ $S(\infty)=\lim_{t\to\infty}S(t)$) or (equivalently) the cumulative
number of individuals that have become infected throughout the entire
epidemic. Both quantities are a measure of how far the disease has spread
among the population. For later use, we define the function
$I_{\text{c}}(t):\,[0,\infty)\mapsto[0,N]$ as
$\displaystyle
I_{\text{c}}(t)=N\,I_{0}+\int_{0}^{t}dt^{\prime}\,\gamma\,N\,I(t^{\prime})\,S(t^{\prime})\,.$
(3.6)
It quantifies the cumulative total number of individuals who have been
infected by the disease up to time $t$. The definition (3.6) can be used for
generic $\zeta$ as a function of time. For $\zeta=0$, using Eqs (3.4), we
obtain the identity $\gamma\,I\,S=\frac{d}{dt}(I+R)$, which allows us to
simplify Eq.(3.6) to:
$\displaystyle I_{\text{c}}(t)=N(I(t)+R(t))=N(1-S(t))\,,$ for
$\displaystyle\zeta=0\,.$ (3.7)
For $\zeta=0$, we also have that $\lim_{t\to\infty}I(t)\to 0$, thus we find
the following relations at infinite time:
$\displaystyle
I_{\text{c}}(\infty)=\lim_{t\to\infty}I_{\text{c}}(t)=1-S(\infty)=R(\infty)=\lim_{t\to\infty}R(t)\,.$
(3.8)
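The identity (3.7) can be checked directly in the discretised dynamics:
accumulating the integrand of Eq.(3.6) step by step reproduces $N(1-S(t))$,
since every new infection removed from $S$ is added to $I_{\text{c}}$ (a
sketch with $N=1$ and ad-hoc step size):

```python
def sir_cumulative(gamma, eps, S0, I0, dt=0.01, t_max=500.0):
    """Euler integration of Eqs (3.4) for zeta = 0, tracking the
    cumulative number of infected I_c(t) of Eq. (3.6) (with N = 1)."""
    S, I, R = S0, I0, 0.0
    Ic = I0                      # I_c(0) = N * I_0
    t = 0.0
    while t < t_max:
        new_inf = gamma * I * S  # rate of new infections
        S, I, R = S - dt * new_inf, I + dt * (new_inf - eps * I), R + dt * eps * I
        Ic += dt * new_inf       # integrand of Eq. (3.6)
        t += dt
    return S, Ic
```

With $S_{0}=0.92$, $I_{0}=0.08$, $\gamma=0.1$, $\epsilon=0.05$ one finds
$I_{\text{c}}\approx 1-S$ at all times, approaching $1-S(\infty)$ at large $t$.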
Figure 14: Asymptotic number of susceptible and cumulative number of
infectious as a function of $R_{e,0}$ for $S_{0}=1-10^{-6}$.
The limit $S(\infty)$ can be computed analytically, by realising that
$\displaystyle G(t)=S(t)\,e^{\sigma\,R(t)}\,,$ (3.9)
is conserved, _i.e._ $\frac{dG}{dt}(t)=0$ $\forall t\in\mathbb{R}$. This
implies
$\displaystyle S(t)=S_{0}\,e^{-\sigma(1-I(t)-S(t))}\,.$ (3.10)
With $\lim_{t\to\infty}I(t)=0$, this equation can be solved for the asymptotic
number of susceptible in the limit $t\to\infty$, giving
$\displaystyle
S(\infty)=-\frac{S_{0}}{R_{e,0}}\,W(-R_{e,0}\,e^{-\frac{R_{e,0}}{S_{0}}})\,,$
(3.11)
where $W$ is the Lambert $W$ function. The limiting values $S(\infty)$ and
$I_{\text{c}}(\infty)/N$ are shown in Fig. 14 as functions of $R_{e,0}$ for
the initial conditions of $S_{0}=1-10^{-6}$, _i.e._ a starting configuration
with one infectious individual per million. A kink seems to appear at
$R_{e,0}=1$; however, both functions are smooth (continuous and
differentiable) for $S_{0}<1$, as highlighted in the subplots. In the limit $S_{0}\to 1$, the
solutions discontinuously jump to constants, as the absence of initial
infectious individuals prevents the spread of the disease. Qualitatively, this
plot shows that for $R_{e,0}<1$, the disease becomes eradicated before a
significant fraction of the population can be infected. However for
$R_{e,0}>1$ the cumulative number of infected grows rapidly.
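Instead of evaluating the Lambert function in Eq.(3.11), $S(\infty)$ can also
be obtained by a simple bisection on the conserved-quantity relation (3.10)
with $I\to 0$ (a sketch, valid for $S_{0}<1$; the iteration count is an ad-hoc
choice):

```python
import math

def s_infinity(sigma, S0, iters=100):
    """Solve S = S0 * exp(-sigma * (1 - S)), i.e. Eq. (3.10) with I -> 0,
    by bisection on the bracket (0, min(1, 1/sigma)]."""
    f = lambda S: S - S0 * math.exp(-sigma * (1.0 - S))
    lo, hi = 1e-15, min(1.0, 1.0 / sigma)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)
```

For $\sigma=2$, $S_{0}=0.92$ (so $R_{e,0}=1.84$) this gives
$S(\infty)\approx 0.18$, consistent with the right panel of Fig. 13, while for
$R_{e,0}<1$ almost the whole population remains susceptible.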
For $\zeta\neq 0$, we can distinguish two different cases, depending on the
sign:
* 1.
Re-infection $\zeta>0$: a positive $\zeta$ implies that removed individuals
become susceptible again after some time. This can be interpreted to mean that
recovery from the disease only grants temporary immunity, such that a re-
infection at some later time is possible. At large times $t\to\infty$, the
system enters into an equilibrium state, such that $(S(t)\,,I(t)\,,R(t))$
approach constant values $(S(\infty)\,,I(\infty)\,,R(\infty))$. To find the
latter, we impose the equilibrium conditions
$\displaystyle\lim_{t\to\infty}\frac{d^{n}S}{dt^{n}}(t)=\lim_{t\to\infty}\frac{d^{n}I}{dt^{n}}(t)=\lim_{t\to\infty}\frac{d^{n}R}{dt^{n}}(t)=0\,,$
$\displaystyle\forall n\in\mathbb{N}\,,$ (3.12)
which have as solution
$\displaystyle(S(\infty),I(\infty),R(\infty))=\left\\{\begin{array}[]{lcl}(1,0,0)&\text{if}&\sigma\leq
1\text{ or }S_{0}=1\,,\\\\[10.0pt]
\left(\frac{\epsilon}{\gamma}\,,\frac{(\gamma-\epsilon)\zeta}{\gamma(\epsilon+\zeta)}\,,\frac{(\gamma-\epsilon)\epsilon}{\gamma(\epsilon+\zeta)}\right)&\text{if}&\sigma>1\,,\end{array}\right.$
for $\displaystyle\zeta>0\,.$ (3.15)
Here we have used that $0\leq(S(t)\,,I(t)\,,R(t))\leq 1$ (in particular that
$(S(t)\,,I(t)\,,R(t))$ cannot become negative) as well as the fact that the
equilibrium point $(1,0,0)$ cannot be reached for $S_{0}<1$ and
$\gamma>\epsilon$: indeed, this would require
$\displaystyle S(t)>\frac{\epsilon}{\gamma}\,,$ and
$\displaystyle\frac{dI}{dt}(t)<0\,,$ (3.16)
which are not compatible with Eqs (3.4). (Footnote 5: Furthermore, the only
solutions of the conditions
$\frac{d^{2}S}{dt^{2}}(t)=\frac{dI}{dt}(t)=\frac{d^{2}R}{dt^{2}}(t)=0$ are in
fact the two equilibrium points (3.15), where all derivatives of $(S\,,I\,,R)$
vanish. This therefore suggests that there are no solutions that are
continuous oscillations with non-decreasing amplitudes and the system indeed
reaches an equilibrium at $t\to\infty$. The numerical solutions in Fig. 15
comply with this expectation.) The two qualitatively different solutions of
Eqs (3.4) that lead to the asymptotic equilibria (3.15) are plotted in Fig.
15: for $\sigma<1$ (left panel), the disease is eradicated and the individuals
that have been infected eventually move back to be susceptible; for $\sigma>1$
(right panel), after some oscillations, an equilibrium is reached between the
infections and the end of immunity and the number of infectious individuals
tends to the non-zero constant given in Eq.(3.15) (this corresponds to an
endemic state of the disease). The distinction between eradication of the
disease and the endemic phase does not depend on $S_{0}$ (except for the
trivial initial condition $S_{0}=1$) but only on the basic reproduction number
$\sigma$. This fact can be intuitively understood as the rate $\zeta$
dynamically increases the number of susceptible individuals, thus the regime
becomes independent of the initial condition.
Figure 15: Numerical solution of the differential equations (3.4) for
$S_{0}=0.92$, $\gamma=0.1$ and $\zeta=0.01$ for two different choices of
$\epsilon$: $\epsilon=0.2$ implying $\sigma=0.5$ (left) and $\epsilon=0.05$
implying $\sigma=2$ (right).
* 2.
Direct immunisation $\zeta<0$: a negative $\zeta$ implies the possibility that
over time susceptible individuals can become removed and thus immune to the
disease, proportionally to the number of removed individuals. Schematically,
different solutions are shown in Fig. 16. For $\zeta<0$ the dynamics always
leads to the asymptotic values $(S(\infty)\,,I(\infty)\,,R(\infty))=(0,0,1)$
at large $t\to\infty$.
Figure 16: Numerical solution of the differential equations (3.4) for
$S_{0}=0.92$, $\gamma=0.1$ and $\zeta=-0.01$ for two different choices of
$\epsilon$: $\epsilon=0.2$ implying $\sigma=0.5$ (left) and $\epsilon=0.05$
implying $\sigma=2$ (right).
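The equilibria (3.15) can be cross-checked numerically: integrating Eqs (3.4)
for a long time with $\zeta>0$ and $\sigma>1$ should converge to the endemic
fixed point (a self-contained sketch; step size and integration time are
ad-hoc):

```python
def endemic_equilibrium(gamma, eps, zeta):
    """Endemic fixed point of Eq. (3.15), valid for sigma > 1 and zeta > 0."""
    return (eps / gamma,
            (gamma - eps) * zeta / (gamma * (eps + zeta)),
            (gamma - eps) * eps / (gamma * (eps + zeta)))

def integrate_sirs(gamma, eps, zeta, S0, I0, dt=0.05, steps=400_000):
    """Forward-Euler integration of Eqs (3.4) up to t = dt * steps."""
    S, I, R = S0, I0, 0.0
    for _ in range(steps):
        dS = -gamma * I * S + zeta * R
        dI = gamma * I * S - eps * I
        dR = eps * I - zeta * R
        S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
    return S, I, R
```

With the parameters of the right panel of Fig. 15 ($S_{0}=0.92$,
$\gamma=0.1$, $\zeta=0.01$, $\epsilon=0.05$) the damped oscillations settle on
$(S,I,R)=(1/2,\,1/12,\,5/12)$.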
### 3.3 From Lattice to SIR
The relation between Compartmental Models and Percolation Field Theory has
already been established in Section 2.4. However it is also possible to link
the numerical simulations to the SIR model directly, as the microscopic
processes in the lattice simulations are in one-to-one correspondence with the
transfer mechanisms among compartments in the SIR model.
To visualise this we used the results in Fig. 4, where the lattice is of size
$201\times 201$ (_i.e._ a population of $40401$) and the recovery probability
is fixed to $0.1$. Once the recovery rate and the initial number of
susceptible individuals $S_{0}$ are fixed, in the SIR model the value of the
infection rate completely determines the asymptotic number of total infected
via Eq.(3.11). For each coordination radius, we look for the best rescaling of
the infection probability that could reproduce the behaviour in Fig. 4, _i.e._
we compute the optimal $\rho$ such that changing
$\mathfrak{g}\longrightarrow\rho\mathfrak{g}$ gives the best fit of the
numerical results. We show the solution in Fig. 17.
(a) $r=1$
(b) $r=2$
(c) $r=5$
(d) $r=50$
Figure 17: Evolution of the final number of infected cases as a function of
the infection probability for different coordination radii $r$, compared to
the asymptotic solution of the SIR model. The optimal factor found for the
cases (a),(b),(c) and (d) are respectively: $\rho=0.27,\;0.42,\;0.50,\;0.99$.
The results clearly show that increasing the coordination radius improves the
match between the lattice and the SIR model results. The reason for this is
simple: for maximal coordination radius, the mean-field approximation applied
to Eq. (2.13) leads directly to the SIR equations, since any infectious site
can infect any susceptible site on the lattice with equal
probability. Numerical lattice simulations of compartmental models, and in
particular of the SIR type, have been widely used in the literature (see
_e.g._ [121, 122, 123, 124]).
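The rescaling fit described above can be illustrated with a toy example. Since
the lattice data of Fig. 4 are not reproduced here, the sketch below generates
synthetic "lattice" curves from the SIR prediction itself with a hidden factor
$\rho_{\text{true}}$ and recovers it by a grid search; all names and parameter
choices are purely illustrative:

```python
import math

def final_infected(sigma, S0=1.0 - 1e-6):
    """Asymptotic infected fraction 1 - S(inf), Eq. (3.11), via bisection."""
    f = lambda S: S - S0 * math.exp(-sigma * (1.0 - S))
    lo, hi = 1e-15, min(1.0, 1.0 / sigma)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 1.0 - 0.5 * (lo + hi)

def fit_rho(infection_probs, data, recovery=0.1):
    """Grid-search the rescaling g -> rho * g that best fits `data`."""
    def loss(rho):
        return sum((final_infected(rho * g / recovery) - d) ** 2
                   for g, d in zip(infection_probs, data))
    return min((0.01 * k for k in range(1, 101)), key=loss)
```

Generating data with a hidden $\rho_{\text{true}}=0.42$ (the value found for
case (b) of Fig. 17), the grid search recovers it.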
### 3.4 Parametric Solution of the Classical SIR Model
Apart from the numerical solutions, we can also gain insight into analytical
aspects by discussing a parametric solution of the classical SIR model [30].
For simplicity, we assume $\zeta=0$, such that the system in Eqs (3.4), (3.1)
and (3.3) reduces to
$\displaystyle\begin{array}[]{l}\frac{dS}{dt}(t)=-\gamma\,I(t)\,S(t)\,,\\\\[2.0pt]
\frac{dI}{dt}(t)=\gamma\,I(t)\,S(t)-\epsilon\,I(t)\,,\\\\[2.0pt]
\frac{dR}{dt}(t)=\epsilon\,I(t)\,,\end{array}$ with
$\displaystyle(S+I+R)(t)=1$ and
$\displaystyle\begin{array}[]{l}S(t=0)=S_{0}>0\,,\\\\[2.0pt]
I(t=0)=I_{0}>0\,,\\\\[2.0pt] R(t=0)=0\,.\end{array}$ (3.23)
Since the constraint in Eq.(3.1) allows us to remove one function, _e.g._
$R(t)=1-S(t)-I(t)$, it is sufficient to consider the differential equations
for $S$ and $I$. Dividing the latter by the former, we obtain a differential
equation for $I$ as a function of $S$
$\displaystyle\frac{dI}{dS}=-1+\frac{1}{\sigma\,S}\,,$ (3.24)
which can be integrated to
$\displaystyle I(S)=-S+\frac{1}{\sigma}\,\ln S+c\,,$ for $\displaystyle
c\in\mathbb{R}\,.$ (3.25)
The parameter $\sigma$ is defined in Eq.(3.5) and the constant $c$ appearing
in Eq.(3.25) can be fixed by the initial conditions at $t=0$ and gives
$c=I_{0}+S_{0}-\frac{1}{\sigma}\,\ln S_{0}$, such that
$\displaystyle I(S)=1-S+\frac{1}{\sigma}\,\ln\frac{S}{S_{0}}\,.$ (3.26)
A plot of this function in the allowed region
$\displaystyle\mathbb{P}=\\{(S,I)\in[0,1]\times[0,1]|S+I\leq 1\\}\,,$ (3.27)
for different initial conditions and $\sigma=0.9$ (left) and $\sigma=3$
(right) is shown in Fig. 18.
Figure 18: Relative number of infectious $I$ as a function of the relative
number of susceptible $S$ for
$S_{0}\in\\{0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9\\}$ and $\sigma=0.9$ (left) as
well as $\sigma=3$ (right). Curves with a local maximum are drawn in orange
while curves that are monotonically growing within $\mathbb{P}$ are drawn in
blue.
These plots once more highlight the qualitatively different solutions: the
solution $I(S)$ in Eq.(3.26) has a maximum at
$I_{\text{max}}=1-\frac{1}{\sigma}\left(1+\ln(\sigma S_{0})\right)$, which
lies inside of $\mathbb{P}$ only if the initial effective reproduction number
defined in Eq. (3.5) is $R_{e,0}\equiv\sigma S_{0}\geq 1$. Since $S(t)$ is a
monotonically decreasing function of time, as demonstrated in [30], this
implies that:
* 1.
If $R_{e,0}\leq 1$, then $I(t)$ tends to $0$ monotonically for $t\to\infty$,
as already established before.
* 2.
If $R_{e,0}>1$, $I(t)$ first increases to a maximum equal to
$1-\frac{1}{\sigma}\left(1+\ln(\sigma S_{0})\right)$ and then decreases to
zero for $t\to\infty$. The limit $S(\infty)=\lim_{t\to\infty}S(t)$ is the
unique root of
$\displaystyle
1-S(\infty)+\frac{1}{\sigma}\,\ln\left(\frac{S(\infty)}{S_{0}}\right)=0\,,$
(3.28)
in the interval $[0,\tfrac{1}{\sigma}]$, which is explicitly given in terms of
the Lambert function in Eq.(3.11).
Furthermore, inserting the solution (3.26) into Eq.(3.23), we obtain the
following non-linear, first order differential equation for $S$ (as a function
of time)
$\displaystyle\frac{dS}{dt}=\gamma\,S(S-1)-\gamma\,\frac{S}{\sigma}\,\ln\frac{S}{S_{0}}\,.$
(3.29)
The latter can be solved numerically using various methods.
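As a sketch of such a numerical treatment (forward Euler again, with an
ad-hoc step size), one can integrate Eq.(3.29) for $S(t)$ alone and read off
$I$ from the phase-space relation (3.26):

```python
import math

def I_of_S(S, S0, sigma):
    """Phase-space relation Eq. (3.26)."""
    return 1.0 - S + math.log(S / S0) / sigma

def solve_S(gamma, sigma, S0, dt=0.01, t_max=500.0):
    """Forward-Euler integration of the reduced equation (3.29)."""
    S = S0
    t = 0.0
    while t < t_max:
        dS = gamma * S * (S - 1.0) - gamma * (S / sigma) * math.log(S / S0)
        S += dt * dS
        t += dt
    return S
```

For $\gamma=0.1$, $\sigma=2$, $S_{0}=0.92$ the asymptotic value agrees with
the root of Eq.(3.28), and the maximum of $I(S)$ at $S=1/\sigma$ matches
$1-\frac{1}{\sigma}\left(1+\ln(\sigma S_{0})\right)$.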
### 3.5 Generalisations of the SIR Model
The SIR model, with 3 compartments $(S,I,R)$ and constant rates $\gamma$,
$\epsilon$ and $\zeta$, provides a simple, but rather crude, description of
the time evolution of an epidemic in an isolated population. This description
can be refined and extended in various fashions. The most common way consists
in adding more compartments, with more refined properties, giving rise to
models like SIRD (including Deceased separately), SEIR (including Exposed
individuals, in presence of a substantial incubation period), SIRV [125, 126]
(see also [127]) (including vaccinated individuals), and so on [32]. Here, as
an illustration, we shall discuss some generalisations of the SIR model that
do not introduce fundamentally new compartments: in Section 3.5.1 we shall
allow for time-dependent infection and recovery rates, in Section 3.5.2 we
shall include new terms in the differential equations (3.4) that simulate the
spontaneous appearance of new infectious (_e.g._ from outside of the
population), while in Section 3.5.3 we allow for multiple different types of
infectious individuals in an attempt to model inhomogeneous spreading of the
disease among the population. While these variations add new compartments to
the system, they are not of a completely new nature but simply copy an
already existing compartment. In all cases we shall motivate how these
modifications can be used to describe specific features of certain diseases.
For more general compartmental models (notably with the addition of completely
new compartments) we refer the reader to the above mentioned literature (see
_e.g._ [32] for an overview). Another generalisation is the inclusion of the
spatial evolution of the disease. This generally leads to coupled differential
equations which are of first order in the time variable and of second order in
the spatial variable. We shall not discuss these approaches in any detail in
this review.
#### 3.5.1 Time Dependent Infection and Recovery Rates
In the SIR model of Eqs (3.4), the rates $(\gamma,\epsilon,\zeta)$ are
considered to be constant in time. This assumption is difficult to justify, in
particular for epidemics that last over an extended period of time: many
diseases show (natural) seasonal effects [128, 129] related to the weather
dependence of the effectiveness of transmission vectors or the behaviour of
hosts (_e.g._ it can be argued that the rate of child infections is linked to
the cycle of school holidays [130]). Furthermore, even in the absence of an
effective vaccine, populations may take measures to prevent the spread of the
disease by imposing social distancing rules or quarantine procedures, thus
changing the (effective) infection rate $\gamma$. Pathogen mutations and
various forms of immunisations (including vaccines) can also increase or
reduce the value of $\gamma$ over time. With a prolonged duration of an
epidemic, more data about the disease can be collected, leading to better ways
to fight it on a biological and medical level, thus changing the recovery rate
$\epsilon$. Similarly, the disease may mutate and bypass previous immunisation
strategies, thus changing the rate $\zeta$ at which removed individuals may
become susceptible again. Modelling such effects and gauging their impact on
the time evolution of an epidemic requires $(\gamma,\epsilon,\zeta)$ to
change over time. In practice, this can be achieved by either interpreting
them as explicit functions of $t\in\mathbb{R}$, _i.e._
$(\gamma(t),\epsilon(t),\zeta(t))$, or by considering them to be functions of
the relative number of susceptible and/or infectious individuals, _i.e._
$(\gamma(S,I),\epsilon(S,I),\zeta(S,I))$. Since $(S,I)$ themselves are
functions of time, the latter possibility induces an implicit dependence on
$t$. For example, periodic and seasonal models in which these rates are
assumed to be smoothly varying functions in $t$ have been developed for HIV
[131], tuberculosis [132] or cutaneous leishmaniasis [133], while models for
pulse-vaccinations have been proposed in [134, 135, 136, 137, 138, 139, 140,
141, 142, 143, 144] (a model which in addition takes into account seasonal
effects was presented in [145]). The functional dependence can furthermore be
used, for example, to model population-wide lockdowns, _i.e._ quarantine
measures that are imposed if the relative number of infectious individuals
exceeds a certain value.
In the following we shall provide a simple (numerical) example of how the time
dependence of different parameters affects the time-evolution of the pandemic.
Figure 19: Numerical solution of the SIR equations (3.4) for the time-
dependent infection rate (3.30) with $S_{0}=0.99$, $\epsilon=0.05$, $\zeta=0$,
$\gamma_{0}=0.1$, $w=0.1$ and $\Delta I=0.05$.
We start with a simple
model that can be used to qualitatively assess the efficiency of lockdown
measures. To this end, we assume a ‘base’ infection rate $\gamma_{0}=$const.,
but assume that the population takes measures (social distancing, lockdowns,
_etc._) to ensure that the actual infection rate $\gamma(t)$ is reduced by a
percentage $w$ if the number of (active) infectious individuals exceeds a
certain value $\Delta I$. To model such social distancing measures in a very
simplistic fashion, we introduce the following implicit time-dependence:
$\displaystyle\gamma(I)=\gamma_{0}\,\left[1-w\,\theta(I(t)-\Delta
I)\right]\,,$ (3.30)
where $\theta$ is the Heaviside theta-function. (Footnote 6: To be
mathematically rigorous, since $\theta$ is not a continuous function, using
this infection rate in Eqs (3.4) would require interpreting
$(S(t),I(t),R(t))$ as distributions. This can be circumvented by replacing
$\theta(I(t)-\Delta I)$ by
$\tfrac{1}{2}\left[1+\tanh(\kappa_{0}(I(t)-\Delta I))\right]$ with
$\kappa_{0}$ a parameter that ‘smoothens’ the step function. For the
following discussion, however, this point shall not be relevant.) We hasten
to add that Eq.(3.30) offers a very
crude depiction of lockdown and quarantine measures taken by societies in the
real-world: indeed, decisions on whether or not to impose a lockdown (or other
social distancing measures) are usually based on numerous indicators which
would (at least) require a more complicated dependence of $\gamma$ on $I$
(_e.g._ its derivatives or averages of $I$ over a certain period of time prior
to $t$). Furthermore, the conditions when a lockdown is lifted are typically
independent of those when it is imposed.
An exemplary numerical solution of Eqs (3.4) for the particular $\gamma$ in
Eq.(3.30) is shown in Fig. 19. For better comparison we have also plotted
$I_{\text{no-q}}(t)$, which is the solution for $I(t)$ in the case of constant
$\gamma=\gamma_{0}=\mbox{const.}$ (_i.e._ with no reduction of the infection
rate) and all remaining parameters chosen the same. Despite its simplicity and
shortcomings, the model allows us to make a few basic observations: the plot
shows that the time-dependent infection rate leads to a reduction of the
maximum of infectious individuals (‘flattening of the curve’). Moreover, this
simple model allows to compare the effectiveness of the quarantine measures as
a function of $w$ and $\Delta I$. To gauge this effectiveness, we consider the
cumulative number of infected individuals, which is plotted for different
values of $w$ and $\Delta I$ in Fig. 20. These plots confirm the intuitive
expectation that lockdown measures become more effective the stronger the
reduction of the infection rate and the earlier they are introduced.
However, due to its simplicity, the model also misses certain aspects compared
to the time evolution of real-world communicable diseases in the presence of
measures to prevent its spread: for example, possibly due to non-zero
incubation time of most infectious diseases, the effect of quarantine measures
on the number of infectious individuals can be detected only a certain time
after the measures have been imposed (see [146, 147, 148, 149] where this has
been established for the COVID-19 pandemic). To include the latter would
require a refinement of the model.
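In the spirit of Fig. 19, the state-dependent rate (3.30) can be simulated
with the same Euler scheme; the sketch below records the epidemic peak, so
that runs with and without the reduction $w$ can be compared (parameters
follow the figure caption, the step size is ad-hoc):

```python
def sir_lockdown(gamma0, w, I_thr, eps, S0, I0, dt=0.01, t_max=600.0):
    """Eqs (3.4) with zeta = 0 and the infection rate of Eq. (3.30):
    gamma(I) = gamma0 * (1 - w * theta(I - Delta I)), with Delta I = I_thr."""
    S, I = S0, I0
    peak, t = I, 0.0
    while t < t_max:
        gamma = gamma0 * (1.0 - w) if I > I_thr else gamma0
        # simultaneous update; R follows from R = 1 - S - I
        S, I = S - dt * gamma * I * S, I + dt * (gamma * I * S - eps * I)
        peak = max(peak, I)
        t += dt
    return peak, S
```

Running with $w=0.1$ and with $w=0$ reproduces the 'flattening of the curve':
the mitigated peak lies below the unmitigated one.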
Figure 20: Numerical solution of the SIR equations (3.4) for the time-
dependent infection rate (3.30), with $S_{0}=0.99$, $\epsilon=0.05$,
$\zeta=0$, $\gamma_{0}=0.1$ and different choices of $(w,\Delta I)$:
$w\in\\{0.05\,,0.1\,,0.5\\}$ and $\Delta I=0.05$ (left) and $w=0.25$ and
$\Delta I\in\\{0.01\,,0.05\,,0.1\,,1\\}$ (right). Figure 21: Numerical
solution of the differential equations (3.31) for $S_{0}=0.99$, $\gamma=0.055$
and $\zeta=0.045$ for two different choices of $\xi$: $\xi=0$ (left) and
$\xi=0.002$ (right).
#### 3.5.2 Spontaneous Creation and Multiple Waves
In Section 2.3, in the context of percolation models, we have discussed
microscopic processes that correspond to the spontaneous creation of infected
individuals. Such processes can simulate, for example, the infection of
individuals through external sources (_e.g._ pathogen sources, contaminated
food sources, wildlife, _etc_.), but may also be used to model the infection
of susceptible individuals through asymptomatic infectious individuals or the
appearance of infectious individuals from outside of the population through
travel. How to introduce this process in SIR-type models has been discussed at
the end of Section 2.4. Mathematically, the SIR equations (3.4) can be
extended to
$\displaystyle\frac{dS}{dt}=-\gamma\,I\,S+\zeta\,R-\xi\,S\,,$
$\displaystyle\frac{dI}{dt}=\gamma\,I\,S-\epsilon\,I+\xi\,S\,,$
$\displaystyle\frac{dR}{dt}=\epsilon\,I-\zeta\,R\,,$ (3.31)
where the rate $\xi=\hat{\xi}$ of Section 2.4. The system still needs to be
solved with the initial conditions (3.3). Here $\xi\in\mathbb{R}_{+}$ is a
constant that governs the rate at which new infectious individuals appear in
the population, corresponding to a qualitative change in the basic infection
mechanism: since susceptible individuals can contract the disease even if
there are no infectious individuals present in the population, the epidemic
cannot be stopped before the entire population becomes infected. As a
consequence, the cumulative number of infected tends to $N$ for $t\to\infty$.
This is schematically shown in Fig. 21, where we show the solutions for
$\xi=0$ (left panel) compared to the solution for $\xi\neq 0$ (right panel).
In the former case, the number of cumulative infected tends to a finite
value, while in the latter case, $\lim_{t\to\infty}S(t)\to 0$.
Figure 22: Numerical solution of the differential equations (3.31) for
$S_{0}=0.99$, $\gamma=0.055$, $\zeta=0.045$ and
$\xi=0.002\,\left|\sin\left(\tfrac{2\pi t}{200}\right)\right|$.
Following the discussion in Section 3.5.1, we can also analyse the effect of a
time-dependent rate $\xi(t)$. This can be used to model a time-dependent rate
of the spontaneous creation of new infectious individuals, _e.g._ induced by
quarantine measures or geographical restrictions of the population. As a
simple example, we have plotted the numerical solution for a periodic function
$\xi$ in Fig. 22. Since $\xi$ does not remain zero after finite time, the
relative number of susceptible tends to $0$ (indicating that the entire
population is infected for $t\to\infty$). Moreover, the solution features
oscillations in time, which could be interpreted as different waves of the
epidemic spreading in the population.
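A minimal Euler sketch of Eqs (3.31) is given below. Since the recovery rate
used in Figs 21 and 22 is not quoted above, the values $\epsilon=0.1$ and
$\zeta=0$ in the demonstration are illustrative assumptions (with $\zeta=0$
the drain $-\xi\,S$ forces $S\to 0$, so the entire population is eventually
infected, both for a constant and for a periodic $\xi$):

```python
import math

def sir_with_source(gamma, eps, zeta, xi_of_t, S0, I0, dt=0.01, t_max=2000.0):
    """Eqs (3.31): SIR with spontaneous creation of infectious at rate xi(t)."""
    S, I, R = S0, I0, 0.0
    t = 0.0
    while t < t_max:
        xi = xi_of_t(t)
        dS = -gamma * I * S + zeta * R - xi * S
        dI = gamma * I * S - eps * I + xi * S
        dR = eps * I - zeta * R
        S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
        t += dt
    return S, I, R
```

Passing a callable for $\xi$ covers both the constant case of Fig. 21 and the
periodic case of Fig. 22.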
#### 3.5.3 Heterogeneous Transmission Rates and Superspreaders
As another generalisation of compartmental models, we consider adding multiple
versions of the compartments $S$, $I$, $R$ [150, 151] to model the
heterogeneity of social interactions and their impact on the spread of a
disease: indeed, indications for _superspreaders_ (_i.e._ individuals who
transmit the disease with a significantly higher rate than average) have been
found in many diseases (_e.g._ influenza [152, 153], rubella [154]) and for
certain diseases it has in fact been suggested that only a small fraction of
the population is responsible for most infections (see _e.g._ [155, 156] for a
study of COVID-19). Similarly, the gender of individuals plays an important
role in the modelling of sexually transmitted diseases (see _e.g._ [157, 158,
159, 160] for the study of gonorrhoea, which also suggests the necessity of an
extended range of contact rates [150]). To account for these modified contact
rates, modifications of the SIR model (as described above) have been
suggested, which consist in adding multiple compartments of infectious
individuals, _i.e._ new subgroups that allow one to refine the study of the
disease spread in a not-so-uniform population.
Figure 23: Flow between susceptible, 2 compartments of infectious and removed
individuals.
These additional compartments can, therefore, distinguish individuals based on
biological/medical indicators
(_e.g._ gender, age, preexistent medical conditions, etc.), geographic
distribution, social behaviour and/or may be used to introduce additional
stages in the progression of the disease, such as latency periods or different
stages of symptoms. Inclusion of more compartments naturally renders the
relevant set of differential equations more complicated and is more demanding
in terms of computational costs (see [161] as an example). Furthermore, the
increase in the number of parameters (rates) leads to a loss of predictive
power compared to simpler models.
In the following we shall present one simple example that includes one
additional class of infectious individuals. This model is useful in
characterising different (social) behaviours among individuals. Indeed, in
general, the infection rate $\gamma$ is not homogeneous throughout the entire
population, since it depends on various factors such as geographical mobility,
social behaviour _etc._ , which may vary considerably. A particular effect in
this regard is the existence of so-called _superspreaders_. These are
individuals who are capable of transmitting the disease to susceptible
individuals at a rate that significantly exceeds the average. The presence of
superspreaders can be described by introducing two groups of infectious
individuals $I_{1,2}$, with different infection rates $\gamma_{1,2}$ and
appearing with a relative ratio $\beta\in[0,1]$. Extending Fig. 12, the new
flow among compartments is shown in Fig. 23 (for $\zeta=0$), and can be
described by the following differential equations [150]:
$\displaystyle\frac{dS}{dt}=-(\gamma_{1}\,I_{1}+\gamma_{2}\,I_{2})\,S\,,$
$\displaystyle\frac{dI_{1}}{dt}=\beta(\gamma_{1}\,I_{1}+\gamma_{2}\,I_{2})\,S-\epsilon\,I_{1}\,,$
$\displaystyle\frac{dI_{2}}{dt}=(1-\beta)(\gamma_{1}\,I_{1}+\gamma_{2}\,I_{2})\,S-\epsilon\,I_{2}\,,$
$\displaystyle\frac{dR}{dt}=\epsilon(I_{1}+I_{2})\,,$ (3.32)
together with the initial conditions
$\displaystyle S(t=0)=S_{0}\,,$ $\displaystyle I_{1}(t=0)=I_{0,1}\,,$
$\displaystyle I_{2}(t=0)=I_{0,2}\,,$ $\displaystyle R(t=0)=0\,,$ (3.33)
with
$\displaystyle 0\leq S_{0},I_{0,1},I_{0,2}\leq 1\,,$ $\displaystyle
1=S_{0}+I_{0,1}+I_{0,2}\,.$ (3.34)
In [150] the parameters $\beta$, $\gamma_{1,2}$, and $\epsilon$ were assumed
to be constant in time. By defining an effective infectious population
$J=(\gamma_{1}\,I_{1}+\gamma_{2}\,I_{2})/\lambda$, we can extract the
following differential equations for $(S,J)$ (footnote 7: Note that our
definition of $J$ differs from the definition of the infective potential
$J=\gamma_{1}\,I_{1}+\gamma_{2}\,I_{2}$ in [150] by a constant
normalisation.):
$\displaystyle\frac{dS}{dt}=-\lambda\,J\,S\,,$
$\displaystyle\frac{dJ}{dt}=\lambda\,J\,S-\epsilon\,J\,,$ with
$\displaystyle\lambda=\gamma_{1}\,\beta+(1-\beta)\,\gamma_{2}\,.$ (3.35)
Thus, for $S$ and $J$ we obtain the same equations as in the classical SIR
model, which can be solved along the lines of Section 3.4: we extract the
following non-linear first-order equation for $S$:
$\displaystyle\frac{dS}{dt}=\lambda\,S^{2}-\epsilon\,S\,\ln
S+\mathfrak{c}_{0}\,S\,,$ with $\displaystyle\mathfrak{c}_{0}=\epsilon\,\ln
S_{0}-\lambda\,S_{0}-(\gamma_{1}I_{0,1}+\gamma_{2}I_{0,2})\,.$ (3.36)
This leads to the asymptotic number of susceptible $S(\infty)$ implicitly
given by
$\displaystyle 0=\lambda\,S(\infty)-\epsilon\,\ln
S(\infty)+\mathfrak{c}_{0}\,.$ (3.37)
As was pointed out in [150], the SIR model with superspreaders leads to the
same dynamics as the classical SIR models, albeit with a larger-than-average
infection rate $\lambda$, due to the contribution of superspreaders. With
constant infection and recovery rates and monotonically diminishing number of
susceptible (_i.e._ for $\zeta=0$), the impact of superspreaders is
conceptually not detectable. Nevertheless, from the perspective of the total
number of infected, superspreaders may have a significant impact in driving
the epidemics. In Fig. 24 (left) we have plotted the time evolution of a
typical solution, which indeed follows the same pattern as the usual SIR
model. However, as visible from Fig. 24 (right), even the presence of a
relatively small number of superspreaders can have a strong impact on the
cumulative number of infected.
Figure 24: Numerical solution of the SIR equations in the presence of
superspreaders, Eqs (3.32): time evolution for $S_{0}=0.99$, $I_{0,1}=0.01$,
$I_{0,2}=0$, $\gamma_{1}=0.04$, $\gamma_{2}=1$, $\epsilon=0.05$ and
$\beta=0.95$ (left) and comparison of the cumulative number of infected with
the ‘usual’ SIR model without superspreaders (_i.e._ $\beta=1$) (right).
Finally, it was argued in [150] that in situations in which the number of
susceptible individuals is no longer a monotonic function (which can for
example be achieved by allowing for a non-trivial $\zeta$), the time evolution
of the SIR model looks qualitatively different in the presence of
superspreaders.
### 3.6 The SIR model as a set of Renormalisation Group Equations
As we have seen from simple numerical studies in Section 3.2, solutions
$(S(t),I(t),R(t))$ of the classical SIR equations (3.4) exhibit interesting
properties as functions of time, which structurally remain valid for many of
the generalisations discussed in Section 3.5. In particular, the solutions
show a qualitatively different behaviour when a key parameter (in the
classical SIR model, the initial effective reproduction number
$R_{e,0}=S_{0}\sigma$) exceeds a critical value. This seems to play a similar
role to an order parameter in physical systems undergoing a phase
transition. A further related observation is the fact that Eqs (3.4) are
invariant under a re-scaling of the time-variable, if simultaneously all the
rates are also re-scaled:
$\displaystyle t\rightarrow\frac{1}{\mu}\,t\,,$
$\displaystyle\gamma\to\mu\,\gamma\,,$
$\displaystyle\epsilon\to\mu\,\epsilon\,,$
$\displaystyle\zeta\to\mu\,\zeta\,,$
$\displaystyle\forall\mu\in\mathbb{R}\setminus\\{0\\}\,.$ (3.38)
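This invariance is easy to verify numerically: integrating the classical SIR equations with rescaled rates over a correspondingly rescaled time interval reproduces the same trajectory. A minimal sketch (forward Euler, illustrative parameter values):

```python
def sir_final_S(gamma, eps, t_max, dt=1e-3):
    # Forward-Euler integration of the classical SIR equations (3.4), zeta = 0;
    # initial conditions and rates are illustrative.
    S, I = 0.99, 0.01
    for _ in range(int(t_max / dt)):
        dS = -gamma * S * I
        dI = gamma * S * I - eps * I
        S += dS * dt
        I += dI * dt
    return S

mu = 2.0
a = sir_final_S(gamma=0.3, eps=0.1, t_max=200.0)
# Rescaled run, Eq. (3.38): t -> t/mu, gamma -> mu*gamma, eps -> mu*eps
b = sir_final_S(gamma=mu * 0.3, eps=mu * 0.1, t_max=200.0 / mu, dt=1e-3 / mu)
```

Both runs land on the same asymptotic $S$, as the rescaling symmetry demands.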
This rescaling of the time-variable is structurally not unlike the change of
the energy scale in quantum field theories that is used to describe the
_Wilsonian renormalisation_ of the couplings among elementary particles [86,
87]. The renormalisation flow can also feature similar symmetries to the ones
of the solutions of the SIR equations. Compartmental models can be formulated
in a way that is structurally similar to Renormalisation Group Equations
(RGEs) [92, 162], and this analogy led to the formulation of an effective
description called _epidemiological Renormalisation Group_ [92, 93], which we
will introduce in the next section.
To understand the analogy, we recall that most (perturbative) quantum field
theories are effective models: they are typically based on an action that
encodes fundamental interactions of certain ‘bare’ fundamental fields, whose
strength is described by a set of coupling constants $\\{\lambda_{i}\\}$
(where $i$ takes values in a suitable set $\\{\mathcal{S}\\}$). Each effective
description, however, is generally well adapted only at a certain energy
scale, beyond which new degrees of freedom are more appropriate and new
interactions may become important. In practice, one introduces a cut-off
parameter (or some other regularisation form), beyond which the effective
description is no longer valid. The effective theory can thus be interpreted
as encoding all effective interactions, after having integrated out all
interactions at energy scales higher than the cut-off. From this perspective
it is clear that changing the energy scale (and thus the cut-off) will lead to
different interactions being integrated out and thus has a strong impact on
the theory, along with the fundamental degrees of freedom and the couplings
used to describe it. The process of arriving at the new effective theory is
called _renormalisation_. To describe it, we study universal quantities that
are invariant under the renormalisation, first and foremost the partition
function $\mathcal{Z}(\\{\lambda_{i}\\})$, which encodes the statistical
properties of the quantum system and depends on the set of coupling constants
mentioned before. For the purpose of this review, we can think of
$\mathcal{Z}$ as a mathematical function that encodes all the physical
properties of the system and its symmetries, independently of its explicit
definition. One of the symmetries is, as already mentioned, the invariance
under renormalisation, _i.e._ the change in the energy scale of the physical
interactions. If $\\{\lambda^{\prime}_{a}\\}$ (with $a$ taking values in a new
set $\\{\mathcal{S}^{\prime}\\}$) is the new set of renormalised couplings and
$\mathcal{Z}^{\prime}$ the partition function of the renormalised theory,
invariance of the partition function implies
$\displaystyle\mathcal{Z}(\\{\lambda_{i}\\})=\mathcal{Z}^{\prime}(\\{\lambda^{\prime}_{a}\\})\,.$
(3.39)
Hence, by continuously changing the energy scale, the theory sweeps out a
trajectory in the space of all possible effective theories, called the
_renormalisation group flow_ , which is governed by the invariance Eq.(3.39).
From the perspective of the interactions, the theory sweeps out a trajectory
in the space of all couplings $\lambda_{i}$. This is governed by the _beta-
functions_ $\beta_{i}(\lambda_{k})$, defined as the derivatives of the
couplings $\lambda_{i}$ with respect to the logarithm of the cut-off
parameter, and are functions of the couplings $\lambda_{i}$ themselves. The
flow is thus described in terms of a system of differential equations, much
like the SIR model, whose fixed points (_i.e._ zeros of the beta functions)
denote critical (_i.e._ scale invariant) points of the theory.
Before making the connection to epidemiology, we remark that physical theories
in general allow for field redefinitions, which means that they can be
equivalently formulated using different bare fields. This implies that the
coupling set $\\{\lambda_{i}\\}$ is not uniquely determined, but should rather
be thought of as a (local) choice of basis in the space of couplings. A
specific choice of a set of $\\{\lambda_{i}\\}$ is called a renormalisation
_scheme_. While a priori the specific form of the beta-functions depends on the
scheme (in particular their perturbative expansions as functions of the
$\\{\lambda_{i}\\}$), a change of scheme can be understood as an analytic
transformation in the space of couplings.
In [92], and subsequent works [93, 94, 163], it was suggested to interpret the
time evolution of the spread of a disease (specifically COVID-19) within the
framework of the Wilsonian renormalisation group equation. We shall explain
this description in more detail in Section 4. In the following, however, we
shall show how such a description can at least qualitatively be obtained from
the SIR equations by allowing time-dependent infection and removal rates, as
first pointed out in [93].
#### 3.6.1 Beta Function
In preparation to Section 4, we notice that the SIR model (with $\zeta=0$, but
time-dependent infection and recovery rates $\gamma(t)$ and $\epsilon(t)$) can
be written in a form which is strongly reminiscent of an RGE. To this end, we
return to Eqs (3.23) and repeat the same steps as in Section 3.4, except for
allowing $\sigma:\,[0,1]\to\mathbb{R}_{+}$ to be a priori a function of $S$.
Thus, we can integrate Eq.(3.24) in the following form
$\displaystyle I(S)=1-S+\int_{S_{0}}^{S}\frac{du}{u\,\sigma(u)}\,,$ (3.40)
which is compatible with the initial conditions in Eq.(3.23) at $t=0$.
Inserting this relation into the first equation of (3.4), for $\zeta=0$ this
yields
$\displaystyle\frac{dS}{dt}=-\gamma(t)\,S(t)\,\left[1-S+\int_{S_{0}}^{S}\frac{du}{u\,\sigma(u)}\right]\,.$
(3.41)
Instead of the relative number of susceptible, this equation can be re-written
in terms of the cumulative number of infected individuals $I_{\text{c}}$, as
defined in Eq. (3.6). Thus, Eq.(3.41) can be rewritten as
$\displaystyle\frac{dI_{\text{c}}}{dt}=N\,\gamma\,\left(1-\frac{I_{\text{c}}}{N}\right)\left[\frac{I_{\text{c}}}{N}+\int_{S_{0}}^{1-\frac{I_{\text{c}}}{N}}\frac{du}{u\,\sigma(u)}\right]\,.$
(3.42)
Next, generalising what was proposed in [92, 163], we define an _epidemic
coupling_ $\alpha(t)$ as a function of the cumulative number of infected
individuals:
$\displaystyle\alpha(t)=\phi(I_{\text{c}}(t))\,,$ (3.43)
where $\phi:\,[0,N]\rightarrow\mathbb{R}$ is a strictly monotonically growing,
continuously differentiable function with non-vanishing first derivative. A
priori, $\phi$ could also explicitly depend on $t$ (not only through
$I_{\text{c}}(t)$), but in the following we shall not explore this
possibility. In [92], in the context of the COVID-19 pandemic, $\phi$ was
chosen to be the natural logarithm, while in [163, 164] $\phi(x)=x$ was
chosen. For the moment, we shall leave $\phi$ arbitrary, which mimics the
liberty to choose different renormalisation schemes in the framework of the
Wilsonian approach. Upon defining formally the $\beta$-function as
$\displaystyle\beta(I_{\text{c}}(t))=-\frac{d\alpha}{dt}\,,$ (3.44)
Eq. (3.42) can be re-formulated as
$\displaystyle-\beta=\left(\frac{d\phi}{dI_{\text{c}}}\right)\frac{dI_{\text{c}}}{dt}=\left(\frac{d\phi}{dI_{\text{c}}}\right)\,N\,\gamma\,\left(1-\frac{I_{\text{c}}}{N}\right)\left[\frac{I_{\text{c}}}{N}+\int_{S_{0}}^{1-\frac{I_{\text{c}}}{N}}\frac{du}{u\,\sigma(u)}\right]\,.$
(3.45)
An explicit example that is designed to make contact with the work in [163] is
discussed in Section 3.6.2. Eq.(3.45), at least structurally, resembles an RGE
and has several intriguing properties to support this interpretation. Note
that with Eq.(3.6), we can also write
$\displaystyle\beta(t)=-\left(\frac{d\phi}{dI_{\text{c}}}\right)\,\frac{dI_{\text{c}}}{dt}=-\left(\frac{d\phi}{dI_{\text{c}}}\right)\,N\,\gamma(t)\,I(t)\,S(t)\,,$
(3.46)
which vanishes when:
* 1.
the infection rate vanishes $\gamma(t)=0$,
* 2.
or there are no susceptible individuals left $S(t)=0$,
* 3.
or the number of active infected vanishes $I(t)=0$ and the disease is
eradicated.
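These vanishing conditions can be checked on a numerical SIR solution. The sketch below uses illustrative rates and the 'scheme' $\phi(I_{\text{c}})=I_{\text{c}}$, so that $\beta=-N\,\gamma\,I\,S$ as in Eq. (3.46): $\beta$ is negative during the wave and flows back to zero once the active infections die out:

```python
def sir_beta_trace(gamma=0.3, eps=0.1, N=1.0, t_max=300.0, dt=1e-2):
    """Forward-Euler SIR run recording the epidemic beta-function of
    Eq. (3.46) in the 'scheme' phi(Ic) = Ic, i.e. beta = -N*gamma*I*S.
    Rates are illustrative, not fitted to any data."""
    S, I = 0.99, 0.01
    betas = []
    for _ in range(int(t_max / dt)):
        betas.append(-N * gamma * I * S)
        dS = -gamma * S * I
        dI = gamma * S * I - eps * I
        S += dS * dt
        I += dI * dt
    return betas

betas = sir_beta_trace()
```

The minimum of the trace marks the peak of the wave, while the late-time values approach zero from below as $I(t)\to 0$.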
Further (structural) evidence can be given by considering concrete solutions:
an example for the interplay between the beta-function and $\sigma$ is
provided in the following Section 3.6.2. Furthermore, independently of its
connection to compartmental models, a renormalisation group approach can be
used to model and describe the dynamics of an epidemic, as we discuss in
Section 4.
#### 3.6.2 Connection between SIR models and the eRG approach
We now discuss via a concrete example how to formulate a SIR model (with time-
dependent $\sigma(t)$) in a way that reproduces the eRG framework, which will
be discussed in more detail in the next section. This relation has been first
discussed in [93, 94]. Following the logic outlined above, we will highlight
the similarities between the SIR equations and RGEs. In particular, we show
how a particular beta-function can be obtained from a time-dependent $\sigma$,
starting from Eq. (3.45). Concretely, we shall make contact with the
following:
$\displaystyle-\beta_{0}(I_{\text{c}})={\lambda}\,I_{\text{c}}\left[\left(1-\frac{I_{\text{c}}}{{A}}\right)^{2}-\delta\right]^{p}\,,$
(3.47)
where $\phi(I_{\text{c}})=I_{\text{c}}$, and $p,\delta,A$ are constants. The
form of the beta-function (3.47) will be motivated and discussed in more
detail in Section 4.2 and is used to study a single wave followed by an
endemic period characterised by a quasi-linear growth, which can be a
precursor to the next wave. We shall return to this linear period in Section 3.7.
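The zeros of the beta-function (3.47) sit at $I_{\text{c}}=0$ and $I_{\text{c}}=A(1\mp\sqrt{\delta})$, which can be confirmed directly (illustrative parameter values, with $p=1$ for simplicity):

```python
def minus_beta0(Ic, lam=0.5, A=50_000.0, delta=0.04, p=1):
    # -beta_0 from Eq. (3.47) with phi(Ic) = Ic; parameters are illustrative.
    return lam * Ic * ((1.0 - Ic / A) ** 2 - delta) ** p

# Zeros at Ic = 0 and Ic = A*(1 -/+ sqrt(delta)); here sqrt(0.04) = 0.2:
z1 = minus_beta0(0.0)
z2 = minus_beta0(50_000.0 * (1.0 - 0.2))
z3 = minus_beta0(50_000.0 * (1.0 + 0.2))
```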
As a starting point, we shall consider a SIR model where, for simplicity,
$\epsilon$ is constant, _i.e._ the rate of recovery remains constant
throughout the pandemic (note that $\epsilon$ depends on biological properties
of the virus as well as on the medical and pharmaceutical means of the
population to cure it; since these are difficult to change without significant
effort, the value of $\epsilon$ is difficult to change), while $\gamma$ and
$\sigma=\frac{\gamma}{\epsilon}$ are continuous functions of $S$. Finally, to
make contact with Eq.(3.47), we shall consider the asymptotic limit
$S_{0}\rightarrow 1$. Identifying the function $\beta(t)$ in Eq.(3.46) with
$\beta_{0}$ leads to an integral equation that, for constant $\epsilon$, can
be turned into a differential equation for $\sigma(t)$ (recall that
$S=1-\frac{I_{\text{c}}}{N}$):
$\displaystyle\frac{d}{dI_{\text{c}}}\left[\frac{\beta_{0}(I_{\text{c}})}{\epsilon\,\sigma\left(1-\frac{I_{\text{c}}}{N}\right)}\right]=1-\frac{1}{\left(1-\frac{I_{\text{c}}}{N}\right)\,\sigma\left(1-\frac{I_{\text{c}}}{N}\right)}\,.$
(3.48)
The equation above can be brought into the form
$\displaystyle
0=\sigma^{\prime}(S)+g_{1}(S)\,\sigma(S)+g_{2}(S)\,\sigma^{2}(S)\,,$ with
$\displaystyle\begin{array}[]{l}g_{1}(S)=\frac{1}{S}-\frac{N}{\beta_{0}(N(1-S))}\,\left(\epsilon-\beta^{\prime}_{0}(N(1-S))\right)\,,\\\\[4.0pt]
g_{2}(S)=\frac{N\epsilon S}{\beta_{0}(N(1-S))}\,.\end{array}$ (3.51)
In the above and following equations, the prime indicates a derivative with
respect to the argument of the function. The general solution of this first
order, non-linear differential equation is
$\displaystyle\sigma(S)=\frac{D(S)}{\frac{1}{\sigma_{0}}+\int_{S_{0}}^{S}dx\,D(x)\,g_{2}(x)}\,,$
with $\displaystyle
D(S)=\text{exp}\left[-\int_{S_{0}}^{S}g_{1}(x)\,dx\right]\,.$ (3.52)
Here $\sigma_{0}$ is an integration constant, which can be determined by
comparing the first derivative of $\beta_{0}$ and $\beta$ at
$S=S_{0}\rightarrow 1$ (_i.e._ at $I_{\text{c}}=N(1-S_{0})=0$). In fact,
$\beta^{\prime}_{0}(0)=\beta^{\prime}(0)$ implies
$\displaystyle\sigma(1)=\sigma_{0}=1-\frac{1}{\epsilon}\,\beta_{0}^{\prime}(0)=1+\frac{\lambda}{\epsilon}(1-\delta)^{p}\,.$
(3.53)
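That Eq. (3.52) indeed solves the Bernoulli-type equation in (3.51) can be verified numerically. The sketch below uses simple stand-in coefficient functions $g_1$, $g_2$ (chosen purely for illustration, not the full $\beta_0$-derived expressions) and checks the residual of the ODE:

```python
import math

# Sanity check that the quadrature formula (3.52) solves the Bernoulli-type
# equation 0 = sigma' + g1*sigma + g2*sigma^2.  The coefficient functions
# below are simple stand-ins, NOT the beta_0-derived expressions of (3.51).
def g1(s): return 1.0
def g2(s): return s

S0, sigma0 = 1.0, 2.0

def D(S, n=400):
    # D(S) = exp(-int_{S0}^{S} g1(x) dx), midpoint rule
    h = (S - S0) / n
    acc = sum(g1(S0 + (k + 0.5) * h) for k in range(n)) * h
    return math.exp(-acc)

def sigma(S, n=400):
    # sigma(S) = D(S) / (1/sigma0 + int_{S0}^{S} D(x) g2(x) dx), Eq. (3.52)
    h = (S - S0) / n
    integral = 0.0
    for k in range(n):
        x = S0 + (k + 0.5) * h
        integral += D(x) * g2(x) * h
    return D(S) / (1.0 / sigma0 + integral)

# Residual of the ODE at a test point, via a central finite difference:
S, dh = 0.9, 1e-4
dsig = (sigma(S + dh) - sigma(S - dh)) / (2.0 * dh)
residual = dsig + g1(S) * sigma(S) + g2(S) * sigma(S) ** 2
```

The residual vanishes to numerical accuracy, and $\sigma(S_0)=\sigma_0$ by construction.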
With $\beta_{0}$ given in Eq.(3.47), the integral over $g_{1}$ can be
performed analytically (involving an Appell hypergeometric function). However,
using this result to insert $D(S)$ into the first expression in Eq.(3.52), the
integral in the denominator is more involved. We could only find analytic
solutions for generic $\lambda,\epsilon$ in the cases $(p=\tfrac{1}{4},\delta=0)$
and $(p=\tfrac{1}{2},\delta=0)$ (we remark in passing that we were also able to
compute analytic solutions for other combinations of $(p,\delta)$ for specific
combinations of $(\lambda,\epsilon)$, _i.e._ for certain fixed ratios
$\frac{\lambda}{\epsilon}$); their limit $S_{0}\to 1$ is
$\displaystyle\lim_{S_{0}\to
1}\sigma(1-\tfrac{I_{\text{c}}}{N})\bigg{|}_{{p=\frac{1}{4}}\atop{\delta=0}}$
$\displaystyle=\frac{\frac{\lambda
N}{\epsilon(N-I_{\text{c}})}\sqrt{1-\frac{I_{\text{c}}}{A}}}{1+\frac{2^{1-\frac{\epsilon}{\lambda}}A\epsilon}{I_{\text{c}}(\lambda+\epsilon)}\left(\sqrt{1-\frac{I_{\text{c}}}{A}}-1\right)\left(\sqrt{1-\frac{I_{\text{c}}}{A}}+1\right)^{\frac{\epsilon}{\lambda}}\,_{2}F_{1}\left(\frac{\epsilon}{\lambda},\frac{\lambda+\epsilon}{\lambda};\frac{\epsilon}{\lambda}+2;\frac{1-\sqrt{1-\frac{I_{\text{c}}}{A}}}{2}\right)}\,,$
$\displaystyle\lim_{S_{0}\to
1}\sigma(1-\tfrac{I_{\text{c}}}{N})\bigg{|}_{{p=\frac{1}{2}}\atop{\delta=0}}$
$\displaystyle=\frac{N(A-I_{\text{c}})(\lambda+\epsilon)\left(1-\frac{I_{\text{c}}}{A}\right)^{-\frac{\epsilon}{\lambda}}}{A\epsilon(N-I_{\text{c}})\,_{2}F_{1}\left(\frac{\epsilon}{\lambda},\frac{\lambda+\epsilon}{\lambda};2+\frac{\epsilon}{\lambda};\frac{I_{\text{c}}}{A}\right)}\,.$
(3.54)
However, the integration can be performed numerically, and for different
values of $(p,\delta)$, $\sigma$ as a function of $I_{\text{c}}$ is shown in
Fig. 25.
Figure 25: $\sigma$ as a function of $I_{\text{c}}$ for different values of
$p$ and $\delta=0$ in the limit $S_{0}\to 1$ with $N=1.000.000$, $A=50.000$,
$\lambda=0.5$ and $\epsilon\in\\{0.1\,,0.3\,,0.5\,,0.7\,,0.9\,,1.1\,,1.3\\}$.
We note that for $p\leq 1/2$, $\text{Im}(\sigma)\neq 0$ for $I_{\text{c}}>A$,
thus indicating that the solution does not extend beyond the maximal number of
cumulative infected $I_{\text{c}}=A$ (see Fig. 26). Similar plots for
$\delta\neq 0$ are shown in Fig. 27.
Figure 26: Numerical computation of the imaginary part of
$\sigma(1-\tfrac{I_{\text{c}}}{N})$ for $\lambda=0.5$, $\epsilon=0.7$,
$N=1.000.000$ and $A=50.000$ in the limit $S_{0}\to 1$ for various values of
$p$ and $\delta=0$.
Finally, we also remark that the numerical integration allows us to include
$\delta<0$ and can even be generalised to more general classes of
$\beta$-functions proposed in [163]
$\displaystyle-\beta_{0}(I_{\text{c}})={\lambda}\,I_{\text{c}}\left[\left(1-\frac{I_{\text{c}}}{{A}}\right)^{2}-\delta\right]^{p}(1-\zeta
I_{\text{c}})\,,$ (3.55)
as shown in Fig. 27. In the case $\zeta>0$ we remark that
$\text{Im}(\sigma)\neq 0$ for $I_{\text{c}}>\zeta^{-1}$, indicating as above
the breakdown of the assumptions.
Figure 27: $\sigma$ as a function of $I_{\text{c}}$ for different values of
$(p,\delta,\zeta)$ in the limit $S_{0}\to 1$ with $N=1.000.000$, $A=50.000$,
$\lambda=0.5$ and $\epsilon\in\\{0.1\,,0.3\,,0.5\,,0.7\,,0.9\,,1.1\,,1.3\\}$.
### 3.7 Analytic Solution during a Linear Growth Phase
Many epidemics generated by an infectious disease feature a multi-wave
pattern, with periods in between waves where an approximately linear growth of
the number of infected is observed. As an example, COVID-19 data show this
period very clearly in most of the countries, thanks to the large amount of
data collected (see Section 5). This phase of the epidemic, which links two
consecutive waves, has found a natural explanation in the eRG framework [163,
164], which we will review in Section 4.2.
Here we attempt to describe this linear phase from the perspective of
compartmental models. In fact, we have seen from the explicit solutions in
Section 2 that such a behaviour is not found in simple percolation models in
which, notably, the probability or rate of infection remains constant
throughout the entire pandemic. Similarly, this type of solution is absent in
compartmental models. However, more general approaches and extensions of these
simple models might exhibit such linear growth phases. Since the phenomenon is
seen in the cumulative number of infected (which is a ‘global’ key figure
pertaining to the entire population), we shall in the following analyse it
from the perspective of a SIR model, with time-dependent infection and
recovery rates.
#### 3.7.1 Simplified SIR Model with Constant New Infections
We consider a SIR model described by the equations (3.4) and the initial
conditions (3.3) with time-dependent $\gamma$, $\epsilon$ and $\zeta$ (see
Section 3.5.1). We define a linear growth regime as a period in time
$[t_{1},t_{2}]$ during which the cumulative number of infected $I_{\text{c}}$,
defined in Eq.(3.6) as
$\displaystyle
I_{\text{c}}(t)=N\,I_{0}+\int_{0}^{t}dt^{\prime}\,\gamma(t^{\prime})\,N\,I(t^{\prime})\,S(t^{\prime})\,,$
(3.56)
is a linear function of time. In other words,
$\displaystyle\frac{d}{dt}\,I_{\text{c}}(t)=N\,f=\text{const.}$
$\displaystyle\forall t\in[t_{1},t_{2}]\,,$ (3.57)
while $0\leq S(t),I(t),R(t)\leq 1$, with $f\in\mathbb{R}_{+}$. This implies
$\displaystyle\gamma(t)\,I(t)\,S(t)=f$ $\displaystyle\forall
t\in[t_{1},t_{2}]\,.$ (3.58)
The condition above allows us to analytically solve the SIR equations (3.4)
$\forall t\in[t_{1},t_{2}]$ with the initial conditions at the beginning of
the linear growth
$\displaystyle S(t=t_{1})=S_{s}\,,$ $\displaystyle I(t=t_{1})=I_{s}\,,$
$\displaystyle R(t=t_{1})=R_{s}\,,$ with $\displaystyle\begin{array}[]{l}0\leq
S_{s},I_{s},R_{s}\leq 1\,,\\\ S_{s}+I_{s}+R_{s}=1\,.\end{array}$ (3.61)
To see this, we define
$\displaystyle
D_{\epsilon}(t)=e^{\int_{t_{1}}^{t}\epsilon(t^{\prime})dt^{\prime}}\,,$ and
$\displaystyle
D_{\zeta}(t)=e^{\int_{t_{1}}^{t}\zeta(t^{\prime})dt^{\prime}}\,,$ (3.62)
which have the properties
$\displaystyle\frac{dD_{\epsilon}}{dt}(t)=\epsilon(t)\,D_{\epsilon}(t)\,,$
$\displaystyle\frac{dD_{\zeta}}{dt}(t)=\zeta(t)\,D_{\zeta}(t)\,,$
$\displaystyle D_{\epsilon}(t=t_{1})=1=D_{\zeta}(t=t_{1})\,.$ (3.63)
Next, we insert the constraint in Eq.(3.58) into Eqs (3.4) to obtain
$\displaystyle\frac{dI}{dt}=-\epsilon\,I+f\,,$ $\displaystyle\forall
t\in[t_{1},t_{2}]\,.$ (3.64)
This differential equation only contains $I$ (hence, it is decoupled from $S$
and $R$). Multiplying by $D_{\epsilon}(t)$, we find
$\left[\frac{dI}{dt}+\epsilon\,I\right]\,D_{\epsilon}(t)=f\,D_{\epsilon}(t)\qquad\qquad\Rightarrow\qquad\qquad\frac{d}{dt}\left[I(t)\,D_{\epsilon}(t)\right]=f\,D_{\epsilon}(t)\,,$
(3.65)
which can be directly integrated, with the initial conditions (3.61), as:
$\displaystyle
I(t)=\frac{1}{D_{\epsilon}(t)}\left[f\,\int_{t_{1}}^{t}\,D_{\epsilon}(t^{\prime})\,dt^{\prime}+I_{s}\right]\,,$
$\displaystyle\forall t\in[t_{1},t_{2}]\,.$ (3.66)
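A direct numerical check of Eq. (3.66): for a hypothetical time-dependent recovery rate $\epsilon(t)$ (chosen here so that $D_\epsilon$ of Eq. (3.62) has a closed form), the quadrature solution satisfies $dI/dt=-\epsilon\,I+f$, Eq. (3.64):

```python
import math

# Hypothetical time-dependent recovery rate (illustrative choice, picked so
# that the integrating factor D_eps of Eq. (3.62) has a closed form):
def eps(t):
    return 0.1 + 0.05 * math.sin(t)

def D_eps(t):
    # exp(int_0^t eps(t') dt'), with t1 = 0
    return math.exp(0.1 * t + 0.05 * (1.0 - math.cos(t)))

f, I_s, t1 = 0.002, 0.1, 0.0

def I(t, n=4000):
    # Midpoint quadrature of Eq. (3.66):
    # I(t) = [ f * int_{t1}^{t} D_eps(t') dt' + I_s ] / D_eps(t)
    h = (t - t1) / n
    integral = sum(D_eps(t1 + (k + 0.5) * h) for k in range(n)) * h
    return (f * integral + I_s) / D_eps(t)

# The solution must satisfy dI/dt = -eps(t)*I + f  (Eq. 3.64):
t, dh = 5.0, 1e-4
residual = (I(t + dh) - I(t - dh)) / (2.0 * dh) - (-eps(t) * I(t) + f)
```

The initial condition $I(t_1)=I_s$ holds by construction, and the residual vanishes to quadrature accuracy.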
For the relative number of recovered, $R$, we can integrate the last equation
of (3.4)
$\displaystyle\frac{dR}{dt}(t)+\zeta(t)\,R=\epsilon(t)\,I(t)\,,$ (3.67)
where, inserting the solution for $I(t)$ in Eq.(3.66), the right hand side can
be understood as an inhomogeneity. Multiplying by $D_{\zeta}$ we obtain, as
before,
$\displaystyle\frac{d}{dt}\left[R(t)\,D_{\zeta}(t)\right]=\epsilon(t)\,I(t)\,D_{\zeta}(t)\,,$
(3.68)
which can be directly integrated, with the initial conditions (3.61), to give
$\displaystyle
R(t)=\frac{R_{s}}{D_{\zeta}(t)}+I_{s}\,\int_{t_{1}}^{t}dt^{\prime}\,\frac{\epsilon(t^{\prime})}{D_{\epsilon}(t^{\prime})}\,\frac{D_{\zeta}(t^{\prime})}{D_{\zeta}(t)}+f\,\int_{t_{1}}^{t}dt^{\prime}\int_{t_{1}}^{t^{\prime}}dt^{\prime\prime}\,\epsilon(t^{\prime})\,\frac{D_{\epsilon}(t^{\prime\prime})}{D_{\epsilon}(t^{\prime})}\,\frac{D_{\zeta}(t^{\prime})}{D_{\zeta}(t)}\,,$
$\displaystyle\forall t\in[t_{1},t_{2}]\,.$ (3.69)
Finally, $S(t)$ is obtained through the constraint (3.1): $S(t)=1-I(t)-R(t)$.
Notice that the solutions (3.66) and (3.69) remain valid as long as $0\leq
S(t),I(t),R(t)\leq 1$.
#### 3.7.2 Vanishing $\zeta$ and Constant $\epsilon$
To simplify the solutions found above, we can adapt the functions $\zeta$ and
$\epsilon$ to reflect more closely the COVID-19 pandemic: since currently only
very few cases of patients contracting COVID-19 twice within a short time,
_i.e._ a single epidemic wave, are known in the medical literature [165], we
can set $\zeta(t)=0$ to simplify the solutions (3.66) and (3.69). Since
$\zeta=0$ also implies $D_{\zeta}(t)=1$, we find for these solutions
$\displaystyle S(t)$ $\displaystyle=S_{s}-f(t-t_{1})\,,$ $\displaystyle I(t)$
$\displaystyle=\frac{I_{s}}{D_{\epsilon}(t)}+f\,\int_{t_{1}}^{t}\,\frac{D_{\epsilon}(t^{\prime})}{D_{\epsilon}(t)}\,dt^{\prime}\,,$
$\displaystyle R(t)$
$\displaystyle=R_{s}+I_{s}\,\int_{t_{1}}^{t}dt^{\prime}\,\frac{\epsilon(t^{\prime})}{D_{\epsilon}(t^{\prime})}+f\,\int_{t_{1}}^{t}dt^{\prime}\int_{t_{1}}^{t^{\prime}}dt^{\prime\prime}\,\epsilon(t^{\prime})\,\frac{D_{\epsilon}(t^{\prime\prime})}{D_{\epsilon}(t^{\prime})}\,,\hskip
51.21504pt\forall t\in[t_{1},t_{2}]\,.$ (3.70)
We have explicitly verified that this is indeed a solution of Eqs (3.4) that
satisfies the correct initial conditions.
Furthermore, since the recovery rate in most cases depends on the disease in
question and/or the state of medical advancement to cure it, $\epsilon$ is
difficult to change throughout a pandemic without significant effort. For
simplicity, we therefore consider it in the following to be constant, _i.e._
$\epsilon=$ const. (in addition to $\zeta=0$), such that
$D_{\epsilon}(t)=e^{\epsilon(t-t_{1})}$. In this case, we can perform the
integrations in Eq.(3.70) to obtain
$\displaystyle I(t)$
$\displaystyle=e^{-\epsilon(t-t_{1})}\,\left[f\int_{t_{1}}^{t}dt^{\prime}\,e^{\epsilon(t^{\prime}-t_{1})}+I_{s}\right]=e^{-\epsilon(t-t_{1})}\,I_{s}+\frac{f}{\epsilon}\left(1-e^{-\epsilon(t-t_{1})}\right)\,,$
$\displaystyle\forall t\in[t_{1},t_{2}]\,,$ (3.71)
as well as the relative number of removed
$\displaystyle R(t)$
$\displaystyle=R_{s}+I_{s}\,\epsilon\,\int_{t_{1}}^{t}\,dt^{\prime}\,e^{-\epsilon(t^{\prime}-t_{1})}+\epsilon
f\int_{t_{1}}^{t}dt^{\prime}\,e^{-\epsilon
t^{\prime}}\int_{t_{1}}^{t^{\prime}}dt^{\prime\prime}\,e^{\epsilon
t^{\prime\prime}}$
$\displaystyle=R_{s}+f(t-t_{1})+\left(I_{s}-\frac{f}{\epsilon}\right)\left(1-e^{-\epsilon(t-t_{1})}\right)\,,$
$\displaystyle\forall t\in[t_{1},t_{2}]\,.$ (3.72)
One can directly verify that these expressions satisfy Eqs (3.4) along with
$\displaystyle S(t)+I(t)+R(t)=S_{s}+I_{s}+R_{s}\,,$ $\displaystyle\forall
t\in[t_{1},t_{2}]\,.$ (3.73)
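The closed-form solutions (3.70)–(3.72) can be verified directly; the sketch below uses the parameter values of Fig. 28 and checks the conservation law (3.73) together with the $I$-equation, which under the constraint $\gamma=f/(S\,I)$ reduces to $dI/dt=f-\epsilon\,I$:

```python
import math

eps_r, f = 0.05, 0.002                     # recovery rate and growth slope
S_s, I_s, R_s, t1 = 0.9, 0.1, 0.0, 0.0     # values used in Fig. 28

def S(t):                                  # Eq. (3.70), S linear in time
    return S_s - f * (t - t1)

def I(t):                                  # Eq. (3.71)
    e = math.exp(-eps_r * (t - t1))
    return e * I_s + (f / eps_r) * (1.0 - e)

def R(t):                                  # Eq. (3.72)
    e = math.exp(-eps_r * (t - t1))
    return R_s + f * (t - t1) + (I_s - f / eps_r) * (1.0 - e)

t, dh = 20.0, 1e-5
# Conservation law (3.73): the compartments sum to S_s + I_s + R_s:
total = S(t) + I(t) + R(t)
# With gamma(t) = f/(S*I), the I-equation reduces to dI/dt = f - eps_r*I:
dI = (I(t + dh) - I(t - dh)) / (2.0 * dh)
residual = dI - (f - eps_r * I(t))
```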
For some (random) values of $\epsilon$, $f$, $S_{s}$, $I_{s}$ and $R_{s}$, the
functions $S(t)$, $I(t)$ and $R(t)$ (for the region where $0\leq
S(t),I(t),R(t)\leq 1$) are plotted in the left panel of Fig. 28, while the
associated $\gamma(t)=\frac{f}{S(t)\,I(t)}$ is plotted in the right panel.
Figure 28: Solutions (3.70) and $\gamma(t)$ for $\epsilon=0.05$, $f=0.002$,
$S_{s}=0.9$, $I_{s}=0.1$, $R_{s}=0$ and $t_{1}=0$ as a function of time $t$.
#### 3.7.3 Constant Active Number of Infectious Individuals
During the linear growth periods, the COVID-19 data also show that the number
of active infectious individuals remains constant. Intriguingly, this feature
is also observed in the solutions in the left panel of Fig. 28. In this
section, we will seek a solution of the SIR model with this property, _i.e._
$\displaystyle I(t)=\mathsf{f}=\text{const.}$ $\displaystyle\forall
t\in[t_{1},t_{2}]\,,$ (3.74)
for some $\mathsf{f}\in[0,1]$, which in particular implies
$\displaystyle\frac{d}{dt}\,I(t)=0\,,$ $\displaystyle\forall
t\in[t_{1},t_{2}]\,.$ (3.75)
Injecting this into Eqs (3.4), we obtain under the assumption $I(t)\neq 0$
$\forall t\in[t_{1},t_{2}]$
$\displaystyle S=\frac{\epsilon}{\gamma}\,,$ $\displaystyle\forall
t\in[t_{1},t_{2}]\,,$ (3.76)
and thus for $\zeta\neq 0$
$\displaystyle\frac{d}{dt}\left(\frac{\epsilon}{\gamma}\right)=-\epsilon\,\mathsf{f}\,+\zeta\,R\,,$
$\displaystyle\Longrightarrow$ $\displaystyle
R=\frac{1}{\zeta}\left[\frac{d}{dt}\left(\frac{\epsilon}{\gamma}\right)+\epsilon\mathsf{f}\right]\,,$
$\displaystyle\forall t\in[t_{1},t_{2}]\,.$ (3.77)
For $\zeta=0$ we obtain the following constraint for the infection and
recovery rate
$\displaystyle\frac{d}{dt}\left(\frac{\epsilon}{\gamma}\right)=-\epsilon\,\mathsf{f}\,,$
$\displaystyle\forall t\in[t_{1},t_{2}]\,.$ (3.78)
For the classical SIR model (for which $\epsilon$ and $\gamma$ are time-
independent $\forall t$ and $\zeta=0$), assuming that $\gamma\neq 0$, the
constraint (3.78) implies that either
* 1.
$\mathsf{f}=0$, which however is excluded since $I\neq 0$;
* 2.
or $\epsilon=0$, in which case $\frac{dR}{dt}=0$ $\forall t$ (_i.e._ not just
$t\in[t_{1},t_{2}]$). However, with the initial conditions (3.3) this implies
$R(t)=0$ and thus
$\displaystyle\frac{d}{dt}S(t)=-\gamma\,\mathsf{f}\,S$
$\displaystyle\Longrightarrow$ $\displaystyle
S=c\,e^{-\gamma\mathsf{f}\,t}\,,$ $\displaystyle\forall t\in[t_{1},t_{2}]\,,$
(3.79)
for $c\in[0,1]$. On the other hand, the relation (3.1) implies that
$\frac{dS}{dt}=0$ and thus (with $\gamma\neq 0$ and $\mathsf{f}\neq 0$) $S=0$
(consistent with Eq.(3.76)), in which case $I=\mathsf{f}=1$ and the entire
population is infected (and stays infected for all times).
Thus, within the classical SIR model, the only solution with
$I(t)=\mathsf{f}\neq 0$ constant is $\epsilon=0$ (_i.e._ instead of the SIR
model we only consider the SI model) and $I=1$. This corresponds to the late
phase of the SI model, where the entire population is infected. We, therefore,
see that the traditional SIR model cannot account for the linear growth of the
cumulative number of infected related to Eq.(3.75) and observed in the
COVID-19 data.
## 4 Epidemic Renormalisation Group
Executive Summary
* 1.
We introduce the epidemic renormalisation group approach that efficiently
captures asymptotic time symmetries of the diffusion process.
* 2.
The framework is based on flow equations characterised by fixed point
dynamics.
* 3.
We show the power of the approach by studying single wave epidemics, which can
be naturally generalised to describe multi-wave patterns via complex fixed
points.
* 4.
We demonstrate how the spreading of diseases across different regions of the
world can be efficiently described and predicted.
As anticipated in the previous section, it has been proposed in [92, 94] to
study the spread of a communicable infectious disease within the framework of
the Wilsonian renormalisation group [86, 87]. We have already pointed out in
Section 3 that the SIR model, defined by the differential equations (3.4), can
be formulated in a fashion that is structurally similar to a set of RGEs (see
[162]). In this section we review the new framework, first proposed in [92,
94], dubbed _epidemic Renormalisation Group (eRG)_.
The eRG framework consists, effectively, of a single differential equation
that captures the time evolution of the disease diffusion, after the
microscopic degrees of freedom and interactions have been ‘integrated out’ and
all the detailed effects (virulence of the disease, social measures, _etc._)
are taken into account by the few parameters in the equation. This leads to a
much more economical description in terms of calculation complexity as
compared to microscopic or compartmental models. At this stage, the main
relation with the renormalisation group is the fact that symmetries can be
explicitly included in the formalism. In the case of the eRG, the symmetries
are related to time scale invariance, _i.e._ the presence of phases where the
disease diffusion is (nearly) stable in time. In the Wilsonian renormalisation
group, which describes the energy dependence of physical charges (for
instance, the interaction strength among fundamental particles), the
symmetries involved are related to scale invariance of distances and energies.
An RGE, therefore, describes the energy flow of a charge from the Ultra-Violet
(UV) regime at high energies to the Infra-Red (IR) regime at low energies. The
eRG also describes a flow, however in time instead of in energy, as we will
see shortly. The physical charge is replaced by an _epidemiological charge_ ,
which is defined as a monotonic, differentiable function of the cumulative
number of individuals infected by the disease as a function of time. This
discussion has already been anticipated in Section 3.6.1.
The economy of this approach in terms of free parameters and computing time
needed to solve the flow of the disease makes it an ideal tool to study the
diffusion of an infectious disease at different scales, from small regions to
a global level. The eRG framework has been first used to characterise a single
epidemic wave, _i.e._ a single episode of exponential increase in the number
of infections followed by an attenuation [92], and extended to study the
inter-region propagation of the disease [94], with validation on the COVID-19
data in Europe [95] and in the US states [96]. Mobility data have also been
used to study the effect of non-pharmaceutical interventions [149] as well as
the role played by passenger flights in the US [96]. As we also review in this
section, the framework can be extended to include the multi-wave pattern [163,
164] that emerges in many communicable diseases, like the 1918 influenza
pandemic, the seasonal flu and the COVID-19 pandemic of 2019. Finally,
preliminary work on the inclusion of vaccinations [96] and virus mutations
[166, 167] is present in the literature; however, we will not cover it in this
review.
### 4.1 Beta Function and Asymptotic Fixed Points
The main motivation behind the eRG approach to epidemiology stems from the
observation that a single epidemic wave starts with a very small number of
infected individuals and ends when the cumulative number of infections reaches
a constant, hence no new infections are detected. This dynamics is
characteristic of a system that flows from a fixed point at $t=-\infty$, when
no infections are present, to a new fixed point at $t=\infty$, when the number
of cumulative infections reaches a constant value again. The dynamical flow
between the two fixed points can be described by the following differential
equation:
$\displaystyle-\beta(\alpha)=\frac{d\alpha}{dt}(t)=\lambda\,\alpha\left(1-\frac{\alpha}{A}\right)\,,$
(4.1)
where $\alpha$ is a function of the number of infected, hence a function of
time. The form of this equation is an ansatz, for now, and we will
establish the precise relation between $\alpha$ and the number of infections
later. The main feature to stress is the presence of two zeros, corresponding
to the fixed points of the system: if $\alpha(t_{0})=0$ or $\alpha(t_{0})=A$
at any time $t_{0}$, the system will remain in this state at all times. The
zeros can be characterised through the so-called _scaling exponents_:
$\displaystyle\vartheta=\frac{\partial\beta}{\partial\alpha}\bigg{|}_{\alpha^{\ast}}=\left\{\begin{array}[]{lcl}-\lambda&\text{for}&\alpha_{1}^{*}=0\,,\\[4.0pt]
\lambda&\text{for}&\alpha_{2}^{*}=A\,,\end{array}\right.$ (4.4)
where $\alpha^{\ast}$ is the epidemic coupling constant at the fixed point. A
negative (positive) scaling exponent corresponds to a repulsive (attractive)
fixed point. Thus, a system in the repulsive fixed point at $\alpha^{\ast}=0$,
once perturbed (by a small initial number of infected individuals) will flow
towards the attractive fixed point at $\alpha^{\ast}=A$. As such, $A$ is a
function of the cumulative number of individuals infected during the epidemic
wave.
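As a check, the scaling exponents in Eq.(4.4) can be recovered numerically by differentiating the beta function of Eq.(4.1) at its two zeros. A minimal Python sketch (the parameter values are purely illustrative) using central finite differences:

```python
# Beta function of Eq.(4.1): -beta(a) = lam * a * (1 - a/A),
# hence beta(a) = -lam * a * (1 - a/A).
lam, A = 0.3, 1000.0

def beta(a):
    return -lam * a * (1.0 - a / A)

# Central finite differences at the fixed points alpha* = 0 and alpha* = A.
h = 1e-6
theta_0 = (beta(h) - beta(-h)) / (2 * h)         # expect -lam (repulsive)
theta_A = (beta(A + h) - beta(A - h)) / (2 * h)  # expect +lam (attractive)
print(theta_0, theta_A)
```

The signs reproduce the repulsive/attractive classification below Eq.(4.4).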
Figure 29: The logistic function schematically representing the cumulative
number of infected as a function of time. With regard to Eq.(4.5) we have
$A=20\,000$, $B=1\,000\,000$ and $\lambda=0.2$.
The solution of Eq.(4.1) is a _logistic function_ (sigmoid) of the form:
$\displaystyle\alpha:\mathbb{R}\longrightarrow[0,A]\,,\qquad t\longmapsto\alpha(t)=\frac{A}{1+B\,e^{-\lambda t}}\,,$ (4.5)
where $A,B,\lambda\in\mathbb{R}_{+}\setminus\{0\}$. This function shows a
characteristic ‘S’-shape (see Fig. 29 for a schematic representation) and has
the following asymptotic values
$\displaystyle\lim_{t\to-\infty}\alpha(t)=0\,,$
$\displaystyle\lim_{t\to\infty}\alpha(t)=A\,,$ (4.6)
corresponding to the zeros of the derivative $\frac{d\alpha}{dt}=0$.
As already mentioned, the parameter $A$ corresponds to (a function of) the
asymptotic number of infected cases during the epidemic wave. The second
parameter in Eq.(4.1), $\lambda$, which has dimension of a rate, measures how
fast the number of infections increases, while $B$ is an integration constant
that corresponds to a shift of the entire curve in time and determines the
beginning of the infection increase. More details about the properties of this
function and its epidemiological interpretation can be found in [92] and will
not be repeated here. It is, however, important to notice that the parameters
$\lambda$ and $A$ can be removed from the differential equation by a simple
rescaling of the function and of the time variable:
$\displaystyle\frac{d\tilde{\alpha}}{d\tau}=\tilde{\alpha}(\tau)\,(1-\tilde{\alpha}(\tau))\,,\qquad\tau=\lambda
t\,,\quad\tilde{\alpha}(\tau)=\frac{\alpha(\tau/\lambda)}{A}\,.$ (4.7)
Thus, while $A$ is a mere normalisation, $\lambda$ can be thought of as a
‘time dilation’ parameter. When the normalised solutions are plotted against
the ‘local time’ $\tau$, therefore, all epidemic waves should reveal the same
universal temporal shape.
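This collapse follows directly from the closed form (4.5): in the variables of Eq.(4.7) the normalised solution $\tilde{\alpha}(\tau)=1/(1+B\,e^{-\tau})$ depends on $A$ and $\lambda$ only through the shift constant $B$. A minimal Python check, with illustrative parameter values and equal $B$ for both waves:

```python
import math

def alpha(t, A, B, lam):
    """Logistic solution of Eq.(4.1), cf. Eq.(4.5)."""
    return A / (1.0 + B * math.exp(-lam * t))

# Two waves with very different sizes A and rates lam but the same shift B.
A1, lam1 = 2.0e4, 0.20
A2, lam2 = 5.0e6, 0.05
B = 100.0

# In 'local time' tau = lam*t the normalised curves alpha/A coincide.
taus = [0.1 * k for k in range(-100, 200)]
gap = max(abs(alpha(tau / lam1, A1, B, lam1) / A1
              - alpha(tau / lam2, A2, B, lam2) / A2) for tau in taus)
print(gap)  # essentially zero: a single universal shape
```

Waves with different $B$ collapse onto the same curve as well, up to an overall shift in $\tau$.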
This universality property was first pointed out in [92] using data from the
Hong Kong (HK) SARS-2003 outbreak as well as the COVID-19 pandemic during the
spring of 2020. It has been shown that the time dependence of the cumulative
total number of infected cases in various regions of the world shows the same
characteristic behaviour. In [92], the epidemic coupling has been defined as
the logarithm of the cumulative infected, $\alpha(t)=\ln I_{c}(t)$, however
other choices, like $\alpha(t)=I_{c}(t)$, can also reproduce the data. The
same framework can also be applied to the number of hospitalisations or the
number of deceased individuals. This feature of the epidemiological data shows
that the dynamics encoded in Eq.(4.1) provides an accurate description of the
diffusion of an infectious disease in terms of a flow equation.
In [92, 94, 163] the following dictionary between the spread of an epidemic
and the Wilsonian renormalisation group was proposed:
* 1.
The time variable is identified with the (negative) logarithm of the energy
scale $\mu$
$\displaystyle\frac{t}{t_{0}}=-\ln\left(\frac{\mu}{\mu_{0}}\right)\,,$ (4.8)
where $t_{0}$/$\mu_{0}$ set the scale for the time and energy (for simplicity,
and without loss of generality, we will fix $t_{0}=1$). With this
identification, Eq.(4.1) is similar to the RGE for the gauge coupling in a
theory that features a Banks-Zaks type fixed point [168], _i.e._ an
interacting fixed point at low energies (in the infrared).
* 2.
The solution can be associated to a coupling constant in the high energy
physics RGEs, $\alpha$. The epidemic coupling strength is defined as a
monotonic, differentiable and bijective, function $\phi$ of the cumulative
number of infected cases
$\displaystyle\alpha(t)=\phi(I_{\text{c}}(t))\,.$ (4.9)
In [92, 163] $\phi$ was chosen as the natural logarithm $\phi(x)=\ln(x)$,
while in [163, 164] it was chosen $\phi(x)=x$. The choice was justified by a
better fit to the actual data of the COVID-19 pandemic, while from the
perspective of the Wilsonian renormalisation group, the difference corresponds
to a different choice of scheme.
* 3.
The _beta function_ is defined as the (negative) time-derivative of the
epidemic coupling strength
$\displaystyle\beta\equiv\frac{d\alpha}{d\ln\left(\frac{\mu}{\mu_{0}}\right)}=-\frac{d\alpha}{dt}=-\frac{d\phi}{dI_{\text{c}}}\,\frac{dI_{\text{c}}}{dt}(t)\,.$
(4.10)
In order to better model the respective data of various countries during the
COVID-19 pandemic, it was furthermore proposed in [163, 164] to consider the
more general beta-function
$\displaystyle-\beta(\alpha)=\frac{d\alpha}{dt}(t)=\lambda\,\alpha\left(1-\frac{\alpha}{A}\right)^{2p}\,,$
(4.11)
for $p\in[1/2,\infty)$ and $\lambda,A\in\mathbb{R}_{+}$. The role of the
exponent $p$ is to smooth the ‘S’-shape of the solution when it approaches
the attractive fixed point at $\alpha^{\ast}=A$.
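The smoothing effect of $p$ can be seen by integrating Eq.(4.11) numerically. A simple Euler sketch (illustrative parameters; $p=1/2$ reduces to the logistic case):

```python
def solve(p, lam=0.2, A=1.0, I0=0.01, dt=0.01, T=200.0):
    """Euler integration of d(alpha)/dt = lam * alpha * (1 - alpha/A)**(2p)."""
    a = I0
    for _ in range(int(T / dt)):
        a += dt * lam * a * (1.0 - a / A) ** (2 * p)
    return a

a_logistic = solve(p=0.5)  # ordinary logistic: saturates quickly at A
a_smooth = solve(p=2.0)    # larger p: much slower approach to A
print(a_logistic, a_smooth)
```

By the end of the run the $p=1/2$ solution has effectively reached the fixed point, while the $p=2$ solution is still far below it, illustrating the flattened approach to $\alpha^{\ast}=A$.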
#### 4.1.1 Generalisation to multiple regions
The approach discussed so far assumes an isolated population of sufficient
size. However, the simplicity of the eRG approach allows for a simple
generalisation to study the interaction between various regions of the world
[94] via the mobility of individuals. For $M$ separated populations (labelled
by $i=1,\ldots,M$) of size $N_{i}$ whose cumulative number of infected is
denoted by $I_{\text{c},i}$, it was proposed in [94] that infections can be
transmitted between these populations by travellers. Hence, the epidemic
diffusion can be described by $M$ coupled differential equations, in the form
of Eq.(4.11) for each population, with the addition of an interaction term:
$\displaystyle-\beta(\alpha_{i})=\lambda\,\alpha_{i}\left(1-\frac{\alpha_{i}}{A}\right)^{2p}+\frac{d\phi}{dI_{\text{c},i}}\,\sum_{j=1}^{M}\frac{k_{ij}}{N_{i}}\left(I_{\text{c},j}(t)-I_{\text{c},i}(t)\right)\,,$
(4.12)
where $k_{ij}\in\mathbb{R}$ is a measure for the number of travellers between
populations $i$ and $j$. The contribution to the beta function can be obtained
by replacing $I_{\text{c},i}\to\phi^{-1}(\alpha_{i})$, where $\alpha_{i}$ is
the epidemic coupling in each population. For more details, see Ref. [94].
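A minimal numerical sketch of Eq.(4.12) for $M=2$ regions, choosing the scheme $\phi(x)=x$ (so that $d\phi/dI_{\text{c},i}=1$) and $p=1/2$; all parameter values are illustrative. Region 2 starts disease-free and is seeded only by travellers from region 1:

```python
# Euler integration of the coupled eRG equations (4.12) for M = 2 regions.
lam = (0.10, 0.10)      # growth rates lambda_i
A = (1.0e4, 2.0e4)      # asymptotic wave sizes A_i
N = (2.0e5, 3.0e5)      # population sizes N_i
k = 50.0                # symmetric traveller coupling k_12 = k_21

I = [10.0, 0.0]         # region 2 starts with no infections
dt = 0.1
for _ in range(20000):  # integrate up to t = 2000
    d0 = lam[0] * I[0] * (1 - I[0] / A[0]) + (k / N[0]) * (I[1] - I[0])
    d1 = lam[1] * I[1] * (1 - I[1] / A[1]) + (k / N[1]) * (I[0] - I[1])
    I = [I[0] + dt * d0, I[1] + dt * d1]
print([round(x) for x in I])  # both regions end close to their sizes A_i
```

Even though region 2 sits initially at its repulsive fixed point, the traveller term perturbs it away and a full wave develops, mirroring the flow of Fig. 30.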
Figure 30: Schematic representation of the flow in a two-region coupled eRG
framework of Eq. (4.12). In this fictitious example we fix $\lambda_{1}=0.7$,
$\lambda_{2}=0.9$, $N_{1}=200000$, $N_{2}=300000$,
$A_{1}=\log\left(\frac{1}{40}N_{1}\right)$,
$A_{2}=\log\left(\frac{N_{2}}{10}\right)$ and $p_{1}=p_{2}=\frac{1}{2}$. For
the matrix of couplings $k_{ij}$, we use $k_{12}=k_{21}=10^{-3}$ and
$k_{11}=k_{22}=0$. The arrows represent the two-component vector field
$(-\beta(\alpha_{1}),-\beta(\alpha_{2}))$, with the overall length represented
by the colour-coding. The function $\phi$ was chosen as the identity,
$\phi(x)=x$ $\forall x\in\mathbb{R}$.
These coupled differential equations can be thought of as flow equations, in
the spirit of the Wilsonian renormalisation, with the second term representing a
coupling between the different regions. A graphical representation of the
coupled $\beta$-functions in Eq.(4.12) can be given in the form of flow in an
$M$-dimensional space. In Fig. 30, we provide a numerical (fictitious) example
for $M=2$: choosing the scheme $\alpha_{i}(I_{c,i})=\ln(I_{c,i})$ for $i=1,2$,
the arrows indicate the vector field
$\left(\begin{array}[]{c}-\beta(\alpha_{1})\\ -\beta(\alpha_{2})\end{array}\right)$ with the colour representing the length
$\sqrt{\beta(\alpha_{1})^{2}+\beta(\alpha_{2})^{2}}$ at any point in the
$(I_{c,1},I_{c,2})$-plane. The black dots are the actual trajectory of the
system calculated as the numerical solution of the coupled differential
equations (4.12). As can be seen, the system flows along the arrows from a
repulsive fixed point at $(I_{c,1},I_{c,2})=(0,0)$ (all arrows point away from
it), which represents the absence of the disease in both countries, to an
attractive fixed point (all arrows point towards it) which corresponds to the
eradication of the disease.
The coupled eRG framework in Eq.(4.12) has been used to explain the diffusion
of the COVID-19 pandemic across different regions of the world. This is one of
the main mechanisms that can generate multiple waves across a geographic
region, while a second one will be discussed in the next section. The method
has been used to predict the arrival of a second COVID-19 wave, which has hit
Europe in the fall of 2020 [95]: the new infections originate from a seed
region, which can be interpreted as inflow from outside Europe or the effect
of hotspots and clusters, while the number of travellers, i.e. the entries of
$k_{ij}$, were generated randomly. In Ref. [96], the same framework was used
to explain the geographical wave patterns observed in the United States, with
the aid of open-source flight data to estimate the couplings.
### 4.2 Complex (fixed point) epidemic Renormalisation Group
Although the beta-function in Eq.(4.1) is relatively simple and contains only
two parameters, it describes the time evolution of short-time epidemics (such
as HK SARS-2003 and each wave of COVID-19) quite efficiently, as the flow from
a repulsive to an attractive fixed point (or from a UV to an IR fixed point
in the language of high-energy physics). However, this beta-function is too
simple to describe correctly longer lasting pandemics with a more intricate
time-evolution, such as subsequent waves of COVID-19: the attractive fixed
point at $t\to\infty$ corresponds to a complete eradication of the disease and
Eq.(4.1) describes outbreaks that follow a single wave. We have already
discussed the role of passenger mobility in generating further epidemiological
waves. However, data from COVID-19 has unveiled a second potential mechanism
that may be at the origin of multiple-waves: in fact, after the end of each
wave, a period of linear growth has been observed in all regions of the world
(except those where the virus has been locally eradicated thanks to aggressive
isolation policies). This is characterised by a nearly-constant number of new
infected cases, and it can be seen as an endemic phase of the pandemic, where
the virus circulates within the local population, without an exponential
increase.
Figure 31: Left: solutions of the CeRG equation, normalised to $A=1$ and with
time in units of $\lambda$, for $-\delta=0,10^{-4},10^{-3},10^{-2}$ and
$\delta_{\rm max}$, for $p=0.55$. Right: estimated duration of the linear
growth phase, in units of $\lambda$, as a function of $-\delta$ for $p=0.5$,
$0.6$, $0.7$, $0.8$, $0.9$ and $1$. The lines end at $\delta=-\delta_{\rm
max}$.
In [163] it was proposed that this linear phase is evidence for a near time-
scale invariance symmetry in the dynamics governing the diffusion of the
virus. In practice, the system does not reach the second fixed point of
Eq.(4.1), instead it hits an instability that drives the system to a new
exponential phase after a given amount of time. The time-evolution of
pandemics can still be described within the framework of a RGE, however with a
more complicated beta-function that features a richer structure of (complex)
fixed points. The new framework was called the _Complex epidemic
Renormalisation Group (CeRG)_. In the CeRG approach, the beta function of
Eq.(4.11) is modified as follows:
$\displaystyle-\beta(I_{\text{c}})=\frac{dI_{\text{c}}}{dt}=\lambda\,I_{\text{c}}\left[\left(1-\frac{I_{\text{c}}}{{A}}\right)^{2}-\delta\right]^{p}=\lambda\,I_{\text{c}}\left(\frac{I_{\text{c}}}{{A}}-1+\sqrt{\delta}\right)^{p}\left(\frac{I_{\text{c}}}{{A}}-1-\sqrt{\delta}\right)^{p}\,,$
(4.13)
where the additional parameter $\delta\in\mathbb{R}_{-}$, _i.e._
$\delta=-|\delta|$. While this equation can be written for any epidemic
coupling $\alpha$, here we commit to the case $\alpha(t)=I_{\text{c}}(t)$ for
reasons that will be clear in the next Section. The eRG equation (4.11) can be
recovered for $\delta\to 0$. For non-vanishing $\delta$, instead of only two
asymptotic fixed points, this function has three fixed points
$\displaystyle I_{\text{c},0}=0\,,$ $\displaystyle
I_{\text{c},\pm}=A\left(1\pm i\sqrt{|\delta|}\right)\,,$ (4.14)
with complex $I_{\text{c},\pm}\in\mathbb{C}$. Besides the repulsive fixed
point at $I_{\text{c}}^{\ast}=0$, which remains, the attractive fixed point
splits into two complex fixed points. Since the (cumulative) number of
infected individuals is a strictly real number, the system cannot actually
reach the complex fixed points and thus cannot exactly enter into a time-scale
invariant regime at infinite time. Instead, for small $|\delta|$, when the
solution approaches the would-be fixed point at $I_{\text{c}}\approx A$, the
time evolution will be strongly slowed down due to the effect of the nearby
complex fixed points. This results in a near-linear behaviour of the solution,
as shown in the left panel of Fig. 31. Thus, the new beta function (4.13)
realises an approximate time-scale symmetry in the solution. Concretely, the
precise form of the flow in the vicinity of these complex fixed points depends
on $|\delta|$:
Figure 32: Schematic flow diagrams representing the $\beta$-function (4.13)
with $A=1$, $\lambda=0.05$, $\delta=-0.003$ and $p=1/2$: the two-component
vectors are given by $(\text{Re}(-\beta(I_{c})),\text{Im}(-\beta(I_{c})))$,
with $I_{c}\in\mathbb{C}$ with the overall length represented by the colour-
coding. Left panel: trajectories of the flow in the complex plane with initial
conditions $I_{c}(t=0)=I_{c,0}$, with $\text{Im}(I_{c,0})$ specified in the
figure. Right panel: close-up on the complex fixed point $I_{c,+}$. The dashed
line represents a branch cut, which needs to be chosen such that it does not
intersect the real axis.
* 1.
For $|\delta|<\delta_{\text{max}}=\frac{p^{2}}{1+2p}$, the beta-function has a
local maximum and $I_{\text{c}}$ enters into a regime of near linear growth
characterised by
$\displaystyle\frac{dI_{\text{c}}}{dt}(t)\sim\text{const.}$ (4.15)
In the context of epidemics, the linear growth phase can be associated to an
endemic phase of the disease, when the virus keeps diffusing within the
population without an exponential growth in the number of new infected (this
corresponds to a situation with reproduction number $R_{0}=1$, which keeps the
number of infectious cases constant). A connection of this regime with
compartmental models of the SIR type has been presented in Section 3.7.
* 2.
In the CeRG, the linear growth is only an intermediate phase, which is the
prelude to a new exponential increase in the number of infections. Its
duration depends on $|\delta|$, and can be estimated as [163]
$\displaystyle\Delta t_{\rm
endemic}=-2\int_{A}^{\infty}\frac{dI_{\text{c}}}{\beta(I_{\text{c}})}\ .$
(4.16)
This time is plotted for different values of $p$ as a function of $\delta$ in
the right panel of Fig. 31.
* 3.
For $|\delta|\geq\delta_{\text{max}}$ the beta-function no longer has a local
maximum and $I_{\text{c}}$ keeps growing exponentially, without a linear
growing phase.
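For real $I_{\text{c}}$ and $\delta=-|\delta|$, Eq.(4.13) reads $dI_{\text{c}}/dt=\lambda I_{\text{c}}[(1-I_{\text{c}}/A)^{2}+|\delta|]^{p}$, which is real and strictly positive away from the origin. The following sketch (Euler integration, illustrative parameters) measures the duration of the quasi-linear plateau near $I_{\text{c}}\approx A$ for two values of $|\delta|$ and evaluates the threshold $\delta_{\text{max}}=p^{2}/(1+2p)$:

```python
def endemic_time(abs_delta, p=0.5, lam=1.0, A=1.0, I0=0.01, dt=1e-3):
    """Time spent with I_c in [0.9A, 1.1A] before the next exponential phase."""
    I, t, linger = I0, 0.0, 0.0
    while I < 2.0 * A and t < 100.0:  # stop once the new wave takes off
        dI = lam * I * ((1.0 - I / A) ** 2 + abs_delta) ** p
        if 0.9 * A < I < 1.1 * A:
            linger += dt
        I += dt * dI
        t += dt
    return linger

t_small = endemic_time(1e-4)  # small |delta|: long quasi-linear phase
t_large = endemic_time(1e-2)  # larger |delta|: shorter plateau
delta_max = 0.5 ** 2 / (1 + 2 * 0.5)  # p^2/(1+2p) for p = 1/2
print(t_small, t_large, delta_max)
```

The smaller $|\delta|$, the closer the complex fixed points sit to the real axis and the longer the system lingers in the endemic phase, in line with the right panel of Fig. 31.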
In Fig. 32 we represent the dynamics encoded in Eq.(4.13) as a flow in the
complex space of $I_{c}$. We clearly see that the system starts from the
unstable fixed point at $I_{c}=0$, and moves towards the approximate one at
$\text{Re}I_{c}\approx A$, where the evolution slows down. This is represented
by the closeness of the data-points, which are calculated at equal intervals
of time. We also show flows in the complex plane, which are unrealistic as
$I_{c}$ remains a real number when describing a pandemic. Nevertheless, all
the solutions feature a slowing down of the infection growth near the complex
fixed points, which reproduces the endemic phase of linear growth.
The endemic linear-growing phase, therefore, is the prelude of a new wave of
the epidemic diffusion. The CeRG approach can describe this endemic phase and
the beginning of the next wave, however the number of infections would
continue to grow indefinitely. In the following section we will further extend
the approach to take into account the multi-wave pattern.
### 4.3 Modelling multi-wave patterns
Pandemics like the 1918 Spanish flu [1] and COVID-19 have shown the appearance
of multiple consecutive waves of exponential increase in the number of
infections. In the case of COVID-19, the data support the fact that an endemic
linearly-growing phase is always present in between two consecutive waves
[163]. The CeRG model can be extended to take into account this structure, in
a way that reproduces nicely the current data [164].
The multi-wave beta function, for an epidemic with $w$ consecutive waves, can
be written as:
$\displaystyle-\beta_{\rm multi-waves}(I_{\text{c}})=\lambda
I_{\text{c}}\;\prod_{\rho=1}^{w}\left[\left(1-\zeta_{\rho}\,\frac{I_{\text{c}}}{A}\right)^{2}-\delta_{\rho}\right]^{p_{\rho}}\,,$
(4.17)
with $\zeta_{\rho}\leq 1$, $|\delta_{\rho}|\ll 1$ and $p_{\rho}>0$ for
$\rho\in\{1,\ldots,w\}$. The normalisation $A$ can be fixed to match the
first wave, so that
$0<\zeta_{w}<\dots<\zeta_{2}<\zeta_{1}=1\,.$ (4.18)
Besides the repulsive fixed point at $I_{\text{c}}^{\ast}=0$, the equation has
a series of complex fixed points ruled by the parameters $\delta_{\rho}$.
Without loss of generality, we can fix $\delta_{w}=0$ so that the disease is
extinguished after the last wave, and the total number of infections during
the whole epidemic is given by $\lim_{t\to\infty}I_{\text{c}}(t)=A/\zeta_{w}$.
This description, however, only works for $\alpha(t)\propto I_{\text{c}}(t)$,
for which the value of the various fixed points are well separated [164], but
not for $\alpha(t)\propto\ln I_{\text{c}}(t)$.
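Restricting to real $I_{\text{c}}$, the two-wave dynamics of Eq.(4.17) can be integrated directly. The sketch below uses parameters close to the fictitious example of Fig. 33 ($A=1/2$, $\zeta_{1}=1$, $\zeta_{2}=1/2$, $\delta_{1}=-0.003$, $\delta_{2}=0$, $p_{1}=1/2$, $p_{2}=1$):

```python
A, lam = 0.5, 1.0
z1, z2 = 1.0, 0.5
d1, p1 = 0.003, 0.5   # |delta_1|: complex fixed points end the first wave
p2 = 1.0              # delta_2 = 0: real attractive fixed point at A/z2 = 1

I, dt = 0.005, 0.005
endemic = 0.0         # time spent near the end of the first wave, I ~ A
for _ in range(60000):  # integrate up to t = 300 (time in units of 1/lam)
    f1 = ((1.0 - z1 * I / A) ** 2 + d1) ** p1
    f2 = ((1.0 - z2 * I / A) ** 2) ** p2
    if 0.9 * A < I < 1.1 * A:
        endemic += dt
    I += dt * lam * I * f1 * f2
print(I, endemic)  # I ends near A/z2 = 1 after a long endemic plateau
```

The trajectory shows the two exponential episodes of the right panel of Fig. 33, separated by the slow endemic phase controlled by $|\delta_{1}|$.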
Figure 33: Left panel: Schematic flow diagram representing the
$\beta$-function (4.17) and trajectories for different initial conditions with
$A=1/2$, $\zeta_{1}=1$, $\zeta_{2}=1/2$, $\lambda=0.05$, $\delta_{1}=-0.003$,
$p_{1}=1/2$ and $p_{2}=1$. The trajectories of the flow start from the initial
conditions $I_{c}(t=0)=I_{c,0}$, with $\text{Im}(I_{c,0})$ specified in the
figure. Right panel: Cumulative number of infected $I_{c}$ for the trajectory
with $\text{Im}(I_{c,0})=0$, _i.e._ the light blue dots in the left panel.
In Fig. 33 we show the flow in the complex plane for Eq.(4.17) with two waves
($w=2$). After leaving the unstable fixed point at $I_{c}=0$, the system
slows down near the complex fixed points, hence generating the linear endemic
phase like in the CeRG approach, before entering a second wave. The latter
ends at the final attractive fixed point. In the right panel we show the time
evolution of $I_{c}(t)$ for this fictitious example, clearly showing two
exponential episodes. As for the CeRG, the time delay between the two waves is
controlled by the number of new cases in the endemic phase, i.e. by the
parameter $\delta$ in Eq.(4.17). Hence, this model highlights the importance
of imposing some measures to limit the circulation of the virus after the end
of an epidemic wave in order to tame and control the emergence of the next
one. Note, finally, that this formalism can also be used for studying the
diffusion in between different regions, by adding a coupling term like the
second term in Eq.(4.12).
## 5 COVID-19
The approaches that we have discussed in the previous sections are applicable
to a large variety of infectious diseases. The main differences are in certain
key parameters, such as the method of transmission of the pathogen, the
incubation time, the infection and removal (mortality) rate, _etc._ They
influence the resulting time evolution of the epidemic and lead, for example,
to a different total duration of the epidemic, total number of infected and
fatalities, _etc._ In this section, as a case study, we present data for the
cumulative number of infected individuals in various countries during the
COVID-19 pandemic, which started at the end of 2019 and is still ravaging the
world. Since large-scale testing is at the heart of many countries’ strategies
to combat this pandemic, there is a large amount of publicly available data
documenting the spread of the SARS-CoV-2 across the globe. Here we use data
from public repositories [169] for the time period of 15/02/2020 until
17/08/2021.
We use these data to highlight peculiarities of the time evolution of the
spread of the disease, namely the previously mentioned distinct multi-wave
structure of repeated phases of exponential growth in the number of infected
individuals interspersed with phases of (quasi-)linear growth: Figure 34 shows
examples of the first of such waves in countries taken from all around the
globe. The plots show the cumulative number of infected individuals as well as
the cumulative number of deaths. These plots provide examples of the
epidemiological dynamics under very different conditions not only with regards
to geographical (_e.g._ size of the population, population density, level of
urbanisation), climatic, economical (_e.g._ the gross national product of each
country), socio-cultural and political factors (_e.g._ the level of medical
care the population has access to), but also different strategies the
countries have deployed to combat the epidemic. While this has led to
different dynamics in each country (with regard for example to the total
number of cases or the duration of the wave), the infection numbers all follow
a similar shape. Indeed, as the solid lines in Fig. 34 show, in each case the
data can be fitted with a logistic function of the form of Eq.(4.5) and the
differences only lie with the different numerical parameters, as reported in
Table 1.
Country | $A$ | $\lambda$ | $B$
---|---|---|---
Australia | $6854\pm 24$ | $0.2095\pm 0.0051$ | $7509\pm 1615$
Azerbaijan | $37703\pm 136$ | $0.0547\pm 0.0004$ | $2013\pm 110$
Brazil | $5624661\pm 26343$ | $0.0333\pm 0.0003$ | $314\pm 14$
Canada | $101987\pm 544$ | $0.0716\pm 0.0012$ | $222\pm 18$
Germany | $177112\pm 910$ | $0.1192\pm 0.003$ | $399\pm 58$
Kenya | $39469\pm 236$ | $0.0571\pm 0.0006$ | $13234\pm 1225$
New Zealand | $1491\pm 3$ | $0.2133\pm 0.0032$ | $24095\pm 3636$
South Africa | $686350\pm 1586$ | $0.0637\pm 0.0007$ | $19449\pm 1910$
Table 1: Parameters of the logistic function in Eq.(4.5) obtained by fitting
the epidemiological curves (cumulative number of infected) shown in Fig. 34.
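Fits of this kind can be reproduced with a standard nonlinear least-squares routine such as `scipy.optimize.curve_fit`. The sketch below fits synthetic data; the ‘true’ parameter values are invented for illustration and do not correspond to any country in Table 1:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, A, lam, B):
    """Eq.(4.5): cumulative number of infected as a function of time."""
    return A / (1.0 + B * np.exp(-lam * t))

# Synthetic 'epidemiological curve': daily cumulative counts with 1% noise.
rng = np.random.default_rng(0)
t = np.arange(0.0, 120.0)
A_true, lam_true, B_true = 1.0e5, 0.07, 200.0
data = logistic(t, A_true, lam_true, B_true) \
       * (1.0 + 0.01 * rng.standard_normal(t.size))

# Initial guesses: plateau from the data, plausible rate and shift.
popt, _ = curve_fit(logistic, t, data, p0=(data.max(), 0.1, 100.0))
A_fit, lam_fit, B_fit = popt
print(A_fit, lam_fit, B_fit)
```

Reasonable initial guesses matter in practice: $A$ is well approximated by the late-time plateau of the data, while $\lambda$ sets the steepness of the rise.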
Furthermore, Fig. 34 also shows the cumulative number of deaths in each
country, which can also be fitted with a logistic function of the same form.
As discussed in Section 4, the fact that epidemiological curves in very
different regions of the world, under very different circumstances, can be
fitted with a single class of functions is due to a self-similarity structure
of the corresponding dynamics. Indeed, the underlying symmetry principle that
organises the spread of the disease around (near) fixed points of flow
equations is at the heart of the eRG approach.
Figure 34: Cumulative number of individuals infected with SARS-CoV-2 and
cumulative number of deaths during the first wave in countries across all
continents. The dots represent the data reported at [169] (averaged over a
week) and the coloured lines fits with logistic functions of the form (4.5).
After the first wave of COVID-19, most countries have entered into an endemic
phase, where the cumulative number of infected individuals has grown linearly,
followed by further waves. As an example, Figs. 35–40 show the infection
numbers in 48 European countries for the entire duration of the pandemic so
far. We have highlighted in each of them individual waves and have fitted them
with a logistic function as a solution of the eRG approach that we have
reviewed in Section 4 (for ease of visibility, we have focused on the larger
such epidemiological episodes). As can be seen from the quality of the
fit, although these functions only have three parameters $(A,B,\lambda)$, they
capture correctly the cumulative number of cases despite the fact that the
data (even for different waves within the same country) represent very
different epidemiological situations:
* 1.
the countries show large geographical, climatic as well as socio-cultural
differences;
* 2.
the waves occur during different seasons under different meteorological
conditions;
* 3.
during each wave the governments of these countries have imposed different
non-pharmaceutical interventions to reduce the spread of the virus;
* 4.
since the beginning of 2021 all countries have started vaccination campaigns
which have led to a rate of roughly 60% of all adults across Europe being
fully vaccinated by summer 2021;
* 5.
since the beginning of the pandemic, the SARS-CoV-2 virus has mutated multiple
times and several different variants (with different infection and mortality
rates as well as different efficacy for the vaccines) have dominated certain
periods of the epidemiological dynamics.
As is visible from the plots in Figs. 35–40, despite all of these
differences, the cumulative number of infected can still be organised by a
self-similarity principle, which is characterised by logistic functions.
Finally, in Figs. 35–40 we have restricted ourselves to fitting waves that occurred
before the summer of 2021. Many countries, however, show in late summer/early
fall of 2021 once more a tendency of growing infection numbers, which (despite
the vaccination efforts), may indicate the onset of new waves.
Figure 35: Cumulative number of individuals infected with SARS-CoV-2 from
15/02/2020 until 17/08/2021 in different countries of Europe. The red dots
represent the data reported at [169] and the coloured lines fits with logistic
functions of the form (4.5). The coloured regions indicate the time frame over
which the data were fitted for a single wave. Figure 36: Cumulative number of
individuals infected with SARS-CoV-2 (contd.) Figure 37: Cumulative number of
individuals infected with SARS-CoV-2 (contd.) Figure 38: Cumulative number of
individuals infected with SARS-CoV-2 (contd.)
Figure 39: Cumulative number of individuals infected with SARS-CoV-2 (contd.)
Figure 40: Cumulative number of individuals infected with SARS-CoV-2 (contd.)
## 6 Outlook and Conclusions
The study of the time evolution of infectious diseases is a long standing
subject: the impact of pandemics on human society cannot be overstated (as the
recent devastating case of COVID-19 has highlighted). Consequently, over the
course of more than a century, numerous approaches and mathematical models
have been proposed with the aim to predict the spread of diseases among a
population, devise tools to estimate their biological, social and economical
impact and develop strategies to mitigate the harm done to society as a whole.
In this report we give a review of this endeavour that is inspired by
theoretical physics, in particular the study of phase transitions and critical
phenomena, encompassed by the framework of field theory. Indeed, we organise
mathematical models ranging from ‘microscopic’ models, in which the spread of
the disease is modelled at the individual level, to ‘effective’ models, in
which these microscopic interactions have been ‘summed up’ and replaced by the
description of the time evolution of suitable macroscopic degrees of freedom.
We give concrete examples in each case and show how they are related to one
another. We also show how to extend the models to account for observed
phenomena, like multi-wave dynamics and the emergence of time-dependent
symmetries such as approximate time-dilation invariance.
We start with lattice and percolation models in Section 2. These are among the
most ‘microscopic’ models and allow one to simulate the spread of a disease at
the level of individuals, making it easy to incorporate biological and social
peculiarities related to the transfer of the disease from an infected
individual to a susceptible one. Typically at great computational
cost, these models provide insight into how these details influence the time
evolution of the disease at larger scales and can highlight emerging patterns
and symmetries. Indeed, via numerical analyses of simple models, we show in
Section 2 the emergence of critical behaviour: as a function of some key
parameters, the system undergoes a phase transition from a state where only a
small fraction of the population gets infected over time to a state where a
significant portion of individuals is affected. Near the critical point, this
behaviour can be cast into a field theoretical description for which we review
an action formalism.
We next argue that mean field and averaging procedures of percolation models
naturally lead to compartmental models. The latter are among the oldest
descriptions of epidemiological processes (the SIR-model dating back almost a
century) and are ubiquitous in the modern study of infectious diseases. As we
review in Section 3, following our classification of approaches, compartmental
models are effective descriptions: rather than describing the spread of a
disease among individuals of the population, they comprise (first order)
differential equations that yield (among others) the total number of
infectious individuals in the population. The microscopic details of the
spread of the disease have been ‘averaged’ and enter into the details of the
equations. The seeming loss of control over the microscopic details of the
infectious dynamics comes at the benefit of a more ‘global’ description of the
disease (and typically a reduced computational cost). In Section 3, we provide
an in-depth review of SIR-like compartmental models that, from a theoretical
vantage point, elucidates their mechanics and dynamics. We analyse, review and
extend the models to take into account single-wave dynamics, multi-wave
patterns and even superspreaders, thus highlighting the flexibility of the
approach as a whole. Finally, we also discuss how these models can be
re-organised to make efficient use of time-scaling symmetries of the
epidemiological dynamics, in a fashion which emphasises the role of fixed points.
In Section 4 we develop these ideas further and discuss the epidemic
Renormalisation Group framework, which is in fact organised around the
symmetry principle of time-scale invariance of the diffusion solutions. Using
intuition from particle physics, the epidemiological process is described
through flow equations (called beta-functions), which govern the trajectories
of the system that connect different fixed points. The latter correspond to
stationary solutions of the dynamics, in which either no disease is present in
the first place or it has been completely eliminated. By invoking an even
richer structure of fixed points, an extended eRG approach allows one to model
multi-wave pandemics.
The approaches and models outlined in this review can be adapted to a large
range of different situations and cases: in Section 5 we have presented
results related to the COVID-19 pandemic. We highlight how the multi-wave
dynamics, as well as the impact of non-pharmaceutical interventions, vaccines
and the geographical mobility of (a portion of) the population can be
modelled by the approaches outlined in the previous sections.
In order to keep the discussion as simple as possible and to focus on the
underlying ‘physics’, we have illustrated the ideas in this review by rather
simple models. The latter can be much more refined and, for example, take into
account other aspects and phenomena related to the spread of diseases. These
range, for example, from developing strategies for protecting the population
by implementing efficient vaccination campaigns and concrete strategies on the
use of non-pharmaceutical interventions (such as lockdowns, social distancing
measures and travel restrictions), to gauging the impact of mutations and
adaptation mechanisms of pathogens [170, 171]. We have refrained from working
out the latter in detail, but instead refer the reader to more specialised
literature.
Furthermore, due to their obvious applications, the tools developed in this
review have been exclusively focused on the description of infectious diseases
among a human population. While many of them have in fact been inspired by
other systems (notably chemical reactions), they can be applied to other
fields as well with equal ease and success: apart from the immediate
applicability to other species (_e.g._ the spread of diseases among
livestock), the ideas underlying the concrete models discussed here can be
applied to a much larger range of problems. In fact, similar problems to the
ones tackled here can be found in other complex systems as well, ranging from
applications in network systems (_e.g._ the spread of computer malware in a
decentralised system) to human behaviour [172, 173] as well as social
engineering and media science (_i.e._ the spread of ideas and information in a
network/society).
## References
* [1] J. K. Taubenberger, D. M. Morens, 1918 influenza: The mother of all pandemics, Rev Biomed 17(1) (2006) 69–79.
* [2] R. A. Weiss, How does HIV cause AIDS?, Science 260 (1993) 1273–9. doi:https://doi.org/10.1126/science.8493571.
* [3] D. C. Douek, M. Roederer, R. A. Koup, Emerging concepts in the immunopathogenesis of AIDS, Annual Review of Medicine 60 (2009) 471–84. doi:https://doi.org/10.1146/annurev.med.60.041807.123549.
* [4] D. Bernoulli, Essai d’une nouvelle analyse de la mortalité causée par la petite vérole et des avantages de l’inoculation pour la prévenir, Mémoires de Mathématiques et de Physique, Académie Royale des Sciences, Paris (1760) 1–45.
* [5] H. Heesterbeek, The law of mass-action in epidemiology: a historical perspective, in: K. Cuddington, B. Beisner (Eds), Epidemiological Paradigms Lost: Routes of Theory Change, p.81-106, Academic Press, 2005.
* [6] W. Farr, On the cattle plague, J. Soc. Sci. March 20.
* [7] J. Snow, On Continuous Molecular Changes, More Particularly in Their Relation to Epidemic Diseases, J.Churchill, London.
* [8] W. Hamer, Age-incidence in relation with cycles of disease prevalence, Trans. Epidem. Soc. London 15 (1896) 64–77.
* [9] W. Hamer, Epidemic disease in England: The evidence of variability and of persistency of type; Lecture 1, Lancet (1906) 569–574.
* [10] W. Hamer, Epidemic disease in England: The evidence of variability and of persistency of type; Lecture 2, Lancet (1906) 655–662.
* [11] W. Hamer, Epidemic disease in England: The evidence of variability and of persistency of type; Lecture 3, Lancet (1906) 733–739.
* [12] R. Ross, The Prevention of Malaria, second edition, John Murray, London.
* [13] R. Ross, An application of the theory of probabilities to the study of _a priori_ pathometry: Part I, Proc. Roy. Soc. Lond. A 92 (1916) 204–230.
* [14] R. Ross, H. Hudson, An application of the theory of probabilities to the study of _a priori_ pathometry: Part II, Proc. Roy. Soc. Lond. A 93 (1916) 212–225.
* [15] R. Ross, H. Hudson, An application of the theory of probabilities to the study of _a priori_ pathometry: Part III, Proc. Roy. Soc. Lond. A 93 (1916) 225–240.
* [16] A. McKendrick, The rise and fall of epidemics, Paludism (Transactions of the Committee for the Study of Malaria in India) 1 (1912) 54–66.
* [17] A. McKendrick, Studies on the theory of continuous probabilities, with special reference to its bearing on natural phenomena of a progressive nature, Proceedings of the London Mathematical Society 13 (1914) 401–416.
* [18] A. McKendrick, Applications of mathematics to medical problems, Proc. Edinburgh Math. Soc. 44 (1926) 98–130.
* [19] W. O. Kermack, A. McKendrick, G. T. Walker, A contribution to the mathematical theory of epidemics, Proceedings of the Royal Society A 115 (1927) 700–721. doi:https://doi.org/10.1098/rspa.1927.0118.
URL https://royalsocietypublishing.org/doi/10.1098/rspa.1927.0118
* [20] N. Bailey, The Mathematical Theory of Infectious Diseases, 2nd ed., Hafner, New York, 1975.
* [21] N. Becker, The use of epidemic models, Biometrics 35 (1978) 295–305.
* [22] C. Castillo-Chavez (Ed.), Mathematical and Statistical Approaches to AIDS Epidemiology, Lecture Notes in Biomath. 83, Springer-Verlag, Berlin.
* [23] K. Dietz, Epidemics and rumours: A survey, J. Roy. Statist. Soc. Ser. A 130 (1967) 505–528.
* [24] K. Dietz, Density dependence in parasite transmission dynamics, Parasit. Today 4 (1988) 91–97.
* [25] H. Hethcote, A thousand and one epidemic models, in Frontiers in Theoretical Biology, S.A. Levin, ed., Lecture Notes in Biomath., Springer-Verlag, Berlin 100 (1994) 504–515.
* [26] H. Hethcote, S. Levin, Periodicity in epidemiological models, in Applied Mathematical Ecology, L. Gross, T.G. Hallam, and S.A. Levin, eds., Springer-Verlag, Berlin (1989) 193–211.
* [27] H. W. Hethcote, H. W. Stech, P. van den Driessche, Periodicity and stability in epidemic models: A survey, in Differential Equations and Applications in Ecology, Epidemics and Population Problems, S. N. Busenberg and K. L. Cooke, eds., Academic Press, New York (1981) 65–82.
* [28] K. Wickwire, Mathematical models for the control of pests and infectious diseases: A survey, Theoret. Population Biol. 11 (1977) 182–238.
* [29] K. Dietz, D. Schenzle, Mathematical models for infectious disease statistics, in: A. Atkinson (Ed.), A Celebration of Statistics, Springer (1985) 167–204.
* [30] H. W. Hethcote, Qualitative analyses of communicable disease models, Mathematical Biosciences 28 (3) (1976) 335 – 356. doi:https://doi.org/10.1016/0025-5564(76)90132-2.
URL http://www.sciencedirect.com/science/article/pii/0025556476901322
* [31] R. M. Anderson, R. M. May, Infectious Diseases of Humans: Dynamics and Control, Oxford University Press, UK, 1991.
* [32] H. W. Hethcote, The mathematics of infectious diseases, SIAM Review 42 (4) (2000) 599–653. doi:https://doi.org/10.1137/S0036144500371907.
* [33] R. M. Anderson (Ed.), Population Dynamics of Infectious Diseases, Chapman and Hall, London, 1982.
* [34] R. M. Anderson, R. M. May (Eds.), Population Biology of Infectious Diseases, Springer-Verlag, Berlin, Heidelberg, New York, 1982.
* [35] R. Anderson, R. May, Population biology of infectious diseases I, Nature 180 (1979) 361–367.
* [36] N. Bailey, The Biomathematics of Malaria, Charles Griffin, London, 1982.
* [37] M. Bartlett, Stochastic Population Models in Ecology and Epidemiology, Methuen, London, 1960.
* [38] N. Becker, Analysis of Infectious Disease Data, Chapman and Hall, New York, 1989.
* [39] S. Busenberg, K. Cooke, Vertically Transmitted Diseases, Biomathematics 23, Springer-Verlag, Berlin, 1993.
* [40] V. Capasso, Mathematical Structures of Epidemic Systems, Lecture Notes in Biomath. 97, Springer-Verlag, Berlin, 1993.
* [41] A. Cliff, P. Haggett, Atlas of Disease Distributions: Analytic Approaches to Epidemiological Data, Blackwell, London, 1988.
* [42] D. Daley, J. Gani, Epidemic Modelling: An Introduction, Cambridge University Press, Cambridge, UK, 1999.
* [43] O. Diekmann, J. Heesterbeek, Mathematical Epidemiology of Infectious Diseases, Wiley, New York, 2000.
* [44] J. Frauenthal, Mathematical Modelling in Epidemiology, Springer-Verlag Universitext Berlin, 1980.
* [45] J. P. Gabriel, C. Lefèvre, P. Picard (Eds.), Stochastic Processes in Epidemic Theory, Springer-Verlag, Berlin, 1990.
* [46] B. T. Grenfell, A. P. Dobson (Eds.), Ecology of Infectious Diseases in Natural Populations, Cambridge University Press, Cambridge UK, 1995.
* [47] H. Hethcote, J. V. Ark, Modeling HIV Transmission and AIDS in the United States, Lecture Notes in Biomath. 95, Springer-Verlag, Berlin, 1992.
* [48] H. Hethcote, J. Yorke, Gonorrhea Transmission Dynamics and Control, Lecture Notes in Biomath. 56, Springer-Verlag, Berlin, 1984.
* [49] V. Isham, G. Medley (Eds.), Models for Infectious Human Diseases, Cambridge University Press, Cambridge UK, 1996.
* [50] J. Kranz (Ed.), Epidemics of Plant Diseases: Mathematical Analysis and Modelling, Springer-Verlag, Berlin, 1990.
* [51] H. Lauwerier, Mathematical Models of Epidemics, Mathematisch Centrum, Amsterdam, 1981.
* [52] D. Ludwig, K. L. Cooke (Eds.), Epidemiology, SIMS Utah Conference Proceedings, SIMS, Philadelphia, 1975.
* [53] D. Mollison, Epidemic Models: Their Structure and Relation to Data, Cambridge University Press, Cambridge UK, 1996.
* [54] I. Nasell, Hybrid Models of Tropical Infections, Springer-Verlag, Berlin, 1985.
* [55] M. Scott, G. Smith (Eds.), Parasitic and Infectious Diseases, Academic Press, San Diego, 1994.
* [56] J. Vanderplank, Plant Diseases: Epidemics and Control, Academic Press, New York, 1963.
* [57] P. Waltman, Deterministic Threshold Models in the Theory of Epidemics, Lecture Notes in Biomath. 1, Springer-Verlag, Berlin, 1974.
* [58] P. J. Flory, Molecular size distribution in three dimensional polymers i. gelation, J. Am. Chem. Soc. 63 (1941) 3083.
* [59] W. H. Stockmayer, Theory of molecular size distribution and gel formation in branched polymers ii. general cross linking, Journal of Chemical Physics 12,4 (1944) 125.
* [60] S. Broadbent, J. Hammersley, Percolation Processes, Proc. Camb. Phil. Soc. 53 (1957) 629–41.
* [61] C. Domb, Fluctuation phenomena and stochastic processes, Nature 184 (1959) 509–12. doi:https://doi.org/10.1038/184509a0.
* [62] H. Frisch, J. Hammersley, Percolation Processes and Related Topics, J. Soc. Indust. Appl. Math. 11 (1963) 894–917.
* [63] J. Essam, Graph theory and statistical physics, Discrete Math. 1 (1971) 83–112.
* [64] V. Shante, S. Kirkpatrick, An introduction to percolation theory, Adv. Phys. 20 (1971) 325–57.
* [65] J. Essam, Phase Transitions and Critical Phenomena vol 2, chap 6, ed. C. Domb and M.S. Green, pp.197-270, New York: Academic, 1972.
* [66] S. Kirkpatrick, The nature of percolation ‘channels’, Solid St. Commun. 12 (1973) 1279–83.
* [67] S. Kirkpatrick, Percolation and Conduction, Rev. Mod. Phys. 45 (1973) 574–88.
* [68] D. Welsh, Percolation and Related Topics, Sci. Prog. Oxford. 64 (1977) 65–83.
* [69] F. Wu, Studies in Foundations and Combinatorics, Adv. Math. Suppl. Studies. vol 1 (1978) 151–66.
* [70] D. Stauffer, Scaling theory of percolation clusters, Phys. Rep. 54 (1979) 1–74.
* [71] J. W. Essam, Percolation theory, Rep. Prog. Phys. 43 (1980) 833.
URL https://iopscience.iop.org/article/10.1088/0034-4885/43/7/001/pdf
* [72] M. Doi, Second quantization representation for classical many-particle system, J. Phys. A: Math. Gen. 9 (1976) 1465.
URL https://iopscience.iop.org/article/10.1088/0305-4470/9/9/008
* [73] M. Doi, Stochastic theory of diffusion-controlled reaction, J. Phys. A: Math. Gen. 9 (1976) 1479. doi:https://doi.org/10.1016/S0378-4371(03)00458-8.
* [74] L. Peliti, Path integral approach to birth-death processes on a lattice, J. Phys. France (Paris) 46 (1985) 1469–1483. doi:https://doi.org/10.1051/jphys:019850046090146900.
* [75] G. Pruessner, Field theory notes, chapter 6.
URL http://wwwf.imperial.ac.uk/~pruess/publications/Gunnar_Pruessner_field_theory_notes.pdf
* [76] J. L. Cardy, P. Grassberger, Epidemic models and percolation, Journal of Physics A: Mathematical and General 18 (6) (1985) L267–L271. doi:10.1088/0305-4470/18/6/001.
URL https://doi.org/10.1088/0305-4470/18/6/001
* [77] H. Abarbanel, J. Bronzan, Structure of the pomeranchuk singularity in reggeon field theory, Phys. Rev. D 9 (1974) 2397. doi:https://doi.org/10.1103/PhysRevD.9.2397.
* [78] J. Cardy, R. Sugar, Directed percolation and reggeon field theory, J. Phys. A: Math. Gen. 13 (1980) L423.
URL https://iopscience.iop.org/article/10.1088/0305-4470/13/12/002/meta
* [79] L. Breiman, Statistical Modeling: The Two Cultures (with comments and a rejoinder by the author), Statistical Science 16 (3) (2001) 199 – 231. doi:10.1214/ss/1009213726.
URL https://doi.org/10.1214/ss/1009213726
* [80] A. Flaxman, T. Vos, Machine learning in population health: Opportunities and threats, PLOS Medicine 15 (2018) e1002702. doi:10.1371/journal.pmed.1002702.
* [81] T. Wiemken, R. Kelley, R. Fernández-Botrán, W. Mattingly, F. Arnold, S. Furmanek, M. Restrepo, J. Chalmers, P. Peyrani, J. Bordón, S. Aliberti, J. Ramírez, Using cluster analysis of cytokines to identify patterns of inflammation in hospitalized patients with community-acquired pneumonia: a pilot study, Journal of Respiratory Infections 1. doi:10.18297/jri/vol1/iss1/1/.
* [82] A. Motsinger-Reif, S. Dudek, L. Hahn, M. Ritchie, Comparison of approaches for machine-learning optimization of neural networks for detecting gene-gene interactions in genetic epidemiology, Genetic epidemiology 32 (2008) 325–40. doi:10.1002/gepi.20307.
* [83] A. Ramasubramanian, R. Muckom, C. Sugnaux, C. Fuentes, B. L. Ekerdt, D. S. Clark, K. E. Healy, D. V. Schaffer, High-throughput discovery of targeted, minimally complex peptide surfaces for human pluripotent stem cell culture, ACS Biomaterials Science & Engineering 7 (4) (2021) 1344–1360, pMID: 33750112. doi:10.1021/acsbiomaterials.0c01462.
URL https://doi.org/10.1021/acsbiomaterials.0c01462
* [84] T. Wiemken, R. Kelley, Machine learning in epidemiology and health outcomes research, Annual Review of Public Health 41 (2020) 21–36.
* [85] Z. Wang, M. A. Andrews, Z.-X. Wu, L. Wang, C. T. Bauch, Coupled disease–behavior dynamics on complex networks: A review, Physics of Life Reviews 15 (2015) 1 – 29. doi:https://doi.org/10.1016/j.plrev.2015.07.006.
URL http://www.sciencedirect.com/science/article/pii/S1571064515001372
* [86] K. G. Wilson, Renormalization group and critical phenomena. 1. Renormalization group and the Kadanoff scaling picture, Phys. Rev. B 4 (1971) 3174–3183. doi:https://doi.org/10.1103/PhysRevB.4.3174.
* [87] K. G. Wilson, Renormalization group and critical phenomena. 2. Phase space cell analysis of critical behavior, Phys. Rev. B 4 (1971) 3184–3205. doi:https://doi.org/10.1103/PhysRevB.4.3184.
* [88] O. Reynolds, An experimental investigation of the circumstances which determine whether the motion of water shall be direct or sinuous, and of the law of resistance in parallel channels, Phil. Trans. R. Soc. Lond. 174 (1883) 935–982.
URL http://rstl.royalsocietypublishing.org/con-tent/174/935
* [89] O. Reynolds, On the dynamical theory of incompressible viscous fluids and the determination of the criterion, Phil. Trans. R. Soc. Lond. 186 (1895) 123–164.
URL http://rsta.royalsocietypublishing.org/content/186/123
* [90] E. Stueckelberg, A. Petermann, La renormalisation des constantes dans la théorie des quanta, Helv. Phys. Acta 26 (1953) 499.
* [91] M. Gell-Mann, F. E. Low, Quantum electrodynamics at small distances, Phys. Rev. 95 (1954) 1300–1312. doi:10.1103/PhysRev.95.1300.
URL https://link.aps.org/doi/10.1103/PhysRev.95.1300
* [92] M. Della Morte, D. Orlando, F. Sannino, Renormalization Group Approach to Pandemics: The COVID-19 Case, Front. in Phys. 8 (2020) 144. doi:https://doi.org/10.3389/fphy.2020.00144.
* [93] M. Della Morte, F. Sannino, Renormalisation Group approach to pandemics as a time-dependent SIR model, Front. in Phys. 8 (2021) 583. doi:https://doi.org/10.3389/fphy.2020.591876.
* [94] G. Cacciapaglia, F. Sannino, Interplay of social distancing and border restrictions for pandemics (COVID-19) via the epidemic Renormalisation Group framework, Sci Rep 10 (2020) 15828. arXiv:2005.04956, doi:https://doi.org/10.1038/s41598-020-72175-4.
* [95] G. Cacciapaglia, C. Cot, F. Sannino, Second wave covid-19 pandemics in europe: A temporal playbook, Sci Rep 10 (2020) 15514. arXiv:2007.13100, doi:https://doi.org/10.1038/s41598-020-72611-5.
* [96] G. Cacciapaglia, C. Cot, A. S. Islind, M. Óskarsdóttir, F. Sannino, Impact of us vaccination strategy on covid-19 wave dynamics, Sci Rep 11(1) (2021) 1–11. arXiv:2012.12004.
* [97] M. Perc, J. J. Jordan, D. G. Rand, Z. Wang, S. Boccaletti, A. Szolnoki, Statistical physics of human cooperation, Physics Reports 687 (2017) 1 – 51. doi:https://doi.org/10.1016/j.physrep.2017.05.004.
URL http://www.sciencedirect.com/science/article/pii/S0370157317301424
* [98] Z. Wang, C. T. Bauch, S. Bhattacharyya, A. d’Onofrio, P. Manfredi, M. Perc, N. Perra, M. Salathé, D. Zhao, Statistical physics of vaccination, Physics Reports 664 (2016) 1 – 113. doi:https://doi.org/10.1016/j.physrep.2016.10.006.
URL http://www.sciencedirect.com/science/article/pii/S0370157316303349
* [99] P. Grassberger, On the critical behavior of the general epidemic process and dynamical percolation, Mathematical Biosciences 63 (2) (1983) 157 – 172. doi:https://doi.org/10.1016/0025-5564(82)90036-0.
URL http://www.sciencedirect.com/science/article/pii/0025556482900360
* [100] T. Tomé, R. M. Ziff, Critical behavior of the susceptible-infected-recovered model on a square lattice, Phys. Rev. E 82 (2010) 051921. doi:10.1103/PhysRevE.82.051921.
URL https://link.aps.org/doi/10.1103/PhysRevE.82.051921
* [101] G. Santos, T. Alves, G. Alves, A. Macedo-Filho, R. Ferreira, Epidemic outbreaks on two-dimensional quasiperiodic lattices, Physics Letters A 384 (2) (2020) 126063. doi:https://doi.org/10.1016/j.physleta.2019.126063.
URL https://www.sciencedirect.com/science/article/pii/S0375960119309533
* [102] R. I. Mukhamadiarov, S. Deng, S. R. Serrao, Priyanka, R. Nandi, L. H. Yao, U. C. Täuber, Social distancing and epidemic resurgence in agent-based susceptible-infectious-recovered models, Scientific Reports 11 (1). doi:10.1038/s41598-020-80162-y.
URL http://dx.doi.org/10.1038/s41598-020-80162-y
* [103] R. Bellman, A markovian decision process, Journal of Mathematics and Mechanics 6 (5) (1957) 679–684.
URL http://www.jstor.org/stable/24900506
* [104] A. Howard, Dynamic Programming and Markov Processes, M.I.T. Press, 1960.
* [105] E. Arashiro, T. Tomé, The threshold of coexistence and critical behaviour of a predator–prey cellular automaton, Journal of Physics A: Mathematical and Theoretical 40 (5) (2007) 887–900. doi:10.1088/1751-8113/40/5/002.
URL http://dx.doi.org/10.1088/1751-8113/40/5/002
* [106] D. Mollison, Spatial contact models for ecological and epidemic spread, J. Roy. Statist. Soc. Ser. B 39 (1977) 283. doi:https://doi.org/10.1111/j.2517-6161.1977.tb01627.x.
* [107] N. T. J. Bailey, The mathematical theory of infectious diseases, Griffin, London.
* [108] A. Menon, N. Rajendran, A. Chandrachud, G. Setlur, Modelling and simulation of covid-19 propagation in a large population with specific reference to india. doi:10.1101/2020.04.30.20086306.
* [109] X. Liu, G. Hewings, S. Wang, M. Qin, X. Xiang, S. Zheng, X. Li, Modelling the situation of covid-19 and effects of different containment strategies in china with dynamic differential equations and parameters estimation, medRxiv. doi:10.1101/2020.03.09.20033498.
URL https://www.medrxiv.org/content/early/2020/03/13/2020.03.09.20033498
* [110] A. Omran, The epidemiological transition: A theory of the epidemiology of population change, The Milbank Quarterly 83 (4) (1971) 731–57. doi:10.1111/j.1468-0009.2005.00398.x.
URL https://doi.org/10.1111/j.1468-0009.2005.00398.x
* [111] A. Santosa, S. Wall, E. Fottrell, U. Högberg, P. Byass, The development and experience of epidemiological transition theory over four decades: a systematic review, Global Health Action 7 (2014) 23574. doi:10.3402/gha.v7.23574.
URL https://doi.org/10.3402/gha.v7.23574
* [112] W. H. Press, B. P. Flannery, S. A. Teukolsky, W. T. Vetterling, Numerical Recipes in FORTRAN: The Art of Scientific Computing, 2nd ed., Cambridge University Press, 1992.
* [113] J. C. Butcher, Numerical Methods for Ordinary Differential Equations, New York: John Wiley & Sons, 2003.
* [114] P. L. Delamater, E. J. Street, T. F. Leslie, Y. Yang, K. H. Jacobsen, Complexity of the Basic Reproduction Number (R0), Emerg Infect Dis. 25(1) (2019) 1–4. doi:https://dx.doi.org/10.3201/eid2501.171901.
URL https://dx.doi.org/10.3201/eid2501.171901
* [115] J. A. P. Heesterbeek, K. Dietz, The concept of R0 in epidemic theory, Statistica Neerlandica 50 (1) (1996) 89–110. doi:https://doi.org/10.1111/j.1467-9574.1996.tb01482.x.
URL https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-9574.1996.tb01482.x
* [116] M. Keeling, B. T. Grenfell, Individual-based perspectives on r0, Journal of Theoretical Biology 203 (1) (2000) 51–61. doi:https://doi.org/10.1006/jtbi.1999.1064.
URL https://www.sciencedirect.com/science/article/pii/S0022519399910640
* [117] J. Heesterbeek, A brief history of $r_{0}$ and a recipe for its calculation., Acta Biotheoretica 50 (2002) 189–204. doi:10.1023/A:1016599411804.
* [118] J. Heffernan, R. Smith, L. Wahl, Perspectives on the basic reproductive ratio, Journal of The Royal Society Interface 2 (4) (2005) 281–293. doi:10.1098/rsif.2005.0042.
URL https://royalsocietypublishing.org/doi/abs/10.1098/rsif.2005.0042
* [119] M. Roberts, The pluses and minuses of R0, Journal of the Royal Society, Interface 4 (16) (2007) 949–961. doi:10.1098/rsif.2007.1031.
URL https://europepmc.org/articles/PMC2075534
* [120] L. Pellis, F. Ball, P. Trapman, Reproduction numbers for epidemic models with households and other social structures. i. definition and calculation of r0, Mathematical Biosciences 235 (1) (2012) 85–97. doi:https://doi.org/10.1016/j.mbs.2011.10.009.
URL https://www.sciencedirect.com/science/article/pii/S0025556411001556
* [121] D. R. de Souza, T. Tomé, Stochastic lattice gas model describing the dynamics of the sirs epidemic process, Physica A: Statistical Mechanics and its Applications 389 (5) (2010) 1142–1150. doi:10.1016/j.physa.2009.10.039.
URL http://dx.doi.org/10.1016/j.physa.2009.10.039
* [122] T. Tomé, R. M. Ziff, Critical behavior of the susceptible-infected-recovered model on a square lattice, Physical Review E 82 (5). doi:10.1103/physreve.82.051921.
URL http://dx.doi.org/10.1103/PhysRevE.82.051921
* [123] G. Santos, T. Alves, G. Alves, A. Macedo-Filho, R. Ferreira, Epidemic outbreaks on two-dimensional quasiperiodic lattices, Physics Letters A 384 (2) (2020) 126063. doi:10.1016/j.physleta.2019.126063.
URL http://dx.doi.org/10.1016/j.physleta.2019.126063
* [124] T. F. A. Alves, G. A. Alves, A. Macedo-Filho, R. S. Ferreira, Epidemic outbreaks on random delaunay triangulations (2019). arXiv:1901.03029.
* [125] R. Ghostine, M. E. Gharamti, S. Hassrouny, I. Hoteit, An extended seir model with vaccination for forecasting the covid-19 pandemic in saudi arabia using an ensemble kalman filter, Mathematics (9) (2021) 636. doi:10.3390/math9060636.
* [126] X. Meng, Z. Cai, H. Dui, H. Cao, Vaccination strategy analysis with SIRV epidemic model based on scale-free networks with tunable clustering, IOP Conference Series: Materials Science and Engineering 1043 (3) (2021) 032012. doi:10.1088/1757-899x/1043/3/032012.
URL https://doi.org/10.1088/1757-899x/1043/3/032012
* [127] S. Gao, Z. Teng, J. J. Nieto, A. Torres, Analysis of an SIR epidemic model with pulse vaccination and distributed time delay, J Biomed Biotechnol. 2007 (2007) 64870. doi:https://doi.org/10.1155/2007/64870.
* [128] N. Grassly, C. Fraser, Seasonal infectious disease epidemiology, Proceedings. Biological sciences / The Royal Society 273 (2006) 2541–50. doi:10.1098/rspb.2006.3604.
* [129] S. Dowell, Seasonal variation in host susceptibility and cycles of certain infectious diseases., Emerging Infectious Diseases 7 (2001) 369 – 374.
* [130] M. Keeling, P. Rohani, B. Pourbohloul, Modeling infectious diseases in humans and animals, Clinical infectious diseases : an official publication of the Infectious Diseases Society of America 47 (2008) 864–865. doi:10.1086/591197.
* [131] Y. Wang, Y. Zhou, Mathematical modeling and dynamics of hiv progression and treatment, Chinese Journal of Engineering Mathematics 27.
* [132] L. Liu, X.-Q. Zhao, Y. Zhou, A tuberculosis model with seasonality, Bulletin of mathematical biology 72 (4) (2010) 931—952. doi:10.1007/s11538-009-9477-8.
URL https://doi.org/10.1007/s11538-009-9477-8
* [133] N. Bacaër, S. Guernaoui, The epidemic threshold of vector-borne diseases with seasonality: The case of cutaneous leishmaniasis in chichaoua, morocco, Journal of mathematical biology 53 (2006) 421–36. doi:10.1007/s00285-006-0015-0.
* [134] Z. Agur, L. Cojocaru, G. Mazor, R. Anderson, Y. Danon, Pulse mass measles vaccination across age cohorts, Proceedings of the National Academy of Sciences of the United States of America 90 (1994) 11698–702. doi:10.1073/pnas.90.24.11698.
* [135] L. Stone, B. Shulgin, Z. Agur, Theoretical examination of the pulse vaccination policy in the sir epidemic model, Mathematical and Computer Modelling 31 (2000) 207–215.
* [136] B. Shulgin, L. Stone, Z. Agur, Pulse vaccination strategy in the sir epidemic model, Bulletin of Mathematical Biology 60 (6) (1998) 1123–1148. doi:https://doi.org/10.1016/S0092-8240(98)90005-2.
URL https://www.sciencedirect.com/science/article/pii/S0092824098900052
* [137] S. Gao, L. Chen, J. J. Nieto, A. Torres, Analysis of a delayed epidemic model with pulse vaccination and saturation incidence, Vaccine 24 (35-36) (2006) 6037—6045. doi:10.1016/j.vaccine.2006.05.018.
URL https://doi.org/10.1016/j.vaccine.2006.05.018
* [138] S. Gao, L. Chen, Z. Teng, Pulse vaccination of an seir epidemic model with time delay, Nonlinear Analysis: Real World Applications 9 (2008) 599–607. doi:10.1016/j.nonrwa.2006.12.004.
* [139] S. Liu, Y. Pei, C. Li, L. Chen, Three kinds of tvs in a sir epidemic model with saturated infectious force and vertical transmission, Applied Mathematical Modelling 33 (4) (2009) 1923–1932. doi:https://doi.org/10.1016/j.apm.2008.05.001.
URL https://www.sciencedirect.com/science/article/pii/S0307904X08001066
* [140] X. Liu, P. Stechlinski, Infectious disease models with time-varying parameters and general nonlinear incidence rate, Applied Mathematical Modelling 36 (5) (2012) 1974–1994. doi:https://doi.org/10.1016/j.apm.2011.08.019.
URL https://www.sciencedirect.com/science/article/pii/S0307904X11005191
* [141] X. Meng, L. Chen, The dynamics of a new sir epidemic model concerning pulse vaccination strategy, Applied Mathematics and Computation 197 (2008) 582–597. doi:10.1016/j.amc.2007.07.083.
* [142] A. D’Onofrio, Pulse vaccination strategy in the sir epidemic model: Global asymptotic stable eradication in presence of vaccine failures, Mathematical and Computer Modelling 36 (2002) 473–489. doi:10.1016/S0895-7177(02)00177-2.
* [143] W. Chunjin, C. Lansun, A delayed epidemic model with pulse vaccination, Discrete Dynamics in Nature and Society 2008. doi:10.1155/2008/746951.
* [144] Y. Zhou, H. Liu, Stability of periodic solutions for an sis model with pulse vaccination, Mathematical and Computer Modelling 38 (2003) 299–308.
* [145] Y. He, S. Gao, D. Xie, An sir epidemic model with time-varying pulse control schemes and saturated infectious force, Applied Mathematical Modelling 37 (16) (2013) 8131–8140. doi:https://doi.org/10.1016/j.apm.2013.03.035.
URL https://www.sciencedirect.com/science/article/pii/S0307904X13001947
* [146] S. Lai, N. W. Ruktanonchai, L. Zhou, O. Prosper, W. Luo, J. R. Floyd, A. Wesolowski, M. Santillana, C. Zhang, X. Du, H. Yu, A. J. Tatem, Effect of non-pharmaceutical interventions for containing the covid-19 outbreak in china, Nature. doi:https://doi.org/10.1038/s41586-020-2405-7.
* [147] P. Liautaud, P. Huybers, M. Santillana, Fever and mobility data indicate social distancing has reduced incidence of communicable disease in the united states. arXiv:2004.09911.
* [148] X. Huang, Z. Li, Y. Jiang, X. Ye, C. Deng, J. Zhang, X. Li, The characteristics of multi-source mobility datasets and how they reveal the luxury nature of social distancing in the u.s. during the covid-19 pandemic, medRxiv. doi:10.1101/2020.07.31.20143016.
* [149] C. Cot, G. Cacciapaglia, F. Sannino, Mining google and apple mobility data: temporal anatomy for covid-19 social distancing, Scientific Reports 11 (1) (2021) 4150. doi:10.1038/s41598-021-83441-4.
URL https://doi.org/10.1038/s41598-021-83441-4
* [150] J. T. Kemper, On the identification of superspreaders for infectious disease, Mathematical Biosciences 48 (1) (1980) 111 – 127. doi:https://doi.org/10.1016/0025-5564(80)90018-8.
URL http://www.sciencedirect.com/science/article/pii/0025556480900188
* [151] I. Szapudi, Heterogeneity in sir epidemics modeling: superspreaders, medRxiv. doi:10.1101/2020.07.02.20145490.
URL https://www.medrxiv.org/content/early/2020/07/06/2020.07.02.20145490
* [152] J. Fox, E. Kilbourne, Epidemiology of influenza – summary of influenza workshop iv, J. Infectious Disease 128 (1973) 361–386.
* [153] I. Elveback, J. Fox, E. Ackerman, A. Langworthy, M. Boyd, I. Gatewood, An influenza simulation model for immunisation studies, Amer. J. Epidemiology 103 (1976) 152–165.
* [154] R. P. Hattis, S. B. Halstead, K. L. Herrmann, et al., Rubella in an immunised island population, JAMA 223 (1973) 1019–1021.
* [155] D. Adam, P. Wu, J. Y. Wong, E. Lau, T. Tsang, S. Cauchemez, G. Leung, B. Cowling, Clustering and superspreading potential of severe acute respiratory syndrome coronavirus 2 (sars-cov-2) infections in hong kong. doi:10.21203/rs.3.rs-29548/v1.
* [156] A. Schuchat, Public health response to the initiation and spread of pandemic covid-19 in the united states, february 24–april 21, 2020, MMWR. Morbidity and Mortality Weekly Report 69. doi:10.15585/mmwr.mm6918e2.
* [157] R. R. Wilcox, The essence of gonorrhea control i, Acta Dermata-Venereologica 45 (1965) 302–308.
* [158] R. R. Wilcox, The essence of gonorrhea control ii, Acta Dermata-Venereologica 46 (1966) 95–100.
* [159] R. R. Wilcox, The essence of gonorrhea control iii, Acta Dermata-Venereologica 46 (1966) 250–256.
* [160] R. R. Wilcox, The essence of gonorrhea control iv, Acta Dermata-Venereologica 46 (1966) 460–465.
* [161] J. A. Yorke, H. W. Hethcote, A. Nold, Dynamics and control of the transmission of gonorrhea, Sexually Transmitted Diseases 5 (2) (1978) 51–56.
* [162] M. McGuigan, Pandemic modeling and the renormalization group equations: Effect of contact matrices, fixed points and nonspecific vaccine waning. arXiv:2008.02149.
* [163] G. Cacciapaglia, F. Sannino, Evidence for complex fixed points in pandemic data, Front. Appl. Math. Stat. 7 (2021) 659580. arXiv:2009.08861, doi:https://doi.org/10.3389/fams.2021.659580.
* [164] G. Cacciapaglia, C. Cot, F. Sannino, Multiwave pandemic dynamics explained: How to tame the next wave of infectious diseases, Sci Rep 11 (2021) 6638. arXiv:2011.12846, doi:https://doi.org/10.1038/s41598-021-85875-2.
* [165] C. Stokel-Walker, What we know about covid-19 reinfection so far, BMJ 372. doi:10.1136/bmj.n99.
URL https://www.bmj.com/content/372/bmj.n99
* [166] G. Cacciapaglia, C. Cot, A. de Hoffer, S. Hohenegger, F. Sannino, S. Vatani, Epidemiological theory of virus variants (2021). arXiv:2106.14982.
* [167] A. de Hoffer, S. Vatani, C. Cot, G. Cacciapaglia, F. Conventi, A. Giannini, S. Hohenegger, F. Sannino, Variant-driven multi-wave pattern of covid-19 via machine learning clustering of spike protein mutations (2021). arXiv:2107.10115.
* [168] T. Banks, A. Zaks, On the Phase Structure of Vector-Like Gauge Theories with Massless Fermions, Nucl. Phys. B 196 (1982) 189–204. doi:10.1016/0550-3213(82)90035-9.
* [169] Worldometer, Coronavirus cases, https://www.worldometers.info/coronavirus/ (2021).
URL https://www.worldometers.info/coronavirus/
* [170] Z. Wang, F. Schmidt, Y. Weisblum, F. Muecksch, C. O. Barnes, S. Finkin, D. Schaefer-Babajew, M. Cipolla, C. Gaebler, J. A. Lieberman, Z. Yang, M. E. Abernathy, K. E. Huey-Tubman, A. Hurley, M. Turroja, K. A. West, K. Gordon, K. G. Millard, V. Ramos, J. Da Silva, J. Xu, R. A. Colbert, R. Patel, J. Dizon, C. Unson-O’Brien, I. Shimeliovich, A. Gazumyan, M. Caskey, P. J. Bjorkman, R. Casellas, T. Hatziioannou, P. D. Bieniasz, M. C. Nussenzweig, mRNA vaccine-elicited antibodies to sars-cov-2 and circulating variants, bioRxiv. doi:10.1101/2021.01.15.426911.
URL https://www.biorxiv.org/content/early/2021/01/19/2021.01.15.426911
* [171] M. McCallum, A. D. Marco, F. Lempp, M. A. Tortorici, D. Pinto, A. C. Walls, M. Beltramello, A. Chen, Z. Liu, F. Zatta, S. Zepeda, J. di Iulio, J. E. Bowen, M. Montiel-Ruiz, J. Zhou, L. E. Rosen, S. Bianchi, B. Guarino, C. S. Fregni, R. Abdelnabi, S.-Y. Caroline Foo, P. W. Rothlauf, L.-M. Bloyet, F. Benigni, E. Cameroni, J. Neyts, A. Riva, G. Snell, A. Telenti, S. P. Whelan, H. W. Virgin, D. Corti, M. S. Pizzuto, D. Veesler, N-terminal domain antigenic mapping reveals a site of vulnerability for sars-cov-2, bioRxivarXiv:https://www.biorxiv.org/content/early/2021/01/14/2021.01.14.426475.full.pdf, doi:10.1101/2021.01.14.426475.
URL https://www.biorxiv.org/content/early/2021/01/14/2021.01.14.426475
* [172] H. Wang, Q. Xia, Z. Xiong, Z. Li, W. Xiang, Y. Yuan, Y. Liu, Z. Li, The psychological distress and coping styles in the early stages of the 2019 coronavirus disease (covid-19) epidemic in the general mainland chinese population: A web-based survey, PLOS ONE 15 (5) (2020) 1–10. doi:10.1371/journal.pone.0233410.
URL https://doi.org/10.1371/journal.pone.0233410
* [173] D. Sakan, D. Zuljevic, N. Rokvic, The role of basic psychological needs in well-being during the covid-19 outbreak: A self-determination theory perspective, Frontiers in Public Health 8 (2020) 713\. doi:10.3389/fpubh.2020.583181.
URL https://www.frontiersin.org/article/10.3389/fpubh.2020.583181
|
See copyright
###### Abstract
Polynomial multiplication is a bottleneck in most of the public-key
cryptography protocols, including Elliptic-curve cryptography and several of
the post-quantum cryptography algorithms presently being studied. In this
paper, we present a library of various large integer polynomial multipliers to
be used in hardware cryptocores. Our library contains both digitized and non-
digitized multiplier flavours for circuit designers to choose from. The
library is supported by a C++ generator that automatically produces the
multipliers’ logic in Verilog HDL that is amenable for FPGA and ASIC designs.
Moreover, for ASICs, it also generates configurable and parameterizable
synthesis scripts. The features of the generator allow for a quick generation
and assessment of several architectures at the same time, thus allowing a
designer to easily explore the (complex) optimization search space of
polynomial multiplication.
###### Index Terms:
schoolbook multiplier, Karatsuba multiplier, Toom-Cook multiplier, digitized polynomial multiplication, large integer polynomial multipliers
978-1-5386-5541-2/18/$31.00 ©2021 IEEE
## I Introduction
Polynomial multiplication (i.e., $c(x)=a(x)\times b(x)$) is a fundamental
building block for cryptographic hardware and is often identified as the
bottleneck in implementing efficient circuits. The most widely deployed public
key crypto systems (e.g., RSA and ECC) need polynomial multiplications [1].
Many of the post-quantum cryptography (PQC) algorithms (e.g., NTRU-Prime,
FrodoKEM, Saber, etc.) also require large integer multipliers for multiplying
polynomial coefficients utilized to perform key-encapsulations and digital
signatures [2]. Another application is in fully homomorphic encryption, a
specific branch of cryptography that requires large integer multipliers to
enable multi-party and secure-by-construction on the cloud computations [3].
There is a clear demand for large integer multipliers to perform
multiplication over polynomial coefficients. To our knowledge, today, no
widely available repository of open source multiplier architectures exists.
This is the gap that our library addresses.
There are several multiplication methods employed to perform multiplication
over polynomial coefficients, including the schoolbook method (SBM),
Karatsuba, Toom-Cook, Montgomery, and number theoretic transformation (NTT). A
quick scan of the PQC algorithms involved in the NIST standardization effort
[4] reveals that many reference implementations suggest the use of these
multipliers: SBM is suggested by the authors of NTRU-Prime and FrodoKEM,
Karatsuba and Toom-Cook methods are used in Saber and NTRU, a combination of
NTT and SBM is suggested for CRYSTALS-Kyber, SBM and Montgomery are considered
in Falcon.
Examples of recent works employing non-digitized and digitized polynomial
multiplication methods are given in [5, 6, 7, 8, 9, 10, 11] and [12, 13, 14],
respectively. In [5], for different polynomial sizes, an architectural evaluation of different multiplication methods (SBM, Comba, Karatsuba, Toom-Cook, Montgomery, and NTT) is performed over a Virtex-7 FPGA platform. An
improved Montgomery polynomial multiplier is presented in [7] for a polynomial
size of 1024 bits over a Virtex-6 FPGA. A run-time configurable and highly
parallelized NTT-based polynomial multiplication architecture over Virtex-7 is
discussed in [8]. A systolic based digit serial multiplier wrapper on an Intel
Altera Stratix-V FPGA is described in [12], where digit sizes of 22 and 30
bits are considered for operand lengths 233 and 409 bits, respectively. A
digit serial Montgomery based wrapper is provided in [13], where a digit size
of 64 is selected for the operand length 571 bits, on a Virtex-6. Similarly, a
digit serial modular multiplication based wrapper on Virtex-7 is shown in
[14], where digit sizes of 2, 4 and 8 bits are preferred for an operand length
of 2048 bits.
ASIC implementations, while less frequent, also explore the polynomial
multiplication design space. In [6], different polynomial multipliers with
different operand lengths are considered for area and power evaluations on a
65nm technology. On similar technology, a bit-level parallel-in-parallel-out (BL-PIPO) multiplier architecture and a modified interleaved modular reduction multiplication with a bit-serial sequential architecture are proposed in [9] and [10], respectively, using a 65nm commercial node for an operand length of 409 bits.
For fully homomorphic encryption schemes, an optimized multi-million bit
multiplier based on the Schonhage Strassen multiplication algorithm is
described in [11] on 60nm technology node.
Although there are several reported implementations of different multiplication methods [5, 6, 7, 8, 9, 10, 11, 12, 13, 14], these implementations tend to be specifically tailored for a given operand size and for a given target (e.g., high speed or low area). This trade-off space is difficult to navigate without automation. Consequently, a common approach to assess (several) multiplication methods is required.
In order to tackle the aforementioned limitations of the available literature
and the need for automation, we develop an open-source library of multipliers
which we name TTech-LIB. Our library is supported by a C++ generator utility
that produces – following user specifications – hardware description of four
selected multiplication methods: (a) SBM, (b) 2-way Karatsuba, (c) 3-way Toom-
Cook, and (d) 4-way Toom-Cook. For selected multiplication methods, our
library also offers a digitized solution: a single parameterized digit-serial
wrapper to multiply polynomial coefficients. By default, the wrapper
instantiates a singular SBM multiplier, but it can be replaced by any other
multiplier method since the interfaces are identical between all methods.
Finally, FPGA and ASIC designers can select their own multiplication method,
size of the input operands, and digit size (only for the digitized wrapper,
naturally). Moreover, for ASIC designers, there is the possibility to generate
synthesis scripts for one of two synthesis tools, either Cadence Genus or
Synopsys Design Compiler (DC). The user is not restricted to generating a single architecture at a time; if asked to do so, the generator will produce multiple solutions, which appear as separate Verilog (.v) files.
The remainder of this work is structured as follows: The mathematical
background for selected multiplication methods is described in Section II. The
generator architecture and the structure of proposed TTech-LIB is provided in
Section III. Section IV shows the experimental results and provides comparisons
of non-digitized and digitized flavours of multiplication methods. Finally,
Section V concludes the paper.
## II Mathematical background
In this section, we present the mathematical formulations behind polynomial
multiplication. We assume the inputs are two $m$-bit polynomials and the
output is a polynomial of size $2m-1$.
### II-A Non-digitized multiplication
The SBM is the traditional way to multiply two input polynomials $a(x)\times b(x)$, as shown in Eq. 1. Producing the resultant polynomial $c(x)$ through bit-by-bit operations requires $2\times m$ clock cycles, $m^{2}$ multiplications, and $(m-1)^{2}$ additions.
$\displaystyle c(x)=\sum_{i=0}^{m-1}\sum_{j=0}^{m-1}a_{i}b_{j}x^{i+j}$ (1)
Other approaches such as the 2-way Karatsuba, 3-way Toom-Cook, and 4-way Toom-Cook are more time efficient since they split the polynomials into $n$ equal parts, as shown in Eq. 2. The value of $n$ for the 2-way Karatsuba, 3-way Toom-Cook and 4-way Toom-Cook multipliers is 2, 3 and 4, respectively, as the names imply. In Eq. 2, the variable $k$ determines the index of the split input polynomial. For example, for a 4-way Toom-Cook multiplier, the values of $k$ are {3, 2, 1, 0}, meaning the input polynomial $a(x)$ becomes $a_{3}(x)$, $a_{2}(x)$, $a_{1}(x)$, and $a_{0}(x)$.
$\displaystyle a(x)=\sum_{k=0}^{n-1}a_{k}(x)\,x^{k\frac{m}{n}},\qquad b(x)=\sum_{k=0}^{n-1}b_{k}(x)\,x^{k\frac{m}{n}}$ (2)
In Eq. 3, the expanded version of Eq. 2 is presented for the case of 2-way
split of input polynomials. The straightforward computation would require four
multiplications: (1) one for the computation of inner product resulting
polynomial $c_{1}(x)$, two multiplications for the computation of $c_{2}(x)$,
and finally one multiplication for the computation of $c_{0}(x)$. However,
$c_{2}(x)$ could be alternatively calculated with only one multiplication, as
shown in Eq. 4. This is the Karatsuba observation. To generate the final
resultant polynomial $c(x)$, addition of inner products is required, as
presented in Eq. 5. Similarly, when considering the 3-way and 4-way Toom-Cook
multipliers, the expanded versions of Eq. 2 produce nine and sixteen
multiplications, respectively. These multiplications are then reduced to five
and seven using a process similar to the 2-way Karatsuba, respectively. We
omit the equations for Toom-Cook multipliers for the sake of brevity.
$\displaystyle c(x)=\underbrace{a_{1}(x)b_{1}(x)}_{c_{1}(x)}x^{m}+\underbrace{\left(a_{1}(x)b_{0}(x)+a_{0}(x)b_{1}(x)\right)}_{c_{2}(x)}x^{\frac{m}{2}}+\underbrace{a_{0}(x)b_{0}(x)}_{c_{0}(x)}$ (3)
$\displaystyle c_{2}(x)=\left(a_{1}(x)+a_{0}(x)\right)\left(b_{1}(x)+b_{0}(x)\right)-c_{1}(x)-c_{0}(x)$ (4)
$\displaystyle c(x)=c_{1}(x)\,x^{m}+c_{2}(x)\,x^{\frac{m}{2}}+c_{0}(x)$ (5)
Now, let us assume that the polynomials involved in the multiplications above
remain relatively large in size even after split. Thus, SBM multipliers can be
employed to resolve the partial products. For a 2-way Karatsuba multiplier of
$m$-bit input polynomials, there will be 3 SBM multipliers and each will take
two polynomials of size $\frac{m}{2}$ as inputs. Each multiplier requires
$\frac{m}{2}$ clock cycles to be completed. If all multipliers operate in
parallel, the overall computation also takes $\frac{m}{2}$ cycles. For 3-way
and 4-way splits, the number of clock cycles is $\frac{m}{3}$ and
$\frac{m}{4}$, respectively. Since our library is aimed at large polynomials,
the 2-way Karatsuba, 3-way Toom-Cook, and 4-way Toom-Cook codes available in
it actually implement the parallel SBM strategy discussed above. In fact, our
non-digitized multipliers are hybrid multipliers.
### II-B Digitized multiplication
The digit serial wrapper in TTech-LIB takes two $m$-bit polynomials $a(x)$ and
$b(x)$ as an input and produces $c(x)$ as an output. Digits are created for
polynomial $b(x)$ with different sizes which are user-defined as follows:
$d=\frac{m}{n}$, where $d$ determines the total number of digits, $m$ denotes
the size of input polynomial $b(x)$, and $n$ is the size of each digit. Then,
the multiplication of each created digit is performed serially with the input
polynomial $a(x)$, while the final resultant polynomial $c(x)$ is produced
using shift and add operations. The main difference here is that our digitized
solution is serial, while the 2-, 3-, and 4-way multipliers are parallel. The
required computational cost (in clock cycles) to perform one digit
multiplication is $n$. Since there are $d$ digits, the overall computation
takes $d\times n$ clock cycles. It is important to mention that
users/designers can choose any multiplication method inside the described
digit serial wrapper as per their application requirements. We have used an
SBM multiplication method as default.
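A minimal software model of the digit-serial idea (our own sketch over plain unsigned integers rather than polynomial coefficients; the digit size $n$ is assumed to divide the 32-bit operand width):

```cpp
#include <cassert>
#include <cstdint>

// Digit-serial multiplication sketch: b is split into d = m/n digits of
// n bits each; each digit is multiplied by the full operand a and
// accumulated with a shift, mirroring the shift-and-add recombination of
// the digit-serial wrapper.
uint64_t digit_serial_mul(uint32_t a, uint32_t b, unsigned n) {
    uint64_t c = 0;
    uint32_t mask = (n >= 32) ? 0xFFFFFFFFu : ((1u << n) - 1u);
    for (unsigned shift = 0; shift < 32; shift += n) {
        uint64_t digit = (b >> shift) & mask;  // current n-bit digit of b
        c += (digit * (uint64_t)a) << shift;   // multiply, shift, accumulate
    }
    return c;
}
```

Each loop iteration corresponds to one pass of the wrapper: $d = 32/n$ passes are needed, which is where the $d\times n$ cycle count of the hardware version comes from.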
## III How to access TTech-LIB
The complete project files (written in C++) are freely available to everyone
on our GitHub repository [15]. A sample of pre-generated multipliers is also
included in the repository. As shown in Fig. 1, the user settings can be
customized by using a configuration file (config.xml). The structure of the
library is rather simple and includes five directories: (1) bin, (2) run, (3)
src, (4) synth, and (5) vlog. After running the generator binary, the produced
synthesis scripts are put in the synth directory while the generated
multipliers are put in the vlog folder. All generated multipliers have the
same interface (i.e., inputs are $clk$, $rst$, $a$, and $b$; the output is
$c$).
Figure 1: Generator architecture and file structure of TTech-LIB
## IV Experimental Results and Comparisons
### IV-A Implementation results and evaluations
The experimental results for non-digitized and digitized polynomial multiplication methods over NIST-defined field lengths [16] on a 65nm technology node using Cadence Genus are provided in Table I and Table II, respectively. Moreover, the implementation results for various digit sizes of the digitized SBM multiplication method over an Artix-7 FPGA device are given in Table III. In Tables I–II, the clock frequency (MHz), area (in $\mu m^{2}$), and power (mW) values are obtained after synthesis using Cadence Genus. Similarly, in Table III, the clock frequency (MHz), look-up tables (LUTs), utilized registers (Regs) and power (mW) values are obtained after synthesis using the Vivado design tool.
Finally, latency for both digitized and non-digitized multipliers (in tables
I–III) is calculated using Eq. 6:
$\displaystyle\text{latency}\,(\mu s)=\frac{\text{total clock cycles}}{\text{Freq}\,(\text{MHz})}$ (6)
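As a rough sanity check on Eq. 6 (our reading: latency equals the total clock cycles divided by the clock frequency), assuming the non-digitized SBM takes about $m$ cycles and the digitized wrapper $d\times n$ cycles, the formula approximately reproduces the tables: the SBM P-192 row gives 192 cycles at 500 MHz, i.e. 0.384 µs against the reported 0.382 µs, and the 521-bit digitized row with $n=32$, $d=17$ gives 544 cycles at 505 MHz, i.e. about 1.077 µs against the reported 1.07 µs.

```cpp
#include <cassert>
#include <cmath>

// Eq. 6 (our interpretation): latency in microseconds equals the total
// number of clock cycles divided by the clock frequency in MHz.
double latency_us(double total_cycles, double freq_mhz) {
    return total_cycles / freq_mhz;
}
```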
TABLE I: Results of non-digitized multipliers for NIST recommended Elliptic
curves over prime and binary fields
Multiplier | m | Freq (MHz) | latency ($\mu s$) | Area ($\mu m^{2}$) | Power (mW)
---|---|---|---|---|---
Schoolbook | P-192 | 500 | 0.382 | 32011.2 | 13.8
P-224 | 486 | 0.458 | 38048.0 | 17.1
P-256 | 480 | 0.531 | 48726.7 | 16.9
P-384 | 444 | 0.862 | 67861.8 | 27.1
P-521 | 434 | 1.198 | 100242.0 | 28.0
B-163 | 500 | 0.324 | 29341.4 | 12.9
B-233 | 476 | 0.487 | 39321.4 | 16.0
B-283 | 454 | 0.621 | 50603.4 | 17.8
B-409 | 442 | 0.923 | 73587.6 | 28.2
B-571 | 413 | 1.380 | 89993.2 | 29.1
2-way Karatsuba | P-192 | 473 | 0.202 | 41379.5 | 8.2
P-224 | 469 | 0.238 | 49514.4 | 9.6
P-256 | 467 | 0.274 | 59532.1 | 11.8
P-384 | 420 | 0.457 | 74844.0 | 15.2
P-521 | 408 | 0.639 | 105059.5 | 20.8
B-163 | 487 | 0.168 | 35060.0 | 7.7
B-233 | 478 | 0.244 | 52328.2 | 10.0
B-283 | 455 | 0.312 | 64743.8 | 12.6
B-409 | 432 | 0.474 | 84778.6 | 17.2
B-571 | 418 | 0.684 | 120374.3 | 21.7
3-way Toom-Cook | P-192 | 909 | 0.070 | 96498.4 | 44.4
P-224 | 869 | 0.086 | 102470.8 | 46.9
P-256 | 826 | 0.104 | 104820.9 | 49.4
P-384 | 689 | 0.185 | 139375.1 | 57.2
P-521 | 680 | 0.255 | 201341.2 | 80.0
B-163 | 934 | 0.058 | 75085.6 | 36.0
B-233 | 877 | 0.088 | 106357.7 | 49.5
B-283 | 800 | 0.118 | 115188.1 | 54.5
B-409 | 775 | 0.176 | 170509.0 | 78.4
B-571 | 766 | 0.249 | 256604.4 | 115.9
4-way Toom-Cook | P-192 | 900 | 0.053 | 105679.1 | 56.9
P-224 | 847 | 0.066 | 125124.1 | 62.0
P-256 | 826 | 0.077 | 122298.1 | 63.6
P-384 | 793 | 0.121 | 241893.7 | 98.2
P-521 | 767 | 0.170 | 332534.9 | 139.4
B-163 | 925 | 0.044 | 94834.1 | 49.9
B-233 | 892 | 0.066 | 132080.0 | 64.2
B-283 | 826 | 0.085 | 145709.3 | 70.6
B-409 | 769 | 0.133 | 236989.4 | 99.0
B-571 | 746 | 0.191 | 340750.8 | 148.2
* •
m determines the field size or length of the inputs (in bits), where ‘P’
stands for Prime and ‘B’ stands for Binary
TABLE II: Results of digitized multipliers for NIST recommended Elliptic curves over prime and binary fields m | digit size (n) | total digits (d) | Freq (MHz) | latency ($\mu s$) | Area ($\mu m^{2}$) | Power (mW)
---|---|---|---|---|---|---
521$\times$521 | 32 | 17 | 505 | 1.07 | 106956.7 | 30.9
41 | 13 | 377 | 1.41 | 101538.7 | 26.1
53 | 10 | 340 | 1.55 | 94752.7 | 20.0
81 | 7 | 336 | 1.68 | 84321.0 | 15.4
571$\times$571 | 32 | 18 | 487 | 1.18 | 114999.8 | 36.7
41 | 14 | 369 | 1.55 | 116010.3 | 28.9
53 | 11 | 312 | 1.86 | 91393.9 | 18.1
81 | 8 | 291 | 2.22 | 76146.8 | 14.1
1024$\times$1024 | 2 | 512 | 363 | 2.82 | 196131.2 | 38.0
4 | 256 | 357 | 2.86 | 178581.2 | 35.1
8 | 128 | 353 | 2.90 | 167536.4 | 31.5
16 | 64 | 343 | 2.98 | 166533.1 | 30.2
32 | 32 | 313 | 3.27 | 148489.5 | 23.0
64 | 16 | 285 | 3.59 | 122257.8 | 20.8
128 | 8 | 268 | 3.82 | 123164.6 | 19.9
256 | 4 | 263 | 3.89 | 129542.4 | 19.5
512 | 2 | 261 | 3.92 | 136292.4 | 23.1
1024 | 1 | 259 | 3.95 | 177834.2 | 24.1
TABLE III: FPGA based results of digitized 1024$\times$1024 SBM multiplier for different digit sizes (Artix-7) m | digit size (n) | total digits (d) | Freq (MHz) | latency ($\mu s$) | LUTs | Regs | Carry | Power (mW)
---|---|---|---|---|---|---|---|---
521$\times$521 | 32 | 17 | 33.11 | 16.43 | 6369 | 1692 | 408 | 184
41 | 13 | 29.15 | 18.28 | 7995 | 1681 | 416 | 192
53 | 10 | 28.32 | 22.72 | 8079 | 1732 | 417 | 191
64 | 9 | 34.48 | 15.12 | 6095 | 1758 | 408 | 220
81 | 8 | 30.30 | 21.38 | 8207 | 1795 | 415 | 247
128 | 5 | 34.84 | 14.95 | 5964 | 1881 | 424 | 220
571$\times$571 | 32 | 17 | 30.12 | 18.06 | 6397 | 1847 | 447 | 194
41 | 13 | 27.17 | 19.62 | 8750 | 1834 | 455 | 192
53 | 10 | 26.04 | 20.35 | 9053 | 1880 | 449 | 187
81 | 8 | 28.01 | 23.13 | 8958 | 1951 | 452 | 226
1024$\times$1024 | 2 | 512 | 14.22 | 72.11 | 10993 | 3634 | 1085 | 173
4 | 256 | 15.89 | 64.48 | 10824 | 3384 | 928 | 172
8 | 128 | 16.86 | 60.66 | 11074 | 3261 | 849 | 180
16 | 64 | 17.51 | 58.48 | 10634 | 3248 | 811 | 185
32 | 32 | 17.89 | 57.28 | 11371 | 3267 | 791 | 190
64 | 16 | 17.95 | 57.04 | 11947 | 3330 | 792 | 195
128 | 8 | 18.57 | 55.14 | 12207 | 3450 | 800 | 221
256 | 4 | 18.93 | 54.09 | 11367 | 3740 | 832 | 247
512 | 2 | 19.12 | 53.55 | 10385 | 4295 | 896 | 226
1024 | 1 | 18.46 | 55.50 | 11462 | 5303 | 1024 | 235
#### IV-A1 ASIC non-digitized multipliers
Our results consider NIST-defined prime (P-192 to P-521) and binary (B-163 to
B-571) fields utilized in ECC-based public key cryptosystems. As the operand
sizes increase, the corresponding clock frequency decreases, as shown in
column three of Table I. The decrease in frequency leads to an increase in
latency, as presented in column four of Table I. In addition to latency, the
corresponding area and power values also increase with the increase in size of
multiplier operands (see columns five and six of Table I). It is evident from these results that the SBM multiplier requires fewer hardware resources than the 2-way Karatsuba, 3-way Toom-Cook, and 4-way Toom-Cook multipliers. Moreover, the 2-way Karatsuba achieves lower power values than the other selected multipliers. This is explained by the datapath and the composition of the different multipliers: SBM requires a $2m+2m$-bit adder, 2-way Karatsuba requires an $m+m+m$-bit adder/subtracter for generating the final polynomial, 3-way Toom-Cook requires fifteen $\frac{m}{3}$-bit incrementers, and 4-way Toom-Cook requires sixteen $\frac{m}{4}$-bit incrementers. There is always a trade-off between design parameters such as area, latency, and power. Consequently, the SBM multiplier is best suited for area-constrained applications; for lower latency, the other multipliers are preferable.
#### IV-A2 ASIC digitized multipliers
For digitizing, we have selected 521, 571, and 1024 as the lengths of the
input operands, as shown in column one of Table II. Moreover, for input
lengths of 521 and 571, digit sizes of 32, 41, 53 and 81 have been adopted.
For an input length of 1024 bits, digit sizes are given in powers of two, for
$n$ = $2,\ldots,1024$. Digit size $n$ and total digits $d$ are listed in
columns two and three of Table II, respectively. It is noteworthy that the
increase in digit size results in a decrease in clock frequency, as presented
in column four of Table II. Moreover, it also translates to an increase in
latency, as shown in column five of Table II. For the $1024\times 1024$
multiplier, the obtained values for area and power show behavior similar to a
parabolic curve with respect to digit size, as given in the last two columns
of Table II. This is intuitive, as in the extreme cases of too small or too
large digits, the wrapper logic becomes inefficient and may even become the
bottleneck for timing. In summary, for an application that requires high clock
frequency, shorter digits are preferred; however, this brings a significant
cost in area and power.
#### IV-A3 FPGA digitized multipliers
Like the ASIC demonstrations (presented in Sec. IV-A2), we have chosen the same lengths of the input operands (521, 571, and 1024) for the evaluation on an Artix-7 FPGA platform, as shown in column one of Table III. We have used the Xilinx Vivado Design Suite for the FPGA-based experiments. Furthermore, for input lengths of 521 and 571, digit sizes of 32, 41, 53 and 81 have been considered.
For an input length of 1024 bits, digit sizes are adopted in powers of two,
for $n$ = $2,\ldots,1024$. Digit size $n$ and total digits $d$ are listed in
columns two and three of Table III, respectively. The synthesis results (clock frequency, latency, area in terms of LUTs and Regs, and power) achieved on the FPGA differ substantially from the ASIC values, as the two implementation platforms are fundamentally different. It is important to note that the clock frequency of the multiplier architecture increases with the digit size (column four of Table III). This trend continues until a saturation point is reached, i.e., the best possible clock frequency with respect to $n$; beyond that point, the clock frequency decreases. The saturation can occur at any digit size (in this work and for this experiment, it occurs at $n=512$), and the saturation point also varies with the operand size of the multiplier, as given in Table III. For the other reported parameters, i.e., latency, LUTs and power, no such saturation point can be identified because of their non-linear behavior (see columns five, six and nine of Table III). It is noteworthy that we have considered the worst-case scenario by excluding the DSP (Digital Signal Processing) blocks during synthesis. The performance of the multiplier architectures would be higher with the conventional synthesis flow that includes DSPs.
#### IV-A4 Figure-of-Merit (FoM) for digitized SBM multiplier
A FoM is defined to perform a comparison while taking into account different
design characteristics at the same time. A FoM to evaluate the latency and
area parameters for both ASIC and FPGA platforms is defined using Eq. 7. The
higher the FoM values, the better. Similarly, a ratio combining the latency and power characteristics is calculated using Eq. 8.
$FoM=\frac{1}{latency\,(\mu s)\times area}$ (7)
$FoM=\frac{1}{latency\,(\mu s)\times power\,(mW)}$ (8)
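To illustrate how Eqs. 7–8 rank the design points, here is a small C++ helper applied to a few $1024\times 1024$ rows of Table II (the struct and function names are ours):

```cpp
#include <cassert>
#include <cstddef>

// One digitized-multiplier design point: digit size n plus the synthesis
// results reported in Table II.
struct Row { unsigned n; double latency_us, area_um2, power_mw; };

double fom_area(const Row& r)  { return 1.0 / (r.latency_us * r.area_um2); }  // Eq. 7
double fom_power(const Row& r) { return 1.0 / (r.latency_us * r.power_mw); }  // Eq. 8

// Digit size maximizing the area-latency FoM (higher is better).
template <size_t N>
unsigned best_digit_size(const Row (&rows)[N]) {
    unsigned best = rows[0].n;
    double best_fom = fom_area(rows[0]);
    for (size_t i = 1; i < N; ++i)
        if (fom_area(rows[i]) > best_fom) { best_fom = fom_area(rows[i]); best = rows[i].n; }
    return best;
}
```

Fed with the Table II rows for $n\in\{32,64,128,256\}$, the maximum of Eq. 7 lands at $n=64$, matching the optimum reported below for Fig. 2.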
The calculated values of defined FoMs for ASIC are given in figures 2 and 3,
where various digit sizes were considered for a $1024\times 1024$ multiplier.
Figure 2: Area and latency FoM for various digit sizes of a $1024\times 1024$
multiplier Figure 3: Power and latency FoM for various digit sizes of a
$1024\times 1024$ multiplier
For both FoMs (shown in figures 2 and 3), it becomes clear that the extreme
cases lead to suboptimal results. For the studied 1024 $\times$ 1024
multiplier, the variant with $n=64$ and $d=16$ presents an optimal solution.
Other similar values, such as $n=32$ and $n=128$, also give very close to
optimal solutions.
As for the ASICs, the calculated values of the FoM defined in Eq. 7 for the FPGA are given in Fig. 4, where various digit sizes were considered for a 1024$\times$1024 multiplier. To calculate FPGA area utilization, slice flip-flops, LUTs and carry units are the basic building blocks. Therefore, the FoM in Eq. 7 can be calculated using different metrics of interest (e.g., slices, LUTs, registers and carry blocks). Note that we have used FPGA slices as the area term in Eq. 7. Fig. 4 reveals that the FoM value for $n=512$ and $d=2$ yields an optimal solution.
Figure 4: Slices and latency FoM for various digit sizes of a $1024\times
1024$ multiplier
The combined relation between frequency, latency and power for different values of $n$ is illustrated in Fig. 5. It can be noted from Fig. 5 that latency decreases and frequency increases as $n$ grows. This trend continues until the saturation point occurs (at $n=512$).
Figure 5: Frequency, latency and power analysis for various digit sizes of a
$1024\times 1024$ multiplier
### IV-B Comparison to the state of the art
To perform a fair comparison with existing state-of-the-art modular multiplier
architectures, we have used similar operand lengths, digit sizes and
implementation technologies (for FPGA and ASIC) as used in the corresponding
solutions, shown in Table IV. In state-of-the-art solutions, multiplication results are given for different operand lengths; however, we compare our results only against the larger operands. Moreover, we use the symbol ‘N/A’ in Table IV where the values for design parameters (Freq, latency and area) are not given.
TABLE IV: Area and latency comparisons of non-digitized and digitized
multipliers with state of the art
Ref | Multiplier | Device | m | Freq (MHz) | latency ($\mu s$) | Area ($\mu m^{2}$)/LUTs
---|---|---|---|---|---|---
[5] | 2-way KM | V7 | 128 | 104.3 | 0.61 | 3499
2-way KM | V7 | 256 | 74.5 | 1.71 | 7452
2-way KM | V7 | 512 | 51.6 | 4.96 | 20474
[9] | BL-PIPO | 65nm | 163 | N/A | N/A | 5328 GE
[13] | DSM (ds=64) | V6 | 571 | 258.5 | 0.03 | 10983
[14] | DSMM (ds=2) | V7 | 2048 | N/A | N/A | 18067
DSMM (ds=4) | V7 | 2048 | N/A | N/A | 33734
DSMM (ds=8) | V7 | 2048 | N/A | N/A | 62023
TW | SBM | 65nm | 163 | N/A | N/A | 11727 GE
2-way KM | V7 | 128 | 167.4 | 0.38 | 2110
2-way KM | V7 | 256 | 119.9 | 1.06 | 4318
2-way KM | V7 | 512 | 63.8 | 4.01 | 9582
SBM (ds=2) | V7 | 2048 | 15.03 | 69760 | 25559
SBM (ds=4) | V7 | 2048 | 16.6 | 15790 | 22040
SBM (ds=8) | V7 | 2048 | 17.4 | 3760 | 23315
SBM (ds=64) | V6 | 571 | 46.4 | 1.74 | 6181
* •
V7: Xilinx Virtex-7, V6: Xilinx Virtex-6, ds: digit size, TW: this work, DSM:
Digit Serial Montgomery multiplier based wrapper, BL-PIPO: Bit level parallel
in parallel out multiplier using SBM multiplication method, GE: gate
equivalents
Considering only the non-digitized multipliers, the 2-way Karatsuba multiplier of [5] on a Virtex-7 FPGA exhibits 38%, 39% and 20% higher latency for operand sizes of 128, 256 and 512 bits, respectively, when compared to the 2-way Karatsuba multiplier generated by TTech-LIB, as shown in Table IV. Moreover, the generated multiplier utilizes fewer hardware resources in terms of LUTs (see column seven of Table IV) than [5]. On a 65nm node, the BL-PIPO multiplier of [9] utilizes 55% fewer hardware resources in terms of gate counts than our SBM multiplier generated by TTech-LIB.
When the digitized flavor of polynomial multiplication is considered for comparison over different digit sizes, the digit-serial Montgomery multiplier based wrapper of [13] achieves 83% higher clock frequency and requires 58% less computational time than our SBM-based digit-serial wrapper generated by TTech-LIB. On the other hand, our SBM-based digit-serial wrapper requires 56% fewer hardware resources on the Virtex-6 FPGA; there is always a trade-off between performance and area parameters. Another digit-serial modular multiplication based wrapper [14] uses 14% fewer FPGA LUTs (for ds=2), while for the remaining digit sizes of 4 and 8 it utilizes 35% and 63% more FPGA LUTs than the SBM wrapper generated by TTech-LIB. The frequency and latency parameters cannot be compared as they are not reported.
The comparisons and discussion above show that the multipliers generated by
TTech-LIB provide a realistic and reasonable comparison to state-of-the-art
multiplier solutions [5, 9, 13, 14]. Hence, not only can users explore various
design parameters within our library, they can also benefit from
implementations that are competitive with respect to the existing literature.
## V Conclusion
This work has presented an open-source library for large integer polynomial
multipliers. The library contains digitized and non-digitized flavors of
polynomial coefficient multipliers. For non-digitized multipliers, based on
the values for various design parameters, users/designers can select amongst
several studied multipliers according to needs of their targeted application.
Furthermore, we have shown that for digitized multipliers, the evaluation of
individual design parameters may not be comprehensive, and figures of merit
are better suited to capture the characteristics of a circuit. We also believe the results enabled by TTech-LIB will guide hardware designers to
select an appropriate digit size that reaches an acceptable performance
according to application requirements. This is achieved with the aid of TTech-
LIB’s generator, which helps a designer to quickly explore the complex design
space of polynomial multipliers.
## References
* [1] H. Eberle, N. Gura, S. Shantz, V. Gupta, L. Rarick, and S. Sundaram, “A public-key cryptographic processor for RSA and ECC.” IEEE, 2004, pp. 98–110.
Research Article
Daniel Lemire, DOT-Lab Research Center, Université du Québec (TELUQ), Montreal, Quebec, H2S 3L5, Canada. <EMAIL_ADDRESS>
Funding: Natural Sciences and Engineering Research Council of Canada, Grant Number: RGPIN-2017-03910
# Number Parsing at a Gigabyte per Second
Daniel Lemire
###### Abstract
With disks and networks providing gigabytes per second, parsing decimal
numbers from strings becomes a bottleneck. We consider the problem of parsing
decimal numbers to the nearest binary floating-point value. The general
problem requires variable-precision arithmetic. However, we need at most 17
digits to represent 64-bit standard floating-point numbers (IEEE 754). Thus we
can represent the decimal significand with a single 64-bit word. By combining
the significand and precomputed tables, we can compute the nearest floating-
point number using as few as one or two 64-bit multiplications.
Our implementation can be several times faster than conventional functions
present in standard C libraries on modern 64-bit systems (Intel, AMD, ARM and
POWER9). Our work is available as open source software used by major systems
such as Apache Arrow and Yandex ClickHouse. The Go standard library has
adopted a version of our approach.
###### keywords:
Parsing, Software Performance, IEEE-754, Floating-Point Numbers
## 1 Introduction
Computers approximate real numbers as binary IEEE-754 floating-point numbers:
an integer $m$ (the _significand_; the term _mantissa_ is discouraged by
IEEE [1, 2] and by Knuth [3]) multiplied by 2 raised to an
integer exponent $p$: $m\times 2^{p}$. Most programming languages have a
corresponding 64-bit data type and commodity processors provide the
corresponding instructions. In several mainstream programming languages (C,
C++, Swift, Rust, Julia, C#, Go), floating-point numbers adopt the 64-bit
floating-point type by default. In JavaScript, all numbers are represented
using a 64-bit floating-point type, including integers—except maybe for the
large integer type BigInt. There are other number types beyond the standard
binary IEEE-754 number types. For example, Gustafson [4] has proposed Unums
types, Microsoft promotes its Microsoft Floating Point (MSFP) types [5] and
many programming languages support decimal-number data types [6]. However,
they are not as ubiquitous.
Numbers are frequently serialized on disk or over a network as ASCII strings
representing the value in decimal form (e.g., 3.1416, 1.0e10, 0.1). It is
generally impossible to find a binary IEEE-754 floating-point number that
matches exactly a decimal number. For example, the number 0.2 corresponding to
$1/5$ can never be represented exactly as a binary floating-point number: its
binary representation requires an infinite number of digits. Thus we must find
the nearest available binary floating-point number. The nearest approximation
to 0.2 using a standard 64-bit floating-point value is
$7\,205\,759\,403\,792\,794\times 2^{-55}$ or approximately
$0.200\,000\,000\,000\,000\,011\,10$. The second nearest floating-point value
is $7\,205\,759\,403\,792\,793\times 2^{-55}$ or approximately
$0.199\,999\,999\,999\,999\,983\,35$. In rare cases, the decimal value would
be exactly between two floating-point values. In such cases, the convention is
that we _round ties to even_ : of the two nearest floating-point values, we
choose the one with an even _significand_. Thus, since
$9\,000\,000\,000\,000\,000.5$ falls at equal distance from
$9\,000\,000\,000\,000\,000\times 2^{0}$ and
$9\,000\,000\,000\,000\,001\times 2^{0}$, we round it to
$9\,000\,000\,000\,000\,000$. Meanwhile we round
$9\,000\,000\,000\,000\,001.5$ and $9\,000\,000\,000\,000\,002.5$ to
$9\,000\,000\,000\,000\,002$ and so forth.
Finding the binary floating-point value that is closest to a decimal string
can be computationally challenging. Widely used number parsers fail to reach
200 MiB/s on commodity processors (see Fig.
2) whereas our disks and networks are capable of transmitting data at
gigabytes per second (more than 5 times as fast).
If we write a 64-bit floating-point number as a string using a decimal
significand with 17 significant digits, we can always parse it back exactly:
starting from the ASCII string representing the number with a correctly
rounded 17-digit decimal significand and picking the nearest floating-point
number, we retrieve our original number. Programmers and data analysts use no
more than 17 digits in practice since there is no benefit to the superfluous
digits if the number is originally represented as a standard binary floating-
point number. There are only some exceptional cases where more digits could be
expected: e.g.,
1. when the value has been entered by a human being,
2. when the value was originally computed using higher-accuracy arithmetic,
3. when the original value was in a different number type,
4. when a system was poorly designed.
We have that 64-bit unsigned integers can represent all 19-digit non-negative
integers since $10^{19}<2^{64}$. Given an ASCII string (e.g.,
2.2250738585072019e-308), we can parse the decimal significand as a 64-bit
integer and represent the number as $22250738585072019\times 10^{-308-16}$. It
remains to convert it to a binary floating-point number. In this instance, we
must divide $22250738585072019$ by $10^{308+16}$ and round the result: we show
that we can solve such problems using an efficient algorithm (§ 5). Further,
when more than 19 digits are found, we may often be able to determine the
nearest floating-point value from the most significant 19 digits (§ 11).
term | notation
---|---
$m$ | binary significand; $m$ is a non-negative integer (often in $[2^{52},2^{53})$)
$p$ | binary exponent; $p$ is an integer, it can be negative
non-negative floating-point number | $m\times 2^{p}$
$w$ | decimal significand; $w$ is a non-negative integer (often in $[0,2^{64})$)
$q$ | decimal exponent; $q$ is an integer, it can be negative
non-negative decimal number | $w\times 10^{q}$
rounding | $\operatorname{round}(x)$ is an integer value nearest to $x$, with ties broken by rounding to the nearest even integer
ceiling | given $x\geq 0$, $\operatorname{ceiling}(x)$ is the smallest integer no smaller than $x$
floor | given $x\geq 0$, $\operatorname{floor}(x)$ is the largest integer no larger than $x$
integer quotient | given non-negative integers $n,m$, $n\div m=\operatorname{floor}(n/m)$
integer remainder | $\operatorname{remainder}(n,m)=n-m\times\operatorname{floor}(n/m)$
trailing zeros | the positive integer $x$ has $k\in\mathbb{N}$ trailing zeros if and only if $x$ is divisible by $2^{k}$
Table 1: Notational conventions
Our main contribution is to show that we can reach high parsing speeds (e.g.,
1 GiB/s) on current 64-bit processors
without sacrificing accuracy by focusing on optimizing the common number-
parsing scenario where we have no more than 19 digits. We make all of our
software freely available.
## 2 IEEE-754 Binary Floating-Point Numbers
Many of the most popular programming languages and many of the most common
processors support 64-bit and 32-bit IEEE-754 binary floating-point numbers.
See Table 2. The IEEE-754 standard defines other binary floating-point types
(binary16, binary128 and binary256) but they are less common.
According to the IEEE-754 standard, a positive _normal_ double-precision
floating-point number is a binary floating-point number where the 53-bit
integer $m$ (the _significand_) is in the interval $[2^{52},2^{53})$ while
being interpreted as a number in $[1,2)$ by virtually dividing it by $2^{52}$,
and where the 11-bit exponent $p$ ranges from $-1022$ to $1023$ [7]. Such a
double-precision number can represent values from $2^{-1022}$ up to,
but not including, $2^{1024}$; these are the positive _normal_ values. Some
values smaller than $2^{-1022}$ can be represented and are called _subnormal_
values: they use a special exponent code which has the value $2^{-1022}$ and
the significand is then interpreted as a value in $[0,1)$. A sign bit
distinguishes negative and positive values. A double-precision floating-point
number uses 64 bits and is often called binary64. The binary64 format can
represent exactly all decimal numbers made of a 15-digit significand from
$\approx-1.8\times 10^{308}$ to $\approx 1.8\times 10^{308}$. Importantly, the
reverse is not true: it is not sufficient to have 15 digits of precision to
distinguish any two floating-point numbers: we may need up to 17 digits.
The single-precision floating-point numbers are similar but span 32 bits
(binary32). They are binary floating-point numbers where the 24-bit
significand $m$ is in the interval $[2^{23},2^{24})$—considered as value in
$[1,2)$ after virtually dividing it by $2^{23}$—and where the 8-bit exponent
$p$ ranges from $-126$ to $127$. We can represent all numbers between
$2^{-126}$ up to, but not including, $2^{128}$; with special handling for some
numbers smaller than $2^{-126}$ (subnormals). The binary32 type can represent
exactly all decimal numbers made of a 6-digit significand. If we serialize a
32-bit number using 9 digits, we can always parse it back exactly.
name | exponent bits | significand (stored) | decimal digits (exact)
---|---|---|---
binary64 | 11 bits | 53 bits (52 bits) | 15 (17)
binary32 | 8 bits | 24 bits (23 bits) | 6 (9)
Table 2: Common IEEE-754 binary floating-point numbers: 64 bits (binary64) and
32 bits (binary32). A single bit is reserved for the sign in all cases.
## 3 Related Work
Clinger [8, 9] describes accurate decimal to binary conversion; he proposes a
fast path using the fact that small powers of 10 can be represented exactly as
floats. Indeed, if we seek to convert the decimal number $1245\times 10^{14}$
to a binary floating-point number, we observe that the number $10^{14}$ can be
represented exactly as a 64-bit floating-point number because
$10^{14}=5^{14}\times 2^{14}$, and $5^{14}<2^{53}$. Of course, the significand
($1245$) can also be exactly represented. Thus if the value $10^{14}$ is
precomputed as an exact floating-point value, it remains to compute the
product of $1245\times 10^{14}$. The IEEE-754 specification requires that the
result of an elementary arithmetic operation is correctly
rounded. (Mainstream commodity processors, e.g., x64 and 64-bit ARM, have
fast floating-point instructions with correct 64-bit and 32-bit rounding.) In
this manner, we can immediately convert a decimal number to a 64-bit floating-
point number when it can be written as $w\times 10^{q}$ with $-22\leq q\leq
22$ and $w\leq 2^{53}$. We can extend Clinger’s fast approach to 32-bit
floating-point numbers with the conditions $-10\leq q\leq 10$ and $w\leq
2^{24}$.
Gay [10] improves upon Clinger’s work in many respects. He provides a
secondary fast path, for slightly larger powers. Indeed if we need to compute
$w\times 10^{q}$ for some small integer $w$ for some integer $q>22$, we may
decompose the problem as $(w\times 10^{q-22})\times 10^{22}$. If $q$ is
sufficiently small so that $w\times 10^{q-22}$ is less than $2^{53}$, then the
computation is still exact, and thus $(w\times 10^{q-22})\times 10^{22}$ is
also exact. Unfortunately, this approach is limited to decimal exponents
$q\in(22,22+16)$ since $10^{16}>2^{53}$. Gay contributes a fast general
decimal-to-binary implementation that is still in wide use: we benchmark
against its most recent implementation in § 12. The general strategy for
decimal-to-binary conversion involves first quickly finding a close
approximation, one within a few floating-point numbers of the accurate
value, and then refining it using one or two more steps involving exact big-
integer arithmetic. Though there have been many practical attempts at
optimizing number parsing [11], we are not aware of improved follow-up work to
Gay’s approach in the scientific literature.
A tangential problem is the conversion of binary floating-point numbers to
decimal strings, the inverse of the problem that we are considering. The
binary-to-decimal problem has received much attention [12, 13, 14, 15, 16,
17]. Among other problems is the one of representing a floating-point number
using as few digits as possible so that the exact original value can be
retrieved [13]: we need between 1 and 17 digits.
## 4 Parsing the String
A floating-point value may be encoded in different manners as a string. For
example, 1e+1, 10, 10.0, 10., 1.e1, +1e1, 10E-01 all represent the same value
(10). There are different conventions and rules. For example, in JSON [18],
the following strings are invalid numbers: +1, 01, 1.e1. Furthermore, there
are locale-specific conventions.
When parsing decimal numbers, our first step is to convert the string into a
significand and an exponent. Though details differ depending on the
requirement, the general strategy we propose is as follows:
1. The string may be explicitly delimited—we have the end point or a string
length—or we may use a sentinel such as the null character. The parser must
not access characters outside the string range to avoid memory errors and
security issues.
2. It may be necessary to skip all leading white-space characters. In general,
what constitutes a white-space character is locale-specific.
3. The number may begin with the ‘+’ or the ‘-’ character. Some formats may
disallow the leading ‘+’ character.
4. The significand is a sequence of digits (0,…,9) containing optionally the
decimal separator: the period character ‘.’ or a locale-specific equivalent.
We must check that at least one digit was encountered: we thus forbid length-
zero significands and significands made solely of the decimal separator. Some
formats like JSON disallow a leading zero (only the zero value may begin with
a zero) or an empty integer component (there must be digits before the decimal
separator) or an empty fractional component (there must be digits after the
decimal separator). To compute the significand, we may use a 64-bit unsigned
integer $w$. We also record the beginning of the significand (e.g., as a
pointer). We compute the digit value from the character using integer
arithmetic: the digits have consecutive code point values in the most popular
character encodings (Unicode and ASCII define values from 48 to 57). We can
similarly use the fact that the digits occupy consecutive code points to
quickly check whether a character is a digit. With each digit encountered, we
can compute the running significand with a multiplication by ten followed by
an addition ($w=10\times d+v$ where $v$ is the digit value). The
multiplication by ten is often optimized by compilers into efficient sequences
of instructions. If there are too many digits, the significand may overflow
which we can guard against by counting the number of processed digits: it is
not necessary to guard each addition and multiplication against overflows. We
can either be optimistic and later check whether an overflow was possible, or
else we may check our position in the string, making sure that we never parse
more than 19 digits, after omitting leading zeros. When the decimal separator
is encountered, we record its position, but we otherwise continue computing
the running significand. It is common to encounter many digits after the
decimal separator. Instead of processing the digits one by one, we may check
all at once whether a sequence of 8 digits is available and then update a
single time the running significand—using a technique called SIMD within a
register (SWAR) [19]. See Appendix D. If we find a sequence of eight digits,
it can be beneficial to check again whether eight more digits (for a total of
16) can be found.
5. If there is a decimal separator, we must record the number of fractional
digits, which we compute from the position of the decimal separator and the
end of the significand. E.g., if there are 12 digits after the decimal
separator, then the exponent is $10^{-12}$. If there is no decimal separator,
then the exponent is implicitly zero ($10^{0}$).
6. The significand may be followed by the letter ‘e’ or the letter ‘E’ in which
case we need to parse the exponent if the scientific notation is allowed.
Conversely, if the scientific notation is prescribed, we might fail if the
exponent character is not detected. The parsing of the exponent proceeds much
like the parsing of the significand except that no decimal separator is
allowed. An exceptional condition may occur if the exponent character is not
followed by digits, accounting for the possible ‘+’ and ‘-’ characters. We may
either fail, if the scientific notation is required, or we may decide to
truncate the string right before the exponent character. To avoid overflow
with the exponent, we may update it only if its absolute value is under some
threshold: it makes no difference whether the exponent is $-1000$ or
$-10\,000$; whether it is $1000$ or $10\,000$. The explicit exponent must be
added to the exponent computed from the decimal separator.
7. If the number of digits used to express the significand is less than 19, then
we know that the significand cannot overflow. If the number of digits is more
than 19, we may count the significant digits by omitting leading zeros (e.g.,
$0.000\,123$ has only three significant digits). Finally, if an overflow
cannot be dismissed, we may need to parse using a higher-precision code path
(§ 11).
There are instances when we can quickly terminate the computation after
decoding the decimal significand and its exponent. If the significand is zero
or the exponent is very small then the number must be zero. If the significand
is non-zero but the exponent is very large then we have an infinite value
($\pm\infty$).
## 5 Fast Algorithm
A fast algorithm to parse floating-point numbers might start by processing the
ASCII string (see § 4) to find a decimal significand and a decimal exponent.
If the number of digits in the significand is less than 19, then our approach
is applicable (otherwise, see § 11). However, before we apply our algorithm, we use
Clinger’s fast path [8, 9], see § 3. Even though it adds an additional branch
at the beginning, it is an inexpensive code path when it is applicable,
implying a single floating-point multiplication or division. We can implement
it efficiently. We check whether the decimal power is within the allowable
interval ($q\in[-22,22]$ in the 64-bit case, $q\in[-10,10]$ in the 32-bit
case) and whether the absolute value of the decimal significand is in the
allowable interval ($[0,2^{53}]$ in the 64-bit case or $[0,2^{24}]$ in the
32-bit case). When these conditions are encountered, we losslessly convert the
decimal significand to a floating-point value, we lookup the precomputed power
of ten $10^{|q|}$ and we multiply (when $q\geq 0$) or divide ($q<0$) the
converted significand. (The case with negative exponents, where a division is
needed, requires some care on systems where the division of two floating-point
numbers is not guaranteed to round to the nearest floating-point value: when such
a system is detected, we may limit the fast path to positive decimal powers.)
Gay [10] proposes an extended fast path that covers a broader range of decimal
exponents, but with more stringent conditions on the significand. We do not
make use of this secondary fast path. It adds additional branching and
complexity for relatively little gain in our context.
In particular, Clinger’s fast path covers all integer values in $[0,2^{53}]$
(64-bit case). We could also add an additional fast path specifically for
integers. We can readily identify such cases because the decimal exponent is
zero: $w\times 10^{0}$. We can rely on the fact that the IEEE standard
specifies that conversion between integer and floating-point be correctly
rounded [7]. Thus a cast from an integer value to a floating-point value is
often all that is needed. It may often require nearly just a single
instruction (e.g., cvtsi2sd under x64 and ucvtf under ARM). However, we choose
to disregard this potential optimization because the gains are modest while it
increases the complexity of the code.
We must then handle the general case, after the application of Clinger’s fast
path. We formalize our approach with Algorithm 1. This concise algorithm can
handle rounding, including ties to even, subnormal numbers and infinite
values. We specialize the code for positive numbers, but negative numbers are
handled by flipping the sign bit in the result. As the pseudo-code suggests,
it can be implemented in a few lines of code. The algorithm always succeeds
unless we have large or small decimal exponents ($q\notin[-27,55]$) in which
case we may need to fall back on a higher-precision approach in uncommon
instances. The algorithm relies on a precomputed table of 128-bit values
$T[q]$ for decimal exponents $q\in[-342,308]$ (see Appendix B).
* In lines 3 and 4, we check for very large or very small decimal exponents as
well as for zero decimal significands. In such cases, the result is always
either zero or infinity.
* In lines 5 and 6, we normalize the decimal significand $w$ by shifting it so
that $w\in[2^{63},2^{64})$.
* We must convert the decimal significand $w$ into the binary significand $m$.
We have that $w\times 10^{q}=w\times 5^{q}\times 2^{q}\approx m\times 2^{p}$
so we must estimate $w\times 5^{q}$. At line 7, we multiply the normalized
significand $w$ by the 128-bit value $T[q]$ using one or two 64-bit
multiplications. Intuitively, the product $w\times T[q]$ approximates $w\times
5^{q}$ after shifting the result. We describe this step in § 7 and § 8 for
positive decimal exponents ($q\geq 0$), and in § 9 for negative decimal
exponents. We have a 128-bit result $z$.
* At line 8, we check for failure, requiring the software to fall back on a
higher-precision approach. It corresponds to the case where we failed to
provably approximate $w\times 5^{q}$ to a sufficient degree. In practice, it
is unlikely and only ever possible if $q\notin[-27,55]$.
* At line 9, we compute the expected binary significand with one extra bit of
precision (for rounding) from the product $z$.
* At lines 10 and 11, we compute the expected binary exponent. We justify this
step in § 10.
* At line 12, we check whether the binary exponent is too small. When it is too
small, the result is zero.
* At line 13, we check whether we have a subnormal value when the binary
exponent is too small. See § 9.3.
* At line 18, we handle the case where we might have a value that is exactly
between two binary floating-point numbers. We describe this step generally in
§ 6 where we show that subnormal values cannot require rounding ties. We
describe it specifically in § 8.1 for the positive-exponent case ($q\geq 0$)
and in § 9.1 in the negative-exponent case. Intuitively, we identify ties when
the product $z$ from which we extracted the binary significand $m$ contains
many trailing zeroes after ignoring the least significant bit. We need to be
concerned when we would (at line 21) round up from an even value: we adjust
the value to prevent rounding up.
* At line 21, we round the binary significand. At line 22, we handle the case
where rounding up caused an overflow, in which case we need to increment the
binary exponent. At line 23, we handle the case where the binary exponent is
too large and we have an infinite value.
We show that the algorithm is correct by examining each step in the following
sections. We assess our algorithm experimentally in § 12.
1:an integer $w\in[0,10^{19}]$ and an integer exponent $q$
2:a table $T$ containing 128-bit reciprocals and truncated powers of five for
all powers from $-342$ to $308$ (see Appendix B)
3:if $w=0$ or $q<-342$ then Return 0 end if
4:if $q>308$ then Return $\infty$ end if
5:$l\leftarrow$ the number of leading zeros of $w$ as a 64-bit (unsigned) word
6:$w\leftarrow 2^{l}\times w$ $\triangleright$ Normalize the decimal
significand
7: Compute the 128-bit truncated product $z\leftarrow(T[q]\times w)\div
2^{64}$, stopping after one 64-bit multiplication if the most significant 55
bits (64-bit) or 26 bits (32-bit) are provably exact.
8:if $z\bmod 2^{64}=2^{64}-1$ and $q\notin[-27,55]$ then Abort end if
9:$m\leftarrow$ the most significant 54 bits (64-bit) or 25 bits (32-bit) of
the product $z$, not counting the eventual leading zero bit
10:$u\leftarrow z\div 2^{127}$ value of the most significant bit of $z$
11:$p\leftarrow((217706\times q)\div 2^{16})+63-l+u$ $\triangleright$ Expected
binary exponent
12:if $p\leq-1022-64$ (64-bit) or $p\leq-126-64$ (32-bit) then Return 0 end if
13:if $p\leq-1022$ (64-bit) or $p\leq-126$ (32-bit) then$\triangleright$
Subnormals
14: $s\leftarrow-1022-p+1$ (64-bit) or $s\leftarrow-126-p+1$ (32-bit)
15: $m\leftarrow m\div 2^{s}$ and $m\leftarrow m+1$ if $m$ is odd (round up),
and $m\leftarrow m\div 2$
16: Return $m\times 2^{p}\times 2^{-52}$ (64-bit) or $m\times 2^{p}\times
2^{-23}$ (32-bit case)
17:end if
18:if $z\bmod 2^{64}\leq 1$ and $m$ is odd and $m\div 2$ is even and
($q\in[-4,23]$ (64-bit) or $q\in[-17,10]$ (32-bit)) then $\triangleright$
Round ties to even
19: if $(z\div 2^{64})/m$ is a power of two then $m\leftarrow m-1$
$\triangleright$ Will not round up
20:end if
21: $m\leftarrow m+1$ if $m$ is odd; followed by $m\leftarrow m\div 2$
$\triangleright$ Round the binary significand
22:if $m=2^{54}$ (64-bit) or $m=2^{25}$ (32-bit) then $m=m\div 2$;
$p\leftarrow p+1$ end if
23:if $p>1023$ (64-bit) or $p>127$ (32-bit) then Return $\infty$ end if
24:Return $m\times 2^{p}\times 2^{-52}$ (64-bit) or $m\times 2^{p}\times
2^{-23}$ (32-bit case)
Algorithm 1 Algorithm to compute the binary floating-point number nearest to a
decimal floating-point number $w\times 10^{q}$. We give just one algorithm for
both the 32-bit and 64-bit cases. For negative integers, we need to negate the
result.
## 6 Exact Numbers and Ties
We seek to approximate a decimal floating-point number of the form $w\times
10^{q}$ using a binary floating-point number of the form $m\times
2^{p}$. Sometimes, there is no need to approximate since an exact
representation is possible. That is, we have that $w\times 10^{q}=m\times
2^{p}$ or, equivalently, $w\times 5^{q}\times 2^{q}=m\times 2^{p}$. In our
context, we refer to these numbers as _exact numbers_. We seek to better
identify when they can occur.
* When $q\geq 0$, we have $m=w\times 5^{q}\times 2^{q}\times 2^{-p}$ so that $m$
is divisible by $5^{q}$. In the 64-bit case, we have that $m<2^{53}$; and in
the 32-bit case, we have that $m<2^{24}$. Thus we have, respectively,
$5^{q}<2^{53}$ and $5^{q}<2^{24}$. These inequalities become $q\leq 22$ and
$q\leq 10$. For example, we have that $1\times 10^{22}$ is an exact 64-bit
number while $1\times 10^{23}$ is not.
* When $q<0$, we have that $w=5^{-q}\times 2^{-q}\times 2^{p}\times m$. We have
that $5^{-q}$ divides $w$. If we assume that $w<2^{64}$ then we have that
$5^{-q}<2^{64}$ or $q\geq-27$. For example, the number
$7450580596923828125\times 10^{-27}$ is the smallest exact 64-bit number. It
follows that no exact number is sufficiently small to qualify as a subnormal
value: the largest subnormal number has a small decimal power (e.g., $\approx
10^{-38}$ in the 32-bit case).
Thus we have that exact numbers must be of the form $w\times 10^{q}$ with
$q\in[-27,22]$ (64-bit case) or $q\in[-27,10]$ (32-bit case) subject to the
constraint that the decimal significand can be stored in a 64-bit value. Yet
floating-point numbers occupy a much wider range (e.g., from $4.9\times
10^{-324}$ to $1.8\times 10^{308}$). In other words, exact numbers are only
possible when the decimal exponent is near zero.
To find the nearest floating-point number when parsing, it is almost always
sufficient to round to the nearest value without an exact computation.
However, when the number we are parsing might fall exactly between two
numbers, more care is needed. The IEEE-754 standard recommends that we round
to even. We may need an exact computation to apply the round-ties-to-even
strategy. (We focus solely on rounding ties to even, as it is ubiquitous;
however, our approach could be extended to other rounding modes.) The sign can
be ignored when rounding ties to even: if a value is exactly between the two
nearest floating-point numbers and they have different signs, then the
midpoint value must be zero, by symmetry.
It may seem that we could generate many cases where we fall exactly between
two floating-point numbers. Indeed, it suffices to take any floating-point
number that is not the largest one, and then take the next largest floating-
point number. From these two numbers, we pick a number that is right in-
between and we have a number that requires rounding to even. However, for such
a half-way number to be a concern to us, it must be represented exactly in
decimal form using a small number of decimal digits. In particular, the
decimal significand must be divisible by $5^{-q}$ and yet must be no larger
than $2^{64}$. It implies that the decimal exponent cannot be too small
($q\geq-27$). It also implies that the nearby binary floating-point values are
normal numbers.
Let us formalize the analysis. A mid-point between two floating-point numbers,
$m\times 2^{p}$ and $(m+1)\times 2^{p}$, can be written as $(2m+1)\times
2^{p-1}$. Assume that both numbers $m\times 2^{p}$ and $(m+1)\times 2^{p}$ can
be represented exactly using a standard floating-point type: for 64-bit
floating point numbers, it implies that $m+1<2^{53}$; for 32-bit numbers it
implies $m+1<2^{24}$. We only need rounding if $(2m+1)\times 2^{p-1}$ cannot
be represented. There are two reasons that might explain why a number cannot
be represented. Either it requires a power of two that is too small or too
large, or else its significand requires too many bits. Because the value is
exactly between two numbers that can be represented, we know that it is not
outside the bounds of the power of two. Thus the significand must require too
many bits. Furthermore, both $m\times 2^{p}$ and $(m+1)\times 2^{p}$ must be
normal numbers. Hence the fact that $2m+1$ has too many bits implies that
$2m+1\in(2^{53},2^{54}]$ for 64-bit floating point numbers and that
$2m+1\in(2^{24},2^{25}]$ for 32-bit numbers.
* When $q\geq 0$, we have that $5^{q}\leq 2m+1$. In the 64-bit case, we have
$5^{q}\leq 2m+1\leq 2^{54}$ or $q\leq 23$. In the 32-bit case, we have
$5^{q}\leq 2m+1\leq 2^{25}$ or $q\leq 10$.
* When $q<0$, we have $w\geq(2m+1)\times 5^{-q}$. We must have that $w<2^{64}$
so $(2m+1)\times 5^{-q}<2^{64}$. We have that $2m+1>2^{53}$ (64-bit case) or
$2m+1>2^{24}$ (32-bit case). Hence, we must have $2^{53}\times 5^{-q}<2^{64}$
(64-bit) and $2^{24}\times 5^{-q}<2^{64}$ (32-bit). Hence we have
$5^{-q}<2^{11}$ or $q\geq-4$ (64-bit case) and $5^{-q}<2^{40}$ or $q\geq-17$
(32-bit case).
Thus we only need to round ties to even when $q\in[-4,23]$ (in the 64-bit
case) or $q\in[-17,10]$ (in the 32-bit case). In both cases, the power of five
($5^{|q|}$) fits in a 64-bit word.
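These bounds are easy to sanity-check with arbitrary-precision integers (a quick verification sketch, not from the paper):

```python
# 64-bit case: 5**q <= 2**54 exactly when q <= 23.
assert 5**23 <= 2**54 and 5**24 > 2**54
# 32-bit case: 5**q <= 2**25 exactly when q <= 10.
assert 5**10 <= 2**25 and 5**11 > 2**25
# Negative exponents, 64-bit: 2**53 * 5**(-q) < 2**64 forces q >= -4.
assert 2**53 * 5**4 < 2**64 and 2**53 * 5**5 >= 2**64
# Negative exponents, 32-bit: 2**24 * 5**(-q) < 2**64 forces q >= -17.
assert 2**24 * 5**17 < 2**64 and 2**24 * 5**18 >= 2**64
# In every tie-prone case, the power of five fits in a 64-bit word.
assert all(5**abs(q) < 2**64 for q in range(-17, 24))
```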
## 7 Most Significant Bits of a Product
When converting decimal values to binary values, we may need to multiply or
divide by large powers of ten (e.g., $10^{300}=5^{300}2^{300}$). Mainstream
processors compute the 128-bit product of two 64-bit integers using one or two
fast instructions: e.g., with the single instruction imul (x64 processors) or
two instructions umulh and mul (aarch64 processors). However, we cannot
represent an integer like $5^{300}$ using a single 64-bit integer. We may
represent such large integers using multiple 64-bit words, henceforth a
_multiword integer_.
We may compute the product between two multiword integers starting from the
least significant bits. Thus if we are multiplying an integer that requires a
single machine word $w$ with an integer that requires $n$ machine words, we
can use $n$ 64-bit multiplications starting with a multiplication between the
word $w$ and the least significant word of the other integer, going up to the
most significant words. See Algorithm 2.
1:an integer $w\in(0,2^{64})$
2:a positive integer $b$ represented as $n$ words ($n>0$)
$b_{0},b_{1},\ldots,b_{n-1}$ such that $b=\sum_{i=0}^{n-1}b_{i}2^{64i}$.
3:Allocate $n+1$ words $u_{0},u_{1},\ldots,u_{n}$
4:$p\leftarrow w\times b_{0}$
5:$u_{0}\leftarrow p\bmod 2^{64}$
6:$r\leftarrow p\div 2^{64}$
7:for $i=1,\ldots,n-1$ do
8: $p\leftarrow w\times b_{i}$ $\triangleright$ $p\leq(2^{64}-1)^{2}$
9: $p\leftarrow p+r$ $\triangleright$ $p\leq 2^{128}-2^{64}+1$
10: $u_{i}\leftarrow p\bmod 2^{64}$
11: $r\leftarrow p\div 2^{64}$
12:end for
13:$u_{n}\leftarrow r$
14:Return: The result of the multiplication as an $n+1$-word $u$ such that
$u=\sum_{i=0}^{n}u_{i}2^{64i}$.
Algorithm 2 Conventional algorithm to compute the product of a single-word
integer and a multiple-word integer.
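Algorithm 2 translates directly into code. The following Python sketch (with `mul_word_by_multiword` as our own name) models 64-bit words with arbitrary-precision integers and checks the result against exact arithmetic:

```python
MASK = (1 << 64) - 1  # a 64-bit word

def mul_word_by_multiword(w, b_words):
    """Algorithm 2: multiply a single 64-bit word w by a multiword integer
    given as 64-bit words (least significant first); returns n+1 words."""
    u = []
    r = 0                      # running carry word
    for b in b_words:
        p = w * b + r          # at most (2**64-1)**2 + (2**64-1) < 2**128
        u.append(p & MASK)     # low 64 bits become the next result word
        r = p >> 64            # high 64 bits carry into the next iteration
    u.append(r)
    return u

# Check against exact arithmetic with a 12-word integer (5**308).
b = 5**308
n = -(-b.bit_length() // 64)                        # number of 64-bit words
b_words = [(b >> (64 * i)) & MASK for i in range(n)]
w = 0xDEADBEEFCAFEBABE
u = mul_word_by_multiword(w, b_words)
assert sum(ui << (64 * i) for i, ui in enumerate(u)) == w * b
```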
Such a conventional algorithm is inefficient when we only need to approximate
the product. For example, maybe we only want the most significant word of the
product and we would like to do the computation using only one or two
multiplications. Thankfully it is often possible in practice. Such partial
multiplications are sometimes called _truncated_ multiplications [20] and the
result is sometimes called a _short_ product [21, 22, 23].
1:an integer $w\in(0,2^{64})$
2:a positive integer $b$ represented as $n$ words ($n>0$)
$b_{0},b_{1},\ldots,b_{n-1}$ such that $b=\sum_{i=0}^{n-1}b_{i}2^{64i}$.
3:a desired number of exact words $m\in(0,n+1]$
4:Allocate $n+1$ words $u_{0},u_{1},\ldots,u_{n}$
5:$p\leftarrow w\times b_{n-1}$
6:$u_{n-1}\leftarrow p\bmod 2^{64}$
7:$u_{n}\leftarrow p\div 2^{64}$
8:if $m=1$ and $u_{n-1}<2^{64}-w$ then
9: Return:$u_{n}$ $\triangleright$ Stopping condition
10:end if
11:for $i=n-2,n-3,\ldots,0$ do
12: $p\leftarrow w\times b_{i}$
13: $u_{i}\leftarrow p\bmod 2^{64}$
14: if $u_{i+1}+(p\div 2^{64})\geq 2^{64}$ then
15: add 1 to $u_{i+2}$; if it reaches $2^{64}$, set it to zero and add 1 to
$u_{i+3}$ and so forth up to $u_{n}$ potentially
16: end if
17: $u_{i+1}\leftarrow(u_{i+1}+(p\div 2^{64}))\bmod 2^{64}$
18: if $m\leq n-i$ and $u_{i}<2^{64}-w$ then
19: Return:$u_{n-m+1},\ldots,u_{n}$ $\triangleright$ Stopping condition
20: end if
21: if $m<n-i$ and $u_{i+1}<2^{64}-1$ then
22: Return:$u_{n-m+1},\ldots,u_{n}$ $\triangleright$ Stopping condition
23: end if
24:end for
25:Return:$u_{n-m+1},\ldots,u_{n}$.
Algorithm 3 Algorithm to compute the $m$ most significant words of the product
of a single-word integer and a multiple-word integer.
Suppose that we have computed the product of the single-word integer ($w$)
with the $k$ most significant words of the multiword integer: we have computed
the $k+1$ words of the product $w\times(\sum_{i=n-k}^{n-1}b_{i}2^{64i})\div
2^{64(n-k)}$. Compared with the $k+1$ most significant words of the full
product $(w\times(\sum_{i=0}^{n-1}b_{i}2^{64i}))\div 2^{64(n-k)}$, we are
possibly underestimating because we omit the contribution of the product
between the word $w$ and the $n-k$ least significant words. These $n-k$ least
significant words have maximal value $2^{64(n-k)}-1$. Their product with the
word $w$ is thus at most $2^{64(n-k)}w-w$ and their contribution to the most
significant words is at most $(2^{64(n-k)}w-w)\div 2^{64(n-k)}=w-1$. Hence if
the least significant computed word is smaller than $2^{64}-w+1$, then all
computed words are exact except maybe for that least significant one: our
short product matches the full product. Thus we have a _stopping condition_: if
we only want the $k$ most significant words of the product, we can compute the
$k+1$ most significant words from the $k$ most significant words of the
multiword integers, and stop if the least significant word of the product is
smaller than $2^{64}-w+1$. This stopping condition is most useful if $w$ is
small ($w\ll 2^{64}$). We also have another stopping condition that is more
generally useful. Even if the least significant word is larger than
$2^{64}-w+1$, then the second least significant word needs to be incremented
by one in the worst case. If the second least significant word is not
$2^{64}-1$, then all other more significant words are exact. That is, we have
$k-1$ exact most significant words if the second last of our $k+1$ most
significant words is not $2^{64}-1$.
By combining these two conditions, we rarely have to compute more than 3 words
using two multiplications to get the exact value of the most significant word.
Algorithm 3 presents a general algorithm. We can stop maybe even earlier if we
need even less than the most significant word, say $t$ bits. Indeed, unless
all of the less significant bits in the computed most significant word have
value 1, then an overflow (+1) does not affect the most significant bits of
the most significant word. Thus the condition $u_{n-1}<2^{64}-w$ (line 8 in
Algorithm 3) can be replaced by ($u_{n-1}<2^{64}-w$ or $u_{n}\bmod
2^{64-t}\not=2^{64-t}-1$) if we only need $t$ exact bits of the product.
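The first stopping condition can be exercised in a short sketch (our own helper names; 64-bit words are modeled with Python integers, and the input is constructed so that the carry-free condition provably holds):

```python
MASK = (1 << 64) - 1

def words_of(x, n):
    """Represent x as n 64-bit words, least significant first."""
    return [(x >> (64 * i)) & MASK for i in range(n)]

def truncated_product_top_words(w, b, n, k):
    """Top k+1 words of w*b, computed from only the top k words of the
    n-word integer b. Returns (words, carry_free)."""
    top = b >> (64 * (n - k))
    words = words_of(w * top, k + 1)
    # The omitted n-k low words add at most w-1 at the least computed word:
    # if it is no larger than 2**64 - w, no carry reaches the higher words.
    carry_free = words[0] <= (1 << 64) - w
    return words, carry_free

# An 11-word integer with small top words so the stopping condition triggers.
b = (0x8000000000000000 << (64 * 10)) | (1 << (64 * 9)) | MASK
w = 5
words, carry_free = truncated_product_top_words(w, b, n=11, k=2)
assert carry_free
# The computed words above the least significant one match the full product.
exact_top = words_of((w * b) >> (64 * 9), 3)
assert words[1:] == exact_top[1:]
```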
## 8 Multiplication by Positive Powers of Five
When parsing a decimal number, we follow the general strategy of first
identifying the non-negative decimal significand $w$ and its corresponding
exponent $q$. We then seek to convert $w\times 10^{q}$ to a binary floating-
point number. For example, given the mass of the Earth in kilograms as
$5.972\times 10^{24}$, we might parse it first as $5972\times 10^{27}$. Our
goal is to represent it as a nearest binary floating-point number such as
$5561858415603638\times 2^{30}$. The sign bit is handled separately.
The largest integer we can represent with a 64-bit floating-point number is
$\approx 1.8\times 10^{308}$. Thus, when processing numbers of the form
$w\times 10^{q}$ for non-negative powers of $q$, we only have to worry about
$q\in[0,308]$. If any larger value of $q$ is found and the decimal significand
is non-zero ($w>0$), the result is an infinity value (either $+\infty$ or
$-\infty$).
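Python's built-in parser, which performs a correctly rounded decimal-to-double conversion, illustrates this overflow rule:

```python
# Values beyond the largest finite double overflow to (signed) infinity.
assert float("1e309") == float("inf")
assert float("-1e309") == float("-inf")
# The largest finite 64-bit value is just under 1.7976931348623157e308.
assert float("1.7976931348623157e308") < float("inf")
```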
Figure 1: Number of 64-bit words necessary to represent $5^{q}$ exactly for
$q\in[0,308]$.
Our first step is to expand the exponent: $w\times 10^{q}=w\times 2^{q}\times
5^{q}$. Thus we seek to compute $w\times 5^{q}$. Though the number $5^{308}$
may appear large, it only requires twelve 64-bit words since
$5^{308}<2^{64\times 12}$. Storing all of these words requires about
$15\,\mathrm{KiB}$: we need between 1 and 12 words per power
(see Fig. 1). Thus it could be practical to memoize all of the exponents
$5^{q}$ for $q=1,\ldots,308$. In § 7, we compute the most significant word of
a product with few words of the multiword integer: it follows that even if we
need, in the worst case, $15\,\mathrm{KiB}$ of storage, an
actual implementation may touch only a fraction of that memory. Thus it may be
more efficient to only store two 64-bit words per power, as long as we can
fall back on a higher-precision approach. We then use only
$5\,\mathrm{KiB}$ of storage (three times less). Effectively,
given a large power of five, we store 128 bits of precision. We truncate the
result, discarding its less significant words.
It could be that the most significant word of the product $w\times 5^{q}$
contains many leading zeros. Yet we desire a normalized result, without
leading zeros for efficiency and simplicity reasons.
* •
We store the powers of five in a format where the most significant bit of the
most significant word is 1: hence, we shift the power of five adequately:
e.g., instead of storing $5^{2}$, we store $5^{2}\times 2^{59}$. See Table 3.
* •
Further, we shift $w$ by an adequate power of two so that the
63${}^{\textrm{rd}}$ bit has value 1. It is always possible as $w$ is non-
zero; the case when $w=0$ is handled separately. Normalizing numbers such that
they have no leading zeros is inexpensive: the number of leading zeros can be
computed using a single instruction on popular processors (clz on ARM, lzcnt
on x64).
$q$ | first word | $q$ | first word | second word
---|---|---|---|---
0 | 8000000000000000 | 28 | 813f3978f8940984 | 4000000000000000
1 | a000000000000000 | 29 | a18f07d736b90be5 | 5000000000000000
2 | c800000000000000 | 30 | c9f2c9cd04674ede | a400000000000000
3 | fa00000000000000 | 31 | fc6f7c4045812296 | 4d00000000000000
4 | 9c40000000000000 | 32 | 9dc5ada82b70b59d | f020000000000000
5 | c350000000000000 | 33 | c5371912364ce305 | 6c28000000000000
6 | f424000000000000 | 34 | f684df56c3e01bc6 | c732000000000000
7 | 9896800000000000 | 35 | 9a130b963a6c115c | 3c7f400000000000
8 | bebc200000000000 | 36 | c097ce7bc90715b3 | 4b9f100000000000
9 | ee6b280000000000 | 37 | f0bdc21abb48db20 | 1e86d40000000000
10 | 9502f90000000000 | 38 | 96769950b50d88f4 | 1314448000000000
11 | ba43b74000000000 | 39 | bc143fa4e250eb31 | 17d955a000000000
12 | e8d4a51000000000 | 40 | eb194f8e1ae525fd | 5dcfab0800000000
13 | 9184e72a00000000 | 41 | 92efd1b8d0cf37be | 5aa1cae500000000
14 | b5e620f480000000 | 42 | b7abc627050305ad | f14a3d9e40000000
15 | e35fa931a0000000 | 43 | e596b7b0c643c719 | 6d9ccd05d0000000
16 | 8e1bc9bf04000000 | 44 | 8f7e32ce7bea5c6f | e4820023a2000000
17 | b1a2bc2ec5000000 | 45 | b35dbf821ae4f38b | dda2802c8a800000
18 | de0b6b3a76400000 | 46 | e0352f62a19e306e | d50b2037ad200000
19 | 8ac7230489e80000 | 47 | 8c213d9da502de45 | 4526f422cc340000
20 | ad78ebc5ac620000 | 48 | af298d050e4395d6 | 9670b12b7f410000
21 | d8d726b7177a8000 | 49 | daf3f04651d47b4c | 3c0cdd765f114000
22 | 878678326eac9000 | 50 | 88d8762bf324cd0f | a5880a69fb6ac800
23 | a968163f0a57b400 | 51 | ab0e93b6efee0053 | 8eea0d047a457a00
24 | d3c21bcecceda100 | 52 | d5d238a4abe98068 | 72a4904598d6d880
25 | 84595161401484a0 | 53 | 85a36366eb71f041 | 47a6da2b7f864750
26 | a56fa5b99019a5c8 | 54 | a70c3c40a64e6c51 | 999090b65f67d924
27 | cecb8f27f4200f3a | 55 | d0cf4b50cfe20765 | fff4b4e3f741cf6d
Table 3: Most significant bits in hexadecimal form of the powers $5^{q}$ for
some exponents $q$. The values are normalized by multiplication with a power
of two so that the most significant bit is always 1. For each power, the first
word represents the most significant 64 bits, the second word the next most
significant 64 bits. For $q\leq 27$, the second word is made of zeros and
omitted. In practice, we need to materialize this table for up to $q=308$ to
cover all of the relevant powers of five for 64-bit numbers.
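The entries of Table 3 can be regenerated in a few lines (a sketch; `normalized_pow5` is our own name, not from the paper):

```python
def normalized_pow5(q):
    """Most significant 128 bits of 5**q, shifted so that the top bit is 1.
    Returns (first_word, second_word) as in Table 3."""
    p = 5**q
    shift = 128 - p.bit_length()
    if shift >= 0:
        p <<= shift            # small powers: shift left into 128 bits
    else:
        p >>= -shift           # large powers: truncate to the top 128 bits
    return p >> 64, p & ((1 << 64) - 1)

# Spot-check a few rows of Table 3.
assert normalized_pow5(2) == (0xC800000000000000, 0)
assert normalized_pow5(28) == (0x813F3978F8940984, 0x4000000000000000)
assert normalized_pow5(55) == (0xD0CF4B50CFE20765, 0xFFF4B4E3F741CF6D)
```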
Hence, because of our normalization, the computed product is at least as large
as $2^{63}\times 2^{63}=2^{126}$. That is, as a 128-bit value, it has at most
one leading 0. We can check for this case and shift the result by one bit
accordingly. To compute the product of the normalized decimal significand with
normalized words representing the power of five (see in Table 3), we use up to
two multiplications.
* •
When $5^{q}<2^{64}$, then a single multiplication always provides an exact
result. In particular, it implies that whenever we need to round ties to even,
we have the exact product (see § 8.1).
* •
We can always do just one multiplication as long as it provides the number of
significant bits of the floating-point standard (53 bits for 64-bit numbers
and 24 bits for 32-bit numbers) plus one extra bit to determine the rounding
direction, and one more bit to handle the case where the computed product has
a leading zero. It suffices to check the least significant bits of the most
significant 64 bits and verify that there is at least one zero bit out of
9 bits for 64-bit numbers and out of 38 bits for 32-bit numbers.
* •
When that is not the case, we execute a second multiplication. This second
multiplication is always sufficient to compute the product exactly as long as
$q\leq 55$ since $5^{55}<2^{128}$. We have that the largest 32-bit floating-
point number is $\approx 3.4\times 10^{38}$ so that exponents greater than
$55$ are irrelevant in the sense that they result in infinite values as long
as the significand is non-zero. However, 64-bit floating-point numbers can be
larger. For larger values of $q$, we have that the most significant 64 bits of
the truncated product are exact when the second most significant word is not
filled with 1-bits ($2^{64}-1$). When it is filled with ones then the
computation of a third (or subsequent) multiplication could add one to this
second word which would overflow into the most significant word, adding a
value of one to it as well. A value that seemed to lie just slightly under
the midpoint between two floating-point values (and would thus be rounded
down) could then switch to just over the midpoint (and thus be rounded up). In
this unlikely case, when
the least significant 64 bits of the most significant 128 bits of the computed
product are all ones, we choose to fall back on a higher-precision approach.
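The two-step product can be sketched as follows (a simplification with our own names: the power table is modeled by exact arithmetic, and the rare higher-precision fallback is omitted). We check it on the input 2440254496e57, which the paper works through in detail:

```python
MASK = (1 << 64) - 1

def pow5_128(q):
    """Top 128 bits of 5**q (truncated), normalized so the top bit is 1."""
    p = 5**q
    s = 128 - p.bit_length()
    return (p << s) if s >= 0 else (p >> -s)

def product_approximation(w, q, precision=55):
    """Most significant 64 bits of (normalized w) * (normalized 5**q),
    using at most two 64-bit-style multiplications."""
    w <<= 64 - w.bit_length()              # normalize w: bit 63 set
    p5 = pow5_128(q)
    hi5, lo5 = p5 >> 64, p5 & MASK
    product = w * hi5                      # first multiplication (128 bits)
    mask = (1 << (64 - precision)) - 1     # low 9 bits when precision = 55
    if (product >> 64) & mask == mask:     # all ones: need more precision
        product += (w * lo5) >> 64         # second multiplication, with carry
    return product >> 64                   # most significant 64 bits

assert product_approximation(2440254496, 57) == 0x5CAFB867790EA400
```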
###### Technical Remark 1
To be more precise, before we fall back, we could check that out of the most
significant 128 bits, all but the leading 55 bits (64-bit case) or leading 26
bits are ones, instead of merely checking the least significant 64 bits.
However, we can show that checking that the least significant 64 bits of the
truncated 128-bit product are all ones is sufficient. We sketch the proof.
Before the computation of the second product, the most significant 64 bits of
the product have trailing ones (e.g., 9 bits for 64-bit numbers), otherwise we
would not have needed a second multiplication. If the second multiplication
does not overflow into the most significant 64 bits, then the final result
still has trailing ones in its most significant 64 bits. Suppose it is not the
case: there was an overflow in the most significant 64 bits following the
second multiplication. When the second product overflows into the most
significant 64 bits (turning these ones into zeros), the second 64-bit word of
the product is at most $2\times(2^{64}-1)\bmod 2^{64}=2^{64}-2$. Thus when the
second most significant 64 bits are all ones then there was no overflow into
the most significant 64-bit word by the second multiplication and the least
significant bits of the first 64-bit word of the product are also ones.
Once we have determined that we have sufficient accuracy, we either need to
check for round-to-even cases ($q\leq 23$), see § 8.1, or else we proceed with
the general case. In the general case, we consider the most significant 54
bits (64-bit numbers) or 24 bits (32-bit numbers), not counting the possible
leading zero, and then we round up or down based on the least significant bit.
E.g., we round down to 53 bits in the 64-bit case when the least significant
bit out of 54 bits is zero, otherwise we round up. If all of the bits are
ones, rounding up overflows into a more significant bit and we shift the
result by one bit so we have the desired number of bits (53 bits or 24 bits).
In § 10, we show how to compute the binary exponent efficiently. Whenever we
find that the resulting binary floating-point number $m\times 2^{p}$ is too
large, we return the infinite value. The upper limits are $2^{1024}$ ($\approx
1.7976934\times 10^{308}$) for 64-bit numbers and $2^{128}$ ($\approx 3.4028236\times
10^{38}$) for 32-bit numbers. The infinite values are signed
($+\infty,-\infty$).
###### Example 1
Consider the string 2440254496e57. We get $w=2440254496$ and $q=57$. We
normalize the decimal significand $w$ by shifting it by 32 bits so that the
most significant bit is set (considering $w$ as a 64-bit word). We look up the
64 most significant bits of $5^{57}$ which are given by the word value
11754943508222875079 (or $5^{57}\div 2^{133-64}$). The most significant 64
bits of the product are, in hexadecimal notation, 0x5cafb867790ea3ff. All of
the least significant 9 bits are set. Thus we need to execute a second
multiplication. In practice, it is a rare instance. We load the next most
significant 64 bits of the power of five (0xaff72d52192b6a0d) and compute the second
product. After updating our product, we get that the most significant word is
0x5cafb867790ea400 whereas the second most significant word is
0x1974b67f5e78017. The most significant 55 bits of the product have been
computed exactly. The most significant bit of 0x5cafb867790ea400 is zero. We
select the most significant 55 bits of the word: 13044452201105234. It is an
even value so we do not round up. Finally, we shift the result to get the
binary significand $m=6522226100552617\in[2^{52},2^{53})$.
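We can replay this example against exact arbitrary-precision arithmetic (a verification sketch, not the paper's code; powers of two do not affect the significand bits, so it suffices to examine $w\times 5^{57}$):

```python
w, q = 2440254496, 57
exact = w * 5**q                       # exact product
top54 = exact >> (exact.bit_length() - 54)
assert top54 == 13044452201105234      # the 54-bit window from the text
assert top54 & 1 == 0                  # round bit is zero: round down
assert top54 >> 1 == 6522226100552617  # the binary significand m
```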
### 8.1 Round Ties to Even with Positive Powers
To handle rounding ties to even when the decimal power is positive ($q\geq
0$), we only have to be concerned when the exponent $q$ is sufficiently small:
$q\leq 23$ assuming that we want to support both 64-bit numbers (which require
$q\leq 23$) and 32-bit numbers (which require $q\leq 10$) due to the fact that
we require the decimal significand to be smaller than $2^{64}$ (see § 6). In
such cases, the product corresponding to $m\times 5^{q}$ is always exact after
one multiplication (since $5^{q}<2^{64}$).
We need to round-to-even when the resulting binary significand uses one extra
significant bit for a total of 54 bits in the 64-bit case and 25 bits in the
32-bit case. We can check for the occurrence of a round-to-even case as
follows:
* •
The most significant 64-bit word has exactly 10 trailing zeros (64-bit case)
or exactly 39 trailing zeros (32-bit case). (We assume that the most
significant word of the product contains a leading 1-bit, maybe following a
shift. If the word has a leading zero, we must subtract one from these
numbers: 9 for the 64-bit case and 38 for the 32-bit case.)
* •
The second 64-bit word of the product containing the least significant bits
equals zero.
In such cases, we get a 54-bit (64-bit case) or a 25-bit (32-bit case) word
that ends with a 1-bit. We need to round to a 53-bit or 24-bit value. We can
either round up or round down. To implement the round to even, we check the
second last significant bit. If it is zero, we round down; if it is one, we
round up. When rounding up, if we overflow because we only have 1-bits, we
shift the result to get the desired number of bits (53 bits or 24 bits).
When the round-to-even case is not encountered, we either round up or down as
in the general case. Given that we have computed the exact product ($q\leq
23$), there can be no ambiguity.
###### Technical Remark 2
When $q\leq 23$, the second most significant 64-bit word must have some
trailing zeros, whether we are in a round-to-even case or not. Indeed, because
we always have $5^{q}\leq 2^{54}$ and because we store the powers of five
shifted so that they have a leading 1-bit in their respective 64-bit words
(e.g., $5^{23}$ is loaded as $5^{23}\times 2^{10}$), we have at least 10
trailing zeros, irrespective of the word $w$ as long as $q\leq 23$. Hence we
can replace a comparison with zero with a check that it is no larger than a
small value like 1, if it is convenient. It turns out to be useful to us for
simplifying Algorithm 1, see line 18.
###### Example 2
Consider the input string 9007199254740993. It is equal to $2^{53}+1$, a
number that cannot be represented as a standard 64-bit floating-point number.
After parsing the string, we get $w=9007199254740993$ and $q=0$. We normalize
$w$ to 9223372036854776832 ($2^{63}+2^{10}$). We multiply this shifted $w$ by
$5^{q}$ which, in this case, is just 1 ($5^{q}=1$). The precomputed $5^{q}$
has been normalized to the 64-bit value $2^{63}$. Hence we get the product
$2^{63}w$. We then select the most significant 54 bits of the result after
skipping the leading zero bit ($2^{53}+1$). Because this result has its least
significant bit set, and because all the bits that were not selected are zero,
we know that we need to round to even. The second least significant bit is
zero, and we know that we need to round down. We therefore end up with the
value $2^{53}$ (a binary significand of $2^{52}$ and an exponent of one).
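Python's built-in parser, which rounds correctly to nearest with ties to even, confirms this example:

```python
# 9007199254740993 == 2**53 + 1 lies exactly between 2**53 (even binary
# significand) and 2**53 + 2 (odd significand): it rounds down to 2**53.
assert float("9007199254740993") == 9007199254740992.0
# The next halfway case, 2**53 + 3, lies between 2**53 + 2 (odd) and
# 2**53 + 4 (even): it rounds up.
assert float("9007199254740995") == 9007199254740996.0
```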
## 9 Division by Powers of Five
We turn our attention to negative decimal exponents ($q<0$). Consider the
decimal number $9.109\times 10^{-31}$ corresponding to the mass of the
electron. We can write it as $9109\times 10^{-34}$ or $9109\times
2^{-34}\times 5^{-34}$. Though we can represent the integer $5^{34}$ using few
binary words, the value $5^{-34}$ could only be approximated in binary form.
E.g., it is approximately equal to $4676805239458889\times 2^{-131}$. To
convert it to a binary floating-point number, we must find $m$ and $p$ such
that $9109\times 2^{-34}\times 5^{-34}\approx m\times 2^{p}$.
Consider the general case, replacing $9109\times 10^{-34}$ by $w\times 10^{q}$
where $0<w<2^{64}$ and $q<0$. We need to approximate the decimal floating-
point number $w\times 10^{q}$ as closely as possible with the binary floating-
point number $m\times 2^{p}$. We formalize $m\times 2^{p}\approx w\times
10^{q}$ as the equation $m\times 2^{p}+\epsilon 2^{p}=w\times 10^{q}$ where
$\epsilon$ is the approximation error. If the binary significand $m$ is chosen
optimally, then the error must be as small as possible: $|\epsilon|\leq 1/2$.
Dividing throughout, we get $m+\epsilon=w\times 2^{q-p}/5^{-q}$ or
$m=\operatorname{round}(w\times 2^{q-p}/5^{-q})$. Thus the problem is reduced
to a division by an integer power of five ($5^{-q}$) of an integer $w$ that
fits in a 64-bit word multiplied by a power of two. The correct value of $p$
is such that $m$ should fit in 53 bits (64-bit case) or 24 bits (32-bit case).
In practice, we can derive the correct value $m$ after rounding if we compute
the division $w\times 2^{b}/5^{-q}$ for a sufficiently large power of two
$2^{b}$.
Dividing a large integer by another large integer could be expensive.
Thankfully, we can compute the quotient and remainder of such a division by
the divisor $d=5^{-q}$ using a multiplication followed by a right shift when
the divisor $d$ is known ahead of time. Many optimizing compilers have been
using such a strategy for decades. We apply the following result derived from
Warren [24, 25].
###### Theorem 9.1.
Consider an integer divisor $d>0$ and a range of integer numerators
$n\in[0,N]$ where $N\geq d$ is an integer. We have that
$\displaystyle n\div d=\operatorname{floor}(c\times n/t)$
for all integer numerators $n$ in the range if and only if
$\displaystyle 1/d\leq
c/t<\left(1+\frac{1}{N-\operatorname{remainder}(N+1,d)}\right)1/d.$
Intuitively, if $c/t$ is close to $1/d$, we can replace $n/d$ by $n\times
c/t$. We apply Theorem 9.1 by picking a power of two for $t$ so that the
computation of $\operatorname{floor}(c\times n/t)$ can be implemented as a
multiplication followed by a shift. Given $t$, the smallest constant $c$ such
that $1/d\leq c/t$ holds is $c=\operatorname{ceiling}(t/d)$. It remains to
check that $c=\operatorname{ceiling}(t/d)$ is sufficiently close to $t/d$:
$\displaystyle\operatorname{ceiling}(t/d)\times
d<\left(1+\frac{1}{N-\operatorname{remainder}(N+1,d)}\right)t.$
We have that $\operatorname{ceiling}(t/d)\times d\leq t-1+d$. By substitution,
it suffices that $d-1<t/(N-\operatorname{remainder}(N+1,d))$; and because
$t/N\leq t/(N-\operatorname{remainder}(N+1,d))$, we have the convenient
sufficient condition $t>(d-1)\times N$. We
have shown Corollary 9.2.
###### Corollary 9.2.
Consider an integer divisor $d>0$ and a range of integer numerators
$n\in[0,N]$ where $N\geq d$ is an integer. We have that
$\displaystyle n\div
d=\operatorname{floor}\left(\operatorname{ceiling}\left(\frac{t}{d}\right)\frac{n}{t}\right)$
for all integer numerators $n$ if $t>(d-1)\times N$.
Observe how we multiply the numerator by a reciprocal ($\frac{t}{d}$) that is
rounded up. It suggests that we need to store precomputed reciprocals while
rounding up. In contrast, when processing positive decimal exponents, we would
merely truncate the power, thus effectively rounding it down.
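Corollary 9.2 can be checked directly (a sketch with an illustrative power-of-five divisor and a sample of numerators):

```python
d = 5**3              # divisor: a small power of five (d = 125)
N = (1 << 64) - 1     # numerators up to N, with N >= d
t = 1 << 71           # t > (d - 1) * N since d - 1 = 124 < 2**7
c = -(-t // d)        # ceiling(t / d): the rounded-up reciprocal
for n in (0, 1, d - 1, d, 123456789, N // 3, N - 1, N):
    # n // d == floor(c * n / t), computed as a multiply and a shift
    assert n // d == (c * n) >> 71
```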
We consider two scenarios. Firstly, the case with negative powers with
exponents near zero requires more care because we may need to round to the
nearest even (§ 9.1). Secondly, we consider the general case ($5^{-q}\geq
2^{64}$) (§ 9.2).
### 9.1 Negative Exponents Near Zero ($q\geq-27$)
We are interested in identifying cases when rounding to even is needed because
of a tie (e.g., $2^{-1022}+2^{-1074}+2^{-1075}$ is one such number). Round-to-
even only happens for negative values of $q$ when $q\geq-4$ in the 64-bit case
and when $q\geq-17$ in the 32-bit case (see § 6). In either case, we have
that $5^{-q}$ fits in a 64-bit word (i.e., $5^{-q}<2^{64}$).
Theorem 9.1 tells us how to quickly compute a quotient given a precomputed
reciprocal, but for values of $q$ near zero, we need to be able to detect when
the remainder is zero, so we know when to round ties to even. We use the
following technical lemma which allows us to verify quickly whether our
significand is divisible by the power of five.
###### Lemma 9.3.
Given an integer divisor $d>0$, pick an integer $K>0$ such that $2^{K}\geq d$.
Given an integer $w>0$, then $(w\times 2^{K})\div d$ is divisible by $2^{K}$
if and only if $w$ is divisible by $d$.
###### Proof 9.4.
If $w$ is divisible by $d$ then $(w\times 2^{K})\div d=(w\div d)\times 2^{K}$
and thus $(w\times 2^{K})\div d$ is divisible by $2^{K}$. Suppose that
$(w\times 2^{K})\div d$ is divisible by $2^{K}$, then we can write $w\times
2^{K}=d\times 2^{K}\times\gamma+\rho$ where $\gamma,\rho$ are non-negative
integers with $\rho<d$. But because $\rho<2^{K}$, we must have $\rho=0$ and so
$w\times 2^{K}=d\times 2^{K}\times\gamma$ or $w=d\times\gamma$ which shows
that $w$ is divisible by $d$.
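A quick exhaustive check of Lemma 9.3 on small values (a verification sketch, not from the paper):

```python
K = 7                 # 2**K >= d for every divisor tested below
for d in (3, 5, 25, 125):
    assert (1 << K) >= d
    for w in range(1, 2000):
        divisible = ((w << K) // d) % (1 << K) == 0
        assert divisible == (w % d == 0)
```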
To detect a zero remainder, we can consider the decimal significand $w$ as a
128-bit integer (made of two 64-bit words). Within the most significant 64-bit
word, we shift $w$ by an adequate power of two so that the
63${}^{\textrm{rd}}$ bit has value 1: let us call the result $w^{\prime}$. The
second, least significant, word is virtual and assumed to be zero. In effect,
we consider the 128-bit value $2^{64}\times w^{\prime}$. We treat this
128-bit value mathematically as a 127-bit value equal to
$w^{\prime}\times 2^{63}$, ignoring the least significant bit. We could ignore
more than one bit, but it is convenient to ignore the least significant bit.
As long as $5^{-q}<2^{63}$, we have that $(w^{\prime}\times 2^{63})\div
5^{-q}\geq 2^{63}$ and we have more than enough accuracy to compute the
desired binary significand. Hence, we can go as low as $q=-27$. (Though we go
as low as $q=-27$, values of $q$ smaller than $-17$ cannot lead to a tie-to-
even scenario using 64-bit or 32-bit floating-point numbers.) We apply Theorem
9.1 to divide this value ($w^{\prime}\times 2^{63}$) no larger than
$N=2^{127}$ by the power of five $5^{-q}$: we set $t=2^{b}$ with
$b=127+\operatorname{ceiling}(\log_{2}(5^{-q}))$ and the reciprocal
$c=\operatorname{ceiling}(t/d)=\operatorname{ceiling}(2^{b}/5^{-q})$. We can
check that $c\in[2^{127},2^{128})$. We can verify the condition
($t>(d-1)\times N$) of Corollary 9.2. The values of the 128-bit reciprocals
are given in Table 4.
$q$ | reciprocal $\div 2^{64}$ | reciprocal $\bmod 2^{64}$
---|---|---
-1 | cccccccccccccccc | cccccccccccccccd
-2 | a3d70a3d70a3d70a | 3d70a3d70a3d70a4
-3 | 83126e978d4fdf3b | 645a1cac083126ea
-4 | d1b71758e219652b | d3c36113404ea4a9
-5 | a7c5ac471b478423 | 0fcf80dc33721d54
-6 | 8637bd05af6c69b5 | a63f9a49c2c1b110
-7 | d6bf94d5e57a42bc | 3d32907604691b4d
-8 | abcc77118461cefc | fdc20d2b36ba7c3e
-9 | 89705f4136b4a597 | 31680a88f8953031
-10 | dbe6fecebdedd5be | b573440e5a884d1c
-11 | afebff0bcb24aafe | f78f69a51539d749
-12 | 8cbccc096f5088cb | f93f87b7442e45d4
-13 | e12e13424bb40e13 | 2865a5f206b06fba
-14 | b424dc35095cd80f | 538484c19ef38c95
-15 | 901d7cf73ab0acd9 | 0f9d37014bf60a11
-16 | e69594bec44de15b | 4c2ebe687989a9b4
-17 | b877aa3236a4b449 | 09befeb9fad487c3
-18 | 9392ee8e921d5d07 | 3aff322e62439fd0
-19 | ec1e4a7db69561a5 | 2b31e9e3d06c32e6
-20 | bce5086492111aea | 88f4bb1ca6bcf585
-21 | 971da05074da7bee | d3f6fc16ebca5e04
-22 | f1c90080baf72cb1 | 5324c68b12dd6339
-23 | c16d9a0095928a27 | 75b7053c0f178294
-24 | 9abe14cd44753b52 | c4926a9672793543
-25 | f79687aed3eec551 | 3a83ddbd83f52205
-26 | c612062576589dda | 95364afe032a819e
-27 | 9e74d1b791e07e48 | 775ea264cf55347e
Table 4: Values of the 128-bit reciprocals in hexadecimal form for powers with
negative exponents near zero as two 64-bit words. The reciprocal is given by
$\operatorname{ceiling}(\frac{2^{b}}{5^{-q}})$ with
$b=127+\operatorname{ceiling}(\log_{2}(5^{-q}))$.
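The reciprocals of Table 4 can be regenerated as follows (a sketch; `reciprocal_128` is our own name):

```python
def reciprocal_128(q):
    """128-bit rounded-up reciprocal of 5**(-q) for q < 0, as in Table 4."""
    d = 5**(-q)
    # d is never a power of two, so ceiling(log2(d)) == d.bit_length().
    b = 127 + d.bit_length()
    c = -(-(1 << b) // d)            # ceiling(2**b / d)
    return c >> 64, c & ((1 << 64) - 1)

# Spot-check two rows of Table 4.
assert reciprocal_128(-1) == (0xCCCCCCCCCCCCCCCC, 0xCCCCCCCCCCCCCCCD)
assert reciprocal_128(-4) == (0xD1B71758E219652B, 0xD3C36113404EA4A9)
```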
By Theorem 9.1, we have that $(2^{63}\times w^{\prime})\div
5^{-q}=((2^{63}\times w^{\prime})\times c)\div 2^{b}$. The 128-bit constant
$c$ is precomputed for all relevant powers of $q=-1,-2,\ldots,-27$.
Computationally, we do not need to store the product of the shifted 128-bit
$w$ with the 128-bit reciprocal $c$ as a 256-bit product: we compute only the
first two words (i.e., the most significant 128-bit). And then we select only
the most significant 127 bits.
Thus checking that the second (least significant) word of the computed product
is zero except maybe for the least significant bit is enough to determine that
our word ($w$) is divisible by the power of five $5^{-q}$. By dividing (the shifted) $w$ (that is
$w^{\prime}$) by $5^{-q}$, we effectively compute a binary significand $2m+1$
as per the equation $(2m+1)\times 2^{p-1}=w\times 10^{q}$. We need to round to
even whenever $(2m+1)\in[2^{53},2^{54})$ for 64-bit floating-point numbers and
whenever $(2m+1)\in[2^{24},2^{25})$ for 32-bit floating-point numbers.
Otherwise, we round the result normally to the nearest 53-bit word (64-bit
numbers) or 24-bit word (32-bit numbers).
We need up to two 64-bit multiplications. We stop after the first
multiplication if and only if the least significant bits of the most
significant word are not all ones. We want the most significant bits of
the most significant 64-bit word are exact: 55 bits for 64-bit numbers and 26
bits for 32-bit numbers. We have that $64-55$ is 9 and $64-26$ is 38. Hence,
we check the least significant 9 bits for 64-bit numbers and the least
significant 38 bits for 32-bit numbers. As long as we are not in a round-to-
even case, we round up or down based on the least significant selected bit. If
all bits have the value 1, then rounding up overflows into a more significant
bit and we must shift by one bit.
We need to ensure that we correctly identify all ties requiring the round-to-
even strategy. Specifically, we need to never incorrectly classify a number as
a tie, and we need to never miss a tie.
* •
We need to be concerned about a false round-to-even scenario when, after
stopping with just one multiplication, we end up with a misleading result that
could pass as a round-to-even case. Indeed, we can stop after one
multiplication when the least significant bits of the most significant words
are all zeros. However, a round-to-even case cannot occur after a single
multiplication:
1. 1.
It could happen if the least significant $64+9$ bits of the product are zeros.
The 128-bit product of two 64-bit words may only have as many trailing zeros
as the sum of the number of trailing zeros of the first 64-bit word with the
number of trailing zeros of the second word. To get a total of $64+9$ trailing
zeros, assuming that both words are non-zero, we have the necessary conditions
that both words must have at least 10 trailing zeros. Thus, for this problem
to occur, we need for the most significant 64-bit word of the reciprocal $c$
to have at least 10 trailing zeros. We can check that it does not happen:
there are only 17 powers to examine. We find at most 2 trailing zeros. See
Table 5 (third column).
2. 2.
A false round-to-even may also happen if all the least significant $64+9$ bits
of the product are zeros, except for the least significant bit. However, for
the 128-bit product of two 64-bit words to have its least significant bit be
1, we need for both of the 64-bit words to have their least significant bits
set to 1 (they are odd). Given an odd 64-bit integer, there is only one other
64-bit integer such that the least significant 64 bits of the product are 1.
Indeed suppose that $a\times b_{1}=1\bmod 2^{64}$ and $a\times b_{2}=1\bmod
2^{64}$ for numbers in $[0,2^{64})$ then $a\times(b_{1}-b_{2})=0\bmod 2^{64}$
which implies that $b_{1}=b_{2}$. They are effectively multiplicative inverses
(modulo $2^{64}$). We can thus compute the multiplicative inverses (see Fig.
4) and check the full 128-bit product. Again, we only need to examine 17
powers. We find the powers that have an odd integer in their most significant
64 bits, we compute the multiplicative inverse and we compute the full
product. Looking at the most significant 64 bits of the resulting product, we
find that they have at most 5 trailing zeros. See Table 5 (last column).
$q$ | reciprocal $\div 2^{64}$ | inverse | product | 0s
---|---|---|---|---
-3 | 83126e978d4fdf3b | c687d6343eb1a1f3 | 65a5cdedb181dc22 | 1
-4 | d1b71758e219652b | 6978533007ec3183 | 5666aa8c1bca175b | 0
-5 | a7c5ac471b478423 | b464ceec1a874b8b | 76390df51733b898 | 3
-6 | 8637bd05af6c69b5 | 2d28ff519dc1fc9d | 17ad4acbd85ad372 | 1
-9 | 89705f4136b4a597 | 47a5ffb53d302a27 | 26774920b7634d5b | 0
-12 | 8cbccc096f5088cb | ccda17e7d0519ce3 | 709e5881abf430de | 1
-13 | e12e13424bb40e13 | a976a8f009f3ec1b | 950fca8d051f7f36 | 1
-14 | b424dc35095cd80f | 4776114e932f16ef | 32494e3df377fbda | 1
-15 | 901d7cf73ab0acd9 | b7d434f9093d1369 | 677c8a9266f5159b | 0
-16 | e69594bec44de15b | 30fad280461f66d3 | 2c1df79145125a20 | 5
-17 | b877aa3236a4b449 | 89ee897ef59d7df9 | 6363ec689fe3979b | 0
Table 5: Values of the most significant 64 bits of the 128-bit reciprocals in
hexadecimal form for powers of negative exponents near zero and the
multiplicative inverse modulo $2^{64}$ of the reciprocal for odd reciprocals
($q=-1,-2,-7,-8,-10,-11$ are omitted since their reciprocals are even). We
compute the most significant bits of the 128-bit product between the
reciprocal and its inverse. We indicate the number of trailing zeros for the
most significant bits of the product.
* •
We need to be concerned with the reverse scenario where, after a single
multiplication, we stop the computation and fail to detect an actual round-to-
even case. If we stop after one multiplication, then at least one of the least
significant bits (9 bits for 64-bit numbers, 38 bits for 32-bit numbers) of
the most significant 64 bits is zero. In such a case, the 128 most significant
bits of the full (exact) product must end with a long stream of zeros, except
maybe for the least significant bit. We know that the most significant 64 bits
are exact after a single product, except maybe for the need to increment by 1.
The most significant 64 bits cannot be exact after one multiplication if we
have a round-to-even case. So we must increment them by 1 following the second
multiplication, and then the final result contains at least one non-zero bit
in the least significant bits (9 bits for 64-bit numbers, 38 bits for 32-bit
numbers) of the most significant 64 bits. It contradicts the fact that we had
an actual round-to-even case. Hence, we cannot fail to detect an actual round-
to-even case by stopping the computation after one multiplication.
Thus we can identify accurately the round-to-even cases. In these cases, we
proceed as in § 8.1. After discarding a potential leading zero-bit, we have 54
bits (64-bit case) and 25 bits (32-bit case). The least significant bit is
always a 1-bit. We round down when the second least significant bit is zero,
otherwise we round up. When rounding up, we might overflow into an additional
bit if we only have ones, in such a case we shift the result.
### 9.2 Other Negative Powers ($q<-27$)
Consider the case where the decimal exponent is far from zero ($q<-27$). In
such cases, the decimal number can never be exactly in-between two floating-
point numbers: thus with a single extra bit of accuracy, we can safely either
round up or down.
The smallest positive value that can be represented using a 64-bit floating-
point number is $2^{-1074}$. For 32-bit numbers, we have the larger value
$2^{-149}$. Because we have that $w\times 10^{-343}<2^{-1074}$ for all
$w<2^{64}$, it follows that we never have to be concerned with overly small
decimal exponents: when $q<-342$, then the number is assuredly zero.
From the decimal number $m\times 10^{q}$, we seek the binary significand
$m=\operatorname{round}(w\times 2^{q-p}/5^{-q})$ where the binary power $p$ is
chosen such that $m$ is within the range of the floating-point numbers (e.g.,
$m\in[2^{52},2^{53})$). It is enough to compute
$m^{\prime}=\operatorname{floor}(w\times 2^{b}/5^{-q})$ with $b$ large enough
that $m^{\prime}\geq 2^{53}$ so that we can compute
$m=\operatorname{round}(w\times 2^{q-p}/5^{-q})$ accurately by selecting the
most significant 53 bits (64-bit numbers) or 24 bits (32-bit numbers) of the
wider value $m^{\prime}$ and then round it up (or down) based on the
$54^{\mathrm{th}}$ or $25^{\mathrm{th}}$ bit value.
We can pick $b=64+\operatorname{ceiling}(\log_{2}5^{-q})$. We apply Corollary
9.2 with $t=2^{2b}$, $d=5^{-q}$, and $N=(2^{64}-1)2^{b}$. We precompute
$c=\operatorname{ceiling}(t/d)=\operatorname{ceiling}(2^{2b}/5^{-q})$ for all
relevant powers of $q\geq-342$. See Table 6. We only store the most
significant 128 bits of $c$, and rely on a truncated multiplication. Because
there is no concern with rounding to even, we can safely round up from the
most significant bits of the computed quotient. We do just one multiplication
if it provides the number of significant bits of the floating-point standard
(53 bits for 64-bit numbers and 24 bits for 32-bit numbers) plus one
additional bit to determine the rounding direction, and yet one more bit to
handle the scenario where the computed product has a leading zero. We always
stop after this second multiplication when we have a truncated product with
the second most significant word not filled with ones ($2^{64}-1$). Otherwise,
we fall back on a higher-precision approach, an unlikely event.
After possibly omitting the leading zero of the resulting product, we select
the most significant bits (54 bits in the 64-bit case, 25 bits in the 32-bit
case). We then round up or down based on the least significant bit to 53 bits
(64-bit case) or to 24 bits (32-bit case). When rounding up, we might overflow
to an additional bit if we have all ones: in such case we shift to get back 53
bits (64-bit case) or 24 bits (32-bit case).
$q$ | reciprocal (64 msb) | reciprocal (next 64 msb)
---|---|---
-40 | 8b61313bbabce2c6 | 2323ac4b3b3da015
-39 | ae397d8aa96c1b77 | abec975e0a0d081a
-38 | d9c7dced53c72255 | 96e7bd358c904a21
-37 | 881cea14545c7575 | 7e50d64177da2e54
-36 | aa242499697392d2 | dde50bd1d5d0b9e9
-35 | d4ad2dbfc3d07787 | 955e4ec64b44e864
-34 | 84ec3c97da624ab4 | bd5af13bef0b113e
-33 | a6274bbdd0fadd61 | ecb1ad8aeacdd58e
-32 | cfb11ead453994ba | 67de18eda5814af2
-31 | 81ceb32c4b43fcf4 | 80eacf948770ced7
-30 | a2425ff75e14fc31 | a1258379a94d028d
-29 | cad2f7f5359a3b3e | 096ee45813a04330
-28 | fd87b5f28300ca0d | 8bca9d6e188853fc
Table 6: Values of the 128-bit reciprocals in hexadecimal form for negative
exponents as two 64-bit words. The reciprocal is given by
$\operatorname{ceiling}(\frac{2^{2b}}{5^{-q}})$ with
$b=64+\operatorname{ceiling}(\log_{2}5^{-q})$.
###### Example 3
Consider the case of the string 9.109e-31. We parse it as $9109\times
10^{-34}$. We load up the most significant 64 bits of the reciprocal
corresponding to $q=-34$, which is 0x84ec3c97da624ab4 in hexadecimal form (see
Table 6). We normalize 9109 so that, as a 64-bit word, its most significant
bit is 1: $9109\times 2^{50}$. We multiply the two words to get that the most
significant 64 bits of the product are 0x49e6a7201cf62db0 whereas the next most
significant 64 bits are 0x5b10000000000000. We stop the computation since the
second word is not filled with ones. The most significant bit of the product
contains a 0. We shift the most significant 64 bits by 9 bits to get
10400639386286870. The least significant bit is zero so we round down to
5200319693143435 or 0x1279a9c8073d8b in hexadecimal form. We get that
$9109\times 10^{-34}$ is the floating-point number 0x1.279a9c8073d8bp-100. See
Example 4 in § 10 to learn how we determine that the binary exponent is -100.
### 9.3 Subnormals
To represent values that are too small, the floating-point standard uses
special values called subnormals. Whenever we end up with a value $m\times
2^{p}$ with $m\in[2^{52},2^{53})$ (64-bit case) or $m\in[2^{23},2^{24})$
(32-bit) but with $p$ too small, smaller than $-1022-52$ in the 64-bit case or
smaller than $-126-23$ in the 32-bit case, we fall back on the subnormal
representation. It uses a small value for the exponent to represent values in
the range $[2^{-1022-52},2^{-1022})$ (64-bit case) or in the range
$[2^{-126-23},2^{-126})$ (32-bit case). The values are given by $m\times
2^{-1022-52}$ (64-bit) or $m\times 2^{-126-23}$ (32-bit) while allowing $m$ to
be any positive value no larger than $2^{52}$ or $2^{23}$.
To construct the subnormal value, we take the original binary significand $m$
and we divide it by a power of two, with rounding. Thus, for example, if we
are given the 64-bit value $(2^{53}-1)\times 2^{-1022-54}$, we observe that
the power of two is too small ($-1022-54<-1022-52$) by exactly two. Thus we
take the binary significand $2^{53}-1$ and divide it by four, with rounding:
we get $2^{51}$ and so we get the subnormal floating-point number
$2^{51}\times 2^{-1022-52}$. Thankfully, rounding is relatively easy since we
never need to handle the round-to-even case with subnormals, because it only
occurs with powers of exponents near zero.
We should be mindful that, in exceptional cases, the rounding process can lead
us to find that we do not have a subnormal. Indeed, consider the value
$(2^{53}-1)\times 2^{-1022-53}$, its power of two is too small
($-1022-53<-1022-52$) by exactly one. We take the binary significand
$2^{53}-1$ and divide it by two, with rounding, getting $2^{52}$ and so we end
up with the normal number $2^{52}\times 2^{-1022-52}$.
## 10 Computing the Binary Exponent Efficiently
We are approximating a decimal floating-point number $w\times 10^{q}$ with a
binary floating-point number $m\times 2^{p}$. We must compute the binary
exponent $p$. Starting from the power of ten $10^{q}$, we want to write it as a
value in $[1,2)$, as prescribed by the floating-point standard, multiplied by
a power of two. We have two distinct cases depending on the sign of $q$:
* •
when $q\geq 0$, we have $10^{q}=2^{q}\times
5^{q}=\frac{5^{q}}{2^{\operatorname{floor}(\log_{2}5^{q})}}\times
2^{q+\operatorname{floor}(\log_{2}5^{q})}$,
* •
when $q<0$, we have $10^{q}=2^{q}\times
5^{q}=\frac{2^{\operatorname{ceiling}(\log_{2}5^{-q})}}{5^{-q}}\times
2^{q-\operatorname{ceiling}(\log_{2}5^{-q})}$.
We can verify that both constraints are satisfied:
$5^{q}/2^{\operatorname{floor}(\log_{2}5^{q})}\in[1,2)$ and
$2^{\operatorname{ceiling}(\log_{2}5^{-q})}/5^{-q}\in[1,2)$. Hence we have
that the binary powers corresponding to the powers of ten are given by
$q+\operatorname{floor}(\log_{2}(5^{q}))=q-\operatorname{ceiling}(\log_{2}5^{-q})$.
For example, we have that $10^{5}=5^{5}/2^{11}\times
2^{16}=1.52587890625\times 2^{16}$ since
$\operatorname{floor}(\log_{2}5^{5})=11$. Computing $q+\log_{2}(5^{q})$ could
require an expensive iterative process. The decimal exponent $q$ is in limited
range of values, say $q\in(-400,350)$. We have that
$q+\log_{2}(5^{q})=q+q\log_{2}(5)=q(1+\log_{2}(5))$ and $1+\log_{2}(5)\approx
217706/2^{16}$. We can check that over the interval $q\in(-400,350)$, we have
that $q+\operatorname{floor}(\log_{2}(5^{q}))=(217706\times q)\div 2^{16}$
(exactly) as one can verify numerically. The division (by $2^{16}$) can be
implemented as a logical shift. Thus we only require a multiplication followed
by a shift. We initially derived this efficient formula using a
satisfiability-modulo-theories (SMT) solver [26].
In our algorithm, we normalize the decimal significand so that it is in
$[2^{63},2^{64})$. That is, given the string 1e12, we first parse it as the
decimal significand $w=1$ and the decimal exponent $q=12$. We then normalize
$w=1$ to $w^{\prime}=2^{63}$ (shifting it by 63 bits) and we proceed with the
computation of the binary significand. Had we started with $w=2^{4}$ (say),
then we would have shifted by only $63-4$ bits and then the binary exponent
must be incremented by 4. For example, using the input string 16e12 instead of
1e12, we would have used the decimal significand $w=16$ but still ended up
with the normalized significand $w^{\prime}=2^{63}$. Yet the binary exponent
of 16e12 is clearly 4 more than the binary exponent of 1e12. In other words,
we need to take into account the number of leading zeroes of the decimal
significand. Thus we increment the binary exponent by $63-l$ where $l$ is the
number of leading zeros of the original decimal significand $w$ as a 64-bit
word.
For powers of ten, the product of the normalized significand with either the
power of five or its reciprocal has a leading zero since
$2^{63}\times(2^{64}-1)<2^{127}$. When the product is larger and it overflows
in the most significant bit, then the binary exponent must be incremented by
one. Thus we finally have the following formula
$\left(\left(217706\times q\right)\div 2^{16}\right)+63-l+u$
where $u$ is the value of the most significant bit of the product (0 or 1) and
where $l$ is the number of leading zeros of $w$.
Furthermore, when we round up the resulting significand, it may sometimes
overflow: e.g., if the most significant bits of the product are all ones, we
overflow to a more significant bit and we need to shift the result. In such
cases, we increment the binary exponent by one.
When serializing the exponent in the IEEE binary format, we need to add either
1023 (64-bit) or 127 (32-bit) to the exponent; these constants (1023 and 127)
are sometimes called _exponent biases_. For example, the 64-bit binary
exponent value from $-1022$ to $1023$ are stored as the unsigned integer
values from $1$ to $2046$. The serialized exponent value 0 is reserved for
subnormal values while the serialized exponent value $2047$ is reserved for
non-finite values.
###### Example 4
Consider again Example 3. We start from $9109\times 10^{-34}$. Because $q<0$,
we compute $q-\operatorname{ceiling}(\log_{2}5^{-q})$ and get -113. We have
that 9109 has 50 leading zeros as a 64-bit word and we normalize it as
$9109\times 2^{50}$. Thus we have $l=50$ and so we need to increment the
binary exponent by $63-l$ or 13. We get a binary exponent of -100. We verify
that the product has a leading zero bit so we have that the binary exponent
must be -100.
## 11 Processing Long Numbers Quickly
In some uncommon instances, we may have a decimal significand that exceeds 19
digits. Unfortunately, if we are given a value with superfluous digits, we
cannot truncate the digits: it may be necessary to read tens or even hundreds
of digits (up to 768 digits in the worst case). Indeed, consider the second
smallest 64-bit normal floating-point value: $2^{-1022}+2^{-1074}$ ($\approx
2.2250738585072019\times 10^{-308}$) and the next smallest value
$2^{-1022}+2^{-1073}$ ($\approx 2.2250738585072024\times 10^{-308}$). If we
pick a value that is exactly in-between ($2^{-1022}+2^{-1074}+2^{-1075}$), we
need to break the tie by rounding to even (to the larger value
$2.2250738585072024\times 10^{-308}$ in this case). Yet any truncation of the
value would be slightly closer to the lower value ($\approx
2.2250738585072019\times 10^{-308}$). We can write
$2^{-1022}+2^{-1074}+2^{-1075}$ exactly as a decimal floating-point value
$w\times 10^{q}$ for integers $w$ and $q$, but the significand requires 768
digits. We can show that it is the worst case.
When there are too many digits, we could immediately fall back on a higher-
precision approach. However, if we just use the most significant 19 digits,
and truncate any subsequent digits, we might be able to uniquely identify the
exact number. It is trivially the case if the truncated digits are all zeros,
in which case we can safely dismiss the zeros. Otherwise, if $w$ is the
truncated significand, then the exact value is in the interval $(w\times
10^{q},(w+1)\times 10^{q})$. Thus we may apply our algorithm to both $w\times
10^{q}$ and $(w+1)\times 10^{q}$. If they both round to the same binary
floating-point number, then this floating-point number has to match exactly
the true decimal value. If $w$ is limited to 19 digits, then $w+1\leq
10^{19}<2^{64}$ so we do not have to worry about possible overflows.
To assess the effectiveness of this approach, we can try a numerical
experiment. We generate random 19-digit significands and append an exponent
(e.g., 1383425612993491676e-298 and 1383425612993491677e-298). We find that
for such randomly generated values, about 99.8% of the successive values map
to the same 64-bit floating-point number, over a range of exponents (e.g.,
from $-300$ to $300$). We can also generate random 64-bit numbers in the unit
interval $[0,1]$, serialize them to 19 digits and add one to the last digit.
We get that in about 99.7% of all cases, changing the last digit does not
affect the value. In other words, we can usually determine the exact
floating-point value after truncating to 19 digits.
When it fails, we can fall back on a higher-precision approach. In our
software implementation (see § 5), we adapted a general implementation used as
part of the Go standard library. Given that it should be rarely needed, its
performance is secondary. However, it has to be exact.
## 12 Experiments
We implemented our algorithm and published it as an open source software
library (https://github.com/fastfloat/fast_float). It closely follows the
C++17 standard for the std::from_chars functions, supporting both 64-bit and
32-bit floating-point numbers. It has been thoroughly tested. Though our code
is written using generally efficient C++ patterns, we have not micro-optimized
it. Our implementation requires a C++11-compliant compiler. It does not
allocate memory on the heap and it does not throw exceptions.
To implement our algorithm, we use a precomputed table of powers of five and
reciprocals, see Appendix B. Though it uses $10$ KiB, we should compare it
with the original Gay’s implementation of strtod in C which uses $160$ KiB
and compiles to tens of kilobytes. Our table is used for parsing both 64-bit
and 32-bit numbers.
There are many libraries that support number parsing. For our purposes, we
limit ourselves to C++ production-quality libraries. We only consider
libraries that offer exact parsing. See Table 7. We choose to omit libraries
written in other programming languages (Java, D, Rust, etc.) since direct
comparisons between programming languages are error prone—see Appendix E for
benchmarks of a Rust version of our algorithm. (The release notes for Go
version 1.16, which makes use of our approach, state that “ParseFloat now uses
the [new] algorithm, improving performance by up to a factor of 2”,
https://golang.org/doc/go1.16. Our C++ code was also ported to C#,
https://github.com/CarlVerret/csFastFloat, and to Java,
https://github.com/wrandelshofer/FastDoubleParser, with good results.) We also
include in our benchmarks the system’s C function strtod, configured with the
default locale. Though the standard Linux C++ library supports the C++17
standard, it does not yet provide an implementation of the std::from_chars
functions for floating-point numbers.
To ensure reproducibility, we publish our full benchmarking
software (https://github.com/lemire/simple_fastfloat_benchmark, git tag
v0.1.0). Our benchmarking routine takes as input a long array of strings that
are parsed in sequence. Somewhat arbitrarily, we seek to compute the minimum
of all encountered numbers. Such a running-minimum function carries minimal
overhead compared to number parsing. Hence, we effectively measure the
throughput of number parsing. We are also careful to use datasets containing
thousands of numbers for two reasons:
* •
On the one hand, all measures have a small bounded error: by using large
sequence of tests, we amortize such errors.
* •
On the other hand, the performance of modern processors is often closely
related to its ability to predict branches. A single mispredicted branch can
waste between 10 to 20 cycles of computations. When in a repeating loop, some
recent processors can learn to predict with high accuracy a few thousands of
branches [seznec2011new].
We repeat all experiments 100 times. We avoid memory allocations throughout
the process. On such a computational benchmark, timings follow a distribution
resembling a log-normal distribution with a long tail associated with noise
(interrupts, cache competition, context switches, etc.) and a non-zero
minimum. The median is located between the minimum and the average. Using a
common convention [28], we compute both the minimum time and the average time:
the difference between the two is our margin of error. If the minimum time and
the average time are close, our measures are reliable. We find that the error
margin is consistently less than 5% on all platforms—often under 1%.
On Linux platforms, we can _instrument_ our benchmark so that we can
programmatically track the number of cycles and number of instructions retired
using CPU performance counters from within our own software. Such
instrumentation is precise (i.e., not the result of sampling) and does not add
overhead to the execution of the code. Typically, the number of instructions
retired by a given routine varies little from run to run and may be considered
exact, especially given that we ensure a stable number of branch
mispredictions. One benefit of instrumented code is that we can measure the
effective CPU clock frequency during the benchmarked code: modern processors
adjust their frequency dynamically based on load, power usage and heat. Our
Linux systems are configured for performance and we observe the expected CPU
frequencies.
Our benchmarks exclude disk access or memory allocation: strings are
preallocated once. To ensure a consistent and reproducible system
configuration, we run our benchmark under a privileged docker environment
based on a Ubuntu 20.10
image (https://github.com/lemire/docker_programming_station, git tag
v0.1.0). According to our tests, the docker overhead for purely computational
tasks when the host is itself Linux, is negligible. For our benchmarks, we use
the GNU GCC 10.2 compiler with full optimization (-O3 -DNDEBUG) under Linux.
Our benchmark programs are single binaries applying the different parsing
functions to the same strings. Though we access megabytes of memory, most of
the data remains in the last-level CPU cache. We are not limited by cache or
memory performance.
We rely on a realistic data source that is used by the Go developers to benchmark
the standard library: the canada dataset comes from a JSON file commonly used
for benchmarking [28]. It contains 111k 64-bit floating-point numbers
serialized as strings. The canada number strings are part of geographic
coordinates: e.g., 83.109421000000111. We also include synthetic datasets
containing 100k numbers each. The uniform dataset is made of 64-bit random
numbers in the unit interval $[0,1]$. The integer data set is made of randomly
generated 32-bit integers. Though it is inefficient to use a floating-point
number parser for integer values, we believe that it might be an interesting
test case. It is an instance where our code fails to show large benefits. For
the synthetic dataset, we considered two subcases: the floating-point number
can either be serialized using a fixed decimal significand (17 digits) or
using a minimal decimal significand as 64-bit numbers (using at most 17 digits
[16]). We found relatively little difference in performance (no more than 10%)
on a per-float basis between these two cases. In both cases, the serialization
is exact: an exact 64-bit parser should recover exactly the original floating-
point value. We present our results with the concise serialization.
Table 7: Production-quality number parsing C++ libraries. Both double-conversion and abseil have been authored by Google engineers.
Library | snapshot | link
---|---|---
Gay’s strtod (netlib) | 2001 | www.netlib.org/fp/
double-conversion | version 3.1.5 | github.com/google/double-conversion.git
abseil | 20200225.2 | github.com/abseil/abseil-cpp
To better assess our algorithm, we tested it on a wide range of Linux-based
systems which include x64 processors, an ARM server processor and an IBM
POWER9 processor. See Table 8. We report the effective frequency, that is, the CPU
frequency measured during the execution of our code. Our experiments are
single-threaded: the Ampere system contains 32 ARM cores and would normally be
competitive against the other systems if all cores were used. However, on a
single-core basis, it is not expected to match the other processors.
Table 8: Systems tested
Processor | Effective Frequency | Microarchitecture | Compiler
---|---|---|---
Intel i7-6700 | $3.7$ GHz | Skylake (x64, 2015) | GCC 10.2
AMD EPYC 7262 | $3.39$ GHz | Zen 2 (x64, 2019) | GCC 10.2
Ampere | $3.2$ GHz | ARM Skylark (aarch64, 2018) | GCC 10.2
IBM | $3.77$ GHz | POWER9 (ppc64le, 2018) | GCC 10.2
We report the speed in millions of numbers per second for our different
datasets and different processors in Table 9. We find that the from_chars
function in the abseil library is often superior to Gay’s implementation of
strtod (labeled as netlib) which is itself superior to both double-conversion
and the strtod function including the GNU standard library. The implementation
notes of the abseil library [11] indicate that it relies on a general strategy
which is not fundamentally different from our own. (The abseil library does
not rely on Clinger’s fast path when parsing numbers. It also uses a less
accurate product computation.) Even so, our approach is generally twice as fast
as the abseil library and up to five times faster than what the standard
library offers. We find that for the integer test, netlib is superior to all
other alternatives (including abseil) except for our own. The gap between our
approach and netlib when parsing integers is modest (about 20%). Overall, our
proposed approach is three to five times faster than the strtod function
available in the GNU standard library. And it is often more than twice as fast
as the state-of-the-art abseil library.
Table 9: Millions of 64-bit floating-point numbers parsed per second under
different processor architectures
| canada | uniform | integer
---|---|---|---
netlib | 9.6 | 10 | 48
d.-conversion | 9.4 | 10 | 18
strtod | 9.0 | 9.4 | 20
abseil | 18 | 19 | 27
our parser | 45 | 45 | 61
(a) Intel Skylake (x64)
| canada | uniform | integer
---|---|---|---
netlib | 10 | 11 | 57
d.-conversion | 9.0 | 9.9 | 24
strtod | 9.3 | 9.9 | 18
abseil | 21 | 21 | 30
our parser | 51 | 52 | 70
(b) AMD Zen 2 (x64)
| canada | uniform | integer
---|---|---|---
netlib | 8.1 | 8.7 | 23
d.-conversion | 5.4 | 5.8 | 12
strtod | 3.9 | 4.2 | 8.7
abseil | 9.1 | 9.4 | 13
our parser | 22 | 21 | 26
(c) Ampere Skylark (ARM, aarch64)
| canada | uniform | integer
---|---|---|---
netlib | 9.0 | 10 | 39
d.-conversion | 5.8 | 6.4 | 18
strtod | 4.8 | 5.3 | 12
abseil | 12 | 12 | 17
our parser | 42 | 39 | 46
(d) IBM POWER 9
To understand our good results, we look at the number of instructions and
cycles per number for one representative dataset (uniform) and for the AMD Zen
2 processor. See Table 10. As expected, we use half as many instructions on
average as the abseil library. We find it interesting that we use only about
three times fewer instructions than the strtod function, but 5.6 times fewer
cycles. Our approach causes almost no branch mispredictions, in contrast with
Gay’s netlib library. Similarly, while we retire 4.2 instructions per cycle,
Gay’s netlib library is limited at 2.2 instructions per cycle. To summarize,
our approach uses fewer instructions, generates fewer branch mispredictions
and retires more instructions per cycle.
To identify our bottleneck, we run the parsing routine while skipping the
conversion from a decimal significand and exponent to the standard decimal
form. Instead, we sum the decimal significand and the exponent and return the
result as a simulated floating-point value. We find that we save only about a
quarter of the number of instructions and a quarter of the time (cycles). In
other words, our decimal-to-binary routine is so efficient that it only uses
about a quarter of our computational time. Most of the time goes into parsing
the input string and converting it to a decimal significand and exponent.
Table 10: Instructions, mispredicted branches and cycles per 64-bit floating-point number in the uniform model on the AMD Zen 2 processor. We also provide the number of instructions per cycle. The “just string” row corresponds to our parser but without the final decimal to binary conversion.
| Instructions | mispredictions | cycles | instructions/cycle
---|---|---|---|---
netlib | 740 | 4.1 | 330 | 2.2
double-conversion | 1100 | 1.7 | 380 | 3.0
strtod | 1100 | 0.7 | 370 | 3.0
abseil | 600 | 0.5 | 160 | 3.8
our parser | 280 | 0.01 | 66 | 4.2
(just string) | 215 | 0.00 | 46 | 4.7
Our results using 32-bit numbers are similar. To ease comparison, we produce
exactly the same number strings as in the 64-bit case. We replace the strtod
function with the equivalent strtof function. We present the result in Table
11. It suggests that there is little speed benefit in reading numbers as
32-bit floating-point numbers instead of 64-bit floating-point numbers given
the same input strings. The result does not surprise us given that we rely on
the same algorithm.
Table 11: Instructions, mispredicted branches and cycles per 32-bit floating-point number in the uniform model on the AMD Zen 2 processor. We also provide the number of instructions per cycle.
| Instructions | mispredictions | cycles | instructions/cycle
---|---|---|---|---
strtof | 1100 | 0.7 | 350 | 3.1
abseil | 600 | 0.5 | 170 | 3.6
our parser | 280 | 0.00 | 64 | 4.3
We find it interesting to represent the parsing speed in terms of bytes per
second. On the canada dataset using the AMD Zen 2 system, our parser exceeds
$1$ GiB/s ($1080$ MiB/s). It is $2.5$ times faster than the fastest competitor
(abseil) and $5$ times faster than the other parsers. See Fig. 2. For the
synthetic dataset, we use the concise number serialization instead of relying
on a fixed number of digits, to avoid overestimating the parsing speed. Our
parser runs at almost $900$ MiB/s compared to less than $200$ MiB/s for the
strtod function. If we serialize the numbers so that they use a fixed number
of digits (17), we reach higher speeds: our parser exceeds $1$ GiB/s (not
shown).
Figure 2: Parsing speed for the canada dataset and for random 64-bit
floating-point number in the uniform model, serialized concisely, on the AMD
Zen 2 processor.
In Table 12, we provide statistics regarding which code paths are used by
different datasets. The integer dataset is entirely covered by Clinger’s fast
path. This explains why our performance on this dataset is similar to the netlib
approach, since we rely on essentially the same algorithm. For both the canada
and uniform datasets, most of the processing falls on our parser as opposed to
Clinger’s fast path. After initially parsing the input string, our fast
algorithm begins with one or two multiplications between the decimal
significand and values looked up in a table. We observe that a single
multiplication suffices in most cases. In our experiments, we
never need to fall back on a higher-precision approach.
Table 12: Code path frequencies for different datasets using our parser. The percentages are relative to the number of input number strings. | canada | uniform | integer
---|---|---|---
Clinger’s fast path | 8.8% | 0% | 100%
our path | 91.2% | 100% | 0%
two multiplications | 0.6% | 0.66% | 0%
#### Many digits
We designed our algorithm for the scenario where numbers are serialized to
strings using no more than 17 digits. However, we cannot always ensure that
such a reasonable limit is respected. To test the case where we have many more
than 17 digits, we create big integer values by serializing three randomly
selected 64-bit integers in sequence. On the AMD Zen 2 system, we find that
our parser exceeds $1100\text{\,}\mathrm{MiB}\text{/}\mathrm{s}$. The abseil
library achieves a similar speed ($910\text{\,}\mathrm{MiB}\text{/}\mathrm{s}$),
which is more than twice as fast as Gay’s netlib
($390\text{\,}\mathrm{MiB}\text{/}\mathrm{s}$). The strtod function is limited
to $110\text{\,}\mathrm{MiB}\text{/}\mathrm{s}$.
#### Visual Studio
Unfortunately, we are not aware of a standard implementation of the from_chars
function under Linux. However, Microsoft provides a fast implementation as
part of its Visual Studio 2019 system. We use the latest available Microsoft
C++ compiler (19.26.28806 for x64). We compile in release mode with the flags
/O2 /Ob2 /DNDEBUG. These results under Windows are generally comparable to our
Linux results. See Table 13. Microsoft’s from_chars function is faster than
its strtod function. However, our parser is several times faster than
Microsoft’s from_chars function.
Table 13: Millions of 64-bit floating-point numbers parsed per second under a $4.2\text{\,}\mathrm{GHz}$ Intel 7700K processor using Visual Studio 2019 | canada | uniform | integer
---|---|---|---
netlib | 20 | 18 | 48
d.-conversion | 10 | 10 | 18
strtod | 6.0 | 5.8 | 15
from_chars | 6.7 | 7.2 | 22
abseil | 16 | 15 | 22
our parser | 37 | 48 | 60
#### Apple M1 Processor
In November 2020, Apple released laptops with a novel ARM 3.2 GHz processor
(M1). The M1 processor has 8 instruction decoders compared to only 4 decoders
on most x64 processors. Though we would normally avoid benchmarking on a
laptop due to potential frequency throttling, we found consistent run-to-run
results (within 1%) and a low margin of error (within 1%). We compiled our
benchmark software on such a laptop using Apple’s LLVM clang compiler (Apple
clang version 12.0.0 using the flags -O3 -DNDEBUG). We present our throughput
results in Fig. 3. Our parser reaches
$1.5\text{\,}\mathrm{GiB}\text{/}\mathrm{s}$ on the uniform dataset. On the
Apple platform, the strtod function is several times slower than any other
number parser. Other parsers (netlib, double-conversion and abseil) are about
three times slower in these tests.
Figure 3: Parsing speed for the canada dataset and for random 64-bit
floating-point number in the uniform model, serialized concisely, on the Apple
M1 processor.
## 13 Conclusion
Parsing floating-point numbers from strings is a fundamental operation
supported by the standard library of almost all programming languages. Our
results suggest that widely used implementations might be several times slower
than needed on modern 64-bit processors. When the input strings are retrieved
from disks or networks with gigabytes per second of bandwidth, a faster
approach should be beneficial.
We expect that more gains are possible, mostly in how we parse the input
strings into a decimal significand and exponent. For example, we could use
advanced processor instructions such as SIMD instructions [28].
It may also be possible to accelerate the processing by relaxing correctness
conditions: e.g., the parsing could be exact only up to an error in the last
digit. However, we should be mindful of the potential problems that arise when
different software components parse the same numbers to different binary
values.
Floating-point numbers may be stored in binary form and accessed directly
without parsing. However, some engineers prefer to rely on text formats.
Hexadecimal floating-point numbers (Appendix C) may provide a convenient
alternative for greater speed in such cases.
## Acknowledgements
Our work benefited especially from exchanges with M. Eisel who motivated the
original research with his key insights. We thank N. Tao who provided
invaluable feedback and who contributed an earlier and simpler version of this
algorithm to the Go standard library. Our fallback implementation includes
code adapted from Google Wuffs, a memory-safe programming language, which was
published under the Apache 2.0 license. To our knowledge, the fast path for
long numbers was first implemented by R. Oudompheng for the Go standard
library. We thank A. Milovidov for his feedback regarding benchmarking. We are
grateful to W. Muła for his thorough review of an early manuscript: his
comments helped us improve the document significantly. We thank I. Smirnov for
his feedback on benchmarking statistics. We thank P. Cawley for his feedback
on the manuscript.
## References
* IEEE [2000] IEEE Standard for Modeling and Simulation (M&S) High Level Architecture (HLA) – Framework and Rules. IEEE Std 1516-2000 2000;p. 1–28.
* Grützmacher et al. [2020] Grützmacher T, Cojean T, Flegar G, Göbel F, Anzt H. A customized precision format based on mantissa segmentation for accelerating sparse linear algebra. Concurrency and Computation: Practice and Experience 2020;32(15):e5418. https://onlinelibrary.wiley.com/doi/abs/10.1002/cpe.5418.
* Knuth [2014] Knuth DE. Art of computer programming, volume 2: Seminumerical algorithms. Addison-Wesley Professional; 2014.
* Gustafson [2017] Gustafson JL. The End of Error: Unum Computing. CRC Press; 2017.
* Darvish Rouhani et al. [2020] Darvish Rouhani B, Lo D, Zhao R, Liu M, Fowers J, Ovtcharov K, et al. Pushing the Limits of Narrow Precision Inferencing at Cloud Scale with Microsoft Floating Point. Advances in Neural Information Processing Systems 2020;33.
* Cowlishaw et al. [2001] Cowlishaw MF, Schwarz EM, Smith RM, Webb CF. A decimal floating-point specification. In: Proceedings 15th IEEE Symposium on Computer Arithmetic. ARITH-15 2001 IEEE; 2001. p. 147–154.
* Goldberg [1991] Goldberg D. What Every Computer Scientist Should Know about Floating-Point Arithmetic. ACM Comput Surv 1991 Mar;23(1):5–48. https://doi.org/10.1145/103162.103163.
* Clinger [1990] Clinger WD. How to Read Floating Point Numbers Accurately. SIGPLAN Not 1990 Jun;25(6):92–101. https://doi.org/10.1145/93548.93557.
* Clinger [2004] Clinger WD. How to Read Floating Point Numbers Accurately. SIGPLAN Not 2004 Apr;39(4):360–371. https://doi.org/10.1145/989393.989430.
* Gay [1990] Gay DM, Correctly rounded binary-decimal and decimal-binary conversions; 1990. AT&T Bell Laboratories Numerical Analysis Manuscript 90-10.
* Abseil [2020] Abseil, charconv Design Notes; 2020. https://abseil.io/about/design/charconv [last checked November 2020].
* Adams [2018] Adams U. Ryū: Fast Float-to-String Conversion. In: Proceedings of the 39th ACM SIGPLAN Conference on Programming Language Design and Implementation PLDI 2018, New York, NY, USA: Association for Computing Machinery; 2018. p. 270–282. https://doi.org/10.1145/3192366.3192369.
* Adams [2019] Adams U. Ryu Revisited: Printf Floating Point Conversion. Proc ACM Program Lang 2019 Oct;3(OOPSLA). https://doi.org/10.1145/3360595.
* Andrysco et al. [2016] Andrysco M, Jhala R, Lerner S. Printing Floating-Point Numbers: A Faster, Always Correct Method. In: Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages POPL ’16, New York, NY, USA: Association for Computing Machinery; 2016. p. 555–567. https://doi.org/10.1145/2837614.2837654.
* Burger and Dybvig [1996] Burger RG, Dybvig RK. Printing Floating-Point Numbers Quickly and Accurately. SIGPLAN Not 1996 May;31(5):108–116. https://doi.org/10.1145/249069.231397.
* Loitsch [2010] Loitsch F. Printing Floating-Point Numbers Quickly and Accurately with Integers. In: Proceedings of the 31st ACM SIGPLAN Conference on Programming Language Design and Implementation PLDI ’10, New York, NY, USA: Association for Computing Machinery; 2010. p. 233–243. https://doi.org/10.1145/1806596.1806623.
* Steele and White [2004] Steele GL, White JL. How to Print Floating-Point Numbers Accurately. SIGPLAN Not 2004 Apr;39(4):372–389. https://doi.org/10.1145/989393.989431.
* Bray [2017] Bray T, The JavaScript Object Notation (JSON) Data Interchange Format; 2017. Internet Engineering Task Force, Request for Comments: 8259. https://tools.ietf.org/html/rfc8259.
* Fisher and Dietz [1998] Fisher RJ, Dietz HG. Compiling for SIMD within a register. In: International Workshop on Languages and Compilers for Parallel Computing Springer; 1998. p. 290–305.
* Hars [2006] Hars L. Applications of fast truncated multiplication in cryptography. EURASIP Journal on Embedded Systems 2006;2007(1):061721.
* Fousse et al. [2007] Fousse L, Hanrot G, Lefèvre V, Pélissier P, Zimmermann P. MPFR: A multiple-precision binary floating-point library with correct rounding. ACM Transactions on Mathematical Software (TOMS) 2007;33(2):13–es.
* Krandick and Johnson [1993] Krandick W, Johnson JR. Efficient multiprecision floating point multiplication with optimal directional rounding. In: Proceedings of IEEE 11th Symposium on Computer Arithmetic IEEE; 1993. p. 228–233.
* Mulders [2000] Mulders T. On short multiplications and divisions. Applicable Algebra in Engineering, Communication and Computing 2000;11(1):69–88.
* Lemire et al. [2019] Lemire D, Kaser O, Kurz N. Faster remainder by direct computation: Applications to compilers and software libraries. Software: Practice and Experience 2019;49(6):953–970.
* Warren [2013] Warren HS Jr. Hacker’s Delight. 2nd ed. Boston: Addison-Wesley; 2013.
* Dutertre [2014] Dutertre B. Yices 2.2. In: International Conference on Computer Aided Verification Springer; 2014. p. 737–744.
* Lemire [2020] Lemire D, Making Your Code Faster by Taming Branches; 2020. https://www.infoq.com/articles/making-code-faster-taming-branches/ [last checked November 2020].
* Langdale and Lemire [2019] Langdale G, Lemire D. Parsing gigabytes of JSON per second. The VLDB Journal 2019;28(6):941–960.
* Dumas [2013] Dumas JG. On newton–raphson iteration for multiplicative inverses modulo prime powers. IEEE Transactions on Computers 2013;63(8):2106–2109.
## Appendix A Multiplicative Inverses
Given an odd 64-bit integer $x$, there is a unique integer $y$ such that
$x\times y\bmod 2^{64}=1$. We refer to $y$ as the _multiplicative inverse_ of
$x$ [29]. Fig. 4 presents an efficient C++ function to compute the
multiplicative inverse of 64-bit odd integers. It relies on five successive
calls to a function involving two integer multiplications.
uint64_t f64(uint64_t x, uint64_t y) { return y * (2 - y * x); }

uint64_t findInverse64(uint64_t x) {
  uint64_t y = x;
  y = f64(x, y); y = f64(x, y); y = f64(x, y);
  y = f64(x, y); y = f64(x, y);
  return y;
}
Figure 4: C++ function (findInverse64) to compute the multiplicative inverse
of an odd 64-bit integer using Newton’s method
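As a quick sanity check, the following Python sketch (ours, not part of the paper) mirrors findInverse64 with explicit 64-bit masking. For odd $x$, the initial guess $y=x$ is already correct to 3 bits (since $x^{2}\equiv 1 \pmod 8$), and each Newton step doubles the number of correct bits, so five steps give at least 96 correct bits.

```python
# Python sketch of findInverse64 (Fig. 4), emulating 64-bit wraparound.
M = 1 << 64

def find_inverse_64(x: int) -> int:
    y = x  # correct to 3 bits: x*x = 1 (mod 8) for odd x
    for _ in range(5):  # 3 -> 6 -> 12 -> 24 -> 48 -> 96 correct bits
        y = (y * (2 - y * x)) % M  # one Newton step, reduced modulo 2**64
    return y

# The defining property holds for arbitrary odd 64-bit integers.
for x in (1, 3, 0xDEADBEEF | 1, (1 << 64) - 1):
    assert (x * find_inverse_64(x)) % M == 1
```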
## Appendix B Table Generation Script
Fig. 5 provides a convenient Python script to generate all relevant reciprocal
and normalized powers of five. In practice, each 128-bit value may be stored
as two 64-bit words.
for q in range(-342, -27):
    power5 = 5 ** -q
    z = 0
    while (1 << z) < power5: z += 1
    b = 2 * z + 2 * 64
    c = 2 ** b // power5 + 1
    while c >= (1 << 128): c //= 2
    print(c)
for q in range(-27, 0):
    power5 = 5 ** -q
    z = 0
    while (1 << z) < power5: z += 1
    b = z + 127
    c = 2 ** b // power5 + 1
    print(c)
for q in range(0, 308 + 1):
    power5 = 5 ** q
    while power5 < (1 << 127): power5 *= 2
    while power5 >= (1 << 128): power5 //= 2
    print(power5)
Figure 5: Python script to print out all 128-bit reciprocals ($q\in[-342,0)$)
and all 128-bit truncated powers of five ($q\in[0,308]$).
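As a sanity check (ours, not from the paper), the script's output can be verified programmatically: there is one constant per exponent $q\in[-342,308]$, every constant is a normalized 128-bit value (top bit set), and the entry for $q=0$ is exactly $2^{127}$.

```python
# Collect the same constants as the script in Fig. 5 and check their shape.
values = []
for q in range(-342, -27):
    p5 = 5 ** -q
    z = 0
    while (1 << z) < p5: z += 1
    c = 2 ** (2 * z + 2 * 64) // p5 + 1
    while c >= (1 << 128): c //= 2
    values.append(c)
for q in range(-27, 0):
    p5 = 5 ** -q
    z = 0
    while (1 << z) < p5: z += 1
    values.append(2 ** (z + 127) // p5 + 1)
for q in range(0, 309):
    p5 = 5 ** q
    while p5 < (1 << 127): p5 *= 2
    while p5 >= (1 << 128): p5 //= 2
    values.append(p5)

assert len(values) == 342 + 309              # one entry per q in [-342, 308]
assert values[342] == 1 << 127               # q = 0 normalizes to exactly 2**127
assert all((1 << 127) <= v < (1 << 128) for v in values)  # all top bits set
```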
## Appendix C Hexadecimal Floating-Point Numbers
It could be convenient to represent floating-point numbers using the
hexadecimal floating-point notation. The hexadecimal notation may provide an
exact ASCII string representation of the binary floating-point number. It
makes it relatively easy to provide an unambiguous string that should always
be parsed to the same binary value. Furthermore, the parsing and serialization
speeds could be much higher. The main downsides are that human beings may find
such strings harder to understand and that they are not natively supported in
all mainstream programming languages.
The hexadecimal floating-point notation is supported in the C (C99), C++
(C++17), Swift, Java, Julia and Go programming languages. As in the usual
hexadecimal notation for integers, we start the string with 0x followed by the
significand in hexadecimal form. Each hexadecimal character (0–9, A–F)
represents 4 bits (a _nibble_). Instead of writing the exponential part in
full (e.g., $\times 2^{4}$ or $\times 2^{-4}$), we append the suffix p
followed by the exponent (e.g., p4 or p-4). Optionally, we can add a
hexadecimal point in the significand. With a decimal point, we interpret the
decimal fraction by dividing it by the appropriate power of ten. E.g., we write
$1.45=145/10^{2}$. The hexadecimal point works similarly. Thus 0x1.FCp1 means
$\mathtt{0x1FC}/16^{2}\times 2^{1}$ or $3.968\,75$, where we divide
$\mathtt{0x1FC}$ by $16^{2}$ because there are two nibbles after the
hexadecimal point. When the value is a normal 64-bit floating-point number, the
significand can be expressed as a most significant 1 followed by up to 52
bits, or 13 hexadecimal characters. Thus $9\,000\,000\,000\,000\,000$ can be
written as 0x1.ff973cafa8p+52. The mass of the Earth in kilograms ($5.972\times
10^{24}$) is 0x1.3c27b13272fb6p+82.
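Several languages expose this notation programmatically. As a brief illustration of ours (not from the paper), Python's float.fromhex and float.hex make hexadecimal floating-point values easy to experiment with:

```python
# Hexadecimal floating-point round-trips are exact, unlike decimal formatting.
assert float.fromhex('0x1.8p3') == 12.0  # 1.5 * 2**3
assert float.fromhex('0x1.ff973cafa8p+52') == 9_000_000_000_000_000.0
x = 0.1
assert float.fromhex(x.hex()) == x  # exact round-trip through hex notation
```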
## Appendix D String-Parsing Functions in C++
Fig. 6 illustrates the computation of the decimal significand with pseudo-C++
code. We omit the code necessary to check whether there are leading spaces and
sign characters (+ or -) and other error checks. We must further parse an
optional exponent preceded by the characters e or E. Moreover, we must also
check whether we had more than 19 digits in the decimal significand. Thus our
actual code is slightly more complex. Fig. 7 presents SWAR [19] functions to
check all at once whether a sequence of 8 digits is available and to compute
the corresponding decimal integer.
const char *p = // points at the beginning of the string
const char *pend = // points at the end of the string
int64_t exponent = 0; // exponent
uint64_t i = 0;       // significand
while ((p != pend) && is_integer(*p)) {
  i = 10 * i + uint64_t(*p - '0');
  ++p;
}
if ((p != pend) && (*p == '.')) {
  ++p;
  const char *first_after_period = p;
  if ((p + 8 <= pend) && is_made_of_eight_digits(p)) {
    i = i * 100000000 + parse_eight_digits(p); p += 8;
    if ((p + 8 <= pend) && is_made_of_eight_digits(p)) {
      i = i * 100000000 + parse_eight_digits(p); p += 8;
    }
  }
  while ((p != pend) && is_integer(*p)) {
    uint8_t digit = uint8_t(*p - '0');
    ++p;
    i = i * 10 + digit;
  }
  exponent = first_after_period - p;
}
Figure 6: Simplified pseudo-C++ code to compute the decimal significand from
an ASCII string
bool is_made_of_eight_digits(const char *chars) {
uint64_t val; memcpy(&val, chars, 8);
return !((((val + 0x4646464646464646) | (val - 0x3030303030303030))
& 0x8080808080808080));
}
uint32_t parse_eight_digits(const char *chars) {
uint64_t val; memcpy(&val, chars, sizeof(uint64_t));
val = (val & 0x0F0F0F0F0F0F0F0F) * 2561 >> 8;
val = (val & 0x00FF00FF00FF00FF) * 6553601 >> 16;
return uint32_t((val & 0x0000FFFF0000FFFF) * 42949672960001 >> 32);
}
Figure 7: C++ functions to check whether 8 ASCII characters are made of
digits, and to convert them to an integer value under a little-endian system
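To check the bit-twiddling constants in Fig. 7, here is a Python re-implementation (our own sketch, not from the paper) that emulates the little-endian 8-byte load and the 64-bit wraparound arithmetic that C++ gets for free:

```python
# Python port of the SWAR functions of Fig. 7; Python integers are masked to
# 64 bits wherever C++ unsigned arithmetic would wrap.
MASK64 = (1 << 64) - 1

def is_made_of_eight_digits(chars: bytes) -> bool:
    val = int.from_bytes(chars, 'little')
    return not ((((val + 0x4646464646464646) & MASK64)
                 | ((val - 0x3030303030303030) & MASK64))
                & 0x8080808080808080)

def parse_eight_digits(chars: bytes) -> int:
    val = int.from_bytes(chars, 'little')
    val = ((val & 0x0F0F0F0F0F0F0F0F) * 2561 & MASK64) >> 8       # digit pairs -> 2-digit values
    val = ((val & 0x00FF00FF00FF00FF) * 6553601 & MASK64) >> 16   # -> 4-digit values
    return ((val & 0x0000FFFF0000FFFF) * 42949672960001 & MASK64) >> 32  # -> 8-digit value

assert is_made_of_eight_digits(b'12345678')
assert not is_made_of_eight_digits(b'1234x678')
assert parse_eight_digits(b'12345678') == 12345678
assert parse_eight_digits(b'00009999') == 9999
```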
## Appendix E Benchmarks in Rust
Our C++ implementation and benchmarks have been ported to Rust by I.
Smirnov (https://github.com/aldanor/fast-float-rust). Unlike our C++
implementation, it does not attempt to skip leading white spaces, but there
are otherwise few differences. This Rust port allows us to compare against a
popular Rust number processing library
(lexical, v5.2.0, https://docs.rs/lexical/5.2.0/lexical/) as well as the
standard Rust library (from_str). Using Rust 1.49 on our AMD Rome (Zen 2)
system, we get the following results on the canada dataset: the standard Rust
library is limited to $92\text{\,}\mathrm{MiB}\text{/}\mathrm{s}$, the lexical
library achieves $280\text{\,}\mathrm{MiB}\text{/}\mathrm{s}$ while the Rust
port of our library achieves $670\text{\,}\mathrm{MiB}\text{/}\mathrm{s}$. On
the Apple M1 system, we get $130\text{\,}\mathrm{MiB}\text{/}\mathrm{s}$
(standard library), $370\text{\,}\mathrm{MiB}\text{/}\mathrm{s}$ (lexical) and
$1200\text{\,}\mathrm{MiB}\text{/}\mathrm{s}$ (Rust port). The tests are
repeated 1000 times and the difference between the best speed and the median
speed is low on our test systems (less than 1%).
# Reproducing kernel Hilbert $C^{*}$-module and kernel mean embeddings
Yuka Hashimoto <EMAIL_ADDRESS>
NTT Network Service Systems Laboratories, NTT Corporation
3-9-11, Midori-cho, Musashinoshi, Tokyo, 180-8585, Japan /
Graduate School of Science and Technology, Keio University
3-14-1, Hiyoshi, Kohoku, Yokohama, Kanagawa, 223-8522, Japan

Isao Ishikawa <EMAIL_ADDRESS>
Center for Data Science, Ehime University
2-5, Bunkyo-cho, Matsuyama, Ehime, 790-8577, Japan /
Center for Advanced Intelligence Project, RIKEN
1-4-1, Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan

Masahiro Ikeda <EMAIL_ADDRESS>
Center for Advanced Intelligence Project, RIKEN
1-4-1, Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan /
Faculty of Science and Technology, Keio University
3-14-1, Hiyoshi, Kohoku, Yokohama, Kanagawa, 223-8522, Japan

Fuyuta Komura <EMAIL_ADDRESS>
Faculty of Science and Technology, Keio University
3-14-1, Hiyoshi, Kohoku, Yokohama, Kanagawa, 223-8522, Japan /
Center for Advanced Intelligence Project, RIKEN
1-4-1, Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan

Takeshi Katsura <EMAIL_ADDRESS>
Faculty of Science and Technology, Keio University
3-14-1, Hiyoshi, Kohoku, Yokohama, Kanagawa, 223-8522, Japan /
Center for Advanced Intelligence Project, RIKEN
1-4-1, Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan

Yoshinobu Kawahara <EMAIL_ADDRESS>
Institute of Mathematics for Industry, Kyushu University
744, Motooka, Nishi-ku, Fukuoka, 819-0395, Japan /
Center for Advanced Intelligence Project, RIKEN
1-4-1, Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
###### Abstract
Kernel methods have been among the most popular techniques in machine
learning, where learning tasks are solved using the property of reproducing
kernel Hilbert space (RKHS). In this paper, we propose a novel data analysis
framework with reproducing kernel Hilbert $C^{*}$-module (RKHM) and kernel
mean embedding (KME) in RKHM. Since RKHM contains richer information than RKHS
or vector-valued RKHS (vvRKHS), analysis with RKHM enables us to capture and
extract structural properties in such as functional data. We show a branch of
theories for RKHM to apply to data analysis, including the representer
theorem, and the injectivity and universality of the proposed KME. We also
show RKHM generalizes RKHS and vvRKHS. Then, we provide concrete procedures
for employing RKHM and the proposed KME to data analysis.
Keywords: reproducing kernel Hilbert $C^{*}$-module, kernel mean embedding,
structured data, kernel PCA, interaction effects
## 1 Introduction
Kernel methods have been among the most popular techniques in machine learning
(Schölkopf and Smola, 2001), where learning tasks are solved using the
property of reproducing kernel Hilbert space (RKHS). RKHS is the space of
complex-valued functions equipped with an inner product determined by a
positive-definite kernel. One of the important tools with RKHS is kernel mean
embedding (KME). In KME, a probability distribution (or measure) is embedded
as a function in an RKHS (Smola et al., 2007; Muandet et al., 2017;
Sriperumbudur et al., 2011), which enables us to analyze distributions in
RKHSs.
Whereas much of the classical literature on RKHS approaches has focused on
complex-valued functions, RKHSs of vector-valued functions, i.e., vector-
valued RKHSs (vvRKHSs), have also been proposed (Micchelli and Pontil, 2005;
Álvarez et al., 2012; Lim et al., 2015; Minh et al., 2016; Kadri et al.,
2016). This allows us to learn vector-valued functions rather than complex-
valued functions.
In this paper, we develop a branch of theories on reproducing kernel Hilbert
$C^{*}$-module (RKHM) and propose a generic framework for data analysis with
RKHM. RKHM is a generalization of RKHS and vvRKHS in terms of $C^{*}$-algebra,
and we show that RKHM is a powerful tool to analyze structural properties of
data such as functional data. An RKHM is constructed by a $C^{*}$-algebra-valued
positive definite kernel and characterized by a $C^{*}$-algebra-valued inner
product (see Definition 2.21). The theory of $C^{*}$-algebra has been
discussed in mathematics, especially in operator algebra theory. An important
example of $C^{*}$-algebra is $L^{\infty}(\Omega)$, where $\Omega$ is a
compact measure space. Another important example is
$\mathcal{B}(\mathcal{W})$, which denotes the space of bounded linear
operators on a Hilbert space $\mathcal{W}$. Note that
$\mathcal{B}(\mathcal{W})$ coincides with the space of matrices
$\mathbb{C}^{m\times m}$ if the Hilbert space $\mathcal{W}$ is finite
dimensional.
Although there are several advantages to studying RKHM compared with RKHS and
vvRKHS, they can be summarized into the following two points: First, an RKHM is
a “Hilbert $C^{*}$-module”, which is mathematically more general than a
“Hilbert space”. The inner product in an RKHM is $C^{*}$-algebra-valued, which
captures more information than the complex-valued one in an RKHS or vvRKHS and
enables us to extract richer information. For example, if we set
$L^{\infty}(\Omega)$ as a $C^{*}$-algebra, we can control and extract features
of functional data such as derivatives, total variation, and frequency
components. Also, if we set $\mathcal{B}(\mathcal{W})$ as a $C^{*}$-algebra
and the inner product is described by integral operators, we can control and
extract features of continuous relationships between pairs of functional data.
This cannot be achieved, in principle, by RKHSs and vvRKHSs. This is because
their inner products are complex-valued, where such information degenerates
into one complex value or is lost by discretization of functions into complex
values. Therefore, we cannot reconstruct the information from a vector in an
RKHS or vvRKHS. Second, RKHM generalizes RKHS and vvRKHS, that is, it can be
shown that we can reconstruct RKHSs and vvRKHSs from RKHMs. This implies that
existing algorithms with RKHSs and vvRKHSs can be reconstructed within the
framework of RKHM.
The theory of RKHM has been studied in mathematical physics and pure
mathematics (Itoh, 1990; Heo, 2008; Szafraniec, 2010). On the other hand, to
the best of our knowledge, the only existing work applying RKHM to data
analysis is by Ye (2017), where only the case of setting the
space of matrices as a $C^{*}$-algebra is discussed. In this paper, we develop
a branch of theories on RKHM and propose a generic framework for data analysis
with RKHM. We show a theoretical property on minimization with respect to
orthogonal projections and give a representer theorem in RKHMs. These
properties are fundamental for data analysis and have been investigated and
applied in the cases of RKHS and vvRKHS, which has made RKHS and vvRKHS
widely accepted tools for data analysis (Schölkopf et al., 2001). Moreover, we
define a KME in an RKHM, and provide theoretical results about the injectivity
of the proposed KME and the connection with universality of RKHM. Note that,
as is well known for RKHSs, these two properties have been actively studied to
theoretically guarantee the validity of kernel-based algorithms (Steinwart,
2001; Gretton et al., 2006; Fukumizu et al., 2007; Sriperumbudur et al.,
2011). Then, we apply the developed theories to generalize kernel PCA
(Schölkopf and Smola, 2001), analyze time-series data with the theory of
dynamical system, and analyze interaction effects for infinite dimensional
data.
The remainder of this paper is organized as follows. First, in Section 2, we
briefly review RKHS, vvRKHS, and the definition of RKHM. In Section 3, we
provide an overview of the motivation of studying RKHM for data analysis. In
Section 4, we show general properties of RKHM for data analysis and the
connection of RKHMs with RKHSs and vvRKHSs. In Section 5, we propose a KME in
RKHMs, and show the connection between the injectivity of the KME and the
universality of RKHM. Then, in Section 6, we discuss applications of the
developed results to kernel PCA, time-series data analysis, and the analysis
of interaction effects in finite or infinite dimensional data. Finally, in
Section 7, we discuss the connection of RKHMs and the proposed KME with the
existing notions, and conclude the paper in Section 8.
##### Notations
Lowercase letters denote $\mathcal{A}$-valued coefficients (often by
$a,b,c,d$), vectors in a Hilbert $C^{*}$-module $\mathcal{M}$ (often by
$p,q,u,v$), or vectors in a Hilbert space $\mathcal{W}$ (often by $w,h$).
Lowercase Greek letters denote measures (often by $\mu,\nu,\lambda$) or
complex-valued coefficients (often by $\alpha,\beta$). Calligraphic capital
letters denote sets. Bold lowercase letters denote vectors in
$\mathcal{A}^{n}$ for $n\in\mathbb{N}$ (a finite dimensional Hilbert
$C^{*}$-module). Also, we use $\sim$ for objects related to RKHSs. Moreover,
an inner product, an absolute value, and a norm in a space or a module
$\mathcal{S}$ (see Definitions 2.12 and 2.13) are denoted as
$\left\langle\cdot,\cdot\right\rangle_{\mathcal{S}}$, $|\cdot|_{\mathcal{S}}$,
and $\|\cdot\|_{\mathcal{S}}$, respectively.
The typical notations in this paper are listed in Table 1.
Table 1: Notation table $\mathcal{A}$ | A $C^{*}$-algebra
---|---
$1_{\mathcal{A}}$ | The multiplicative identity in $\mathcal{A}$
$\mathcal{A}_{+}$ | The subset of $\mathcal{A}$ composed of all positive elements in $\mathcal{A}$
$\leq_{\mathcal{A}}$ | For $c,d\in\mathcal{A}$, $c\leq_{\mathcal{A}}d$ means $d-c$ is positive.
$<_{\mathcal{A}}$ | For $c,d\in\mathcal{A}$, $c<_{\mathcal{A}}d$ means $d-c$ is strictly positive, i.e., $d-c$ is positive and invertible.
$L^{\infty}(\Omega)$ | The space of complex-valued $L^{\infty}$ functions on a measure space $\Omega$
$\mathcal{B}(\mathcal{W})$ | The space of bounded linear operators on a Hilbert space $\mathcal{W}$
$\mathbb{C}^{m\times m}$ | The set of all complex-valued $m\times m$ matrices
$\mathcal{M}$ | A Hilbert $\mathcal{A}$-module
$\mathcal{X}$ | A nonempty set for data
$C(\mathcal{X},\mathcal{Y})$ | The space of $\mathcal{Y}$-valued continuous functions on $\mathcal{X}$ for topological spaces $\mathcal{X}$ and $\mathcal{Y}$
$n$ | A natural number that represents the number of samples
$k$ | An $\mathcal{A}$-valued positive definite kernel
$\phi$ | The feature map endowed with $k$
$\mathcal{M}_{k}$ | The RKHM associated with $k$
$\mathcal{S}^{\mathcal{X}}$ | The set of all functions from a set $\mathcal{X}$ to a space $\mathcal{S}$
$\tilde{k}$ | A complex-valued positive definite kernel
$\tilde{\phi}$ | The feature map endowed with $\tilde{k}$
$\mathcal{H}_{\tilde{k}}$ | The RKHS associated with $\tilde{k}$
$\mathcal{H}_{k}^{\operatorname{v}}$ | The vvRKHS associated with $k$
$\mathcal{D}(\mathcal{X},\mathcal{A})$ | The set of all $\mathcal{A}$-valued finite regular Borel measures
$\Phi$ | The proposed KME in an RKHM
$\delta_{x}$ | The $\mathcal{A}$-valued Dirac measure defined as $\delta_{x}(E)=1_{\mathcal{A}}$ for $x\in E$ and $\delta_{x}(E)=0$ for $x\notin E$
$\tilde{\delta}_{x}$ | The complex-valued Dirac measure defined as $\tilde{\delta}_{x}(E)=1$ for $x\in E$ and $\tilde{\delta}_{x}(E)=0$ for $x\notin E$
$\chi_{E}$ | The indicator function of a Borel set $E$ on $\mathcal{X}$
${C}_{0}(\mathcal{X},\mathcal{A})$ | The space of all continuous $\mathcal{A}$-valued functions on $\mathcal{X}$ vanishing at infinity
$\mathbf{G}$ | The $\mathcal{A}$-valued Gram matrix defined as $\mathbf{G}_{i,j}=k(x_{i},x_{j})$ for given samples $x_{1},\ldots,x_{n}\in\mathcal{X}$
$p_{j}$ | The $j$-th principal axis generated by kernel PCA with an RKHM
$r$ | A natural number that represents the number of principal axes
$Df_{\mathbf{c}}$ | The Gâteaux derivative of a function $f:\mathcal{M}\to\mathcal{A}$ at $\mathbf{c}\in\mathcal{M}$
$\nabla f_{\mathbf{c}}$ | The gradient of a function $f:\mathcal{M}\to\mathcal{A}$ at $\mathbf{c}\in\mathcal{M}$
## 2 Background
We briefly review RKHS and vvRKHS in Subsections 2.1 and 2.2, respectively.
Then, we review $C^{*}$-algebra and $C^{*}$-module in Subsection 2.3, Hilbert
$C^{*}$-module in Subsection 2.4, and RKHM in Subsection 2.5.
### 2.1 Reproducing kernel Hilbert space (RKHS)
We review the theory of RKHS. An RKHS is a Hilbert space and useful for
extracting nonlinearity or higher-order moments of data (Schölkopf and Smola,
2001; Saitoh and Sawano, 2016).
We begin by introducing positive definite kernels. Let $\mathcal{X}$ be a non-
empty set for data, and $\tilde{k}$ be a positive definite kernel, which is
defined as follows:
###### Definition 2.1 (Positive definite kernel)
A map $\tilde{k}:\mathcal{X}\times\mathcal{X}\to\mathbb{C}$ is called a
positive definite kernel if it satisfies the following conditions:
1.
$\tilde{k}(x,y)=\overline{\tilde{k}(y,x)}$ for $x,y\in\mathcal{X}$,
2.
$\sum_{i,j=1}^{n}\overline{\alpha}_{i}\alpha_{j}\tilde{k}(x_{i},x_{j})\geq 0$
for $n\in\mathbb{N}$, $\alpha_{i}\in\mathbb{C}$, $x_{i}\in\mathcal{X}$.
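As a small numerical illustration of ours (not part of the paper), condition 2 of Definition 2.1 can be checked empirically for a standard kernel: every quadratic form built from the Gaussian kernel $\tilde{k}(x,y)=\exp(-(x-y)^{2})$ with real coefficients is nonnegative.

```python
# Empirical check of positive definiteness (Definition 2.1, condition 2)
# for the Gaussian kernel on randomly drawn points and coefficients.
import math
import random

def k(x, y):
    return math.exp(-(x - y) ** 2)  # a well-known positive definite kernel

random.seed(0)
xs = [random.uniform(-5, 5) for _ in range(20)]
for _ in range(100):
    alpha = [random.uniform(-1, 1) for _ in xs]
    q = sum(alpha[i] * alpha[j] * k(xs[i], xs[j])
            for i in range(len(xs)) for j in range(len(xs)))
    assert q >= -1e-9  # nonnegative up to floating-point rounding
```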
Let $\tilde{\phi}:\mathcal{X}\to\mathbb{C}^{\mathcal{X}}$ be a map defined as
$\tilde{\phi}(x)=\tilde{k}(\cdot,x)$. With $\tilde{\phi}$, the following space
as a subset of $\mathbb{C}^{\mathcal{X}}$ is constructed:
$\mathcal{H}_{\tilde{k},0}:=\bigg\{\sum_{i=1}^{n}\alpha_{i}\tilde{\phi}(x_{i})\ \bigg|\ n\in\mathbb{N},\ \alpha_{i}\in\mathbb{C},\ x_{i}\in\mathcal{X}\bigg\}.$
Then, a map
$\left\langle\cdot,\cdot\right\rangle_{\mathcal{H}_{\tilde{k}}}:\mathcal{H}_{\tilde{k},0}\times\mathcal{H}_{\tilde{k},0}\to\mathbb{C}$
is defined as follows:
$\bigg{\langle}\sum_{i=1}^{n}\alpha_{i}\tilde{\phi}(x_{i}),\sum_{j=1}^{l}\beta_{j}\tilde{\phi}(y_{j})\bigg{\rangle}_{\mathcal{H}_{\tilde{k}}}:=\sum_{i=1}^{n}\sum_{j=1}^{l}\overline{\alpha_{i}}\beta_{j}\tilde{k}(x_{i},y_{j}).$
By the properties in Definition 2.1 of $\tilde{k}$,
$\left\langle\cdot,\cdot\right\rangle_{\mathcal{H}_{\tilde{k}}}$ is well-
defined, satisfies the axiom of inner products, and has the reproducing
property, that is,
$\langle\tilde{\phi}(x),v\rangle_{\mathcal{H}_{\tilde{k}}}=v(x)$
for $v\in\mathcal{H}_{\tilde{k},0}$ and $x\in\mathcal{X}$.
The completion of $\mathcal{H}_{\tilde{k},0}$ is called the RKHS associated
with $\tilde{k}$ and denoted as $\mathcal{H}_{\tilde{k}}$. It can be shown
that $\left\langle\cdot,\cdot\right\rangle_{\mathcal{H}_{\tilde{k}}}$ is
extended continuously to $\mathcal{H}_{\tilde{k}}$ and the map
$\mathcal{H}_{\tilde{k}}\ni
v\mapsto(x\mapsto\langle\tilde{\phi}(x),v\rangle_{\mathcal{H}_{\tilde{k}}})\in\mathbb{C}^{\mathcal{X}}$
is injective. Thus, $\mathcal{H}_{\tilde{k}}$ can be regarded as a subset of
$\mathbb{C}^{\mathcal{X}}$ and has the reproducing property. Also,
$\mathcal{H}_{\tilde{k}}$ is determined uniquely.
The map $\tilde{\phi}$ maps data into $\mathcal{H}_{\tilde{k}}$ and is called
the feature map. Since the dimension of $\mathcal{H}_{\tilde{k}}$ is higher
(often infinite) than that of $\mathcal{X}$, complicated behaviors of data in
$\mathcal{X}$ are expected to be transformed into simple ones in
$\mathcal{H}_{\tilde{k}}$ (Schölkopf and Smola, 2001).
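The two conditions of Definition 2.1 can be checked numerically on a finite sample. The sketch below is a hypothetical illustration (not part of the original text), assuming the standard Gaussian kernel $\tilde{k}(x,y)=\exp(-\gamma|x-y|^{2})$: the Gram matrix is Hermitian, and the quadratic form in condition 2 is nonnegative.

```python
import numpy as np

# Sketch: check Definition 2.1 for the Gaussian kernel on a finite sample.
# (The Gaussian kernel is a standard example of a positive definite kernel.)

def gaussian_kernel(x, y, gamma=1.0):
    return np.exp(-gamma * np.abs(x - y) ** 2)

rng = np.random.default_rng(0)
xs = rng.standard_normal(6)  # sample points x_1, ..., x_6 in X = R
G = np.array([[gaussian_kernel(a, b) for b in xs] for a in xs])

# Condition 1: k(x, y) = conj(k(y, x)); here the Gram matrix is Hermitian.
assert np.allclose(G, G.conj().T)

# Condition 2: sum_{i,j} conj(a_i) a_j k(x_i, x_j) >= 0 for complex a_i.
alpha = rng.standard_normal(6) + 1j * rng.standard_normal(6)
quad = alpha.conj() @ G @ alpha
assert quad.real >= -1e-12 and abs(quad.imag) < 1e-10
```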
### 2.2 Vector-valued RKHS (vvRKHS)
We review the theory of vvRKHS. Complex-valued functions in RKHSs are
generalized to vector-valued functions in vvRKHSs. Similar to the case of
RKHS, we begin by introducing positive definite kernels. Let $\mathcal{X}$ be
a non-empty set for data and $\mathcal{W}$ be a Hilbert space. In addition,
let $k$ be an operator-valued positive definite kernel, which is defined as
follows:
###### Definition 2.2 (Operator-valued positive definite kernel)
A map $k:\mathcal{X}\times\mathcal{X}\to\mathcal{B}(\mathcal{W})$ is called an
operator-valued positive definite kernel if it satisfies the following
conditions:
1. 1.
$k(x,y)=k(y,x)^{*}$ for $x,y\in\mathcal{X}$,
2. 2.
$\sum_{i,j=1}^{n}\left\langle
w_{i},k(x_{i},x_{j})w_{j}\right\rangle_{\mathcal{W}}\geq 0$ for
$n\in\mathbb{N}$, $w_{i}\in\mathcal{W}$, $x_{i}\in\mathcal{X}$.
Here, ∗ represents the adjoint.
Let $\phi:\mathcal{X}\to\mathcal{B}(\mathcal{W})^{\mathcal{X}}$ be a map
defined as $\phi(x)=k(\cdot,x)$. With $\phi$, the following space as a subset
of $\mathcal{W}^{\mathcal{X}}$ is constructed:
$\mathcal{H}_{k,0}^{\operatorname{v}}:=\bigg{\\{}\sum_{i=1}^{n}\phi(x_{i})w_{i}\bigg{|}\
n\in\mathbb{N},\ w_{i}\in\mathcal{W},\ x_{i}\in\mathcal{X}\bigg{\\}}.$
Then, a map
$\left\langle\cdot,\cdot\right\rangle_{\mathcal{H}_{k}^{\operatorname{v}}}:\mathcal{H}_{k,0}^{\operatorname{v}}\times\mathcal{H}_{k,0}^{\operatorname{v}}\to\mathbb{C}$
is defined as follows:
$\displaystyle\bigg{\langle}\sum_{i=1}^{n}\phi(x_{i})w_{i},\sum_{j=1}^{l}\phi(y_{j})h_{j}\bigg{\rangle}_{\mathcal{H}_{k}^{\operatorname{v}}}:=\sum_{i=1}^{n}\sum_{j=1}^{l}\left\langle
w_{i},k(x_{i},y_{j})h_{j}\right\rangle_{\mathcal{W}}.$
By the properties in Definition 2.2 of $k$,
$\left\langle\cdot,\cdot\right\rangle_{\mathcal{H}_{k}^{\operatorname{v}}}$ is
well-defined, satisfies the axiom of inner products, and has the reproducing
property, that is,
$\left\langle\phi(x)w,u\right\rangle_{\mathcal{H}_{k}^{\operatorname{v}}}=\left\langle
w,u(x)\right\rangle_{\mathcal{W}}$ (1)
for $u\in\mathcal{H}_{k,0}^{{\operatorname{v}}}$, $x\in\mathcal{X}$, and
$w\in\mathcal{W}$.
The completion of $\mathcal{H}_{k,0}^{\operatorname{v}}$ is called the vvRKHS
associated with $k$ and denoted as $\mathcal{H}_{k}^{\operatorname{v}}$. Note
that since an inner product in $\mathcal{H}_{k}^{\operatorname{v}}$ is defined
with the complex-valued inner product in $\mathcal{W}$, it is complex-valued.
### 2.3 $C^{*}$-algebra and Hilbert $C^{*}$-module
A $C^{*}$-algebra and a $C^{*}$-module are generalizations of the space of
complex numbers $\mathbb{C}$ and a vector space, respectively. In this paper,
we denote a $C^{*}$-algebra by $\mathcal{A}$ and a $C^{*}$-module by
$\mathcal{M}$, respectively. As we see below, many complex-valued notions can
be generalized to $\mathcal{A}$-valued.
A $C^{*}$-algebra is defined as a Banach space equipped with a product
structure, an involution $(\cdot)^{*}:\mathcal{A}\rightarrow\mathcal{A}$, and
additional properties. We denote the norm of $\mathcal{A}$ by
$\|\cdot\|_{\mathcal{A}}$.
###### Definition 2.3 (Algebra)
A set $\mathcal{A}$ is called an algebra on a field $\mathbb{F}$ if it is a
vector space equipped with an operation
$\cdot:\mathcal{A}\times\mathcal{A}\to\mathcal{A}$ which satisfies the
following conditions for $b,c,d\in\mathcal{A}$ and $\alpha\in\mathbb{F}$:
$\bullet$ $(b+c)\cdot d=b\cdot d+c\cdot d$, $\bullet$ $b\cdot(c+d)=b\cdot
c+b\cdot d$, $\bullet$ $(\alpha c)\cdot d=\alpha(c\cdot d)=c\cdot(\alpha d)$.
The symbol $\cdot$ is omitted when it does not cause confusion.
###### Definition 2.4 ($C^{*}$-algebra)
A set $\mathcal{A}$ is called a $C^{*}$-algebra if it satisfies the following
conditions:
1. 1.
$\mathcal{A}$ is an algebra over $\mathbb{C}$ and equipped with a bijection
$(\cdot)^{*}:\mathcal{A}\to\mathcal{A}$ that satisfies the following
conditions for $\alpha,\beta\in\mathbb{C}$ and $c,d\in\mathcal{A}$:
$\bullet$ $(\alpha c+\beta
d)^{*}=\overline{\alpha}c^{*}+\overline{\beta}d^{*}$, $\bullet$
$(cd)^{*}=d^{*}c^{*}$, $\bullet$ $(c^{*})^{*}=c$.
2. 2.
$\mathcal{A}$ is a normed space with $\|\cdot\|_{\mathcal{A}}$, and for
$c,d\in\mathcal{A}$,
$\|cd\|_{\mathcal{A}}\leq\|c\|_{\mathcal{A}}\|d\|_{\mathcal{A}}$ holds. In
addition, $\mathcal{A}$ is complete with respect to $\|\cdot\|_{\mathcal{A}}$.
3. 3.
For $c\in\mathcal{A}$, $\|c^{*}c\|_{\mathcal{A}}=\|c\|_{\mathcal{A}}^{2}$
holds.
###### Definition 2.5 (Multiplicative identity and unital $C^{*}$-algebra)
The multiplicative identity of $\mathcal{A}$ is the element $a\in\mathcal{A}$
which satisfies $ac=ca=c$ for any $c\in\mathcal{A}$. We denote by
$1_{\mathcal{A}}$ the multiplicative identity of $\mathcal{A}$. If a
$C^{*}$-algebra $\mathcal{A}$ has the multiplicative identity, then it is
called a unital $C^{*}$-algebra.
###### Example 2.6
Important examples of (unital) $C^{*}$-algebras are $L^{\infty}(\Omega)$ and
$\mathcal{B}(\mathcal{W})$, i.e., the space of complex-valued $L^{\infty}$
functions on a $\sigma$-finite measure space $\Omega$ and the space of bounded
linear operators on a Hilbert space $\mathcal{W}$, respectively.
1. 1.
For $\mathcal{A}=L^{\infty}(\Omega)$, the product of two functions
$c,d\in\mathcal{A}$ is defined as $(cd)(t)=c(t)d(t)$ for any $t\in\Omega$, the
involution is defined as $c^{*}(t)=\overline{c(t)}$, the norm is the
$L^{\infty}$-norm, and the multiplicative identity is the constant function
whose value is $1$ for almost every $t\in\Omega$.
2. 2.
For $\mathcal{A}=\mathcal{B}(\mathcal{W})$, the product structure is the
product (the composition) of operators, the involution is the adjoint, the
norm $\|\cdot\|_{\mathcal{A}}$ is the operator norm, and the multiplicative
identity is the identity map.
In fact, by the Gelfand–Naimark theorem (see, for example, Murphy (1990)), any
$C^{*}$-algebra can be regarded as a subalgebra of $\mathcal{B}(\mathcal{W})$
for some Hilbert space $\mathcal{W}$. Therefore, considering the case of
$\mathcal{A}=\mathcal{B}(\mathcal{W})$ is sufficient for applications.
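For $\mathcal{A}=\mathcal{B}(\mathcal{W})$ with $\mathcal{W}$ finite dimensional, the axioms of Definition 2.4 can be observed directly. The sketch below (an illustration, not part of the original text) checks the submultiplicativity of the operator norm and the $C^{*}$-identity $\|c^{*}c\|_{\mathcal{A}}=\|c\|_{\mathcal{A}}^{2}$ for random $3\times 3$ complex matrices.

```python
import numpy as np

# Sketch: verify conditions 2 and 3 of Definition 2.4 for A = B(C^3),
# i.e., 3x3 complex matrices with the operator norm.

def op_norm(m):
    return np.linalg.norm(m, 2)  # largest singular value = operator norm

rng = np.random.default_rng(1)
c = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
d = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# Condition 2: ||c d|| <= ||c|| ||d||.
assert op_norm(c @ d) <= op_norm(c) * op_norm(d) + 1e-12

# Condition 3 (the C*-identity): ||c* c|| = ||c||^2.
assert np.isclose(op_norm(c.conj().T @ c), op_norm(c) ** 2)
```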
The positiveness is also important in $C^{*}$-algebras.
###### Definition 2.7 (Positive)
An element $c$ of $\mathcal{A}$ is called positive if there exists
$d\in\mathcal{A}$ such that $c=d^{*}d$ holds. For a unital $C^{*}$-algebra
$\mathcal{A}$, if a positive element $c\in\mathcal{A}$ is invertible, i.e.,
there exists $d\in\mathcal{A}$ such that $cd=dc=1_{\mathcal{A}}$, then $c$ is
called strictly positive. For $c,d\in\mathcal{A}$, we denote
$c\leq_{\mathcal{A}}d$ if $d-c$ is positive and $c<_{\mathcal{A}}d$ if $d-c$
is strictly positive. We denote by $\mathcal{A}_{+}$ the subset of
$\mathcal{A}$ composed of all positive elements in $\mathcal{A}$.
###### Example 2.8
1. 1.
For $\mathcal{A}=L^{\infty}(\Omega)$, a function $c\in\mathcal{A}$ is positive
if and only if $c(t)\geq 0$ for almost every $t\in\Omega$, and strictly
positive if and only if there exists $\epsilon>0$ such that
$c(t)\geq\epsilon$ for almost every $t\in\Omega$ (so that
$1/c\in L^{\infty}(\Omega)$).
2. 2.
For $\mathcal{A}=\mathcal{B}(\mathcal{W})$, positiveness is equivalent to the
positive semi-definiteness of self-adjoint operators, and strict positiveness
is equivalent to the positive definiteness of self-adjoint operators (with a
bounded inverse).
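The matrix case of Example 2.8 can be sketched numerically. Here we take $\mathcal{A}$ to be the $4\times 4$ complex matrices (a choice made for illustration) and check both directions of Definition 2.7: $c=d^{*}d$ is positive semi-definite, and conversely a positive semi-definite matrix factors as $d^{*}d$ via its square root.

```python
import numpy as np

# Sketch for Example 2.8(2) with A = 4x4 complex matrices: c = d* d is
# self-adjoint with nonnegative spectrum, and conversely any PSD matrix
# factors as c = d* d with d the PSD square root of c.

rng = np.random.default_rng(2)
d = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
c = d.conj().T @ d  # positive by construction (Definition 2.7)

assert np.allclose(c, c.conj().T)               # self-adjoint
assert np.min(np.linalg.eigvalsh(c)) >= -1e-12  # nonnegative eigenvalues

# Converse: factor the PSD matrix c as (sqrt_c)* (sqrt_c).
w, v = np.linalg.eigh(c)
sqrt_c = v @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T
assert np.allclose(sqrt_c.conj().T @ sqrt_c, c)
```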
The positiveness provides us with the (pre)order in $\mathcal{A}$ and, thus,
enables us to consider optimization problems in $\mathcal{A}$.
###### Definition 2.9 (Supremum and infimum)
1. 1.
For a subset $\mathcal{S}$ of $\mathcal{A}$, $a\in\mathcal{A}$ is said to be
an upper bound with respect to the order $\leq_{\mathcal{A}}$, if
$d\leq_{\mathcal{A}}a$ for any $d\in\mathcal{S}$. Then, $c\in\mathcal{A}$ is
said to be a supremum of $\mathcal{S}$ if $c$ is an upper bound of
$\mathcal{S}$ and $c\leq_{\mathcal{A}}a$ for any upper bound $a$ of
$\mathcal{S}$.
2. 2.
For a subset $\mathcal{S}$ of $\mathcal{A}$, $a\in\mathcal{A}$ is said to be a
lower bound with respect to the order $\leq_{\mathcal{A}}$, if
$a\leq_{\mathcal{A}}d$ for any $d\in\mathcal{S}$. Then, $c\in\mathcal{A}$ is
said to be an infimum of $\mathcal{S}$ if $c$ is a lower bound of
$\mathcal{S}$ and $a\leq_{\mathcal{A}}c$ for any lower bound $a$ of
$\mathcal{S}$.
We now introduce a $C^{*}$-module over $\mathcal{A}$, which is a
generalization of the vector space.
###### Definition 2.10 (Right multiplication)
Let $\mathcal{M}$ be an abelian group with operation $+$. For
$c,d\in\mathcal{A}$ and $u,v\in\mathcal{M}$, if an operation
$\cdot:\mathcal{M}\times\mathcal{A}\to\mathcal{M}$ satisfies
1. 1.
$(u+v)\cdot c=u\cdot c+v\cdot c$,
2. 2.
$u\cdot(c+d)=u\cdot c+u\cdot d$,
3. 3.
$u\cdot(cd)=(u\cdot c)\cdot d$,
4. 4.
$u\cdot 1_{\mathcal{A}}=u$ if $\mathcal{A}$ is unital,
then, $\cdot$ is called a (right) $\mathcal{A}$-multiplication. The
multiplication $u\cdot c$ is usually denoted as $uc$.
###### Definition 2.11 ($C^{*}$-module)
Let $\mathcal{M}$ be an abelian group with operation $+$. If $\mathcal{M}$ is
equipped with a (right) $\mathcal{A}$-multiplication, $\mathcal{M}$ is called
a (right) $C^{*}$-module over $\mathcal{A}$.
In this paper, we consider column vectors rather than row vectors for
representing $\mathcal{A}$-valued coefficients, and column vectors act on the
right. Therefore, we consider right multiplications. However, considering row
vectors and left multiplications instead of column vectors and right
multiplications is also possible.
### 2.4 Hilbert $C^{*}$-module
A Hilbert $C^{*}$-module is a generalization of a Hilbert space. We first
consider an $\mathcal{A}$-valued inner product, which is a generalization of a
complex-valued inner product, and then, introduce the definition of a Hilbert
$C^{*}$-module.
###### Definition 2.12 ($\mathcal{A}$-valued inner product)
A $\mathbb{C}$-linear map with respect to the second variable
$\left\langle\cdot,\cdot\right\rangle_{\mathcal{M}}:\mathcal{M}\times\mathcal{M}\to\mathcal{A}$
is called an $\mathcal{A}$-valued inner product if it satisfies the following
properties for $u,v,p\in\mathcal{M}$ and $c,d\in\mathcal{A}$:
1. 1.
$\left\langle u,vc+pd\right\rangle_{\mathcal{M}}=\left\langle
u,v\right\rangle_{\mathcal{M}}c+\left\langle u,p\right\rangle_{\mathcal{M}}d$,
2. 2.
$\left\langle v,u\right\rangle_{\mathcal{M}}=\left\langle
u,v\right\rangle_{\mathcal{M}}^{*}$,
3. 3.
$\left\langle u,u\right\rangle_{\mathcal{M}}\geq_{\mathcal{A}}0$,
4. 4.
If $\left\langle u,u\right\rangle_{\mathcal{M}}=0$ then $u=0$.
###### Definition 2.13 ($\mathcal{A}$-valued absolute value and norm)
For $u\in\mathcal{M}$, the $\mathcal{A}$-valued absolute value
$|u|_{\mathcal{M}}$ on $\mathcal{M}$ is defined by the positive element
$|u|_{\mathcal{M}}$ of $\mathcal{A}$ such that
$|u|_{\mathcal{M}}^{2}=\left\langle u,u\right\rangle_{\mathcal{M}}$. The
(real-valued) norm $\|\cdot\|_{\mathcal{M}}$ on $\mathcal{M}$ is defined by
$\|u\|_{\mathcal{M}}=\big{\|}|u|_{\mathcal{M}}\big{\|}_{\mathcal{A}}$.
Since the absolute value $|\cdot|_{\mathcal{M}}$ takes values in
$\mathcal{A}$, its behavior is more complicated. For example, the triangle
inequality does not hold for the absolute value. However, it provides us with
more information than the (real-valued) norm $\|\cdot\|_{\mathcal{M}}$. For
example, let $\mathcal{M}=\mathcal{A}=\mathbb{C}^{m\times m}$,
$c=\operatorname{diag}\\{\alpha,0,\ldots,0\\}$, and
$d=\operatorname{diag}\\{\alpha,\ldots,\alpha\\}$, where
$\alpha\in\mathbb{C}$. Then, $\|c\|_{\mathcal{M}}=\|d\|_{\mathcal{M}}$, but
$|c|_{\mathcal{M}}\neq|d|_{\mathcal{M}}$. For a self-adjoint matrix, the
absolute value describes its whole spectrum, whereas the norm only describes
the largest eigenvalue in absolute value.
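The example above can be instantiated numerically, say with $m=3$ and $\alpha=2$ (the concrete values are our choice): $c$ and $d$ share the operator norm $2$, while their $\mathcal{A}$-valued absolute values differ.

```python
import numpy as np

# Sketch: c = diag(2, 0, 0) and d = diag(2, 2, 2) have equal norms
# ||c||_M = ||d||_M, but distinct absolute values |c|_M != |d|_M,
# where |u|_M = (u* u)^{1/2} (Definition 2.13).

alpha = 2.0
c = np.diag([alpha, 0.0, 0.0])
d = np.diag([alpha, alpha, alpha])

def abs_M(u):
    w, v = np.linalg.eigh(u.conj().T @ u)
    return v @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

assert np.isclose(np.linalg.norm(c, 2), np.linalg.norm(d, 2))  # both equal 2
assert not np.allclose(abs_M(c), abs_M(d))  # A-valued absolute values differ
```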
###### Definition 2.14 (Hilbert $C^{*}$-module)
Let $\mathcal{M}$ be a (right) $C^{*}$-module over $\mathcal{A}$ equipped with
an $\mathcal{A}$-valued inner product defined in Definition 2.12. If
$\mathcal{M}$ is complete with respect to the norm $\|\cdot\|_{\mathcal{M}}$,
it is called a Hilbert $C^{*}$-module over $\mathcal{A}$ or Hilbert
$\mathcal{A}$-module.
###### Example 2.15
A simple example of Hilbert $C^{*}$-modules over $\mathcal{A}$ is
$\mathcal{A}^{n}$ for a natural number $n$. The $\mathcal{A}$-valued inner
product between $\mathbf{c}=[c_{1},\ldots,c_{n}]^{T}$ and
$\mathbf{d}=[d_{1},\ldots,d_{n}]^{T}$ is defined as
$\left\langle\mathbf{c},\mathbf{d}\right\rangle_{\mathcal{A}^{n}}=\sum_{i=1}^{n}c_{i}^{*}d_{i}$.
The absolute value and norm in $\mathcal{A}^{n}$ are given as
$|\mathbf{c}|_{\mathcal{A}^{n}}=(\sum_{i=1}^{n}c_{i}^{*}c_{i})^{1/2}$ and
$\|\mathbf{c}\|_{\mathcal{A}^{n}}=\|\sum_{i=1}^{n}c_{i}^{*}c_{i}\|_{\mathcal{A}}^{1/2}$,
respectively.
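Example 2.15 can be sketched numerically for $\mathcal{A}=\mathbb{C}^{2\times 2}$ and $n=3$ (our choice of dimensions). The checks below confirm the Hermitian symmetry and positivity of the $\mathcal{A}$-valued inner product required by Definition 2.12.

```python
import numpy as np

# Sketch: the A-valued inner product <c, d>_{A^n} = sum_i c_i* d_i on A^3
# with A = 2x2 complex matrices, and the induced real-valued norm.

rng = np.random.default_rng(3)
n = 3
c = [rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
     for _ in range(n)]
d = [rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
     for _ in range(n)]

def inner(u, v):
    return sum(ui.conj().T @ vi for ui, vi in zip(u, v))

g = inner(c, c)
assert np.allclose(inner(d, c), inner(c, d).conj().T)  # <v,u> = <u,v>*
assert np.min(np.linalg.eigvalsh(g)) >= -1e-12         # <u,u> >=_A 0
norm_c = np.sqrt(np.linalg.norm(g, 2))                 # ||c|| = ||<c,c>||^{1/2}
```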
Similar to the case of Hilbert spaces, the following Cauchy–Schwarz inequality
for $\mathcal{A}$-valued inner products is available (Lance, 1995, Proposition
1.1).
###### Lemma 2.16 (Cauchy–Schwarz inequality)
For $u,v\in\mathcal{M}$, the following inequality holds:
$|\left\langle
u,v\right\rangle_{\mathcal{M}}|_{\mathcal{A}}^{2}\;\leq_{\mathcal{A}}\|u\|_{\mathcal{M}}^{2}\left\langle
v,v\right\rangle_{\mathcal{M}}.$
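Lemma 2.16 can be checked numerically in $\mathcal{M}=\mathcal{A}^{3}$ with $\mathcal{A}$ the $2\times 2$ real matrices (a small illustrative choice). Since $|\left\langle u,v\right\rangle_{\mathcal{M}}|_{\mathcal{A}}^{2}=\left\langle u,v\right\rangle_{\mathcal{M}}^{*}\left\langle u,v\right\rangle_{\mathcal{M}}$, the inequality says that $\|u\|_{\mathcal{M}}^{2}\left\langle v,v\right\rangle_{\mathcal{M}}-\left\langle u,v\right\rangle_{\mathcal{M}}^{*}\left\langle u,v\right\rangle_{\mathcal{M}}$ is a positive element.

```python
import numpy as np

# Sketch: Cauchy-Schwarz in M = A^3 with A = 2x2 real matrices.
# The gap ||u||^2 <v,v> - <u,v>* <u,v> should be positive semi-definite.

rng = np.random.default_rng(5)
u = [rng.standard_normal((2, 2)) for _ in range(3)]
v = [rng.standard_normal((2, 2)) for _ in range(3)]

def inner(a, b):
    return sum(ai.T @ bi for ai, bi in zip(a, b))

uu, uv, vv = inner(u, u), inner(u, v), inner(v, v)
norm_u_sq = np.linalg.norm(uu, 2)      # ||u||_M^2 = || <u,u> ||_A
gap = norm_u_sq * vv - uv.T @ uv
assert np.min(np.linalg.eigvalsh(gap)) >= -1e-9
```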
An important property associated with an inner product is the orthonormality.
The orthonormality plays an important role in data analysis. For example, an
orthonormal basis constructs orthogonal projections and an orthogonally
projected vector minimizes the deviation from its original vector in the
projected space. Therefore, we also introduce the orthonormality in Hilbert
$C^{*}$-module. See, for example, Definition 1.2 in (Bakić and Guljaš, 2001)
for more details.
###### Definition 2.17 (Normalized)
A vector $q\in\mathcal{M}$ is normalized if $0\neq\left\langle
q,q\right\rangle_{\mathcal{M}}=\left\langle
q,q\right\rangle_{\mathcal{M}}^{2}$.
Note that in the case of a general $\mathcal{A}$-valued inner product, for a
normalized vector $q$, $\left\langle q,q\right\rangle_{\mathcal{M}}$ is not
always equal to the identity of $\mathcal{A}$, in contrast to the case of a
complex-valued inner product.
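A minimal concrete illustration (with $\mathcal{M}=\mathcal{A}=\mathbb{C}^{2\times 2}$ and the inner product $\left\langle u,v\right\rangle_{\mathcal{M}}=u^{*}v$, our choice): $q=\operatorname{diag}(1,0)$ is normalized in the sense of Definition 2.17, yet $\left\langle q,q\right\rangle_{\mathcal{M}}$ is a proper projection, not $1_{\mathcal{A}}$.

```python
import numpy as np

# Sketch: q = diag(1, 0) is normalized (0 != <q,q> = <q,q>^2) although
# <q,q> differs from the identity 1_A.

q = np.diag([1.0, 0.0])
g = q.conj().T @ q                    # <q, q> = q* q

assert np.any(g != 0)                 # <q, q> != 0
assert np.allclose(g, g @ g)          # <q, q> = <q, q>^2 (idempotent)
assert not np.allclose(g, np.eye(2))  # but <q, q> != 1_A
```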
###### Definition 2.18 (Orthonormal system and basis)
Let $\mathcal{I}$ be an index set. A set
$\mathcal{S}=\\{q_{i}\\}_{i\in\mathcal{I}}\subseteq\mathcal{M}$ is called an
orthonormal system (ONS) of $\mathcal{M}$ if $q_{i}$ is normalized for any
$i\in\mathcal{I}$ and $\left\langle q_{i},q_{j}\right\rangle_{\mathcal{M}}=0$
for $i\neq j$. We call an ONS $\mathcal{S}$ an orthonormal basis (ONB) if the
module generated by $\mathcal{S}$ is dense in $\mathcal{M}$.
In Hilbert $C^{*}$-modules, $\mathcal{A}$-linear is often used instead of
$\mathbb{C}$-linear.
###### Definition 2.19 ($\mathcal{A}$-linear operator)
Let $\mathcal{M}_{1},\mathcal{M}_{2}$ be Hilbert $\mathcal{A}$-modules. A
linear map $L:\mathcal{M}_{1}\to\mathcal{M}_{2}$ is referred to as
$\mathcal{A}$-linear if it satisfies $L(uc)=(Lu)c\;$ for any
$u\in\mathcal{M}_{1}$ and $c\in\mathcal{A}$.
###### Definition 2.20 ($\mathcal{A}$-linearly independent)
The set $\mathcal{S}$ of $\mathcal{M}$ is said to be $\mathcal{A}$-linearly
independent if it satisfies the following condition: For any finite subset
$\\{v_{1},\ldots,v_{n}\\}$ of $\mathcal{S}$, if $\sum_{i=1}^{n}v_{i}c_{i}=0$
for $c_{i}\in\mathcal{A}$, then $c_{i}=0$ for $i=1,\ldots,n$.
For further details about $C^{*}$-algebra, $C^{*}$-module, and Hilbert
$C^{*}$-module, refer to Murphy (1990); Lance (1995).
### 2.5 Reproducing kernel Hilbert $C^{*}$-module (RKHM)
We summarize the theory of RKHM, which is discussed, for example, in Heo
(2008).
Similar to the case of RKHS, we begin by introducing an $\mathcal{A}$-valued
generalization of a positive definite kernel on a non-empty set $\mathcal{X}$
for data.
###### Definition 2.21 ($\mathcal{A}$-valued positive definite kernel)
An $\mathcal{A}$-valued map $k:\ \mathcal{X}\times\mathcal{X}\to\mathcal{A}$
is called a positive definite kernel if it satisfies the following conditions:
1. 1.
$k(x,y)=k(y,x)^{*}$ for $x,y\in\mathcal{X}$,
2. 2.
$\sum_{i,j=1}^{n}c_{i}^{*}k(x_{i},x_{j})c_{j}\geq_{\mathcal{A}}0$ for
$n\in\mathbb{N}$, $c_{i}\in\mathcal{A}$, $x_{i}\in\mathcal{X}$.
###### Example 2.22
1. 1.
Let $\mathcal{X}=C([0,1]^{m})$. Let $\mathcal{A}=L^{\infty}([0,1])$ and let
$k:\mathcal{X}\times\mathcal{X}\to\mathcal{A}$ be defined as
$k(x,y)(t)=\int_{[0,1]^{m}}\overline{(t-x(s))}(t-y(s))ds$
for $t\in[0,1]$. Then, for $x_{1},\ldots,x_{n}\in\mathcal{X}$,
$c_{1},\ldots,c_{n}\in\mathcal{A}$ and $t\in[0,1]$, we have
$\displaystyle\sum_{i,j=1}^{n}c_{i}^{*}(t)k(x_{i},x_{j})(t)c_{j}(t)$
$\displaystyle=\int_{[0,1]^{m}}\sum_{i,j=1}^{n}\overline{c_{i}(t)(t-x_{i}(s))}(t-x_{j}(s))c_{j}(t)ds$
$\displaystyle=\int_{[0,1]^{m}}\sum_{i=1}^{n}\overline{c_{i}(t)(t-x_{i}(s))}\sum_{j=1}^{n}(t-x_{j}(s))c_{j}(t)ds\geq
0$
for $t\in[0,1]$. Thus, $k$ is an $\mathcal{A}$-valued positive definite
kernel.
2. 2.
Let $\mathcal{A}=L^{\infty}([0,1])$ and
$k:\mathcal{X}\times\mathcal{X}\to\mathcal{A}$ be defined such that
$k(x,y)(t)$ is a complex-valued positive definite kernel for any $t\in[0,1]$.
Then, $k$ is an $\mathcal{A}$-valued positive definite kernel.
3. 3.
Let $\mathcal{W}$ be a separable Hilbert space and let
$\\{e_{i}\\}_{i=1}^{\infty}$ be an orthonormal basis of $\mathcal{W}$. Let
$\mathcal{A}=\mathcal{B}(\mathcal{W})$. Let
$k_{i}:\mathcal{X}\times\mathcal{X}\to\mathbb{C}$ be a complex-valued positive
definite kernel for any $i=1,2,\ldots$. Assume for any $x\in\mathcal{X}$,
there exists $C>0$ such that for any $i=1,2,\ldots$, $|k_{i}(x,x)|\leq C$
holds. Let $k:\mathcal{X}\times\mathcal{X}\to\mathcal{A}$ be defined as
$k(x,y)e_{i}=k_{i}(x,y)e_{i}$. Then, for $x_{1},\ldots,x_{n}\in\mathcal{X}$,
$c_{1},\ldots,c_{n}\in\mathcal{A}$ and $w\in\mathcal{W}$, we have
$\displaystyle\bigg{\langle}w,\bigg{(}\sum_{i,j=1}^{n}c_{i}^{*}k(x_{i},x_{j})c_{j}\bigg{)}w\bigg{\rangle}_{\mathcal{W}}$
$\displaystyle=\sum_{i,j=1}^{n}\sum_{l=1}^{\infty}\left\langle\alpha_{i,l}e_{l},k(x_{i},x_{j})\alpha_{j,l}e_{l}\right\rangle_{\mathcal{W}}$
$\displaystyle=\sum_{l=1}^{\infty}\sum_{i,j=1}^{n}\overline{\alpha_{i,l}}\alpha_{j,l}k_{l}(x_{i},x_{j})\geq
0,$
where $c_{i}w=\sum_{l=1}^{\infty}\alpha_{i,l}e_{l}$ is the expansion with
respect to $\\{e_{i}\\}_{i=1}^{\infty}$. Thus, $k$ is an $\mathcal{A}$-valued
positive definite kernel.
4. 4.
Let $\mathcal{X}=C(\Omega,\mathcal{Y})$ and $\mathcal{W}=L^{2}(\Omega)$ for a
topological space $\Omega$ with a finite Borel measure and a topological space
$\mathcal{Y}$. Let $\mathcal{A}=\mathcal{B}(\mathcal{W})$, and
$\tilde{k}:\mathcal{Y}\times\mathcal{Y}\to\mathbb{C}$ be a complex-valued
bounded continuous positive definite kernel. Moreover, let
$k:\mathcal{X}\times\mathcal{X}\to\mathcal{A}$ be defined as
$(k(x,y)w)(s)=\int_{t\in\Omega}\tilde{k}(x(s),y(t))w(t)dt$. Then, for
$x_{1},\ldots,x_{n}\in\mathcal{X}$, $c_{1},\ldots,c_{n}\in\mathcal{A}$ and
$w\in\mathcal{W}$, we have
$\displaystyle\bigg{\langle}w,\bigg{(}\sum_{i,j=1}^{n}c_{i}^{*}k(x_{i},x_{j})c_{j}\bigg{)}w\bigg{\rangle}_{\mathcal{W}}$
$\displaystyle=\int_{t\in\Omega}\int_{s\in\Omega}\sum_{i,j=1}^{n}\overline{d_{i}(s)}\tilde{k}(x_{i}(s),x_{j}(t))d_{j}(t)dsdt\geq
0,$
where $d_{i}=c_{i}w$. Thus, $k$ is an $\mathcal{A}$-valued positive definite
kernel.
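A finite-dimensional sketch of Example 2.22(3), with $\mathcal{W}=\mathbb{C}^{2}$ and two scalar Gaussian kernels (parameters chosen for illustration): the kernel is diagonal in the basis $\{e_{1},e_{2}\}$, and the resulting block Gram matrix $[k(x_{i},x_{j})]_{i,j}$ is positive semi-definite, matching Definition 2.21.

```python
import numpy as np

# Sketch: a B(C^2)-valued kernel k(x, y) = diag(k_1(x, y), k_2(x, y))
# built from two scalar Gaussian kernels; its block Gram matrix is PSD.

def k(x, y):
    return np.diag([np.exp(-(x - y) ** 2), np.exp(-2.0 * (x - y) ** 2)])

rng = np.random.default_rng(4)
xs = rng.standard_normal(5)
G = np.block([[k(a, b) for b in xs] for a in xs])  # 10x10 block Gram matrix

assert np.allclose(G, G.conj().T)               # k(x, y) = k(y, x)*
assert np.min(np.linalg.eigvalsh(G)) >= -1e-10  # positivity (Definition 2.21)
```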
Let $\phi:\mathcal{X}\to\mathcal{A}^{\mathcal{X}}$ be the feature map
associated with $k$, which is defined as $\phi(x)=k(\cdot,x)$ for
$x\in\mathcal{X}$. Similar to the case of RKHS, we construct the following
$C^{*}$-module composed of $\mathcal{A}$-valued functions by means of $\phi$:
$\mathcal{M}_{k,0}:=\bigg{\\{}\sum_{i=1}^{n}\phi(x_{i})c_{i}\bigg{|}\
n\in\mathbb{N},\ c_{i}\in\mathcal{A},\ x_{i}\in\mathcal{X}\bigg{\\}}.$
An $\mathcal{A}$-valued map
$\left\langle\cdot,\cdot\right\rangle_{\mathcal{M}_{k}}:\mathcal{M}_{k,0}\times\mathcal{M}_{k,0}\to\mathcal{A}$
is defined as follows:
$\bigg{\langle}\sum_{i=1}^{n}\phi(x_{i})c_{i},\sum_{j=1}^{l}\phi(y_{j})d_{j}\bigg{\rangle}_{\mathcal{M}_{k}}:=\sum_{i=1}^{n}\sum_{j=1}^{l}c_{i}^{*}k(x_{i},y_{j})d_{j}.$
By the properties in Definition 2.21 of $k$,
$\left\langle\cdot,\cdot\right\rangle_{\mathcal{M}_{k}}$ is well-defined and
has the reproducing property
$\left\langle\phi(x),v\right\rangle_{\mathcal{M}_{k}}=v(x)$
for $v\in\mathcal{M}_{k,0}$ and $x\in\mathcal{X}$. Also, it satisfies the
properties in Definition 2.12. As a result,
$\left\langle\cdot,\cdot\right\rangle_{\mathcal{M}_{k}}$ is shown to be an
$\mathcal{A}$-valued inner product.
The reproducing kernel Hilbert $\mathcal{A}$-module (RKHM) associated with $k$
is defined as the completion of $\mathcal{M}_{k,0}$. We denote by
$\mathcal{M}_{k}$ the RKHM associated with $k$.
Heo (2008) focused on the case where a group acts on $\mathcal{X}$ and
investigated corresponding actions on RKHMs. Moreover, he considered the space
of operators on Hilbert $\mathcal{A}$-module and proved that for each
operator-valued positive definite kernel associated with a group and cocycle,
there is a corresponding representation on the Hilbert $C^{*}$-module
associated with the positive definite kernel.
## 3 Application of RKHM to functional data
In this section, we provide an overview of the motivation for studying RKHM
for data analysis. We especially focus on the application of RKHM to
functional data.
Analyzing functional data has been researched to take advantage of the
additional information implied by the smoothness of functions underlying data
(Ramsay and Silverman, 2005; Levitin et al., 2007; Wang et al., 2016). By
describing data as functions, we obtain information as functions such as
derivatives. Applying kernel methods to functional data is also proposed
(Kadri et al., 2016). In these frameworks, the functions are assumed to be
vectors in a Hilbert space such as $L^{2}(\Omega)$ for a measure space
$\Omega$, or they are embedded in an RKHS or vvRKHS. Then, analyses are
addressed in these Hilbert spaces.
However, since functional data is itself infinite-dimensional, Hilbert spaces
are not always sufficient for extracting its continuous behavior. This is
because the inner products in Hilbert spaces are complex-valued and thus
degenerate, or fail to capture, the continuous behavior of the functional
data. We compare algorithms in Hilbert spaces and those in Hilbert
$C^{*}$-modules and show advantages of algorithms in Hilbert $C^{*}$-modules
over those in Hilbert spaces, which are summarized in Figure 1. We first
consider algorithms in Hilbert spaces for analyzing functional data
$x_{1},x_{2},\ldots\in C(\Omega,\mathcal{X})$, where $\Omega$ is a
$\sigma$-finite measure space and $\mathcal{X}$ is a Hilbert space. There are
two possible typical patterns of algorithms in Hilbert spaces. The first
pattern (Pattern 1 in Fig. 1) is regarding each function $x_{i}$ as a vector
in a Hilbert space $\mathcal{H}$ containing $C(\Omega,\mathcal{X})$. In this
case, the inner product $\left\langle x_{i},x_{j}\right\rangle_{\mathcal{H}}$
between two functions $x_{i}$ and $x_{j}$ is a single complex value although
$x_{i}$ and $x_{j}$ are functions. Therefore, the information in the values of
the functions at each point degenerates into one complex number. The second pattern
(Pattern 2 in Fig. 1) is discretizing each function $x_{i}$ as
$x_{i}(t_{0}),x_{i}(t_{1}),\ldots$ for $t_{0},t_{1},\ldots\in\Omega$ and
regarding each discretized value $x_{i}(t_{l})$ as a vector in the Hilbert
space $\mathcal{X}$. In this case, we obtain the complex-valued inner product
$\left\langle x_{i}(t_{l}),x_{j}(t_{l})\right\rangle_{\mathcal{X}}$ at each
point $t_{l}\in\Omega$. However, because of the discretization, continuous
behaviors, for example, derivatives, total variation, and frequency
components, of the function $x_{i}$ are lost. Algorithms of both patterns in
the Hilbert spaces proceed by using the computed complex-valued inner
products. As a result, capturing features of functions with the algorithms in
the Hilbert spaces is difficult. On the other hand, if we regard each function
$x_{i}$ as a vector in a Hilbert $C^{*}$-module $\mathcal{M}$ (the rightmost
picture in Fig. 1), then the inner product $\left\langle
x_{i},x_{j}\right\rangle_{\mathcal{M}}$ between two functions $x_{i}$ and
$x_{j}$ in the Hilbert $C^{*}$-module is $C^{*}$-algebra-valued. Thus, if we
set the $C^{*}$-algebra as a function space such as $L^{\infty}(\Omega)$, the
inner product $\left\langle x_{i},x_{j}\right\rangle_{\mathcal{M}}$ is
function-valued. Therefore, algorithms in Hilbert $C^{*}$-modules enable us to
capture and extract continuous behaviors of functions. Moreover, when the
outputs are functions, we can control the outputs according to the features of
the functions.
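The contrast between the two kinds of inner products can be sketched numerically (a toy discretization; the specific functions and the pointwise model of $\mathcal{A}=L^{\infty}([0,1])$ are our choices): the $L^{2}$ inner product collapses to one number, while the $\mathcal{A}$-valued inner product $\left\langle x_{1},x_{2}\right\rangle(t)=\overline{x_{1}(t)}x_{2}(t)$ remains a function of $t$.

```python
import numpy as np

# Sketch: Hilbert-space vs. Hilbert C*-module inner products of two
# discretized functions on [0, 1].

t = np.linspace(0.0, 1.0, 1001)
x1 = np.sin(2 * np.pi * t)
x2 = np.cos(2 * np.pi * t)

# Pattern 1: L^2 inner product -- degenerates to a single number.
scalar = np.sum(np.conj(x1) * x2) * (t[1] - t[0])

# A-valued inner product (A = L^inf([0,1]), modeled pointwise):
# a function of t, keeping the information along t.
function_valued = np.conj(x1) * x2

assert np.ndim(scalar) == 0
assert function_valued.shape == t.shape
```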
Since RKHM is a generalization of RKHS and vvRKHS (see Subsection 4.2 for
further details), the framework of RKHMs (Hilbert $C^{*}$-modules) allows us
to generalize kernel methods in RKHSs and vvRKHSs (Hilbert spaces) to those in
Hilbert $C^{*}$-modules. Therefore, by using RKHM, we can capture and extract
features of functions in kernel methods. The remainder of this paper is
devoted to developing the theory of applying RKHMs to data analysis and
showing examples of practical applications of data analysis in RKHMs (PCA,
time-series data analysis, and analysis of interaction effects).
[Figure 1 schematic: (Pattern 1) regarding functional data $x_{1},x_{2}$ as vectors in a Hilbert space $\mathcal{H}$ gives $\left\langle x_{1},x_{2}\right\rangle_{\mathcal{H}}\in\mathbb{C}$ and degenerates information along $t$; (Pattern 2) discretizing gives $c_{i}=\left\langle x_{1}(t_{i}),x_{2}(t_{i})\right\rangle_{\mathcal{X}}\in\mathbb{C}$ and fails to capture continuous behaviors (derivatives, total variation, frequency components); algorithms in Hilbert $C^{*}$-modules (e.g. RKHMs) give $\left\langle x_{1},x_{2}\right\rangle_{\mathcal{M}}\in\mathcal{A}$ and capture and control continuous behaviors.]
Figure 1: Advantages of algorithms in Hilbert $C^{*}$-modules over those in
Hilbert spaces
## 4 RKHM for data analysis
As we mentioned in Section 1, RKHM has been studied in mathematical physics
and pure mathematics. In existing studies, mathematical properties of RKHM
such as the relationship between group actions and RKHMs (see the last
paragraph of Subsection 2.5) have been discussed. However, these studies have
not focused on data or on algorithms for analyzing it. Therefore, in this
section we fill the gap between the existing theory of RKHM and its
application to data analysis. We develop theories that justify applying RKHM
to data analysis in Subsection 4.1. Also, we investigate the connection of
RKHM with RKHS and vvRKHS in Subsection 4.2.
Generalizing the theories of Hilbert spaces and RKHSs is far from obvious for
general $C^{*}$-algebras, since fundamental properties of Hilbert spaces such
as the Riesz representation theorem and orthogonal complementedness are not
always available in Hilbert $C^{*}$-modules. Therefore, we restrict our
attention to an appropriate class of $C^{*}$-algebras. In fact, von
Neumann-algebras satisfy the desired properties.
###### Definition 4.1 (von Neumann-algebra)
A C*-algebra $\mathcal{A}$ is called a von Neumann-algebra if $\mathcal{A}$ is
isomorphic to the dual Banach space of some Banach space.
The following propositions are fundamental for deriving useful properties for
data analysis in Hilbert $C^{*}$-modules and RKHMs (Skeide, 2000, Theorem
4.16), (Manuilov and Troitsky, 2000, Proposition 2.3.3).
###### Proposition 4.2 (The Riesz representation theorem for Hilbert
$\mathcal{A}$-modules)
Let $\mathcal{A}\subseteq\mathcal{B}(\mathcal{W})$ be a von Neumann-algebra
and let $\mathcal{M}$ be a Hilbert $\mathcal{A}$-module. Let
$\mathcal{H}=\mathcal{M}\otimes_{\mathcal{A}}\mathcal{W}$ (see Definition 4.12
for the definition of the product
$\mathcal{M}\otimes_{\mathcal{A}}\mathcal{W}$). Then, every $v\in\mathcal{M}$
can be regarded as an operator in $\mathcal{B}(\mathcal{W},\mathcal{H})$, the
set of bounded linear operators from $\mathcal{W}$ to $\mathcal{H}$. If
$\mathcal{M}\subseteq\mathcal{B}(\mathcal{W},\mathcal{H})$ is strongly closed
(in this case, we say that $\mathcal{M}$ is a von Neumann
$\mathcal{A}$-module), then for a bounded $\mathcal{A}$-linear map
$L:\mathcal{M}\to\mathcal{A}$ (see Definition 2.19), there exists a unique
$u\in\mathcal{M}$ such that $Lv=\left\langle u,v\right\rangle_{\mathcal{M}}$
for all $v\in\mathcal{M}$.
Let $\mathcal{A}$ be a von Neumann-algebra. We remark that the Hilbert
$\mathcal{A}$-module $\mathcal{A}^{n}$ for some $n\in\mathbb{N}$ is a von
Neumann $\mathcal{A}$-module. Moreover, for an $\mathcal{A}$-valued positive
definite kernel defined as $\tilde{k}1_{\mathcal{A}}$, where $\tilde{k}$ is a
(standard) positive definite kernel, the RKHM $\mathcal{M}_{k}$ is a von
Neumann $\mathcal{A}$-module. (Generally, the Hilbert $\mathcal{A}$-module
represented as $\mathcal{H}\otimes\mathcal{A}$ for a Hilbert space
$\mathcal{H}$ is a von Neumann $\mathcal{A}$-module. Here, $\otimes$
represents the tensor product of a Hilbert space and $C^{*}$-module. See Lance
(1995, p.6) for further details about the tensor product.)
###### Proposition 4.3 (Orthogonal complementedness in Hilbert
$\mathcal{A}$-modules)
Let $\mathcal{A}$ be a unital $C^{*}$-algebra and let $\mathcal{M}$ be a
Hilbert $\mathcal{A}$-module. Let $\mathcal{V}$ be a finitely (algebraically)
generated closed submodule of $\mathcal{M}$. Then, any $u\in\mathcal{M}$ is
decomposed into $u=u_{1}+u_{2}$ where $u_{1}\in\mathcal{V}$ and
$u_{2}\in\mathcal{V}^{\perp}$. Here, $\mathcal{V}^{\perp}$ is the orthogonal
complement of $\mathcal{V}$, defined as $\\{u\in\mathcal{M}\mid\ \left\langle
u,v\right\rangle_{\mathcal{M}}=0\ \text{for all}\ v\in\mathcal{V}\\}$.
Let $\mathcal{A}$ be a unital $C^{*}$-algebra, let $\mathcal{M}$ be a Hilbert
$\mathcal{A}$-module, and let $\\{q_{1},\ldots,q_{n}\\}$ be an ONS of
$\mathcal{M}$. Then, the submodule $\mathcal{V}$ generated by
$\\{q_{1},\ldots,q_{n}\\}$ is isomorphic to
$\bigoplus_{i=1}^{n}\mathcal{V}_{i}$, where $\mathcal{V}_{i}=\\{\left\langle
q_{i},q_{i}\right\rangle_{\mathcal{M}}c\mid\ c\in\mathcal{A}\\}$ is a closed
submodule of $\mathcal{A}$. Thus, we have
$\mathcal{M}=\mathcal{V}\oplus\mathcal{V}^{\perp}$.
Therefore, we set $\mathcal{A}$ as a von Neumann algebra to derive useful
properties of RKHM for data analysis. Note that every von Neumann algebra is
unital (see Definition 2.5).
###### Assumption 4.4
We assume $\mathcal{A}$ is a von Neumann algebra throughout this paper.
The $C^{*}$-algebras in Example 2.6 are also von Neumann algebras. As we noted
after Example 2.6, any $C^{*}$-algebra can be regarded as a subalgebra of
$\mathcal{B}(\mathcal{W})$. This fact implies that setting the range of the
positive definite kernel to $\mathcal{B}(\mathcal{W})$, rather than a general
$C^{*}$-algebra, is effective for data analysis.
### 4.1 General properties of RKHM for data analysis
#### 4.1.1 Fundamental properties of RKHM
Similar to the case of RKHSs, we show that RKHMs constructed from
$\mathcal{A}$-valued positive definite kernels have the reproducing property.
We also show that the RKHM associated with an $\mathcal{A}$-valued positive
definite kernel $k$ is uniquely determined.
###### Proposition 4.5
The map $\left\langle\cdot,\cdot\right\rangle_{\mathcal{M}_{k}}$ defined on
$\mathcal{M}_{k,0}$ is extended continuously to $\mathcal{M}_{k}$ and the map
$\mathcal{M}_{k}\ni
v\mapsto(x\mapsto\left\langle\phi(x),v\right\rangle_{\mathcal{M}_{k}})\in\mathcal{A}^{\mathcal{X}}$
is injective. Thus, $\mathcal{M}_{k}$ can be regarded as a subset of
$\mathcal{A}^{\mathcal{X}}$ and has the reproducing property.
###### Proposition 4.6
Assume a Hilbert $C^{*}$-module $\mathcal{M}$ over $\mathcal{A}$ and a map
$\psi:\mathcal{X}\to\mathcal{M}$ satisfy the following conditions:
1.
$\left\langle\psi(x),\psi(y)\right\rangle_{\mathcal{M}}=k(x,y)$ for all
$x,y\in\mathcal{X}$;
2.
$\overline{\\{\sum_{i=1}^{n}\psi(x_{i})c_{i}\mid\ x_{i}\in\mathcal{X},\
c_{i}\in\mathcal{A}\\}}=\mathcal{M}$.
Then, there exists a unique $\mathcal{A}$-linear bijection
$\Psi:\mathcal{M}_{k}\to\mathcal{M}$ that preserves the inner product and
makes the diagram formed by $\phi$, $\psi$, and $\Psi$ commute:
$\Psi\circ\phi=\psi.$
We give the proofs for the above propositions in Appendix A.
#### 4.1.2 Minimization property and representer theorem in RKHMs
We now develop theory that justifies applying RKHMs to data analysis.
First, we show that a minimization property of orthogonal projection
operators, which is fundamental in Hilbert spaces, also holds in Hilbert
$C^{*}$-modules.
###### Theorem 4.7 (Minimization property of orthogonal projection operators)
Let $\mathcal{I}$ be a finite index set. Let $\\{q_{i}\\}_{i\in\mathcal{I}}$
be an ONS of $\mathcal{M}$ and $\mathcal{V}$ be the submodule of $\mathcal{M}$
spanned by $\\{q_{i}\\}_{i\in\mathcal{I}}$. For $u\in\mathcal{M}$, let
$P:\mathcal{M}\to\mathcal{V}$ be the projection operator defined as
$Pu:=\sum_{i\in\mathcal{I}}q_{i}\left\langle
q_{i},u\right\rangle_{\mathcal{M}}$. Then $Pu$ is the unique solution of the
following minimization problem, where the minimum is taken with respect to a
(pre) order in $\mathcal{A}$ (see Definition 2.9):
$\min_{v\in\mathcal{V}}|u-v|_{\mathcal{M}}^{2}.$ (2)
Proof By Proposition 4.3, $u\in\mathcal{M}$ is decomposed into
$u=u_{1}+u_{2}$, where $u_{1}=Pu\in\mathcal{V}$ and
$u_{2}=u-u_{1}\in\mathcal{V}^{\perp}$. Let $v\in\mathcal{V}$. Since
$u_{1}-v\in\mathcal{V}$, the identity $\left\langle
u_{2},u_{1}-v\right\rangle_{\mathcal{M}}=0$ holds. Therefore, we have
$|u-v|_{\mathcal{M}}^{2}=|u_{2}+(u_{1}-v)|_{\mathcal{M}}^{2}=|u_{2}|_{\mathcal{M}}^{2}+|u_{1}-v|_{\mathcal{M}}^{2},$
(3)
which implies
$|u-v|_{\mathcal{M}}^{2}-|u-u_{1}|_{\mathcal{M}}^{2}\geq_{\mathcal{A}}0$.
Since $v\in\mathcal{V}$ is arbitrary, $u_{1}$ is a solution of
$\min_{v\in\mathcal{V}}|u-v|_{\mathcal{M}}^{2}$.
Moreover, if there exists $u^{\prime}\in\mathcal{V}$ such that
$|u-u_{1}|_{\mathcal{M}}^{2}=|u-u^{\prime}|_{\mathcal{M}}^{2}$, then letting
$v=u^{\prime}$ in Eq. (3) derives
$|u-u^{\prime}|_{\mathcal{M}}^{2}=|u_{2}|_{\mathcal{M}}^{2}+|u_{1}-u^{\prime}|_{\mathcal{M}}^{2}$,
which implies $|u_{1}-u^{\prime}|_{\mathcal{M}}^{2}=0$. As a result,
$u_{1}=u^{\prime}$ holds and the uniqueness of $u_{1}$ has been proved.
Theorem 4.7 shows that the orthogonally projected vector uniquely minimizes the
deviation from the original vector over $\mathcal{V}$. Thus, we can generalize
methods related to orthogonal projections in Hilbert spaces to Hilbert
$C^{*}$-modules.
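Theorem 4.7 can be checked numerically in $\mathcal{A}^{n}$ with $\mathcal{A}=\mathbb{C}^{m\times m}$. The sketch below is our own; the ONS of identity-carrying basis slots is an illustrative choice satisfying $\left\langle q_{i},q_{j}\right\rangle=\delta_{ij}1_{\mathcal{A}}$. It verifies that $|u-v|_{\mathcal{M}}^{2}-|u-Pu|_{\mathcal{M}}^{2}\geq_{\mathcal{A}}0$ for random $v\in\mathcal{V}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 4, 2, 2

def inner(u, v):
    # A-valued inner product on A^n for A = C^{m x m}: <u, v> = sum_i u_i^* v_i
    return sum(ui.conj().T @ vi for ui, vi in zip(u, v))

# ONS {q_1, q_2}: identity in one slot, zeros elsewhere, so <q_i, q_j> = delta_ij 1_A
q = np.zeros((r, n, m, m), dtype=complex)
for i in range(r):
    q[i, i] = np.eye(m)

u = rng.standard_normal((n, m, m)) + 1j * rng.standard_normal((n, m, m))

# Projection onto the submodule V generated by the ONS: Pu = sum_i q_i <q_i, u>
Pu = sum(np.array([slot @ inner(q[i], u) for slot in q[i]]) for i in range(r))

def sq_abs(w):
    # |w|^2 = <w, w>, a positive element of A
    return inner(w, w)

# Theorem 4.7: |u - v|^2 - |u - Pu|^2 >=_A 0 for every v in V
for _ in range(5):
    coeffs = rng.standard_normal((r, m, m)) + 1j * rng.standard_normal((r, m, m))
    v = sum(np.array([slot @ coeffs[i] for slot in q[i]]) for i in range(r))
    diff = sq_abs(u - v) - sq_abs(u - Pu)
    assert np.min(np.linalg.eigvalsh((diff + diff.conj().T) / 2)) > -1e-9
```

Here "minimal" is meant with respect to the order on $\mathcal{A}$: the difference of the two squared absolute values is a positive semidefinite matrix.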
Next, we show the representer theorem in RKHMs.
###### Theorem 4.8 (Representer theorem)
Let $x_{1},\ldots,x_{n}\in\mathcal{X}$ and $a_{1},\ldots,a_{n}\in\mathcal{A}$.
Let $h:\mathcal{X}\times\mathcal{A}^{2}\to\mathcal{A}_{+}$ be an error
function and let $g:\mathcal{A}_{+}\to\mathcal{A}_{+}$ satisfy
$g(c)\leq_{\mathcal{A}}g(d)$ for $c\leq_{\mathcal{A}}d$. Assume the module
spanned by $\\{\phi(x_{i})\\}_{i=1}^{n}$ is closed. Then, any
$u\in\mathcal{M}_{k}$ minimizing
$\sum_{i=1}^{n}h(x_{i},a_{i},u(x_{i}))+g(|u|_{\mathcal{M}_{k}})$ admits a
representation of the form $\sum_{i=1}^{n}\phi(x_{i})c_{i}$ for some
$c_{1},\ldots,c_{n}\in\mathcal{A}$.
Proof Let $\mathcal{V}$ be the module spanned by
$\\{\phi(x_{i})\\}_{i=1}^{n}$. By Proposition 4.3, $u\in\mathcal{M}_{k}$ is
decomposed into $u=u_{1}+u_{2}$, where $u_{1}\in\mathcal{V}$,
$u_{2}\in\mathcal{V}^{\perp}$. By the reproducing property of
$\mathcal{M}_{k}$, the following equalities are derived for $i=1,\ldots,n$:
$\displaystyle
u(x_{i})=\left\langle\phi(x_{i}),u\right\rangle_{\mathcal{M}_{k}}=\left\langle\phi(x_{i}),u_{1}+u_{2}\right\rangle_{\mathcal{M}_{k}}=\left\langle\phi(x_{i}),u_{1}\right\rangle_{\mathcal{M}_{k}}.$
Thus, $\sum_{i=1}^{n}h(x_{i},a_{i},u(x_{i}))$ is independent of $u_{2}$. As
for the term $g(|u|_{\mathcal{M}_{k}})$, since $g$ satisfies
$g(c)\leq_{\mathcal{A}}g(d)$ for $c\leq_{\mathcal{A}}d$, we have
$\displaystyle
g(|u|_{\mathcal{M}_{k}})=g(|u_{1}+u_{2}|_{\mathcal{M}_{k}})=g\Big{(}\big{(}|u_{1}|_{\mathcal{M}_{k}}^{2}+|u_{2}|_{\mathcal{M}_{k}}^{2}\big{)}^{1/2}\Big{)}\geq_{\mathcal{A}}g(|u_{1}|_{\mathcal{M}_{k}}).$
Therefore, setting $u_{2}=0$ leaves the term
$\sum_{i=1}^{n}h(x_{i},a_{i},u(x_{i}))$ unchanged and does not increase the
term $g(|u|_{\mathcal{M}_{k}})$. Hence, for any minimizer $u$, the projection
$u_{1}\in\mathcal{V}$ is also a minimizer, and it takes the form
$\sum_{i=1}^{n}\phi(x_{i})c_{i}$ for some $c_{1},\ldots,c_{n}\in\mathcal{A}$.
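To make the representer theorem concrete, consider the special case $k=\tilde{k}1_{\mathcal{A}}$ with $\mathcal{A}=\mathbb{C}^{m\times m}$, $h(x,a,c)=(c-a)^{*}(c-a)$, and $g(c)=\lambda c^{2}$, where $\tilde{k}$ is a scalar Gaussian kernel. Under these choices (ours, for illustration only) the coefficients $c_{i}$ of a minimizer $\sum_{i}\phi(x_{i})c_{i}$ solve the block linear system $(K+\lambda I)C=A$, applied slot-wise on the sample index:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, lam = 5, 2, 0.1

# Sample inputs and A-valued targets a_i in A = C^{m x m} (illustrative data)
X = rng.standard_normal(n)
A = rng.standard_normal((n, m, m))

# Scalar part of the kernel k(x, y) = exp(-(x - y)^2) * 1_A
K = np.exp(-(X[:, None] - X[None, :]) ** 2)

# With h and g quadratic as above, a minimizer u = sum_i phi(x_i) c_i has
# coefficients solving (K + lam I) C = A, slot-wise on the sample index
C = np.linalg.solve(K + lam * np.eye(n), A.reshape(n, -1)).reshape(n, m, m)

# Check the normal equations: (K + lam I) C - A = 0
resid = np.einsum('ij,jkl->ikl', K + lam * np.eye(n), C) - A
assert np.allclose(resid, 0, atol=1e-8)
```

As in kernel ridge regression, the $\mathcal{A}$-valued structure acts slot-wise here only because $k$ is a scalar kernel times $1_{\mathcal{A}}$; for general $\mathcal{A}$-valued kernels the system couples the slots.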
### 4.2 Connection with RKHSs and vvRKHSs
We show that the framework of RKHM is more general than those of RKHS and
vvRKHS. Let $\tilde{k}$ be a complex-valued positive definite kernel and let
$\mathcal{H}_{\tilde{k}}$ be the RKHS associated with $\tilde{k}$. In
addition, let $k$ be an $\mathcal{A}$-valued positive definite kernel and
$\mathcal{M}_{k}$ be the RKHM associated with $k$. The following proposition
is derived by the definitions of RKHSs and RKHMs.
###### Proposition 4.9 (Connection between RKHMs and RKHSs)
If $\mathcal{A}=\mathbb{C}$ and $k=\tilde{k}$, then
$\mathcal{H}_{\tilde{k}}=\mathcal{M}_{k}$.
As for the connection between vvRKHSs and RKHMs, we first remark that in the
case of $\mathcal{A}=\mathcal{B}(\mathcal{W})$, Definition 2.21 is equivalent
to the operator-valued positive definite kernel (Definition 2.2) used in the
theory of vvRKHSs.
###### Lemma 4.10 (Connection between Definition 2.21 and Definition 2.2)
If $\mathcal{A}=\mathcal{B}(\mathcal{W})$, then the $\mathcal{A}$-valued
positive definite kernel defined in Definition 2.21 is equivalent to the
operator valued positive definite kernel defined in Definition 2.2.
The proof for Lemma 4.10 is given in Appendix A.
Let $\mathcal{A}=\mathcal{B}(\mathcal{W})$ and let
$\mathcal{H}_{k}^{\operatorname{v}}$ be the vvRKHS associated with $k$. To
investigate further connections between vvRKHSs and RKHMs, we introduce the
notion of interior tensor (Lance, 1995, Chapter 4).
###### Proposition 4.11
Let $\mathcal{M}$ be a Hilbert $\mathcal{B}(\mathcal{W})$-module and let
$\mathcal{M}\otimes\mathcal{W}$ be the tensor product of $\mathcal{M}$ and
$\mathcal{W}$ as vector spaces. The map
$\left\langle\cdot,\cdot\right\rangle_{\mathcal{M}\otimes\mathcal{W}}:\mathcal{M}\otimes\mathcal{W}\;\times\;\mathcal{M}\otimes\mathcal{W}\to\mathbb{C}$
defined as
$\left\langle v\otimes w,u\otimes
h\right\rangle_{\mathcal{M}\otimes\mathcal{W}}=\left\langle w,\left\langle
v,u\right\rangle_{\mathcal{M}}h\right\rangle_{\mathcal{W}}$
is a complex-valued pre inner product on $\mathcal{M}\otimes\mathcal{W}$.
###### Definition 4.12 (Interior tensor)
The completion of $\mathcal{M}\otimes\mathcal{W}$ with respect to the pre
inner product
$\left\langle\cdot,\cdot\right\rangle_{\mathcal{M}\otimes\mathcal{W}}$ is
referred to as the interior tensor between $\mathcal{M}$ and $\mathcal{W}$,
and denoted as $\mathcal{M}\otimes_{\mathcal{B}(\mathcal{W})}\mathcal{W}$.
Note that $\mathcal{M}\otimes_{\mathcal{B}(\mathcal{W})}\mathcal{W}$ is a
Hilbert space. We now show that vvRKHSs are reconstructed as the interior
tensor of RKHMs and $\mathcal{W}$.
###### Theorem 4.13 (Connection between RKHMs and vvRKHSs)
If $\mathcal{A}=\mathcal{B}(\mathcal{W})$, then the two Hilbert spaces
$\mathcal{H}_{k}^{\operatorname{v}}$ and
$\mathcal{M}_{k}\otimes_{\mathcal{B}(\mathcal{W})}\mathcal{W}$ are isomorphic.
Theorem 4.13 is derived by the following lemma.
###### Lemma 4.14
There exists a unique unitary map
$U\colon\mathcal{M}_{k}\otimes_{\mathcal{B}(\mathcal{W})}\mathcal{W}\to\mathcal{H}_{k}^{\operatorname{v}}$
such that $U(\phi(x)c\otimes w)=\phi(x)(cw)$ holds for all $x\in\mathcal{X}$,
$c\in\mathcal{B}(\mathcal{W})$ and $w\in\mathcal{W}$.
Proof First, we show that
$\displaystyle\bigg{\langle}\sum_{i=1}^{n}\phi({x_{i}})c_{i}\otimes
w_{i},\sum_{j=1}^{l}\phi({y_{j}})d_{j}\otimes
h_{j}\bigg{\rangle}_{\mathcal{M}_{k}\otimes\mathcal{W}}=\bigg{\langle}\sum_{i=1}^{n}\phi({x_{i}})(c_{i}w_{i}),\sum_{j=1}^{l}\phi({y_{j}})(d_{j}h_{j})\bigg{\rangle}_{\mathcal{H}_{k}^{\operatorname{v}}}$
holds for all $\sum_{i=1}^{n}\phi({x_{i}})c_{i}\otimes
w_{i},\sum_{j=1}^{l}\phi({y_{j}})d_{j}\otimes
h_{j}\in\mathcal{M}_{k}\otimes_{\mathcal{B}(\mathcal{W})}\mathcal{W}$. This
follows from a straightforward calculation. Indeed, we have
$\displaystyle\bigg{\langle}\sum_{i=1}^{n}\phi({x_{i}})c_{i}\otimes
w_{i},\sum_{j=1}^{l}\phi({y_{j}})d_{j}\otimes
h_{j}\bigg{\rangle}_{\mathcal{M}_{k}\otimes\mathcal{W}}=\sum_{i=1}^{n}\sum_{j=1}^{l}\left\langle
w_{i},\left\langle\phi(x_{i})c_{i},\phi(y_{j})d_{j}\right\rangle_{\mathcal{M}_{k}}h_{j}\right\rangle_{\mathcal{W}}$
$\displaystyle\qquad=\sum_{i=1}^{n}\sum_{j=1}^{l}\left\langle
w_{i},c_{i}^{*}k(x_{i},y_{j})d_{j}h_{j}\right\rangle_{\mathcal{W}}=\sum_{i=1}^{n}\sum_{j=1}^{l}\left\langle
c_{i}w_{i},k(x_{i},y_{j})d_{j}h_{j}\right\rangle_{\mathcal{W}}$
$\displaystyle\qquad=\bigg{\langle}\sum_{i=1}^{n}\phi({x_{i}})(c_{i}w_{i}),\sum_{j=1}^{l}\phi({y_{j}})(d_{j}h_{j})\bigg{\rangle}_{\mathcal{H}_{k}^{\operatorname{v}}}.$
Therefore, by a standard functional analysis argument, it follows that
there exists an isometry
$U\colon\mathcal{M}_{k}\otimes_{\mathcal{B}(\mathcal{W})}\mathcal{W}\to\mathcal{H}_{k}^{\operatorname{v}}$
such that $U(\phi(x)c\otimes w)=\phi(x)(cw)$ holds for all $x\in\mathcal{X}$,
$c\in\mathcal{B}(\mathcal{W})$ and $w\in\mathcal{W}$. Since the image of $U$
is closed and dense in $\mathcal{H}_{k}^{\operatorname{v}}$, $U$ is
surjective. Thus $U$ is a unitary map.
## 5 Kernel mean embedding in RKHM
We generalize KME in RKHSs, which is widely used in analyzing distributions,
to RKHMs. By using the framework of RKHM, we can embed $\mathcal{A}$-valued
measures instead of probability measures (or, more generally, complex-valued
measures). We provide a brief review of $\mathcal{A}$-valued measures and the
integral with respect to $\mathcal{A}$-valued measures in Appendix B. We
define a KME in RKHMs in Subsection 5.1 and show its theoretical properties in
Subsection 5.2.
To define a KME by using $\mathcal{A}$-valued measures and integrals, we first
define $c_{0}$-kernels.
###### Definition 5.1 (Function space ${C}_{0}(\mathcal{X},\mathcal{A})$)
For a locally compact Hausdorff space $\mathcal{X}$, the set of all
$\mathcal{A}$-valued continuous functions on $\mathcal{X}$ vanishing at
infinity is denoted as $C_{0}(\mathcal{X},\mathcal{A})$. Here, an
$\mathcal{A}$-valued continuous function $u$ is said to vanish at infinity if
the set $\\{x\in\mathcal{X}\mid\ \|u(x)\|_{\mathcal{A}}\geq\epsilon\\}$ is
compact for any $\epsilon>0$. The space ${C}_{0}(\mathcal{X},\mathcal{A})$ is
a Banach $\mathcal{A}$-module with respect to the sup norm.
Note that if $\mathcal{X}$ is compact, any continuous function is contained in
${C}_{0}(\mathcal{X},\mathcal{A})$.
###### Definition 5.2 ($c_{0}$-kernel)
Let $\mathcal{X}$ be a locally compact Hausdorff space. An
$\mathcal{A}$-valued positive definite kernel
$k:\mathcal{X}\times\mathcal{X}\to\mathcal{A}$ is referred to as a
$c_{0}$-kernel if $k$ is bounded and
$\phi(x)=k(\cdot,x)\in{C}_{0}(\mathcal{X},\mathcal{A})$ for any
$x\in\mathcal{X}$.
In this section, we impose the following assumption.
###### Assumption 5.3
We assume $\mathcal{X}$ is a locally compact Hausdorff space and $k$ is an
$\mathcal{A}$-valued $c_{0}$-positive definite kernel. In addition, we assume
$\mathcal{M}_{k}$ is a von Neumann $\mathcal{A}$-module (see Proposition 4.2).
For example, we often consider $\mathcal{X}=\mathbb{R}^{d}$ in practical
situations. Also, we provide examples of $c_{0}$-kernels as follows.
###### Example 5.4
1.
Let $\mathcal{A}=L^{\infty}([0,1])$ and let $k$ be an $\mathcal{A}$-valued
positive definite kernel defined such that $k(x,y)(t)$ is a complex-valued
positive definite kernel for $t\in[0,1]$ (see Example 2.22.2). Assume there
exists a complex-valued $c_{0}$-positive definite kernel $\tilde{k}$ such that
for any $t\in[0,1]$, $|k(x,y)(t)|\leq|\tilde{k}(x,y)|$ holds. If
$\|k(x,y)\|_{\mathcal{A}}$ is continuous with respect to $y$ for any
$x\in\mathcal{X}$, then the inclusion
$\\{y\in\mathcal{X}\mid\
\|k(x,y)\|_{\mathcal{A}}\geq\epsilon\\}\subseteq\\{y\in\mathcal{X}\mid\
|\tilde{k}(x,y)|\geq\epsilon\\}$
holds for $x\in\mathcal{X}$ and $\epsilon>0$. Since $\tilde{k}$ is a
$c_{0}$-kernel, the set $\\{y\in\mathcal{X}\mid\
|\tilde{k}(x,y)|\geq\epsilon\\}$ is compact (see Definition 5.1). Thus,
$\\{y\in\mathcal{X}\mid\ \|k(x,y)\|_{\mathcal{A}}\geq\epsilon\\}$ is also
compact and $k$ is an $\mathcal{A}$-valued $c_{0}$-positive definite kernel.
Examples of complex-valued $c_{0}$-positive definite kernels are the Gaussian,
Laplacian, and $B_{2n+1}$-spline kernels.
2.
Let $\mathcal{W}$ be a separable Hilbert space and let
$\\{e_{i}\\}_{i=1}^{\infty}$ be an orthonormal basis of $\mathcal{W}$. Let
$\mathcal{A}=\mathcal{B}(\mathcal{W})$ and let
$k:\mathcal{X}\times\mathcal{X}\to\mathcal{A}$ be defined as
$k(x,y)e_{i}=k_{i}(x,y)e_{i}$, where
$k_{i}:\mathcal{X}\times\mathcal{X}\to\mathbb{C}$ is a complex-valued positive
definite kernel for any $i=1,2,\ldots$ (see Example 2.22.3). Assume there
exists a complex-valued $c_{0}$-positive definite kernel $\tilde{k}$ such that
for any $i=1,2,\ldots$, $|k_{i}(x,y)|\leq|\tilde{k}(x,y)|$ holds.
If $\|k(x,y)\|_{\mathcal{A}}$ is continuous with respect to $y$ for any
$x\in\mathcal{X}$, then $k$ is shown to be an $\mathcal{A}$-valued
$c_{0}$-positive definite kernel in the same manner as the above example.
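A finite-dimensional analogue of these examples can be checked numerically. The sketch below is our own; the widths are arbitrary. It builds a diagonal $\mathbb{C}^{m\times m}$-valued kernel from slot-wise Gaussians, verifies the domination by the widest Gaussian used in the argument above, and confirms that block Gram matrices are positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 3, 6
widths = np.array([0.5, 1.0, 2.0])   # one Gaussian width per diagonal slot

def k(x, y):
    # Diagonal C^{m x m}-valued kernel: k(x, y) e_i = exp(-(x - y)^2 / s_i) e_i
    return np.diag(np.exp(-(x - y) ** 2 / widths))

# Domination by a single scalar c0-kernel (the widest Gaussian):
# |k_i(x, y)| <= exp(-(x - y)^2 / max_i s_i) for every slot i
x, y = rng.standard_normal(2)
assert np.all(np.diag(k(x, y)) <= np.exp(-(x - y) ** 2 / widths.max()) + 1e-12)

# A-valued positive definiteness: the nm x nm block Gram matrix is PSD
X = rng.standard_normal(n)
G = np.block([[k(xi, xj) for xj in X] for xi in X])
assert np.min(np.linalg.eigvalsh(G)) > -1e-10
```

Because the kernel is diagonal, the block Gram matrix is permutation-similar to a direct sum of $m$ scalar Gaussian Gram matrices, which explains the positive semidefiniteness observed here.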
We introduce $\mathcal{A}$-valued measures and integrals in preparation for
defining a KME in RKHMs. They are special cases of vector measures and integrals
(Dinculeanu, 1967, 2000), respectively. We review vector measures and integrals
as $\mathcal{A}$-valued notions in Appendix B, where the notions of measure and
the Lebesgue integral are generalized to the $\mathcal{A}$-valued setting.
### 5.1 Kernel mean embedding of $C^{*}$-algebra-valued measures
We now define a KME in RKHMs.
###### Definition 5.5 (KME in RKHMs)
Let $\mathcal{D}(\mathcal{X},\mathcal{A})$ be the set of all
$\mathcal{A}$-valued finite regular Borel measures. A kernel mean embedding in
an RKHM $\mathcal{M}_{k}$ is a map
$\Phi:\mathcal{D}(\mathcal{X},\mathcal{A})\rightarrow\mathcal{M}_{k}$ defined
by
$\Phi(\mu):=\int_{x\in\mathcal{X}}\phi(x)d\mu(x).$ (4)
We emphasize that the well-definedness of $\Phi$ is not trivial, and the von
Neumann $\mathcal{A}$-module structure is adequate for showing it. More
precisely, the following theorem establishes the well-definedness.
###### Theorem 5.6 (Well-definedness for the KME in RKHMs)
Let $\mu\in\mathcal{D}(\mathcal{X},\mathcal{A})$. Then,
$\Phi(\mu)\in\mathcal{M}_{k}$. In addition, the following equality holds for
any $v\in\mathcal{M}_{k}$:
$\left\langle\Phi(\mu),v\right\rangle_{\mathcal{M}_{k}}=\int_{x\in\mathcal{X}}d\mu^{*}(x)v(x).$
(5)
To show Theorem 5.6, we use the Riesz representation theorem for Hilbert
$\mathcal{A}$-modules (Proposition 4.2).
Proof Let $L_{\mu}:\mathcal{M}_{k}\to\mathcal{A}$ be an $\mathcal{A}$-linear
map defined as $L_{\mu}v:=\int_{x\in\mathcal{X}}d\mu^{*}(x)v(x)$. The
following inequalities are derived by the reproducing property and the
Cauchy–Schwarz inequality (Lemma 2.16):
$\displaystyle\|L_{\mu}v\|_{\mathcal{A}}$
$\displaystyle\leq\int_{x\in\mathcal{X}}\|v(x)\|_{\mathcal{A}}d|\mu|(x)=\int_{x\in\mathcal{X}}\|\left\langle\phi(x),v\right\rangle_{\mathcal{M}_{k}}\|_{\mathcal{A}}d|\mu|(x)$
$\displaystyle\leq\|v\|_{\mathcal{M}_{k}}\int_{x\in\mathcal{X}}\|\phi(x)\|_{\mathcal{M}_{k}}d|\mu|(x)\leq|\mu|(\mathcal{X})\|v\|_{\mathcal{M}_{k}}\sup_{x\in\mathcal{X}}\|\phi(x)\|_{\mathcal{M}_{k}},$
(6)
where the first inequality is easily checked for a step function
$s(x):=\sum_{i=1}^{n}c_{i}\chi_{E_{i}}(x)$ as follows:
$\displaystyle\bigg{\|}\int_{x\in\mathcal{X}}d\mu^{*}(x)s(x)\bigg{\|}_{\mathcal{A}}$
$\displaystyle=\bigg{\|}\sum_{i=1}^{n}\mu(E_{i})^{*}c_{i}\bigg{\|}_{\mathcal{A}}\leq\sum_{i=1}^{n}\|\mu(E_{i})\|_{\mathcal{A}}\|c_{i}\|_{\mathcal{A}}$
$\displaystyle\leq\sum_{i=1}^{n}|\mu|(E_{i})\|c_{i}\|_{\mathcal{A}}=\int_{x\in\mathcal{X}}\|s(x)\|_{\mathcal{A}}d|\mu|(x).$
Thus, it holds for any totally measurable function. Since both
$|{\mu}|(\mathcal{X})$ and
$\sup_{x\in\mathcal{X}}\|\phi(x)\|_{\mathcal{M}_{k}}$ are finite, inequality
(6) means $L_{\mu}$ is bounded. Thus, by the Riesz representation theorem for
Hilbert $\mathcal{A}$-modules (Proposition 4.2), there exists
$u_{\mu}\in\mathcal{M}_{k}$ such that $L_{\mu}v=\left\langle
u_{\mu},v\right\rangle_{\mathcal{M}_{k}}$. By setting $v=\phi(y)$, for
$y\in\mathcal{X}$, we have
$u_{\mu}(y)=(L_{\mu}\phi(y))^{*}=\int_{x\in\mathcal{X}}k(y,x)d\mu(x)$.
Therefore, $\Phi(\mu)=u_{\mu}\in\mathcal{M}_{k}$ and
$\left\langle\Phi(\mu),v\right\rangle_{\mathcal{M}_{k}}=\int_{x\in\mathcal{X}}d\mu^{*}(x)v(x)$.
###### Corollary 5.7
For $\mu,\nu\in\mathcal{D}(\mathcal{X},\mathcal{A})$, the inner product
between $\Phi(\mu)$ and $\Phi(\nu)$ is given as follows:
$\left\langle\Phi(\mu),\Phi(\nu)\right\rangle_{\mathcal{M}_{k}}=\int_{x\in\mathcal{X}}\int_{y\in\mathcal{X}}d\mu^{*}(x)k(x,y)d\nu(y).$
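For a discrete $\mathcal{A}$-valued measure $\mu=\sum_{j}\delta_{x_{j}}c_{j}$, the integrals above reduce to finite sums, so Eq. (5) and Corollary 5.7 can be verified directly; the diagonal Gaussian kernel below is an illustrative choice of ours:

```python
import numpy as np

rng = np.random.default_rng(4)
m = 2

def k(x, y):
    # diagonal matrix-valued Gaussian kernel (our own choice)
    return np.exp(-(x - y) ** 2) * np.eye(m)

# Discrete A-valued measure mu = sum_j delta_{x_j} c_j
xs = rng.standard_normal(3)
cs = rng.standard_normal((3, m, m))

def embed(y):
    # Phi(mu)(y) = int k(y, x) d mu(x) = sum_j k(y, x_j) c_j
    return sum(k(y, xj) @ cj for xj, cj in zip(xs, cs))

# Eq. (5) with v = phi(y): <Phi(mu), phi(y)> = (Phi(mu)(y))^* equals
# int d mu^*(x) phi(y)(x) = sum_j c_j^* k(x_j, y)
y = 0.3
lhs = embed(y).conj().T
rhs = sum(cj.conj().T @ k(xj, y) for xj, cj in zip(xs, cs))
assert np.allclose(lhs, rhs)

# Corollary 5.7 with nu = mu: <Phi(mu), Phi(mu)> is a positive element of A
gram = sum(ci.conj().T @ k(xi, xj) @ cj
           for xi, ci in zip(xs, cs) for xj, cj in zip(xs, cs))
assert np.min(np.linalg.eigvalsh((gram + gram.conj().T) / 2)) > -1e-9
```

The embedding of an empirical measure is thus a finite $\mathcal{A}$-linear combination of feature vectors, exactly as in the scalar RKHS case but with matrix weights.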
Moreover, many basic properties of the existing KME in RKHSs are generalized
to the proposed KME as follows.
###### Proposition 5.8 (Basic properties of the KME $\Phi$)
For $\mu,\nu\in\mathcal{D}(\mathcal{X},\mathcal{A})$ and $c\in\mathcal{A}$,
$\Phi(\mu+\nu)=\Phi(\mu)+\Phi(\nu)$ and $\Phi(\mu c)=\Phi(\mu)c$ (i.e., $\Phi$
is $\mathcal{A}$-linear, see Definition 2.19) hold. In addition, for
$x\in\mathcal{X}$, $\Phi(\delta_{x})=\phi(x)$ (see Definition B.2 for the
definition of the $\mathcal{A}$-valued Dirac measure $\delta_{x}$).
This is derived from Eqs. (4) and (5). Note that if $\mathcal{A}=\mathbb{C}$,
then the proposed KME (4) is equivalent to the existing KME in RKHS considered
in Sriperumbudur et al. (2011).
### 5.2 Injectivity and universality
Here, we show the connection between the injectivity of the KME and the
universality of RKHM. The proofs of the propositions in this subsection are
given in Appendix C.
#### 5.2.1 Injectivity
In practice, the injectivity of $\Phi$ is important for transforming problems in
$\mathcal{D}(\mathcal{X},\mathcal{A})$ into problems in $\mathcal{M}_{k}$. This
is because if a KME $\Phi$ in an RKHM is injective, then $\mathcal{A}$-valued
measures are embedded into $\mathcal{M}_{k}$ through $\Phi$ without loss of
information. Note that, for probability measures, the injectivity of the
existing KME is also referred to as the “characteristic” property. The
injectivity of the existing KME in RKHS has been discussed in, for example,
Fukumizu et al. (2007); Sriperumbudur et al. (2010, 2011). These studies give
criteria for the injectivity of the KMEs associated with important complex-
valued kernels such as translation invariant kernels and radial kernels.
Typical examples of these kernels are the Gaussian, Laplacian, and inverse
multiquadratic kernels. Here, we define translation invariant kernels and
radial kernels via positive $\mathcal{A}$-valued measures, and generalize their
criteria to RKHMs associated with $\mathcal{A}$-valued kernels.
To characterize translation invariant kernels, we first define the Fourier
transform and the support of an $\mathcal{A}$-valued measure.
###### Definition 5.9 (Fourier transform and support of an
$\mathcal{A}$-valued measure)
For an $\mathcal{A}$-valued measure $\lambda$ on $\mathbb{R}^{d}$, the Fourier
transform of $\lambda$, denoted as $\hat{\lambda}$, is defined as
$\hat{\lambda}(x)=\int_{\omega\in\mathbb{R}^{d}}e^{-\sqrt{-1}x^{T}\omega}d\lambda(\omega).$
In addition, the support of $\lambda$ is defined as
$\operatorname{supp}(\lambda)=\\{x\in\mathbb{R}^{d}\mid\
\lambda(\mathcal{U})>_{\mathcal{A}}0\mbox{ for any open set $\mathcal{U}$ such
that }x\in\mathcal{U}\\}.$
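As a quick numerical sanity check of Definition 5.9 (ours, not from the text), a discrete positive matrix-valued measure $\lambda=\sum_{j}\delta_{\omega_{j}}c_{j}$ with positive semidefinite $c_{j}$ yields, via $k(x,y)=\hat{\lambda}(y-x)$, an $\mathcal{A}$-valued positive definite kernel of the form introduced in Definition 5.10 below:

```python
import numpy as np

rng = np.random.default_rng(6)
m, J, n = 2, 4, 5

# Discrete positive matrix-valued measure lambda = sum_j delta_{omega_j} c_j
omegas = rng.standard_normal(J)
roots = rng.standard_normal((J, m, m))
cs = np.array([b @ b.T for b in roots])          # positive semidefinite weights

def k(x, y):
    # k(x, y) = lambda_hat(y - x) = sum_j exp(-sqrt(-1) (y - x) omega_j) c_j
    return sum(np.exp(-1j * (y - x) * w) * c for w, c in zip(omegas, cs))

# The block Gram matrix of k is Hermitian and positive semidefinite
X = rng.standard_normal(n)
G = np.block([[k(xi, xj) for xj in X] for xi in X])
assert np.allclose(G, G.conj().T)
assert np.min(np.linalg.eigvalsh(G)) > -1e-9
```

This mirrors the scalar Bochner-type construction: positivity of the weights $c_{j}$ is what makes the Fourier transform a positive definite kernel.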
###### Definition 5.10 (Translation invariant kernel and radial kernel)
1.
An $\mathcal{A}$-valued positive definite kernel
$k:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathcal{A}$ is called a translation
invariant kernel if it is represented as $k(x,y)=\hat{\lambda}(y-x)$ for a
positive $\mathcal{A}$-valued measure $\lambda$.
2.
An $\mathcal{A}$-valued positive definite kernel
$k:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathcal{A}$ is called a radial kernel
if it is represented as $k(x,y)=\int_{[0,\infty)}e^{-t\|x-y\|^{2}}d\eta(t)$
for a positive $\mathcal{A}$-valued measure $\eta$.
Here, an $\mathcal{A}$-valued measure $\mu$ is said to be positive if
$\mu(E)\geq_{\mathcal{A}}0$ for any Borel set $E$.
We show that translation invariant kernels and radial kernels induce injective KMEs.
###### Proposition 5.11 (Injectivity for translation invariant kernels)
Let $\mathcal{A}=\mathbb{C}^{m\times m}$ and $\mathcal{X}=\mathbb{R}^{d}$.
Assume $k:\mathcal{X}\times\mathcal{X}\to\mathcal{A}$ is a translation
invariant kernel with a positive $\mathcal{A}$-valued measure $\lambda$ that
satisfies $\operatorname{supp}(\lambda)=\mathcal{X}$. Then, the KME
$\Phi:\mathcal{D}(\mathcal{X},\mathcal{A})\to\mathcal{M}_{k}$ defined by Eq.
(4) is injective.
###### Proposition 5.12 (Injectivity for radial kernels)
Let $\mathcal{A}=\mathbb{C}^{m\times m}$ and $\mathcal{X}=\mathbb{R}^{d}$.
Assume $k:\mathcal{X}\times\mathcal{X}\to\mathcal{A}$ is a radial kernel with
a positive $\mathcal{A}$-valued measure $\eta$ that satisfies
$\operatorname{supp}(\eta)\neq\\{0\\}$. Then, the KME
$\Phi:\mathcal{D}(\mathcal{X},\mathcal{A})\to\mathcal{M}_{k}$ defined by Eq.
(4) is injective.
###### Example 5.13
1.
If $k:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{C}^{m\times m}$ is a
matrix-valued kernel whose diagonal elements are Gaussian, Laplacian, or
$B_{2n+1}$-spline and nondiagonal elements are $0$, then $k$ is a
$c_{0}$-kernel (see Example 2.22.1). There exists a matrix-valued measure
$\lambda$ that satisfies $k(x,y)=\hat{\lambda}(y-x)$, whose diagonal
elements are nonnegative measures supported on $\mathbb{R}^{d}$ (cf. Table 2 in
Sriperumbudur et al. (2010)), and whose nondiagonal elements are $0$. Thus, by
Proposition 5.11, $\Phi$ is injective.
2.
If $k$ is a matrix-valued kernel whose diagonal elements are inverse
multiquadratic and nondiagonal elements are $0$, then $k$ is a $c_{0}$-kernel.
There exists a matrix-valued measure $\eta$ that satisfies
$k(x,y)=\int_{[0,\infty)}e^{-t\|x-y\|^{2}}d\eta(t)$, whose diagonal
elements are nonnegative measures with $\operatorname{supp}(\eta)\neq\\{0\\}$,
and whose nondiagonal elements are $0$ (cf. Theorem 7.15 in Wendland (2004)).
Thus, by Proposition 5.12, $\Phi$ is injective.
#### 5.2.2 Connection with universality
Another important property for kernel methods is universality, which ensures
that kernel-based algorithms can approximate any continuous target function
arbitrarily well. For RKHSs, Sriperumbudur et al. (2011) showed the equivalence
of the injectivity of the existing KME in RKHSs and the universality of RKHSs.
We define universality of RKHMs as follows.
###### Definition 5.14 (Universality)
An RKHM is said to be universal if it is dense in
${C}_{0}(\mathcal{X},\mathcal{A})$.
We show that the above equivalence also holds for RKHMs in the case of
$\mathcal{A}=\mathbb{C}^{m\times m}$.
###### Proposition 5.15 (Equivalence of the injectivity and universality for
$\mathcal{A}=\mathbb{C}^{m\times m}$)
Let $\mathcal{A}=\mathbb{C}^{m\times m}$. Then,
$\Phi:\mathcal{D}(\mathcal{X},\mathcal{A})\to\mathcal{M}_{k}$ is injective if
and only if $\mathcal{M}_{k}$ is dense in ${C}_{0}(\mathcal{X},\mathcal{A})$.
By Proposition 5.15, if $k$ satisfies the condition in Proposition 5.11 or
5.12, then $\mathcal{M}_{k}$ is universal.
For the case where $\mathcal{A}$ is infinite dimensional, the universality of
$\mathcal{M}_{k}$ in ${C}_{0}(\mathcal{X},\mathcal{A})$ is a sufficient
condition for the injectivity of the proposed KME.
###### Theorem 5.16 (Connection between the injectivity and universality for
general $\mathcal{A}$)
If $\mathcal{M}_{k}$ is dense in ${C}_{0}(\mathcal{X},\mathcal{A})$, then
$\Phi:\mathcal{D}(\mathcal{X},\mathcal{A})\to\mathcal{M}_{k}$ is injective.
However, the equivalence of the injectivity and universality, and the
injectivity for translation invariant kernels and radial kernels, remain open
problems. This is because their proofs strongly depend on the Hahn–Banach
theorem and the Riesz–Markov representation theorem, and generalizations of
these theorems to $\mathcal{A}$-valued functions and measures are challenging
due to difficulties peculiar to infinite dimensional spaces.
Further details of the proofs of propositions in this section are given in
Appendix C.
## 6 Applications
We apply the framework of RKHM described in Sections 4 and 5 to problems in
data analysis. We propose kernel PCA in RKHMs in Subsection 6.1, time-series
data analysis in RKHMs in Subsection 6.2, and analysis of interaction effects
in finite or infinite dimensional data with the proposed KME in RKHMs in
Subsection 6.3. Then, we discuss further applications in Subsection 6.4.
### 6.1 PCA in RKHMs
Principal component analysis (PCA) is a fundamental tool for describing data
in a low dimensional space. Its implementation in RKHSs has also been proposed
(cf. Schölkopf and Smola (2001)). It enables us to deal with the
nonlinearity of data by virtue of the high expressive power of RKHSs. Here,
we generalize the PCA in RKHSs to capture more information in data, such as
multivariate data and functional data, by using the framework of RKHM.
##### Applying RKHM to PCA
In the existing framework of PCA in Hilbert spaces, the following
reconstruction error is minimized with respect to vectors
$p_{1},\ldots,p_{r}$:
$\sum_{i=1}^{n}\bigg{\|}x_{i}-\sum_{j=1}^{r}p_{j}\left\langle
p_{j},x_{i}\right\rangle\bigg{\|}^{2},$ (7)
where $x_{1},\ldots,x_{n}$ are given samples in a Hilbert space and
$p_{1},\ldots,p_{r}$ are called principal axes. Here, the complex-valued inner
product $\left\langle p_{j},x_{i}\right\rangle$ is the weight with respect to
the principal axis $p_{j}$ for representing the sample $x_{i}$. PCA for
functional data (functional PCA) has also been investigated (Ramsay and Silverman,
2005). For example, in standard functional PCA settings, we set the Hilbert
space as $L^{2}(\Omega)$ for a $\sigma$-finite measure space $\Omega$.
However, if samples $x_{1},\ldots,x_{n}$ are finite dimensional vectors or
functions, Eq. (7) fails to describe their element-wise or continuous
dependencies on the principal axes. For $d$-dimensional (finite dimensional)
vectors, we can just split $x_{i}=[x_{i,1},\ldots,x_{i,d}]$ into $d$ vectors
$[x_{i,1},0,\ldots,0],\ldots,[0,\ldots,0,x_{i,d}]$. Then, we can understand
which element is dominant for representing $x_{i}$ by using the principal axis
$p_{j}$. On the other hand, for functional data, the situation is completely
different. For example, assume samples are in $L^{2}(\Omega)$. Since delta
functions are not contained in $L^{2}(\Omega)$, we cannot split a sample
$x_{i}=x_{i}(t)$ into discrete functions. In this case, how can we understand
the continuous dependencies on the principal axes with respect to the variable
$t\in\Omega$? One possible way to answer this question is to employ Hilbert
$C^{*}$-modules instead of Hilbert spaces. We consider the same type of
reconstruction error as Eq. (7) in Hilbert $C^{*}$-modules. In this case, the
inner product $\left\langle p_{j},x_{i}\right\rangle_{\mathcal{W}}$ is
$C^{*}$-algebra-valued, which allows us to provide more information than the
complex-valued one. If we set the $C^{*}$-algebra as the function space on
$\Omega$ such as $L^{\infty}(\Omega)$ and define a $C^{*}$-algebra-valued
inner product which depends on $t\in\Omega$, then the weight $\left\langle
p_{j},x_{i}\right\rangle_{\mathcal{W}}$ depends on $t$. As a result, we can
extract continuous dependencies of samples on the principal axes. More
generally, PCA is often considered in an RKHS $\mathcal{H}_{\tilde{k}}$. In
this case, $x_{i}$ in Eq. (7) is replaced with $\tilde{\phi}(x_{i})$, where
$\tilde{\phi}$ is the feature map, and the inner product and norm are replaced
with those in the RKHS. We can extract continuous dependencies of samples on
the principal axes by generalizing RKHS to RKHM.
#### 6.1.1 Generalization of the PCA in RKHSs to RKHMs
Let $x_{1},\ldots,x_{n}\in\mathcal{X}$ be given samples. Let
$k:\mathcal{X}\times\mathcal{X}\to\mathcal{A}$ be an $\mathcal{A}$-valued
positive definite kernel on $\mathcal{X}$ and let $\mathcal{M}_{k}$ be the
RKHM associated with $k$. We explore a useful set of axes $p_{1},\ldots,p_{r}$
in $\mathcal{M}_{k}$, which are referred to as principal axes, to describe the
feature of given samples $x_{1},\ldots,x_{n}$. The corresponding components
$p_{j}\left\langle p_{j},\phi(x_{i})\right\rangle_{\mathcal{M}_{k}}$ are
referred to as principal components. We emphasize that our proposed PCA in RKHMs
provides weights of principal components valued in $\mathcal{A}$, not in
the complex numbers. This is a remarkable difference between our method and
existing PCAs. When samples have additional structure, such as relations among
variables or functional structure, $\mathcal{A}$-valued weights provide richer
information than complex-valued ones. For example, if $\mathcal{X}$ is the space of
functions of several variables and if we set $\mathcal{A}$ as
$L^{\infty}([0,1])$, then we can reduce multi-variable functional data to
functions in $L^{\infty}([0,1])$, i.e., functions of a single variable (as
illustrated in Section 6.1.4).
To obtain $\mathcal{A}$-valued weights of principal components, we consider
the minimization of the following reconstruction error (see Definition 2.18
for the definition of ONS):
$\inf_{\\{p_{j}\\}_{j=1}^{r}\subseteq\mathcal{M}_{k}\mbox{\footnotesize:
ONS}}\;\sum_{i=1}^{n}\bigg{|}\phi(x_{i})-\sum_{j=1}^{r}p_{j}\left\langle
p_{j},\phi(x_{i})\right\rangle_{\mathcal{M}_{k}}\bigg{|}_{\mathcal{M}_{k}}^{2},$
(8)
where the infimum is taken with respect to a (pre) order in $\mathcal{A}$ (see
Definition 2.9). Since the identity
$|\phi(x_{i})-\sum_{j=1}^{r}p_{j}\left\langle
p_{j},\phi(x_{i})\right\rangle_{\mathcal{M}_{k}}|_{\mathcal{M}_{k}}^{2}=k(x_{i},x_{i})-\sum_{j=1}^{r}\left\langle\phi(x_{i}),p_{j}\right\rangle_{\mathcal{M}_{k}}\left\langle
p_{j},\phi(x_{i})\right\rangle_{\mathcal{M}_{k}}$ holds and
$\left\langle\phi(x_{i}),p_{j}\right\rangle_{\mathcal{M}_{k}}$ is represented
as $p_{j}(x_{i})$ by the reproducing property, the problem (8) can be reduced
to the minimization problem
$\inf_{\\{p_{j}\\}_{j=1}^{r}\subseteq\mathcal{M}_{k}\mbox{\footnotesize:
ONS}}\;\sum_{i=1}^{n}\sum_{j=1}^{r}-p_{j}(x_{i})p_{j}(x_{i})^{*}.$ (9)
In the case of RKHS, i.e., $\mathcal{A}=\mathbb{C}$, the solution of the
problem (9) is obtained by computing eigenvalues and eigenvectors of Gram
matrices (see, for example, Schölkopf and Smola (2001)). Unfortunately, we
cannot extend their procedure to RKHM straightforwardly. Therefore, we develop
two methods to obtain approximate solutions of the problem (9): by gradient
descents on Hilbert $C^{*}$-modules, and by the minimization of the trace of
the $\mathcal{A}$-valued objective function.
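As a concrete point of reference, the RKHS case $\mathcal{A}=\mathbb{C}$ of problem (9) is solved by diagonalizing the Gram matrix. The following minimal sketch is our own illustration (the Gaussian kernel and the sample points are placeholders, not data from this paper):

```python
import numpy as np

# Kernel PCA in an RKHS (the case A = C of problem (9)): the principal axes
# p_j = sum_i phi(x_i) c_{j,i} come from the top eigenvectors of the Gram matrix.
def kernel_pca(X, kern, r):
    G = kern(X[:, None], X[None, :])       # Gram matrix G[i, j] = k(x_i, x_j)
    evals, evecs = np.linalg.eigh(G)       # eigenvalues in ascending order
    evals, evecs = evals[::-1][:r], evecs[:, ::-1][:, :r]
    C = evecs / np.sqrt(evals)             # coefficients c_j, normalized so <p_j, p_j> = 1
    return G @ C, evals                    # components <p_j, phi(x_i)> and top eigenvalues
```

It is exactly this eigendecomposition step that has no direct analogue in an RKHM, which is what motivates the two approximate methods developed below.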
#### 6.1.2 Gradient descent on Hilbert $C^{*}$-modules
We propose a gradient descent method on Hilbert $\mathcal{A}$-module for the
case where $\mathcal{A}$ is commutative. An important example of a commutative
von Neumann algebra is $L^{\infty}([0,1])$. The gradient descent for a real-
valued function on a Hilbert space has been proposed (Smyrlis and Zisis,
2004). However, in our situation, the objective function of the problem (9) is
an $\mathcal{A}$-valued function in a Hilbert $C^{*}$-module
$\mathcal{A}^{n}$. Thus, the existing gradient descent is not applicable to
our situation. Therefore, we generalize the existing gradient descent
algorithm to $\mathcal{A}$-valued functions on Hilbert $C^{*}$-modules.
Let $\mathcal{A}$ be a commutative von Neumann algebra. Assume the positive
definite kernel $k$ takes its values in
$\mathcal{A}_{r}:=\\{c-d\in\mathcal{A}\mid\ c,d\in\mathcal{A}_{+}\\}$. For
example, for $\mathcal{A}=L^{\infty}([0,1])$, $\mathcal{A}_{r}$ is the space
of real-valued $L^{\infty}$ functions on $[0,1]$. Based on the representer
theorem (Theorem 4.8), we find a solution of the problem (9) which is
represented as $p_{j}=\sum_{i=1}^{n}\phi(x_{i})c_{j,i}$ for some
$c_{j,i}\in\mathcal{A}$. Moreover, since $\mathcal{A}$ is commutative,
$p_{j}(x_{i})p_{j}(x_{i})^{*}$ is equal to $p_{j}(x_{i})^{*}p_{j}(x_{i})$.
Therefore, the problem (8) on $\mathcal{M}_{k}$ is equivalent to the following
problem on the Hilbert $\mathcal{A}$-module $\mathcal{A}^{n}$ (see Example
2.15 about $\mathcal{A}^{n}$):
$\inf_{\mathbf{c}_{j}\in\mathcal{A}^{n},\
\\{\sqrt{\mathbf{G}}\mathbf{c}_{j}\\}_{j=1}^{r}\mbox{\footnotesize:
ONS}}-\sum_{j=1}^{r}\mathbf{c}_{j}^{*}\mathbf{G}^{2}\mathbf{c}_{j},$ (10)
where $\mathbf{G}$ is the $\mathcal{A}$-valued Gram matrix defined as
$\mathbf{G}_{i,j}=k(x_{i},x_{j})$. For simplicity, we assume $r=1$, i.e., the
number of principal axes is $1$. We rearrange the problem (10) to the
following problem by adding a penalty term:
$\inf_{\mathbf{c}\in\mathcal{A}^{n}}(-\mathbf{c}^{*}\mathbf{G}^{2}\mathbf{c}+\lambda|\mathbf{c}^{*}\mathbf{Gc}-1_{\mathcal{A}}|_{\mathcal{A}}^{2}),$
(11)
where $\lambda$ is a real positive weight for the penalty term. For $r>1$, let
$\mathbf{c}_{1}$ be a solution of the problem (10). Then, we solve the same
problem in the orthogonal complement of the module spanned by
$\\{\mathbf{c}_{1}\\}$ and set the solution of this problem as
$\mathbf{c}_{2}$. Then, we solve the same problem in the orthogonal complement
of the module spanned by $\\{\mathbf{c}_{1},\mathbf{c}_{2}\\}$ and repeat this
procedure to obtain solutions $\mathbf{c}_{1},\ldots\mathbf{c}_{r}$. The
problem (11) is the minimization problem of an $\mathcal{A}$-valued function
defined on the Hilbert $\mathcal{A}$-module $\mathcal{A}^{n}$. We search a
solution of the problem (11) along the steepest descent directions. To
calculate the steepest descent directions, we introduce a derivative
$Df_{\mathbf{c}}$ of an $\mathcal{A}$-valued function $f$ on a Hilbert
$C^{*}$-module at $\mathbf{c}\in\mathcal{M}$. It is defined as the derivative
on Banach spaces (c.f. Blanchard and Brüning (2015)). The definition of the
derivative is included in Appendix D. The following gives the derivative of
the objective function in problem (11).
###### Proposition 6.1 (Derivative of the objective function)
Let $f:\mathcal{A}^{n}\to\mathcal{A}$ be defined as
$f(\mathbf{c})=-\mathbf{c}^{*}\mathbf{G}^{2}\mathbf{c}+\lambda|\mathbf{c}^{*}\mathbf{G}\mathbf{c}-1_{\mathcal{A}}|_{\mathcal{A}}^{2}.$
(12)
Then, $f$ is infinitely differentiable and the first derivative of $f$ is
calculated as
$Df_{\mathbf{c}}(u)=-2\mathbf{c}^{*}\mathbf{G}^{2}u-4\lambda\mathbf{c}^{*}\mathbf{G}u+4\lambda\mathbf{c}^{*}\mathbf{G}\mathbf{c}\mathbf{c}^{*}\mathbf{G}u.$
Moreover, for each $\mathbf{c}\in\mathcal{A}^{n}$, there exists a unique
$\mathbf{d}\in\mathcal{A}^{n}$ such that
$\left\langle\mathbf{d},u\right\rangle_{\mathcal{A}^{n}}=Df_{\mathbf{c}}(u)$
for any $u\in\mathcal{A}^{n}$. The vector $\mathbf{d}$ is calculated as
$\mathbf{d}=-2\mathbf{G}^{2}\mathbf{c}-4\lambda\mathbf{G}\mathbf{c}+4\lambda\mathbf{G}\mathbf{c}\mathbf{c}^{*}\mathbf{G}\mathbf{c}.$
(13)
Proof The derivative of $f$ is calculated by the definition and the
assumption that $\mathcal{A}$ is commutative. Since $Df_{\mathbf{c}}$ is a
bounded $\mathcal{A}$-linear operator, by the Riesz representation theorem
(Proposition 4.2), there exists a unique $\mathbf{d}\in\mathcal{A}^{n}$ such
that
$\left\langle\mathbf{d},u\right\rangle_{\mathcal{A}^{n}}=Df_{\mathbf{c}}(u)$.
###### Definition 6.2 (Gradient of $\mathcal{A}$-valued functions on Hilbert
$C^{*}$-modules)
Let $f:\mathcal{M}\to\mathcal{A}$ be a differentiable function. Assume for
each $\mathbf{c}\in\mathcal{M}$, there exists a unique
$\mathbf{d}\in\mathcal{M}$ such that
$\left\langle\mathbf{d},u\right\rangle_{\mathcal{A}^{n}}=Df_{\mathbf{c}}(u)$
for any $u\in\mathcal{M}$. In this case, we denote $\mathbf{d}$ by $\nabla
f_{\mathbf{c}}$ and call it the gradient of $f$ at $\mathbf{c}$.
We now develop an $\mathcal{A}$-valued gradient descent scheme.
###### Theorem 6.3
Assume $f:\mathcal{M}\to\mathcal{A}$ is differentiable. Moreover, assume there
exists $\nabla f_{\mathbf{c}}$ for any $\mathbf{c}\in\mathcal{M}$. Let
$\eta_{t}>0$. Let $\mathbf{c}_{0}\in\mathcal{M}$ and
$\mathbf{c}_{t+1}=\mathbf{c}_{t}-\eta_{t}\nabla f_{\mathbf{c}_{t}}$ (14)
for $t=0,1,\ldots$. Then, we have
$f(\mathbf{c}_{t+1})=f(\mathbf{c}_{t})-\eta_{t}|\nabla
f_{\mathbf{c}_{t}}|^{2}_{\mathcal{M}}+S(\mathbf{c}_{t},\eta_{t}),$ (15)
where $S(\mathbf{c},\eta)$ satisfies $\lim_{\eta\to
0}\|S(\mathbf{c},\eta)\|_{\mathcal{A}}/\eta=0$.
The statement is derived by the definition of the derivative (Definition D.1).
The following examples show the scheme (14) is valid to solve the problem
(11).
###### Example 6.4
Let $\mathcal{A}=L^{\infty}([0,1])$, let $a_{t}=|\nabla
f_{\mathbf{c}_{t}}|^{2}_{\mathcal{A}^{n}}\in\mathcal{A}$ and let
$b_{t,\eta}=S(\mathbf{c}_{t},\eta)\in\mathcal{A}$. If
$a_{t}\geq_{\mathcal{A}}\delta 1_{\mathcal{A}}$ for some positive real value
$\delta$, then the function $a_{t}$ on $[0,1]$ satisfies $a_{t}(s)>0$ for
almost everywhere $s\in[0,1]$. On the other hand, since $b_{t,\eta}$ satisfies
$\lim_{\eta\to 0}\|b_{t,\eta}\|_{\mathcal{A}}/\eta^{2}=0$, there exists
sufficiently small positive real value $\eta_{t,0}$ such that for almost
everywhere $s\in[0,1]$,
$b_{t,\eta_{t,0}}(s)\leq\|b_{t,\eta_{t,0}}\|_{\mathcal{A}}\leq\eta^{2}_{t,0}\delta\leq\eta_{t,0}(1-\xi_{1})\delta$
hold for some positive real value $\xi_{1}$. As a result, $-\eta_{t,0}|\nabla
f_{\mathbf{c}_{t}}|^{2}_{\mathcal{A}^{n}}+S(\mathbf{c}_{t},\eta_{t,0})\leq_{\mathcal{A}}-\eta_{t,0}\xi_{1}|\nabla
f_{\mathbf{c}_{t}}|^{2}_{\mathcal{A}^{n}}$ holds and by the Eq. (15), we have
$f(\mathbf{c}_{t+1})<_{\mathcal{A}}f(\mathbf{c}_{t})$ (16)
for $t=0,1,\ldots$. As we mentioned in Example 2.8, the inequality (16) means
the function $f(\mathbf{c}_{t+1})\in L^{\infty}([0,1])$ is smaller than the
function $f(\mathbf{c}_{t})\in L^{\infty}([0,1])$ at almost every points on
$[0,1]$, i.e.,
$f(\mathbf{c}_{t+1})(s)<f(\mathbf{c}_{t})(s)$
for almost every $s\in[0,1]$.
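The monotone decrease in Example 6.4 can be checked numerically. The sketch below is our own illustration: it approximates $\mathcal{A}=L^{\infty}([0,1])$ by functions on a finite grid, so that every operation in $\mathcal{A}$ becomes an elementwise array operation, and it runs the scheme (14) with the gradient of Eq. (13); the random Gram matrix, grid size, and step size are placeholder choices for this sketch, not the settings of the experiments in Subsection 6.1.4:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 16              # n samples; m grid points discretizing [0, 1]
lam, eta = 0.1, 2e-3      # penalty weight and step size (chosen for this random G)

# A-valued Gram matrix: G[i, j] is a grid function, so G has shape (n, n, m).
# Multiplication in the commutative algebra A is the elementwise product.
B = rng.standard_normal((n, n, m))
G = np.einsum('iks,jks->ijs', B, B) / n + 0.1 * np.eye(n)[:, :, None]

mv = lambda M, c: np.einsum('ijs,js->is', M, c)   # A-valued matrix-vector product

def f(c):                                  # Eq. (12), an element of A (a grid function)
    Gc = mv(G, c)
    cG2c = np.einsum('is,is->s', Gc, Gc)   # c* G^2 c  (G is symmetric per grid point)
    cGc = np.einsum('is,is->s', c, Gc)     # c* G c
    return -cG2c + lam * (cGc - 1.0) ** 2

def grad(c):                               # the gradient d of Eq. (13)
    Gc = mv(G, c)
    cGc = np.einsum('is,is->s', c, Gc)
    return -2 * mv(G, Gc) - 4 * lam * Gc + 4 * lam * Gc * cGc

c = np.ones((n, m))                        # c_0 = [1_A, ..., 1_A]
history = [f(c)]
for t in range(500):
    c = c - eta * grad(c)                  # the scheme (14)
    history.append(f(c))
# the values f(c_t) decrease pointwise on the grid, as in the inequality (16)
```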
###### Example 6.5
Assume $\mathcal{A}$ is a finite dimensional space. If $|\nabla
f_{\mathbf{c}_{t}}|_{\mathcal{A}}^{2}\geq_{\mathcal{A}}\delta 1_{\mathcal{A}}$
for some positive real value $\delta$, the inequality
$f(\mathbf{c}_{t+1})\leq_{\mathcal{A}}f(\mathbf{c}_{t})-\eta_{t}\xi_{1}|\nabla
f_{\mathbf{c}_{t}}|^{2}_{\mathcal{A}^{n}}$ holds for $t=0,1,\ldots$ and some
$\eta_{t}$ and $\xi_{1}$ in the same manner as Example 6.4. Moreover, the
function $f$ defined in Eq. (12) is bounded below and $\nabla
f_{\mathbf{c}_{t}}$ is Lipschitz continuous on the set
$\\{\mathbf{c}\in\mathcal{A}^{n}\mid\
f(\mathbf{c})\leq_{\mathcal{A}}f(\mathbf{c}_{0})\\}$. In this case, if there
exists a positive real value $\xi_{2}$ such that $\|\nabla
f_{\mathbf{c}_{t+1}}-\nabla
f_{\mathbf{c}_{t}}\|_{\mathcal{A}^{n}}\geq\xi_{2}\|\nabla
f_{\mathbf{c}_{t}}\|_{\mathcal{A}^{n}}$, then we have
$\xi_{2}\|\nabla f_{\mathbf{c}_{t}}\|_{\mathcal{A}^{n}}\leq
L\|\mathbf{c}_{t+1}-\mathbf{c}_{t}\|_{\mathcal{A}^{n}}\leq L\eta_{t}\|\nabla
f_{\mathbf{c}_{t}}\|_{\mathcal{A}^{n}},$
where $L$ is a Lipschitz constant of $\nabla f_{\mathbf{c}_{t}}$. As a
result, we have
$f(\mathbf{c}_{t+1})\leq_{\mathcal{A}}f(\mathbf{c}_{t})-\eta_{t}\xi_{1}|\nabla
f_{\mathbf{c}_{t}}|^{2}_{\mathcal{A}^{n}}\leq_{\mathcal{A}}f(\mathbf{c}_{t})-\frac{\xi_{1}\xi_{2}}{L}|\nabla
f_{\mathbf{c}_{t}}|^{2}_{\mathcal{A}^{n}},$
which implies $\sum_{t=1}^{T}|\nabla
f_{\mathbf{c}_{t}}|^{2}_{\mathcal{A}^{n}}\leq_{\mathcal{A}}L/(\xi_{1}\xi_{2})(f(\mathbf{c}_{1})-f(\mathbf{c}_{T+1}))$.
Since $f$ is bounded below, the sum $\sum_{t=1}^{\infty}|\nabla
f_{\mathbf{c}_{t}}|^{2}_{\mathcal{A}^{n}}$ converges. Therefore, $|\nabla
f_{\mathbf{c}_{t}}|^{2}_{\mathcal{A}^{n}}\to 0$ as $t\to\infty$, i.e., the
gradient $\nabla f_{\mathbf{c}_{t}}$ in Eq. (14) converges to $0$.
###### Remark 6.6
It is possible to generalize the above method to the case where the objective
function $f$ has the form $f(\mathbf{c})=\mathbf{c}^{*}\mathbf{G}\mathbf{c}$
for $\mathbf{G}\in\mathcal{A}^{n\times n}$ and $\mathcal{A}$ is
noncommutative. In this case, the derivative $Df_{\mathbf{c}}$ is calculated
as
$Df_{\mathbf{c}}(u)=u^{*}\mathbf{G}\mathbf{c}+\mathbf{c}^{*}\mathbf{G}u.$
Therefore, defining the gradient $\nabla f_{\mathbf{c}}$ as $\nabla
f_{\mathbf{c}}=\mathbf{G}\mathbf{c}$ results in $Df_{\mathbf{c}}(-\eta\nabla
f_{\mathbf{c}})=-2\eta\mathbf{c}^{*}\mathbf{G}^{2}\mathbf{c}\leq_{\mathcal{A}}0$
for a real positive value $\eta$, which allows us to derive the same result as
Theorem 6.3.
###### Remark 6.7
The computational complexity of the PCA in RKHMs is higher than the standard
PCA in RKHSs. Indeed, in the case of RKHSs, the minimization problem is
reduced to an eigenvalue problem of the Gram matrix with respect to given
samples. On the other hand, we solve the minimization problem (8) by the
gradient descent, and in each iteration step, we compute the gradient
$\mathbf{d}$ in Eq. (13). Since the elements of $\mathbf{G}$ and $\mathbf{c}$
are in $\mathcal{A}$, the computation of $\mathbf{d}$ involves the
multiplication in $\mathcal{A}$ such as multiplication of functions. Even
though we compute the multiplication in $\mathcal{A}$ approximately in
practice (see Subsection 6.1.4), its computational cost is much higher than
the multiplication in $\mathbb{C}$.
#### 6.1.3 Minimization of the trace
In the case of $\mathcal{A}=\mathcal{B}(\mathcal{W})$, $p_{j}(x_{i})$ and
$p_{j}(x_{i})^{*}$ in the problem (9) do not always commute. Therefore, we
restrict the solution to the form
$p_{j}=\sum_{i=1}^{n}\phi(x_{i})c_{i}$ where each $c_{i}$ is a
Hilbert–Schmidt operator and minimize the trace of the objective function of
the problem (9) as follows:
$\inf_{\mathbf{c}_{j}\in F,\
\\{\sqrt{\mathbf{G}}\mathbf{c}_{j}\\}_{j=1}^{r}\mbox{\footnotesize:
ONS}}-\operatorname{tr}\bigg{(}\sum_{j=1}^{r}\mathbf{c}_{j}^{*}\mathbf{G}^{2}\mathbf{c}_{j}\bigg{)},$
(17)
where $F=\\{\mathbf{c}=[c_{1},\ldots,c_{n}]\in\mathcal{A}^{n}\mid\ c_{i}\mbox{
is a Hilbert--Schmidt operator for }i=1,\ldots,n\\}$. If
$\mathcal{A}=\mathbb{C}^{m\times m}$, i.e., $\mathcal{W}$ is a finite
dimensional space, then we solve the problem (17) by regarding $\mathbf{G}$ as
an $mn\times mn$ matrix and computing the eigenvalues and eigenvectors of
$\mathbf{G}$.
###### Proposition 6.8
Let $\mathcal{A}=\mathbb{C}^{m\times m}$. Let
$\lambda_{1},\ldots,\lambda_{r}\in\mathbb{C}$ and
$\mathbf{v}_{1},\ldots,\mathbf{v}_{r}\in\mathbb{C}^{mn}$ be the largest $r$
eigenvalues and the corresponding orthonormal eigenvectors of
$\mathbf{G}\in\mathbb{C}^{mn\times mn}$. Then,
$\mathbf{c}_{j}=[\mathbf{v}_{j},0,\ldots,0]\lambda_{j}^{-1/2}$ is a solution
of the problem (17).
Proof Since the identity
$\sum_{j=1}^{r}\mathbf{c}_{j}^{*}\mathbf{G}^{2}\mathbf{c}_{j}=\sum_{j=1}^{r}(\sqrt{\mathbf{G}}\mathbf{c}_{j})^{*}\mathbf{G}(\sqrt{\mathbf{G}}\mathbf{c}_{j})$
holds, any solution $\mathbf{c}_{j}$ of the problem (17) satisfies
$\sqrt{\mathbf{G}}\mathbf{c}_{j}=\mathbf{v}_{j}u^{*}$ for a normalized vector
$u\in\mathbb{C}^{m}$. Thus, $p_{j}=\sum_{i=1}^{n}\phi(x_{i})c_{i,j}$, where
${c}_{i,j}$ is the $i$-th element of
$\lambda_{j}^{-1/2}[\mathbf{v}_{j},0,\ldots,0]$, is a solution of the problem.
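Proposition 6.8 can be verified numerically in a few lines. The sketch below is our own illustration (the random positive definite Gram matrix is a placeholder):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 4, 3, 2                     # n samples, A = C^{m x m}, r principal axes

# Regard the A-valued Gram matrix as an (mn) x (mn) Hermitian positive
# definite matrix and diagonalize it, as in Proposition 6.8.
B = rng.standard_normal((m * n, m * n))
G = B @ B.T
evals, evecs = np.linalg.eigh(G)      # eigenvalues in ascending order

objective = 0.0
for j in range(1, r + 1):
    lam_j, v_j = evals[-j], evecs[:, -j]
    c = np.zeros((m * n, m))          # c_j = [v_j, 0, ..., 0] lam_j^{-1/2}
    c[:, 0] = v_j / np.sqrt(lam_j)
    objective -= np.trace(c.T @ G @ G @ c)   # tr(c_j* G^2 c_j) = lam_j
# the objective of (17) attains -(sum of the r largest eigenvalues of G)
```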
If $\mathcal{W}$ is an infinite dimensional space, we rewrite the problem (17)
with the Hilbert–Schmidt norm as follows:
$\inf_{\mathbf{c_{j}}\in F,\
\\{\sqrt{\mathbf{G}}\mathbf{c}_{j}\\}_{j=1}^{r}\mbox{\footnotesize:
ONS}}-\sum_{j=1}^{r}\|\mathbf{G}\mathbf{c}_{j}\|^{2}_{F},$ (18)
where $\|\mathbf{c}\|_{F}^{2}=\sum_{i=1}^{n}\|c_{i}\|_{\operatorname{HS}}^{2}$
and $\|\cdot\|_{\operatorname{HS}}$ is the Hilbert–Schmidt norm for
Hilbert–Schmidt operators. Similar to Eq. (11), we rearrange the problem (18)
to the following problem by adding a penalty term:
$\inf_{\mathbf{c}\in
F}-\|\mathbf{G}\mathbf{c}\|^{2}_{F}+\lambda\big{|}\big{\|}\sqrt{\mathbf{G}}\mathbf{c}\big{\|}^{2}_{F}-1\big{|},$
(19)
where $\lambda$ is a real positive weight for the penalty term. Then, we can
apply the standard gradient descent method in Hilbert spaces to the problem in
$F$ (Smyrlis and Zisis, 2004) since $F$ is the Hilbert space equipped with the
Hilbert–Schmidt inner product. Similar to the case of Eq. (11), for $r>1$, let
$\mathbf{c}_{1}$ be a solution of the problem (19). Then, we solve the same
problem in the orthogonal complement of the space spanned by
$\\{\mathbf{c}_{1}\\}$ and set the solution of this problem as
$\mathbf{c}_{2}$. Then, we solve the same problem in the orthogonal complement
of the space spanned by $\\{\mathbf{c}_{1},\mathbf{c}_{2}\\}$ and repeat this
procedure to obtain solutions $\mathbf{c}_{1},\ldots\mathbf{c}_{r}$.
#### 6.1.4 Numerical examples
##### Experiments with synthetic data
We applied the above PCA with $\mathcal{A}=L^{\infty}([0,1])$ to functional
data. We randomly generated three kinds of sample-sets from the following
functions of two variables on $[0,1]\times[0,1]$:
$\displaystyle y_{1}(s,t)=e^{10(s-t)},\quad y_{2}(s,t)=10st,\quad
y_{3}(s,t)=\cos(10(s-t)).$
Each sample-set $i$ is composed of 20 samples with random noise. We denote
these samples by $x_{1},\ldots,x_{60}$. The noise was randomly drawn from the
Gaussian distribution with mean $0$ and standard deviation $0.3$. Since
$L^{\infty}([0,1])$ is commutative, we applied the gradient descent proposed
in Subsection 6.1.2 to solve the problem (8). The parameters were set as
$\lambda=0.1$ and $\eta_{t}=0.01$. We set the $L^{\infty}([0,1])$-valued
positive definite kernel $k$ as
$(k(x_{i},x_{j}))(t)=\int_{0}^{1}\int_{0}^{1}(t-x_{i}(s_{1},s_{2}))(t-x_{j}(s_{1},s_{2}))ds_{1}ds_{2}$
(see Example 2.22.1). Since $(k(x_{i},x_{j}))(t)$ is a polynomial of $t$, all
the computations on $\mathcal{A}$ result in polynomials. Thus, the results are
obtained by keeping coefficients of the polynomials. Moreover, we set
$\mathbf{c}_{0}$ as the constant function $[1,\ldots,1]^{T}\in\mathcal{A}^{n}$
and computed $\mathbf{c}_{1},\mathbf{c}_{2},\ldots$ according to Eq. (14). For
comparison, we also vectorized the samples by discretizing $y_{i}$ at
$121=11\times 11$ points composed of 11 equally spaced points in $[0,1]$
($0,0.1,\ldots,1$) and applied the standard kernel PCA in the RKHS associated
with the Laplacian kernel on $\mathbb{R}^{121}$. The results are illustrated
in Figure 2. Since the samples are contaminated by the noise, the PCA in the
RKHS cannot separate three sample-sets. On the other hand, the
$L^{\infty}([0,1])$-valued weights of principal components obtained by the
proposed PCA in the RKHM reduce the information of the samples as functions.
As a result, it clearly separates three sample-sets.
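Since $(k(x_{i},x_{j}))(t)$ is a quadratic polynomial in $t$, each Gram entry is represented by three coefficients, and the computations on $\mathcal{A}$ reduce to polynomial arithmetic. A minimal sketch of computing one entry (our own illustration; the grid resolution and the noise-free samples are placeholders, and the double integral is approximated by a grid mean):

```python
import numpy as np

# (k(xi, xj))(t) = ∫∫ (t - xi(s1, s2))(t - xj(s1, s2)) ds1 ds2
#               = t^2 - t ∫∫(xi + xj) + ∫∫ xi·xj,  a degree-2 polynomial in t.
s = np.linspace(0, 1, 41)
S1, S2 = np.meshgrid(s, s)
x2, x3 = 10 * S1 * S2, np.cos(10 * (S1 - S2))   # samples from y_2 and y_3 (noise-free)

def kernel_poly(xi, xj):
    # return the coefficients [t^0, t^1, t^2]; integrals are approximated by grid means
    return np.array([np.mean(xi * xj), -np.mean(xi + xj), 1.0])
```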
Figure 3 shows the convergence of the proposed gradient descent. In this
example, we only compute the first principal components, hence $r$ is set as
$1$. For the objective function $f$ defined in Eq. (12), the
functions $f(\mathbf{c}_{t})\in L^{\infty}([0,1])$ for $t=0,\ldots,9$ are
illustrated. We can see $f(\mathbf{c}_{t+1})<f(\mathbf{c}_{t})$ and
$f(\mathbf{c}_{t})$ gradually approaches a certain function as $t$ grows.
Figure 2: The $L^{\infty}([0,1])$-valued first principal components obtained
by the proposed PCA in an RKHM (left) and the real-valued first and second
principal components obtained by the standard PCA in an RKHS (right) Figure 3:
The convergence of the function $f(\mathbf{c}_{t})$ along $t$.
##### Experiments with real-world data
To show the proposed PCA with RKHMs extracts the continuous dependencies of
samples on the principal axes, as claimed in Section 3, we conducted
experiments with climate data in Japan (available at
https://www.data.jma.go.jp/gmd/risk/obsdl/). The data is composed of the
maximum and minimum daily temperatures at 47 prefectures in Japan in 2020. The
original data is illustrated in Figure 4. The red line represents the
temperature at Hokkaido, the northernmost prefecture in Japan and the blue
line represents that at Okinawa, the southernmost prefecture in Japan. We
respectively fit the maximum and minimum temperatures at each location to the
Fourier series $a_{0}+\sum_{i=1}^{10}(a_{i}\cos(it)+b_{i}\sin(it))$. The
fitted functions $x_{1},\ldots,x_{47}\in C([0,366],\mathbb{R}^{2})$ are
illustrated in Figure 5. Then, we applied the PCA with the RKHM associated
with the $L^{\infty}([0,366])$-valued positive definite kernel
$(k(x,y))(t)=e^{-\|x(t)-y(t)\|_{2}^{2}}$. Let
$\mathcal{F}=\\{a_{0}+\sum_{i=1}^{10}(a_{i}\cos(it)+b_{i}\sin(it))\mid\
a_{i},b_{i}\in\mathbb{R}\\}\subseteq L^{2}([0,366])$. We project $k(x,y)$ onto
$\mathcal{F}$. Then, for $c,d\in\mathcal{F}$, $c+d\in\mathcal{F}$ is
satisfied, but $cd\in\mathcal{F}$ is not always satisfied. Thus, we
approximate $cd$ with $a_{0}+\sum_{i=1}^{N}(a_{i}\cos(it)+b_{i}\sin(it))$ for
$N\leq 10$ to restrict all the computations in $\mathcal{F}$ in practice.
Here, to remove high frequency components corresponding to noise and extract
essential information, we set $N=3$. Figure 6(a) shows the computed
$L^{\infty}([0,366])$-valued weights of the first principal axis in the RKHM,
which continuously depends on time. The red and blue lines correspond to
Hokkaido and Okinawa, respectively. We see these lines are well-separated from
other lines corresponding to other prefectures. For comparison, we also
applied the PCA in RKHSs to discrete-time data. First, we applied the standard
kernel PCA in an RKHS to the original temperatures of each day separately and
obtained real-valued weights of the first principal components. Here, we used
the complex-valued Gaussian kernel $\tilde{k}(x,y)=e^{-\|x-y\|_{2}^{2}}$.
Then, we connected the results and obtained Figure 6(b). Since the original
data is not smooth, the PCA amplifies the non-smoothness, which provides
meaningless results. Next, we applied the standard kernel PCA in the RKHS to
the values of the fitted Fourier series at each day separately and obtained
real-valued weights of the first principal components. Then, similar to the
case of Figure 6(b), we connected the results and obtained Figure 6(c). In
this case, the extracted features somewhat capture the continuous behaviors of
the temperatures. However, the PCA in the RKHS amplifies high frequency
components, which correspond to noise. Therefore, the result fails to separate
the temperatures of Hokkaido and Okinawa, whose behaviors are significantly
different as illustrated in Figure 4. On the other hand, the PCA in the RKHM
captures the feature of each sample as a function and removes nonessential
high frequency components, which results in separating functional data
properly.
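The truncated multiplication used above can be sketched as follows. This is our own illustration: for simplicity the period is $2\pi$ rather than the interval $[0,366]$, the grid size is a placeholder, and the projection onto frequencies $0,\ldots,N$ is done with an FFT of the pointwise product:

```python
import numpy as np

M = 256                                   # sample points per period
t = np.linspace(0, 2 * np.pi, M, endpoint=False)

def evaluate(a0, a, b):
    # a0 + sum_i (a_i cos(i t) + b_i sin(i t)) on the grid
    i = np.arange(1, len(a) + 1)[:, None]
    return a0 + a @ np.cos(i * t) + b @ np.sin(i * t)

def truncated_product(f, g, N=3):
    # multiply two truncated Fourier series pointwise, then project the
    # product back onto frequencies 0..N so all computations stay in F
    F = np.fft.rfft(evaluate(*f) * evaluate(*g)) / M
    return F[0].real, 2 * F[1:N + 1].real, -2 * F[1:N + 1].imag
```

For example, the product of $\cos t$ with itself is returned as the truncated series $1/2+\frac{1}{2}\cos(2t)$.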
Figure 4: Original climate data at 47 locations
Figure 5: Fitted Fourier series
(a) PCA with RKHMs for the fitted Fourier series
(b) PCA with RKHSs for the original data
(c) PCA with RKHSs for the fitted Fourier series
Figure 6: Principal components of PCA for climate data
### 6.2 Time-series data analysis
The problem of analyzing dynamical systems from data by using Perron–Frobenius
operators and their adjoints (called Koopman operators), which are linear
operators expressing the time evolution of dynamical systems, has recently
attracted attention in various fields (Budišić et al., 2012; Črnjarić-Žic et
al., 2020; Takeishi et al., 2017a, b; Lusch et al., 2018). Several
methods for this problem using RKHSs have also been proposed (Kawahara, 2016;
Klus et al., 2020; Ishikawa et al., 2018; Hashimoto et al., 2020; Fujii &
Kawahara, 2019). In these methods, sequential data is supposed to be generated
from dynamical systems and is analyzed through Perron–Frobenius operators in
RKHSs. To analyze the time evolution of functional data, we generalize
Perron–Frobenius operators defined in RKHSs to those in RKHMs by using an
operator-valued positive definite kernel describing similarities between pairs
of functions.
##### Defining Perron–Frobenius operators in RKHMs
We consider the RKHM and vvRKHS associated with an operator-valued positive
definite kernel. VvRKHSs are associated with operator-valued kernels, and as
we stated in Lemma 4.10, those operator-valued kernels are special cases of
$C^{*}$-algebra-valued positive definite kernels. Here, we discuss the
advantage of RKHMs over vvRKHSs: RKHMs have enough representational power to
preserve the continuous behavior of infinite-dimensional operator-valued
kernels, whereas vvRKHSs do not. Let $\mathcal{W}$ be a Hilbert space, let
$k:\mathcal{X}\times\mathcal{X}\to\mathcal{B}(\mathcal{W})$ be an operator-
valued positive definite kernel on a data space $\mathcal{X}$, and let
$\mathcal{H}_{k}^{\operatorname{v}}$ be the vvRKHS associated with $k$. Since
the inner products in vvRKHSs have the form $\left\langle
w,k(x,y)h\right\rangle$ for $w,h\in\mathcal{W}$ and $x,y\in\mathcal{X}$, if
$\mathcal{W}$ is a $d$-dimensional space, putting $w$ as $d$ linearly
independent vectors in $\mathcal{W}$ reconstructs $k(x,y)$. However, if
$\mathcal{W}$ is an infinite dimensional space, we need infinitely many $w$ to
reconstruct $k(x,y)$, and we cannot recover the continuous behavior of the
operator $k(x,y)$ with finitely many $w$. For example, let
$\mathcal{X}=C(\Omega,\mathcal{Y})$ and $\mathcal{W}=L^{2}(\Omega)$ for a
compact measure space $\Omega$ and a topological space $\mathcal{Y}$. Let
$(k(x,y)w)(s)=\int_{t\in\Omega}\tilde{k}(x(s),y(t))w(t)dt$, where $\tilde{k}$
is a complex-valued positive definite kernel on $\mathcal{Y}$ (see Example
2.22.4). The operator $k(x,y)$ for functional data $x$ and $y$ describes the
continuous changes of similarities between function $x$ and $y$. However, the
estimation or prediction of the operator $k(x,y)$ in vvRKHSs fails to extract
the continuous behavior of the function $\tilde{k}(x(s),y(t))$ in the operator
$k(x,y)$ since vectors in vvRKHSs have the form $k(\cdot,y)w$ and we cannot
completely recover $k(x,y)$ with finitely many vectors in the vvRKHS. On the
other hand, RKHMs have enough information to recover $k(x,y)$ since it is just
the inner product between two vectors $\phi(x)$ and $\phi(y)$.
#### 6.2.1 Perron–Frobenius operator in RKHSs
We briefly review the definition of Perron–Frobenius operators in RKHSs and
the existing methods that analyze time-series data through these operators and
construct their estimations (Kawahara, 2016; Hashimoto et al., 2020). First,
we define Perron–Frobenius operators in RKHSs. Let
$\\{x_{0},x_{1},\ldots\\}\subseteq\mathcal{X}$ be time-series data. We assume
it is generated from the following deterministic dynamical system:
$x_{i+1}=f(x_{i}),$ (20)
where $f:\mathcal{X}\to\mathcal{X}$ is a map. By embedding $x_{i}$ and
$f(x_{i})$ in an RKHS $\mathcal{H}_{\tilde{k}}$ associated with a positive
definite kernel $\tilde{k}$ and the feature map $\tilde{\phi}$, dynamical
system (20) in $\mathcal{X}$ is transformed into that in the RKHS as
$\tilde{\phi}(x_{i+1})=\tilde{\phi}(f(x_{i})).$
The Perron–Frobenius operator $\tilde{K}$ in the RKHS is defined as a linear
operator on $\mathcal{H}_{\tilde{k}}$ satisfying
$\tilde{K}\tilde{\phi}(x):=\tilde{\phi}(f(x))$
for $x\in\mathcal{X}$. If $\\{\tilde{\phi}(x)\mid\ x\in\mathcal{X}\\}$ is
linearly independent, $\tilde{K}$ is well-defined as a linear map in the RKHS.
For example, if $\tilde{k}$ is a universal kernel (Sriperumbudur et al., 2011)
such as the Gaussian or Laplacian kernel on $\mathcal{X}=\mathbb{R}^{d}$,
$\\{\tilde{\phi}(x)\mid\ x\in\mathcal{X}\\}$ is linearly independent.
By considering eigenvalues and the corresponding eigenvectors of $\tilde{K}$,
we can understand the long-time behavior of the dynamical system. For example,
let $v_{1},\ldots,v_{m}$ be the eigenvectors with respect to eigenvalue 1 of
$\tilde{K}$. We project the vector $\tilde{\phi}(x_{0})$ onto the subspace
spanned by $v_{1},\dots,v_{m}$. We denote the projected vector by $v$. Then,
for $\alpha=1,2,\ldots$, we have
$\tilde{\phi}(x_{\alpha})=\tilde{K}^{\alpha}(v+v^{\perp})=v+\tilde{K}^{\alpha}v^{\perp},$
where $v^{\perp}=\tilde{\phi}(x_{0})-v$. Therefore, by calculating a pre-image
of $v$, we can extract the time-invariant component of the dynamical system
with the initial value $x_{0}$.
For practical uses of the above discussion, we construct an estimation of
$\tilde{K}$ only with observed data
$\\{x_{0},x_{1},\ldots\\}\subseteq\mathcal{X}$ as follows: We project
$\tilde{K}$ onto the finite dimensional subspace spanned by
$\\{\tilde{\phi}(x_{0}),\ldots,\tilde{\phi}(x_{T-1})\\}$. Let
$\tilde{W}_{T}:=[\tilde{\phi}(x_{0}),\ldots,\tilde{\phi}(x_{T-1})]$ and
$\tilde{W}_{T}=\tilde{Q}_{T}\tilde{\mathbf{R}}_{T}$ be the QR decomposition of
$\tilde{W}_{T}$ in the RKHS. Then, the Perron–Frobenius operator $\tilde{K}$
is estimated by projecting $\tilde{K}$ onto the space spanned by
$\\{\tilde{\phi}(x_{0}),\ldots,\tilde{\phi}(x_{T-1})\\}$. Since
$\tilde{K}\tilde{\phi}(x_{i}):=\tilde{\phi}(f(x_{i}))=\tilde{\phi}(x_{i+1})$
holds, we construct an estimation $\tilde{\mathbf{K}}_{T}$ of $\tilde{K}$ as
follows:
$\displaystyle\tilde{\mathbf{K}}_{T}:$
$\displaystyle=\tilde{Q}_{T}^{*}\tilde{K}\tilde{Q}_{T}=\tilde{Q}_{T}^{*}\tilde{K}\tilde{W}_{T}\tilde{\mathbf{R}}_{T}^{-1}=\tilde{Q}_{T}^{*}[\tilde{\phi}(x_{1}),\ldots,\tilde{\phi}(x_{T})]\tilde{\mathbf{R}}_{T}^{-1},$
which can be computed only with observed data.
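Since $\tilde{W}_{T}^{*}\tilde{W}_{T}=\tilde{\mathbf{R}}_{T}^{*}\tilde{\mathbf{R}}_{T}$ equals the Gram matrix of $x_{0},\ldots,x_{T-1}$, the factor $\tilde{\mathbf{R}}_{T}$ is its Cholesky factor, and $\tilde{\mathbf{K}}_{T}$ is computable from kernel evaluations alone. A minimal sketch (our own illustration; the linear dynamical system and the Gaussian kernel bandwidth are placeholders):

```python
import numpy as np

# Estimate the Perron–Frobenius operator of x_{i+1} = f(x_i) in an RKHS:
# K_T = Q_T* [phi(x_1), ..., phi(x_T)] R_T^{-1} = R_T^{-*} A R_T^{-1},
# where A[i, j] = k(x_i, x_{j+1}) and R_T* R_T is the Gram matrix.
def estimate_pf(xs, kern):
    T = len(xs) - 1
    G0 = kern(xs[:T, None], xs[None, :T])    # Gram matrix of phi(x_0), ..., phi(x_{T-1})
    A = kern(xs[:T, None], xs[None, 1:T + 1])
    L = np.linalg.cholesky(G0)               # G0 = L L^T, i.e. R_T = L^T
    M = np.linalg.solve(L, A)                # L^{-1} A
    return np.linalg.solve(L, M.T).T, L      # K_T = L^{-1} A L^{-T}

kern = lambda a, b: np.exp(-4 * (a - b) ** 2)  # Gaussian kernel (bandwidth is a placeholder)
xs = 4.0 * 0.9 ** np.arange(8)                 # trajectory of the linear map f(x) = 0.9 x
K, L = estimate_pf(xs, kern)
# in-sample, K maps the Q_T-coordinates of phi(x_i) to those of phi(x_{i+1})
```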
#### 6.2.2 Perron–Frobenius operator in RKHMs
Existing analyses (Kawahara, 2016; Hashimoto et al., 2020) of time-series data
with Perron–Frobenius operators are addressed only in RKHSs. In the remaining
parts of this section, we generalize the existing analyses to RKHMs to extract
continuous behaviors of functional data. We consider the case where the
time-series is functional data. Let $\Omega$ be a compact measure space,
$\mathcal{Y}$ be a topological space, $\mathcal{X}=C(\Omega,\mathcal{Y})$,
$\mathcal{A}=\mathcal{B}(L^{2}(\Omega))$, and
$\\{x_{0},x_{1},\ldots\\}\subseteq\mathcal{X}$ be functional time-series data.
Let $k:\mathcal{X}\times\mathcal{X}\to\mathcal{A}$ be defined as
$(k(x,y)w)(s)=\int_{t\in\Omega}\tilde{k}(x(s),y(t))w(t)dt$, where
$\tilde{k}:\mathcal{Y}\times\mathcal{Y}\to\mathbb{C}$ is a complex-valued
positive definite kernel (see Example 2.22.4 and the last paragraph of Section
3). The operator $k(x,y)$ is the integral operator whose integral kernel is
$\tilde{k}(x(s),y(t))$. We define a Perron–Frobenius operator in the RKHM
$\mathcal{M}_{k}$ associated with the above kernel $k$ as an
$\mathcal{A}$-linear operator satisfying
${K}{\phi}(x)={\phi}(f(x))$
for $x\in\mathcal{X}$. We assume $K$ is well-defined on a dense subset of
$\mathcal{M}_{k}$. Then, for $\alpha,\beta=1,2,\ldots$, we have
$k(x_{\alpha},x_{\beta})=\left\langle\phi(x_{\alpha}),\phi(x_{\beta})\right\rangle_{\mathcal{M}_{k}}=\big{\langle}K^{\alpha}\phi(x_{0}),K^{\beta}\phi(x_{0})\big{\rangle}_{\mathcal{M}_{k}}.$
Therefore, by estimating $K$ in the RKHM $\mathcal{M}_{k}$, we can extract the
similarity between arbitrary points of functions $x_{\alpha}$ and $x_{\beta}$.
Moreover, the eigenvalues and eigenvectors of $K$ provide us a decomposition
of the similarity $k(x_{\alpha},x_{\beta})$ into a time-invariant term and
time-dependent term. Since $K$ is a linear operator on a Banach space
$\mathcal{M}_{k}$, eigenvalues and eigenvectors of $K$ are available. Let
$v_{1},\ldots,v_{m}\in\mathcal{M}_{k}$ be the eigenvectors with respect to
eigenvalue $1$ of $K$. We project the vector $\phi(x_{0})$ onto the submodule
spanned by $v_{1},\ldots,v_{m}$, which is denoted by $\mathcal{V}$. Let
$\\{q_{1},\ldots,q_{m}\\}\subseteq\mathcal{M}_{k}$ be an orthonormal basis of
$\mathcal{V}$ and let $v=\sum_{i=1}^{m}q_{i}\left\langle
q_{i},\phi(x_{0})\right\rangle_{\mathcal{M}_{k}}$. Then, we have
$k(x_{\alpha},x_{\beta})=\big{\langle}K^{\alpha}(v+v^{\perp}),K^{\beta}(v+v^{\perp})\big{\rangle}_{\mathcal{M}_{k}}=\left\langle
v,v\right\rangle_{\mathcal{M}_{k}}+r({\alpha},\beta),$ (21)
where $v^{\perp}=\phi(x_{0})-v$ and $r(\alpha,\beta)=\left\langle
K^{\alpha}v,K^{\beta}v^{\perp}\right\rangle_{\mathcal{M}_{k}}+\left\langle
K^{\alpha}v^{\perp},K^{\beta}v\right\rangle_{\mathcal{M}_{k}}+\left\langle
K^{\alpha}v^{\perp},K^{\beta}v^{\perp}\right\rangle_{\mathcal{M}_{k}}$.
Therefore, the term $\left\langle v,v\right\rangle_{\mathcal{M}_{k}}$ provides
us with the information about time-invariant similarities.
###### Remark 6.9
We can also consider the vvRKHS $\mathcal{H}_{k}^{\operatorname{v}}$ with
respect to the operator-valued kernel $k$. Here, we discuss the difference
between the case of vvRKHS and RKHM. The Perron–Frobenius operator
$K^{\operatorname{v}}$ in a vvRKHS $\mathcal{H}_{k}^{\operatorname{v}}$ (Fujii
& Kawahara, 2019) is defined as a linear operator satisfying
$K^{\operatorname{v}}\phi(x)w=\phi(f(x))w$
for $x\in\mathcal{X}$ and $w\in\mathcal{W}$. However, with finitely many
vectors in $\mathcal{H}_{k}^{\operatorname{v}}$, we can only recover a
projected operator $UU^{*}k(x_{\alpha},x_{\beta})UU^{*}$, where
$N\in\mathbb{N}$, $U=[u_{1},\ldots,u_{N}]$, and $\\{u_{1},\ldots,u_{N}\\}$ is
an orthonormal system on $\mathcal{W}$ as follows:
$U^{*}k(x_{\alpha},x_{\beta})U=\big{[}\left\langle\phi(x_{\alpha})u_{i},\phi(x_{\beta})u_{j}\right\rangle_{\mathcal{H}_{k}^{\operatorname{v}}}\big{]}_{i,j}=\big{[}\big{\langle}(K^{\operatorname{v}})^{\alpha}\phi(x_{0})u_{i},(K^{\operatorname{v}})^{\beta}\phi(x_{0})u_{j}\big{\rangle}_{\mathcal{H}_{k}^{\operatorname{v}}}\big{]}_{i,j}.$
(22)
Furthermore, let $v_{1},\ldots,v_{m}\in\mathcal{H}_{k}^{\operatorname{v}}$ be the eigenvectors
with respect to eigenvalue $1$ of $K^{\operatorname{v}}$. Let
$\\{q_{1},\ldots,q_{m}\\}\subseteq\mathcal{H}_{k}^{\operatorname{v}}$ be an
orthonormal basis of the subspace spanned by $v_{1},\ldots,v_{m}$ and let
$\tilde{v}_{j}=\sum_{i=1}^{m}q_{i}\left\langle
q_{i},\phi(x_{0})u_{j}\right\rangle_{\mathcal{H}_{k}^{\operatorname{v}}}$.
Then, we have
$U^{*}k(x_{\alpha},x_{\beta})U=\big{[}\langle(K^{\operatorname{v}})^{\alpha}(\tilde{v}_{i}+\tilde{v}_{i}^{\perp}),(K^{\operatorname{v}})^{\beta}(\tilde{v}_{j}+\tilde{v}_{j}^{\perp})\rangle_{\mathcal{H}_{k}^{\operatorname{v}}}\big{]}_{i,j}=[\left\langle\tilde{v}_{i},\tilde{v}_{j}\right\rangle_{\mathcal{H}_{k}^{\operatorname{v}}}]_{i,j}+\tilde{r}(\alpha,\beta),$
(23)
where $\tilde{v}_{i}^{\perp}=\phi(x_{0})u_{i}-\tilde{v}_{i}$ and
$\tilde{r}(\alpha,\beta)=[\langle(K^{\operatorname{v}})^{\alpha}\tilde{v}_{i},(K^{\operatorname{v}})^{\beta}\tilde{v}_{j}^{\perp}\rangle_{\mathcal{H}_{k}^{\operatorname{v}}}+\langle(K^{\operatorname{v}})^{\alpha}\tilde{v}_{i}^{\perp},(K^{\operatorname{v}})^{\beta}\tilde{v}_{j}\rangle_{\mathcal{H}_{k}^{\operatorname{v}}}+\langle(K^{\operatorname{v}})^{\alpha}\tilde{v}_{i}^{\perp},(K^{\operatorname{v}})^{\beta}\tilde{v}_{j}^{\perp}\rangle_{\mathcal{H}_{k}^{\operatorname{v}}}]_{i,j}$.
Therefore, with vvRKHSs, we cannot recover the continuous behavior of the
operator $k(x,y)$ which encodes similarities between functions $x$ and $y$.
#### 6.2.3 Estimation of Perron–Frobenius operators in RKHMs
In practice, we only have time-series data but do not know the underlying
dynamical system and its Perron–Frobenius operator in an RKHM. Therefore, we
consider estimating the Perron–Frobenius operator only with the data. To do
so, we generalize the Gram–Schmidt orthonormalization algorithm to Hilbert
$C^{*}$-modules to apply the QR decomposition and project Perron–Frobenius
operators onto the submodule spanned by
$\\{\phi(x_{0}),\ldots,\phi(x_{T-1})\\}$. The Gram–Schmidt orthonormalization
in Hilbert modules is theoretically investigated by Cnops (1992). Here, we
develop a practical method for our settings. Then, we can apply the
decomposition (21), proposed in Subsection 6.2.2, of the estimated operator
regarding eigenvectors. Since we are considering the RKHM associated with the
integral operator-valued positive definite kernel defined in the first part of
Subsection 6.2.2, we assume $\mathcal{A}=\mathcal{B}(\mathcal{W})$ and we
denote by $\mathcal{M}$ a Hilbert $C^{*}$-module over $\mathcal{A}$ throughout
this subsection. Note that integral operators are compact.
We first develop a normalization method for Hilbert $C^{*}$-modules. In
$C^{*}$-algebras, nonzero elements are not always invertible, which is the
main difficulty of normalization in Hilbert $C^{*}$-modules. However, by
carefully applying the definition of a normalized vector (see Definition 2.17),
we can construct a normalization method.
###### Proposition 6.10 (Normalization)
Let $\epsilon\geq 0$ and let $\hat{q}\in\mathcal{M}$ satisfy
$\|\hat{q}\|_{\mathcal{M}}>\epsilon$. Assume
$\langle\hat{q},\hat{q}\rangle_{\mathcal{M}}$ is compact. Then, there exists
$\hat{b}\in\mathcal{A}$ such that $\|\hat{b}\|_{\mathcal{A}}<1/\epsilon$ and
$q:=\hat{q}\hat{b}$ is normalized. In addition, there exists $b\in\mathcal{A}$
such that $\|\hat{q}-qb\|_{\mathcal{M}}\leq\epsilon$.
Proof Let $\lambda_{1}\geq\lambda_{2}\geq\cdots\geq 0$ be the eigenvalues of
the compact operator $\left\langle\hat{q},\hat{q}\right\rangle_{\mathcal{M}}$,
and let $m^{\prime}:=\max\\{j\mid\ \lambda_{j}>\epsilon^{2}\\}$. Since
$\langle\hat{q},\hat{q}\rangle_{\mathcal{M}}$ is positive and compact, it
admits the spectral decomposition
$\langle\hat{q},\hat{q}\rangle_{\mathcal{M}}=\sum_{i=1}^{\infty}\lambda_{i}v_{i}v_{i}^{*}$,
where $v_{i}$ is the orthonormal eigenvector with respect to $\lambda_{i}$.
Also, since $\lambda_{1}=\|\hat{q}\|_{\mathcal{M}}^{2}>\epsilon^{2}$, we have
$m^{\prime}\geq 1$. Let
$\hat{b}=\sum_{i=1}^{m^{\prime}}1/\sqrt{\lambda_{i}}v_{i}v_{i}^{*}$. By the
definition of $\hat{b}$,
$\|\hat{b}\|_{\mathcal{A}}=1/\sqrt{\lambda_{m^{\prime}}}<1/\epsilon$ holds.
Also, we have
$\displaystyle\langle\hat{q}\hat{b},\hat{q}\hat{b}\rangle_{\mathcal{M}}$
$\displaystyle=\hat{b}^{*}\left\langle\hat{q},\hat{q}\right\rangle_{\mathcal{M}}\hat{b}=\sum_{i=1}^{m^{\prime}}\frac{1}{\sqrt{\lambda_{i}}}v_{i}v_{i}^{*}\sum_{i=1}^{\infty}\lambda_{i}v_{i}v_{i}^{*}\sum_{i=1}^{m^{\prime}}\frac{1}{\sqrt{\lambda_{i}}}v_{i}v_{i}^{*}=\sum_{i=1}^{m^{\prime}}v_{i}v_{i}^{*}.$
Thus, $\langle\hat{q}\hat{b},\hat{q}\hat{b}\rangle_{\mathcal{M}}$ is a nonzero
orthogonal projection.
In addition, let $b=\sum_{i=1}^{m^{\prime}}\sqrt{\lambda_{i}}v_{i}v_{i}^{*}$.
Since $\hat{b}b=\sum_{i=1}^{m^{\prime}}v_{i}v_{i}^{*}$, the identity
$\langle\hat{q},\hat{q}\hat{b}b\rangle=\langle\hat{q}\hat{b}b,\hat{q}\hat{b}b\rangle$
holds, and we obtain
$\displaystyle\langle\hat{q}-qb,\hat{q}-qb\rangle_{\mathcal{M}}$
$\displaystyle=\langle\hat{q}-\hat{q}\hat{b}b,\hat{q}-\hat{q}\hat{b}b\rangle_{\mathcal{M}}=\langle\hat{q},\hat{q}\rangle-\langle\hat{q}\hat{b}b,\hat{q}\hat{b}b\rangle_{\mathcal{M}}$
$\displaystyle=\sum_{i=1}^{\infty}{\lambda_{i}}v_{i}v_{i}^{*}-\sum_{i=1}^{m^{\prime}}{\lambda_{i}}v_{i}v_{i}^{*}=\sum_{i=m^{\prime}+1}^{\infty}{\lambda_{i}}v_{i}v_{i}^{*}.$
Thus,
$\|\hat{q}-qb\|_{\mathcal{M}}=\sqrt{\lambda_{m^{\prime}+1}}\leq\epsilon$
holds, which completes the proof of the proposition.
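The proof is constructive. As an illustration (ours, not part of the paper), the following Python sketch carries it out in the finite-dimensional stand-in $\mathcal{A}=\mathbb{C}^{m\times m}$, $\mathcal{M}=\mathbb{C}^{p\times m}$ with $\left\langle u,v\right\rangle=u^{*}v$; the names `normalize`, `q_hat`, and the random test vector are our own choices.

```python
import numpy as np

def normalize(q_hat, eps=0.0):
    """Spectrally truncate <q_hat, q_hat> at eps^2 and rescale, as in the
    proof of Proposition 6.10. Module: C^{p x m} over A = C^{m x m},
    with A-valued inner product <u, v> = u^H v."""
    G = q_hat.conj().T @ q_hat                    # <q_hat, q_hat> in A
    lam, V = np.linalg.eigh(G)                    # eigenvalues, ascending
    keep = lam > eps**2                           # lambda_i > eps^2 only
    b_hat = (V[:, keep] / np.sqrt(lam[keep])) @ V[:, keep].conj().T
    b = (V[:, keep] * np.sqrt(lam[keep])) @ V[:, keep].conj().T
    return q_hat @ b_hat, b_hat, b

rng = np.random.default_rng(0)
q_hat = rng.standard_normal((6, 3))
q, b_hat, b = normalize(q_hat, eps=1e-8)
P = q.conj().T @ q            # <q, q>: should be an orthogonal projection
```

Here $\left\langle q,q\right\rangle$ comes out idempotent and Hermitian, and $\hat{q}=qb$ up to the truncation error, matching the two claims of the proposition.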
Proposition 6.10 and its proof provide a concrete procedure to obtain
normalized vectors in $\mathcal{M}$. This enables us to compute an orthonormal
basis practically by applying Gram–Schmidt orthonormalization with respect to
the $\mathcal{A}$-valued inner product.
###### Proposition 6.11 (Gram-Schmidt orthonormalization)
Let $\\{w_{i}\\}_{i=1}^{\infty}$ be a sequence in $\mathcal{M}$. Assume
$\left\langle w_{i},w_{j}\right\rangle_{\mathcal{M}}$ is compact for any
$i,j=1,2,\ldots$. Consider the following scheme for $j=1,2,\ldots$ and
$\epsilon\geq 0$:
$\displaystyle\hat{q}_{j}$
$\displaystyle=w_{j}-\sum_{i=1}^{j-1}q_{i}\left\langle
q_{i},w_{j}\right\rangle_{\mathcal{M}},\quad
q_{j}=\hat{q}_{j}\hat{b}_{j}\quad\mbox{if
}\;\|\hat{q}_{j}\|_{\mathcal{M}}>\epsilon,$ (24) $\displaystyle q_{j}$
$\displaystyle=0\quad\mbox{otherwise},$
where $\hat{b}_{j}$ is defined as $\hat{b}$ in Proposition 6.10 by setting
$\hat{q}=\hat{q}_{j}$. Then, $\\{q_{j}\\}_{j=1}^{\infty}$ is an orthonormal
basis in $\mathcal{M}$ such that any $w_{j}$ is contained in the
$\epsilon$-neighborhood of the module spanned by $\\{q_{j}\\}_{j=1}^{\infty}$.
###### Remark 6.12
We give some remarks about the role of $\epsilon$ in Proposition 6.10. The
vector $\hat{q}_{i}$ can be exactly reconstructed from $w_{i}$ only when
$\epsilon=0$. This is because part of the spectral information of
$\left\langle\hat{q}_{i},\hat{q}_{i}\right\rangle_{\mathcal{M}}$ may be lost
if $\epsilon>0$. However, if $\epsilon$ is sufficiently small, we can
reconstruct $\hat{q}_{i}$ with a small error. On the other hand, the norm of
$\hat{b}_{i}$ can be large if $\epsilon$ is small, and the computation of
$\\{q_{i}\\}_{i=1}^{\infty}$ can become numerically unstable. This corresponds
to a trade-off between theoretical accuracy and numerical stability.
To prove Proposition 6.11, we first prove the following lemmas.
###### Lemma 6.13
For $c\in\mathcal{A}$ and $v\in\mathcal{M}$, if $\left\langle
v,v\right\rangle_{\mathcal{M}}c=\left\langle v,v\right\rangle_{\mathcal{M}}$,
then $vc=v$ holds.
Proof If $\left\langle v,v\right\rangle_{\mathcal{M}}c=\left\langle
v,v\right\rangle_{\mathcal{M}}$, then $c^{*}\left\langle
v,v\right\rangle_{\mathcal{M}}=\left\langle v,v\right\rangle_{\mathcal{M}}$
and we have
$\displaystyle\left\langle
vc-v,vc-v\right\rangle_{\mathcal{M}}=c^{*}\left\langle
v,v\right\rangle_{\mathcal{M}}c-c^{*}\left\langle
v,v\right\rangle_{\mathcal{M}}-\left\langle
v,v\right\rangle_{\mathcal{M}}c+\left\langle
v,v\right\rangle_{\mathcal{M}}=0,$
which implies $vc=v$.
###### Lemma 6.14
If $q\in\mathcal{M}$ is normalized, then $q\left\langle
q,q\right\rangle_{\mathcal{M}}=q$ holds.
Proof Since $\left\langle q,q\right\rangle_{\mathcal{M}}$ is a projection,
$\left\langle q,q\right\rangle_{\mathcal{M}}\left\langle
q,q\right\rangle_{\mathcal{M}}=\left\langle q,q\right\rangle_{\mathcal{M}}$
holds. Therefore, letting $c=\left\langle q,q\right\rangle_{\mathcal{M}}$ and
$v=q$ in Lemma 6.13 completes the proof of the lemma.
Proof of Proposition 6.11 By Proposition 6.10, $q_{j}$ is normalized, and for
$\epsilon\geq 0$, there exists $b_{j}\in\mathcal{A}$ such that
$\|\hat{q}_{j}-q_{j}b_{j}\|_{\mathcal{M}}\leq\epsilon$. Therefore, by the
definition of $\hat{q}_{j}$, $\|w_{j}-v_{j}\|_{\mathcal{M}}\leq\epsilon$
holds, where $v_{j}$ is the vector in the module spanned by
$\\{q_{j}\\}_{j=1}^{\infty}$ defined as
$v_{j}=\sum_{i=1}^{j-1}q_{i}\left\langle
q_{i},w_{j}\right\rangle_{\mathcal{M}}+q_{j}b_{j}$. This means that the
$\epsilon$-neighborhood of the space spanned by $\\{q_{j}\\}_{j=1}^{\infty}$
contains $\\{w_{j}\\}_{j=1}^{\infty}$. Next, we show the orthogonality of
$\\{q_{j}\\}_{j=1}^{\infty}$. Assume $q_{1},\ldots,q_{j-1}$ are orthogonal to
each other. For $i<j$, the following identities are deduced by Lemma 6.14:
$\displaystyle\left\langle q_{j},q_{i}\right\rangle_{\mathcal{M}}$
$\displaystyle=\hat{b}_{j}^{*}\left\langle\hat{q}_{j},q_{i}\right\rangle_{\mathcal{M}}=\hat{b}_{j}^{*}\bigg{\langle}w_{j}-\sum_{l=1}^{j-1}q_{l}\left\langle
q_{l},w_{j}\right\rangle,q_{i}\bigg{\rangle}_{\mathcal{M}}$
$\displaystyle=\hat{b}_{j}^{*}\left(\left\langle
w_{j},q_{i}\right\rangle_{\mathcal{M}}-\left\langle q_{i}\left\langle
q_{i},w_{j}\right\rangle_{\mathcal{M}},q_{i}\right\rangle_{\mathcal{M}}\right)=\hat{b}_{j}^{*}\left(\left\langle
w_{j},q_{i}\right\rangle_{\mathcal{M}}-\left\langle
w_{j},q_{i}\right\rangle_{\mathcal{M}}\right)=0.$
Therefore, $q_{1},\ldots,q_{j}$ are also orthogonal to each other, which
completes the proof of the proposition.
In practical computations, the scheme (24) should be represented with
matrices. For this purpose, we derive the following QR decomposition from
Proposition 6.11. This is a generalization of the QR decomposition in Hilbert
spaces.
###### Corollary 6.15 (QR decomposition)
For $n\in\mathbb{N}$, let $W:=[w_{1},\ldots,w_{n}]$ and
$Q:=[q_{1},\ldots,q_{n}]$. Let $\epsilon\geq 0$. Then, there exist
$\mathbf{R},\mathbf{R}_{\operatorname{inv}}\in\mathcal{A}^{n\times n}$ that
satisfy
$Q=W\mathbf{R}_{\operatorname{inv}},\quad\|W-Q\mathbf{R}\|\leq\epsilon.$ (25)
Here, $\|W\|$ for an $\mathcal{A}$-linear map $W:\mathcal{A}^{n}\to\mathcal{M}$
is defined as $\|W\|:=\sup_{\|v\|_{\mathcal{A}^{n}}=1}\|Wv\|_{\mathcal{M}}$.
Proof Let $\mathbf{R}=[r_{i,j}]_{i,j}$ be an $n\times n$ $\mathcal{A}$-valued
matrix. Here, $r_{i,j}$ is defined by $r_{i,j}=\left\langle
q_{i},w_{j}\right\rangle_{\mathcal{M}}\in\mathcal{A}$ for $i<j$, $r_{i,j}=0$
for $i>j$, and $r_{j,j}=b_{j}$, where $b_{j}$ is defined as $b$ in Proposition
6.10 by setting $\hat{q}=\hat{q}_{j}$. In addition, let
$\hat{\mathbf{B}}=\operatorname{diag}\\{\hat{b}_{1},\ldots,\hat{b}_{n}\\}$,
$\mathbf{B}=\operatorname{diag}\\{{b}_{1},\ldots,{b}_{n}\\}$, and
$\mathbf{R}_{\operatorname{inv}}=\mathbf{\hat{B}}(I+(\mathbf{R}-\mathbf{B})\mathbf{\hat{B}})^{-1}$
be $n\times n$ $\mathcal{A}$-valued matrices. The equality
$Q=W\mathbf{R}_{\operatorname{inv}}$ is derived directly from scheme (24). In
addition, by the scheme (24), for $j=1,\ldots,n$, we have
$\displaystyle w_{j}$ $\displaystyle=\sum_{i=1}^{j-1}q_{i}\left\langle
q_{i},w_{j}\right\rangle_{\mathcal{M}}+\hat{q}_{j}=\sum_{i=1}^{j-1}q_{i}\left\langle
q_{i},w_{j}\right\rangle_{\mathcal{M}}+q_{j}b_{j}+\hat{q}_{j}-q_{j}b_{j}=Q\mathbf{r}_{j}+\hat{q}_{j}-q_{j}b_{j},$
where $\mathbf{r}_{j}\in\mathcal{A}^{n}$ is the $j$-th column of $\mathbf{R}$.
Therefore, by Proposition 6.10,
$\|w_{j}-Q\mathbf{r}_{j}\|_{\mathcal{M}}=\|\hat{q}_{j}-q_{j}b_{j}\|_{\mathcal{M}}\leq\epsilon$
holds for $j=1,\ldots,n$, which implies $\|W-Q\mathbf{R}\|\leq\epsilon$.
We call the decomposition (25) the QR decomposition in Hilbert
$C^{*}$-modules. Although we are handling vectors in $\mathcal{M}$, by
applying the QR decomposition, we only have to compute
$\mathbf{R}_{\operatorname{inv}}$ and $\mathbf{R}$.
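As a concrete illustration (ours, not from the paper), the scheme (24) together with the factor $\mathbf{R}$ of Corollary 6.15 can be sketched in the finite-dimensional stand-in $\mathcal{A}=\mathbb{C}^{m\times m}$, $\mathcal{M}=\mathbb{C}^{p\times m}$; the function name and test data are our own.

```python
import numpy as np

def module_gram_schmidt(W, eps=1e-10):
    """Scheme (24) for module vectors w_j in C^{p x m} over A = C^{m x m}
    (<u, v> = u^H v). Returns the orthonormalized q_j and the A-valued
    upper triangular factor R of Corollary 6.15, stored as R[i, j] in A."""
    n, m = len(W), W[0].shape[1]
    Q = []
    R = np.zeros((n, n, m, m))
    for j, w in enumerate(W):
        q_hat = w.copy()
        for i in range(j):
            R[i, j] = Q[i].conj().T @ w          # r_{i,j} = <q_i, w_j>
            q_hat = q_hat - Q[i] @ R[i, j]
        # normalization step of Proposition 6.10
        lam, V = np.linalg.eigh(q_hat.conj().T @ q_hat)
        keep = lam > eps**2
        b_hat = (V[:, keep] / np.sqrt(lam[keep])) @ V[:, keep].conj().T
        R[j, j] = (V[:, keep] * np.sqrt(lam[keep])) @ V[:, keep].conj().T
        Q.append(q_hat @ b_hat)
    return Q, R

rng = np.random.default_rng(1)
W = [rng.standard_normal((8, 2)) for _ in range(3)]
Q, R = module_gram_schmidt(W)
```

For generic (A-linearly independent) inputs the $q_{j}$ are mutually orthogonal and $w_{j}=\sum_{i\leq j}q_{i}\mathbf{R}[i,j]$ exactly, as in (25) with $\epsilon=0$.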
We now consider estimating the Perron–Frobenius operator $K$ with observed
time-series data $\\{x_{0},x_{1},\ldots\\}$. Let
${W}_{T}=[{\phi}(x_{0}),\ldots,{\phi}(x_{T-1})]$. We are considering an
integral operator-valued positive definite kernel (see the first part of
Subsection 6.2.2 and the last paragraph in Section 3). Since integral
operators are compact, $W_{T}$ satisfies the assumption in Proposition 6.11.
Thus, let ${W}_{T}{\mathbf{R}}_{\operatorname{inv},T}={Q}_{T}$ be the QR
decomposition (25) of ${W}_{T}$ in the RKHM $\mathcal{M}_{k}$. The
Perron–Frobenius operator $K$ is estimated by projecting ${K}$ onto the module
spanned by $\\{{\phi}(x_{0}),\ldots,\phi(x_{T-1})\\}$. We define
${\mathbf{K}}_{T}$ as the estimation of $K$. Since
${K}{\phi}(x_{i})={\phi}(f(x_{i}))={\phi}(x_{i+1})$ holds, ${\mathbf{K}}_{T}$
can be computed only with observed data as follows:
$\displaystyle{\mathbf{K}}_{T}$
$\displaystyle={Q}_{T}^{*}{K}{Q}_{T}={Q}_{T}^{*}{K}{W}_{T}{\mathbf{R}}_{\operatorname{inv},T}={Q}_{T}^{*}[{\phi}(x_{1}),\ldots,{\phi}(x_{T})]{\mathbf{R}}_{\operatorname{inv},T}.$
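In the scalar case $\mathcal{A}=\mathbb{C}$ the RKHM reduces to an RKHS and this estimation becomes an ordinary QR computation. The following Python sketch (ours; the linear feature dynamics `F` and the trajectory are synthetic stand-ins, not the paper's kernel setting) shows the pipeline $W_{T}=Q_{T}\mathbf{R}_{T}$, $\mathbf{K}_{T}=Q_{T}^{*}[\phi(x_{1}),\ldots,\phi(x_{T})]\mathbf{R}_{\operatorname{inv},T}$.

```python
import numpy as np

# Scalar sketch (A = C): estimate the projected Perron-Frobenius matrix
# K_T = Q^* [phi(x_1), ..., phi(x_T)] R_inv from a feature trajectory
# generated by a synthetic linear map F in feature space.
rng = np.random.default_rng(1)
p, T = 8, 5
F = rng.standard_normal((p, p)) * 0.3          # stand-in feature dynamics
phi = [rng.standard_normal(p)]
for _ in range(T):
    phi.append(F @ phi[-1])                    # phi(x_{i+1}) = F phi(x_i)
W = np.column_stack(phi[:T])                   # [phi(x_0), ..., phi(x_{T-1})]
S = np.column_stack(phi[1:T + 1])              # [phi(x_1), ..., phi(x_T)]
Q, R = np.linalg.qr(W)                         # W = Q R, orthonormal columns
R_inv = np.linalg.inv(R)                       # so Q = W R_inv
K_T = Q.conj().T @ S @ R_inv                   # estimated operator matrix
```

In coordinates, $\mathbf{K}_{T}\,Q^{*}\phi(x_{i})=Q^{*}\phi(x_{i+1})$ holds for the observed steps, i.e. the estimate reproduces the dynamics on the span of the data.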
###### Remark 6.16
In practical computations, we only need to keep the integral kernels to
implement the Gram–Schmidt orthonormalization algorithm and estimate
Perron–Frobenius operators in the RKHM associated with the integral operator-
valued kernel $k$. Therefore, we can directly access integral kernel functions
of operators, which is not achieved by vvRKHS as we stated in Remark 4.13.
Indeed, the operations required for estimating Perron–Frobenius operators are
explicitly computed as follows: Let $c,d\in\mathcal{B}(L^{2}(\Omega))$ be
integral operators whose integral kernels are $f(s,t)$ and $g(s,t)$. Then, the
integral kernels of the operators $c+d$ and $cd$ are $f(s,t)+g(s,t)$ and
$\int_{r\in\Omega}f(s,r)g(r,t)dr$, respectively, and that of $c^{*}$ is
$\overline{f(t,s)}$. Moreover, if $c$ is positive, let $c_{\epsilon}^{+}$ be
$\sum_{\lambda_{i}>\epsilon}1/\sqrt{\lambda_{i}}v_{i}{v_{i}}^{*}$, where
$\lambda_{i}$ are eigenvalues of the compact positive operator $c$ and $v_{i}$
are corresponding orthonormal eigenvectors. Then, the integral kernel of the
operator $c_{\epsilon}^{+}$ is
$\sum_{\lambda_{i}>\epsilon}1/\sqrt{\lambda_{i}}v_{i}(s)\overline{v_{i}(t)}$.
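The kernel arithmetic above can be mimicked numerically by sampling kernels on a grid, with composition becoming a quadrature-weighted matrix product. A discretized Python sketch (ours; the grid size and the two example kernels are arbitrary choices, not from the paper):

```python
import numpy as np

# Discretized sketch of the integral-kernel arithmetic in Remark 6.16:
# represent an operator on L^2([0,1]) by its kernel sampled on a grid.
N = 200
s = (np.arange(N) + 0.5) / N                 # midpoint grid on [0, 1]
h = 1.0 / N                                  # quadrature weight

f = np.exp(-(s[:, None] - s[None, :])**2)    # kernel f(s, t) of operator c
g = np.minimum(s[:, None], s[None, :])       # kernel g(s, t) of operator d

add_kernel = f + g                           # kernel of c + d
mul_kernel = h * f @ g                       # kernel of cd: int f(s,r)g(r,t)dr
adj_kernel = f.T.conj()                      # kernel of c^*

def pinv_sqrt_kernel(f, eps):
    """Kernel of c_eps^+ = sum_{lambda_i > eps} lambda_i^{-1/2} v_i v_i^*
    for a positive operator c with kernel f (grid eigenproblem)."""
    lam, V = np.linalg.eigh(h * f)           # h*f approximates the operator
    keep = lam > eps
    v = V[:, keep] / np.sqrt(h)              # rescale to L^2 normalization
    return (v / np.sqrt(lam[keep])) @ v.conj().T

k_plus = pinv_sqrt_kernel(f, eps=1e-3)
```

As a sanity check, $(c_{\epsilon}^{+})^{2}c$ is the orthogonal projection onto the retained eigenspace, so its grid matrix is idempotent.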
#### 6.2.4 Numerical examples
To show that the proposed analysis with RKHMs captures continuous changes of
values of kernels along functional data, as we claimed in Section 3, we
conducted experiments with river flow data of the Thames River in
London (available at https://nrfa.ceh.ac.uk/data/search). The data is composed
of daily flow at 10 stations. We used the data for 51 days beginning on
January 1, 2018. We
regard every daily flow as a function of the ratio of the distance from the
most downstream station and fit it to a polynomial of degree 5 to obtain time
series $x_{0},\ldots,x_{50}\in C([0,1],\mathbb{R})$. Then, we estimated the
Perron–Frobenius operator which describes the time evolution of the series
$x_{0},\ldots,x_{50}$ in the RKHM associated with the
$\mathcal{B}(L^{2}([0,1]))$-valued positive definite kernel $k(x,y)$ defined
as the integral operator whose integral kernel is
$\tilde{k}(s,t)=e^{-|x(s)-y(t)|^{2}}$ for $x,y\in C([0,1],\mathbb{R})$. In
this case, $T=50$. As we noted in Remark 6.16, all the computations in
$\mathcal{A}=\mathcal{B}(L^{2}([0,1]))$ are implemented by keeping integral
kernels of operators. Let $\mathcal{F}$ be the set of polynomials of the form
$\sum_{j,l=0}^{5}\eta_{j,l}s^{j}t^{l}$ with
$\eta_{j,l}\in\mathbb{R}$. We project $\tilde{k}$ onto $\mathcal{F}$. Then,
for $c,d\in\mathcal{F}$, $c+d\in\mathcal{F}$ is satisfied, but
$cd\in\mathcal{F}$ is not always satisfied. Thus, we project $cd$ onto
$\mathcal{F}$ to restrict all the computations in $\mathcal{F}$ in practice.
We computed the time-invariant term $\left\langle
v,v\right\rangle_{\mathcal{M}_{k}}$ in Eq. (21). Regarding the computation of
eigenvectors with respect to the eigenvalue $1$, we consider the following
minimization problem for the estimated Perron–Frobenius operator
$\mathbf{K}_{T}$:
$\inf_{\mathbf{v}\in\mathcal{A}^{T}}|\mathbf{K}_{T}\mathbf{v}-\mathbf{v}|_{\mathcal{A}^{T}}^{2}-\lambda|\mathbf{v}|_{\mathcal{A}^{T}}^{2}.$
(26)
Here, $-\lambda|\mathbf{v}|_{\mathcal{A}^{T}}^{2}$ is a penalty term that
prevents $\mathbf{v}$ from shrinking to $0$. Since the objective function of the problem
(26) is represented as
$\mathbf{v}^{*}(\mathbf{K}_{T}^{*}\mathbf{K}_{T}-\mathbf{K}_{T}^{*}-\mathbf{K}_{T}+(1-\lambda)\mathbf{I})\mathbf{v}$,
where $\mathbf{I}$ is the identity operator on $\mathcal{A}^{T}$, we apply the
gradient descent on $\mathcal{A}^{T}$ (see Remark 6.6). Figure 7(a) shows the
heat map representing the integral kernel of $\left\langle
v,v\right\rangle_{\mathcal{M}_{k}}$.
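A scalar toy version of this gradient descent (ours, with $\mathcal{A}=\mathbb{C}$ and a synthetic symmetric $\mathbf{K}_{T}$ whose eigenvalue-$1$ eigenvector is known in advance; the spectrum, step size, and iteration count are our own choices):

```python
import numpy as np

# Toy version of problem (26) with A = C: minimize
# |K v - v|^2 - lam * |v|^2 by gradient descent to extract an
# eigenvalue-1 eigenvector of a synthetic symmetric K.
rng = np.random.default_rng(3)
T, lam, lr, steps = 6, 0.2, 0.1, 600
U_orth, _ = np.linalg.qr(rng.standard_normal((T, T)))
mu = np.array([1.0, 0.5, 0.4, 0.3, 0.2, 0.1])
K = U_orth @ np.diag(mu) @ U_orth.T      # eigenvalue 1 with eigenvector u
u = U_orth[:, 0]
M = (K - np.eye(T)).T @ (K - np.eye(T)) - lam * np.eye(T)
v = rng.standard_normal(T)
for _ in range(steps):
    v = v - lr * (2 * M @ v)             # gradient of v^T M v
v = v / np.linalg.norm(v)
```

The penalty makes the objective negative only along the fixed vector, so the iterates grow along $u$ and decay elsewhere; after normalization, `v` is aligned with `u`.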
For comparison, we also applied a similar analysis in a vvRKHS. We computed
the time-invariant term
$[\left\langle\tilde{v}_{i},\tilde{v}_{j}\right\rangle_{\mathcal{H}_{k}^{\operatorname{v}}}]_{i,j}$
in Eq. (23) by setting $u_{i}$ as orthonormal polynomials of the form
$u_{i}(s)=\sum_{j=1}^{5}\eta_{j}s^{j}$, where $\eta_{j}\in\mathbb{R}$. Let
$c_{\operatorname{inv}}=[\left\langle\tilde{v}_{i},\tilde{v}_{j}\right\rangle_{\mathcal{H}_{k}^{\operatorname{v}}}]_{i,j}$.
In this case, we cannot obtain the integral kernel of the time-invariant term
of the operator $k(x_{\alpha},x_{\beta})$, which is denoted by
$\tilde{k}_{\operatorname{inv}}$ here. Instead, by approximating
$k(x_{\alpha},x_{\beta})$ by $UU^{*}k(x_{\alpha},x_{\beta})UU^{*}$ and
computing $Uc_{\operatorname{inv}}U^{*}\chi_{[0,t]}$, we obtain an
approximation of $\int_{0}^{t}\tilde{k}_{\operatorname{inv}}(s,r)dr$ for
$s\in[0,1]$. Here, $\chi_{E}:[0,1]\to\\{0,1\\}$ is the indicator function for
a Borel set $E$ on $[0,1]$. Therefore, by numerically differentiating
$Uc_{\operatorname{inv}}U^{*}\chi_{[0,t]}$ with respect to $t$, we obtain an
approximation of $\tilde{k}_{\operatorname{inv}}$. Figure 7(b) shows the heat map
representing the approximation of $\tilde{k}_{\operatorname{inv}}$.
Around the upstream stations, there are many branches, and the flow is
affected by them. Thus, the similarity between flows at two points would
change over time. In contrast, around the downstream stations, the flow is
not expected to be affected by other rivers. Thus, the similarity between
flows at two points would be invariant over time. The values around the
diagonal part of Figure 7(a) (RKHM) become small as $s$ and $t$ become large
(i.e., going up the river). On the other hand, those of Figure 7(b) (vvRKHS)
remain large for large $s$ and $t$. Therefore, RKHM captures the
aforementioned fact more properly.
(a) RKHM
(b) vvRKHS
Figure 7: Heat maps representing time-invariant similarities
### 6.3 Analysis of interaction effects
Polynomial regression is a classical problem in statistics (Hastie et al.,
2009), and analyzing interaction effects by polynomial regression has been
investigated (for recent improvements, see, for example, Suzumura et al.
(2017)). Most of the existing methods focus on the case of finite dimensional
(discrete) data. However, in practice, we often encounter situations where we
cannot fix the dimension of the data. For example, observations may be
obtained at multiple locations, and the locations are not fixed; they may
change depending on time. Therefore, analyzing interaction effects of infinite
dimensional (continuous) data is essential. We show that the KMEs of
$\mathcal{A}$-valued measures in RKHMs provide us with a method for the
analysis of infinite dimensional data by setting $\mathcal{A}$ as an infinite
dimensional space such as $\mathcal{B}(\mathcal{W})$. Moreover, the proposed
method does not need the assumption that interaction effects are described by
a polynomial. We first develop the analysis in RKHMs for the case of finite
dimensional data in Subsection 6.3.1. Then, we show that the analysis is
naturally generalized to infinite dimensional data in Subsection 6.3.2.
##### Applying $\mathcal{A}$-valued measures and KME in RKHMs
Using $\mathcal{A}$-valued measures, we can describe the measure corresponding
to each point of functional data as functions or operators. For example, let
$\mathcal{X}$ be a locally compact Hausdorff space and let
$x_{1},x_{2},\ldots\in C([0,1],\mathcal{X})$ be samples. Let
$\mathcal{A}=L^{\infty}([0,1])$ and let $\mu$ be the $\mathcal{A}$-valued
measure defined as $\mu(t)=\tilde{\mu}_{t}$, where $\tilde{\mu}_{t}$ is the
distribution that the samples $x_{1}(t),x_{2}(t),\ldots$ follow. Then, $\mu$
describes continuous behaviors of the distribution of samples
$x_{1}(t),x_{2}(t),\ldots$ with respect to $t$. Moreover, let
$\mathcal{A}=\mathcal{B}(L^{2}([0,1]))$ and let $\mu$ be the
$\mathcal{A}$-valued measure defined as
$(\mu(E)v)(s)=\int_{t\in[0,1]}\tilde{\mu}_{s,t}(E)v(t)dt$ for a Borel set $E$,
where $\tilde{\mu}_{s,t}$ is the joint distribution that the samples
$x_{1}(s),x_{2}(s),\ldots$ and $x_{1}(t),x_{2}(t),\ldots$ jointly
follow. Then, $\mu$ describes continuous dependencies of the samples
$x_{1}(s),x_{2}(s),\ldots$ and $x_{1}(t),x_{2}(t),\ldots$ with respect
to $s$ and $t$. Using the KME in RKHMs, we can embed $\mathcal{A}$-valued
measures into RKHMs, which enables us to compute inner products between
$\mathcal{A}$-valued measures. Then, we can generalize algorithms in Hilbert
spaces to $\mathcal{A}$-valued measures.
#### 6.3.1 The case of finite dimensional data
In this subsection, we assume $\mathcal{A}=\mathbb{C}^{m\times m}$. Let
$\mathcal{X}$ be a locally compact Hausdorff space and let
$x_{1},\ldots,x_{n}\in\mathcal{X}^{m\times m}$ and
$y_{1},\ldots,y_{n}\in\mathcal{A}$ be given samples. We assume there exist
functions $f_{j,l}:\mathcal{X}\to\mathcal{A}$ such that
$y_{i}=\sum_{j,l=1}^{m}f_{j,l}((x_{i})_{j,l})$
for $i=1,\ldots,n$. For example, the $(j,l)$-element of each $x_{i}$ describes
an effect of the $l$-th element on the $j$-th element of $x_{i}$, and $f_{j,l}$
is a nonlinear function describing the impact of that effect on the value
$y_{i}$. If the given samples $y_{i}$ are real or complex-valued, we can
regard them as $y_{i}1_{\mathcal{A}}$ to meet the above setting. Let
$\mu_{x}\in\mathcal{D}(\mathcal{X},\mathbb{C}^{m\times m})$ be a
$\mathbb{C}^{m\times m}$-valued measure defined as
$(\mu_{x})_{j,l}=\tilde{\delta}_{x_{j,l}}$, where $\tilde{\delta}_{x}$ for
$x\in\mathcal{X}$ is the standard (complex-valued) Dirac measure centered at
$x$. Note that the $(j,l)$-element of $\mu_{x}$ describes a measure regarding
the element $x_{j,l}$. Let $k$ be an $\mathcal{A}$-valued $c_{0}$-kernel (see
Definition 5.2), let $\mathcal{M}_{k}$ be the RKHM associated with $k$, and
let $\Phi$ be the KME defined in Section 5.1. In addition, let $\mathcal{V}$
be the submodule of $\mathcal{M}_{k}$ spanned by
$\\{\Phi(\mu_{x_{1}}),\ldots,\Phi(\mu_{x_{n}})\\}$, and let
$P_{f}:\mathcal{V}\to\mathbb{C}^{m\times m}$ be a $\mathbb{C}^{m\times
m}$-linear map (see Definition 2.19) which satisfies
$P_{f}\Phi(\mu_{x_{i}})=\sum_{j,l=1}^{m}{f_{j,l}((x_{i})_{j,l})}$
for $i=1,\ldots,n$. Here, we assume the vectors
$\Phi(\mu_{x_{1}}),\ldots,\Phi(\mu_{x_{n}})$ are $\mathbb{C}^{m\times
m}$-linearly independent (see Definition 2.20).
#### 6.3.2 Generalization to the continuous case
We generalize the setting mentioned in Subsection 6.3.1 to the case of
functional data. We assume Assumption 5.3 in this subsection. We set
$\mathcal{A}$ as $\mathcal{B}(L^{2}[0,1])$ instead of $\mathbb{C}^{m\times m}$
in this subsection. Let $x_{1},\ldots,x_{n}\in
C([0,1]\times[0,1],\mathcal{X})$ and $y_{1},\ldots,y_{n}\in\mathcal{A}$ be
given samples. We assume there exists an integrable function
$f:[0,1]\times[0,1]\times\mathcal{X}\to\mathcal{A}$ such that
$y_{i}=\int_{0}^{1}\int_{0}^{1}f(s,t,x_{i}(s,t))dsdt$
for $i=1,\ldots,n$. We consider an $\mathcal{A}$-valued positive definite
kernel $k$ on $\mathcal{X}$, the RKHM $\mathcal{M}_{k}$ associated with $k$,
and the KME $\Phi$ in $\mathcal{M}_{k}$. Let
$\mu_{x}\in\mathcal{D}(\mathcal{X},\mathcal{B}(L^{2}([0,1])))$ be a
$\mathcal{B}(L^{2}([0,1]))$-valued measure defined as
$\mu_{x}(E)v=\left\langle\chi_{E}(x(s,\cdot)),v\right\rangle_{L^{2}([0,1])}$
for a Borel set $E$ on $\mathcal{X}$. Here, $\chi_{E}:\mathcal{X}\to\\{0,1\\}$
is the indicator function for $E$. Note that $\mu_{x}(E)$ is an integral
operator whose integral kernel is $\chi_{E}(x(s,t))$, which corresponds to the
Dirac measure $\tilde{\delta}_{x(s,t)}(E)$. Let $\mathcal{V}$ be the submodule
of $\mathcal{M}_{k}$ spanned by
$\\{\Phi(\mu_{x_{1}}),\ldots,\Phi(\mu_{x_{n}})\\}$, and let
$P_{f}:\mathcal{V}\to\mathcal{B}(L^{2}([0,1]))$ be a
$\mathcal{B}(L^{2}([0,1]))$-linear map (see Definition 2.19) which satisfies
$P_{f}\Phi(\mu_{x_{i}})=\int_{0}^{1}\int_{0}^{1}f(s,t,x_{i}(s,t))dsdt$
for $i=1,\ldots,n$. Here, we assume the vectors
$\Phi(\mu_{x_{1}}),\ldots,\Phi(\mu_{x_{n}})$ are
$\mathcal{B}(L^{2}([0,1]))$-linearly independent (see Definition 2.20).
We estimate $P_{f}$ by restricting it to a submodule of $\mathcal{V}$. For
this purpose, we apply the PCA in RKHMs proposed in Section 6.1 and obtain
principal axes $p_{1},\ldots,p_{r}$ to construct the submodule. We replace
$\phi(x_{i})$ in the problem (8) with $\Phi(\mu_{x_{i}})$ and consider the
problem
$\inf_{\\{p_{j}\\}_{j=1}^{r}\subseteq\mathcal{M}_{k}\mbox{\footnotesize:
ONS}}\;\sum_{i=1}^{n}\bigg{|}\Phi(\mu_{x_{i}})-\sum_{j=1}^{r}p_{j}\left\langle
p_{j},\Phi(\mu_{x_{i}})\right\rangle_{\mathcal{M}_{k}}\bigg{|}_{\mathcal{M}_{k}}^{2}.$
(27)
The projection operator onto the submodule spanned by $p_{1},\ldots,p_{r}$ is
represented as $QQ^{*}$, where $Q=[p_{1},\ldots,p_{r}]$. Therefore, we
estimate $P_{f}$ by $P_{f}QQ^{*}$. We can compute $P_{f}QQ^{*}$ as follows.
###### Proposition 6.17
The solution of the problem (27) is represented as
$p_{j}=\sum_{i=1}^{n}\Phi(\mu_{x_{i}})c_{i,j}$ for some
$c_{i,j}\in\mathcal{A}$. Let $C=[c_{i,j}]_{i,j}$. Then, the estimation
$P_{f}QQ^{*}$ is computed as
$P_{f}QQ^{*}=[y_{1},\ldots,y_{n}]CQ^{*}.$
The following proposition shows that we can obtain a vector attaining the
largest transformation by $P_{f}$.
###### Proposition 6.18
Let $u\in\mathcal{M}_{k}$ be the unique vector satisfying $\left\langle
u,v\right\rangle_{\mathcal{M}_{k}}=P_{f}QQ^{*}v$ for any
$v\in\mathcal{M}_{k}$. For $\epsilon>0$, let
$b_{\epsilon}=(|u|_{\mathcal{M}_{k}}+\epsilon 1_{\mathcal{A}})^{-1}$ and let
$v_{\epsilon}=ub_{\epsilon}$. Then, $P_{f}QQ^{*}v_{\epsilon}$ converges to
$\sup_{v\in\mathcal{M}_{k},\ \|v\|_{\mathcal{M}_{k}}\leq 1}P_{f}QQ^{*}v$ (28)
as $\epsilon\to 0$, where the supremum is taken with respect to a (pre) order
in $\mathcal{A}$ (see Definition 2.9). If $\mathcal{A}=\mathbb{C}^{m\times
m}$, then the supremum is replaced with the maximum. In this case, let
$|u|_{\mathcal{M}_{k}}^{2}=a^{*}da$ be the eigenvalue decomposition of the
positive semi-definite matrix $|u|_{\mathcal{M}_{k}}^{2}$ and let
$b=a^{*}d^{+}a$, where the $i$-th diagonal element of $d^{+}$ is
$d_{i,i}^{-1/2}$ if $d_{i,i}\neq 0$ and $0$ if $d_{i,i}=0$. Then, $ub$ is the
solution of the maximization problem.
Proof By the Riesz representation theorem (Proposition 4.2), there exists a
unique $u\in\mathcal{M}_{k}$ satisfying $\left\langle
u,v\right\rangle_{\mathcal{M}_{k}}=P_{f}QQ^{*}v$ for any $v\in\mathcal{M}_{k}$. Then, for
$v\in\mathcal{M}_{k}$ which satisfies $\|v\|_{\mathcal{M}_{k}}=1$, by the
Cauchy–Schwarz inequality (Lemma 2.16), we have
$P_{f}QQ^{*}v=\left\langle
u,v\right\rangle_{\mathcal{M}_{k}}\leq_{\mathcal{A}}|u|_{\mathcal{M}_{k}}\|v\|_{\mathcal{M}_{k}}\leq_{\mathcal{A}}|u|_{\mathcal{M}_{k}}.$
(29)
The vector $v_{\epsilon}$ satisfies $\|v_{\epsilon}\|_{\mathcal{M}_{k}}\leq
1$. In addition, we have
$\displaystyle|u|_{\mathcal{M}_{k}}^{2}-(|u|_{\mathcal{M}_{k}}^{2}-\epsilon^{2}1_{\mathcal{A}})\geq_{\mathcal{A}}0.$
By multiplying $(|u|_{\mathcal{M}_{k}}+\epsilon 1_{\mathcal{A}})^{-1}$ on
both sides, we have $\left\langle
u,v_{\epsilon}\right\rangle_{\mathcal{M}_{k}}+\epsilon
1_{\mathcal{A}}-|u|_{\mathcal{M}_{k}}\geq_{\mathcal{A}}0$, which implies
$\||u|_{\mathcal{M}_{k}}-\left\langle
u,v_{\epsilon}\right\rangle_{\mathcal{M}_{k}}\|_{\mathcal{A}}\leq\epsilon$,
and $\lim_{\epsilon\to 0}P_{f}QQ^{*}v_{\epsilon}=\lim_{\epsilon\to
0}\left\langle
u,v_{\epsilon}\right\rangle_{\mathcal{M}_{k}}=|u|_{\mathcal{M}_{k}}$. Since
$\left\langle
u,v_{\epsilon}\right\rangle_{\mathcal{M}_{k}}\leq_{\mathcal{A}}d$ for any
upper bound $d$ of $\\{\left\langle u,v\right\rangle_{\mathcal{M}_{k}}\ \mid\
\|v\|_{\mathcal{M}_{k}}\leq 1\\}$, $|u|_{\mathcal{M}_{k}}\leq_{\mathcal{A}}d$
holds. As a result, $|u|_{\mathcal{M}_{k}}$ is the supremum of $P_{f}QQ^{*}v$.
In the case of $\mathcal{A}=\mathbb{C}^{m\times m}$, the inequality (29) is
replaced with the equality by setting $v=ub$.
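For $\mathcal{A}=\mathbb{C}^{m\times m}$, the closed-form maximizer $ub$ can be checked numerically. A sketch (ours; module $\mathbb{C}^{p\times m}$ with $\left\langle u,v\right\rangle=u^{*}v$ and random test data):

```python
import numpy as np

# Matrix case of Proposition 6.18 (A = C^{m x m}): given |u|^2 = a^* d a,
# set b = a^* d^+ a; then <u, u b> = |u| and <u b, u b> is a projection.
rng = np.random.default_rng(4)
m, p = 3, 7
u = rng.standard_normal((p, m))            # module vector, <u, v> = u^H v
G = u.conj().T @ u                         # |u|^2, positive semi-definite
d, a_cols = np.linalg.eigh(G)              # G = a_cols diag(d) a_cols^H
a = a_cols.conj().T                        # so that |u|^2 = a^* diag(d) a
d_plus = np.diag([x ** -0.5 if x > 1e-12 else 0.0 for x in d])
b = a.conj().T @ d_plus @ a
v = u @ b                                  # the maximizer u b
sqrtG = a.conj().T @ np.diag(np.sqrt(d)) @ a   # |u| = (|u|^2)^{1/2}
```

Here $\left\langle u,ub\right\rangle=|u|_{\mathcal{M}_{k}}$, attaining equality in the Cauchy–Schwarz bound (29), and $\left\langle ub,ub\right\rangle$ is a projection, so $\|ub\|\leq 1$.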
The vector $ub_{\epsilon}$ is represented as
$ub_{\epsilon}=QC^{*}[y_{1},\ldots,y_{n}]^{T}b_{\epsilon}=\sum_{i=1}^{n}\Phi(\mu_{x_{i}})d_{i}$,
where $d_{i}\in\mathcal{A}$ is the $i$-th element of
$CC^{*}[y_{1},\ldots,y_{n}]^{T}b_{\epsilon}\in\mathcal{A}^{n}$, and $\Phi$ is
$\mathcal{A}$-linear (see Proposition 5.8). Therefore, the vector
$ub_{\epsilon}$ corresponds to the $\mathcal{A}$-valued measure
$\sum_{i=1}^{n}\mu_{x_{i}}d_{i}$, and if $\Phi$ is injective (see Example
5.13), the corresponding measure is unique. This means that if we transform
the samples $x_{i}$ according to the measure $\sum_{i=1}^{n}\mu_{x_{i}}d_{i}$,
then the transformation has a large impact on $y_{i}$.
#### 6.3.3 Numerical examples
We applied our method to functional data $x_{1},\ldots,x_{n}\in
C([0,1]\times[0,1],[0,1])$, where $n=30$ and each $x_{i}$ is a polynomial of
the form $x_{i}(s,t)=\sum_{j,l=0}^{5}\eta_{j,l}s^{j}t^{l}$. The coefficients
$\eta_{j,l}$ of $x_{i}$ are randomly and independently drawn from the uniform
distribution on $[0,0.1]$. Then, we set $y_{i}\in\mathbb{R}$ as
$\displaystyle
y_{i}=\int_{0}^{1}\int_{0}^{1}x_{i}(s,t)^{-\alpha+\alpha|s+t|}dsdt$
for $\alpha=3,0.5$. We set $\mathcal{A}=\mathcal{B}(L^{2}([0,1]))$ and
$k(x_{1},x_{2})=\tilde{k}(x_{1},x_{2})1_{\mathcal{A}}$, where $\tilde{k}$ is a
complex-valued positive definite kernel on $[0,1]$ defined as
$\tilde{k}(x_{1},x_{2})=e^{-\|x_{1}-x_{2}\|_{2}^{2}}$. We applied the PCA
proposed in Subsection 6.1.3 with $r=3$, and then computed $\lim_{\epsilon\to
0}ub_{\epsilon}\in\mathcal{M}_{k}$ in Proposition 6.18, which can be
represented as $\Phi(\sum_{i=1}^{n}\mu_{x_{i}}d_{i})$ for some
$d_{i}\in\mathcal{A}$. The parameter $\lambda$ in the objective function of
the PCA was set as $0.5$. Figure 8 shows the heat map representing the value
related to the integral kernel of the $\mathcal{A}$-valued measure
$\sum_{i=1}^{n}\mu_{x_{i}}(E)d_{i}$ for $E=[0,0.1]$. We denote
$\sum_{i=1}^{n}\mu_{x_{i}}(E)d_{i}$ by $\nu(E)$ and the integral kernel of the
integral operator $\nu(E)$ by $\tilde{k}_{\nu(E)}$. As we stated in Subsection
6.3.2, if we transform the samples $x_{i}$ according to the measure $\nu$,
then the transformation has a large impact on $y_{i}$. Moreover, the value
of $\tilde{k}_{\nu(E)}$ at $(s,t)$ corresponds to the measure at $(s,t)$.
Therefore, the value of $\tilde{k}_{\nu(E)}$ at $(s,t)$ describes the impact
on $y_{i}$ of the effect of $t$ on $s$. To additionally take the effect of $s$
on $t$ into consideration, we show the value of
$\tilde{k}_{\nu(E)}(s,t)+\tilde{k}_{\nu(E)}(t,s)$ in Figure 8. The values for
$\alpha=3$ are larger than those for $\alpha=0.5$, which implies that the
overall impacts on $y_{i}$ for $\alpha=3$ are larger than those for $\alpha=0.5$.
Moreover, the value is large if $s+t$ is small. This is because for
$x_{i}(s,t)\in[0,0.1]$, $x_{i}(s,t)^{-\alpha+\alpha|s+t|}$ is large if $s+t$
is small. Furthermore, the values around $(s,t)=(1,0)$ and $(0,1)$ are also
large since $x_{i}$ has the form
$x_{i}(s,t)=\sum_{j,l=0}^{5}\eta_{j,l}s^{j}t^{l}$ for $\eta_{j,l}\in[0,0.1]$
and $x_{i}(s,t)$ itself is large around $(s,t)=(1,0)$ and $(0,1)$, which
results in $x_{i}(s,t)^{-\alpha+\alpha|s+t|}\approx x_{i}(s,t)$ being large.
(a) $\alpha=3$
(b) $\alpha=0.5$
Figure 8: Heat map representing the value of the integral kernel of $\nu([0,1])$
### 6.4 Other applications
#### 6.4.1 Maximum mean discrepancy with kernel mean embedding
Maximum mean discrepancy (MMD) is a metric on measures defined by the
largest difference in means over a certain subset of a function space. It is
also known as an integral probability metric (IPM). For a set $\mathcal{U}$ of
real-valued bounded measurable functions on $\mathcal{X}$ and two real-valued
probability measures $\mu$ and $\nu$, MMD $\gamma(\mu,\nu,\mathcal{U})$ is
defined as follows (Müller, 1997; Gretton et al., 2012):
$\sup_{u\in\mathcal{U}}\bigg{|}\int_{x\in\mathcal{X}}u(x)d\mu(x)-\int_{x\in\mathcal{X}}u(x)d\nu(x)\bigg{|}.$
For example, if $\mathcal{U}$ is the unit ball of an RKHS, denoted as
$\mathcal{U}_{\operatorname{RKHS}}$, the MMD can be represented using the KME
$\tilde{\Phi}$ in the RKHS as
$\gamma(\mu,\nu,\mathcal{U}_{\operatorname{RKHS}})=\|\tilde{\Phi}(\mu)-\tilde{\Phi}(\nu)\|_{\mathcal{H}_{\tilde{k}}}$.
In addition, let $\mathcal{U}_{\operatorname{K}}=\\{u\mid\ \|u\|_{L}\leq 1\\}$
and let $\mathcal{U}_{\operatorname{D}}=\\{u\mid\ \|u\|_{\infty}+\|u\|_{L}\leq
1\\}$, where $\|u\|_{L}:=\sup_{x\neq y}|u(x)-u(y)|/|x-y|$, and
$\|u\|_{\infty}$ is the sup norm of $u$. The MMDs with
$\mathcal{U}_{\operatorname{K}}$ and $\mathcal{U}_{\operatorname{D}}$ are also
discussed in Rachev (1985); Dudley (2002); Sriperumbudur et al. (2012).
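As a concrete illustration of the RKHS case, the biased empirical estimator of $\gamma(\mu,\nu,\mathcal{U}_{\operatorname{RKHS}})$ for the empirical measures of two samples can be sketched as follows. The Gaussian kernel, bandwidth, and sample data below are illustrative assumptions, not part of the formal development:

```python
import numpy as np

def gaussian_gram(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd(X, Y, sigma=1.0):
    # Biased empirical estimate of ||Phi_tilde(mu) - Phi_tilde(nu)||_H for
    # the empirical measures of the two samples.
    Kxx = gaussian_gram(X, X, sigma)
    Kyy = gaussian_gram(Y, Y, sigma)
    Kxy = gaussian_gram(X, Y, sigma)
    return np.sqrt(max(Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean(), 0.0))

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))
Y = rng.normal(1.0, 1.0, size=(200, 2))
print(mmd(X, X))  # 0.0 for identical samples
print(mmd(X, Y))  # strictly positive for shifted samples
```

The `max(..., 0.0)` guard only protects the square root against floating-point cancellation; the population quantity is nonnegative by construction.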
Let $\mathcal{X}$ be a locally compact Hausdorff space, let
$\mathcal{U}_{\mathcal{A}}$ be a set of $\mathcal{A}$-valued bounded and
measurable functions, and let
$\mu,\nu\in\mathcal{D}(\mathcal{X},\mathcal{A})$. We generalize the MMD to
that for $\mathcal{A}$-valued measures as follows:
$\gamma_{\mathcal{A}}(\mu,\nu,\mathcal{U}_{\mathcal{A}}):=\sup_{u\in\mathcal{U}_{\mathcal{A}}}\bigg{|}\int_{x\in\mathcal{X}}u(x)d\mu(x)-\int_{x\in\mathcal{X}}u(x)d\nu(x)\bigg{|}_{\mathcal{A}},$
where the supremum is taken with respect to a (pre) order in $\mathcal{A}$
(see Definition 2.9). Let $k$ be an $\mathcal{A}$-valued positive definite
kernel and let $\mathcal{M}_{k}$ be the RKHM associated with $k$. We assume
Assumption 5.3. Let $\Phi$ be the KME defined in Section 5.1. The following
theorem shows that similar to the case of RKHS, if $\mathcal{U}_{\mathcal{A}}$
is the unit ball of an RKHM, the generalized MMD
$\gamma_{\mathcal{A}}(\mu,\nu,\mathcal{U}_{\mathcal{A}})$ can also be
represented using the proposed KME in the RKHM.
###### Proposition 6.19
Let $\mathcal{U}_{\operatorname{RKHM}}:=\\{u\in\mathcal{M}_{k}\mid\
\|u\|_{\mathcal{M}_{k}}\leq 1\\}$. Then, for
$\mu,\nu\in\mathcal{D}(\mathcal{X},\mathcal{A})$, we have
$\gamma_{\mathcal{A}}(\mu,\nu,\mathcal{U}_{\operatorname{RKHM}})=|\Phi(\mu)-\Phi(\nu)|_{\mathcal{M}_{k}}.$
Proof By the Cauchy–Schwarz inequality (Lemma 2.16), we have
$\displaystyle\bigg{|}\int_{x\in\mathcal{X}}d\mu^{*}u(x)-\int_{x\in\mathcal{X}}d\nu^{*}u(x)\bigg{|}_{\mathcal{A}}=|\left\langle\Phi(\mu-\nu),u\right\rangle_{\mathcal{M}_{k}}|_{\mathcal{A}}$
$\displaystyle\qquad\leq_{\mathcal{A}}\|u\|_{\mathcal{M}_{k}}|\Phi(\mu-\nu)|_{\mathcal{M}_{k}}\;\leq_{\mathcal{A}}|\Phi(\mu-\nu)|_{\mathcal{M}_{k}}$
for any $u\in\mathcal{M}_{k}$ such that $\|u\|_{\mathcal{M}_{k}}\leq 1$. Let
$\epsilon>0$. We put $v=\Phi(\mu-\nu)$ and
$u_{\epsilon}=v(|v|_{\mathcal{M}_{k}}+\epsilon 1_{\mathcal{A}})^{-1}$. In the
same manner as Proposition 6.18, $|\Phi(\mu-\nu)|_{\mathcal{M}_{k}}$ is shown
to be the supremum of
$|\int_{x\in\mathcal{X}}d\mu^{*}u(x)-\int_{x\in\mathcal{X}}d\nu^{*}u(x)|_{\mathcal{A}}$.
Various methods with the existing MMD of real-valued probability measures are
generalized to $\mathcal{A}$-valued measures by applying our MMD. Using our
MMD of $\mathcal{A}$-valued measures instead of the existing MMD allows us to
evaluate discrepancies between measures regarding each point of structured
data such as multivariate data and functional data. For example, the following
existing methods can be generalized:
Two-sample test: In two-sample test, samples from two distributions (measures)
are compared by computing the MMD of these measures (Gretton et al., 2012).
Kernel mean matching for generative models: In generative models, MMD is used
to find points whose distribution is as close as possible to that of the input
points (Jitkrittum et al., 2019).
Domain adaptation: In domain adaptation, MMD is used in describing the
difference between the distribution of target domain data and that of source
domain data (Li et al., 2019).
#### 6.4.2 Time-series data analysis with random noise
Recently, random dynamical systems, which are (nonlinear) dynamical systems
with random effects, have been extensively researched. Analyses of such
systems that generalize the discussion in Subsection 6.2.1 using the existing
KME in RKHSs have been proposed (Klus et al., 2020; Hashimoto et al., 2020).
We can apply our KME of $\mathcal{A}$-valued measures to generalize the
analysis proposed in Subsection 6.2.2 to random dynamical systems. Then, we
can extract continuous behaviors of the time evolution of functions with
consideration of random noise.
## 7 Connection with existing methods
In this section, we discuss connections between the proposed methods and
existing methods. We show the connection with the PCA in vvRKHSs in Subsection
7.1 and the connection with an existing notion in quantum mechanics in
Subsection 7.2.
### 7.1 Connection with PCA in vvRKHSs
We show that PCA in vvRKHSs is a special case of the proposed PCA in RKHMs.
Let $\mathcal{W}$ be a Hilbert space and set
$\mathcal{A}=\mathcal{B}(\mathcal{W})$. Let
$k:\mathcal{X}\times\mathcal{X}\to\mathcal{B}(\mathcal{W})$ be a
$\mathcal{B}(\mathcal{W})$-valued positive definite kernel. In addition, let
$x_{1},\ldots,x_{n}\in\mathcal{X}$ be given data and
$w_{1,1},\ldots,w_{1,N},\ldots,w_{n,1},\ldots,w_{n,N}\in\mathcal{W}$ be fixed
vectors in $\mathcal{W}$. The following proposition shows that we can
reconstruct principal components of PCA in vvRKHSs by using the proposed PCA
in RKHMs.
###### Proposition 7.1
Let $W_{j}:\mathcal{X}\to\mathcal{W}$ be a map satisfying
$W_{j}(x_{i})=w_{i,j}$ for $j=1,\ldots,N$, let $W=[W_{1},\ldots,W_{N}]$, and
let $\hat{k}:\mathcal{X}\times\mathcal{X}\to\mathbb{C}^{N\times N}$ be defined
as $\hat{k}(x,y)=W(x)^{*}k(x,y)W(y)$. Let
$\\{q_{1},\ldots,q_{r}\\}\subseteq\mathcal{F}_{\hat{k}}$ be a solution of the
minimization problem
$\min_{\\{q_{j}\\}_{j=1}^{r}\subseteq\mathcal{F}_{\hat{k}}\mbox{\footnotesize:
ONS}}\;\sum_{i=1}^{n}\operatorname{tr}\Big{(}\big{|}\phi(x_{i})-\sum_{j=1}^{r}q_{j}\left\langle
q_{j},\phi(x_{i})\right\rangle_{\mathcal{M}_{\hat{k}}}\big{|}_{\mathcal{M}_{\hat{k}}}^{2}\Big{)},$
(30)
where $\mathcal{F}_{\hat{k}}=\\{v\in\mathcal{M}_{\hat{k}}\mid\ v(x)\mbox{ is a
rank $1$ operator for any }x\in\mathcal{X}\\}$. In addition, let
$p_{1},\ldots,p_{r}\in\mathcal{H}_{k}^{\operatorname{v}}$ be the solution of
the minimization problem
$\min_{\\{p_{j}\\}_{j=1}^{r}\subseteq\mathcal{H}_{k}^{\operatorname{v}}\mbox{\footnotesize:
ONS}}\;\sum_{i=1}^{n}\sum_{l=1}^{N}\bigg{\|}\phi(x_{i})w_{i,l}-\sum_{j=1}^{r}p_{j}\left\langle
p_{j},\phi(x_{i})w_{i,l}\right\rangle_{\mathcal{H}_{k}^{\operatorname{v}}}\bigg{\|}_{\mathcal{H}_{k}^{\operatorname{v}}}^{2}.$
(31)
Then, $\|(\langle
q_{j},\hat{\phi}(x_{i})\rangle_{\mathcal{M}_{\hat{k}}})_{l}\|_{\mathbb{C}^{N}}=|\left\langle
p_{j},\phi(x_{i})w_{i,l}\right\rangle_{\mathcal{H}_{k}^{\operatorname{v}}}|$
for $i=1,\ldots,n$, $j=1,\ldots,r$, and $l=1,\ldots,N$. Here, $(\langle
q_{j},\hat{\phi}(x_{i})\rangle_{\mathcal{M}_{\hat{k}}})_{l}$ is the $l$-th
column of the matrix $\langle
q_{j},\hat{\phi}(x_{i})\rangle_{\mathcal{M}_{\hat{k}}}\in\mathbb{C}^{N\times
N}$.
Proof Let $\mathbf{G}\in(\mathbb{C}^{N\times N})^{n\times n}$ be defined as
$\mathbf{G}_{i,j}=\hat{k}(x_{i},x_{j})$. By Proposition 6.8, any solution of
the problem (30) is represented as
$q_{j}=\sum_{i=1}^{n}\hat{\phi}(x_{i})c_{i,j}$, where $j=1,\ldots,r$ and
$[c_{1,j},\ldots,c_{n,j}]^{T}=\lambda_{j}^{-1/2}\mathbf{v}_{j}u^{*}$ for any
normalized vector $u\in\mathbb{C}^{N}$. Here, $\lambda_{j}$ are the $r$
largest eigenvalues and $\mathbf{v}_{j}$ are the corresponding orthonormal
eigenvectors of the matrix $\mathbf{G}$. Therefore, by the definition of
$\hat{k}$, the principal components are calculated as
$\langle
q_{j},\hat{\phi}(x_{i})\rangle_{\mathcal{M}_{\hat{k}}}^{*}=\lambda_{j}^{-1/2}W(x_{i})^{*}[k(x_{i},x_{1})W(x_{1}),\ldots,k(x_{i},x_{n})W(x_{n})]\mathbf{v}_{j}u^{*}.$
On the other hand, in the same manner as Proposition 6.8, the solution of the
problem (31) is shown to be represented as
$p_{j}=\sum_{i=1}^{n}\sum_{l=1}^{N}\phi(x_{i})w_{i,l}\alpha_{(i-1)N+l,j}$,
where $j=1,\ldots,r$ and
$[\alpha_{1,j},\ldots,\alpha_{Nn,j}]^{T}=\lambda_{j}^{-1/2}\mathbf{v}_{j}$.
Therefore, the principal components are calculated as
$\overline{\left\langle
p_{j},\phi(x_{i})w_{i,l}\right\rangle_{\mathcal{H}_{k}^{\operatorname{v}}}}=\lambda_{j}^{-1/2}W_{l}(x_{i})^{*}[k(x_{i},x_{1})W(x_{1}),\ldots,k(x_{i},x_{n})W(x_{n})]\mathbf{v}_{j},$
which completes the proof of the proposition.
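In the scalar case $N=1$ (so that $\hat{k}$ is complex-valued), the representation $q_{j}=\sum_{i}\hat{\phi}(x_{i})c_{i,j}$ with coefficients built from the top eigenpairs of the Gram matrix, as used in the proof above, reduces to ordinary kernel PCA. A minimal sketch under that assumption; the linear kernel and random data are illustrative:

```python
import numpy as np

def kernel_pca_components(K, r):
    # Principal components <q_j, phi(x_i)> = lambda_j^{-1/2} (K v_j)_i
    # computed from the top-r eigenpairs of the Gram matrix K (scalar
    # case N = 1 of the representation in the proof of Proposition 7.1).
    w, V = np.linalg.eigh(K)           # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:r]      # indices of the r largest eigenvalues
    lam, vecs = w[idx], V[:, idx]
    return (K @ vecs) / np.sqrt(lam)   # equals vecs * sqrt(lam) since K v = lam v

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))
K = X @ X.T                            # linear-kernel Gram matrix (PSD)
P = kernel_pca_components(K, 2)
# Columns of P are orthogonal, with squared norms equal to the eigenvalues.
print(abs(P[:, 0] @ P[:, 1]) < 1e-8)
```

The operator-valued case replaces the $n\times n$ Gram matrix by the $Nn\times Nn$ block matrix $\mathbf{G}$ of the proof; the eigendecomposition step is unchanged.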
### 7.2 Connection with quantum mechanics
Positive operator-valued measures play an important role in quantum mechanics.
A positive operator-valued measure is defined as an $\mathcal{A}$-valued
measure $\mu$ such that $\mu(\mathcal{X})=I$ and $\mu(E)$ is positive for any
Borel set $E$. It enables us to extract information of the probabilities of
outcomes from a state (Peres and Terno, 2004; Holevo, 2011). We show that the
existing inner product considered for quantum states (Balkir, 2014; Deb, 2016)
is generalized with our KME of positive operator-valued measures.
Let $\mathcal{X}=\mathbb{C}^{m}$ and $\mathcal{A}=\mathbb{C}^{m\times m}$. Let
$\rho\in\mathcal{A}$ be a positive semi-definite matrix with unit trace,
called a density matrix. A density matrix describes the states of a quantum
system, and information about outcomes is described by the measure
$\mu\rho\in\mathcal{D}(\mathcal{X},\mathcal{A})$ for a positive
operator-valued measure $\mu$. We have the following
proposition. Here, we use the bra-ket notation, i.e.,
$|\alpha\rangle\in\mathcal{X}$ represents a (column) vector in $\mathcal{X}$,
and $\langle\alpha|$ is defined as $\langle\alpha|=|\alpha\rangle^{*}$:
###### Proposition 7.2
Assume $\mathcal{X}=\mathbb{C}^{m}$, $\mathcal{A}=\mathbb{C}^{m\times m}$, and
$k:\mathcal{X}\times\mathcal{X}\to\mathcal{A}$ is a positive definite kernel
defined as
$k(|\alpha\rangle,|\beta\rangle)=|\alpha\rangle\langle\alpha|\beta\rangle\langle\beta|$.
If $\mu$ is represented as
$\mu=\sum_{i=1}^{m}\delta_{|\psi_{i}\rangle}|\psi_{i}\rangle\langle\psi_{i}|$
for an orthonormal basis $\\{|\psi_{1}\rangle,\ldots,|\psi_{m}\rangle\\}$ of
$\mathcal{X}$, then for any $\rho_{1},\rho_{2}\in\mathcal{A}$,
$\operatorname{tr}(\left\langle\Phi(\mu\rho_{1}),\Phi(\mu\rho_{2})\right\rangle_{\mathcal{M}_{k}})=\left\langle\rho_{1},\rho_{2}\right\rangle_{\operatorname{HS}}$
holds. Here, $\left\langle\cdot,\cdot\right\rangle_{\operatorname{HS}}$ is the
Hilbert–Schmidt inner product.
Proof Let $M_{i}=|\psi_{i}\rangle\langle\psi_{i}|$ for $i=1,\ldots,m$. The
inner product between $\Phi(\mu\rho_{1})$ and $\Phi(\mu\rho_{2})$ is
calculated as follows:
$\displaystyle\left\langle\Phi(\mu\rho_{1}),\Phi(\mu\rho_{2})\right\rangle_{\mathcal{M}_{k}}$
$\displaystyle=\int_{x\in\mathcal{X}}\int_{y\in\mathcal{X}}d(\mu\rho_{1})^{*}(x)\,k(x,y)\,d(\mu\rho_{2})(y)=\sum_{i,j=1}^{m}\rho_{1}^{*}M_{i}k(|\psi_{i}\rangle,|\psi_{j}\rangle)M_{j}\rho_{2}.$
Since the identity $k(|\psi_{i}\rangle,|\psi_{j}\rangle)=M_{i}M_{j}$ holds and
$\\{|\psi_{1}\rangle,\ldots,|\psi_{m}\rangle\\}$ is orthonormal, we have
$\left\langle\Phi(\mu\rho_{1}),\Phi(\mu\rho_{2})\right\rangle_{\mathcal{M}_{k}}=\sum_{i=1}^{m}\rho_{1}^{*}M_{i}\rho_{2}$.
By using the identity $\sum_{i=1}^{m}M_{i}=I$, we have
$\operatorname{tr}\bigg{(}\sum_{i=1}^{m}\rho_{1}^{*}M_{i}\rho_{2}\bigg{)}=\operatorname{tr}\bigg{(}\sum_{i=1}^{m}M_{i}\rho_{2}\rho_{1}^{*}\bigg{)}=\operatorname{tr}(\rho_{2}\rho_{1}^{*}),$
which completes the proof of the proposition.
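The identity in Proposition 7.2 can be checked numerically for $\mathcal{A}=\mathbb{C}^{m\times m}$, following the proof's formula $\sum_{i}\rho_{1}^{*}M_{i}\rho_{2}$. The random density matrices and the standard orthonormal basis below are illustrative choices:

```python
import numpy as np

def kme_inner_trace(rho1, rho2, basis):
    # tr(<Phi(mu rho1), Phi(mu rho2)>_{M_k}) for mu = sum_i delta_{|psi_i>} M_i,
    # computed via the proof's formula sum_i rho1^* M_i rho2,
    # where M_i = |psi_i><psi_i|.
    acc = np.zeros_like(rho1, dtype=complex)
    for psi in basis:
        M = np.outer(psi, psi.conj())
        acc += rho1.conj().T @ M @ rho2
    return np.trace(acc)

rng = np.random.default_rng(2)

def random_density(m):
    # Random positive semidefinite matrix with unit trace (a density matrix).
    A = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

m = 3
rho1, rho2 = random_density(m), random_density(m)
basis = [np.eye(m)[:, i] for i in range(m)]      # standard orthonormal basis
lhs = kme_inner_trace(rho1, rho2, basis)
rhs = np.trace(rho1.conj().T @ rho2)             # Hilbert-Schmidt inner product
print(np.allclose(lhs, rhs))  # True
```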
In previous studies (Balkir, 2014; Deb, 2016), the Hilbert–Schmidt inner
product between density matrices was considered to represent similarities
between two quantum states. Liu and Rebentrost (2018) considered the
Hilbert–Schmidt inner product between square roots of density matrices.
Proposition 7.2 shows that these inner products are represented via our KME in
RKHMs.
## 8 Conclusions and future works
In this paper, we proposed a new data analysis framework with RKHM and
developed a KME in RKHMs for analyzing distributions. We showed the
theoretical validity for applying those to data analysis. Then, we applied it
to kernel PCA, time-series data analysis, and analysis of interaction effects
in finite or infinite dimensional data. RKHM is a generalization of RKHS in
terms of $C^{*}$-algebra, and we can extract rich information about structures
in data such as functional data by using $C^{*}$-algebras. For example, we can
reduce multi-variable functional data to functions of a single variable by
considering the space of functions of a single variable as a $C^{*}$-algebra
and then applying the proposed PCA in RKHMs. Moreover, we can extract
information about interaction effects in continuously distributed spatial data by
considering the space of bounded linear operators on a function space as a
$C^{*}$-algebra.
As future work, we will address $C^{*}$-algebra-valued supervised problems on
the basis of the representer theorem (Theorem 4.8) and apply the proposed KME
in RKHMs to quantum mechanics.
## Acknowledgments
We would like to thank Dr. Tomoki Mihara, whose comments improved the
mathematical rigor of this paper. This work was partially supported by
JST CREST Grant Number JPMJCR1913.
## Appendix A Proofs of the lemmas and propositions in Section 4
### Proof of Proposition 4.5
(Existence) For $u,v\in\mathcal{M}_{k}$, there exist
$u_{i},v_{i}\in\mathcal{M}_{k,0}\ (i=1,2,\ldots)$ such that
$u=\lim_{i\to\infty}u_{i}$ and $v=\lim_{i\to\infty}v_{i}$. By the
Cauchy–Schwarz inequality (Lemma 2.16), the following inequalities hold:
$\displaystyle\|\left\langle
u_{i},v_{i}\right\rangle_{\mathcal{M}_{k}}-\left\langle
u_{j},v_{j}\right\rangle_{\mathcal{M}_{k}}\|_{\mathcal{A}}$
$\displaystyle\leq\|\left\langle
u_{i},v_{i}-v_{j}\right\rangle_{\mathcal{M}_{k}}\|_{\mathcal{A}}+\|\left\langle
u_{i}-u_{j},v_{j}\right\rangle_{\mathcal{M}_{k}}\|_{\mathcal{A}}$
$\displaystyle\leq\|u_{i}\|_{\mathcal{M}_{k}}\;\|v_{i}-v_{j}\|_{\mathcal{M}_{k}}+\|u_{i}-u_{j}\|_{\mathcal{M}_{k}}\;\|v_{j}\|_{\mathcal{M}_{k}}$
$\displaystyle\to 0\ (i,j\to\infty),$
which implies $\\{\left\langle
u_{i},v_{i}\right\rangle_{\mathcal{M}_{k}}\\}_{i=1}^{\infty}$ is a Cauchy
sequence in $\mathcal{A}$. By the completeness of $\mathcal{A}$, there exists
a limit $\lim_{i\to\infty}\left\langle
u_{i},v_{i}\right\rangle_{\mathcal{M}_{k}}$.
(Well-definedness) Assume there exist
$u^{\prime}_{i},v^{\prime}_{i}\in\mathcal{M}_{k,0}\ (i=1,2,\ldots)$ such that
$u=\lim_{i\to\infty}u_{i}=\lim_{i\to\infty}u^{\prime}_{i}$ and
$v=\lim_{i\to\infty}v_{i}=\lim_{i\to\infty}v^{\prime}_{i}$. By the Cauchy-
Schwarz inequality (Lemma 2.16), we have
$\|\left\langle u_{i},v_{i}\right\rangle_{\mathcal{M}_{k}}-\left\langle
u^{\prime}_{i},v^{\prime}_{i}\right\rangle_{\mathcal{M}_{k}}\|_{\mathcal{A}}\leq\|u_{i}\|_{\mathcal{M}_{k}}\|v_{i}-v^{\prime}_{i}\|_{\mathcal{M}_{k}}+\|u_{i}-u^{\prime}_{i}\|_{\mathcal{M}_{k}}\|v^{\prime}_{i}\|_{\mathcal{M}_{k}}\to
0\ (i\to\infty),$
which implies $\lim_{i\to\infty}\left\langle
u_{i},v_{i}\right\rangle_{\mathcal{M}_{k}}=\lim_{i\to\infty}\left\langle
u^{\prime}_{i},v^{\prime}_{i}\right\rangle_{\mathcal{M}_{k}}$.
(Injectivity) For $u,v\in\mathcal{M}_{k}$, we assume
$\left\langle\phi(x),u\right\rangle_{\mathcal{M}_{k}}=\left\langle\phi(x),v\right\rangle_{\mathcal{M}_{k}}$
for $x\in\mathcal{X}$. By the linearity of
$\left\langle\cdot,\cdot\right\rangle_{\mathcal{M}_{k}}$, $\left\langle
p,u\right\rangle_{\mathcal{M}_{k}}=\left\langle
p,v\right\rangle_{\mathcal{M}_{k}}$ holds for $p\in\mathcal{M}_{k,0}$. For
$p\in\mathcal{M}_{k}$, there exist $p_{i}\in\mathcal{M}_{k,0}\ (i=1,2,\ldots)$
such that $p=\lim_{i\to\infty}p_{i}$. Therefore, $\left\langle
p,u-v\right\rangle_{\mathcal{M}_{k}}=\lim_{i\to\infty}\left\langle
p_{i},u-v\right\rangle_{\mathcal{M}_{k}}=0$. As a result, $\left\langle
u-v,u-v\right\rangle_{\mathcal{M}_{k}}=0$ holds by setting $p=u-v$, which
implies $u=v$.
### Proof of Proposition 4.6
We define $\Psi:\mathcal{M}_{k,0}\to\mathcal{M}$ as an $\mathcal{A}$-linear
map that satisfies $\Psi(\phi(x))=\psi(x)$. We show that $\Psi$ can be
extended to a unique $\mathcal{A}$-linear bijective map from $\mathcal{M}_{k}$
to $\mathcal{M}$, which preserves the inner product.
(Uniqueness) The uniqueness follows by the definition of $\Psi$.
(Inner product preservation) For $x,y\in\mathcal{X}$, we have
$\left\langle\Psi(\phi(x)),\Psi(\phi(y))\right\rangle_{\mathcal{M}_{k}}=\left\langle\psi(x),\psi(y)\right\rangle_{\mathcal{M}}=k(x,y)=\left\langle\phi(x),\phi(y)\right\rangle_{\mathcal{M}_{k}}.$
Since $\Psi$ is $\mathcal{A}$-linear, $\Psi$ preserves the inner products
between arbitrary $u,v\in\mathcal{M}_{k,0}$.
(Well-definedness) Since $\Psi$ preserves the inner product on
$\mathcal{M}_{k,0}$, if
$\\{v_{i}\\}_{i=1}^{\infty}\subseteq\mathcal{M}_{k,0}$ is a Cauchy sequence,
then $\\{\Psi(v_{i})\\}_{i=1}^{\infty}\subseteq\mathcal{M}$ is also a Cauchy
sequence. Therefore, by the completeness of $\mathcal{M}$, $\Psi$ extends to
$\mathcal{M}_{k}$ and preserves the inner product there, so for
$v\in\mathcal{M}_{k}$, $\|\Psi(v)\|_{\mathcal{M}}=\|v\|_{\mathcal{M}_{k}}$
holds. As a result, for $v\in\mathcal{M}_{k}$, if $v=0$, then
$\|\Psi(v)\|_{\mathcal{M}}=\|v\|_{\mathcal{M}_{k}}=0$ holds, which implies
$\Psi(v)=0$.
(Injectivity) For $u,v\in\mathcal{M}_{k}$, if $\Psi(u)=\Psi(v)$, then
$0=\|\Psi(u)-\Psi(v)\|_{\mathcal{M}}=\|u-v\|_{\mathcal{M}_{k}}$ holds since
$\Psi$ preserves the inner product, which implies $u=v$.
(Surjectivity) It follows directly by the condition
$\overline{\\{\sum_{i=0}^{n}\psi(x_{i})c_{i}\mid\ x_{i}\in\mathcal{X},\
c_{i}\in\mathcal{A}\\}}=\mathcal{M}$.
### Proof of Lemma 4.10
Let $k$ be an $\mathcal{A}$-valued positive definite kernel defined in
Definition 2.21. Let $w\in\mathcal{W}$. For $n\in\mathbb{N}$,
$w_{1},\ldots,w_{n}\in\mathcal{W}$, let $c_{i}\in\mathcal{B}(\mathcal{W})$ be
defined as $c_{i}h:=\left\langle w,h\right\rangle_{\mathcal{W}}/\left\langle
w,w\right\rangle_{\mathcal{W}}w_{i}$ for $h\in\mathcal{W}$. Since
$w_{i}=c_{i}w$ holds, the following equalities are derived for
$x_{1},\ldots,x_{n}\in\mathcal{X}$:
$\displaystyle\sum_{i,j=1}^{n}\left\langle
w_{i},k(x_{i},x_{j})w_{j}\right\rangle_{\mathcal{W}}$
$\displaystyle=\sum_{i,j=1}^{n}\left\langle
c_{i}w,k(x_{i},x_{j})c_{j}w\right\rangle_{\mathcal{W}}=\bigg{\langle}w,\sum_{i,j=1}^{n}c_{i}^{*}k(x_{i},x_{j})c_{j}w\bigg{\rangle}_{\mathcal{W}}.$
By the positivity of $\sum_{i,j=1}^{n}c_{i}^{*}k(x_{i},x_{j})c_{j}$, $\langle
w,\sum_{i,j=1}^{n}c_{i}^{*}k(x_{i},x_{j})c_{j}w\rangle_{\mathcal{W}}\geq 0$
holds, which implies $k$ is an operator valued positive definite kernel
defined in Definition 2.2.
On the other hand, let $k$ be an operator valued positive definite kernel
defined in Definition 2.2. Let $w\in\mathcal{W}$. For $n\in\mathbb{N}$,
$c_{1},\ldots,c_{n}\in\mathcal{A}$ and $x_{1},\ldots,x_{n}\in\mathcal{X}$, the
following equality is derived:
$\displaystyle\bigg{\langle}w,\sum_{i,j=1}^{n}c_{i}^{*}k(x_{i},x_{j})c_{j}w\bigg{\rangle}_{\mathcal{W}}\\!\\!\\!\\!=\sum_{i,j=1}^{n}\left\langle
c_{i}w,k(x_{i},x_{j})c_{j}w\right\rangle_{\mathcal{W}}.$
By Definition 2.2, $\sum_{i,j=1}^{n}\left\langle
c_{i}w,k(x_{i},x_{j})c_{j}w\right\rangle_{\mathcal{W}}\geq 0$ holds, which
implies $k$ is an $\mathcal{A}$-valued positive definite kernel defined in
Definition 2.21.
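To make the positivity condition of Definition 2.21 concrete for $\mathcal{A}=\mathbb{C}^{2\times 2}$, the following sketch evaluates $\sum_{i,j=1}^{n}c_{i}^{*}k(x_{i},x_{j})c_{j}$ and checks that the result is a positive element of $\mathcal{A}$. The kernel $k(x,y)=\tilde{k}(x,y)B$, a scalar Gaussian $\tilde{k}$ times a fixed PSD matrix $B$, and the random coefficients are illustrative assumptions:

```python
import numpy as np

def positivity_sum(kernel_blocks, cs):
    # sum_{i,j} c_i^* k(x_i, x_j) c_j, the A-valued positivity
    # condition of Definition 2.21 with A = C^{m x m}.
    m = cs[0].shape[0]
    S = np.zeros((m, m), dtype=complex)
    for i, ci in enumerate(cs):
        for j, cj in enumerate(cs):
            S += ci.conj().T @ kernel_blocks[i][j] @ cj
    return S

rng = np.random.default_rng(3)
xs = rng.normal(size=(4, 2))
B = np.array([[2.0, 1.0], [1.0, 2.0]])  # fixed PSD matrix
kernel_blocks = [[np.exp(-np.sum((x - y) ** 2)) * B for y in xs] for x in xs]
cs = [rng.normal(size=(2, 2)) for _ in range(4)]
S = positivity_sum(kernel_blocks, cs)
print(np.linalg.eigvalsh(S).min() >= -1e-10)  # S >= 0 in A
```

Applying the same sum with vectors $w_{i}=c_{i}w$ in place of the matrices $c_{i}$ recovers the scalar condition of Definition 2.2, mirroring the equivalence shown in the proof above.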
## Appendix B $\mathcal{A}$-valued measure and integral
We introduce $\mathcal{A}$-valued measure and integral in preparation for
defining a KME in RKHMs. $\mathcal{A}$-valued measure and integral are special
cases of vector measure and integral (Dinculeanu, 1967, 2000), respectively.
Here, we review these notions especially for the case of $\mathcal{A}$-valued
ones. The notions of measures and Lebesgue integrals are generalized to the
$\mathcal{A}$-valued setting. The left and right integrals of an
$\mathcal{A}$-valued function $u$ with respect to an $\mathcal{A}$-valued
measure $\mu$ are defined through $\mathcal{A}$-valued step functions.
###### Definition B.1 ($\mathcal{A}$-valued measure)
Let $\varSigma$ be a $\sigma$-algebra on $\mathcal{X}$.
1. 1.
An $\mathcal{A}$-valued map $\mu:\varSigma\to\mathcal{A}$ is called a
(countably additive) $\mathcal{A}$-valued measure if
$\mu(\bigcup_{i=1}^{\infty}E_{i})=\sum_{i=1}^{\infty}\mu(E_{i})$ for all
countable collections $\\{E_{i}\\}_{i=1}^{\infty}$ of pairwise disjoint sets
in $\varSigma$.
2. 2.
An $\mathcal{A}$-valued measure $\mu$ is said to be finite if
$|\mu|(E):=\sup\\{\sum_{i=1}^{n}\|\mu(E_{i})\|_{\mathcal{A}}\mid\
n\in\mathbb{N},\ \\{E_{i}\\}_{i=1}^{n}\mbox{ is a finite partition of
}E\in\varSigma\\}<\infty$. We call $|\mu|$ the total variation of $\mu$.
3. 3.
An $\mathcal{A}$-valued measure $\mu$ is said to be regular if for all
$E\in\varSigma$ and $\epsilon>0$, there exist a compact set $K\subseteq E$ and
an open set $G\supseteq E$ such that $\|\mu(F)\|_{\mathcal{A}}\leq\epsilon$
for any $F\subseteq G\setminus K$. The regularity corresponds to the
continuity of $\mathcal{A}$-valued measures.
4. 4.
An $\mathcal{A}$-valued measure $\mu$ is called a Borel measure if
$\varSigma=\mathcal{B}$, where $\mathcal{B}$ is the Borel $\sigma$-algebra on
$\mathcal{X}$ ($\sigma$-algebra generated by all compact subsets of
$\mathcal{X}$).
The set of all $\mathcal{A}$-valued finite regular Borel measures is denoted
as $\mathcal{D}(\mathcal{X},\mathcal{A})$.
###### Definition B.2 ($\mathcal{A}$-valued Dirac measure)
For $x\in\mathcal{X}$, we define
$\delta_{x}\in\mathcal{D}(\mathcal{X},\mathcal{A})$ as
$\delta_{x}(E)=1_{\mathcal{A}}$ for $x\in E$ and $\delta_{x}(E)=0$ for
$x\notin E$. The measure $\delta_{x}$ is referred to as the
$\mathcal{A}$-valued Dirac measure at $x$.
Similar to the Lebesgue integrals, an integral of an $\mathcal{A}$-valued
function with respect to an $\mathcal{A}$-valued measure is defined through
$\mathcal{A}$-valued step functions.
###### Definition B.3 (Step function)
An $\mathcal{A}$-valued map $s:\mathcal{X}\to\mathcal{A}$ is called a step
function if $s(x)=\sum_{i=1}^{n}c_{i}\chi_{E_{i}}(x)$ for some
$n\in\mathbb{N}$, $c_{i}\in\mathcal{A}$ and finite partition
$\\{E_{i}\\}_{i=1}^{n}$ of $\mathcal{X}$, where
$\chi_{E}:\mathcal{X}\to\\{0,1\\}$ is the indicator function for
$E\in\mathcal{B}$. The set of all $\mathcal{A}$-valued step functions on
$\mathcal{X}$ is denoted as $\mathcal{S}(\mathcal{X},\mathcal{A})$.
###### Definition B.4 (Integrals of functions in
$\mathcal{S}(\mathcal{X},\mathcal{A})$)
For $s\in\mathcal{S}(\mathcal{X},\mathcal{A})$ and
$\mu\in\mathcal{D}(\mathcal{X},\mathcal{A})$, the left and right integrals of
$s$ with respect to $\mu$ are respectively defined as
$\int_{x\in\mathcal{X}}s(x)d\mu(x):=\sum_{i=1}^{n}c_{i}\mu(E_{i}),\quad\int_{x\in\mathcal{X}}d\mu(x)s(x):=\sum_{i=1}^{n}\mu(E_{i})c_{i}.$
As we explain below, the integrals of step functions are extended to those of
“integrable functions”. For a real positive finite measure $\nu$, let
${L}^{1}_{\nu}(\mathcal{X},\mathcal{A})$ be the set of all
$\mathcal{A}$-valued $\nu$-Bochner integrable functions on $\mathcal{X}$,
i.e., if $u\in{L}^{1}_{\nu}(\mathcal{X},\mathcal{A})$, there exists a sequence
$\\{s_{i}\\}_{i=1}^{\infty}\subseteq\mathcal{S}(\mathcal{X},\mathcal{A})$ of
step functions such that
$\lim_{i\to\infty}\int_{x\in\mathcal{X}}\|u(x)-s_{i}(x)\|_{\mathcal{A}}d\nu(x)=0$
(Diestel, 1984, Chapter IV). Note that
$u\in{L}^{1}_{\nu}(\mathcal{X},\mathcal{A})$ if and only if
$\int_{x\in\mathcal{X}}\|u(x)\|_{\mathcal{A}}d\nu(x)<\infty$, and
${L}^{1}_{\nu}(\mathcal{X},\mathcal{A})$ is a Banach $\mathcal{A}$-module
(i.e., a Banach space equipped with an $\mathcal{A}$-module structure) with
respect to the norm defined as
$\|u\|_{{L}^{1}_{\nu}(\mathcal{X},\mathcal{A})}=\int_{x\in\mathcal{X}}\|u(x)\|_{\mathcal{A}}d\nu(x)$.
###### Definition B.5 (Integrals of functions in
${L}^{1}_{|\mu|}(\mathcal{X},\mathcal{A})$)
For $u\in{L}^{1}_{|\mu|}(\mathcal{X},\mathcal{A})$, the left and right
integrals of $u$ with respect to $\mu$ are respectively defined as
$\lim_{i\to\infty}\int_{x\in\mathcal{X}}s_{i}(x)d\mu(x),\quad\lim_{i\to\infty}\int_{x\in\mathcal{X}}d\mu(x)s_{i}(x),$
where
$\\{s_{i}\\}_{i=1}^{\infty}\subseteq\mathcal{S}(\mathcal{X},\mathcal{A})$ is a
sequence of step functions whose
${L}^{1}_{|\mu|}(\mathcal{X},\mathcal{A})$-limit is $u$.
Note that since $\mathcal{A}$ is not commutative in general, the left and
right integrals do not always coincide.
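The non-commutativity is easy to see already for $\mathcal{A}=\mathbb{C}^{2\times 2}$: for a step function $s=\sum_{i}c_{i}\chi_{E_{i}}$ and a measure taking values $\mu(E_{i})$, the left and right integrals of Definition B.4 differ whenever some $c_{i}$ and $\mu(E_{i})$ do not commute. A minimal sketch; the particular matrices are illustrative:

```python
import numpy as np

def left_integral(cs, mu_values):
    # int_X s(x) d mu(x) = sum_i c_i mu(E_i)
    return sum(c @ m for c, m in zip(cs, mu_values))

def right_integral(cs, mu_values):
    # int_X d mu(x) s(x) = sum_i mu(E_i) c_i
    return sum(m @ c for c, m in zip(cs, mu_values))

# Step function coefficients c_i and measure values mu(E_i) on a
# two-set partition {E_1, E_2}; c_1 and mu(E_1) do not commute.
cs = [np.array([[0.0, 1.0], [0.0, 0.0]]), np.eye(2)]
mu_values = [np.array([[0.0, 0.0], [1.0, 0.0]]), np.eye(2)]
L = left_integral(cs, mu_values)
R = right_integral(cs, mu_values)
print(np.allclose(L, R))  # False: the left and right integrals differ
```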
There is also a stronger notion for integrability. An $\mathcal{A}$-valued
function $u$ on $\mathcal{X}$ is said to be totally measurable if it is a
uniform limit of step functions, i.e., there exists a sequence
$\\{s_{i}\\}_{i=1}^{\infty}\subseteq\mathcal{S}(\mathcal{X},\mathcal{A})$ of
step functions such that
$\lim_{i\to\infty}\sup_{x\in\mathcal{X}}\|u(x)-s_{i}(x)\|_{\mathcal{A}}=0$. We
denote by $\mathcal{T}(\mathcal{X},\mathcal{A})$ the set of all
$\mathcal{A}$-valued totally measurable functions on $\mathcal{X}$. Note that
if $u\in\mathcal{T}(\mathcal{X},\mathcal{A})$, then
$u\in{L}^{1}_{|\mu|}(\mathcal{X},\mathcal{A})$ for any
$\mu\in\mathcal{D}(\mathcal{X},\mathcal{A})$. In fact, every continuous
function in ${C}_{0}(\mathcal{X},\mathcal{A})$ is totally measurable (see
Definition 5.1 for the definition of ${C}_{0}(\mathcal{X},\mathcal{A})$).
###### Proposition B.6
The space $C_{0}(\mathcal{X},\mathcal{A})$ is contained in
$\mathcal{T}(\mathcal{X},\mathcal{A})$. Moreover, for any real positive finite
regular measure $\nu$, it is dense in ${L}^{1}_{\nu}(\mathcal{X},\mathcal{A})$
with respect to $\|\cdot\|_{{L}^{1}_{\nu}(\mathcal{X},\mathcal{A})}$.
For further details, refer to Dinculeanu (1967, 2000).
## Appendix C Proofs of the propositions and theorem in Section 5.2
Before proving the propositions and theorem, we introduce some definitions and
show fundamental properties which are related to the propositions and theorem.
###### Definition C.1 ($\mathcal{A}$-dual)
For a Banach $\mathcal{A}$-module $\mathcal{M}$, the $\mathcal{A}$-dual of
$\mathcal{M}$ is defined as
$\mathcal{M}^{\prime}:=\\{f:\mathcal{M}\to\mathcal{A}\mid\ f\mbox{ is bounded
and $\mathcal{A}$-linear}\\}$.
Note that for a right Banach $\mathcal{A}$-module $\mathcal{M}$,
$\mathcal{M}^{\prime}$ is a left Banach $\mathcal{A}$-module.
###### Definition C.2 (Orthogonal complement)
For an $\mathcal{A}$-submodule $\mathcal{M}_{0}$ of a Banach
$\mathcal{A}$-module $\mathcal{M}$, the orthogonal complement of
$\mathcal{M}_{0}$ is defined as a closed submodule
$\mathcal{M}_{0}^{\perp}:=\bigcap_{u\in\mathcal{M}_{0}}\\{f\in\mathcal{M}^{\prime}\mid\
f(u)=0\\}$ of $\mathcal{M}^{\prime}$. In addition, for an
$\mathcal{A}$-submodule $\mathcal{N}_{0}$ of $\mathcal{M}^{\prime}$, the
orthogonal complement of $\mathcal{N}_{0}$ is defined as a closed submodule
$\mathcal{N}_{0}^{\perp}:=\bigcap_{f\in\mathcal{N}_{0}}\\{u\in\mathcal{M}\mid\
f(u)=0\\}$ of $\mathcal{M}$.
Note that for a von Neumann $\mathcal{A}$-module $\mathcal{M}$, by Proposition
4.2, $\mathcal{M}^{\prime}$ and $\mathcal{M}$ are isomorphic. The following
lemma shows a connection between an orthogonal complement and the density
property.
###### Lemma C.3
For a Banach $\mathcal{A}$-module $\mathcal{M}$ and its submodule
$\mathcal{M}_{0}$, $\mathcal{M}_{0}^{\perp}=\\{0\\}$ if $\mathcal{M}_{0}$ is
dense in $\mathcal{M}$.
Proof We first show
$\overline{\mathcal{M}_{0}}\subseteq(\mathcal{M}_{0}^{\perp})^{\perp}$. Let
$u\in\mathcal{M}_{0}$. By the definition of orthogonal complements,
$u\in(\mathcal{M}_{0}^{\perp})^{\perp}$. Since
$(\mathcal{M}_{0}^{\perp})^{\perp}$ is closed,
$\overline{\mathcal{M}_{0}}\subseteq(\mathcal{M}_{0}^{\perp})^{\perp}$. If
$\mathcal{M}_{0}$ is dense in $\mathcal{M}$, then
$\mathcal{M}\subseteq(\mathcal{M}_{0}^{\perp})^{\perp}$ holds, i.e., every
$f\in\mathcal{M}_{0}^{\perp}$ vanishes on all of $\mathcal{M}$, which means
$\mathcal{M}_{0}^{\perp}=\\{0\\}$.
Moreover, in the case of $\mathcal{A}=\mathbb{C}^{m\times m}$, a
generalization of the Riesz–Markov representation theorem for
$\mathcal{D}(\mathcal{X},\mathcal{A})$ holds.
###### Proposition C.4 (Riesz–Markov representation theorem for
$\mathbb{C}^{m\times m}$-valued measures)
Let $\mathcal{A}=\mathbb{C}^{m\times m}$. There exists an isomorphism between
$\mathcal{D}(\mathcal{X},\mathcal{A})$ and
${C}_{0}(\mathcal{X},\mathcal{A})^{\prime}$.
Proof For $f\in{C}_{0}(\mathcal{X},\mathcal{A})^{\prime}$, let
$f_{i,j}\in{C}_{0}(\mathcal{X},\mathbb{C})^{\prime}$ be defined as
$f_{i,j}(u)=(f(u1_{\mathcal{A}}))_{i,j}$ for
$u\in{C}_{0}(\mathcal{X},\mathbb{C})$. Then, by the Riesz–Markov
representation theorem for complex-valued measure, there exists a unique
finite complex-valued regular measure $\mu_{i,j}$ such that
$f_{i,j}(u)=\int_{x\in\mathcal{X}}u(x)d\mu_{i,j}(x)$. Let
$\mu(E):=[\mu_{i,j}(E)]_{i,j}$ for $E\in\mathcal{B}$. Then,
$\mu\in\mathcal{D}(\mathcal{X},\mathcal{A})$, and we have
$\displaystyle f(u)$
$\displaystyle=f\bigg{(}\sum_{l,l^{\prime}=1}^{m}u_{l,l^{\prime}}e_{l,l^{\prime}}\bigg{)}=\sum_{l,l^{\prime}=1}^{m}[f_{i,j}(u_{l,l^{\prime}})]_{i,j}e_{l,l^{\prime}}$
$\displaystyle=\sum_{l,l^{\prime}=1}^{m}\bigg{[}\int_{x\in\mathcal{X}}u_{l,l^{\prime}}(x)d\mu_{i,j}(x)\bigg{]}_{i,j}e_{l,l^{\prime}}=\int_{x\in\mathcal{X}}d\mu(x)u(x),$
where $e_{i,j}$ is an $m\times m$ matrix whose $(i,j)$-element is $1$ and all
the other elements are $0$. Therefore, if we define
$h^{\prime}:{C}_{0}(\mathcal{X},\mathcal{A})^{\prime}\to\mathcal{D}(\mathcal{X},\mathcal{A})$
as $f\mapsto\mu$, then $h^{\prime}$ is the inverse of the map
$h:\mu\mapsto f$ given by the identity above, which completes the
proof of the proposition.
### C.1 Proofs of Propositions 5.11 and 5.12
To show Propositions 5.11 and 5.12, the following lemma is used.
###### Lemma C.5
$\Phi:\mathcal{D}(\mathcal{X},\mathcal{A})\to\mathcal{M}_{k}$ is injective if
and only if
$\left\langle\Phi(\mu),\Phi(\mu)\right\rangle_{\mathcal{M}_{k}}\neq 0$ for any
nonzero $\mu\in\mathcal{D}(\mathcal{X},\mathcal{A})$.
Proof ($\Rightarrow$) Suppose there exists a nonzero
$\mu\in\mathcal{D}(\mathcal{X},\mathcal{A})$ such that
$\left\langle\Phi(\mu),\Phi(\mu)\right\rangle_{\mathcal{M}_{k}}=0$. Then,
$\Phi(\mu)=\Phi(0)=0$ holds, and thus, $\Phi$ is not injective.
($\Leftarrow$) Suppose $\Phi$ is not injective. Then, there exist
$\mu,\nu\in\mathcal{D}(\mathcal{X},\mathcal{A})$ such that
$\Phi(\mu)=\Phi(\nu)$ and $\mu\neq\nu$, which implies $\Phi(\mu-\nu)=0$ and
$\mu-\nu\neq 0$.
We now show Propositions 5.11 and 5.12.
Proof of Proposition 5.11 Let $\mu\in\mathcal{D}(\mathcal{X},\mathcal{A})$,
$\mu\neq 0$. We have
$\displaystyle\left\langle\Phi(\mu),\Phi(\mu)\right\rangle$
$\displaystyle=\int_{x\in\mathbb{R}^{d}}\int_{y\in\mathbb{R}^{d}}d\mu^{*}(x)k(x,y)d\mu(y)$
$\displaystyle=\int_{x\in\mathbb{R}^{d}}\int_{y\in\mathbb{R}^{d}}d\mu^{*}(x)\int_{\omega\in\mathbb{R}^{d}}e^{-\sqrt{-1}(y-x)^{T}\omega}d\lambda(\omega)d\mu(y)$
$\displaystyle=\int_{\omega\in\mathbb{R}^{d}}\int_{x\in\mathbb{R}^{d}}e^{\sqrt{-1}x^{T}\omega}d\mu^{*}(x)d\lambda(\omega)\int_{y\in\mathbb{R}^{d}}e^{-\sqrt{-1}y^{T}\omega}d\mu(y)$
$\displaystyle=\int_{\omega\in\mathbb{R}^{d}}\hat{\mu}(\omega)^{*}d\lambda(\omega)\hat{\mu}(\omega).$
Assume $\hat{\mu}=0$. Then, $\int_{x\in\mathcal{X}}u(x)d\mu(x)=0$ holds for
any $u\in{C}_{0}(\mathcal{X},\mathcal{A})$, which implies
$\mu\in{C}_{0}(\mathcal{X},\mathcal{A})^{\perp}=\\{0\\}$ by Proposition C.4
and Lemma C.3. This contradicts $\mu\neq 0$, and thus $\hat{\mu}\neq 0$. In
addition, by the assumption, $\operatorname{supp}(\lambda)=\mathbb{R}^{d}$
holds. As a result,
$\int_{\omega\in\mathbb{R}^{d}}\hat{\mu}(\omega)^{*}d\lambda(\omega)\hat{\mu}(\omega)\neq
0$ holds. By Lemma C.5, $\Phi$ is injective.
Proof of Proposition 5.12 Let $\mu\in\mathcal{D}(\mathcal{X},\mathcal{A})$,
$\mu\neq 0$. We have
$\displaystyle\left\langle\Phi(\mu),\Phi(\mu)\right\rangle$
$\displaystyle=\int_{x\in\mathbb{R}^{d}}\int_{y\in\mathbb{R}^{d}}d\mu^{*}(x)k(x,y)d\mu(y)$
$\displaystyle=\int_{x\in\mathbb{R}^{d}}\int_{y\in\mathbb{R}^{d}}d\mu^{*}(x)\int_{t\in[0,\infty)}e^{-t\|x-y\|^{2}}d\eta(t)d\mu(y)$
$\displaystyle=\int_{x\in\mathbb{R}^{d}}\int_{y\in\mathbb{R}^{d}}d\mu^{*}(x)\int_{t\in[0,\infty)}\frac{1}{(2t)^{d/2}}\int_{\omega\in\mathbb{R}^{d}}e^{-\sqrt{-1}(y-x)^{T}\omega-\frac{\|\omega\|^{2}}{4t}}d\omega
d\eta(t)d\mu(y)$
$\displaystyle=\int_{\omega\in\mathbb{R}^{d}}\hat{\mu}(\omega)^{*}\int_{t\in[0,\infty)}\frac{1}{(2t)^{d/2}}e^{\frac{-\|\omega\|^{2}}{4t}}d\eta(t)\hat{\mu}(\omega)d\omega,$
(32)
where we applied a formula
$e^{-t\|x\|^{2}}={(2t)^{-d/2}}\int_{\omega\in\mathbb{R}^{d}}e^{-\sqrt{-1}x^{T}\omega-\|\omega\|^{2}/(4t)}d\omega$
in the third equality. In the same manner as in the proof of Proposition 5.11,
$\hat{\mu}\neq 0$ holds. In addition, since
$\operatorname{supp}(\eta)\neq\\{0\\}$ holds,
$\int_{t\in[0,\infty)}(2t)^{-d/2}e^{-\|\omega\|^{2}/(4t)}d\eta(t)$ is positive
definite. As a result, the last formula in Eq. (32) is nonzero. By Lemma C.5,
$\Phi$ is injective.
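The Gaussian integral used in the third equality can be checked numerically in one dimension. This is a sketch under the standard Lebesgue-measure convention, where the prefactor is $(4\pi t)^{-1/2}$; the paper's $(2t)^{-d/2}$ presumably absorbs a factor of $(2\pi)^{-d/2}$ into its convention for $d\omega$:

```python
import numpy as np

# 1-D check of e^{-t x^2} = (4 pi t)^{-1/2} Integral e^{-i x w - w^2/(4t)} dw
# (standard Lebesgue normalization; t and x are arbitrary test values).
# The integrand decays like a Gaussian, so a uniform-grid sum is
# spectrally accurate here.

t, x = 0.7, 1.3
w = np.linspace(-60.0, 60.0, 400001)
h = w[1] - w[0]
integral = np.sum(np.exp(-1j * x * w - w ** 2 / (4 * t))).real * h

lhs = np.exp(-t * x ** 2)
rhs = integral / np.sqrt(4 * np.pi * t)
assert abs(lhs - rhs) < 1e-8
```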
### C.2 Proofs of Proposition 5.15 and Theorem 5.16
Let $\mathcal{R}_{+}(\mathcal{X})$ be the set of all real positive-valued
regular measures, and $\mathcal{D}_{\nu}(\mathcal{X},\mathcal{A})$ the set of
all finite regular Borel $\mathcal{A}$-valued measures $\mu$ whose total
variations are dominated by $\nu\in\mathcal{R}_{+}(\mathcal{X})$ (i.e.,
$|\mu|\leq\nu$). We apply the following representation theorem to derive
Theorem 5.16.
###### Proposition C.6
For $\nu\in\mathcal{R}_{+}(\mathcal{X})$, there exists an isomorphism between
$\mathcal{D}_{\nu}(\mathcal{X},\mathcal{A})$ and
${L}^{1}_{\nu}(\mathcal{X},\mathcal{A})^{\prime}$.
Proof For $\mu\in\mathcal{D}_{\nu}(\mathcal{X},\mathcal{A})$ and
$u\in{L}^{1}_{\nu}(\mathcal{X},\mathcal{A})$, we have
$\bigg{\|}\int_{x\in\mathcal{X}}d\mu(x)u(x)\bigg{\|}_{\mathcal{A}}\leq\int_{x\in\mathcal{X}}\|u(x)\|_{\mathcal{A}}d|\mu|(x)\leq\int_{x\in\mathcal{X}}\|u(x)\|_{\mathcal{A}}d\nu(x).$
Thus, we define
$h:\mathcal{D}_{\nu}(\mathcal{X},\mathcal{A})\to{L}^{1}_{\nu}(\mathcal{X},\mathcal{A})^{\prime}$
as $\mu\mapsto(u\mapsto\int_{x\in\mathcal{X}}d\mu(x)u(x))$.
Meanwhile, for $f\in{L}^{1}_{\nu}(\mathcal{X},\mathcal{A})^{\prime}$ and
$E\in\mathcal{B}$, we have
$\|f(\chi_{E}1_{\mathcal{A}})\|_{\mathcal{A}}\leq
C\int_{x\in\mathcal{X}}\|\chi_{E}1_{\mathcal{A}}\|_{\mathcal{A}}d\nu(x)=C\nu(E)$
for some $C>0$ since $f$ is bounded. Here, $\chi_{E}$ is an indicator function
for a Borel set $E$. Thus, we define
$h^{\prime}:{L}^{1}_{\nu}(\mathcal{X},\mathcal{A})^{\prime}\to\mathcal{D}_{\nu}(\mathcal{X},\mathcal{A})$
as $f\mapsto(E\mapsto f(\chi_{E}1_{\mathcal{A}}))$.
By the definitions of $h$ and $h^{\prime}$, $h(h^{\prime}(f))(s)=f(s)$ holds
for $s\in\mathcal{S}(\mathcal{X},\mathcal{A})$. Since
$\mathcal{S}(\mathcal{X},\mathcal{A})$ is dense in
${L}^{1}_{\nu}(\mathcal{X},\mathcal{A})$, $h(h^{\prime}(f))(u)=f(u)$ holds for
$u\in{L}^{1}_{\nu}(\mathcal{X},\mathcal{A})$. Moreover,
$h^{\prime}(h(\mu))(E)=\mu(E)$ holds for $E\in\mathcal{B}$. Therefore,
$\mathcal{D}_{\nu}(\mathcal{X},\mathcal{A})$ and
${L}^{1}_{\nu}(\mathcal{X},\mathcal{A})^{\prime}$ are isomorphic.
Proof of Theorem 5.16 Assume $\mathcal{M}_{k}$ is dense in
${C}_{0}(\mathcal{X},\mathcal{A})$. Since ${C}_{0}(\mathcal{X},\mathcal{A})$
is dense in ${L}^{1}_{\nu}(\mathcal{X},\mathcal{A})$ for any
$\nu\in\mathcal{R}_{+}(\mathcal{X})$, $\mathcal{M}_{k}$ is dense in
${L}^{1}_{\nu}(\mathcal{X},\mathcal{A})$ for any
$\nu\in\mathcal{R}_{+}(\mathcal{X})$. By Lemma C.3,
$\mathcal{M}_{k}^{\perp}=\\{0\\}$ holds. Let
$\mu\in\mathcal{D}(\mathcal{X},\mathcal{A})$. There exists
$\nu\in\mathcal{R}_{+}(\mathcal{X})$ such that
$\mu\in\mathcal{D}_{\nu}(\mathcal{X},\mathcal{A})$. By Proposition C.6, if
$\int_{x\in\mathcal{X}}d\mu(x)u(x)=0$ for any $u\in\mathcal{M}_{k}$, $\mu=0$.
Since $\int_{x\in\mathcal{X}}d\mu(x)u(x)=\left\langle
u,\Phi(\mu)\right\rangle_{\mathcal{M}_{k}}$, $\Phi(\mu)=0$ implies
$\int_{x\in\mathcal{X}}d\mu(x)u(x)=0$ for any $u\in\mathcal{M}_{k}$, and hence
$\mu=0$. Therefore, $\Phi$ is injective.
For the case of $\mathcal{A}=\mathbb{C}^{m\times m}$, we apply the following
extension theorem to derive the converse of Theorem 5.16.
###### Proposition C.7 (cf. Theorem in Helemskii (1994))
Let $\mathcal{A}=\mathbb{C}^{m\times m}$. Let $\mathcal{M}$ be a Banach
$\mathcal{A}$-module, $\mathcal{M}_{0}$ be a closed submodule of
$\mathcal{M}$, and $f_{0}:\mathcal{M}_{0}\to\mathcal{A}$ be a bounded
$\mathcal{A}$-linear map. Then, there exists a bounded $\mathcal{A}$-linear
map $f:\mathcal{M}\to\mathcal{A}$ that extends $f_{0}$ (i.e., $f(u)=f_{0}(u)$
for $u\in\mathcal{M}_{0}$).
Proof The von Neumann algebra $\mathcal{A}$ itself is regarded as an
$\mathcal{A}$-module and is normal. Also, $\mathbb{C}^{m\times m}$ is Connes
injective. By the theorem in Helemskii (1994), $\mathcal{A}$ is an injective
object in the category of Banach $\mathcal{A}$-modules. The statement then
follows from the definition of injective objects in category theory.
We derive the following lemma and proposition by Proposition C.7.
###### Lemma C.8
Let $\mathcal{A}=\mathbb{C}^{m\times m}$. Let $\mathcal{M}$ be a Banach
$\mathcal{A}$-module and $\mathcal{M}_{0}$ be a closed submodule of
$\mathcal{M}$. For $u_{1}\in\mathcal{M}\setminus\mathcal{M}_{0}$, there exists
a bounded $\mathcal{A}$-linear map $f:\mathcal{M}\to\mathcal{A}$ such that
$f(u_{0})=0$ for $u_{0}\in\mathcal{M}_{0}$ and $f(u_{1})\neq 0$.
Proof Let $q:\mathcal{M}\to\mathcal{M}/\mathcal{M}_{0}$ be the quotient map
to $\mathcal{M}/\mathcal{M}_{0}$, and $\,\mathcal{U}_{1}:=\\{q(u_{1})c\mid\
c\in\mathcal{A}\\}$. Note that $\mathcal{M}/\mathcal{M}_{0}$ is a Banach
$\mathcal{A}$-module and $\,\mathcal{U}_{1}$ is its closed submodule. Let
$\mathcal{V}:=\\{c\in\mathcal{A}\mid\ q(u_{1})c=0\\}$, which is a closed
subspace of $\mathcal{A}$. Since $\mathcal{V}$ is orthogonally complemented
(Manuilov and Troitsky, 2000, Proposition 2.5.4), $\mathcal{A}$ is decomposed
into $\mathcal{A}=\mathcal{V}+\mathcal{V}^{\perp}$. Let
$p:\mathcal{A}\to\mathcal{V}^{\perp}$ be the projection onto
$\mathcal{V}^{\perp}$ and $f_{0}:\mathcal{U}_{1}\to\mathcal{A}$ defined as
$q(u_{1})c\mapsto p(c)$. Since $p$ is $\mathcal{A}$-linear, $f_{0}$ is also
$\mathcal{A}$-linear. Also, for $c\in\mathcal{A}$, we have
$\displaystyle\|q(u_{1})c\|_{\mathcal{M}/\mathcal{M}_{0}}=\|q(u_{1})(c_{1}+c_{2})\|_{\mathcal{M}/\mathcal{M}_{0}}=\|q(u_{1})c_{1}\|_{\mathcal{M}/\mathcal{M}_{0}}$
$\displaystyle\qquad\geq\inf_{d\in\mathcal{V}^{\perp},\|d\|_{\mathcal{A}}=1}\|q(u_{1})d\|_{\mathcal{M}/\mathcal{M}_{0}}\;\|c_{1}\|_{\mathcal{A}}=\inf_{d\in\mathcal{V}^{\perp},\|d\|_{\mathcal{A}}=1}\|q(u_{1})d\|_{\mathcal{M}/\mathcal{M}_{0}}\;\|p(c)\|_{\mathcal{A}},$
where $c_{1}=p(c)$ and $c_{2}=c-p(c)\in\mathcal{V}$. Since
$\inf_{d\in\mathcal{V}^{\perp},\|d\|_{\mathcal{A}}=1}\|q(u_{1})d\|_{\mathcal{M}/\mathcal{M}_{0}}>0$,
$f_{0}$ is bounded. By Proposition C.7, $f_{0}$ is extended to a bounded
$\mathcal{A}$-linear map $f_{1}:\mathcal{M}/\mathcal{M}_{0}\to\mathcal{A}$.
Setting $f:=f_{1}\circ q$ completes the proof of the lemma.
Then we prove the converse of Lemma C.3.
###### Proposition C.9
Let $\mathcal{A}=\mathbb{C}^{m\times m}$. For a Banach $\mathcal{A}$-module
$\mathcal{M}$ and its submodule $\mathcal{M}_{0}$, $\mathcal{M}_{0}$ is dense
in $\mathcal{M}$ if $\mathcal{M}_{0}^{\perp}=\\{0\\}$.
Proof We first show
$\overline{\mathcal{M}_{0}}\supseteq(\mathcal{M}_{0}^{\perp})^{\perp}$. Let
$u\notin\overline{\mathcal{M}_{0}}$. By Lemma C.8, there exists
$f\in\mathcal{M}^{\prime}$ such that $f(u)\neq 0$ and $f(u_{0})=0$ for any
$u_{0}\in\overline{\mathcal{M}_{0}}$; in particular,
$f\in\mathcal{M}_{0}^{\perp}$. Thus, $u\notin(\mathcal{M}_{0}^{\perp})^{\perp}$,
and the inclusion follows. Therefore, if $\mathcal{M}_{0}^{\perp}=\\{0\\}$,
then $\overline{\mathcal{M}_{0}}\supseteq(\\{0\\})^{\perp}=\mathcal{M}$, which
implies $\mathcal{M}_{0}$ is dense in $\mathcal{M}$.
As a result, we derive Proposition 5.15 as follows.
Proof of Proposition 5.15 Let $\mu\in\mathcal{D}(\mathcal{X},\mathcal{A})$.
Then, “$\Phi(\mu)=0$” is equivalent to
“$\int_{x\in\mathcal{X}}d\mu^{*}(x)u(x)=\left\langle\Phi(\mu),u\right\rangle_{\mathcal{M}_{k}}=0$
for any $u\in\mathcal{M}_{k}$”. Thus, by Proposition C.4,
“$\Phi(\mu)=0\Rightarrow\mu=0$” is equivalent to
“$f\in{C}_{0}(\mathcal{X},\mathcal{A})^{\prime}$, $f(u)=0$ for any
$u\in\mathcal{M}_{k}$ $\Rightarrow$ $f=0$”. By the definition of
$\mathcal{M}_{k}^{\perp}$ and Proposition C.9, $\mathcal{M}_{k}$ is dense in
${C}_{0}(\mathcal{X},\mathcal{A})$.
## Appendix D Derivative on Banach spaces
###### Definition D.1 (Fréchet derivative)
Let $\mathcal{M}$ be a Banach space. Let $f:\mathcal{M}\to\mathcal{A}$ be an
$\mathcal{A}$-valued function defined on $\mathcal{M}$. The function $f$ is
referred to as (Fréchet) differentiable at a point $\mathbf{c}\in\mathcal{M}$
if there exists a continuous $\mathbb{R}$-linear operator $l$ such that
$\lim_{u\to 0,\
u\in\mathcal{M}\setminus\\{0\\}}\frac{\|f(\mathbf{c}+u)-f(\mathbf{c})-l(u)\|_{\mathcal{A}}}{\|u\|_{\mathcal{M}}}=0.$
In this case, we denote $l$ by $Df_{\mathbf{c}}$.
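As an illustration of this definition (an example chosen for this note, not taken from the paper), take $f(X)=X^{T}X$ on real square matrices, whose Fréchet derivative at $\mathbf{c}=C$ is $l(U)=C^{T}U+U^{T}C$; the remainder in the defining ratio is exactly $U^{T}U$, so the ratio vanishes linearly in $\|U\|$:

```python
import numpy as np

# Finite-difference check of Definition D.1 for f(X) = X^T X, whose candidate
# Frechet derivative at C is l(U) = C^T U + U^T C. The remainder
# f(C+U) - f(C) - l(U) equals U^T U exactly, so the defining ratio
# ||remainder|| / ||U|| shrinks linearly with ||U||. Sizes are arbitrary.

rng = np.random.default_rng(1)
C = rng.normal(size=(4, 4))
U = rng.normal(size=(4, 4))

def f(X):
    return X.T @ X

ratios = []
for eps in (1e-2, 1e-4, 1e-6):
    H = eps * U
    rem = f(C + H) - f(C) - (C.T @ H + H.T @ C)   # equals H^T H
    ratios.append(np.linalg.norm(rem) / np.linalg.norm(H))

assert ratios[0] > ratios[1] > ratios[2]          # ratio decays with ||H||
assert ratios[2] < 1e-5
```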
## References
* Álvarez et al. (2012) M. Álvarez, L. Rosasco, and N. Lawrence. Kernels for vector-valued functions: A review. _Foundations and Trends in Machine Learning_ , 4, 2012.
* Bakić and Guljaš (2001) D. Bakić and B. Guljaš. Operators on Hilbert $H^{*}$-modules. _Journal of Operator Theory_ , 46:123–137, 2001.
* Balkir (2014) E. Balkir. _Using Density Matrices in a Compositional Distributional Model of Meaning_. Master’s thesis, University of Oxford, 2014.
* Blanchard and Brüning (2015) P. Blanchard and E. Brüning. _Mathematical Methods in Physics_. Birkhäuser, 2nd edition, 2015.
* Budišić et al. (2012) M. Budišić, R. Mohr, and I. Mezić. Applied Koopmanism. _Chaos_ , 22:047510, 2012.
* Cnops (1992) J. Cnops. A Gram–Schmidt method in Hilbert modules. _Clifford Algebras and their Applications in Mathematical Physics_ , 47:193–203, 1992.
* Črnjarić-Žic et al. (2020) N. Črnjarić-Žic, S. Maćešić, and I. Mezić. Koopman operator spectrum for random dynamical systems. _Journal of Nonlinear Science_ , 30:2007–2056, 2020.
* Deb (2016) P. Deb. Geometry of quantum state space and quantum correlations. _Quantum Information Processing_ , 15:1629–1638, 2016.
* Diestel (1984) J. Diestel. _Sequences and Series in Banach spaces_. Graduate texts in mathematics ; Volume 92. Springer-Verlag, 1984.
* Dinculeanu (1967) N. Dinculeanu. _Vector Measures_. International Series of Monographs on Pure and Applied Mathematics ; Volume 95. Pergamon Press, 1967.
* Dinculeanu (2000) N. Dinculeanu. _Vector Integration and Stochastic Integration in Banach Spaces_. John Wiley & Sons, 2000.
* Dudley (2002) R. M. Dudley. _Real Analysis and Probability_. Cambridge Studies in Advanced Mathematics. Cambridge University Press, 2nd edition, 2002.
* Fujii and Kawahara (2019) K. Fujii and Y. Kawahara. Dynamic mode decomposition in vector-valued reproducing kernel Hilbert spaces for extracting dynamical structure among observables. _Neural Networks_ , 117:94–103, 2019.
* Fukumizu et al. (2007) K. Fukumizu, A. Gretton, X. Sun, and B. Schölkopf. Kernel measures of conditional dependence. _Advances in Neural Information Processing Systems 20_ , 489–496, 2007.
* Gretton et al. (2006) A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. J. Smola. A kernel method for the two-sample-problem. _Advances in Neural Information Processing Systems 19_ , 513–520, 2006.
* Gretton et al. (2012) A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. J. Smola. A kernel two-sample test. _Journal of Machine Learning Research_ , 13(25):723–773, 2012.
* Hashimoto et al. (2020) Y. Hashimoto, I. Ishikawa, M. Ikeda, Y. Matsuo, and Y. Kawahara. Krylov subspace method for nonlinear dynamical systems with random noise. _Journal of Machine Learning Research_ , 21(172):1–29, 2020.
* Hastie et al. (2009) T. Hastie, R. Tibshirani, and J. Friedman. _The Elements of Statistical Learning: Data Mining, Inference, and Prediction_. Springer, 2nd edition, 2009.
* Helemskii (1994) A. Helemskii. The spatial flatness and injectiveness of Connes operator algebras. _Extracta Mathematicae_ , 9:75–81, 1994.
* Heo (2008) J. Heo. Reproducing kernel Hilbert $C^{*}$-modules and kernels associated with cocycles. _Journal of Mathematical Physics_ , 49:103507, 2008.
* Holevo (2011) A. S. Holevo. _Probabilistic and Statistical Aspects of Quantum Theory_. Scuola Normale Superiore, 2011.
* Ishikawa et al. (2018) I. Ishikawa, K. Fujii, M. Ikeda, Y. Hashimoto, and Y. Kawahara. Metric on nonlinear dynamical systems with Perron–Frobenius operators. In _Advances in Neural Information Processing Systems 31_ , 2856–2866, 2018.
* Itoh (1990) S. Itoh. Reproducing kernels in modules over $C^{*}$-algebras and their applications. _Journal of Mathematics in Nature Science_ , 37:1–20, 1990.
* Jitkrittum et al. (2019) W. Jitkrittum, P. Sangkloy, M. W.Gondal, A. Raj, J. Hays, and B. Schölkopf. Kernel mean matching for content addressability of GANs. In _Proceedings of the 36th International Conference on Machine Learning_ , 3140–3151, 2019.
* Kadri et al. (2016) H. Kadri, E. Duflos, P. Preux, S. Canu, A. Rakotomamonjy, and J. Audiffren. Operator-valued kernels for learning from functional response data. _Journal of Machine Learning Research_ , 17(20):1–54, 2016.
* Kawahara (2016) Y. Kawahara. Dynamic mode decomposition with reproducing kernels for Koopman spectral analysis. In _Advances in Neural Information Processing Systems 29_ , 911–919, 2016.
* Klus et al. (2020) S. Klus, I. Schuster, and K. Muandet. Eigendecompositions of transfer operators in reproducing kernel Hilbert spaces. _Journal of Nonlinear Science_ , 30:283–315, 2020.
* Lance (1995) E. C. Lance. _Hilbert $C^{*}$-modules – a toolkit for operator algebraists_. London Mathematical Society Lecture Note Series; Volume 210. Cambridge University Press, 1995.
* Levitin et al. (2007) D. J. Levitin, R. L. Nuzzo, B. W. Vines, and J. O. Ramsay. Introduction to functional data analysis. _Canadian Psychology_ , 48:135–155, 2007.
* Li et al. (2019) H. Li, S. J. Pan, S. Wang, and A. C. Kot. Heterogeneous domain adaptation via nonlinear matrix factorization. _IEEE Transactions on Neural Networks and Learning Systems_ , 31:984–996, 2019.
* Lim et al. (2015) N. Lim, F. Buc, C. Auliac, and G. Michailidis. Operator-valued kernel-based vector autoregressive models for network inference. _Machine Learning_ , 99(3):489–513, 2015.
* Liu and Rebentrost (2018) N. Liu and P. Rebentrost. Quantum machine learning for quantum anomaly detection. _Physical Review A_ , 97:042315, 2018.
* Lusch et al. (2018) B. Lusch, J. N. Kutz, and S. L. Brunton. Deep learning for universal linear embeddings of nonlinear dynamics. _Nature Communications_ , 9:4950, 2018.
* Manuilov and Troitsky (2000) V. M. Manuilov and E. V. Troitsky. Hilbert $C^{*}$\- and $W^{*}$-modules and their morphisms. _Journal of Mathematical Sciences_ , 98:137–201, 2000.
* Micchelli and Pontil (2005) C. A. Micchelli and M. Pontil. On learning vector-valued functions. _Neural Computation_ , 17:177–204, 2005.
* Minh et al. (2016) H. Q. Minh, L. Bazzani, and V. Murino. A unifying framework in vector-valued reproducing kernel Hilbert spaces for manifold regularization and co-regularized multi-view learning. _Journal of Machine Learning Research_ , 17(25):1–72, 2016.
* Muandet et al. (2017) K. Muandet, K. Fukumizu, B. K. Sriperumbudur, and B. Schölkopf. Kernel mean embedding of distributions: A review and beyond. _Foundations and Trends in Machine Learning_ , 10(1–2), 2017.
* Müller (1997) A. Müller. Integral probability metrics and their generating classes of functions. _Advances in Applied Probability_ , 29:429–443, 1997.
* Murphy (1990) G. J. Murphy. _C*-Algebras and Hilbert Space Operators_. Academic Press, 1990.
* Peres and Terno (2004) A. Peres and D. R. Terno. Quantum information and relativity theory. _Reviews of Modern Physics_ , 76:93–123, 2004.
* Rachev (1985) S. T. Rachev. On a class of minimal functionals on a space of probability measures. _Theory of Probability & Its Applications_, 29:41–49, 1985.
* Ramsay and Silverman (2005) J. O. Ramsay and B. W. Silverman. _Functional Data Analysis_. Springer, 2nd edition, 2005.
* Saitoh and Sawano (2016) S. Saitoh and Y. Sawano. _Theory of Reproducing Kernels and Applications_. Springer, 2016.
* Schölkopf and Smola (2001) B. Schölkopf and A. J. Smola. _Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond_. MIT Press, 2001.
* Skeide (2000) M. Skeide. Generalised matrix $C^{\ast}$-algebras and representations of Hilbert modules. _Mathematical Proceedings of the Royal Irish Academy_ , 100A(1):11–38, 2000.
* Smola et al. (2007) A. J. Smola, A. Gretton, L. Song, and B. Schölkopf. A Hilbert space embedding for distributions. In _Proceedings of the 18th International Conference on Algorithmic Learning Theory_ , 13–31, 2007.
* Schölkopf et al. (2001) B. Schölkopf, R. Herbrich, and A. J. Smola. A generalized representer theorem. In _Proceedings of the 14th Annual Conference on Computational Learning Theory_ , 416–426, 2001.
* Smyrlis and Zisis (2004) G. Smyrlis and V. Zisis. Local convergence of the steepest descent method in Hilbert spaces. _Journal of Mathematical Analysis and Applications_ , 300:436–453, 2004.
* Sriperumbudur et al. (2010) B. K. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G. R. G. Lanckriet. Hilbert space embeddings and metrics on probability measures. _Journal of Machine Learning Research_ , 11:1517–1561, 2010.
* Sriperumbudur et al. (2011) B. K. Sriperumbudur, K. Fukumizu, and G. R. G. Lanckriet. Universality, characteristic kernels and RKHS embedding of measures. _Journal of Machine Learning Research_ , 12(70):2389–2410, 2011.
* Sriperumbudur et al. (2012) B. K. Sriperumbudur, K. Fukumizu, A. Gretton, B. Schölkopf, and G. R. G. Lanckriet. On the empirical estimation of integral probability metrics. _Electronic Journal of Statistics_ , 6:1550–1599, 2012.
* Steinwart (2001) I. Steinwart. On the influence of the kernel on the consistency of support vector machines. _Journal of Machine Learning Research_ , 2:67–93, 2001.
* Suzumura et al. (2017) S. Suzumura, K. Nakagawa, Y. Umezu, K. Tsuda, and I. Takeuchi. Selective inference for sparse high-order interaction models. In _Proceedings of the 34th International Conference on Machine Learning_ , 3338–3347, 2017.
* Szafraniec (2010) F. H. Szafraniec. Murphy’s positive definite kernels and Hilbert $C^{*}$-modules reorganized. _Noncommutative Harmonic Analysis with applications to probability II_ , 89:275–295, 2010.
* Takeishi et al. (2017a) N. Takeishi, Y. Kawahara, and T. Yairi. Subspace dynamic mode decomposition for stochastic Koopman analysis. _Physical Review E_ , 96:033310, 2017a.
* Takeishi et al. (2017b) N. Takeishi, Y. Kawahara, and T. Yairi. Learning Koopman invariant subspaces for dynamic mode decomposition. In _Advances in Neural Information Processing Systems 30_ , 1130–1140, 2017b.
* Wang et al. (2016) J. L. Wang, J. M. Chiou, and H. G. Müller. Functional data analysis. _Annual Review of Statistics and Its Application_ , 3:257–295, 2016.
* Wendland (2004) H. Wendland. _Scattered Data Approximation_. Cambridge Monographs on Applied and Computational Mathematics. Cambridge University Press, 2004.
* Ye (2017) Y. Ye. The matrix Hilbert space and its application to matrix learning. _arXiv:1706.08110v2_ , 2017.
# Rydberg quantum simulator of topological insulators
Mohammadsadegh Khazali Institute for Quantum Optics and Quantum Information
of the Austrian Academy of Sciences, Innsbruck, Austria
###### Abstract
This article proposes the implementation of a Rydberg multi-dimensional
discrete-time quantum walk (DTQW) that could ideally simulate different
classes of topological insulators. Using an exchange interaction between
Rydberg-excited atoms in an atomic array with dual lattice constants, the new
setup operates both coined and coin-less models of DTQW. Here, complicated
coupling tessellations are performed by global laser excitation. The
long-range interaction provides a new feature of designing different
topologically ordered periodic boundary conditions. Limiting the Rydberg
population to two excitations, coherent QW over a thousand lattice sites and
steps is achievable with current technology. These features would improve the
performance of this quantum machine in running the quantum search algorithm
over topologically ordered databases, as well as diversifying the range of
topological insulators that could be simulated.
## I Introduction
There is a huge effort in building quantum hardware that outperforms its
classical counterparts in performing certain algorithms and in simulating
other complicated quantum systems. Among different approaches, implementing
the quantum walk (QW) Aha93 ; Far98 ; Kem03 is receiving wide interest.
Unlike in a classical random walk, particles performing a quantum walk can
take a superposition of all possible paths through their environment
simultaneously, leading to faster propagation and enhanced sensitivity to
initial conditions Dad18 ; Sum16 ; Pre15 . These properties provide an
appealing basis for the implementation of quantum algorithms like searching
Por13 ; She03 ; Chi03 ; Chi04 ; Por17 , quantum processing Chi09 ; ken20 ;
Lov10 ; Chi13 ; Sal12 , and the simulation of topological insulators Kit10 .
Improving the hardware in terms of size, coherence, dimensionality,
controlling elements, and other features like topologically ordered
boundary conditions would improve the performance and diversify the
applications that could run on this quantum hardware.
Quantum walks have been implemented on trapped ions Sch09 ; Zah10 , neutral
atoms Kar09 ; Wei11 ; Fuk13 ; Pre15 , and other platforms Man14 . While ion
traps are limited to a 1D array of 50 atoms, neutral atoms are the pioneering
platform for multi-dimensional trapping of large numbers of identical qubits.
From this perspective, trapping and controlling more qubits than on any other
platform has already been demonstrated in one-dimensional (1D) Ber17 ; Omr19 ,
2D Zha11 ; Pio13 ; Nog14 ; Xia15 ; Zei16 ; Lie18 ; Nor18 ; Coo18 ; Hol19 ;
Sas19 , and 3D Wan16 ; Bar18 geometries. Current advances in exciting atoms
to the Rydberg level Lev18 provide a controllable long-range interaction for
quantum technology Saf10 ; Ada19 ; Kha15 ; kha20 ;
khaz2020rydberg ; khaz16 ; Kha21 ; Kha19 ; Kha16 ; Kha17 ; Kha18 . Rydberg
interactions have been used in different time-independent lattice Hamiltonian
models, leading to continuous-time quantum transport Cot06 ; Bar15 ; Scho15 ;
Ori18 ; Gun13 ; Sch15 ; Let18 ; Wus11 and to simulations of topological Mott
insulators Dau16 . The continuous-time interaction in these simulators limits
the range of programmable problems. Also, exciting all the sites to the
Rydberg level would cause significant decoherence, in addition to inducing
strong interactions between atoms. Overcoming this interaction requires very
demanding excitation and trapping lasers, and it also puts a limit on the
density of sites in the lattice.
Figure 1: Rydberg discrete-time quantum walk (DTQW) scheme. (a) Level scheme:
The walker is an $nP$ Rydberg excitation. The QW operates by exciting a
neighbouring lattice site to the $nS$ Rydberg state, which features a resonant
exchange interaction with the walker. (b) The exchange interaction $V$ forms a
site-dependent level shift of the $nS$ Rydberg state. Using two lattice
constants and tuning the exciting laser’s frequency, only the desired site
comes into resonance with the laser and undergoes the quantum walk. (c[d])
Adjusting the laser’s detuning to the intra- [inter-] dimer interaction forms
the desired coupling tessellation $W_{0}$ [$W_{1}$]. (e) The maximum
population of the auxiliary state $|70S\rangle$ over a 2$\pi$ pulse is plotted
as a function of interatomic distance from the walker $|70P\rangle$. The laser
detuning is set to $\Delta=-\frac{C_{3}}{a^{3}}$. Only the site at the
distance $a=3\mu$m from the walker comes into resonance and hence undergoes
the quantum walk ($\Omega/2\pi=2$MHz, $C_{3}=8.4$GHz.$\mu$m3). (f) The hopping
angle $\theta$ can be controlled by manipulating the effective detuning of the
targeted site $\delta=\Delta+V(a_{\\{0,1\\}})$.
This paper proposes an approach that revolutionizes the level of control over
the interaction, leading to a Rydberg discrete-time quantum walk in multiple
dimensions. Benefiting from the long-range Rydberg interaction, the scheme
features QW implementation on topologically ordered (e.g. torus) periodic
boundary conditions. The limited Rydberg population in this proposal would be
a big step towards scalable quantum simulators. While global lasers are used
to switch among multiple coupling tessellations, adding local external fields
provides an extra degree of control for engineering space-dependent coupling
properties. These are valuable features that are not realized in other
neutral-atom QW schemes, and they open a wide range of applications. As an
example, the implementation of different classes of Floquet topological
insulators with the Rydberg model will be overviewed.
Topological insulators are a new class of quantum materials that are
insulating in the bulk but exhibit robust, topologically protected,
current-carrying edge states. Topological insulator materials are challenging
to synthesize, and the topological phases accessible with solid-state
materials are limited And13 . This has motivated the search for topological
phases in systems that simulate the same principles underlying topological
insulators. Discrete-time quantum walks (DTQWs) have been proposed for making
Floquet topological insulators. This periodically driven system simulates an
effective (Floquet) Hamiltonian that is topologically nontrivial Cay13 . It
replicates the effective Hamiltonians from all universality classes of 1- to
3-D topological insulators Kit09 ; Kit10 ; Pan20 . Interestingly, the
topological properties of Floquet topological insulators can be controlled via
an external periodic drive rather than an external magnetic field.
DTQW generates topological phases that are richer than those of
time-independent lattice Hamiltonians Dau16 . Topological edge states have
been realized exclusively in photonic DTQWs with limited numbers of sites
($<20$) and steps ($<10$) Rec13 ; Xia17 ; Muk17 . The introduced Rydberg DTQW
scheme could realize edge states over a thousand sites and steps in 1, 2, and
3 dimensions with available experimental setups. The mentioned advances of
the designed hardware improve the performance of quantum algorithms, e.g.
torus topological periodic boundary conditions could result in a quadratic
speedup of quantum-walk-based search algorithms Por13 ; Por17 , and diversify
the range of topological phenomena that could be simulated. This work opens up
the chance of realizing topological properties in multi-dimensional QWs as
well as QWs over topologically ordered surfaces.
The article is organized as follows. In Sec. II, the coined and coin-less
Rydberg discrete-time quantum walk schemes are presented in 1D. Sec. III
extends the model to higher dimensions and discusses approaches for imposing
periodic boundary conditions and for applying the quantum walk on different
topological surfaces. The coherence of the proposed scheme under a wide range
of imperfection sources is evaluated in Sec. IV. The scheme’s performance in
multiple dimensions is then quantified in Sec. V. Finally, applications of
this model in simulating multi-dimensional topological insulators are
discussed in Sec. VI.
## II Rydberg discrete-time quantum walk
In the coin-less DTQW, different coupling tessellations must be considered,
such that each tessellation covers all vertices with non-overlapping
connections and the union of the tessellations covers all edges. This model is
the discrete-time version of the famous Su-Schrieffer-Heeger (SSH) model
su1979solitons . Distinguishing even ${|{m,e}\rangle}={|{2m}\rangle}$ and odd
${|{m,o}\rangle}={|{2m-1}\rangle}$ lattice sites in the $m^{\text{th}}$
sub-lattice (dimer), two types of QW operators cover the intra-dimer
$W_{0}=\exp(\text{i}\theta_{0}H_{0})$ and inter-dimer
$W_{1}=\exp(\text{i}\theta_{1}H_{1})$ coupling tessellations, with
$\displaystyle
H_{0}=\sum\limits_{m=1}^{N/2}({|{m,e}\rangle}\\!{\langle{m,o}|}+\text{h.c.}),$
(1)
$\displaystyle
H_{1}=\sum\limits_{m=1}^{N/2}({|{m,e}\rangle}\\!{\langle{m+1,o}|}+\text{h.c.}),$
see Fig. 1c,d.
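The two tessellations of Eq. 1 are easy to iterate numerically. The sketch below (lattice size, hopping angles, and open boundaries are illustrative choices, not parameters from the paper) builds $H_{0}$ and $H_{1}$ on a chain of dimers and applies $W_{1}W_{0}$ repeatedly to a localized walker; the wave packet spreads ballistically, i.e. its standard deviation grows linearly with the step number, unlike the diffusive spread of a classical random walk:

```python
import numpy as np

# Coin-less DTQW of Eq. (1) on an open chain of N/2 dimers (sites 2m, 2m+1).
# For H made of disjoint sigma_x pairs, H^3 = H, so
# exp(i th H) = I + (cos th - 1) H^2 + i sin th H holds exactly.

N = 64
H0 = np.zeros((N, N)); H1 = np.zeros((N, N))
for m in range(N // 2):
    H0[2*m, 2*m + 1] = H0[2*m + 1, 2*m] = 1.0          # intra-dimer
for m in range(N // 2 - 1):
    H1[2*m + 1, 2*m + 2] = H1[2*m + 2, 2*m + 1] = 1.0  # inter-dimer

def tess(H, th):
    return np.eye(N) + (np.cos(th) - 1.0) * (H @ H) + 1j * np.sin(th) * H

step = tess(H1, np.pi / 4) @ tess(H0, np.pi / 4)       # W1 W0 with theta0 = theta1 = pi/4

psi = np.zeros(N, dtype=complex); psi[N // 2] = 1.0    # walker localized at the center
for _ in range(15):
    psi = step @ psi

prob = np.abs(psi) ** 2
sigma = np.sqrt(prob @ (np.arange(N) - N // 2) ** 2)   # spread of the walker
print(f"norm = {np.linalg.norm(psi):.6f}, spread after 15 steps = {sigma:.2f} sites")
```

Doubling the number of steps roughly doubles `sigma`, the ballistic signature.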
The physical implementation of the proposed Rydberg discrete-time quantum walk
(DTQW) is presented in Fig. 1. The walker is a
${|{p}\rangle}={|{nP_{3/2},3/2}\rangle}$ Rydberg excitation, while the other
sites are in the ground state ${|{g}\rangle}={|{5S_{1/2},1/2}\rangle}$. The
desired inter- and intra-dimer couplings, labeled by $k\in\\{0,1\\}$, can be
realized by site-selective excitation of a ground-state atom to the
${|{s}\rangle}={|{nS_{1/2},1/2}\rangle}$ Rydberg level, which features an
exchange interaction with the walker ${|{p}\rangle}$. The site selection is
controlled by adjusting the global laser’s detuning $\Delta_{k}$ from the
${|{s}\rangle}$ state, under the concept of the Rydberg aggregate; see below.
The effective Rydberg quantum walk is governed by the following Hamiltonian
$\displaystyle H_{k}^{Ry}=$
$\displaystyle\sum\limits_{i<j}V(r_{ij})({|{s_{i}p_{j}}\rangle}\\!{\langle{p_{i}s_{j}}|}+\text{h.c.})$
$\displaystyle+\sum\limits_{i}\Omega({|{s}\rangle}_{i}{\langle{g}|}+\text{h.c.})+\Delta_{k}{|{s}\rangle}_{i}{\langle{s}|},$
(2)
where $i$ sums over all the sites. The exchange interaction
$V(r_{ij})=C_{3}/(r_{ij})^{3}$ between the walker ${|{p}\rangle}$ and the
auxiliary excited Rydberg state ${|{s}\rangle}$, separated by $r_{ij}$,
evolves the initial state of the two sites ${|{p_{i}g_{j}}\rangle}$ into the
superposition state
$\cos\theta{|{p_{i}g_{j}}\rangle}+\text{i}\sin\theta{|{g_{i}p_{j}}\rangle}$
over the 2$\pi$ pulse of the $\Omega$ laser. The absence of self-interaction
over the delocalized single ${|{s}\rangle}$ and ${|{p}\rangle}$ excitations is
justified in App. A1.
To operate the two tessellation types of Eq. 1 under the Rydberg Hamiltonian
$H_{k}^{Ry}$ with a global laser, the space-dependent nature of the
interaction $V(r_{ij})$ is used over a superlattice with distinct lattice
constants inside ($a_{0}$) and outside ($a_{1}$) the dimers. By adjusting the
exciting laser’s detuning from the ${|{s}\rangle}$ Rydberg state to be
opposite to the interaction of the specific lattice site at distance $a_{k}$
($k\in\\{0,1\\}$) from the walker ($\Delta_{k}=-\frac{C_{3}}{a_{k}^{3}}$), one
can choose the site pairs that come into resonance with the laser conditioned
on the presence of the walker and thus undergo the quantum walk, see Fig. 1.
The single non-local quantum walker ${|{p}\rangle}$ induces the excitation of
a single non-local auxiliary ${|{s}\rangle}$ Rydberg state over each $2\pi$
pulse operation, see App. A1. Adjusting the laser detuning at each pulse
connects the targeted site at distance $a_{0}$ or $a_{1}$ from the walker,
generating the desired $W_{k}=\exp(\text{i}H^{Ry}_{k}t_{k})$ with
$k\in\\{0,1\\}$ corresponding to the intra- and inter-dimer coupling
tessellations. The duration of each step $t_{k}$ is defined by
$\int\limits_{0}^{t_{k}}\Omega_{\text{eff}}\text{d}t=2\pi$, where the
effective Rabi frequency is given by
$\Omega_{\text{eff}}=\sqrt{\Omega^{2}+\delta^{2}}$ and
$\delta=\Delta_{k}+V(a_{k})$ is the effective detuning of the targeted site at
either $a_{0}$ or $a_{1}$. Fig. 1f shows how the hopping angle $\theta$ in
each step can be controlled by the $\frac{\delta}{\Omega}$ ratio.
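The site-selection mechanism can be sketched with the detuned two-level Rabi formula (a simplification of the full dynamics; parameters follow the caption of Fig. 1e: $\Omega/2\pi=2$ MHz, $C_{3}=8.4$ GHz.$\mu$m3, $a=3\mu$m). With $\Delta=-C_{3}/a^{3}$, the effective detuning $\delta=\Delta+C_{3}/r^{3}$ vanishes only at $r=a$, and the peak $|s\rangle$ population of any other site is suppressed as $\Omega^{2}/(\Omega^{2}+\delta^{2})$:

```python
import numpy as np

# Peak |s> population of a candidate site at distance r from the walker,
# from the detuned two-level Rabi formula P_max = Omega^2 / (Omega^2 + delta^2)
# with delta = Delta + C3/r^3 and Delta = -C3/a^3 (a simplified single-pair
# model; parameters as in Fig. 1e).

Omega = 2.0                 # Rabi frequency / 2pi, MHz
C3 = 8.4e3                  # MHz * um^3
a = 3.0                     # resonant distance, um
Delta = -C3 / a ** 3

r = np.array([2.0, 2.5, 3.0, 3.5, 4.0])   # candidate site distances, um
delta = Delta + C3 / r ** 3
p_max = Omega ** 2 / (Omega ** 2 + delta ** 2)

for ri, p in zip(r, p_max):
    print(f"r = {ri:.1f} um -> peak |s> population {p:.2e}")
```

Only the $r=3\mu$m site reaches unit peak population; sites half a micron closer or farther are already suppressed by several orders of magnitude, reproducing the sharp selectivity of Fig. 1e.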
To implement coined DTQW, the dimers would be considered as the individual
units. The coin is formed by the relative population of odd and even sites in
each sub-lattice (dimer), see Fig. 1c,d. The coin rotation operator
$R_{\theta}=\exp(\text{i}H_{0}\theta)$ is applied by population rotation in
the sub-lattice basis using $H_{0}$ operator of Eq. 1. The desired transition
operators of coined DTQW would be realized by subsequent application of intra-
and inter-dimer population swapping i.e.
$T=\text{e}^{\text{i}H_{1}\pi/2}R(\pi/2)=\sum\limits_{m}({|{m-1,e}\rangle}\\!{\langle{m,e}|}+{|{m+1,o}\rangle}\\!{\langle{m,o}|})$.
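The conditional shift encoded in $T$ can be checked on a small periodic chain. The sketch below builds only the permutation part $\sum_{m}({|{m-1,e}\rangle}\!{\langle{m,e}|}+{|{m+1,o}\rangle}\!{\langle{m,o}|})$ as a matrix (phases from the Rydberg pulse implementation are omitted) and verifies that it is a unitary shift.

```python
import numpy as np

M = 6                 # number of dimers on a ring (hypothetical size)
dim = 2 * M

def idx(m, s):
    """Flatten (dimer index m, parity s), with s=0 for even and s=1 for odd."""
    return 2 * (m % M) + s

# permutation part of T: even components hop left, odd components hop right
T = np.zeros((dim, dim))
for m in range(M):
    T[idx(m - 1, 0), idx(m, 0)] = 1.0   # |m-1, e><m, e|
    T[idx(m + 1, 1), idx(m, 1)] = 1.0   # |m+1, o><m, o|

psi = np.zeros(dim)
psi[idx(2, 0)] = 1.0                    # even excitation on dimer 2
shifted = T @ psi                       # moves to dimer 1
```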
## III Multi-dimensional DTQW with periodic boundary conditions
Figure 2: Multi-dimensional DTQW. (a) Kronecker multiplication of 1D QW leads
to a 2D lattice of tetramers and could trivially be extended to a 3D lattice
of octamers. (b) Extension to the 3D lattice of dimers provides a non-
separable multi-dimensional Rydberg DTQW. In (b) quantization axis would alter
during the operation to be along the connections.
The idea behind Fig. 1 is extendable to higher dimensions by two approaches.
Kronecker multiplication of the 1D staggered quantum walk would make 2D and 3D
lattices of tetramers and octamers, respectively. A more enriched non-separable
DTQW could be applied in a multi-dimensional lattice of dimers. The angular
dependency of the exchange interaction $V_{ij}$ provides a wider range of
laser detuning, available for dynamic control over the exciting sites.
##### Multi-dimensional DTQW via Kronecker multiplication:
Extension to higher dimensions could be realized as the combination of coin-
less DTQWs along different directions. In two dimensions, this would result in
a 2D lattice of tetramers as depicted in Fig. 2a. The QW is performed by
concatenated application of the four sets of quantum jump operators
$W_{xl}=\exp(\text{i}\theta_{xl}H_{xl}\otimes\mathbbm{1}_{y})$ and
$W_{yl}=\exp(\text{i}\theta_{yl}\mathbbm{1}_{x}\otimes H_{yl})$ where $H_{l}$
($l\in\\{0,1\\}$) in each dimension is given by Eq. 1, with distinguished odd
and even sites along the $x$ and $y$ dimensions; see Eq1 for the expanded set of
Hamiltonians. For the implementation, two lattice constants along each
dimension are required to distinguish inter- and intra-cell couplings.
Extension to the 3D lattice of octamers is trivial.
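The separable construction can be sketched explicitly. In the snippet below the staggered Hamiltonians of Eq. 1 are assumed to be SSH-type nearest-neighbour hoppings on a small hypothetical ring; the 2D jump operators are then Kronecker products that act on one axis at a time, so operators on different axes commute.

```python
import numpy as np
from scipy.linalg import expm

def staggered_hamiltonian(N, intra=True):
    """Nearest-neighbour hopping on a ring of N sites (N even), keeping
    only intra-dimer bonds (2m, 2m+1) or inter-dimer bonds (2m+1, 2m+2).
    This SSH-type form is assumed for H_0 / H_1 of Eq. 1."""
    H = np.zeros((N, N))
    for i in range(0 if intra else 1, N, 2):
        j = (i + 1) % N
        H[i, j] = H[j, i] = 1.0
    return H

N = 4
Hx0 = staggered_hamiltonian(N, intra=True)
Hy0 = staggered_hamiltonian(N, intra=True)
theta = np.pi / 4

# W_x0 acts on the x factor only, W_y0 on the y factor only
Wx0 = expm(1j * theta * np.kron(Hx0, np.eye(N)))
Wy0 = expm(1j * theta * np.kron(np.eye(N), Hy0))
```

Because $H_{xl}\otimes\mathbbm{1}_{y}$ and $\mathbbm{1}_{x}\otimes H_{yl}$ commute, the order of the $x$ and $y$ sub-steps is irrelevant in this separable construction.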
##### Multi-dimensional Rydberg DTQW in a lattice of dimers
provides an enriched non-separable Hamiltonian. Fig. 2b shows the connectivity
graphs over the 3D lattice with the coupling Hamiltonians presented in Eq2 .
To realize this set of couplings, the Rydberg quantization axis must be
changed to be along the exchange orientation. The quantization axis is defined
by the orientation of polarized lasers. The lattice structure in this model
consists of a unique lattice constant along the $y$ and $z$ dimensions, while
containing two inter- and intra-cell lattice constants along the $x$ dimension.
These coupling Hamiltonians could be used for coinless DTQW operators
$W_{l}=\text{e}^{\text{i}H_{l}\theta_{l}}$ with $l=\\{x,xy,xz\\}_{[0,1]}$ as
explained in Sec. II. Fine tuning of presented connectivities and operation
fidelities in multi-dimensions are discussed in Sec. V.
##### Multi-dimensional coined DTQW in a lattice of dimers:
The proposed system of Fig. 2b, can be used for the implementation of multi-
dimensional coined DTQW, where the coin is formed by the relative population
of odd and even sites in each sub-lattice. The coin rotation
$R_{\theta}=\exp(\text{i}H_{x0}\theta)=\cos(\theta)\mathbbm{1}_{\\{e,o\\}}+\text{i}\sin(\theta)({|{e}\rangle}\\!{\langle{o}|}+{|{o}\rangle}\\!{\langle{e}|})$
is applied by the intra-dimer population rotation. The transition operators
are applied by concatenated implementation of intra- and inter-dimer
population swapping i.e. $T_{xyz}=\text{e}^{\text{i}H_{xyz1}\pi/2}R(\pi/2)$,
$T_{xy}=\text{e}^{\text{i}H_{xy1}\pi/2}R(\pi/2)$,
$T_{xz}=\text{e}^{\text{i}H_{xz1}\pi/2}R(\pi/2)$
$T_{x}=\text{e}^{\text{i}H_{x1}\pi/2}R(\pi/2)$,
$T_{y}=\text{e}^{\text{i}H_{xy0}\pi/2}R(\pi/2)$, and
$T_{z}=\text{e}^{\text{i}H_{xz0}\pi/2}R(\pi/2)$. Extended forms of transition
operators are presented in Eq3 .
##### Topological periodic boundary conditions
The long-range Rydberg interaction could be used for making different
topological periodic boundary conditions. Fig. 3 shows two examples of DTQW
over (a) Möbius strip and (b) torus topological surfaces. While the torus
boundary condition could be implemented by a global laser over limited lattice
sites, forming other topological surfaces, e.g. the Möbius strip and the Klein
bottle, requires local laser excitations. During the boundary operation step
$W_{yb}$ ($W_{xb}$) with the local lasers, the pair sites with the same pentagon
(hexagon) numbers would be excited to ${|{s}\rangle}$ with the detuning adjusted
to their interactions. The sequence of the QW operators
would be $U=W_{y0}W_{y1}W_{yb}W_{x0}W_{x1}W_{xb}$.
Figure 3: Rydberg DTQW over topological surfaces (a) Möbius strip and (b)
torus. To realize the boundary conditions, the pair sites with the same
pentagon (hexagon) number would get excited to the $nS$ state with local lasers
over the boundary operation step $W_{yb}$ ($W_{xb}$). Concatenated operation
of $W_{y0}W_{y1}W_{yb}W_{x0}W_{x1}W_{xb}$, performs the QW on the desired
topological surface.
## IV Decoherence in Rydberg DTQW
### IV.1 Non-unitary dynamics
The non-unitary dynamics of the quantum walk can be described by the
projection of quantum state onto the pointer states Schl07 . In this model the
pointer state projector $\Pi_{x}={|{x}\rangle}\\!{\langle{x}|}$, projects the
walker into the $x^{\text{th}}$ site. In the presented Rydberg QW model, the
evolution of quantum walker over a single step is mainly coherent. Hence, the
decoherence could be applied in a discrete-time manner after each step. The
effective stroboscopic non-unitary operator would be
$\rho_{i+1}=(1-P_{s})W\rho_{i}W^{\dagger}+P_{s}\sum\limits_{x}\Pi_{x}W\rho_{i}W^{\dagger}\Pi_{x}^{\dagger}$
(3)
where $\rho_{i}=\sum_{x,x^{\prime}}\rho(x,x^{\prime}){|{x}\rangle}\\!{\langle{x^{\prime}}|}$ is
the density operator after the $i^{\text{th}}$ step. The spatial de-phasing
terms discussed in the next section would reduce the off-diagonal coherence
terms with a probability of $P_{s}$ after each step and hence suppress the
superposition effect. In the totally diffusive case $P_{s}=1$, the absence of
the interference of quantum paths would only leave a classical Gaussian
probability distribution in the diagonal terms, see Fig. 4.
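The stroboscopic map of Eq. 3 is easy to prototype: the projector sum $\sum_{x}\Pi_{x}\rho\Pi_{x}^{\dagger}$ simply keeps the diagonal of the density matrix. A minimal sketch, with a hypothetical 3-site example and a trivial walk operator:

```python
import numpy as np

def dephasing_step(rho, W, P_s):
    """One step of Eq. 3: coherent walk W, then site dephasing that
    keeps only the populations with probability P_s."""
    rho = W @ rho @ W.conj().T
    return (1 - P_s) * rho + P_s * np.diag(np.diag(rho))

rho0 = np.full((3, 3), 1 / 3, dtype=complex)   # fully coherent superposition
W = np.eye(3, dtype=complex)                   # trivial walk for illustration
rho1 = dephasing_step(rho0, W, P_s=1.0)        # totally diffusive case
```

In the totally diffusive case the off-diagonal coherences vanish after a single step while the trace is preserved, leaving only the classical populations.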
Figure 4: Effects of de-phasing on (left column) density matrix terms
$\rho(x,x^{\prime})$, (middle column) anti-diagonal coherence $\rho(x,-x)$ and
(right column) probability distribution $\rho(x,x)$ after 50 Hadamard steps
$\theta=\pi/4$ of coinless Rydberg DTQW over a 1D lattice with N=101 sites.
The three rows from top to bottom are corresponding to $P_{s}=[0,0.05,1]$ de-
phasing probability over single step.
To define the coherence length $l_{co}$, one can look at the suppression of the
anti-diagonal terms $|\rho(x,-x)|=|\rho_{0}(x,-x)|\exp(-|x|/l_{co})$
with respect to the unperturbed ones $\rho_{0}(x,-x)$. The coherence length is
plotted as a function of $P_{s}$ in Fig. 5a. The crossover between coherent
and incoherent cases can also be seen in the rate of mean square displacement
$\langle x^{2}\rangle=\sum\limits_{x}\rho(x,x)x^{2}$. Fig. 5b, shows that
within the coherent period $t<\Gamma^{-1}$ the walker propagates ballistically
$\propto t^{2}$ while in the diffusive regime $t>\Gamma^{-1}$ it propagates
linearly with time. Overall, Fig. 5, shows that coherent staggered quantum
walk over a lattice with $N\approx P_{s}^{-1}$ sites is realizable that would
propagate ballistically over $i=P_{s}^{-1}$ steps. In the next section,
$P_{s}$ is evaluated in the proposed scheme.
Figure 5: Effects of dephasing on (a) coherence length and (b) ballistic
spreading. (a) The number of lattice sites over which the coherence is
preserved is numerically calculated; see the text for more details. (b)
Numerically calculated $\langle x^{2}\rangle$ after 50 steps on a lattice of
N=101 sites as a function of $\gamma t=iP_{s}$, where $i$ is the step number,
$\gamma$ is the total dephasing rate and $t$ is the operation time. For time
scales below $\gamma^{-1}$ the square displacement is ballistic $\propto
t^{2}$, while above that it becomes diffusive $\propto t$. The dotted-dashed
line is a quadratic fit to the ballistic part $22.5(\gamma t)^{2}$. The hopping
angle in (a) is $\theta=\frac{\pi}{4}$.
### IV.2 Sources of errors
#### Laser error sources
Laser noise over the Rydberg excitation process causes de-phasing. The laser
noise $\gamma_{la}$ encountered here is what remains after laser locking in
the two-photon excitation. Fig. 6a simulates the de-phasing
probability $P_{s}$ after one step as a function of the relative laser noise
$\frac{\gamma_{la}}{\Omega}$. In this figure, effects of the laser-noise over
the Rydberg population rotation are quantified using master equation with
$\sqrt{\gamma_{la}}{|{s}\rangle}\\!{\langle{s}|}$ Lindblad term. The plot
considers 70S-70P interaction with $V=\frac{-8.4}{3^{3}}$GHz over the lattice
constant of 3$\mu$m, and $\Omega=|V|/100$. In the recent experiments Ber17 ;
Lev18 , laser noise is suppressed below the effective bandwidth of the lock,
resulting in narrow line-widths of 0.5 kHz for the two-photon Rabi frequency
of $\Omega/2\pi=$2MHz. Thus, the dephasing probability over one Hadamard step
($\theta=\frac{\pi}{4}$) is $P_{s}\approx 10^{-4}$.
Spontaneous scattering from the optical lattice lasers as well as the Rydberg
exciting lasers destroys the coherence by projecting the quantum walker’s
wave-function into a single lattice site. The new advances in clock optical
lattices have suppressed the trap lasers’ scattering rate, reaching coherence
times above 12s Ros19 making the corresponding dephasing per step negligible
$P_{s}\approx 10^{-8}$. Spontaneous emission also occurs from the intermediate
state ${|{p}\rangle}$ over the two-photon Rydberg
excitation ${|{g}\rangle}-{|{s}\rangle}$. The two lasers $\Omega_{1}$,
$\Omega_{2}$ are detuned from the intermediate level by $\Delta_{p}$. The
dominant decay channel from ${|{p}\rangle}$ is back into the ground state
${|{g}\rangle}$. This would result in an effective Lindblad de-phasing term
$\sqrt{\gamma_{p}}{|{g}\rangle}\\!{\langle{g}|}$, where
$\gamma_{p}/2\pi=1.4$MHz is decay rate of the intermediate level
${|{p}\rangle}=6P_{3/2}$ in Rb. Over one quantum step operation time of
$\frac{2\pi}{\Omega}$ with effective Rabi frequency of
$\Omega=\frac{\Omega_{1}\Omega_{2}}{2\Delta_{p}}$, the de-phasing probability
after one step would be
$P_{s}=\frac{\pi\gamma_{p}}{2\Delta_{p}}(\frac{\Omega_{1}}{\Omega_{2}}+\frac{\Omega_{2}}{\Omega_{1}})$
Saf10 . Using the experiment parameters in exciting ${|{r}\rangle}=70S$ via
${|{p}\rangle}=6P_{3/2}$ intermediate level Lev18 with
$(\Omega_{1},\Omega_{2})=2\pi\times(60,40)$MHz for (420nm,1013nm) lasers and
the detuning of $\Delta_{p}/2\pi=600$MHz the effective Rabi frequency would be
$\Omega/2\pi=2$MHz and de-phasing probability over single quantum step would
be $P_{s}=2.5\times 10^{-4}$.
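These parameters can be checked directly. The sketch below (all frequencies in units of $2\pi$ MHz) reproduces the effective two-photon Rabi frequency $\Omega=\Omega_{1}\Omega_{2}/(2\Delta_{p})$ and evaluates the stated de-phasing expression; the absolute value of $P_{s}$ depends on the conventions adopted for $\gamma_{p}$ and the pulse area.

```python
import numpy as np

# two-photon excitation parameters quoted from Lev18 (units of 2*pi MHz)
Omega1, Omega2 = 60.0, 40.0   # 420 nm and 1013 nm lasers
Delta_p = 600.0               # detuning from the 6P_3/2 intermediate level
gamma_p = 1.4                 # intermediate-level decay rate

Omega = Omega1 * Omega2 / (2 * Delta_p)  # effective Rabi frequency: 2*pi x 2 MHz
P_s = (np.pi * gamma_p / (2 * Delta_p)) * (Omega1 / Omega2 + Omega2 / Omega1)
```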
#### Lattice geometry and confinement
Known detuning would cause a phase that gets absorbed in the definition of
${|{g_{k}}\rangle}$ state. However, random fluctuations of detuning
$E(\delta)$ caused by spatial uncertainty and Doppler broadening leads to
spatial de-phasing (see Fig. 6b) and could affect the population rotation
designed for the quantum jump protocol.
Confinement: Atomic motion in optical lattice causes an average uncertainty of
interaction $E_{V}$. The atomic motion could be made negligible by sideband
cooling within the optical tweezer Kau12 ; Tho13 and optical lattice Bel13
to the motional ground state. Considering a 3$\mu$m lattice constant, and the
motional ground state expansion of 20nm for Rb atoms with trap frequency of
$\omega_{tr}/2\pi=150$kHz (close to 125kHz in Kau12 ; Tho13 ), the average
relative uncertainty of interaction would be $\frac{E_{V}}{\Omega}=0.1$ for
principal number $n$=70 and $\Omega/2\pi=2$MHz. This would cause
$P_{s}=2\times 10^{-4}$ de-phasing per step, see Fig. 6b.
Doppler broadening: Detuning error could also be caused by Doppler broadening.
Using counter-propagating 1013nm and 420nm lasers for two-photon Rydberg
excitation, the Doppler detuning would be $\delta_{D}=(k_{1}-k_{2})v$.
Considering the sideband cooling to motional ground state, the maximum
velocity in the thermal motion would be
$v=\sqrt{\frac{\hbar\omega_{tr}}{2m}}=18$mm/s. The random Doppler shift
generates a maximum relative uncertainty of $\frac{\delta_{D}}{\Omega}=0.01$
for $\Omega/2\pi=2$MHz. Corresponding de-phasing probability per step is
$P_{s}=3\times 10^{-6}$, see Fig. 6b.
(a) (b)
Figure 6: Spatial de-phasing probability in single step $P_{s}$ as a function
of (a) relative laser noise $\gamma_{la}/\Omega$ and (b) transition detuning
errors $\frac{E(\delta)}{\Omega}$.
#### Spontaneous emission
Rydberg levels’ spontaneous emission reduces the diagonal and off-diagonal
terms of the density matrix alike. Hence, after the projective measurement, the
spontaneous emission does not affect the coherence of the QW operation.
However, it would limit the number of steps before losing the quantum walker, as
discussed in Sec. V.2 in detail.
## V Operation Fidelity in 3D lattice
The implementation of the 3D QW with the coupling tessellations presented in
Fig. 2b benefits from the angular dependence of the interaction. After formulating the
anisotropic exchange interaction in Sec. V.1, the operation fidelity of
different coupling tessellations are quantified in Sec. V.2. Then, the scaling
of achievable step numbers for different principal numbers are discussed.
### V.1 Angular dependent interaction
The angular-dependent exchange interaction of
${|{nS_{1/2}1/2,\,nP_{3/2}3/2}\rangle}{\stackrel{{\scriptstyle
V}}{{\rightleftharpoons}}}{|{nP_{3/2}3/2,\,nS_{1/2}1/2}\rangle}$ is given by
$V=\frac{\bm{\mu}_{1}.\bm{\mu}_{2}-3(\bm{\mu}_{1}.\hat{R})(\bm{\mu}_{2}.\hat{R})}{4\pi\epsilon_{0}R^{3}}$
(4)
where $R$ is the interatomic separation and $\bm{\mu}_{k}$ is the electric
dipole matrix element connecting the initial and final Rydberg states of the
$k^{\text{th}}$ atom. The angular dependent interaction between sites $i$ and
$j$ could be expanded to
$\displaystyle
V_{ij}(\phi)=\frac{1}{4\pi\epsilon_{0}R_{ij}^{3}}[f_{1}(\phi)(\mu_{1+}\mu_{2-}+\mu_{1-}\mu_{2+}+2\mu_{1z}\mu_{2z})$
$\displaystyle+f_{2}(\phi)(\mu_{1+}\mu_{2z}-\mu_{1-}\mu_{2z}+\mu_{1z}\mu_{2+}-\mu_{1z}\mu_{2-})$
(5) $\displaystyle-
f_{3}(\phi)(\mu_{1+}\mu_{2+}-\mu_{1-}\mu_{2-})]=\frac{C_{3}(\phi)}{R_{ij}^{3}},$
where $\phi$ shows inter-atomic orientation with respect to the quantization
axis, defined by the propagating direction of exciting polarized lasers.
Dipole operators in the spherical basis are denoted by
$\mu_{k,\pm}=\mp(\mu_{k,x}\pm i\mu_{k,y})/\sqrt{2}$. The terms associated with
pre-factors $f_{1}(\phi)=(1-3\cos^{2}\phi)/2$,
$f_{2}(\phi)=\frac{3}{\sqrt{2}}\sin\phi\cos\phi$ and
$f_{3}(\phi)=3/2\sin^{2}\phi$ couple Rydberg pairs. Fig. 7b represents the
angular dependent exchange interaction for two principal numbers.
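The angular pre-factors can be verified numerically: along the quantization axis only the exchange term $f_{1}$ survives with $f_{1}(0)=-1$, and $f_{1}$ vanishes at the magic angle $\cos^{2}\phi=1/3$. A small sketch:

```python
import numpy as np

def f1(phi):
    """Exchange pre-factor (1 - 3 cos^2(phi)) / 2 of Eq. 5."""
    return (1 - 3 * np.cos(phi)**2) / 2

def f2(phi):
    return 3 / np.sqrt(2) * np.sin(phi) * np.cos(phi)

def f3(phi):
    return 1.5 * np.sin(phi)**2

magic = np.arccos(1 / np.sqrt(3))   # f1 changes sign at this angle
```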
Figure 7: The anisotropic $nS-nP$ exchange interaction $V$ provides a wider
range of laser detuning in both positive and negative sides, available for
addressing the desired site. Here $\phi$ is the angle between the inter-atomic
orientation and the quantization axis. Quantization axis is defined by the
propagation direction of the polarized exciting laser.
### V.2 Coupling tessellations’ fine tuning in 3D
This section defines the QW fidelity in terms of the accuracy of population
transfer to the desired site and does not account for the decoherence effects
discussed in the previous section. Tuning the laser to be in resonance with
the interacting sites separated by ${\bf R}_{ij}$ i.e. $\Delta=-V_{ij}$, the
operation infidelity would be the sum of population leakage to unwanted sites
$k$ i.e.
$F_{ij}=\sum\limits_{k}\frac{\Omega^{2}/4}{(V_{ij}-V_{ik})^{2}}.$ (6)
Since the laser is oriented along $R_{ij}$ the denominator would be
$C_{3}(0)/R_{ij}^{3}-C_{3}(\phi_{k})/R_{ik}^{3}$ with $\phi_{k}$ being the
angle between ${\bf R}_{ij}$ and ${\bf R}_{ik}$. The infidelity of desired
coupling tessellations of Fig. 2b are plotted in Fig. 8a as a function of
$\Omega/\Delta$ with the analytical approach of Eq. 6 and numerical evaluation
of the Schrödinger equation, considering 8 neighboring lattice sites at different
inter-atomic orientations.
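Eq. 6 can be sketched as a direct sum over the unwanted sites. The interaction values below are hypothetical placeholders; the point of the example is that the leakage infidelity scales as $\Omega^{2}$, so halving the drive suppresses it fourfold.

```python
import numpy as np

def leakage_infidelity(Omega, V_target, V_others):
    """Population leakage of Eq. 6 when the laser is tuned to V_target."""
    V_others = np.asarray(V_others, dtype=float)
    return float(np.sum(Omega**2 / 4 / (V_target - V_others)**2))

# hypothetical interactions of the targeted site and two unwanted neighbours
F1 = leakage_infidelity(0.10, V_target=8.4, V_others=[2.49, 1.05])
F2 = leakage_infidelity(0.05, V_target=8.4, V_others=[2.49, 1.05])
```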
The contrast of inter- and intra-dimer lattice constants
$(a_{x0}-a_{x1})/a_{x0}$ would define the speed of operation. In general,
better contrast leads to faster operation for a given fidelity, see Fig. 8b.
However, at some geometries, specific lattice sites might get close to
resonance with the laser. This would require slower operation as can be seen
at the local minimum in the dashed line of Fig. 8b at $a_{x1}=2.2a_{x0}$.
Realizable QW step number scales linearly with Rydberg principal number $n$.
While Rydberg spontaneous emission does not affect the coherence, see Sec.
IV.2, it would limit the operation number. In this sense, faster operation at
constant $\Omega/\Delta$ would require stronger interaction
$V_{ij}=C_{3}/a_{ij}^{3}$. While $C_{3}=8.4(n/70)^{3}\,$MHz$\cdot\mu$m${}^{3}$ for Rb
atoms, the minimum lattice constant is limited by the Le Roy radius and hence
scales as $a_{min}=950(n/70)^{2}$nm. Hence, the loss over a single QW
operation is scaled by $2\pi/(\Omega\tau)\propto n^{-1}$ where
$\tau=450(n/70)^{-3}\mu$s Bet09 is the Rydberg quantum walker lifetime in Rb
atoms at $T=77$K. Considering the $W_{x}$ connectivity with $F=97\%$ operation
accuracy, after $N=210(n/70)$ step numbers, the Poissonian probability of not
losing the walker would be 40%. These numbers would be significantly enhanced
by coherent fast excitation of circular states Sig17 ; Car20 featuring
exchange interaction and several minutes lifetime Ngu18 .
Finally, the initialization and detection must be taken into account while
quantifying the device operation. After preselection, a probability of
correct state initialization of more than 98% has been achieved Ber17 .
Fluorescence detection of ground-state atoms has been realized with 98%
fidelity in Rb Kwo17 and 0.99991 fidelity in Sr Cov19 .
(a) (b)
Figure 8: Fidelity of quantum walk in a 3D lattice of dimers. (a) Infidelity
of desired coupling tessellations of Fig. 2b are plotted as a function of
$\Omega/\Delta$ with analytical (lines) and numerical simulation (circles)
considering 8 neighbouring lattice sites. (b) The required relative laser
coupling $\Omega/\Delta$ is plotted as a function of the contrast of inter-
and intra-dimer lattice constants $(a_{x0}-a_{x1})/a_{x0}$ to realize
connectivities with 99% fidelity. The local minimum in the dashed line is
caused by specific lattice site that gets close to resonance with the laser at
$a_{x1}=2.2a_{x0}$. The lattice constants are $a_{x0}=a_{y}=a_{z}=1\mu$m in
(a,b) and $a_{x1}=1.5$ in (a).
## VI Topologically protected edge state/ Floquet topological insulators
Applications of SSH models in making topological matter are vastly studied.
The proposed discrete-time SSH model in this article could be used for making
an enriched form of Floquet topological insulators and topologically protected
edge states, similar to the coined DTQWs approach Kit10 .
### VI.1 Fourier transformed Hamiltonian
For the QWs on an infinitely extended lattice, or a lattice with periodic
torus boundary conditions, the QW operators can be transformed into quasi-
momentum space $\tilde{H}$. The Fourier transformation of the odd $o$ and even
$e$ sites in 1D is given by
$\displaystyle{|{m,e}\rangle}=\frac{1}{\sqrt{\frac{N}{2}}}\sum\limits_{k}\text{e}^{\text{i}km}{|{k,e}\rangle}$
(7)
$\displaystyle{|{m,o}\rangle}=\frac{1}{\sqrt{\frac{N}{2}}}\sum\limits_{k}\text{e}^{\text{i}km-\text{i}k\bar{a}_{1}}{|{k,o}\rangle}$
where ${|{{k},e}\rangle}={|{k}\rangle}\otimes{|{e}\rangle}$. Also
$\bar{a}_{1}=\frac{a_{1}}{a_{0}+a_{1}}$ with $a_{0}$, $a_{1}$ being the inter-
and intra-dimer lattice constants. The wave-number is chosen to take on values
from the first Brillouin zone $k=l\frac{2\pi}{N/2}$ with $1\leq
l\leq\frac{N}{2}$. The Fourier transformed Hamiltonian is obtained by
substituting Eq. 7 into the QW Hamiltonians of Eq. 1, and simplifies to
$\displaystyle\tilde{H}_{0}=\sum_{k}\begin{pmatrix}0&\text{e}^{\text{i}{k}\bar{a}_{1}}\\\
\text{e}^{-\text{i}{k}\bar{a}_{1}}&0\end{pmatrix}{|{k}\rangle}\\!{\langle{k}|}$
(8)
$\displaystyle\tilde{H}_{1}=\sum_{k}\begin{pmatrix}0&\text{e}^{-\text{i}k(1-\bar{a}_{1})}\\\
\text{e}^{\text{i}k(1-\bar{a}_{1})}&0\end{pmatrix}{|{k}\rangle}\\!{\langle{k}|},$
with matrices being presented in the $\\{{|{o}\rangle},{|{e}\rangle}\\}$
basis.
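The two-band structure of Eq. 8 can be checked momentum by momentum. The sketch below builds $\tilde{H}_{0}(k)$ and $\tilde{H}_{1}(k)$ as $2\times 2$ matrices in the $\{{|{o}\rangle},{|{e}\rangle}\}$ basis and confirms they are Hermitian with eigenvalues $\pm 1$.

```python
import numpy as np

def H0k(k, a1bar):
    """Fourier-transformed intra-dimer Hamiltonian of Eq. 8 ({|o>, |e>} basis)."""
    return np.array([[0, np.exp(1j * k * a1bar)],
                     [np.exp(-1j * k * a1bar), 0]])

def H1k(k, a1bar):
    """Fourier-transformed inter-dimer Hamiltonian of Eq. 8."""
    return np.array([[0, np.exp(-1j * k * (1 - a1bar))],
                     [np.exp(1j * k * (1 - a1bar)), 0]])

k, a1bar = 0.7, 0.4   # example momentum and relative intra-dimer constant
```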
### VI.2 Edge states in 1D
The Floquet topological phases of Rydberg coinless discrete-time quantum walk
can be accessed by looking at the full-time evolution of the walk. The quantum
operator in momentum basis
$\displaystyle\tilde{W}_{\text{eff}}=\text{e}^{\text{i}\frac{\theta_{0}}{2}\tilde{H}_{0}}\text{e}^{\text{i}\theta_{1}\tilde{H}_{1}}\text{e}^{\text{i}\frac{\theta_{0}}{2}\tilde{H}_{0}}$
(9)
can be written as
$\tilde{W}_{\text{eff}}=\text{e}^{\text{i}\tilde{H}_{\text{eff}}T}$, where $T$
is the period of applying a set of QW operators of Eq. 9. In a lattice of
dimers, $\tilde{H}_{\text{eff}}$ has two bands and hence the effective
Hamiltonian could be written as
$\tilde{H}_{\text{eff}}=\sum\limits_{k}E(k)\bm{n}(k).\boldsymbol{\sigma}\,{|{k}\rangle}\\!{\langle{k}|}$
(10)
where $\boldsymbol{\sigma}=(\sigma_{x},\sigma_{y},\sigma_{z})$ is the vector
of Pauli matrices operating in the odd-even basis
$\\{{|{o}\rangle},{|{e}\rangle}\\}$ and the $\bm{n}(k)=(n_{x},n_{y},n_{z})$
defines the quantization axis for the spinor eigenstates at each momentum $k$.
The quantization axes of the eigenstates are given by
$n_{x}=-\frac{\sin\theta_{0}\cos\theta_{1}\cos(k\bar{a}_{1})+\cos(k+k\bar{a}_{1})\cos\theta_{0}\sin\theta_{1}}{\sin E(k)}$,
$n_{y}=\frac{-\cos\theta_{0}\sin\theta_{1}\cos k\sin(k\bar{a}_{1})+\sin\theta_{0}\sin k\cos(k\bar{a}_{1})-\sin\theta_{0}\cos\theta_{1}\sin(k\bar{a}_{1})}{\sin E(k)}$,
and $n_{z}=0$.
The band structure of the quasi-energy is formed by the discrete translational
invariance in time and is given by
$\cos E(k)=\cos\theta_{0}\cos\theta_{1}-\sin\theta_{0}\sin\theta_{1}\cos(k),$
(11)
and plotted in Fig. 9b. The two bands correspond to the two degrees of freedom
of the system. The minimum and maximum separations of the two bands are given by
$|\theta_{0}\pm\theta_{1}|$. With homogeneous coupling
$\theta_{0}=\theta_{1}$, the system resembles a conductor. In that case,
there are plane-wave eigenstates with small energy, which transport the
excitation from one end of the chain to the other. As the gap due to the
difference of the hopping angles opens, the energy of occupied states is
lowered, while unoccupied states move to higher energies. Thus, the staggering
is energetically favourable. Also, $\theta_{0}=\pi/2,\theta_{1}=0$ results in
a chain of isolated dimers. Hence there will be no transport of the
walker and the energy $E$ would be $k$ independent.
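The band extrema follow directly from Eq. 11: at $k=0$ one finds $\cos E=\cos(\theta_{0}+\theta_{1})$, and at $k=\pi$, $\cos E=\cos(\theta_{0}-\theta_{1})$. A minimal numeric check:

```python
import numpy as np

def quasi_energy(k, theta0, theta1):
    """Upper quasi-energy band E(k) of Eq. 11 (the lower band is -E(k))."""
    return np.arccos(np.cos(theta0) * np.cos(theta1)
                     - np.sin(theta0) * np.sin(theta1) * np.cos(k))

theta0, theta1 = np.pi / 4, np.pi / 8
E0 = quasi_energy(0.0, theta0, theta1)      # theta0 + theta1
Epi = quasi_energy(np.pi, theta0, theta1)   # |theta0 - theta1|
```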
Figure 9: Topological edge state in 1D coinless DTQW. (a) Topological
invariants (winding number $\nu_{0},\nu_{\pi}$) associated with the hopping
angles $\theta_{0}$ and $\theta_{1}$, see Eq. 13. (b) Band structure of the
quantum walk for $\theta_{0}=\pi/4$,
$\theta_{1}=[\frac{\pi}{8},\frac{\pi}{4},\frac{3\pi}{8}]$. (c,d) shows the
walker’s wave-function after 100 steps for the space-dependent hopping angles
explained in Eq. 14 and over the range depicted by arrows in (a). The walker
is initialized at the centre $x=0$ where the sharp change of $\theta$ happens
$w=0.1a_{0}$. In (c) the positive and negative sites have different winding
numbers, providing the topological phase transition and leading to bound state
at the centre. In (d) the evolution is adiabatic (no crossing of the phase
boundary) and hence no population traps at the centre.
In 1D, the topological phase originates from the chiral symmetry. The
chiral symmetry exists if there is a unitary operator fulfilling the relation
$\Gamma H_{\text{eff}}\Gamma^{\dagger}=-H_{\text{eff}}$. While the
trivial QW operator
$\tilde{W}_{\text{eff}}=\text{e}^{\text{i}\theta_{1}\tilde{H}_{1}}\text{e}^{\text{i}\theta_{0}\tilde{H}_{0}}$
does not show the chiral symmetry, splitting one of the steps and moving the
time frame to either of the following forms
$\displaystyle\tilde{W}_{\text{eff}}=\text{e}^{\text{i}\frac{\theta_{0}}{2}\tilde{H}_{0}}\text{e}^{\text{i}\theta_{1}\tilde{H}_{1}}\text{e}^{\text{i}\frac{\theta_{0}}{2}\tilde{H}_{0}}$
(12)
$\displaystyle\tilde{W}^{\prime}_{\text{eff}}=\text{e}^{\text{i}\frac{\theta_{1}}{2}\tilde{H}_{1}}\text{e}^{\text{i}\theta_{0}\tilde{H}_{0}}\text{e}^{\text{i}\frac{\theta_{1}}{2}\tilde{H}_{1}},$
makes the quantum walk exhibit chiral symmetry with $\Gamma=\sigma_{z}$.
The chiral symmetry enforces $\bm{n}(\theta_{0},\theta_{1},k)$ to rotate
in the plane perpendicular to $\mathbf{\Gamma}$. The corresponding topological
invariant is the number of times $\bm{n}(\theta_{0},\theta_{1},k)$ winds
around the origin as the quasi-momentum $k$ runs over the first Brillouin zone,
called the winding number
$\nu=\frac{1}{2\pi}\int\limits_{0}^{2\pi}\frac{1}{|\boldsymbol{n}|^{2}}(n_{x}\partial_{k}n_{y}-n_{y}\partial_{k}n_{x})\text{d}k.$
(13)
The two nonequivalent shifted time-frames $\tilde{W}_{\text{eff}}$,
$\tilde{W}^{\prime}_{\text{eff}}$ (Eq. 12) lead to two winding numbers
$\nu,\nu^{\prime}$. The two invariants $\nu_{0}=(\nu+\nu^{\prime})/2$,
$\nu_{\pi}=(\nu-\nu^{\prime})/2$ completely describe the topology. The
phase diagram of the winding number is plotted in Fig. 9a. The band structure
in Fig. 9b are made of the energy eigenvalues $E(k,\theta_{0},\theta_{1})$.
Manipulating the hopping angles $\theta_{0},\theta_{1}$ within the
distinguished regions of Fig. 9a allows the system to transit continuously
between the band structures without closing the energy gap, i.e.
without changing the topological character of the system, which here is the
winding number. At the borders separating distinct topological regions, the
band gap closes, see Fig. 9b.
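The winding number of Eq. 13 can be evaluated numerically from the components of $\bm{n}(k)$ given in Sec. VI.2. The sketch below uses the $\bar{a}_{1}\to 0$ gauge, in which $\bm{n}(k)$ closes over the Brillouin zone, and recovers the two phases on either side of the $\theta_{0}=\theta_{1}$ border of Fig. 9a.

```python
import numpy as np

def winding_number(theta0, theta1, nk=4001):
    """Numerical evaluation of Eq. 13 in the a1bar -> 0 gauge."""
    k = np.linspace(0.0, 2 * np.pi, nk)
    sinE = np.sqrt(1 - (np.cos(theta0) * np.cos(theta1)
                        - np.sin(theta0) * np.sin(theta1) * np.cos(k))**2)
    nx = -(np.sin(theta0) * np.cos(theta1)
           + np.cos(k) * np.cos(theta0) * np.sin(theta1)) / sinE
    ny = np.sin(theta0) * np.sin(k) / sinE
    angle = np.unwrap(np.angle(nx + 1j * ny))   # accumulated phase of n(k)
    return (angle[-1] - angle[0]) / (2 * np.pi)

nu_trivial = winding_number(np.pi / 4, np.pi / 8)       # theta0 > theta1
nu_topological = winding_number(np.pi / 8, np.pi / 4)   # theta0 < theta1
```

The gap stays open for $\theta_{0}\neq\theta_{1}$, so $\sin E(k)$ never vanishes and the winding is a well-defined integer.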
The topological character can be revealed at the boundary between
topologically distinct phases. To implement such a boundary, one can apply an
inhomogeneous spatial hopping angle of the form
$\theta_{i}(x)=\frac{\theta_{i-}+\theta_{i+}}{2}+\frac{\theta_{i+}-\theta_{i-}}{2}\tanh(x/w)$
(14)
where $w$ determines the spatial width of phase transition region which
defines the width of the bound state. The variation of the hopping angle in
the Rydberg system could be realized by different sets of exciting lasers or
by applying space-dependent Stark-shift using an external field. Fig. 9c(d)
shows the walker’s wave-function after 100 steps for the cases with (without)
the phase transition. The walker is initialized at $x=0$ in both cases. In
Fig. 9c the positive and negative sites have different winding numbers. This
would lead to the topological phase transition and hence forms bound state at
the centre. In Fig. 9d, the evolution is adiabatic (i.e. there is no crossing
of the phase boundary) and no population traps at the centre.
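The boundary profile of Eq. 14 can be sketched directly; far from $x=0$ the hopping angle saturates at $\theta_{i\mp}$, and $w$ sets the width of the transition region.

```python
import numpy as np

def hopping_profile(x, theta_minus, theta_plus, w):
    """Space-dependent hopping angle of Eq. 14: interpolates between
    theta_minus (x -> -inf) and theta_plus (x -> +inf) over a width w."""
    return ((theta_minus + theta_plus) / 2
            + (theta_plus - theta_minus) / 2 * np.tanh(x / w))

x = np.linspace(-50.0, 50.0, 101)
profile = hopping_profile(x, np.pi / 10, 4 * np.pi / 10, w=0.1)
```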
### VI.3 Floquet Topological Insulators with multi-dimensional DTQW
Different classes of Floquet topological insulators could be realized by the
multi-dimensional coupling tessellations introduced in Sec. III. The $nP$
excitation propagates unidirectionally and without backscattering along the
edge, that is a line/surface in 2D/3D lattice, and eventually distributes
uniformly along the boundary. Unlike in 1D, symmetry is not required for the
presence of topological phase in multi-dimensions. Using the operators of Eq3
, a set of Hamiltonian
$W_{\text{eff}}=T_{x}R_{\theta_{0}}T_{y}R_{\theta_{1}}T_{xy}R_{\theta_{0}}$
could be used for making 2D topological insulators with non-vanishing Chern
numbers, see Supp and Kit10 . Unlike in the time-independent QW, in the time-
dependent approach topologically protected (TP) edge states could be formed
even in the cases where the Chern number is zero for all the bands. As an
example, the simple model of coined QW steps
$W_{\text{eff}}=T_{y}R_{\theta_{1}}T_{x}R_{\theta_{0}}$ leads to a 2D
topological insulator, see Supp. Here, the Chern number of both bands is zero
and the presented topological invariant is the Rudner winding number rud13 .
Multi-dimensional topological insulators could also be formed by Rydberg
discrete-time SSH model as discussed in the Supp.
A 3D topological insulator could be realized by the current proposal under a
simple set of operators
$W_{\text{eff}}=T_{x}R_{\theta_{0}}T_{y}R_{\theta_{1}}T_{z}R_{\theta_{0}}$,
where the elements are defined in Eq3 . The topological phase diagram is
plotted in Fig. 10a, where the gap between the two bands closes at $E=0$
($E=\pi$) at the red (blue) borders. TP edge states in 3D exist along the
surfaces in spatial borders between two regions with distinct topological
phases. A particle, that is prepared in the superposition of the TP edge
states, propagates coherently along the spatial border. Fig. 10b demonstrate
TP propagating edge modes by considering an inhomogeneous 3D coined DTQW with
spatially inhomogeneous coin angles. Here, a flat surface borders is
considered. The pair of coin angles inside and outside the strip belongs to
different topological phases. The hopping angles are
$(\theta_{0},\theta_{1})=(4\pi/10,\pi/10)$ inside $3\leq x\leq 5$ and
$(\theta_{0},\theta_{1})=(\pi/10,4\pi/10)$ outside the stripe $x<3$ and $x>5$.
The excitation is initialized on the borders as
${|{\psi_{0}}\rangle}=({|{x=3,y=3,z=3,o}\rangle}+{|{x=5,y=3,z=3,o}\rangle})/\sqrt{2}$.
The excitation would distribute over the border surface after large step
numbers ( $N_{step}=200$ in here).
Figure 10: Edge state in a 3D array of a coined Rydberg DTQW
$W_{\text{eff}}=T_{x}R_{\theta_{0}}T_{y}R_{\theta_{1}}T_{z}R_{\theta_{0}}$
with elements presented in Eq. LABEL:Eq_T3DCoined. (a) The topological phase
diagram is plotted, where the gap between the two bands closes at $E=0$
($E=\pi$) at the red (blue) borders. (b) Edge states are formed on the surface
boundaries $x=3$ and $x=5$ and distribute over the edge surface after large
step numbers ($N_{step}=200$ in here). The hopping angles are
$(\theta_{0},\theta_{1})=(4\pi/10,\pi/10)$ inside $3\leq x\leq 5$ and
$(\theta_{0},\theta_{1})=(\pi/10,4\pi/10)$ outside the stripe $x<3$ and $x>5$.
Periodic boundary condition is imposed along $x$, $y$ and $z$ dimensions. Each
site represents the sum of dimer’s elements population.
## VII Conclusion and Outlook
This article Khazali21 shows that smart Rydberg excitation in a
holographically designed atomic lattice would act as a universal simulator of
topological phases and boundary states by performing varieties of DTQW
operators. In the project’s outlook, the presented model could be used for the
simulation of electron movement on the Fermi surfaces, which are 2D manifolds
fitted in the Brillouin zone of a crystal (electron movement on Fermi
surfaces that cut each other like a Klein bottle surface) Wie16 . The other
extension avenue would be adding synthetic magnetic fields to the current
DTQW model. This would be obtained by applying a gradient of an external
electric field, resulting in a magnetic QW with applications in making Chern
insulators saj18 . The new features of the presented model in applying QW on
topological surfaces provide a platform to study the performance of QW-based
algorithms on topologically ordered databases.
## Supplemental material
## A1: Self-interaction
Having one delocalized Rydberg excitation, the partial Rydberg population at
different sites do not interact with each other. The nonlocal wave function
only gives the probability of finding the single excitation at different
sites.
Simulating the Schrödinger equation showed that a partial population
$P_{{|{p}\rangle}}$ at a specific site $i$ would induce the same population of
the auxiliary state $P_{{|{s}\rangle}}=P_{{|{p}\rangle}}$ in the resonant
sites $i$ and $j$. Hence, when the single walker is not localized, the total
population of the non-local auxiliary Rydberg state would add up to 1.
Therefore the absence of self-interaction argument also applies to the induced
single Rydberg auxiliary ${|{s}\rangle}$ state.
This argument is similar to the absence of self-interaction for a Rydberg
polariton excited by a single photon in an atomic ensemble. Another point of
similarity is the dependence on the Rydberg population. In Rydberg
electromagnetically induced transparency (EIT), the two-step ladder excitation
is driven by a faint quantum field and a strong classical light. The population
of the Rydberg level is a function of the photon number in the quantum light.
Similarly, in the DTQW proposed here, the maximum population of the auxiliary
${|{s}\rangle}$ state excited by the strong laser shining on multiple ground-state
atoms is set by the population of the ${|{p}\rangle}$ state that brings
the strong field into resonance with the laser transition. Therefore, the
argument of the absence of self-interaction also applies to the ${|{s}\rangle}$ single
excitation.
Following this argument, Eq. II includes only the inter-excitation interaction
$V_{S-P}$ and does not contain the self-interactions $V_{S-S}$ and $V_{P-P}$.
### A2: 2D Topological insulator
#### Coined DTQW
Figure 11: Edge states in a 2D array of coined Rydberg DTQW. TP edge states
formed under the coined quantum walk with the operator (a,b)
$W_{\text{eff}}=T_{x}R_{\theta_{0}}T_{y}R_{\theta_{1}}T_{xy}R_{\theta_{0}}$
and (c,d) $W_{\text{eff}}=T_{y}R_{\theta_{1}}T_{x}R_{\theta_{0}}$, with
elements discussed in Eq. LABEL:Eq_T3DCoined. The topological phase diagrams
are plotted with the topological invariant being the (a) Chern number and
(c) Rudner winding number. The energy gap of the two bands closes at 0 ($\pi$)
at the red (blue) borders. (b,d) Edge states form around the linear boundaries at
$x=2$ and $x=4$. The state distributes along the edge after a large number of steps
($N_{step}=200$ here). The hopping angles are
$(\theta_{0},\theta_{1})=(\pi/10,4\pi/10)$ inside $2\leq x\leq 4$ and
$(\theta_{0},\theta_{1})=(4\pi/10,\pi/10)$ outside the stripe, $x<2$ and $x>4$.
Periodic boundary conditions are imposed along both the $x$ and $y$ dimensions. Each
site represents the summed population of the dimer’s elements.
The coined Rydberg DTQW operators of Eq3 could ideally implement topological
insulators with non-vanishing Chern numbers Kit10 . The concatenated operator
$W_{\text{eff}}=T_{x}R_{\theta_{0}}T_{y}R_{\theta_{1}}T_{xy}R_{\theta_{0}}$
could be used for making topological insulators. To quantify the topological
properties of this QW, the effective Hamiltonian in momentum space is
considered,
$\tilde{W}_{\text{eff}}=\text{e}^{\text{i}\tilde{H}_{\text{eff}}T}$. As in
the one-dimensional case discussed above, the discrete-time quantum walk is a
stroboscopic simulator of the evolution generated by $\tilde{H}_{\text{eff}}$
at discrete times. The effective Hamiltonian would be
$\tilde{H}_{\text{eff}}=\sum\limits_{{\bm{k}}}E({\bm{k}})\bm{n}({\bm{k}}).\boldsymbol{\sigma}{|{{\bm{k}}}\rangle}\\!{\langle{{\bm{k}}}|}$
(15)
where $\boldsymbol{\sigma}$ is the vector of Pauli matrices acting on the odd
and even bases, and $\bm{n}({\bm{k}})$ defines the quantization axis for
the spinor eigenstates at each momentum ${\bm{k}}$. The topological invariant
in this two-dimensional Brillouin zone is given by the Chern number as
$C=\frac{1}{4\pi}\int\text{d}{\bm{k}}\,{\bm{n}}.(\partial_{k_{x}}{\bm{n}}\times\partial_{k_{y}}{\bm{n}}).$
(16)
The phase diagram of the quantum walk is plotted in Fig. 11a. Red and blue
borders are associated with points where the energy gap closes at $E=0$ and
$E=\pi$, respectively.
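As a numerical check, the Chern number of Eq. (16) can be evaluated on a discretized Brillouin zone by summing the solid angles swept by $\bm{n}({\bm{k}})$ over each plaquette, which yields the integer degree of the map $\bm{k}\to\bm{n}(\bm{k})$. The sketch below is illustrative: the `qwz` map is a hypothetical Qi-Wu-Zhang-type two-band test model, not the walk Hamiltonian of Eq. (15).

```python
import numpy as np

def solid_angle(a, b, c):
    """Signed solid angle of the spherical triangle (a, b, c); a, b, c are unit vectors."""
    num = np.dot(a, np.cross(b, c))
    den = 1.0 + np.dot(a, b) + np.dot(b, c) + np.dot(c, a)
    return 2.0 * np.arctan2(num, den)

def chern_number(n_of_k, N=60):
    """C = (1/4pi) \\int dk  n . (d_kx n x d_ky n), summed plaquette by plaquette
    over an N x N grid covering the Brillouin zone [0, 2pi)^2."""
    ks = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    total = 0.0
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            kx2, ky2 = ks[(i + 1) % N], ks[(j + 1) % N]   # periodic wrap-around
            n1 = n_of_k(kx, ky)
            n2 = n_of_k(kx2, ky)
            n3 = n_of_k(kx2, ky2)
            n4 = n_of_k(kx, ky2)
            # split each plaquette into two spherical triangles on the n-sphere
            total += solid_angle(n1, n2, n3) + solid_angle(n1, n3, n4)
    return round(total / (4.0 * np.pi))

# Hypothetical test model (Qi-Wu-Zhang type), used only to exercise the routine:
def qwz(kx, ky, m=1.0):
    n = np.array([np.sin(kx), np.sin(ky), m + np.cos(kx) + np.cos(ky)])
    return n / np.linalg.norm(n)
```

For the test model, the routine returns a Chern number of magnitude 1 at $m=1$ and 0 at $m=3$, matching the known topological and trivial phases of that model.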
As in the 1D case, TP edge states exist in 2D at spatial borders between two
regions with distinct topological phases. These states propagate
uni-directionally within the bulk gaps and connect the bulk bands. A particle
prepared in a superposition of the TP edge states propagates coherently
along the spatial border. The chirality of the edge states is topologically
protected: their direction of propagation does not change
under continuous variation of the system parameters as long as the bulk
gaps remain open. Figs. 11b,d demonstrate TP propagating edge modes for an
inhomogeneous 2D coined QW with spatially dependent coin angles.
Flat borders in the form of a strip geometry are considered in the 2D lattice,
where the pairs of coin angles inside and outside the strip belong to
different topological phases. The hopping angles are
$(\theta_{0},\theta_{1})=(\pi/10,4\pi/10)$ inside $2\leq x\leq 4$ and
$(\theta_{0},\theta_{1})=(4\pi/10,\pi/10)$ outside the stripe, $x<2$ and $x>4$.
The excitation is initialized on the borders as
$({|{x=2,y=5,e}\rangle}+{|{x=4,y=5,e}\rangle})/\sqrt{2}$ and distributes
along the border after a large number of steps ($N_{step}=200$ here).
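A minimal numerical sketch of this inhomogeneous coined walk is given below. The coin angles, initial state, and step count follow the description above; the lattice size and array layout are illustrative assumptions.

```python
import numpy as np

Nx, Ny = 10, 10
x = np.arange(Nx)
inside = (x >= 2) & (x <= 4)                      # the stripe between the two borders
theta0 = np.where(inside, np.pi / 10, 4 * np.pi / 10)[:, None]
theta1 = np.where(inside, 4 * np.pi / 10, np.pi / 10)[:, None]

def coin(psi, theta):
    # R_theta = cos(theta) 1 + i sin(theta) sigma_x on the even/odd dimer index
    e, o = psi[..., 0], psi[..., 1]
    return np.stack([np.cos(theta) * e + 1j * np.sin(theta) * o,
                     1j * np.sin(theta) * e + np.cos(theta) * o], axis=-1)

def shift(psi, axis):
    # T: the even component hops one site down, the odd one site up (periodic)
    out = np.empty_like(psi)
    out[..., 0] = np.roll(psi[..., 0], -1, axis=axis)
    out[..., 1] = np.roll(psi[..., 1], +1, axis=axis)
    return out

def step(psi):
    # One step of W_eff = T_y R_theta1 T_x R_theta0 (rightmost operator acts first)
    psi = coin(psi, theta0)
    psi = shift(psi, axis=0)   # T_x
    psi = coin(psi, theta1)
    psi = shift(psi, axis=1)   # T_y
    return psi

# Excitation initialized on the borders: (|x=2,y=5,e> + |x=4,y=5,e>)/sqrt(2)
psi = np.zeros((Nx, Ny, 2), dtype=complex)
psi[2, 5, 0] = psi[4, 5, 0] = 1 / np.sqrt(2)
for _ in range(200):
    psi = step(psi)
prob = np.abs(psi) ** 2        # unitary evolution: total probability stays 1
```

Since the position-dependent coin is diagonal in position and the shifts are permutations, each step is exactly unitary; the population is expected to remain concentrated near the two borders, as in Fig. 11b,d.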
Figs. 11c,d discuss a simpler model with QW steps
$W_{\text{eff}}=T_{y}R_{\theta_{1}}T_{x}R_{\theta_{0}}$, with operators
presented in Eq3 . In the phase diagram of Fig. 11c, the Chern numbers of both
bands are zero, and the topological invariant presented is the Rudner winding
number rud13 .
#### Coinless DTQW
Figure 12: Topological insulators in a 2D array of (a,b) tetramers and (c)
dimers with coinless DTQW. (a) Applying inhomogeneous hopping angles along
the $\hat{x}$ dimension, with $(\theta_{x0},\theta_{x1})=(\pi/10,4\pi/10)$
inside $3<x\leq 9$ and $(\theta_{x0},\theta_{x1})=(4\pi/10,\pi/10)$ outside
the stripe ($x\leq 3$ and $x>9$), the excitation propagates unidirectionally
along the borders of the stripe. The hopping angles along the $y$ direction,
$(\theta_{y0},\theta_{y1})=(\pi/3,\pi/10)$, do not change across
spatial regions. The population distributes along the edge after a large
number of steps ($N_{step}=209$ here). (b) The corresponding 4 bands
associated with the tetramer sublattice, plotted for the hopping-angle
parameters chosen inside the stripe. (c) In the 2D lattice of dimers all bulk
states are localized, see the red arrow. A particle initially localized at any
site in the bulk returns to its original position after one set of QW operations.
The coinless model on a 2D lattice of tetramers, shown in Fig. 2a, could
also be used for making TP edge states. Here the QW is effectively separable
in the two dimensions. Fig. 12a shows that by applying inhomogeneous hopping
angles along the $\hat{x}$ dimension, belonging to the different regions of Fig. 9a,
and using open (periodic) boundaries along the $\hat{x}$ ($\hat{y}$) direction, one
can form excitation currents solely on the borders.
Coinless DTQW in a 2D lattice of dimers, shown in Fig. 2b, could be used for
the realization of anomalous topological edge states rud13 . Here the QWs
in the different dimensions are not separable. In an intuitive picture,
applying the operator
$W=\text{e}^{\text{i}H_{x0}\pi/2}\text{e}^{\text{i}H_{xy1}\pi/2}\text{e}^{\text{i}H_{x1}\pi/2}\text{e}^{\text{i}H_{xy0}\pi/2}$
(with the Hamiltonians defined in Eq2 ) to an excitation initialized on the
border leads to clockwise transport of the walker along the border,
as depicted by the green arrows in Fig. 12c. An excitation initialized
in the bulk, by contrast, undergoes the unitary operation with no excitation
transport, see the red arrow in Fig. 12c. The exclusive conductance on the
boundary provides the desired topological insulator. One can use inhomogeneous
spatial angles with different Rudner winding numbers rud13 to design the
shape of the edge state.
Topological insulators could also be implemented with the 3D coinless DTQW model.
For example, $W=W_{x1}W_{xz1}W_{x0}W_{xz0}W_{xy0}W_{x0}W_{xy1}W_{x1}$, where
$W_{i}=\exp(\text{i}H_{i}\theta_{i})$ with the Hamiltonians defined in Eq2 and
uniform hopping angles $\theta=\pi/2$ on an open-boundary lattice, results in
an insulating bulk with exclusive excitation currents on the open boundaries.
## References
* (1) Y. Aharonov, L. Davidovich, and N. Zagury. Quantum random walks. Physical Review A, 48(2):1687, 1993.
* (2) E. Farhi and S. Gutmann. Quantum computation and decision trees. Physical Review A, 58(2):915, 1998.
* (3) J. Kempe. Quantum random walks: an introductory overview. Contemporary Physics, 50(1):339–359, 2009.
* (4) S. Dadras, A. Gresch, C.Groiseau, S. Wimberger, and G. S Summy. Quantum walk in momentum space with a bose-einstein condensate. Physical review letters, 121(7):070402, 2018.
* (5) G. Summy and S. Wimberger. Quantum random walk of a bose-einstein condensate in momentum space. Physical Review A, 93(2):023638, 2016.
* (6) P. M Preiss, R. Ma, M E. Tai, A. Lukin, M. Rispoli, P. Zupancic, Y. Lahini, R. Islam, and M. Greiner. Strongly correlated quantum walks in optical lattices. Science, 347(6227):1229–1233, 2015.
* (7) R. Portugal. Quantum walks and search algorithms. Springer, 2013.
* (8) N. Shenvi, J. Kempe, and K B. Whaley. Quantum random-walk search algorithm. Physical Review A, 67(5):052307, 2003.
* (9) A. M Childs, R. Cleve, E. Deotto, E. Farhi, S. Gutmann, and D. A Spielman. Exponential algorithmic speedup by a quantum walk. In Proceedings of the thirty-fifth annual ACM symposium on Theory of computing, pages 59–68, 2003.
* (10) A. M Childs and J. Goldstone. Spatial search by quantum walk. Physical Review A, 70(2):022314, 2004.
* (11) R. Portugal and T. D Fernandes. Quantum search on the two-dimensional lattice using the staggered model with hamiltonians. Physical Review A, 95(4):042341, 2017.
* (12) A. M Childs. Universal computation by quantum walk. Physical review letters, 102(18):180501, 2009.
* (13) V. Kendon, How to compute using quantum walks. arXiv preprint arXiv:2004.01329, 2020.
* (14) N. B Lovett, S. Cooper, M. Everitt, M. Trevers, and V. Kendon. Universal quantum computation using the discrete-time quantum walk. Physical Review A, 81(4):042330, 2010.
* (15) A. M Childs, D. Gosset, and Z. Webb. Universal computation by multiparticle quantum walk. Science, 339(6121):791–794, 2013.
* (16) S. E. Venegas-Andraca. Quantum walks: a comprehensive review. Quantum Information Processing, 11(5):1015–1106, 2012.
* (17) T. Kitagawa, M. S Rudner, E. Berg, and E. Demler. Exploring topological phases with quantum walks. Physical Review A, 82(3):033429, 2010.
* (18) H. Schmitz, R. Matjeschk, Ch. Schneider, J. Glueckert, M. Enderlein, T. Huber, and T. Schaetz. Quantum walk of a trapped ion in phase space. Phys. Rev. Lett., 103:090504, Aug 2009.
* (19) F. Zähringer, G. Kirchmair, R. Gerritsma, E. Solano, R. Blatt, and C. F. Roos. Realization of a quantum walk with one and two trapped ions. Phys. Rev. Lett., 104:100503, Mar 2010.
* (20) M. Karski, L. Förster, J. Choi, A. Steffen, W. Alt, D. Meschede, and A. Widera. Quantum walk in position space with single optically trapped atoms. Science, 325(5937):174–177, 2009.
* (21) C. Weitenberg, M. Endres, J. F Sherson, M. Cheneau, P. Schauß, T. Fukuhara, I. Bloch, and S. Kuhr. Single-spin addressing in an atomic mott insulator. Nature, 471(7338):319–324, 2011.
* (22) T. Fukuhara, P. Schauß, M. Endres, S. Hild, M. Cheneau, I. Bloch, and C. Gross. Microscopic observation of magnon bound states and their dynamics. Nature, 502(7469):76–79, 2013.
* (23) Ji. Wang and K. Manouchehri. Physical implementation of quantum walks. Springer, 2013.
* (24) H. Bernien, S. Schwartz, A. Keesling, H. Levine, A. Omran, H. Pichler, S. Choi, A. S Zibrov, M. Endres, M. Greiner, et al. Probing many-body dynamics on a 51-atom quantum simulator. Nature, 551(7682):579–584, 2017.
* (25) A. Omran, H. Levine, A. Keesling, G. Semeghini, T. T Wang, S. Ebadi, H. Bernien, A. S Zibrov, H. Pichler, S. Choi, et al. Generation and manipulation of schrödinger cat states in rydberg atom arrays. Science, 365(6453):570–574, 2019.
* (26) S Zhang, F Robicheaux, and M Saffman. Magic-wavelength optical traps for rydberg atoms. Physical Review A, 84(4):043408, 2011.
* (27) MJ Piotrowicz, M Lichtman, K Maller, G Li, S Zhang, L Isenhower, and M Saffman. Two-dimensional lattice of blue-detuned atom traps using a projected gaussian beam array. Physical Review A, 88(1):013420, 2013.
* (28) F. Nogrette, H. Labuhn, S. Ravets, Daniel Barredo, Lucas Béguin, Aline Vernier, Thierry Lahaye, and Antoine Browaeys. Single-atom trapping in holographic 2d arrays of microtraps with arbitrary geometries. Physical Review X, 4(2):021034, 2014.
* (29) T Xia, M Lichtman, K Maller, AW Carr, MJ Piotrowicz, L Isenhower, and M Saffman. Randomized benchmarking of single-qubit gates in a 2d array of neutral-atom qubits. Physical review letters, 114(10):100503, 2015.
* (30) J. Zeiher, R. Van Bijnen, P. Schauß, S. Hild, J. Choi, T. Pohl, I. Bloch, and C. Gross. Many-body interferometry of a rydberg-dressed spin lattice. Nature Physics, 12(12):1095–1099, 2016.
* (31) V. Lienhard, S. de Léséleuc, D. Barredo, T. Lahaye, A. Browaeys, M. Schuler, L. P. Henry, and A. M Läuchli. Observing the space-and time-dependent growth of correlations in dynamically tuned synthetic ising models with antiferromagnetic interactions. Physical Review X, 8(2):021070, 2018.
* (32) MA Norcia, AW Young, and AM Kaufman. Microscopic control and detection of ultracold strontium in optical-tweezer arrays. Physical Review X, 8(4):041054, 2018.
* (33) A. Cooper, J. P Covey, I. S Madjarov, S. G Porsev, M. S Safronova, and M. Endres. Alkaline-earth atoms in optical tweezers. Physical Review X, 8(4):041055, 2018.
* (34) S. Hollerith, J. Zeiher, J. Rui, A. Rubio-Abadal, V. Walther, T. Pohl, D. M Stamper-Kurn, I. Bloch, and C. Gross. Quantum gas microscopy of rydberg macrodimers. Science, 364(6441):664–667, 2019.
* (35) S. Saskin, JT Wilson, B. Grinkemeyer, and J. D. Thompson. Narrow-line cooling and imaging of ytterbium atoms in an optical tweezer array. Physical review letters, 122(14):143002, 2019.
* (36) Y. Wang, A. Kumar, T.-Y. Wu, and D. S Weiss. Single-qubit gates based on targeted phase shifts in a 3d neutral atom array. Science, 352(6293):1562–1565, 2016.
* (37) D. Barredo, V. Lienhard, S. De Leseleuc, T. Lahaye, and A. Browaeys. Synthetic three-dimensional atomic structures assembled atom by atom. Nature, 561(7721):79–82, 2018.
* (38) H. Levine, A. Keesling, A. Omran, H. Bernien, S. Schwartz, A. S Zibrov, M. Endres, M. Greiner, V. Vuletić, and M. D Lukin. High-fidelity control and entanglement of rydberg-atom qubits. Physical review letters, 121(12):123603, 2018.
* (39) M. Saffman, T. G Walker, and K. Mølmer. Quantum information with rydberg atoms. Reviews of modern physics, 82(3):2313, 2010.
* (40) CS Adams, JD Pritchard, and JP Shaffer. Rydberg atom quantum technologies. Journal of Physics B: Atomic, Molecular and Optical Physics, 53(1):012002, 2019.
* (41) M. Khazali, K. Heshami, and C. Simon. Photon-photon gate via the interaction between two collective rydberg excitations. Physical Review A, 91(3):030301, 2015.
* (42) M. Khazali and K. Mølmer. Fast multiqubit gates by adiabatic evolution in interacting excited-state manifolds of rydberg atoms and superconducting circuits. Physical Review X, 10(2):021054, 2020.
* (43) M. Khazali. Rydberg noisy-dressing and applications in making soliton-molecules and droplet quasi-crystals. arXiv preprint arXiv:2007.01039, 2020.
* (44) M. Khazali. Applications of atomic ensembles for photonic quantum information processing and fundamental tests of quantum physics. 2016.
* (45) Khazali, M. Quantum Information and Computation with Rydberg Atoms. Iranian Journal of Applied Physics, 10(4): 19, (2021).
* (46) M. Khazali, C. R Murray, and T. Pohl. Polariton exchange interactions in multichannel optical networks. Physical Review Letters, 123(11):113605, 2019.
* (47) M. Khazali, H. W. Lau, A. Humeniuk, and C. Simon. Large energy superpositions via rydberg dressing. Physical Review A, 94(2):023408, 2016.
* (48) M. Khazali, K. Heshami, and C. Simon. Single-photon source based on rydberg exciton blockade. Journal of Physics B: Atomic, Molecular and Optical Physics, 50(21):215301, 2017.
* (49) M. Khazali. Progress towards macroscopic spin and mechanical superposition via rydberg interaction. Physical Review A, 98(4):043836, 2018.
* (50) R. Côté, A. Russell, E. E Eyler, and P. L Gould. Quantum random walk with rydberg atoms in an optical lattice. New Journal of Physics, 8(8):156, 2006.
* (51) D. Barredo, H. Labuhn, S. Ravets, T. Lahaye, A. Browaeys, and C. S Adams. Coherent excitation transfer in a spin chain of three rydberg atoms. Physical review letters, 114(11):113002, 2015.
* (52) DW Schönleber, A. Eisfeld, M. Genkin, S Whitlock, and S. Wüster. Quantum simulation of energy transport with embedded rydberg aggregates. Physical review letters, 114(12):123005, 2015.
* (53) A Pineiro Orioli, A Signoles, H Wildhagen, G Günter, J Berges, S Whitlock, and M Weidemüller. Relaxation of an isolated dipolar-interacting rydberg quantum spin system. Physical review letters, 120(6):063601, 2018.
* (54) G Günter, H Schempp, M Robert-de Saint-Vincent, V Gavryusev, S Helmrich, CS Hofmann, S Whitlock, and M Weidemüller. Observing the dynamics of dipole-mediated energy transport by interaction-enhanced imaging. Science, 342(6161):954–956, 2013.
* (55) H Schempp, G Günter, S Wüster, M Weidemüller, and S Whitlock. Correlated exciton transport in rydberg-dressed-atom spin chains. Physical review letters, 115(9):093002, 2015.
* (56) F. Letscher and . Petrosyan. Mobile bound states of rydberg excitations in a lattice. Physical Review A, 97(4):043415, 2018.
* (57) S Wüster, C Ates, A Eisfeld, and JM Rost. Excitation transport through rydberg dressing. New Journal of Physics, 13(7):073044, 2011.
* (58) A. Dauphin, M. Müller, and M. A. Martin-Delgado. Quantum simulation of a topological mott insulator with rydberg atoms in a lieb lattice. Physical Review A, 93(4):043611, 2016.
* (59) Y. Ando. Topological insulator materials. Journal of the Physical Society of Japan, 82(10):102001, 2013.
* (60) J. Cayssol, B. Dóra, F. Simon, and R. Moessner. Floquet topological insulators. physica status solidi (RRL)–Rapid Research Letters, 7(1-2):101–108, 2013.
* (61) A. Kitaev. Periodic table for topological insulators and superconductors. In AIP conference proceedings, volume 1134, pages 22–30. American Institute of Physics, 2009.
* (62) S Panahiyan and S Fritzsche. Toward simulation of topological phenomenas with one-, two-and three-dimensional quantum walks. arXiv preprint arXiv:2005.08720, 2020.
* (63) Mikael C Rechtsman, Julia M Zeuner, Yonatan Plotnik, Yaakov Lumer, Daniel Podolsky, Felix Dreisow, Stefan Nolte, Mordechai Segev, and Alexander Szameit. Photonic floquet topological insulators. Nature, 496(7444):196–200, 2013.
* (64) L Xiao, X Zhan, ZH Bian, KK Wang, X Zhang, XP Wang, J Li, K Mochizuki, D Kim, N Kawakami, et al. Observation of topological edge states in parity–time-symmetric quantum walks. Nature Physics, 13(11):1117–1123, 2017.
* (65) S. Mukherjee, A. Spracklen, M. Valiente, E. Andersson, P. Öhberg, N. Goldman, and R. R Thomson. Experimental observation of anomalous topological edge modes in a slowly driven photonic lattice. Nature communications, 8(1):1–7, 2017.
* (66) W. P. Su, J. R. Schrieffer, and A. J. Heeger. Solitons in polyacetylene. Physical review letters, 42(25):1698, 1979.
* (67) M. S Rudner, N. H Lindner, E. Berg, and M. Levin. Anomalous edge states and the bulk-edge correspondence for periodically driven two-dimensional systems. Physical Review X, 3(3):031005, 2013.
* (68) M. A Schlosshauer. Decoherence: and the quantum-to-classical transition. Springer Science & Business Media, 2007.
* (69) R. B Hutson, A. Goban, G E. Marti, L. Sonderhouse, C. Sanner, and J. Ye. Engineering quantum states of matter for atomic clocks in shallow optical lattices. Physical review letters, 123(12):123401, 2019.
* (70) A. M Kaufman, B. J Lester, and C. A Regal. Cooling a single atom in an optical tweezer to its quantum ground state. Physical Review X, 2(4):041014, 2012.
* (71) J. D. Thompson, TG Tiecke, A. S Zibrov, V Vuletić, and M. D Lukin. Coherence and raman sideband cooling of a single atom in an optical tweezer. Physical review letters, 110(13):133001, 2013.
* (72) N. Belmechri, L. Förster, W. Alt, A. Widera, D. Meschede, and A. Alberti. Microwave control of atomic motional states in a spin-dependent optical lattice. Journal of Physics B: Atomic, Molecular and Optical Physics, 46(10):104006, 2013.
* (73) I. I. Beterov, I. I. Ryabtsev, D. B. Tretyakov, and V. M. Entin. Quasiclassical calculations of blackbody-radiation-induced depopulation rates and effective lifetimes of rydberg ns, np, and nd alkali-metal atoms with $n\leq 80$. Physical Review A, 79(5):052504, 2009.
* (74) T. Long Nguyen, J. M. Raimond, C. Sayrin, R. Cortinas, T. Cantat-Moltrecht, F. Assemat, I. Dotsenko, S. Gleyzes, S. Haroche, G. Roux, et al. Towards quantum simulation with circular rydberg atoms. Physical Review X, 8(1):011032, 2018.
* (75) Signoles, A., Dietsche, E.K., Facon, A., Grosso, D., Haroche, S., Raimond, J.M., Brune, M. and Gleyzes, S., Coherent transfer between low-angular-momentum and circular rydberg states, Physical review letters 118, 253603 (2017).
* (76) R. Cardman and G. Raithel, Circularizing Rydberg atoms with time-dependent optical traps, Physical Review A 101, 013434 (2020).
* (77) M. Kwon, M. F Ebert, T. G Walker, and M Saffman. Parallel low-loss measurement of multiple atomic qubits. Physical review letters, 119(18):180504, 2017.
* (78) J. P Covey, I. S Madjarov, A. Cooper, and M. Endres. 2000-times repeated imaging of strontium atoms in clock-magic tweezer arrays. Physical review letters, 122(17):173201, 2019.
* (79) Khazali, Mohammadsadegh. ”Rydberg quantum simulator of topological insulators.” arXiv:2101.11412 (2021).
* (80) B J Wieder and CL Kane. Spin-orbit semimetals in the layer groups. Physical Review B, 94(15):155108, 2016.
* (81) M. Sajid, J K Asboth, D Meschede, R Werner, and A Alberti. Creating floquet chern insulators with magnetic quantum walks. arXiv preprint arXiv:1808.08923, 2018.
* (82) 2D DTQW in a lattice of tetramer: $\displaystyle H_{x0}=\sum\limits_{m_{x}=1}^{N_{x}/2}({|{m_{x},e_{x}}\rangle}\\!{\langle{m_{x},o_{x}}|}\otimes\mathbbm{1}_{y}+\text{h.c.})$ $\displaystyle H_{x1}=\sum\limits_{m_{x}=1}^{N_{x}/2}({|{m_{x},e_{x}}\rangle}\\!{\langle{m_{x}+1,o_{x}}|}\otimes\mathbbm{1}_{y}+\text{h.c.})$ $\displaystyle H_{y0}=\sum\limits_{m_{y}=1}^{N_{y}/2}(\mathbbm{1}_{x}\otimes{|{m_{y},e_{y}}\rangle}\\!{\langle{m_{y},o_{y}}|}+\text{h.c.})$ $\displaystyle H_{y1}=\sum\limits_{m_{y}=1}^{N_{y}/2}(\mathbbm{1}_{x}\otimes{|{m_{y},e_{y}}\rangle}\\!{\langle{(m_{y}+1),o_{y}}|}+\text{h.c.})$
* (83) 2D DTQW in a lattice of dimers: $\displaystyle H_{x0}=\sum\limits_{{\bf m}}({|{m_{x},m_{y},m_{z},e}\rangle}\\!{\langle{m_{x},m_{y},m_{z},o}|}+\text{h.c.})$ $\displaystyle H_{x1}=\sum\limits_{{\bf m}}({|{m_{x},m_{y},m_{z},e}\rangle}\\!{\langle{m_{x}+1,m_{y},m_{z},o}|}+\text{h.c.})$ $\displaystyle H_{xy0}=\sum\limits_{{\bf m}}({|{m_{x},m_{y},m_{z},e}\rangle}\\!{\langle{m_{x},m_{y}+1,m_{z},o}|}+\text{h.c.})$ $\displaystyle H_{xy1}=\sum\limits_{{\bf m}}({|{m_{x},m_{y}+1,m_{z},e}\rangle}\\!{\langle{m_{x}+1,m_{y},m_{z},o}|}+\text{h.c.})$ $\displaystyle H_{xz0}=\sum\limits_{{\bf m}}({|{m_{x},m_{y},m_{z}+1,e}\rangle}\\!{\langle{m_{x},m_{y},m_{z},o}|}+\text{h.c.})$ $\displaystyle H_{xz1}=\sum\limits_{{\bf m}}({|{m_{x},m_{y},m_{z},e}\rangle}\\!{\langle{m_{x}+1,m_{y},m_{z}+1,o}|}+\text{h.c.})$ $\displaystyle H_{xyz1}=\sum\limits_{{\bf m}}({|{m_{x},m_{y},m_{z},e}\rangle}\\!{\langle{m_{x}+1,m_{y}+1,m_{z}+1,o}|}$ $\displaystyle\quad\quad\quad+\text{h.c.}).$
* (84) Coined DTQW operators in 3D: $\displaystyle R_{\theta}=\text{e}^{\text{i}\theta\sigma_{x}}=\cos(\theta)\mathbbm{1}_{\\{e,o\\}}+\text{i}\sin(\theta)({|{e}\rangle}\\!{\langle{o}|}+{|{o}\rangle}\\!{\langle{e}|})$ $\displaystyle T_{x}=\sum\limits_{m_{x}}({|{m_{x}-1,e}\rangle}\\!{\langle{m_{x},e}|}+{|{m_{x}+1,o}\rangle}\\!{\langle{m_{x},o}|})\otimes\mathbbm{1}_{y,z}$ $\displaystyle T_{y}=\sum\limits_{m_{y}}({|{m_{y}-1,e}\rangle}\\!{\langle{m_{y},e}|}+{|{m_{y}+1,o}\rangle}\\!{\langle{m_{y},o}|})\otimes\mathbbm{1}_{x,z}$ $\displaystyle T_{z}=\sum\limits_{m_{z}}({|{m_{z}-1,e}\rangle}\\!{\langle{m_{z},e}|}+{|{m_{z}+1,o}\rangle}\\!{\langle{m_{z},o}|})\otimes\mathbbm{1}_{x,y}$ $\displaystyle T_{xy}=\sum\limits_{m_{x},m_{y}}({|{m_{x}-1,m_{y}+1,e}\rangle}\\!{\langle{m_{x},m_{y},e}|}$ $\displaystyle\quad\quad\quad\quad\quad+{|{m_{x}+1,m_{y}-1,o}\rangle}\\!{\langle{m_{x},m_{y},o}|})\otimes\mathbbm{1}_{z}$ $\displaystyle T_{xz}=\sum\limits_{m_{x},m_{z}}({|{m_{x}-1,m_{z}-1,e}\rangle}\\!{\langle{m_{x},m_{z},e}|}$ $\displaystyle\quad\quad\quad\quad\quad+{|{m_{x}+1,m_{z}+1,o}\rangle}\\!{\langle{m_{x},m_{z},o}|})\otimes\mathbbm{1}_{y}$ $\displaystyle T_{xyz}=\sum\limits_{{\bf m}}({|{m_{x}-1,m_{y}-1,m_{z}-1,e}\rangle}\\!{\langle{m_{x},m_{y},m_{z},e}|}$ $\displaystyle\quad\quad\quad+{|{m_{x}+1,m_{y}+1,m_{z}+1,o}\rangle}\\!{\langle{m_{x},m_{y},m_{z},o}|}),$
# A Two-Functional-Network Framework of Opinion Dynamics
Wentao Zhang, Zhiqiang Zuo, and Yijing Wang This work was supported by the
National Natural Science Foundation of China No. 61933014, No. 61773281, No.
61673292.The authors are with the Tianjin Key Laboratory of Process
Measurement and Control, School of Electrical and Information Engineering,
Tianjin University, Tianjin, 300072, P. R. China. (e-mail: {wtzhangee, zqzuo,
yjwang}@tju.edu.cn).
###### Abstract
A common trait of opinion dynamics in social networks is reliance on an
interacting network to characterize the opinion formation process among
participating social actors, including information flow and cooperative or
antagonistic influence. Nevertheless, interacting networks are generally
public to social groups, as well as to other individuals who may be interested.
This blocks a more precise interpretation of the opinion formation process,
since social actors always have complex feelings, motivations and behaviors,
even beliefs, that are personally private. In this paper, we formulate a general
configuration describing how an individual’s opinion evolves in a distinct
fashion. It consists of two functional networks: an interacting network and an
appraisal network. The interacting network inherits the operational properties
of the DeGroot iterative opinion pooling scheme, while the appraisal network,
forming a belief system, quantifies a certain cognitive orientation toward
interested individuals’ beliefs, over which the adhered attitudes may be
antagonistic. We explicitly show that a cooperative appraisal network
always leads to consensus in opinions. An antagonistic appraisal network,
however, causes opinion clusters. It is verified that an antagonistic appraisal
network can still guarantee consensus under some extra restrictions.
These results hence bridge the gap between consensus and clusters in opinion
dynamics. We further obtain a gauge on the appraisal network by means of the
random convex optimization approach. Moreover, we extend our results to the
case of mutually interdependent issues.
###### Index Terms:
Social networks, appraisal network, cooperative/antagonistic interaction,
random convex optimization.
## I Introduction
The study of opinion dynamics in social networks has a long history. Compared
with many natural or man-made systems (networks), social actors (agents) in
social networks rarely display a common interest. Instead, they usually
aggregate into several small groups, where agents in the same group achieve
a unanimous behavior while the opinions of the whole network comprise several
clusters. Opinion dynamics in social networks is a universal topic and has
attracted broad interest from different disciplines, such as sociology,
social anthropology, economics, ideological political science, physics,
biology and control theory, even in the military field [1, 2].
A simple yet instructive mathematical model is fundamental for the study of
opinion dynamics in social networks. As a backbone for opinion dynamics,
DeGroot’s iterative opinion pooling configuration shows that social actors can
share a common viewpoint if a convex combination mechanism is performed (cf.
[3]). In many practical situations, agents often interact with those who are
like-minded, and treat more deviant viewpoints with discretion. In the
Hegselmann-Krause model, each agent only communicates with those whose
opinions fall within its confidence interval [4, 5]. This model is implicitly
based on the principle of biased assimilation or homophily. It is often the
case that some individuals in social networks have their own prejudices, no
matter what kind of opinion formation mechanism is applied. An attempt in this
direction is the Friedkin-Johnsen (FJ) model, where some of the individuals
(stubborn agents) are affected by an external signal [6]. Unlike the
Hegselmann-Krause model, the FJ model achieves opinion clusters even in a
linear opinion formation process. Conventionally, the FJ model mainly focuses
on issue-free or scalar opinion dynamics. Refs. [7, 8] extended the FJ model
to vector-valued opinion dynamics where the opinions relate to several issues,
see also [9]. As hinted by belief systems (cf. [10]), topic-specific opinion
dynamics are usually entwined whenever an agent’s opinion involves several
interdependent issues. Parsegov _et al_. [11] introduced a row-stochastic
matrix (MiDS matrix) which clarifies the attitude of an agent towards the
issue sequence. Similarly, the properties of belief system dynamics subject to
logical constraints were elaborated in [12], with a case study showing how the
fluctuations of a population’s attitudes evolve. A gossip-based version of
the FJ model was proposed in [13]. More recently, an approach building on the
DeGroot model and the FJ model was given to investigate the evolution of
self-appraisal, interpersonal influences and social power over an issue
sequence for star, irreducible and reducible communication topologies [14].
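To make the contrast concrete, the DeGroot and FJ updates can be simulated in a few lines; the 4-agent influence matrix, prejudice vector and susceptibility values below are hypothetical illustrations, not taken from the cited works.

```python
import numpy as np

# Hypothetical 4-agent row-stochastic influence matrix (strongly connected, aperiodic)
W = np.array([[0.5,  0.5,  0.0, 0.0],
              [0.25, 0.25, 0.5, 0.0],
              [0.0,  0.3,  0.4, 0.3],
              [0.0,  0.0,  0.5, 0.5]])
x0 = np.array([1.0, 0.0, -1.0, 0.5])   # initial opinions / prejudices

# DeGroot: x(k+1) = W x(k); convex averaging drives the group to consensus.
x = x0.copy()
for _ in range(1000):
    x = W @ x
# np.ptp(x) is now essentially 0: all opinions agree.

# Friedkin-Johnsen: x(k+1) = L W x(k) + (I - L) x0, where the susceptibility
# matrix L = diag(lam_i) with lam_i < 1 anchors each agent to its prejudice x0_i.
lam = np.diag([0.8, 0.8, 0.8, 0.8])
y = x0.copy()
for _ in range(1000):
    y = lam @ W @ y + (np.eye(4) - lam) @ x0
# Closed-form fixed point of the FJ iteration:
y_star = np.linalg.solve(np.eye(4) - lam @ W, (np.eye(4) - lam) @ x0)
# np.ptp(y) stays bounded away from 0: opinions cluster rather than reach consensus.
```

Because $\rho(\Lambda W)<1$ when all susceptibilities are below one, the FJ iteration contracts to the unique fixed point $(I-\Lambda W)^{-1}(I-\Lambda)x_0$, which is not a consensus state whenever the prejudices disagree.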
Nowadays, people have realized that the postulate of “cognitive algebra” on
heterogeneous information (cf. [15]) for the opinion formation process among
individuals may not account for all complex behaviors in social networks, due
to the negative interpersonal influences that often emerge in communities of
dissensus (cf. [16, 17]). Typical instances include, but are not limited to,
multi-party political systems, biological systems, international alliances,
bimodal coalitions, rival business cartels, duopolistic markets, as well as
the boomerang effect in dyadic systems. Based on multi-agent system theory, much
research interest has been devoted to opinion polarization and
stabilization problems. Using signed graph theory, Altafini [18] proved that a
structurally balanced graph is the necessary and sufficient condition for
bipartite consensus. Proskurnikov _et al_. [19] further reported a necessary
and sufficient criterion to stabilize Altafini’s model. For general
communication graphs, [20] discussed the interval bipartite consensus problem.
Second-order and high-order multi-agent systems in the presence of
antagonistic information were discussed in [21, 22], as well as finite-time
consensus [23]. For more related work on signed graph theory, see
[24, 25]. Cooperative control with antagonistic reciprocity was discussed in
[26, 27, 28] using a node-based mechanism.
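Altafini’s bipartite-consensus result can be illustrated with a small signed-Laplacian iteration; the 4-agent structurally balanced graph and the initial opinions below are hypothetical examples.

```python
import numpy as np

# Structurally balanced signed graph: group {0,1} vs group {2,3};
# positive ties within groups, negative ties across them.
A = np.array([[ 0.0,  1.0, -1.0,  0.0],
              [ 1.0,  0.0,  0.0, -1.0],
              [-1.0,  0.0,  0.0,  1.0],
              [ 0.0, -1.0,  1.0,  0.0]])
D = np.diag(np.abs(A).sum(axis=1))   # degree matrix uses |a_ij|
L = D - A                            # signed Laplacian

x0 = np.array([1.0, 0.0, -2.0, 1.0])
x = x0.copy()
eps = 0.2                            # step size; eps * lambda_max(L) < 2 for stability
for _ in range(500):
    x = x - eps * (L @ x)            # Euler discretization of xdot = -L x

# Structural balance with gauge sigma = (+,+,-,-) implies bipartite consensus:
sigma = np.array([1.0, 1.0, -1.0, -1.0])
c = (sigma @ x0) / 4.0               # sigma^T x is conserved by the dynamics
# x converges to c * sigma: equal magnitudes, opposite signs across the two camps.
```

The gauge transformation $D_\sigma A D_\sigma$ turns this graph into an ordinary connected unsigned graph, which is exactly Altafini’s structural-balance condition; the state then converges to $\pm c$ with $c=\sigma^{\top}x_0/n$.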
Unlike complex systems or multi-agent systems, agents (individuals) in social
networks usually display diverse behaviors, such as attitudes, feelings and
beliefs. In business negotiation or alliance, for instance, each member tends
to collect as much “side information” as possible about the remaining members
before organizing a meeting, by which they aim at achieving profit
maximization. They possess the introspective ability to assess what they
should do and how. Notice that the above research is based on a common
hypothesis that merely the public network is used to quantify the opinion
formation process.
However, a downside of public networks (following [1] and [11], we refer to
the interaction topologies, i.e., the communication graphs, in the
aforementioned literature as public networks) for describing the antagonistic
interactions among participating individuals is that individuals in social
networks are assumed to have complete access to the opinions, attitudes and
beliefs of the other individuals towards the interested individuals. But this
is not always the case, since individuals in social networks feature complex
behaviors, even complex thoughts [29], naturally leading to complex opinion
dynamics that natural or man-made complex networks cannot exhibit. Moreover,
as discussed in [30, 11], public networks generally characterize the social
influence structure, and hence they are assumed to be thoroughly known. For
this reason, potentially hostile information becomes transparent whenever only
the public network is utilized to model the opinion formation process.
Obviously, this is not always realistic for social networks, where it is
rather arduous to access an individual’s opinion a priori.
In this paper, we propose a new framework with two functional networks,
namely, the appraisal network and the interacting network, to describe the
opinion formation process. The opinion evolution in a social network is first
governed by an appraisal network characterizing how each individual assigns
its attitude or influence towards other individuals. Afterwards, each
individual updates its opinion through an interacting network as in the
conventional DeGroot model (we call it the interacting network or the public
network). To the best of the authors’ knowledge, there are no results
available that model the evolution of opinion dynamics using two functionally
independent networks. More importantly, we will show that the proposed
formulation indeed produces some intriguing phenomena that cannot be
reproduced by the existing setups. We summarize the main features of this
paper as follows:
(a)
Cooperative appraisal networks lead to consensus in opinions, whereas the
final aggregated value may not be contained in the convex hull spanned by the
initial opinions of the participating individuals. This implies that the
decision-making process among individuals in social networks, as formulated in
this paper, is not necessarily constrained to the convex hull as in the usual
models. In fact, non-convex interactions in social networks are common, and it
is natural to model the evolution of the opinions by taking this factor into
account [29];
(b)
Antagonistic appraisal networks result in clusters in opinions. In particular,
we show that consensus in opinion dynamics appears provided that the
considered antagonistic appraisal networks satisfy certain requirements. In
contrast, most of the existing results merely guarantee clusters in opinions
subject to hostile interactions (cf. [24, 25]), or stability of the agents
[19];
(c)
Random convex optimization (cf. [31, 32]) is formulated to provide a feasible
estimation of the appraisal network in a finite-horizon sense. Also, a bound
on the number of “required observations”, which enables us to get rid of an a
priori specified level of probabilistic violation, is explicitly given.
Therefore we can justify the postulate on self-preservation of the appraisal
network for each individual, as opposed to the hypothesis on the interacting
network in the literature;
(d)
The proposed setup can be further extended to multiple mutually entangled
issues that are quantified by an issue-dependence matrix, upon which we deduce
criteria for the convergence (resp. stability) of the agents. More
interestingly, we point out that the introduction of the issue-dependence
matrix enables us to steer the leader’s opinion, which has been assumed to be
fixed all the time in the multi-agent systems community.
The layout of this paper is outlined as follows: Section II describes some
basic preliminaries and the problem formulation as well as the dynamic model.
Section III presents the results in the context of the cooperative and the
antagonistic appraisal networks. Section IV reports the results for the
interdependent issues. Section V gives numerical examples to support the
derived theoretical results as well as some discussions. Finally, a conclusion
is drawn in Section VI.
## II Preliminaries and Problem Formulation
### II-A Basic Notations
The real set, the nonnegative real set and the nonnegative integer set are
denoted by $\mathbb{R}$, $\mathbb{R}_{+}$ and $\mathbb{Z}$, respectively. The
symbol $\prime$ denotes the transpose of a vector or a matrix. The symbol
$|\cdot|$ represents the modulus or the cardinality, and $|\cdot|_{2}$ the
$2$-norm. $|Q|$ stands for the spectral norm of matrix $Q$. $\lambda(Q)$ and
$Q^{-1}$ denote the eigenvalues and the inverse of an invertible matrix $Q$.
$Q\succ 0$ indicates that matrix $Q$ is symmetric and positive definite. We
denote $\\{1,...,N\\}$ and $(1,...,1)^{\prime}$ by $\mathbb{I}_{N}$ and
$\textbf{1}_{N}$, where $N$ is the number of agents in social networks. $I$
and $\mathcal{O}$ are, respectively, the identity matrix and the zero matrix.
A diagonal matrix is represented by ${\rm diag}(\cdot)$. The sign function
is abbreviated as ${\rm sgn}(\cdot)$. For any complex number $\lambda$,
$\lambda\triangleq{\rm Re}(\lambda)+\mathbbm{i}{\rm
Im}(\lambda)=|\lambda|(\cos(\arg(\lambda))+\mathbbm{i}\sin(\arg(\lambda)))$,
where $\mathbbm{i}^{2}=-1$ and $\arg(\lambda)$ represents the principal value
of the argument.
In addition, we denote by $(\Omega,\mathcal{F},\mathcal{P})$ the probability
space, where $\Omega$ represents the sample space, $\mathcal{F}$ the
Borel $\sigma$-algebra, and $\mathcal{P}$ the probability measure.
An interacting graph $\mathcal{G}$ is commonly represented by a triple
$\\{\mathbb{V},\mathbb{E},\mathbb{A}\\}$, where $\mathbb{V}$ is the node set,
$\mathbb{E}$ is the edge set and $\mathbb{A}=(a_{ij})_{N\times N}$ is the
adjacency matrix with $a_{ij}>0$ provided that $(j,i)\in\mathbb{E}$ and
$a_{ij}=0$ otherwise. No self-loop is allowed throughout the paper, i.e.,
$a_{ii}=0$. The associated Laplacian matrix $\mathcal{L}=(l_{ij})_{N\times N}$
is given by $l_{ii}=\sum^{N}_{j=1,j\neq i}a_{ij}$ and $l_{ij}=-a_{ij}$ for
$j\neq i$ (in the context of opinion dynamics, $p_{ij}$ represents the
attitude of agent $i$ towards agent $j$, while $l_{ij}$ in the framework of
multi-agent systems means that agent $j$ is a neighbor of agent $i$;
therefore, if no confusion arises, we treat both $p_{ij}$ and $l_{ij}$ as the
influence of agent $j$ imposed on agent $i$ for notational consistency). A
digraph is strongly connected if any two distinct nodes are connected by a
directed path. A root of $\mathcal{G}$ is a special node from which every
other node can be reached. A graph has a directed spanning tree if and only if
it contains at least one root.
### II-B Dynamic Model for Social Networks
Consider a group of interacting individuals (agents or social actors), whose
opinions evolve according to
$\displaystyle\xi_{i}(k+1)=$
$\displaystyle~{}\xi_{i}(k)+\varrho_{i}\sum_{j\in\mathcal{N}_{i}}a_{ij}(z_{j}(k)-z_{i}(k))$
(1a) $\displaystyle z_{i}(k)=$
$\displaystyle~{}\sum^{N}_{j=1}\delta_{ij}\xi_{j}(k),~{}i\in\mathbb{I}_{N},~{}k\in\mathbb{Z}$
(1b)
where $\xi_{i}(k)\in\mathbb{R}$ stands for the opinion of the $i$th agent at
instant $k$, $\mathcal{N}_{i}$ is the neighboring set of agent $i$. Similar to
the FJ model, we call parameter $\varrho_{i}$ ($\varrho_{i}\neq 0$) the
susceptibility factor with respect to the $i$th social actor. $z_{i}(k)$ in
$(\ref{20191eq1b})$ represents the appraisal or, in the special case
$z_{i}(k)=\delta_{ii}\xi_{i}(k)$, the self-appraisal of agent $i$. The
constant $\delta_{ij}$ specifies the weighted influence assigned by individual
$i$ to individual $j$, and satisfies $0<\sum^{N}_{j=1}|\delta_{ij}|\leq 1$.
The compact form of
$(\ref{20191eq1})$ is expressed by
$\displaystyle\xi(k+1)=$
$\displaystyle~{}(I-\Lambda\mathcal{L}\mathcal{D})\xi(k),~{}k\in\mathbb{Z}$
(2)
where $\xi(k)=(\xi_{1}(k),...,\xi_{N}(k))^{\prime}$, $\Lambda={\rm
diag}(\varrho_{1},...,\varrho_{N})$ and $\mathcal{D}=(\delta_{ij})_{N\times
N}$.
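As a concrete illustration of the update $(\ref{20191eq1})$ in its compact form $(\ref{20191eq2})$, the following minimal sketch (our own code, not part of the paper; numpy assumed) iterates $\xi(k+1)=(I-\Lambda\mathcal{L}\mathcal{D})\xi(k)$. With $\mathcal{D}=I$ and $\Lambda=\varrho I$ it reduces to the familiar DeGroot/consensus iteration, exercised below on an undirected path of three agents.

```python
import numpy as np

def simulate(L, D, Lam, xi0, steps):
    """Iterate xi(k+1) = (I - Lam @ L @ D) @ xi(k), the compact form (2)."""
    M = np.eye(len(xi0)) - Lam @ L @ D
    traj = [np.asarray(xi0, dtype=float)]
    for _ in range(steps):
        traj.append(M @ traj[-1])
    return np.array(traj)

# DeGroot reduction: D = I, Lam = rho*I, undirected 3-node path
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
traj = simulate(L, np.eye(3), 0.25 * np.eye(3), [0., 5., 10.], 200)
```

Since the graph is undirected and $\varrho=0.25$ is a small step size, the opinions aggregate to the average of the initial values.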
It is easy to see that system $(\ref{20191eq2})$ involves two networks; we
call them the interacting or the public network (quantified by matrix
$\mathcal{L}$) and the appraisal network (quantified by matrix $\mathcal{D}$),
respectively. We emphasize that the susceptibility factor $\varrho_{i}$ is
crucial for the convergence of system $(\ref{20191eq2})$ since the appraisal
network and the interacting network are entwined, leading to a significant
difference in contrast with the DeGroot model. It is noted that a system might
collapse due to hostility, and as will be shown later, a system subject to
antagonistic information may fail to converge, regardless of the connectivity
of the underlying interacting graph (for Ref. [18], the proposed consensus
algorithm always assures the convergence of the reciprocal agents as long as
the communication topology attains a directed spanning tree, that is,
bipartite consensus (interval bipartite consensus) for structurally balanced
graphs (cf. [18, 20]) and stability for graphs that contain in-isolated
structurally balanced subgraphs, or are structurally unbalanced (cf. [19])).
###### Remark 1
One can easily see that system $(\ref{20191eq2})$ boils down to the classical
DeGroot model (cf. [3]) or the multi-agent systems (cf. [33]) when we fix
$\mathcal{D}$ to be the identity matrix and $\varrho_{i}$ some positive
constant $\varrho$ fulfilling
$0<\varrho<1/\max_{i\in\mathbb{I}_{N}}\\{l_{ii}\\}$ (in such a scenario we
always treat $\varrho$ as the step size). It should be pointed out that the
two networks described above have nothing to do with the multilayer networks
in the context of complex networks [34], where all layer networks are
functionally tied to the interacting network $\mathcal{L}$. $\blacklozenge$
We are now in a position to explain why we introduce the appraisal network:
(1) Unlike natural and man-made complex networks, where the interacting agents
are creatures and smart machines, the individuals in social networks are
people. In short, the subjects in social networks possess emotions, beliefs
and attitudes towards a specific object. Therefore, it is common sense that
people try to search for and collect as much information as possible before
they make a decision, such as organizing a conference, exchanging ideas with
colleagues, etc. That is to say, people in social networks always evaluate and
reflect on their opinions, beliefs and behaviors before interacting with
others; (2) As mentioned before, hostile interaction is a key element in the
study of opinion dynamics in social networks. It is notable that the
interacting networks are generally assumed to be completely known (cf. [30,
11]). Therefore, the private opinions of the participating agents may be
leaked if merely the interacting networks are applied to model the opinion
evolution in social networks, especially when antagonistic interactions are
involved.
Before moving on, we give some useful definitions regarding system
$(\ref{20191eq2})$.
###### Definition 1
System $(\ref{20191eq2})$ achieves:
* (i)
the _consensus in opinions_ , if
$\displaystyle\lim_{k\rightarrow\infty}\xi_{i}(k)=\varphi,~{}~{}i\in\mathbb{I}_{N}$
where $\varphi\in\mathbb{R}$ is a constant.
* (ii)
the _convergence in opinions_ , if
$\displaystyle\lim_{k\rightarrow\infty}\xi_{i}(k)=\varphi_{i},~{}~{}i\in\mathbb{I}_{N}$
where $\varphi_{i}\in\mathbb{R}$ is a constant.
* (iii)
the _stability in opinions_ , if
$\displaystyle\lim_{k\rightarrow\infty}\xi_{i}(k)=0,~{}~{}i\in\mathbb{I}_{N}$
The mechanism behind Definition 1 is that a cooperative appraisal network
gives rise to opinion aggregation, while an antagonistic appraisal network
generally leads to clusters in opinions. Moreover, the consensus of the
reciprocal agents is a special type of convergence where all interacting
individuals eventually share a common viewpoint.
Figure 1: A holistic paradigm of the opinion evolution for system
$(\ref{20191eq2})$ where the appraisal network (characterized by
$\mathcal{D}$) is cooperative, i.e., $\delta_{ij}\geq 0$.
## III Main Results
In the following discussions, we will show that the opinion formation process
with setup $(\ref{20191eq1})$ is more general than the DeGroot model. More
specifically, system $(\ref{20191eq1})$ permits non-convex interactions of the
individuals, and it provides an intriguing viewpoint on how the individual’s
introspective process influences the opinion evolution of social networks.
### III-A Cooperative Appraisal Network
In this subsection, we are concerned with the situation where the appraisal
network is cooperative, i.e., each influence weight $\delta_{ij}$ is
nonnegative; hence $\sum^{N}_{j=1}\delta_{ij}=1$. A simple illustration of
this case is drawn in Fig. 1. In such a case,
$\mathcal{D}=(\delta_{ij})_{N\times N}$ is a nonnegative stochastic matrix.
Before proceeding, we introduce a simple proposition which bridges the gap
between the stochastic matrix and the Laplacian matrix in connection with the
communication topology in a unified manner.
###### Proposition 1
([33]) Given any nonnegative stochastic matrix
$\mathcal{D}=(\delta_{ij})_{N\times N}$, there always exists a Laplacian
matrix $L=(l_{ij})_{N\times N}$ such that
$\displaystyle\mathcal{D}=I-\epsilon L$ (3)
where $\epsilon>0$ is the step size.
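Proposition 1 is constructive: for any $\epsilon>0$, the matrix $L=(I-\mathcal{D})/\epsilon$ has zero row sums, nonnegative diagonal entries and nonpositive off-diagonal entries, hence is a valid Laplacian. A small self-contained check (our own illustration; the stochastic matrix below is an arbitrary example):

```python
import numpy as np

def laplacian_from_stochastic(D, eps=1.0):
    """Recover L such that D = I - eps * L (Proposition 1); any eps > 0 works."""
    D = np.asarray(D, dtype=float)
    return (np.eye(len(D)) - D) / eps

D = np.array([[0.5, 0.25, 0.25],
              [0.0, 0.6,  0.4 ],
              [0.3, 0.3,  0.4 ]])
L = laplacian_from_stochastic(D, eps=0.5)
```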
It should be emphasized that the Laplacian matrix $L$ depicted in
$(\ref{20191eq37})$ generally has little connection with the interacting
network $\mathcal{L}$ reported in $(\ref{20191eq2})$. In the sequel, we give
the main results on the cooperative appraisal network.
###### Theorem 1
Suppose that the appraisal network is cooperative, i.e., $\delta_{ij}\geq 0$.
Then _consensus_ in system $(\ref{20191eq2})$ is achieved if and only if
matrix $I-\Lambda\mathcal{L}\mathcal{D}$ has a simple $1$ eigenvalue and the
remaining eigenvalues are preserved in the unit disk.
###### Proof:
Suppose that the communication graph associated with $\mathcal{L}$ has a
directed spanning tree. By Sylvester rank inequality, one has
$\displaystyle{\rm rank}(\mathcal{L})+{\rm rank}(\mathcal{D})-N\leq$
$\displaystyle~{}{\rm rank}(\mathcal{L}\mathcal{D})$ $\displaystyle\leq$
$\displaystyle~{}\min\bigg{\\{}{\rm rank}(\mathcal{L}),{\rm
rank}(\mathcal{D})\bigg{\\}}$
In fact, the zero eigenvalue of $\mathcal{L}$ is simple if the graph induced
by $\mathcal{L}$ has a directed spanning tree. Then, zero is an eigenvalue of
matrix $\Lambda\mathcal{L}\mathcal{D}$. Additionally, the all-ones vector
$\textbf{1}_{N}$ is a right eigenvector associated with the zero eigenvalue,
since $\mathcal{D}\textbf{1}_{N}=\textbf{1}_{N}$ and
$\mathcal{L}\textbf{1}_{N}=0$. Hence, system $(\ref{20191eq2})$ can achieve
the consensus.
We proceed with the condition for consensus. Actually, matrix
$I-\Lambda\mathcal{L}\mathcal{D}$ has a simple $1$ eigenvalue with the
remaining eigenvalues preserved in the unit disk if and only if
$\Lambda\mathcal{L}\mathcal{D}$ contains a simple zero eigenvalue and the
remaining eigenvalues share positive real parts (here we can redefine the
coupling matrix by $\Lambda=\epsilon_{1}\Lambda^{{\dagger}}$, where
$\epsilon_{1}$ is a small step size; by doing so we can access the
continuous version of $(\ref{20191eq2})$ via
$\dot{\xi}=-\Lambda^{{\dagger}}\mathcal{L}\mathcal{D}\xi$). Further, assume
that the left and the right eigenvectors with respect to the zero eigenvalue
are, respectively, $\varsigma\in\mathbb{R}^{N}$ and $\iota\in\mathbb{R}^{N}$
with the constraint $\varsigma^{\prime}\iota=1$. Note that
$\iota=\textbf{1}_{N}$ in this scenario since $\mathcal{D}\iota=\iota$.
Defining the disagreement error by
$\theta(k)=(I-\iota\varsigma^{\prime})\xi(k)$, one has
$\displaystyle\theta(k+1)=(I-\Lambda\mathcal{L}\mathcal{D})\theta(k)$ (4)
We can see that the eigenvalues of $I-\Lambda\mathcal{L}\mathcal{D}$ are
entirely constrained in the unit disk over the space
$\mathbb{R}^{N}\backslash\\{\varphi\\}$, where
$\varphi=\iota\varsigma^{\prime}\xi(0)=\alpha\iota$. Thus the error system
$(\ref{20191eq11})$ is exponentially stable, which ensures the consensus of
system $(\ref{20191eq2})$. This ends the proof by Definition 1. ∎
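The spectral condition of Theorem 1 is easy to test numerically. The helper below is our own sketch (the tolerance and the example matrices are our choices, not the paper's): it checks for a simple eigenvalue $1$ with all remaining eigenvalues strictly inside the unit disk, and applies the test to a cooperative configuration with a small uniform susceptibility.

```python
import numpy as np

def consensus_condition(M, tol=1e-8):
    """True iff M has a simple eigenvalue 1 and the rest lie strictly inside the unit disk."""
    eig = np.linalg.eigvals(M)
    near_one = np.abs(eig - 1.0) < tol
    rest_inside = np.all(np.abs(eig[~near_one]) < 1.0 - tol)
    return near_one.sum() == 1 and rest_inside

# cooperative example: stochastic D, interacting Laplacian L, Lam = 0.1*I
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
D = np.array([[0.4, 0.3, 0.3],
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])
M = np.eye(3) - 0.1 * L @ D   # I - Lam @ L @ D with Lam = 0.1*I
```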
###### Remark 2
Different from the DeGroot model [3] and multi-agent systems [33], the final
shared common opinion of the interacting individuals governed by
$(\ref{20191eq2})$ may not be restricted to the convex hull spanned by the
initial opinions of the individuals. This is because the appraisal network
specifying the interaction rule in this paper is, in general, no longer
convex. We emphasize that non-convex interactions among participating
individuals in social networks are rather common, as has been clearly pointed
out by Wang _et al_. in [29]. This intriguing issue will be verified via an
example later. $\blacklozenge$
Next, we consider a special case where the appraisal network is the same as
the interacting network, i.e., $L=\mathcal{L}$. In order to achieve the
consensus, it suffices to fix $\varrho_{i}=\varrho$ for all $i$. One hence
arrives at
$\displaystyle\xi(k+1)=$
$\displaystyle~{}(I-\varrho\mathcal{L}+\epsilon\varrho\mathcal{L}^{2})\xi(k)$
(5)
where $\epsilon>0$ is a constant.
###### Corollary 1
Suppose that the graph induced by $\mathcal{L}$ has a directed spanning tree.
Then system $(\ref{20191eq6})$ achieves the _consensus in opinions_ if and
only if
$\displaystyle 0<\varrho<\min_{{\rm
Re}(\lambda^{\star}_{i})>0}\bigg{\\{}\frac{2{\rm
Re}(\lambda^{\star}_{i})}{{\rm Re}^{2}(\lambda^{\star}_{i})+{\rm
Im}^{2}(\lambda^{\star}_{i})}\bigg{\\}}$ (6)
where $\lambda^{\star}_{i}$ stands for the $i$th eigenvalue of
$\mathcal{L}-\epsilon\mathcal{L}^{2}$, and constant $\epsilon$ satisfies
$\left\\{\begin{aligned} &\epsilon\in\mathbb{R},~{}~{}|{\rm
Re}(\lambda_{i})|=|{\rm Im}(\lambda_{i})|\\\ &\epsilon<\min_{\lambda_{i}\neq
0}\frac{{\rm Re}(\lambda_{i})}{{\rm Re}^{2}(\lambda_{i})-{\rm
Im}^{2}(\lambda_{i})},~{}~{}|{\rm Re}(\lambda_{i})|>|{\rm Im}(\lambda_{i})|\\\
&\epsilon>\max_{\lambda_{i}\neq 0}-\frac{{\rm Re}(\lambda_{i})}{{\rm
Im}^{2}(\lambda_{i})-{\rm Re}^{2}(\lambda_{i})},~{}~{}|{\rm
Re}(\lambda_{i})|<|{\rm Im}(\lambda_{i})|\end{aligned}\right.$ (7)
where $\lambda_{i}$ is the $i$th eigenvalue of matrix $\mathcal{L}$. Moreover,
if $\epsilon$ is further required to be positive, then we have
$\displaystyle\epsilon\in\bigg{(}0,~{}~{}\min_{\lambda_{i}\neq 0,~{}|{\rm
Re}(\lambda_{i})|>|{\rm Im}(\lambda_{i})|}\frac{{\rm Re}(\lambda_{i})}{{\rm
Re}^{2}(\lambda_{i})-{\rm Im}^{2}(\lambda_{i})}\bigg{)}$
Moreover, the final aggregated value is restricted in a convex hull spanned by
the initial opinions of the roots.
###### Proof:
Let us first study an auxiliary system of $(\ref{20191eq6})$ by
$\displaystyle\dot{\xi}(t)=$ $\displaystyle~{}-W\xi(t),~{}t\in\mathbb{R}_{+}$
(8)
where $W=\mathcal{L}-\epsilon\mathcal{L}^{2}$. It is known that system
$(\ref{20191eq7})$ guarantees the consensus if and only if matrix $W$ contains
a simple zero eigenvalue and the remaining eigenvalues share positive real
parts. When the graph induced by the Laplacian matrix $\mathcal{L}$ has a
directed spanning tree, the zero eigenvalue of matrix $W$ is simple. We
continue to show that the nonzero eigenvalues of $W$ have positive real
parts. One can easily find that the nonzero eigenvalues of $W$ are of the form
$\displaystyle\lambda^{\star}_{i}=$ $\displaystyle~{}{\rm
Re}(\lambda_{i})-\epsilon({\rm Re}^{2}(\lambda_{i})-{\rm
Im}^{2}(\lambda_{i}))$ (9) $\displaystyle+\mathbbm{i}({\rm
Im}(\lambda_{i})-2\epsilon{\rm Re}(\lambda_{i}){\rm Im}(\lambda_{i}))$
By $(\ref{20191eq8})$, the nonzero eigenvalues of $W$ share positive real
parts if and only if $(\ref{ch7_20191eq10_1})$ holds.
It is noticeable that the relationship between the eigenvalues of system
matrix in $(\ref{20191eq7})$ and those in $(\ref{20191eq6})$ can be formulated
by
$\displaystyle\lambda^{\ast}_{i}=$
$\displaystyle~{}1-\varrho\lambda^{\star}_{i},~{}i\in\mathbb{I}_{N}$
where $\lambda^{\ast}_{i}$ represents the $i$th eigenvalue of
$I-\varrho\mathcal{L}+\epsilon\varrho\mathcal{L}^{2}$. As a result, one can
check that $I-\varrho\mathcal{L}+\epsilon\varrho\mathcal{L}^{2}$ has a simple
$1$ eigenvalue with the remaining eigenvalues restricted in the unit disk
if and only if $(\ref{20191eq10})$ holds, by which
$|\lambda^{\ast}_{i}|<1$ is always guaranteed as long as
$\lambda^{\star}_{i}\neq 0$.
The second part is trivial, and hence is omitted. ∎
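Corollary 1 can be exercised on a concrete graph; the sketch below is our own illustration (not part of the paper) using the undirected 4-cycle, whose Laplacian eigenvalues are $\{0,2,2,4\}$. With $\epsilon=0.1$, the eigenvalues of $\mathcal{L}-\epsilon\mathcal{L}^{2}$ are $\{0,1.6,1.6,2.4\}$, and the bound $(\ref{20191eq10})$ evaluates to $\varrho<2/2.4\approx 0.83$; the code checks that the consensus condition holds just below this bound and fails just above it.

```python
import numpy as np

# undirected 4-cycle; Laplacian eigenvalues {0, 2, 2, 4}
L = np.array([[ 2., -1.,  0., -1.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [-1.,  0., -1.,  2.]])
eps = 0.1
W = L - eps * L @ L                       # eigenvalues lam* = lam - eps*lam^2
lam_star = np.sort(np.linalg.eigvals(W).real)
pos = lam_star[lam_star > 1e-9]           # nonzero eigenvalues (all real here)
rho_max = np.min(2 * pos / pos**2)        # bound (6); 2/lam* in the real case

def consensus_ok(rho):
    """Simple eigenvalue 1 of I - rho*W, remaining eigenvalues strictly inside the unit disk."""
    mags = np.sort(np.abs(np.linalg.eigvals(np.eye(4) - rho * W)))
    return mags[-1] < 1 + 1e-9 and mags[-2] < 1 - 1e-9
```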
An extension to $\epsilon=\varrho$ would be interesting, since the method
developed in Corollary 1 does not work in such a case: $\varrho$ is involved
in both $(\ref{20191eq6})$ and $(\ref{20191eq7})$, which prevents computing
the allowable range for $\varrho$. Apart from these concerns, the argument for
a general interacting network is far from obvious in contrast to the case of a
bidirectional interacting network, as suggested by Corollary 1.
###### Theorem 2
Suppose that the graph induced by $\mathcal{L}$ has a directed spanning tree.
Then system $(\ref{20191eq6})$ achieves the _consensus in opinions_ if and
only if the step size $\varrho$ is bounded with the following constraints,
* (i)
If ${\rm Im}(\lambda_{i})=0$ for $\lambda_{i}\neq 0$,
$\displaystyle\varrho\in\bigg{(}0,\min_{\lambda_{i}>0}\frac{1}{\lambda_{i}}\bigg{)}$
where $\lambda_{i}$ is an eigenvalue of $\mathcal{L}$.
* (ii)
If ${\rm Im}(\lambda_{i})\neq 0$ with $\lambda_{i}\neq 0$,
$\left\\{\begin{aligned} &\min_{\varrho>0,\lambda_{i}\neq
0}f_{i}(\varrho,\lambda_{i},\arg(\lambda_{i}))>0\\\
&\varrho\not\in\bigg{\\{}\varrho_{i,1},\varrho_{i,2}\bigg{\\}}\bigcup\bigg{\\{}\varrho_{i,3},\varrho_{i,4}\bigg{\\}}\end{aligned}\right.$
where
$\displaystyle f_{i}(\varrho,\lambda_{i},\arg(\lambda_{i}))$ $\displaystyle=$
$\displaystyle~{}-\varrho^{3}|\lambda_{i}|^{3}+\varrho^{2}|\lambda_{i}|^{2}\cos^{2}(\arg(\lambda_{i}))$
$\displaystyle-2\varrho|\lambda_{i}|\sin(2\arg(\lambda_{i}))-\varrho|\lambda_{i}|+2\cos(\arg(\lambda_{i}))$
and $\varrho_{i,1}$, $\varrho_{i,2}$, $\varrho_{i,3}$, $\varrho_{i,4}$ are
depicted in $(\ref{20191eq107})$
$\left\\{\begin{aligned}
\varrho_{i,1}=&~{}\frac{\cos(\arg(\lambda_{i}))+\sqrt{\cos^{2}(\arg(\lambda_{i}))(8\cos(\theta)-7)+4(1-\cos(\theta))}}{2|\lambda_{i}|\cos(2\arg(\lambda_{i}))}\\\
\varrho_{i,2}=&~{}\frac{\cos(\arg(\lambda_{i}))-\sqrt{\cos^{2}(\arg(\lambda_{i}))(8\cos(\theta)-7)+4(1-\cos(\theta))}}{2|\lambda_{i}|\cos(2\arg(\lambda_{i}))}\\\
\varrho_{i,3}=&~{}\frac{\sin(\arg(\lambda_{i}))+\sqrt{\cos^{2}(\arg(\lambda_{i}))(8\cos(\theta)-7)+4(1-\cos(\theta))}}{2|\lambda_{i}|\sin(2\arg(\lambda_{i}))}\\\
\varrho_{i,4}=&~{}\frac{\sin(\arg(\lambda_{i}))-\sqrt{\cos^{2}(\arg(\lambda_{i}))(8\cos(\theta)-7)+4(1-\cos(\theta))}}{2|\lambda_{i}|\sin(2\arg(\lambda_{i}))}\end{aligned}\right.$
(10)
where $\theta\in[0,2\pi)$.
###### Proof:
The proof of Theorem 2 is self-contained, and is reported in the Appendix for
the sake of brevity. ∎
From Theorem 2, the requirement on $\varrho$ coincides with the statement in
[33] if the underlying network is bidirectional. For a general interpersonal
network, the condition on $\varrho$ is far from trivial, unlike the previous
case, since it is tightly linked to the modulus and the principal argument of
the nonzero eigenvalues of $\mathcal{L}$. In fact, there is an alternative way
to guarantee the consensus in opinions for the cooperative appraisal network,
even though it falls short of the elegance of Theorem 2.
###### Corollary 2
Suppose that the graph induced by $\mathcal{L}$ attains a directed spanning
tree. Then system $(\ref{20191eq6})$ achieves the _consensus in opinions_ if
and only if
$\displaystyle\varrho\in\bigcap_{\varrho>0,\lambda_{i}\neq
0}\bigg{\\{}a_{i}\varrho^{3}+b_{i}\varrho^{2}+c_{i}\varrho+d_{i}<0\bigg{\\}}$
(11)
where
$\displaystyle a_{i}=$ $\displaystyle~{}({\rm Re}^{2}(\lambda_{i})+{\rm
Im}^{2}(\lambda_{i}))^{2}$ $\displaystyle b_{i}=$
$\displaystyle~{}-2{\rm Re}(\lambda_{i})({\rm Re}^{2}(\lambda_{i})+{\rm
Im}^{2}(\lambda_{i}))$ $\displaystyle c_{i}=$ $\displaystyle~{}3{\rm
Re}^{2}(\lambda_{i})-{\rm Im}^{2}(\lambda_{i})$ $\displaystyle d_{i}=$
$\displaystyle~{}-2{\rm Re}(\lambda_{i})$
In addition, consensus in system $(\ref{20191eq6})$ is achieved for the
undirected graph if and only if
$\displaystyle\varrho\in\bigg{(}0,\min_{\lambda_{i}>0}\frac{1}{\lambda_{i}}\bigg{)}$
where $\lambda_{i}$ is an eigenvalue of $\mathcal{L}$.
###### Proof:
System $(\ref{20191eq6})$ can be rewritten as
$\displaystyle\xi(k+1)=$
$\displaystyle~{}(I-\varrho\mathcal{L}+\varrho^{2}\mathcal{L}^{2})\xi(k)$ (12)
The eigenvalue of matrix $I-\varrho\mathcal{L}+\varrho^{2}\mathcal{L}^{2}$ is
of the form
$\displaystyle\lambda^{\ast}_{i}=$
$\displaystyle~{}1-\varrho\lambda_{i}+\varrho^{2}\lambda^{2}_{i}$
$\displaystyle=$ $\displaystyle~{}1-{\rm Re}(\lambda_{i})\varrho+\bigg{(}{\rm
Re}^{2}(\lambda_{i})-{\rm Im}^{2}(\lambda_{i})\bigg{)}\varrho^{2}$
$\displaystyle+\bigg{(}2{\rm Re}(\lambda_{i}){\rm
Im}(\lambda_{i})\varrho^{2}-{\rm Im}(\lambda_{i})\varrho\bigg{)}\mathbbm{i}$
Therefore, $|\lambda^{\ast}_{i}|<1$ with ${\rm Re}(\lambda_{i})>0$ is
equivalent to
$\displaystyle 1>$ $\displaystyle~{}\bigg{(}1-{\rm
Re}(\lambda_{i})\varrho+\bigg{(}{\rm Re}^{2}(\lambda_{i})-{\rm
Im}^{2}(\lambda_{i})\bigg{)}\varrho^{2}\bigg{)}^{2}$
$\displaystyle+\bigg{(}2{\rm Re}(\lambda_{i}){\rm
Im}(\lambda_{i})\varrho^{2}-{\rm Im}(\lambda_{i})\varrho\bigg{)}^{2}$
Expanding this inequality and dividing by $\varrho>0$ yields the requirement
in $(\ref{20191eq63})$. The second statement is consistent with that in
Theorem 2, and is hence omitted. ∎
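As a sanity check on the proof of Corollary 2, the inequality $|1-\varrho\lambda_{i}+\varrho^{2}\lambda^{2}_{i}|<1$ can be expanded by hand and compared with the direct modulus test. The sketch below is our own code (the random sampling and function names are our choices); the expanded coefficients are re-derived from scratch inside `cubic_ok` rather than copied from the corollary.

```python
import numpy as np

def cubic_ok(rho, lam):
    """Hand-expanded form of |1 - rho*lam + rho^2*lam^2| < 1 for Re(lam) > 0, rho > 0:
    dividing |.|^2 - 1 < 0 by rho leaves a cubic in rho."""
    a, b = lam.real, lam.imag
    m2 = a * a + b * b                                   # |lam|^2
    return (m2 * m2) * rho**3 - 2 * a * m2 * rho**2 + (3 * a * a - b * b) * rho - 2 * a < 0

def direct_ok(rho, lam):
    return abs(1 - rho * lam + rho**2 * lam**2) < 1

rng = np.random.default_rng(0)
samples = [(rho, complex(a, b))
           for a, b, rho in rng.uniform([0.1, -3, 0.01], [3, 3, 2], size=(1000, 3))]
agree = all(cubic_ok(rho, lam) == direct_ok(rho, lam) for rho, lam in samples)
```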
Figure 2: A holistic paradigm of the opinion evolution for system
$(\ref{20191eq2})$ where the appraisal network (characterized by
$\mathcal{D}$) is antagonistic, that is, the red arcs mean $\delta_{ij}<0$
while the black ones indicate that $\delta_{ij}>0$.
### III-B Antagonistic Appraisal Network
A remarkable feature of social networks is that social actors rarely share
unanimous opinions. It has been recognized that the biased assimilation
principle or the homophily principle (cf. [35]) is vital to explain opinion
clusters. Another reason for opinion dynamics giving rise to clusters is
antagonism. The main goal of this subsection is to investigate the case where
there exist antagonistic interactions among social actors.
Following the basic route of subsection III-A, the public network
characterized by $\mathcal{L}$ is the same as before, while the appraisal
network characterized by $\mathcal{D}$ involves hostile interactions. A simple
illustration of system $(\ref{20191eq2})$ is depicted in Fig. 2.
Suppose that agent $i$ possesses an opposite attitude to the opinion of agent
$j$; then ${\rm sgn}(\delta_{ij})=-1$. In addition, since $|\delta_{ij}|$
quantifies the share of the total social influence that the $i$th social
actor assigns, we require $\sum^{N}_{j=1}|\delta_{ij}|=1$ for all $i$. Before
moving on, an additional ingredient is needed.
###### Definition 2
Two graphs $\mathcal{G}_{1}$ and $\mathcal{G}_{2}$ share the same topology
if, whenever there exists an edge from the $i$th node to the $j$th node in
$\mathcal{G}_{1}$, the edge $(i,j)$ also belongs to $\mathcal{G}_{2}$, and
vice versa.
Actually, two graphs satisfying Definition 2 differ only in the weights of
their edges. Therefore, we can define a new stochastic matrix
$\mathcal{D}^{\star}\triangleq(|\delta_{ij}|)_{N\times N}$. With the help of
Definition 2, the graphs induced by $\mathcal{D}$ and $\mathcal{D}^{\star}$
have the same topology structure.
###### Theorem 3
Suppose that the appraisal network is antagonistic. Then system
$(\ref{20191eq2})$ _converges in opinions_ if and only if matrix
$I-\Lambda\mathcal{L}\mathcal{D}$ has a simple $1$ eigenvalue and the
remaining eigenvalues are preserved in the unit disk.
###### Proof:
The proof of Theorem 3 is similar to that of Theorem 1. Generally speaking,
the right eigenvector $\iota$ associated with the zero eigenvalue of
$\Lambda\mathcal{L}\mathcal{D}$ may not equal the all-ones vector when the
appraisal network contains hostile interactions. Therefore, it follows that
$\varphi=\iota\varsigma^{\prime}\xi(0)$, which implies the convergence in
opinions of the social actors. ∎
According to Theorem 3, clusters in opinions occur due to the presence of
antagonistic interactions in the appraisal network. In view of Theorems 1 and
3, it can be concluded that cooperative appraisal networks lead to consensus
in opinions, while hostile appraisal networks attain clusters in opinions.
Here we emphasize that it is rather arduous to give specific indices on the
weighted influence matrix $\Lambda$ due to the potential complexity of the
appraisal network. Another interesting byproduct of Theorems 1 and 3 is that
the agents in system $(\ref{20191eq2})$ will not converge to zero for almost
all initial values. This is because $I-\Lambda\mathcal{L}\mathcal{D}$ contains
at least one eigenvalue equal to one. Fortunately, this intriguing feature can
be addressed by virtue of the issue-dependence matrix to be elucidated later.
Figure 3: (a) Opinion evolution for the antagonistic appraisal network where
the initial opinions $\xi(0)=(25,75,85)$ are partly borrowed from [11,
Equation (15)]; (b) Opinion evolution for the antagonistic appraisal network
with the same parameters as (a) except for replacing $\Lambda$ by $0.5I$.
### III-C Connection between Cooperative and Antagonistic Appraisal Networks
Subsections III-A and III-B discuss the opinion evolution on cooperative and
antagonistic appraisal networks, respectively. Here we are interested in a
question: what is the bridge between the cooperative and the antagonistic
appraisal networks? To answer this question, we first look at a simple
example.
###### Example 1
Consider a network with three social actors whose interacting and appraisal
networks are of the forms
$\displaystyle\mathcal{L}=\begin{bmatrix}2&-1&-1\\\ -1&2&-1\\\
-1&-1&2\end{bmatrix},~{}\mathcal{D}=\begin{bmatrix}0.5&-0.5&0\\\ 0&0.5&-0.5\\\
-0.5&0&0.5\end{bmatrix}$
We choose $\Lambda={\rm diag}(-0.05,0.5,0.5)$, and then compute the
eigenvalues of $I-\Lambda\mathcal{L}\mathcal{D}$:
$\displaystyle\lambda(I-\Lambda\mathcal{L}\mathcal{D})\in\bigg{\\{}0.0474,0.5276,1\bigg{\\}}$
Opinion evolutions of the social actors are plotted in Fig. 3(a). This reveals
an interesting phenomenon: antagonistic appraisal networks can achieve
consensus in opinions. More importantly, we can see that the final aggregated
opinion is not restricted to the convex hull spanned by the initial opinions.
Next, we select another group of susceptibility factors by setting
$\Lambda=0.5I$. In this case, the consensus in opinions is still preserved,
while the aggregated opinion is now contained in the convex hull spanned by
the initial opinions (see Fig. 3(b) for more details). Therefore, we can treat
the susceptibility factor $\varrho_{i}$ as a design parameter quantifying how
the opinion formation process works. $\blacklozenge$
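Example 1 is small enough to reproduce numerically; the sketch below (our own code, numpy assumed) iterates system $(\ref{20191eq2})$ from $\xi(0)=(25,75,85)$ and confirms that the opinions reach a common value below 25, i.e., outside the convex hull $[25,85]$ of the initial opinions.

```python
import numpy as np

L = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])
D = np.array([[ 0.5, -0.5,  0.0],
              [ 0.0,  0.5, -0.5],
              [-0.5,  0.0,  0.5]])
Lam = np.diag([-0.05, 0.5, 0.5])
M = np.eye(3) - Lam @ L @ D

xi = np.array([25., 75., 85.])   # initial opinions, as in Fig. 3(a)
for _ in range(300):             # iterate system (2)
    xi = M @ xi
```

In our run the common value is 11.25, well outside $[25,85]$; replacing $\Lambda$ by $0.5I$ pulls the consensus back into the hull, as the example describes.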
Note that consensus in opinions can be preserved for a cooperative appraisal
network. This implicitly manifests that the opinions either achieve consensus
or diverge whenever the appraisal network is cooperative. By contrast, when
the underlying appraisal network is antagonistic, the opinions either achieve
consensus or form clusters, apart from the possibility of divergence. We
summarize this as follows.
###### Proposition 2
Suppose that the appraisal network is antagonistic. Then system
$(\ref{20191eq2})$ achieves _consensus in opinions_ if and only if the
following properties hold:
* (i)
System $(\ref{20191eq2})$ converges;
* (ii)
$\mathcal{D}\textbf{1}_{N}=\mathcal{O}_{N\times 1}$ or
$\mathcal{D}\textbf{1}_{N}=-\textbf{1}_{N}$.
###### Proof:
(Necessity) According to Definition 1, the consensus in $(\ref{20191eq2})$
indicates that
$\lim_{k\rightarrow\infty}\xi_{i}(k)=\lim_{k\rightarrow\infty}\xi_{j}(k)$ for
arbitrary $i,j\in\mathbb{I}_{N}$. Hence, $\textbf{1}_{N}$ is the right
eigenvector associated with the eigenvalue one of matrix
$I-\Lambda\mathcal{L}\mathcal{D}$. It then follows that
$\displaystyle\mathcal{O}_{N\times 1}=$
$\displaystyle\Lambda\mathcal{L}\mathcal{D}\textbf{1}_{N}\Rightarrow\mathcal{O}_{N\times
1}=\mathcal{L}\mathcal{D}\textbf{1}_{N}$
The second implication holds since matrix $\Lambda$ is nonsingular. Since the
interacting network contains a directed spanning tree, exactly one of the
following equalities is satisfied:
$\displaystyle\mathcal{D}\textbf{1}_{N}=$ $\displaystyle~{}\textbf{1}_{N}$
(13a) $\displaystyle\mathcal{D}\textbf{1}_{N}=$
$\displaystyle~{}\mathcal{O}_{N\times 1}$ (13b)
$\displaystyle\mathcal{D}\textbf{1}_{N}=$ $\displaystyle~{}-\textbf{1}_{N}$
(13c)
Next we show that only $(\ref{20191eq48b})$ or $(\ref{20191eq48c})$ can hold
for an antagonistic appraisal network. Suppose, to the contrary, that
$(\ref{20191eq48a})$ holds, i.e.,
$\displaystyle\sum^{N}_{j=1}\delta_{ij}=1,~{}\forall~{}i\in\mathbb{I}_{N}$
(14)
Moreover, we also require
$\displaystyle\sum^{N}_{j=1}|\delta_{ij}|=1,~{}\forall~{}i\in\mathbb{I}_{N}$
(15)
Together, $(\ref{20191eq49})$ and $(\ref{20191eq50})$ imply
$\displaystyle|\delta_{ij}|=\delta_{ij},~{}\forall~{}i,j\in\mathbb{I}_{N}$
so the appraisal network is cooperative, a contradiction. Therefore, we always
have
$\mathcal{D}\textbf{1}_{N}=\mathcal{O}_{N\times 1}$ or
$\mathcal{D}\textbf{1}_{N}=-\textbf{1}_{N}$. Obviously,
$\mathcal{D}\textbf{1}_{N}=-\textbf{1}_{N}$ means that all entries of
$\mathcal{D}$ are non-positive, which indicates that the appraisal network is
antagonistic.
(Sufficiency) The convergence of system $(\ref{20191eq2})$ means that
$I-\Lambda\mathcal{L}\mathcal{D}$ has a simple eigenvalue one (note that
system $(\ref{20191eq2})$ cannot guarantee the stability since matrix
$\Lambda\mathcal{L}\mathcal{D}$ always has at least a zero eigenvalue). In
addition, $\mathcal{D}\textbf{1}_{N}=\mathcal{O}_{N\times 1}$ or
$\mathcal{D}\textbf{1}_{N}=-\textbf{1}_{N}$ suggests that the vector
$\textbf{1}_{N}$ is an eigenvector associated with the eigenvalue one of
matrix $I-\Lambda\mathcal{L}\mathcal{D}$. Therefore, system $(\ref{20191eq2})$
preserves consensus in opinions. ∎
With the help of Proposition 2 and Theorem 3, we conclude that a cooperative
appraisal network achieves consensus in opinions, while an antagonistic
appraisal network can exhibit both consensus and clusters in opinions. However,
an antagonistic appraisal network can achieve consensus in opinions only if its
structure satisfies the special requirements above.
### III-D Estimation for Appraisal Network
Briefly speaking, determining the structure of the appraisal network is a
thorny problem. Fortunately, considerable effort has been devoted to
identifying the dynamic structure and topology of networks from experimental
data, in fields including sociology, signal processing, and statistics (see,
e.g., [1, 36, 37]). As hinted before, the interacting network is typically
known to all, while the appraisal network is generally private and is therefore
of great significance to estimate: its estimation is the first step toward
understanding the mechanisms behind the emergence and evolution of opinions in
social networks.
To cope with the estimation of the appraisal network, a technical lemma is
needed.
###### Lemma 1
([38]) For any matrix $Q=(q_{1},...,q_{n})$, its vectorization, denoted by
${\rm vec}(Q)$, is
$\displaystyle{\rm vec}(Q)=(q^{\prime}_{1},...,q^{\prime}_{n})^{\prime}$
For matrices $Q$, $W$ and $R$, the vectorization with respect to their product
is given by
$\displaystyle{\rm vec}(QWR)=$ $\displaystyle(R^{\prime}\otimes Q){\rm
vec}(W)$
By virtue of Lemma 1, two properties can be obtained.
###### Corollary 3
For any variable $\xi\in\mathbb{R}^{N}$ and matrix
$\Lambda\mathcal{L}\mathcal{D}$, the following facts are true
$\left\\{\begin{aligned} &{\rm vec}(\xi)=~{}\xi\\\ &{\rm
vec}(\Lambda\mathcal{L}\mathcal{D}\xi)=~{}(\xi^{\prime}\otimes\Lambda\mathcal{L}){\rm
vec}(\mathcal{D})\end{aligned}\right.$
###### Proof:
The proof is trivial by Lemma 1. ∎
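Lemma 1's identity ${\rm vec}(QWR)=(R^{\prime}\otimes Q){\rm vec}(W)$ can be sanity-checked numerically with column-stacking vectorization; a minimal NumPy sketch (the matrix shapes are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
W = rng.standard_normal((4, 2))
R = rng.standard_normal((2, 5))

# Column-stacking vectorization, matching vec(Q) = (q_1', ..., q_n')'
def vec(A):
    return A.flatten(order="F")

lhs = vec(Q @ W @ R)
rhs = np.kron(R.T, Q) @ vec(W)
print(np.allclose(lhs, rhs))  # True
```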
With the above preparations, we now formulate the estimation problem for the
appraisal network. Apart from $\Lambda$ and $\mathcal{L}$, we postulate that
one has access to $m$ observed opinion pairs generated during the opinion
formation process under $(\ref{20191eq1})$, i.e., $m$ pairs
$(\xi_{t}(k-1),\xi_{t}(k))$ with $t\in\mathbb{I}_{m}$ and
$k\in\mathbb{Z}_{+}$, drawn in an independent and identically distributed
(i.i.d.) manner. More specifically, we collect the $m$ observations in the set
$\displaystyle\Omega_{m}=~{}\bigg{\\{}(\xi_{t}(k-1),\xi_{t}(k)),1\leq t\leq
m,\forall~{}k\in\mathbb{Z}_{+}\bigg{\\}}$ (16)
Upon collecting the $m$ opinion pairs (sampled uniformly at random) and using
Corollary 3, we formulate the following optimization problem:
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\min_{\zeta,~{}Q}~{}\gamma$
(17) $\displaystyle{\rm
s.t.}~{}~{}~{}~{}~{}~{}f(\zeta,m)=\frac{1}{m}\sum^{m}_{t=1}g(t)\leq\gamma$
$\displaystyle g(t)=X^{\prime}_{t}(k)QX_{t}(k)$ $\displaystyle
X_{t}(k)=\xi_{t}(k)-\xi_{t}(k-1)+\bigg{(}\xi^{\prime}_{t}(k-1)\otimes\Lambda\mathcal{L}\bigg{)}\zeta$
$\displaystyle\gamma\geq 0,~{}Q\succ 0$
###### Remark 3
For optimization problem $(\ref{20191eq31})$, we emphasize two points: (i)
Corollary 3 is helpful because it disentangles the coupling between the
interacting network and the appraisal network; (ii) in general, it is
unrealistic to let the parameter $m$ tend to infinity because the system
matrix in $(\ref{20191eq2})$ is not stable.
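To illustrate how Corollary 3 linearizes the estimation task, the sketch below recovers the appraisal network from simulated opinion pairs by plain least squares rather than by solving $(\ref{20191eq31})$ itself; the matrices, sample size, and noise-free data are illustrative assumptions. Note that since $\Lambda\mathcal{L}$ is singular, only the product $\Lambda\mathcal{L}\mathcal{D}$ is identifiable from such data:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3
L = np.array([[2.0, -1.0, -1.0],
              [-1.0, 2.0, -1.0],
              [-1.0, -1.0, 2.0]])
Lam = np.diag([0.2, 0.3, 0.1])       # illustrative susceptibility factors
D_true = np.array([[0.5, -0.5, 0.0],
                   [0.0, 0.5, -0.5],
                   [-0.5, 0.0, 0.5]])
M = Lam @ L                           # singular, since L 1_N = 0

# m noise-free opinion pairs with xi(k) = (I - M D) xi(k-1)
m = 50
X_prev = rng.standard_normal((m, N))
X_next = X_prev @ (np.eye(N) - M @ D_true).T

# By Corollary 3: xi(k-1) - xi(k) = (xi(k-1)' kron M) vec(D); stack and solve
A = np.vstack([np.kron(x, M) for x in X_prev])
b = (X_prev - X_next).flatten()
vecD_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
D_hat = vecD_hat.reshape(N, N, order="F")

# Zero residual, and the estimate matches D_true through the singular M
print(np.allclose(A @ vecD_hat, b))          # True
print(np.allclose(M @ D_hat, M @ D_true))    # True
```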
Algorithm 1 depicted below gives a lower bound on the “observations” needed
for a desirable performance.
Algorithm 1 Lower Bound for the Observations in $\Omega_{m}$
1: Given a sequence of observations of length $m$, and a prior level of
performance index $0<\gamma_{0}<\infty$;
2: Set $m=m_{0}$ with $m_{0}\geq 1$;
3: while $\gamma>\gamma_{0}$ do
4: Print: Gauge of appraisal network fails;
5: Set $m\leftarrow 1+m$;
6: Optimize $(\ref{20191eq31})$;
7: end while
8: return $m$;
Although Algorithm 1 may provide an estimate of the appraisal network, it
still does not yield a specific value of the sample length $m$ essential for
solving $(\ref{20191eq31})$. Obviously, too many observations bring
computational complexity, while too few may yield an inaccurate estimate.
Furthermore, even though Algorithm 1 may mitigate the downsides of
$(\ref{20191eq31})$ to a certain degree, it demands considerable computing
resources and time, since it executes in a try-once-discard fashion. These
discussions naturally raise an interesting question: what confidence can we
attain, with the aid of a finite set of empirical opinion observations, that
the estimated appraisal network works over the whole opinion space? To this
end, we resort to tools from random convex optimization (also called
chance-constrained optimization); see Refs. [31, 32] for more details.
Now we formulate the considered problem. Fix a probability space
$(\Omega,\mathcal{F},\mathcal{P})$, and let the compact set
$\mathbb{X}\subseteq\mathbb{R}^{N\times N}$ be convex with the origin in its
interior. A measurable function $f:\mathbb{X}\times\Omega\mapsto\mathbb{R}$ is
convex in its first argument for any fixed second argument, and bounded in the
second argument whenever the first is fixed. Thereby, for the random finite
opinion set $\Omega_{m}$ in
$(\ref{20191eqnew1})$ and any level parameter $\varepsilon\in(0,1)$, given a
confidence level $\beta\in(0,1)$ and some symmetric positive definite matrix
$Q$, the probability of violation (cf. [31, Definition 1]) is
$\displaystyle
V(\zeta,m)=\mathcal{P}\\{m\in\mathbb{Z}_{+}:f(\zeta,m)>\gamma^{\star}\\}$
where $\gamma^{\star}$ is the optimal solution of the optimization problem
$(\ref{20191eq31})$.
We then focus on the following optimization problem:
$\displaystyle\min_{\zeta}{}{}{}{}{}{}$ $\displaystyle c^{\prime}\zeta$ (18)
$\displaystyle{\rm s.t.}{}{}{}{}{}{}$
$\displaystyle\mathcal{P}\\{(\zeta,m):V(\zeta,m)>\varepsilon\\}\leq\beta$
$\displaystyle\zeta\in\mathbb{X}$
where $c$ is a certain “cost” vector (also known as the objective direction).
For optimization problem $(\ref{20191eqnew2})$, samples of length $m$
guarantee $f(\zeta,m)\leq\gamma^{\star}$ with probability at least $1-\beta$.
A smaller violation level $\varepsilon$ yields a more desirable estimate of
the appraisal network, at the cost of a larger number of samples. The next
theorem gives a confirmative answer on how many opinion observations are
needed to obtain satisfactory performance.
###### Theorem 4
Consider optimization problem $(\ref{20191eqnew2})$ with $m\geq N^{2}$, where
the samples in $\Omega_{m}\subseteq\mathbb{X}$ are drawn i.i.d. Then, for
$\forall\varepsilon\in[0,1]$, it follows that
$\displaystyle\mathcal{P}\\{(\zeta,m):V(\zeta,m)>\varepsilon\\}\leq\beta(\varepsilon,m)$
where
$\displaystyle\beta(\varepsilon,m)=\sum^{m}_{\ell=0}\begin{pmatrix}N^{2}\\\
\ell\end{pmatrix}\varepsilon^{\ell}(1-\varepsilon)^{m-\ell}$
Moreover, the lower bound on $m$ is
$\displaystyle
m(\varepsilon,\beta)=\min\bigg{\\{}m\in\mathbb{Z}_{+}\bigg{|}\sum^{m}_{\ell=0}\begin{pmatrix}N^{2}\\\
\ell\end{pmatrix}\varepsilon^{\ell}(1-\varepsilon)^{m-\ell}\leq\beta\bigg{\\}}$
###### Proof:
By [32, Theorem 3.3] and [39, Theorem 1], the proof follows directly. ∎
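The bound $m(\varepsilon,\beta)$ can be evaluated numerically. The sketch below uses the standard scenario-optimization tail bound with $d=N^{2}$ decision variables (our reading of the intended form of $\beta(\varepsilon,m)$; the exact constants are an assumption) and searches upward for the smallest admissible $m$:

```python
from math import comb

def beta_bound(eps, m, d):
    """Scenario tail: sum_{l=0}^{d-1} C(m,l) eps^l (1-eps)^(m-l)."""
    return sum(comb(m, l) * eps**l * (1 - eps)**(m - l)
               for l in range(min(d, m + 1)))

def sample_bound(eps, beta, d):
    """Smallest m (searched upward from m = d) with beta_bound <= beta."""
    m = d
    while beta_bound(eps, m, d) > beta:
        m += 1
    return m

# e.g. N = 4 agents -> d = N^2 = 16 unknown appraisal entries
m_star = sample_bound(eps=0.1, beta=1e-3, d=16)
print(m_star)
```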
###### Remark 4
Theorem 4 explicitly provides the number of samples required to obtain a
“good” estimate of the true appraisal network. In [11], the authors
concentrated on the estimation of the multi-issue dependence matrix. We go
further in this paper: on the one hand, we can give a desirable estimate of
the appraisal network; on the other hand, we provide a bound on the number of
samples, which has not been addressed in previous work.
## IV Topic Specific Opinion
Unlike man-made systems, the actors in social networks are generally affected
by several interdependent topics. For example, a corporate leader who intends
to implement a policy must account for many factors, such as the cost, the
external and internal environment, the potential market, and the potential
customers. As a matter of fact, the issue-dependence problem has long been
studied, in particular in disciplines such as social anthropology, sociology,
political science, and psychology, which share the common ground that certain
objects are coupled by interdependent cognitive orientations.
The first step toward describing how agents’ interpersonal influences form a
belief system was taken by the FJ model (cf. [6])
$\displaystyle\xi_{i}(k+1)=$
$\displaystyle\lambda_{ii}C\sum^{N}_{j=1}p_{ij}\xi_{j}(k)+(1-\lambda_{ii})\mu_{i}$
(19)
where $\xi_{i}(k)\in\mathbb{R}^{n}$, $\lambda_{ii}$ and $\mu_{i}$ are,
respectively, the susceptibility and the initial opinion of the $i$th agent.
$P=(p_{ij})_{N\times N}$ with $p_{ij}\geq 0$ is a stochastic matrix. Matrix
$C=(c_{ij})_{n\times n}$ stands for the introspective transformation called
the multi-issues dependence structure (MiDS) (cf. [11]), and satisfies
$\sum^{n}_{j=1}|c_{ij}|=1$.
For topic specific issues, the opinion evolving equation $(\ref{20191eq1a})$
becomes
$\displaystyle\xi_{i}(k+1)=$
$\displaystyle~{}C\xi_{i}(k)+\varrho_{i}C\sum_{j\in\mathcal{N}_{i}}a_{ij}(z_{j}(k)-z_{i}(k))$
(20)
where the constant matrix $C\in\mathbb{R}^{n\times n}$ describes the MiDS,
while the remaining variables are the same as those in $(\ref{20191eq1a})$.
Combining $(\ref{20191eq1b})$ and $(\ref{20191eq53})$ gives
$\displaystyle\xi(k+1)=$
$\displaystyle~{}(I-\Lambda\mathcal{L}\mathcal{D})\otimes C\xi(k)$ (21)
where $\otimes$ denotes the Kronecker product.
Here we discuss the similarities and the differences between
$(\ref{20191eq52})$ and $(\ref{20191eq54})$.
Similarity:
They both model how the interdependent issues affect the opinion’s evolution.
Differences:
$(\ref{20191eq52})$ is concerned with the opinion evolution in the context of
the cooperative interacting networks while $(\ref{20191eq54})$ studies the
opinion evolution with antagonistic interactions characterized by an appraisal
network. We emphasize that the interacting network in $(\ref{20191eq54})$
merely quantifies whether or not there exists an information flow between a
pair of agents. Moreover, we can further extend $(\ref{20191eq54})$ to the
case where some agents may never forget their initial opinions. However, this
is beyond the scope of this paper, and is thus omitted.
Note that $(\ref{20191eq2})$ usually ensures the convergence in opinions.
However, agents in $(\ref{20191eq54})$ may converge to zero due to the
appearance of matrix $C$.
###### Theorem 5
Suppose that matrix $I-\Lambda\mathcal{L}\mathcal{D}$ has a simple eigenvalue
one. The agents in $(\ref{20191eq54})$ achieve the _stability_ if and only if
$|\lambda_{\max}(C)|<1$.
###### Proof:
It is straightforward to verify that the system is stable if and only if the
eigenvalues of matrix $(I-\Lambda\mathcal{L}\mathcal{D})\otimes C$ are
constrained in the unit disk. Note that the eigenvalues of matrix
$(I-\Lambda\mathcal{L}\mathcal{D})\otimes C$ are
$\lambda(I-\Lambda\mathcal{L}\mathcal{D})\lambda(C)$ according to matrix
theory. Hence, $(\ref{20191eq54})$ is stable if and only if
$|\lambda_{\max}(I-\Lambda\mathcal{L}\mathcal{D})\lambda_{\max}(C)|<1$, which
is equivalent to $|\lambda_{\max}(C)|<1$. The proof hence follows. ∎
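The spectral argument in the proof is easy to check numerically: $\lambda(A\otimes C)=\lambda(A)\lambda(C)$, so the spectral radii multiply, and with $\rho(I-\Lambda\mathcal{L}\mathcal{D})=1$ and $|\lambda_{\max}(C)|<1$ the opinions die out. A sketch using a generic row-stochastic stand-in for $I-\Lambda\mathcal{L}\mathcal{D}$ (an illustrative assumption) together with the matrix $C^{\star}_{1}$ from Section V-C:

```python
import numpy as np

# Row-stochastic A stands in for I - Lambda*L*D (spectral radius 1, simple
# eigenvalue one); C = C_1^* = 0.85 * [[0.9, 0.1], [0.1, 0.9]].
A = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])
C = 0.85 * np.array([[0.9, 0.1],
                     [0.1, 0.9]])

rho = lambda M: np.abs(np.linalg.eigvals(M)).max()

# lambda(A (x) C) = lambda(A)*lambda(C), hence rho(A (x) C) = rho(A)*rho(C)
print(np.isclose(rho(np.kron(A, C)), rho(A) * rho(C)))   # True

# With rho(A) = 1 and |lambda_max(C)| = 0.85 < 1, the opinions die out
xi = np.ones(6)
for _ in range(200):
    xi = np.kron(A, C) @ xi
print(np.abs(xi).max() < 1e-6)                           # True
```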
Theorem 5 suggests that we can achieve stability of the agents merely by
imposing a restriction on matrix $C$, provided that the issue-free case is
convergent. Note that the number of issues is in general drastically smaller
than the number of participating individuals. An interesting byproduct of
Theorem 5, in contrast to Theorems 1 and 3, is the stability of the
interacting agents, which is another motivation for introducing the
issue-interdependence matrix $C$ into setup $(\ref{20191eq1})$. Indeed, only
the convergence of the opinions is generally assured in the issue-free
scenario (see Theorems 1 and 3 for more information).
As discussed before, social networks rarely achieve unanimous behavior (we
treat the stability of the agents as a special case of consensus). Therefore,
it is of great importance to further study the convergence condition for
$(\ref{20191eq54})$ in the presence of matrix $C$.
###### Theorem 6
Suppose that matrix $I-\Lambda\mathcal{L}\mathcal{D}$ has a simple eigenvalue
one. The agents in $(\ref{20191eq54})$ are _convergent_ if and only if
$|\lambda^{\star}_{\max}(I-\Lambda\mathcal{L}\mathcal{D})\lambda_{\max}(C)|<1$
where $\lambda^{\star}_{\max}(I-\Lambda\mathcal{L}\mathcal{D})$ stands for the
eigenvalue of $I-\Lambda\mathcal{L}\mathcal{D}$ with the second largest
magnitude compared with eigenvalue $1$.
###### Proof:
Here we prove the theorem by following the idea from $(\ref{20191eq11})$ and
$(\ref{20191eq54})$. The error system is revised as
$\displaystyle\theta(k+1)=(I-\Lambda\mathcal{L}\mathcal{D})\otimes C\theta(k)$
(22)
where
$\theta(k)\in\bigg{(}\mathbb{R}^{N}\backslash\\{\varphi\\}\bigg{)}\otimes\mathbb{R}^{n}$.
As $I-\Lambda\mathcal{L}\mathcal{D}$ has a simple eigenvalue one, its
remaining eigenvalues, restricted to the space
$\theta(k)\in\bigg{(}\mathbb{R}^{N}\backslash\\{\varphi\\}\bigg{)}\otimes\mathbb{R}^{n}$,
are completely contained in the unit disk. Therefore, the error system
$(\ref{20191eq55})$ is stable if and only if
$|\lambda^{\star}_{\max}(I-\Lambda\mathcal{L}\mathcal{D})\lambda_{\max}(C)|<1$.
The proof hence follows. ∎
Unlike Theorem 5, Theorem 6 does not require $|\lambda_{\max}(C)|<1$. Hence,
even for a matrix $C$ with $|\lambda_{\max}(C)|>1$, we can still guarantee the
convergence of the participating agents as long as
$|\lambda^{\star}_{\max}(I-\Lambda\mathcal{L}\mathcal{D})\lambda_{\max}(C)|<1$
holds. The following corollary gives some specific requirements on matrix $C$
for guaranteeing the convergence of the agents.
###### Corollary 4
Suppose that matrix $I-\Lambda\mathcal{L}\mathcal{D}$ has a simple eigenvalue
one. The agents in $(\ref{20191eq54})$ are _convergent_ only if
$\lim_{k\rightarrow\infty}C^{k}$ exists.
###### Proof:
One can see that
$\displaystyle\xi(k)=$
$\displaystyle~{}(I-\Lambda\mathcal{L}\mathcal{D})^{k}\otimes C^{k}\xi(0)$
(23)
Therefore, $(\ref{20191eq56})$ converges only if
$\lim_{k\rightarrow\infty}C^{k}$ exists. This ends the proof. ∎
Due to the antagonistic information, matrix $I-\Lambda\mathcal{L}\mathcal{D}$
may not be a nonnegative stochastic matrix in general. Consequently, the
developed methods in the framework of the multi-agent systems are no longer
applicable. By virtue of the approach developed in Subsection III-D, it is
possible to estimate matrix $C$ if $\Lambda$ and $\mathcal{D}$ are available.
Figure 4: Interacting graph of four social actors associated with
$(\ref{20191eq44})$.
Figure 5: Opinion evolution of four social actors according to the DeGroot
model where the initial opinions are randomly selected from $[-10,10]$.
Figure 6: (a)-(b) Opinion evolution with the issues independence under the
cooperative appraisal network; (c)-(d) Opinion evolution with the issues
dependence under the cooperative appraisal network.
Figure 7: (a)-(b) Opinion evolution with the issues independence under the
antagonistic appraisal network; (c)-(d) Opinion evolution with the issues
dependence under the antagonistic appraisal network.
Figure 8: (a)-(b) Stability of the agents with the cooperative appraisal
network depicted in Section V-A; (c)-(d) Stability of the agents with the
antagonistic appraisal network depicted in Section V-B.
## V Numerical Example
Consider a social network with four individuals. The interacting matrix $P$
(corresponding to the DeGroot model) is
$\displaystyle P=\begin{bmatrix}0.22&0.12&0.36&0.3\\\
0.147&0.215&0.344&0.294\\\ 0&0&1&0\\\ 0.09&0.178&0.446&0.286\end{bmatrix}$
(24)
It should be pointed out that the elements of matrix $P$ in
$(\ref{20191eq40})$ come from a real experiment (see [6]). More specifically,
each entry of $P$ denotes the interpersonal influence accorded by individual
$i$ to individual $j$. By Proposition 1, the Laplacian matrix is
$\displaystyle\mathcal{L}=\begin{bmatrix}0.78&-0.12&-0.36&-0.3\\\
-0.147&0.785&-0.344&-0.294\\\ 0&0&0&0\\\
-0.09&-0.178&-0.446&0.714\end{bmatrix}$ (25)
with $\epsilon=1$, and the associated interacting graph is presented in Fig.
4. The opinion evolutions of the four social actors are drawn in Fig. 5.
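The relation $\mathcal{L}=\epsilon(I-P)$ with $\epsilon=1$ and the DeGroot behavior of Fig. 5 can be reproduced in a few lines (initial opinions drawn from $[-10,10]$ as in the figure; agent 3 is absorbing because its row of $P$ is $e^{\prime}_{3}$):

```python
import numpy as np

P = np.array([[0.22, 0.12, 0.36, 0.3],
              [0.147, 0.215, 0.344, 0.294],
              [0.0, 0.0, 1.0, 0.0],
              [0.09, 0.178, 0.446, 0.286]])

# Laplacian L = epsilon*(I - P) with epsilon = 1 matches the matrix in (25)
Lap = np.eye(4) - P
L_paper = np.array([[0.78, -0.12, -0.36, -0.3],
                    [-0.147, 0.785, -0.344, -0.294],
                    [0.0, 0.0, 0.0, 0.0],
                    [-0.09, -0.178, -0.446, 0.714]])
print(np.allclose(Lap, L_paper))       # True

# DeGroot update xi(k+1) = P xi(k): agent 3 is absorbing (its row is e_3'),
# so every opinion converges to agent 3's initial opinion
rng = np.random.default_rng(3)
xi = rng.uniform(-10, 10, size=4)
leader = xi[2]
for _ in range(500):
    xi = P @ xi
print(np.allclose(xi, leader))         # True
```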
### V-A Cooperative Appraisal Network
In the sequel, we examine the opinion evolution of the interacting agents with
the cooperative appraisal network
$\displaystyle\mathcal{D}_{1}=\begin{bmatrix}0.2&0.2&0.3&0.3\\\
0.1&0.5&0&0.4\\\ 0.1&0.4&0&0.5\\\ 0.4&0.3&0.2&0.1\end{bmatrix}$ (26)
The opinion evolution of the individuals for the issue independence
($C_{1}=I$) is plotted in Fig. 6(a)-(b) with the susceptibility factor matrix
$\Lambda_{1}={\rm diag}(-1,1,1,-1)$, where the initial opinions are borrowed
from [11, Equation (15)],
$\displaystyle\xi(0)\in\bigg{\\{}\overbrace{25,25}^{\xi_{1}(0)},\overbrace{25,15}^{\xi_{2}(0)},\overbrace{75,-50}^{\xi_{3}(0)},\overbrace{85,5}^{\xi_{4}(0)}\bigg{\\}}$
From Fig. 6(a)-(b), the opinions of the agents aggregate to that of the
leader, i.e., the $3$rd agent.
In what follows, we discuss the opinion evolution of the interacting agents
under issue-interdependence influence. Matrix $C_{1}$ takes the same form as
that in [11, Section VII],
$\displaystyle C_{1}=\begin{bmatrix}0.9&0.1\\\ 0.1&0.9\end{bmatrix}$
With the other parameters unchanged, the opinion evolution of the agents is
depicted in Fig. 6(c)-(d). As in the issue-independence case, the opinions
achieve consensus. However, some interesting phenomena arise: (1) unlike the
issue-free case, the leader’s opinion varies over time, whereas traditionally
the leader’s opinion remains unchanged even when the FJ model (including its
issue-interdependence version [11]) is applied; (2) by introducing the
issue-dependence matrix $C_{1}$, the final aggregated opinion may be steered
in the opposite direction of the leader’s initial opinion. In a word, a
cooperative appraisal network leads to consensus in opinions, which coincides
with the conclusion drawn in Theorem 1.
### V-B Antagonistic Appraisal Network
This subsection focuses on the opinion evolution with an antagonistic
appraisal network of the form
$\displaystyle\mathcal{D}_{2}=\begin{bmatrix}0.2&-0.2&-0.3&-0.3\\\
0.1&0.5&0&0.4\\\ -0.1&0.4&0&0.5\\\ 0.4&0.3&-0.2&0.1\end{bmatrix}$
Similar to Subsection V-A, we first consider the issue-free case. The agents’
opinion evolutions are shown in Fig. 7(a)-(b), where the parameters are the
same except for $\mathcal{D}_{2}$ and $\Lambda_{2}={\rm diag}(-1.5,2,1,-0.5)$.
Using the method in [11], the agents’ final opinions share the same direction
as the leader’s; see [11, Fig. 5(a)]. However, the second agent’s opinion has
the opposite sign to the leader’s, even though their initial opinions have the
same direction (see Fig. 7(a)-(b)).
We proceed with the issue interdependence case with
$\displaystyle C_{2}=\begin{bmatrix}0.6&0.4\\\ 0.3&0.7\end{bmatrix}$ (27)
Using the same parameters as before, the opinions of the agents are depicted
in Fig. 7(c)-(d). The leader’s opinion clearly changes over time, as opposed
to the issue-free case. In addition, although some agents initially share the
direction of the leader’s opinion, their final opinions end up in the opposite
direction. Furthermore, the opinions tend to cluster in this case, which is in
accordance with Theorem 3. One can check that $C_{2}$ in $(\ref{20191eq59})$
does not satisfy the condition of Theorem 5; indeed, the eigenvalues of matrix
$C_{2}$ are $\lambda(C_{2})\in\\{1,0.3\\}$. They do, however, meet the
requirement of Theorem 6.
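The claims about $C_{2}$ can be verified directly: its eigenvalues are $\{1,0.3\}$, so the condition of Theorem 5 fails, yet $\lim_{k\rightarrow\infty}C^{k}_{2}$ exists as Corollary 4 requires; for this row-stochastic matrix the limit has identical rows given by the stationary vector $(3/7,4/7)$. A minimal NumPy check:

```python
import numpy as np

# C2 from Section V-B: row-stochastic with eigenvalues {1, 0.3}
C2 = np.array([[0.6, 0.4],
               [0.3, 0.7]])
eigs = np.sort(np.linalg.eigvals(C2).real)
print(np.round(eigs, 6))                          # [0.3 1. ]

# |lambda_max(C2)| = 1 violates Theorem 5, yet lim C2^k exists (Corollary 4):
# both rows converge to the stationary vector (3/7, 4/7)
Ck = np.linalg.matrix_power(C2, 200)
print(np.allclose(Ck, [[3/7, 4/7], [3/7, 4/7]]))  # True
```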
### V-C Stability of Interacting Agents
Now we study the stability of the agents with cooperative and antagonistic
appraisal networks. For the cooperative appraisal network, we repeat the
simulation of Subsection V-A with matrix $C_{1}$ replaced by $C^{\star}_{1}$
$\displaystyle C^{\star}_{1}=0.85\begin{bmatrix}0.9&0.1\\\
0.1&0.9\end{bmatrix}$
It can be verified that the requirement in Theorem 5 is fulfilled. The
opinions of the agents are depicted in Fig. 8(a)-(b). For the case of
antagonistic appraisal network, we use the same parameters as Subsection V-B
by replacing matrix $C_{2}$ with $C^{\star}_{2}$
$\displaystyle C^{\star}_{2}=0.95\begin{bmatrix}0.6&0.4\\\
0.3&0.7\end{bmatrix}$
The opinion evolution of the agents can be found in Fig. 8(c)-(d).
### V-D Further Discussion
Within the framework of $(\ref{20191eq1})$, we can achieve both consensus and
clusters in opinions, and the consensus value is not necessarily restricted to
the convex hull of the initial opinions, unlike in the classical DeGroot model
(see Fig. 3(a)). Moreover, formulation $(\ref{20191eq1})$ extends the protocol
of [27] in several aspects: (i) it can describe more general behaviors in
social networks; (ii) it further manifests the importance of the weighted gain
matrix $\Lambda$; in other words, system $(\ref{20191eq2})$ may diverge
without the help of $\Lambda$.
In [11, Equation (15)], Parsegov _et al_. endowed both the initial and final
opinions of the social actors with specific meanings: by introducing an
interdependent issue matrix, positive (resp. negative) opinions correspond to
vegetarian (resp. all-meat) diets. According to [11, Examples $3$ and $4$],
all social actors remain vegetarian, which coincides with their initial
opinions $\xi^{1}(0)=(25,25,75,85)$ (cf. [11, Equation (15)]), i.e., the
initial opinions on the first issue. However, as Fig. 6 shows, some agents
eventually adopt the all-meat diet even though they are vegetarian at the
beginning. This also indicates that some agents end up with attitudes opposite
to the leader’s even though all attitudes initially share the same direction.
Additionally, from Figs. 6(c)-(d) and 7(c)-(d), the leader’s final opinion may
be affected by the evolutions of the other agents, which is new from the
perspective of the classical DeGroot model. In a nutshell, the setup proposed
in this paper exhibits some interesting phenomena compared with the existing
literature.
## VI Conclusion
This paper has studied the opinion dynamics in social networks by introducing
an appraisal network to quantify the cooperative or antagonistic information.
We have shown that the cooperative appraisal network achieves the consensus in
opinions while the antagonistic appraisal network leads to the opinion
clusters. The tool of random convex optimization has been used to estimate the
appraisal network with a prescribed confidence level, along with a lower bound
on the number of sampled observations. Moreover, the proposed setup has been
extended to the case of multiple interdependent issues. Some discussions have
also been given to compare with the existing literature.
[Proof of Theorem 2] To prove Theorem 2, the Hermite-Biehler Theorem (cf.
[40]) and the Bilinear Transformation Theorem (cf. [41]) are needed. To begin
with, we first introduce a lemma.
###### Lemma 2
([41]) Given two polynomials $\mathbb{S}(z)$ of degree $d$ and $\mathbb{Q}(z)$
with
$\displaystyle\mathbb{Q}(z)=~{}(z-1)^{d}\mathbb{S}\bigg{(}\frac{z+1}{z-1}\bigg{)}$
Then the Schur stability of $\mathbb{S}(z)$ implies the Hurwitz stability on
$\mathbb{Q}(z)$, and vice versa.
For complex polynomial $\mathbb{Q}(z)$, replacing $z$ with $\mathbbm{i}w$
yields
$\displaystyle\mathbb{Q}(\mathbbm{i}w)=~{}S(w)+\mathbbm{i}Q(w)$
where both $S(w)$ and $Q(w)$ are real polynomials. A relationship among the
roots of $S(w)$ and $Q(w)$ is depicted as follows.
###### Definition 3
([40]) For any real polynomials $S(w)$ and $Q(w)$, they are interlaced if
* (i)
The roots of $S(w)$ (denoted by $\\{S_{1},...,S_{\ell_{S}}\\}$) and $Q(w)$
(denoted by $\\{Q_{1},...,Q_{\ell_{Q}}\\}$) satisfy
$\left\\{\begin{aligned} &S_{1}<S_{2}<\cdots<S_{\ell_{S}}\\\
&Q_{1}<Q_{2}<\cdots<Q_{\ell_{Q}}\end{aligned}\right.$
* (ii)
$|\ell_{S}-\ell_{Q}|\leq 1$, and one of the following facts holds
$\left\\{\begin{aligned}
&S_{1}<Q_{1}<S_{2}<Q_{2}\cdots<Q_{\ell_{Q}}<S_{\ell_{S}},~{}\ell_{S}=\ell_{Q}+1\\\
&Q_{1}<S_{1}<Q_{2}<S_{2}\cdots<S_{\ell_{S}}<Q_{\ell_{Q}},~{}\ell_{Q}=\ell_{S}+1\\\
&Q_{1}<S_{1}<Q_{2}<S_{2}\cdots<Q_{\ell_{Q}}<S_{\ell_{S}},~{}\ell_{Q}=\ell_{S}\\\
&S_{1}<Q_{1}<S_{2}<Q_{2}\cdots<S_{\ell_{S}}<Q_{\ell_{Q}},~{}\ell_{S}=\ell_{Q}\\\
\end{aligned}\right.$
The next lemma develops a judgement on the Hurwitz stability associated with
$\mathbb{Q}(z)$ that is closely linked to the roots of polynomials $S(w)$ and
$Q(w)$ and the interlaced property defined in Definition 3.
###### Lemma 3
([40]) Polynomial $\mathbb{Q}(z)$ is Hurwitz stable if and only if
* (i)
Polynomials $S(w)$ and $Q(w)$ are interlaced;
* (ii)
Polynomials $S(w)$, $\frac{\partial Q(w)}{\partial w}$, $\frac{\partial
S(w)}{\partial w}$ and $Q(w)$ at the origin fulfill
$\displaystyle S(0)\frac{\partial Q(0)}{\partial w}-\frac{\partial
S(0)}{\partial w}Q(0)>0$
With the above preparations, we are about to give the proof of Theorem 2.
###### Proof:
It is easy to see that the updating dynamics of the individuals is determined
by $(\ref{20191eq64})$. Accordingly, the characteristic equation of
$(\ref{20191eq64})$ can be written as
$\displaystyle{\rm
det}\bigg{(}zI-(I-\varrho\mathcal{L}+\varrho^{2}\mathcal{L}^{2})\bigg{)}$
$\displaystyle=$
$\displaystyle~{}(z-1)\prod^{N}_{i=2}(z-1+\varrho\lambda_{i}-\varrho^{2}\lambda^{2}_{i})$
Before proceeding further, we must consider two cases according to whether
${\rm Im}(\lambda_{i})$ vanishes. We start with the scenario ${\rm
Im}(\lambda_{i})=0$.
As argued before, consensus in opinions indicates that $|z|<1$ if $z\neq 1$.
In such a setting, it is enough to demonstrate that
$\displaystyle\varrho^{2}\lambda^{2}_{i}-\varrho\lambda_{i}+1<1$ (28a)
$\displaystyle\varrho^{2}\lambda^{2}_{i}-\varrho\lambda_{i}+1>-1$ (28b)
with the constraints $\varrho>0$ and $\lambda_{i}\neq 0$. By computation, one
derives that the feasible region on $\varrho$ for inequality
$(\ref{20191eq79a})$ is $(0,\frac{1}{\lambda_{i}})$, while the feasible region
on $\varrho$ associated with inequality $(\ref{20191eq79b})$ is $[0,\infty)$.
Thus, the feasible region for $\varrho$ is
$\displaystyle\varrho\in\bigg{(}0,\frac{1}{\lambda_{i}}\bigg{)}$
In the sequel, we are dedicated to the case of ${\rm Im}(\lambda_{i})\neq 0$.
Let $\mathbb{S}_{i}(z)$ be of the form
$\displaystyle\mathbb{S}_{i}(z)=~{}z-1+\varrho\lambda_{i}-\varrho^{2}\lambda^{2}_{i},~{}i=2,...,N$
(29)
where $\lambda_{i}$ stands for the $i$th eigenvalue of $\mathcal{L}$. Applying
the bilinear transformation
$\displaystyle\mathbb{Q}_{i}(z)=~{}(z-1)\mathbb{S}_{i}\bigg{(}\frac{z+1}{z-1}\bigg{)}$
(30)
one gets
$\displaystyle\mathbb{Q}_{i}(z)=$
$\displaystyle~{}2+\varrho\lambda_{i}(1-\varrho\lambda_{i})z+\varrho\lambda_{i}(\varrho\lambda_{i}-1)$
$\displaystyle=$
$\displaystyle~{}2+\varrho|\lambda_{i}|\bigg{(}\cos(\arg(\lambda_{i}))+\mathbbm{i}\sin(\arg(\lambda_{i}))\bigg{)}z$
$\displaystyle-\varrho|\lambda_{i}|\bigg{(}\cos(\arg(\lambda_{i}))+\mathbbm{i}\sin(\arg(\lambda_{i}))\bigg{)}$
$\displaystyle-\varrho^{2}|\lambda_{i}|^{2}\bigg{(}\cos^{2}(\arg(\lambda_{i}))-\sin^{2}(\arg(\lambda_{i}))$
$\displaystyle+2\mathbbm{i}\sin(\arg(\lambda_{i}))\cos(\arg(\lambda_{i}))\bigg{)}z$
$\displaystyle-\varrho^{2}|\lambda_{i}|^{2}\bigg{(}\cos^{2}(\arg(\lambda_{i}))-\sin^{2}(\arg(\lambda_{i}))$
$\displaystyle+2\mathbbm{i}\sin(\arg(\lambda_{i}))\cos(\arg(\lambda_{i}))\bigg{)}$
Substituting $z=\mathbbm{i}w$ into $\mathbb{Q}_{i}(z)$ results in
$\displaystyle\mathbb{Q}_{i}(\mathbbm{i}w)=$
$\displaystyle~{}~{}2+\varrho|\lambda_{i}|\bigg{(}\mathbbm{i}\cos(\arg(\lambda_{i}))-\sin(\arg(\lambda_{i}))\bigg{)}w$
(31)
$\displaystyle-\varrho|\lambda_{i}|\bigg{(}\cos(\arg(\lambda_{i}))+\mathbbm{i}\sin(\arg(\lambda_{i}))\bigg{)}$
$\displaystyle-\varrho^{2}|\lambda_{i}|^{2}\bigg{(}\mathbbm{i}\cos^{2}(\arg(\lambda_{i}))-\mathbbm{i}\sin^{2}(\arg(\lambda_{i}))$
$\displaystyle-2\sin(\arg(\lambda_{i}))\cos(\arg(\lambda_{i}))\bigg{)}w$
$\displaystyle-\varrho^{2}|\lambda_{i}|^{2}\bigg{(}\cos^{2}(\arg(\lambda_{i}))-\sin^{2}(\arg(\lambda_{i}))$
$\displaystyle+2\mathbbm{i}\sin(\arg(\lambda_{i}))\cos(\arg(\lambda_{i}))\bigg{)}$
$\displaystyle\triangleq$ $\displaystyle~{}{\rm
Re}(\mathbb{Q}_{i}(w))+\mathbbm{i}{\rm Im}(\mathbb{Q}_{i}(w))$
With the constraint ${\rm Re}(\mathbb{Q}_{i}(w))=0$, it yields
$\displaystyle w_{1}$ $\displaystyle=$
$\displaystyle~{}\frac{2+\varrho^{2}|\lambda_{i}|^{2}(2\cos^{2}(\arg(\lambda_{i}))-1)-\varrho|\lambda_{i}|\cos(\arg(\lambda_{i}))}{\varrho|\lambda_{i}|\sin(\arg(\lambda_{i}))-2\varrho^{2}|\lambda_{i}|^{2}\sin(\arg(\lambda_{i}))\cos(\arg(\lambda_{i}))}$
$\displaystyle\triangleq$ $\displaystyle~{}\frac{2+\mathbbm{x}}{\mathbbm{y}}$
Analogously, the requirement of ${\rm Im}(\mathbb{Q}_{i}(w))=0$ immediately
leads to
$\displaystyle w_{2}$ $\displaystyle=$
$\displaystyle~{}\frac{2\varrho^{2}|\lambda_{i}|^{2}\sin(\arg(\lambda_{i}))\cos(\arg(\lambda_{i}))-\varrho|\lambda_{i}|\sin(\arg(\lambda_{i}))}{\varrho^{2}|\lambda_{i}|^{2}(2\cos^{2}(\arg(\lambda_{i}))-1)-\varrho|\lambda_{i}|\cos(\arg(\lambda_{i}))}$
In accordance with $w_{1}$, $w_{2}$ hence is of the form
$\displaystyle w_{2}=~{}\frac{-\mathbbm{y}}{\mathbbm{x}}$
We now determine the condition guaranteeing
$\displaystyle w_{1}\frac{1}{w_{2}}\neq 1$
For ease of discussion, we resort to the opposite statement, i.e.,
$\displaystyle 1=$ $\displaystyle~{}w_{1}\frac{1}{w_{2}}$ (32)
$\displaystyle=$
$\displaystyle~{}\frac{2\mathbbm{x}+\mathbbm{x}^{2}}{-\mathbbm{y}^{2}}$
We can see that $(\ref{20191eq84})$ is equivalent to $-\mathbbm{y}^{2}=2\mathbbm{x}+\mathbbm{x}^{2}$, i.e., $\mathbbm{x}^{2}+2\mathbbm{x}+\mathbbm{y}^{2}=0$, or
$\displaystyle 1=~{}(\mathbbm{x}+1)^{2}+\mathbbm{y}^{2}$
In polar coordinates, for $\theta\in[0,2\pi)$, we may write
$\displaystyle\mathbbm{x}=$ $\displaystyle~{}\cos(\theta)-1$ (33a)
$\displaystyle\mathbbm{y}=$ $\displaystyle~{}\sin(\theta)$ (33b)
For $(\ref{20191eq85a})$, it is straightforward to see that
$\displaystyle\mathbbm{x}-\cos(\theta)+1=$
$\displaystyle~{}\varrho^{2}|\lambda_{i}|^{2}\cos(2\arg(\lambda_{i}))$
$\displaystyle-\varrho|\lambda_{i}|\cos(\arg(\lambda_{i}))-\cos(\theta)+1$
$\displaystyle\triangleq$ $\displaystyle~{}f_{\theta}(\varrho)$
For $\forall~{}\theta\in[0,2\pi)$, denote
$\displaystyle\Delta_{\mathbbm{x}}\triangleq$
$\displaystyle~{}|\lambda_{i}|^{2}\cos^{2}(\arg(\lambda_{i}))+4|\lambda_{i}|^{2}\cos(2\arg(\lambda_{i}))(\cos(\theta)-1)$
$\displaystyle=$
$\displaystyle~{}|\lambda_{i}|^{2}\bigg{(}\cos^{2}(\arg(\lambda_{i}))(8\cos(\theta)-7)+4(1-\cos(\theta))\bigg{)}$
In the sequel, two cases involving the real roots of $f_{\theta}(\varrho)=0$
are formulated.
Case 1) $\theta\in[-\arccos(\frac{7}{8}),\arccos(\frac{7}{8})]$,
$\Delta_{\mathbbm{x}}\geq 0$ follows directly.
Case 2)
$\theta\in(-\frac{\pi}{2},-\arccos(\frac{7}{8}))\bigcup(\arccos(\frac{7}{8}),\frac{\pi}{2})$.
In such a circumstance, one has $\Delta_{\mathbbm{x}}\geq 0$ if and only if
$\arg(\lambda_{i})$ is preserved in the set
$\displaystyle\bigg{[}\arccos\bigg{(}\sqrt{\frac{4(1-\cos(\theta))}{7-8\cos(\theta)}}\bigg{)},\frac{\pi}{2}\bigg{)}$
$\displaystyle\bigcup$
$\displaystyle\bigg{(}-\frac{\pi}{2},-\arccos(\sqrt{\frac{4(1-\cos(\theta))}{7-8\cos(\theta)}})\bigg{]}$
The two real roots of $f_{\theta}(\varrho)=0$ are given by
$\left\\{\begin{aligned}
\varrho_{i,1}=&~{}\frac{|\lambda_{i}|\cos(\arg(\lambda_{i}))+\sqrt{\Delta_{\mathbbm{x}}}}{2|\lambda_{i}|^{2}\cos(2\arg(\lambda_{i}))}\\\
\varrho_{i,2}=&~{}\frac{|\lambda_{i}|\cos(\arg(\lambda_{i}))-\sqrt{\Delta_{\mathbbm{x}}}}{2|\lambda_{i}|^{2}\cos(2\arg(\lambda_{i}))}\\\
\end{aligned}\right.$
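The closed-form roots above follow from the quadratic formula applied to $f_{\theta}(\varrho)=0$. As a quick numerical sanity check (with hypothetical values of $|\lambda_{i}|$, $\arg(\lambda_{i})$ and $\theta$ chosen inside Case 1, so that $\Delta_{\mathbbm{x}}\geq 0$ is guaranteed), the following sketch verifies that both roots annihilate $f_{\theta}$ and are positive:

```python
import math

# Hypothetical parameters: theta lies in Case 1, so Delta_x >= 0 is guaranteed.
lam_abs = 1.5     # |lambda_i|
lam_arg = 0.3     # arg(lambda_i) in (-pi/2, pi/2)
theta = 0.2       # theta in [-arccos(7/8), arccos(7/8)], arccos(7/8) ~ 0.505

# f_theta(rho) = |lam|^2 cos(2 arg) rho^2 - |lam| cos(arg) rho + (1 - cos(theta))
a = lam_abs ** 2 * math.cos(2 * lam_arg)
b = -lam_abs * math.cos(lam_arg)
c = 1 - math.cos(theta)

def f_theta(rho):
    return a * rho ** 2 + b * rho + c

# Delta_x = |lam|^2 cos^2(arg) - 4 |lam|^2 cos(2 arg) (1 - cos(theta))
delta_x = (lam_abs ** 2 * math.cos(lam_arg) ** 2
           - 4 * lam_abs ** 2 * math.cos(2 * lam_arg) * (1 - math.cos(theta)))
assert abs(delta_x - (b * b - 4 * a * c)) < 1e-12  # matches the discriminant
assert delta_x >= 0

rho_1 = (lam_abs * math.cos(lam_arg) + math.sqrt(delta_x)) / (2 * a)
rho_2 = (lam_abs * math.cos(lam_arg) - math.sqrt(delta_x)) / (2 * a)

assert abs(f_theta(rho_1)) < 1e-9 and abs(f_theta(rho_2)) < 1e-9
assert rho_1 > 0 and rho_2 > 0   # consistent with the positivity claim
```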
Moreover, we calculate
$\displaystyle|\lambda_{i}|^{2}\cos^{2}(\arg(\lambda_{i}))-\Delta_{\mathbbm{x}}=4|\lambda_{i}|^{2}(1-\cos(\theta))\cos(2\arg(\lambda_{i}))$
Therefore, the sign of this quantity depends entirely on
$\cos(2\arg(\lambda_{i}))$, which in turn indicates that both $\varrho_{i,1}$
and $\varrho_{i,2}$ are positive.
For $(\ref{20191eq85b})$, we find
$\displaystyle\sin(\theta)-\mathbbm{y}=$
$\displaystyle~{}\varrho^{2}|\lambda_{i}|^{2}\sin(2\arg(\lambda_{i}))-\varrho|\lambda_{i}|\sin(\arg(\lambda_{i}))$
$\displaystyle+\sin(\theta)$ $\displaystyle\triangleq$
$\displaystyle~{}g_{\theta}(\varrho),~{}\theta\in[0,2\pi)$
For $\forall~{}\theta\in[0,2\pi)$, denote
$\displaystyle\Delta_{\mathbbm{y}}\triangleq$
$\displaystyle~{}|\lambda_{i}|^{2}\bigg{(}\sin^{2}(\arg(\lambda_{i}))-4\sin(2\arg(\lambda_{i}))\sin(\theta)\bigg{)}$
Evidently, for $\theta\in[0,2\pi)$, $\Delta_{\mathbbm{y}}\geq 0$ is equivalent to
$\displaystyle\sin^{2}(\arg(\lambda_{i}))\geq
8\sin(\arg(\lambda_{i}))\cos(\arg(\lambda_{i}))\sin(\theta)$ (34)
In a similar manner, two scenarios should be argued.
Case i) $\arg(\lambda_{i})\in(0,\frac{\pi}{2})$. In this case,
$(\ref{20191eq91})$ holds provided that
$\displaystyle\arg(\lambda_{i})\in\bigg{[}\max\\{0,\arctan(8\sin(\theta))\\},\frac{\pi}{2}\bigg{)}\backslash\\{0\\},~{}\theta\in[0,2\pi)$
Case ii) $\arg(\lambda_{i})\in(-\frac{\pi}{2},0)$. It is trivial that
$\displaystyle 0\leq\Delta_{\mathbbm{y}},~{}\forall~{}\theta\in[0,\pi]$
And for $\theta\in[-\pi,0]$, $(\ref{20191eq91})$ could be further expressed by
$\displaystyle\sin^{2}(\arg(\lambda_{i}))\geq
8|\sin(\arg(\lambda_{i}))|\cos(\arg(\lambda_{i}))|\sin(\theta)|$
which is true as long as
$\displaystyle\arg(\lambda_{i})\in\bigg{[}-\frac{\pi}{2},\min\\{0,-\arctan(8\sin(\theta))\\}\bigg{)}\backslash\\{0\\}$
With the foregoing arguments, $g_{\theta}(\varrho)=0$ has two real roots,
which can be described as
$\left\\{\begin{aligned}
\varrho_{i,3}=&~{}\frac{|\lambda_{i}|\sin(\arg(\lambda_{i}))+\sqrt{\Delta_{\mathbbm{y}}}}{2|\lambda_{i}|^{2}\sin(2\arg(\lambda_{i}))}\\\
\varrho_{i,4}=&~{}\frac{|\lambda_{i}|\sin(\arg(\lambda_{i}))-\sqrt{\Delta_{\mathbbm{y}}}}{2|\lambda_{i}|^{2}\sin(2\arg(\lambda_{i}))}\\\
\end{aligned}\right.$
Therefore, $w_{1}$ and $w_{2}$ are interlaced (cf. Definition 3) with the
constraints on $\varrho$, i.e.,
$\displaystyle\varrho\not\in\\{\varrho_{i,1},\varrho_{i,2}\\}\bigcup\\{\varrho_{i,3},\varrho_{i,4}\\}$
By virtue of the specifications on ${\rm Re}(\mathbb{Q}_{i}(w))$ and ${\rm
Im}(\mathbb{Q}_{i}(w))$ (cf. $(\ref{20191eq78})$), one gets
$\left\\{\begin{aligned} \frac{\partial{\rm Re}(\mathbb{Q}_{i}(0))}{\partial
w}=&~{}\varrho^{2}|\lambda_{i}|^{2}\sin(2\arg(\lambda_{i}))-\varrho|\lambda_{i}|\sin(\arg(\lambda_{i}))\\\
{\rm
Re}(\mathbb{Q}_{i}(0))=&~{}2+\varrho^{2}|\lambda_{i}|^{2}(2\cos^{2}(\arg(\lambda_{i}))-1)\\\
&-\varrho|\lambda_{i}|\cos(\arg(\lambda_{i}))\\\ \frac{\partial{\rm
Im}(\mathbb{Q}_{i}(0))}{\partial
w}=&~{}\varrho^{2}|\lambda_{i}|^{2}(1-2\cos^{2}(\arg(\lambda_{i})))\\\
&+\varrho|\lambda_{i}|\cos(\arg(\lambda_{i}))\\\ {\rm
Im}(\mathbb{Q}_{i}(0))=&~{}\varrho^{2}|\lambda_{i}|^{2}\sin(2\arg(\lambda_{i}))-\varrho|\lambda_{i}|\sin(\arg(\lambda_{i}))\end{aligned}\right.$
We perform the calculation
$\displaystyle{\rm Re}(\mathbb{Q}_{i}(0))\frac{\partial{\rm
Im}(\mathbb{Q}_{i}(0))}{\partial w}-\frac{\partial{\rm
Re}(\mathbb{Q}_{i}(0))}{\partial w}{\rm Re}(\mathbb{Q}_{i}(0))$
$\displaystyle=$
$\displaystyle~{}\varrho|\lambda_{i}|\bigg{(}-\varrho^{3}|\lambda_{i}|^{3}+\varrho^{2}|\lambda_{i}|^{2}\cos^{2}(\arg(\lambda_{i}))$
$\displaystyle-2\varrho|\lambda_{i}|\sin(2\arg(\lambda_{i}))-\varrho|\lambda_{i}|+2\cos(\arg(\lambda_{i}))\bigg{)}$
$\displaystyle\triangleq$
$\displaystyle~{}\varrho|\lambda_{i}|f_{i}(\varrho,\lambda_{i},\arg(\lambda_{i})),~{}i=2,...,N$
Obviously, ${\rm Re}(\mathbb{Q}_{i}(0))\frac{\partial{\rm
Im}(\mathbb{Q}_{i}(0))}{\partial w}-\frac{\partial{\rm
Re}(\mathbb{Q}_{i}(0))}{\partial w}{\rm Re}(\mathbb{Q}_{i}(0))$ is positive if
and only if $f_{i}(\varrho,\lambda_{i},\arg(\lambda_{i}))>0$.
Consequently, by Lemma 3, we can see that $\mathbb{Q}_{i}(z)$ in
$(\ref{20191eq76})$ is Hurwitz stable. With the aid of Lemma 2, one could
state that $\mathbb{S}_{i}(z)$ in $(\ref{20191eq75})$ is Schur stable. This
ends the proof. ∎
## References
* [1] N. E. Friedkin and E. C. Johnsen, _Social Influence Network Theory: A Sociological Examination of Small Group Dynamics_. Cambridge University Press, Cambridge, UK, 2011.
* [2] F. Bullo, _Lectures on Network Systems_ , 1st ed. Kindle Direct Publishing, 2019, with contributions by J. Cortés, F. Dörfler, and S. Martínez. [Online]. Available: http://motion.me.ucsb.edu/book-lns
* [3] M. H. DeGroot, “Reaching a consensus,” _Journal of the American Statistical Association_ , vol. 69, no. 345, pp. 118–121, 1974.
* [4] R. Hegselmann, U. Krause _et al._ , “Opinion dynamics and bounded confidence models, analysis, and simulation,” _Journal of Artificial Societies and Social Simulation_ , vol. 5, no. 3, pp. 1–24, 2002.
* [5] V. D. Blondel, J. M. Hendrickx, and J. N. Tsitsiklis, “On krause’s multi-agent consensus model with state-dependent connectivity,” _IEEE Transactions on Automatic Control_ , vol. 54, no. 11, pp. 2586–2597, 2009.
* [6] N. E. Friedkin and E. C. Johnsen, “Social influence networks and opinion change,” _Advances in Group Processes_ , vol. 16, pp. 1–29, 1999.
* [7] N. E. Friedkin, “The problem of social control and coordination of complex systems in sociology: A look at the community cleavage problem,” _IEEE Control Systems Magazine_ , vol. 35, no. 3, pp. 40–51, 2015.
* [8] S. Fortunato, V. Latora, A. Pluchino, and A. Rapisarda, “Vector opinion dynamics in a bounded confidence consensus model,” _International Journal of Modern Physics C_ , vol. 16, no. 10, pp. 1535–1551, 2005.
* [9] N. E. Friedkin and F. Bullo, “How truth wins in opinion dynamics along issue sequences,” _Proceedings of the National Academy of Sciences_ , vol. 114, no. 43, pp. 11380–11385, 2017.
* [10] P. E. Converse, “The nature of belief systems in mass publics (1964),” _Critical Review_ , vol. 18, no. 1-3, pp. 1–74, 2006.
* [11] S. E. Parsegov, A. V. Proskurnikov, R. Tempo, and N. E. Friedkin, “Novel multidimensional models of opinion dynamics in social networks,” _IEEE Transactions on Automatic Control_ , vol. 62, no. 5, pp. 2270–2285, 2017.
* [12] N. E. Friedkin, A. V. Proskurnikov, R. Tempo, and S. E. Parsegov, “Network science on belief system dynamics under logic constraints,” _Science_ , vol. 354, no. 6310, pp. 321–326, 2016.
* [13] C. Ravazzi, P. Frasca, R. Tempo, and H. Ishii, “Ergodic randomized algorithms and dynamics over networks,” _IEEE Transactions on Control of Network Systems_ , vol. 2, no. 1, pp. 78–87, 2015.
* [14] P. Jia, A. MirTabatabaei, N. E. Friedkin, and F. Bullo, “Opinion dynamics and the evolution of social power in influence networks,” _SIAM Review_ , vol. 57, no. 3, pp. 367–397, 2015.
* [15] N. H. Anderson, _Foundations of Information Integration Theory_. New York: Academic Press, 1981.
* [16] R. E. Petty and J. T. Cacioppo, _Communication and Persuasion: Central and Peripheral Routes to Attitude Change_. Springer Science & Business Media, 2012.
* [17] A. Flache and M. W. Macy, “Small worlds and cultural polarization,” _The Journal of Mathematical Sociology_ , vol. 35, no. 1-3, pp. 146–176, 2011.
* [18] C. Altafini, “Consensus problems on networks with antagonistic interactions,” _IEEE Transactions on Automatic Control_ , vol. 58, no. 4, pp. 935–946, 2013.
* [19] A. V. Proskurnikov, A. S. Matveev, and M. Cao, “Opinion dynamics in social networks with hostile camps: Consensus vs. polarization,” _IEEE Transactions on Automatic Control_ , vol. 61, no. 6, pp. 1524–1536, 2016.
* [20] D. Meng, M. Du, and Y. Jia, “Interval bipartite consensus of networked agents associated with signed digraphs,” _IEEE Transactions on Automatic Control_ , vol. 61, no. 12, pp. 3755–3770, 2016.
* [21] Y. Zhang and Y. Liu, “Nonlinear second-order multi-agent systems subject to antagonistic interactions without velocity constraints,” _Applied Mathematics and Computation_ , vol. 364, p. 124667, 2020.
* [22] M. E. Valcher and P. Misra, “On the consensus and bipartite consensus in high-order multi-agent dynamical systems with antagonistic interactions,” _Systems & Control Letters_, vol. 66, pp. 94–103, 2014.
* [23] J. Lu, Y. Wang, X. Shi, and J. Cao, “Finite-time bipartite consensus for multiagent systems under detail-balanced antagonistic interactions,” _IEEE Transactions on Systems, Man, and Cybernetics: Systems_ , 2019, DOI: 10.1109/TSMC.2019.2938419.
* [24] A. V. Proskurnikov and R. Tempo, “A tutorial on modeling and analysis of dynamic social networks. Part II,” _Annual Reviews in Control_ , vol. 45, pp. 166–190, 2018.
* [25] G. Shi, C. Altafini, and J. S. Baras, “Dynamics over signed networks,” _SIAM Review_ , vol. 61, no. 2, pp. 229–257, 2019.
* [26] W. Zhang, Z. Zuo, and Y. Wang, “Cooperative control in the presence of antagonistic reciprocity,” in _11th Asian Control Conference_. IEEE, 2017, pp. 745–749.
* [27] W. Zhang, Z. Zuo, Y. Wang, and W. Ren, “Quasi-containment control against antagonistic information,” _IEEE Transactions on Automatic Control_ , conditionally accepted.
* [28] W. Zhang, Z. Zuo, Y. Wang, and Z. Zhang, “Double-integrator dynamics for multiagent systems with antagonistic reciprocity,” _IEEE Transactions on Cybernetics_ , 2019, DOI: 10.1109/TCYB.2019.2939487.
* [29] L. Wang, T. Ye, and J. Du, “Opinion dynamics in social networks (in Chinese),” _SCIENTIA SINICA Informationis_ , vol. 1, no. 48, pp. 3–23, 2018.
* [30] N. E. Friedkin, _A Structural Theory of Social Influence_. Cambridge University Press, 2006.
* [31] G. C. Calafiore and M. C. Campi, “The scenario approach to robust control design,” _IEEE Transactions on Automatic Control_ , vol. 51, no. 5, pp. 742–753, 2006.
* [32] G. C. Calafiore, “Random convex programs,” _SIAM Journal on Optimization_ , vol. 20, no. 6, p. 3427, 2010.
* [33] R. Olfati-Saber and R. M. Murray, “Consensus problems in networks of agents with switching topology and time-delays,” _IEEE Transactions on Automatic Control_ , vol. 49, no. 9, pp. 1520–1533, 2004.
* [34] M. De Domenico, A. Solé-Ribalta, E. Cozzo, M. Kivelä, Y. Moreno, M. A. Porter, S. Gómez, and A. Arenas, “Mathematical formulation of multilayer networks,” _Physical Review X_ , vol. 3, no. 4, pp. 1–15, 2013.
* [35] P. Dandekar, A. Goel, and D. T. Lee, “Biased assimilation, homophily, and the dynamics of polarization,” _Proceedings of the National Academy of Sciences_ , vol. 110, no. 15, pp. 5791–5796, 2013.
* [36] T. A. Snijders, J. Koskinen, and M. Schweinberger, “Maximum likelihood estimation for social network dynamics,” _The Annals of Applied Statistics_ , vol. 4, no. 2, pp. 567–588, 2010.
* [37] H.-T. Wai, A. Scaglione, and A. Leshem, “Active sensing of social networks,” _IEEE Transactions on Signal and Information Processing over Networks_ , vol. 2, no. 3, pp. 406–419, 2016.
* [38] A. J. Laub, _Matrix Analysis for Scientists and Engineers_. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, USA, 2005.
* [39] M. C. Campi and S. Garatti, “The exact feasibility of randomized solutions of uncertain convex programs,” _SIAM Journal on Optimization_ , vol. 19, no. 3, pp. 1211–1230, 2008.
* [40] L. Huang, L. Wang, and C. Hollot, “On robust stability of polynomials and related topics,” _Systems Science and Mathematical Science_ , vol. 5, no. 1, pp. 42–54, 1992.
* [41] K. Ogata, _Discrete-Time Control Systems_. Prentice Hall Englewood Cliffs, NJ, 1995.
# Spectrum Sharing for 6G Integrated Satellite-Terrestrial Communication
Networks Based on NOMA and Cognitive Radio
Xin Liu, Kwok-Yan Lam, Feng Li, Jun Zhao, Li Wang (_Correspondence author: Feng Li_)
X. Liu is with the School of Information and Communication Engineering, Dalian University of Technology, Dalian 116024, China (e-mail: liuxinstar1984@dlut.edu.cn). K. Lam and J. Zhao are with the School of Computer Science and Engineering, Nanyang Technological University, 639798, Singapore <EMAIL_ADDRESS>junzhao@ntu.edu.sg). F. Li is with the School of Information and Electronic Engineering, Zhejiang Gongshang University, Hangzhou, 310018, China. F. Li is also with the School of Computer Science and Engineering, Nanyang Technological University, 639798, Singapore (fengli2002@yeah.net). L. Wang is with the College of Marine Electrical Engineering, Dalian Maritime University, Dalian, 116026, China <EMAIL_ADDRESS>
###### Abstract
The explosive growth of bandwidth-hungry Internet applications has led to the
rapid development of new generation mobile network technologies that are
expected to provide broadband access to the Internet in a pervasive manner.
For example, 6G networks are capable of providing high-speed network access by
exploiting higher frequency spectrum; high-throughput satellite communication
services are also adopted to achieve pervasive coverage in remote and isolated
areas. In order to enable seamless access, Integrated Satellite-Terrestrial
Communication Networks (ISTCN) has emerged as an important research area.
ISTCN aims to provide high-speed and pervasive network services by integrating
broadband terrestrial mobile networks with satellite communication networks.
As terrestrial mobile networks began to use higher frequency spectrum (between
3GHz to 40GHz) which overlaps with that of satellite communication (4GHz to
8GHz for C band and 26GHz to 40GHz for Ka band), there are opportunities and
challenges. On one hand, satellite terminals can potentially access
terrestrial networks in an integrated manner; on the other hand, there will be
more congestion and interference in this spectrum, hence more efficient
spectrum management techniques are required. In this paper, we propose a new
technique to improve spectrum sharing performance by introducing Non-Orthogonal
Multiple Access (NOMA) and Cognitive Radio (CR) into the spectrum sharing of
ISTCN. In essence, NOMA technology improves spectrum
efficiency by allowing different users to transmit on the same carrier and
distinguishing users by their power levels, while CR technology improves
spectrum efficiency through dynamic spectrum sharing. Furthermore, some open
research issues and challenges in ISTCN will be discussed.
## Introduction
In order to enable pervasive network connectivity, Integrated Satellite-
Terrestrial Communication Networks (ISTCN) has emerged as an important
research area. ISTCN aims to provide high-speed and pervasive network services
by integrating broadband terrestrial mobile networks with satellite
communication networks. The ISTCN can provide reliable communications and
global interconnections for disaster-affected areas, remote areas and
emergency areas, where terrestrial communication facilities are not readily
available [1].
The future 6G networks are expected to offer unprecedented opportunities for
Smart Cities and Internet of Things applications through their global seamless
coverage, gigabit communication capacity, ultra-reliable real-time
communications and ubiquitous machine-type communications. These new-generation
terrestrial mobile networks achieve their functionality by intelligently and
optimally exploiting the higher frequency spectrum, typically in the range of
3GHz to 40GHz. However, for suburban and isolated geographic locations, the
coverage of high-speed terrestrial mobile networks could be limited and hence
needs to be complemented by satellite communications in order to meet the
connectivity requirements of safety critical applications such as Internet of
Vehicles and Industry 4.0 control systems [4].
As terrestrial mobile networks began to use higher frequency spectrum which
overlaps with that of satellite communications (e.g. 4GHz to 8GHz for C band
and 26GHz to 40GHz for Ka band), there are vast opportunities as well as
difficult challenges. On one hand, satellite terminals can potentially access
terrestrial network in an integrated manner; on the other hand, there will be
more congestion and interference in this spectrum, hence more efficient
spectrum management techniques are required. The objective is to make full use
of the complementary advantages of satellite networks and terrestrial mobile
networks, so as to realize all-weather, all-region seamless coverage
of high-speed mobile broadband networks.
In addition, it aims to effectively alleviate the shortage of satellite
spectrum resources by applying spectrum sharing technology to reuse the
terrestrial spectrum for satellite communications [2]. In 6G mobile
communications, Non-orthogonal Multiple Access (NOMA) and Cognitive Radio (CR)
are two most promising spectrum sharing technologies [3]. NOMA is different
from the traditional Orthogonal Multiple Access (OMA), which uses non-
orthogonal resource allocation approach to accommodate more users [5]. At the
transmitter, the transmit information of multiple users is superimposed and
encoded in the power domain by intentionally adding the interference
information. At the receiver, Successive Interference Cancellation (SIC) is
used to separate the user information by sequentially detecting and canceling
the signal of each user. It is estimated that NOMA can improve the current
spectrum efficiency by 5$\sim$15 times [6, 7]. Therefore, the satellite-
terrestrial NOMA spectrum sharing can make one satellite frequency band
accommodate more users and thus greatly improve the communication capacity.
CR, based on software radio, allows the system to adaptively adjust transmit
parameters by sensing the current communication environment, so as to achieve
efficient spectrum resource utilization [8, 9]. It can share the spectrum
resources among the heterogeneous communication systems through spectrum
sensing and dynamic reconfiguration capability. As a secondary user (SU), the
CR system can opportunistically utilize the idle spectrum of primary user (PU)
or share the spectrum with the PU at a lower power [10, 11]. Satellite-
terrestrial CR spectrum sharing makes the satellite system and terrestrial
system utilize the same spectrum resources, which can effectively alleviate
the satellite spectrum shortage.
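As a rough illustration of the spectrum sensing step that underlies CR, the following sketch implements a basic energy detector; the threshold, noise power and PU signal model are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_detect(samples, threshold):
    """Declare the channel busy when the average sample energy exceeds threshold."""
    return np.mean(np.abs(samples) ** 2) > threshold

n = 4096
noise_power = 1.0
threshold = 1.5   # hypothetical threshold between noise-only and signal-plus-noise energy

# Channel idle: noise only. Channel busy: a unit-rate BPSK PU signal (power 2) plus noise.
idle = rng.normal(0.0, np.sqrt(noise_power), n)
busy = idle + np.sqrt(2.0) * np.sign(rng.normal(size=n))

assert not energy_detect(idle, threshold)   # idle energy ~ noise_power < threshold
assert energy_detect(busy, threshold)       # busy energy ~ noise_power + 2 > threshold
```

An SU would transmit only when the detector reports the channel idle, or fall back to low-power underlay access as described later.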
However, compared with spectrum sharing studies for terrestrial
networks, the related work on ISTCN remains insufficient. In [12], the
capacity of NOMA-uplink satellite network was analyzed, which has proved the
advantage of NOMA to improve the satellite communication capacity. In [13], a
joint resource allocation and network stability optimization was proposed to
maximize the long-term network utility of NOMA-downlink satellite system. In
[14], a regret minimization solution was put forward for PU and SU spectrum
access to the satellite resources in the presence of cognitive interferers. In
[15], a cooperative transmission strategy was proposed for cognitive satellite
networks, where the mobile users in the terrestrial network can help the
communication of the satellite network to improve its transmission
performance. Nevertheless, NOMA and CR for integrated satellite-terrestrial
spectrum sharing are less considered.
In this article, NOMA and CR based spectrum sharing for the ISTCN is proposed
to solve the problem of satellite spectrum scarcity. The contributions of the
article are summarized as follows. (1) The network model and network access
model for the ISTCN are proposed, which allow the satellite system and
terrestrial system to share the same spectrum through the integration of
satellite and terrestrial components; (2) The satellite-terrestrial NOMA
spectrum sharing is presented to let multiple users access the same satellite
spectrum by superposition coding in the power domain; (3) The satellite-terrestrial CR
spectrum sharing is proposed to make the satellite system and terrestrial
system share the same spectrum resource by suppressing their mutual
interferences; (4) By combining NOMA and CR, the satellite-terrestrial CR-NOMA
spectrum sharing is put forward to achieve full spectrum access by using both
the idle and busy spectrum.
## Integrated Satellite-Terrestrial Communication Networks
Contemporary satellite communication services are no longer competitors of
terrestrial cellular networks. Instead, they are often adopted to complement
cellular network services so as to provide seamless coverage. Terrestrial
cellular networks are suitable for areas with high user density in the urban
areas; however, they are typically less cost-effective in covering remote and
even isolated geographic areas. While satellite communications can provide
large area coverage at low cost, they have limitations in covering urban
areas due to shadowing effects. Therefore, ISTCN is believed
to be a suitable approach to achieve the global coverage with optimal cost.
The network model for the ISTCN is shown in Fig. 1, which is an integrated
satellite-terrestrial system composed of one or more Highly-Elliptical-Orbit
(HEO) and Low-Earth-Orbit (LEO) satellites and a terrestrial cellular system.
Both the terrestrial system and satellite system operate in the same frequency
band to ensure the global seamless coverage of the user terminals. In the
ISTCN, the satellite terminal and terrestrial terminal can communicate with
each other depending on the network switching between the satellite system and
cellular system; moreover, a dual-mode satellite terminal can choose either of
the two systems for communication by measuring the transmission cost.
Figure 1: Network model for ISTCN.
The network access model for spectrum sharing in ISTCN includes hybrid network
access (HNA) and combined network access (CNA), as shown in Fig. 2. In the
HNA, the user terminal and subscription are different, and the access network
and core network of each system are disjoint and linked together through the
public network. Therefore, the users may have good access to the two systems,
but there is no integration between them. The satellite system and terrestrial
system can adopt the same or different air interface technology depending on
the specific network scenario. In the HNA, however, the user only have one
terminal and one subscription, and the services of the two systems are almost
seamless switching. Therefore, the quality of service (QoS) of HNA is higher
due to the system integration. But the two systems have to adopt compatible
air interface technology and share the same frequency band.
Figure 2: Network access mode for ISTCN.
In the ISTCN, the satellite system and terrestrial system may coexist in the
same frequency band to alleviate the satellite spectrum scarcity. The existing
spectrum sharing methods are mainly divided into the following two categories.
Static spectrum sharing: The idea of static spectrum sharing is spatial
isolation, which can reuse the time, frequency and other resources through
orthogonal access manner in different spatial areas. However, it allocates a
specific spectrum resource to each user, which cannot meet the dynamic
spectrum demands of the users, so that some bands are overloaded while others
remain largely idle. In 6G, NOMA, as a new
static spectrum sharing approach based on power domain multiplexing, has been
proposed to allocate the same time-frequency resource to different users,
which can greatly improve the spectrum efficiency compared with 5G.
Dynamic spectrum sharing: By adopting CR technology, the communication system
can opportunistically use the underutilized frequency resources to achieve
better dynamic spectrum management. Satellite-terrestrial CR spectrum sharing
can realize the heterogeneous integration of satellite network and terrestrial
network and solve the problem of satellite spectrum shortage.
Therefore, NOMA and CR as efficient spectrum sharing technologies can make the
ISTCN achieve the interconnections between massive satellite terminals and
terrestrial terminals under the limited satellite spectrum resources.
## Satellite-Terrestrial NOMA Spectrum Sharing
The core idea of NOMA is to realize the multi-user multiplexing of a single
time-frequency resource block by introducing a new power domain dimension. At
the transmitter, the signals of different users are set with different power
levels, which are transmitted in the same resource block by superposition
coding. While at the receiver, the signals of different users are separated
and decoded by using SIC in the descending order of the power levels.
### NOMA Spectrum Sharing Model
The satellite-terrestrial NOMA spectrum sharing model is shown in Fig. 3,
where the terrestrial users access the satellite spectrum through NOMA. In the
satellite uplink, the data and channel state information (CSI) of the
terrestrial terminals are sent to the satellite gateway, which then groups the
users according to the CSI and allocates the maximum transmit power to each
user for superposition coding. The signals of the same grouping users are sent
to the satellite in the same frequency band. The satellite receiver uses SIC
to decode the signals of each user. If a user transmits stronger signal in a
better link, its signal will be decoded first. And the signal in a poor link
will be decoded from the remaining signals after subtracting the decoded
signals by SIC. In the satellite downlink, the satellite transmitter allocates
the power of each user according to the link quality, and the user in a poor
link is allocated larger power to ensure the receiving performance. The NOMA
signal is transmitted to each satellite terminal, which uses SIC to first
decode the signals with larger power and then decode its own signal from the
remaining signals.
Figure 3: NOMA based satellite-terrestrial spectrum sharing model.
### User NOMA Grouping
The terrestrial users can be divided into several groups, each of which is
assigned a separate frequency band for NOMA transmission. The NOMA grouping
can reduce the multi-user interference in decoding by decreasing the number of
users in the same frequency band. The advantage of NOMA is obvious only when
the users with significant channel differences are assigned to one group. In the
terrestrial NOMA, the physical distance between user and base station is used
as the grouping basis, whereby the center user and edge user within the
coverage of base station are usually assigned to one group. However, the
satellite communication channel is more complex than the terrestrial mobile
communication channel, and its path attenuation is not sensitive to the user’s
geographical location. Therefore, the distance grouping basis is no longer
applicable in the satellite communications. Satellite NOMA grouping needs to
fully consider other attenuation characteristics of the satellite channels
besides the free space loss, such as beam gain, shadow fading, multipath
fading, and rain fading. The channel fading difference of different
satellite users can be used as the grouping basis to eliminate the insensitive
path attenuation.
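The grouping rule suggested above (pair users whose effective channel fading differs the most) can be sketched as follows; the gain values are hypothetical, and sorting-and-pairing is just one simple heuristic consistent with the text, not the paper's prescribed algorithm:

```python
import numpy as np

# Hypothetical effective channel gains (dB) for eight satellite users, after
# accounting for beam gain, shadow fading, multipath fading and rain fading.
gains_db = np.array([-2.0, 7.5, 1.0, 12.3, -6.4, 4.2, 9.8, -0.5])

# Group users so that each NOMA pair has a large channel difference:
# sort by gain and pair the k-th weakest with the k-th strongest.
order = np.argsort(gains_db)                 # weakest ... strongest
half = len(order) // 2
pairs = list(zip(order[:half], order[::-1][:half]))

# Every pair mixes a weak-channel user with a strong-channel user.
for weak, strong in pairs:
    assert gains_db[strong] - gains_db[weak] > 0
```

With these illustrative numbers the first pair combines the most faded user with the least faded one, which is exactly the disparity that makes power-domain SIC effective.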
### Cooperative Satellite-Terrestrial NOMA
Cooperative NOMA is mostly used in the downlink of a communication system,
whereby the user with a good channel can help to decode the information of the
user with a poor channel, so as to enhance its receiving performance. In the
ISTCN, cooperative NOMA can be carried out among different satellite and
terrestrial terminals to improve the transmission performance of the users in
fading satellite channels. As shown in Fig. 4, the satellite terminals in
fading channels can form a NOMA group with either the satellite terminals in
good channels or the terrestrial terminals. The satellite transmits the NOMA
signal to the satellite terminals and the terrestrial base station. The
satellite terminals in good channels first decode all the signals by SIC, and
then use decode-and-forward (DF) protocol to send the decoded signals to the
satellite terminals in fading channels. However, the terrestrial terminals
cannot obtain the prior information of the satellite terminals and thus are
unable to decode their signals. Therefore, the terrestrial terminals first
decode their own signals and subtract the decoded signals from the received
signal, and then use amplify-and-forward (AF) protocol to send the remaining
signal to the satellite terminals in fading channels.
Figure 4: NOMA-based cooperative spectrum sharing.
## Satellite-Terrestrial CR Spectrum Sharing
### CR Spectrum Sharing Model
Using CR technology, the satellite communication system can flexibly share
spectrum resources with the terrestrial communication system, which can
improve the spectrum utilization via opportunistically accessing the frequency
bands of the licensed users. The typical CR spectrum sharing
scenarios of the ISTCN can be divided into two categories. One is licensed
satellite system and terrestrial CR system, and the other is licensed
terrestrial system and satellite CR system.
As shown in Fig. 5, in the satellite uplink, the satellite CR system
communicates in the terrestrial channels by spectrum sharing technology. If
the terrestrial user is not using the channel, the satellite user can transmit
data with its maximum power. However, the satellite user must always sense the
channel state. If the presence of the terrestrial user in the channel has been
detected, the satellite user has to switch to another idle channel. However,
if there is no idle channels, the satellite user may continue to use this
channel but cause harmful interference to the terrestrial system. In the
satellite downlink, the interference from the terrestrial user will also
decrease the satellite communication performance. To achieve low-interference
spectrum sharing, the satellite user must detect the spectrum occupation state
of the terrestrial system accurately and select an idle channel for
transmissions. In addition, the satellite system can also access the busy
spectrum by controlling its transmit power so that its power does not exceed
the maximum interference tolerated by the terrestrial system.
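The access logic just described (transmit at full power when the channel is idle, switch channels when the licensed user returns, otherwise fall back to power-controlled underlay access) can be sketched as a simple decision function. This is only an illustrative sketch; the function and channel names are our assumptions, not part of any standard.

```python
def cr_access_decision(primary_detected, idle_channels):
    """Channel-access decision for a satellite CR user (illustrative sketch).

    primary_detected -- True if the licensed terrestrial user is sensed on the
                        current channel.
    idle_channels    -- list of alternative channels sensed to be idle.
    """
    if not primary_detected:
        # Channel is free: transmit with maximum power.
        return "transmit_max_power"
    if idle_channels:
        # Licensed user present: move to the first available idle channel.
        return ("switch_to", idle_channels[0])
    # No idle channel left: stay, but cap power below the interference threshold.
    return "underlay_with_power_control"
```
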
Figure 5: CR spectrum sharing model.
### Interference Suppression Technology
The premise of CR spectrum sharing is that the interference between the
satellite system and terrestrial system does not affect their normal
communications. Some interference suppression technologies are introduced as
follows.
Interference cognition: Interference cognition can detect the interference
holes in the surrounding electromagnetic environment, identify the
interference and estimate the channel quality, which can provide the basis for
the anti-interference decision. In terrestrial communication, the
interference mostly occurs in the channel from the CR system to the licensed
system, and interference cognition is usually performed around the licensed
receiver. However, there may be two-way interference in the ISTCN, so
interference cognition should be performed around both the CR system and the
licensed system. For example, interference cognition for the ISTCN can be
performed around both the earth station and the satellite spot beam.
Power control: The CR system combines the channel state, receiver signal-to-noise
ratio (SNR) and interference information to flexibly adjust its transmit power
to avoid interfering with the licensed user in the same frequency band. On the
one hand, the transmit power can be minimized to save the energy of the
satellite terminal on the premise of ensuring the communication capacity. On
the other hand, the power can be optimized to maximize the communication
capacity of the ISTCN provided that the interference threshold is not
exceeded.
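Both power-control objectives can be captured in a few lines: since Shannon capacity is increasing in transmit power, the capacity-maximizing choice under an interference constraint is simply the largest admissible power. The names, the single-gain interference model, and the unit-noise Shannon formula below are illustrative assumptions, not taken from the article.

```python
import math

def underlay_power(p_max, i_th, g_to_licensed):
    """Largest transmit power whose interference at the licensed receiver
    (p * g_to_licensed) stays below the tolerated threshold i_th."""
    return min(p_max, i_th / g_to_licensed)

def capacity(p, g_own, noise=1.0):
    """Shannon capacity (bits/s/Hz) of the CR link at transmit power p."""
    return math.log2(1 + p * g_own / noise)
```

For example, with `p_max = 10`, `i_th = 2` and a gain of `0.5` toward the licensed receiver, the admissible power is 4, and the resulting capacity follows directly from the second function.
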
Satellite beamforming: Beamforming technology can transmit the target signal
along the desired direction by weighting the received signals of antenna array
elements. Beamforming allows multiple users to utilize the same frequency band
in the same geographical area at the same time, which makes it possible to
deploy dense networks with less interference in unintended directions. It
can be used as an interference cancellation technology in the transmitter or
receiver of the satellite, which can realize the spectrum sharing of satellite
system and terrestrial system in the angle domain.
## Satellite-Terrestrial CR-NOMA Spectrum Sharing
CR can realize the spectrum coexistence of satellite system and terrestrial
system, while NOMA can achieve the sharing access of limited satellite
spectrum by massive users. Therefore, by combining CR and NOMA, the satellite
spectrum utilization can be further improved. Satellite terminals can access
both the idle and busy spectrum via CR-NOMA, which will achieve high-efficient
full spectrum access.
CR-NOMA in idle spectrum: Multiple satellite terminals can access the idle
spectrum by NOMA, which will not bring any interference to the terrestrial
system. However, the available idle frequency bands are usually discontinuous
and fragmented, making it difficult to meet the broadband access needs of
massive satellite users. Spectrum aggregation technology has been put forward to
aggregate discrete idle frequency bands into broadband spectrum to support
large-bandwidth data transmissions. Non-continuous Orthogonal Frequency
Division Multiplexing (NC-OFDM) can realize the subcarrier aggregation by
zeroing the non-idle subcarriers according to their bandwidth and locations.
Therefore, by introducing NC-OFDM into NOMA, the superimposed broadband signal
can be transmitted over the aggregated idle subcarriers.
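The NC-OFDM aggregation step — mapping the superimposed NOMA symbols onto idle subcarriers only, while zeroing the busy ones — can be sketched as follows; the function name and the boolean-mask representation are our assumptions for illustration.

```python
def nc_ofdm_map(symbols, idle_mask):
    """Place data symbols on idle subcarriers; zero the non-idle subcarriers.

    symbols   -- iterable of data symbols to transmit.
    idle_mask -- per-subcarrier booleans, True where the subcarrier is idle.
    """
    out, it = [], iter(symbols)
    for idle in idle_mask:
        # Idle subcarriers carry the next symbol; busy ones are nulled.
        out.append(next(it) if idle else 0)
    return out
```
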
CR-NOMA in busy spectrum: Satellite terminal and terrestrial terminal can
share the same spectrum by NOMA, but interfere with each other. To guarantee
the terrestrial communication performance, the satellite signals are first
decoded with the terrestrial signals as the noise. Then the decoded satellite
signals are cancelled from the received NOMA signal by SIC, and the remaining
signal is used to decode the terrestrial signals without the interference from
the satellite. However, if the transmit power of terrestrial terminals is
large enough or the terrestrial communication performance is ignored, the
terrestrial signals can be first decoded to guarantee the decoding performance
of satellite signals.
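The SIC order described above can be made concrete with a toy two-signal SINR model: the satellite signal is decoded first with the terrestrial signal treated as noise, then cancelled, so the terrestrial signal is decoded interference-free. The powers, gains, and unit-noise model are illustrative assumptions, not parameters from the article.

```python
import math

def sic_rates(p_sat, p_ter, g_sat, g_ter, noise=1.0):
    """Achievable rates (bits/s/Hz) under the decoding order in the text:
    satellite first (terrestrial signal as noise), terrestrial second
    (after the satellite signal has been cancelled by SIC)."""
    sinr_sat = p_sat * g_sat / (p_ter * g_ter + noise)
    sinr_ter = p_ter * g_ter / noise  # interference-free after SIC
    return math.log2(1 + sinr_sat), math.log2(1 + sinr_ter)
```

Swapping the decoding order (terrestrial first) corresponds to the last sentence of the paragraph: the noise term would then move to the terrestrial SINR instead.
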
Figure 6: CR-NOMA spectrum sharing.
Though CR-NOMA can make the ISTCN achieve full spectrum access, the multi-user
interference caused by NOMA and the satellite-terrestrial interference caused
by CR may decrease the NOMA decoding performance. The satellite
terminals and terrestrial terminals should be grouped appropriately and the
decoding order must be properly arranged according to the power and service
requirements of the users.
## Open Research Issues and Challenges
This article has introduced some fundamental works on spectrum sharing for
ISTCN, such as ISTCN network model, satellite-terrestrial NOMA spectrum
sharing, satellite-terrestrial CR spectrum sharing and satellite-terrestrial
CR-NOMA spectrum sharing. However, there are still some open research issues
and challenges to be discussed in the future.
Satellite spectrum sensing: The premise of satellite-terrestrial spectrum
sharing without interference is to perform accurate spectrum sensing. Due to
the limited satellite transmission capacity and the significant signal
attenuation caused by atmospheric effects such as shadowing and rain fading,
it is a great challenge to accurately sense satellite spectrum. In addition,
the spectrum sensing in LEO satellite communication also faces the problems of
mobility and available frequency shortage.
Fair satellite NOMA grouping: Beam edge users are located in the overlapping
area of different satellite beams and will suffer great inter-beam
interference. Therefore, multi-user interference and inter-beam interference
will seriously reduce the decoding performance of the edge users. It is
necessary to propose a fair satellite NOMA grouping method to allocate low-
power users and fewer users for the groups of edge users, so as to decrease
the NOMA decoding interference.
Satellite NOMA receiver design: Due to the shadowing, multipath fading, rain
fading and other channel interference of satellite communication, it is
difficult to carry out perfect SIC at the satellite terminal. When
restructuring and canceling the inaccurate decoded signals, the decoding error
will be transferred to the subsequent signal demodulation, which will decrease
the decoding performance. Therefore, to guarantee SIC performance, the
satellite receiver design should adopt new signal processing technologies
to suppress the interference and noise, such as adaptive filtering, wavelet
transforms and weak signal detection.
Integrated satellite-6G network: 6G can support high-capacity, multi-service
and high-speed wireless communications. Integrated satellite-6G network can
meet the global coverage of mobile Internet and the ubiquitous network access
of all kinds of users. To better integrate with the terrestrial 6G network,
the satellite segment needs to reuse all the functional modules of 6G core
network, such as the Internet interface, quality of service, user mobility and
security.
## Concluding Remarks and Future Works
In this article, we propose NOMA- and CR-based spectrum sharing schemes for
ISTCN to improve the satellite spectrum utilization by allowing the satellite
communication to share the spectrum licensed to the terrestrial communication.
Satellite-terrestrial spectrum sharing needs to sense the terrestrial
spectrum state and suppress the mutual interference between satellite system
and terrestrial system. Some interference suppression technologies for
satellite-terrestrial spectrum sharing are also introduced. By combining CR
and NOMA, the ISTCN can use CR-NOMA to achieve full spectrum sharing by
accessing both the idle and busy spectrum. Finally, some promising researches
and challenges for the ISTCN have been discussed.
Đặng Võ Phúc
Faculty of Education Studies, University of Khanh Hoa, Nha Trang, Khanh Hoa, Vietnam
Email: <EMAIL_ADDRESS>
# On modules over the mod 2 Steenrod algebra and hit problems
Đặng Võ Phúc
###### Abstract
Let us consider the prime field of two elements,
$\mathbb{F}_{2}\equiv\mathbb{Z}_{2}.$ It is well-known that the classical "hit
problem" for a module over the mod 2 Steenrod algebra $\mathscr{A}$ is an
interesting and important open problem of Algebraic topology, which asks a
minimal set of generators for the polynomial algebra
$\mathcal{P}_{m}:=\mathbb{F}_{2}[x_{1},x_{2},\ldots,x_{m}]$, regarded as a
connected unstable $\mathscr{A}$-module on $m$ variables $x_{1},\ldots,x_{m},$
each of degree 1. The algebra $\mathcal{P}_{m}$ is the
$\mathbb{F}_{2}$-cohomology of the product of $m$ copies of the Eilenberg-
MacLane complex $K(\mathbb{F}_{2},1).$ Although the hit problem has been
thoroughly studied for more than 3 decades, solving it remains a mystery for
$m\geq 5.$
Our intent in this work is to study the hit problem in five
variables. More precisely, we develop our previous work [Commun. Korean Math.
Soc. 35 (2020), 371-399] on the hit problem for the $\mathscr{A}$-module
$\mathcal{P}_{5}$ in a degree of the generic form
$n_{t}:=5(2^{t}-1)+18.2^{t},$ for any non-negative integer $t.$ An efficient
approach to solving this problem is presented. Two applications of this
study are to determine the dimension of $\mathcal{P}_{6}$ in the generic
degree $5(2^{t+4}-1)+n_{1}.2^{t+4}$ for all $t>0$ and to describe the modular
representations of the general linear group of rank 5 over $\mathbb{F}_{2}.$
As a corollary, the cohomological "transfer", defined by William Singer [Math.
Z. 202 (1989), 493-523], is an isomorphism in bidegree $(5,5+n_{0}).$ Singer’s
transfer is one of the relatively efficient tools to approach the structure of
mod-2 cohomology of the Steenrod algebra.
Mathematics Subject Classification (2010) 13A50 $\cdot$ 55Q45 $\cdot$ 55S10
$\cdot$ 55S05 $\cdot$ 55T15 $\cdot$ 55R12
###### Keywords:
Adams spectral sequences $\cdot$ Steenrod algebra $\cdot$ Hit problem $\cdot$ Algebraic transfer
## 1 Introduction
Let $\mathcal{O}^{S}(i,\mathbb{F}_{2},\mathbb{F}_{2})$ denote the set of all
stable cohomology operations of degree $i,$ with coefficient in the prime
field $\mathbb{F}_{2}.$ Then, the $\mathbb{F}_{2}$-algebra
$\mathscr{A}:=\bigoplus_{i\geq
0}\mathcal{O}^{S}(i,\mathbb{F}_{2},\mathbb{F}_{2})$ is called the mod 2
Steenrod algebra. In other words, the algebra $\mathscr{A}$ is the algebra of
stable operations on the mod 2 cohomology. In J.M , Milnor observed that this
algebra is also a graded connected cocommutative Hopf algebra over
$\mathbb{F}_{2}.$ In some cases, the resulting $\mathscr{A}$-module structure
on $H^{*}(X,\mathbb{F}_{2})$ provides additional information about CW-
complexes $X;$ for instance (see section three for a detailed proof), the CW-
complexes $\mathbb{C}P^{4}/\mathbb{C}P^{2}$ and
$\mathbb{S}^{6}\vee\mathbb{S}^{8}$ have cohomology rings that agree as
graded commutative $\mathbb{F}_{2}$-algebras, but are different as modules
over $\mathscr{A}.$ Since then, the Steenrod algebra has been widely studied by
mathematicians whose interests range from algebraic topology and homotopy
theory to manifold theory, combinatorics, representation theory, and more. It
is well-known that the $\mathbb{F}_{2}$-cohomology of the Eilenberg-MacLane
complex $K(\mathbb{F}_{2},1)$ is isomorphic to $\mathbb{F}_{2}[x],$ the
polynomial ring on one generator of degree $1$. Hence, based upon the Künneth
formula for cohomology, we have an isomorphism of $\mathbb{F}_{2}$-algebras
$\mathcal{P}_{m}:=H^{*}((K(\mathbb{F}_{2},1))^{\times
m},\mathbb{F}_{2})\cong\underset{\text{ $m$
times}}{\underbrace{\mathbb{F}_{2}[x_{1}]\otimes_{\mathbb{F}_{2}}\mathbb{F}_{2}[x_{2}]\otimes_{\mathbb{F}_{2}}\cdots\otimes_{\mathbb{F}_{2}}\mathbb{F}_{2}[x_{m}]}}\cong\mathbb{F}_{2}[x_{1},\ldots,x_{m}],$
where $x_{i}\in H^{1}((K(\mathbb{F}_{2},1))^{\times m},\mathbb{F}_{2})$ for
every $i.$ Since $\mathcal{P}_{m}$ is the cohomology of a CW-complex, it is
equipped with a structure of unstable module over $\mathscr{A}.$ It has been
known (see also S.E ) that $\mathscr{A}$ is spanned by the Steenrod squares
$Sq^{i}$ of degree $i$ for $i\geq 0$ and that the action of $\mathscr{A}$ on
$\mathcal{P}_{m}$ is depicted as follows:
$\begin{array}[]{ll}Sq^{i}(x_{t})&=\left\\{\begin{array}[]{lll}x_{t}&\mbox{if}&i=0,\\\
x_{t}^{2}&\mbox{if}&i=1,\ \ \mbox{({the instability condition})},\\\
0&\mbox{if}&i>1,\end{array}\right.\\\ Sq^{i}(FG)&=\sum_{0\leq\alpha\leq
i}Sq^{\alpha}(F)Sq^{i-\alpha}(G),\ \mbox{for all $F,\ G\in\mathcal{P}_{m}$}\ \
(\mbox{{the Cartan formula})}.\end{array}$
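The rules above determine the action of every $Sq^{i}$ on all of $\mathcal{P}_{m}$: iterating the Cartan formula together with the one-variable case gives $Sq^{i}(x^{u})=\binom{u}{i}x^{u+i}$, with the binomial coefficient taken mod 2 — odd exactly when the binary digits of $i$ are dominated by those of $u$, by Lucas' theorem. A short computational sketch (the representation of polynomials as sets of exponent tuples over $\mathbb{F}_{2}$ is our choice for illustration):

```python
def binom_mod2(n, k):
    # Lucas' theorem: C(n, k) is odd iff every binary digit of k is <= that of n.
    return 1 if (n & k) == k else 0

def sq(i, exps):
    """Sq^i of the monomial x_1^{u_1}...x_m^{u_m}, given as a tuple of
    exponents; the result is a set of exponent tuples (a polynomial over F_2)."""
    result = set()

    def distribute(pos, remaining, acc):
        # Iterated Cartan formula: split i as i_1 + ... + i_m over the variables.
        if pos == len(exps):
            if remaining == 0:
                result.symmetric_difference_update({tuple(acc)})  # add mod 2
            return
        u = exps[pos]
        for j in range(remaining + 1):
            if binom_mod2(u, j):  # Sq^j(x^u) = C(u, j) x^{u+j} mod 2
                distribute(pos + 1, remaining - j, acc + [u + j])

    distribute(0, i, [])
    return result
```

For instance, `sq(1, (1, 1))` gives `{(2, 1), (1, 2)}`, matching $Sq^{1}(x_{1}x_{2})=x_{1}^{2}x_{2}+x_{1}x_{2}^{2}$.
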
It is to be noted that since $Sq^{\deg(F)}(F)=F^{2}$ for any
$F\in\mathcal{P}_{m}$, the polynomial ring $\mathcal{P}_{m}$ is also an
unstable $\mathscr{A}$-algebra. Let $GL_{m}:=GL(m,\mathbb{F}_{2})$ denote the
general linear group of degree $m$ over $\mathbb{F}_{2}.$ This group, which for
$m\geq 2$ can be generated by two elements (see Waterhouse Waterhouse
), acts on $\mathcal{P}_{m}$ by matrix substitution. So, in addition to
its $\mathscr{A}$-module structure, $\mathcal{P}_{m}$ is also a (right)
$\mathbb{F}_{2}GL_{m}$-module. The classical "hit problem" for the algebra
$\mathscr{A}$, which is concerned with seeking a minimal set of
$\mathscr{A}$-generators for $\mathcal{P}_{m}$, has been initiated in a
variety of contexts by Peterson F.P , Priddy S.P , Singer W.S2 , and Wood R.W
. The structure of modules over $\mathscr{A}$ and the hit problem are currently
among the central subjects in Algebraic topology and have been intensively
studied by many authors, such as Brunetti and collaborators Brunetti1
; Brunetti2 , Crabb-Hubbuck C.H , Inoue M.I1 ; M.I2 , Janfada-Wood J.W ; J.W2
, Janfada Janfada1 ; Janfada2 , Kameko M.K , Mothebe-Uys M.M , Mothebe M.M2 ,
Pengelley-William P.W , the present author and N. Sum P.S1 ; P.S2 ; D.P1 ;
D.P2 ; D.P3 ; D.P6 ; D.P7 ; N.S1 ; N.S2 ; N.S3 , Walker-Wood W.W ; W.W2 , etc.
As is known, with $\mathbb{F}_{2}$ regarded as an $\mathscr{A}$-module concentrated
in degree 0, solving the hit problem amounts to determining an $\mathbb{F}_{2}$-basis
for the space of indecomposables, or "unhit" elements, $Q^{\otimes
m}:=\mathbb{F}_{2}\otimes_{\mathscr{A}}\mathcal{P}_{m}=\mathcal{P}_{m}/\overline{\mathscr{A}}\mathcal{P}_{m}$
where $\overline{\mathscr{A}}$ is the positive degree part of $\mathscr{A}$.
It is well-known that the action of $GL_{m}$ and the action of $\mathscr{A}$
on $\mathcal{P}_{m}$ commute. So, there is an induced action of $GL_{m}$ on
$Q^{\otimes m}.$ The structure of $Q^{\otimes m}$ has been treated for $m\leq
4$ by Peterson F.P , Kameko M.K and Sum N.S1 . The general case is an
interesting open problem. Most notably, the study of this space plays a vital
role in describing the $E^{2}$-term of the Adams spectral sequence (Adams SS),
${\rm Ext}_{\mathscr{A}}^{m,m+*}(\mathbb{F}_{2},\mathbb{F}_{2})$ via the
$m$-th Singer cohomological "transfer" W.S1 . This transfer is a linear map
$Tr^{\mathscr{A}}_{m}:(\mathbb{F}_{2}\otimes_{GL_{m}}P_{\mathscr{A}}((\mathcal{P}_{m})^{*}))_{n}\to{\rm
Ext}_{\mathscr{A}}^{m,m+n}(\mathbb{F}_{2},\mathbb{F}_{2})=H^{m,m+n}(\mathscr{A},\mathbb{F}_{2}),$
from the subspace of all $\overline{\mathscr{A}}$-annihilated elements to the
$E^{2}$-term of the Adams SS. Here
$(\mathcal{P}_{m})^{*}=H_{*}((K(\mathbb{F}_{2},1))^{\times m},\mathbb{F}_{2})$
and $\mathbb{F}_{2}\otimes_{GL_{m}}P_{\mathscr{A}}((\mathcal{P}_{m})^{*})$ are
the dual of $\mathcal{P}_{m}$ and $(Q^{\otimes m})^{GL_{m}},$ respectively,
where $(Q^{\otimes m})^{GL_{m}}$ denotes the space of $GL_{m}$-invariants. A
natural question arises: Why do we need to calculate the Adams $E^{2}$-term?
The answer is that it is involved in determining the stable homotopy groups of
spheres. These groups are fundamental and interesting, yet
they are still not fully understood. Therefore, the clarification
of these problems is an important task of Algebraic topology. It has been
shown (see J.B , W.S1 ) that the algebraic transfer is highly nontrivial, more
precisely, that $Tr^{\mathscr{A}}_{m}$ is an isomorphism for $0<m<4$ and that
the "total" transfer $\bigoplus_{m\geq 0}Tr^{\mathscr{A}}_{m}:\bigoplus_{m\geq
0}(\mathbb{F}_{2}\otimes_{GL_{m}}P_{\mathscr{A}}((\mathcal{P}_{m})^{*}))_{n}\to\bigoplus_{m\geq
0}{\rm Ext}_{\mathscr{A}}^{m,m+n}(\mathbb{F}_{2},\mathbb{F}_{2})$ is a
homomorphism of bigraded algebras with respect to the product by concatenation
in the domain and the usual Yoneda product for the Ext group. Minami’s works
N.M1 ; N.M2 have shown the usefulness of the Singer transfer and the hit
problem for surveying the Kervaire invariant one problem. This problem, which
is a long-standing open topic in Algebraic topology, asks when there are
framed manifolds with Kervaire invariant one. (Note that a framing on a closed
smooth manifold $M^{n}$ is a trivialization of the normal bundle $\nu(M,i)$ of
some smooth embedding $i:M\hookrightarrow\mathbb{R}^{n+*}.$ Here $\nu(M,i)$ is
defined to be a quotient of the pullback of the tangent bundle of
$\mathbb{R}^{n+*}$ by the sub-bundle given by the tangent bundle of $M.$ So,
$\nu(M,i)$ is an $*$-dimensional real vector bundle over $M^{n}.$ For more
details, we refer the reader to Snaith .) Framed manifolds of Kervaire
invariant one have been constructed in dimension $2^{k}-2$ for $2\leq k\leq
6.$ In 2016, by using $C_{8}$-equivariant homotopy theory, Hill, Hopkins, and
Ravenel proved in their surprising work Hill that the Kervaire invariant is
0 in dimension $2^{k}-2$ for $k\geq 8.$ Up to present, it remains undetermined
for $k=7$ (or dimension $126$) and this has the status of a hypothesis by
Snaith Snaith .
Returning to Singer’s transfer, in higher homological degrees, the works B.H.H ,
Ha , Hung , T.N2 , and H.Q determined completely the image of
$Tr_{4}^{\mathscr{A}}$. The authors show that ${\rm Im}(Tr_{4}^{\mathscr{A}})$ contains
all the elements of the families $\\{d_{t}|\,t\geq 0\\},\ \\{e_{t}|\,t\geq
0\\}$, $\\{f_{t}|\,t\geq 0\\}$, and $\\{p_{t}|\,t\geq 0\\}$, but none from the
families $\\{g_{t+1}|\,t\geq 0\\}$, $\\{D_{3}(t)|\,t\geq 0\\}$, and
$\\{p^{\prime}_{t}|\,t\geq 0\\}$. Remarkably, since the family
$\\{g_{t+1}|\,t\geq 0\\}\not\subset{\rm Im}(Tr_{4}^{\mathscr{A}}),$ a question
of Minami N.M2 concerning the so-called new doomsday conjecture was refuted. In
Hung , Hưng indicated that $Tr_{4}^{\mathscr{A}}$ is not an isomorphism in
infinitely many degrees. In particular, from preliminary calculations in W.S1
, Singer proposed the following.
###### Conjecture 1.1
The transfer homomorphism is a monomorphism in every rank $m>0.$
We have seen above that $Tr_{m}^{\mathscr{A}}$ is an isomorphism for $m<4,$
and so the conjecture holds in these ranks $m.$ Our recent work D.P14 has
shown that it is also true for $m=4,$ but the answer to the general case
remains a mystery, even in the case of $m=5$ with the help of computer
algebra. It is known that, in ranks $\leq 4$, the calculations of Singer W.S1 , Hà
Ha , and Nam T.N2 tell us that the non-zero elements $h_{t}\in{\rm
Ext}_{\mathscr{A}}^{1,2^{t}}(\mathbb{F}_{2},\mathbb{F}_{2}),\,e_{t}\in{\rm
Ext}_{\mathscr{A}}^{4,2^{t+4}+2^{t+2}+2^{t}}(\mathbb{F}_{2},\mathbb{F}_{2}),\,f_{t}\in{\rm
Ext}_{\mathscr{A}}^{4,2^{t+4}+2^{t+2}+2^{t+1}}(\mathbb{F}_{2},\mathbb{F}_{2}),$
for all $t\geq 0,$ are detected by the cohomological transfer. In rank 5,
based on invariant theory, Singer W.S1 gives an explicit element in ${\rm
Ext}_{\mathscr{A}}^{5,5+9}(\mathbb{F}_{2},\mathbb{F}_{2}),$ namely $Ph_{1}$,
that is not detected by $Tr_{5}^{\mathscr{A}}.$ In general, directly calculating
the value of $Tr_{m}^{\mathscr{A}}$ on any non-zero element is difficult.
Moreover, there is no general rule for that, and so, each computation is
important on its own. By this and the above results, in the present text, we
would like to investigate the family $\\{h_{t}f_{t}=h_{t+1}e_{t}\in{\rm
Ext}_{\mathscr{A}}^{5,5+(23.2^{t}-5)}(\mathbb{F}_{2},\mathbb{F}_{2})|\,t\geq
0\\},$ and Singer’s conjecture for $m=5$ in degree
$5(2^{t}-1)+18.2^{t}=23.2^{t}-5$ with $t=0.$ To do this, we use a basis of the
indecomposables $Q^{\otimes 5}$ in degree $18=5(2^{0}-1)+18.2^{0}$, which is
given by our previous work D.P2 (see Proposition 2.7 below). In addition, the
main goal of this work is to also compute explicitly the dimension of
$Q^{\otimes 5}$ in degree $5(2^{t}-1)+18.2^{t}$ for the cases $t\geq 1.$ Then,
Singer’s conjecture for $m=5$ and these degrees will be discussed at the end
of section two. We hope that our results will be helpful in formulating
general solutions.
## 2 Statement of results
Some notes. Throughout this paper, let us write
$\begin{array}[]{ll}\vskip 6.0pt plus 2.0pt minus
2.0pt(\mathcal{P}_{m})_{n}&:=\langle\\{f\in\mathcal{P}_{m}|\,\mbox{$f$ is a
homogeneous polynomial of degree $n$}\\}\rangle,\\\ Q^{\otimes
m}_{n}&:=\langle\\{[f]\in Q^{\otimes
m}|\,f\in(\mathcal{P}_{m})_{n}\\}\rangle,\end{array}$
which are $\mathbb{F}_{2}GL_{m}$-submodules of $\mathcal{P}_{m}$ and
$Q^{\otimes m},$ respectively. So $\mathcal{P}_{m}=\bigoplus_{n\geq
0}(\mathcal{P}_{m})_{n}$ and $Q^{\otimes m}=\bigoplus_{n\geq 0}Q^{\otimes
m}_{n}.$ Recall that to solve the hit problem of three variables, Kameko M.K
constructed an $\mathbb{F}_{2}GL_{m}$-module epimorphism:
$\begin{array}[]{ll}(\widetilde{Sq^{0}_{*}})_{(m,m+2n)}:Q^{\otimes
m}_{m+2n}&\longrightarrow Q^{\otimes m}_{n}\\\
\big[\prod_{1\leq j\leq
m}x_{j}^{a_{j}}\big]&\longmapsto\left\\{\begin{array}[]{ll}\big[\prod_{1\leq
j\leq m}x_{j}^{\frac{a_{j}-1}{2}}\big]&\text{if $a_{j}$ odd,
$j=1,2,\ldots,m$},\\\ 0&\text{otherwise},\end{array}\right.\end{array}$
which induces the homomorphism $\widetilde{Sq^{0}_{*}}:(Q^{\otimes
m}_{m+2n})^{GL_{m}}\to(Q^{\otimes m}_{n})^{GL_{m}}.$ Since $\mathscr{A}$ is a
cocommutative Hopf algebra, there exist squaring operations $Sq^{i}:{\rm
Ext}_{\mathscr{A}}^{m,m+n}(\mathbb{F}_{2},\mathbb{F}_{2})\to{\rm
Ext}_{\mathscr{A}}^{m+i,2m+2n}(\mathbb{F}_{2},\mathbb{F}_{2}),$ which share
most of the properties with $Sq^{i}$ on the cohomology of spaces (see May ),
but the classical $Sq^{0}$ is not the identity in general. Remarkably, this
$Sq^{0}$ commutes with the dual of $\widetilde{Sq^{0}_{*}}$ through the Singer
transfer (see J.B , N.M2 ). The reader who is familiar with Kameko’s
$(\widetilde{Sq^{0}_{*}})_{(m,m+2n)}$ will probably agree that this map is
very useful in solving the hit problem. Indeed, Kameko M.K showed that if
$m=\xi(n)={\rm min}\\{\gamma\in\mathbb{N}:\ n=\sum_{1\leq
i\leq\gamma}(2^{d_{i}}-1),\,d_{i}>0,\forall i,\,1\leq i\leq\gamma\\},$ then
$(\widetilde{Sq^{0}_{*}})_{(m,m+2n)}$ is an isomorphism of
$\mathbb{F}_{2}GL_{m}$-modules. This statement and Wood’s work R.W together
are sufficient to determine $Q^{\otimes m}_{n}$ in each degree $n$ of the
special "generic" form $n=r(2^{t}-1)+d.2^{t},$ whenever $0<\xi(d)<r<m,$ and
$t\geq 0$ (see also D.P6 ).
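Both Kameko's map and the minimality function $\xi$ are easy to compute explicitly. The sketch below uses the observation that $n=\sum_{1\leq i\leq\gamma}(2^{d_{i}}-1)$ with all $d_{i}>0$ holds iff $n+\gamma$ is even and $(n+\gamma)/2$ is a sum of $\gamma$ powers of 2, i.e. $\alpha((n+\gamma)/2)\leq\gamma\leq(n+\gamma)/2$, where $\alpha$ counts binary digits; the representation and function names are our assumptions.

```python
def kameko(exps):
    """Kameko's (Sq^0_*): [x_1^{a_1}...x_m^{a_m}] -> [x_1^{(a_1-1)/2}...]
    when every exponent is odd; otherwise the class maps to 0 (None here)."""
    if all(a % 2 == 1 for a in exps):
        return tuple((a - 1) // 2 for a in exps)
    return None

def xi(n):
    """Smallest gamma such that n is a sum of gamma terms (2^{d_i} - 1),
    each d_i > 0; assumes n >= 1."""
    g = 1
    while True:
        m, r = divmod(n + g, 2)
        # n + g must split as gamma powers of 2 (each >= 2), i.e. n + g even
        # and alpha((n+g)/2) <= gamma <= (n+g)/2.
        if r == 0 and bin(m).count("1") <= g <= m:
            return g
        g += 1
```

For instance, `xi(23 * 2**t - 5)` evaluates to 5 for $t=2,3,4$ but to 3 at $t=1$, consistent with Remark 2.1 below requiring $t>1$.
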
As we mentioned at the beginning, the hit problem was completely solved for
$m\leq 4.$ Very little information is known for $m=5$ and degrees $n$ given
above. At least, it has been surveyed by the present writer D.P6 for
$(r,d,t)\in\\{(5,18,0),(5,8,t)\\}.$ We now extend this to the case
$(r,d,t)=(5,18,t),$ in which $t$ is an arbitrary non-negative integer. We start
with a useful remark.
###### Remark 2.1
It can be easily seen that
$5(2^{t}-1)+18.2^{t}=2^{t+4}+2^{t+2}+2^{t+1}+2^{t-1}+2^{t-1}-5,$ and so
$\xi(5(2^{t}-1)+18.2^{t})=5$ for any $t>1.$ This implies that the iterated
Kameko map
$((\widetilde{Sq^{0}_{*}})_{(5,5(2^{t}-1)+18.2^{t})})^{t-1}:Q^{\otimes
5}_{5(2^{t}-1)+18.2^{t}}\to Q^{\otimes 5}_{5(2^{1}-1)+18.2^{1}}$
is an isomorphism, for all $t\geq 1,$ and therefore, it is enough to determine
$Q^{\otimes 5}_{5(2^{t}-1)+18.2^{t}}$ for $t\in\\{0,1\\}$. The case $t=0$ has
explicitly been computed by us in D.P3 . When $t=1,$ because Kameko’s
homomorphism
$(\widetilde{Sq^{0}_{*}})_{(5,5(2^{1}-1)+18.2^{1})}:Q^{\otimes
5}_{5(2^{1}-1)+18.2^{1}}\to Q^{\otimes 5}_{5(2^{0}-1)+18.2^{0}}$
is an epimorphism, we have an isomorphism
$Q^{\otimes 5}_{5(2^{1}-1)+18.2^{1}}\cong{\rm
Ker}((\widetilde{Sq^{0}_{*}})_{(5,5(2^{1}-1)+18.2^{1})})\bigoplus Q^{\otimes
5}_{5(2^{0}-1)+18.2^{0}}.$
The space $Q^{\otimes 5}_{5(2^{0}-1)+18.2^{0}}$ is known by our previous work
D.P3 . Thus, we need to compute the kernel of
$(\widetilde{Sq^{0}_{*}})_{(5,5(2^{1}-1)+18.2^{1})}.$ For this, our approach
can be summarized as follows:
* (i)
A monomial in $\mathcal{P}_{5}$ is assigned a weight vector $\omega$ of degree
$5(2^{1}-1)+18.2^{1}$, which stems from the binary expansion of the exponents
of the monomial. The space of indecomposable elements ${\rm
Ker}((\widetilde{Sq^{0}_{*}})_{(5,5(2^{1}-1)+18.2^{1})})$ is then decomposed
into a direct sum of $(Q_{5(2^{1}-1)+18.2^{1}}^{\otimes 5})^{0}$ and the
subspaces $(Q^{\otimes 5})^{\omega^{>0}}$ indexed by the weight vectors
$\omega.$ Here $[F]_{\omega}=[G]_{\omega}$ in
$(Q_{5(2^{1}-1)+18.2^{1}}^{\otimes 5})^{\omega}$ if the polynomial $F-G$ is
hit, modulo a sum of monomials of weight vectors less than $\omega.$ Based on
the previous results by Peterson F.P , Kameko M.K , Sum N.S1 , and by us D.P6
, one can easily determine $(Q_{5(2^{1}-1)+18.2^{1}}^{\otimes 5})^{0}.$
* (ii)
The monomials in a given degree are lexicographically ordered first by weight
vectors and then by exponent vectors. This leads to the concept of admissible
monomial; more explicitly, a monomial is admissible if, modulo hit elements,
it is not equal to a sum of monomials of smaller orders. The space
$(Q_{5(2^{1}-1)+18.2^{1}}^{\otimes 5})^{\omega^{>0}}$ above is easily seen to
be isomorphic to the space generated by admissible monomials of the weight
vector $\omega.$
* (iii)
In a given (small) degree, we first list all possible weight vectors of an
admissible monomial. This is done by first using a criterion of Singer W.S1
on the hit monomials, and then combining with the results by Kameko M.K and
Sum N.S1 of the form "$XZ^{2^{r}}$ (or $ZY^{2^{t}}$) admissible implies $Z$
admissible, under some mild conditions".
* (iv)
In a given weight vector, we claim the (strict) inadmissibility of some
explicit monomials. The proof is given for a typical monomial in each case by
explicit computations. Finally, by a direct calculation using Theorems 3.2 and
3.3 and some homomorphisms in section three, we obtain a basis of
$(Q_{5(2^{1}-1)+18.2^{1}}^{\otimes 5})^{\omega^{>0}}.$ This approach is much
less computational and it can be applied to other degrees and numbers of
variables $m.$ Moreover, the MAGMA computer algebra system Magma has been used for
verifying the results.
Before going into detail and proceeding to the main results, let us provide
some basic concepts. We do not assume that the reader is familiar with
the basics of hit problems.
Weight vector and exponent vector. Let
$\omega=(\omega_{1},\omega_{2},\ldots,\omega_{t},\ldots)$ be a sequence of
non-negative integers. We say that $\omega$ is a weight vector, if
$\omega_{t}=0$ for $t\gg 0.$ Then, we also define $\deg(\omega)=\sum_{t\geq
1}2^{t-1}\omega_{t}.$ Let $X=x_{1}^{u_{1}}x_{2}^{u_{2}}\ldots x_{m}^{u_{m}}$
be a monomial in $\mathcal{P}_{m},$ and define two sequences associated with $X$
by
$\omega(X):=(\omega_{1}(X),\omega_{2}(X),\ldots,\omega_{t}(X),\ldots),\ \
u(X):=(u_{1},u_{2},\ldots,u_{m}),$
where $\omega_{t}(X)=\sum_{1\leq j\leq m}\alpha_{t-1}(u_{j})$ in which
$\alpha_{t}(n)$ denotes the $t$-th coefficient in the dyadic expansion of a
positive integer $n.$ They are called the weight vector and the exponent
vector of $X,$ respectively. We use the convention that the sets of all the
weight vectors and the exponent vectors are given the left lexicographical
order.
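Concretely, the weight vector just reads off, for each bit position, how many exponents have that bit set, and its degree recovers the degree of the monomial. A small sketch (the tuple representation is our choice for illustration):

```python
def weight_vector(exps):
    """omega(X) for X = x_1^{u_1}...x_m^{u_m}: omega_t counts the exponents
    whose (t-1)-st binary digit equals 1."""
    if not any(exps):
        return ()
    bits = max(u.bit_length() for u in exps)
    return tuple(sum((u >> t) & 1 for u in exps) for t in range(bits))

def weight_degree(omega):
    """deg(omega) = sum_{t >= 1} 2^{t-1} omega_t."""
    return sum(w << t for t, w in enumerate(omega))
```

For $X=x_{1}^{3}x_{2}$, `weight_vector((3, 1))` is `(2, 1)` and `weight_degree((2, 1))` is $4=\deg(X)$.
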
Linear order on $\mathcal{P}_{m}$. Assume that
$X=x_{1}^{u_{1}}x_{2}^{u_{2}}\ldots x_{m}^{u_{m}}$ and
$Y=x_{1}^{v_{1}}x_{2}^{v_{2}}\ldots x_{m}^{v_{m}}$ are the monomials of the
same degree in $\mathcal{P}_{m}.$ We say that $X<Y$ if and only if one of the
following holds:
1. (i)
$\omega(X)<\omega(Y);$
2. (ii)
$\omega(X)=\omega(Y)$ and $u(X)<u(Y).$
Equivalence relations on $\mathcal{P}_{m}$. For a weight vector $\omega,$ we
denote two subspaces associated with $\omega$ by
$\begin{array}[]{ll}\mathcal{P}^{\leq\omega}_{m}=\langle\\{X\in\mathcal{P}_{m}|\,\deg(X)=\deg(\omega),\
\omega(X)\leq\omega\\}\rangle,\\\
\mathcal{P}^{<\omega}_{m}=\langle\\{X\in\mathcal{P}_{m}|\,\deg(X)=\deg(\omega),\
\omega(X)<\omega\\}\rangle.\end{array}$
Let $F$ and $G$ be the homogeneous polynomials in $\mathcal{P}_{m}$ such that
$\deg(F)=\deg(G).$ We say that
1. (i)
$F\equiv G$ if and only if
$(F-G)\in\overline{\mathscr{A}}\mathcal{P}_{m}=\sum_{i\geq 0}{\rm
Im}(Sq^{2^{i}}).$ Specifically, if $F\equiv 0,$ then $F$ is hit (or
$\mathscr{A}$-decomposable), i.e., $F$ can be written in the form $\sum_{i\geq
0}Sq^{2^{i}}(F_{i})$ for some $F_{i}\in\mathcal{P}_{m}$;
2. (ii)
$F\equiv_{\omega}G$ if and only if $F,\,G\in\mathcal{P}^{\leq\omega}_{m}$ and
$(F-G)\in((\overline{\mathscr{A}}\mathcal{P}_{m}\cap\mathcal{P}_{m}^{\leq\omega})+\mathcal{P}_{m}^{<\omega}).$
It is not difficult to show that the binary relations “$\equiv$” and
“$\equiv_{\omega}$” are equivalence relations. So, one defines the quotient space
$(Q^{\otimes
m})^{\omega}=\mathcal{P}_{m}^{\leq\omega}/((\overline{\mathscr{A}}\mathcal{P}_{m}\cap\mathcal{P}_{m}^{\leq\omega})+\mathcal{P}_{m}^{<\omega}).$
Moreover, due to Sum N.S3 , $(Q^{\otimes m})^{\omega}$ is also an
$\mathbb{F}_{2}GL_{m}$-module.
Admissible monomial and inadmissible monomial. A monomial
$X\in\mathcal{P}_{m}$ is said to be inadmissible if there exist monomials
$Y_{1},Y_{2},\ldots,Y_{k}$ such that $Y_{j}<X$ for $1\leq j\leq k$ and
$X\equiv\sum_{1\leq j\leq k}Y_{j}.$ Then, $X$ is said to be admissible if it
is not inadmissible.
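To illustrate the notion of inadmissibility (this small example is ours, using the Cartan formula recalled in Section 3), consider $\mathcal{P}_{2}$:

```latex
% The Cartan formula gives
Sq^{1}(x_{1}x_{2}) = Sq^{1}(x_{1})\,x_{2} + x_{1}\,Sq^{1}(x_{2})
                   = x_{1}^{2}x_{2} + x_{1}x_{2}^{2},
% hence  x_1^2 x_2 \equiv x_1 x_2^2.  Both monomials have the same weight
% vector (1,1), while u(x_1 x_2^2) = (1,2) < (2,1) = u(x_1^2 x_2) in the
% left lexicographical order, so  x_1 x_2^2 < x_1^2 x_2  and therefore
% x_1^2 x_2 is inadmissible.
```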
Thus, with the above definitions in hand, it is straightforward to see that
the set of all the admissible monomials of degree $n$ in $\mathcal{P}_{m}$ is
a minimal set of $\mathscr{A}$-generators for $\mathcal{P}_{m}$ in degree $n.$
So, $Q_{n}^{\otimes m}$ is an $\mathbb{F}_{2}$-vector space with a basis
consisting of all the classes represented by the admissible monomials of degree
$n$ in $\mathcal{P}_{m}.$ Further, as stated in D.P2 , the dimension of
$Q_{n}^{\otimes m}$ can be expressed as the sum of the dimensions of the spaces
$(Q^{\otimes m})^{\omega}$ with $\deg(\omega)=n.$ For later convenience,
we need to set some notation. Let $\mathcal{P}^{0}_{m}$ and
$\mathcal{P}^{>0}_{m}$ denote the $\mathscr{A}$-submodules of
$\mathcal{P}_{m}$ spanned by all the monomials $\prod_{1\leq j\leq
m}x_{j}^{t_{j}}$ such that $\prod_{1\leq j\leq m}t_{j}=0,$ and $\prod_{1\leq
j\leq m}t_{j}>0,$ respectively. Let us write $(Q^{\otimes
m})^{0}:=\mathbb{F}_{2}\otimes_{\mathscr{A}}\mathcal{P}^{0}_{m},\ \mbox{and}\
(Q^{\otimes
m})^{>0}:=\mathbb{F}_{2}\otimes_{\mathscr{A}}\mathcal{P}^{>0}_{m},$ from which
one has that $Q^{\otimes m}=(Q^{\otimes m})^{0}\,\bigoplus\,(Q^{\otimes
m})^{>0}.$ For a polynomial $F\in\mathcal{P}_{m},$ we denote by $[F]$ the
class in $Q^{\otimes m}$ represented by $F.$ If $\omega$ is a weight vector
and $F\in\mathcal{P}_{m}^{\leq\omega},$ then we denote by $[F]_{\omega}$ the
class in $(Q^{\otimes m})^{\omega}$ represented by $F.$ For a subset
$\mathscr{C}\subset\mathcal{P}_{m},$ we also write $|\mathscr{C}|$ for the
cardinality of $\mathscr{C}$ and put
$[\mathscr{C}]=\\{[F]\,:\,F\in\mathscr{C}\\}.$ If $\mathscr{C}\subset
\mathcal{P}_{m}^{\leq\omega},$ then put
$[\mathscr{C}]_{\omega}=\\{[F]_{\omega}\,:\,F\in\mathscr{C}\\}.$ Let us denote
by $\mathscr{C}^{\otimes m}_{n}$ the set of all admissible monomials of degree
$n$ in $\mathcal{P}_{m},$ and let $\omega$ be a weight vector of degree $n.$
By setting
$\begin{array}[]{ll}(\mathscr{C}^{\otimes
m}_{n})^{\omega}:=\mathscr{C}^{\otimes
m}_{n}\cap\mathcal{P}_{m}^{\leq\omega},\ \ (\mathscr{C}^{\otimes
m}_{n})^{\omega^{0}}:=(\mathscr{C}^{\otimes
m}_{n})^{\omega}\cap\mathcal{P}^{0}_{m},\ \ (\mathscr{C}^{\otimes
m}_{n})^{\omega^{>0}}:=(\mathscr{C}^{\otimes
m}_{n})^{\omega}\cap\mathcal{P}^{>0}_{m},\\\ (Q_{n}^{\otimes
m})^{\omega^{0}}:=(Q^{\otimes m})^{\omega}\cap(Q^{\otimes m}_{n})^{0},\ \
(Q_{n}^{\otimes m})^{\omega^{>0}}:=(Q^{\otimes m})^{\omega}\cap(Q^{\otimes
m}_{n})^{>0},\end{array}$
then the sets $[(\mathscr{C}^{\otimes
m}_{n})^{\omega}]_{\omega},\,[(\mathscr{C}^{\otimes
m}_{n})^{\omega^{0}}]_{\omega}$ and $[(\mathscr{C}^{\otimes
m}_{n})^{\omega^{>0}}]_{\omega}$ are the bases of the $\mathbb{F}_{2}$-vector
spaces $(Q_{n}^{\otimes m})^{\omega},\ (Q_{n}^{\otimes m})^{\omega^{0}}$ and
$(Q_{n}^{\otimes m})^{\omega^{>0}},$ respectively.
Main results and applications. Let us now return to our study of the kernel of
the Kameko homomorphism $(\widetilde{Sq^{0}_{*}})_{(5,5(2^{1}-1)+18.2^{1})}$
and state our main results in greater detail. Firstly, by direct calculations
using the results by Kameko M.K , Singer W.S1 , Sum N.S1 , and Tín N.T , we
obtain the following, which is one of our main results and is crucial for an
application on the dimension of $Q^{\otimes 6}.$
###### Theorem 2.2
We have an isomorphism
$\mbox{\rm
Ker}(\widetilde{Sq_{*}^{0}})_{(5,5(2^{1}-1)+18.2^{1})}\cong(Q_{5(2^{1}-1)+18.2^{1}}^{\otimes
5})^{0}\bigoplus(Q_{5(2^{1}-1)+18.2^{1}}^{\otimes
5})^{\widetilde{\omega}^{>0}},$
where $\widetilde{\omega}=(3,3,2,1,1)$ is the weight vector of the degree
$5(2^{1}-1)+18.2^{1}.$
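As a quick consistency check (ours), the degree and the weight vector appearing in Theorem 2.2 agree:

```latex
5(2^{1}-1)+18.2^{1} = 5+36 = 41,
% while
\deg(\widetilde{\omega}) = 3\cdot 2^{0}+3\cdot 2^{1}+2\cdot 2^{2}+1\cdot 2^{3}+1\cdot 2^{4}
                         = 3+6+8+8+16 = 41.
```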
###### Remark 2.3
It is shown in D.P6 that $(Q_{n}^{\otimes 5})^{0}\cong\bigoplus_{1\leq s\leq
4}\bigoplus_{\ell(\mathcal{J})=s}(Q_{n}^{\otimes\,\mathcal{J}})^{>0},$ where
$Q^{\otimes\mathcal{J}}=\langle\\{[x_{j_{1}}^{t_{1}}x_{j_{2}}^{t_{2}}\ldots
x_{j_{s}}^{t_{s}}]\;|\;t_{i}\in\mathbb{N},\,i=1,2,\ldots,s\\}\rangle\subset
Q^{\otimes 5}$
with $\mathcal{J}=(j_{1},j_{2},\ldots,j_{s}),$ $1\leq j_{1}<\ldots<j_{s}\leq
5$, $1\leq s\leq 4,$ and $\ell(\mathcal{J}):=s$ denotes the length of
$\mathcal{J}.$ This implies that $\dim((Q_{n}^{\otimes 5})^{0})=\sum_{1\leq
s\leq 4}\binom{5}{s}\dim((Q_{n}^{\otimes\,s})^{>0}),\ \mbox{for all $n\geq
0.$}$ On the other hand, since $\xi(5(2^{1}-1)+18.2^{1})=3,$ by Peterson F.P
and Wood R.W , the spaces $Q_{5(2^{1}-1)+18.2^{1}}^{\otimes 1}$ and
$Q_{5(2^{1}-1)+18.2^{1}}^{\otimes 2}$ are trivial. Moreover, following Kameko
M.K and Sum N.S1 , we have seen that $(Q_{5(2^{1}-1)+18.2^{1}}^{\otimes
3})^{>0}$ is $15$-dimensional and that $(Q_{5(2^{1}-1)+18.2^{1}}^{\otimes
4})^{>0}$ is $165$-dimensional. Therefore, we may conclude that
$\dim((Q^{\otimes
5}_{5(2^{1}-1)+18.2^{1}})^{0})=15.\binom{5}{3}+165.\binom{5}{4}=975.$
Next, due to Remarks 2.1, 2.3, and to Theorem 2.2, the space $Q^{\otimes
5}_{5(2^{1}-1)+18.2^{1}}$ will be determined by computing
$(Q_{5(2^{1}-1)+18.2^{1}}^{\otimes 5})^{\widetilde{\omega}^{>0}}.$ To accomplish this, we
use the method described above to explicitly indicate all the admissible
monomials in the set $(\mathscr{C}^{\otimes
5}_{5(2^{1}-1)+18.2^{1}})^{\widetilde{\omega}^{>0}}.$ As a result, it reads as
follows.
###### Theorem 2.4
There exist exactly $925$ admissible monomials of degree $5(2^{1}-1)+18.2^{1}$
in $\mathcal{P}_{5}^{>0}$ such that their weight vectors are
$\widetilde{\omega}.$ Consequently, $(Q_{5(2^{1}-1)+18.2^{1}}^{\otimes
5})^{\widetilde{\omega}^{>0}}$ has dimension $925.$
This theorem, together with the fact that $Q_{5(2^{t}-1)+18.2^{t}}^{\otimes
5}=(Q_{5(2^{t}-1)+18.2^{t}}^{\otimes
5})^{0}\,\bigoplus\,(Q_{5(2^{t}-1)+18.2^{t}}^{\otimes 5})^{>0},$ yields an
immediate corollary.
###### Corollary 2.5
The space $Q_{5(2^{t}-1)+18.2^{t}}^{\otimes 5}$ is $730$-dimensional if $t=0,$
and is $2630$-dimensional if $t\geq 1.$
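The count for $t\geq 1$ can be verified by simple bookkeeping (ours), using Theorem 2.2, Remark 2.3, Theorem 2.4, and the fact (recalled later in the text) that Kameko’s homomorphism is an epimorphism $Q^{\otimes 5}_{41}\to Q^{\otimes 5}_{18}$:

```latex
\dim Q^{\otimes 5}_{41}
  = \dim{\rm Ker}(\widetilde{Sq^{0}_{*}})_{(5,41)} + \dim Q^{\otimes 5}_{18}
  = (975+925) + 730 = 2630,
% and the iterated Kameko isomorphism carries this dimension to all t >= 1.
```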
As applications, one would also be interested in applying the results and
techniques of hit problems to the cases of higher ranks $m$ of $Q^{\otimes
m}$ and to the modular representations of the general linear groups (see also the
relevant discussions in the literature: J.B , N.M1 ; N.M2 , T.N2 , W.W ; W.W2 ).
The two applications below of this paper’s contributions also serve this
goal.
First application: the dimension of $Q^{\otimes 6}.$ The hit problem of six
variables has not yet been solved. Using Corollary 2.5 for the case $t\geq 1$
and a result in Sum N.S1 , we state that
###### Theorem 2.6
For the generic degree $5(2^{t+4}-1)+41.2^{t+4},$ where $t$ is an arbitrary
positive integer, the $\mathbb{F}_{2}$-vector space $Q^{\otimes 6}$ has
dimension $165690$ in this degree.
Observing Corollary 2.5 and Theorem 2.6, the reader will notice that the
dimensions of $Q^{\otimes 5}$ and $Q^{\otimes 6}$ in the degrees given are very
large. So, a general approach to hit problems, other than providing a monomial
basis of the vector space $Q_{n}^{\otimes m},$ is to find upper and lower bounds
on the dimension of this space. In this work, we have not studied
this side of the problem; we leave it for future research. It is remarkable
that Kameko’s conjecture M.K proposed an upper bound for the dimension of
$Q_{n}^{\otimes m},$ but unfortunately, it was refuted for $m\geq 5$ by the
brilliant work of Sum N.S .
Second application: the behavior of the fifth Singer transfer. We apply
Corollary 2.5 for $t=0,$ together with a fact about the Adams $E^{2}$-term ${\rm
Ext}_{\mathscr{A}}^{5,5+*}(\mathbb{F}_{2},\mathbb{F}_{2})$, to obtain
information about the behavior of Singer’s cohomological transfer in the
bidegree $(5,5+(5(2^{0}-1)+18.2^{0}))$. More precisely, the
calculations of Lin W.L and Chen T.C imply that ${\rm
Ext}_{\mathscr{A}}^{5,5+(5(2^{t}-1)+18.2^{t})}(\mathbb{F}_{2},\mathbb{F}_{2})=\langle
h_{t}f_{t}\rangle$ and $h_{t}f_{t}=h_{t+1}e_{t}\neq 0$ for all $t\geq 0.$ So,
to determine the transfer map in the above bidegree, we shall compute the
dimension of (the domain of the fifth transfer)
$(\mathbb{F}_{2}\otimes_{GL_{5}}P_{\mathscr{A}}((\mathcal{P}_{5})^{*}))_{5(2^{0}-1)+18.2^{0}}$
by using a monomial basis of $Q^{\otimes 5}_{5(2^{0}-1)+18.2^{0}}.$ (We
emphasize that computing the domain of $Tr_{m}^{\mathscr{A}}$ in each degree
$n$ is very difficult, particularly for values of $m$ as large as $m=5.$ The
understanding of special cases should be a helpful step toward the solution of
the general problem. Moreover, we believe, in principle, that our method could
lead to a full analysis of
$\mathbb{F}_{2}\otimes_{GL_{m}}P_{\mathscr{A}}((\mathcal{P}_{m})^{*})$ in each
rank $m$ and degree $n>0$, as long as nice decompositions of the space of
$GL_{m}$-invariants of $Q^{\otimes m}$ in the degrees given are available.
However, the difficulty of such a task must be monumental, as $Q^{\otimes m}$
becomes much larger and harder to understand with increasing $m.$) Details for this
application are as follows. Recall that, by the previous discussions in
D.P3 , we have the technical result below.
###### Proposition 2.7
The following hold:
1. i)
If $Y\in\mathscr{C}_{5(2^{0}-1)+18.2^{0}}^{\otimes 5},$ then
$\overline{\omega}:=\omega(Y)$ is one of the following sequences:
$\overline{\omega}_{[1]}:=(2,2,1,1),\ \ \overline{\omega}_{[2]}:=(2,2,3),\ \
\overline{\omega}_{[3]}:=(2,4,2),$ $\overline{\omega}_{[4]}:=(4,1,1,1),\ \
\overline{\omega}_{[5]}:=(4,1,3),\ \ \overline{\omega}_{[6]}:=(4,3,2).$
2. ii)
$|(\mathscr{C}^{\otimes
5}_{5(2^{0}-1)+18.2^{0}})^{\overline{\omega}_{[k]}}|=\left\\{\begin{array}[]{ll}300&\mbox{{\rm
if}}\ k=1,\\\ 15&\mbox{{\rm if}}\ k=2,5,\\\ 10&\mbox{{\rm if}}\ k=3,\\\
110&\mbox{{\rm if}}\ k=4,\\\ 280&\mbox{{\rm if}}\ k=6.\end{array}\right.$
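As a consistency check (ours), each weight vector listed in part i) indeed has degree $5(2^{0}-1)+18.2^{0}=18$:

```latex
\deg(\overline{\omega}_{[1]}) = 2+4+4+8 = 18,\qquad
\deg(\overline{\omega}_{[2]}) = 2+4+12 = 18,\qquad
\deg(\overline{\omega}_{[3]}) = 2+8+8 = 18,
% and
\deg(\overline{\omega}_{[4]}) = 4+2+4+8 = 18,\qquad
\deg(\overline{\omega}_{[5]}) = 4+2+12 = 18,\qquad
\deg(\overline{\omega}_{[6]}) = 4+6+8 = 18.
```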
One should note that $|(\mathscr{C}^{\otimes
5}_{5(2^{0}-1)+18.2^{0}})^{\overline{\omega}_{[k]}}|=|(\mathscr{C}^{\otimes
5}_{5(2^{0}-1)+18.2^{0}})^{\overline{\omega}_{[k]}^{>0}}|$ for $k=2,3$, and
that $|(\mathscr{C}^{\otimes
5}_{5(2^{0}-1)+18.2^{0}})^{\overline{\omega}_{[2]}^{0}}|=0=|(\mathscr{C}^{\otimes
5}_{5(2^{0}-1)+18.2^{0}})^{\overline{\omega}_{[3]}^{0}}|.$ Moreover,
$\dim(Q^{\otimes 5}_{5(2^{0}-1)+18.2^{0}})=\sum_{1\leq k\leq
6}|(\mathscr{C}^{\otimes
5}_{5(2^{0}-1)+18.2^{0}})^{\overline{\omega}_{[k]}}|=730.$ Next, applying
these results, we explicitly compute the subspaces of $GL_{5}$-invariants
$((Q_{5(2^{0}-1)+18.2^{0}}^{\otimes 5})^{\overline{\omega}_{[k]}})^{GL_{5}},$
for $1\leq k\leq 6,$ and obtain
###### Theorem 2.8
The following assertions are true:
1. i)
$((Q_{5(2^{0}-1)+18.2^{0}}^{\otimes 5})^{\overline{\omega}_{[k]}})^{GL_{5}}=0$
with $k\in\\{1,2,3,5,6\\}.$
2. ii)
$((Q_{5(2^{0}-1)+18.2^{0}}^{\otimes
5})^{\overline{\omega}_{[4]}})^{GL_{5}}=\langle[\Re^{\prime}_{4}]_{\overline{\omega}_{[4]}}\rangle,$
where
$\begin{array}[]{ll}\vskip 6.0pt plus 2.0pt minus
2.0pt\Re^{\prime}_{4}&=x_{1}x_{2}x_{3}x_{4}x_{5}^{14}+x_{1}x_{2}x_{3}x_{4}^{14}x_{5}+x_{1}x_{2}x_{3}^{14}x_{4}x_{5}+x_{1}x_{2}^{3}x_{3}x_{4}x_{5}^{12}\\\
\vskip 6.0pt plus 2.0pt minus
2.0pt&\quad+x_{1}x_{2}^{3}x_{3}x_{4}^{12}x_{5}+x_{1}x_{2}^{3}x_{3}^{12}x_{4}x_{5}+x_{1}^{3}x_{2}x_{3}x_{4}x_{5}^{12}+x_{1}^{3}x_{2}x_{3}x_{4}^{12}x_{5}\\\
\vskip 6.0pt plus 2.0pt minus
2.0pt&\quad+x_{1}^{3}x_{2}x_{3}^{12}x_{4}x_{5}+x_{1}^{3}x_{2}^{5}x_{3}x_{4}x_{5}^{8}+x_{1}^{3}x_{2}^{5}x_{3}x_{4}^{8}x_{5}+x_{1}^{3}x_{2}^{5}x_{3}^{8}x_{4}x_{5}.\end{array}$
Now, because
$(\mathbb{F}_{2}\otimes_{GL_{5}}P_{\mathscr{A}}((\mathcal{P}_{5})^{*}))_{5.(2^{0}-1)+18.2^{0}}$
is isomorphic to $(Q^{\otimes 5}_{5.(2^{0}-1)+18.2^{0}})^{GL_{5}},$ by Theorem
2.8, we have the following estimate:
$\begin{array}[]{ll}\dim(\mathbb{F}_{2}\otimes_{GL_{5}}P_{\mathscr{A}}((\mathcal{P}_{5})^{*}))_{5.(2^{0}-1)+18.2^{0}}&=\dim(Q^{\otimes
5}_{5.(2^{0}-1)+18.2^{0}})^{GL_{5}}\\\ &\leq\sum_{1\leq k\leq
6}\dim((Q_{5(2^{0}-1)+18.2^{0}}^{\otimes
5})^{\overline{\omega}_{[k]}})^{GL_{5}}\leq 1.\end{array}$
On the other hand, as shown in section one, $\\{h_{t}|\,t\geq 0\\}\subset{\rm
Im}(Tr^{\mathscr{A}}_{1}),$ and $\\{f_{t}|\,t\geq 0\\}\subset{\rm
Im}(Tr^{\mathscr{A}}_{4}).$ Combining this with the fact that the total
transfer $\bigoplus_{m\geq 0}Tr_{m}^{\mathscr{A}}$ is a homomorphism of
algebras, it may be concluded that the non-zero element $h_{t}f_{t}\in{\rm
Ext}_{\mathscr{A}}^{5,23.2^{t}}(\mathbb{F}_{2},\mathbb{F}_{2})$ is in the
image of $Tr^{\mathscr{A}}_{5}$ for all $t\geq 0.$ A direct proof is given
in the Appendix. This statement implies that
$\dim(\mathbb{F}_{2}\otimes_{GL_{5}}P_{\mathscr{A}}((\mathcal{P}_{5})^{*}))_{5.(2^{0}-1)+18.2^{0}}\geq
1,$
and therefore
$(\mathbb{F}_{2}\otimes_{GL_{5}}P_{\mathscr{A}}((\mathcal{P}_{5})^{*}))_{5.(2^{0}-1)+18.2^{0}}$
is one-dimensional. As a consequence, we immediately obtain
###### Corollary 2.9
The cohomological transfer
$Tr^{\mathscr{A}}_{5}:(\mathbb{F}_{2}\otimes_{GL_{5}}P_{\mathscr{A}}((\mathcal{P}_{5})^{*}))_{5(2^{0}-1)+18.2^{0}}\to{\rm
Ext}_{\mathscr{A}}^{5,5+5(2^{0}-1)+18.2^{0}}(\mathbb{F}_{2},\mathbb{F}_{2})$
is an isomorphism. Consequently, Conjecture 1.1 holds in the rank 5 case and
the degree $5(2^{0}-1)+18.2^{0}.$
Comments and open issues. From the above results, it is interesting to
see that $Q^{\otimes 5}$ is $730$-dimensional in degree $5(2^{0}-1)+18.2^{0},$
while the space of $GL_{5}$-coinvariants of it in this degree is only
one-dimensional. In general, it is quite efficient to use the results of the hit
problem of five variables to study
$\mathbb{F}_{2}\otimes_{GL_{5}}P_{\mathscr{A}}((\mathcal{P}_{5})^{*}).$ This
provides a valuable method for verifying Singer’s open conjecture on the fifth
algebraic transfer. We now close the introduction by discussing
Conjecture 1.1 in the rank 5 case and the internal degree
$n_{t}:=5(2^{t}-1)+18.2^{t}$ for all $t\geq 1.$ Let us note again that the
iterated Kameko homomorphism
$((\widetilde{Sq^{0}_{*}})_{(5,n_{t})})^{t-1}:Q^{\otimes 5}_{n_{t}}\to
Q^{\otimes 5}_{n_{1}}$ is an $\mathbb{F}_{2}GL_{5}$-module isomorphism for all
$t\geq 1.$ So, from a fact about ${\rm
Ext}_{\mathscr{A}}^{5,5+n_{1}}(\mathbb{F}_{2},\mathbb{F}_{2})$, to check
Singer’s conjecture in the above degree, we need only determine the
$GL_{5}$-coinvariants of $Q^{\otimes 5}_{n_{t}}$ for $t=1.$ We must recall
that Kameko’s map $(\widetilde{Sq^{0}_{*}})_{(5,n_{1})}:Q^{\otimes
5}_{n_{1}}\to Q^{\otimes 5}_{n_{0}}$ is an epimorphism of $GL_{5}$-modules. On
the other hand, as shown before, the non-zero element $h_{1}f_{1}\in{\rm
Ext}_{\mathscr{A}}^{5,5+n_{1}}(\mathbb{F}_{2},\mathbb{F}_{2})$ is detected by
the fifth transfer. From these data and Theorem 2.8, one has an estimate
$0\leq\dim((\mathbb{F}_{2}\otimes_{GL_{5}}P_{\mathscr{A}}((\mathcal{P}_{5})^{*}))_{n_{1}})-1\leq\dim({\rm
Ker}(\widetilde{Sq^{0}_{*}})_{(5,n_{1})})^{GL_{5}}.$
Moreover, based on the proof of Theorem 2.8 together with a few simple
arguments, it follows that the elements in
$(\mathbb{F}_{2}\otimes_{GL_{5}}P_{\mathscr{A}}((\mathcal{P}_{5})^{*}))_{n_{1}}$
are dual to the classes
$\begin{array}[]{ll}\vskip 6.0pt plus 2.0pt minus
2.0pt&\gamma[x_{1}^{3}x_{2}^{3}x_{3}^{3}x_{4}^{3}x_{5}^{29}+x_{1}^{3}x_{2}^{3}x_{3}^{3}x_{4}^{29}x_{5}^{3}+x_{1}^{3}x_{2}^{3}x_{3}^{29}x_{4}^{3}x_{5}^{3}+x_{1}^{3}x_{2}^{7}x_{3}^{3}x_{4}^{3}x_{5}^{25}+x_{1}^{3}x_{2}^{7}x_{3}^{3}x_{4}^{25}x_{5}^{3}\\\
\vskip 6.0pt plus 2.0pt minus
2.0pt&\quad+x_{1}^{3}x_{2}^{7}x_{3}^{25}x_{4}^{3}x_{5}^{3}+x_{1}^{7}x_{2}^{3}x_{3}^{3}x_{4}^{3}x_{5}^{25}+x_{1}^{7}x_{2}^{3}x_{3}^{3}x_{4}^{25}x_{5}^{3}+x_{1}^{7}x_{2}^{3}x_{3}^{25}x_{4}^{3}x_{5}^{3}+x_{1}^{7}x_{2}^{11}x_{3}^{3}x_{4}^{3}x_{5}^{17}\\\
&\quad+x_{1}^{7}x_{2}^{11}x_{3}^{3}x_{4}^{17}x_{5}^{3}+x_{1}^{7}x_{2}^{11}x_{3}^{17}x_{4}^{3}x_{5}^{3}]+[\zeta],\end{array}$
where $\gamma\in\mathbb{F}_{2},$ and
$[\zeta]\in\text{Ker}(\widetilde{Sq^{0}_{*}})_{(5,n_{1})}.$ It should be
noted that calculating these elements explicitly is not easy. However, in
view of our previous works D.P2 ; D.P6 , and motivated by the above
computations, we have the following prediction.
###### Conjecture 2.10
For each $t\geq 1,$ the space of $GL_{5}$-invariant elements of ${\rm
Ker}(\widetilde{Sq^{0}_{*}})_{(5,n_{t})}$ is trivial. Consequently, the
coinvariant
$(\mathbb{F}_{2}\otimes_{GL_{5}}P_{\mathscr{A}}((\mathcal{P}_{5})^{*}))_{n_{t}}$
is 1-dimensional.
Since $h_{t}f_{t}\in{\rm Im}(Tr_{5}^{\mathscr{A}}),$ for all $t\geq 0,$ if
Conjecture 2.10 is true, then $Tr_{5}^{\mathscr{A}}$ is also an isomorphism when
acting on the coinvariant
$(\mathbb{F}_{2}\otimes_{GL_{5}}P_{\mathscr{A}}((\mathcal{P}_{5})^{*}))_{n_{t}}$
for $t\geq 1,$ and so Conjecture 1.1 holds in bidegree $(5,5+n_{t})$. We
hope that our predictions are correct; if not, Singer’s conjecture will be
disproved. We leave these issues as future research, and would welcome any
readers with an interest in solving them.
Overview. Let us give a brief outline of the contents of this paper. Section
three contains a brief review of Steenrod squares and some useful linear
transformations. The dimensions of the spaces $Q^{\otimes 5}$ and $Q^{\otimes
6}$ in the generic degrees $n_{t}=5(2^{t}-1)+18.2^{t}$ and
$5(2^{t+4}-1)+n_{1}.2^{t+4}$ are respectively obtained in section four by
proving Theorems 2.2, 2.4, and 2.6. Section five presents the proof of
Theorem 2.8. In the remainder of the text, we give a direct proof of the
claim above that the non-zero elements $h_{t}f_{t}\in{\rm
Ext}_{\mathscr{A}}^{5,23.2^{t}}(\mathbb{F}_{2},\mathbb{F}_{2})$ are detected
by $Tr_{5}^{\mathscr{A}}$. The proof is based on a representation of the
fifth Singer transfer in the lambda algebra. Finally, we describe the set
$(\mathscr{C}_{n_{1}}^{\otimes 5})^{\widetilde{\omega}^{>0}}$ and list some of
the admissible monomials in $\mathscr{C}_{n_{0}}^{\otimes 5}$ and the strictly
inadmissible monomials in $(\mathcal{P}_{5}^{>0})_{n_{1}}.$
###### Acknowledgment
I would like to express my deepest and most sincere thanks to Professor N. Sum
(Quy Nhon University, Sai Gon University), my thesis advisor, for meaningful
discussions. I am very grateful to Professor W. Singer for many enlightening
e-mail exchanges.
## 3 The Necessary Preliminaries
This section begins with a few words on the Steenrod algebra over
$\mathbb{F}_{2}$ and ends with a brief sketch of some homomorphisms in N.S1 .
At the same time, we prove some elementary results that will be used in the
rest of this text.
### 3.1 Steenrod squares and their properties
The mod 2 Steenrod algebra $\mathscr{A}$ was defined by Cartan Cartan to be
the algebra of stable cohomology operations for mod 2 cohomology. This algebra
is generated by the Steenrod squares $Sq^{i}:H^{n}(X,\mathbb{F}_{2})\to
H^{n+i}(X,\mathbb{F}_{2}),$ for $i\geq 0,$ where $H^{n}(X,\mathbb{F}_{2})$
denotes the $n$-th singular cohomology group of a topological space $X$ with
coefficients in $\mathbb{F}_{2}.$ Steenrod and Epstein S.E showed that these
squares are characterized by the following 5 axioms:
1. (i)
$Sq^{i}$ is an additive homomorphism and is natural with respect to any map
$f:X\to Y;$ that is, $f^{*}(Sq^{i}(x))=Sq^{i}(f^{*}(x)).$
2. (ii)
$Sq^{0}$ is the identity homomorphism.
3. (iii)
$Sq^{i}(x)=x\smile x$ for all $x\in H^{i}(X,\mathbb{F}_{2})$ where $\smile$
denotes the cup product in the graded cohomology ring
$H^{*}(X,\mathbb{F}_{2}).$
4. (iv)
If $i>\deg(x),$ then $Sq^{i}(x)=0.$
5. (v)
Cartan’s formula: $Sq^{n}(x\smile y)=\sum_{i+j=n}Sq^{i}(x)\smile Sq^{j}(y).$
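A standard consequence of axioms (iii)-(v) (our addition, not part of the original list) is the formula for the action on powers of a one-dimensional class, which underlies most hit-problem computations in $\mathcal{P}_{m}$:

```latex
% For |x| = 1 (for example a generator x_j of P_m),
Sq^{i}(x^{n}) = \binom{n}{i}\,x^{n+i},
\qquad \mbox{with } \binom{n}{i} \mbox{ taken mod } 2.
% For instance, Sq^{1}(x^{2}) = \binom{2}{1}x^{3} = 0, while
% Sq^{1}(x^{3}) = \binom{3}{1}x^{4} = x^{4}.
```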
In addition, Steenrod squares have the following properties:
$\bullet$ $Sq^{1}$ is the Bockstein homomorphism of the coefficient sequence:
$0\to\mathbb{Z}/2\to\mathbb{Z}/4\to\mathbb{Z}/2\to 0.$
$\bullet$ $Sq^{i}$ commutes with the connecting morphism of the long exact
sequence in cohomology. In particular, it commutes with the suspension
isomorphism $H^{n}(X,\mathbb{F}_{2})\cong H^{n+1}(\Sigma X,\mathbb{F}_{2}).$
$\bullet$ They satisfy the Adem relations: $Sq^{i}Sq^{j}=\sum_{0\leq
t\leq[i/2]}\binom{j-t-1}{i-2t}Sq^{i+j-t}Sq^{t},\,\,0<i<2j,$ where the binomial
coefficients are to be interpreted mod 2. These relations, which were
conjectured by Wu Wu and established by Adem Adem , allow one to write an
arbitrary composition of Steenrod squares as a sum of Serre-Cartan basis
elements.
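As a small worked instance of the Adem relations (ours):

```latex
% Take i = 2, j = 2 (so 0 < i < 2j); the sum runs over t = 0, 1:
Sq^{2}Sq^{2} = \binom{1}{2}Sq^{4}Sq^{0} + \binom{0}{0}Sq^{3}Sq^{1} = Sq^{3}Sq^{1},
% since \binom{1}{2} = 0 while \binom{0}{0} = 1.  Similarly, with i = j = 1
% the only term is \binom{0}{1}Sq^{2}Sq^{0} = 0, recovering Sq^{1}Sq^{1} = 0.
```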
Note that $H^{*}(X,\mathbb{F}_{2})$ carries the structure not only of a
graded commutative $\mathbb{F}_{2}$-algebra, but also of an
$\mathscr{A}$-module. In many cases, the $\mathscr{A}$-module structure on
$H^{*}(X,\mathbb{F}_{2})$ provides additional information on $X.$
## References
* (1) J. Adem, The iteration of the Steenrod squares in Algebraic Topology, Proc. Natl. Acad. Sci. USA 38 (1952), 720-726.
* (2) J.M. Boardman, Modular representations on the homology of powers of real projective space, in Algebraic Topology: Oaxtepec 1991, ed. M.C. Tangora; in Contemp. Math. 146 (1993), 49-70.
* (3) A.K. Bousfield, E.B. Curtis, D.M. Kan, D.G. Quillen, D.L. Rector, and J.W. Schlesinger, The mod-$p$ lower central series and the Adams spectral sequence, Topology 5 (1966), 331-342.
* (4) R.R. Bruner, L.M. Hà, and N.H.V. Hưng, On behavior of the algebraic transfer, Trans. Amer. Math. Soc. 357 (2005), 437-487.
* (5) M. Brunetti, A. Ciampella, and A.L. Lomonaco, A total Steenrod operation as homomorphism of Steenrod algebra-modules, Ric. Mat. 61 (2012), 1-17.
* (6) M. Brunetti, and A.L. Lomonaco, A representation of the dual of the Steenrod algebra, Ric. Mat. 63 (2014), 19-24.
* (7) H. Cartan, Sur l’itération des opérations de Steenrod, Comment. Math. Helv. 29 (1955), 40-58.
* (8) T.W. Chen, Determination of $\mbox{Ext}^{5,*}_{\mathscr{A}}(\mathbb{Z}/2,\mathbb{Z}/2)$, Topol. Appl. 158 (2011), 660-689.
* (9) P.H. Chơn, and L.M. Hà, Lambda algebra and the Singer transfer, C. R. Math. Acad. Sci. Paris 349 (2011), 21-23.
* (10) M.C. Crabb, and J.R. Hubbuck, Representations of the homology of BV and the Steenrod algebra II, in Algebra Topology: New trend in localization and periodicity; in Progr. Math. 136 (1996), 143-154.
* (11) L.M. Hà, Sub-Hopf algebras of the Steenrod algebra and the Singer transfer, Geom. Topol. Monogr. 11 (2007), 101-124.
* (12) A. Hatcher, Algebraic Topology, Cambridge University Press, 2002, 551 pp.
* (13) M.A. Hill, M.J. Hopkins, and D.C. Ravenel, On the non-existence of elements of Kervaire invariant one, Ann. of Math. (2) 184 (2016), 1-262.
* (14) N.H.V. Hưng, The cohomology of the Steenrod algebra and representations of the general linear groups, Trans. Amer. Math. Soc. 357 (2005), 4065-4089.
* (15) N.H.V. Hưng and V.T.N. Quỳnh, The image of Singer’s fourth transfer, C. R. Math. Acad. Sci. Paris 347 (2009), 1415-1418.
* (16) M. Inoue, $\mathcal{A}$-generators of the cohomology of the steinberg summand $M(n)$, In: D.M. Davis, J. Morava, G. Nishida, W. S. Wilson and N. Yagita (eds.) Recent Progress in Homotopy Theory (Baltimore, MD, 2000). Contemporary Mathematics, vol. 293, pp 125-139. American Mathematical Society, Providence (2002).
* (17) M. Inoue, Generators of the cohomology of $M(n)$ as a module over the odd primary Steenrod algebra, J. Lond. Math. Soc. (2) 75 (2007), 317-329.
* (18) A.S. Janfada, and R.M.W. Wood, The hit problem for symmetric polynomials over the Steenrod algebra, Math. Proc. Cambridge Philos. Soc. 133 (2002), 295-303.
* (19) A.S. Janfada, and R.M.W. Wood, Generating $H^{*}(BO(3),\mathbb{F}_{2})$ as a module over the Steenrod algebra, Math. Proc. Cambridge Philos. Soc. 134 (2003), 239-258.
* (20) A.S. Janfada, A criterion for a monomial in $P(3)$ to be hit, Math. Proc. Cambridge Philos. Soc. 145 (2008), 587-599.
* (21) A.S. Janfada, A note on the unstability conditions of the Steenrod squares on the polynomial algebra, J. Korean Math. Soc. 46 (2009), 907-918.
* (22) M. Kameko, Products of projective spaces as Steenrod modules, PhD. thesis, The Johns Hopkins University, ProQuest LLC, Ann Arbor, MI, 1990, 29 pages.
* (23) W.H. Lin, ${\rm Ext}_{\mathcal{A}}^{4,*}(\mathbb{Z}/2,\mathbb{Z}/2)\mbox{ {\it and} }{\rm Ext}_{\mathcal{A}}^{5,*}(\mathbb{Z}/2,\mathbb{Z}/2)$, Topol. Appl. 155 (2008), 459-496.
* (24) Magma Computational Algebra System (V2.25-8), the Computational Algebra Group at the University of Sydney, (2020), http://magma.maths.usyd.edu.au/magma/.
* (25) J.P. May, A General Algebraic Approach to Steenrod Operations, Lect. Notes Math., vol. 168, Springer-Verlag (1970), 153-231.
* (26) N. Minami, The Adams spectral sequence and the triple transfer, Amer. J. Math. 117 (1995), 965-985.
* (27) N. Minami, The iterated transfer analogue of the new doomsday conjecture, Trans. Amer. Math. Soc. 351 (1999), 2325-2351.
* (28) J.W. Milnor, The Steenrod algebra and its dual, Ann. of Math. (2) 67 (1958), 150-171.
* (29) M.F. Mothebe, and L. Uys, Some relations between admissible monomials for the polynomial algebra, Int. J. Math. Math. Sci., Article ID 235806, 2015, 7 pages.
* (30) M.F. Mothebe, _The admissible monomial basis for the polynomial algebra in degree thirteen_ , East-West J. Math. 18 (2016), 151-170.
* (31) T.N. Nam, Transfert algébrique et action du groupe linéaire sur les puissances divisées modulo 2, Ann. Inst. Fourier (Grenoble) 58 (2008), 1785-1837.
* (32) D.J. Pengelley, and F. Williams, The hit problem for $H^{*}(BU(2);\mathbb{F}_{p})$, Algebr. Geom. Topol. 13 (2013), 2061-2085.
* (33) F.P. Peterson, Generators of $H^{*}(\mathbb{R}P^{\infty}\times\mathbb{R}P^{\infty})$ as a module over the Steenrod algebra, Abstracts Amer. Math. Soc. 833 (1987).
* (34) Đ.V. Phúc, and N. Sum, On the generators of the polynomial algebra as a module over the Steenrod algebra, C.R.Math. Acad. Sci. Paris 353 (2015), 1035-1040.
* (35) D.V. Phuc, and N. Sum, On a minimal set of generators for the polynomial algebra of five variables as a module over the Steenrod algebra, Acta Math. Vietnam. 42 (2017), 149-162.
* (36) D.V. Phuc, The hit problem for the polynomial algebra of five variables in degree seventeen and its application, East-West J. Math. 18 (2016), 27-46.
* (37) Đ.V. Phúc, The “hit” problem of five variables in the generic degree and its application, Topol. Appl. 107321 (2020), 34 pages, in press.
* (38) Đ.V. Phúc, $\mathcal{A}$-generators for the polynomial algebra of five variables in degree $5(2^{t}-1)+6.2^{t}$, Commun. Korean Math. Soc. 35 (2020), 371-399.
* (39) Đ.V. Phúc, On Peterson’s open problem and representations of the general linear groups, J. Korean Math. Soc. 58 (2021), 643-702.
* (40) Đ.V. Phúc, On the dimension of $H^{*}((\mathbb{Z}_{2})^{\times t},\mathbb{Z}_{2})$ as a module over Steenrod ring, Topol. Appl. 303 (2021), 107856.
* (41) Đ.V. Phúc, The answer to Singer’s conjecture on the cohomological transfer of rank 4, Preprint 2021, available online at https://www.researchgate.net/publication/352284459, submitted for publication.
* (42) S. Priddy, On characterizing summands in the classifying space of a group, I, Amer. J. Math. 112 (1990), 737-748.
* (43) J. Repka, and P. Selick, On the subalgebra of $H_{*}((\mathbb{R}P^{\infty})^{n};\mathbb{F}_{2})$ annihilated by Steenrod operations, J. Pure Appl. Algebra 127 (1998), 273-288.
* (44) W.M. Singer, The transfer in homological algebra, Math. Z. 202 (1989), 493-523.
* (45) W.M. Singer, On the action of the Steenrod squares on polynomial algebras, Proc. Amer. Math. Soc. 111 (1991), 577-583.
* (46) V.P. Snaith, Stable homotopy - around the Arf-Kervaire invariant, Birkhauser Progress on Math. Series vol. 273 (April 2009), 250 pages.
* (47) N.E. Steenrod, and D.B.A. Epstein, Cohomology operations, Annals of Mathematics Studies 50, Princeton University Press, Princeton N.J, 1962.
* (48) N. Sum, The negative answer to Kameko’s conjecture on the hit problem, Adv. Math. 225 (2010), 2365-2390.
* (49) N. Sum, On the Peterson hit problem, Adv. Math. 274 (2015), 432-489.
* (50) N. Sum, On a construction for the generators of the polynomial algebra as a module over the Steenrod algebra, in Singh M., Song Y., Wu J. (eds), Algebraic Topology and Related Topics. Trends in Mathematics. Birkhäuser/Springer, Singapore (2019), 265-286.
* (51) N. Sum, The squaring operation and the Singer algebraic transfer, Vietnam J. Math. 49 (2021), 1079-1096.
* (52) N.K. Tín, The hit problem for the polynomial algebra in five variables and applications, PhD. thesis, Quy Nhon University, 2017.
* (53) G. Walker, and R.M.W. Wood, Polynomials and the mod 2 Steenrod Algebra: Volume 1, The Peterson hit problem, in London Math. Soc. Lecture Note Ser., Cambridge Univ. Press, 2018.
* (54) G. Walker, and R.M.W. Wood, Polynomials and the mod 2 Steenrod Algebra: Volume 2, Representations of $GL(n;\mathbb{F}_{2})$, in London Math. Soc. Lecture Note Ser., Cambridge Univ. Press, 2018.
* (55) W.C. Waterhouse, Two generators for the general linear groups over finite fields, Linear Multilinear Algebra 24 (1989), 227-230.
* (56) R.M.W. Wood, Steenrod squares of polynomials and the Peterson conjecture, Math. Proc. Cambridge Philos. Soc. 105 (1989), 307-309.
* (57) W. Wu, Sur les puissances de Steenrod, Colloque de Topologie de Strasbourg, 1951, no. IX, 9 pp. La Bibliothèque Nationale et Universitaire de Strasbourg, 1952.