CMLCompiler: A Unified Compiler for Classical Machine Learning

Xu Wen
Institute of Computing Technology, Chinese Academy of Sciences
University of Chinese Academy of Sciences
wenxu@ict.ac.cn

Wanling Gao
Institute of Computing Technology, Chinese Academy of Sciences
University of Chinese Academy of Sciences
gaowanling@ict.ac.cn

Anzheng Li
Institute of Computing Technology, Chinese Academy of Sciences
University of Chinese Academy of Sciences
lianzheng20g@ict.ac.cn

Lei Wang
Institute of Computing Technology, Chinese Academy of Sciences
University of Chinese Academy of Sciences
wanglei_2011@ict.ac.cn

Zihan Jiang
Institute of Computing Technology, Chinese Academy of Sciences
University of Chinese Academy of Sciences
jiangzihan@ict.ac.cn

Jianfeng Zhan∗
Institute of Computing Technology, Chinese Academy of Sciences
University of Chinese Academy of Sciences
zhanjianfeng@ict.ac.cn
ABSTRACT

Classical machine learning (CML) occupies nearly half of the machine learning pipelines in production applications. Unfortunately, it fails to fully utilize state-of-the-practice devices and performs poorly. Without a unified framework, hybrid deployments of deep learning (DL) and CML also suffer from severe performance and portability issues. This paper presents the design of a unified compiler, called CMLCompiler, for CML inference. We propose two unified abstractions: operator representations and extended computational graphs. The CMLCompiler framework performs conversion and graph optimization based on the two unified abstractions, then outputs an optimized computational graph to DL compilers or frameworks. We implement CMLCompiler on TVM. The evaluation shows CMLCompiler's portability and superior performance. It achieves up to 4.38× speedup on CPU, 3.31× speedup on GPU, and 5.09× speedup on IoT devices, compared to the state-of-the-art solutions (scikit-learn, Intel sklearn, and Hummingbird). Our CML and DL mixed pipelines achieve up to 3.04× speedup compared with cross-framework implementations.

CCS CONCEPTS
• Computing methodologies → Machine learning; • Computer systems organization → Real-time systems.

KEYWORDS
Classical Machine Learning, Deep Learning, Compiler
1 INTRODUCTION

Deep learning (DL) and classical machine learning (CML), collectively called machine learning (ML), have played an increasingly critical role in recent years. DL refers to neural network models, such as convolutional neural networks (CNNs) [24], recurrent neural networks (RNNs) [28], and generative adversarial networks (GANs) [16]. Different from DL, CML represents the set of non-neural-network models in ML, e.g., linear models [37], decision trees [26], random forests [4], and support vector machines [42]. DL stands out because of its accuracy, while CML is still widely used for its lower time and energy costs. Doris Xin et al. [47] analyze 3,000 production ML pipelines at Google and find that 40% of them use CML models. Besides, many real-world applications adopt hybrid deployments of CML and DL [2] to guarantee high accuracy and low latency [25, 27, 36, 38], e.g., DL models for feature embedding and CML models for classification or regression.

∗Corresponding author.

[Figure 1 (diagram): CML models (linear models, trees, SVMs, ...) pass through CMLCompiler's unified abstractions and compiler framework to DL frameworks (PyTorch) or DL compilers (TVM) and their runtimes, targeting CPU, GPU, IoT, and other hardware.]
Figure 1: The CMLCompiler design. Our contributions are highlighted in green color.

arXiv:2301.13441v1 [cs.LG] 31 Jan 2023

DL compilers, like TVM [7, 10, 23], provide a structural approach to tackle the portability issue; they facilitate wide deployment of DL models on a broad spectrum of devices like GPUs, FPGAs, and IoT devices while guaranteeing appreciable performance. DL compilers use computational graphs as high-level abstractions, supporting a large variety of DL models. Meanwhile, DL compilers propose low-level abstractions such as tensor representation to generate executable code. For newborn hardware, the vendor just needs to provide hardware primitives instead of a sophisticated high-performance library that is prohibitively costly. Based on the tensor representation and computational graph abstractions, many optimizations [8, 22, 49] are proposed to boost performance; e.g., they provide sophisticated support for CPU processors, which come with different architectures, diverse core counts, extended instructions, and cache sizes.

However, despite its popularity and importance, CML suffers from severe portability and performance issues. State-of-the-practice and state-of-the-art CML frameworks [17, 29, 32] provide ad-hoc solutions, implementing each CML model on every hardware device case by case due to the lack of unified abstractions. These ad-hoc solutions raise considerable difficulties in developing a general-purpose framework and optimization techniques that achieve optimal performance for every model. They either lack support for or only partially support various hardware devices, such as GPUs, FPGAs, and IoT devices. In addition, adding support for a model on a new hardware device takes great effort, more than several thousand lines of code [13], let alone for hundreds or thousands of models and devices. Moreover, they also face performance issues. Even on CPUs, the most popular CML platform, the performance is unsatisfactory due to the lack of specific optimizations for advanced characteristics like multi-cores and SIMD. The hybrid deployment of CML and DL models faces even more severe problems.

Our intuition is to enable CML to leverage DL's well-defined unified abstractions and highly mature compilers, optimization technologies, and frameworks. Unfortunately, this is not a trivial task. There are significant distinctions in operators and models between CML and DL. DL operators focus on tensors, while CML handles arrays, matrices, scalars, and tables. DL models are all neural network models, while CML models, such as decision trees and SVMs, can hardly be represented as neural networks. Most DL models are expressible as flat sequences of operations without if-statements [35], but if-statements frequently occur in CML models. Existing DL abstractions, such as tensor representation and computational graphs, cannot directly represent CML operators and models. These distinctions mean CML can hardly leverage the DL ecosystems directly. Several efforts attempt to support CML models on DL frameworks; e.g., TensorFlow [1] provides a CPU-based decision forest library, TF-DF [43]. However, these attempts do not solve the generality and portability issues. They only support a narrow range of models, lacking support for GPUs and IoT devices.

This paper focuses on CML inference as the first step, considering its great significance: it occupies nearly half of the total cost [2] and has wide applications in online serving, the Internet of Things (IoT), etc. [18, 46]. We will extend our work to CML training in the near future. As illustrated in Fig. 1, we propose a unified compiler, CMLCompiler, for CML inference, which enables CML to leverage the mature DL ecosystems. At the core of CMLCompiler are two unified abstractions, operator representations and extended computational graphs (ECGs), and a compiler framework. Operator representations convert CML operators into tensor formats, while an ECG organizes these converted operators in an optimization-friendly way. The two unified abstractions define how to convert and translate CML models into DL computational graphs, which can be recognized and executed by DL frameworks and compilers. The CMLCompiler framework consists of four modules: operator converter, model parser, graph optimizer, and graph translator. It performs the conversion and graph optimization based on the two unified abstractions, then outputs an optimized DL computational graph to DL compilers or frameworks. CMLCompiler can also optimize mixed pipelines of CML and DL. As TVM provides portability and sophisticated optimizations, we choose to implement CMLCompiler on TVM. Currently, it supports up to 35 CML models.

This paper makes the following contributions:
• We propose two unified abstractions, operator representations and extended computational graphs, to represent CML operators and models.
• We present the design of CMLCompiler, a unified compiler for CML inference, based on these abstractions. The CMLCompiler framework performs conversion and graph optimization based on the two unified abstractions, then outputs an optimized DL computational graph to DL compilers or frameworks.
• CMLCompiler enables the hybrid deployment of CML and DL with a unified framework.
• We implement CMLCompiler on top of TVM, achieving up to 4.38× speedup on CPU, 3.31× speedup on GPU, and 5.09× speedup on IoT devices, compared to the state-of-the-art solutions (scikit-learn, Intel sklearn, and Hummingbird). Our support for CML and DL mixed pipelines achieves up to 3.04× speedup compared with cross-framework implementations.

The remainder of the paper is organized as follows. Section 2 introduces the motivation. Section 3 introduces the unified abstractions. Section 4 shows the design and implementation. Section 5 presents our evaluation. Section 6 illustrates the related work. Finally, we draw a conclusion in Section 7.
2 MOTIVATION

CML faces severe portability and performance issues. Fig. 2 compares the performance of sklearn, the most widely used CML framework on GitHub [33], against CMLCompiler leveraging DL compilers. We find that sklearn cannot support GPUs and only partially supports IoT devices. Adding support for a new hardware device takes great effort due to the ad-hoc implementations. For example, adding support for random forest on GPU needs 2.7k lines of code [13]. Many models and hardware devices need to be supported, requiring hundreds or thousands of times this effort. Moreover, due to the lack of compilation support for CPU features, sklearn has poor performance. As shown in Fig. 2, CMLCompiler achieves 2.3× speedup over sklearn by utilizing AVX2 through compilation. Other CML frameworks such as Spark MLlib [29] and H2O [17] face the same problems. Our solution is to propose unified abstractions that utilize DL compilers and frameworks, achieving portability and high performance.

[Figure 2 (bar charts; labels not recoverable from the text extraction).]
Figure 2: This figure compares the performance of sklearn, the most widely used CML framework on GitHub [33], against CMLCompiler. Our evaluation shows that sklearn suffers from both performance and portability issues for lack of unified abstractions.

CML and DL models are often deployed together in NLP [36], intelligent healthcare [38], recommendation systems [25], etc., especially in scenarios with limited computational power and small datasets. Many of them are deployed on heterogeneous hardware devices for online serving. As there is no unified system, different frameworks are deployed side by side, with three disadvantages. First, this limits portability: if one framework fails on the target device, the whole pipeline breaks. Second, there are extra costs due to data conversions across frameworks. Third, it is hard to make optimizations across different frameworks. Using a unified framework can overcome these disadvantages, so we add support for the hybrid deployment of CML and DL in CMLCompiler.
3 THE UNIFIED ABSTRACTIONS

CMLCompiler takes CML models as input and returns DL computational graphs as output, utilizing DL frameworks or compilers to compile and deploy them. At the core of CMLCompiler are two unified abstractions. Operator representations are used to represent CML operators in tensor format, as shown in Section 3.1. The extended computational graph (ECG) organizes operator representations in an optimization-friendly way and can be used to represent CML models, as shown in Section 3.2. Section 3.3 shows the supported algorithms and extensions for other algorithms.

3.1 Operator Representation

An operator representation uses a combination of one or more DL operators with tensors as input and output to represent a CML operator. We convert CML operators into DL operators and wrap them in the format of operator representations. Data in CML mainly has four formats: arrays, matrices, scalars, and tables [44]. Matrices and arrays are regarded as two types of tensors, whose operators can naturally be converted into DL operators. When CML models deal with tables, they take numeric data from the tables and operate on it, and that data can also be regarded as scalars. Hereby, we focus on the operators on scalars.
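As a minimal sketch of lowering the non-tensor formats (hypothetical column data, not from the paper), numeric columns taken from a table can be stacked into a 2-D tensor, after which subsequent operators run as tensor operators:

```python
import numpy as np

# Hypothetical table: a mapping from column names to numeric columns.
table = {"age": [25, 32, 47], "income": [48.0, 61.5, 80.2]}

# Stack the numeric columns into an NS x NF tensor (3 samples, 2 features).
X = np.stack([np.asarray(col, dtype=np.float32) for col in table.values()],
             axis=1)
print(X.shape)  # (3, 2)
```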
3.1.1 Operator categories and corresponding representations. As shown in Table 1, we classify CML operators into six categories and provide operator representations for each.

(1) Assignment operators assign values to variables. If we assign n values v1, v2, ..., vn to n variables x1, x2, ..., xn, we organize these variables and values in two tensors X = [x1, x2, ..., xn] and V = [v1, v2, ..., vn]. Then we assign tensor V to tensor X to replace n scalar assignments. Tensor assignments benefit from memory copies that store data in blocks.

(2) Swap operators swap two or more variables. These variables can be represented in a tensor format, using reorganization operators such as reshape to swap the elements.

(3) Basic arithmetic operators refer to arithmetic calculations on scalars, such as add, sub, mul, and div. We replace them with element-wise arithmetic operators on tensors, which can better utilize SIMD instructions.

(4) Aggregation operators calculate aggregates over many scalars, such as min, max, sum, and avg. Reduction operators can be used to accomplish that.

(5) Comparison operators compare scalars and return True or False, such as less, equal, and greater. Comparisons with the same operator can be represented in a tensor format and replaced by an element-wise comparison.

(6) Conditional operators represent if-else statements of the form if (expr1) expr2 else expr3, where expr1 is a comparison operator. If expr2 and expr3 are both assignment or arithmetic operators, we convert all three expressions into tensors. However, the situation gets tricky if one of expr2 or expr3 is itself a conditional operator. We call those operators sequential conditional operators. Sequential conditional operators may contain many conditions, where each element in a tensor may have a quite different decision path. The complexity of decision paths makes it difficult to convert those operators into tensor operators, and the frequent if-else statements perform poorly on hardware devices such as GPUs and ASICs. Sequential conditional operators are the most delicate, and we defer their discussion later.
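The first five categories map naturally onto array operations. A minimal NumPy sketch (illustrative data, not from the paper) of replacing scalar-by-scalar comparison, aggregation, and arithmetic with their tensor counterparts:

```python
import numpy as np

x = np.array([1.0, 4.0, 2.5, 0.5], dtype=np.float32)
y = np.array([2.0, 3.0, 2.5, 1.0], dtype=np.float32)

# Comparison: n scalar comparisons x_i < y_i become one element-wise
# comparison returning a bool tensor.
mask = x < y                      # [ True, False, False, True]

# Aggregation: sum(x1, ..., xn) becomes a single reduction.
total = x.sum()                   # 8.0

# Basic arithmetic: n scalar additions become one element-wise add,
# which vectorizes across SIMD lanes.
z = x + y
print(mask, total, z)
```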
3.1.2 Conditional operator representation. We analyze widely used CML models and find that sequential conditional operators mainly occur in tree-based models. So we use the decision tree as an example to introduce the representation of conditional operators in detail, as shown in Fig. 3. We use a combination of DL operators to represent those sequential conditional operators.

The left of Fig. 3 is a decision tree. The input data is a list of samples, each with many features. I refers to internal nodes, numbered in level-order-traversal order. Each internal node is a conditional operator, making a comparison between a feature Fj and a constant threshold Ti. L refers to leaf nodes, numbered in in-order-traversal order. Each leaf node is an assignment operator; which node is reached determines the final result.

The right of Fig. 3 shows the operator representation, whose weight definitions and properties are shown in Table 2. The input data multiplied by W1 returns the features used in the internal nodes in an appropriate order. Comparing with W2 returns the choice of each internal node: 0 means left and 1 means right. These choices are multiplied by W3, and then argmax returns the first index of the maximum values for each row. For each sample xk, that index is the leaf node xk reaches, as proved in Appendix A.
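The mechanics of this representation can be checked on a toy tree. The sketch below is illustrative (a smaller tree than the one in Fig. 3, with ties sent right, i.e., the choice computed with >=); it encodes the tree into W1, W2, and W3 following the definitions in Table 2 and evaluates it as matmul, compare, matmul, argmax:

```python
import numpy as np

# Toy tree (hypothetical):
#   I1: f0 < 2 ?  left -> I2, right -> leaf L3
#   I2: f1 < 1 ?  left -> leaf L1, right -> leaf L2
# Leaves are numbered in in-order traversal: L1, L2, L3.

# W1 (NF x NI): one-hot map from features to internal nodes (naturally bool).
W1 = np.array([[1, 0],    # I1 tests feature 0
               [0, 1]],   # I2 tests feature 1
              dtype=np.float32)
# W2 (NI): thresholds of the internal nodes.
W2 = np.array([2.0, 1.0], dtype=np.float32)
# W3 (NI x NL): W3[i][j] = 0 iff leaf j lies in the left subtree of node i.
W3 = np.array([[0, 0, 1],   # left subtree of I1 holds L1, L2
               [0, 1, 1]],  # left subtree of I2 holds L1
              dtype=np.float32)

def predict_leaf(X):
    """Index of the leaf each sample reaches: matmul, compare, matmul, argmax."""
    choices = (X @ W1) >= W2          # per node: 0 = go left, 1 = go right
    return np.argmax(choices @ W3, axis=1)

X = np.array([[5.0, 0.0],   # right at I1             -> L3 (index 2)
              [0.0, 5.0],   # left at I1, right at I2 -> L2 (index 1)
              [0.0, 0.0]],  # left, left              -> L1 (index 0)
             dtype=np.float32)
print(predict_leaf(X))  # [2 1 0]
```

Note that choices are computed for every internal node, including nodes off a sample's path; the argmax over choices @ W3 nevertheless recovers the reached leaf, which is what Appendix A of the paper proves in general.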
3.1.3 The features of CML operator representations. As described above, we represent CML operators in the format of operator representations. These operator representations have unique features different from operators in DL models.

First, the weights of DL operators and CML operator representations have different meanings. The weights in DL models are all learnable parameters. Without approximate optimizations such as pruning and quantization, those weights are dense, and the data type (dtype) should be float32 to ensure accuracy. Many weights of CML operator representations have other meanings, such as representing the structure of conditional operators. Those weights are sparse and can naturally be expressed with low-precision dtypes such as bool. This natural sparsity brings the optimizations described in Section 4.3.2.

Second, the frequent operators in DL and CML are not the same. Almost all operators in DL take float32 as input and return float32 as output. CML uses many comparison operators, such as less, equal, and greater, which rarely occur in DL models. Those comparison operators take float or integer input and return bool tensors, bringing remarkable changes in the dtypes of inputs and outputs, which can be used to make the optimizations described in Section 4.3.1. Both DL and CML models use indices operators, which compare inputs and return indices, such as argsort and argmax. Those indices operators have mathematical properties that can be used to make graph-level optimizations, as described in Section 4.3.3. These optimizations can be ignored in DL models with dozens or hundreds of layers but are helpful for CML models with fewer layers.

Table 1: The summary of operator representations. Each operator representation represents a CML operator. Scalars are marked as lower-case letters, while tensors are marked as upper-case letters. EW is short for element-wise.

CML operator in scalar format → Operator representation in tensor format
- Assignment: x1 ← v1; x2 ← v2; ...; xn ← vn → Assignment: X = [x1, x2, ..., xn]; V = [v1, v2, ..., vn]; X ← V
- Swap: x1 ← x2; x2 ← x1 → Reorganization: X = [x1, x2]; reshape(X)
- Basic Arithmetic: x1 + y1; x2 + y2; ...; xn + yn → EW Arithmetic: X = [x1, ..., xn]; Y = [y1, ..., yn]; X + Y
- Aggregation: sum(x1, x2, ..., xn) → Reduction: X = [x1, x2, ..., xn]; sum(X)
- Comparison: x1 < y1; x2 < y2; ...; xn < yn → EW Comparison: X = [x1, ..., xn]; Y = [y1, ..., yn]; X < Y
- Conditional: if (expr1) expr2 else expr3 → Described in Section 3.1.2

[Figure 3 (diagram): a decision tree with internal nodes I1 (F5 < T1), I2 (F1 < T2), I3 (F4 < T3), I4 (F2 < T4) and leaves L1–L5, shown alongside its operator representation Input → matmul(W1) → greater(W2) → matmul(W3) → argmax → Output.]
Figure 3: An example of conditional operator representation in a decision tree, a typical classical machine learning model. F, T, I, and L refer to features, thresholds, internal nodes, and leaf nodes. W1, W2, and W3 are the weights of DL operators, whose definitions and properties are shown in Table 2; matmul is short for matrix multiplication.

Table 2: The properties of weights in Fig. 3. NS, NF, NI, and NL refer to the number of samples, features, internal nodes, and leaf nodes, respectively. Input ∈ R^(NS×NF) means NS samples, each with NF features. W1 ∈ {0, 1}^(NF×NI) captures the relationship between features and internal nodes. W2 ∈ R^NI holds the thresholds used in the internal nodes. W3 ∈ {0, 1}^(NI×NL) represents the structure between internal nodes and leaf nodes. Output ∈ N^NS returns the leaf node index each sample reaches. Dtype is the data type of the weights. Sparsity is the ratio of non-zero data to all data in the weights.

- W1[i][j] = 1 if Fi ∈ Condition(Ij), 0 otherwise; dtype: bool; sparsity: 1/NF
- W2[i] = Threshold(Ii); dtype: float32; sparsity: 1
- W3[i][j] = 0 if Lj ∈ LeftSubTree(Ii), 1 otherwise; dtype: bool; sparsity: [1/2, 1 − 1/NL]
3.2 Extended Computational Graph

This section introduces the extended computational graph (ECG), which organizes operator representations in an optimization-friendly way and can be used to represent CML models. The ECG is an extension of the DL computational graph. In general, a DL computational graph is represented as a directed graph where nodes represent operations on tensors or program inputs and edges represent data dependencies between operations [7]. From the perspective of DL frameworks and compilers, computational graphs are dense and float32 by default, as in neural network models. Approximate optimizations like pruning and quantization bring sparse and low-precision data to all operators and weights; these optimizations cause a decrease in accuracy and bring extra computation, such as calibration. When we convert CML operators to operator representations, part of the converted operators and weights are naturally sparse and low-precision. Using DL computational graphs to represent CML models directly is not precise enough and ignores many optimization opportunities arising from the data types and sparsity. So we extend the computational graph in DL systems into the extended computational graph (ECG) as the unified abstraction for CML models.

Before introducing the ECG, we first present more details about data type (dtype) and sparsity. We define the partial order relation for the dtypes used in our work:

float32 > int32/float16 > int16 > int8 > int4 > bool
Table 3: Operators used in ECGs
- Comparison: less, equal, greater, less_equal
- Indices: argmax, argmin, argsort, argwhere
- Monotonic: sigmoid, softmax, relu, tanh, exp
- Reduction: sum, max, min, avg, all, any
- Arithmetic: gemm, conv, pool
531
A lower dtype can be converted into a higher dtype without accuracy loss, while the backward conversion is forbidden because it loses accuracy. Using lower-dtype computation, such as int8 matmul, can speed up execution and reduce memory usage. However, there are many limitations to dtype optimization. For example, the inputs of the same operator should have the same dtype; thus, the dtype of an operator depends on the largest dtype of its inputs. Besides, many hardware devices have extended instructions for specific dtypes. For example, an Intel processor speeds up int8 computation using AVX instructions, while bool cannot benefit from that. Considering the complexity of dtype optimization, we add dtype as a property of ECG.

Sparsity is defined as the ratio of non-zero data to all data. If the sparsity of data is relatively small, we treat it as sparse data and store it in compressed sparse row (CSR) format. Using sparse operators to handle such sparse data can perform better than using dense operators. Taking advantage of sparsity influences optimization greatly, so we add sparsity as another property of ECG.

We classify the inputs of an operator into two categories: intermediate results and weights. Intermediate results are other operators' outputs and can only be handled during runtime. Input data is the first intermediate result in an ECG, while output data is the last. Intermediate results are represented as {sparsity, dtype, tensor}. If we want to change the dtype of an intermediate result, we must add a dtype-converting operator to the ECG.

Weights are model parameters loaded from trained models. Weights can be handled both during compilation and runtime, and a proper transformation during compilation can reduce runtime costs. Weights are represented as {sparsity, smallest_dtype, actual_dtype, tensor}. Smallest_dtype is the smallest dtype that represents the weight without accuracy loss; actual_dtype is the dtype actually used. Smallest_dtype depends on the property of the weight, while actual_dtype is fixed based on smallest_dtype and the operators. As shown in Fig. 3, W1 represents the relationship between input features and internal nodes for decision trees, which is a 0-1 matrix. The smallest_dtype of W1 is bool. However, W1 is multiplied by input data with a dtype of float32. If we chose bool as the actual_dtype, W1 would be converted to float32 during runtime. To reduce execution time at runtime, we should convert W1 to float32 during compilation, so we set actual_dtype to float32 rather than bool.
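As a hedged illustration of this actual_dtype choice (shapes are hypothetical and NumPy stands in for the compiled operators), precasting the 0-1 weight to float32 at compile time produces exactly the same product as casting it from bool on every inference call:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((4, 6), dtype=np.float32)           # input data, float32
W1_bool = rng.integers(0, 2, (6, 3)).astype(bool)  # 0-1 weight, smallest_dtype = bool

# actual_dtype = bool: the cast to float32 happens at every inference call
runtime_cast = X @ W1_bool.astype(np.float32)

# actual_dtype = float32: cast once during compilation, reuse at runtime
W1_f32 = W1_bool.astype(np.float32)
compile_cast = X @ W1_f32

assert np.array_equal(runtime_cast, compile_cast)
```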
Operators are represented in the form of {weights, intermediate_results, use_sparse, type, dtype, DL_operator}. Weights and intermediate_results are the inputs of the operator. Use_sparse is a flag indicating whether the sparse operator implementation is used, which is closely related to the sparse operator replacing optimization described in Section 4.3.2. Operator type is the category of the operator. As shown in Table 3, we divide operators used in ECGs into five categories. Comparison operators are operators that compare two tensors and return bool tensors. Indices operators are operators that return tensors' indices based on specific conditions. These two kinds of operators are dtype-lowering operators, whose output dtype is smaller than their input dtype. Models without such operators, such as most DL models, use the same dtype throughout the whole graph, where dtype optimizations cannot be applied without approximate optimization. CML models make much use of these operators, which enables wide usage of the dtype rewriting optimization described in Section 4.3.1. Monotonic operators are operators f that satisfy the following condition:

∀ x1 ≤ x2 ⟹ f(x1) ≤ f(x2)

A series of monotonic operators followed by an indices operator is mathematically equivalent to the indices operator alone. This property provides more optimizations, as described in Section 4.3.3. Reduction operators calculate aggregates over their input. Arithmetic operators cover the remaining arithmetic calculations. Operator dtype is the operator's data type, such as int8 matmul or float32 matmul; it depends on the dtypes of the weights and intermediate_results. DL_operator is the native definition of the operator in DL computational graphs, which we use to translate an ECG to a DL computational graph.

Table 4: Supported Algorithms

Preprocessing Algorithms: Binarizer, LabelBinarizer, Normalizer, MaxAbsScaler, MinMaxScaler, StandardScaler, RobustScaler, PolynomialFeatures, LabelEncoder
Feature Selectors: SelectKBest, VarianceThreshold
Linear Models: LogisticRegression, LogisticRegressionCV, Perceptron, RidgeClassifier, RidgeClassifierCV, SGDClassifier, LinearRegression, Ridge, RidgeCV, SGDRegressor
Tree-based Models: DecisionTreeClassifier, DecisionTreeRegressor, ExtraTreeClassifier, ExtraTreeRegressor, RandomForestClassifier, RandomForestRegressor, ExtraTreesClassifier, ExtraTreesRegressor, GradientBoostingClassifier, GradientBoostingRegressor
Support Vector Machines: LinearSVC, LinearSVR, NuSVR, SVR
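These two operator properties, dtype lowering and the monotonic/indices equivalence, can be checked numerically; a small NumPy sketch (illustrative only, not CMLCompiler's kernels):

```python
import numpy as np

x = np.array([[0.2, 1.5, -0.3], [2.0, 0.1, 0.4]], dtype=np.float32)

# Comparison and indices operators lower the dtype of their output.
assert (x > 0.5).dtype == np.bool_              # comparison -> bool
assert np.argmax(x, axis=1).dtype.kind == "i"   # indices -> integer dtype

def softmax(v):
    # Row-wise softmax; exp is monotonic and the normalizer is shared
    # per row, so the per-row ordering of entries is preserved.
    e = np.exp(v - v.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# A monotonic operator followed by an indices operator is redundant:
assert np.array_equal(np.argmax(softmax(x), axis=1), np.argmax(x, axis=1))
```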
3.3 Supported Algorithms and Extension for Other Algorithms
CMLCompiler supports 35 CML algorithms nowadays, as shown in Table 4, covering most of the popular CML algorithms [34]. Our work can also be extended to other algorithms, such as clustering and matrix decomposition. Most CML algorithms use operators categorized in Section 3.1.1, each of which can be converted to a corresponding operator representation (our low-level abstraction), guaranteeing our extensibility. We take Kmeans as an example.

Xu Wen et al.
Figure 4: The CMLCompiler architecture. CMLCompiler consists of a Model Parser, Operator Converter, Graph Optimizer, and Graph Translator, built on the unified abstractions: Operator Representation, Extended Computational Graph, and Optimized ECG.
Kmeans uses basic arithmetic operators to calculate the distance between nodes, which can be converted to element-wise arithmetic operators, and uses aggregation operators to perform clustering, which can be converted to reduction operators. When all operators of a CML algorithm are converted to operator representations, the algorithm can utilize our work to compile and make optimizations.
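As a sketch of that conversion (shapes and names hypothetical, NumPy standing in for the converted operators), the Kmeans assignment step can be written entirely with element-wise arithmetic, a sum reduction, and an indices operator:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((8, 2)).astype(np.float32)   # samples
C = rng.random((3, 2)).astype(np.float32)   # cluster centers

# Squared Euclidean distance via element-wise ops and a sum reduction:
# diff has shape (samples, centers, features) through broadcasting.
diff = X[:, None, :] - C[None, :, :]
dist2 = (diff * diff).sum(axis=2)

# Cluster assignment is an indices operator over the reduction result.
assign = np.argmin(dist2, axis=1)

# Matches a naive per-sample loop.
naive = np.array([np.argmin([((x - c) ** 2).sum() for c in C]) for x in X])
assert np.array_equal(assign, naive)
```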
4 DESIGN AND IMPLEMENTATION

This section illustrates the design and implementation of CMLCompiler, as shown in Fig. 4. We build our framework on the two unified abstractions, and it includes four parts. Operator Converter converts CML operators into operator representations, as shown in Section 4.1. Model Parser organizes those operator representations in an optimization-friendly way and uses ECGs to represent CML models, as shown in Section 4.2. Graph Optimizer performs graph-level optimizations, as described in Section 4.3. An optimized ECG is converted into a DL computational graph by Graph Translator, described in Section 4.4. DL frameworks or compilers take DL computational graphs as input and apply further optimizations, compiling them into executable modules for deployment. Section 4.5 shows the mixed usage of CML and DL. Section 4.6 shows the implementation details.
4.1 Operator Converter

Operator Converter traverses the operators in CML models and converts each into an operator representation. Operators based on matrices and arrays are converted into DL operators directly. Scalar-based operators are converted into DL operators based on their categories, according to Section 3.1. The converted DL operators are wrapped into operator representations.
4.2 Model Parser

Model Parser converts operator representations into an ECG, as shown in Algorithm 1. Operators in an operator representation are initialized as nodes in an ECG, the data structure of which is defined in Section 3.2. Operator.weights and operator.intermediate_results are set according to data dependencies, and edges are built between nodes. Operator.use_sparse and operator.dtype are set to False and Unknown, respectively. Operator.type is set according to the operator type, as defined in Table 3. Then weights and intermediate_results are initialized. Weight.sparsity is set to the ratio of non-zero data to all data of the weight, which is known during compilation. Weight.smallest_dtype is set to the smallest dtype without accuracy loss, and weight.actual_dtype is initialized to the same value. Intermediate_result.sparsity and intermediate_result.dtype are set according to the operator. When all operators have been visited, the ECG is established.

Algorithm 1 Model Parser
Input: Operator Representation
Output: Extended Computational Graph ECG
for operator in Operator Representation do
    Initialize operator as ECG node
    Set operator.weights and operator.intermediate_results according to data dependencies and build edges between nodes
    operator.use_sparse ← False
    operator.type ← operator type
    operator.dtype ← Unknown
    for weight in operator.weights do
        weight.sparsity ← the ratio of non-zero data to all data
        weight.smallest_dtype ← the smallest dtype without accuracy loss
        weight.actual_dtype ← weight.smallest_dtype
    end for
    for ir in operator.intermediate_results do
        set ir.sparsity and ir.dtype according to operator
    end for
end for
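A minimal executable rendering of Algorithm 1 (the dataclass names and the dtype rule are our own simplifications, not CMLCompiler's actual classes):

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Weight:
    tensor: np.ndarray
    sparsity: float = 1.0
    smallest_dtype: str = "float32"
    actual_dtype: str = "float32"

@dataclass
class Operator:
    name: str
    weights: list = field(default_factory=list)
    use_sparse: bool = False
    op_type: str = "arithmetic"
    dtype: str = "unknown"

def smallest_dtype_of(t: np.ndarray) -> str:
    # Simplified rule: 0/1 matrices fit in bool, small ints in int8.
    if np.isin(t, (0, 1)).all():
        return "bool"
    if t.dtype.kind in "iu" and np.abs(t).max() < 128:
        return "int8"
    return "float32"

def parse(op: Operator) -> Operator:
    # Initialize the weight properties exactly as Algorithm 1 prescribes.
    for w in op.weights:
        w.sparsity = np.count_nonzero(w.tensor) / w.tensor.size
        w.smallest_dtype = smallest_dtype_of(w.tensor)
        w.actual_dtype = w.smallest_dtype
    return op

op = parse(Operator("matmul", [Weight(np.eye(4, dtype=np.float32))]))
assert op.weights[0].sparsity == 0.25
assert op.weights[0].smallest_dtype == "bool"
```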
4.3 Graph Optimizer

Graph Optimizer performs graph-level optimizations, using functionally equivalent transformations of ECGs. These optimizations are based on the features of CML models and do not influence accuracy. There are three specific graph rewriting optimizations: dtype rewriting, sparse operator replacing, and redundant elimination.

4.3.1 Dtype rewriting. Dtype rewriting replaces high-precision computation with low-precision computation, which is faster and uses less memory. As analyzed in Section 3.1.3, many weights used in CML can be represented as bool or int8. Besides, the comparison operators and indices operators widely used in CML are dtype-lowering operators, so the intermediate results after those operators are bool or int8. When intermediate data and weights can both be expressed in a low-precision dtype, the corresponding operators can be converted into low-precision computation as well.

As shown in Fig. 5a, the top is the ECG of decision trees before optimization; many details are hidden. Weight W3 represents the relationship between leaf nodes and internal nodes for decision trees, which is a matrix containing only 0 and 1. The smallest_dtype of W3 is bool. The output of the greater operator has a dtype of bool as well. So the following matrix multiplication (matmul) operator can use a dtype of bool rather than float32. Intel processors speed up int8 computation using AVX instructions, while bool cannot benefit from that feature. So we convert the dtype of matmul to int8 according to the hardware specification. The bottom of Fig. 5a shows the ECG after graph rewriting: white weights and operators use float32, while gray weights and operators use int8.
(a) Dtype Rewriting (b) Sparse Operator Replacing (c) Redundant Elimination
Figure 5: Graph rewriting optimizations. Dtype rewriting converts float32 operators and weights into low-precision ones. Sparse operator replacing converts dense operators and weights into sparse ones. Redundant elimination removes redundant operators.
Now we introduce the dtype rewriting principle in detail. Algorithm 2 shows the procedure of dtype rewriting:

(1) Visit all operators in the ECG. For each operator, dtype is set to the largest dtype of all inputs. After that, the operator dtype is converted to the dtype that best utilizes the hardware's SIMD instructions. We keep a list of hardware specifications to modulate the operator dtype. To guarantee accuracy, the dtype cannot get smaller. Then we modulate the operator implementation based on the operator dtype.

(2) Once the operator dtype is fixed, we set the input dtypes. The dtype of weights is set to the same as the operator, reducing dtype conversion at runtime. The dtype of intermediate results cannot be converted during compilation, so we add a dtype-converting operator, i.e., cast, before the operator.

We note the differences between dtype rewriting for CML models and model quantization for DL models. Quantization is an approximate algorithm for DL models that causes a decrease in accuracy and brings extra computation, such as calibration. Dtype rewriting for CML models is based on the properties of CML, converting the dtype of operators and weights with no accuracy decrease and no extra computation.

Algorithm 2 Dtype Rewriting
Input: ECG G, hardware configuration H
Output: Optimized ECG G'
for operator in G do
    operator.dtype ← largest dtype in operator.weights and operator.intermediate_results
    Modulate operator.dtype based on H
    Modulate operator.DL_operator based on operator.dtype
    for weight in operator.weights do
        weight.actual_dtype ← operator.dtype
    end for
    for data in operator.intermediate_results do
        if data.dtype < operator.dtype then
            Add cast(data, operator.dtype) before operator
        end if
    end for
end for
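The "no accuracy decrease" claim holds because the rewritten operands are exactly representable in the lower dtype; a small NumPy check (toy sizes chosen so the int8 accumulation cannot overflow, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.integers(0, 2, (5, 7))   # bool-valued intermediate result (0/1)
B = rng.integers(0, 2, (7, 4))   # 0-1 weight such as W3

# float32 matmul vs. the rewritten int8 matmul:
ref = A.astype(np.float32) @ B.astype(np.float32)
low = A.astype(np.int8) @ B.astype(np.int8)  # entries are at most 7, no overflow

assert np.array_equal(ref.astype(np.int32), low.astype(np.int32))
```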
4.3.2 Sparse operator replacing. Replacing dense operators with sparse operators can also speed up execution. Algorithm 3 shows the procedure of sparse operator replacing. The sparsity of input data is not known until runtime, while the sparsity of weights is known during compilation. So we convert the data format of weights rather than of input data. Different hardware devices have different support for sparse operators. For example, CPUs can benefit from sparse computation while GPUs benefit little. So we set a threshold based on the hardware specification. If the weight sparsity is smaller than the threshold, we store the weight in compressed sparse row (CSR) format. Then we convert the corresponding operator into a sparse implementation. An example is shown in Fig. 5b, where we convert W1 and the corresponding matmul to sparse.

Algorithm 3 Sparse Operator Replacing
Input: ECG G, Threshold T
Output: Optimized ECG G'
for operator in G do
    for weight in operator.weights do
        if weight.sparsity < T then
            Store weight in CSR format
            operator.use_sparse ← True
            Convert operator.DL_operator into sparse implementation
        end if
    end for
end for
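To make the CSR conversion concrete, here is a self-contained sketch of the format and a sparse matrix-vector product (plain NumPy, not CMLCompiler's actual kernels):

```python
import numpy as np

def to_csr(W):
    """Split W into CSR arrays: non-zero values, column indices, row pointers."""
    data, indices, indptr = [], [], [0]
    for row in W:
        nz = np.nonzero(row)[0]
        data.extend(row[nz])
        indices.extend(nz)
        indptr.append(len(indices))
    return np.array(data), np.array(indices), np.array(indptr)

def csr_matvec(data, indices, indptr, x):
    """y = W @ x using only the stored non-zeros."""
    y = np.zeros(len(indptr) - 1, dtype=x.dtype)
    for i in range(len(y)):
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = data[lo:hi] @ x[indices[lo:hi]]
    return y

W = np.array([[0., 2., 0.], [0., 0., 0.], [1., 0., 3.]])
x = np.array([1., 1., 1.])
assert np.allclose(csr_matvec(*to_csr(W), x), W @ x)
```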
4.3.3 Redundant elimination. Redundant elimination eliminates operators that do not influence the final results due to their mathematical properties. For example, a series of monotonic operators followed by an indices operator is mathematically equivalent to the indices operator alone. Algorithm 4 shows the procedure of redundant elimination. For each operator in an ECG, we check its operator type. If a monotonic operator is followed by another monotonic operator, we fuse them. We eliminate a monotonic operator if it is followed by an indices operator. An example is shown in Fig. 5c, where the softmax before argmax is eliminated.
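Algorithm 4 over a linear chain of operators can be sketched in a few lines (the operator-type tags are our own toy encoding, not CMLCompiler's):

```python
OP_TYPE = {"matmul": "arithmetic", "add": "arithmetic",
           "softmax": "monotonic", "sigmoid": "monotonic",
           "argmax": "indices"}

def eliminate_redundant(chain):
    """Drop the run of monotonic operators directly preceding an
    indices operator; their composition cannot change the result."""
    out = []
    for op in chain:
        if OP_TYPE[op] == "indices":
            while out and OP_TYPE[out[-1]] == "monotonic":
                out.pop()
        out.append(op)
    return out

# The logistic-regression pattern from Fig. 5c:
assert eliminate_redundant(["matmul", "add", "softmax", "argmax"]) \
    == ["matmul", "add", "argmax"]
```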
Algorithm 4 Redundant Elimination
Input: Extended Computational Graph G
Output: Optimized ECG G'
for operator in G do
    if operator.type == "monotonic" then
        Check the next operator operator'
        if operator'.type == "monotonic" then
            Merge operator and operator'
        else if operator'.type == "indices" then
            Eliminate operator
        end if
    end if
end for

4.4 Graph Translator

Graph Translator converts the optimized ECG into a DL computational graph, choosing the proper implementation based on the ECG
Figure 6: CMLCompiler uses a single ECG to represent a CML and DL mixed pipeline, replacing a cross-framework implementation of separate DL and CML models.
and hardware specification information. DL frameworks or compilers, like TVM, take DL computational graphs as input, make further optimizations, and finally compile them into executable modules.

4.5 Hybrid Deployment of CML and DL with a Unified Framework

We convert CML and DL hybrid applications under a unified framework to reduce the cost of switching frameworks and to provide an opportunity for end-to-end optimizations, as shown in Fig. 6. We load models from PyTorch and sklearn and convert them into ECG subgraphs. We build edges according to data dependencies and merge those subgraphs into a single ECG. Then we can use the optimizations both in our work and in DL compilers. Finally, we compile and deploy it on diverse hardware devices.
+ it on diverse hardware devices.
935
+ 4.6
936
+ Implementation
937
+ Due to the benefits in portability and performance, we implement
938
+ CMLCompiler on the basis of TVM. The intermediate representa-
939
+ tions and transforms are all written in python. We read trained
940
+ models from CML frameworks such as sklearn and convert them
941
+ into operator representations, implementing them in the format
942
+ of TVM relay functions and storing their weights in TVM arrays.
943
+ We wrap those relay functions in the format of ECGs. After opti-
944
+ mizations in Section 4.3, we convert ECGs into TVM’s IRModules.
945
+ Then we utilize TVM to make more optimizations and compile to
946
+ executable modules based on specific hardware targets. We use
947
+ cross-compilation to support a broad spectrum of hardware devices.
948
+ We deploy them on lightweight runtime based on TVM runtime
949
+ and make inference on various hardware devices.
950
5 EVALUATION

This section summarizes the evaluation. Section 5.1 shows the experimental setup. Section 5.2 evaluates the performance of graph rewriting optimizations based on ECGs. Section 5.3 compares our work with state-of-the-art frameworks. Section 5.4 evaluates the hybrid deployment of CML and DL.
5.1 Experimental Setup

We use a server node equipped with two Xeon E5-2620 V3 (Haswell) CPUs, an Nvidia Titan RTX GPU, and 64 GB of memory to conduct the experiments on CPU and GPU. Each CPU contains six physical cores. The GPU contains 4608 CUDA cores and 24 GB of memory. The operating system is Ubuntu 16.04, and the other software includes TVM 0.8, PyTorch 1.8.1, hummingbird 0.3.1, scikit-learn 1.0.1, and CUDA 10.2. For the IoT experiments, we use a Raspberrypi4b with the Raspbian 10 operating system and deploy the above software in the same versions. We use YearPrediction [12] as the dataset, with 515,345 samples and 90 features. We use 80% of the data to train models and 20% to make inference. We run all the experiments five times and use the average as the final result. We test hummingbird [30] using both of its backends (PyTorch and TVM) and select the best results.
5.2 Optimizations

This section evaluates the graph rewriting optimizations based on ECGs, as described in Section 4.3. These optimizations (dtype rewriting, sparse operator replacing, and redundant elimination) can work together and produce cumulative optimization effects. They can also coexist with the optimizations in TVM. We choose four typical tree models: DecisionTreeClassifier, RandomForestClassifier, ExtraTreeClassifier, and ExtraTreesClassifier, as well as two typical linear models: LogisticRegression and SGDClassifier. We evaluate dtype rewriting and sparse operator replacing for the tree models, and redundant elimination for the linear models, according to their unique patterns.

Fig. 7a shows the results on CPU. For tree models, using our work without optimizations gives a 1.31x-2.54x speedup compared with sklearn; this is due to our abstractions, which utilize the optimizations of TVM, including better utilization of SIMD instructions and multiple cores. Dtype rewriting and sparse operator replacing bring 1x-1.21x and 1.26x-1.75x speedup, respectively, achieving a 1.27x-2.11x speedup together, 1.84x-4.44x faster than sklearn. For linear models, our work without optimizations runs slower than sklearn. However, redundant elimination brings a 1.22x-1.51x speedup; the result after our optimizations is 1.06x-1.14x faster than sklearn.

Fig. 7b shows the results on IoT devices. Note that sklearn lacks sufficient support for IoT devices. For example, 64-bit tree models trained on servers cannot be executed on a Raspberrypi4b with a 32-bit operating system. Retraining those models in 32-bit format on the Raspberrypi4b from scratch takes more time, so we regard those models as unsupported, marked with a cross. We therefore take our work without optimizations as the baseline. Dtype rewriting and sparse operator replacing bring 1.01x-1.33x and 1.23x-2.3x speedup, respectively, achieving a 1.49x-2.53x speedup together. For linear models,
(a) CPU (b) Raspberrypi4b
Figure 7: Graph rewriting optimizations. "base" means our work without optimizations. "DR" means only using dtype rewriting. "DR+SOR" means using both dtype rewriting and sparse operator replacing. "RE" means using redundant elimination.
our work without optimizations achieves a 1.71x-1.84x speedup. Using redundant elimination brings 1.08x-1.14x more speedup, 1.95x-1.98x faster than sklearn. The computation part on GPU takes less than 20% of the time, so these optimizations play a limited role on GPU. In conclusion, CML models can benefit from both TVM's optimizations and our optimizations and achieve an obvious speedup.
5.3 Overall Results

This section evaluates 14 typical CML algorithms, covering preprocessing algorithms, linear models, tree-based models, and SVMs, on CPU, GPU, and IoT devices, compared with state-of-the-art frameworks including sklearn, Intel extension for sklearn [20], and hummingbird. It contains two parts: batch experiments over all data and query experiments over a single record.

The differences between the accuracy of CMLCompiler and sklearn are all less than 1 × 10^-5, which means that our work does not affect accuracy. The outputs on different hardware are all the same, so we focus on performance hereinafter. Table 5 shows the performance of the batch experiments. On CPU, our work achieves the best performance on 12 of 14 algorithms, with a 1.02x-10.57x speedup compared with sklearn, a 1.14x-4.38x speedup compared with hummingbird, and a 1.44x-8.47x speedup compared with intel sklearn. On GPU, our work achieves competitive performance compared with hummingbird: it performs better on 11 of 14 algorithms, with a 1.11x-3.31x speedup. On the IoT device Raspberrypi4b, our work performs better on 13 of 14 algorithms, with a 1.28x-5.09x speedup.

Table 6 shows the performance of the query experiments over a single record. On CPU, our work achieves the best performance on 11 of 14 algorithms, with a 1.36x-170.68x speedup compared with sklearn, a 1.56x-4.47x speedup compared with hummingbird, and a 1.31x-169.43x speedup compared with intel sklearn. On GPU, our work performs better on 10 of 14 algorithms compared with hummingbird, with a 1.41x-4.64x speedup. Our latency on Raspberrypi4b does not differ much from sklearn; however, we perform better in model support.

In conclusion, we have advantages in both batch and query experiments on all three hardware devices. Many models in sklearn only support a single core and cannot fully utilize SIMD instructions. We perform better than sklearn and intel sklearn due to better utilization of multiple cores and SIMD instructions through compilation. Hummingbird uses both PyTorch and TVM as backends, of which TVM performs better in most cases in our evaluations. It implements models in PyTorch and converts them into TVM using the from_pytorch API. This conversion is not direct and efficient enough, causing a performance decrease. Besides, hardware information is lost during the conversion, which limits TVM's optimizations for hummingbird. We map ECGs into Relay operators directly and select the most efficient implementation based on the ECGs and hardware specification information. Additionally, our abstractions bring more optimizations, as described in Section 4.3, bringing up to a 2.53x speedup, working together to achieve better performance.
5.4 Hybrid Deployment of CML and DL

This section shows three hybrid deployment cases of CML and DL. As the baselines, without a unified framework, a DL framework is used to implement the DL algorithms, while a CML framework is used to implement the CML algorithms. Our work converts the CML and DL models into a single ECG, applying optimizations and compiling to diverse hardware devices. We test the latency of a single query, which is essential in real-world applications.
5.4.1 Sentence Sentiment Classification. The first case is sentence sentiment classification, which uses Bert to embed English sentences and logistic regression to make a classification [36]. We use BERT-tiny [3] as the pre-trained Bert model and SST2 [40] as the dataset. The baseline implements BERT-tiny in pytorch-transformers [45] and logistic regression in sklearn. The result is shown in Fig. 8a. Our work achieves a 1.67x speedup on server CPUs. Pytorch-transformers cannot be installed on IoT devices, so the baseline cannot run on Raspberrypi4b. The latency of our work on Raspberrypi4b is 18 milliseconds, which is acceptable in most use cases.

5.4.2 Radiographic Image Analysis. The second case uses Deep Hybrid Learning [38] to analyze radiographic images, which uses
Table 5: Execution time for batch experiments over all data on CPU (12 cores), GPU, and IoT devices (take Raspberrypi4b as an example) in milliseconds. SK, HB, and Intel are short for scikit-learn, hummingbird, and Intel extension for sklearn, respectively. "-" means unsupported.

| Algorithm | CPU: SK | CPU: HB | CPU: Intel | CPU: Our | GPU: HB | GPU: Our | IoT: SK | IoT: Our |
| Binarizer | 97 | 31 | 77 | 9 | 19 | 6 | 634 | 126 |
| Normalizer | 25 | 33 | 15 | 15 | 7 | 5 | 241 | 168 |
| MinMaxScaler | 19 | 31 | 13 | 8 | 21 | 6 | 199 | 148 |
| RobustScaler | 28 | 32 | 25 | 12 | 19 | 5 | 343 | 156 |
| LinearRegression | 12 | 18 | 4 | 6 | 6 | 7 | 61 | 116 |
| LogisticRegression | 98 | 104 | 137 | 86 | 7 | 7 | 1889 | 952 |
| SGDClassifier | 94 | 98 | 139 | 88 | 9 | 7 | 1886 | 969 |
| DecisionTreeClassifier | 33 | 48 | 23 | 16 | 7 | 5 | - | 99 |
| DecisionTreeRegressor | 7 | 19 | 3 | 15 | 7 | 6 | - | 211 |
| RandomForestClassifier | 2130 | 885 | 2003 | 601 | 20 | - | - | 5820 |
| ExtraTreeClassifier | 29 | - | 26 | 16 | - | 6 | - | 206 |
| ExtraTreesClassifier | 10022 | 2522 | 9421 | 2256 | 99 | - | - | 47959 |
| LinearSVC | 92 | 122 | 152 | 77 | 9 | 6 | 1896 | 930 |
| LinearSVR | 39 | 26 | 34 | 5 | 6 | 5 | 323 | 112 |
+ Table 6: Latency for query experiments over one single record on CPU (12 cores), GPU, and IoT devices (take Raspberrypi4b
1264
+ as an example) in milliseconds. The symbols are the same as Table 5.
1265
+ Algorithm
1266
+ CPU
1267
+ GPU
1268
+ IOT
1269
+ SK
1270
+ HB
1271
+ Intel
1272
+ Our
1273
+ HB
1274
+ Our
1275
+ SK
1276
+ Our
1277
+ Binarizer
1278
+ 0.2
1279
+ 0.26
1280
+ 0.34
1281
+ 0.09
1282
+ 0.93
1283
+ 0.64
1284
+ 0.44
1285
+ 0.59
1286
+ Normalizer
1287
+ 0.32
1288
+ 0.26
1289
+ 0.28
1290
+ 0.11
1291
+ 0.25
1292
+ 0.68
1293
+ 0.59
1294
+ 0.41
1295
+ MinMaxScaler
1296
+ 0.15
1297
+ 0.31
1298
+ 0.14
1299
+ 0.09
1300
+ 0.91
1301
+ 0.63
1302
+ 0.33
1303
+ 0.37
1304
+ RobustScaler
1305
+ 0.14
1306
+ 0.22
1307
+ 0.14
1308
+ 0.11
1309
+ 1.02
1310
+ 0.72
1311
+ 0.37
1312
+ 0.37
1313
+ LinearRegression
1314
+ 0.24
1315
+ 0.35
1316
+ 0.32
1317
+ 0.1
1318
+ 0.91
1319
+ 0.55
1320
+ 0.52
1321
+ 0.69
1322
+ LogisticRegression
1323
+ 0.35
1324
+ 0.36
1325
+ 0.29
1326
+ 0.19
1327
+ 3.29
1328
+ 0.71
1329
+ 0.67
1330
+ 2.59
1331
+ SGDClassifier
1332
+ 0.4
1333
+ 0.35
1334
+ 0.29
1335
+ 0.23
1336
+ 2.93
1337
+ 0.67
1338
+ 0.68
1339
+ 0.65
1340
+ DecisionTreeClassifier
1341
+ 0.24
1342
+ 1.62
1343
+ 0.27
1344
+ 0.36
1345
+ 3.01
1346
+ 0.8
1347
+ -
1348
+ 0.9
1349
+ DecisionTreeRegressor
1350
+ 0.22
1351
+ 0.22
1352
+ 0.25
1353
+ 0.38
1354
+ 1.03
1355
+ 0.72
1356
+ -
1357
+ 0.88
1358
+ RandomForestClassifier
1359
+ 103.96
1360
+ 1.6
1361
+ 103.2
1362
+ 0.61
1363
+ 2.56
1364
+ -
1365
+ -
1366
+ 1.05
1367
+ ExtraTreeClassifier
1368
+ 0.23
1369
+ -
1370
+ 0.4
1371
+ 0.47
1372
+ -
1373
+ -
1374
+ -
1375
+ 1.81
1376
+ ExtraTreesClassifier
1377
+ 205.27
1378
+ 12.74
1379
+ 204.25
1380
+ 1.73
1381
+ 2.41
1382
+ -
1383
+ -
1384
+ 3.11
1385
+ LinearSVC
1386
+ 0.4
1387
+ 0.37
1388
+ 0.45
1389
+ 0.19
1390
+ 2.71
1391
+ 0.61
1392
+ 0.65
1393
+ 1.07
1394
+ LinearSVR
1395
+ 0.31
1396
+ 0.34
1397
+ 0.37
1398
+ 0.09
1399
+ 0.91
1400
+ 0.62
1401
+ 0.54
1402
+ 0.91
[Figure 8 panels: (a) Bert+LogisticRegression for sentence sentiment classification; (b) SimpleDNN+RandomForest for radiographic image analysis; (c) GBDT+Wide&Deep for click through prediction.]
Figure 8: The latency of a single query for CML and DL mixed pipelines. All three baselines cannot run on IoT devices.
CMLCompiler: A Unified Compiler for Classical Machine Learning

simple DNN to perform feature engineering and CML models such as random forests to perform classification. We use CheXpert [21] as the dataset. The baseline implements the DNN in PyTorch and the random forest in sklearn. The result is shown in Fig. 8b. Our work achieves a 2.3x speedup on server CPUs. The pre-trained random forest cannot run on IoT devices, while our work solves this problem through cross-compilation.

5.4.3 Click Through Rate Prediction. The third case is click-through rate prediction as used in the recommendation systems of our anonymous industry partners, using GBDT [15] to extract features and the Wide & Deep [9] model to make predictions. We use Avazu^1 as the dataset. The baseline implements GBDT in sklearn and Wide & Deep in PyTorch. The result is shown in Fig. 8c. We achieve a 3.04x speedup on server CPUs. The GBDT model in the baseline cannot be executed on IoT devices, while our latency on IoT devices is only 5.06 ms.
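To make the mixed-pipeline idea concrete, the sketch below (our illustration with made-up thresholds and weights, not the paper's actual code) expresses a CML feature extractor and a DL stage in a single tensor program: a depth-1 "GBDT" stump one-hot-encodes the leaf a sample reaches, and a dense layer then consumes those leaf features, so both stages can be deployed together.

```python
import numpy as np

# Hypothetical parameters for illustration only.
stump_threshold = 0.5                     # CML stage: one decision stump
dense_w = np.array([[0.2], [0.9]])        # DL stage: (n_leaf_features=2, 1)

def pipeline(x):
    leaf = (x >= stump_threshold).astype(int)   # 0 = left leaf, 1 = right leaf
    leaf_onehot = np.eye(2)[leaf]               # stump expressed as tensor ops
    return (leaf_onehot @ dense_w).item()       # dense layer: plain matmul

print(pipeline(np.array(0.8)))   # sample falls in the right leaf -> 0.9
```

Because every step is a tensor operation, the whole pipeline forms one computational graph instead of crossing a framework boundary between sklearn and PyTorch.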
6 RELATED WORK
CML frameworks and libraries can be divided into three categories. (1) General-purpose solutions use one framework to support various models. Scikit-learn [32] is the most widely used CML framework on GitHub [33]. Spark MLlib [29] is an extension of Spark [48]. H2O [17] uses MapReduce [11] to support both CML and DL. There are many other works, such as Shogun [41] and RapidMiner [19]. These frameworks only support CPUs and suffer from severe performance and portability issues. (2) Specific-purpose solutions focus on one type of model. LibLinear [14] supports logistic regression and linear SVMs. LibSVM [5] focuses on SVMs. These works are limited to CPUs. Some other works attempt to support various hardware devices. XGBoost [6] implements the gradient boosting decision tree algorithm on CPUs and GPUs. Muhsen Owaida et al. [31] bring XGBoost to FPGAs. Toby Sharp [39] implements decision trees and forests on GPUs. These frameworks support only a narrow variety of models and solve the portability problem to a certain extent. (3) Extensions based on DL attempt to utilize DL frameworks to support CML models. TF-DF [43] is a decision forest library based on TensorFlow but is limited to CPUs. It is implemented in an ad hoc way, losing the portability of DL frameworks. Hummingbird [30] is a general-purpose solution based on PyTorch, adding support for GPUs. It uses the abstractions of DL frameworks directly without digging into the features of CML, missing many optimization chances.

7 CONCLUSION
This paper presented the design and implementation of CMLCompiler, a unified compiler for classical machine learning (CML) inference. CMLCompiler proposed two unified abstractions: operator representations and extended computational graphs (ECGs). Operator representations convert CML operators into tensor formats, while an ECG organizes these converted operators in an optimization-friendly way. The CMLCompiler framework performs the conversion and graph optimization based on these two unified abstractions, then outputs an optimized computational graph to deep learning compilers or frameworks. CMLCompiler also enables the hybrid deployment of CML and DL within a unified framework. Our implementation of CMLCompiler on top of TVM shows its effectiveness, achieving up to 4.38x speedup on CPUs, 3.31x speedup on GPUs, and 5.09x speedup on IoT devices, compared to the state-of-the-art solutions: scikit-learn, Intel sklearn, and Hummingbird. Our support for CML and DL mixed pipelines achieves up to 3.04x speedup compared with cross-framework implementations.

^1 https://www.kaggle.com/c/avazu-ctr-prediction
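The operator-representation idea can be illustrated with the simplest CML operator. A trained logistic regression is just a pair of parameters, so its inference reduces to one matmul, one bias add, and an argmax, exactly the kind of tensor graph a DL compiler can optimize and retarget. The weights below are made up for illustration; this is our sketch, not CMLCompiler's actual implementation.

```python
import numpy as np

# Hypothetical trained parameters (sklearn would call them coef_/intercept_).
coef = np.array([[ 1.0, -2.0],
                 [-0.5,  0.5],
                 [ 0.2,  1.5]])          # (n_features=3, n_classes=2)
intercept = np.array([0.1, -0.1])

def predict(X):
    logits = X @ coef + intercept        # dense layer: matmul + bias add
    return logits.argmax(axis=1)         # class decision; softmax is not needed

X = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
print(predict(X))                        # -> [0 1]
```

Note that the sigmoid/softmax can be dropped for hard classification because argmax is monotone-invariant, one of the CML-specific simplifications a generic DL mapping would miss.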
A PROOF
Here we prove that $\arg\max$ in Fig. 3 returns the leaf node that is finally reached. $N_S$, $N_I$, and $N_L$ refer to the number of samples, internal nodes, and leaf nodes, respectively. $I$ refers to internal nodes, numbered in level-order traversal. $L$ refers to leaf nodes, numbered in in-order traversal. $X \in \{0,1\}^{N_S \times N_I}$ is the result after comparison with $W_2$. Each row $X_i \in \{0,1\}^{N_I}$ refers to the choices for one sample x, marked as $\vec{x}$. $W_3 \in \{0,1\}^{N_I \times N_L}$ can be regarded as a list of column vectors $\{\vec{L}_1, \vec{L}_2, \ldots, \vec{L}_{N_L}\}$, where $\vec{L}_i \in \{0,1\}^{N_I}$ represents the relationship between leaf node $L_i$ and all internal nodes. Then we should prove that $\arg\max(\vec{x} \cdot \vec{L}_1, \vec{x} \cdot \vec{L}_2, \ldots, \vec{x} \cdot \vec{L}_{N_L})$ returns the leaf x reaches, where $\arg\max$ returns the index of the maximum value among the input tensor; it returns the first index if the maximum appears more than once. We assume that $L_k$ is the leaf node x reaches.

First we prove that $\vec{x} \cdot \vec{L}_k$ is the maximum value in $\{\vec{x} \cdot \vec{L}_1, \vec{x} \cdot \vec{L}_2, \ldots, \vec{x} \cdot \vec{L}_{N_L}\}$. We define the path from the root node $I_0$ to $L_k$ as the decision path of x.

$L_k[i] = \begin{cases} 0, & \text{left is chosen at } I_i \text{ and } I_i \in \text{decision path} \\ 1, & \text{otherwise} \end{cases}$

$x[i] = \begin{cases} 0, & \text{left is chosen at } I_i \\ 1, & \text{right is chosen at } I_i \end{cases}$

Because x reaches $L_k$, if $x[i] = 1$ and $I_i \in$ decision path, then $L_k[i] = 1$. Below, DP denotes the decision path; "right" ("left") means choosing the right (left) branch at an internal node.

$\vec{x} \cdot \vec{L}_k = \sum_i x[i] \cdot L_k[i]$
$= \sum_{i,\ \text{right in } I_i} 1 \cdot L_k[i] + \sum_{i,\ \text{left in } I_i} 0 \cdot L_k[i]$
$= \sum_{i,\ \text{right in } I_i} 1 \cdot L_k[i]$
$= \sum_{i,\ \text{right in } I_i \in \mathrm{DP}} 1 \cdot L_k[i] + \sum_{i,\ \text{right in } I_i \notin \mathrm{DP}} 1 \cdot L_k[i]$
$= \sum_{i,\ \text{right in } I_i \in \mathrm{DP}} 1 \cdot 1 + \sum_{i,\ \text{right in } I_i \notin \mathrm{DP}} 1 \cdot 1$
$= \text{count of 1s in } \vec{x}$

$\vec{x}$ and $\{\vec{L}_1, \vec{L}_2, \ldots, \vec{L}_{N_L}\}$ are all 0-1 vectors, so the count of 1s in $\vec{x}$ is the maximum possible value of $\{\vec{x} \cdot \vec{L}_1, \vec{x} \cdot \vec{L}_2, \ldots, \vec{x} \cdot \vec{L}_{N_L}\}$.

Then we prove that $k$ is the first index that attains the maximum. Assume there exists a leaf node $L_t$ ahead of $L_k$ that satisfies $\vec{x} \cdot \vec{L}_t = \text{maximum}$. Since $L_t$ is ahead of $L_k$ and the leaf nodes are numbered in in-order traversal, there exists an internal node $I_i$ such that $L_t$ is in the left subtree of $I_i$ and $L_k$ is in the right subtree of $I_i$. x passes through $I_i$ and reaches $L_k$ in its right subtree, so $x[i] = 1$. $L_t$ is in the left subtree of $I_i$, so $L_t[i] = 0$, and $x[i]$ is multiplied by zero. Hence $\vec{x} \cdot \vec{L}_t < \text{maximum}$, which contradicts the assumption that $\vec{x} \cdot \vec{L}_t = \text{maximum}$. So $k$ is the first index that returns the maximum.

Xu Wen et al.
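The construction proved above can be checked numerically. The sketch below (our illustration, not CMLCompiler's actual code) evaluates a depth-2 decision tree purely with tensor operations: internal nodes I0, I1, I2 are numbered in level order, leaves L0..L3 in in-order, thresholds play the role of W2, and the 0-1 matrix plays the role of W3, with W3[i, k] = 0 exactly when I_i is an ancestor of leaf L_k and L_k lies in its left subtree. The thresholds and tree shape are made up for the example.

```python
import numpy as np

thresholds = np.array([0.5, 0.3, 0.7])   # W2: node i tests feature i >= threshold i
W3 = np.array([[0, 0, 1, 1],             # columns are the leaf vectors L_0..L_3
               [0, 1, 1, 1],
               [1, 1, 0, 1]])

def tensor_predict(features):
    x_hat = (features >= thresholds).astype(int)  # 1 = "go right" at each node
    return int(np.argmax(x_hat @ W3))             # argmax takes the first maximum

def traverse(features):
    # Ordinary recursive traversal, used here as the ground truth.
    if features[0] < thresholds[0]:
        return 0 if features[1] < thresholds[1] else 1
    return 2 if features[2] < thresholds[2] else 3

rng = np.random.default_rng(0)
for f in rng.random((100, 3)):
    assert tensor_predict(f) == traverse(f)
print("tensor argmax matches tree traversal on 100 random samples")
```

Note that the comparison against `thresholds` is evaluated for every internal node at once, yet the argmax still selects the reached leaf, which is exactly what the proof establishes.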
REFERENCES
[1] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek G. Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation, OSDI'16, pages 265–283, USA, 2016. USENIX Association.
[2] Amazon. The total cost of ownership (TCO) of Amazon SageMaker. https://pages.awscloud.com/rs/112-TZM-766/images/Amazon_SageMaker_TCO_uf.pdf, 2020.
[3] Prajjwal Bhargava, Aleksandr Drozd, and Anna Rogers. Generalization in NLI: Ways (not) to go beyond simple heuristics, 2021.
[4] Leo Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
[5] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):1–27, 2011.
[6] Tianqi Chen and Carlos Guestrin. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785–794, 2016.
[7] Tianqi Chen, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Meghan Cowan, Haichen Shen, Leyuan Wang, Yuwei Hu, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy. TVM: An automated end-to-end optimizing compiler for deep learning. In Proceedings of the 13th USENIX Conference on Operating Systems Design and Implementation, OSDI'18, pages 579–594, USA, 2018. USENIX Association.
[8] Tianqi Chen, Lianmin Zheng, Eddie Yan, Ziheng Jiang, Thierry Moreau, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy. Learning to optimize tensor programs. Advances in Neural Information Processing Systems, 31, 2018.
[9] Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, et al. Wide & deep learning for recommender systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems, pages 7–10, 2016.
[10] Scott Cyphers, Arjun K. Bansal, Anahita Bhiwandiwalla, Jayaram Bobba, Matthew Brookhart, Avijit Chakraborty, William Constable, Christian Convey, Leona Cook, Omar Kanawi, Robert Kimball, Jason Knight, Nikolay Korovaiko, Varun Kumar Vijay, Yixing Lao, Christopher R. Lishka, Jaikrishnan Menon, Jennifer Myers, Sandeep Aswath Narayana, Adam Procter, and Tristan J. Webb. Intel nGraph: An intermediate representation, compiler, and executor for deep learning. CoRR, abs/1801.08058, 2018.
[11] Jeffrey Dean and Sanjay Ghemawat. MapReduce: Simplified data processing on large clusters. Communications of the ACM, 51(1):107–113, 2008.
[12] Dheeru Dua and Casey Graff. UCI machine learning repository, 2017.
[13] EasonLiao. CudaTree. https://github.com/EasonLiao/CudaTree, 2022.
[14] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. LIBLINEAR: A library for large linear classification. The Journal of Machine Learning Research, 9:1871–1874, 2008.
[15] Jerome H. Friedman. Greedy function approximation: A gradient boosting machine. Annals of Statistics, pages 1189–1232, 2001.
[16] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in Neural Information Processing Systems, 27, 2014.
[17] H2O.ai. H2O: Scalable machine learning platform. https://github.com/h2oai/h2o-3, 2022.
[18] Kim Hazelwood, Sarah Bird, David Brooks, Soumith Chintala, Utku Diril, Dmytro Dzhulgakov, Mohamed Fawzy, Bill Jia, Yangqing Jia, Aditya Kalro, James Law, Kevin Lee, Jason Lu, Pieter Noordhuis, Misha Smelyanskiy, Liang Xiong, and Xiaodong Wang. Applied machine learning at Facebook: A datacenter infrastructure perspective. In 2018 IEEE International Symposium on High Performance Computer Architecture (HPCA), pages 620–629, 2018.
[19] Markus Hofmann and Ralf Klinkenberg. RapidMiner: Data Mining Use Cases and Business Analytics Applications. CRC Press, 2016.
[20] Intel. Intel® Extension for Scikit-learn*. https://intel.github.io/scikit-learn-intelex/, 2022.
[21] Jeremy Irvin, Pranav Rajpurkar, Michael Ko, Yifan Yu, Silviana Ciurea-Ilcus, Chris Chute, Henrik Marklund, Behzad Haghgoo, Robyn Ball, Katie Shpanskaya, et al. CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 590–597, 2019.
[22] Zhihao Jia, Oded Padon, James Thomas, Todd Warszawski, Matei Zaharia, and Alex Aiken. TASO: Optimizing deep learning computation with automatic generation of graph substitutions. In Proceedings of the 27th ACM Symposium on Operating Systems Principles, pages 47–62, 2019.
[23] Chris Lattner, Mehdi Amini, Uday Bondhugula, Albert Cohen, Andy Davis, Jacques Pienaar, River Riddle, Tatiana Shpeisman, Nicolas Vasilache, and Oleksandr Zinenko. MLIR: A compiler infrastructure for the end of Moore's law. arXiv preprint arXiv:2002.11054, 2020.
[24] Zewen Li, Fan Liu, Wenjie Yang, Shouheng Peng, and Jun Zhou. A survey of convolutional neural networks: Analysis, applications, and prospects. IEEE Transactions on Neural Networks and Learning Systems, 2021.
[25] Xiaoliang Ling, Weiwei Deng, Chen Gu, Hucheng Zhou, Cui Li, and Feng Sun. Model ensemble for click prediction in Bing search ads. In Proceedings of the 26th International Conference on World Wide Web Companion, pages 689–698, 2017.
[26] Wei-Yin Loh. Classification and regression trees. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 1(1):14–23, 2011.
[27] Xiaofei Ma, Zhiguo Wang, Patrick Ng, Ramesh Nallapati, and Bing Xiang. Universal text representation from BERT: An empirical study. arXiv preprint arXiv:1910.07973, 2019.
[28] Larry Medsker and Lakhmi C. Jain. Recurrent Neural Networks: Design and Applications. CRC Press, 1999.
[29] Xiangrui Meng, Joseph Bradley, Burak Yavuz, Evan Sparks, Shivaram Venkataraman, Davies Liu, Jeremy Freeman, DB Tsai, Manish Amde, Sean Owen, Doris Xin, Reynold Xin, Michael J. Franklin, Reza Zadeh, Matei Zaharia, and Ameet Talwalkar. MLlib: Machine learning in Apache Spark. J. Mach. Learn. Res., 17(1):1235–1241, January 2016.
[30] Supun Nakandala, Karla Saur, Gyeong-In Yu, Konstantinos Karanasos, Carlo Curino, Markus Weimer, and Matteo Interlandi. A tensor compiler for unified machine learning prediction serving. In 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20), pages 899–917, 2020.
[31] Muhsen Owaida, Hantian Zhang, Ce Zhang, and Gustavo Alonso. Scalable inference of decision tree ensembles: Flexible design for CPU-FPGA platforms. In 2017 27th International Conference on Field Programmable Logic and Applications (FPL), pages 1–8. IEEE, 2017.
[32] Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res., 12:2825–2830, November 2011.
[33] Fotis Psallidas, Yiwen Zhu, Bojan Karlas, Matteo Interlandi, Avrilia Floratou, Konstantinos Karanasos, Wentao Wu, Ce Zhang, Subru Krishnan, Carlo Curino, and Markus Weimer. Data science through the looking glass and what we found there. CoRR, abs/1912.09536, 2019.
[34] Susmita Ray. A quick review of machine learning algorithms. In 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon), pages 35–39. IEEE, 2019.
[35] James Reed, Zachary DeVito, Horace He, Ansley Ussery, and Jason Ansel. torch.fx: Practical program capture and transformation for deep learning in Python. Proceedings of Machine Learning and Systems, 4:638–651, 2022.
[36] Nils Reimers and Iryna Gurevych. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. arXiv preprint arXiv:1908.10084, 2019.
[37] Shayle R. Searle and Marvin H. J. Gruber. Linear Models. John Wiley & Sons, 2016.
[38] Duhita Sengupta, Sk Nishan Ali, Aditya Bhattacharya, Joy Mustafi, Asima Mukhopadhyay, and Kaushik Sengupta. Nuclear morphology optimized deep hybrid learning (NuMODRiL): A novel architecture for accurate diagnosis/prognosis of ovarian cancer. bioRxiv, 2020.
[39] Toby Sharp. Implementing decision trees and forests on a GPU. In European Conference on Computer Vision, pages 595–608. Springer, 2008.
[40] Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher Manning, Andrew Ng, and Christopher Potts. Parsing with compositional vector grammars. In EMNLP, 2013.
[41] Sören Sonnenburg, Gunnar Rätsch, Sebastian Henschel, Christian Widmer, Jonas Behr, Alexander Zien, Fabio de Bona, Alexander Binder, Christian Gehl, and Vojtěch Franc. The SHOGUN machine learning toolbox. The Journal of Machine Learning Research, 11:1799–1802, 2010.
[42] Shan Suthaharan. Support vector machine. In Machine Learning Models and Algorithms for Big Data Classification, pages 207–235. Springer, 2016.
[43] TensorFlow. TensorFlow Decision Forests. https://www.tensorflow.org/decision_forests, 2022.
[44] Jake VanderPlas. Python Data Science Handbook: Essential Tools for Working with Data. O'Reilly Media, Inc., 2016.
[45] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online, October 2020. Association for Computational Linguistics.
[46] Carole-Jean Wu, David Brooks, Kevin Chen, Douglas Chen, Sy Choudhury, Marat Dukhan, Kim Hazelwood, Eldad Isaac, Yangqing Jia, Bill Jia, Tommer Leyvand, Hao Lu, Yang Lu, Lin Qiao, Brandon Reagen, Joe Spisak, Fei Sun, Andrew Tulloch, Peter Vajda, Xiaodong Wang, Yanghan Wang, Bram Wasti, Yiming Wu, Ran Xian, Sungjoo Yoo, and Peizhao Zhang. Machine learning at Facebook: Understanding inference at the edge. In 2019 IEEE International Symposium on High Performance Computer Architecture (HPCA), pages 331–344, 2019.
[47] Doris Xin, Hui Miao, Aditya Parameswaran, and Neoklis Polyzotis. Production machine learning pipelines: Empirical analysis and optimization opportunities. In Proceedings of the 2021 International Conference on Management of Data, pages 2639–2652, 2021.
[48] Matei Zaharia, Mosharaf Chowdhury, Michael J. Franklin, Scott Shenker, and Ion Stoica. Spark: Cluster computing with working sets. In Proceedings of the 2nd USENIX Conference on Hot Topics in Cloud Computing, 2010.
[49] Lianmin Zheng, Chengfan Jia, Minmin Sun, Zhao Wu, Cody Hao Yu, Ameer Haj-Ali, Yida Wang, Jun Yang, Danyang Zhuo, Koushik Sen, Joseph E. Gonzalez, and Ion Stoica. Ansor: Generating high-performance tensor programs for deep learning. USENIX Association, USA, 2020.
-tFQT4oBgHgl3EQf7DaV/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
.gitattributes CHANGED
@@ -4560,3 +4560,57 @@ X9FPT4oBgHgl3EQfszXP/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -tex
4560
  ZNE3T4oBgHgl3EQfcgp3/content/2301.04526v1.pdf filter=lfs diff=lfs merge=lfs -text
4561
  ytFKT4oBgHgl3EQfMC3E/content/2301.11749v1.pdf filter=lfs diff=lfs merge=lfs -text
4562
  JdE4T4oBgHgl3EQfhg2P/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4563
+ T9E5T4oBgHgl3EQfAg7p/content/2301.05380v1.pdf filter=lfs diff=lfs merge=lfs -text
4564
+ v9AyT4oBgHgl3EQfaffQ/content/2301.00245v1.pdf filter=lfs diff=lfs merge=lfs -text
4565
+ atE1T4oBgHgl3EQfxAUb/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4566
+ w9FRT4oBgHgl3EQfhDe7/content/2301.13582v1.pdf filter=lfs diff=lfs merge=lfs -text
4567
+ 9NFLT4oBgHgl3EQfty_-/content/2301.12153v1.pdf filter=lfs diff=lfs merge=lfs -text
4568
+ JdA0T4oBgHgl3EQfCf9p/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4569
+ L9E1T4oBgHgl3EQfHAM0/content/2301.02920v1.pdf filter=lfs diff=lfs merge=lfs -text
4570
+ itFKT4oBgHgl3EQfvy5I/content/2301.11896v1.pdf filter=lfs diff=lfs merge=lfs -text
4571
+ m9E1T4oBgHgl3EQf1QUs/content/2301.03465v1.pdf filter=lfs diff=lfs merge=lfs -text
4572
+ ZNE3T4oBgHgl3EQfcgp3/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4573
+ n9E3T4oBgHgl3EQfLAl3/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4574
+ 6NE1T4oBgHgl3EQfTQM9/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4575
+ NNAzT4oBgHgl3EQfzP4S/content/2301.01764v1.pdf filter=lfs diff=lfs merge=lfs -text
4576
+ ptFPT4oBgHgl3EQf7zXe/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4577
+ ytFKT4oBgHgl3EQfMC3E/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4578
+ v9AyT4oBgHgl3EQfaffQ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4579
+ PNAzT4oBgHgl3EQfIfvC/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4580
+ 8NAzT4oBgHgl3EQf-v4l/content/2301.01937v1.pdf filter=lfs diff=lfs merge=lfs -text
4581
+ 2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf filter=lfs diff=lfs merge=lfs -text
4582
+ EtE1T4oBgHgl3EQfqgU3/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4583
+ 8tE3T4oBgHgl3EQfSAk3/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4584
+ 8tE3T4oBgHgl3EQfSAk3/content/2301.04427v1.pdf filter=lfs diff=lfs merge=lfs -text
4585
+ j9AyT4oBgHgl3EQfyPkN/content/2301.00679v1.pdf filter=lfs diff=lfs merge=lfs -text
4586
+ j9AyT4oBgHgl3EQfyPkN/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4587
+ T9E5T4oBgHgl3EQfAg7p/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4588
+ itFKT4oBgHgl3EQfvy5I/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4589
+ kdFQT4oBgHgl3EQfmjaf/content/2301.13366v1.pdf filter=lfs diff=lfs merge=lfs -text
4590
+ XNE3T4oBgHgl3EQfFwkF/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4591
+ qtFKT4oBgHgl3EQfIC2S/content/2301.11732v1.pdf filter=lfs diff=lfs merge=lfs -text
4592
+ lNFPT4oBgHgl3EQf2zXD/content/2301.13188v1.pdf filter=lfs diff=lfs merge=lfs -text
4593
+ oNFLT4oBgHgl3EQfgi-Y/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4594
+ NNAzT4oBgHgl3EQfzP4S/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4595
+ i9FKT4oBgHgl3EQfwC46/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4596
+ sNFJT4oBgHgl3EQfcCyS/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4597
+ v9E2T4oBgHgl3EQf2wjd/content/2301.04165v1.pdf filter=lfs diff=lfs merge=lfs -text
4598
+ L9E1T4oBgHgl3EQfHAM0/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4599
+ btE3T4oBgHgl3EQfdgrZ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4600
+ ctE0T4oBgHgl3EQfWgAs/content/2301.02278v1.pdf filter=lfs diff=lfs merge=lfs -text
4601
+ ZdFJT4oBgHgl3EQf7i0F/content/2301.11678v1.pdf filter=lfs diff=lfs merge=lfs -text
4602
+ HNFAT4oBgHgl3EQfth7Z/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4603
+ v9E2T4oBgHgl3EQf2wjd/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4604
+ j9FQT4oBgHgl3EQfmDaC/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4605
+ j9FQT4oBgHgl3EQfmDaC/content/2301.13364v1.pdf filter=lfs diff=lfs merge=lfs -text
4606
+ u9FAT4oBgHgl3EQfiB2Y/content/2301.08597v1.pdf filter=lfs diff=lfs merge=lfs -text
4607
+ oNFLT4oBgHgl3EQfgi-Y/content/2301.12099v1.pdf filter=lfs diff=lfs merge=lfs -text
4608
+ kdFQT4oBgHgl3EQfmjaf/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4609
+ ZdFJT4oBgHgl3EQf7i0F/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4610
+ pNE4T4oBgHgl3EQfvQ0x/content/2301.05239v1.pdf filter=lfs diff=lfs merge=lfs -text
4611
+ btE3T4oBgHgl3EQfdgrZ/content/2301.04536v1.pdf filter=lfs diff=lfs merge=lfs -text
4612
+ Z9FRT4oBgHgl3EQfQDdI/content/2301.13520v1.pdf filter=lfs diff=lfs merge=lfs -text
4613
+ KNA0T4oBgHgl3EQfCv9N/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4614
+ ctE0T4oBgHgl3EQfWgAs/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4615
+ GtAzT4oBgHgl3EQfHftK/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
4616
+ JtFJT4oBgHgl3EQfwi0E/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
0dE0T4oBgHgl3EQfdQAz/content/tmp_files/2301.02373v1.pdf.txt ADDED
@@ -0,0 +1,1482 @@
1
+ Astronomy & Astrophysics manuscript no. 44705corr
2
+ ©ESO 2023
3
+ January 9, 2023
4
+ A framework for the architecture of exoplanetary systems
5
+ II. Nature versus nurture: Emergent formation pathways of architecture classes
6
+ Lokesh Mishra1, 2 , Yann Alibert1 , Stéphane Udry2
7
+ , and Christoph Mordasini1
8
+ 1 Institute of Physics, University of Bern, Gesellschaftsstrasse 6, 3012 Bern, Switzerland
9
+ e-mail: exomishra@gmail.com
10
+ 2 Geneva Observatory, University of Geneva, Chemin Pegasi 51b, 1290 Versoix, Switzerland
11
+ Received DD MMM YYYY; accepted DD MMM YYYY
12
ABSTRACT

In the first paper of this series, we proposed a model-independent framework for characterising the architecture of planetary systems at the system level. There are four classes of planetary system architecture: similar, mixed, anti-ordered, and ordered. In this paper, we investigate the formation pathways leading to these four architecture classes. To understand the role of nature versus nurture in sculpting the final (mass) architecture of a system, we apply our architecture framework to synthetic planetary systems, formed via core accretion, using the Bern model. General patterns emerge in the formation pathways of the four architecture classes. Almost all planetary systems emerging from protoplanetary disks whose initial solid mass was less than one Jupiter mass are similar. Systems emerging from heavier disks may become mixed, anti-ordered, or ordered. Increasing dynamical interactions (planet–planet, planet–disk) tend to shift a system’s architecture from mixed to anti-ordered to ordered. Our model predicts the existence of a new metallicity–architecture correlation. Similar systems have a very high occurrence around low-metallicity stars. The occurrence of the anti-ordered and ordered classes increases with increasing metallicity. The occurrence of the mixed architecture first increases and then decreases with increasing metallicity. In our synthetic planetary systems, the role of nature is disentangled from the role of nurture. Nature (or initial conditions) pre-determines whether the architecture of a system becomes similar; otherwise, nurture influences whether a system becomes mixed, anti-ordered, or ordered. We propose the ‘Aryabhata formation scenario’ to explain some planetary systems which host only water-rich worlds. We finish this paper with a discussion of future observational and theoretical works that may support or refute the results of this paper.

Key words. Planetary systems – Planets and satellites: detection – Planets and satellites: formation – Planets and satellites: physical evolution
1. Introduction

Studying planetary systems as single units of a physical system makes them amenable to system-level examinations. Investigating the ensemble of bound objects (host star(s), planets, minor bodies) coherently can allow a deeper and more comprehensive understanding of exoplanetary astrophysics to emerge. The purview of this multi-body physics covers a breadth of topics, including the stability of planetary systems (Gladman 1993; Laskar 1997, 2000; Chambers 1999; Fang & Margot 2013; Pu & Wu 2015; Laskar & Petit 2017; Obertas et al. 2017; Petit et al. 2018; Wang et al. 2019; Yeh et al. 2020; Tamayo et al. 2020; Turrini et al. 2020), stellar host and protoplanetary disk properties (Petigura et al. 2018; Manara et al. 2019; Mulders et al. 2021), novel approaches to system-level characterisation (Tremaine 2015; Kipping 2018; Alibert 2019; Mishra et al. 2019; Gilbert & Fabrycky 2020; Bashi & Zucker 2021; Sandford et al. 2021), and the architecture of planetary systems (Lissauer et al. 2011; Ciardi et al. 2013; Fabrycky et al. 2014; Weiss et al. 2018; Millholland et al. 2017; Adams 2019; Adams et al. 2020; Mulders et al. 2020; He et al. 2019; He et al. 2021; Mishra et al. 2021; Adibekyan et al. 2021; Millholland & Winn 2021; Winter et al. 2020). Analysing multi-body, system-level physics may allow us to understand whether planetary systems are self-organising emergent structures, i.e. whether global-level patterns emerge from local-level interactions.
Inspired by the peas-in-a-pod architecture (Weiss et al. 2018; Millholland et al. 2017; Mishra et al. 2021), we introduced a new framework for studying the architecture of planetary systems (Mishra et al. 2023; hereafter Paper I). Treating the architecture as a global, system-level phenomenon, this framework allows us to characterise, quantify, and compare the architecture of individual planetary systems. Four classes of planetary system architecture emerged from this framework. These classes are labelled similar, mixed, anti-ordered, and ordered, depending on the arrangement and distribution of planets around the host star. The key idea behind this framework is that the arrangement and distribution of planets contain additional information that cannot be extracted by studying single planets individually. Hints of the presence of this additional information were revealed in several works (Tremaine 2015; Laskar & Petit 2017; Kipping 2018; Mishra et al. 2019; Gilbert & Fabrycky 2020; Sandford et al. 2021).
Explaining the formation, evolution, and final assembly of planetary systems remains an outstanding theoretical problem. Planet-formation physics spans astronomical orders of magnitude in mass, size, and time (Udry & Santos 2007; Armitage 2010). The processes occurring during planet formation convert gases and micron-sized dust particles from the protoplanetary disk into different kinds of planets arranged in different architectures over timescales of millions and billions of years. However, it remains unclear how initial conditions derived from the host star or protoplanetary disk combine with formation and evolution processes to give rise to the observed exoplanetary systems.

Article number, page 1 of 12
arXiv:2301.02373v1 [astro-ph.EP] 6 Jan 2023
We are interested in understanding the role of nature versus nurture in sculpting the final planetary system, and the extent to which the character of the mature planetary system is influenced by its initial conditions. Kipping (2018) suggested, using an entropy-like formulation for planetary systems, that the initial conditions of planet formation could be inferred from their present-day architecture. However, the presence of stochastic processes makes it difficult to connect the initial conditions with the final system. It is also unclear whether stochastic physical processes can erase all memory of initial conditions, or indeed leave their own impressions on the final architecture. Using ideas from machine-learning-based natural language processing, Sandford et al. (2021) showed that planetary systems are not randomly assembled. While it is clear that planetary systems are not identical copies of one another, the quest to quantify the similarity between planetary systems is a tantalising one.
In this paper, we investigate the formation pathways that lead to the four architecture classes. Owing to the stochastic nature of this problem, understanding the formation of a single planetary system can be very complicated. For example, two systems with almost identical initial conditions may evolve into two completely different planetary systems, as chaos arising from multi-body gravitational interactions may drive the two systems down differing formation pathways. However, some patterns emerge when planetary systems are studied as part of an ensemble. These trends, as we show in this paper, help us understand the roles played by initial conditions and physical processes in shaping the architecture.
Figure 1 (bottom) summarises the main findings of this paper. We show that the effects of planet formation and evolution processes are imprinted in the system-level architecture. Figure 1 shows the formation pathways of the architecture classes that emerge thanks to the system-level approach of our architecture framework (Fig. 1, top). This Sankey diagram has nodes for protoplanetary disk gas mass, protoplanetary disk solid mass, metallicity, and planetary architecture. We find that the formation of similar planetary systems is dominated by initial conditions. If the initial conditions disfavour the formation of the similar architecture, the other three architectures may emerge. Whether the final architecture is mixed, ordered, or anti-ordered seems to depend on the stochastic formation processes. Increasing dynamical interactions (disk–planet, planet–planet) generally tend to produce mixed, anti-ordered, and then ordered architectures, respectively.
We first summarise the architecture framework and some results from Paper I in Sect. 2. We study the role of nature (initial conditions) and nurture (dynamical processes) in Sects. 3 and 4, respectively. In these sections, we study the influence of protoplanetary disk mass, metallicity, protoplanetary disk lifetime, planet–disk interactions, planet–planet interactions, and N-body interactions on the final architecture of simulated planetary systems. We summarise our results, suggest possible future studies, and conclude this paper in Sect. 6.
2. Summary of Paper I and the Bern model

2.1. Architecture framework

The arrangement of multiple planets and the collective distribution of their physical properties around the host star(s) characterise the architecture of a planetary system (Mishra et al. 2021). To quantify the architecture of a planetary system, we developed a novel model-independent framework in Paper I. Some key aspects of this framework are briefly summarised here; we refer the reader to Sect. 3 of Paper I for details.

Fig. 1. The four classes of planetary system architecture and their emergent formation pathways. Top: Reproduced from Paper I. Schematic diagram depicting the four classes of planetary system architecture: similar, anti-ordered, mixed, and ordered. Depending on how a quantity (such as mass or size) varies from one planet to another, the architecture of a system can be identified. The framework is model independent. Bottom: Emergence of formation pathways. Sankey diagram depicting the emergence of the formation pathways of the architecture classes. The thickness of the links and nodes is proportional to the relative number of synthetic systems in our simulation. This result is derived from synthetic planetary systems formed around a solar-mass star with the Bern model. Disk gas mass and metallicity are binned at their median values.
Conceptually, the framework defines four classes of planetary system architecture: similar, mixed, anti-ordered, and ordered. Consider a planetary quantity (such as mass or radius) as a function of the distance of the planet from the host star (see Fig. 1). When all planets in a system have similar values of the planetary quantity, the architecture of the system is similar. When the planetary quantity increases with increasing distance, the system is said to exhibit an ordered architecture. Alternatively, if the quantity shows an overall decreasing trend with increasing distance, the architecture is considered anti-ordered. Finally, the planetary quantity could also show variations that are not captured by the three classes above: a mixed architecture may show large variations, both increasing and decreasing, with distance. By studying the variation of a planetary quantity with distance for all planets in a system, our framework captures the arrangement and distribution of the planets in the system.
The architecture of a system is quantified via two coefficients: the coefficient of similarity, C_S(q_i), and the coefficient of variation, C_V(q_i). Here, q_i represents a planetary quantity (e.g. mass, radius, eccentricity, density) for the i-th planet. When the coefficients are calculated using planetary masses, they inform us about the mass architecture of a system, that is, the arrangement and distribution of mass in the system. Likewise, we can study the radius architecture, density architecture, water-mass-fraction architecture, eccentricity architecture, and so on. The versatility of our architecture framework lies in its ability to let us study these multifaceted architectures of a planetary system. In Paper I, we explored the relationships between these different kinds of architectures. As in Paper I, we identify the architecture of a system with its bulk mass architecture.
Calibrated on planetary masses, a classification scheme to identify the architecture class was proposed in Paper I (Eq. 8 therein). The C_S versus C_V plane represents the architecture space for planetary systems (Fig. 3 in Paper I). This new parameter space was found to be endowed with a curious mathematical property: planetary systems cannot occupy all parts of the architecture plane, as some regions of this parameter space are mathematically forbidden.
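To make the classification concrete, the sketch below assigns one of the four classes from a list of planet masses ordered by distance. It is an illustrative stand-in, not the calibrated C_S/C_V scheme of Paper I: the spread statistic is the standard coefficient of variation, the trend statistic is a simple rank correlation, and both thresholds are hypothetical choices of ours.

```python
import numpy as np

def classify_architecture(distances, masses,
                          cv_threshold=0.3, trend_threshold=0.7):
    """Toy four-class architecture classifier.

    Illustrative stand-in for Paper I's calibrated C_S/C_V scheme
    (their Eq. 8); thresholds and statistics here are hypothetical.
    """
    d = np.asarray(distances, dtype=float)
    m = np.asarray(masses, dtype=float)[np.argsort(d)]  # sort by distance
    cv = np.std(m) / np.mean(m)          # spread of the quantity
    if cv < cv_threshold:
        return "similar"                 # all planets alike
    # Monotonic trend of mass with distance (rank correlation).
    ranks = np.argsort(np.argsort(m))
    rho = np.corrcoef(np.arange(len(m)), ranks)[0, 1]
    if rho > trend_threshold:
        return "ordered"                 # mass increases outward
    if rho < -trend_threshold:
        return "anti-ordered"            # mass decreases outward
    return "mixed"                       # large, non-monotonic variations
```

The point of the sketch is the decision structure: low spread means similar regardless of ordering; only once the spread is large does the sign of the trend separate ordered, anti-ordered, and mixed.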
To understand the implications of this architecture framework, we applied it to several catalogues in Paper I. These included 41 observed multi-planetary systems and systems simulated numerically via population synthesis with the Generation III Bern model (Emsenhuber et al. 2021a,b).
2.2. Bern model

For the synthetic planetary systems, as the initial conditions and the physical processes are known, it is possible (and desirable) to understand how the different architecture classes are formed. As this paper is dedicated to planet formation and its imprints on architecture, we briefly review the ingredients of the Bern model here. Readers interested in further details of this model are referred to the recent NGPPS series of papers (Emsenhuber et al. 2021a,b; Schlecker et al. 2021a; Burn et al. 2021; Schlecker et al. 2021b; Mishra et al. 2021). The historical development of the Bern model may be traced through the works of Alibert et al. (2004, 2005); Mordasini et al. (2009); Alibert et al. (2011); Mordasini et al. (2012a,b); Alibert et al. (2013); Fortier et al. (2013); Marboeuf et al. (2014b); Thiabaud et al. (2014); Dittkrist et al. (2014); Jin et al. (2014), and is reviewed in Benz et al. (2014) and Mordasini (2018).
Based on the core-accretion paradigm (Pollack et al. 1996), the Bern model is a global model of planet formation and evolution. The model studies the growth of several lunar-mass protoplanetary embryos embedded in protoplanetary disks (consisting of a gaseous and a solid phase) around a solar-type star. The disk model is based on viscous angular-momentum transport (Lynden-Bell & Pringle 1974; Veras & Armitage 2004; Hueso & Guillot 2005). Turbulence is characterised following the Shakura & Sunyaev (1973) approach. The initial mass of the solid disk depends on the metallicity of the star and also on the condensation state of the molecules in the disk (Thiabaud et al. 2014). The solids in the disk are composed of a swarm of rocky and icy planetesimals, and evolve via (a) accretion by growing planets, (b) interaction with the gaseous disk, (c) dynamical stirring by planets and other planetesimals, and so on (Fortier et al. 2013). The 1D, geometrically thin disk evolution is followed out to 1000 au.
This star–disk–embryo numerical system is endowed with several physical processes, which occur simultaneously and in a self-consistently coupled way. Some of these physical processes are: stellar evolution (Baraffe et al. 2015); interactions between the viscous protoplanetary disk and the star (Lynden-Bell & Pringle 1974; Shakura & Sunyaev 1973; Clarke et al. 2001; Matsuyama et al. 2003; Veras & Armitage 2004; Nakamoto & Nakagawa 1994; Hueso & Guillot 2005); condensation of volatile and/or refractory species (Marboeuf et al. 2014b,a; Thiabaud et al. 2014); planet-formation physics (Alibert et al. 2013; Fortier et al. 2013; Mordasini et al. 2012b); orbital and tidal migration (Coleman & Nelson 2014; Paardekooper et al. 2011; Dittkrist et al. 2014); gravitational N-body interactions (Chambers 1999; Alibert et al. 2013; Emsenhuber et al. 2021a,b); atmospheric escape (Jin et al. 2014); bloating (Sarkis et al. 2021); and so on (see Fig. 1 in Mishra et al. 2019 for a schematic diagram). In addition, the model calculates the internal structure of all planets, assuming them to be spherically symmetric.
In the synthetic planetary population used in the present work, some initial conditions are fixed: we use a 1 M⊙ star and a disk viscosity parameter α = 2 × 10⁻³, and we describe the initial shapes of the gas and planetesimal disks via power laws (Veras & Armitage 2004), with a planetesimal size of 300 m and fixed densities (rocky: 3.2 g cm⁻³; icy: 1 g cm⁻³). We add 100 protoplanetary embryos to the protoplanetary disk and ensure that no two embryos start within 10 Hill radii of each other (Kokubo & Ida 1998, 2002). This model is then run 1000 times while varying the other initial conditions: the initial gas mass of the protoplanetary disk, the disk lifetime, the stellar metallicity, the disk inner edge, and the initial locations of the protoplanetary embryos (for details see Emsenhuber et al. 2021b).
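The embryo-placement rule above can be sketched as rejection sampling: draw semi-major axes and keep only draws that respect the 10-Hill-radii separation. Only the mutual-Hill-radius criterion comes from the text; the log-uniform prior, the orbital range, and the exact lunar-mass value are assumptions made here for illustration.

```python
import numpy as np

M_STAR = 1.989e33    # solar mass [g]
M_EMBRYO = 7.35e25   # roughly one lunar mass [g] (assumed value)

def mutual_hill_radius(a1, a2, m1=M_EMBRYO, m2=M_EMBRYO, m_star=M_STAR):
    """Mutual Hill radius of two bodies on circular orbits, in the same
    units as the semi-major axes a1, a2 (here: au)."""
    return 0.5 * (a1 + a2) * ((m1 + m2) / (3.0 * m_star)) ** (1.0 / 3.0)

def draw_embryo_positions(n=100, a_min=0.05, a_max=40.0, spacing=10.0,
                          seed=42):
    """Place n embryos so that no pair is closer than `spacing` mutual
    Hill radii. The log-uniform prior over [a_min, a_max] au is an
    assumed choice, not taken from the Bern model papers."""
    rng = np.random.default_rng(seed)
    positions = []
    while len(positions) < n:
        a = 10.0 ** rng.uniform(np.log10(a_min), np.log10(a_max))
        # Accept the draw only if it clears every already-placed embryo.
        if all(abs(a - b) >= spacing * mutual_hill_radius(a, b)
               for b in positions):
            positions.append(a)
    return np.sort(np.asarray(positions))
```

Because lunar-mass embryos have tiny Hill radii (a few 0.1% of the orbital distance), a 10-Hill-radii exclusion zone still leaves room for on the order of a hundred embryos between 0.05 and 40 au.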
The Bern model includes a significant variety of physics and uses plausible, observationally motivated choices of initial conditions. However, it is only a simplified, low-dimensional approximation of our current understanding of planet formation. For example, we model planet formation via core accretion only and ignore other pathways, such as disk instability (Schib et al. 2021). Among other assumptions, we take the dust-to-gas ratio to be the same for both the host star and the disk, and assume that all dust in the disk is aggregated into planetesimals. The N-body interactions are tracked for only 20 Myr, which may be inadequate to capture dynamical effects occurring in the outer parts of a system. The assumptions, choices, and simplifications made in this model may have a strong impact on the outcome of this paper. Nevertheless, exploring the implications of our architecture framework using synthetic populations from the Bern model is a necessary first step. The main result of this paper is not an understanding of the formation of any single planetary system, but the demonstration that discernible patterns of formation pathways emerge for the different architecture classes. Future studies could apply our architecture framework (from Paper I) to other planet-formation models. If the formation pathways of the different architecture classes were found to remain the same with different formation models, our results would be strengthened and become more robust.
3. Nature: Role of star and disk initial conditions

In this section, we study the connection between the initial conditions and the final architecture of a system. We begin by counting the number of systems of each architecture class that emerge from our population synthesis as a function of the various initial conditions that are varied. The role of varying disk masses and stellar metallicities is presented in Sect. 3.1, and that of varying disk lifetimes in Sect. 3.2. For completeness: we measure the relative count of an architecture class within a bin by dividing the number of systems of that architecture class in the bin by the total number of systems in the bin. We emphasise that, as in Paper I, the architecture of a system is identified with its bulk mass architecture. Thus, when we refer to a similar or ordered system, we are referring to a system whose bulk mass architecture is similar or ordered, respectively.
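The binned relative counts used throughout this section, together with the 100/√(bin counts) error-bar convention used in Fig. 2, can be sketched as follows (the bin edges and class labels below are placeholders):

```python
import numpy as np

def relative_counts(values, classes, bin_edges, class_names):
    """Relative count [%] of each architecture class per bin of `values`
    (e.g. initial disk gas mass): the number of systems of a class in a
    bin divided by the total number of systems in that bin."""
    values = np.asarray(values, dtype=float)
    classes = np.asarray(classes)
    idx = np.digitize(values, bin_edges) - 1          # bin index per system
    n_bins = len(bin_edges) - 1
    out = {name: np.zeros(n_bins) for name in class_names}
    for b in range(n_bins):
        in_bin = idx == b
        total = in_bin.sum()
        if total == 0:
            continue                                   # leave empty bins at 0
        for name in class_names:
            out[name][b] = 100.0 * np.count_nonzero(
                in_bin & (classes == name)) / total
    return out

def bin_error(total_in_bin):
    """Error-bar length used in Fig. 2: 100 / sqrt(bin counts)."""
    return 100.0 / np.sqrt(total_in_bin)
```

The per-bin percentages of the four classes sum to 100 by construction, which is why the trends discussed below trade off against one another.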
3.1. Protoplanetary disk: Mass and stellar metallicity

Figure 2 (upper left) shows the dependence of the architecture-class relative counts on the initial mass of gas in the protoplanetary disk. Over 96% of all disks that started with gas masses ≲ 0.04 M⊙ give rise to planetary systems of similar architecture. About 1% of these low-mass disks lead to each of the other three architecture classes. The relative count of systems with similar architecture shows a clear decreasing trend with increasing mass in the disk gas.

The production of the remaining three architecture classes tends to increase with increasing disk gas mass, but with distinct trends. As the mass in the gas disk increases, the relative count of mixed architectures first increases and then decreases for gas masses ≳ 0.12 M⊙. The relative counts of both the anti-ordered and ordered architectures continue to increase with increasing disk mass. Anti-ordered architectures become the most common outcome of large disks with gas masses ≳ 0.12 M⊙.

In Fig. 2 (upper right), we show the binned relative count of the different architecture classes as a function of the mass of solids in the protoplanetary disk. This plot shows some of the same features seen in Fig. 2 (upper left). About 99% of all disks with solid masses ≲ 200 M⊕ give rise to similar planetary systems. The production of the similar architecture decreases as the mass of solids in the disk increases.
Before continuing, we note that this is already a result of considerable importance. The physical processes encoded in the Bern model are the same for all 1000 planetary systems; the only difference between these synthetic systems arises from the variations in their initial conditions. We see that almost all low-mass disks give rise to only one architecture: the similar class. This occurs despite all the physical processes that can act upon a system and induce architectural variation. As we show below, the low mass of the disk limits some of the physical processes that sculpt a system’s architecture. We conclude that the production of systems of the similar architecture class is dominated by initial conditions.

Close to 60% of all systems in our catalogue of observed multi-planetary systems (from Paper I) are similar in their mass architecture. For some of these similar-class systems (such as Trappist-1 and TOI-178), if their formation proceeded via core accretion, our work may place strong limits on the initial masses of their protoplanetary disks.
The relative count of the other three architecture classes increases as the solid mass in the disk increases. The production of mixed architectures peaks for disks of ≈ 1 MJ and then decreases. The prevalence of the anti-ordered and ordered architectures continues to increase with increasing disk mass. For massive disks, the anti-ordered architecture is the most common outcome.

Figure 2 (middle left) shows the relative count of each architecture class in the synthetic population as a function of stellar metallicity. Figure 2 (middle right) shows the same for the 41 observed multi-planetary systems. The selection criteria for our observed catalogue are detailed in Paper I. We find an interesting correlation between the metallicity and the architecture of a system, hereafter referred to as the metallicity–architecture correlation, and note the following trends. Over 98% of all systems with [Fe/H] < −0.2 are of the similar type. The relative count of the similar architecture decreases as the metallicity increases. The relative counts of the other three architecture classes are below 5% for metallicities ≤ −0.2. At different rates, the relative counts of the mixed, ordered, and anti-ordered classes increase with increasing metallicity. Our catalogue of observed planetary systems shows an encouragingly similar trend.

Our observational catalogue suffers from detection biases and incompleteness. One way in which these limitations manifest is that we do not find any observed example of the anti-ordered architecture. The qualitative trend of the relative count of observed system architectures as a function of stellar metallicity agrees with our synthetic systems. For example, the relative count of similar observed systems decreases with increasing metallicity, while the relative count of ordered architectures continues to increase with increasing metallicity.
To understand the origin of these correlations, we study the relation between the initial disk mass (in both solids and gas), the stellar metallicity, and the final architecture of the systems in our model. In the Bern model, the initial solid mass of the disk is a fraction of the initial gas mass of the disk. This fraction is correlated with the dust-to-gas ratio, which also depends on the gas mass itself, because the locations of the different icelines depend on it. By simulating systems with varying dust-to-gas ratios (f_D/G), we simulate systems around stars of different metallicities, owing to the relation

10^[Fe/H] = f_D/G / f_D/G,⊙ , with f_D/G,⊙ = 0.0149 (Lodders 2003).   (1)

The metallicities in our simulations vary from −0.6 to 0.5, following Santos et al. (2005).
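Equation (1) is a one-line conversion between stellar metallicity and disk dust-to-gas ratio; a minimal sketch (the function names are ours):

```python
import math

F_DG_SUN = 0.0149  # solar dust-to-gas ratio f_D/G,sun (Lodders 2003)

def feh_to_dust_to_gas(feh):
    """Disk dust-to-gas ratio for a stellar metallicity [Fe/H], Eq. (1)."""
    return F_DG_SUN * 10.0 ** feh

def dust_to_gas_to_feh(f_dg):
    """Inverse of Eq. (1): metallicity from a dust-to-gas ratio."""
    return math.log10(f_dg / F_DG_SUN)
```

For the simulated range, [Fe/H] = −0.6 to 0.5 corresponds to f_D/G ≈ 0.0037 to 0.047.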
Figure 2 shows the solid disk mass as a function of the gas disk mass (bottom left) and the total mass in planets as a function of the solid disk mass (bottom right). Each point represents one planetary system, and the shape and colour of the marker indicate its final architecture. These two plots help us understand the correlations discussed above.

The bottom left panel of Fig. 2 shows the relationship between gas disk mass, solid disk mass, metallicity, and the final architecture of the system. Generally, when the mass of solids in a disk is ≳ 1 MJ (≈ 318 M⊕), the production of architectures other than similar is triggered. We note that up to a certain gas disk mass (≲ 0.02 M⊙), irrespective of the metallicity, all disks lead to the similar architecture. For heavier gas disks (≳ 0.02 M⊙), metallicity begins to play a role. If the gas disk mass is high enough, even low metallicities (≈ −0.2) can trigger the production of architectures other than the similar class. However, for lower gas disk masses, higher metallicities are required to place about 1 MJ of mass in the solid disk.
[Fig. 2: six-panel figure. Top and middle rows: relative count of planetary systems [%] of the similar, anti-ordered, mixed, and ordered classes versus protoplanetary disk gas mass, disk solid mass, and metallicity [Fe/H] (Bern model and observations). Bottom row: disk solid mass versus disk gas mass (with lines for [Fe/H] = −0.6 and 0.5 and the 1 MJ ≈ 318 M⊕ level), and total mass in planets versus disk solid mass (with 10% and 100% solid-accretion-efficiency lines).]
Fig. 2. Role of disk mass and the metallicity–architecture correlation. The top two rows show the binned relative count of each architecture class as a function of initial disk gas mass (upper left), disk solid mass (upper right), stellar metallicity in the synthetic population (middle left), and stellar metallicity in observed systems (middle right). The length of the error bars depends on the total number of systems in each bin as 100/√(bin counts). In the bottom panels, each point corresponds to a single planetary system; the system architecture is indicated by the colour and shape of the marker. The bottom left panel shows the solid mass in the disk as a function of the disk gas mass. The two diagonal lines convey the role of stellar metallicity, and the dashed horizontal line indicates the mass of Jupiter. The bottom right panel shows the total mass in planets as a function of the solid mass in the protoplanetary disk. The two diagonal lines indicate the efficiency of converting solids from the disk into planets: if the planets in a hypothetical system could accrete all the solid mass of its disk, and these planets had no gaseous atmospheres, such a system would lie on the diagonal line corresponding to 100% accretion efficiency. The dashed vertical line indicates the mass of Jupiter.
It is clear that the mass in solids in the protoplanetary disk plays an essential role here. The bottom right panel of Fig. 2 explains this statement: the total mass in planets increases as the mass of solids in the disk increases, and when the mass of solids in the disk is ∼ 1 MJ, the distribution of the total mass in planets shows a jump. This is because massive planets can begin to accrete significant amounts of gas. For the core-accretion scenario, this plot suggests that similar architectures occur in low-mass disks because such disks cannot produce massive giant planets. Gas giants are very effective at inducing dynamical stirring, which in turn shapes the system architecture. This signifies the role played by physical processes in producing the mixed, anti-ordered, and ordered architectures.
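The solid-accretion efficiency behind the diagonal guide lines in the bottom right panel of Fig. 2 is a simple ratio; a one-function sketch:

```python
def solid_accretion_efficiency(total_planet_mass, disk_solid_mass):
    """Efficiency [%] of converting disk solids into planets (both masses
    in the same unit, e.g. Earth masses). Systems whose planets accreted
    substantial gaseous envelopes can exceed 100%."""
    return 100.0 * total_planet_mass / disk_solid_mass
```

For example, a system with 50 M⊕ in planets formed from a 500 M⊕ solid disk sits on the 10% line.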
3.2. Lifetime of the protoplanetary disk

In this section, we explore the role of the disk lifetime (i.e. the age of a protoplanetary disk) in determining the final architecture class of a system. The lifetime of a disk in the Bern model is influenced by the external disk photo-evaporation rate (see Emsenhuber et al. 2021a for details) and by the mass of the disk.

Figure 3 (left) shows the binned relative count of system architectures as a function of disk lifetime. About 80% of all disks with lifetimes ranging from 1 to 5 Myr produce systems of the similar architecture class. The relative count of similar systems decreases as the disk lifetime increases. The relative count of the mixed architecture does not show any significant variation with disk lifetime. The relative counts of the anti-ordered and ordered architectures vary as the disk lifetime increases. This suggests that the physical mechanisms by which disks shape the final architectures of systems play a role in shaping the similar, anti-ordered, and ordered architectures.
+ The trends of the relative counts of architecture classes with
628
+ disk lifetime are similar to the distribution of relative counts as
629
+ functions of disk mass. We would like to understand whether
630
+ system architecture is influenced by disk lifetime directly or via
631
+ an inherent dependence of disk lifetime on disk mass. The right
632
+ panel of Fig. 3 shows the gas disk mass as a function of disk
633
+ lifetime. The scatter plot depicting each individual disk shows
634
+ that, generally, low-mass disks have short lifetimes. The solid
635
+ lines depict the average gas mass for each architecture class for
636
+ each disk lifetime bin.
637
+ The gas mass of the disks that go on to form systems of
638
+ mixed, anti-ordered, or ordered architecture shows a weak de-
639
+ pendence on disk lifetime. On average, the more massive disks
640
+ seem to last longer. For disks that give rise to the similar archi-
641
+ tecture class, this trend is clearly visible. If more massive disks
642
+ also live longer, this partly explains the relative count distribu-
643
+ tion seen in Fig. 3 (left).
644
+ However, disks also affect the planetary architecture in other
645
+ interesting ways, namely orbital migration and eccentricity, and
646
+ inclination damping. We study the effect of these planet–disk
647
+ interactions in shaping system architecture in Sect. 4.1.
648
¹ The architecture framework is not sensitive to the absolute value of a planetary quantity, such as mass, but only to the ratio of the quantities for adjacent planets. Independent of the architecture framework, we will present another system-level framework analysing the state of a planetary system. This other classification framework is sensitive to the absolute mass of a planet and will address the role of giant planets in system-level properties. The state classification framework reveals a drastic difference between systems with and without giant planets (Mishra et al., in prep.).
4. Nurture: Role of dynamical stirring

Whether or not the final architecture of a planetary system is pre-determined by its initial conditions from the host star and the protoplanetary disk remains unclear. If not, the mechanism by which dynamical processes shape the architecture of a planetary system remains to be determined. It is also unclear whether dynamical processes remove all traces of the initial conditions from the final system, or whether these stochastic processes leave their impressions on the final architecture. In this section, we try to answer these questions. We focus our attention on dynamical interactions between planets and the protoplanetary disk, and on the gravitational multi-body interactions amongst the planets themselves.

While several dynamical mechanisms shape the final architecture, we simplify the task before us by concentrating on violent dynamical instabilities that change a planetary system in a non-trivial manner. For each synthetic planetary system, we count the number of planet–planet mergers, planetary ejections, and planets falling into their host star. We use these counts as a proxy for the strength of the dynamical interactions that occur in a system. In the subsequent subsections, we study planet–disk interactions and planet–planet interactions (mergers, ejections, stellar accretion). These dynamical effects give rise to stochasticity and are thereby inherently unpredictable. However, we hope that the underlying dynamical processes that sculpt the system architecture emerge as patterns in the counts of these violent events.
4.1. Planet–disk interactions

Protoplanetary disks interact with planets via several mechanisms. Planets may experience orbital migration via gravitational interactions with the disk. Low-mass planets undergo type I migration, which in the Bern model is implemented following the approaches of Coleman & Nelson (2014) and Paardekooper et al. (2011). Massive planets may open a gap in the disk and undergo type II migration (Dittkrist et al. 2014). The disk also damps the eccentricity and inclination of planets, which is coherently applied within the N-body integrator. Readers interested in the details of the implementation are referred to Emsenhuber et al. (2021a,b).
Figure 4 (left) shows the count of mergers and ejections for each planetary system in our synthetic population as a function of the lifetime of its protoplanetary disk. For easier visualisation of any underlying trend, we also show the average merger and ejection counts for each disk-lifetime bin. The number of planet–planet mergers shows a clear correlation with disk lifetime: disks that live longer usually give rise to planetary systems that undergo more mergers than short-lived disks. We refer to this correlation as 'migration-assisted mergers'. One possible explanation for this correlation could be that disks allow planets to migrate depending on their mass². Two adjacent planets that are not migrating at the same rate, perhaps owing to their different masses, can come close enough for a merger to occur. The number of ejections does not show any clear trend with disk lifetime. Disks damp a planet's eccentricity and inclination. As ejection requires extremely violent interactions (marked by
² There could be other scenarios that contribute to the 'migration-assisted mergers' correlation. For example, migration may allow planets to become more massive by accreting more material owing to increased access to planetesimals (Alibert et al. 2005). Massive planets may interact more amongst themselves, leading to more mergers.

L. Mishra et al.: Architecture Framework II – Nature versus nurture: Emergent formation pathways of architecture classes
[Fig. 3: panels omitted. Left panel: relative count of planetary systems [%] vs. protoplanetary disk lifetime (1–17.8 Myr) for the similar, anti-ordered, mixed, and ordered classes. Right panel: protoplanetary disk gas mass vs. disk lifetime.]

Fig. 3. Role of disk lifetime on system architecture. Left: Binned relative counts of architecture classes as a function of disk lifetime. The length of the error bars corresponds to the total number of systems in each bin, as 100/√(bin counts). Right: Scatter plot showing the disk gas mass as a function of disk lifetime. The solid lines show the binned average gas disk mass for each architecture class.
[Fig. 4: panels omitted. Left panel: counts of mergers and ejections vs. disk lifetime (1–17.8 Myr). Right panel: density distributions of planet counts for mergers and ejections, with and without planet–disk interactions.]

Fig. 4. Effect of planet–disk interactions on architecture. Left: Scatter plot showing the number of planet–planet mergers and planetary ejections that occurred in systems as a function of disk lifetime. The solid lines show the average counts for each disk-lifetime bin. Right: Distribution of the total number of mergers (dashed) and ejections (solid) for the entire synthetic population. The black line depicts the nominal synthetic population, and the red line depicts a different synthetic population in which the planet–disk interactions were artificially switched off.
high eccentricities and inclinations), disks may essentially inhibit planetary ejections.

To test these ideas, we simulated another population of 1000 planetary systems. In this population (NG140), planet–disk interactions (gas-driven migration, and eccentricity and inclination damping) are artificially switched off. For all such systems, we count the number of mergers and ejections and compare them with our nominal population. Figure 4 (right) shows the distribution of the number of planet–planet mergers and planetary ejections in the two populations.

As expected, the number of planet–planet mergers decreases (the distribution shifts to the left) when planet–disk interactions are switched off. This confirms the migration-assisted-mergers correlation presented above. The distribution of ejections, on the other hand, shifts significantly towards higher counts when planet–disk interactions are switched off. When the damping of planetary eccentricity and inclination by the disk is switched off, the gravitational interactions between planets increase, such that many planets are ejected.

We make two observations from the results presented so far. First, counts of mergers and ejections seem to be a good proxy for the prevalence of dynamical interactions, as they capture some of the well-established dynamical effects concerning planet–disk interactions. Second, we observe that disks affect system architecture in a multitude of ways. While disk mass shows a direct relation to the final architecture, disks also affect system architecture indirectly by influencing the dynamical interactions that occur therein. Long-lived disks give rise to more mergers and inhibit planetary ejections. Conversely, systems emerging from short-lived disks experience fewer mergers.
4.2. Planet–planet interactions

Above, we showed that planet–disk interactions in the Bern model may influence the dynamical interactions occurring in a system. In this section, we are interested in understanding how these violent events shape the final architecture of a system.

Planets interact with each other gravitationally. These multi-body interactions are tracked via an N-body integrator in the Bern model. The end result of some of the more violent interactions is that planets are lost via one of several channels: planet–planet mergers³, planetary ejections, accretion by the host star, and so on. These channels allow a planetary system to fundamentally alter itself and its architecture.

Figure 5 shows, for each architecture class, the distribution of planet–planet mergers and of the number of planets lost via ejections and stellar accretion. At first glance, losing planets to the host star may not seem to belong under planet–planet interactions. However, many of these planets meet their fate, in the Bern model, when they are pushed inwards after being captured in mean-motion resonances with other planets⁴. Therefore, this channel of losing planets is included here. We caution the reader that the absolute number of planets lost via any channel is model-dependent. The quantity of interest here is the relative difference between the different architecture classes.
Figure 5 suggests that the similar architecture class is almost completely shaped by planet–planet mergers. Most similar systems in our simulations have between 40 and 80 mergers taking place within them, and the median number of mergers is 63. Violent dynamical interactions that lead to the ejection of planets seem to be very rare in this architecture class: 100% of all similar systems lose fewer than five planets via planetary ejection (the median number of ejections is 0). Likewise, similar systems seem not to rely on the stellar accretion channel for losing planets (the median number of stellar accretions is 0).

Systems with mixed architecture also undergo many planet–planet mergers. The number of mergers in mixed systems ranges from 50 to 85, and the median is 70. In clear contrast with similar architectures, the ejection and stellar accretion channels play an important role for mixed systems. The median number of planets lost via ejections is 7, and via stellar accretion it is 2.
Anti-ordered systems utilise all three dynamical channels. The distribution of mergers in anti-ordered systems is roughly similar to that of mixed systems: the range is between 50 and 85, and the median number of mergers is 67. However, anti-ordered systems tend to lose more planets via the ejection channel. The number of planets lost via dynamical ejection ranges from 0 to 35, with a median value of 14.5. Compared to mixed systems, anti-ordered systems also tend to lose more planets via stellar accretion (median of 6).

³ In our model, when the distance between two planets becomes smaller than the sum of their radii, a planet–planet collision is said to occur. We treat such merger events in a simplified manner: the cores of the target–impactor pair are merged, the less massive body loses its envelope, and the impact energy is added to the merged new body following Broeg & Benz (2012), which determines what part of the gaseous envelope is ejected.
⁴ The model also includes inward migration of planets as a result of stellar tides.
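The collision criterion and merger bookkeeping described in footnote 3 can be sketched as follows. This is a minimal illustration with assumed class and function names, not the Bern model code; the Broeg & Benz (2012) energy criterion for stripping part of the surviving envelope is omitted.

```python
from dataclasses import dataclass

@dataclass
class Planet:
    core_mass: float   # core mass [Earth masses]
    env_mass: float    # gaseous envelope mass [Earth masses]
    radius: float      # [Earth radii]

def is_collision(separation, p1, p2):
    """A planet-planet collision occurs when the separation drops
    below the sum of the two planetary radii."""
    return separation < p1.radius + p2.radius

def merge(p1, p2):
    """Simplified merger: cores are combined and the less massive
    body loses its envelope (energy-based envelope stripping omitted)."""
    target, impactor = sorted(
        (p1, p2), key=lambda p: p.core_mass + p.env_mass, reverse=True)
    return Planet(core_mass=target.core_mass + impactor.core_mass,
                  env_mass=target.env_mass,   # impactor envelope is lost
                  radius=target.radius)       # radius would be recomputed in the model
```

In the full model the post-impact radius and the retained envelope fraction follow from the impact energy; here they are placeholders.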
Amongst the four architecture classes, ordered systems seem to undergo the greatest number of dynamical interactions. The distribution of planet–planet mergers in ordered systems shows a tail-like feature. The number of mergers ranges from 55 to 85, with a median of 62. All ordered systems eject at least five planets; the number of ejections ranges from 5 to 35, and the median is 23. The distribution of planets lost via the stellar accretion channel shows a shift to the right: the number of planets accreted by the star ranges from 0 to 20, with a median of 8.
A comprehensive picture of the role of dynamical history in shaping the final architecture emerges from the four panels of Fig. 5. Similar systems tend to rely only on the merger channel for shaping their architecture. As planetary systems in all four architecture classes undergo a considerable number of mergers, this channel alone may not suffice to explain or distinguish the emergence of the four architecture classes. This is in line with what was found before, namely that the emergence of the similar class is mostly governed by the initial conditions.

While initial conditions seem to decide whether a system becomes similar or takes one of the other three architectures, there appears to be a trend in the role of dynamical interactions in shaping mixed, anti-ordered, and ordered architectures. The distributions of the ejection and accretion channels distinguish these three architectures: they shift to the right, indicating that more planets are lost via these two channels, as we move from mixed to anti-ordered to ordered architectures. Thus, we conclude that if the initial conditions do not allow a system to become similar, its fate is decided by its dynamical history, among other effects. As the strength of the dynamical interactions in a system increases, the architecture of the system changes from mixed to anti-ordered or to ordered.

All systems in the Bern model start with 100 protoplanetary embryos. Above, we showed that systems of different architectures show varying propensities to lose planets via the different dynamical channels. This suggests that we should also see an effect of the dynamical history of the four architecture classes in their multiplicity distributions; we observed this effect in Fig. 6 of Paper I. We do not have a way to determine the initial number of embryos of the planetary systems we observe today, so our approach may not be directly applicable to observed planetary systems. We remind the reader that while the quantitative aspects presented in this section are probably model-dependent, the qualitative nature of these results is of paramount importance.
5. The Aryabhata formation scenario

In this section, we propose a planet-formation scenario to explain a feature observed in Paper I (Sect. 5.4). We found that many synthetic planetary systems have a peculiar water-mass-fraction architecture, namely that all planets hosted in these systems are water-rich worlds. We explain this peculiar feature with the 'Aryabhata formation scenario'.

The first exoplanets to be discovered were hot Jupiters: giant planets orbiting their host stars at very short periods (Mayor & Queloz 1995). Orbital migration was suggested as a possible mechanism to explain these short periods (Lin et al. 1996; Lin & Ida 1997). Theoretical studies indicate that orbital migration and planet–star tidal interactions should make many close-in planets unstable. In the 1990s, Doug Lin described 'the last of the Mohicans' scenario (Garaud 2011). In this scenario,
[Fig. 5: four panels omitted, one per architecture class (Similar, Mixed, Anti-Ordered, Ordered). Each panel shows the distribution of systems [%] vs. planet counts (0–100) for planets lost via mergers, ejections, and stellar accretion.]

Fig. 5. Effect of planet–planet interactions on system architecture. For each architecture class, the panels show a histogram of the counts of planet–planet mergers, ejections, and stellar accretions occurring in the synthetic population. The y-axis in all panels is scaled to reflect the percentage of systems in each of the four architecture classes. For example, 100% of all similar systems lost fewer than five planets via planetary ejection.
the protoplanetary disk gives rise to planets, many of which are doomed to fall onto the star. The surviving, observable planets are those that were able to escape annihilation.

For some simulated systems, we noticed a modified version of this scenario. Protoplanetary disks seem to give rise to planets at different epochs. In the first epoch, several intermediate-mass planets (1–100 M⊕) form within the first 1 Myr. Most of these 'first-generation' planets are subsequently lost, mainly via giant impacts (and a few via orbital or tidal migration leading to stellar accretion). This purging phase is catastrophic for all planets that started within the ice line. Over the next few million years, a second epoch sees the advent of a 'second generation' of planets. Most of these second-generation planets are born outside the ice line and are able to migrate inwards during the disk lifetime. After disk dissipation, migration comes to a halt, and many of these planets survive the long-term N-body evolution in our simulations. We call this the Aryabhata formation scenario. The key differences between the two scenarios are that in the Aryabhata formation scenario (a) planets (surviving and lost) are born in different epochs, and (b) most first-generation planets are lost via giant impacts.
We quantify this scenario with the Aryabhata number, µ, which is the ratio of the number of surviving planets that started inside the ice line to the total number of surviving planets:

Aryabhata's number: µ = n(a_embryo^start ≤ a_ice) / n .    (2)

At the start of our calculations, all systems have an Aryabhata number of ≈ 0.5 ± 0.1. Figure 12 of Paper I (middle) shows the ice-mass-fraction architecture of simulated planetary systems; the colour of each point shows the Aryabhata number.
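Equation (2) amounts to a simple fraction over the surviving planets. A minimal sketch, with assumed argument names:

```python
def aryabhata_number(start_positions, a_ice):
    """Aryabhata's number mu (Eq. 2): the fraction of surviving planets
    whose embryo started inside the ice line.

    start_positions : starting semi-major axes of the surviving planets [au]
    a_ice           : location of the ice line [au]
    """
    n = len(start_positions)
    if n == 0:
        return float('nan')   # undefined for systems with no surviving planets
    return sum(a <= a_ice for a in start_positions) / n
```

A system formed via the Aryabhata scenario, where every survivor started beyond the ice line, has µ = 0.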
Most planetary systems with CS(f_ice) ≈ CV(f_ice) ≈ 0 have µ close to zero. This suggests that most (or all) of the surviving planets in such systems started outside the ice line; the formation path of these systems falls into the Aryabhata formation scenario. This class of systems can be identified by two characteristics: (i) the core water-mass fraction is similar for the different planets in a system, and (ii) the core water-mass fraction of most planets is high (owing to their origin outside the ice line), making them water-rich planets. Approximately one-fifth of the simulated systems fall into this scenario. Among these, about half are of the similar class, one-third are anti-ordered, and the remaining systems have either a mixed or an ordered mass architecture.

There exists an almost linear relationship between CV(f_ice) and µ. Using scipy's linear regression module, we obtain a slope of 1.8 and an intercept of 0.18 between these two quantities. The correlation coefficient is R = 0.95, indicating a strong correlation between the Aryabhata number and the coefficient of variation of the core water-mass fraction. This suggests a possibility of identifying observed exoplanetary systems that may have originated via the Aryabhata formation scenario: by determining the CV(f_ice) of a system, the Aryabhata number can be estimated, and systems with low µ values probably arose from this scenario.
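The fit quoted above can be reproduced with scipy.stats.linregress. The data points below are made up for illustration (they lie exactly on the quoted line µ = 1.8 CV + 0.18); the real fit over the synthetic population yields R = 0.95 rather than a perfect correlation.

```python
from scipy.stats import linregress

# Hypothetical (CV(f_ice), mu) pairs lying on the quoted relation.
cv_fice = [0.0, 0.1, 0.2, 0.3, 0.4]
mu      = [0.18, 0.36, 0.54, 0.72, 0.90]

fit = linregress(cv_fice, mu)
# fit.slope, fit.intercept, fit.rvalue give the slope, intercept,
# and correlation coefficient R of the least-squares line.
```

Given an observed CV(f_ice), the estimated Aryabhata number is then `fit.intercept + fit.slope * cv`.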
For systems that fall into the default scenario (positive CS(f_ice), implying a core water-mass fraction that increases from the inside out), the Aryabhata number is µ > 0. We note that most systems with µ ⪆ 0.6 show similarity in their mass architecture.

Overall, the intra-system core water-mass-fraction architecture of most planetary systems takes one of two forms. (i) Systems characterised by CS(f_ice) ≈ CV(f_ice) ≈ 0 and µ = 0. These are composed of water-rich planets wherein the core water-mass fraction is similar across the different planets; all surviving planets in these systems started outside the ice line. The Aryabhata formation scenario explains these systems. (ii) Systems with CS(f_ice) > 0 and µ > 0. These represent the 'default', or common, outcome of our simulations: the planetary core water-mass fraction increases from one planet to the next with increasing distance from the host star, and some of the surviving planets started inside the ice line. At the extreme end, systems in which 60% or more of the surviving planets started inside the ice line tend to have a similar mass architecture.
6. Summary, conclusions, and future work

Paper I of this series introduced a novel, model-independent framework for characterising the architecture of planetary systems at the system level. Planetary-system architectures can be separated into four classes: similar, mixed, anti-ordered, and ordered. This classification is achieved via two quantities: the coefficient of similarity and the coefficient of variation. The mathematical CS versus CV architecture space was found to have forbidden regions, that is, regions in which no planetary system can exist. In Paper I, the mass architecture classes of observed and synthetic systems were characterised, and the mass architecture of synthetic systems was compared with their radius, bulk-density, core-mass, spacing, and water-mass-fraction architectures. As in Paper I, we identify a system's architecture with its mass architecture.

In this paper, we explore the core-accretion-based formation pathways, around a solar-like star, of the four classes of planetary system architecture. We tried to disentangle the role of nature (the initial conditions of planet formation) from that of nurture (the physical processes occurring during planet formation). Our findings can be summarised as follows:
1. System-level analysis: Our findings show that a system-level analysis of planetary system architecture via our architecture framework (Paper I) provides an abundance of information. We show that planet formation and evolution processes leave their imprint on the entire system architecture.

2. Solid disk mass: The initial amount of solids in the protoplanetary disk in our models plays an important role in deciding the architectural fate of a planetary system. Disks with a solid mass (initial content of planetesimals) of ≲ 1 MJ almost always give rise to systems with similar architecture. Mixed architectures arise most often from disks with solid masses ≈ 1 MJ. Disks with solid mass ≳ 1 MJ favour the production of anti-ordered and ordered architectures.

3. Gas disk mass and metallicity: The initial gas disk mass and the stellar metallicity influence the final architecture of a planetary system by controlling the initial mass of solids in the disk. Metallicity, in our models, is simply related to the dust-to-gas ratio, which allows us to convert a fraction of the initial gas disk mass into initial dust mass (Eq. 1). Applying the architecture framework to the synthetic systems from the Bern model allows us to predict the existence of a metallicity–architecture correlation. The observed correlation between metallicity and final architecture is in qualitative agreement with the Bern model.

4. Metallicity–architecture correlation: The architecture of a planetary system correlates with the metallicity of the host star. Most systems hosted by a low-metallicity star ([Fe/H] < −0.2) are of similar architecture. As the metallicity of the star increases, mixed, ordered, and anti-ordered architectures become increasingly common.

5. Disk lifetime: The occurrence of systems of similar architecture around short-lived disks is high, and their frequency decreases around long-lived disks, while the frequency of anti-ordered architectures increases with disk lifetime. These correlations are mediated in at least two ways. First, disks interact with planets through orbital migration and eccentricity and inclination damping. Owing to the 'migration-assisted merger' correlation, long-lived disks allow planetary systems to have, in general, more planet–planet mergers, and they inhibit planetary ejections; these dynamical events shape a system's final architecture. Second, in our model, disk lifetimes are correlated with disk masses, which also strongly influence the system architecture.

6. Dynamical interactions: Planetary systems can significantly alter their architecture via (at least) three dynamical channels: planet–planet mergers, planetary ejections, and accretion by the host star. All architecture classes in our formation model were found to undergo numerous merger events. Similar systems rely entirely on mergers to shape their final architecture. As the strength of the dynamical interactions experienced by a system (quantified by the number of ejections and/or accretions) increases, the architecture of a system shifts from mixed to anti-ordered to ordered.

7. The Aryabhata formation scenario: Systems following this formation scenario share the following formation pathway. First-generation planets (formed within 1 Myr) are lost, mostly via giant impacts. Second-generation planets started outside the ice line and migrated inwards; the surviving planets are from this second generation and shape the architecture of the system. This scenario explains the roughly 20% of simulated systems in which the core water-mass-fraction architecture differs from the default scenario. Systems following this formation scenario (i) host only planets with a high core water-mass fraction, and (ii) host only planets that started outside the ice line. We introduce the Aryabhata number to identify systems that follow this formation scenario and find that 80% of all anti-ordered simulated systems formed via the Aryabhata formation scenario.

8. Nature versus nurture: Overall, our model suggests that initial conditions, or 'nature', dictate (via the initial disk mass) whether a system will have a similar architecture or one of the other three architecture classes, namely mixed, anti-ordered, or ordered. If nature does not allow a system to have a similar mass architecture, then the final architecture is controlled by 'nurture', that is, by dynamical interactions, among other possible effects. As the dynamical interactions increase, the final architecture tends to become mixed, then anti-ordered, and then ordered.
+ We would like to offer readers a word of caution when interpret-
1212
+ ing our results. Although the architecture framework (from Pa-
1213
+ per I) is model-independent, the present results hinge critically
1214
+ on the underlying planet formation model – the Bern model.
1215
+ There are several assumptions, simplifications, and choices to
1216
+ be made when simulating synthetic planetary systems using the
1217
+ Bern model. For example, the treatment of planet–planet merg-
1218
+ ing collisions is relatively simple (Ali-Dib et al. 2022). We also
1219
+ assume simplified planet-formation conditions; that is, our star–
1220
+ disk–planet system is isolated enough so that we may ignore the
1221
+ influence of the stellar neighbourhood, stellar flybys, and so on
1222
+ (Bate 2012, 2018). The main strength of this study does not lie in
1223
+ providing an explanation of the formation pathway of any partic-
1224
+ ular system. Instead, our main result is the observation that when
1225
+ groups of planetary systems are identified (architecture classes),
1226
+ general trends in formation pathways emerge. This allowed us to
1227
+ map the roles of nature and nurture in shaping the final architec-
1228
+ ture of a planetary system.
1229
+ The results of this study can be strengthened or challenged
1230
+ in several observational and theoretical ways. We list some pos-
1231
+ sibilities for future studies emerging from this work:
1232
+ 1. Linking disk mass distribution and architecture occur-
1233
+ rence rates: Our model suggests that there should be a direct
1234
+ relationship between the mass of the solid disk and the final
1235
+ architecture of a system. While initial disk masses and the
1236
+ final architecture of the same system will forever remain un-
1237
+ observable, this relation can be tested statistically. The dis-
1238
+ tribution of initial disk masses and the distribution of final
1239
+ system architecture can be linked by formation models. We
1240
+ speculate that in the future, when these two distributions become
1241
+ available, formation models can be used to predict one or the
1242
+ other. In fact, this problem can also be turned around; we can
1243
+ identify the right family of models as those that correctly link
1244
+ the observed distributions of protoplanetary disk masses and
1245
+ architecture occurrence rates. We believe such tests are cru-
1246
+ cial for the development and eventual emergence of a stan-
1247
+ dard model for exoplanetary astrophysics.
1248
+ 2. Metallicity–architecture correlation: Our work suggests
1249
+ that the current architecture of a planetary system should be
1250
+ related to the metallicity of its host star. As both of these
1251
+ are observable, testing this metallicity–architecture correla-
1252
+ tion should be feasible. Here, we used a catalogue of 41 ob-
1253
+ served multi-planet systems (from Paper I) to test this corre-
1254
+ lation. We find a qualitative agreement between theory and
1255
+ Article number, page 10 of 12
1256
+
1257
+ L. Mishra et al.: Architecture Framework II – Nature versus nurture: Emergent formation pathways of architecture classes
1258
+ observations. However, our observational catalogue suffers
1259
+ from incompleteness and low-number statistics, which pre-
1260
+ vents us from making any further assertions. More obser-
1261
+ vational data are required to confirm or reject the proposed
1262
+ metallicity-architecture correlation. It would also be interest-
1263
+ ing to estimate the current architecture occurrence rate based
1264
+ on the known metallicity distributions.
1265
+ 3. Confirming formation pathways: Confirming the forma-
1266
+ tion pathways discovered in the present study with obser-
1267
+ vations is challenging. However, the strength of our results
1268
+ will increase if different planet-formation models are stud-
1269
+ ied through the architecture framework. Hence, one possible
1270
+ line of future work involves repeating the present study using
1271
+ different planet-formation models.
1272
+ 4. Extending the architecture framework: So far, we have
1273
+ calibrated our classification scheme for the mass architec-
1274
+ tures only. Calibrating the architecture classification frame-
1275
+ work on other quantities maybe useful. Especially for plan-
1276
+ etary radii, which are observable via transit surveys, the use
1277
+ of machine learning methods may be necessary.
1278
+ 5. Temporal evolution of system architecture: In the nomi-
1279
+ nal Bern model population studied in this paper, protoplane-
1280
+ tary embryos of 100 lunar masses are initialised in the pro-
1281
+ toplanetary disk at the start. This necessarily implies that all
1282
+ planetary systems start as similar type systems. It would be
1283
+ interesting to inquire whether this is generally true in nature
1284
+ as well. If this is the case, this implies that the ‘default’ ar-
1285
+ chitecture of all planetary systems is similar and the phys-
1286
+ ical processes playing out in the system evolve this archi-
1287
+ tecture into other possibilities. Investigating this may lead to
1288
+ deep insights into the structure of planetary system architec-
1289
+ ture. In addition, such studies would be necessary to interpret
1290
+ the observed architecture occurrences, as observed planetary
1291
+ systems are seldom of the same age.
1292
+ 6. External perturbations: Stellar flybys or multi-planetary
1293
+ systems around binaries provide excellent theoretical and ob-
1294
+ servational laboratories with which to study the influence of
1295
+ external perturbations on the architecture of planetary sys-
1296
+ tems. This problem, when turned around, is also useful in
1297
+ deducing the perturbed or dynamical (or lack of) history of
1298
+ observed planetary systems.
1299
+ This paper presents new insights obtained by analysing plan-
1300
+ etary systems at the system-level. We showed that several pat-
1301
+ terns emerged in the formation pathways of the four architecture
1302
+ classes. These patterns linked the initial conditions of planet for-
1303
+ mation with the final architecture of a system – bridging the vast
1304
+ temporal gap of several billions of years between the birth of
1305
+ planets and their final assembly.
1306
+ Acknowledgements. This work has been carried out within the frame of the Na-
1307
+ tional Centre for Competence in Research PlanetS supported by the Swiss Na-
1308
+ tional Science Foundation. We acknowledge the support of the Swiss National
1309
+ Fund under grant 200020_172746 and 200021_204847 “PlanetsInTime”. LM ac-
1310
+ knowledges the generous hospitality of the "Planet Formation" workshop by the
1311
+ Munich Institute for Astro-, Particle and BioPhysics (MIAPbP) which is funded
1312
+ by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)
1313
+ under Germany’s Excellence Strategy – EXC-2094 – 390783311.
1314
+ Data: The synthetic planetary populations (NGPPS) used in this work are avail-
1315
+ able online at http://dace.unige.ch. Software: Python (Van Rossum &
1316
+ Drake 2009), NumPy (Oliphant 2006), Seaborn (Waskom & the seaborn de-
1317
+ velopment team 2020), Pandas (pandas development team 2020), Matplotlib
1318
+ (Hunter 2007).
1319
+ References
1320
+ Adams, F. C. 2019, MNRAS, 488, 1446
1321
+ Adams, F. C., Batygin, K., Bloch, A. M., & Laughlin, G. 2020, Monthly Notices
1322
+ of the Royal Astronomical Society, 493, 5520
1323
+ Adibekyan, V., Santos, N. C., Demangeon, O. D. S., et al. 2021, Astronomy &
1324
+ Astrophysics, 649, A111
1325
+ Ali-Dib, M., Cumming, A., & Lin, D. N. C. 2022, MNRAS, 509, 1413
1326
+ Alibert, Y. 2019, Astronomy & Astrophysics, 624, A45
1327
+ Alibert, Y., Carron, F., Fortier, A., et al. 2013, Astronomy & Astrophysics, 558,
1328
+ A109
1329
+ Alibert, Y., Mordasini, C., & Benz, W. 2004, Astronomy & Astrophysics, 417,
1330
+ L25
1331
+ Alibert, Y., Mordasini, C., & Benz, W. 2011, A&A, 526, A63
1332
+ Alibert, Y., Mordasini, C., Benz, W., & Winisdoerffer, C. 2005, Astronomy &
1333
+ Astrophysics, 434, 343
1334
+ Armitage, P. J. 2010, Astrophysics of Planet Formation
1335
+ Baraffe, I., Homeier, D., Allard, F., & Chabrier, G. 2015, Astronomy & Astro-
1336
+ physics, 577, A42
1337
+ Bashi, D. & Zucker, S. 2021, A&A, 651, A61
1338
+ Bate, M. R. 2012, MNRAS, 419, 3115
1339
+ Bate, M. R. 2018, MNRAS, 475, 5618
1340
+ Benz, W., Ida, S., Alibert, Y., Lin, D., & Mordasini, C. 2014, in Protostars and
1341
+ Planets VI, ed. H. Beuther, R. Klessen, C. Dullemond, & T. Henning (Uni-
1342
+ versity of Arizona, Tucson), 691–713
1343
+ Broeg, C. H. & Benz, W. 2012, A&A, 538, A90
1344
+ Burn, R., Schlecker, M., Mordasini, C., et al. 2021, Astronomy & Astrophysics,
1345
+ 656, A72
1346
+ Chambers, J. E. 1999, Monthly Notices of the Royal Astronomical Society, 304,
1347
+ 793
1348
+ Ciardi, D. R., Fabrycky, D. C., Ford, E. B., et al. 2013, The Astrophysical Jour-
1349
+ nal, 763, 41
1350
+ Clarke, C. J., Gendrin, A., & Sotomayor, M. 2001, Monthly Notices of the Royal
1351
+ Astronomical Society, 328, 485
1352
+ Coleman, G. A. & Nelson, R. P. 2014, Monthly Notices of the Royal Astronom-
1353
+ ical Society, 445, 479
1354
+ Dittkrist, K. M., Mordasini, C., Klahr, H., Alibert, Y., & Henning, T. 2014, As-
1355
+ tronomy & Astrophysics, 567 [arXiv:1402.5969]
1356
+ Emsenhuber, A., Mordasini, C., Burn, R., et al. 2021a, Astronomy & Astro-
1357
+ physics, 656, A69
1358
+ Emsenhuber, A., Mordasini, C., Burn, R., et al. 2021b, Astronomy & Astro-
1359
+ physics, 656, A70
1360
+ Fabrycky, D. C., Lissauer, J. J., Ragozzine, D., et al. 2014, The Astrophysical
1361
+ Journal, 790, 146
1362
+ Fang, J. & Margot, J.-L. 2013, The Astrophysical Journal, 767, 115
1363
+ Fortier, A., Alibert, Y., Carron, F., Benz, W., & Dittkrist, K.-M. 2013, Astronomy
1364
+ & Astrophysics, 549, A44
1365
+ Garaud, P. 2011, The Astrophysical Journal Letters, 728, L30
1367
+ Gilbert, G. J. & Fabrycky, D. C. 2020, The Astronomical Journal, 159, 281
1368
+ Gladman, B. 1993, Icarus, 106, 247
1369
+ He, M. Y., Ford, E. B., & Ragozzine, D. 2019, Monthly Notices of the Royal
1370
+ Astronomical Society, 490, 4575
1371
+ He, M. Y., Ford, E. B., & Ragozzine, D. 2021, AJ, 161, 16
1372
+ Hueso, R. & Guillot, T. 2005, Astronomy & Astrophysics, 442, 703
1373
+ Hunter, J. D. 2007, Computing in science & engineering, 9, 90
1374
+ Jin, S., Mordasini, C., Parmentier, V., et al. 2014, ApJ, 795, 65
1375
+ Kipping, D. 2018, Monthly Notices of the Royal Astronomical Society, 473, 784
1376
+ Kokubo, E. & Ida, S. 1998, Icarus, 131, 171
1377
+ Kokubo, E. & Ida, S. 2002, The Astrophysical Journal, 581, 666
1378
+ Laskar, J. 1997, Large scale chaos and the spacing of the inner planets., Tech.
1379
+ rep.
1380
+ Laskar, J. 2000, Physical Review Letters, 84, 3240
1381
+ Laskar, J. & Petit, A. C. 2017, Astronomy & Astrophysics, 605, 1
1382
+ Lin, D. N., Bodenheimer, P., & Richardson, D. C. 1996, Nature, 380, 606
1384
+ Lin, D. N. C. & Ida, S. 1997, The Astrophysical Journal, 477, 781
1386
+ Lissauer, J. J., Ragozzine, D., Fabrycky, D. C., et al. 2011, The Astrophysical
1387
+ Journal Supplement Series, 197, 8
1388
+ Lodders, K. 2003, The Astrophysical Journal, 591, 1220
1389
+ Lynden-Bell, D. & Pringle, J. E. 1974, Monthly Notices of the Royal Astronom-
1390
+ ical Society, 168, 603
1391
+ Manara, C. F., Mordasini, C., Testi, L., et al. 2019, Astronomy & Astrophysics,
1392
+ 631, L2
1393
+ Marboeuf, U., Thiabaud, A., Alibert, Y., Cabral, N., & Benz, W. 2014a, Astron-
1394
+ omy and Astrophysics, 570 [arXiv:1407.7282]
1395
+ Marboeuf, U., Thiabaud, A., Alibert, Y., Cabral, N., & Benz, W. 2014b, Astron-
1396
+ omy and Astrophysics, 570 [arXiv:1407.7271]
1397
+ Matsuyama, I., Johnstone, D., & Murray, N. 2003, The Astrophysical Journal,
1398
+ 585, L143
1399
+ Mayor, M. & Queloz, D. 1995, Nature, 378, 355
1400
+ A&A proofs: manuscript no. 44705corr
1403
+ Millholland, S., Wang, S., & Laughlin, G. 2017, The Astrophysical Journal, 849,
1404
+ L33
1405
+ Millholland, S. C. & Winn, J. N. 2021, ApJ, 920, L34
1406
+ Mishra, L., Alibert, Y., Leleu, A., et al. 2021, Astronomy & Astrophysics, 656,
1407
+ A74
1408
+ Mishra, L., Alibert, Y., & Udry, S. 2019, in EPSC-DPS Joint Meeting 2019, held
1409
+ 15-20 September 2019 in Geneva, Switzerland, id. EPSC-DPS2019-1616,
1410
+ Vol. 2019, EPSC–DPS2019–1616
1411
+ Mishra, L., Alibert, Y., Udry, S., & Mordasini, C. 2023, Astronomy & Astro-
1412
+ physics
1413
+ Mordasini, C. 2018, in Handbook of Exoplanets, ed. H. J. Deeg & J. A. Bel-
1414
+ monte, 143
1415
+ Mordasini, C., Alibert, Y., & Benz, W. 2009, Astronomy & Astrophysics, 501,
1416
+ 1139
1417
+ Mordasini, C., Alibert, Y., Georgy, C., et al. 2012a, Astronomy & Astrophysics,
1418
+ 547, A112
1419
+ Mordasini, C., Alibert, Y., Klahr, H., & Henning, T. 2012b, Astronomy & Astro-
1420
+ physics, 547, A111
1421
+ Mulders, G. D., O’Brien, D. P., Ciesla, F. J., Apai, D., & Pascucci, I. 2020
1422
+ Mulders, G. D., Pascucci, I., Ciesla, F. J., & Fernandes, R. B. 2021
1423
+ [arXiv:2107.12520]
1424
+ Nakamoto, T. & Nakagawa, Y. 1994, The Astrophysical Journal, 421, 640
1425
+ Obertas, A., Van Laerhoven, C., & Tamayo, D. 2017, Icarus [arXiv:1703.08426]
+ Oliphant, T. E. 2006, A guide to NumPy, Vol. 1 (Trelgol Publishing USA)
1437
+ Paardekooper, S. J., Baruteau, C., & Kley, W. 2011, Monthly Notices of the
1438
+ Royal Astronomical Society, 410, 293
1439
+ pandas development team, T. 2020, pandas-dev/pandas: Pandas
1440
+ Petigura, E. A., Marcy, G. W., Winn, J. N., et al. 2018, The Astronomical Journal,
1441
+ 155, 89
1442
+ Petit, A. C., Laskar, J., & Boué, G. 2018, Astronomy & Astrophysics, 617, A93
1443
+ Pollack, J. B., Hubickyj, O., Bodenheimer, P., et al. 1996, Icarus, 124, 62
1444
+ Pu, B. & Wu, Y. 2015, The Astrophysical Journal, 807, 44
1446
+ Sandford, E., Kipping, D., & Collins, M. 2021, Monthly Notices of the Royal Astronomical Society, 505, 2224
1448
+ Santos, N. C., Israelian, G., Mayor, M., et al. 2005, Astronomy & Astrophysics,
1449
+ 437, 1127
1450
+ Sarkis, P., Mordasini, C., Henning, T., Marleau, G. D., & Mollière, P. 2021,
1451
+ A&A, 645, A79
1452
+ Schib, O., Mordasini, C., Wenger, N., Marleau, G. D., & Helled, R. 2021, A&A,
1453
+ 645, A43
1454
+ Schlecker, M., Mordasini, C., Emsenhuber, A., et al. 2021a, Astronomy and As-
1455
+ trophysics, 656, A71
1456
+ Schlecker, M., Pham, D., Burn, R., et al. 2021b, Astronomy and Astrophysics,
1457
+ 656, A73
1458
+ Shakura, N. I. & Sunyaev, R. A. 1973, Astronomy & Astrophysics, 24, 337
1459
+ Tamayo, D., Gilbertson, C., & Foreman-Mackey, D. 2020, Stability constrained
1460
+ characterization of multiplanet systems
1461
+ Thiabaud, A., Marboeuf, U., Alibert, Y., et al. 2014, Astronomy & Astrophysics,
1462
+ 562 [arXiv:1312.3085]
1463
+ Tremaine, S. 2015, Astrophysical Journal, 807, 157
1464
+ Turrini, D., Zinzi, A., & Belinchon, J. A. 2020, Astronomy and Astrophysics,
1465
+ 636 [arXiv:2003.05366]
1466
+ Udry, S. & Santos, N. C. 2007, Annual Review of Astronomy and Astrophysics,
1467
+ 45, 397
1468
+ Van Rossum, G. & Drake, F. L. 2009, Python 3 Reference Manual (Scotts Valley,
1469
+ CA: CreateSpace)
1470
+ Veras, D. & Armitage, P. J. 2004, Monthly Notices of the Royal Astronomical
1471
+ Society, 347, 613
1472
+ Wang, Y., lin Zhou, J., yao Liu, F., et al. 2019, Monthly Notices of the Royal
1473
+ Astronomical Society, 490, 359
1474
+ Waskom, M. & the seaborn development team. 2020, mwaskom/seaborn
1475
+ Weiss, L. M., Marcy, G. W., Petigura, E. A., et al. 2018, The Astronomical Jour-
1476
+ nal, 155, 48
1477
+ Winter, A. J., Kruijssen, J. M., Longmore, S. N., & Chevance, M. 2020, Nature,
1478
+ 586, 528
1479
+ Yeh, L.-C., Jiang, I.-G., & Gajendran, S. 2020, Astrophysics and Space Science,
1480
+ 365 [arXiv:2012.09431]
1481
0dE0T4oBgHgl3EQfdQAz/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
2dAyT4oBgHgl3EQfPvZl/content/tmp_files/2301.00030v1.pdf.txt ADDED
@@ -0,0 +1,642 @@
+ arXiv:2301.00030v1 [math-ph] 28 Dec 2022
+ Duality family of KdV equation
+ Xin Gu,^a Yuan-Yuan Liu,^b Wen-Du Li,^{c,1} and Wu-Sheng Dai^{a,2}
+ ^a Department of Physics, Tianjin University, Tianjin 300350, P.R. China
+ ^b Theoretical Physics Division, Chern Institute of Mathematics, Nankai University, P.R. China
+ ^c College of Physics and Materials Science, Tianjin Normal University, Tianjin 300387, P.R. China
+ Abstract: It is revealed that there exist duality families of the KdV-type equation. A
+ duality family consists of an infinite number of generalized KdV (GKdV) equations.
+ A duality transformation relates the GKdV equations in a duality family. Once one family
+ member is solved, the duality transformation yields the solutions of all other family
+ members. We show some dualities as examples, such as the soliton solution–soliton solution
+ duality and the periodic solution–soliton solution duality.
+ 1 liwendu@tjnu.edu.cn
+ 2 daiwusheng@tju.edu.cn
+
16
+ Contents
+ 1 Introduction
+ 2 Duality family of GKdV equation
+ 3 Duality family of KdV equation: Example
+ 4 Conclusion
+ 1 Introduction
+ After Russell observed the solitary wave phenomenon, the study of nonlinear evolution
+ equations began in physics and mathematics [1]. When Korteweg and de Vries studied water
+ waves in the long-wave, small-amplitude approximation, they derived the Korteweg–de
+ Vries (KdV) equation [1–3],
+ ∂u/∂t − 6u ∂u/∂x + ∂³u/∂x³ = 0.  (1.1)
+ The KdV equation is a basic model among nonlinear evolution equations [4, 5]. The KdV
+ equation describes many physical phenomena, such as waves in anharmonic crystals [6], waves
+ in bubbly liquid mixtures [7], ion acoustic waves [8–10], and waves in warm plasma [8–10].
+ Soliton solution. The solitary wave solutions of the KdV equation are known as solitons.
+ The velocity of a solitary wave is related to its amplitude [11], and after a collision it
+ retains its original amplitude, shape, and velocity [12, 13]. The theory of solitons appears in
+ biochemistry, nonlinear optics, mathematical biosciences, fluid dynamics, plasma physics,
+ nuclear physics, and geophysics [14]. There are many approaches to calculating
+ soliton solutions [15, 16], such as the Painlevé analysis method, the Bäcklund transformation
+ method, the Hirota bilinear method, the inverse scattering method, and the Darboux
+ transformation method [1]. These methods apply not only to the soliton solution of the KdV
+ equation but also to other partial differential equations [17]. These methods have different
+ limitations in application, and there is no universal method for solving nonlinear partial
+ differential equations in general [18].
+ Modified KdV (mKdV) equation and generalized KdV (GKdV) equation. The KdV
+ equation is a special case of the GKdV equation. The GKdV equation in general reads [19]
+ ∂u/∂t − f(u) ∂u/∂x + ∂³u/∂x³ = 0.  (1.2)
+ The GKdV equation recovers the KdV equation (1.1) when f(u) = 6u.
+ A special GKdV equation with f(u) = −αu^k is the KdV-type equation with a power-law
+ nonlinearity [20],
+ ∂u/∂t + αu^k ∂u/∂x + ∂³u/∂x³ = 0,  (1.3)
+ – 1 –
72
+
73
+ and the mKdV equation is Eq. (1.3) with k = 2 and α = 6 [21]. The Miura transformation
+ establishes a one-to-one correspondence between the solutions of the KdV equation
+ and the solutions of the mKdV equation [22]. The mKdV equation has a rich physical
+ background [23, 24]. It can describe a bound particle propagating in
+ a one-dimensional nonlinear lattice with a harmonic force [25], small-amplitude ion acoustic
+ waves propagating in plasmas [8], and thermal pulses propagating through a
+ single crystal of sodium fluoride [26, 27].
80
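The Miura correspondence mentioned above can be checked symbolically. In one standard sign convention (a textbook form, not spelled out in this paper; conventions differ across references), if v solves the mKdV equation v_t − 6v²v_x + v_xxx = 0, then u = v² + v_x solves the KdV equation (1.1); concretely, KdV[u] = (∂_x + 2v) mKdV[v] holds as an identity in v. A minimal sympy sketch:

```python
import sympy as sp

x, t = sp.symbols('x t')
v = sp.Function('v')(x, t)

# mKdV operator: v_t - 6 v^2 v_x + v_xxx
mkdv = sp.diff(v, t) - 6*v**2*sp.diff(v, x) + sp.diff(v, x, 3)

u = v**2 + sp.diff(v, x)                     # Miura transformation
# KdV operator applied to u: u_t - 6 u u_x + u_xxx
kdv = sp.diff(u, t) - 6*u*sp.diff(u, x) + sp.diff(u, x, 3)

# operator identity KdV[u] = (d/dx + 2v) mKdV[v], valid for arbitrary v(x, t)
identity = sp.expand(kdv - (sp.diff(mkdv, x) + 2*v*mkdv))
print(identity)  # 0
```

Because the identity holds for an arbitrary function v, any mKdV solution is mapped to a KdV solution by the Miura transformation.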
+ Duality and duality family. Newton in the Principia revealed a duality between gravitation
+ and elasticity in classical mechanics, now called the Newton–Hooke duality [28]. E. Kasner
+ and V. I. Arnol’d independently found a generalized duality between power potentials: two
+ power potentials U(r) = ξr^a and V(r) = ηr^A are dual if (a + 2)/2 = 2/(A + 2), called the
+ Kasner–Arnol’d theorem [29–31].
89
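As a concrete instance of this condition (a standard example, not taken from this paper): the harmonic potential, a = 2, and the Coulomb potential, A = −1, satisfy it, which is just the Newton–Hooke duality:

```latex
\frac{a+2}{2}=\frac{2+2}{2}=2,
\qquad
\frac{2}{A+2}=\frac{2}{-1+2}=2 .
```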
+ Recently, we found that such a duality exists generally in classical mechanics, quantum
+ mechanics, and scalar field theory, and we presented the duality among arbitrary potentials
+ [32]. The duality is not only a duality between two potentials: there exist duality families
+ [32]. Each duality family consists of an infinite number of potentials; in a duality family, every
+ potential is dual to all other potentials. Once one family member’s solution is obtained, we can
+ obtain all other members’ solutions by the duality transformation. Therefore, the duality
+ relation can be used to find solutions in classical mechanics, quantum mechanics, field
+ theory, and nonlinear equations (such as the Gross-Pitaevskii equation) [33–35]. The duality
+ can also be used to classify long-range potentials in quantum mechanics [36].
99
+ In this paper, we reveal dualities and duality families for the GKdV equation. The
+ duality transformation converts the solution of a GKdV equation into the solution of
+ its dual GKdV equation. A GKdV duality family consists of an infinite number
+ of GKdV equations that are dual to each other. The solutions of all GKdV equations in a
+ duality family can be obtained from the solution of one solved family member by the duality
+ transformation. In this way, we can obtain a series of exact solutions of GKdV equations.
+ As an example, we discuss the duality family in which the
+ KdV equation (1.1) and the KdV-type equation with a power-law nonlinearity (1.3) are
+ family members. The duality transformation gives a series of 1-soliton solutions of GKdV
+ equations from a 1-soliton solution of the KdV equation (1.1). We also consider the duality
+ between the periodic solution of the KdV equation and the soliton solution of the mKdV
+ equation.
113
+ In particular, since the solution of all GKdV equations in a duality family can be
114
+ obtained from the solution of one family member by the duality transformation, we can
115
+ develop an indirect approach for solving GKdV equations: (1) constructing the duality
116
+ family of this equation; (2) looking for an ‘easy’ equation in the duality family and solving
117
+ the ‘easy’ equation; (3) solving the wanted equation by the duality transformation.
118
+ In section 2, we present the duality and duality family of the GKdV equation. In section
119
+ 3, we consider two examples: (1) solving the KdV equation with a power-law nonlinearity
120
+ from the KdV equation by the duality transformation; (2) the duality between the periodic
121
+ solution of the KdV equation and the soliton solution of the mKdV equation. The conclusion
124
+ is given in section 4.
125
+ 2 Duality family of GKdV equation
+ In this section, we give the duality and the duality family of the traveling-wave GKdV
+ equation. The solutions of a GKdV equation can be obtained from its dual equation by the
+ duality transformation.
+ The traveling wave with a velocity C of the GKdV equation (1.2) is given by
+ d³u/dz³ + [C − f(u)] du/dz = 0,  (2.1)
+ where u(x, t) = u(z) and z = x + Ct.
+ The traveling-wave GKdV equation (2.1) has the following duality relation.
+ Two traveling-wave GKdV equations,
+ d³u/dz³ + [C − f(u)] du/dz = 0,  (2.2)
+ d³v/dζ³ + [C̄ − g(v)] dv/dζ = 0,  (2.3)
+ are dual to each other if
+ (1/C) u^{-2} [G − U(u) − Fu] = (1/C̄) v^{-2} [Ḡ − V(v) − F̄v],  (2.4)
+ where the barred quantities belong to the dual equation (2.3) and
+ d²U(u)/du² = −f(u),  (2.5)
+ d²V(v)/dv² = −g(v),  (2.6)
+ F = −[d²u/dz² + Cu + dU(u)/du],  (2.7)
+ F̄ = −[d²v/dζ² + C̄v + dV(v)/dv],  (2.8)
+ G = (1/2)(du/dz)² + (1/2)Cu² + U(u) + Fu,  (2.9)
+ Ḡ = (1/2)(dv/dζ)² + (1/2)C̄v² + V(v) + F̄v;  (2.10)
+ then their solutions satisfy
+ u ↔ v^σ,  (2.11)
+ z ↔ √(C̄/C) σζ.  (2.12)
+ Here σ is an arbitrarily chosen constant.
201
+ Integral of motion. Before going on, we first illustrate the meaning of G, F, Ḡ, and F̄,
+ taking G and F as examples.
+ Broadly speaking, G and F are both integrals of motion for the equation of motion (2.2).
+ In principle, an integral of the equation of motion over time is known as an integral of
+ motion. Here G and F are the integration constants obtained by integrating the traveling-wave
+ equation (2.2) over z and u, respectively; we still call them integrals of motion.
+ Multiplying both sides of the GKdV equation (2.2) by dz, integrating, and using (2.5)
+ gives d²u/dz² + Cu + dU(u)/du = −F, i.e., Eq. (2.7), where F is the integration constant of
+ the integral over z.
+ Similarly, multiplying both sides of (2.7) by du and integrating gives (1/2)(du/dz)² +
+ (1/2)Cu² + U(u) + Fu = G, i.e., Eq. (2.9), where G is the integration constant of the integral
+ over u and ∫ du (d²u/dz²) = ∫ dz (du/dz)(d²u/dz²) = (1/2) ∫ dz (d/dz)(du/dz)² = (1/2)(du/dz)²
+ is used.
+ Proof of duality relation. Substituting the duality transformations (2.11) and (2.12)
+ into (2.7) gives
+ (C/C̄) d²v/dζ² + (C/C̄)(σ − 1) v^{-1} (dv/dζ)² + σCv + v^{2(1−σ)} dU(v^σ)/dv + σv^{1−σ}F = 0.  (2.13)
+ By (2.9), we have
+ (C/C̄)(σ − 1) v^{-1} (dv/dζ)² = 2(σ − 1) v^{1−2σ} [G − U(v^σ) − Fv^σ] − C(σ − 1)v.  (2.14)
+ Using (2.14) to eliminate the term (σ − 1) v^{-1} (dv/dζ)² in (2.13), we arrive at
+ (C/C̄) d²v/dζ² + Cv + 2(σ − 1) v^{1−2σ} [G − U(v^σ) − Fv^σ] + v^{2(1−σ)} dU(v^σ)/dv + σv^{1−σ}F = 0.  (2.15)
+ By the duality transformation (2.4), we can obtain
+ V(v) = Ḡ − F̄v − (C̄/C) v^{2−2σ} [G − U(v^σ) − Fv^σ].  (2.16)
+ Taking the derivative of (2.16) with respect to v gives
+ dV(v)/dv = −F̄ + 2(C̄/C)(σ − 1) v^{1−2σ} [G − U(v^σ) − Fv^σ] + (C̄/C) v^{2(1−σ)} [dU(v^σ)/dv + σv^{σ−1}F].  (2.17)
+ Substituting (2.17) into (2.15) gives
+ d²v/dζ² + C̄v + dV(v)/dv + F̄ = 0.  (2.18)
+ Then taking the derivative with respect to ζ and using (2.6), we arrive at (2.3).
+ Discussion of U. The relation between f(u) in the GKdV equation (2.2) and U(u) in
+ (2.5) is not unique: U(u; a, b) = U(u) + au + b and U(u) lead to the same f(u), and both
+ correspond to the GKdV equation (1.2).
+ The integral of motion F corresponding to U(u; a, b), by (2.7), is F(a, b) = −[d²u/dz² +
+ Cu + dU(u; a, b)/du] = F − a; the integral of motion G corresponding to U(u; a, b), by (2.9),
+ is G(a, b) = (1/2)(du/dz)² + (1/2)Cu² + U(u; a, b) + F(a, b)u = G + b. Therefore, by (2.4),
+ the duality transformation given by U(u; a, b) is
+ (1/C) u^{-2} [G(a, b) − U(u; a, b) − F(a, b)u] = (1/C̄) v^{-2} [Ḡ − V(v; a, b) − F̄v].  (2.19)
+ Here V(v; a, b) is the dual of U(u; a, b).
+ Substituting U(u; a, b), F(a, b), and G(a, b) into the duality transformation (2.19) gives
+ V(v; a, b) = Ḡ − F̄v − (C̄/C) v^{2−2σ} [G − U(v^σ) − Fv^σ] = V(v).  (2.20)
+ That is, although the correspondence between f(u) and U(u) in the GKdV equation is not
+ unique, with the same f(u) corresponding to different U(u), the choice of U(u) does not
+ influence the duality of the GKdV equation.
+ 3 Duality family of KdV equation: Example
+ We consider a special duality family of the GKdV equation as an example in this section.
+ The KdV equation and the mKdV equation are members of this duality family. The
+ solutions of all family members in a duality family are related by a duality transformation.
+ In a duality family containing the KdV equation, we can solve all the GKdV equations
+ in the family from the solution of the KdV equation by the duality transformation. In
+ this section, we obtain the solution of the KdV equation with a power-law nonlinearity from
+ the solution of the KdV equation; the mKdV equation is the power-law-nonlinearity KdV
+ equation with power 2.
+ Duality family of the KdV equation and the KdV equation with a power-law nonlinearity.
+ The KdV equation (1.1) with z = x − Ct,
+ d³u/dz³ − (C + 6u) du/dz = 0,  (3.1)
+ has a 1-soliton solution [37]
+ u(z) = −(C/2) sech²(√C z / 2).  (3.2)
+ The soliton solution is a localized traveling-wave solution; for the 1-soliton solution (3.2),
+ localization means that u(z) → 0 as z → ±∞. The
+ integrals of motion of the 1-soliton solution (3.2), by (2.7), (2.9), and (3.2), are
+ F = 0 and G = 0.  (3.3)
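Equations (3.1)-(3.3) can be verified symbolically. The sketch below is a verification aid, not part of the original text; it assumes C > 0 and reads (2.9) with C → −C, because z = x − Ct here, together with U(u) = −u³ from d²U/du² = −6u and F = 0:

```python
import sympy as sp

z, C = sp.symbols('z C', positive=True)
u = -C/2 * sp.sech(sp.sqrt(C)/2 * z)**2          # 1-soliton solution (3.2)

# traveling-wave KdV equation (3.1): u''' - (C + 6u) u' = 0
ode = sp.diff(u, z, 3) - (C + 6*u)*sp.diff(u, z)
print(sp.simplify(ode.rewrite(sp.exp)))          # 0

# integral of motion G from (2.9), with C -> -C, U(u) = -u^3, F = 0
G = sp.diff(u, z)**2/2 - C*u**2/2 - u**3
print(sp.simplify(G.rewrite(sp.exp)))            # 0, consistent with (3.3)
```

Rewriting sech in exponentials turns both expressions into rational functions of exp(√C z/2), which simplify cancels exactly.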
+ Then the dual equation of the traveling-wave KdV equation, given by the duality
+ transformation (2.4), is
+ d³v/dζ³ − [C̄ + (C̄/C)(2 + σ)(1 + σ) v^σ] dv/dζ = 0.  (3.4)
+ Since σ can be chosen arbitrarily, (3.4) is not a single equation but forms a duality family.
+ All the GKdV equations labeled by different σ in the duality family are dual equations of
+ the KdV equation.
+ By (2.11) and (2.12), we can obtain the solution of Eq. (3.4),
+ v (ζ) = [−(C/2) sech^2((√C̄/2) σζ)]^(1/σ), (3.5)
+ where ζ = x − C̄t describes a traveling wave with velocity C̄.
+ Instead of z, rewrite the dual equation (3.4) in terms of (t, x):
+ ∂v/∂t + α v^σ ∂v/∂x + ∂^3v/∂x^3 = 0, (3.6)
+ where α = −(C̄/C)(2 + σ)(1 + σ). When σ is taken as a positive integer, (3.6) is the KdV
+ equation with a power-law nonlinearity, and the solution (3.5) becomes
+ v (x, t) = [−(C/2) sech^2((√C̄/2) σ(x − C̄t))]^(1/σ), (3.7)
+ or equivalently, v (x, t) = [C̄(2 + σ)(1 + σ) / (2α cosh^2((√C̄/2) σ(x − C̄t)))]^(1/σ),
+ which agrees with Ref. [38].
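As a cross-check of (3.6) and (3.7), one can substitute the solution back into the equation. This sympy sketch is ours, not from the paper, and uses sample constants σ = 3, C̄ = 2, and C = −2, so that α = −(C̄/C)(2 + σ)(1 + σ) = 20 and the amplitude −C/2 = 1 is real:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
sigma, Cbar = 3, 2
alpha = sp.Integer(20)                            # = -(Cbar/C)(2+sigma)(1+sigma) with C = -2

amp = sp.Rational(Cbar*(sigma + 1)*(sigma + 2), 2)/alpha      # = -C/2 = 1 here
v = (amp*sp.sech(sp.sqrt(Cbar)/2*sigma*(x - Cbar*t))**2)**sp.Rational(1, sigma)

# residual of (3.6): v_t + alpha v^sigma v_x + v_xxx
residual = sp.diff(v, t) + alpha*v**sigma*sp.diff(v, x) + sp.diff(v, x, 3)
for xx, tt in [(0.3, 0.0), (-1.1, 0.5), (2.0, -0.7)]:
    assert abs(sp.N(residual.subs({x: xx, t: tt}))) < 1e-9
print("ok")
```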
+ In this duality family, the family member σ = 1 is the KdV equation (1.1), and the
+ family member σ = 2 is the mKdV equation
+ ∂v/∂t − 12(C̄/C) v^2 ∂v/∂x + ∂^3v/∂x^3 = 0. (3.8)
+ (3.7) with σ = 2 gives the 1-soliton solution of the mKdV equation (3.8),
+ v (x, t) = ±√(−C/2) sech(√C̄ (x − C̄t)). (3.9)
+ Now, by the duality relation, we have obtained all family members’ solutions from the KdV
+ equation’s solution.
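The special case (3.8)-(3.9) can be checked the same way. A sympy sketch of ours, with sample values C = −2 and C̄ = 2 chosen so that √(−C/2) is real:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
C, Cbar = -2, 2                                   # sample values; C < 0 keeps the amplitude real
v = sp.sqrt(sp.Rational(-C, 2))*sp.sech(sp.sqrt(Cbar)*(x - Cbar*t))

# residual of the mKdV equation (3.8): v_t - 12 (Cbar/C) v^2 v_x + v_xxx
residual = sp.diff(v, t) - 12*sp.Rational(Cbar, C)*v**2*sp.diff(v, x) + sp.diff(v, x, 3)
for xx, tt in [(0.0, 0.0), (1.2, 0.3), (-0.7, -0.5)]:
    assert abs(sp.N(residual.subs({x: xx, t: tt}))) < 1e-9
print("ok")
```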
+ Periodic solution-soliton solution duality. A duality exists between the periodic solution
+ and the soliton solution of the GKdV equation. We take the periodic solution of the KdV
+ equation and the soliton solution of the mKdV equation as an example.
+ The KdV equation (1.1) has a periodic solution
+ u (x, t) = (C/6) [1 + 3 tan^2((√C/2)(x − Ct))]. (3.10)
+ The KdV equation (1.1) with z = x − Ct becomes (3.1), and its solution (3.10) becomes
+ u (z) = (C/6) [1 + 3 tan^2((√C/2) z)] (3.11)
+ with the period 2π/√C.
+ The integrals of motion of the periodic solution (3.10) of the KdV equation, by (2.7),
+ (2.9) and (3.10), are
+ F = 0, G = −C^3/54. (3.12)
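The values (3.12) can also be verified directly. A sympy sketch of ours, with the same conventions for F and G as in the soliton case (z = x − Ct, U(u) = −u^3) and a sample value C = 4:

```python
import sympy as sp

z, C = sp.symbols('z C', positive=True)
u = C/6*(1 + 3*sp.tan(sp.sqrt(C)/2*z)**2)        # periodic solution (3.11)

F = -(sp.diff(u, z, 2) - C*u - 3*u**2)
G = sp.Rational(1, 2)*sp.diff(u, z)**2 - C/2*u**2 - u**3

# F vanishes and G equals -C^3/54 identically; check at sample points
for zz in (-0.9, 0.2, 0.6):                      # away from the poles of tan
    assert abs(sp.N(F.subs({C: 4, z: zz}))) < 1e-7
    assert abs(sp.N((G + C**3/54).subs({C: 4, z: zz}))) < 1e-7
print("ok")
```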
+ The dual equation of the traveling wave KdV equation given by the duality transformation
+ (2.4) is then
+ d^3v/dζ^3 + [C̄ − (1/27)(1 − σ)(1 − 2σ) C̄C^2 v^(−2σ) + (C̄/C)(σ + 1)(σ + 2) v^σ] dv/dζ = 0, (3.13)
+ where ζ = x + C̄t. The duality transformations (2.11) and (2.12) give the solution of (3.13),
+ v (ζ) = [(C/6)(1 − 3 tanh^2((√C̄/2) σζ))]^(1/σ). (3.14)
+ σ running over all possible values gives all equations and their solutions in the duality
+ family.
+ The family member σ = 1 with C̄ = −C in the duality family is the KdV equation
+ (1.1). Different from the 1-soliton case (3.4), however, the family member σ = −1 is
+ the traveling wave mKdV equation
+ d^3v/dζ^3 + C̄ [1 − (2/9) C^2 v^2] dv/dζ = 0, (3.15)
+ or, with ζ = x + C̄t and C̄ = 27/C^2,
+ ∂v/∂t − 6v^2 ∂v/∂x + ∂^3v/∂x^3 = 0, (3.16)
+ which, by (3.14), has a traveling wave solution
+ v (x, t) = (2√C̄/√3) / [1 − 3 tanh^2((√C̄/2)(x + C̄t))]. (3.17)
+ It can be directly verified that v (x, t) → −√(3C̄)/3 when x, t → ±∞, so (3.17) is a soliton
+ solution of the mKdV equation (3.16).
+ In this example, the dual of the periodic solution is a soliton solution.
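Both claims about (3.17), that it solves (3.16) and its value at infinity, can be verified. A sympy sketch of ours, with the sample value C̄ = 3 (written Cbar below):

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
Cbar = 3                                          # sample value of the dual velocity
v = 2*sp.sqrt(Cbar)/sp.sqrt(3) / (1 - 3*sp.tanh(sp.sqrt(Cbar)/2*(x + Cbar*t))**2)

# residual of the mKdV equation (3.16): v_t - 6 v^2 v_x + v_xxx
residual = sp.diff(v, t) - 6*v**2*sp.diff(v, x) + sp.diff(v, x, 3)
for xx, tt in [(2.0, 0.0), (-3.0, 1.0), (1.5, -0.4)]:  # points away from the poles of v
    assert abs(sp.N(residual.subs({x: xx, t: tt}))) < 1e-8
# asymptotic value: v -> -sqrt(3*Cbar)/3
assert sp.limit(v.subs(t, 0), x, sp.oo) == -sp.sqrt(3*Cbar)/3
print("ok")
```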
+ Indirect approach for solving equations. The existence of the duality family gives us
+ an indirect approach to solving equations. When solving an equation, we can (1) find its
+ duality family; (2) look for and solve an ‘easy’ family member; and (3) obtain the solution
+ of this equation by the duality transformation.
+ 4 Conclusion
+ This paper reveals a duality among the GKdV equations, and all the GKdV equations that
+ are dual to each other form a duality family. In a duality family, the solutions of different
+ family members are related by the duality transformation.
+ In a duality family, we only need to solve one family member, and the duality trans-
+ formation then gives the solutions of all other family members. This allows us to develop
+ an indirect approach to solving the GKdV equation.
+ In this paper, as an example, we discuss the GKdV equation duality family containing
+ the KdV equation and the KdV equation with a power-law nonlinearity: seeking the
+ 1-soliton solution of the KdV equation with a power-law nonlinearity from a 1-soliton
+ solution of the KdV equation by the duality relation. In another example, we consider the
+ periodic solution-soliton solution duality. By the duality transformation, we give a soliton
+ solution of the mKdV equation from a periodic solution of the KdV equation.
+ Acknowledgments
+ We are very indebted to Dr G. Zeitrauman for his encouragement. This work is supported
+ in part by Special Funds for Theoretical Physics Research Program of the NSFC under
+ Grant No. 11947124, and NSFC under Grant Nos. 11575125 and 11675119.
+ References
+ [1] M. J. Ablowitz, M. Ablowitz, P. Clarkson, and P. A. Clarkson, Solitons, nonlinear evolution
+ equations and inverse scattering, vol. 149. Cambridge University Press, 1991.
+ [2] D. Korteweg and G. de Vries, On the change of form of long waves advancing in a
+ rectangular channel, and a new type of long stationary wave, Philos. Mag. 39 (1895) 422–443.
+ [3] D. H. Peregrine, Calculations of the development of an undular bore, Journal of Fluid
+ Mechanics 25 (1966), no. 2 321–330.
+ [4] S. B. G. Karakoc and K. K. Ali, New exact solutions and numerical approximations of the
+ generalized KdV equation, .
+ [5] A. Silem, H. Wu, and D.-j. Zhang, Nonisospectral effects on generating localized waves,
+ Communications in Theoretical Physics 73 (2021), no. 11 115002.
+ [6] N. J. Zabusky, A synergetic approach to problems of nonlinear dispersive wave propagation
+ and interaction, in Nonlinear partial differential equations, pp. 223–258. Elsevier, 1967.
+ [7] L. Van Wijngaarden, On the equations of motion for mixtures of liquid and gas bubbles,
+ Journal of Fluid Mechanics 33 (1968), no. 3 465–474.
+ [8] K. Konno and Y. H. Ichikawa, A modified Korteweg de Vries equation for ion acoustic waves,
+ Journal of the Physical Society of Japan 37 (1974), no. 6 1631–1636.
+ [9] F. Haas, L. Garcia, J. Goedert, and G. Manfredi, Quantum ion-acoustic waves, Physics of
+ Plasmas 10 (2003), no. 10 3858–3866.
+ [10] H. Schamel, A modified Korteweg-de Vries equation for ion acoustic waves due to resonant
+ electrons, Journal of Plasma Physics 9 (1973), no. 3 377–387.
+ [11] L. D. Faddeev and V. E. Korepin, Quantum theory of solitons, Physics Reports 42 (1978),
+ no. 1 1–87.
+ [12] A. Korkmaz, Numerical algorithms for solutions of Korteweg–de Vries equation, Numerical
+ Methods for Partial Differential Equations 26 (2010), no. 6 1504–1521.
+ [13] G. L. Lamb Jr, Elements of soliton theory, New York (1980) 29.
+ [14] A. Biswas, 1-soliton solution of the K(m, n) equation with generalized evolution, Physics
+ Letters A 372 (2008), no. 25 4601–4602.
+ [15] M. Wang, Y. Zhou, and Z. Li, Application of a homogeneous balance method to exact
+ solutions of nonlinear equations in mathematical physics, Physics Letters A 216 (1996),
+ no. 1-5 67–75.
+ [16] N. Kudryashov, Exact soliton solutions of the generalized evolution equation of wave
+ dynamics, Journal of Applied Mathematics and Mechanics 52 (1988), no. 3 361–365.
+ [17] I. Dorfman, Dirac structures and integrability of nonlinear evolution equations, vol. 18.
+ Wiley, 1993.
+ [18] P. G. Drazin and R. S. Johnson, Solitons: an introduction, vol. 2. Cambridge University
+ Press, 1989.
+ [19] M. M. Melo, Generalized solutions to the GKdV equation, Electronic Journal of Differential
+ Equations (EJDE) [electronic only] 2010 (2010) Paper–No.
+ [20] A.-M. Wazwaz, New sets of solitary wave solutions to the KdV, mKdV, and the generalized
+ KdV equations, Communications in Nonlinear Science and Numerical Simulation 13 (2008),
+ no. 2 331–339.
+ [21] D.-J. Zhang, S.-L. Zhao, Y.-Y. Sun, and J. Zhou, Solutions to the modified Korteweg–de
+ Vries equation, Reviews in Mathematical Physics 26 (2014), no. 07 1430006.
+ [22] R. M. Miura, C. S. Gardner, and M. D. Kruskal, Korteweg-de Vries equation and
+ generalizations. II. Existence of conservation laws and constants of motion, Journal of
+ Mathematical Physics 9 (1968), no. 8 1204–1209.
+ [23] D.-j. Zhang, Wronskian solutions of integrable systems, in Nonlinear Systems and Their
+ Remarkable Mathematical Structures, pp. 415–444. Chapman and Hall/CRC, 2019.
+ [24] S.-l. Zhao and D.-j. Zhang, Rational solutions to Q3δ in the Adler-Bobenko-Suris list and
+ degenerations, Journal of Nonlinear Mathematical Physics 26 (2019), no. 1 107–132.
+ [25] M. Wadati, Wave propagation in nonlinear lattice. I, Journal of the Physical Society of
+ Japan 38 (1975), no. 3 673–680.
+ [26] V. Narayanamurti and C. Varma, Nonlinear propagation of heat pulses in solids, Physical
+ Review Letters 25 (1970), no. 16 1105.
+ [27] F. Tappert and C. Varma, Asymptotic theory of self-trapping of heat pulses in solids,
+ Physical Review Letters 25 (1970), no. 16 1108.
+ [28] S. Chandrasekhar, Newton’s Principia for the common reader. Oxford University Press, 2003.
+ [29] V. I. Arnold, Huygens and Barrow, Newton and Hooke: pioneers in mathematical analysis
+ and catastrophe theory from evolvents to quasicrystals. Springer Science & Business Media,
+ 1990.
+ [30] T. Needham, Visual complex analysis. Oxford University Press, 1998.
+ [31] V. I. Arnol’d, Mathematical methods of classical mechanics, vol. 60. Springer Science &
+ Business Media, 2013.
+ [32] W.-D. Li and W.-S. Dai, Duality family of scalar field, Nuclear Physics B 972 (2021) 115569.
+ [33] S.-L. Li, Y.-J. Chen, Y.-Y. Liu, W.-D. Li, and W.-S. Dai, Solving eigenproblem by duality
+ transform, Annals of Physics 443 (2022) 168962.
+ [34] Y.-J. Chen, S.-L. Li, W.-D. Li, and W.-S. Dai, An indirect approach for quantum-mechanical
+ eigenproblems: duality transforms, Communications in Theoretical Physics 74 (2022), no. 5
+ 055103.
+ [35] Y.-Y. Liu, W.-D. Li, and W.-S. Dai, Exactly solvable Gross–Pitaevskii type equations,
+ Journal of Physics Communications 5 (2021), no. 1 015011.
+ [36] W.-D. Li and W.-S. Dai, Long-range potential scattering: Converting long-range potential to
+ short-range potential by tortoise coordinate, Journal of Mathematical Physics 62 (2021),
+ no. 12 122102.
+ [37] G. Griffiths and W. E. Schiesser, Traveling wave analysis of partial differential equations:
+ numerical and analytical methods with MATLAB and Maple. Academic Press, 2010.
+ [38] M. Hayek, Constructing of exact solutions to the KdV and Burgers equations with power-law
+ nonlinearity by the extended (G'/G)-expansion method, Applied Mathematics and
+ Computation 217 (2010), no. 1 212–221.
2dAyT4oBgHgl3EQfPvZl/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,471 @@
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf,len=470
2
+ page_content='arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
3
+ page_content='00030v1 [math-ph] 28 Dec 2022 Duality family of KdV equation Xin Gu,a Yuan-Yuan Liu,b Wen-Du Li,c,1 and Wu-Sheng Daia,2 aDepartment of Physics, Tianjin University, Tianjin 300350, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
4
+ page_content='R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
5
+ page_content=' China bTheoretical Physics Division, Chern Institute of Mathematics, Nankai University, PR China cCollege of Physics and Materials Science, Tianjin Normal University, Tianjin 300387, PR China Abstract: It is revealed that there exist duality families of the KdV type equation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
6
+ page_content=' The duality family consists of an infinite number of the generalized KdV (GKdV) equation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
7
+ page_content=' A duality transformation relates the GKdV equations in a duality family.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
8
+ page_content=' Once a family member is solved, the duality transformation presents the solutions of all other family members.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
9
+ page_content=' We show some dualities as examples, such as the soliton solution-soliton solution duality and the periodic solution-soliton solution duality.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
10
+ page_content=' 1liwendu@tjnu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
11
+ page_content='edu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
12
+ page_content='cn 2daiwusheng@tju.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
13
+ page_content='edu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
14
+ page_content='cn Contents 1 Introduction 1 2 Duality family of GKdV equation 3 3 Duality family of KdV equation: Example 5 4 Conclusion 7 1 Introduction After Russell found the solitary wave phenomenon, and studying nonlinear evolution equa- tions began in physics and mathematics [1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
15
+ page_content=' When Kortoweg and de Vries studied the water wave in the long-wave approximation and finite small amplitude, they gave the Korteweg-de Vries (KdV) equation [1–3], ∂u ∂t − 6u∂u ∂x + ∂3u ∂x3 = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
16
+ page_content=' (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
17
+ page_content='1) The KdV equation is a basic model in nonlinear evolution equations [4, 5].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
18
+ page_content=' The KdV equation defines many physical phenomena, such as waves in anharmonic crystals [6], waves in bubble liquid mixtures [7], ion acoustic waves [8–10], and waves in warm plasma [8–10].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
19
+ page_content=' Soliton solution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
20
+ page_content=' The solitary wave solutions of the KdV equation are noted as solitons.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
21
+ page_content=' The velocity of the solitary wave relates to its magnitude [11], and after the collision, it re- tains the original magnitude, shape, and velocity [12, 13].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
22
+ page_content=' The theory of solitons emerges in biochemistry, nonlinear optics, mathematical biosciences, fluid dynamics, plasma physics, nuclear physics, and geophysics [14].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
23
+ page_content=' There have been many approaches to calculating the soliton solution [15, 16], such as the Painlevé analysis method, Bäcklund transformation method, Hirota bilinear method, inverse scattering method, and Darboux transformation method [1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
24
+ page_content=' These methods apply not only to calculating the soliton solution of the KdV equation but also to other partial differential equations [17].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
25
+ page_content=' These methods have differ- ent limits in applications, and there is no universal method for solving nonlinear partial differential equations generally [18].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
26
+ page_content=' Modified KdV (mKdV) equation and generalized KdV (GKdV) equation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
27
+ page_content=' The KdV equation is a special case of the GKdV equation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
28
+ page_content=' The GKdV equation generally is [19] ∂u ∂t − f (u) ∂u ∂x + ∂3u ∂x3 = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
29
+ page_content=' (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
30
+ page_content='2) The GKdV equation recovers the KdV equation (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
31
+ page_content='1) when f (u) = 6u.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
32
+ page_content=' A special GKdV equation with f (u) = −αuk is the KdV type equation with a power- law nonlinearity [20], ∂u ∂t + αuk ∂u ∂x + ∂3u ∂x3 = 0, (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
33
+ page_content='3) – 1 – and the mKdV equation is Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
34
+ page_content=' (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
35
+ page_content='3) with k = 2 and α = 6 [21].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
36
+ page_content=' The Miura transforma- tion establishes a one-to-one correspondence between the solutions of the KdV equation and the solutions of the mKdV equation [22].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
37
+ page_content=' The mKdV equation has a rich physical background [23, 24].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
38
+ page_content=' The mKdV equation can describe a bounded particle propagating in a one-dimensional nonlinear lattice with a harmonic force [25], small amplitude ion acous- tic waves propagating in plasma physics [8], and the thermal pulse propagating through a single crystal of sodium fluoride [26, 27].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
39
+ page_content=' Duality and duality family.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
40
+ page_content=' Newton in Principia revealed a duality between gravitation and elasticity in classical mechanics, now called the Newton-Hooke duality [28].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
41
+ page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
42
+ page_content=' Kasner and V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
43
+ page_content='I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
44
+ page_content=' Arnol’d independently find the generalized duality between power potentials: two power potentials U (r) = ξra and V (r) = ηrA are dual if a+2 2 = 2 A+2, called the Kasner- Arnol’d theorem [29–31].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
45
+ page_content=' Recently, we find that such a duality generally exists in classical mechanics, quantum mechanics, and scalar fields and present the duality among arbitrary potentials [32].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
46
+ page_content=' We find that the duality is not a duality only between two potentials but exists duality families [32].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
47
+ page_content=' Each duality family consists of an infinite number of potentials dual to each other.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
48
+ page_content=' Each duality family consists of an infinite number of potentials;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
49
+ page_content=' in a duality family, every potential is dual to all other potentials.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
50
+ page_content=' Once a family member’s solution is obtained, we can obtain all other members’ solutions by the duality transformation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
51
+ page_content=' Therefore, the duality relation can be used to find the solutions for classical mechanics, quantum mechanics, field theory, and nonlinear equations (such as the Gross-Pitaevskii equation) [33–35].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
52
+ page_content=' The duality can also be used to classify long-range potentials in quantum mechanics [36].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
53
+ page_content=' In this paper, we reveal duality and duality families for the GKdV equation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
54
+ page_content=' The duality transformation can transform the solution of a GKdV equation into the solution of its dual GKdV equation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
55
+ page_content=' The GKdV equation duality family consists of an infinite number of GKdV equations that are dual to each other.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
56
+ page_content=' The solution of all GKdV equations in a duality family can be obtained from the solution of one solved family member by the duality transformation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
57
+ page_content=' This way, we can obtain a series of exact solutions of GKdV equations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
58
+ page_content=' As an example, we discuss the KdV equation duality family in which the KdV equation is a member.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
59
+ page_content=' As an example, we discuss the KdV equation duality family in which the KdV equation (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
60
+ page_content='1) and the KdV type equation with a power-law nonlinearity (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
61
+ page_content='3) are family members.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
62
+ page_content=' The duality transformation gives a series of 1-soliton solutions of GKdV equations from a 1-soliton solution of the KdV equation (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
63
+ page_content='1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
64
+ page_content=' We also consider the duality between the periodic solution of the KdV equation and the soliton solution of the mKdV equation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
65
+ page_content=' In particular, since the solution of all GKdV equations in a duality family can be obtained from the solution of one family member by the duality transformation, we can develop an indirect approach for solving GKdV equations: (1) constructing the duality family of this equation;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
66
+ page_content=' (2) looking for an ‘easy’ equation in the duality family and solving the ‘easy’ equation;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
67
+ page_content=' (3) solving the wanted equation by the duality transformation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
68
+ page_content=' In section 2, we present the duality and duality family of the GKdV equation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
69
+ page_content=' In section 3, we consider two examples: (1) solving the KdV equation with a power-law nonlinearity from the KdV equation by the duality transformation;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
70
+ page_content=' (2) the duality between the periodic – 2 – solution of the KdV equation and the soliton solution of the mKdV equation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
71
+ page_content=' The conclusion is given in section 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
72
+ page_content=' 2 Duality family of GKdV equation In this section, we give the duality and duality family of the traveling wave GKdV equation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
73
+ page_content=' The solutions of a GKdV equation can be obtained from its dual equation by the duality transformation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
74
The traveling wave with velocity C of the GKdV equation (1.2) is given by

\frac{d^3 u}{dz^3} + [C - f(u)] \frac{du}{dz} = 0,    (2.1)

where u(x, t) = u(z) and z = x + Ct.

The traveling wave GKdV equation (2.1) has the following duality relation. Consider two traveling wave GKdV equations,

\frac{d^3 u}{dz^3} + [C - f(u)] \frac{du}{dz} = 0,    (2.2)

\frac{d^3 v}{d\zeta^3} + [\bar{C} - g(v)] \frac{dv}{d\zeta} = 0.    (2.3)

If

\frac{1}{C} u^{-2} [G - U(u) - F u] = \frac{1}{\bar{C}} v^{-2} [\bar{G} - V(v) - \bar{F} v],    (2.4)

where

\frac{d^2 U(u)}{du^2} = -f(u),    (2.5)

\frac{d^2 V(v)}{dv^2} = -g(v),    (2.6)

F = -\left[ \frac{d^2 u}{dz^2} + C u + \frac{dU(u)}{du} \right],    (2.7)

\bar{F} = -\left[ \frac{d^2 v}{d\zeta^2} + \bar{C} v + \frac{dV(v)}{dv} \right],    (2.8)

G = \frac{1}{2} \left( \frac{du}{dz} \right)^2 + \frac{1}{2} C u^2 + U(u) + F u,    (2.9)

\bar{G} = \frac{1}{2} \left( \frac{dv}{d\zeta} \right)^2 + \frac{1}{2} \bar{C} v^2 + V(v) + \bar{F} v,    (2.10)

then their solutions satisfy

u \leftrightarrow v^{\sigma},    (2.11)

z \leftrightarrow \sqrt{\bar{C}/C}\, \sigma \zeta.    (2.12)

Here \sigma is an arbitrarily chosen constant.
Integral of motion. Before going on, we first illustrate the meaning of G, F, \bar{G}, and \bar{F}, taking G and F as examples. Broadly speaking, G and F are both integrals of motion of the equation of motion (2.2). In principle, an integral of the equation of motion over time is known as an integral of motion. Here G and F are the integration constants obtained by integrating the traveling wave equation (2.2) over z and over u, respectively; we still call them integrals of motion.

Multiplying both sides of the GKdV equation (2.2) by dz, integrating, and using (2.5) gives

\frac{d^2 u}{dz^2} + C u + \frac{dU(u)}{du} = -F,

i.e., Eq. (2.7), where F is the integration constant of the integral over z. Similarly, multiplying both sides of (2.7) by du and integrating gives

\frac{1}{2} \left( \frac{du}{dz} \right)^2 + \frac{1}{2} C u^2 + U(u) + F u = G,

i.e., Eq. (2.9), where G is the integration constant of the integral over u and

\int du\, \frac{d^2 u}{dz^2} = \int dz\, \frac{du}{dz} \frac{d^2 u}{dz^2} = \frac{1}{2} \int dz\, \frac{d}{dz} \left( \frac{du}{dz} \right)^2 = \frac{1}{2} \left( \frac{du}{dz} \right)^2

is used.
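As a concrete check of this bookkeeping, the snippet below (a minimal sketch assuming sympy; the choice f(u) = 6u, i.e. U(u) = -u^3, is ours purely for illustration) verifies symbolically that dG/dz is proportional to the first integral (2.7), so G is indeed constant along solutions.

```python
import sympy as sp

z, C, F = sp.symbols('z C F')
u = sp.Function('u')(z)

# Illustrative choice f(u) = 6u, so U(u) = -u**3 satisfies U'' = -f(u), Eq. (2.5)
U = -u**3
first_integral = u.diff(z, 2) + C*u + U.diff(u) + F   # vanishes by Eq. (2.7)

# G as defined in Eq. (2.9)
G = u.diff(z)**2/2 + C*u**2/2 + U + F*u
residual = sp.simplify(G.diff(z) - u.diff(z)*first_integral)
print(residual)
```

The printed residual is identically zero, so dG/dz vanishes whenever (2.7) holds.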
Proof of the duality relation. Substituting the duality transformations (2.11) and (2.12) into (2.7) gives

\frac{C}{\bar{C}} \frac{d^2 v}{d\zeta^2} + \frac{C}{\bar{C}} (\sigma - 1) v^{-1} \left( \frac{dv}{d\zeta} \right)^2 + \sigma C v + v^{2(1-\sigma)} \frac{dU(v^\sigma)}{dv} + \sigma v^{1-\sigma} F = 0.    (2.13)

By (2.9), we have

\frac{C}{\bar{C}} (\sigma - 1) v^{-1} \left( \frac{dv}{d\zeta} \right)^2 = 2 (\sigma - 1) v^{1-2\sigma} [G - U(v^\sigma) - F v^\sigma] - C (\sigma - 1) v.    (2.14)

Using (2.14) to eliminate the term (\sigma - 1) v^{-1} \left( \frac{dv}{d\zeta} \right)^2 in (2.13), we arrive at

\frac{C}{\bar{C}} \frac{d^2 v}{d\zeta^2} + C v + 2 (\sigma - 1) v^{1-2\sigma} [G - U(v^\sigma) - F v^\sigma] + v^{2(1-\sigma)} \frac{dU(v^\sigma)}{dv} + \sigma v^{1-\sigma} F = 0.    (2.15)

By the duality transformation (2.4), we can obtain

V(v) = \bar{G} - \bar{F} v - \frac{\bar{C}}{C} v^{2-2\sigma} [G - U(v^\sigma) - F v^\sigma].    (2.16)

Taking the derivative of (2.16) with respect to v gives

\frac{dV(v)}{dv} = -\bar{F} + 2 \frac{\bar{C}}{C} (\sigma - 1) v^{1-2\sigma} [G - U(v^\sigma) - F v^\sigma] + \frac{\bar{C}}{C} v^{2(1-\sigma)} \left[ \frac{dU(v^\sigma)}{dv} + \sigma v^{\sigma-1} F \right].    (2.17)

Substituting (2.17) into (2.15) gives

\frac{d^2 v}{d\zeta^2} + \bar{C} v + \frac{dV(v)}{dv} + \bar{F} = 0.    (2.18)

Then taking the derivative with respect to \zeta and using (2.6), we arrive at (2.3).
Discussion of U. The relation between f(u) in the GKdV equation (2.2) and U(u) in (2.5) is not unique: U(u; a, b) = U(u) + a u + b and U(u) lead to the same f(u), and both correspond to the GKdV equation (1.2).

The integral of motion F corresponding to U(u; a, b), by (2.7), is

F(a, b) = -\left[ \frac{d^2 u}{dz^2} + C u + \frac{dU(u; a, b)}{du} \right] = F - a;

the integral of motion G corresponding to U(u; a, b), by (2.9), is

G(a, b) = \frac{1}{2} \left( \frac{du}{dz} \right)^2 + \frac{1}{2} C u^2 + U(u; a, b) + F(a, b) u = G + b.

Therefore, by (2.4), the duality transformation given by U(u; a, b) is

\frac{1}{C} u^{-2} [G(a, b) - U(u; a, b) - F(a, b) u] = \frac{1}{\bar{C}} v^{-2} [\bar{G} - V(v; a, b) - \bar{F} v].    (2.19)

Here V(v; a, b) is the dual of U(u; a, b). Substituting U(u; a, b), F(a, b), and G(a, b) into the duality transformation (2.19) gives

V(v; a, b) = \bar{G} - \bar{F} v - \frac{\bar{C}}{C} v^{2-2\sigma} [G - U(v^\sigma) - F v^\sigma] = V(v).    (2.20)

That is, although the correspondence between f(u) and U(u) in the GKdV equation is not unique (the same f(u) corresponds to different choices of U(u)), the choice of U(u) does not influence the duality of the GKdV equation.
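This invariance can be confirmed in one line of symbolic algebra; the sketch below (assuming sympy) checks that the shift U -> U + a u + b, together with F -> F - a and G -> G + b, leaves the combination G - U(u) - F u entering (2.4) unchanged.

```python
import sympy as sp

u, a, b, F, G = sp.symbols('u a b F G')
U = sp.Function('U')(u)

# Shifted potential U + a*u + b, with F(a,b) = F - a and G(a,b) = G + b:
delta = (G - U - F*u) - ((G + b) - (U + a*u + b) - (F - a)*u)
print(sp.expand(delta))
```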
3 Duality family of KdV equation: Example

In this section we consider a special duality family of the GKdV equation as an example. The KdV equation and the mKdV equation are members of this duality family. The solutions of all family members in a duality family are related by a duality transformation. In a duality family containing the KdV equation, we can therefore solve all the GKdV equations in the family from the solution of the KdV equation by the duality transformation. In this section, we obtain the solution of the KdV equation with a power-law nonlinearity from the solution of the KdV equation; the mKdV equation is the power-law nonlinearity KdV equation with power 2.
Duality family of the KdV equation and the KdV equation with a power-law nonlinearity. The KdV equation (1.1) with z = x - Ct,

\frac{d^3 u}{dz^3} - (C + 6u) \frac{du}{dz} = 0,    (3.1)

has the 1-soliton solution [37]

u(z) = -\frac{C}{2} \mathrm{sech}^2 \left( \frac{\sqrt{C}}{2} z \right).    (3.2)

The soliton solution is a localized traveling wave solution. Localization, taking the 1-soliton solution (3.2) as an example, means that u(z) \to 0 as z \to \pm\infty.

The integrals of motion of the 1-soliton solution (3.2), by (2.7), (2.9), and (3.2), are

F = 0 \quad \text{and} \quad G = 0.    (3.3)

Then the dual equation of the traveling wave KdV equation given by the duality transformation (2.4) is

\frac{d^3 v}{d\zeta^3} - \left[ \bar{C} + \frac{\bar{C}}{C} (2 + \sigma)(1 + \sigma) v^\sigma \right] \frac{dv}{d\zeta} = 0.    (3.4)

Since \sigma can be chosen arbitrarily, (3.4) is not a single equation but forms a duality family. All the GKdV equations labeled by different \sigma in the duality family are dual equations of the KdV equation. By (2.11) and (2.12), we can obtain the solution of Eq. (3.4),

v(\zeta) = \left[ -\frac{C}{2} \mathrm{sech}^2 \left( \frac{\sqrt{\bar{C}}}{2} \sigma \zeta \right) \right]^{1/\sigma},    (3.5)

where \zeta = x - \bar{C} t has velocity -\bar{C}.

Rewriting the dual equation (3.4) in terms of (t, x) instead of z gives

\frac{\partial v}{\partial t} + \alpha v^\sigma \frac{\partial v}{\partial x} + \frac{\partial^3 v}{\partial x^3} = 0,    (3.6)

where \alpha = -\frac{\bar{C}}{C} (2 + \sigma)(1 + \sigma). When \sigma is taken as a positive integer, (3.6) is the KdV equation with a power-law nonlinearity, and the solution (3.5) becomes

v(x, t) = \left[ -\frac{C}{2} \mathrm{sech}^2 \left( \frac{\sqrt{\bar{C}}}{2} \sigma (x - \bar{C} t) \right) \right]^{1/\sigma},    (3.7)

or equivalently,

v(x, t) = \left[ \frac{\bar{C} (2 + \sigma)(1 + \sigma)}{2 \alpha \cosh^2 \left( \frac{\sqrt{\bar{C}}}{2} \sigma (x - \bar{C} t) \right)} \right]^{1/\sigma},

which agrees with Ref. [38].

In this duality family, the family member \sigma = 1 is the KdV equation (1.1), and the family member \sigma = 2 is the mKdV equation

\frac{\partial v}{\partial t} - 12 \frac{\bar{C}}{C} v^2 \frac{\partial v}{\partial x} + \frac{\partial^3 v}{\partial x^3} = 0.    (3.8)

Eq. (3.7) with \sigma = 2 gives the 1-soliton solution of the mKdV equation (3.8),

v(x, t) = \pm \sqrt{-\frac{C}{2}}\, \mathrm{sech} \left( \sqrt{\bar{C}} (x - \bar{C} t) \right).    (3.9)

Now, by the duality relation, we have obtained the solutions of all family members from the solution of the KdV equation.
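The family solution (3.7) can be spot-checked numerically against (3.6). The sketch below (assuming sympy) uses the sample values C = -2 and C-bar = 1, chosen by us so that the amplitude -C/2 is positive and v stays real, and evaluates the residual of (3.6) at a few points for \sigma = 1, 2, 3.

```python
import sympy as sp

x, t = sp.symbols('x t')
C, Cb = -2, 1   # sample values; Cb stands in for C-bar, and C < 0 keeps v real

residuals = []
for sigma in (1, 2, 3):
    alpha = -sp.Rational(Cb, C)*(2 + sigma)*(1 + sigma)
    # duality image of the KdV 1-soliton, Eq. (3.7):
    v = (-sp.Rational(C, 2)*sp.sech(sp.sqrt(Cb)/2*sigma*(x - Cb*t))**2)**sp.Rational(1, sigma)
    res = v.diff(t) + alpha*v**sigma*v.diff(x) + v.diff(x, 3)   # residual of Eq. (3.6)
    for xv, tv in [(0.3, -0.4), (-1.2, 0.8)]:
        residuals.append(abs(sp.N(res.subs({x: xv, t: tv}))))
worst = max(residuals)
print(worst)   # close to machine zero
```

With these sample values, \sigma = 1 reproduces a KdV soliton and \sigma = 2 the mKdV soliton (3.9) with the plus sign.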
Periodic solution-soliton solution duality. A duality exists between the periodic solutions and the soliton solutions of the GKdV equation. We take the periodic solution of the KdV equation and the soliton solution of the mKdV equation as an example.

The KdV equation (1.1) has the periodic solution

u(x, t) = \frac{1}{6} C \left[ 1 + 3 \tan^2 \left( \frac{\sqrt{C}}{2} (x - Ct) \right) \right].    (3.10)

The KdV equation (1.1) with z = x - Ct becomes (3.1), and its solution (3.10) becomes

u(z) = \frac{C}{6} \left[ 1 + 3 \tan^2 \left( \frac{\sqrt{C}}{2} z \right) \right]    (3.11)

with the period \frac{2\pi}{\sqrt{C}}.

The integrals of motion of the periodic solution (3.10) of the KdV equation, by (2.7), (2.9), and (3.10), are

F = 0, \quad G = -\frac{C^3}{54}.    (3.12)

The dual equation of the traveling wave KdV equation given by the duality transformation (2.4) is then

\frac{d^3 v}{d\zeta^3} + \left[ \bar{C} - \frac{1}{27} (1 - \sigma)(1 - 2\sigma) \bar{C} C^2 v^{-2\sigma} + \frac{\bar{C}}{C} (\sigma + 1)(\sigma + 2) v^\sigma \right] \frac{dv}{d\zeta} = 0,    (3.13)

where \zeta = x + \bar{C} t. The duality transformations (2.11) and (2.12) give the solution of (3.13),

v(\zeta) = \left[ \frac{C}{6} \left( 1 - 3 \tanh^2 \left( \frac{\sqrt{\bar{C}}}{2} \sigma \zeta \right) \right) \right]^{1/\sigma}.    (3.14)

Letting \sigma run over all possible values gives all equations and their solutions in the duality family. The family member with \sigma = 1 and \bar{C} = -C in the duality family is the KdV equation (1.1). Different from the 1-soliton duality family (3.4), however, here the family member \sigma = -1 is the traveling wave mKdV equation

\frac{d^3 v}{d\zeta^3} + \bar{C} \left( 1 - \frac{2}{9} C^2 v^2 \right) \frac{dv}{d\zeta} = 0,    (3.15)

or, with \zeta = x + \bar{C} t and \bar{C} = \frac{27}{C^2},

\frac{\partial v}{\partial t} - 6 v^2 \frac{\partial v}{\partial x} + \frac{\partial^3 v}{\partial x^3} = 0,    (3.16)

which, by (3.14), has the traveling wave solution

v(x, t) = \frac{2 \sqrt{\bar{C}}}{\sqrt{3} \left[ 1 - 3 \tanh^2 \left( \frac{\sqrt{\bar{C}}}{2} (x + \bar{C} t) \right) \right]}.    (3.17)

It can be directly verified that v(x, t) \to -\frac{\sqrt{3\bar{C}}}{3} as x, t \to \pm\infty, so (3.17) is a soliton solution of the mKdV equation (3.16). In this example, the dual of the periodic solution is a soliton solution.
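As a sanity check, the sketch below (assuming sympy; the sample value C-bar = 4 is ours) evaluates the residual of the mKdV equation (3.16) on the solution (3.17) away from the points where 1 - 3 tanh^2 vanishes, and checks the asymptotic value -\sqrt{3\bar{C}}/3.

```python
import sympy as sp

x, t = sp.symbols('x t')
Cb = 4   # sample value standing in for C-bar, so sqrt(Cb)/2 = 1
v = 2*sp.sqrt(Cb)/(sp.sqrt(3)*(1 - 3*sp.tanh(sp.sqrt(Cb)/2*(x + Cb*t))**2))   # Eq. (3.17)

# Residual of the mKdV equation (3.16): v_t - 6 v^2 v_x + v_xxx
res = v.diff(t) - 6*v**2*v.diff(x) + v.diff(x, 3)
worst = max(abs(sp.N(res.subs({x: xv, t: tv})))
            for xv, tv in [(2.0, 0.25), (-1.5, 0.0), (0.1, 0.0)])
print(worst)   # close to machine zero away from the poles where tanh^2 = 1/3

# Asymptotic value: v -> -sqrt(3*Cb)/3 for large |x|
tail = sp.N(v.subs({x: 50, t: 0}) + sp.sqrt(3*Cb)/3)
print(tail)
```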
+ page_content=' Indirect approach for solving equations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
258
+ page_content=' The existence of the duality family gives us an indirect approach to solving equations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
259
+ page_content=' When solving an equation, we can (1) find its duality family;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
260
+ page_content=' (2) look for and solve an ‘easy’ family member, and (3) achieve the solution of this equation by the duality transformation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
4 Conclusion

This paper reveals a duality among the GKdV equations; all GKdV equations that are dual to each other form a duality family. In a duality family, the solutions of different family members are related by the duality transformation.

In a duality family, we only need to solve one family member, and the duality transformation then gives the solutions of all other family members. This allows us to develop an indirect approach to solving the GKdV equation. In this paper, as an example, we discuss the GKdV duality family containing the KdV equation and the KdV equation with power-law nonlinearity, obtaining a 1-soliton solution of the latter from a 1-soliton solution of the KdV equation through the duality relation. In another example, we consider the duality between periodic solutions and soliton solutions: by the duality transformation, we obtain a soliton solution of the mKdV equation from a periodic solution of the KdV equation.

Acknowledgments

We are very indebted to Dr G. Zeitrauman for his encouragement. This work is supported in part by Special Funds for Theoretical Physics Research Program of the NSFC under Grant No. 11947124, and NSFC under Grant Nos. 11575125 and 11675119.
462
+ page_content=' 12 122102.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
463
+ page_content=' [37] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
464
+ page_content=' Griffiths and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
465
+ page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
466
+ page_content=' Schiesser, Traveling wave analysis of partial differential equations: numerical and analytical methods with MATLAB and Maple.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
467
+ page_content=' Academic Press, 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
468
+ page_content=' [38] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
469
+ page_content=' Hayek, Constructing of exact solutions to the kdv and burgers equations with power-law nonlinearity by the extended g’ g-expansion method, Applied Mathematics and Computation 217 (2010), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
470
+ page_content=' 1 212–221.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
471
+ page_content=' 10' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'}
2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:960a223ca27981df8a19ccdf9b189d850f7ecb2ab1719646f5dd6ad3fbc64ed8
3
+ size 453802
2dE4T4oBgHgl3EQfagyV/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b45014918474daef645b8f4512cb0538c28c920dfd825899b940839041fd2110
3
+ size 329099
49AzT4oBgHgl3EQfEPqD/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8816543420058486898207f7b244926364593e6fa7c94db41292f193b227c8fe
3
+ size 46125
5NAzT4oBgHgl3EQf9v7Q/content/tmp_files/2301.01925v1.pdf.txt ADDED
@@ -0,0 +1,1743 @@
1
+ arXiv:2301.01925v1 [math.NT] 5 Jan 2023
2
+ SELBERG’S CENTRAL LIMIT THEOREM OF L-FUNCTIONS NEAR
3
+ THE CRITICAL LINE
4
+ YOONBOK LEE
5
+ Abstract. We find an asymptotic expansion of a multi-dimensional version of Selberg's central limit theorem for L-functions on σ = 1/2 + (log T)^{−θ} and t ∈ [T, 2T], where 0 < θ < 1/2 is a constant.
10
+ 1. Introduction
11
+ Selberg’s central limit theorem says that the function
+ \[ \frac{\log \zeta(\sigma + it)}{\sqrt{\pi \sum_{p < t} p^{-2\sigma}}} \]
+ has a Gaussian distribution in the complex plane for 1/2 ≤ σ ≤ σ_T(θ), where
+ \[ \sigma_T := \sigma_T(\theta) := \frac{1}{2} + \frac{1}{(\log T)^{\theta}} \]
+ for θ > 0 throughout the paper. See [8, Theorem 6.1] for a proof and [6] for a simple
23
+ proof for the real part. It also holds for other L-functions. See [7, Theorem 2] for a
24
+ general statement.
25
+ When σ = σT and T ≤ t ≤ 2T, we have more precise estimations for the distribution
26
+ of log ζ(σ + it) in [2] and [5] as follows.
27
+ Theorem 1.1. [5, Theorem 1.2 and Lemma 2.3] Let 0 < θ < 1/2, a < b and c < d be
+ real numbers. There exist constants ǫ, κ > 0 and a sequence {d_{k,ℓ}}_{k,ℓ≥0} of real numbers such that
+ \[ (1.1)\quad \frac{1}{T}\,\mathrm{meas}\Big\{ t \in [T, 2T] : \frac{\log \zeta(\sigma_T + it)}{\sqrt{\pi \psi_T}} \in [a, b] \times [c, d] \Big\} = \sum_{k+\ell \le \epsilon \psi_T} \frac{d_{k,\ell}}{\sqrt{\psi_T}^{\,k+\ell}} \int_a^b e^{-\pi u^2} H_k(\sqrt{\pi}\,u)\,du \int_c^d e^{-\pi v^2} H_\ell(\sqrt{\pi}\,v)\,dv + O\Big( \frac{1}{(\log T)^{\kappa}} \Big) \]
+ as T → ∞, where meas denotes the Lebesgue measure on R,
+ \[ \psi_T := \sum_p \sum_{k \ge 1} \frac{1}{k^2 p^{2k\sigma_T}}, \]
60
+ Date: January 6, 2023.
61
+ 2010 Mathematics Subject Classification. 11M41.
62
+ Key words and phrases. Central limit theorem, joint distribution of L-functions.
67
+ and H_n(x) is the n-th Hermite polynomial defined by
+ \[ (1.2)\quad H_n(x) := (-1)^n e^{x^2} \frac{d^n}{dx^n}\big( e^{-x^2} \big). \]
+ Moreover, d_{0,0} = 1, d_{k,ℓ} = 0 for k + ℓ = 1, 2 and d_{k,ℓ} = O(δ_0^{−k−ℓ}) for some δ_0 > 0 and
+ all k, ℓ.
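For readers who want to evaluate the Hermite polynomials H_n from (1.2) numerically, the standard three-term recurrence H_{n+1}(x) = 2x H_n(x) − 2n H_{n−1}(x) is equivalent to the definition above and avoids symbolic differentiation. This is an editorial sketch, not part of the paper:

```python
def hermite(n, x):
    """Evaluate the physicists' Hermite polynomial H_n(x) from (1.2)
    via the recurrence H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x)."""
    h_prev, h = 1.0, 2.0 * x  # H_0(x) and H_1(x)
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h
```

For example, `hermite(2, x)` evaluates H_2(x) = 4x² − 2.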
75
+ The leading term of the expansion in (1.1) is
+ \[ \int_a^b e^{-\pi u^2}\,du \int_c^d e^{-\pi v^2}\,dv, \]
+ which is Gaussian, and the lower order terms may be evaluated using
+ \[ \int_a^b e^{-\pi u^2} H_k(\sqrt{\pi}\,u)\,du = -\frac{1}{\sqrt{\pi}} \Big( e^{-\pi b^2} H_{k-1}(\sqrt{\pi}\,b) - e^{-\pi a^2} H_{k-1}(\sqrt{\pi}\,a) \Big) \]
90
+ for k ≥ 1. Note that the sequence {dk,ℓ} is defined by the generating series (2.19) in
91
+ [5] and ψT = θ log log T + O(1) by the prime number theorem. It might be interesting
92
+ to compare the asymptotic expansion in (1.1) with an Edgeworth expansion in the
93
+ probability theory. See [1, Chapter 7] for more information.
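Since ψ_T = Σ_p Σ_{k≥1} (k² p^{2kσ_T})^{−1}, the statement ψ_T = θ log log T + O(1) can be illustrated directly. The sketch below truncates the sum over primes; the values of T and θ are arbitrary choices, not from the paper:

```python
import math

def primes_up_to(n):
    # Simple sieve of Eratosthenes.
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def psi(sigma, prime_cutoff=10000, k_max=50):
    # Truncated version of psi_T = sum_p sum_{k>=1} 1/(k^2 p^{2 k sigma}).
    total = 0.0
    for p in primes_up_to(prime_cutoff):
        total += sum(1.0 / (k * k * p ** (2 * k * sigma)) for k in range(1, k_max + 1))
    return total

theta, T = 0.4, 1e12  # arbitrary illustrative parameters
sigma_T = 0.5 + math.log(T) ** (-theta)
# psi(sigma_T) and theta * math.log(math.log(T)) differ only by a bounded amount.
```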
94
+ In this paper, we generalize Theorem 1.1 to a multi-variate setting for the L-
95
+ functions L1, . . . , LJ satisfying the following assumptions:
96
+ A1: (Euler product) For j = 1, . . . , J and Re(s) > 1 we have
+ \[ L_j(s) = \prod_p \prod_{i=1}^{d} \Big( 1 - \frac{\alpha_{j,i}(p)}{p^s} \Big)^{-1}, \]
+ where |α_{j,i}(p)| ≤ p^η for some fixed 0 ≤ η < 1/2 and for every i = 1, . . . , d.
110
+ A2: (Functional equation) The functions L1, L2, . . . , LJ satisfy the same functional equation
+ \[ \Lambda_j(s) = \omega \overline{\Lambda_j(1 - \bar{s})}, \quad \text{where} \quad \Lambda_j(s) := L_j(s)\, Q^s \prod_{\ell=1}^{k} \Gamma(\lambda_\ell s + \mu_\ell), \]
+ |ω| = 1, Q > 0, λ_ℓ > 0 and µ_ℓ ∈ C with Re(µ_ℓ) ≥ 0.
120
+ A3: (Ramanujan hypothesis on average) The estimate
+ \[ \sum_{p \le x} \sum_{i=1}^{d} |\alpha_{j,i}(p)|^2 = O(x^{1+\epsilon}) \]
+ holds for every ǫ > 0 and for every j = 1, . . . , J as x → ∞.
131
+ A4: (Zero density hypothesis) Let N_f(σ, T) be the number of zeros of f(s) in Re(s) ≥ σ and 0 ≤ Im(s) ≤ T. Then there exist positive constants κ1, κ2 such that for every j = 1, . . . , J and all σ ≥ 1/2 we have
+ \[ N_{L_j}(\sigma, T) \ll T^{1 - \kappa_1 (\sigma - \frac{1}{2})} (\log T)^{\kappa_2}. \]
137
+ A5: (Selberg orthogonality conjecture) By assumption A1 we can write
+ \[ \log L_j(s) = \sum_p \sum_{k=1}^{\infty} \frac{\beta_{L_j}(p^k)}{p^{ks}}. \]
+ Then for all 1 ≤ j, k ≤ J, there exist constants ξ_j > 0 and c_{j,k} such that
+ \[ \sum_{p \le x} \frac{\beta_{L_j}(p)\, \overline{\beta_{L_k}(p)}}{p} = \delta_{j,k}\, \xi_j \log \log x + c_{j,k} + O\Big( \frac{1}{\log x} \Big), \]
+ where δ_{j,k} = 0 if j ≠ k and δ_{j,k} = 1 if j = k.
159
+ The assumptions A1–A5 are standard and expected to hold for all L-functions arising
160
+ from automorphic representations on GL(n). In particular, they are verified by GL(1)
161
+ and GL(2) L-functions, which are the Riemann zeta function, Dirichlet L-functions,
162
+ L-functions attached to Hecke holomorphic or Maass cusp forms. Assumption A4 is
163
+ weaker than the Riemann hypothesis, but it is strong enough to find a short Dirichlet
164
+ approximation to each log Lj(σT + it) for almost all t ∈ [T, 2T]. For example, see [4,
165
+ Lemma 4.2] for a proof. Assumption A5 ensures the statistical independence of the
166
+ log Lj(σT + it) for j = 1, . . . , J.
167
+ Assuming A1–A5 for L1, . . . , LJ, we want to find an asymptotic expansion for
+ \[ (1.3)\quad \frac{1}{T}\,\mathrm{meas}\Big\{ t \in [T, 2T] : \frac{\log L_j(\sigma_T + it)}{\sqrt{\pi \psi_{j,T}}} \in [a_j, b_j] \times [c_j, d_j] \ \text{for all } j = 1, \ldots, J \Big\}, \]
+ where
+ \[ (1.4)\quad \psi_{j,T} := \xi_j \theta \log \log T \]
+ with the constants ξ_j in assumption A5, and a_j, b_j, c_j, d_j are real numbers for all j = 1, . . . , J. Let
+ \[ L(s) := \big( \log |L_1(s)|, \ldots, \log |L_J(s)|, \arg L_1(s), \ldots, \arg L_J(s) \big) \]
+ and
+ \[ R_T := \prod_{j=1}^{J} \big[ a_j \sqrt{\pi \psi_{j,T}},\; b_j \sqrt{\pi \psi_{j,T}} \big] \times \prod_{j=1}^{J} \big[ c_j \sqrt{\pi \psi_{j,T}},\; d_j \sqrt{\pi \psi_{j,T}} \big]; \]
+ then (1.3) equals
+ \[ \Phi_T(R_T) := \frac{1}{T}\,\mathrm{meas}\{ t \in [T, 2T] : L(\sigma_T + it) \in R_T \}. \]
206
+ Theorem 1.2. Let 0 < θ < 1/2. Assume A1–A5 for L1, . . . , LJ. Then there exist constants ǫ, κ > 0 and a sequence {b_{k,l}} of real numbers such that
+ \[ (1.5)\quad \Phi_T(R_T) = \sum_{K(\mathbf{k}+\mathbf{l}) \le \epsilon \log\log T} b_{\mathbf{k},\mathbf{l}} \prod_{j=1}^{J} \frac{1}{\sqrt{\psi_{j,T}}^{\,k_j+\ell_j}} \prod_{j=1}^{J} \bigg( \int_{a_j}^{b_j} e^{-\pi u^2} H_{k_j}(\sqrt{\pi}\,u)\,du \int_{c_j}^{d_j} e^{-\pi v^2} H_{\ell_j}(\sqrt{\pi}\,v)\,dv \bigg) + O\bigg( \frac{1}{(\log T)^{\kappa}} \bigg), \]
+ where k = (k1, . . . , kJ) and l = (ℓ1, . . . , ℓJ) are vectors in (Z≥0)^J and K(k) := k1 + · · · + kJ. Moreover, b_{0,0} = 1, b_{k,l} = 0 if K(k + l) = 1 and b_{k,l} = O(δ_0^{−K(k+l)}) for some δ_0 > 0 and all k, l.
242
+ Theorem 1.2 will be proved in the beginning of Section 2. Theorem 1.2 is essentially
243
+ the same as Theorem 2.1 in [3], but the expansion in Theorem 1.2 appears to be
244
+ longer. Moreover, since the paper [3] contains only a sketched proof, our proof should
245
+ be useful.
246
+ Unlike dk,ℓ in Theorem 1.1, bk,l in Theorem 1.2 may not be zero for K(k + l) = 2.
247
+ One reason is that ψT in Theorem 1.1 and ψj,T in Theorem 1.2 are different up to
248
+ a constant order, even though they are asymptotically the same. Moreover, when J > 1,
249
+ there are additional terms essentially from the constants cj,k in assumption A5.
250
+ Since the leading term in (1.5) is Gaussian and the other nonvanishing terms are O(1/log log T), we obtain the following corollary.
+ Corollary 1.3. Let 0 < θ < 1/2. Assume A1–A5 for L1, . . . , LJ. Then we have
+ \[ \Phi_T(R_T) = \prod_{j=1}^{J} \bigg( \int_{a_j}^{b_j} e^{-\pi u^2}\,du \int_{c_j}^{d_j} e^{-\pi v^2}\,dv \bigg) + O\bigg( \frac{1}{\log \log T} \bigg). \]
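The Gaussian main term in Corollary 1.3 factors into one-dimensional integrals, and each satisfies ∫_a^b e^{−πu²} du = (erf(√π b) − erf(√π a))/2, so it can be evaluated with the error function. A small illustrative helper (not from the paper; the rectangle data is hypothetical):

```python
import math

def gaussian_main_term(rectangles):
    # Product over j of the Gaussian integrals for rectangles [a_j, b_j] x [c_j, d_j],
    # using  int_a^b e^{-pi u^2} du = (erf(sqrt(pi) b) - erf(sqrt(pi) a)) / 2.
    s = math.sqrt(math.pi)
    prob = 1.0
    for a, b, c, d in rectangles:
        prob *= 0.5 * (math.erf(s * b) - math.erf(s * a))
        prob *= 0.5 * (math.erf(s * d) - math.erf(s * c))
    return prob
```

Each factor is a probability, so the whole product lies in [0, 1] and tends to 1 as the rectangles grow.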
276
+ We will prove theorems and propositions in Section 2 and lemmas in Section 3. We
277
+ conclude the introduction with a summary of notations:
278
+ • σ_T = σ_T(θ) = 1/2 + 1/(log T)^θ and 0 < θ < 1/2.
+ • k = (k1, . . . , kJ) and l = (ℓ1, . . . , ℓJ) are vectors in (Z≥0)^J.
+ • u = (u1, . . . , uJ), v = (v1, . . . , vJ), x = (x1, . . . , xJ) and y = (y1, . . . , yJ) are vectors in R^J.
+ • z = (z1, . . . , zJ) = x + iy and z̄ = (z̄1, . . . , z̄J) = x − iy are vectors in C^J.
+ • k! := k1! · · · kJ! and K(k) := k1 + · · · + kJ.
+ • x^k := x1^{k1} · · · xJ^{kJ}.
+ • x · u = Σ_{j=1}^{J} x_j u_j and ||z|| = (Σ_{j=1}^{J} |z_j|²)^{1/2} = (Σ_{j=1}^{J} (x_j² + y_j²))^{1/2}.
302
+ 2. Estimates on random model
303
+ We define the random vector
+ \[ L(\sigma, X) = \big( \log |L_1(\sigma, X)|, \ldots, \log |L_J(\sigma, X)|, \arg L_1(\sigma, X), \ldots, \arg L_J(\sigma, X) \big) \]
+ for σ > 1/2, where each L_j(σ, X) is defined by the product
+ \[ (2.1)\quad L_j(\sigma, X) = \prod_p \prod_{i=1}^{d} \Big( 1 - \frac{\alpha_{j,i}(p) X(p)}{p^{\sigma}} \Big)^{-1} \]
+ and {X(p)}_p is a sequence of independent random variables, indexed by the prime
+ numbers, and uniformly distributed on the unit circle {z ∈ C : |z| = 1}. The product
+ converges almost surely for σ > 1/2 by Kolmogorov's three series theorem.
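The random model (2.1) is easy to sample. In the simplest case d = 1 and α_{j,i}(p) = 1 (the zeta case), log L(σ, X) = −Σ_p log(1 − X(p)p^{−σ}), and since E[X(p)^k] = 0 for k ≥ 1, the samples should average to roughly 0. The Monte Carlo sketch below truncates the product over primes; the cutoff, σ, and sample count are arbitrary illustrative choices:

```python
import cmath
import math
import random

def primes_up_to(n):
    # Simple sieve of Eratosthenes.
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def sample_log_L(sigma, prime_list, rng):
    # One draw of log L(sigma, X) = -sum_p log(1 - X(p) p^{-sigma}), zeta case.
    total = 0j
    for p in prime_list:
        x_p = cmath.exp(2j * math.pi * rng.random())  # X(p) uniform on |z| = 1
        total -= cmath.log(1 - x_p * p ** (-sigma))
    return total

rng = random.Random(1)
ps = primes_up_to(1000)
samples = [sample_log_L(0.6, ps, rng) for _ in range(500)]
mean = sum(samples) / len(samples)  # E[log L] = 0, so this should be near 0
```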
325
+ Define a probability measure
+ \[ (2.2)\quad \Phi_T^{\mathrm{rand}}(B) := \mathbb{P}\big( L(\sigma_T, X) \in B \big) \]
+ for a Borel set B in R^{2J}. By [4, Theorem 2.3] we have
+ \[ \Phi_T(R_T) = \Phi_T^{\mathrm{rand}}(R_T) + O\big( (\log T)^{(\theta-1)/2} \log \log T \big) \]
+ for 0 < θ < 1/2. This means that the distribution of L(σ_T + it) is well approximated by the
+ distribution of its random model L(σ_T, X) when 0 < θ < 1/2. Thus, Theorem 1.2 is an
+ immediate consequence of the following theorem.
339
+ Theorem 2.1. Let 0 < θ < 1/2. Assume A1–A5 for L1, . . . , LJ. Then there exist constants ǫ, κ > 0 and a sequence {b_{k,l}} of real numbers such that
+ \[ \Phi_T^{\mathrm{rand}}(R_T) = \sum_{K(\mathbf{k}+\mathbf{l}) \le \epsilon \log\log T} b_{\mathbf{k},\mathbf{l}} \prod_{j=1}^{J} \frac{1}{\sqrt{\psi_{j,T}}^{\,k_j+\ell_j}} \prod_{j=1}^{J} \bigg( \int_{a_j}^{b_j} e^{-\pi u^2} H_{k_j}(\sqrt{\pi}\,u)\,du \int_{c_j}^{d_j} e^{-\pi v^2} H_{\ell_j}(\sqrt{\pi}\,v)\,dv \bigg) + O\bigg( \frac{1}{(\log T)^{\kappa}} \bigg). \]
+ Moreover, b_{0,0} = 1, b_{k,l} = 0 if K(k + l) = 1 and b_{k,l} = O(δ_0^{−K(k+l)}) for some δ_0 > 0
+ and all k, l.
375
+ In [4, Section 7] we find that the measure Φ_T^{rand} is absolutely continuous and it has
+ a density function H_T(u, v) such that
+ \[ (2.3)\quad \Phi_T^{\mathrm{rand}}(R_T) = \iint_{R_T} H_T(u, v)\,du\,dv. \]
386
+ Hence, Theorem 2.1 follows from (2.3) and the following proposition, which upgrades
387
+ [4, Lemma 7.4].
391
+ Proposition 2.2. Let 0 < θ < 1/2. Assume A1–A5 for L1, . . . , LJ. There exist constants ǫ, κ > 0 and a sequence {b_{k,l}} of real numbers such that
+ \[ H_T(u, v) = \sum_{K(\mathbf{k}+\mathbf{l}) \le \epsilon \log\log T} b_{\mathbf{k},\mathbf{l}} \prod_{j=1}^{J} \frac{1}{\pi \sqrt{\psi_{j,T}}^{\,k_j+\ell_j+2}}\, e^{-\frac{u_j^2 + v_j^2}{\psi_{j,T}}} H_{k_j}\Big( \frac{u_j}{\sqrt{\psi_{j,T}}} \Big) H_{\ell_j}\Big( \frac{v_j}{\sqrt{\psi_{j,T}}} \Big) + O\bigg( \frac{1}{(\log T)^{\kappa}} \bigg). \]
+ Moreover, b_{0,0} = 1, b_{k,l} = 0 if K(k + l) = 1 and b_{k,l} = O(δ_0^{−K(k+l)}) for some δ_0 > 0
+ and all k, l.
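The K(k + l) = 0 term of the expansion in Proposition 2.2 is a product of two-dimensional Gaussian densities (πψ_{j,T})^{−1} e^{−(u_j²+v_j²)/ψ_{j,T}}. As a sanity check, each one-dimensional factor (πψ)^{−1/2} e^{−u²/ψ} integrates to 1; a quick midpoint-rule verification with an arbitrary ψ (editorial illustration only):

```python
import math

def one_dim_mass(psi_val, lim=20.0, steps=100000):
    # Midpoint rule for the integral over [-lim, lim] of
    # (pi * psi)^{-1/2} * exp(-u^2 / psi); the tail beyond lim is negligible.
    h = 2.0 * lim / steps
    total = 0.0
    for i in range(steps):
        u = -lim + (i + 0.5) * h
        total += math.exp(-u * u / psi_val)
    return total * h / math.sqrt(math.pi * psi_val)
```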
431
+ To prove Proposition 2.2, we need to understand the Fourier transform
+ \[ \widehat{\Phi}_T^{\mathrm{rand}}(x, y) := \int_{\mathbb{R}^{2J}} e^{2\pi i (x \cdot u + y \cdot v)}\, d\Phi_T^{\mathrm{rand}}(u, v) \]
+ for x, y ∈ R^J. By the definition of Φ_T^{rand} in (2.2), we have
+ \[ \widehat{\Phi}_T^{\mathrm{rand}}(x, y) = \mathbb{E}\Big[ \exp\Big( 2\pi i \sum_{j=1}^{J} \big( x_j \log |L_j(\sigma_T, X)| + y_j \arg L_j(\sigma_T, X) \big) \Big) \Big]. \]
457
+ By assumptions A1 and A5 we see that
+ \[ (2.4)\quad \beta_{L_j}(p^k) = \frac{1}{k} \sum_{i=1}^{d} \alpha_{j,i}(p)^k. \]
+ By (2.4) and (2.1) we have
+ \[ \log L_j(\sigma, X) = \sum_p \sum_{k=1}^{\infty} \frac{\beta_{L_j}(p^k)\, X(p)^k}{p^{k\sigma}}. \]
+ Define
+ \[ (2.5)\quad g_{j,p}(\sigma) := \sum_{k=1}^{\infty} \frac{\beta_{L_j}(p^k)\, X(p)^k}{p^{k\sigma}}; \]
+ then we have
+ \[ (2.6)\quad \widehat{\Phi}_T^{\mathrm{rand}}(x, y) = \prod_p \varphi_{p,\sigma_T}(x, y), \]
+ where
+ \[ \varphi_{p,\sigma}(x, y) := \mathbb{E}\Big[ \exp\Big( 2\pi i \sum_{j=1}^{J} \big( x_j\, \mathrm{Re}(g_{j,p}(\sigma)) + y_j\, \mathrm{Im}(g_{j,p}(\sigma)) \big) \Big) \Big] \]
505
+ for each prime p. Let z = (z1, . . . , zJ) = x + iy, then we find that
506
+ ϕp,σ(x, y) = E
507
+ � J�
508
+ j=1
509
+ eπizjgj,p(σ)eπizjgj,p(σ)
510
+
511
+ .
512
+
513
515
+ By expanding the 2J exponential functions into power series we obtain
516
+ ϕp,σ(x, y) =
517
+
518
+ k,l∈(Z≥0)J
519
+ (πi)K(k+l)zkzl
520
+ k!l!
521
+ E
522
+
523
+ J�
524
+ j=1
525
+ gj,p(σ)kjgj,p(σ)
526
+ ℓj
527
+
528
+ with the vector notations from the end of Section 1. It is easy to see that the expectation
529
+ \[ (2.7)\quad A_{p,\sigma}(\mathbf{k}, \mathbf{l}) := \mathbb{E}\Big[ \prod_{j=1}^{J} g_{j,p}(\sigma)^{k_j}\, \overline{g_{j,p}(\sigma)}^{\,\ell_j} \Big] \]
+ satisfies A_{p,σ}(0, 0) = 1 and A_{p,σ}(0, k) = A_{p,σ}(k, 0) = 0 for k ≠ 0. Thus, we obtain
+ \[ (2.8)\quad \varphi_{p,\sigma}(x, y) = 1 + R_{p,\sigma}(z), \]
+ where
+ \[ (2.9)\quad R_{p,\sigma}(z) := \sum_{\mathbf{k} \ne 0} \sum_{\mathbf{l} \ne 0} \frac{(\pi i)^{K(\mathbf{k}+\mathbf{l})}\, z^{\mathbf{k}} \bar{z}^{\mathbf{l}}}{\mathbf{k}!\, \mathbf{l}!}\, A_{p,\sigma}(\mathbf{k}, \mathbf{l}). \]
+ Hence, by (2.6) and (2.8) we have
+ \[ (2.10)\quad \widehat{\Phi}_T^{\mathrm{rand}}(x, y) = \prod_p \big( 1 + R_{p,\sigma_T}(z) \big). \]
558
+ To compute the product in (2.10), we need the following lemma.
559
+ Lemma 2.3. There exists a constant δ1 > 0 such that |R_{p,σ_T}(z)| ≤ 1/2 for every prime p and ||z|| ≤ δ1.
563
+ See Section 3.1 for a proof. By Lemma 2.3 we have
+ \[ (2.11)\quad \widehat{\Phi}_T^{\mathrm{rand}}(x, y) = \exp\Big( \sum_p \log\big( 1 + R_{p,\sigma_T}(z) \big) \Big) = \exp\Big( \sum_p \sum_{m=1}^{\infty} \frac{(-1)^{m-1}}{m} R_{p,\sigma_T}(z)^m \Big) \]
582
+ for ||z|| ≤ δ1. By (2.9) the sum \( \sum_p \sum_{m=1}^{\infty} \frac{(-1)^{m-1}}{m} R_{p,\sigma}(z)^m \) has a power series representation in z1, . . . , zJ, z̄1, . . . , z̄J, so let B_σ(k, l) be the coefficients such that
+ \[ (2.12)\quad \sum_{\mathbf{k} \ne 0} \sum_{\mathbf{l} \ne 0} B_\sigma(\mathbf{k}, \mathbf{l})\, z^{\mathbf{k}} \bar{z}^{\mathbf{l}} = \sum_p \sum_{m=1}^{\infty} \frac{(-1)^{m-1}}{m} R_{p,\sigma}(z)^m. \]
+ Define I_{n,σ}(z) for each n ≥ 2 as the sum of the degree-n terms in the above sum, i.e.,
+ \[ (2.13)\quad I_{n,\sigma}(z) := \sum_{\substack{\mathbf{k}, \mathbf{l} \ne 0 \\ K(\mathbf{k}+\mathbf{l}) = n}} B_\sigma(\mathbf{k}, \mathbf{l})\, z^{\mathbf{k}} \bar{z}^{\mathbf{l}}. \]
611
+
612
614
+ We see that In,σ(z) is a homogeneous polynomial in x1, . . . , xJ, y1, . . . , yJ of degree n,
615
+ and that
616
+ (2.14)
617
+ �Φrand
618
+ T
619
+ (x, y) = exp
620
+ � ∞
621
+
622
+ n=2
623
+ In,σT (z)
624
+
625
+ for ||z|| ≤ δ1 by (2.11)–(2.13). We find an asymptotic formula for In,σT (z) as T → ∞
626
+ in the following lemma.
627
+ Lemma 2.4. There are complex numbers Cj1,j2 such that
628
+ (2.15)
629
+ I2,σT (z) = −π2
630
+ J
631
+
632
+ j=1
633
+ ψj,T|zj|2 +
634
+ J
635
+
636
+ j1,j2=1
637
+ Cj1,j2zj1zj2 + O
638
+ �log log T
639
+ (log T)θ
640
+
641
+ for ||z|| ≤ δ1, where ψj,T is defined in (1.4) and Cj1,j2 = Cj2,j1. For n ≥ 3, there is a
642
+ constant C = CJ,d,η > 0 such that
643
+ |In,σ(z)| ≤ Cn||z||n
644
+ for σ ≥ 1
645
+ 2 and
646
+ |In,σT (z) − In,1/2(z)| ≤ Cn||z||n
647
+ (log T)θ .
648
+ See Section 3.2 for a proof. Define
649
+ (2.16)
650
+ QT(z) := −π2
651
+ J
652
+
653
+ j=1
654
+ ψj,T|zj|2,
655
+ (2.17)
656
+ I2(z) :=
657
+ J
658
+
659
+ j1,j2=1
660
+ Cj1,j2zj1zj2
661
+ and
662
+ (2.18)
663
+ In(z) := In,1/2(z)
664
+ for n > 2. By (2.17) and the Cauchy-Schwarz inequality we obtain
665
+ |I2(z)| ≤ J(max
666
+ j1,j2 |Cj1,j2|)||z||2.
667
+ By this inequality, (2.18) and Lemma 2.4 we have
668
+ (2.19)
669
+ |In(z)| ≤ 2−n
670
+ for n ≥ 2 and ||z|| ≤ δ2, where
671
+ (2.20)
672
+ δ2 := min
673
+
674
+ δ1, 1
675
+ 2C ,
676
+ 1
677
+ 2
678
+
679
+ J maxj1,j2 |Cj1,j2|
680
+
681
+ .
682
+
683
685
+ It follows from (2.14), Lemma 2.4 and (2.16)–(2.19) that
686
+ �Φrand
687
+ T
688
+ (x, y) = exp
689
+
690
+ QT(z) +
691
+
692
+
693
+ n=2
694
+ In(z) + O
695
+ �log log T
696
+ (log T)θ
697
+ ��
698
+ = eQT (z)
699
+ � ∞
700
+
701
+ r=0
702
+ 1
703
+ r!
704
+
705
+
706
+
707
+ n=2
708
+ In(z)
709
+ �r
710
+ + O
711
+ �log log T
712
+ (log T)θ
713
+ ��
714
+ (2.21)
715
+ for ||z|| ≤ δ2. Note that each In(z) is a homogeneous polynomial in x1, . . . , xJ, y1, . . . , yJ
716
+ of degree n and does not depend on T. Since the sum �∞
717
+ r=0
718
+ 1
719
+ r!
720
+ � �∞
721
+ n=2 In(z)
722
+ �r is a power
723
+ series in x and y, we let {bk,l} be a sequence of complex numbers such that
724
+ (2.22)
725
+ G(x, y) :=
726
+
727
+ k,l
728
+ (2πi)K(k+l)bk,lxkyl =
729
+
730
+
731
+ r=0
732
+ 1
733
+ r!
734
+
735
+
736
+
737
+ n=2
738
+ In(z)
739
+ �r
740
+ .
741
+ Then the bk,l satisfy the following properties.
742
+ Lemma 2.5. Let δ3 be a constant satisfying 0 < δ3 <
743
+ π
744
+
745
+ J δ2, then bk,l is a real number
746
+ and
747
+ (2.23)
748
+ |bk,l| ≤
749
+ √e
750
+ δK(k+l)
751
+ 3
752
+ for every k, l. In particular, b0,0 = 1 and bk,l = 0 if K(k + l) = 1.
753
+ See Section 3.3 for a proof. The infinite sum over k, l in (2.22) can be approximated
754
+ by its partial sum. We shall prove a quantitative version. Let ǫ > 0. By (2.22) and
755
+ (2.19) we have
756
+ ����
757
+
758
+ K(k+l)>ǫ log log T
759
+ (2πi)K(k+l)bk,lxkyl
760
+ ���� ≤
761
+
762
+
763
+ r=1
764
+ 1
765
+ r!
766
+
767
+ n1,...,nr≥2
768
+ n1+···+nr>ǫ log log T
769
+ �1
770
+ 2
771
+ �n1+···+nr
772
+
773
+
774
+
775
+ r=1
776
+ 1
777
+ r!
778
+
779
+ m>ǫ log log T
780
+ 1
781
+ 2m
782
+
783
+ n1,...,nr≥2
784
+ n1+···+nr=m
785
+ 1
786
+ for ||z|| ≤ δ2. We substitute nj by n′
787
+ j + 2 for j = 1, . . . , r in the last sum, then the last
788
+ sum equals to the number of nonnegative integers n′
789
+ 1, . . . , n′
790
+ r such that n′
791
+ 1 + . . . + n′
792
+ r =
793
+ m − 2r, which equals to
794
+ �m−r−1
795
+ r−1
796
+
797
+ . Thus, the above sum is
798
+
799
+
800
+
801
+ r=1
802
+ 1
803
+ r!
804
+
805
+ m>ǫ log log T
806
+ 1
807
+ 2m
808
+ �m − r − 1
809
+ r − 1
810
+
811
+
812
+
813
+
814
+ r=1
815
+ 1
816
+ r!
817
+
818
+ m>ǫ log log T
819
+ 1
820
+ 2m
821
+ mr−1
822
+ (r − 1)!
823
+
824
+
825
+ m>ǫ log log T
826
+ 1
827
+ 2m
828
+
829
+
830
+ n=0
831
+ mn
832
+ (n!)2 ≤
833
+
834
+ m>ǫ log log T
835
+ 1
836
+ 2m
837
+
838
+
839
+
840
+ n=0
841
+ √mn
842
+ n!
843
+ �2
844
+ =
845
+
846
+ m>ǫ log log T
847
+ e2√m
848
+ 2m
849
+
850
+
851
+ m>ǫ log log T
852
+ �2
853
+ 3
854
+ �m
855
+ ≤ 3
856
+ �2
857
+ 3
858
+ �ǫ log log T
859
+
860
+ 1
861
+ (log T)κ
862
+
863
865
+ with a constant κ ≤ ǫ log 3
866
+ 2. It follows from these estimates, (2.21), (2.22) and Lemma
867
+ 2.5 we obtain the following proposition.
868
+ Proposition 2.6. Let δ2 be the constant defined in (2.20). Let κ and ǫ be constants
869
+ such that 0 < κ < θ and κ ≤ ǫ log 3
870
+ 2. Let {bk,l} be a sequence of real numbers defined
871
+ by its generating series (2.22). Then
872
+ �Φrand
873
+ T
874
+ (x, y) = eQT (z)
875
+
876
+
877
+ K(k+l)≤ǫ log log T
878
+ (2πi)K(k+l)bk,lxkyl + O
879
+
880
+ 1
881
+ (log T)κ
882
+ ��
883
+ holds for ||z|| ≤ δ2.
884
+ We are ready to prove Proposition 2.2. The density function HT(u, v) of the measure
885
+ Φrand
886
+ T
887
+ is the inverse Fourier transform of �Φrand
888
+ T
889
+ , so that
890
+ HT(u, v) =
891
+
892
+ RJ
893
+
894
+ RJ
895
+ �Φrand
896
+ T
897
+ (x, y)e−2πi(x·u+y·v)dxdy.
898
+ Let δ4 be a constant such that 0 < δ4 ≤ min{δ2, δ3
899
+ 4π}. By Lemma 7.1 and (7.14) in [4]
900
+ we find that
901
+ HT(u, v) =
902
+ ��
903
+ ||z||≤δ4
904
+ �Φrand
905
+ T
906
+ (x, y)e−2πi(x·u+y·v)dxdy + O
907
+
908
+ 1
909
+ (log T)κ
910
+
911
+ for some κ > 0. See the proof of [4, Lemma 7.4] for a detail.
912
+ By Proposition 2.6 we have
913
+ HT(u, v) =
914
+
915
+ K(k+l)≤ǫ log log T
916
+ (2πi)K(k+l)bk,l
917
+ ��
918
+ ||z||≤δ4
919
+ eQT (z)−2πi(x·u+y·v)xkyldxdy+O
920
+
921
+ 1
922
+ (log T)κ
923
+
924
+ for some ǫ, κ > 0. Let ξmin = minj≤J ξj > 0, then we have
925
+ ����
926
+ ��
927
+ ||z||≥δ4
928
+ eQT (z)−2πi(x·u+y·v)xkyldxdy
929
+ ���� ≤
930
+ ��
931
+ ||z||≥δ4
932
+ e−π2ξminθ log log T||z||2||z||K(k+l)dxdy
933
+
934
+ � ∞
935
+ δ4
936
+ e−(π2ξminθ log log T)r2rK(k+l)+2J−1dr
937
+
938
+ 1
939
+ (π2ξminθ log log T)
940
+ K(k+l)
941
+ 2
942
+ +J
943
+ � ∞
944
+ πδ4
945
+ √ξminθ log log T
946
+ e−r2rK(k+l)+2J−1dr
947
+ by the change of variables to the polar coordinates. By the Cauchy-Schwarz inequality
948
+ we have
949
+ � ∞
950
+ X
951
+ e−r2rMdr ≤
952
+ �� ∞
953
+ X
954
+ e−r2rdr
955
+ � ∞
956
+ 0
957
+ e−r2r2M−1dr =
958
+
959
+ (M − 1)!
960
+ 2
961
+ e− 1
962
+ 2 X2.
963
+ Hence, it follows from Lemma 2.5 and the above estimations that
964
+ HT(u, v) =
965
+
966
+ K(k+l)≤ǫ log log T
967
+ (2πi)K(k+l)bk,l
968
+
969
+ RJ
970
+
971
+ RJ eQT (z)−2πi(x·u+y·v)xkyldxdy
972
+
973
975
+ + O
976
+
977
+ 1
978
+ (log T)
979
+ 1
980
+ 2π2δ2
981
+ 4ξminθ
982
+
983
+ K(k+l)≤ǫ log log T
984
+ �2π
985
+ δ3
986
+ �K(k+l) �
987
+ (K(k + l) + 2J − 2)!
988
+ (π2ξminθ log log T)
989
+ K(k+l)
990
+ 2
991
+ +J
992
+
993
+ + O
994
+
995
+ 1
996
+ (log T)κ
997
+
998
+ .
999
+ By Stirling’s formula the k, l-sum in the above O-term is
1000
+
1001
+
1002
+ K(k+l)≤ǫ log log T
1003
+ �2π
1004
+ δ3
1005
+ �K(k+l)
1006
+ 1
1007
+ (π2ξminθ log log T)
1008
+ K(k+l)
1009
+ 2
1010
+ +J
1011
+ �2ǫ log log T
1012
+ e
1013
+ � K(k+l)
1014
+ 2
1015
+ +J− 3
1016
+ 4
1017
+
1018
+
1019
+ k,l
1020
+
1021
+ 2
1022
+
1023
+
1024
+ δ3
1025
+ √ξminθe
1026
+ �K(k+l)
1027
+
1028
+
1029
+ k,l
1030
+ �1
1031
+ 2
1032
+ �K(k+l)
1033
+ = 22J,
1034
+ provided that 0 < ǫ ≤
1035
+ 1
1036
+ 32δ2
1037
+ 3ξminθe. With this choice of ǫ, we have
1038
+ HT(u, v) =
1039
+
1040
+ K(k+l)≤ǫ log log T
1041
+ (2πi)K(k+l)bk,l
1042
+
1043
+ RJ
1044
+
1045
+ RJ eQT (z)−2πi(x·u+y·v)xkyldxdy
1046
+ + O
1047
+
1048
+ 1
1049
+ (log T)κ
1050
+
1051
+ for some κ > 0
1052
+ It remains to calculate the above integral. We first write it as repeated integrals
1053
+
1054
+ RJ
1055
+
1056
+ RJ eQT (z)−2πi(x·u+y·v)xkyldxdy
1057
+ =
1058
+ J
1059
+
1060
+ j=1
1061
+
1062
+ R
1063
+
1064
+ R
1065
+ e−ψj,T π2(x2
1066
+ j+y2
1067
+ j )−2πi(xjuj+yjvj)x
1068
+ kj
1069
+ j y
1070
+ ℓj
1071
+ j dxjdyj
1072
+ =
1073
+ J
1074
+
1075
+ j=1
1076
+
1077
+ R
1078
+ e−ψj,T π2x2
1079
+ j−2πixjujx
1080
+ kj
1081
+ j dxj
1082
+
1083
+ R
1084
+ e−ψj,T π2y2
1085
+ j −2πiyjvjy
1086
+ ℓj
1087
+ j dyj.
1088
+ Each integral can be written in terms of the Hermite polynomials defined in (1.2). Since
1089
+
1090
+ R
1091
+ e−ψπ2x2−2πixuxkdx =
1092
+ 1
1093
+ (−2πi)k
1094
+ dk
1095
+ duk
1096
+
1097
+ R
1098
+ e−ψπ2x2−2πixudx
1099
+ =
1100
+ 1
1101
+ (−2πi)k
1102
+ dk
1103
+ duk
1104
+ 1
1105
+ √πψe− u2
1106
+ ψ
1107
+ =
1108
+ 1
1109
+ (2πi)k√π√ψ
1110
+ k+1e− u2
1111
+ ψ Hk
1112
+ � u
1113
+ √ψ
1114
+
1115
+ ,
1116
+ we have
1117
+
1118
+ RJ
1119
+
1120
+ RJ eQT (z)−2πi(x·u+y·v)xkyldxdy
1121
+
1122
1124
+ =
1125
+ J�
1126
+ j=1
1127
+ 1
1128
+ π(2πi)kj+ℓj�
1129
+ ψj,T
1130
+ kj+ℓj+2e
1131
+
1132
+ u2
1133
+ j +v2
1134
+ j
1135
+ ψj,T Hkj
1136
+
1137
+ uj
1138
+
1139
+ ψj,T
1140
+
1141
+ Hℓj
1142
+
1143
+ vj
1144
+
1145
+ ψj,T
1146
+
1147
+ .
1148
+ Thus, we have
1149
+ HT(u, v) =
1150
+
1151
+ K(k+l)≤ǫ log log T
1152
+ bk,l
1153
+ J�
1154
+ j=1
1155
+ 1
1156
+ π
1157
+
1158
+ ψj,T
1159
+ kj+ℓj+2e
1160
+
1161
+ u2
1162
+ j +v2
1163
+ j
1164
+ ψj,T Hkj
1165
+
1166
+ uj
1167
+
1168
+ ψj,T
1169
+
1170
+ Hℓj
1171
+
1172
+ vj
1173
+
1174
+ ψj,T
1175
+
1176
+ + O
1177
+
1178
+ 1
1179
+ (log T)κ
1180
+
1181
+ for some ǫ, κ > 0. This completes the proof of Proposition 2.2.
1182
3. Proofs of lemmas
We prove Lemma 2.3 in Section 3.1, Lemma 2.4 in Section 3.2 and Lemma 2.5 in Section 3.3. In the proofs, we need the inequalities
(3.1)
\[
|\beta_{L_j}(p^k)| \le \frac{d}{k}\,p^{k\eta} \quad\text{for } k\ge 1,
\]
(3.2)
\[
|\beta_{L_j}(p^k)| \le \frac{1}{k}\sum_{i=1}^{d}|\alpha_{j,i}(p)|^k \le \frac{p^{(k-2)\eta}}{k}\sum_{i=1}^{d}|\alpha_{j,i}(p)|^2 \quad\text{for } k\ge 2
\]
and
(3.3)
\[
|\beta_{L_j}(p)|^2 \le \left(\sum_{i=1}^{d}|\alpha_{j,i}(p)|\right)^{2} \le d\sum_{i=1}^{d}|\alpha_{j,i}(p)|^2,
\]
which follow by (2.4) and assumption A1.
3.1. Proof of Lemma 2.3. By (2.5) and (3.1) there is a constant C_1 := C_{1,d,η} > 0 such that
(3.4)
\[
|g_{j,p}(\sigma_T)| \le \sum_{k=1}^{\infty}\frac{d}{k}\,\frac{p^{k\eta}}{p^{k/2}} \le \frac{C_1}{p^{\frac12-\eta}}
\]
for every prime p and j = 1, …, J. By (2.7), (2.9) and (3.4) we obtain
\[
|R_{p,\sigma_T}(z)| \le \sum_{k\ne 0}\sum_{l\ne 0}\frac{1}{k!\,l!}\left(\pi\|z\|\,\frac{C_1}{p^{\frac12-\eta}}\right)^{K(k+l)}
= \left(\exp\!\left(\frac{J\pi C_1\|z\|}{p^{\frac12-\eta}}\right)-1\right)^{2}.
\]
Thus, there exists a constant C_2 := C_{2,d,J,η} > 0 such that
\[
|R_{p,\sigma_T}(z)| \le \frac{C_2}{p^{1-2\eta}}\|z\|^2 \le \frac{C_2}{2^{1-2\eta}}\|z\|^2
\]
for ‖z‖ ≤ 1 and every prime p. Therefore, there exists a constant δ_1 > 0 such that
\[
|R_{p,\sigma_T}(z)| \le \frac12
\]
for ‖z‖ ≤ δ_1 and every prime p.
CENTRAL LIMIT THEOREM OF L-FUNCTIONS
3.2. Proof of Lemma 2.4. We first derive the useful expression
(3.5)
\[
I_{n,\sigma}(z) = (\pi i)^n \sum_{1\le m\le n/2}\frac{(-1)^{m-1}}{m}
\sum_{\substack{k_1,\dots,k_m,l_1,\dots,l_m\ne 0\\ K(k_1+\cdots+k_m+l_1+\cdots+l_m)=n}}
\frac{z^{k_1+\cdots+k_m}\,\overline{z}^{\,l_1+\cdots+l_m}}{k_1!\cdots k_m!\,l_1!\cdots l_m!}
\sum_{p} A_{p,\sigma}(k_1,l_1)\cdots A_{p,\sigma}(k_m,l_m)
\]
from (2.9), (2.12) and (2.13). Here, the sum over m runs over 1 ≤ m ≤ n/2 because
\[
n = K(k_1+\cdots+k_m+l_1+\cdots+l_m) \ge 2m
\]
for k_1, …, k_m, l_1, …, l_m ≠ 0.
The asymptotic (2.15) for I_{2,σ_T}(z) is already known; see (7.16) of [4, Lemma 7.3]. We next prove
(3.6)
\[
C_{j_1,j_2} = \overline{C_{j_2,j_1}}.
\]
We have
(3.7)
\[
A_{p,\sigma}(k,l) = \overline{A_{p,\sigma}(l,k)}
\]
by (2.7). By (3.5) we also have
(3.8)
\[
\overline{I_{2,\sigma}(z)} = I_{2,\sigma}(z).
\]
So we obtain (3.6) by (2.15) and (3.8).
For the case n > 2, we observe that A_{p,σ}(k,l) for a real σ can be extended to an analytic function in a complex variable s via
(3.9)
\[
A_{p,s}(k,l) = \mathbb{E}\left[\prod_{j=1}^{J}\left(\sum_{k=1}^{\infty}\frac{\beta_{L_j}(p^k)X(p)^k}{p^{ks}}\right)^{k_j}\left(\sum_{k=1}^{\infty}\frac{\overline{\beta_{L_j}(p^k)X(p)^k}}{p^{ks}}\right)^{\ell_j}\right].
\]
This observation essentially leads us to prove the following lemma.
Lemma 3.1. Let η be the constant in assumption A1 and assume K(k_1 + ⋯ + k_m + l_1 + ⋯ + l_m) = n ≥ 3. The Dirichlet series
\[
f(s) := \sum_{p} A_{p,s}(k_1,l_1)\cdots A_{p,s}(k_m,l_m)
\]
is absolutely convergent for Re(s) ≥ (5+2η)/12. Moreover, there exists a constant C_3 = C_{3,J,d,η} > 0 such that
\[
|f(s)| \le C_3^n \quad\text{for } \operatorname{Re}(s) \ge \frac{5+2\eta}{12}
\]
and
\[
|f(\sigma_T) - f(\tfrac12)| \le \frac{C_3^n}{(\log T)^{\theta}}.
\]
Proof. We first show that there is a constant C_4 > 0 such that
\[
|f(s)| \le C_4^n
\]
for Re(s) ≥ (5+2η)/12. By (3.9) we find that
\[
|A_{p,s}(k,l)| \le \left(\sum_{k=1}^{\infty}\frac{\max_{j\le J}|\beta_{L_j}(p^k)|}{p^{k\operatorname{Re}(s)}}\right)^{K(k+l)}.
\]
Thus, we have
(3.10)
\[
|f(s)| \le \sum_{p}\left(\sum_{k=1}^{\infty}\frac{\max_{j\le J}|\beta_{L_j}(p^k)|}{p^{k\operatorname{Re}(s)}}\right)^{n}
\le 2^n\sum_{p}\left(\frac{\max_{j\le J}|\beta_{L_j}(p)|}{p^{\operatorname{Re}(s)}}\right)^{n}
+ 2^n\sum_{p}\left(\sum_{k=2}^{\infty}\frac{\max_{j\le J}|\beta_{L_j}(p^k)|}{p^{k\operatorname{Re}(s)}}\right)^{n}.
\]
The first sum on the right hand side of (3.10) is
\[
\sum_{p}\frac{\left(\max_{j\le J}|\beta_{L_j}(p)|\right)^{n}}{p^{n\operatorname{Re}(s)}}
\le \sum_{p}\frac{(dp^{\eta})^{n-2}\,\max_{j\le J} d\sum_{i=1}^{d}|\alpha_{j,i}(p)|^2}{p^{n\operatorname{Re}(s)}}
\le d^{n-1}\sum_{p}\frac{\sum_{j=1}^{J}\sum_{i=1}^{d}|\alpha_{j,i}(p)|^2}{p^{1+\varepsilon}}
\le C_5^n
\]
for Re(s) ≥ (5+2η)/12 by (3.1) and (3.3), where ε = 1/4 − η/2 > 0 and
\[
C_5 := \max\left\{d,\ \sum_{p}\frac{\sum_{j=1}^{J}\sum_{i=1}^{d}|\alpha_{j,i}(p)|^2}{p^{1+\varepsilon}}\right\}.
\]
Note that the last p-sum is convergent by assumption A3 and partial summation.
The second sum on the right hand side of (3.10) is
\[
\sum_{p}\left(\sum_{k=2}^{\infty}\frac{\max_{j\le J}|\beta_{L_j}(p^k)|}{p^{k\operatorname{Re}(s)}}\right)^{n}
\le \sum_{p}\left(\sum_{k=2}^{\infty}\frac{\max_{j\le J}\sum_{i=1}^{d}|\alpha_{j,i}(p)|^2}{k\,p^{k\operatorname{Re}(s)-(k-2)\eta}}\right)^{n}
\le \sum_{p}\left(\frac{\max_{j\le J}\sum_{i=1}^{d}|\alpha_{j,i}(p)|^2}{p^{2\operatorname{Re}(s)}}\cdot\frac12\cdot\frac{1}{1-p^{-(\operatorname{Re}(s)-\eta)}}\right)^{n}
\]
\[
\le \left(\frac12\cdot\frac{1}{1-2^{-\frac{5}{12}(1-2\eta)}}\right)^{n}\sum_{p}\frac{(dp^{2\eta})^{n-1}\,\max_{j\le J}\sum_{i=1}^{d}|\alpha_{j,i}(p)|^2}{p^{2n\operatorname{Re}(s)}}
\le \left(\frac12\cdot\frac{1}{1-2^{-\frac{5}{12}(1-2\eta)}}\right)^{n} d^{n-1}\sum_{p}\frac{\sum_{j=1}^{J}\sum_{i=1}^{d}|\alpha_{j,i}(p)|^2}{p^{1+6\varepsilon}}
\le C_6^n
\]
for Re(s) ≥ (5+2η)/12 by (3.2), where
\[
C_6 := \frac12\cdot\frac{1}{1-2^{-\frac{5}{12}(1-2\eta)}}\,\max\left\{d,\ \sum_{p}\frac{\sum_{j=1}^{J}\sum_{i=1}^{d}|\alpha_{j,i}(p)|^2}{p^{1+6\varepsilon}}\right\}.
\]
We choose C_4 = 2(C_5 + C_6); then we have
(3.11)
\[
|f(s)| \le C_4^n
\]
for Re(s) ≥ (5+2η)/12. One can easily see from the above estimates that f(s) is absolutely convergent for Re(s) ≥ (5+2η)/12.
Let ε_1 = 1/2 − (5+2η)/12 > 0. Since
\[
f(\sigma_T) - f(\tfrac12) = \int_{1/2}^{\sigma_T} f'(u)\,du = \int_{1/2}^{\sigma_T}\frac{1}{2\pi i}\oint_{|z-u|=\varepsilon_1}\frac{f(z)}{(z-u)^2}\,dz\,du,
\]
we obtain
(3.12)
\[
|f(\sigma_T) - f(\tfrac12)| \le \left(\sigma_T - \tfrac12\right)\frac{1}{\varepsilon_1}\sup_{\operatorname{Re}(z)\ge \frac12-\varepsilon_1}|f(z)| \le \frac{C_4^n}{\varepsilon_1(\log T)^{\theta}}
\]
by (3.11). Let C_3 = C_4/ε_1 > C_4; then (3.11) and (3.12) imply both inequalities in the lemma. ∎
Therefore, by Lemma 3.1, (3.5) and Stirling's formula, we have
\[
|I_{n,\sigma}(z)| \le \|z\|^n(\pi C_3)^n\sum_{m\le n/2}\frac{1}{m}\sum_{K(k_1+\cdots+k_m+l_1+\cdots+l_m)=n}\frac{1}{k_1!\cdots k_m!\,l_1!\cdots l_m!}
\le \|z\|^n(\pi C_3)^n\sum_{m\le n/2}\frac{1}{m}\,\frac{(2mJ)^n}{n!}
\le \|z\|^n(J\pi C_3)^n\frac{n^n}{n!}
\le \|z\|^n(J\pi C_3 e)^n
\]
for σ ≥ (5+2η)/12 and n > 2. Similarly, we have
\[
|I_{n,\sigma_T}(z) - I_{n,1/2}(z)| \le \frac{\|z\|^n(J\pi C_3 e)^n}{(\log T)^{\theta}}
\]
for n > 2. Therefore, Lemma 2.4 holds with the constant
(3.13)
\[
C = J\pi C_3 e.
\]
3.3. Proof of Lemma 2.5. We first consider G(x,y) in (2.22) as a function in the complex variables x_1, …, x_J, y_1, …, y_J. We replace x_j by x_j/(2πi) and y_j by y_j/(2πi) for j = 1, …, J in (2.22); then we obtain
(3.14)
\[
\sum_{k,l} b_{k,l}\,x^ky^l = \sum_{r=0}^{\infty}\frac{1}{r!}\left(\sum_{n=2}^{\infty} I_n(z)(2\pi i)^{-n}\right)^{r}.
\]
Now we consider x_1, …, x_J, y_1, …, y_J as real variables. By (3.5) and (3.7) we have
\[
\overline{I_{n,\sigma}(z)(2\pi i)^{-n}} = I_{n,\sigma}(z)(2\pi i)^{-n},
\]
which implies that I_{n,σ}(z)(2πi)^{−n} is a polynomial in the real variables x_1, …, x_J, y_1, …, y_J with real coefficients. Since I_n(z)(2πi)^{−n} is also a homogeneous polynomial in x_1, …, x_J, y_1, …, y_J of degree n with real coefficients, we obtain by comparing coefficients in (3.14) that b_{k,l} ∈ ℝ, b_{0,0} = 1 and b_{k,l} = 0 for K(k+l) = 1.
It remains to prove the inequality (2.23). Again we consider G(x,y) defined in (2.22) as an analytic function in the complex variables x_1, …, x_J, y_1, …, y_J. Assume that
\[
\sup\{|x_1|,\dots,|x_J|,|y_1|,\dots,|y_J|\} \le \frac{\delta_2}{2\sqrt{J}}.
\]
Then we see that
\[
|I_2(z)| \le \sum_{j_1,j_2=1}^{J}|C_{j_1,j_2}|\,\frac{\delta_2^2}{4J} \le \frac{1}{16}
\]
by (2.17) and (2.20). For n ≥ 3 we have
\[
|I_n(z)| \le \left(\frac{\delta_2\pi C_3}{\sqrt{J}}\right)^{n}\sum_{m\le n/2}\frac{1}{m}\sum_{K(k_1+\cdots+k_m+l_1+\cdots+l_m)=n}\frac{1}{k_1!\cdots k_m!\,l_1!\cdots l_m!}
\le \left(\delta_2\sqrt{J}\,\pi C_3 e\right)^{n} \le (\delta_2 C)^{n} \le 2^{-n}
\]
by (2.18), (2.20), (3.5), (3.13) and Lemma 3.1. Thus,
\[
|G(x,y)| \le \sum_{r=0}^{\infty}\frac{1}{r!}\left(\sum_{n=2}^{\infty}|I_n(z)|\right)^{r} \le \sum_{r=0}^{\infty}\frac{1}{r!}\,2^{-r} = \sqrt{e}.
\]
Let 0 < δ_3/(2π) = δ′_3 < δ_2/(2√J). Since
\[
b_{k,l} = \frac{1}{(2\pi i)^{K(k+l)+2J}}\oint_{|x_1|=\delta_3'}\cdots\oint_{|x_J|=\delta_3'}\oint_{|y_1|=\delta_3'}\cdots\oint_{|y_J|=\delta_3'}\frac{G(x,y)}{x^ky^l}\,\frac{dy_J}{y_J}\cdots\frac{dy_1}{y_1}\,\frac{dx_J}{x_J}\cdots\frac{dx_1}{x_1}
\]
by Cauchy's integral formula, we obtain
\[
|b_{k,l}| \le \frac{\sqrt{e}}{(2\pi\delta_3')^{K(k+l)}} = \frac{\sqrt{e}}{\delta_3^{K(k+l)}}.
\]
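The final bound is a multivariable Cauchy estimate of the form |b_{k,l}| ≤ max|G| / (radius)^{K(k+l)}. A one-variable numerical analogue (not from the paper; G = e^z and r = 1/2 are our own arbitrary choices) recovers Taylor coefficients by integrating over a circle and checks the corresponding bound |c_k| ≤ M/r^k:

```python
import numpy as np
import math

# One-variable Cauchy estimate: if G is analytic and |G| <= M on |z| = r, then
# its Taylor coefficient c_k satisfies |c_k| <= M / r^k.  Illustration with
# G = exp (so c_k = 1/k!); both G and r are arbitrary choices, not from the paper.
r, N = 0.5, 4096
z = r * np.exp(2j * np.pi * np.arange(N) / N)
G = np.exp(z)

# c_k = (2*pi*i)^{-1} \oint G(z) z^{-k-1} dz; sampling the circle turns this
# contour integral into a discrete Fourier transform divided by r^k.
ks = np.arange(8)
coeffs = np.fft.fft(G)[:8] / N / r**ks

M = np.abs(G).max()  # maximum of |G| on the circle (equals e^r for G = exp)
for k in ks:
    assert np.isclose(coeffs[k], 1 / math.factorial(int(k)), rtol=1e-6)
    assert abs(coeffs[k]) <= M / r**k + 1e-12
```

Only low-order coefficients are recovered accurately here, since dividing by r^k amplifies floating-point error for large k.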
Acknowledgements
This work has been supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2019R1F1A1050795).
References
[1] H. Cramér, Random variables and probability distributions, 3rd edition, Cambridge University Press, 1970.
[2] J. Ha and Y. Lee, The a-values of the Riemann zeta function near the critical line, J. Math. Anal. Appl. 464 (2018), 838–863.
[3] D. Hejhal, On Euler products and multi-variate Gaussians, C. R. Acad. Sci. Paris, Ser. I 337 (2003), 223–226.
[4] Y. Lamzouri and Y. Lee, The number of zeros of linear combinations of L-functions near the critical line, to appear in J. Anal. Math. Preprint available at arXiv:2010.10490.
[5] Y. Lee, An asymptotic expansion of Selberg's central limit theorem near the critical line, J. Number Theory 236 (2022), 323–333.
[6] M. Radziwiłł and K. Soundararajan, Selberg's central limit theorem for log |ζ(1/2 + it)|, Enseign. Math. 63 (2017), 1–19.
[7] A. Selberg, Old and new conjectures and results about a class of Dirichlet series, in: E. Bombieri et al. (eds.), Proceedings of the Amalfi Conference on Analytic Number Theory (Maiori, Amalfi, Italy, September 25–29, 1989), Università di Salerno, 1992, 367–385 = Collected Papers, vol. II, 47–63, Springer, 1991.
[8] K. M. Tsang, The distribution of the values of the Riemann zeta-function, Ph.D. thesis, Princeton University, 1984.
Department of Mathematics, Research Institute of Basic Sciences, Incheon National University, 119 Academy-ro, Yeonsu-gu, Incheon, 22012, Korea
Email address: leeyb@inu.ac.kr, leeyb131@gmail.com
arXiv:2301.01473v1 [math.CO] 4 Jan 2023
State Transfer in Complex Quantum Walks
Antonio Acuaviva^1, Ada Chan^2, Summer Eldridge^3, Chris Godsil^4, Matthew How-Chun-Lun^5, Christino Tamon^6, Emily Wright^7, and Xiaohong Zhang^8
^1 Department of Mathematics, Universidad Complutense de Madrid
^2 Department of Mathematics and Statistics, York University
^3 Department of Mathematics, University of Toronto
^4 Department of Combinatorics and Optimization, University of Waterloo
^5 Department of Mathematics, McMaster University
^6 Department of Computer Science, Clarkson University
^7 Department of Mathematics, Queen's University
^8 Centre de recherches mathématiques, Université de Montréal
January 5, 2023
Abstract
Given a graph with Hermitian adjacency matrix H, perfect state transfer occurs from vertex a to vertex b if the (b,a)-entry of the unitary matrix exp(−iHt) has unit magnitude for some time t. This phenomenon is relevant for information transmission in quantum spin networks and is known to be monogamous under real symmetric matrices. We prove the following results:
• For oriented graphs (whose nonzero weights are ±i), the oriented 3-cycle and the oriented edge are the only graphs where perfect state transfer occurs between every pair of vertices. This settles a conjecture of Cameron et al. [1]. On the other hand, we construct an infinite family of oriented graphs with perfect state transfer between any pair of vertices on a subset of size four.
• There are infinite families of Hermitian graphs with one-way perfect state transfer, where perfect state transfer occurs without periodicity. In contrast, perfect state transfer implies periodicity whenever the adjacency matrix has algebraic entries (see Godsil [2]).
• There are infinite families with non-monogamous pretty good state transfer in rooted graph products. In particular, we generalize known results on double stars (due to Fan and Godsil [3]) and on paths with loops (due to Kempton, Lippner and Yau [4]). The latter extends the experimental observation of quantum transport (made by Zimborás et al. [5]) and shows non-monogamous pretty good state transfer can occur amongst distant vertices.
1 Introduction
Given a graph X = (V, E) with adjacency matrix A, a continuous-time quantum walk on X is defined by the time-dependent unitary matrix U(t) = e^{−iAt}. This natural quantum generalization of continuous-time random walks is important for designing quantum algorithms. Childs et al. [6] showed that a continuous-time quantum walk algorithm provides an exponential time speedup for an explicit search problem on graphs. Subsequently, Childs [7] showed that continuous-time quantum walk is a universal model of quantum computation.

Our focus in this paper is motivated by Bose [8], who studied quantum communication via continuous-time quantum walk on graphs. We say that there is pretty good state transfer in a graph X from vertex a to vertex b if for any ε > 0 there is a time t so that ‖U(t)e_a − γe_b‖ ≤ ε, where γ is a phase factor. Here, e_a denotes the unit vector with 1 at position a and 0 elsewhere; similarly for e_b. If ε = 0 is achievable, we say there is perfect state transfer in X from a to b at time t.

Kay [9] proved a monogamy property for perfect state transfer on graphs with real symmetric adjacency matrices: if there is perfect state transfer from a to b and from a to c then b = c. In contrast, Cameron et al. [1] showed that there are oriented graphs (whose adjacency matrices are Hermitian with ±i nonzero entries) where state transfer occurs between every pair of vertices. This latter property is called universal state transfer. Their primary examples are oriented cycles of prime order with universal pretty good state transfer. A notable exception is the oriented 3-cycle, which exhibits universal perfect state transfer.

It was conjectured in [1] that the oriented K_2 and the 3-cycle are the only oriented graphs with universal perfect state transfer. We prove their conjecture in this work. This confirms that universal perfect state transfer is an extremely rare phenomenon in oriented graphs. On the other hand, there are known infinite families of graphs with universal perfect state transfer but with adjacency matrices that are Hermitian matrices with no restriction on the entries (see Connelly et al. [10]). We call these Hermitian graphs.

Godsil and Lato [11] proved a strong characterization of perfect state transfer in oriented graphs and observed that perfect state transfer always implies periodicity (by the Gelfond–Schneider theorem). In fact, Godsil [2] had observed that the latter property holds for any adjacency matrix with algebraic entries. Our next observation shows that the latter assumption is necessary to guarantee periodicity. We construct the first infinite family of Hermitian graphs with one-way perfect state transfer, where perfect state transfer occurs without periodicity. These examples also exhibit a one-time perfect state transfer property, where perfect state transfer occurs at a single unique time and never repeats.

Godsil and Lato [11] also introduced a relaxation of universal perfect state transfer called multiple perfect state transfer. We say a graph X has multiple state transfer on a subset S ⊂ V(X) of vertices, with |S| ≥ 3, if state transfer occurs between every pair of vertices of S. An explicit example of an 8-vertex circulant with multiple perfect state transfer was given in [11], but it was not clear if there are more examples sharing the same properties. We construct the first infinite family of oriented graphs with multiple perfect state transfer (which contains the aforementioned 8-vertex circulant as a special case). This shows that, unlike universal perfect state transfer, multiple perfect state transfer is not an extremely rare phenomenon.

It is known that perfect state transfer is closed under the Cartesian graph product. In this work, under mild assumptions, we show that multiple state transfer is closed under the rooted graph product (see Godsil and McKay [12]). First, we prove a complete characterization of pretty good state transfer on the rooted product of the oriented 3-cycle with stars K_{1,m}. This generalizes a result of Fan and Godsil [3] on the double stars. Next, we consider rooted products with single-looped paths instead of stars. Let X be an n-vertex circulant with universal perfect state transfer and let P_m^γ be an m-vertex path with a self-loop of weight γ at one of its endpoints. We prove that the rooted product X ∘ P_m^γ has multiple pretty good state transfer between every pair of vertices with self-loops provided γ is transcendental. This generalizes a result of Kempton, Lippner and Yau [4] and shows the power of loops to facilitate multiple state transfer among distant vertices. In the special case when X is the oriented 3-cycle, our result strengthens the experimental observations in Zimborás et al. [5] (with the help of self-loops).
+
86
+ 2
87
+ Preliminary
88
+ Given a graph 푋 and an associated Hermitian matrix 퐻, the transition matrix of its continuous-time quantum
89
+ walk is
90
+ 푈(푡) = 푒−i푡퐻.
91
+ We call 푋 a Hermitian graph if we do not assume any additional condition on the entries of 퐻. For the
92
+ special case where 푋 is an oriented graph, we use the Hermitian matrix 퐻 defined as
93
+ 퐻푎,푏 =
94
+
95
+
96
+
97
+ ⎪⎩
98
+ i
99
+ if there is an arc from 푎 to 푏 in 푋,
100
+ −i
101
+ if there is an arc from 푏 to 푎 in 푋, and
102
+ 0
103
+ if there is no arc between 푎 and 푏 in 푋.
104
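As a concrete illustration (not from the paper), the oriented 3-cycle with arcs 0 → 1 → 2 → 0 can be checked numerically: its Hermitian matrix has eigenvalues 0 and ±√3, and at time t = 4π/(3√3) the walk moves vertex 0 to vertex 1 with a unimodular amplitude. A minimal sketch using numpy:

```python
import numpy as np

# Hermitian matrix of the oriented 3-cycle with arcs 0 -> 1 -> 2 -> 0:
# H[a, b] = i for an arc a -> b, and H[b, a] = -i.
H = np.array([[0, 1j, -1j],
              [-1j, 0, 1j],
              [1j, -1j, 0]])

def transition(t):
    # U(t) = exp(-i t H), computed via the spectral decomposition of H.
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * t * w)) @ V.conj().T

# The eigenvalues of H are 0 and +/- sqrt(3); at t = 4*pi/(3*sqrt(3)) the
# (1, 0)-entry of U(t) has magnitude 1, i.e. perfect state transfer 0 -> 1.
t = 4 * np.pi / (3 * np.sqrt(3))
U = transition(t)
print(abs(U[1, 0]))  # magnitude 1 up to rounding
```

Repeating the evolution for time 2t carries vertex 0 on to vertex 2, consistent with universal perfect state transfer on this graph.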
Let θ_1, …, θ_d be the distinct eigenvalues of H. For r = 1, …, d, let E_r denote the orthogonal projection matrix onto the θ_r-eigenspace of H. Then E_rE_s = δ_{r,s}E_r and Σ_r E_r = I. The spectral decomposition H = Σ_r θ_rE_r gives
\[
U(t) = \sum_{r=1}^{d} e^{-it\theta_r}E_r.
\]
Given a unit vector v ∈ ℂ^n, the system with initial state v evolves to U(t)v = Σ_r e^{−itθ_r}E_rv at time t. Therefore a pair (θ_r, E_r) with E_rv = 0 does not influence the state. We define the eigenvalue support of the vector v to be Φ_v = {θ_r : E_rv ≠ 0}. In the case v = e_a for some vertex a, we also call Φ_{e_a} (Φ_a for short) the eigenvalue support of a.
Perfect state transfer from vertex a to vertex b occurs at time τ if
(1)
\[
U(\tau)e_a = \alpha e_b,
\]
for some phase factor α. If a = b then we say the quantum walk is periodic at a.
Multiplying E_r to both sides of Equation (1) gives
(2)
\[
e^{-i\tau\theta_r}E_re_a = \alpha E_re_b.
\]
Hence, for r = 1, …, d, there exists q_r(a,b) ∈ [0, 2π) such that
(3)
\[
E_re_a = e^{iq_r(a,b)}E_re_b.
\]
We say the vertices a and b are strongly cospectral when this condition is satisfied, and call q_r(a,b) the quarrel from a to b relative to the eigenvalue θ_r. Note that strongly cospectral vertices have the same eigenvalue support.
We study perfect state transfer in oriented graphs and in Hermitian graphs in Sections 3 and 4. We give here a characterization of perfect state transfer in Hermitian graphs.
Theorem 2.1. Perfect state transfer occurs from a to b in a Hermitian graph X if and only if
i. a and b are strongly cospectral vertices with quarrels q_r(a,b), for θ_r ∈ Φ_a, and
ii. for θ_r, θ_s, θ_h, θ_ℓ ∈ Φ_a such that h ≠ ℓ, there exist integers m_{r,s} and m_{h,ℓ} satisfying
\[
\frac{\theta_r - \theta_s}{\theta_h - \theta_\ell} = \frac{q_r(a,b) - q_s(a,b) + 2m_{r,s}\pi}{q_h(a,b) - q_\ell(a,b) + 2m_{h,\ell}\pi}.
\]
Proof. From Equation (3), we see that perfect state transfer from a to b implies that they are strongly cospectral.
Suppose a and b are strongly cospectral with quarrels q_r(a,b), for θ_r ∈ Φ_a (= Φ_b). Then Equation (1) holds if and only if, for θ_r, θ_s ∈ Φ_a,
(4)
\[
\alpha = e^{i(q_r(a,b)-\tau\theta_r)} = e^{i(q_s(a,b)-\tau\theta_s)}.
\]
This is equivalent to
\[
e^{i\tau(\theta_r-\theta_s)} = e^{i(q_r(a,b)-q_s(a,b))}
\]
and
\[
\tau(\theta_r-\theta_s) = q_r(a,b) - q_s(a,b) + 2m_{r,s}\pi,
\]
for some integer m_{r,s}. Condition (ii) follows immediately.
We say the ratio condition on Φ_a holds if
(5)
\[
\frac{\theta_r-\theta_s}{\theta_h-\theta_\ell} \in \mathbb{Q}
\]
for θ_r, θ_s, θ_h, θ_ℓ ∈ Φ_a such that h ≠ ℓ.
Theorem 2.2. In a Hermitian graph X, a is periodic if and only if the ratio condition on Φ_a holds.
Proof. Note that q_r(a,a) = 0 for θ_r ∈ Φ_a. The result follows immediately from Theorem 2.1.
In Section 5, we consider a relaxation of perfect state transfer. A graph has pretty good state transfer from a to b if, for any ε > 0, there is a time τ satisfying
(6)
\[
|U(\tau)_{a,b}| \ge 1 - \varepsilon.
\]
Using the proof of Lemma 13.1 in [13], we conclude that if there is pretty good state transfer from a to b then a and b are strongly cospectral. From
\[
U(t)_{a,b} = \sum_{r=1}^{d} e^{-it\theta_r}\,e_a^{T}E_re_b = \sum_{r=1}^{d} e^{i(q_r(a,b)-t\theta_r)}(E_r)_{b,b},
\]
we see that there is pretty good state transfer from a to b if and only if for any ε > 0 there exist τ > 0 and δ_ε ∈ ℝ such that
\[
|\tau\theta_r - q_r(a,b) - \delta_\varepsilon| < \varepsilon \pmod{2\pi}
\]
for θ_r ∈ Φ_a.
Theorem 2.3 (Kronecker [14]). Let θ_1, …, θ_d and q_1, …, q_d be arbitrary real numbers. For any ε > 0, the system of inequalities
\[
|\theta_r\tau - q_r| < \varepsilon \pmod{2\pi}, \qquad r = 1, \dots, d,
\]
admits a solution for τ if and only if, for every set of integers l_1, …, l_d,
\[
l_1\theta_1 + \cdots + l_d\theta_d = 0 \quad\text{implies}\quad l_1q_1 + \cdots + l_dq_d \equiv 0 \pmod{2\pi}.
\]
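The role of Kronecker's theorem can be made concrete with a small numerical example (ours, not the paper's): take θ = (1, √2), which are linearly independent over ℚ, and targets q = (π, π). The Pell convergent 41/29 of √2 supplies an explicit time τ = 29π at which both phases land within about 0.04 radians of their targets:

```python
import numpy as np

# Kronecker: since 1 and sqrt(2) are linearly independent over Q, the pair
# (tau, sqrt(2)*tau) mod 2*pi comes arbitrarily close to any targets (q1, q2).
# The convergent 41/29 of sqrt(2) (with 29 odd) gives a concrete good time:
# tau = 29*pi satisfies tau = pi (mod 2*pi) exactly, while sqrt(2)*tau is
# 41.012...*pi, within about 0.04 radians of pi (mod 2*pi).
theta = np.array([1.0, np.sqrt(2)])
q = np.array([np.pi, np.pi])
tau = 29 * np.pi

# distance of each phase from its target, measured on the circle
err = np.angle(np.exp(1j * (theta * tau - q)))
print(np.abs(err))  # both entries below 0.05
```

Better convergents of √2 would push both phase errors as close to zero as desired, which is exactly the content of the theorem for this pair of frequencies.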
+ transfer from 푎 to 푏 if and only if the following conditions hold.
201
+ i. The vertices 푎 and 푏 are strongly cospectral with quarrels 푞푟(푎, 푏), for 푟 = 1, … , 푑.
202
+ ii. There exists 훿 ∈ R such that, for all integers 푙1, … , 푙푑 satisfying ∑푑
203
+ 푟=1 푙푟휃푟 = 0, we have
204
+
205
+
206
+ 푟=1
207
+ 푙푟
208
+ (
209
+ 푞푟(푎, 푏) + 훿
210
+ )
211
+ = 0
212
+ (mod 2휋).
213
+ (7)
214
+ Proof. The result follows from Proposition 4.01 of [15] and Theorem 2.3.
215
+ Let 푆 be a set of vertices in 푋, we say multiple pretty good state transfer occurs on 푆 if there is pretty
216
+ good state transfer between any two vertices in 푆. Section 5 gives two families of Hermitian graphs that have
217
+ multiple pretty good state transfer.
218
3 Perfect state transfer in oriented graphs
For graphs with real symmetric adjacency matrix, Kay shows that perfect state transfer cannot happen from one vertex to two distinct vertices [9]. This monogamous behaviour does not hold in Hermitian graphs with non-real entries. A graph has multiple perfect state transfer on a set S of at least three vertices if there is perfect state transfer between any two vertices in S. When S = V(X), we say X has universal perfect state transfer. Lemma 22 of [10] gives a construction of Hermitian circulants that admit universal perfect state transfer. The oriented 3-cycle is a special case of this construction. In the same paper, Cameron et al. conjecture that the oriented K_2 and the oriented K_3 are the only oriented graphs that can have universal perfect state transfer. We confirm this conjecture in Section 3.1.
In [11], Godsil and Lato investigated multiple perfect state transfer in oriented graphs where S is a proper subset of V(X). They give an example of an oriented graph on eight vertices that admits multiple perfect state transfer on a set of four vertices. In Section 3.2, we extend their example to an infinite family of oriented graphs that have multiple perfect state transfer.
3.1 Universal perfect state transfer
In [1], Cameron et al. show that the oriented K_2 and K_3 with any orientation admit universal perfect state transfer. They give the following necessary conditions on the Hermitian graphs admitting universal perfect state transfer.
Theorem 3.1. Let H be the matrix associated with a Hermitian graph X that admits universal perfect state transfer. Then the following holds:
1. All eigenvalues of H are simple.
2. If P is a unitary matrix diagonalizing H, then |P_{a,b}| = 1/√n, for a, b ∈ V(X).
3. Every vertex in X is periodic.
Suppose X is an oriented graph on n vertices that has universal perfect state transfer. Let H be its associated Hermitian matrix with spectral decomposition
\[
H = \sum_{r=1}^{n} \theta_r E_r.
\]
Then E_r has rank one with constant diagonal entries n^{−1}. We see that H^2 has constant diagonal entries and the underlying (undirected) graph of X is regular. Further, it follows from Theorem 6.1 of [11] that there exists a positive square-free integer Δ such that θ_r ∈ ℤ√Δ, for r = 1, …, n. Hence
(8)
\[
\min_{r\ne s} |\theta_r - \theta_s| \ge \sqrt{\Delta}.
\]
We show in the following lemmas that an oriented graph with universal perfect state transfer can have at most eleven vertices.
+ most eleven vertices.
266
+ Lemma 3.2. Let 퐻 be a Hermitian matrix of order 푛 with zero diagonal entries. Let 휃1 ≤ 휃2 ≤ ⋯ ≤ 휃푛 be
267
+ the eigenvalues of 퐻. Then
268
+
269
+
270
+ 푟,푠=1
271
+ (
272
+ 휃푟 − 휃푠
273
+ )2 = 2푛 Tr(퐻2).
274
+ Proof. Observe that 휃푟 − 휃푠 is an eigenvalue of (퐻 ⊗ 퐼푛 − 퐼푛 ⊗ 퐻), for 푟, 푠 = 1 … , 푛. Hence
275
+
276
+
277
+ 푟,푠=1
278
+ (
279
+ 휃푟 − 휃푠
280
+ )2 = Tr
281
+ (
282
+ 퐻 ⊗ 퐼푛 − 퐼푛 ⊗ 퐻
283
+ )2 = Tr
284
+ (
285
+ 퐻2 ⊗ 퐼푛 + 퐼푛 ⊗ 퐻2 − 2퐻 ⊗ 퐻
286
+ )
287
+ .
288
+ The result follows from Tr(퐻 ⊗ 퐻) = 0.
289
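Lemma 3.2 is easy to confirm numerically; the sketch below (not from the paper) uses a random Hermitian matrix with zero diagonal:

```python
import numpy as np

# Check Lemma 3.2: for a Hermitian H with zero diagonal (so Tr(H) = 0),
#   sum_{r,s} (theta_r - theta_s)^2 = 2 n Tr(H^2).
rng = np.random.default_rng(0)
n = 7
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = M + M.conj().T          # Hermitian
np.fill_diagonal(H, 0)      # zero diagonal, as in the lemma

theta = np.linalg.eigvalsh(H)
lhs = sum((a - b) ** 2 for a in theta for b in theta)
rhs = 2 * n * np.trace(H @ H).real
print(lhs, rhs)  # equal up to rounding
```

The zero diagonal matters: it forces Tr(H) = Σ θ_r = 0, which is what collapses the cross term in the proof.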
Lemma 3.3. Let X be an oriented graph on n vertices and m edges with eigenvalues θ_1 < ⋯ < θ_n. Let σ = min_{r≠s}|θ_r − θ_s|. Then
\[
\sigma^2\,\frac{n(n^2-1)}{24} \le m \qquad\text{and}\qquad \sigma^2 \le \frac{12}{n+1}.
\]
Proof. It follows from the definition of σ that σ|r − s| ≤ |θ_r − θ_s|, and
\[
\sigma^2\sum_{r,s=1}^{n}(r-s)^2 \le \sum_{r,s=1}^{n}\left(\theta_r-\theta_s\right)^2.
\]
The lower bound is
\[
\sigma^2\sum_{r,s=1}^{n}(r-s)^2 = \sigma^2\left(2n\sum_{r=1}^{n}r^2 - 2\left(\sum_{r=1}^{n}r\right)^2\right) = \sigma^2\,\frac{n^2(n^2-1)}{6}.
\]
Applying Lemma 3.2 gives
\[
\sigma^2\,\frac{n^2(n^2-1)}{6} \le 2n\operatorname{Tr}(H^2) = 4mn.
\]
The second inequality in the lemma follows immediately from m ≤ \binom{n}{2}.
+ Corollary 3.4. Let 푋 be an oriented graph on 푛 vertices. If 푋 admits universal perfect state transfer then
345
+ 푛 ≤ 11. Further, if 푛 ≥ 6 then 푋 has integral eigenvalues.
346
+ Proof. It follows from Equation (8) that 휎2 ≥ Δ ≥ 1. The second inequality of Lemma 3.3 gives 푛 ≤ 11.
347
+ When 푛 ≥ 6, we have 휎2 < 2 which implies Δ = 1 and the eigenvalues of 푋 are integers.
348
+ We are ready to rule out universal perfect state transfer in oriented graphs on more than three vertices.
349
Theorem 3.5. The oriented $K_2$ and $K_3$ are the only oriented graphs admitting universal perfect state transfer.
Proof. Suppose $X$ is an oriented graph on $n$ vertices that admits universal perfect state transfer. Then the underlying graph of $X$ is $k$-regular, for some integer $k$.
Let $\theta_1 < \cdots < \theta_n$ be the eigenvalues of the Hermitian matrix $H$ associated with $X$. Then $\theta_r \in \mathbb{Z}\sqrt{\Delta}$, for some positive square-free integer $\Delta$. Since $\mathrm{i}H$ is a skew-symmetric matrix with entries $\pm 1$, we have
\[ \theta_r = -\theta_{n+1-r} \qquad \text{for } r = 1, \ldots, n. \tag{9} \]
Further, the characteristic polynomial of $\mathrm{i}H$ is equal to the characteristic polynomial of its underlying graph over $\mathbb{Z}_2$.
When $n = 4$ or $5$, $C_n$ and $K_n$ are the only regular graphs on $n$ vertices. An exhaustive search rules out oriented graphs on $4$ or $5$ vertices with spectrum satisfying the above conditions.
For $n \geq 6$, it follows from Lemma 3.3 and Corollary 3.4 that $\sigma = \min_{r \neq s} |\theta_r - \theta_s| = 1$ and
\[ \frac{n^2-1}{12} \leq k \leq n-1. \]
Using this inequality together with the fact that $k$ is even when $n$ is odd, we narrow down to the following possibilities.

    n : 6        7      8      9    10    11
    k : 3, 4, 5  4, 6   6, 7   8    9     10

Applying Equation (9) to $\operatorname{Tr}(H^2)$ yields
\[ nk = 2 \sum_{r=1}^{\lfloor (n+1)/2 \rfloor} \theta_r^2. \]
Direct computation returns integral solutions to this equation for only three cases:

    n    k    underlying graph                        possible spectrum of iH
    11   10   $K_{11}$                                0, ±i, ±2i, ±3i, ±4i, ±5i
    7    6    $K_7$                                   0, ±i, ±2i, ±4i
    7    4    $\overline{C_7}$ (complement of $C_7$)  0, ±i, ±2i, ±3i

It is straightforward to check that, in each case, the characteristic polynomial of the underlying graph is not equal over $\mathbb{Z}_2$ to the polynomial whose roots are listed in the table. We conclude that there is no oriented graph on $n \geq 4$ vertices admitting universal perfect state transfer.
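The final comparison over $\mathbb{Z}_2$ can be checked mechanically. The sketch below (an illustration, not part of the paper) carries out the $K_7$ row of the table: the characteristic polynomial of $K_7$ is $(t-6)(t+1)^6$, and the candidate polynomial with roots $0, \pm\mathrm{i}, \pm 2\mathrm{i}, \pm 4\mathrm{i}$ is $t(t^2+1)(t^2+4)(t^2+16)$; the two disagree mod 2.

```python
def poly_mul(p, q, mod=2):
    """Multiply polynomials given as coefficient lists (lowest degree first), mod 2."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] = (out[i + j] + a * b) % mod
    return out

def poly_pow(p, k):
    out = [1]
    for _ in range(k):
        out = poly_mul(out, p)
    return out

# Characteristic polynomial of K_7: (t - 6)(t + 1)^6, and (t - 6) ≡ t (mod 2).
char_K7 = poly_mul([0, 1], poly_pow([1, 1], 6))

# Polynomial with roots 0, ±i, ±2i, ±4i: t(t^2 + 1)(t^2 + 4)(t^2 + 16), reduced mod 2.
cand = [0, 1]
for c in (1, 4, 16):
    cand = poly_mul(cand, [c % 2, 0, 1])

print(char_K7)  # t + t^3 + t^5 + t^7 (mod 2)
print(cand)     # t^5 + t^7 (mod 2)
print(char_K7 == cand)
```

Since the two coefficient lists differ, $K_7$ cannot carry an orientation with that spectrum, matching the conclusion of the proof.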
3.2 Multiple perfect state transfer

In [11], Godsil and Lato relax the notion of universal perfect state transfer to multiple perfect state transfer on a subset of vertices in oriented graphs. Let
\[ H_{\vec{C}_4} = \begin{bmatrix} 0 & -\mathrm{i} & 0 & \mathrm{i} \\ \mathrm{i} & 0 & -\mathrm{i} & 0 \\ 0 & \mathrm{i} & 0 & -\mathrm{i} \\ -\mathrm{i} & 0 & \mathrm{i} & 0 \end{bmatrix} \]
be the Hermitian matrix of the directed 4-cycle. They show that the oriented graph with Hermitian matrix
\[ \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \otimes H_{\vec{C}_4} + \begin{bmatrix} 0 & \mathrm{i} \\ -\mathrm{i} & 0 \end{bmatrix} \otimes J_4 \]
has multiple perfect state transfer on a set of four vertices.

Making use of the following technical lemma from [16], we extend the above example to an infinite family of oriented graphs in which multiple perfect state transfer occurs.

Lemma 3.6. Let $A$ and $B$ be Hermitian matrices, where $A$ has spectral decomposition $A = \sum_r \theta_r E_r$. Then
\[ e^{-\mathrm{i}t(A \otimes B)} = \sum_r E_r \otimes e^{-\mathrm{i}t\theta_r B}. \]

Lemma 3.7. Suppose $X$ is an oriented graph on $n$ vertices with associated Hermitian matrix $H_X$, whose eigenvalues are odd integers. Let $Y$ be the oriented graph with Hermitian matrix
\[ H_Y = I_n \otimes H_{\vec{C}_4} + H_X \otimes J_4. \]
Then $Y$ admits multiple perfect state transfer on the set $\{4h+1, 4h+2, 4h+3, 4h+4\}$, for $h = 0, 1, \ldots, n-1$.
Proof. Let $H_X = \sum_r \theta_r E_r$ be the spectral decomposition of $H_X$. Since $I_n \otimes H_{\vec{C}_4}$ and $H_X \otimes J_4$ commute, applying Lemma 3.6 gives
\[ e^{-\mathrm{i}tH_Y} = \left( I_n \otimes e^{-\mathrm{i}tH_{\vec{C}_4}} \right) \left( \sum_r E_r \otimes e^{-\mathrm{i}t\theta_r J_4} \right) = \sum_r E_r \otimes e^{-\mathrm{i}t\left( H_{\vec{C}_4} + \theta_r J_4 \right)}. \]
For an odd integer $\theta_r$, we have
\[ e^{-\mathrm{i}\frac{\pi}{4}\left( H_{\vec{C}_4} + \theta_r J_4 \right)} = \begin{bmatrix} 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 \end{bmatrix}. \]
Hence
\[ e^{-\mathrm{i}\frac{\pi}{4} H_Y} = I_n \otimes \begin{bmatrix} 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 \end{bmatrix}, \]
and, for $h = 0, 1, \ldots, n-1$, the vertex $4h+1$ has perfect state transfer to $4h+4$, $4h+3$ and $4h+2$ at time $\frac{\pi}{4}$, $\frac{\pi}{2}$ and $\frac{3\pi}{4}$, respectively.
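The matrix identity at the heart of the proof can be verified numerically. The sketch below (an illustration, not part of the paper) takes $\theta_r = 1$ and evaluates $e^{-\mathrm{i}\frac{\pi}{4}(H_{\vec{C}_4} + \theta_r J_4)}$ by a truncated Taylor series, then compares it with the displayed signed permutation matrix.

```python
import cmath

H_C4 = [[0, -1j, 0, 1j],
        [1j, 0, -1j, 0],
        [0, 1j, 0, -1j],
        [-1j, 0, 1j, 0]]
J4 = [[1] * 4 for _ in range(4)]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def expm(A, terms=80):
    """exp(A) via Taylor series; adequate for small matrices of modest norm."""
    n = len(A)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in mat_mul(term, A)]
        result = [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(result, term)]
    return result

theta = 1  # any odd integer works, by the lemma
M = [[-1j * (cmath.pi / 4) * (H_C4[i][j] + theta * J4[i][j]) for j in range(4)]
     for i in range(4)]
U = expm(M)

expected = [[0, -1, 0, 0],
            [0, 0, -1, 0],
            [0, 0, 0, -1],
            [-1, 0, 0, 0]]
err = max(abs(U[i][j] - expected[i][j]) for i in range(4) for j in range(4))
print(err < 1e-9)
```

Reading off the columns of the resulting matrix reproduces the transfer pattern in the proof: vertex 1 is carried to vertex 4 (phase $-1$) at time $\pi/4$, and iterating gives vertices 3 and 2 at times $\pi/2$ and $3\pi/4$.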
If $X$ is obtained by orienting all edges of the $(2m+1)$-cube from one part of its bipartition to the other, then its associated matrix has the form
\[ H_X = \begin{bmatrix} 0 & \mathrm{i}B \\ -\mathrm{i}B^T & 0 \end{bmatrix}. \]
Then $H_X$ has the same spectrum as the adjacency matrix of the (undirected) $(2m+1)$-cube, which consists of only odd integers. Lemma 3.7 therefore gives an oriented graph admitting multiple perfect state transfer for every integer $m \geq 0$. When $m = 0$, $Y$ is the oriented graph given in [11].
4 Perfect state transfer in Hermitian graphs

We focus on Hermitian graphs with algebraic entries in the first part of this section. In particular, we study the phase factors arising when perfect state transfer occurs in these graphs in Section 4.1.
Suppose $X$ is a Hermitian graph with algebraic entries. By Theorem 6.1 of [2] and Theorem 2.2, if perfect state transfer from $a$ to $b$ occurs, then the quantum walk on $X$ is periodic at both $a$ and $b$. Section 4.2 gives examples of Hermitian graphs (with transcendental entries) in which perfect state transfer occurs from $a$ to $b$ but neither $a$ nor $b$ is periodic.

4.1 Phase factor

We restrict our attention to Hermitian graphs with algebraic entries and extract information about the phase factor when perfect state transfer occurs.
Let $H$ be an algebraic Hermitian matrix. Its characteristic polynomial has algebraic coefficients. Given the spectral decomposition $H = \sum_r \theta_r E_r$, the eigenvalues $\theta_r$ are algebraic, and so are the entries of each $E_r$.

Theorem 4.1. Let $H$ be an algebraic matrix associated with a Hermitian graph, with spectral decomposition $H = \sum_r \theta_r E_r$. If perfect state transfer occurs from $a$ to $b$ with phase factor $\alpha$, then $\alpha$ is algebraic if and only if
\[ \frac{\theta_r}{\theta_s} \in \mathbb{Q}, \]
for all $\theta_r, \theta_s \in \Phi_a$ such that $\theta_s \neq 0$.
Proof. Suppose perfect state transfer occurs from $a$ to $b$ at time $\tau$ with algebraic phase factor $\alpha$. It follows from Equation (2) that $e^{-\mathrm{i}\tau\theta_r}$ is algebraic, for $\theta_r \in \Phi_a = \Phi_b$. Applying the Gelfond-Schneider theorem to
\[ \left( e^{-\mathrm{i}\tau\theta_s} \right)^{\theta_r/\theta_s} = e^{-\mathrm{i}\tau\theta_r}, \]
for $\theta_r, \theta_s \in \Phi_a$ with $\theta_s \neq 0$, we conclude that $\theta_r/\theta_s$ is rational.
Now suppose $\theta_r/\theta_s \in \mathbb{Q}$ for all $\theta_r, \theta_s \in \Phi_a$ with $\theta_s \neq 0$. Let $q_r(a,b)$ be the quarrel from $a$ to $b$ relative to $\theta_r \in \Phi_a$. It follows from Equation (3) that $e^{\mathrm{i}q_r(a,b)}$ is algebraic. Applying Equation (4) yields
\[ \alpha^{\frac{\theta_r}{\theta_s}-1} = \left( e^{\mathrm{i}\left( q_s(a,b) - \tau\theta_s \right)} \right)^{\theta_r/\theta_s} e^{\mathrm{i}\left( \tau\theta_r - q_r(a,b) \right)} = \left( e^{\mathrm{i}q_s(a,b)} \right)^{\theta_r/\theta_s} e^{-\mathrm{i}q_r(a,b)}. \]
The right-hand side is algebraic, and hence so is $\alpha$.
Theorem 4.2. Let $H$ be an algebraic matrix associated with a Hermitian graph, with spectral decomposition $H = \sum_r \theta_r E_r$. Suppose perfect state transfer occurs from $a$ to $b$ with phase factor $\alpha$. If there exist integers $k_r$ satisfying
\[ \sum_{r \in \Phi_a} k_r \theta_r = 0 \qquad \text{and} \qquad \sum_{r \in \Phi_a} k_r \neq 0, \]
then $\alpha$ is algebraic.
Proof. From Equation (4), we have
\[ \alpha^{\sum_{r \in \Phi_a} k_r} = e^{-\mathrm{i}\tau \left( \sum_{r \in \Phi_a} k_r \theta_r \right)} \prod_{r \in \Phi_a} \left( e^{\mathrm{i}q_r(a,b)} \right)^{k_r} = \prod_{r \in \Phi_a} \left( e^{\mathrm{i}q_r(a,b)} \right)^{k_r}. \]
Since the right-hand side is algebraic and $\sum_{r \in \Phi_a} k_r \neq 0$, we conclude that $\alpha$ is algebraic.

We apply the theorem to algebraic Hermitian graphs where $\Phi_a$ contains all eigenvalues of $H$.

Corollary 4.3. Let $H$ be an algebraic matrix associated with a Hermitian graph with zero diagonal entries. Suppose perfect state transfer occurs from $a$ to $b$ with phase factor $\alpha$. If $a$ has full eigenvalue support, then $\alpha$ is algebraic.
Proof. Let $k_r$ be the multiplicity of $\theta_r$, for $\theta_r \in \Phi_a$. Since $\Phi_a$ contains all eigenvalues of $H$, we have $\sum_{r \in \Phi_a} k_r \theta_r = \operatorname{Tr}(H) = 0$, and $\sum_{r \in \Phi_a} k_r$ equals the number of vertices. It follows from Theorem 4.2 that the phase factor at perfect state transfer is algebraic.

Given the spectral decomposition $H = \sum_r \theta_r E_r$ of an algebraic Hermitian matrix, if each $E_r$ has constant diagonal then every vertex has full eigenvalue support. In particular, Corollary 4.3 applies to
• the adjacency matrix of a walk-regular graph,
• an algebraic Hermitian matrix with zero diagonal that belongs to a Bose-Mesner algebra, and
• Hermitian circulants with algebraic entries and zero diagonal.
4.2 One-way perfect state transfer

We saw at the beginning of Section 4 that if perfect state transfer occurs from $a$ to $b$ in an algebraic Hermitian graph, then both $a$ and $b$ are periodic. In particular, there is perfect state transfer from $b$ back to $a$.
We now give a family of Hermitian graphs, with transcendental entries, that admit perfect state transfer from $a$ to $b$ but are periodic at neither $a$ nor $b$. In particular, they do not have perfect state transfer from $b$ to $a$.

Theorem 4.4. There exist infinitely many Hermitian graphs which admit perfect state transfer from $a$ to $b$ but are not periodic at $a$.
Proof. Let $\lambda$ be any real number that is not a rational multiple of $\pi$. Define the matrices
\[ P = \frac{1}{2} \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & e^{\mathrm{i}\lambda} & -e^{\mathrm{i}\lambda} \\ 1 & -1 & -e^{\mathrm{i}\lambda} & e^{\mathrm{i}\lambda} \end{bmatrix} \qquad \text{and} \qquad D = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & \pi & 0 & 0 \\ 0 & 0 & \lambda & 0 \\ 0 & 0 & 0 & \lambda + \pi \end{bmatrix}. \]
Consider the Hermitian matrix
\[ H := P D P^{-1} = \left( \frac{\pi + \lambda}{2} \right) I_4 - \begin{bmatrix} 0 & \frac{\lambda}{2} & \frac{\pi}{4}\left( 1 + e^{-\mathrm{i}\lambda} \right) & \frac{\pi}{4}\left( 1 - e^{-\mathrm{i}\lambda} \right) \\ \frac{\lambda}{2} & 0 & \frac{\pi}{4}\left( 1 - e^{-\mathrm{i}\lambda} \right) & \frac{\pi}{4}\left( 1 + e^{-\mathrm{i}\lambda} \right) \\ \frac{\pi}{4}\left( 1 + e^{\mathrm{i}\lambda} \right) & \frac{\pi}{4}\left( 1 - e^{\mathrm{i}\lambda} \right) & 0 & \frac{\lambda}{2} \\ \frac{\pi}{4}\left( 1 - e^{\mathrm{i}\lambda} \right) & \frac{\pi}{4}\left( 1 + e^{\mathrm{i}\lambda} \right) & \frac{\lambda}{2} & 0 \end{bmatrix}. \]
Let $\theta_1 = 0$, $\theta_2 = \pi$, $\theta_3 = \lambda$ and $\theta_4 = \lambda + \pi$. All vertices have full eigenvalue support. Vertices 1 and 3 are strongly cospectral with quarrels $q_1(3,1) = 0$, $q_2(3,1) = \pi$, $q_3(3,1) = \lambda$, and $q_4(3,1) = \lambda + \pi$. By Theorem 2.1, we have perfect state transfer from vertex 3 to 1 at time $\tau = 1$ with phase factor 1. As $\lambda$ is not a rational multiple of $\pi$, we have
\[ \frac{\theta_3 - \theta_1}{\theta_2 - \theta_1} = \frac{\lambda}{\pi} \notin \mathbb{Q}. \]
By Theorem 2.2, $H$ is not periodic at vertex 1 nor at vertex 3.
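The one-way behaviour can be checked numerically. The sketch below (an illustration, not part of the paper) builds $U(t) = P e^{-\mathrm{i}tD} P^{*}$ for a sample value of $\lambda$ and confirms that at $\tau = 1$ the walk carries one of the vertices 1, 3 perfectly onto the other (which direction this is called depends on the convention for $U(t)$), while the fidelity in the reverse direction at the same time stays strictly below 1.

```python
import cmath

lam = 1.0  # a sample value; any real number not a rational multiple of pi works

e = cmath.exp(1j * lam)
P = [[0.5 * z for z in row] for row in [
    [1, 1, 1, 1],
    [1, 1, -1, -1],
    [1, -1, e, -e],
    [1, -1, -e, e],
]]
theta = [0.0, cmath.pi, lam, lam + cmath.pi]  # diagonal of D

def transition(t):
    """U(t) = P exp(-itD) P* for the Hermitian matrix H = P D P*."""
    n = 4
    phases = [cmath.exp(-1j * t * th) for th in theta]
    return [[sum(P[a][r] * phases[r] * P[b][r].conjugate() for r in range(n))
             for b in range(n)] for a in range(n)]

U = transition(1.0)
forward = abs(U[2][0])   # fidelity between vertices 1 and 3 in one direction at t = 1
backward = abs(U[0][2])  # fidelity in the opposite direction at the same time
print(round(forward, 6), round(backward, 6))
```

The perfect transfer at $\tau = 1$ shows up as fidelity 1, while the reverse fidelity equals $|{\cos \lambda}| < 1$ whenever $\lambda \notin \mathbb{Z}\pi$, consistent with the non-periodicity argument above.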
Example 4.5. Consider the complex Hadamard matrix
\[ P = \frac{1}{2\sqrt{2}} \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & -1 & e^{\mathrm{i}\theta} & -e^{\mathrm{i}\theta} & \mathrm{i} & -\mathrm{i} & \mathrm{i}e^{\mathrm{i}\theta} & -\mathrm{i}e^{\mathrm{i}\theta} \\ 1 & 1 & e^{\mathrm{i}2\theta} & e^{\mathrm{i}2\theta} & -1 & -1 & -e^{\mathrm{i}2\theta} & -e^{\mathrm{i}2\theta} \\ 1 & -1 & e^{\mathrm{i}3\theta} & -e^{\mathrm{i}3\theta} & -\mathrm{i} & \mathrm{i} & -\mathrm{i}e^{\mathrm{i}3\theta} & \mathrm{i}e^{\mathrm{i}3\theta} \\ 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 \\ 1 & -1 & -e^{\mathrm{i}\theta} & e^{\mathrm{i}\theta} & \mathrm{i} & -\mathrm{i} & -\mathrm{i}e^{\mathrm{i}\theta} & \mathrm{i}e^{\mathrm{i}\theta} \\ 1 & 1 & -e^{\mathrm{i}2\theta} & -e^{\mathrm{i}2\theta} & -1 & -1 & e^{\mathrm{i}2\theta} & e^{\mathrm{i}2\theta} \\ 1 & -1 & -e^{\mathrm{i}3\theta} & e^{\mathrm{i}3\theta} & -\mathrm{i} & \mathrm{i} & \mathrm{i}e^{\mathrm{i}3\theta} & -\mathrm{i}e^{\mathrm{i}3\theta} \end{bmatrix} \]
and the diagonal matrix $D = \operatorname{diag}\left( 0, \pi, \theta, \theta + \pi, \frac{\pi}{2}, \frac{3\pi}{2}, \theta + \frac{\pi}{2}, \theta + \frac{3\pi}{2} \right)$. Then the Hermitian graph $X$ with matrix $H = P D P^{-1}$ admits perfect state transfer from vertex 1 to 2 at $t = 1$, from vertex 1 to 3 at $t = 2$, and from vertex 1 to 4 at $t = 3$. Each vertex has full eigenvalue support and, if $\theta \notin \mathbb{Q}\pi$, then the ratio condition is not satisfied and $X$ is not periodic at any vertex.
5 Multiple pretty good state transfer

Theorem 4.4 shows that it is possible to have one-way perfect state transfer in a Hermitian graph. We now show that pretty good state transfer in Hermitian graphs goes both ways.

Lemma 5.1. If a Hermitian graph admits pretty good state transfer from $a$ to $b$, then it has pretty good state transfer from $b$ to $a$.
Proof. Suppose $U(t)$ is the transition matrix of a Hermitian graph that has pretty good state transfer from $a$ to $b$. Then, for every $\varepsilon > 0$, there exists a time $\tau_1$ such that $U(\tau_1)e_a = \gamma_1 e_b + \rho_1$, for some phase factor $\gamma_1$ and some vector $\rho_1$ with $\|\rho_1\| < \frac{\varepsilon}{2}$.
As $U(t)$ is almost periodic, there exists $\tau_2 > \tau_1$ such that $U(\tau_2)e_a = \gamma_2 e_a + \rho_2$, for some phase factor $\gamma_2$ and some vector $\rho_2$ with $\|\rho_2\| < \frac{\varepsilon}{2}$. We have
\[ U(\tau_2 - \tau_1)e_b = \overline{\gamma_1}\, U(\tau_2)\left( e_a - U(-\tau_1)\rho_1 \right) = \overline{\gamma_1}\left( \gamma_2 e_a + \rho_2 - U(\tau_2 - \tau_1)\rho_1 \right). \]
Hence
\[ \left\| U(\tau_2 - \tau_1)e_b - \overline{\gamma_1}\gamma_2 e_a \right\| = \left\| \rho_2 - U(\tau_2 - \tau_1)\rho_1 \right\| \leq \|\rho_1\| + \|\rho_2\| < \varepsilon, \]
and there is pretty good state transfer from $b$ to $a$.
In [5], Zimborás et al. assign a complex weight $e^{\mathrm{i}\beta}$ to an edge of the following graph and use the weight to control the fidelity at $b$ and $c$ for walks with initial state $e_a$.

[Figure: a triangle on $a$, $b$, $c$ with one edge of weight $e^{\mathrm{i}\beta}$, and a path attached at each vertex.]

This graph can be viewed as the rooted product of the weighted $K_3$ with a path. Given a graph $X$ on $n$ vertices and a rooted graph $Y$ with root $a$, the rooted product of $X$ and $Y$, denoted $X \circ Y$, is obtained by taking $n$ isomorphic copies of $Y$ and identifying the $j$-th vertex of $X$ with the root of the $j$-th copy of $Y$. In this section, we give two families of rooted products that have multiple pretty good state transfer.

5.1 Oriented 3-cycle rooted with a star

In [3], Fan and Godsil show that the double star, the rooted product of $K_2$ and $K_{1,m}$, has pretty good state transfer between the two non-pendant vertices if and only if $4m+1$ is not a perfect square. Note that $K_2$ is the only simple undirected graph with universal perfect state transfer. We extend their result to the rooted product of the oriented 3-cycle $\vec{K}_3$ with $\hat{K}_{1,m}$, where $\hat{K}_{1,m}$ denotes the star $K_{1,m}$ with the non-pendant vertex as its root.

[Figure: the rooted product $\vec{K}_3 \circ \hat{K}_{1,m}$.]
Lemma 5.2. Suppose $a$ and $b$ are strongly cospectral vertices in the Hermitian graph $X$ on $n \geq 2$ vertices. Then they are strongly cospectral in the rooted product $X \circ \hat{K}_{1,m}$.
Proof. Let $H_X$ be the Hermitian matrix associated with $X$, with spectral decomposition $H_X = \sum_{r=1}^{d} \theta_r E_r$. Then the matrix associated with the rooted product $Y = X \circ \hat{K}_{1,m}$ is
\[ H_Y = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix} \otimes H_X + \begin{bmatrix} 0 & 1 & \cdots & 1 \\ 1 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 0 & \cdots & 0 \end{bmatrix} \otimes I_n, \]
where both matrices on the right have order $m+1$ and are indexed with the root first. For $r = 1, \ldots, d$, define
\[ \lambda_r^{\pm} = \frac{\theta_r \pm \sqrt{\theta_r^2 + 4m}}{2} \]
and
\[ F_r^{\pm} = \frac{1}{\left( \lambda_r^{\pm} \right)^2 + m} \begin{bmatrix} \left( \lambda_r^{\pm} \right)^2 & \lambda_r^{\pm} & \cdots & \lambda_r^{\pm} \\ \lambda_r^{\pm} & 1 & \cdots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ \lambda_r^{\pm} & 1 & \cdots & 1 \end{bmatrix} \otimes E_r. \]
Define
\[ F_0 = \begin{bmatrix} 0 & \mathbf{0}_m^T \\ \mathbf{0}_m & I_m - \frac{1}{m} J_m \end{bmatrix} \otimes I_n. \]
Then $H_Y$ has spectral decomposition
\[ H_Y = 0 \cdot F_0 + \sum_{r=1}^{d} \left( \lambda_r^{+} \cdot F_r^{+} + \lambda_r^{-} \cdot F_r^{-} \right). \tag{10} \]
Note that the $(1,1)$-blocks are indexed by the vertices of $X$, and the eigenvalue $0$ is not in the support of $a$ or $b$. The result follows from the $(1,1)$-blocks of $F_r^{+}$ and $F_r^{-}$ being non-zero scalar multiples of $E_r$.
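The eigenvalues $\lambda_r^{\pm}$ in the proof can be confirmed by a short computation, spelled out here for convenience (not in the original text). Apply $H_Y$ to $u \otimes E_r x$, where $u = (\lambda, 1, \ldots, 1)^T$ and $x$ is any eigenvector of $H_X$ for $\theta_r$: the root coordinate gives $(\lambda \theta_r + m) E_r x$ and each leaf coordinate gives $\lambda E_r x$, so $u \otimes E_r x$ is an eigenvector for $\lambda$ exactly when

```latex
\lambda\,\theta_r + m = \lambda^2
\quad\Longleftrightarrow\quad
\lambda = \frac{\theta_r \pm \sqrt{\theta_r^{\,2} + 4m}}{2} = \lambda_r^{\pm}.
```

Normalizing $u \otimes E_r x$ then yields exactly the projections $F_r^{\pm}$ displayed above.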
Corollary 5.3. Suppose $X$ is a Hermitian graph with universal perfect state transfer and spectrum $\Phi$. Let $S$ be the set of non-pendant vertices of $X \circ \hat{K}_{1,m}$, and let
\[ \Psi = \left\{ \frac{\theta \pm \sqrt{\theta^2 + 4m}}{2} \;\middle|\; \theta \in \Phi \right\}. \]
If $\Psi$ is linearly independent over $\mathbb{Q}$, then $X \circ \hat{K}_{1,m}$ has multiple pretty good state transfer on $S$.
Proof. For $a, b \in S$, there is perfect state transfer between $a$ and $b$ in $X$, so $a$ and $b$ are strongly cospectral in $X \circ \hat{K}_{1,m}$ by Lemma 5.2. We see in Equation (10) that $\Psi$ is the eigenvalue support of $a$ in the rooted product. It follows from Theorem 2.4 that pretty good state transfer occurs between $a$ and $b$ in $X \circ \hat{K}_{1,m}$.

In the following result, we focus on $X = \vec{K}_3$, which has spectral decomposition
\[ \begin{bmatrix} 0 & -\mathrm{i} & \mathrm{i} \\ \mathrm{i} & 0 & -\mathrm{i} \\ -\mathrm{i} & \mathrm{i} & 0 \end{bmatrix} = 0 \cdot \frac{1}{3} J_3 + \sqrt{3} \cdot \frac{1}{3} \begin{bmatrix} 1 & e^{-2\pi\mathrm{i}/3} & e^{2\pi\mathrm{i}/3} \\ e^{2\pi\mathrm{i}/3} & 1 & e^{-2\pi\mathrm{i}/3} \\ e^{-2\pi\mathrm{i}/3} & e^{2\pi\mathrm{i}/3} & 1 \end{bmatrix} - \sqrt{3} \cdot \frac{1}{3} \begin{bmatrix} 1 & e^{2\pi\mathrm{i}/3} & e^{-2\pi\mathrm{i}/3} \\ e^{-2\pi\mathrm{i}/3} & 1 & e^{2\pi\mathrm{i}/3} \\ e^{2\pi\mathrm{i}/3} & e^{-2\pi\mathrm{i}/3} & 1 \end{bmatrix}. \]
Hence any two vertices of $\vec{K}_3$ are strongly cospectral. Let $V(\vec{K}_3) = \{a, b, c\}$. Then the eigenvalue support of $a$ in $\vec{K}_3 \circ \hat{K}_{1,m}$ consists of $\lambda_1 = \sqrt{m}$, $\lambda_2 = -\sqrt{m}$,
\[ \lambda_3 = \frac{\sqrt{3} + \sqrt{3 + 4m}}{2}, \qquad \lambda_4 = \frac{\sqrt{3} - \sqrt{3 + 4m}}{2}, \qquad \lambda_5 = \frac{-\sqrt{3} + \sqrt{3 + 4m}}{2} \qquad \text{and} \qquad \lambda_6 = \frac{-\sqrt{3} - \sqrt{3 + 4m}}{2}. \]
From Equation (10), the quarrels in $\vec{K}_3 \circ \hat{K}_{1,m}$ are
\[ q_r(a,b) = \begin{cases} 0 & \text{if } r = 1, 2, \\ \frac{2\pi}{3} & \text{if } r = 3, 4, \text{ and} \\ -\frac{2\pi}{3} & \text{if } r = 5, 6. \end{cases} \]
Theorem 5.4. The rooted product $\vec{K}_3 \circ \hat{K}_{1,m}$ admits multiple pretty good state transfer on the set $\{a, b, c\}$ of non-pendant vertices if and only if one of the following holds.
1. $\gcd(3, m) = 1$.
2. $m = 3s$, for some integer $s$ such that neither $s$ nor $4s+1$ is a perfect square.
3. $m = 27k^2$, for some integer $k$.
4. $m = 27k^2 + 27k + 6$, for some integer $k$.

Proof. Since $\vec{K}_3 \circ \hat{K}_{1,m}$ has an automorphism that maps $a$ to $b$, $b$ to $c$ and $c$ to $a$, it is sufficient to prove that there is pretty good state transfer from $a$ to $b$ in the rooted product.
By Lemma 5.2, Condition (i) of Theorem 2.4 holds. For Condition (ii) of Theorem 2.4, we consider integers $l_1, \ldots, l_6$ satisfying
\[ \sum_{r=1}^{6} l_r \lambda_r = \left( l_1 - l_2 \right) \sqrt{m} + \left( \frac{l_3 + l_4 - l_5 - l_6}{2} \right) \sqrt{3} + \left( \frac{l_3 - l_4 + l_5 - l_6}{2} \right) \sqrt{3 + 4m} = 0. \tag{11} \]

Case 1: If $\gcd(3, m) = 1$, then the set $\{\sqrt{3}, \sqrt{m}, \sqrt{3 + 4m}\}$ is linearly independent over $\mathbb{Q}$. Equation (11) implies $(l_3 + l_4 - l_5 - l_6)/2 = 0$ and
\[ \sum_{r=1}^{6} l_r q_r(a,b) = \left( l_3 + l_4 - l_5 - l_6 \right) \frac{2\pi}{3} = 0 \pmod{2\pi}. \tag{12} \]
Condition (ii) of Theorem 2.4 holds with $\delta = 0$, so there is pretty good state transfer from $a$ to $b$ in $\vec{K}_3 \circ \hat{K}_{1,m}$.

Case 2: When $m = 3s$, Equation (11) becomes
\[ \left( l_1 - l_2 \right) \sqrt{s} + \left( \frac{l_3 + l_4 - l_5 - l_6}{2} \right) + \left( \frac{l_3 - l_4 + l_5 - l_6}{2} \right) \sqrt{1 + 4s} = 0. \]
If $s$ and $4s+1$ are not perfect squares, then $\{1, \sqrt{s}, \sqrt{1 + 4s}\}$ is linearly independent over $\mathbb{Q}$, and Equation (11) implies Equation (12). Hence there is pretty good state transfer from $a$ to $b$.

Case 3: Suppose $m = 3h^2$, for some integer $h$. Then $4h^2 + 1$ is not a perfect square, and Equation (11) becomes
\[ \left( \frac{2h(l_1 - l_2) + l_3 + l_4 - l_5 - l_6}{2} \right) + \left( \frac{l_3 - l_4 + l_5 - l_6}{2} \right) \sqrt{4h^2 + 1} = 0, \]
which implies $l_3 + l_4 - l_5 - l_6 = -2h(l_1 - l_2)$. If $h = 3k$, for some integer $k$, then Equation (12) holds and pretty good state transfer occurs from $a$ to $b$.
Suppose $h$ is not divisible by 3. Equation (11) holds when $l_1 = l_2 = l_4 = l_5 = 0$ and $l_3 = l_6 = 1$. Since
\[ \sum_{r=1}^{6} l_r \left( q_r(a,b) + \delta \right) = 2\delta, \]
Equation (7) holds if and only if $\delta \in \mathbb{Z}\pi$.
Equation (11) also holds when $l_1 = 1$, $l_2 = l_3 = l_4 = 0$ and $l_5 = l_6 = h$, but
\[ \sum_{r=1}^{6} l_r \left( q_r(a,b) + \delta \right) = -\frac{4h\pi}{3} + (2h+1)\delta \neq 0 \pmod{2\pi} \]
when $\delta \in \mathbb{Z}\pi$. We conclude that pretty good state transfer from $a$ to $b$ does not occur.

Case 4: Suppose $m = 3s$ with $4s + 1 = h^2$, for some integer $h$. Then $s$ is not a perfect square, and Equation (11) becomes
\[ \left( l_1 - l_2 \right) \sqrt{s} + \frac{\left( l_3 + l_4 - l_5 - l_6 \right) + h \left( l_3 - l_4 + l_5 - l_6 \right)}{2} = 0, \]
which implies $l_3 + l_4 - l_5 - l_6 = -h(l_3 - l_4 + l_5 - l_6)$. If $h$ is divisible by 3, then Equation (12) holds and pretty good state transfer occurs from $a$ to $b$. In this case, $m = 27k^2 + 27k + 6$ if we write $4s + 1 = 3^2(2k+1)^2$.
If $h$ is not divisible by 3, then Equation (11) holds when $l_1 = l_2 = l_4 = l_5 = 0$ and $l_3 = l_6 = 1$, and also when $l_1 = l_2 = 0$, $l_3 = l_4 = h$, $l_5 = -1$ and $l_6 = 1$. Using the same argument as in the previous case, we see that no $\delta$ satisfies Equation (7) for both assignments of the $l_j$. We conclude that pretty good state transfer from $a$ to $b$ does not occur.
5.2 Circulants rooted with a looped path

In [4], Kempton et al. show that a path with a loop of transcendental weight $\gamma$ on each end-vertex has pretty good state transfer between the two end-vertices. We use $P_m^{\gamma}$ to denote the rooted path on vertices $\{1, 2, \ldots, m\}$ that has root $m$ and a loop of weight $\gamma$ on vertex 1. The path of length $2m-1$ with a loop of weight $\gamma$ on each end-vertex studied in [4] can then be viewed as the rooted product of $K_2$ with $P_m^{\gamma}$.

[Figure: the path $P_m^{\gamma}$, rooted at $m$, with a loop of weight $\gamma$ at vertex 1.]

We extend their result to the rooted product $X \circ P_m^{\gamma}$, where $X$ is a Hermitian circulant with rational eigenvalues that admits universal perfect state transfer. Orthogonal polynomials and the field trace are the main tools used in this section. See Chapter 8 of [17] for background on orthogonal polynomials, and see [4] and Chapter 14 of [18] for some basic facts about the field trace.

Suppose $V(X) = \{x_0, x_1, \ldots, x_{n-1}\}$. We label the vertices of $X \circ P_m^{\gamma}$ with ordered pairs, $(x_h, j)$ denoting the $j$-th vertex of the copy of $P_m^{\gamma}$ rooted at $x_h$ in $X$, for $h = 0, 1, \ldots, n-1$ and $j = 1, \ldots, m$.

[Figure: the rooted product of $\vec{K}_3$ with $P_m^{\gamma}$, with the copy at $x_h$ labelled $(x_h, 1), \ldots, (x_h, m)$.]

Let $H_X$ be the matrix of the Hermitian circulant $X$ with universal perfect state transfer. It follows from Theorem 8 of [1] that the eigenvalues of $H_X$ are simple. Given the distinct eigenvalues $\theta_0, \theta_1, \ldots, \theta_{n-1}$ of $H_X$ and the discrete Fourier matrix of order $n$,
\[ F_n = \frac{1}{\sqrt{n}} \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & \zeta & \zeta^2 & \cdots & \zeta^{n-1} \\ 1 & \zeta^2 & \zeta^4 & \cdots & \zeta^{2(n-1)} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & \zeta^{n-1} & \zeta^{2(n-1)} & \cdots & \zeta^{(n-1)^2} \end{bmatrix}, \]
where $\zeta = e^{2\pi\mathrm{i}/n}$, we can write
\[ H_X = F_n \begin{bmatrix} \theta_0 & 0 & \cdots & 0 \\ 0 & \theta_1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \theta_{n-1} \end{bmatrix} F_n^{*}. \]
For $0 \leq a, b \leq n-1$, the vertices $x_a$ and $x_b$ are strongly cospectral with quarrels
\[ q_j(x_a, x_b) = \frac{2\pi j (b - a)}{n}, \tag{13} \]
for $j = 0, 1, \ldots, n-1$.

Theorem 22 of [1] gives the following characterization of Hermitian circulants that have universal perfect state transfer.

Theorem 5.5. Let $X$ be a Hermitian circulant on $n$ vertices with simple eigenvalues $\theta_0, \ldots, \theta_{n-1}$. Then $X$ has universal perfect state transfer if and only if there exist $\alpha, \beta \in \mathbb{R}$ with $\beta > 0$, integers $c_0, \ldots, c_{n-1} \in \mathbb{Z}$, and an integer $h$ coprime to $n$ such that
\[ \theta_j = \alpha + \beta \left( jh + c_j n \right), \]
for $j = 0, \ldots, n-1$.
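As an illustration of Theorem 5.5 (not from the text), the oriented triangle $\vec{K}_3$ of Section 5.1 has spectrum $\{0, \sqrt{3}, -\sqrt{3}\}$ and fits the characterization with $\alpha = 0$, $\beta = \sqrt{3}$ and $h = 1$:

```latex
\theta_j = \sqrt{3}\,\bigl(j\cdot 1 + c_j\cdot 3\bigr),
\qquad (c_0, c_1, c_2) = (0, 0, -1),
```

so that $\theta_0 = 0$, $\theta_1 = \sqrt{3}$ and $\theta_2 = \sqrt{3}\,(2 - 3) = -\sqrt{3}$.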
To determine the spectrum of $Z = X \circ P_m^{\gamma}$, we consider the $m \times m$ Jacobi matrices
\[ T_j := \begin{bmatrix} \gamma & 1 & 0 & \cdots & 0 & 0 \\ 1 & 0 & 1 & \cdots & 0 & 0 \\ 0 & 1 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & 0 & \cdots & 1 & \theta_j \end{bmatrix}, \qquad \text{for } j = 0, 1, \ldots, n-1. \tag{14} \]
Let $\varphi_{j,0} = 1$ and let $\varphi_{j,r}(t)$ be the characteristic polynomial of the $r$-th leading principal submatrix of $T_j$, for $r = 1, \ldots, m$. Then $\varphi_{j,0}(t), \varphi_{j,1}(t), \ldots, \varphi_{j,m}(t)$ is a sequence of orthogonal polynomials satisfying $\varphi_{j,0}(t) = 1$, $\varphi_{j,1}(t) = t - \gamma$,
\[ \varphi_{j,r}(t) = t\, \varphi_{j,r-1}(t) - \varphi_{j,r-2}(t) \tag{15} \]
for $r = 2, \ldots, m-1$, and
\[ \varphi_{j,m}(t) = \left( t - \theta_j \right) \varphi_{j,m-1}(t) - \varphi_{j,m-2}(t). \tag{16} \]
By Lemma 8.5.2 of [17], the roots $\lambda_{j,1}, \ldots, \lambda_{j,m}$ of $\varphi_{j,m}(t) = 0$ are the eigenvalues of $T_j$. Further,
\[ \Phi_{j,s} = \begin{bmatrix} 1 & \varphi_{j,1}(\lambda_{j,s}) & \cdots & \varphi_{j,m-1}(\lambda_{j,s}) \end{bmatrix}^T \]
is an eigenvector of $T_j$ corresponding to the eigenvalue $\lambda_{j,s}$, for $s = 1, \ldots, m$. It follows from Lemma 8.1.1 of [17] that the eigenvalues of $T_j$ are simple. It is also known that consecutive orthogonal polynomials have no non-trivial common factor.
The Hermitian matrix of $Z$ is
\[ H_Z = \begin{bmatrix} 0 & \cdots & 0 & 0 \\ \vdots & \ddots & \vdots & \vdots \\ 0 & \cdots & 0 & 0 \\ 0 & \cdots & 0 & 1 \end{bmatrix} \otimes H_X + \begin{bmatrix} \gamma & 1 & 0 & \cdots & 0 \\ 1 & 0 & 1 & \cdots & 0 \\ 0 & 1 & 0 & \ddots & \vdots \\ \vdots & \vdots & \ddots & \ddots & 1 \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix} \otimes I_n, \tag{17} \]
where both matrices on the right have order $m$. Since $H_X F_n e_j = \theta_j F_n e_j$, we have
\[ H_Z \left( \Phi_{j,s} \otimes F_n e_j \right) = \lambda_{j,s} \left( \Phi_{j,s} \otimes F_n e_j \right) \tag{18} \]
for $j = 0, \ldots, n-1$ and $s = 1, \ldots, m$.

Lemma 5.6. Let $X$ be a Hermitian circulant with distinct eigenvalues $\theta_0, \theta_1, \ldots, \theta_{n-1}$, and let $F_n$, $\lambda_{j,s}$ and $\Phi_{j,s}$ be defined as above. For $j = 0, \ldots, n-1$ and $s = 1, \ldots, m$, $\lambda_{j,s}$ is a simple eigenvalue of the Hermitian graph $Z$ defined in Equation (17), with spectral decomposition
\[ H_Z = \sum_{j=0}^{n-1} \sum_{s=1}^{m} \lambda_{j,s}\, \frac{1}{\|\Phi_{j,s}\|^2} \left( \Phi_{j,s} \Phi_{j,s}^{*} \right) \otimes \left( (F_n e_j)(F_n e_j)^{*} \right). \]
For $x_a, x_b \in V(X)$ and $h = 1, \ldots, m$, the vertices $(x_a, h)$ and $(x_b, h)$ are strongly cospectral in $Z$, with the quarrel corresponding to the eigenvalue $\lambda_{j,s}$ being
\[ q_{j,s}\left( (x_a, h), (x_b, h) \right) = \frac{2\pi j (b - a)}{n}, \]
for $j = 0, \ldots, n-1$ and $s = 1, \ldots, m$.
Proof. It is sufficient to show that the eigenvalues $\lambda_{j,s}$ of $Z$, for $j = 0, \ldots, n-1$ and $s = 1, \ldots, m$, are distinct. Suppose $\lambda_{j_1,s_1} = \lambda_{j_2,s_2}$. From Equation (15), we have
\[ \varphi_{j_1,r}\left( \lambda_{j_1,s_1} \right) = \varphi_{j_2,r}\left( \lambda_{j_2,s_2} \right), \]
for $r = 1, \ldots, m-1$. From Equation (16), $\varphi_{j_1,m}\left( \lambda_{j_1,s_1} \right) = \varphi_{j_2,m}\left( \lambda_{j_2,s_2} \right) = 0$ implies $\theta_{j_1} = \theta_{j_2}$ and hence $j_1 = j_2$. Since $\varphi_{j_1,m}(t) = 0$ has $m$ distinct roots, we conclude that $s_1 = s_2$.
We get the quarrels of $Z$ directly from Equations (18) and (13).
1686
+ For the rest of this section, we assume that 훾 is transcendental and 휃0, 휃1, … , 휃푛−1 ∈ Q as in Theorem 5.8.
1687
+ Applying Laplace expansion along the first two rows of 푇푗 in Equation (14) gives
1688
+ 휑푗,푚(푡) = (푡 − 훾)푔푛−1(푡) − 푔푛−2(푡),
1689
+ where 푔푛−1(푡) is the characteristic polynomial of the (푛 − 1) × (푛 − 1) Jacobi matrix
1690
+
1691
+
1692
+
1693
+
1694
+
1695
+ ⎜⎝
1696
+ 휃푗
1697
+ 1
1698
+
1699
+ 0
1700
+ 0
1701
+ 1
1702
+ 0
1703
+
1704
+ 0
1705
+ 0
1706
+
1707
+
1708
+
1709
+
1710
+
1711
+ 0
1712
+ 0
1713
+
1714
+ 0
1715
+ 1
1716
+ 0
1717
+ 0
1718
+
1719
+ 1
1720
+ 0
1721
+
1722
+
1723
+
1724
+
1725
+
1726
+ ⎟⎠
1727
+ ,
1728
+ and 푔푛−2(푡) is the characteristic polynomial of its (푛 − 2)-th leading principal submatrix. Now 푔푛−1(푡) and
1729
+ 푔푛−2(푡) are consecutive orthogonal polynomials, so they do not have any common factor of positive degree.
1730
+ Since 푔푛−1(푡) and 푔푛−2(푡) are rational polynomials and 훾 is transcendental, we conclude that 휑푗,푚(푡) is irre-
1731
+ ducible over Q(훾). Then the splitting field 퐹푗 of 휑푗,푚(푡) is a Galois extension over Q(훾).
1732
+ Given a Galois extension 퐸∕퐾, we use Tr퐸∕퐾(휇) to denote the trace of 휇 from 퐸 to 퐾. Here are some properties of the trace map useful for the proof of Theorem 5.8.
+ Theorem 5.7. Let 퐸∕퐾 be a Galois extension. The following properties hold.
+ i. For 휇 ∈ 퐸, Tr퐸∕퐾(휇) ∈ 퐾.
+ ii. For 휇 ∈ 퐾, Tr퐸∕퐾(휇) = [퐸 ∶ 퐾]휇.
+ iii. For 휇1, 휇2 ∈ 퐸, Tr퐸∕퐾(휇1 + 휇2) = Tr퐸∕퐾(휇1) + Tr퐸∕퐾(휇2).
+ iv. If 퐾 ⊂ 퐹 ⊂ 퐸 are extension fields, then Tr퐸∕퐾(휇) = Tr퐹∕퐾(Tr퐸∕퐹(휇)).
+ v. If the minimal polynomial of 휇 ∈ 퐸 over 퐾 is 푡푚 + 푎푚−1푡푚−1 + ⋯ + 푎0, then Tr퐸∕퐾(휇) = −([퐸 ∶ 퐾]∕푚) 푎푚−1.
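As a quick sanity check of Property (v), here is a small worked example (added for illustration; it is not part of the original text):

```latex
% Worked example for Property (v): E = Q(sqrt(2)), K = Q, mu = sqrt(2).
% The minimal polynomial of mu over K is t^2 - 2, so m = 2 and a_{m-1} = a_1 = 0.
% Property (v) then gives
\[
  \operatorname{Tr}_{\mathbb{Q}(\sqrt{2})/\mathbb{Q}}\!\left(\sqrt{2}\right)
  = -\frac{[\mathbb{Q}(\sqrt{2}):\mathbb{Q}]}{2}\, a_1
  = -\frac{2}{2}\cdot 0
  = 0 .
\]
% Direct check: the Galois conjugates of sqrt(2) are +sqrt(2) and -sqrt(2),
% and their sum is indeed 0.
```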
+ The eigenvalue 휆푗,푠 of 푋◦푃 훾푚 has minimal polynomial 휑푗,푚(푡) over Q(훾). Applying Property (v) to 휆푗,푠 ∈ 퐹푗, Equation (16) gives
+ Tr퐹푗∕Q(훾)(휆푗,푠) = ([퐹푗 ∶ Q(훾)]∕푚) (훾 + 휃푗). (19)
+ Consider the smallest extension field 푀 of 퐹푗 that contains 퐹0, … , 퐹푛−1. For 푗 = 0, … , 푛 − 1, 푀∕퐹푗 is a Galois extension. It follows from Properties (ii) and (iv) and Equation (19) that
+ Tr푀∕Q(훾)(휆푗,푠) = Tr퐹푗∕Q(훾)([푀 ∶ 퐹푗]휆푗,푠) = [푀 ∶ 퐹푗] ([퐹푗 ∶ Q(훾)]∕푚) (훾 + 휃푗) = ([푀 ∶ Q(훾)]∕푚) (훾 + 휃푗). (20)
+ Theorem 5.8. Let 푋 be a Hermitian circulant on 푛 vertices that admits universal perfect state transfer with eigenvalues given in Theorem 5.5. If 휃0, … , 휃푛−1 ∈ Q and 훾 is transcendental then, for any positive integer 푚, the rooted product 푋◦푃 훾푚 has multiple pretty good state transfer on the set {(푥0, ℎ), (푥1, ℎ), … , (푥푛−1, ℎ)}, for 1 ≤ ℎ ≤ 푚.
+ Proof. For ℎ = 1, … , 푚, 푋◦푃 훾푚 has an automorphism that maps (푥푎, ℎ) to (푥푎+1, ℎ), for 푎 ∈ Z푛. It is sufficient to show that there is pretty good state transfer from (푥0, ℎ) to (푥1, ℎ). By Lemma 5.6, (푥0, ℎ) and (푥1, ℎ) are strongly cospectral with quarrels
+ 푞푗,푠((푥0, ℎ), (푥1, ℎ)) = 2휋푗∕푛,
+ for 푗 = 0, … , 푛 − 1 and 푠 = 1, … , 푚.
+ To show that Theorem 2.4 (ii) holds, consider integers 푙푗,푠 satisfying
+ ∑_{푗=0}^{푛−1} ∑_{푠=1}^{푚} 푙푗,푠 휆푗,푠 = 0. (21)
+ We apply the trace from 푀 to Q(훾) to both sides. Applying Theorem 5.7 (iii) and Equation (20), Equation (21) implies
+ ∑_{푗=0}^{푛−1} ∑_{푠=1}^{푚} 푙푗,푠 (훾 + 휃푗) = 훾 (∑_{푗=0}^{푛−1} ∑_{푠=1}^{푚} 푙푗,푠) + ∑_{푗=0}^{푛−1} 휃푗 (∑_{푠=1}^{푚} 푙푗,푠) = 0.
+ Since 훾 is transcendental and ∑푗 휃푗 (∑푠 푙푗,푠) ∈ Q, Equation (21) is equivalent to
+ ∑_{푗=0}^{푛−1} ∑_{푠=1}^{푚} 푙푗,푠 = 0 (22)
+ and
+ ∑_{푗=0}^{푛−1} 휃푗 (∑_{푠=1}^{푚} 푙푗,푠) = 0. (23)
+ Recall 휃푗 = 훼 + 훽(푗ℎ + 푐푗푛) where gcd(ℎ, 푛) = 1. Equations (22) and (23) imply
+ ∑_{푗=0}^{푛−1} (푗ℎ + 푐푗푛) (∑_{푠=1}^{푚} 푙푗,푠) = 0.
+ Since gcd(ℎ, 푛) = 1, we have
+ ∑_{푗=0}^{푛−1} 푗 (∑_{푠=1}^{푚} 푙푗,푠) ≡ 0 (mod 푛).
+ If Equations (22) and (23) hold then, for any 훿 ∈ R,
+ ∑_{푗=0}^{푛−1} ∑_{푠=1}^{푚} 푙푗,푠 (푞푗,푠((푥0, ℎ), (푥1, ℎ)) + 훿) = (2휋∕푛) (∑_{푗=0}^{푛−1} 푗 ∑_{푠=1}^{푚} 푙푗,푠) + 훿 (∑_{푗=0}^{푛−1} ∑_{푠=1}^{푚} 푙푗,푠) ≡ 0 (mod 2휋).
+ By Theorem 2.4, pretty good state transfer occurs from (푥0, ℎ) to (푥1, ℎ), for ℎ = 1, … , 푚.
+ Remark 5.9.
+ • Putting a transcendental weight 훾 on the loops is sufficient for 휑0,푚(푡), … , 휑푛−1,푚(푡) to be irreducible over Q(훾). Theorem 5.8 holds for an irrational number 훾 as long as 휑0,푚(푡), … , 휑푛−1,푚(푡) are irreducible over Q(훾).
+ • If we move the loops from (푥푎, 1) to (푥푎, 푚), for 푎 = 0, … , 푛 − 1, then a similar argument shows that the resulting graph has multiple pretty good state transfer on the set {(푥0, ℎ), (푥1, ℎ), … , (푥푛−1, ℎ)}, for ℎ = 1, … , 푚.
+ Acknowledgements
+ This project was completed under the 2021 Fields Undergraduate Summer Research Program, which provided support for A. Acuaviva, S. Eldridge, M. How and E. Wright. C. Godsil gratefully acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) Grant No. RGPIN-9439. A. Chan is grateful for the support of NSERC Grant No. RGPIN-2021-03609.
89AzT4oBgHgl3EQfgvxw/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
8NAzT4oBgHgl3EQf-v4l/content/2301.01937v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7e566ab143524b6e852c709b1b1875bc9823e4ff9d926b44d2c35646d47e8233
+ size 12166408
8tE3T4oBgHgl3EQfSAk3/content/2301.04427v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:93fdcceffe700ae3e0901e24d2d3afc6a053022a43952c9818ada1f97e67e895
+ size 2007897
8tE3T4oBgHgl3EQfSAk3/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8bc1add8492d6de905caa4dd87976fb01ae64e4959d5bd80c27e67ece7e6d2ad
+ size 3473453
8tE3T4oBgHgl3EQfSAk3/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:93419120599980c1a8d3548c7e12e3ec3082ff02c631e6f544939663747d6627
+ size 120912
9NFLT4oBgHgl3EQfty_-/content/2301.12153v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aea66c97bdba8fd24d66035ada94474f20bf69877ee7f8c83e4d5c5c95dfb293
+ size 924256
9dE1T4oBgHgl3EQf8AVQ/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:776bf37ef8a77207d89fbe86d873a36296d4b67a824818834ce13ee21809897a
+ size 113244
AtE2T4oBgHgl3EQfnAjp/content/tmp_files/2301.04005v1.pdf.txt ADDED
@@ -0,0 +1,488 @@
+ Towards AI-controlled FES-restoration of arm movements: Controlling for progressive muscular fatigue with Gaussian state-space models
+ Nat Wannawas
+ Dept. of Bioengineering, Imperial College London
+ London, UK
+ nat.wannawas18@imperial.ac.uk
+ A. Aldo Faisal
+ Dept. of Bioengineering & Dept. of Computing, Imperial College London, London, UK
+ Chair of Digital Health & Data Science, University of Bayreuth
+ Bayreuth, Germany
+ aldo.faisal@imperial.ac.uk
+ Abstract—Reaching disability limits an individual's ability to perform daily tasks. Surface Functional Electrical Stimulation (FES) offers a non-invasive solution to restore the lost ability. However, inducing desired movements using FES is still an open engineering problem. This problem is accentuated by the complexities of human arms' neuromechanics and the variations across individuals. Reinforcement Learning (RL) emerges as a promising approach to govern customised control rules for different settings. Yet, one remaining challenge of controlling FES systems with RL is unobservable muscle fatigue, which progressively changes as an unknown function of the stimulation, thereby breaking the Markovian assumption of RL.
+ In this work, we present a method to address the unobservable muscle fatigue issue, allowing our RL controller to achieve higher control performance. Our method is based on a Gaussian State-Space Model (GSSM) that utilizes recurrent neural networks to learn Markovian state-spaces from partial observations. The GSSM is used as a filter that converts the observations into a state-space representation for RL, preserving the Markovian assumption. Here, we start by presenting the modification of the original GSSM to address an overconfidence issue. We then present the interaction between RL and the modified GSSM, followed by the setup for FES control learning. We test our RL-GSSM system on a planar reaching setting in simulation using a detailed neuromechanical model. The results show that the GSSM can help improve the RL's control performance to a level comparable to the ideal case in which the fatigue is observable.
+ Index Terms—Functional Electrical Stimulation, FES, Gaussian State-Space Model, Reinforcement Learning, Arm Motions
+ arXiv:2301.04005v1 [eess.SY] 10 Jan 2023
+ I. INTRODUCTION
+ Every year, strokes and spinal cord injuries leave individuals around the world with paralysis. Upper body paralysis, one of the most common outcomes of such incidents, causes the dysfunction of arm movements and severely affects the individuals' abilities to perform daily tasks. Functional Electrical Stimulation (FES), a technique that uses electrical signals to induce muscle contraction, offers a solution for restoring the movements. Yet, controlling FES to induce desired movements is challenging. One challenge is that each individual requires customised stimulation to induce a certain movement. This causes difficulties in designing a control method that works across different individuals without intensive, manual configuration. Another challenge is that the muscles' responses to FES change over time because of muscular fatigue. Since the fatigue level cannot be directly measured, it is difficult for a controller to maintain its performance over extended periods.
+ We acknowledge funding from the Royal Thai Government Scholarship to NW and a UKRI Turing AI Fellowship to AAF.
+ Several methods that can automatically find customised stimulation have been investigated. One of these is Reinforcement Learning (RL), a machine learning algorithm with a learning agent (RL agent) that learns to control an environment through interaction. The successes of RL in controlling body movements have been presented in several scenarios: cycling [1], walking [2], and arm movements [3]–[7]. Additionally, [7] shows that RL can deal with fatigue to a certain degree; yet, a performance drop is still inevitable in many cases.
+ Different approaches have been employed to deal with muscular fatigue. A widely used approach is to record an electromyogram (EMG) or mechanomyogram (MMG) from which muscle force can be estimated [8]–[11]. Although this approach can be straightforward, the successes are currently limited to a few types of movements such as knee extension [10], [11] and cycling [8], [9]. Additionally, it requires sensors which can be difficult to set up. Approaches that exploit the periodic nature of movements such as walking are used in [12], [13]. However, these may not be suitable for controlling arbitrary arm movements. Our previous work [6] explores an approach that does not use dedicated sensors and can be applied to arbitrary movements. The approach uses a recurrent neural network (RNN) to encode the history of observations and provide additional information to the RL agent. This strategy can control arbitrary single-joint movements in the real world. However, its capability in multiple-joint cases is limited.
+ In this work, we present an AI-based system for controlling FES that can induce arbitrary desired movements and can maintain performance under progressive muscular fatigue. Our system uses the combination of an RNN-based Gaussian state-space model (GSSM) that learns Markovian state-representations and RL that learns the control policies on the representation spaces. In simple terms, the GSSM here functions as a filter that provides insight into the systems' hidden states to the RL agents, allowing the agents to select better actions. Compared to our previous work [6], this system is more powerful and capable of learning the complex dynamics of multiple-joint movements. Additionally, it produces probabilistic transition functions that can be useful, for example, for model-based RL.
+ We present the details of our RL-GSSM system and the setup for controlling arbitrary movements in the Methods section. We also provide the modification of the original GSSM [14] to address an overconfidence issue. We demonstrate our system in a planar arbitrary reaching setting using a detailed neuromechanical model and show that our system can achieve and maintain control performance at the same level as the ideal case in which muscle fatigue is observable.
+ II. METHODS
+ A. Gaussian State-Space Models (GSSM)
+ Here, the GSSM functions as a filter that converts an observable environment state vector (ot) into a state-representation vector (xt) which contains the information of the system's hidden states. Our GSSM is based on [14], whose main components are an RNN-based filter (fFilter) and a transition function (fTran). The filter converts ot into xt through the following process. The process starts at the zeroth time step (t = 0) with the initialisation of the RNN's hidden states (h0) and state representations (x0). x0 is then concatenated with the initial action vector ainit and is passed through Ws, a small multilayer perceptron (MLP). This step is mathematically expressed as hx,t=0 = Ws([x0; a0]T ). Meanwhile, the RNN observes the environment's states o0 and updates its hidden state to ht=1. hx,t=0 and ht=1 are then combined as hc,t=1 = (1∕2) tanh(hx,t=0 + ht=1). Next, hc,t=1 is passed through Wx, an MLP that outputs the distribution of xt. The following time steps repeat this process but start with the sampled xt and the actual actions at. The trajectory of xt, denoted x0:T , is obtained by repeating this process through the whole trajectory of observations o0:T . For future notation, the RNN, Ws, and Wx are referred to collectively as fFilter.
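One filtering step of this process can be sketched as follows. This is a minimal illustration, not the authors' code: the tiny `mlp` helper, the layer sizes, and the Gaussian parameterisation (mean and log-variance heads) are all assumptions standing in for the paper's Ws, RNN, and Wx.

```python
import math
import random

random.seed(0)

def mlp(weights, vec):
    """Tiny stand-in for an MLP: one linear layer followed by tanh."""
    return [math.tanh(sum(w * v for w, v in zip(row, vec))) for row in weights]

def filter_step(x_prev, a_prev, h_prev, o_t, Ws, Wrnn, Wx_mu, Wx_logvar):
    """One GSSM filter step: combine the previous state-representation
    (concatenated with the previous action) and the RNN summary of the
    incoming observation, then sample x_t from the predicted Gaussian."""
    # h_x = Ws([x_{t-1}; a_{t-1}])
    h_x = mlp(Ws, x_prev + a_prev)
    # Crude RNN cell: update the hidden state from [h_{t-1}; o_t]
    h_t = mlp(Wrnn, h_prev + o_t)
    # h_c = (1/2) tanh(h_x + h_t), the combination used in the paper
    h_c = [0.5 * math.tanh(a + b) for a, b in zip(h_x, h_t)]
    # Wx outputs the distribution of x_t (here: mean and log-variance heads)
    mu = mlp(Wx_mu, h_c)
    logvar = mlp(Wx_logvar, h_c)
    x_t = [m + math.exp(0.5 * lv) * random.gauss(0.0, 1.0)
           for m, lv in zip(mu, logvar)]
    return x_t, h_t, (mu, logvar)

# Toy dimensions: x in R^2, a in R^1, h in R^3, o in R^2, constant weights.
W = lambda rows, cols: [[0.1] * cols for _ in range(rows)]
x, h, (mu, lv) = filter_step([0.0, 0.0], [0.5], [0.0, 0.0, 0.0], [0.2, -0.1],
                             Ws=W(3, 3), Wrnn=W(3, 5),
                             Wx_mu=W(2, 3), Wx_logvar=W(2, 3))
```

Repeating `filter_step` over o0:T, feeding the sampled x back in, yields the trajectory x0:T described above.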
+ The GSSM is trained using the trajectory of observations (o0:T ) as follows. The training process starts with using fFilter to sample x0:T corresponding to o0:T . Next, we reconstruct the observations by passing the sampled x0:T through the observation mapping function Wg, expressed as k0:T = Wg(x0:T ). The parameters of fFilter are optimised through gradient descent to minimise the following loss functions. The first loss function is the likelihood between k0:T and o0:T , expressed as
+ llik = ∑_{t=1}^{T} p(ot | µk,t, Σk,t),
+ where µk,t and Σk,t are the mean and covariance of the reconstructed observations, respectively. The second loss function is the KL divergence between the x0:T distributions sampled by fFilter and those predicted by fTran, expressed as
+ lDKL = ∑_{t=2}^{T} DKL[fFilter(xt−1, o0:t) || fTran(xt−1)].
+ Intuitively, this loss function encourages the filter-generated distribution of xt, pf(xt), to have a Markovian structure, i.e., pf(xt | xt−1, o0:t) = p(xt | xt−1). Note that the observation history o0:t−1 is encoded in the RNN's hidden states.
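When both distributions in the KL term are diagonal Gaussians, as is common for such models, the divergence has a closed form. The helper below is my own sketch of that formula, not code from the paper:

```python
import math

def kl_diag_gauss(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, diag(var_q)) || N(mu_p, diag(var_p)) ), summed over
    independent dimensions, using the closed-form Gaussian KL."""
    kl = 0.0
    for mq, vq, mp, vp in zip(mu_q, var_q, mu_p, var_p):
        kl += 0.5 * (math.log(vp / vq) + (vq + (mq - mp) ** 2) / vp - 1.0)
    return kl

# Identical distributions have zero divergence; shifting a mean does not.
assert kl_diag_gauss([0.0, 1.0], [1.0, 2.0], [0.0, 1.0], [1.0, 2.0]) == 0.0
```

Summing this quantity over t = 2, …, T with the filter's output as q and the transition function's prediction as p gives the lDKL term above.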
+ In the original model [14], fTran is represented by a neural network that directly outputs the means and variances of xt. This network produces overconfidence in the learned transition function. To mitigate this issue, we replace that network with an ensemble of neural networks with randomised prior functions (RP-Ensemble) [15]. The predictive means and variances are computed by fitting Gaussian distributions to the ensemble's outputs.
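The last step, fitting a Gaussian to the ensemble's outputs, amounts to moment matching across members. A minimal one-dimensional sketch (my own illustration; the member predictions are made up):

```python
def ensemble_to_gaussian(predictions):
    """Fit a Gaussian to an ensemble's point predictions by moment matching:
    the predictive mean/variance are the sample mean/variance across members.
    Disagreement between members shows up as a larger predictive variance,
    which is exactly what counters overconfidence away from the data."""
    n = len(predictions)
    mean = sum(predictions) / n
    var = sum((p - mean) ** 2 for p in predictions) / n
    return mean, var

# Five hypothetical ensemble members predicting the next state (1-D).
members = [0.9, 1.1, 1.0, 1.2, 0.8]
mu, var = ensemble_to_gaussian(members)  # mu = 1.0, var = 0.02
```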
+ B. Generic RL-GSSM for controlling arbitrary movements
+ Reinforcement Learning (RL) learns a task through reward signals collected from interactions with an environment. The interactions occur in a discrete-time fashion, starting with the agent observing the environment's state st and selecting an action at based on its policy π. The action causes the environment to transition to a new state st+1. The agent then receives an immediate reward rt and observes the new state. This interaction experience is collected as a tuple (st, at, rt, st+1), which is stored in a replay buffer D. These tuples are used to learn an optimal policy π∗ that maximises the return R, the sum of discounted immediate rewards.
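The return R mentioned above is the standard discounted sum; computed backwards over an episode it is:

```python
def discounted_return(rewards, gamma):
    """Return R = sum_t gamma^t * r_t, the quantity the policy maximises.
    Accumulating backwards avoids recomputing powers of gamma."""
    R = 0.0
    for r in reversed(rewards):
        R = r + gamma * R
    return R

# Three unit rewards with gamma = 0.5: 1 + 0.5 + 0.25 = 1.75
assert discounted_return([1.0, 1.0, 1.0], 0.5) == 1.75
```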
+ The introduction of the GSSM causes a few changes in the typical RL learning process. To avoid confusing notation, we hereafter use st to denote RL state vectors. Fig. 1 shows the overview diagram of our RL-GSSM system. The system has two phases, an interaction phase and an updating phase, described as follows. At each time step in the interaction phase, fFilter observes ot, updates the RNN's hidden states, and generates state-representations xt. The agent then selects an action at based on st = [ot; xt; ct]T , where ct is the control target at time t. The action affects the environment, the system moves into the next time step, and the process repeats. The interactions are stored as ([ot; ct]T , at, rt, [ot+1; ct+1]T ) in a Trajectory Buffer.
+ The updating phase begins with drawing sampled trajectories (˜o0:T ) from the Trajectory Buffer and using them to update the GSSM. After that, the updated fFilter is used to generate new trajectories of st corresponding to ˜o0:T . The new st trajectories are then converted into new RL experience tuples stored in a typical Replay Buffer, and the RL agent is updated following a typical method.
+ C. RL-GSSM setup for controlling planar movements
+ The environment here is a neuromechanical model built in OpenSim. The model has a human arm placed on an arm support that moves with low friction on a table (Fig. 2b). The model has 6 muscles; the 4 muscles labelled in the figure are stimulated. The muscles are fatigued progressively as a function of the stimulation (see [1] for more details). The observable environment states are the angles and angular velocities of the shoulder and elbow (ot = [θs,t; θe,t; ˙θs,t; ˙θe,t]T ).
+ Fig. 1. (a) Diagram showing the overview of our RL-GSSM system. The dashed blue line separates RL and GSSM. The GSSM's parts in yellow boxes are excluded during the interaction phase. This phase starts with the initialisation (on the left) and evolves as follows. At time step t, the previous action at−1 is appended to the state-representation of the previous time step xt−1. The Filter then combines the appended vector with the incoming observation ot and samples the state-representation of the current time step xt. The average of xt, denoted ¯xt, is concatenated with ot and a control target ct and becomes the RL state vector st. The interaction data are stored in the Trajectory Buffer. (b) Diagram showing the overview of the training phase, which begins with sampling the stored trajectories and updating the GSSM. The updated Filter is then used to generate new RL experience tuples, which are used to update the RL agent.
+ The RL algorithm of choice is soft actor-critic [16]. Both the actor and critic are parameterised by fully-connected neural networks with two hidden layers. The actor's output layer has a sigmoid activation function to squash the outputs within [0, 1]. The RL task here is to apply muscle stimulation to move the arm to desired poses, which are specified by target joint angles for the shoulder and elbow (θtar,t). The state vector st is [ot; xt; θtar,t]T . The action vector at comprises the normalised stimulation intensities (i ∈ [0, 1]) of the stimulated muscles. The immediate reward rt is simply computed from the squared error and an action penalty as
+ rt = −(θt − θtar,t)2 − (∑i ai)∕n,
+ where n is the number of stimulated muscles.
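Reading the formula above as a squared tracking error per joint plus the mean stimulation intensity as a penalty (an interpretation of the garbled extraction, not code from the paper), the reward can be sketched as:

```python
def reward(theta, theta_target, actions):
    """Immediate reward: negative squared tracking error summed over joints,
    minus the mean stimulation intensity as an action penalty."""
    err = sum((t, )[0] and (t - tt) ** 2 or (t - tt) ** 2
              for t, tt in zip(theta, theta_target)) if False else \
          sum((t - tt) ** 2 for t, tt in zip(theta, theta_target))
    penalty = sum(actions) / len(actions)
    return -err - penalty

# Perfect tracking with zero stimulation gives the maximal reward of 0.
assert reward([45.0, 45.0], [45.0, 45.0], [0.0, 0.0, 0.0, 0.0]) == 0.0
```

Both terms are negative or zero, so the agent is pushed to track the target while stimulating as little as possible, which also slows fatigue.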
+ The training is episodic. Each episode has 100 time steps with a 100 ms time step size. The episodes begin at random poses, targets, and fatigue levels. A new random target is assigned at the 50th time step. Every 5 training episodes, the control performance is evaluated as an RMSE measure on 50 test episodes with the same settings as the training episodes.
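The RMSE evaluation metric used here can be computed per episode as follows (a plain sketch of the standard formula, assuming the error is taken over the tracked joint angles in degrees):

```python
import math

def rmse(actual, target):
    """Root-mean-square tracking error over an episode."""
    n = len(actual)
    return math.sqrt(sum((a - t) ** 2 for a, t in zip(actual, target)) / n)

# Example: errors of 1 and 3 degrees give sqrt((1 + 9) / 2) = sqrt(5).
print(rmse([46.0, 48.0], [45.0, 45.0]))
```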
+ III. RESULTS
+ A. Ensemble transition function
+ We replace fTran of the original model [14], denoted fTr,Ori, with the RP-Ensemble, denoted fTr,Ens, to address the overconfidence issue. We test both models on a benchmarking function, Kink [17]. Fig. 2a shows the learned transitions. Both fTr,Ori and fTr,Ens produce good predictive means. However, fTr,Ori is overconfident, as shown by low predictive variances at the locations where the data, represented by × marks, are absent. In contrast, fTr,Ens has higher predictive variances at those locations.
+ B. Controlling planar arm movements
+ We train our RL-GSSM to control planar arm movements under progressive muscular fatigue through muscle stimulation. We explore 3 cases: 1) RL-ideal and 2) RL-vanilla, where the fatigue is observable and unobservable, respectively; and 3) RL-GSSM. The RL agents are trained for 100 episodes in all cases; the training is repeated 10 times.
+ Fig. 2c shows the performance evaluations in RMSE measure along the training. RL-vanilla's performance has the steepest improvement at the beginning but stagnates at the worst level. RL-GSSM's curve, compared to RL-ideal's, has higher standard deviations in the early period because the agents have to simultaneously learn the controls and follow the not-yet-converged GSSM. RL-GSSM's performance improves slightly more slowly but reaches the same level within 100 episodes.
+ Fig. 3 shows the control behaviours in tracking an arbitrary trajectory. The agents produce good tracking in all cases. The grey circles highlight good comparison points. Both RL-ideal (Fig. 3a) and RL-GSSM (Fig. 3c) can bring the shoulder and elbow to the [45◦, 45◦] targets any time this is requested. RL-vanilla, however, tends to lose its performance in the second half, as the actual angles increasingly deviate from the targets (Fig. 3b). Fig. 3d-f show the stimulation (solid lines) and the %maximum force that the muscles can produce (dashed lines). The %maximum force decreases over time as the stimulation induces muscular fatigue. Compared to RL-ideal (Fig. 3d), RL-vanilla (Fig. 3e) overstimulates and causes rapid declines of the muscle forces. The declines in the RL-GSSM and RL-ideal cases are at the same rate on average. RL-GSSM's stimulation exhibits small noise throughout the session.
+ Fig. 2. (a) The learnt kink function of the (left) original GSSM and (right) the GSSM with RP-Ensemble transition function. (b) Neuromechanical model of planar arm movement built in OpenSim. (c) The control performances evaluated along the training. The shades show the standard deviations of 10 runs.
+ Fig. 3. Control behaviours in tracking an arbitrary target trajectory. (a-c) The plots showing the targets (dashed) and the actual angles (solid) achieved in the (a) RL-ideal, (b) RL-vanilla, and (c) RL-GSSM cases. (d-f) %maximum stimulation that the RL agents apply on the muscles (solid) and %maximum forces that the muscles can produce (dashed). The %maximum forces decrease in response to the muscular fatigue induced by the stimulation.
+ IV. CONCLUSIONS
+ We present an AI-based approach for controlling FES under progressive muscular fatigue. Our RL-GSSM approach uses RL to learn the control policies and a GSSM, modified to address the overconfidence issue, to provide Markovian state-representations to the RL. We demonstrate our approach by controlling arbitrary planar arm movements using a detailed neuromechanical model. We show that our RL-GSSM can achieve and maintain its control performance at the same level as the ideal case where the fatigue is observable.
+ REFERENCES
+ [1] N. Wannawas, M. Subramanian, and A. A. Faisal, "Neuromechanics-based deep reinforcement learning of neurostimulation control in FES cycling," in Intl. IEEE/EMBS Conf. on Neural Engineering (NER), 2021.
+ [2] A. Anand et al., "A deep reinforcement learning based approach towards generating human walking behavior with a neuromuscular model," in 19th Intl. Conf. on Humanoid Robots, 2019.
+ [3] P. Thomas et al., "Creating a reinforcement learning controller for functional electrical stimulation of a human arm," in 14th Yale Workshop on Adaptive and Learning Systems, 2008.
+ [4] K. M. Jagodnik et al., "Human-like rewards to train a reinforcement learning controller for planar arm movement," IEEE Trans. on Human-Machine Systems, vol. 46, pp. 723–733, 10 2016.
+ [5] D. N. Wolf, Z. A. Hall, and E. M. Schearer, "Model learning for control of a paralyzed human arm with functional electrical stimulation," in IEEE Intl. Conf. on Robotics and Automation (ICRA), 2020, p. 10148.
+ [6] N. Wannawas, A. Shafti, and A. A. Faisal, "Neuromuscular reinforcement learning to actuate human limbs through FES," in IFESS22, 2022.
+ [7] J. Abreu et al., "Deep reinforcement learning for control of time-varying musculoskeletal systems with high fatigability: a feasibility study," in IEEE Trans. Neural Sys. and Rehab. Eng., 2022.
+ [8] B. Woods, M. Subramanian, A. Shafti, and A. A. Faisal, "Mechanomyography based closed-loop functional electrical stimulation cycling system," in 7th IEEE Intl. Conf. on Biomed. Robotics and Biomechatronics. IEEE, 8 2018, pp. 179–184.
+ [9] M. Islam et al., "Mechanomyography responses characterize altered muscle function during electrical stimulation-evoked cycling in individuals with spinal cord injury," Clinical Biomechanics, vol. 58, 2018.
+ [10] J. Naeem et al., "Electrical stimulator with mechanomyography-based real-time monitoring, muscle fatigue detection, and safety shut-off: A pilot study," Biomedizinische Technik, vol. 65, 2020.
+ [11] E. Krueger et al., "Neuromuscular fatigue detection by mechanomyography in people with complete spinal cord injury," Research on Biomedical Engineering, vol. 36, pp. 203–212, 2020.
+ [12] A. J. Del-Ama, Á. Gil-Agudo, J. L. Pons, and J. C. Moreno, "Hybrid FES-robot cooperative control of ambulatory gait rehabilitation exoskeleton," J. NeuroEngineering and Rehabilitation, vol. 11, 2014.
+ [13] K. H. Ha et al., "An approach for the cooperative control of FES with a powered exoskeleton during level walking for persons with paraplegia," IEEE Trans. on Neural Sys. and Rehab. Eng., vol. 24, 2016.
+ [14] R. G. Krishnan, U. Shalit, and D. Sontag, "Structured inference networks for nonlinear state space models," in AAAI, 2017.
+ [15] I. Osband, J. Aslanides, and A. Cassirer, "Randomized prior functions for deep reinforcement learning," in NIPS, 2018.
+ [16] T. Haarnoja et al., "Soft actor-critic algorithms and applications," arXiv:1812.05905v2 [cs.LG], 2019.
+ [17] A. D. Ialongo et al., "Overcoming mean-field approximations in recurrent Gaussian process models," in 36th ICML, 2019.
AtE2T4oBgHgl3EQfnAjp/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,318 @@
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf,len=317
+ Towards AI-controlled FES-restoration of arm movements: Controlling for progressive muscular fatigue with Gaussian state-space models
+ Nat Wannawas, Dept. of Bioengineering, Imperial College London, London, UK (nat.wannawas18@imperial.ac.uk)
+ A. Aldo Faisal, Dept. of Bioengineering & Dept. of Computing, Imperial College London, London, UK; Chair of Digital Health & Data Science, University of Bayreuth, Bayreuth, Germany (aldo.faisal@imperial.ac.uk)
+ Abstract—Reaching disability limits an individual's ability to perform daily tasks. Surface Functional Electrical Stimulation (FES) offers a non-invasive solution to restore lost ability. However, inducing desired movements using FES is still an open engineering problem. This problem is accentuated by the complexities of human arms' neuromechanics and the variations across individuals. Reinforcement Learning (RL) emerges as a promising approach to learn customised control rules for different settings. Yet, one remaining challenge of controlling FES systems with RL is unobservable muscle fatigue that progressively changes as an unknown function of the stimulation, thereby breaking the Markovian assumption of RL. In this work, we present a method to address the unobservable muscle fatigue issue, allowing our RL controller to achieve higher control performance. Our method is based on a Gaussian State-Space Model (GSSM) that utilizes recurrent neural networks to learn Markovian state-spaces from partial observations. The GSSM is used as a filter that converts the observations into a state-space representation for RL, preserving the Markovian assumption. Here, we start by presenting the modification of the original GSSM to address an overconfidence issue. We then present the interaction between RL and the modified GSSM, followed by the setup for FES control learning. We test our RL-GSSM system on a planar reaching setting in simulation using a detailed neuromechanical model. The results show that the GSSM can help improve the RL's control performance to a level comparable to the ideal case in which the fatigue is observable.
+ Index Terms—Functional Electrical Stimulation, FES, Gaussian State-Space Model, Reinforcement Learning, Arm Motions
+ I. INTRODUCTION
+ Every year, strokes and spinal cord injuries leave individuals around the world with paralysis. Upper-body paralysis, one of the most common outcomes of such incidents, causes the dysfunction of arm movements and severely affects the individuals' abilities to perform daily tasks. Functional Electrical Stimulation (FES), a technique that uses electrical signals to induce muscle contraction, offers a solution for restoring the movements. Yet, controlling FES to induce desired movements is challenging. One challenge is that each individual requires customised stimulation to induce a certain movement. This causes difficulties in designing a control method that works across different individuals without intensive, manual configuration. Another challenge is that the muscles' responses to FES change over time because of muscular fatigue. Since the fatigue level cannot be directly measured, it is difficult for a controller to maintain its performance over extended periods. (We acknowledge funding from the Royal Thai Government Scholarship to NW and a UKRI Turing AI Fellowship to AAF.)
+ Several methods that can automatically find customised stimulation have been investigated. One of those is Reinforcement Learning (RL), a machine learning algorithm with a learning agent (RL agent) that learns to control an environment through interaction. The successes of RL in controlling body movements have been presented in several scenarios: cycling [1], walking [2], and arm movements [3]–[7]. Additionally, [7] shows that RL can deal with fatigue to a certain degree; yet, a performance drop is still inevitable in many cases.
+ Different approaches have been employed to deal with muscular fatigue. A widely used approach is to record the electromyogram (EMG) or mechanomyogram (MMG), from which muscle force can be estimated [8]–[11]. Although this approach can be straightforward, its successes are currently limited to a few types of movements such as knee extension [10], [11] and cycling [8], [9]. Additionally, it requires sensors that can be difficult to set up. Approaches that exploit the periodic nature of movements such as walking are used in [12], [13]. However, these may not be suitable for controlling arbitrary arm movements. Our previous work [6] explores an approach that does not use dedicated sensors and can be applied to arbitrary movements. The approach uses a recurrent neural network (RNN) to encode the history of observations and provide additional information to the RL agent. This strategy can control arbitrary single-joint movements in the real world. However, its capability in multiple-joint cases is limited.
+ In this work, we present an AI-based system for controlling FES that can induce arbitrary desired movements and maintain performance under progressive muscular fatigue. Our system uses the combination of an RNN-based Gaussian state-space model (GSSM) that learns Markovian state-representations and RL that learns the control policies on the representation spaces. In simple terms, the GSSM here functions as a filter that provides insight into the system's hidden states to the RL agents, allowing the agents to select better actions. Compared to our previous work [6], this system is more powerful and capable of learning the complex dynamics of multiple-joint movements. Additionally, it produces probabilistic transition functions that can be useful, for example, for model-based RL. We present the details of our RL-GSSM system and the setup for controlling arbitrary movements in the Methods section. We also provide the modification of the original GSSM [14] to address an overconfidence issue. We demonstrate our system in a planar arbitrary-reaching setting using a detailed neuromechanical model and show that our system can achieve and maintain control performance at the same level as the ideal case in which muscle fatigue is observable.
60
+ page_content=' II.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
61
+ page_content=' METHODS A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
62
+ page_content=' Gaussian State-Space Models (GSSM) Here, GSSM functions as a filter that converts an observable environment’s state vector (ot) into a state-representation vec- tor (xt) which contains the information of the system’s hidden states.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
63
+ page_content=' Our GSSM is based on [14] whose main components are an RNN-based filter (fF ilter) and a transition function (fT ran).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
64
+ page_content=' The filter converts ot into xt through a process described as follows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
65
+ page_content=' The process starts at the zeroth time step (t = 0) with the initialisation of the RNN’s hidden states (h0) and state representations (x0).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
66
+ page_content=' x0 is then concatenated with the initial action vector ainit and is passed through Ws, a small multilayer perceptron (MLP).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
67
+ page_content=' This step is mathemati- cally expressed as hx,t=0 = Ws([x0;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
68
+ page_content=' a0]T ).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
69
+ page_content=' Meanwhile, the RNN observes the environment’s states o0 and updates its hidden state to ht=1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
70
+ page_content=' hx,t=0 and ht=1 are then combined as hc,t=1 = 1 2 tanh(hx,t=0 + ht=1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
71
+ page_content=' Next, hc,t=1 is passed through Wx which is an MLP that outputs the distribution of xt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
72
+ page_content=' The following time steps repeat this process but start with the sampled xt and actual actions at.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
73
+ page_content=' The trajectory of xt, denoted as x0:T , is obtained by repeating this process through the whole trajectory of observations o0:T .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
74
+ page_content=' For future notation, RNN, Wh, and Wx are referred collectively as fF ilter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
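The per-step computation of f_Filter can be sketched numerically. The following is a minimal NumPy sketch, not the paper's implementation: the layer sizes, the single-layer MLPs standing in for W_s and W_x, and the plain tanh RNN update are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(w, b, v):
    """One-layer MLP with tanh activation (illustrative stand-in for W_s / W_x)."""
    return np.tanh(w @ v + b)

# Illustrative dimensions (assumptions, not from the paper).
dim_o, dim_x, dim_a, dim_h = 4, 3, 4, 8
Ws_w, Ws_b = rng.normal(size=(dim_h, dim_x + dim_a)), np.zeros(dim_h)
Wx_w, Wx_b = rng.normal(size=(2 * dim_x, dim_h)), np.zeros(2 * dim_x)
U_o, U_h = rng.normal(size=(dim_h, dim_o)), rng.normal(size=(dim_h, dim_h))

def filter_step(x_prev, a_prev, h_prev, o_t):
    # 1) Embed the previous representation and action: h_x = W_s([x; a]).
    h_x = mlp(Ws_w, Ws_b, np.concatenate([x_prev, a_prev]))
    # 2) The RNN observes o_t and updates its hidden state (plain tanh RNN here).
    h_t = np.tanh(U_o @ o_t + U_h @ h_prev)
    # 3) Combine: h_c = 0.5 * tanh(h_x + h_t).
    h_c = 0.5 * np.tanh(h_x + h_t)
    # 4) W_x outputs the mean and log-variance of a diagonal Gaussian over x_t.
    out = mlp(Wx_w, Wx_b, h_c)
    mu, log_var = out[:dim_x], out[dim_x:]
    x_t = mu + np.exp(0.5 * log_var) * rng.normal(size=dim_x)  # reparameterised sample
    return x_t, h_t, mu, np.exp(log_var)

x_t, h_t, mu, var = filter_step(np.zeros(dim_x), np.zeros(dim_a),
                                np.zeros(dim_h), rng.normal(size=dim_o))
```

Repeating `filter_step` over o_{0:T} yields the trajectory x_{0:T} described above.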
+ The GSSM is trained on trajectories of observations (o_{0:T}) as follows. The training process starts by using f_Filter to sample x_{0:T} corresponding to o_{0:T}. Next, we reconstruct the observations by passing the sampled x_{0:T} through the observation mapping function W_g, expressed as k_{0:T} = W_g(x_{0:T}). The parameters of f_Filter are optimised through gradient descent to minimise the following loss functions. The first loss is the likelihood between k_{0:T} and o_{0:T}, expressed as l_lik = Σ_{t=1}^{T} p(o_t | µ_{k,t}, Σ_{k,t}), where µ_{k,t} and Σ_{k,t} are the mean and covariance of the reconstructed observations, respectively. The second loss is the KL divergence between the x_{0:T} distribution sampled by f_Filter and that predicted by f_Tran, expressed as l_DKL = Σ_{t=2}^{T} D_KL[f_Filter(x_{t−1}, o_{0:t}) || f_Tran(x_{t−1})]. Intuitively, this loss encourages the filter-generated distribution of x_t, p_f(x_t), to have a Markovian structure, i.e., p_f(x_t | x_{t−1}, o_{0:t}) = p(x_t | x_{t−1}). Note that the observation history o_{0:t−1} is encoded in the RNN's hidden states.
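The per-step terms of both losses have closed forms for Gaussians. A hedged sketch, assuming diagonal covariances (the paper does not state the covariance structure):

```python
import numpy as np

def gauss_log_lik(o, mu, var):
    """Log-density of o under a diagonal Gaussian N(mu, diag(var))."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (o - mu) ** 2 / var)

def kl_diag_gauss(mu_f, var_f, mu_t, var_t):
    """Closed-form KL( N(mu_f, var_f) || N(mu_t, var_t) ) for diagonal Gaussians."""
    return 0.5 * np.sum(np.log(var_t / var_f)
                        + (var_f + (mu_f - mu_t) ** 2) / var_t - 1.0)

mu = np.array([0.5, -1.0])
var = np.array([0.2, 0.3])
print(kl_diag_gauss(mu, var, mu, var))  # → 0.0 (KL of a distribution with itself)
```

Here `gauss_log_lik` plays the role of one term of l_lik (with k_t's moments as `mu`, `var`), and `kl_diag_gauss` one term of l_DKL with the filter's moments on the left and f_Tran's on the right.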
+ In the original model [14], f_Tran is represented by a neural network that directly outputs the means and variances of x_t. This network produces overconfidence in the learned transition function. To mitigate this issue, we replace that network with an ensemble of neural networks with randomised prior functions (RP-Ensemble) [15]. The predictive means and variances are computed by fitting Gaussian distributions to the ensemble's outputs.
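The moment-matching step can be sketched as follows. This is an illustrative sketch only: it takes each member's point prediction and fits a Gaussian across members, and it omits the randomised prior construction itself (in [15], each member is a trainable network plus a fixed random prior network).

```python
import numpy as np

def ensemble_moments(member_means):
    """Fit a Gaussian to the predictions of K ensemble members.

    member_means: array of shape (K, dim_x), one mean prediction per member.
    Returns the moment-matched predictive mean and variance; the spread
    across members provides the epistemic uncertainty that a single
    network tends to underestimate.
    """
    mu = member_means.mean(axis=0)
    var = member_means.var(axis=0)
    return mu, var

preds = np.array([[0.9, 2.1], [1.1, 1.9], [1.0, 2.0]])  # K=3 members, dim_x=2
mu, var = ensemble_moments(preds)
```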
+ B. Generic RL-GSSM for controlling arbitrary movements
+ Reinforcement Learning (RL) learns a task through reward signals collected from interactions with an environment. The interactions occur in a discrete-time fashion, starting with the agent observing the environment's state s_t and selecting an action a_t based on its policy π. The action causes the environment to move into a new state s_{t+1}. The agent then receives an immediate reward r_t and observes the new state. This interaction experience is collected as a tuple (s_t, a_t, r_t, s_{t+1}) which is stored in a replay buffer D. These tuples are used to learn an optimal policy π* that maximises the return R, the sum of discounted immediate rewards.
+ The introduction of the GSSM into the system causes a few changes to the typical RL learning process. To avoid confusing notation, we hereafter use s_t to denote RL state vectors. Fig. 1 shows an overview diagram of our RL-GSSM system. The system has two phases, an interaction phase and an updating phase, described as follows. At each time step in the interaction phase, f_Filter observes o_t, updates the RNN's hidden states, and generates state-representations x_t. The agent then selects an action a_t based on s_t = [o_t; x_t; c_t]^T, where c_t is the control target at time t. The action affects the environment, the system moves into the next time step, and the process repeats. The interactions are stored as ([o_t; c_t]^T, a_t, r_t, [o_{t+1}; c_{t+1}]^T) in a Trajectory Buffer.
+ The updating phase begins by drawing sampled trajectories (õ_{0:T}) from the Trajectory Buffer and using them to update the GSSM. After that, the updated f_Filter is used to generate new trajectories of s_t corresponding to õ_{0:T}. The new s_t trajectories are then converted into new RL experience tuples stored in a typical Replay Buffer, and the RL agent is updated following a typical method.
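The relabelling step of the updating phase can be sketched as below. The stub `filter_fn` stands in for the updated f_Filter, and all names and dimensions are illustrative assumptions, not the paper's code.

```python
import numpy as np

def relabel_trajectory(obs, targets, actions, rewards, filter_fn):
    """Convert a stored trajectory into RL experience tuples.

    obs: (T+1, dim_o) observations; targets: (T+1, dim_c) control targets;
    actions: (T, dim_a); rewards: (T,). filter_fn maps an observation history
    o_{0:t} to a state-representation x_t (stand-in for the updated f_Filter).
    Returns a list of (s_t, a_t, r_t, s_{t+1}) tuples for the Replay Buffer.
    """
    xs = [filter_fn(obs[: t + 1]) for t in range(len(obs))]
    states = [np.concatenate([obs[t], xs[t], targets[t]])
              for t in range(len(obs))]
    return [(states[t], actions[t], rewards[t], states[t + 1])
            for t in range(len(actions))]

# Toy usage with a trivial "filter" that averages the observation history.
T, dim_o = 3, 2
obs = np.arange((T + 1) * dim_o, dtype=float).reshape(T + 1, dim_o)
tuples = relabel_trajectory(
    obs, np.zeros((T + 1, 1)), np.zeros((T, 1)), np.zeros(T),
    filter_fn=lambda hist: hist.mean(axis=0),
)
```

Because the representations are regenerated with the freshly updated filter, old trajectories remain usable even as f_Filter improves.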
111
+ page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
112
+ page_content=' RL-GSSM setup for controlling planar movements The environment here is a neuromechanical model built in OpenSim.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
113
+ page_content=' The model has a human arm placed on an arm support that moves with low friction on a table Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
114
+ page_content='2b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
115
+ page_content=' The Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
116
+ page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
117
+ page_content=' (a) Diagram showing the overview of our RL-GSSM system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
118
+ page_content=' The dash blue line splits RL and GSSM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
119
+ page_content=' The GSSM’s parts in yellow boxes are excluded during the interaction phase.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
120
+ page_content=' This phase starts with the initialisation (on the left) and evolves as follows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
121
+ page_content=' At the time step t, The previous action at−1 are appended to the state-representations of the previous time step xt−1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
122
+ page_content=' The Filter then combines the appended vector with the incoming observation ot and samples the state-representations of the current time step xt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
123
+ page_content=' The average of xt, denoted as ¯xt, is concatenated with ot and a control target ct and become an RL’s state vector st.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
124
+ page_content=' The interaction data are stored in Trajectory Buffer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
125
+ page_content=' (b) Diagram showing the overview of the training phase that begins with sampling the stored trajectories and updating GSSM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
126
+ page_content=' The updated Filter is then used to generate new RL’s experience tuples which are used to update the RL agent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'}
127
The model has 6 muscles; the 4 muscles labelled in the figure are stimulated. The muscles fatigue progressively as a function of the stimulation (see [1] for more details). The observable environment states are the angles and angular velocities of the shoulder and elbow (ot = [θs,t; θe,t; ˙θs,t; ˙θe,t]T).
The RL algorithm of choice is soft actor-critic [16]. Both the actor and the critic are parameterised by fully-connected neural networks with two hidden layers. The actor's output layer has a sigmoid activation function to squash the outputs within [0, 1].
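A minimal sketch of such an actor network in plain NumPy follows. The hidden-layer width, ReLU activations, and weight initialisation are illustrative assumptions, and the actual SAC actor is stochastic; only the two-hidden-layer structure and the sigmoid output layer come from the text.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class Actor:
    """Fully-connected actor with two hidden layers and a sigmoid
    output layer squashing stimulation intensities into [0, 1]."""

    def __init__(self, d_state, d_action, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        sizes = [d_state, hidden, hidden, d_action]
        self.W = [rng.normal(scale=0.1, size=(m, n))
                  for m, n in zip(sizes[:-1], sizes[1:])]
        self.b = [np.zeros(n) for n in sizes[1:]]

    def __call__(self, s):
        h = s
        for W, b in zip(self.W[:-1], self.b[:-1]):
            h = np.maximum(h @ W + b, 0.0)           # ReLU hidden layers
        return sigmoid(h @ self.W[-1] + self.b[-1])  # actions in [0, 1]

actor = Actor(d_state=10, d_action=4)
a = actor(np.ones(10))   # 4 stimulation intensities, each in (0, 1)
```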
The RL task here is to apply muscle stimulation to move the arm to desired poses, which are specified by target joint angles of the shoulder and elbow (θtar,t). The state vector st is [ot; xt; θtar,t]T. The action vector at comprises the normalised stimulation intensities (ai ∈ [0, 1]) of the stimulated muscles. The immediate reward rt is computed from the squared tracking error and an action penalty as rt = −(θt − θtar,t)^2 − (1/n) Σ_{i=1}^{n} ai, where n is the number of stimulated muscles.
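The reward can be computed as below; summing the squared error over the two joints is our reading of the vector expression, so treat this as a sketch rather than the authors' exact implementation.

```python
import numpy as np

def reward(theta, theta_tar, actions):
    """rt = -(squared tracking error) - (mean stimulation intensity).

    theta, theta_tar: joint angles [shoulder, elbow]
    actions: normalised stimulation intensities, each in [0, 1]
    """
    tracking_error = np.sum((theta - theta_tar) ** 2)
    action_penalty = np.sum(actions) / len(actions)
    return -tracking_error - action_penalty

# Perfect tracking leaves only the action penalty (here ~0.2).
r = reward(np.array([45.0, 45.0]), np.array([45.0, 45.0]),
           np.array([0.2, 0.4, 0.0, 0.2]))
```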
The training is episodic. Each episode has 100 time steps with a 100 ms time step size. The episodes begin at random poses, targets, and fatigue levels. A new random target is assigned at the 50th time step. Every 5 training episodes, the control performance is evaluated in the RMSE measure on 50 test episodes with the same settings as the training episodes.
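The RMSE evaluation over test episodes can be sketched as follows; the episode roll-out is a stub (actual angle traces would come from the simulator), and the helper names are ours.

```python
import numpy as np

def episode_rmse(theta_trace, target_trace):
    """Root-mean-square tracking error over one episode.

    theta_trace, target_trace: (T, 2) arrays of actual and target
    joint angles [shoulder, elbow] over T = 100 time steps.
    """
    return float(np.sqrt(np.mean((theta_trace - target_trace) ** 2)))

def evaluate(policy_rollout, n_test_episodes=50):
    """Average RMSE over the test episodes."""
    return float(np.mean([episode_rmse(*policy_rollout(ep))
                          for ep in range(n_test_episodes)]))

# Stub roll-out: a constant 2-degree offset from the target everywhere.
def stub_rollout(ep):
    target = np.zeros((100, 2))
    return target + 2.0, target

rmse = evaluate(stub_rollout)   # constant offset -> RMSE of 2.0
```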
III. RESULTS

A. Ensemble transition function

We replace fTrans of the original model [14], denoted fTr,Ori, with RP-Ensemble, denoted fTr,Ens, to address the overconfidence issue. We test both models on the kink benchmarking function [17]. Fig. 2a shows the learned transitions. Both fTr,Ori and fTr,Ens produce good predictive means. However, fTr,Ori is overconfident, as shown by its low predictive variances at locations where the data, represented by × marks, are absent. In contrast, fTr,Ens has higher predictive variances at those locations.
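The RP-Ensemble idea [15] — each member pairs a trainable network with a frozen, randomly initialised prior network, and the disagreement between members supplies the predictive variance — can be sketched as below. Linear members stand in for the networks, training is omitted, and the constants are illustrative assumptions.

```python
import numpy as np

class RPEnsemble:
    """Each member predicts f_k(x) + beta * p_k(x), where p_k is a
    frozen random prior.  The spread across members gives a predictive
    variance that stays large away from the training data."""

    def __init__(self, n_members=5, d_in=1, beta=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.beta = beta
        # Trainable parts (here: untrained linear maps) and frozen priors.
        self.f = [rng.normal(size=(d_in,)) for _ in range(n_members)]
        self.p = [rng.normal(size=(d_in,)) for _ in range(n_members)]

    def predict(self, x):
        preds = np.stack([x @ f + self.beta * (x @ p)
                          for f, p in zip(self.f, self.p)])
        return preds.mean(axis=0), preds.var(axis=0)

ens = RPEnsemble()
mean, var = ens.predict(np.array([[0.0], [2.0]]))
# At x = 0 every linear member predicts 0, so the variance there is 0;
# away from 0 the members disagree and the variance grows.
```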
B. Controlling planar arm movements

We train our RL-GSSM to control planar arm movements under progressive muscular fatigue through muscle stimulation. We explore 3 cases: 1) the RL-ideal and 2) RL-vanilla cases, where the fatigue is observable and unobservable, respectively; and 3) the RL-GSSM case. The RL agents are trained for 100 episodes in all cases; the training is repeated 10 times. Fig. 2c shows the performance evaluations in the RMSE measure along the training. RL-vanilla's performance has the steepest improvement at the beginning but stagnates at the worst level. RL-GSSM's curve, compared to RL-ideal's, has higher standard deviations in the early period because the agents have to simultaneously learn the controls and follow the not-yet-converged GSSM. RL-GSSM's performance improves slightly more slowly but reaches the same level within 100 episodes.
Fig. 3 shows the control behaviours in tracking an arbitrary trajectory. The agents produce good tracking in all cases. The grey circles highlight good comparison points. Both RL-ideal (Fig. 3a) and RL-GSSM (Fig. 3c) can bring the shoulder and elbow to the [45°, 45°] targets whenever requested. RL-vanilla, however, tends to lose its performance in the second half as the actual angles increasingly deviate from the targets (Fig. 3b). Fig. 3d-f show the stimulation (solid lines) and the %maximum force that the muscles can produce (dashed lines). The %maximum force decreases over time as the stimulation induces muscular fatigue. Compared to RL-ideal (Fig. 3d), RL-vanilla (Fig. 3e) over-stimulates and causes rapid declines of the muscle forces. The declines in the RL-GSSM and RL-ideal cases are at the same rate on average. RL-GSSM's stimulation has small noise along the session.
Fig. 2. (a) The learnt kink function of (left) the original GSSM and (right) the GSSM with the RP-Ensemble transition function. (b) The neuromechanical model of planar arm movement built in OpenSim. (c) The control performances evaluated along the training. The shades show the standard deviations of 10 runs.
Fig. 3. Control behaviours in tracking an arbitrary target trajectory. (a-c) Plots showing the targets (dashed) and the actual angles (solid) achieved in the (a) RL-ideal, (b) RL-vanilla, and (c) RL-GSSM cases. (d-f) The %maximum stimulation that the RL agents apply to the muscles (solid) and the %maximum forces that the muscles can produce (dashed). The %maximum forces decrease in response to the muscular fatigue induced by the stimulation.
IV. CONCLUSIONS

We present an AI-based approach for controlling FES under progressive muscular fatigue. Our RL-GSSM approach uses RL to learn the control policies and a GSSM, modified to address the overconfidence issue, to provide Markovian state-representations to the RL agent. We demonstrate our approach by controlling arbitrary planar arm movements using a detailed neuromechanical model. We show that our RL-GSSM can achieve and maintain its control performance at the same level as the ideal case where the fatigue is observable.
REFERENCES

[1] N. Wannawas, M. Subramanian, and A. A. Faisal, “Neuromechanics-based deep reinforcement learning of neurostimulation control in FES cycling,” in Intl. IEEE/EMBS Conf. on Neural Engineering (NER), 2021.
[2] A. Anand et al., “A deep reinforcement learning based approach towards generating human walking behavior with a neuromuscular model,” in 19th Intl. Conf. on Humanoid Robots, 2019.
[3] P. Thomas et al., “Creating a reinforcement learning controller for functional electrical stimulation of a human arm,” in 14th Yale Workshop on Adaptive and Learning Systems, 2008.
[4] K. M. Jagodnik et al., “Human-like rewards to train a reinforcement learning controller for planar arm movement,” IEEE Trans. on Human-Machine Systems, vol. 46, pp. 723–733, 2016.
[5] D. N. Wolf, Z. A. Hall, and E. M. Schearer, “Model learning for control of a paralyzed human arm with functional electrical stimulation,” in IEEE Intl. Conf. on Robotics and Automation (ICRA), 2020, p. 10148.
[6] N. Wannawas, A. Shafti, and A. A. Faisal, “Neuromuscular reinforcement learning to actuate human limbs through FES,” in IFESS22, 2022.
[7] J. Abreu et al., “Deep reinforcement learning for control of time-varying musculoskeletal systems with high fatigability: a feasibility study,” IEEE Trans. on Neural Sys. and Rehab. Eng., 2022.
[8] B. Woods, M. Subramanian, A. Shafti, and A. A. Faisal, “Mechanomyography based closed-loop functional electrical stimulation cycling system,” in 7th IEEE Intl. Conf. on Biomed. Robotics and Biomechatronics, 2018, pp. 179–184.
[9] M. Islam et al., “Mechanomyography responses characterize altered muscle function during electrical stimulation-evoked cycling in individuals with spinal cord injury,” Clinical Biomechanics, vol. 58, 2018.
[10] J. Naeem et al., “Electrical stimulator with mechanomyography-based real-time monitoring, muscle fatigue detection, and safety shut-off: A pilot study,” Biomedizinische Technik, vol. 65, 2020.
[11] E. Krueger et al., “Neuromuscular fatigue detection by mechanomyography in people with complete spinal cord injury,” Research on Biomedical Engineering, vol. 36, pp. 203–212, 2020.
[12] A. J. Del-Ama, Á. Gil-Agudo, J. L. Pons, and J. C. Moreno, “Hybrid FES-robot cooperative control of ambulatory gait rehabilitation exoskeleton,” J. NeuroEngineering and Rehabilitation, vol. 11, 2014.
[13] K. H. Ha et al., “An approach for the cooperative control of FES with a powered exoskeleton during level walking for persons with paraplegia,” IEEE Trans. on Neural Sys. and Rehab. Eng., vol. 24, 2016.
[14] R. G. Krishnan, U. Shalit, and D. Sontag, “Structured inference networks for nonlinear state space models,” in AAAI, 2017.
[15] I. Osband, J. Aslanides, and A. Cassirer, “Randomized prior functions for deep reinforcement learning,” in NIPS, 2018.
[16] T. Haarnoja et al., “Soft actor-critic algorithms and applications,” arXiv:1812.05905v2 [cs.LG], 2019.
[17] A. D. Ialongo et al., “Overcoming mean-field approximations in recurrent Gaussian process models,” in 36th ICML, 2019.
BNE4T4oBgHgl3EQfFAx2/content/tmp_files/2301.04882v1.pdf.txt ADDED
@@ -0,0 +1,2112 @@
+ ZScribbleSeg: Zen and the Art of Scribble Supervised Medical Image Segmentation
+ Ke Zhang and Xiahai Zhuang ⋆
+ School of Data Science, Fudan University, Shanghai
+ zxh@fudan.edu.cn
+ Abstract. Curating a large-scale fully-annotated dataset can be both labour-intensive
+ and expertise-demanding, especially for medical images. To alleviate this problem, we
+ propose to utilize solely scribble annotations for weakly supervised segmentation.
+ Existing solutions mainly leverage selective losses computed solely on annotated areas
+ and generate pseudo gold standard segmentations by propagating labels to adjacent
+ areas. However, these methods could suffer from inaccurate and sometimes unrealistic
+ pseudo segmentations due to insufficient supervision and incomplete shape features.
+ Different from previous efforts, we first investigate the principle of "good scribble
+ annotations", which leads to efficient scribble forms via supervision maximization and
+ randomness simulation. Furthermore, we introduce regularization terms to encode the
+ spatial relationship and shape prior, where a new formulation is developed to estimate
+ the mixture ratios of the label classes. These ratios are critical in identifying the
+ unlabeled pixels of each class and correcting erroneous predictions, and thus their
+ accurate estimation lays the foundation for the incorporation of the spatial prior.
+ Finally, we integrate the efficient scribble supervision with the priors into a unified
+ framework, denoted as ZScribbleSeg, and apply the method to multiple scenarios.
+ Leveraging only scribble annotations, ZScribbleSeg sets new states of the art on four
+ segmentation tasks using the ACDC, MSCMRseg, MyoPS and PPSS datasets.
+ Keywords: Medical Image Segmentation · Scribble Supervision · Mixture Model ·
+ Medical Image Analysis
+ ⋆ Xiahai Zhuang is the corresponding author. This work was funded by the National
+ Natural Science Foundation of China (Grant No. 61971142 and 62111530195).
+ arXiv:2301.04882v1 [cs.CV] 12 Jan 2023
+ In recent years, deep neural networks have demonstrated their potential on various
+ visual tasks [25]. However, the success of these methods relies on massive annotations,
+ which require assiduous manual efforts. For medical imaging, dense manual labeling
+ can take experienced doctors several hours to annotate just one image, which is both
+ expensive and expertise-demanding [60]. Enormous efforts have contributed to the
+ area of training segmentation networks with weaker annotations [39], including
+ scribbles [27], bounding boxes [34], points [2], and image-level labels [35]. Numerous
+ studies have been reported utilizing only
+ image-level labels [15,46,50,45]. These methods mainly rely on large-scale training
+ datasets, and tend to underperform on small medical image datasets. On the contrary,
+ scribbles are suitable for labeling nested structures and easy to obtain in practice.
+ Several works have demonstrated their potential on both semantic and medical image
+ segmentation [17,21,27]. Therefore, we propose to investigate this specific form of
+ weakly supervised segmentation, which only uses scribble annotations for model
+ training.
+ Conventionally, scribble annotations mainly focus on delineating the structures of
+ interest [42]. This can be effective in segmenting regular structures, i.e., targets with
+ fixed shape patterns; hence, this task is also referred to as regular structure
+ segmentation. However, such methods can be challenged when applied to portray
+ irregular targets with heterogeneous distributions, such as pathologies. This is also
+ referred to as irregular (object) segmentation, which is particularly challenging for
+ medical tasks with small training datasets. Existing scribble learning approaches
+ mainly aim to reconstruct complete labels from scribbles, and use the generated
+ pseudo labels for model training. These works include 1) label expansion strategies
+ that assume pixels with similar features are likely to be in the same category [16,27],
+ and 2) ensemble methods that generate labels by fusing several independent
+ predictions [29]. These methods can be susceptible to the label noise introduced by
+ imprecise segmentation proposals. To overcome this issue, Obukhov et al. proposed a
+ regularization loss [32], which exploited the similarity between labeled and unlabeled
+ areas. Adversarial learning has also been applied to scribble supervised
+ segmentation [42], by leveraging the shape prior provided by additional full
+ annotations.
+ Scribble supervised segmentation generally suffers from inadequate supervision and
+ imbalanced label classes. This leads to poor results, typically under-segmentation of
+ target structures, meaning the volumes of segmented structures tend to be shrunk, as
+ we shall describe in Section 2.3. To address the problem of inadequate supervision, we
+ first investigate the principles of generating "good scribbles", as a guidance both for
+ designing methodologies to augment supervision and for generating manual
+ annotations. The aim is to model efficient scribbles by maximizing the supervision
+ without increasing annotation efforts. Our studies demonstrate that model training
+ benefits from the randomness of widely distributed scribbles and a larger proportion
+ of annotated areas. Inspired by this, we propose to simulate such scribble-annotated
+ images as a means of supervision augmentation. This can be achieved via mixup and
+ occlusion operations on existing training images, and the supervision augmentation is
+ coupled with regularization terms penalizing any inconsistency in the segmentation
+ results.
+ Despite the lack of supervision, scribble annotations typically have imbalanced
+ annotated label proportions and thus biased shape information. This means the model
+ cannot accurately capture the global shape of target structures. We therefore further
+ propose to correct the problematic prediction using prior-based regularization,
+ particularly from the spatial prior. This requires the preceding
+ yet critical step of estimating the mixture proportion (ratio) of each label class
+ (referred to as the π prior). We hence propose a new algorithm to compute this π
+ prior, based on which we develop a spatial loss on the basis of the marginal
+ probability of pixels belonging to certain label classes and the spatial energy. This
+ spatial loss is a regularization term aimed at correcting the shape of segmentation
+ results. The supervision augmentation and prior-based regularization work in a
+ complementary way, and both contribute to stable and robust training on a variety of
+ segmentation tasks.
+ The proposed scribble supervision-based segmentation method, referred to as
+ ZScribbleSeg, extends and generalizes the algorithms in our two preliminary
+ works [52,53], and has more scientific significance in the following aspects. Firstly, we
+ investigate principles of efficient scribble forms to guide the supervision
+ augmentation, which, to the best of our knowledge, have never been reported.
+ Secondly, we leverage the spatial prior to adjust the predicted probability with the
+ computed spatial energy. Thirdly, we implement a series of extensive experiments on
+ various scenarios, including irregular structure segmentation of medical pathology
+ and visual object segmentation. The contributions of this paper are summarized as
+ follows.
+ – We propose a unified framework for scribble-supervised segmentation by modeling
+ efficient scribbles and correcting the network prediction with prior regularization,
+ which significantly alleviates the problems of inadequate supervision and imbalanced
+ label classes.
+ – To the best of our knowledge, this is the first work investigating the principles of
+ scribble forms. Motivated by the conclusion that the network benefits from larger and
+ randomly distributed annotations, we model efficient scribbles by maximizing
+ supervision and simulating randomness.
+ – We propose a novel mechanism to correct the shape of model predictions based on
+ prior regularization, including the π prior, spatial prior, and shape prior. A new
+ algorithm is introduced to estimate the π prior, based on which we further encode the
+ spatial relationship with a spatial prior loss.
+ – Our approach achieved state-of-the-art performance for weakly-supervised
+ segmentation on regular structures from cardiac anatomical imaging, regular
+ structures from pathology-enhanced imaging, irregular objects of medical pathology,
+ and human pose from natural scenes.
+ The rest of this paper is organized as follows: Section 1 briefly introduces the
+ relevant research. In Section 2, we describe the modeling of efficient scribbles and the
+ computation of priors. Section 3 presents the results of the efficiency, ablation, and
+ validation studies. Finally, we conclude this work in Section 4.
+ 1 Related work
+ This section provides a brief review of weakly supervised segmentation methods.
+ Besides, we describe the data augmentation strategies and regularization loss
+ functions that are closely related to our work.
+ Fig. 1. Roadmap of the proposed ZScribbleSeg framework.
+ 1.1 Weakly supervised segmentation
+ Recently, a variety of weakly supervised segmentation strategies have been developed
+ to reduce the manual annotation effort [27,2,34,35]. Among them, scribbles are of
+ particular interest for medical image annotation, given their advantage over bounding
+ boxes in annotating nested structures. Current weakly supervised learning methods
+ with image-level annotations mainly generate label seeds with the Class Activation
+ Map (CAM) [56] first, and then train the network with refined pseudo labels.
+ However, training a CAM requires large-scale training data labeled with rich visual
+ classes, which is not practical in clinical applications. Therefore, we propose to
+ investigate scribble supervised segmentation, due to its efficiency and effectiveness in
+ both medical and visual scenarios.
+ A scribble is a form of sparse annotation that provides labels for a small subset of
+ pixels in an image [39]. Previous approaches mainly calculate losses for the annotated
+ pixels. One group of works is designed to expand the annotations and reconstruct the
+ full label for network training. However, the expansion of labels needs to be achieved
+ through iterative computation, which is particularly time-consuming. To alleviate
+ this, several works removed the relabeling process and instead adopted conditional
+ random fields to refine the segmentation results [9,7,55,40]. However, the common
+ issue is the unstable model training caused by noisy pseudo labels.
+ To obtain high-quality pseudo labels and update them throughout the training
+ process, Luo et al. [29] proposed to mix the predictions from a dual-branch network
+ as auxiliary pseudo labels. This approach has achieved promising results on cardiac
+ segmentation, but it is still susceptible to inaccurate supervision, especially on more
+ challenging tasks with irregular objects. Obukhov et al. [31] introduced the Gated
+ CRF loss for unlabeled pixels, which regularizes model training by exploiting the
+ structural similarity between labeled and unlabeled data. Other works [42,54]
+ included a new module to evaluate the quality of segmentation masks, which
+ encourages the predictions to be realistic, but requires extra full annotations.
+ 1.2 Data augmentation
+ Augmentation methods are investigated to improve model generalization by
+ synthesizing virtual training examples in the vicinity of the training dataset [6].
+ Common strategies include random cropping, rotation, flipping and adding noise [5].
+ Recently, a line of research has been proposed on Mixup augmentation
+ [51,10,49,18,19], which blends two image-label pairs to generate new samples for
+ classification tasks. Input Mixup [51] was introduced to perform linear interpolation
+ between two images and their labels. Manifold Mixup [43] applied the Mixup
+ operation to the feature space. Cutout [10] randomly occluded a square region of the
+ image, and CutMix [49] transplanted the occluded area to another image. Kim et
+ al. [18] proposed Puzzle Mix to leverage the saliency and local statistics to facilitate
+ image combination. Co-Mixup [19] extended this concept from two images to
+ multiple images.
+ For medical image analysis, Mixup methods have been adopted for image
+ segmentation [8] and object detection tasks [44]. Although the mixup operation may
+ generate unrealistic samples, mixed soft labels can provide rich information and
+ improve model performance on semi-supervised segmentation [8].
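As a concrete illustration of the Input Mixup operation described above, the following sketch (our own illustrative code, not from the cited works) linearly interpolates two image-label pairs with a Beta-distributed mixing coefficient:

```python
import numpy as np

def input_mixup(x1, y1, x2, y2, alpha=0.4, rng=None):
    """Blend two image-label pairs by linear interpolation (Input Mixup style).

    `alpha` parameterizes the Beta distribution from which the mixing
    coefficient lambda is drawn; labels are assumed to be one-hot arrays.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2      # mixed image
    y = lam * y1 + (1.0 - lam) * y2      # mixed (soft) label
    return x, y, lam
```

The soft label `y` is what carries the "rich information" mentioned above: it records the mixing proportion rather than a single hard class.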
+ 1.3 Regularization losses
+ Neural networks used to perform pixel-wise image segmentation are typically trained
+ with the cross entropy or Dice loss, which computes the loss for each pixel
+ independently. To predict segmentations that are coherent in the global sense [22],
+ several methods have been proposed to regularize model training. Here, we focus on
+ the consistency regularization and π prior regularization that are most relevant to
+ our work.
+ The consistency regularization leverages the fact that perturbed versions of the same
+ image patch should have consistent segmentations. A series of studies have been
+ conducted on consistency regularization [57,23,41,33]. For semi-supervised learning,
+ regularization is applied to augmented versions of the input image by requiring
+ consistency, to obtain stable predictions for unlabeled images [23,41,33].
+ Fig. 2. Overview of the training losses for the proposed ZScribbleNet, which consists
+ of the modeling of efficient scribbles and the computation of priors. The scribble
+ modeling includes mixup augmentation, regularized with global consistency (Lglobal).
+ There are three priors, i.e., the class mixture ratios (π), the spatial prior and the
+ shape prior, which contribute to the spatial prior loss (Lspatial) and the shape prior
+ loss (Lshape). Note that the spatial prior loss is complementary to the partial cross
+ entropy loss (Lpce), which is solely calculated for labeled pixels.
+ The proposed regularization of the π prior is inspired by binary mixture proportion
+ estimation [3,14,37], which was originally designed for binary (two-class) positive
+ unlabeled learning [11,12,20]. For multi-class segmentation, the mixture ratios of the
+ classes are both imbalanced and inter-dependent, which cannot be solved by existing
+ binary estimation methods.
+ 2 Method
+ 2.1 Overview
+ Problem Setup: This work investigates the scenario of scribble supervised
+ segmentation, where the training images are solely annotated on a small number of
+ pixels, via scribbles, for each label class.
+ Strategy: Instead of solely focusing on techniques of weak supervision, we first
+ investigate different forms of scribbles to derive the principles of efficient scribbles,
+ i.e., maximal supervision without increased scribble effort. These principles enable
+ effective and robust model training with minimal annotation cost. Then, we focus on
+ tackling the major problem of under-segmentation, correcting the model prediction
+ with priors.
+ Solution: We develop ZScribbleSeg consisting of (1) modeling efficient scribbles via
+ supervision maximization and randomness simulation; (2) modeling
+ and computation of priors, including the label class proportion prior, the spatial
+ prior and the shape prior; and (3) integrating these to develop a deep neural network
+ (referred to as ZScribbleNet) with losses of partial cross entropy (Lpce), global
+ consistency (Lglobal), spatial prior (Lspatial) and shape regularization (Lshape), and
+ a training strategy of supervision augmentation and prior regularization. Figure 1
+ presents the roadmap of the proposed framework.
+ 2.2 Principle and modeling of efficient scribbles
+ We investigate the principles of efficient scribbles and derive the objective of
+ maximizing supervision with minimal annotation effort. This leads to the proposal of
+ supervision augmentation. In addition, we propose a global consistency loss to
+ penalize any non-equivalence in the augmentation.
+ Principles of efficient scribbles: We shall verify the following two principles of
+ achieving efficient scribble annotation, in terms of maximal supervision, through the
+ experiments in Section 3.2:
+ (1) A large proportion of pixels annotated by scribbles compared with the whole set.
+ (2) Randomness of the distribution of scribbles, represented by random and
+ wide-range annotations.
+ Firstly, we are motivated by the knowledge that model training benefits from the
+ finer gradient flow through a larger proportion of annotated pixels [39]. Therefore,
+ we try to increase the annotation proportion at the same effort. One natural idea is
+ to simply expand the width of the scribbles. However, this only increases the label
+ amount in a local area, and lacks the ability to enlarge the annotation range across
+ the entire image.
+ Secondly, we are inspired by the fact that imaging data are easier to restore from
+ random samples of pixels than from down-sampled low-resolution images with
+ regular patterns [13]. This is due to the fact that randomly and sparsely distributed
+ samples maintain the global structure of the imaging data, which can therefore be
+ restored with existing low-rank or self-similarity regularization terms. By contrast,
+ regularly down-sampled low-resolution images have evidently reduced tensor ranks
+ compared with the original high-resolution data, and thus lose the global structure
+ information. Motivated by this, we assume the features of the full segmentation
+ (analogous to the global structure information) can be portrayed (restored) with
+ sparse scribble annotations randomly and widely distributed within the entire
+ dataset. With such scribble annotations, the segmentation network can easily learn
+ the global shape prior.
+ Based on the observations described above, we propose to model efficient scribbles
+ by supervision augmentation, simulating a large annotation proportion and
+ randomness of the scribble distribution.
+ Modeling via supervision augmentation: We aim to generate training images with
+ efficient scribbles by maximizing the supervision via mixup operations and achieving
+ the randomness via occlusion operations. This resembles data augmentation, which
+ increases the data diversity and enables robust training.
+ Search optimal annotation with mixup: Motivated by the principles of efficient
+ scribbles, we first seek the optimal scribble with a large annotated ratio, high
+ supervision, and unchanged local features. To achieve this, instead of maximizing the
+ annotations directly, we aim to maximize the saliency of the mixed images, which
+ measures the sensitivity of the model to its inputs. Given that annotated areas tend
+ to be accompanied by high saliency, maximizing saliency also increases the scribble
+ annotations.
+ For two image-scribble pairs (X1, Y1), (X2, Y2) of dimension n, we denote the
+ resulting mixed image-label pair as (X′12, Y′12). The transportation process is
+ defined by:
+ X′12 = T(X1, X2) and Y′12 = T(Y1, Y2), (1)
+ T(X1, X2) = (1 − β) ⊙ Π1 X1 + β ⊙ Π2 X2, (2)
+ where T(X1, X2) represents the transportation process between images X1 and X2;
+ Πi denotes the transportation matrix of size n × n for image Xi; β denotes the mask
+ of dimension n with values in [0, 1]; and ⊙ is the element-wise multiplication. Then,
+ we aim to maximize the saliency of the transportation result over the parameters
+ {Π1, Π2, β}:
+ {Π1, Π2, β} = arg max_{Π1,Π2,β} [(1 − β) ⊙ Π1 M(X1) + β ⊙ Π2 M(X2)], (3)
+ where M(X) denotes the saliency map of image X, which is obtained by computing
+ the l2 norm of the gradient values. We solve this optimization problem based on
+ PuzzleMix [18]. To preserve the local statistical features, the optimization objective
+ also includes the image local smoothness and the mixing weight prior. For details of
+ the optimization objective, we refer readers to PuzzleMix [18] and Appendix A of the
+ supplementary materials.
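For intuition, the transportation process T of Eq. (2) can be sketched in a heavily simplified form: fixing the transportation matrices to the identity reduces T to mask-based mixing of two image-scribble pairs. This reduction is our own illustrative assumption; the actual method optimizes {Π1, Π2, β} with PuzzleMix [18] rather than using a random mask.

```python
import numpy as np

def mask_mix(x1, y1, x2, y2, block=16, rng=None):
    """Simplified instance of Eq. (2): with the transportation matrices fixed
    to the identity, T reduces to mask-based mixing. `block` is the side
    length of the cells over which the binary mask beta is constant.
    Illustrative sketch only, not the PuzzleMix-based optimization."""
    rng = rng or np.random.default_rng()
    h, w = x1.shape
    # Random binary mask beta, constant over block x block cells.
    cells = rng.integers(0, 2, size=(h // block, w // block))
    beta = np.kron(cells, np.ones((block, block)))
    x = (1 - beta) * x1 + beta * x2   # mixed image, Eq. (2) with Pi = identity
    y = np.where(beta > 0, y2, y1)    # scribble map mixed with the same mask
    return x, y
```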
+ Introduce randomness via occlusion: We propose to simulate randomly distributed
+ scribbles via occlusion. Specifically, one square area of the mixed image is randomly
+ dropped and replaced with the background. Since the proportion of the background
+ annotated by scribbles tends to be smaller than that of the foreground classes, the
+ occlusion operation alleviates the imbalance problem of the class mixture ratios
+ within the labeled pixels, and further improves the results of the mixture ratio
+ estimation, which will be elaborated in Section 2.3.
+ We denote the occluded image-label pair as (X′′12, Y′′12), which is obtained by:
+ X′′12 = (1 − 1b) ⊙ X′12, (4)
+ Y′′12 = (1 − 1b) ⊙ Y′12, (5)
+ where 1b denotes a rectangular mask of size n × n with values in [0, 1]. The
+ rectangular mask is randomly rotated to occlude the mixed image, and turns the
+ occluded area into background. Following [49], we set the size of the rectangle to
+ 32 × 32.
+ Fig. 3. Illustration of supervision augmentation and global consistency. Supervision
+ maximization is achieved with the mix augmentation to increase the annotated
+ proportion and data variety. Global consistency requires the segmentation results of
+ the mixed and unmixed images to be consistent.
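The occlusion of Eqs. (4)-(5) can be sketched as follows. For simplicity, this hypothetical helper uses an axis-aligned square; the method described above additionally rotates the mask randomly.

```python
import numpy as np

def occlude(x, y, size=32, background_label=0, rng=None):
    """Random square occlusion following Eqs. (4)-(5): a size x size region of
    the mixed image is dropped (set to background), and the scribble labels in
    the same region are set to the background class. Axis-aligned sketch only;
    the paper randomly rotates the rectangular mask."""
    rng = rng or np.random.default_rng()
    h, w = x.shape
    top = rng.integers(0, max(h - size, 1))
    left = rng.integers(0, max(w - size, 1))
    x_out, y_out = x.copy(), y.copy()
    x_out[top:top + size, left:left + size] = 0.0              # (1 - 1b) . X'
    y_out[top:top + size, left:left + size] = background_label
    return x_out, y_out
```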
+ Global consistency loss: The objective of global consistency regularization is
420
+ to leverage the mix-invariant property. As Figure 3 shows, global consistency
421
+ requires the same image patch to have consistent segmentation in two scenarios,
422
+ i.e., the unmixed image and the mixed image. Let the segmentation result of
423
+ image X predicted by network be ˆY = f(X). For the transported image X′
424
+ 12 =
425
+ T(X1, X2), the consistency of mixup is formulated as:
426
+ T(f(X1), f(X2)) = f(T(X1, X2)),
427
+ (6)
428
+ which requires the segmentation of mixed image to be consistent with the mixed
429
+ segmentation, after the same transportation process. When applying the occlu-
430
+ sion operation, we further have:
431
+ (1 − 1b) ⊙ T(ˆY1, ˆY2) = f((1 − 1b) ⊙ T(X1, X2)).    (7)
433
+ Then, we propose to minimize the distance between the two sides of Eq.(7). Let
+ u12 = (1 − 1b) ⊙ T(ˆY1, ˆY2) and v12 = f((1 − 1b) ⊙ T(X1, X2)). The negative
+ cosine similarity Ln(u12, v12) is defined as:
+ Ln(u12, v12) = − (u12 · v12) / (||u12||_2 · ||v12||_2).    (8)
441
+ Taking the symmetrical metric into consideration, we similarly penalize the in-
442
+ consistency between u21 and v21. Therefore, the global consistency loss is for-
443
+ mulated as:
+ Lglobal = (1/2) [Ln(u12, v12) + Ln(u21, v21)].    (9)
447
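The negative cosine similarity of Eq.(8) and the symmetric loss of Eq.(9) can be sketched as below. The names `neg_cosine` and `global_consistency_loss` are hypothetical, and the inputs are assumed to be plain NumPy score maps rather than framework tensors.

```python
import numpy as np

def neg_cosine(u, v, eps=1e-8):
    """Negative cosine similarity of Eq. (8), on flattened score maps."""
    u, v = np.ravel(u), np.ravel(v)
    return -float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps)

def global_consistency_loss(u12, v12, u21, v21):
    """Symmetric global consistency loss of Eq. (9)."""
    return 0.5 * (neg_cosine(u12, v12) + neg_cosine(u21, v21))
```

When the mixed-then-segmented and segmented-then-mixed maps agree perfectly, the loss reaches its minimum of −1.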
+
+ [Figure 3 diagram: Image 1/Scribble 1 and Image 2/Scribble 2 are mixed and
+ occluded; the network outputs Seg 1, Seg 2 and the mixed segmentation, supervised
+ by pce losses and the global consistency loss Lglobal (Eq.9).]
+
+ 10
+ K Zhang & X Zhuang
470
+ Fig. 4. Illustration of spatial prior loss (Lspatial) for correction of prediction, via class
471
+ mixture ratios (π) and spatial prior (with spatial energy).
472
+ Discussion: Mixup operations could change the shape of target structures, re-
+ sulting in unrealistic images. To tackle this, as shown in Figure 3, we propose
+ to combine the partial cross entropy (PCE) loss for labeled pixels of both mixed
+ and unmixed images, and leverage mix equivalence to preserve shape consistency
+ at the global level. To further exploit the shape features, we propose to correct the
477
+ network prediction guided by computed prior, which is described in Section 2.3.
478
+ 2.3 Modeling and computation of prior
480
+ As shown in Figure 1, we model class mixture ratios, spatial prior, and shape
481
+ prior to better capture global shape information and regularize the network
482
+ training. As visualized in Figure 4, we compute the spatial energy to reflect the
483
+ probabilities of pixels belonging to each class. We propose a new formulation to
484
+ estimate critical prior of label class proportions, referred to as π, which guides
485
+ the correction of erroneous network prediction.
486
+ Problem statement: The segmentation network trained with scribbles tends
+ to generate under-segmented results of the target structures. Considering that
+ the annotated ratio of classes can be imbalanced, scribble-supervised learning
+ also brings challenges to the estimation of class mixture ratios π.
490
+ Under segmentation: As shown in Figure 5, under segmentation refers to
+ results where the size of the segmented structure is generally smaller than the
+ ground truth, a phenomenon caused by the imbalanced annotated proportion and
+ missing shape information. To solve this problem, we propose to evaluate π and spatial
494
+ prior, which are crucial for the shape refinement. The accurate estimation of
495
+ π can correct the imbalanced label ratios, and enable the model to adjust the size
496
+ of segmentation result. The computation of spatial prior is able to encode the
497
+ feature similarity between pixels, and rectify the shape of target structures. We
498
+
+ [Figure 4 diagram: from the scribble and the under-segmented prediction, the
+ spatial energy, spatial priors, and class mixture ratios (π estimation) are used to
+ correct the shapes of the left and right ventricles, giving an adjusted prediction
+ supervised by Lspatial (Eq.29).]
+
518
+ Fig. 5. Two examples of under segmentation, pointed by the red arrows: (a) under
519
+ segmented foreground labels from ACDC segmentation, i.e., left ventricle and right
520
+ ventricle; (b) under segmented background from MyoPS segmentation.
521
+ encode π and the spatial prior with the spatial prior loss, by ranking the spatial
+ energy and selecting the top π ratio as the segmentation. To estimate π, we start from the
523
+ imbalanced annotated ratios (referred to as a) and adapt it from labeled pixels
524
+ to unlabeled pixels.
525
+ Note that the problem of under segmentation can be even worse without
526
+ the modeling of efficient scribbles. In the case of manually annotated scribbles,
527
+ the resulting annotations may be distributed in a non-random pattern due to
528
+ fixed labeling habits, resulting in the biased label distribution across the whole
529
+ dataset. This problem could be alleviated by simulating randomly distributed
530
+ labels through our proposed supervision augmentation.
531
+ Challenges of π estimation: The evaluation of class mixture ratios is a criti-
+ cal bottleneck in semi-/weakly-/non-supervised learning, and serves as the basis
+ of class identification [14] and variance reduction [47,38]. However, existing
+ methods are mainly proposed for binary classification, and cannot be adapted
+ to the multi-class scenario directly. For the segmentation task, the class mixture
+ ratios are both imbalanced and interdependent, degrading the performance of
+ previous binary estimation approaches. Besides the class imbalance problem,
+ scribble-supervised segmentation also faces the imbalance of annotated class
+ ratios. For example, the annotated ratio of the background tends to be much
+ smaller than that of the foreground classes. The imbalance of annotated ratios
+ further increases the difficulty of π estimation.
542
+ Estimation of class mixture ratios π To tackle the under segmentation, we
543
+ propose to estimate the class mixture ratios within unlabeled pixels.
544
+ Objective: We aim to determine π to maximize the likelihood of observed
+ unlabeled pixels. For nu unlabeled pixels x = [x1, x2, · · · , xnu] sampled from
+ pu(x), the likelihood of these unlabeled pixels is formulated as:
+ L(π) = ∏_{i=1}^{nu} pu(xi) = ∏_{i=1}^{nu} [ Σ_{k=1}^{m} pu(xi|ck) pu(ck) ],    (10)
564
+ where pu(xi|ck) represents the within-class probability of class ck ∈ {c0, · · · , cm}
565
+ for unlabeled pixel xi. We assume the within-class probabilities of labeled and un-
566
+ labeled pixels to be unchanged. Then, we estimate π = [pu(c1), pu(c2), · · · , pu(cm)]
567
+ to maximize the likelihood of unlabeled observations in Eq.(10).
568
+ To maximize the likelihood in Eq.(10), we follow the EM algorithm in [24,30]
+ and introduce the unknown variable s = (s1, s2, · · · , snu), where si is a one-
+ hot vector of dimension m whose entry for the class of xi equals 1. Then, the
+ likelihood L(π|x, s) is written as:
+ L(π|x, s) = ∏_{i=1}^{nu} ∏_{k=1}^{m} [pu(xi|ck) pu(ck)]^{sik}.    (11)
581
+ The log likelihood l(π|x, s) is derived as:
+ l(π|x, s) = Σ_{i=1}^{nu} Σ_{k=1}^{m} sik log(pu(xi|ck)) + Σ_{i=1}^{nu} Σ_{k=1}^{m} sik log(pu(ck)).    (12)
599
+ E-step: The E-step of the EM algorithm computes the expected value of l(π|x, s)
+ given the observations x and the current estimate π[t]:
+ Q(π|x, π[t]) = E[ l(π|s, x) | x, π[t] ]
+             = Σ_{i=1}^{nu} Σ_{k=1}^{m} E(sik|xi, π[t]_k) log(pu(xi|ck))
+             + Σ_{i=1}^{nu} Σ_{k=1}^{m} E(sik|xi, π[t]_k) log(pu(ck)),    (13)
+ where E(sik|xi, π[t]_k) is represented as:
+ E(sik|xi, π[t]_k) = p(sik = 1 | xi, π[t]_k) = p[t]_u(ck|xi).    (14)
630
+ Estimation of p[t]_u(ck|xi): To solve for the current estimate of p[t]_u(ck|xi), we aim
+ to adapt the posterior probability from labeled pixels to unlabeled pixels. For
+ labeled pixels, the posterior probability pl(ck|xi) is estimated by the model
+ prediction. For class ck and pixel xi, based on our assumption that the within-
+ class probabilities of labeled and unlabeled pixels are the same, we have
+ pu(xi|ck) = pl(xi|ck).    (15)
639
+
640
642
+ Based on Bayes' theorem, the within-class probabilities of a labeled pixel, ˆpl(xi|ck),
+ and an unlabeled pixel, ˆpu(xi|ck), are written as:
+ ˆpl(xi|ck) = ˆpl(ck|xi) ˆpl(xi) / ˆpl(ck),    (16)
+ ˆpu(xi|ck) = ˆpu(ck|xi) ˆpu(xi) / ˆpu(ck).    (17)
650
+ By substituting ˆpu(xi|ck) in Eq.(17) and ˆpl(xi|ck) in Eq.(16) into Eq.(15), we
651
+ adapt the within-class probabilities from labeled pixels to unlabeled pixels as
652
+ follows:
653
+ ˆpu(ck|xi) = [ˆpl(xi)/ˆpu(xi)] · [ˆpu(ck)/ˆpl(ck)] · ˆpl(ck|xi).    (18)
657
+ For binary estimation, the mixture ratio is independently estimated for each
658
+ class, which does not leverage the inter-relationship between classes. For multi-
659
+ class segmentation, we naturally utilize the condition that the sum of the prob-
660
+ abilities of all classes equals 1, i.e.,
+ Σ_{k=0}^{m} ˆpu(ck|xi) = 1.    (19)
666
+ By combining Eq.(18) and Eq.(19), one can obtain:
+ 1 = [ˆpl(xi)/ˆpu(xi)] Σ_{k=0}^{m} [ˆpu(ck) ˆpl(ck|xi) / ˆpl(ck)].    (20)
675
+ Then, ˆpl(xi)/ˆpu(xi) is represented as:
+ ˆpl(xi)/ˆpu(xi) = [ Σ_{k=0}^{m} ˆpu(ck) ˆpl(ck|xi) / ˆpl(ck) ]^{−1}.    (21)
685
+ By substituting ˆpl(xi)/ˆpu(xi) into Eq.(18), we obtain the formulation of
+ ˆpu(ck|xi) as follows:
+ ˆpu(ck|xi) = [ˆpu(ck) ˆpl(ck|xi)/ˆpl(ck)] / Σ_{k′=0}^{m} [ˆpu(ck′) ˆpl(ck′|xi)/ˆpl(ck′)].    (22)
692
+ Therefore, the current estimate of the posterior probability ˆpu(ck|xi) is written
+ as:
+ ˆp[t]_u(ck|xi) = [π[t]_k ˆpl(ck|xi)/ˆpl(ck)] / Σ_{k′=0}^{m} [π[t]_{k′} ˆpl(ck′|xi)/ˆpl(ck′)],    (23)
+ where ˆpl(ck) is empirically evaluated by the class frequency within labeled pixels,
+ i.e., ˆpl(ck) = n^k_l / nl.
705
+ M-step: The M-step maximizes Q(π|x, π[t]) in Eq.(13), i.e.,
+ π[t+1] := arg max_π Q(π|x, π[t]).    (24)
+ We empirically solve for π[t+1]_k as:
+ π[t+1]_k = (1/nu) Σ_{i=1}^{nu} p[t]_u(ck|xi).    (25)
726
+ π[t]_k is initialized with the class frequency within labeled pixels, a, with
+ ak = n^k_l / nl. Then, the E-step of Eq.(13) and the M-step of Eq.(25) are repeated
+ until the estimation of π converges. The posterior probability ˆpu(ck|xi) and
+ prior probability ˆpu(ck) are re-estimated in each iteration.
733
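The E- and M-steps above can be sketched in NumPy as below. This is a minimal illustration rather than the paper's implementation: `estimate_pi` is a hypothetical name, `p_l` is assumed to hold the model posteriors for the unlabeled pixels, and `a` plays the double role of the empirical ˆpl(ck) and the initialization of π, exactly as described in the text.

```python
import numpy as np

def estimate_pi(p_l, a, n_iters=100):
    """EM estimate of the class mixture ratios pi within unlabeled pixels.

    p_l : (n_u, m) array of model posteriors p_l(c_k|x_i) for unlabeled pixels.
    a   : (m,) class frequencies within labeled pixels; serves both as the
          empirical estimate of p_l(c_k) and as the initialization of pi.
    """
    a = np.asarray(a, dtype=float)
    pi = a.copy()
    for _ in range(n_iters):
        # E-step, Eq. (23): adapt posteriors from labeled to unlabeled pixels
        w = p_l * (pi / a)                       # pi_k * p_l(c_k|x_i) / p_l(c_k)
        p_u = w / w.sum(axis=1, keepdims=True)   # normalize over classes
        # M-step, Eq. (25): average the responsibilities
        pi = p_u.mean(axis=0)
    return pi
```

On a toy 1-D two-Gaussian mixture with posteriors computed under a balanced labeled prior, the estimate recovers the true mixture weight up to sampling noise.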
+ Discussion: There are two conditions of the proposed algorithm. Firstly, we
+ assume the within-class probabilities of labeled and unlabeled pixels to be the
+ same, which means the labeled pixels should be randomly sampled based on classes.
+ Secondly, π is initialized with the class frequency of labeled pixels a. Since the
+ annotated ratio of the background is smaller than that of the foreground classes,
+ the prior probabilities of foreground classes within unlabeled pixels tend to
+ be over-estimated. The first problem can be tackled by modeling the efficient
+ scribbles, to achieve a random distribution of annotations. For the second
+ problem, by randomly occluding the image and replacing the occluded area with
+ background, we are able to increase the ratio of background and alleviate this
+ problem to some extent. Furthermore, we propose to address it with the marginal
+ probability maximization, which will be explained in Section 2.3.
745
+ Computation of spatial energy Given the estimated class mixture ratios,
746
+ we aim to identify the unlabeled pixels by determining the probability of pixels
+ belonging to each class. Instead of using model predictions directly, we further
+ encode the spatial relationship to compensate for the inaccurate results generated
+ by the segmentation network. Inspired by [31], we estimate the spatial energy of
+ unlabeled pixels with an energy term in a dense setting.
751
+ Firstly, we use Gaussian kernels Gij to measure the distance between pixels
752
+ at position i and j as:
753
+ Gij = exp( −(pi − pj)^2/(2σ_p^2) − (oi − oj)^2/(2σ_o^2) ),    (26)
762
+ where pi represents the position of pixel xi; oi denotes the color feature; σp and
763
+ σo are the bandwidth parameters for position and color information, respectively.
764
+ The shallow features like color and position are specific to the pixel and do not
765
+ rely on the network prediction. Then, the energy term φij leveraging prediction
766
+ ˆy is formulated as:
767
+ φij(ˆy) = Gij ˆyi ˆyj,    (27)
769
+ which denotes the pairwise relationship between two pixels. This energy term
+ connects every pixel with each other within one image. Based on φij, we define
+ the elements of the spatial energy Φ in a dense setting, i.e.,
+ Φi(ˆy) = Σ_{j∈Ωi} φij(ˆy),    (28)
777
+ where Ωi = {j : |Pos(i) − Pos(j)| ≤ r} denotes the neighborhood window of radius r.
+ Instead of taking the total energy as the regularization loss as in [31], we consider
782
+ Φ as the spatial energy to reflect the relative probability of pixels belonging to
783
+ each class.
784
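Eqs.(26)-(28) can be sketched brute-force for a single class channel on a small image, as below. This is an O(N^2) illustration only; a practical implementation would use efficient filtering. The name `spatial_energy` is ours; the defaults σ_p = 6, σ_o = 0.1 and r = 5 are the values reported in Section 3.1.

```python
import numpy as np

def spatial_energy(probs, colors, sigma_p=6.0, sigma_o=0.1, r=5):
    """Brute-force spatial energy of Eqs. (26)-(28) for one class channel.

    probs:  (H, W) predicted probability map y_hat for one class.
    colors: (H, W) intensity (the color feature o_i).
    Returns Phi with Phi_i = sum_{j in Omega_i} G_ij * y_i * y_j.
    """
    h, w = probs.shape
    pos = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), -1)
    p = pos.reshape(-1, 2).astype(float)
    o = colors.reshape(-1)
    y = probs.reshape(-1)
    d2 = ((p[:, None, :] - p[None, :, :]) ** 2).sum(-1)     # squared distances
    g = np.exp(-d2 / (2 * sigma_p**2)
               - (o[:, None] - o[None, :]) ** 2 / (2 * sigma_o**2))  # Eq. (26)
    g[d2 > r**2] = 0.0                                      # window Omega_i
    phi = (g * y[None, :] * y[:, None]).sum(axis=1)         # Eqs. (27)-(28)
    return phi.reshape(h, w)
```

A pixel whose color matches its neighborhood accumulates more energy than an outlier pixel, which is what lets the energy encode feature similarity between pixels.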
+ Spatial prior and shape prior losses Spatial prior loss is computed by
785
+ ranking the spatial energy and selecting the top π proportion of pixels as the
786
+ segmentation. Considering that adjusting multiple structures directly can be
787
+ challenging, we instead separate each foreground class from the others, and
788
+ then tackle the individual structure. Given that the mixture ratios of foreground
789
+ classes tend to be over-estimated, we instead leverage the accurate negative pix-
790
+ els filtered by estimated mixture ratios, and maximize the marginal probability
791
+ of these pixels belonging to other classes.
792
+ Firstly, by ranking the spatial energy and applying the mixture ratio of each
793
+ class, we are able to distinguish negative pixels from unlabeled pixels. For fore-
794
+ ground class ck, we rank the unlabeled pixels according to the spatial energy Φk
795
+ of class ck in Eq.(28). Given the estimated mixture ratio πk, we set pixels in
+ the top πk proportion to be positive samples Ωk. Correspondingly, the remaining
+ pixels are taken as negative pixels, denoted as ¯Ωk. Taking the over-estimated πk into
+ account, we believe the set of negative pixels ¯Ωk is more accurate than Ωk.
799
+ Secondly, we design the spatial prior loss (Lspatial) based on maximal marginal
800
+ probability of negative samples ¯Ωk belonging to other classes. For each class
801
+ ck, we take it as foreground and fuse other classes except ck into background.
802
+ The fused class is denoted as ¯ck. For pixel xi in ¯Ωk, its marginal probabil-
803
+ ity belonging to ¯ck equals the sum of probabilities of the fused classes, i.e.,
804
+ ˆp(¯ck|xi, xi ∈ ¯Ωk) = Σ_{k′=1}^{m} [1[k′≠k] ˆp(ck′|xi)]. To maximize the marginal proba-
806
+ bility of negative pixel xi belonging to ¯ck, we formulate the spatial prior loss
807
+ as:
808
+ Lspatial = − Σ_{k=1}^{m} Σ_{xi∈¯Ωk} log(ˆp(¯ck|xi)).    (29)
817
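The ranking-and-filtering procedure and Eq.(29) can be sketched as below. This is a minimal sketch under assumptions: predictions and energies are flat (n, m) arrays, class 0 is the background, and `spatial_prior_loss` is a hypothetical name.

```python
import numpy as np

def spatial_prior_loss(probs, energy, pi, eps=1e-8):
    """Sketch of the spatial prior loss, Eq. (29).

    probs:  (n, m) softmax predictions over m classes (class 0 = background).
    energy: (n, m) spatial energy per class (higher = more likely foreground).
    pi:     (m,) estimated class mixture ratios.
    """
    n, m = probs.shape
    loss = 0.0
    for k in range(1, m):                       # each foreground class
        order = np.argsort(-energy[:, k])       # rank pixels by spatial energy
        n_pos = int(round(float(pi[k]) * n))
        neg = order[n_pos:]                     # complement set, bar-Omega_k
        p_not_k = 1.0 - probs[neg, k]           # marginal prob. of fused class
        loss -= np.log(p_not_k + eps).sum()
    return loss
```

A prediction that assigns the negative (low-energy) pixels away from class k incurs a much smaller loss than one that labels everything as class k, which is the intended correction for over-estimated foreground ratios.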
+ Shape prior loss is developed to regularize inter-connected structures in the
818
+ segmentation results. This loss is adopted to further reduce noise and smooth
819
+ boundary. It requires the model prediction to be consistent with its maximum
820
+ connected area, and minimizes their cross entropy loss, i.e.,
821
+ Lshape = − Σ_{k∈Ψ} F(ˆYk) log(ˆYk),    (30)
826
+ where Ψ is the set of label classes with inter-connected structures; F(·) denotes
827
+ the morphological function, and outputs the largest inter-connected area of the
+ input label.
829
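A stand-in for the morphological function F(·) can be sketched in pure Python/NumPy as below. We assume 4-connectivity, which the text does not specify; in practice a library routine such as SciPy's connected-component labeling would be used instead.

```python
import numpy as np
from collections import deque

def largest_component(mask):
    """Sketch of F(.) in Eq. (30): keep the largest 4-connected component
    of a binary mask, via breadth-first search."""
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    best = np.zeros_like(mask)
    for si in range(h):
        for sj in range(w):
            if mask[si, sj] and not seen[si, sj]:
                comp, q = [], deque([(si, sj)])
                seen[si, sj] = True
                while q:                          # flood-fill one component
                    i, j = q.popleft()
                    comp.append((i, j))
                    for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                        if 0 <= ni < h and 0 <= nj < w and mask[ni, nj] \
                                and not seen[ni, nj]:
                            seen[ni, nj] = True
                            q.append((ni, nj))
                if len(comp) > best.sum():        # keep only the biggest one
                    best = np.zeros_like(mask)
                    for i, j in comp:
                        best[i, j] = 1
    return best
```

The shape loss then penalizes predicted pixels that fall outside this largest component, suppressing small noisy islands.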
+ 2.4 ZScribbleNet
831
+ ZScribbleSeg is achieved via a deep neural network referred to as ZScribbleNet.
832
+ ZScribbleNet does not depend on any particular network architecture, and can
833
+
834
836
+ Table 1. Efficiency analysis of scribble forms for regular structure segmentation of
837
+ cardiac ventricles (ACDC dataset) and irregular segmentation of myocardial pathology
838
+ (MyoPS dataset). Here, Nscribble and Npix respectively denote the number of manual
839
+ draws to generate scribble annotations and number of annotated pixels, which indicate
840
+ annotation efforts; k is the number of manual draws (scribbles) and n is the given
841
+ threshold of annotation efforts, where k << n. Segmentation results are evaluated on
842
+ test set and reported in Dice scores.
843
+ Methods        Nscribble Npix | Structural segmentation: LV / MYO / RV / Avg    | Irregular segmentation: Scar / Edema / Avg
+ Points         n         n    | .876±.134  .801±.089  .858±.081  .845±.107 | .551±.246  .638±.115  .595±.194
+ Skeleton       k         n    | .805±.145  .737±.095  .769±.128  .770±.126 | .504±.213  .057±.022  .281±.271
+ Random walk    k         n    | .798±.173  .698±.153  .753±.157  .744±.165 | .516±.284  .529±.123  .522±.184
+ DirRandomWalk  k         n    | .844±.143  .755±.102  .798±.173  .799±.146 | .539±.217  .637±.108  .588±.176
888
+ be directly applied to any CNN backbone. For all experiments, we adopt a
+ variant of UNet [1] as the backbone of the segmentation network. As Figure 2 shows,
+ two images are mixed together to perform the supervision augmentation. Then,
+ our ZScribbleNet takes the mixed images and unmixed images as input, and
+ outputs their segmentation results.
893
+ For model training, images and their scribble annotations are sampled to
894
+ estimate the training objective (L), which is formulated as:
895
+ L = Lpce + λ1 Lglobal + λ2 Lspatial + λ3 Lshape,    (31)
+ where the last three terms are unsupervised,
902
+ where Lpce is the partial cross entropy loss calculated for annotated pixels in
903
+ unmixed image and mixed image; the global consistency loss Lglobal in Eq.(9)
904
+ requires the mix equivalence for the supervision augmentation; spatial prior loss
905
+ Lspatial in Eq.(29) encodes the π prior and spatial prior; shape regularization
906
+ loss Lshape in Eq.(30) leverages shape prior; λ1, λ2, λ3 are hyper-parameters to
907
+ leverage the relative importance of different loss components.
908
+ In the training phase, we warm-started training the networks with the partial
+ cross entropy loss Lpce, global consistency loss Lglobal, and shape regularization
+ loss Lshape for 100 epochs, and then invoked the spatial loss Lspatial. In the
+ testing phase, the trained network predicted the segmentation results of the input
+ image directly.
913
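The objective of Eq.(31) and the warm-up schedule just described can be summarized as a small sketch. `total_loss` is a hypothetical helper; the default hyper-parameters (λ1 = 0.05, λ2 = λ3 = 1, 100 warm-up epochs) are the values reported in this paper.

```python
def total_loss(l_pce, l_global, l_spatial, l_shape, epoch,
               lam1=0.05, lam2=1.0, lam3=1.0, warmup=100):
    """Training objective of Eq. (31) with the warm-up schedule: the spatial
    prior loss is only invoked after `warmup` epochs."""
    loss = l_pce + lam1 * l_global + lam3 * l_shape
    if epoch >= warmup:
        loss = loss + lam2 * l_spatial
    return loss
```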
+ 3 Experiments and Results
915
+ We first investigated a variety of scribble forms, and analyzed the principles
916
+ of efficient scribbles in Section 3.2. Then, we performed an ablation study of the
+ proposed ZScribbleSeg in Section 3.3. Finally, we demonstrated the performance
918
+ of ZScribbleSeg with comparisons to other state-of-the-art methods in various
919
+ segmentation tasks using four open datasets in Section 3.4.
920
+
921
927
+ Fig. 6. Performance of segmentation networks trained by the Points scribble form
928
+ with different numbers of annotated pixels Npix, with comparisons to fully supervised models
929
+ (FullSupUNet): (a) and (c) visualize Dice scores with respect to different Npix on ACDC
930
+ and MyoPS, respectively. The performance of models trained by the Random walk
931
+ form, with increasing step length l, compared with models trained by DirRandWalk:
932
+ (b) and (d) show the Dice scores of segmentation on ACDC and MyoPS, respectively,
933
+ given Npix = n.
934
+ 3.1 Materials
936
+ Tasks and datasets Our validation included four segmentation tasks, including
937
+ (1) regular structure segmentation of cardiac ventricles from anatomical imag-
938
+ ing using ACDC dataset, (2) regular structure segmentation from pathology en-
939
+ hanced imaging with a smaller training size using MSCMRseg dataset, (3) irreg-
940
+ ular object segmentation of myocardial pathology from multi-modality imaging
941
+ using MyoPS dataset, and (4) human pose segmentation from natural scene images
942
+ using PPSS dataset.
943
+ ACDC dataset was from the MICCAI’17 Automatic Cardiac Diagnosis
944
+ Challenge [4]. This dataset consists of short-axis cardiac images using anatomi-
945
+ cal MRI sequence (BSSFP) from 100 patients, with gold standard segmentation
946
+ of cardiac ventricular structures, including left ventricle blood cavity (LV), left
947
1018
+ K Zhang & X Zhuang
1019
+ ventricle myocardium (MYO), and right ventricle blood cavity (RV). For exper-
1020
+ iments, we randomly divided the 100 subjects into a training set of 70 subjects,
1021
+ a validation set of 15 subjects (particularly for ablation study), and a test set of
1022
+ 15 subjects.
1023
+ MSCMRseg was from the MICCAI’19 Multi-sequence Cardiac MR Seg-
1024
+ mentation Challenge [59,58], consisting of images from 45 patients with car-
1025
+ diomyopathy and the gold standard segmentation of LV, MYO and RV. We
1026
+ employed the 45 images of late gadolinium enhanced (LGE) MRI to evaluate
1027
+ the segmentation of ventricle structures. Following [48], we divided the 45 im-
1028
+ ages into three sets of 25 (training), 5 (validation), and 15 (test) images for
1029
+ all experiments. Note that this structure segmentation is more challenging than
1030
+ that on ACDC due to its smaller training set and pathology enhanced images.
1031
+ MyoPS dataset was from MICCAI’20 Myocardial pathology segmentation
1032
+ Challenge [26], consisting of paired images of BSSFP, LGE and T2 cardiac MRI
1033
+ from 45 patients. The task was to segment the myocardial pathologies, includ-
1034
+ ing scar and edema, which do not have regular shape or structure thus their
1035
+ segmentation represents a different task to the regular structure segmentation.
1036
+ Following the benchmark study [26], we split the data into a training set of 20
+ pairs, a validation set of 5 pairs, and a test set of 20 pairs.
1038
+ PPSS refers to the Pedestrian Parsing on Surveillance Scenes (PPSS) dataset [28].
1039
+ We employed the task of human pose segmentation to validate the generaliz-
1040
+ ability of models on natural scene images. PPSS is a large scale human pars-
1041
+ ing dataset including 3673 annotated samples of 171 surveillance videos. The
1042
+ ground truth segmentation of eight classes including hair, face, upper clothes,
1043
+ arms, lower clothes, legs, shoes, and background was provided. We used the
1044
+ first 100 surveillance scenes for training and the remaining 71 videos for test.
1045
+ Evaluation metrics For experiments on ACDC, MSCMRseg and MyoPS datasets,
1046
+ we reported the Dice score and Hausdorff Distance (HD) on each foreground
1047
+ class separately following the practice of medical image segmentation. On PPSS
1048
+ dataset, we measured the multi-class Dice scores following [42], where
+ Dice = 2|ˆy ∩ y| / (|ˆy| + |y|), and ˆy and y denote the multi-channel prediction
+ and ground truth label, respectively.
1052
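The multi-class Dice score above can be sketched as below, assuming binary multi-channel masks; `multiclass_dice` is our name for the helper.

```python
import numpy as np

def multiclass_dice(pred, gt):
    """Multi-class Dice following [42]: Dice = 2|pred & gt| / (|pred| + |gt|),
    computed over binary (multi-channel) masks."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())
```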
+ Pre-processing and implementation The two dimensional slices from ACDC
1053
+ and MSCMR datasets were of different resolutions. Hence, we first re-sampled all
1054
+ images into a fixed resolution of 1.37 × 1.37 mm and then extracted the central
1055
+ patch of size 212 × 212 for experiments. For MyoPS, we took the paired slices of
1056
+ BSSFP, LGE, and T2 CMR and cropped their center patches of size 192 × 192
1057
+ for experiments. We normalized the intensity of these medical images to be zero
1058
+ mean and unit variance. For PPSS dataset, we first re-sampled all images into
1059
+ the same resolution, and then padded the images to the size of 160 × 160. The
1060
+ intensities of images were normalized to a range between 0 and 1.
1061
+ For random occlusion, a square area of 32 × 32 was randomly occluded for
1062
+ each image. For the estimation of spatial energy, we adopted Gaussian kernels
1063
+
1064
+ ZScribbleSeg
1065
+ 19
1066
+ with color bandwidth σo = 0.1, position bandwidth σp = 6, and kernel radius
1067
+ r = 5. The hyper-parameters λ1, λ2, λ3 in Eq. (31) were empirically set to be
1068
+ 0.05, 1, and 1, respectively.
1069
+ All models were trained with a batch size of 4, learning rate of 1e−4, and
1070
+ augmentation of flipping and random rotation. We implemented our models
1071
+ with Pytorch. All models were trained on one NVIDIA 3090Ti 24GB GPU for
1072
+ 1000 epochs.
1073
+ Table 2. Results in Dice scores and Hausdorff Distance (HD) of the ablation study
1074
+ using ACDC dataset, where the models were evaluated on the validation set. Note that
1075
+ model #6 is ZScribbleSeg. Bold denotes the best result, and underline indicates the
1076
+ best but one in each category.
1077
+ Results in Dice
1078
+ Lpce Efficiency Lshape Lglobal Lspatial
1079
+ LV
1080
+ MYO
1081
+ RV
1082
+ Avg
1083
+ model #1
1084
+
1085
+ ×
1086
+ ×
1087
+ ×
1088
+ ×
1089
+ .863±.089
1090
+ .804±.063
1091
+ .774±.150
1092
+ .813±.112
1093
+ model #2
1094
+
1095
+
1096
+ ×
1097
+ ×
1098
+ ×
1099
+ .870±.100
1100
+ .833±.063
1101
+ .843±.076
1102
+ .848±.082
1103
+ model #3
1104
+
1105
+ ×
1106
+
1107
+ ×
1108
+ ×
1109
+ .915±.068
1110
+ .871±.056
1111
+ .871±.058
1112
+ .886±.064
1113
+ model #4
1114
+
1115
+
1116
+ ×
1117
+
1118
+ ×
1119
+ .920±.064
1120
+ .868±.051
1121
+ .886±.051
1122
+ .891±.059
1123
+ model #5
1124
+
1125
+ ×
1126
+ ×
1127
+ ×
1128
+
1129
+ .923±.078
1130
+ .869±.051
1131
+ .889±.056
1132
+ .894±.066
1133
+ model #6
1134
+
1135
+
1136
+
1137
+
1138
+
1139
+ .929±.057
1140
+ .876±.051
1141
+ .892±.049
1142
+ .899±.056
1143
+ Results in HD (mm) Lpce Efficiency Lshape Lglobal Lspatial
1144
+ LV
1145
+ MYO
1146
+ RV
1147
+ Avg
1148
+ model #1
1149
+
1150
+ ×
1151
+ ×
1152
+ ×
1153
+ ×
1154
+ 81.86±40.40
1155
+ 65.97±33.62 60.91±44.62 69.58±40.37
1156
+ model #2
1157
+
1158
+
1159
+ ×
1160
+ ×
1161
+ ×
1162
+ 119.78±19.14 23.90±17.32 52.38±23.40 65.35±45.06
1163
+ model #3
1164
+
1165
+ ×
1166
+
1167
+ ×
1168
+ ×
1169
+ 4.45±5.39
1170
+ 15.24±23.90 25.78±22.44 15.16±20.89
1171
+ model #4
1172
+
1173
+
1174
+ ×
1175
+
1176
+ ×
1177
+ 12.12±18.26
1178
+ 29.41±24.56 16.97±15.62 19.50±20.94
1179
+ model #5
1180
+
1181
+ ×
1182
+ ×
1183
+ ×
1184
+
1185
+ 28.95±36.57
1186
+ 44.77±34.69
1187
+ 7.51±5.34 27.08±32.76
1188
+ model #6
1189
+
1190
+
1191
+
1192
+
1193
+
1194
+ 6.09±8.53
1195
+ 11.14±14.53
1196
+ 8.86±5.88
1197
+ 8.70±10.40
1198
+ 3.2 Efficiency of scribble forms
1200
+ In this study, we first compared four scribble forms to illustrate the efficacy of
1201
+ randomly annotated scribbles for supervision. Denoting the number of annotated
1202
+ pixels using a manual and skeleton-wise scribble form as n, we generated other
1203
+ scribble forms with the same annotated ratios for a fair comparison. Then, we
1204
+ studied the performance of segmentation with respect to the number of pixels
1205
+ annotated using a random and wide range scribble form, by setting the number
1206
+ of annotated pixels to different times of n. Finally, we further explored variants
1207
+ of random walk annotations to show the importance of wide range in the random
1208
+ distribution of scribbles.
1209
+ We applied two segmentation tasks, i.e., regular structure segmentation of
1210
+ the cardiac ventricles on ACDC dataset and irregular segmentation of myocardial
1211
+ pathologies using MyoPS dataset. To compare the supervision of scribble forms
1212
+ directly, we trained all models with partial cross entropy (PCE) loss calculated
1213
+ for annotated pixels from scribbles. All experiment results were reported on the
1214
+ test set.
1215
+ Scribble forms One can measure the efforts of scribble annotations from two
1216
+ perspectives, i.e., number of manual draws to generate scribble annotations
1217
+
1218
1220
+ Table 3. Results and comparisons of regular structure segmentation on ACDC dataset.
+ These models were evaluated on the test set.
+ Methods        | Dice: LV / MYO / RV / Avg                   | HD (mm): LV / MYO / RV / Avg
+ PCE            | .805±.145  .737±.095  .769±.128  .770±.126 | 62.55±36.04  68.30±27.77  59.62±42.62  63.40±35.76
+ WSL4 [29]      | .835±.164  .825±.032  .787±.191  .792±.166 | 16.48±16.01  24.48±22.74  18.21±11.30  19.72±17.67
+ GatedCRF [31]  | .846±.157  .744±.108  .822±.111  .804±.135 | 37.38±46.37  22.30±15.72  20.88±11.85  26.85±30.03
+ MAAG [42]      | .879  .817  .752  .816                     | 25.23  26.83  22.73  24.93
+ CVIR [14]      | .866±.127  .797±.102  .737±.130  .800±.130 | 47.51±50.82  10.70±8.39  14.39±9.00  24.20±34.17
+ nnPU [20]      | .862±.134  .792±.124  .829±.102  .828±.123 | 67.28±48.60  18.60±17.93  14.64±8.39  33.51±38.43
+ CycleMix [52]  | .876±.096  .794±.083  .829±.099  .833±.098 | 16.60±19.90  18.04±17.78  19.09±21.44  17.91±19.57
+ ShapePU [53]   | .885±.103  .806±.096  .851±.089  .848±.100 | 20.17±22.40  41.81±33.40  20.06±26.43  27.35±29.33
+ ZScribbleSeg   | .900±.065  .825±.069  .862±.102  .862±.086 | 7.69±6.94  8.93±6.40  12.74±12.48  9.79±9.19
+ FullSupUNet    | .882±.123  .824±.099  .856±.112  .854±.113 | 11.94±13.58  12.65±12.52  14.82±9.69  13.14±11.97
1292
+ (Nscribble) and the number of annotated pixels (Npix). Given a certain amount
+ of effort, we designed four forms following different generation procedures, i.e.,
1294
+ (1) Skeleton, (2) Random walk, (3) Directed random walk (DirRandomWalk),
1295
+ (4) Points, and compared the segmentation performance of models trained us-
1296
+ ing such scribble annotations for supervision. The details of scribble forms are
1297
+ described below.
1298
+ Skeleton indicates the widely adopted scribble form by a rater, who approx-
1299
+ imately outlines the shape of each label class within the segmentation mask. For
1300
+ a segmentation task with k label classes, including the background, one needs k
1301
+ manual draws (scribbles) for a training image. For ACDC dataset, we adopted
1302
+ the manual annotated skeleton scribble released by [42]; while for pathologies
1303
+ in MyoPS dataset, we generated the skeleton scribbles automatically using the
1304
+ skeletonization algorithm [36]. We refer the reader to Appendix B of the supple-
1305
+ mentary material for generation details.
Random walk starts from a random point within the segmentation mask. The annotation then moves along a random direction of the image lattice within the segmentation mask, with a given step length (l, set to 1 by default). Such moves are repeated until the ratio or number of annotated pixels reaches a threshold (n).
Directed random walk, DirRandomWalk for short, is a random walk with momentum. The scribble generated by Random walk tends to cluster within a local area of radius √r given an r-step walk. To achieve a widely ranged distribution without manually setting the step length (l), we therefore adopted this directed random walk, which prefers moving in the same direction as the previous step. If the next point does not lie in the segmentation mask, the walking direction is changed to the one forming the smallest angle with the previous direction.
The Points scribble form refers to an ideal form, which randomly samples annotated pixels within the segmentation mask. However, it is difficult to generate such scribble annotations in practice, due to the huge number of manual draws, which equals the number of annotated pixels, i.e., Nscribble = Npix. We therefore considered this form as the upper bound of scribble supervision under the same ratio of annotated pixels.
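The walk-based generation procedures above can be sketched as follows. This is a minimal, illustrative implementation, not the paper's code: the function name and the 4-neighbourhood move set are our own assumptions. A walker starts at a random pixel inside the mask and keeps stepping until enough distinct pixels are annotated; with momentum=True it prefers the direction of the previous step, imitating DirRandomWalk.

```python
import random

# four axis-aligned moves on the image lattice (an assumption; the paper
# does not specify the neighbourhood)
DIRS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def walk_scribble(mask, n_pixels, step=1, momentum=False, seed=0):
    """Generate a scribble by walking inside `mask` (a set of (r, c) pixels).

    Stops once `n_pixels` distinct pixels are annotated, or after a bounded
    number of attempts. With momentum=True, the walker keeps its previous
    direction when possible (directed random walk).
    """
    rng = random.Random(seed)
    pos = rng.choice(sorted(mask))           # random starting point in the mask
    scribble, prev = {pos}, None
    for _ in range(100 * n_pixels):          # safety bound on move attempts
        if len(scribble) >= n_pixels:
            break
        if momentum and prev is not None:
            # try directions by decreasing alignment with the previous step,
            # i.e. the smallest angle to the previous direction first
            order = sorted(DIRS, key=lambda d: -(d[0] * prev[0] + d[1] * prev[1]))
        else:
            order = rng.sample(DIRS, len(DIRS))
        for d in order:                      # take the first in-mask move
            nxt = (pos[0] + d[0] * step, pos[1] + d[1] * step)
            if nxt in mask:
                pos, prev = nxt, d
                scribble.add(pos)
                break
        else:
            prev = None                      # stuck: re-randomise next time
    return scribble
```

The Points form then corresponds to sampling n pixels from the mask directly instead of walking, which is why its number of draws equals the number of annotated pixels.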
ZScribbleSeg

Fig. 7. Visualization of cardiac segmentation on the ACDC dataset. The two slices were from the median and the worst cases by the average Dice scores of all compared methods. (Panels: Image, Scribble, Ground Truth, and the segmentations of PCE, CVIR, nnPU, WSL4, GatedCRF, CycleMix, ShapePU, ZScribbleSeg and FullSupUNet, coloured by MYO, RV and LV and annotated with per-method average Dice scores.)
Results: Given the same amount of annotated pixels, we show the effect of different scribble forms on regular structures (ACDC) and irregular objects (MyoPS). As Table 1 illustrates, when the four scribble forms had the same number of annotated pixels Npix, Points achieved the best Dice scores on both the structural and the irregular segmentation tasks, thanks to the randomness and wide-range distribution of its scribbles. However, when we limited the effort of manual draws to be the same, DirRandomWalk became more favorable, as the scribble form of Points could be impractical. Furthermore, Skeleton was shown to be the least efficient form; in particular, the segmentation network trained on such a dataset performed poorly on the irregular object segmentation task. This was probably because, when the target was difficult to outline, the Skeleton form could fail to portray the entire segmentation, leading to poor performance or even a failure in training the segmentation networks. On the contrary, randomly distributed scribble forms, such as Random walk and DirRandomWalk, demonstrated their superiority, particularly on the irregular object segmentation, with remarkable improvements in average Dice over Skeleton of 24.1% and 30.7%, respectively.

Number of annotated points: By varying the number of annotated pixels (Npix), we validated the influence of annotation proportions on scribble-supervised segmentation. As shown in Figure 6 (a) and (c), the model performance tended to improve as Npix increased, indicating that model training benefited from a larger proportion of annotated pixels. One can observe from Figure 6 (a) that the segmentation performance started converging when Npix reached 2n. By contrast, for the more difficult segmentation task on irregular objects, as Figure 6 (c) illustrates, the model performance converged after Npix ≥ 4n.

Wide-ranged distribution: We further investigated the influence of the wide-range distribution of scribbles, by training networks with varying step length l in Random walk. As the step length increased, the label distribution range of Random walk gradually expanded. From Figure 6 (b) and (d), one can see that the average Dice scores improved as the step length increased, and the performance gradually converged to that of DirRandomWalk. This confirms that widely distributed scribbles provide finer supervision under the same number of draws and annotated pixels.
K Zhang & X Zhuang

Table 4. Results and comparisons of regular structure segmentation on pathology enhanced images (LGE CMR) using the MSCMRseg dataset.

Methods: Dice (LV MYO RV Avg) | HD in mm (LV MYO RV Avg)
PCE: .514±.078 .582±.067 .058±.023 .385±.243 | 259.4±14.19 228.1±21.36 257.4±12.43 248.3±21.63
WSL4 [29]: .902±.040 .815±.033 .828±.101 .848±.076 | 55.95±4.88 42.07±13.48 32.08±6.57 43.37±31.04
GatedCRF [31]: .917±.044 .825±.032 .848±.073 .863±.066 | 25.72±4.37 37.92±5.10 32.83±5.59 32.16±7.11
CVIR [14]: .331±.076 .371±.088 .404±.110 .368±.095 | 259.2±14.23 243.0±13.76 180.9±55.44 227.7±47.63
nnPU [20]: .341±.067 .538±.081 .432±.100 .437±.115 | 259.4±14.19 201.6±66.98 199.7±57.50 220.2±57.70
CycleMix [52]: .748±.064 .730±.047 .835±.041 .771±.069 | 224.59±35.27 28.26±20.77 73.36±51.39 108.74±92.65
ShapePU [53]: .880±.046 .785±.080 .833±.087 .833±.082 | 178.02±50.93 178.05±25.39 189.35±55.78 181.81±45.27
ZScribbleSeg: .922±.039 .834±.039 .854±.055 .870±.058 | 12.10±14.70 16.52±19.14 51.03±39.27 26.55±31.39
FullSupUNet: .909±.049 .821±.054 .826±.087 .852±.076 | 10.02±12.36 11.89±11.34 56.91±41.99 26.27±33.63
3.3 Ablation study

We studied the effectiveness of the proposed strategies of modeling efficient scribbles and prior regularization for ZScribbleNet. We used the ACDC dataset and the expert-made scribble annotations released by [42], and evaluated the model performance on the validation set. We compared six ablated models, which were trained with or without the modeling of efficient scribbles (denoted as Efficiency), and with different combinations of the four loss functions, i.e., the partial cross entropy (Lpce), the global consistency loss (Lglobal) in Eq.(9), the spatial prior loss (Lspatial) in Eq.(29), and the shape prior loss (Lshape) in Eq.(30).

Table 2 presents the results. When model #2 adopted the proposed supervision augmentation to model efficient scribbles (indicated by the column Efficiency), its performance improved compared to model #1, as one can see from their average Dice scores (0.848 vs. 0.813) and average HDs (65.35 mm vs. 69.58 mm). This demonstrates the benefit of training from the augmented supervision. When combining the supervision augmentation with the global consistency loss (Lglobal), leading to model #4, the performance was further boosted with remarkable improvements, namely a 4.3% gain in Dice (0.891 vs. 0.848) and an over 45 mm error reduction in HD (19.50 mm vs. 65.35 mm). Alternatively, when leveraging inter-connectivity via the shape regularization loss (Lshape), model #3 obtained an overwhelming improvement in HD, which was reduced from 69.58 mm to only 15.16 mm compared to model #1. This indicates the results contained much less noisy and outlier segmentation. We then further investigated the advantage of the spatial prior (Lspatial) in training ZScribbleNet. One can see from the result of model #5 that it achieved the most evident gain in terms of Dice, with an improvement of 8.1% (0.894 vs. 0.813) by solely including this one extra loss. Finally, our ZScribbleSeg (model #6) achieved the best performance, with an average Dice of 0.899 and HD of 8.70 mm. This indicates that the combination of efficient scribbles and priors endowed the algorithm with substantial supervision and prior knowledge for scribble-supervised segmentation.
3.4 Performance and Comparisons

We conducted experiments over the four segmentation tasks stated in Section 3.1. (1) For the structural segmentation of cardiac ventricles from the ACDC dataset, we used the expert-made scribbles released by [42]. (2) For the cardiac structural segmentation from pathology enhanced imaging (the MSCMRseg dataset), we used the manually annotated scribbles released by [52]. (3) For the irregular myocardial pathology segmentation from the MyoPS dataset, we first adopted the standard skeletonization algorithm for the simulated scribble annotation of pathologies [36], and then manually annotated skeleton scribbles for the structures of LV, Myo, RV and background. (4) For the human pose segmentation from the PPSS dataset, we adopted the scribble annotations generated by the standard skeletonization algorithm [36].

Fig. 8. Visualization of cardiac segmentation on LGE CMR using the MSCMRseg dataset. The two slices were from the median and the worst cases by the average Dice scores of all compared methods. (Panels: Image, Scribble, Ground Truth, and the segmentations of PCE, CVIR, nnPU, WSL4, GatedCRF, CycleMix, ShapePU, ZScribbleSeg and FullSupUNet, annotated with per-method average Dice scores.)
We compared ZScribbleSeg with eight to nine methods, depending on the dataset. We first implemented the PCE loss (Lpce) as a baseline method (referred to as PCE). Then, we implemented four state-of-the-art (SOTA) scribble-supervised segmentation methods, i.e., WSL4 [29], GatedCRF [31], CycleMix [52] and ShapePU [53], to run the same experiments. We cited the ACDC and PPSS results reported in [42] for the MAAG method, which is also a SOTA method for this task. Furthermore, we adopted two semi-supervised SOTA methods based on positive-unlabeled learning, i.e., CVIR [14] and nnPU [20], and re-implemented them to adapt to the scribble-supervised segmentation tasks. For more details of the adaptation, the reader is referred to Appendix C of the supplementary material. Finally, we trained a UNet with full annotations as a fully supervised baseline (referred to as FullSupUNet). Note that the post-processing steps of all experiments were removed for a fair comparison.
Structure segmentation from anatomical images: Table 3 presents the Dice and HD results of 10 approaches for regular structure segmentation of cardiac ventricles from the ACDC dataset. One can observe that ZScribbleSeg achieved an average Dice of 0.862 and HD of 9.79 mm, evidently outperforming the other scribble-supervised methods. The quantitative results of ZScribbleSeg were comparable to (or slightly better than) those of the fully supervised method (FullSupUNet), whose average Dice and HD were 0.854 and 13.14 mm, respectively. Particularly, the HD results of ZScribbleSeg (9.79 mm) and FullSupUNet (13.14 mm) were evidently better than those of the other methods. Note that HD is highly sensitive to noisy and outlier segmentation results, which are commonly seen when the supervision of global shape information is not sufficient.
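The two evaluation metrics used throughout, Dice and Hausdorff distance (HD), can be sketched on pixel sets as follows. This is a minimal illustration of the standard definitions, not the paper's evaluation code; real implementations work on label volumes and account for millimetre voxel spacing:

```python
import math

def dice(a, b):
    """Dice overlap 2|A∩B| / (|A|+|B|) between two pixel sets."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two non-empty pixel sets.

    HD is driven by the single worst-matched point, which is why stray
    outlier pixels inflate it so strongly.
    """
    def directed(src, dst):
        # farthest point of src from its nearest neighbour in dst
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(a, b), directed(b, a))
```

For example, one isolated false-positive pixel far from the object barely changes Dice but raises HD by the full distance to that pixel, matching the observation above that HD exposes noisy, outlier-prone segmentations.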
Table 5. Results and comparisons of irregular segmentation of myocardial pathologies on the MyoPS dataset.

Methods: Dice (Scar Edema Avg) | HD in mm (Scar Edema Avg)
PCE: 0.504±0.213 0.057±0.022 0.281±0.271 | 82.68±33.95 147.61±20.59 115.15±43.00
WSL4 [29]: 0.031±0.029 0.106±0.033 0.069±0.049 | 172.37±45.13 170.05±20.44 171.20±34.60
GatedCRF [31]: 0.020±0.013 0.042±0.020 0.031±0.019 | 173.60±44.98 170.10±20.44 171.8±34.53
CVIR [14]: 0.505±0.214 0.080±0.031 0.293±0.263 | 61.59±32.09 125.27±20.83 93.43±41.86
nnPU [20]: 0.530±0.241 0.085±0.035 0.308±0.282 | 48.88±23.55 125.27±20.83 87.07±44.47
CycleMix [52]: 0.550±0.237 0.626±0.124 0.588±0.191 | 65.64±42.81 81.97±40.87 73.81±42.13
ShapePU [53]: 0.558±0.237 0.615±0.144 0.587±0.205 | 57.33±31.58 53.00±31.42 55.16±31.17
ZScribbleSeg: 0.596±0.237 0.676±0.113 0.636±0.188 | 46.73±20.04 47.05±24.30 46.89±21.98
FullSupUNet: 0.607±0.253 0.659±0.135 0.633±0.202 | 55.35±35.73 63.53±33.15 59.44±34.27
Fig. 9. Visualization of irregular segmentation of myocardial pathologies on the MyoPS dataset. The two slices were from the median cases by the average Dice scores of edema or scar segmentation of all compared methods. (Panels: Image, Scribble, Ground Truth, and the segmentations of PCE, WSL4, GatedCRF, CVIR, nnPU, CycleMix, ShapePU, ZScribbleSeg and FullSupUNet, annotated with Dice scores for scar and edema.)
The results indicate that the proposed efficient scribble modeling and prior regularization were able to alleviate the problems of inadequate supervision and incomplete shape information when training on images with scribble annotations. Finally, Figure 7 visualizes two typical cases (median and worst) for illustration.
Structure segmentation from pathology enhanced images: The anatomical segmentation from pathology enhanced images, i.e., LGE CMR of the MSCMRseg dataset, was a more challenging task compared to that of the ACDC dataset. This is because MSCMRseg was a smaller dataset (25 vs. 70 training subjects), and the image quality and appearance patterns of LGE CMR could be worse and more complex.

Table 4 provides the quantitative results, and Figure 8 visualizes two representative examples (median and worst) for demonstration. ZScribbleSeg achieved promising performance, with better Dice and HD results than the other SOTA methods for scribble-supervised segmentation. Notice that for this particularly challenging task, the two general semi-supervised segmentation methods, i.e., CVIR and nnPU, could not work properly, which is confirmed by the two failed segmentation examples visualized in Figure 8.

Finally, similar to the results of the previous study (Section 3.4), ZScribbleSeg and FullSupUNet achieved less noisy segmentation, affirmed by their remarkably better HD results in Table 4. This supports the conclusion that the proposed ZScribbleNet received greatly augmented supervision and global shape information via the proposed efficient scribble modeling and prior regularization.
Irregular segmentation: For segmentation of objects with heterogeneous shape features, it becomes particularly challenging to learn accurate shape information for inference. We evaluated ZScribbleSeg on such a challenging task of irregular segmentation using myocardial pathology segmentation (MyoPS), where we removed the shape regularization term Lshape, since pathologies lack such shape regularity.

Table 5 shows the performance in detail, and Figure 9 visualizes two typical cases, i.e., the median cases by the average Dice scores of edema and scar segmentation, respectively. One can find that the advantages of the proposed methodologies were demonstrated evidently in this challenging task, as the performance gains, in terms of both Dice and HD, were significant from CycleMix, ShapePU and finally to ZScribbleSeg, compared to PCE, WSL4, GatedCRF, CVIR and nnPU (p < 0.001). In fact, the scribble-supervised segmentation of edema by the five compared methods failed, as did the scar segmentation of WSL4 and GatedCRF. This is illustrated by the visualized examples in Figure 9. Although WSL4 and GatedCRF worked well with scribble supervision in the above two regular structure segmentation tasks, they suffered severely from noisy labels due to their dependence on pseudo labels for training, which led to the failure of model training. Furthermore, due to the similar texture between edema and the surrounding tissues in all imaging modalities, it could be extremely difficult to segment such pathology relying solely on training images without robust estimation and regularization of class mixture ratios. One can see from the results that this failed all five compared methods in edema segmentation. By contrast, ShapePU and ZScribbleSeg succeeded in this task thanks to their own methods of estimating the class prior π and applying spatial regularization, which is affirmed by the fact that they both achieved HDs comparable to those of FullSupUNet for scar and edema segmentation. Notice that CycleMix did not show such good performance in terms of HD, but it achieved comparably good Dice scores thanks to its adoption of supervision augmentation.
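One way to see why an estimated class prior π helps: predicted class probabilities can be rebalanced so that their average matches π, suppressing a systematically over- or under-predicted class such as edema. The sketch below is an illustrative EM-style prior adjustment in the spirit of Latinne et al. [24]; it is not the paper's own estimation method for π or its spatial prior loss, and the function name and fixed-point iteration are our assumptions:

```python
def adjust_to_prior(probs, pi, iters=50, eps=1e-12):
    """Reweight per-pixel class probabilities `probs` (lists summing to 1)
    so the average predicted proportion of each class matches the target
    mixture ratio `pi`."""
    k = len(pi)
    w = [1.0] * k                      # per-class reweighting factors
    for _ in range(iters):
        avg = [0.0] * k                # average proportion under current w
        for p in probs:
            z = sum(w[c] * p[c] for c in range(k))
            for c in range(k):
                avg[c] += w[c] * p[c] / z
        avg = [a / len(probs) for a in avg]
        # scale each class weight toward the target proportion
        w = [w[c] * pi[c] / max(avg[c], eps) for c in range(k)]
    out = []                           # renormalised, prior-matched posteriors
    for p in probs:
        z = sum(w[c] * p[c] for c in range(k))
        out.append([w[c] * p[c] / z for c in range(k)])
    return out
```

With a biased predictor that over-assigns foreground probability everywhere but a true mixture ratio of 0.5, the adjusted average foreground probability is pulled back toward 0.5, the kind of correction that makes a weak-appearance class recoverable.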
Segmentation from natural scenes: We further validated the broad utility of ZScribbleSeg on the human pose segmentation task of natural scene images. We applied all the methods to the PPSS dataset, which consists of pedestrian images with occlusions, captured by different cameras at different resolutions.

Table 6 presents the details, together with the summarized results from the previous three studies, i.e., ACDC, MSCMRseg and MyoPS. Similar to the three medical image segmentation tasks, the model of ZScribbleSeg generalized well to this 3-channel colored natural image segmentation task, with performance comparable to FullSupUNet and Dice accuracy setting a new state of the art for scribble-supervised segmentation.

Figure 10 visualizes three representative cases, i.e., the best, median and worst cases according to the average Dice of all compared methods. One can see from the figure that ZScribbleNet performed robustly and generated realistic segmentations with less noisy results, particularly compared with the other scribble-supervised methods and the fully supervised one (FullSupUNet).
Fig. 10. Visualization of results on the PPSS dataset. The selected subjects were the best, median and worst cases by the average Dice scores of all compared methods. (Panels: Image, Scribble, Ground Truth, and the segmentations of PCE, CVIR, nnPU, ShapePU, CycleMix, WSL4, ZScribbleSeg and FullSupUNet, annotated with per-method average Dice scores.)
Table 6. Dice results of the 10 methods on the four datasets. Sizes of the training sets are given in brackets.

Methods: ACDC (70) / MSCMRseg (25) / MyoPS (20) / PPSS (2828)
PCE: .770±.126 / .385±.243 / .281±.271 / .805±.063
WSL4 [29]: .792±.166 / .848±.076 / - / .762±.045
GatedCRF [31]: .804±.135 / .825±.032 / - / -
MAAG [42]: .816 / - / - / .746
CVIR [14]: .800±.130 / .368±.095 / .293±.263 / .809±.054
nnPU [20]: .828±.123 / .437±.115 / .308±.282 / .794±.055
CycleMix [52]: .833±.098 / .771±.069 / .588±.191 / .835±.050
ShapePU [53]: .848±.100 / .833±.082 / .587±.205 / .823±.055
ZScribbleSeg: .862±.086 / .870±.058 / .636±.188 / .838±.050
FullSupUNet: .854±.113 / .852±.076 / .633±.202 / .843±.071
4 Conclusion

In this work, we have presented a new framework for scribble-supervised segmentation, i.e., ZScribbleSeg, which integrates efficient scribbles and prior regularization with the implementation of a deep neural network (ZScribbleNet). ZScribbleSeg exploits the principles of "good scribble annotations", and effectively augments the scribble supervision of ZScribbleNet via mixup-occlusion operations and global consistency regularization. Then, we explored capturing global information by incorporating prior knowledge, particularly via the proposed spatial prior loss and shape prior loss. The spatial prior loss was based on the estimated spatial energy and the label class mixture proportions π. The former provides a new means to identify the probability of unlabeled pixels belonging to each class without directly using model predictions, while the latter was developed based on a novel estimation method and aimed at correcting problematic predictions via the regularization of the spatial prior loss.
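The mixup-occlusion idea mentioned above can be illustrated in miniature. This is a toy sketch of the general idea only, not the paper's exact operations: the box coordinates, mixing weight and label rule are all assumptions.

```python
def mixup_occlude(img_a, lab_a, img_b, lab_b, lam=0.7, box=(2, 2, 4, 4)):
    """Toy mixup-occlusion on 2-D images stored as lists of rows.

    Pixels inside `box` = (r0, c0, r1, c1) are occluded by image b (and take
    its scribble labels); elsewhere the images are mixed with weight `lam`
    and the labels follow the dominant image a.
    """
    r0, c0, r1, c1 = box
    h, w = len(img_a), len(img_a[0])
    img, lab = [], []
    for r in range(h):
        irow, lrow = [], []
        for c in range(w):
            if r0 <= r < r1 and c0 <= c < c1:
                irow.append(img_b[r][c])      # occluded region copies b
                lrow.append(lab_b[r][c])
            else:
                irow.append(lam * img_a[r][c] + (1 - lam) * img_b[r][c])
                lrow.append(lab_a[r][c])      # labels follow dominant image
        img.append(irow)
        lab.append(lrow)
    return img, lab
```

Training on such composites supplies supervision signals that the sparse scribbles alone do not provide, which is the sense in which the operation "augments" the supervision.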
To examine the performance of ZScribbleSeg, we investigated a variety of segmentation tasks, including regular structural segmentation of cardiac ventricles from anatomical imaging data (the ACDC dataset), regular structural segmentation of pathology enhanced imaging data (MSCMRseg), irregular object segmentation from multi-modality imaging (MyoPS), and human pose segmentation from natural scenes (PPSS). Compared to other approaches, ZScribbleSeg has shown great competence and achieved performance comparable to the fully supervised UNet method. Particularly, thanks to the augmented supervision and prior regularization, ZScribbleSeg performed well and demonstrated reliability and generalizability in the scenarios of a small training set (the MSCMRseg task) and irregular structure segmentation (the MyoPS task), both of which failed several other state-of-the-art approaches.
References

1. Baumgartner, C.F., Koch, L.M., Pollefeys, M., Konukoglu, E.: An exploration of 2D and 3D deep learning techniques for cardiac MR image segmentation. In: International Workshop on Statistical Atlases and Computational Models of the Heart, pp. 111–119. Springer (2017)
2. Bearman, A., Russakovsky, O., Ferrari, V., Fei-Fei, L.: What's the point: Semantic segmentation with point supervision. In: European Conference on Computer Vision, pp. 549–565. Springer (2016)
3. Bekker, J., Davis, J.: Estimating the class prior in positive and unlabeled data through decision tree induction. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32 (2018)
4. Bernard, O., Lalande, A., Zotti, C., Cervenansky, F., Yang, X., Heng, P.A., Cetin, I., Lekadir, K., Camara, O., Gonzalez Ballester, M.A., Sanroma, G., Napel, S., Petersen, S., Tziritas, G., Grinias, E., Khened, M., Kollerathu, V.A., Krishnamurthi, G., Rohé, M.M., Pennec, X., Sermesant, M., Isensee, F., Jäger, P., Maier-Hein, K.H., Full, P.M., Wolf, I., Engelhardt, S., Baumgartner, C.F., Koch, L.M., Wolterink, J.M., Išgum, I., Jang, Y., Hong, Y., Patravali, J., Jain, S., Humbert, O., Jodoin, P.M.: Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: Is the problem solved? IEEE Transactions on Medical Imaging 37(11), 2514–2525 (2018). https://doi.org/10.1109/TMI.2018.2837502
5. Bishop, C.M.: Training with noise is equivalent to Tikhonov regularization. Neural Computation 7(1), 108–116 (1995)
6. Bishop, C.M., Nasrabadi, N.M.: Pattern Recognition and Machine Learning, vol. 4. Springer (2006)
7. Can, Y.B., Chaitanya, K., Mustafa, B., Koch, L.M., Konukoglu, E., Baumgartner, C.F.: Learning to segment medical images with scribble-supervision alone. In: DLMIA/ML-CDS@MICCAI (2018)
8. Chaitanya, K., Karani, N., Baumgartner, C.F., Becker, A., Donati, O., Konukoglu, E.: Semi-supervised and task-driven data augmentation. In: International Conference on Information Processing in Medical Imaging, pp. 29–41. Springer (2019)
9. Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence 40(4), 834–848 (2017)
10. DeVries, T., Taylor, G.W.: Improved regularization of convolutional neural networks with Cutout. arXiv preprint arXiv:1708.04552 (2017)
11. Du Plessis, M., Niu, G., Sugiyama, M.: Convex formulation for learning from positive and unlabeled data. In: International Conference on Machine Learning, pp. 1386–1394. PMLR (2015)
12. Du Plessis, M.C., Niu, G., Sugiyama, M.: Analysis of learning from positive and unlabeled data. Advances in Neural Information Processing Systems 27, 703–711 (2014)
13. Gao, S., Zhuang, X.: Robust approximations of low-rank minimization for tensor completion. Neurocomputing 379, 319–333 (2020)
14. Garg, S., Wu, Y., Smola, A.J., Balakrishnan, S., Lipton, Z.: Mixture proportion estimation and PU learning: A modern approach. Advances in Neural Information Processing Systems 34 (2021)
15. Huang, Z., Wang, X., Wang, J., Liu, W., Wang, J.: Weakly-supervised semantic segmentation network with deep seeded region growing. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7014–7023 (2018)
16. Ji, Z., Shen, Y., Ma, C., Gao, M.: Scribble-based hierarchical weakly supervised learning for brain tumor segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 175–183. Springer (2019)
17. Khoreva, A., Benenson, R., Hosang, J., Hein, M., Schiele, B.: Simple does it: Weakly supervised instance and semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 876–885 (2017)
18. Kim, J.H., Choo, W., Song, H.O.: Puzzle Mix: Exploiting saliency and local statistics for optimal mixup. In: International Conference on Machine Learning (ICML) (2020)
19. Kim, J., Choo, W., Jeong, H., Song, H.O.: Co-Mixup: Saliency guided joint mixup with supermodular diversity. In: International Conference on Learning Representations (2021)
20. Kiryo, R., Niu, G., du Plessis, M.C., Sugiyama, M.: Positive-unlabeled learning with non-negative risk estimator. In: Advances in Neural Information Processing Systems, vol. 30 (2017)
21. Koch, L.M., Rajchl, M., Bai, W., Baumgartner, C.F., Tong, T., Passerat-Palmbach, J., Aljabar, P., Rueckert, D.: Multi-atlas segmentation using partially annotated data: methods and annotation strategies. IEEE Transactions on Pattern Analysis and Machine Intelligence 40(7), 1683–1696 (2017)
22. Kohl, S., Romera-Paredes, B., Meyer, C., De Fauw, J., Ledsam, J.R., Maier-Hein, K., Eslami, S., Jimenez Rezende, D., Ronneberger, O.: A probabilistic U-Net for segmentation of ambiguous images. Advances in Neural Information Processing Systems 31 (2018)
23. Laine, S., Aila, T.: Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242 (2016)
24. Latinne, P., Saerens, M., Decaestecker, C.: Adjusting the outputs of a classifier to new a priori probabilities may significantly improve classification accuracy: evidence from a multi-class problem in remote sensing. In: ICML, vol. 1, pp. 298–305 (2001)
25. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015)
26. Li, L., Wu, F., Wang, S., Luo, X., Martin-Isla, C., Zhai, S., Zhang, J., Liu, Y., Zhang, Z., Ankenbrand, M.J., et al.: MyoPS: A benchmark of myocardial pathology segmentation combining three-sequence cardiac magnetic resonance images. arXiv preprint arXiv:2201.03186 (2022)
27. Lin, D., Dai, J., Jia, J., He, K., Sun, J.: ScribbleSup: Scribble-supervised convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3159–3167 (2016)
28. Luo, P., Wang, X., Tang, X.: Pedestrian parsing via deep decompositional network. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2648–2655 (2013)
29. Luo, X., Hu, M., Liao, W., Zhai, S., Song, T., Wang, G., Zhang, S.: Scribble-supervised medical image segmentation via dual-branch network and dynamically mixed pseudo labels supervision. In: Medical Image Computing and Computer Assisted Intervention (2022)
30. McLachlan, G.J., Krishnan, T.: The EM Algorithm and Extensions. John Wiley & Sons (2007)
31. Obukhov, A., Georgoulis, S., Dai, D., Gool, L.V.: Gated CRF loss for weakly supervised semantic image segmentation. arXiv preprint arXiv:1906.04651 (2019)
32. Obukhov, A., Georgoulis, S., Dai, D., Van Gool, L.: Gated CRF loss for weakly supervised semantic image segmentation. arXiv preprint arXiv:1906.04651 (2019)
33. Ouali, Y., Hudelot, C., Tami, M.: Semi-supervised semantic segmentation with cross-consistency training. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12674–12684 (2020)
34. Papandreou, G., Chen, L.C., Murphy, K.P., Yuille, A.L.: Weakly- and semi-supervised learning of a deep convolutional network for semantic image segmentation. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1742–1750 (2015)
35. Pathak, D., Shelhamer, E., Long, J., Darrell, T.: Fully convolutional multi-class multiple instance learning. arXiv preprint arXiv:1412.7144 (2014)
36. Rajchl, M., Koch, L.M., Ledig, C., Passerat-Palmbach, J., Misawa, K., Mori, K., Rueckert, D.: Employing weak annotations for medical image analysis problems. arXiv preprint arXiv:1708.06297 (2017)
37. Ramaswamy, H., Scott, C., Tewari, A.: Mixture proportion estimation via kernel embeddings of distributions. In: International Conference on Machine Learning, pp. 2052–2060. PMLR (2016)
38. Sakai, T., Plessis, M.C., Niu, G., Sugiyama, M.: Semi-supervised classification based on classification from positive and unlabeled data. In: International Conference on Machine Learning, pp. 2998–3006. PMLR (2017)
39. Tajbakhsh, N., Jeyaseelan, L., Li, Q., Chiang, J.N., Wu, Z., Ding, X.: Embracing imperfect datasets: A review of deep learning solutions for medical image segmentation. Medical Image Analysis 63, 101693 (2020)
40. Tang, M., Perazzi, F., Djelouah, A., Ayed, I.B., Schroers, C., Boykov, Y.: On regularized losses for weakly-supervised CNN segmentation. In: ECCV (2018)
41. Tarvainen, A., Valpola, H.: Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In: Guyon, I.,
2049
+ Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett,
2050
+ R. (eds.) Advances in Neural Information Processing Systems. vol. 30. Curran
2051
+ Associates, Inc. (2017)
2052
+ 42. Valvano, G., Leo, A., Tsaftaris, S.A.: Learning to segment from scribbles using
2053
+ multi-scale adversarial attention gates. IEEE Transactions on Medical Imaging
2054
+ pp. 1–1 (2021). https://doi.org/10.1109/TMI.2021.3069634
2055
+ 43. Verma, V., Lamb, A., Beckham, C., Najafi, A., Mitliagkas, I., Lopez-Paz, D., Ben-
2056
+ gio, Y.: Manifold mixup: Better representations by interpolating hidden states. In:
2057
+ International Conference on Machine Learning. pp. 6438–6447. PMLR (2019)
2058
+ 44. Wang, D., Zhang, Y., Zhang, K., Wang, L.: Focalmix: Semi-supervised learning
2059
+ for 3d medical image detection. In: Proceedings of the IEEE/CVF Conference on
2060
+ Computer Vision and Pattern Recognition. pp. 3951–3960 (2020)
2061
+ 45. Wang, W., Sun, G., Van Gool, L.: Looking beyond single images for weakly su-
2062
+ pervised semantic segmentation learning. IEEE Transactions on Pattern Analysis
2063
+ and Machine Intelligence (2022)
2064
+ 46. Wang, Y., Zhang, J., Kan, M., Shan, S., Chen, X.: Self-supervised equivariant
2065
+ attention mechanism for weakly supervised semantic segmentation. In: Proceedings
2066
+ of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp.
2067
+ 12275–12284 (2020)
2068
+ 47. Wu, F., Zhuang, X.: Minimizing estimated risks on unlabeled data: A new for-
2069
+ mulation for semi-supervised medical image segmentation. IEEE Transactions on
2070
+ Pattern Analysis and Machine Intelligence (2022)
2071
+ 48. Yue, Q., Luo, X., Ye, Q., Xu, L., Zhuang, X.: Cardiac segmentation from lge mri
2072
+ using deep neural network incorporating shape and spatial priors. In: International
2073
+ Conference on Medical Image Computing and Computer-Assisted Intervention. pp.
2074
+ 559–567. Springer (2019)
2075
+ 49. Yun, S., Han, D., Oh, S.J., Chun, S., Choe, J., Yoo, Y.: Cutmix: Regularization
2076
+ strategy to train strong classifiers with localizable features. In: International Con-
2077
+ ference on Computer Vision (ICCV) (2019)
2078
+ 50. Zhang, B., Xiao, J., Jiao, J., Wei, Y., Zhao, Y.: Affinity attention graph neural net-
2079
+ work for weakly supervised semantic segmentation. IEEE Transactions on Pattern
2080
+ Analysis and Machine Intelligence (2021)
2081
+ 51. Zhang, H., Cisse, M., Dauphin, Y.N., Lopez-Paz, D.: mixup: Beyond empirical
2082
+ risk minimization. International Conference on Learning Representations (2018),
2083
+ https://openreview.net/forum?id=r1Ddp1-Rb
2084
+ 52. Zhang, K., Zhuang, X.: Cyclemix: A holistic strategy for medical image segmen-
2085
+ tation from scribble supervision. In: Proceedings of the IEEE/CVF Conference on
2086
+ Computer Vision and Pattern Recognition. pp. 11656–11665 (2022)
2087
+ 53. Zhang, K., Zhuang, X.: Shapepu: A new pu learning framework regularized
2088
+ by global consistency for scribble supervised cardiac segmentation. In: Medical
2089
+ Image Computing and Computer Assisted Intervention (2022)
2090
+ 54. Zhang, P., Zhong, Y., Li, X.: Accl: Adversarial constrained-cnn loss for weakly
2091
+ supervised medical image segmentation (2020)
2092
+ 55. Zheng, S., Jayasumana, S., Romera-Paredes, B., Vineet, V., Su, Z., Du, D., Huang,
2093
+ C., Torr, P.H.: Conditional random fields as recurrent neural networks. In: Pro-
2094
+ ceedings of the IEEE international conference on computer vision. pp. 1529–1537
2095
+ (2015)
2096
+
2097
+ ZScribbleSeg
2098
+ 31
2099
+ 56. Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., Torralba, A.: Learning deep features
2100
+ for discriminative localization. In: Proceedings of the IEEE conference on computer
2101
+ vision and pattern recognition. pp. 2921–2929 (2016)
2102
+ 57. Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation
2103
+ using cycle-consistent adversarial networks. In: Proceedings of the IEEE interna-
2104
+ tional conference on computer vision. pp. 2223–2232 (2017)
2105
+ 58. Zhuang, X.: Multivariate mixture model for cardiac segmentation from multi-
2106
+ sequence mri. In: MICCAI (2016)
2107
+ 59. Zhuang, X.: Multivariate mixture model for myocardial segmentation combining
2108
+ multi-source images. IEEE Transactions on Pattern Analysis and Machine Intelli-
2109
+ gence 41(12), 2933–2946 (2019). https://doi.org/10.1109/TPAMI.2018.2869576
2110
+ 60. Zhuang, X., Shen, J.: Multi-scale patch and multi-modality atlases for whole heart
2111
+ segmentation of mri. Medical image analysis 31, 77–87 (2016)
2112
+
BNE4T4oBgHgl3EQfFAx2/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
D9FRT4oBgHgl3EQfxziA/content/tmp_files/2301.13643v1.pdf.txt ADDED
@@ -0,0 +1,1909 @@
+ arXiv:2301.13643v1 [math.CA] 31 Jan 2023
+
+ Some Expansion Formulas for Brenke Polynomial Sets
+ Hamza Chaggara, Abdelhamid Gahami and Neila Ben Romdhane
+ Last Revised: February 1, 2023
+
+ Abstract. In this paper, we derive some explicit expansion formulas associated with Brenke polynomials, using operational rules based on their corresponding generating functions. The coefficients obtained are expressed either as finite double sums, as finite single sums, or sometimes in closed hypergeometric form. The derived results are applied to the Generalized Gould-Hopper polynomials and to the Generalized Hermite polynomials introduced by Szegő and Chihara. Some well-known duplication and convolution formulas are deduced as particular cases.
+ Mathematics Subject Classification (2010). 33C45, 41A10, 41A58.
+ Keywords. Brenke polynomials, Connection coefficients, Generalized Gould-Hopper polynomials, Generalized Hermite polynomials, Generating functions, Linearization coefficients.
+
+ Contents
+ 1. Introduction 2
+ 2. Operators Associated to Brenke PSs 4
+ 2.1. Transfer Operator Associated to two Brenke Polynomials 4
+ 2.2. XD-Expansion of the Operator θ 5
+ 2.3. Examples 5
+ 2.3.1. Hypergeometric Transformation 6
+ 2.3.2. Particular Hypergeometric Transformation 7
+ 2.3.3. Dunkl Operator on the Real Line 8
+ 3. Connection and Linearization Problems 9
+ 3.1. Connection Problem 9
+ 3.1.1. Explicit Expression of the Connection Coefficients 10
+ 3.1.2. Connection between two Db-Appell PSs 11
+ 3.1.3. Addition and Convolution Type Formulas 11
+ 3.1.4. Duplication Formula 11
+ 3.2. Linearization Problems 12
+ 3.2.1. Appell Polynomials 13
+ 3.2.2. Explicit Expression of the LC 13
+ 4. Application to Generalized Gould-Hopper Polynomial Set 14
+ 4.1. Connection Problem 14
+ 4.2. Linearization Formula 16
+ 4.3. Generalized Hermite Polynomials 17
+ References 18
+ 1. Introduction
+ Let P be the vector space of polynomials with coefficients in C. A polynomial sequence in P is called a polynomial set (PS for short) if $\deg P_n = n$ for all n. The connection and linearization problems are defined as follows.
+ Given two PSs $\{P_n\}_{n\ge0}$ and $\{Q_n\}_{n\ge0}$, the so-called connection problem between them asks for the coefficients $C_m(n)$, called connection coefficients (CC), in the expansion
+ $Q_n(x)=\sum_{m=0}^{n}C_m(n)P_m(x)$.  (1.1)
+ The particular cases $Q_n(x)=x^n$ and $Q_n(x)=P_n(ax)$, $a\neq0$, in (1.1) are known, respectively, as the inversion formula for $\{P_n\}_{n\ge0}$ and the duplication (or multiplication) formula associated with $\{P_n\}_{n\ge0}$.
+ Given three PSs $\{P_n\}_{n\ge0}$, $\{R_n\}_{n\ge0}$ and $\{S_n\}_{n\ge0}$, taking $Q_{i+j}(x)=R_i(x)S_j(x)$ in (1.1) leads to the general linearization problem
+ $R_i(x)S_j(x)=\sum_{k=0}^{i+j}L_{ij}(k)P_k(x)$.  (1.2)
+ The coefficients $L_{ij}(k)$ are called linearization coefficients (LC). The particular case $P_n=R_n=S_n$ of this problem is known as the standard linearization problem, or Clebsch-Gordan-type problem.
+ The computation and the positivity of the aforementioned coefficients play important roles in many situations of pure and applied mathematics, ranging from combinatorics and statistical mechanics to group theory [4,21,23]. Different methods have therefore been developed in the literature, and several sufficient conditions for the sign properties to hold have been derived in [3,31], using for this purpose specific properties of the polynomials involved, such as orthogonality, generating functions, inversion formulas, hypergeometric expansion formulas, recurrence relations, algorithmic approaches, and inverse relations (see e.g. [1,2,8,13,24,32]). In particular, a general method based on operational rules and generating functions was developed for polynomial sets with equivalent lowering operators and with Boas-Buck generating functions [6,12,14].
+ In this paper, we discuss in depth both the connection and the linearization problems when the polynomials involved are of Brenke type. These polynomials are defined by their exponential generating functions as follows [9,17]:
+ $A(t)B(xt)=\sum_{n=0}^{\infty}\frac{P_n(x)}{n!}t^n$,  (1.3)
+ where A and B are two formal power series satisfying
+ $A(t)=\sum_{k=0}^{\infty}a_kt^k$, $B(t)=\sum_{k=0}^{\infty}b_kt^k$, $a_0b_k\neq0$ for all $k\in\mathbb{N}$.  (1.4)
+ Brenke PSs reduce to Appell ones when $B=\exp$, and they include many well-known polynomials of the literature, namely the monomials and the Hermite, Laguerre, Gould-Hopper, Generalized Hermite, Generalized Gould-Hopper, Appell-Dunkl, d-Hermite, d-Laguerre, Bernoulli, Euler, Al-Salam-Carlitz, little q-Laguerre, q-Laguerre and discrete q-Hermite PSs.
+ These polynomials appear in many areas of mathematics. In particular, in the framework of the standard orthogonality of polynomials, an exhaustive classification of all orthogonal Brenke polynomials was established by Chihara in [16]. Furthermore, Brenke polynomials play a central role in [25], where the authors determined all MRM-triples associated with Brenke-type generating functions. Moreover, the positive approximation process discovered by Korovkin, a powerful criterion for deciding whether a given sequence of positive linear operators on the space of continuous functions converges uniformly on this space, arises naturally in many problems connected with functional analysis, harmonic analysis, measure theory, partial differential equations, and probability theory. The most useful examples of such operators are the Szász operators, and many authors have obtained generalizations of these operators using Brenke polynomials (see [33,34] and the references therein).
+ This paper is organized as follows. In Section 2, we define the transfer linear operator between two Brenke polynomial sets and illustrate it by three interesting examples, in particular the hypergeometric transformation and the Dunkl operator on the real line. In Section 3, we derive expansion formulas associated with Brenke polynomials using operational rules, and we give connection, linearization, inversion, duplication and addition formulas corresponding to these polynomials. The coefficients obtained are expressed through generating functions involving the associated transfer linear operators. Finally, in Section 4, we apply the obtained results to both the Generalized Gould-Hopper PS (GGHPS) and the Generalized Hermite PS (or Szegő-Chihara PS), and we recover many known formulas as special cases.
+
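As a concrete illustration of definition (1.3), the classical Hermite polynomials are the Brenke PS obtained from $A(t)=e^{-t^2}$ and $B(u)=e^{2u}$. The following sketch, which extracts $P_n$ from the generating function by series expansion, is an illustrative assumption and not part of the original text:

```python
import sympy as sp

x, t = sp.symbols('x t')

# Brenke generating function A(t) * B(x t) = sum_n P_n(x) t^n / n!,
# specialized (illustrative choice) to the Hermite case:
# A(t) = exp(-t^2), B(u) = exp(2u), so that P_n = H_n.
A = sp.exp(-t**2)
B = lambda u: sp.exp(2*u)

N = 6
gf = sp.expand(sp.series(A*B(x*t), t, 0, N).removeO())
P = [sp.expand(sp.factorial(n)*gf.coeff(t, n)) for n in range(N)]

# a Brenke PS satisfies deg P_n = n; here P_n is the classical Hermite H_n
for n in range(N):
    assert sp.degree(P[n], x) == n
    assert sp.expand(P[n] - sp.hermite(n, x)) == 0
```

Any other admissible pair (A, B) with $a_0 b_k \neq 0$ can be substituted in the same way.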
+ 2. Operators Associated to Brenke PSs
+ In this section, we first introduce a transfer operator between two Brenke families; we then state its expression as an infinite series in the derivative operator D and the multiplication operator X, known as the XD-expansion [19]. Finally, we give some examples.
+
+ 2.1. Transfer Operator Associated to two Brenke Polynomials
+ Any Brenke PS $\{P_n\}_{n\ge0}$ generated by (1.3) is $D_b$-Appell with transfer power series A, where A and $b=(b_n)$ are defined in (1.4). That is,
+ $D_bP_{n+1}=(n+1)P_n$ and $A(D_b)(b_nx^n)=\frac{P_n}{n!}$, $n=0,1,2,\dots$,  (2.1)
+ where $D_b$ denotes the linear operator on P defined by [6]:
+ $D_b(1)=0$, $D_b(x^n)=\frac{b_{n-1}}{b_n}x^{n-1}$, $n=1,2,\dots$.  (2.2)
+ The operator $D_b$ is known as the lowering operator of the PS $\{P_n\}_{n\ge0}$, while A is the associated transfer series (for more details, see [5]).
+ Let $\{P_n\}_{n\ge0}$ and $\{Q_n\}_{n\ge0}$ be two Brenke PSs generated respectively by
+ $A_1(t)B_1(xt)=\sum_{n=0}^{\infty}\frac{P_n(x)}{n!}t^n$ and $A_2(t)B_2(xt)=\sum_{n=0}^{\infty}\frac{Q_n(x)}{n!}t^n$,  (2.3)
+ where, for $i=1,2$,
+ $A_i(t)=\sum_{k=0}^{\infty}a_k^{(i)}t^k$, $B_i(t)=\sum_{k=0}^{\infty}b_k^{(i)}t^k$, $a_0^{(i)}b_k^{(i)}\neq0$ for all $k\in\mathbb{N}$.  (2.4)
+ Then the corresponding operators $D_{b^{(1)}}$ and $D_{b^{(2)}}$ are related by
+ $D_{b^{(2)}}\,\theta=\theta\,D_{b^{(1)}}$,  (2.5)
+ where θ is the bijective linear operator from P onto P (an isomorphism of P) acting on monomials as follows:
+ $\theta(x^n)=\frac{b_n^{(2)}}{b_n^{(1)}}x^n$ and $\theta^{-1}(x^n)=\frac{b_n^{(1)}}{b_n^{(2)}}x^n$.  (2.6)
+ The linear operator θ can be extended to a transfer operator taking any formal power series to another formal power series:
+ $\theta\Big(\sum_{n\ge0}a_nx^n\Big)=\sum_{n\ge0}a_n\theta(x^n)$,  (2.7)
+ and if $\varphi(x)$ denotes a formal power series, one easily checks that
+ $\theta\Big(\varphi(x)\sum_{k=0}^{\infty}a_kx^k\Big)=\sum_{k=0}^{\infty}a_k\,\theta\big(\varphi(x)x^k\big)$.  (2.8)
+ Hence, it is clear that
+ $\theta(B_1(x))=B_2(x)$.  (2.9)
+ The operator θ will be called the transfer operator from $B_1$ to $B_2$, or the transfer operator from $\{P_n\}_{n\ge0}$ to $\{Q_n\}_{n\ge0}$.
+
+ 2.2. XD-Expansion of the Operator θ
+ Recall that any operator L acting on formal power series has the following formal expansion, known as the XD-expansion (see [19] and the references therein):
+ $L=\sum_{k=0}^{\infty}A_k(X)D^k$,  (2.10)
+ where D denotes the ordinary differentiation operator and $\{A_k(x)\}_{k\ge0}$ is a polynomial sequence such that
+ $L\,e^{xt}=\sum_{k=0}^{\infty}A_k(x)t^ke^{xt}$.  (2.11)
+ Note that the infinite sum in (2.10) is always well defined on P since, when applied to any given polynomial, only a finite number of terms make a nonzero contribution.
+ The XD-expansion of the transfer operator θ is given explicitly as follows.
+ Proposition 2.1. The operator θ defined by (2.6) has the formal expansion
+ $\theta=\sum_{k=0}^{\infty}\frac{\varphi_k}{k!}X^kD^k$, where $\varphi_k=(-1)^k\sum_{m=0}^{k}\frac{(-k)_m}{m!}\frac{b_m^{(2)}}{b_m^{(1)}}$.  (2.12)
+ Proof. Using (2.6) and (2.7), and substituting L by θ in (2.11), we obtain
+ $\theta(e^{xt})=\sum_{k=0}^{\infty}\frac{b_k^{(2)}}{b_k^{(1)}}\frac{(xt)^k}{k!}=\sum_{k=0}^{\infty}A_k(x)t^ke^{xt}$.
+ Therefore,
+ $\sum_{k=0}^{\infty}A_k(x)t^k=e^{-xt}\sum_{k=0}^{\infty}\frac{b_k^{(2)}}{b_k^{(1)}}\frac{(xt)^k}{k!}=\sum_{k=0}^{\infty}\Big(\sum_{m=0}^{k}(-1)^k\frac{(-k)_m}{m!}\frac{b_m^{(2)}}{b_m^{(1)}}\Big)\frac{(xt)^k}{k!}$,
+ which establishes the desired result. □
+
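Proposition 2.1 can be checked directly on polynomials. The choice of the sequences $b^{(1)}, b^{(2)}$ below is an arbitrary illustrative assumption; the sketch compares the monomial action (2.6)-(2.7) of θ with its XD-expansion (2.12):

```python
import sympy as sp

x = sp.symbols('x')

# illustrative Brenke data (an assumption): b1_n = 1/n!, b2_n = 1/(2^n n!),
# so that theta(x^n) = (b2_n / b1_n) x^n = x^n / 2^n
N = 8
b1 = [sp.Integer(1)/sp.factorial(n) for n in range(N)]
b2 = [sp.Integer(1)/(2**n*sp.factorial(n)) for n in range(N)]

def theta(p):
    # direct action of theta on a polynomial, via (2.6) and (2.7)
    p = sp.expand(p)
    return sp.expand(sum(p.coeff(x, n)*(b2[n]/b1[n])*x**n
                         for n in range(sp.degree(p, x) + 1)))

def phi(k):
    # coefficient phi_k of Proposition 2.1
    return (-1)**k*sum(sp.rf(-k, m)/sp.factorial(m)*b2[m]/b1[m]
                       for m in range(k + 1))

def theta_XD(p):
    # theta through its XD-expansion (2.12): sum_k phi_k/k! X^k D^k
    d = sp.degree(p, x)
    return sp.expand(sum(phi(k)/sp.factorial(k)*x**k*sp.diff(p, x, k)
                         for k in range(d + 1)))

p = 3*x**4 - 5*x**2 + 7
assert sp.expand(theta(p) - theta_XD(p)) == 0
```

On a polynomial of degree d only the terms $k \le d$ of (2.12) contribute, which is why the truncated sum suffices here.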
+ 2.3. Examples
+ Here, we consider three interesting particular cases of the linear operator θ associated to two Brenke PSs, and we essentially give integral representations for this operator.
+
+ 2.3.1. Hypergeometric Transformation. Recall first that ${}_rF_s$ denotes the generalized hypergeometric function with r numerator parameters and s denominator parameters, defined as
+ ${}_rF_s\big((\alpha_r);(\beta_s);x\big)=\sum_{k=0}^{\infty}\frac{(\alpha_1)_k(\alpha_2)_k\cdots(\alpha_r)_k}{(\beta_1)_k(\beta_2)_k\cdots(\beta_s)_k}\frac{x^k}{k!}$,  (2.13)
+ where the contracted notation $(\alpha_r)$ is used to abbreviate the array $\{\alpha_1,\dots,\alpha_r\}$, and $(\alpha)_n$ denotes the Pochhammer symbol
+ $(\alpha)_n=\frac{\Gamma(\alpha+n)}{\Gamma(\alpha)}$.  (2.14)
+ Consider two Brenke PSs $\{P_n\}_{n\ge0}$ and $\{Q_n\}_{n\ge0}$ generated by (2.3) and (2.4), and such that the corresponding transfer linear operator θ takes the form
+ $\theta(x^n)=\frac{b_n^{(2)}}{b_n^{(1)}}x^n=\frac{(\gamma_1)_n(\gamma_2)_n\cdots(\gamma_p)_n}{(\delta_1)_n(\delta_2)_n\cdots(\delta_p)_n}x^n$, $\gamma_i\in\mathbb{C}$, $\delta_i\in\mathbb{C}\setminus(-\mathbb{N})$.  (2.15)
+ In this case, for the action of the operator θ on hypergeometric functions, we have the following result.
+ Proposition 2.2. Let θ be defined by (2.15) with $0<\Re(\gamma_i)<\Re(\delta_i)$. Then, for $r\le s+1$ and $|x|<1$, we have
+ $\theta\,{}_rF_s\big((\alpha_r);(\beta_s);x\big)=\prod_{i=1}^{p}\frac{1}{\beta(\gamma_i,\delta_i-\gamma_i)}\int_{]0,1[^p}\prod_{i=1}^{p}u_i^{\gamma_i-1}(1-u_i)^{\delta_i-\gamma_i-1}\;{}_rF_s\Big((\alpha_r);(\beta_s);x\prod_{i=1}^{p}u_i\Big)\,du_1\cdots du_p$,  (2.16)
+ where β designates the usual Euler Beta function,
+ $\beta(\gamma,\delta)=\int_0^1t^{\gamma-1}(1-t)^{\delta-1}\,dt=\frac{\Gamma(\gamma)\Gamma(\delta)}{\Gamma(\gamma+\delta)}$, $\Re(\gamma),\Re(\delta)>0$.  (2.17)
+ Proof. From (2.7) and (2.15), we have
+ $\theta\,{}_rF_s\big((\alpha_r);(\beta_s);x\big)={}_{p+r}F_{p+s}\big((\alpha_r),(\gamma_p);(\beta_s),(\delta_p);x\big)$.
+ Thus, by the Euler integral representation of generalized hypergeometric functions, we obtain (see [27, p. 85]):
+ ${}_{p+r}F_{p+s}\big((\alpha_r),(\gamma_p);(\beta_s),(\delta_p);x\big)=\frac{\Gamma(\delta_p)}{\Gamma(\gamma_p)\Gamma(\delta_p-\gamma_p)}\int_0^1u_p^{\gamma_p-1}(1-u_p)^{\delta_p-\gamma_p-1}\,{}_{p+r-1}F_{p+s-1}\big((\alpha_r),(\gamma_{p-1});(\beta_s),(\delta_{p-1});xu_p\big)\,du_p$,
+ and after $(p-1)$ similar applications of the Euler integral representation we get the desired result. □
+
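Proposition 2.2 can be checked numerically in the simplest case $p=1$: there, θ maps ${}_2F_1(a,b;c;\cdot)$ to ${}_3F_2(a,b,\gamma;c,\delta;\cdot)$, which must equal the Beta-weighted integral of (2.16). The parameter values below are arbitrary test choices (an assumption of this sketch):

```python
import mpmath as mp

a, b, c = mp.mpf('0.7'), mp.mpf('1.3'), mp.mpf('2.1')   # arbitrary 2F1 parameters
g, d = mp.mpf('0.9'), mp.mpf('1.8')                     # 0 < Re(gamma) < Re(delta)
xv = mp.mpf('0.4')

# left side: theta applied termwise to 2F1 gives 3F2 (case p = 1, r = 2, s = 1)
lhs = mp.hyper([a, b, g], [c, d], xv)

# right side: Euler/Beta integral representation, eq. (2.16) with p = 1
rhs = mp.quad(lambda u: u**(g - 1)*(1 - u)**(d - g - 1)*mp.hyp2f1(a, b, c, xv*u),
              [0, 1]) / mp.beta(g, d - g)

assert mp.almosteq(lhs, rhs, rel_eps=mp.mpf('1e-10'))
```

The endpoint singularities $u^{\gamma-1}$ and $(1-u)^{\delta-\gamma-1}$ are integrable under the stated condition, and mpmath's default tanh-sinh quadrature handles them well.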
+ tion 2.1 is
509
+ φk = (−1)k
510
+ k
511
+
512
+ m=0
513
+ (−k)m
514
+ (γ1)m(γ2)m · · · (γp)m
515
+ m!(δ1)m(δ2)m · · · (δp)m
516
+ = (−1)kip+1Fp
517
+
518
+ −k, γ1, γ2, . . . , γp
519
+ δ1, δ2, . . . , δp
520
+ ; 1
521
+
522
+ .
523
+ Thus the corresponding XD expansion is
524
+ θ =
525
+
526
+
527
+ k=0
528
+ (−1)k
529
+ k!
530
+ p+1Fp
531
+
532
+ −k, γ1, γ2, . . . , γp
533
+ δ1, δ2, . . . , δp
534
+ ; 1
535
+
536
+ XkDk.
537
+ (2.18)
538
+ 2.3.2. Particular Hypergeometric Transformation. Here, we consider
539
+ the special case θ(xn) = (γ)n
540
+ (δ)n
541
+ xn, δ ̸= 0, −1, −2, . . ..
542
+ Proposition 2.3. For any analytic function f on ] − 1, 1[, f(x) =
543
+
544
+
545
+ n=0
546
+ anxn,
547
+ we have
548
+ θ(f)(x) =
549
+ 1
550
+ β(γ, δ − γ)
551
+ � 1
552
+ 0
553
+ tγ−1(1−t)δ−γ−1f(xt)dt, 0 < ℜ(γ) < ℜ(δ). (2.19)
554
+ Moreover, the XD-expansion of θ is the following
555
+ θ =
556
+
557
+
558
+ k=0
559
+ (−1)k
560
+ k!
561
+ (δ − γ)k
562
+ (γ)k
563
+ XkDk.
564
+ (2.20)
565
+ Proof. By using (2.14) and (2.17), we obtain
566
+ (γ)n
567
+ (δ)n
568
+ xn = Γ(γ + n)
569
+ Γ(δ + n)
570
+ Γ(δ)
571
+ Γ(γ)xn =
572
+ 1
573
+ β(γ, δ − γ)
574
+ � 1
575
+ 0
576
+ tγ−1(1 − t)δ−γ−1(xt)ndt.
577
+ Thus, substituting the above equation in (2.7), we obtain (2.19) since the
578
+ term-by-term integration is justified by the convergence of the series
579
+
580
+ n≥0
581
+ � 1
582
+ 0
583
+ ��antγ−1(1 − t)δ−γ−1(xt)n�� dt.
584
+ For (2.20), we use (2.18) and the Chu-Vandermonde reduction formula:
585
+ 2F1
586
+ � −k, γ
587
+ δ
588
+ ; 1
589
+
590
+ = (δ − γ)k
591
+ (δ)k
592
+ ,
593
+ δ ̸= 0, −1, −2, . . ..
594
+ (2.21)
595
+ Thus the proof is completed.
596
+
597
+
598
+ 8
599
+ H. Chaggara, A. Gahami and N. Ben Romdhane
600
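The Chu-Vandermonde reduction (2.21), which turns (2.18) into (2.20) for $p=1$, can be verified symbolically for small k with a quick sympy sketch (the terminating ${}_2F_1$ is written out as its finite sum):

```python
import sympy as sp

g, d = sp.symbols('gamma delta', positive=True)

for k in range(7):
    # 2F1(-k, gamma; delta; 1) as a terminating sum over m = 0, ..., k
    lhs = sum(sp.rf(-k, m)*sp.rf(g, m)/(sp.rf(d, m)*sp.factorial(m))
              for m in range(k + 1))
    # Chu-Vandermonde closed form, eq. (2.21)
    rhs = sp.rf(d - g, k)/sp.rf(d, k)
    assert sp.simplify(sp.expand_func(lhs - rhs)) == 0
```

Since k is an explicit nonnegative integer, the rising factorials expand to polynomials and the identity reduces to a rational-function simplification.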
+ 2.3.3. Dunkl Operator on the Real Line. The well-known Dunkl operator $D_\mu$ associated with the parameter µ on the real line provides a useful tool in the study of special functions related to root systems of finite reflection groups [20], and it is closely related to certain representations of degenerate affine Hecke algebras [26]. This operator is defined by [20]:
+ $D_\mu(f)(x)=Df(x)+\frac{\mu}{x}\big(f(x)-f(-x)\big)$, $\mu\in\mathbb{C}$,  (2.22)
+ where f is a complex-valued function of a real variable and D is the differentiation operator.
+ The Dunkl operator acts on monomials as follows:
+ $D_\mu(x^n)=\frac{\gamma_\mu(n)}{\gamma_\mu(n-1)}x^{n-1}$, $\mu\neq-\frac12,-\frac32,\dots$,  (2.23)
+ where
+ $\gamma_\mu(2p+\epsilon)=2^{2p+\epsilon}\,p!\,\big(\mu+\tfrac12\big)_{p+\epsilon}$, $\epsilon=0,1$.  (2.24)
+ Hence, $D_\mu$ is an operator of $D_b$ type, with $b_n=\frac{1}{\gamma_\mu(n)}$, and we have the following result.
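The action (2.23) of $D_\mu$ on monomials follows directly from definition (2.22); a short symbolic check, with a sample value of µ chosen purely for illustration:

```python
import sympy as sp

x = sp.symbols('x')
mu = sp.Rational(3, 4)   # sample parameter (an assumption), mu != -1/2, -3/2, ...

def D_mu(f):
    # Dunkl operator on the real line, definition (2.22)
    return sp.simplify(sp.diff(f, x) + mu/x*(f - f.subs(x, -x)))

def gamma_mu(n):
    # gamma_mu(2p + eps) = 2^(2p+eps) p! (mu + 1/2)_(p+eps), eq. (2.24)
    p, eps = divmod(n, 2)
    return 2**n*sp.factorial(p)*sp.rf(mu + sp.Rational(1, 2), p + eps)

# check D_mu(x^n) = gamma_mu(n)/gamma_mu(n-1) x^(n-1), eq. (2.23)
for n in range(1, 8):
    assert sp.simplify(D_mu(x**n) - gamma_mu(n)/gamma_mu(n - 1)*x**(n - 1)) == 0
```

For even n the reflection term vanishes and $D_\mu(x^n)=nx^{n-1}$; for odd n it contributes $2\mu x^{n-1}$, which matches the ratio $\gamma_\mu(n)/\gamma_\mu(n-1)=n+2\mu$.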
+ Proposition 2.4. Let $\mu_1$ and $\mu_2$ be two real numbers satisfying $-\frac12<\mu_1<\mu_2$, and let θ be given by
+ $\theta(x^n)=\frac{\gamma_{\mu_1}(n)}{\gamma_{\mu_2}(n)}x^n$.  (2.25)
+ Then, for any analytic function f on $]-1,1[$, the following integral representation of θ holds true:
+ $\theta(f)(x)=\frac{1}{\beta(\mu_1+\frac12,\mu_2-\mu_1)}\int_{-1}^{1}f(xt)\,|t|^{2\mu_1}(1-t)^{\mu_2-\mu_1-1}(1+t)^{\mu_2-\mu_1}\,dt$.  (2.26)
+ Proof. Using (2.14), (2.17) and (2.24) with µ replaced by $\mu_1$ and $\mu_2$, and for $n=2p+\epsilon$, $\epsilon=0,1$, we obtain
+ $\frac{\gamma_{\mu_1}(n)}{\gamma_{\mu_2}(n)}=\frac{\beta(\mu_1+\frac12+p+\epsilon,\,\mu_2-\mu_1)}{\beta(\mu_1+\frac12,\,\mu_2-\mu_1)}$.  (2.27)
+ Now, with the Beta integral representation (2.17), we get
+ $\beta(\mu_1+\tfrac12+p+\epsilon,\mu_2-\mu_1)=\int_0^1t^{\mu_1+p+\epsilon-\frac12}(1-t)^{\mu_2-\mu_1-1}\,dt$,
+ which, after the substitution $u^2=t$ and the distinction of the two cases $\epsilon=0$ and $\epsilon=1$, becomes
+ $\beta(\mu_1+\tfrac12+p+\epsilon,\mu_2-\mu_1)=\int_{-1}^{1}u^n|u|^{2\mu_1}(1-u)^{\mu_2-\mu_1-1}(1+u)^{\mu_2-\mu_1}\,du$.
+ Consequently, this gives
+ $\theta(x^n)=\frac{1}{\beta(\mu_1+\frac12,\mu_2-\mu_1)}\int_{-1}^{1}(xt)^n|t|^{2\mu_1}(1-t)^{\mu_2-\mu_1-1}(1+t)^{\mu_2-\mu_1}\,dt$,  (2.28)
+ and a term-by-term integration achieves the proof. □
+ The following two particular cases are worth noting.
+ • For $f=\exp_{\mu_1}$, and according to (2.9), it is clear that $\theta(\exp_{\mu_1})=\exp_{\mu_2}$, where the generalized exponential function $\exp_\mu$ is defined by [28]
+ $\exp_\mu(x)=\sum_{n=0}^{\infty}\frac{x^n}{\gamma_\mu(n)}$, $\mu\neq-\frac12,-\frac32,-\frac52,\dots$.  (2.29)
+ So, for $-\frac12<\mu_1<\mu_2$, and by virtue of (2.26), the following integral representation of $\exp_{\mu_2}$ holds true [28, Eq. (2.3.4)]:
+ $\exp_{\mu_2}(x)=\frac{1}{\beta(\mu_1+\frac12,\mu_2-\mu_1)}\int_{-1}^{1}\exp_{\mu_1}(xt)\,|t|^{2\mu_1}(1-t)^{\mu_2-\mu_1-1}(1+t)^{\mu_2-\mu_1}\,dt$.
+ • For $\mu_1=0$ and $\mu_2=\mu>0$, the transfer operator θ reduces to the well-known Dunkl intertwining operator $V_\mu$ in the one-dimensional case, and (2.26) is nothing else than its corresponding integral representation [20, Theorem 5.1]:
+ $V_\mu(f)(x)=\frac{1}{\beta(\frac12,\mu)}\int_{-1}^{1}f(xt)(1-t)^{\mu-1}(1+t)^{\mu}\,dt$.  (2.30)
+
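Formula (2.26), equivalently (2.28) on monomials, can be checked numerically; the parameter values below are arbitrary test choices (an assumption of this sketch):

```python
import mpmath as mp

def gamma_mu(mu, n):
    # gamma_mu(2p + eps) = 2^(2p+eps) p! (mu + 1/2)_(p+eps), eq. (2.24)
    p, eps = divmod(n, 2)
    return mp.mpf(2)**n*mp.factorial(p)*mp.rf(mu + mp.mpf('0.5'), p + eps)

mu1, mu2 = mp.mpf('0.3'), mp.mpf('1.1')      # -1/2 < mu1 < mu2
norm = mp.beta(mu1 + mp.mpf('0.5'), mu2 - mu1)

# compare theta(x^n) = gamma_{mu1}(n)/gamma_{mu2}(n) x^n with the
# weighted integral of (2.28), splitting at 0 because of the |t| factor
for n in range(6):
    integral = mp.quad(
        lambda t: t**n*abs(t)**(2*mu1)*(1 - t)**(mu2 - mu1 - 1)*(1 + t)**(mu2 - mu1),
        [-1, 0, 1]) / norm
    assert mp.almosteq(integral, gamma_mu(mu1, n)/gamma_mu(mu2, n),
                       rel_eps=mp.mpf('1e-8'))
```

The weight has an integrable singularity at $t=1$ whenever $\mu_2-\mu_1<1$, which mpmath's tanh-sinh quadrature handles without special treatment.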
+ 3. Connection and Linearization Problems
718
+ In this section, we investigate connection and linearization formulas for
719
+ Brenke PSs.
720
+ 3.1. Connection Problem
721
+ Next, for two polynomial sequences of Brenke type, we state a generating
722
+ function for the connection coefficients using the operator θ. This result ap-
723
+ pears to be new. Some applications are given.
724
+ Theorem 3.1. Let {Pn}n≥0 and {Qn}n≥0 be two polynomial sequences gen-
725
+ erated by (2.3) and (2.4) and let θ be the corresponding transfer operator
726
+ defined in (2.6). Then the CC in (1.1), (Cm(n))n≥m≥0, are generated by:
727
+ A2(t)θ
728
+ � tm
729
+ A1(t)
730
+
731
+ =
732
+
733
+
734
+ n=m
735
+ m!
736
+ n! Cm(n)tn.
737
+ (3.1)
738
+
739
+ 10
740
+ H. Chaggara, A. Gahami and N. Ben Romdhane
741
+ Proof. On one hand, substituting (1.1) in (2.3) and using sum manipulations,
742
+ we get:
743
+ A2(t)B2(xt) =
744
+
745
+
746
+ n=0
747
+ Qn(x)tn
748
+ n! =
749
+
750
+
751
+ n=0
752
+
753
+ n
754
+
755
+ m=0
756
+ Cm(n)Pm(x)
757
+
758
+ tn
759
+ n!
760
+ =
761
+
762
+
763
+ m=0
764
+ � ∞
765
+
766
+ n=m
767
+ m!
768
+ n! Cm(n)tn
769
+
770
+ Pm(x)
771
+ m!
772
+ .
773
+ On the other hand, from (2.8), we have
774
+ A2(t)B2(xt) = A2(t)θtB1(xt) = A2(t)θt
775
+
776
+ 1
777
+ A1(t)
778
+
779
+
780
+ m=0
781
+ Pm(x)tm
782
+ m!
783
+
784
+ =
785
+
786
+
787
+ m=0
788
+ A2(t)θt
789
+ � tm
790
+ A1(t)
791
+ � Pm(x)
792
+ m!
793
+ .
794
+ Thus (3.1) follows and the proof is completed.
795
+
796
Some known results can be deduced from Theorem 3.1. Next, we quote four of the most important ones.

3.1.1. Explicit Expression of the Connection Coefficients. Write 1/A_1(t) = \sum_{n=0}^{\infty}\tilde a^{(1)}_n t^n; then

\theta_t\Big(\frac{t^m}{A_1(t)}\Big) = \sum_{n=0}^{\infty}\frac{b^{(2)}_{n+m}}{b^{(1)}_{n+m}}\,\tilde a^{(1)}_n\, t^{n+m}.

By virtue of (3.1), we get:

\sum_{n=m}^{\infty}\frac{m!}{n!}C_m(n)t^n
 = \Big(\sum_{n=0}^{\infty}a^{(2)}_n t^n\Big)\Big(\sum_{n=0}^{\infty}\frac{b^{(2)}_{n+m}}{b^{(1)}_{n+m}}\tilde a^{(1)}_n t^{n+m}\Big)
 = t^m\sum_{n=0}^{\infty}\Big(\sum_{k=0}^{n}a^{(2)}_k\frac{b^{(2)}_{n+m-k}}{b^{(1)}_{n+m-k}}\tilde a^{(1)}_{n-k}\Big)t^n
 = \sum_{n=m}^{\infty}\Big(\sum_{k=0}^{n-m}\frac{b^{(2)}_{n-k}}{b^{(1)}_{n-k}}a^{(2)}_k\tilde a^{(1)}_{n-m-k}\Big)t^n.

Thus,

C_m(n) = \frac{n!}{m!}\sum_{k=0}^{n-m}\frac{b^{(2)}_{n-k}}{b^{(1)}_{n-k}}\,a^{(2)}_k\,\tilde a^{(1)}_{n-m-k},  m = 0, ..., n.   (3.2)

In particular, we can deduce the explicit expansion and the inversion formula for any Brenke PS {P_n}_{n\ge 0} generated by (1.3):

\frac{P_n(x)}{n!} = \sum_{m=0}^{n} b_m a_{n-m}x^m,  and  b_n x^n = \sum_{m=0}^{n}\tilde a_{n-m}\frac{P_m(x)}{m!}.   (3.3)

Expansion Formulas for Brenke Polynomials
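As a numerical sanity check of the explicit expansion and inversion formulas in (3.3) (this snippet is not part of the paper; the Brenke pair A(t) = 1/(1-t), B(t) = exp(t) is an arbitrary test choice, so a_n = 1, b_n = 1/n!, and 1/A(t) = 1 - t gives the inversion coefficients):

```python
from fractions import Fraction

def fact(n):
    r = 1
    for i in range(2, n + 1):
        r *= i
    return r

def P(n, x):
    # explicit expansion (3.3): P_n(x) = n! * sum_{m=0}^{n} b_m a_{n-m} x^m
    return fact(n) * sum(Fraction(x) ** m / fact(m) for m in range(n + 1))

def monomial_via_inversion(n, x):
    # inversion (3.3): b_n x^n = sum_m ã_{n-m} P_m(x)/m!, with ã_0 = 1, ã_1 = -1
    s = P(n, x) / fact(n)
    if n >= 1:
        s -= P(n - 1, x) / fact(n - 1)
    return s

for n in range(6):
    assert monomial_via_inversion(n, Fraction(3, 2)) == Fraction(3, 2) ** n / fact(n)
```

Exact rational arithmetic is used so the identity is verified without floating-point tolerance.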
912
3.1.2. Connection between two D_b-Appell PSs. If B_1 = B_2 in (2.3), then by using (2.6), we obtain that the expression (3.1) takes the following simpler form [11]:

\frac{A_2(t)}{A_1(t)} = \sum_{n=m}^{\infty}\frac{m!}{n!}\,C_m(n)\,t^{n-m}.   (3.4)
922
+ (3.4)
923
+ 3.1.3. Addition and Convolution Type Formulas. The Brenke PS {Pn}n≥0
924
+ generated by (1.3) possesses the following generalized addition formula and
925
+ convolution type relation:
926
+ T b
927
+ yPn(x) =
928
+ n
929
+
930
+ m=0
931
+ n!
932
+ m!bn−myn−mPm(x),
933
+ and
934
+ A(Db)T b
935
+ yPn(x) =
936
+ n
937
+
938
+ m=0
939
+ �n
940
+ m
941
+
942
+ Pn−m(y)Pm(x),
943
+ where T b
944
+ y = B(yDb) designates the generalized translation operator satisfying
945
+ T b
946
+ y(B(xt) = B(yt)B(xt).
947
+ In fact, for the addition formula, we remark that the PS, {T b
948
+ yPn(x)}n≥0,
949
+ is generated by:
950
+ B(yt)A(t)B(xt) =
951
+
952
+
953
+ n=0
954
+ T b
955
+ yPn(x)
956
+ n!
957
+ tn,
958
+ then we apply (3.4) with A2(t) = B(yt)A(t) and A1(t) = A(t), to obtain
959
+ Cm(n) = n!
960
+ m!bn−myn−m.
961
+ For the convolution type relation, we apply the operator A(Db) to each
962
+ member of the addition formula and we use (2.1). We have
963
+ A(Db)T b
964
+ yPn(x) =
965
+ n
966
+
967
+ m=0
968
+ n!
969
+ m!(n − m)!A(Db)((n − m)!bn−myn−m)Pm(x)
970
+ =
971
+ n
972
+
973
+ m=0
974
+ �n
975
+ m
976
+
977
+ Pn−m(y)Pm(x).
978
3.1.4. Duplication Formula. The Brenke PS generated by (1.3) possesses the following duplication formula [11]:

P_n(ax) = \sum_{m=0}^{n}\frac{n!}{m!}\,a^m\beta_{n-m}P_m(x),  a \ne 0,   (3.5)

where \frac{A(t)}{A(at)} = \sum_{k=0}^{\infty}\beta_k t^k.

In fact, the PS Q_n(x) = P_n(ax) is generated by

A(t)B(axt) = \sum_{n=0}^{\infty}\frac{Q_n(x)}{n!}t^n.

Thus, by using (2.6) and (2.7), we have \theta(f)(x) = f(ax), where f is any formal power series. Now, from (3.1), with A_1(t) = A_2(t) = A(t), it follows immediately that

(at)^m\,\frac{A(t)}{A(at)} = \sum_{n=m}^{\infty}\frac{m!}{n!}\,C_m(n)\,t^n.
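The duplication formula (3.5) can be checked numerically; the snippet below (not from the paper) instantiates it for the Hermite case A(t) = e^{-t^2}, B(t) = e^{2t}, where A(t)/A(at) = e^{(a^2-1)t^2} gives \beta_{2k} = (a^2-1)^k/k! and vanishing odd coefficients:

```python
from fractions import Fraction

def fact(n):
    r = 1
    for i in range(2, n + 1):
        r *= i
    return r

def hermite(n, x):
    # physicists' Hermite via the recurrence H_{n+1} = 2x H_n - 2n H_{n-1}
    h0, h1 = Fraction(1), 2 * Fraction(x)
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2 * Fraction(x) * h1 - 2 * k * h0
    return h1

def dup_rhs(n, a, x):
    # sum_m (n!/m!) a^m beta_{n-m} H_m(x) with beta_{2k} = (a^2-1)^k / k!
    s = Fraction(0)
    for m in range(n + 1):
        if (n - m) % 2:
            continue  # beta vanishes at odd index
        k = (n - m) // 2
        beta = Fraction(a * a - 1) ** k / fact(k)
        s += Fraction(fact(n), fact(m)) * Fraction(a) ** m * beta * hermite(m, x)
    return s

for n in range(7):
    assert hermite(n, 2 * Fraction(3, 5)) == dup_rhs(n, 2, Fraction(3, 5))
```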
1015
3.2. Linearization Problems

In the following result, we provide a generating function for the LC involving three Brenke polynomials.

Theorem 3.2. Let {P_n}_{n\ge 0}, {R_n}_{n\ge 0} and {S_n}_{n\ge 0} be three Brenke PS with exponential generating functions:

A_1(t)B_1(xt),  A_2(t)B_2(xt)  and  A_3(t)B_3(xt),   (3.6)

where A_i(t) = \sum_{k=0}^{\infty}a^{(i)}_k t^k, B_i(t) = \sum_{k=0}^{\infty}b^{(i)}_k t^k, a^{(i)}_0 b^{(i)}_k \ne 0 for all k \in N, i = 1, 2, 3. Then the LC {L_{ij}(k)}_{i,j\ge 0}, k \in N, defined in (1.2) are generated by:

\frac{A_2(s)A_3(t)}{k!}\,\theta^{(2)}_s\theta^{(3)}_t\big(\theta^{(1)}_{s+t}\big)^{-1}\Big(\frac{(s+t)^k}{A_1(s+t)}\Big) = \sum_{i,j\ge 0}\frac{L_{ij}(k)}{i!j!}\,s^i t^j,   (3.7)

where \theta^{(i)}(t^n) = n!\,b^{(i)}_n t^n, i = 1, 2, 3.

We note that \theta^{(i)}, i = 1, 2, 3, are the transfer operators from {P_n}_{n\ge 0}, {R_n}_{n\ge 0} and {S_n}_{n\ge 0}, to the monomials, respectively.
1058
Proof. On one hand, according to (1.2) and with sum manipulation, we obtain:

\sum_{i,j\ge 0}R_i(x)S_j(x)\frac{s^i}{i!}\frac{t^j}{j!}
 = \sum_{i,j\ge 0}\Big(\sum_{k=0}^{i+j}L_{ij}(k)P_k(x)\Big)\frac{s^i}{i!}\frac{t^j}{j!}
 = \sum_{k=0}^{\infty}\Big(k!\sum_{i,j\ge 0}\frac{L_{ij}(k)}{i!j!}s^i t^j\Big)\frac{P_k(x)}{k!}.   (3.8)

On the other hand, by using (2.6), we can easily verify that

\theta^{(2)}_s\theta^{(3)}_t\big(\theta^{(1)}_{s+t}\big)^{-1}B_1((s+t)x) = \sum_{k=0}^{\infty}\Big(\sum_{l=0}^{k}b^{(2)}_l b^{(3)}_{k-l}s^l t^{k-l}\Big)x^k,

then

B_2(xs)B_3(xt) = \theta^{(2)}_s\theta^{(3)}_t\big(\theta^{(1)}_{s+t}\big)^{-1}B_1((s+t)x).

Using the generating function of {P_n}_{n\ge 0}, we obtain

B_2(xs)B_3(xt) = \sum_{k=0}^{\infty}\Big(\theta^{(2)}_s\theta^{(3)}_t\big(\theta^{(1)}_{s+t}\big)^{-1}\frac{(s+t)^k}{A_1(s+t)}\Big)\frac{P_k(x)}{k!}.

Thus

\sum_{i,j\ge 0}R_i(x)S_j(x)\frac{s^i}{i!}\frac{t^j}{j!} = \sum_{k=0}^{\infty}\Big(A_2(s)A_3(t)\,\theta^{(2)}_s\theta^{(3)}_t\big(\theta^{(1)}_{s+t}\big)^{-1}\frac{(s+t)^k}{A_1(s+t)}\Big)\frac{P_k(x)}{k!}.

Equating the coefficients of P_k(x) in the above equation and (3.8), we obtain (3.7), which finishes the proof. \square
1154
Next, as applications, we recover the generating function for the LC of three Appell polynomials and the explicit expression of the LC associated to three Brenke PS.

3.2.1. Appell Polynomials. Let {P_n}_{n\ge 0}, {R_n}_{n\ge 0} and {S_n}_{n\ge 0} be three Appell PS. Then we have B_1 = B_2 = B_3 = \exp, and by applying Theorem 3.2, we obtain that the LC in (1.2) are generated by

\frac{A_2(s)A_3(t)}{A_1(s+t)}\,\frac{(s+t)^k}{k!} = \sum_{i,j=0}^{\infty}\frac{L_{ij}(k)}{i!j!}\,s^i t^j,   (3.9)

which agrees with the Carlitz formula [10, Eq. (1.9)].

Moreover, for P_n = R_n = S_n = H_n, where the H_n are the Hermite polynomials generated by

e^{-t^2}e^{2xt} = \sum_{n=0}^{\infty}H_n(x)\frac{t^n}{n!},   (3.10)

we have A_1(t) = A_2(t) = A_3(t) = A(t) = e^{-t^2}, and then

\frac{A(s)A(t)}{A(s+t)}\,\frac{(s+t)^k}{k!} = \frac{1}{k!}\,e^{2st}(s+t)^k.

Thus, using (3.9), we deduce the standard linearization formula for Hermite PSs:

H_i(x)H_j(x) = \sum_{k=0}^{\min(i,j)}\binom{i}{k}\binom{j}{k}2^k k!\,H_{i+j-2k}(x).   (3.11)

This formula is known as the Feldheim formula [3].
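The Feldheim formula (3.11) is easy to verify numerically; the following snippet (illustrative, not part of the paper) evaluates both sides at a rational point using the standard three-term recurrence for the physicists' Hermite polynomials:

```python
from fractions import Fraction
from math import comb, factorial

def hermite(n, x):
    # physicists' Hermite via H_{n+1} = 2x H_n - 2n H_{n-1}
    h0, h1 = Fraction(1), 2 * Fraction(x)
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2 * Fraction(x) * h1 - 2 * k * h0
    return h1

def feldheim_rhs(i, j, x):
    # sum_{k=0}^{min(i,j)} C(i,k) C(j,k) 2^k k! H_{i+j-2k}(x)
    return sum(comb(i, k) * comb(j, k) * 2 ** k * factorial(k)
               * hermite(i + j - 2 * k, x)
               for k in range(min(i, j) + 1))

for i in range(5):
    for j in range(5):
        assert hermite(i, Fraction(1, 2)) * hermite(j, Fraction(1, 2)) \
            == feldheim_rhs(i, j, Fraction(1, 2))
```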
1202
3.2.2. Explicit Expression of the LC. For three Brenke PS satisfying the hypotheses of Theorem 3.2, the LC in (1.2) are given by:

L_{ij}(k) = \frac{i!j!}{k!}\sum_{n=0}^{i}\sum_{m=0}^{j}\frac{b^{(2)}_n b^{(3)}_m}{b^{(1)}_{n+m}}\,a^{(2)}_{i-n}a^{(3)}_{j-m}\,\tilde a^{(1)}_{n+m-k},  k = 0, 1, ..., i+j,   (3.12)

where 1/A_1(t) = \sum_{n=0}^{\infty}\tilde a^{(1)}_n t^n, and \tilde a^{(1)}_{-n} = 0, n = 1, 2, ....

Indeed, we have \frac{(s+t)^k}{A_1(s+t)} = \sum_{n=k}^{\infty}\tilde a^{(1)}_{n-k}(s+t)^n; then by using (2.6), we get

\theta^{(2)}_s\theta^{(3)}_t\big(\theta^{(1)}_{s+t}\big)^{-1}\Big(\frac{(s+t)^k}{A_1(s+t)}\Big) = \sum_{n=k}^{\infty}\tilde a^{(1)}_{n-k}\sum_{m=0}^{n}\frac{b^{(2)}_{n-m}b^{(3)}_m}{b^{(1)}_n}\,t^m s^{n-m}.

Thus, with sum manipulations and (3.7), one can easily verify that

\sum_{i,j\ge 0}\frac{L_{ij}(k)}{i!j!}s^i t^j
 = \frac{1}{k!}\sum_{n,m=0}^{\infty}\Big(\sum_{i=n}^{\infty}a^{(2)}_{i-n}s^i\Big)\Big(\sum_{j=m}^{\infty}a^{(3)}_{j-m}t^j\Big)\frac{b^{(2)}_n b^{(3)}_m}{b^{(1)}_{n+m}}\tilde a^{(1)}_{n+m-k}
 = \frac{1}{k!}\sum_{i,j\ge 0}\Big(\sum_{n=0}^{i}\sum_{m=0}^{j}\frac{b^{(2)}_n b^{(3)}_m}{b^{(1)}_{n+m}}a^{(2)}_{i-n}a^{(3)}_{j-m}\tilde a^{(1)}_{n+m-k}\Big)s^i t^j,

which leads to (3.12).

We note that this result was first obtained in [11, Corollary 3.3] by using a method based on the inversion formula.
1316
4. Application to Generalized Gould-Hopper Polynomial Set

The (d+1)-fold symmetric generalized Gould-Hopper polynomials {Q^{(d+1)}_n(\cdot, a, \mu)}_{n\ge 0} are generated by [7]:

e^{at^{d+1}}\exp_\mu(xt) = \sum_{n=0}^{\infty}\frac{Q^{(d+1)}_n(x, a, \mu)}{n!}\,t^n,  a \in C,  \mu \ne -\tfrac12, -\tfrac32, -\tfrac52, ...,   (4.1)

where a PS {P_n}_{n\ge 0} is said to be (d+1)-fold symmetric, d = 1, 2, ..., if

P_n\big(e^{\frac{2i\pi}{d+1}}x\big) = e^{\frac{2in\pi}{d+1}}P_n(x).

These polynomials constitute a unification of many known families such as:
• the classical Hermite PS, H_n(x) = Q^{(2)}_n(2x, -1, 0);
• the Gould-Hopper PS, g^m_n(x, h) = Q^{(m)}_n(x, h, 0) (same notations as in [22]);
• the generalized Hermite polynomials [30]:

H^\mu_n(x) = Q^{(2)}_n(2x, -1, \mu).   (4.2)

The GGHPS are of Brenke type with transfer power series A(t) = \exp(at^{d+1}). They are the only (d+1)-fold symmetric Dunkl-Appell d-orthogonal PS [7].
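For \mu = 0 the generating relation (4.1) reduces to the Gould-Hopper case, and the coefficient of t^n can be extracted by multiplying truncated series. The snippet below (a sanity check, not from the paper; \gamma_0(m) = m! is assumed) compares this with the explicit formula (4.4) stated later:

```python
from fractions import Fraction
from math import factorial

def gghp(n, x, a, d):
    # explicit formula (4.4) with mu = 0, i.e. gamma_0(m) = m! (Gould-Hopper case)
    return factorial(n) * sum(
        Fraction(a) ** k * Fraction(x) ** (n - (d + 1) * k)
        / (factorial(k) * factorial(n - (d + 1) * k))
        for k in range(n // (d + 1) + 1))

def gghp_from_gf(n, x, a, d):
    # n! * [t^n] e^{a t^{d+1}} e^{x t}, via truncated series multiplication
    c = [Fraction(0)] * (n + 1)
    for k in range(n // (d + 1) + 1):            # terms of e^{a t^{d+1}}
        for m in range(n + 1 - (d + 1) * k):     # terms of e^{x t}
            c[(d + 1) * k + m] += (Fraction(a) ** k / factorial(k)) \
                * (Fraction(x) ** m / factorial(m))
    return factorial(n) * c[n]

for n in range(8):
    assert gghp(n, Fraction(2, 3), -1, 2) == gghp_from_gf(n, Fraction(2, 3), -1, 2)
```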
1358
Next, we solve the connection and linearization problems associated to GGHPS and we treat the particular case of generalized Hermite polynomials.

4.1. Connection Problem

Here, we state the connection formulas for two GGHPS when one or two of the parameters are different and we give an integral representation of these coefficients. Moreover, the inversion formula, addition and convolution relations, and duplication formula are given.

Theorem 4.1. The connection coefficients C_{n-i(d+1)}(n), 0 \le i \le [\frac{n}{d+1}], between two GGHPS, {Q^{(d+1)}_n(\cdot, a, \mu_1)}_{n\ge 0} and {Q^{(d+1)}_n(\cdot, b, \mu_2)}_{n\ge 0}, are given by

C_{n-i(d+1)}(n) = \frac{n!}{(n-i(d+1))!}\sum_{k=0}^{i}\frac{\gamma_{\mu_1}(n-k(d+1))}{\gamma_{\mu_2}(n-k(d+1))}\,\frac{(-a)^{i-k}}{(i-k)!}\,\frac{b^k}{k!}.   (4.3)

Proof. By means of (2.6), we have

\theta(t^m e^{-at^{d+1}}) = \sum_{n=0}^{\infty}\frac{(-a)^n}{n!}\,\frac{\gamma_{\mu_1}(n(d+1)+m)}{\gamma_{\mu_2}(n(d+1)+m)}\,t^{n(d+1)+m}.

Thus, by using (3.1), (4.1) and sum manipulation, we obtain

\sum_{n=m}^{\infty}\frac{m!}{n!}C_m(n)t^n = e^{bt^{d+1}}\theta(t^m e^{-at^{d+1}})
 = \sum_{i=0}^{\infty}\frac{1}{i!}\sum_{k=0}^{i}\binom{i}{k}\frac{\gamma_{\mu_1}(k(d+1)+m)}{\gamma_{\mu_2}(k(d+1)+m)}\,b^{i-k}(-a)^k\,t^{i(d+1)+m}.

Therefore, for n = i(d+1)+m, the desired result holds. \square
1420
We note that for the particular case \mu_1 = \mu_2, (4.3) reduces to

C_{n-i(d+1)}(n) = \frac{n!\,(b-a)^i}{i!\,(n-i(d+1))!},  0 \le i \le \Big[\frac{n}{d+1}\Big].

For the connection coefficients obtained in Theorem 4.1, we have the following result.

Proposition 4.2. For \mu_2 > \mu_1 > -\tfrac12, the connection coefficient given by (4.3) has the following integral representation:

C_{n-i(d+1)}(n) = \frac{n!\,\beta^{-1}(\mu_1+\tfrac12, \mu_2-\mu_1)}{i!\,(n-i(d+1))!}\int_{-1}^{1}t^{n-i(d+1)}|t|^{2\mu_1}(b-at^{d+1})^i\,\frac{(1-t^2)^{\mu_2-\mu_1}}{1-t}\,dt.

Proof. Using Proposition 2.4 with f(x) = x^{n-k(d+1)} and x = 1, we obtain

\frac{\gamma_{\mu_1}(n-k(d+1))}{\gamma_{\mu_2}(n-k(d+1))} = \frac{1}{\beta(\mu_1+\tfrac12, \mu_2-\mu_1)}\int_{-1}^{1}t^{n-k(d+1)}|t|^{2\mu_1}\,\frac{(1-t^2)^{\mu_2-\mu_1}}{1-t}\,dt.

Substituting the above equation in (4.3), we get:

C_{n-i(d+1)}(n) = \frac{n!}{i!\,(n-i(d+1))!}\,\frac{1}{\beta(\mu_1+\tfrac12, \mu_2-\mu_1)}\int_{-1}^{1}t^n|t|^{2\mu_1}\,\frac{(1-t^2)^{\mu_2-\mu_1}}{1-t}\Big(\sum_{k=0}^{i}\binom{i}{k}(-a)^{i-k}\big(\tfrac{b}{t^{d+1}}\big)^k\Big)dt,

from which the desired result follows. \square
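The \mu_1 = \mu_2 reduction of (4.3) can be verified directly against the generating function identity e^{bt^{d+1}}e^{xt} = e^{(b-a)t^{d+1}} \cdot e^{at^{d+1}}e^{xt}. A minimal numerical sketch (not from the paper; it uses the \mu = 0 Gould-Hopper polynomials from the explicit formula (4.4)):

```python
from fractions import Fraction
from math import factorial

def gghp(n, x, a, d):
    # Gould-Hopper polynomials Q_n^{(d+1)}(x, a, 0) via (4.4) with gamma_0(m) = m!
    return factorial(n) * sum(
        Fraction(a) ** k * Fraction(x) ** (n - (d + 1) * k)
        / (factorial(k) * factorial(n - (d + 1) * k))
        for k in range(n // (d + 1) + 1))

def connect(n, x, a, b, d):
    # expand Q_n(x, b) in the basis Q_{n-i(d+1)}(x, a) with the mu1 = mu2
    # coefficients C = n!(b-a)^i / (i!(n-i(d+1))!)
    return sum(
        Fraction(factorial(n) * (b - a) ** i,
                 factorial(i) * factorial(n - i * (d + 1)))
        * gghp(n - i * (d + 1), x, a, d)
        for i in range(n // (d + 1) + 1))

for n in range(8):
    assert gghp(n, Fraction(1, 2), 3, 1) == connect(n, Fraction(1, 2), 2, 3, 1)
```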
1480
Next, we give some specific expansion relations associated to GGHPS.

• Explicit and inversion formulas: The following explicit expression and inversion formula for {Q^{(d+1)}_n(\cdot, a, \mu)}_{n\ge 0} can be easily derived from (3.3):

Q^{(d+1)}_n(x, a, \mu) = n!\sum_{k=0}^{[\frac{n}{d+1}]}\frac{a^k}{k!\,\gamma_\mu(n-(d+1)k)}\,x^{n-(d+1)k},   (4.4)

and

\frac{x^n}{\gamma_\mu(n)} = \sum_{k=0}^{[\frac{n}{d+1}]}\frac{(-a)^k}{k!\,(n-(d+1)k)!}\,Q^{(d+1)}_{n-(d+1)k}(x, a, \mu).   (4.5)

• Addition and convolution relations:

T^\mu_y Q^{(d+1)}_n(x, a, \mu) = \sum_{k=0}^{n}\frac{n!\,y^{n-k}}{k!\,\gamma_\mu(n-k)}\,Q^{(d+1)}_k(x, a, \mu),   (4.6)

2^{\frac{n}{d+1}}\,T^\mu_y Q^{(d+1)}_n\big(2^{\frac{-1}{d+1}}x, a, \mu\big) = \sum_{k=0}^{n}\binom{n}{k}Q^{(d+1)}_k(y, a, \mu)\,Q^{(d+1)}_{n-k}(x, a, \mu),   (4.7)

where T^\mu_y = \exp_\mu(yD_\mu). For \mu = 0, this equation reduces to the well-known Gould-Hopper convolution type relation [22], and for m = 2, h = -1, we recover the Runge formula for Hermite polynomials [29].

• Duplication formula:

Q^{(d+1)}_n(\alpha x, a, \mu) = n!\sum_{k=0}^{[\frac{n}{d+1}]}\frac{\alpha^{n-k(d+1)}(1-\alpha^{d+1})^k a^k}{(n-k(d+1))!\,k!}\,Q^{(d+1)}_{n-k(d+1)}(x, a, \mu),  \alpha \ne 0.
1563
4.2. Linearization Formula

Taking into account the (d+1)-fold symmetry property of the GGHPS, any LC L_{ij}(k) in (1.2) vanishes when k \ne i+j-r(d+1). Thus, according to (3.12), the corresponding LC is given by:

L_{ij}(i+j-r(d+1)) = \frac{i!j!}{(i+j-r(d+1))!}\sum_{n=0}^{[\frac{i}{d+1}]}\sum_{m=0}^{[\frac{j}{d+1}]}\frac{a_1^n a_2^m(-a_3)^{r-m-n}}{n!\,m!\,(r-m-n)!}\times\frac{\gamma_{\mu_3}(i+j-(m+n)(d+1))}{\gamma_{\mu_1}(i-n(d+1))\,\gamma_{\mu_2}(j-m(d+1))},  0 \le r \le \Big[\frac{i+j}{d+1}\Big].

We remark that there is no difficulty in proving the corresponding formula for the linearization of any arbitrary number of GGHPSs. We have:

\prod_{s=1}^{N}Q^{(d+1)}_{i_s}(x, a_s, \mu_s) = \sum_{r=0}^{[\frac{i_1+\cdots+i_N}{d+1}]}\frac{i_1!\cdots i_N!}{(i_1+\cdots+i_N-r(d+1))!}
\times\sum_{s_1=0}^{[\frac{i_1}{d+1}]}\cdots\sum_{s_N=0}^{[\frac{i_N}{d+1}]}\frac{a_1^{s_1}\cdots a_N^{s_N}(-a_{N+1})^{r-s_1-\cdots-s_N}}{s_1!\cdots s_N!\,(r-s_1-\cdots-s_N)!}
\times\frac{\gamma_{\mu_{N+1}}(i_1+\cdots+i_N-(d+1)(s_1+\cdots+s_N))}{\gamma_{\mu_1}(i_1-(d+1)s_1)\cdots\gamma_{\mu_N}(i_N-(d+1)s_N)}\,Q^{(d+1)}_{i_1+\cdots+i_N-r(d+1)}(x, a_{N+1}, \mu_{N+1}).
1626
4.3. Generalized Hermite Polynomials

The generalized Hermite polynomials {H^\mu_n}_{n\ge 0} were introduced by Szegő [30], then investigated by Chihara in his PhD thesis [15], and further studied by many other authors [11, 28]. They are generated by:

e^{-t^2}\exp_\mu(2xt) = \sum_{n=0}^{\infty}\frac{H^\mu_n(x)}{n!}\,t^n,  \mu \ne -\tfrac12, -\tfrac32, -\tfrac52, ....   (4.8)

Proposition 4.3. The following connection relation holds:

\tilde H^{\mu_2}_n(x) = \sum_{k=0}^{[n/2]}\frac{(-1)^k 4^k}{k!}(\mu_2-\mu_1)_k\,\tilde H^{\mu_1}_{n-2k}(x),  \mu_2 > \mu_1 > -\tfrac12,   (4.9)

where {\tilde H^{\mu_i}_n}_n, i = 1, 2, are the normalized generalized Hermite PS given by

\tilde H^{\mu_i}_n(x) = \frac{\gamma_{\mu_i}(n)}{n!\,[\frac n2]!}\,H^{\mu_i}_n(x).

Proof. From what has already been stated, the connection coefficients from {H^{\mu_2}_n}_n to {H^{\mu_1}_n}_n are generated by

e^{-t^2}\theta(t^m e^{t^2}) = \sum_{n=m}^{\infty}\frac{m!}{n!}\,C_m(n)\,t^n,

where \theta is the operator defined in (2.25). Making use of the \theta-integral representation (2.26) and splitting the interval of integration at 0, we get:

\sum_{n=m}^{\infty}\frac{m!}{n!}C_m(n)t^n = \frac{t^m e^{-t^2}}{\beta(\mu_1+\tfrac12, \mu_2-\mu_1)}\int_{0}^{1}e^{t^2s^2}s^{m+2\mu_1}(1-s^2)^{\mu_2-\mu_1}\Big(\frac{1}{1-s}+\frac{(-1)^m}{1+s}\Big)ds.
1697
It follows, for m even and after substituting u = s^2, that

\sum_{n=m}^{\infty}\frac{m!}{n!}C_m(n)t^n = \frac{t^m e^{-t^2}}{\beta(\mu_1+\tfrac12, \mu_2-\mu_1)}\int_{0}^{1}e^{ut^2}u^{\frac{m-1}{2}+\mu_1}(1-u)^{\mu_2-\mu_1-1}\,du
 = \sum_{n=0}^{\infty}\frac{(-1)^n}{n!}\,\frac{\beta(\mu_1+\frac{m+1}{2}, \mu_2-\mu_1+n)}{\beta(\mu_1+\tfrac12, \mu_2-\mu_1)}\,t^{m+2n},

where the term-by-term integration is justified by the same argument as in the proof of Proposition 2.3.

On the other hand, we have

\frac{\beta(\mu_1+\tfrac12+k, \mu_2-\mu_1+n)}{\beta(\mu_1+\tfrac12, \mu_2-\mu_1)}
 = \frac{\Gamma(\mu_1+\tfrac12+k)\,\Gamma(\mu_2-\mu_1+n)\,\Gamma(\mu_2+\tfrac12)}{\Gamma(\mu_2+n+k+\tfrac12)\,\Gamma(\mu_1+\tfrac12)\,\Gamma(\mu_2-\mu_1)}
 = \frac{\gamma_{\mu_1}(2k)}{2^{2k}k!}\,\frac{2^{2(k+n)}(k+n)!}{\gamma_{\mu_2}(2(k+n))}\,(\mu_2-\mu_1)_n
 = \frac{\gamma_{\mu_1}(m)}{\gamma_{\mu_2}(m+2n)}\,\frac{4^n([m/2]+n)!}{[m/2]!}\,(\mu_2-\mu_1)_n.

Thus, by virtue of (2.17) and (2.27), we obtain, for m even,

\sum_{n=m}^{\infty}\frac{m!}{n!}C_m(n)t^n = \sum_{n=0}^{\infty}\frac{(-1)^n}{n!}\,\frac{\gamma_{\mu_1}(m)}{\gamma_{\mu_2}(m+2n)}\,\frac{4^n([\frac m2]+n)!}{[\frac m2]!}\,(\mu_2-\mu_1)_n\,t^{m+2n}.

For m odd, similar computations lead to the same expansion, so the above identity holds for all m = 0, 1, 2, 3, .... Thus, for k = 0, 1, 2, ..., [\frac n2], we get

C_{n-2k}(n) = \frac{(-1)^k}{k!}\,\frac{n!}{(n-2k)!}\,\frac{4^k[\frac n2]!}{[\frac n2 - k]!}\,\frac{\gamma_{\mu_1}(n-2k)}{\gamma_{\mu_2}(n)}\,(\mu_2-\mu_1)_k.  \square
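Proposition 4.3 can be checked numerically. The sketch below is not from the paper; it assumes the standard Dunkl factorial normalisation \gamma_\mu(2k) = 2^{2k}k!(\mu+\tfrac12)_k and \gamma_\mu(2k+1) = 2^{2k+1}k!(\mu+\tfrac12)_{k+1} (consistent with the beta-ratio computation above) and builds H^\mu_n from (4.4) with d = 1, a = -1, x \to 2x:

```python
from fractions import Fraction
from math import factorial

def gamma_mu(n, mu):
    # Dunkl factorial (assumed normalisation, see lead-in)
    k = n // 2
    poch = Fraction(1)
    for j in range(k + (n % 2)):
        poch *= mu + Fraction(1, 2) + j
    return Fraction(2) ** n * factorial(k) * poch

def H(n, x, mu):
    # generalized Hermite via (4.4): H_n^mu(x) = Q_n^{(2)}(2x, -1, mu)
    return factorial(n) * sum(
        Fraction(-1) ** k * (2 * Fraction(x)) ** (n - 2 * k)
        / (factorial(k) * gamma_mu(n - 2 * k, mu))
        for k in range(n // 2 + 1))

def Hnorm(n, x, mu):
    # normalized PS of Proposition 4.3
    return gamma_mu(n, mu) / (factorial(n) * factorial(n // 2)) * H(n, x, mu)

def rhs(n, x, mu1, mu2):
    # right-hand side of (4.9) with Pochhammer (mu2 - mu1)_k
    s = Fraction(0)
    for k in range(n // 2 + 1):
        poch = Fraction(1)
        for j in range(k):
            poch *= mu2 - mu1 + j
        s += Fraction(-4) ** k / factorial(k) * poch * Hnorm(n - 2 * k, x, mu1)
    return s

mu1, mu2 = Fraction(1, 4), Fraction(3, 4)
for n in range(7):
    assert Hnorm(n, Fraction(1, 3), mu2) == rhs(n, Fraction(1, 3), mu1, mu2)
```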
1811
We note that the connection coefficients in (4.9) alternate in sign and that this relation was already derived in [14], where the authors used a computer algebra approach based on Zeilberger's algorithm.
1814
References

[1] Abd-Elhameed, W., Badah, B.M.: New approaches to the general linearization problem of Jacobi polynomials based on moments and connection formulas. Mathematics 9, 1–28 (2021)
[2] Area, I., Godoy, E., Rodal, J., Ronveaux, A., Zarzo, A.: Bivariate Krawtchouk polynomials: Inversion and connection problems with the NAVIMA algorithm. J. Comput. Appl. Math. 284, 50–57 (2015)
[3] Askey, R.: Orthogonal Polynomials and Special Functions. CBMS-NSF Regional Conference Series in Appl. Math., vol. 21. SIAM, Philadelphia, Pennsylvania (1975)
[4] Askey, R., Gasper, G.: Jacobi polynomial expansions of Jacobi polynomials with non-negative coefficients. Proc. Camb. Phil. Soc. 70, 243–255 (1971)
[5] Ben Cheikh, Y.: Some results on quasi-monomiality. Appl. Math. Comput. 141, 63–76 (2003)
[6] Ben Cheikh, Y., Chaggara, H.: Connection coefficients between Boas–Buck polynomial sets. J. Math. Anal. Appl. 319, 665–689 (2005)
[7] Ben Cheikh, Y., Gaied, M.: Dunkl-Appell d-orthogonal polynomials. Integral Transforms Spec. Funct. 18, 581–597 (2007)
[8] Ben Romdhane, N.: A general theorem on inversion problems for polynomial sets. Med. J. Math. 13, 2783–2793 (2016)
[9] Brenke, W.: On generating functions of polynomial systems. Amer. Math. Monthly 52, 297–301 (1945)
[10] Carlitz, L.: Products of Appell polynomials. Collect. Math. 112, 133–138 (1963)
[11] Chaggara, H.: Operational rules and a generalized Hermite polynomials. J. Math. Anal. Appl. 332, 11–21 (2007)
[12] Chaggara, H.: Quasi-monomiality and linearization coefficients for Sheffer polynomial sets. Difference Equations, Special Functions, and Orthogonal Polynomials, pp. 90–99 (2007)
[13] Chaggara, H., Mabrouk, M.: Linearization coefficients for some basic hypergeometric polynomials. J. Mathematics, Volume 2022, 12 pages
[14] Chaggara, H., Koepf, W.: On linearization and connection coefficients for generalized Hermite polynomials. J. Math. Anal. Appl. 236, 65–73 (2011)
[15] Chihara, T.: Generalized Hermite polynomials. Ph.D. thesis, Purdue (1955)
[16] Chihara, T.: Orthogonal polynomials with Brenke type generating functions. Duke Math. J. 35, 505–517 (1968)
[17] Chihara, T.: An Introduction to Orthogonal Polynomials. Gordon and Breach, New York, London, Paris (1978)
[18] Dehesa, J., Martinez-Finkelshtein, A., Sánchez-Ruiz, J.: Quantum information entropies and orthogonal polynomials. J. Comput. Appl. Math. 133, 23–46 (2001)
[19] Di Bucchianico, A., Loeb, D.E.: Operator expansion in the derivative and multiplication by x. Integral Transforms Spec. Funct. 4, 49–68 (1996)
[20] Dunkl, C.: Integral kernels with reflection group invariance. Canad. J. Math. 43, 1213–1227 (1991)
[21] Gasper, G.: Linearization of the product of Jacobi polynomials. Canad. J. Math. 22, 171–175 (1970)
[22] Gould, H., Hopper, A.T.: Operational formulas connected with two generalizations of Hermite polynomials. Duke Math. J. 29, 51–63 (1962)
[23] Koornwinder, T.: Compact quantum groups and q-special functions. 311, 46–128 (1994)
[24] Maroni, P., Da Rocha, Z.: Connection coefficients for orthogonal polynomials: symbolic computations, verification, and demonstrations in the Mathematica language. Numer. Algor. 63, 507–520 (2013)
[25] Asai, N., Kubo, I., Kuo, H.H.: The Brenke type generating functions and explicit forms of MRM-triples by means of q-hypergeometric series. Inf. Dimens. Anal. Quantum Probab. Related Topics 16, 27 pages (2013)
[26] Opdam, E.M.: Dunkl operators, Bessel functions and the discriminant of a finite Coxeter group. Compos. Math. 85, 333–373 (1993)
[27] Rainville, E.: Special Functions. The Macmillan Company, New York (1960)
[28] Rosenblum, M.: Generalized Hermite polynomials and the Bose-like oscillator calculus. Oper. Theory Adv. Appl. 73, 369–396 (1994)
[29] Runge, C.: Über eine besondere Art von Integralgleichungen. Math. Ann. 75, 130–132 (1914)
[30] Szegő, G.: Orthogonal Polynomials, 4th edn. Amer. Math. Soc. Colloq. Publ., vol. 23. Amer. Math. Soc., New York (1975)
[31] Szwarc, R.: Convolution structures associated with orthogonal polynomials. J. Math. Anal. Appl. 170, 158–170 (1992)
[32] Tcheutia, D., Foupouagnigni, M., Koepf, W., Sadjang, N.N.: Coefficients of multiplication formulas for classical orthogonal polynomials. Ramanujan J., pp. 1–35 (2015)
[33] Varma, S., Sezgin, S., İçöz, G.: Generalization of Szász operators involving Brenke type polynomials. Comput. Math. Appl. 64, 121–127 (2012)
[34] Wani, S., Mursaleen, M., Nisar, K.S.: Certain approximation properties of Brenke polynomials using Jakimovski-Leviatan operators. J. Inequal. Appl. 64, 1–16 (2021)

Hamza Chaggara
Mathematics Department, College of Science, King Khalid University, Abha, Kingdom of Saudi Arabia / Département de Mathématiques, École Supérieure des Sciences et de la Technologie, Sousse University, Tunisia.
e-mail: hshaggara@kku.edu.sa / hamza.chaggara@ipeim.rnu.tn

Abdelhamid Gahami
Département de Mathématiques, Institut Préparatoire aux Études d'Ingénieur, Sfax University, Tunisia.
e-mail: aelgahami@yahoo.fr

Neila Ben Romdhane
Département de Mathématiques, École Supérieure des Sciences et de la Technologie, Sousse University, Tunisia.
e-mail: neila.benromdhane@ipeim.rnu.tn
@@ -0,0 +1,1071 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Organised Firestorm as strategy for business
2
+ cyber-attacks
3
+ Andrea Russo
4
+ Department of Physics and Astronomy, University of Catania, Italy
5
+ Email: andrea.russo@phd.unict.it
6
+ Abstract—Having a good reputation is paramount for most or-
7
+ ganisations and companies. In fact, having an optimal corporate
8
+ image allows them to have better transaction relationships with
9
+ various customers and partners. However, such reputation is hard
10
+ to build and easy to destroy for all kind of business commercial
11
+ activities (B2C, B2B, B2B2C, B2G). A misunderstanding during
12
+ the communication process to the customers, or just a bad
13
+ communication strategy, can lead to a disaster for the entire
14
+ company. This is emphasised by the reaction of millions of
15
+ people on social networks, which can be very detrimental for
16
+ the corporate image if they react negatively to a certain event.
17
+ This is called a firestorm.
18
+ In this paper, I propose a well-organised strategy for firestorm
19
+ attacks on organisations, also showing how an adversary can
20
+ leverage them to obtain private information on the attacked
21
+ firm. Standard business security procedures are not designed to
22
+ operate against multi-domain attacks; therefore, I will show how
23
+ it is possible to bypass the classic and advised security procedures
24
+ by operating different kinds of attack. I also propose a different
25
+ firestorm attack, targeting a specific business company network
26
+ in an efficient way. Finally, I present defensive procedures to
27
+ reduce the negative effect of firestorms on a company.
28
+ Index
29
+ Terms—Firestorm,
30
+ Cyber-attack,
31
+ Business
32
+ Defence,
33
+ Socio-dynamics, Stress Test, Network Science, Cyberpunk 2077.
34
+ I. INTRODUCTION
35
+ Before the advent of social medias, brand crises were largely
36
+ caused by journalists’ contributions. Nowadays, a firestorm is
37
+ a cluster of consumers’ digital word of mouth that highlights
38
+ some communication error, or some terrible mistake made
39
+ by a company [15]. The Cambridge dictionary1 defines the
40
+ firestorm as “a sudden, and sometimes violent reaction” and
41
+ the shitstorm as “a wildly chaotic and unmanageable situation,
42
+ controversy, or sequence of events”. In this paper, I will use
43
+ both these terms interchangeably.
44
+ During the last years, many firestorms took place on the
45
+ Internet [19], [27], [31], mainly due to the increase of the
46
+ number of users on social networks. In some cases, firestorms
47
+ have been formally studied to better understand this phe-
48
+ nomenon [15], [28], [31]. In 2007, several researchers debated
49
+ over firestorms, and one of the main outcomes is that “a
50
+ natural science model of the research process is suitable for
51
+ studying the social world but a central issue remaining of
52
+ whether the social world can, and should be, studied according
53
+ to the same principles, procedures, and philosophy as the
54
+ natural sciences”
55
+ [1]. This is relevant because today I are
56
+ actually able to study and evaluate social dynamics by using
57
+ 1https://dictionary.cambridge.org
58
+ the massive amount of data coming from the digital world,
59
+ with particular emphasis on social networks [32].
60
+ Firestorms are not made of a single event with a standard behaviour; instead, they are caused by non-linear dynamics leading to complex behaviours. For this reason, companies must have appropriate procedures to respond to various crisis situations. Lehtonen’s theory [23] shows that a firestorm develops in five stages: (1) latent stage, where weak signals of the upcoming crisis are received; (2) triggering event, where the subject becomes the target of news and social media attention; (3) the subject is in the top news and the media attention spikes; (4) the media attention calms down to the level of general philosophical and ethical discussion; and (5) there are only minor media hits and attention is guided to other issues [28].
+ As firestorms begin when there is a service failure, a social failure, or when a company fails to communicate properly [15], these kinds of errors can be reduced by following appropriate procedures. However, most of the existing quality and security procedures, such as the ones suggested by ISO 9001:2015 [17] and ISO/IEC 27002:2022 [18], are not adequate for a multi-domain cyber and social attack: social attacks are outside the scope of ISO/IEC 27002:2022, while ISO 9001:2015, even if it focuses on better business process quality (and thus a lower firestorm risk from the public), does not mitigate a firestorm started by an attacker.
+ Hence, in this paper I theorise that it is possible for an attacker to intentionally cause a firestorm attack to undermine the reputation of a company, with the side effect of advantaging its competitors. I argue that self-organised firestorm attacks require a high number of bots that are already active on social media: in this case, bots start the firestorm against the target company, spreading fake news (or magnifying a certain event, e.g., a mistake made by the company in the past) that will cause a high volume of real people to react negatively and continue the social attack, unknowingly on behalf of the adversary.
+ Additionally, I argue that Open Source Intelligence (OSINT) could allow an adversary to identify weak spots in the organization, namely people who most likely cannot react properly or defend themselves from the firestorm, hence not being able to timely mitigate its impact. Many workers have a LinkedIn, Facebook, or Twitter account: moving the firestorm to the social media accounts of people who work for the target company can lead to an extremely stressful situation for workers. This could be even worse for people who do not often deal with public relations, and could cause confusion, panic and distress. In fact, when a firestorm arises, even people who work on communication processes and managers can panic, and the fear of losing customers and partners can be very detrimental for any company.
+ arXiv:2301.01518v1 [cs.CY] 4 Jan 2023
+ When people working in the target firm are in this altered status, I argue it is possible to elaborate a social engineering strategy to capture protected information: in this case, firestorms not only serve the purpose of undermining the corporate image, but they are also used as a diversion for a social engineering attack. In fact, while most important organisations adhere to best practices listed in security standards like ISO/IEC 27002:2022 [18], during a social attack like a firestorm, some best practices and procedures may be distorted or bypassed, either intentionally or by mistake, due to the pressure applied to the people who are in charge of complying with such procedures [14].
+ Contributions. The paper makes these contributions:
+ 1) I explain how to mount an automated and organized firestorm attack, with only a few manual operations such as the choice of a topic and of a hashtag;
+ 2) I introduce a taxonomy of possible actions that the attacker could perform while running the firestorm;
+ 3) I illustrate how the author of a firestorm can evade detection by targeting single workers instead of the company profiles, while increasing the damage done to the firm;
+ 4) I show possible long- and short-term procedures that a company can implement to mitigate the effect of firestorm attacks.
+ II. CYBER-ATTACK PLANNING PRELUDE
+ In this section, I illustrate a novel strategy to artificially cause a firestorm, leveraging a botnet to start agitating real people against a target company. Due to the large number of posts that bots can create within seconds, they can be used to amplify any idea on social networks, influencing political affairs [3] and business company value [33]. For example, after a cyber-attack on a newspaper’s Twitter profile, the newspaper shared fake news about President Obama being injured by a bomb in the White House, causing a flash crash on Wall Street and halting all economic transactions for some minutes. This led to a loss of about 121 billion dollars for the S&P 500 and its related companies [11].
+ I structure the attack plan in six stages:
+ 1) Finding an event/topic to build the firestorm attack on. This can be a past event or an error that the firm has committed in the past, which will be used as a basis for the upcoming attack. I define this event as the target topic.
+ 2) Using bots to create or amplify the latent state. By leveraging a botnet, an adversary can create a high number of posts on social media, allowing the target topic to reach more people and giving them the opportunity to react negatively. This can eventually lead to a state where real people start to autonomously talk about the subject and begin to spread information about the target topic on their own. To facilitate this, the attacker can reuse an old trending hashtag or create a new one: the hashtag is the keyword that incites social action, due to the information symbolised by the word itself.
+ 3) Letting the topic spread among people. The ideal situation for the attacker is that real people begin posting about the target topic, after learning about it from the botnet’s posts. This will bring more attention to the topic, possibly making it a trending one. For example, Twitter allows users to check what topics and hashtags are currently popular. If this happens, there will be a moment in which there are enough people posting about the target topic that the firestorm can sustain itself for days, without any other post coming from the attacker’s botnet. I call this moment the fire point.2 Instead, if real people do not react negatively to the topic, or the topic does not reach enough people to allow the firestorm to reach the fire point, the discussion on the topic will slow down and eventually end. In this case, I say that the firestorm is extinguished. However, the attacker can change the target topic and restart from Stage 1.
+ 4) Identifying human targets. Managers (e.g., Chief Technical Officers, Chief Executive Officers) are the decision makers of a company. The attacker might want to keep a list of these people in order to use their names when the attack moves from the company’s social network profiles to the employees’ ones. Identifying the people who are most proud to work for the attacked company can also be helpful in exerting more pressure on the company (since they have more to do with the value of the company).
+ 5) Focusing on workers. During the peak activity of the firestorm, those same bots that built the latent state will move their focus to the public social media profiles owned by employees of the attacked firm, identified in the previous step of the attack. This may cause the attention of the firestorm to shift towards the employees, also causing them to experience discomfort. Because the brand is usually at the center of the firestorm, focusing on individual people will have a stronger impact on them, and it can disrupt internal processes.
+ 6) Performing the cyber attack. Because people will pay less attention to following internal procedures, many safety best practices adopted by the company may not be followed properly, or may even be ignored. The attacker can exploit this behaviour to their own advantage.
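The fire point of Stage 3 can be estimated empirically. Below is a minimal, illustrative sketch: it assumes one can count, per hour, the posts on the target hashtag coming from accounts that are not part of the botnet, and it tests whether that organic volume keeps growing once the botnet goes quiet. The function name, window size, and growth threshold are my own assumptions, not part of the attack description above.

```python
# Illustrative sketch: has a hashtag passed the "fire point", i.e. can the
# organic (non-bot) posting volume sustain the firestorm on its own?

def passed_fire_point(hourly_counts, growth_factor=1.2, window=6):
    """hourly_counts: post counts from accounts NOT controlled by the
    botnet, one entry per hour, oldest first. Returns True if the organic
    volume over the last `window` hours grew by at least `growth_factor`
    compared to the previous window (a crude self-sustainment test)."""
    if len(hourly_counts) < 2 * window:
        return False  # not enough history to decide
    recent = sum(hourly_counts[-window:])
    previous = sum(hourly_counts[-2 * window:-window])
    if previous == 0:
        return recent > 0
    return recent / previous >= growth_factor

# Organic volume still rising -> fire point reached, firestorm self-sustains
print(passed_fire_point([10, 12, 15, 20, 30, 45, 70, 95, 120, 150, 180, 210]))  # True
# Decaying volume -> the firestorm is extinguishing
print(passed_fire_point([100, 90, 80, 70, 60, 50, 40, 30, 25, 20, 15, 10]))  # False
```

In practice the counts would come from a platform API, and separating bot posts from organic ones is itself a hard problem; the sketch only captures the self-sustainment criterion that defines the fire point.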
+ In order to shift the focus from the company to the workers, it is necessary to optimise the timescale and timing of the transition: it is not natural for people to attack a worker, but it can happen more easily if the negative event has a high negative impact and value. Shifting the attack onto employees has another side effect, which is beneficial to the attacker: the organisations that are responsible for public cyber security in every country cannot see the firestorm attack on the company page, because the firestorm is focused on workers only. Such organisations will hardly be able to detect all comments and posts focused on workers, allowing the attacker to create a smoky form of the attack, which can bypass conventional security measures, procedures and strategies. Since these organisations have to focus primarily on the company under attack, they may not give much attention to analysing every single interaction against all the operators of the attacked company.
+ 2In chemistry, the fire point is the lowest temperature at which a certain fuel will continue to burn for a minimum of five seconds, when ignited.
+ III. BUSINESS SOCIAL MOOD-DISEASE AND NETWORK STRATEGY
+ The Cambridge Analytica case highlighted the role and importance of social media for the majority of the population and organisations. A document produced by the American Department of Justice, examining possible foreign influence on the US, showed that there actually exist organisations (such as the IRA, the Internet Research Agency) [36] that aim to influence individuals and public and private organisations [29].
+ A great part of what is needed to successfully influence people lies in understanding the initial conditions of the system, i.e. in the correct profiling of such people through data obtained on social networks. People who are more sensitive to certain issues, and those key people who can most influence the community where they live and work, are the primary targets of a social attack, because they have a central role (hubs) in the network.
+ Profiling consists in obtaining (through a process of data collection and subsequent processing) an absolute or almost absolute understanding of a group of individuals or a single person, comprehending their habits and preferences [13]. The information obtained concerns political, musical and social interests, including the identification of their network of friends, colleagues, and much more. This information allows a much easier conveying of any content, as it is possible to understand who is most susceptible to and interested in various topics, targeting their weaknesses, fears and interests. Furthermore, it is possible to infer who could propagate a certain content through their network, exponentially increasing the chance of success if the subject in question is a person with an important or central role.
+ Cambridge Analytica used the OCEAN model, based on personality traits, to understand the preferences of many people in the US during the 2016 national election [36]. The OCEAN model allows sending specific messages and contents to people who are sensitive to a certain topic. This method is very different from classic mass communication, because it is possible to send the right content to the right person. Unfortunately, the CA scandal was framed as classic political influence, the old-fashioned way, thus including prostitution, favouritism, etc. In reality, the scandal revealed “a new type of weapon”, as Brittany Kaiser (former CA business development director) said during her question time (before the Commons culture committee in 2018) to describe the work done by CA, but also to categorize AI as a real soft-power weapon [13].
+ However, understanding hot topics for workers is not enough: in order to modify their mood and achieve an effective social attack, a subject topic needs to be found as well. On social networks, during firestorms, people are usually triggered by three kinds of errors [15]:
+ 1) Social failure
+ 2) Communication failure
+ 3) Product or service failure
+ Although they may seem similar, different types of events can lead to different types of dynamics and reactions. In the case of product or service failures, for example, performance-related crises raise doubts about the brand’s ability to deliver basic functional performance [9]. Other research has not only identified short-term effects on a brand after a firestorm, but also measured long-term ones, at least two years after the latest firestorm [15].
+ I hereby give an example for each of the aforementioned triggering factors.
+ 1) Social failure. The firm might be an accomplice in some accident or crime, like Nike with the children’s shoes case [10], [30] or the ING-DiBa case in 2012 [31].
+ 2) Communication failure. The firm might fail to communicate properly, for example making negative comments regarding a certain community or movement [27].
+ 3) Product or service failure. The firm might distribute a product that harms consumers, for example a vaccine that can kill people [19].
+ These failures and the firestorms stemming from them might cause affected employees to experience discomfort and panic, because coworkers, friends and other people in their network might see the affected employees as the root cause of the firestorm.
+ The social-cyber attack also provokes undesirable passive consequences for companies:
+ 1) The value of the company on the financial market could rapidly decrease [11];
+ 2) People who worked in the company during the firestorm might be subject to discrimination in the future, especially if the firestorm was caused by a (supposedly) unacceptable mistake that could have been avoided [26], [38];
+ 3) Like its people, the offended brand could carry a long-term stigma, which would motivate other companies to make job offers to the personnel of the attacked firm. This could put it at an even greater disadvantage, as workers would be incentivized to leave the attacked company and accept the new offers.
+ The network, as well as the importance and scope of the news, can significantly influence the reaction and dynamics of the company. For example, when a company’s workers receive high-importance news, they may behave helplessly in relation to the importance of the news; feeling relieved of responsibility, since the event is bigger than their actions, they tend to pass much of the responsibility on to the company’s managers.
+ Indeed, in times of disorder or chaos, entropy increases as order decreases, while the emergency response improves as order increases: this happens because people within the organisation understand the emergency, and the organisation improves itself to respond to it [39].
+ When many workers in the company are panicking, the organisation’s CCO (Chief Communication Officer) will elaborate and react to the firestorm on the company pages; however, this cannot stop the social attack on the individual profiles of the employees. Hence, even people who are in charge of running communication processes and managers can panic, as the longer the duration of the firestorm, the higher the chance of losing clients and reputation. This is a terrible situation for any company, especially after many years of work. However, managers are considered "critical workers" on the organisation chart; hence, they are expected not to be influenced by social manipulations and social diseases, because of the responsibilities they have in the company. While during the last century such organization charts had the form of a pyramid, usually with the CEO on top, nowadays the AGILE model allows companies to organise their personnel in different ways within their organization charts. However, the legal and personal responsibility for every error or critical issue will always lie with the top manager of that area: for example, the CISO (Chief Information Security Officer) is usually responsible for cyber security. A network-side strategy can strongly influence workers close to managers and directors, directly contaminating the mood of the team, including the manager. More specifically, the attacker targets the hubs of the company network, also destabilising the other workers of the company. Once the social disease is already widespread in the company, and many people are stressed about the firestorm, the cyber attack can begin.
+ IV. ASSESSING THE ATTACK SURFACE
+ In this section, I introduce the possible actions that the adversary (or the real people who contribute to the firestorm) can perform to further disrupt the target company’s business processes, to sink its corporate image, or to obtain classified information. To do so, I introduce a novel classification of these actions and analyze their impact on the fundamental properties of information security, that is, Confidentiality, Integrity and Availability [34].
+ I show these actions can be divided into three categories:
+ 1) Controlling Large Scale Entities, that is, thousands or even millions of different actors performing several concurrent actions against a firm. These actors can act both remotely and physically, and can be both robots and humans.
+ 2) Leveraging Internal People, namely, exploiting mistakes made by employees (e.g., because they are stressed due to the firestorm), or having an insider threat who can extract classified information.
+ 3) Asking for Ransoms, that is, the adversary may want to ask for a payment to stop the firestorm. This would cause the bots to be shut down, or even to defend the company on social media.
+ I hereby analyse the different actions within each category and their impact. This analysis is summarised in Table I.
+ A. Controlling Large Scale Entities
+ a) Denial of Service (DoS) Attacks: The adversary might want to harm the firm’s reputation by negating the availability of the services it offers. To this end, the attacker can leverage botnets to send a very high number of requests per second to the target service, overwhelming the server and resulting in the service going down. If possible, the attacker could even reuse the botnet used to create the latent state, and rearm it with a DoS script. Alternatively, if the adversary is not a single entity but a large group of organised people, a DoS attack can be performed with simple scripts, without leveraging any botnet, as the large number of adversaries could be able to generate the traffic required to overload the server. In this case, however, the adversaries would have to carefully time their attack, and they might want to hide their location, for example by using a VPN. Finally, the adversary could encourage real people to overload the target firm’s servers, coordinating the attack through the bot profiles used for the hashtag propaganda.
+ b) Physical Actions: Business processes can also be interrupted or slowed by legal, yet harmful, physical actions. One example is a demonstration around the firm’s premises: employees might not get to their workplace in time because people demonstrating outside the building are blocking or slowing access to the premises, or are creating more traffic than usual on the way to the building. Another example is people calling the organisation’s call centers with the only goal of protesting.
+ B. Leveraging Internal People
+ a) Human Error: Even though it is widely known that human error is one of the most prominent causes of security incidents [16], [43], most companies still do not adequately invest in training for their personnel, resulting in data breaches or other security-related events [22]. This means that, if the attacker wants to obtain an initial foothold on the target organization’s systems, they might be able to do so without needing a firestorm attack, depending on the employees’ ability to recognize phishing emails or scam websites. However, workers who are experiencing a firestorm, be it against the company they are working for or against their own profile, will be more inclined to break internal policies, hence committing mistakes, due to the perceived crisis [2].
+ b) Offering Help: During the firestorm’s peak activity, the adversary itself contacts the attacked firm, pretending to be a professional (e.g., a consultant) who can help in mitigating the effects of the firestorm, for example a Social Media Manager who has dealt with firestorms before. This can happen via email, social networks or through the corporate website, for example if the firm has some job openings and the adversary pretends to be a candidate. For smaller enterprises, the adversary may even show up in person at the attacked company’s premises. If the attacker manages to get hired, they might get access to classified information. I argue the attacker does not want to tamper with documents or attack the firm’s infrastructure while being an employee themselves.
+ c) Insider Threats: Instead of joining the firm themselves, the adversary might establish contact with employees who are still in the attacked company but are not showing support on social media, or have even manifested dissatisfaction towards the company. The attacker might try to persuade them to share confidential information, making them insider threats [25]. If they succeed, not only do they acquire classified information, but if the stolen content is also compromising for the firm, it could be published online to damage the firm’s reputation even more.
+ C. Asking for Ransoms
+ a) Extortion to Stop the Attack: The adversary contacts the attacked firm and proves that the botnet performing the firestorm is under their control. They then ask for an arbitrary amount of money in Bitcoin to shut down the bots, stopping a (hopefully) substantial part of the attack. In fact, if the firestorm has already managed to incite many people to join the social attack, the shutdown of the botnet might not stop or slow down the firestorm. If the adversary plans to attack multiple firms with their firestorms, they want to avoid situations like this, because the odds of a victim paying a ransom are proportional to the reliability of the attacker in stopping the attack once they receive the money. In other words, the attacker must be considered “trusted” to stop the attack if the ransom is paid, so that victims are more incentivized to pay [4].
+ b) Defence as a Service: The adversary contacts the attacked firm, but instead of showing they are in charge of running the attack and asking for money to stop it, they try to sell a fire(storm)fighter service to the victim, supposedly consisting of bots defending the reputation of the firm: this is basically a reversed firestorm, in which those same bots that built the latent state now defend the company. To avoid drawing excessive attention, the attacker might slowly change the proportion of attacking bots versus defending ones, until they are all defending the company.
+ V. CASE STUDY: CD PROJEKT RED
+ On December 10, 2020, CD PROJEKT RED released a long-awaited game called Cyberpunk 2077. This game was very popular even before its release and it generated continuous social hype from the video game community throughout its development, also winning the “Best Game Awaited” award from the Golden Joystick Awards for two consecutive years [42]. As shown in Figure 1 and Figure 2, hype for the game substantially increased during the 10 days before its release, reaching its apex on December 10, when the hashtag #Cyberpunk2077 was tweeted 193,900 times on Twitter,
+ TABLE I
+ SOCIAL ATTACK SURFACE ASSESSMENT
+ Category         | Action          | Confid. | Integ. | Avail. | Rep.
+ Large Scale      | DoS Attack      | No      | No     | Yes    | Yes
+ Large Scale      | Phys. Actions   | No      | No     | Yes    | Yes
+ Internal People  | Human Error     | Yes     | Yes    | Yes    | Yes
+ Internal People  | Help Offer      | Yes     | No     | No     | No
+ Internal People  | Insider Threat  | Yes     | No     | No     | Yes
+ Ransoms          | Extortion       | No      | No     | No     | No
+ Ransoms          | Defence Service | No      | No     | No     | No
+ Confid.: The action can affect the Confidentiality property. | Integ.: The action can affect the Integrity property. | Avail.: The action can affect the Availability property. | Rep.: The action can negatively affect the reputation of the company.
+ from users of 53 different nationalities. During this time span, many other hashtags regarding the game were very popular; for example, #Cyberpunk2077Hype was retweeted 10,000 times [41].
+ However, a few days after the release, the Cyberpunk 2077 topic arose again, this time associated with queries related to patches and refunds. In fact, the game was released too early and many bugs were present: due to this, several people asked CD PROJEKT RED for a refund, often also writing a bad review for the game on online stores. This created an "information disease" within the company, just like the one described in Section III: in this case, CD PROJEKT RED’s employees became stressed and felt pressure related to the quality of Cyberpunk 2077, in which they had invested more than two years of hard work [42].
+ In early February 2021, only 60 days after the game’s release, CD PROJEKT RED was hit by a ransomware attack, and the attackers were able to extract the source code of several games, as well as administrative files [8]. The attackers then threatened to leak or sell the stolen code and files, unless the firm paid a large amount of money to the cyber-criminals. In the end, CD PROJEKT RED refused to negotiate with the attackers, stating in a press release that they would “not give in to demands or negotiate with the actor”, also confirming that no personal information was obtained in the attack and that they were working with law enforcement to track down the attackers [7], [35]. Later on, security analysts found the stolen source code being auctioned on the dark web for a minimum price of 1 million USD [40]. The auction was closed after the attackers stated they had received an offer that satisfied them [40]. Within a week of these auctions, the code was shared online via social media, and CD PROJEKT RED began using DMCA takedown notices to remove posts containing their code [24].
+ The social hype that CD PROJEKT RED generated for Cyberpunk 2077 was used by hackers to threaten the company in order to extort money, but it also had a side effect, i.e. damaging the company’s reputation, which can undermine the sales of other long-awaited games.
+ In Table II I show the results of the sentiment analysis, obtained from tweets and comments for the hashtag #CDprojectRED. The data collected from Twitter respect the timeline of Cyberpunk 2077’s release and its development; the data shown in the table can be organised in three categories: before the release (October and November), during the release (December and January) and after the release of Cyberpunk 2077 (February).
+ It is possible to observe that in October and November the sentiment remained neutral-positive with a few oscillations. In December, when the game was released, I can observe a small increase in the negative sentiment due to the high number of bugs present in the game; however, this increment is quite negligible. In January, when a greater number of players were playing the game, the negative sentiment became stronger than the positive one, causing not only a negative compound (-0.111), but also a neutral-negative sentiment for the game and for the developers. Finally, in February the sentiment returned to neutral overall; however, the presence of negative sentiment is still stronger compared to the one in October and November.
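Monthly figures like the ones reported in Table II are typically obtained by scoring each tweet with VADER's SentimentIntensityAnalyzer and averaging the per-tweet scores by month. The sketch below shows only the aggregation step; the per-tweet score dictionaries are made-up stand-ins for `polarity_scores` output, and the function name and rounding are my own assumptions.

```python
# Sketch of the monthly aggregation behind a table like Table II.
# Each tweet would normally be scored with VADER, e.g.:
#   from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
#   scores = SentimentIntensityAnalyzer().polarity_scores(tweet_text)
# Here the per-tweet score dicts are illustrative stand-ins.

from collections import defaultdict

def monthly_sentiment(scored_tweets):
    """scored_tweets: iterable of (month, score_dict) pairs, where
    score_dict has VADER's 'neg', 'neu', 'pos' and 'compound' keys.
    Returns {month: averaged score_dict}, rounded to 3 decimals."""
    sums = defaultdict(lambda: {"neg": 0.0, "neu": 0.0, "pos": 0.0,
                                "compound": 0.0, "n": 0})
    for month, s in scored_tweets:
        acc = sums[month]
        for key in ("neg", "neu", "pos", "compound"):
            acc[key] += s[key]
        acc["n"] += 1
    return {m: {k: round(acc[k] / acc["n"], 3)
                for k in ("neg", "neu", "pos", "compound")}
            for m, acc in sums.items()}

tweets = [
    ("January", {"neg": 0.20, "neu": 0.70, "pos": 0.10, "compound": -0.30}),
    ("January", {"neg": 0.10, "neu": 0.80, "pos": 0.10, "compound": 0.05}),
]
print(monthly_sentiment(tweets)["January"])
# {'neg': 0.15, 'neu': 0.75, 'pos': 0.1, 'compound': -0.125}
```

Averaging the compound score over a month, as above, is what makes a single strongly negative period (like January in Table II) visible even when most individual tweets are neutral.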
+ These data show how much pressure the CD PROJEKT RED company had to experience during the release of the game. Additionally, in Figure 3, I show the financial value of the company during the whole game release timeline, also marking the two critical events that occurred: the yellow line indicates the release of the game, while the red line indicates the ransomware attack. I can see that, after the release of the game, the financial value of the company suffered a sudden drop, likely conditioned by customers losing trust in the company due to the presence of many bugs in the game, bad reviews and critiques. The company regained more than half the value lost during the next two months; however, the ransomware attack caused another drop in the financial value of the company, due to customers losing trust in the company again, this time from a security perspective.
+ TABLE II
+ VADER SENTIMENT ON #CYBERPUNK2077 FROM TWITTER
+ Months    | Negative | Neutral | Positive | Compound
+ October   | 0.085    | 0.757   | 0.150    | 0.163
+ November  | 0.079    | 0.766   | 0.149    | 0.163
+ December  | 0.087    | 0.750   | 0.161    | 0.153
+ January   | 0.143    | 0.758   | 0.093    | -0.111
+ February  | 0.104    | 0.745   | 0.145    | 0.120
+ VI. BUSINESS DEFENCE STRATEGY
+ To avoid dangerous events for companies, the human factor is a crucial element [37]; however, it is also possible to create specific defence strategies. The failures introduced in Section III, i.e. social failures, communication failures and product or service failures, can be analysed to prevent incidents. To most of us, the news that a particular piece of information (e.g. a meme, a hashtag) went “viral”, reaching millions of nodes in a short period of time, may seem purely random and hence unpredictable, but Kolli et al. [21] discovered that, at least 20% of the time, the cascade volume changes in a manner that appears to be random, while in the remaining 80% it is possible to predict the cascade’s future volume. Hence, it is possible to create short-term strategies to detect firestorm attacks while they are still in the early stages, i.e. while the latent state is being built.
+ Fig. 1. Interest Score showing social hype for the release of Cyberpunk 2077
+ Fig. 2. Queries showing social hype for the release of Cyberpunk 2077
+ However, it is also possible to create long-term defence strategies with proactive governance. A possible proactive strategy for the long term could be as follows:
1) Organise internal company procedures to help employees protect themselves against various attacks on social media (like LinkedIn);
2) Organise procedures outside the company, such as contacting allied/partner companies for help with the various attacks on social media;
Fig. 3. Financial value of CD PROJEKT RED and critical events
3) Create, in advance, supporting bots that will defend the company automatically;
4) Create an international database of accounts that have taken part in firestorms. The database, accessible to all organisations, both public and private, will help to understand whether the type of firestorm taking place is real or artificially created [12].
These possible actions can be highlighted by the mass media, which will publicly show that the firestorm is being fought because other people or organisations have begun defending the attacked company. Hence, these actions allow firestorms to calm down, and eventually to be extinguished, faster than simply doing nothing [15]. If a company has done something enormously wrong in the past, it is possible that every time the same company does something wrong again, another firestorm can restart, either over the recent event or over the past one. In fact, a firestorm can come back after an interval of about two years [15].
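The short-term detection route above rests on the predictability result of Kolli et al. [21], who quantify how non-random a cascade's volume changes are via information entropy (see also [39]). A minimal illustrative sketch follows; the up/down/flat discretisation and the sample volumes are assumptions for illustration, not the cited authors' exact method:

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy (bits) of a sequence of discrete symbols."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hourly cascade volumes (hypothetical retweet counts for a hashtag).
volumes = [5, 8, 13, 21, 34, 55, 60, 58, 40, 25, 15, 10]

# Discretise volume changes into up/flat/down symbols.
changes = ["up" if b > a else "down" if b < a else "flat"
           for a, b in zip(volumes, volumes[1:])]

H = shannon_entropy(changes)
H_max = math.log2(3)  # maximum entropy for three symbols
print(f"entropy = {H:.2f} bits (max {H_max:.2f}); lower = more predictable")
```

An entropy well below the maximum indicates a cascade whose future volume is largely predictable, which is what makes early-stage firestorm detection feasible.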
In the case of social failures, there is also an additional side-effect that must be mitigated: the firestorm naturally expands to the employees without any manipulation by the adversary. Example defence strategies against this side-effect could be implemented as follows:
1) Let people from outside and inside the company engage in dialogue about the topic on social networks (as in the case of carnivores vs vegetarians at ING-DiBa [31]). This strategy can increase the number of followers;
2) Blame an entity external to the company as a scapegoat, so that the firestorm can move from the company to the designated entity. Even if it is not very ethical, it is something that usually works;
3) Depending on the strength, length, and breadth of the attack, devise a strategy for the company's possible reactions:
a) Social failure: if the firestorm is linked to a partner company, or only a certain sector of the company is under attack, immediately distance yourself from them.
b) Communication failure: the goal here is to safeguard the company's reputation and authority. In this case, try to detach yourself immediately from the communication error, and continue with the company's reputation strategy, making it appear that it was just a bump in the road. Furthermore, apologising for the event never hurts.
c) Product or service failure: instantly block the production of the affected product or the provision of the service, and organise a commission that can evaluate the quality of the product/service. Even if this is complicated given the number of partners, quality standards and corporate continuity, this action, if done in time, creates a good defensive shield at the communication level, as people can understand that the company itself has also recognised the problem, limiting the damage.
Timing is essential during firestorms: first of all, to understand whether the firestorm is real or artificial (this can be told from the creation dates of the accounts conducting the firestorm; if the initial accounts were created recently, they are probably bots, hence artificial); secondly, to improve the cyber defence and be prepared for a possible cyber attack; thirdly, for the public reaction, because it means that the affected company has noticed the failure faster than, or as fast as, other people (those conducting the firestorm on social networks) and will promptly react to the problem, reassuring customers that it will be solved. This helps in calming down or extinguishing the firestorm. For example, the carnivores vs vegetarians case at ING-DiBa was caused by a communication failure. The company had never had so much traffic on its Facebook page before, and it saw in this an opportunity to increase the number of its followers. In fact, a few days into the firestorm, while the attackers were still posting, newly-acquired followers jumped into the debate and started defending the company [31].
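The account-age heuristic just described can be sketched as a simple filter over the accounts participating in a firestorm. This is an illustrative example rather than a tool used in this work; the 30-day threshold and the sample dates are assumptions:

```python
from datetime import date

def artificial_share(account_creation_dates, firestorm_start, max_age_days=30):
    """Fraction of participating accounts created shortly before the firestorm.

    A high share of very young accounts suggests the firestorm is
    artificially amplified (bots) rather than organic.
    """
    young = sum(1 for d in account_creation_dates
                if 0 <= (firestorm_start - d).days <= max_age_days)
    return young / len(account_creation_dates)

# Hypothetical creation dates of accounts tweeting the attack hashtag.
accounts = [date(2020, 11, 20), date(2020, 12, 1), date(2018, 3, 5),
            date(2020, 12, 5), date(2015, 7, 1)]
share = artificial_share(accounts, firestorm_start=date(2020, 12, 10))
print(f"{share:.0%} of accounts are under 30 days old")  # 3 of 5 accounts
```

In practice one would compare this share against a baseline for organic hashtags before declaring a firestorm artificial.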
Obviously, depending on the type of firestorm, real or artificial, it is necessary for the company to adapt its strategies to the type of attack. The prevention part, of course, works in both cases, but understanding who you are fighting against, and the causes, helps to save the reputation of the company, and sometimes even the company itself.
VII. FUTURE WORK
In future work, I would like to implement different pressure dynamics, i.e. either rapid, massive and incisive firestorms, or permanent firestorms with few accounts. Depending on the firestorm, these types of dynamics can change the pressure on companies and workers in different ways, perhaps showing that for some companies a permanent firestorm is more effective, and for others a rapid one. Another aspect I would like to draw attention to in future work is how people in the company are contacted, i.e. with messages that are more likely to provoke an ethical reaction, for example when workers are contacted by bots that point out the disaster they have caused to their company. This case is very interesting, as it is possible, after 'moralising' the worker, to apply social engineering strategies that facilitate the cyber attack. On the other hand, outside the company, i.e. not focusing on employees, strategies can be used to increase the chance of a successful cyber attack, or of the extortion of information or money. For instance, during the firestorm, it is possible to contact the company under attack while posing as the national cyber security agency, initiating strategies such as:
1) Passing themselves off as the national cyber security agency, the attackers claim that most of the accounts are fake and obtain information on the company's security;
2) Passing themselves off as the national cyber security agency, they enter the company's computer system;
3) Passing themselves off as the national cyber security agency and saying they are carrying out a cyber attack to test the company's cyber defences, they carry out a second attack immediately afterwards, exploiting the information from the first attack and bypassing part of the defences; or they tell the company not to defend itself against the first attack so as to obtain the desired data.
In any case, these kinds of interactions will be studied by means of computer simulations, since for obvious ethical reasons it is impossible, if not extremely difficult, to apply these strategies in practice.
VIII. CONCLUSIONS
In this paper, I have shown how some events related to cyber security are linked to certain social dynamics. When social dynamics are mixed with and linked to cyber purposes, the classic attack types (cyber or social attack) can no longer be defined separately; instead we have social-cyber attacks, as the effectiveness of one also raises the probability of success of the other.
I introduce a novel model allowing researchers and companies to (1) understand when companies and organisations have a fragile defence against a social-cyber attack, (2) illustrate how companies and organisations can defend themselves from firestorms, (3) argue that a social-cyber attack must be treated as a possible high-risk, multi-domain event, and (4) present a new model of cyber attack, with a multidisciplinary sociological approach that increases the potential of a common cyber attack. The data collected from the CD PROJEKT RED case shows how these types of attacks, although still little known, may become the norm in the future, as a company's assets are not only its human capital, or the production of goods and/or services, but also its own reputation.
IX. AUTHORS & PAPER INFORMATION
A. Data gathering
I collected tweets related to the topic #Cyberpunk2077 by using Tweepy and the Twitter archive API. Both services use Twitter's permissions to obtain and gather data, but any downloaded topic needs revision and a cleaning process to increase the quality of the research. For example, I found many copy-paste tweets (caused by spamming processes, or fake accounts/bots), and several tweets contained words incomprehensible to the VADER program (during the VADER sentiment analysis), which I deleted. For every topic I used the same methodology to obtain standard, high-quality data. In addition, to obtain the correct amount of tweets (defined as the number of tweets) for each day/hour I used getdaytrends.com, a site where it is possible to monitor every topic in real time, as well as older topics. In total, the data comprise more than ~5000 tweets. I obtained the financial data of CD PROJEKT RED from the https://www.investing.com/equities/cdproject-historical-data site.
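The monthly scores in Table II come from VADER, whose scoring returns negative/neutral/positive proportions plus a compound score per text. The toy lexicon scorer below only mimics the shape of that output for illustration; it is not the actual VADER tool, and the word lists are invented:

```python
POSITIVE = {"love", "great", "awesome", "fun", "amazing"}
NEGATIVE = {"bug", "broken", "refund", "crash", "terrible"}

def toy_polarity(tweet):
    """Return neg/neu/pos proportions of a tweet's tokens,
    mimicking the shape of VADER's polarity_scores output."""
    tokens = tweet.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    neu = len(tokens) - pos - neg
    n = max(len(tokens), 1)
    return {"neg": neg / n, "neu": neu / n, "pos": pos / n}

scores = toy_polarity("cyberpunk is fun but every crash needs a refund")
print(scores)
```

The real analysis would instead call `SentimentIntensityAnalyzer().polarity_scores(tweet)` from the `vaderSentiment` package and average the scores per month, as in Table II.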
B. Author Contributions
Investigation and data resources, methodology, data cleaning and software: A.R. All authors have read and agreed to the published version of the manuscript.
C. Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: this project has received funding from the University of Catania.
D. Author biographies
Andrea Russo is a PhD candidate in Complex Systems at the University of Catania. He is currently working at the Department of Physics and Astronomy. He has collaborated with CNR IBAM and has also worked on projects involving technology and society. His main research field and interests are focused on the study and development of computational social methods to explain social complexity, in particular in fields like politics, economics, business, and defence/security applications.
ORCID: 0000-0003-3816-0539
Corresponding author. Email: Andrea.russo@phd.unict.it
I would like to thank "Vereos" and "Andrea metal clone", who helped me in idealising and refining the paper.
REFERENCES
[1] A. Bryman and E. Bell. Business Research Methods, 2nd ed. Oxford: Oxford University Press, 2007.
[2] L. Bakos, D. D. Dumitrașcu, and K. Harangus. Human factor preparedness for decentralized crisis management and communication in cyber-physical systems. Sustainability, 11(23):6676, 2019.
[3] G. Carrer and F. Bechis. Così la Cina fa propaganda in Italia, con i bot. Ecco l'analisi su Twitter di Alkemy per Formiche. Formichiere.it, page 1, 2020.
[4] E. Cartwright, J. Hernandez Castro, and A. Cartwright. To pay or not: game theoretic models of ransomware. Journal of Cybersecurity, 5(1):tyz009, 2019.
[5] M. C. Ciccarelli. Rebuilding employee trust after a scandal. Human Resource Executive, 2018.
[6] K. Creighton. How to restore employee trust after a very public company scandal. HR Daily Advisor, page 1, 2019.
[7] C. Criddle. Cyberpunk 2077 makers CD Projekt hit by ransomware hack. bbc.com, 2021.
[8] D. D. CD Projekt hacked, Gwent source code leaked. eip.gg, 2021.
[9] N. Dawar and M. M. Pillutla. Impact of product-harm crises on brand equity: the moderating role of consumer expectations. Journal of Marketing Research, 37(2):215–226, 2000.
[10] J. Day. Nike: 'no guarantee on child labour'. The Guardian, 2001.
[11] M. Farrell. High speed trading fueled Twitter flash crash. CNN Business, 2013.
[12] F. Arruzzoli. "Il ruolo della cyber threat intelligence nelle organizzazioni" - Zoom.
[13] G. Giovanni and A. Russo. Profilazione sociale e sicurezza nazionale. SOCINT Press, 2021.
[14] G. Halkos and D. Bousinakis. The effect of stress and satisfaction on productivity. International Journal of Productivity and Performance Management, 2010.
[15] N. Hansen, A.-K. Kupfer, and T. Hennig-Thurau. Brand crises in the digital age: the short- and long-term effects of social media firestorms on consumers and brands. International Journal of Research in Marketing, 35(4):557–574, 2018.
[16] K. Hughes-Lartey, M. Li, F. E. Botchey, and Z. Qin. Human factor, a critical weak point in the information security of an organization's Internet of Things. Heliyon, 7(3):e06522, 2021.
[17] ISO. ISO 9001:2015. iso.org, 2015.
[18] ISO. ISO/IEC 27002:2022. iso.org, 2022.
[19] K. Connolly, A. Giuffrida, and J. Henley. Chaos in Germany and Italy after suspension of Oxford vaccine. The Guardian, 2021.
[20] R. Knight. If your company is going through a public scandal, should you leave? Harvard Business Review, page 1, 2018.
[21] N. Kolli, N. Balakrishnan, and K. Ramakrishnan. On quantifying predictability in online social media cascades using entropy. In Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pages 109–114, 2017.
[22] P. Langlois. 2020 Data Breach Investigations Report, 2020.
[23] J. Lehtonen. Kriisiviestintä. Mainostajien Liitto, 1999.
[24] Lorenzo. CD Projekt Red uses DMCA to take down tweets sharing stolen game code, 2022.
[25] G. Mazzarolo and A. D. Jurcut. Insider threats in cyber security: the enemy within the gates, 2019.
[26] K. McLeod. Workers left destitute after HES scandal say bosses had cash to pay wages but refused. Daily Record, page 1, 2019.
[27] M. Monkey. Twitter users not lovin' McDonald's. The Guardian, 2012.
[28] K. Nuortimo, E. Karvonen, and J. Härkönen. Establishing social media firestorm scale via large dataset media analytics. Journal of Marketing Analytics, pages 1–10, 2020.
[29] U.S. Department of Justice. Report on the investigation into Russian interference in the 2016 presidential election. Department of Justice, 2019.
[30] J. Oliver. Learning the lessons of the Brent Spar saga. Politico, 1995.
[31] J. Pfeffer, T. Zorbach, and K. M. Carley. Understanding online firestorms: negative word-of-mouth dynamics in social media networks. Journal of Marketing Communications, 20(1-2):117–128, 2014.
[32] F. M. Rinaldi, G. Giuffrida, and T. Negrete. Real-time monitoring and evaluation - emerging news as predictive process using big data-based approach, 2017.
[33] R. R. Riverso. Barcellona, arrestato l'ex presidente Bartomeu, March 2021.
[34] S. Samonas and D. Coss. The CIA strikes back: redefining confidentiality, integrity and availability in security. Journal of Information System Security, 10(3), 2014.
[35] J. Schreier. CD Projekt ransomware hack severely disrupts work on Cyberpunk updates. bloomberg.com, 2021.
[36] US Senate Intelligence Committee. Background to "Assessing Russian activities and intentions in recent US elections": the analytic process and cyber incident attribution. US Senate, 2017.
[37] C. Simonelli. Prima educare, poi comprare. Il fattore umano nella lotta al ransomware. Formiche.net, page 1, 29 May 2021.
[38] K. Strauss. How Volkswagen rallied its employees after its emissions scandal (at least for now). Forbes, page 1, 2017.
[39] M. Tang and X. Mao. Information entropy-based metrics for measuring emergences in artificial societies. Entropy, 16(8):4583–4602, 2014.
[40] Topic. CD Projekt Red source code reportedly sells for millions in dark web auction [updated] | Ars Technica, 2022.
[41] A. User. #cyberpunk2077hype • United States • Twitter trending hashtag, 2022.
[42] Wikipedia contributors. Cyberpunk 2077. https://it.wikipedia.org/w/index.php?title=Cyberpunk_2077&oldid=130410919. Accessed: NA-NA-NA.
[43] C. C. Wood and W. W. Banks. Human error: an overlooked but significant information security problem. Computers & Security, 12(1):51–60, February 1993.
INAzT4oBgHgl3EQfjf2_/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
J9FIT4oBgHgl3EQfZytY/content/tmp_files/2301.11254v1.pdf.txt ADDED
@@ -0,0 +1,1646 @@
 
Femtosecond Laser Engraved 2D Tunable Optofluidic Liquid Core/Air Cladding Channel Waveguides on PDMS
Sanyogita*, Amar Ghar and P. K. Panigrahi
Centre for Lasers and Photonics, Indian Institute of Technology, Kanpur-208016 (UP).
sanyogita.iitk@gmail.com

We have demonstrated the fabrication and characterization of 2D liquid-based multimode optical waveguide structures on a polydimethylsiloxane (PDMS) chip. Two separate microstructures, one with a width of 14 µm and a depth of 27 µm, and the other with a width and a depth of 110 µm, were fabricated by a femtosecond laser micromachining process. A dye solution is passed through the microstructure from one end to the other, wherein the dye solution acts as the core while PDMS and air act as the cladding medium. The femtosecond laser micromachining parameters are optimized in terms of laser power, pulse width, writing speed, focused beam size, etc. The quality of the fabricated microstructures is confirmed by microscopic analysis. The liquid core/air cladding waveguide behaviour is confirmed through spectral and modal analysis. The optical analysis has been carried out using the fluorescence light coupled out from waveguide structures filled with different dye solutions. These waveguide structures give strong light confinement and intense interaction between the dye solution and the pump light. The developed microstructures are tunable in terms of intensity, wavelength and beam size. Such microstructures can be implemented in the design and development of lab-on-chip microlasers and in sensing applications in any multifunction lab-on-chip device.
Introduction
Optofluidics is a great research platform where the advantages of both optics and microfluidics can be combined in a single chip, moving towards highly compact, portable and multifunctional devices [1]. This optofluidic lab-on-a-chip (LOC) approach offers huge potential in terms of low-cost optical sources, sensors, liquid-liquid waveguides, liquid core waveguides and real-time detection. Particularly in photonic science, and more specifically in the micro and nano regime, the integration of fluid and light in the same path offers the capacity to reconfigure the device in accordance with the choice of fluid opted for as the core medium, thus providing a dynamic and powerful practical tuning mechanism and making it customizable in real time [2, 3].
Nonetheless, the fabrication and characterization processes are complicated, owing to the minuscule dimensions of such microstructures and to the need to maintain the required smoothness at the edges of the microchannel and waveguide walls. High-precision handling of the chip is also a must, in order to minimize optical losses and to control light and fluid accurately in the micro/nano regime and maintain good functionality. In a liquid core/air cladding waveguide chip, the refractive index of the core material has to be higher than that of the cladding so as to enable the total internal reflection (TIR) phenomenon for the refractive-index-guided mode. Moreover, dye solutions with different host materials and concentrations show a broad variation in refractive index relative to that of water. Such an enhanced range helps in sustaining the liquid core/air waveguide over a long flow path for a much longer operational time. This feature provides for a substantial increase in the range of applications of this guiding mode for such optofluidic chips.
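The TIR condition just stated can be made quantitative: light in a step-index core of index n1 with cladding index n2 < n1 is guided for incidence angles beyond the critical angle theta_c = arcsin(n2/n1), and the acceptance is summarised by the numerical aperture NA = sqrt(n1^2 - n2^2). The sketch below uses illustrative index values (a typical PDMS index of about 1.41 and an assumed dye-solution index slightly above it), not measured values from this work:

```python
import math

def critical_angle_deg(n_core, n_clad):
    """Critical angle for total internal reflection at the core/cladding wall."""
    return math.degrees(math.asin(n_clad / n_core))

def numerical_aperture(n_core, n_clad):
    """NA of a step-index waveguide: sqrt(n_core^2 - n_clad^2)."""
    return math.sqrt(n_core**2 - n_clad**2)

n_dye = 1.43   # illustrative dye-solution core index (slightly above PDMS)
n_pdms = 1.41  # typical PDMS refractive index
n_air = 1.00

print(f"dye/PDMS wall: theta_c = {critical_angle_deg(n_dye, n_pdms):.1f} deg, "
      f"NA = {numerical_aperture(n_dye, n_pdms):.3f}")
print(f"dye/air wall:  theta_c = {critical_angle_deg(n_dye, n_air):.1f} deg, "
      f"NA = {numerical_aperture(n_dye, n_air):.3f}")
```

Note the asymmetry: the small index contrast against PDMS gives a grazing critical angle and weak confinement, while the dye/air wall confines light strongly (a formal NA above 1 simply means all launch angles are accepted).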
Optofluidic waveguides can confine light in small dimensions and generate a high-intensity optical beam over a long distance, creating a potential for tremendous applications in the fields of environmental monitoring, bio-sensing, analytical chemistry, etc. [4].
Various methods have been proposed to fabricate 2D structures; among them, structure fabrication using the soft lithography process is widely prevalent [5, 6]. But the soft lithography process itself has a number of disadvantages, such as the involvement of multiple fabrication steps, a high rate of errors while achieving the required depth of microstructures, a longer fabrication time, etc. The most noticeable drawback of soft lithography is that it requires another lithography method, such as photolithography or e-beam lithography, to fabricate the stamp master used in the further development of the microstructure [6]. On the other hand, femtosecond laser based direct writing has many advantages over other conventional methods, such as excimer laser writing, CO2 laser writing, e-beam lithography and soft lithography [6, 7], for the fabrication of microstructures. Femtosecond laser interaction with soft materials has opened up a new field of waveguide fabrication methods for structures on the surface as well as inside transparent materials. A femtosecond laser emits pulsed beams with durations of tens or hundreds of femtoseconds which, nowadays, are used for high-quality micro- and nanofabrication. As the energy deposition time of a femtosecond laser pulse is shorter than the time required to release the energy in the form of heat through electron-phonon coupling, the heat-affected zone is completely suppressed during the laser pulse interaction, even with a soft material like PDMS [7]. This feature enables laser processing on PDMS with high precision and resolution. Another advantage of femtosecond laser processing over conventional methods is the capability of sculpting complex shapes at the micro- and nanoscale in transparent materials. With the help of a focused fs-laser beam one can achieve extremely high peak intensity in the focused region, which provides for high precision in setting up the interaction region at the surface or even inside the volume. This feature not only eliminates the complicated multi-step patterning involved in conventional methods like photolithography for 2D fabrication, but also makes it feasible to create complex 2D structures which were not easily achievable by other conventional methods. The application of femtosecond micromachining to develop optofluidic devices improves their structural and optical qualities to such an extent that it could provide a major alternative platform to innovate and produce novel optical devices at the mass production level. Hence, this unique technique is going to contribute as a promising tool in the photonics field and will help in the emergence of new businesses once it reaches commercialization.
In this paper, we have demonstrated the fabrication of microstructures by femtosecond direct writing, along with the development of a liquid-core-based waveguide. 2D microchannels are structured on the surface of PDMS by the fs-laser. These microchannels are converted to a super-hydrophobic nature, which can provide for effective wave guiding. For the light flow path, R6G and RH101 dye solutions were selected as the liquid core medium. These dyes are distributed evenly along the length of the two prototypes that we have fabricated as two microchannels. The concentration of the dye solution is chosen in such a way that the refractive index of the liquid medium is slightly higher than that of PDMS and air, so that the PDMS and the air end up acting as a cladding. Cross sections of these waveguide systems were captured by a CCD camera. The roles of incident power, liquid dye concentration and photobleaching have been studied thereof.
Experimental Details
A femtosecond laser micromachining process has been used to fabricate two microstructures of distinct dimensions, each on a separate PDMS surface, with a provision of an inlet and an outlet at the terminal ends for the flow of liquid across the microchannel. These microchannels act as two unique liquid core/air cladding waveguides. Fig. 1 shows the schematics of the experimental setup for the femtosecond laser-based micromachining system. The setup consists of a regenerative Ti:sapphire amplified laser system (Clark-MXR, USA) capable of delivering a maximum output power of 800 mW, with a pulse width of 120 fs, a central wavelength of 775 nm and a repetition rate of 1 kHz.
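From these source specifications, the energy per pulse and the approximate peak power follow directly: E = P_avg / f_rep and P_peak ~ E / tau. A quick back-of-the-envelope check at the quoted maximum average power (ignoring pulse-shape factors):

```python
P_avg = 0.8      # maximum average power, W (800 mW)
f_rep = 1e3      # repetition rate, Hz (1 kHz)
tau = 120e-15    # pulse width, s (120 fs)

E_pulse = P_avg / f_rep   # energy per pulse, J
P_peak = E_pulse / tau    # approximate peak power, W

print(f"pulse energy ~ {E_pulse * 1e3:.1f} mJ")
print(f"peak power  ~ {P_peak / 1e9:.1f} GW")
```

Gigawatt-level peak powers from millijoule pulses are what make the nonlinear, athermal ablation regime described above accessible at a modest average power.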
Fig. 1: Femtosecond micromachining fabrication setup for 2D microstructures/hollow waveguide structures on PDMS
+ The output beam from fs-laser system is focused on surface of
150
+ PDMS sample using 10X objective lens and beam aligning system
151
+ (OPTEC Belgium). All the microstructures are created by successive
152
+ translator movements of PDMS sample mounted on micro-position
153
+ stage without any movement of focused laser beam. The PDMS
154
+ substrate is irradiated with focused laser beam. The key steps in the
155
+ experiment includes focusing lens and micro-position translation
156
+ stage with 1 um resolution as shown in Fig 1. The focusing objective
157
+ lenses are used to converge the laser beam providing a greater
158
+ depth of field and smaller spot size as per the calculated
159
+ requirement which is important for precision laser micro-machining
160
+ process. Micro-position stage is used to move the sample as per the
161
+ designed program. The computer-controlled laser power and
162
+ micromachining system ensures that position errors and beam
163
+ distortions are minimized over the entire scan region.
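As a rough check on the focusing optics, the diffraction-limited spot size and depth of focus for the 775 nm beam can be estimated. The numerical aperture of the 10X objective is not stated in the text, so NA = 0.25 below is an assumed, typical value for such a lens.

```python
import math

def focused_spot_diameter_um(wavelength_nm: float, na: float) -> float:
    """Diffraction-limited spot diameter in um, d ~ 1.22 * lambda / NA."""
    return 1.22 * wavelength_nm * 1e-3 / na

def depth_of_focus_um(wavelength_nm: float, na: float) -> float:
    """Approximate depth of focus in um, DOF ~ 2 * lambda / NA^2."""
    return 2 * wavelength_nm * 1e-3 / na**2

# 775 nm fs laser from the text; NA = 0.25 is an assumed objective value.
print(f"spot ~ {focused_spot_diameter_um(775, 0.25):.1f} um")
print(f"DOF  ~ {depth_of_focus_um(775, 0.25):.1f} um")
```

With these assumptions the focused spot is a few micrometres across, consistent with the micrometre-scale features written on the PDMS surface.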
Fig. 2: Schematics of: (a) Waveguide-I cross section; (b) Waveguide-II cross section

For this study, two straight microchannels were fabricated on separate PDMS surfaces using different laser powers and focusing lenses. The first microstructure (larger microchannel) has a width of 110 µm and a depth of 110 µm; the second (smaller microchannel) has a width of 14 µm and a depth of 27.937 µm, as shown in Fig. 2. The larger microchannel was fabricated at a laser power of 25 mW with a spot size of 15 µm (writing speed 1 mm/s), using a multi-pass laser scan over the square cross section. For this multimode waveguide, the target cross section was scanned 10 times horizontally and 5 times vertically with a beam overlap of 10 µm. The inlet and outlet were also fabricated by multi-pass fs-laser scanning. The smaller microchannel (waveguide I) was likewise fabricated with a multi-pass scan but with slightly different writing parameters: a laser power of 18 mW, a beam spot size of 8 µm, and only two horizontal scans with a beam overlap of 6 µm (writing speed 1 mm/s). The measured channel width was 14 µm and the depth 27.937 µm. To flow the dye solutions through the fabricated channels, uniform inlets and outlets connected to the central microchannels were fabricated with multi-pass, multi-scan fs-laser machining. The inlet and outlet of the larger microchannel measure 110 µm in width and 40 µm in depth; for the smaller microchannel the width was 110 µm and the depth 20 µm. In both cases the inlet and outlet were kept shallower than the central microchannel for easy flow of liquid into it.
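From the stated writing parameters (1 kHz repetition rate, 1 mm/s writing speed, 15 µm and 8 µm spot sizes), the pulse-to-pulse spacing on the sample and the number of overlapping pulses within one spot can be sketched as:

```python
def pulse_spacing_um(speed_mm_s: float, rep_rate_hz: float) -> float:
    """Distance the stage moves between consecutive pulses, in um."""
    return speed_mm_s * 1000.0 / rep_rate_hz

def pulses_per_spot(spot_um: float, spacing_um: float) -> float:
    """Number of pulses that overlap within one focal spot."""
    return spot_um / spacing_um

spacing = pulse_spacing_um(1.0, 1000)         # 1 mm/s at 1 kHz -> 1 um spacing
print(spacing)
print(pulses_per_spot(15, spacing))           # larger channel, 15 um spot
print(pulses_per_spot(8, spacing))            # smaller channel, 8 um spot
```

At 1 µm pulse spacing, every point on the scan line receives on the order of 8-15 overlapping pulses, which is what makes the multi-pass ablation cumulative and smooth.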
[Fig. 1 schematic labels: femtosecond laser, mirrors M1/M2, beam splitter (BS), CCD, objective lens, sample on computer-controlled translation stage. Fig. 2 labels: waveguide I, dye core 14 µm × 27 µm with PDMS/air cladding; waveguide II, dye core 127 µm × 127 µm with PDMS/air cladding.]
The width and depth of the developed microstructures were confirmed by image analysis with a confocal microscope (Olympus LEXT OLS 4000), as shown in Fig. 3. This system is capable of a resolution of up to 10 nm in the Z direction and 120 nm in the X-Y plane. The superhydrophobic channels are effective in creating an air cladding between the dye-filled liquid core and the solid PDMS walls, thus providing good coupling for TIR in the waveguide. Because of the 2D waveguiding, some scattering and diffraction of visible light persists at the channel walls. Light also undergoes TIR at the front end of the channel. Due to the femtosecond structuring of the PDMS material, the channel wall is also made
hydrophobic, which controls the waveguide losses. The contact angle of the femtosecond direct-written 2D microchannel was measured as shown in Fig. 4: hydrophobicity was checked on a contact surface modified by femtosecond laser exposure with the same parameters used to fabricate the microstructures on PDMS. The channel was found to have become hydrophobic. These hydrophobic channels have a low solid fraction and can effectively support the liquid-core/air-cladding waveguide configuration on a lab-on-chip platform. Hence, this structure allows effective control and flow of light from one end to the other.
Fig. 3: (a) 2D waveguide structure-I over PDMS; (b) cross section of waveguide structure-I; (c) 2D microstructure-II over PDMS; (d) cross section of microstructure-II

Fig. 4: Contact angle measurement for (a) plain PDMS surface and (b) PDMS surface exposed to the femtosecond laser
Implementation of the microstructure as an optical waveguide

The two fabricated microchannels, with square and rectangular 2D cross sections respectively, are filled with a liquid dye medium to convert them into liquid-core multimode waveguide microstructures. The structures act as a liquid-core waveguide platform when the refractive index (n) of the cladding material (PDMS/air) is smaller than that of the flowing dye solution, which acts as the core and enables total internal reflection in the index-guided mode configuration [8, 9].
The waveguide losses are also sensitive to the roughness of the waveguide walls. As the walls are quite smooth in femtosecond fabrication, the losses are much lower than with other conventional fabrication methods. Other challenges in these experiments are also resolved because gas (i.e., air) is used as the cladding material [9, 10]. Air has a much lower refractive index (n_air = 1.0) than most solid and liquid materials, and thus allows a wider range of incident angles. Air also has a much lower viscosity than any liquid, which significantly reduces hydrodynamic friction and Joule heating at the core-cladding interface [10]. The high refractive index difference between the liquid core and the air cladding (Δn = 0.407) increases the amount of light trapped inside the core and avoids the diffusional mixing problem normally observed in liquid-liquid (L2) waveguides.
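Using the indices quoted later in the text for the 1 mM Rh-6G solution (n_core = 1.4030, n_air = 1.0, n_PDMS = 1.40), a minimal sketch of the numerical aperture at the liquid-air wall and the critical angles at the two cladding interfaces:

```python
import math

def numerical_aperture(n_core: float, n_clad: float) -> float:
    """NA = sqrt(n_core^2 - n_clad^2) for a step-index guide."""
    return math.sqrt(n_core**2 - n_clad**2)

def critical_angle_deg(n_core: float, n_clad: float) -> float:
    """TIR critical angle (from the wall normal) at a core/clad interface."""
    return math.degrees(math.asin(n_clad / n_core))

n_core, n_air, n_pdms = 1.4030, 1.0, 1.40   # indices quoted in the text

print(f"NA (air clad)       = {numerical_aperture(n_core, n_air):.3f}")
print(f"theta_c (core/air)  = {critical_angle_deg(n_core, n_air):.1f} deg")
print(f"theta_c (core/PDMS) = {critical_angle_deg(n_core, n_pdms):.2f} deg")
```

The liquid-air NA is close to 1, so nearly any launch angle is guided at that wall, whereas the liquid-PDMS critical angle is near grazing incidence — consistent with the weaker confinement at the PDMS wall discussed above.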
In the present case, two dyes have been used as the gain material to demonstrate the concept of a liquid-air waveguide on a chip: Rhodamine 6G (Rh-6G) and Rhodamine 101 (Rh-101), each dissolved in a mixture of ethanol and benzyl alcohol, in a concentration range of 1 mM to 5 mM for both liquid-core solutions. The change of refractive index with dye concentration for both solutions was measured with a refractometer (Abbemat 500). The core-clad refractive index difference lies between 10^-3 and 10^-2 as the concentration of R6G and Rh101 is varied from 1% to 10%. The measurements show that dye solutions of different concentrations act as different liquid-core media with varying characteristics. For example, the refractive index of a 1 mM Rh-6G solution (n2 = 1.4030) in the ethanol + benzyl alcohol mixture is higher than that of the cladding materials, air (n1 = 1) and PDMS (n3 = 1.40). The liquid-filled channel then acts as a core, and light propagates through the liquid-core waveguide by satisfying the condition of total internal reflection. This has been demonstrated through the fluorescence emerging at the other end of the waveguide. The characteristics differ markedly between the gain materials as they are confined to the liquid-air interface.
Fig. 5: Ray-tracing simulations (FRED) of the two liquid waveguide structures, viewed from the top. In each case the core (liquid dye solution) is the lightly shaded region embedded in the darker cladding region. (a) Multimode propagation at the liquid-air interface, 110 µm width (Waveguide II); (b) multimode propagation at the liquid-PDMS interface (Waveguide II); (c) multimode propagation at the liquid-air interface, 14 µm width (Waveguide I); (d) multimode propagation at the liquid-PDMS interface (Waveguide I); (e) mode field distribution at the liquid-air interface for Waveguide I; (f) mode field distribution at the liquid-air interface for Waveguide II
Characterization

For any waveguide structure, there is a range of ray angles that fulfill the total internal reflection condition, set by the relative refractive index difference between the core and cladding regions. In this case, dye solutions of different concentrations act as the core medium and PDMS/air acts as the cladding. The number of TIR events is inversely proportional to the diameter, i.e., the cross section, of the microchannel. A ray-tracing simulation platform (FRED) was used to understand the propagation of 532 nm fluorescence light through the dye-filled microstructures. The optical losses at the liquid-air and liquid-PDMS interfaces for the multimode and single-mode microstructures, respectively, are shown in Fig. 5, which presents ray-trace simulations of the liquid-core waveguides. A Gaussian beam from a coherent laser source is coupled into one end of each waveguide through a 10X objective lens, at normal incidence to the waveguide, with the dye solution filling the microstructure. The simulation takes the liquid dye (R.I. 1.4030) as the core medium embedded in PDMS (R.I. 1.40), with air (R.I. 1) as the upper medium; the PDMS lower cladding and air upper cladding thus form the lower-index regions around the core.
The results for the different cases show that light can be coupled into the microstructure filled with 1 mM dye solution, confirming its waveguide nature. The study also makes clear that the optical losses at the liquid-air interface are lower than at the liquid-PDMS interface, irrespective of the waveguide dimensions. The waveguide dimensions do, however, affect the number of total internal reflections per unit length. The structure with the smaller diameter is more suitable as a liquid mode-guiding structure, increasing the probability that more photons are guided to the output end.
These results confirm that laser light can propagate through the 2D liquid-core waveguide structure by satisfying the condition of total internal reflection at the interface between the liquid core and the PDMS/air cladding. From these observations it becomes clear that many complications and challenges of propagating an index-guided mode are easily overcome when air is used as the cladding material.
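The statement that the number of TIR events scales inversely with channel diameter can be illustrated for a meridional ray over the 15 mm channel length quoted below; the 5° ray angle used here is illustrative, not a measured quantity:

```python
import math

def tir_bounces(length_um: float, diameter_um: float, angle_deg: float) -> float:
    """Reflections for a meridional ray launched at angle_deg to the axis.

    The axial distance between successive wall hits is d / tan(theta),
    so the bounce count over a length L is L * tan(theta) / d.
    """
    return length_um * math.tan(math.radians(angle_deg)) / diameter_um

L = 15_000  # 15 mm channel length, from the text
for d in (110, 14):  # the two channel widths, in um
    print(d, round(tir_bounces(L, d, 5)))
```

The 14 µm channel forces roughly eight times as many wall reflections per unit length as the 110 µm channel, which is why wall smoothness matters more for the smaller guide.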
In this experiment, the dye solution in a mix of ethanol and benzyl alcohol was filled into the two microchannels (15 mm length each, 110 µm and 14 µm wide respectively) on the PDMS chip. The end-fire coupling method was used for optical characterization of the developed liquid waveguide structures; the characterization setup is shown in Fig. 6. Light from an Nd:YAG laser is end-coupled into waveguide I and waveguide II using an objective lens, with the assembly of optics also shown in Fig. 6. The wall roughness of the 2D microchannels for both waveguides I and II was limited to approximately 1 µm, owing to the quality of femtosecond direct writing. To characterize the chip, a micro-syringe was used to insert the liquid dyes into the microchannels as the core medium. The required core liquids were obtained using ethanol + benzyl alcohol as the host solvent with the two solutes, Rh-6G and Rh-101, forming two different dyes; their mixtures at varying concentrations act as the liquid cores within the two microstructures.
Fig. 6: Characterization setup for liquid-core/air-cladding waveguiding

Since the absorption spectra of Rh-6G and Rh-101 lie in the visible region, an Nd:YAG laser with 4 mW power, 7 ns pulse duration, and 10 Hz repetition rate was selected as the pump source. This Nd:YAG laser excites the fluorescent dye molecules dissolved in the liquid core. The source is aligned through a beam iris and a 10X objective lens; past the objective, the beam spot size is reduced to ~100 µm for the waveguide II structure and 10 µm for the waveguide I structure. As the light and liquid are pumped simultaneously into the microchannel, the high refractive index difference between the liquid core and air guides the fluorescence light, which is captured at the other end of the microchannel. The outlet end is connected to an optical spectrometer, and fluorescence spectra are measured while varying the laser power and the dye concentration.
Modal cross-sectional analysis of the waveguide structures: of the two structures shown in Figs. 3 and 7, the first, multimode waveguide II, allows multimodal tuning of the waveguide from the liquid core, while the second, waveguide I, supports only a few propagating modes.
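A rough fiber-like estimate gives a feel for the mode counts of the two channels: taking the half-width as an effective core radius, V = π·w·NA/λ, with M ≈ V²/2 modes for a highly multimode guide. The liquid-air NA ≈ 0.98 follows from the quoted indices; treating a square channel as a fiber is an approximation, so only the relative scaling should be trusted.

```python
import math

def v_number(width_um: float, wavelength_um: float, na: float) -> float:
    """Fiber-like V number, using the channel half-width as the core radius."""
    return math.pi * width_um / wavelength_um * na

def mode_count(v: float) -> int:
    """Rough mode estimate for a highly multimode guide: M ~ V^2 / 2."""
    return int(v**2 / 2)

na = 0.984            # liquid core (n = 1.4030) against air cladding
for w in (110, 14):   # the two channel widths from the text, in um
    v = v_number(w, 0.532, na)   # 532 nm fluorescence wavelength
    print(w, round(v), mode_count(v))
```

Both guides are multimode by this estimate, but the 110 µm channel carries orders of magnitude more modes than the 14 µm one, consistent with waveguide I supporting far fewer modes.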
To separate the fluorescence signal from the excitation light, spectroscopic analysis is needed, and separating these two outputs at the output end of the channel is a difficult task. The intensity profile of the fluorescent light generated and propagated through the developed liquid waveguide structures was measured using the near-field intensity profile measurement setup shown in Fig. 6.
The output profiles of both waveguide structures were captured with a CCD equipped with a band-pass filter for the pump light (λ = 532 nm). The intensity at the output end of the liquid waveguide structure and the corresponding intensity profile are shown in Fig. 7. The profile measurements make clear that the fabricated microstructures support index-guided modes and can be used as waveguide-like structures for various applications. The small input beam size (~100 µm), comparable to that of the liquid core (100 µm), helps reduce the coupling losses of the pump light at the cross section of the microchannel. Coupling and propagation losses increase because of scattering and diffraction of visible light at the PDMS channel walls (i.e., the air/dye-solution/PDMS interfaces at the front and end) at normal incidence.
Fig. 7: Intensity distribution for light propagating through (a) Waveguide I and (b) multimode Waveguide II liquid-core/air-clad cross sections
Fig. 8: Comparative emission spectra for Waveguide I, Waveguide II, and a cuvette, for (a) Rh-6G and (b) Rh-101 dye solutions
Results and discussion

To confirm the waveguide nature of the dye-filled 2D microstructures, fluorescence spectroscopy was studied for a 3 mM dye (Rh-6G) solution as the liquid medium in three different configurations: (a) a quartz cuvette, (b) the waveguide II structure, and (c) the waveguide I structure.
The fluorescence emission spectra were collected for the three configurations to determine the effect of microstructure dimensions on the emission output. The emission spectral peak wavelength shifts by 15 nm between the microstructures and a cuvette filled with the same Rh-101 dye solution, pumped by the same Nd:YAG laser at 4 mW, as shown in Fig. 8b. A similar shift is observed for Rh-6G, shown in Fig. 8a. The increase in output photon density confirms the coupling of fluorescence inside the waveguide structure. It is also clear from the figures that the FWHM of the fluorescence spectra narrows from the cuvette to waveguide structure I. The spectral narrowing is attributed to the Fabry-Perot resonator formed by the dye-filled liquid waveguide and its solvent-air interfaces. This result confirms that the fluorescence light generated by the dye solutions couples through the microchannel and forms Fabry-Perot-type oscillations, leading to the conclusion that the 2D structure fabricated on the PDMS surface functions as a liquid-core/air-cladding waveguide. In addition, comparison of the two waveguides and the quartz cuvette shows that the dynamics of the fluorescence spectra also change: the intensity, lasing peak, and linewidth vary with the dimensions of the individual structure. The same behavior is observed for the Rh-101 dye solution. The FWHM of the fluorescence signal from the quartz cuvette is 48.8 nm with a peak wavelength of 637.59 nm. In multimode waveguide II the Rh-101 linewidth is 13.53 nm with a peak at 624.10 nm, and in the waveguide I structure the linewidth is 6.94 nm with a peak at 623.75 nm. For Rh-6G, the cuvette FWHM is 42.89 nm with a peak at 580.90 nm; for multimode waveguide II it is 14.52 nm with a peak at 573 nm; and for
the waveguide I structure the linewidth reduces to 5.34 nm with the peak shifted to 573.70 nm. The comparison shows that the peak wavelength in waveguide structures I and II is blue-shifted relative to the cuvette output. For the quartz cuvette the output fluorescence spectrum has a large bandwidth; owing to the small dimensions of the microchannels, the measured spectra clearly show that the linewidth of waveguide structure II is smaller than that of the cuvette, and waveguide structure I has an even lower linewidth than structure II.
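The peak wavelengths and FWHM values reported above (transcribed from the text) can be tabulated to make the cuvette-to-waveguide narrowing and blue shift explicit:

```python
# Measured (peak wavelength, FWHM) in nm, transcribed from the text.
data = {
    "Rh-6G":  {"cuvette": (580.90, 42.89), "waveguide II": (573.00, 14.52),
               "waveguide I": (573.70, 5.34)},
    "Rh-101": {"cuvette": (637.59, 48.80), "waveguide II": (624.10, 13.53),
               "waveguide I": (623.75, 6.94)},
}

for dye, rows in data.items():
    ref_peak = rows["cuvette"][0]
    for geom, (peak, fwhm) in rows.items():
        print(f"{dye:7s} {geom:13s} peak {peak:6.2f} nm  FWHM {fwhm:5.2f} nm  "
              f"blue shift {ref_peak - peak:5.2f} nm")
```

Read this way, both dyes show the same trend: several nanometres of blue shift and a roughly three- to eight-fold linewidth reduction on moving from the cuvette to the channel geometries.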
Effect of power in the higher concentration regime

To characterize these fs-written microchannels as multimode waveguide microstructures I and II, the effect of pump power was studied for the Rh-6G and Rh-101 dye solutions. With variation in pump power, significant tunability is observed in the fluorescence spectra. All measurements were made at room temperature. Fig. 9 shows the measured emission spectra with 10 mM Rh-6G in both liquid-core/air waveguide structures I and II. The input power was varied over a range of 4-12 mW in both cases. For lower concentrations, the fluorescence peak wavelength changes insignificantly with incident laser power, but at 10 mM a peak wavelength shift is observed as the power is varied, and a shifted fluorescence peak emerges as the optical pumping power density is increased. Absorption of the incident laser beam changes the refractive index gradient of the dye solution by of order 10^-3 to 10^-4 through the optically heated thermal lensing effect [11]. In addition, the incident high-power pulsed laser beam generates acoustic pressure waves inside the dye-filled liquid waveguide structure, which induce variations in the refractive index of the medium [11, 12].
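The order of magnitude of the thermally induced index change can be sketched with Δn = (dn/dT)·ΔT. The thermo-optic coefficient dn/dT ≈ −4×10⁻⁴ K⁻¹ used below is a typical literature value for ethanol (an assumption, not a value from this work); with it, a temperature rise of a few tenths of a kelvin to a few kelvin reproduces the quoted 10⁻⁴ to 10⁻³ range.

```python
def thermal_index_change(dn_dT_per_K: float, delta_T_K: float) -> float:
    """First-order thermo-optic index change: dn = (dn/dT) * dT."""
    return dn_dT_per_K * delta_T_K

# dn/dT ~ -4e-4 per K: assumed, typical literature value for ethanol.
for dT in (0.5, 2.5):
    dn = thermal_index_change(-4e-4, dT)
    print(f"dT = {dT} K -> dn = {dn:.1e}")
```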
884
+
885
+
886
+
887
+
888
+
889
+
890
+
891
+
892
+
893
+
894
+
895
+
896
+
897
+
898
+
899
+
900
+
901
+
902
+
903
+
904
+
905
+
906
+
907
+
908
+
909
+
910
+
911
+
912
+
913
+
914
Thus the incident laser power plays a significant role in the shift of the fluorescence peak wavelength and the output spectrum, as reflected in the experimental results of Fig. 9. In the low-concentration regime, isolated dye molecules are present, but as the dye concentration increases, the spacing between molecules decreases and aggregates form.
Peak wavelength variation is therefore seen in the very high concentration regime. The other phenomenon contributing to the modified dye output spectra is self-absorption at higher concentrations. Since molecular dimers form at high concentration, a second shift appears in the measured fluorescence spectra: a red shift is observed for the 10 mM dye concentration as the power is varied from 4 mW to 12 mW. From Fig. 9, the peak wavelength for multimode waveguide structure II with the Rh-6G solution is 579.8 nm at 4 mW pump power. As the power increases to 6 mW, the peak shifts to 581.42 nm, and at higher powers the red-shifted peak reaches 583.25 nm. The same experiment was repeated for the Rh-101 dye solution: with a 10 mM solution, the fluorescence spectrum of multimode waveguide structure II at 2 mW has its peak at 626.48 nm. The amount of light guided inside the multimode waveguides I and II depends strongly on the refractive index difference between n_core and n_clad:

Δn = n_core − n_clad

Rh-6G and Rh-101 are dissolved in a mixture of ethanol and benzyl alcohol as the host solution.
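With the indices quoted in the text, Δn at the two cladding interfaces is:

```python
n_core, n_air, n_pdms = 1.4030, 1.0, 1.40   # indices quoted in the text

dn_air = n_core - n_air     # strong confinement at the liquid-air wall
dn_pdms = n_core - n_pdms   # weak confinement at the liquid-PDMS wall
print(f"dn(core/air)  = {dn_air:.3f}")
print(f"dn(core/PDMS) = {dn_pdms:.3f}")
```

The two orders of magnitude between the air-side and PDMS-side contrast explain why guiding is dominated by the liquid-air interface.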
Fig. 9: Pump-power-dependent fluorescence emission spectra for (a) Rh-6G in waveguide structure I, (b) Rh-6G in multimode waveguide structure II, (c) Rh-101 in waveguide structure I, and (d) Rh-101 in waveguide structure II (A = 4 mW, B = 6 mW, C = 8 mW, D = 10 mW, E = 12 mW)
The experiment was repeated for waveguide structure I with the same solution. Light is collected from the output end of the waveguide. After being guided inside waveguide structure I, the light is observed at the waveguide cross section, and for the same power and concentration the peak wavelength and linewidth change: the peak wavelength changes only slightly, but the linewidth changes drastically in waveguide I compared to waveguide II.
For waveguide structure I, the red shift of the fluorescence emission peak for both dyes is caused by varying the pump power from 4 mW to 12 mW in steps of 2 mW. The corresponding tunability for Rh-6G is in the range 579.87-583.25 nm with an average linewidth of 6.8 nm. For the Rh-101-based active solution in waveguide structure I, the tunability achieved is 7 nm with an average linewidth of 6 nm, the peak ranging over 620.49-628.44 nm. For waveguide structure II with Rh-6G, a red shift of the peak wavelength is also observed, with a peak tunability of 4 nm and an average linewidth of 10 nm; with Rh-101, a tunability of 6 nm is achieved. For multimode waveguide structure II with Rh-101, the spectral tunability spans 626.48-632.50 nm with an average linewidth of 10 nm; with Rh-6G, the tunability spans 579.87-583.25 nm with an average FWHM linewidth of 9.5 nm.
Effect of concentration

The tunability of the output band of the liquid-filled microstructures is mainly determined by the choice of dye solution and its solubility limit, down to highly dilute systems of Rh-6G and Rh-101. In the low-concentration regime (around 0.1 mM), the self-absorption component is quite significant and decreases the signal intensity. At higher concentrations (around 10 mM), intermolecular self-quenching rapidly decreases the output intensity [11]. In particular, in the high-concentration regime the Rh-6G and Rh-101 molecules arrange themselves into H-type and J-type dimers [14-16]. This dimer formation changes the electronic structure, and as a result the output emission spectrum also changes. In this way, varying the concentration of the liquid medium provides optical flexibility for the liquid waveguide structures.
+ It was observed that the spectral position of the propagating mode
1120
+ through the liquid waveguide structure shifts toward longer
1121
+ wavelengths by increasing the concentration of dye solution. In
1122
+ case of waveguide I filled Rh-6G solution, the peak wavelength shift
1123
+ observed from 573.16 nm to 580.67nm for 1mM to 4 mM
1124
+ concentration change. Along with peak wavelength, average line
1125
+ width shift is also observed from to 5 -6.01 nm for the same. For Rh-
1126
+ 101 filled waveguide I , 5nm shift in peak wavelength and ± 2 nm
1127
+ sift in line width is observed when concentration changes from 1
1128
+ mM to 5 mM respectively. As Fig. 10 shows, the wavelength of the
1129
+ peak maximum is red-shifted with varying concentration. The same
1130
+ experiments were carried out for multimode waveguide structure II
1131
+ for both dye solutions. Similarly, spectral study for different
1132
+ concentrations in multimode structure II for Rh-6G dye, 8 nm red
1133
+ shift in peak wavelength and 1.5 nm shift in linewidth have been
1134
+ observed while 5 nm peak wavelength red shift with ± 2 nm
1135
+ linewidth shift has been observed for Rh-101 respectively. Here, the
1136
+ peaks occur at different wavelengths according to the concentration
+ of the liquid medium. A red shift in the output spectra is observed
+ when the concentration is increased from 1 mM to 4 mM. The
+ apparent red shift in the emitted intensity signal is due to the small
+ Stokes shift of Rh-6G and the large spectral overlap between
+ absorption and emission [13, 14]. The same behavior is observed for
+ the Rh-101 solution. The optimum optical absorption of the pump
+ beam inside the dye-filled microchannel is achieved at a
+ concentration of 1 mM.
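+ The reported trend can be summarized numerically. As a minimal sketch (assuming, purely for illustration, a linear dependence of peak wavelength on concentration between the two measured endpoints for Rh-6G in waveguide I; the interpolation is an assumption, not a fit from the paper):

```python
import numpy as np

# Reported endpoints for Rh-6G in waveguide I: 573.16 nm at 1 mM,
# 580.67 nm at 4 mM. A linear model between them is a hypothetical choice.
conc_mM = np.array([1.0, 4.0])        # measured concentrations (mM)
peak_nm = np.array([573.16, 580.67])  # measured peak wavelengths (nm)

slope, intercept = np.polyfit(conc_mM, peak_nm, 1)

def peak_wavelength(c_mM):
    """Estimated emission peak (nm) at concentration c_mM (assumes linearity)."""
    return slope * c_mM + intercept
```

+ Evaluating the model at intermediate concentrations reproduces the observed monotonic red shift.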
1145
+
1146
+ Photo bleaching effect in microstructure
1147
+ The rate of photo bleaching primarily depends upon the type of
1148
+ dye, host material and their optical properties. Additionally,
1149
+ the illumination intensity of the source, the source wavelength, the
+ exposure time and the temperature also affect the extent of photo
+ bleaching [16, 17].
1151
+ Photo bleaching is not a desirable phenomenon for lab-on chip
1152
+ based optofluidic waveguides and optofluidic lasers. It disrupts the
+ continuous output of the miniaturized device and limits its usage to
+ short time periods only. Here, we have studied the photo bleaching
1155
+ effect in waveguide structure I and II for both Rh-6G and Rh-101
1156
+ dye mediums. This study helps us to design and improve upon the
1157
+ functionalities of optofluidic chips.
1158
+ As a consequence of photo bleaching caused by long exposure of
+ the liquid active medium to the pump light, the fluorophores lose
+ the ability to emit fluorescence at the same intensity.
1161
+ The linewidth and intensity of the fluorescence output change
+ significantly due to the photo bleaching effect in the liquid
+ waveguide. In micro dye lasers, owing to diffusion dynamics in the
+ presence of on-chip reservoirs, a fast supply of unbleached dye
+ solution is not required. In the studied case,
1166
+ length of microchannel is 15 mm and width is 110 micron (W/L=
1167
+ 0.0073) for waveguide structure II and for waveguide I (W/L=
1168
+ 0.00093). In both cases, longitudinal coupling of light was done in
+ the slit area. The photo bleaching time of the waveguides can be
+ extended to a few minutes without using any costly liquid
+ handling devices or dye replacement. Here, we have used the static
1172
+ phenomenon of liquid waveguides without using the external fluidic
1173
+ handling systems such as syringe pumps. The experimentally
1174
+ observed fluorescence dynamics is in qualitative agreement with
1175
+ the bleaching-diffusion dynamics [17, 18 & 19].
1176
+
1177
+ In microsystems, photo bleaching creates unwanted intensity
+ changes in the output. The quantum yield of photo bleaching and
+ the molar extinction coefficient are inherent properties of
+ Rhodamine-6G and Rhodamine-101. For static measurements
+ inside microstructures, the dominant factors affecting photo
+ bleaching in waveguide structures filled with dilute solutions can be
+ determined by applying Beer's law as [14]:
1184
+
1185
+ Aout = Ain exp(−I0 Qph te)
1186
+
1187
+ Aout is the amount of emitting molecules remaining after photo
1188
+ bleaching; Ain is the original concentration of absorbed dye
1189
+
1305
+ Fig. 10: Studies of concentration variation-based fluorescence emission spectra for multimode waveguide structures I and II
1306
+ for Rh6G and Rh101: (a) Rh6G filled structure I (b) Rh6G filled multimode structure II (c) Rh101 filled structure I (d) Rh101
1307
+ filled multimode structure II.
1308
+ Fig. 11: Photobleaching studies for (a) Rh6G in multimode structure II (b) Rh6G in structure I (c) Rh101 in multimode structure II
1309
+ and (d) Rh101 in waveguide structure I
1310
+
1501
+
1502
+
1503
+
1504
+ molecules, I0 is the incident light irradiance, Qph is the quantum
1505
+ yield of photo bleaching and te is the exposure time. From the
1506
+ above equation, it is clear that quantity of photo bleached
1507
+ molecules inside the solution is exponentially dependent on
1508
+ exposure time and pump intensity. Therefore, even a small increase
1509
+ in time or light intensity results in a substantial increase in the
1510
+ amount of photo bleaching. Our experimental results reveal that
1511
+ these optofluidic waveguides can be operated over a few minutes
1512
+ without needing a flow of fresh dye solution as shown in Fig. (11). In
1513
+ case of Rh-6G solution, photo bleaching time is observed to be 70
1514
+ sec for waveguide I and 180 sec for multimode waveguide structure
1515
+ II, while in case of Rh-101, photo bleaching time is observed to be
1516
+ 90 sec and 225 sec for multimode waveguide structure I and II
1517
+ respectively.
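+ The exponential dependence of bleaching on exposure time and pump intensity described above can be sketched numerically. The rate constants below are arbitrary, chosen only to show the scaling; the model is the Beer's-law expression Aout = Ain exp(−I0 Qph te) from the text:

```python
import math

def remaining_fraction(I0, Q_ph, t_e):
    # Fraction of unbleached fluorophores after exposure, following the
    # Beer's-law model from the text: A_out = A_in * exp(-I0 * Q_ph * t_e).
    # Units here are illustrative, not calibrated to the experiment.
    return math.exp(-I0 * Q_ph * t_e)

# Doubling the exposure time (or the pump intensity) squares the surviving
# fraction, so bleaching grows sharply with either parameter.
f_70s = remaining_fraction(1.0, 0.01, 70.0)
f_140s = remaining_fraction(1.0, 0.01, 140.0)
```

+ This is why even a small increase in exposure time or pump intensity produces a substantial increase in the amount of photo bleaching.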
1518
+ This experiment confirms that the decay time of Rh-101 is slightly
+ greater than that of Rh-6G, in agreement with previous publications
+ [17, 20]. The photo bleaching time can be further improved by a
+ factor of 3 to 4 by adding on-chip reservoirs. Also, by converting the
+ fabricated 2D structures into a 3D chip and using a different
+ pumping scheme, the developed liquid waveguide structure can
+ provide output sufficient for established optofluidic devices and
+ lab-on-chip experiments.
1527
+
1528
+ Conclusion:
1529
+ In conclusion, we have demonstrated novel femtosecond-laser-
+ fabricated liquid-core/air-clad waveguide microstructures on a
+ PDMS microchip. We have studied in detail the role of
+ concentration, photobleaching and incident power on the output of
+ the waveguides.
1533
+ This work provides a good understanding of the interaction of light
+ and fluid at the micro scale. Tunability in the form of
1535
+ intensity, wavelength and linewidth has been successfully obtained.
1536
+ The characteristic of these waveguide sources can be easily
1537
+ controlled and modulated by adjusting the fluid properties of the
1538
+ core medium. After converting these 2D chips into 3D chips and
1539
+ adding some optical component to the same, the liquid waveguide
1540
+ source can be made into a tunable optofluidic laser having a
1541
+ coherent light source that can be integrated with multifunctional
1542
+ lab-on chip systems. In this way, fluorescence measurement and
1543
+ detection by optofluidic devices can provide a powerful platform
1544
+ for analysis of biological systems and aid significantly in medical
1545
+ diagnostics and chemical detection. This research outlines the
+ development and maintenance of highly functional lab-on-chip
+ waveguides which can also be used outside the laboratory for many
+ applications.
1549
+
1550
+ Acknowledgement:
1551
+ We acknowledge the support provided by CMTI Bangalore, India for
1552
+ femtosecond micromachining fabrication facility.
1553
+
1554
+ Reference:
1555
+ 1.
1556
+ B. Helbo, A. Kristensen, and A. Menon, “A micro-cavity
1557
+ fluidic dye laser,” J. Micromech. Microeng, 2003,
1558
+ 13(2),307–311.
1559
+ 2.
1560
+ D. Psaltis, S. R. Quake, and C. Yang, “Developing
1561
+ optofluidic technology through the fusion of microfluidics
1562
+ and optics,” Nature, 2006, 442(7101), 381–386.
1563
+ 3.
1564
+ Z. Li and D. Psaltis, “Optofluidic dye lasers,” Microfluid.
1565
+ Nanofluidics 2008, 4 (1-2), 145–158.
1566
+ 4.
1567
+ Lin Pang,* H. Matthew Chen et al., “Optofluidic devices
1568
+ and applications in photonics, sensing and imaging” Lab
1569
+ on a Chip, 2012, 12, 3543–3551.
1570
+ 5.
1571
+ D. A. Chang-Yen, R. K. Eich, and B. K. Gale, “A monolithic
1572
+ PDMS waveguide system fabricated using soft-lithography
1573
+ techniques,” J. Lightwave Technol., 2005, 23(6), 2088–
1574
+ 2093.
1575
+ 6.
1576
+ Prashanth Reddy Konari et al.,“Experimental Analysis of
1577
+ Laser Micromachining of Microchannels in Common
1578
+ Microfluidic Substrates” Micromachines, 2021, 12, 138.
1579
+ 7.
1580
+ Felix Sima, Koji Sugioka et al., "Three-dimensional femtosecond
+ laser processing for lab-on-a-chip application", Nanophotonics,
+ 2018, 7(3), 613–634.
1587
+ 8.
1588
+ Y. Yan et al., “A tunable 3D optofluidic waveguide dye
1589
+ laser via two centrifugal Dean flow streams”, Lab on a
1590
+ Chip, 2011, 11, 3182.
1591
+ 9.
1592
+ Stijn Vandewiele et al., “Single-mode air-clad liquid-core
1593
+ waveguides on a surface energy patterned substrate”,
1594
+ Optics Letters, 2014, Vol. 39, No. 16.
1595
+ 10. Peng Fei et al., "A compact optofluidic cytometer with
+ integrated liquid-core/PDMS-cladding waveguides", Lab
+ Chip, 2012, 12, 3700–3706.
1598
+ 11. S. K. Mishra et al., "Measurement of Thermo Optical
+ Coefficient for Commonly used Dye Solvents", International
+ Journal of Photonics and Optical Technology, 2018, Vol. 4,
+ Iss. 2, pp. 12-16.
1607
+ 12. Shane M. Eaton, Carmela De Marco, Rebeca Martinez-Vazquez,
+ Roberta Ramponi, Stefano Turri, Giulio Cerullo, Roberto
+ Osellame, "Femtosecond laser microstructuring for polymeric
+ lab-on-chips", Journal of Biophotonics, 2012, 5(8-9).
1617
+ 13. A. Penzkofer, W. Leupacher, "Fluorescence behaviour
+ of highly concentrated rhodamine 6G solutions", Journal of
+ Luminescence, 1987, 37, 61-72.
1620
+ 14. Florian M. Zehentbauer et al., “Fluorescence spectroscopy
1621
+ of Rhodamine 6G: Concentration and solvent effects”,
1622
+ Spectrochimica Acta Part A: Molecular and Biomolecular
+ Spectroscopy, 2014, 121, 147-151.
1624
+ 15. K. Noack, J. Kiefer, A. Leipertz, et al., "Concentration
+ dependent hydrogen bonding effects on the dimethyl
+ sulfoxide vibrational structure in the presence of water,
+ methanol and ethanol", ChemPhysChem, 2010, 11, 630-637.
1629
+ 16. V. I. Gavrilenko, M. A. Noginov, et al., "Ab initio study of
+ optical properties of Rhodamine 6G molecular dimers",
+ Journal of Chemical Physics, 2006, 124, 044301.
1632
+ 17. Morten Gersborg-Hansen et al., “Bleaching and diffusion
1633
+ dynamics in optofluidic dye lasers", Applied Physics
+ Letters, 2007, 90, 143501.
1635
+ 18. Jerker Widengren et al., "Mechanisms of photobleaching
+ investigated by fluorescence correlation spectroscopy",
+ Bioimaging, 1996, 4, 149–157.
1638
+ 19. Mingyu Chapman et al., "Rhodamine 6G Structural
1639
+ Changes in Water/Ethanol Mixed Solvent”, Journal of
1640
+ Fluorescence, 2018, 28:1431–1437.
1641
+ 20. Julien Laverdant et. al. , “Experimental Determination of
1642
+ the Fluorescence Quantum Yield of Semiconductor
1643
+ Nanocrystals”, Materials, 2011, 4, 1182-1193.
1644
+
1645
+
1646
+
J9FIT4oBgHgl3EQfZytY/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
JdA0T4oBgHgl3EQfCf9p/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d180a1f49a684197a37a2bc57788ffec5ee88820d39044028daed2f1140edad8
3
+ size 6225965
JtFJT4oBgHgl3EQfwi0E/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5f4e0826d705b484005e7a0d313f0b13d4e2b062554806193aa4c011d28d8525
3
+ size 5046317
KNA0T4oBgHgl3EQfCv9N/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:43d9046d19b452c65829b6fd922e9bcfe740dcc714a437a1695a1b8eee2a30e8
3
+ size 7340077
L9E1T4oBgHgl3EQfHAM0/content/2301.02920v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4397fc0f169b7d9fd6522e521963142e0319fba5f1a101814c4a98f2abe5a8bf
3
+ size 260261
L9E1T4oBgHgl3EQfHAM0/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:946dc563b6f0d8ff283fd7346d44b5dfd98b76f52ca827f489674649054de826
3
+ size 5832749
L9E1T4oBgHgl3EQfHAM0/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:609f918d85686261924ef9d817a49e728c7ef71963b523765a5ffa5eb84ef7a7
3
+ size 186435
MNE1T4oBgHgl3EQfHAOD/content/tmp_files/2301.02921v1.pdf.txt ADDED
@@ -0,0 +1,1306 @@
 
 
 
 
1
+ arXiv:2301.02921v1 [math.AP] 7 Jan 2023
2
+ Non-local optimized Schwarz method
3
+ with physical boundaries
4
+ X.Claeys1
5
+ 1Sorbonne Université, Laboratoire Jacques-Louis Lions
6
+ Abstract
7
+ We extend the theoretical framework of non-local optimized Schwarz methods as in-
8
+ troduced in [Claeys, 2021], considering a Helmholtz equation posed in a bounded cavity
9
+ supplemented with a variety of conditions modeling material boundaries. The problem is
10
+ reformulated equivalently as an equation posed on the skeleton of a non-overlapping parti-
11
+ tion of the computational domain, involving an operator of the form "identity + contrac-
12
+ tion". The analysis covers the possibility of resonance phenomena where the Helmholtz
13
+ problem is not uniquely solvable. In case of unique solvability, the skeleton formulation
14
+ is proved coercive, and an explicit bound for the coercivity constant is provided in terms
15
+ of the inf-sup constant of the primary Helmholtz boundary value problem.
16
+ Introduction
17
+ Large scale simulation of harmonic wave propagation phenomena remains a challenge in the
18
+ context of which one of the most effective substructuring domain decomposition methods
19
+ (DDM) was introduced by Després [10]. Commonly referred to as Optimized Schwarz Method
20
+ (OSM), it consists in local solves of the wave equation, maintaining a coupling between sub-
21
+ domains through a reformulation of transmission conditions in terms of ingoing and outgoing
22
+ Robin traces. The new transmission conditions involve an exchange operator that swaps traces
23
+ from both sides of each interface between neighboring subdomains. This approach was put
24
+ in a general theoretical framework in [9] and we point to [14] for an overview of this type of
25
+ strategy.
26
+ In a discrete setting, the appropriate definition of the exchange operator raises issues at
27
+ cross-points, where at least three degrees of freedom have to communicate, because it is then
28
+ unclear what should be the discrete counterpart of swapping. Although several heuristics had
29
+ been proposed in the literature for dealing with this situation [12, 13, 19, 11, 1], most strategies
30
+ based on this local swapping operator experienced deteriorated performance in the presence
31
+ of cross points.
32
+ In a series of articles [5, 6, 7, 8], we proposed a variant of OSM where the usual local swap-
33
+ ping exchange operator is replaced by an alternative a priori non-local operator that naturally
34
+ accommodates the presence of cross-points. This new approach can cope with arbitrary sub-
35
+ domain partitions, with a possibly very complicated wire basket. In [5], we analyzed this new
36
+ approach at the continuous level considering a transmission problem posed on the full space
37
38
+
39
+ Rd, and the formulation associated to this new DDM strategy was proved strongly coercive,
40
+ which paved the way to convergence estimates for linear solvers (e.g. Richardson, GMRes).
41
+ This novel approach was adapted to a finite element discretised setting and a full conver-
42
+ gence theory was developed in [8, 6]. In passing, this new theoretical framework covered the
43
+ case of the original Després algorithm hence offering a genuine generalization. The whole the-
44
+ ory was confirmed by numerical results both in 2D and 3D. While the previous developments
45
+ were concerned with scalar harmonic wave propagation, the case of Maxwell’s equations was
46
+ considered in [7, 20].
47
+ In the present contribution we extend the theory of [5] in several directions. First of all, while
48
+ [5] considered only the case of a transmission problem posed on the whole of Rd, we consider
49
+ here the case of a cavity problem posed in a bounded domain Ω ⊂ Rd. This boundary value
50
+ problem takes the form
51
+ div(µ−1∇u) + κ2u = −f in Ω
52
+ + boundary condition on ∂Ω.
53
+ (1)
54
+ Here again we reformulate it as an equation in terms of traces posed on the skeleton of the
55
+ subdomain partition, which we call skeleton formulation. While in previous contributions the
56
+ problem had been assumed uniquely solvable (see e.g. [8, §1] or [6, §1.2]), the analysis is
57
+ here extended so as to cover the case where (1) is not necessarily uniquely solvable which
58
+ covers the case of non-trivial resonance phenomenon. The skeleton formulation is then proved
59
+ uniquely solvable if and only if this holds for (1) and, if this condition is fulfilled, the skeleton
60
+ formulation is proved to be strongly coercive. Although coercivity was already established
61
+ in [5], we provide in addition an explicit estimate of the coercivity constant in terms of the
62
+ inf-sup condition of the primary variational formulation.
63
+ Our whole analysis rests on an interpretation of the properties of (1) in terms of a pair of
64
+ two closed linear manifolds: one that models transmission conditions, and another one that
65
+ models local wave equations. Studying properties of operators by means of pairs of closed
66
+ linear manifolds follows the spirit of [16, iv.4 & iv.5].
67
+ Like [5], the present contribution is purely theoretical. It aims at laying solid analytical
68
+ foundations for a better understanding of the spectral properties of the skeleton formulation,
69
+ which is important in the perspective of devising both computationally efficient eigensolvers
70
+ and domain decomposition preconditionners. We do not provide any numerical experiment.
71
+ Such results shall be presented in a forthcoming contribution that will develop a discrete
72
+ variant of the present analysis, in the spirit of [8, 6].
73
+ The outline of this article is as follows. In the first two sections we introduce general notations
74
+ for both Hilbert analysis and Sobolev spaces, including trace operators, Dirichlet-to-Neumann
75
+ maps and harmonic liftings. Next we describe the problem under study, specifying precisely
76
+ the assumptions underlying our analysis, which allows in particular to deal with a variety
77
+ of boundary conditions. How to apply this framework for common boundary conditions is
78
+ illustrated with examples. Further notations are introduced for dealing with multi-domain
79
+ configurations. This leads in particular to a characterization of transmission conditions based
80
+ on a non-local exchange operator, see Proposition 4.3, which had been an important innovation
81
+ of [5]. We use this multi-domain formalism to re-express the boundary value problem under
82
+ study. The kernel and the range of this operator are then re-interpreted in terms of a pair of
83
+ closed linear manifolds. One manifold models wave equations local to each subdomain, and
84
85
+
86
+ the other one models transmission conditions. Wave equations local to each subdomain are
87
+ then re-expressed by means of a so-called scattering operator, which we use to finally provide a
88
+ formulation involving tuples of Robin traces on the skeleton of the subdomain partition. This
89
+ skeleton formulation is proved to systematically admit closed range, and its kernel is put in
90
+ correspondence with the kernel of the original formulation. Finally we prove strong coercivity
91
+ for the skeleton formulation and derive an estimate for the coercivity constant that is explicit
92
+ with respect to the inf-sup constant of the original variational formulation.
93
+ 1  General notation conventions
95
+ We first set a few general notation conventions regarding analysis in Banach spaces. All vector
96
+ spaces that we are going to consider have C as scalar field. Assuming that H is a Banach
97
+ space equipped with the norm ∥ · ∥H, its topological dual denoted H∗ will systematically be
98
+ equipped with the norm
99
+ ∥ϕ∥H∗ =
100
+ sup
101
+ v∈H\{0}
102
+ |ϕ(v)|
103
+ ∥v∥H
104
+ .
105
+ (2)
106
+ The canonical duality pairing will be systematically denoted ⟨·, ·⟩ : H∗×H → C and defined by
107
+ ⟨ϕ, v⟩ := ϕ(v). Although the space H does not appear explicitly in the notation "⟨ϕ, v⟩", when
108
+ such pairing angle brackets are used, it shall be clear from the context which pair of spaces
109
+ (H, H∗) is under consideration. We emphasize that the duality pairings we consider do not
110
+ involve any complex conjugation. We shall write ⟨v, ϕ⟩ = ⟨ϕ, v⟩ ∀v ∈ H, ϕ ∈ H∗ indifferently.
111
+ For any subset X ⊂ H, we denote its polar set by
112
+ X◦ := {ϕ ∈ H∗, ⟨ϕ, v⟩ = 0 ∀v ∈ X}.
113
+ (3)
114
+ Assuming that V is another Banach space equipped with the norm ∥ · ∥V, and L : H → V is a
115
+ bounded linear map, we shall refer to its inf-sup constant denoted and defined as follows
116
+ infsup_{H→V}(L) := inf_{u ∈ H\{0}} ∥L(u)∥V / ∥u∥H    (4)
124
+ In the case where L is invertible, this inf-sup constant equals the inverse to the continuity
125
+ modulus of L−1. The inf-sup constant is well defined even if L is not invertible though. The
126
+ adjoint to the map L : H → V shall be defined as the unique bounded linear map L∗ : V∗ → H∗
127
+ satisfying
128
+ ⟨L∗(p), u⟩ := ⟨p, L(u)⟩
129
+ (5)
130
+ for all p ∈ V∗ and all u ∈ H. Once again, we insist that no complex conjugation comes into
131
+ play in (5). The bounded linear map L induces another bounded linear map L̄ : H → V
+ defined by L̄(u) := L(ū) for all u ∈ H.
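+ In finite dimension, definition (4) can be checked directly: for a matrix L between Euclidean spaces, the inf-sup constant is the smallest singular value, and for invertible L it equals the inverse of the continuity modulus of L⁻¹. A small sketch (an illustration added here, not part of the paper, which is purely theoretical):

```python
import numpy as np

def infsup(L):
    # Definition (4) for a matrix between Euclidean spaces:
    # inf_{u != 0} ||L u|| / ||u|| = smallest singular value of L.
    return np.linalg.svd(L, compute_uv=False)[-1]

L = np.array([[2.0, 0.0],
              [0.0, 0.5]])

# For invertible L, infsup(L) equals 1 / ||L^{-1}|| (spectral norm).
inv_norm = np.linalg.norm(np.linalg.inv(L), 2)
```

+ The inf-sup constant remains well defined (and equal to zero) for a singular matrix, matching the remark above.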
133
+ A bounded linear operator T : H → H∗ is called self-adjoint if T = T∗ and, in this case we
134
+ have ⟨T(u), u⟩ ∈ R for all u ∈ H. It is called positive definite if ⟨T(u), u⟩ ∈ (0, +∞) for all
135
+ u ∈ H\{0}. If T is both self-adjoint and positive definite, the sesquilinear form u, v ↦ ⟨T(u), v⟩
136
+ induces a scalar product over H and the associated norm is denoted
137
+ ∥u∥T := √⟨T(u), u⟩.    (6)
141
142
+
143
+ We shall also consider cartesian products H1 × · · · × HJ where each Hj is a Banach space
144
+ equipped with the norm ∥ · ∥Hj.
145
+ Then the cartesian product shall be equipped with the
146
+ following canonical norm and duality pairings
147
+ ∥v∥²_{H1×···×HJ} := ∥v1∥²_{H1} + · · · + ∥vJ∥²_{HJ},  ⟨v, q⟩ := ⟨v1, q1⟩ + · · · + ⟨vJ, qJ⟩.    (7)
153
+ for v = (v1, . . . , vJ), vj ∈ Hj, and q = (q1, . . . , qJ), qj ∈ H∗
154
+ j. If Vj, j = 1, . . . , J is another
155
+ collection of Banach spaces and Lj : Hj → Vj are bounded linear maps, we shall also consider
156
+ the block-diagonal operator diag(L1, . . . , LJ), mapping H1 × · · · × HJ into V1 × · · · × VJ and
157
+ defined, for v = (v1, . . . , vJ), and q = (q1, . . . , qJ), by
158
+ ⟨q, diag(L1, . . . , LJ) v⟩ := ⟨q1, L1(v1)⟩ + · · · + ⟨qJ, LJ(vJ)⟩.
159
+ 2  Single domain functional setting
161
+ Now we need to introduce classical function spaces.
162
+ For any Lipschitz open set ω ⊂ Rd,
163
+ we consider L2(ω) := {v : ω → C measurable, ∥v∥²_{L2(ω)} := ∫_ω |v(x)|² dx < +∞} and define
167
+ Sobolev spaces
168
+ H1(ω) := {v ∈ L2(ω), ∇v ∈ L2(ω)d}
169
+ ∥v∥²_{H1(ω)} := ∥∇v∥²_{L2(ω)} + γ−2∥v∥²_{L2(ω)}    (8)
174
+ where γ > 0 is a real positive parameter. Incorporating γ-dependency in the norm will allow
175
+ to establish γ-uniform estimates in the sequel. The space H1_0(ω) will refer to the closure of
177
+ D(ω) := {ϕ ∈ C ∞(Rd), supp(ϕ) ⊂ ω, supp(ϕ) bounded} for ∥ · ∥H1(ω).
178
+ Next we introduce the space of Dirichlet traces H1/2(∂ω) := {v|∂ω, v ∈ H1(Rd)} equipped with
179
+ the quotient norm ∥v∥H1/2(∂ω) := min{∥ϕ∥H1(Rd), ϕ ∈ H1(Rd) and ϕ|∂ω = v}. The topological
180
+ dual to H1/2(∂ω) will be denoted H−1/2(∂ω) = H1/2(∂ω)∗. As detailed for example in [17,
181
+ Thm.3.38], the trace map gives rise to a bounded linear operator
182
+ Bω : H1(ω) → H1/2(∂ω),  Bω(v) := v|∂ω  ∀v ∈ D(Rd).    (9)
186
+ We underline that Bω refers to the trace taken from the interior of ω. The norm (8) gives rise
187
+ to a natural right-inverse of this Dirichlet boundary trace operator. We define the harmonic
188
+ lifting operator B†
189
+ ω : H1/2(∂ω) → H1(ω), see [21, §1.2.2.4], through norm minimization
190
+ Bω · B†ω(v) = v  ∀v ∈ H1/2(∂ω)  and  ∥B†ω(v)∥H1(ω) := min{∥φ∥H1(ω), Bω(φ) = v, φ ∈ H1(ω)}.    (10)
196
+ Denote H1(∆, ω) := {v ∈ H1(Ω), ∆v ∈ L2(Ω)} and let nω refer to the unit normal vector
197
+ field to the boundary ∂ω directed toward the exterior of ω.
198
+ The Dirichlet trace operator
199
+ ϕ ↦ ϕ|∂ω, resp. the Neumann trace operator ϕ ↦ nω · ∇ϕ|∂ω, can be extended by density as
200
+ a bounded linear map H1(ω) → H1/2(∂ω) resp. H1(∆, ω) → H−1/2(∂ω), see e.g. [17, Lem.4.3].
201
202
+
203
+ The Dirichlet-to-Neumann (DtN) map Tω : H1/2(∂ω) → H−1/2(∂ω) is defined as the unique
204
+ bounded linear operator satisfying
205
+ Tω(φ|∂ω) := nω · ∇φ|∂ω  ∀φ ∈ H1(∆, ω) satisfying −∆φ + γ−2φ = 0 in ω.    (11)
210
+ This is a real-valued, self-adjoint operator (T∗ω = Tω) which induces a scalar
+ product over H+1/2(∂ω), and the Neumann-to-Dirichlet map Tω−1 : H−1/2(∂ω) → H+1/2(∂ω)
+ induces a scalar product over H−1/2(∂ω). We set
216
+ ∥v∥²_{Tω} := ⟨Tω(v), v⟩,  ∥p∥²_{Tω−1} := ⟨Tω−1(p), p⟩.    (12)
224
It is a well established fact (see e.g. [21, Def.1.41] or [23, §6.6.3]) that ∥·∥H1/2(∂ω) and ∥·∥H−1/2(∂ω) are equivalent to the norms (12). Applying the Euler equation characterizing the harmonic lifting B†ω(v) as the unique solution to the minimization (10), see e.g. [4, Thm.7.2-1], we have −∆B†ω(v) + γ−2B†ω(v) = 0 in ω, so that Tω(v) = nω · ∇B†ω(v)|∂ω. We also deduce that ∥φ|∂ω∥Tω = ∥B†ω(φ|∂ω)∥H1(ω) ≤ ∥φ∥H1(ω) for all φ ∈ H1(ω) and, in particular, we have the inequalities

∥B†ω(v)∥H1(ω) = ∥v∥Tω   ∀v ∈ H1/2(∂ω),
∥Bω(u)∥Tω ≤ ∥u∥H1(ω)   ∀u ∈ H1(ω).   (13)
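The relations (11) and (13) admit a simple matrix analogue that may help fix ideas: in a discretization, the DtN map is the Schur complement of the energy matrix onto the boundary unknowns, and the harmonic lifting is the minimal-energy extension. The following minimal 1D sketch illustrates this; the mesh, the parameter γ and the piecewise-linear discretization are hypothetical choices made for illustration only.

```python
import numpy as np

# Discrete energy matrix K of the bilinear form  int u'v' + gamma^{-2} u v dx
# on [0,1], assembled from P1 elements (hypothetical illustrative choices).
n, gamma = 50, 1.0
h = 1.0 / n
K = np.zeros((n + 1, n + 1))
for e in range(n):                                   # element-wise assembly
    Ke = (1/h) * np.array([[1., -1.], [-1., 1.]]) \
       + (gamma**-2) * (h/6) * np.array([[2., 1.], [1., 2.]])
    K[np.ix_([e, e+1], [e, e+1])] += Ke

bnd, intr = [0, n], list(range(1, n))                # boundary / interior dofs

# Discrete DtN map T: Schur complement of K onto the boundary dofs
Kib = K[np.ix_(intr, bnd)]
T = K[np.ix_(bnd, bnd)] - Kib.T @ np.linalg.solve(K[np.ix_(intr, intr)], Kib)

def lift(v):
    """Discrete harmonic lifting: the minimal-energy extension of trace v."""
    u = np.empty(n + 1)
    u[bnd] = v
    u[intr] = -np.linalg.solve(K[np.ix_(intr, intr)], Kib @ v)
    return u

v = np.array([1.0, -2.0])
u_min = lift(v)
# Discrete analogue of (13): the lifting realizes the T-norm of its trace...
assert np.isclose(u_min @ K @ u_min, v @ T @ v)
# ...and any other extension w with the same trace has larger energy.
w = u_min + np.sin(np.pi * np.linspace(0.0, 1.0, n + 1))
assert v @ T @ v <= w @ K @ w + 1e-12
print("discrete DtN checks passed")
```

The two assertions mirror, respectively, the equality and the inequality in (13).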
3  Single domain variational formulation
The next step in our analysis will consist in writing Problem (1) in a variational form able to cope with a variety of boundary conditions. This is why we treat the boundary condition by means of an additional Lagrange parameter. Let Ω ⊂ Rd, Γ := ∂Ω refer to an open bounded Lipschitz set and its boundary, and denote

H(Ω × Γ) := H1(Ω) × H−1/2(Γ).

Our analysis will start from a variational formulation of (1), later referred to as the primary formulation, that we write: find u ∈ H(Ω × Γ) such that

AΩ×Γ(u) = ℓΩ×Γ   (14)

where the bilinear map underlying the variational problem is written as a bounded linear operator AΩ×Γ : H(Ω × Γ) → H(Ω × Γ)∗ assumed to systematically take the following form: for any u, v ∈ H1(Ω) and p, q ∈ H−1/2(Γ),

Assumption:   ⟨AΩ×Γ(u, p), (v, q)⟩ := ⟨AΩ(u), v⟩ + ⟨AΓ(u|Γ, p), (v|Γ, q)⟩   (A1)
The map AΩ×Γ involves a volume part AΩ : H1(Ω) → H1(Ω)∗ that accounts for the Helmholtz equation in the interior of the domain Ω. For µ ∈ C and κ : Ω → C an essentially bounded measurable function, it is assumed of the following form

Assumptions:   ⟨AΩ(u), v⟩ := ∫Ω µ−1∇u · ∇v − κ2uv dx,
with ℑm{κ(x)2} ≥ 0 ∀x ∈ Ω,   supx∈Ω|κ(x)| < ∞,   ℜe{µ} > 0, ℑm{µ} ≥ 0.   (A2)
The assumptions above imply in particular that ℑm{⟨AΩ(u), u⟩} ≤ 0 ∀u ∈ H1(Ω). The operator AΩ×Γ also involves a pure boundary part AΓ that models boundary conditions,

AΓ : Hb(Γ) → Hb(Γ)∗   where Hb(Γ) := H1/2(Γ) × H−1/2(Γ).   (15)

The boundary operator AΓ involves traces on Γ and is chosen in accordance with the boundary conditions of our primary boundary value problem (1). We will need to rely on the following additional assumptions

Assumptions:   i) ℑm{⟨AΓ(u), u⟩} ≤ 0   ∀u ∈ Hb(Γ),
ii) range(AΩ×Γ) is closed in H(Ω × Γ)∗.   (A3)
In the remainder of this contribution we will almost systematically take (A1)-(A2)-(A3) as assumptions. We do not require that AΩ×Γ = A∗Ω×Γ. Let us underline that the assumptions above are fulfilled by AΩ, AΓ, AΩ×Γ if and only if they are fulfilled by A∗Ω, A∗Γ, A∗Ω×Γ (recall that adjunction does not involve any complex conjugation here). The last hypothesis in (A3) implies (see e.g. [2, Thm.2.19])

range(AΩ×Γ) = ker(A∗Ω×Γ)◦,   (16)

hence codim(range(AΩ×Γ)) = dim(ker(A∗Ω×Γ)). The source functional in (14) is assumed to take the similar form ⟨ℓΩ×Γ, (v, q)⟩ := ⟨ℓΩ, v⟩ + ⟨ℓΓ, (v|Γ, q)⟩ where ⟨ℓΩ, v⟩ := ∫Ω fv dx for some f ∈ L2(Ω), and ℓΓ ∈ Hb(Γ)∗ = H−1/2(Γ) × H+1/2(Γ) is chosen in accordance with the boundary condition.
Now we consider concrete boundary conditions, exhibit corresponding appropriate choices of AΓ and point out how these situations fit the previous assumptions (A1)-(A2)-(A3). Here and in the following, for the sake of conciseness, we shall adopt the notational convention (see (11))

TΓ := TRd\Ω.
Example 3.1 (Dirichlet boundary condition). In the case of a Dirichlet boundary condition, we set AΓ(α, p) := (p, α) and ℓΓ := (0, g) for some g ∈ H1/2(Γ). We have ℑm{⟨AΓ(u), u⟩} = 0 for all u, which fits i) of (A3). Formulation (14) reduces to a variational formulation of a Helmholtz problem with a Dirichlet condition imposed by means of a Lagrange parameter at the boundary:

u ∈ H1(Ω), p ∈ H−1/2(Γ) such that
∫Ω µ−1∇u · ∇v − κ2uv dx + ∫Γ pv dσ = ∫Ω fv dx   ∀v ∈ H1(Ω),
∫Γ uq dσ = ∫Γ gq dσ   ∀q ∈ H−1/2(Γ).

Whenever there is existence and uniqueness of the solution pair (u, p), then p = −nΩ · ∇u|Γ. Conditions in (A2) guarantee that the volume part of this equation is coercive modulo the compact term attached to κ. Hence the operator associated to this system is of Fredholm type with index 0. In particular it has closed range, which fits ii) of (A3).
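The mechanism of Example 3.1 can be sketched in finite dimensions: the Dirichlet data is enforced through a constraint block paired with a multiplier, producing a saddle-point system. The 1D discretization below (a hypothetical stand-in, with µ = 1, κ = 0 and made-up data g) checks that the multiplier formulation reproduces the classical eliminated Dirichlet solve.

```python
import numpy as np

# Hypothetical 1D sketch: impose u(0)=g0, u(1)=g1 for -u'' = f through a
# Lagrange multiplier p attached to the two boundary points.
n = 64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

# Discrete energy matrix of  int u'v' dx
K = np.zeros((n + 1, n + 1))
for e in range(n):
    K[np.ix_([e, e+1], [e, e+1])] += (1/h) * np.array([[1., -1.], [-1., 1.]])

f = np.sin(np.pi * x)                          # source term
F = h * f.copy(); F[0] *= 0.5; F[-1] *= 0.5    # lumped load vector
g = np.array([0.3, -0.1])                      # Dirichlet data (g0, g1)

# Constraint operator: picks the boundary values of u
C = np.zeros((2, n + 1)); C[0, 0] = 1.0; C[1, -1] = 1.0

# Saddle-point system  [K  C^T; C  0] (u, p) = (F, g)
Z = np.block([[K, C.T], [C, np.zeros((2, 2))]])
sol = np.linalg.solve(Z, np.concatenate([F, g]))
u, p = sol[:n + 1], sol[n + 1:]

# Reference: classical elimination of the Dirichlet dofs
Kii = K[1:-1, 1:-1]
rhs = F[1:-1] - K[1:-1, [0, -1]] @ g
u_ref = np.concatenate([[g[0]], np.linalg.solve(Kii, rhs), [g[1]]])

assert np.allclose(u, u_ref, atol=1e-8)
print("multiplier-based and eliminated Dirichlet solves agree")
```

Here the multiplier p carries the (discrete counterpart of the) boundary flux, in line with p = −nΩ · ∇u|Γ above.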
Example 3.2 (Neumann boundary condition). In the case of Neumann conditions, the boundary data is g ∈ H−1/2(Γ) and we choose AΓ(α, p) := (0, T−1Γ p) and ℓΓ := (g, 0). Again we have ℑm{⟨AΓ(u), u⟩} = 0 for all u, so this choice also matches i) of (A3). The primary formulation (14) writes

u ∈ H1(Ω), p ∈ H−1/2(Γ) such that
∫Ω µ−1∇u · ∇v − κ2uv dx = ∫Ω fv dx + ∫Γ gv dσ   ∀v ∈ H1(Ω),
∫Γ q T−1Γ p dσ = 0   ∀q ∈ H−1/2(Γ),   (17)

where u is decoupled from p. Actually we have in particular p = 0, and this variable is not supposed to receive any particular interpretation. Since T−1Γ : H−1/2(Γ) → H1/2(Γ) is an isomorphism, the operator AΩ×Γ associated to (17) is of Fredholm type with index 0.
Example 3.3 (Robin boundary condition). Consider a bounded linear map Λ : H1/2(Γ) → H−1/2(Γ) that satisfies ℜe{⟨Λ(v), v⟩} > 0 ∀v ∈ H1/2(Γ)\{0} (as a typical example: Λ(v) = λv with λ > 0). In this case again the boundary data is g ∈ H−1/2(Γ) and we choose AΓ(α, p) := (−iΛα, T−1Γ p) and ℓΓ := (g, 0). This choice of AΓ corresponds to the boundary condition nΩ · ∇u|Γ − iΛ(u) = g on Γ. Formulation (14) writes

u ∈ H1(Ω), p ∈ H−1/2(Γ) such that
∫Ω µ−1∇u · ∇v − κ2uv dx − i∫Γ vΛ(u) dσ = ∫Ω fv dx + ∫Γ gv dσ   ∀v ∈ H1(Ω),
∫Γ q T−1Γ p dσ = 0   ∀q ∈ H−1/2(Γ),

which is a variant of (17) involving −i∫Γ vΛ(u) dσ as an additional term. Again p is decoupled from the rest of the system and p = 0. Again the operator AΩ×Γ associated to this system is of Fredholm type with index 0.
4  Multi-domain functional setting
The boundary value problem (1) has been reformulated as an equivalent global variational problem with (14). As we aim at extending an analytical framework for domain decomposition by substructuring though, we are going to reshape Formulation (14), adapting it to a multi-domain geometrical configuration. For this, we need to introduce notations adapted to domain decomposition. Consider a decomposition into a collection of non-overlapping Lipschitz open sets Ωj ⊂ Rd, j = 1, . . . , J, that satisfy

Ω̄ = Ω̄1 ∪ · · · ∪ Ω̄J,   with Ωj ∩ Ωk = ∅ for j ̸= k.   (18)
Such a decomposition may very well admit a non-trivial wire-basket, i.e. the set of cross points is non-empty, and we wish to underline that this situation is covered by the subsequent analysis. We shall refer to the skeleton of the decomposition by

Σ := ∂Ω1 ∪ · · · ∪ ∂ΩJ.   (19)
Note that Γ = ∂Ω ⊂ Σ. We need to introduce notations for function spaces adapted to this multi-domain setting. In this context, cartesian product spaces are probably the most natural, so we set

Hb(Γ) := H1/2(Γ) × H−1/2(Γ),
H(Ω) := Hb(Γ) × H1(Ω1) × · · · × H1(ΩJ),
H(Σ) := H1/2(Γ) × H1/2(∂Ω1) × · · · × H1/2(∂ΩJ).   (20)
As cartesian products, these spaces are equipped with norms and duality pairings given by (7). Apart from the boundary terms attached to Hb(Γ), the space H(Ω) should be understood as functions defined over Ω, admitting potential jumps through interfaces. The space H(Σ) consists in tuples of Dirichlet traces. Its dual is

H(Σ)∗ = H−1/2(Γ) × H−1/2(∂Ω1) × · · · × H−1/2(∂ΩJ).
We need to introduce several operators acting in these spaces. First we shall consider the operator T : H(Σ) → H(Σ)∗ defined as the block diagonal operator acting locally in each subdomain

T := diag(TΓ, TΩ1, . . . , TΩJ)   where TΓ := TRd\Ω   (21)
and each TΩj is defined with (11). The norms ∥·∥T and ∥·∥T−1 defined by (6) and (21) are equivalent to ∥·∥H(Σ) and ∥·∥H(Σ)∗, which stems from the analogous property being satisfied locally by each TΩj. These norms will play an important role in the subsequent analysis. Next we introduce a boundary trace operator B : H(Ω) → H(Σ) defined by

B := diag(BΓ, BΩ1, . . . , BΩJ)   where BΓ(α, p) := α   (22)
and each BΩj is the Dirichlet trace operator interior to subdomain Ωj as defined in (9). By definition of T we have ∥B(u)∥T ≤ ∥u∥H(Ω) for all u ∈ H(Ω), since a similar inequality is satisfied in each subdomain locally according to (13). We can also form a multi-domain harmonic lifting map B† : H(Σ) → H(Ω) defined as the block-diagonal operator

B† = diag(B†Γ, B†Ω1, . . . , B†ΩJ)   where B†Γ(α) := (α, 0)   (23)
and each B†Ωj as defined in (10). With this definition we have BB† = Id and B†B is an orthogonal projector in H(Ω). Finally we also need to consider a restriction operator R : H(Ω × Γ) → H(Ω) that embeds pairs (u, p) ∈ H(Ω × Γ) = H1(Ω) × H−1/2(Γ) into the cartesian product H(Ω) by restricting locally to each subdomain

R(u, p) := ((u|Γ, p), u|Ω1, . . . , u|ΩJ)   for u ∈ H1(Ω), p ∈ H−1/2(Γ).   (24)
The image of this operator, range(R) = R(H(Ω × Γ)), is a particular subspace of H(Ω) spanned by tuples of functions that match through interfaces. This matching property is precisely what characterizes Dirichlet transmission conditions through the interfaces of the decomposition (18). This is why we dedicate notations to it:

X(Ω) := {R(u, p), u ∈ H1(Ω), p ∈ H−1/2(Γ)},
X(Σ) := {B(u), u ∈ X(Ω)},
X(Σ)◦ := {p ∈ H(Σ)∗, ⟨p, v⟩ = 0 ∀v ∈ X(Σ)}.   (25)
A rapid inspection of the previous definitions shows that X(Σ) = {(u|Γ, u|∂Ω1, . . . , u|∂ΩJ), u ∈ H1(Ω)}, i.e. these are the tuples of Dirichlet traces that match through interfaces. The space X(Σ) (resp. X(Ω)) is a closed subspace of H(Σ) (resp. H(Ω)) that encodes the Dirichlet transmission conditions through interfaces, while X(Σ)◦ is a closed subspace of H(Σ)∗ that encodes the Neumann transmission conditions. Indeed, considering restriction to interfaces in the sense of distributions,

(v0, . . . , vJ) ∈ X(Σ) =⇒ vj = +vk on Γj ∩ Γk,
(p0, . . . , pJ) ∈ X(Σ)◦ =⇒ pj = −pk on Γj ∩ Γk.   (26)
It is clear from these definitions that X(Ω) = {u ∈ H(Ω), B(u) ∈ X(Σ)}. In particular ker(B) ⊂ X(Ω). Recall the definition of polar sets given by (3). The following lemma is a continuous counterpart to [6, Lem.2.1].

Lemma 4.1.
i) ker(B)◦ = range(B∗)
ii) ker(B∗) = {0}
iii) X(Ω) = B−1(X(Σ))
iv) X(Ω)◦ = B∗(X(Σ)◦)
Proof:
The first and second results are direct consequences of the surjectivity of the trace map B : H(Ω) → H(Σ) combined with Theorems 4.7, 4.12 and 4.15 of [22]. The third result is a rephrasing of X(Ω) = {u ∈ H(Ω), B(u) ∈ X(Σ)} in condensed form. To prove the last result, first observe that B∗(X(Σ)◦) ⊂ X(Ω)◦ by routine verifications.
Now pick an arbitrary p ∈ X(Ω)◦. Since ker(B) ⊂ X(Ω) ⇒ X(Ω)◦ ⊂ ker(B)◦ = range(B∗), there exists q ∈ H(Σ)∗ such that p = B∗q. For any v ∈ X(Σ), there exists u ∈ X(Ω) such that v = B(u), which implies that ⟨q, v⟩ = ⟨p, u⟩ = 0. From this we conclude that q ∈ X(Σ)◦, hence p ∈ B∗(X(Σ)◦), which proves X(Ω)◦ ⊂ B∗(X(Σ)◦). □
In Item iii) of the lemma above, B−1(X(Σ)) = {u ∈ H(Ω), B(u) ∈ X(Σ)} refers to a pre-image (the operator B is obviously non-invertible, i.e. ker(B) ̸= {0}). The following orthogonal decomposition was established in [17, Prop.4.2].
Proposition 4.2.
We have H(Σ)∗ = X(Σ)◦ ⊕ T(X(Σ)) and this decomposition is T−1-orthogonal.

The orthogonal decomposition of the previous result can be used to elaborate a characterization of transmission conditions. The following result was established in [17, Prop.5.4].
Proposition 4.3.
Let Q : H(Σ)∗ → H(Σ)∗ refer to the T−1-orthogonal projection onto T(X(Σ)). Then the operator Π := 2Q − Id is a T−1-isometric involution, i.e. Π2 = Id and ∥Π(q)∥T−1 = ∥q∥T−1 for all q ∈ H(Σ)∗. Moreover, for any pair (u, p) ∈ H(Σ) × H(Σ)∗, we have

(u, p) ∈ X(Σ) × X(Σ)◦   ⇐⇒   −p + iT(u) = Π(p + iT(u)).   (27)

The characterization above relies on an exchange operator Π which is characteristic of Optimized Schwarz Methods (OSM, see e.g. [1, Eq.37]) and ultra-weak variational formulations (UWVF), see e.g. [3, Eq.1.19]. An explicit expression of this operator in terms of double layer potentials attached to the equation −∆ + γ−2 was provided in [5, §5.2].
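The content of Propositions 4.2 and 4.3 can be checked mechanically in a finite-dimensional analogue. In the sketch below all matrices are hypothetical stand-ins: T is an SPD matrix playing the role of the DtN operator, the columns of V span the single-valued trace subspace X(Σ), and Q is the T−1-orthogonal projection onto T(X(Σ)).

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3
M = rng.standard_normal((n, n))
T = M @ M.T + n * np.eye(n)              # SPD stand-in for the DtN map
V = rng.standard_normal((n, k))          # columns span X(Sigma)

# T^{-1}-orthogonal projection of H(Sigma)* onto T(X(Sigma))
Q = T @ V @ np.linalg.solve(V.T @ T @ V, V.T)
Pi = 2 * Q - np.eye(n)                   # exchange operator of Prop. 4.3

def nrm_Tinv(q):                         # the T^{-1} norm on H(Sigma)*
    return np.sqrt(np.real(np.conj(q) @ np.linalg.solve(T, q)))

q = rng.standard_normal(n) + 1j * rng.standard_normal(n)
assert np.allclose(Pi @ Pi, np.eye(n))               # involution
assert np.isclose(nrm_Tinv(Pi @ q), nrm_Tinv(q))     # T^{-1}-isometry

# Characterization (27): u in X(Sigma), p in X(Sigma)^o (i.e. V^T p = 0)
u = V @ rng.standard_normal(k)
p = rng.standard_normal(n)
p -= V @ np.linalg.solve(V.T @ V, V.T @ p)           # enforce V^T p = 0
assert np.allclose(-p + 1j * T @ u, Pi @ (p + 1j * T @ u))
print("exchange-operator checks passed")
```

The last assertion is exactly the forward implication of (27) for this toy model.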
5  Multi-domain variational formulation
Using the notations introduced in the previous sections, we now rewrite the primary formulation (14), decomposing it according to the subdomain partition (18). Pick u, v arbitrarily in H1(Ω) and expand the integral coming into play in the definition (A2) of AΩ. This leads to

⟨AΩu, v⟩ = ⟨AΩ1(u|Ω1), v|Ω1⟩ + · · · + ⟨AΩJ(u|ΩJ), v|ΩJ⟩
with ⟨AΩj u, v⟩ := ∫Ωj µ−1∇u · ∇v − κ2uv dx.   (28)
In the expression above, only u|Ωj, v|Ωj ∈ H1(Ωj) come into play in the term attached to Ωj. The source term in (14) can be decomposed in a similar manner: ℓΩ(v) = ℓΩ1(v|Ω1) + · · · + ℓΩJ(v|ΩJ). The above decompositions lead to introducing a block-diagonal operator A : H(Ω) → H(Ω)∗ associated to these local bilinear forms, i.e. defined by

A := diag(AΓ, AΩ1, . . . , AΩJ)   so that AΩ×Γ = R∗AR.   (29)
We have factorized the operator of our primary boundary value problem AΩ×Γ, and this factorization is interesting from the perspective of domain decomposition because local subproblems are disconnected from one another in A. The following property is inherited from the assumptions we made in §3 about AΩ×Γ, µ, κ and AΓ:

ℑm{⟨A(u), u⟩} ≤ 0   ∀u ∈ H(Ω).   (30)
We also need a unique solvability property for local problems with impedance boundary conditions. Because we do not make many specific assumptions regarding the boundary operator AΓ, we take this further property as an assumption:

Assumption:   A − iB∗TB : H(Ω) → H(Ω)∗ is an isomorphism.   (A4)
A notable consequence of (A2), (A3) and (A4) is that ker(A) ∩ ker(B) = {0}. Since A, T and B are subdomain-wise block-diagonal, the assumption above is actually equivalent to imposing that each AΩj − iB∗ΩjTΩjBΩj : H1(Ωj) → H1(Ωj)∗ and AΓ − iB∗ΓTΓBΓ : Hb(Γ) → Hb(Γ)∗ are isomorphisms. These conditions are fulfilled in many concrete circumstances. As regards interior contributions, for example, we have the following simple consequence of the unique continuation principle.
Lemma 5.1.
Assume (A1)-(A2) and that µ, κ are constants (i.e. do not depend on x). Then for any j = 1, . . . , J the operator AΩj − iB∗ΩjTΩjBΩj : H1(Ωj) → H1(Ωj)∗ is an isomorphism.
Proof:
Let us denote ω = Ωj for the sake of conciseness. According to (A2), there exists α > 0 such that

α∥u∥2H1(ω) ≤ ℜe{⟨Ãω(u), u⟩}   ∀u ∈ H1(ω),
⟨Ãω(u), v⟩ := ⟨(Aω − iB∗ωTωBω)u, v⟩ + ∫ω (1 + κ2)uv dx.

Applying Lax-Milgram's lemma, we see that the operator Ãω : H1(ω) → H1(ω)∗ is an isomorphism hence, since it differs by a compact perturbation, that Aω − iB∗ωTωBω is of Fredholm type with index 0, see e.g. [17, Chap.2]. There only remains to prove that ker(Aω − iB∗ωTωBω) = {0}. Pick any u ∈ H1(ω) such that (Aω − iB∗ωTωBω)u = 0. Then we have

∥Bω(u)∥2Tω ≤ −ℑm{⟨(Aω − iB∗ωTωBω)u, u⟩} = 0.

From this we conclude that u|∂ω = Bω(u) = 0, hence Aω(u) = 0. On the other hand Aω(u) = 0 ⇒ nω · ∇u|∂ω = 0. There only remains to apply the unique continuation principle, see e.g. Lemma 2.2 in [24], to conclude that u = 0 in ω. □
Regarding classical boundary conditions and the associated choice of AΓ, we can also examine the invertibility of AΓ − iB∗ΓTΓBΓ.
Example 5.2 (Dirichlet condition). Taking the same notations as in Example 3.1, in this situation we have the expression (AΓ − iB∗ΓTΓBΓ)(α, p) = (p − iTΓα, α). We conclude that AΓ − iB∗ΓTΓBΓ is continuously invertible with

(AΓ − iB∗ΓTΓBΓ)−1(p, α) = (α, p + iTΓα).
Example 5.3 (Neumann condition). Taking the same notations as in Example 3.2, we have (AΓ − iB∗ΓTΓBΓ)(α, p) = (−iTΓα, T−1Γ p). We conclude that AΓ − iB∗ΓTΓBΓ is continuously invertible with

(AΓ − iB∗ΓTΓBΓ)−1(p, α) = (iT−1Γ p, TΓα).
Example 5.4 (Robin condition). Taking the same notations as in Example 3.3, we have (AΓ − iB∗ΓTΓBΓ)(α, p) = (−i(Λ + TΓ)α, T−1Γ p). Because ℜe{⟨Λ(α), α⟩} > 0 for all α ∈ H1/2(Γ), we see that Λ + TΓ is coercive, hence invertible, and AΓ − iB∗ΓTΓBΓ is then continuously invertible with

(AΓ − iB∗ΓTΓBΓ)−1(p, α) = (i(Λ + TΓ)−1p, TΓα).
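These inversion formulas are purely algebraic and can be sanity-checked on a finite-dimensional model; the sketch below does so for Example 5.2, with an arbitrary SPD matrix T standing in for the exterior DtN map TΓ (analogous checks apply to Examples 5.3 and 5.4).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
M = rng.standard_normal((n, n))
T = M @ M.T + n * np.eye(n)              # SPD stand-in for T_Gamma

def AG(alpha, p):
    """(A_Gamma - i B_Gamma* T B_Gamma)(alpha, p) for the Dirichlet choice."""
    return (p - 1j * T @ alpha, alpha)

def AG_inv(p, alpha):
    """Claimed inverse from Example 5.2."""
    return (alpha, p + 1j * T @ alpha)

alpha = rng.standard_normal(n) + 1j * rng.standard_normal(n)
p = rng.standard_normal(n) + 1j * rng.standard_normal(n)
q, beta = AG(alpha, p)
a2, p2 = AG_inv(q, beta)
assert np.allclose(a2, alpha) and np.allclose(p2, p)
print("Example 5.2 inversion formula verified")
```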
Similarly to what precedes, define ℓ ∈ H(Ω)∗ by ⟨ℓ, v⟩ = ℓΓ(v0, q) + ℓΩ1(v1) + · · · + ℓΩJ(vJ) for v = ((v0, q), v1, . . . , vJ) ∈ H(Ω), and we have ℓΩ×Γ = R∗ℓ. The primary variational problem (14) can then be rewritten by means of A as follows: find u ∈ H(Ω × Γ) such that ⟨AR(u), R(v)⟩ = ⟨ℓ, R(v)⟩ for all v ∈ H(Ω × Γ). Making use of the definition of X(Ω) as the image of R, see (25), this also rewrites

u ∈ X(Ω) and ⟨A(u), v⟩ = ⟨ℓ, v⟩ ∀v ∈ X(Ω).   (31)
6  Closed linear manifolds interpretation
Formulation (14), which is the starting point of this study, is not assumed to be a priori uniquely solvable. The kernel of AΩ×Γ might be non-trivial. In many relevant applications though, it is of Fredholm type, and this is why we are interested in studying how this Fredholmness carries over to the multi-domain context. For this we are going to consider the skew-symmetric bilinear form [·, ·] : (H(Σ) × H(Σ)∗)2 → C defined by

[(u, p), (v, q)] := ⟨u, q⟩ − ⟨v, p⟩,   u, v ∈ H(Σ), p, q ∈ H(Σ)∗.   (32)
This form is obviously non-degenerate and can be used as a duality pairing over the space of tuples of Dirichlet-Neumann pairs of traces. Indeed denote

H (Σ) := H(Σ) × H(Σ)∗   with norm ∥(v, q)∥2T×T−1 := ∥v∥2T + ∥q∥2T−1;

then for any ϕ ∈ H (Σ)∗, there exists a unique u ∈ H (Σ) such that [u, v] = ϕ(v) ∀v ∈ H (Σ).
In other words, the pairing (32) puts H (Σ) in self-duality. We now introduce the subspace of so-called Cauchy data that directly relates to the boundary value problem under study:

C (A) := {(B(u), p) | (u, p) ∈ H(Ω) × H(Σ)∗, Au = B∗p}.   (33)

It must be understood as the space of tuples of Dirichlet-Neumann trace pairs stemming from solutions to the problems local to each subdomain. If A : H(Ω) → H(Ω)∗ is an isomorphism, we can define the associated Neumann-to-Dirichlet operator NtDA := BA−1B∗, and then C (A) = {(NtDA(p), p) | p ∈ H(Σ)∗} appears to be its graph. On the other hand C (A) is properly defined even if A fails to be invertible.
Lemma 6.1.
Assume (A1)-(A2)-(A3)-(A4). The application (v, p) ↦ p − iT(v) continuously and isomorphically maps C (A) onto H(Σ)∗ and, for all (v, p) ∈ C (A), satisfies the estimates

∥v∥2T + ∥p∥2T−1 ≤ ∥p − iTv∥2T−1,
(1/2)∥p − iTv∥2T−1 ≤ ∥v∥2T + ∥p∥2T−1.
Proof:
It suffices to prove surjectivity and the estimates. To prove surjectivity, pick an arbitrary q ∈ H(Σ)∗ and define u = (A − iB∗TB)−1B∗q. The pair (v, p) = (B(u), q + iTB(u)) satisfies Au = B∗p, so that (v, p) ∈ C (A) and, by construction, we have p − iTv = q.
To prove the estimates, pick an arbitrary pair (v, p) ∈ C (A). According to (33) there exists u ∈ H(Ω) such that B(u) = v and A(u) = B∗(p), hence ⟨p, v⟩ = ⟨p, B(u)⟩ = ⟨B∗(p), u⟩ = ⟨A(u), u⟩. Taking account of (30), we deduce 0 ≤ ℜe{i⟨p, v⟩} ≤ ∥v∥2T + ∥p∥2T−1 and conclude

0 ≤ ∥p − iTv∥2T−1 − (∥v∥2T + ∥p∥2T−1) ≤ ∥v∥2T + ∥p∥2T−1. □
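Both estimates of Lemma 6.1 survive in a finite-dimensional analogue where the duality products are replaced by sesquilinear pairings. In the sketch below, all matrices are hypothetical stand-ins: B is a surjective "trace" matrix, T an SPD stand-in for the DtN map, and A a dissipative matrix mimicking property (30).

```python
import numpy as np

rng = np.random.default_rng(1)
nv, nt = 10, 4                           # volume dofs / trace dofs
B = rng.standard_normal((nt, nv))        # surjective trace operator
Bs = B.conj().T
M = rng.standard_normal((nt, nt))
T = M @ M.T + nt * np.eye(nt)            # SPD stand-in for the DtN map
A0 = rng.standard_normal((nv, nv)); A0 = A0 + A0.T
C = rng.standard_normal((nv, nv)); C = C @ C.T + np.eye(nv)
A = A0 - 1j * C                          # Im <Au,u> = -u*Cu <= 0, cf. (30)

Tinv = np.linalg.inv(T)
def nrm2(W, x):                          # squared W-norm
    return np.real(np.conj(x) @ W @ x)

# Build a Cauchy pair (v, p) as in the surjectivity argument: solve the
# impedance problem (A - iB*TB)u = B*q, then set p = q + iTBu.
q = rng.standard_normal(nt) + 1j * rng.standard_normal(nt)
u = np.linalg.solve(A - 1j * Bs @ T @ B, Bs @ q)
p = q + 1j * T @ (B @ u)
v = B @ u
assert np.allclose(A @ u, Bs @ p)        # (v, p) belongs to C(A)

lhs = nrm2(T, v) + nrm2(Tinv, p)
mid = nrm2(Tinv, p - 1j * T @ v)
assert lhs <= mid + 1e-9                 # first estimate of Lemma 6.1
assert 0.5 * mid <= lhs + 1e-9           # second estimate of Lemma 6.1
print("Cauchy-data estimates verified")
```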
In the previous lemma, the space of Cauchy data has been proven boundedly isomorphic to a Hilbert space and, as such, is closed.
Corollary 6.2.
Assume (A1)-(A2)-(A3)-(A4). The subspace C (A) is closed in H (Σ).

The space of Cauchy data can be complemented in various ways. The next proposition exhibits one possibility.
Proposition 6.3.
Assume (A1)-(A2)-(A3)-(A4). Define G (iT) := {(v, iT(v)), v ∈ H(Σ)}. Then

H (Σ) = C (A) ⊕ G (iT).
Proof:
First of all, assume that (u, p) ∈ C (A) ∩ G (iT). This means that there exists v ∈ H(Ω) such that Av = B∗p and Bv = u, and that p = iTu. Combining these equations yields (A − iB∗TB)v = 0, hence v = 0 according to (A4), and finally (u, p) = 0. We have proved that C (A) ∩ G (iT) = {0}.
Now take an arbitrary (u, p) ∈ H(Σ) × H(Σ)∗. Since B : H(Ω) → H(Σ) is surjective, there exists w ∈ H(Ω) such that B(w) = u. Define v ∈ H(Ω) by v = (A − iB∗TB)−1(Aw − B∗p), which is a valid definition since A − iB∗TB : H(Ω) → H(Ω)∗ is an isomorphism according to (A4). We have in particular A(w − v) = B∗(p − iTBv). Set

u1 = B(v),   p1 = iTu1 = iTB(v),
u2 = B(w − v) = u − u1,   p2 = p − iTBv = p − p1.   (34)

By construction we have (u1, p1) ∈ G (iT). Moreover B(w − v) = u2 and A(w − v) = B∗p2, so that (u2, p2) ∈ C (A). Finally, the second line in (34) indicates that (u, p) = (u1, p1) + (u2, p2), which thus proves (u, p) ∈ C (A) + G (iT). We have just established that C (A) + G (iT) = H (Σ), which ends the proof. □
The space G (iT) is simply the graph of the (bounded) operator iT : H(Σ) → H(Σ)∗. In the present analysis, it plays a secondary role and shall be used only to prove results about C (A). We have the following immediate result.

Lemma 6.4.
Define G (iT)♯ := {u ∈ H (Σ), [u, v] = 0 ∀v ∈ G (iT)}. Then G (iT)♯ = G (iT).

The proof is straightforward. This result means that G (iT) is its own polar set under the pairing [·, ·]. As we see now, the space C (A) fulfills a similar property.
Proposition 6.5.
Assume (A1)-(A2)-(A3)-(A4). Define C (A)♯ := {u ∈ H (Σ), [u, v] = 0 ∀v ∈ C (A)}. Then C (A)♯ = C (A∗).
Proof:
First of all we have C (A∗) ⊂ C (A)♯. Indeed take any (u, p) ∈ C (A). By definition, there exists w ∈ H(Ω) such that B(w) = u and Aw = B∗p. Then for any (u′, p′) ∈ C (A∗), since B(w′) = u′ and A∗w′ = B∗p′ for some w′ ∈ H(Ω), we have

[(u, p), (u′, p′)] = ⟨u, p′⟩ − ⟨u′, p⟩ = ⟨B(w), p′⟩ − ⟨B(w′), p⟩
= ⟨w, B∗(p′)⟩ − ⟨w′, B∗(p)⟩
= ⟨w, A∗(w′)⟩ − ⟨w′, A(w)⟩ = 0.

Hence, to finish the proof, we need to show that C (A)♯ ⊂ C (A∗). For that, pick an arbitrary u = (u, p) ∈ C (A)♯. The hypotheses of Section 3 hold for A∗Ω×Γ instead of AΩ×Γ, hence we can apply Proposition 6.3 to A∗. This yields a decomposition u = u1 + u2 for some u1 ∈ C (A∗) and some u2 ∈ G (iT). We have to prove that u2 = 0. By assumption we have

0 = [u, v] = [u1, v] + [u2, v] = [u2, v]   ∀v ∈ C (A),

since C (A) ⊂ C (A∗)♯. Next Lemma 6.4 implies that 0 = [u2, v] = [u2, v + v′] for all v ∈ C (A) and all v′ ∈ G (iT). Since C (A) ⊕ G (iT) = H (Σ) according to Proposition 6.3, we conclude that 0 = [u2, w] ∀w ∈ H (Σ), hence finally u2 = 0. This shows that u = u1 ∈ C (A∗). We have just established that C (A)♯ ⊂ C (A∗). □
We point out that, because C (A) is closed, the previous result also implies that C (A) = C (A∗)♯. Self-polarity appears to be a property of the following subspace (see Proposition 4.3) that is pivotal in characterizing transmission conditions:

X (Σ) := X(Σ) × X(Σ)◦.

Indeed we have X (Σ) = X (Σ)♯ := {u ∈ H (Σ), [u, v] = 0 ∀v ∈ X (Σ)} by the very definition of X (Σ), as X(Σ)◦◦ = X(Σ) since X(Σ) is a closed subspace of H(Σ) (see e.g. [22, Thm.4.7] or [2, Prop.1.9]). The next result establishes an important connection between the two spaces C (A), X (Σ) and our primary boundary value problem (14).
Proposition 6.6.
Assume (A1)-(A2)-(A3)-(A4). The operator u ↦ (BR(u), (B†)∗AR(u)) continuously and isomorphically maps ker(AΩ×Γ) onto C (A) ∩ X (Σ). As a consequence

dim(ker(AΩ×Γ)) = dim(C (A) ∩ X (Σ)).
Proof:
Let u ∈ H(Ω × Γ) satisfy AΩ×Γ(u) = 0. In particular R(u) ∈ X(Ω) and AR(u) ∈ X(Ω)◦, see (24) and (31). According to iv) of Lemma 4.1, there exists p ∈ X(Σ)◦ such that AR(u) = B∗p, and it is unique since B∗ : H(Σ)∗ → H(Ω)∗ is injective. We have

(B†)∗AR(u) = (B†)∗B∗p = (BB†)∗p = p.

Setting v := B · R(u), by construction (v, p) ∈ C (A). We also have v ∈ X(Σ) since R(u) ∈ X(Ω), so that (v, p) ∈ X(Σ) × X(Σ)◦ = X (Σ). In addition, the formula (v, p) = (BRu, (B†)∗ARu) establishes the continuous dependency of (v, p) on u.
Reciprocally, consider an arbitrary pair (v, p) ∈ C (A) ∩ X (Σ). Since (v, p) ∈ C (A), there exists w ∈ H(Ω) such that Aw = B∗p and B(w) = v, and such a w is unique since ker(A) ∩ ker(B) = {0} according to (A4). As v ∈ X(Σ), we have w ∈ X(Ω) = B−1(X(Σ)) according to iii) of Lemma 4.1, so there exists u ∈ H(Ω × Γ) such that R(u) = w, and such a u is unique due to the injectivity of R : H(Ω × Γ) → H(Ω). This leads to AR(u) = B∗p, and p ∈ X(Σ)◦ ⇒ B∗p ∈ X(Ω)◦ = ker(R∗). Since X(Ω) = R(H(Ω × Γ)), we conclude that 0 = R∗AR(u) = AΩ×Γ(u). □
Lemma 6.7.
Assume (A1)-(A2)-(A3)-(A4). The operator (u, p) ↦ R∗(B∗p − AB†u) continuously maps (C (A∗) ∩ X (Σ))♯ into range(AΩ×Γ).
Proof:
Take an arbitrary (u, p) ∈ (C (A∗) ∩ X (Σ))♯ and set f = R∗(B∗p − AB†u). Applying Proposition 6.6 to A∗Ω×Γ instead of AΩ×Γ shows that ϕ ∈ ker(A∗Ω×Γ) ⇒ (v, q) = (BR(ϕ), (B†)∗A∗R(ϕ)) ∈ C (A∗) ∩ X (Σ). Hence ⟨f, ϕ⟩ = ⟨R∗(B∗p − AB†u), ϕ⟩ = ⟨p, BRϕ⟩ − ⟨u, (B†)∗A∗Rϕ⟩ = [(v, q), (u, p)] = 0. This proves f ∈ ker(A∗Ω×Γ)◦ = range(AΩ×Γ) according to (16). □
Proposition 6.8.
Assume (A1)-(A2)-(A3)-(A4). Then C (A) + X (Σ) = (C (A∗) ∩ X (Σ))♯. In particular the subspace C (A) + X (Σ) is closed in H (Σ).
Proof:
Clearly we have C (A) + X (Σ) ⊂ (C (A∗) ∩ X (Σ))♯, so we only need to establish that (C (A∗) ∩ X (Σ))♯ ⊂ C (A) + X (Σ). Pick any pair (pd, pn) ∈ (C (A∗) ∩ X (Σ))♯. According to Lemma 6.7 we have R∗(B∗pn − AB†pd) ∈ range(AΩ×Γ). Applying the definition of A given by (29), there exists ϕ ∈ X(Ω) satisfying ⟨Aϕ, w⟩ = ⟨B∗pn − AB†pd, w⟩ for all w ∈ X(Ω).
Set φ = ϕ + B†(pd) and ud = B(φ) = B(ϕ) + pd. By construction, ⟨A(φ), w⟩ = ⟨pn, B(w)⟩ = 0 ∀w ∈ ker(B) ⊂ X(Ω), which rewrites A(φ) ∈ ker(B)◦. Applying i) of Lemma 4.1, we have Aφ = B∗un for some un ∈ H(Σ)∗. This implies in particular un = (BB†)∗un = (B†)∗B∗un = (B†)∗Aφ.
We have Aφ = B∗un and Bφ = ud, hence (ud, un) ∈ C (A). On the other hand pd − ud = −Bϕ ∈ X(Σ) since ϕ ∈ X(Ω) and, for any w ∈ X(Σ), we have B†(w) ∈ X(Ω), hence ⟨pn − un, w⟩ = ⟨Aφ, B†w⟩ − ⟨Aφ, B†w⟩ = 0, which implies pn − un ∈ X(Σ)◦. Finally (ud, un) ∈ C (A) and (pd, pn) − (ud, un) ∈ X (Σ) imply that (pd, pn) ∈ C (A) + X (Σ). □
Corollary 6.9.
Assume (A1)-(A2)-(A3)-(A4). Then

codim(C (A) + X (Σ)) = codim(range(AΩ×Γ)).
Proof:
We have (C (A) + X (Σ))♯ = C (A)♯ ∩ X (Σ)♯, see e.g. [2, Prop.2.14]. According to Proposition 6.5 applied to A∗, and since X (Σ)♯ = X (Σ) by construction, we conclude that (C (A) + X (Σ))♯ = C (A∗) ∩ X (Σ). As the bilinear pairing [·, ·] is non-degenerate and C (A) + X (Σ) is closed according to Proposition 6.8, we conclude codim(C (A) + X (Σ)) = dim((C (A) + X (Σ))♯) = dim(C (A∗) ∩ X (Σ)). There only remains to apply Proposition 6.6 to A∗Ω×Γ combined with (16). □
7  Scattering operator
Propositions 6.6 and 6.8 and Corollary 6.9 above show that the kernel and the range of AΩ×Γ are closely related to the pair of subspaces C (A), X (Σ). This can be exploited to study other formulations of the same boundary value problem.
Proposition 7.1.
Assume (A1)-(A2)-(A3)-(A4). If u ∈ X(Ω) satisfies (31), then there exists a unique p ∈ H(Σ)∗ such that the pair (u, p) satisfies

u ∈ H(Ω), p ∈ H(Σ)∗,
Au − B∗p = ℓ,
−p + iTBu = Π(p + iTBu).   (35)

Reciprocally, if the pair (u, p) ∈ H(Ω) × H(Σ)∗ satisfies (35), then u satisfies (31).
Proof:
Assume first that u ∈ X(Ω) satisfies (31). This formulation rewrites equivalently as Au − ℓ ∈ X(Ω)◦. Since X(Ω)◦ = B∗(X(Σ)◦) according to iv) of Lemma 4.1, and as B∗ : H(Σ)∗ → H(Ω)∗ is injective (B is surjective), there exists a unique p ∈ X(Σ)◦ such that Au − ℓ = B∗p. On the other hand, u ∈ X(Ω) ⇒ B(u) ∈ X(Σ) according to iii) of Lemma 4.1. Finally, applying Proposition 4.3, we obtain −p + iTBu = Π(p + iTBu).
Reciprocally, assume that (35) holds. Then, according to Proposition 4.3, we have p ∈ X(Σ)◦ and B(u) ∈ X(Σ). Moreover we have B(u) ∈ X(Σ) ⇒ u ∈ X(Ω) according to iii) of Lemma 4.1. Since p ∈ X(Σ)◦, we have B∗p ∈ X(Ω)◦ so that, for any v ∈ X(Ω), we have 0 = ⟨B∗p, v⟩ = ⟨Au − ℓ, v⟩. To sum up, we have proved that u ∈ X(Ω) and ⟨Au, v⟩ = ⟨ℓ, v⟩ ∀v ∈ X(Ω). □
In a domain decomposition context, a substructuring strategy applied to Problem (14) naturally leads to eliminating the volume unknowns in (35). This is performed by means of a scattering map that takes ingoing traces as input and returns outgoing traces as output.
Proposition 7.2.
Assume (A1)-(A2)-(A3)-(A4). There exists a unique bounded linear map S : H(Σ)∗ → H(Σ)∗, later referred to as the scattering operator, satisfying

p + iTv = S(p − iTv)   ∀(v, p) ∈ C (A).   (36)

It is also given by the formula S = Id + 2iTB(A − iB∗TB)−1B∗. It is T−1-contractive and, for any q ∈ H(Σ)∗, satisfies

∥S(q)∥2T−1 + 4|ℑm{⟨A(u), u⟩}| = ∥q∥2T−1   where u = (A − iB∗TB)−1B∗q.
Proof:
We follow the proof pattern presented e.g. in [6, Lem.5.2]. First of all, Identity (36) clearly and unambiguously defines the operator S as a linear map according to Lemma 6.1. Next, pick an arbitrary q ∈ H(Σ)∗ and set u = (A − iB∗TB)−1B∗q and p = q + iTB(u). We have Au − B∗p = 0 and q = p − iTB(u) and S(q) = p + iTB(u) = q + 2iTB(u), which leads to S(q) = (Id + 2iTB(A − iB∗TB)−1B∗)q. Finally, developing the squared norm and taking account of (30), we have

∥S(q)∥²_{T−1} = ∥p + iTB(u)∥²_{T−1}
             = ∥p − iTB(u)∥²_{T−1} + 4ℑm{⟨q, B(u)⟩} + 4∥B(u)∥²_T
             = ∥q∥²_{T−1} + 4ℑm{⟨B∗(q), u⟩} + 4∥B(u)∥²_T
             = ∥q∥²_{T−1} + 4ℑm{⟨A(u), u⟩} − 4ℑm{i⟨B∗TB(u), u⟩} + 4∥B(u)∥²_T
             = ∥q∥²_{T−1} − 4|ℑm{⟨A(u), u⟩}|,

where the last step uses ℑm{i⟨B∗TB(u), u⟩} = ∥B(u)∥²_T together with the sign condition ℑm{⟨A(u), u⟩} ≤ 0 satisfied by A.
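In finite dimensions, the norm identity of Proposition 7.2 can be checked directly. The sketch below is a hypothetical toy model, not the operators of the paper: a random SPD matrix plays the role of the impedance T, a full-rank matrix plays the role of the trace map B, and A = H − iD with H Hermitian and D positive definite, so that ℑm⟨A(u), u⟩ ≤ 0. It assembles S = Id + 2iTB(A − iB∗TB)−1B∗ and verifies the energy identity, hence the T−1-contractivity.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3  # toy dimensions for the volume and trace spaces (an assumption)

# Impedance T: symmetric positive definite
M0 = rng.standard_normal((m, m))
T = M0 @ M0.T + m * np.eye(m)
Tinv = np.linalg.inv(T)

# Trace map B: surjective (full row rank almost surely)
B = rng.standard_normal((m, n))

# A = H - iD with H Hermitian and D positive definite => Im <Au,u> <= 0
H0 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
D0 = rng.standard_normal((n, n))
A = (H0 + H0.conj().T) / 2 - 1j * (D0 @ D0.T + np.eye(n))

# Scattering operator S = Id + 2i T B (A - i B* T B)^{-1} B*
Minv = np.linalg.inv(A - 1j * B.T @ T @ B)
S = np.eye(m) + 2j * T @ B @ Minv @ B.T

q = rng.standard_normal(m) + 1j * rng.standard_normal(m)
u = Minv @ (B.T @ q)

nrm2 = lambda x: np.real(x.conj() @ Tinv @ x)  # squared T^{-1}-norm
lhs = nrm2(S @ q) + 4 * abs(np.imag(u.conj() @ A @ u))
rhs = nrm2(q)
print(abs(lhs - rhs))     # ~ 0 up to round-off
print(nrm2(S @ q) <= rhs) # T^{-1}-contractivity
```

The choice of A with negative-semidefinite imaginary part also guarantees that A − iB∗TB is invertible in this toy model, since the imaginary part of the quadratic form is then strictly negative for u ≠ 0.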
894
The space of Cauchy data was used to characterize the scattering operator. Reciprocally, the scattering operator provides a characterization of the space of Cauchy data. The following result should be compared with (27).
899
Lemma 7.3.
Assume (A1)-(A2)-(A3)-(A4). For any (v, p) ∈ H (Σ) we have:

(v, p) ∈ C (A) ⇐⇒ p + iTv = S(p − iTv).

Proof:
From the very definition of the scattering operator in Proposition 7.2, it is clear that (v, p) ∈ C (A) ⇒ p + iTv = S(p − iTv). Reciprocally, pick arbitrarily some (v, p) ∈ H (Σ) such that p + iTv = S(p − iTv). We know from Proposition 6.3 that there exists v′ ∈ H(Σ) such that (v − v′, p − iTv′) ∈ C (A), so applying Proposition 7.2 we obtain

(p − iTv′) + iT(v − v′) = S( (p − iTv′) − iT(v − v′) )
⇐⇒ p + iTv − 2iTv′ = S(p − iTv)
⇐⇒ 2iTv′ = 0
⇒ v′ = 0,

hence (v, p) = (v − v′, p − iTv′) ∈ C (A).
915
The scattering operator has a subdomain-wise block diagonal structure. This is clearly visible from the formula S = Id + 2iTB(A − iB∗TB)−1B∗ where each term in the right-hand side is block diagonal. This yields

S = diag(SΓ, SΩ1, . . . , SΩJ)
where SΩj = Id + 2iTΩjBΩj(AΩj − iB∗ΩjTΩjBΩj)−1B∗Ωj
and   SΓ  = Id + 2iTΓBΓ(AΓ − iB∗ΓTΓBΓ)−1B∗Γ.
925
Let us discuss the particular form taken by the boundary scattering operator SΓ for Dirichlet, Neumann and Robin conditions. Recall that BΓ : Hb(Γ) := H1/2(Γ) × H−1/2(Γ) → H1/2(Γ) is defined by BΓ(α, p) = α, hence B∗Γ(p) = (p, 0).

Example 7.4 (Dirichlet condition). Taking the same notations as in Examples 3.1 and 5.2, since B∗Γp = (p, 0) for all p ∈ H−1/2(Γ), we conclude that BΓ(AΓ − iB∗ΓTΓBΓ)−1B∗Γ = 0 and finally

SΓ = +Id.

Example 7.5 (Neumann condition). Taking the same notations as in Examples 3.2 and 5.3, in this situation we have BΓ(AΓ − iB∗ΓTΓBΓ)−1B∗Γ = iT−1Γ. This yields the expression

SΓ = −Id.

Example 7.6 (Robin condition). Taking the same notations as in Examples 3.3 and 5.4, in this situation we have BΓ(AΓ − iB∗ΓTΓBΓ)−1B∗Γ = i(Λ + TΓ)−1, which yields

SΓ = (Λ − TΓ)(Λ + TΓ)−1.
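The last simplification in Example 7.6 is pure operator algebra: Id + 2iTΓ · i(Λ + TΓ)−1 = Id − 2TΓ(Λ + TΓ)−1 = (Λ − TΓ)(Λ + TΓ)−1, valid whenever Λ + TΓ is invertible and requiring no commutation between Λ and TΓ. A quick finite-dimensional sanity check, with random matrices standing in for Λ and TΓ (an assumption of this sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 5
# Random stand-ins for Lambda and T_Gamma; only invertibility of the sum matters
Lam = rng.standard_normal((m, m))
TG = rng.standard_normal((m, m)) + 3 * m * np.eye(m)  # shift keeps Lam + TG invertible

inv = np.linalg.inv(Lam + TG)
S_general = np.eye(m) + 2j * TG @ (1j * inv)  # Id + 2i T_Gamma * ( i (Lam + TG)^{-1} )
S_robin = (Lam - TG) @ inv                    # (Lam - TG)(Lam + TG)^{-1}
print(np.allclose(S_general, S_robin))        # True
```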
947
8  Skeleton formulation

Now we shall use the scattering operator of the previous section to transform further the boundary value problem (35). Once volume unknowns have been eliminated, this reduces to an equation involving only traces on the skeleton of the subdomain partition.
954
Proposition 8.1.
Assume (A1)-(A2)-(A3)-(A4). Define f ∈ H(Σ)∗ by f = −2iΠTB(A − iB∗TB)−1ℓ. If (u, p) ∈ H(Ω) × H(Σ)∗ solves (35), then q = p − iTB(u) satisfies the skeleton problem

q ∈ H(Σ)∗ and (Id + ΠS)q = f.   (37)

Reciprocally, if q satisfies the above equation, then the pair (u, p) ∈ H(Ω) × H(Σ)∗ given by u = (A − iB∗TB)−1(B∗q + ℓ) and p = q + iTB(u) solves (35).

Proof:
If (u, p) ∈ H(Ω) × H(Σ)∗ solves (35) and q = p − iTB(u), then (A − iB∗TB)u = B∗(p − iTBu) + ℓ. Left multiplying this equality by 2iTB(A − iB∗TB)−1 yields an expression for 2iTB(u) that can be used in p + iTB(u) = q + 2iTB(u) in the last line of (35). This eventually leads to (37).
Reciprocally, if q solves (37) and u = (A − iB∗TB)−1(B∗q + ℓ) and p = q + iTB(u), then we have Au = B∗(q + iTBu) + ℓ = B∗p + ℓ. On the other hand, using the expressions of f and S, the skeleton equation in (37) writes

q + Π(q + 2iTB(A − iB∗TB)−1(B∗q + ℓ)) = 0
⇐⇒ q + Π(q + 2iTB(u)) = 0
⇐⇒ p − iTB(u) + Π(p + iTB(u)) = 0.

This finally proves that the pair (u, p) satisfies (35).
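The equivalence of Proposition 8.1 is purely algebraic and can be rehearsed on a finite-dimensional toy problem. In the sketch below all objects are hypothetical stand-ins: T is a random SPD matrix, B a full-rank matrix, A = H − iD as before, and Π is a T−1-isometric involution built by conjugating a random sign matrix (the actual Π of the paper is the exchange operator, which this construction only mimics). We solve the skeleton system (Id + ΠS)q = f, reconstruct (u, p), and check both equations of (35).

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 7, 4  # toy dimensions, an assumption of this sketch

# T symmetric positive definite, B surjective, A = H - iD with D positive definite
M0 = rng.standard_normal((m, m))
T = M0 @ M0.T + m * np.eye(m)
B = rng.standard_normal((m, n))
H0 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
D0 = rng.standard_normal((n, n))
A = (H0 + H0.conj().T) / 2 - 1j * (D0 @ D0.T + np.eye(n))
ell = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# A T^{-1}-isometric involution standing in for the exchange operator Pi
w, V = np.linalg.eigh(T)
Th = V @ np.diag(np.sqrt(w)) @ V.T
Tmh = V @ np.diag(1 / np.sqrt(w)) @ V.T
P = np.diag(np.where(rng.random(m) < 0.5, 1.0, -1.0))
Pi = Th @ P @ Tmh  # Pi^2 = Id and Pi preserves the T^{-1} norm

Minv = np.linalg.inv(A - 1j * B.T @ T @ B)
S = np.eye(m) + 2j * T @ B @ Minv @ B.T
f = -2j * Pi @ T @ B @ (Minv @ ell)

# Solve the skeleton problem (Id + Pi S) q = f, then reconstruct (u, p)
q = np.linalg.solve(np.eye(m) + Pi @ S, f)
u = Minv @ (B.T @ q + ell)
p = q + 1j * T @ (B @ u)

res_vol = A @ u - (B.T @ p + ell)                 # volume equation of (35)
res_trn = p - 1j*T@(B@u) + Pi @ (p + 1j*T@(B@u))  # transmission condition of (35)
print(np.linalg.norm(res_vol), np.linalg.norm(res_trn))  # both ~ 0
```

With these choices S is a strict T−1-contraction on range(B∗)-excited data and Π is T−1-isometric, which keeps Id + ΠS invertible in this toy model.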
977
Next we investigate whether or not the skeleton formulation (37) is uniquely solvable. We will show that this is directly correlated to the unique solvability of (14).
979
Proposition 8.2.
Assume (A1)-(A2)-(A3)-(A4). The application (v, p) ↦ p − iT(v) induces a continuous isomorphism from C (A) ∩ X (Σ) onto ker(Id + ΠS). As a consequence

dim( ker(Id + ΠS) ) = dim( ker(AΩ×Γ) ).

Proof:
First of all, if (v, p) ∈ C (A) ∩ X (Σ), then p + iTv = S(p − iTv) according to Lemma 7.3, and p − iTv = −Π(p + iTv) according to (27). Combining these two identities leads to p − iTv ∈ ker(Id + ΠS). Next, if (v, p) ∈ C (A) ∩ X (Σ) and p − iTv = 0, then (v, p) = (0, 0) according to Lemma 6.1, hence the injectivity.
Finally, if q ∈ ker(Id + ΠS), then there exists a unique (v, p) ∈ C (A) such that p − iTv = q according to Lemma 6.1 and, applying (36), we obtain S(q) = S(p − iTv) = p + iTv. This latter identity combined with (Id + ΠS)q = 0 leads to −p + iTv = Π(p + iTv), which implies (v, p) ∈ X (Σ) according to Proposition 4.3. Hence we conclude (v, p) ∈ C (A) ∩ X (Σ).
994
Proposition 8.3.
Assume (A1)-(A2)-(A3)-(A4). The subspace range(Id + ΠS) is closed in H(Σ)∗.

Proof:
Define Θ : H(Σ)∗ → H (Σ) by Θ(q) := (iT−1(q), q), which satisfies 2∥q∥²_{T−1} = ∥Θ(q)∥²_{T×T−1} for all q ∈ H(Σ)∗. Taking account that C (A) + X (Σ) is closed, see Proposition 6.8, we are going to prove that

range(Id + ΠS) = Θ−1( C (A) + X (Σ) ).

Take any p ∈ range(Id + ΠS). Applying Lemma 6.1, there exists a unique (v, q) ∈ C (A) such that 2p = (Id + ΠS)(q − iTv). Since S(q − iTv) = q + iTv according to Proposition 7.2, and writing 2p = (Id + Π)p + (Id − Π)p, we obtain

(Id + Π)p + (Id − Π)p = q − iTv + Π(q + iTv)
⇐⇒ (Id + Π)p + (Id − Π)p = (Id + Π)q − (Id − Π)(iTv)
⇐⇒ (Id + Π)(p − q) = −(Id − Π)(p + iTv).

As (Id ± Π)/2 are two mutually orthogonal projectors, see Proposition 4.3, we deduce that (Id + Π)(p − q) = 0 and (Id − Π)(p + iTv) = 0. This eventually leads to p − q ∈ X(Σ)◦ and p + iTv ∈ T(X(Σ)), i.e. iT−1p − v ∈ X(Σ). We conclude that Θ(p) − (v, q) ∈ X (Σ), hence Θ(p) ∈ C (A) + X (Σ).
Reciprocally, pick an arbitrary p ∈ Θ−1( C (A) + X (Σ) ). This means that Θ(p) − (v, q) ∈ X (Σ) for some (v, q) ∈ C (A). As a consequence (Id − Π)(p + iTv) = 0 and (Id + Π)(p − q) = 0. Adding these two equations, and taking account that q + iTv = S(q − iTv) according to (36), leads to

(Id + Π)(p − q) = −(Id − Π)(p + iTv)
⇐⇒ (Id + Π)p + (Id − Π)p = q − iTv + Π(q + iTv)
⇐⇒ 2p = (Id + ΠS)(q − iTv),

hence p ∈ range(Id + ΠS).
1029
Proposition 8.4.
Assume (A1)-(A2)-(A3)-(A4). Then

codim( range(Id + ΠS) ) = codim( range(AΩ×Γ) ).

Proof:
Since range(Id + ΠS) is closed according to Proposition 8.3, we deduce that codim( range(Id + ΠS) ) = dim( ker((Id + ΠS)∗) ). Proposition 4.3, in particular the characterization of Q = (Id + Π)/2 as a T−1-orthogonal projection, shows that Π² = Id and Π∗ = T−1ΠT, so we have

(Id + ΠS)∗ = (TΠ∗)−1(Id + ΠTS∗T−1)TΠ∗.

Setting S̃ := TS∗T−1, and noting that TΠ∗ : H(Σ) → H(Σ)∗ is an isomorphism, we have dim( ker((Id + ΠS)∗) ) = dim( ker(Id + ΠS̃) ). Let us have a closer look at S̃, taking account of the formulas given by Proposition 7.2. Since T∗ = T, we obtain

S̃ = Id + 2iTB(A∗ − iB∗TB)−1B∗.

We see that S̃ differs from S only in that A is replaced by A∗. As a consequence, we can apply Proposition 8.2 with AΩ×Γ replaced by A∗Ω×Γ. Using (16), this yields dim( ker(Id + ΠS̃) ) = dim( ker(A∗Ω×Γ) ) = codim( range(AΩ×Γ) ).
1049
If V1, V2 are Banach spaces, a bounded linear map L : V1 → V2 is of Fredholm type if and only if range(L) is closed in V2, dim( ker(L) ) < ∞ and codim( range(L) ) < ∞. In this case the index of L is the number index(L) := dim( ker(L) ) − codim( range(L) ). The results of the present section (in particular Propositions 8.2, 8.3 and 8.4) lead to the following corollary.

Corollary 8.5.
Assume (A1)-(A2)-(A3)-(A4). The operator AΩ×Γ : H(Ω × Γ) → H(Ω × Γ)∗ is of Fredholm type if and only if Id + ΠS : H(Σ)∗ → H(Σ)∗ is of Fredholm type and, in this case, both operators have the same index.
1057
9  Coercivity estimate

Now we study quantitatively how the inf-sup constant of Id + ΠS relates to the inf-sup constant of the operator AΩ×Γ. Taking the cue from [6, §8], we first establish an intermediate result. Recall that inf-sup constants are defined according to (4).

Proposition 9.1.
Assume (A1)-(A2)-(A3)-(A4). Then

infsup_{H(Ω×Γ)→H(Ω×Γ)∗}(AΩ×Γ) ≤ (1 + ∥A∥) · inf { ∥u + v∥_{T×T−1} / ∥u∥_{T×T−1} : u ∈ C (A)\{0}, v ∈ X (Σ)\{0} }

where

∥A∥ := sup { |⟨u, A(v)⟩| / ( ∥u∥_{H(Ω)} ∥v∥_{H(Ω)} ) : u, v ∈ H(Ω)\{0} }.
1078
Proof:
In the case where C (A) ∩ X (Σ) ≠ {0}, the inf-sup constant vanishes since ker(AΩ×Γ) ≠ {0} according to Proposition 6.6, so the estimate is automatically satisfied in this case. We shall thus assume C (A) ∩ X (Σ) = {0}. According to Proposition 6.6 this leads to

ker(AΩ×Γ) = {0} and α := infsup_{H(Ω×Γ)→H(Ω×Γ)∗}(AΩ×Γ) > 0.   (38)

Now pick any u ∈ C (A) \ {0} and any v ∈ X (Σ) \ {0}, and set (pd, pn) := u + v ∈ H (Σ) = H(Σ) × H(Σ)∗. The invertibility of AΩ×Γ provides the existence of a unique ϕ ∈ X(Ω) satisfying ⟨A(ϕ), w⟩ = −⟨AB†(pd), w⟩ + ⟨pn, B(w)⟩ for all w ∈ X(Ω). In particular

α ∥ϕ∥_{H(Ω)} ≤ ∥A∥ ∥pd∥_T + ∥pn∥_{T−1}.   (39)

Set φ = ϕ + B†(pd) and ud = B(φ) = B(ϕ) + pd. By construction, for any w ∈ H(Ω) satisfying B(w) = 0 we have ⟨A(φ), w⟩ = ⟨pn, B(w)⟩ = 0, which rewrites A(φ) ∈ ker(B)◦. Applying i) of Lemma 4.1, we have Aφ = B∗un for some un ∈ H(Σ)∗. This implies in particular un = (BB†)∗un = (B†)∗B∗un = (B†)∗Aφ. From the previous definitions, and the fact that ∥B(w)∥_T ≤ ∥w∥_{H(Ω)} and ∥B†(q)∥_{H(Ω)} = ∥q∥_T, we obtain the estimates

∥φ∥_{H(Ω)} ≤ ∥ϕ∥_{H(Ω)} + ∥pd∥_T,   ∥ud∥_T ≤ ∥φ∥_{H(Ω)},   ∥un∥_{T−1} ≤ ∥A∥ ∥φ∥_{H(Ω)}.   (40)

We have Aφ = B∗un and Bφ = ud, hence (ud, un) ∈ C (A) by construction. On the other hand we have pd − ud = −Bϕ ∈ X(Σ) since ϕ ∈ X(Ω) and, for any w ∈ X(Σ), we have B†(w) ∈ X(Ω) hence ⟨pn − un, w⟩ = ⟨Aφ, B†w⟩ − ⟨Aφ, B†w⟩ = 0, which implies that pn − un ∈ X(Σ)◦. Finally we have shown that (ud, un) ∈ C (A) and (pd, pn) − (ud, un) ∈ X (Σ) and, since (pd, pn) = u + v ∈ C (A) ⊕ X (Σ), we conclude that u = (ud, un). There only remains to combine (39) and (40) to obtain the desired estimate.
1111
Theorem 9.2.
Assume (A1)-(A2)-(A3)-(A4). Then

infsup_{H(Ω×Γ)→H(Ω×Γ)∗}(AΩ×Γ) ≤ (1 + ∥A∥) · infsup_{H(Σ)∗→H(Σ)∗}(Id + ΠS).

Proof:
In the case where ker(AΩ×Γ) ≠ {0} we also have ker(Id + ΠS) ≠ {0} according to Proposition 8.2 and, in this situation, the desired estimate is satisfied with both sides equal to 0. Hence we can assume that ker(AΩ×Γ) = {0} and, in this situation, both AΩ×Γ : H(Ω × Γ) → H(Ω × Γ)∗ and Id + ΠS : H(Σ)∗ → H(Σ)∗ are injective with closed range. Pick an arbitrary f ∈ H(Σ)∗. According to Lemma 6.1, there exists a unique pair u = (ud, un) ∈ C (A) such that f = un − iT(ud), and we have ∥f∥_{T−1} ≤ √2 ∥u∥_{T×T−1}, which rewrites as

∥u∥_{T×T−1} / ∥f∥_{T−1} ≥ 1/√2.

Next set g = (Id + ΠS)f and p = (pd, pn) = (iT−1(g), g)/2. We have in particular ∥g∥_{T−1} = √2 ∥p∥_{T×T−1}. Since S(f) = S(un − iT(ud)) = un + iT(ud) according to Proposition 7.2, we obtain

un − iT(ud) + Π(un + iT(ud)) = f + ΠS(f)
= g = (Id + Π)g/2 + (Id − Π)g/2
= (Id + Π)pn − i(Id − Π)T(pd)
= pn − iT(pd) + Π(pn + iT(pd)).

Re-arranging the terms in the equality above so as to move all contributions involving Π to the right-hand side, we obtain −(pn − un) + iT(pd − ud) = Π( (pn − un) + iT(pd − ud) ). According to Proposition 4.3, this implies that (pd, pn) − (ud, un) ∈ X (Σ). Since we have (ud, un) ∈ C (A) by construction, we can apply Proposition 9.1, which yields

∥(Id + ΠS)f∥_{T−1} / ∥f∥_{T−1} = ∥g∥_{T−1} / ∥f∥_{T−1} ≥ ∥p∥_{T×T−1} / ∥u∥_{T×T−1} ≥ infsup_{H(Ω×Γ)→H(Ω×Γ)∗}(AΩ×Γ) / (1 + ∥A∥).

This establishes the desired estimate, since f ∈ H(Σ)∗ was arbitrary.
1157
The estimate provided by Theorem 9.2 is remarkable in several respects. First of all, it holds even if ker(AΩ×Γ) is non-trivial. Secondly, it does not involve any hidden "C > 0" constant. In particular it does not involve any frequency dependency, although the inf-sup constant of AΩ×Γ a priori depends itself on the frequency. This means that, to estimate the frequency dependency of the inf-sup constant of Id + ΠS, it suffices to derive such an estimate for AΩ×Γ. A further striking feature is that the number of subdomains J does not come into play in this estimate.

As an interesting additional result in the perspective of an effective linear solve, the contractivity of Π and S leads to the coercivity of the operator Id + ΠS. The next result can be combined with Theorem 9.2 to obtain an effective estimate of the coercivity constant.
1169
Corollary 9.3.
Assume (A1)-(A2)-(A3)-(A4). Then Id + ΠS : H(Σ)∗ → H(Σ)∗ is coercive with respect to the scalar product induced by T−1 and we have

inf_{q ∈ H(Σ)∗\{0}} ℜe{⟨(Id + ΠS)q, T−1q⟩} / ∥q∥²_{T−1} ≥ (1/2) ( infsup_{H(Σ)∗→H(Σ)∗}(Id + ΠS) )².

Proof:
For any q ∈ H(Σ)∗, since Π and S are T−1-contractive,

∥q∥²_{T−1} ≥ ∥ΠS(q)∥²_{T−1} = ∥(Id + ΠS)q − q∥²_{T−1}
           = ∥(Id + ΠS)q∥²_{T−1} + ∥q∥²_{T−1} − 2ℜe{⟨(Id + ΠS)q, T−1q⟩}
⇒ ℜe{⟨(Id + ΠS)q, T−1q⟩} / ∥q∥²_{T−1} ≥ ( ∥(Id + ΠS)q∥_{T−1} / ∥q∥_{T−1} )² / 2.
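The mechanism behind Corollary 9.3 is elementary: if C := ΠS is any contraction for the T−1-norm, expanding ∥q∥² ≥ ∥(Id + C)q − q∥² yields the coercivity bound. The sketch below illustrates this with a generic T−1-contraction built by conjugating a Euclidean contraction by T^{1/2} (a stand-in of this sketch, not the actual ΠS):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 6
M0 = rng.standard_normal((m, m))
T = M0 @ M0.T + m * np.eye(m)  # SPD matrix standing in for T

# T^{1/2} and T^{-1/2} via the spectral decomposition
w, V = np.linalg.eigh(T)
Th = V @ np.diag(np.sqrt(w)) @ V.T
Tmh = V @ np.diag(1 / np.sqrt(w)) @ V.T

# A generic T^{-1}-contraction: conjugate a Euclidean contraction by T^{1/2}
C0 = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
C0 /= np.linalg.norm(C0, 2)  # now ||C0||_2 <= 1
C = Th @ C0 @ Tmh            # ||C q||_{T^-1} <= ||q||_{T^-1} for all q

Tinv = np.linalg.inv(T)
q = rng.standard_normal(m) + 1j * rng.standard_normal(m)
r = (np.eye(m) + C) @ q
coer = np.real(r.conj() @ Tinv @ q)         # Re <(Id + C)q, T^{-1} q>
bound = 0.5 * np.real(r.conj() @ Tinv @ r)  # (1/2) ||(Id + C)q||_{T^-1}^2
print(coer >= bound - 1e-10)                # True
```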
1201
We conclude this article by illustrating how the previous results lead to estimates of the coercivity constant of the skeleton operator in a concrete case.

Example 9.4.
Consider the case Rd = R2 or R3. Assume that µ = 1, κ = k ∈ (0, +∞), and choose AΓ as in Example 3.3 with ⟨Λ(u), v⟩ = k ∫_Γ u v dσ, which models the Robin condition ∂nu − iku = 0 on Γ. Assume in addition that Ω is a convex polyhedron. Then we have

⟨AΩ×Γ(u, p), (v, q)⟩ = ∫_Ω (∇u·∇v − k²uv) dx − ik ∫_Γ uv dσ + ∫_Γ q TΓ p dσ.

Let us take γ = 1/k for the parameter involved in (8). From these choices, and proceeding as in [15, Lem.2.4] for dealing with boundary terms on Γ, we see that the continuity modulus ∥A∥ (as defined in Proposition 9.1) can be bounded independently of k. On the other hand, we know from [18] that

infsup_{H(Ω×Γ)→H(Ω×Γ)∗}(AΩ×Γ) ≥ O_{k→∞}(1/k).

We can now plug this estimate into Theorem 9.2, and we see that the inf-sup constant of Id + ΠS also admits a lower bound that behaves like O(1/k) for k → ∞. Finally, combining with Corollary 9.3, we see that the coercivity constant of the skeleton formulation behaves like O(1/k²), i.e.

inf_{q ∈ H(Σ)∗\{0}} ℜe{⟨(Id + ΠS)q, T−1q⟩} / ∥q∥²_{T−1} ≥ O_{k→∞}(1/k²).
1236
References

[1] A. Bendali and Y. Boubendir. Non-overlapping domain decomposition method for a nodal finite element method. Numerische Mathematik, 103(4):515–537, Jun 2006.

[2] H. Brezis. Functional analysis, Sobolev spaces and partial differential equations. Universitext. Springer, New York, 2011.

[3] O. Cessenat and B. Despres. Application of an ultra weak variational formulation of elliptic PDEs to the two-dimensional Helmholtz problem. SIAM J. Numer. Anal., 35(1):255–299, 1998.

[4] P.G. Ciarlet. Introduction to numerical linear algebra and optimization. Camb. Texts Appl. Math. Cambridge: Cambridge University Press, 1988.

[5] X. Claeys. Non-local variant of the Optimised Schwarz Method for arbitrary non-overlapping subdomain partitions. ESAIM: M2AN, 55(2):429–448, 2021.

[6] X. Claeys. Nonselfadjoint impedance in Generalized Optimized Schwarz Methods. IMA Journal of Numerical Analysis, November 2022.

[7] X. Claeys, F. Collino, and E. Parolin. Nonlocal optimized Schwarz methods for time-harmonic electromagnetics. Adv. Comput. Math., 48(6):Paper No. 72, 2022.

[8] X. Claeys and E. Parolin. Robust treatment of cross-points in optimized Schwarz methods. Numer. Math., 151(2):405–442, 2022.

[9] F. Collino, S. Ghanemi, and P. Joly. Domain decomposition method for harmonic wave propagation: a general presentation. Computer Methods in Applied Mechanics and Engineering, 184(2):171–211, 2000.

[10] B. Després. Méthodes de décomposition de domaine pour les problèmes de propagation d'ondes en régime harmonique. Le théorème de Borg pour l'équation de Hill vectorielle. Institut National de Recherche en Informatique et en Automatique (INRIA), Rocquencourt, 1991. Thèse, Université de Paris IX (Dauphine), Paris, 1991.

[11] B. Després, A. Nicolopoulos, and B. Thierry. Optimized transmission conditions in domain decomposition methods with cross-points for Helmholtz equation. SIAM J. Numer. Anal., 60(5):2482–2507, 2022.

[12] M. Gander and F. Kwok. On the applicability of Lions' energy estimates in the analysis of discrete optimized Schwarz methods with cross points. Lecture Notes in Computational Science and Engineering, 91, 2013.

[13] M.J. Gander and K. Santugini. Cross-points in domain decomposition methods with a finite element discretization. Electron. Trans. Numer. Anal., 45:219–240, 2016.

[14] M.J. Gander and H. Zhang. A class of iterative solvers for the Helmholtz equation: factorizations, sweeping preconditioners, source transfer, single layer potentials, polarized traces, and optimized Schwarz methods. SIAM Rev., 61(1):3–76, 2019.

[15] I.G. Graham, E.A. Spence, and J. Zou. Domain decomposition with local impedance conditions for the Helmholtz equation with absorption. SIAM J. Numer. Anal., 58(5):2515–2543, 2020.

[16] T. Kato. Perturbation theory for linear operators. Classics in Mathematics. Springer-Verlag, Berlin, 1995. Reprint of the 1980 edition.

[17] W. McLean. Strongly elliptic systems and boundary integral equations. Cambridge: Cambridge University Press, 2000.

[18] J.M. Melenk. On generalized finite-element methods. ProQuest LLC, Ann Arbor, MI, 1995. Thesis (Ph.D.)–University of Maryland, College Park.

[19] A. Modave, A. Royer, X. Antoine, and C. Geuzaine. A non-overlapping domain decomposition method with high-order transmission conditions and cross-point treatment for Helmholtz problems. Comput. Methods Appl. Mech. Eng., 368:23, 2020. Id/No 113162.

[20] E. Parolin. Non-overlapping domain decomposition methods with non-local transmission operators for harmonic wave propagation problems. Theses, Institut Polytechnique de Paris, December 2020.

[21] C. Pechstein. Finite and boundary element tearing and interconnecting solvers for multiscale problems, volume 90 of Lecture Notes in Computational Science and Engineering. Springer, Heidelberg, 2013.

[22] W. Rudin. Functional analysis. 2nd ed. New York, NY: McGraw-Hill, 1991.

[23] O. Steinbach. Numerical approximation methods for elliptic boundary value problems. Finite and boundary elements. Springer, New York, 2008. Translated from the 2003 German original.

[24] T. von Petersdorff. Boundary integral equations for mixed Dirichlet, Neumann and transmission problems. Math. Methods Appl. Sci., 11(2):185–213, 1989.